| url | title | abstract | text | meta |
|---|---|---|---|---|
https://arxiv.org/abs/1311.6788 | Twisted Analytic Torsion and Adiabatic Limits | We study an analogue of the analytic torsion for elliptic complexes that are graded by $\mathbb{Z}_2$, originally constructed by Mathai and Wu. Motivated by topological T-duality, Bouwknegt and Mathai study the complex of forms on an odd-dimensional manifold equipped with the twisted differential $d_H = d+H$, where $H$ is a closed form of odd degree. We show that the Ray-Singer metric on the twisted determinant line is equal to the untwisted Ray-Singer metric when the determinant lines are identified using a canonical isomorphism. We also study another analytical invariant of the twisted differential, the derived Euler characteristic $\chi'(d_H)$, as defined by Bismut and Zhang. | \section{Twisted Analytic Torsion}
\subsection{Background}
For an elliptic complex $(E,d)$ over an odd-dimensional compact manifold $M$, one can construct a canonical metric $\| \, \|_{RS} $ on the determinant line $\det H(E,d)$ known as the Ray-Singer (RS) metric. This was first done by Quillen \cite{Quillen:1985a}, building on the work of Ray and Singer \cite{Ray:1971}. To construct this, one introduces a Euclidean metric $g$ to the complex, then defines
\[ \| v \|_{RS} := \rho \cdot |v|_{\ker \Delta}, \quad v \in \det H(E,d)\]
where
\[ \rho = \left(\prod_{k=0}^{n} \det{}'(\Delta_k)^{\tfrac{1}{2} k (-1)^k}\right) ,\]
and $\Delta_k = d^*_k d_k + d_k d_{k-1}^*$ is the Laplacian on sections of $E_k$ defined using the metric, $\det{}'$ indicates a zeta-regularized determinant, and $ |\cdot|_{\ker \Delta}$ is the metric on $\det H(E,d)$ induced by an orthonormal basis for $\ker \Delta$. It is somewhat remarkable that this metric is independent of the chosen Euclidean metric $g$. The coefficient $\rho$ in front of the metric $|\cdot|_{\ker \Delta}$ is the analytical object originally studied by Ray and Singer. If we multiply $\rho^{-1}$ by a product of orthonormal harmonic volume forms we get an element $\tau \in \det H(E,d)$, also known as the RS torsion, with the property that $\| \tau \|_{RS} = 1$. The analysis of the RS metric is thus closely tied to the analysis of the torsion element $\tau$, and so we occasionally interchange our discussion between the two. This torsion is defined as an analytical analogue of the classical Reidemeister torsion, which is constructed via a similar formula using simplicial techniques. In fact it was shown independently by Cheeger and M\"uller \cite{Cheeger:1977,Muller:1978} that the analytical and combinatorial metrics coincide. In \cite{Mathai:2011}, Mathai and Wu extend this notion to include operators that only preserve the grading modulo 2 in a way that agrees with the Ray-Singer metric when one considers a $\ZZ$-graded complex modulo 2. The prototypical example of such an operator is the \emph{flux twisted} de Rham operator defined as follows: Let $d$ be the usual de Rham exterior derivative acting on the complex of forms, and let $H$ be a $d$-closed odd (possibly non-homogeneous in degree) form on $M$. The operator is $d_H = d+H$ acting on the $\ZZ_2$-graded (even/odd) complex of differential forms on $M$. This operator clearly satisfies $(d_H)^2 = 0$, and thus we have a $\ZZ_2$-graded ($\pm$) elliptic complex $(C = \Omega^{\pm}(M), d_H)$, represented by
\begin{displaymath}
\xymatrix{
\Omega^+(M) \ar@/^/[r]^{d_H} & \Omega^-(M) \ar@/^/[l]^{d_H} \\
}
\end{displaymath}
In this case, Mathai-Wu define
\[ \|v\|_H = \sdet{'}(d^*_H d_H)^{\frac{1}{2}} \cdot | v |_{\ker \Delta_H} \]
where the boldfaced determinant indicates that we are thinking of this as $\ZZ_2$-graded determinant, i.e $\sdet(A) = \det(A_+) / \det(A_-)$ for operators $A$ preserving the $\ZZ_2$-grading (even operators). We use this notation for other $\ZZ_2$-graded extensions of concepts from basic linear algebra, e.g. $\str(A) = \tr(A_+) - \tr(A_-)$. Mathai-Wu show that the metric of the twisted de Rham operator only depends on the cohomology class of the flux form $[H] \in H^3(M,\RR)$. The purpose of this paper is to compare the metric on the twisted complex to that on the untwisted complex, with the main goal being
\begin{theoremn}[\ref{mainthm}]
There is an isomorphism of determinant lines
\[ \kappa : \det H(E,d) \to \det H(E,d_H) \]
under which the twisted and untwisted RS metrics are identified
\[ \| \cdot \|_{RS} =\kappa^* \|\cdot \|_H \]
\end{theoremn}
\subsection{The Zeta Function}
The process of computing zeta-regularized determinants requires that we first compute the zeta function of the operator involved. For the twisted complex, we need to study the partial Laplacians $d_H^*d_H$. These operators, although not elliptic, have well defined zeta functions that have the following Laurent expansion near $s=0$ (c.f. \cite{Mathai:2011,Grubb:2005})
\[ \zeta((d_H^*d_H)_\pm,s) = c_0 + c_1 s + O(s^2) \]
Based on this expansion, we introduce the following notation for the leading terms in the $s$-expansion of the zeta function.
\begin{eqnarray*}
\label{zetaexp2}
\szeta(d_H^*d_H,s) &= &\zeta((d_H^*d_H)_+,s) - \zeta((d_H^*d_H)_-,s)\\&=& \schi'(d_H^*d_H) + \szeta'(d_H^*d_H) s + O(s^2)
\end{eqnarray*}
It can easily be shown (c.f. \cite{Mathai:2011}) that the leading term, $\schi'$, is independent of the metric, so we often just write $\schi'(d_H):=\schi'(d_H^* d_H)$. This term is known as the \emph{derived Euler characteristic} in \cite{Bismut:1992} for the following reason. If the complex $E$ is actually a $\ZZ$-graded complex, then we have the following formula for $\schi'$
\[ \schi'(E,d) = \sum_{k=0}^{n} (-1)^k k b_k \]
where $b_k = \dim H^k(E,d)$ are the Betti numbers of the complex. If we let $p_E(t) = \sum_{k=0}^{n} t^k b_k$ be the Poincar\'e polynomial for $E$, then we have $\schi'(E,d) = -p'_E(-1)$, hence the `derived' name. However, this simple description only works in the case where our $\ZZ_2$-graded complex is actually a $\ZZ$-graded complex whose grading has been reduced modulo two. No such description exists for the 2-periodic case. This derived characteristic can also be thought of as the zeta-regularized rank for the following reason: If $A \in \End(V)$ is a finite linear map, then we have $\det{}'(t A) = t^{\rk A} \det{}'(A)$, where $\det{}'$ indicates the product of the non-zero eigenvalues. Likewise, in the analytic case, we have
\[ \det{}'( t D ) = t^{\chi{}'(D)} \det{}'(D) \]
Thus the derived characteristic is interesting in its own right. Later, we will produce a formula for the derived characteristic in the case of the twisted de Rham complex.
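As a simple illustration of the $\ZZ$-graded formula above, take $M = S^3$ with its de Rham complex: $b_0 = b_3 = 1$ and $b_1 = b_2 = 0$, so that
\[ \schi'(\Omega(S^3),d) = \sum_{k=0}^{3} (-1)^k k \, b_k = -3 , \]
while $p_E(t) = 1 + t^3$ and $p'_E(-1) = 3$.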
\section{Adiabatic Limits}
Our goal is to compare the regularized determinant of the Laplacians of the de Rham operator $d$ and its twisted counterpart $d_H$. The main object we will be considering is the following \emph{deformation} of the standard de Rham complex $(\Omega(M),d)$.
\[ D_t = d + \sum_{i=1}^{(n-1)/2} t^i H_{2i+1} \]
Clearly $D_0 = d$ and $D_1 = d_H$.
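For instance, on a $5$-dimensional manifold with $H = H_3 + H_5$ this reads $D_t = d + t H_3 + t^2 H_5$, so the degree $2i+1$ component of $H$ is switched on at order $t^i$.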
We will analyse the behaviour of this family in two regions. Firstly, in a small neighbourhood of $t=0$, we consider it as a germ of an analytic deformation around $t=0$ and we will use techniques developed by Farber \cite{Farber:1995}. Secondly, we consider it as a continuous family in the region $t \in (0,1]$ and show that it is related here to a family where only the metric is varying, instead of the differential, similar to the situation described in the work of Forman \cite{Forman:1994}. By comparing the behaviour of the zeta function of this family in these two regions as $t\to 0$ we can extract information about the determinants at the endpoints. This type of technique is known in the literature as taking an \emph{adiabatic limit} of the operator $d_H$.
\subsection{Small $t$ behaviour}
The paper of Farber \cite{Farber:1995} determines the \emph{small time} behaviour of the Ray-Singer analytic torsion under a deformation of the elliptic complex. The extension from $\ZZ$-graded to the $\ZZ_2$-graded complexes proceeds easily without complication. We need to know precisely how the spectrum of the family $\Delta_t$ behaves around $t=0$. Farber uses the theory of analytic families due to Kato \cite{Kato}, to show that we can gain a complete understanding of such behaviour for our family $D_t$. Here, we consider the behaviour of the regularized determinant component of the RS metric.
Since the family we are considering is polynomial in $t$, we do not need the full machinery of analytic families as used by Farber, and we refer to their paper for a full discussion of the technicalities in that situation. For polynomial families, such as $Q_t = D_t^* D_t$, the situation is much simpler. We can express one of the results of Kato's deformation theory (c.f. \cite{Kato, Farber:1995}) in the following useful way
\begin{theorem}[Kato]
\label{eigenvaluegerms}
Let $Q_t$ be a holomorphic polynomial family. The spectrum of the operator $Q_t$ (near $t=0$) consists of germs of real analytic functions $\lambda(t)$, each of which belongs to one of the following types
\begin{enumerate}
\item $\lambda_n(t) \equiv 0, \text{ for } 1 \leq n \leq N_0$ (The \emph{stable kernel})
\item $\lambda_n(t) = t^{\nu_n} \bar{\lambda}_n(t), \nu_n \geq 1, \bar{\lambda}_n(0)\neq 0, \text{ for } N_0 < n \leq N$ (The \emph{unstable kernel})
\item $ \lambda_n(0) \neq 0, \text{ for } n > N$
\end{enumerate}
where $N_0, N \in \ZZ$ are both finite.
\end{theorem}
Now that we know that the behaviour of the spectrum of the operator $Q_t$ near $t=0$, we can describe the behaviour of the zeta functions as $t\to 0$. Using the previous result, we can show
$\zeta(Q_t,s) = \zeta_2(Q_t,s) + \zeta_3(Q_t,s) $, where the subscript $\zeta_i$ indicates that we only consider eigenvalues of type $i$. Since there are only finitely many eigenvalues of type 2 (the unstable kernel), we have
\[ \exp \left( -\zeta_2'(Q_t,0 ) \right)= \prod_{n=N_0+1}^{N} t^{\nu_n} \bar{\lambda}_n(t) \]
There is a countable number of type 3 eigenvalues, $\lambda(t)$, and we know that the values $\lambda(0)$ capture all the non-zero eigenvalues of $Q_0$.
It can be shown (c.f. \cite{Farber:1995}) that the zeta function $\zeta_3(Q_t,s)$ for the type 3 eigenvalues is a real analytic function of $t$, and $\lim_{t\to 0} \zeta_3(Q_t,s) = \zeta(Q_0,s)$.
Putting this together, we arrive at
\begin{theorem}[Farber \cite{Farber:1995}]
\label{torsionnearzero}
For the deformations $Q_t$, we have
\[ \det{}'(Q_t) = t^\alpha \theta \det{}'(Q_0) f(t) \]
as $t \to 0$, where
\[ \alpha = \sum_{N_0 < n \leq N} \nu_n, \quad \theta = \prod_{N_0 < n \leq N} \bar\lambda_n(0), \]
$\{t^{\nu_n} \bar\lambda_n(t)\}$ are the type $2$ eigenvalues of the operator $Q_t$, and $f(t)$ is a real analytic function with $f(0) = 1$.
\end{theorem}
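As a finite-dimensional caricature of this statement (a matrix family in place of an elliptic one), consider $Q_t = \mathrm{diag}(0,\, t^2,\, 1+t)$. Its eigenvalues are of types $1$, $2$ and $3$ respectively, and
\[ \det{}'(Q_t) = t^{2}(1+t) = t^{\alpha}\, \theta \, \det{}'(Q_0)\, f(t) \]
with $\alpha = 2$, $\theta = 1$, $\det{}'(Q_0) = 1$ and $f(t) = 1+t$, so that $f(0)=1$.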
Applying this theorem to our superdeterminants, we find
\begin{corollary}
\label{germcomplextpower}
For a deformation of a 2-periodic elliptic complex $(C^\infty(E),D_t)$, we have, as $t\to 0$,
\begin{eqnarray}
\sdet{}'(D_t^* D_t) &=& t^{ \alpha_{\bar0} - \alpha_{\bar1}} (\theta_{\bar0} /\theta_{\bar1}) \sdet{}'(D_0^* D_0) f(t)
\end{eqnarray}
where $f(t)$ is a real analytic function with $f(0) =1$.
\end{corollary}
Our definition of $\alpha$ here may depend on our choice of metric $g$; however, Farber \cite{Farber:1995} shows that $\alpha$ can be computed from a structure known as the parametrized Hodge decomposition, which is shown to be independent of the metric chosen. In a later section, we will give a formula for the coefficient $\theta$, in terms of a spectral sequence associated to the deformation.
\subsection{Large $t$ behavior}
We now consider the behaviour of the determinant in the region $0 < t \leq 1$ by using the analysis of Forman \cite{Forman:1994}, who considered adiabatic limits of finite-dimensional operators. Our goal is to get a formula for the variation $\ddx{t} \szeta{}'(D_t^* D_t) $ as $t\to 0$.
Let $N$ be the grading operator of the de Rham complex and let $\rho_t = t^{N/2}$. It is straightforward to check that \begin{equation}
\label{formanidentity}
t^{1/2} D_t = \rho_t d_H \rho_t^{-1}, \quad t>0
\end{equation}
If we choose a metric $g$, we can compute the adjoint operator
\[ D_t^* = d^* + \sum_{i=1}^{(n-1)/2} t^i H_{2i+1}^* \]
We then find, c.f. \cite{Forman:1994}, that the adjoint of the family $D_t$ is proportional to the adjoint of $d_H$ in the scaled metric $g_t = tg$, given by the following relation
\[ t^{1/2} D_t^* = \rho_t d_H^{*_t} \rho_t^{-1}, \quad t>0 \]
where $*_t$ indicates the adjoint taken with respect to the scaled metric $g_t$.
Thus we see that, up to an overall scale of $t^{1/2}$, the operator $D_t$ is similar to $d_H$, and $D_t^*$ is similar to $d_H^{*_t}$.
\begin{corollary}
For the family of elliptic complexes $(C,D_t)$, the derived Euler characteristic is independent of $t$ when $t>0$, i.e.
\begin{equation}
\schi'( D_{t}) =\schi'( d_H)
\end{equation}
\end{corollary}
\begin{proof}
Because conjugation by an invertible operator does not change the spectrum, we have $\szeta( d_H^{*_t} d_H, s) =t^{-s} \szeta( D^*_{t} D_{t}, s)$. We already know that the derived Euler characteristic is independent of the choice of metric, so after evaluating at $s=0$, we are free to take $t=1$ on the left-hand side.
\end{proof}
Note that $g_t$ is not a metric at $t=0$, so it is possible that $\schi'(d_H) \neq \schi'(d)$. One of the main results of this paper is to give a formula for the difference $\schi'( d_H) - \schi'( d)$ (theorem \ref{derivedeuleroffluxtwisted}).
In the following, we use the Mellin transform notation
\[ \mellin{f(z)}{z:s} = \int_0^\infty z^s f(z) \, d\log z \]
We begin with a useful variation formula.
\begin{lemma}
For the family $D_t$, we have
\[ \ddx{t}\szeta(t D_t^* D_t,s) = s \Gamma(s)^{-1} t^{-(s+1)} \mellin{\Str( N (e^{-z \Delta_t} - P_t) )}{z:s}, \]
where $P_t$ is the orthogonal projection onto the kernel of $\Delta_t := D_t^* D_t+ D_t D_t^*$.
\end{lemma}
\begin{proof}
We begin with a formula for the derivative of the scaled differential
\[ \ddx{t} (t^{1/2} D_{t}) = [ N/2t , \rho_t d_H \rho_{t}^{-1} ] = t^{-1/2} [N/2, D_{t} ] \]
and
\[ \ddx{t} (t^{1/2} D_{t}^{*} ) = -t^{-1/2} [N/2 ,D_{t}^*] \]
so we see
\[ \ddx{t}( t D_{t}^{*} D_{t} ) = (-[N/2, D^*_{t}] D_{t} + D^*_{t} [N/2,D_{t}]) \]
Now, using Duhamel's formula (c.f. \cite{BGV})
\begin{eqnarray*}
\ddx{t} \str( e^{-z t D_{t}^{*} D_{t} }-P_t) &=& - z \str( (-[N/2, D^*_{t}] D_{t} + D^*_{t} [N/2,D_{t}])e^{-z t D_{t}^{*} D_{t} } ) \\
&=& z \str( N e^{-z t \Delta_t} \Delta_t ) \\
\end{eqnarray*}
So
\begin{eqnarray*}
\ddx{t} \szeta(t D_t^* D_t,s) &=& \Gamma(s)^{-1} \mellin{z \str( N e^{-z t \Delta_t} \Delta_t )}{z:s} \\
&=& \Gamma(s)^{-1} t^{-(s+1)} \mellin{z \str( N e^{-z \Delta_t} \Delta_t )}{z:s} \\
&=& -\Gamma(s)^{-1} t^{-(s+1)} \mellin{z \ddx{z} \str( N (e^{-z \Delta_t} - P_t) )}{z:s} \\
&=& s \Gamma(s)^{-1} t^{-(s+1)} \mellin{\str( N (e^{-z \Delta_t} - P_t) )}{z:s}
\end{eqnarray*}
where we have used the following standard rules for the Mellin transform
\[ \mellin{ f(tz) }{z:s} = t^{-s} \mellin{f(z)}{z:s} \]
\[ \mellin{ z \ddx{z} f(z) }{z:s} = -s \mellin{f(z)}{z:s} \]
(the first follows from the substitution $z \mapsto z/t$, the second from integration by parts in $z$), and we arrive at the result.
\end{proof}
Using this result, we can calculate the variation of the zeta function
\begin{corollary}
The derivative of the zeta function, $\szeta{}'(D_t^* D_t)$, of the family $D_t$ has the following $t$-dependence
\begin{equation}
\ddx{t} \szeta{}'(D_t^* D_t) = t^{-1} \schi{}'(d_H )-t^{-1} \Str( N P_t )
\end{equation}
\end{corollary}
\begin{proof}
Clearly, we have
\begin{eqnarray*}
\ddx{t} \szeta(t D_t^* D_t,s) &=& \ddx{t} (t^{-s} \szeta(D_t^* D_t,s)) \\
&=& -s t^{-(s+1)} \szeta(D_t^* D_t,s) + t^{-s} \ddx{t}\szeta(D_t^* D_t,s)
\end{eqnarray*}
and so we arrive at
\[ \ddx{t}\szeta(D_t^* D_t,s) = s t^{-1} \szeta(D_t^* D_t,s)+s \Gamma(s)^{-1} t^{-1} \mellin{\Str( N (e^{-z \Delta_t} - P_t) )}{z:s} \]
After taking the derivative w.r.t $s$ and evaluating at $s=0$, the contribution of the Mellin transform of $\Str(N e^{-z \Delta_t})$ yields the index term $\Str(N A_{n/2})$ (c.f. \cite{BGV}) which vanishes because $M$ is odd-dimensional, and thus we arrive at the result.
\end{proof}
There also exists a standard variation formula for the change of the zeta function under a change in the metric, c.f. \cite{Mathai:2011}. The main difference between this variation formula and the usual technique is that the family of metrics $g_t$ degenerates at $t=0$, whereas the family of operators $D_t$ is smooth.
We can now analyze the behaviour of the term $\Str( N P_{t})$ near $t=0$, where we expect only the type 1 eigenvalues to contribute
\begin{proposition}
\label{nptbehaviour}
As $t\to 0$,
\[ \Str( N P_{t}) = \schi'_0 + O(t)\]
where $\schi'_0 := \Str_{\text{type 1}}( N P_{0})$ is the derived Euler characteristic at $t=0$ of the finite collection of type 1 eigenvectors of $ \Delta_t = D_t^* D_t+ D_tD_t^*$.
\end{proposition}
\begin{remark}
The complex $(C,D_0 = d)$ is $\ZZ$-graded, and the usual formula $\schi'_0 = \sum_k (-1)^k k N_k$ applies, where $N_k$ is the number of type 1 eigenvalues of degree $k$.
\end{remark}
\begin{proof}
We need to show that there exists an orthonormal basis $\{\phi_i(t)\}$ for the stable kernel of $\Delta_t$ such that each of the forms $\phi_i(0)$ is homogeneous in the $\ZZ$-grading, not just the $\ZZ_2$-grading. The existence of such a basis follows quite simply from \cite[Thm 6]{Forman:1994}. We describe this in the next section (corollary \ref{zgradingofleadingterms}), once we have introduced the spectral sequence of a deformation. Assuming such a basis exists, we have
\begin{eqnarray*}
\Str( N P_t ) &=& \sum_{i} ( \phi_i(t) , (-1)^N N \phi_i(t) )\\
&=& \sum_{i} (-1)^{d_i} d_i ( \phi_i(0) , \phi_i(0) ) + O(t) \\
&=:& \schi'_0 + O(t)
\end{eqnarray*}
where $d_i = \deg \phi_i(0)$.
\end{proof}
Using the above, we arrive at
\begin{proposition}
As $t\to 0$,
\label{dtsmalltbehavior}
\[ \sdet{}'(D_t^* D_t) = t^{\schi'_0-\schi{}'(d_H)} \sdet{}'(d_H^* d_H) f(t) \]
where $f(0) \neq 0$.
\end{proposition}
\begin{proof}
For $0 <t <1$, we have
\begin{eqnarray*}
\szeta{}'(D_t^* D_t) &=& \szeta{}'(D_1^* D_1)-\int_{t}^{1} \left( \ddx{s} \szeta{}'(D_s^* D_s) \right) ds \\
& = & \szeta{}'(d_H^* d_H)-\int_{t}^{1} \left( s^{-1} \schi{}'(d_H)-s^{-1} \Str( N P_s ) \right) ds \\
& = & \szeta{}'(d_H^* d_H) +\left( \schi{}'( d_H)-\schi'_0 \right)\log t + \int_{t}^{1} s^{-1}\left(\Str( N P_s) -\schi'_0\right) ds\\
\end{eqnarray*}
Due to proposition (\ref{nptbehaviour}), we know that $\Str( N P_s) -\schi'_0$ is $O(s)$, so the integral is finite as $t\to 0$. This implies
\begin{eqnarray*}
\sdet{}'(D_t^* D_t) &=& \exp( - \szeta{}'(D_t^* D_t)) \\
&=& t^{\schi'_0- \schi{}'(d_H)} \sdet{}'(d_H^* d_H) e^{-h(t)}
\end{eqnarray*}
where
\[ h(t) = \int_{t}^{1} (\Str( N P_s)- \schi'_0) s^{-1} ds. \]
and the result follows.
\end{proof}
\subsection{Comparison of the two behaviors} In proposition \ref{dtsmalltbehavior}, we found that the limiting behaviour as $t\to0$ of the zeta function $\sdet{}'(D_t^* D_t)$ is
\[ \sdet{}'(D_t^* D_t) = t^{\schi'_0 - \schi{}'(d_H)} \sdet{}'(d_H^* d_H) f(t)\]
Comparing this with the behaviour computed from the germ complex in corollary \ref{germcomplextpower},
\[ \sdet{}'(D_t^* D_t ) \sim t^{\alpha_{\bar0} - \alpha_{\bar1}} (\theta_{\bar0}/ \theta_{\bar1}) \sdet{}'(d^* d ) \]
we arrive at one of our key results
\begin{theorem}
\label{derivedeuleroffluxtwisted}
The derived Euler characteristic $\schi'(d_H)$ of the flux-twisted de Rham complex is an integer, and is given by the following formula
\begin{eqnarray*}
\schi'(d_H) &=& \schi'_0 - \alpha_{\bar0} + \alpha_{\bar1}
\end{eqnarray*}
\end{theorem}
We can also immediately read off
\begin{corollary}
The determinants are related by
\[ \sdet{}'(d_H^* d_H) = \left(\theta_{\bar0} /\theta_{\bar1}\right)\sdet{}'(d^* d)\,{f(0)} \]
where $\theta_{{\bar k}} := \prod_{i}^{} \bar \lambda_{i}(0)$ is the product over the type $2$ eigenvalues $\lambda(t) = t^\nu \bar \lambda(t)$ of $d_H^{*} d_H $ acting on $\Omega^{\bar k}$, and
\[ f(0) =\exp \left(- \int_{0}^{1} \left(\Str( N P_s)- \schi'_0\right) s^{-1}ds\right)\]
\end{corollary}
In the next section, we will show that there is an alternative description of the term $\left(\theta_{\bar0} /\theta_{\bar1}\right)$ coming from a spectral sequence associated to the deformation. We will also demonstrate that the `defect' term $f(0)$ in the above formula is identically equal to 1.
\section{The Adiabatic Spectral Sequence}
Here we describe a method to compute the twisted cohomology $H(C,d_H)$ from the untwisted cohomology $H(C,d)$ provided by a spectral sequence. The idea is to successively approximate the kernel of the operator $D_t$ for small $t$, starting from the kernel of the undeformed operator $D_0=d$. With this tool we can extract information about the eigenvalues of the deformed complex, and we can also gain an insight into how the volume form varies along a deformation. We begin by defining the following filtration on $\Omega(M)$.
\[ \sA_i(M) = \sum_{j\geq i, i\equiv j \bmod 2} \Omega^{j}(M) \]
Since $d_H$ can only increase form degree, it preserves this filtration, an observation noted in \cite{Rohm:1986}. The spectral sequence we are interested in is the Leray spectral sequence for this filtration (c.f. \cite{McCleary:2001})
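For instance, on a $3$-manifold with $H = H_3$ the filtration reads $\sA_0 = \Omega^0 \oplus \Omega^2 \supset \sA_2 = \Omega^2$ and $\sA_1 = \Omega^1 \oplus \Omega^3 \supset \sA_3 = \Omega^3$, and one checks directly that $d_H(\sA_i) \subset \sA_{i+1}$ for each $i$, since $d$ raises the filtration degree by one and $H_3\wedge$ raises it by three; this is the sense in which $d_H$ is filtration preserving.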
\begin{proposition}
There is a spectral sequence, the \emph{adiabatic spectral sequence}, $(E_j^{\bar k},\partial_j)$, with second page $E_2^{\bar k} = H^{\bar k}(d)$, converging in finitely many pages to $H^{\bar k}(d_H)$.
\end{proposition}
For full details we refer to the original paper where this was used in the setting of analytic torsion \cite{Farber:1995}; here we just provide some relevant facts. The spectral sequence page $E_j^{\bar k}$ can be loosely described as consisting of the leading terms $v(0)$ of forms $v(t) \in \Omega^{{\bar k}}(M)[t]$ that vanish to order $t^{2j}$ under the Laplacian $\Delta_t = D_t^*D_t + D_t D_t^*$, modulo certain relations amongst the possible extensions $v(t)$. The identification of $E^{\bar k}_{\infty}$ with $H^{\bar k}(d_H)$ is given by $[v(0)] \mapsto [v(1)]$ for a particular choice of $v(t)$ extending $v(0)$.
It was shown in \cite{Forman:1994} that there is also a natural $\ZZ$-grading on these spaces, which we have made reference to before.
\begin{theorem}[Forman]
\label{zgradingcompatibility}
The differential $\partial_j$ of the spectral sequence is compatible with the $\ZZ$-grading on $E^{\bar k}_j$ given by $E_{j,\ell}^{\bar k} = E_j^{\bar k} \cap \Omega^\ell(M)$, i.e.
\begin{itemize}
\item[1)] $E_j^{\bar k} = \bigoplus_{\ell\geq 0} E_{j,\ell}^{\bar k}$
\item[2)] $\partial_j E_{j,\ell}^{{\bar k}} \subset E^{\overline {k+1}}_{j,\ell+j}$
\item[3)] $\partial_j^* E_{j,\ell}^{{\bar k}} \subset E^{\overline {k+1}}_{j,\ell-j}$
\end{itemize}
\end{theorem}
The compatibility of the leading terms of this spectral sequence with this $\ZZ$-grading yields an important result that we used in the previous section
\begin{corollary}
\label{zgradingofleadingterms}
For the complex $(\Omega(M)[t], D_t )$, the space of harmonic forms has a basis in which every element $v(t) = v_0 + O(t)$ has a leading term $v_0$ that is homogeneous in the natural $\ZZ$-grading on forms.
\end{corollary}
Given a finite dimensional chain complex $(V,\partial)$, there is no canonical inclusion of the cohomology as a subcomplex $H(V,\partial) \to V$. One such inclusion is given by Hodge theory, where the complex is equipped with inner products, and the representatives for the cohomology are taken to be the harmonic classes of unit norm. There is, however, a \emph{canonical} isomorphism $\det V \cong \det H(V,\partial)$, known as the Knudsen-Mumford map \cite{Knudsen:1976}. Thus, for each page in the spectral sequence, we obtain canonical isomorphisms
\[ \kappa : \det E_j^{\bar k} \to \det H(E_j^{\bar k},\partial_j) = \det E_{j+1}^{\bar k} \]
After composing these maps for each page in the sequence after the second page, we obtain a canonical isomorphism
\[ \kappa : \det H(C,d) \cong \det H(C,d_H) \]
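In the simplest instance, that of an acyclic two-term complex $0 \to V_0 \xrightarrow{\ \partial\ } V_1 \to 0$ with $\partial$ invertible, the Knudsen-Mumford map sends $(v_1\wedge\dots\wedge v_m)\otimes(\partial v_1\wedge\dots\wedge \partial v_m)^{-1} \in \det V_0 \otimes (\det V_1)^{-1}$ to $1 \in \RR = \det H(V,\partial)$ (up to the sign conventions of \cite{Knudsen:1976}); this is independent of the chosen basis $v_1,\dots,v_m$ of $V_0$, since a change of basis rescales both factors by the same determinant.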
To see how this isomorphism fits into our analysis of the associated metrics on the determinant lines, we can use the following important theorem
\begin{proposition}[Farber \cite{Farber:1995}]
Under the isomorphism $\kappa$ of determinant lines, we have
\[ \kappa^* \| \cdot \|_{\det \Delta_H} = ( \theta_{\bar0}/\theta_{\bar1})^{-1/2} \| \cdot \|_{\det \Delta}, \]
where $\theta_{\bar0}$ and $\theta_{\bar1}$ are as in corollary (\ref{germcomplextpower}).
\end{proposition}
Thus we need not worry about the spectral term $ ( \theta_{\bar0}/\theta_{\bar1})$ coming from the finite number of eigenvalues in the unstable kernel, as it will be absorbed by the use of the canonical isomorphism of determinant lines.
Combining this theorem with theorem (\ref{derivedeuleroffluxtwisted}), we find
\begin{corollary}
Under the isomorphism $\kappa$ between the corresponding determinant lines the twisted metric is related to the Ray-Singer metric by the following
\[ \| \cdot \|_{RS} = \Gamma \, \kappa^* \| \cdot \|_{H} \]
where
\[ \log \Gamma = -\int_{0}^{1} \left(\Str( N P_s)- \schi'_0(d)\right) s^{-1}ds\]
and $P_s$ is the projection onto the kernel of $\Delta_s = D_s^* D_s+D_s D_s^*$, and $\schi'_0(d)$ is the derived Euler characteristic of the unstable eigenvalues of $D_t$ at $t=0$.
\end{corollary}
\subsection{Vanishing of the Defect}
We now make the observation that since the twisted metric, the untwisted metric and the map $\kappa$ are independent of the various metrics chosen on the complex, the constant $\Gamma = f(0)$ must also be independent of such a choice. Using this, we can show that the defect vanishes.
\begin{lemma}
$\Gamma = 1$.
\end{lemma}
\begin{proof}
Let $\Delta_t^{g}$ be the Laplacian of $D_t$ in the metric $g$. We have noted before that
\[ t \Delta_t^{g}= \rho_t \Delta_1^{tg} \rho_t^{-1}\]
If we consider how the kernel projections act, it is easy to see that $\rho_t P_{\ker \Delta_1^{tg} } \rho_t^{-1} = P_{\ker \Delta_t^g}$. We start with
\begin{equation} \log \Gamma = - \int_{0}^{1} \left(\Str( N P_{\ker \Delta_s^{g} })- \schi'_0(d)\right) s^{-1}ds \tag{$*$}
\end{equation}
We can then see that the term $\Str( N P_{\ker \Delta_s^{g} })$ is equal to $\Str( N \rho_s P_{\ker \Delta_1^{sg} } \rho_s^{-1})$, and since $[N,\rho_s] = 0$, this is equal to $\Str( N P_{\ker \Delta_1^{sg} })$. With this, we have
\begin{equation*}
\log \Gamma = -\int_{0}^{1} \left( \Str( N P_{\ker \Delta_1^{sg} })- \schi'_0(d) \right) s^{-1} ds
\end{equation*}
However, since we know that this quantity must be independent of $g$, we are free to scale $g$ by $\lambda \in \RR_{>0}$.
\begin{eqnarray*}
\log \Gamma = -\int_{0}^{1} \left( \Str( N P_{\ker \Delta_1^{s\lambda g} })- \schi'_0(d) \right) s^{-1} ds
\end{eqnarray*}
Now we change variables, replacing $s$ by $\lambda^{-1} s$, and find
\begin{eqnarray*}
\log \Gamma = -\int_{0}^{\lambda} \left( \Str( N P_{\ker \Delta_1^{sg} })- \schi'_0(d) \right) s^{-1} ds
\end{eqnarray*}
We can now change the integrand back to its original form,
\begin{eqnarray*}
\log \Gamma = -\int_{0}^{\lambda} \left(\Str( N P_{\ker \Delta_s^{g} })- \schi'_0(d)\right) s^{-1}ds
\end{eqnarray*}
Comparing this expression with $(*)$, we see that $\log \Gamma$ equals the same integral taken over $(0,\lambda]$ for every $\lambda > 0$; since the integrand is bounded near $s=0$ by proposition (\ref{nptbehaviour}), letting $\lambda \to 0$ forces $\log \Gamma = 0$.
\end{proof}
With this, we can state the main result of this work
\begin{theorem}
\label{mainthm}
The twisted and untwisted metrics are identified under the canonical isomorphism of determinant lines.
\[ \| \cdot \|_{RS} = \kappa^* \| \cdot \|_H \]
\end{theorem}
This theorem allows us to compute the regularized determinant of the Laplacian $d_H^*d_H$ in several examples, and provides extensions of any properties of the untwisted metric to the twisted case.
\section{Examples}
We can use our main results to easily calculate some of the invariants we are interested in for some simple examples.
\begin{lemma}
\label{twistcohomvanish}
If $H(C,d_H) = 0$, then
\[ \schi'( d_H) = -\alpha_{\bar0} + \alpha_{\bar1}\]
\end{lemma}
\begin{proof}
The twisted cohomology groups $H(C,D_t)$ are isomorphic for all $t>0$, so if they vanish at $t=1$ they vanish for all $t>0$. Hence $P_t \equiv 0$ for $t>0$, there are no type 1 eigenvalues, and $\schi'_0(d)=0$; the formula then follows from theorem \ref{derivedeuleroffluxtwisted}.
\end{proof}
In \cite{Mathai:2011}, the case where $H$ is of top degree was studied
\begin{proposition}[\cite{Mathai:2011}]
If $M^n$ is an odd-dimensional, compact, oriented Riemannian manifold and $H$ is a multiple of the volume form, then
\[ \sdet{}'(d_H^* d_H) = \| H \|^{2b_0} \sdet{}'(d^* d), \] where $b_0 = \dim H^0(M,\RR)$ is the $0$th Betti number
\end{proposition}
In their paper, Mathai-Wu prove this result using a subtle factorization argument of regularized determinants developed by Kontsevich and Vishik in \cite{kontsevich:1995a}. Here, we reprove this theorem without the need for such powerful machinery, and answer a question posed in their paper about the value of the derived Euler characteristic.
\begin{theorem}
If $M^n$ is an odd-dimensional, compact, oriented Riemannian manifold of dimension $n>1$ and $H$ is a multiple of the volume form, then
\[ \sdet{}'(d_H^* d_H) = \| H \|^{2b_0} \sdet{}'(d^* d) \]
and furthermore
\[ \schi'(d_H) = \schi'(d) + b_0\]
where $b_0$ is the $0$th Betti number.
\end{theorem}
\begin{proof}
We wish to determine the unstable eigenvectors and eigenvalues of the Laplacian. Let $s = t^{(n-1)/2}$. For $t > 0$, the operator $D_t = d+s H$, although it is $\ZZ_2$-graded, preserves the $\ZZ$-grading on forms except in the first two and last two degrees. $D_t: C^0 \dsum C^{n-1} \to C^1 \dsum C^n$ has kernel $C^{n-1}_{cl}$, since $H \wedge : C^0 \to C^n$ is a bijection. Now $D_t=d$ when acting on forms of non-zero degree, so the $D_t$-closed forms are just $d$-closed forms in this range. The $D_t$-exact forms are given by $d$-exact forms of degree $>1$, as well as the composite forms $(d \tau, s H\wedge \tau) \in C^1 \dsum C^n, \tau \in C^0$. So the cohomology groups of $d_H$ are the same as those for $d$, except possibly in degrees $0,1$ and $n$. Consider a closed form $\beta \in C^n_{cl}$. Since $H^n(C,d) = \RR^{b_0}[H]$, choose a form $\gamma \in C^{n-1}$ so that $\beta-d\gamma = \lambda s H, \lambda \in C^0_{cl}$, i.e $\beta = D_t (\lambda + \gamma)$, thus $\beta$ is $D_t$-exact. Now, take a closed form $\alpha \in C^1_{cl}$, which is necessarily cohomologous to $(\alpha+d\tau,s H\wedge \tau)$. Since $s H\wedge \tau$ is closed, $\alpha$ is also exact by the previous argument. Thus $\alpha \sim \alpha + d\tau$, and if $\alpha$ is $d$-harmonic, then it is also $D_t$-harmonic. Thus we find
\[ H(C,D_t) \cong \bigoplus_{k=1}^{n-1} H^k(C,d) \]
We can also see this by noting that $H(C,d_H) = H(H(C,d),H\wedge)$, but it is not obvious that we can choose homogeneous harmonic representatives for our cohomology classes.
This means that the $D_t$-harmonic forms are $\ZZ$-graded, and so we have
\[ \Str( N P_t) = \schi'_0(d) = \sum_{k=1}^{n-1} (-1)^k k b_k = \schi'(d) + n b_n = \schi'(d)+n b_0\]
We need to look at the behavior of the unstable kernel under this deformation. The locally constant functions have a basis $\{v_i\}_{i=1}^{b_0}$, where $v_i$ is non-vanishing only on the $i$th connected component. These trivially satisfy $d^* d v_i = 0$, and we find $D_t^* D_t v_i = s^2 \| H \|^2 v_i $ since $H$ is co-closed. This tells us each $v_i$ is an eigenvector of $D_t^* D_t$, with eigenvalue $\lambda(t) = s^2 \| H \|^2 = t^{n-1} \| H \|^2$, and these are the only even forms in the unstable kernel. Thus we find
\[ \theta_{\bar0} = \| H \|^{2 b_0}, \quad \theta_{\bar1} = 1, \quad \alpha_{\bar0} = (n-1) b_0, \quad \alpha_{\bar1} = 0 \]
where the $\alpha_{\bar k}$ are defined as in theorem \ref{torsionnearzero}.
So using theorem \ref{derivedeuleroffluxtwisted}, we find
\begin{eqnarray*}
\schi'(d_H) &=& \schi'_0(d) - \alpha_{\bar0} + \alpha_{\bar1} \\
&=& \schi'(d) + n b_0 - (n-1) b_0 \\
&=& \schi'(d) + b_0
\end{eqnarray*}
Furthermore
\begin{eqnarray*}
\sdet{}'(d_H^* d_H) &=& (\theta_{\bar0}/\theta_{\bar1}) \sdet{}'(d^* d) \\
&=& \| H \|^{2b_0} \sdet{}'(d^* d)
\end{eqnarray*}
\end{proof}
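For instance, for $M = S^3$ and $H$ a non-zero constant multiple of the volume form, $b_0 = 1$ and $\schi'(d) = -3$, so the theorem gives $\schi'(d_H) = -2$ and $\sdet{}'(d_H^* d_H) = \|H\|^{2}\, \sdet{}'(d^* d)$. Since the twisted cohomology vanishes in this case, this is consistent with lemma \ref{twistcohomvanish}, which gives $\schi'(d_H) = -\alpha_{\bar0} = -(n-1)b_0 = -2$.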
The main difference between our proof and that of Mathai-Wu is that we only need to focus on the contributions of the finite collection of unstable eigenvalues, as suggested by Farber's work, and not the entire spectrum of the twisted differential.
\section{Relation to $T$-duality}
One motivation for the work of Mathai-Wu was to study the behaviour of the analytic torsion under a transformation known as topological $T$-duality. This concept comes from a geometric duality between the spacetimes of type IIA and IIB string theories. After discarding all the geometric information and all of the string theory, one is left with the following topological set-up. Let $p_1: P_1\to M$ be a principal circle bundle over an even-dimensional base $M$, and $H_1 \in H^3(P_1,\ZZ)$. Then we can push forward $H_1$ along $p_1$ to get $(p_1)_*H_1 \in H^2(M,\ZZ)$. This class on the base represents the first Chern class of another principal circle bundle $p_2: P_2 \to M$, i.e. $c_1(P_2) = (p_1)_*H_1$. The question is: can we find a degree three cohomology class on $P_2$ such that this picture becomes symmetric?
\begin{proposition}[Topological T-duality \cite{Bouwknegt:2004}]
There exists a unique class $H_2 \in H^3(P_2,\ZZ)$ such that $(p_2)_*H_2 = c_1(P_1)$ and that satisfies $p_1^*H_2 = p_2^*H_1 \in H^3(P_1 \times_M P_2,\ZZ)$.
\end{proposition}
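The standard example: take $M = S^2$, $P_1 = S^3$ the Hopf bundle (so that $c_1(P_1)$ generates $H^2(S^2,\ZZ)$) and $H_1 = 0$. Then $(p_1)_*H_1 = 0$, so the dual bundle is the trivial one, $P_2 = S^2 \times S^1$, and $H_2$ is the generator of $H^3(S^2\times S^1,\ZZ) \cong \ZZ$; indeed $(p_2)_*H_2 = c_1(P_1)$, and both fluxes pull back to zero on the correspondence space $P_1 \times_M P_2 \cong S^3 \times S^1$.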
The general philosophy of this kind of T-duality is that we can use a Fourier-Mukai type transform (which we won't describe here) to relate certain 'twisted' $\ZZ_2$-graded objects between $P_1$ and $P_2$. This is loosely summarized as
\[
\begin{array}{c}
\{\text{ even/odd } H_1 \text{-twisted objects on }P_1 \} \\
\Updownarrow\\
\{\text{ odd/even }H_2 \text{-twisted objects on }P_2 \} \\
\end{array}
\]
The canonical example is given by
\begin{proposition}[\cite{Bouwknegt:2004}]
With the data described earlier, there is a $T$-duality isomorphism between the twisted cohomology groups
\[ \mathcal{T} : H^\pm(P_1,d_{H_1}) \to H^{\mp}(P_2,d_{H_2}) \]
Here $H_i$ are taken to be certain de Rham representatives of the corresponding classes in cohomology.
\end{proposition}
Thus we also get an induced map \[\det \mathcal{T} : \det H(P_1,d_{H_1}) \to \det H(P_2,d_{H_2})^* \]
The natural question in our context is how this isomorphism relates the corresponding analytic torsion elements.
\begin{proposition}[Mathai-Wu \cite{Mathai:2011}]
Under the $T$-duality map, we have
\[ \det \mathcal{T} : 2^{\chi(M)} \tau( P_1, d_{H_1} ) \mapsto \left( 2^{\chi(M)} \tau( P_2,d_{H_2} ) \right)^{-1}\]
\end{proposition}
Thus the T-duality map preserves the analytic torsion. Now we can combine this theory with our theorem comparing twisted to untwisted analytic torsion in the following way: let $P_1$ be a non-trivial circle bundle over $M$, and suppose we want to compute the untwisted torsion $\tau(P_1,d)$. We can first compute the $T$-dual circle bundle $P_2$. We have chosen $H_1=0$, so the dual circle bundle will have vanishing Chern class, thus $P_2 \cong M \times S^1$. The corresponding dual flux must then be $H_2 = c_1(P_1) \boxtimes [S^1] \in H^3(M \times S^1,\ZZ)$. We now apply the T-duality map $\det \mathcal{T}$, which tells us that we only need to compute $\tau(M \times S^1, d_{H_2})$. We now use the canonical isomorphism $\kappa$ to show that we only need to compute $\tau(M \times S^1,d)$. This torsion is significantly simpler, since the non-trivial bundle $P_1$ has been replaced by a trivial bundle. Thus in this way, we can use some \emph{cohomological} maps between determinant lines to simplify the calculation of the untwisted torsion.
\section{Extension to Flat Superconnections}
One of the main sources of $\ZZ_2$-graded complexes is flat superconnections. These operators, first defined by Quillen \cite{Quillen:1985}, are a generalization of the concept of a connection on a vector bundle to that of super-bundles, or $\ZZ_2$-graded vector bundles.
\begin{definition} A \emph{superconnection} on a $\ZZ_2$-graded vector bundle $E = E^+ \oplus E^-$ on $M$ is an odd parity, first order differential operator $\mathbb{A} \in \End^-( \sA(M,E) )$ satisfying the Leibniz rule
\[ [\mathbb{A},\alpha] = d\alpha \quad \text{for } \alpha \in \sA(M), \]
where we think of the form $\alpha$ as operating on $\sA(M,E)$ by exterior multiplication.
\end{definition}
If we consider $L = M \times\RR$ as the trivial superbundle, $E = E^+ \oplus E^- = L \oplus 0$, then $d_H$ is a superconnection on this bundle.
Note that if we fold up a $\ZZ$-graded complex with a connection, we get a superconnection.
We can extend $\mathbb{A}$ to act on $\sA(M,\End E)$ by $\mathbb{A} \alpha := [\mathbb{A},\alpha]$ for $\alpha \in \sA(M,\End E )$. Using these definitions, we find that the \emph{curvature} $F_\mathbb{A} := \mathbb{A}^2$ is given by exterior multiplication by a form, $F_\mathbb{A} \in \sA^+(M,\End E)$. If we decompose the superconnection into homogeneous components $\mathbb{A} = \sum_{i=0}^{n} \mathbb{A}_{[i]}$, where $\mathbb{A}_{[i]} : \Gamma(M,E) \to \sA^i(M,E)$, we see that $\mathbb{A}_{[1]}$ is a connection on the bundle $E$ that preserves the $\ZZ_2$-grading, and that for each $i\neq 1$, $\mathbb{A}_{[i]}$ is given by exterior multiplication with some form $\mathbb{A}_{[i]} \alpha = \omega_{[i]} \wedge \alpha$, where $\omega_{[i]} \in \sA^{i,-}(M,\End E ) $.
We say a superconnection is \emph{flat} if its curvature vanishes identically, i.e. $F_\mathbb{A} = 0$. Note that if a superconnection $\mathbb{A}$ is flat, it does not necessarily mean that the connection component $\mathbb{A}_{[1]}$ is also flat. Given a flat superconnection, $\mathbb{A}$, we can form the $\ZZ_2$-graded complex $(\sA^\pm(M,E) ,\mathbb{A})$. The Leibniz property ensures that this complex is elliptic, and so following Mathai-Wu \cite{Mathai:2011a}, we can define the RS metric $\| \cdot \|_{(E,\mathbb{A})}$ on $\det H(E,\mathbb{A})$.
Since the connection $\nabla := \mathbb{A}_{[1]}$ need not be flat, there is in general no torsion invariant associated to $\nabla$. However, if we label $\partial := \mathbb{A}_{[0]},\phi := \mathbb{A}_{[2]}$, and examine the first three flatness constraints (homogeneous components of $F_\mathbb{A} =0$) for $\mathbb{A}$, we find
\[ \partial^2=0,\qquad [\partial,\nabla] =0,\qquad [\partial,\phi] + F_\nabla =0\]
We see that the first condition implies that the bundle map $\partial$ squares to zero, i.e. $(E, \partial)$ is a $\ZZ_2$-graded complex of vector bundles. The second flatness condition implies that the connection $\nabla$ commutes with $\partial$. The third condition implies that the curvature $F_\nabla = \nabla^2$ is $\partial$-chain homotopically trivial, i.e. vanishes in the $\partial$-cohomology. Now, if we assume that $\ker \partial \subset E$ is a constant-rank vector bundle, then $\im \partial \subset \ker \partial$ is a constant-rank vector sub-bundle, and so the quotient vector bundle $\cH := H(E,\partial) = \ker \partial/ \im \partial$ is also constant rank.
Thus if the bundle $\cH$ is suitably well defined, then the connection $\nabla$ descends to a \emph{flat} superconnection $\tilde\nabla$ on the quotient, known as the \emph{Gauss-Manin} connection. We are then able to construct the RS metric $\| \cdot \|_{(\cH,\tilde\nabla)}$ on $\det H(\cH,\tilde\nabla)$.
So we find that from a flat superconnection we can construct two metrics: the RS metric of the complex $(\sA(M,E),\mathbb{A})$, and the RS metric of the Gauss-Manin connection $(\cH,\tilde \nabla)$.
There is also a canonical filtration on the complex of forms valued in a super-bundle. Define the filtration by form degree
\[ \sA^{i,{\bar k}}(M,E) = \bigoplus_{j\geq i} \Omega^j(M,E^{\overline{k-j}}) \]
It is clear that these spaces filter $\sA^{{\bar k}}(M,E)$
\[ \sA^{{\bar k}}(M,E) = \sA^{0,{\bar k}}(M,E) \supset \sA^{1,{\bar k}}(M,E) \supset \ldots \supset \sA^{n,{\bar k}}(M,E) \]
and that $\mathbb{A} : \sA^{i,{\bar k}}(M,E) \to \sA^{i,{\overline {k+1}}}(M,E)$. Thus, as before, we can construct a spectral sequence, for which we find $E_2^{\bar k} = H^{\bar k}(\cH,\tilde\nabla)$ and which converges to $E_\infty^{\bar k}= H^{\bar k}(E,\mathbb{A})$. From this we construct an identification of determinant lines $\kappa : \det H(\cH,\tilde\nabla) \to \det H( E,\mathbb{A})$.
\begin{conjecture}
Under the identification of determinant lines, we have
\[ \| \cdot \|_{(\cH,\tilde\nabla)} \mapsto \kappa^* \| \cdot \|_{(E,\mathbb{A})}\]
\end{conjecture}
Thus we have shown that this conjecture holds in the case where $E = E^+ = M\times\RR$, and $\mathbb{A} = d_H$ is a flat superconnection on this bundle. This case was easy to handle because $\mathbb{A}_{[0]} = \partial= 0$, and so $H(E,\partial) = E$. For the general case, a more sophisticated analysis of the eigenvalues of the family associated to the scaled operator $\mathbb{A}_t = \rho_t \mathbb{A} \rho_t^{-1}$ will be required.
\section{Acknowledgements}
This work encompasses part of the content of the author's master's thesis, completed while studying at the University of Adelaide with the support of an Australian Postgraduate Award (APA). The author benefited greatly from supervision by Varghese Mathai and Michael Murray, and from conversations with Maxim Braverman.
\renewcommand{\bibname}{References}
\bibliographystyle{plain}
| {
"timestamp": "2013-11-27T02:12:14",
"yymm": "1311",
"arxiv_id": "1311.6788",
"language": "en",
"url": "https://arxiv.org/abs/1311.6788",
"abstract": "We study an analogue of the analytic torsion for elliptic complexes that are graded by $\\mathbb{Z}_2$, orignally constructed by Mathai and Wu. Motivated by topological T-duality, Bouwknegt an Mathai study the complex of forms on an odd-dimensional manifold equipped with with the twisted differential $d_H = d+H$, where $H$ is a closed odd-dimensional form. We show that the Ray-Singer metric on this twisted determinant is equal to the untwisted Ray-Singer metric when the determinant lines are identified using a canonical isomorphism. We also study another analytical invariant of the twisted differential, the derived Euler characteristic $\\chi'(d_H)$, as defined by Bismut and Zhang.",
"subjects": "Differential Geometry (math.DG)",
"title": "Twisted Analytic Torsion and Adiabatic Limits",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540722737479,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046568439696
} |
https://arxiv.org/abs/1305.4282 | Dynamics of a Continuous Piecewise Affine Map of the Square | We present a one-parameter family of continuous, piecewise affine, area preserving maps of the square, which are inspired by a dynamical system in game theory. Interested in the coexistence of stochastic and (quasi-)periodic behaviour, we investigate invariant annuli separated by invariant circles. For certain parameter values, we explicitly construct invariant circles both of rational and irrational rotation numbers, and present numerical experiments of the dynamics on the annuli bounded by these circles. | \section{Introduction}
Piecewise affine maps and piecewise isometries have received a lot of attention, either as simple,
computationally accessible models for complicated dynamical behaviour, or as a class of systems
with their own unique range of dynamical phenomena. For a list of examples, see
\cite{Bullett1986,Wojtkowski1980,Wojtkowski1981,Wojtkowski2008,Przytycki1983,
Devaney1984,Aharonov1997,Gutkin1995,Goetz2000,Adler2001,Lagarias2005a,Ashwin2005,Beardon,Reeve-Black2013,Ostrovski2010} and references therein.
In this paper, we investigate a family of piecewise affine homeomorphisms of the unit square,
motivated by a particular class of dynamical systems
modeling learning behaviour in game theory, the so-called Fictitious Play dynamics
(see \cite{Sparrow2007,VanStrien2010,VanStrien2011}).
It was shown in \cite{VanStrien2010} that this dynamics can be represented by a flow on $S^3$ with a topological disk
as a global first return section, whose first return map is \emph{continuous, piecewise affine, area preserving
and fixes the boundary of the disk pointwise}. For an investigation of the itinerary
structure and some numerical simulations of such first return maps, see \cite{Ostrovski2010}.
The maps considered in this paper are chosen to satisfy these properties,
and the nine-piece construction considered here
seems to be the simplest possible (nontrivial) example satisfying all of them.
Its qualitative behaviour resembles that seen in
\cite{Ostrovski2010} for the first return maps of Fictitious Play; in particular, the ways in which
stochastic and (quasi-)periodic behaviour coexist seem to be of similar type, giving rise to similar
phenomena.
In Sections \ref{sec:construction} and \ref{sec:properties}
we present a geometric construction of our family of maps
and describe its basic formal properties. Then, interested in the long-term behaviour of iterates
of the maps, our next goal is to establish the existence of certain invariant regions.
For that, in Section \ref{sec:inv_circles}
we develop some technical results about periodic orbits and invariant curves, and in Section \ref{sec:spec_vals}
prove their existence for certain parameter values.
Finally, in Section \ref{sec:gen} we discuss the dynamics for more general parameter values,
present numerical observations and discuss open questions.
\section{Construction of the map}\label{sec:construction}
Let us denote the unit square by $\rect = [0,1]\times [0,1]$.
We construct a one-parameter family of continuous, piecewise affine maps
$F_\theta \colon \rect \to \rect$, $\theta \in (0,\frac{\pi}{4})$,
as follows (see Fig.\ref{fig:construction} for an illustration).
Denote the four vertices of $\rect$ by $E_1 = (0,0)$, $E_2 = (1,0)$, $E_3 = (1,1)$ and $E_4 = (0,1)$.
In the following we will use indices $i\in\irange = \{1,2,3,4\}$ with cyclic order, i.e., with the understanding
that index $i+1$ is $1$ for $i=4$ and index $i-1$ is $4$ for $i=1$.
Let $\theta \in (0,\frac{\pi}{4})$, and for $i \in\irange$ let $L_i$ be the ray through $E_i$,
such that the angle between the segment $\overline{E_i E_{i+1}}$ and $L_i$
is $\theta$. Let $P_i \in \inter(\rect)$ be the point $L_{i-1} \cap L_i$; then
the $P_i$, $i \in\irange$, form a smaller square inside $\rect$.
Now we divide $\rect$ into the following nine regions (see Fig.\ref{fig:construction}(left)):
\begin{itemize}
\item four triangles $\mca_i = \Delta(E_i,E_{i+1},P_i)$, $i\in\irange$, each adjacent to one of the
sides of $\rect$;
\item four triangles $\mcb_i = \Delta(E_i,P_i,P_{i-1})$, $i\in\irange$, each sharing one side with
$\mca_{i-1}$ and $\mca_i$;
\item a square $\mcc = \square(P_1,P_2,P_3,P_4)$, each side of which is adjacent to one of the $\mcb_i$.
\end{itemize}
Now, we repeat the same construction 'in reverse orientation', to obtain a second, very similar partition of $\rect$,
as shown in Fig.\ref{fig:construction}(right). Here we denote the vertex of the inner square which has the same y-coordinate
as $P_1$ by $P'_1$, and the other vertices of the inner square by $P'_2$, $P'_3$, $P'_4$, in counterclockwise order.
For $i\in\irange$ we denote the triangles $\mca'_i = \Delta(E_i, E_{i+1}, P'_i)$, $\mcb'_i = \Delta(E_i, P'_i, P'_{i-1})$
and the square $\mcc' = \square(P'_1,P'_2,P'_3,P'_4)$.
Finally, the map $F = F_\theta \colon \rect \to \rect$ is uniquely defined by the data
\begin{itemize}
\item $F(E_i) = E_i$, $i\in\irange$;
\item $F(P_i) = P'_i$, $i\in\irange$;
\item $F$ affine on each of the pieces $\mca_i$, $\mcb_i$, and $\mcc$.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{construction_total}
\caption{Construction of the map $F = F_\theta$: $F(E_i) = E_i$, $F(P_i) = P'_i$;
$F$ is affine on each of $\mca_i$, $\mcb_i$ and $\mcc$,
such that $F(\mca_i) = \mca'_i$, $F(\mcb_i) = \mcb'_i$ and $F(\mcc) = \mcc'$.}
\label{fig:construction}
\end{figure}
\section{Properties of $F$}\label{sec:properties}
It is easy to see that $F$ is a piecewise affine
homeomorphism of $\rect$, with $F(\mca_i) = \mca'_i$ and $F(\mcb_i) = \mcb'_i$ for each $i\in\irange$,
and $F(\mcc) = \mcc'$.
Moreover, $F$ is area and orientation preserving,
since $d F$ is constant on each of the pieces, with $\det dF = 1$ everywhere.
Note that while $F$ is continuous, its derivative $dF$ has discontinuity lines along the boundaries
of the pieces; we call these the \emph{break lines}.
Note also that $F\vert_{\partial \rect} = \id$.
We denote $P_1 = (s,t)$, where $t \in (0,\frac{1}{2})$ and $s \in (0,\frac{1}{2})$.
The coordinates of the other points $P_i$ and $P'_i$ are then given by symmetry. Simple geometry gives that
$s$, $t$ and $\theta$ satisfy
\begin{equation}\label{eq:relations}
t - t^2 = s^2, \quad t = \sin^2 \theta, \quad s = \sin \theta \cos \theta.
\end{equation}
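For example, for $\theta = \frac{\pi}{6}$ these relations give $t = \sin^2\frac{\pi}{6} = \frac{1}{4}$ and $s = \sin\frac{\pi}{6}\cos\frac{\pi}{6} = \frac{\sqrt{3}}{4}$, and indeed $t - t^2 = \frac{3}{16} = s^2$.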
The map $F$ is given by three types of affine maps $A$, $B$, and $C$:
\begin{itemize}
\item $A \colon \mca_1 \to \mca'_1$ is a shear fixing
$\overline{E_1 E_2} = [0,1]\times\{0\}$ and mapping $P_1$ to $P'_1$:
\begin{equation*}
A(x,y) = \begin{pmatrix} 1 & \frac{1-2s}{t} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.
\end{equation*}
It leaves invariant horizontal lines and moves points
in $\inter(\mca_1)$ to the right (since $(1-2s)/t>0$).
\item $B \colon \mcb_1 \to \mcb'_1$ is a linear scaling
map with a contracting and an expanding direction,
defined by $B(E_1) = E_1$, $B(P_1) = P'_1$ and $B(P_4) = P'_4$:
\begin{equation*}
B(x,y) = \frac{1}{t-s} \begin{pmatrix}t^2-(1-s)^2 & (1-2s)t \\ (2s-1)t & (2t-1)t\end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}.
\end{equation*}
It can be checked that the contracting direction of $B$
lies in the sector between $\overline{E_1 P_4}$ and $\overline{E_1 E_4}$,
and the expanding direction in the sector between $\overline{E_1 E_2}$ and $\overline{E_1 P_1}$; a direct check that $\det B = 1$ is given after this list.
From general theory we also have that $B$ preserves the quadratic form given by
\begin{equation}\label{eq:quad_form_B}
Q_B(x,y) = t (x^2 + y^2) - x y .
\end{equation}
\item $C \colon \mcc \to \mcc'$ is the rotation about
$Z = (\frac{1}{2},\frac{1}{2})$, mapping $P_i$ to $P'_i$, $i \in\irange$:
\begin{equation*}
C(x,y) = \begin{pmatrix}2s & 2t-1 \\ 1-2t & 2s\end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}
+ \begin{pmatrix} 1-t-s \\ t-s \end{pmatrix}.
\end{equation*}
The rotation angle is $\alpha = \frac{\pi}{2} - 2 \theta$, where $\theta$ is the parameter angle
in the construction of the map $F$.
\end{itemize}
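As a consistency check, each of the three matrices above has determinant $1$: this is immediate for $A$ and $C$, while for $B$ a direct expansion, which we only sketch, gives
\[ \det \begin{pmatrix}t^2-(1-s)^2 & (1-2s)t \\ (2s-1)t & (2t-1)t\end{pmatrix} = t(1-2s) = (t-s)^2 , \]
where the second equality uses $s^2 = t - t^2$ from (\ref{eq:relations}); hence $\det B = 1$, in line with the area preservation of $F$ noted in Section \ref{sec:properties}.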
All other pieces of $F$ are analogous, by symmetry of the construction.
To capture this high degree of symmetry, we make the following observations
which follow straight from the definition of $F$.
\begin{lemma} \label{lem:rot_sym}
Let $R$ denote the rotation about $Z$ by the angle $\frac{\pi}{2}$. Then $F$ and $R$ commute:
\begin{equation*}
F \circ R = R \circ F .
\end{equation*}
\end{lemma}
\begin{lemma}
Let $S(x,y) = (y,x)$. Then $F$ is \textit{$S$-reversible}, i.e., $S$ conjugates $F$ and $F^{-1}$:
\begin{equation*}
S \circ F \circ S = F^{-1}.
\end{equation*}
Further, $F$ is $T_1$- and $T_2$-reversible for the
reflections $T_1 (x,y) = (1-x,y)$ and $T_2(x,y) = (x,1-y)$.
\end{lemma}
Heuristically, $F$ acts similarly to a twist map: The iterates $F^n(X)$
of any point $X\in \inter(\rect)$ rotate counterclockwise
about $Z$ as $n \to \infty$. The 'rotation angle'
(the angle between $\overline{Z X}$ and $\overline{Z F(X)}$)
is not constant,
but it is bounded away from zero as long as $X$ is bounded away
from $\partial \rect$; in particular, every point whose orbit stays bounded away from the boundary
runs infinitely many times around the centre.
Note also that the rotation angle is
monotonically decreasing along any ray emanating from the centre $Z$.
However, it is not strictly
decreasing, as all points in $\mcc$ rotate by the same angle; this
sets the map $F$ apart from a classical twist map, for which strict monotonicity
(the \q{twist condition}) is usually required.
\section{Invariant Circles}\label{sec:inv_circles}
Clearly, the circle inscribed in the inner square $\mcc$ and all concentric circles
in it centred at $Z$ are invariant under $F$, which acts as a rotation on these circles.
When $\theta$ is a rational multiple
of $\pi$, the rotation $C$ is a rational (periodic) rotation, and a whole regular $n$-gon
inscribed in $\mcc$ is $F$-invariant, see Fig.\ref{fig:firstparam}.
We are interested in other invariant circles encircling $Z$,
as these form barriers to the motion of points under $F$ and provide a partitioning
of $\rect$ into $F$-invariant annuli.
Numerical simulations indicate that such curves exist for many parameter
values $\theta$ and create invariant annuli, on which the motion is
predominantly stochastic.
This section follows closely the arguments of Bullett \cite{Bullett1986},
where in a similar way, invariant circles are studied for a piecewise linear version of the standard map.
The idea is to study the orbits of the points where the invariant circles intersect break lines
and to prove that these follow a strict symmetry pattern, forming so-called cancellation orbits.
We consider invariant circles $\Gamma$ on which $F$
preserves the $S^1$-order of points.
This, for example, is the case if all rays from $Z$ intersect
the circle $\Gamma$ in precisely one point.
For an invariant circle $\Gamma$, we denote the rotation number
of $F \vert_\Gamma \colon \Gamma \to \Gamma$
by $\rho_\Gamma = \rho(F \vert_\Gamma)$.
By simple geometric considerations, we get the following lemma.
\begin{lemma}
Let $\Gamma_1, \Gamma_2$ be two invariant circles for $F$ encircling $Z$.
If $\Gamma_1$ is contained in the component of $\rect \setminus \Gamma_2$
containing $Z$, then $\rho_{\Gamma_1} \geq \rho_{\Gamma_2}$.
\end{lemma}
In other words, if there is a family of such nested invariant circles,
their rotation number is monotonically decreasing as the circles approach $\partial \rect$.
It also follows that the rotation number $\rho$ of any orbit is bounded above
by the rotation number on the centre piece $\mcc$, i.e.,
\[0 \leq \rho \leq \frac{\alpha}{2 \pi} = \frac{1}{4}-\frac{\theta}{\pi}.\]
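For instance, for $\theta = \frac{\pi}{6}$ this bound reads $\rho \leq \frac{1}{4} - \frac{1}{6} = \frac{1}{12}$, the rotation number attained on the invariant circles inside $\mcc$.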
We now consider $F$-invariant circles near the boundary $\partial \rect$,
which do not intersect the centre piece $\mcc$.
Any such curve $\Gamma$ intersects exactly two types of break line segments:
the segments $\mcb_i \cap \mca_i = \overline{E_i P_i}$
and $\mca_i \cap \mcb_{i+1} = \overline{E_{i+1} P_{i+1}}$, $i\in\irange$.
Let us call these intersection points $U_i = \Gamma \cap (\mcb_i \cap \mca_i)$
and $V_i = \Gamma \cap (\mca_i \cap \mcb_{i+1})$.
We say that an invariant curve which encircles $Z$ and on which $F$ preserves the $S^1$-order
is \emph{rotationally symmetric},
if it is invariant under the rotation $R$ (cf.~Lemma \ref{lem:rot_sym}): $R(\Gamma) = \Gamma$.
In the remainder of this section we will use the fact that any invariant circle $\Gamma$
with rational rotation number is of one of the following two types \cite{Katok1983}:
\begin{itemize}
\item pointwise periodic, that is, $F \vert_\Gamma$ is conjugate to a rotation of the circle;
\item non-periodic, that is, $F \vert_\Gamma$ is not conjugate to a rotation; however, in this case,
$F \vert_\Gamma$ still has at least one periodic orbit.
\end{itemize}
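A simple example of the second type is a circle homeomorphism with a single fixed point and no other periodic points: its rotation number is $0$ and it has a periodic orbit, but it is not conjugate to a rotation.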
Following the ideas in \cite{Bullett1986}, we now prove a number of results illustrating
the importance of the orbits of $U_i$ and $V_i$ for the invariant circle containing them.
\begin{lemma}\label{lem:F_per_case}
Let $\Gamma$ be a rotationally symmetric invariant circle disjoint from $\mcc$. Assume that $F\vert_\Gamma$
has rational rotation number $\rho_\Gamma = \frac{p}{q} \in \Q$, and that $F\vert_\Gamma$ is periodic.
Then for $i\in\irange$ the orbit $\orb(U_i) = \{F^n(U_i)\colon n \in \Z \}$ contains some $V_j$ ,
$j \in\irange$, and vice versa.
\end{lemma}
\begin{proof}
By symmetry it is sufficient to show the result for $U_1$.
Suppose for a contradiction that $V_j \notin \orb(U_1)$ for all $j\in\irange$.
Let $\mathcal{O} = (\bigcup_i \orb(U_i)) \cup (\bigcup_j \orb(V_j))$, which by periodicity of $F\vert_\Gamma$ is finite.
Let $X,Y \in \mathcal{O}$ be the two points closest to $U_1$ on either side along $\Gamma$.
Consider the line segment $\overline{X Y}$ which crosses the break line going through $U_1$.
Then $F (\overline{X Y})$ is a \q{bent} line, consisting of two straight line segments,
so that $F$ maps the triangle $\Delta(X,U_1,Y)$ to a quadrangle.
Now, by assumption and periodicity of $F$, there exists $k > 1$ such that $F^k(U_1) = U_j$ for some $j \in \irange$ and
$F^l(U_1) \notin \{U_i: i \in \irange \} \cup \{V_j: j \in \irange\} $ for $1 \leq l < k$.
This implies that the triangle $\Delta(F(X),F(U_1),F(Y))$ is
mapped by $F^{k-1}$ to the triangle $\Delta(F^k(X),F^k(U_1),F^k(Y))$ without
bending any of its sides (since $X$ and $Y$ are the points in $\mathcal{O}$ closest to $U_1$).
By symmetry we have that $\Delta(F^k(X),F^k(U_1),F^k(Y)) = \tilde R (\Delta(X,U_1,Y))$,
where $\tilde R$ is the rotation about $Z$ by one of the angles $0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}$, and hence
\begin{equation*}
\area(\Delta(F^k(X),F^k(U_1),F^k(Y))) = \area(\Delta(X,U_1,Y)).
\end{equation*}
But $F$ is area preserving, so
\begin{equation*}
\area(\Delta(F^k(X),F^k(U_1),F^k(Y))) = \area(\Delta(F(X),F(U_1),F(Y))),
\end{equation*}
which is a contradiction because $F$ maps $\Delta(X,U_1,Y)$
to a quadrangle which either properly contains or is properly contained in $\Delta(F(X),F(U_1),F(Y))$.
\end{proof}
With a slightly bigger effort, we can extend the result to the case of non-periodic $F\vert_\Gamma$.
\begin{lemma}\label{lem:F_nonper_case}
Let $\Gamma$ be a rotationally symmetric invariant circle disjoint from $\mcc$.
Assume that $F\vert_\Gamma$
has rational rotation number $\rho_\Gamma = \frac{p}{q} \in \Q$, and that $F\vert_\Gamma$ is not periodic.
Then for $i\in\irange$, the orbit of $U_i$ contains some $V_j$, $j \in\irange$, and vice versa.
\end{lemma}
\begin{proof}
As in the previous lemma, we give a proof for $U_1$,
the other cases following by symmetry. We distinguish two cases:
\textbf{Case 1: $U_1$ non-periodic for $F$.}\newline
Let us write $z_k = F^k(U_1)$ for $k \in \Z$.
Since $\rho_\Gamma = \frac{p}{q} \in \Q$, there exist points $Q$ and $Q'$ in $\Gamma$,
each periodic of period $q$, such that
$z_{nq} \to Q$ and $z_{-nq} \to Q'$ as $n \to \infty$.
Note that $F^q$ is affine in a sufficiently small neighbourhood on either side of $Q$.
Then for sufficiently large $N$, the points $z_{nq}$, $n > N$, lie on the straight line segment
$\overline{z_{Nq} Q}$ (the contracting direction at $Q$).
Hence for large $n$, $\Gamma$ contains the straight line segment $\overline{z_{nq} Q}$.
Analogously, for large $n$, the straight line segment $\overline{Q' z_{-nq}}$ is contained in $\Gamma$.
In particular, $\ell = \overline{z_{-(m+2)q} z_{-mq}}$ is contained in $\Gamma$ for large $m$.
But $U_1 \in F^{(m+1)q}(\ell)$, so $F^{nq}(\ell)$ has a kink for large $n$ unless $z_{Nq} = V_j$ for some $N$ and $j$
(note that since $U_1$ is non-periodic, $U_i \notin \orb(U_1)$, $i \in \irange$).
Since $F^{nq}(\ell)$ is near $Q$ for large $n$, it has to be straight,
and it follows that $V_j \in \orb(U_1)$ for some $j$.
\textbf{Case 2: $U_1$ periodic for $F$.}\newline
In this case $F^q(U_1) = U_1$ and the argument is similar to the proof of Lemma \ref{lem:F_per_case}.
Assume for a contradiction that $V_j \notin \orb(U_1)$ for all $j$.
By symmetry, this implies $V_j \notin \bigcup_i \orb(U_i)$.
Pick $X,Y \in \bigcup_i \orb(U_i)$ nearest to $V_1$ from each side
and denote by $S$ the segment of $\Gamma$ between $X$ and $Y$.
Since the straight line segment $\overline{X Y}$ crosses the break line which contains $V_1$,
its image $F(\overline{X Y})$ has a kink.
Therefore, since $F$ is area preserving, the area between $F(S)$ and $\overline{F(X) F(Y)}$ is either
greater or less than the area between $S$ and $\overline{X Y}$. For $0 \leq k < q$,
the area between $F^k(S)$ and $\overline{F^k(X) F^k(Y)}$ is equal to the area between
$F^{k+1}(S)$ and $\overline{F^{k+1}(X) F^{k+1}(Y)}$, unless $F^k(S)$ contains one of the $V_j$.
Whenever $V_j \in F^k(S)$ for some $j$ and $k$
(which can happen at most four times for $0 \leq k < q$),
this area decreases or increases. By symmetry, these up to four changes have the same sign,
so the area either always decreases or always increases.
Note on the other hand that $F^q(S) = S$, $F^q(X) = X$ and
$F^q(Y) = Y$ (since $X$ and $Y$ are in the $q$-periodic orbit of $U_1$).
So after $q$ iterations the region between $F^q(S)$ and $\overline{F^q(X) F^q(Y)}$ is again
the region $\Omega$ between $S$ and $\overline{X Y}$, even though its area must have strictly
increased or decreased at least once (namely for $k = 0$, since $V_1 \in S$).
This contradiction finishes the proof.
\end{proof}
Combining the above lemmas, we get the following result.
\begin{proposition}\label{prop:rot_num}
Let $\Gamma$ be a rotationally symmetric invariant circle disjoint from $\mcc$
with rotation number $\rho_\Gamma = \frac{p}{q} \in \Q$.
Then for every $i\in\irange$, the $F$-orbit of $U_i$ contains some $V_j$,
$j \in\irange$, and vice versa.
Moreover, every such orbit contains an equal number
$n$ of the $U_i$ and $V_j$, which are traversed in alternating order.
If $n \geq 2$, then any such orbit is periodic.
\end{proposition}
\begin{remark} \label{rem:canc_orbits}
In \cite{Bullett1986}, Bullett coined the term \q{cancellation orbits}
for these orbits of break points on an invariant circle,
reflecting the insight that each \q{kink} introduced by the discontinuity of $d F$
at one such point needs to be \q{cancelled out} by an appropriate \q{reverse kink} at another
$d F$-discontinuity point, if the invariant circle has rational rotation number.
However, cancellation orbits can also occur on invariant circles with $\rho \notin \Q$.
In that case these orbits are not periodic,
and each cancellation orbit would only contain
one of the $U_i$ and one of the $V_j$. We will see
examples for this in Section \ref{sec:spec_vals}.
\end{remark}
We now show that cancellation orbits determine the behaviour of the whole map $F\vert_\Gamma$.
\begin{proposition}
Let $\Gamma$ be a rotationally symmetric invariant circle disjoint from $\mcc$.
Then $F\vert_\Gamma$ is periodic if and only if the $U_i$- and $V_i$-orbits are periodic.
\end{proposition}
\begin{proof}
Of course, if the rotation number $\rho_\Gamma$ is irrational,
neither $F\vert_\Gamma$ nor the break point orbits
can be periodic, so we only need to consider rational rotation number.
Further, if $F\vert_\Gamma$ is periodic, so are all cancellation orbits.
For the converse, suppose for a contradiction that $U_i$ and $V_i$, $i \in \irange$, are periodic,
but $F\vert_\Gamma$ is not.
We repeat an argument already familiar from the proof of Lemma \ref{lem:F_nonper_case}.
Pick any non-periodic point $P \in \Gamma$; then there exists $Q \in \Gamma$
such that $F^{nq}(P)\to Q$ as $n \to \infty$, where $\frac{p}{q} = \rho_\Gamma$ is the (rational) rotation number.
Note that $F^q$ is affine in a sufficiently small neighbourhood on either side of $Q$.
Then for $n > N$ sufficiently large, the points $F^{nq}(P)$
lie on a straight line segment from $F^{N q}(P)$ to $Q$
(the contracting direction at $Q$). So $\Gamma$ contains a straight
line segment $S$ which expands under $F^{-q}$.
This expansion cannot continue indefinitely, so $F^{-mq}(S)$
must meet some $U_i$ (or $V_i$) for some $m$.
But then this $U_i$ (or $V_i$) cannot be periodic, which contradicts the assumption.
\end{proof}
\section{Special parameter values}\label{sec:spec_vals}
In this section, we will show that for a certain countable
subset of parameter values $\theta \in (0,\frac{\pi}{4})$,
the map $F = F_\theta$ has invariant circles of the form described in the previous section.
\begin{theorem}\label{thm:spec_params}
There exists a sequence of parameter angles $\theta_3, \theta_4, \ldots \in (0,\frac{\pi}{4})$,
$\theta_K \to \frac{\pi}{4}$ as $K \to \infty$, such
that for each $K$, $F_{\theta_K}$ has a countable collection
of invariant circles $\{\Gamma_K^N : N \geq 0 \}$, each of rational
rotation number $\rho(\Gamma_K^N) = 1/(4(K+N))$.
The curves $\Gamma_K^N$ consist of straight line segments,
are rotationally symmetric and converge to the boundary
$\partial \rect$ as $N \to \infty$.
\end{theorem}
\begin{proof}
We will prove the result by explicitly finding periodic orbits
in $\bigcup_i (\mca_i \cup \mcb_i)$, which
hit the break lines whenever passing from one of the pieces to another.
More precisely, we will show that for $K \geq 3$ and $N \geq 0$,
there is a parameter value $\theta = \theta_K$ and a point $X_N \in \overline{E_1 P_4}$,
such that $F^n(X_N) \in \mcb_1$ for $0 \leq n < K$, $F^K(X_N) \in \overline{E_1 P_1}$,
$F^n(X_N) \in \mca_1$ for $K \leq n < K+N$, and $F^{N+K}(X_N) = R(X_N) \in \overline{E_2 P_1}$,
where $R$ is the counterclockwise rotation
by $\frac{\pi}{2}$ about the centre of the square, see Fig.\ref{fig:per_orbit}.
By symmetry this clearly gives a periodic cancellation orbit,
and an invariant circle is then given by
\[\Gamma_K^N = \bigcup_{i=0}^{4(K+N)-1} \overline{F^i(X_N) F^{i+1}(X_N)}.\]
\begin{figure}
\centering
\includegraphics[width=\textwidth]{per_orbit}
\caption{Construction of (a part of) a periodic
cancellation orbit in Theorem \ref{thm:spec_params} for $K = 3$, $N = 2$.
The points $X_N \in \overline{E_1 P_4}$ and $Y_N \in \overline{E_1 P_1}$
are chosen such that $F^K(X_N) = Y_N$ (Lemma \ref{lem:B_int_it})
and $F^N(Y_N) = R(X_N) \in \overline{E_2 P_1}$ (Lemma \ref{lem:A_int_it}).
The dots are the $F$-iterates of $X_N$, the dashed lines indicate
the line segments making up (part of) the invariant circle $\Gamma_K^N$.}
\label{fig:per_orbit}
\end{figure}
We use the following two lemmas, whose proofs we leave to the appendix.
\begin{lemma}\label{lem:B_int_it}
For every $K \in \N$, $K \geq 3$, there exists a parameter
$\theta_K \in (0,\frac{\pi}{4})$, such that
for $F = F_{\theta_K}$, $F^k(\overline{E_1 P_4}) \subset \mcb_1$ for $0 \leq k < K$,
and $F^K(\overline{E_1 P_4}) = \overline{E_1 P_1} = \mcb_1 \cap \mca_1$.
For $K \to \infty$, the angle $\theta_K$ tends to $\frac{\pi}{4}$.
\end{lemma}
\begin{lemma}\label{lem:A_int_it}
For every $\theta \in (0,\frac{\pi}{4})$ and $N \geq 0$ there exists a point
$Y_N \in \overline{E_1 P_1} = \mcb_1 \cap \mca_1$ such that
$F^n(Y_N) \in \mca_1$ for $0 \leq n < N$ and
$F^N(Y_N) \in \overline{E_2 P_1} = \mca_1 \cap \mcb_2$.
For $N \to \infty$, the points $Y_N$ converge to $E_1$.
\end{lemma}
By Lemma \ref{lem:B_int_it}, for every $K \geq 3$ we can find
$\theta_K$, such that $F^K$ maps $\overline{E_1 P_4}$
to $\overline{E_1 P_1}$ (in $\mcb_1$).
Then by Lemma \ref{lem:A_int_it}, for any $N \geq 0$ there exists
$X_N \in \overline{E_1 P_4}$ such that
$F^K(X_N) = Y_N \in \overline{E_1 P_1}$ and $F^{K+N}(X_N) \in \overline{E_2 P_1}$,
and it can be seen that each open line segment
$(F^k(X_N), F^{k+1}(X_N))$, $k = 0,\ldots,K+N-1$,
lies in the interior of either $\mcb_1$ ($0 \leq k < K$)
or $\mca_1$ ($K \leq k < K+N$), see Fig.\ref{fig:per_orbit}.
Further, $B$ preserves the quadratic form $Q_B(x,y) = t (x^2 + y^2) - x y$.
With $X_N = (x_1,y_1) \in \overline{E_1 P_4}$
and $F^K(X_N) = (x_2, y_2) \in \overline{E_1 P_1}$ as above,
a simple calculation shows that $Q_B(X_N) = Q_B(F^K(X_N))$ implies that $x_1 = y_2$.
Then, with $F^{K+N}(X_N) = (x_3,y_3) \in \overline{E_2 P_1}$, one has $y_3 = y_2$, as
$A$ preserves the y-coordinate.
We have that $R(\overline{E_1 P_4}) = \overline{E_2 P_1}$, and since $x_1 = y_3$,
it follows that $F^{K+N}(X_N) = R(X_N)$.
Rotational symmetry (Lemma \ref{lem:rot_sym}) then implies
that $X_N$ is a periodic point of period $4(K+N)$ for $F$,
and the line segments connecting the successive $F$-iterates of $X_N$
form a rotationally symmetric invariant circle for $F$.
Hence we obtain, for each $K \geq 3$, a parameter
$\theta_K$ such that $F_{\theta_K}$ has a sequence
of rotationally symmetric invariant circles $\Gamma_K^N$,
$N \geq 0$, each consisting of straight line segments.
Moreover, Lemma \ref{lem:A_int_it} implies that
$\Gamma_K^N \to \partial \rect$ (in the Hausdorff metric)
and $\rho_{\Gamma_K^N} = 1/(4(K+N)) \to 0$ as $N \to \infty$.
\end{proof}
In the proof of the theorem, for a sequence
of special parameter values $\theta_K$
we constructed periodic orbits hitting the
break lines and invariant circles made up
of line segments connecting the points of the periodic orbits.
A closer look at the behaviour of $F$
\emph{between} any two consecutive invariant
circles in this construction reveals that
in fact the dynamics of $F$ in these regions
is very simple for these parameter values.
To see this, let $\theta = \theta_K$, $K \geq 3$, $N \geq 0$,
and take $X \in \Gamma_K^N \cap \overline{E_1 P_1}$,
$Y \in \Gamma_K^{N+1} \cap \overline{E_1 P_1}$.
Then $X$ and $Y$ lie on periodic cancellation orbits of
periods $4(K+N)$ and $4(K+N+1)$ on the respective invariant circles.
Let $\mathcal{R}$ be the quadrangle with vertices $X, Y, F(Y), F(X)$.
Then $F^{K+N}(X) = R(X)$ and $F^{K+N+1}(Y) = R(Y)$
lie in $\overline{E_2 P_2} = R(\overline{E_1 P_1})$,
and it is easy to see that $F^{K+N}$ maps the triangle
$D_1 = \Delta(X,F(Y),F(X))$ affinely to the triangle
$R(\Delta(X,Y,F(X)))$ and $F^{K+N+1}$ maps $D_2 = \Delta(X,Y,F(Y))$
affinely to $R(\Delta(Y,F(Y),F(X)))$.
This gives a piecewise affine map
$\Phi \colon D_1 \cup D_2 = \mathcal{R} \to R(\mathcal{R})$
of the simple form shown in Fig.\ref{fig:returnmap}.
By symmetry, the first return map of $F$ to the quadrangle $\mathcal{R}$ is then the fourth
iterate of such a map, and is easily seen to preserve the y-coordinate.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{returnmap}
\caption{The form of the map $\Phi \colon \mathcal{R} \to R(\mathcal{R})$, where $\mathcal{R}$
is the quadrangle formed by two consecutive cancellation orbit points on
each of two adjacent invariant circles $\Gamma_K^N$, $\Gamma_K^{N+1}$. The first return map
of $F$ to $\mathcal{R}$ is (by symmetry) the fourth iterate of $\Phi$.}
\label{fig:returnmap}
\end{figure}
It follows immediately that in fact all points
in the annulus between $\Gamma_K^N$ and $\Gamma_K^{N+1}$
lie on invariant circles. These invariant circles are of the same form as the $\Gamma_K^N$,
that is, they consist of line segments parallel to those explicitly
constructed in the proof of Theorem \ref{thm:spec_params}.
The rotation number of the invariant circle
through the point $W \in \overline{E_1 P_1}$ changes continuously (in fact, linearly)
from $1/(4(K+N))$ to $1 / (4(K+N+1))$, as $W$ goes from $X$ to $Y$.
These invariant circles take on both rational and irrational rotation numbers,
and their intersections with the break lines
are not necessarily periodic, unlike those of the $\Gamma_K^N$,
but their orbits still form cancellation orbits,
since $F^K(\overline{E_1 P_4}) = \overline{E_1 P_1}$ (see Remark \ref{rem:canc_orbits}).
We get the following corollary.
\begin{corollary}\label{cor:inv_circ_foliation}
For $\theta = \theta_K$, $K \geq 3$, as in
Theorem \ref{thm:spec_params}, the annulus between $\partial \rect$
and $\Gamma_K^0$ (the invariant circle containing $P_i$ and $P_i'$, $i \in \irange$) is completely foliated
by rotationally symmetric invariant circles with rotation numbers
continuously and monotonically varying from $0$ on $\partial \rect$ to $1/(4K)$ on $\Gamma_K^0$.
\end{corollary}
\begin{remark}\label{rem:firstparam}
One can check that $K = 3$ in the proof of Theorem \ref{thm:spec_params} is obtained by
setting $\theta = \frac{\pi}{8}$, corresponding to $t = (2-\sqrt 2) /4$ and $s = \sqrt 2 /4$.
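Indeed, using the relations $t = \sin^2\theta$ and $s = \sqrt{t-t^2}$ from (\ref{eq:relations}) (cf.~the appendix), one checks directly that
\[
t = \sin^2\tfrac{\pi}{8} = \frac{1-\cos\frac{\pi}{4}}{2} = \frac{2-\sqrt{2}}{4},
\qquad
s = \sqrt{\frac{2-\sqrt{2}}{4}\cdot\frac{2+\sqrt{2}}{4}} = \sqrt{\frac{2}{16}} = \frac{\sqrt{2}}{4}.
\]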
This is the case when $C$ is the rotation by $\frac{\pi}{4}$ on $\mcc$.
For $K \geq 4$, exact values for $\theta$ are less easy to determine explicitly.
In this special case $\theta = \frac{\pi}{8}$, the map $F$ turns out to be of a
very simple form, allowing
a complete description of the dynamics on all of $\rect$.
By a similar argument applied to the region inside
$\Gamma_3^0$ (containing the rotational part $\mcc$), the statement of
Corollary \ref{cor:inv_circ_foliation} can then be strengthened,
stating that in this case
the whole space $\rect$ is foliated by invariant circles,
with rotation numbers varying continuously
and monotonically from $0$ on $\partial \rect$ to
$\frac{1}{8}$ on the invariant octagon $\mathcal{O}$ inscribed
in $\mcc$. The invariant circles between $\Gamma_3^0$ and
$\mathcal{O}$ each consist of twelve straight line segments,
parallel to the twelve segments of $\Gamma_3^0$,
see Fig.\ref{fig:firstparam}.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{firstparam}
\caption{The case $\theta = \theta_3 = \frac{\pi}{8}$ from Remark \ref{rem:firstparam}.
The solid piecewise linear circles are
the invariant octagon $\mathcal{O}$ inscribed in $\mcc$, as well as the
invariant circles $\Gamma_3^0$, $\Gamma_3^1$ and $\Gamma_3^{10}$. In the annulus
between $\mathcal{O}$ and $\Gamma_3^0$, the first 100 iterates of an orbit are depicted,
all lying on an invariant circle which consists of twelve straight line segments.}
\label{fig:firstparam}
\end{figure}
\section{General parameter values and discussion}\label{sec:gen}
For other parameter values than the ones considered in the previous section,
we generally cannot prove the existence of any invariant circles.
Let us briefly mention more general \q{higher order} cancellation orbits and
invariant circles.
Recall that in Theorem \ref{thm:spec_params}, we constructed a family of invariant circles
consisting of line segments connecting successive orbit points of periodic cancellation orbits.
The chosen cancellation orbits were of the simplest possible kind,
where a point on any break line is mapped by a certain number of iterations to the next possible break line.
It is possible to construct invariant circles from one or several
more complicated periodic cancellation orbits\footnote{Due to the rotational symmetry of the system,
a periodic cancellation orbit could contain one, two, or four pairs of break points.
In the first two cases, the union of all rotated
copies of the cancellation orbit would need to be considered,
to form an invariant circle by adding straight
line segments between successive points in this union.}
(for other values of $\theta$ than the ones in Theorem \ref{thm:spec_params}).
On such a periodic cancellation orbit,
the iterates of a point on a break line would pass across several break lines before
landing on one.
The resulting invariant circle would still consist of straight line segments, but
in such a way that a given segment and its image under $F$ are not adjacent on the circle.
Doing this \q{higher order} construction is conceptually not much more difficult,
but certainly more tedious than the \q{first order} construction of Theorem \ref{thm:spec_params}.
The effort to construct even just a single higher order periodic cancellation orbit
seems futile, unless a more general scheme to construct all or many of them at once can be found.
It is unclear whether invariant circles of this type exist for all $\theta$, and whether
for typical $\theta$ there exist piecewise line segment invariant circles
(of rational or irrational rotation number) containing non-periodic cancellation orbits,
as the ones seen in Corollary \ref{cor:inv_circ_foliation}.
\begin{question}
For which parameter values $\theta \in (0,\frac{\pi}{4})$ does $F_\theta$ have invariant
circles with periodic cancellation orbits (and, therefore, rational rotation number)?
For which $\theta$ are there piecewise line segment
invariant circles with non-periodic cancellation orbits?
\end{question}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{erg_reg2}
\caption{The first $5\cdot10^5$ iterates of three initial points for $\theta = \frac{\pi}{11}$.
Each of the orbits seems to be confined to an invariant annulus and to fill this annulus densely, except for a
number of elliptic islands.}
\label{fig:erg_reg}
\includegraphics[width=\textwidth,trim = 0 0 0 -30]{erg_reg2z}
\caption{Zoom-in of the first two orbits from Fig.\ref{fig:erg_reg},
showing that the invariant annuli contain
large numbers of smaller and smaller elliptic island chains.
Further zoom-in (and longer orbits) reveal increasingly
intricate patterns of such quasi-periodic elliptic regions.}
\label{fig:erg_reg_zoom}
\end{figure}
Moreover, we cannot at this point rule out the existence of
invariant circles (outside of $\mcc$) of an entirely different kind,
not consisting of straight line segments. Indeed, some numerical experiments seem to indicate
the occurrence of invariant regions with smooth boundaries, but it is unclear whether this is
due to the limited resolution (see, for example, the left picture in Fig.\ref{fig:erg_reg} and Fig.\ref{fig:erg_reg_zoom}).
By Proposition \ref{prop:rot_num}, the intersections of any invariant circle of rational rotation number
with the break lines form cancellation orbits. For irrational rotation number,
this need not be the case. Note, however, that under irrational rotation,
the \q{kink} introduced to an invariant circle at
its intersection with a break line propagates densely
to the entire circle, unless it is cancelled by eventually being mapped
to another break line intersection.
Hence, an invariant circle of irrational rotation number would either
contain a cancellation orbit for each pair of break lines,
or otherwise would be geometrically
complicated, namely nowhere differentiable.
\begin{question}
Are there invariant circles for $F$ (outside of $\mcc$) which are not comprised
of a finite number of straight line segments?
Are there invariant circles whose intersections
with the break lines do not form cancellation orbits?
\end{question}
As for the dynamics of $F$ between invariant circles,
we can only point to numerical
evidence that annuli between consecutive invariant circles
form ergodic components interspersed with
\q{elliptic islands}. An elliptic island consists of a periodic point of,
say, period $p$, surrounded by a family
of ellipses which are invariant under $F^p$, such that $F^p$
acts as an irrational rotation on each of these ellipses; this is referred to
as \q{quasi-periodic} behaviour.
In the case when $F^p$ acts as a rational rotation, these invariant curves
would instead take the shape of polygons, consisting entirely of periodic points.
The rest of the annuli seems to be filled with what is often
referred to as \q{stochastic sea}, that is, the dynamics seems to be ergodic
and typical orbits seem to fill these regions densely.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{4pi_5_10e5}
\caption{The first $10^5$ iterates of four initial points for $\theta = \frac{\pi}{20}$.
Each seems to be a dense orbit in an invariant annulus. The four annuli (excluding a number of elliptic islands and
the invariant periodic regular 20-gon inscribed in $\mcc$) seem to be the ergodic components of $F$.}
\label{fig:45pi}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{pi_5}
\caption{The first $10^5$ iterates of two initial points for $\theta = \frac{\pi}{5}$.
Each of the orbits seems to densely fill a thin invariant annulus. The rectangle seems
to be partitioned into finitely many such invariant annuli
(which get thinner and more numerous as $\theta \to 0$).}
\label{fig:pi_5}
\includegraphics[width=\textwidth,trim = 0 0 0 -30]{pi_5zoom}
\caption{Zoom-in of the orbits from Fig.\ref{fig:pi_5}. The thin invariant annuli
contain periodic island chains, which under even stronger magnification
could be seen to be surrounded by further, finer, islands of quasi-periodic motion.}
\label{fig:pi_5zoom}
\end{figure}
As in many similar systems (e.g., perturbations of the standard map),
numerical observations seem to indicate that these \q{ergodic regions}
have positive Lebesgue measure (see Figures \ref{fig:erg_reg}-\ref{fig:pi_5zoom}).
This is related to questions surrounding the famous
\q{quasi-ergodic hypothesis}, going back to Ehrenfest \cite{Ehrenfest} and Birkhoff \cite{Birkhoff1932},
conjecturing that typical Hamiltonian dynamical systems have dense orbits
on typical energy surfaces (see also \cite{Herman1998}).
For a piecewise linear version of the standard map, Bullett \cite{Bullett1986} established
a number of results on cancellation orbits and invariant circles of both rational
and irrational rotation numbers. Wojtkowski \cite{Wojtkowski1981,Wojtkowski2008}
showed that the map is almost hyperbolic and deduced that it has ergodic components of positive
Lebesgue measure. Almost hyperbolicity here means the almost everywhere
existence of invariant foliations of the space (or an invariant subset)
by transversal local contracting and expanding fibres. Equivalently,
this can be expressed through the existence of invariant contracting and expanding cone fields.
By classical theory (see, for example, \cite{Sinai}) almost hyperbolicity
implies certain mixing properties of the map, and,
in particular, the existence of an at most countable collection of ergodic components of positive
Lebesgue measure.
A similar kind of cone construction as in \cite{Wojtkowski1981,Wojtkowski2008}
seems to be more difficult for the map studied in this paper.
One important difference is that the piecewise linear standard map in these
papers is a twist map, which is not strictly the case for the map $F$ studied here (see \cite{LeCalvez2000}
and references therein for an overview over the numerous classical results for twist maps,
mostly based on Birkhoff and Aubry-Mather theory).
The additional property that $F$ equals the identity on the boundary of the square also sets
it apart. In particular, the motion of points under $F$ close to the boundary
can be arbitrarily slow, that is, take arbitrarily many iterations to pass through the piece
$\mca_i$ (while the number of iterations for a passage through $\mcb_i$ remains bounded).
This seems to make it more difficult to explicitly construct an invariant contracting
or expanding cone field, as was done for the piecewise linear standard map.
Moreover, such an invariant cone field construction cannot be carried out
uniformly for all $\theta$.
In fact, as can be seen from Corollary \ref{cor:inv_circ_foliation}, there are parameter values
$\theta_K$, $K=3,4,\ldots$,
for which almost hyperbolicity cannot hold on large parts of $\rect$,
as the dynamics is completely integrable on the annulus between the invariant
circle $\Gamma_K^0$ and $\partial \rect$ (and even on all of $\rect$ for
$\theta = \theta_3 = \frac{\pi}{8}$, see Remark \ref{rem:firstparam}).
We are led to leave the following as a question.
\begin{question}
Are there parameter values $\theta$ for which the map $F_\theta$ is almost hyperbolic
on some invariant subset of $\rect$? How large is the set of parameters $\theta$ for which this is the case?
\end{question}
While it does not seem likely that almost hyperbolicity can be shown for almost all $\theta$,
numerical evidence suggests that for typical $\theta$, the map $F_\theta$ has a finite number
of ergodic components of positive Lebesgue measure, in which typical points have dense orbits.
\begin{conjecture}
For Lebesgue almost all $\theta \in (0,\frac{\pi}{4})$, there is a finite number of $F$-invariant sets
$A_1,\ldots,A_m$, each of positive Lebesgue measure, such that $F\vert_{A_i} \colon A_i \to A_i$ is
ergodic for every $i=1,\ldots,m$. Each $A_i$ is a topological annulus
with a certain number of elliptic islands removed from it,
and, together with the elliptic islands
and the invariant disk inscribed in $\mcc$, the $A_i$ form a partition of $\rect$.
\end{conjecture}
Numerical experiments also seem to indicate that for many parameter values,
the way in which chaotic and (quasi-)periodic behaviour coexist,
that is, the structure of invariant annuli
containing families of quasi-periodic elliptic islands, can be quite rich,
see Fig.\ref{fig:erg_reg} and Fig.\ref{fig:erg_reg_zoom}.
Besides the total measure of such quasi-periodic elliptic islands,
it would be also interesting to know whether a general scheme
for their itineraries, periods and rotation numbers can be found.
\section*{Acknowledgements}
I would like to thank my PhD supervisor, Sebastian van Strien, for the numerous helpful discussions
while I was doing the work presented in this paper. I am also grateful to Shaun Bullett for
discussing with me his early work on the piecewise linear standard map and
making me aware of some of the references.
\section*{Appendix. Proofs of Lemmas \ref{lem:B_int_it} and \ref{lem:A_int_it}}
\begin{proof}[Proof of Lemma \ref{lem:B_int_it}]
First, recall that the map $B\colon \mcb_1 \to \mcb_1 \cup \mca_1 \cup \mcc$ leaves invariant
the quadratic form (\ref{eq:quad_form_B}). Then, using $P_4 = (t, 1-s)$ and $P_1 = (s, t)$, we
calculate
\[
Q_B(P_4) - Q_B(P_1) =
\left(t^2 + (1-s)^2 - \frac{t(1-s)}{t}\right) - \left(s^2 + t^2 - \frac{st}{t}\right) = 0,
\]
and hence $Q_B(P_4) = Q_B(P_1) =: c$.
So the level set $\{x \in \mcb_1 : Q_B(x) = c \}$ is a segment of a hyperbola connecting $P_4$ and $P_1$.
Therefore, with $V = \{ x \in \mcb_1 : Q_B(x) \leq c \}$, we get $F(V) = B(V) \subset (V \cup \mca_1)$.
Further, note that $B$ maps rays through $E_1$ to other rays through $E_1$.
This implies that all points on the straight line segment $\overline{E_1 P_4} = \mca_4 \cap \mcb_1$
remain in the piece $\mcb_1$ for equally many
iterations of the map $F$, before being mapped into $\mca_1$.
In particular, if for $X \in \overline{E_1 P_4}$ we have that
\begin{equation}\label{eq:B_int_it}
F^k (X) \in
\begin{cases} \mcb_1 & \text{if } 0 \leq k < K,\\
\overline{E_1 P_1} & \text{if } k = K ,
\end{cases}
\end{equation}
then the same holds for every other point $X' \in \overline{E_1 P_4}$, and $F^K(\overline{E_1 P_4}) = \overline{E_1 P_1}$.
We will now show that there exists a sequence of parameter values $\theta_K$, $K \geq 3$,
such that for $F= F_{\theta_K}$, $F^k(P_4) = B^k(P_4) \in \mcb_1$ for $0 \leq k \leq K$
and $F^K(P_4) = P_1$.
For this, we need a few elementary facts about the map $B$,
which follow from straightforward (but rather tedious) calculations:
\begin{itemize}
\item Let $f(t) = \sqrt{1-4t^2}$. Then the hyperbolic map $B$ has two eigendirections
\[
v_1 = \begin{pmatrix} 1 + f(t) \\ 2t \end{pmatrix},\quad v_2 = \begin{pmatrix} 2t \\ 1 + f(t) \end{pmatrix}
\]
with corresponding eigenvalues
\begin{equation}\label{eq:eig_vals}
\lambda_1 = \lambda = \frac{4t^2 - 2t + (2s - 1) (1+f(t))}{2(t-s)} > 1, \quad
\lambda_2 = \lambda^{-1} < 1.
\end{equation}
\item By a linear change of coordinates $\Phi \colon \R^2 \to \R^2$
mapping $v_1$ and $v_2$ to $(1, 0)$ and $(0,1)$, respectively, one gets a conjugate linear map
\[
\tilde B = \Phi \circ B \circ \Phi^{-1}
= \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix}.
\]
Setting $\Phi ( P_4 ) = \Phi (t, 1-s) =: (x_1, y_1)$ and $\Phi ( P_1 ) = \Phi (s, t) =: (x_2, y_2)$,
a somewhat tedious calculation gives
\begin{equation}\label{eq:frac_pts}
\frac{x_2}{x_1} = \frac{2ts + t(f(t)-1)}{2t^2 + (1-s)(f(t)-1)}.
\end{equation}
\end{itemize}
Now, since $\tilde B^K = \Phi \circ B^K \circ \Phi^{-1}$,
it follows that $B^K(P_4) = P_1$ is equivalent to $\tilde B^K \Phi ( P_4 ) = \Phi (P_1)$.
By the simple form of $\tilde B^K$, this is equivalent to $x_2 = \lambda^K x_1$, that is,
\begin{equation}\label{eq:K}
K = \frac{\log(x_2 / x_1)}{\log(\lambda)}.
\end{equation}
Substituting (\ref{eq:eig_vals}) and (\ref{eq:frac_pts}) into (\ref{eq:K}), and
using $s = \sqrt{t - t^2}$ from (\ref{eq:relations}), we get an expression $K = K(t)$ for $0 < t < \frac{1}{2}$.
Then $K$ is differentiable and strictly monotonically increasing as a function of $t$,
and (by application of L'H\^{o}pital's rule)
\[
K \to \begin{cases} 2 & \text{ as } t \to 0, \\
\infty & \text{ as } t \to \frac{1}{2}.\end{cases}
\]
Since $\sin^2 \theta = t$ with $t \in (0, \frac{1}{2})$, $\theta \in (0,\frac{\pi}{4})$, we get that
for each $K \geq 3$ there exists $\theta_K \in (0,\frac{\pi}{4})$ such that $B^K(P_4) = P_1$,
hence $B^K(\overline{E_1 P_4}) = \overline{E_1 P_1}$, as claimed.
Finally, since $K(t) \to \infty$ as $t \to \frac{1}{2}$ and $K$ is monotone in $t$, the corresponding
parameters satisfy $t_K = \sin^2 \theta_K \to \frac{1}{2}$, that is, $\theta_K \to \frac{\pi}{4}$ as $K \to \infty$.
\end{proof}
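As a quick sanity check of (\ref{eq:eig_vals}), (\ref{eq:frac_pts}) and (\ref{eq:K}), the following small numerical sketch (not part of the argument above; the bisection bracket is an ad hoc choice covering roughly $3 \leq K \leq 50$) evaluates $K(t)$ with $s = \sqrt{t-t^2}$ and recovers $\theta_3 = \frac{\pi}{8}$ of Remark \ref{rem:firstparam}:
\begin{verbatim}
import math

def K_of_t(t):
    # K(t) = log(x_2/x_1) / log(lambda), cf. (eq:eig_vals), (eq:frac_pts), (eq:K)
    s = math.sqrt(t - t * t)              # s = sqrt(t - t^2)
    f = math.sqrt(1.0 - 4.0 * t * t)      # f(t) = sqrt(1 - 4 t^2)
    lam = (4*t*t - 2*t + (2*s - 1)*(1 + f)) / (2*(t - s))    # eigenvalue lambda > 1
    ratio = (2*t*s + t*(f - 1)) / (2*t*t + (1 - s)*(f - 1))  # x_2 / x_1
    return math.log(ratio) / math.log(lam)

def theta_K(K):
    # K(t) increases from 2 (t -> 0) to infinity (t -> 1/2); bisect for K(t) = K.
    lo, hi = 0.02, 0.48
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if K_of_t(mid) < K:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return math.asin(math.sqrt(t))        # theta with sin^2(theta) = t

print(theta_K(3), math.pi / 8)            # both approximately 0.3926990817
\end{verbatim}
In the same way one obtains numerical approximations of $\theta_K$ for larger $K$.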
\begin{proof}[Proof of Lemma \ref{lem:A_int_it}]
First, the shear map $A \colon \mca_1 \to \mca_1 \cup \mcb_2$ is such that any point
$(x,y) \in \mca_1$ is mapped to $F(x,y) = A(x,y) = (x + \tilde{c} y, y)$, $\tilde c > 0$.
By continuity, it follows that for any $N \geq 0$ there exists a point
$Y_N \in \overline{E_1 P_1} = \mcb_1 \cap \mca_1$ such that
$F^n(Y_N) \in \mca_1$ for $0 \leq n < N$ and $F^N(Y_N) \in \overline{E_2 P_1} = \mca_1 \cap \mcb_2$.
Clearly, $Y_0 = P_1 = (s, t)$, and one can calculate that
\[
Y_N = \frac{1}{1 + N(1-2s)} (s,t), \quad N \geq 0,
\]
and $Y_N \to (0,0) = E_1$ as $N \to \infty$.
\end{proof}
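A sketch of the calculation behind the formula for $Y_N$, under the normalization suggested by the formulas here (namely that $\rect$ is the unit square with $E_1 = (0,0)$ and $E_2 = (1,0)$, consistent with $Y_N \to (0,0) = E_1$ above): writing $Y_N = u_N (s,t)$ with $u_N \in (0,1]$, the shear preserves the y-coordinate, so
\[
F^N(Y_N) = \left(u_N (s + N \tilde{c}\, t),\; u_N t\right).
\]
Requiring this point to lie on $\overline{E_2 P_1} = \{(1 - u(1-s),\, u t) \colon u \in [0,1]\}$ forces $u = u_N$ and $u_N (1 + N \tilde{c}\, t) = 1$, i.e.\ $u_N = (1 + N \tilde{c}\, t)^{-1}$; the formula stated in the proof then corresponds to the value $\tilde{c} = (1-2s)/t$ of the shear constant.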
\bibliographystyle{abbrv}
| {
"timestamp": "2013-05-21T02:01:06",
"yymm": "1305",
"arxiv_id": "1305.4282",
"language": "en",
"url": "https://arxiv.org/abs/1305.4282",
"abstract": "We present a one-parameter family of continuous, piecewise affine, area preserving maps of the square, which are inspired by a dynamical system in game theory. Interested in the coexistence of stochastic and (quasi-)periodic behaviour, we investigate invariant annuli separated by invariant circles. For certain parameter values, we explicitly construct invariant circles both of rational and irrational rotation numbers, and present numerical experiments of the dynamics on the annuli bounded by these circles.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Dynamics of a Continuous Piecewise Affine Map of the Square",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540722737479,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046568439696
} |
https://arxiv.org/abs/0812.1883 | An introduction to exotic 4-manifolds | This article intends to provide an introduction to the construction of small exotic 4-manifolds. Some of the necessary background is covered. An exposition is given of J. Park's construction in arXiv:math.GT/0311395 of an exotic CP^2#7(-CP^2). This article does not intend to present any new results. It was originally a Master's thesis, and its aim is merely to provide a leisurely introduction to exotic 4-manifolds that might be of use to interested graduate students. | \section{Introduction}
A manifold $X$ is \emph{exotic} if it is homeomorphic to another manifold $Y$, but is not diffeomorphic to it. Usually, $Y$ is a well-known manifold and then we say ``$X$ is an exotic $Y$''. \newline
A natural question to ask is ``How easy it is to find an exotic manifold?''. Or, more specifically, ``For a fixed manifold $X$, how many exotic $X$'s are there?'' To say there is an exotic $X$ is the same as saying that $X$ has more than one smooth structure.\newline
Suppose $X$ is a closed, topological $n$-manifold. If $n \leq 3$, then $X$ has a unique smooth structure. If $n \geq 5$, then $X$ has at most finitely many smooth structures. This much is known. However, if $n=4$, then (in every example known so far) $X$ has either infinitely many smooth structures or none at all. \newline
In other words, currently there are no closed smooth 4-manifolds known to have only finitely-many smooth structures. \newline
Perhaps the ``simplest'' smooth 4-manifolds, $S^4$, $S^2 \times S^2$ and $\mathbb{CP}^{2}$, will provide us with an example of a 4-manifold with a unique smooth structure, but proving or disproving that this is the case has turned out to be quite difficult. \newline
One way mathematicians have tackled this problem in recent years is to try and find exotic $\mathbb{CP}^{2} \# n \overline{\mathbb{CP}^{2}}$s for as small an $n \in \mathbb{N}$ as possible. Exotic manifolds for the cases $n=9$ and $n=8$ were discovered by S. Donaldson in \cite{Do} and D. Kotschick in \cite{Ko3}, respectively, in the late 1980s. Then, for over 15 years, the next case of $n=7$ lay unsolved. \newline
In 2004, J. Park in \cite{P1} found a symplectic manifold that was homeomorphic, but not diffeomorphic, to $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$, and since then infinite families of exotic $\mathbb{CP}^{2} \# n \overline{\mathbb{CP}^{2}}$s have been found for as low as $n=3$. \newline
\pagebreak
It is the purpose of this article to give an exposition of J. Park's construction. The principal sources are J. Park's paper \cite{P1}, and the papers of R. Fintushel and R. Stern \cite{FS1}, \cite{FS2} and \cite{FS3}. In order for the construction to be followed, some of the necessary background material is covered, most of which can be found in \cite{GS}. It is assumed the reader is familiar with algebraic topology (see \cite{H1}) and knot theory (see \cite{Ro}). Kirby Calculus (see \cite{GS}) is used in section \ref{rbdsec}. \newline
I would like to thank my supervisor, David Gay, for his continued patience and encouragement over the last two years. \newline
I would also like to thank Andr\'as Stipsicz for his excellent lectures and his patience in answering my many questions. \newline
\subsection*{A Note About This Article}
This article does not intend to present any new results, but merely provide some background and give an introduction to exotic 4-manifolds. It is an updated version of a Master's thesis that was submitted to the University of Cape Town in August 2008. It was suggested that it might be of use to other graduate students starting out in the field and so, after some encouragement, it was posted on the arXiv. \newline
Comments\footnote{I am not sure why, but downloading the postscript file and then turning it into a pdf gives diagrams of a slightly better quality than in the downloadable pdf.}, suggestions and corrections are welcomed and can be emailed to deanab17@yahoo.com. Any errors $-$ typographical, grammatical or mathematical $-$ are entirely my own. \newline
Dean Bodenham \linebreak
December 2008\newline
\pagebreak
\section{A Quick Review of Manifolds}
The definitions of a manifold, a differentiable structure, a homeomorphism and a diffeomorphism can be found in \cite{GP}, \cite{GS} and other books, but we review them here. For other definitions such as embeddings, isotopies, orientations, tangent spaces, the reader is referred to \cite{GP}. A useful introduction to differential geometry is \cite{I1}. For knot theory, see \cite{Ro}. For algebraic topology, see \cite{H1}. For fibre bundles, see \cite{St}. For characteristic classes, see \cite{MiSt} or \cite{GS}. We mainly follow the definitions presented in \cite{GS}.
\newtheorem{defh}{Definition}[section]
\begin{defh}
\upshape
A \emph{homeomorphism} is a bijective map $\phi:X \longrightarrow Y$ between two topological spaces $X$ and $Y$ such that both $\phi$ and $\phi^{-1}$ are continuous.
\end{defh}
\newtheorem{defrp}[defh]{Definition}
\begin{defrp}
\upshape
We define $\mathbb{R}^{n}_{+}$ to be the upper half space of $\mathbb{R}^{n}$, i.e. $\mathbb{R}^{n}_{+} = \{(x_1,x_2,\dots, x_{n}) \in \mathbb{R}^{n}| \; x_{n} \geq 0 \}$.
\end{defrp}
\newtheorem{defm}[defh]{Definition}
\begin{defm}
\upshape
An \emph{$n$-dimensional topological manifold} is a separable Hausdorff topological space $X$, such that for every point $p \in X$ there is an open neighbourhood $U$ of $p$ that is homeomorphic to an open subset of $\mathbb{R}^{n}_{+}$.
\end{defm}
\newtheorem{defmr1}[defh]{Remark}
\begin{defmr1}
\upshape
We usually abbreviate ``$n$-dimensional manifold'' simply to ``$n$-manifold''.
\end{defmr1}
\newtheorem{egm0}[defh]{Example}
\begin{egm0}
\upshape
$\mathbb{R}^{n}$ and $\mathbb{R}^{n}_{+}$ are trivial examples of $n$-manifolds. Note that they are not compact.
\end{egm0}
\newtheorem{defm2}[defh]{Definition}
\begin{defm2}
\upshape
Let $X$ be a topological $n$-manifold. \newline
A pair $(U, \phi)$, where $U$ is an open subset of $X$ and $\phi: U \longrightarrow \mathbb{R}^{n}_{+}$ is a homeomorphism of $U$ onto an open subset of $\mathbb{R}^{n}_{+}$, is called a \emph{chart}. \newline
A collection of charts $\{ (U_{\alpha}, \phi_{\alpha}) | \; \alpha \in A \}$ is called an \emph{atlas} if it is a cover of $X$, i.e. $\cup_{\alpha \in A} U_{\alpha} = X$. \newline
The map $\phi_{\beta} \circ \phi_{\alpha}^{-1}$ from the open subset $\phi_{\alpha}(U_{\alpha} \cap U_{\beta}) \subset \mathbb{R}^{n}_{+}$ to the open subset $\phi_{\beta}(U_{\alpha} \cap U_{\beta}) \subset \mathbb{R}^{n}_{+}$ is called the \emph{transition function} between the charts $(U_{\alpha}, \phi_{\alpha})$ and $(U_{\beta}, \phi_{\beta})$. \newline
A topological manifold $X$ with an atlas $\{(U_{\alpha}, \phi_{\alpha})| \; \alpha \in A \}$ is called a \emph{$C^{r}$-manifold} $(r = 1,2, \dots \infty)$ if the transition functions are $C^{r}$-maps. In the case $r=\infty$, $X$ is called a \emph{smooth manifold}.
\end{defm2}
\newtheorem{defm2r}[defh]{Remarks}
\begin{defm2r}
\upshape
\begin{itemize}
\item[(i)] Often a chart is called a \emph{coordinate chart}.
\item[(ii)] If $X$ is a $C^{r}$-manifold ($r > 0$), $X$ is often simply called a \emph{differentiable manifold}.
\item[(iii)] An atlas on a manifold $X$ with $C^{r}$ transition functions ($r > 0$) is called a \emph{differentiable structure on $X$}.
\end{itemize}
\end{defm2r}
\newtheorem{defm3}[defh]{Definition}
\begin{defm3}
\upshape
Let $X$ be a topological $n$-manifold. The points of $X$ corresponding to the points in $\{(x_1, x_2, \dots, x_{n}) \in \mathbb{R}^{n}_{+} | \; x_{n}=0 \} \cong \mathbb{R}^{n-1}$ form an $(n-1)$-dimensional submanifold of $X$, denoted $\partial X$ and called the \emph{boundary of $X$}.
\end{defm3}
\newtheorem{defm3r}[defh]{Remark}
\begin{defm3r}
\upshape
If we required the homeomorphisms $\phi_{\alpha}$ of the charts $(U_{\alpha}, \phi_{\alpha})$ to be maps into $\mathbb{R}^{n}$ as opposed to $\mathbb{R}^{n}_{+}$, $X$ would have empty boundary, i.e. $\partial X = \emptyset$.
\end{defm3r}
\newtheorem{defm33}[defh]{Definition}
\begin{defm33}
\upshape
We say that a manifold $X$ is \emph{closed} if it is compact and $\partial X = \emptyset$.
\end{defm33}
\newtheorem{egm6}[defh]{Example}
\begin{egm6}
\upshape
The $n$-dimensional sphere $S^n = \{ \mathbf{x} \in \mathbb{R}^{n+1} | \parallel \mathbf{x} \parallel = 1 \}$ is a closed $n$-manifold.
\end{egm6}
\newtheorem{egm7}[defh]{Example}
\begin{egm7}
\upshape
The $n$-dimensional disk $D^n = \{ \mathbf{x} \in \mathbb{R}^{n} | \parallel \mathbf{x} \parallel \leq 1 \}$ is a compact $n$-manifold with boundary $\partial D^{n} = S^{n-1}$.
\end{egm7}
\newtheorem{defm4}[defh]{Definition}
\begin{defm4}
\upshape
(from \cite{I1}) Let $X$ and $Y$ be two $C^{r}$-manifolds, and suppose $X$ is $n$-dimensional and $Y$ is $m$-dimensional. The \emph{local representative} of a map $f: X \longrightarrow Y$ with respect to the charts $(U, \phi)$ and $(V, \psi)$ on $X$ and $Y$, respectively, is the map
\begin{equation*}
\psi \circ f \circ \phi^{-1}: \phi(U) \subset \mathbb{R}^{n}_{+} \longrightarrow \mathbb{R}^{m}_{+}
\end{equation*}
A map $f: X \longrightarrow Y$ is a \emph{$C^{r}$-map} between two $C^{r}$-manifolds $X$ and $Y$ if the local representatives of $f$ are $C^{r}$ with respect to every chart of the atlases of $X$ and $Y$.
\end{defm4}
\newtheorem{defm5}[defh]{Definition}
\begin{defm5}
\upshape
Let $X$ and $Y$ be two $C^{r}$-manifolds. A homeomorphism $f: X \longrightarrow Y$ is called a \emph{$C^{r}$-diffeomorphism} if both $f$ and $f^{-1}$ are $C^{r}$-maps.
\end{defm5}
\newtheorem{defm5r}[defh]{Remark}
\begin{defm5r}
\upshape
In the case $r=\infty$ we usually call such a map a \emph{diffeomorphism}.
\end{defm5r}
\newtheorem{defm7}[defh]{Definition}
\begin{defm7}
\upshape
Let $W$ be an open subset of $\mathbb{C}$. A function $f: W \longrightarrow \mathbb{C}$ is called \emph{holomorphic} if it is complex-differentiable at every point in $W$.
\end{defm7}
\newtheorem{defm7r}[defh]{Remark}
\begin{defm7r}
\upshape
Recall from complex analysis that a holomorphic function is also \emph{analytic}, i.e. equals its Taylor series in a neighbourhood of each point of its domain (and is therefore also a smooth function).
\end{defm7r}
\newtheorem{defm8}[defh]{Definition}
\begin{defm8}
\upshape
An atlas $\{(U_{\alpha}, \phi_{\alpha}) | \; \alpha \in A \}$ on a (real) $2n$-dimensional manifold $X$ is called a \emph{complex structure} if each $\phi_{\alpha}$ is a homeomorphism between $U_{\alpha}$ and an open subset of $\mathbb{C}^{n}$ (identified with $\mathbb{R}^{2n}$), and the transition functions $\phi_{\beta} \circ \phi_{\alpha}^{-1}$ are holomorphic.
\end{defm8}
\newtheorem{defm8r}[defh]{Remark}
\begin{defm8r} \label{compcanor}
\upshape
Complex manifolds are canonically oriented. The following argument comes from \cite{GS}: Firstly, $\mathbb{C}$ is oriented as a real vector space by the ordered basis $(1,i)$. Secondly, the connected group $GL(n; \mathbb{C})$ lies in $GL^{+}(2n; \mathbb{R})$, and so by choosing a complex isomorphism with $\mathbb{C}^{n}$, any $n$-dimensional complex vector space is canonically oriented.
\end{defm8r}
The next theorem from \cite{Mu} (quoted from \cite{GS}) shows that every $C^{r}$-manifold essentially has a smooth structure (for $r > 0$).
\newtheorem{muthm}[defh]{Theorem}
\begin{muthm}
Suppose that $X$ is a $C^{r}$-manifold and $1 \leq r \leq s$ (including $s=\infty$). Then there is a $C^{s}$-atlas of $X$ for which the induced $C^{r}$-structure is isotopic to the original $C^{r}$-structure on $X$. Moreover, this $C^{s}$-structure is unique up to isotopy (through $C^{r}$-diffeomorphisms); consequently the $C^{r}$-manifold $X$ admits a unique induced $C^{s}$-structure for every $s \geq r$.
\end{muthm}
\newtheorem{muthmr}[defh]{Remark}
\begin{muthmr}
\upshape
Therefore, we only need to focus on classifying the topological manifolds and the smooth manifolds. While every smooth manifold is a topological manifold, we shall see later that there are some topological manifolds that do not admit any smooth structures.
\end{muthmr}
\pagebreak
\section{A Brief Account of Fibre Bundles}
What follows below is the ``provisional definition'' given in \cite{St}, which will be sufficient for our purposes.
\newtheorem{fbd}{Definition}[section]
\begin{fbd}
\upshape
A fibre bundle $\mathfrak{B} = (E,B,p,F)$ is a quadruple consisting of
\begin{itemize}
\item[(1)] a topological space $E$ called the \emph{bundle space}
\item[(2)] a topological space $B$ called the \emph{base space}
\item[(3)] a continuous map $p: E \longrightarrow B$ of $E$ onto $B$ called the \emph{projection map}
\item[(4)] a space $F$ called the \emph{fibre}
\end{itemize}
This quadruple must satisfy two conditions. For each $x \in B$,
\begin{itemize}
\item[(i)] the set $F_{x}$ defined by $F_{x} = p^{-1}(x)$, called the \emph{fibre over the point $x$}, must be homeomorphic to the fibre $F$
\item[(ii)] there is a neighbourhood $U$ of $x$ and a homeomorphism $\phi : U \times F \longrightarrow p^{-1}(U)$ such that for all $u \in U$ and all $f \in F$
\begin{equation*}
p \circ \phi (u,f) = u
\end{equation*}
\end{itemize}
\end{fbd}
\newtheorem{fbdr}[fbd]{Remarks}
\begin{fbdr}
\upshape
\begin{itemize}
\item[(i)] Often, we just call the bundle space the \emph{bundle}, the base space the \emph{base} and the projection map the \emph{projection}.
\item[(ii)] Condition (ii) above is called \emph{local triviality}, and is equivalent to saying that the following diagram commutes
\begin{displaymath}
\xymatrix{
U \times F \ar[dr]_{\pi_{1}} \ar[rr]^{\phi} & & p^{-1}(U) \ar[dl]^{p} \\
& U & }
\end{displaymath}
where $\pi_1$ is the projection of $U \times F$ onto the first factor $U$.
\item[(iii)] It should be noted that a fibre bundle is actually a quintuple $\mathfrak{B} = (E,B,p,F,G)$, where $G$ is a topological tranformation group, called the \emph{structure group}, satisfying certain conditions. However, we shall not really need this additional structure, and shall just consider a fibre bundle to be as described in the definition above.
\end{itemize}
\end{fbdr}
\newtheorem{fbdseq}[fbd]{Remark}
\begin{fbdseq} \label{fbrseqrem}
\upshape
In \cite{St} and \cite{H1} it is proved that a fibre bundle $\mathfrak{B} = (E,B,p,F)$ gives rise to the following long exact sequence of homotopy groups:
\begin{equation*}
\dots \longrightarrow \pi_{n}(F) \longrightarrow \pi_{n}(E) \longrightarrow \pi_{n}(B) \longrightarrow \pi_{n-1}(F) \longrightarrow \dots \longrightarrow \pi_{0}(E) \longrightarrow 0
\end{equation*}
\end{fbdseq}
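A standard example: applied to the Hopf fibration $S^{1} \hookrightarrow S^{3} \longrightarrow S^{2}$ (with fibre $S^{1}$), this sequence gives $\pi_{2}(S^{2}) \cong \pi_{1}(S^{1}) \cong \mathbb{Z}$, since $\pi_{2}(S^{3}) = 0 = \pi_{1}(S^{3})$, and $\pi_{n}(S^{2}) \cong \pi_{n}(S^{3})$ for all $n \geq 3$.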
\newtheorem{fbtr}[fbd]{Definition}
\begin{fbtr}
\upshape
We define a fibre bundle $\mathfrak{B} = (E,B,p,F)$ to be a trivial bundle if there is a homeomorphism $h: E \longrightarrow B \times F$ which commutes with the projection maps, as above.
\end{fbtr}
With this definition in mind, we have the following important theorem:
\newtheorem{fbtrthm}[fbd]{Theorem}
\begin{fbtrthm}
If the base space $B$ of a fibre bundle $\mathfrak{B} = (E,B,p,F)$ is contractible, then $\mathfrak{B}$ is a trivial bundle.
\end{fbtrthm}
There is a proof of this theorem in \cite{St}, but I prefer the proof in \cite{O1}.
\newtheorem{fbsecdef}[fbd]{Definition}
\begin{fbsecdef}
\upshape
Let $\mathfrak{B} = (E,B,p,F)$ be a fibre bundle. We call the (continuous) map $s:B \longrightarrow E$ a \emph{section} of the fibre bundle if for all $b \in B$ we have $p \circ s (b) = b$.
\end{fbsecdef}
\pagebreak
\section{The Intersection Form}
In this section, let $X$ be a compact, oriented, topological 4-manifold. We follow the notations and definitions presented in \cite{GS}. \newline
Since $X$ is oriented, it admits a fundamental class $[X] \in H_{4}(X, \partial X, \mathbb{Z})$. See \cite{H1} for details.
We now define the intersection form of a 4-manifold $X$.
\newtheorem{dif}{Definition}[section]
\begin{dif}
\upshape
The symmetric bilinear form
\begin{equation*}
Q_{X}: H^{2}(X, \partial X; \mathbb{Z}) \times H^{2}(X, \partial X; \mathbb{Z}) \longrightarrow \mathbb{Z}
\end{equation*}
is defined by $Q_{X}(a,b) = <a \cup b, [X]>$.
\end{dif}
\newtheorem{difr1}[dif]{Remark}
\begin{difr1}
\upshape
In the definition above $a,b \in H^{2}(X, \partial X; \mathbb{Z})$ and `$a \cup b$' denotes the cup product between $a$ and $b$, $[X] \in H_{4}(X, \partial X, \mathbb{Z})$ is the fundamental class of the manifold $X$, and $< . , . >: H^{4}(X, \partial X, \mathbb{Z}) \times H_{4}(X, \partial X, \mathbb{Z}) \longrightarrow \mathbb{Z} $ is the bilinear form where the cohomology class is evaluated on the homology class.
\end{difr1}
\newtheorem{difr2}[dif]{Remark}
\begin{difr2}
\upshape
\begin{itemize}
\item[(i)] We often denote $Q_{X}(a, b)$ by $a \cdot b$.
\item[(ii)] Since by Poincar\'e duality $H_2(X; \mathbb{Z}) \cong H^2(X, \partial X; \mathbb{Z})$, $Q_{X}$ is also defined on $H_2(X; \mathbb{Z}) \times H_2(X; \mathbb{Z})$.
\item[(iii)] By the definition of the intersection form, we have $Q_{\overline{X}} = - Q_{X}$, where $\overline{X}$ is the manifold $X$ with the opposite orientation. \label{difr25}
\end{itemize}
\end{difr2}
If $a$ or $b$ is a torsion element of $H_2(X; \mathbb{Z})$ then $Q_{X}(a,b)=0$. Therefore, we could consider intersection forms as just being defined on $H_2(X; \mathbb{Z})$/torsion, which is a finitely-generated free abelian group. \newline
By choosing a basis $\{b_1,b_2, \dots ,b_{n} \}$ of $H_2(X; \mathbb{Z})$/torsion, we can represent $Q_{X}$ by a matrix $M$. Since $M$ depends on our choice of basis, using another basis $\{\tilde{b_1},\tilde{b_2}, \dots , \tilde{b_{n}} \}$ could result in $Q_{X}$ being represented by another matrix $\tilde{M}$. However, if $B$ is the basis transformation matrix between the bases $\{b_{i} \}$ and $\{ \tilde{b_{i}} \}$, and $B^{t}$ is the transpose matrix of $B$, then $\tilde{M} = B M B^{t}$, and $B$ is such that $\mathrm{det}(B) = \pm 1$. Then
\begin{align*}
\mathrm{det} (\tilde{M}) &= \mathrm{det} (B) \mathrm{det} (M) \mathrm{det}(B^{t}) = \mathrm{det} (B) \mathrm{det}(B^{t}) \mathrm{det} (M)\\
&= \mathrm{det} (B) \mathrm{det}(B) \mathrm{det}(M) = (\mathrm{det}(B))^2 \mathrm{det} (M) = 1 \cdot \mathrm{det} (M)\\
&= \mathrm{det} (M)
\end{align*}
This shows that $\mathrm{det}(M)$ is independent of the basis we choose, and sometimes we denote this by $\mathrm{det}(Q_{X})$. We shall also usually identify the intersection form $Q_{X}$ with the matrix representing it.
\newtheorem{def2}[dif]{Definition}
\begin{def2}
\upshape
If $M$ and $\tilde{M}$ are two $n \times n$ matrices over $\mathbb{Z}$ and there is a matrix $B$ (also over $\mathbb{Z}$) such that
\begin{equation*}
\tilde{M} = B M B^{t}
\end{equation*}
then we say that $M$ and $\tilde{M}$ are equivalent.
\end{def2}
It is easy to check that this definition of \emph{equivalent} is an equivalence relation. \newline
It is natural to ask where the name \emph{intersection form} originates. If $X$ is a smooth manifold, then $Q_{X}(a,b)$ can be interpreted as the (signed) number of intersections of two submanifolds of $X$. In order to make this statement clear, we shall need a little background (which we take from \cite{GS}). \newline
In what follows, $X$ is a closed, oriented, smooth 4-manifold. Similar results hold for cases when $X$ has boundary, is non-compact or is non-orientable, but we shall not consider these cases here.
\newtheorem{def3}[dif]{Definition}
\begin{def3}
\upshape
Let $X^{n}$ be a smooth $n$-dimensional manifold. We say that a class $\alpha \in H_2(X^{n}; \mathbb{Z})$ is represented by a closed, oriented surface $\Sigma_{\alpha}$ if there is an embedding $i: \Sigma_{\alpha} \hookrightarrow X^{n}$ such that $i_{*}([\Sigma_{\alpha}]) = \alpha$, where $[\Sigma_{\alpha}] \in H_2(\Sigma_{\alpha}; \mathbb{Z})$ is the fundamental class of $\Sigma_{\alpha}$.
\end{def3}
With this definition in mind, we have the following proposition:
\newtheorem{pro1}[dif]{Proposition}
\begin{pro1}
Let $X$ be a closed, oriented, smooth 4-manifold. Then every element of $H_{2}(X; \mathbb{Z})$ can be represented by an embedded surface.
\end{pro1}
The proof in \cite{GS} uses results that are beyond the scope of this article.
\pagebreak
\newtheorem{pro1r1}[dif]{Remark}
\begin{pro1r1}
\upshape
If $X$ is simply connected, by the Hurewicz Theorem (see \cite{H1}) $\pi_2(X) \cong H_2(X; \mathbb{Z})$, which implies that every $\alpha \in H_2(X; \mathbb{Z})$ can be represented by an immersed sphere. Note the difference: the embedded \emph{surface} above need not have been a \emph{sphere}; for example, it could have been a torus. \newline
Further note that although this immersion is not an embedding in general, one can assume that an immersion $S^{2} \longrightarrow X^4$ intersects itself only in transverse double points (see \cite{GP}).
\end{pro1r1}
Again, suppose that $X$ is a closed, oriented, smooth 4-manifold. Let $a, b \in H^2(X; \mathbb{Z})$ and let their Poincar\'e duals be $\alpha = PD(a), \beta = PD(b)$, respectively. For the following, see \cite{GP}. \newline
Let $\Sigma_{\alpha}$ and $\Sigma_{\beta}$ be the surface representatives of $\alpha$ and $\beta$ (and therefore of $a$ and $b$), respectively, and suppose that the surfaces $\Sigma_{\alpha}$ and $\Sigma_{\beta}$ have been chosen generically, so that all their intersections are transverse. \newline
If $p \in \Sigma_{\alpha} \cap \Sigma_{\beta}$, the tangent spaces at the point $p$, denoted $T_{p}\Sigma_{\alpha}$ and $T_{p}\Sigma_{\beta}$, are complementary subspaces of $T_{p}X$ (since the surfaces intersect transversely at $p$). \newline
If we concatenate the basis $\{x_1, x_2 \}$ of $T_{p}\Sigma_{\alpha}$ and the basis $\{y_1, y_2 \}$ of $T_{p}\Sigma_{\beta}$, we get a basis $\{x_1, x_2, y_1, y_2 \}$ for $T_{p}X$. \newline
If this basis $\{x_1, x_2, y_1, y_2 \}$ is positive (i.e. defines a positive orientation on $X$), we define the sign of the intersection at $p$ to be positive, otherwise it is a negative intersection. \newline
Note that the sign will not depend on the order of $\{\alpha, \beta \}$, but will depend on the orientations of the embedded surfaces $\Sigma_{\alpha}$ and $\Sigma_{\beta}$. \newline
This leads to the geometric interpretation of $Q_{X}$ (for the proof, see \cite{GS}):
\newtheorem{pro2}[dif]{Proposition}
\begin{pro2}
Let $X$ be a closed, oriented, smooth 4-manifold. Let $a, b \in H^{2}(X; \mathbb{Z})$. Let $\alpha, \beta \in H_2(X;\mathbb{Z})$ be their Poincar\'e duals, respectively, and let $\Sigma_{\alpha}$ and $\Sigma_{\beta}$ be their surface representatives, respectively. Let $Q_{X}$ be the intersection form of $X$. \newline
Then $Q_{X}(a,b)$ is the number of points in $\Sigma_{\alpha} \cap \Sigma_{\beta}$, counted with sign.
\end{pro2}
\newtheorem{pro2r1}[dif]{Remark}
\begin{pro2r1}
\upshape
For example, if there are three intersection points in $\Sigma_{\alpha} \cap \Sigma_{\beta}$, 2 negative and 1 positive, then $Q_{X}(a,b) = -1$.
\end{pro2r1}
\newtheorem{pro2r2}[dif]{Remarks}
\begin{pro2r2} \label{comrem1}
\upshape
A complex structure on a manifold $X$ defines an orientation on $X$, and so any complex submanifolds of $X$ are canonically oriented (see Remark \ref{compcanor}). \newline
In particular, if $S$ is a complex surface, and $C_1$ and $C_2$ are complex curves in $S$ that intersect each other transversely, then $Q_{S}(C_1, C_2) \geq 0$. In other words, the transverse intersection of complex submanifolds is always positive. This important fact will be used later. \newline
However, it is worth noting that the \emph{self-intersection} $Q_{S}(C,C)$ of a complex curve $C$ can be negative, as we shall see later.
\end{pro2r2}
The intersection form $Q_{X}$ of a closed, oriented, topological 4-manifold $X$ is a symmetric, bilinear form on a finitely-generated free abelian group. However, there is another property that the matrices representing $Q_{X}$ have, one that is not immediately apparent.
\newtheorem{def4}[dif]{Definition}
\begin{def4}
A matrix $Q$ is called unimodular if $\mathrm{det}(Q) = \pm1$.
\end{def4}
This is equivalent to saying that the matrix $Q$ is invertible over $\mathbb{Z}$. We can now state the following proposition:
\newtheorem{pro3}[dif]{Proposition}
\begin{pro3}
The intersection form $Q_{X}$ of a closed, oriented, topological 4-manifold $X$ is unimodular.
\end{pro3}
The proof of this proposition can be found in both \cite{GS} and \cite{Sc}. \newline
Now, let us forget about the 4-manifolds for a while, and just look at properties of symmetric, bilinear, unimodular forms defined on a finitely-generated free abelian group.
\pagebreak
\section{Classification of Integral Forms}
Actually, the title of this section is a bit of an abbreviation. We shall just be considering symmetric, bilinear, unimodular forms. Again, we follow \cite{GS}. Another good reference is \cite{MH}. \newline
Let $Q: A \times A \longrightarrow \mathbb{Z}$ be a symmetric, bilinear, unimodular form defined on a finitely-generated free abelian group $A$. We define the following invariants of $Q$: \emph{rank}, \emph{signature} and \emph{parity}. \newline
The \emph{rank} of $Q$ is the rank of the free abelian group $A$ and is denoted $rk(Q)$. \newline
The \emph{signature} of $Q$ is defined as follows: consider $Q$ as an $n \times n$ matrix with entries in $\mathbb{Z}$, and diagonalize it over $\mathbb{R}$. Denote the number of positive eigenvalues on the diagonal by $b_{2}^{+}(Q)$ and the number of negative eigenvalues on the diagonal by $b_{2}^{-}(Q)$. We finally define the signature of $Q$ as $\sigma(Q) = b_{2}^{+}(Q) - b_{2}^{-}(Q)$.
\newtheorem{r21}{Remark}[section]
\begin{r21}
\upshape
We define $b_{2}(Q) = b_{2}^{+}(Q) + b_{2}^{-}(Q)$, and it is called the \emph{second Betti number}. Clearly, $b_{2}(Q) = rk(Q)$. Sometimes we shall write $b_{2}(Q)$ simply as $b_{2}$ when there is no chance of ambiguity.
\end{r21}
We define the \emph{parity} of $Q$ to be either \emph{odd} or \emph{even}. If $Q(a,a) \equiv 0$ (mod 2) for all $a \in A$, we say that $Q$ is even. Otherwise, we say that $Q$ is odd.
\newtheorem{r22}[r21]{Remark}
\begin{r22}
\upshape
Note that a single element $a' \in A$ with $Q(a', a') \equiv 1$ (mod 2) is enough to make $Q$ odd.
\end{r22}
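As an aside, these invariants are easy to compute by machine. The following short Python sketch (an illustration only; the helper name \texttt{form\_invariants} is ours, and NumPy is assumed to be available) diagonalizes a symmetric integer matrix over $\mathbb{R}$ and reads off its rank, signature and parity. It is applied here to the form represented by the matrix with rows $(0,1)$ and $(1,0)$, which will reappear below as the intersection form of $S^2 \times S^2$.
\begin{verbatim}
import numpy as np

def form_invariants(Q):
    """Rank, signature and parity of a symmetric integer matrix Q."""
    Q = np.asarray(Q, dtype=int)
    eigenvalues = np.linalg.eigvalsh(Q.astype(float))  # real, since Q is symmetric
    b2_plus = int(np.sum(eigenvalues > 0))
    b2_minus = int(np.sum(eigenvalues < 0))
    # Parity can be read off the diagonal entries (see the lemma on parity below).
    parity = "even" if all(d % 2 == 0 for d in np.diag(Q)) else "odd"
    return Q.shape[0], b2_plus - b2_minus, parity

print(form_invariants([[0, 1], [1, 0]]))   # (2, 0, 'even')
\end{verbatim}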
It is worthwhile checking that rank, signature and parity are indeed invariants of a symmetric, bilinear, unimodular form. When we say they are invariants, we mean that two matrices representing the same form should have the same rank, signature and parity. This is the same as saying that if $X$ and $Y$ are two equivalent matrices with entries in $\mathbb{Z}$, i.e. there is a basis transformation matrix $B$ (invertible over $\mathbb{Z}$) such that $X = B Y B^{t}$, then $rk(X) = rk(Y)$, $\sigma(X) = \sigma(Y)$ and the parity of $X$ is the same as the parity of $Y$. \newline
Suppose $X$ and $Y$ are equivalent matrices representing the same symmetric, bilinear, unimodular form $Q$. The fact that rank and signature are invariants is due to the following theorem from linear algebra (Theorem $6.z_{3}$ in \cite{He}):
\newtheorem{hers1}[r21]{Theorem}
\begin{hers1}
Given the real symmetric matrix $A$ there is an invertible matrix $T$ such that
\begin{displaymath}
TAT^{t} =
\left( \begin{array} {ccc}
I_{r} & & \\
& -I_{s} & \\
& & 0_{t}
\end{array} \right)
\end{displaymath}
where $I_{r}$ and $I_{s}$ are respectively the $r \times r$ and $s \times s$ unit matrices and where $0_{t}$ is the $t \times t$ $0$-matrix. The integers $r+s$, which is the rank of $A$, and $r-s$, which is the signature of $A$, characterize the congruence class of $A$. That is, two real symmetric matrices are congruent if and only if they have the same rank and signature.
\end{hers1}
\newtheorem{hersr1}[r21]{Remark}
\begin{hersr1}
\upshape
Herstein's notion of two matrices $A$ and $B$ being \emph{congruent} over $\mathbb{R}$ means there is a non-singular real matrix $T$ such that $B = TAT^{t}$, and the theorem above then proves that if $A$ and $B$ are congruent, they have the same rank and signature. If two matrices are \emph{equivalent} over $\mathbb{Z}$ (as defined in the previous section), then they are clearly \emph{congruent} over $\mathbb{R}$ (as Herstein defines it), and so two equivalent matrices have the same rank and signature. Note that since the matrices in this section are also assumed to be unimodular, and therefore invertible, we have $t = \mathrm{dim}(0_{t}) = 0$ above.
\end{hersr1}
Finally, a short lemma below proves that parity is also an invariant. We shall use the shorthand `$X \equiv_{2} Y$' to denote `$X \equiv Y$ (mod 2)'.
\newtheorem{parlem1}[r21]{Lemma}
\begin{parlem1} \label{L25}
Let $Q$ be a symmetric, bilinear, unimodular form on a finitely-generated free abelian group $A$. Then $Q$ is even if and only if $Q(\alpha_{i}, \alpha_{i}) \equiv_{2} 0$ for each $i = 1, 2, \dots, n$, where $\{ \alpha_{1}, \alpha_{2}, \dots, \alpha_{n} \}$ is a basis for $A$.
\end{parlem1}
Proof: \newline
($\Rightarrow$): By the definition of $Q$ being even. \newline
($\Leftarrow$): Let $A_{0} \subset A$ be the set of all elements $a \in A$ with $Q(a,a) \equiv_{2} 0$. If $a, b \in A_{0}$, then
\begin{align*}
Q(a+b, a+b) &= Q(a, a) + Q(a, b) + Q(b, a) + Q(b,b) \\
&= Q(a,a) + 2Q(a,b) + Q(b,b) \\
\Rightarrow Q(a+b, a+b) &\equiv_{2} 0
\end{align*}
since $Q(a,a) \equiv_2 0$, $Q(b,b) \equiv_2 0$ and clearly $2Q(a,b) \equiv_2 0$. A similar argument proves that $Q(a-b, a-b) \equiv_{2} 0$. Therefore, $a, b \in A_{0}$ implies that $a + b \in A_{0}$ and $a - b \in A_{0}$. \newline
Since the basis $\{ \alpha_{1}, \alpha_{2}, \dots, \alpha_{n} \}$ is contained in $A_{0}$, by applying this argument repeatedly we have for any $\lambda_{1}, \lambda_{2}, \dots, \lambda_{n} \in \mathbb{Z}$ that
\begin{equation*}
Q(\lambda_{1}\alpha_{1} + \lambda_{2}\alpha_{2} + \dots + \lambda_{n}\alpha_{n}, \lambda_{1}\alpha_{1} + \lambda_{2}\alpha_{2} + \dots + \lambda_{n}\alpha_{n}) \equiv_2 0
\end{equation*}
This implies that $A \subset A_{0}$, which implies $A = A_{0}$, which proves that $Q$ is even. $\Box$
\newtheorem{parlem2r1}[r21]{Remark}
\begin{parlem2r1}
\upshape
The lemma above proves that if $Q$ is even in one basis of $A$, then it is even in any other basis of $A$. Since two equivalent matrices are just the same symmetric, bilinear, unimodular form represented in two different bases, if one matrix is even then so is the other.
\end{parlem2r1}
We define a further notion known as the \emph{definiteness} of the intersection form as follows:
\begin{itemize}
\item[(i)] If $rk(Q) = \sigma(Q)$, $Q$ is called \emph{positive-definite}.
\item[(ii)] If $rk(Q) = -\sigma(Q)$, $Q$ is called \emph{negative-definite}.
\item[(iii)] Otherwise, $Q$ is called \emph{indefinite}.
\end{itemize}
If $Q$ is not indefinite, it could simply be called \emph{definite}. \newline
Due to a theorem of Serre (quoted from \cite{Sc}, but the original source is \cite{Se}), the three invariants rank, signature and parity are enough to classify all indefinite, symmetric, bilinear, unimodular forms.
\newtheorem{serre}[r21]{Theorem}
\begin{serre}
Let $Q_1$ and $Q_2$ be two indefinite, symmetric, bilinear, unimodular forms. If $Q_1$ and $Q_2$ have the same rank, signature and parity, then they are equivalent.
\end{serre}
This classification of indefinite forms will be very useful later. However, there is no `nice' classification of definite forms; in fact there are positive-definite forms that are not equivalent, even though they have the same rank, signature and parity (refer to page 14, \cite{GS}). \newline
Before we return to the world of 4-manifolds, there is one more definition.
\newtheorem{cd}[r21]{Definition}
\begin{cd}
An element $x \in A$ is called a characteristic element if $Q(x,\alpha) \equiv Q(\alpha,\alpha)$ (mod 2) for all $\alpha \in A$.
\end{cd}
This leads to an interesting result (for the proof, see \cite{GS}):
\newtheorem{l1220}[r21]{Lemma}
\begin{l1220} \label{L210}
If $x \in A$ is characteristic, then $Q(x,x) \equiv \sigma(Q)$ (mod 8). In particular, if $Q$ is even, then the signature $\sigma(Q)$ is divisible by 8.
\end{l1220}
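One can also test Lemma \ref{L210} numerically in small cases. In the Python sketch below (again only an illustration; the helper name \texttt{is\_characteristic} is ours), we check that $x = (1,1)$ is a characteristic element for the odd form $\mathrm{diag}(1,-1)$, whose signature is $0$, and that $Q(x,x) \equiv \sigma(Q)$ (mod 8) as the lemma predicts. By bilinearity, and since $\lambda^2 \equiv \lambda$ (mod 2), it suffices to test the defining congruence on a basis.
\begin{verbatim}
import numpy as np

def is_characteristic(Q, x):
    """Test Q(x, a_i) = Q(a_i, a_i) mod 2 on the standard basis (this suffices)."""
    Q, x = np.asarray(Q), np.asarray(x)
    return all((Q @ x)[i] % 2 == Q[i, i] % 2 for i in range(Q.shape[0]))

Q = np.array([[1, 0], [0, -1]])     # diag(1,-1): odd, indefinite, signature 0
x = np.array([1, 1])
print(is_characteristic(Q, x))      # True
print((x @ Q @ x) % 8)              # 0 = sigma(Q) mod 8, as the lemma predicts
\end{verbatim}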
\pagebreak
\section{The Classification of Topological 4-manifolds}
\label{class4msec}
Let $X$ be a simply-connected, closed, oriented 4-manifold.\newline
Recall that $\pi_1(X)=0$ if $X$ is simply-connected. Then since $H_1(X)$ is just the abelianization of $\pi_1(X)$, we also have $H_1(X; \mathbb{Z}) = 0$. We then have $H^1(X; \mathbb{Z}) \cong \mathrm{Hom}(H_1(X;\mathbb{Z}), \mathbb{Z}) = 0$ and, by Poincar\'e duality, we then have $H_{3}(X;\mathbb{Z}) = 0$ and $H^3(X;\mathbb{Z}) =0$. This also implies that $H_2(X;\mathbb{Z}) \cong H^{2}(X; \mathbb{Z})$ has no torsion. Therefore, the intersection form $Q_{X}$ contains all the homological information of $X$. Whitehead first showed that $Q_{X}$ classifies simply-connected, closed topological 4-manifolds up to homotopy:
\newtheorem{w1}{Theorem}[section]
\begin{w1}
The simply-connected, closed, topological 4-manifolds $X_1$ and $X_2$ are homotopy equivalent if and only if $Q_{X_1} \cong Q_{X_2}$.
\end{w1}
Then, in 1982 M. Freedman proved the following theorem in \cite{F} which shows that $Q_{X}$ actually classifies $X$ up to homeomorphism:
\newtheorem{f1}[w1]{Theorem}
\begin{f1}
For every symmetric, bilinear, unimodular form $Q$ there exists a simply-connected, closed, topological 4-manifold $X$ such that $Q_{X} \cong Q$. \newline
Furthermore, if $Q$ is even, then this manifold is unique up to homeomorphism. If $Q$ is odd, there are exactly two different homeomorphism types of manifolds with intersection form $Q$, and at most one of these homeomorphism types carries a smooth structure.
\end{f1}
If we restrict our attention to smooth manifolds, this leads to an important corollary:
\newtheorem{f2}[w1]{Corollary}
\begin{f2} \label{frcor1}
If $X_1$ and $X_2$ are smooth, simply-connected 4-manifolds with equivalent intersection forms, then $X_1$ and $X_2$ must be homeomorphic.
\end{f2}
A special case of Freedman's theorem is the topological 4-dimensional Poincar\'e Conjecture:
\newtheorem{f3}[w1]{Corollary}
\begin{f3}
If $X$ is a topological 4-manifold homotopy equivalent to $S^4$, then $X$ is homeomorphic to $S^4$.
\end{f3}
From now on, we shall usually write the invariants rank and signature of the intersection form $Q_{X}$ as $b_2(X) = b_2^{+}(X) + b_2^{-}(X)$ and $\sigma(X) = b_2^{+}(X) - b_2^{-}(X)$, respectively. \newline
Below is an interesting result that Rohlin proved in \cite{R2}:
\newtheorem{r2t1}[w1]{Theorem}
\begin{r2t1} \label{T35}
Let $X$ be a simply-connected, closed, oriented, smooth 4-manifold. If $Q_X$ is even, then the signature $\sigma(X)$ is divisible by 16.
\end{r2t1}
We shall now look at a few examples of 4-manifolds and their intersection forms.
\pagebreak
\section{Examples}
\newtheorem{eg1}{Example}[section]
\begin{eg1}
\upshape
The simplest example is $S^4 = \{\mathbf{x} \in \mathbb{R}^5 | \; \parallel \mathbf{x} \parallel = 1 \}$. Since $H_2(S^4; \mathbb{Z}) = 0$, the intersection form $Q_{S^{4}} = <.> $, where $<n>$ denotes the $1 \times 1$ matrix with the single entry $n \in \mathbb{Z}$, and $<.>$ denotes the ``empty'' intersection form (there are no homology classes to ``intersect'' each other in the case of $S^4$; note this is not standard notation).
\end{eg1}
\newtheorem{eg2}[eg1]{Example}
\begin{eg2}
\upshape
The next examples are the complex projective spaces. We define $\mathbb{CP}^n = \{\mathbf{z} \in \mathbb{C}^{n+1} | \; \mathbf{z} \neq \mathbf{0} \} / \sim$, where $\mathbf{0} = (0, 0, \dots, 0)$ and the relation $\sim$ is defined as: \newline
for all $\lambda \in \mathbb{C} \setminus \{0 \}$, $(\lambda z_0, \lambda z_1, \dots, \lambda z_{n}) \sim (z_0, z_1, \dots, z_{n})$. More compactly, denoting $\mathbb{C} \setminus \{0 \}$ by $\mathbb{C}^{*}$, for all $\lambda \in \mathbb{C}^{*}$, $\lambda \mathbf{z} \sim \mathbf{z}$.
\end{eg2}
\newtheorem{eg2r}[eg1]{Remark}
\begin{eg2r}
\upshape
Note that a point $P \in \mathbb{CP}^{n}$ is an equivalence class of points, so if $(z_0, z_1, \dots, z_{n}) \in P$, we usually denote $P$ by its \emph{homogeneous coordinates} $[z_0:z_1: \dots : z_{n}]$. For example, all the points $(0,\dots,0,1)$, $(0,\dots,0,2)$, etc. are in the equivalence class $[0:\dots:0:1]$. We call $\mathbb{CP}^{1}$ the \emph{complex projective line} and $\mathbb{CP}^{2}$ the \emph{complex projective plane}.
\end{eg2r}
One can similarly define the real projective spaces, and it is worthwhile looking at $\mathbb{RP}^1$ and $\mathbb{RP}^2$ in order to get a better idea of what these complex projective spaces actually are. \newline
We define $\mathbb{RP}^1 = \{(x,y) \in \mathbb{R}^2 | \; (x,y) \neq (0,0) \} / \sim$, where now $\sim$ is the relation that for all $\lambda \in \mathbb{R} \setminus \{0 \}$, $(\lambda x, \lambda y) \sim (x,y)$. Let us pick an element $(x,y) \in \mathbb{R}^2$ that lies on the unit circle. This element also defines a line through the origin in the direction $(x,y)$, and then $(\lambda x, \lambda y)$ for $\lambda \neq 0$ is just any other element on this line, except the origin, and we identify $(\lambda x, \lambda y)$ with $(x,y)$. \newline
So, we could picture $\mathbb{RP}^{1}$ as follows: we start with $\mathbb{R}^{2} \setminus \{(0,0)\}$, then quotienting out by the relation $\sim$ retracts $\mathbb{R}^{2} \setminus \{(0,0)\}$ onto the unit circle $S^1$, and then identifies antipodal points of the unit circle (since $(x,y) \sim (-x, -y)$). Using this approach for higher dimensions, we can consider $\mathbb{RP}^{n}$ as the unit sphere $S^{n}$ with antipodal points identified. \newline
\pagebreak
Although this `picture' of the real projective spaces doesn't quite extend to the complex case, since the scalars $\lambda$ are then complex, it can be used to show that $\mathbb{CP}^{n}$ is both compact and simply-connected, as the following lemma shows:
\newtheorem{cplem}[eg1]{Lemma}
\begin{cplem}
The complex projective spaces $\mathbb{CP}^{n}$ are compact and simply connected.
\end{cplem}
Proof: (from an exercise in \cite{GS}) \newline
Using the idea of the equivalent formulation of real projective spaces, we have the following equivalent definition of $\mathbb{CP}^{n}$:
\begin{equation*}
\mathbb{CP}^{n} = \{ \mathbf{x} \in S^{2n+1} \subset \mathbb{R}^{2n+2} \cong \mathbb{C}^{n+1} | \; \mathbf{x} \neq \mathbf{0} \} / \sim
\end{equation*}
where $\sim$ is defined as: for all $\lambda \in S^{1}$, $\lambda \mathbf{x} \sim \mathbf{x}$ (recall that the set $\{z \in \mathbb{C} | \; |z| = 1 \} \cong S^1$). \newline
This definition makes $S^{2n+1}$ into an $S^1$-fibration over $\mathbb{CP}^{n}$, and from Remark \ref{fbrseqrem} above we have the following long exact sequence of homotopy groups:
\begin{equation*}
\dots \rightarrow \pi_1(S^1) \rightarrow \pi_1(S^{2n+1}) \rightarrow \pi_1(\mathbb{CP}^{n}) \rightarrow \pi_0(S^1) \rightarrow \pi_0(S^{2n+1}) \rightarrow \dots
\end{equation*}
Since $S^{2n+1}$ is simply connected, $\pi_1(S^{2n+1})= 0$. Recall from \cite{H1} (page 346) that a path-connected space $X$ has $\pi_0(X) = 0$. Therefore, since $S^1$ and $S^{2n+1}$ are path-connected, we have $\pi_0(S^1) = 0$ and $\pi_0(S^{2n+1}) = 0$. So, a portion of our exact sequence becomes
\begin{equation*}
\dots \rightarrow \pi_1(S^1) \rightarrow 0 \rightarrow \pi_1(\mathbb{CP}^{n}) \rightarrow 0 \rightarrow 0 \rightarrow \dots
\end{equation*}
and so $\pi_1({\mathbb{CP}^{n}}) = 0$, and therefore $\mathbb{CP}^{n}$ is simply-connected. Since $S^{2n+1}$ is compact, and the projection map $p: S^{2n+1} \longrightarrow \mathbb{CP}^{n}$ is continuous and surjective, we have that $\mathbb{CP}^{n}$ is compact. $\Box$
\newtheorem{lemrem1}[eg1]{Remark}
\begin{lemrem1} \label{cpnhomref}
\upshape
The homology groups of $\mathbb{CP}^{n}$ follow an interesting pattern: $H_{i}(\mathbb{CP}^{n}; \mathbb{Z}) \cong \mathbb{Z}$ if $i=2d$, where $d=0,1,\dots,n$, and $H_{i}(\mathbb{CP}^{n}; \mathbb{Z}) = 0$ otherwise. For a proof, see \cite{H1} or \cite{GS} Example 4.2.4.
\end{lemrem1}
Let us now focus our attention on $\mathbb{CP}^{2}$, which is a 4-manifold that is important later, and let us calculate its intersection form (from an exercise in \cite{GS}). \newline
Let $h \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$ be the fundamental class of the submanifold $H = \{ [x:y:z] \in \mathbb{CP}^{2} | \; x = 0 \}$, and let $h' \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$ be the fundamental class of the submanifold $H' = \{ [x:y:z] \in \mathbb{CP}^{2} | \; y = 0 \}$. Clearly $H \cap H' = \{[0:0:1] \}$ is a transverse intersection. Since both submanifolds are complex, the intersection is also positive (see Remarks \ref{comrem1}). Therefore, $Q_{\mathbb{CP}^{2}}(h,h') =1$. \newline
Claim: $h$ cannot be the multiple of any other class in $H_2(\mathbb{CP}^{2}; \mathbb{Z})$, and so it generates $H_2(\mathbb{CP}^{2}; \mathbb{Z}) \cong \mathbb{Z}$.\newline
Proof of claim: Suppose $h$ is a multiple of a class $g$, i.e. $h = mg$ for $m \in \mathbb{Z}, |m| > 1$ (otherwise $g = \pm h$). Then
\begin{align*}
Q_{\mathbb{CP}^{2}}(h,h') &= Q_{\mathbb{CP}^{2}}(mg,h') \\
&= m Q_{\mathbb{CP}^{2}}(g,h') \\
&= mk
\end{align*}
where $k \in \mathbb{Z}$. Therefore $mk = 1$, which is clearly a contradiction (since $|m| > 1$). So $h$ is not the multiple of any other class in $H_2(\mathbb{CP}^{2}; \mathbb{Z})$. \newline
Then, since $H_2(\mathbb{CP}^{2}; \mathbb{Z}) \cong \mathbb{Z}$, $h$ must be a generator of $H_2(\mathbb{CP}^{2}; \mathbb{Z}) \cong \mathbb{Z}$. Furthermore, $H_2(\mathbb{CP}^{2}; \mathbb{Z}) \cong \mathbb{Z}$ implies $rk(Q_{\mathbb{CP}^{2}}) = 1$. Finally, since $H$ and $H'$ are both complex lines, they are isotopic, so $h = h'$ and hence $Q_{\mathbb{CP}^{2}}(h,h) = Q_{\mathbb{CP}^{2}}(h,h') = 1$; we must therefore have $Q_{\mathbb{CP}^{2}} = <1>$.
\newtheorem{eg22}[eg1]{Example}
\begin{eg22}
\upshape
We define $\overline{\mathbb{CP}^{2}}$ to be the manifold $\mathbb{CP}^{2}$ with the opposite orientation. Therefore, by Remark \ref{difr25} (iv), we have $Q_{\overline{\mathbb{CP}^{2}}} = -Q_{\mathbb{CP}^{2}} = <-1>$.
\end{eg22}
\newtheorem{eg3}[eg1]{Example}
\begin{eg3}
\upshape
Consider the manifold $\mathbb{CP}^1 \times \mathbb{CP}^1$. Looking at $\mathbb{CP}^1$ more closely, we see that it is actually a real 2-manifold. Moreover, by the lemma above, we know that $\mathbb{CP}^1$ is closed and simply-connected, so by the classification of compact 2-manifolds (see \cite{GP}), it must be homeomorphic to $S^2$. Therefore, $\mathbb{CP}^1 \times \mathbb{CP}^1$ is homeomorphic to $S^2 \times S^2$. \newline
Since $\pi_1(S^2 \times S^2) \cong \pi_1(S^2) \times \pi_1(S^2) \cong 0$, $S^2 \times S^2$ is a simply-connected, closed 4-manifold. Since $H_2(S^2 \times S^2; \mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z}$ (see \cite{H1}), we know the intersection form $Q_{S^2 \times S^2}$ has rank 2. If we choose as a basis for $H_2(S^2 \times S^2; \mathbb{Z})$ the homology elements $\alpha_1 = [S^2 \times \mathrm{pt}]$ and $\alpha_2 = [\mathrm{pt} \times S^2]$, we can see that $\alpha_1 \cdot \alpha_1 = 0$ and $\alpha_2 \cdot \alpha_2 = 0$, and $\alpha_1 \cdot \alpha_2 = \alpha_2 \cdot \alpha_1 = 1$. Therefore, the intersection form is
\begin{displaymath}
Q_{S^2 \times S^2} =
\left( \begin{array} {cc}
0 & 1 \\
1 & 0
\end{array} \right)
\end{displaymath}
We usually denote this particular matrix by $H$.
\end{eg3}
We recall the definition of the \emph{connected sum} of two $n$-manifolds $X_1$ and $X_2$ from \cite{GS}:
\newtheorem{consumdef}[eg1]{Definition}
\begin{consumdef}
\upshape
Let $X_1$ and $X_2$ be two smooth $n$-dimensional manifolds. Let $D_1 \subset X_1$ and $D_2 \subset X_2$ be two embedded $n$-disks, and let $\phi: D_1 \longrightarrow D_2$ be an orientation-reversing diffeomorphism. The \emph{connected sum} $X_1 \# X_2$ of $X_1$ and $X_2$ is defined to be the smooth manifold $(X_1 \setminus \mathrm{int}\,D_1) \cup_{\phi|_{\partial D_1}} (X_2 \setminus \mathrm{int}\,D_2)$.
\end{consumdef}
\newtheorem{consumdefr}[eg1]{Remarks}
\begin{consumdefr}
\upshape
Note that the connected sum operation is well-defined, in the sense that it does not depend on our choice of disks $D_1, D_2$ or on our choice of diffeomorphism $\phi$. We sometimes denote by $\#m X$ the connected sum of $m$ copies of the manifold $X$ (where $m \geq 0$, and if $m=0$ then $\#m X = S^{n}$). Note that $X \# S^n$ is simply $X$.
\end{consumdefr}
\newtheorem{consumdefe1}[eg1]{Remarks}
\begin{consumdefe1}
\upshape
Let $T^2$ denote the familiar genus-1 surface, the torus. Then $T^2 \# T^2$ is the surface of genus 2, and $\#m T^2$ is the surface of genus $m$.
\end{consumdefe1}
Now, there is a simple equation relating the intersection forms of $X_1$ and $X_2$ and their connect sum $X_1 \# X_2$:
\newtheorem{eg4}[eg1]{Lemma}
\begin{eg4}
\upshape
Let $X_1$ and $X_2$ be 4-manifolds with intersection forms $Q_{X_1}$ and $Q_{X_2}$, respectively. Then the connected sum $X_1 \# X_2$ has intersection form
\begin{equation}
Q_{X_1 \# X_2} = Q_{X_1} \oplus Q_{X_2}
\label{eg4eq}
\end{equation}
\end{eg4}
\newtheorem{eg4r}[eg1]{Remark}
\begin{eg4r}
\upshape
This is an important lemma that we shall use many times in later sections. Equation $\eqref{eg4eq}$ is proved using a Mayer-Vietoris sequence. See \cite{GS} for the proof.
\end{eg4r}
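As a quick illustration of equation \eqref{eg4eq}, combining it with the examples above gives
\begin{equation*}
Q_{\mathbb{CP}^{2} \# 2\overline{\mathbb{CP}^{2}}} = <1> \oplus <-1> \oplus <-1>,
\end{equation*}
an odd, indefinite form with $b_2^{+} = 1$, $b_2^{-} = 2$ and $\sigma = -1$.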
\newtheorem{eg5}[eg1]{Example}
\begin{eg5}
\upshape
Consider the topological 4-manifold (which exists by Freedman's theorem) with intersection form given by the matrix:
\begin{displaymath}
-E_8 =
\left( \begin{array} {cccccccc}
-2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & -2 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -2 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & -2 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & -2 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & -2 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & -2
\end{array} \right)
\end{displaymath}
If one were to denote the basis used to represent $-E_8$ as it appears above by $\{\alpha_1, \alpha_2, \dots, \alpha_8 \}$, the matrix shows us that $Q_{-E_8}(\alpha_{i}, \alpha_{i}) \equiv 0$ (mod 2) for $i = 1,2, \dots, 8$. By Lemma \ref{L25}, this shows us that $-E_8$ is even. By diagonalizing $-E_8$ over $\mathbb{R}$, one finds that $-E_8$ is negative-definite, since $\sigma(-E_8) = -8 = -rk(-E_8)$. Note that we should have expected that $\sigma(-E_8) \equiv 0$ (mod 8) by Lemma \ref{L210}. Finally, since $\sigma(-E_8)$ is not divisible by 16, by (Rohlin's) Theorem \ref{T35}, the 4-manifold with intersection form $-E_8$ cannot admit any smooth structures. \newline
So, we have our first concrete example of a topological 4-manifold that does not admit a smooth structure.
\end{eg5}
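The claims made about $-E_8$ above are also easy to confirm numerically. The following Python sketch (illustrative only; NumPy assumed) enters the matrix exactly as displayed and checks that it is unimodular, even and negative-definite.
\begin{verbatim}
import numpy as np

E8_minus = np.array([
    [-2, 1, 0, 0, 0, 0, 0, 0],
    [ 1,-2, 1, 0, 0, 0, 0, 0],
    [ 0, 1,-2, 1, 0, 0, 0, 0],
    [ 0, 0, 1,-2, 1, 0, 0, 0],
    [ 0, 0, 0, 1,-2, 1, 0, 1],
    [ 0, 0, 0, 0, 1,-2, 1, 0],
    [ 0, 0, 0, 0, 0, 1,-2, 0],
    [ 0, 0, 0, 0, 1, 0, 0,-2]], dtype=float)

print(int(round(np.linalg.det(E8_minus))))              # 1: the form is unimodular
print(all(int(d) % 2 == 0 for d in np.diag(E8_minus)))  # True: the form is even
print(np.all(np.linalg.eigvalsh(E8_minus) < 0))         # True: negative-definite, sigma = -8
\end{verbatim}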
\pagebreak
\section{Symplectic Structures, Almost Complex Structures, Complex Structures, and Characteristic Classes} \label{canclasssection}
In this section we recall the definitions of \emph{symplectic structures} and \emph{almost complex structures}. Almost everything presented in this section is from \cite{MS2}, which goes into far more detail. Another good reference, from where we borrow a few definitions, is \cite{LM}. The reason for this discussion is that we shall need the fact that every symplectic manifold has a \emph{canonical class} associated to it that is \emph{compatible with the symplectic structure}. Theorems presented later on in this article use the canonical class in order to distinguish between smooth structures on a manifold. \newline
The topic of characteristic classes is beyond the scope of this article, yet the canonical class is defined to be a ``certain'' Chern class. The reader can just take this definition at face value, and refer to \cite{MS2}, \cite{GS} or \cite{MiSt} for more on characteristic classes.
\newtheorem{d31}{Definition}[section]
\begin{d31}
\upshape
A \emph{symplectic vector space} is a pair $(V, \omega)$ consisting of a finite-dimensional real vector space $V$ and a bilinear form $\omega: V \times V \longrightarrow \mathbb{R}$ satisfying the following two conditions:
\begin{itemize}
\item[(1)] For all $v,w \in V$, $\omega(v,w) = -\omega(w,v)$.
\item[(2)] For every $v \in V$, if $\omega(v,w) = 0$ for all $w \in V$, then $v = 0$.
\end{itemize}
\end{d31}
\newtheorem{d31rr}[d31]{Remark}
\begin{d31rr}
\upshape
Condition $(1)$ is called \emph{skew-symmetry} and condition $(2)$ is called \emph{non-degeneracy}. Therefore, a symplectic form is a skew-symmetric, non-degenerate form.
\end{d31rr}
\newtheorem{d31rr2}[d31]{Remark}
\begin{d31rr2}
\upshape
A symplectic vector space must be even-dimensional, otherwise condition (2) will not be satisfied.
\end{d31rr2}
\newtheorem{d311}[d31]{Definition}
\begin{d311}
\upshape
Let $X$ be a manifold. We say $\omega$ is a \emph{2-form on $X$}, if for each $p \in X$, $\omega_{p}$ is a skew-symmetric bilinear map on the tangent space of $X$ at $p$, i.e. $\omega_{p}: T_{p}X \times T_{p}X \longrightarrow \mathbb{R}$. Furthermore, $\omega_{p}$ varies smoothly in $p$.
\end{d311}
\newtheorem{d312}[d31]{Definition}
\begin{d312}
\upshape
Let $\omega$ be a 2-form on a manifold $X$. We say that $\omega$ is a \emph{symplectic form} if $\omega$ is closed and $\omega_{p}$ is symplectic on $T_{p}X$ for all $p \in X$.
\end{d312}
\newtheorem{d313}[d31]{Definition}
\begin{d313}
\upshape
A \emph{symplectic manifold} is a pair $(X, \omega)$ where $X$ is a manifold and $\omega$ is a symplectic form.
\end{d313}
\newtheorem{d32}[d31]{Remarks}
\begin{d32}
\upshape
See \cite{MS2} for more on the following remarks:
\begin{itemize}
\item[(i)] If $X$ is a symplectic manifold, then $X$ must be even-dimensional.
\item[(ii)] If $(X, \omega)$ is a symplectic manifold of dimension $2n$, then the $n$-fold wedge product $\omega \wedge \dots \wedge \omega$ is never zero. This implies that a symplectic manifold $(X, \omega)$ is orientable.
\end{itemize}
\end{d32}
If we want to define two symplectic manifolds $(X_1, \omega_1)$ and $(X_2, \omega_2)$ to be equivalent, not only do we need the underlying smooth manifolds to be diffeomorphic, but we also need the two symplectic forms to be related in some way. The phrase we use for such equivalence is ``$X_1$ is \emph{symplectomorphic} to $X_2$'', and the map (the diffeomorphism that ``preserves'' the symplectic structure) is called a \emph{symplectomorphism}.
\newtheorem{dsplmphsm}[d31]{Definition}
\begin{dsplmphsm}
\upshape
Let $(X_1, \omega_1)$ and $(X_2, \omega_2)$ be two symplectic manifolds, both of dimension $2n$, and let $f: X_1 \longrightarrow X_2$ be a diffeomorphism. Then $f$ is a \emph{symplectomorphism} if $f^{*}\omega_2 = \omega_1$.
\end{dsplmphsm}
\newtheorem{dsplmphsmr}[d31]{Remark}
\begin{dsplmphsmr}
\upshape
$f^{*}\omega_2$ is the pullback of $\omega_2$ by $f$. See \cite{I1}, \cite{MS2}, or \cite{LM} for details.
\end{dsplmphsmr}
\newtheorem{dcsvs}[d31]{Definition}
\begin{dcsvs}
\upshape
Let $V$ be a vector space. A \emph{complex structure} on $V$ is an automorphism $J: V \longrightarrow V$ such that $J^{2} = -I$, where $I$ is the identity automorphism on $V$. With such a structure, $V$ becomes a complex vector space with multiplication $i = \sqrt{-1}$ corresponding to $J$, by the map
\begin{equation*}
\mathbb{C} \times V \longrightarrow V: (s + it, v) \longmapsto sv + tJv
\end{equation*}
$V$ must be even-dimensional over $\mathbb{R}$.
\end{dcsvs}
\newtheorem{dcsvscom}[d31]{Definition}
\begin{dcsvscom}
\upshape
Let $(V,\omega)$ be a symplectic vector space. A complex structure $J$ on $V$ is said to be \emph{compatible with $\omega$} if for all $v,w \in V$
\begin{equation*}
\omega(Jv,Jw) = \omega(v,w)
\end{equation*}
and if for every nonzero $v \in V$,
\begin{equation*}
\omega(v,Jv) > 0
\end{equation*}
\end{dcsvscom}
\newtheorem{d33}[d31]{Definition}
\begin{d33}
\upshape
Let $X$ be a $2n$-dimensional manifold. An \emph{almost complex structure} on $X$ is a complex structure $J$ on the tangent bundle $TX$. A non-degenerate 2-form $\omega$ on $X$ is called \emph{compatible} with $J$ if the bilinear form $g$ defined by
\begin{equation*}
g(v,w) = \omega(v, Jw)
\end{equation*}
defines a Riemannian metric on $X$.
\end{d33}
\newtheorem{d34}[d31]{Definition}
\begin{d34}
\upshape
A Riemannian metric $g$ on $X$ is called \emph{compatible with $J$} if for all $p \in X$ and all $v,w \in T_{p}X$,
\begin{equation*}
g(Jv,Jw) = g(v,w)
\end{equation*}
\end{d34}
All of the above was done to make the following proposition intelligible:
\newtheorem{prop3}[d31]{Proposition}
\begin{prop3}
\upshape
\cite{MS2} \itshape Let $X$ be a $2n$-dimensional manifold. Then
\begin{itemize}
\item[(i)] for each non-degenerate 2-form $\omega$ on $X$, there exists an almost-complex structure $J$ which is compatible with $\omega$.
\item[(ii)] for each almost complex structure $J$ on $X$ there exists a non-degenerate 2-form $\omega$ which is compatible with $J$.
\end{itemize}
\end{prop3}
Now for the canonical class:
\newtheorem{d35}[d31]{Definition}
\begin{d35}
\upshape
If a 4-manifold $X$ has an almost complex structure $J$, its tangent bundle $TX$ and its cotangent bundle $T^{*}X$ are complex bundles of rank 2. The \emph{canonical class} $K = K(X)$ is defined to be the first Chern class of the cotangent bundle, i.e.
\begin{equation*}
K = c_{1}(T^{*}X, J) = -c_{1}(TX, J) \in H^{2}(X, \mathbb{Z})
\end{equation*}
\end{d35}
\newtheorem{d35r}[d31]{Remark}
\begin{d35r}
\upshape
Note that we shall often write $Q_{X}(K,K) = K \cdot K = K^2$.
\end{d35r}
We then have the following result from \cite{Wu}, quoted from \cite{GS}:
\newtheorem{wut1}[d31]{Theorem}
\begin{wut1} \label{wuthm}
For a given 4-manifold $X$ and an almost-complex structure $J$ on $X$, let $K = c_{1}(T^{*}X, J)$ be the canonical class. Then $K^2 = 3 \sigma(X) + 2 \chi(X)$.
\end{wut1}
\newtheorem{wut1r}[d31]{Remark}
\begin{wut1r}
\upshape
In the theorem above, $\sigma(X)$ denotes the signature of the intersection form of $X$ and $\chi(X)$ denotes the Euler characteristic of $X$. Note that we are presenting a very ``watered-down'' version of the original result; an almost-complex structure on $X$ provides two further identities (one concerning another characteristic class, the second \emph{Stiefel-Whitney class} of $X$), and there is an appropriate converse. Since we shall not need these extra identities, or the converse, the current version of the theorem is sufficient for our purposes.
\end{wut1r}
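As a quick sanity check of Theorem \ref{wuthm}, take $X = \mathbb{CP}^{2}$ with its standard complex structure, for which $\sigma(X) = 1$ and $\chi(X) = 3$. The theorem then predicts
\begin{equation*}
K^2 = 3\sigma(X) + 2\chi(X) = 3 + 6 = 9,
\end{equation*}
which agrees with the standard fact that $K_{\mathbb{CP}^{2}} = -3h$, so that $K^2 = 9\,Q_{\mathbb{CP}^{2}}(h,h) = 9$.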
\pagebreak
\section{Blowing Up and Blowing Down}
\newtheorem{s41}{Definition}[section]
\begin{s41}
\upshape
Let $X$ be a smooth, oriented 4-manifold. The connected sum $X' = X \# \overline{\mathbb{CP}^{2}}$ is called the \emph{blow-up} of $X$ at a point. We obtain a map $\pi: X' \longrightarrow X$ with the following properties: For a point $P \in X$,
\begin{itemize}
\item[(1)] $\pi|_{X' \setminus \overline{\mathbb{CP}^{1}}}: X' \setminus \overline{\mathbb{CP}^{1}} \longrightarrow X \setminus \{P\}$ is a diffeomorphism
\item[(2)] $\pi^{-1}(P) = \overline{\mathbb{CP}^{1}}$
\end{itemize}
\end{s41}
\newtheorem{bur}[s41]{Remarks}
\begin{bur}
\upshape
The sphere $\overline{\mathbb{CP}^{1}}$ in (2) is called the \emph{exceptional sphere}. Note that
\begin{itemize}
\item[(i)] it is contained in the $\overline{\mathbb{CP}^{2}}$ summand of $X'$.
\item[(ii)] its homology class $[\overline{\mathbb{CP}^{1}}]$ is usually denoted by $e = [\overline{\mathbb{CP}^{1}}] \in H_2(X'; \mathbb{Z}) = H_2(X; \mathbb{Z}) \oplus H_2(\overline{\mathbb{CP}^{2}}; \mathbb{Z})$ \linebreak (so, actually $e \in H_{2}(\overline{\mathbb{CP}^{2}}; \mathbb{Z})$).
\item[(iii)] $Q_{X'}(e,e) = -1$.
\item[(iv)] We usually call the map $\pi: X' \longrightarrow X$ the projection map.
\end{itemize}
\end{bur}
\newtheorem{bur2}[s41]{Remark}
\begin{bur2}
\upshape
Informally, when we blow up a manifold at a point $P$, we replace the point $P$ with the space of all complex lines through $P$, which is a copy of $\mathbb{CP}^{1}$.
\end{bur2}
\newtheorem{bupt}[s41]{Definition}
\begin{bupt}
\upshape
Let $X$ be a smooth 4-manifold and let $\Sigma$ be a smooth surface in $X$. Suppose we blow up $X$ at a point $P \in \Sigma$, and we denote the projection by $\pi: X' \longrightarrow X$. We define
\begin{itemize}
\item[(i)] the \emph{total transform} of $\Sigma$ to be the inverse image $\Sigma' = \pi^{-1}(\Sigma) \subset X'$.
\item[(ii)] the \emph{proper transform} of $\Sigma$ to be the closure $\tilde{\Sigma} = cl(\pi^{-1}(\Sigma \setminus \{P\}))$.
\end{itemize}
\end{bupt}
\newtheorem{bur22}[s41]{Remark}
\begin{bur22}
\upshape
So, suppose $\Sigma_1$ and $\Sigma_2$ are smooth surfaces in $X$ intersecting each other transversally only in the point $P$. Let $X'$ be the blow-up of $X$ at $P$ and let $\pi:X' \longrightarrow X$ be the projection map. Then, the proper transforms $\tilde{\Sigma}_{1}$, $\tilde{\Sigma}_{2}$ will be disjoint in the blow-up $X'$.
\end{bur22}
There is an inverse operation, called a \emph{blow down}, which can be performed under certain conditions.
\newtheorem{bd1}[s41]{Definition}
\begin{bd1}
\upshape
If the smooth 4-manifold $X$ contains a smoothly embedded sphere $\Sigma_{-}$ with $[\Sigma_{-}]^2 = -1$, then $X = Y \# \overline{\mathbb{CP}^{2}}$ for some manifold $Y$, and $Y$ is called the \emph{blow down} of $X$.
\end{bd1}
\newtheorem{bd1r}[s41]{Remark}
\begin{bd1r}
\upshape
There are corresponding blow-up and blow-down operations using $\mathbb{CP}^{2}$ instead of $\overline{\mathbb{CP}^{2}}$, but then the manifolds cease to be complex, as the following theorem (quoted from \cite{GS}), called the Noether formula, shows. We shall always use $\overline{\mathbb{CP}^{2}}$, unless we specify otherwise.
\end{bd1r}
\newtheorem{nf1}[s41]{Theorem}
\begin{nf1}
For a complex surface $S$, the integer $c_1^2(S) + c_2(S) = 3(\sigma(S)+ \chi(S))$ is divisible by 12, or equivalently, $1-b_1(S)+b_2^{+}(S)$ is even. In particular, if $S$ is a simply-connected complex surface, then $b_2^{+}(S)$ is odd.
\end{nf1}
\newtheorem{bur23}[s41]{Remark}
\begin{bur23}
\upshape
Note that the blow-up can be defined holomorphically for complex manifolds (see \cite{GS}) and symplectically for symplectic manifolds (see \cite{MS2}).
\end{bur23}
\newtheorem{bur24}[s41]{Remark}
\begin{bur24}
\upshape
If $X' = X \# \overline{\mathbb{CP}^{2}}$ we have
\begin{itemize}
\item[(i)] $b_2^{+}(X') = b_2^{+}(X)$
\item[(ii)] $b_2^{-}(X') = b_2^{-}(X) + 1$
\item[(iii)] $b_2(X') = b_2(X) + 1$
\item[(iv)] $\sigma(X') = \sigma(X) -1 $
\item[(v)] $\chi(X') = \chi(X) + 1$
\end{itemize}
\end{bur24}
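These changes follow from results already stated: since $X' = X \# \overline{\mathbb{CP}^{2}}$, equation \eqref{eg4eq} gives $Q_{X'} = Q_{X} \oplus <-1>$, so $b_2^{+}$ is unchanged while $b_2^{-}$ (and hence $b_2$) increases by 1 and $\sigma$ decreases by 1. For the Euler characteristic, the standard connected sum formula for closed 4-manifolds gives
\begin{equation*}
\chi(X') = \chi(X) + \chi(\overline{\mathbb{CP}^{2}}) - \chi(S^4) = \chi(X) + 3 - 2 = \chi(X) + 1.
\end{equation*}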
\pagebreak
\section{The Rational Blowdown Technique}
\label{rbdsec}
The rational blowdown technique was first discovered by R. Fintushel and R. Stern (\cite{FS1}). It can be thought of as a generalization of the usual blowdown process, in that a certain \emph{configuration} of spheres $C_p$ (basically, just a special collection of spheres intersecting each other in a certain way and with certain self-intersection numbers) is removed from a manifold and is replaced by a \emph{rational 4-ball} $B_p$ which has the same boundary, i.e. $\partial C_p = \partial B_p$. The reason the rational blowdown technique is useful, is that it is relatively easy to calculate the Seiberg-Witten invariants of manifolds constructed with this technique. \newline
For a fixed integer $p \geq 2$, a rational 4-ball $B_p$ is a 4-manifold that has the same rational homology as a ball, i.e. $H_k(B_p; \mathbb{Q}) \cong 0$ for $k>0$. However, this does not mean that its integral homology groups $H_k(B_p; \mathbb{Z})$ (for $k > 0$) are also trivial, only that if they are non-trivial, then they are just finite (torsion) groups. \newline
For example, we consider the rational 4-ball $B_p$ given by Figure \ref{rbdsec}.1.
\newtheorem{bp}{Lemma}[section]
\begin{bp}
$B_p$ has trivial reduced rational homology, and so has the same rational homology as $D^4$.
\end{bp}
Proof: \newline
Using techniques explained in \cite{OzSt}, pages 42 and 43 (and discussed in section \ref{x7proofsection}), we can calculate that $H_1(B_p; \mathbb{Z}) \cong \mathbb{Z}_{p}$, and that $H_k(B_p; \mathbb{Z})$ is trivial for $k \geq 2$ (while $H_{0}(B_p; \mathbb{Z}) \cong \mathbb{Z}$ since $B_p$ is path-connected). Since $\mathbb{Z}_{p} \otimes \mathbb{Q} = 0$, $B_p$ has the same rational homology as $D^4$ (all reduced rational homology groups are trivial). $\Box$
\newtheorem{bprem1}[bp]{Remark}
\begin{bprem1}
\upshape
Using these techniques, it can also be shown that $H_1(\partial B_p; \mathbb{Z}) \cong \mathbb{Z}_{p^2}$.
\end{bprem1}
We define $C_p$ to be the 4-manifold that is the plumbing according to the graph Figure \ref{rbdsec}.2, where $p \geq 2$ and the number of $-2$'s is $p-2$. So, its Kirby diagram is given by Figure \ref{rbdsec}.3. We now need a short lemma.
\newtheorem{cpbound1}[bp]{Lemma}
\begin{cpbound1}\label{cpzp2lemma}
The boundary $\partial C_p$ is the lens space $L(p^2, p-1)$, and so $\pi_1( \partial C_p) \cong \mathbb{Z}_{p^2}$.
\end{cpbound1}
Proof: \newline
First we note that the continued fraction expansion of $p^2/(p-1)$ is
\begin{equation*}
\frac{p^2}{p-1} = p+2 - \frac{1}{2 - \frac{1}{2 - \dots}}
\end{equation*}
or, $[p+2, 2, \dots, 2]$, where there are $p-2$ 2's. By using the slam-dunk technique in \cite{GS} from right to left on Figure \ref{rbdsec}.3, we get a single unknot with coefficient $p^2/(p-1)$, which shows that $\partial C_p \cong L(p^2, p-1)$. \newline
It is well-known (see for example \cite{Ro}) that $\pi_1(L(a,b)) \cong \mathbb{Z}_a$ (whatever the value of $b$ is), and so
\begin{equation*}
\pi_1(\partial C_p) \cong \pi_1(L(p^2,p-1)) \cong \mathbb{Z}_{p^2}.
\end{equation*}
\begin{flushright}
$\Box$
\end{flushright}
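The continued fraction identity used above can be checked mechanically for small $p$. The following Python sketch (illustrative only; the helper name \texttt{neg\_continued\_fraction} is ours) evaluates $[p+2, 2, \dots, 2]$ with $p-2$ twos and compares it with $p^2/(p-1)$.
\begin{verbatim}
from fractions import Fraction

def neg_continued_fraction(coeffs):
    """Evaluate [a_1, ..., a_n] = a_1 - 1/(a_2 - 1/(... - 1/a_n))."""
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a - 1 / value
    return value

for p in range(2, 20):
    coeffs = [p + 2] + [2] * (p - 2)      # p+2 followed by (p-2) twos
    assert neg_continued_fraction(coeffs) == Fraction(p * p, p - 1)
print("continued fraction expansion verified for p = 2, ..., 19")
\end{verbatim}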
We shall now show that $\partial C_p \cong \partial B_p$. We do this indirectly by showing that $B_p \cup_{\partial} \overline{C_p} \cong \# (p-1) \mathbb{CP}^{2}$, which also shows that $C_p$ embeds in $\# (p-1) \overline{\mathbb{CP}^{2}}$. First, we need another (equivalent) Kirby diagram for $C_p$.
\newtheorem{cpequiv1}[bp]{Lemma}
\begin{cpequiv1}
Figure \ref{rbdsec}.4 is also a Kirby diagram for $C_p$, equivalent to the one given in Figure \ref{rbdsec}.3.
\end{cpequiv1}
Proof: \newline
We start with Figure \ref{rbdsec}.4, and by a sequence of handleslides and a handle-cancellation, the diagram shall become Figure \ref{rbdsec}.3. We start by sliding a $-1$-framed meridian along the dotted circle to the place where the dotted circle and the $0$-framed circle twist, and then slide the $0$-framed circle over the meridian, which ``removes'' a twist. See Figure \ref{rbdsec}.5.\newline
We slide this ``used-up'' meridian along both strands, below a twist, and then use another $-1$-framed meridian to remove another twist. Note that this handleslide decreases the framing of the 0-framed circle by 1 each time. We do this with each of the meridians, and obtain Figure \ref{rbdsec}.6. \newline
We then slide the $-1$-framed meridians over each other, from top to bottom, as in Figure \ref{rbdsec}.7, which gives us Figure \ref{rbdsec}.8. Note that $p-2$ of the meridians become $-2$-framed meridians. \newline
We then slide the $-(p-1)$-framed circle over the only $-1$-framed meridian, using handle-subtraction to get Figure \ref{rbdsec}.18. We do this by looking at the part of the diagram indicated by Figures \ref{rbdsec}.9 and \ref{rbdsec}.10. First, we assign orientations to the $-(p-1)$-framed circle and the $-1$-framed meridian as in Figure \ref{rbdsec}.11 (note that we have chosen orientations so that the linking number of the two circles is $1$). Then, we draw a parallel copy of the $-1$-framed meridian, as in Figure \ref{rbdsec}.12, and perform the handle subtraction to get Figure \ref{rbdsec}.13. \newline
The framing of the $-(p-1)$-framed circle becomes
\begin{equation*}
-(p-1) - 1 - 2(1) = -p + 1 -1 -2 = -p -2
\end{equation*}
and note that it is linked with both the $-1$-framed meridian and the ``lowest'' $-2$-framed circle. \newline
We can drop the orientations and perform a few Reidemeister moves, as in Figures \ref{rbdsec}.14 to \ref{rbdsec}.17, to finally get \ref{rbdsec}.18. \newline
Now, since the dotted circle is only linked with the $-1$-framed meridian, we can perform a handle-cancellation, and so we are left with a link diagram as in Figure \ref{rbdsec}.3, as required. $\Box$ \newline
We now prove the main result.
\newtheorem{rbmain}[bp]{Proposition}
\begin{rbmain}
$B_p \cup_{\partial C_p} \overline{C_p}$ is diffeomorphic to $\# (p-1) \mathbb{CP}^{2}$, and so $\partial C_p \cong \partial B_p$.
\end{rbmain}
Proof: \newline
We start with $DC_p$, the double of $C_p$. As we know from the section on Kirby Calculus, $DC_p$ is formed simply by attaching $0$-framed meridians to each link component, and then adding a 3-handle and a 4-handle. So, taking the double of $C_p$ in Figure \ref{rbdsec}.4 we get Figure \ref{rbdsec}.19. \newline
We perform surgery inside $C_p \subset DC_p$ twice, first to change the dotted circle into a 0-framed circle, and then to change the original 0-framed circle into a dotted circle. Although this surgery changes the 4-manifold, it does not change its boundary. So, although the surgery might change something ``inside'' $DC_p$, its boundary $C_p \cup_{\partial C_p} \overline{C_p}$ will be unchanged.
So, we now have Figure \ref{rbdsec}.20. \newline
We slide each $0$-framed meridian over the $-1$-framed meridian it is linked with, so that its framing becomes $+1$ and the meridians become unlinked, as shown in Figure \ref{rbdsec}.21. Blowing down the $-1$-framed circles in $C_p$ gives us Figure \ref{rbdsec}.22. Note the change in the framing of the ``large'' $0$-framed circle. Note that we can see this as a diagram of $B_p \cup_{\partial} \overline{C_p}$, since we did not do anything in the $\overline{C_p}$ half of $DC_p$, which now consists of the $+1$-framed meridians, the 0-framed meridian, the 3-handle and the 4-handle, while $B_p$ consists of the $(p-1)$-framed circle twisted around the dotted circle, as in Figure \ref{rbdsec}.1. \newline
Sliding the $(p-1)$-framed circle over its $+1$-framed meridians, as in Figure \ref{rbdsec}.23, we get Figure \ref{rbdsec}.24. Note that each handle-slide in Figure \ref{rbdsec}.23 decreases the framing of the $(p-1)$-framed circle by 1. \newline
We now use the 0-framed meridian to unlink the dotted circle and the 0-framed circle (Figure \ref{rbdsec}.25, repeated $p$ times) to get Figure \ref{rbdsec}.26. We now perform two handle-cancellations: first, the dotted circle and its 0-framed meridian cancel as a 1-handle/2-handle cancelling pair, and then the unlinked 0-framed circle cancels with the 3-handle, as a 2-handle/3-handle cancelling pair. This leaves us with Figure \ref{rbdsec}.27, which is simply $\# (p-1) \mathbb{CP}^{2}$, as required. $\Box$.
\newtheorem{cporrem1}[bp]{Remark}
\begin{cporrem1}
\upshape
Sometimes we shall denote the boundary of $C_p$ as $L(p^2,1-p)$, as other authors do, instead of as $L(p^2,p-1)$. This is fine, since $\overline{L(a,b)} = L(a,-b)$, and so we are just considering the opposite orientation.
\end{cporrem1}
\newtheorem{cporrem2}[bp]{Remark}
\begin{cporrem2}
\upshape
It should be noted that we have only shown that there is a diffeomorphism $\phi:\partial B_{p} \longrightarrow \partial C_p$ (so, identifying $\partial C_p$ with $\partial B_p = L(p^2, 1-p)$, we may regard $\phi$ as a self-diffeomorphism of $\partial B_p$). In order for this operation of rationally blowing down to be well-defined, we need that a self-diffeomorphism of the boundary $\partial B_p$ always extends to a diffeomorphism over the whole rational ball $B_{p}$. Fortunately, the following theorem due to Bonahon in \cite{Bo} shows that there are not too many self-diffeomorphisms of $\partial B_p$ to consider.
\end{cporrem2}
\newtheorem{bon1}[bp]{Theorem}
\begin{bon1}
$\pi_{0}(\mathrm{Diff}(L(p^2, 1-p))) \cong \mathbb{Z}_2.$
\end{bon1}
\newtheorem{cporrem3}[bp]{Remark}
\begin{cporrem3}
\upshape
So, this theorem is saying that, up to homotopy, there are exactly two non-homotopic maps that are self-diffeomorphisms of $\partial B_p$. The identity map is one of these diffeomorphisms, and it clearly extends to a diffeomorphism over $B_p$. As noted in \cite{GS}, if we consider Figure \ref{rbdsec}.1 as in Figure \ref{rbdsec}.28 below, we see that it is symmetric (by a $180^{\circ}$ rotation about the $y$-axis).\newline
Let us denote by $R$ the map that performs this rotation. Clearly, $R$ is a self-diffeomorphism of $\partial B_p$, and it is also fairly clear that this self-diffeomorphism extends to $B_p$. If we consider what $R$ does to a meridian $m_1$ of the dotted circle $c_1$, we see that it inverts the meridian (gives it the opposite orientation). This shows that $R$ is not homotopic to the identity map. \newline
We have therefore found two non-homotopic self-diffeomorphisms of $\partial B_p$ which extend to $B_p$, and so we have the following theorem, as in \cite{GS}.
\end{cporrem3}
\newtheorem{bon2}[bp]{Theorem}
\begin{bon2} \label{bon2label}
Any self-diffeomorphism of $\partial B_p$ extends to $B_p$.
\end{bon2}
This finally allows us to give the definition of the rational blowdown of a 4-manifold $X$, as in \cite{GS}, which by Theorem \ref{bon2label} is well-defined up to diffeomorphism for a fixed $X$ and a fixed $C_p$ embedded in $X$.
\newtheorem{rbdef1}[bp]{Definition}
\begin{rbdef1}
Assume that $C_p$ embeds in the 4-manifold $X$, and write $X$ as $X= X_{0} \cup_{L(p^2, p-1)} C_{p}$. The 4-manifold $X_p = X_0 \cup_{L(p^2,p-1)}B_p$ is by definition the rational blowdown of $X$ along the given copy of $C_p$.
\end{rbdef1}
\newtheorem{rbrem1}[bp]{Remark}
\begin{rbrem1}
\upshape
We shall need the observation that if $X$ and $X \setminus C_p$ are simply connected, then $X_p$ is simply connected. This will be shown in section \ref{x7proofsection} for the case $p=7$.
\end{rbrem1}
\pagebreak
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.1
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig2.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.2
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig3.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.3
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig4.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.4
\end{center}
\begin{center}
\begin{minipage}{13cm}
\includegraphics[height=4cm]{rbfig5.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.5
\end{center}
\pagebreak
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=6cm]{rbfig6.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.6
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig7.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.7
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig8.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.8
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig9.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.9
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig10.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.10
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig11.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.11
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig12.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.12
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig13.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.13
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig14.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.14
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig15.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.15
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig16.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.16
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig17.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.17
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig18.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.18
\end{center}
\begin{center}
\begin{minipage}{11cm}
\includegraphics[width=11cm]{rbfig19.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.19
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{rbfig20.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.20
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{rbfig21.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.21
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{rbfig22.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.22
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{rbfig23.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.23
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{rbfig24.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.24
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{rbfig25.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.25
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{rbfig26.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.26
\end{center}
\begin{center}
\begin{minipage}{6cm}
\includegraphics[width=6cm]{rbfig27.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.27
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{rbfig28.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{rbdsec}.28
\end{center}
\pagebreak
\section{Results Concerning Rational Blowdowns} \label{x7proofsection}
In this section we prove that if $X_7$ is the rational blowdown of $E(1) \# 4 \overline{\mathbb{CP}^{2}}$ along the configuration $C_7$, then $X_7$ is simply-connected.
We need the following theorem from \cite{OzSt} (see pages 37-43).
\newtheorem{ythm}{Theorem}[section]
\begin{ythm}
If $Y$ is a 3-manifold given by Dehn surgery along $(K_1, K_2, \dots, K_n) \subset S^3$ with surgery coefficients $\frac{p_i}{q_i}$ $(i = 1,2,\dots, n)$, then $H_1(Y; \mathbb{Z})$ can be presented by the meridians $\mu_i$ as generators and the expressions
\begin{equation*}
p_i \mu_i + q_i \sum_{j \neq i} \ell k(K_i,K_j) \mu_j = 0
\end{equation*}
as relations.
\end{ythm}
We first calculate $H_1(\partial C_7; \mathbb{Z})$. We consider Figure \ref{x7proofsection}.1, which is a diagram of $C_7$ with the meridians $a_0, a_1, \dots, a_5$ drawn in. We have labelled $a_0$ as the meridian of the sphere $S$ of square $-9$, and $a_1, \dots, a_5$ are the meridians of the $-2$-spheres $S_1, \dots, S_5$ in the $\tilde{E_6}$ singular fibre, respectively, where the spheres in Figure \ref{e6constrsec}.6 are labelled as follows:
\begin{align*}
S_1:& \quad e_4-e_7\\
S_2:& \quad e_1-e_4\\
S_3:& \quad h-e_1-e_2-e_3\\
S_4:& \quad e_2-e_5\\
S_5:& \quad e_5-e_9\\
S_6:& \quad e_3-e_6\\
S_7:& \quad e_6-e_8
\end{align*}
From the theorem above, we can read off the relators (note that we use $a_i$ as a label both for the meridian and for the homology class it represents):
\begin{align*}
r_0:& -9 a_0 + a_1 = 0 \\
r_1:& -2a_1 + a_0 + a_2 = 0 \\
r_2:& -2a_2 + a_1 + a_3 = 0 \\
r_3:& -2a_3 + a_2 + a_4 = 0 \\
r_4:& -2a_4 + a_3 + a_5 = 0 \\
r_5:& -2a_5 + a_4 = 0
\end{align*}
since integral surgery with coefficient $n$ is just rational surgery $\frac{p}{q}$ with $p=n$ and $q=1$, and so a presentation of the group is
\begin{equation*}
H_1(\partial C_7; \mathbb{Z}) \cong \; <a_0, a_1, a_2, a_3, a_4, a_5 | r_0, r_1, r_2, r_3, r_4, r_5>
\end{equation*}
where the relators are as above.
\newtheorem{a3thm}[ythm]{Lemma}
\begin{a3thm} \label{a3thmlabel}
$a_3$ is a generator of $\pi_1(\partial C_7)$.
\end{a3thm}
Proof: \newline
First, we use the relators $r_i$ to write all the generators of $H_1(\partial C_7; \mathbb{Z})$ in terms of $a_0$. We start with $r_0$:\newline
\begin{equation*}
r_0 \Rightarrow a_1 = 9a_0
\end{equation*}
Now, $r_1$ and $a_1 = 9a_0$ together give
\begin{align*}
a_2 &= 2a_1-a_0 \\
&= 18a_0 - a_0 \\
\Rightarrow a_2 &= 17a_0
\end{align*}
Similarly, the other relators become
\begin{align*}
a_3 &= 25a_0 \\
a_4 &= 33a_0 \\
a_5 &= 41a_0 \\
49a_0 &= 0
\end{align*}
So, $H_1(\partial C_7; \mathbb{Z})$ can be presented as
\begin{equation} \label{preslabel1}
<a_0, a_1, a_2, a_3, a_4, a_5 | a_1 = 9a_0, a_2=17a_0, a_3=25a_0, a_4=33a_0, a_5=41a_0, 49a_0 =0>
\end{equation}
Since all the $a_i$ can be written in terms of $a_0$, this shows that $a_0$ is a generator. This presentation can be reduced to
\begin{equation*}
<a_0 | 49a_0 =0> \cong \mathbb{Z}_{49}
\end{equation*}
We expected $H_1(\partial C_7; \mathbb{Z}) \cong \mathbb{Z}_{49}$ from Lemma \ref{cpzp2lemma}. We could also reduce $\eqref{preslabel1}$ to
\begin{equation*}
<a_0, a_3 | a_3 = 25a_0, 49a_0 =0>
\end{equation*}
which can now be reduced as follows:
\begin{align*}
&<a_0, a_3 | a_3 = 25a_0, 49a_0 =0> \\
\sim & <a_0, a_3 | a_3 = 25a_0, 49a_0 =0, 2a_3 = 50a_0> \\
\sim & <a_0, a_3 | a_3 = 25a_0, 2a_3 = a_0> \\
\sim & <a_0, a_3 | a_3 = 25a_0, 2a_3 = a_0, 50a_3 = 25a_0> \\
\sim & <a_0, a_3 | 2a_3=a_0, 50a_3 = a_3> \\
\sim & <a_0, a_3 | a_0 = 2a_3, 49a_3 = 0> \\
\sim & <a_3 | 49a_3 = 0>
\end{align*}
which shows that $a_3$ is a generator of $H_1(\partial C_7; \mathbb{Z})$, as desired. \newline
Now, since $\partial C_7 \cong L(49,-6)$, we have $\pi_1(\partial C_7) \cong \pi_1(L(49,-6)) \cong \mathbb{Z}_{49}$. This shows that $\pi_1(\partial C_7)$ is abelian, which implies $\pi_1(\partial C_7) \cong H_1(\partial C_7; \mathbb{Z})$. So, since $a_3$ is a generator of $H_1(\partial C_7; \mathbb{Z})$, it is also a generator of $\pi_1(\partial C_7)$. $\Box$
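The arithmetic in this proof is elementary and can be double-checked by machine. The short Python sketch below (illustrative only) reproduces the recursion $a_{i+1} = 2a_i - a_{i-1}$ coming from the relators, recovers the relation $49 a_0 = 0$, and confirms that $a_3 = 25 a_0$ generates $\mathbb{Z}_{49}$ because $\gcd(25,49)=1$, with inverse $2$, matching the reduction $a_0 = 2a_3$ above.
\begin{verbatim}
from math import gcd

c = [1, 9]                      # a_0 = 1*a_0 and a_1 = 9*a_0 (from relator r_0)
for _ in range(4):              # relators r_1, ..., r_4 give a_{i+1} = 2*a_i - a_{i-1}
    c.append(2 * c[-1] - c[-2])
print(c)                        # [1, 9, 17, 25, 33, 41]
print(2 * c[5] - c[4])          # 49: relator r_5 gives 49*a_0 = 0
print(gcd(c[3], 49))            # 1, so a_3 = 25*a_0 generates Z_49
print(pow(c[3], -1, 49))        # 2, i.e. a_0 = 2*a_3 (modular inverse; Python 3.8+)
\end{verbatim}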
\newtheorem{ythmrem}[ythm]{Remark}
\begin{ythmrem}
\upshape
It can be checked that actually every generator $a_i$ $(i=0, 1, \dots 5)$ is a generator of the group (although, we are only interested in the fact that $a_3$ is a generator). It should be noted that this does not always happen with every chain, and one reason it happens in this case is that $7$ is prime. For example, it can be checked that not every meridian in $C_6$ is a generator.
\end{ythmrem}
\newtheorem{x0thm}[ythm]{Proposition}
\begin{x0thm}
If we define $X = \mathbb{CP}^{2} \# 13 \overline{\mathbb{CP}^{2}}$, and write $X = X_0 \cup_{L(49,-6)} C_7$ so that $X_7 = X_0 \cup_{L(49,-6)} B_7$ is the rational blowdown along $C_7$, then $X_0$ is simply-connected.
\end{x0thm}
Proof: \newline
Firstly, we know $\pi_1(X) = 1$. Let us define $\Sigma =L(49,-6) \cong \partial C_7$. Then $X = X_0 \cup_{\Sigma} C_7$. We also have from Van Kampen's Theorem that if
\begin{align*}
& j_0: \Sigma \hookrightarrow X_0 \\
& j_1: \Sigma \hookrightarrow C_7
\end{align*}
are the inclusion maps which induce the homomorphisms
\begin{align*}
& j_{0*}: \pi_1(\Sigma) \longrightarrow \pi_1(X_0) \\
& j_{1*}: \pi_1(\Sigma) \longrightarrow \pi_1(C_7)
\end{align*}
then $\pi_1(X) \cong (\pi_1(X_0) * \pi_1(C_7)) / N$, where $N$ is the normal subgroup generated by $j_{0*}(\omega) (j_{1*}(\omega))^{-1}$ for all $\omega \in \pi_1(\Sigma)$. \newline
Now, as Park observes in \cite{P1}, the generator $a_3$ of $\pi_1(\Sigma)$ intersects the $-2$-sphere labelled $S_6$ in the $\tilde{E_6}$-fibre. Note that $S_6$ is not in $C_7$, and so $S_6 \subset X_0$. In fact, $a_3$ intersects $S_6$ in such a way that it bounds a disk which is a hemisphere of $S_6$, and so $j_{0*}(a_3) = 1$. Therefore, by Lemma \ref{a3thmlabel}, $j_{0*}(\omega) = 1$ for all $\omega \in \pi_1(\partial C_7) \cong \pi_1(\Sigma)$. \newline
Therefore, the normal subgroup $N$ is just generated by all elements of the form $(j_{1*}(\omega))^{-1}$, or equivalently all elements of the form $j_{1*}(\omega) \in \pi_1(C_7)$, where $\omega \in \pi_1(\Sigma)$. \newline
Claim: $j_{1*}: \pi_1(\Sigma) \longrightarrow \pi_1(C_7)$ is a surjection. \newline
If this claim is true (and we shall prove it), then $N$ is the normal subgroup generated by all of $\pi_1(C_7)$. Therefore,
\begin{equation*}
\pi_1(X) \cong (\pi_1(X_0) * \pi_1(C_7)) / N \cong \pi_1(X_0)
\end{equation*}
and so because $X$ is simply-connected, we have that $X_0$ is simply-connected. $\Box$ \newline
Proof of Claim (\cite{S}): \newline
Let $M$ be a 4-manifold that has only $0$-, $1$- and $2$-handles (no $3$-handles or $4$-handles). This manifold has non-empty boundary, since it does not have a 4-handle. So, $M$ has a handle decomposition consisting of a unique 0-handle, some 1-handles, some 2-handles, and a boundary $\partial M$. As for CW-complexes, 1-handles ``give'' generators for $\pi_1(M)$ and 2-handles give relators for $\pi_1(M)$. \newline
We can look at $M$ ``upside-down'', by considering $k$-handles as $(4-k)$-handles. Then, $M$ has a handle decomposition with a unique 4-handle, some 3-handles, some 2-handles and a boundary $\partial M$. Since no generators of $\pi_1(M)$ come from $2$-, $3$- or $4$-handles, all the generators of $\pi_{1}(M)$ must come from its boundary $\partial M$. So, $\pi_1(M)$ has the same presentation as $\pi_1(\partial M)$, except it has extra relators coming from the $2$-handles of $M$. Therefore, there is a surjection $j_{*}: \pi_1(\partial M) \longrightarrow \pi_1(M)$. \newline
Since (for every $p$) $C_p$ is a manifold with no $3$- or $4$-handles, the claim is proved for $C_7$. $\Box$ \newline
Finally, we need the following lemma:
\newtheorem{x7sclem}[ythm]{Lemma}
\begin{x7sclem}
If the configuration $C_{p}$ is embedded in $X$, and $X_p = (X \setminus C_p ) \cup_{L(p^2,p-1)} B_p$ is the rational blowdown of $X$ along $C_p$, and if $X$ and $X \setminus C_p$ are simply-connected, then $X_p$ is also simply connected.
\end{x7sclem}
Proof: \newline
Assume $X$ and $X \setminus C_p$ are both simply connected, and define $\Sigma = L(p^2,p-1)$. Then by Van Kampen's Theorem,
\begin{align*}
\pi_1(X_p) &\cong (\pi_1(X \setminus C_p) * \pi_1(B_p) ) / N \\
&\cong \pi_1(B_p) / N
\end{align*}
where $N$ is the normal subgroup generated by the elements $(j_{0*}(\omega))(j_{1*}(\omega))^{-1}$ for all $\omega \in \pi_1(\Sigma)$, where $j_{0*}$ and $j_{1*}$ are the homomorphisms induced from the inclusion maps
\begin{align*}
& j_0: \Sigma \hookrightarrow X \setminus C_p \\
& j_1: \Sigma \hookrightarrow B_p
\end{align*}
as before. \newline
However, since we assume $X \setminus C_p$ is simply connected, $j_{0*}$ is the trivial map, and so $N$ is generated by elements of the form $j_{1*}(\omega)$ for $\omega \in \pi_{1}(\Sigma)$, as before. By \cite{FS1}, this map $j_{1*}$ is surjective, and so $\pi_1(X_p)$ is trivial. $\Box$
\newtheorem{x7sclemr1}[ythm]{Remark}
\begin{x7sclemr1}
\upshape
In particular, $X_7$ is simply-connected.
\end{x7sclemr1}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{fig201.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{x7proofsection}.1
\end{center}
\pagebreak
\section{Complex surfaces}
Recall that the complex projective line $H= \{[x:y:z] \in \mathbb{CP}^{2} | \; x=0 \}$ defines a generator $h = [H] \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$.
\newtheorem{cs41}{Proposition}[section]
\begin{cs41} \label{propdh}
The set $D = \{ [x:y:z] \in \mathbb{CP}^{2} | \; x^{d} + y^{d} + z^{d} =0 \}$ is a smooth, connected submanifold of $\mathbb{CP}^{2}$ representing $dh \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$.
\end{cs41}
Since this result is so fundamental to many of our calculations later, we give a detailed proof, which comes from \cite{GS}. \newline
Proof: \newline
We first calculate how many points are in the intersection
\begin{align*}
D \cap H &= \{[x:y:z] \in \mathbb{CP}^{2} | x^{d} + y^{d} + z^{d} =0, x =0 \} \\
&= \{[0:y:z] \in \mathbb{CP}^{2} | \; y^{d} + z^{d} =0 \}
\end{align*}
We notice that if $y = 0$ then $z = 0$, and vice-versa, and then the point is $[0:0:0]$, which is not a point in $\mathbb{CP}^{2}$. So, $y$ and $z$ are both non-zero. If we divide through by $z^d$, the equation $y^{d} + z^{d} =0$ becomes
\begin{equation*}
\alpha^d + 1 =0
\end{equation*}
where $\alpha = \frac{y}{z}$ is a non-zero complex number, and this equation has $d$ distinct solutions
\begin{equation*}
\alpha_{k} = e^{i(\pi / d + 2k\pi / d)}; k = 0, 1, \dots, d-1
\end{equation*}
and so we get the $d$ points in $\mathbb{CP}^{2}$
\begin{equation*}
D \cap H = \{ [0: \alpha_k: 1] \in \mathbb{CP}^{2} | \; \alpha_{k} = e^{i(\pi / d + 2k\pi / d)}, k = 0, 1, \dots, d-1 \}
\end{equation*}
If we define $g(x,y,z) = x^d + y^d + z^d$, then the Implicit Function Theorem says that
\begin{equation*}
\tilde{D} = \{(x,y,z) \in \mathbb{C}^3 | \; g(x,y,z) = 0 \}
\end{equation*}
is a smooth manifold if $(g_{x}(x,y,z), g_y(x,y,z), g_z(x,y,z)) \neq (0,0,0)$ on every point in $\tilde{D}$ (where $g_x$ means $\frac{\partial g}{\partial x}$). \newline
\pagebreak
However, we are working with $D$, a subset of $\mathbb{CP}^{2}$ and not $\tilde{D}$, which is a subset of $\mathbb{C}^3$. We therefore need to check the affine charts of $D$ (for example, the chart $\{[x:y:1] \in \mathbb{CP}^{2}\} \cong \{(x,y) \in \mathbb{C}^2 \}$). \newline
We start by checking the chart $[x:y:1]$. Let $p(x,y) = g(x,y,1) = x^d + y^d + 1$. Then
\begin{align*}
p_x(x,y) &= dx^{d-1} \\
p_y(x,y) &= dy^{d-1}
\end{align*}
The only point $(x_0, y_0)$ that gives $(p_x(x_0,y_0), p_y(x_0,y_0)) = (0,0)$ is the point $(0,0)$. Fortunately, $p(0,0) = 1$, and so $(0,0)$ is not in the set $\{(x,y) \in \mathbb{C}^2 | p(x,y) = 0 \}$, and so $D$ is a smooth manifold in this chart. Similar calculations show that $D$ is also smooth in the charts $[x:1:z]$ and $[1:y:z]$, and so $D$ is a smooth submanifold of $\mathbb{CP}^{2}$.\newline
Since $D$ and $H$ are both complex submanifolds of $\mathbb{CP}^{2}$, each of their intersection points is a positive intersection, and so $Q_{\mathbb{CP}^{2}}([D],[H]) = d$, which shows that $D$ represents the homology class $dh \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$. To see that $D$ is connected, notice that $g(x,y,z) = x^d + y^d + z^d$ is an irreducible polynomial, and so its zero set must be connected (see \cite{GH}). $\Box$ \newline
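As a quick sanity check of the argument above (it is not part of the proof), the following short \texttt{sympy} sketch works in the affine chart $z=1$ for a sample degree (here $d=4$, chosen purely for illustration): the partial derivatives vanish simultaneously only at the origin, where $p \neq 0$, and the equation $\alpha^d + 1 = 0$ has $d$ distinct roots.
\begin{verbatim}
# Sketch only: verify the chart z = 1 computation for a sample degree d = 4.
import sympy as sp

x, y, alpha = sp.symbols('x y alpha')
d = 4
p = x**d + y**d + 1

# The only common zero of p_x and p_y is (0,0), and p(0,0) = 1 != 0,
# so the curve {p = 0} is smooth in this chart.
crit = sp.solve([sp.diff(p, x), sp.diff(p, y)], [x, y], dict=True)
print(crit)                      # [{x: 0, y: 0}]
print(p.subs({x: 0, y: 0}))      # 1

# The d distinct intersection points with H = {x = 0}.
print(len(sp.solve(alpha**d + 1, alpha)))   # 4
\end{verbatim}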
We now have the following interesting proposition, the proof of which can be found in \cite{GS}.
\newtheorem{s42}[cs41]{Proposition}
\begin{s42}
If $p_1$ and $p_2$ are two homogeneous polynomials with equal degree (and not powers of other polynomials) and the hypersurfaces $F_1 = \{P \in \mathbb{CP}^{n} |\; p_1(P) = 0 \}$ and $F_2 = \{P \in \mathbb{CP}^{n} |\; p_2(P) = 0 \}$ are smooth submanifolds of $\mathbb{CP}^{n}$, then $F_1$ is diffeomorphic to $F_2$.
\end{s42}
\newtheorem{s42r}[cs41]{Remark}
\begin{s42r}
\upshape
This proposition, together with Proposition \ref{propdh} above, shows that the smooth curve defined by any degree $d$ homogeneous polynomial represents $dh \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$.
\end{s42r}
\pagebreak
\section{Resolving Singular Points} \label{resolvesection}
This section comes from Chapter 2 of \cite{GS}, and we follow the notation used there. \newline
Assume that we have two smooth surfaces $\Sigma_1$ and $\Sigma_2$ in $\mathbb{CP}^{2}$ that are both closed and oriented, and assume that $\Sigma_1$ and $\Sigma_2$ intersect each other transversally in the single point $P \in \mathbb{CP}^{2}$. Then $\Sigma = \Sigma_1 \cup \Sigma_2$ is not a smooth surface in $\mathbb{CP}^{2}$, since at the point $P$ it fails to be a manifold, but it still defines a homology class $[\Sigma] = [\Sigma_1] + [\Sigma_2] \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$. \newline
We now describe a process that changes $\Sigma$ into a smooth surface $\tilde{\Sigma}$. We consider a neighbourhood of the intersection point $P$ that is a 4-ball, which we call $D$. Inside $D$, a neighbourhood of the intersection point looks like
\begin{equation*}
F = \{(z_1,z_2) \in \mathbb{C}^2 | \; z_1 z_2 = 0, |z_1|^2 + |z_2|^2 \leq 1 \}
\end{equation*}
which is a model for two 2-dimensional disks (note, $z_1, z_2 \in \mathbb{C}$) intersecting each other in a single point in the 4-ball $D$. To ``remove'' the singular intersection point at $P$, we cut out the pair $(D,F)$ and replace it with a pair $(D,R)$ that does not have a singular point at $P$, does not change the manifold $\mathbb{CP}^{2}$, and is such that the resulting surface $\tilde{\Sigma}$ satisfies $[\tilde{\Sigma}] = [\Sigma]$. \newline
We choose $R$ to be the subset of $D$ that is obtained by perturbing the subset
\begin{equation*}
R'_{\epsilon} = \{(z_1,z_2) \in \mathbb{C}^2 | z_1 z_2 = \epsilon, |z_1|^2 + |z_2|^2 \leq 1\}
\end{equation*}
where $0 < |\epsilon| \ll 1$, so that $\partial R = \partial F \subset \partial D $. Since $R'_{\epsilon}$ is the graph of $z_2 = \frac{\epsilon}{z_1}$, it is topologically an annulus, and therefore $R$ is also topologically an annulus. \newline
So, replacing the pair $(D,F)$ with the pair $(D,R)$ ``removes'' the singular point $P$, but since we are simply removing $D$ and gluing it back in, it does not change the $\mathbb{CP}^{2}$ which contains $\Sigma_1$ and $\Sigma_2$. Furthermore, since the subsets $F$ and $R$ are homologous in $(D,\partial D)$, the homology class of $\tilde{\Sigma}$ (what $\Sigma$ becomes after this operation) is still $[\tilde{\Sigma}] = [\Sigma_1] + [\Sigma_2]$.
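To illustrate why $R'_{\epsilon}$ is an annulus, note that on $R'_{\epsilon}$ we have $z_2 = \frac{\epsilon}{z_1}$, so the constraint $|z_1|^2 + |z_2|^2 \leq 1$ becomes $r^2 + \frac{|\epsilon|^2}{r^2} \leq 1$ with $r = |z_1|$, which holds exactly for $r$ in a closed interval. The following small numerical sketch (not part of the original argument; the value $\epsilon = 0.1$ is an arbitrary choice) confirms the two radii.
\begin{verbatim}
# Sketch only: the radii of the annulus R'_eps for a sample eps.
import numpy as np

eps = 0.1
r = np.linspace(1e-3, 1.0, 10000)
ok = r**2 + eps**2 / r**2 <= 1.0
print(r[ok].min(), r[ok].max())            # numerical inner/outer radii
# Analytically: r_min^2, r_max^2 = (1 -/+ sqrt(1 - 4 eps^2)) / 2
print(np.sqrt((1 - np.sqrt(1 - 4*eps**2)) / 2),
      np.sqrt((1 + np.sqrt(1 - 4*eps**2)) / 2))
\end{verbatim}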
\pagebreak
\newtheorem{resrem1}{Remark}[section]
\begin{resrem1}
\upshape
The process above, as \cite{GS} say, ``removes the singular point $P$''. Since we worked locally around $P$, the method is valid for every 4-manifold $X$, and every pair of intersecting surfaces in $X$, even if the intersections are transverse self-intersections.
\end{resrem1}
\newtheorem{resrem2}[resrem1]{Remark}
\begin{resrem2}
\upshape
A nice way of looking at this operation is as follows: We have two surfaces $\Sigma_1$ and $\Sigma_2$ in some 4-manifold $X$ that intersect each other in a point $P$. We take a disk neighbourhood of $P$ in $\Sigma_1$, call it $D_1$, and a disk neighbourhood of $P$ in $\Sigma_2$, call it $D_2$. We remove the disks $D_1$ and $D_2$ and replace them with an annulus joining $\partial D_1$ to $\partial D_2$. This is exactly the operation of connect-summing the two surfaces together, from which it can be seen that the homology of the resulting surface $[\tilde{\Sigma}]$ is clearly $[\tilde{\Sigma}] = [\Sigma_1] + [\Sigma_2]$.
\end{resrem2}
\newtheorem{resrem3}[resrem1]{Remark}
\begin{resrem3} \label{resolvesympl}
\upshape
Although we have done this operation \emph{smoothly}, it is possible to resolve singular points \emph{symplectically}. The method is the same, except we use a function such as
\begin{equation*}
f(x) =
\begin{cases}
\exp (-(\frac{1}{x})^2) & \text{if $x \neq 0$}, \\
0 & \text{if $x = 0$}
\end{cases}
\end{equation*}
to symplectically ``smooth corners''.
\end{resrem3}
\pagebreak
\section{Elliptic Fibrations} \label{ellfibsec}
This section is based on Chapter 3 of \cite{GS}. The definitions are from \cite{GS} and we follow the notation presented there.
\newtheorem{esd}{Definition}[section]
\begin{esd}
\upshape
A complex surface $S$ is called an \emph{elliptic surface} if there is a holomorphic map $\pi: S \longrightarrow C$ from $S$ to a complex curve $C$ such that for generic $t \in C$ the inverse image $\pi^{-1}(t)$ is a smooth \emph{elliptic curve}. We call the map $\pi$ a \emph{(holomorphic) elliptic fibration}.
\end{esd}
\newtheorem{esdr1}[esd]{Remark}
\begin{esdr1}
\upshape
Recall that an elliptic curve is topologically a real 2-dimensional torus (\cite{GS}).
\end{esdr1}
\newtheorem{esd2}[esd]{Definition}
\begin{esd2} \label{correcteddefnlabel}
\upshape
Let $X$ be a smooth, closed, oriented 4-manifold, and let $C$ be a complex curve. A smooth map $\pi: X \longrightarrow C$ will be called a (\emph{$C^{\infty}$-}) \emph{elliptic fibration} if each fibre $\pi^{-1}(t)$ (which may be a singular fibre) has a neighbourhood $U \subset X$ and an orientation preserving diffeomorphism $\phi: U \longrightarrow \phi(U)$, where $\phi(U)$ is a subset of an elliptic surface $S$, such that $\phi$ commutes with the maps $\pi$.
\end{esd2}
We present three examples from \cite{GS} in order to illustrate what an elliptic fibration actually is.
\newtheorem{cp1eg}[esd]{Example}
\begin{cp1eg} \label{cp2n1eg}
\upshape
We first construct a $\mathbb{CP}^1$-fibration over $\mathbb{CP}^1$. We consider all the complex projective lines in $\mathbb{CP}^{2}$ passing through the point $P = [0:0:1] \in \mathbb{CP}^{2}$. To each line passing through $P$ we can associate a point $[t_0:t_1] \in \mathbb{CP}^1$ such that the line is $L_{[t_0:t_1]} = \{[x:y:z] \in \mathbb{CP}^{2} | \; t_0 x = t_1 y \}$. Essentially, we are picking the point in $\mathbb{CP}^{2}$ where the line through $P$ crosses the projective line $\{ [x:y:z] \in \mathbb{CP}^{2} | \; z=0\} \cong \mathbb{CP}^{1}$, and this association of a line through $P$ to a point $[t_0:t_1] \in \mathbb{CP}^{1}$ parametrizes the set of such lines. \newline
It is easy to see that the family of lines $\{L_{[t_0:t_1]} | \; [t_0:t_1] \in \mathbb{CP}^{1} \}$ is a one-sheet cover of $\mathbb{CP}^{2} \setminus \{P\}$, i.e. for every point $Q \in \mathbb{CP}^{2} \setminus \{P\}$, there is a unique line in the family that passes through $Q$. Therefore, we can define a map $f: \mathbb{CP}^{2} \setminus \{P\} \longrightarrow \mathbb{CP}^{1}$ as follows: for $Q \in \mathbb{CP}^{2} \setminus \{P\}$, there is a unique line $L_{[t_{0_Q}:t_{1_Q}]}$ in the above family that passes through $Q$, and we define $f(Q) = [t_{0_Q}:t_{1_Q}]$. \newline
We notice that all the lines $L_{[t_0:t_1]}$ intersect each other transversally in $P$, and so we cannot extend this map to all of $\mathbb{CP}^{2}$. However, blowing up a point $P$ in a manifold essentially replaces the point $P$ with the set of lines going through that point, so we can extend $f$ to $\mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}}$, and therefore we have $\tilde{f}: \mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}$, a $\mathbb{CP}^{1}$-fibration of $\mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}}$ over $\mathbb{CP}^{1}$.
\end{cp1eg}
\newtheorem{cp2eg}[esd]{Example}
\begin{cp2eg} \label{cp2n2eg}
\upshape
We generalize the construction above. Above, the polynomials that define the lines passing through $P = [0:0:1]$ are linear polynomials (namely, $p_0(x,y,z) = x$ and $p_1(x,y,z) = y$). \newline
Instead of using linear polynomials, suppose we choose $p_0$ and $p_1$ to be quadratic (and homogeneous) polynomials in the variables $x,y,z$. Suppose further that we choose $p_0$ and $p_1$ ``generically enough'', so that their zero sets,
\begin{align}
V_{p_0} = \{[x:y:z] \in \mathbb{CP}^{2} | \; p_0(x,y,z) = 0\}\\
V_{p_1} = \{[x:y:z] \in \mathbb{CP}^{2} | \; p_1(x,y,z) = 0\}
\end{align}
which are the curves in $\mathbb{CP}^{2}$ corresponding to the polynomials $p_0$ and $p_1$, are such that $V_{p_0}$ intersects $V_{p_1}$ in four points $P_1, P_2, P_3, P_4$. \newline
To see what is meant by choosing polynomials ``generically enough'', let us look at quadratic polynomials in two real variables. Figure \ref{ellfibsec}.1 shows how we could choose two quadratic polynomials that intersect in only two points, while Figure \ref{ellfibsec}.2 shows how we could choose two quadratic polynomials that intersect each other in four points. \newline
It should be remarked that we often identify a polynomial with the curve to which it corresponds (its zero set), in order to give meaning to the phrase of how ``polynomials intersect each other''. \newline
So, to recap, we have two quadratics which give curves $C_0$ and $C_1$ (the zero sets $V_{p_0}$ and $V_{p_1}$) which intersect each other in four points $P_1, P_2, P_3, P_4$. We now consider the family of polynomials
\begin{equation*}
\mathcal{Q} = \{t_0 p_0 + t_1 p_1 | \; [t_0:t_1] \in \mathbb{CP}^{1} \}
\end{equation*}
Such a family is called a \emph{pencil of curves}. \newline
Again, we blur the distinction of a polynomial and the curve to which a polynomial corresponds. So, we consider an element of $\mathcal{Q}$ to be both a curve and a polynomial, depending on context. \newline
As in Example \ref{cp2n1eg} above, this family $\mathcal{Q}$ gives a one-sheeted cover of $\mathbb{CP}^{2} \setminus \{P_1, P_2, P_3, P_4 \}$, and any two curves in $\mathcal{Q}$ intersect each other transversally in every point of $\{P_1, P_2, P_3, P_4 \}$ (since for $i=1,2,3,4$, $t_0 p_0 (P_i) + t_1 p_1 (P_i) = 0+0 = 0$). Therefore, we can define the map
\begin{equation*}
f: \mathbb{CP}^{2} \setminus \{P_1, P_2, P_3, P_4 \} \longrightarrow \mathbb{CP}^{1}
\end{equation*}
and although we cannot extend this map to the whole of $\mathbb{CP}^{2}$, we can blow up at the points $P_1, P_2, P_3, P_4$ to define a map
\begin{equation*}
\tilde{f}: \mathbb{CP}^{2} \# 4 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}
\end{equation*}
\end{cp2eg}
\newtheorem{cp2egr1}[esd]{Remark}
\begin{cp2egr1}
\upshape
Although $\tilde{f}$ in Example \ref{cp2n1eg} was a bundle map, in Example \ref{cp2n2eg} it is not. First of all, we do have generic fibres (a generic quadric curve is irreducible, and so since ``a generic quadric curve in $\mathbb{CP}^{2}$ is a copy of $\mathbb{CP}^{1}$'' (\cite{GS}), a generic fibre is $\mathbb{CP}^{1}$). \newline
However, there are singular fibres as well, which correspond to those ``non-generic'' quadratic polynomials that are reducible, and so we get fibres that are the union of two lines (e.g. $x^2 + y^2 = 0$ gives two lines, $x = iy$ and $x=-iy$), which is not simply a copy of $\mathbb{CP}^{1}$. \newline
The reason we did not encounter this problem in Example \ref{cp2n1eg} is that all linear polynomials are irreducible, and so, as \cite{GS} puts it ``there are no singular linear subspaces of $\mathbb{CP}^{2}$.'' \newline
So, $\tilde{f}: \mathbb{CP}^{2} \# 4 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}$ is not a fibre bundle, since there is the possibility that two different fibres may be non-diffeomorphic. However, we still call $\tilde{f}: \mathbb{CP}^{2} \# 4 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}$ a (singular) fibration.
\end{cp2egr1}
\newtheorem{cp2egr2}[esd]{Remark}
\begin{cp2egr2}
\upshape
Recall from Example \ref{cp2n2eg} that every polynomial $p_{[t_0:t_1]}$ in the pencil $\mathcal{Q} = \{t_0 p_0 + t_1 p_1 | \; [t_0:t_1] \in \mathbb{CP}^{1} \}$ has the property that $p_{[t_0:t_1]}(P) = 0$ for each $P \in \{P_1, P_2, P_3, P_4 \}$. Since each fibre $\tilde{f}^{-1}([t_0:t_1])$ is simply a curve $p_{[t_0:t_1]}$ corresponding to the point $[t_0:t_1] \in \mathbb{CP}^{1}$, and each point $P_i$ was blown-up to become the exceptional sphere $E_{i}$ ($i = 1,2,3,4$), we observe that each exceptional sphere intersects each fibre in a unique point. Therefore, the exceptional spheres of the blow-ups are sections of the fibration $\tilde{f}: \mathbb{CP}^{2} \# 4 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}$.
\end{cp2egr2}
\newtheorem{cp3eg}[esd]{Example}
\begin{cp3eg}
\upshape
We now start with a generic pair of cubics, $p_0$ and $p_1$ that intersect each other in 9 points $\{P_1, \dots, P_9 \}$ (for an example of a generic pair of cubics in two real variables see Figure \ref{ellfibsec}.3, for a non-generic pair of cubics, see Figure \ref{ellfibsec}.4). \newline
We consider the pencil of curves $\mathcal{Q} = \{t_0 p_0 + t_1 p_1 | \; [t_0:t_1] \in \mathbb{CP}^{1} \}$ and we define a map
\begin{equation*}
f: \mathbb{CP}^{2} \setminus \{P_1, \dots, P_9 \} \longrightarrow \mathbb{CP}^{1}
\end{equation*}
as before, i.e. for a point $Q \in \mathbb{CP}^{2} \setminus \{P_1, \dots, P_9 \}$ there is a unique cubic of the form $p_{[t_0:t_1]} = t_0 p_0 + t_1 p_1$ that passes through $Q$, and we define $f(Q) = [t_0:t_1] \in \mathbb{CP}^{1}$. \newline
We blow up at each of the 9 points $P_1, \dots, P_9$ and extend $f$ to a fibration
\begin{equation*}
\pi: \mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}
\end{equation*}
whose fibres are cubic curves, and so the generic fibre is a smooth elliptic curve, which is topologically a torus. Therefore, we have just shown that there is a holomorphic elliptic fibration on $\mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}$.
\end{cp3eg}
\newtheorem{cp3egr1}[esd]{Remark}
\begin{cp3egr1}
\upshape
If a fibration $\pi: X \longrightarrow \mathbb{CP}^{1}$ had only generic fibres (fibres that are tori), then since the Euler characteristic is multiplicative for fibre bundles and $\chi(T^2) = 0$, we would have $\chi(X) = \chi(\mathbb{CP}^{1}) \cdot \chi(T^2) = 0$. \newline
However, we know $H_{2}(\mathbb{CP}^{2}; \mathbb{Z}) \cong \mathbb{Z}$ and $\mathbb{CP}^{2}$ is simply connected, so by the argument at the beginning of section \ref{class4msec} and the definition of the Euler characteristic we have
\begin{equation*}
\chi(\mathbb{CP}^{2}) = \sum_{i=0}^{4}(-1)^{i} b_{i}(\mathbb{CP}^{2}) = 1-0+1-0+1= 3
\end{equation*}
So $\chi(\mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}) = 3 + 9 = 12 \neq 0$, and so there must be fibres that are not diffeomorphic to the torus. These fibres are the \emph{singular fibres} we have mentioned, and the possible singular fibres will be discussed in the next section. Note that our choice of polynomials $p_0$ and $p_1$ decides which types of singular fibres will occur in the elliptic fibration.
\end{cp3egr1}
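As a small numerical aside (not taken from \cite{GS}), the count $\chi(\mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}) = 12$ also follows from the fact that, for closed 4-manifolds, $\chi(X \# Y) = \chi(X) + \chi(Y) - 2$, so each blow-up adds 1 to the Euler characteristic:
\begin{verbatim}
# Sketch only: chi(X # Y) = chi(X) + chi(Y) - 2 for closed 4-manifolds,
# so connect-summing with CP^2-bar (chi = 3) adds 1 each time.
chi_CP2, chi_CP2bar = 3, 3
chi = chi_CP2
for _ in range(9):        # nine blow-ups
    chi = chi + chi_CP2bar - 2
print(chi)                # 12 = chi(E(1))
\end{verbatim}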
\newtheorem{cp3egr2}[esd]{Remark}
\begin{cp3egr2}
\upshape
When we consider $\mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}$ as being equipped with an elliptic fibration, we denote it by $E(1)$.
\end{cp3egr2}
We note that this process of constructing (singular) fibrations can be extended for higher-order polynomials:
\newtheorem{cp4thm}[esd]{Lemma}
\begin{cp4thm}
\upshape \cite{GS} \itshape The manifold $\mathbb{CP}^{2} \# d^2 \overline{\mathbb{CP}^{2}}$ admits a (singular) fibration $\mathbb{CP}^{2} \# d^2 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}$, where the generic fibre is a complex curve of genus $\frac{1}{2}(d-1)(d-2)$.
\end{cp4thm}
\subsection*{The general case}
The following is taken from \cite{SSS}. \newline
Suppose that $p_0$ and $p_1$ are two homogeneous polynomials of degree $n$ in the variables $x,y,z$. Suppose furthermore that $\mathrm{gcd}(p_0,p_1) = 1$, which means that the curves $C_0, C_1 \subset \mathbb{CP}^{2}$ defined by $p_0$ and $p_1$ do not share a common component, and intersect each other only in finitely many points. We call the family of polynomials
\begin{equation*}
p_t = \{ t_0 p_0 + t_1 p_1 | \; t = [t_0:t_1] \in \mathbb{CP}^{1} \}
\end{equation*}
the \emph{pencil} generated by $p_0$ and $p_1$. The zero-set of $p_t$,
\begin{equation*}
C_t = \{[x:y:z]\in \mathbb{CP}^{2} | \; p_t(x,y,z) =0 \} \subset \mathbb{CP}^{2}
\end{equation*}
is a complex curve. We define the set
\begin{equation*}
B = \{P \in \mathbb{CP}^{2} | \; p_0(P) = p_1(P) = 0 \}
\end{equation*}
and $B$ is called the set of \emph{base points} of the pencil. By the assumption above, this set must have only finitely many points. \newline
As in the examples above, we notice that the map
\begin{equation*}
P \mapsto [p_0(P):p_1(P)]
\end{equation*}
defines a map from $\mathbb{CP}^{2} \setminus B$ to $\mathbb{CP}^{1}$, and after blowing up at the base points of the pencil, this map extends to a well-defined holomorphic map from $\mathbb{CP}^{2} \# k \overline{\mathbb{CP}^{2}}$ to $\mathbb{CP}^{1}$, where $k$ is the number of points in $B$.
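As an illustration (the two conics below are assumptions chosen only for this example, not curves used elsewhere in the text), the following \texttt{sympy} sketch checks that a pair of sufficiently generic conics has exactly four base points, matching the count used in Example \ref{cp2n2eg}.
\begin{verbatim}
# Sketch only: two sample conics meet in four base points.
import sympy as sp

x, y, z = sp.symbols('x y z')
p0 = x**2 + y**2 - z**2        # sample smooth conic
p1 = x*y - z**2                # second sample conic

# At z = 0 the system forces x = y = 0, which is not a point of CP^2,
# so all base points lie in the affine chart z = 1.
sols = sp.solve([p0.subs(z, 1), p1.subs(z, 1)], [x, y], dict=True)
print(len(sols))               # 4
\end{verbatim}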
\pagebreak
\begin{center}
\begin{minipage}{6cm}
\includegraphics[width=6cm]{polyfig1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{ellfibsec}.1
\end{center}
\begin{center}
\begin{minipage}{6cm}
\includegraphics[width=6cm]{polyfig2.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{ellfibsec}.2
\end{center}
\begin{center}
\begin{minipage}{6cm}
\includegraphics[width=6cm]{polyfig3.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{ellfibsec}.3
\end{center}
\begin{center}
\begin{minipage}{6cm}
\includegraphics[width=6cm]{polyfig4.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{ellfibsec}.4
\end{center}
\pagebreak
\section{Singular Fibres in Elliptic Fibrations} \label{kodairasec}
K. Kodaira, in \cite{Ko2}, classified the possible singular fibres of a locally holomorphic elliptic fibration. We follow \cite{SSS} mainly, although the original source is \cite{Ko2}, and other good sources are \cite{HKK} and \cite{BPV}. Two other sources are \cite{Pe} and \cite{M3}, which are quoted by \cite{SSS}. There is also a good, short review in \cite{Sc}. The following theorem is quoted from \cite{SSS}.
\newtheorem{kthm}{Theorem}[section]
\begin{kthm}
\upshape \cite{Ko2} \itshape A singular fibre of a locally holomorphic elliptic fibration without multiple fibres is either of type $I_n$ ($n \geq 1$), of type $II$, $III$, $IV$, or of type $I_n^*$ ($n \geq 0$), or an $\tilde{E_6}$-, $\tilde{E_7}$- or $\tilde{E_8}$-fibre.
\end{kthm}
\newtheorem{kthmr1}[kthm]{Remark}
\begin{kthmr1}
\upshape
We shall not discuss \emph{multiple fibres} here, or how they occur in elliptic fibrations.
\end{kthmr1}
We now describe the topological properties of these singular fibres. See Table \ref{kodairasec}.1 (from \cite{Sc} and \cite{SSS}).
\newtheorem{typeiiirem1}[kthm]{Remark}
\begin{typeiiirem1}
\upshape
We call a 2-sphere of self-intersection $-2$ a \emph{$-2$-sphere}.
\end{typeiiirem1}
\subsection*{Type $I_n$ fibres $(n \geq 1)$}
The $I_1$-fibre is also known as a \emph{fishtail} fibre or as a \emph{nodal} fibre. It is an immersed sphere of homological self-intersection zero with one positive double point. Hence, its Euler characteristic is $\chi(I_1) = 1$. \newline
The $I_n$-fibre for $n \geq 2$ consists of $n$ $-2$-spheres plumbed along a circle. Such a fibre is sometimes called a \emph{necklace} fibre (\cite{Sc}). The Euler characteristic is $\chi(I_n) = n$.
\subsection*{Type $II$ fibre}
A type $II$ fibre is also known as a \emph{cusp} fibre, since it is topologically a 2-sphere with a cusp singularity, where the singularity is a cone on the trefoil knot (see \cite{GS}). Its Euler characteristic is $2$.
\subsection*{Type $III$ fibre}
A type $III$ fibre is topologically the union of two $-2$-spheres intersecting each other (not transversally) in a unique point, with multiplicity 2. Therefore, its Euler characteristic is 3.
\subsection*{Type $IV$ fibre}
The type $IV$ fibre is topologically the union of three $-2$-spheres intersecting each other transversally in a unique point. Therefore its Euler characteristic is $4$.
\newtheorem{eulcharrem1}[kthm]{Remark}
\begin{eulcharrem1}
\upshape
The Euler characteristic of a singular fibre is computed quite easily. Each sphere in the fibre contributes 2 to the Euler characteristic, and then each time a pair of points is identified, the Euler characteristic decreases by 1. For example, the type $III$ singular fibre consists of two spheres with two points identified, so the Euler characteristic is $2 +2 -1 = 3$. The type $IV$-fibre consists of three spheres, with three points identified, so the Euler characteristic is $2+2+2-1-1 = 4$ (think of it as replacing three points with a single point).
\end{eulcharrem1}
\subsection*{Type $I_n^*$ fibres $(n \geq 0)$}
The type $I_n^*$ fibre is described by the plumbing given in Table \ref{kodairasec}.1, where all the spheres are $-2$-spheres, and the multiplicities are indicated on the tree. There are $n+1$ spheres of multiplicity $2$, and so there are $n+5$ spheres in total. From the plumbing diagram, it is easy to calculate that the Euler characteristic of the $I_n^*$ fibre is $n+6$.
\subsection*{The $\tilde{E_8}$ fibre}
The type $\tilde{E_8}$ fibre is described by the plumbing given in Table \ref{kodairasec}.1, where all nine of the vertices are $-2$-spheres. The numbers next to the vertices are the multiplicities the spheres have as homology classes in the fibre. Again, we can use the plumbing diagram to calculate that the Euler characteristic of this fibre is 10.
\subsection*{The $\tilde{E_7}$ fibre}
The type $\tilde{E_7}$ fibre is described by the plumbing given in Table \ref{kodairasec}.1, where all eight of the vertices are $-2$-spheres, and again the multiplicities are indicated. The Euler characteristic is 9.
\subsection*{The $\tilde{E_6}$ fibre}
The type $\tilde{E_6}$ fibre is described by the plumbing diagram given in Table \ref{kodairasec}.1, where all seven of the vertices are $-2$-spheres and the multiplicities are again given. The Euler characteristic is 8.
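For the plumbed fibres ($I_n$ with $n \geq 2$, $I_n^*$, $\tilde{E_6}$, $\tilde{E_7}$, $\tilde{E_8}$), the Euler characteristics above can be recovered mechanically from the plumbing data, using the counting rule described earlier (each sphere contributes 2, and each intersection point decreases $\chi$ by 1). The following short sketch (an aid to the reader, not taken from the sources cited above) tabulates this:
\begin{verbatim}
# Sketch only: chi of a plumbed fibre from its plumbing graph.
def chi_of_plumbing(num_spheres, num_intersections):
    # each sphere contributes 2; each intersection identifies a pair of points
    return 2 * num_spheres - num_intersections

print(chi_of_plumbing(3, 3))          # I_3: 3 spheres in a circle -> chi = 3
n = 2
print(chi_of_plumbing(n + 5, n + 4))  # I_n^*: tree with n+5 spheres -> chi = n+6
for v in (9, 8, 7):                   # E~8, E~7, E~6: trees with v vertices
    print(chi_of_plumbing(v, v - 1))  # 10, 9, 8
\end{verbatim}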
\subsection*{More remarks}
\newtheorem{i0rem}[kthm]{Remark}
\begin{i0rem}
\upshape
We define the $I_0$ fibre to be simply the generic fibre (the torus), and hence it is not a singular fibre.
\end{i0rem}
\newtheorem{e6e7e8rem}[kthm]{Remark}
\begin{e6e7e8rem}
\upshape
The $\tilde{E_8}$, $\tilde{E_7}$ and $\tilde{E_6}$ fibres are also known as the type $II^*$, type $III^*$ and type $IV^*$ fibres, respectively. One reason for this is the Euler characteristics:
\begin{align*}
\chi(II) + \chi(II^*) = 2 + 10 = 12 \\
\chi(III) + \chi(III^*) = 3 + 9 = 12 \\
\chi(IV) + \chi(IV^*) = 4 + 8 = 12
\end{align*}
and so the fibres are ``dual to each other''.
\end{e6e7e8rem}
\newtheorem{e6e7e8rem2}[kthm]{Remark}
\begin{e6e7e8rem2}
\upshape
A section of an elliptic fibration can be thought of as a curve that intersects each fibre in a unique point. Consequently, a section can only intersect those $-2$-spheres in an elliptic fibration that have multiplicity 1. Therefore, a section of an elliptic fibration that contains an $\tilde{E_8}$ fibre can only intersect the one $-2$-sphere that has multiplicity 1. Similarly, an $\tilde{E_7}$ fibre has two spheres (of multiplicity 1) that a section can intersect, while an $\tilde{E_6}$ fibre has three such spheres.
\end{e6e7e8rem2}
\newtheorem{obsrem1}[kthm]{Remark}
\begin{obsrem1}
\upshape
The reason for calculating the Euler characteristic is that it provides us with an obstruction to the existence of an elliptic fibration with a certain collection of singular fibres (which we shall call a \emph{configuration} of fibres). In other words, since $\chi (\mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}) = 12$, the sum of the Euler characteristics of the singular fibres in a particular elliptic fibration on $E(1)$ must equal 12. \newline
\pagebreak
So, while it may be possible for an elliptic fibration to have 12 $I_1$ fibres, or an $\tilde{E_8}$ fibre and 2 $I_1$ fibres (since the sum of the Euler characteristics is 12), it is not possible to have an elliptic fibration with an $\tilde{E_6}$ fibre, a type $III$ fibre and two $I_1$ fibres, since $\chi(\tilde{E_6}) + \chi(III) + 2\chi(I_1) = 8+3+2(1) = 13$. \newline
Note however, that just because a certain configuration of fibres has (collectively) an Euler characteristic of 12, there is no guarantee that there can exist an elliptic fibration with that collection of singular fibres. \cite{SSS} discusses this issue in detail, and also uses the signature of the singular fibres to show when certain configurations cannot exist. \newline
\end{obsrem1}
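For illustration only, the following brute-force sketch counts how many multisets built from a few of the fibre types pass the necessary condition $\sum \chi = 12$; as the remark stresses, passing this test does not guarantee that a corresponding elliptic fibration actually exists.
\begin{verbatim}
# Sketch only: enumerate fibre configurations with total chi equal to 12.
from itertools import combinations_with_replacement

chi = {'I1': 1, 'II': 2, 'III': 3, 'IV': 4, 'E6~': 8, 'E7~': 9, 'E8~': 10}

hits = []
for size in range(1, 13):
    for combo in combinations_with_replacement(sorted(chi), size):
        if sum(chi[f] for f in combo) == 12:
            hits.append(combo)
print(len(hits))          # number of configurations passing the chi = 12 test
print(hits[0])            # one example configuration passing the test
\end{verbatim}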
\pagebreak
\begin{center}
\begin{minipage}{13cm}
\includegraphics[width=13cm]{kodairafin1.eps}
\end{minipage}
\end{center}
\begin{center}
Table \ref{kodairasec}.1
\end{center}
\pagebreak
\section{The Fishtail Fibre and the Cusp Fibre} \label{fishtailsection}
This section comes from Section 2.3 in \cite{GS} and we follow their notation. We take a closer look at the fishtail and cusp fibres.
\newtheorem{ff}{Definition}[section]
\begin{ff}
\upshape
Consider the following singular curve
\begin{equation*}
C_1 = \{[x:y:z] \in \mathbb{CP}^{2} | \; zy^2 = x^3 + zx^2 \}
\end{equation*}
We call any fibre in $E(1)$ that comes from blowing up a curve ambiently isotopic to $C_1$ a \emph{fishtail fibre}.
\end{ff}
\newtheorem{ffr1}[ff]{Remark}
\begin{ffr1}
\upshape
Figure \ref{fishtailsection}.1 shows an example of a curve $C_1$.
\end{ffr1}
\newtheorem{fff}[ff]{Proposition}
\begin{fff} \label{fishtailprop}
The curve $C_1$ is smooth except at the point $P = [0:0:1] \in \mathbb{CP}^{2}$ and is homeomorphic to a sphere with two points identified.
\end{fff}
Proof: \newline
To prove that $C_1$ is smooth except at the point $P=[0:0:1]$, we use the Implicit Function Theorem. Let $p_1(x,y,z) = zy^2 - zx^2 - x^3$. In the chart $[x:y:1]$, we need to find all the points $(x,y) \in \mathbb{C}^2$ that satisfy the following three equations
\begin{align*}
p_1(x,y) &= y^2 - x^2 - x^3 = 0 \\
\frac{\partial p_1}{\partial x}(x,y) &= - 2x - 3x^2 = 0 \\
\frac{\partial p_1}{\partial y}(x,y) &= 2y = 0
\end{align*}
and the only point in $[x:y:1]$ that satisfies all three equations is $[0:0:1]$. It can be checked that there are no points in the charts $[1:y:z]$ and $[x:1:z]$ that satisfy the relevant three equations, and therefore $C_1$ is a smooth curve except at the point $P=[0:0:1]$. \newline
Recall that $\mathbb{CP}^{1}$ is homeomorphic to the 2-sphere. We now want to show that there is a map $\mathbb{CP}^{1} \longrightarrow C_1$ that is one-to-one except that two points get mapped to $P$. We define this map as follows: consider all the projective lines that pass through $P$. This space can be parametrized by $[t_0:t_1] \in \mathbb{CP}^{1}$ as
\begin{equation*}
L_{[t_0:t_1]} = \{[x:y:z] \in \mathbb{CP}^{2} | t_0x = t_1y \}
\end{equation*}
(every projective line through $P$ is of the form $ax + by + cz = 0$ for some $a,b,c \in \mathbb{C}$ not all zero. Since each line goes through $P$, we must have $a(0)+b(0)+c(1) = 0$, which implies that $c=0$, and so $ax = -by$, where $a,b \in \mathbb{C}$ are both not zero. Therefore, we can choose $a = t_0$, $b = -t_1$, where $[t_0:t_1] \in \mathbb{CP}^{1}$). \newline
Let us calculate the number of intersection points of $L_{[t_0:t_1]}$ and $C_1$ (we define $\alpha = \frac{t_0}{t_1}$ and assume $t_1 \neq 0$):
\begin{align*}
& zy^2 - zx^2 - x^3 = 0 \; \; \mathrm{and} \; \; t_0 x = t_1 y \\
\Rightarrow & zy^2 - zx^2 - x^3 = 0 \; \; \mathrm{and} \; \; y = \frac{t_0}{t_1} x = \alpha x \\
\Rightarrow & z (\alpha x)^2 - zx^2 - x^3 = 0 \\
\Rightarrow & \alpha ^2 zx^2 - zx^2 - x^3 = 0 \\
\Rightarrow & (\alpha ^2-1) zx^2 - x^3 = 0 \\
\Rightarrow & x^2 ((\alpha ^2-1) z - x) = 0 \\
\Rightarrow & (\alpha ^2-1) z = x \qquad(x \neq 0) \\
\Rightarrow & z = \frac{1}{\alpha ^2-1} x
\end{align*}
So the line $L_{[t_0:t_1]}$ intersects $C_1$ in $P$ and the point
\begin{equation*}
[x: \alpha x : \frac{1}{\alpha^2 - 1} x]
\end{equation*}
which is the same as the point
\begin{equation*}
[1: \frac{t_0}{t_1} : \frac{1}{(\frac{t_0}{t_1}) ^2 - 1}]
\end{equation*}
However, when $(\frac{t_0}{t_1}) ^2 = 1$, $\frac{1}{(\frac{t_0}{t_1})^2- 1}$ is undefined. If $(\frac{t_0}{t_1}) ^2 = 1$, then $[t_0:t_1] = [\pm 1: 1]$, and then
\begin{align*}
& zy^2 - zx^2 - x^3 = 0 \; \; \mathrm{and} \; \; x = \pm y \\
\Rightarrow & z(\pm x)^2 - zx^2 - x^3 = 0 \\
\Rightarrow & zx^2 - zx^2 - x^3 = 0 \\
\Rightarrow & x^3 = 0 \\
\Rightarrow & x = 0 \\
\Rightarrow & y = 0 \\
\Rightarrow & [x:y:z] = [0:0:1]
\end{align*}
and so $L_{[t_0:t_1]} \cap C_1 = \{P, Q_{[t_0:t_1]}\}$ except for the cases $[t_0:t_1] = [\pm 1:1]$, and then we define $Q_{[\pm1:1]} = P$, so $L_{[\pm 1: 1]} \cap C_1 = \{P\}$. We define the map $\psi :\mathbb{CP}^{1} \longrightarrow C_1$ by $\psi([t_0:t_1]) = Q_{[t_0:t_1]}$, which is one-to-one except that $\psi([1:1]) = \psi([-1:1]) = P$ (see Figures \ref{fishtailsection}.2 and \ref{fishtailsection}.3). So, $C_1$ is homeomorphic to $\mathbb{CP}^{1}$ with $[1:1]$ and $[-1:1]$ identified, and so is homeomorphic to the 2-sphere with two points identified. $\Box$
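As a quick symbolic sanity check (a sketch, not part of the proof above), one can confirm in the chart $z = 1$ that the affine part of $C_1$ has the origin, i.e. $P = [0:0:1]$, as its only singular point:
\begin{verbatim}
# Sketch only: singular points of zy^2 - zx^2 - x^3 = 0 in the chart z = 1.
import sympy as sp

x, y = sp.symbols('x y')
p1 = y**2 - x**2 - x**3

crit = sp.solve([sp.diff(p1, x), sp.diff(p1, y)], [x, y], dict=True)
sing = [pt for pt in crit if p1.subs(pt) == 0]
print(sing)      # [{x: 0, y: 0}], i.e. only P = [0:0:1]
\end{verbatim}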
\newtheorem{cfd}[ff]{Definition}
\begin{cfd}
\upshape
We call a fibre in $E(1)$ that comes from blowing up the curve ambiently isotopic to
\begin{equation*}
C_2 = \{[x:y:z] \in \mathbb{CP}^{2} | zy^2 = x^3 \}
\end{equation*}
a \emph{cusp fibre}.
\end{cfd}
\newtheorem{cfdr1}[ff]{Remark}
\begin{cfdr1}
\upshape
See Figure \ref{fishtailsection}.4 for an example of a curve $C_2$.
\end{cfdr1}
\newtheorem{cf}[ff]{Proposition}
\begin{cf}
The curve $C_2$ is smooth except at the point $P = [0:0:1] \in \mathbb{CP}^{2}$ and is homeomorphic to a sphere.
\end{cf}
Proof: \newline
The calculations are almost identical to the proof for Proposition \ref{fishtailprop}. The only difference is that $C_2 \cap L_{[t_0:t_1]} = \{P, Q_{[t_0:t_1]} \}$, and $Q_{[0:1]} = P$, in which case we have $C_2 \cap L_{[0:1]} = \{P \}$. Therefore the map $\psi : \mathbb{CP}^{1} \longrightarrow C_2$ defined by $\psi([t_0:t_1]) = Q_{[t_0:t_1]}$ gives a homeomorphism between $\mathbb{CP}^{1}$ and $C_2$. $\Box$
\pagebreak
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{fishtail0.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{fishtailsection}.1
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{fishtail2.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{fishtailsection}.2
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{fishtail1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{fishtailsection}.3
\end{center}
\begin{center}
\begin{minipage}{5cm}
\includegraphics[width=5cm]{cusp1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{fishtailsection}.4
\end{center}
\pagebreak
\section{Blowing Up To Create an $\tilde{E_8}$ Fibre} \label{e8sec1}
This example follows the outline given in section \ref{ellfibsec} (source: \cite{S}). \newline
We start with a cubic polynomial $p_1$ and a linear polynomial $l$, which are chosen so that the corresponding curves $C_1$ and $L$ intersect in only one point $P$. This point will therefore be a point of multiplicity 3. If we let the homology class of $L$ be $h \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$, then the homology class of $C_1$ is $3h \in H_2(\mathbb{CP}^{2}; \mathbb{Z})$, and both homology classes have multiplicity 1. \newline
We then define the cubic $p_0$ as $p_0 = l^3$, and then we have two cubics, $p_0$ and $p_1$ such that their corresponding curves $C_0$ and $C_1$ intersect in only one point $P \in \mathbb{CP}^{2}$. We can think of $C_0$ as three copies of $L$ lying on top of each other, each intersecting $C_1$ three times in the single point $P$; therefore $P$ has multiplicity 9. The homology class of $C_0$ is $h$, but it has multiplicity 3. Therefore, the homology class of $C_0$ in $H_2(\mathbb{CP}^{2}; \mathbb{Z})$ is $h(3)=3h$. Notice that the homology class of $C_1$ in $H_2(\mathbb{CP}^{2}; \mathbb{Z})$ is $3h(1) = 3h$. Any element of the pencil on $\mathbb{CP}^{2}$ must represent the same homology class (in this case, $3h$) $-$ this fact (mentioned in \cite{SSS}) will be used repeatedly to calculate the multiplicity of the exceptional curve $e_i$ in the $i$th blow-up at $P$. There will be 9 blow-ups at $P$, since it has multiplicity 9. \newline
The starting point is illustrated in Figure \ref{e8sec1}.1, and the homology classes are indicated next to their respective curves, with their multiplicities in parentheses. We draw $C_0$ as being tangent to $C_1$ at $P$, and also indicate the multiplicity of the point $P$ in parentheses (this multiplicity will decrease by 1 with each blow-up at $P$, until it is 0). \newline
We show that the pencil of curves generated by the polynomials $p_0$ and $p_1$ provides a fibration with an $\tilde{E_8}$ fibre. We shall need to blow up nine times at the point $P$. \newline
The first blow-up introduces the exceptional curve $e_1$. The proper transform of $C_1$ represents the homology class $3h-e_1 \in H_2(\mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}}; \mathbb{Z})$, with multiplicity 1 (if a curve has multiplicity $m$, its proper transform will also have multiplicity $m$). The proper transform of $C_0$ represents the homology class $h - e_1 \in H_2(\mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}}; \mathbb{Z})$, with multiplicity 3 (so actually, it represents $3h- 3e_1 \in H_2(\mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}}; \mathbb{Z})$). We consider $C_0$ and the exceptional curve(s) to be one element of the pencil. Therefore, we need $e_1$ to have multiplicity $m$ so that $3h-3e_1 + me_1 = 3h-e_1$, since every element in the pencil on $\mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}}$ must represent the same homology class (in this case, $3h - e_1$). Therefore, $e_1$ must have multiplicity $2$. Since $L$ intersected $C_1$ in $P$ with multiplicity $3$, it would take three blow-ups at $P$ before the proper transforms of $L$ and $C_1$ would not intersect each other anymore. In fact, the calculation
\begin{align*}
[3h -e_1] \cdot [h - e_1] & = [3h \cdot h] + [3h \cdot (-e_1)] + [-e_1 \cdot h] + [(-e_1) \cdot (-e_1)] \\
&= (3) + (0) + (0) + (-1) \\
&= 2
\end{align*}
shows this (in fact, $h-e_1$ has multiplicity 3, so the curves actually intersect with multiplicity 6). Therefore, we still need to draw the proper transform of $C_0$ so that it intersects the proper transform of $C_1$. To indicate a change, instead of drawing the curves tangent to each other, we draw the proper transform of $C_0$ intersecting the proper transform of $C_1$ transversally. The proper transforms of $C_0$ and $C_1$ still intersect each other in the point $P$, but now only with multiplicity 8. See Figure \ref{e8sec1}.2. \newline
From now on, we shall label a curve with its homology class (and only indicate its multiplicity in the diagrams). Therefore, we shall say that $3h - e_1$ is a curve with multiplicity 1, and $h - e_1$ is a curve with multiplicity 3. The diagrams will make this terminology clear. \newline
We blow-up a second time at $P$ (repeated blow-ups at a point are called \emph{infinitely close blow-ups}; see \cite{PSS}). This blow-up introduces the exceptional curve $e_2$, which separates $e_1$ and $h-e_1$ since their proper transforms, which have homology classes $e_1 - e_2$ and $h - e_1 - e_2$, no longer intersect, as the following calculation shows:
\begin{align*}
[e_1 - e_2] \cdot [h - e_1 - e_2] & = [e_1 \cdot h] + [e_1 \cdot (-e_1)] + [e_1 \cdot (-e_2)] \\
& \; \; - [e_2 \cdot h] - [e_2 \cdot (-e_1)] - [e_2 \cdot (-e_2)] \\
&= (0) + (-(-1)) + (0) - (0) - (0) - (-(-1)) \\
&= 1 - 1\\
&= 0
\end{align*}
Note that the multiplicities of $e_1 - e_2$ and $h-e_1-e_2$ make no difference to the calculation. The proper transform of $3h - e_1$ is $3h - e_1 - e_2$. This is only the second blow-up at $P$, and so $h - e_1 - e_2$ still passes through $P$ transversally (we could again check algebraically that $3h-e_1-e_2$ and $h-e_1-e_2$ intersect). We calculate the multiplicity of $e_2$: since 2 is the multiplicity of $e_1-e_2$, 3 is the multiplicity of $h-e_1-e_2$ and 1 is the multiplicity of $3h-e_1-e_2$, if we let $m$ be the multiplicity of $e_2$, we have
\begin{align*}
& m (e_2) + 2(-e_2) + 3(-e_2) = 1(-e_2) \\
\Rightarrow & m = 4
\end{align*}
which shows that the multiplicity of $e_2$ is 4. See Figure \ref{e8sec1}.3. \newline
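The intersection numbers used in these blow-up calculations can also be checked mechanically. The following small sketch (an aid to the reader, not part of the construction) encodes classes in the basis $(h, e_1, \dots, e_9)$ of $H_2(\mathbb{CP}^{2} \# 9\overline{\mathbb{CP}^{2}}; \mathbb{Z})$, where $h \cdot h = 1$, $e_i \cdot e_i = -1$ and all mixed products vanish; classes from earlier stages simply have zero coefficients in the later $e_i$, and the helper names \texttt{cls} and \texttt{dot} are ad hoc.
\begin{verbatim}
# Sketch only: intersection numbers in H_2(CP^2 # 9 CP^2-bar; Z).
import numpy as np

Q = np.diag([1] + [-1] * 9)          # intersection form in the basis (h, e1..e9)

def cls(h=0, **es):
    # e.g. cls(h=3, e1=-1) is the class 3h - e1
    v = np.zeros(10, dtype=int)
    v[0] = h
    for name, c in es.items():
        v[int(name[1:])] = c
    return v

def dot(a, b):
    return int(a @ Q @ b)

print(dot(cls(h=3, e1=-1), cls(h=1, e1=-1)))           # 2, as above
print(dot(cls(e1=1, e2=-1), cls(h=1, e1=-1, e2=-1)))   # 0, as above
print(dot(cls(e1=1, e2=-1), cls(e1=1, e2=-1)))         # -2, a -2-sphere
\end{verbatim}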
The third blow-up at $P$ introduces the exceptional curve $e_3$. The proper transforms of $3h-e_1 - e_2$ and $h - e_1 - e_2$ are $3h-e_1 - e_2-e_3$ and $h - e_1 - e_2-e_3$, respectively, and it can be calculated that they no longer intersect. $e_3$ has multiplicity 6, and also separates $e_2$ and $h - e_1 - e_2$ (again, their proper transforms $e_2-e_3$ and $h-e_1-e_2-e_3$ no longer intersect). See Figure \ref{e8sec1}.4. \newline
The fourth blow-up at $P$ introduces the exceptional curve $e_4$. The proper transform of $3h-e_1-e_2-e_3$ is $3h-e_1-e_2-e_3-e_4$ and the proper transform of $e_3$ is $e_3-e_4$, and the other curves remain the same. $e_4$ has multiplicity 5. See Figure \ref{e8sec1}.5. \newline
The fifth, sixth, seventh and eighth blow-ups at $P$ follow the same pattern as the fourth blow-up, and are illustrated in Figures \ref{e8sec1}.6,\ref{e8sec1}.7,\ref{e8sec1}.8 and \ref{e8sec1}.9, respectively. It can be seen in the diagrams that the multiplicities of $e_5$, $e_6$, $e_7$ and $e_8$ are 4,3,2 and 1, respectively. \newline
Finally, the ninth blow-up at $P$ introduces the exceptional curve $e_9$. The proper transform of $e_8$ is $e_8-e_9$ (with multiplicity 1) and the proper transform of $3h-e_1-\dots - e_8$ is $3h-e_1-\dots - e_8-e_9$ (still with multiplicity 1). This means that $e_9$ has multiplicity 0, and note that $P$ also has multiplicity 0, which indicates that it is no longer a singular point (after nine blow-ups at $P$). $e_9$ is therefore a section between the two fibres, $3h-e_1-\dots-e_9 $ (which has self-intersection $0$ and is a fishtail fibre) and the rest of Figure \ref{e8sec1}.10, which is an $\tilde{E_8}$ fibre. \newline
To see that it is an $\tilde{E_8}$ fibre, observe that every curve $e_j - e_{j+1}$ ($j=1,\dots,8$) and $h-e_1-e_2-e_3$ has self-intersection $-2$, and so they are $-2$-spheres. Furthermore, these $-2$-spheres intersect each other and have multiplicities as in Figure \ref{e8sec1}.11, which is the diagram of an $\tilde{E_8}$ fibre as in Table \ref{kodairasec}.1, where the vertices represent the $-2$-spheres (which are labelled above with their multiplicities, and below with the curve to which the sphere corresponds), and the lines joining the vertices show that two $-2$-spheres intersect one another. \newline
So, we have given one construction of an $\tilde{E_8}$ fibre. In section \ref{e8algeomsection}, we shall give an explicit construction using two well-chosen polynomials.
\newtheorem{e8secrem1}{Remark}[section]
\begin{e8secrem1} \label{e8secrem1rem}
\upshape
Note that this $\tilde{E_8}$ fibre has one section, the exceptional sphere $e_9$, connecting the fishtail fibre and the $\tilde{E_8}$ fibre. One definition of a section (of an elliptic fibration) is that it is a curve (the image of the section map) that intersects each fibre exactly once. Therefore, a section can only intersect a singular fibre in one of its spheres that has multiplicity 1. $\tilde{E_8}$ has only one such $-2$-sphere, the sphere $e_8-e_9$, and therefore this is the only sphere through which a section can pass, and we say that $\tilde{E_8}$ admits only one section. In the case of $\tilde{E_6}$, which has three $-2$-spheres of multiplicity 1, there are three spheres that a section can possibly intersect, and we say that $\tilde{E_6}$ has 3 sections.
\end{e8secrem1}
There is a useful lemma from \cite{SSS} (which also contains a proof of this lemma):
\newtheorem{fibseclemsss}[e8secrem1]{Lemma}
\begin{fibseclemsss} \label{fibseclemssslab}
Suppose that two cubic polynomials $p_1, p_2$ define a pencil of elliptic curves in $\mathbb{CP}^{2}$ with $k$ base points. Suppose furthermore that the pencil contains at least one smooth cubic curve. Then the fibration has $k$ sections.
\end{fibseclemsss}
In the next section we construct an elliptic fibration over $E(1)$ with an $\tilde{E_8}$ fibre.
\pagebreak
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{start1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.1
\end{center}
\begin{center}
\begin{minipage}{8cm}
\includegraphics[width=8cm]{blow1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.2
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{blow2.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.3
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{blow3.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.4
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{blow4.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.5
\end{center}
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{blow5.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.6
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{blowup6.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.7
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{blowup7.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.8
\end{center}
\begin{center}
\begin{minipage}{13cm}
\includegraphics[width=13cm]{blowup8.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.9
\end{center}
\begin{center}
\begin{minipage}{13cm}
\includegraphics[width=13cm]{blowup9.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.10
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{e8dyn4.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e8sec1}.11
\end{center}
\pagebreak
\section{An Elliptic Fibration on $E(1)$ with an $\tilde{E_8}$ Fibre} \label{e8algeomsection}
The method used in this section can be found in \cite{SS}. \newline
We explicitly construct an elliptic fibration on $E(1)$ with one $\tilde{E_8}$ fibre and two fishtail fibres. We do the calculations in detail in order to show how such constructions are done. Again, in this section there are times when we shall not distinguish between a polynomial and the curve to which it corresponds.\newline
The construction in section \ref{e8sec1} will guide our choice of cubics. We choose one cubic to be $p_1(x,y,z) = zy^2 - zx^2 - x^3$, the ``prototype'' of the fishtail fibre. We now need to choose another cubic $p_0(x,y,z)$ that intersects $p_1(x,y,z)$ in exactly one point, and for ease of calculation, we would like this cubic to be fairly simple. Furthermore, this cubic must be the perfect cube of a homogeneous polynomial of degree one (since it must represent a line with multiplicity 3) that intersects the curve $C_1$ exactly once (because of Lemma \ref{fibseclemssslab} and the fact that $\tilde{E_8}$ has only one section). We try $p_0(x,y,z) = z^3$. We define the curves
\begin{align}
C_0 &= \{[x:y:z] \in \mathbb{CP}^{2} | p_0(x,y,z) = z^3 = 0 \} \\
C_1 &= \{[x:y:z] \in \mathbb{CP}^{2} | p_1(x,y,z) = zy^2 - zx^2 - x^3 = 0 \}
\end{align}
Let us calculate the intersection point(s) of $C_0$ and $C_1$:
\begin{align}
& z^3 = 0 \nonumber \\
\Rightarrow & z = 0 \nonumber \\
& zy^2 - zx^2 - x^3 = 0 \qquad (\mathrm{and} \;\; z = 0) \nonumber \\
\Rightarrow & -x^3 = 0 \nonumber \\
\Rightarrow & x = 0 \nonumber \\
& x= 0 \; \; \mathrm{and} \;\; z =0 \nonumber \\
\Rightarrow & P = [0:1:0] \qquad(\mathrm{since} \; \; [x:y:z]\in \mathbb{CP}^{2})
\end{align}
and so the only intersection point of $z^3=0$ and $zy^2-zx^2-x^3=0$ is the point $P=[0:1:0]$. This point will have multiplicity 9. The pencil
\begin{equation*}
C_{[t_0:t_1]} = \{(t_0 p_0 + t_1 p_1)^{-1}(0) \; | \; [t_0:t_1] \in \mathbb{CP}^{1} \}
\end{equation*}
of elliptic curves defined by $C_0$ and $C_1$ provides a map from $\mathbb{CP}^{2}$ to $\mathbb{CP}^{1}$ well-defined away from the point $P = [0:1:0]$. We perform nine (infinitely close) blow-ups at $P$ (as in section \ref{e8sec1}) in order to get the desired elliptic fibration on $E(1)$ with one $\tilde{E_8}$ and one fishtail fibre. Since the Euler characteristic of an $\tilde{E_8}$ fibre is 10 and that of a fishtail is 1, and since the Euler characteristic of $E(1)$ is 12, we know (from section \ref{kodairasec}), that there must be one other fishtail fibre in this elliptic fibration. We shall now explicitly ``find'' this fibre. \newline
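As a brief symbolic check (a sketch only, not part of the construction), in the chart $y = 1$ the two cubics reduce to $z^3$ and $z - zx^2 - x^3$, whose only common zero is $(x,z) = (0,0)$; outside this chart $y = 0$, and then $z^3 = 0$ and $-x^3 = 0$ would force $x = y = z = 0$, which is not a point of $\mathbb{CP}^{2}$. This confirms that $P = [0:1:0]$ is the unique base point.
\begin{verbatim}
# Sketch only: the base point of the pencil, computed in the chart y = 1.
import sympy as sp

x, z = sp.symbols('x z')
sols = sp.solve([z**3, z - z*x**2 - x**3], [x, z], dict=True)
print(sols)      # [{x: 0, z: 0}], i.e. the single base point P = [0:1:0]
\end{verbatim}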
We know that our singular fibres are curves $C_{[t_0:t_1]}$ which correspond to polynomials, where $[t_0:t_1] \in \mathbb{CP}^{1}$. We define
\begin{equation*}
p_t(x,y,z) = t_0 p_0(x,y,z) + t_1 p_1(x,y,z) \qquad (t = [t_0:t_1], [x:y:z] \in \mathbb{CP}^{2})
\end{equation*}
Since we are working in $\mathbb{CP}^{2}$, in order to find singular points using the Implicit Function Theorem, we need to work in the charts $[1:y:z]$, $[x:1:z]$ and $[x:y:1]$ (which we shall call charts 1, 2 and 3, respectively), which are all isomorphic to $\mathbb{C}^2$. For example, working in the chart $z=1$, where each point is of the form $[x:y:1]$, our polynomial $p_t$ becomes
\begin{equation*}
p_t(x,y) = t_0 p_0(x,y) + t_1 p_1(x,y) \qquad ((x,y) \in \mathbb{C}^2)
\end{equation*}
and then a singular point $(x_0,y_0) \in \mathbb{C}^2$ is a point that satisfies the following three equations:
\begin{align*}
p_t(x_0,y_0) = 0 \\
\frac{\partial p_t}{ \partial x}(x_0,y_0) = 0 \\
\frac{\partial p_t}{ \partial y}(x_0,y_0) = 0
\end{align*}
and then we say $[x_0:y_0:1] \in \mathbb{CP}^{2}$ is a singular point of $p_t$. We see that the polynomial for $[t_0:t_1] = [0:1]$, $p_{[0:1]}(x,y,z) = zy^2 - zx^2 - x^3$, corresponds to the fishtail fibre (see section \ref{fishtailsection}), and the polynomial for $[t_0:t_1] = [1:0]$, $p_{[1:0]}(x,y,z) = z^3$, corresponds to the $\tilde{E_8}$ fibre (and every point $[x:y:0]$ is a singular point). \newline
Since we have covered the cases when either $t_0 = 0$ or $t_1 = 0$, we now look for all the polynomials $p_t$ with $t_0 \neq 0$ and $t_1 \neq 0$ that have singular points. Let us return to our example
\begin{align}
p_0(x,y,z) &=z^3 \\
p_1(x,y,z) &= zy^2 - zx^2 - x^3 \\
p_t(x,y,z) &= t_0z^3 + t_1(zy^2 - zx^2 - x^3) \qquad (t = [t_0:t_1] \in \mathbb{CP}^{1}) \label{e8polyform}
\end{align}
From now on, we shall just write $p_t$ as $p$.
\subsection*{Chart 1}
Let us start\footnote{The reader could skip ahead to chart 3.} by looking for singular points in chart 1, $[1:y:z]$. We need to find points $(y,z) \in \mathbb{C}^2$ that satisfy the following three equations
\begin{align}
p(y,z) &= t_0z^3 + t_1(zy^2 - z - 1) = 0 \label{e8peq}\\
\frac{\partial p}{\partial y}(y,z) &= t_1 2zy = 0 \label{e8pyeq}\\
\frac{\partial p}{\partial z}(y,z) &= t_0 3z^2 + t_1(y^2 -1) =0 \label{e8pzeq}
\end{align}
In order for equation $\eqref{e8pyeq}$ to hold, we need
\begin{itemize}
\item[(i)] $z = 0$, or
\item[(ii)] $y = 0$
\end{itemize}
since $t_1 \neq 0$. In case (i), if $z=0$, then equation $\eqref{e8peq}$ implies
\begin{equation*}
p(y,0) = t_1(- 1) = 0
\end{equation*}
which is a contradiction, since $t_1 \neq 0$. \newline
In case (ii), if $y=0$, then equation $\eqref{e8pzeq}$ implies
\begin{align}
t_0 3z^2 - t_1 &= 0 \nonumber \\
\Rightarrow -t_1 &= -t_0 3z^2 \nonumber \\
\Rightarrow \frac{t_1}{t_0} &= 3z^2 \label{e8t0t11}
\end{align}
and with $y=0$ equation $\eqref{e8peq}$ becomes
\begin{align*}
& t_0z^3 + t_1(- z - 1) = 0 \\
\Rightarrow & z^3 + \frac{t_1}{t_0}(- z - 1) = 0 \\
\end{align*}
and substituting in equation $\eqref{e8t0t11}$, this becomes
\begin{align*}
& z^3 + 3z^2(- z - 1) = 0 \\
\Rightarrow & z^3 - 3z^3 - 3z^2 = 0 \\
\Rightarrow & -2z^3 - 3z^2 = 0 \\
\Rightarrow & -2z^2 \Big(z + \frac{3}{2} \Big) = 0 \\
\Rightarrow & z=0 \; \; \mathrm{or} \; \; z = -\frac{3}{2}
\end{align*}
The case $z=0$ was shown above to lead to a contradiction. However, substituting $z = -\frac{3}{2}$ back into $\eqref{e8t0t11}$ gives
\begin{align*}
& \frac{t_1}{t_0} = 3(-\frac{3}{2})^2 \\
\Rightarrow & \frac{t_1}{t_0} = \frac{27}{4} \\
\Rightarrow & \frac{t_0}{t_1} = \frac{4}{27} \\
\Rightarrow & [t_0:t_1] = [\frac{4}{27}:1]
\end{align*}
and it can be checked that the point $[1:0:-\frac{3}{2}]$ is indeed a singular point of $\frac{4}{27}z^3 + zy^2 - zx^2 - x^3 = 0$. \newline
Although we were only expecting to find one more singular curve with a single singular point (a fishtail fibre), we check the other charts for completeness' sake.
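The claim above is easy to verify symbolically; the following short \texttt{sympy} sketch (an aside, not part of the calculation) substitutes $(y,z) = (0, -\frac{3}{2})$ and $[t_0:t_1] = [\frac{4}{27}:1]$ into the three equations of chart 1:
\begin{verbatim}
# Sketch only: [1:0:-3/2] is a singular point of the fibre over [4/27:1].
import sympy as sp

y, z = sp.symbols('y z')
t0, t1 = sp.Rational(4, 27), 1
p = t0*z**3 + t1*(z*y**2 - z - 1)       # p_t in the chart x = 1

vals = {y: 0, z: sp.Rational(-3, 2)}
print(p.subs(vals), sp.diff(p, y).subs(vals), sp.diff(p, z).subs(vals))
# prints 0 0 0
\end{verbatim}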
\subsection*{Chart 2}
In chart 2, $y=1$ and every point is of the form $[x:1:z]$. We need to find points $(x,z)\in \mathbb{C}^2$ that satisfy the following three equations
\begin{align}
p(x,z) &= t_0z^3 + t_1(z - zx^2 - x^3) = 0 \label{e8peq2}\\
\frac{\partial p}{\partial x}(x,z) &= t_1 (-2zx - 3x^2) = 0 \label{e8pxeq2}\\
\frac{\partial p}{\partial z}(x,z) &= t_0 3z^2 + t_1(1-x^2) =0 \label{e8pzeq2}
\end{align}
From equation $\eqref{e8pxeq2}$ we get
\begin{align*}
& -2zx - 3x^2 =0\\
\Rightarrow & -2x \Big( z+\frac{3}{2}x \Big) = 0
\end{align*}
which gives us two cases
\begin{itemize}
\item[(i)] $x = 0$, or
\item[(ii)] $z =-\frac{3}{2}x$
\end{itemize}
In case (i), if $x=0$, then equation $\eqref{e8pzeq2}$ becomes
\begin{align}
& t_0 3z^2 + t_1 =0 \nonumber \\
\Rightarrow & t_1 = -t_0 3z^2 \nonumber \\
\Rightarrow & \frac{t_1}{t_0} = -3z^2 \label{e8alphaeq2}
\end{align}
and equation $\eqref{e8peq2}$ becomes
\begin{align*}
& t_0z^3 + t_1z = 0 \\
\Rightarrow & z^3 + \frac{t_1}{t_0}z = 0 \\
\Rightarrow & z^3 + (- 3z^2) z = 0 \qquad \mathrm{using} \;\; \eqref{e8alphaeq2} \\
\Rightarrow & -2z^3 = 0 \\
\Rightarrow & z =0
\end{align*}
and $z=0$ in equation \eqref{e8alphaeq2} implies $t_1 = 0$, which contradicts our choice of $t_1 \neq 0$. \newline
In case (ii), $z =-\frac{3}{2}x$, equation $\eqref{e8pzeq2}$ becomes
\begin{align}
& t_0 3 \Big( -\frac{3}{2}x \Big) ^2 + t_1(1-x^2) =0 \nonumber \\
\Rightarrow & \frac{t_0}{t_1} \frac{27}{4}x^2 + 1-x^2 =0 \nonumber \\
\Rightarrow & \Big( \frac{t_0}{t_1} \frac{27}{4} - 1 \Big)x^2 = -1 \nonumber \\
\Rightarrow & \frac{t_0}{t_1} \frac{27}{4} - 1 = -\frac{1}{x^2} \nonumber \\
\Rightarrow & \frac{t_0}{t_1} \frac{27}{4} = -\frac{1}{x^2} + 1 \label{e8alphaeq3}
\end{align}
and equation $\eqref{e8peq2}$ becomes
\begin{align*}
& t_0 \Big(-\frac{3}{2}x \Big)^3 + t_1\Bigg( \Big(-\frac{3}{2}x \Big) - \Big(-\frac{3}{2}x \Big) x^2 - x^3 \Bigg) = 0 \\
\Rightarrow & t_0 \Big(-\frac{27}{8} \Big) x^3 + t_1\Bigg( -\frac{3}{2}x + \frac{3}{2}x^3 - x^3 \Bigg) = 0 \\
\Rightarrow & -\frac{t_0}{t_1} \frac{27}{8} x^3 -\frac{3}{2}x + \frac{1}{2}x^3 = 0 \\
\Rightarrow & x \Bigg( -\frac{t_0}{t_1} \frac{27}{8} x^2 -\frac{3}{2} + \frac{1}{2}x^2 \Bigg) = 0 \\
\Rightarrow & -\frac{t_0}{t_1} \frac{27}{8} x^2 -\frac{3}{2} + \frac{1}{2}x^2 = 0 \qquad (x \neq 0) \\
\Rightarrow & x^2 \Bigg(-\frac{t_0}{t_1} \frac{27}{8} + \frac{1}{2} \Bigg) = \frac{3}{2} \\
\Rightarrow & x^2 \Bigg(-\frac{1}{2}\frac{t_0}{t_1} \frac{27}{4} + \frac{1}{2} \Bigg) = \frac{3}{2}\\
\Rightarrow & x^2 \Bigg(-\frac{1}{2} \Big(-\frac{1}{x^2} + 1 \Big) + \frac{1}{2} \Bigg) = \frac{3}{2} \qquad (\mathrm{using} \; \; \eqref{e8alphaeq3})
\end{align*}
\begin{align*}
\Rightarrow & x^2 \Bigg(\frac{1}{2}\frac{1}{x^2} + \Big(-\frac{1}{2}\Big) + \frac{1}{2} \Bigg) = \frac{3}{2} \\
\Rightarrow & x^2 \Bigg(\frac{1}{2}\frac{1}{x^2} \Bigg) = \frac{3}{2} \\
\Rightarrow & \frac{1}{2} = \frac{3}{2}
\end{align*}
which is a contradiction (note also that if $x = 0$ in case (ii), then $z = -\frac{3}{2}x = 0$, and equation \eqref{e8pzeq2} would force $t_1 = 0$, again a contradiction). Therefore, there are no singular points in chart 2.
\subsection*{Chart 3}
In chart 3, $z=1$ and every point is of the form $[x:y:1]$. We need to find points $(x,y)\in \mathbb{C}^2$ that satisfy the following three equations
\begin{align}
p(x,y) &= t_0 + t_1(y^2 - x^2 - x^3) = 0 \label{e8peq3}\\
\frac{\partial p}{\partial x}(x,y) &= t_1 (-2x - 3x^2) = 0 \label{e8pxeq3}\\
\frac{\partial p}{\partial y}(x,y) &= t_1(2y) =0 \label{e8pyeq3}
\end{align}
Equation $\eqref{e8pyeq3}$ implies that $y=0$, since $t_1 \neq 0$. Equation $\eqref{e8pxeq3}$ gives
\begin{align*}
& -2x - 3x^2 = 0 \\
\Rightarrow & -x(2 + 3x) = 0 \\
\Rightarrow & x = 0 \; \; \mathrm{or} \;\; x = -\frac{2}{3}
\end{align*}
If $x=0$, since $y=0$ equation $\eqref{e8peq3}$ becomes
\begin{equation*}
t_0 = 0
\end{equation*}
which contradicts our choice of $t_0 \neq 0$. If $x = -\frac{2}{3}$, then equation $\eqref{e8peq3}$ becomes
\begin{align}
& t_0 + t_1\Bigg(- \Big(-\frac{2}{3} \Big)^2 - \Big(-\frac{2}{3} \Big)^3 \Bigg) = 0 \\
\Rightarrow & t_0 + t_1\Bigg(- \frac{4}{9} + \frac{8}{27} \Bigg) = 0 \\
\Rightarrow & t_0 - \frac{4}{27}t_1 = 0 \\
\Rightarrow & t_0 = \frac{4}{27}t_1
\end{align}
which is the point $[\frac{4}{27}:1] \in \mathbb{CP}^{1}$. We expected this, since $[-\frac{2}{3}:0:1] \sim [1:0:-\frac{3}{2}] \in \mathbb{CP}^{2}$ (which we calculated is a singular point in chart 1). Therefore, if both $t_0 \neq 0$ and $t_1 \neq 0$, there is only one polynomial of the form $\eqref{e8polyform}$ that defines a singular curve, namely
\begin{equation}
p(x,y,z) = \frac{4}{27} z^3 + zy^2 - zx^2 - x^3
\end{equation}
which corresponds to the singular curve
\begin{equation}
C_3 = \{[x:y:z] \in \mathbb{CP}^{2} | \frac{4}{27} z^3 + zy^2 - zx^2 - x^3 = 0\} \label{thirdfflab}
\end{equation}
which has a singular point at $[-\frac{2}{3}:0:1]\in \mathbb{CP}^{2}$.
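The same kind of check can be run in chart 3 (again only a hedged SymPy sketch, added here for convenience; it verifies the chart 3 computation only, the other charts having been handled above). Setting $\alpha = \frac{t_0}{t_1}$, with both coordinates nonzero:
\begin{verbatim}
# Chart 3 check: with alpha = t_0/t_1, the singular-point conditions are
#   alpha + y^2 - x^2 - x^3 = 0,   -2x - 3x^2 = 0,   2y = 0.
import sympy as sp

x, y, alpha = sp.symbols('x y alpha')

p  = alpha + y**2 - x**2 - x**3     # p / t_1 in the chart z = 1
px = -2*x - 3*x**2                  # p_x / t_1
py = 2*y                            # p_y / t_1

print(sp.solve([p, px, py], [x, y, alpha]))
# Expected output (in some order): [(-2/3, 0, 4/27), (0, 0, 0)]
# The solution with alpha = 0 corresponds to t_0 = 0 (excluded), leaving the
# single singular member t_0/t_1 = 4/27 with singular point [-2/3 : 0 : 1].
\end{verbatim}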
\subsection*{$C_3$ is a fishtail fibre}
We proceed as in the proof of Proposition \ref{fishtailprop}. We first note that the calculations above show that $C_3$ has $P=[-\frac{2}{3}:0:1]$ as its only singular point. The space of projective lines through the point $P=[-\frac{2}{3}:0:1]$ consists of the lines of the form
\begin{equation*}
ax + by + cz = 0 \qquad [a:b:c] \in \mathbb{CP}^{2}
\end{equation*}
that satisfy
\begin{align*}
& a \Big(-\frac{2}{3} \Big) + b(0) + c(1)= 0 \\
\Rightarrow & c = \frac{2}{3} a
\end{align*}
and so we can parametrize this space of lines by $[u_0:u_1] \in \mathbb{CP}^{1}$ by
\begin{equation*}
L_{[u_0:u_1]} = \{[x:y:z] \in \mathbb{CP}^{2} | u_0x + u_1y + \frac{2}{3}u_0 z = 0 \}
\end{equation*}
Defining
\begin{equation}
\alpha = \frac{u_0}{u_1} \label{alphau0u1lab}
\end{equation}
we have
\begin{align}
u_0x &+ u_1y + \frac{2}{3}u_0 z = 0 \nonumber \\
\Rightarrow u_1 y &= -u_0x - u_0\frac{2}{3}z \nonumber \\
\Rightarrow y &= -\frac{u_0}{u_1} \Big(x + \frac{2}{3}z \Big) \nonumber \\
\Rightarrow y &= -\alpha \Big(x + \frac{2}{3}z \Big) \label{yeqne8}
\end{align}
and then substituting this into our polynomial $\frac{4}{27} z^3 + zy^2 - zx^2 - x^3 = 0$, we get
\begin{align}
& \frac{4}{27} z^3 + zy^2 - zx^2 - x^3 = 0 \nonumber \\
\Rightarrow & \frac{4}{27} z^3 + z\Bigg( -\alpha \Big(x + \frac{2}{3}z \Big) \Bigg)^2 - zx^2 - x^3 = 0 \nonumber \\
\Rightarrow & \frac{4}{27} z^3 + \alpha ^2 z \Big(x + \frac{2}{3}z \Big)^2 - zx^2 - x^3 = 0 \label{zxfact1}
\end{align}
We shall now extract a factor of $\Big( x+\frac{2}{3}z \Big)^2$ from the remaining terms:
\begin{align*}
\frac{4}{27} z^3 - zx^2 - x^3 &= \frac{4}{27} z^3 - x \Big( x^2 + xz \Big) \\
&= \frac{4}{27} z^3 - x \Big( x^2 + \frac{4}{3}xz + \frac{4}{9}z^2 \Big) + \frac{1}{3}x^2 z + \frac{4}{9}xz^2 \\
&= \frac{1}{3}x^2 z + \frac{4}{9}xz^2 + \frac{4}{27} z^3 - x \Big(x + \frac{2}{3}z \Big)^2 \\
&= \frac{1}{3}z \Big( x^2 + \frac{4}{3}xz + \frac{4}{9} z^2 \Big) - x \Big(x + \frac{2}{3}z \Big)^2 \\
&= \frac{1}{3}z \Big( x + \frac{2}{3} z \Big)^2 - x \Big(x + \frac{2}{3}z \Big)^2 \\
&= \Big( x + \frac{2}{3} z \Big)^2 \Big(\frac{1}{3}z - x \Big)
\end{align*}
and therefore $\eqref{zxfact1}$ becomes
\begin{align*}
& \Big( x+\frac{2}{3} z \Big)^2 \Big(\frac{1}{3}z - x \Big) + \alpha ^2 z \Big(x + \frac{2}{3}z \Big)^2 = 0 \\
\Rightarrow & \Big( x+\frac{2}{3} z \Big)^2 \Big(\frac{1}{3}z - x + \alpha ^2 z \Big) = 0 \\
\Rightarrow & x = -\frac{2}{3}z \; \; \mathrm{or} \;\; x= \Big( \frac{1}{3} + \alpha ^2 \Big) z
\end{align*}
$x = -\frac{2}{3}z$ corresponds to the point $P = [-\frac{2}{3}:0:1]$, but for the second value of $x$, equation $\eqref{yeqne8}$ gives
\begin{align*}
y &= -\alpha \Bigg( \Big( \frac{1}{3} + \alpha ^2 \Big) z + \frac{2}{3}z \Bigg) \\
\Rightarrow y &= -\alpha ( 1 + \alpha ^2)z
\end{align*}
and so we have the point (where we recall that $\alpha = \frac{u_0}{u_1}$ for $[u_0:u_1] \in \mathbb{CP}^{1}$)
\begin{equation*}
Q_{[u_0:u_1]} = [\frac{1}{3} + \Big(\frac{u_0}{u_1}\Big) ^2 : -\frac{u_0}{u_1} ( 1 + \Big(\frac{u_0}{u_1}\Big) ^2) : 1 ]
\end{equation*}
It is perhaps worthwhile to recap what we have just done and try to interpret the results of our calculations. We have just calculated the intersection points of $C_3$ and a line $L_{[u_0:u_1]}$. Our solution $x = -\frac{2}{3}z$ shows that each line $L_{[u_0:u_1]}$ passes through the point $P=[-\frac{2}{3}:0:1]$. We expected this solution, since we are considering the pencil of all curves passing through $P$. The point $Q_{[u_0:u_1]}$ is the other intersection point of $C_3$ and $L_{[u_0:u_1]}$. \newline
Now, we notice that if $\Big( \frac{u_0}{u_1} \Big)^2 = -1$ then $Q_{[u_0:u_1]} = P = [-\frac{2}{3}:0:1]$. Therefore, for the points $[i:1]$ and $[-i:1]$, $Q_{[i:1]} = Q_{[-i:1]} = P$. So, $L_{[u_0:u_1]} \cap C_3 = \{P, Q_{[u_0:u_1]} \}$, except if $[u_0:u_1] = [\pm i:1]$, in which case $L_{[i:1]} \cap C_3 = L_{[-i:1]} \cap C_3 = \{P\}$. We define the map $\psi : \mathbb{CP}^{1} \longrightarrow C_3$ by $\psi ([u_0:u_1]) = Q_{[u_0:u_1]}$, and we define $Q_{[\pm i: 1]} = P$. We have already shown that $C_3$ has only one singular point, and this calculation shows that $C_3$ is homeomorphic to $\mathbb{CP}^{1}$ with two points identified. We can therefore conclude that $C_3$ is a fishtail fibre. $\Box$
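As a small consistency check (an addition for the reader's convenience, assuming the SymPy library; it is not part of the proof), one can verify symbolically that the points $Q_{[u_0:u_1]}$ really do lie on $C_3$:
\begin{verbatim}
# Substituting x = 1/3 + alpha^2, y = -alpha*(1 + alpha^2), z = 1 into the
# defining cubic of C_3 should give identically zero in alpha.
import sympy as sp

alpha = sp.symbols('alpha')

x = sp.Rational(1, 3) + alpha**2
y = -alpha*(1 + alpha**2)
z = 1

cubic = sp.Rational(4, 27)*z**3 + z*y**2 - z*x**2 - x**3
print(sp.expand(cubic))   # expected output: 0
\end{verbatim}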
\pagebreak
\section{A Construction of an $\tilde{E_6}$ Fibre} \label{e6constrsec}
In this section, we use the techniques explained in section \ref{e8sec1} to give a construction of an $\tilde{E_6}$ fibre. \newline
By Remark \ref{e8secrem1rem} and Lemma \ref{fibseclemssslab}, we need to choose two cubics $p_0$ and $p_1$ that intersect transversally in exactly three points. We choose a cubic $p_1$ so that the curve corresponding to $p_1$ will have homology class $3h$, with multiplicity 1. As in section \ref{e8sec1}, we shall label the curves with their homology classes, so we shall call the curve corresponding to $p_1$ the curve $3h$. Next, we choose a projective line $L$ that intersects $3h$ in exactly three points in $\mathbb{CP}^{2}$. The line $L$ corresponds to a linear polynomial $l$ which has homology class $h$. We then choose the cubic $p_0$ to be $p_0 = l^3$, and therefore the curve corresponding to $p_0$ has homology class $h$ with multiplicity 3 (and we shall also call this curve $h$). We label the intersection points $P_1$, $P_2$ and $P_3$. Each intersection point has multiplicity 3. See Figure \ref{e6constrsec}.1. \newline
We start by blowing-up at $P_1$. This introduces the exceptional curve $e_1$, and the proper transform of $h$ is $h-e_1$ and the proper transform of $3h$ is $3h-e_1$. Therefore, $e_1$ has multiplicity 2. See Figure \ref{e6constrsec}.2. \newline
The second and third blow-ups (at $P_2$ and $P_3$, respectively) are similar to the first, each introducing the exceptional curves $e_2$ and $e_3$, respectively, both with multiplicity 2, and the proper transforms of the curves corresponding to $p_0$ and $p_1$ are now $h-e_1-e_2-e_3$ (with multiplicity 3) and $3h-e_1-e_2-e_3$ (with multiplicity 1), respectively. It can be calculated that these two curves no longer intersect each other. This gives us Figure \ref{e6constrsec}.3. \newline
We perform the fourth blow-up at $P_1$, the fifth blow-up at $P_2$ and the sixth blow-up at $P_3$. These blow-ups introduce the exceptional curves $e_4$, $e_5$ and $e_6$, which each have multiplicity 1. The proper transforms of $e_1$, $e_2$ and $e_3$ are $e_1-e_4$, $e_2-e_5$ and $e_3-e_6$, respectively, and the proper transform of $3h-e_1-e_2-e_3$ is $3h-e_1 - \dots - e_6$. See Figure \ref{e6constrsec}.4. \newline
\pagebreak
We perform the seventh blow-up at $P_1$, introducing the exceptional curve $e_7$, the eighth blow-up at $P_3$ introducing $e_8$ and the ninth blow-up at $P_2$, introducing $e_9$. This unexpected ordering of blow-ups is done so that the homology classes of the resulting $-2$-spheres will be the same as in \cite{P1}. The proper transforms of $e_4$, $e_5$ and $e_6$ are $e_4-e_7$, $e_5-e_9$ and $e_6-e_8$, respectively. These are therefore $-2$-spheres each with multiplicity 1. The proper transform of $3h-e_1-\dots-e_6$ is $3h-e_1-\dots-e_6-e_7-e_8-e_9$ (still with multiplicity 1), and therefore the three exceptional curves $e_7$, $e_8$ and $e_9$ each have multiplicity 0, and are therefore sections of the $\tilde{E_6}$ fibre. See Figures \ref{e6constrsec}.5 and \ref{e6constrsec}.6.
\pagebreak
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{starte6_1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e6constrsec}.1
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{blowupe61.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e6constrsec}.2
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{blowupe623.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e6constrsec}.3
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{blowupe6456.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e6constrsec}.4
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{blowupe6789.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e6constrsec}.5
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{e6dyn7.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{e6constrsec}.6
\end{center}
\pagebreak
\section{An Elliptic Fibration on $E(1)$ with an $\tilde{E_6}$ Fibre} \label{explicitE6sec}
We explicitly construct an elliptic fibration on $E(1)$ with one $\tilde{E_6}$ fibre and four fishtail fibres.\newline
We follow the construction in section \ref{e6constrsec}, where we started with two cubics that intersected each other transversally in three points, and then blew up three times at each of those three points. \newline
We know from \cite{GS} that a fishtail fibre is ambiently isotopic to
\begin{equation}
C_1 = \{ [x:y:z] \in \mathbb{CP}^{2} | zy^2 = x^3 + zx^2 \} \label{fishtaillab}
\end{equation}
so, we shall take one cubic to be
\begin{equation}
p_1(x,y,z) = zy^2 - zx^2- x^3 \label{fish1lab}
\end{equation}
We now need to choose a cubic $p_0$ such that $p_0$ and $p_1$ intersect transversally in three points, which means that the set
\begin{equation*}
\{P \in \mathbb{CP}^{2} \, | \, p_0(P) = 0 \; \mathrm{and} \; p_1(P) = 0 \}
\end{equation*}
contains exactly three distinct points, at each of which the two curves meet transversally (not tangentially). \newline
Furthermore, we need $p_0$ to be a perfect cube, i.e. a polynomial of the form
\begin{equation*}
p_0(x,y,z) = (ax + by + cz)^3
\end{equation*}
for some constants $a,b,c \in \mathbb{C}$. \newline
We shall choose $p_0(x,y,z) = (y + \frac{1}{2}z)^3$ (see Remark \ref{choiceremlab}). So, we have
\begin{align}
p_0(x,y,z) &= (y + \frac{1}{2}z)^3 \label{p0lab} \\
p_1(x,y,z) &= zy^2 - zx^2- x^3 \label{p1lab}
\end{align}
\subsection*{The intersection points}
Firstly,
\begin{align}
& p_0(x,y,z) = 0 \nonumber \\
\Rightarrow & (y + \frac{1}{2}z)^3 = 0 \nonumber \\
\Rightarrow & y + \frac{1}{2}z = 0 \nonumber \\
\Rightarrow & y = -\frac{1}{2}z \nonumber \\
\Rightarrow & y^2 = \frac{1}{4}z^2 \label{p0ylab}
\end{align}
and substituting $\eqref{p0ylab}$ into $\eqref{p1lab}$, we get
\begin{align}
& p_1(x,y,z) = 0 \nonumber \\
\Rightarrow & zy^2 - zx^2- x^3 = 0 \nonumber \\
\Rightarrow & z \Big( \frac{1}{4}z^2 \Big) - zx^2 - x^3 = 0 \nonumber \\
\Rightarrow & z^3 - 4zx^2 - 4x^3 = 0 \label{p1yzlab}
\end{align}
Now, if $x=0$, we get from $\eqref{p1yzlab}$
\begin{align*}
& z^3 - 4zx^2 - 4x^3 = 0 \;\; \mathrm{and} \;\; x=0 \\
\Rightarrow & z^3 = 0 \\
\Rightarrow & z = 0
\end{align*}
and, from $\eqref{p0ylab}$, if $z=0$, then $y=0$. Since $[0:0:0]$ is not a point in $\mathbb{CP}^{2}$, we can assume that $x \neq 0$. Dividing $\eqref{p1yzlab}$ through by $x^3$ and setting $t = \frac{z}{x}$, we get
\begin{align}
& z^3 - 4zx^2 - 4x^3 = 0 \nonumber \\
\Rightarrow & \Big( \frac{z}{x} \Big) ^3 - 4 \Big( \frac{z}{x} \Big) - 4 = 0 \nonumber\\
\Rightarrow & t^3 - 4t - 4 = 0 \nonumber\\
\Rightarrow & t^3 - 4t = 4 \label{tcublab}
\end{align}
From \cite{We}, we can explicitly calculate the roots of a cubic equation of the form
\begin{equation*}
t^3 + pt = q \qquad p,q \in \mathbb{R}
\end{equation*}
We have the following intermediate variables:
\begin{align}
Q &= \frac{1}{3}p \label{peqlab} \\
R &= \frac{1}{2}q \label{qeqlab} \\
D &= \frac{p^3}{27} + \frac{q^2}{4} \\
S &= \sqrt[3]{R + \sqrt{D}} \\
T &= \sqrt[3]{R - \sqrt{D}} \\
A &= S+T \\
B &= S-T \label{Beqlab}
\end{align}
Then the three roots $t_1, t_2, t_3$ are:
\begin{align}
t_1 &= A \label{root1lab} \\
t_2 &= -\frac{1}{2} A + i \frac{\sqrt{3}}{2} B \label{root2lab}\\
t_3 &= -\frac{1}{2} A - i \frac{\sqrt{3}}{2} B \label{root3lab}
\end{align}
For our particular cubic in $\eqref{tcublab}$, $t^3 - 4t = 4$, we have $p = -4$ in $\eqref{peqlab}$ and $q=4$ in $\eqref{qeqlab}$, and so from equations $\eqref{peqlab}$ to $\eqref{Beqlab}$ we get
\begin{align}
A &= \sqrt[3]{2 + \sqrt{ \frac{44}{27} } } + \sqrt[3]{2 - \sqrt{ \frac{44}{27} } } \\
B &= \sqrt[3]{2 + \sqrt{ \frac{44}{27} } } - \sqrt[3]{2 - \sqrt{ \frac{44}{27} } }
\end{align}
Substituting these values for $A$ and $B$ back into equations $\eqref{root1lab}$, $\eqref{root2lab}$ and $\eqref{root3lab}$, it is simple algebra to verify that these are indeed solutions to our polynomial in $\eqref{tcublab}$. \newline
Or, we could check numerically that
\begin{align}
t_1 &\approx 2.3830 \\
t_2 &\approx -1.1915 + 0.5089i \\
t_3 &\approx -1.1915 - 0.5089i
\end{align}
are solutions to $\eqref{tcublab}$. Recalling that $t = \frac{z}{x}$, $y = -\frac{1}{2}z$, $x \neq 0$ and that $[x:y:z] \in \mathbb{CP}^{2}$, we can let $x =1$ and then we get the three intersection points as
\begin{align}
P_1 &= [1: -\frac{1}{2}t_1 :t_1] \\
P_2 &= [1: -\frac{1}{2}t_2 :t_2] \\
P_3 &= [1: -\frac{1}{2}t_3 :t_3]
\end{align}
Notice that these are three distinct intersection points between the line $y + \frac{1}{2}z = 0$ and the cubic $zy^2 - zx^2- x^3 = 0$; since a line meets a cubic in at most three points, three distinct intersection points must all be transverse. Therefore, the cubic $p_0(x,y,z) = (y + \frac{1}{2}z)^3$ and the cubic $p_1(x,y,z) = zy^2 - zx^2- x^3$ intersect in exactly these three points, and so blowing up the intersection points of the two cubic curves corresponding to $p_0$ and $p_1$ will give rise to an elliptic fibration with an $\tilde{E_6}$ fibre (as in section \ref{e6constrsec}), and at least one fishtail fibre. The next calculation shows that there are in fact four fishtail fibres.
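Before moving on, the numerics above can be double-checked with a few lines of SymPy (a sketch only, added here for convenience; the exact floating-point output may differ slightly):
\begin{verbatim}
# The three roots of t^3 - 4t - 4 = 0 give points [1 : -t/2 : t] lying on
# both p_0 = 0 and p_1 = 0.
import sympy as sp

t = sp.symbols('t')
roots = sp.Poly(t**3 - 4*t - 4, t).nroots()
print(roots)   # approx [2.3830, -1.1915 - 0.5089*I, -1.1915 + 0.5089*I]

x, y, z = sp.symbols('x y z')
p0 = (y + z/2)**3
p1 = z*y**2 - z*x**2 - x**3

for r in roots:
    vals = {x: 1, y: -r/2, z: r}
    print(sp.simplify(p0.subs(vals)), sp.simplify(p1.subs(vals)))
# Each line should print two numbers that are zero up to numerical precision.
\end{verbatim}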
\subsection*{The singular points}
We now look at every polynomial of the form
\begin{equation}
p_{[t_0:t_1]} = t_0 p_0 + t_1 p_1
\end{equation}
where $[t_0:t_1] \in \mathbb{CP}^{1}$ and $p_0$, $p_1$ are as in $\eqref{p0lab}$ and $\eqref{p1lab}$. Every polynomial of that form is a cubic which passes through the intersection points $P_1,P_2$ and $P_3$. We wish to find the values of $[t_0:t_1] \in \mathbb{CP}^{1}$ such that $p_{[t_0:t_1]}$ is a singular curve. Again, we use the Implicit Function Theorem. We start with the polynomial
\begin{equation}
p(x,y,z) = t_0 \Big(y+\frac{1}{2}z \Big)^3 + t_1(zy^2-zx^2-x^3) \label{polylab}
\end{equation}
The point $[0:1]$ corresponds to a fishtail fibre (see equation $\eqref{fishtaillab}$ and \cite{GS}) and the point $[1:0]$ corresponds to the polynomial $p_0(x,y,z) = (y + \frac{1}{2}z)^3$ which is singular at every point $[x:-\frac{1}{2}z:z] \in \mathbb{CP}^{2}$ and corresponds to the $\tilde{E_6}$ fibre. We therefore look for points (polynomials) $[t_0:t_1]$ such that $t_0 \neq 0$ and $t_1 \neq 0$, and so defining $\alpha = \frac{t_0}{t_1}$, instead of $\eqref{polylab}$ we can consider the polynomial
\begin{equation}
p(x,y,z) = \alpha \Big(y+\frac{1}{2}z\Big)^3 + zy^2-zx^2-x^3 \label{polylab2}
\end{equation}
Since we are working in $\mathbb{CP}^{2}$, we need to work in charts in order to use the Implicit Function Theorem. We first look in the chart $z=1$, i.e. $[x:y:1]$. Then, the polynomial in $\eqref{polylab2}$ becomes
\begin{equation}
p(x,y,1) = \alpha \Big(y+\frac{1}{2} \Big)^3 + y^2-x^2-x^3
\end{equation}
and the partial derivatives are
\begin{align}
\frac{\partial p}{\partial x}(x,y,1) &= -2x- 3x^2 \label{partialxlab}\\
\frac{\partial p}{\partial y}(x,y,1) &= 3\alpha \Big(y+\frac{1}{2} \Big)^2 + 2y \label{partialylab}
\end{align}
Recall that a singular point is a point $P$ such that
\begin{equation*}
p(P) = \frac{\partial p}{\partial x}(P) = \frac{\partial p}{\partial y}(P) = 0
\end{equation*}
One can check (for instance with the short computation sketched below) that there are only three values of $\alpha$, and therefore only three points $[t_0:t_1]$, that give singular polynomials $p_{[t_0:t_1]}$, each with a single singular point $P$. These values are:
\begin{align}
[t_0:t_1] &= [-\frac{8}{27}:1] &&P = [0:1:1] \label{singcurve1}\\
[t_0:t_1] &= [-\frac{32}{121}:1] &&P = [-\frac{2}{3}: \frac{4}{3} :1] \label{singcurve2} \\
[t_0:t_1] &= [1:\frac{1}{8}] &&P = [-\frac{2}{3}: -\frac{1}{3} :1] \label{singcurve3}
\end{align}
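The computation referred to above can be carried out, for example, with the following SymPy sketch (an illustration only, assuming a standard SymPy installation; it is not part of the argument):
\begin{verbatim}
# Singular members of the pencil in the chart z = 1, with alpha = t_0/t_1.
import sympy as sp

x, y, alpha = sp.symbols('x y alpha')

p  = alpha*(y + sp.Rational(1, 2))**3 + y**2 - x**2 - x**3
px = sp.diff(p, x)          # -2x - 3x^2
py = sp.diff(p, y)          # 3*alpha*(y + 1/2)^2 + 2y

for sol in sp.solve([p, px, py], [x, y, alpha], dict=True):
    print(sol)
# Expected solutions (in some order):
#   {x: 0,    y: 0,    alpha: 0}        alpha = 0 is t_0 = 0, the fishtail fibre p_1 itself
#   {x: 0,    y: 1,    alpha: -8/27}
#   {x: -2/3, y: 4/3,  alpha: -32/121}
#   {x: -2/3, y: -1/3, alpha: 8}        alpha = 8 is the point [1 : 1/8]
\end{verbatim}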
\pagebreak
\newtheorem{choicerem}{Remark}[section]
\begin{choicerem} \label{choiceremlab}
\upshape
It should be remarked that the polynomial $p_0(x,y,z) = \Big(y + \frac{1}{2}z \Big)^3$ was chosen specifically so that the singular points would be rational, to make the calculations easier. The trade-off is that the intersection points are irrational, but since these points are not used in any further calculations (after we have shown that there are three intersection points, we don't use them anymore), this is not a problem.
\end{choicerem}
We now need to determine the type of singular fibre(s) to which these three polynomials correspond. We show that the polynomial given by $\eqref{singcurve1}$ is a fishtail fibre. \newline
Consider the polynomial
\begin{equation}
p(x,y,z) = -\frac{8}{27} \Big(y+\frac{1}{2}z \Big)^3 + zy^2 - zx^2 - x^3 \label{singpolylab}
\end{equation}
and let $C$ be the curve
\begin{equation}
C = \{[x:y:z] \in \mathbb{CP}^{2} | -\frac{8}{27} \Big(y+\frac{1}{2}z \Big)^3 + zy^2 - zx^2 - x^3 = 0 \}
\end{equation}
We know this curve has a singular point $P = [0:1:1]$, and we look at the space of projective lines that pass through $P$. A line of the form
\begin{equation*}
ax+by+cz = 0 \qquad a,b,c \in \mathbb{C}
\end{equation*}
must satisfy
\begin{align*}
& a(0)+b(1)+c(1) = 0 \\
\Rightarrow & c = -b
\end{align*}
and so we can parametrize these lines by $\mathbb{CP}^{1}$ by setting $a=u_0$, $b=u_1$ for $[u_0:u_1]\in \mathbb{CP}^{1}$, and then these lines are of the form
\begin{equation*}
L_{[u_0:u_1]} = \{[x:y:z] \in \mathbb{CP}^{2} | u_0x + u_1 y - u_1 z = 0 \}
\end{equation*}
which shows that for every line of this form,
\begin{equation*}
z = y + \frac{u_0}{u_1}x
\end{equation*}
and letting
\begin{equation}
\beta = \frac{u_0}{u_1} \label{betau0lab}
\end{equation}
we have
\begin{equation}
z = y + \beta x \label{zbetalabel}
\end{equation}
Substituting this back into $\eqref{singpolylab}$, we get
\begin{align*}
& p(x,y,z) = 0\\
\Rightarrow & -\frac{8}{27} \Big(y+\frac{1}{2}z \Big)^3 + zy^2 - zx^2 - x^3 =0 \\
\Rightarrow & -\frac{8}{27} \Big(y+\frac{1}{2}(y + \beta x) \Big)^3 + (y + \beta x)y^2 - (y + \beta x)x^2 - x^3 =0 \\
\Rightarrow & -\frac{8}{27} \Big(\frac{3}{2}y + \frac{1}{2}\beta x \Big)^3 + y^3 + \beta xy^2 - x^2 y - \beta x^3 - x^3 =0 \\
\Rightarrow & -\frac{8}{27} \Bigg(\frac{27}{8}y^3 + 3 \Big( \frac{9}{4} \Big) \Big( \frac{1}{2} \Big) \beta xy^2 + 3 \Big( \frac{3}{2} \Big) \Big( \frac{1}{4} \Big) \beta ^2 x^2y + \frac{1}{8} \beta ^3 x^3 \Bigg) \\
& \qquad + y^3 + \beta xy^2 - x^2 y - (\beta + 1) x^3 =0 \\
\Rightarrow & -y^3 - \beta xy^2 - \frac{1}{3} \beta ^2 x^2y - \frac{1}{27} \beta ^3 x^3 + y^3 + \beta xy^2 - x^2 y - (\beta +1) x^3=0 \\
\Rightarrow & -y^3 + y^3 - \beta xy^2 + \beta xy^2 - \frac{1}{3} \beta ^2 x^2y - \frac{1}{27} \beta ^3 x^3 - x^2 y - (\beta +1) x^3=0 \\
\Rightarrow & - \frac{1}{3} \beta ^2 x^2y - \frac{1}{27}\beta ^3 x^3 - x^2 y - (\beta +1) x^3=0 \\
\Rightarrow & \Big(- \frac{1}{3} \beta ^2 - 1 \Big) x^2y - (\beta +1 +\frac{1}{27}\beta ^3 ) x^3=0
\end{align*}
and dividing through by $y^3$ and letting $t= \frac{x}{y}$, we get
\begin{align*}
& \Big(- \frac{1}{3} \beta ^2-1\Big) \Big(\frac{x}{y}\Big)^2 - (\beta +1 +\frac{1}{27}\beta ^3 ) \Big( \frac{x}{y} \Big)^3=0 \\
\Rightarrow & \Big(- \frac{1}{3} \beta ^2 -1 \Big) t^2 - (\beta +1 +\frac{1}{27}\beta ^3 ) t ^3=0
\end{align*}
and so if
\begin{equation*}
- \frac{1}{3} \beta ^2 -1 = 0
\end{equation*}
(which, after a short calculation, implies that $(\beta +1 +\frac{1}{27}\beta ^3 ) \neq 0$) \newline
we then must have $t = 0$ with multiplicity 3, which implies
\begin{align}
& t = 0 \nonumber \\
\Rightarrow & \frac{x}{y} = 0 \nonumber \\
\Rightarrow & x= 0 \nonumber \\
\Rightarrow & z= y + \beta (0) \label{zeqylab} \\
\Rightarrow & z = y \nonumber
\end{align}
which is the point $[0:1:1]$ (the singular point), and where we have used $\eqref{zbetalabel}$ in line $\eqref{zeqylab}$. But
\begin{align*}
& - \frac{1}{3} \beta ^2 -1 = 0 \\
\Rightarrow & \beta ^2 = -3 \\
\Rightarrow & \beta = \pm \sqrt{3}i \\
\Rightarrow & \frac{u_0}{u_1} = \pm \sqrt{3}i
\end{align*}
where the last line follows from $\eqref{betau0lab}$. This means that the two lines corresponding to $[\sqrt{3}i:1]$ and $[-\sqrt{3}i:1]$ each intersect the curve $C$ in the point $[0:1:1]$ (and in no other points). As in the argument for $C_3$ above, this shows that the polynomial in $\eqref{singpolylab}$ corresponds to a fishtail fibre.
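For completeness, here is a short SymPy sketch (an addition for the reader's convenience, not part of the original computation) of the substitution above and of the condition $\beta^2 = -3$:
\begin{verbatim}
# Restrict the cubic to the line z = y + beta*x and examine the result.
import sympy as sp

x, y, beta = sp.symbols('x y beta')

z = y + beta*x
cubic = -sp.Rational(8, 27)*(y + z/2)**3 + z*y**2 - z*x**2 - x**3

poly = sp.Poly(sp.expand(cubic), x, y)
print(poly)
# Expected: (-beta^2/3 - 1)*x^2*y - (beta^3/27 + beta + 1)*x^3, no other terms.

# The coefficient of x^2*y vanishes exactly when beta^2 = -3:
print(sp.solve(sp.Eq(-beta**2/3 - 1, 0), beta))   # [-sqrt(3)*I, sqrt(3)*I]
# ...and at those values the x^3 coefficient is nonzero:
for b in (sp.sqrt(3)*sp.I, -sp.sqrt(3)*sp.I):
    print(sp.simplify((beta**3/27 + beta + 1).subs(beta, b)))
# Expected: 1 + 8*sqrt(3)*I/9 and 1 - 8*sqrt(3)*I/9, both nonzero.
\end{verbatim}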
\subsection*{The other singular fibres}
So far, we have shown that the two polynomials
\begin{align*}
p_0(x,y,z) &= \Big(y+\frac{1}{2}z \Big)^3 \\
p_1(x,y,z) &= (zy^2-zx^2-x^3)
\end{align*}
give rise to an elliptic fibration with an $\tilde{E_6}$ fibre and two fishtail fibres. By Kodaira's classification of singular fibres in section \ref{kodairasec}, the Euler characteristic of an $\tilde{E_6}$ fibre is 8, and the Euler characteristic of a fishtail fibre is 1, so the sum of the Euler characteristics of these three fibres is 10. \newline
There are two more singular curves, given by $\eqref{singcurve2}$ and $\eqref{singcurve3}$,
\begin{equation*}
C_1 = \{[x:y:z] \in \mathbb{CP}^{2} | -\frac{32}{121}\Big(y + \frac{1}{2}z \Big)^3 + zy^2 - zx^2 - x^3 =0 \}
\end{equation*}
which has a singular point at $[-\frac{2}{3}: \frac{4}{3}: 1]$, and
\begin{equation*}
C_2 = \{[x:y:z] \in \mathbb{CP}^{2} | \Big(y + \frac{1}{2}z \Big)^3 + \frac{1}{8}\Big(zy^2 - zx^2 - x^3\Big) =0 \}
\end{equation*}
which has a singular point at $[-\frac{2}{3}: -\frac{1}{3}: 1]$. \newline
Both of these singular curves must have Euler characteristic at least 1. However, the Euler characteristic of $E(1) = \mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}$ is 12, and since the smooth fibres are tori (of Euler characteristic 0), the Euler characteristics of the singular fibres must sum to 12. So if one of these two singular fibres had Euler characteristic greater than 1, the sum of the Euler characteristics of the five singular fibres would exceed 12, which is a contradiction. Therefore, both $C_1$ and $C_2$ must be fishtail fibres, and thus we have found an elliptic fibration on $E(1)$ with an $\tilde{E_6}$ fibre and four fishtail fibres.
\pagebreak
\section{An Exotic $\mathbb{CP}^2 \# 7 \overline{\mathbb{CP}^2}$} \label{mainsectionlabel}
In this section, we give an exposition of J. Park's construction in \cite{P1} of an exotic $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$. \newline
Let $C_{7}$ be the plumbing given by Figure \ref{x7proofsection}.1, without the meridians $a_i$ $(i=0,1, \dots, 5)$. \newline
It is proved in \cite{FS1} that if the intersection form of $H_2(C_7; \mathbb{Z})$ with respect to the basis $\{u_{1}, u_{2}, \hdots , u_{6}\}$ is given by
\begin{displaymath}
P =
\left( \begin{array} {cccccc}
-2 & 1 & 0 & 0 & 0 & 0 \\
1 & -2 & 1 & 0 & 0 & 0 \\
0 & 1 & -2 & 1 & 0 & 0 \\
0 & 0 & 1 & -2 & 1 & 0 \\
0 & 0 & 0 & 1 & -2 & 1 \\
0 & 0 & 0 & 0 & 1 & -9
\end{array} \right)
\end{displaymath}
then, if $\{\gamma_{1}, \gamma_{2}, \hdots, \gamma_{6} \}$ is the basis of $H_{2}(C_{7}, \partial C_{7} ; \mathbb{Z}) \cong H^{2}(C_{7} ; \mathbb{Z})$ which is dual to the basis $\{u_{1}, u_{2}, \hdots , u_{6}\}$ of $ H_{2}(C_{7} ; \mathbb{Z})$, i.e. $<\gamma_{i}, u_{j}> = \delta_{ij}$, the intersection form of $H^{2}(C_{7} ; \mathbb{Q})$ with respect to this basis is
\begin{displaymath}
T = P^{-1} = \frac{-1}{49}
\left( \begin{array} {cccccc}
41 & 33 & 25 & 17 & 9 & 1 \\
33 & 66 & 50 & 34 & 18 & 2 \\
25 & 50 & 75 & 51 & 27 & 3 \\
17 & 34 & 51 & 68 & 36 & 4 \\
9 & 18 & 27 & 36 & 45 & 5 \\
1 & 2 & 3 & 4 & 5 & 6
\end{array} \right)
\end{displaymath}
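It is easy to check this inverse by machine; the following SymPy fragment (included only as a sanity check, not needed for the argument) verifies the whole matrix, although we shall only need the last row/column:
\begin{verbatim}
import sympy as sp

P = sp.Matrix([
    [-2,  1,  0,  0,  0,  0],
    [ 1, -2,  1,  0,  0,  0],
    [ 0,  1, -2,  1,  0,  0],
    [ 0,  0,  1, -2,  1,  0],
    [ 0,  0,  0,  1, -2,  1],
    [ 0,  0,  0,  0,  1, -9],
])

T = P.inv()
print(T * 49)     # expected: the integer matrix above, with an overall minus sign
print(T.row(5))   # expected: Matrix([[-1/49, -2/49, -3/49, -4/49, -5/49, -6/49]])
\end{verbatim}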
\newtheorem{remp1}{Remark}[section]
\begin{remp1} \label{sec21rem1}
\upshape
It will turn out that we only need to know the values in the last row/column.
\end{remp1}
For the proof of the main theorem, we shall need the fact that the manifold obtained by performing the rational blowdown carries a symplectic structure. In \cite{Sy}, M. Symington proved the following theorem (stated in \cite{P1}, as below) which gives conditions under which a rationally blown-down manifold admits a symplectic structure. \pagebreak
\newtheorem{sy1}[remp1]{Theorem}
\begin{sy1}
Suppose that $(X, \omega)$ is a symplectic 4-manifold that contains a configuration $C_{p}$. Suppose further that all the 2-spheres in $C_{p}$ are symplectically embedded in $X$ and intersect each other orthogonally. Then the manifold which is the rational blow-down of $X$ along $C_{p}$, denoted by $X_{p} = X_{0} \cup_{L(p^{2}, 1-p)} B_{p}$, admits a symplectic 2-form $\omega_{p}$ such that $(X_{0}, \omega_{p}|_{X_{0}})$ is symplectomorphic to $(X_{0}, \omega|_{X_{0}})$.
\label{sy1ref}
\end{sy1}
As noted in \cite{Sy}, the fact that the 2-spheres are symplectic means that the orthogonal intersections are positive. \newline
Now suppose we have a 4-manifold $X$ that satisfies the conditions for Theorem \ref{sy1ref} above; $(X, \omega)$ is a symplectic 4-manifold, $X = X_{0} \cup_{L(p^{2}, 1-p)} C_{p}$, and $C_{p}$ is symplectically embedded in $X$. So, $X_{p} = X_{0} \cup_{L(p^{2}, 1-p)} B_{p}$ is a symplectic 4-manifold with symplectic 2-form $\omega_{p}$, such that there is a symplectomorphism $\psi_{p} : (X_{0}, \omega_{p}|_{X_{0}}) \longrightarrow (X_{0}, \omega|_{X_{0}})$. Let $K$ be the canonical class on $X$ induced by $\omega$, and let $K_{p}$ be the canonical class on $X_{p}$ induced by $\omega_{p}$ (see Section \ref{canclasssection}). \newline
We now claim that $H^{1}(L(p^{2}, 1-p); \mathbb{Q})$ and $H^{2}(L(p^{2}, 1-p); \mathbb{Q})$ are both trivial.
\newtheorem{claim1}[remp1]{Claim}
\begin{claim1} \label{claim1ref}
$H^{1}(L(p^{2}, 1-p); \mathbb{Q})$ is trivial
\end{claim1}
Proof: \newline
For ease of reading the calculations, we shall denote the lens space $L(p^2, 1-p)$ simply by $L$. Firstly, by the Universal Coefficient Theorem,
\begin{equation*}
H^{1}(L; \mathbb{Q}) \cong \mathrm{Hom}(H_{1}(L; \mathbb{Z}); \mathbb{Q}) \oplus \mathrm{Ext}(H_{0}(L; \mathbb{Z}); \mathbb{Q})
\end{equation*}
We know $\pi_1(L) \cong \mathbb{Z}_{p^2}$, and since $H_1(L; \mathbb{Z})$ is the abelianization of $\pi_1(L)$ (which is already abelian, in this case), we have $H_1(L; \mathbb{Z}) \cong \mathbb{Z}_{p^2}$. \newline
Recall that if $F$ is a finite abelian group and $G$ is a torsion-free abelian group (such as $\mathbb{Q}$), then $\mathrm{Hom}(F,G)$ must be trivial, since if $\phi: F \longrightarrow G$ is a homomorphism, then for all $x \in F$, $n\phi(x) = \phi(nx) = 0_{G}$, where $n$ is the order of $F$, and since $G$ is torsion-free this forces $\phi(x) = 0_{G}$ for each $x \in F$. \newline
In particular $\mathrm{Hom}(\mathbb{Z}_{p^2}; \mathbb{Q})$ is trivial, and so $\mathrm{Hom}(H_{1}(L; \mathbb{Z}); \mathbb{Q})$ is trivial. \newline
Recall from \cite{H1} that if $F$ is a free abelian group, then $\mathrm{Ext}(F; \mathbb{Q})$ is trivial. Since $H_{0}(L; \mathbb{Z}) \cong \mathbb{Z}$ (see \cite{H1}), $\mathrm{Ext}(H_{0}(L; \mathbb{Z}); \mathbb{Q})$ is trivial, and so finally we have that $H^{1}(L; \mathbb{Q})$ is trivial. This proves the claim. $\Box$ \newline
\newtheorem{claim2}[remp1]{Claim}
\begin{claim2} \label{claim2ref}
$H^{2}(L(p^{2}, 1-p); \mathbb{Q})$ is trivial
\end{claim2}
Proof: \newline
By Poincar\'e Duality, since $L(p^{2}, 1-p)$ is a closed orientable 3-manifold,
\begin{equation*}
H^{2}(L(p^{2}, 1-p); \mathbb{Q}) \cong H_{1}(L(p^{2}, 1-p); \mathbb{Q})
\end{equation*}
$H_{1}(L(p^{2}, 1-p); \mathbb{Z}) \cong \mathbb{Z}_{p^2}$, and changing from integral to rational coefficients shows us that $H_{1}(L(p^{2}, 1-p); \mathbb{Q})$ is trivial. $\Box$ \newline
We now claim that the triviality of these cohomology groups allows us to decompose $K$ and $\omega$ as $K = K|_{X_{0}} + K|_{C_{p}}$ and $[\omega] = [\omega|_{X_{0}}] + [\omega|_{C_{p}}]$ (where $K|_{X_{0}}, [\omega|_{X_{0}}] \in H^{2}(X_{0}, \mathbb{Q})$, $K|_{C_{p}}, [\omega|_{C_{p}}] \in H^{2}(C_{p}; \mathbb{Q})$ and $K, [\omega] \in H^{2}(X; \mathbb{Q})$). \newline
If we choose $A$ to be a small open neighbourhood\footnote{I say ``small open neighbourhood'' to mean $A$ deformation retracts onto $X_{0}$, so the homology groups of $A$ and $X_{0}$ are isomorphic. The same goes for $B$ and $A \cap B$.} of $X_0$ and $B$ to be a small open neighbourhood of $C_{p}$, so that $A \cap B$ is a small open neighbourhood of $L(p^2, 1-p)$ and $X = \mathrm{int}(A) \cup \mathrm{int}(B)$, then the Mayer-Vietoris sequence (see \cite{H1})
\begin{equation*}
\dots \longrightarrow H_{2}(A \cap B) \longrightarrow H_{2}(A) \oplus H_{2}(B) \longrightarrow H_{2}(X) \longrightarrow H_{1}(A \cap B) \longrightarrow \dots
\end{equation*}
implies, by Poincar\'e duality and the claims above, that
\begin{align*}
H_{2}(L(p^{2}, 1-p)) &\longrightarrow H_{2}(X_{0}) \oplus H_{2}(C_{p}) \longrightarrow H_{2}(X) \longrightarrow H_{1}(L(p^{2}, 1-p)) \\
\Rightarrow 0 &\longrightarrow H_{2}(X_{0}) \oplus H_{2}(C_{p}) \longrightarrow H_{2}(X) \longrightarrow 0
\end{align*}
which implies $H_{2}(X_{0}) \oplus H_{2}(C_{p}) \cong H_{2}(X)$ (and the omitted coefficients are the rationals). This proves the claim. $\Box$\newline
So, we have the following decompositions:
\begin{equation*}
K = K|_{X_{0}} + K|_{C_{p}}
\end{equation*}
where $K \in H^{2}(X; \mathbb{Q})$, $K|_{X_{0}} \in H^{2}(X_{0}, \mathbb{Q})$ and $K|_{C_{p}} \in H^{2}(C_{p}; \mathbb{Q})$, and
\begin{equation*}
[\omega] = [\omega|_{X_{0}}] + [\omega|_{C_{p}}]
\end{equation*}
where $[\omega] \in H^{2}(X; \mathbb{Q})$, $[\omega|_{X_{0}}] \in H^{2}(X_{0}, \mathbb{Q})$ and $[\omega|_{C_{p}}] \in H^{2}(C_{p}; \mathbb{Q})$. \newline
Similarly, $K_{p}$ and $[\omega_{p}]$ decompose as:
\begin{align*}
&K_{p} = K_{p}|_{X_{0}} + K_{p}|_{B_{p}} \\
&[\omega_{p}] = [\omega_{p}|_{X_{0}}] + [\omega_{p}|_{B_{p}}]
\end{align*}
where $K_{p}, [\omega_{p}] \in H^{2}(X_{p}; \mathbb{Q})$, $K_{p}|_{X_{0}}, [\omega_{p}|_{X_{0}}] \in H^{2}(X_{0}, \mathbb{Q})$ and $K_{p}|_{B_{p}}, [\omega_{p}|_{B_{p}}] \in H^{2}(B_{p}; \mathbb{Q})$. \newline
(Basically, all the cohomology classes are where we expect them to be.) \newline
These decompositions lead to the following lemma (Lemma 2.1 in \cite{P1}).
\newtheorem{L21}[remp1]{Lemma}
\begin{L21} \label{kpomegaplemma}
Under the same hypothesis on $(X, K, \omega)$ and $(X_{p}, K_{p}, \omega_{p})$ as above, we have
\begin{equation*}
K_{p} \cdot [\omega_{p}] = K \cdot [\omega] - K|_{C_{p}} \cdot [\omega|_{C_{p}}]
\end{equation*}
\end{L21}
Proof:\newline
We have the following portion of the long exact sequence of the pair $(B_{p}, \partial B_{p})$:
\begin{equation*}
H_{2}(\partial B_{p}; \mathbb{Q}) \longrightarrow H_{2}(B_{p}; \mathbb{Q}) \longrightarrow H_{2}(B_{p}, \partial B_{p}; \mathbb{Q}) \longrightarrow H_{1} (\partial B_{p}; \mathbb{Q})
\end{equation*}
Since $\partial B_{p} \cong L(p^{2}, 1-p)$, from Claims \ref{claim1ref} and \ref{claim2ref} above, both $H_{2}(\partial B_{p}; \mathbb{Q})$ and $H_{1}(\partial B_{p}; \mathbb{Q})$ are trivial, and so this reduces to the short exact sequence
\begin{equation*}
0 \longrightarrow H_{2}(B_{p}; \mathbb{Q}) \longrightarrow H_{2}(B_{p}, \partial B_{p}; \mathbb{Q}) \longrightarrow 0
\end{equation*}
proving $H_{2}(B_{p}; \mathbb{Q}) \cong H_{2}(B_{p}, \partial B_{p}; \mathbb{Q})$. \newline
By Poincar\'e duality, $H_{2}(B_{p}, \partial B_{p}; \mathbb{Q}) \cong H^{2}(B_{p}; \mathbb{Q})$, so $H^{2}(B_{p}; \mathbb{Q}) \cong H_{2}(B_{p}; \mathbb{Q})$. Since $B_{p}$ is a rational ball, $H_{2}(B_{p}; \mathbb{Q})$ is trivial, and so $H^{2}(B_{p}; \mathbb{Q})$ is also trivial. \newline
Therefore, $K_{p}|_{B_{p}}$ and $[\omega_{p}|_{B_{p}}]$ are zero elements in $H^{2}(B_{p}; \mathbb{Q})$, and consequently zero elements in $H^{2}(X_{p}; \mathbb{Q})$. \newline
Now let us explicitly calculate $K_{p} \cdot [\omega_{p}]$:
\begin{align*}
K_{p} \cdot [\omega_{p}] &= (K_{p}|_{X_{0}} + K_{p}|_{B_{p}}) \cdot ([\omega_{p}|_{X_{0}}] + [\omega_{p}|_{B_{p}}] ) \\
&= K_{p}|_{X_{0}} \cdot [\omega_{p}|_{X_{0}}] + K_{p}|_{X_{0}} \cdot [\omega_{p}|_{B_{p}}] + K_{p}|_{B_{p}} \cdot [\omega_{p}|_{X_{0}}] + K_{p}|_{B_{p}} \cdot [\omega_{p}|_{B_{p}}] \\
&= K_{p}|_{X_{0}} \cdot [\omega_{p}|_{X_{0}}]
\end{align*}
since each of the last three terms in the second line contains $K_{p}|_{B_{p}}$ or $[\omega_{p}|_{B_{p}}]$, which are zero. \newline
By the definition of the symplectomorphism $\psi_{p} : (X_{0}, \omega_{p}|_{X_{0}}) \longrightarrow (X_{0}, \omega|_{X_{0}})$, we must have $\psi_{p}^{*} ([\omega|_{X_{0}}]) = [\omega_{p}|_{X_{0}}]$ and $\psi_{p}^{*} (K|_{X_{0}}) = K_{p}|_{X_{0}}$, so we have
\begin{equation} \label{psieq1}
K_{p} \cdot [\omega_{p}] = K_{p}|_{X_{0}} \cdot [\omega_{p}|_{X_{0}}] = \psi_{p}^{*}(K|_{X_{0}}) \cdot \psi_{p}^{*}([\omega|_{X_{0}}])
\end{equation}
Furthermore, since $\psi_{p}^{*}$ is a homomorphism,
\begin{equation} \label{psieq2}
\psi_{p}^{*}(K|_{X_{0}}) \cdot \psi_{p}^{*}([\omega|_{X_{0}}]) = \psi_{p}^{*}(K|_{X_{0}} \cdot [\omega|_{X_{0}}])
\end{equation}
Since $\psi_{p}$ is a symplectomorphism, it is in particular an orientation-preserving diffeomorphism, and so $\psi_{p}^{*}$ induces isomorphisms between the homology and cohomology groups of $(X_0, \omega_{p}|_{X_0})$ and $(X_0, \omega|_{X_0})$. \newline
Since $H^4(X_0; \mathbb{Z}) \cong \mathbb{Z}$, $\psi_{p}^{*}$ maps $1 \in H^4((X_0, \omega_{p}|_{X_0}); \mathbb{Z})$ to $1 \in H^4((X_0, \omega|_{X_0}); \mathbb{Z})$ (because it is orientation-preserving, $1$ does not get mapped to $-1$). Therefore,
\begin{equation} \label{psieq3}
\psi_{p}^{*}(K|_{X_{0}} \cdot [\omega|_{X_{0}}]) = K|_{X_{0}} \cdot [\omega|_{X_{0}}]
\end{equation}
and using equations $\eqref{psieq2}$ and $\eqref{psieq3}$, we have
\begin{equation} \label{psieq4}
\psi_{p}^{*}(K|_{X_{0}}) \cdot \psi_{p}^{*}([\omega|_{X_{0}}]) = K|_{X_{0}} \cdot [\omega|_{X_{0}}]
\end{equation}
Finally, using equations $\eqref{psieq1}$ and $\eqref{psieq4}$, we have
\begin{equation} \label{kdo1ref}
K_{p} \cdot [\omega_{p}] = K|_{X_{0}} \cdot [\omega|_{X_{0}}]
\end{equation}
We now have $K_{p} \cdot [\omega_{p}] = K|_{X_{0}} \cdot [\omega|_{X_{0}}]$. Calculating
\begin{align*}
K \cdot [\omega] &= (K|_{X_{0}} + K|_{C_{p}}) \cdot ([\omega|_{X_{0}}] + [\omega|_{C_{p}}]) \\
&= K|_{X_{0}} \cdot [\omega|_{X_{0}}] + K|_{X_{0}} \cdot [\omega|_{C_{p}}] + K|_{C_{p}} \cdot [\omega|_{X_{0}}] + K|_{C_{p}} \cdot [\omega|_{C_{p}}]
\end{align*}
Now, $K|_{X_{0}}$ and $[\omega|_{C_{p}}]$ are classes restricted to different submanifolds, and so $K|_{X_{0}} \cdot [\omega|_{C_{p}}] = 0$. Algebraically, this is because each term belongs to a different summand of $H_{2}(X_{0}; \mathbb{Q}) \oplus H_{2}(C_{p}; \mathbb{Q}) \cong H_{2}(X; \mathbb{Q})$. Similarly, $K|_{C_{p}} \cdot [\omega|_{X_{0}}] = 0$ and so we have
\begin{equation} \label{kdo2ref}
K \cdot [\omega] = K|_{X_{0}} \cdot [\omega|_{X_{0}}] + K|_{C_{p}} \cdot [\omega|_{C_{p}}]
\end{equation}
Putting $\eqref{kdo1ref}$ and $\eqref{kdo2ref}$ together, we have
\begin{equation*}
K_{p} \cdot [\omega_{p}] = K|_{X_{0}} \cdot [\omega|_{X_{0}}] = K \cdot [\omega] - K|_{C_{p}} \cdot [\omega|_{C_{p}}]
\end{equation*}
which proves the lemma. $\Box$ \newline
Next, we shall need the fact that $E(1) = \mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}$ can be described as an elliptic fibration over $\mathbb{CP}^{1}$ with one $\tilde{E_{6}}$ singular fibre and four fishtail fibres. The existence of such a fibration was shown in section \ref{explicitE6sec}. \newline
The following is Lemma 3.1 in \cite{P1}, credited to D. Auroux and R. Fintushel.
\newtheorem{L31}[remp1]{Lemma}
\begin{L31} \label{aurouxfintlemma}
The second (co)homology classes $[S_{i}]$ ($1 \leq i \leq 7$) of the 2-spheres $S_{i}$ embedded in $\tilde{E_{6}}$ can be represented by: \newline
$[S_{1}] = e_{4} - e_{7}$, $[S_{2}] = e_{1} - e_{4}$, $[S_{3}] = h - e_{1} - e_{2} - e_{3}$, \newline
$[S_{4}] = e_{2} - e_{5}$, $[S_{5}] = e_{5} - e_{9}$, $[S_{6}] = e_{3} - e_{6}$, $[S_{7}] = e_{6} - e_{8}$; \newline
where $h$ denotes a generator of $H_{2}(\mathbb{CP}^{2}; \mathbb{Z})$ and each $e_{i}$ denotes the (co)homology class represented by the i-th exceptional curve in $\overline{\mathbb{CP}^{2}} \subset E(1) = \mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}$.
\end{L31}
Section \ref{e6constrsec} gives a proof of this Lemma, and we can take the labelling of the spheres $S_1, \dots, S_7$ to be as given above. \newline
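As a check on this labelling (again a SymPy sketch added here for convenience, not part of the argument), one can verify that each $[S_i]$ has square $-2$ and that the classes intersect in the pattern of the $\tilde{E_6}$ Dynkin diagram:
\begin{verbatim}
# Classes are written in the basis (h, e_1, ..., e_9) of H_2(E(1); Q)
# with intersection form diag(1, -1, ..., -1).
import sympy as sp

def cls(h=0, **e):
    v = [h] + [0]*9
    for name, coeff in e.items():      # e.g. e4=1 means +e_4
        v[int(name[1:])] = coeff
    return sp.Matrix(v)

Q = sp.diag(1, *([-1]*9))
dot = lambda a, b: (a.T * Q * b)[0]

S = [cls(e4=1, e7=-1),                 # S_1 = e_4 - e_7
     cls(e1=1, e4=-1),                 # S_2 = e_1 - e_4
     cls(h=1, e1=-1, e2=-1, e3=-1),    # S_3 = h - e_1 - e_2 - e_3
     cls(e2=1, e5=-1),                 # S_4 = e_2 - e_5
     cls(e5=1, e9=-1),                 # S_5 = e_5 - e_9
     cls(e3=1, e6=-1),                 # S_6 = e_3 - e_6
     cls(e6=1, e8=-1)]                 # S_7 = e_6 - e_8

print(sp.Matrix(7, 7, lambda i, j: dot(S[i], S[j])))
# Expected: every diagonal entry is -2; the off-diagonal 1's occur exactly for
# the pairs (S_1,S_2), (S_2,S_3), (S_3,S_4), (S_4,S_5), (S_3,S_6), (S_6,S_7),
# i.e. a central node S_3 with three arms of length two: the E_6~ diagram.
\end{verbatim}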
Below are two theorems that we shall need in the proof of the lemma below. The first is Corollary 1.4 in \cite{MS}, credited to A. Liu and H. Ohta and K. Ono. Recall that a 4-manifold $X$ is said to be \emph{minimal} if it contains no exceptional spheres, and that a 4-manifold $X$ is said to be \emph{rational} if it is the blow-up of $\mathbb{CP}^{2}$ or $S^{2} \times S^{2}$. For example, $E(1)$ is rational but not minimal. We now quote a series of important results. \pagebreak
\newtheorem{cor14ms}[remp1]{Lemma}
\begin{cor14ms} \label{cor14mslabel}
Let $X$ be a symplectic 4-manifold. Then the following are equivalent:
\begin{itemize}
\item[(i)] $X$ admits a metric of positive scalar curvature.
\item[(ii)] $X$ admits a symplectic structure $\omega$ with $K \cdot \omega < 0$
\item[(iii)] $X$ is either rational or ruled
\end{itemize}
\end{cor14ms}
First, note that in the statement of the lemma in \cite{MS}, $X$ is a \emph{minimal} symplectic 4-manifold. However, it is then noted in \cite{MS} that the lemma extends to the case when $X$ is not minimal (which is how we have stated it above). Before we can quote another lemma, we need:
\newtheorem{D33LL}[remp1]{Definition}
\begin{D33LL}
\upshape
For a non-minimal rational manifold with a standard decomposition $\mathbb{CP}^{2} \# n \overline{\mathbb{CP}^{2}}$ and a standard basis $\{h, e_{1}, e_{2},\dots, e_{n} \}$, a class $\xi = ah - b_{1}e_{1}-b_{2}e_{2}- \dots - b_{n}e_{n}$ is called \emph{reduced} if
\begin{align*}
& b_{1} \geq b_{2} \geq \dots \geq b_{n} \geq 0, \quad \mathrm{and}\\
& a \geq b_{1} + b_{2} + b_{3}
\end{align*}
\end{D33LL}
Note that the second condition (with the first condition) implies $a \geq b_{i}$ for $1 \leq i \leq n$. With this definition in mind, the following lemma is the first part of Lemma 3.4 (which is actually stronger and has other implications) in \cite{LL1}.
\newtheorem{L34LL}[remp1]{Lemma}
\begin{L34LL} \label{reducedsquarelemma}
Let $M$ be a non-minimal rational manifold with a standard decomposition and a standard basis. Then any class of non-negative square is equivalent to a reduced class under the action of orientation-preserving diffeomorphisms. (Furthermore, we can find such a diffeomorphism by a simple algorithm.)
\end{L34LL}
The canonical class of $E(1)$, $K_{E(1)} \in H^{2}(E(1); \mathbb{Z})$, is represented by $K_{E(1)} = -3h + e_{1}+ \dots + e_{9} = -[f]$, following the notation in \cite{P1} (although in \cite{P1} the notation seems to change from `$[f]$' to just `$f$'). Using the lemma above, we later get an important relation between this canonical class and a compatible symplectic 2-form on a non-minimal rational surface, which we shall need in the proof of our main result. \newline
In \cite{LiLiu2} it is proved that $\mathbb{CP}^{2} \# k \overline{\mathbb{CP}^{2}}$ has a `unique' symplectic structure for certain $k \geq 2$:
\newtheorem{L33P}[remp1]{Theorem}
\begin{L33P} \label{usscctref}
There is a unique symplectic structure on $\mathbb{CP}^{2} \# k \overline{\mathbb{CP}^{2}}$ for $ 2 \leq k \leq 9$ up to diffeomorphisms and deformation. For $k \geq 10$, the symplectic structure is still unique for the standard canonical class.
\end{L33P}
\newtheorem{L33PCor}[remp1]{Corollary}
\begin{L33PCor} \label{komegacorr}
$\mathbb{CP}^{2} \# k \overline{\mathbb{CP}^{2}}$, for $2 \leq k \leq 9$, with canonical class $K$ does not admit a symplectic 2-form $\omega$ for which $K \cdot \omega > 0$.
\end{L33PCor}
\newtheorem{L33PCorRem}[remp1]{Remark}
\begin{L33PCorRem}
\upshape
This result is of great importance to us, as it will be used to show that the 4-manifold that we construct, although homeomorphic to $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$, is not diffeomorphic to it.
\end{L33PCorRem}
We now use these results to prove the following lemma.
\newtheorem{L32P}[remp1]{Lemma}
\begin{L32P} \label{redclasslemma}
For each integer $k \geq 1$, $E(1)\#k \overline{\mathbb{CP}^{2}}$ admits a symplectic 2-form $\omega$ which is compatible with the standard canonical class $K_{E(1)\#k \overline{\mathbb{CP}^{2}}} = -3h + e_{1} + e_{2} +\dots+e_{9+k}$, such that its cohomology class $[\omega]$ can be represented by $ah- b_{1}e_{1}-b_{2}e_{2}-\dots - b_{9+k}e_{9+k}$, where $a, b_{1}, b_{2}, \dots, b_{9+k}$ are some rational numbers satisfying
\begin{itemize}
\item [(i)] $a \geq b_{1} \geq \dots \geq b_{9+k} \geq 0$, and
\item [(ii)] $3a > b_{1}+b_{2}+\dots +b_{9+k}$.
\end{itemize}
\end{L32P}
Proof: \newline
By the equivalence between $(iii)$ and $(ii)$ in Lemma \ref{cor14mslabel} above, since $E(1)\#k \overline{\mathbb{CP}^{2}}$ is a rational surface it admits a symplectic 2-form $\omega$, which is compatible with the standard canonical class $K_{E(1)\#k \overline{\mathbb{CP}^{2}}} = -3h + e_{1} + e_{2} +\dots+e_{9+k}$ and satisfies the inequality $K_{E(1)\#k \overline{\mathbb{CP}^{2}}} \cdot \omega < 0$. \newline
In \cite{GS} we are reminded that a symplectic form is non-degenerate, so $\omega \wedge \omega$ is a volume form and $[\omega] \cdot [\omega] = \int \omega \wedge \omega > 0$; in particular $[\omega]$ has non-negative square. Since $[\omega]$ has non-negative square, Lemma \ref{reducedsquarelemma} above implies that $[\omega]$ can be represented by $ah- b_{1}e_{1}-b_{2}e_{2}-\dots - b_{9+k}e_{9+k}$ for some rational numbers satisfying $a \geq b_{1} \geq \dots \geq b_{9+k} \geq 0$ (part (i) of the lemma). \newline
Since $h \cdot h = 1$, $h \cdot e_{i} = 0$ for all $e_{i}$, and $e_{i} \cdot e_{j} = -\delta_{ij}$,
\begin{align*}
K_{E(1)\#k \overline{\mathbb{CP}^{2}}} \cdot \omega &= (-3h + e_{1} +\dots+e_{9+k}) \cdot (ah - b_{1}e_{1}- \dots - b_{9+k}e_{9+k}) \\
&= -3a h \cdot h - b_{1}e_{1}\cdot e_{1} - \dots - b_{9+k} e_{9+k} \cdot e_{9+k} \\
&= -3a +b_{1} + b_{2} + \dots + b_{9+k}
\end{align*}
and this together with $K_{E(1)\#k \overline{\mathbb{CP}^{2}}} \cdot \omega < 0$ then implies
\begin{equation*}
3a > b_{1} + b_{2} + \dots + b_{9+k}
\end{equation*}
which is part (ii) of the lemma. $\Box$ \newline
Next comes an important proposition, concerning the existence of a specific configuration $C_{p}$ in $E(1) \# k \overline{\mathbb{CP}^{2}}$. We also need that the 2-spheres in the configuration are symplectically embedded, in order to use the theorem proved in \cite{Sy} and stated above.
\newtheorem{P31}[remp1]{Proposition}
\begin{P31} \label{c7sympprop}
There exists a configuration $C_{7}$ in the rational surface $E(1) \# 4 \overline{\mathbb{CP}^{2}}$ such that all the 2-spheres $u_{i}$ lying in $C_{7}$ are symplectically embedded.
\end{P31}
Proof: \newline
Recall that $E(1)$ can be viewed as an elliptic fibration with an $\tilde{E_{6}}$-singular fibre and 4 singular fishtail fibres (section \ref{explicitE6sec}). \newline
Recall that the homology class $[f] = 3h - e_{1} - \dots - e_{9}$ of the elliptic fibre $f$ in $E(1)$ can be represented by an immersed 2-sphere with one positive double point, which is equivalent to a fishtail fibre (sections \ref{e6constrsec} and \ref{explicitE6sec}). \newline
$E(1)$ contains at least 4 such immersed 2-spheres, since it contains 4 singular fishtail fibres. If we blow up at each of these 4 double points, there exist embedded 2-spheres $f-2e_{10}$, $f-2e_{11}$, $f-2e_{12}$, $f-2e_{13}$ in $E(1) \# 4 \overline{\mathbb{CP}^{2}}$. \newline
The reason it is $f-2e_{k}$ and not simply $f-e_{k}$ (for $10 \leq k \leq 13$) is that $f$ has a positive double point, and when we blow-up at this double point, the exceptional divisor will intersect $f$ with multiplicity $2$, and so the proper transform of $f$ is $f-2e_{k}$. \newline
Since $[f] \cdot e_{9} = (3h - e_{1} - \dots - e_{9})\cdot e_9 = -e_{9} \cdot e_{9} = -(-1) = 1$, and $e_{k} \cdot e_{9} = 0$ for $10 \leq k \leq 13$, we have $(f-2e_{k}) \cdot e_{9} = 1$ for $10 \leq k \leq 13$. So, each of the embedded 2-spheres $f-2e_{10}$, $f-2e_{11}$, $f-2e_{12}$, $f-2e_{13}$ intersects a section $e_{9}$ of $E(1)$ positively. Let us call these intersection points $p_{10}, p_{11}, p_{12}, p_{13}$, respectively. \newline
We resolve these 4 intersection points (see section \ref{resolvesection}) to get a sphere $S$. The homology class representing $S$ is
\begin{equation*}
[S] = (f-2e_{10}) + (f-2e_{11}) + (f-2e_{12}) + (f-2e_{13}) + e_{9} = 4f + e_{9} -2e_{10} - 2e_{11}-2e_{12}-2e_{13}
\end{equation*}
since when we resolve the points, we add the homology classes of the surfaces together (see section \ref{resolvesection}). Recall that this ``resolving'' can be done symplectically (see Remark \ref{resolvesympl}). \newline
And recalling that $f \cdot f = 0$, $f \cdot e_{9} = 1$, $f \cdot e_{k} =0 $ for $10 \leq k \leq 13$, and $e_{i} \cdot e_{j} = -\delta_{ij}$, we can compute the square of $[S]$ easily:
\begin{align*}
[S] \cdot [S] &= (4f + e_{9} -2(e_{10} + \dots + e_{13})) \cdot (4f + e_{9} -2(e_{10} + \dots + e_{13})) \\
&= 16f \cdot f + 8f \cdot e_{9} + e_{9}\cdot e_{9} + 4(e_{10} \cdot e_{10} + e_{11} \cdot e_{11} + e_{12} \cdot e_{12} + e_{13} \cdot e_{13}) \\
&= 16 \cdot 0 + 8 \cdot 1 + (-1) + 4\cdot ((-1) + (-1) + (-1) + (-1)) \\
&= -9
\end{align*}
Therefore, we have found a symplectically embedded 2-sphere $S$ with square $-9$ in $E(1) \# 4 \overline{\mathbb{CP}^2}$. \newline
We now recall constructing an elliptic fibration on $E(1)$ with an $\tilde{E_{6}}$ fibre. Lemma \ref{aurouxfintlemma} showed that we can consider the $\tilde{E_6}$ fibre as consisting of the spheres $S_1,\dots, S_7$ where
$[S_{1}] = e_{4} - e_{7}$, $[S_{2}] = e_{1} - e_{4}$, $[S_{3}] = h - e_{1} - e_{2} - e_{3}$, $[S_{4}] = e_{2} - e_{5}$ and $[S_{5}] = e_{5} - e_{9}$. Note that
\begin{equation*}
[S_{1}]\cdot[S_{2}] = [S_{2}]\cdot[S_{3}] = [S_{3}]\cdot[S_{4}] = [S_{4}]\cdot[S_{5}] = 1
\end{equation*}
and
\begin{align*}
[S]\cdot [S_{5}] &= (4f + e_{9} -2(e_{10} + \dots + e_{13})) \cdot (e_{5}-e_{9}) \\
&= 4f \cdot e_{5} - 4f\cdot e_{9} - e_{9} \cdot e_{9} \\
&= 4(-e_{5}) \cdot e_{5} - 4(-e_{9}) \cdot e_{9} - e_{9} \cdot e_{9}\\
&= 4 - 4 + 1 \\
&= 1
\end{align*}
So, if we set $u_{1}= S_{1}$, $u_{2}= S_{2}$, \dots,$u_{5}= S_{5}$ and $u_{6}= S$, we obtain a configuration $C_{7}$ lying in $E(1) \# 4 \overline{\mathbb{CP}^{2}}$. See Figures \ref{e6constrsec}.6 and \ref{x7proofsection}.1. \newline
Note that all the 2-spheres $u_{i}$ lying in $C_{7}$ are symplectically embedded, since $S$ is symplectic and $S_1, \dots, S_5$ were constructed using algebraic techniques, so they are algebraic curves and therefore symplectic. This proves the proposition. $\Box$ \newline
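The two intersection computations in the proof can also be checked mechanically (a hedged SymPy sketch, using the basis $(h, e_1, \dots, e_{13})$ with intersection form $\mathrm{diag}(1,-1,\dots,-1)$; it is not needed for the proof):
\begin{verbatim}
import sympy as sp

Q = sp.diag(1, *([-1]*13))
dot = lambda a, b: (a.T * Q * b)[0]

def vec(coeffs):                       # coeffs = {0: h-coefficient, i: e_i-coefficient}
    v = [0]*14
    for i, c in coeffs.items():
        v[i] = c
    return sp.Matrix(v)

f  = vec({0: 3, **{i: -1 for i in range(1, 10)}})      # f = 3h - e_1 - ... - e_9
e9 = vec({9: 1})
S  = 4*f + e9 + vec({i: -2 for i in range(10, 14)})    # S = 4f + e_9 - 2(e_10 + ... + e_13)
S5 = vec({5: 1, 9: -1})                                 # S_5 = e_5 - e_9

print(dot(S, S))    # expected: -9
print(dot(S, S5))   # expected:  1
\end{verbatim}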
We are finally ready to prove the main result in \cite{P1}.
\newtheorem{pmain}[remp1]{Theorem}
\begin{pmain}
There exists a simply connected 4-manifold which is homeomorphic, but not diffeomorphic, to $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$.
\end{pmain}
Proof: \newline
We shall construct this exotic manifold, which we denote by $X_{7}$. \newline
Let us denote the manifold $E(1) \# 4 \overline{\mathbb{CP}^{2}}$ by $X$. By Proposition \ref{c7sympprop}, there exists a symplectically embedded configuration $C_{7}$ in $X$. If we blow down along this configuration $C_{7}$ in $X = X_{0} \cup _{L(49,-6)} C_{7}$, we get a new smooth 4-manifold, which we shall denote by $X_{7} = X_{0} \cup _{L(49,-6)}B_{7}$. \newline
By the theorem proved in \cite{Sy} and stated above, since $C_{7}$ was symplectically embedded in $X$, there exists a symplectic structure on $X_{7}$. \newline
We proved in Section \ref{x7proofsection} that $X_7$ is simply-connected. \newline
Claim: $X_{7}$ is homeomorphic to $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$ \newline
Let us look at $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$ first. It has Betti numbers $b_2^{+} = 1$ and $b_2^{-} = 7$, so its rank is $8$ and its signature is $-6$. Since it is a smooth manifold (actually, it is symplectic) and since its signature is not divisible by 8, we have by Lemma \ref{L210} that its intersection form must be odd. \newline
Now, let us look at $X_7 = X_{0} \cup _{L(49,-6)}B_{7}$. Since this was constructed from $E(1) \# 4 \overline{\mathbb{CP}^{2}}$ by blowing down a configuration $C_7$ (and so, removing 6 spheres of negative self-intersection), we know that
\begin{align*}
b_2^{+}(X_7) &= b_2^{+}(E(1) \# 4 \overline{\mathbb{CP}^{2}}) = 1 \\
b_2^{-}(X_7) &= b_2^{-}(E(1) \# 4 \overline{\mathbb{CP}^{2}}) - 6 = 13 - 6 = 7
\end{align*}
So, the rank of $X_7$ is $rk(X_7) = 8$ and the signature is $\sigma(X_7) = -6$. Since $X_7$ is a smooth manifold (in fact, also symplectic) we again have by Lemma \ref{L210} that its intersection form must be odd. \newline
By Corollary \ref{frcor1} (Freedman's Theorem), since $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$ and $X_7$ are both simply connected smooth manifolds with the same rank, signature and parity, they must be homeomorphic. \newline
We now need to show that $X_{7}$ is not diffeomorphic to $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$. \newline
Let $K_{7}$ be the canonical class on $X_{7}$, and let $\omega_{7}$ be the corresponding symplectic 2-form on $X_{7}$. We claim that $K_{7} \cdot \omega_{7} > 0$ (and we shall prove this claim in the lemma following this theorem). \newline
Therefore, by Corollary \ref{komegacorr}, $X_7$ is not diffeomorphic to $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$. So, we have shown that $X_{7}$ is homeomorphic, but not diffeomorphic, to $\mathbb{CP}^{2} \# 7 \overline{\mathbb{CP}^{2}}$. $\Box$
\newtheorem{LKw}[remp1]{Lemma}
\begin{LKw}
As defined above, $K_{7} \cdot [\omega_{7}] > 0$.
\end{LKw}
Proof: \newline
We shall denote the homology class of a sphere $S$ by $[S]$. \newline
The canonical class $K_{E(1)}$ of $E(1)$ is represented by $-[f] = -3h + (e_{1} + \dots + e_{9})$, and the canonical class $K$ of $X = E(1) \# 4 \overline{\mathbb{CP}^{2}}$ is represented by $K = -3h + (e_{1} + \dots + e_{13}) = -[f] + (e_{10} + \dots + e_{13})$. \newline
Using Lemma \ref{redclasslemma}, we may assume that the cohomology class $[\omega]$ of the symplectic 2-form $\omega$ on $X$, which is compatible with the canonical class $K$, can be represented by $ah - (b_{1}e_{1} + \dots + b_{13}e_{13})$ for some rational numbers $a, b_{1}, \dots, b_{13}$ satisfying $a \geq b_{1} \geq \dots \geq b_{13} \geq 0$ and $3a > b_{1} + \dots + b_{13}$. \linebreak Recall from Proposition \ref{c7sympprop} above that
\begin{align*}
[u_{6}] &= [S] \\
&= 4f + e_{9} - 2(e_{10} + \dots + e_{13})\\
\Rightarrow [u_6]&= 12h + e_{9} -4(e_{1} + \dots + e_{9}) - 2(e_{10} + \dots + e_{13})
\end{align*}
Let us now recall that we defined $u_{i} = S_{i}$ for $1 \leq i \leq 5$ in Proposition \ref{c7sympprop} above, and so the homology classes are:
\begin{align*}
[u_{1}] &= e_{4} - e_{7}, [u_{2}] = e_{1} - e_{4}, [u_{3}] = h - e_{1} - e_{2} - e_{3}, \\
[u_{4}] &= e_{2} - e_{5}, [u_{5}] = e_{5} - e_{9}
\end{align*}
and recall $K = -3h + (e_{1} + \dots + e_{13})$. \newline
From these definitions we can see that $[u_{i}] \cdot K = 0$ (for $1 \leq i \leq 5$). \newline
In order to calculate $K_{7} \cdot [\omega_{7}]$, we shall first calculate $K \cdot [\omega]$ and $K|_{C_{7}} \cdot [\omega|_{C_{7}}]$, and then use the result we proved in Lemma \ref{kpomegaplemma} above:
\begin{equation} \label{lemma215rel}
K_{7} \cdot [\omega_{7}] = K \cdot [\omega] - K|_{C_{7}} \cdot [\omega|_{C_{7}}]
\end{equation}
to calculate $K_{7} \cdot [\omega_{7}]$. \newline
Firstly, if we define $[\omega] = ah - (b_{1}e_{1} + \dots + b_{13}e_{13})$,
\begin{align}
K \cdot [\omega] &= (-3h + (e_{1} + \dots + e_{13})) \cdot (ah - (b_{1}e_{1} + \dots + b_{13}e_{13})) \nonumber \\
\Rightarrow K \cdot [\omega] &= -3a + (b_{1} + \dots + b_{13}) \label{kcdotomegalab}
\end{align}
We now express the two cohomology classes $K|_{C_{7}}$ and $[\omega|_{C_{7}}]$ using the dual basis $ \{ \gamma_{i} : 1 \leq i \leq 6 \}$ (such that $<\gamma_{i}, u_{j} > = \delta_{ij}$) for $H^{2}(C_{7}; \mathbb{Q})$. We have,
\begin{align*}
K|_{C_{7}} &= (K \cdot [u_{1}]) [\gamma_{1}] + (K \cdot [u_{2}])[\gamma_{2}] + \dots + (K \cdot [u_{6}])[\gamma_{6}] \\
&= 7 [\gamma_{6}]
\end{align*}
since $K \cdot [u_{i}] = 0$ for $1 \leq i \leq 5$, and
\begin{align*}
K \cdot [u_{6}] &= (-3h + (e_{1} + \dots + e_{13})) \cdot (12h + e_{9} -4(e_{1} + \dots + e_{9}) \\
& - 2(e_{10} + \dots + e_{13})) \\
&= (-3)(12)h \cdot h + e_{9} \cdot e_{9} -4 (e_{1}\cdot e_{1} + \dots + e_{9} \cdot e_{9}) \\
&- 2(e_{10} \cdot e_{10} + \dots + e_{13} \cdot e_{13}) \\
&= -36 + (-1) - 4(-9) - 2(-4)\\
&= -36 -1 +36 + 8 \\
&= 7
\end{align*}
Similarly,
\begin{align*}
[\omega|_{C_{7}}] &= ([\omega] \cdot [u_{1}])\gamma_{1} + ([\omega] \cdot [u_{2}])\gamma_{2} + \dots + ([\omega] \cdot [u_{6}])\gamma_{6}
\end{align*}
and using
\begin{align*}
[\omega] \cdot [u_{1}] &= (ah - (b_{1}e_{1} + \dots + b_{13}e_{13})) \cdot (e_{4} - e_{7}) = b_{4} - b_{7} \\
[\omega] \cdot [u_{2}] &= (ah - (b_{1}e_{1} + \dots + b_{13}e_{13})) \cdot (e_{1} - e_{4}) = b_{1} - b_{4} \\
[\omega] \cdot [u_{3}] &= (ah - (b_{1}e_{1} + \dots + b_{13}e_{13})) \cdot (h-e_{1}-e_{2}-e_{3}) = a-b_{1}-b_{2}-b_{3} \\
[\omega] \cdot [u_{4}] &= (ah - (b_{1}e_{1} + \dots + b_{13}e_{13})) \cdot (e_{2} - e_{5}) = b_{2} - b_{5} \\
[\omega] \cdot [u_{5}] &= (ah - (b_{1}e_{1} + \dots + b_{13}e_{13})) \cdot (e_{5} - e_{9}) = b_{5} - b_{9} \\
[\omega] \cdot [u_{6}] &= (ah - (b_{1}e_{1} + \dots + b_{13}e_{13})) \cdot [u_{6}] \\
&= 12a + b_{9} - 4(b_{1} + \dots + b_{9}) - 2(b_{10} + \dots + b_{13})
\end{align*}
we finally get
\begin{align*}
[\omega|_{C_{7}}] &= (b_{4} - b_{7}) [\gamma_{1}] + (b_{1} - b_{4}) [\gamma_{2}] + (a-b_{1}-b_{2}-b_{3}) [\gamma_{3}] + (b_{2} - b_{5}) [\gamma_{4}] \\
& + (b_{5} - b_{9}) [\gamma_{5}] + (12a + b_{9} - 4(b_{1} + \dots + b_{9}) - 2(b_{10} + \dots + b_{13})) [\gamma_{6}]
\end{align*}
Then, using the intersection form $T$ of $H^{2}(C_{7}; \mathbb{Q})$ given above Remark \ref{sec21rem1}, we have $[\gamma_{6}] \cdot [\gamma_{k}] = T_{6k} = -\frac{k}{49}$ for $1 \leq k \leq 6$, and so
\begin{align*}
K|_{C_{7}} \cdot [\omega|_{C_{7}}] &= 7[\gamma_6] \cdot [\omega|_{C_{7}}] \\
&= (7) \Big(\frac{-1}{49} \Big) \Big( 1(b_{4} - b_{7}) + 2(b_{1} - b_{4}) + 3(a-b_{1}-b_{2}-b_{3}) \\
& + 4(b_{2} - b_{5}) + 5(b_{5} - b_{9}) \\
& + 6 \big(12a + b_{9} - 4(b_{1} + \dots + b_{9}) - 2(b_{10} + \dots + b_{13}) \big) \Big)
\end{align*}
Therefore
\begin{align}
K|_{C_{7}} \cdot [\omega|_{C_{7}}] &= \Big( \frac{-1}{7} \Big) \Big( 75a -25 b_{1} - 23 b_{2} - 27 b_{3} - 25 b_{4} - 23 b_{5} - 24 b_{6} \nonumber\\
& - 25 b_{7} - 24 b_{8} - 23 b_{9} - 12 (b_{10} + b_{11} + b_{12} + b_{13}) \Big) \label{Kc7cdotomegac7}
\end{align}
And using the relation in $\eqref{lemma215rel}$ above, along with $\eqref{kcdotomegalab}$ and $\eqref{Kc7cdotomegac7}$, we have
\begin{align*}
K_{7} \cdot [\omega_{7}] &= K \cdot [\omega] - K|_{C_{7}} \cdot [\omega|_{C_{7}}] \\
&= (-3a + (b_{1} + \dots + b_{13})) - K|_{C_{7}} \cdot [\omega|_{C_{7}}] \\
&= \Big( \frac{1}{7} \Big) \Big(-21a + 7(b_{1} + \dots + b_{13}) \Big) - K|_{C_{7}} \cdot [\omega|_{C_{7}}] \\
&= \Big( \frac{1}{7} \Big) \Big(-21a + 7(b_{1} + \dots + b_{13}) + 75a -25 b_{1} - 23 b_{2} \\
& \qquad \qquad - 27 b_{3} - 25 b_{4} - 23 b_{5} - 24 b_{6} - 25 b_{7} - 24 b_{8} - 23 b_{9} \\
& \qquad \qquad - 12 (b_{10} + b_{11} + b_{12} + b_{13}) \Big) \\
&= \Big( \frac{1}{7} \Big) \Big( 54a -18 b_{1} -16 b_{2} - 20 b_{3} - 18 b_{4} - 16 b_{5} - 17 b_{6} \\
& - 18 b_{7} - 17 b_{8} - 16 b_{9} - 5 (b_{10} + b_{11} + b_{12} + b_{13}) \Big)
\end{align*}
and using the inequality
\begin{equation*}
3a > b_{1} + \dots + b_{13}
\end{equation*}
which implies
\begin{equation*}
54a > 18 b_{1} + \dots + 18 b_{13}
\end{equation*}
we get
\begin{equation*}
K_{7} \cdot [\omega_{7}] > \Big( \frac{1}{7} \Big) \Big( 2 b_{2} - 2 b_{3} + 2 b_{5} + b_{6} + b_{8} + 2 b_{9} + 13 (b_{10} + b_{11} + b_{12} + b_{13}) \Big)
\end{equation*}
and since $b_{2} \geq b_{3}$, we have $2 b_{2} - 2 b_{3} \geq 0$, which in turn implies
\begin{equation*}
K_{7} \cdot [\omega_{7}] > \Big( \frac{1}{7} \Big) \Big( 2 b_{5} + b_{6} + b_{8} + 2 b_{9} + 13 (b_{10} + b_{11} + b_{12} + b_{13}) \Big)
\end{equation*}
and since $b_{i} \geq 0$ for $1 \leq i \leq 13$, we finally have
\begin{equation*}
K_{7} \cdot [\omega_{7}] > 0
\end{equation*}
This proves the lemma. $\Box$ \newline
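As a sanity check on the coefficient bookkeeping in the proof above, the following short script (a sketch assuming the Python library sympy; it is not part of the original argument) re-expands $K|_{C_{7}} \cdot [\omega|_{C_{7}}]$ and $K_{7} \cdot [\omega_{7}]$ symbolically and confirms the two displayed expressions.
\begin{verbatim}
from sympy import symbols, Rational, expand, simplify

a = symbols('a')
b = symbols('b1:14')   # b[0], ..., b[12] stand for b_1, ..., b_13

# the pairings [omega].[u_k] computed above
pair = [b[3] - b[6],                                   # u_1
        b[0] - b[3],                                   # u_2
        a - b[0] - b[1] - b[2],                        # u_3
        b[1] - b[4],                                   # u_4
        b[4] - b[8],                                   # u_5
        12*a + b[8] - 4*sum(b[:9]) - 2*sum(b[9:])]     # u_6

# K|_{C_7}.[omega|_{C_7}] = 7 * sum_k <gamma_6,gamma_k> ([omega].[u_k]),
# with <gamma_6,gamma_k> = -k/49
K_C7 = 7*sum(Rational(-k, 49)*pair[k-1] for k in range(1, 7))

# K_7.[omega_7] = K.[omega] - K|_{C_7}.[omega|_{C_7}], with K.[omega] = -3a + sum(b_i)
K7 = (-3*a + sum(b)) - K_C7

expected = Rational(1, 7)*(54*a - 18*b[0] - 16*b[1] - 20*b[2] - 18*b[3]
                           - 16*b[4] - 17*b[5] - 18*b[6] - 17*b[7]
                           - 16*b[8] - 5*sum(b[9:]))
assert simplify(K7 - expected) == 0
print(expand(K_C7))   # agrees with the displayed expression for K|_{C_7}.[omega|_{C_7}]
\end{verbatim}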
\pagebreak
\section{The Next Constructions}
The first example of a manifold homeomorphic but not diffeomorphic to $\mathbb{CP}^{2} \# 6 \overline{\mathbb{CP}^{2}}$ was constructed by A. Stipsicz and Z. Szab\'o in \cite{SS}. They started with an elliptic fibration on $\mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}$ which had an $\tilde{E_7}$ fibre and three fishtail fibres, and (after a few blow-ups) then performed a generalized version of a rational blowdown to obtain a manifold homeomorphic to $\mathbb{CP}^{2} \# 6 \overline{\mathbb{CP}^{2}}$. Then, computation of the Seiberg-Witten invariants of this manifold showed that it was not diffeomorphic to $\mathbb{CP}^{2} \# 6 \overline{\mathbb{CP}^{2}}$. \newline
Up until this point, only finitely many non-diffeomorphic exotic smooth 4-manifolds had been found for $\mathbb{CP}^{2} \# n \overline{\mathbb{CP}^{2}}$ (for $n = 6,7,8$). R. Fintushel and R. Stern introduced a new technique in \cite{FS3} which can be used to construct infinite families of non-diffeomorphic exotic $\mathbb{CP}^{2} \# n \overline{\mathbb{CP}^{2}}$'s, for $n=6,7,8$. \newline
A few days after a preprint of \cite{FS3} was posted on the arXiv, J. Park, A. Stipsicz and Z. Szab\'o posted a preprint of \cite{PSS} on the arXiv, in which they used this technique to construct an infinite family of non-diffeomorphic exotic $\mathbb{CP}^{2} \# 5 \overline{\mathbb{CP}^{2}}$'s. \newline
We give an outline of the technique presented in \cite{FS3} below, as well as an outline of how it was applied to construct exotic $\mathbb{CP}^{2} \# n \overline{\mathbb{CP}^{2}}$'s (for $n=5,6$). First, however, we need to review R. Fintushel and R. Stern's \emph{knot surgery} technique presented in \cite{FS2}, as well as the concept of a \emph{mapping class group} and the \emph{monodromy} of a singular fibre.
\pagebreak
\section{Fibre Sums and Embedded Tori}
For fibre sums, one can refer to \cite{GS}, and for knot surgery the original source is \cite{FS2}, but a good presentation of both can be found in \cite{Sc}.
\subsection*{Fibre sums}
Suppose we have two $C^{\infty}$-elliptic fibrations (see section \ref{ellfibsec})
\begin{align*}
& \pi_1 : S_1 \longrightarrow C_1 \\
& \pi_2 : S_2 \longrightarrow C_2
\end{align*}
Choose $t_1 \in C_1$ and $t_2 \in C_2$ such that the fibres $F_1 = \pi_1 ^{-1}(t_1)$ and $F_2 = \pi_2 ^{-1}(t_2)$ are tori (generic fibres). \newline
A tubular neighbourhood $\nu F_i$ of each $F_i$ in $S_i$ is a copy of $D^2 \times T^2$ in each $S_i$ ($i=1,2$). Then $\partial(S_i \setminus \nu F_i) \cong T^3$ (for $i=1,2$), and we choose a fibre-preserving, orientation-reversing diffeomorphism of $T^3$
\begin{equation*}
\phi : \partial(S_1 \setminus \nu F_1) \longrightarrow \partial(S_2 \setminus \nu F_2)
\end{equation*}
Then, the \emph{fibre sum} $S_1 \# _{f} S_2$ is defined as the manifold $(S_1 \setminus \nu F_1) \cup _{\phi} (S_2 \setminus \nu F_2)$. \newline
Note that $S_1 \# _{f} S_2$ will admit a $C^{\infty}$-elliptic fibration $\pi : S_1 \# _{f} S_2 \longrightarrow C_1 \# C_2$.
\newtheorem{fibresumrem1}{Remark}[section]
\begin{fibresumrem1}
\upshape
Although the diffeomorphism type of $S_1 \#_{f} S_2$ might depend on the choice of the diffeomorphism $\phi$, if either elliptic fibration $\pi_i : S_i \longrightarrow C_i$ ($i=1,2$) contains a cusp fibre, then for any choice of $\phi$ the manifolds $(S_1 \setminus \nu F_1) \cup _{\phi} (S_2 \setminus \nu F_2)$ will be diffeomorphic, and then $S_1 \#_{f} S_2$ is a well-defined 4-manifold. See Chapter 8 in \cite{GS}.
\end{fibresumrem1}
\newtheorem{fibresumrem2}[fibresumrem1]{Remark}
\begin{fibresumrem2}
\upshape
It should be remarked that a tubular neighbourhood is often called a \emph{regular} neighbourhood.
\end{fibresumrem2}
\subsection*{Near-cusp embedded tori}
Before we get to the knot surgery technique itself, we first need a definition from \cite{Sc}.
\newtheorem{ncetor}[fibresumrem1]{Definition}
\begin{ncetor}
\upshape
Let $X$ be a simply-connected 4-manifold. Let $T$ be a torus embedded in $X$ that is homologically nontrivial and has zero self-intersection. Such a torus $T$ is called \emph{near-cusp embedded} if and only if a neighbourhood of $T$ in $X$ is diffeomorphic to a neighbourhood $U$ of a generic torus fibre inside some elliptic fibration, so that $U$ contains a cusp fibre and so that $T$ corresponds to a regular (generic) fibre.
\end{ncetor}
\pagebreak
\section{Knot Surgery} \label{knotsurgerysec}
We now describe the knot surgery technique described in \cite{FS2}. Another good explanation can be found in \cite{Sc}. \newline
Let us start with a closed simply-connected 4-manifold $X$, which contains a near-cusp embedded torus $T$. Let $\nu T$ be a tubular neighbourhood of $T$ (i.e. a copy of $T \times D^2$), and consider the manifold $X \setminus \nu T$. Then $\partial(X \setminus \nu T) \cong \partial (\nu T) \cong S^1 \times S^1 \times S^1 \cong T^3$. \newline
Now, let $K$ be a knot in $S^3$. Let $\nu K$ be a tubular neighbourhood of $K$, and consider the manifold $S^3 \setminus \nu K$ (the \emph{knot complement}). Homologically, $S^3 \setminus \nu K$ is indistinguishable from a solid torus $S^1 \times D^2$, and its boundary is $\partial (S^3 \setminus \nu K) \cong \partial(\nu K) \cong S^1 \times S^1$. Therefore, the boundary of $S^1 \times (S^3 \setminus \nu K)$ is also the 3-torus $S^1 \times S^1 \times S^1 \cong T^3$. \newline
We shall glue $X \setminus \nu T$ and $S^1 \times (S^3 \setminus \nu K)$ together along their boundaries. However, there are several choices we can make in how we glue their boundaries to each other. \newline
Firstly, $\partial(\nu K) \cong \partial (S^3 \setminus \nu K) \cong T^2$, a torus. Let $\lambda$ be a longitude of $\partial(\nu K)$ (and therefore a meridian of $\partial (S^3 \setminus \nu K)$). Consider the homology class $[pt \times \lambda] \in H_1(S^1 \times (S^3 \setminus \nu K); \mathbb{Z})$. \newline
Secondly, $\partial(\nu T) \cong \partial(T \times D^2)$, and we consider the homology class $[pt \times \partial D^2] \in H_1( T \times D^2;\mathbb{Z})$. Since $\partial(\nu T) \cong \partial(X \setminus \nu T)$, we also consider $[pt \times \partial D^2]$ as a homology class in $H_1( X \setminus \nu T;\mathbb{Z})$. \newline
We define the manifold
\begin{equation*}
X_K = (X \setminus \nu T) \cup _{\phi} (S^1 \times (S^3 \setminus \nu K))
\end{equation*}
where $\phi: \partial(X \setminus \nu T) \longrightarrow \partial(S^1 \times (S^3 \setminus \nu K))$ is an orientation-reversing diffeomorphism such that $[pt \times \partial D^2]$ is identified with $[pt \times \lambda]$.
\newtheorem{ksrem1}{Remark}[section]
\begin{ksrem1}
\upshape
We did not specify where the homology class $[pt \times \mu] \in H_1(S^1 \times (S^3 \setminus \nu K); \mathbb{Z}) $ is mapped to, where $\mu$ is the meridian of $\partial(\nu K)$. In fact, $[pt \times \mu]$ can be mapped to any generator of $H_1(T; \mathbb{Z})$ in $T \times \partial D^2$.
Since we assumed that $T$ is near-cusp embedded, the Seiberg-Witten invariant of $X_K$ does not depend on this remaining choice: once $[pt \times \lambda]$ is identified with $[pt \times \partial D^2]$, the Seiberg-Witten invariant of $X_K$ is completely determined by the Seiberg-Witten invariant of $X$ and the Alexander polynomial of $K$. \newline
This means that all such constructed $X_K$'s (where $[pt \times \mu]$ can be mapped to any generator of $H_1(T; \mathbb{Z})$) have the same Seiberg-Witten invariant. It is not known whether these (different) $X_K$'s are all diffeomorphic (\cite{FS2}); recall that if $Y$ and $Z$ are two smooth 4-manifolds that have different Seiberg-Witten invariants, then they are definitely non-diffeomorphic, but if $Y$ and $Z$ have the same Seiberg-Witten invariant, then they could be diffeomorphic or non-diffeomorphic.
\end{ksrem1}
Let us review the knot surgery construction. We started with a closed, simply-connected smooth 4-manifold $X$. We removed $\nu T \cong T \times D^2$, and then ``glued back'' a homological copy of $T^2 \times D^2$ (our $S^1 \times (S^3 \setminus \nu K)$), and called this new manifold $X_K$. Therefore, the homology of $X_K$ is the same as the homology of $X$, and by the corollary of Freedman's Classification Theorem, Corollary \ref{frcor1}, $X_K$ is homeomorphic to $X$. \newline
However, the Seiberg-Witten invariants of $X_K$ and $X$ will be different for most choices of $K$, and therefore in general $X_K$ will not be diffeomorphic to $X$.
\newtheorem{ksrem2}[ksrem1]{Remark}
\begin{ksrem2}
\upshape
If we also assume (or choose our torus $T$) so that $X \setminus \nu T$ is simply-connected, then the fact that $X$ and $X \setminus \nu T$ are simply-connected implies that $X_K$ is also simply-connected (stated in \cite{FS3}).
\end{ksrem2}
\pagebreak
\section{Fishtail and Cusp Fibres Revisited} \label{fibresrevisitedsec}
We now look at fishtail and cusp fibres from a viewpoint different to that of section \ref{fishtailsection}. \newline
By definition, a generic fibre in an elliptic fibration is a torus. In section \ref{fishtailsection}, it was shown that a fishtail fibre is a sphere with a point of transverse self-intersection. The fishtail fibre ``appears'' in an elliptic fibration by collapsing a homologically nontrivial circle, in a nearby generic torus fibre, to a point (\cite{Sc}). Such a circle is called a \emph{vanishing cycle}. It is a circle that bounds a disk of self-intersection $-1$ in the fibre's complement (for a computation of this fact, see \cite{GS} pages 292-293). \newline
Explicitly, suppose that $\pi : S \longrightarrow C$ is an elliptic fibration and that $t_1 \in C$ is a point such that $F_1 = \pi^{-1}(t_1)$ is a fishtail fibre. Then, in a neighbourhood of $t_1$ in $C$, there is a point $t$ such that $F = \pi^{-1}(t)$ is a generic torus fibre with a vanishing cycle $v$. As $t \rightarrow t_1$, $v$ collapses to a point. See Figure \ref{fibresrevisitedsec}.1 and \ref{fibresrevisitedsec}.2. \newline
In section \ref{fishtailsection}, it is shown that a cusp fibre is also a sphere, but with one singular point. Suppose again that $\pi : S \longrightarrow C$ is an elliptic fibration, and suppose that $t_2 \in C$ is a point such that $F_2 = \pi^{-1}(t_2)$ is a cusp fibre. Then a nearby (generic) torus fibre has two vanishing cycles $v_1$ and $v_2$ which collapse to a single point (we can think of these circles, which are generators of $H_1(T^2; \mathbb{Z})$, as the meridian and longitude of the torus). See Figure \ref{fibresrevisitedsec}.3. The singular point of a cusp fibre has a neighbourhood which looks like a cone over the trefoil knot (see \cite{Sc}, \cite{GS}). \newline
For Kirby diagrams of a fishtail fibre and of a cusp fibre, see \cite{GS}, page 299.
\pagebreak
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{fishtor1e.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{fibresrevisitedsec}.1
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{fishsc1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{fibresrevisitedsec}.2
\end{center}
\begin{center}
\begin{minipage}{12cm}
\includegraphics[width=12cm]{cusptor1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{fibresrevisitedsec}.3
\end{center}
\pagebreak
\section{The Mapping Class Group of the Torus} \label{mpgtorsec}
\newtheorem{mcgtdef}{Definition}[section]
\begin{mcgtdef}
\upshape
The \emph{mapping class group} of the torus, labelled $\Gamma_1$, is defined to be the set of isotopy classes of orientation-preserving diffeomorphisms of $T^2$.
\end{mcgtdef}
More formally, if we let $\mathrm{Diff}^{+}(T^2)$ be the set of orientation-preserving diffeomorphisms of $T^2$, and we define $\mathrm{Diff}_{0}(T^2)$ to be the set of diffeomorphisms of $T^2$ that are isotopic to the identity map, then
\begin{equation*}
\Gamma_1 \cong \mathrm{Diff}^{+}(T^2) / \mathrm{Diff}_0(T^2)
\end{equation*}
It was first proved by M. Dehn in \cite{De} that the mapping class group $\Gamma_g$ of a genus-$g$ surface is generated by finitely many \emph{twist homeomorphisms} (now called \emph{Dehn twists}). In fact, only $3g-1$ Dehn twists are needed to generate $\Gamma_g$ (\cite{L2}, see also \cite{L1} and \cite{L3}). The following definition is from \cite{GS}.
\newtheorem{dtfm}{Definition}[section]
\begin{dtfm}
\upshape
Let $S$ be a surface and let $C$ be a circle in $S$. A right-handed \emph{Dehn twist} $\psi:S \longrightarrow S $ is a diffeomorphism obtained by ``cutting'' $S$ along $C$, twisting a neighbourhood of one of the boundary components $360^\circ$ to the right, and then ``regluing'' the cut-out component ``back in''.
\end{dtfm}
We, of course, also have a more formal definition (also from \cite{GS}).
\newtheorem{dtfm2}[dtfm]{Definition}
\begin{dtfm2}
\upshape
Let $S$ be a surface and let $C$ be a circle in $S$. Identify $\nu C$, a neighbourhood of $C$, with $S^1 \times I$. Define the map $\psi$ such that on $\nu C$
\begin{equation} \label{psilab1}
\psi(\theta, t) = (\theta + 2 \pi t, t)
\end{equation}
and $\psi$ is the identity map on $S \setminus \nu C$ (and $\psi$ goes smoothly from $\mathrm{id}|_{S \setminus \nu C}$ to $\eqref{psilab1}$). Then $\psi$ is a right-handed \emph{Dehn twist}.
\end{dtfm2}
For an example of a Dehn twist on a cylinder, see Figure \ref{mpgtorsec}.1 below (from \cite{GS}). \newline
According to \cite{SSS}, $\Gamma_1$ admits a presentation (see also chapter 7 in \cite{CM})
\begin{equation*}
\Gamma_1 = \langle a,b \; | \; aba=bab, \; (ab)^6 =1 \rangle
\end{equation*}
It is shown in \cite{FM} that $\Gamma_1$ is isomorphic to $SL(2;\mathbb{Z})$, the group of $2 \times 2$ matrices with integer entries and determinant 1. In fact, the map $\phi: \Gamma_1 \longrightarrow SL(2;\mathbb{Z})$, defined in \cite{SSS} by
\begin{align*}
\phi(a) &=
\left( \begin{array}{cc}
1 & 1 \\
0 & 1
\end{array} \right) \\
\phi(b) &=
\left( \begin{array}{cc}
\phantom{1} 1 & 0 \\
-1 & 1
\end{array} \right) \\
\end{align*}
provides us with an isomorphism between $\Gamma_1$ and $SL(2;\mathbb{Z})$. It is easily checked that $\left( \begin{array}{cc}
1 & 1 \\
0 & 1
\end{array} \right)$ and
$\left( \begin{array}{cc}
\phantom{1}1 & 0 \\
-1 & 1
\end{array} \right)$ generate $SL(2;\mathbb{Z})$ and that
\begin{align*}
\phi(aba) &=
\left( \begin{array}{cc}
\phantom{1} 0 & 1 \\
-1 & 0
\end{array} \right) = \phi(bab) \\
\phi((ab)^6) &=
\left( \begin{array}{cc}
1 & 0 \\
0 & 1
\end{array} \right) \\
\end{align*}
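These identities are quick to verify numerically; the following minimal sketch (assuming numpy, and not part of the original text) checks the braid relation and $(ab)^6 = 1$ for the matrices $\phi(a)$ and $\phi(b)$ above.
\begin{verbatim}
import numpy as np

A = np.array([[1, 1], [0, 1]])    # phi(a)
B = np.array([[1, 0], [-1, 1]])   # phi(b)

aba = A @ B @ A
bab = B @ A @ B
ab6 = np.linalg.matrix_power(A @ B, 6)

assert (aba == bab).all()                           # braid relation aba = bab
assert (aba == np.array([[0, 1], [-1, 0]])).all()
assert (ab6 == np.eye(2, dtype=int)).all()          # (ab)^6 = 1
print("braid relation and (ab)^6 = 1 verified")
\end{verbatim}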
For more about Dehn twists and mapping class groups, see \cite{FM}.
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{dehn1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{mpgtorsec}.1
\end{center}
\pagebreak
\section{Monodromy and the Existence of Certain Elliptic Fibrations} \label{monodromysec}
We consider an elliptic fibration
\begin{equation} \label{sfib1lab}
\pi : \mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}
\end{equation}
There are only finitely-many singular points $\{s_1, s_2, \dots, s_n \} \subset \mathbb{CP}^{1}$ (i.e. points $s$ such that $\pi ^{-1}(s)$ is a singular fibre of the type listed in Kodaira's table of singular fibres). In fact, there are at most 12 singular points, since each singular point corresponds to a singular fibre which has Euler characteristic at least 1, and $\chi(\mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}}) = 12$. \newline
Since there are only finitely-many singular points $\{s_1, s_2, \dots, s_n \} \subset \mathbb{CP}^{1}$, we can find disjoint disks $\{D_1, D_2, \dots D_n \}$ such that $s_i \in \mathrm{int}(D_i)$ and $D_i \cap D_j = \emptyset$ if $i \neq j$ ($i,j = 1,2,\dots n$). \newline
Now consider one of these singular points $s$, and its disk neighbourhood $D$, and let $C = \partial D$. Since every $t\in C \subset \mathbb{CP}^{1}$ is such that $\pi^{-1}(t)$ is a (generic) $T^2$ fibre, if we restrict the fibration in $\eqref{sfib1lab}$ to $C$, by traversing along $C$ we get a diffeomorphism $\psi$ of $T^2$ (\cite{SSS}), and this diffeomorphism is defined up to isotopy and conjugation (\cite{SSS}). The element of $\Gamma_1$, the mapping class group of the torus, corresponding to $\psi$ (defined only up to conjugation) is called the \emph{monodromy} of the singular fibre $F = \pi^{-1}(s)$. \newline
The following argument is stated in \cite{SSS}: suppose that $w$ is a word in $\Gamma_1$ which is composed of 12 right-handed Dehn twists $w_1, w_2, \dots, w_{12}$ satisfying
\begin{equation*}
w = w_1 \dots w_{12} = (w_1 \dots w_{i_1})(w_{i_1+1}\dots w_{i_2})\dots(w_{i_k+1}\dots w_{12}) = 1
\end{equation*}
in $\Gamma_1$. Then there is an elliptic fibration $\pi : \mathbb{CP}^{2} \# 9 \overline{\mathbb{CP}^{2}} \longrightarrow \mathbb{CP}^{1}$ with singular fibres $F_1, \dots, F_k$, where the monodromy of $F_j$ is conjugate to the bracketed subword $(w_{i_j+1}\dots w_{i_{j+1}})$. \newline
Table \ref{monodromysec}.1 gives a list of the monodromies (in $\Gamma_1$ and $SL(2;\mathbb{Z})$) of the singular fibres in Kodaira's list of singular fibres. This list comes from \cite{SSS} and \cite{HKK}.
\pagebreak
\begin{align*}
& \mathrm{Singular \; \; fibre} && \Gamma_1 && SL(2;\mathbb{Z}) \\
& I_1 && a && \left( \begin{array}{cc} \ 1 & 1 \\ 0 & 1 \end{array} \right) \\
& I_n && a^n && \left( \begin{array}{cc} 1 & n \\ 0 & 1 \end{array} \right) \\
& II && ba && \left( \begin{array}{cc} \phantom{1} 1 & 1 \\ -1 & 0 \end{array} \right)\\
& III && aba=bab && \left( \begin{array}{cc} \phantom{1} 0 & 1 \\ -1 & 0 \end{array} \right) \\
& IV && (ba)^2 && \left( \begin{array}{cc} \phantom{1} 0 & 1 \\ -1 & -1 \end{array} \right)\\
& I_n^{*} && (ab)^3a^n && \left( \begin{array}{cc} -1 & -n \\ \phantom{1}0 & -1 \end{array} \right)\\
& \tilde{E_8} && (ba)^5 && \left( \begin{array}{cc} 0 & -1 \\ 1 & \phantom{1} 1 \end{array} \right) \\
& \tilde{E_7} && (ba)^4 b &&\left( \begin{array}{cc} 0 & -1 \\ 1 & \phantom{1} 0 \end{array} \right) \\
& \tilde{E_6} && (ba)^4 &&\left( \begin{array}{cc} -1 & -1 \\ \phantom{1}1 & \phantom{1} 0 \end{array} \right)
\end{align*}
\begin{center}
Table \ref{monodromysec}.1
\end{center}
\pagebreak
\section{Double Node Neighbourhoods and Exotic $\mathbb{CP}^{2} \# 5 \overline{\mathbb{CP}^{2}}$'s} \label{dnnsec}
We give an outline of the double node neighbourhood technique presented in \cite{FS3}. \newline
A \emph{double node neighbourhood} $D$ is a (fibred) neighbourhood of an elliptic fibration that contains two fishtail fibres with the same monodromy\footnote{This is equivalent to saying that the neighbourhood contains an $I_2$-fibre.}. This means that there is a smooth torus fibre of $D$ that has two vanishing cycles $C_1$ and $C_2$ that collapse to a point over the points $p_1$ and $p_2$ in $D$, respectively. \newline
We now consider the effect of performing knot surgery along a (generic) torus fibre $F$ in a double node neighbourhood using a twist knot $K = T(n)$, pictured in Figure \ref{dnnsec}.1 (from \cite{FS3}). Recall that although we are forced to send the homology class of the longitude of our knot $K$ to a certain homology class, we are free to send the meridian of our knot $K$ to any homology class (that is a generator) of our choice. We choose the gluing in the knot surgery construction to be such that we send the homology class of a meridian of $K$ to the class $a \times pt$, the class of the vanishing cycle. \newline
It is proved in \cite{FS3} that while $D$ had a section which was a disk $D_1$, the effect of knot surgery is that $D_1$ has a smaller disk $D_2$ removed from it and a punctured torus glued onto the boundary of $D_2$ in $D_1$ (essentially, connect-summing with a torus). Furthermore, this punctured torus contains a loop which bounds a disk $U$ of self-intersection $-1$. We can then perform surgery on an annular neighbourhood of $U$ that will result in the torus becoming an immersed sphere $S$ of self-intersection $-1$. Note that this step is nontrivial and is the key argument in \cite{FS3}. \newline
Now, R. Fintushel and R. Stern proceed in \cite{FS3} to construct an infinite family of $\mathbb{CP}^{2} \# 6 \overline{\mathbb{CP}^{2}}$'s as follows: we recall that $E(1)$ has an elliptic fibration with one $\tilde{E_6}$ fibre and four fishtail fibres. Since $(ab)^6 = 1$ in $\Gamma_1$, the factorisation $(ab)^6 = (ab)^4 a^2 (a^{-1}ba) b$ shows the existence of such a fibration, since
\begin{itemize}
\item[(i)] $b(ab)^4 b^{-1} \sim (ba)^4$ is the monodromy of the $\tilde{E_6}$ fibre,
\item[(ii)] $a$ is the monodromy of a fishtail fibre,
\item[(iii)] a conjugate of $b$ in $\Gamma_1$ is $(ba)b(ba)^{-1} = (bab)a^{-1}b^{-1} = (aba)a^{-1}b^{-1} = a$ (where we used the braid relation $aba=bab$ in the second step), and so $b$ is also the monodromy of a fishtail fibre,
\item[(iv)] $a^{-1}ba$ is a conjugate of $b$, and therefore of $a$, in $\Gamma_1$ and so is also the monodromy of a fishtail fibre.
\end{itemize}
and, furthermore, two of the fishtail fibres have the same monodromy $a$. \newline
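The factorisation used above, and the conjugation in (iii), can also be checked directly in $SL(2;\mathbb{Z})$; the sketch below (assuming numpy, with words read left to right as matrix products) is only a numerical confirmation of the algebra.
\begin{verbatim}
import numpy as np

A = np.array([[1, 1], [0, 1]])     # a
B = np.array([[1, 0], [-1, 1]])    # b
Ainv = np.array([[1, -1], [0, 1]]) # a^{-1}

AB4 = np.linalg.matrix_power(A @ B, 4)
lhs = AB4 @ A @ A @ (Ainv @ B @ A) @ B        # (ab)^4 a a (a^{-1}ba) b

assert (lhs == np.linalg.matrix_power(A @ B, 6)).all()
assert (lhs == np.eye(2, dtype=int)).all()    # equals (ab)^6 = 1

# (iii): (ba) b (ba)^{-1} = a, checked without inverses as (ba) b = a (ba)
assert ((B @ A) @ B == A @ (B @ A)).all()
print("factorisation of (ab)^6 verified")
\end{verbatim}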
We can therefore find a double node neighbourhood $D \subset E(1)$ containing two fishtail fibres with the same monodromy, and $E(1)\setminus D$ contains an $\tilde{E_6}$ fibre and the two remaining fishtail fibres $F_1$ and $F_2$. Choosing the knot $K$ to be the twist knot $T(n)$, we perform the knot surgery on $D$ as above, and therefore $D_K$ contains an immersed sphere $S$ of self-intersection $-1$. We then glue $D_K$ back into $E(1) \setminus D$ to obtain the manifold $Y_{n} = (E(1) \setminus D) \cup D_K$, and then each of the two fishtail fibres $F_1$ and $F_2$ intersects $S$ transversely in a single point. \newline
We blow up at the double points of $S$, $F_1$ and $F_2$ to obtain spheres of self-intersection $-5$, $-4$ and $-4$, respectively, which intersect in a pair of points. These intersection points are singular points which can be smoothed to obtain a sphere $R$ of self-intersection $-9$. Furthermore, $R$ intersects the sphere $S_5$ in the $\tilde{E_6}$ fibre (one of the spheres of multiplicity 1) in a single positive point (i.e. $R \cdot S_5 = +1$). We are therefore in the same position to perform a rational blowdown along a configuration of spheres $C_7$, as in Park's construction, except that we have only used three blow-ups of $E(1)$ (actually, $Y_n$, which is homeomorphic to $E(1)$) instead of the four done in \cite{P1} and in section \ref{mainsectionlabel} above. \newline
Therefore, the rational blowdown along $C_7$ will produce a manifold $X_n$ that is homeomorphic to $\mathbb{CP}^{2} \# 6 \overline{\mathbb{CP}^{2}}$. These manifolds $X_n$ will not be diffeomorphic to $\mathbb{CP}^{2} \# 6 \overline{\mathbb{CP}^{2}}$, and in fact because of the knot surgery, will not even be diffeomorphic to each other. Therefore, we have constructed an infinite family of 4-manifolds homeomorphic to $\mathbb{CP}^{2} \# 6 \overline{\mathbb{CP}^{2}}$ but not diffeomorphic to it.
Several constructions of infinite families of manifolds homeomorphic but not diffeomorphic to $\mathbb{CP}^{2} \# 5 \overline{\mathbb{CP}^{2}}$ are presented in \cite{PSS}. One particularly nice construction starts by first showing that there is an elliptic fibration on $E(1)$ with an $I_6$ fibre and six fishtail fibres, one with monodromy $m_1$, one with monodromy $m_2$, two with monodromy $m_3$ and the last two with monodromy $m_4$, where the $m_i$ ($i=1,2,3,4$) are conjugates of $a$ in $\Gamma _1$ (explicitly, this is done by using the braid relation to turn a conjugate of $(ab)^6$ into $(a^3 b)^3$ which factorises as $a^6 (a^{-3}b a^3)(bab^{-1})^2 b^2 (b^{-1}ab)$). \newline
Two pairs of fishtail fibres with the same monodromy allow us to perform the double node neighbourhood knot surgery twice, and we get to the stage of having a sphere of self-intersection $-9$ in a manifold $V_n \# 2 \overline{\mathbb{CP}^{2}}$, where $V_n$ is some manifold (the result of the knot surgery with the twist knots $T(1)$ and $T(n)$) that is homeomorphic to $E(1)$, and we are again ready to perform a rational blowdown along a configuration $C_7$. This time, the resulting 4-manifolds $Q_n$ will be homeomorphic (but not diffeomorphic) to $\mathbb{CP}^{2} \# 5 \overline{\mathbb{CP}^{2}}$, and the manifolds $Q_n$ will all be non-diffeomorphic. \newline
This short account of a few results in \cite{FS3} and \cite{PSS} does not do justice to these papers, which contain far more detail.
\begin{center}
\begin{minipage}{10cm}
\includegraphics[width=10cm]{twistknot1.eps}
\end{minipage}
\end{center}
\begin{center}
Figure \ref{dnnsec}.1
\end{center}
\pagebreak
\section{Epilogue}
In early 2007, the first example of an exotic $\mathbb{CP}^{2} \# 3 \overline{\mathbb{CP}^{2}}$ was given in \cite{AP} by A. Akhmedov and B. Doug Park. The reader is also referred to the subsequent papers \cite{BK1} and \cite{ABP}. \newline
K. Yasui recently posted a paper (\cite{Ya}) on the arXiv giving constructions of exotic $\mathbb{CP}^{2} \# n \overline{\mathbb{CP}^{2}}$'s ($n = 5,6,7,8,9$) using rational blowdowns and without using elliptic fibrations. \newline
In conclusion, over the last few years mathematicians, in a way, have got closer to finding an exotic $\mathbb{CP}^{2}$. However, it is still unknown whether any of the following 4-manifolds
\begin{equation*}
\mathbb{CP}^{2}, \qquad \mathbb{CP}^{2} \# \overline{\mathbb{CP}^{2}}, \qquad \mathbb{CP}^{2} \# 2 \overline{\mathbb{CP}^{2}}, \qquad S^2 \times S^2, \qquad S^4
\end{equation*}
admits an exotic smooth structure.
\pagebreak
\section{A Note About Sources, Diagrams and Corrections}
Before discussing the books and papers, I must give credit where it is due. My advisor, David Gay, helped me immensely with all the background material and sections 2$-$14 and 21$-$24, and answered many, many questions on the original thesis overall and topology in general. Andr\'as Stipsicz gave me several excellent lectures on singular fibres in elliptic fibrations and double-node neighbourhoods, and patiently answered my multitude of questions, which has resulted in sections 11, 15$-$20 and 25$-$28. I cannot overstate their contribution. \newline
Although the bibliography is quite large, most references are only used for a single result. Only a few books or articles have been used extensively. \newline
\cite{GS} is the key background reference, containing chapters on the classification of topological 4-manifolds, blowing up and blowing down, rational blowdowns, Kirby calculus, elliptic fibrations, $\dots$ the list goes on. It would be helpful to have already read \cite{GP}, or to have a copy at hand, before reading this article. \newline
\cite{P1} and \cite{PSS} are the papers this article is ``built around'', and the main goal of writing this article was to make these papers more accessible. \newline
The next most important are the papers \cite{FS1}, \cite{FS2} and \cite{FS3}, the original sources for rational blowdowns, knot surgery and double node neighbourhoods. \newline
\cite{SSS} is my favourite source for singular fibres, although other good sources are \cite{HKK} and \cite{BPV}. \newline
\cite{Sc} is an excellent book that covers a lot of the material presented in \cite{GS}, although is often not as detailed. It also has an excellent section on knot surgery. \newline
\cite{MS2} is an excellent reference for anything to do with symplectic topology. \newline
\cite{Ro} is my favourite source for the theory of knots and links. \cite{CF} also has a good section on group presentations not found in \cite{Ro}. \newline
Although the references listed above provided over 95\% of the source material for this article, the remaining 5\% provided by the rest of the references is no less important.\newline
Finally, I should mention that the diagrams included in this article were made using the wonderful program Ipe (version 6.0 preview 23).
\subsection*{Corrections}
There are several versions of this article available on the arXiv, of which this is version 4. \newline
Version 1 was the first draft, which I had hoped was error-free. \newline
In version 2 the order of attribution, regarding the construction of exotic $\mathbb{CP}^{2} \# 3 \overline{\mathbb{CP}^{2}}$s, at the beginning of the Epilogue was corrected. \newline
In version 3 the abstract was modified to make clear the purpose of this article.\newline
In version 4 Definition \ref{correcteddefnlabel} was modified, and the accompanying commutative diagram was removed.
The details for the references for [R] and [U], which were neglected in the earlier versions, were finally filled in. Section 20 in version 3 was moved to become section 11, and was renamed ``Results Concerning Rational Blowdowns''. Consequently, a few sections have been shifted accordingly. Various typographical errors were corrected; however, I fear that there are still some out there.
\pagebreak
| {
"timestamp": "2008-12-31T20:10:20",
"yymm": "0812",
"arxiv_id": "0812.1883",
"language": "en",
"url": "https://arxiv.org/abs/0812.1883",
"abstract": "This article intends to provide an introduction to the construction of small exotic 4-manifolds. Some of the necessary background is covered. An exposition is given of J. Park's construction inarXiv:math.GT/0311395of an exotic CP^2#7(-CP^2). This article does not intend to present any new results. It was originally a Master's thesis, and its aim is merely to provide a leisurely introduction to exotic 4-manifolds that might be of use to interested graduate students.",
"subjects": "Geometric Topology (math.GT)",
"title": "An introduction to exotic 4-manifolds",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540722737479,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046568439696
} |
https://arxiv.org/abs/2006.16811 | Path Integral Based Convolution and Pooling for Graph Neural Networks | Graph neural networks (GNNs) extend the functionality of traditional neural networks to graph-structured data. Similar to CNNs, an optimized design of graph convolution and pooling is key to success. Borrowing ideas from physics, we propose a path integral based graph neural network (PAN) for classification and regression tasks on graphs. Specifically, we consider a convolution operation that involves every path linking the message sender and receiver with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. It generalizes the graph Laplacian to a new transition matrix we call maximal entropy transition (MET) matrix derived from a path integral formalism. Importantly, the diagonal entries of the MET matrix are directly related to the subgraph centrality, thus providing a natural and adaptive pooling mechanism. PAN provides a versatile framework that can be tailored for different graph data with varying sizes and structures. We can view most existing GNN architectures as special cases of PAN. Experimental results show that PAN achieves state-of-the-art performance on various graph classification/regression tasks, including a new benchmark dataset from statistical mechanics we propose to boost applications of GNN in physical sciences. | \section{Introduction}
The triumph of convolutional neural networks (CNNs) has motivated researchers to develop similar architectures for graph-structured data. The task is challenging due to the absence of regular grids. One notable proposal is to define convolutions in the Fourier space \cite{BrZaSzLe2013,Bronstein_etal2017}. This method relies on finding the spectrum of the graph Laplacian $I-D^{-1}A$ or $I-D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ and then applies filters to the components of input signal $X$ under the corresponding basis, where $A$ is the adjacency matrix of the graph, and $D$ is the corresponding degree matrix. Due to the high computational complexity of diagonalizing the graph Laplacian, people have proposed many simplifications \cite{defferrard2016convolutional, KiWe2017}.
The graph Laplacian based methods essentially rely on message passing \cite{gilmer2017neural} between directly connected nodes with equal weights shared among all edges, which is at heart a generic random walk (GRW) defined on graphs. It can be seen most obviously from the GCN model \cite{KiWe2017}, where the normalized adjacency matrix is directly applied to the left-hand side of the input. In statistical physics, $D^{-1}A$ is known as the transition matrix of a particle doing a random walk on the graph, where the particle hops to all directly connected nodes with equiprobability. Many direct space-based methods \cite{node2vec_2016,LiTaBrZe2015,velivckovic2017graph,Planetoid_2016} can be viewed as generalizations of GRW, but with biased weights among the neighbors.
In this paper, we go beyond the GRW picture, where information necessarily dilutes when a path branches, and instead consider every path linking the message sender and receiver as the elemental unit in message passing. Inspired by the path integral formulation developed by Feynman \cite{feynman2010quantum,feynman1948space}, we propose a graph convolution that assigns trainable weights to each path depending on its length. This formulation results in a \emph{maximal entropy transition} (MET) matrix, which is the counterpart of graph Laplacian in GRW. By introducing a fictitious temperature, we can continuously tune our model from a fully localized one (MLP) to a spectrum based model. Importantly, the diagonal of the MET matrix is intimately related to the subgraph centrality, and thus provides a natural pooling method without extra computations. We call this complete path integral based graph neural network framework PAN.
We demonstrate that PAN outperforms many popular architectures on benchmark datasets. We also introduce a new dataset from statistical mechanics, which overcomes the lack of explainability and tunability of many previous ones. The dataset can serve as another benchmark, especially for boosting applications of GNN in physical sciences. This dataset again confirms that PAN has a faster convergence rate, higher prediction accuracy, and better stability compared to many counterparts.
\section{Path Integral Based Graph Convolution}
\paragraph{Path integral and MET matrix} Feynman's path integral formulation \cite{feynman2010quantum,Zinn-Justin:2009} interprets the probability amplitude $\phi(x,t)$ as a weighted average in the configuration space, where the contribution from $\phi_0(x)$ is computed by summing over the influences (denoted by $e^{iS[\mathbf{x},\mathbf{\Dot{x}}]}$) from all paths connecting itself and $\phi(x,t)$. This formulation has been later extensively used in statistical mechanics and stochastic processes \cite{kleinert2009path}. We note that this formulation essentially constructs a convolution by considering the contribution from all possible paths in the continuous space.
\begin{figure}[th]
\vskip -4mm
\centering
\begin{minipage}{0.8\textwidth}
\includegraphics[width=\textwidth]{fig1_3.png}
\end{minipage}
\vspace{-2mm}
\caption{A schematic analogy between the original path integral formulation in continuous space (left) and the discrete version for a graph (right). Symbols are defined in the text.}
\label{fig:panconv}
\vskip -0.1in
\end{figure}
Using this idea, but modified for discrete graph structures, we can heuristically propose a statistical mechanics model on how information is shared between different nodes on a given graph.
In the most general form, we write observable $\phi_i$ at the $i$-th node for a graph with $N$ nodes as
\begin{equation} \label{eq:stat}
\phi_i=\frac{1}{Z_i}\sum_{j=1}^{N}\phi_j \sum_{\{\mathbf l|l_0=i,l_{|\mathbf{l}|}=j\}}e^{-\frac{E[\mathbf l]}{T}},
\end{equation}
where $Z_i$ is the normalization factor known as the \textit{partition function} for the $i$-th node. Here a path $\mathbf{l}$ is a sequence of connected nodes $(l_0l_1\dots l_{|\mathbf{l}|})$ where $A_{l_il_{i+1}}=1$, and the length of the path is denoted by $|\mathbf{l}|$. In Figure~\ref{fig:panconv} we draw the analogy between our discrete version and the original formulation. It is straightforward to see that the integral should now be replaced by a summation, and $\phi_0(x)$ only resides on nodes. Since a statistical mechanics perspective is more appropriate in our case, we directly change the exponential term, which is originally an integral of the Lagrangian, to a Boltzmann factor with fictitious energy $E[\mathbf{l}]$ and temperature $T$; we choose Boltzmann's constant $k_B=1$. Nevertheless, we still exploit the fact that the energy is a functional of the path, which gives us a way to weight the influence of other nodes through a certain path. The fictitious temperature controls the excitation level of the system, which reflects to what extent information is localized or extended. In practice, there is no need to learn the fictitious temperature or energy separately; instead, the neural network can directly learn the overall weights, as will be made clearer later.
To obtain an explicit form of our model, we now introduce some mild assumptions and simplifications. Intuitively, we know that information quality usually decays as the path between the message sender and the receiver becomes longer, thus it is reasonable to assume that the energy is not only a functional of path, but can be further simplified as a function that solely depends on the length of the path. In the random walk picture, this means that the hopping is equiprobable among all the paths that have the same length, which maximizes the Shannon entropy of the probability distribution of paths globally, and thus the random walk is given the name maximal entropy random walk \cite{burda2009localization} \footnote{For a weighted graph, a feasible choice for the functional form of the energy could be $E(l_{\rm eff})$, where the effective length of the path $l_{\rm eff}$ can be defined as a summation of the inverse of weights along the path, i.e. $l_{\rm eff}=\sum_{i=0}^{|l|-1}1/w_{l_il_{i+1}}$.}. By first conditioning on the length of the path, we can introduce the overall $n$-th layer weight $k(n;i)$ for node $i$ by
\begin{equation} \label{eq:kn}
k(n;i)=\frac{1}{Z_i}{\sum_{j=1}^{N}g(i,j;n)}e^{-\frac{E(n)}{T}},
\end{equation}
where $g(i,j;n)$ denotes the number of paths between nodes $i$ and $j$ of length $n$, or \textit{density of states} for the energy level $E(n)$ with respect to nodes $i$ and $j$, and the summation is taken over all nodes of the graph. Intuitively, a node $j$ with larger $g(i,j;n)$ has more channels to talk with node $i$, and thus may impose a greater influence on node $i$, as is the case in our formulation. For example, in Figure~\ref{fig:panconv}, nodes $B$ and $C$ are both two steps away from $A$, but $B$ has more paths connecting it to $A$ and would be assigned a larger weight as a consequence. Presumably, the energy $E(n)$ is an increasing function of $n$, which leads to a decaying weight as $n$ increases.\footnote{This does not mean that $k(n;i)$ must necessarily be a decreasing function, as $g(i,j;n)$ grows exponentially in general. It would be valid to apply a cutoff as long as $E(n)\gg nT\ln \lambda_1$ for large $n$, where $\lambda_1$ is the largest eigenvalue of the adjacency matrix $A$.} By applying a cutoff of the maximal path length $L$, we exchange the summation order in \eqref{eq:stat} to obtain
\begin{equation}
\phi_i=\sum_{n=0}^{L}k(n;i)\sum_{j=1}^{N}\frac{g(i,j;n)}{\sum_{s=1}^{N}g(i,s;n)}\phi_j
=\frac{1}{Z_i}\sum_{n=0}^{L}e^{-\frac{E(n)}{T}}\sum_{j=1}^{N}g(i,j;n)\phi_j,
\label{eq:sumkn}
\end{equation}
where the partition function can be explicitly written as
\begin{equation} \label{eq:partition}
Z_i=\sum_{n=0}^{L}e^{-\frac{E(n)}{T}}\sum_{j=1}^{N}g(i,j;n).
\end{equation}
A nice property of this formalism is that we can easily compute $g(i,j;n)$ by raising the adjacency matrix $A$ to the power $n$, which is a well-known property of the adjacency matrix from graph theory, i.e., $g(i,j;n)=A^n_{ij}$.
Plugging this into \eqref{eq:sumkn}, we now have a group of self-consistent equations governed by a transition matrix $M$ (a counterpart of the \textit{propagator} in quantum mechanics), which can be written in the following compact form
\begin{equation} \label{eq:Propagator2}
M=Z^{-1}\sum_{n=0}^{L}e^{-\frac{E(n)}{T}}A^n,
\end{equation}
where ${\rm diag}(Z)_i=Z_i$.
We call the matrix $M$ \emph{maximal entropy transition} (MET) matrix, with regard to the fact that it realizes maximal entropy under the microcanonical ensemble. This transition matrix replaces the role of the graph Laplacian under our framework.
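As a small numerical illustration of \eqref{eq:Propagator2} (a sketch assuming numpy; the weights standing in for $e^{-E(n)/T}$ are fixed numbers here, whereas in PAN they are learned), the MET matrix of a toy graph can be assembled as follows.
\begin{verbatim}
import numpy as np

def met_matrix(A, weights):
    """M = Z^{-1} sum_n w_n A^n, with w_n playing the role of exp(-E(n)/T)."""
    S = sum(w * np.linalg.matrix_power(A, n) for n, w in enumerate(weights))
    Z = S.sum(axis=1)            # partition function Z_i = sum_n w_n sum_j g(i,j;n)
    return S / Z[:, None]        # row-normalised: each row of M sums to 1

# toy graph: a 4-cycle with one chord
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
weights = [1.0, 0.7, 0.3]        # cutoff L = 2, decaying with path length
M = met_matrix(A, weights)
print(M.sum(axis=1))             # all ones, by construction
\end{verbatim}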
More generally, one can constrain the paths under consideration to, for example, shortest paths or self-avoiding paths. Consequentially, $g(i,j;n)$ will take more complicated forms and the matrix $A^n$ needs to be modified accordingly. In this paper, we focus on the simplest scenario and apply no constraints for the simplicity of the discussion.
\paragraph{PAN convolution} The \emph{eigenstates}, or the basis of the system $\{\psi_i\}$ satisfy $M\psi_i=\lambda_i\psi_i$.
Similar to the basis formed by the graph Laplacian, one can define a graph convolution based on the spectrum of the MET matrix, which now has a distinct physical meaning. However, it is computationally impractical to diagonalize $M$ in every iteration as it is updated. To reduce the computational complexity, we apply a trick similar to GCN \cite{KiWe2017} by directly multiplying $M$ on the left-hand side of the input and accompanying it by another weight matrix $W$ on the right-hand side. The convolutional layer is then reduced to a simple form
\begin{equation} \label{eq:conv}
X^{(h+1)}=M^{(h)}X^{(h)}W^{(h)},
\end{equation}
where $h$ refers to the layer number.
Applying $M$ to the input $X$ is essentially a weighted average among neighbors of a given node, which raises the question of whether the normalization consistent with the path integral formulation works best in a data-driven context. It has been consistently shown experimentally that a symmetric normalization usually gives better results \cite{KiWe2017,LNet,MaLiWa2019}. This observation might have an intuitive explanation. Most generally, one can consider the normalization $Z^{-\theta_1}\cdot Z^{-\theta_2}$, where $\theta_1+\theta_2=1$. There are two extreme situations. When $\theta_1=1$ and $\theta_2=0$, it is called random-walk normalization and the model can be understood as ``receiver-controlled", in the sense that the node of interest performs an average among all the neighbors weighted by the number of channels that connect them. On the contrary, when $\theta_1=0$ and $\theta_2=1$, the model becomes ``sender-controlled", since the weight is determined by the fraction of the flow coming out of the sender that is directed to the receiver. Because, for an undirected graph, the exact interaction between connected nodes is unknown, the symmetric normalization can, as a compromise, outperform both extremes, even if it may not be optimal.
This consideration leads us to a final perfection step that changes the normalization $Z^{-1}$ in $M$ to the symmetric normalized version. The convolutional layer then becomes
\begin{equation}\label{eq:conv_symm_norm}
X^{(h+1)}=M^{(h)}X^{(h)}W^{(h)}=Z^{-1/2}\sum_{n=0}^{L}e^{-\frac{E(n)}{T}}A^n Z^{-1/2}X^{(h)}W^{(h)}.
\end{equation}
We shall call this graph convolution \emph{PANConv}.
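A minimal dense-adjacency sketch of \eqref{eq:conv_symm_norm} in PyTorch might look as follows; this is our illustration of the formula, not the authors' released implementation, and the learnable per-order weights stand in for the factors $e^{-E(n)/T}$.
\begin{verbatim}
import torch
import torch.nn as nn

class PANConvDense(nn.Module):
    """X' = Z^{-1/2} (sum_n w_n A^n) Z^{-1/2} X W on a dense adjacency A."""
    def __init__(self, in_dim, out_dim, L=3):
        super().__init__()
        self.L = L
        self.path_weights = nn.Parameter(torch.ones(L + 1))  # w_n ~ exp(-E(n)/T)
        self.lin = nn.Linear(in_dim, out_dim, bias=False)    # weight matrix W

    def forward(self, X, A):
        # X: (N, d) node features, A: (N, N) dense adjacency matrix (float)
        N = A.size(0)
        S = torch.zeros_like(A)
        An = torch.eye(N, dtype=A.dtype, device=A.device)
        for n in range(self.L + 1):
            S = S + self.path_weights[n] * An
            An = An @ A                                        # next power A^{n+1}
        Z = S.sum(dim=1).clamp(min=1e-12)                      # partition function
        M = S / Z.sqrt().unsqueeze(1) / Z.sqrt().unsqueeze(0)  # symmetric normalisation
        return M @ self.lin(X), M                              # return M for pooling too

# usage: conv = PANConvDense(16, 32, L=3); X2, M = conv(X, A)
\end{verbatim}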
The optimal cutoff $L$ of the series depends on the intrinsic properties of the graph, which are represented by the temperature $T$. Incorporating more terms is analogous to having more particles excited to higher energy levels at a higher temperature. For instance, in the \emph{low-temperature limit}, $L=0$, the model is reduced to the MLP model. In the \emph{high-temperature limit}, all factors $\exp(-E(n)/T)$ are effectively one, and the term with the largest power dominates the summation. We can see this from
$A^n=\sum_{i=1}^{N}\lambda_i^n \psi_i\psi_i^T$,
where $\lambda_1,\dots,\lambda_N$ are sorted in descending order. By the Perron-Frobenius theorem, we may keep only the leading order term with the unique largest eigenvalue $\lambda_1$ when $n\rightarrow \infty$. We then reach a prototype of the high-temperature model $X^{(h+1)}=(I+\psi_1\psi_1^T)X^{(h)}W^{(h)}$. The most suitable choice of the cutoff $L$ reflects the intrinsic dynamics of the graph.
\section{Path Integral Based Graph Pooling}
For graph classification and regression tasks, another critical component is the pooling mechanism, which enables us to deal with graph input with variable sizes and structures. Here we show that the PAN framework provides a natural ranking of node importance based on the MET matrix, intimately related to the subgraph centrality. This pooling scheme, denoted by PANPool, requires no further work aside from the convolution and can discover the underlying local motif adaptively.
\paragraph{MET matrix and subgraph centrality}
Many different ways to rank the ``importance" of nodes in a graph have been proposed in the complex networks community. The most straightforward one is the degree centrality (DC), which counts the number of neighbors; other more sophisticated measures include, for example, betweenness centrality (BC) and eigenvector centrality (EC) \cite{newman2018networks}. Although these methods do give specific measures of the global importance of the nodes, they usually fail to pick up local patterns. However, from the way CNNs work on image classification, we know that it is the \textit{locally} representative pixels that matter.
Estrada and Rodriguez-Velazquez \cite{estrada2005subgraph} have shown that subgraph centrality is superior to the methods mentioned above in detecting local graph motifs, which are crucial to the analysis of many social and biological networks. The subgraph centrality computes a weighted sum of the numbers of closed walks of different lengths based at a node. Mathematically, it is simply $\sum_{k=0}^{\infty}(A^k)_{ii}/k!$ for node $i$.
Interestingly, one immediately sees the resemblance between this expression and the diagonal elements of the MET matrix. The difference is easy to explain. The summation in the MET matrix is truncated at maximal length $L$, and the weights $e^{-\frac{E(n)}{T}}$ for different path lengths are learnable. In contrast, the predetermined weight $1/k!$ is a convenient choice to ensure the convergence of the summation and an analytical form of the result, which reads $\sum_{j=1}^{N}v_j^2(i)e^{\lambda_j}$, where $v_j(i)$ is the $i$-th element of the orthonormal basis associated with the eigenvalue $\lambda_j$.
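The closed form quoted above is easy to verify numerically; the following sketch (assuming numpy and scipy, purely for illustration) compares $\mathrm{diag}(e^{A}) = \sum_k \mathrm{diag}(A^k)/k!$ with the spectral expression $\sum_{j}v_j^2(i)e^{\lambda_j}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

sc_expm = np.diag(expm(A))            # subgraph centrality, sum_k (A^k)_ii / k!

lam, V = np.linalg.eigh(A)            # A symmetric: orthonormal eigenvectors
sc_spec = (V**2) @ np.exp(lam)        # sum_j v_j(i)^2 exp(lambda_j)

assert np.allclose(sc_expm, sc_spec)
print(sc_expm)                        # the best-connected node scores highest
\end{verbatim}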
Now it becomes clear that the MET matrix plays the role of a path integral-based convolution. Its diagonal elements $M_{ii}$ also automatically provide a measure of the importance of node $i$, thus enabling a pooling mechanism by sorting $M_{ii}$. Importantly, this pooling method has three main merits compared to the subgraph centrality. First, we can exploit the readily-computed MET matrix, thus circumventing extra computations, especially the direct diagonalization of the adjacency matrix in the case of subgraph centrality. Second, the weights are data-driven rather than predetermined, and can effectively adapt to different inputs.
Furthermore, the MET matrix is normalized \footnote{Notice that, unlike the case of convolutions, it does not matter here whether the normalization is symmetric. We only care about the diagonal terms, and different normalization methods will give the same result.}, which puts more weight on the \textit{local} importance of the nodes, and can potentially avoid clustering around ``hubs" that are commonly seen in real-world ``scale-free" networks \cite{barabasi2016network}.
The PANPool strategy has a similar physical interpretation to the PAN convolution. In the low-temperature limit, for example, if we set the cut-off at $L=2$, the ranking of $\sum_{n=0}^{L}e^{-\frac{E(n)}{T}}(A^n)_{ii}$ coincides with the ranking by degree, and thus we recover the degree centrality. In the high-temperature limit, as $n \rightarrow \infty$, the sum is dominated by the magnitude of the $i$-th element of the eigenvector associated with the largest eigenvalue of $A$, thus the corresponding ranking is reduced to the ranking of the eigenvector centrality. By tuning $L$, PANPool provides a flexible strategy that can adapt to the ``sweet spot" of the input.
To better understand the effect of the proposed method, in Figure~\ref{fig:panpool_pointpattern}, we visualize the top 20\% nodes by different measures of node importance of a connected point pattern called RSA, which we detail in Section~\ref{sec:pan_pointpattern}. It is noteworthy that while DC selects points relatively uniformly, the result of EC is highly concentrated. This phenomenon is analogous to the contrast between the rather uniform diffusion in the classical picture and the Anderson localization \cite{anderson1958absence} in the quantum mechanics of disordered systems \cite{burda2009localization}. In this sense, the MET matrix tries to find a ``mesoscopic" description that best fits the structure of the input data. Importantly, we note that the unnormalized MET matrix tends to focus on the densely connected areas or hubs. In contrast, the normalized one tends to choose the \textit{locally} representative nodes and leave out the equally well-connected nodes in the hubs. This observation leads us to propose an improved pooling strategy that balances the influencers at both the global and local levels.
\begin{figure}[th]
\centering
\begin{minipage}{\textwidth}
\centering
\begin{minipage}{0.19\textwidth}
\includegraphics[width=\textwidth]{RSA_03_784_degree_sz300.jpg}
\end{minipage}
\begin{minipage}{0.19\textwidth}
\includegraphics[width=\textwidth]{RSA_03_784_eigen_sz300.jpg}
\end{minipage}
\begin{minipage}{0.19\textwidth}
\includegraphics[width=\textwidth]{RSA_03_784_M_sz300.jpg}
\end{minipage}
\begin{minipage}{0.19\textwidth}
\includegraphics[width=\textwidth]{RSA_03_784_M_normalized_sz300.jpg}
\end{minipage}
\begin{minipage}{0.19\textwidth}
\includegraphics[width=\textwidth]{RSA_03_784_pan_sz300.jpg}
\end{minipage}
\end{minipage}
\caption{Top 20\% nodes (shown in blue) by different measures of node importance of an RSA pattern from PointPattern dataset. From left to right are results from: Degree Centrality, Eigenvector Centrality, MET matrix without normalization, MET matrix and Hybrid PANPool.}
\label{fig:panpool_pointpattern}
\vskip -0.1in
\end{figure}
\paragraph{Hybrid PANPool}
To combine the contribution of the local motifs and the global importance, we propose a hybrid PAN pooling (denoted by PANPool) using a simple linear model. The global importance can be represented by, but is not limited to, the strength of the input signal $X$ itself. More precisely, we project the feature $X \in R^{N\times d}$ by a trainable parameter vector $p\in R^d$ and combine it with the diagonal ${\rm diag}(M)$ of the MET matrix to obtain a score vector
\begin{equation}\label{eq:score_pool}
{\rm score} = Xp + \beta {\rm diag}(M).
\end{equation}
Here $\beta$ is a real learnable parameter that controls the emphasis on these two potentially competing factors. PANPool then selects a fraction of the nodes ranked by this score, and outputs the pooled feature array $\widetilde{X} \in R^{K\times d}$ and the corresponding adjacency matrix $\widetilde{A}\in R^{K\times K}$. The new node score in \eqref{eq:score_pool} jointly considers both node features (at the global level) and graph structure (at the local level). In Figure~\ref{fig:panpool_pointpattern}, PANPool tends to select nodes that are important both locally and globally. We also tested alternative designs under the same consideration; see the supplementary material for details.
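A dense sketch of the pooling step \eqref{eq:score_pool}, again our illustration rather than the reference implementation, could be:
\begin{verbatim}
import torch
import torch.nn as nn

class PANPoolDense(nn.Module):
    """Keep the top-K nodes ranked by score = X p + beta * diag(M)."""
    def __init__(self, in_dim, ratio=0.5):
        super().__init__()
        self.ratio = ratio
        self.p = nn.Parameter(torch.randn(in_dim))
        self.beta = nn.Parameter(torch.tensor(1.0))

    def forward(self, X, A, M):
        score = X @ self.p + self.beta * torch.diagonal(M)
        K = max(1, int(self.ratio * X.size(0)))
        idx = torch.topk(score, K).indices
        # pooled features and the induced subgraph adjacency
        return X[idx], A[idx][:, idx], idx

# usage, together with the PANConvDense sketch above:
# X2, M = conv(X, A); Xp, Ap, kept = pool(X2, A, M)
\end{verbatim}
In a full implementation one would typically also rescale the kept features by (a squashed version of) their scores, as in TopK-style pooling, so that the score parameters receive gradients; we omit this here for brevity.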
\section{Related Works}
Graph neural networks have received much attention recently \cite{Survey_Battaglia,LiTaBrZe2015,scarselli2009graph,Survey_ZhangCQ,
Survey_ZhuWW,Survey_SunMS}. For graph convolutions, many works take accounts of the first order of the adjacency matrix in the spatial domain or graph Laplacian in the spectral domain. Bruna et al. \cite{BrZaSzLe2013} first proposed graph convolution using the Fourier method, which is, however, computationally expensive. Many different methods have been proposed to overcome this difficulty \cite{DCNN_2016,ChZhSo2018,ChMaXi2018fastgcn,defferrard2016convolutional,gilmer2017neural,hamilton2017inductive,KiWe2017,Monti_etal2017,Graph-CNN_2017,GWNN,GIN}.
Another vital stream considers the attention mechanism \cite{velivckovic2017graph}, which infers the interaction between nodes without using a diffusion-like picture. Some other GNN models use multi-scale information and higher-order adjacency matrix \cite{sami2018watch,mixhop,ngcn,flam2020neural,klicpera2019diffusion,LNet,SGC}. Compared to the generic diffusion picture \cite{node2vec_2016,DeepWalk_2014,tang2015line}, the maximal entropy random walk has already shown excellent performance on link prediction \cite{li2011link} or community detection \cite{ochab2013maximal} tasks. However, many popular models can be related to or viewed as certain explicit realizations of our framework. We can interpret the MET matrix as an operator that acts on the graph input, which works as a kernel that allocates appropriate weights among the neighbors of a given node. This mechanism is similar to the attention mechanism \cite{velivckovic2017graph}, while we restrict the functional form of $M$ based on physical intuitions and preserve a compact form. Although we keep the number of features by applying $M$, one can easily concatenate the aggregated information of neighbors like GraphSAGE \cite{hamilton2017inductive} or GAT \cite{velivckovic2017graph}. Importantly, the best choice of the cutoff $L$ reveals the intrinsic dynamics of the graph. In particular, by choosing $L=1$, model \eqref{eq:conv_symm_norm} is essentially the GCN model \cite{KiWe2017}. The trick of adding self-loops is automatically realized in higher powers of $A$. By replacing $A$ in \eqref{eq:conv_symm_norm} with $D^{-1}A$ or $D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$, we can easily transform our model to a multi-step GRW version, which is indeed the format of LanczosNet \cite{LNet}. The preliminary ideas about PAN convolution and its application to node classification have been presented at an ICML workshop \cite{MaLiWa2019}. This paper focuses on path integral based convolution and pooling for classification and regression tasks at graph-level.
Graph pooling is another crucial step of a GNN for making the output a uniform size in graph classification and regression tasks. Researchers have proposed many pooling methods from different aspects. For example, one can merely consider node features or node embeddings \cite{duvenaud2015convolutional, gilmer2017neural,vinyals2015order,zhang2018end}. These global pooling methods do not utilize the hierarchical structure of the graph. One way to reinforce the learning ability is to build a data-dependent pooling layer with trainable operations or parameters \cite{cangea2018towards, gao2019graph, knyazev2019understanding,lee2019self,ying2018hierarchical}. One can incorporate more edge information in graph pooling \cite{diehl2019towards, Yuan2020StructPool}. One can also use spectral methods and pool in the Fourier or wavelet domain \cite{ma2019graph,noutahi2019towards,wang2020haargraph}.
PANPool is a method that takes both feature and structure into account.
Finally, it does not escape our analysis that the loss of paths could represent an efficient way to achieve dropout.
\section{Experiments}
In this section, we present the test results of PAN on various datasets in graph classification tasks. We show a performance comparison of PAN with some existing GNN methods. All the experiments were performed using PyTorch Geometric \cite{fey2019fast} and run on a server with Intel(R) Core(TM) i9-9820X CPU 3.30GHz, NVIDIA GeForce RTX 2080 Ti and NVIDIA TITAN V GV100.
\subsection{PAN on Graph Classification Benchmarks}
\paragraph{Datasets and baseline methods}
We test the performance of PAN on five widely used benchmark datasets for graph classification tasks~\cite{KKMMN2016}, including two protein graph datasets \textbf{PROTEINS} and \textbf{PROTEINS\_full} ~\cite{borgwardt2005protein,dobson2003distinguishing};
one mutagen dataset \textbf{MUTAGEN}~\cite{riesen2008iam,kazius2005derivation} (full name Mutagenicity); one dataset, \textbf{NCI1}~\cite{wale2008comparison}, that consists of chemical compounds screened for activity against non-small cell lung cancer and ovarian cancer cell lines; and one dataset, \textbf{AIDS}~\cite{riesen2008iam}, that consists of molecular compounds screened for activity against HIV.
These datasets cover different domains, sample sizes, and graph structures, thus enabling us to obtain a comprehensive understanding of PAN's performance in various scenarios. Specifically, the number of data samples ranges from 1,113 to 4,337, the average number of nodes is from 15.69 to 39.06, and the average number of edges is from 16.20 to 72.82; see a detailed statistical summary of the datasets in the supplementary material.
We compare \textbf{PAN} in Table~\ref{tab:pan_benchmark} with existing GNN models built by combining graph convolution layers \textbf{GCNConv} \cite{KiWe2017}, \textbf{SAGEConv} \cite{hamilton2017inductive}, \textbf{GATConv} \cite{velivckovic2017graph}, or \textbf{SGConv} \cite{Wu2019Simplifying}, and graph pooling layers \textbf{TopKPool}, \textbf{SAGPool} \cite{lee2019self}, \textbf{EdgePool} \cite{ma2019graph}, or \textbf{ASAPool} \cite{ranjan2019asap}.
\vspace{-2mm}
\paragraph{Setting}
In each experiment, we use 80\% of each dataset for training and the remaining 20\% for testing. All GNN models share exactly the same architecture: Conv($n_f$-512) + Pool + Conv(512-256) + Pool + Conv(256-128) + FC(128-$n_c$), where $n_f$ is the feature dimension and $n_c$ is the number of classes. The choice of hyperparameters for these layers is given in the supplementary material. We evaluate the performance by the percentage of correctly predicted labels on the test data. Specifically for PAN, we compared different choices of the cutoff $L$ (between 2 and 7) and report the one that achieved the best result (shown in brackets in Table~\ref{tab:pan_benchmark}).
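For illustration, a sketch of this shared architecture, reusing the illustrative \texttt{SimplePANConv} and \texttt{SimplePANPool} layers introduced above, might look as follows; the actual experiments use the PyTorch Geometric implementations and the hyperparameters listed in the supplementary material.
\begin{verbatim}
import torch.nn as nn

class PANLikeClassifier(nn.Module):
    # Conv(n_f-512) + Pool + Conv(512-256) + Pool + Conv(256-128) + FC(128-n_c),
    # written for a single graph with a dense adjacency matrix (sketch only).
    def __init__(self, n_f, n_c, L=3):
        super().__init__()
        self.conv1, self.pool1 = SimplePANConv(n_f, 512, L), SimplePANPool(512)
        self.conv2, self.pool2 = SimplePANConv(512, 256, L), SimplePANPool(256)
        self.conv3 = SimplePANConv(256, 128, L)
        self.fc = nn.Linear(128, n_c)

    def forward(self, x, adj):
        x, met = self.conv1(x, adj)
        x, adj, _ = self.pool1(x, adj, met)
        x, met = self.conv2(x, adj)
        x, adj, _ = self.pool2(x, adj, met)
        x, _ = self.conv3(x, adj)
        return self.fc(x.mean(dim=0))        # global mean readout, then classifier
\end{verbatim}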
\vspace{-2mm}
\paragraph{Results}
Table~\ref{tab:pan_benchmark} reports the classification test accuracy for several GNN models. PAN performs well on all datasets and achieves the top accuracy on four of the five datasets; in some cases it improves the state of the art by a few percentage points. Even on MUTAGEN, PAN still achieves the second-best performance.
Most interestingly, the optimal choice of the highest order $L$ for the MET matrix varies for different types of graph data. This confirms that the flexibility of PAN enables it to learn and adapt to the most natural representation of the given graph data.
Additionally, we also tested PAN on graph regression tasks such as QM7 and achieved excellent performance; see the supplementary material for details.
\begin{table*}[t]
\centering
\begin{minipage}{\textwidth}
\centering
\caption{Performance comparison for graph classification tasks
(test accuracy in percentage; bold font is used to highlight the best performance in the list; the value in brackets is the cutoff $L$ used in the MET matrix.)}\label{tab:pan_benchmark}
\end{minipage}
\begin{center}
\begin{small}
\begin{threeparttable}
\begin{tabularx}{390pt}{l *5{>{\Centering}X}}
\toprule
{\bf Method} & PROTEINS & PROTEINS\_full & NCI1 & AIDS & MUTAGEN \\
\midrule
GCNConv + TopKPool
&67.71 &68.16 &50.85 &79.25 &58.99 \\
SAGEConv + SAGPool
&64.13 &70.40 &64.84 &77.50 &67.40 \\
GATConv + EdgePool
&64.57 &62.78 &59.37 &79.00 &62.33 \\
SGConv + TopKPool
&68.16 &69.06 &50.85 &79.00 &63.82 \\
GATConv + ASAPool
&64.57 &65.47 &50.85 &79.25 &56.68 \\
SGConv + EdgePool
&70.85 &69.51 &56.33 &79.00 &\textbf{70.05} \\
SAGEConv + ASAPool
&58.74 &58.74 &50.73 &79.25 &56.68 \\
GCNConv + SAGPool
&59.64 &\textbf{72.65} &50.85 &78.75 &67.28 \\
\midrule
PANConv+PANPool (ours)
&\textbf{73.09} (1) &\textbf{72.65} (1) &\textbf{68.98} (3) &\textbf{92.75} (2) &69.70 (2)\\
\bottomrule
\end{tabularx}
\centering
\end{threeparttable}
\end{small}
\end{center}
\vskip -0.2in
\end{table*}
\subsection{PAN for Point Distribution Recognition}\label{sec:pan_pointpattern}
\paragraph{A new classification dataset for point pattern recognition}
Many graph neural network architectures have been proposed; however, there are still too few well-accepted datasets for assessing their relative strengths \cite{hu2020open}. Despite being popular, many datasets suffer from a limited understanding of the underlying mechanism, for instance whether one can theoretically guarantee that a graph representation is appropriate. These datasets are usually not controllable either; many different preprocessing tricks might be needed, such as zero padding. Consequently, reproducibility might be compromised.
\begin{figure}[th]
\vskip -1mm
\centering
\begin{minipage}{0.85\textwidth}
\centering
\includegraphics[width=0.25\textwidth]{Hard-core_03_736.jpg}
\hspace{2mm}
\includegraphics[width=0.25\textwidth]{Poisson_03_703.jpg}
\hspace{2mm}
\includegraphics[width=0.25\textwidth]{RSA_03_784.jpg}
\end{minipage}
\caption{From left to right: Graph samples generated from HD, Poisson and RSA point processes in PointPattern dataset.}
\label{fig:hpr_examples}
\vskip -2mm
\end{figure}
In order to tackle this challenge, we introduce a new graph classification dataset constructed from simple point patterns from statistical mechanics. We simulated three point patterns in 2D: hard disks in equilibrium (HD), a Poisson point process, and random sequential adsorption (RSA) of disks. The HD and Poisson distributions can be seen as simple models that describe the microstructures of liquids and gases \cite{hansen1990theory}, while RSA is a nonequilibrium stochastic process that introduces new particles one by one subject to nonoverlapping conditions. These systems are well known to be structurally different while being easy to simulate, thus providing a solid and controllable classification task. For each point pattern, the particles are treated as nodes, and edges are then drawn between any two particles that are within a threshold distance of each other. We name the dataset \textbf{PointPattern}. See Figure~\ref{fig:hpr_examples} for examples of the three types of resulting graphs.
The volume fraction (covered by particles) $\phi_{\rm HD}$ of HD is fixed at 0.5, while we tune $\phi_{\rm RSA}$ to control the similarity between RSA and the other two distributions (the Poisson point pattern corresponds to $\phi_{\rm RSA}=0$). As $\phi_{\rm RSA}$ approaches 0.5, RSA patterns become harder to distinguish from HD patterns. We use the degree as the feature for each node. This construction allows us to generate a series of graph datasets of varying difficulty as classification tasks.
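The following sketch indicates how such a dataset can be generated: points are laid down according to a Poisson or RSA process in the unit square, any two particles closer than a threshold distance are joined by an edge, and the node degree is used as the feature. The particle number, the cutoff distance (here a fixed multiple of the disk radius), and the omission of the Monte Carlo equilibration needed for the HD pattern are simplifying assumptions of this illustration.
\begin{verbatim}
import numpy as np

def rsa_points(n, radius, rng, max_tries=100000):
    # Random sequential adsorption: insert points one by one, rejecting any
    # candidate closer than 2*radius to an already accepted point.
    pts, tries = [], 0
    while len(pts) < n and tries < max_tries:
        cand = rng.random(2)
        if all(np.linalg.norm(cand - p) >= 2 * radius for p in pts):
            pts.append(cand)
        tries += 1
    return np.array(pts)

def pattern_to_graph(pts, cutoff):
    # Draw an edge whenever two particles are within the threshold distance.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = ((d < cutoff) & (d > 0)).astype(int)
    return adj, adj.sum(axis=1)              # adjacency and node degrees

rng = np.random.default_rng(0)
n, phi_rsa = 200, 0.3
radius = np.sqrt(phi_rsa / (n * np.pi))      # phi = n * pi * r^2 in the unit square
poisson_pts = rng.random((n, 2))             # Poisson pattern (phi_RSA = 0 limit)
rsa_pts = rsa_points(n, radius, rng)
adj, deg = pattern_to_graph(rsa_pts, cutoff=4 * radius)
\end{verbatim}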
\begin{figure}[th]
\vskip -1mm
\centering
\begin{minipage}{0.8\textwidth}
\includegraphics[width=0.45\textwidth]{val_loss_pan_gcn_gin_PointPattern_phi03_notitle.pdf}
\hspace{3mm}
\includegraphics[width=0.45\textwidth]{val_acc_pan_gcn_gin_PointPattern_phi03_notitle.pdf}
\end{minipage}
\caption{Comparison of validation loss and accuracy of PAN, GCN and GIN on PointPattern under similar network architectures with 10 repetitions.}
\label{fig:hpr_pan_gcn_gin}
\vskip -3mm
\end{figure}
\paragraph{Setting} We tested the \textbf{PANConv+PANPool} model on \textbf{PointPattern} with $\phi_{\rm RSA}=0.3, 0.35$ and $0.4$, and compared it with two other GNN models that use \textbf{GCNConv+TopKPool} or \textbf{GINConv+TopKPool} as basic architecture blocks \cite{cangea2018towards,gao2019graph,KiWe2017,knyazev2019understanding,GIN}. Each \textbf{PointPattern} dataset is a three-class classification problem with 15,000 graphs (5,000 of each type) whose sizes vary between 100 and 1000 nodes.
All GNN models use the same network architecture: three units, each consisting of one graph convolutional layer followed by one graph pooling layer, followed by fully connected layers. In the GCN and GIN models, we also apply global max pooling to compress the node dimension to one before the fully connected layers.
We split the data into training, validation, and test sets of sizes 12,000, 1,500, and 1,500, respectively. We fix the number of neurons in the convolutional layers to 64; the learning rate and weight decay are set to 0.001 and 0.0005, respectively.
\begin{table}[thbp!]
\begin{minipage}{\textwidth}
\caption{Test accuracy (in percentage) of PAN, GIN and GCN on three types of PointPattern datasets with different difficulties, trained for up to 20 epochs. The value in brackets is the cutoff $L$.}\label{tab:pointpattern_pan_gcn_gin}\vspace{-2mm}
\begin{center}
\begin{small}
\begin{tabularx}{370pt}{lccc}\toprule
\textbf{PointPattern} & GINConv + SAGPool & GCNConv + TopKPool & PANConv + PANPool (ours)\\
\midrule
$\phi_{\rm RSA}=0.3$ & 90.9$\pm$2.95 & 92.9$\pm$3.21 & 99.0$\pm$0.30 (4)\\
$\phi_{\rm RSA}=0.35$ & 86.7$\pm$3.30 & 89.3$\pm$3.31 & 97.6$\pm$0.53 (4)\\
$\phi_{\rm RSA}=0.4$ & 80.2$\pm$3.80 & 85.1$\pm$4.06 & 94.4$\pm$0.55 (4)\\
\bottomrule
\end{tabularx}
\end{small}
\end{center}
\vskip -5mm
\end{minipage}
\end{table}
\vspace{-3mm}
\paragraph{Results} Table~\ref{tab:pointpattern_pan_gcn_gin} shows the mean and standard deviation of the test accuracy of the three networks on the three PointPattern datasets. PAN outperforms the GIN and GCN models on all datasets, with 5 to 10 percentage points higher accuracy, while significantly reducing the variance. We observe that PAN's advantage persists over varying task difficulties, which may be due to its consideration of higher-order paths (here $L=4$).
We compare the validation loss and accuracy trends during the training of PANConv+PANPool with those of GCNConv+TopKPool in Figure~\ref{fig:hpr_pan_gcn_gin}. The figure illustrates that the learning and generalization capabilities of PAN are better than those of the GCN and GIN models. The loss of PAN decays to much smaller values earlier, while the accuracy reaches a higher plateau more rapidly. Moreover, the loss and accuracy of PAN both have much smaller variances, which can be seen most clearly after epoch four. From this perspective, PAN provides a more efficient and stable learning model for the graph classification task.
Another intriguing pattern we notice is that the learned weights concentrate on the powers $A^3$ and $A^4$. This suggests that what differentiates these graph structures are the higher orders of the adjacency matrix, or, physically, the pair correlations at intermediate distances $r$. It may explain why PAN performs better than GCN, which uses only $A$ in its model.
\vspace{-1mm}
\section{Conclusion}
\vspace{-2mm}
We propose a path integral based GNN framework (PAN), which consists of self-consistent convolution and pooling units, the latter being closely related to subgraph centrality. PAN can be seen as a broad generalization of existing GNN architectures. PAN achieves excellent performance on various graph classification and regression tasks, while demonstrating a fast convergence rate and great stability. We also introduce a new graph classification dataset, \textbf{PointPattern}, which can serve as a new benchmark.
\bibliographystyle{plain}
| {
"timestamp": "2020-07-09T02:17:39",
"yymm": "2006",
"arxiv_id": "2006.16811",
"language": "en",
"url": "https://arxiv.org/abs/2006.16811",
"abstract": "Graph neural networks (GNNs) extends the functionality of traditional neural networks to graph-structured data. Similar to CNNs, an optimized design of graph convolution and pooling is key to success. Borrowing ideas from physics, we propose a path integral based graph neural networks (PAN) for classification and regression tasks on graphs. Specifically, we consider a convolution operation that involves every path linking the message sender and receiver with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. It generalizes the graph Laplacian to a new transition matrix we call maximal entropy transition (MET) matrix derived from a path integral formalism. Importantly, the diagonal entries of the MET matrix are directly related to the subgraph centrality, thus providing a natural and adaptive pooling mechanism. PAN provides a versatile framework that can be tailored for different graph data with varying sizes and structures. We can view most existing GNN architectures as special cases of PAN. Experimental results show that PAN achieves state-of-the-art performance on various graph classification/regression tasks, including a new benchmark dataset from statistical mechanics we propose to boost applications of GNN in physical sciences.",
"subjects": "Machine Learning (cs.LG); Disordered Systems and Neural Networks (cond-mat.dis-nn); Networking and Internet Architecture (cs.NI); Data Analysis, Statistics and Probability (physics.data-an); Machine Learning (stat.ML)",
"title": "Path Integral Based Convolution and Pooling for Graph Neural Networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540716711546,
"lm_q2_score": 0.7248702761768249,
"lm_q1q2_score": 0.7099046564071678
} |
https://arxiv.org/abs/1305.5114 | Uniform spanning trees on Sierpinski graphs | We study spanning trees on Sierpinski graphs (i.e., finite approximations to the Sierpinski gasket) that are chosen uniformly at random. We construct a joint probability space for uniform spanning trees on every finite Sierpinski graph and show that this construction gives rise to a multi-type Galton-Watson tree. We derive a number of structural results, for instance on the degree distribution. The connection between uniform spanning trees and loop-erased random walk is then exploited to prove convergence of the latter to a continuous stochastic process. Some geometric properties of this limit process, such as the Hausdorff dimension, are investigated as well. The method is also applicable to other self-similar graphs with a sufficient degree of symmetry. | \section{Introduction}
\label{section:intro}
The Sierpi\'nski gasket is certainly one of the most famous fractals, and the Sierpi\'n\-ski graphs, which can be seen as finite approximations of the Sierpi\'nski gasket, are among the most thoroughly studied self-similar graphs. The number of spanning trees in the $n$\nobreakdash-\hskip0pt th Sierpi\'nski graph $G_n$ (starting with a single triangle $G_0$, see Figure~\ref{figure:sg}) turns out to be given by the remarkable explicit formula
\[ \tau(G_n) = \biggl( \frac{3}{20} \biggr)^{1/4} \cdot \biggl( \frac35 \biggr)^{n/2} \cdot 540^{3^n/4}, \]
which was obtained by different methods in several recent works: by setting up and solving a system of recursions \cite{chang2007trees,teufl2006number,teufl2011number}, or by electrical network theory \cite{teufl2011resistance}. In \cite{lawler2010random}, a proof using probabilistic results is sketched. Moreover, the Laplacian spectrum of $G_n$ can be described rather explicitly by means of a technique known as ``spectral decimation'' \cite{fukushima1992spectral,shima1991eigenvalue}, from which another proof can be derived \cite{anema2012counting}.
\begin{figure}[htb]
\centering
\def\fig#1#2#3#4{%
\scope[shift={#1}]
\mytikzsg{}{#2}{#3}
\node[above left] at (60:1/2) {#4};
\endscope}
\begin{tikzpicture}[scale=2.5]
\fig{(0,0)}{}{\draw \mytikztri{--}{node[vertex] {}} -- cycle;}{$G_0$}
\fig{(1.2,0)}{x}{\draw \mytikztri{--}{node[vertex] {}} -- cycle;}{$G_1$}
\fig{(2.4,0)}{xx}{\draw \mytikztri{--}{node[vertex] {}} -- cycle;}{$G_2$}
\fig{(4,0)}{xxxxxx}{\fill \mytikztri{--}{} -- cycle;}{$K$}
\end{tikzpicture}
\caption{Sierpi\'nski graphs $G_0$, $G_1$, $G_2$, and the Sierpi\'nski gasket $K$.}
\label{figure:sg}
\end{figure}
Once the counting problem is solved, it is natural to consider \emph{uniformly random spanning trees} of $G_n$ and to study their structure. Uniform spanning trees are known to have strong connections to other probabilistic models, such as loop-erased random walk (Wilson's celebrated algorithm \cite{lawler2010random,wilson1996generating} to construct uniform spanning trees being a particular application), and they are also of interest in mathematical physics. For this reason, the structure of uniformly random spanning trees in other important families of graphs, such as square grids \cite{burton1993local} has been studied thoroughly.
The recursive nature of Sierpi\'nski graphs and their strong symmetry enable us to derive a number of results on uniform spanning trees, as will be shown in this paper. After some preliminaries, we construct a joint probability space for uniform spanning trees on every finite Sierpi\'nski graph. An important tool in this context is the theory of (a rather general kind of) Galton-Watson processes. Making use of this tool, we also prove some structural results on uniform spanning trees of $G_n$, for instance a strong law of large numbers for the degree distribution of a uniform spanning tree. This extends the work of Chang and Chen \cite{chang2010structure}, who prove convergence of expected values (for which they also give explicit formulae). Similar results for the two-dimensional square lattice were obtained by Manna, Dhar and Majumdar \cite{manna1992spanning}.
\emph{Loop-erased random walk} on the Sierpi\'nski gasket was studied in the paper of Hattori and Mizuno \cite{hattori2012looperased}; our results on uniform spanning trees provide an alternative approach to this topic and were obtained independently of Hattori and Mizuno and at approximately the same time (see for instance \cite{teufl2011uniform}). The expected length of such a walk from one corner to another was studied earlier in the physics literature by Dhar and Dhar \cite{dhar1997distribution}; it grows asymptotically like $\bigl(\frac43 + \frac1{15}\sqrt{205}\,\bigr)^n$. As was also shown by Hattori and Mizuno, we find that, upon renormalization, loop-erased random walk converges to a limit process. The analogue of this process for the square lattice is the celebrated Schramm-Loewner evolution \cite{schramm2000scaling,lawler2004conformal}, whose analysis is notoriously complicated. However, the different geometry of the Sierpi\'nski graphs makes it possible to prove rather strong theorems on the shape of this limiting process comparatively easily, including parameters such as the \emph{Hausdorff dimension}. Similar results on the limit process of the self-avoiding walk were obtained by Hattori, Hattori and Kusuoka \cite{hattori1991selfavoiding,hattori1992exponent,hattori1990selfavoiding,hattori1993selfavoiding} and by Hambly, Hattori and Hattori \cite{hambly2002selfrepelling} for the self-repelling walk.
In Section~\ref{sec:metric}, we study the metric induced by a random spanning tree on the Sierpi\'nski graph $G_n$. We prove almost sure convergence to a limit metric, and show that the resulting metric space is a so-called $\mathbb R$\nobreakdash-\hskip0pt tree. We also study the \emph{interface}, which is, loosely speaking, the set where different branches of a spanning tree embedded in the plane ``touch'', and estimate its Hausdorff dimension.
In the following list, the main results of this paper are summarised. For the sake of simplicity, all results and their derivation are only given for the Sierpi\'nski gasket, but there are other fractals to which the same approach applies, see Section~\ref{sec:other}.
\begin{itemize}
\item We construct a joint probability space for uniform spanning trees
on every finite Sierpi\'nski graph using a projective limit.
As part of the construction, we also have to consider
spanning forests with the property
that each of the components contains one of the three corner vertices.
We show that the distribution of the component sizes
in random spanning forests of this type converges (upon renormalisation)
to a limiting distribution---see Section~\ref{subsection:component}.
\item We prove almost sure convergence of the \emph{degree distribution}
(see Section~\ref{subsection:degree}):
the proportion of vertices of degree $i$ ($i \in \{1,2,3,4\}$ fixed)
in a random spanning tree of $G_n$ converges almost surely
to a limit constant $w(i)$.
\item Section~\ref{section:lerw} is concerned with loop erased random walk
on Sierpi\'nski graphs $G_n$:
using the connection between spanning trees and
loop-erased random walk, we recover the aforementioned result
that the length of such a walk from one corner to another grows
asymptotically like $\bigl(\frac43 + \frac1{15}\sqrt{205}\,\bigr)^n$,
and that the renormalised length has a limit distribution
(cf.~\cite[Theorem~5]{hattori2012looperased}).
We also provide tail estimates for this limit distribution,
see Lemma~\ref{lemma:mg-bounds}.
\item In Section~\ref{section:convergence}, we study the limit process and
prove some geometric properties:
specifically, we show that the limit curve is
almost surely self-avoiding (Theorem~\ref{theorem:prop}),
and has Hausdorff dimension
$\log_2 \bigl(\frac43 + \frac1{15}\sqrt{205}\,\bigr) \approx 1.193995$
(Theorem~\ref{theorem:prop-lerw}, (5)).
These results were also obtained in the aforementioned paper
of Hattori and Mizuno (see \cite[Theorems 9 and 10]{hattori2012looperased}).
Moreover, we prove H\"older continuity with an
explicit exponent (Theorem~\ref{theorem:prop-lerw}, (4)).
\item The limit of the tree metric is the main topic of Section~\ref{sec:metric}.
It is shown (Theorem~\ref{theorem:randommetric}) that
we almost surely obtain a random metric on the ``rational points''
(i.e., all points which are vertices in some finite approximation)
of the Sierpinski gasket whose Cauchy completion is an $\mathbb R$\nobreakdash-\hskip0pt tree,
i.e., a metric space in which there is a unique arc between any two points and
this arc is geodesic (that is an isometric embedding of a real interval).
\end{itemize}
\section{Notation and Preliminaries}
\label{section:notation}
A \emph{graph} $G$ is a pair $(VG,EG)$, where $VG$ is the vertex set and
\[ EG \subseteq \bigl\{ \{x,y\} \,:\, x,y\in VG, x\ne y \bigr\} \]
is the edge set. Two vertices $x,y$ are \emph{adjacent} if $\{x,y\}\in EG$. The \emph{degree} $\deg x$ of a vertex $x$ is the number of adjacent vertices. A \emph{walk} in $G$ is a finite or infinite sequence $(x_0,x_1,\dotsc)$ of vertices in $G$, such that consecutive entries are adjacent. A walk is called \emph{self-avoiding} if its entries are mutually distinct. The edge set $E(x)$ of a walk $x=(x_0,x_1,\dotsc)$ is the set
\[ E(x) = \bigl\{ \{x_0,x_1\}, \{x_1,x_2\}, \dotsc \bigr\}. \]
Equipped with the edge set $E(x)$ a walk $x$ gives rise to a subgraph of $G$. The \emph{length} of the walk $(x_0,\dotsc,x_n)$ is equal to $n$, the number of edges. The \emph{distance} $d_G(v,w)$ of two vertices $v,w$ is the least integer $n$ such that there is a walk of length $n$ in $G$ connecting $v$ and $w$.
A \emph{tree} is a connected graph without cycles. A \emph{spanning tree} of a graph $G$ is a subgraph of $G$ which is a tree and contains all vertices of $G$. Similarly, a \emph{forest} is a graph without cycles and a \emph{spanning forest} of a graph $G$ is a subgraph of $G$ which is a forest and contains all vertices of $G$. Let $F$ be a forest and $v,w$ be two vertices in $F$. If $v,w$ are in the same component of $F$, then we write $vFw$ to denote the unique self-avoiding walk in $F$ connecting $v$ and $w$.
Next we need some ingredients from probability theory. We use multi-index notation: If $r\in\mathbb N$, $\mathbold z=(z_1,\dotsc,z_r)$, and $\mathbold k=(k_1,\dotsc,k_r)$, then $\mathbold z^\mathbold k = z_1^{k_1} \dotsm z_r^{k_r}$. If $\mathbold X=(X_1,\dotsc,X_r)$ is a random vector in $\mathbb N_0^r$, then
\[ \qopname\relax o{\mathsf{PGF}}(\mathbold X, \mathbold z)
= \qopname\relax o{\mathbb{E}}\bigl(\mathbold z^{\mathbold X}\bigr)
= \sum_{\mathbold k\in\mathbb N_0^r} \qopname\relax o{\mathbb{P}}(\mathbold X=\mathbold k) \mathbold z^\mathbold k \]
is the (multivariate) \emph{probability generating function} of $\mathbold X$.
An \emph{$r$\nobreakdash-\hskip0pt type Galton-Watson process} $(\mathbold X_n)_{n\ge0} = (X_{1,n},\dotsc,X_{r,n})_{n\ge0}$ is a stochastic process that starts with one or more individuals, each of which has a type associated to it. Each individual gives birth to zero or more children according to the offspring probabilities
\[ \mathbold p(\mathbold k) = \bigl( p_1(\mathbold k), \dotsc, p_r(\mathbold k) \bigr) \]
for $\mathbold k=(k_1,\dotsc,k_r)\in\mathbb N_0^r$. Here $p_i(\mathbold k)$ is the probability that an individual of type $i$ has $k_j$ children of type $j$ for $j\in\{1,\dotsc,r\}$. The vector $\mathbold X_n$ represents the number of individuals in the $n$\nobreakdash-\hskip0pt th generation by their type (i.e., $X_{i,n}$ is the number of individuals of type $i$ in the $n$\nobreakdash-\hskip0pt th generation). It is convenient to describe the offspring distributions by their multidimensional multivariate probability generating function $\mathbold f=(f_1,\dotsc,f_r)$, which is called \emph{offspring generating function} and given by
\[ \mathbold f(\mathbold z) = \sum_{\mathbold k\in\mathbb N_0^r} \mathbold p(\mathbold k) \mathbold z^\mathbold k \]
for $\mathbold z=(z_1,\dotsc,z_r)$. Then
\[ \qopname\relax o{\mathsf{PGF}}(\mathbold X_n,\mathbold z)
= \qopname\relax o{\mathsf{PGF}}(\mathbold X_{n-1}, \mathbold f(\mathbold z)) = \dotsb
= \qopname\relax o{\mathsf{PGF}}(\mathbold X_0, \mathbold f^n(\mathbold z)), \]
where $\mathbold f^n$ is the $n$\nobreakdash-\hskip0pt fold iteration of $\mathbold f$. The \emph{mean matrix} $\mathbold M = (m_{ij})_{1\le i,j\le r}$ is given by $m_{ij} = (\partial f_i/\partial z_j)(\mathbf 1)$. The process is called
\begin{itemize}
\item \emph{positively regular} if $\mathbold M$ is primitive
(i.e., all entries of $\mathbold M^k$ are positive for some $k$),
\item \emph{singular} if $\mathbold f(\mathbold z) = \mathbold M \mathbold z$,
\item \emph{subcritical}, \emph{critical} or \emph{supercritical} depending on whether the largest eigenvalue of the mean matrix is less than, equal to, or greater than $1$.
\end{itemize}
See for instance \cite{mode1971multitype} for these notions and for the theory of multi-type Galton-Watson processes.
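As a small illustration of these definitions, the sketch below computes the mean matrix $m_{ij}$ directly as the expected number of type-$j$ children of a type-$i$ parent and classifies the process by the dominant eigenvalue; the two-type offspring distribution used here is a toy example unrelated to the Sierpi\'nski setting, which is treated later.
\begin{verbatim}
import numpy as np

def mean_matrix(offspring):
    # offspring[i] is a list of (probability, k) pairs, where k is the
    # offspring vector of an individual of type i; the mean matrix has
    # entries m_ij = E[# children of type j | parent of type i].
    r = len(offspring)
    M = np.zeros((r, r))
    for i, dist in enumerate(offspring):
        for p, k in dist:
            M[i] += p * np.asarray(k, dtype=float)
    return M

# toy example with two types
offspring = [
    [(0.5, (0, 2)), (0.5, (0, 0))],   # type 0: two type-1 children or none
    [(1/3, (1, 1)), (2/3, (0, 0))],   # type 1: one child of each type or none
]
M = mean_matrix(offspring)
rho = max(abs(np.linalg.eigvals(M)))  # supercritical iff rho > 1
\end{verbatim}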
Let $\mathbb W$ be a subset of $\mathbb N$ (in the following $\mathbb W$ will always be $\{1,2,3\}$); elements of $\mathbb W^n$ are written as words over the alphabet $\mathbb W$, e.g.\ $12133$ means $(1,2,1,3,3)$. Let $\mathbb W^0 = \{\emptyword\}$ consist of the empty word $\emptyword$ only and set
\[ \mathbb W^* = \biguplus_{n\ge0} \mathbb W^n. \]
Concatenation of two words $v,w\in\mathbb W^*$ is written by juxtaposition $vw$. The set $\mathbb W^*$ carries a graph structure in a canonical way: two words $v,w\in\mathbb W^*$ are adjacent if $w=v\iota$ or $v=w\iota$ for some $\iota\in\mathbb W$. This turns $\mathbb W^*$ into a tree with root $\emptyword$. If $w=v\iota$, then $w$ is called \emph{child} or \emph{offspring} of $v$ and $\iota$ is the \emph{suffix} of $w$.
Let $R$ be a finite set and fix some $\mathbb W\subseteq\mathbb N$. Consider the set of all subtrees with an element of $R$ associated to each vertex, i.e.,
\[ \mathcal W_R = \bigl\{ (W,f) \,:\,
W\subseteq\mathbb W^* \text{ induces a subtree},
\emptyword\in W,
f=(f_w)_{w\in W}\in R^W \bigr\}. \]
If $(W,f)\in\mathcal W_R$ and $w\in W$, then we say that the word $w$ is the \emph{label} and $f_w$ is the \emph{type} of the pair $(w,f_w)$. A \emph{labelled multi-type Galton-Watson tree} with labels in $\mathbb W^*$ and types in $R$ is a random element of the set $\mathcal W_R$, whose distribution is uniquely determined by the following:
\begin{itemize}
\item The type of the root (or \emph{ancestor}) $\emptyword$ is given by a fixed distribution on $R$.
\item The random offspring generation of a vertex (or \emph{individual}) $w$ only depends on the type of $w$.
It is given by a probability distribution (depending on the type of $w$) on the set
\[ \bigl\{ (S,g) \,:\, S\subseteq\mathbb W, g=(g_\iota)_{\iota\in S}\in R^S \bigr\}. \]
The interpretation is that once a pair $(S,g)$ is chosen,
the individual $w$ gives birth to $\card{S}$ children
with labels $w\iota$ for $\iota\in S$, and type $g_\iota$ is assigned to child $w\iota$.
\end{itemize}
A labelled multi-type Galton-Watson tree is denoted by $(F_w)_{w\in W}\in\mathcal W_R$. Notice that in this notation $W\subseteq\mathbb W^*$ is the random set of individuals and $F_w$ is the random type of an individual $w\in W$.
To every labelled multi-type Galton-Watson tree $(F_w)_{w\in W}$ with labels in $\mathbb W^*$ and types in $R$, the (random) number of individuals of a certain type in the $n$\nobreakdash-\hskip0pt th generation yields a multi-type Galton-Watson process with $r=\card{R}$ types. To this end, let $a_1,\dotsc,a_r$ be the elements of $R$ and set
\[ X_{i,n} = \card{\{ w\in W\cap\mathbb W^n \,:\, F_w = a_i \}}, \qquad
\mathbold X_n = (X_{1,n},\dotsc,X_{r,n}) \]
for $n\ge0$ and $i\in\{1,\dotsc,r\}$. Then $(\mathbold X_n)_{n\ge0}$ is an $r$\nobreakdash-\hskip0pt type Galton-Watson process.
\section{Construction of Sierpi\'nski graphs}
\label{section:sg}
The Sierpi\'nski gasket $K$ (see \cite{sierpinski1915courbe} for its origin in mathematical literature) can be defined formally by means of the following three similitudes:
\[ \psi_i(x) = \tfrac12(x-u_i) + u_i \]
for $i\in\{1,2,3\}$, where $u_1=(0,0)$, $u_2=(1,0)$, and $u_3=\tfrac12(1,\sqrt3)$. Then $K$ is the unique non-empty compact set such that
\[ K = \psi_1(K) \cup \psi_2(K) \cup \psi_3(K). \]
Its Hausdorff dimension is given by
\[ \operatorname{dim}_H K = \frac{\log3}{\log2} = 1.5849625\dotsc \]
The Sierpi\'nski graphs $G_0, G_1, \dotsc$ are discrete approximations to $K$ and are constructed inductively: The vertex set $VG_0$ and edge set $EG_0$ of $G_0$ are given by
\[ VG_0 = \{u_1, u_2, u_3\} \qquad\text{and}\qquad EG_0 = \{\{u_1,u_2\},\{u_2,u_3\},\{u_3,u_1\}\}, \]
respectively. Then, for any $n\ge0$, the sets $VG_{n+1}$ and $EG_{n+1}$ are defined as follows:
\begin{align*}
VG_{n+1} &= \psi_1(VG_n) \cup \psi_2(VG_n) \cup \psi_3(VG_n), \\
EG_{n+1} &= \psi_1(EG_n) \cup \psi_2(EG_n) \cup \psi_3(EG_n).
\end{align*}
Notice that $G_{n+1}$ is an amalgam of three scaled images of $G_n$, which we denote by $\psi_1(G_n)$, $\psi_2(G_n)$, and $\psi_3(G_n)$, i.e.,
\[ G_{n+1} = \psi_1(G_n) \cup \psi_2(G_n) \cup \psi_3(G_n). \]
The vertices in $VG_0\subset VG_n$ are often called \emph{corner vertices} or \emph{boundary vertices} of the graph $G_n$. The vertex sets are nested, i.e., $VG_0\subset VG_1\subset VG_2 \subset \dotsb$, and the Sierpi\'nski gasket $K$ is the closure of the union $VG_0\cup VG_1\cup VG_2\dotsb$. Figure~\ref{figure:sg} shows the graphs $G_0$, $G_1$, $G_2$ and the Sierpi\'nski gasket $K$. The self-similar nature and the fact that the three scaled images only intersect in the three points
\[ \tfrac12(u_2+u_3), \qquad \tfrac12(u_3+u_1), \qquad \tfrac12(u_1+u_2), \]
allow many problems concerning the Sierpi\'nski gasket and Sierpi\'nski graphs to be solved exactly.
As explained above, we may view $G_n$ as an amalgam of three copies of $G_{n-1}$. More generally, we may consider $G_n$ as an amalgam of $3^{n-k}$ copies of $G_k$ ($0\le k\le n$). For any word $w=w_1\dotsb w_n\in\mathbb W^n$ ($n\ge1$), set $\psi_w = \psi_{w_1} \circ \dotsb \circ \psi_{w_n}$, and let $\psi_\emptyword$ be the identity map. Then, for $0\le k\le n$,
\[ G_n = \bigcup_{w\in\mathbb W^k} \psi_w(G_{n-k}). \]
If $w\in\mathbb W^k$, we call $\psi_w(G_{n-k})$ (respectively $\psi_w(K)$) a \emph{$k$\nobreakdash-\hskip0pt part} of $G_n$ (respectively $K$). Note that the $k$\nobreakdash-\hskip0pt parts are in one-to-one correspondence with the words in $\mathbb W^k$. For any word $w\in\mathbb W^k$ and any subgraph $H\subseteq G_n$ the \emph{restriction} $\pi_w(H)$ is the subgraph of $G_{n-k}$ given by
\[ \pi_w(H) = \psi_w^{-1}(H\cap\psi_w(G_{n-k})). \]
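For later reference, the correspondence between words and $k$\nobreakdash-\hskip0pt parts is easy to implement; the following sketch evaluates $\psi_w$ for a given word and returns the three corner points of the addressed part (the word $w=(1,3,2)$ is an arbitrary example).
\begin{verbatim}
import numpy as np

u = {1: np.array([0.0, 0.0]),
     2: np.array([1.0, 0.0]),
     3: np.array([0.5, np.sqrt(3) / 2])}

def psi(i, x):
    # similitude psi_i(x) = (x - u_i)/2 + u_i contracting towards the corner u_i
    return 0.5 * (x - u[i]) + u[i]

def psi_word(w, x):
    # psi_w = psi_{w_1} o ... o psi_{w_n}; the innermost map psi_{w_n} acts first
    for i in reversed(w):
        x = psi(i, x)
    return x

w = (1, 3, 2)
corners = [psi_word(w, u[i]) for i in (1, 2, 3)]   # corner vertices of the 3-part
\end{verbatim}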
\section{Spanning trees on Sierpi\'nski graphs}
\label{section:trees}
For the sake of completeness we reproduce the computation of the number of spanning trees of the Sierpi\'nski graphs following the approach given in \cite{chang2007trees,teufl2006number,teufl2011number}. Using a decomposition of certain spanning forests a recursion for the number of spanning trees and two other quantities is derived. In the physics literature this approach is often called the renormalization group. We write
\begin{itemize}
\item $\mathcal T_n$ to denote the set of spanning trees of $G_n$,
\item $\mathcal S_n^i$ ($i\in\{1,2,3\}$) to denote
the set of spanning forests in $G_n$ with two connected components,
so that one component contains $u_i$ and the other contains $VG_0\setminus\{u_i\}$, and
\item $\mathcal R_n$ to denote the set of spanning forests of $G_n$ with three connected components,
each of which contains exactly one vertex from the set $VG_0$.
\end{itemize}
By symmetry, the sets $\mathcal S_n^1$, $\mathcal S_n^2$, and $\mathcal S_n^3$ all have the same cardinality. For notational convenience, we set
\[ \mathcal Q_n = \mathcal T_n \uplus \mathcal S_n^1 \uplus \mathcal S_n^2 \uplus \mathcal S_n^3 \uplus \mathcal R_n. \]
The crucial observation is that the restriction of a spanning forest in $\mathcal Q_{n+1}$ to one of $\psi_1(G_n)$, $\psi_2(G_n)$, or $\psi_3(G_n)$ can be identified with a spanning forest in $\mathcal Q_n$. If $f\in\mathcal Q_{n+1}$, then $\pi_1(f), \pi_2(f), \allowbreak \pi_3(f) \in \mathcal Q_n$ and
\begin{equation}\label{eq:restrict}
f = \psi_1(\pi_1(f)) \cup \psi_2(\pi_2(f)) \cup \psi_3(\pi_3(f)).
\end{equation}
Here and in the following we use lowercase letters for elements of $\mathcal Q_n$ and capital letters for random elements of $\mathcal Q_n$. Since $\mathcal T_0$ consists of the three elements \tree yyynyy, \tree yyyyny, \tree yyyyyn, whereas $\card{\mathcal S_0^i}=\card{\mathcal R_0^{}}=1$ for $i\in\{1,2,3\}$, the subdivision of $\mathcal T_n$ into three families of equal size turns out to be advantageous. In the following we describe one subdivision which is convenient and induced by symmetry. Set
\[ \mathcal T_0^1 = \{\tree yyynyy\}, \qquad
\mathcal T_0^2 = \{\tree yyyyny\}, \qquad
\mathcal T_0^3 = \{\tree yyyyyn\}, \]
and in general, for $n\ge1$ and $i\in\{1,2,3\}$,
\[ \mathcal T_n^i = \bigl\{ t\in\mathcal T_n \,:\,
u_itu_j \subseteq \psi_i(G_{n-1})\cup\psi_j(G_{n-1}) \text{ for all } j \in \{1,2,3\}\setminus\{i\} \bigr\}. \]
Here we consider the self-avoiding walk $u_itu_j$ as the subgraph consisting of the vertices and the edges connecting consecutive vertices. In words, $\mathcal T_n^i$ is the set of spanning trees with the property that the unique paths from $u_i$ to the other corner vertices $u_j$, $j \neq i$, only pass through $\psi_i(G_{n-1})$ and $\psi_j(G_{n-1})$ and do not ``make a detour''. Then
\[ \mathcal T_n^{} = \mathcal T_n^1 \uplus \mathcal T_n^2 \uplus \mathcal T_n^3 \]
and $\card{\mathcal T_n^{}} = 3\,\card{\mathcal T_n^i}$ for $i\in\{1,2,3\}$. Define
\[ \tau_n = \card{\mathcal T_n^1} = \card{\mathcal T_n^2} = \card{\mathcal T_n^3}, \qquad
\sigma_n = \card{\mathcal S_n^1} = \card{\mathcal S_n^2} = \card{\mathcal S_n^3}, \qquad
\rho_n = \card{\mathcal R_n^{}}. \]
\begin{lemma}\label{lemma:count}
If $n\ge0$, then
\begin{align*}
\tau_{n+1} &= 18 \tau_n^2 \sigma_n, \\
\sigma_{n+1} &= 21 \tau_n \sigma_n^2 + 9 \tau_n^2 \rho_n, \\
\rho_{n+1} &= 14 \sigma_n^3 + 36 \tau_n \sigma_n \rho_n,
\end{align*}
and
\begin{align*}
\tau_n &= \bigl(\tfrac53\bigr)^{-n/2} \, 540^{(3^n-1)/4}, \\
\sigma_n &= \bigl(\tfrac53\bigr)^{ n/2} \, 540^{(3^n-1)/4}, \\
\rho_n &= \bigl(\tfrac53\bigr)^{3n/2} \, 540^{(3^n-1)/4}.
\end{align*}
\end{lemma}
\begin{proof}
The recursion satisfied by $\tau_n$, $\sigma_n$, $\rho_n$ follows from the decomposition \eqref{eq:restrict}. For a graphical explanation of the specific terms, see Figure~\ref{figure:rectree}. The initial values are $(\tau_0,\sigma_0,\rho_0)=(1,1,1)$, and using induction, it is easy to verify that the formulae stated in the lemma are indeed the explicit solution to the recursion.
\end{proof}
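The recursion and the closed forms can be cross-checked numerically. The following sketch iterates the recursion with exact integer arithmetic and compares the squares of $\tau_n$, $\sigma_n$, $\rho_n$ with the squares of the stated closed forms (the squares are used to avoid half-integer exponents); the range $n\le5$ is an arbitrary choice.
\begin{verbatim}
from fractions import Fraction

tau, sigma, rho = 1, 1, 1                    # values for G_0
for n in range(5):
    tau, sigma, rho = (18 * tau**2 * sigma,
                       21 * tau * sigma**2 + 9 * tau**2 * rho,
                       14 * sigma**3 + 36 * tau * sigma * rho)
# now (tau, sigma, rho) = (tau_5, sigma_5, rho_5)

def closed_squares(n):
    # squares of the closed forms from the lemma
    base = 540**((3**n - 1) // 2)
    return (Fraction(3, 5)**n * base,
            Fraction(5, 3)**n * base,
            Fraction(5, 3)**(3 * n) * base)

assert (tau**2, sigma**2, rho**2) == closed_squares(5)
\end{verbatim}
Together with $\card{\mathcal T_n^{}} = 3\tau_n$, this also reproduces the explicit formula for $\tau(G_n)$ quoted in the introduction.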
\begin{figure}[htb]
\centering
\def1.3{1.3}
\def\fig#1#2#3#4#5{%
\scope[shift={#1}]
\mytikzscale{\mytikztype#3}{\mytikztype#4}{\mytikztype#5}
\mytikzsg{}{x}{\draw \mytikztri{}{node[vertex] {}};}
\node[right,font=\footnotesize] at (tri cs:[0.5,0.5]) {#2};
\endscope}
\begin{tikzpicture}[scale=1.3]
\scope[shift={(0,0)}]
\fig{(0.5,0)}{$\times 2$}200
\draw (1.0,-0.05) node[anchor=base]
{$\underbrace{\hbox{\pgfmathparse{1.3 * 1.0 + 0.4}\hskip\pgfmathresult cm}}$};
\draw (1.0,-0.6) node[anchor=base] {$2\cdot(3\tau_n)^2 \sigma_n$};
\draw (0,-0.6) node[anchor=base east] {$\tau_{n+1} = $};
\endscope
\scope[shift={(0,-2)}]
\fig{(0.0,0)}{$\times 2$}031
\fig{(1.2,0)}{$\times 2$}013
\fig{(2.4,0)}{$\times 2$}033
\fig{(3.6,0)}{$\times 1$}330
\draw (2.3,-0.05) node[anchor=base]
{$\underbrace{\hbox{\pgfmathparse{1.3 * 4.6 + 0.4}\hskip\pgfmathresult cm}}$};
\draw (2.3,-0.6) node[anchor=base] {$7\cdot(3\tau_n) \sigma_n^2$};
\draw (5.0,-0.6) node[anchor=base] {$+$};
\fig{(5.4,0)}{$\times 1$}004
\draw (5.9,-0.05) node[anchor=base]
{$\underbrace{\hbox{\pgfmathparse{1.3 * 1 + 0.4}\hskip\pgfmathresult cm}}$};
\draw (5.9,-0.6) node[anchor=base] {$(3\tau_n)^2 \rho_n$};
\draw (0,-0.6) node[anchor=base east] {$\sigma_{n+1} = $};
\endscope
\scope[shift={(0,-4)}]
\fig{(0.0,0)}{$\times 6$}323
\fig{(1.2,0)}{$\times 6$}322
\fig{(2.4,0)}{$\times 2$}312
\draw (1.7,-0.05) node[anchor=base]
{$\underbrace{\hbox{\pgfmathparse{1.3 * 3.4 + 0.4}\hskip\pgfmathresult cm}}$};
\draw (1.7,-0.6) node[anchor=base] {$14\cdot\sigma_n^3$};
\draw (3.8,-0.6) node[anchor=base] {$+$};
\fig{(4.2,0)}{$\times 6$}024
\fig{(5.4,0)}{$\times 6$}014
\draw (5.3,-0.05) node[anchor=base]
{$\underbrace{\hbox{\pgfmathparse{1.3 * 2.2 + 0.4}\hskip\pgfmathresult cm}}$};
\draw (5.3,-0.6) node[anchor=base] {$12\cdot(3\tau_n) \sigma_n \rho_n$};
\draw (0,-0.6) node[anchor=base east] {$\rho_{n+1} = $};
\endscope
\end{tikzpicture}
\caption{All arrangements (up to symmetry) for the construction of spanning trees and spanning forests
used in the recursion for $\tau_n$, $\sigma_n$, and $\rho_n$.
Shaded area indicates connected parts.}
\label{figure:rectree}
\end{figure}
We define the \emph{trace} $\qopname\relax o{\mathsf{Tr}} f\in\mathcal Q_n$ of a spanning forest $f\in\mathcal Q_{n+1}$ as follows: For $f\in\mathcal Q_1$, the trace is given in Table~\ref{table:trace}.
\begin{table}[htb]
\caption{Traces of spanning forests in $\mathcal Q_1$.}
\label{table:trace}
\centering
\begin{tabular}{@{}*{8}{c}@{}}
\toprule
$M$ & $\mathcal T_1^1$ & $\mathcal T_1^2$ & $\mathcal T_1^3$ %
& $\mathcal S_1^1$ & $\mathcal S_1^2$ & $\mathcal S_1^3$ & $\mathcal R_1^{}$ \\
\midrule
$\qopname\relax o{\mathsf{Tr}} f$ for $f\in M$ %
& \tree yyynyy & \tree yyyyny & \tree yyyyyn %
& \tree yyyynn & \tree yyynyn & \tree yyynny & \tree yyynnn \\
\bottomrule
\end{tabular}
\end{table}
If $n>0$ and $f\in\mathcal Q_{n+1}$, then consider the $3^n$ $n$\nobreakdash-\hskip0pt parts of $G_{n+1}$, which are isomorphic to $G_1$. On each of these parts $f$ induces (up to scaling and translation) a forest in $\mathcal Q_1$. In order to obtain the trace $\qopname\relax o{\mathsf{Tr}} f$, replace each of these small forests by its respective trace:
\[ \qopname\relax o{\mathsf{Tr}} f = \bigcup_{w\in\mathbb W^n} \psi_w(\qopname\relax o{\mathsf{Tr}} \pi_w(f)). \]
Note that $\qopname\relax o{\mathsf{Tr}}$ maps $\mathcal T_{n+1}^i$ onto $\mathcal T_n^i$, $\mathcal S_{n+1}^i$ onto $\mathcal S_n^i$ ($i\in\{1,2,3\}$), and $\mathcal R_{n+1}^{}$ onto $\mathcal R_n^{}$. In order to emphasize the dependence on $n$, we write $\qopname\relax o{\mathsf{Tr}}^{n+1}_n t$ instead of $\qopname\relax o{\mathsf{Tr}} f$ if $f\in\mathcal Q_{n+1}$. For $m\ge n$ define $\qopname\relax o{\mathsf{Tr}}^m_n = \qopname\relax o{\mathsf{Tr}}^{n+1}_{n} \circ \dotsb \circ \qopname\relax o{\mathsf{Tr}}^m_{m-1}$. Then
\begin{align*}
\mathcal T_n^1 &= \bigl\{ t\in\mathcal T_n \,:\, \qopname\relax o{\mathsf{Tr}}^n_0 t = \tree yyynyy \bigr\}, \\
\mathcal T_n^2 &= \bigl\{ t\in\mathcal T_n \,:\, \qopname\relax o{\mathsf{Tr}}^n_0 t = \tree yyyyny \bigr\}, \\
\mathcal T_n^3 &= \bigl\{ t\in\mathcal T_n \,:\, \qopname\relax o{\mathsf{Tr}}^n_0 t = \tree yyyyyn \bigr\}.
\end{align*}
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=3]
\scope[shift={(0.0,0)}]
\mytikzg{(0,0)}{xx}{\draw[dashed] \mytikztri{--}{node[vertex] {}} -- cycle;}{}
\mywalk[0,0]{1,0,4,0,0,0}{black}
\mywalk[1,1]{2,0,1,2,4}{black}
\mywalk[1,3]{5,5}{black}\mywalk[2,2]{4}{black}
\endscope
\scope[shift={(1.4,0)}]
\mytikzg{(0,0)}{x}{\draw[dashed] \mytikztri{--}{node[vertex] {}} -- cycle;}{}
\def1}\mywalk[1,0]{3,1}{black{0.5}
\mywalk[0,0]{1,5,0}{black}\mywalk[0,1]{0,2}{black}
\endscope
\scope[shift={(2.8,0)}]
\mytikzg{(0,0)}{}{\draw[dashed] \mytikztri{--}{node[vertex] {}} -- cycle;}{}
\def1}\mywalk[1,0]{3,1}{black{1}\mywalk[1,0]{3,1}{black}
\endscope
\draw (1.2,0.475) node {$\stackrel{\!\qopname\relax o{\mathsf{Tr}}}{\longmapsto}$};
\draw (2.6,0.475) node {$\stackrel{\!\qopname\relax o{\mathsf{Tr}}}{\longmapsto}$};
\end{tikzpicture}
\caption{A spanning tree $t$ on $G_2$ and the traces $\qopname\relax o{\mathsf{Tr}} t = \qopname\relax o{\mathsf{Tr}}_1^2 t$ and $\qopname\relax o{\mathsf{Tr}}_0^2 t$.}
\label{figure:trace}
\end{figure}
Figure~\ref{figure:trace} shows a spanning tree on $G_2$ and its traces on $G_1$ and $G_0$. The importance of the trace stems from the fact that $(\mathcal Q_n, \qopname\relax o{\mathsf{Tr}}^m_n)$ is a projective system. Hence we can define $\mathcal Q_\infty = \varprojlim \mathcal Q_n$ and write $\qopname\relax o{\mathsf{Tr}}^\infty_n$ to denote the canonical projection from $\mathcal Q_\infty$ to $\mathcal Q_n$. Similarly, set
\[ \mathcal T_\infty^i = \varprojlim \mathcal T_n^i, \qquad
\mathcal S_\infty^i = \varprojlim \mathcal S_n^i, \qquad
\mathcal R_\infty^{} = \varprojlim \mathcal R_n^{} \]
for $i\in\{1,2,3\}$. Then
\[ \mathcal T_\infty^{} = \varprojlim \mathcal T_n^{}
= \mathcal T_\infty^1 \uplus \mathcal T_\infty^2 \uplus \mathcal T_\infty^3 \]
and
\[ \mathcal Q_\infty^{} = \mathcal T_\infty \uplus
\mathcal S_\infty^1 \uplus \mathcal S_\infty^2 \uplus \mathcal S_\infty^3 \uplus \mathcal R_\infty. \]
Let $w\in\mathbb W^*$ be a word of length $n\ge0$ and let $f\in\mathcal Q_\infty$. Then
\[ \pi_w(f) = (\pi_w(\qopname\relax o{\mathsf{Tr}}^\infty_n f),\pi_w(\qopname\relax o{\mathsf{Tr}}^\infty_{n+1} f),\dotsc)\in\mathcal Q_\infty \]
extends the definition of the restriction operator $\pi_w$ to $\pi_w\colon\mathcal Q_\infty\to\mathcal Q_\infty$.
Next we define the \emph{type} of an element of $\mathcal Q_{\infty}$ (or a part of it). Set
\[ \mathcal C = \{\img5,\img6,\img7,\img1,\img2,\img3,\img4\}. \]
For $f\in\mathcal Q_\infty$ let $\chi_\emptyword(f)\in\mathcal C$ be given by Table~\ref{table:type}. The symbol $\chi_\emptyword(f)$ gives a crude indication of the shape of $f$.
\begin{table}[htb]
\caption{The definition of $\chi_\emptyword(f)$ for $f\in\mathcal Q_\infty$.}
\label{table:type}
\centering
\begin{tabular}{@{}*{8}{c}@{}}
\toprule
$M$ & $\mathcal T_\infty^1$ & $\mathcal T_\infty^2$ & $\mathcal T_\infty^3$ %
& $\mathcal S_\infty^1$ & $\mathcal S_\infty^2$ & $\mathcal S_\infty^3$ & $\mathcal R_\infty^{}$ \\
\midrule
$\chi_\emptyword(f)$ for $f\in M$ & \img5 & \img6 & \img7 & \img1 & \img2 & \img3 & \img4 \\
\bottomrule
\end{tabular}
\end{table}
For any non-empty word $w\in\mathbb W^*$ define $\chi_w(f)$ by $\chi_w(f) = \chi_\emptyword(\pi_w(f))$. This yields a map $\mathbold\chi\colon\mathcal Q_\infty\to\mathcal C^{\mathbb W^*}$ given by $\mathbold\chi(f) = (\chi_w(f))_{w\in\mathbb W^*}$: $\mathbold\chi(f)$ encodes the shape of $f$ at every level. In order to reconstruct $\qopname\relax o{\mathsf{Tr}}^\infty_n f$ from $\mathbold\chi(f)$, let $\eta$ be the map from $\mathcal C$ to the set of subgraphs of $G_0$ defined in the obvious way, see Table~\ref{table:sub1}. Then
\[ \qopname\relax o{\mathsf{Tr}}^\infty_n f = \bigcup_{w\in\mathbb W^n} \psi_w(\eta(\chi_w(f))). \]
Hence $\mathbold\chi$ is one-to-one.
\begin{table}[htb]
\caption{The mappings $\eta$ and $\nu$.}
\label{table:sub1}
\centering
\begin{tabular}{@{}*{8}{c}@{}}
\toprule
$x$ & \img5 & \img6 & \img7 & \img1 & \img2 & \img3 & \img4 \\
\midrule
$\eta(x)$ & \tree yyynyy & \tree yyyyny & \tree yyyyyn & \tree yyyynn & \tree yyynyn & \tree yyynny & \tree yyynnn \\
$\nu(x)$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\bottomrule
\end{tabular}
\end{table}
Let $\nu$ be the bijection from $\mathcal C$ to $\{1,\dotsc,7\}$ given in Table~\ref{table:sub1}. Define the functions $\chi^{\scriptscriptstyle\#}_{i,n}(f) = \card{\{ w\in\mathbb W^n \,:\, \nu(\chi_w(f)) = i \}}$, which count the number of $n$\nobreakdash-\hskip0pt parts of type $i \in \{1,2,\ldots,7\}$, and set
\[ \mathbold\chi^{\scriptscriptstyle\#}_n(f) = \bigl(\chi^{\scriptscriptstyle\#}_{1,n}(f),\dotsc,\chi^{\scriptscriptstyle\#}_{7,n}(f)\bigr) \]
for $n\ge0$. Of course all these definitions also make sense for finite forests $f\in\mathcal Q_m$ (where $m$ is some nonnegative integer). For $w\in\mathbb W^n$, $n\le m$, define $\chi_w(f)$ and $\mathbold\chi_n(f)$ in analogy to the definitions above.
Finally, we define the number of connected components $c(x)$, where $x$ is a symbol in $\mathcal C$, in the canonical way as follows:
\[ c(x) = \begin{cases}
1 & \text{if } x \in \{\img5,\img6,\img7\}, \\
2 & \text{if } x \in \{\img1,\img2,\img3\}, \\
3 & \text{if } x = \img4.
\end{cases} \]
For $f\in\mathcal Q_\infty$, we set $c(f)=c(\chi_\emptyword(f))$, which is also the number of components of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$ for any $n\ge0$. The following simple lemma relates all our counting functions (cf.~Lemma~5.1 in \cite{teufl2011number}). We write $\mathbold v^t$ to denote the transpose of a vector $\mathbold v$.
\begin{lemma}\label{lemma:constraints}
For any $f\in\mathcal Q_\infty$ and any $n\ge0$,
\begin{align*}
\mathbold\chi^{\scriptscriptstyle\#}_n(f) \cdot (1,1,1,1,1,1,1)^t &= 3^n, \\
\mathbold\chi^{\scriptscriptstyle\#}_n(f) \cdot (2,2,2,1,1,1,0)^t &= \tfrac32(3^n+1) - c(f), \\
\mathbold\chi^{\scriptscriptstyle\#}_n(f) \cdot (1,1,1,-1,-1,-1,-3)^t &= 3 - 2c(f).
\end{align*}
\end{lemma}
\begin{proof}
The first equation is immediate. In order to prove the second, notice that $\mathbold\chi^{\scriptscriptstyle\#}_n(f)\cdot(2,2,2,1,1,1,0)^t$ is the number of edges in the spanning forest $\qopname\relax o{\mathsf{Tr}}^\infty_n f$ of $G_n$. As this spanning forest has $c(f)$ components, its number of edges is given by $\card{VG_n}-c(f) = \frac32(3^n+1)-c(f)$. The third equation follows from the first and the second by elimination of $3^n$.
\end{proof}
\section{Uniform spanning trees}
\label{section:ust}
We now come to the core part of this paper: the discussion of the structure of uniform spanning trees of $G_n$. Let us write $\qopname\relax o{\mathsf{Unif}}\mathcal X$ to denote the uniform distribution on a finite, non-empty set $\mathcal X$. For $i\in\{1,2,3\}$ let $T_n^i$ be a uniformly random element in $\mathcal T_n^i$. If $B\sim\qopname\relax o{\mathsf{Unif}}\{1,2,3\}$ is independent of $T_n^i$, then $T_n^B$ is clearly a uniform spanning tree of $G_n$, i.e., $T_n^B\sim\qopname\relax o{\mathsf{Unif}}\mathcal T_n$. In the following lemma, we prove the important fact that the trace preserves probabilities:
\begin{lemma}\label{lemma:extension}
Let $i\in\{1,2,3\}$. If $T_n^i$ is uniformly random on $\mathcal T_n^i$, $S_n^i$ is uniformly random on $\mathcal S_n^i$, and $R_n^{}$ is uniformly random on $\mathcal R_n^{}$, then
\begin{align*}
\qopname\relax o{\mathbb{P}}(\qopname\relax o{\mathsf{Tr}} T_{n+1}^i \in A) &= \qopname\relax o{\mathbb{P}}(T_n^i \in A), \\
\qopname\relax o{\mathbb{P}}(\qopname\relax o{\mathsf{Tr}} S_{n+1}^i \in B) &= \qopname\relax o{\mathbb{P}}(S_n^i \in B), \\
\qopname\relax o{\mathbb{P}}(\qopname\relax o{\mathsf{Tr}} R_{n+1}^{} \in C) &= \qopname\relax o{\mathbb{P}}(R_n^{} \in C)
\end{align*}
for any $A\subseteq\mathcal T_n^i$, $B\subseteq\mathcal S_n^i$, $C\subseteq\mathcal R_n^{}$.
\end{lemma}
\begin{proof}
In order to prove the first identity, we have to show that
\[ \qopname\relax o{\mathbb{P}}(\qopname\relax o{\mathsf{Tr}} T_{n+1}^i = t) = \qopname\relax o{\mathbb{P}}(T_n^i = t) \]
for any $t\in\mathcal T_n^i$. This is equivalent to
\[ \card{\qopname\relax o{\mathsf{Tr}}^{-1} t} = \frac{\tau_{n+1}}{\tau_n} \]
for any $t\in\mathcal T_n^i$. Since
\[ \card{\mathcal T_1^k} = \tau_1 = 18, \qquad
\card{\mathcal S_1^k} = \sigma_1 = 30, \qquad
\card{\mathcal R_1^{}} = \rho_1 = 50 \]
for $k\in\{1,2,3\}$, Lemma~\ref{lemma:constraints} implies that
\[ \card{\qopname\relax o{\mathsf{Tr}}^{-1} t}
= 18^{\chi^{\scriptscriptstyle\#}_{n,1}(t)+\chi^{\scriptscriptstyle\#}_{n,2}(t)+\chi^{\scriptscriptstyle\#}_{n,3}(t)} \cdot
30^{\chi^{\scriptscriptstyle\#}_{n,4}(t)+\chi^{\scriptscriptstyle\#}_{n,5}(t)+\chi^{\scriptscriptstyle\#}_{n,6}(t)} \cdot
50^{\chi^{\scriptscriptstyle\#}_{n,7}(t)}
= 18 \cdot 540^{(3^n-1)/2}. \]
Using Lemma~\ref{lemma:count}, it is easy to see that
\[ \frac{\tau_{n+1}}{\tau_n} = 18 \cdot 540^{(3^n-1)/2}. \]
The same argument applies to the second and third identity, too.
\end{proof}
In light of Lemma~\ref{lemma:extension} and Kolmogorov's Extension Theorem there is a probability measure $P_{T^i}$ on $\mathcal T_\infty^i$ such that $P_{T^i}(\{t \in \mathcal T_{\infty}^i\,:\,\qopname\relax o{\mathsf{Tr}}^\infty_n t\in\cdot\}) = \qopname\relax o{\mathbb{P}}(T_n^i\in\cdot)$. Let $P_{S^i}$ and $P_{R^{}}$ be the analogous measures on $\mathcal S_\infty^i$ and $\mathcal R_\infty^{}$, respectively. Set
\begin{align*}
\Omega &= \{1,2,3\} \times \mathcal T_\infty^1 \times \mathcal T_\infty^2 \times \mathcal T_\infty^3 \times
\mathcal S_\infty^1 \times \mathcal S_\infty^2 \times \mathcal S_\infty^3 \times \mathcal R_\infty^{}, \\
P &= \qopname\relax o{\mathsf{Unif}}\{1,2,3\} \times P_{T^1} \times P_{T^2} \times P_{T^3}
\times P_{S^1} \times P_{S^2} \times P_{S^3} \times P_{R^{}}.
\end{align*}
Let $B, T_\infty^i$, $S_\infty^i$, $R_\infty^{}$ be the canonical projections from $\Omega$ to $\{1,2,3\}$, $\mathcal T_\infty^i$, $\mathcal S_\infty^i$, $\mathcal R_\infty^{}$, respectively. Set $T_\infty = T_\infty^B$ and, for $n\ge0$, $T_n = \qopname\relax o{\mathsf{Tr}}^\infty_n T_\infty$. Then $T_n$ is a uniform spanning tree on $G_n$ and $T_n=\qopname\relax o{\mathsf{Tr}}^m_n T_m=\qopname\relax o{\mathsf{Tr}}^\infty_n T_{\infty}$ for $m\ge n\ge 0$. Analogous statements hold for $S_n^i = \qopname\relax o{\mathsf{Tr}}^\infty_n S_\infty^i$ and $R_n = \qopname\relax o{\mathsf{Tr}}^\infty_n R_\infty$.
In the following we write $\qopname\relax o{\mathbb{P}}$ instead of $P$ and always use $\Omega$ equipped with $\qopname\relax o{\mathbb{P}}$ as probability space, whenever the random elements $T_n^{}$, $T_n^i$, $S_n^i$, etc. are considered.
Let $[\img5,\img5,\img2]$ (suppressing the dependence on $n$) be a shorthand for the set
\[ \{ \psi_1(f_1) \cup \psi_2(f_2) \cup \psi_3(f_3) \,:\,
f_1\in\mathcal T_{n-1}^1, f_2\in\mathcal T_{n-1}^1, f_3\in\mathcal S_{n-1}^2\}, \]
and analogously for other combinations. Using Lemma~\ref{lemma:count} it is easy to see that
\begin{equation}\label{eq:probs}
\begin{aligned}
\qopname\relax o{\mathbb{P}}(T_n^3\in[\img5,\img1,\img5]) &= \qopname\relax o{\mathbb{P}}(T_n^3\in[\img5,\img1,\img6]) = \dotsb
= \frac{\tau_{n-1}^2\sigma_{n-1}}{\tau_n} = \frac1{18}, \\
\qopname\relax o{\mathbb{P}}(S_n^3\in[\img5,\img3,\img1]) &= \qopname\relax o{\mathbb{P}}(S_n^3\in[\img6,\img3,\img1]) = \dotsb
= \frac{\tau_{n-1}\sigma_{n-1}^2}{\sigma_n} = \frac1{30}, \\
\qopname\relax o{\mathbb{P}}(S_n^3\in[\img5,\img5,\img4]) &= \qopname\relax o{\mathbb{P}}(S_n^3\in[\img5,\img6,\img4]) = \dotsb
= \frac{\tau_{n-1}^2\rho_{n-1}}{\sigma_n} = \frac1{30}, \\
\qopname\relax o{\mathbb{P}}(R_n^{}\in[\img3,\img2,\img3]) &= \qopname\relax o{\mathbb{P}}(R_n^{}\in[\img1,\img3,\img3]) = \dotsb
= \frac{\sigma_{n-1}^3}{\rho_n} = \frac1{50}, \\
\qopname\relax o{\mathbb{P}}(R_n^{}\in[\img5,\img2,\img4]) &= \qopname\relax o{\mathbb{P}}(R_n^{}\in[\img6,\img2,\img4]) = \dotsb
= \frac{\tau_{n-1}\sigma_{n-1}\rho_{n-1}}{\rho_n} = \frac1{50},
\end{aligned}
\end{equation}
where dots indicate combinations in the same ``group'' (group sizes are $18$, $21$, $9$, $14$, and $36$, see Figure~\ref{figure:rectree}). Of course, analogous results also hold for $T_n^1$, $T_n^2$, $S_n^1$, $S_n^2$. Furthermore, note that
\[ \qopname\relax o{\mathbb{P}}(\pi_2(T_n^3) \in \cdot \mid T_n^3\in[\img5,\img1,\img5]) = \qopname\relax o{\mathsf{Unif}}\mathcal S_{n-1}^1, \]
and analogously for other combinations and restrictions. Using this fact we obtain the following result, which relates uniform spanning trees on Sierpi\'nski graphs to a multi-type Galton-Watson process:
\begin{proposition}\label{proposition:tree1}
Let $\mathcal U_\infty$ be one of $\mathcal T_\infty^{}$, $\mathcal T^i_\infty$, $\mathcal S^i_\infty$, $\mathcal R^{}_\infty$, and let $U_\infty$ be the corresponding random object.
\begin{enumerate}[\normalfont(1)]
\item The random tree
\[ \mathbold\chi(U_\infty) = (\chi_w(U_\infty))_{w\in\mathbb W^*} \]
is a labelled multi-type Galton-Watson tree with labels in $\mathbb W^*$ and types in $\mathcal C$.
The type distribution of the root depends on the specific choice for $\mathcal U_\infty$ and
is given by $\qopname\relax o{\mathsf{Unif}}\{\chi_\emptyword(f) \,:\, f\in\mathcal U_\infty\}$.
The set of individuals is deterministic and equal to $\mathbb W^*$.
Each individual has three children with suffixes $1,2,3$.
For $x\in\mathcal C$ set
\[ \mathcal D(x)
= \bigl\{ (\chi_1(f),\chi_2(f),\chi_3(f)) \,:\,
f\in\mathcal Q_1, \chi_\emptyword(f)=x \bigr\}
\subseteq \mathcal C^3. \]
Then, by Equation~\eqref{eq:probs}, the offspring distribution of an individual of type $x$
is given by $\qopname\relax o{\mathsf{Unif}}\mathcal D(x)$, that is,
\begin{equation}\label{eq:gwdist}
\qopname\relax o{\mathbb{P}}((\chi_{w1}(U_\infty),\chi_{w2}(U_\infty),\chi_{w3}(U_\infty)) \in \cdot \mid \chi_w(U_\infty) = x)
= \qopname\relax o{\mathsf{Unif}}\mathcal D(x).
\end{equation}
\item $(\mathbold\chi^{\scriptscriptstyle\#}_n(U_\infty))_{n\ge0}$ is a multi-type Galton-Watson process with seven types,
which is non-singular, positively regular, and supercritical.
The type distribution of the root is given by the uniform distribution
$\qopname\relax o{\mathsf{Unif}}\{\nu(\chi_\emptyword(f)) \,:\, f\in\mathcal U_\infty\}$.
The offspring generating function is easily computed by means of Equation~\eqref{eq:gwdist}:
using the abbreviation $s=\frac13(z_1+z_2+z_3)$, we have
\begin{align*}
\mathbold f(\mathbold z) = \Bigl(
&\tfrac12 s^2(z_5+z_6), \; \tfrac12 s^2(z_4+z_6), \; \tfrac12 s^2(z_4+z_5), \vphantom{\Big(}\\
&\tfrac1{10} s\bigl(3z_4^2 + 2z_4(z_5+z_6) + 3sz_7\bigr), \vphantom{\Big(}\\
&\tfrac1{10} s\bigl(3z_5^2 + 2z_5(z_4+z_6) + 3sz_7\bigr), \vphantom{\Big(}\\
&\tfrac1{10} s\bigl(3z_6^2 + 2z_6(z_4+z_5) + 3sz_7\bigr), \vphantom{\Big(}\\
&\tfrac1{25} \bigl(z_4^2z_5 + z_4z_5^2 + z_4^2z_6 + z_4z_6^2 + z_5^2z_6 + z_5z_6^2 \vphantom{\Big(}\\
&\qquad\qquad + z_4z_5z_6 + 6s(z_4+z_5+z_6)z_7\bigr) \Bigr).
\end{align*}
Its mean matrix $\mathbold M$ is given by
\[ \mathbold M = \frac1{150}\begin{pmatrix}
100 & 100 & 100 & 0 & 75 & 75 & 0 \\
100 & 100 & 100 & 75 & 0 & 75 & 0 \\
100 & 100 & 100 & 75 & 75 & 0 & 0 \\
65 & 65 & 65 & 150 & 30 & 30 & 45 \\
65 & 65 & 65 & 30 & 150 & 30 & 45 \\
65 & 65 & 65 & 30 & 30 & 150 & 45 \\
36 & 36 & 36 & 78 & 78 & 78 & 108 \\
\end{pmatrix}. \]
The dominating eigenvalue of $\mathbold M$ is equal to $3$.
The corresponding right and left eigenvectors are
\[ \mathbold v_R=(1,1,1,1,1,1,1)^t \qquad\text{and}\qquad
\mathbold v_L=\tfrac1{288}(53,53,53,38,38,38,15), \]
respectively. $\mathbold v_R$ and $\mathbold v_L$ are normalized
so that $\mathbold v_L \cdot \mathbold v_R = 1$ and $\norm{\mathbold v_L}_1=1$.
\item Since $\norm{\mathbold\chi^{\scriptscriptstyle\#}_n(U_\infty)}_1 = \mathbold\chi^{\scriptscriptstyle\#}_n(U_\infty) \cdot \mathbold v_R = 3^n$,
it follows that
\[ \lim_{n\to\infty} 3^{-n} \mathbold\chi^{\scriptscriptstyle\#}_n(U_\infty) = \mathbold v_L \]
almost surely, see Theorem~1.8.3 in \cite{mode1971multitype}.
\end{enumerate}
\end{proposition}
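The numerical statements in parts (2) and (3) are easily verified; the following sketch checks that the stated mean matrix has the all-ones right eigenvector and the left eigenvector $\tfrac1{288}(53,53,53,38,38,38,15)$ for the eigenvalue $3$, together with the two normalizations. (The offspring generating function itself could be checked analogously by symbolic differentiation.)
\begin{verbatim}
import numpy as np

M = np.array([
    [100, 100, 100,   0,  75,  75,   0],
    [100, 100, 100,  75,   0,  75,   0],
    [100, 100, 100,  75,  75,   0,   0],
    [ 65,  65,  65, 150,  30,  30,  45],
    [ 65,  65,  65,  30, 150,  30,  45],
    [ 65,  65,  65,  30,  30, 150,  45],
    [ 36,  36,  36,  78,  78,  78, 108],
], dtype=float) / 150

v_R = np.ones(7)
v_L = np.array([53, 53, 53, 38, 38, 38, 15]) / 288

assert np.allclose(M @ v_R, 3 * v_R)    # every part splits into three parts
assert np.allclose(v_L @ M, 3 * v_L)    # stationary type proportions
assert np.isclose(v_L @ v_R, 1) and np.isclose(np.abs(v_L).sum(), 1)
\end{verbatim}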
\begin{remark}\label{remark:coll1}
In order to sample a uniform spanning tree on $G_n$, we may simulate the $n$\nobreakdash-\hskip0pt th generation of a labelled multi-type Galton-Watson tree as described above. In this process, we have to choose one of \img5, \img6, \img7 with equal probability as the type for the ancestor $\emptyword$ of the tree. It is possible to postpone this choice from the beginning to the $n$\nobreakdash-\hskip0pt th generation. To this end, collapse the three types \img5, \img6, \img7 into one type \img0. This yields again a labelled multi-type Galton-Watson tree, but now with five types $\{\img0,\img1,\img2,\img3,\img4\}$. In order to obtain a uniform spanning tree on $G_n$, consider the $n$\nobreakdash-\hskip0pt th generation of this simplified labelled multi-type Galton-Watson tree and replace each occurrence of \img0 independently by one of \img5, \img6, \img7 with equal probability. This modified $n$\nobreakdash-\hskip0pt th generation describes a spanning tree on $G_n$, whose distribution is uniform. Figure~\ref{figure:example-spanning} shows an example of a randomly generated spanning tree on $G_5$.
\end{remark}
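The sampling procedure of the remark is straightforward to implement once the offspring tables $\mathcal D(x)$ (i.e., the admissible child-type triples behind Figure~\ref{figure:rectree}) have been enumerated; they are treated as given input in the following sketch, which simulates $n$ generations of the labelled multi-type Galton-Watson tree and returns the type of every $n$\nobreakdash-\hskip0pt part.
\begin{verbatim}
import random

def sample_types(n, root_types, D, rng=random):
    # D[x] is the list of equally likely child-type triples of an individual
    # of type x; root_types are the admissible root types.  The result maps
    # each word of length n (a tuple over {1,2,3}) to its type, i.e. it
    # encodes the trace of the sampled forest on G_n.
    level = {(): rng.choice(root_types)}
    for _ in range(n):
        nxt = {}
        for w, x in level.items():
            children = rng.choice(D[x])
            for i, t in zip((1, 2, 3), children):
                nxt[w + (i,)] = t
        level = nxt
    return level
\end{verbatim}
To realize the collapsed version described in the remark, the three tree types are merged into a single root type and only re-expanded, uniformly at random, in the final generation.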
\begin{figure}[htb]
\centering
\includegraphics[scale=0.4]{sample-tree}
\caption{A randomly generated spanning tree on $G_5$.}
\label{figure:example-spanning}
\end{figure}
\begin{remark}\label{remark:func1}
Suppose $\lambda$ is a parameter of spanning trees in $G_n$, and we are interested in the behaviour of $\lambda(T_n)$ as $n\to\infty$. When $\lambda(T_n)$ is a functional of $\mathbold\chi^{\scriptscriptstyle\#}_n(T_\infty)$, say $\lambda(T_n) = h(\mathbold\chi^{\scriptscriptstyle\#}_n(T_\infty))$ for some linear function $h$, then $3^{-n}\lambda(T_n) \to h(\mathbold v_L)$ almost surely. Of course, this generalizes to positive homogeneous functions $h$. As a simple example consider the number of $n$\nobreakdash-\hskip0pt parts with $i$ connected components. For $f\in\mathcal Q_\infty$, $i\in\{1,2,3\}$, and $n\ge0$, let us denote this quantity by $c^{\scriptscriptstyle\#}_{i,n}(f) = \card{\{w\in\mathbb W^n \,:\, c(\chi_w(f))=i\}}$. Then
\[ \mathbold c^{\scriptscriptstyle\#}_n(f) = \bigl(c^{\scriptscriptstyle\#}_{1,n}(f),c^{\scriptscriptstyle\#}_{2,n}(f),c^{\scriptscriptstyle\#}_{3,n}(f)\bigr)
= \mathbold\chi^{\scriptscriptstyle\#}_n(f) \cdot \begin{pmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}^t \]
and therefore $3^{-n} \mathbold c^{\scriptscriptstyle\#}_n(U_\infty) \to \frac1{96}(53,38,5)$ almost surely as $n\to\infty$ if $U_\infty$ is one of $T_\infty^{}$, $T_\infty^i$, $S_\infty^i$, $R_\infty^{}$ for $i\in\{1,2,3\}$. Note that, due to symmetry, $(\mathbold c^{\scriptscriptstyle\#}_n(U_\infty))_{n\ge0}$ is a multi-type Galton-Watson process in its own right.
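The limiting vector $\tfrac1{96}(53,38,5)$ can be checked with a few lines of exact rational arithmetic. The following Python sketch (purely illustrative, not part of the derivation) applies the projection matrix above to the left eigenvector $\mathbold v_L$:
\begin{verbatim}
from fractions import Fraction as F

# left eigenvector v_L (eigenvalue 3), normalised so that its entries sum to 1
v_L = [F(c, 288) for c in (53, 53, 53, 38, 38, 38, 15)]
# rows of the 3 x 7 projection matrix above (types with 1, 2 and 3 components)
P = [(1, 1, 1, 0, 0, 0, 0),
     (0, 0, 0, 1, 1, 1, 0),
     (0, 0, 0, 0, 0, 0, 1)]
print([sum(v * p for v, p in zip(v_L, row)) for row in P])
# -> [Fraction(53, 96), Fraction(19, 48), Fraction(5, 96)], i.e. (53, 38, 5)/96
\end{verbatim}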
A straightforward calculation shows that the variances of the entries of $\mathbold\chi^{\scriptscriptstyle\#}_n$ and $\mathbold c^{\scriptscriptstyle\#}_n$ are of order $3^n$, so that Chebyshev's inequality yields
\begin{equation}\label{eq:cheby}
\qopname\relax o{\mathbb{P}}(\lVert\mathbold\chi^{\scriptscriptstyle\#}_n(U_{\infty}) - 3^n \mathbold v_L \rVert_1 \geq \alpha^n) \ll 3^{n} \alpha^{-2n}
\end{equation}
for any $\alpha \in (\sqrt{3},3)$; an analogous inequality holds for $\mathbold c^{\scriptscriptstyle\#}_n$.
\end{remark}
In the following we study two quantities of a random spanning forest of $G_n$ which require more work, since the previous remark does not apply directly. The first are the component sizes in $S_n^1,S_n^2,S_n^3,R_n^{}$; in this case it turns out that the components can be described using an augmented labelled multi-type Galton-Watson tree. The second is the degree distribution in $T_n$; here we use the recursive description of uniform spanning trees of $G_n$ (see Figure~\ref{figure:rectree} and Proposition~\ref{proposition:tree1}) and the rapid decay of the tail probabilities given by \eqref{eq:cheby}.
\subsection{Component sizes}\label{subsection:component}
Spanning trees only have one component, but for random spanning forests $S_n^i$ or $R_n$, the sizes (number of vertices or edges) of the components are interesting random variables. Let us briefly explain how their limiting distribution can be obtained.
First, we need some notation. For a non-empty subset $B$ of $VG_0$, let $f$ be an element of $\mathcal Q_\infty$, and assume that $B$ is the vertex set of the union of some connected components of $\qopname\relax o{\mathsf{Tr}}^\infty_0 f$. Write $C_n(f,B)$ to denote the union of those components of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$ having non-empty intersection with $B$. For example, if $f \in S_{\infty}^1$, then $C_n(f,\{u_1\})$ is the component of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$ that contains $u_1$, $C_n(f,\{u_2,u_3\})$ is the component that contains $u_2$ and $u_3$, and $C_n(f,\{u_1,u_2,u_3\})$ is the entire spanning forest $\qopname\relax o{\mathsf{Tr}}^\infty_n f$.
We are interested in the size of $C_n(f,B)$, which unfortunately is not a linear functional of $\mathbold\chi^{\scriptscriptstyle\#}_n(f)$. However, it is possible to define a subtree of the Galton-Watson tree $\mathbold\chi(f)$ that encodes $f$, and to augment the types $\mathcal C$ with extra information that records the evolution of the components in $C_n(f,B)$. If $f$ is randomly chosen, the resulting subtree with augmented types is again a labelled multi-type Galton-Watson tree, as will be shown in the following. For $n\ge0$, let
\[ \hat W_n(f,B) = \bigl\{ w\in\mathbb W^n \,:\, C_n(f,B) \cap \psi_w(G_0) \ne \emptyset \bigr\} \]
be the set of those words $w\in\mathbb W^n$ for which the corresponding $n$\nobreakdash-\hskip0pt part $\psi_w(G_0)$ of the Sierpi\'nski gasket has non-empty intersection with $C_n(f,B)$. Their union
\[ \hat W(f,B) = \bigcup_{n\ge0} \hat W_n(f,B) \]
induces a subtree of $\mathbb W^*$, and each word in $\hat W(f,B)$ has one, two or three children. For $w\in\hat W(f,B)$, write $\hat\kappa_w(f,B)$ to denote the vertex set $V(\pi_w(\psi_w(G_0)\cap C_n(f,B)))$ (in words: the vertices of the $n$\nobreakdash-\hskip0pt part $\psi_w(G_0)$ that are in common components with vertices of $B$, projected back to $G_0$). To each $w\in\hat W_n(f,B)$, we assign one of the following nineteen types
\[ \mathcal{\hat C} = \{
\comp5y--, \comp6y--, \comp7y--,
\comp1yy-, \comp1ny-, \comp1yn-,
\comp2yy-, \comp2ny-, \comp2yn-,
\comp3yy-, \comp3ny-, \comp3yn-,
\comp4yyy,
\comp4nyy, \comp4yny, \comp4yyn,
\comp4ynn, \comp4nyn, \comp4nny \} \]
encoding two pieces of information: $\chi_w(f)$ (structure of the restriction of $f$ to the respective $n$\nobreakdash-\hskip0pt part) and $\hat\kappa_w(f,B)$ (black parts indicate which of the corner vertices are in common components with elements of $B$). We denote this assignment by $\hat\chi_w(f,B)$, see Table~\ref{table:comptypes} for a precise definition of $\hat\chi_w(f,B)$ in terms of $\chi_w(f)$ and $\hat\kappa_w(f,B)$.
\begin{table}[htb]
\caption{The type $\hat\chi_w(f,B)$, given $\chi_w(f)$ and $\hat\kappa_w(f,B)$.}
\label{table:comptypes}
\centering
\begin{tabular}{@{}*{8}{c}@{}}
\toprule
& \multicolumn{7}{c}{$\chi_w(f)$} \\
\cmidrule(l){2-8}
$\hat\kappa_w(f,B)$ & \img5 & \img6 & \img7 & \img1 & \img2 & \img3 & \img4 \\
\cmidrule(r){1-1} \cmidrule(l){2-8}
$\{u_1,u_2,u_3\}$ & \comp5y-- & \comp6y-- & \comp7y-- & \comp1yy- & \comp2yy- & \comp3yy- & \comp4yyy \\
$\{u_2,u_3\}$ & & & & \comp1ny- & & & \comp4nyy \\
$\{u_1,u_3\}$ & & & & & \comp2ny- & & \comp4yny \\
$\{u_1,u_2\}$ & & & & & & \comp3ny- & \comp4yyn \\
$\{u_1\}$ & & & & \comp1yn- & & & \comp4ynn \\
$\{u_2\}$ & & & & & \comp2yn- & & \comp4nyn \\
$\{u_3\}$ & & & & & & \comp3yn- & \comp4nny \\
\bottomrule
\end{tabular}
\end{table}
Finally set $\mathbold{\hat\chi}(f,B) = (\hat\chi_w(f,B))_{w\in\hat W(f,B)}$. It is easy to see that it is possible to reconstruct the graph $C_n(f,B)$ from $\mathbold{\hat\chi}(f,B)$: formally,
\[ C_n(f,B) = \bigcup_{w\in\hat W_n(f,B)} \psi_w(\hat\eta(\hat\chi_w(f,B))), \]
where $\hat\eta$ is given in Table~\ref{table:subcomp}.
\begin{table}[htb]
\caption{The mappings $\hat\eta$ and $\hat c$.}
\label{table:subcomp}
\centering
\begin{tabular}{@{}*{13}{c}@{}}
\toprule
$x$ & \comp5y-- & \comp6y-- & \comp7y-- %
& \comp1yy- & \comp1ny- & \comp1yn- %
& \comp2yy- & \comp2ny- & \comp2yn- %
& \comp3yy- & \comp3ny- & \comp3yn- \\
\midrule
$\hat\eta(x)$ & \tree yyynyy & \tree yyyyny & \tree yyyyyn %
& \tree yyyynn & \tree nyyynn & \tree ynnnnn %
& \tree yyynyn & \tree ynynyn & \tree nynnnn %
& \tree yyynny & \tree yynnny & \tree nnynnn \\
$\hat c(x)$ & 1 & 1 & 1 & 2 & 4 & 5 & 2 & 4 & 5 & 2 & 4 & 5 \\
\midrule
$x$ & \comp4yyy %
& \comp4nyy & \comp4yny & \comp4yyn %
& \comp4ynn & \comp4nyn & \comp4nny \\
\midrule
$\hat\eta(x)$ & \tree yyynnn %
& \tree nyynnn & \tree ynynnn & \tree yynnnn %
& \tree ynnnnn & \tree nynnnn & \tree nnynnn \\
$\hat c(x)$ & 3 & 6 & 6 & 6 & 7 & 7 & 7 \\
\bottomrule
\end{tabular}
\end{table}
Now let us define $\hat c(x)$ for $x\in\mathcal{\hat C}$ as in Table~\ref{table:subcomp}. For $i\in\{1,\dotsc,7\}$ and $n\ge0$ set $\hat c^{\scriptscriptstyle\#}_{i,n}(f,B) = \card{\{w\in\hat W_n(f,B) \,:\, \hat c(\hat\chi_w(f,B)) = i\}}$ and
\[ \mathbold{\hat c}^{\scriptscriptstyle\#}_n(f,B) = \bigl( \hat c^{\scriptscriptstyle\#}_{1,n}(f,B), \dotsc, \hat c^{\scriptscriptstyle\#}_{7,n}(f,B) \bigr). \]
The vector $\mathbold{\hat c}^{\scriptscriptstyle\#}_n(f,B)$ counts the number of words in $\hat W_n(f,B)$ of given type up to symmetry. Note that the number of edges in $C_n(f,B)$ can be determined from $\mathbold{\hat c}^{\scriptscriptstyle\#}_n(f,B)$: it is given by
\[ \card{EC_n(f,B)} = \mathbold{\hat c}^{\scriptscriptstyle\#}_n(f,B) \cdot (2,1,0,1,0,0,0)^t, \]
and the number of vertices in $C_n(f,B)$ satisfies
\[ 1 \le \card{VC_n(f,B)} - \card{EC_n(f,B)} \le 3, \]
the precise value of the difference depending on the type of $f$ and the set $B$. Now let $U_\infty$ be one of $T_\infty^{}, T_\infty^i, S_\infty^i, R_\infty^{}$ ($i\in\{1,2,3\}$), and choose $B\subseteq VG_0$ so that $B$ is the vertex set of the union of some components of $\qopname\relax o{\mathsf{Tr}}^\infty_0 U_\infty$. Then $\mathbold{\hat\chi}(U_\infty,B)$ is a labelled multi-type Galton-Watson tree with types in $\mathcal{\hat C}$, and $(\mathbold{\hat c}^{\scriptscriptstyle\#}_n(U_\infty,B))_{n\ge0}$ is a multi-type Galton-Watson process with seven types. The offspring generating function $\mathbold{\hat f}(\mathbold z)$ of the process is given by
\begin{align*}
\mathbold{\hat f}(\mathbold z) = \Bigl(
& z_1^2 z_2, \; \tfrac7{10} z_1 z_2^2 + \tfrac3{10} z_1^2 z_3, \;
\tfrac7{25} z_2^3 + \tfrac{18}{25} z_1 z_2 z_3, \\
& \tfrac2{10} z_1 z_4 z_5 + \tfrac4{10} z_1 z_2 z_4 + \tfrac1{10} z_4^2 + \tfrac3{10} z_1^2 z_6, \\
& \tfrac2{10} z_4 z_5 + \tfrac4{10} z_5 + \tfrac1{10} z_1 z_5^2 + \tfrac3{10} z_7, \\
& \tfrac3{25} z_2^2 z_4 + \tfrac2{25} z_2 z_4 z_5 + \tfrac1{25} z_4 z_5^2 + \tfrac1{25} z_5^2
+ \tfrac6{25} z_1 z_2 z_6 + \tfrac3{25} z_5 z_7 \\
& \qquad\qquad + \tfrac3{25} z_1 z_3 z_4 + \tfrac3{25} z_4 z_6 + \tfrac3{25} z_1 z_5 z_6, \\
& \tfrac6{25} z_5 + \tfrac1{25} z_2 z_4^2 + \tfrac2{25} z_4 z_5 + \tfrac1{25} z_4^2 z_5
+ \tfrac6{25} z_7 + \tfrac3{25} z_1 z_4 z_6 + \tfrac3{25} z_1 z_5 z_7 + \tfrac3{25} z_4 z_7 \Bigr),
\end{align*}
and the mean matrix is given by
\[ \mathbold{\hat M} = \frac1{50} \cdot \begin{pmatrix}
100 & 50 & 0 & 0 & 0 & 0 & 0 \\
65 & 70 & 15 & 0 & 0 & 0 & 0 \\
36 & 78 & 36 & 0 & 0 & 0 & 0 \\
60 & 20 & 0 & 40 & 10 & 15 & 0 \\
5 & 0 & 0 & 10 & 40 & 0 & 15 \\
24 & 28 & 6 & 24 & 24 & 24 & 6 \\
12 & 2 & 0 & 24 & 24 & 6 & 24 \\
\end{pmatrix}. \]
Hence the multi-type Galton-Watson process is non-singular, but not positively regular, as the mean matrix is reducible. The dominating eigenvalue is $3$, which belongs to the $3\times3$ block of $\mathbold{\hat M}$ in the upper left corner. It has multiplicity $1$, and the corresponding right and left eigenvectors are
\[ \mathbold{\hat v}_R = \bigl(1,1,1,\tfrac56,\tfrac16,\tfrac23,\tfrac13\bigr)^t, \qquad
\mathbold{\hat v}_L = \tfrac1{96} \cdot (53,38,5,0,0,0,0), \]
respectively. $\mathbold{\hat v}_R$ and $\mathbold{\hat v}_L$ are normalized so that $\mathbold{\hat v}_L \cdot \mathbold{\hat v}_R = 1$ and $\norm{\mathbold{\hat v}_L}_1 = 1$. Intuitively, the fact that only the first three entries in $\mathbold{\hat v}_L$ are nonzero (and that $\mathbold{\hat M}$ is dominated by the upper left $3 \times 3$\nobreakdash-\hskip0pt block) can be explained by the fact that $n$\nobreakdash-\hskip0pt parts of types such as \comp1ny-, \comp4nny, etc. (some vertices belong to $C_n(f,B)$, others do not) can only occur at the ``borders'' between the components of the forest $\qopname\relax o{\mathsf{Tr}}_n^{\infty} f$, which only make up a very small part of the entire graph $G_n$.
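The eigenvalue $3$ and the stated eigenvectors are easy to verify; the following Python sketch (purely illustrative, using exact fractions) checks the relations $\mathbold{\hat M}\mathbold{\hat v}_R = 3\mathbold{\hat v}_R$ and $\mathbold{\hat v}_L\mathbold{\hat M} = 3\mathbold{\hat v}_L$:
\begin{verbatim}
from fractions import Fraction as F

M = [[F(a, 50) for a in row] for row in [
    [100, 50, 0, 0, 0, 0, 0],
    [65, 70, 15, 0, 0, 0, 0],
    [36, 78, 36, 0, 0, 0, 0],
    [60, 20, 0, 40, 10, 15, 0],
    [5, 0, 0, 10, 40, 0, 15],
    [24, 28, 6, 24, 24, 24, 6],
    [12, 2, 0, 24, 24, 6, 24]]]
vR = [F(1), F(1), F(1), F(5, 6), F(1, 6), F(2, 3), F(1, 3)]
vL = [F(c, 96) for c in (53, 38, 5, 0, 0, 0, 0)]

print(all(sum(M[i][j] * vR[j] for j in range(7)) == 3 * vR[i] for i in range(7)))
print(all(sum(vL[i] * M[i][j] for i in range(7)) == 3 * vL[j] for j in range(7)))
# both checks print True
\end{verbatim}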
For every choice of the boundary vertices $B$, there is a non-negative random variable $\hat\theta(U_\infty,B)$ such that
\[ 3^{-n} \mathbold{\hat c}^{\scriptscriptstyle\#}_n(U_\infty,B) \to \mathbold{\hat v}_L \hat\theta(U_\infty,B) \]
holds almost surely as $n\to\infty$, see Theorem~2.4.1 in \cite{mode1971multitype}. By symmetry, there are eight different limit distributions, one for each of the following groups:
\begin{gather*}
\bigl\{ \hat\theta(T^{}_\infty,\{u_1,u_2,u_3\}) \bigr\}, \\
\bigl\{ \hat\theta(T^1_\infty,\{u_1,u_2,u_3\}),\hat\theta(T^2_\infty,\{u_1,u_2,u_3\}),
\hat\theta(T^3_\infty,\{u_1,u_2,u_3\}) \bigr\}, \\
\bigl\{ \hat\theta(S^1_\infty,\{u_1,u_2,u_3\}),\hat\theta(S^2_\infty,\{u_1,u_2,u_3\}),
\hat\theta(S^3_\infty,\{u_1,u_2,u_3\}) \bigr\}, \\
\bigl\{ \hat\theta(R^{}_\infty,\{u_1,u_2,u_3\}) \bigr\}, \\
\bigl\{ \hat\theta(S^1_\infty,\{u_2,u_3\}),\hat\theta(S^2_\infty,\{u_1,u_3\}),
\hat\theta(S^3_\infty,\{u_1,u_2\}) \bigr\}, \\
\bigl\{ \hat\theta(S^1_\infty,\{u_1\}),\hat\theta(S^2_\infty,\{u_2\}),\hat\theta(S^3_\infty,\{u_3\}) \bigr\}, \\
\bigl\{ \hat\theta(R^{}_\infty,\{u_2,u_3\}),\hat\theta(R^{}_\infty,\{u_1,u_3\}),
\hat\theta(R^{}_\infty,\{u_1,u_2\}) \bigr\}, \\
\bigl\{ \hat\theta(R^{}_\infty,\{u_1\}),\hat\theta(R^{}_\infty,\{u_2\}),\hat\theta(R^{}_\infty,\{u_3\}) \bigr\}.
\end{gather*}
Let us write $\hat\theta_i$ ($i\in\{0,\dotsc,7\}$) for a random variable having the same distribution as a random variable of the respective group above. Of course, $\hat\theta_0,\dotsc,\hat\theta_3$ (the cases when $B = \{u_1,u_2,u_3\}$) are almost surely constant, i.e.
\[ \hat\theta_0=\hat\theta_1=\hat\theta_2=\hat\theta_3=1 \]
almost surely. The remaining variables $\hat\theta_4,\dotsc,\hat\theta_7$ have continuous densities, and
\[ \qopname\relax o{\mathbb{E}}(\hat\theta_i) = \hat v_{i,R} \]
for $i\in\{4,\dotsc,7\}$, where $\hat v_{i,R}$ is the $i$\nobreakdash-\hskip0pt th coordinate of $\mathbold{\hat v}_R$. Note also that $1 - \hat\theta_4$ and $\hat\theta_5$ have the same distribution, and the same holds for $1 - \hat\theta_6$ and $\hat\theta_7$. The limits of the renormalised component sizes can be expressed in terms of these random variables. To be precise,
\[ \lim_{n\to\infty} 3^{-n}\card{VC_n(U_\infty,B)}
= \lim_{n\to\infty} 3^{-n}\card{EC_n(U_\infty,B)} = \tfrac32 \hat\theta(U_\infty,B) \]
almost surely. In particular, the component $C_n(S^1_\infty,\{u_2,u_3\})$ is on average approximately five times larger than the complementary component $C_n(S^1_\infty,\{u_1\})$, since $\hat v_{4,R} = \frac56 = 5\hat v_{5,R}$.
\subsection{Degree distribution}\label{subsection:degree}
The distribution of the vertex degrees in a random spanning tree of the Sierpi\'nski graph $G_n$ was studied at length by Chang and Chen in their recent paper \cite{chang2010structure}. In particular, they determined the precise probability distribution of the degree of a given vertex as well as the limiting average proportion of vertices of given degree as $n \to \infty$. Here we provide a somewhat different approach to this problem, with the advantage that it also allows us to prove almost sure convergence of this proportion to a limit.
The number of vertices with a certain degree in a random spanning tree $T_n$ is again not a simple functional of the types. In fact, the degree distribution of a vertex $v \in VG_n$ depends not only on $n$, but also on the level of the vertex $v$ itself: by the level of a vertex, we mean the smallest $k$ such that $v \in VG_k$. Let us first consider the degree distribution of the corner vertices. By symmetry, it is obviously sufficient to consider one of them. Let $\mathbold d_n(h)$ be the vector of the probabilities that the degree $\deg_{U_n} u_1$ of the lower-left corner vertex $u_1$ in a random spanning forest $U_n$ is equal to $h \in \{0,1,2\}$ for $U_n \in \{T_n^1,T_n^2,T_n^3,S_n^1,S_n^2,S_n^3,R_n^{}\}$. The entries are denoted by $d_n(\img5,h),d_n(\img6,h)$, etc. Thus $d_n(\img5,h) = \qopname\relax o{\mathbb{P}}(\deg_{T_n^1} u_1 = h)$, and the other entries are defined analogously. Then it is obvious that
\begin{align*}
\mathbold d_0(0) &= (0,0,0,1,0,0,1)^t, \\
\mathbold d_0(1) &= (0,1,1,0,1,1,0)^t, \\
\mathbold d_0(2) &= (1,0,0,0,0,0,0)^t.
\end{align*}
Moreover, we deduce from the recursive structure (Figure~\ref{figure:rectree}) that $\mathbold d_n(h) = \mathbold D \mathbold d_{n-1}(h)$, where $\mathbold D$ is the matrix
\[ \mathbold D = \frac1{150} \begin{pmatrix}
50 & 50 & 50 & 0 & 0 & 0 & 0 \\
25 & 25 & 25 & 0 & 0 & 75 & 0 \\
25 & 25 & 25 & 0 & 75 & 0 & 0 \\
5 & 5 & 5 & 60 & 15 & 15 & 45 \\
30 & 30 & 30 & 0 & 45 & 15 & 0 \\
30 & 30 & 30 & 0 & 15 & 45 & 0 \\
12 & 12 & 12 & 36 & 21 & 21 & 36
\end{pmatrix}. \]
This matrix has eigenvalues $1,\frac35,\frac15,\frac1{15},\frac1{25},0,0$, and we easily find that
\begin{equation}\label{eq:degprob}
\begin{aligned}
\mathbold d_n(0) &= \tfrac{11}{28} \cdot \bigl(\tfrac35\bigr)^n \cdot (0,0,0,3,0,0,2)^t
- \tfrac1{28} \cdot \bigl(\tfrac1{25}\bigr)^n \cdot (0,0,0,5,0,0,-6)^t, \\[3pt]
\mathbold d_n(1) &= \tfrac{11}{14} \cdot (1,1,1,1,1,1,1)^t
- \tfrac2{7} \cdot \bigl(\tfrac35\bigr)^n \cdot (0,0,0,3,0,0,2)^t \\
&\quad + \tfrac1{14} \cdot \bigl(\tfrac1{15}\bigr)^n \cdot (-25,10,10,-4,3,3,3)^t
+ \tfrac1{14} \cdot \bigl(\tfrac1{25}\bigr)^n \cdot (0,0,0,5,0,0,-6)^t, \\[3pt]
\mathbold d_n(2) &= \tfrac3{14} \cdot (1,1,1,1,1,1,1)^t
- \tfrac3{28} \cdot \bigl(\tfrac35\bigr)^n \cdot (0,0,0,3,0,0,2)^t \\
&\quad - \tfrac1{14} \cdot \bigl(\tfrac1{15}\bigr)^n \cdot (-25,10,10,-4,3,3,3)^t
- \tfrac1{28} \cdot \bigl(\tfrac1{25}\bigr)^n \cdot (0,0,0,5,0,0,-6)^t
\end{aligned}
\end{equation}
for $n \geq 1$. In particular, we see that the degree of a corner vertex is $1$ in a random spanning tree of $G_n$ with probability tending to $\frac{11}{14}$, and the degree is $2$ with probability tending to $\frac{3}{14}$.
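As a numerical illustration of \eqref{eq:degprob}, the recursion $\mathbold d_n(h) = \mathbold D \mathbold d_{n-1}(h)$ can also be iterated directly. The following Python sketch (purely illustrative) does so with exact fractions and exhibits the limits $\tfrac{11}{14}$ and $\tfrac3{14}$:
\begin{verbatim}
from fractions import Fraction as F

D = [[F(a, 150) for a in row] for row in [
    [50, 50, 50, 0, 0, 0, 0],
    [25, 25, 25, 0, 0, 75, 0],
    [25, 25, 25, 0, 75, 0, 0],
    [5, 5, 5, 60, 15, 15, 45],
    [30, 30, 30, 0, 45, 15, 0],
    [30, 30, 30, 0, 15, 45, 0],
    [12, 12, 12, 36, 21, 21, 36]]]

def iterate(d, n):
    # apply d_n(h) = D d_{n-1}(h) n times
    for _ in range(n):
        d = [sum(D[i][j] * d[j] for j in range(7)) for i in range(7)]
    return d

starts = {0: (0, 0, 0, 1, 0, 0, 1), 1: (0, 1, 1, 0, 1, 1, 0), 2: (1, 0, 0, 0, 0, 0, 0)}
for h, d0 in starts.items():
    print(h, [float(x) for x in iterate([F(x) for x in d0], 25)])
# entries of d_n(1) approach 11/14 = 0.7857..., entries of d_n(2) approach 3/14 = 0.2142...
\end{verbatim}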
If now $v \in VG_n$ is a vertex of level $k>0$, then there is a unique copy $H$ of $G_{n-k+1}$ in $G_n$ such that $v$ is the midpoint of one of its sides. The degree distribution of $v$ in a random spanning tree $T_n$ now only depends on $k$ and the type of the restriction of $T_n$ to $H$. For example, if $v$ is the midpoint of the horizontal side of $H$, and the restriction is of type \img5, then the probability that $v$ has degree $h$ in $T_n$ is
\begin{multline*}
\frac16 \sum_{\ell=0}^h \bigl(d_{n-k}(\img5,\ell) + d_{n-k}(\img6,\ell) + d_{n-k}(\img7,\ell) \bigl)
d_{n-k}(\img3,h-\ell) \\
+ \frac1{18} \sum_{\ell=0}^h \bigl(d_{n-k}(\img5,\ell) + d_{n-k}(\img6,\ell) + d_{n-k}(\img7,\ell)\bigr) \\
\times \bigl(d_{n-k}(\img5,h-\ell) + d_{n-k}(\img6,h-\ell) + d_{n-k}(\img7,h-\ell)\bigr),
\end{multline*}
where we set $d_n(\cdot,h)=0$ if $h>2$. It follows immediately that for any fixed $k>0$ (or even more generally, if $n-k \to \infty$), the probabilities of the possible degrees $1,2,3,4$ of a level-$k$ vertex converge to
\[ 0, \quad \frac{121}{196}, \quad \frac{33}{98}, \quad \frac{9}{196}, \]
respectively. Intuitively, this means that leaves typically only occur at high levels.
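To see where these values come from, note that every entry of $\mathbold d_n(1)$ tends to $\tfrac{11}{14}$ and every entry of $\mathbold d_n(2)$ tends to $\tfrac3{14}$, so both summands in the expression above converge to one half of the convolution of $p = (0,\tfrac{11}{14},\tfrac3{14})$ with itself. The following short Python check (purely illustrative) confirms the stated limits:
\begin{verbatim}
from fractions import Fraction as F

p = [F(0), F(11, 14), F(3, 14)]   # limiting law of the degree (0, 1 or 2) of a corner vertex
conv = [sum(p[l] * p[h - l] for l in range(3) if 0 <= h - l < 3) for h in range(5)]
print(conv[1:])                   # [0, 121/196, 33/98, 9/196] for the degrees 1, 2, 3, 4
\end{verbatim}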
Let now $W_n(h)$ denote the number of vertices of degree $h$ in a random spanning tree $T_n$. We prove that $3^{-n}W_n(h) \to w(h)$ almost surely, where
\[ w(1) = \frac{10957}{26976}, \quad w(2) = \frac{6626035}{9090912}, \quad
w(3) = \frac{2943139}{9090912}, \quad w(4) = \frac{124895}{3030304}. \]
Fix some $\alpha \in (\sqrt{3},3)$. For any $r \geq 0$, the number of copies of $G_{r+1}$ occurring in $G_n$ is $3^{n-r-1}$. By \eqref{eq:cheby}, the number of such copies which have type \img5, \img6 or \img7 is $\frac{53}{96} \cdot 3^{n-r-1} + O(\alpha^{n-r-1})$ with probability $1 - O((3/\alpha^2)^{n-r-1})$. The same is true for the types \img1, \img2, \img3 and type \img4, with the constant $\frac{53}{96}$ replaced by $\frac{19}{48}$ and $\frac{5}{96}$, respectively. Now the distribution of the degrees of the midpoints in each of the copies of $G_{r+1}$ only depends on the type, and the different copies are pairwise independent. Let $m_r(\img5,h)$ be the expectation of the random variable that counts how many of the three ``midpoints''
\[ \tfrac12(u_2+u_3), \qquad \tfrac12(u_3+u_1), \qquad \tfrac12(u_1+u_2)\]
have degree $h$ in a random spanning forest of type \img5 in $G_r$, and define $m_r(\img6,h)$, etc. analogously. By symmetry,
\begin{align*}
m_r(\img5,h) &= m_r(\img6,h) = m_r(\img7,h), \\
m_r(\img1,h) &= m_r(\img2,h) = m_r(\img3,h).
\end{align*}
By independence and another application of Chebyshev's inequality, we find that the total number of vertices of degree $h$ among all level-$(n-r)$ vertices in a random spanning tree $T_n$ is
\[ 3^{n-r-1} \bigl( \tfrac{53}{96} m_{r+1}(\img5,h)
+ \tfrac{19}{48} m_{r+1}(\img1,h)
+ \tfrac{5}{96} m_{r+1}(\img4,h) \bigr)
+ O(\alpha^{n-r-1}) \]
for any $r \geq 0$ with probability $1 - O((3/\alpha^2)^{n-r-1})$. Since there are only $O(3^{n/2})$ vertices at levels $\leq n/2$, we can safely ignore them, and we obtain that the total number of vertices of degree $h$ in a random spanning tree $T_n$ is
\begin{align*}
W_n(h) &= \sum_{r=0}^{\lfloor n/2 \rfloor} 3^{n-r-1}
\bigl( \tfrac{53}{96} m_{r+1}(\img5,h)
+ \tfrac{19}{48} m_{r+1}(\img1,h)
+ \tfrac{5}{96} m_{r+1}(\img4,h) \bigr)
+ O(\alpha^n) \\
&= 3^n \sum_{r=0}^\infty 3^{-r-1}
\bigl( \tfrac{53}{96} m_{r+1}(\img5,h)
+ \tfrac{19}{48} m_{r+1}(\img1,h)
+ \tfrac{5}{96} m_{r+1}(\img4,h) \bigr)
+ O(\alpha^n)
\end{align*}
with probability $1 - O((3/\alpha^2)^{n/2})$, from which almost sure convergence of $3^{-n}W_n(h)$ follows immediately by the Borel--Cantelli lemma. It remains to find the values of the constants. Let us for instance determine $m_{r+1}(\img5,1)$:
\begin{align*}
m_{r+1}(\img5,1) &= \tfrac29 \bigl(d_r(\img5,1)+d_r(\img6,1)+d_r(\img7,1)\bigr)
\bigl(d_r(\img5,0)+d_r(\img6,0)+d_r(\img7,0)\bigr) \\
&\qquad + \tfrac13 \bigl(d_r(\img5,1)+d_r(\img6,1)+d_r(\img7,1)\bigr)
\bigl(d_r(\img1,0)+d_r(\img2,0)\bigr) \\
&\qquad + \tfrac13 \bigl(d_r(\img5,0)+d_r(\img6,0)+d_r(\img7,0)\bigr)
\bigl(d_r(\img1,1)+d_r(\img2,1)\bigr)
\end{align*}
by the same argument that was used earlier to determine the probabilities of the different degrees. Using \eqref{eq:degprob} we find
\[ m_{r+1}(\img5,1) = \tfrac{1}{1176} \cdot 375^{-r}\cdot (33 \cdot 15^r - 5)^2, \]
and thus
\[ \sum_{r=0}^\infty 3^{-r-1} m_{r+1}(\img5,1) = \frac{49595}{166352}. \]
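Expanding the square in the closed form for $m_{r+1}(\img5,1)$ shows that the summand $3^{-r-1}m_{r+1}(\img5,1)$ equals $\tfrac1{3528}\bigl(1089\,(\tfrac15)^r - 330\,(\tfrac1{75})^r + 25\,(\tfrac1{1125})^r\bigr)$, so the sum is a combination of three geometric series. A short Python check (purely illustrative, exact arithmetic):
\begin{verbatim}
from fractions import Fraction as F

geom = lambda q: F(1) / (1 - q)   # sum of q**r over r >= 0
total = F(1, 3528) * (1089 * geom(F(1, 5)) - 330 * geom(F(1, 75)) + 25 * geom(F(1, 1125)))
print(total)                      # 49595/166352
\end{verbatim}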
All other sums are obtained similarly. It follows that the proportion of vertices of degree $1,2,3,4$ in a random spanning tree $T_n$ converges almost surely to
\begin{gather*}
\frac{10957}{40464} \approx 0.270784, \qquad
\frac{6626035}{13636368} \approx 0.485909, \\[3pt]
\frac{2943139}{13636368} \approx 0.215830, \qquad
\frac{124895}{4545456} \approx 0.0274769,
\end{gather*}
respectively. These constants were already determined in \cite{chang2010structure} as the limits of the mean values, but our arguments show that we even have almost sure convergence.
\section{Loop-erased random walk on Sierpi\'nski graphs}
\label{section:lerw}
This section is devoted to the analysis of loop-erased random walks on Sierpi\'nski graphs and their limit process. Let us first recall some definitions, see for instance \cite{lawler2010random}. Let $G$ be a finite and connected graph. The \emph{(chronological) loop erasure} of a walk $x=(x_0,\dotsc,x_n)$ in $G$ yields a new walk $\qopname\relax o{\mathsf{LE}}(x)$ which is defined as follows:
\begin{itemize}
\item Set $\iota(0) = \max\{j\le n \,:\, x_j = x_0\}$.
\item If $\iota(k) < n$, then set $\iota(k+1) = \max\{j\le n \,:\, x_j = x_{\iota(k)+1}\}$,
otherwise set $\iota(k+1)=n$.
\item If $K=\min\{k \,:\, \iota(k)=n\}$, then $\qopname\relax o{\mathsf{LE}}(x)=(x_{\iota(0)},\dotsc,x_{\iota(K)})$.
\end{itemize}
It is clear from the definition that $\qopname\relax o{\mathsf{LE}}(x)$ is self-avoiding.
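The definition translates directly into code. A minimal Python sketch (purely illustrative) that computes $\qopname\relax o{\mathsf{LE}}(x)$ exactly as described:
\begin{verbatim}
def loop_erase(x):
    """Chronological loop erasure LE(x) of a walk x = (x_0, ..., x_n)."""
    n = len(x) - 1
    i = max(j for j in range(n + 1) if x[j] == x[0])           # iota(0)
    erased = [x[i]]
    while i < n:
        i = max(j for j in range(n + 1) if x[j] == x[i + 1])   # iota(k+1)
        erased.append(x[i])
    return erased

# example: loop_erase([0, 1, 2, 1, 3, 2, 4]) returns the self-avoiding walk [0, 1, 3, 2, 4]
\end{verbatim}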
\emph{Simple random walk} $(X_n)_{n\ge0}$ on a finite and connected graph $G$ is a Markov chain with state space $VG$ and transition probabilities $p(x,y)$ from vertex $x$ to vertex $y$ given by
\[ p(x,y) = \begin{cases}
\frac{1}{\deg x} & \text{if $x$ and $y$ are adjacent,} \\
0 & \text{otherwise.}
\end{cases} \]
For any $B\subseteq VG$, the \emph{hitting time} $\mathsf h(B)$ is given by
\[ \mathsf h(B) = \inf\{ n \,:\, X_n \in B \}. \]
Since $G$ is finite and connected, the hitting time $\mathsf h(B)$ is almost surely finite. Fix a vertex $x\in VG$ and some set $B\subseteq VG$ with $x\notin B$ and consider simple random walk $(X_n)_{n\ge0}$ starting at $x$. The random self-avoiding walk $\qopname\relax o{\mathsf{LE}}((X_n)_{0\le n\le\mathsf h(B)})$ is called \emph{loop-erased random walk} from $x$ to $B$. Figure~\ref{figure:walk} shows instances of loop-erased random walks from one corner vertex to another on $G_5$ and $G_8$, respectively. The aim of this and the following section is to study some of the properties of loop-erased walk on $G_n$ and its limit process.
\begin{figure}[htb]
\centering
\includegraphics[width=6cm]{sample-LERW5}\qquad
\includegraphics[width=6cm]{sample-LERW8}
\caption{Instances of loop-erased random walk on $G_5$ (left) and $G_8$ (right).}
\label{figure:walk}
\end{figure}
Uniform spanning trees and loop-erased random walk are closely related concepts. A particular application of this connection is \emph{Wilson's algorithm}~\cite{wilson1996generating}, which is an efficient method for sampling uniform spanning trees of a graph $G$. Fix some ordering of the vertex set $VG$, and let $\{(X_n^x)_{n\ge0} \,:\, x\in VG\}$ be a family of independent simple random walks on $G$, where $(X_n^x)_{n\ge0}$ starts at $x$. We define a sequence $T_0,T_1,\dotsc$ of random subtrees of $G$ as follows:
\begin{itemize}
\item $T_0$ consists of the least vertex (according to the selected ordering) in $G$ only.
\item If $T_k$ does not contain all vertices of $G$, let $x$ be the least vertex in $VG\setminus VT_k$ and define
\[ T_{k+1} = T_k \cup \qopname\relax o{\mathsf{LE}}\bigl((X_n^x)_{0\le n\le\mathsf h(VT_k)}\bigr). \]
If $T_k$ is already spanning, then set $T_{k+1}=T_k$.
\end{itemize}
By construction there is a minimal (random) index $K$ (at most $\card{VG}$) such that $T_K=T_{K+1}$. Then $T_K$ is a uniform spanning tree of $G$. This idea can be reversed: suppose that $T$ is a uniform spanning tree of $G$, and fix two vertices $x,y\in VG$. The random self-avoiding walk $xTy$ turns out to have precisely the same distribution as a loop-erased random walk from $x$ to $y$: this is easy to see from Wilson's algorithm if we assume that $x$ and $y$ are the least and second-least vertices in our ordering.
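For concreteness, here is a compact Python sketch of Wilson's algorithm as just described, reusing the function \texttt{loop\_erase} from the sketch above (purely illustrative; the adjacency-list representation of $G$ is an assumption of this sketch):
\begin{verbatim}
import random

def wilson_ust(adj, order):
    """Sample a uniform spanning tree of a finite connected graph.
    adj[v] lists the neighbours of v; order fixes the vertex ordering."""
    in_tree = {order[0]}                       # T_0 contains only the least vertex
    edges = []
    for v in order[1:]:
        if v in in_tree:
            continue
        walk = [v]                             # simple random walk started at v ...
        while walk[-1] not in in_tree:         # ... run until it hits the current tree
            walk.append(random.choice(adj[walk[-1]]))
        branch = loop_erase(walk)              # attach its loop erasure to the tree
        edges.extend(zip(branch[:-1], branch[1:]))
        in_tree.update(branch)
    return edges
\end{verbatim}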
In the following we use this connection to study loop-erased random walk on Sierpi\'nski graphs $G_n$ in more detail. For example, if $T_n$ is a uniformly chosen spanning tree of $G_n$, then $u_1T_nu_2$ is a loop-erased random walk in $G_n$ from $u_1$ to $u_2$. The description of $T_\infty$ as a labelled multi-type Galton-Watson tree can be extended to describe the evolution of the loop-erased random walks $u_1T_0u_2, u_1T_1u_2, \dotsc$ by a labelled multi-type Galton-Watson tree with twelve types, which capture not only the structure of the spanning tree, but also the unique path between two corner vertices.
The set $\mathcal{\bar C} = \{\conn55, \conn52, \conn53, \conn61, \conn66, \conn63, \conn71, \conn72, \conn77, \conn11, \conn22, \conn33 \}$ encodes the twelve possible types (in a rather obvious way). Fix $k\in\{1,2,3\}$ and let $v,v'$ be the two vertices in $VG_0$ different from $u_k$. Let $f$ be an element in $\mathcal Q_\infty$, so that $v,v'$ are in the same component of the spanning forest $\qopname\relax o{\mathsf{Tr}}^\infty_0 f$. Then $v,v'$ are in the same component of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$ for any $n\ge0$. For $n\ge0$ consider those $n$\nobreakdash-\hskip0pt parts of $G_n$ which contain at least one edge of the self-avoiding walk $v(\qopname\relax o{\mathsf{Tr}}^\infty_n f)v'$, and let
\[ W_n(f,k) = \bigl\{ w\in\mathbb W^n \,:\, E(v(\qopname\relax o{\mathsf{Tr}}^\infty_n f)v') \cap \psi_w(EG_0) \ne \emptyset \bigr\} \]
be the addresses of these $n$\nobreakdash-\hskip0pt parts. Notice that $W_n(f,k)$ is naturally ordered by the fact that $v(\qopname\relax o{\mathsf{Tr}}^\infty_n f)v'$ walks along the $n$\nobreakdash-\hskip0pt parts $\psi_w(G_0)$ with $w\in W_n(f,k)$. Furthermore,
\[ W(f,k) = \bigcup_{n\ge0} W_n(f,k) \]
induces a subtree of $\mathbb W^*$, where each word in $W(f,k)$ has two or three children. Of course, $\chi_w(f) \in \{\img5,\img6,\img7,\img1,\img2,\img3\}$ for any word $w\in W(f,k)$ (the walk has to enter and leave an $n$\nobreakdash-\hskip0pt part at a corner, which is only possible if at least two of the corners are connected). Moreover, for any $w\in W_n(f,k)$, the restriction of $E(v(\qopname\relax o{\mathsf{Tr}}^\infty_n f)v')$ to $\psi_w(EG_0)$ consists of one edge $e=\{x,x'\}$ or two incident edges $e=\{x,y\}$ and $e'=\{y,x'\}$ for some $x,x',y\in\psi_w(VG_0)$. Define $\bar\kappa_w(f,k)$ to be the unique $i\in\{1,2,3\}$ such that $\psi_w(u_i) \neq x,x'$. We encode the two bits of information given by $\chi_w(f)$ and $\bar\kappa_w(f,k)$ by one of the twelve types in $\mathcal{\bar C}$ in a natural way. Write $\bar\chi_w(f,k)$ to denote this type of the $n$\nobreakdash-\hskip0pt part $\psi_w(G_0)$ induced by $f$ and $k$, and set
\[ \mathbold{\bar\chi}(f,k) = (\bar\chi_w(f,k))_{w\in W(f,k)}. \]
For example, $\bar\chi_w(f,k)=\conn55$ if $\chi_w(f)=\img5$ and $\bar\kappa_w(f,k)=1$. Other types are assigned accordingly, see Table~\ref{table:conntypes}.
\begin{table}[htb]
\caption{The type $\bar\chi_w(f,k)$, given $\chi_w(f)$ and $\bar\kappa_w(f,k)$.}
\label{table:conntypes}
\centering
\begin{tabular}{@{}*{7}{c}@{}}
\toprule
& \multicolumn{6}{c}{$\chi_w(f)$} \\
\cmidrule(l){2-7}
$\bar\kappa_w(f,k)$ %
& \img5 & \img6 & \img7 & \img1 & \img2 & \img3 \\
\cmidrule(r){1-1} \cmidrule(l){2-7}
$1$ & \conn55 & \conn61 & \conn71 & \conn11 & & \\
$2$ & \conn52 & \conn66 & \conn72 & & \conn22 & \\
$3$ & \conn53 & \conn63 & \conn77 & & & \conn33 \\
\bottomrule
\end{tabular}
\end{table}
In order to reconstruct the self-avoiding walk $v(\qopname\relax o{\mathsf{Tr}}^\infty_n f)v'$ from $\mathbold{\bar\chi}(f,k)$, let $\bar\eta$ be the map from $\mathcal{\bar C}$ to the set of subgraphs of $G_0$ defined in Table~\ref{table:sub2}. Then
\[ v(\qopname\relax o{\mathsf{Tr}}^\infty_n f)v' = \bigcup_{w\in W_n(f,k)} \psi_w(\bar\eta(\bar\chi_w(f,k))). \]
It is noteworthy that in general $\mathbold{\bar\chi}(f,k)$ contains more information than all the self-avoiding walks $v(\qopname\relax o{\mathsf{Tr}}^\infty_n f)v'$ for $n\ge0$ (since it also contains additional structural information on the underlying spanning tree).
\begin{table}[htb]
\caption{The mappings $\bar\eta$ and $\bar\nu$.}
\label{table:sub2}
\centering
\begin{tabular}{@{}*{13}{c}@{}}
\toprule
$x$ %
& \conn61 & \conn71 %
& \conn52 & \conn72 %
& \conn53 & \conn63 %
& \conn55 & \conn66 & \conn77
& \conn11 & \conn22 & \conn33 \\
\midrule
$\bar\eta(x)$ %
& \tree nyyynn & \tree nyyynn %
& \tree ynynyn & \tree ynynyn %
& \tree yynnny & \tree yynnny %
& \tree yyynyy & \tree yyyyny & \tree yyyyyn %
& \tree nyyynn & \tree ynynyn & \tree yynnny \\
$\bar\nu(x)$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\
\bottomrule
\end{tabular}
\end{table}
Last but not least, let $\bar\nu$ be the bijection from $\mathcal{\bar C}$ to $\{1,\dotsc,12\}$ given by Table~\ref{table:sub2}. In analogy to the previous section, we define the type-counting functions $\bar\chi^{\scriptscriptstyle\#}_{i,n}(f,k) = \card{\{w\in W_n(f,k) \,:\, \bar\nu(\bar\chi_w(f,k)) = i\}}$ and
\[ \mathbold{\bar\chi}^{\scriptscriptstyle\#}_n(f,k) = \bigl(\bar\chi^{\scriptscriptstyle\#}_{1,n}(f,k), \dotsc, \bar\chi^{\scriptscriptstyle\#}_{12,n}(f,k)\bigr) \]
for $i\in\{1,\dotsc,12\}$ and $n\ge0$.
\begin{proposition}\label{proposition:tree2}
Let $\mathcal U_\infty$ be one of $\mathcal T_\infty^{}$, $\mathcal T_\infty^i$, or $\mathcal S_\infty^i$ for $i\in\{1,2,3\}$, and let $U_\infty$ be the corresponding random object. Let $k\in\{1,2,3\}$, and assume that $\qopname\relax o{\mathsf{Tr}}^\infty_0 U_\infty$ connects the two vertices in $VG_0\setminus\{u_k\}$.
\begin{enumerate}[\normalfont(1)]
\item The random tree
\[ \mathbold{\bar\chi}(U_\infty,k) = (\bar\chi_w(U_\infty,k))_{w\in W(U_\infty,k)} \]
is a labelled multi-type Galton-Watson tree with labels in $\mathbb W^*$ and types in $\mathcal{\bar C}$.
The type distribution of the root is given by the uniform distribution
$\qopname\relax o{\mathsf{Unif}}\{\bar\chi_\emptyword(f,k) \,:\, f\in\mathcal U_\infty\}$.
Its offspring generation is given in Table~\ref{table:conngen}.
\item $(\mathbold{\bar\chi}^{\scriptscriptstyle\#}_n(U_\infty,k))_{n\ge0}$ is a multi-type Galton-Watson process with twelve types,
which is non-singular, positively regular, and supercritical.
Using the abbreviations $s_1=\frac13(z_1+z_2+z_7)$, $s_2=\frac13(z_3+z_4+z_8)$, and $s_3=\frac13(z_5+z_6+z_9)$,
the offspring generating function is given by
\begin{align*}
\mathbold{\bar f}(\mathbold z) = \Bigl(
&\tfrac12 s_1(s_1+z_{10}),\;
\tfrac12 s_1(s_1+z_{10}), \vphantom{\Big(}\\
&\tfrac12 s_2(s_2+z_{11}),\;
\tfrac12 s_2(s_2+z_{11}), \vphantom{\Big(}\\
&\tfrac12 s_3(s_3+z_{12}),\;
\tfrac12 s_3(s_3+z_{12}), \vphantom{\Big(}\\
&\tfrac12 s_1(s_3z_{11}+s_2z_{12}),\;
\tfrac12 s_2(s_3z_{10}+s_1z_{12}),\;
\tfrac12 s_3(s_2z_{10}+s_1z_{11}), \vphantom{\Big(}\\
&\tfrac1{10} \bigl(3s_1^2 + 4s_1z_{10} + z_{10}(z_{10} + s_3z_{11} + s_2z_{12})\bigr), \vphantom{\Big(}\\
&\tfrac1{10} \bigl(3s_2^2 + 4s_2z_{11} + z_{11}(s_3z_{10} + z_{11} + s_1z_{12})\bigr), \vphantom{\Big(}\\
&\tfrac1{10} \bigl(3s_3^2 + 4s_3z_{12} + z_{12}(s_2z_{10} + s_1z_{11} + z_{12})\bigr) \Bigr).
\end{align*}
Its mean matrix $\mathbold{\bar M}$ is
\[ \mathbold{\bar M}
= \frac1{30} \begin{pmatrix}
15 & 15 & 0 & 0 & 0 & 0 & 15 & 0 & 0 & 15 & 0 & 0 \\
15 & 15 & 0 & 0 & 0 & 0 & 15 & 0 & 0 & 15 & 0 & 0 \\
0 & 0 & 15 & 15 & 0 & 0 & 0 & 15 & 0 & 0 & 15 & 0 \\
0 & 0 & 15 & 15 & 0 & 0 & 0 & 15 & 0 & 0 & 15 & 0 \\
0 & 0 & 0 & 0 & 15 & 15 & 0 & 0 & 15 & 0 & 0 & 15 \\
0 & 0 & 0 & 0 & 15 & 15 & 0 & 0 & 15 & 0 & 0 & 15 \\
10 & 10 & 5 & 5 & 5 & 5 & 10 & 5 & 5 & 0 & 15 & 15 \\
5 & 5 & 10 & 10 & 5 & 5 & 5 & 10 & 5 & 15 & 0 & 15 \\
5 & 5 & 5 & 5 & 10 & 10 & 5 & 5 & 10 & 15 & 15 & 0 \\
10 & 10 & 1 & 1 & 1 & 1 & 10 & 1 & 1 & 24 & 3 & 3 \\
1 & 1 & 10 & 10 & 1 & 1 & 1 & 10 & 1 & 3 & 24 & 3 \\
1 & 1 & 1 & 1 & 10 & 10 & 1 & 1 & 10 & 3 & 3 & 24
\end{pmatrix}, \]
whose dominating eigenvalue $\bar\alpha$ is
$\frac43+\frac1{15}\sqrt{205} \approx 2.287855$.
The corresponding right and left eigenvectors are
\begin{align*}
\mathbold{\bar v}_R
&= (a_1,a_1,a_1,a_1,a_1,a_1,a_2,a_2,a_2,a_3,a_3,a_3)^t, \\
\mathbold{\bar v}_L
&= (a_4,a_4,a_4,a_4,a_4,a_4,a_4,a_4,a_4,a_5,a_5,a_5),
\end{align*}
where
\begin{gather*}
a_1 = \tfrac{11}{26} + \tfrac{17}{533}\sqrt{205}, \qquad
a_2 = \tfrac{17}{26} + \tfrac{49}{1066}\sqrt{205}, \qquad
a_3 = \tfrac12 + \tfrac{13}{410}\sqrt{205}, \\[4pt]
a_4 = \tfrac1{18}\sqrt{205} - \tfrac{13}{18}, \qquad
a_5 = \tfrac52 - \tfrac16\sqrt{205}.
\end{gather*}
The vectors $\mathbold{\bar v}_R$ and $\mathbold{\bar v}_L$ are normalized
so that $\mathbold{\bar v}_L \cdot \mathbold{\bar v}_R = 1$ and $\norm{\mathbold{\bar v}_L}_1=1$.
\item There is a non-negative random variable $\bar\theta(U_\infty,k)$ such that
\[ \bar\alpha^{-n} \mathbold{\bar\chi}^{\scriptscriptstyle\#}_n(U_\infty^{},k) \to
\mathbold{\bar v}_L \bar\theta(U_\infty,k) \]
almost surely. The distribution of $\bar\theta(U_\infty,k)$ has a continuous density function,
which is strictly positive on the set of positive reals and zero elsewhere.
In particular, $\bar\theta(U_\infty,k)$ is almost surely positive.
By symmetry, there are four different limit distributions, one for each of the following groups:
\begin{gather*}
\{ \bar\theta(T_\infty,1), \bar\theta(T_\infty,2), \bar\theta(T_\infty,3) \}, \\
\{ \bar\theta(T_\infty^1,2), \bar\theta(T_\infty^1,3),
\bar\theta(T_\infty^2,1), \bar\theta(T_\infty^2,3),
\bar\theta(T_\infty^3,1), \bar\theta(T_\infty^3,2) \}, \\
\{ \bar\theta(T_\infty^1,1), \bar\theta(T_\infty^2,2), \bar\theta(T_\infty^3,3) \}, \\
\{ \bar\theta(S_\infty^1,1), \bar\theta(S_\infty^2,2), \bar\theta(S_\infty^3,3) \}.
\end{gather*}
We write $\bar\theta_0, \bar\theta_1, \bar\theta_2, \bar\theta_3$ for random variables
having the same distribution as a random variable in the respective group (ordered as above).
Their expected values are $\qopname\relax o{\mathbb{E}}(\bar\theta_0)=\frac23 a_1+\frac13 a_2$,
$\qopname\relax o{\mathbb{E}}(\bar\theta_1)=a_1$, $\qopname\relax o{\mathbb{E}}(\bar\theta_2)=a_2$ and $\qopname\relax o{\mathbb{E}}(\bar\theta_3)=a_3$, respectively.
Moreover, $\qopname\relax o{\mathbb{P}}_{\bar\theta_0} = \frac23 \qopname\relax o{\mathbb{P}}_{\bar\theta_1} + \frac13 \qopname\relax o{\mathbb{P}}_{\bar\theta_2}$.
\end{enumerate}
\end{proposition}
\begin{table}[htb]
\caption{Offspring generation of $\mathbold{\bar\chi}(U_\infty,k)$ for three types.
The remaining types are obtained by symmetry taking suffixes into account.}
\label{table:conngen}
\centering
\begin{tabular}{@{}clc@{}}
\toprule
Type & \multicolumn{1}{c}{Offspring types} & Probability \\
& \multicolumn{1}{c}{with suffixes $(1,2)$ or $(1,2,3)$} & \\
\midrule
\multirow{4}{10pt}{\conn53}
& $(\conn53,\conn33)$, $(\conn63,\conn33)$, $(\conn77,\conn33)$ & \multirow{1}{7pt}{$\tfrac16$} \\
\cmidrule(){2-3}
& $(\conn53,\conn53)$, $(\conn53,\conn63)$, $(\conn53,\conn77)$, & \multirow{3}{11pt}{$\tfrac1{18}$} \\
& $(\conn63,\conn53)$, $(\conn63,\conn63)$, $(\conn63,\conn77)$, & \\
& $(\conn77,\conn53)$, $(\conn77,\conn63)$, $(\conn77,\conn77)$ & \\
\midrule
\multirow{6}{10pt}{\conn77}
& $(\conn22,\conn55,\conn53)$, $(\conn22,\conn55,\conn63)$, $(\conn22,\conn55,\conn77)$, %
& \multirow{6}{11pt}{$\tfrac1{18}$} \\
& $(\conn22,\conn61,\conn53)$, $(\conn22,\conn61,\conn63)$, $(\conn22,\conn61,\conn77)$, & \\
& $(\conn22,\conn71,\conn53)$, $(\conn22,\conn71,\conn63)$, $(\conn22,\conn71,\conn77)$, & \\
& $(\conn52,\conn11,\conn53)$, $(\conn52,\conn11,\conn63)$, $(\conn52,\conn11,\conn77)$, & \\
& $(\conn66,\conn11,\conn53)$, $(\conn66,\conn11,\conn63)$, $(\conn66,\conn11,\conn77)$, & \\
& $(\conn72,\conn11,\conn53)$, $(\conn72,\conn11,\conn63)$, $(\conn72,\conn11,\conn77)$ & \\
\midrule
\multirow{8}{10pt}{\conn33}
& $(\conn52,\conn11,\conn33)$, $(\conn66,\conn11,\conn33)$, $(\conn72,\conn11,\conn33)$, %
& \multirow{5}{11pt}{$\tfrac1{30}$} \\
& $(\conn22,\conn55,\conn33)$, $(\conn22,\conn61,\conn33)$, $(\conn22,\conn71,\conn33)$, & \\
& $(\conn53,\conn53)$, $(\conn53,\conn63)$, $(\conn53,\conn77)$, & \\
& $(\conn63,\conn53)$, $(\conn63,\conn63)$, $(\conn63,\conn77)$, & \\
& $(\conn77,\conn53)$, $(\conn77,\conn63)$, $(\conn77,\conn77)$ & \\
\cmidrule(){2-3}
& $(\conn53,\conn33)$, $(\conn63,\conn33)$, $(\conn77,\conn33)$, & \multirow{2}{11pt}{$\tfrac1{15}$} \\
& $(\conn33,\conn53)$, $(\conn33,\conn63)$, $(\conn33,\conn77)$ & \\
\cmidrule(){2-3}
& $(\conn33,\conn33)$ & \multirow{1}{11pt}{$\tfrac1{10}$} \\
\bottomrule
\end{tabular}
\end{table}
\begin{proof}
The first part of this result follows from Proposition~\ref{proposition:tree1}. The second is a consequence of the first: the details are not difficult to verify. For the last part, see Theorem~1.8.2 and Theorem~1.9.1 in \cite{mode1971multitype}.
\end{proof}
\begin{remark}\label{remark:coll2}
Similarly to Remark~\ref{remark:coll1}, we can collapse three groups of types into new types:
\begin{itemize}
\item \conn55, \conn61, \conn71 become \conn01,
\item \conn52, \conn66, \conn72 become \conn02,
\item \conn53, \conn63, \conn77 become \conn03.
\end{itemize}
Fix again some $k\in\{1,2,3\}$, and let $f\in\mathcal Q_\infty$ be such that the vertices in $VG_0\setminus\{u_k\}$ are in the same component of $\qopname\relax o{\mathsf{Tr}}^\infty_0 f$. Now for $w\in W(f,k)$, set
\[ \tilde\chi_w(f,k)
= \begin{cases}
\conn01 & \text{if } \bar\chi_w(f,k) \in \{\conn55,\conn61,\conn71\}, \\
\conn02 & \text{if } \bar\chi_w(f,k) \in \{\conn52,\conn66,\conn72\}, \\
\conn03 & \text{if } \bar\chi_w(f,k) \in \{\conn53,\conn63,\conn77\}, \\
\bar\chi_w(f,k) & \text{otherwise},
\end{cases} \]
and $\mathbold{\tilde\chi}(f,k) = (\tilde\chi_w(f,k))_{w\in W(f,k)}$. If $U_\infty$ is now one of $T_\infty^{}$, $T_\infty^i$, $S_\infty^i$ for $i\in\{1,2,3\}$, so that the vertices in $VG_0\setminus\{u_k\}$ are in the same component of $\qopname\relax o{\mathsf{Tr}}^\infty_0 U_\infty$, then the random tree $\mathbold{\tilde\chi}(U_\infty,k)$ is a labelled multi-type Galton-Watson tree with types in $\{\conn01,\conn02,\conn03,\conn11,\conn22,\conn33\}$.
In order to sample a loop-erased random walk in $G_n$ from $u_1$ to $u_2$, we can simulate the $n$\nobreakdash-\hskip0pt th generation of $\mathbold{\bar\chi}(T_\infty,3)$. First, we have to choose one of \conn53, \conn63, \conn77 with equal probability as the type of the ancestor $\emptyword$. As in Remark~\ref{remark:coll1} we may postpone this choice to the $n$\nobreakdash-\hskip0pt th generation. To do so, consider the $n$\nobreakdash-\hskip0pt th generation of the simplified tree $\mathbold{\tilde\chi}(T_\infty,3)$. Independently replace each occurrence of
\begin{itemize}
\item \conn01 by one of \conn55, \conn61, \conn71,
\item \conn02 by one of \conn52, \conn66, \conn72,
\item \conn03 by one of \conn53, \conn63, \conn77,
\end{itemize}
always with equal probabilities. Then the modified $n$\nobreakdash-\hskip0pt th generation of $\mathbold{\tilde\chi}(T_\infty,3)$ describes a loop-erased random walk in $G_n$ from $u_1$ to $u_2$.
\end{remark}
\begin{remark}\label{remark:count}
We set
\[ \bar c(x) = \begin{cases}
1 & \text{if } x \in \{\conn61,\conn71,\conn52,\conn72,\conn53,\conn63\}, \\
2 & \text{if } x \in \{\conn55,\conn66,\conn77\}, \\
3 & \text{if } x \in \{\conn11,\conn22,\conn33\},
\end{cases} \]
and once again, we introduce type counters: for $i\in\{1,2,3\}$ and $n\ge0$, define $\bar c^{\scriptscriptstyle\#}_{i,n}(f,k) = \card{\{w\in W_n(f,k) \,:\, \bar c(\bar\chi_w(f,k))=i\}}$ and
\begin{align*}
\mathbold{\bar c}^{\scriptscriptstyle\#}_n(f,k)
&= \bigl(\bar c^{\scriptscriptstyle\#}_{1,n}(f,k), \bar c^{\scriptscriptstyle\#}_{2,n}(f,k), \bar c^{\scriptscriptstyle\#}_{3,n}(f,k)\bigr), \\
\mathbold{\tilde c}^{\scriptscriptstyle\#}_n(f,k)
&= \bigl(\bar c^{\scriptscriptstyle\#}_{1,n}(f,k) + \bar c^{\scriptscriptstyle\#}_{2,n}(f,k), \bar c^{\scriptscriptstyle\#}_{3,n}(f,k)\bigr).
\end{align*}
Then $\mathbold{\bar c}^{\scriptscriptstyle\#}_n(f,k)$ and $\mathbold{\tilde c}^{\scriptscriptstyle\#}_n(f,k)$ count the occurrences of types up to symmetry in the $n$\nobreakdash-\hskip0pt th generation of $\mathbold{\bar\chi}(f,k)$ and $\mathbold{\tilde\chi}(f,k)$, respectively.
For a random object $U_\infty$ (one of $T_\infty^{}$, $T_\infty^i$, $S_\infty^i$) and suitable $k$, $(\mathbold{\bar c}^{\scriptscriptstyle\#}_n(U_\infty,k))_{n\ge0}$ and $(\mathbold{\tilde c}^{\scriptscriptstyle\#}_n(U_\infty,k))_{n\ge0}$ are multi-type Galton-Watson processes with offspring generating functions
\begin{equation}\label{eq:gdef}
\mathbold{\bar g}(z_1,z_2,z_3) = \bigl(
\tfrac12 s(s+z_3), \, s^2 z_3, \, \tfrac3{10} s^2 + \tfrac15 sz_3(2+z_3) + \tfrac1{10} z_3^2 \bigr),
\end{equation}
where $s=\frac23z_1+\frac13z_2$, and
\begin{equation}\label{eq:tildegdef}
\mathbold{\tilde g}(z_1,z_2) = \bigl(
\tfrac13 z_1(z_1+z_2+z_1z_2), \, \tfrac3{10} z_1^2 + \tfrac15 z_1z_2(2+z_2) + \tfrac1{10} z_2^2 \bigr),
\end{equation}
respectively. If we set
\begin{equation}\label{eq:sigmadef}
\mathbold\Sigma(z_1,z_2,z_3)
= \bigl(\qopname\relax o{\mathsf{PGF}}(\mathbold{\bar c}^{\scriptscriptstyle\#}_0(T_\infty^{},k),\mathbold z),
\qopname\relax o{\mathsf{PGF}}(\mathbold{\bar c}^{\scriptscriptstyle\#}_0(S_\infty^k,k),\mathbold z)\bigr)
= \bigl(\tfrac23 z_1 + \tfrac13 z_2, z_3\bigr),
\end{equation}
then $\mathbold\Sigma\circ\mathbold{\bar g} = \mathbold{\tilde g}\circ\mathbold\Sigma$. Note also that $\mathbold{\bar c}^{\scriptscriptstyle\#}_n(U_\infty,k)$ and $\mathbold{\tilde c}^{\scriptscriptstyle\#}_n(U_\infty,k)$ depend linearly on $\mathbold{\bar\chi}^{\scriptscriptstyle\#}_n(U_\infty,k)$, hence Proposition~\ref{proposition:tree2} implies
\begin{align*}
\bar\alpha^{-n}\mathbold{\bar c}^{\scriptscriptstyle\#}_n(T_\infty^{},k) &\to (6a_4,3a_4,3a_5) \bar\theta(T_\infty^{},k), &
\bar\alpha^{-n}\mathbold{\bar c}^{\scriptscriptstyle\#}_n(S_\infty^k,k) &\to (6a_4,3a_4,3a_5) \bar\theta(S_\infty^k,k), \\
\bar\alpha^{-n}\mathbold{\tilde c}^{\scriptscriptstyle\#}_n(T_\infty^{},k) &\to (9a_4,3a_5) \bar\theta(T_\infty^{},k), &
\bar\alpha^{-n}\mathbold{\tilde c}^{\scriptscriptstyle\#}_n(S_\infty^k,k) &\to (9a_4,3a_5) \bar\theta(S_\infty^k,k)
\end{align*}
almost surely.
\end{remark}
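The intertwining relation $\mathbold\Sigma\circ\mathbold{\bar g} = \mathbold{\tilde g}\circ\mathbold\Sigma$ is also easy to verify symbolically; the following SymPy sketch (purely illustrative) confirms it:
\begin{verbatim}
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
s = sp.Rational(2, 3) * z1 + sp.Rational(1, 3) * z2

# offspring generating functions from (eq:gdef) and (eq:tildegdef)
g_bar = (s * (s + z3) / 2,
         s**2 * z3,
         sp.Rational(3, 10) * s**2 + s * z3 * (2 + z3) / 5 + z3**2 / 10)
g_tilde = lambda a, b: (a * (a + b + a * b) / 3,
                        sp.Rational(3, 10) * a**2 + a * b * (2 + b) / 5 + b**2 / 10)
Sigma = lambda a, b, c: (sp.Rational(2, 3) * a + sp.Rational(1, 3) * b, c)

lhs = Sigma(*g_bar)
rhs = g_tilde(*Sigma(z1, z2, z3))
print([sp.simplify(l - r) for l, r in zip(lhs, rhs)])   # [0, 0]
\end{verbatim}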
\begin{remark}\label{remark:laplace}
Using the previous remark, it is possible to describe the distribution of $\bar\theta_0,\bar\theta_1,\bar\theta_2,\bar\theta_3$. Let
\[ \mathbold{\bar\phi}(z) = (\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_1}),\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_2}),\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_3}))
\qquad\text{and}\qquad
\mathbold{\tilde\phi}(z) = (\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_0}),\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_3})) \]
be the moment generating functions of $(\bar\theta_1,\bar\theta_2,\bar\theta_3)$ and $(\bar\theta_0,\bar\theta_3)$, respectively. These two functions exist at least for $z\in\mathbb C$ with $\operatorname{Re}(z)\le0$. Furthermore, it is well known that
\[ \mathbold{\bar\phi}(\bar\alpha z) = \mathbold{\bar g}(\mathbold{\bar\phi}(z))
\qquad\text{and}\qquad
\mathbold{\tilde\phi}(\bar\alpha z) = \mathbold{\tilde g}(\mathbold{\tilde\phi}(z)) \]
hold whenever both sides are finite, see for instance Theorem~1.8.1 of \cite{mode1971multitype}. Since $\mathbold{\bar g}$ and $\mathbold{\tilde g}$ are both polynomials, the moment generating functions $\mathbold{\bar\phi}$ and $\mathbold{\tilde\phi}$ exist for all $z\in\mathbb C$ and are entire functions, see \cite{poincare1890classe}. Furthermore, by iterating the offspring generating function it is possible to approximate the densities of $\bar\theta_0,\bar\theta_1,\bar\theta_2,\bar\theta_3$, see Figure~\ref{figure:density}.
\end{remark}
\begin{figure}[htb]
\def\plotdensity#1{%
\subfloat[density of $\bar\theta_#1$]{
\begin{tikzpicture}[scale=1.2]
\scope
\clip (0,0) rectangle (3.0,2.5);
\draw plot[smooth] file {dens#1.tab};
\endscope
\draw[very thin,->] (0,0) -- (3.2,0);
\draw[very thin,->] (0,0) -- (0,2.7);
\foreach \x in {0,0.5,...,3.0} { \draw[very thin] (\x,-0.05) node[below,font=\tiny] {$\x$} -- (\x,0.05); }
\foreach \y in {0,0.5,...,2.5} { \draw[very thin] (-0.05,\y) node[left,font=\tiny] {$\y$} -- (0.05,\y); }
\end{tikzpicture}}}
\centering
\plotdensity0
\qquad
\plotdensity1
\\[5pt]
\plotdensity2
\qquad
\plotdensity3
\caption{A plot of the densities of $\bar\theta_i$ for $i\in\{0,1,2,3\}$. The densities are approximated by $n=7$ iterations of the offspring generating function $\bar g$.}
\label{figure:density}
\end{figure}
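The limit laws can also be explored by direct simulation. Since the entries of $\mathbold{\tilde c}^{\scriptscriptstyle\#}_n$ sum to the same total as those of $\mathbold{\bar\chi}^{\scriptscriptstyle\#}_n$ and $\norm{\mathbold{\bar v}_L}_1=1$, the rescaled total size of the $n$\nobreakdash-\hskip0pt th generation of the collapsed two-type process of Remark~\ref{remark:count} converges almost surely to $\bar\theta_0$ when the root has the first collapsed type. The following Python sketch (a purely illustrative Monte Carlo; the offspring probabilities are read off \eqref{eq:tildegdef}) produces approximate samples of $\bar\theta_0$, whose sample mean should be close to $\qopname\relax o{\mathbb{E}}(\bar\theta_0)=\frac23 a_1+\frac13 a_2\approx1.024$:
\begin{verbatim}
import math
import random

ALPHA = 4 / 3 + math.sqrt(205) / 15      # dominating eigenvalue

# offspring distributions of the collapsed two-type process (read off eq:tildegdef)
OFFSPRING = {
    1: ([(2, 0), (1, 1), (2, 1)], [1 / 3, 1 / 3, 1 / 3]),
    2: ([(2, 0), (1, 1), (1, 2), (0, 2)], [3 / 10, 2 / 5, 1 / 5, 1 / 10]),
}

def sample_theta0(n=10):
    """Approximate sample of theta_0: rescaled total size of generation n,
    started from a single individual of the first collapsed type."""
    pop = {1: 1, 2: 0}
    for _ in range(n):
        new = {1: 0, 2: 0}
        for t, count in pop.items():
            outcomes, weights = OFFSPRING[t]
            for c1, c2 in random.choices(outcomes, weights, k=count):
                new[1] += c1
                new[2] += c2
        pop = new
    return (pop[1] + pop[2]) / ALPHA ** n

samples = [sample_theta0() for _ in range(500)]
print(sum(samples) / len(samples))       # close to E(theta_0), roughly 1.024
\end{verbatim}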
In the following lemma, we prove some estimates for the moment generating functions of $\bar\theta_0,\dotsc,\bar\theta_3$, which lead to estimates for the tails of the distributions. Let us remark that there exist general results concerning tail probabilities (see for instance Jones~\cite{jones2004large}), but our situation does not satisfy the necessary conditions of these results. Thus we follow the arguments in \cite[Proposition~3.1]{barlow1988brownian} and \cite[Proposition~4.2]{kumagai1993construction}. Let the constants $\bar\gamma_\ell$ and $\bar\gamma_r$ be defined by
\[ \bar\gamma_\ell = \frac{\log2}{\log\bar\alpha} \approx 0.837524 \qquad\text{and}\qquad
\bar\gamma_r = \frac{\log3}{\log\bar\alpha} \approx 1.32744. \]
Thus $\bar\gamma_\ell/(1-\bar\gamma_\ell)\approx5.154759$ and $\bar\gamma_r/(\bar\gamma_r-1)\approx4.053954$. These constants play an important role in the following lemma:
\begin{lemma}\label{lemma:mg-bounds}
There are constants $C_{1,\ell},C_{2,\ell} > 0$ such that
\[ e^{-C_{1,\ell} \abs{z}^{\bar\gamma_\ell}}
\le \qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i})
\le e^{-C_{2,\ell} \abs{z}^{\bar\gamma_\ell}}
\rlap{$\quad (i\in\{0,1,2,3\})$} \]
for all $z\le-1$. The upper bounds also hold for $z\in\mathbb C$ with $\qopname\relax o{Re} z\le0$ and $\abs{z}\ge1$ (after taking absolute values). Analogously, there are constants $C_{1,r},C_{2,r} > 0$ such that
\[ e^{C_{1,r} z^{\bar\gamma_r}}
\le \qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i})
\le e^{C_{2,r} z^{\bar\gamma_r}}
\rlap{$\quad (i\in\{0,1,2,3\})$} \]
for all sufficiently large $z\ge0$ (for instance if $\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i})\ge4$ for $i\in\{0,1,2,3\}$). As a consequence, the following statements hold:
\begin{itemize}
\item There are constants $C_{3,\ell},C_{4,\ell},C_{5,\ell},C_{6,\ell}>0$ such that
\[ C_{3,\ell} \exp(-C_{4,\ell} s^{-\bar\gamma_\ell/(1-\bar\gamma_\ell)}) \le \qopname\relax o{\mathbb{P}}(\bar\theta_i \le s)
\le C_{5,\ell} \exp(-C_{6,\ell} s^{-\bar\gamma_\ell/(1-\bar\gamma_\ell)}) \]
for all $s\ge0$ and all $i\in\{0,1,2,3\}$.
\item There are constants $C_{3,r},C_{4,r},C_{5,r},C_{6,r}>0$ such that
\[ C_{3,r} \exp(-C_{4,r} s^{\bar\gamma_r/(\bar\gamma_r-1)}) \le \qopname\relax o{\mathbb{P}}(\bar\theta_i \ge s)
\le C_{5,r} \exp(-C_{6,r} s^{\bar\gamma_r/(\bar\gamma_r-1)}) \]
for all $s\ge0$ and all $i\in\{0,1,2,3\}$.
\item The random variables $\bar\theta_0,\bar\theta_1,\bar\theta_2,\bar\theta_3$ have densities in $C^\infty$.
\end{itemize}
\end{lemma}
\begin{proof}
Set $\mathbb C_-=\{z\in\mathbb C \,:\, \qopname\relax o{Re} z\le 0\}$. The random variables $\bar\theta_0,\bar\theta_1,\bar\theta_2,\bar\theta_3\ge0$ have positive densities on $(0,\infty)$. Thus $0<\abs{\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i})}<1$ for all $z\in\mathbb C_-\setminus\{0\}$ and for all $i\in\{0,1,2,3\}$.
We start with the upper bounds of the left tail. Set $M(z) = \max\{\abs{\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i})} \,:\, i\in\{0,1,2,3\} \}$. Then $M(\bar\alpha z) \le M(z)^2$ for all $z\in\mathbb C_-$; this follows from the functional equations for $\mathbold{\bar\phi}$ and $\mathbold{\tilde\phi}$, since every monomial of $\mathbold{\bar g}$ and $\mathbold{\tilde g}$ has total degree at least two and $M(z)\le1$ on $\mathbb C_-$. Set $H(z) = -\abs{z}^{-\bar\gamma_\ell} \log M(z)$, so that $H(\bar\alpha z) \ge H(z)$ for all $z\in\mathbb C_-$ (here $\bar\alpha^{\bar\gamma_\ell}=2$ is used). Due to continuity there is a constant $C_{2,\ell}>0$ such that $H(z)\ge C_{2,\ell}$ for all $z\in\mathbb C_-$ with $1\le\abs{z}\le\bar\alpha$. This implies $H(z)\ge C_{2,\ell}$ for all $z\in\mathbb C_-$ with $\abs{z}\ge1$ and thus $\abs{\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i})} \le e^{-C_{2,\ell} \abs{z}^{\bar\gamma_\ell}}$ for all $z\in\mathbb C_-$ with $\abs{z}\ge1$ and $i\in\{0,1,2,3\}$.
For the lower bounds of the left tail set $m(z) = \min\{\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i}) \,:\, i\in\{0,1,3\} \}$, so that $m(\bar\alpha z) \ge \tfrac1{10} m(z)^2$ for all $z\le0$. If we set $h(z) = -\abs{z}^{-\bar\gamma_\ell} \log m(z)$, then
\[ h(\bar\alpha z) \le \tfrac12 \abs{z}^{-\bar\gamma_\ell} \log10 + h(z) \]
for all $z\le0$. For $n\ge0$ this implies
\[ h(\bar\alpha^n z) \le \bigl( (\tfrac12)^1 + \dotsb + (\tfrac12)^n \bigr) \abs{z}^{-\bar\gamma_\ell} \log10 + h(z)
\le \abs{z}^{-\bar\gamma_\ell} \log10 + h(z). \]
As before, there is a constant $C_{1,\ell}>0$ such that $\abs{z}^{-\bar\gamma_\ell} \log10 + h(z)\le C_{1,\ell}$ for all $-\bar\alpha\le z\le-1$. This implies $h(z)\le C_{1,\ell}$ for all $z\le-1$ and so $\qopname\relax o{\mathbb{E}}(e^{z\bar\theta_i}) \ge e^{-C_{1,\ell} \abs{z}^{\bar\gamma_\ell}}$ for all $z\le0$ and $i\in\{0,1,3\}$. If $i=2$, notice that
\[ \bar g_2(z_1,z_2,z_3) \ge \tfrac49 z_1^2 z_3 \]
for all $z_1,z_2,z_3\ge0$. Hence, using the lower bounds above,
\[ \qopname\relax o{\mathbb{E}}(e^{\bar\alpha z\bar\theta_2}) \ge \tfrac49 e^{-3C_{1,\ell} \abs{z}^{\bar\gamma_\ell}} \]
for all $z\le-1$. By a suitable modification of $C_{1,\ell}$ we get the lower bound for the case $i=2$.
The proof of the bounds for the right tail is very similar to the proof for the left tail, hence we omit the details.
For the remaining statements, see \cite[Proposition~3.2, Lemma~3.4]{barlow1988brownian} and \cite[Corollary~4.12.8]{bingham1987regular}.
\end{proof}
Analogously to Remark~\ref{remark:func1}, it is easy to describe the limit behaviour of any parameter of loop-erased random walk in $G_n$ from $u_1$ to $u_2$ that is a functional of $\mathbold{\bar\chi}^{\scriptscriptstyle\#}_n(T_\infty^{},3)$. As a simple example we consider the length of loop-erased random walk in $G_n$ from $u_1$ to $u_2$, which is given by the distance $d_{T_n}(u_1,u_2)$, where $d_{T_n}$ is the graph metric of the tree $T_n$. We remark that a similar derivation of the expectations below is given in \cite{dhar1997distribution,hattori2012looperased}.
\begin{corollary}\label{corollary:length}
If $n\ge0$, then, with $\mathbold{\bar g}$, $\mathbold{\tilde g}$ and $\mathbold{\Sigma}$ as defined in~\eqref{eq:gdef},~\eqref{eq:tildegdef} and~\eqref{eq:sigmadef}, the probability generating functions of $d_{T_n^{}}(u_1,u_2)$ and $d_{S_n^3}(u_1,u_2)$ are given by
\[ \bigl(\qopname\relax o{\mathsf{PGF}}(d_{T_n^{}}(u_1,u_2),z),\qopname\relax o{\mathsf{PGF}}(d_{S_n^3}(u_1,u_2),z)\bigr)
= \mathbold{\Sigma}(\mathbold{\bar g}^n(z,z^2,z))
= \mathbold{\tilde g}^n\bigl( \tfrac23 z + \tfrac13 z^2, z \bigr) \]
and the expectations are
\[ \begin{pmatrix} \qopname\relax o{\mathbb{E}}(d_{T_n^{}}(u_1,u_2)) \\ \qopname\relax o{\mathbb{E}}(d_{S_n^3}(u_1,u_2)) \end{pmatrix}
= \begin{pmatrix}
\tfrac23 + \tfrac5{123}\sqrt{205} & \tfrac23 - \tfrac5{123}\sqrt{205} \\[4pt]
\tfrac12 + \tfrac{19}{410}\sqrt{205} & \tfrac12 - \tfrac{19}{410}\sqrt{205}
\end{pmatrix} \cdot
\begin{pmatrix}
\bigl( \tfrac43 + \tfrac1{15} \sqrt{205}\,\bigr)^n \\[4pt]
\bigl( \tfrac43 - \tfrac1{15} \sqrt{205}\,\bigr)^n
\end{pmatrix}. \]
Furthermore,
\[ \bar\alpha^{-n} d_{T_n^{}}(u_1,u_2) \to \tfrac16(\sqrt{205}-7) \bar\theta(T_\infty,3), \qquad
\bar\alpha^{-n} d_{S_n^3}(u_1,u_2) \to \tfrac16(\sqrt{205}-7) \bar\theta(S_\infty^3,3) \]
almost surely as $n\to\infty$.
\end{corollary}
\begin{proof}
By the description using Galton-Watson trees, see Proposition~\ref{proposition:tree2} and Remark~\ref{remark:count}, we infer that
\begin{align*}
d_{T_n^{}}(u_1,u_2)
&= \mathbold{\bar c}^{\scriptscriptstyle\#}_n(T_\infty^{},3) \cdot (1,2,1)^t, \\
d_{S_n^3}(u_1,u_2)
&= \mathbold{\bar c}^{\scriptscriptstyle\#}_n(S_\infty^3,3) \cdot (1,2,1)^t.
\end{align*}
This implies the statement, since $(6a_4,3a_4,3a_5)\cdot (1,2,1)^t = \frac16(\sqrt{205}-7)$.
\end{proof}
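The expectations in Corollary~\ref{corollary:length} can be cross-checked by iterating $\mathbold{\tilde g}$ symbolically and differentiating at $z=1$. The following SymPy sketch (purely illustrative) prints $\qopname\relax o{\mathbb{E}}(d_{T_n^{}}(u_1,u_2))$ and $\qopname\relax o{\mathbb{E}}(d_{S_n^3}(u_1,u_2))$ for small $n$; for instance it returns $\frac43$ and $1$ for $n=0$, and $\frac{26}9$ and $\frac{13}5$ for $n=1$, in agreement with the matrix formula above:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')

def g_tilde(a, b):
    # offspring generating function from (eq:tildegdef)
    return (a * (a + b + a * b) / 3,
            sp.Rational(3, 10) * a**2 + a * b * (2 + b) / 5 + b**2 / 10)

def expected_lengths(n):
    """E(d_{T_n}(u1,u2)) and E(d_{S_n^3}(u1,u2)) via the identity in the corollary."""
    p = (sp.Rational(2, 3) * z + z**2 / 3, z)
    for _ in range(n):
        p = g_tilde(*p)
    return tuple(sp.diff(q, z).subs(z, 1) for q in p)

for n in range(4):
    print(n, expected_lengths(n))
\end{verbatim}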
\section{Convergence of loop-erased random walk}
\label{section:convergence}
Let $C$ be the set of continuous curves $\gamma\colon[0,\infty]\to K$ with $\gamma(0)=u_1$ and $\gamma(\infty)=u_2$ and set $d_C(\gamma,\delta) = \sup\{ \norm{\gamma(t)-\delta(t)}_2 \,:\, t\in[0,\infty] \}$ for $\gamma,\delta\in C$. Then $(C,d_C)$ is a complete separable metric space. For $\gamma\in C$ set
\[ \mathsf h(\gamma) = \inf\{t \,:\, \gamma(s) = u_2 \text{ for all } s\ge t \} \in (0,\infty]. \]
A curve $\gamma\in C$ is called \emph{self-avoiding} if $\gamma(s)\ne\gamma(t)$ for $0\le s<t\le\mathsf h(\gamma)$. Fix some curve $\gamma$ in $C$ and some integer $n\ge 0$. Then there is a unique integer $m\ge1$ and two unique sequences
\[ 0 = t_0 < \dotsb < t_m = \mathsf h(\gamma) \]
and $w_1,\dotsc,w_m\in\mathbb W^n$ with the following properties:
\begin{itemize}
\item The curve $\gamma$ walks along the $n$\nobreakdash-\hskip0pt parts $\psi_{w_j}(K)$:
$\gamma([t_{j-1},t_j])\subseteq\psi_{w_j}(K)$ and
$\gamma([t_{j-1},t_j])\cap\psi_{w_j}(K\setminus VG_0) \ne \emptyset$
for all $1\le j\le m$.
\item The quantity $t_j$ is the exit time of $\gamma$ from $\psi_{w_j}(K)$:
$t_j = \inf\{s > t_{j-1} \,:\, \gamma(s)\notin\psi_{w_j}(K)\}$ for all $1\le j\le m-1$.
\end{itemize}
As a consequence, the intersection of $\psi_{w_{j-1}}(K)$ and $\psi_{w_j}(K)$ consists of one point only, which is equal to $\gamma(t_{j-1})\in VG_n$. We write $\Delta_n(\gamma)$ to denote the number $m$ of $n$\nobreakdash-\hskip0pt parts traversed, $\mathsf t_{j,n}(\gamma)$ to denote the time $t_j$, and we set
\[ W_n(\gamma) = (w_1,\dotsc,w_m). \]
Last but not least, set $\mathsf s_{j,n}(\gamma)=\mathsf t_{j,n}(\gamma)-\mathsf t_{j-1,n}(\gamma)$, which is the time spent in the $n$\nobreakdash-\hskip0pt part $\psi_{w_j}(K)$. It should be stressed that $\bigl(\mathsf t_{j,n}(\gamma)\bigr)_{j=0,\dotsc,\Delta_n(\gamma)}$ are in general not equal to the consecutive hitting times on the set $VG_n$, as it might happen that the curve $\gamma$ enters the part $\psi_{w_j}(K)$ at $\psi_{w_j}(u_1)$, visits $\psi_{w_j}(u_3)$ without leaving $\psi_{w_j}(K)$, and leaves at $\psi_{w_j}(u_2)$.
By linear interpolation and constant extension we can associate to any walk $x=(x_0,\dotsc,x_r)$ in $G_n$ a curve $\qopname\relax o{\mathsf{LI}}(x)\colon[0,\infty]\to K$ as follows:
\begin{itemize}
\item Linear interpolation: set $\qopname\relax o{\mathsf{LI}}(x)(t) = (k+1-t) x_k + (t-k) x_{k+1}$ if $k\in\{0,\dotsc,r-1\}$ and $k\le t< k+1$.
\item Constant extension: set $\qopname\relax o{\mathsf{LI}}(x)(t) = x_r$ for $t\ge r$.
\end{itemize}
If $\lambda>0$, write $\qopname\relax o{\mathsf{LI}}(x,\lambda)$ for the curve with rescaled time, i.e., $\qopname\relax o{\mathsf{LI}}(x,\lambda)(t) = \qopname\relax o{\mathsf{LI}}(x)(\lambda t)$. Note that $\qopname\relax o{\mathsf{LI}}(x,\lambda)\in C$ if $x_0=u_1$ and $x_r=u_2$.
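A direct Python transcription of this construction (purely illustrative; representing vertices by their coordinates in the plane is an assumption of the sketch):
\begin{verbatim}
import numpy as np

def LI(x, lam=1.0):
    """Return the curve LI(x, lambda): linear interpolation of the walk x,
    extended constantly after the last vertex, with time rescaled by lambda."""
    x = np.asarray(x, dtype=float)
    r = len(x) - 1
    def curve(t):
        t = lam * t
        if t >= r:
            return x[r]
        k = int(t)
        return (k + 1 - t) * x[k] + (t - k) * x[k + 1]
    return curve
\end{verbatim}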
\begin{remark}\label{remark:conn}
Let $t\in\mathcal T_\infty$ and set $\gamma_n = \qopname\relax o{\mathsf{LI}}(u_1(\qopname\relax o{\mathsf{Tr}}^\infty_n t)u_2) \in C$ for $n\ge0$. If $m\ge n$, then the number $\Delta_n(\gamma_m)$ of $n$\nobreakdash-\hskip0pt parts visited by $\gamma_m$ is given by
\[ \Delta_n(\gamma_m) = \mathbold{\bar c}^{\scriptscriptstyle\#}_n(t,3) \cdot (1,1,1)^t, \]
since $\mathbold{\bar c}^{\scriptscriptstyle\#}_n(t,3)$ counts the $n$\nobreakdash-\hskip0pt parts on the unique path from $u_1$ to $u_2$ by their type. Moreover, the words in $W_n(\gamma_m)$ associated to the $n$\nobreakdash-\hskip0pt parts visited by $\gamma_m$ and the labels $W_n(t,3)$ of the $n$\nobreakdash-\hskip0pt th generation of the tree $\mathbold{\bar\chi}(t,3)$ are equal, if the natural ordering of $W_n(t,3)$ is used. Finally, the length of the self-avoiding walk $u_1(\qopname\relax o{\mathsf{Tr}}^\infty_n t)u_2$ is given by
\[ \mathsf h(\gamma_n) = d_{\qopname\relax o{\mathsf{Tr}}^\infty_n t}(u_1,u_2) = \mathbold{\bar c}^{\scriptscriptstyle\#}_n(t,3) \cdot (1,2,1)^t, \]
since types $\conn55,\conn66,\conn77$ contribute $2$ to the length while all other types contribute $1$. If $m\ge n$ and $0\le j\le\Delta_n(\gamma_n)$, then
\[ \gamma_m(\mathsf t_{j,n}(\gamma_m)) = \gamma_n(\mathsf t_{j,n}(\gamma_n)) \in VG_n. \]
Let $x_{j,n} = \gamma_n(\mathsf t_{j,n}(\gamma_n)) \in VG_n$ and $W_n(\gamma_n) = (w_1,\dotsc,w_{\Delta_n(\gamma_n)})$. It follows that, for any $0\le i<j\le\Delta_n(\gamma_n)$,
\begin{itemize}
\item the vertices $x_{i,n}$ and $x_{j,n}$ are not the same,
\item at $x_{j-1,n}$ the self-avoiding walk $u_1(\qopname\relax o{\mathsf{Tr}}^\infty_m t)u_2$ enters the $n$\nobreakdash-\hskip0pt part $\psi_{w_j}(K)$ and
at $x_{j,n}$ it leaves this $n$\nobreakdash-\hskip0pt part,
\item the quantity $\mathsf s_{j,n}(\gamma_m)$ is the length of the self-avoiding walk $u_1(\qopname\relax o{\mathsf{Tr}}^\infty_m t)u_2$
restricted to the segment from $x_{j-1,n}$ to $x_{j,n}$, i.e.,
it is equal to the number of edges of this walk inside the $n$\nobreakdash-\hskip0pt part $\psi_{w_j}(K)$:
\[ \mathsf s_{j,n}(\gamma_m) = d_{\qopname\relax o{\mathsf{Tr}}^\infty_m t}(x_{j-1,n},x_{j,n})
= \mathbold{\bar c}^{\scriptscriptstyle\#}_{m-n}(\pi_{w_j}(t),\bar\kappa_{w_j}(t,3)) \cdot (1,2,1)^t. \]
\end{itemize}
\end{remark}
The results of Section~\ref{section:lerw} indicate that $\qopname\relax o{\mathsf{LI}}(u_1T_nu_2,\bar\alpha^n)$ converges almost surely as $n\to\infty$. The proof of this fact closely follows the arguments of \cite{barlow1988brownian,hattori1991selfavoiding,kumagai1993construction}. In the first two references single-type Galton-Watson processes are used, whereas in the last reference a Galton-Watson process with four types is used.
A pair $(W,(b_w)_{w\in W})$ with $W\subseteq\mathbb W^n$ and $b_w\in\mathcal{\bar C}$ is called \emph{admissible} of length $n$ if there is an element $t\in\mathcal T_\infty$ such that $W=W_n(t,3)$ and $b_w=\bar\chi_w(t)$ for $w\in W$. Notice that $W$ inherits the natural ordering from $W_n(t,3)$. An admissible pair $(W,(b_w)_{w\in W})$ completely describes the self-avoiding walk connecting $u_1$ and $u_2$ in the spanning tree $\qopname\relax o{\mathsf{Tr}}^\infty_n t$ for some $t\in\mathcal T_\infty$. Loosely speaking, the following lemma states that conditioning on the $n$\nobreakdash-\hskip0pt th level, i.e. conditioning on $W_n(T_\infty,3) = W$ and $(\bar\chi_w(T_\infty,3))_{w\in W} = (b_w)_{w\in W}$ for some admissible pair $(W,(b_w)_{w\in W})$, the refinements in different $n$\nobreakdash-\hskip0pt parts are conditionally independent and for each $n$\nobreakdash-\hskip0pt part the refinement yields again a multi-type Galton-Watson tree.
\begin{lemma}\label{lemma:indep}
Let $(W,(b_w)_{w\in W})$ be an admissible pair of length $n$. Then, under $\qopname\relax o{\mathbb{P}}(\,\cdot \mid W_n(T_\infty,3) = W, (\bar\chi_w(T_\infty,3))_{w\in W} = (b_w)_{w\in W})$, the following holds:
\begin{itemize}
\item The random trees $\mathbold{\bar\chi}(\pi_w(T_\infty),\bar\kappa_w(T_\infty,3))$
for $w\in W$ are independent labelled multi-type Galton-Watson trees
with labels in $\mathbb W^*$ and types in $\mathcal{\bar C}$
as described in Proposition~\ref{proposition:tree2}.
\item For $w\in W$,
$\bar\alpha^{-n}\mathbold{\bar\chi}^{\scriptscriptstyle\#}_n(\pi_w(T_\infty),\bar\kappa_w(T_\infty,3))$
converges almost surely to $\mathbold{\bar v}_L \bar\theta(w)$
for some non-negative random variable $\bar\theta(w)$,
which has the same distribution as $\bar\theta_{\bar c(b_w)}$.
In particular, $\bar\theta(w)$ is almost surely positive.
The random variables $\bar\theta(w)$ for $w\in W$ are independent.
\item We have $\Delta_n(\qopname\relax o{\mathsf{LI}}(u_1T_nu_2)) = \card{W}$ almost surely.
Let $(w_1,w_2,\dotsc)$ be the natural ordering of $W$ and let $m\ge n$.
Then the random variables
\[ \mathsf s_{j,n}(\qopname\relax o{\mathsf{LI}}(u_1T_mu_2,\bar\alpha^m))
= \bar\alpha^{-m}\mathbold{\bar c}^{\scriptscriptstyle\#}_{m-n}(\pi_{w_j}(T_\infty),\bar\kappa_{w_j}(T_\infty,3))
\cdot (1,2,1)^t \]
for $1\le j\le\card{W}$ are independent and
\[ \mathsf s_{j,n}(\qopname\relax o{\mathsf{LI}}(u_1T_mu_2,\bar\alpha^m)) \to \tfrac16(\sqrt{205}-7)\bar\alpha^{-n}\bar\theta(w_j) \]
almost surely as $m\to\infty$.
\end{itemize}
\end{lemma}
\begin{proof}
The first two parts are consequences of Proposition~\ref{proposition:tree2}. The third part follows from the first and the second and from Remark~\ref{remark:conn}.
\end{proof}
In the following we set $X_n = \qopname\relax o{\mathsf{LI}}(u_1T_nu_2,\bar\alpha^n)$, so that $X_n\colon\Omega\to C$ is a random element in $C$ and
\[ X_n(\mathsf t_{j,n}(X_n)) = X_m(\mathsf t_{j,n}(X_m)) \]
for all $m\ge n$. Define
\[ \Omega' = \Bigl\{ \omega\in\Omega \,:\, \lim_{m\to\infty} \mathsf s_{j,n}(X_m) \in (0,\infty)
\text{ for } n\ge0, 1\le j\le\Delta_n(X_n) \Bigr\}. \]
Then using Lemma~\ref{lemma:indep} we conclude that $\qopname\relax o{\mathbb{P}}(\Omega')=1$. Fix some $\omega\in\Omega'$. For $n\ge0$ and $1\le j\le\Delta_n(X_n)$ set
\[ \mathsf S_{j,n} = \lim_{m\to\infty} \mathsf s_{j,n}(X_m). \]
It follows that
\[ \lim_{m\to\infty} \mathsf t_{j,n}(X_m)
= \lim_{m\to\infty} \sum_{1\le k\le j} \mathsf s_{k,n}(X_m)
= \sum_{1\le k\le j} \mathsf S_{k,n} \in (0,\infty). \]
We write $\mathsf T_{j,n}$ to denote this limit. Lastly, note that
\[ \mathsf h(X_m) = \mathsf t_{1,0}(X_m) = \mathsf t_{\Delta_n(X_n),n}(X_m) \qquad\text{and thus}\qquad
\mathsf T_{1,0} = \mathsf T_{\Delta_n(X_n),n}. \]
\begin{theorem}\label{theorem:conv}
On $\Omega'$ the curve $X_n$ converges uniformly as $n\to\infty$ to a limit curve $X$ in $C$, which satisfies the following properties:
\begin{itemize}
\item $X(\mathsf T_{j,n}) = X_n(\mathsf t_{j,n}(X_n)) \in VG_n$ for all $n\ge0$ and $0\le j\le\Delta_n(X_n)$.
\item If $W_n(X_n) = \{w_1,\dotsc,w_{\Delta_n(X_n)}\}$, then
\begin{gather*}
X(\mathsf T_{i,n})\ne X(\mathsf T_{j,n}), \qquad
X(\mathsf T_{j-1,n}), X(\mathsf T_{j,n})\in\psi_{w_j}(VG_0), \\
X([\mathsf T_{j-1,n},\mathsf T_{j,n}])\subseteq\psi_{w_j}(K), \qquad
X([\mathsf T_{j-1,n},\mathsf T_{j,n}])\cap\psi_{w_j}(K\setminus VG_0)\ne\emptyset
\end{gather*}
for all $0\le i<j\le\Delta_n(X_n)$.
Hence $\Delta_n(X) = \Delta_n(X_n)$ and $W_n(X) = W_n(X_n)$ for all $n\ge0$.
\end{itemize}
\end{theorem}
\begin{proof}
We closely follow the arguments of \cite{hattori1991selfavoiding}. Fix $\omega\in\Omega'$. We will show that $X_n$ converges uniformly in $[0,\infty]$.
Let $n\ge1$ be a positive integer. Then $\Delta_n(X_n)\ge2$. By definition of $\Omega'$ we have
\[ a = \min\{\mathsf S_{j,n} \,:\, 1\le j\le\Delta_n(X_n)\} > 0. \]
Hence there is a positive integer $M=M(n,\omega)$ with $M\ge n$ such that
\[ \max\{\abs{\mathsf t_{j,n}(X_m) - \mathsf T_{j,n}} \,:\, 0\le j\le\Delta_n(X_n)\} \le a \]
for all $m\ge M$. For convenience set $\mathsf T_{\Delta_n(X_n)+1,n} = \mathsf t_{\Delta_n(X_n)+1,n}(X_m) = \infty$ for all $0\le n\le m$. Now consider $t\in[0,\infty]$. There is an integer $j$ with $1\le j\le\Delta_n(X_n)+1$ such that $\mathsf T_{j-1,n}\le t\le\mathsf T_{j,n}$. Let $m\ge M$ and distinguish the following cases:
\begin{itemize}
\item $1<j<\Delta_n(X_n)$: We infer that
\begin{gather*}
\mathsf t_{j-2,n}(X_m) \le \mathsf T_{j-2,n} + a \le \mathsf T_{j-2,n} + \mathsf S_{j-1,n} = \mathsf T_{j-1,n} \le t, \\
t \le \mathsf T_{j,n} = \mathsf T_{j+1,n} - \mathsf S_{j+1,n} \le \mathsf T_{j+1,n} - a \le \mathsf t_{j+1,n}(X_m).
\end{gather*}
Since $X_m([\mathsf t_{j-2,n}(X_m),\mathsf t_{j+1,n}(X_m)]) \subseteq \psi_{w_1}(K)\cup\psi_{w_2}(K)\cup\psi_{w_3}(K)$
for some $w_1,w_2,w_3\in\mathbb W^n$ with $\psi_{w_1}(K)\cap\psi_{w_2}(K) = \{X_m(\mathsf t_{j-1,n}(X_m))\}$ and
$\psi_{w_2}(K)\cap\psi_{w_3}(K) = \{X_m(\mathsf t_{j,n}(X_m))\}$, we obtain
\[ \norm{ X_m(t) - X_m(\mathsf t_{j-1,n}(X_m)) }_2 \le 2^{1-n}. \]
\item $j=1$: It follows as before that $0\le t \le \mathsf t_{2,n}(X_m)$ for all $m\ge M$. Hence
\[ \norm{ X_m(t) - X_m(\mathsf t_{0,n}(X_m)) }_2 \le 2^{1-n}. \]
\item $j=\Delta_n(X_n)$: Again, $\mathsf t_{\Delta_n(X_n)-2,n}(X_m) \le t \le \mathsf t_{\Delta_n(X_n)+1,n}(X_m)$ and therefore
\[ \norm{ X_m(t) - X_m(\mathsf t_{\Delta_n(X_n)-1,n}(X_m)) }_2 \le 2^{-n}. \]
\item $j=\Delta_n(X_n)+1$: Then $\mathsf t_{\Delta_n(X_n)-1,n}(X_m) \le t \le \mathsf t_{\Delta_n(X_n)+1,n}(X_m)$ and
\[ \norm{ X_m(t) - X_m(\mathsf t_{\Delta_n(X_n),n}(X_m)) }_2 \le 2^{-n}. \]
\end{itemize}
In any case we have
\[ \norm{ X_m(t) - X_m(\mathsf t_{j-1,n}(X_m)) }_2 \le 2^{1-n} \]
for $m\ge M$. Now let $m_1,m_2\ge M$. Since $X_{m_1}(\mathsf t_{j-1,n}(X_{m_1})) = X_{m_2}(\mathsf t_{j-1,n}(X_{m_2}))$, the estimate above implies
\begin{multline*}
\norm{ X_{m_1}(t) - X_{m_2}(t) }_2 \\
\le \norm{ X_{m_1}(t) - X_{m_1}(\mathsf t_{j-1,n}(X_{m_1})) }_2
+ \norm{ X_{m_2}(t) - X_{m_2}(\mathsf t_{j-1,n}(X_{m_2})) }_2
\le 2^{2-n}.
\end{multline*}
As $X_m(0)=u_1$ and $X_m(\infty)=u_2$, we have proved that $X_n$ converges uniformly to a limit curve $X$ in $C$.
The first property listed in Theorem~\ref{theorem:conv} follows from the fact that $X_m(\mathsf t_{j,n}(X_m)) = X_n(\mathsf t_{j,n}(X_n))$ for all $m\ge n$ and $\mathsf t_{j,n}(X_m)\to\mathsf T_{j,n}$, $X_m\to X$ uniformly as $m\to\infty$.
In order to show the second property let $t$ be in $(\mathsf T_{j-1,n},\mathsf T_{j,n})$. Then, for sufficiently large $m$, $t\in(\mathsf t_{j-1,n}(X_m),\mathsf t_{j,n}(X_m))$. Due to Remark~\ref{remark:conn} we have $X_m(t)\in\psi_{w_j}(K)$ for all sufficiently large $m$. As $X_m(t)\to X(t)$ it follows that $X(t)\in\psi_{w_j}(K)$. Thus Remark~\ref{remark:conn} and the first property imply the second.
\end{proof}
Let $\gamma$ be a curve in $C$, $w$ be a word in $\mathbb W^*$, and $\iota$ be a letter in $\mathbb W$. We say that $\gamma$ has a \emph{peak} of type $\iota$ in the $n$\nobreakdash-\hskip0pt part $\psi_w(K)$ if there are $t_1<t_2$ such that
\begin{itemize}
\item $\gamma([t_1,t_2])\subseteq\psi_w(K)$,
\item $\gamma(t_1)\ne\gamma(t_2)$ and $\gamma(t_1),\gamma(t_2)\in\psi_w(VG_0\setminus\{u_\iota\})$,
\item $\gamma((t_1,t_2))\cap\psi_w(VG_0)=\{\psi_w(u_\iota)\}$.
\end{itemize}
Intuitively speaking, this means that the curve passes through one of the corners of the $n$\nobreakdash-\hskip0pt part $\psi_w(K)$ without moving on to the adjacent part.
\begin{lemma}\label{lemma:peaks}
Almost surely, the limit curve $X$ has no peaks. In particular,
\[ X([0,\infty])\cap VG_n = \{ u_1 = X(\mathsf T_{0,n}), X(\mathsf T_{1,n}), \dotsc, X(\mathsf T_{\Delta_n(X_n),n}) = u_2 \} \]
almost surely for all $n\ge0$. If $i,j\in\{1,\dotsc,\Delta_n(X_n)\}$ with $i<j-1$, then
\[ X([\mathsf T_{i-1,n},\mathsf T_{i,n}]) \cap X([\mathsf T_{j-1,n},\mathsf T_{j,n}]) = \emptyset \]
almost surely. Finally, $\card{X([0,\infty])\cap\psi_w(VG_0)}\le2$ for all $w\in\mathbb W^*$ almost surely.
\end{lemma}
\begin{proof}
If $\iota\in\mathbb W$ and $n\ge0$, we write $\iota^n\in\mathbb W^n$ for the $n$\nobreakdash-\hskip0pt fold repetition of the letter $\iota$ and define $x(\iota)\in\mathcal{\bar C}$ by
\[ x(\iota) = \begin{cases}
\conn55 & \text{if } \iota=1, \\
\conn66 & \text{if } \iota=2, \\
\conn77 & \text{if } \iota=3. \\
\end{cases} \]
Let $w$ be a word in $\mathbb W^n$ for some $n\ge0$. For $m\ge n$ write $A_m=A_m(w,\iota)$ to denote the event
\[ A_m = \{ w\iota^{m-n}\in W_m(T_\infty,3), \bar\chi_{w\iota^{m-n}}(T_\infty,3)=x(\iota) \}. \]
Then, for any $m\ge n$, $A_m\supseteq A_{m+1}$ and
\[ \qopname\relax o{\mathbb{P}}(A_{m+1} \mid A_m) = \tfrac6{18} = \tfrac13 \]
as one can see easily by inspection of Table~\ref{table:conngen}. Hence
\[ \qopname\relax o{\mathbb{P}}(A_m) = \bigl(\tfrac13\bigr)^{m-n} \qopname\relax o{\mathbb{P}}(A_n). \]
Since
\[ \{ X \text{ has a peak of type } \iota \text{ in } \psi_w(K)\} = \bigcap_{m\ge n} A_m, \]
we infer that
\[ \qopname\relax o{\mathbb{P}}(X \text{ has a peak of type } \iota \text{ in } \psi_w(K)) = 0. \]
This yields
\[ \qopname\relax o{\mathbb{P}}(X \text{ has a peak})
\le \sum_{w\in\mathbb W^*}\sum_{\iota\in\mathbb W} \qopname\relax o{\mathbb{P}}(X \text{ has a peak of type } \iota \text{ in } \psi_w(K))
= 0. \]
In order to show the last assertion of the lemma, let $w$ be a word in $\mathbb W^n$. For $i\in\{1,2,3\}$ let $w_i$ be the word in $\mathbb W^n$ (if it exists) for which $w_i\ne w$ and $\psi_w(K)\cap\psi_{w_i}(K)=\{\psi_w(u_i)\}$. If $\card{X([0,\infty])\cap\psi_w(VG_0)}=3$ for some $w\in\mathbb W^*$, then $X$ has a peak in one of the parts $\psi_w(K)$, $\psi_{w_1}(K)$, $\psi_{w_2}(K)$, $\psi_{w_3}(K)$. Thus
\[ \qopname\relax o{\mathbb{P}}(\card{X([0,\infty])\cap\psi_w(VG_0)}=3 \text{ for some } w\in\mathbb W^*)
\leq \qopname\relax o{\mathbb{P}}(X \text{ has a peak}) = 0. \]
Consider two indices $i,j\in\{1,\dotsc,\Delta_n(X_n)\}$ with $i<j-1$. Then $X([\mathsf T_{i-1,n},\mathsf T_{i,n}])$ and $X([\mathsf T_{j-1,n},\mathsf T_{j,n}])$ are contained in distinct $n$\nobreakdash-\hskip0pt parts of $K$ and, furthermore,
\[ \{X(\mathsf T_{i-1,n}),X(\mathsf T_{i,n})\} \cap \{X(\mathsf T_{j-1,n}),X(\mathsf T_{j,n})\} = \emptyset. \]
Hence peaks in both $n$\nobreakdash-\hskip0pt parts are the only possibility for a non-empty intersection. However, this has probability $0$.
\end{proof}
On $\Omega'$ define $\mathsf S_{*,n}$ for $n\ge0$ by
\[ \mathsf S_{*,n} = \max\{ \mathsf S_{j,n} \,:\, 1\le j\le\Delta_n(X_n) \}. \]
Then $\mathsf S_{*,0} = \mathsf S_{1,0} = \mathsf h(X)$ and $\mathsf S_{*,n+1}\le\mathsf S_{*,n}$ for all $n\ge0$. Therefore the limit $\lim_{n\to\infty}\mathsf S_{*,n}$ exists and is finite on $\Omega'$.
\begin{lemma}\label{lemma:times}
$\mathsf S_{*,n} \to 0$ almost surely as $n\to\infty$.
\end{lemma}
\begin{proof}
If $(W,(b_w)_{w\in W})$ is admissible, then write $A(W,(b_w)_{w\in W})$ to denote the event
\[ A(W,(b_w)_{w\in W}) = \{ W_n(T_\infty,3)=W, (\bar\chi_w(T_\infty,3))_{w\in W}=(b_w)_{w\in W} \}. \]
Let $\epsilon>0$, then
\begin{align*}
\qopname\relax o{\mathbb{P}}(\mathsf S_{*,n}\ge\epsilon)
&= \qopname\relax o{\mathbb{P}}(\mathsf S_{j,n}\ge\epsilon \text{ for some } 1\le j\le\Delta_n(X_n)) \\
&\le \sum_{1\le j\le \Delta_n(X_n)} \qopname\relax o{\mathbb{P}}(\mathsf S_{j,n}\ge\epsilon) \\
&= \sum_{(W,(b_w)_{w\in W})} \qopname\relax o{\mathbb{P}}(A(W,(b_w)_{w\in W}))
\sum_{1\le j\le\card{W}} \qopname\relax o{\mathbb{P}}(\mathsf S_{j,n}\ge\epsilon \mid A(W,(b_w)_{w\in W})),
\end{align*}
where the sum is taken over all admissible pairs. For ease of notation set $c=\frac16(\sqrt{205}-7)$. If $(W,(b_w)_{w\in W})$ is admissible, then, under $\qopname\relax o{\mathbb{P}}(\,\cdot \mid A(W,(b_w)_{w\in W}))$, the random variable $\mathsf S_{j,n}$ has the same distribution as $c\bar\alpha^{-n}\bar\theta_i$ for some $i\in\{1,2,3\}$, see Lemma~\ref{lemma:indep}. For $s\in\mathbb R$ set
\[ M(s) = \max\{\qopname\relax o{\mathbb{E}}(e^{s\bar\theta_1}), \qopname\relax o{\mathbb{E}}(e^{s\bar\theta_2}), \qopname\relax o{\mathbb{E}}(e^{s\bar\theta_3})\}. \]
Fix some $s>0$; then $M(cs)$ is finite due to Remark~\ref{remark:laplace}. Applying Markov's inequality yields
\[ \qopname\relax o{\mathbb{P}}(c\bar\alpha^{-n}\bar\theta_i\ge\epsilon)
= \qopname\relax o{\mathbb{P}}(e^{cs\bar\theta_i}\ge e^{s\epsilon\bar\alpha^n})
\le e^{-s\epsilon\bar\alpha^n} M(cs) \]
for all $i\in\{1,2,3\}$. Hence we obtain
\begin{align*}
\qopname\relax o{\mathbb{P}}(\mathsf S_{*,n}\ge\epsilon)
&\le \sum_{(W,(b_w)_{w\in W})} \qopname\relax o{\mathbb{P}}(A(W,(b_w)_{w\in W})) \card{W} e^{-s\epsilon\bar\alpha^n} M(cs) \\
&= e^{-s\epsilon\bar\alpha^n} M(cs) \sum_{(W,(b_w)_{w\in W})} \qopname\relax o{\mathbb{P}}(A(W,(b_w)_{w\in W})) \card{W} \\
&= e^{-s\epsilon\bar\alpha^n} M(cs) \qopname\relax o{\mathbb{E}}(\Delta_n(X_n))
\end{align*}
using Lemma~\ref{lemma:indep} once again. Since $\Delta_n(X_n) = \mathbold{\bar c}^{\scriptscriptstyle\#}_n(T_\infty,3)\cdot(1,1,1)^t$, a short computation shows that
\[ \qopname\relax o{\mathbb{E}}(\Delta_n(X_n))
= \bigl(\tfrac12+\tfrac3{82}\sqrt{205}\,\bigr) \cdot \bigl(\tfrac43+\tfrac1{15}\sqrt{205}\,\bigr)^n
+ \bigl(\tfrac12-\tfrac3{82}\sqrt{205}\,\bigr) \cdot \bigl(\tfrac43-\tfrac1{15}\sqrt{205}\,\bigr)^n
\le 3\bar\alpha^n \]
for all $n\ge0$. Therefore
\[ \qopname\relax o{\mathbb{P}}(\mathsf S_{*,n}\ge\epsilon) \le 3\bar\alpha^n e^{-s\epsilon\bar\alpha^n} M(cs) \]
for all $n\ge0$. By monotonicity
\[ \qopname\relax o{\mathbb{P}}\Bigl(\lim_{n\to\infty}\mathsf S_{*,n}\ge\epsilon\Bigr) = 0 \]
and, as $\epsilon>0$ is arbitrary, $\mathsf S_{*,n}\to0$ almost surely.
\end{proof}
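The closed form for the expectation of $\Delta_n(X_n)$ and the bound by $3\bar\alpha^n$ used in the proof above can be checked numerically; the following Python sketch (an illustration only) evaluates both for small $n$.
\begin{verbatim}
from math import sqrt

r205  = sqrt(205.0)
alpha = 4.0/3.0 + r205/15.0      # dominating eigenvalue
beta  = 4.0/3.0 - r205/15.0      # second eigenvalue
c1, c2 = 0.5 + 3.0*r205/82.0, 0.5 - 3.0*r205/82.0

for n in range(21):
    expectation = c1*alpha**n + c2*beta**n
    assert expectation <= 3.0*alpha**n + 1e-9
print(c1 + c2)                   # equals 1, as Delta_0(X_0) = 1
\end{verbatim}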
Let $\Omega''$ be the set of all $\omega\in\Omega'$ with the property that the assertions of Lemma~\ref{lemma:peaks} and Lemma~\ref{lemma:times} hold. Then $\qopname\relax o{\mathbb{P}}(\Omega'')=1$. Using the previous preparations we are now able to prove that the curve $X$ is almost surely self-avoiding and that the random times $\mathsf T_{j,n}$ are almost surely equal to the consecutive hitting times on the set $VG_n$.
\begin{theorem}\label{theorem:prop}
On $\Omega''$ the following holds:
\begin{itemize}
\item The limit curve $X$ is self-avoiding.
\item For any $1\le j\le\Delta_n(X_n)$,
\[ \mathsf T_{j,n} = \mathsf t_{j,n}(X) = \inf\{t > \mathsf t_{j-1,n}(X) \,:\, X(t)\in VG_n\}. \]
\end{itemize}
\end{theorem}
\begin{proof}
Fix $\omega\in\Omega''$ and consider times $0\le t_1<t_2\le\mathsf h(X)$. By Lemma~\ref{lemma:times} there is an integer $n\ge0$ such that $\mathsf S_{*,n}<\frac13(t_2-t_1)$. Thus there are indices $i,j\in\{1,\dotsc,\Delta_n(X_n)-1\}$ with $i<j-1$ such that $t_1 \in [\mathsf T_{i-1,n},\mathsf T_{i,n}]$ and $t_2 \in [\mathsf T_{j-1,n},\mathsf T_{j,n}]$. Since $i<j-1$, Lemma~\ref{lemma:peaks} implies that $X(t_1)\ne X(t_2)$, which proves that $X$ is self-avoiding.
The second statement follows immediately using the first statement, Lemma~\ref{lemma:peaks}, and Theorem~\ref{theorem:conv}.
\end{proof}
\begin{remark}
For $\omega\in\Omega''$, the topological closure of the discrete set
\[ \mathsf T = \{ \mathsf T_{j,n} \,:\, n\ge 0, 0\le j\le\Delta_n(X_n) \} \]
contains the interval $[0,\mathsf h(X)]$. Hence $X$ is the continuous extension of
\[ \mathsf T \to K, \quad \mathsf T_{j,n} \mapsto X_n(\mathsf t_{j,n}(X_n)). \]
\end{remark}
\begin{remark}
The map
\[ \mathcal T_n\to C, \quad t\mapsto \qopname\relax o{\mathsf{LI}}(u_1tu_2) \]
is not one-to-one. However, it is possible to use this map and the law of the labelled multi-type Galton-Watson tree of Proposition~\ref{proposition:tree2} to describe the law of the process $X$.
\end{remark}
We use the following lemma as a partial substitute for the missing Markov property in order to prove some properties of the process $(X(t))_{t\ge0}$.
\begin{lemma}\label{lemma:events}
For any $n\in\mathbb N_0$, the following holds:
\begin{itemize}
\item If $t\ge s$ and $\norm{X(s)}_2 \ge 2^{-n}$, then $\norm{X(t)}_2 \ge 2^{-n}$.
\item If $t\ge s$, then $\norm{X(t)}_2 \ge \frac12 \norm{X(s)}_2$.
\item On $\Omega''$ we have
\[ \{ \norm{X(t)}_2 \ge 2^{-n} \}
= \{ \sup\{\norm{X(s)}_2 \,:\, s\le t\} \ge 2^{-n} \}
= \{ \mathsf T_{1,n} \le t \}
= \{ \mathsf S_{1,n} \le t \} \]
\end{itemize}
\end{lemma}
\begin{proof}
The first statement is a simple consequence of the geometry of $K$ and implies the second. For the third one note that on $\Omega''$ the curve $X$ is self-avoiding, has no peaks and $\mathsf T_{1,n}=\mathsf S_{1,n}$ is the hitting time of $\{2^{-n}u_2, 2^{-n}u_3\} = \psi_{11\dotsm1}(\{u_2,u_3\})$, where $11\dotsm1$ is the word of length $n$ whose letters are all equal to $1$. For $n\ge1$, this implies that the first hitting time of $\{2^{-n}u_2, 2^{-n}u_3\} = \psi_{11\cdots1}(\{u_2,u_3\})$ is equal to the last exit time of the set $2^{-n} K = \psi_{11\cdots1}(K)$. This implies the statement.
\end{proof}
\begin{theorem}\label{theorem:prop-lerw}
The following holds:
\begin{enumerate}[\normalfont(1)]
\item There are $C_{7,\ell},C_{8,\ell}>0$ such that for all $s,t\in[0,\infty)$ and all $\delta\in[0,1]$,
\[ C_{3,\ell} \exp(-C_{7,\ell} (\delta t^{-\bar\gamma_\ell})^{1/(1-\bar\gamma_\ell)})
\le \qopname\relax o{\mathbb{P}}( \norm{X(t)}_2 \ge \delta ) \]
and
\begin{align*}
\qopname\relax o{\mathbb{P}}( \norm{X(s+t) - X(s)}_2 \ge \delta )
&\le \qopname\relax o{\mathbb{P}}( \sup\{ \norm{X(s+u)-X(s)}_2 \,:\, 0\le u\le t \} \ge \delta ) \\
&\le C_{5,\ell} \exp(-C_{8,\ell} (\delta t^{-\bar\gamma_\ell})^{1/(1-\bar\gamma_\ell)}).
\end{align*}
\item There are $C_{7,r},C_{8,r}>0$ such that for all $t\in[0,\infty)$ and all $\delta\in[0,1]$,
\begin{align*}
C_{3,r} \exp(-C_{7,r} (\delta^{-1/\bar\gamma_\ell} t)^{\bar\gamma_r/(\bar\gamma_r-1)})
&\le \qopname\relax o{\mathbb{P}}( \sup\{ \norm{X(u)}_2 \,:\, 0\le u\le t \} \le \delta ) \\
&\le \qopname\relax o{\mathbb{P}}( \norm{X(t)}_2 \le \delta )
\end{align*}
and
\[ \qopname\relax o{\mathbb{P}}( \norm{X(t)}_2 \le \delta ) \le
C_{5,r} \exp(-C_{8,r} (\delta^{-1/\bar\gamma_\ell} t)^{\bar\gamma_r/(\bar\gamma_r-1)}). \]
\item For any $p>0$, there exist constants $C_9(p),C_{10}(p)>0$
such that for all $s\in[0,\infty)$ and all $t\in[0,1]$,
\begin{gather*}
C_9(p) \, t^{p\bar\gamma_\ell} \le \qopname\relax o{\mathbb{E}}(\norm{X(t)}_2^p) \qquad\text{and}\qquad
\qopname\relax o{\mathbb{E}}(\norm{X(s+t)-X(s)}_2^p) \le C_{10}(p) \, t^{p\bar\gamma_\ell}.
\end{gather*}
\item There are constants $C_{11},C_{12}>0$ such that for all $s\in[0,\infty)$
\begin{gather*}
\limsup_{t\searrow 0}
\frac{\norm{X(s+t)-X(s)}_2}{t^{\bar\gamma_\ell}(\log\log(1/t))^{1-\bar\gamma_\ell}} \le C_{11}
\rlap{\qquad and} \\
\liminf_{t\searrow 0}
\frac{\norm{X(t)}_2}{t^{\bar\gamma_\ell}(\log\log(1/t))^{-\bar\gamma_\ell(1-1/\bar\gamma_r)}} \ge C_{12}
\end{gather*}
hold almost surely. Note that $1-\bar\gamma_\ell\approx0.162475>0$ and
$-\bar\gamma_\ell(1-1/\bar\gamma_r)\approx-0.206594<0$.
\item The Hausdorff dimension $\dim_H X([0,\infty])$ of the path $X([0,\infty])$ almost surely satisfies
\[ \dim_H X([0,\infty]) = \frac{1}{\bar\gamma_\ell} = \frac{\log\bar\alpha}{\log2} \approx 1.193995. \]
\end{enumerate}
\end{theorem}
\begin{proof}
In order to prove the first statement choose $n\in\mathbb N$ such that $2^{-n}\le\delta\le2^{-(n-1)}$. Then, using Lemma~\ref{lemma:events},
\[ \qopname\relax o{\mathbb{P}}( \mathsf S_{1,n-1} \le t ) \le \qopname\relax o{\mathbb{P}}( \norm{X(t)}_2 \ge \delta ) \]
and
\begin{multline*}
\qopname\relax o{\mathbb{P}}( \sup\{ \norm{X(s+u)-X(s)}_2 \,:\, 0\le u\le t \} \ge \delta ) \\
\le \qopname\relax o{\mathbb{P}}( \mathsf T_{j-1,n+1} \ge s, \mathsf S_{j,n+1} \le t \text{ for some } j\ge1 ).
\end{multline*}
By conditioning as in Lemma~\ref{lemma:indep}, the distribution of $\mathsf S_{j,m}$ is equal to the distribution of $\frac16(\sqrt{205}-7)\bar\alpha^{-m}\bar\theta_i$ for some $i\in\{1,2,3\}$ and $\mathsf S_{j,m}$ is independent of $\mathsf T_{j-1,m}$. Hence the bounds on the tail probability follow from Lemma~\ref{lemma:mg-bounds}. More or less the same arguments yield the second statement. By integrating the bounds of the first statement we get the bounds on $\qopname\relax o{\mathbb{E}}(\norm{X(t)}_2^p)$ and $\qopname\relax o{\mathbb{E}}(\norm{X(s+t)-X(s)}_2^p)$, respectively. The fourth statement follows by the usual Borel-Cantelli argument. The path $X([0,\infty])$ is the limit set of a random recursive construction with multiple types. Thus the formula for the Hausdorff dimension follows from Theorem~3.8 in \cite{hattori2000exact}, where such random sets are studied in general.
\end{proof}
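Since the last statement of Theorem~\ref{theorem:prop-lerw} identifies $1/\bar\gamma_\ell$ with $\log\bar\alpha/\log2$, the numerical values $1-\bar\gamma_\ell\approx0.162475$ and $\log\bar\alpha/\log2\approx1.193995$ quoted there can be reproduced directly; the following Python lines (a sanity check only) do so.
\begin{verbatim}
from math import sqrt, log

alpha   = 4.0/3.0 + sqrt(205.0)/15.0
gamma_l = log(2.0) / log(alpha)
print(1.0 - gamma_l)             # ~ 0.162475
print(log(alpha) / log(2.0))     # ~ 1.193995, the Hausdorff dimension
\end{verbatim}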
\begin{remark}
The properties (1), (3), (4) proved above are slightly weaker forms of \cite[Theorem~4.3, Corollary~4.4, Theorem~4.7]{barlow1988brownian} (for Brownian motion) and \cite[Theorem~4.5, Corollary~4.6, Theorem~4.8]{kumagai1993construction} (for more general diffusion processes that contain Brownian motion as a special case), respectively. In several cases the statements of the previous theorem are formulated for the special increment $X(t)=X(t)-X(0)$ and not for a general increment $X(s+t)-X(s)$, that is, only the starting time $s=0$ is considered. One reason for the weaker statements is the lack of the Markov property. Another difficulty in the general case lies in the fact that parts of the curve that lie in different $k$\nobreakdash-\hskip0pt parts of the Sierpi\'nski gasket $K$ can still be close to each other near the vertices where these $k$\nobreakdash-\hskip0pt parts are connected. At the corner (time $s=0$), this cannot happen. It seems plausible, however, that the strong forms of the cited statements also hold in our case. Fortunately, the formula for the Hausdorff dimension does not rely on the first four properties, but only on the fact that the path $X([0,\infty])$ is the limit set of a specific random recursive construction with multiple types.
\end{remark}
\begin{remark}
We note that all we have proved in this section remains true if we replace $T_n$ by $S_n^3$. In particular, $\qopname\relax o{\mathsf{LI}}(u_1S_n^3u_2,\bar\alpha^n)$ converges almost surely in $(C,d_C)$ to a limit curve and the results of Theorems~\ref{theorem:conv}\nobreakdash--\ref{theorem:prop-lerw} hold with $S_n^3$ in place of $T_n$.
\end{remark}
\section{Limit of the tree metric}
\label{sec:metric}
Consider a generic $\omega\in\Omega$. Then $T_n(\omega)$ is a spanning tree on $G_n$ and it is the trace $\qopname\relax o{\mathsf{Tr}}^m_n T_m(\omega)$ for all $m\ge n$. Let $u,v$ be two vertices in $VG_n$ for some $n\ge0$. Their distance $d_{T_m(\omega)}(u,v)$ with respect to the spanning tree $T_m(\omega)$ is well-defined and Corollary~\ref{corollary:length} indicates that $\bar\alpha^{-m} d_{T_m(\omega)}(u,v)$ converges for $m\to\infty$, where $\bar\alpha = \frac43 + \frac1{15}\sqrt{205}$ is the dominating eigenvalue of Proposition~\ref{proposition:tree2}. If this limit exists for all $u,v$ in the countable set
\[ V_* = \bigcup_{n\ge0} VG_n = \bigcup_{w\in\mathbb W^*} \psi_w(VG_0), \]
and it is positive whenever $u \neq v$, then the limit defines a metric $d_{*,\omega}$ on $V_*$:
\[ d_{*,\omega}(u,v) = \lim_{m\to\infty} \bar\alpha^{-m} d_{T_m(\omega)}(u,v) \]
for all $u,v\in V_*$. In the following we show that $d_{*,\omega}$ exists for almost all $\omega\in\Omega$ and yields a random metric $d_*$ on $V_*$. Let $\mathbb M(V_*)$ be the set of all metrics on $V_*$. We equip $\mathbb M(V_*)$ with the $\sigma$\nobreakdash-\hskip0pt algebra $\mathcal M(V_*)$ which is induced by the mappings
\[ \mathbb M(V_*) \to \mathbb R, \quad d \mapsto d(u,v) \]
for $u,v\in V_*$. We recall some notions from metric theory, see for instance \cite{chiswell2001introduction}. A metric space $(X,d)$ is \emph{$0$\nobreakdash-\hskip0pt hyperbolic} if
\[ d(u,v) + d(x,y) \le \max\{ d(u,x) + d(v,y), d(u,y) + d(v,x) \} \]
holds for all $u,v,x,y\in X$ (\emph{four point condition}). A \emph{metric segment} in $(X,d)$ is the image of an isometric embedding $[a,b]\to X$ for some $a,b\in\mathbb R$. Finally, $(X,d)$ is called an \emph{$\mathbb R$\nobreakdash-\hskip0pt tree} if, for any $x,y\in X$, there is a unique arc connecting $x,y$ and this arc is a metric segment. We note that $(X,d)$ is an $\mathbb R$\nobreakdash-\hskip0pt tree if and only if $(X,d)$ is connected and $0$\nobreakdash-\hskip0pt hyperbolic, see \cite[Lemma~2.4.13]{chiswell2001introduction}.
\begin{theorem}\label{theorem:randommetric}
For almost all $\omega\in\Omega$ the limit
\[ d_{*,\omega}(u,v) = \lim_{m\to\infty} \bar\alpha^{-m} d_{T_m(\omega)}(u,v) \]
exists for all $u,v\in V_*$ and yields a metric $d_{*,\omega}$ on the set $V_*$, such that $(V_*,d_{*,\omega})$ is a $0$\nobreakdash-\hskip0pt hyperbolic and totally bounded metric space. Thus, for a suitable subset $\Omega'''\subseteq\Omega$ of probability $1$,
\[ \Omega'''\to\mathbb M(V_*), \quad \omega\mapsto d_{*,\omega} \]
is a random metric in $(\mathbb M(V_*),\mathcal M(V_*))$. Furthermore, for $\omega\in\Omega'''$ the Cauchy completion of $(V_*, d_{*,\omega})$ is a compact $\mathbb R$\nobreakdash-\hskip0pt tree.
\end{theorem}
\begin{proof}
For $x,y\in VG_0$ and $w\in\mathbb W^n$, define $\Omega(w,x,y)$ to be the set of all $\omega\in\Omega$ such that, whenever $x,y$ are connected in the restriction $\pi_w(T_n(\omega))$, the curve $\qopname\relax o{\mathsf{LI}}(x \pi_w(T_m(\omega)) y, \bar\alpha^m)$ converges in $(C,d_C)$ as $m\to\infty$, $m\ge n$, and the assertions of Theorems~\ref{theorem:conv}\nobreakdash--\ref{theorem:prop} hold. The usual conditioning argument shows that $\qopname\relax o{\mathbb{P}}(\Omega(w,x,y))=1$ for all $w\in\mathbb W^*$ and all $x,y\in VG_0$. Thus
\[ \Omega''' = \bigcap_{w\in\mathbb W^*}\bigcap_{x,y\in VG_0} \Omega(w,x,y) \]
has probability $1$. Fix an element $\omega\in\Omega'''$. Then for all $u,v\in V_*$ the limit
\[ d_{*,\omega}(u,v) = \lim_{m\to\infty} \bar\alpha^{-m} d_{T_m(\omega)}(u,v) \]
exists and is an element of $[0,\infty)$. By construction of $\Omega'''$, we have $d_{*,\omega}(u,v) > 0$ for all $u,v\in V_*$, $u\ne v$, which are neighbours in $G_n$ for some $n$. Hence $d_{*,\omega}(u,v) > 0$ for all $u,v\in V_*$, $u\ne v$. Furthermore, as $d_{T_m(\omega)}$ is the graph metric of the tree $T_m(\omega)$, it satisfies the triangle inequality and the four point condition. Thus the limit $d_{*,\omega}$ also satisfies the triangle inequality and the four point condition. Altogether we have proved that $(V_*,d_{*,\omega})$ is a $0$\nobreakdash-\hskip0pt hyperbolic metric space if $\omega\in\Omega'''$. For $x,y\in VG_0$ and $w\in\mathbb W^n$ define $A(w,x,y)$ to be the set of all $\omega\in\Omega'''$, such that, whenever $x,y$ are connected in the restriction $\pi_w(T_n(\omega))$, then $d_{*,\omega}(\psi_w(x),\psi_w(y))\le2^{-n}$. Using the Borel-Cantelli lemma together with the bounds of Lemma~\ref{lemma:mg-bounds}, we see that
\[ A_n = \bigcap_{w\in\mathbb W^n}\bigcap_{x,y\in VG_0} A(w,x,y) \]
holds eventually with probability $1$. Hence, for $\omega\in\Omega'''$, there is an $N=N(\omega)$ such that $\omega\in A_n$ for all $n\ge N$. Fix some $n\ge N$. For $x\in VG_n$ let $C_x = C_x(\omega)$ be the set of all $y\in VG_m$ ($m\ge n$), such that all vertices $v$ on the path $x T_m(\omega) y$ satisfy $\norm{v-x}_2\le2^{-n}$. If $y\in C_x\cap VG_n$, then $d_{*,\omega}(x,y)\le 2^{-n}$. If $y\in C_x\setminus VG_n$, then we can find $x=x_n,x_{n+1},\dotsc,x_m=y$, such that $x_k\in VG_k$ and $x_{k-1},x_k$ are either identical or neighbours in $T_k(\omega)$. Thus
\[ d_{*,\omega}(x,y) \le \sum_{k=n+1}^m d_{*,\omega}(x_{k-1},x_k) \le \sum_{k=n+1}^m 2^{-k} \le 2^{-n}. \]
Thus, if $B_{*,\omega}(x,2^{-n})$ denotes the ball of radius $2^{-n}$ centered at $x$ with respect to $d_{*,\omega}$, then $C_x\subseteq B_{*,\omega}(x,2^{-n})$. Hence
\[ V_* = \bigcup_{x\in VG_n} C_x = \bigcup_{x\in VG_n} B_{*,\omega}(x,2^{-n}), \]
which means that $(V_*,d_{*,\omega})$ is totally bounded. To check measurability we note that $\omega\mapsto d_{T_m(\omega)}(u,v)$ is measurable for fixed $u,v\in V_*$ (if $m$ is sufficiently large). Thus the limit $\omega\mapsto d_{*,\omega}(u,v)$ is measurable, too. By definition of $\mathcal M(V_*)$, this implies measurability of $\omega\mapsto d_{*,\omega}$.
In order to prove that the Cauchy completion $(\check V_{*,\omega}, \check d_{*,\omega})$ of $(V_*, d_{*,\omega})$ for $\omega\in\Omega'''$ is an $\mathbb R$\nobreakdash-\hskip0pt tree, it is sufficient to show that the completion is connected, as $0$\nobreakdash-\hskip0pt hyperbolicity is preserved by completion, see \cite[Lemma~2.2.11]{chiswell2001introduction}. We show that the completion contains a path from $u_1$ to any $x$. Let $x_1,x_2,\dotsc$ be a Cauchy sequence in $V_*$ with $x_n\to x$. Denote by $\alpha_n\colon[0,\infty]\to K$ the limit curve of $\qopname\relax o{\mathsf{LI}}(u_1 T_m(\omega) x_n, \bar\alpha^m)$ as $m\to\infty$, which exists by construction of $\Omega'''$. Then $D_n = \alpha_n^{-1}(V_*)$ is a dense subset of $[0,\infty]$ by Lemma~\ref{lemma:times}. Note that $t=d_{*,\omega}(u_1,\alpha_n(t))$ for all $t\in D_n$ such that $t \le \min\{s \,:\, \alpha_n(s)=x_n\}$. Therefore the restriction $\alpha_n\colon D_n\to V_*$ is continuous with respect to $d_{*,\omega}$ and thus has a continuous extension $\beta_n\colon[0,\infty]\to\check V_{*,\omega}$. Set $s_0=0$ and
\[ s_n = \max\{ t \in [0,\infty] \,:\, \beta_k = \beta_n \text{ on } [0,t] \text{ for all } k\ge n \}. \]
Then we have $s_0\le s_1 \le \dotsb$ and $\beta_n(s_n)\to x$, and
\[ \beta\colon[0,\infty]\to\check V_{*,\omega}, \quad
\beta(t) = \begin{cases}
u_1 & \text{if } t=0, \\
\beta_n(t) & \text{if } s_{n-1}<t\le s_n, \\
x & \text{otherwise,}
\end{cases} \]
is a continuous curve connecting $u_1$ and $x$ (whose image is a metric segment). Finally, $(\check V_{*,\omega}, \check d_{*,\omega})$ is compact for $\omega\in\Omega'''$, since it is the completion of the totally bounded metric space $(V_*,d_{*,\omega})$.
\end{proof}
Let $\omega$ be an element of the set $\Omega'''$ defined in the previous proof and let $(\check V_{*,\omega}, \check d_{*,\omega})$ be the Cauchy completion of $(V_*, d_{*,\omega})$. Consider an element $x\in\check V_{*,\omega}$. Suppose that $x_1,x_2,\dotsc$ is a Cauchy sequence in $(V_*,d_{*,\omega})$, such that $x_n\to x$ with respect to $\check d_{*,\omega}$. Then it is easy to see that $x_1,x_2,\dotsc$ is also a Cauchy sequence in $(V_*, \norm{\,\cdot\,}_2)$ and thus has a limit in $(K, \norm{\,\cdot\,}_2)$, which does not depend on the specific Cauchy sequence but only on $x\in \check V_{*,\omega}$. We write $\xi_\omega(x)$ to denote this limit in $(K, \norm{\,\cdot\,}_2)$. Then $\xi_\omega\colon \check V_{*,\omega}\to K$ is a well-defined, continuous map, such that the restriction $\xi_\omega|_{V_*}$ to $V_*$ is the identity.
\begin{lemma}\label{lemma:mult}
Let $\omega$ be in $\Omega'''$. Then $1\le\card{\xi_\omega^{-1}(x)}\le4$ for all $x\in V_*$ and $1\le\card{\xi_\omega^{-1}(x)}\le3$ for all $x\in K\setminus V_*$.
\end{lemma}
\begin{proof}
For every point $x\in K$ we can find a sequence in $V_*$ that converges to this point in $(K, \norm{\,\cdot\,}_2)$ and which is Cauchy in $(V_*,d_{*,\omega})$. Thus the map $\xi_\omega$ is surjective, whence $\card{\xi_\omega^{-1}(x)}\ge1$. As in the previous proof every sequence $x_1,x_2,\dotsc\in V_*$ converging to a point in $\xi_\omega^{-1}(x)$ in $(V_*,d_{*,\omega})$ yields a metric segment connecting $u_1$ and that point. Using the geometry of the Sierpi\'nski gasket it is easy to see that there are at most four (respectively three if $x\notin V_*$) distinct metric segments joining $u_1$ and a point in $\xi_\omega^{-1}(x)$. This proves the claim.
\end{proof}
\begin{theorem}\label{theorem:limit}
Let $\omega$ be an element of $\Omega'''$. Then the hitting time $\mathsf h(X(\omega))$ of the limit curve $X(\omega)$ in $u_2$ is equal to the distance $d_{*,\omega}(u_1,u_2)$. Furthermore, if $\gamma_\omega\colon[0,d_{*,\omega}(u_1,u_2)]\to\check V_{*,\omega}$ is the unique isometric embedding with $\gamma_\omega(0)=u_1$ and $\gamma_\omega(d_{*,\omega}(u_1,u_2))=u_2$, then
\[ X(t,\omega) = \xi_\omega(\gamma_\omega(t)) \]
for all $t\in[0,d_{*,\omega}(u_1,u_2)]$.
\end{theorem}
\begin{proof}
The statement is a consequence of the definition of the limit curve $X(\omega)$ and the limit metric $d_{*,\omega}$, see Theorem~\ref{theorem:conv} and Theorem~\ref{theorem:randommetric}.
\end{proof}
For $\omega\in\Omega'''$ define $A(\omega)$ to be the set $\{ x \in K \,:\, \card{\xi_\omega^{-1}(x)} > 1 \}$. These are points that ``can be reached from two (or more) different directions''. To understand how this happens, it is useful to consider spanning forests with two components: given for instance some $f \in \mathcal S_{\infty}^1$, every element $v$ of $V_*$ can be associated uniquely to one of the components: $v \in V(G_n)$ for some $n$, and $v$ either belongs to the same component as $u_1$ in $\qopname\relax o{\mathsf{Tr}}_m^{\infty} f$ for all $m \geq n$ or to the same component as $u_2$ and $u_3$, again for all $m \geq n$. There are, however, some points in the completion $K$ that can be reached as limits from both sides; they form the so-called ``interface''. In a spanning tree, there is only one component, but the same phenomenon can occur at higher levels, within certain $n$\nobreakdash-\hskip0pt parts on which the spanning tree induces a spanning forest with more than one component.
In the following we give a description of $A(\omega)$ in terms of Galton-Watson trees and show that the Hausdorff dimension $\dim_H A(\omega)$ is strictly less than $1$ for almost all $\omega$. For $f\in \mathcal Q_\infty$ and $n\ge0$ let $\check W_n(f)$ be the set of all $w\in\mathbb W^n$, such that $\psi_w(VG_0)$ contains vertices of two distinct components of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$. The union
\[ \check W(f) = \bigcup_{n\ge0} \check W_n(f) \]
induces a subtree of $\mathbb W^*$. On a single $n$\nobreakdash-\hskip0pt part $\psi_w(VG_0)$ with $w\in\check W_n(f)$ we always observe one of the following possibilities:
\begin{itemize}
\item The restriction $\pi_w(\qopname\relax o{\mathsf{Tr}}^\infty_n f)$ has two components and
these two components belong to two distinct components of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$.
In this case we set $\check\chi_w(f) = \chi_w(f) \in \{\comp1yy-,\comp2yy-,\comp3yy-\}$.
\item The restriction $\pi_w(\qopname\relax o{\mathsf{Tr}}^\infty_n f)$ has three components and
two of them belong to the same component of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$.
In this case we define $\check\chi_w(f) \in \{\comp4ynn,\comp4nyn,\comp4nny\}$
depending on which two of the three components in $\pi_w(\qopname\relax o{\mathsf{Tr}}^\infty_n f)$
belong to the same component of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$.
\item The restriction $\pi_w(\qopname\relax o{\mathsf{Tr}}^\infty_n f)$ has three components and
these three components belong to three distinct components of $\qopname\relax o{\mathsf{Tr}}^\infty_n f$.
In this case we set $\check\chi_w(f) = \chi_w(f) = \comp4yyy$.
\end{itemize}
Let $\mathcal{\check C} = \{\comp1yy-,\comp2yy-,\comp3yy-,\comp4ynn,\comp4nyn,\comp4nny,\comp4yyy\}$ and set
\[ \mathbold{\check\chi}(f) = (\check\chi_w(f))_{w\in\check W(f)}. \]
As in Section~\ref{subsection:component} it is easy to see that $\mathbold{\check\chi}(U_\infty)$ is a labelled multi-type Galton-Watson tree with types in $\mathcal{\check C}$, where $U_\infty$ is one of $S_\infty^1, S_\infty^2, S_\infty^3, R_\infty^{}$. The associated counting process $(\mathbold{\check c^{\scriptscriptstyle\#}}(U_\infty))_{n\ge0}$, which counts type occurrences in one generation up to symmetry, is a multi-type Galton-Watson process with three types, offspring generating function
\begin{align*}
\mathbold{\check g}(\mathbold z) = \Bigl(
&\tfrac1{10} (4z_1+3z_1^2+3z_2), \\
&\tfrac1{25} (6z_1+3z_1^2+z_1^3+6z_2+9z_1z_2), \\
&\tfrac1{25} z_1(3z_1+4z_1^2+9z_2+9z_3)
\Bigr)
\end{align*}
and mean matrix
\[ \mathbold{\check M} = \frac1{50} \cdot \begin{pmatrix}
50 & 15 & 0 \\
48 & 30 & 0 \\
72 & 18 & 18
\end{pmatrix}. \]
This mean matrix has the dominating eigenvalue $\check\alpha = \frac35\bar\alpha = \frac45 + \frac1{25}\sqrt{205} \approx 1.372712$. Define
\[ I(f) = \bigcap_{n\ge0} \bigcup_{w\in\smash{\check W_n(f)}} \psi_w(K). \]
Then $I(f)$ is the limit set of the component boundaries and
\[ \dim_H I(U_\infty) \le \frac{\log\check\alpha}{\log2}
= \frac{\log\bar\alpha}{\log2} - \frac{\log\frac53}{\log2} \approx 0.457029 \]
holds almost surely using \cite[Proposition~3.9]{tsujii1991markov}. It seems that other results on the Hausdorff dimension do not apply to this specific random recursive construction, so that we only obtain an upper bound. Of course, $I(T_\infty)=\emptyset$ and so $\dim_H I(T_\infty)=0$.
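The dominating eigenvalue of $\mathbold{\check M}$ and the identity $\frac{\log\check\alpha}{\log2} = \frac{\log\bar\alpha}{\log2} - \frac{\log\frac53}{\log2}$ behind this bound can be confirmed numerically; the following Python sketch (an illustration, not part of the argument) computes the eigenvalues and the two expressions.
\begin{verbatim}
import numpy as np

M = np.array([[50, 15,  0],
              [48, 30,  0],
              [72, 18, 18]]) / 50.0
check_alpha = max(np.linalg.eigvals(M).real)
bar_alpha   = 4.0/3.0 + np.sqrt(205.0)/15.0

print(check_alpha, 0.6 * bar_alpha)              # both ~ 1.372712
print(np.log2(check_alpha))                      # ~ 0.457029
print(np.log2(bar_alpha) - np.log2(5.0/3.0))     # the same value
\end{verbatim}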
\begin{proposition}\label{proposition:interface}
For $\omega\in\Omega'''$ we have
\[ A(\omega) = \bigcup_{w\in\mathbb W^*} \psi_w(I(\pi_w(T_\infty(\omega)))) \]
and thus
\[ \dim_H A(\omega) \le \frac{\log\check\alpha}{\log2}
= \frac{\log\bar\alpha}{\log2} - \frac{\log\frac53}{\log2} \approx 0.457029 \]
for almost all $\omega$.
\end{proposition}
\begin{proof}
Note that $A(\omega)$ contains $\psi_w(I(\pi_w(T_\infty(\omega))))$ for all $w\in\mathbb W^*$. On the other hand, if $x\in A(\omega)$, then $\xi_\omega^{-1}(x)$ contains at least two distinct points in $\check V_{*,\omega}$, say $x_1$ and $x_2$. Denote by $\overline{u_1x_1}$ (respectively $\overline{u_1x_2}$) the metric segment connecting $u_1$ and $x_1$ (respectively $x_2$). Then there is a word $w\in\mathbb W^*$ such that $x\in\psi_w(K)$ and
\[ \overline{u_1x_1} \cap \overline{u_1x_2} \cap \xi_\omega^{-1}(\psi_w(K)) = \emptyset. \]
This implies that $x\in\psi_w(I(\pi_w(T_\infty(\omega))))$. The usual conditioning argument shows that
\[ \dim_H \psi_w(I(\pi_w(T_\infty(\omega)))) \le \frac{\log\check\alpha}{\log2} \]
for almost all $\omega$. As $\mathbb W^*$ is a countable set and the Hausdorff dimension behaves nicely under countable unions the claim follows.
\end{proof}
\begin{remark}
Note the occurrence of the constant $\frac53$, which is the \emph{resistance scaling factor} of the Sierpi\'nski gasket. It also occurs prominently in the formula for the number of spanning trees (see \cite{teufl2011resistance} for the connection between resistance scaling and the number of spanning trees): if we regard $G_n$ as an electrical network, where each edge represents a unit resistor, then the effective resistance between two of the boundary vertices $u_1,u_2,u_3$ is $\frac23 \cdot (\frac53)^n$. There is a simple heuristic explanation why the identity
\[ \log\check\alpha = \log\bar\alpha - \log\tfrac53 \]
must hold: it is well known (cf.~\cite[p.~44, Theorem~1]{bollobas1998modern}) that the effective resistance between two vertices equals the number of \emph{thickets}, i.e., spanning forests with two components each containing one of the two vertices, divided by the number of spanning trees. For every spanning tree of $G_n$, one can obtain a thicket by removing an edge from the unique path between $u_1$ and $u_2$; conversely, we can turn a thicket into a spanning tree by inserting an edge that connects the two components at the interface. The identity now follows (at least heuristically) from a simple double-counting argument.
\end{remark}
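For the level-$1$ graph $G_1$ this double counting can be made explicit. The following Python sketch (an illustration with unit resistors; vertices $0,1,2$ are the corners and $3,4,5$ the edge midpoints) computes the effective resistance between $u_1$ and $u_2$, which equals $\frac23\cdot\frac53 = \frac{10}9$, the number of spanning trees via the matrix-tree theorem, and the resulting number of thickets.
\begin{verbatim}
import numpy as np

# level-1 Sierpinski graph: 0,1,2 corners u1,u2,u3; 3,4,5 edge midpoints
edges = [(0,3),(0,5),(3,5),(1,3),(1,4),(3,4),(2,4),(2,5),(4,5)]
L = np.zeros((6, 6))
for a, b in edges:
    L[a,a] += 1; L[b,b] += 1; L[a,b] -= 1; L[b,a] -= 1

e = np.zeros(6); e[0], e[1] = 1.0, -1.0
R = e @ np.linalg.pinv(L) @ e                    # effective resistance u1 <-> u2
trees = round(np.linalg.det(L[1:, 1:]))          # matrix-tree theorem
print(R, 10/9)                                   # ~ 1.1111 = 2/3 * 5/3
print(trees, round(trees * R))                   # 54 spanning trees, 60 thickets
\end{verbatim}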
\section{Other self-similar graphs}
\label{sec:other}
The same ideas apply to other self-similar graphs as well: it was shown in \cite{teufl2011number} that the recursions for counting spanning trees and forests in self-similar sequences of graphs have simple explicit solutions, as for the Sierpi\'nski graphs, whenever the number of ``boundary'' vertices is two (as, for example, for the graphs associated with the modified Koch curve, see Figure~\ref{figure:koch}) or three (as for the Sierpi\'nski graphs), provided that the automorphism group acts on the set of boundary vertices either with full symmetry or like the alternating group. For two boundary vertices, this technical condition is always satisfied. The explicit counting formulae guarantee that the projections are still measure-preserving, and all other arguments can be carried out in the same way as in the previous sections.
\begin{figure}[htb]
\centering
\def\fig#1#2#3#4{%
\scope[shift={#1}]
\mytikzkoch{}{#2}{#3}
\node[above left] at (60:1/2) {#4};
\endscope}
\begin{tikzpicture}[scale=2.5]
\fig{(0,0)}{}{\draw (0:0) node[vertex] {} -- (0:1) node[vertex] {};}{$G_0$}
\fig{(1.2,0)}{x}{\draw (0:0) node[vertex] {} -- (0:1) node[vertex] {};}{$G_1$}
\fig{(2.4,0)}{xx}{\draw (0:0) node[vertex] {} -- (0:1) node[vertex] {};}{$G_2$}
\fig{(4,0)}{xxxx}{\draw (0:0) -- (0:1);}{$K$}
\end{tikzpicture}
\caption{The modified Koch curve.}
\label{figure:koch}
\end{figure}
For two boundary vertices, the rescaling factor is precisely the average length of loop-erased random walk from one boundary vertex to the other in $G_1$ (the initial graph $G_0$ being a single edge), which is always a rational number. For example, for the sequence of graphs in Figure~\ref{figure:koch}, the rescaling constant is $\frac{10}{3}$ (in other words, the length of loop-erased random walk from one boundary vertex of $G_n$ to the other grows like $(\frac{10}{3})^n$). It follows that the Hausdorff dimension of the limit curve is almost surely $\log(\frac{10}{3})/\log3 \approx 1.095903274$ in this example. As a second example, consider the Sierpi\'nski graphs with two subdivisions on each edge in Figure~\ref{figure:sg2}: in this case, we find that the rescaling factor is $\frac1{735}(1431+\sqrt{1669656}\,)$ (it is a priori clear that it has to be algebraic of degree $\leq 2$, being an eigenvalue of a $2 \times 2$\nobreakdash-\hskip0pt matrix with rational entries), giving us a Hausdorff dimension of $\approx 1.192117286$ for the limit curve of loop-erased random walk.
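The quoted dimension values follow from the rescaling constants together with the geometric subdivision factor $3$ (in both constructions each side is divided into three parts); for illustration, in Python:
\begin{verbatim}
from math import log, sqrt

dim_koch = log(10.0/3.0) / log(3.0)              # modified Koch curve
rho      = (1431.0 + sqrt(1669656.0)) / 735.0    # two subdivisions per edge
dim_sg2  = log(rho) / log(3.0)
print(dim_koch, dim_sg2)                         # ~ 1.095903274, ~ 1.192117286
\end{verbatim}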
\begin{figure}[htb]
\centering
\def\fig#1#2#3#4{%
\scope[shift={#1}]
\mytikzsgb{}{#2}{#3}
\node[above left] at (60:1/2) {#4};
\endscope}
\begin{tikzpicture}[scale=2.5]
\fig{(0,0)}{}{\draw \mytikztri{--}{node[vertex] {}} -- cycle;}{$G_0$}
\fig{(1.2,0)}{x}{\draw \mytikztri{--}{node[vertex] {}} -- cycle;}{$G_1$}
\fig{(2.4,0)}{xx}{\draw \mytikztri{--}{node[vertex] {}} -- cycle;}{$G_2$}
\fig{(4,0)}{xxxx}{\fill \mytikztri{--}{} -- cycle;}{$K$}
\end{tikzpicture}
\caption{Sierpi\'nski graphs with two subdivisions.}
\label{figure:sg2}
\end{figure}
If the number of boundary vertices is four or more (which happens, for instance, for the higher-dimensional analogues of the Sierpi\'nski graphs), then more types of spanning forests have to be considered, and there are generally no exact counting formulae. However, asymptotic formulae should hold in such cases, making the projections ``asymptotically measure-preserving'', so that analogous results still hold. The details might be quite intricate though, and new geometric phenomena arise as well: for instance, with four boundary vertices it becomes possible that a loop-erased random walk on $G_n$ enters and leaves some of the copies of $G_k$ ($k < n$) more than once, which is not possible in the case of the Sierpi\'nski graphs that we considered.
\def\doi#1{\href{http://dx.doi.org/#1}{\protect\nolinkurl{doi:#1}}}
\bibliographystyle{amsplainurl}
| {
"timestamp": "2015-01-14T02:11:53",
"yymm": "1305",
"arxiv_id": "1305.5114",
"language": "en",
"url": "https://arxiv.org/abs/1305.5114",
"abstract": "We study spanning trees on Sierpinski graphs (i.e., finite approximations to the Sierpinski gasket) that are chosen uniformly at random. We construct a joint probability space for uniform spanning trees on every finite Sierpinski graph and show that this construction gives rise to a multi-type Galton-Watson tree. We derive a number of structural results, for instance on the degree distribution. The connection between uniform spanning trees and loop-erased random walk is then exploited to prove convergence of the latter to a continuous stochastic process. Some geometric properties of this limit process, such as the Hausdorff dimension, are investigated as well. The method is also applicable to other self-similar graphs with a sufficient degree of symmetry.",
"subjects": "Probability (math.PR)",
"title": "Uniform spanning trees on Sierpinski graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540710685614,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046559703658
} |
https://arxiv.org/abs/0704.2203 | On Abelian Difference Sets with Parameters of 3-dimensional Projective Geometries | A difference set is said to have classical parameters if $ (v,k, \lambda) = (\frac{q^d-1}{q-1}, \frac{q^{d-1}-1}{q-1}, \frac{q^{d-2}-1}{q-1}).$ The case $d=3$ corresponds to planar difference sets. We focus here on the family of abelian difference sets with $d=4$. The only known examples of such difference sets correspond to the projective geometries $PG(3,q)$. We consider an arbitrary difference set with the parameters of $PG(3,q)$ in an abelian group and establish constraints on its structure. In particular, we discern embedded substructures. | \section{Group Rings}
Let $G$ be a finite abelian group of order $v$ and let $\mathbb{Z}G$
denote the integral group ring of $G$.
Given an element $a = \sum a_ig_i \in \mathbb{Z}G$, we set
\[
a^{(-1)} = \sum a_i g_i^{-1}.
\]
Let $D$ be a $k$-subset of $G$, where $k\ge 1$.
By a standard abuse of notation we will use the letter $D$ to
represent both the set of elements $D$ and the corresponding group ring element $D =
\sum_{d \in D} d$.
We say that $D$ is a
$(v,k, \lambda)$-difference set in $G$ if $D$ satisfies the group ring equation
\[
DD^{(-1)} = \lambda G + n\cdot 1,
\]
where $ n: = k - \lambda$ is the \emph{order} of the difference set.
If $\lambda = 1$ the difference set is called
\emph{planar} and is associated with a projective plane of order $n$.
It is elementary to show that $v,k, \lambda$ are related by
the fundamental equation
\[
\lambda (v-1) = k(k-1)
\]
or equivalently
\begin{equation} \label{fund}
\lambda v = k^2 -n.
\end{equation}
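For instance, the planar parameters $(v,k,\lambda) = (7,3,1)$ satisfy $\lambda(v-1) = 6 = k(k-1)$ and $\lambda v = 7 = 3^2 - 2 = k^2 - n$.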
Let $G$ be a finite group and let $H$ be a subgroup of index $r$ in $G$.
Let $D$ be a $(v,k,\lambda)$-difference set in $G$.
For $i = 1,2, \ldots, r$ and distinct cosets $Hx_i$ of $H$ in $G$, we define
the \emph{intersection numbers}, $s_i$, of $D$ with respect to $H$ by
\[
s_i: = |D \cap Hx_i|.
\]
The following equations hold:
\begin{eqnarray}
\sum_{i=1}^r s_i = k \label{interno1} \\
\sum_{i=1}^r s_i^2 = \lambda |H| +n. \label{interno2}
\end{eqnarray}
We begin by drawing attention to an elementary result on the
distribution of elements of $D$ through cosets of any subgroup of $G$.
\begin{thm} \label{distribution}
Let $H$ be a subgroup of $G$ with $[G:H] =r$. Let $D$ be a $(v,k,
\lambda)$-difference set in $G$. Suppose that in a certain coset of
$H$ there are $s$ elements of $D$. Then
\begin{equation} \label{si}
\left| s - \frac{k}{r} \right| \leq \sqrt{n}\left( \frac{r-1}{r} \right).
\end{equation}
\end{thm}
\begin{proof}
We can take $s_r = s$ in equations \eqref{interno1} and
\eqref{interno2}. Then these read
$
\sum_{i=1}^{r-1} s_i = k - s
$
and
$
\sum_{i=1}^{r-1} s_i^2 = \lambda |H| +n - s^2.
$
The Cauchy-Schwarz inequality tells us that
$
\left( \sum^{r-1}_{i=1} s_i \right)^2 \leq (r-1) \left( \sum_{i=1}^{r-1} s_i^2 \right)
$
and hence
\[
(k-s)^2 \leq (r-1)( \lambda |H| +n -s^2).
\]
Multiplying out and
noting that $v = r|H|$ and using \eqref{fund}
this simplifies to
\[
rs^2 - 2ks +2n - rn +\lambda |H| \leq 0.
\]
Completing the square
and using \eqref{fund}
again, we obtain
\[
(s - \frac{k}{r})^2 \leq \frac{n(r-1)^2}{r^2}
\]
and hence the result.
\end{proof}
We observe that the mean distribution of $k$ elements across the $r$
cosets is $\frac{k}{r}$ so we can see from \eqref{si} that the number of
elements of $D$ in any coset is within $\sqrt{n}$ of this mean value.
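As a concrete illustration, take the classical $(15,7,3)$ Singer difference set $D = \{0,1,2,4,5,8,10\}$ in the additively written group $\mathbb{Z}_{15}$; the following Python sketch checks equations \eqref{interno1} and \eqref{interno2} and the bound \eqref{si} for the subgroups of orders $3$ and $5$.
\begin{verbatim}
from math import sqrt
from collections import Counter

v, k, lam = 15, 7, 3
n = k - lam
D = {0, 1, 2, 4, 5, 8, 10}

for H_order in (3, 5):
    r = v // H_order                 # cosets of H are the residue classes mod r
    s = Counter(d % r for d in D)
    sizes = [s[i] for i in range(r)]
    assert sum(sizes) == k
    assert sum(x*x for x in sizes) == lam * H_order + n
    bound = sqrt(n) * (r - 1) / r
    assert all(abs(x - k/r) <= bound + 1e-12 for x in sizes)
    print(H_order, sizes)            # |H| = 3: [3,1,1,1,1];  |H| = 5: [1,3,3]
\end{verbatim}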
We say that an automorphism $\sigma$ of $G$
is a \emph{multiplier} for $D$ if $\sigma(D) = gD$ for some $g \in
G$. More particularly, we say $\sigma$ is a (numerical) multiplier
for $D$ if $\sigma(x)=x^m$ for all $x\in G$, where $m$ is an integer
relatively prime to $v$.
We call $gD$ a \emph{translate} of the difference set $D$. Clearly
$gD$ is itself a difference set.
We say that the difference set $D$ is \emph{normalized} if
\[
\prod_{d
\in D} d =1.
\]
It is elementary
to show that any difference set with $\gcd(v,k)=1$ has a unique translate which is
normalized. It is straightforward to prove that such a normalized difference set is fixed set-wise by any
numerical multiplier of $D$.
We quote now a well known refinement of the multiplier theorem of Marshall Hall. The proof can be found in
\cite[Section VI.4]{bjl}.
\begin{thm}[M. Hall] \label{mult}
Let $D$ be an abelian $(v,k, \lambda)$-difference set where $n = k -
\lambda$ is a power of a prime $p$ and $\gcd(p,v)=1$. Then the mapping $\sigma: x \to
x^p$ is a multiplier for $D$.
\end{thm}
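For example, the planar $(7,3,1)$-difference set $D = \{1,2,4\}$ in the additively written group $\mathbb{Z}_7$ has order $n = 2$, a power of the prime $2$ with $\gcd(2,7)=1$, and indeed $2D = \{2,4,1\} = D$, so the multiplier $x \mapsto 2x$ fixes this (normalized) translate set-wise.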
We also state an abridged version of the Mann Test. A proof of the general test can be found in \cite{bjl}.
\begin{thm}[Mann Test] \label{mann}
Let $D$ be a $(v,k, \lambda)$-difference set in an abelian group $G$ of order
$v$. Let $U$ be a subgroup of $G$ and let $G/U$ have exponent
$u^*$. Suppose $p$ is a prime not dividing $u^*$ and $p^f \equiv -1
\mod u^*$ for some $f \in \mathbb{N}$. Then the following hold:
\begin{enumerate}
\item $n=p^{2j}n'$, where
$\gcd(p,n')=1$, for some $j \in \mathbb{Z}$.
\item For all
cosets $Ug$ of $U$ in $G$, the corresponding
intersection numbers $|D\cap Ug|$ of
$D$ relative to $U$ are congruent modulo $ p^j$.
\item $p^{j} \leq |U|$.
\end{enumerate}
\end{thm}
\section{Difference Sets with Classical Parameters}
Suppose now that $D$ is a $(v,k,\lambda)$-difference set
with parameters \linebreak
\begin{equation} \label{classic}
(v,k, \lambda) = \left( \frac{q^d-1}{q-1}, \frac{q^{d-1}-1}{q-1},
\frac{q^{d-2}-1}{q-1} \right),
\end{equation}
where $q$ is a power of a prime.
Any difference set with these parameters is said to have
\emph{classical parameters}.
The order of these difference sets is $q^{d-2}$ so
Theorem \ref{mult} applies.
If $d=3$ the difference set is said to be planar and this is the case
which has received the most interest.
For the rest of this paper, $G$ will be an abelian group supporting a difference set
with classical parameters \eqref{classic} with $d=4$. So
\begin{equation} \label{d=4}
|G| = \frac{q^{4s}-1}{q^s-1} = (q^s+1)(q^{2s}+1)
\end{equation}
where $q^s$ is a power of a prime number.
We let $H$ be a subgroup of $G$ of order $q^s+1$.
It is known in the folklore of the subject that $D$ has $2$-valued
intersection sizes with the cosets of $H$.
We gather here some useful results
on abelian difference sets with classical parameters which apply to the
case under investigation. The proofs rely on an application of the
Mann Test and can be found in \cite[Theorems 3,4,7]{kpj}.
\begin{thm} \label{kj}
Let $D$ be a normalized difference set with classical parameters in an
abelian group $G$ where $|G| = \dfrac{q^{4s}-1}{q^s-1}$ and let $H$ be
a subgroup of $G$ of order $q^s+1$, so $[G:H] = q^{2s}+1$. Then the
following hold:
\begin{enumerate}
\item $H$ is the unique subgroup of $G$ of order $q^s+1$;
\item $Syl_2(G)$ is cyclic, with generator $z$, say;
\item If $q$ is odd, then $Hz \subseteq D$ and $|D \cap Hx| =1$ for
any other coset $Hx$ of $H$;
\item If $q$ is even, then $H \subseteq D$ and $|D \cap Hx| =1$ for
any other coset $Hx$ of $H$.
\end{enumerate}
\end{thm}
We note that by \eqref{si},
\[
|D\cap H| \leq \frac{k}{r} + \sqrt{n}\left( \frac{r-1}{r} \right)
\]
for any subgroup $H$ of $G$ of index $r$ where $G$ contains a
$(v,k,\lambda)$-difference set.
In our case here, we have equality since a simple check verifies that
\[
\frac{k+\sqrt{n}(r-1)}{r} =
\frac{q^{2s}+q^s+1+q^s(q^{2s}+1-1)}{q^{2s}+1}
= q^s+1 = |H|.
\]
It is an interesting observation that $H$ is the largest subgroup that can possibly be
contained inside a non-trivial difference set. The
spread of the elements of $D$ through the cosets of $H$ is as close to
the mean value $\frac{k}{r}$ as possible, so the distribution is unbiased.
\section{Singer Difference Sets and PG$(3,q)$}
Let $K = \mathbb{F}_q$ be the finite field with $q$ elements and let
$F= \mathbb{F}_{q^d}$. We view $F$ as a vector space over $K$ and
let $K^*$ denote the multiplicative group of
nonzero elements of $K$.
Let $\pi$ be the natural epimorphism
$\pi:F^* \to F^*/K^*$.
Let $H$ be a
$K$-hyperplane in $F$.
Then it is well known that $D:= \pi(H \backslash \{0\})$
is a Singer difference set
in $F^*/K^*$.
We often choose $H$ to be the elements of
trace zero in $F$.
If $M$ is an intermediate field between $K$ and $F$,
we let $Tr_{F/M}$ denote the trace form from $F$ into $M$.
$Tr_{M/K}$ is similarly defined.
If $\sigma$ denotes the Frobenius mapping $\sigma: x \to x^q$, then the
Galois group of $F$ over $M$ is generated by an appropriate power of $\sigma$.
The trace of any element is given by the sum of its Galois
conjugates.
\begin{thm} \label{extn}
Let $K = \mathbb{F}_q$ be the finite field with $q$ elements. Let $F$
be the extension field of $K$ of degree $ab$, where $a,b \in
\mathbb{N}$. Let $M$ be the
extension field of degree $a$ and $N$ be the extension field of degree $b$
over $K$.
Let $D$ be the $M$-hyperplane in $F$ defined
by
\[
D = \{ x \in F: Tr_{F/M}(x) = 0 \}
\]
and let $E$ be the $K$-hyperplane in $N$ given by
\[
E = \{x \in N: Tr_{N/K}(x) = 0 \}.
\]
If $\gcd(a,b)=1$ then $E \subseteq D$ as subsets of $F$.
\end{thm}
\begin{proof}
It is enough to determine when we have
$Tr_{F/M}(x) = Tr_{N/K}(x)$, for an element $x \in N$.
In our situation,
\[
Tr_{F/M}(x) = x + x^{q^a} +x^{q^{2a}} + \ldots + x^{q^{(b-1)a}},
\]
while
\[
Tr_{N/K}(x) = x+ x^q + x^{q^2} + \ldots + x^{q^{b-1}}.
\]
Let $x \in N$. Then $x^{q^b} = x$. Write
$a = sb+t$,
where $s,t \in
\mathbb{Z}$ and $ 0 \leq t \leq b-1$.
Then
\[
x^{q^a} = x^{q^{sb+t}} = x^{q^{sb}q^t} = (x^{q^{sb}})^{q^t} = x^{q^t}.
\]
Considering $x$ as an element of $F$,
\begin{eqnarray*}
Tr_{F/M}(x) & = & x+ x^{q^a} + \ldots + x^{q^{a(b-1)}} \\
& = & x+ x^{q^t} + \ldots + x^{q^{t(b-1)}}
\end{eqnarray*}
where $t, 2t, \ldots , (b-1)t$ are all interpreted modulo $b$.
If the residues $t, 2t, \ldots , (b-1)t$ are all distinct modulo $b$, then the above
expression becomes
\begin{eqnarray*}
& = & x +x^q + \dots + x^{q^{b-1}} \\
& = & Tr_{N/K}(x).
\end{eqnarray*}
If $\gcd(a,b) = 1$ then these residues are distinct and the result follows.
\end{proof}
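To make the trace computation concrete, consider the smallest case relevant below: $a=4$ and $b=3$, so $\gcd(a,b)=1$. For $x \in N$ we have $x^{q^3}=x$, and therefore
\[
Tr_{F/M}(x) = x + x^{q^4} + x^{q^8} = x + x^{q} + x^{q^2} = Tr_{N/K}(x),
\]
since $4 \equiv 1$ and $8 \equiv 2 \pmod 3$.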
In particular,
if $a=4$ and $b$ is odd the condition in Theorem \ref{extn} is satisfied and
$Tr_{F/M}(x) = Tr_{N/K}(x)$ for any $x \in N$.
This gives us the following:
\begin{cor} \label{singer}
Let $D$ be the Singer difference set in a group $G$ with parameters
$\left( \frac{q^{4s}-1}{q^s-1}, \frac{q^{3s}-1}{q^s-1}, \frac{q^{2s}-1}{q^s-1} \right)$
derived from the trace zero hyperplane, where $s$ is odd.
Let $R$ be the subgroup of $G$ of order $\frac{q^4-1}{q-1}$.
Then $D \cap R$ is a
$\left( \frac{q^{4}-1}{q-1}, \frac{q^{3}-1}{q-1}, \frac{q^{2}-1}{q-1} \right)$-
Singer difference set in $R$.
\end{cor}
\section{Abelian Difference Sets with Parameters of PG$(3,q)$}
We mention here that the Singer difference sets are the only known examples of
difference sets with these parameters in abelian groups.
It has not been proved that this is the only structure possible.
Certainly it is a difficult open question whether all abelian planar difference sets are
equivalent to the planar Singer difference sets.
We will now
consider an arbitrary difference set with the parameters
of $PG(3,q)$ in an abelian group and
establish
constraints on its structure.
In particular, we try to generalize Corollary \ref{singer} to
any abelian difference set with these parameters.
Generalizing from cyclic groups to abelian groups requires some
slightly clumsy looking technical conditions in
our hypotheses.
\begin{lem} \label{Mfix}
Let $G$ be an abelian group with $|G|=(q^s+1)(q^{2s}+1)$, where $s$ is
an odd integer. Suppose that $G$ contains a difference set with classical
parameters.
Let $\tau$ denote the (multiplier) automorphism $\tau: x \to
x^{q^4}$. Let $M$ be any subgroup of order $ (q+1)(q^2+1)$ in $G$.
Then $M \le G^{\tau}$, where $G^{\tau}$ denotes the subgroup of fixed
points of $\tau$. Furthermore, if $Syl_r(G)$ is cyclic for all
prime numbers $r$ dividing $b$ or $c$ where $b = \gcd (q+1,s)$
and $c = \gcd(q^2+1,s)$ then $M = G^{\tau}$.
\end{lem}
\begin{proof}
We observe that as $s$ is odd we have
\[
q^s+1 = (q+1)(q^{s-1}-q^{s-2} + \ldots -q+1)
\]
and correspondingly
\[
q^{2s}+1 = (q^2+1)(q^{2(s-1)}-q^{2(s-2)} + \ldots -q^2+1).
\]
Now $x \in G^{\tau}$ if and only if $x^{q^4-1} =1$ \emph{i.e.}
$
x^{(q-1)(q+1)(q^2+1)} =1.
$
From this it is clear that $M \leq G^{\tau}$, because
$m^{(q+1)(q^2+1)} =1$ for all $m \in M$.
We also note that $\gcd(q^4-1, |G|) = (q+1)(q^2+1)$.
Since $G$ contains a difference set with classical parameters,
the Sylow $2$-subgroup
of $G$ is cyclic by Theorem \ref{kj}.
Let $r$ be a prime
dividing $\gcd(q+1, q^{s-1}-q^{s-2}+ \ldots -q+1)$. Now, since $r$
divides $q+1$, we deduce that
$
q \equiv -1 \mod r,
$
and, since $r$ divides $q^{s-1}-q^{s-2}+ \ldots -q+1$, we see that
\[
(-1)^{s-1}-(-1)^{s-2} + \ldots -(-1)+1 \equiv 0 \mod r,
\]
from which we conclude that
$
s \equiv 0 \mod r.
$
Hence $r$ divides $s$. We observe similarly that if $r$ is a prime
dividing $\gcd(q^2+1, \dfrac{q^{2s}+1}{q^2+1})$, then again, $r$ divides $s$.
We let $b = \gcd (q+1,s)$ and $c = \gcd(q^2+1,s)$.
If, for
all prime divisors $r$ of $b$ or of $c$,
each Sylow $r$-subgroup is cyclic
then $M =
G^{\tau}$.
We note that if $s$ is a large enough prime
($s >q^2+1$ for instance), then $b = c =1$
and this
condition will be automatically satisfied.
Of course, if $G$ is cyclic then $G^{\tau} = M$.
\end{proof}
We observe that the same power of $2$ divides both $|M|$ and $|G|$,
since
$[G:M]$ is odd,
and hence $M$ contains the Sylow
2-subgroup of $G$, which is cyclic by Theorem \ref{kj}, generated by $z$.
As before, let $H$ be the subgroup of $G$ of order $|H| = q^s+1$.
We have that $Hz \subseteq D$ by Theorem \ref{kj}.
To generalize Corollary \ref{singer} we would like to show that $D
\cap M$ is a difference set for $M$. We first show,
subject to
weak restrictions, that $D \cap M$ has the correct size to be a
difference set with classical parameters.
\begin{lem} \label{size}
Let $D$ be a normalized difference set in an abelian group $G$ of
order $(q^s+1)(q^{2s}+1)$, where $s$ is odd. Let $M$ be a subgroup of $G$ with $|M| =
(q+1)(q^2+1)$. Let $b = \gcd(q+1,s)$ and $c = \gcd(q^2+1,s)$. For each prime divisor $r$ of
$b$ or of $c$, suppose that $Syl_r(G)$ is cyclic.
Then $|D \cap M| = q^2+q+1$.
\end{lem}
\begin{proof}
The conditions on $Syl_r(G)$ being cyclic ensure that $M = G^{\tau}$ here,
as in Lemma \ref{Mfix}.
Let $H$ be the subgroup of $G$ of order $q^s+1$.
As shown in the proof of Lemma \ref{Mfix}, the hypotheses ensure that
\[
\gcd(q+1, \dfrac{q^s+1}{q+1}) =1
\]
and hence we can express $H$ as a direct product
$
H = AB,
$
where $|A| = q+1$ and $|B| = \dfrac{q^s+1}{q+1}$
with $A \cap B = 1$.
By the above and the cyclicity of the Sylow $2$-subgroup, $A$
is the unique
subgroup of order $q+1$ in $G$.
Now, since $\gcd(|M|,|H|) = \gcd((q+1)(q^2+1),q^s+1) = q+1$,
we can deduce that
$
H \cap M = A
$
and thus $H \cap M$ is the unique subgroup of order $q+1$ in $G$.
Now $|M:H \cap M| = q^2+1$. We can decompose $M$ as
\begin{displaymath}
M = \bigcup^{q^2+1}_{i=1} (H \cap M)m_i
\end{displaymath}
where $ m_1, \ldots , m_{q^2+1} $ are distinct coset representatives of
$H \cap M$ in $M$.
Then we claim that
\[
\bigcup^{q^2+1}_{i=1} H m_i
\]
is a union of different cosets of $H$ in $G$.
For $Hm_i = Hm_j$ implies $m_im_j^{-1} \in H \cap M $, since it is
clearly in $M$,
and hence $ (H \cap M)m_i = (H \cap M)m_j$ from which $m_i = m_j$,
proving the claim.
As observed above,
$M$ contains the Sylow $2$-subgroup of $G$. If $z$ is a generator for
$Syl_2(G)$ then
$z \in M$ and
$ z \notin H \cap M$, since $H$ does not contain $Syl_2(G)$.
Choose $m_1 = z$. Since $Hm_1 \subset D$, we have that
\[
(H \cap M) m_1 \subset D.
\]
Now each of the distinct cosets $Hm_i$ for $i \geq 2$ contains a unique element
of $D$, by Theorem \ref{kj} again.
So for each such representative $m_i$, there exists a unique $h_i \in H$
with
$
h_im_i \in D.
$
The multiplier $\tau: x \to x^{q^4}$ fixes $M$ by Lemma \ref{Mfix} and
\[
\tau(h_i) \tau(m_i) \in \tau(D) = D,
\]
and thus
\[
\tau(h_i)m_i \in D.
\]
Also, $\tau(h_i) \in H$, so by the uniqueness of $h_i$ we must have
$
\tau(h_i)=h_i.
$
Since $M = G^{\tau}$ here,
we deduce that $h_i \in M$ for each $i$ and thus
$h_im_i \in D \cap M$ for $i \geq 2$.
We conclude that
\[
D \cap M = (H \cap M) m_1 \cup \{h_i m_i :i=2,3,\ldots , q^2+1 \}.
\]
Hence $|D \cap M| = (q+1) + q^2 = q^2+q+1$.
\end{proof}
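For instance, taking $q=2$ and $s=3$ in Lemma \ref{size} (assuming additionally that the Sylow $3$-subgroup of $G$ is cyclic, as the lemma requires since $b=\gcd(3,3)=3$) gives $|G| = (2^3+1)(2^6+1) = 585$, $|M| = (2+1)(2^2+1) = 15$ and $|D \cap M| = 2^2+2+1 = 7$, which is the size of a $(15,7,3)$-difference set.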
The following theorem is our generalization of Corollary
\ref{singer} to abelian difference sets.
Apart from
Jungnickel and Vedder's result on
planar difference sets with square order
(Theorem \ref{rod4} here),
this is the only case we know of
where the
parameters of the difference set guarantee a subdifference set.
\begin{thm} \label{main2}
Let $D$ be a normalized difference set with classical parameters in an
abelian group $G$ of order $(q^s+1)(q^{2s}+1)$, where $s$ is an odd
prime with $s \ge q$ and where $s \nmid q^2+1$.
Let $M$ be a subgroup of $G$ of order $(q+1)(q^2+1)$.
Then $D \cap M$ is a normalized
difference set
with classical parameters
in $M$.
\end{thm}
\begin{proof}
Our hypothesis that
$s \nmid q^2+1$ guarantees,
by Lemma \ref{Mfix},
that $M$ is the group of fixed points of the multiplier $\tau: x \to
x^{q^4}$. The orbits of $\tau$ have length dividing $s$
and since $s$ is a prime number,
the orbits of $\tau$ have length $1$ or $s$.
The orbits of length $1$ correspond to elements of $M$, since $M = G^{\tau}$.
Let $g \in M$ with $g \neq 1$.
Then there exist $\lambda = q^s+1$ ordered pairs
$(a_i,b_i) \in D \times D$ with $g = a_ib_i^{-1}$.
Now
\[
\tau(g) = g = \tau(a_i) \tau(b_i^{-1}) = \tau(a_i) \tau(b_i)^{-1}
\]
and
\[
(\tau(a_i),\tau(b_i)) \in D \times D.
\]
So the $\lambda$ ordered pairs representing $g$ come in multiplier orbits of
length $1$ or $s$.
Now, since $s$ is prime, we have
\begin{equation} \label{cong}
\lambda = q^s+1 \equiv q+1 \mod s
\end{equation}
and we see that if $s>q+1$,
we must have $q+1$ ordered pairs which are fixed by $\tau$.
Thus each element of $M$ can be represented by exactly $q+1$
pairs $(a_i,b_i) \in D \cap M \times D \cap M$.
Indeed, even if $s=q$ is a prime, or in the case where $s = q+1$ and $s$ is a
Fermat prime, the conclusion remains valid. To show this, we observe that
each element of $M$ can be represented at least twice as a
``difference'' from $D \cap M \times D \cap M$. This is because
\[
D \cap M = (H \cap M) m_1 \cup \{h_i m_i :i=2,3,\ldots , q^2+1 \}
\]
from the arguments in Lemma \ref{size}
and the products $ab^{-1}$ where $a \in (H \cap M)m_1$ and $b \in \{\cup
h_i m_i \} $ cover each element of $M$ as do the products
$ab^{-1}$ where $a \in \{\cup h_i m_i \} $ and $b \in (H \cap M)m_1$.
So \eqref{cong} will guarantee the result if $s>q-1$.
\end{proof}
\section{Another Subgroup with two-valued intersection numbers}
In our case of difference sets with classical parameters
where $d=4$,
we have a second subgroup whose intersection numbers are two-valued.
\begin{thm} \label{DintK}
Let $G$ be an abelian group with
$
|G| = (q+1)(q^2+1),
$
where $q$ is a power of a prime, and let $D$ be a normalized difference set with classical parameters in $G$.
Then,
\begin{enumerate}
\item There is a unique subgroup $K$ of order $q^2+1$ in $G$.
\item
$
|D \cap Kx| = \begin{cases}
1 & \textit{for one distinguished coset} \\
q+1 & \textit{for the other } q \textit{ cosets}.
\end{cases}
$
\end{enumerate}
\end{thm}
\begin{proof}
Firstly, if $q$ is even then the Sylow $2$-subgroup of $G$ is trivial, while if $q$ is odd then Theorem \ref{kj}
tells us that the Sylow $2$-subgroup is cyclic. Since
$$\gcd(|K|,|G:K|) = \gcd(q^2+1,q+1) $$
is a divisor of $2$, we deduce that $K$ is unique.
We note that $|G:K|= q+1$ and that $K$ satisfies the role of $U$ in our statement of the Mann Test,
Theorem \ref{mann}.
Letting $s_i$ denote the intersection number $|D \cap Kx_i|$, we deduce from the Mann Test part (c) that
all the $s_i$ are congruent to each other modulo $q$, say $s_i \equiv y \mod q$, where $0\le y<q$.
As before, we have
\begin{equation} \label{k}
\sum^{q+1}_{i=1} s_i = k = q^2+q+1
\end{equation}
and hence
\[
(q+1)y +rq = q^2+q+1,
\]
for some $r \in \mathbb{Z}$.
It is straightforward to see that $y = 1$ and hence $r=q$.
Since there are $q+1$ cosets,
we conclude that at least one coset of $K$ has intersection size $1$ with $D$ (since otherwise
all $s_i$ satisfy $s_i\ge q+1$, which is impossible from \eqref{k}).
Finally, the Cauchy-Schwarz inequality completes the proof.
Recall that
\begin{equation} \label{CS}
\left( \sum_{i=1}^n a_i \right)^2 \leq n \left( \sum_{i=1}^n a_i^2 \right),
\end{equation}
with equality if and only if all the $a_i$ are equal.
Letting $s_{q+1}=1$, we have that
\[
\sum_{i=1}^q s_i = (q^2+q+1) -1 = q^2+q
\]
while \eqref{interno2} yields
\[
\sum_{i=1}^{q} s_i^2 = ((q+1)(q^2+1)+q^2 )- 1 = q(q+1)^2 .
\]
We now have equality in \eqref{CS}
and hence all the $s_i$ must be equal to each other.
By counting, $s_i = q+1$ for all the other intersection numbers.
\end{proof}
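As a small numerical illustration of Theorem \ref{DintK} (our own check): for $q=2$ we have $|G| = 15$, $|K| = 5$ and $k = 7$, and the three cosets of $K$ meet $D$ in sizes $1, 3, 3$; indeed $1+3+3 = 7 = k$ and $1^2+3^2+3^2 = 19 = (q+1)(q^2+1)+q^2$, in agreement with the counts used in the proof.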
We note that since $|G:K| = q+1$,
the multiplier $\sigma:x \to x^q$ is an involution on the cosets of $K$.
Now the coset $Kx$ is fixed by $\sigma$ if $Kx = Kx^q$
which occurs if and only if $x^{q-1} \in K$.
Since $\gcd(q-1,q+1)$ divides $2$ and since the Sylow $2$-subgroup
of $G$ is cyclic, the only cosets fixed by $\sigma$ are $K$ and $Kw$,
where $w^2 \in K$ but $w \notin K$.
Since $|K| = q^2+1 \equiv 2 \mod 4$ when $q$ is odd, the unique element of order $2$ in $G$ lies in $K$.
Hence $w$ must be an element of order $4$ in $G$
and the distinguished coset is either $K$ itself or $Kw$.
In particular, when $q$ is even, there is only one coset, $K$ itself,
fixed by $\sigma$. Hence $D \cap K = \{1 \}$ in this case. We summarize
this in the following:
\begin{cor} \label{HK}
Let $D$ be a normalized $(v,k, \lambda)$-abelian difference set in $G$ with parameters \eqref{d=4}.
Suppose $q$ is even, say $q=2^s$ so that $|G|= (2^s+1)(2^{2s}+1) $ and $G$ is a direct
product $G=HK$ where $|H| = 2^s+1$ and $|K| = 2^{2s} +1$. Then $H \subseteq D$ and
$|D \cap Hx|=1$ for each other coset of $H$.
Furthermore $D \cap K = \{ 1\}$ and $|D \cap Kx| = 2^s+1$ for each other coset of $K$.
\end{cor}
\section{Minimal Difference Sets and Conjectures}
\begin{thm} \label{1573}
Let $D$ be a normalized difference set in an abelian group $G$ with parameters
\[
\left( \frac{2^{4s}-1}{2^s-1}, \frac{2^{3s}-1}{2^s-1} , \frac{ 2^{2s}-1}{2^s-1} \right).
\]
Suppose $s$ is odd. Then $G$ has a subgroup, $M$, of order $15$ and
$D \cap M$ is a $(15,7,3)$-difference set in $M$.
\end{thm}
\begin{proof}
As in Corollary \ref{HK}, we write
$
G = HK,
$
where $|H| = 2^s+1$ and $|K| = 2^{2s}+1$.
Since $s$ is odd, we have that
$
2^s+1 \equiv 0 \mod 3
$
and
$
2^{2s}+1 \equiv 0 \mod 5.
$
So $3$ divides $|H|$ and $5$ divides $|K|$.
Let $k \in K$ have order $5$ in $K$. Then since $|D \cap Hk|=1$, by Corollary \ref{HK},
there exists a unique $h \in H$ with $hk \in D$.
By Theorem \ref{mult}, $\sigma: x \to x^2$ is a multiplier fixing
$D$.
So
\[
\sigma^4(hk) = \sigma^4(h)\sigma^4(k) = \sigma^4(h) k \in D
\]
since $\sigma$ fixes $D$. Now, since $h$ is unique,
$ \sigma^4 (h) = h$, that is, $h^{16}=h$ and so $h^{15}=1$; as $h \in H$ with $|H| = 2^s+1$ and $\gcd(15, 2^s+1)=3$ (note that $5 \nmid 2^s+1$ when $s$ is odd), we conclude that
$ h^3 = 1$.
Finally $h \neq 1$ since $D \cap K = 1$, by Corollary \ref{HK}.
Let $M=\langle hk\rangle$ be the subgroup of $G$ of order $15$. Then
\[
D \cap M = \{ 1, h, h^2, hk,h^2k^2,hk^4,h^2k^3 \}
\]
which is a $(15,7,3)$-difference set for $M$. It can be seen directly
in this case that
each element of $M$ arises exactly $3$ times as a difference from this set.
\end{proof}
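The verification in the last sentence of the proof is easily mechanised. The following short Python check (an illustration of ours; the coordinates encode $h^a k^b$ as $(a,b) \in \mathbb{Z}_3 \times \mathbb{Z}_5$) confirms that every non-identity element of $M$ occurs exactly $\lambda = 3$ times as a ``difference'' from the displayed set.
\begin{verbatim}
from itertools import product

# Elements of M written additively in Z_3 x Z_5: h <-> (1,0), k <-> (0,1).
D = [(0, 0), (1, 0), (2, 0), (1, 1), (2, 2), (1, 4), (2, 3)]

counts = {}
for a, b in product(D, repeat=2):
    if a == b:
        continue
    diff = ((a[0] - b[0]) % 3, (a[1] - b[1]) % 5)
    counts[diff] = counts.get(diff, 0) + 1

# All 14 non-identity elements of Z_3 x Z_5 occur exactly 3 times.
assert len(counts) == 14 and all(c == 3 for c in counts.values())
\end{verbatim}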
We have called this $(15,7,3)$-difference set a \emph{minimal difference set}
as a copy of it appears embedded in the structure of larger members
of the family.
We feel that it recalls the role of the prime subfield in field theory and
that it also echoes Ho's result in Theorem \ref{hothm},
where a Baer subplane is embedded in the
structure of larger members of the family.
Amongst the results on planar abelian difference sets which
motivated this work, we highlight
the following theorems: the first due to Ostrom \cite{ostrom}
in the cyclic case and then extended to the abelian case by Jungnickel and Vedder \cite{jvedder};
and the second due to Ho \cite{ho}.
\begin{thm}[Jungnickel and Vedder] \label{rod4}
Let $G$ be a finite abelian
group and let $D$ be a normalized planar difference set
of square order $m^2$ in $G$. Let $H$ be the
unique subgroup
of order $m^2+m+1$ in $G$. Then $D\cap H$ is a
normalized planar difference set
of order $m$ in $H$.
\end{thm}
\begin{thm}[Ho] \label{hothm}
Let $D$ be a planar difference set of order $m^s$ in the cyclic group $G$.
Then $D$ contains a planar difference set of order $m$ in the
unique subgroup of order $m^2+m+1$ of $G$
if and only if $s$ is not a multiple of $3$.
\end{thm}
\begin{defn}
We call a difference set $D'$ which has the parameters
$((q+1)(q^2+1),q^2+q+1,q+1)$ a \emph{minimal difference set}
if $q=p^r$ where $p$ is a prime and $r$ is a power of $2$.
\end{defn}
We have a partial generalization of Corollary \ref{singer} for
abelian difference sets in Theorem \ref{main2} but we are still a long way
from the following conjecture:
\begin{conj}
Let $D$ be a normalized difference set with parameters \\
$((q^s+1)(q^{2s}+1),q^{2s}+q^s+1,q^s+1)$
in an abelian group $G$. Then $D$ contains a minimal difference set embedded in it,
in the sense that there exists a subgroup $S$ of $G$ with
$D\cap S =D'$, where $D'$ is a minimal difference set.
\end{conj}
It would be sufficient to prove this result true for any odd prime $s$,
in which case, a cascade effect would guarantee the result for any odd $s$. In
Theorem \ref{main2} we have shown that for given $q$, the conjecture
is true for all primes $s$
larger than $q^2+1$.
By Theorem \ref{1573}, it is true for all $s$ when $q=2$.
\begin{acknowledgements}
This work is part of the author's Ph.D. thesis. The author is very
grateful to Rod Gow (UCD) for his guidance and advice.
\end{acknowledgements}
\bibliographystyle{amsplain}
| {
"timestamp": "2007-04-17T19:25:41",
"yymm": "0704",
"arxiv_id": "0704.2203",
"language": "en",
"url": "https://arxiv.org/abs/0704.2203",
"abstract": "A difference set is said to have classical parameters if $ (v,k, \\lambda) = (\\frac{q^d-1}{q-1}, \\frac{q^{d-1}-1}{q-1}, \\frac{q^{d-2}-1}{q-1}).$ The case $d=3$ corresponds to planar difference sets. We focus here on the family of abelian difference sets with $d=4$. The only known examples of such difference sets correspond to the projective geometries $PG(3,q)$. We consider an arbitrary difference set with the parameters of $PG(3,q)$ in an abelian group and establish constraints on its structure. In particular, we discern embedded substructures.",
"subjects": "Combinatorics (math.CO)",
"title": "On Abelian Difference Sets with Parameters of 3-dimensional Projective Geometries",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540692607816,
"lm_q2_score": 0.7248702761768249,
"lm_q1q2_score": 0.70990465465996
} |
https://arxiv.org/abs/1912.05102 | Dimension and structure of higher-order Voronoi cells on discrete sites | We study the structure of higher-order Voronoi cells on a discrete set of sites in $\mathbb{R}^n$, focussing on the relations between cells of different order, and paying special attention to the ill-posed case when a large number of points lie on a sphere. In particular, we prove that higher order cells of dimension $n-1$ do not exist, even though high-order Voronoi cells may have empty interior. We also present a number of open questions. | \section{Introduction}
Voronoi cells are used in computer graphics, crystallography, facility location, and have numerous other applications. The first recorded use of a Voronoi cell-like object goes back to Ren\'e Descartes \cite{descartes}, while mathematical foundations were initially developed by Dirichlet \cite{dirichlet}, Voronoi \cite{voronoi} and Delone \cite{delone}. Various generalisations of Voronoi cells are of significant practical importance and provide a source of curious mathematical problems.
A classic Voronoi diagram is a tessellation of the Euclidean space by cells generated from a discrete set of points called sites: each cell consists of points which are no farther from a given site than from the remaining ones. A higher-order diagram is a generalisation of this notion, where points nearest to several sites are considered. To our best knowledge, the latter notion first appeared in \cite{shamos}.
There are many applications of higher-order diagrams in diverse fields. Some recent examples include mobile sensor coverage control problems \cite{mobile,hole}, $k$-nearest
neighbour problems in spatial networks \cite{spatial}, smoothing point clouds in higher dimensions \cite{gradflow}, texture generation in computer graphics \cite{texture}, detection of symmetries in discrete point sets \cite{paradoxes}, analysis and modelling of voting in US Supreme Court \cite{supreme}. Higher-order cells are also used as a tool in resolving mathematical problems, for instance, see \cite{edelsbrunner,circlesenclosing} where Ramsey style bounds are obtained on the maximal number of points enclosed by circles passing through a pair of coloured points.
An extreme special case of higher-order diagrams is the farthest Voronoi cell studied in \cite{farthest}. In particular, this setting motivates the generalisation of the notion of boundedly exposed points used by the authors to obtain a characterisation of nonempty farthest Voronoi cells. Our work is in a similar spirit: we are focussed on structural properties of higher-order Voronoi cells, motivated by the puzzling observations on low-dimensional cells presented in \cite{multipoint} and by the work \cite{ryan} focussed on generating regular tessellations from higher-order cells.
Our main contribution is a characterisation of the dimension of a higher-order Voronoi cell in Theorem~\ref{thm:dimension}, and a somewhat unexpected corollary that higher-order cells in $\R^n$ can not have dimension $n-1$ (while lower dimensions are possible). We also discuss some relations between cells of different order.
Our paper is organised as follows. We recall the definition and basic facts about higher-order Voronoi diagrams in Section~\ref{sec:structure}, and follow with a characterisation of the dimension of a higher-order Voronoi cell in Theorem~\ref{thm:dimension} of Section~\ref{sec:dimension}. In Section~\ref{sec:order} we discuss the relations between cells of different orders, providing some illustrative examples, and in Section~\ref{sec:neighbour} address issues pertaining to neighbour relations, specifically focussing on degenerate cases. We finish with a brief discussion of our results in Section~\ref{sec:conclusions}, where we also present some open questions.
\section{Basic structural properties of higher-order Voronoi cells}\label{sec:structure}
Let $S,T\subseteq \R^n$. We define a Voronoi cell of $S$ with respect to $T$ as
\begin{equation}\label{eq:defVoronoi}
V_{T}(S):=\left\{ x\in\mathbb{R}^{n}\,:\,\sup_{s\in S}\dist (x,s)\leq
\inf_{t\in T\setminus S}\dist (x,t)\right\},
\end{equation}
where $\dist(x,s) = \|x-s\|$ is the Euclidean distance in $\R^n$. Whenever the set $T\setminus S$ is nonempty and closed, and $S$ is compact, the infimum and supremum can be replaced by the minimum and maximum, respectively. When $S$ is finite, the \emph{order} of the cell $V_T(S)$ is its cardinality $|S|$.
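For numerical experimentation, the definition translates directly into a membership test. The following minimal Python sketch (our own; the function name and the representation of the sites as finite lists of points are illustrative assumptions) checks whether a given point lies in $V_T(S)$.
\begin{verbatim}
import numpy as np

def in_cell(x, S, T_minus_S):
    # x lies in V_T(S) iff no site of S is farther from x
    # than the nearest site of T \ S.
    x = np.asarray(x, dtype=float)
    far_S = max(np.linalg.norm(x - np.asarray(s, dtype=float)) for s in S)
    near_rest = min(np.linalg.norm(x - np.asarray(t, dtype=float))
                    for t in T_minus_S)
    return far_S <= near_rest
\end{verbatim}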
The following consequence of the definition \eqref{eq:defVoronoi} is key to several proofs in this paper.
\begin{theorem}[{cf. \cite[Theorem~2.2]{multipoint}}]\label{thm:ballchar} Let $S,T\subseteq \R^n$. Then $V_T(S)\neq \emptyset$ if and only if there exists a closed Euclidean ball $B$ such that
\begin{equation}\label{eq:ballchar}
S\subseteq B, \qquad \interior B \cap (T\setminus S) = \emptyset.
\end{equation}
Moreover, $x\in V_T(S)$ if and only if there exists a Euclidean ball $B$ centred at $x$ that satisfies \eqref{eq:ballchar}.
\end{theorem}
\begin{proof}
There exists a Euclidean ball of radius $r$ centred at $x$ satisfying \eqref{eq:ballchar} if and only if
\[
\|s-x\|\leq r\quad \forall s\in S, \quad \|t-x\|\geq r \quad \forall t \in T\setminus S;
\]
equivalently $\|s-x\|\leq \|t-x\|$ for all $s\in S$ and all $t\in T\setminus S$, i.e. $x\in V_T(S)$.
\end{proof}
It may happen that for a point in a higher-order Voronoi cell there are several choices of Euclidean ball that satisfy \eqref{eq:ballchar}. It is intriguing to explore the relation between the flexibility of this choice and the degeneracy and singularity in the cell structure. We address this in more detail in Section~\ref{sec:conclusions}.
To illustrate the usefulness of Theorem~\ref{thm:ballchar}, we show that the characterisation of a nonempty cell obtained in \cite[Theorem~22]{farthest} follows directly from this result.
\begin{corollary}Let $T\subseteq\R^n$, $s\in \R^n$. The farthest cell $V_{\{s\}}(T\setminus \{s\})$ is nonempty if and only if $T$ is bounded and there exists a closed Euclidean ball $B$ such that
\begin{equation}\label{eq:bdreg}
T\setminus \{s\}\subset \interior B, \quad s\in \partial B.
\end{equation}
\end{corollary}
\begin{proof} It follows from Theorem~\ref{thm:ballchar} that for $V_{\{s\}}(T\setminus \{s\})\neq \emptyset$ it is necessary and sufficient to have a Euclidean ball $B'$ such that
\begin{equation}\label{eq:modbd}
T\setminus \{s\}\subseteq B', \qquad \interior B' \cap \{s\} = \emptyset.
\end{equation}
If \eqref{eq:bdreg} holds for some ball $B'$, it yields \eqref{eq:modbd}. Conversely, assume that \eqref{eq:modbd} holds, let $x$ be the centre of the ball $B'$, and let $r$ be its radius. It is evident that $r\leq \|x-s\|$, moreover,
whenever $s-x$ and $t-x$ are non-collinear, we have
\[
\|t-(2x-s)\| < \|t-x\|+\|s-x\| \le 2 \|s-x\|.
\]
When for some $t\in T$ the vectors $s-x$ and $t-x$ are collinear and $t\neq s$, we have $t-x = k (s-x)$ with $-1\leq k<1$. Hence,
\[
\|t-(2x-s)\| = \|(t-x) + (s-x)\| = (1+k)\|s-x\| <2 \|s-x\|.
\]
We deduce that
\[
\|t-(2x-s)\| <2 \|s-x\| \quad \forall t\in T\setminus \{s\}.
\]
Together with
\[
\|s-(2x -s)\|= 2 \|x-s\|
\]
this means that the Euclidean ball $B$ of radius $2\|s-x\|$ centred at $2x-s$ satisfies \eqref{eq:bdreg}.
\end{proof}
The point $s$ satisfying property \eqref{eq:bdreg} is called boundedly exposed extreme point of the set $\cl \co (T\cup \{s\})$ (see \cite{farthest} for a detailed explanation and a generalisation of the original notion introduced in \cite{Edelstein}).
The following elementary result will be useful for subsequent proofs.
\begin{proposition}[{\cite[Proposition~2.1]{multipoint}}]\label{prop:handycharacterisation} Let $T$ and $S$ be subsets of $\R^{n}$. Then $V_{T}\left( S\right) $ can be represented as the intersection of closed halfspaces
\begin{equation}
V_{T}(S)=\bigcap_{\substack{s\in S \\t\in T\setminus S}}\left\{
x\in\R^{n}\,:\,\langle t-s,x\rangle\leq\frac{1}{2}\left( \Vert
t\Vert^{2}-\Vert s\Vert^{2}\right) \right\} .\label{eq:linrep}
\end{equation}
\end{proposition}
\section{Dimensions of higher-order Voronoi cells}\label{sec:dimension}
We refine Theorem~\ref{thm:ballchar} in the following explicit characterisation of dimensions of high-order cells.
\begin{theorem}\label{thm:dimension} Let $S,T\subset \R^n$ be such that $T$ is discrete and $S$ is finite. Suppose that $B$ is a closed Euclidean ball such that $S\subseteq B$ and $(T\setminus S )\cap \interior B = \emptyset$. Let
\begin{equation}\label{eq:intersectionC}
C := \co (S\cap \partial B)\cap \co ((T\setminus S)\cap \partial B),
\end{equation}
and let $F_S$ and $F_T$ be the minimal faces of $\co (S\cap \partial B)$ and $\co ((T\setminus S)\cap \partial B)$ respectively that contain $C$.
Then
$$
\dim V_T(S) = \begin{cases}
n\, & \text{if } C = \emptyset,\\
n- \dim \co \{F_S,F_T\}, & \text{if } C \neq \emptyset.
\end{cases}
$$
\end{theorem}
\begin{figure}[ht]
\includegraphics[width = 0.8\textwidth]{illustration-cells}
\caption{Illustration for Theorem~\ref{thm:dimension}}
\label{fig:illustration}
\end{figure}
We will use the following two results in the proof of Theorem~\ref{thm:dimension}.
\begin{proposition}\label{prop:dim} Let $C$ and $D$ be convex sets in $\R^n$ such that $C\cap \interior D\neq \emptyset$. Then $\dim C = \dim (C\cap D)$.
\end{proposition}
\begin{proof} Since $C\cap D \subseteq C$, we have $\dim C \geq \dim C\cap D$. Suppose that $\dim C\cap D<\dim C$. Then there exists $y\in C\setminus \aff (C\cap D)$. Let $x\in C\cap \interior D$. For the line segment $[x,y]$ connecting $x$ and $y$ we have $[x,y]\subset C$, also since $x\in \interior D$ there exists a sufficiently small $\alpha\in (0,1]$ such that $x+ \alpha (y-x)\in D$. Therefore, $x+\alpha (y-x) \in (C \cap D)\subset \aff (C \cap D) $. Since $x\in \aff (C \cap D)$ and $x+\alpha (y-x) \in \aff (C \cap D)$, we must also have $x+ (y-x) = y\in \aff (C \cap D)$, which contradicts the assumption.
\end{proof}
\begin{proposition}\label{prop:dimdual} Let $K\subseteq\R^n$ be a closed convex cone and let
$$
K^\circ = \{y\in \R^n \, |\, \langle x,y\rangle \leq 0 \; \forall x\in K\}
$$
be its (negative) polar. Then $\dim K^\circ = n - \dim \lin K$.
\end{proposition}
\begin{proof}[Proof]
Fix any $x \in \lin K$. We have $\langle x,y\rangle =0$ for all $y \in K^\circ$, hence, $K^\circ \subseteq (\lin K)^\perp$, and $\dim K^\circ \leq n-\dim \lin K$.
Since $K = \lin K + K'$ with $K'\subset (\lin K)^\perp$ pointed (see \cite[Lemma~5.33]{Guler}), within the space $(\lin K)^\perp$ the set $\co [(K \cap (\lin K)^\perp)\cap S]$, where $S$ is the unit sphere, can be strictly separated from zero (see \cite[Corollary 4.1.3]{FundamentalsConvexAnalysis}), which means that the dual cone to $K \cap (\lin K)^\perp$ within $(\lin K)^\perp$ has a nonempty interior. Hence we have $\dim K^\circ \geq \dim (\lin K)^\perp = n-\dim \lin K$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:dimension}] Without loss of generality assume that the Euclidean ball $B$ is centred at zero. Then by Proposition~\ref{prop:handycharacterisation} we have
\begin{equation}\label{eq:ineqrepres}
V_T(S) = \bigcap_{\substack{s\in S\\t\in T\setminus S} }\left\{x\in \R^n\,\Bigl| \, \langle t-s,x\rangle \leq \frac{1}{2}(\|t\|^2 - \|s\|^2)\right\} = V \cap V' \cap V'',
\end{equation}
where
\begin{align*}
V & = \bigcap_{\substack{s\in S\cap \partial B\\t\in (T\setminus S)\cap \partial B} }\left\{x\in \R^n\,\Bigl| \, \langle t-s,x\rangle \leq 0\right\} \quad (\text{since } \|t\|=\|s\|),\\
V'& = \bigcap_{\substack{s\in S\cap \interior B\\t\in (T\setminus S)\cap \partial B} }\left\{x\in \R^n\,\Bigl| \, \langle t-s,x\rangle \leq \frac{1}{2}(\|t\|^2 - \|s\|^2)\right\},\\
V'' & = \bigcap_{\substack{s\in S\\t\in T\setminus B }}\left\{x\in \R^n\,\Bigl| \, \langle t-s,x\rangle \leq \frac{1}{2}(\|t\|^2 - \|s\|^2)\right\}.
\end{align*}
We have $\|t\|>\|s\|$ for all pairs $(s,t) \in \interior B \times (\R^n\setminus \interior B)$ and $(s,t) \in B \times (\R^n \setminus B)$. Since $T$ is discrete and $S$ is finite, there exists a sufficiently small $\varepsilon>0$ such that for a ball $B_\varepsilon$ of radius $\varepsilon$ centred at zero we have $B_\varepsilon \subset V'\cap V''$. Hence, we have
\begin{equation}\label{eq:intballs}
V_T(S)\cap B_\varepsilon = (V\cap B_\varepsilon)\cap (V'\cap B_\varepsilon) \cap (V''\cap B_\varepsilon) = V\cap B_\varepsilon.
\end{equation}
Observe that for any $s\in S$ and $t\in T\setminus S$ we have $\|t\|\geq \|s\|$, hence, it is clear from \eqref{eq:ineqrepres} that $0\in V_T(S)\subseteq V$, so $V_T(S) \cap \interior B_\varepsilon \neq \emptyset$. We hence have from \eqref{eq:intballs} and Proposition~\ref{prop:dim}
$$
\dim V_T(S) = \dim V_T(S)\cap B_\varepsilon = \dim V \cap B_\varepsilon = \dim V.
$$
Observe that $V = (\cone [((T\setminus S)\cap \partial B)-(S\cap \partial B)])^\circ$. By Proposition~\ref{prop:dimdual} we have
$$
\dim V = n- \dim \lin \cone [((T\setminus S)\cap \partial B)-(S\cap \partial B)].
$$
It remains to show that
$$
\dim \lin \cone [((T\setminus S)\cap \partial B)-(S\cap \partial B)] = \dim \co \{F_S, F_T\},
$$
from which the result would follow. We break this down into two steps: first we show that
\begin{equation}\label{eq:4356546745}
\lin \cone [((T\setminus S)\cap \partial B)-(S\cap \partial B)] = \cone \{F_S- F_T\},
\end{equation}
and then demonstrate that
\begin{equation}\label{eq:dimequal}
\dim \cone \{F_S- F_T\} = \dim\co \{F_T,F_S\}.
\end{equation}
Observe that $z\in \lin \cone [((T\setminus S)\cap \partial B)-(S\cap \partial B)] $ if and only if for some $w = \alpha z$ we have $-w, w\in \co ((T\setminus S)\cap \partial B)-\co (S\cap \partial B) $. Then
$$
w = u_1-v_1, \quad -w = u_2-v_2,\qquad u_1,u_2\in \co ((T\setminus S)\cap \partial B),\quad v_1,v_2\in \co (S\cap \partial B),
$$
hence, $u_1+u_2 = v_1+v_2$, and effectively
$$
(u_1,u_2)\cap (v_1,v_2) \neq \emptyset,
$$
which implies that $u_1,u_2$ and $v_1,v_2$ belong to the minimal faces $F_S$ and $F_T$ containing $C$. Therefore,
$$
\lin \cone [((T\setminus S)\cap \partial B)-(S\cap \partial B)] \subseteq \cone \{F_S - F_T\}.
$$
Now suppose that $z\in \cone \{F_T-F_S\}$. We will show that $-z\in \cone \{F_T-F_S\}$, hence,
\begin{equation}\label{eq:8745345}
\cone \{F_T-F_S\} = \lin \cone\{F_T-F_S\} \subseteq \lin \cone [((T\setminus S)\cap \partial B)-(S\cap \partial B)].
\end{equation}
We have
\begin{equation}\label{eq:zexp}
z = \sum_{i=1}^m \lambda_i (p_i-q_i),
\end{equation}
where $p_i\in F_T$, $q_i \in F_S$, and $\lambda_i\geq 0$ for all $i = 1,2,\dots, m$.
Now since $F_T$ and $F_S$ are the minimal faces, we must have $\relint C \subseteq \relint F_T\cap \relint F_S$, and hence there exists $c\in \relint F_T\cap \relint F_S$. We can then find an $\alpha>0$ such that
\[
c+ \alpha (c-p_i)=: p_i'\in F_T, \quad c+ \alpha(c-q_i)=: q_i'\in F_S\qquad \forall i = 1,2,\dots, m.
\]
Rearranging, we obtain
\[
p_i = \frac{1+\alpha}{\alpha} c - \frac{1}{\alpha}p'_i, \quad
q_i = \frac{1+\alpha}{\alpha} c - \frac{1}{\alpha}q'_i \qquad \forall \, i = 1,2,\dots, m.
\]
Substituting this into \eqref{eq:zexp}, we have
\[
z = \sum_{i=1}^m \lambda_i \left(\left[\frac{1+\alpha}{\alpha} c - \frac{1}{\alpha}p'_i\right]-\left[\frac{1+\alpha}{\alpha} c - \frac{1}{\alpha}q'_i\right]\right) =- \sum_{i=1}^m \frac{\lambda_i}{\alpha} \left(p_i'-q_i'\right) \in - \cone \{F_T-F_S\} ,
\]
hence, $-z \in \cone \{F_T-F_S\} $ and we are done with \eqref{eq:8745345}, which finishes the proof of \eqref{eq:4356546745}.
It remains to show \eqref{eq:dimequal}. Observe that for any $c\in \R^n$ we have
\[
\dim \co \{F_T,F_S\} = \dim \co \{F_T-\{c\},F_S-\{c\}\},
\]
hence it suffices to show that
\[
\aff \co \{F_T-\{c\},F_S-\{c\}\} = \aff \cone \{F_T-F_S\}.
\]
Let $c\in \relint F_T\cap \relint F_S$, as before. Since $0\in F_T-\{c\}$ and $0\in F_T-F_S$, we have
\[
\aff \co \{F_T-\{c\},F_S-\{c\}\} = \lspan \co \{F_T-\{c\},F_S-\{c\}\} = \lspan \cone\{(F_T-\{c\})\cup(F_S-\{c\})\},
\]
and
\[
\aff \cone \{F_T-F_S\} = \lspan \cone \{F_T-F_S\}.
\]
Therefore, it is sufficient to show that $\cone \{F_T-F_S\} = \cone[ (F_T-\{c\})\cup(F_S-\{c\})]$.
Let $z\in \cone \{F_T-F_S\}$. Then
\begin{equation}\label{eq:repr3}
z = \sum_{i=1}^m \lambda_i (p_i-q_i), \quad \lambda_i \geq 0, \; p_i \in F_T, \; q_i \in F_S.
\end{equation}
Observe that since $c\in \relint F_S$, for every $i$ there exists $\alpha_i>0$ such that
\[
c+ \alpha_i (c- q_i) =: q_i'\in F_S,
\]
hence $c-q_i = \frac{1}{\alpha_i} (q_i'-c)$. From \eqref{eq:repr3} we have
\[
z = \sum_{i=1}^m \lambda_i [(p_i-c)-(q_i-c)] = \sum_{i=1}^m \lambda_i (p_i-c)+\sum_{i=1}^m \frac{\lambda_i}{\alpha_i}(q_i'-c),
\]
hence, $z\in \cone[(F_T-\{c\})\cup(F_S-\{c\})]$.
Now assume $z\in \cone[(F_T-\{c\})\cup(F_S-\{c\})]$. Then
\[
z = \sum_{i=1}^m \lambda_i (p_i -c)+\sum_{j=1}^k \mu_j (q_j -c), \quad \lambda_i,\mu_j\geq 0, \; p_i \in F_T, \; q_j\in F_S.
\]
If $\sum_{i=1}^m\lambda_i = 0$, then every $\lambda_i=0$, and $z\in \cone\{F_S-\{c\}\}\subseteq \cone \{F_S-F_T\} = \cone \{F_T-F_S\}$ (see the preceding discussion on lineality spaces).
If $\sum_{j=1}^k \mu_j = 0$, then every $\mu_j = 0$ and $z\in \cone\{F_T-\{c\}\}\subseteq \cone \{F_T-F_S\} $.
Finally if both sums are positive, we have
\begin{align*}
z & = \sum_{i=1}^m \lambda_i \cdot \frac{\sum_{j=1}^k \mu_j}{\sum_{j=1}^k \mu_j} (p_i -c)+\sum_{j=1}^k \mu_j \cdot \frac{\sum_{i=1}^m \lambda_i}{\sum_{i=1}^m \lambda_i} (q_j -c)\\
& = \sum_{i=1}^m \sum_{j=1}^k \lambda_i \mu_j\left( \frac{1}{\sum_{j=1}^k \mu_j} (p_i -c)+ \frac{1}{\sum_{i=1}^m \lambda_i} (q_j -c)\right).
\end{align*}
Observe that since $c\in \relint F_T$ and $c\in \relint F_S$, there exists a sufficiently small $\alpha>0$ such that
\[
c +\frac{\alpha}{\sum_{j=1}^k \mu_j} (p_i -c) = p'_i \in F_T, \quad
c -\frac{\alpha}{\sum_{i=1}^m \lambda_i} (q_j -c) = q'_j \in F_S,
\]
so we have
\[
z = \sum_{i=1}^m \sum_{j=1}^k \lambda_i \mu_j\left( \frac{1}{\alpha} (p'_i -c)+ \frac{1}{\alpha} (c-q_j')\right)= \sum_{i=1}^m \sum_{j=1}^k \frac{\lambda_i \mu_j}{\alpha}\left( p'_i - q'_j\right) \in \cone \{F_T-F_S\}.
\]
\end{proof}
It would be curious to see if Theorem~\ref{thm:dimension} can be generalised to a non-discrete setting.
\begin{corollary}\label{cor:nodimn1} If $T\subset \R^n$ is discrete, there are no Voronoi cells of finite order that have dimension $n-1$ in $\R^n$.
\end{corollary}
\begin{proof} It follows from Theorem~\ref{thm:dimension} that for a higher-order Voronoi cell to have dimension $n-1$ one needs to have a Euclidean ball satisfying the assumption of Theorem~\ref{thm:dimension} and $\dim \co \{F_S, F_T\}=1$. Since $\co \{F_S, F_T\}$ is the convex hull of some points from $(S\cap \partial B)\cup (T\setminus S \cap \partial B)$, it must be a line segment connecting two points, one of them from $S$, and the other one from $T\setminus S$. We conclude that the minimal faces $F_S$ and $F_T$ must be single points on the surface of the sphere, so
$$
\emptyset = F_S\cap F_T \supseteq C = \co (S\cap \partial B)\cap \co ((T\setminus S)\cap \partial B),
$$
so $C = \emptyset$, contradicting the case $C \neq \emptyset$ of Theorem~\ref{thm:dimension}; hence no higher-order cell can have dimension $n-1$.
\end{proof}
Note that Theorem~\ref{thm:dimension} and Corollary~\ref{cor:nodimn1} significantly generalise \cite[Proposition~3.1]{multipoint}. Observe that on the plane we can only have cells of dimensions 0 and 2. Furthermore, in the setting of $|T|=4$ and $|S|=2$ on the plane for a cell of dimension 0 we have $F_S\cap F_T \neq \emptyset$. The only possibility given the constraints on the cardinalities is to have these sets as two intersecting line segments with vertices on a circle. This is precisely the case of the diagonals of a cyclic quadrilateral.
\begin{example}[A cell of dimension 1 in $\R^3$] Let $S = \{(1,0,0),(-1,0,0),(0,0,1)\}$, $T\setminus S = \{(0,1,0),(0,-1,0), (0,0,-1)\}$. Observe that $\|t\|=1$ for every $t\in T$, and the order 3 Voronoi cell $V_T(S)$ is defined by the system
\[
y-x\leq 0, \; y+x\leq 0, \;-y-x\leq 0, \;-y+x\leq 0, \; y-z\leq 0, \; -y-z\leq 0, \; -z\leq 0.
\]
From the first four inequalities we obtain $y\leq x\leq y$ and $-y\leq x\leq -y$, hence, $x=y=-y = 0$. Then the last three inequalities yield $z\geq 0$. Hence the Voronoi cell is exactly
$$
V_T(S) = 0_2 \times \R_+.
$$
We next verify Theorem~\ref{thm:dimension} for this example. Observe that the unit ball $B = \{(x,y,z)\,|\, x^2+y^2+z^2 \leq 1\}$ centred at 0 satisfies the conditions of the theorem, moreover, we have $S = S\cap \partial B$, $T = T\cap \partial B$.
For the intersection
\begin{align*}
C & = \co(S\cap \partial B) \cap \co ((T\setminus S) \cap \partial B)\\
& = \co \{(1,0,0),(-1,0,0), (0,0,1)\} \cap \co \{(0,1,0),(0,-1,0), (0,0,-1)\}
\end{align*}
we have $(x,y,z)\in C$ iff
$$
x = \alpha_1-\alpha_2=0, y = \beta_1-\beta_2 = 0, z = \alpha_3 = - \beta_3, \quad \alpha_i,\beta_i \geq 0, \sum\limits_{i=1}^3 \alpha_i = \sum\limits_{i=1}^3 \beta_i =1.
$$
We deduce that $\alpha_1=\alpha_2 = \beta_1 = \beta_2 = 1/2$, $\alpha_3 = \beta_3 = 0$, and $x=y=z=0$, so $C = \{0_3\}$.
It is evident that the relevant minimal faces are $F_S = \co\{(1,0,0),(-1,0,0)\}$, $F_T = \co\{(0,1,0),(0,-1,0)\}$, so $\dim \co \{F_S,F_T\} = 2$, and $\dim V_T(S) = 3-2=1$, which is consistent with the explicit expression for the cell that we obtained earlier.
\end{example}
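The computation in the example can also be checked numerically (our own illustration, reusing the membership test sketched in Section~\ref{sec:structure}): points on the nonnegative $z$-axis pass the test, while nearby points off the axis, or with negative $z$, fail.
\begin{verbatim}
S = [(1, 0, 0), (-1, 0, 0), (0, 0, 1)]
T_minus_S = [(0, 1, 0), (0, -1, 0), (0, 0, -1)]

assert in_cell((0, 0, 0), S, T_minus_S)
assert in_cell((0, 0, 2.5), S, T_minus_S)
assert not in_cell((0.1, 0.0, 1.0), S, T_minus_S)
assert not in_cell((0.0, 0.0, -0.1), S, T_minus_S)
\end{verbatim}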
\section{Relations between cells of different orders}\label{sec:order}
The purpose of this section is to provide clear statements that generalise several well-known and fairly trivial properties of higher-order Voronoi cells. The next result shows that a higher-order Voronoi cell is the intersection of lower-order cells built from the set of defining sites.
\begin{theorem}\label{thm:intersection} Let $T,S\subseteq \R^n$. Suppose that $\mathcal{U}$ is a family of subsets of $S$ such that
\[
S = \bigcup_{U\in \mathcal{U}} U.
\]
Then
\[
V_T(S) = \bigcap_{U\in \mathcal{U}} V_{T\setminus S}(U).
\]
\end{theorem}
\begin{proof} Let $x\in V_T(S)$. Then
\[
\|x-s\|\leq \|x-t\|\quad\forall s \in S, \; t\in T\setminus S.
\]
It follows that for every $U\in \mathcal{U}$
\[
\|x-s\|\leq \|x-t\|\quad\forall s \in U, \; t\in T\setminus S.
\]
Therefore,
\[
V_T(S) \subseteq \bigcap_{U\in \mathcal{U}} V_{T\setminus S}(U).
\]
To show the converse, let
\[
x\in \bigcap_{U\in \mathcal{U}} V_{T\setminus S}(U).
\]
Choose any $s\in S$. There exists $U\in \mathcal U$ such that $s\in U$, and therefore
\[
\|x-s\|\leq \|x-t\|\quad \forall t \in T\setminus S.
\]
Since $s\in S$ is arbitrary, we have by the definition of Voronoi cell $x\in V_T(S)$.
\end{proof}
Let $T$ be a finite subset of $\R^n$, and let $S$ be a proper subset of $T$. Denote by $\mathcal{S}_k$ the set of all subsets of $S$ of cardinality exactly $k$, for all $k\in \{1,\dots, K\}$, where $K=|S|$.
\begin{corollary}\label{cor:intersection} Let $S,T\subset \R^n$ be such that $T$ is discrete and $S$ is finite. Then for any $k\in \{1,\dots, |S|\}$
\[
V_T(S) = \bigcap_{S'\in \mathcal{S}_k} V_{T\setminus S'}(S')= \bigcap_{S'\in \mathcal{S}_k} V_{T}(S').
\]
\end{corollary}
In particular, Corollary~\ref{cor:intersection} recovers the well-known result that order $k$ Voronoi cell is the intersection of first-order cells of all points in $S$ on the set of sites $T\setminus S$.
The next lemma is similar in spirit, providing an upper estimate on the higher-order Voronoi cells from cells of lower order. However, notice that in this case the lower cells are constructed on exactly the same set $T$. This result is helpful in identifying neighbouring points from the Voronoi diagram of a lower order. The essential difference between Theorem~\ref{thm:intersection} and Lemma~\ref{lem:inclusions} is illustrated in Fig.~\ref{fig:twoways}.
\begin{figure}[ht]
{\centering
\includegraphics[width=0.4\textwidth]{twowaysint}
\quad
\includegraphics[width=0.4\textwidth]{twowaysuni}}
\caption{The representation of an order 3 Voronoi cell as the intersection of order 2 cells on the reduced set of sites, illustrating Theorem~\ref{thm:intersection} and an upper bound on the order 3 cell from the intersection of order 2 cells on the complete set of sites, illustrating Lemma~\ref{lem:inclusions}.}
\label{fig:twoways}
\end{figure}
\begin{lemma}\label{lem:inclusions} Let $S,T\subseteq \R^n$, and assume that $S\subset T$. As before, denote by $\mathcal{S}_k$ the set of all subsets of $S$ of cardinality exactly $k$, for all $k\in \{1,\dots, K\}$, where $K=|S|$. Then
\begin{equation}\label{eq:inclusions}
V_T(S) \subseteq \bigcup_{S'\in \mathcal{S}_{K-1}}V_{T}(S') \subseteq \bigcup_{S'\in \mathcal{S}_{K-2}} V_{T}(S') \subseteq \cdots \subseteq \bigcup_{S'\in \mathcal{S}_{1}} V_T(S').
\end{equation}
\end{lemma}
\begin{proof} Note that to prove the inclusions in \eqref{eq:inclusions} it is sufficient to show that
\[
V_T(S) \subseteq \bigcup_{S'\in \mathcal{S}_{K-1}}V_{T}(S'),
\]
the rest follows by induction.
Now let $x\in V_T(S)$. We have
\[
\|x-s\|\leq \|x-t\| \quad \forall s\in S, t\in T\setminus S.
\]
Let $\bar s\in S$ be such that $\|x-\bar s\|=\max_{s\in S}\|x-s\|$ (such $\bar s$ exists since $S$ is finite). Then $S'' :=S\setminus \bar s \in \mathcal{S}_{K-1}$, and so
\[
x\in V_{T}(S\setminus \bar s) = V_{T}(S'') \subseteq \bigcup_{S'\in \mathcal{S}_{K-1}}V_{T}(S').
\]
\end{proof}
It appears from empirical observations that the inclusions \eqref{eq:inclusions} are strict whenever $\interior V_T(S)\neq \emptyset$; however, we were not able to prove this. We discuss this in more detail in Section~\ref{sec:conclusions}.
\section{Neighbour Relations}\label{sec:neighbour}
Efficient construction of Voronoi cells relies on identifying the subset of the \emph{nearest neighbours} for this cell. If a cell $V_T(S)$ for some finite subsets $S$ and $T$ of $\R^n$ has a nonempty interior, then it may be natural to define the \emph{neighbours} of $S$ in $T$ as the set of all points $t$ in $T$ that define a facet of $V_T(S)$, i.e. such that the intersection
$\{y\,|\, \|y-s\| = \|y-t\|\}\cap V_T(S)$ is a (convex polyhedral) set of dimension $n-1$ for some $s\in S$. Even though this definition makes perfect sense for the classic Voronoi cells, it is unsuitable for higher-order cells, due to the existence of lower-dimensional cells that do not have facets of dimension $n-1$.
We hence provide a more careful definition of neighbours that captures all special cases including empty and lower-dimensional cells.
\begin{definition}
A subset $N\subseteq T\setminus S$ is a \emph{minimal} set of neighbours for the cell $V_T(S)$ if
\[
V_T(S) = V_{N}(S) \neq V_{N'}(S) \qquad \forall\; N'\subsetneq N.
\]
A site $t\in T$ is a \emph{neighbour} of $V_T(S)$ if it belongs to a minimal set of neighbours for $V_T(S)$. We denote the set of all neighbours of $S$ by $N_T(S)$.
\end{definition}
For some configurations of points it may be possible to choose several minimal subsets of neighbours that fully characterise the cell. The following example illustrates this point.
\begin{example}\label{eg:essential} Consider a set $S$ that consists of two opposite points on the unit circle and the set $T$ that is a bunch of points on the circle on the two semi-circles defined by the two points in $S$ (see Fig.~\ref{fig:ambiguous}).
\begin{figure}[ht]
{\centering
\includegraphics[width=0.4\textwidth]{example-ambiguous-neighbours}}
\caption{Several choices of the minimal set of neighbours}
\label{fig:ambiguous}
\end{figure}
It follows from Theorem~\ref{thm:ballchar} that the cell $V_T(S)$ is a singleton (centre of the circle) and removing any one of the points from $T\setminus S$ does not change the cell. It is hence possible to identify several different minimal subsets of neighbours that are sufficient to fully define the cell $V_T(S)$.
\end{example}
Note that it may happen that for a pair of points $(t,s) \in (T\setminus S)\times S$ their bisector has a nonempty intersection with the cell, however $t\notin N_T(S)$. The next example illustrates this observation.
\begin{example}\label{eg:nonn} Let $T = \{(-1,1),(-1,-1),(1,-1)\}$, $S = \{(1,1)\}$. Then (using Proposition~\ref{prop:handycharacterisation})
\[
V_T(S) = \left\{ (x,y)\in\mathbb{R}^{2}\,:\, -2 x \leq 0, -2 y \leq 0\right\} = \{(x,y)\,|\, x,y\geq 0\}.
\]
In this case the only minimal set of neighbours is $N_T(S) = \{(-1,1),(1,-1)\}$, and while the bisector of the pair $((-1,-1),(1,1))$ touches the Voronoi cell $V_T(S)$ at the origin, the point $(-1,-1)$ is not a neighbour of this Voronoi cell (see Fig.~\ref{fig:nonneighbour}).
\begin{figure}[ht]
{\centering
\includegraphics[width=0.4\textwidth]{non-neighbour}}
\caption{An example of a non-neighbour (see Example~\ref{eg:nonn}).}
\label{fig:nonneighbour}
\end{figure}
\end{example}
It is worth noting that if the set $T$ is discrete, and $S$ is compact, then the minimal set of neighbours for $V_T(S)$ is finite. Moreover, the minimal set of neighbours is unique for Voronoi cells with nonempty interiors, as we show next.
\begin{lemma}\label{lem:nintminimal} Suppose that $S,T\subset \R^n$, where $S$ is finite and $T$ is discrete, and assume that $S\subset T$. If $\interior V_T(S)\neq \emptyset$, then a minimal subset of neighbours $N$ is unique. Moreover, each $t\in N$ defines a facet of $V_T(S)$, i.e.
\[
\dim \left( \{y\,|\, \|y-s\| = \|y-t\|\}\cap V_T(S)\right) = n-1.
\]
\end{lemma}
\begin{proof} If $V_T(S)$ has a nonempty interior, then it is fully determined by the half-spaces defining its facets ($n-1$-dimensional faces). This is the minimal and uniquely defined set of half-spaces. Each one of these half-spaces is defined by a pair $(s,t)\in S\times (T\setminus S)$ via the inequality $\|s-x\|\leq \|t - x\|$, which can be rewritten as
\[
\langle t-s, x\rangle \leq \frac{1}{2}\left(\|t\|^2 - \|s\|^2\right).
\]
Even though the minimal set of half-spaces is unique, we need to make sure that each half-space is defined by a unique pair $(s,t)\in S\times (T\setminus S)$. This follows from Proposition~2.6 in \cite{multipoint}, where it is proved that if two pairs $(s_1,t_1),(s_2,t_2)\in S\times (T\setminus S)$ define the same half-space, then the relevant inequality is nonessential for the cell, and hence can not define a facet. The last part of the lemma follows from the preceding discussion.
\end{proof}
\begin{lemma}[cf. Lemma~\ref{lem:inclusions}]\label{lem:neighbours} Let $T$ be a discrete subset of $\R^n$, and let $S$ be a proper subset of $T$ such that $\interior V_T(S)\neq \emptyset$. Denote by $\mathcal{S}_k$ the set of all subsets of $S$ of cardinality exactly $k$, for all $k\in \{1,\dots, K\}$, where $K=|S|$. Then
\begin{equation}\label{eq:relationsneigh}
N_{T}(S) \subseteq \bigcup_{S'\in \mathcal{S}_{K-1}}N_{T\setminus S}(S') \subseteq \bigcup_{S'\in \mathcal{S}_{K-2}} N_{T\setminus S}(S') \subseteq \cdots \subseteq \bigcup_{S'\in \mathcal{S}_{1}} N_{T\setminus S}(S') .
\end{equation}
\end{lemma}
\begin{proof} It is sufficient to show that
\[
N_T(S) \subseteq \bigcup_{S'\in \mathcal{S}_{|S|-1}}N_{T\setminus S}(S'),
\]
the rest follows by induction. To see that this is indeed true, recall that by Lemma~\ref{lem:nintminimal} each point in the (unique) minimal set of neighbours defines a facet of the cell $V_T(S)$. By Corollary~\ref{cor:intersection} we have
\[
V_T(S) = \bigcap_{S'\in \mathcal{S}_{|S|-1}} V_{T\setminus S}(S').
\]
If for every $p\in N_T(S)$ there exists $ S' \in \mathcal{S}_{|S|-1}$ such that $p$ defines a facet of $V_{T\setminus S} (S')$, then we are done. Otherwise there is a $p\in N_{T}(S)$ that does not define facets of $|S|-1$-order cells. This means that we can choose an alternative minimal set of neighbours, a contradiction.
\end{proof}
\begin{remark} Note that without the condition $\interior V_T(S)\neq \emptyset$ the statement of the preceding lemma may not be true: this is evident from Fig.~\ref{fig:ambiguous}.
\end{remark}
Finally, we prove a statement very similar to Lemma~\ref{lem:inclusions}, replacing $T\setminus S$ with $T$ in the chain of inclusions. Note that this may be more useful for practical purposes, as it allows to reduce the set of candidate neighbours from an existing lower-order diagram.
\begin{lemma}\label{lem:neighboursT} Let $T$ be a discrete subset of $\R^n$, and let $S$ be a proper subset of $T$ such that $\interior V_T(S)\neq \emptyset$. As before, denote by $\mathcal{S}_k$ the set of all subsets of $S$ of cardinality exactly $k$, for all $k\in \{1,\dots, K\}$, where $K=|S|$. Then
\begin{equation}\label{eq:relationsneighT}
N_{T}(S) \subseteq \bigcup_{S'\in \mathcal{S}_{K-1}}N_{T}(S') \subseteq \bigcup_{S'\in \mathcal{S}_{K-2}} N_{T}(S') \subseteq \cdots \subseteq \bigcup_{S'\in \mathcal{S}_{1}} N_{T}(S') .
\end{equation}
\end{lemma}
\begin{proof} Just as in the proof of Lemma~\ref{lem:inclusions}, it is sufficient to show that
\[
N_T(S) \subseteq \bigcup_{S'\in \mathcal{S}_{|S|-1}}N_{T}(S').
\]
Suppose that this is not true, and for some configuration there exists $t\in N_T(S)$ such that
\[
t\notin N_T(S')\quad \forall S'\in \mathcal{S}_{|S|-1}.
\]
Then
\[
V_T(S') = V_{T\setminus \{t\}}(S') \quad \forall S'\in \mathcal{S}_{|S|-1},
\]
and by Corollary~\ref{cor:intersection}
\[
V_T(S) = \bigcap_{S'\in \mathcal{S}_{|S|-1}} V_{T}(S') = \bigcap_{S'\in \mathcal{S}_{|S|-1}} V_{T\setminus \{t\}}(S') .
\]
We deduce that $t$ is not in $N_T(S)$, a contradiction.
\end{proof}
\section{Conclusions}\label{sec:conclusions}
The motivation for this paper comes from the construction of regular tessellations of the Euclidean space via Voronoi diagrams on lattices on the one hand \cite{ryan}, and from several puzzling examples and negative results related to Voronoi diagrams on the plane obtained in \cite{multipoint}, on the other. We have succeeded in explaining the lack of one-dimensional cells in $\R^2$ observed in \cite{multipoint} by proving that there can be no higher-order cells of dimension $n-1$ (see Theorem~\ref{thm:dimension}); however, the question of generalising other negative results from that study remains open. For instance, it was shown in \cite{multipoint} that in the case when $|T|=4$ and $S\subsetneq T$, $|S|\geq 2$, the higher-order cell $V_T(S)$ cannot be a triangle or a quadrilateral. In contrast to line segments, which are impossible to realise as higher-order Voronoi cells in the plane, triangles and cyclic quadrilaterals can be realised using larger sets of sites, as shown in Fig.~\ref{fig:triangsquare}.
\begin{figure}[ht]
\includegraphics[width = 0.45\textwidth]{triangle}
\quad
\includegraphics[width = 0.45\textwidth]{square}
\caption{Triangle and square as order 2 Voronoi cells}
\label{fig:triangsquare}
\end{figure}
The triangular cell $V_T(S)$ is the convex hull of the set $S = \{s_1,s_2,s_3\}$, and there are four inequalities intersecting at each vertex; likewise, for the square cell $V_T(S)$ the two sites of $S$ are opposite vertices of the square, and the construction requires four additional sites in $T$. There are redundant (coincident) inequalities at two vertices of the square, and hence this configuration appears non-generic. This brings us to the following question.
\begin{question}Is it true that higher-order Voronoi cells on discrete set of sites inscribed in a sphere exhibit degeneracy of sorts? Can this be captured in a rigorous way?
\end{question}
A related question is in the spirit of \cite{multipoint} and \cite{farthest} (where a general geometric characterisation of nonempty farthest cells was obtained): it is well-known that any polyhedral set with nonempty interior can be represented as a classic (first-order) Voronoi cell. On the other hand, we know from Corollary~\ref{cor:nodimn1} that cells of dimension $n-1$ are impossible in $\R^n$.
\begin{question} Is it true that any polyhedral set of dimension different to $n-1$ can be represented as a higher-order Voronoi cell in $\R^n$? What is the smallest number of sites $|T\cup S|$ needed to represent a given polytope as a $k$-th order Voronoi cell?
\end{question}
We have heavily utilised in our proofs the Euclidean ball characterisation from Theorem~\ref{thm:ballchar}. It appears that for the interior points of a Voronoi cell there is much flexibility in choosing the radius of the Euclidean ball that satisfies the theorem and certifies that the point belongs to the cell. However, for degenerate configurations (cells with empty interior and vertices) the choice of the radius appears restricted.
\begin{question} Is there a relation between the radii of Euclidean balls satisfying \eqref{eq:ballchar} of Theorem~\ref{thm:ballchar} and the `singularity' or `degeneracy' of the centre point?
\end{question}
There are several minor questions that relate to the tightness of the results in this paper. These are not necessarily of paramount practical importance, however, answering these questions would enhance our understanding of the structure of higher-order Voronoi cells.
\begin{question}
Is it true that in nondegenerate cases (say when $\interior V_T(S)$ is nonempty) the inclusions \eqref{eq:inclusions} in Lemma~\ref{lem:inclusions} are strict?
\end{question}
\begin{question} Note that our main result Theorem~\ref{thm:dimension} is stated for the discrete set of sites. It would be interesting to figure out if this result is true in a more general setting.
\end{question}
\section{Acknowledgements}
The second author is grateful to the Australian Research Council for continuing support.
\bibliographystyle{plain}
| {
"timestamp": "2019-12-12T02:06:19",
"yymm": "1912",
"arxiv_id": "1912.05102",
"language": "en",
"url": "https://arxiv.org/abs/1912.05102",
"abstract": "We study the structure of higher-order Voronoi cells on a discrete set of sites in $\\mathbb{R}^n$, focussing on the relations between cells of different order, and paying special attention to the ill-posed case when a large number of points lie on a sphere. In particular, we prove that higher order cells of dimension $n-1$ do not exist, even though high-order Voronoi cells may have empty interior. We also present a number of open questions.",
"subjects": "Metric Geometry (math.MG)",
"title": "Dimension and structure of higher-order Voronoi cells on discrete sites",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540668504082,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046529127518
} |
https://arxiv.org/abs/2108.11020 | Logarithmic Euler Maruyama Scheme for Multi Dimensional Stochastic Delay Differential Equation | In this paper, we extend the logarithmic Euler-Maruyama scheme for stochastic delay differential equation in one dimension to the part where we propose a scheme for a system of stochastic delay differential equations. We then show that the scheme always maintains positivity subject to initial conditions. We then show the convergence of the proposed Euler-Maruyama scheme. With this scheme, all the approximate solutions are positive and the rate of convergence of this scheme is 0.5. | \setcounter{equation}{0}\Section{\setcounter{equation}{0}\Section}
\usepackage{graphicx}
\date{}
\title[Logarithmic Euler-Maruyama scheme]{logarithmic Euler-Maruyama scheme for multi-dimensional stochastic delay equations with jumps}
\author[Agrawal]{Nishant Agrawal}
\address{Department of Mathematical and Statistical Sciences \\
University of Alberta at Edmonton \\
Edmonton, Canada, T6G 2G1}
\email{nagrawal@ualberta.ca, yaozhong@ualberta.ca}
\author[Hu]{Yaozhong Hu}
\subjclass[2010] {Primary 60H05; Secondary 60G51, 60H30}
\keywords{}
\begin{document}
\maketitle
\setcounter{equation}{0}\Section{Introduction}
In \cite{rpaper} we introduced a logarithmic Euler-Maruyama scheme for a
single stochastic delay equation, which preserves positivity whenever the solution to the original equation is positive.
The convergence rate was also obtained for this scheme.
Such a scheme is important for simulating the paths of the
equation. It plays an important role in option pricing, for example, since
we often cannot obtain the price explicitly and we need to
use Monte-Carlo simulation to complete the evaluation. Naturally, the next question is what the
analogous scheme is for a system of stochastic delay equations and whether such a scheme converges.
This type of problem is important since there is always more than one stock in the real market.
In more than one dimension, the problem of obtaining a positive solution
and of constructing numerical schemes which preserve positivity is much more complicated.
In this paper
we extend
our work in \cite{rpaper} to a system of stochastic delay differential equations. The
problems of existence and uniqueness of a positive solution
are solved. A multi-dimensional logarithmic
Euler-Maruyama scheme is constructed which preserves the positivity of the
approximate solutions. The scheme is proved to be convergent with rate $0.5$.
\setcounter{equation}{0}\Section{Positivity}
We consider the following system of stochastic delayed differential equations:
\begin{empheq}[left=\empheqlbrace]{align}
dS_i(t)&= \sum_{j=1}^d f_{ij} (S (t-b))
S_j (t) dt \nonumber\\
& \qquad\qquad +S_i(t-) \sum_{j=1}^d g_{ij} ( S (t-b)) dZ_j(t)\,, \quad i=1, \cdots, d\,,
\label{e.2.1} \\
S_i(t) &= \phi_i(t) \,,\quad t\in [-b, 0]\,, \ i=1, \cdots, d\,, \nonumber
\end{empheq}
where $S(t)=(S_1(t), \cdots, S_d(t))^T$ and
\begin{enumerate}
\item[(i)]
$f_{ij}, g_{ij}:\mathbb{R}^d\rightarrow
\mathbb{R}$ are given bounded measurable functions for all $1 \leq i,j \leq d$.
\item[(ii)] $b>0$ is a given number representing the delay of the equation.
\item[(iii)] $\phi_i: [-b,0]\rightarrow \mathbb{R}$ is a
(deterministic) measurable function for all $1 \leq i \leq d$.
\item[(iv)]
$Z_j(t)=\sum_{k=1}^{N_j(t)}Y_{j,k}$ are L\'evy processes, where the $Y_{j,k}$ are i.i.d.\ random variables and the
$N_\ell (t)$ are independent Poisson processes which are also independent of the
$Y_{j,k}$, for $j, \ell =1,2, \cdots, d$ and
$k =1,2, \cdots$
\end{enumerate}
Let $|\cdot|$ denote the
Euclidean norm in $\mathbb{R}^d$. If $A$ is a $d\times m$ matrix, we denote
\[
|A|=\sup_{|x|\le 1} |Ax|\,.
\]
For example, if $A=I+M$ is a $d\times d$ matrix, where $M=(m_{ij})_{1\le i,j\le d}$
is a matrix, then we can bound the norm of $A$ as follows.
Let $ 0\le {\lambda}_1\le \cdots\le {\lambda}_d $ be the eigenvalues of $M^TM$
(since $M^TM$ is positive semi-definite, its eigenvalues are all nonnegative).
Then
\begin{eqnarray*}
| I+M |
&=&\sup_{|x|\le 1}
\sqrt{|x|^2+x^TM^TMx} \le \sqrt{ 1+\max_{1\le i\le d} {\lambda}_i }\\
&\le& \sqrt{ 1+\sum_{i=1}^d {\lambda}_i }\,.
\end{eqnarray*}
But $\sum_{i=1}^d {\lambda}_i={ \hbox{ Tr} } (M^TM)$. Thus we have
\begin{equation}
| I+M |
\le \sqrt{ 1+{ \hbox{ Tr} } (M^TM) }\,.
\end{equation}
To study the above stochastic differential equation, it is common to introduce the
Poisson random measure associated with the L\'evy process $Z_j(t)$. We denote the jump of the process $Z_j$ at time $t$ by
\[
\Delta Z_j(t):= Z_j(t) - Z_j(t-)\,, \hspace{3mm} j=1,2, \cdots, d\,.
\]
Denote $\mathbb{R}_0 := \mathbb{R} \backslash \{0\}$ and let $\mathcal{B}(\mathbb{R}_0)$ be the Borel $\sigma$-algebra generated by the family of
all Borel subsets $U \subset \mathbb{R}$, such that $\Bar{U} \subset \mathbb{R}_0$. For any
$t>0$ and for any $U \in \mathcal{B}(\mathbb{R}_0)$
we define the {\it Poisson random measure},
$N_j: [0, T]\times \mathcal{B}(\mathbb{R}_0)\times {\Omega}\rightarrow \mathbb{R}$ (without confusion we use the same notation $N$), associated with
the L\'evy process $Z_j(t)$ by
\begin{equation}
N_j(t, U) := \sum_{0 \leq s \leq t, \ \Delta Z_j(s)\not =0}\chi_U(\Delta Z_j(s)), \hspace{5mm} j=1,2,\cdots,d, \end{equation}
where $\chi_U$ is the indicator function of $U$.
The associated L\'evy measure $\nu$ of the L\'evy process $Z_j$ is given by
\begin{equation}
\nu_j(U) := \mathbb{E}[N_j(1,U)] \hspace{10mm} j=1,2,\cdots, d.
\end{equation}
We now define the compensated Poisson random measure $\tilde{N}_j$ associated with
the L\'evy process $Z_j(t)$ by
\begin{equation}
\Tilde{N}_j(dt,dz) := N_j(dt,dz) - \mathbb{ E}\left[ N_j(dt,dz) \right] = N_j(dt,dz) - \nu_j(dz)dt\,.
\end{equation}
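To fix ideas, suppose that the Poisson process $N_j$ has rate $\lambda_j>0$ and that the jump sizes $Y_{j,k}$ have common law $\mu_j$ on $\mathcal J$ (the rate $\lambda_j$ and the law $\mu_j$ are introduced here only for illustration). Then
\[
\nu_j(U) = \mathbb{E}[N_j(1,U)] = \lambda_j\,\mu_j(U), \qquad
\mathbb{E}[Z_j(t)] = \lambda_j\, t \int_{\mathcal J} z\,\mu_j(dz),
\]
so that in this case $\tilde{N}_j(dt,dz) = N_j(dt,dz) - \lambda_j\,\mu_j(dz)\,dt$.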
We assume that the process $Z_j(t)$
has only bounded negative jumps to guarantee that the solution $S(t)$ to \eqref{e.2.1} is positive. This means that there is an interval $\mathcal J=[-R, \infty)$ bounded from the left
such that $\Delta Z_j(t)
\in \mathcal J$ for all $t>0$ and for all $j=1,2, \cdots d$.
With these notations, we can write
\[
{Z}_j(t) = \displaystyle\int_{ [0,t] \times \mathcal J } z {N}_j (ds,dz) \quad {\rm or}\quad d {Z}_j(t) = \displaystyle\int_{ \mathcal J } z {N}_j (dt,dz)
\]
and write \eqref{e.2.1} as
\begin{empheq}[left=\empheqlbrace]{align}
dS_i(t)&= \sum_{j=1}^d f_{ij} (S(t-b))
S_j (t) dt +S_i(t-)\sum_{j=1}^d \displaystyle\int_{\mathcal J}zg_{ij} ( S (t-b))\nu_j(dz)dt\nonumber\\
& \qquad\qquad +S_i(t-) \sum_{j=1}^d \displaystyle\int_{\mathcal J}zg_{ij} ( S (t-b)) \tilde{N}_j(dz,dt)\,, \nonumber\\
S_i(t) &= \phi_i(t) \,,\quad t\in [-b, 0]\,, \ i=1, \cdots, d\,. \label{e.2.5}
\end{empheq}
In fact we can consider a slightly more general version of system of equations than \eqref{e.2.5}:
\begin{empheq}[left=\empheqlbrace]{align}
dS_i(t)&= \sum_{j=1}^d f_{ij} (S(t-b))
S_j (t) dt \nonumber\\
& \qquad\qquad +S_i(t-) \sum_{j=1}^d \displaystyle\int_{\mathcal J}g_{ij} (z, S (t-b)) \tilde{N}_j(dz,dt),\quad i=1, \cdots, d\,,
\nonumber\\
S_i(t) &= \phi_i(t) \,,\quad t\in [-b, 0]\,, \ i=1, \cdots, d\, .
\label{e.2.6}
\end{empheq}
First, we discuss the existence, uniqueness and positivity
of \eqref{e.2.6}.
\begin{theorem} \label{t.2.1}
Suppose that $f_{ij}
:\mathbb{R}^d \rightarrow \mathbb{R}$ and $ g_{ij}:\mathcal J\times \mathbb{R}^d\rightarrow \mathbb{R}\,, \
1\le i,j\le d$, are bounded measurable functions such that there is a
constant ${\alpha}_0>-1$ satisfying
$g_{ij}(z, x) \ge {\alpha}_0$ for all $
1\le i,j\le d$, for all $z\in \mathcal J$ and for all
$ x\in \mathbb{R}^d$, where $\mathcal J=[-R, \infty)$ is the common supporting set of the Poisson measures $\tilde N_{j}(t, dz), j=1, \cdots, d$.
If for all $i\not= j$, $f_{ij}(x)\ge 0$ for all $x\in \mathbb{R}^d$,
and $\phi_i(0)\ge 0\,, \ i=1, \cdots, d$, then the stochastic differential delay equation \eqref{e.2.6}
admits a unique pathwise solution
such that $S_i(t) \geq 0$ almost surely for all $i=1, \cdots, d$ and for all $t > 0$.
\end{theorem}
\begin{proof} The theorem is stated and proved in \cite[Theorem 1]{rpaper} following the method of \cite{humulti1}
(where the case of Brownian motion was dealt with). In fact, the existence and uniqueness are routine and easy. The main point is to show the positivity of the solution. The idea
in \cite{rpaper} was to decompose the solution to \eqref{e.2.6} as product of some nonnegative processes. Here we give a slightly different decomposition which will prove the positivity and will be very useful in our numerical scheme.
Denote $\tilde f_{ij}(t)=f_{ij}(S(t-b))$ and $\tilde g_{ij}(t,z)=g_{ij}(z,S(t-b))$.
Let $Y_i(t)$ be the solution to the stochastic differential equation
\[
dY_i(t) = \tilde f_{ii} (t)
Y_i(t) dt + Y_i(t-) \sum_{j=1}^d \int_{\mathcal J} \tilde g_{ij} (t,z) \tilde N_{j} (dt, dz)
\]
with initial condition $Y_i(0)=\phi_i(0)$. Since this is a scalar equation for $Y_i(t)$, its explicit solution can be represented as
\begin{eqnarray}
Y_i(t) &=&\phi_i(0)\exp\bigg\{
\sum_{j=1}^d \int_{[0, t]\times \mathcal J} \log\left[ 1+\tilde g_{ij}
(s,z ) \right] \Tilde{N}_j(ds,dz)+\int_0^t
\tilde f_{ii}(s) ds\nonumber\\
&&\qquad + \sum_{j=1}^d \int
_{[0, t]\times \mathcal J} \Big( \log\left[1+\tilde
g_{ij}(s,z)\right] -\tilde g_{ij}(s, z ) \Big)\nu_j(dz)\,ds \bigg\}\,,
\label{e.2.7a}
\end{eqnarray}
where $\nu_j$ is the associated L\'evy measure for $\tilde N_j(ds, dz)$. Let
$p_i(t)$ be the solution to the following system of
equations
\[
dp_i(t) =\sum_{j=1, j\not=i}^d \tilde f_{ij}(t)p_j(t) dt\,, \quad
p_i(0)=1\,, \quad i=1, \cdots, d\,.
\]
Since, by assumption, $\tilde
f_{ij}(t)\ge 0$ almost surely for all $i\not=j$, the theorem
in \cite[p.~173]{matanlysis} implies
that
$p_{i}(t)\ge 0$ for all $t\ge 0$ almost surely.
Now it is easy to check by the It\^o formula that $\tilde S_i(t)=p_i(t)Y_i(t)$ satisfies
\eqref{e.2.6} and
$\tilde S_i(t)\ge 0$ almost surely. By the uniqueness of the solution
we see that $S_i(t)=\tilde S_i(t)$ for $i=1, \cdots, d$. The theorem is then proved.
\end{proof}
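To illustrate formula \eqref{e.2.7a}, consider the scalar case $d=1$ with constant coefficients $\tilde f_{11}\equiv f$ and $\tilde g_{11}(t,z)=\gamma z$ for some constant $\gamma$, so that the equation for $Y_1$ reads $dY_1(t)=fY_1(t)dt+\gamma Y_1(t-)\int_{\mathcal J} z\,\tilde N_1(dt,dz)$. Writing the stochastic integral in \eqref{e.2.7a} as a sum over jump times, the formula reduces to
\[
Y_1(t)=\phi_1(0)\exp\Big\{ft-\gamma t\int_{\mathcal J} z\,\nu_1(dz)\Big\}\prod_{0<s\le t}\big(1+\gamma\,\Delta Z_1(s)\big),
\]
which is the classical Dol\'eans-Dade exponential: between jumps $Y_1$ solves the linear ordinary differential equation $Y_1'=\big(f-\gamma\int_{\mathcal J} z\,\nu_1(dz)\big)Y_1$, and at each jump time it is multiplied by the factor $1+\gamma\,\Delta Z_1(s)$.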
\setcounter{equation}{0}\Section{Convergence rate of logarithmic Euler-Maruyama scheme}
In this section we construct a numerical scheme to approximate \eqref{e.2.1} by positive-valued processes.
Motivated by the proof of Theorem \ref{t.2.1} we shall decompose equation \eqref{e.2.1} into the following system:
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
dX_i(t) &= f_{ii}((S(t-b)))
X_i(t )dt+X_i(t-) \sum_{j=1}^d g_{ij} (S(t-b)) dZ_{j} (t)\label{e.3.1a}\\
dp_i(t)&=\sum_{j=1, j\not=i}^df_{ij}((S(t-b)))p_j(t)dt, \label{e.3.1b} \\
S_i(t)& = p_i(t)\cdot X_i(t)\,, \hspace{3mm} i=1,2, \cdots,d\,.
\label{e.3.1c}
\end{empheq}
\end{subequations}
The reason is, as in
the proof of Theorem \ref{t.2.1}, that $X_i(t)$ and $p_{i}(t)$ are all positive.
Consider a finite time interval $[0, T]$ for some fixed $T>0$ and let $\pi$ be a partition of the time interval $[0, T]$:
\[
\pi: 0=t_0<t_1<\cdots <t_n=T\,.
\]
Let $\Delta_k =
t_{k+1}-t_k $ and $\Delta =\max_{0\le k\le n-1}
(t_{k+1}-t_k) $ and assume ${\Delta}<b$.
We shall now construct an explicit logarithmic
Euler-Maruyama recursive scheme to numerically
solve \eqref{e.3.1a}-\eqref{e.3.1c}.
By the expression \eqref{e.2.7a},
the solution $X$ on $[t_k, t_{k+1}]$ to Equation \eqref{e.3.1a}
is given by
\begin{eqnarray*}
X_i(t) &=&X_i(t_k) \exp\bigg\{ \int_{t_k}^t
f_{ii}(S(s-b)) ds+\sum_{j=1}^d
\int_{t_k}^t \log\left[ 1+ g_{ij}
(S(s-b) ) dZ_j(s)
\right] \bigg\}\,,
\end{eqnarray*}
where $Z_j(t):=\sum_{k=1}^{N_j(t)}Y_{j,k}$.
Denote by $F(x ) $ the $d\times d$ matrix whose diagonal elements are all zero and whose off-diagonal entries are $f_{ij}(x)$, namely,
\[
F_{ij}(x ) =\begin{cases}
0&\qquad \hbox{when $i=j$}\\
f_{ij}(x) &\qquad \hbox{when $i\not=j$}\,.
\\
\end{cases}
\]
With this notation we can write
\eqref{e.3.1b} in matrix form:
\begin{equation}
\frac{dp(t)}{dt}=F((S(t-b)))p(t)\,,
\quad p(t)=(p_1(t), \cdots, p_d(t))^T\,,
\label{e.3.3}
\end{equation}
and its solution on the sub-interval $[t_k, t_{k+1}]$ is given by
\begin{equation}
p(t)
= \exp\Big( \tilde F(S(t-b) )
\Big) p(t_k)\,, \hspace{5mm} t\in [t_k, t_{k+1}]\,,
\end{equation}
where the exponential of a matrix is in the usual sense, $e^A =\sum_{k=0}^\infty A^k/k!$, and
the integral of a matrix is taken
entry-wise. Due to noncommutativity, $ \tilde F(S(t-b))$ is complicated to determine,
and we give the following formula for the sake of completeness:
\begin{eqnarray}
\tilde F(S(t-b))
&=&\sum_{r=1}^\infty \sum_{\sigma\in P_r}\left(
\frac{(-1)^{e(\sigma)}}{r^2\left({r-1}\atop{e(\sigma)}\right)}\right) \int_{T_r(t)}
\label{e.3.4a} \\
&&\quad \times [[\cdots [F(S(u_{\sigma(1)}-b)) F(S(u_{\sigma(2)}-b))]
\cdots] F(S(u_{\sigma(r)}-b))]\,du_1 \cdots du_r \nonumber
\end{eqnarray}
is given by the Campbell-Baker-Hausdorff-Dynkin formula
(see e.g. \cite{hu}, \cite{St}), where
$P_r$ is the set of all permutations of $\{1, 2, \cdots, r\}$,
$e(\sigma)$ is the number of errors in ordering consecutive terms
in $\{\sigma(1), \cdots, \sigma(r)\}$, $[AB]=AB-BA$ denotes the commutator
of two matrices, and $T_r(t)=\left\{ 0
<u_1<\cdots<u_r<t\right\}$.
Analogously to \cite{rpaper} we propose the following logarithmic scheme to approximate
the solution:
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
X_i^{\pi}(t)
&= X_i^{\pi}(t_k) \exp\Big( f_{ii}( S^{\pi}(t_k-b))
(t-t_k) \nonumber\\&+\sum_{j=1}^d
\ln{\Big(1 + g_{ij}(S^{\pi}(t_k-b))(Z_j(t)- Z_{j}(t_k))\Big)} \Big)\,, \label{e.3.6a}
\\
p^{\pi}(t )
&= \Big[ F(S^{\pi}(t_k-b)) (t-t_k) + I \Big]p^{\pi}(t_k), \label{e.3.6b}\\
S^{\pi}_i(t) &= p^{\pi}_i(t)X^{\pi}_i(t)\,,
\label{e.3.6c}\\
X_i^\pi(0)&=\phi_i(0)\,, \quad
p^\pi(0)={\bf 1}\,,\qquad t_k\le t\le t_{k+1}\,, \hspace{7mm} k=0,1, \cdots, n-1\,.
\label{e.3.6d}
\end{empheq}
\end{subequations}
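For readers who wish to experiment numerically, the following is a minimal Python sketch of the scheme \eqref{e.3.6a}--\eqref{e.3.6d} evaluated at the grid points; it is not part of the analysis above. It assumes a uniform grid whose step size divides the delay $b$, simulates the compound Poisson increments $Z_j(t_{k+1})-Z_j(t_k)$ directly, and relies on user-supplied (hypothetical) callables \texttt{f}, \texttt{g}, \texttt{phi} and \texttt{jump\_size} for the coefficient matrices, the initial path and a single jump sample; positivity of the logarithm requires $1+g_{ij}\,\Delta Z_j>0$, in line with the assumptions of Theorem \ref{t.2.1}.
\begin{verbatim}
import numpy as np

def log_euler_maruyama(f, g, phi, jump_rates, jump_size, b, T, n, d, rng):
    # f(x), g(x): d x d arrays of f_{ij}(x), g_{ij}(x); phi(t): R^d initial path
    dt = T / n
    m = int(round(b / dt))              # assume b is a multiple of dt
    S = np.zeros((m + n + 1, d))        # grid values of S^pi on [-b, T]
    for k in range(m + 1):
        S[k] = phi(-b + k * dt)         # history; index m is time 0
    X = np.array(phi(0.0), dtype=float) # X^pi(0) = phi(0)   (3.6d)
    p = np.ones(d)                      # p^pi(0) = 1        (3.6d)
    for k in range(n):
        S_delay = S[k]                  # S^pi(t_k - b)
        fk, gk = f(S_delay), g(S_delay)
        # compound Poisson increments Z_j(t_{k+1}) - Z_j(t_k)
        dZ = np.array([sum(jump_size(j, rng)
                           for _ in range(rng.poisson(jump_rates[j] * dt)))
                       for j in range(d)])
        # logarithmic step for X   (3.6a); needs 1 + g_{ij} dZ_j > 0
        X = X * np.exp(np.diag(fk) * dt
                       + np.log1p(gk * dZ[None, :]).sum(axis=1))
        # first-order step for p   (3.6b); F has zero diagonal
        F = fk - np.diag(np.diag(fk))
        p = (np.eye(d) + dt * F) @ p
        S[m + k + 1] = p * X            # S^pi(t_{k+1})      (3.6c)
    return S[m:]                        # approximation of S on [0, T]
\end{verbatim}
The multiplicative update for \texttt{X} mirrors \eqref{e.3.6a} and the factor $I+\Delta_k F$ mirrors \eqref{e.3.6b}, so every component of the computed approximation stays nonnegative whenever the assumptions above hold.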
We introduce step processes
\begin{eqnarray*}
\begin{cases} v_{1}(t) = \sum_{k=0}^{\infty}\mathbbm{1}_{[t_k , t_{k+1}
)}(t)S^\pi (t_k )\\
v_{2}(t) = \sum_{k=0}^{\infty}\mathbbm{1}_{[t_k , t_{k+1}
)}(t)S^\pi (t_k -b ).
\end{cases}
\end{eqnarray*}
Using the above step process we can write the continuous interpolation for $X_i$ as
\begin{eqnarray}\label{a24}
&& X^{\pi}_i(t) = \exp\Big(\displaystyle\int_0^t f_{ii}( v_2(u))du+\sum_{j=1}^d\sum_{0 \leq u \leq t, {\Delta} Z_j(u) \neq 0} \ln\Big(1+ g_{ij}(v_2(u))Y_{j,N_j(u)} \Big) \Big) \,.
\end{eqnarray}
Denote $\lfloor t \rfloor=\max\{k : t_k<t\}$.
From \eqref{e.3.6b} we have
\begin{eqnarray}
p ^{\pi}(t)
&=& \Big[ \displaystyle\int_{t_{\lfloor t\rfloor}}^t F(v_2(u) )du + I \Big]\prod_{k=1}^ {\lfloor t\rfloor} \Big[ \displaystyle\int_{t_{k-1}}^{t_{k}}F(v_2(u))du + I \Big]\,.
\label{ndim3}
\end{eqnarray}
We first show that $p^{\pi}(t_k)\geq 0$.
\begin{lemma}
If $\phi(0)\ge 0$, then $p^\pi(t)\ge 0 $ a.s. for all $0\le t\le T$,
where we set $S^{\pi}(t) = \phi(t)$ for all $t \in [- b, 0]$.
\end{lemma}
\begin{proof} This can be seen
from \eqref{e.3.6b} and by induction. Assume
$p^\pi(t_k)\ge 0 $ a.s. Since, by
the definition of $F(S^{\pi}(t_k-b))$ and the assumption that $f_{ij}\ge 0$ for $i\not=j$, all
of its entries are nonnegative,
we see from \eqref{e.3.6b} that
$p^\pi(t )\ge 0 $ a.s. for all $t_k\le t\le t_{k+1}$.
\end{proof}
Similarly we will have
\begin{lemma}
If $\phi(0)\ge 0$ a.s., then $X^\pi(t)\ge 0 $ a.s., hence $S^\pi(t)\ge 0$ a.s. for all $0\le t\le T$.
\end{lemma}
To obtain the convergence of the logarithmic Euler–Maruyama scheme
\eqref{e.3.6a}-\eqref{e.3.6d}, we make the following assumptions:
\begin{enumerate}
\item[{\bf (A1)}] The initial data satisfy $\phi_i(0)>0$, and each $\phi_i$ is H\"older continuous, i.e. there exist constants $\rho >0$ and $\gamma \in [1/2,1)$ such that for $t,s \in [-b,0]$
\begin{eqnarray}
|\phi_i(t) - \phi_i(s)| \leq \rho |t-s|^{\gamma}\,,\hspace{2mm} i=1,2, \cdots, d.
\end{eqnarray}
\item[{\bf (A2)}] $f_{ij}$ is bounded, and $f_{ij}$ and $g_{ij}$ are globally Lipschitz for $i, j=1,2,\cdots,d$. This means that there exists a constant $\rho>0$ such that
\begin{eqnarray}
\begin{cases}\Big|g_{ij}(x_1)-g_{ij}(x_2)\Big|
\leq \rho |x_1 - x_2|\,,\quad \forall \ x_1, x_2\in \mathbb{R}^d\,;\\
\Big|f_{ij}(x_1)-f_{ij}(x_2)\Big|
\leq \rho |x_1 - x_2|\,,\quad \forall \ x_1, x_2\in \mathbb{R}^d\,;
\nonumber \\
\big|f_{ij}(x)\big| \leq \rho \,,\quad \forall x\in \mathbb{R}^d.
\end{cases} \label{pema102}
\end{eqnarray}
\item[{\bf (A3)}] The support $\mathcal J $ of the Poisson random measure $N_j$ (associated with $Z_j$) is contained in $[-R, \infty)$ for each $j=1,2,\cdots,d$ for some $R>0$, and there are
constants ${\alpha}_0>1$ and $\rho>0$ satisfying
$-\rho\le g_{ij}(x) \le \frac{{\alpha}_0}{R} $ for all $ x\in \mathbb{R}^d$ and for all $i,j=1,2, \cdots,d$.
\item[{\bf (A4)}] For any $q>1$ there is a $\rho_q>0$ such that
\begin{eqnarray}&&
\displaystyle\int_{\mathcal J}(1+ |z |)^q\nu_i(dz) \leq \rho_q \,, \hspace{3mm} i=1,2, \cdots,d. \label{pema103}
\end{eqnarray}
\end{enumerate}
\begin{lemma}{\label{a19}}
Let Assumptions (A1)--(A4) be satisfied. Then, for any $q\ge 1$, there exists $K_q$, independent of the partition $\pi$,
such that
\begin{eqnarray*}
\mathbb{E}\Big[\sup_{1\le i\le d} \sup_{0 \leq t \leq T}|X_i(t)|^q \Big] \vee \mathbb{E}\Big[\sup_{1\le i\le d}\sup_{0 \leq t \leq T}|X_i^\pi(t) |^q \Big]
\leq K_q.
\end{eqnarray*}
\end{lemma}
\begin{proof} From our definition of $X_i^\pi$ and boundedness of $f_{ij}$ for all $i,j$ we have
\begin{eqnarray}
\mathbb{E}\Big[\sup_{0 \leq t \leq T}|X_i^\pi(t) |^q \Big]
\nonumber
&= &
\mathbb{E}\Big[\sup_{0 \leq t \leq T} \exp\Big( q\displaystyle\int_0^t f_{ii}( v_2(u))du+q\sum_{j=1}^d\sum_{0\leq u\leq t, \Delta Z(u)\neq 0}\ln(1+g_{ij}(v_2(u)) Y_{j,N_j(u)}) \Big)\Big]
\nonumber\\
& = & \mathbb{E}\Big[\sup_{0 \leq t \leq T}\exp\Big(q\displaystyle\int_0^t f_{ii}( v_2(u))du +q\sum_{j=1}^d\displaystyle\int_{ \mathbb{T}_t }\ln(1+z_jg_{ij}(v_2(u)))N_j(du,dz)\Big)\Big]\nonumber \\
& \leq & K\mathbb{E}\Big[\sup_{0 \leq t \leq T}\exp\Big(q\sum_{j=1}^d\displaystyle\int_{ \mathbb{T}_t }\ln(1+z_jg_{ij}(v_2(u)))N_j(du,dz)\Big)\Big]\nonumber
\\ &=:&KI\,,\label{a18}
\end{eqnarray}
where
$\mathbb{T}_t=[0, t]\times \mathcal J$.
Denote $h_j = \big((1+z_jg_{ij}(v_2(u)))^{2q}-1\big)/z_j $. Then,
\begin{eqnarray*}
I&=&
\mathbb{E}\Big[\sup_{0 \leq t \leq T}\exp\Big(\frac12\sum_{j=1}^d \displaystyle\int_{ \mathbb{T}_t }\ln(1+z_jh_j)N_j(du,dz_j) \Big)\Big] \\
&=&
\mathbb{E}\Big[\sup_{0 \leq t \leq T}\exp\Big(\sum_{j=1}^d\Big(\frac12 \displaystyle\int_{ \mathbb{T}_t }\ln(1+z_jh_j)\tilde N_j(du,dz_j)
+\frac12 \int_{ \mathbb{T}_t }\ln(1+z_jh_j)\nu_j(dz_j) du
\Big)\Big)\Big] \\
&=&
\mathbb{E}\Big[\sup_{0 \leq t \leq T}\exp\Big(\sum_{j=1}^d\Big(\frac12 \displaystyle\int_{ \mathbb{T}_t }\ln(1+z_jh_j)\tilde N_j(du,dz_j)
+\frac12 \int_{ \mathbb{T}_t }\left[ \ln(1+z_jh_j)-z_jh_j\right]
\nu_j(dz_j) du
\Big)\Big)\\
&&\qquad\qquad \times\exp\Big(\sum_{j=1}^d
\frac12 \int_{ \mathbb{T}_t } \left[(1+z_jg_{ij}(v_2(u)))^{2q}-1\right]
\nu_j(dz_j) du
\Big)\Big] \\
&\le &C_q
\mathbb{E}\Big[\sup_{0 \leq t \leq T}\exp\Big(\sum_{j=1}^d\Big(\frac12 \displaystyle\int_{ \mathbb{T}_t }\ln(1+z_jh_j)\tilde N_j(du,dz_j)
+\frac12 \int_{ \mathbb{T}_t }\left[ \ln(1+z_jh_j)-z_jh_j\right]
\nu_j(dz_j) du
\Big)\Big)\Big] \,,
\end{eqnarray*}
where we used Assumption (A4) and the boundedness of $g_{ij}$.
Write for $k=1,2, \cdots,d$
\[
M_{k,t}:=\exp\Big( \displaystyle\int_{ \mathbb{T}_t }\ln(1+z_kh_k)\tilde N_k(du,dz_k)
+ \int_{ \mathbb{T}_t }\left[ \ln(1+z_kh_k)-z_kh_k\right]
\nu_k(dz_k) du
\Big)
\,.
\]
Then $(M_{k,t}, 0\le t\le T)$ is an exponential martingale.
Now an application of the Cauchy--Schwarz inequality
yields
\begin{eqnarray*}
I
&\le & C_q
\bigg\{\mathbb{E}\Big[\sup_{0 \leq t \leq T}M_{1,t} \Big]\bigg\}^{d/2} \,,
\end{eqnarray*}
which proves
$$\mathbb{E}\Big[\sup_{0 \leq t \leq T}|X_i^\pi(t) |^q \Big] \le
K_q<\infty.$$ In the same way, we can show
$\mathbb{E}\Big[\sup_{0 \leq t \leq T}|X_i (t) |^q \Big] \le
K_q<\infty$. This completes the proof of the lemma.
\end{proof}
\begin{lemma} \label{a26}
Assume Assumptions (A1)--(A4).
Then for ${\Delta}<1$, there is a constant $K>0$,
independent of $\pi$, such that
\begin{eqnarray*}
\mathbb{E}
\sup_{0\le t\le T} \Big|S^\pi(t)- v_1(t) \Big|^{p} \leq K \Delta^{p/2} \, .
\end{eqnarray*}
\end{lemma}
\begin{proof}
Let $\lbar{t} =t_k
$ if $t \in [t_k,t_{k+1})$ for some $k $. We write $v_1=(\bar{v}_{1},\bar{v}_{2},\cdots,\bar{v}_{d})$ for the components of the step process $v_1$. For any $i=1, \cdots, d$, \begin{eqnarray}{\label{a21}}
&&\mathbb{E}\sup_{0\le t\le T} \Big| S_i^{\pi}(t) -\bar{v}_i(t)\Big|^p
= \mathbb{E}
\sup_{0\le t\le T} \Big|p_i^\pi(t)X_i^\pi(t)-p_i^\pi(\lbar{t})X_i^\pi(\lbar{t})\Big|^p\ \nonumber\\
&&= \mathbb{E}
\sup_{0\le t\le T} \Big|p_i^\pi(t)X_i^\pi(t)-p_i^\pi(\lbar{t})X_i^\pi(t)+p_i^\pi(\lbar{t})X_i^\pi(t)-p_i^\pi(\lbar{t})X_i^\pi(\lbar{t})\Big|^p\nonumber\\
&& \leq C \Big(\mathbb{E}\sup_{0\le t\le T} \Big|p_i^\pi(t)-p_i^\pi(\lbar{t})\Big|^{2p}\Big)^{1/2}\Big(\mathbb{E}\sup_{0\le t\le T} \Big|X_i^\pi(t)\Big|^{2p}\Big)^{1/2}\\
&&\qquad \quad +C\Big(\mathbb{E}\sup_{0\le t\le T}\Big|X_i^\pi(t)-X_i^\pi(\lbar{t})\Big|^{2p}\Big)^{1/2}\Big(\mathbb{E}\sup_{0\le t\le T}\Big|p_i^\pi(\lbar{t})\Big|^{2p}\Big)^{1/2}.\nonumber
\\
&& \label{a33}\end{eqnarray}
By Assumption (A2) we can bound $\mathbb{E}\sup_{0\le t\le T}\Big|p_i^\pi(\lbar{t})\Big|^{2p}$ and by Lemma \ref{a19} we can bound $\mathbb{E}\sup_{0\le t\le T}\Big|X_i^\pi(t)\Big|^{2p}$.
We now bound the other two components.
\begin{eqnarray}
&& \mathbb{E}
\sup_{0\le t\le T} \Big|p_i^\pi(t)-p_i^\pi(\lbar{t})\Big|^{2p} \leq \sum_{\substack{j, j\neq i}}^d\mathbb{E}\sup_{0\le t\le T} \Big|\int_{\lbar{t} }^tf_{ij}(v_2(u))du\Big|^{2p}.
\end{eqnarray}
By Assumption (A2) it is easy to see that for some constant $C_1$ \begin{eqnarray}
&& \mathbb{E}\sup_{0\le t\le T} \Big|p_i^\pi(t)-p_i^\pi(\lbar{t} )\Big|^{2p} \leq C_1 {\Delta}^{2p}. \label{a34}
\end{eqnarray}
For $\mathbb{E}\sup_{0\leq t\leq T}\Big|X_i^\pi(t)-X_i^\pi(\lbar{t})\Big|^{2p}$ we use the expression for $X_i^\pi(t)$, boundedness of $f_{ij}$ for all $i,j$ and
use $|e^x-e^y|\leq |e^x+e^y||x-y|$ to obtain
\begin{eqnarray}
\mathbb{E}
\sup_{0\le t\le T}\Big|X_i^\pi(t)-X_i^\pi(\lbar{t})\Big|^{2p} &\leq& \left\{\mathbb{ E} \sup_{0\le t\le T} \Big| X_i^\pi(t)+X_i^\pi(\lbar{t} ) \Big|^{2p}\right \}^{1/2}\nonumber \\
&&\cdot K \left\{\mathbb{ E} \sup_{0\le t\le T}\left[ \Big| \sum_{j=1}^d\sum_{\lbar{t}\le s<t}\ln(1+g_{ij}(v_2(s))Y_{j,N_j(s)} )\Big|\right]^{2p}\right \}^{1/2}. \nonumber
\end{eqnarray}
The first factor is bounded and now, we want to bound the second factor:
\[
I:=\mathbb{ E}\sup_{0\le t\le T} \left| \sum_{j=1}^d\sum_{\lbar{t} \leq s \leq t}\ln(1+g_{i,j}(v_2(s))Y_{j,N_j(s)}) \right|^{2p} \,.
\]
(We use the same
notation $I$ to denote different quantities on different
occasions; this should not cause ambiguity.)
We write the above sum as an integral:
\begin{eqnarray*}
I
&=&\mathbb{E}\sup_{0\le t\le T} \Big|\sum_{j=1}^d\displaystyle\int_{\mathcal J }\displaystyle\int_{\lbar{t} } ^t\ln(1+z_jg_{ij}(v_2(s))) {N_j}(ds,dz_j)\Big|^{2p}\\
&=&\mathbb{E}\sup_{0\le t\le T}\Big|\sum_{j=1}^d\displaystyle\int_{\mathcal J }\displaystyle\int_{\lbar{t}} ^t\ln(1+z_jg_{ij}(v_2(s)))\tilde{N_j}(ds,dz_j)\\
&&+\sum_{j=1}^d\displaystyle\int_{\mathcal J }\displaystyle\int_{\lbar{t}} ^t\ln(1+z_jg_{ij}(v_{2}(s)))\nu_j(dz_j)ds\Big|^{2p}\\
&\le& C_p \left({\Delta}^{2p} + \mathbb{E}\sup_{0\le t\le T} \Big|\displaystyle\int_{\mathcal J }\displaystyle\int_{\lbar{t} } ^t\ln(1+z_jg_{ij}(v_2(s)))\tilde{N_j}(ds,dz_j) \Big|^{2p}\right)\,.
\end{eqnarray*}
By the Burkholder--Davis--Gundy inequality, we have
\begin{eqnarray}
&& \mathbb{E}\sup_{0\le t\le T} \Big|\displaystyle\int_{\mathcal J }\displaystyle\int_{\lbar{t}} ^t\ln(1+z_jg_{ij}(v_2(s)))\tilde{N_j}(ds,dz_j) \Big|^{2p} \nonumber\\
& &\qquad \quad \le \mathbb{E}\sup_{0\le t\le T} \left(\displaystyle \int_{\mathcal J } \int_{\lbar{t}} ^t
\Big| \ln(1+z_jg_{ij}(v_2(s))) \Big|^2 \nu_j(dz_j)ds \right)^p\nonumber\\
& &\qquad \quad \le K_p {\Delta}^p\,. \label{a22}
\end{eqnarray}
Plugging the above estimate and \eqref{a34} into \eqref{a33}, we get for some $K,K_1,K_2>0$
\begin{eqnarray}
&& \mathbb{E}\sup_{0\le t\le T}\Big| S_i^{\pi}(t) -v_i(t)\Big|^p \leq K_1 {\Delta}^p+K_2{\Delta}^{p/2} \leq K{\Delta}^{p/2}.
\end{eqnarray}
This proves the lemma.
\end{proof}
\begin{theorem}
Assume that Assumptions (A1)--(A4) hold and let $p\ge 2$.
Then, there is a constant $K_{p,d,T}$, independent of $\pi$, such that
\begin{eqnarray}
&& \mathbb{E}\Bigg[\sup_{0 \leq t \leq T}\Big[|S(t) - S^{\pi}(t)|^p\Big]\Bigg] \leq K_{p,d,T}{\Delta}^{p/2}.
\nonumber\\ \end{eqnarray}
\end{theorem}
\begin{proof}
First, we want to bound
\begin{eqnarray}
I_1:= \mathbb{E}\Big(\sup_{0 \leq t \leq r}|p(t) - p^{\pi}(t)|^p\Big) \,.
\end{eqnarray}
From \eqref{e.3.4a}, we see that when $t\in [t_k, t_{k+1}]$,
\[
\tilde F(S(t-b))=\int_{t_k}^t F(S(u-b))du +O(\Delta^2)\,.
\]
Thus
\[
\exp\left(\tilde F(S(t-b))\right)=I+\int_{t_k}^t F(S(u-b))du +O(\Delta^2)\,.
\]
Thus we have a formula for $p(t)$ which is analogous to the one for $p^\pi(t)$ (Equation
\eqref{ndim3} ):
\begin{eqnarray}
p (t)
&=& \Big[I+ \int_{\lbar{t}}^t F(S(u-b))du + O(\Delta^2) \Big]\prod_{k=0}^
{\lfloor t\rfloor} \Big[ I+\int_{t_k}^{t_{k+1}} F(S(u-b))du +O(\Delta^2) \Big]\nonumber\\
&=& \rho(\lbar{t}, t)\prod_{k=0}^{\lfloor t\rfloor} \rho(t_k, t_{k+1})\,,
\end{eqnarray}
where
\[
\rho(r, s)=I+ \int_{r}^s F(S(u-b))du + O(\Delta^2)
\,.
\]
We can also write
\begin{eqnarray}
p^\pi (t)
&=& \rho^\pi(\lbar{t}, t)\prod_{k=0}^{\lfloor t\rfloor} \rho^\pi (t_k, t_{k+1})\,,
\end{eqnarray}
where
\[
\rho^\pi (r, s)=I+ F(S^\pi (\lbar{s}-b)) (s-r)
\,.
\]
When $r,s\in [t_k, t_{k+1}], r<s$, we have by the Lipschitz condition
\begin{eqnarray}
|\rho(r, s)-\rho^\pi(r, s)|
&\le & | F(S(t_k-b)) -F(S^\pi (t_k-b)) | (s-r) \nonumber\\
& &+\int_r^s |
F(S(u-b))-F(S^\pi (t_k-b))|du+ O(\Delta^2)\nonumber\\
&\le& C|S(t_k-b) -S^\pi (t_k-b) |+O(\Delta^{3/2} )\,.
\end{eqnarray}
We also have
\begin{equation}
| \rho^\pi(r, s)|= |I+ F(S^\pi (\lbar{s}-b)) (s-r)
\le |I+C(s-r)| \le e^{C(s-r)}\,.
\end{equation}
In the same way we have
\begin{equation}
| \rho (r, s)| \le e^{C(s-r)}\,.
\end{equation}
Thus
\begin{eqnarray}
|p^\pi (t) -p(t)|
&\le & \left| \rho (\lbar{t}, t)-\rho^\pi(\lbar{t}, t)\right|\prod_{k=0}^{\lfloor t\rfloor} \left|\rho^\pi (t_k, t_{k+1})\right|\nonumber\\
&&\qquad + \sum_{\ell=0}^{\lfloor t\rfloor} \left| \rho (t_\ell , t_{\ell+1})-\rho^\pi(t_\ell , t_{\ell+1})\right|
\left|\rho (\lbar{t}, t)\right| \prod_{k=0, k\not=\ell
}^{\lfloor t\rfloor} \left|\rho^\pi (t_k, t_{k+1})\right|\nonumber\\
&\le & \left[C\left|S(t_{\lfloor t\rfloor}-b) -S^\pi (t_{\lfloor t\rfloor}-b) \right|+O(\Delta^{3/2} ) \right]
\prod_{k=0}^{\lfloor t\rfloor} e^{C ( t_{k+1}-t_k)} \nonumber\\
&&\qquad + \sum_{\ell=0}^{\lfloor t\rfloor} \left[ C|S(t_\ell -b) -S^\pi (t_\ell
-b) |+O(\Delta^{3/2} ) \right]
e^{C(t-\lbar{t})} \prod_{k=0, k\not=\ell
}^{\lfloor t\rfloor} e^{C ( t_{k+1}-t_k )} \,.\nonumber
\end{eqnarray}
Thus we have for some $C>0$
\begin{eqnarray}
&& I_1 \leq C \mathbb{E}\sup_{0\le t\le r}\Big|S (t-b)-S^{\pi} (t-b)\Big|^p+ K_1\mathbb{E}\sup_{0\le t\le r}\Big|v_2(t)-S^{\pi} (t-b)\Big|^p. \nonumber
\end{eqnarray}
Then, by Lemma \ref{a26} (applied with $t$ replaced by $t-b$), we have
\begin{eqnarray}
&& I_1 \leq C \mathbb{E}\sup_{0\le t\le r}\Big|S (t-b)-S^{\pi} (t-b)\Big|^p+ C {\Delta}^{p/2}. \label{a29}
\end{eqnarray}
We now bound $\mathbb{E}\sup_{0 \leq t \leq r}|X(t)-X^{\pi}(t)|^p$.
Denote
\begin{eqnarray}
&& A_{i,t} = \sum_{j=1}^d\sum_{\substack{0 \leq u \leq t\\ {\Delta} Z_j(u) \neq 0}} \ln\Big(1+ g_{ij}(S (u-b))Y_{j,N_j(u)} \Big)\nonumber\\&&
A^{\pi}_{i,t} = \sum_{j=1}^d\sum_{\substack{0 \leq u \leq t\\
{\Delta} Z_j(u) \neq 0}} \ln\Big(1+ g_{ij}(v_2(u))Y_{j,N_j(u)} \Big)
\end{eqnarray}
and denote $I_2 = \mathbb{E}\Big(\sup_{0 \leq t \leq r}|X(t) - X^{\pi}(t)|^p\Big)$.
Then,
\begin{eqnarray*}&&
I_2 = \mathbb{E}\Big(\sup_{0 \leq t \leq r}|X(t) - X^{\pi}(t)|^p\Big)\\
&& \leq\Big(\mathbb{E} \sup_{0 \leq t \leq r}\sum_{i=1}^d\Big|\sum_{\substack{0 \leq u \leq t\\ {\Delta} Z(u) \neq 0}} \sum_{j=1}^d\big[\ln(1 +g_{ij}(S (u-b))Y_{j,N_j(u)} )-
\nonumber \ln(1 +g_{ij}(v_2(u))Y_{j,N(u)})\\
\\
&&\qquad +\displaystyle\int_0^t (f_{ii}( S(u-b))-f_{ii}( v_2(u)))du )\big]\Big|^{2p}\Big)\Big)^{1/2} \Big(\mathbb{E}\Big(|\exp(A_{i,t})+\exp(A^{\pi}_{i,t})|^{2p}\Big)\Big)^{1/2}
\nonumber
\\
&&= \Big(\Big(\sum_{i=1}^d \mathbb{E}\sup_{0 \leq t \leq r}\Big|\displaystyle\int_{\mathcal J \times [0,t]}\sum_{j=1}^d\left[\ln(1+z_jg_{ij}(S(u-b)))-\ln(1+z_jg_{ij}(v_2(u)))\right] \tilde{N}_j(du,dz)\nonumber\\
&&
+\displaystyle\int_{\mathcal J \times [0,t]}\sum_{j=1}^d\left[\ln(1+z_jg_{ij}(S (u-b)))-\ln(1+z_jg_{ij}(v_2(u)))\right]\nu_j(dz)du\\&&+\displaystyle\int_0^t (f_{ii}( S(u-b))-f_{ii}( v_2(u)))du\nonumber\Big|^{2p}\Big)\Big)^{1/2}\cdot \Big(\mathbb{E}\Big(|\exp(A_{i,t})+\exp(A^{\pi}_{i,t})|^{2p}\Big)\Big)^{1/2}.\nonumber
\end{eqnarray*}
Then for some $C_1>0$ we have
\begin{eqnarray}
I_2&&\leq \Big[\Big(C_1 \mathbb{E}\sup_{0 \leq t \leq r}\Big|\displaystyle\int_{\mathcal J \times [0,t]}\sum_{j=1}^d\left[\ln(1+z_jg_{1j}(S (u-b)))-\ln(1+z_jg_{1j}(v_2(u)))\right] \tilde{N}_j(du,dz_j)\Big|^{2p}\Big)^{1/2} \nonumber\\
&&
+\Big(C_1 \mathbb{E}\sup_{0 \leq t \leq r}\Big|\int_{\mathcal J\times [0,t]}\sum_{j=1}^d\left[ \ln(1+z_jg_{1j}(S (u-b)))-\ln(1+z_jg_{1j}(v_2(u)))\right] \nu_j(dz_j)du\Big|^{2p}\Big)^{1/2} \nonumber\\
&&
+\Big(C_1 \mathbb{E}\sup_{0 \leq t \leq r}\Big|\displaystyle\int_0^t (f_{ii}( S(u-b))-f_{ii}( v_2(u)))du\Big|^{2p}\Big)^{1/2}\Big]\nonumber
\\&&\cdot \Big(\mathbb{E}\Big(|\exp(A_{1,t})+\exp(A^{\pi}_{1,t})|^{2p}\Big)\Big)^{1/2}
\nonumber\\&& =: C_1(I^{1/2}_{21} + I^{1/2}_{22}+I^{1/2}_{23})\cdot \Big(\mathbb{E}\Big(|\exp(A_{1,t})+\exp(A^{\pi}_{1,t})|^{2p}\Big)\Big)^{1/2}.\nonumber
\end{eqnarray}
Using the Lipschitz condition on $g_{ij}$, the fact that $\int_{\mathcal J}z_j\nu_j(dz_j) = K_{\nu}< \infty $ for $j=1,2, \cdots, d$, Lemma \ref{a26} and Assumption (A3), we have
\begin{eqnarray*}
&&I_{22} \leq \mathbb{E}\sup_{0 \leq t \leq r}\Big|\int_{\mathcal J\times [0,t]}\sum_{j=1}^d\left[ \ln(1+z_jg_{ij}(S (u-b)))-\ln(1+z_jg_{ij}(v_2(u)))\right] \nu_j(dz_j)du\Big|^{2p}\Big)\\
&&\qquad
\leq C\mathbb{E}\sup_{0 \leq t \leq r}\Big| S (t-b)-S ^{\pi}(t-b)\Big|^{2p}+ C \mathbb{E}\sup_{0 \leq t \leq r}\Big| v_2(t)-S^{\pi} (t-b)\Big|^{2p}\\
&&\le C \mathbb{E}\sup_{0 \leq t \leq r}\Big| S(t-b)-S^{\pi}(t-b)\Big|^{2p} +C{\Delta}^p.
\end{eqnarray*}
Using the Burkholder-Davis-Gundy inequality we have
\begin{eqnarray*}
&& I_{21} \\&&\leq \sum_{i=1}^d \mathbb{E}\Big(\displaystyle\int_{\mathcal J }\displaystyle\int_0^t\sum_{j=1}^d\Big|\ln(1+z_jg_{ij}(S(u-b)))-\ln(1+z_jg_{ij}(v_2 (u)))\Big|^2 \nu_j(dz)du\Big)^{p}.
\end{eqnarray*}
Similarly to the bound for $I_{22}$, we have
\begin{eqnarray*}
&& I_{21} \leq
C\mathbb{E}\sup_{0 \leq t \leq r}\Big| S (t-b)-S^{\pi} (t-b)\Big|^{2p}+C {\Delta}^p.
\end{eqnarray*}
Similarly to the bound for $I_{22}$, using Assumption (A2), we have
\begin{eqnarray*}
&& I_{23} \leq
C\mathbb{E}\sup_{0 \leq t \leq r}\Big| S (t-b)-S^{\pi} (t-b)\Big|^{2p}+C {\Delta}^p.
\end{eqnarray*}
Combining the bounds for $I_{21}, I_{22}, I_{23}$ with the help of Lemma \ref{a19}, we get for some $K_2>0$
\begin{equation}
I_{2} \leq K_2\Big(\mathbb{E}\sup_{0 \leq t \leq r}\Big| S (t-b)-S^{\pi} (t-b)\Big|^{2p}\Big)^{1/2}+K_2{\Delta}^{p/2}. \label{a28}
\end{equation}
We write $I_3= \mathbb{E}\Big(\sup_{0 \leq t \leq r}|S(t) - S^{\pi}(t)|^p\Big)$. Then we have
\begin{eqnarray*}&&
I_3 = \mathbb{E}\Big(\sup_{0 \leq t \leq r}|S(t) - S^{\pi}(t)|^p\Big)
\leq \mathbb{E}\Big(\sup_{0 \leq t \leq r}\Big|(p(t)-p^{\pi}(t))X(t) + (X(t) - X^{\pi}(t))p^{\pi}(t) \Big|^p\Big)\nonumber \\&&
\leq 2^{p-1}\mathbb{E}\Big(\sup_{0 \leq t \leq r}\Big| (p(t)-p^{\pi}(t) )X(t)\Big|^p\Big)
+2^{p-1}\mathbb{E}\Big(\sup_{0 \leq t \leq r}\Big|p^{\pi}(t)(X(t) - X^{\pi}(t)) \Big|^p\Big). \nonumber \\&&
=: C(I_{31}+ I_{32}).
\end{eqnarray*}
We now bound $I_{31}$ and $I_{32}$.
\begin{eqnarray}
&& I_{31} \leq C\Big(\mathbb{E}\Big(\sup_{0 \leq t \leq r}\Big| X(t)\Big|^{2p}\Big)\Big)^{1/2}\Big( \mathbb{E}\Big(\sup_{0 \leq t \leq r}\Big| (p(t)-p^{\pi}(t) )\Big|^{2p}\Big) \Big)^{1/2}.
\end{eqnarray}
Using Lemma \ref{a19} and \eqref{a29} we have
\begin{eqnarray}&&
I_{31}
\leq C \Big(\mathbb{E}\sup_{0\leq t\leq r}\Big|S (t-b)-S^{\pi} (t-b)\Big|^{2p}+{\Delta}^p\Big)^{1/2}. \nonumber
\end{eqnarray}
Using Assumption (A2) we can show that $p^{\pi}$ is bounded; hence, using \eqref{a28}, we can write
\begin{eqnarray}
I_{32} &\leq& C \Big(\Big(\mathbb{E}\sup_{0 \leq t \leq r}\Big| S (t-b)-S^{\pi} (t-b)\Big|^{4p}\Big)^{1/2}+{\Delta}^p \Big)^{1/2}.
\end{eqnarray}
Hence we have for some $K_3>0$
\begin{eqnarray}
&& I_{3}\leq K_{3}\Big(\mathbb{E}\sup_{0 \leq t \leq r}\Big|S (t-b)-S^{\pi} (t-b)\Big|^{2p}\Big)^{1/2}+ K_{3}{\Delta}^{p/2} \label{a27}.
\end{eqnarray}
Therefore we get
\begin{eqnarray}
&&
\mathbb{E}\Bigg[\sup_{0 \leq t \leq r}\Big[|S(t) - S^{\pi}(t)|^p\Big]\Bigg] \nonumber\\
&&
\leq C \Big(\mathbb{E}\sup_{0\leq t \leq r}\Big|S (t-b)-S^{\pi} (t-b)\Big|^{2p}\Big)^{1/2}+ K{\Delta}^{p/2}. \label{a30}
\end{eqnarray}
Taking $r=b$, we have
\begin{eqnarray}
\mathbb{E}\Bigg[\sup_{0 \leq t \leq b}\Big[|S(t) - S^{\pi}(t)|^p\Big]\Bigg] \le C {\Delta}^{p/2}
\end{eqnarray}
for any $p\ge 2$. Now, taking $r=2b$ in \eqref{a30}, we have
\begin{eqnarray}
&&
\mathbb{E}\Bigg[\sup_{0 \leq t \leq 2b}\Big[|S(t) - S^{\pi}(t)|^p\Big]\Bigg]
\le C \left[ \mathbb{E} \sup_{-b\le t\le b} \left|S(t)- S^{\pi}(t) \right| ^{2p}\right]^{1/2} +K {\Delta}^{p/2} \nonumber\\
&&\qquad \le C \left[ K {\Delta}^p \right]^{1/2} +K{\Delta}^{p/2} \le C {\Delta}^{p/2}\,.
\end{eqnarray}
Continuing this way, we obtain for any positive integer $k\in \mathbb{N}$,
\begin{eqnarray}
&&
\mathbb{E}\Bigg[\sup_{0 \leq t \leq kb}|S(t) - S^{\pi}(t)|^p\Bigg]
\le C_{p, k, d, T} {\Delta}^{p/2}\,.
\end{eqnarray}
Now, since $T$ is finite, we can choose a $k$ such that
$(k-1)b<T\le kb$. This completes the proof of the theorem.
\end{proof}
| {
"timestamp": "2021-09-01T02:06:53",
"yymm": "2108",
"arxiv_id": "2108.11020",
"language": "en",
"url": "https://arxiv.org/abs/2108.11020",
"abstract": "In this paper, we extend the logarithmic Euler-Maruyama scheme for stochastic delay differential equation in one dimension to the part where we propose a scheme for a system of stochastic delay differential equations. We then show that the scheme always maintains positivity subject to initial conditions. We then show the convergence of the proposed Euler-Maruyama scheme. With this scheme, all the approximate solutions are positive and the rate of convergence of this scheme is 0.5.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Logarithmic Euler Maruyama Scheme for Multi Dimensional Stochastic Delay Differential Equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540668504082,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046529127518
} |
https://arxiv.org/abs/1510.05360 | The $k$-independent graph of a graph | Let $G=(V,E)$ be a simple graph. A set $I\subseteq V$ is an independent set, if no two of its members are adjacent in $G$. The $k$-independent graph of $G$, $I_k (G)$, is defined to be the graph whose vertices correspond to the independent sets of $G$ that have cardinality at most $k$. Two vertices in $I_k(G)$ are adjacent if and only if the corresponding independent sets of $G$ differ by either adding or deleting a single vertex. In this paper, we obtain some properties of $I_k(G)$ and compute it for some graphs. | \section{Introduction}
Given a simple graph $G=(V,E)$, a set $I\subseteq V$ is an independent set of $G$, if there is no edge of $G$ between any two vertices of $I$.
A maximal independent set is an independent set that is not a proper subset of any other independent set.
A maximum independent set is an independent set of greatest cardinality for $G$. This cardinality is called the independence number of $G$,
and is denoted by $\alpha (G)$.
\noindent Reconfiguration problems have been studied often in recent years. These arise in settings
where the goal is to transform feasible solutions to a problem in a step-by-step manner, while
maintaining a feasible solution throughout.
\noindent Consider, for instance, the dominating set reconfiguration problem: given two dominating sets $S$ and $T$ of a graph $G$, both of size at most $k$, is it possible to transform $S$ into $T$ by adding and removing vertices one by one, while maintaining a
dominating set of size at most $k$ throughout?
\noindent Regarding this dominating set reconfiguration problem, the $k$-dominating graph of a graph $G$ was recently defined in \cite{Hass}.
The $k$-dominating graph of $G$, $D_k (G)$, is defined to be the graph whose vertices correspond to the dominating sets of $G$
that have cardinality at most $k$. Two vertices in $D_k(G)$ are adjacent if and only if the corresponding dominating
sets of $G$ differ by either adding or deleting a single vertex. The authors of \cite{Hass} gave conditions that
ensure $D_k(G)$ is connected. The authors of \cite{davood} also studied this graph for certain graphs.
\noindent One of the most well-studied problems among reconfiguration problems is the reconfiguration of independent
sets. For a graph $G$ and an integer $k$, the independent sets of size at least/exactly $k$ of $G$
form the feasible solutions. Independent sets are also called token configurations, where the independent set vertices are viewed as tokens \cite{Bonsama}. Deciding whether there exists a reconfiguration between two
$k$-independent sets with at most $\ell$ operations is strongly NP-complete (\cite{Kami}).
\noindent Bonamy and Bousquet in \cite{Bonamy} have considered the $k$-TAR reconfiguration graph, $TAR_k(G)$, as follows:
\noindent A $k$-independent set of $G$ is a set $S\subseteq V$ with $|S|\geq k$, such that no two elements of $S$ are adjacent.
Two $k$-independent sets $I$ and $J$ are adjacent if they differ on exactly one vertex.
This model is called the Token Addition and Removal (TAR). Authors in \cite{Bonamy} provided a cubic-time algorithm to
decide whether $TAR_k(G)$ is connected when $G$ is a graph which does not contain induced paths of length $4$. Their work solves an open question
in \cite{Bonsama}. Also they described a linear-time algorithm which decides
whether two elements of $TAR_k(G)$ are in the same connected component.
\noindent Let us rewrite the definition of the reconfiguration graph $TAR_k(G)$ as follows:
\noindent For a graph $G$ and a non-negative integer $k$, the $k$-independent graph of $G$, $I_k (G)$, is defined to be the graph whose vertices correspond to the independent sets of $G$ that have cardinality
at most $k$. Two vertices in $I_k(G)$ are adjacent if and only if the corresponding independent
sets of $G$ differ by either adding or deleting a single vertex.
\noindent As an example, Figure \ref{figure1} shows $I_{3}(K_{1,3})$.
\begin{figure}[!h]
\hspace{3.8cm}
\includegraphics[width=8cm,height=8cm]{K}
\caption{ \label{figure1} $I_{3}(K_{1,3})$.}
\end{figure}
\noindent Note that the $k$-dominating and $k$-independent graphs are also similar to recent work in graph colouring. Given a graph
$H$ and a positive integer $k$, the $k$-colouring graph of $H$, denoted $G_k(H)$, has vertices
corresponding to the (proper) $k$-vertex-colourings of $H$. Two vertices in $G_k(H)$ are
adjacent if and only if the corresponding vertex colourings of $H$ differ on precisely one
vertex. The authors of \cite{4,5,6,8} studied the connectedness of $k$-colouring graphs, as well as their Hamiltonicity.
\medskip
\noindent The following theorem, gives some properties of the $k$-independent graph of a graph:
\begin{teorem} \label{pro}
\begin{enumerate}
\item[(i)]
If $G$ is a graph of order $n$, then $I_1(G)\cong K_{1,n}$.
\item[(ii)]
For every graph $G$ and every natural $k\leq \alpha(G)$, the independent graph $I_k(G)$ is connected.
\item[(iii)]
For every graph $G$, the independent graph $I_k(G)$ is a bipartite graph.
\item[(iv)]
If $G\ncong \overline{K_n}$, then $I_k(G)$ is not a regular graph.
\item[(v)]
If $G\ncong \overline{K_n}$ then $I_k(G)$ is not a vertex-transitive graph, and so is not a Cayley graph.
\end{enumerate}
\end{teorem}
\noindent{\bf Proof.}
\begin{enumerate}
\item[(i)]
It follows from the definition.
\item[(ii)]
Let $I_1$ and $I_2$ be two independent sets of $G$ of cardinality at most $k$ (i.e., two vertices of $I_k(G)$). Removing the members of $I_1$
one at a time yields a path in $I_k(G)$ from $I_1$ to the empty set, since each intermediate set is again an independent set of cardinality at most $k$.
Similarly, we can find a path from $I_2$ to the empty set.
Therefore there exists a path between $I_1$ and $I_2$ and we have the
result.
\item[(iii)]
Let $X$ be the set of independent sets of $G$ of cardinality at most $k$ with odd cardinality and $Y$ be the set of independent sets of cardinality
at most $k$ with even cardinality. It is clear that $X\cup Y=V(I_k(G))$ and $X\cap Y=\emptyset$. Suppose that $A,B\in X$ were adjacent in $I_k(G)$;
then they would differ by exactly one vertex, so $\big||A|-|B|\big|= 1$, which is impossible since $|A|$ and $|B|$ are both odd.
So $AB$ is not an edge of $I_k(G)$, and a similar argument applies to two vertices in $Y$.
Therefore $I_k(G)$ is a bipartite graph with parts $X$ and $Y$.
\item[(iv)]
Let $G$ be a graph of order $n$. The empty set is an independent set of $G$ which has degree $n$ in $I_k(G)$.
Let $I_1$ be an independent set of $G$ with $|I_1|=\alpha(G)$. We know that $I_1$ is adjacent to $\alpha(G)$ independent sets.
Since $G\ncong \overline{K_n}$, we have $\alpha(G)\neq n$. Therefore $I_k(G)$ is not a regular graph.
\item[(v)]
It follows from Part (iv).\quad\lower0.1cm\hbox{\noindent \boxit{\copy\thebox}}\bigskip
\end{enumerate}
\noindent It is obvious that, for every graph $G$ and every $k\ge 1$, the maximum degree of $I_k(G)$ is $\Delta(I_k(G))=|V(G)|$, attained by the empty set, which is adjacent to every singleton $\{v\}$, $v\in V(G)$.
\begin{teorem} \label{tree}
\begin{enumerate}
\item[(i)]
Let $G$ be a graph of order $n$. There is no integer $k$, such that $I_k(G)\cong G$.
\item[(ii)]
If $G\ncong K_n$ and $k\ge 2$, then the girth of $I_k(G)$ is $4$.
\item[(iii)]
\ Let $G\neq K_n$ be a graph. Then for all integers $k\geq 2$, $I_k(G)$ is not a tree.
\end{enumerate}
\end{teorem}
\noindent{Proof.}
\begin{enumerate}
\item[(i)] Since for every integer number $k\geq 1$, $|V(I_k(G))|\geq n+1$, so we have the result.
\item[(ii)] Let $v_1$ and $v_2$ be two non-adjacent vertices of the graph $G$. So $\{v_1\}$ and $\{v_2\}$ are two independent sets of $G$ and
therefore two vertices of $I_k(G)$. Now $\emptyset$, $\{v_1\}$, $\{v_1,v_2\}$, $\{v_2\}$, $\emptyset$ is a cycle of length $4$ in $I_k(G)$, and
since $I_k(G)$ is bipartite by Theorem \ref{pro}(iii), it contains no shorter cycle. Therefore the girth of $I_k(G)$ is 4.
\item[(iii)] It follows from Part $(ii)$.\quad\lower0.1cm\hbox{\noindent \boxit{\copy\thebox}}\bigskip
\end{enumerate}
\section{$\alpha$-independent graph of some graphs}
\noindent Let $G$ be a simple graph with independence number $\alpha$. It seems that, among the $k$-independent graphs of $G$, the $\alpha$-independent
graph is the most important one. In this section, we study the $\alpha$-independent graph of some graphs.
To study the $\alpha$-independent graph of $G$, we are interested in knowing the order of $I_{\alpha}(G)$.
\noindent Let $i_k$ be the number of independent sets of cardinality $k$ in $G$. The polynomial
$$I(G,x)=\sum_{k=0}^{\alpha(G)}i_kx^k,$$ is called the independence polynomial of $G$.
Obviously $I(G,1)$ gives the number of all independent sets of a graph $G$. In other words, $|V(I_{\alpha}(G))|=I(G,1)$.
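For example, for the path $P_3$ one has $I(P_3,x)=1+3x+x^2$, so $I(P_3,1)=5$; accordingly, $I_2(P_3)$ has five vertices (cf.\ Figure \ref{figure1'}).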
\noindent Since $I(K_n,x)=1+nx$, we have $I(K_n,1)=n+1$. Therefore we have the following easy result:
\begin{teorem}
For any integer $k>1$, there is some connected graph $G$ such that $|V(I_{\alpha}(G))|=k$.
\end{teorem}
%
\noindent The following theorem is about the $\alpha$-independent graph of stars:
\begin{teorem}
\begin{enumerate}
\item[(i)]
The $n$-independent graph $I_n(K_{1,n})$ is a bipartite graph with parts $X$ and $Y$, with $|X|=2^{n-1}$ and $|Y|=2^{n-1}+1$.
\item[(ii)]
The $n$-independent graph $I_n(K_{1,n})$ is not Hamiltonian.
\end{enumerate}
\end{teorem}
\noindent{\bf Proof.}
\begin{enumerate}
\item[(i)]
Let $X$ be the set of independent sets of $K_{1,n}$ with even cardinality and $Y$ be the set of independent
sets of odd cardinality. By Theorem \ref{pro}(iii), $I_n(K_{1,n})$ is a bipartite graph with parts $X$ and $Y$.
Obviously $|X|=\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor} {n\choose 2k}$ and since the number of independent sets of $K_{1,n}$ is
$I(K_{1,n},1)=2^n+1$, we have
$|Y|=1+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor} {n\choose 2k-1}$. Therefore we have the result.
\item[(ii)]
Since a bipartite graph with different number of vertices in its parts is not a Hamiltonian graph,
so the $n$-independent graph $I_n(K_{1,n})$ is not a Hamiltonian graph.\quad\lower0.1cm\hbox{\noindent \boxit{\copy\thebox}}\bigskip
\end{enumerate}
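\noindent For instance, for $n=3$ the graph $K_{1,3}$ has $2^3+1=9$ independent sets, with $|X|=4$ (the empty set and the three pairs of leaves) and $|Y|=5$ (the three singleton leaves, the set of all three leaves, and the center of the star), in agreement with Figure \ref{figure1}.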
\noindent Here we consider the $\alpha$-independent graphs of some other graphs. Figure \ref{figure1'} shows $I_2(P_3)$.
\begin{figure}[!h]
\hspace{3.8cm}
\includegraphics[width=8cm,height=6cm]{2}
\caption{ \label{figure1'} $I_{2}(P_{3})$.}
\end{figure}
\begin{teorem}
For every $n\in \mathbb{N}$, $\delta(I_{\alpha}(P_n))=\lfloor\frac{n}{2}\rfloor.$
\end{teorem}
\noindent{\bf Proof.} The minimum degree of vertices of $I_{\lceil\frac{n}{2}\rceil}(P_n)$ is due to maximal independent sets of $P_n$
with minimum cardinality. These vertices are adjacent to $n-\lceil\frac{n}{2}\rceil=\lfloor\frac{n}{2}\rfloor$ of independent
sets with less cardinality.\quad\lower0.1cm\hbox{\noindent \boxit{\copy\thebox}}\bigskip
\noindent Here we shall obtain information on the Hamiltonicity of the $\alpha$-independent graphs of some specific graphs.
Using the value of the independence polynomial at $-1$, we have $I(G;-1)=i_0-i_1+i_2-\ldots+(-1)^{\alpha}i_\alpha=f_0(G)-f_1(G)$,
where $f_0(G)=i_0+i_2+i_4+\ldots$, $f_1(G)=i_1+i_3+i_5+\ldots$
are equal to the numbers of independent sets of even size and odd size of $G$, respectively.
$I(G,-1)$ is known as the alternating number of independent sets. We need the following theorem:
\begin{teorem}{\rm\cite{Specific}} \label{value}
For $n\geq 1$, the following hold:
\begin{enumerate}
\item[(i)] $I(P_{3n-2};-1)=0$ and $I(P_{3n-1};-1)=I(P_{3n};-1)=(-1)^n$;
\item[(ii)] $I(C_{3n};-1)=2(-1)^n, I(C_{3n+1};-1)=(-1)^n$ and $I(C_{3n+2};-1)=(-1)^{n+1}$;
\item[(iii)] $I(W_{3n+1};-1) = 2(-1)^n-1$ and $I(W_{3n};-1)=I(W_{3n+2};-1)=(-1)^n-1$.
\end{enumerate}
\end{teorem}
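\noindent For example, for $P_4=P_{3\cdot 2-2}$, with the vertices labelled $1,2,3,4$ along the path, the independent sets are the empty set, the four singletons, and the three sets $\{1,3\}, \{1,4\}, \{2,4\}$, so $I(P_4;-1)=1-4+3=0$, in accordance with part (i).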
\begin{korolari}
For all positive integer $n$, the graphs $I_{\alpha}(P_{3n-1})$, $I_{\alpha}(P_{3n})$, $I_{\alpha}(C_{n})$ and $I_{\alpha}(W_{n})$
are not Hamiltonian.
\end{korolari}
\noindent{\bf Proof.}
We know that $I_{\alpha}(P_{n})$, $I_{\alpha}(C_{n})$ and $I_{\alpha}(W_{n})$ are bipartite graphs with parts containing the independent sets
of even and odd cardinality. By Theorem \ref{value}, these bipartite graphs have parts of different cardinality.
Therefore we have the result.\quad\lower0.1cm\hbox{\noindent \boxit{\copy\thebox}}\bigskip
\section{Connectedness of $k$-independent graph}
\noindent As we have seen, since the empty set is an independent set of any graph, the $k$-independent graph $I_k(G)$
is a connected graph. Let us now exclude the empty set from the study of the $k$-independent graph.
\begin{figure}[h]
\begin{minipage}{7.5cm}
\includegraphics[width=7.6cm,height=8cm]{6}
\end{minipage}
\begin{minipage}{7.5cm}
\includegraphics[width=6.5cm,height=8cm]{5}
\end{minipage}
\caption{\label{figure1''} $I_{3}^*(K_{1,3})$ and $I_{2}^*(C_4)$, respectively.}
\end{figure}
\noindent Suppose that $\mathcal{I}$ is the family of all independent sets of the graph $G$. If we put $V(I_k(G))= \mathcal{I}\setminus \{\emptyset\}$, then
we denote the resulting $k$-independent graph of $G$ by $I_k^*(G)$. Note that in this case,
for some $k$ and $G$, $I_k^*(G)$ is disconnected, while for other choices it is connected.
\begin{figure}[h]
\begin{minipage}{7.5cm}
\includegraphics[width=7.6cm,height=8cm]{3}
\end{minipage}
\begin{minipage}{7.5cm}
\includegraphics[width=5cm,height=7.8cm]{4}
\end{minipage}
\caption{\label{figure2'} $I_{3}^*(P_5)$ and $I_2^*(W_5)$, respectively.}
\end{figure}
\noindent
For example, Figure \ref{figure1''} shows $I_3^*(K_{1,3})$ and $I_2^*(C_4)$, which are disconnected graphs with two components each.
Also, Figure \ref{figure2'} shows $I_{3}^*(P_5)$ and $I_2^*(W_5)$, respectively. Observe that $I_{3}^*(P_5)$ is connected and $I_2^*(W_5)$ is
disconnected with three components.
\noindent Theorem \ref{tree} implies that for any graph $G\neq K_n$ and for all integers $k \geq 2$, $I_k(G)$ is not a tree, but as
we see in Figure \ref{figure2'}, the graph $I_k^*(G)$ can be a forest. This naturally raises the following questions: For which graphs $G$
are the components of $I_k^*(G)$ forests? What is the number of components?
\noindent The following theorem is a sufficient condition for disconnectedness of $I_{\alpha}^*(G)$.
\begin{teorem}\label{dis}
If a graph $G$ of order $n$ has a vertex of degree $n-1$, then $I_{\alpha}^*(G)$ is disconnected.
\end{teorem}
\noindent{\bf Proof.}
Let $v$ be a vertex of degree $n-1$. Obviously $\{v\}$ is a non-empty independent set of $G$,
and so is an isolated vertex of $I_{\alpha}^*(G)$.\quad\lower0.1cm\hbox{\noindent \boxit{\copy\thebox}}\bigskip
\noindent Note that the converse of Theorem \ref{dis} is not true. For example $I^*_2(C_4)$ has two components,
but $C_4$ is $2$-regular (Figure \ref{figure1''}).
\noindent We end this paper with the following theorem:
\begin{teorem}
Let $K_{n_1,n_2,\ldots,n_m}$ be a complete $m$-partite graph, then $I^*_\alpha(K_{n_1,n_2,\ldots,n_m})$ has $m$ components.
\end{teorem}
\noindent{\bf Proof.} Let $X_1$ and $X_2$ be two arbitrary parts of $K_{n_1,n_2,\ldots,n_m}$. Suppose that $I_1$ consists of all nonempty subsets of
the part $X_1$ and $I_2$ consists of all nonempty subsets of the part $X_2$. Obviously, each member of $I_1$ and each member of $I_2$ is an independent set
of $K_{n_1,n_2,\ldots,n_m}$, and so they are vertices of $I^*_\alpha(K_{n_1,n_2,\ldots,n_m})$. No member of $I_1$ is adjacent to
a member of $I_2$ in $I^*_\alpha(K_{n_1,n_2,\ldots,n_m})$. So $I^*_\alpha(K_{n_1,n_2,\ldots,n_m})$ is a disconnected graph. Since the members of
$I_1$ (and the members of $I_2$) induce a connected subgraph, we have exactly $m$ components.\quad\lower0.1cm\hbox{\noindent \boxit{\copy\thebox}}\bigskip
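\noindent For example, $C_4\cong K_{2,2}$, so the theorem gives that $I^*_2(C_4)$ has exactly two components, in agreement with Figure \ref{figure1''}.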
| {
"timestamp": "2015-10-22T02:07:00",
"yymm": "1510",
"arxiv_id": "1510.05360",
"language": "en",
"url": "https://arxiv.org/abs/1510.05360",
"abstract": "Let $G=(V,E)$ be a simple graph. A set $I\\subseteq V$ is an independent set, if no two of its members are adjacent in $G$. The $k$-independent graph of $G$, $I_k (G)$, is defined to be the graph whose vertices correspond to the independent sets of $G$ that have cardinality at most $k$. Two vertices in $I_k(G)$ are adjacent if and only if the corresponding independent sets of $G$ differ by either adding or deleting a single vertex. In this paper, we obtain some properties of $I_k(G)$ and compute it for some graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "The $k$-independent graph of a graph",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540656452213,
"lm_q2_score": 0.7248702761768249,
"lm_q1q2_score": 0.7099046520391479
} |
https://arxiv.org/abs/1104.2511 | On the J-anti-invariant cohomology of almost complex 4-manifolds | For a compact almost complex 4-manifold $(M,J)$, we study the subgroups $H^{\pm}_J$ of $H^2(M, \mathbb{R})$ consisting of cohomology classes representable by $J$-invariant, respectively, $J$-anti-invariant 2-forms. If $b^+ =1$, we show that for generic almost complex structures on $M$, the subgroup $H^-_J$ is trivial. Computations of the subgroups and their dimensions $h^{\pm}_J$ are obtained for almost complex structures related to integrable ones. We also prove semi-continuity properties for $h^{\pm}_J$. | \section{Introduction}
For any almost complex manifold $(M, J)$, the last two authors
\cite{LZ} introduced certain subgroups of the de Rham cohomology
groups, naturally defined by the almost complex structure. These
subgroups are interesting almost
complex invariants and there are several works already devoted to
their study \cite{FT}, \cite{AT}, \cite{DLZ}, \cite{AT2}. Particularly important
are the subgroups $H_J^+$, $H_J^-$ of $H^2(M, \mathbb{R})$,
defined as the sets of cohomology classes which can be represented
by $J$-invariant, respectively, $J$-anti-invariant real $2-$forms.
They naturally appear in the relationship
between the compatible symplectic cone and the tamed symplectic
cone of a given compact almost complex manifold \cite{LZ}. All of the above
quoted works consider the problem of whether or not the subgroups
$H_J^+$, $H_J^-$ induce a direct sum decomposition of $H^2(M,
\mathbb{R})$. This is known to be true for integrable
almost complex structures $J$ which admit compatible K\"ahler
metrics on compact manifolds of any dimension. In this case, the
induced decomposition is nothing but the classical (real)
Hodge-Dolbeault decomposition of $H^2(M, \mathbb{R})$ (see
\cite{LZ}, \cite{FT}, \cite{DLZ}). On the other hand, examples from
\cite{FT}, \cite{AT}, \cite{AT2} show that there exist almost complex
structures, even integrable ones, on compact manifolds of dimension
greater or equal to 6, for which the subgroups $H_J^+$,
$H_J^-$ may even have a non-trivial intersection. Dimension 4 is
special, as it was proved in \cite{DLZ} that on any compact almost
complex 4-manifold $(M^4, J)$, the subgroups $H_J^+$, $H_J^-$
do yield a direct sum decomposition for $H^2(M, \mathbb{R})$. In
this paper, we still concentrate to dimension 4, and give some computations and
estimates for the dimensions $h_J^{\pm}$ of the subgroups
$H_J^{\pm}$.
\vspace{0.2cm}
After some preliminaries, section 2 contains a result of Lejmi
\cite{Le} (Lemma \ref{mehdi} in our paper), from which one can see
the space $H_J^-$ as the kernel of an elliptic operator. Following
an observation of Vestislav Apostolov, Lejmi's lemma combined with a
classical result of Kodaira and Morrow yields semi-continuity
properties of $h_{J_{t}}^{\pm}$ for any path $J_t$ of almost complex
structures on a compact 4-manifold (Theorem \ref{semicont-hJt}).
Two conjectures are made about the dimension $h^-_J$ on a compact 4-manifold:
namely, that $h^-_J$ vanishes for generic almost complex structures (Conjecture \ref{conj1}),
and that an almost complex structure with $h^-_J \geq 3$ is necessarily
integrable (Conjecture \ref{conj2}).
\vspace{0.2cm}
In section 3, we confirm the first conjecture for 4-manifolds
with $b^+=1$ (Theorem \ref{int3}). The main topic of section 3 is
the notion of metric related almost complex structures. Two almost
complex structures are said to be {\it metric related} if they
induce the same orientation and they admit a common compatible
metric. We compute the subgroups $H_J^+$, $H_J^-$ and their
dimensions $h^+_J$, $h^-_J$ for almost complex structures metric
related to an integrable one. The main result is:
\begin{theorem} \label{hJ-}
Let $(M,J)$ be a compact complex surface. If $\tilde J$ is an almost
complex structure on $M$ metric related to $J$, $\tilde J \not\equiv
\pm J$, then $h^-_{\tilde J} \in \{ 0, 1, 2 \}$. The almost complex
structures $\tilde J$ with $h^-_{\tilde J} = 0$ form an open and
dense set with respect to the $C^{\infty}$-topology in the space of
almost complex structures metric related to $J$. The almost complex
structures ${\tilde J}$ for which $h^-_{\tilde J} = 1$ or
$h^-_{\tilde J} = 2$ are explicitly described. In particular, the
case $h^-_{\tilde J} = 2$ appears only when $(M,J)$ is a complex
torus, or a K3 surface.
\end{theorem}
A main tool in the proof of Theorem \ref{hJ-} is the use of Gauduchon metrics.
We also use them to give an alternative proof of the fact first observed in \cite{LZ},
that for a compact complex surface $(M,J)$, $J$ is tamed by a symplectic form
if and only if $b_1$ is even (Proposition \ref{tamecx}).
Section 3 ends with a couple of applications of Theorem \ref{hJ-}.
In Theorem \ref{nonpure}, we prove that the intersection of $H_J^+$,
$H_J^-$ could be non-trivial even for a K\"ahler $J$ if the
compactness assumption is removed. Theorem \ref{K3T4} shows that the
examples of non-integrable almost complex structures with $h_{\tilde
J}^-=2$ from Theorem \ref{hJ-} cannot admit a smooth
pseudo-holomorphic blowup.
\vspace{0.2cm}
The so-called {\it well-balanced} almost
Hermitian 4-manifolds are introduced in section 4, as a natural generalization of both the
Hermitian 4-manifolds and the almost K\"ahler ones. It is likely
that this new notion has links with generalized complex geometry,
but we leave the study of these possible links for future work. For
now, we give some examples of well-balanced almost Hermitian
4-manifolds (Proposition \ref{expwb}), and prove a vanishing result for
$h_J^-$ on a well-balanced compact almost Hermitian 4-manifold with
Hermitian Weyl tensor (Theorem \ref{mainwb}).
\vspace{0.2cm}
Finally, in section 5 we discuss Donaldson's symplectic version of
the Calabi-Yau equation on 4-manifolds. We observe that his
technique based on the Implicit Function Theorem can also be used to
obtain a stronger semi-continuity property for $h_J^{\pm}$ near a $J$ which
admits a compatible symplectic form (Theorem \ref{deformation}).
\vspace{0.2cm} {\bf Acknowledgments:} We are very grateful to V.
Apostolov for pointing out Theorem \ref{semicont-hJt} and for other
valuable suggestions. We also thank D. Angella and A. Tomassini for
good discussions and for sending us the preprint \cite{AT2}, and V.
Tosatti for useful comments.
\section{Definitions and preliminary results}
Let $(M, J)$ be an almost complex manifold. The almost complex
structure $J$ acts on the bundle of real
2-forms $\Lambda^2$ as an involution,
\newline by $\alpha(\cdot, \cdot)
\rightarrow \alpha(J\cdot, J\cdot)$, thus we have the splitting
into $J$-invariant,
respectively, $J$-anti-invariant 2-forms
\begin{equation} \label{formtype}
\Lambda^2=\Lambda_J^+\oplus \Lambda_J^-.
\end{equation}
We will denote by $\Omega^2$ the space of 2-forms on $M$
($C^{\infty}$-sections of the bundle $\Lambda^2$), by $\Omega_J^+$ the
space of $J$-invariant 2-forms, etc. For any $\alpha \in \Omega^2$,
the $J$-invariant (resp. $J$-anti-invariant) component of $\alpha$
with respect to the decomposition (\ref{formtype}) will be denoted
by $\alpha '$ (resp. $\alpha ''$). We will also use the notation $
\mathcal Z^2$ for the space of closed 2-forms on $M$ and $\mathcal
Z_J^{\pm} = \mathcal Z^2 \cap \Omega_J^{\pm}$ for the corresponding
subspaces of closed $J$-invariant, respectively $J$-anti-invariant, forms.
The bundle $\Lambda^-_J$ inherits an almost complex structure, still
denoted $J$, by
$$\alpha \in \Lambda^-_J \; \rightarrow \; J\alpha\in \Lambda^-_J,
\mbox{ where } J\alpha(X,Y) = -\alpha(JX,Y) .$$
It is well known that when $J$ is integrable (in any dimension), we have
$$ \beta \in \mathcal Z_J^- \Leftrightarrow J\beta \in \mathcal Z_J^- .$$
Conversely (see e.g. \cite{Sal}), if $(M, J)$ is a
connected almost complex 4-manifold and there is a pair $\beta \in \mathcal Z_J^-, J\beta \in \mathcal Z_J^-$
($\beta$ not identically zero), then $J$ is integrable.
\vspace{0.1cm}
The following definitions were introduced in \cite{LZ} for an arbitrary almost complex manifold $(M,J)$.
\begin{definition}
(i) The $J$-invariant, respectively, $J$-anti-invariant cohomology subgroups $H_J^{+}$, $H_J^{-}$, are defined by
\begin{equation} \nonumber
H_J^{\pm}=\{ \mathfrak{a} \in H^2(M;\mathbb R) | \exists \; \alpha\in \mathcal
Z_J^{\pm} \mbox{ such that } [\alpha] = \mathfrak{a} \} \, ;
\end{equation}
(ii) $J$ is said to be {\it $C^{\infty}$-pure} if $H_J^+\cap H_J^-=\{0\}$;
\newline (iii) $J$ is said to be {\it $C^{\infty}$-full} if $H_J^+ + H_J^- = H^2(M;\mathbb
R)$;
\newline (iv) $J$ is {\it $C^{\infty}$-pure and full} if $H_J^+ \oplus H_J^- = H^2(M;\mathbb
R)$.
\end{definition}
\noindent As noted in the introduction, when $J$ is integrable and admits a compatible K\"ahler metric, or when
$(M,J)$ is a complex surface,
the subgroups $H_J^{\pm}$ are nothing but the (real) Dolbeault
cohomology groups (see \cite{DLZ}, \cite{AT}):
\begin{equation}\label{same} H_J^{+}=H_{\bar \partial}^{1,1}\cap H^2(M;\mathbb R),
\quad H_J^{-}=(H_{\bar \partial}^{2,0}\oplus H_{\bar
\partial}^{0,2})\cap H^2(M;\mathbb R).
\end{equation}
In these cases, there is a weight 2 formal Hodge decomposition (more
generally, this is true whenever the Fr\"olicher spectral sequence
degenerates at the first step), so $J$ is $C^{\infty}$-pure and full.
For complex dimensions greater than or equal to 3, there are known
examples of complex structures for which the Fr\"olicher spectral
sequence does not degenerate at the first step. Recently, Angella and
Tomassini have also shown in \cite{AT} that the Iwasawa manifold $X^6$ admits
complex structures which are neither $C^{\infty}$-pure nor $C^{\infty}$-full.
Other interesting examples appear in \cite{AT2},
showing, in particular, that the notions of $C^{\infty}$-pure and $C^{\infty}$-full are not related.
The first 6-dimensional examples of (non-integrable) almost complex
nilmanifolds which are neither $C^{\infty}$-pure nor $C^{\infty}$-full were given by
Fino and Tomassini \cite{FT}.
\vspace{0.2cm}
By contrast, in dimension 4, the following result was proved in \cite{DLZ}:
\begin{theorem} \label{pf-dim4}
If $M$ is a compact 4-dimensional manifold then any almost complex
structure $J$ on $M$ is $C^{\infty}$-pure and full, i.e.
\begin{equation} \label{purefull}
H^2(M;\mathbb R)=H_J^+ \oplus H_J^- \; .
\end{equation}
\end{theorem}
\vspace{0.2cm} \noindent We refer to \cite{DLZ} for the proof of
Theorem \ref{pf-dim4}. It is based on Hodge theory and the
particularity of dimension 4 stemming from the self-dual,
anti-self-dual decomposition induced by the Hodge operator $*_g$ of
a Riemannian metric $g$ on $M$:
\begin{equation} \label{sdasd}
\Lambda^2=\Lambda_g^+\oplus \Lambda_g^-.
\end{equation}
If the metric $g$ is compatible with the almost complex structure
$J$ and we let $\omega$ be the fundamental form defined by
$\omega(\cdot, \cdot)=g(J\cdot, \cdot)$, the decompositions (\ref{formtype}) and
(\ref{sdasd}) are related by
\begin{equation}\label{type-Jinv}
\Lambda_J^+=\underline{\mathbb R}(\omega)\oplus \Lambda_g^-,
\end{equation}
\begin{equation} \label{type-sdasd}
\Lambda_g^+ = \underline{\mathbb R}(\omega) \oplus \Lambda_J^-.
\end{equation}
In particular, any $J$-anti-invariant 2-form in 4-dimensions is
self-dual, thus any closed, $J$-anti-invariant 2-form is harmonic,
self-dual. This enables us to identify the space $H^-_J $ with
$\mathcal Z_J^{-}$, and further, with the set ${\mathcal{H}}^{+,
\omega^{\perp}}_g$ of harmonic self-dual forms pointwise orthogonal
to $\omega$. In fact, it is an observation of Lejmi \cite{Le} that
this space can be seen as the kernel of an elliptic operator defined
on $\Omega_J^-$.
\begin{lemma} \label{mehdi} (\cite{Le}, Lemma 4.1)
Let $(M^4, g, J, \omega)$ be a compact, almost Hermitian 4-manifold.
Consider the operator
\[P : \Omega_J^- \rightarrow \Omega_J^- \; , \; \; P(\psi) = (d \delta^g \psi)'' ,\]
where $\delta^g$ is the codifferential with respect to the metric
$g$ and the superscript $''$ denotes the projection $ \Omega^2
\rightarrow \Omega_J^-$. Then $P$ is a self-adjoint strongly
elliptic linear operator with kernel the $g$-harmonic
$J$-anti-invariant 2-forms.
\end{lemma}
\noindent Lemma 4.1 in \cite{Le} is stated for almost K\"ahler
4-manifolds, but the reader can easily check that its proof does not
use the assumption that $\omega$ is closed. Indeed, since
$\Omega_J^- \subset \Omega_g^+$ and since the Riemannian Laplace
operator $\Delta^g = d \delta^g + \delta^g d$ preserves the
decomposition (\ref{sdasd}), note that for $\psi \in
\Omega_J^-$,
\[ P(\psi) = \frac{1}{2} \Delta^g \psi - \frac{1}{4} <\Delta^g \psi, \omega> \omega .\]
Then since $\psi$ and $\omega$ are pointwise orthogonal, a short
computation gives
\[ <\Delta^g \psi, \omega> = - 2 \delta^g (<\psi, \nabla \omega>) + <\psi, \Delta^g \omega> .\]
The right side clearly contains only one derivative of $\psi$, and
the lemma follows easily. Here and later in the paper, $\delta^g$ denotes the
codifferential, i.e. the formal adjoint of $d$ with respect to the metric $g$.
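For the reader's convenience, here is one way to obtain the displayed identity for $P$, with the convention $\delta^g = - * d \, *$ on $2$- and $3$-forms of an oriented Riemannian 4-manifold. Since $*\psi = \psi$, we have $\delta^g \psi = - * d \psi$, hence
\[ *(d \delta^g \psi) = - * d * d \psi = \delta^g d \psi \; , \]
so the $g$-self-dual part of $d \delta^g \psi$ equals $\frac{1}{2} (d \delta^g \psi + \delta^g d \psi) = \frac{1}{2} \Delta^g \psi$. As anti-self-dual forms are $J$-invariant by \eqref{type-Jinv}, the projection $(d \delta^g \psi)''$ is obtained from this self-dual part by subtracting its $\omega$-component $\frac{1}{2} <\frac{1}{2}\Delta^g \psi, \omega> \omega = \frac{1}{4} <\Delta^g \psi, \omega> \omega$ (recall that $|\omega|^2 = 2$), which is precisely the displayed formula.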
\vspace{0.2cm}
Let us denote the dimension of $H_J^{\pm}$ by $h^{\pm}_J$, let $b_2$
be the second Betti number, and $b^{\pm}$ be the ``self-dual'',
resp. ``anti-self-dual'' Betti numbers of the 4-manifold $M$. By
Theorem \ref{pf-dim4} and the observations above, we have
\begin{equation} \label{h^+_+ h^-}
h^+_J + h^-_J = b_2 ;
\end{equation}
\begin{equation}\label{easyestimate}
h^{+}_J\geq b^-, \quad h^-_J\leq b^+.
\end{equation}
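Indeed, by \eqref{type-Jinv} every $g$-anti-self-dual harmonic form is closed and $J$-invariant, so its cohomology class lies in $H^+_J$, which gives $h^+_J \geq b^-$; on the other hand, the identification of $H^-_J$ with a subspace of $\mathcal H_g^+$ described above gives $h^-_J \leq b^+$.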
\noindent We propose the following two conjectures:
\begin{conj} \label{conj1}
For generic almost complex structures $J$ on a compact 4-manifold $M$, $h_J^- = 0$.
\end{conj}
\begin{conj} \label{conj2}
On a compact 4-manifold, if $h_J^- \geq 3$ then $J$ is integrable.
\end{conj}
\noindent In the case $b^+=1$, Conjecture \ref{conj1} is proved in
Theorem \ref{int3}. Theorem \ref{hJ-} is a further
partial answer and motivation for both conjectures.
\vspace{0.2cm}
We end this section by establishing a path-wise semi-continuity property for
$h^{\pm}_J$ on a compact 4-manifold. This result was pointed out
to the first author by Vestislav Apostolov.
\begin{theorem} \label{semicont-hJt}
Let $M$ be a compact 4-manifold and let $J_t$, $t \in [0,1]$ be a
smooth family of almost complex structures on $M$. Then $h^-_{J_t}$
(resp. $h^+_{J_t}$) is an upper-semi-continuous (resp.
lower-semi-continuous) function in $t$. That is, for any $t
\in [0,1]$ there exists $\epsilon > 0$ such that if $s \in [0,1]$,
$|s - t| < \epsilon$,
\[h_{J_s}^- \leq h_{J_{t}}^- \; , \; \; h_{J_s}^+ \geq h_{J_{t}}^+ .\]
\end{theorem}
\begin{proof} The statement about $h^-_{J_t}$ follows directly from Lemma \ref{mehdi} and
a classical result of Kodaira and Morrow showing the
upper-semi-continuity of the kernel of a family of elliptic
differential operators (Theorem 4.3 in \cite{KM}). The statement
on $h^+_{J_t}$ follows from Theorem \ref{pf-dim4}.
\end{proof}
\noindent The following immediate corollary sheds some light on Conjecture \ref{conj1}
and on the density statement in Theorem \ref{hJ-}.
\begin{cor} \label{hJ-=0} If $(M^4, J)$ is a compact almost complex manifold with $h^-_J = 0$ and
$J_t$ is a deformation of $J$, then for small $t$, $h^-_{J_t} = 0$.
\end{cor}
\begin{remark} {\rm In Theorem \ref{deformation}, we establish a stronger semi-continuity
property for $h_J^{\pm}$ near an almost complex structure which
admits a compatible symplectic form. Theorems \ref{semicont-hJt} and
\ref{deformation} are no longer true in higher dimension, as recent
examples of Angella and Tomassini imply (see Propositions 4.1, 4.3 and
Examples 4.2, 4.4 in \cite{AT2}). Note that their Example 4.2 shows
that the semi-continuity property fails in dimensions higher than 4,
even if one has a path of almost complex structures which are
$C^{\infty}$-pure and full.}
\end{remark}
\section{Computations of $h^{-}_J$}
\subsection{Generic vanishing of $h_J^-$ when $b^+=1$}
In this subsection we confirm Conjecture \ref{conj1} when $b^+=1$.
\begin{theorem}\label{int3} Suppose $M$ is a compact $4-$manifold with
$b^+=1$ admitting almost complex structures. The almost complex
structures $J$ with $h^-_J = 0$ form an open and dense subset in the
set of all almost complex structures on $M$, with
$C^{\infty}$-topology.
\end{theorem}
\subsubsection{Topology of the space of almost complex structures} \label{topJs}
Let $\mathcal{J}=\mathcal{J}^\infty$ be the space of $C^{\infty}$ almost complex structures.
Let us first describe the $C^{\infty}$ topology of $\mathcal J^{\infty}$.
It is well known that the space $\mathcal J^l$ of $C^l$ almost complex
structures has a natural separable Banach manifold structure via the $C^l$ norm (see
\cite{MS} for example).
The natural $C^\infty$ topology on $\mathcal J^{\infty}$ is induced by
the sequence of semi-norms $C^0, C^1,\cdots,C^l,\cdots$.
Locally, near a $C^{\infty}$ almost complex structure $J$,
$\mathcal{J}$ is a subspace of $\mathcal{J}^l$ with finer topology.
With the $C^{\infty}$ topology, $\mathcal{J}=\mathcal{J}^\infty$ is a
Fr\'echet manifold.
A complete metric inducing the $C^{\infty}$ topology on it can be
defined by
\begin{equation}\label{metric-top}
d(x,y)=\sum_{k=0}^\infty \frac{\|x-y\|_k}{1+\|x-y\|_k}\;2^{-k}.
\end{equation}
Here $\|\cdot \|_k$ denotes the $C^k$ semi-norm.
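Indeed, each summand in \eqref{metric-top} is at most $2^{-k}$, so the series converges; moreover, $d(x_j, x) \rightarrow 0$ if and only if $\|x_j - x\|_k \rightarrow 0$ for every $k$, and a $d$-Cauchy sequence is Cauchy in every $C^k$ semi-norm, hence convergent in $\mathcal{J}^{\infty}$.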
\subsubsection{The space of $g-$compatible almost complex
structures}
To prove the density statement in Theorem \ref{int3} we need to consider the space of
almost complex structures compatible with a fixed Riemannian metric $g$.
This can be described as
the space of $g$-self-dual 2-forms $\omega$ satisfying $|\omega|^2_g
= 2$ point-wise on $M$ (equivalently, the space of smooth sections of
the twistor bundle associated to $(M,g)$).
The $C^{\infty}$-topology on this subspace corresponds to
$C^{\infty}$-topology on the space of 2-forms.
Suppose we also fix a $g-$compatible pair $(J, \omega)$. Then any
$g-$compatible almost complex structure corresponds to a $2-$form
\begin{equation} \label{ombeta}
\tilde \omega = f \omega + \beta, \; \mbox{with $\beta \in
\Omega^-_J$, $f \in C^{\infty}(M)$ so that $2f^2 + |\beta|^2 = 2$. }
\end{equation}
For us, the following variation will be useful, extending an idea
from \cite{Lee}. Suppose further a section $\alpha \in \Omega^-_J$
is given. One can define new $g$-compatible almost complex
structures as follows: pick smooth functions $f$ and $r$ on $M$ so
that the form
\begin{equation} \label{om(f,h,alpha)}
\tilde \omega = f \omega + r \alpha
\end{equation} satisfies
$|\tilde \omega|^2_g = 2$, and let $\tilde J$ be the almost complex
structure defined by $(g, \tilde \omega)$. Equivalently, $f$ and $r$
should satisfy the pointwise condition
\begin{equation} \label{squarenorm2}
2f^2 + r^2 |\alpha|_g^2 = 2 .
\end{equation}
For any $\alpha \in \Omega^-_J$, one can find such functions $f$ and
$r$. For instance, take $r$ to be small enough so that $r^2
|\alpha|_g^2 < 2$ everywhere; then $f$ is determined up to sign
by $f = \pm (1 - \frac{1}{2} r^2 |\alpha|_g^2)^{1/2}$. Junho Lee's
almost complex structures $J_{\alpha}$ (see \cite{Lee}) are obtained
for the specific choice \footnote{There is a factor ``2'' difference
in the convention for the norm of a two form between our paper and
\cite{Lee}. For us, if $(g,J, \omega)$ is a 4-dimensional almost
Hermitian structure, $|\omega|^2_g = 2$, whereas in \cite{Lee},
$|\omega|^2_g = 1$. This explains the apparent difference between
our $r$ and $f$ and those in Proposition 1.5 in \cite{Lee}.}
\begin{equation} \label{JLhf}
r = \frac{4}{2+|\alpha|^2} \; \mbox{ and } \; f =
\frac{2-|\alpha|^2}{2 + |\alpha|^2} \; .
\end{equation}
Note that we actually get a pair of almost complex structures
$J^{\pm}_{\alpha}$, as for the above choice of $r$, we have the sign
freedom in choosing $f$. Junho Lee defines these almost complex
structures on a K\"ahler surface $(M,J,g)$ and uses them as a tool
for an easier computation of the Gromov-Witten invariants.
Particularly important in his work are the almost complex structures
$J_{\alpha}$ corresponding to {\it closed} $\alpha$'s, i.e
$\alpha\in \mathcal{Z}^-_J$.
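For the record, a direct check confirms that the choice \eqref{JLhf} satisfies the norm condition \eqref{squarenorm2}:
\[ 2f^2 + r^2 |\alpha|^2 = \frac{2(2 - |\alpha|^2)^2 + 16 |\alpha|^2}{(2 + |\alpha|^2)^2} = \frac{2(2 + |\alpha|^2)^2}{(2 + |\alpha|^2)^2} = 2 \; . \]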
\vspace{0.1cm}
Another natural choice for $(r,f)$ is
\begin{equation} \label{hf}
r = \pm f = \pm \frac{\sqrt{2}}{\sqrt{2 + |\alpha|^2}} \; .
\end{equation}
This corresponds to almost complex structures that arise from the
forms $\pm \omega + \alpha$, conformally rescaled to satisfy the
norm condition.
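Indeed, with $r = \pm f$, condition \eqref{squarenorm2} becomes $f^2 (2 + |\alpha|^2) = 2$, which gives \eqref{hf}.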
\vspace{0.1cm}
Even more generally, given $\alpha$, we may choose $r$ so that $r^2
|\alpha|_g^2 \leq 2$, with equality at some points, but then at such
points we have to require the smoothness of the function $(1 -
\frac{1}{2} r^2 |\alpha|_g^2)^{1/2}$. Note also that if such points
exist, then we no longer have an ``up to sign choice'' for $f$
overall.
\vspace{0.1cm}
Finally, note that we can (and will) choose $r$ to satisfy
$r^2 |\alpha|_g^2 < 2$ and be supported on a small open set in $M$.
Then, for $f = (1 - \frac{1}{2} r^2 |\alpha|_g^2)^{1/2}$,
the new almost complex structure $\tilde J$ coincides with
$J$ outside the support of $r$.
\vspace{0.2cm}
\subsubsection{Proof of Theorem \ref{int3}}
\begin{proof} First we show the density.
Let $J$ be an almost complex structure on $M$. It follows from
\eqref{type-sdasd} that $h_J^- \in \{0, 1\}$. If $h^-_J = 0$,
Corollary \ref{hJ-=0} shows that in any neighborhood of $J$ there
are other almost complex structures $\tilde J$ with $h_{\tilde
J}^-=0$. If $h^-_J = 1$, let $\alpha \in \mathcal{Z}^-_J$,
normalized so that $\int_M \alpha^2 = 1$. Pick a $J$-compatible
metric $g$ and let $\omega$ be the fundamental form associated to
$(g, J)$. The form $\alpha$ is $g$-harmonic and point-wise
orthogonal to $\omega$. As in \eqref{om(f,h,alpha)}, let $\tilde
\omega = f \omega + r \alpha$, for some functions $f, r$ satisfying
\eqref{squarenorm2}, and define $\tilde J$, the almost complex
structure induced by $(g, \tilde \omega)$. If $r\not\equiv 0$, it is
clear that $h_{\tilde J}^-=0$, as $\tilde \omega$ is no longer
point-wise orthogonal to $\alpha$. On the other hand, we can choose
$r$ to be compactly supported on a small set, so $\tilde J$ can be
arbitrarily close to $J$.
For openness, we prove that the complement is closed. Let $J_k$ be a
sequence of almost complex structures with $h^-_{J_k} = 1$
converging to the almost complex structure $J$.
Let $g$ be a $J$-compatible Riemannian
metric and let
$$g_k (\cdot, \cdot) = \frac{1}{2} (g(\cdot, \cdot) + g(J_k\cdot, J_k\cdot)) .$$
Clearly, $g_k$ is a Riemannian metric compatible with $J_k$ and
$(g_k, J_k)$ converges to $(g,J)$. Denote by $\Delta^k$ the
Hodge-de Rham Laplace operator associated to $g_k$ and by
$\mathbb{G}^k$ the Green operator associated to $\Delta^k$.
Let $\psi$ be a non-zero $g$-harmonic, self-dual two form, normalized so that
$\int_M \psi^2 = 1$ (up to sign, $\psi$ is unique with these properties, as $b^+ = 1$).
Consider the Hodge decomposition of the 2-form $\psi$ with
respect to each of the metrics $g_k$.
$$\psi = (\psi - \mathbb{G}^k (\Delta^k \psi)) +
\mathbb{G}^k (\Delta^k \psi) = \psi_{h, k} + \psi_{ex, k} ,$$
where $\psi_{h, k} = \psi - \mathbb{G}^k (\Delta^k \psi)$
denotes the $g_k$-harmonic part of $\psi$ and $\psi_{ex, k} =
\mathbb{G}^k (\Delta^k \psi)$ is the $g_k$-exact part of $\psi$.
Since $g_k \rightarrow g$ and $\Delta^g \psi = 0$, this implies
$$\psi_{h, k} \rightarrow \psi \; \; \; , \; \; \psi_{ex, k} \rightarrow
0, \mbox{ as $k \rightarrow \infty$.} $$ Moreover, if
$(\psi_{h, k})^+$ denotes the $g_k$-self-dual part of
$\psi_{h, k}$, we have
$$(\psi_{h, k})^+ \rightarrow \psi \; . $$
But since $b^+ = h^-_{J_k} = 1$, the $g_k$-harmonic, self-dual forms
$(\psi_{h, k})^+$ are $J_k$-anti-invariant. Since $J_k \rightarrow
J$, it follows that $\psi$ must be $J$-anti-invariant. Thus,
$h^-_J =1$.
\end{proof}
\begin{remark} \label{expFT} {\rm There exist compact almost complex 4-manifolds $(M, J)$ with
$b^+ = 1$ and $h^-_J = 1$. Proposition 6.1 of \cite{FT} contains one such example (see also
Proposition \ref{expwb} (iii) in this paper, where this example appears in a different context).
Note also that any such almost complex structure cannot be tamed by a symplectic form,
as a consequence of Theorem 3.3 of \cite{DLZ}.}
\end{remark}
\subsection{When $J$ is integrable} If $(M^4,J)$ is a compact complex surface, it follows from
(\ref{same}) that $h_J^{\pm}$ are the same as the dimensions of the corresponding
Dolbeault groups
\begin{equation} \label{Jint-hpmcx} h_J^+=h_{\bar\partial}^{1,1}, \quad
h_J^-=2h_{\bar\partial}^{2,0}.
\end{equation}
Together with the signature theorem (Theorem 2.7 in \cite{BPV}), we
get
\begin{equation} \label{Jint-hpm}
h^+_J =\left\{ \begin{array}{ll}
b^- +1&\hbox{if $b_1$ even} \\
b^- &\hbox{if $b_1$ odd,} \end{array} \right. \quad h^-_J = \left\{
\begin{array}{ll}
b^+ -1&\hbox{if $b_1$ even} \\
b^+ &\hbox{if $b_1$ odd.} \end{array} \right.
\end{equation}
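For instance, for a K3 surface ($b_1 = 0$, $b^+ = 3$, $b^- = 19$), formulas \eqref{Jint-hpm} give $h^+_J = 20$ and $h^-_J = 2$, while for a complex torus ($b_1 = 4$, $b^+ = b^- = 3$) they give $h^+_J = 4$ and $h^-_J = 2$; in both cases this is consistent with \eqref{Jint-hpmcx}, since $h^{2,0}_{\bar \partial} = 1$ for these surfaces.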
It is a deep, but now well known fact that the cases $b_1$ even/odd
correspond to whether or not the complex surface $(M,J)$ admits a
compatible K\"ahler structure. We observe that there is a more direct
proof for the following weaker statement.
\begin{prop} \label{tamecx} Let $(M,J)$ be a compact complex surface.
The following are equivalent:
\vspace{0.1cm}
(i) $b_1$ is even; (ii) $b^+ = h^-_J + 1 = 2 h^{2,0}_{\bar \partial} +1$; (iii) $J$ is tamed.
\vspace{0.1cm}
\noindent Similarly, the following are equivalent:
\vspace{0.1cm}
(i') $b_1$ is odd; (ii') $b^+ = h^-_J = 2 h^{2,0}_{\bar \partial}$; (iii') $J$ is not tamed.
\end{prop}
\noindent An almost complex structure $J$ is said to be {\it tamed} if there exists a symplectic form $\omega$
such that $\omega(X, JX) > 0$ for any non-zero tangent vector $X$. The tame-compatible question of Donaldson
\cite{D} predicts that on a compact 4-manifold any tame almost complex structure $J$ admits, in fact, a {\it compatible}
symplectic form, that is, a symplectic form $\tilde \omega$, so that $\tilde \omega(\cdot, J\cdot)$ is a
Riemannian metric.
It was first observed in \cite{LZ}, using a result of \cite{HL},
that on a compact complex surface the tame condition is equivalent to
$b_1$ being even. Proposition \ref{tamecx} gives a different proof of this fact.
Assuming Kodaira's classification, the tame condition is thus
equivalent to the existence of a compatible symplectic form. As Donaldson
points out, a direct confirmation of the tame-compatible question would lead to a different proof of the
fact that $b_1$ even corresponds to a complex surface of K\"ahler type. At least in the case $b^+=1$,
the tame-compatible question is known to be a consequence of the symplectic Calabi-Yau problem, also introduced by
Donaldson in \cite{D} (see also \cite{W}, \cite{TWY}, \cite{TW-survey}, and section \ref{sCY} below).
\vspace{0.1cm}
A key tool in our proof of Proposition \ref{tamecx} is the notion of {\it Gauduchon metric}, whose
definition we recall next.
\subsubsection{Gauduchon metric}
For an (almost) Hermitian manifold $(M,
g, J, \omega)$, the {\it Lee form} $\theta$ is defined by $\theta = J\delta^g
\omega$, or, equivalently in dimension 4, by $d\omega =
\theta \wedge \omega$. It is well known that $d\theta$ is a conformal invariant.
When $J$ is integrable, the case when $\theta$ is closed (exact)
corresponds to locally (globally) conformal K\"ahler metrics.
Obviously, Hermitian metrics with $\theta=0$ are, in fact, K\"ahler
metrics.
\begin{definition} A Hermitian metric such that the Lee form is
co-closed, i.e. $\delta^g \theta = 0$, is called a {\it Gauduchon metric}
(or {\it standard Hermitian metric}, in the original terminology of \cite{gauduchon1}).
\end{definition}
\noindent The existence and uniqueness (up to homothety) of a Gauduchon metric in each conformal
class is shown in \cite{gauduchon1}. The result is much more general; it does not require integrability,
nor restriction to dimension 4.
For us, the key property of a (Hermitian) Gauduchon metric in dimension 4 is the following:
\noindent \begin{prop} \label{gaud} (\cite{gauduchon2}) On a compact
complex surface $M$ endowed with a Gauduchon metric $g$,
the trace of a harmonic, self-dual form is a constant.
\end{prop}
\noindent For the proof of Proposition \ref{gaud}, we refer the
reader to Lemma II.3 in \cite{gauduchon2} (see also \cite{ad-dga},
Proposition 3, for a slightly different argument). Proposition
\ref{gaud} implies that for Hodge decomposition arguments, the
Gauduchon metrics behave quite like the K\"ahler ones. This simple
fact yields good consequences.
\subsubsection{Proof of Proposition \ref{tamecx}}
\begin{proof} As we mentioned already (and is easy to check), for a complex surface
the groups $H^{\pm}_J$ are identified with the (real) Dolbeault groups as in \eqref{same}.
Using \eqref{easyestimate}, we thus have
$$b^+ \geq h^-_J = 2 h^{2,0}_{\bar \partial} .$$
We'll show (ii) $\Rightarrow$ (i) $\Rightarrow$ (iii) $\Rightarrow$ (ii).
It is well known that for any almost complex
4-manifold, $b_1 + b^+$ is odd, thus (ii) $\Rightarrow$ (i) is obvious.
Now assume (i), which is equivalent with $b^+$ odd, by the above observation. It follows that
$b^+ > h^-_J = 2 h^{2,0}_{\bar \partial}$. Choose a $J$-compatible conformal class
and let $g$ be the Gauduchon metric with total volume one in this class; denote by $\omega$ the fundamental 2-form
induced by $(g,J)$. Let $\psi$ be a non-trivial harmonic self-dual
2-form, whose cohomology class $[\psi]$ is cup-product
orthogonal to $H^-_J$ (such $\psi$ exists because $b^+ > h^-_J$). From
Proposition \ref{gaud} and \eqref{type-sdasd}, $\psi$ decomposes
as
\begin{equation} \label{thetadecomp}
\psi = a \, \omega + \beta, \mbox{ with $a$ constant and } \beta
\in \Omega^-_J \; .
\end{equation}
The constant $a$ is non-zero, by the assumption
that $[\psi]$ is cup product orthogonal to $H^-_J$. This implies right away that
$\psi$ is symplectic (as $\beta$ is self-dual and point-wise orthogonal to $\omega$).
By eventually replacing $\psi$ by $-\psi$, we can assume also that
$a>0$, so $J$ is tamed (by $\psi$ or $-\psi$). Thus, we proved (i) $\Rightarrow$ (iii).
Next, suppose that $\psi$ is a symplectic form
that tames $J$. As pointed out in \cite{D}, $\mathbb{R} \, \psi + \Lambda^-_J$ is
a 3-dimensional bundle on $M$, positive-definite with respect to the wedge pairing and the volume form
$\psi^2$. This induces a $J$-compatible conformal class. Let $g$ be the Gauduchon metric
in this class and denote again by $\omega$ the fundamental form of $(g,J)$.
The form $\psi$ is $g$-self-dual and closed, thus it is harmonic. Then relation \eqref{thetadecomp} holds,
with $a>0$, by the assumption that $\psi$ tames $J$. It follows that $[\psi] \not\in H^-_J$, thus
$b^+ > h^-_J$. Now assume that $\psi_1$ and $\psi_2$ are harmonic self-dual
2-forms, whose cohomology classes $[\psi_1]$, $[\psi_2]$ are cup-product
orthogonal to $H^-_J$. As above,
$$ \psi_1 = a_1 \, \omega + \beta_1 \; , \; \; \psi_2 = a_2 \, \omega + \beta_2 ,$$
with $a_1, a_2$ non-zero constants and $\beta_1, \beta_2 \in \Omega^-_J$.
But then $ a_2 \, \psi_1 - a_1 \, \psi_2 = a_2 \, \beta_1 - a_1 \, \beta_2 $
is $J$-anti-invariant and closed. Together with the assumptions that $[\psi_1]$, $[\psi_2]$ are cup-product
orthogonal to $H^-_J$, this can happen only if
$$ a_2 \, \psi_1 - a_1 \, \psi_2 \equiv 0 \; . $$
Thus $b^+ - h^-_J = 1$, so (iii) $\Rightarrow$ (ii) is proved.
Remark that the proof shows that (i), (ii), (iii) are also equivalent to (iv) $b^+ > h^-_J$.
The equivalence of (i'), (ii'), (iii') is then the negation of the above.
\end{proof}
\subsection{Comparing metric related almost complex structures}
Notice that when $J$ is integrable the dimensions $h^{\pm}_J$ are
topological invariants. Such a property is certainly no longer true
for general almost complex structures. However, we are still able to
calculate the exact value of $h_J^{\pm}$ for almost complex
structures which are metric related to integrable ones.
To achieve
this we first derive some general results about metric related
almost complex structures.
\subsubsection{Estimates for $g-$related almost complex structures}
We again fix a Riemannian metric $g$.
\begin{definition} Suppose $J$ and
$\tilde J$ are two almost complex structures inducing the same
orientation on a 4-manifold $M$. $J$ and $\tilde J$ are said to be
{\it $g-$related} if they are both compatible with $g$.
\end{definition}
\noindent It is clear that if $g$ has this property, then so does any metric from its
conformal class. Also, if $J$ and $\tilde J$ are $g-$related then
\[\Lambda^-_{J} + \Lambda^-_{\tilde J} \subset \Lambda_g^+,
\mbox{ and hence } H^-_{J} + H^-_{\tilde J} \subset \mathcal H_g^+
.\] Recall that since any closed $J$-anti-invariant form is
harmonic, self-dual, we can identify $H^-_J$ with $\mathcal{Z}^-_J$
and see it as a subspace of $\mathcal H_g^+$ (the space of harmonic,
self-dual forms).
\noindent The following observation is the key for the computations of
$h^{\pm}_J$ we achieve in this section.
\begin{prop} \label{int1}
Suppose $J$ and $\tilde J$ are $g-$related almost complex structures
on a connected 4-manifold $M$, with $\tilde J \not\equiv \pm J$.
Then ${\rm dim} \;(H^-_{J} \cap H^-_{\tilde J}) \leq 1$.
\end{prop}
\begin{proof} Let $\omega$ and $\tilde \omega$ be the corresponding self-dual
2-forms. By assumption, the set
\[ U = \{ p \in M \; | J(p) \neq \pm \tilde J(p) \} = \{ p \; | {\rm dim} \;
( {\rm Span} \{\omega(p), \tilde \omega(p) \}) = 2 \} \] is a
non-empty open set in $M$. Without loss of generality we can assume
that $U$ is connected. Otherwise, we can make the reasoning below on
a connected component of $U$.
Assume $ H^-_{J} \cap H^-_{\tilde J} \neq \{ 0 \} $ and let
$\alpha_{1}, \alpha_{2} \in \mathcal{Z}^-_{J} \cap
\mathcal{Z}^-_{\tilde J}=H^-_{J} \cap H^-_{\tilde J}$, not
identically zero. Let $U'$ be the open subset of $U$ where neither
$\alpha_{1}$ nor $\alpha_{2}$ vanishes. $U' \neq \emptyset $ because
$\alpha_{1}$ and $\alpha_{2}$ are $g$(-self-dual)-harmonic forms,
thus they satisfy the unique continuation property. Since on $U'$,
${\rm Span}\{\omega,\tilde \omega\}$ is a 2-dimensional subspace of
$\Lambda_g^+ M$ and $\alpha_{1}, \alpha_{2}$ are both orthogonal to
this subspace, there exists $f \in C^{\infty}(U')$ such that $
\alpha_{2} = f \alpha_{1}$. Since $\alpha_{1}, \alpha_{2}$ are, by
assumption, both closed, it follows that $0 = df \wedge \alpha_{1}$.
But $\alpha_{1}$ is non-degenerate on $U'$ (it is self-dual,
non-vanishing). Thus $df = 0$, so $f = const.$ on $U'$. It follows
that $\alpha_{2} = const. \; \alpha_{1}$ on $U'$, but, by unique
continuation, this holds on the whole $M$.
\end{proof}
\begin{remark}\label{openset} {\rm The estimate in Proposition \ref{int1} is
sharp. Indeed, let $(M, g, J, \omega)$ be a connected almost
Hermitian 4-manifold, and assume that $\alpha \in \mathcal{Z}^-_{J}$
is not identically zero. Consider a $g$-compatible almost complex
structure ${\tilde J}$, arising from a self-dual 2-form
\begin{equation} \label{hj-=1}
\tilde \omega = f \omega + r J\alpha \;,
\end{equation}
where $f$ and $r$ are $C^{\infty}$-functions, so that
\begin{equation} \label{norm2}
|\tilde \omega|^2_g = 2f^2 + r^2 |\alpha|^2_g = 2 \; .
\end{equation}
By \eqref{type-sdasd} applied to $(g, \tilde J, \tilde \omega)$,
observe that $\alpha$ is $\tilde J-$anti-invariant (indeed, $\alpha$
is self-dual and $<\alpha, \tilde \omega> = f <\alpha, \omega> + r
<\alpha, J\alpha> = 0$ pointwise). Hence, by
Proposition \ref{int1}, $ H^-_{J} \cap H^-_{\tilde J} = {\rm
Span}([\alpha])$.
Conversely, any $g$-compatible $\tilde J$
such that $[\alpha] \in H^-_{J} \cap H^-_{\tilde J}$ will have a
fundamental form $\tilde \omega$ given by (\ref{hj-=1}) at least on
the open dense set $M' = M \setminus \alpha^{-1}(0)$, with functions
$f, r \in C^{\infty}(M')$ satisfying (\ref{norm2}).}
\end{remark}
\noindent Observe that compactness is not needed for Proposition
\ref{int1} or Remark \ref{openset}. In the compact case, Proposition
\ref{int1} has the following easy consequence.
\begin{cor} \label{atmostone}
In the space of almost complex structures compatible to a given metric $g$
on a compact 4-manifold,
there is at most one $J$ such that
\begin{equation}\label{atmost} h_J^-\geq \left\{ \begin{array}{ll} \frac{b^+
+3}{2} & \hbox{if $b^+$ is odd} \\ \frac{b^+ +2}{2} & \hbox{if $b^+$
is even}.
\end{array} \right.
\end{equation}
\end{cor}
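\noindent Indeed, if $J \not\equiv \pm \tilde J$ were two $g$-compatible almost complex structures both satisfying \eqref{atmost}, then, since $H^-_{J}, H^-_{\tilde J} \subset \mathcal H_g^+$, we would have
\[ {\rm dim} \; (H^-_{J} \cap H^-_{\tilde J}) \geq h^-_J + h^-_{\tilde J} - b^+ \geq 2 \; , \]
contradicting Proposition \ref{int1}.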
\vspace{0.1cm}
\subsubsection{Metric related almost complex structures}
\vspace{0.2cm}
\begin{definition} Two almost complex structures $J$ and $\tilde J$
on a 4-manifold $M$ are said to be {\it metric related} if they induce the same
orientation and are $g-$related for some Riemannian metric $g$ on $M$.
\end{definition}
\noindent If we fix a volume form $\sigma$ on $M$,
two almost complex structures $J$ and $\tilde J$ are metric related
if and only if there exists a 3-dimensional sub-bundle $\Lambda^+
\subset \Lambda^2 M$, positive definite with respect to the wedge
pairing and $\sigma$, such that $\Lambda^-_{J} \subset \Lambda^+ $,
$\Lambda^-_{\tilde J} \subset \Lambda^+ $. One important
difference versus the ``$g$-related'' condition for a fixed $g$ is that the
metric related condition is not transitive.
Because of this, Corollary \ref{atmostone}, for instance, is not
automatically clear under just the metric related assumptions.
However, Proposition \ref{int1} clearly extends to the metric
related case. One immediate consequence is:
\begin{cor}\label{12}
Suppose $J$ and $\tilde J$ are metric-related almost complex
structures on a compact 4-manifold $M$, with $\tilde J \not\equiv
\pm J$.
\vspace{0.1cm}
(i) If $h_J^-=b^+$, then $h_{\tilde J}^-\leq 1$.
(ii) If $h_J^-=b^+-1$, then $h_{\tilde J}^-\leq 2$.
\end{cor}
\noindent From this Corollary, one obtains immediately the claim $h_{\tilde
J}^- \in \{0, 1, 2\}$ from the statement of Theorem \ref{hJ-}. The
results in the next subsections will be more specific about when
each case occurs.
\vspace{0.2cm}
\subsection{Proof of Theorem \ref{hJ-}} \label{metint} Throughout
this subsection, unless stated otherwise, $J$ will denote a {\it
complex} structure on a compact 4-manifold $M$. Denote by $\mathcal
J$ the space of all (smooth) almost complex structures on $M$ and by
$\mathcal{J}_J$ the set of almost complex structures which are
metric related to the fixed $J$. On both spaces $\mathcal J$ and
$\mathcal{J}_J$ we consider the $C^{\infty}$-topology.
For reasons that will be apparent soon, it is best to divide the
proof into some cases depending on the type of the surface $(M,J)$.
\subsubsection{Surfaces of non-K\"ahler type, or of K\"ahler type but with non-trivial canonical bundle}
For these we have the following result.
\begin{theorem} \label{nontrivK}
Let $(M,J)$ be a compact complex surface of non-K\"ahler type, or a
compact complex surface of K\"ahler type, but with topologically
non-trivial canonical bundle. If $\tilde J \in \mathcal{J}_J$, $\tilde J \not\equiv
\pm J$, then either (i) $h^-_{\tilde J} = 0$, or (ii) $h^-_{\tilde
J} = 1$. Case (i) occurs for an open, dense set of almost complex
structures in $\mathcal{J}_J$. Case (ii) occurs precisely when there
exists $\alpha \in \mathcal{Z}^-_{J}$ such that $ H^-_{\tilde J} = {\rm Span}([\alpha])$, so these
$\tilde J$ appear as described in Remark \ref{openset}.
\end{theorem}
\begin{proof} First, we justify the statement
$h^-_{\tilde J} \in \{0,1\}$. For a
complex surface of non-K\"ahler type,
this follows directly from Corollary \ref{12}.
Now suppose that $(M,J)$ is a complex surface of K\"ahler
type with topologically non-trivial canonical bundle. Consider the
conformal class of metrics compatible with both $J$ and $\tilde J$
and let $g$ be the Gauduchon metric with respect to $J$ in this
class. Let $\omega$ and $\tilde \omega$ denote the fundamental forms
of $(g,J)$ and $(g, \tilde J)$, respectively. They are related as in
\eqref{ombeta},
\[ \tilde \omega = f \omega + \beta, \; \mbox{with $\beta \in
\Omega^-_J$, $f \in C^{\infty}(M)$ so that $2f^2 + |\beta|^2 = 2$. }
\]
Suppose $h^-_{\tilde J} \neq 0$ and let $\psi \in \mathcal{Z}^-_{\tilde J}$,
not identically zero. Since $\psi$ is $g$-harmonic, from Proposition
\ref{gaud} it must be of the form $\psi= a \omega + \alpha$, with
$a$ constant and $\alpha \in \Omega^-_J$. The
pointwise condition $<\psi, \tilde \omega> = 0$ is equivalent to
$$ 2a f + <\alpha,\beta> = 0 \mbox{ everywhere on $M$.} $$ But $\beta$ (and $\alpha$)
must vanish somewhere on $M$, since the canonical bundle is
topologically non-trivial. At a point $p$ where $\beta(p) = 0$, we
have $f^2(p) = 1 \neq 0$, thus it follows that $a=0$. Thus $\psi =
\alpha$, but since $d \psi = 0$, it follows that $\psi = \alpha \in
\mathcal{Z}^-_J$. Hence, $H^-_{\tilde J} \subset H^-_J$. The statement
$h^-_{\tilde J} \in \{0,1\}$ follows now from Proposition
\ref{int1}. Note that we also proved the description of the case $h^-_{\tilde J} = 1$.
\vspace{0.2cm}
Next, we prove the density statement in Theorem \ref{nontrivK}.
This
follows from Corollary \ref{hJ-=0} and the following observation.
\begin{prop} \label{exph-=0}
Let $(M,J)$ be a compact complex surface as in Theorem \ref{nontrivK}.
If $\tilde J \in \mathcal{J}_J$ and $h^-_{\tilde J} \neq 0$, then there exists
$\tilde J' \in \mathcal{J}_J$, arbitrarily close to $\tilde J$, and with $h^-_{\tilde J'} = 0$.
\end{prop}
\begin{proof} Suppose first that the geometric genus vanishes.
For non-K\"ahler type, this means $b^+ = 2h^{2,0}_{\bar \partial} = 0$,
so it follows from \eqref{type-sdasd} that $h^-_{\tilde J} =
0$ for any $\tilde J$ on $M$ (even not metric related to $J$). If
$(M,J)$ is of K\"ahler type and has zero geometric genus,
then $h^-_J = 2h^{2,0}_{\bar \partial} = 0$, so the first part of the proof of
Theorem \ref{nontrivK} shows that $h^-_{\tilde J} = 0$, for any $\tilde J \in \mathcal{J}_J$.
Suppose next that the geometric genus of $(M,J)$ does not vanish.
Consider first the case $\tilde J \not\equiv \pm J$.
From the first part of the proof of Theorem \ref{nontrivK}, the assumption
$h^-_{\tilde J} \neq 0$ implies that there exists $\alpha \in
\mathcal{Z}^-_J$ such that $H^-_{\tilde J} = {\rm Span}\{ [\alpha]
\}$. Moreover, there is a metric $g$ on $M$ compatible with both $J$
and ${\tilde J}$ so that the corresponding forms $\omega$ and
$\tilde \omega$ are related on $M' = M \setminus \alpha^{-1}(0)$ as
in (\ref{hj-=1}):
\[ \tilde \omega = f \omega + r J\alpha \;, \]
where $f$ and $r$ are $C^{\infty}$-functions on $M'$, satisfying the
norm condition (\ref{norm2}). Note that even if the above relation
is valid on the (open, dense) set $M'$, $\tilde \omega$ is defined
on the whole $M$. We deform $\tilde \omega$ as follows. Let $\tilde
r$ be a compactly supported function on a small open subset $U$ of
$M'$ and define
\[ \tilde \omega ' = \tilde f \tilde \omega + \tilde r \alpha \;, \]
where the function $\tilde f$ is chosen so that $|\tilde \omega '|^2
= 2$. Let $\tilde J'$ be the almost complex structure induced by
$(g, \tilde \omega ')$. We claim that $h^-_{\tilde J'} = 0$.
Indeed, if $h^-_{\tilde J'} \neq 0$, as in the proof of Theorem
\ref{nontrivK}, there exists $\beta \in \mathcal{Z}^-_J(M)$, so that
$H^-_{\tilde J'} = {\rm Span}\{ [\beta ]\}$. Moreover, there are
functions $h, q$ so that
\[ \tilde \omega' = h \omega + q J\beta , \]
on the open dense set $M''= M \setminus \beta^{-1}(0)$. On the other
hand, on $M'$ we have
\[ \tilde \omega ' = (\tilde f f) \omega + (\tilde f r) J\alpha +
\tilde r \alpha \; .\] It follows that on $M' \cap M''$, we have
\[ q J\beta = (\tilde f r) J\alpha +
\tilde r \alpha \; . \] Since $\tilde r$ is compactly supported on a
small subset in $M'$, it follows that $J\alpha$ and $J\beta$ are
conformal multiples of one another on a non-empty open set. By the
argument in the proof of Proposition \ref{int1}, it follows that
$\alpha$ and $\beta$ are (non-zero) scalar multiples of one another
on the whole $M$. Thus, $H^-_{\tilde J'} = {\rm Span}\{ [\alpha] \}$.
But, by construction, on the set where $\tilde r \neq 0$, the form
$\tilde \omega '$ is not point-wise orthogonal to $\alpha$, so $\alpha$
cannot be $\tilde J'$-anti-invariant, a contradiction. Thus, $h^-_{\tilde J'} = 0$, as claimed.
\vspace{0.2cm}
In the case $\tilde J \equiv \pm J$, the argument is similar. We
have even larger freedom in considering the deformation. Let $\alpha
\in \mathcal{Z}^-_J$, and let $r_1$, $r_2$ be compactly supported on
disjoint open sets. Consider
\[ \tilde \omega ' = f \omega + r_1 \alpha + r_2 J\alpha, \]
where $f$ is chosen to fulfill the norm condition. As above, one can
show that $h^-_{\tilde J'} = 0$.
\end{proof}
\begin{remark} \label{nononzero} {\rm The first part of the argument above shows the following:
suppose $(M,J)$ is a compact complex surface of K\"ahler type
with vanishing geometric genus and topologically non-trivial canonical bundle.
Then for any $\tilde J \in \mathcal{J}_J$, $h^-_{\tilde J} = 0$.}
\end{remark}
\vspace{0.1cm}
Finally, the openness statement in Theorem \ref{nontrivK} follows
from:
\begin{prop} \label{seqJ}
With the notations and assumptions of Theorem \ref{nontrivK},
suppose $\tilde J_k$ is a sequence of almost complex structures
converging to $\tilde J$ (in the $C^{\infty}$-topology), with
$\tilde J_k, \tilde J \in \mathcal{J}_J$. If $h^-_{\tilde J_k} \neq
0$, then $h^-_{\tilde J} \neq 0$.
\end{prop}
\begin{proof} The assumption $h^-_{\tilde J_k} \neq 0$ and the earlier arguments
in the proof of Theorem \ref{nontrivK}, show that there exists
$\alpha_k \in \mathcal{Z}^-_J$, such that $H^-_{\tilde J_k} = {\rm
Span}( [\alpha_k])$. We can normalize $\alpha_k$ so that $[\alpha_k
] \cdot [\alpha_k] = 1$, where $\cdot$ denotes here the cup-product
of cohomology. Thus, as $\alpha_k$ is a sequence on the unit sphere
in $\mathcal{Z}^-_J$ which is a compact set (note that
$\mathcal{Z}^-_J$ is finite dimensional), we can extract a
subsequence, still denoted $\alpha_k$, which converges to $\alpha
\in \mathcal{Z}^-_J$. Obviously, $[\alpha] \neq 0$, as $[\alpha]
\cdot [\alpha] = 1$. Moreover, since $\tilde J_k \rightarrow \tilde
J$, $\alpha_k \rightarrow \alpha$, the relation
$$\alpha_k(\tilde J_k X, \tilde J_k Y) = - \alpha_k(X, Y) $$
implies
$$ \alpha(\tilde J X, \tilde J Y) = - \alpha(X, Y) .$$
Thus, $h^-_{\tilde J} \neq 0$.
\end{proof}
\noindent This also completes the proof of Theorem \ref{nontrivK}.
\end{proof}
\vspace{0.1cm}
\begin{remark} {\rm A similar argument to the one in Proposition \ref{seqJ}
yields the following result: given a metric $g$ on a compact 4-manifold $M$,
the set of $g$-compatible almost complex structures $\tilde J$ with $h^-_{\tilde J} =0$
is open in the set of all $g$-compatible almost complex structures.}
\end{remark}
\begin{remark} \label{junholee} {\rm Under the assumptions of Theorem \ref{nontrivK},
if $\alpha \in \mathcal{Z}^-_J$,
then the almost complex structures $\tilde J$ defined by
(\ref{om(f,h,alpha)}) have $h^-_{\tilde J} = 1$, for any choice of
$(r, f)$ satisfying (\ref{squarenorm2}). In particular this is true for
Junho Lee's almost complex structures $J^{\pm}_{\alpha}$ defined by
(\ref{JLhf}).
Note that since $J$ is integrable, $\alpha + iJ\alpha$
is a holomorphic $(2,0)$ form on $M$, hence the zero set
$\alpha^{-1}(0)$ is a canonical divisor on $(M, J)$.}
\end{remark}
\begin{remark} {\rm If a compact 4-manifold $M$ admits a pair of
integrable complex structures $(J_1, J_2)$ which are metric related,
then $M$ has a bi-Hermitian structure. The study of such structures has been
active recently (see, for instance,
\cite{Hitchin} and the references therein), especially due to the link with
generalized K\"ahler geometry (\cite{Gualtieri}). An easy
consequence of Theorem \ref{nontrivK} is
the observation that a compact 4-manifold $M$ with $b^+ = 2$, or
$b^+ \geq 4$ does not admit a bi-Hermitian structure (compatible
with the given orientation). This is not new, as it is easily seen
from the classification results of \cite{AGG} and \cite{Ap}, that
manifolds admitting bi-Hermitian structures must have $b^+ \in \{ 0,
1, 3 \}$. }
\end{remark}
\subsubsection{Surfaces of K\"ahler type with topologically trivial,
but holomorphically non-trivial canonical bundle}
\begin{prop} \label{hyperelliptic} Suppose that $(M,J)$ is a complex surface of
K\"ahler type with topologically trivial, but holomorphically
non-trivial canonical bundle. Then for any almost complex structure
$\tilde J$ on $M$ (not even metric related to $J$), we have
$h^-_{\tilde J} \in \{0, 1\}$. The set of almost complex structures
with $h^-_{\tilde J} = 0$ is open and dense with respect to the
$C^{\infty}$-topology, both in $\mathcal J$ and $\mathcal J_J$.
\end{prop}
\begin{proof} Any such surface is a hyperelliptic
surface. In this case $b^+=1$ and the claims follow from
\eqref{type-sdasd} and Theorem \ref{int3}.
\end{proof}
We wonder whether the result in Remark \ref{nononzero} still holds
in this case; in other words, is it still true that $h^-_{\tilde J}
= 0$ for any $\tilde J \in \mathcal{J}_J$?
\subsubsection{Surfaces with holomorphically trivial canonical bundle}
Even if the non-K\"ahler subcase is covered by
Theorem \ref{nontrivK}, it is worth considering it separately, as
the result takes a very simple form. Surfaces of non-K\"ahler type with holomorphically
trivial canonical bundles are Kodaira surfaces.
Thus, let $(M,J)$ be a Kodaira surface. We have $h_J^-=b^+=2$. Let $\Phi = \beta + i J\beta$ be a nowhere
vanishing holomorphic $(2,0)-$form trivializing the canonical
bundle. The real and imaginary parts of $\Phi$, $\beta$ and $J\beta$,
are both closed, nowhere vanishing $J$-anti-invariant forms. Suppose
that $g$ is a metric compatible with $J$ and let $\omega$ be the
fundamental form of $(g,J)$. The triple $\{\omega,
\beta, J\beta\}$ is a pointwise orthogonal basis of the
rank $3$ bundle $\Lambda_g^+$. Thus, any almost complex structure
compatible with $g$ corresponds to a form
\begin{equation} \label{omfls}
\omega_{f,l,s} = f\omega+l\beta+sJ\beta,
\end{equation}
where the functions $f,l,s \in
C^{\infty}(M)$ satisfy $2f^2 + |\beta|^2(l^2+s^2) = 2$.
We denote the almost complex structure corresponding to $(g,
\omega_{f,l,s})$ by $J_{f,l,s}$. Every almost complex structure
metric related to $J$ can be obtained this way.
Since for a Kodaira surface $\mathcal H_g^+=H_J^- = {\rm Span}(\beta,
J\beta)$, the only possible self-dual harmonic forms are of
type
\[ a\, \beta+b\, J\beta, \quad \hbox{ where $a$ and $b$ are
constants}.\]
The only condition for such a form to lie in
$H_{J_{f,l,s}}^-$ is pointwise orthogonality to $\omega_{f,l,s}$; since
$<\beta, \omega> = <J\beta, \omega> = <\beta, J\beta> = 0$ and $|\beta| = |J\beta|$,
this condition reads $al+bs=0$ on $M$. Thus we have proved
\begin{prop} \label{kodsurface}
If $(M, g, J)$ is a Kodaira surface with a compatible metric $g$,
using the notations above,
\[h_{J_{f,l,s}}^-= 2 - rank({\rm Span}(l, s)) .\]
Clearly, $h_{J_{f,l,s}}^-= 0$ is the generic case, $
h_{J_{f,l,s}}^-= 2$ if and only if $l=s=0$, i.e. $J_{f,l,s} = \pm J$,
and $ h_{J_{f,l,s}}^-= 1$ if and only if the functions $l$ and $s$
are scalar multiples of each other, not both identically zero.
\end{prop}
Next, suppose that $(M,J)$ is a K\"ahler surface with
holomorphically trivial canonical bundle. Then $b^+=3$ and $(M, J)$
is a K3 surface or a $4-$torus. As in the Kodaira surface case, let
$\Phi = \beta + i J\beta$ be a nowhere vanishing holomorphic
$(2,0)-$form trivializing the canonical bundle. Consider a conformal
class of metrics compatible with $J$, and, in this class, let $g$ be
the Gauduchon metric, with $\omega$ being the associated form. As
above, denote by $J_{f,l,s}$ the almost complex structure
corresponding to form $\omega_{f,l,s}$ given by (\ref{omfls}). Every
almost complex structure metric related to $J$ is of the type
$J_{f,l,s}$ for some Gauduchon metric $g$ and for some functions $f,
l, s$.
The difference from the Kodaira surface case is that $b^+={\rm dim}
(\mathcal H_g^+)=3$, rather than $2$. As argued in Theorem
\ref{nontrivK}, any $g$-harmonic self-dual form has a constant inner product
with $\omega$. Let $\omega'$ be the unique $g$-self-dual harmonic
form with $<\omega', \omega>_g = 2$ and which is cup-product
orthogonal to $H^-_J = {\rm span} \{\beta, J\beta \}$. This is
written as
\begin{equation} \label{uv}
\omega' = \omega+ u\, \beta + v\,J\beta \; ,
\end{equation}
where $u, v$ are $C^{\infty}$-functions. They satisfy
\[\int_M u |\beta|^2 \; d\mu_g = \int_M v |\beta|^2 \; d\mu_g = 0 \; ,\]
and a differential equation corresponding to $d\omega' = 0$. Thus,
any self-dual harmonic form is of type
\[c\; \omega'+ a\; \beta+ b\; J\beta,\] where
$a$,$b$,$c$ are constants. The only condition for this form to be in
$H_{J_{f,l,s}}^-$ is to be point-wise orthogonal to $\omega_{f,l,s}$
(see (\ref{omfls})). This amounts to
\[cf'+al'+bs'=0, \]
where
\begin{equation}\label{'''} l'=l|\beta|^2, \quad
s'=s|\beta|^2\quad \hbox{ and } f'=2f+ul'+vs' \; .
\end{equation}
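Indeed, a pointwise computation gives $<\omega', \omega_{f,l,s}> = 2f + u l' + v s' = f'$, $<\beta, \omega_{f,l,s}> = l'$ and $<J\beta, \omega_{f,l,s}> = s'$.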
Therefore we have the following statement.
\begin{prop} \label{linear} Suppose $(M,J)$ is a K\"ahler surface with
holomorphically trivial canonical bundle. Let $\beta$ be a closed $J$-anti-invariant form
such that $\beta + i J\beta$ trivializes the canonical bundle. Consider a conformal class compatible with $J$
and let $g$ be the Gauduchon metric in this class. Let $\omega$ be
the associated form and let $J_{f,l,s}$ be the $g$-related almost
complex structure defined via (\ref{omfls}). Then
\[h_{J_{f,l,s}}^-=3 - rank({\rm Span } (f',l',s')),\] with $f',
l', s'$ as in \eqref{'''}. The case $h_{J_{f,l,s}}^- = 0$ is the generic situation, thus
the set of almost complex structures $\tilde J$ with $h^-_{\tilde J} = 0$ is dense in
$\mathcal{J}_J$. The cases $h_{J_{f,l,s}}^- = 2$, $h_{J_{f,l,s}}^- = 1$ also occur.
\end{prop}
\noindent Note that $g$ is a hyperK\"ahler metric precisely when
$|\beta|^2=2$ pointwise and in this case $\omega' = \omega$.
\begin{remark} \label{a}
{\rm We leave it to the interested reader to check that
$h_{J_{f,l,s}}^- = 2$ if
and only if
\[f = \pm (1 - k_1 u - k_2 v) |\beta| w, \; \; l = \pm 2k_1 |\beta|^{-1} w, \; \; s = \pm 2k_2 |\beta|^{-1} w, \]
where $k_1, k_2$ are arbitrary constants, $u, v$ are given by
(\ref{uv}), and
\[w = [(1 - k_1 u - k_2 v)^2 |\beta|^2 + 2(k_1^2 +
k_2^2)]^{-1/2} \; . \]
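(Choosing the same sign throughout, one can verify that these $(f, l, s)$ satisfy the norm condition $2f^2 + |\beta|^2(l^2 + s^2) = 2$ and that, in the notation of \eqref{'''}, $(f', l', s') = \pm 2 |\beta| w \, (1, k_1, k_2)$, so that indeed ${\rm rank}({\rm Span}(f', l', s')) = 1$.)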
We just observe that most of the examples with $h_{J_{f,l,s}}^- = 2$ described above
are non-integrable almost complex structures. This can again be
checked directly, or one can argue as follows. If for a certain
metric $g$ and functions $f,l,s$, we obtain an {\it integrable}
almost complex structure $J_{f,l,s}$, then $(g, J, J_{f,l,s})$ is a
bi-Hermitian structure. It is well known that conformal classes
carrying bi-Hermitian structures are very particular, as Theorem 2
in \cite{AGG} shows. On the other hand, our Proposition \ref{linear}
shows that examples of almost complex structures $J_{f,l,s}$ with
$h^-_{J_{f,l,s}} = 2$ occur in {\it each} conformal class associated
to the given $J$. Thus, most of these $J_{f,l,s}$ must be
non-integrable. }
\end{remark}
\noindent As an extension of Conjecture \ref{conj2}, it is natural
to ask:
\begin{question}\label{geq2}
Are there (compact, 4-dimensional) examples of non-integrable almost
complex structures $J$ with $h^-_J \geq 2$ other than the ones
arising from Proposition \ref{linear}? In particular, are there any
examples with $h^-_J \geq 3$?
\end{question}
\vspace{0.3cm}
\noindent Theorem \ref{hJ-} follows from
Theorem \ref{nontrivK} and Propositions \ref{hyperelliptic}, \ref{kodsurface},
\ref{linear}. $\Box$
\vspace{0.2cm}
\subsection{Applications of Theorem \ref{hJ-}.}
We end this section with a couple of applications of our main
result. First, we prove that the $C^{\infty}$-pure property no
longer holds even for a K\"ahler $J$, if one
gives up the compactness of the manifold.
\begin{theorem} \label{nonpure}
Let $(M,J)$ be a compact complex surface with non-trivial canonical
bundle and non-zero geometric genus (equivalently, $h^-_J \neq 0$).
Let $B$ be a small contractible open set in $M$.
Then the $C^{\infty}$-full property for $J$ still holds on $M \setminus B$,
but the $C^{\infty}$-pure property for $J$ on $M
\setminus B$ no longer holds.
\end{theorem}
\begin{proof} Since it is now
not obvious to which manifold the groups
$H^{\pm}_J$ refer, we'll use here the notations $H^{\pm}_J(M)$,
$H^{\pm}_J(M\setminus B)$, etc. By Mayer-Vietoris, the inclusion $i : M \setminus B
\hookrightarrow M$ induces an isomorphism in cohomology
$$H^2(M; \mathbb{R}) \stackrel{i^{*}}{\rightarrow} H^2(M\setminus B;
\mathbb{R}) \;. $$ Via this isomorphism, the subgroups $H^{\pm}_J(M)$
inject in $H^{\pm}_J(M \setminus B)$, respectively. Thus, $(M
\setminus B, J)$ still has the $C^{\infty}$-full property.
\vspace{0.1cm}
For the $C^{\infty}$-purity statement, let $\alpha \in \mathcal{Z}^-_J(M)$, $\alpha \not\equiv 0$.
Choose a $J$-compatible metric $g$ and
a smooth function $r \geq 0$ compactly supported on $B$, so
that $r^2 |\alpha|_g^2 < 2$. Let $f = (1 - \frac{1}{2} r^2
|\alpha|_g^2)^{1/2}$ and let $\tilde J$ be the almost complex
structure defined by $g$ and $\tilde \omega = f \omega + r \alpha$ as in
(\ref{om(f,h,alpha)}). From Theorem \ref{nontrivK}, we have
$H^-_{\tilde J}(M) = Span\{ [J\alpha]\}$.
Consider now the cohomology class $[\alpha]$. By Theorem
\ref{pf-dim4}, $[\alpha] \in H^+_{\tilde J}(M)$. Thus, there exists
a 1-form $\rho$ on $M$ such that $\alpha + d\rho$ is ${\tilde
J}$-invariant. On the other hand, by construction, it is clear that
$\tilde J = J$ on $M \setminus B$. Thus, on $M \setminus B$, $\alpha
+ d\rho$ is $J$-invariant, while $\alpha$ is obviously
$J$-anti-invariant. Hence $i^*[\alpha] \in H^+_J(M\setminus B) \cap
H^-_J(M\setminus B)$.
But the argument works for any $\alpha \in \mathcal{Z}^-_J(M)$. Thus, we get
$$ i^*(H^-_J(M)) \subset H^+_J(M\setminus B) \; \; \mbox{ and } \; i^*(H^+_J(M)) \subset H^+_J(M\setminus B) \; , \mbox{ so }$$
$$i^*(H^2(M;\mathbb{R})) = H^2(M \setminus B ; \mathbb{R}) = H^+_J(M\setminus B) .$$
Therefore, we obtain
$$ H^+_J(M \setminus B) \cap H^-_J(M \setminus B) = H^-_J(M \setminus B) \; ,$$
and the right hand-side is non-trivial, as it contains the non-zero subspace $i^*(H^-_J(M))$.
\end{proof}
\vspace{0.1cm}
Next, we show that our examples of non-integrable almost complex
structures with $h_{\tilde J}^-=2$ from Proposition \ref{linear}
cannot admit a smooth pseudoholomorphic blowup.
\begin{theorem}\label{K3T4}
Suppose $\tilde J$ is a non-integrable almost complex structure with
$h_{\tilde J}^-=2$ on a $K3$ surface (or on $T^4$) and assume also
that ${\tilde J}$ is metric related to a complex structure. Then
there is no \emph{smooth} almost complex structure ${\tilde J}'$ on
$K3\#\overline{\mathbb{CP}^2}$ (or on
$T^4\#\overline{\mathbb{CP}^2}$) so that the blowup map
$f:K3\#\overline{\mathbb{CP}^2} \rightarrow K3$ (or
$f:T^4\#\overline{\mathbb{CP}^2}\rightarrow T^4$) is a $({\tilde
J}',\tilde J)$ holomorphic map. In other words, there is no
pseudoholomorphic blowup for such a $\tilde J$.
\end{theorem}
\begin{proof}
If there were such a ${\tilde J}'$, it would have to satisfy:
\begin{enumerate}
\item ${\tilde J}'$ is not integrable;
\item $h_{\tilde J'}^-=2$;
\item $\tilde J'$ is metric related to a complex structure.
\end{enumerate}
\noindent However, by our Theorem \ref{hJ-}, there are no such almost complex
structures on $K3\#\overline{\mathbb{CP}^2}$ (or
$T^4\#\overline{\mathbb{CP}^2}$).
\end{proof}
\noindent The above theorem should be compared with Usher's result
\cite{Usher}: there is always such a Lipschitz continuous almost
complex structure ${\tilde J}'$.
The same argument, with some modification
of our previous definitions, shows that there is no such $C^1$
almost complex structure either.
\section{Well-balanced almost Hermitian 4-manifolds}
In this section we introduce a class of 4-dimensional almost Hermitian structures
that contains the Hermitian ones and the almost K\"ahler ones.
\vspace{0.2cm}
\subsection{The image of the Nijenhuis tensor}
Given an almost complex structure $J$, at each point $p\in M$
define the image of its Nijenhuis tensor $N_J$
by
\[ Im(N_J)_p = {\rm Span } \{ N_J(X,Y) \; | X, Y \in T_p M \}. \]
This subspace is $J$-invariant, that is, if $Z \in Im(N_J)_p$, then $JZ \in Im(N_J)_p$.
A specific feature of dimension 4 is that at each point $Im(N_J)_p$ is either trivial or 2-dimensional,
but never 4-dimensional. This is so because $N_J$ can be seen as a map
\[ N_J : T^{2,0}_J \rightarrow T^{0,1}_J, \; \; N_J(Z_1 \wedge Z_2) =
N_J(Z_1, Z_2) = [Z_1, Z_2]^{0,1}, \; Z_1,Z_2 \in T^{1,0}_J, \]
and in dimension 4 the bundle $T^{2,0}_J$ is real 2-dimensional. Here the superscripts denote
the usual complex type of vectors and forms induced by $J$.
One can ask when $Im(N_J)$ is a distribution on $M$.
This certainly happens when $J$ is integrable, since by the Newlander-Nirenberg theorem
integrability is equivalent to $N_J = 0$ everywhere. To ask that $Im(N_J)$ is everywhere
2-dimensional on $M$ is equivalent to saying that $N_J$ is non-vanishing at each point.
As the Nijenhuis tensor can be seen as a section of the bundle
$\Lambda^{2,0}_J \otimes T^{0,1}_J$, John Armstrong observed (\cite{Arm}, Lemma 3)
that the non-vanishing of $N_J$ at each point has topological consequences.
\begin{prop} (\cite{Arm}) \label{nonzeroN}
If $(M, J)$ is a 4-dimensional compact almost complex manifold with $N_J$
non-vanishing at each point, then the signature and Euler characteristic of $M$ satisfy
\[ 5\chi(M) + 6 \sigma(M) = 0 \; .\]
\end{prop}
\subsection{The well-balanced condition}
The following is a classical result (see, for instance, \cite{KN}).
\begin{prop}
Let $(M, g, J, \omega)$ be an almost Hermitian manifold. Then
\begin{equation} \label{nablaom-N}
(\nabla_X \omega)(\cdot,\cdot) = 2<N_J(\cdot,\cdot), JX> +
\frac{1}{2} \Big( d\omega(X, \cdot, \cdot) - d\omega(X, J\cdot,
J\cdot) \Big)
\end{equation}
\end{prop}
\noindent It is well known that in dimension 4, there are just two Gray-Hervella
\cite{GrHer}
classes of special almost Hermitian manifolds -- Hermitian and
almost K\"ahler ones. These correspond to the vanishing (for any
$X$) of the first, respectively second term on the right side of
(\ref{nablaom-N}). In fact, on a general 4-dimensional almost
Hermitian manifold, let $\theta$ be the Lee form defined by $d\omega
= \theta \wedge \omega$. Then a short computation shows that
\begin{equation} \label{JXwedgetheta}
\frac{1}{2} \Big( d\omega(X, \cdot, \cdot) - d\omega(X, J\cdot,
J\cdot) \Big) = ((JX)^{\flat} \wedge \theta)'',
\end{equation}
where the superscript $''$ denotes the $J$-anti-invariant part of a 2-form.
It is clear that the right hand-side of (\ref{JXwedgetheta}) vanishes for all
$X$ if and only if $\theta = 0$, i.e. $d\omega = 0$.
Relaxing both the Hermitian and the almost K\"ahler conditions, it is natural to ask
that for every $X$ at least one (but not necessarily the same) of the terms in the right
hand-side of (\ref{nablaom-N}) vanishes. From the observations above, we know that in dimension 4
the Nijenhuis term vanishes for at least a two dimensional space at each point.
The proof of the following proposition is tedious (but
straightforward), so we just sketch it, leaving the interested
reader to fill in the remaining details.
\begin{prop} \label{equiv-wellbal}
Let $(M^4, g, J, \omega)$ be a 4-dimensional almost Hermitian manifold.
Then the following statements are equivalent:
(i) For any $p \in M$ and any $X \in T_p M$, at least one of the
terms in the right side of (\ref{nablaom-N}) vanishes;
(ii) For any $p \in M$, $(N_J)_p = 0$, or $\theta_p^{\sharp} \in Im(N_J)_p$;
(iii) $ \Big( \imath_{N_J(X,Y)} d\omega \Big)'' = 0 $, for any $X,Y
\in T_p M$ and $p \in M$;
(iv) For any local non-vanishing section $\psi \in \Omega_J^-$,
\[ |\nabla \psi|^2 = |\nabla (J\psi)|^2 , \; \; <\nabla \psi, \nabla (J\psi)> = 0 . \]
\end{prop}
\begin{proof} Any (smooth) local section $\phi \in \Omega^-_J$ with $|\phi|^2 =
2$, determines (smooth) local 1-forms $a, b, c$ by
\begin{equation} \label{abc}
\begin{array}{ll}
\nabla \omega &= a \otimes \phi + b \otimes J \phi \\
\nabla \phi &= - a \otimes \omega + c \otimes J \phi \\
\nabla (J \phi) &= - b \otimes \omega - c \otimes \phi ,
\end{array}
\end{equation}
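The existence of such $a$, $b$, $c$ reflects the standard fact that, in dimension 4, the Levi-Civita connection preserves the rank 3 bundle $\Lambda^+ = \mathbb{R}\,\omega \oplus \Omega^-_J$, and hence acts on the orthogonal frame $\{\omega, \phi, J\phi\}$, each element of norm $\sqrt{2}$, by a skew-symmetric matrix of 1-forms; $a$, $b$, $c$ are the entries of that matrix.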
We show that conditions (i), (ii), (iii), (iv) are all equivalent
with:
(v) For any point $p \in M$, there exists an open set $U$ containing
$p$ and a section $\phi \in \Omega^-_J$, defined on $U$, with
$|\phi|^2 = 2$ , so that the corresponding 1-forms $a$ and $b$
satisfy the pointwise conditions
\begin{equation} \label{wellbal1}
|a|^2 = |b|^2 \mbox{ and } \; <a,b> = 0.
\end{equation}
Note first of all that if the condition (\ref{wellbal1}) is
satisfied for a given section $\phi \in \Omega^-_J$ with $|\phi|^2 =
2$, then it holds for any other section $\tilde \phi$ with the same
property (in other words, (\ref{wellbal1}) is ``gauge''
independent). Indeed, let
\[ \tilde \phi = \cos t \; \phi + \sin t \; J\phi, \]
for some smooth local function $t$. The corresponding 1-forms
given by (\ref{abc}) change as
\begin{equation} \label{abc-change}
\begin{array}{ll}
\tilde a & = a \, \cos t + b \, \sin t \\
\tilde b & = - a \, \sin t + b \, \cos t \\
\tilde c & = c + d t .
\end{array}
\end{equation}
Then it is easily checked that $\tilde a, \tilde b, \tilde c$
satisfy (\ref{wellbal1}), assuming that $a, b, c$ did so.
We prove now the equivalence (iv) $ \Leftrightarrow $ (v). Given a
section $\phi \in \Omega^-_J$, with $|\phi|^2 = 2$ and the 1-forms
$a, b, c$ defined by (\ref{abc}), one checks that
\[ |\nabla \phi|^2 - |\nabla J \phi |^2 = 2(|a|^2 - |b|^2) \; , \;
\; \; <\nabla \phi, \nabla J\phi> = 2<a, b> .\] Hence, the
implication (iv) $\Rightarrow$ (v) is clear. For the other
implication, let $\psi \in \Omega_J^-$ be a local non-vanishing
section and let $\phi = \frac{\sqrt{2} \psi}{|\psi|}$.
Straightforward computations imply
\[ |\nabla \phi|^2 - |\nabla J\phi|^2 = \frac{2(|\nabla \psi|^2 - |\nabla
J\psi|^2)}{|\psi|^2} \; , \; \; <\nabla \phi, \nabla J \phi> =
\frac{2 <\nabla \psi, \nabla J \psi>}{|\psi|^2}, \] and (v)
$\Rightarrow$ (iv) follows now easily.
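For instance, the two identities displayed above follow by expanding with (\ref{abc}): since $\omega$, $\phi$, $J\phi$ are mutually orthogonal with $|\omega|^2 = |\phi|^2 = |J\phi|^2 = 2$, one gets
\[ |\nabla \phi|^2 = 2|a|^2 + 2|c|^2 \; , \; \; |\nabla (J\phi)|^2 = 2|b|^2 + 2|c|^2 \; , \; \;
<\nabla \phi, \nabla (J\phi)> = 2<a,b> \; . \]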
Using (\ref{JXwedgetheta}), the reader can check the equivalences
(i) $ \Leftrightarrow $ (ii) $ \Leftrightarrow $ (iii). We will show
next that (i) $ \Leftrightarrow $ (v). Let $\phi$ be a local section
in $\Omega^-_J$, with $|\phi|^2 = 2$. Then, using the symmetries of
the Nijenhuis tensor and (\ref{JXwedgetheta}) one can check that
\[ 2<N_J(\cdot,\cdot), JX> = m(X) \otimes \phi - Jm(X) \otimes J\phi ,\]
\[ \frac{1}{2} \Big( d\omega(X, \cdot, \cdot) - d\omega(X, J\cdot,
J\cdot) \Big) = n(X) \otimes \phi + Jn(X) \otimes J\phi, \] where
$m$ and $n$ are local 1-forms. Thus, with respect to the chosen
section $\phi$, the 1-forms $a, b$ given by (\ref{abc}) are given by
\[ a = m + n \; , \; \; b = -Jm + Jn .\]
Easy computation shows that (\ref{wellbal1}) is equivalent to
\[ <m, n> = 0 \; , \; \; <m, Jn> = 0 , \] which is easily seen to be
equivalent to (i).
\end{proof}
\begin{definition} (i) An almost Hermitian manifold $(M^4, g,
J, \omega)$ is called {\it well-balanced} if it satisfies one (and hence
all) of the conditions in Proposition \ref{equiv-wellbal}.
(ii) An almost complex structure $J$ on a 4-manifold $M^4$ is called
{\it well-balanced} if it admits a compatible well-balanced almost
Hermitian structure.
\end{definition}
\noindent It is interesting to understand how large the class of
well-balanced almost complex structures on compact 4-manifolds is, but
we leave this problem for future study. Locally, any almost complex
structure in dimension 4 is compatible with some symplectic form
\cite{Lej}, so locally any almost complex structure is
well-balanced.
\vspace{0.2cm}
The following result provides examples of
well-balanced almost Hermitian 4-manifolds.
\begin{prop} \label{expwb} (i) Any 4-dimensional Hermitian or almost K\"ahler manifold is
well-balanced.
(ii) Suppose $g$ is a Riemannian metric adapted to a
complex-symplectic structure on a 4-manifold; in other words, assume that $g$
is compatible to a triple $I, J, K$ of almost complex structures
satisfying the quaternion relations and assume that $I$ is
integrable, and that $(g,J)$ and $(g, K)$ are almost K\"ahler. Then
for any constant angles $t$ and $s$, the almost Hermitian structure
$(g, \tilde J)$ with $\tilde J = \cos t \, I + \sin t \, (\cos s \,
J + \sin s \, K) $ is well-balanced.
(iii) Let $M$ be a compact quotient of the 3-step nilpotent Lie group $G$
by a discrete subgroup, where the Lie algebra $\mathfrak{g}$ of $G$
has structure equations
\[ de^1 = de^2 = 0, de^3 = e^{1} \wedge e^{4}, de^4 = e^{1} \wedge e^{2} .\]
Consider the invariant metric $g = \sum (e^i \otimes e^i) $ and the compatible almost complex structure
$J$ given by $Je^1 = e^2$, $Je^3 = e^4$. Then $(g,J)$ is well-balanced.
\end{prop}
\begin{proof}
(i) In either case, it is obvious that condition (iii) of Proposition \ref{equiv-wellbal} is satisfied.
\vspace{0.2cm}
(ii) It is clear that it is enough to check the case $s=0$. Let us
denote $\omega_I, \omega_J, \omega_K$ the three fundamental forms.
Since $(g, I)$ is Hermitian and $(g,J), (g,K)$ are almost K\"ahler,
we have
\begin{equation} \label{aIJK}
\begin{array}{ll}
\nabla \omega_I &= a \otimes \omega_J + Ia \otimes \omega_K \\
\nabla \omega_J &= - a \otimes \omega_I - Ja \otimes \omega_K \\
\nabla \omega_K &= - Ia \otimes \omega_I + Ja \otimes \omega_J ,
\end{array}
\end{equation}
for a 1-form $a$. Let $\tilde \omega$ be the form corresponding to
$\tilde J = \cos t \, I + \sin t \, J$. Taking $\tilde \phi = \omega_K$, a
short computation shows that
\[ \nabla \tilde \omega = (Ia \cos t - Ja \sin t) \otimes \tilde
\phi - a \otimes \tilde J \tilde \phi ,\] and the statement is
easily verified.
(iii) Direct computation shows that at each point
$Im(N_J) = {\rm Span}(e_3, e_4)$. Even without computation, one can verify this
by noting that the commutator $[\mathfrak{g}, \mathfrak{g}]$ is ${\rm Span}(e_3, e_4)$
and this is $J$-invariant, by the definition of $J$. Next, using the structure equations,
one checks that $d\omega = - e^3 \wedge \omega$, where $\omega = e^1 \wedge e^2 + e^3 \wedge e^4$.
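Indeed, since $de^1 = de^2 = 0$,
\[ d\omega = de^3 \wedge e^4 - e^3 \wedge de^4 = e^{1} \wedge e^{4} \wedge e^{4} - e^3 \wedge e^{1} \wedge e^{2}
= - e^3 \wedge (e^{1} \wedge e^{2} + e^{3} \wedge e^{4}) = - e^3 \wedge \omega \; , \]
using that $e^{1} \wedge e^{4} \wedge e^{4} = 0 = e^3 \wedge e^{3} \wedge e^{4}$.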
Thus, condition (ii) of Proposition \ref{equiv-wellbal} is satisfied.
\end{proof}
\begin{remark}
{\rm Note that $J$ from example (iii) in Proposition \ref{expwb}
is not integrable and cannot be tamed by a symplectic form on $M$
(see Proposition 6.1 in \cite{FT} and Remark \ref{expFT} above).}
\end{remark}
\noindent To state the main result of this subsection we need one more
definition.
\begin{definition} An almost Hermitian manifold $(M^4, g,
J)$ has {\it Hermitian type Weyl tensor} if
\begin{equation} \label{HW}
<W^+(J\beta), J\beta> = <W^+(\beta), \beta>, \mbox{ for any } \beta
\in \Omega^-_J .
\end{equation}
\end{definition}
\noindent It is well known that if $J$ is integrable (i.e.
$(M^4, g, J)$ is a Hermitian manifold), then (\ref{HW}) holds.
Also, any almost Hermitian structure with an ASD metric trivially
satisfies (\ref{HW}).
\begin{theorem} \label{mainwb}
Let $(M^4,J)$ be a compact almost complex 4-manifold which admits a
compatible Riemannian metric $g$ so that $(g,J)$ is well-balanced
and has Hermitian type Weyl tensor. Then $h^-_J = 0$ or $J$ is
integrable.
\end{theorem}
\begin{proof} Suppose $\beta$ is a non-trivial closed, $J$-anti-invariant
form on $M$. The next Lemma shows that, under the given assumptions,
$J\beta$ is also closed, thus $\Phi = \beta + i J\beta$ is a closed,
complex form of (2,0) type. The integrability of $J$ then follows
(see e.g. \cite{Sal}).
\begin{lemma}
Let $(M^4, g, J, \omega)$ be a compact, almost Hermitian 4-manifold
which is well-balanced and has Hermitian type Weyl tensor. Then for
any $\beta \in \Omega^-_J$, $d\beta = 0 \Leftrightarrow d(J\beta) =
0$.
\end{lemma}
\noindent {\it Proof of Lemma:} It is enough to prove $d \beta = 0
\Rightarrow d (J\beta) = 0$. The well-known Weitzenb\"ock formula
for a 2-form $\psi$ is
\[ \int_M ( |d \psi|^2 + |\delta \psi|^2 - |\nabla \psi|^2 ) \; dV
= \int_M (\frac{s}{3} |\psi|^2 - <W(\psi), \psi>) \; dV . \]
Applying this for $\beta$ and $J\beta$ and using the assumption on
the Weyl tensor, we get
\[ \int_M ( |d \beta|^2 + |\delta \beta|^2 - |\nabla \beta|^2 ) \; dV
= \int_M ( |d (J\beta)|^2 + |\delta (J\beta)|^2 - |\nabla(J\beta)|^2 ) \;
dV .\] Now, by assumption $\beta \in \Omega^-_J$ and $d\beta = 0$,
thus $\beta$ is harmonic, so it is non-vanishing on an open dense
set in $M$. From the well-balanced assumption and continuity, we get
that $|\nabla(J\beta)|^2 = |\nabla \beta|^2$ everywhere on $M$.
Thus,
\[ 0 = \int_M ( |d \beta|^2 + |\delta \beta|^2 ) \; dV
= \int_M ( |d (J\beta)|^2 + |\delta (J\beta)|^2 ) \;
dV .\] Since the integrand on the right is non-negative, this forces $d (J\beta) = 0$. The lemma and the Theorem are thus proved.
\end{proof}
\noindent The following is an immediate consequence.
\begin{cor}
A compact 4-dimensional almost K\"ahler structure $(g,J, \omega)$
with Hermitian type Weyl tensor and with $h^-_J \neq 0$ must be
K\"ahler.
\end{cor}
\begin{remark} {\rm Under different additional conditions, some other
integrability results have been obtained for compact, 4-dimensional
almost K\"ahler manifolds $(g,J, \omega)$ with Hermitian type Weyl
tensor (see \cite{aa}, \cite{aad}).}
\end{remark}
\begin{remark} {\rm The corollary implies that if we start with a K\"ahler
surface $(M, g, J, \omega)$
and define the almost complex structures $\tilde J^{\pm}_{\alpha}$
corresponding to (\ref{om(f,h,alpha)}) and (\ref{hf}) for $\alpha
\in \mathcal Z^-_J$, then $\tilde J^{\pm}_{\alpha}$ cannot admit compatible
almost K\"ahler structures with Hermitian-type Weyl tensor.
They do admit compatible almost K\"ahler
structures (since $\pm \omega + \alpha$ is symplectic).}
\end{remark}
\section{Symplectic Calabi-Yau equation and semi-continuity property of $h_J^{\pm}$} \label{sCY}
In this section, we use the beautiful ideas in \cite{D} to
establish a stronger semi-continuity property for $h_J^{\pm}$ than
in Theorem \ref{semicont-hJt}, near an almost complex structure
which admits a compatible symplectic form.
\subsection{Symplectic CY equation and openness}
The classical Calabi-Yau theorem can be stated as follows: Let
$(M,J, \widetilde{\omega})$ be a K\"ahler manifold. For any volume form $\sigma$
satisfying $\int_M\sigma=\int_M\widetilde {\omega}^n$, there exists a unique
K\"ahler form ${\omega}$ with
$[\omega]=[\widetilde{\omega}]$ s.t. ${\omega}^n=\sigma$.
Yau's original proof of the existence (\cite{Y}) makes use of a
continuity method between the prescribed volume form $\sigma$ and
the natural volume form $\widetilde{\omega}^n$. The proof of openness is by the
implicit function theorem. The closedness part is obtained by a
priori estimates.
\subsubsection{Set up}In \cite{D}, Donaldson
introduced the symplectic version of the Calabi-Yau equation.
Let $(M, J)$ be a compact almost complex
$2n-$manifold and assume that $\widetilde{\omega}$ is a symplectic form
compatible with $J$.
For any function $F$ with
\begin{equation}
\int_M e^F \widetilde{\omega}^n=\int_M \widetilde{\omega}^n
\end{equation}
the symplectic CY equation is the following equation for a
$J-$compatible symplectic form $\omega$,
\begin{equation} \label{syCY} \omega^n=e^F \widetilde{\omega}^n.
\end{equation}
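Note that if one seeks $\omega$ in the fixed cohomology class $[\widetilde{\omega}]$, as in the K\"ahler statement above, then integrating (\ref{syCY}) over $M$ shows that the normalization condition imposed on $F$ is necessary for solvability, since cohomologous symplectic forms satisfy $\int_M \omega^n = \int_M \widetilde{\omega}^n$.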
In \cite{D}, Donaldson further observed that solvability of the
symplectic CY equation in dimension $4$ may lead to some amazing
results in four dimensional symplectic geometry.
\subsubsection{Openness}
In Donaldson's paper, he proves that the solution set of the
symplectic CY equation (\ref{syCY}) is open by using the implicit
function theorem. This only works for dimension $4$. Donaldson
actually works in the general setting of $2-$forms on $4-$manifolds.
Suppose $M$ is a $4-$manifold with a volume form $\rho$ and a choice
of almost-complex structure $J$. At any point $x$, $\rho$ and $J$
induce a volume form and a complex structure on the vector space
$T_x(M)$. Denote by $P_x$ the set of positive $(1,1)-$forms whose
square is the given volume form. Then $P_x$ is a three-dimensional
submanifold in $\Lambda^2T_x(M)$ (a sphere in a $(3,1)-$space). We
consider the $7-$dimensional manifold $\mathcal {P}$ fibred over
$M$ with fiber $P_x$,
\[\mathcal {P}=\{\,\omega \,|\, \omega^2=\rho ,\ \omega \hbox{ is compatible with
$J$} \,\}.\] It is a submanifold of the total space of the bundle
$\Lambda^2$.
Now, we want to find a symplectic form $\omega$ which is compatible
with $J$, has a fixed volume form, and satisfies certain cohomological conditions.
That is, we are searching for $\omega$ satisfying the following
conditions (we call them the type $D$ conditions):
\begin{equation}
\left\{ \begin{array}{ll}
\omega & \mbox{ is a section of } \mathcal{P}_{\rho}, \\
d\omega & =0, \\
\left[\omega\right] & \in e + H^2_+ \subset H^2(M;\mathbb{R}).
\end{array} \right.
\end{equation}
Here $e$ is a fixed cohomology
class and $H^2_+$ is a maximal positive subspace. Notice that we have
three families of variables: $\rho$, $J$ and $e$. In particular, $e$
varies in a finite dimensional space.
We have the following result which is a slight variation of
Proposition $1$ in \cite{D}.
\begin{prop} \label{sol}Suppose $\omega$ is a solution of the type $D$
constraint with given $\mathcal{P}$ and $e$. If we have a smooth
family $\{\mathcal{P}^{(b)}\}$, parameterized by $b$ in a Banach space $B$,
with $\mathcal{P}=\mathcal{P}^{(0)}$, then for each $b$ in a
sufficiently small neighborhood of $0$ in $B$ we have a unique solution
of the deformed constraint.
Further, this solution lies in a small $C^0$ neighborhood of $\omega$.
\end{prop}
We only indicate
how to find a small neighborhood of $0$ in $B$ on which existence holds.
For each point $x\in M$, the tangent space to ${ P}_x$ at
$\omega(x)$ is a maximal negative space. Thus the solution $\omega$
determines a conformal structure on $M$. We fix a Riemannian metric
$g$ in this conformal class (actually, we can choose the metric
determined by $\omega$ and $J$). For small $\eta$, the condition that $\omega+\eta$
lies in $\mathcal P_{\rho}$ is expressed as
\[\eta^+=Q(\eta),\]
where $Q$ is a smooth map with $Q(\eta)=O(\eta^2)$. After choosing
$2-$form representatives of $H^2_+$, closed forms $\omega+\eta$
satisfying our cohomological constraint can be expressed as
$\omega+da+h$ where $h\in H^2_+$ and where $a$ is a $1-$form
satisfying the gauge fixing constraint $d^*a=0$. Thus our
constraints correspond to the solutions of the PDE
\begin{equation} \label{PDE} \left\{ \begin{array}{ll} d^*a&=0 \\
d^+a&=Q(da+h)-h^+.
\end{array} \right.
\end{equation}
Thus, our constraints are represented by a system of nonlinear elliptic PDE.
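Note that, since $Q(\eta) = O(\eta^2)$, linearizing (\ref{PDE}) in $a$ at $a = 0$, $h = 0$ simply gives $\dot a \longmapsto (d^{*}\dot a, \, d^{+}\dot a)$.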
Donaldson further observes that its linearization
\[L=d^*\oplus d^+:\Omega^1/\mathcal{H}^1\longrightarrow
\Omega^0/\mathcal{H}^0 \oplus \Omega^2_+/H^2_+\] is invertible. Then
we apply the following version of the implicit function theorem:
\begin{theorem} \label{implicit} Let $X$, $Y$, $Z$ be Banach spaces and $f: X \times Y
\longrightarrow Z$ a Fr\'echet differentiable map. Suppose that $(x_0,y_0)\in
X \times Y$, that $f(x_0,y_0) = 0$, and that $y\mapsto D_2f(x_0,y_0)(0,y)$ is a
Banach space isomorphism from $Y$ onto $Z$. Then there exist
neighborhoods $U$ of $x_0$ and $V$ of $y_0$ and a Fr\'echet
differentiable function $g: U \longrightarrow V$ such that $f(x,g(x)) = 0$, and $f(x,y) = 0$
if and only if $y = g(x)$, for all $(x,y)\in U \times V$.
\end{theorem}
To use this theorem, first notice that $D_2f$ is just our $L$
defined above, which is invertible at a solution of our constraints.
Moreover, $X$ is our parametrization space $B$, $Y$ is
$(\Omega^{1})_1/\mathcal{H}^1$, $Z$ is $(\Omega^{0})_0/\mathcal{H}^0
\oplus (\Omega^{2}_+)_0/H^2_+$. Here $(\Omega^{n})_m$ represents the
space of $C^m$ $n-$forms.
Then every condition is satisfied in our
setting.
\subsection{Semi-continuity properties of $h_J^{\pm}$}
\subsubsection{Weak and strong neighborhoods}
\label{ngh}
As described in Section \ref{topJs}, the space of $C^{\infty}$ almost complex structures
$\mathcal{J}=\mathcal{J}^\infty$ is not a Banach manifold but a
Fr\'echet manifold. In this case we can still apply Proposition
\ref{sol} to a smooth path or a finite dimensional space (hence
Banach) in $\mathcal J$. That is to say, if an almost complex
structure $J$ has a solution of the CY equation
${\omega}^2=\rho$ with a $J-$compatible form $\omega$ satisfying
$[{\omega}]\in e+H_+^2$, then for any path through $J$,
there is a small interval near $J$ such that the CY type
equation is solvable with conditions in $(D_t)$ in this interval. In
the end we get a weak neighborhood--the union of all the intervals.
Notice that this is not necessarily ``a small ball" near $J$, i.e.
it may not have an interior point.
We would like to apply Proposition \ref{sol} to an open neighborhood with respect to
the $C^{\infty}$ topology, which can be called a strong neighborhood compared with
the one described above.
For this purpose, notice that the tangent space $T_J\mathcal{J}^l$ at $J$
consists of $C^l-$sections $A$ of the bundle $End(TM,J)$ such that
$AJ+JA=0$.
It is a Banach space with $C^l$ norm. Moreover, this gives rise to a
local model for $\mathcal J^l$ via $Y\longmapsto J\rm{exp}(-JY)$.
Thus we can apply Proposition \ref{sol} to a Banach chart of $J$ in
the space of $C^l$ almost complex structures $\mathcal{J}^l$ endowed
with $C^l$ norm.
\begin{cor} \label{nbd} If we parameterize $\mathcal{P}^{(b)}$ in
Proposition \ref{sol} by a neighborhood $U(J_0)$ of $J_0$ in
$\mathcal{J}$ with $C^{\infty}$ topology, then we can get a
small neighborhood of $J_0$ satisfying all the properties stated in
Proposition \ref{sol} under the same topology.
\end{cor}
\begin{proof}
The space of $C^1$ almost complex structures $\mathcal J^1$ with
$C^1$ norm is a Banach manifold. We parameterize a neighborhood of
$J_0$ by an open set in the induced Banach space.
Then we can apply Proposition \ref{sol} in this setting. The only
point we need to check is the Fr\'echet differentiability of
the dependence of our constraints on the parametrization
space $\mathcal J^1$. Here, we adapt the arguments in \cite{W}.
We define a tensor $\Pi$ (which is denoted by $\mathcal{P}$ in \cite{W}) as
\[\Pi^{ij}_{kl}=\frac{1}{2}(\delta^i_k\delta^j_l-J^i_kJ^j_l).\]
When restricted to the space of $2-$forms, $\Pi$ is just the
projection onto the $J-$anti-invariant part. As in \cite{W}, we also
define $\chi_1, \cdots, \chi_r$ to be self-dual harmonic $2-$forms with
respect to $\omega$ such that $\{\omega, \chi_1, \cdots, \chi_r\}$
is an $L^2$-orthogonal basis for $\mathcal H^+_{\omega}$.
Consider the operator $\Phi: (\Omega^{1})_1\times \mathbb R^r\times
\mathcal J^1\longrightarrow (\Omega^2)_0$ defined by
\[\Phi(b,\underline{s},J)=(\log\dfrac{(\omega+\sum_{i=1}^rs_i \chi_i+db)^2}{\omega^2})\frac{(Id-\Pi_J)\omega}{2}+
\Pi_J(\omega+\sum_{i=1}^rs_i \chi_i+db).\]
The solution of $\Phi(b,\underline{s},J)=0$ gives a closed,
$J-$invariant form with the same volume form as $\omega$. In other
words, we get a description of our constraints by the zero set of a
map. It is easy to see that the map $\Phi$ is a Fr\'echet
differentiable map.
Thus by Proposition \ref{sol} we have a neighborhood $U^1(J_0)$ of
$J_0$ in which we have all the properties stated there. In particular,
we can suppose $U^1(J_0)$ is a ball in $\mathcal J^1$ with radius
$\epsilon$.
Finally, the small neighborhood of $J_0$ in $C^{\infty}$ with
$d(J_0,J)<\frac{\epsilon}{2(1+\epsilon)}$, where $d$ is defined in \eqref{metric-top}, is what we want.
\end{proof}
\subsubsection{Variations of $h_J^{\pm}$}
Following \cite{LZ}, given a compact almost complex manifold $(M,J)$ define
the $J$-{\em compatible symplectic cone}
$$
\mathcal{K}^c_J=\left\{[\omega]\in
H^2_{dR}(M;\mathbb{R})\,\,\,\vert\,\, \hbox{\rm $\omega$ symplectic and }\,J\,\, \hbox{\rm is}\,\,
\omega\hbox{-\rm compatible}\right\}\,.
$$
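For example, if $J$ is integrable and admits a K\"ahler metric, then the K\"ahler cone of $(M,J)$ is contained in $\mathcal{K}^c_J$, since K\"ahler forms are in particular symplectic and compatible with $J$.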
It is easy to see that $\mathcal{K}^c_J$ is an open convex set in $H^+_J$.
It is also immediate from the definition that $\mathcal{K}^c_J \neq \emptyset$ if and only if
$J$ admits compatible symplectic forms.
\begin{theorem} \label{deformation} Suppose $M$ is a $4-$manifold with an
almost complex structure $J$ such that $\mathcal K_J^c(M) \neq \emptyset$.
Then for any almost complex structure $J'$ in a
sufficiently small neighborhood of $J$ as in Corollary \ref{nbd}, we
have
\begin{itemize}
\item $\mathcal K_{J'}^c(M)\neq \emptyset$;
\item $h_J^{+}(M)\le h_{J'}^{+}(M)$;
\item $h_J^{-}(M)\ge h_{J'}^{-}(M)$.
\end{itemize}
\end{theorem}
\begin{proof} The first statement is a direct consequence of Corollary \ref{nbd},
and was already observed by Donaldson
(see also \cite{Lej}).
As $\mathcal K_{J'}^c(M)$ and $\mathcal K_J^c(M)$ are nonempty open
sets in $H_{J'}^{+}(M)$ and $H_J^{+}(M)$ respectively, to estimate
$h_J^{+}(M)$ and $h_{J'}^{+}(M)$, we only need to estimate the
dimensions of $\mathcal K_{J'}^c(M)$ and $\mathcal K_J^c(M)$.
Let $h=h_J^{+}(M)$. We choose $h$ rays which are ``in general
position'', i.e. the interior of their span is an open set of
$\mathcal K_J^c(M)$. We suppose the $h$ rays are the $\mathbb{R}^{+}\cdot
[\omega_i]$'s, where the $\omega_i$'s are $J-$compatible symplectic forms and the
$[\omega_i]$'s have norm $1$ with respect to some fixed norm on $H^2(M;\mathbb{R})$.
Then we use Corollary \ref{nbd} for each $i$ with fixed volume
form $\omega^2_i$. Then we have $h$ neighborhoods $U_i$ such that
for $J'\in U_i$, we have a $J'$ compatible form $\omega'_i$ which is
a small perturbation of $\omega_i$. Let $U$ be the intersection of
these $h$ neighborhoods. Then for any $J' \in U$, we have
$\omega'_i$'s which are still in general position (because they
are perturbed in $C^0$ norm from a general position). And we see that the span of
the $h$ new rays belongs to $\mathcal K_{J'}^c(M)$ because
positive combinations of $\omega'_i$'s are still $J'-$compatible
forms. Hence we have $h_J^{+}(M)\le h_{J'}^{+}(M)$.
The last inequality is a consequence of the previous one and Theorem
\ref{pf-dim4}.
\end{proof}
\begin{remark}
{\rm The first statement also means that, on a $4-$manifold, the space
$\mathcal J_{ak}$ of almost complex structures admitting compatible symplectic
forms is an open
subset of $\mathcal J$. If one considers complex deformations, the analogue of the first statement is a classical
theorem of Kodaira and Spencer. Their theorem is in fact valid for
any even dimension.}
\end{remark}
Let us consider the stratification
\[\mathcal J=\bigsqcup_{i=0}^{b^+} \mathcal
J_i,\] where $J\in \mathcal J_i$ if $h_J^-=i$. Then we have
\begin{cor} \label{0ak} On a $4-$manifold, $\mathcal J_0\cap \mathcal J_{ak}$ is
open in $\mathcal J$.
\end{cor}
\noindent It is known that $\mathcal J_{ak}$ is never the full space $\mathcal J$.
In fact, in any connected component of $\mathcal J$ there are non-tamed almost complex structures (see e.g. \cite{D}).
Nonetheless, Corollary \ref{0ak} provides strong evidence for Conjecture \ref{conj1}.
In addition, the path-wise semi-continuity
established in Theorem \ref{semicont-hJt} indicates that the strong
semi-continuity property of Theorem \ref{deformation} very likely
holds for every $J$. This would imply that $\mathcal J_0$ is open in
$\mathcal J$.
| {
"timestamp": "2011-04-14T02:02:08",
"yymm": "1104",
"arxiv_id": "1104.2511",
"language": "en",
"url": "https://arxiv.org/abs/1104.2511",
"abstract": "For a compact almost complex 4-manifold $(M,J)$, we study the subgroups $H^{\\pm}_J$ of $H^2(M, \\mathbb{R})$ consisting of cohomology classes representable by $J$-invariant, respectively, $J$-anti-invariant 2-forms. If $b^+ =1$, we show that for generic almost complex structures on $M$, the subgroup $H^-_J$ is trivial. Computations of the subgroups and their dimensions $h^{\\pm}_J$ are obtained for almost complex structures related to integrable ones. We also prove semi-continuity properties for $h^{\\pm}_J$.",
"subjects": "Symplectic Geometry (math.SG); Differential Geometry (math.DG)",
"title": "On the J-anti-invariant cohomology of almost complex 4-manifolds",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540656452213,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7099046520391478
} |
https://arxiv.org/abs/2205.09116 | Exploring the Adjugate Matrix Approach to Quaternion Pose Extraction | Quaternions are important for a wide variety of rotation-related problems in computer graphics, machine vision, and robotics. We study the nontrivial geometry of the relationship between quaternions and rotation matrices by exploiting the adjugate matrix of the characteristic equation of a related eigenvalue problem to obtain the manifold of the space of a quaternion eigenvector. We argue that quaternions parameterized by their corresponding rotation matrices cannot be expressed, for example, in machine learning tasks, as single-valued functions: the quaternion solution must instead be treated as a manifold, with different algebraic solutions for each of several single-valued sectors represented by the adjugate matrix. We conclude with novel constructions exploiting the quaternion adjugate variables to revisit several classic pose estimation applications: 2D point-cloud matching, 2D point-cloud-to-projection matching, 3D point-cloud matching, 3D orthographic point-cloud-to-projection matching, and 3D perspective point-cloud-to-projection matching. We find an exact solution to the 3D orthographic least squares pose extraction problem, and apply it successfully also to the perspective pose extraction problem with results that improve on existing methods. |
\section{Introduction}
\label{intro.sec}
We address the task of understanding whether there are obstacles to using quaternions
to represent orientation space, typical examples being the determination of
the optimal rotation to align two matched 2D or 3D point clouds, or find the pose
of the 2D or 3D point cloud that produced a given projected image; see \Fig{2D3DRMSDPose.fig}. Our approach is to carefully study
how to compute the optimal quaternion corresponding to a given
measurement of a (typically inexact) rotation matrix.
\begin{figure}[h!]
\vspace{-0.0in}
\figurecontent{
\centering
\includegraphics[width=5.5in]{figspdf/Figure1-v1} }
\caption[]{\ifnum0=1 {\bf PointClouds-dir/imgspdf/Figure1-v1.pdf. \\ }\fi
\ifnum0=1 {\it old: PointClouds-dir/imgspdf/new-figures-mar18-fig1.pdf. \\ }\fi \
\footnotesize {\bf The fundamental point-cloud matching problems in 2D and 3D.}
Grey points indicate a reference point-cloud, orange points indicate a rotated point-cloud, and magenta points are projections. In this paper rotations are represented by quaternion rotation parameters: $q=[a,b]$ for 2D rotations and $q = [q_0,q_1,q_2,q_3]$ for 3D rotations.
(a) Left: Alignment between a 2D reference cloud and a 2D test cloud differing by a rotation.
Right: Alignment between a 2D reference cloud and a rotated cloud producing a 1D projected image.
(b) Left: Alignment between a 3D reference cloud and a 3D test cloud differing by a rotation.
Right: Alignment between a 3D reference cloud and a rotated cloud producing a 2D projected image.
}
\label {2D3DRMSDPose.fig}
\end{figure}
The optimal-quaternion extraction problem is universal, and occurs in many frameworks, reflecting the appealing fact that unit quaternions form a smooth manifold that parameterizes rotations free of Euler angle issues such as gimbal lock (see, e.g. \citet{HansonQuatBook:2006,Hanson:ib5072}).
Our investigation is motivated particularly by what has been referred to as the ``quaternion discontinuity problem''
in the context of machine learning in rotation space by, e.g.,
\citet{Saxena-Rot-Opt2009,zhou2019continuity,Peretroukhin2020,zhao2020quaternion,xiang2020revisiting}.
Understanding such potential issues is important, as the determination of orientation and
pose is widespread in machine learning applications, including self-driving vehicles,
drone navigation, and general problems of understanding 3D space, evaluating 3D models,
and the extraction of 3D information from 2D data.
Our first contribution is to show explicitly how the topological properties of a quaternion, understood
as a multi-sector manifold, resolve the questions regarding discontinuities posed in certain
sectors of the machine learning literature. Traditional
computational algorithms (see, e.g., \cite{Shepperd1978,SarabandiARK2018,HansonQuatBook:2006}) for
extracting a corresponding quaternion from a 3D rotation matrix have always included
specific methods to account for possible singularities and discontinuities in the mapping,
but these methods have not been clearly incorporated into some of the recent literature in which such
issues have been encountered. We show how to exploit
a classic linear algebra construction known formally
as the \emph{adjugate matrix};
remarkably, the adjugate embodies an alternative set of quaternion-related variables that has
surprising use cases, greatly clarifies how the traditional quaternion extraction
algorithms avoid singularities, and enables the exact solution of certain
challenging pose-estimation problems.
The adjugate suggests a new appreciation of the variational method
for quaternion extraction introduced by \citet{BarItzhack2000},
and enables unique ways of applying least squares
methods to solve 3D rotation discovery problems. In particular, we can use the
adjugate variables to find a closed-form solution of a least squares optimization
problem for pose estimation.
We are thus able to explain the origin and resolution of the discontinuity problem,
and to further exploit our technology to provide novel insights into pose estimation.
\qquad
{\bf Outline:}
In the following, we illustrate our arguments beginning with a simplified and intuitive 2D rotation
framework that exhibits essentially all the relevant properties. We explore three ways of looking
at the 2D problem in preparation for a parallel treatment of the application-relevant
3D quaternion-rotation case:
we begin by considering 2D rotations as special cases of 3D rotations
to produce a pair of formulas for the $2\times 2$ orthonormal
2D rotation matrix. We gain new insights by solving
these for the 2D quaternion variables directly. We then explore
the 2D version of a variational method due to
\citet{BarItzhack2000}, minimizing a difference measure between the
two relevant matrices. Next, we replace the ideal, error-free 2D rotation matrix by a noisy version for
which each matrix element is still close to a rotation matrix element, but is treated algebraically
as distinct. The variational methods expose in further detail how to
understand the multivalued nature of the 2D
quaternion extraction problem. We repeat a similar analysis to derive the corresponding
3D results, determining how to extract complete
quaternion information without singularities or discontinuities from 3D rotation matrices,
for both ideal and noisy data elements. Finally, we study some applications of the
adjugate variables as parameters replacing the quaternions themselves in rotation
optimization applications, and show in particular how the problem of estimating the pose
in 3D space of a 2D point image relative to its corresponding 3D point cloud can be computed
in closed form using only rational polynomials combined with a Bar-Itzhack optimization.
\qquad
\section{Fundamental Background}
\label{fundamentals.sec}
The arguments we present in this paper rely on a short list of key background concepts. Our description of the problem at hand relies on the relationship between two ways of representing a rotation matrix. Furthermore, error-robust quaternion extraction can be done in two basic ways. These four concepts are described below:\\[0.05in]
\noindent{\bf Representing rotations:}
\begin{itemize}
\item {\bf Quaternions parameterize a rotation in terms of a point on a topological three-sphere.}
Any 4D vector $q$ with unit length\footnote{Henceforth, we will always be
assuming unit-length quaternions.}, $q\cdot q = 1$, is a quaternion
point on the unit three-sphere $\Sphere{3}$, and corresponds
exactly to a \emph{proper} 3D rotation matrix $R=R(q)$ through the equation
\begin{equation}
R(q) = \left[ \!\!
\begin{array}{ccc}
{q_0}^2+{q_1}^2-{q_2}^2 - {q_3}^2 & 2 q_1 q_2 -2 q_0 q_3
& 2 q_1 q_3 +2 q_0 q_2 \\
2 q_1 q_2 + 2 q_0 q_3 & {q_0}^2-{q_1}^2 + {q_2}^2 - {q_3}^2
& 2 q_2 q_3 - 2 q_0 q_1 \\
2 q_1 q_3 - 2 q_0 q_2 & 2 q_2 q_3 + 2 q_0 q_1 & {q_0}^2 - {q_1}^2 - {q_2}^2 + {q_3}^2
\end{array} \!\! \right] .
\label{Rofqq.eq}
\end{equation}
This fundamental equation is quadratic in $q$, so $R(q) = R(-q)$, and thus
every possible rotation is represented \emph{twice} in the manifold $\Sphere{3}$.
Alternatively, one can say that every possible rotation appears \emph{once} in a
hyperhemisphere of $\Sphere{3}$, the solid three-ball $\mathbf{B}^{3}$
that can be drawn in ordinary 3D space. In mathematical terms, \Eqn{Rofqq.eq}
can also be described in terms of the group $\SO{3}$, whose
topological manifold is $\RP{3}$, the real projective 3-space, but
we will not need to consider that feature in our treatment.
\item {\bf Quaternions encompass the axis-angle rotation parameterization.}
The axis-angle representation $R=R(\theta,\Hat{n})$ parameterizes any 3D rotation
in terms of the unit eigenvector $\Hat{n}$ of $R$, the direction fixed by the rotation, and
the angle $\theta$ of that rotation. With $c =\cos \theta$ and $s =\sin \theta$,
any rotation matrix can be written explicitly using axis-angle parameters as
\begin{equation}
R(\theta,\Hat{n}) = \left[
\begin{array}{ccc}
c +(1- c )\, {\hat{n}_1}^{\ 2} & (1- c )\, \hat{n}_1 \hat{n}_2 - s \, \hat{n}_3 &
(1- c ) \, \hat{n}_1 \hat{n}_3+ s \, \hat{n}_2 \\
(1- c )\, \hat{n}_1 \hat{n}_2+ s \, \hat{n}_3 & c + (1- c )\, {\hat{n}_2}^{\ 2} &
(1- c ) \, \hat{n}_2 \hat{n}_3 - s \, \hat{n}_1 \\
(1- c ) \, \hat{n}_1 \hat{n}_3 - s \,\hat{n}_2 & (1- c )\, \hat{n}_2 \hat{n}_3 + s \, \hat{n}_1&
c + (1- c ) \, {\hat{n}_3}^{\ 2} \\
\end{array} \right] \ .
\label{R.axisangle.eq}
\end{equation}
Choosing the quaternion parameterization
\begin{equation}
q(\theta,\Hat{n}) = ( \cos (\theta/2),\, \sin(\theta/2) \, \Hat{n} )
\label{qthetanhat}
\end{equation}
and substituting it into \Eqn{Rofqq.eq} gives exactly \Eqn{R.axisangle.eq}, double
covered with $0 \leq \theta < 4 \pi$.
\end{itemize}
\qquad
\pagebreak
\noindent{\bf Error-robust quaternion extraction:}
\begin{itemize}
\item {\bf Classical error-robust quaternion extraction is hard.}
The rotation matrices described in \Eqn{Rofqq.eq} and \Eqn{R.axisangle.eq} are exact.
However, extracting the optimal quaternion representation of an inexact measured rotation
matrix $R(m)$, with measured numerical matrix elements $m_{ij}$, is not an exact procedure.
The task can be defined as the problem of recovering the best axis-angle parameters
describing $R(m)$. This is well-known to be a subtle multi-step process
\citep[see, e.g.,][]{Shepperd1978,HansonQuatBook:2006,SarabandiARK2018}.
In order to account for all possible anomalies, however rare, the classical
procedure must check for zeros, conducting several separate checks for small numbers (see
Appendix \ref{RotToQuatCode.app} for detailed pseudocode and examples).
This algorithm reliably generates the axis-angle parameters $\cos \theta$, $\sin \theta$,
and $\Hat{n}$ needed to define a provably optimal $q(\theta, \Hat{n})$
from the numerical data in $R(m)$.
Note that this procedure assumes $R(m)$ is perfectly described
by \Eqn{R.axisangle.eq}, and thus does not gracefully handle finding the
quaternion that is the best
approximation for an imperfectly measured $R(m)$; this can be achieved using
the variational method of
\citet{BarItzhack2000}, which will play a significant role in our narrative.
\item {\bf The adjugate matrix approach to eigenvectors is important.}
A standard method for finding the optimal quaternion corresponding to a rotation aligning two 3D point clouds
uses the quaternion eigenvector corresponding to the maximal eigenvalue of the
$4 \times 4$ \emph{profile matrix} $M$ \citep[see, e.g.,][]{Horn1987,Hanson:ib5072}.
In our treatment we will take advantage of a construction called the `adjugate'.
(Further details may be found in Appendix \ref{adjugatematrix.app}.)
The adjugate of any square matrix $S$ is built from the matrix's transposed cofactors,
which facilitate the construction of the inverse through the following identity:
\begin{equation}
S \cdot \mbox{Adjugate}(S) = \mathop{\rm det}\nolimits S\; I_{4} \ .
\label{adjdef.eq}
\end{equation}
To see how this is exploited, we consider the characteristic matrix of $M$
\begin{equation}
\chi = \left[ M - \lambda\, I_{4} \right] \ ,
\label{char.eq}
\end{equation}
whose characteristic equation $\mathop{\rm det}\nolimits \chi=0$, quartic in $\lambda$,
determines the eigenvalues of $M$.
We then insert the maximal eigenvalue $\lambda_{\mbox{\small opt}}$
into the matrix, setting $\chi = \chi(\lambda_{\mbox{\small opt}} )$, and multiply that
characteristic matrix by its adjugate as follows:
\begin{equation}
\chi \cdot \mbox{Adjugate}(\chi) = \mathop{\rm det}\nolimits \chi\; I_{4} =0 \ .
\label{chiadj.eq}
\end{equation}
This allows us, via \Eqn{char.eq}, to write the solved eigensystem of $M$ as
\begin{equation}
M \cdot \mbox{Adjugate}(\chi) - \lambda_{\mbox{\small opt}} \mbox{Adjugate}(\chi) =0 \ .
\label{adjfinal.eq}
\end{equation}
We see that each of the adjugate's four column vectors is an
eigenvector of the single eigenvalue $\lambda_{\mbox{\small opt}}$; thus the adjugate provides
\emph{four} parallel solutions to the same eigensystem, and hence embodies
four apparently equivalent unnormalized optimal quaternions $q_{\mbox{\small opt}}$.
\end{itemize}
Starting from this list of observations,
we will use the relationships between quaternions and rotation matrices
to show there are no singularity-free single functions relating a quaternion to a measured rotation
matrix, but that an adjugate matrix, listing four alternatives (technically eight, taking into account the
quaternion sign ambiguity) describing the entire quaternion manifold $\Sphere{3}$, always contains
at least one normalizable column that produces a valid quaternion.
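As a concrete (informal) illustration of this mechanism, the short numerical sketch below is ours and not drawn from any cited code base; it assumes only \texttt{numpy}. It takes a symmetric $4\times4$ matrix $M$ standing in for a profile matrix, forms the characteristic matrix at the maximal eigenvalue, builds the adjugate from transposed cofactors, and normalizes its largest column, which is then an eigenvector for that eigenvalue.
\begin{verbatim}
import numpy as np

def adjugate(A):
    # Transposed cofactor matrix, so that A @ adjugate(A) = det(A) * I.
    n = A.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
M = S + S.T                               # stand-in for a symmetric profile matrix
lam = np.linalg.eigvalsh(M)[-1]           # maximal eigenvalue
A = adjugate(M - lam * np.eye(4))         # every column is an (unnormalized) eigenvector
q = A[:, np.argmax(np.linalg.norm(A, axis=0))]
q = q / np.linalg.norm(q)                 # pick the best-conditioned column and normalize
print(np.allclose(M @ q, lam * q))        # True
\end{verbatim}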
\qquad
\section{Two-Dimensional Rotations and the Quaternion Map }
\label{2DRotations.sec}
While quaternions are the most robust possible representation of rotations,
computing the relationship between a rotation matrix and a quaternion
exposes unexpected singularities.
The singularities we wish to investigate occur already for 2D rotations and
their quaternion counterparts, so we begin our journey in the simpler 2D space.
The context in the back of our minds is the exploration of data sets of (reference, sample) pairs of
matched ND point clouds $(r,s)$ related by a single rotation matrix $R$,
\[ s = R \cdot r + \mbox{\it $<\!$ noise $\!>$} \ , \]
or
\[ s = R \cdot (r + \mbox{\it $<\!$ noise $\!>$} )\ , \]
depending on how one views the application context.
We will resolve apparent discrepancies in the use of quaternions
to solve such problems first in 2D by introducing the adjugate variables.
\qquad
\subsection{Direct Solution of the Two-Dimensional Problem}
\label{2DdirectSoln.sec}
We begin with the simplified context of 2D rotations, obtained by
setting $q_{1} = q_{2} =0$ and $\hat{n}_{1} = \hat{n}_{2} =0$ in
Eqs.~(\ref{Rofqq.eq}) and (\ref{R.axisangle.eq}) to restrict rotations
to the $(x,y)$ plane, that is, fixing the $\Hat{z}$ axis. For convenience,
we define $c= \cos \theta$, $s = \sin \theta$, $a = q_{0} = \cos(\theta/2)$, and $b =q_{3} = \sin(\theta/2)$, so $c^2 + s^2 =a ^2+b^2 =1$.
We observe that, taking the range $0^{\circ} \le \theta < 720^{\circ}$, $a$ and $b$
cover the $(a,b)$ circle only once, while the $(c,s)$ pair covers its circle twice
over this range.
With these parameterizations, we can now construct ideal algebraic forms
of either a quaternion-parameterized 2D rotation matrix or
a standard 2D rotation matrix as follows:
\begin{align} \label{2DabRotN1.eq}
R(a,b) = & \left[ \begin{array}{cc} a^2 - b^2 & - 2 a b \\
2 a b & a^2 - b^2 \end{array} \right] \\[0.1in]
R(c,s) =& \left[ \begin{array}{cc} c & - s \\ s & c \end{array} \right]
\label{2DabRotN2.eq} \ .
\end{align}
We easily verify that $ \mathop{\rm det}\nolimits R(a,b) = \mathop{\rm det}\nolimits R(c,s) = 1$, and also that $R\cdot R^{\top} = I_{2}$, where
$I_{2}$ is the 2D identity matrix. The most important property of $R(c,s)$ is that its matrix elements
correspond to a \emph{numerically measurable} rotation matrix, as do noisy versions of $R(c,s)$,
which we will distinguish by the notation $R(m)$ for separate treatment,
while we will see that $R(a,b)$ has some intriguing ambiguities.
Now suppose we erase the formulas for $R$ in terms of $\theta$, and think only
of the algebraic expressions in Eqs.~(\ref{2DabRotN1.eq},\,\ref{2DabRotN2.eq}),
assuming that we have some
sound way of measuring this 2D rotation to determine the numerical values
of $(c,s)$. Then we can find expressions for the now-abstract
variables $(a,b)$ in several ways. We begin by noting that both
constraints $R(a,b) = R(c,s)$ and $R(a,b)\cdot R(c,s)^{\top}=I_{2}$
produce the same equations, \begin{equation}
\ \begin{array}{rcl}a^2 - b^2 &=& c\\ 2 a b & =& s \ , \end{array}
\label{R2solveabcs.eq}
\end{equation}
which are simply the trigonometric half-angle formulas.
We now take an important step: assuming the constraint $a^2 + b^2 = 1$,
we can eliminate either $a^2$ or $b^2$, and complete our solution in terms of
the measured rotation transformation parameters $(c,s)$ in two very distinct ways:%
\begin{align}
\left. \begin{array}{rcl}
\mbox{\rm (1) Eliminate\ $b^2$:}\\[0.05in]
a^2 - b^2 = 2 a^2 -1 &=& c\\ 2 a b & =& s \end{array} \right\}
\ \ \mbox{\rm Solve for $(a^2,\, a b)$ } \ \ \rightarrow \ \
a^2 = \frac{ 1+c }{2}, \ \ a b = \frac{s}{2} \ ,
\label{R2solveasq.eq} \\
\ \ \mbox{\rm\emph{Normalize} or solve for $(a,b)$ } \ \ \rightarrow \ \
a = \pm\frac{\sqrt{1+c}}{\sqrt{2}}, \ \ b = \pm \frac{s}{\sqrt{2} \sqrt{1+c}} \ ,
\label{R2solvea.eq}
\end{align}
\begin{align}
\left. \begin{array}{rcl}
\mbox{\rm (2) Eliminate\ $a^2$:}\\[0.05in]
a^2 - b^2 = 1-2 b^2 &= & c \\ 2 a b &= & s \end{array} \right\}
\ \ \mbox{\rm Solve for $(a b, \,b^2)$ } \ \ \rightarrow \ \
a b = \frac{s}{2} , \ \ b^2 = \frac{ 1-c}{2} \ ,
\label{R2solvebsq.eq} \\
\ \ \mbox{\rm \emph{Normalize} or solve for $(a,b)$ } \ \ \rightarrow \ \
a = \pm \frac{s}{\sqrt{2} \sqrt{1-c}}, \ \ b = \pm \frac{\sqrt{1-c}}{\sqrt{2}}
\label{R2solveb.eq} \ .
\end{align}
The second set of solutions, Eqs.~(\ref{R2solvea.eq}) and (\ref{R2solveb.eq}), is seen to
be the same as the result of normalizing Eqs.~(\ref{R2solveasq.eq}) and (\ref{R2solvebsq.eq}),
and these normalized forms are algebraically identical
if we multiply them by the ratios $\sqrt{1-c}/\sqrt{1-c}$ \, or \, $\sqrt{1+c}/\sqrt{1+c}$,
respectively. \emph{This clearly gives situations that require multiplying by $0/0$:}
the first normalized solution is impossible for $a\sim 0$, or $c \sim -1$, a perfectly
legal rotation, and the second solution is impossible for $b \sim 0$, or
$c \sim +1$, also perfectly legal! In addition, \emph{both signs} in
Eqs.~(\ref{R2solvea.eq}) and (\ref{R2solveb.eq}) are valid,
as we have the same rotation $R(a,b)$ if $(a,b) \rightarrow (-a,-b)$.
The problem, actually an important \emph{feature},
is that one normalized solution, \Eqn{R2solvea.eq}, fails in one experimentally
measurable domain, and the other, \Eqn{R2solveb.eq}, fails in a \emph{different} experimentally
measurable domain. \emph{Both must be considered together}, along with their
opposite signs, in order to completely cover the full multivalued
$720^{\circ}$ range of $\theta$ parameterizing $(a,b)$.
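As an informal numerical companion to these formulas (our own illustration, not part of the derivation above), one can compute both candidate $(a,b)$ solutions from a measured $(c,s)$ and observe that each branch degenerates exactly where predicted:
\begin{verbatim}
import numpy as np

def ab_candidates(c, s, eps=1e-12):
    # Branch 1 normalizes (1 + c, s); it fails as c -> -1 (a -> 0).
    # Branch 2 normalizes (s, 1 - c); it fails as c -> +1 (b -> 0).
    out = []
    for col in (np.array([1.0 + c, s]), np.array([s, 1.0 - c])):
        norm = np.linalg.norm(col)
        out.append(col / norm if norm > eps else None)
    return out

theta = np.deg2rad(179.9)                 # close to the branch-1 singularity
c, s = np.cos(theta), np.sin(theta)
b1, b2 = ab_candidates(c, s)
print(b1, b2)                             # branch 1 relies on the nearly vanishing 1 + c
print(np.allclose(b2, [np.cos(theta / 2), np.sin(theta / 2)]))   # True
\end{verbatim}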
Those familiar with the long-standing quaternion extraction method of \citet{Shepperd1978}
may recognize some basic features appearing in a novel context here, and in the
full quaternion treatment later on: there is in effect a condition on \emph{which} rotation matrix
elements can be trusted to produce a regular quaternion.
The left side of \Fig{2Dquaternionsingularities.fig} shows how Eqs.~(\ref{R2solveasq.eq}) and (\ref{R2solvebsq.eq})
describe unit circles passing through the origin, with \emph{distinct centers} at $(1,0)$, $(0,1)$,
while their normalizations, Eqs.~(\ref{R2solvea.eq}) and (\ref{R2solveb.eq}) for $a$ and $b$,
are unit \emph{half circles} centered at $(0,0)$, covering the positive $x$ axis and the positive $y$ axis,
respectively. The two domains of the $(a,b)$ solutions
overlapping in the first quadrant work \emph{together} to cover each other's singular normalization
locations. This shows us unequivocally how the variables $(a,b)$ and their multiple
solutions describe a \emph{manifold}, a topological space that cannot be described
by a single function, but requires overlapping descriptions. Incorporating both signs
of the circles of Eqs.~(\ref{R2solveasq.eq}) and (\ref{R2solvebsq.eq}), passing through the origin
but with \emph{distinct centers} at $(1,0)$, $(0,1)$, $(-1,0)$, and $(0,-1)$, gives
the completed picture illustrated in the right side of \Fig{2Dquaternionsingularities.fig}. The non-singular almost-half-circles
(one sees a ``half-pie'' shape) resulting from normalization cover the
entire range of four progressively overlapping domains that \emph{together}
describe the possible values of $(a,b)$ over the whole range $0^{\circ} \le \theta < 720^{\circ}$
with four local non-singular options.
Observe that, given the variables $(a,b)$, the four
singularities in their solutions in terms of $(c,s)$ occur when one variable or
the other vanishes, $(a,b) \to (0,\pm 1)$ and $(a,b) \to (\pm1, 0)$. We shall see
in the full quaternion case that similar singularities occur in 14 submanifolds,
the loci where any legal combination of quaternion elements vanishes, thus
requiring similar multiple overlapping representations.
\begin{figure}[h!]
\vspace{-0.2in}
\figurecontent{
\centering
\includegraphics[width=6.1in]{figspdf/Figure2-v1}
}
\caption[]{\ifnum0=1 {\bf Figure2-v1.pdf } \fi
\ifnum0=1 {\it PointClouds-dir/imgspdf/new-figures-mar18-fig2.pdf. }\fi
\ifnum0=1 {\it originals: adj2UR-labeled.eps. }\fi
\ifnum0=1 {\it originals: Adj2Dab-picAL.eps, Adj2Dab-picBx.eps, Adj2Dab-picF.eps. }\fi
\footnotesize {\bf Quaternion maps have discontinuities in 2D rotation space. } (a) The overlapping $(a,b)$ regions in the positive quadrants.
The normalized $x$ axis region is in green, derived from the unnormalized region in blue
centered at $(1,0)$; this sector is regular at $0^{\circ}$, and singular at $\pm 180^{\circ}$
(remember that for $(a,b)$, the range of $\theta$ is $720^{\circ}$).
The normalized $y$ axis region is in magenta, derived from the unnormalized
region in red centered at $(0,1)$; it
is regular at $180^{\circ}$ but singular at $0^{\circ}$ and $360^{\circ}$. They overlap
in the neighborhood of $90^{\circ}$, so, together, they are regular over an entire range in the parameters
of $(a,b)$ that covers $360^{\circ}$.
(b) Each of the four unnormalized maps that cover the full quaternion space has a singularity in
the normalization.
The blue circles are the paths of $\pm(1+ c,\, s)$, mapping to the green half-circles in $(a,b)$, failing at $c=-1,\,a=0$.
The red circles are the paths of $\pm(s,\,1- c )$, mapping to the magenta half-circles in $(a,b)$,
which fail at $c=+1,\, b=0$. The curves along the positive axes, extending from $-45^{\circ}$ to $135^{\circ}$ actually cover the whole rotation space; all four together span
the entire quaternion space. (c) The discontinuities appear as singular
limits when the outer circles close in on the origin, and the divide-by-zeros at those limits
break the normalized circles half-way through, indicated by the four ``half-pie'' shapes,
two in each color. }
\label {2Dquaternionsingularities.fig}
\end{figure}
\qquad
\subsection{Variational Approach: the Bar-Itzhack Method in 2D}
We now show that the results of Section \ref{2DdirectSoln.sec} can
be seen in an alternative light by using a variational
approach \citep{BarItzhack2000} rather than an algebraic one.
Seeing the quaternion appear as the solution to an
eigensystem gives us a novel way, exploiting the adjugate formula introduced
in Section \ref{fundamentals.sec}, to see how singularities can be systematically
resolved in this problem.
We begin by replacing the direct algebraic solution of the equations $R(c,s) = R(a,b)$
with what is essentially a least-squares formulation,
minimizing the difference between the two matrices expressed using the Fr\"{o}benius norm,
\begin{equation} \label{BIFrobNorm2.eq}
\begin{split}
\mbox{ L}_{\mbox{\footnotesize \bf Fr\"{o}benius}} =&
\mathop{\rm tr}\nolimits\left( \left(R(a,b) - R(c,s)\right) \cdot \left(R(a,b) - R(c,s)\right)^{\top}\right) \\
= & \mathop{\rm tr}\nolimits\left( I_{2} + I_{2} - 2 R(a,b)\cdot R(c,s)^{\top} \right) \ .
\end{split}
\end{equation}
At this point we can discard the constants and rephrase the problem of minimizing
the least-squares version of the Fr\"{o}benius norm by a maximization of
the cross-term, which we choose to write as
\begin{equation} \label{BIFrobDelta2.eq}
\Delta_\mathbf{F} = \mathop{\rm tr}\nolimits \frac{1}{2}R(a,b) \cdot R(c,s)^{\top} \ .
\end{equation}
The task is now to maximize $\Delta_\mathbf{F}$ by finding $(a,b)_{\mbox{\small opt}}$ such that
\begin{alignat}{3} \label{argmaxFrob2D.eq}
(a,b) _{\mbox{\small opt}} & = \ \raisebox{-.9em}{$\stackrel{\textstyle\mathop{\rm argmax}\nolimits}{\textstyle(a,b)} $}
\left(\mathop{\rm tr}\nolimits \frac{1}{2}R(a,b) \cdot R(c,s)^{\top} \right)\\[0.0in] \label{qMcsMabq2.eq}
& \hspace{-.45in} \mbox{\it -- now regroup quadratic terms in $(a,b)$ into left and right vectors:}
\nonumber \\[0.0in]
& = \; \raisebox{-.9em}{$\stackrel{\textstyle\mathop{\rm argmax}\nolimits}{\textstyle(a,b)} $}
\left( [a,b] \cdot K(c,s) \cdot [a,b]^{\top} \right)
\ \equiv \ \raisebox{-.9em}{$\stackrel{\textstyle\mathop{\rm argmax}\nolimits}{\textstyle(a,b)} $}
\left( [a,b] \cdot \left[ \begin{array}{cc} c & s \\ s & -c \end{array} \right] \cdot [a,b]^{\top} \right) .
\end{alignat}
We refer to the matrix $K(c,s)$, which has eigenvalues $ \lambda = \pm1$,
as the \emph{profile matrix} of the Bar-Itzhack optimization problem.
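(Indeed, $\mathop{\rm det}\nolimits\left( K(c,s) - \lambda I_{2}\right) = \lambda^2 - c^2 - s^2 = \lambda^2 - 1$ for ideal data, confirming that the eigenvalues are $\pm 1$.)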
From classical linear algebra \citep{Golub-vanLoan-MatrixComp}, we know that
the task of maximizing $\Delta_\mathbf{F}$ is equivalent to identifying
the maximal eigenvalue of the symmetric real matrix $K(c,s)$ and its eigenvector. The
necessary eigenvalue is just $\lambda = +1$, and the corresponding eigenvector
is just $(a,b)_{\mbox{\small opt}}= \left(\; \cos(\theta/2),\sin(\theta/2)\;\right)$;
this choice for $(a,b)$ minimizes the Fr\"{o}benius norm
\Eqn{BIFrobNorm2.eq}, and in fact sets it to zero in this simplified example. But this misses
the crucial information noted in Eqs.~(\ref{R2solvea.eq}) and (\ref{R2solveb.eq}). A more formal
and generalizable way to find the eigenvector, which can have any non-zero scale without changing the eigenvalue, is to form the characteristic equation's matrix, as noted in Section \ref{fundamentals.sec},
by subtracting the eigenvalue $\lambda = +1$ from the diagonal,
\begin{equation}
\chi(c,s) \, = \, K(c,s) - (+1) I_{2} = \left[ \begin{array}{cc} c - 1& s \\ s & -c -1 \end{array} \right] \ ,
\end{equation}
and extract the eigenvector from $\chi(c,s)$.
We now encounter the main point of this approach: the multiple forms of the solutions
for $(a,b)$ that we found by direct calculation earlier now appear \emph{automatically} in the variational
version. The crucial fact is that, given that the determinant of $\chi$ vanishes, $\mathop{\rm det}\nolimits \chi \equiv 0$,
\emph{both adjugate columns} of the matrix $\chi$ are \emph{unnormalized} eigenvectors of the
given maximal eigenvalue.
For our case, we see that the
two copies of the (unnormalized) maximal eigenvector are the two columns
of the adjugate of $\chi$:
\begin{equation} \label{2Dadjugate.eq}
\mbox{Adjugate} (\chi) = \left\{ \begin{array}{ccc}
\left[ \begin{array}{c}
-1- c\\ - s \end{array} \right] & , &
\left[ \begin{array}{c} - s \\ -1+c \end{array}\right] \end{array} \right\} \ .
\end{equation}
Since the eigenvectors are insensitive to overall
sign and scale, we are free to multiply by $(-1/2)$ to get a more
convenient form of the adjugate eigenvectors, which is
\begin{align} \label{2DadjugateN.eq}
\mbox{AdjEigVectors} (\chi) =& \left\{ \begin{array}{ccc}
\frac{1}{2} \left[ \begin{array}{c}
1+ c\\ s \end{array} \right] & , &
\frac{1}{2} \left[ \begin{array}{c} s \\ 1-c \end{array}\right] \end{array} \right\}
\end{align}
We finally expose the \emph{Adjugate Matrix} for our eigenvector representation
by using the angle $\theta$ to rewrite \Eqn{2DadjugateN.eq} in terms
of our 2D quaternion $(a,b)$,
\begin{align} \label{2DadjugateNab.eq}
\mbox{Adjugate Matrix} =&
\left[ \begin{array}{cc} a^2 & a b \\
a b & b^2 \end{array}\right] \ .
\end{align}
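(This is just the half-angle relations again: $1 + c = 2a^2$, $1 - c = 2b^2$, and $s = 2ab$, so the two columns of \Eqn{2DadjugateN.eq} are $a\,(a,b)$ and $b\,(a,b)$, respectively, which is the content of \Eqn{2DadjugateNab.eq}.)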
The problem here is by now familiar: neither eigenvector is a complete
solution, as when we normalize, we find
\begin{equation} \label{2DNormAdjugate.eq}
\mbox{Normalized AdjEigVectors} (\chi) = \left\{ \begin{array}{ccc}
\left[ \begin{array}{c}
\frac{\textstyle\sqrt{1+c}}{\textstyle\sqrt{2}}\\[0.15in]
\frac{\textstyle s}{\mktall{\rule{0in}{1em} } \textstyle \sqrt{2} \sqrt{1+c}} \end{array} \right] & , &
\left[ \begin{array}{c} \frac{\textstyle s}{\mktall{\rule{0in}{1em} }\textstyle \sqrt{2}\sqrt{1-c}} \\[0.2in]
\frac{\textstyle\sqrt{1-c}}{\textstyle\sqrt{2}} \end{array}\right]
\end{array} \right\} \ .
\end{equation}
The first column is singular at $c=-1 $, the second column at $c=+1$, both
completely legal points, but neither normalized adjugate column is a valid
quaternion-like 2-vector for the \emph{entire} range of the data $(c,s)$.
\qquad
From \Eqn{2DadjugateNab.eq}, we can see clearly that
both columns normalize to the eigenvector $(a,b)$ since $a^2 + b^2 = 1$.
But that eigenvector is multiplied by $a$ in the first case, so no normalization
is possible as $a\to 0$, and in the second column no normalization is
possible as $b \to 0$. \emph{Both pre-normalization columns} of the adjugate matrix must
be included to cover the entire space of rotations; technically, due to the $(a,b) \to (-a,-b)$ equivalence,
the full topological space of $(a,b)$ of course actually has four natural components.
(See Figure \ref{2Dquaternionsingularities.fig}.)
To reiterate, the reason for
this is that only $a^2$, $b^2$, and $ab$, the quadratic forms, can actually
be expressed in terms of the measurable rotation matrix data $(c,s)$.
We must have access to the \emph{entire} adjugate matrix because the entire
circular quaternion path of $(a,b)$ describes a multivalued \emph{manifold},
not a single-valued \emph{function}.
With the conventional configuration of neural nets that only approximate functions,
one can never determine a global solution for $(a,b)$ directly, but must
target the multiple-valued \emph{adjugate matrix} of locally
normalizable solutions, with the choice of adjugate column occurring
as a final data-driven post-processing step. It is worthwhile noting
that this is essentially the same type of process involved in the classic
multiple-choice Shepperd algorithm \citep{Shepperd1978} for extracting a quaternion
from a 3D rotation matrix, but with the clearer
purely linear algebraic adjugate-matrix context that arises naturally in the
Bar-Itzhack variational approach.
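As a purely illustrative aside (not part of the original derivation), the two-column recipe above can be rendered in a few lines of NumPy; the function name below is ours, and the final loop simply checks the round trip $(c,s)\to(a,b)\to R(a,b)$ over the full circle, including the points $c=\pm 1$ where one of the two adjugate columns degenerates.
\begin{verbatim}
import numpy as np

def ab_from_cs(c, s):
    """Recover (a, b) = (cos(theta/2), sin(theta/2)) from ideal rotation
    data (c, s) = (cos(theta), sin(theta)) using BOTH columns of the
    adjugate matrix [[(1+c)/2, s/2], [s/2, (1-c)/2]]; each column is an
    unnormalized maximal eigenvector, singular at c = -1 or c = +1."""
    col0 = np.array([(1.0 + c) / 2.0, s / 2.0])   # fails as c -> -1
    col1 = np.array([s / 2.0, (1.0 - c) / 2.0])   # fails as c -> +1
    # pick the column with the larger norm: that one is always normalizable
    col = col0 if np.dot(col0, col0) >= np.dot(col1, col1) else col1
    return col / np.linalg.norm(col)

# round-trip check over the whole circle, including both singular points
for theta in np.linspace(-np.pi, np.pi, 9):
    a, b = ab_from_cs(np.cos(theta), np.sin(theta))
    R = np.array([[a*a - b*b, -2*a*b], [2*a*b, a*a - b*b]])
    assert np.allclose(R, [[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]], atol=1e-12)
\end{verbatim}
Either column, when it is normalizable, returns $\pm(a,b)$, and the round-trip test confirms that $R(a,b)$ reproduces the input rotation everywhere on the circle.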
\qquad
\subsection{Bar-Itzhack Errorful Measurement Strategy in 2D}
We have thus far assumed that measurements of a rotation matrix resulted in a perfect
orthonormal matrix. That strategy allowed us to clearly expose the requirement to
use a multivalued formula to find the 2D quaternion analog $(a,b)$ from the parameters $(c,s)$
of an ideal orthonormal measured rotation matrix. The same basic approach is valid also for
inaccurate measurements that report rotation matrix elements that are not orthonormal.
The basic ideas appear in the original work of \citet{Shepperd1978} and of \citet{BarItzhack2000},
while further details of the 2D case appear in \citet{Haralick-pose-1989} and in
the Supplementary
Material of \citet{Hanson:ib5072}.
To see how to work with inaccurate rotation matrix measurements, we introduce the ``measured
matrix data'' $R(m)$,
\begin{equation}
R(m) = \left[
\begin{array}{cc}
m_{11} & m_{12} \\
m_{21} & m_{22} \\
\end{array} \label{rotmat2Mij.eq}
\right] \ .
\end{equation}
Comparing this to our ideal quadratic quaternion-like target for the same matrix, \Eqn{2DabRotN2.eq},
we immediately encounter the problem that we have two variables in $R(a,b)$ and four variables
in $R(m)$. We cannot solve directly for the 2D $(a,b)$ quadratic forms, and, unsurprisingly,
this problem still arises in 3D, as we will see below.
\qquad
We now argue that for noisy data, we gain clarity by applying the
Bar-Itzhack optimization approach, and that its validity is
much easier to understand and justify. We begin with \Eqn{rotmat2Mij.eq} and
insert it into the Frobenius norm for the distance between $R(m)$ and
\Eqn{2DabRotN2.eq} for $R(a,b)$, yielding
\begin{align}
S_{\mbox{\bf \footnotesize F} } =& \mathop{\rm tr}\nolimits\left[\left( R(a,b) - R(m) \right)\cdot \left( R(a,b) - R(m)\right)^{\top}\right] \nonumber\\
= & \ (a^2 - b^2 - m_{11})^2 + (2 a b +m_{12})^2 + (2 a b - m_{21})^2 + (a^2 - b^2 - m_{22})^2
\nonumber \\
= & \ 2 + \sum_{i,j} \left( m_{ij}\right)^2 - 2(a^2 - b^2)\left(m_{11} + m_{22} \right)
+ 4 a b \left( m_{12} - m_{21} \right) \ .
\label{FrobOfBI2D.eq}
\end{align}
We strip the constants and change the sign to turn the problem of minimizing
$S_{\mbox{\bf \footnotesize F} }$
to the equivalent problem of maximizing the cross term, which, as before,
we can now write immediately as this matrix product defining the profile
matrix $K(m)$:
\begin{align}
\Delta_{\mbox{\bf \footnotesize F} } =& \ (a^2 - b^2)\left(m_{11} + m_{22} \right)
+ 2 a b \left( m_{21} - m_{12} \right) \nonumber \\
= &\ [a \ b] \cdot \left[\begin{array}{cc} \left(m_{11} + m_{22} \right)
& \left( m_{21} - m_{12} \right) \\
\left(m_{21} - m_{12} \right) & - \left(m_{11} + m_{22} \right) \end{array} \right]
\cdot [a \ b ]^{\top} \nonumber \\
\equiv &\ [a \ b] \cdot K(m_{ij}) \cdot [a \ b ]^{\top} \ .
\end{align}
The solution to our optimization problem
\begin{align}
[a\ b] _{\mbox{\small opt}} & =\raisebox{-.9em}{$\stackrel{\textstyle{\mathop{\rm argmax}\nolimits}}{\textstyle(a,b)}$} \;
[a \ b] \cdot K(m) \cdot [a \ b ]^{\top}
\label{argmaxK2D.eq}
\end{align}
is then reduced to finding the eigenvector $[a,b]_{\mbox{\small opt}}$ corresponding to the
maximal eigenvalue of $ K(m) $. Since $\mathop{\rm tr}\nolimits\left[ K(m)\right]=0 $, the
eigenvalues are an opposite-sign pair, with the maximal eigenvalue being the positive choice
\begin{equation}
\lambda_{\mbox{\rm max}} = \sqrt{\left(m_{11} + m_{22} \right)^2 + \left(m_{12} - m_{21} \right)^2 }
\label{maxEigK2D.eq}
\end{equation}
Note that the appearance of only the antisymmetric part of the off-diagonal terms
in $R(m)$ is an inevitable consequence of the optimization of the Frobenius norm
\Eqn{FrobOfBI2D.eq}.
The final step is to cover the entire manifold of solutions for the multivalued
$(a,b)$ using the adjugate matrix of the maximal eigensystem,
\begin{align}
\MoveEqLeft[35]{ \mbox{\rm Adj}(\,\left[ K(m) - \lambda_{\mbox{\rm max}} I_{2} \right] \,) } \nonumber\\
= \left\{ \left[ \begin{array}{c} - \left(m_{11} + m_{22} \right) - \lambda_{\mbox{\rm max}} \\
m_{12} - m_{21} \end{array} \right] \ , \
\left[ \begin{array}{c} m_{12} - m_{21}\\
\left(m_{11} + m_{22} \right) - \lambda_{\mbox{\rm max}}
\end{array} \right] \right\}
\label{Adjmat2Da.eq}
\end{align}
We see immediately the automatic appearance of a pair of solutions for $[a,b]_{\mbox{\small opt}}$ that have
singularities in different places when normalized, thus covering, with their negative counterparts,
the entire manifold of $(a,b)$. Since the eigenvectors represented by the columns of the adjugate
matrix are insensitive to rescaling, we can transform \Eqn{Adjmat2Da.eq} to a more readable form
by changing the sign and denoting the trace and antisymmetric off-diagonal terms as follows:
\begin{alignat}{6} \label{dtLambda.eq}
d =& \frac{1}{2}\left( m_{11} + m_{22} \right) & \hspace{.5in} & t =& \frac{1}{2}\left( m_{21} - m_{12} \right)
& \hspace{.5in} &\lambda =& \sqrt{d^2+t^2} \ .
\end{alignat}
We note that if our matrix were to be a pure rotation, we would have $d= \cos \theta$
and $t = \sin\theta$, so $\lambda = 1$.
We find the unnormalized eigenvector pairs
\begin{alignat}{4}
\mbox{\rm Adjugate\ eigenvectors} &= & \left\{ \pm \,\left[ \begin{array}{c}
\sqrt{t^2 + d^2 } + d\\ t
\end{array} \right] \right.
&, \ \left.
\pm\, \left[ \begin{array}{c} t \\[0.05in]
\sqrt{t^2 + d^2 }- d
\end{array} \right] \right\} \\[.1in]
&=& \left\{ \pm \,\left[ \begin{array}{c}
\lambda + d\\ t
\end{array} \right] \right.
&, \ \left.
\pm\, \left[ \begin{array}{c} t \\
\lambda- d
\end{array} \right] \right\}
\ .
\label{Adjmat2Db.eq}
\end{alignat}
Equation (\ref{Adjmat2Db.eq}) is the exact analog for errorful rotations of the adjugate matrix
\Eqn{2DadjugateNab.eq}.
Normalizing \Eqn{Adjmat2Db.eq} to obtain quaternions, we see as before that
the two versions are equivalent except at singular points, but that one is
always a computable normalized eigenvector:
\begin{alignat}{4}
\mbox{\rm normalized Adj eigenvectors} &= &
\left\{\pm \, \left[
\begin{array}{c}
\frac{\mktall{\rule{0in}{1em} }\sqrt{\textstyle \lambda+d}}{\mktall{\rule{0in}{1.2em} }\textstyle \sqrt{2} \sqrt{\lambda}} \\[.25in]
\frac{\textstyle t}{\mktall{\rule{0in}{1.2em} }\textstyle \sqrt{2} \sqrt{\lambda ( \lambda+d)}}
\end{array} \right] \right.
&, \ & \left.
\pm\, \left[ \begin{array}{c}
\frac{\textstyle t}{\mktall{\rule{0in}{1.2em} }\textstyle \sqrt{2} \sqrt{\lambda \left( \lambda - d \right)}} \\[.25in]
\frac{\mktall{\rule{0in}{1em} }\textstyle \sqrt{\lambda -d }}{\mktall{\rule{0in}{1.2em} }\textstyle \sqrt{2} \sqrt{\lambda}} \\
\end{array}\right]
\right\} \ .
\label{NormAdjmat2Db.eq}
\end{alignat}
\qquad
Observe that the numerical eigenvalue $\lambda$ appears throughout
in just such a way that if we have $\lambda = 1$,
we have precisely the previous solution \Eqn{2DNormAdjugate.eq}
for the normalized set of maximal eigenvectors.
It is worth noting that while \Eqn{dtLambda.eq} was mentioned in the
supplement to \citet{Hanson:ib5072}, the importance of the adjugate
and the singularities in the rotation-to-quaternion mappings
of Eqs.~(\ref{Adjmat2Db.eq}) and (\ref{NormAdjmat2Db.eq}) had not yet
been recognized.
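As a purely illustrative sketch, the noisy-data recipe of Eqs.~(\ref{dtLambda.eq}) and (\ref{Adjmat2Db.eq}) can be written in a few lines of NumPy; the helper name and the example data below are ours, and the selected column is the one with the larger norm, which is always normalizable.
\begin{verbatim}
import numpy as np

def optimal_ab_from_noisy_2d(M):
    """Bar-Itzhack-style 2D sketch: given a noisy 2x2 'rotation' M, return
    the (a, b) maximizing the cross term, using the adjugate columns of
    K(m) - lambda_max * I to avoid normalization singularities."""
    d = 0.5 * (M[0, 0] + M[1, 1])      # trace (symmetric) part
    t = 0.5 * (M[1, 0] - M[0, 1])      # antisymmetric off-diagonal part
    lam = np.hypot(d, t)               # maximal eigenvalue, up to scale
    col0 = np.array([lam + d, t])      # degenerates as d -> -lam
    col1 = np.array([t, lam - d])      # degenerates as d -> +lam
    col = col0 if np.dot(col0, col0) >= np.dot(col1, col1) else col1
    a, b = col / np.linalg.norm(col)
    return a, b

# noisy example: rotation by 2.0 radians plus small perturbations
rng = np.random.default_rng(0)
theta = 2.0
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
M = R_true + 0.05 * rng.standard_normal((2, 2))
a, b = optimal_ab_from_noisy_2d(M)
R_opt = np.array([[a*a - b*b, -2*a*b], [2*a*b, a*a - b*b]])
\end{verbatim}
The resulting $R(a,b)$ is then the pure rotation that optimizes the Frobenius measure \Eqn{FrobOfBI2D.eq} for the given noisy $M$.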
%
\qquad
\qquad
\subsection{Summary}
In this section, we have worked through the simple case of two dimensional rotations
by an angle $\theta$, in parallel with how that corresponds to the 2D simplification of
the quaternion-parameterized rotation matrix in \Eqn{Rofqq.eq} to a 2D version
written in terms of the reduced quaternion $(a,b)$. This has led us to the understanding
that in order to solve for $(a,b)$ in terms of the elements of a 2D measured rotation matrix,
we must have two separate sectors, one regular at $(1,0)$ and singular at $(0,1)$, and the
other reversed. To account for the full quaternion space where both $(a,b)$ and $(-a,-b)$
correspond to the same 2D rotation $R(a,b)$, we in fact need four sectors covering
the full circular manifold of the quaternion space. We have also seen that the techniques
that reveal the manifold properties of the quaternion space
also give us a way to find the \emph{optimal} exact rotation corresponding
to a noisy set of rotation matrix data. In the next Section, we perform a parallel analysis
for 3D rotations and full quaternions, deriving and studying the results previewed in
Section \ref{fundamentals.sec}. We will again find that the clearest understanding of this problem is
based on the adjugate matrix arising in the Bar-Itzhack optimization algorithm, and that
noisy input data in particular are most clearly treated in this way.
\qquad
\section{Three-Dimensional Rotations and the Quaternion Map}
\label{3DRotations.sec}
We now turn to the realistic case of interest, namely how to correctly determine a quaternion
corresponding to a measured 3D rotation matrix, and present a complete derivation
and explanation of the results summarized in Section \ref{fundamentals.sec}. As in
Section \ref{2DRotations.sec}, we will begin with a direct derivation using only the
symbolic forms of the quaternion-rotation problem, followed by the Bar-Itzhack
variational version of the same problem, which will exhibit some new features.
Again, the multivalued target required for, e.g., a rigorous formulation of a machine
learning process, will be expressed in terms of an adjugate matrix.
We conclude with the corresponding variational treatment of noisy inexact rotation measurements
producing the optimal true rotation matrix deduced from the noisy measured approximate
rotation matrix data.
\subsection{Direct Solution of the Three-Dimensional Problem}
\label{3DdirectSoln.sec}
A \emph{proper} orthonormal 3D rotation matrix can be written as a quadratic form
$R(q)$ in the quaternion elements $q = (q_{0},q_{1},q_{2},q_{3})$, with $q \cdot q = 1$,
and identified with the axis-angle form $R(\theta,\Hat{n})$ as follows
\begin{align}
R(q) =& R(\theta,\Hat{n})
\end{align}
\begin{align}
\MoveEqLeft [6] \left[ \begin{array}{ccc}
{q_0}^2+{q_1}^2-{q_2}^2 - {q_3}^2 & 2 q_1 q_2 -2 q_0 q_3
& 2 q_1 q_3 +2 q_0 q_2 \\
2 q_1 q_2 + 2 q_0 q_3 & {q_0}^2-{q_1}^2 + {q_2}^2 - {q_3}^2
& 2 q_2 q_3 - 2 q_0 q_1 \\
2 q_1 q_3 - 2 q_0 q_2 & 2 q_2 q_3 + 2 q_0 q_1
& {q_0}^2 - {q_1}^2 - {q_2}^2 + {q_3}^2
\end{array}
\right] \nonumber \\[0.05in]
&= \left[
\begin{array}{ccc}
c +(1- c )\, {\hat{n}_1}^{\ 2} & (1- c )\, \hat{n}_1 \hat{n}_2 - s \, \hat{n}_3 &
(1- c ) \, \hat{n}_1 \hat{n}_3+ s \, \hat{n}_2 \\
(1- c )\, \hat{n}_1 \hat{n}_2+ s \, \hat{n}_3 & c + (1- c )\, {\hat{n}_2}^{\ 2} &
(1- c ) \, \hat{n}_2 \hat{n}_3 - s \, \hat{n}_1 \\
(1- c ) \, \hat{n}_1 \hat{n}_3 - s \,\hat{n}_2 & (1- c )\, \hat{n}_2 \hat{n}_3 + s \, \hat{n}_1&
c + (1- c ) \, {\hat{n}_3}^{\ 2} \\
\end{array}
\right] ,
\label{qrot.axisangle.eq}
\end{align}
where $\Hat{n}\cdot \Hat{n}=1$ and we abbreviate $c = \cos \theta$, $s = \sin \theta$.
As noted earlier, $\Hat{n}$ is the fixed axis about which we rotate by the angle $\theta$,
and parameterizing the quaternion as
\begin{equation}
q(\theta,\Hat{n}) = \left(\cos(\theta/2),\,\hat{n}_{1} \sin(\theta/2),\,
\hat{n}_{2} \sin(\theta/2),\, \hat{n}_{3} \sin(\theta/2) \right)
\label{qAAform.eq}
\end{equation}
exactly reproduces $R(\theta,\Hat{n})$.
We can easily rearrange \Eqn{qrot.axisangle.eq} (or just apply $q(\theta,\Hat{n})$)
to produce an explicit solution for the 10 quadratic forms in $q$, written in terms of
a $4\times 4$ symmetric matrix that will turn out to be important in our narrative:
\begin{align}
\MoveEqLeft[20] \left[
\begin{array}{cccc}
{q_0}^2 & q_0 q_1 & q_0 q_2 & q_0 q_3 \\
q_0 q_1 & {q_1}^2 & q_1 q_2 & q_1 q_3 \\
q_0 q_2 & q_1 q_2 & {q_2}^2 & q_2 q_3 \\
q_0 q_3 & q_1 q_3 & q_2 q_3 & {q_3}^2 \\
\end{array}
\right] \!\!
= \frac{1}{2}
\left[
\begin{array}{cccc}
1+c & s \, n_{1} & s \, n_{2} & s \, n_{3} \\
s \, n_{1} & (1-c)\, { n_{1}} ^2 & (1-c) \, n_{1} n_{2} & (1-c) \, n_{1} n_{3} \\
s \, n_{2} & (1-c) \,n_{1} n_{2} & (1-c) \, {n_{2}} ^2 & (1-c) \, n_{2} n_{3} \\
s \, n_{3} & (1-c) \, n_{1} n_{3} & (1-c) \, n_{2} n_{3} & (1-c)\, {n_{3}} ^2 \\
\end{array}
\right] .
\label{4DQQAA.eq}
\end{align}
Equation (\ref{4DQQAA.eq}) is the full quaternion analog of the unnormalized 2D
Equations~(\ref{R2solveasq.eq}) and (\ref{R2solvebsq.eq}), where we
note that $(c,s)= (\cos\theta, \sin\theta)$ are in terms of the full
angle, not the half-angle quaternion form. In Eqs.~(\ref{R2solvea.eq}) and (\ref{R2solveb.eq}),
we wrote out the explicit normalized quaternions and noted the two distinct
singularities at $c=+1,\,c = -1$. Here we choose to analyze the
normalization factors separately to clarify the analysis. Obviously
each row of \Eqn{4DQQAA.eq} can be normalized to a \emph{symbolically
correct} quaternion $q = (q_{0},q_{1},q_{2},q_{3})$ by multiplying, in row-wise
sequence, by \\[-0.35in]
\begin{Large}
\begin{equation}
\begin{array}{cccccc}
\mbox{\normalsize row 0:} & \frac{1}{q_{0}} & = & \frac{1}{\cos(\theta/2)} & = & \sqrt{\frac{2}{1+c}}\ \\
\mbox{\normalsize row 1:} & \frac{1}{q_{1}} & = & \frac{1}{n_{1}\sin(\theta/2)} & = & \frac{1}{n_{1}} \sqrt{\frac{2}{1-c}} \ \\
\mbox{\normalsize row 2:} & \frac{1}{q_{2}} & = & \frac{1}{n_{2}\sin(\theta/2)} & = & \frac{1}{n_{2}} \sqrt{\frac{2}{1-c}}\ \\
\mbox{\normalsize row 3:} & \frac{1}{q_{3}} & = & \frac{1}{n_{3}\sin(\theta/2)} & = & \frac{1}{n_{3}} \sqrt{\frac{2}{1-c}} \ . \\
\end{array}
\label{QQAAnorms.eq}
\end{equation}
\end{Large}
\noindent
We observe that there are singularities in the normalization factors at new locations \emph{in addition}
to the $c= \pm 1$ singularities appearing in the 2D case. This is easy to understand: our expression for
$q(\theta,\Hat{n})$ is arbitrary, and any permutation of the parameter elements is equally valid, so the
singularities \emph{should} be spread among the components without singling out any given one.
We can conclude that the 4D analog of the 2D pair of $(a,b)\to \pm \{(1,0),(0,1) \}$ singularities in the
normalization is in fact this set of 14 distinct cases:
\begin{equation}
\begin{array}{rcl}
\mbox{one\ zero} &\to & \{ (0, x,y,z),\, (x,0,y,z),\, (x,y,0,z),\,(x,y,z,0) \} \\
\mbox{two\ zeroes} &\to & \{ (0, 0,x,y),\, (0,x, 0,y ),\, (0,x,y,0 ),\, (x,0,0,y),\,(x,0,y,0) ,\,(x,y,0,0)\} \\
\mbox{three\ zeroes} &\to &\pm \{ (1,0,0,0),\, (0,1,0,0), \, (0,0,1,0),\,( 0,0,0,1) \}
\end{array} \ ,
\end{equation}
where $x^2+y^2 +z^2 =1$ in the first line and $x^2+y^2=1$ in the second line to preserve $q\cdot q =1 $.
Since the first two cases are spheres $\Sphere{2}$ and $\Sphere{1}$,
the sign option $\pm$ in the third case is included; in fact,
a more general way to think of the points $\pm1$ is as just another sphere,
the zero-sphere $\Sphere{0}$
solving $x^2 = 1$.
The key of course is that, since $q\cdot q = 1$, there always has to be at least one element
with $|q_{i}| \ge 1/2$, and thus we can always find a row that is normalizable. These fourteen unnormalizable regions of the quaternion solution for a given rotation are illustrated
in Figure \ref{ZeroAdjSingAB.fig}. In Appendix \ref{AdjZeroMatrices.app}, \Eqn{tableofSing.eq},
we list the
explicit adjugate matrices observed when each of these anomalies occurs,
along with visualizations in Figures (\ref{obliqueS2Axes.fig}) and (\ref{4D8arcs.fig}).
A standard choice for selecting a well-defined solution from the $4\times 4$ matrix of alternate quaternion
solutions, in parallel to the algorithm of \citet{Shepperd1978}, is to note that the diagonal of the left-hand
side of \Eqn{4DQQAA.eq} is simply $\{ {q_0}^2,{q_1}^2,{q_2}^2,{q_3}^2\}$, so if we identify
the ordinal location $k$ of the maximal diagonal ${q_{k}}^2$, that is the row we normalize:
\begin{equation}
\mbox{\bf Normalizable solution:\ \ } q_{\mbox{\small opt}} = \frac{\pm 1}{|q_{k}|} ( {q_0}q_{k} ,{q_1}q_{k} ,{q_2}q_{k},{q_3}q_{k}) \ .
\label{optimalNormq.eq}
\end{equation}
The sign is of course arbitrary, though a standard choice is to make $q_{0}> 0$ when possible.
The significance of this for our problem is that, because the quaternion sphere $\Sphere{3}$ is a topological
manifold that cannot be described by a single function produced by a neural net, any algorithm that
needs to find \emph{a universally applicable quaternion} must produce a \emph{list of four candidates corresponding to the rows (equivalently, the columns) of \Eqn{4DQQAA.eq}},
remembering that there are 14 ways to fail normalizing any single one, and choose a normalizable
candidate from that list to produce a usable quaternion via \Eqn{optimalNormq.eq}.
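To make the row-selection rule of \Eqn{optimalNormq.eq} concrete, the following NumPy sketch (ours, purely for illustration) builds the quadratic-form matrix of \Eqn{4DQQAA.eq} from $(\theta,\Hat{n})$ and normalizes the row whose diagonal entry is largest; near $\theta=\pi$ the $q_{0}$ row degenerates, but the selected row stays well conditioned, in line with the $|q_{k}|\ge 1/2$ argument above.
\begin{verbatim}
import numpy as np

def quaternion_from_axis_angle_quadratics(theta, n):
    """Build the 4x4 matrix of quadratic forms q_i q_j from (theta, n-hat),
    then recover q by normalizing the row with the largest diagonal entry
    q_k^2 (the always-normalizable choice)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    c, s = np.cos(theta), np.sin(theta)
    A = 0.5 * np.block([[np.array([[1.0 + c]]), s * n[None, :]],
                        [s * n[:, None], (1.0 - c) * np.outer(n, n)]])
    k = int(np.argmax(np.diag(A)))     # index of the largest q_k^2
    q = A[k] / np.sqrt(A[k, k])        # equals +/- (q0, q1, q2, q3)
    return -q if q[0] < 0 else q       # conventional sign choice q0 >= 0

# near theta = pi the q0 row degenerates, but the q3 row gets selected
q = quaternion_from_axis_angle_quadratics(np.pi - 1e-9, [0.0, 0.0, 1.0])
\end{verbatim}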
\qquad
\qquad
\subsection{Variational Approach: Bar-Itzhack in 3D}
\label{3DVariationalSoln.sec}
The variational method we presented for finding $(a,b)_{\mbox{\small opt}}$ from experimental
2D rotation matrix data extends straightforwardly to the 3D case to determine $q_{\mbox{\small opt}}$. The basic
ideas appear in \citet{BarItzhack2000}, with some refinements in Appendix C and
the Supplementary Material
of \cite{Hanson:ib5072}; see also \cite{SarabandiR2Q2018,SarabandiToR2020}.
The full 3D problem consists of finding the
4D quaternion $q_{\mbox{\small opt}} = (q_0, \,q_1,\, q_2,\, q_3)$ such that $R(q_{\mbox{\small opt}})$
best describes a measured 3D numerical rotation matrix $R$.
We begin as before with the ideal symbolic case, which is now $R=R(\theta,\Hat{n})$.
The task is to exploit quaternion features to minimize the Frobenius norm of the
difference between the two matrices, where our initial optimization measure is
\begin{equation} \label{BIFrobNorm.eq}
\begin{split}
{\mathbf L}_{\textstyle\mbox{Frobenius}} =&
\mathop{\rm tr}\nolimits\left( \left(R(q) - R(\theta,\Hat{n})\right) \cdot \left(R(q) - R(\theta,\Hat{n})\right)^{\top}\right) \\
= & \mathop{\rm tr}\nolimits\left( I_{3} + I_{3} - 2 R(q)\cdot R(\theta,\Hat{n})^{\top} \right) \ .
\end{split}
\end{equation}
At this point we can discard the constants and rephrase the problem of minimizing
the least-squares version of the Frobenius norm in terms of maximizing
the cross-term, which we choose to write as
\begin{equation} \label{BIFrobDelta.eq}
\Delta_\mathbf{F} = \mathop{\rm tr}\nolimits R(q) \cdot R(\theta,\Hat{n})^{\top} = q \cdot K_{0}(\theta,\Hat{n}) \cdot q\ ,
\end{equation}
where expanding $R(q)$ using the form of \Eqn{qrot.axisangle.eq} allows us to rewrite the
trace in \Eqn{BIFrobDelta.eq} as a matrix product using the \emph{profile matrix}
\begin{equation} \label{BIProfileA.eq}
K_{0}(\theta,\Hat{n})
= \left[
\begin{array}{cccc}
2 c+1 & 2 s \, \hat{n}_{1} & 2 s \, \hat{n}_{2} & 2 s \, \hat{n}_{3} \\
2 s \, \hat{n}_{1} & 2 (1-c) \, \hat{n}_{1}^2 - 1 & 2(1-c) \, \hat{n}_{1} \hat{n}_{2} & 2(1-c) \, \hat{n}_{1} \hat{n}_{3} \\
2 s \, \hat{n}_{2} & 2 ( 1-c) \, \hat{n}_{1} \hat{n}_{2} & 2 (1-c) \, \hat{n}_{2}^2 - 1 & 2( 1-c) \, \hat{n}_{2} \hat{n}_{3} \\
2 s \, \hat{n}_{3} & 2 (1-c) \, \hat{n}_{1} \hat{n}_{3} & 2 (1-c) \, \hat{n}_{2} \hat{n}_{3} & 2 (1-c) \, \hat{n}_{3}^2 - 1 \\
\end{array}
\right]
\ .
\end{equation}
The task is now to maximize $\Delta_\mathbf{F}$ by finding $q_{\mbox{\small opt}}$ such that
\begin{equation} \label{argmaxFrob3D.eq}
q _{\mbox{\small opt}} =\ \raisebox{-.9em}{$\stackrel{\textstyle\mathop{\rm argmax}\nolimits}{\textstyle q} $}
\left(\mathop{\rm tr}\nolimits R(q) \cdot R(\theta,\Hat{n})^{\top} \right)
\ = \ \raisebox{-.75em}{$\stackrel{\textstyle\mathop{\rm argmax}\nolimits}{\textstyle q} $}
\left( q \cdot K_{0}(\theta,\Hat{n}) \cdot q \right) \ .
\end{equation}
Since $K_{0}(\theta,\Hat{n})$ is a symmetric real matrix, the maximum of $q \cdot K_{0} \cdot q$
is achieved by picking out the maximal eigenvalue, so the corresponding $q_{\mbox{\small opt}}$ is
the maximal eigenvector. The eigenvalues of $K_{0}(\theta,\Hat{n})$ are $\lambda =(3,-1,-1,-1)$,
so all we need to do is find the eigenvector of $\lambda = 3$. However, we will first use some
linear algebra manipulations to simplify the appearance of our equations. We note that
\begin{itemize}
\item Since the characteristic equation we used to solve for the eigenvalues is
$ \mathop{\rm det}\nolimits( K_{0} - \lambda I_{4})=0 $, \emph{adding a constant multiple of the identity} to $K_{0}$ simply adds that constant
to the eigenvalues, while \emph{scaling} $K_{0}$ simply scales the eigenvalues by the same factor.
\item The eigenvectors of any well-behaved eigensystem can be computed from the \emph{adjugate matrix}
of the vanishing-determinant characteristic equation into which a valid eigenvalue
has been substituted. This follows from the fact that for any matrix,
$M\cdot \mbox{Adj}(M) = \mathop{\rm det}\nolimits M\; I_{4}$; the latter
equation also indicates that, when $\lambda$ is an eigenvalue, the four columns of $\mbox{Adj}(K_{0} - \lambda I_{4})$ are all formally eigenvectors
of that \emph{same eigenvalue}.
\item The eigenvectors themselves can be multiplied by any non-vanishing scale factor without
changing the eigenvalue equation, since the eigenvector equation is homogeneous in the
eigenvector itself.
\end{itemize}
Therefore, we can replace our original matrix $K_{0}(\theta,\Hat{n})$ in \Eqn{BIProfileA.eq} by
adding one copy of the identity matrix and dividing by 4 to yield a new matrix
\begin{equation} \label{BIProfileB.eq}
K(\theta,\Hat{n}) =\frac{1}{4}\left(K_{0}(\theta,\Hat{n}) + 1\times I_{4} \right)
= \frac{1}{2} \left[
\begin{array}{cccc}
1+ c & s \, \hat{n}_{1} & s \, \hat{n}_{2} & s \, \hat{n}_{3} \\
s \, \hat{n}_{1} & (1-c) \, \hat{n}_{1}^2 & (1-c) \, \hat{n}_{1} \hat{n}_{2}
& (1-c) \, \hat{n}_{1} \hat{n}_{3} \\
s \, \hat{n}_{2} & (1-c) \, \hat{n}_{1} \hat{n}_{2} & (1-c) \, \hat{n}_{2}^2
& (1-c) \, \hat{n}_{2} \hat{n}_{3} \\
s \, \hat{n}_{3} & (1-c) \, \hat{n}_{1} \hat{n}_{3} & (1-c) \, \hat{n}_{2}
\hat{n}_{3} & (1-c) \, \hat{n}_{3}^2 \\
\end{array}
\right] \ ,
\end{equation}
whose eigenvalues are now $(1,0,0,0)$, so the maximal eigenvalue is now $\lambda_{\mbox{\small opt}}= 1$,
but whose normalized eigenvectors are preserved. $K(\theta,\Hat{n}) $ has some interesting
properties. First, we see from \Eqn{qAAform.eq} for $q(\theta,\Hat{n})$ that we have
simply rediscovered \Eqn{4DQQAA.eq}, except that now we perceive it in a new light,
as the root of an eigensystem whose maximal eigenvector determines $q_{\mbox{\small opt}}$. Furthermore,
whichever of the two forms of $K(\theta,\Hat{n})= K(q)$ we use, the eigenvectors
corresponding to the eigenvalues are just
\begin{equation}
\left\{ \begin{array}{cccc}
\left[ \begin{array}{c} q_0 \\q_1\\ q_2 \\ q_3 \\ \end{array} \right] &
\left[ \begin{array}{c} -q_1 \\q_0\\ 0 \\ 0 \\ \end{array} \right] &
\left[ \begin{array}{c} -q_2 \\0\\ q_0 \\ 0 \\ \end{array} \right] &
\left[ \begin{array}{c} -q_3 \\0 \\ 0 \\ q_0 \\ \end{array} \right]
\end{array} \right\} \ .
\label{theKeigvecs.eq}
\end{equation}
The quaternion itself, $q = (q_0 ,\,q_1,\, q_2 ,\, q_3)$, is trivially the maximal
eigenvector with $\lambda_{\mbox{\small opt}} = 1$.
However, there is a deeper meaning in the eigensystem generated by the Bar-Itzhack
variational method that tells us everything that is important about the non-trivial manifold
in which quaternions live. \emph{Given the profile matrix $K$}, we can compute the
maximal eigenvector corresponding to $\lambda_{\mbox{\small opt}} = 1$ simultaneously in four
different ways by writing down the characteristic equation of $K$ with $\lambda = 1$
and computing the $4\times 4$ \emph{adjugate matrix}. We can use any equivalent
form we like, but $K(q)$ is particularly simple: first we examine
\begin{align}
\mbox{characteristic matrix:\ } \; \chi \; &= K(q) - 1 \times I_{4} \\
& = K(q) - (q \cdot q) \times I_{4} \ ,
\label{KcharEqn.eq}
\end{align}
and then remarkably, when we compute the adjugate of the maximal
eigenvalue's characteristic equation for $K(q)$, we find it is simply
\emph{the negative of $K(q)$ itself}:
\begin{align}
\mbox{Adjugate}(\chi) =& - \left[
\begin{array}{cccc}
{q_0}^2 & q_0 q_1 & q_0 q_2 & q_0 q_3 \\
q_0 q_1 & {q_1}^2 & q_1 q_2 & q_1 q_3 \\
q_0 q_2 & q_1 q_2 & {q_2}^2 & q_2 q_3 \\
q_0 q_3 & q_1 q_3 & q_2 q_3 & {q_3}^2 \\
\end{array}
\right] = - K(q) \ .
\label{KAdjrEqn.eq}
\end{align}
Recall that the minus sign can be removed and the positive quadratic matrix employed
to represent the family of alternative unnormalized maximal eigenvectors, since the
eigenequation is insensitive to the scale of the eigenvectors.
We already know these solutions for $q_{\mbox{\small opt}}$ are correct, since each row (or column, as it is symmetric) is
proportional to the maximal eigenvector $q = (q_0 ,\,q_1,\, q_2 ,\, q_3)$. However,
in addition we observe a repetition of our observation in Section \ref{3DdirectSoln.sec}
that the four rows of superficially equivalent solutions are \emph{not} equivalent, but indicate that
any of the fourteen combinations of appearances of zeroes in one, two, or three of
the quaternion components $(q_0 ,\,q_1,\, q_2 ,\, q_3)$ renders the entire row useless
for computing the correct quaternion corresponding to the measured rotation matrix,
and another quadratic row with nonsingular normalization must be used for the
calculation.
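These statements are easy to spot-check numerically. The following sketch (ours, assuming NumPy) builds $K(q)=q\,q^{\top}$ for a random unit quaternion, verifies the adjugate identity \Eqn{KAdjrEqn.eq}, and confirms that the best row of $K(q)$ normalizes back to $\pm q$; a cofactor-based adjugate is used because $\chi$ is singular and has no inverse.
\begin{verbatim}
import numpy as np

def adjugate(M):
    """Classical adjugate via cofactors (fine for small matrices)."""
    n = M.shape[0]
    C = np.zeros_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T                          # adjugate = transpose of cofactors

rng = np.random.default_rng(1)
q = rng.standard_normal(4)
q /= np.linalg.norm(q)                  # a random unit quaternion
K = np.outer(q, q)                      # the quadratic profile matrix K(q)
chi = K - np.eye(4)                     # characteristic matrix at lambda = 1
assert abs(np.linalg.det(chi)) < 1e-12             # lambda = 1 is an eigenvalue
assert np.allclose(adjugate(chi), -K, atol=1e-12)   # Adjugate(chi) = -K(q)
k = int(np.argmax(np.diag(K)))          # best (always-normalizable) row
assert np.allclose(np.abs(K[k] / np.sqrt(K[k, k])), np.abs(q), atol=1e-12)
\end{verbatim}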
\qquad
\qquad
\subsection{Bar-Itzhack Variational Approach to 3D Noisy Data}
\label{BarItzh-3D-noise.sec}
We have found the solution for the quaternion manifold's four solutions (eight with signs)
in terms of a perfect orthogonal measurement of the rotation data. As noted by
\citet{BarItzhack2000}, the same basic
procedure can be used for measured rotation matrices with errors, and the resulting quaternion
produces a perfect orthonormal rotation matrix that is the \emph{optimal approximation}
to the provided errorful data. The issue of handling rotation data with wide-ranging errors
has been studied by \citet{SarabandiR2Q2018}; their heuristic approach to determining
an optimal rotation omits critical steps in the noisy-data treatment of Bar-Itzhack that
we handle rigorously here.
Our starting point is a measured $3\times 3$ matrix $R(m)$ that is assumed to originate from a
3D rotation matrix, but cannot be guaranteed to be orthonormal due to measurement
error; we write the components of this input data matrix as
\begin{equation}
R(m) = \left[
\begin{array}{ccc}
m_{11} & m_{12} & m_{13} \\
m_{21} & m_{22} & m_{23} \\
m_{31} & m_{32} & m_{33} \\
\end{array} \label{rotmatMij.eq}
\right] \ .
\end{equation}
We set up the Bar-Itzhack variational problem starting with a symbolic
rotation in the quadratic quaternion form $R(q)$ given in \Eqn{Rofqq.eq},
and write down the cross-term of the Fr\"{o}benius norm to define a maximization problem
that will be our optimization target:
\begin{equation}
\Delta_\mathbf{F} (q,m)= \mathop{\rm tr}\nolimits R(q) \cdot R(m)^{\top} = q \cdot K_{0}(m) \cdot q \ .
\end{equation}
Our initial profile matrix resulting from rearranging the quadratic quaternion terms
into the form of a scalar-valued symmetric matrix multiplication takes the form
\begin{align}
K_{0}(m) = &
\left[
\begin{array}{cccc}
m_{11}+m_{22}+m_{33} & m_{32}-m_{23} & m_{13}-m_{31} & m_{21}-m_{12} \\
m_{32}-m_{23} & m_{11}-m_{22}-m_{33} & m_{12}+m_{21} & m_{13}+m_{31} \\
m_{13}-m_{31} & m_{12}+m_{21} & -m_{11}+m_{22}-m_{33} & m_{23}+m_{32} \\
m_{21}-m_{12} & m_{13}+m_{31} & m_{23}+m_{32} & -m_{11}-m_{22}+m_{33} \\
\end{array}
\right]
\label{noisyK0mat.eq}
\end{align}
Note that $K_0(m)$ is traceless, and since $K_0(m)$ is a real symmetric matrix, it will have real eigenvalues.
The eigenvector $q_{\mbox{\small opt}}$ of $K_0(m)$'s maximal eigenvalue $\lambda_{\mbox{\small opt}}(m)$ will maximize
$\Delta_\mathbf{F} (q,m)$. This maximal eigensystem will solve the optimization problem
\begin{align} \label{argmaxFrob3DRmat.eq}
q _{\mbox{\small opt}} =\ \raisebox{-.9em}{$\stackrel{\textstyle\mathop{\rm argmax}\nolimits}{\textstyle q} $}
\left(\mathop{\rm tr}\nolimits R(q) \cdot R(m)^{\top} \right)
\ = \ \raisebox{-.75em}{$\stackrel{\textstyle\mathop{\rm argmax}\nolimits}{\textstyle q} $}
\left( q \cdot K_{0}(m) \cdot q \right) \\[0.1in]
\lambda _{\mbox{\small opt}} = \Delta_\mathbf{F} (q_{\mbox{\small opt}},m) = \left( q_{\mbox{\small opt}} \cdot K_{0}(m) \cdot q_{\mbox{\small opt}} \right)
\ .
\end{align}
However, to be a proper quaternion, the optimizing eigenvector $q$ of
$\lambda_{\mbox{\small opt}}$ will have to be normalized to become $q_{\mbox{\small opt}}$, and we have
argued throughout that no single normalization formula is valid everywhere: the quaternionic
manifold $\Sphere{3}$ must be covered by eight coordinate patches rather than described by a single function.
Fortunately, we know from the exact-rotation-data case in the preceding section how to deal correctly with
this issue in the Bar-Itzhack context. We note that while it was useful in the exact case to get a
very clean set of formulas by performing an eigenvector-preserving transformation of the form
\begin{equation}
K(m) = \mbox{scale} \times \left( K_{0}(m) + \mbox{constant} \times I_{4} \right)
\end{equation}
to adjust our maximal eigenvalue to the identity, in the general case there is no single
pair of constants that is especially useful, though one might choose to normalize to obtain
a unit maximal eigenvalue by dividing by the value of $K_{0}$'s maximal eigenvalue
$\lambda_{\mbox{\small opt}}(m)$. We will assume that if there is some reason to readjust
$K_{0}(m)$ to a form $K(m)$ with the same eigenvectors, up to scaling,
preserving the corresponding maximal eigenvalue $ \lambda_{\mbox{\small opt}}(m)$ up to an additive constant,
we may do so, and thus we will continue with that abstract $K(m)$ to complete our argument.
Obviously what we have to do first is find $\lambda_{\mbox{\small opt}}(m)$. Standard numerical eigensystem
software packages can easily accomplish this, and, for symmetric real matrices up to $4 \times 4$
in size, one can even calculate the maximal eigenvalue analytically using Cardano's solution of 4th degree
polynomials (see, e.g., \cite{Hanson:ib5072} for a review). The last step is then to
form the characteristic equation's matrix as before, using now the \emph{numerical} eigenvalue, giving
\begin{align}\label{KofmCharEqn.eq}
\mbox{characteristic matrix:\ } \chi(m) = & K(m) - \lambda_{\mbox{\small opt}}(m) \times I_{4} \\
= & K(m) - (q_{\mbox{\small opt}} \cdot K(m) \cdot q_{\mbox{\small opt}}) \times I_{4} \nonumber \\[.0in]
\mbox {where} & \nonumber\\
\mathop{\rm det}\nolimits \chi(m) \equiv& 0 \nonumber\ .
\end{align}
Finally, our full quaternion-space covering manifold solution is
determined from the adjugate of \Eqn{KofmCharEqn.eq},
which is an entirely numerical matrix having the following (possibly
scaled) relation to the sign-ambiguous set of eight possible quaternion
formulas:
\begin{align}
\left[ \begin{array}{cccc}
{q_0}^2 & q_0 q_1 & q_0 q_2 & q_0 q_3 \\
q_0 q_1 & {q_1}^2 & q_1 q_2 & q_1 q_3 \\
q_0 q_2 & q_1 q_2 & {q_2}^2 & q_2 q_3 \\
q_0 q_3 & q_1 q_3 & q_2 q_3 & {q_3}^2 \\
\end{array} \right] = & \mbox{Adjugate}(\chi(m)) \ .
\label{KAdjrEqn2.eq}
\end{align}
Recall that the adjugate is determined only up to a nonvanishing scale of either sign,
and that one picks the ordinal index $k$ with the largest diagonal value ${q_{k}}^{2}$
in the adjugate matrix, and normalizes that row to obtain $q_{\mbox{\small opt}}$. Finally, one
calculates the \emph{optimal pure rotation approximation} to the numerically
measured $R(m)$ using this quaternion selected from the adjugate matrix:
\begin{equation}
R_{\mbox{\small opt}}(m) = R(q_{\mbox{\small opt}})
\end{equation}
and our treatment of how to compute a quaternion from any ideal or noisy rotation
matrix data is done.
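For reference, this entire pipeline, from a measured matrix through \Eqn{noisyK0mat.eq} to $q_{\mbox{\small opt}}$ and $R_{\mbox{\small opt}}(m)$, fits in a short NumPy sketch; the function names are ours, and the rank-one matrix built from the maximal eigenvector stands in, up to scale and sign, for the numerical adjugate of \Eqn{KAdjrEqn2.eq}.
\begin{verbatim}
import numpy as np

def quaternion_from_noisy_rotation(M):
    """Sketch of the Bar-Itzhack pipeline: build the profile matrix K0(m)
    from a noisy 3x3 matrix M, take the eigenvector of its maximal
    eigenvalue, and read the quaternion out of the rank-one matrix q q^T
    by normalizing the row with the largest diagonal entry."""
    K0 = np.array([
        [M[0,0]+M[1,1]+M[2,2],  M[2,1]-M[1,2],
         M[0,2]-M[2,0],         M[1,0]-M[0,1]],
        [M[2,1]-M[1,2],         M[0,0]-M[1,1]-M[2,2],
         M[0,1]+M[1,0],         M[0,2]+M[2,0]],
        [M[0,2]-M[2,0],         M[0,1]+M[1,0],
         -M[0,0]+M[1,1]-M[2,2], M[1,2]+M[2,1]],
        [M[1,0]-M[0,1],         M[0,2]+M[2,0],
         M[1,2]+M[2,1],         -M[0,0]-M[1,1]+M[2,2]]])
    w, V = np.linalg.eigh(K0)                 # symmetric eigensolver
    q_opt = V[:, np.argmax(w)]                # maximal eigenvector
    A = np.outer(q_opt, q_opt)                # stand-in for Adjugate(chi(m))
    k = int(np.argmax(np.diag(A)))            # always-normalizable row
    q = A[k] / np.sqrt(A[k, k])               # (redundant here, but mirrors
    return q if q[0] >= 0 else -q             #  the adjugate-row selection)

def rotation_from_quaternion(q):
    """Quadratic quaternion rotation matrix R(q)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3), 2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3), q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2), 2*(q2*q3+q0*q1), q0*q0-q1*q1-q2*q2+q3*q3]])

# Given a (hypothetical) noisy measurement M_measured, the optimal pure
# rotation approximating it would be
#   rotation_from_quaternion(quaternion_from_noisy_rotation(M_measured))
\end{verbatim}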
\qquad
\subsection{Graphical Illustration of the 3D Rotation Case}
We can get an intuitive feeling for what is going on with the multiple valid regions
for the quaternion solution representations by drawing representative spaces corresponding
to the unnormalized (and potentially singular) solutions, and then making pointwise
normalization maps from those spaces to the actual quaternion subspaces, and noting
where validity of the normalization map fails.
In 2D, these maps were fairly simple to see in Figure \ref{2Dquaternionsingularities.fig},
where a simple unnormalized circle mapped to a half-circle
in the 2D reduced quaternion plane. Going one step higher in complexity, we
can produce images in 2D that correspond to a quaternion map, but one dimension
lower. The top part of Figure \ref{ZeroAdjSingAB.fig} illustrates pairs of 2D
spheres embedded in $\R{3}$
centered at $(\pm 1, 0,0)$, $(0,\pm 1,0)$, and $( 0,0,\pm1)$, shown in red.
Taking each point on the red spheres and applying the normalization operation,
we obtain the green spheres centered at the origin $(0,0,0)$. As the red spheres'
points approach the origin, the normalization approaches a divide-by-zero at the
equator of each green sphere, and the map can go no farther.
The bottom part of Figure \ref{ZeroAdjSingAB.fig} illustrates the geometry in
quaternion space of the 14 singular subspaces, where one or more normalizations
of quaternion parameterizations in terms of rotation matrices will fail, and a test
must be made to identify a legal normalization step. When one of the four sets
of three vanishing quaternions occurs, only the point pair at the tip of a 4D axis
is permitted, as shown in the left bottom part of the figure. If one of six pairs
of zeros occurs, the six circles in the middle part denote the allowed subspace
of quaternions, and if only one quaternion vanishes, there are four allowed
spherical subspaces as shown at the left. These fourteen collections of
4, 6, and 4 subspaces correspond roughly to the vertices, edges, and faces of
a complex tetrahedron, as a tetrahedron has 4 vertices, 6 edges, and 4 faces.
Additional illustrations of the higher dimensional properties of these singularities
are given in Appendix \ref{AdjZeroMatrices.app}, Figures (\ref{obliqueS2Axes.fig})
and (\ref{4D8arcs.fig}).
\begin{figure}[h!]
\vspace{-0.5in}
\figurecontent{
\centering
\includegraphics[width=6.25in]{figspdf/Figure3-v1}
\vspace{-0.2in}
\caption[]{\ifnum0=1 {\bf figspdf/Figure3-v1.pdf}\fi
\ifnum0=1 {\it newer: new-figures-mar18-fig3.pdf}\fi
\ifnum0=1 {\it original: S2-projection-4.eps, S2-projection-6.eps, }\fi
\ifnum0=1 {\it original: threeZeroAdjSing.eps, \ twoZeroAdjSing.eps,
justOneZeroAdjSing.eps, oneZeroAdjSing.eps. }\fi
\footnotesize
{\bf (a)} {\bf 3D subspace showing three axes of singularities.}
In this 3D subspace of quaternion space, there are partial spheres instead
of partial circles as in 2D, but the singularity occurs in the same way: as the sphere
closes in on the origin, normalization is impossible. At left, the $x$ and $y$
axes coincide with the four red spheres that produce partial coverings for the inner green
quaternion-subspace spheres. At the right, we include the $z$ axis component
and show all six singularity-limited quaternion-subspace normalized hemispheres in green,
giving the 3D analog of the 2D case \Fig{2Dquaternionsingularities.fig}. \\
{\bf (b)}
{\bf Visualizing the fourteen quaternion singular-normalization domains.}
(Left) The allowed quaternions if any three components vanish are the four $\Sphere{0}$
subspaces at the ends of the 4D axes, $q_{0}=\pm 1$, $q_{1}=\pm 1$, $q_{2}=\pm 1$,
and $q_{3}=\pm 1$.
(Middle) Allowed quaternions with two zeroes are six topological circles
$\Sphere{1}$. A single circle projected to 3D from the unit quaternion
with $q_{0}=q_{1}=0$ is just the curve $(0,0,x,y)$ with $x^2+y^2 = 1$.
The union of all six curves matches the six edges of a complex tetrahedron.
(Right) Regions of singularity for quaternions with a single zero are topological spheres
$\Sphere{2}$ that correspond to four faces of a complex tetrahedron, with the allowed $q_{0}=0$
subspace, for example, being the spherical surface $(0, x,y,z )$ with $x^2+y^2 +z^2= 1$.
}
\label {ZeroAdjSingAB.fig}
\end{figure}
\clearpage
\comment{
\begin{figure}[h!]
\vspace{0in}
\figurecontent{
\centering
\includegraphics[width=2.7 in ]{S2-projection-4.eps} \hspace{.1in}
\includegraphics[width=2.7 in ]{S2-projection-6.eps} \\
\hspace*{1.2in} (a) \hfill (b) \hspace*{1.2in} \\
}
\caption[]{\ifnum0=1 {\bf S2-projection-4.eps, S2-projection-6.eps. }\fi
\footnotesize
{\bf 3D subspace showing three axes of singularities.}
In this 3D subspace of quaternion space, there are partial spheres instead
of partial circles, but the singularity occurs in the same way, as the sphere
closes in on the origin, normalization is impossible. (a) The $x$ and $y$
axes coincide with the four red spheres that produce partial coverings for the inner green
quaternion-subspace spheres. (b) Adding in the $z$ axis to show the full
story of how this subspace of quaternions is covered in a nonsingular fashion
be three pairs of partial spheres analogous to the $(1+c,s)$ circular arcs
in \Fig{2D4arcs.fig}.
}
\label{3D6arcs.fig}
\end{figure}
}
\comment{
\mypar{Strategy for depicting the full quaternion map.} Despite its four-dimensional intrinsic
nature, quaternion geometry can be depicted in a fairly accurate way if we are
willing to follow some analogies between lower dimensional and higher dimensional
spheres. First, we show in \Fig{obliqueS2Axes.fig}(a) an ordinary sphere $\Sphere{2}$
embedded in 3D Euclidean space $\R{3}$, with the three orthogonal axes $\Hat{x}$,
$\Hat{y}$, and $\Hat{z}$, projected in the familiar way to a 2D image. Even though
the image is a dimension lower than the actual 3D object being depicted, we are
accustomed to interpreting this image as a 3D object. Now rotate the sphere
as in in \Fig{obliqueS2Axes.fig}(b) so that the three axes are projected equally
onto the 2D image, with the ends of the axes forming the vertices of an equilateral
triangle. Now we see that this projection corresponds to one hemisphere of
$\Sphere{2}$ flattened into a disk containing all three positive axes, and the
back hemisphere as a second disk containing all three negative axes. It is
clear that if we create two separate images as in \Fig{obliqueS2Axes.fig}(c,d),
\emph{every single point} on the manifold $\Sphere{2}$ can be seen in the
two separate hemispherical images. We can do the same thing with a full
quaternion map using a \emph{solid ball} containing a 3D quadruple of positive
axes (with the four axis ends being the vertices of a tetrahedron), paired with
a matching solid ball containing the symmetric projections of the four
negative axes. Every point of the quaternion sphere is visible in the two
solid balls, exactly analogous to the two filled disks for the hemispheres
of $\Sphere{2}$ in \Fig{obliqueS2Axes.fig}(c).
\qquad
The full quaternion map from the unnormalized representation to the normalized
true quaternion sector is divided into eight distinct regions, in opposite signed
pairs that represent equivalent rotations due to the identification $R(q)=R(-q)$.
Instead of portions of ordinary spheres as in the top of
Figure \ref{ZeroAdjSingAB.fig}, we have
portions of hyperpheres centered at $(\pm 1, 0,0,0)$, $(0,\pm 1,0,0)$,
$(0,0,\pm 1,0 )$, and $(0, 0,0,\pm1)$. Instead of being partial hemispherical
surfaces, these are now solid balls, each corresponding to a portion of a set
of overlapping hemispheres of the quaternion manifold $\Sphere{3}$. These
are difficult to draw, but an attempt can be made by projecting the axes of
the 4D space into 3D in the symmetric directions of the vertices of a tetrahedron.
In Figure \ref{4D8arcs.fig}(a), we show first a collection of slices of the
solid ball at various radii in 4D, aligned with one axis, for a single choice
of the eight unnormalized and normalized maps. Then in Figure \ref{4D8arcs.fig}(b),
we reduce the number of samples of the solid balls to one, but show
a representative pair of unnormalized and normalized slices for \emph{the
four positive unit 4D axes}; there is another opposite sign counterpart for each
of these four that is omitted for clarity.
}
\comment{
\begin{figure}[]
\vspace{-.25in}
\figurecontent{
\centering
\includegraphics[width=2.5 in ]{threeZeroAdjSing.eps} \hspace{.1in}
\includegraphics[width=2.5 in ]{twoZeroAdjSing.eps} \\
\hspace*{1.4in} (a) \hfill (b) \hspace*{1.4in} \\
\includegraphics[width=2.7 in ]{justOneZeroAdjSing.eps} \hspace{.1in}
\includegraphics[width=2.7 in ]{oneZeroAdjSing.eps} \\
\hspace*{1.2in} (c) \hfill (d) \hspace*{1.2in} \\
}
\caption[]{\ifnum0=1 {\bf threeZeroAdjSing.eps,twoZeroAdjSing.eps,
justOneZeroAdjSing.eps, oneZeroAdjSing.eps. }\fi
\footnotesize
{\bf Visualizing the fourteen quaternion singular-normalization domains.}
(a) Regions of singularity for quaternions with three zeroes are just the four pairs of points at the
each end of the projected 4D axes, $q_{0}=\pm 1$, $q_{1}=\pm 1$, $q_{2}=\pm 1$, and $q_{3}=\pm 1$.
The positive points correspond with the vertices of ends of the 4D axes, while the pairs are
intuitively both required because the remaining quaternion manifold in this degenerate case is
actually the zero-sphere $\Sphere{0}$, the two-point solution of $x^2 = 1$.
(b) Regions of singularity for quaternions with two zeroes are six topological circles
$\Sphere{1}$. A single circle projected to 3D from the unit quaternion
with $q_{0}=q_{1}=0$ is just the curve $(0,0,x,y, )$ with $x^2+y^2 = 1$.
The union of all six spheres matches the six edges of the tetrahdral projection
of the unit axes from 4D to 3D, with $q_{0}= q_{1}=0$, $q_{0}= q_{2}=0$, $q_{0}= q_{3}=0$,
$q_{2}=q_{3}=0$, $q_{3} = q_{1}=0$, and $q_{1}=q_{2}=0$.
Regions of singularity for quaternions with a single zero are topological spheres
$\Sphere{2}$.
(c) A single sphere projected to 3D (and a bit squashed) from the unit quaternion
with $q_{0}=0$, so the remaining set of quaternion values with this normalization
singularity is the manifold $(0,x,y,z)$ with $x^2+y^2+z^2 = 1$.
(d) All four spheres, with $q_{0}=0$, $q_{1}=0$, $q_{2}=0$, and $q_{3}=0$.
Each sphere lies within a 3-space perpendicular to one of the 4D axes, and appears
in the 3D projection as a flattened sphere aligned with one of the faces of the
tetrahedron formed by the projected 4D axes.}
\label{ZeroAdjSingAB.fig}
\end{figure}
}
\section{Applications of the Adjugate Representation}
We now apply the quaternion adjugate variable representation to the classic problems
of estimating the optimal rotations needed to align sets of pointwise matched
data. The basic appearance of the data for each of the problems we shall treat
is summarized visually in \Fig{2D3DRMSDPose.fig} in the Introduction.
We first examine in sections 5.1 and 5.2 the simple but pedagogically instructive 2D cloud
problems, namely matching rotation-related pairs of 2D clouds, and then
pose discovery given a 2D cloud and a 1D point-matched image derived
from that cloud. The more relevant 3D cloud matching case has been solved
in many ways, so again our adjugate variable approach in section 5.3 is mainly of pedagogical
interest. The reader will find the most interesting results in section 5.4, in which
we exploit the adjugate reduction in the power of the least squares loss function
for orthographic 3D-cloud-to-2D-image matching, discovering a new closed form
solution to that least squares problem. Finally, in section 5.5, we apply the methods
of our orthographic pose estimation solution to produce a novel and highly accurate
three-step procedure to solve the 3D-to-2D perspective projection pose estimation
problem. Comparisons with frequently cited standard results for this problem show
that our sparse adjugate-based procedure matches or exceeds the loss profiles for random
simulated data sets of other known methods, which rely on more
complicated iterative procedures.
\comment{
In order to show some concrete examples of how the adjugate representations
of 2D and 3D rotations can be used to solve practical problems, we sketch out
how the procedure is applied, in order of increasing complexity, to the tasks of
2D cloud matching and pose estimation, and
then for 3D cloud matching and pose estimation. %
The basic appearance of the data for each of these four problems is summarized
visually in \Fig{2D3DRMSDPose.fig} in the Introduction.
We note that there are of course a great many treatments of the use of
quaternion eigensystems for point cloud matching, e.g,, \citet{HebertThesis1983,FaugerasHebert1983,FaugerasHebert1986,Horn1987,RDiamond1988,Kearsley1989,Kneller1991,Hanson:ib5072}, as well as numerous treatments of various aspects of pose estimation,
ranging, e.g., from \cite{Haralick-pose-1989} to \cite{MLPnP-Urban-2016},
\cite{LuHagerMj-FastPose-2000} to
such recent work as \cite{ZhouWangKaess-ICRA2020},
\cite{FuaEtAl-6DPoseEst-CVPR2020}, \cite{SXiao-fixErr-TOG2021}, or \cite{GuoEtAl-CamOrientwFocal-2021}.
Our purpose here is mainly to show how to exploit particular quaternion-based
approaches that avoid the occurrence of singularities,
so we will not attempt to review the extensive literature on these subjects.
We note that often this problem is phrased as needing to discover the rotation best aligning a test data set with
a fixed reference set, the so-called root mean square deviation or RMSD problem, also
called the ``Generalized Procrustes Problem.'' However, that traditional form is not well-adapted
to the closely related pose estimation problem, which we want to examine in tandem,
so the formulas we
use apply the unknown rotation to the fixed reference set and compare it to the bare
test data set, generating. e.g., the inverse of the usual RMSD rotation matrix solution.
}
\mypar{Universal Least-Squares Loss Function Framework.}
We choose as our framework the basic least squares formulas arising when we require a
set of template reference data to be rotated to agree with another set of
(assumed noisy) measured data. The universal least squares loss function applicable
to all of our matching problems applies an unknown rotation to the fixed reference set
and compares it to a jittered test data set, and takes the following form:
\begin{equation}
\begin{aligned}
{\mathbf S}_{\mbox{\footnotesize (2D,3D : Match,Pose)}} & =
\sum_{k=1}^{K} \left\| R(\mbox{quaternion variables}) \cdot \Vec{x}_{k} - \Vec{u}_{k} \right\| ^2 \\
& =\sum_{k=1}^{K} \left( \Vec{u}_{k}\cdot \Vec{u}_{k} -
2 \Vec{u}_{k} \cdot R(\mbox{vars}) \cdot \Vec{x}_{k}
\: + \; \Vec{x}_{k} \cdot R^{\top}(\mbox{vars}) \cdot R(\mbox{vars}) \cdot \Vec{x}_{k} \right) \\
& = \mathop{\rm tr}\nolimits( {U} \cdot {U}^{\top}) - 2 \mathop{\rm tr}\nolimits \left( R(\mbox{vars}) \cdot {X} \cdot {U}^{\top} \right)
+ \; \mathop{\rm tr}\nolimits \left( R^{\top}(\mbox{vars}) \cdot R(\mbox{vars}) \cdot {X} \cdot {X}^{\top} \right) \ .
\end{aligned}
\label{UniversalLoss.eq}
\end{equation}
Here $\{\Vec{x}_{k}\}$ is the set of $K$ reference points describing a cloud in
dimension $D=2$ or $D=3$,
with $ {X}$ denoting a $D \times K$ matrix allowing us to absorb the sums over $k$.
We write $\{\Vec{u}_{k}\}$ or $\ {U}$ for the set of $K$ test points.
For the cloud-to-cloud matching
problem, ${X}$ and ${U}$ denote reference data and rotated data of dimension $D=2$ or $D=3$,
and for these same-dimension matching problems, \Eqn{UniversalLoss.eq} greatly simplifies
due to $R^{\top} \cdot R = \mbox{Identity Matrix}$, reducing the least squares minimization
problem to the equivalent maximization problem for the cross term
\begin{equation}
\Delta = \mathop{\rm tr}\nolimits ( R \cdot {X} \cdot {U}^{\top})
\label{UniversalCrossTerm.eq}
\end{equation}
after eliminating all constant terms. For the pose-estimation problem, these become a (corresponding-point-matched) projected image cloud of dimension $D=1$ or $D=2$, and
the necessarily incomplete rotation matrices in the projection process must remain,
so the full form of \Eqn{UniversalLoss.eq} must be retained.
\comment{.
For the cloud-cloud matching problems,
we take the rotation $R$ to be a quaternion-based paramaterization, either
the 2D rotation $R(a,b)$ or the 3D rotation $R(q)$; for these full
rotations, $R^{} \cdot R = (q\cdot q)^{2} \times I_{D} =\mbox{(Identity Matrix)}$,
and thus the last term reduces
to a constant $ {X}\cdot {X}^{}$; only the term \emph{linear} in $R$ appears,
so the problem is \emph{quadratic} in $q$, which makes it much easier to solve.
However, for the pose-estimation problem, although obtaining the full rotation $R$ is
still our goal, the rotation matrix in \Eqn{UniversalLoss.eq} must be reduced to
a projection by dropping the last line, that is, reducing to
either a $1\times2$ projection matrix $P(a,b)$ from 2D to 1D, or a $2\times3$
projection matrix $P(q)$ from 3D to 2D.
Thus for pose estimation, the last term in \Eqn{UniversalLoss.eq}
\emph{does not reduce to a constant},
but in fact re-introduces \emph{quartic quaternion terms} into the problem, so the
quadratic quaternion matrix eigensystem methods frequently exploited in cloud-cloud
matching problems no longer apply. However, we emphasize that the pose problem
is still solvable, as the orthonormality conditions on $R(q)$ permit the missing
bottom row to be reliably reconstructed from the cross-product of the first two rows,
or, indeed, any five elements of the first two rows.
}
There are a variety of approaches to solving each of these problems to obtain the
rotation matrix that optimally aligns the two data sets $ {X}$ and $ {U}$. We will
pay particular attention to the behavior of the least squares optimization problem
represented by the loss function \Eqn{UniversalLoss.eq} when the quaternion parameters of
the rotation matrices and projections are expressed in terms of the non-singular quadratic
parameterizations that we have been exploring.
We will hereafter define the variables resulting from the quadratic quaternion forms
as the \emph{adjugate variables}, which we now define explicitly as:
\begin{equation}
\label{substAdj.eq}
\begin{aligned}
\mbox{2D:\ \ } &R(a,b) \!\! & \to R(a^2,b^2,a b) \to \, &R(\alpha,\beta,\gamma)&\ \ \ \ &
P(a,b) \!\! & \to P(a^2,b^2,a b) \to \, & P(\alpha,\beta,\gamma) \\
\mbox{3D:\ \ } &R(q) \!\!& \to \ \ \ R(q_{i} q_{j}) \ \ \ \to \, & R(q_{ij}) &\ \ \ \ &
P(q) \! \!& \to \ \ \ P(q_{i} q_{j}) \ \ \ \to \, & P(q_{ij}) \ .
\end{aligned}
\end{equation} %
The adjugate matrices themselves are thus given by \Eqn{2DadjugateNab.eq} and \Eqn{KAdjrEqn.eq},
reiterated here for convenient reference:
\begin{equation} \label{AdjugateSummary,eq}
\left. \begin{aligned}
A(a,b) & =
\left[ \begin{array}{cc} a^2 & a b \\ a b & b^2 \\
\end{array} \right] \, \equiv \, \left[ \begin{array}{cc}
\alpha & \gamma\\ \gamma & \beta\\
\end{array} \right] \\[0.2 in]
A(q) &= \left[
\begin{array}{cccc}
{q_0}^2 & q_0 q_1 & q_0 q_2 & q_0 q_3 \\
q_0 q_1 & {q_1}^2 & q_1 q_2 & q_1 q_3 \\
q_0 q_2 & q_1 q_2 & {q_2}^2 & q_2 q_3 \\
q_0 q_3 & q_1 q_3 & q_2 q_3 & {q_3}^2 \\
\end{array} \right] \, \equiv \, \left[
\begin{array}{cccc}
q_{00} & q_{01} & q_{02} & q_{03} \\
q_{01} & q_{11} & q_{12} & q_{13} \\
q_{02} & q_{12} & q_{22} & q_{23} \\
q_{03} & q_{13} & q_{23} & q_{33} \\
\end{array} \right]
\end{aligned} \right\} \ .
\end{equation}
While the basic mathematics is of course unchanged, the conversion from
formulas quadratic or quartic in $q$ to formulas linear or quadratic,
respectively, in $q_{ij}$
leads to some additional insights, not to speak of being explicitly nonsingular,
based on the arguments in the preceding sections.
\comment{
\begin{figure}[h!]
\vspace{0in}
\figurecontent{
\centering
\includegraphics[width=1.8 in ]{2DMatch-a.eps} \hspace{.1in}
\includegraphics[width=1.8 in ]{2DPose-a.eps} \hspace{.1in}
\includegraphics[width=1.8 in ]{2DPose-c.eps} \\
\hspace*{.8in} (a) \hfill (b) \hfill (c) \hspace*{0.8in} \\[0.2in]
\includegraphics[width=1.8 in ]{3DMatch-a.eps} \hspace{-.1in}
\includegraphics[width=1.8 in ]{3DPose-a.eps} \hspace{-.1in}
\includegraphics[width=2in ]{3DPose-b.eps} \\
\hspace*{0.8in} (d) \hfill (e) \hfill (f) \hspace*{0.8in} \\
}
\caption[]{\ifnum0=1 {\bf 2DMatch-a.eps, 2DPose-a.eps,
2DPose-c.eps, 3DMatch-a.eps, 3DPose-a.eps, 3DPose-b.eps . }\fi
\footnotesize
{\bf The fundamental pose-matching problems in 2D and 3D.}
(a) Task of finding the 2D rotation aligning a 2D reference cloud with
another 2D test cloud differing by a rotation.
(b) Task of finding the 2D rotation locating the projection angle
used to make a 1D image of a 2D point cloud.
(c) The solution, rotating the projected points to align with the
2D virtual camera gaze direction.
(d) Task of finding the 3D rotation aligning a 3D reference cloud with
another 3D test cloud differing by a rotation. The purple arrow
is the axis of rotation, proportional to the vector part of the
corresponding quaternion.
(e) Task of finding the 3D rotation used to create the projected
2D image of a 3D point cloud.
(f) The solution, rotating the projected points to align with the
3D virtual camera orientation frame.
}
\label{FourAdjPoseProblems.fig}
\end{figure}
\clearpage
}
We consider applying the loss \Eqn{UniversalLoss.eq} here in two ways.
One canonical standard for obtaining a numerical quaternion from
the loss function is the \emph{argmin} function, which takes as its
input the measured data points and the unknown rotation parameters,
with their constraints, which are the single equation $ q \cdot q = 1 $
for a pure quaternion formulation.
However, our purpose here is to add the quaternion adjugate variables
to the arsenal of our analysis tools. We already are familiar with
changing variables from the four unit-quaternion
elements $q = (q_{0},\, q_{1}, \, q_{2},\, q_{3})$ to their
ten possible quadratic expressions written as $q_{ij}$,
denoting variables whose behavior corresponds to $q_{i} \times q_{j}$.
Since the unit-length constraint $q\cdot q = 1$ implies there are only
three independent variables available, there must be seven constraints
on the ten quaternion adjugate variables $q_{ij}$ derived from their definition.
For example, $q_{00}=q_{0} q_{0}$, $ q_{11}=q_{1} q_{1}$, and
$q_{01}=q_{0} q_{1}$ impose the constraint $q_{00} \, q_{11} = {q_{01}}^2$.
While there are a number of alternative ways of extracting our needed
seven constraints, we have found the following to be a universally
useful combination:
\begin{equation} \label{TheqqConstraints.Eq}
\begin{aligned}
q_{00} + q_{11} + q_{22}+ q_{33} &= 1 \\
q_{00} \, q_{11} &= {q_{01}}^2 , && q_{00} \, q_{22} &= {q_{02}}^2, &&
q_{00} \, q_{33} & = {q_{03}}^2 \\
q_{22} \, q_{33} &= {q_{23}}^2, && q_{11} \, q_{33} &= {q_{13}}^2, &&
q_{11}\, q_{22} &= {q_{12}}^2 \ . \\
\end{aligned}
\end{equation}
This set of constraints reduces the ten adjugate variables to three
independent rotation parameters, and using standard numerical
constraint solution software provided in systems like
Matlab or Mathematica, we get reasonable quaternion
answers for noisy data in all but very unusual cases using \emph{argmin}
on the least squares loss function combined with constraints on the
manifold of legal variations, e.g.,
\begin{align}
\mbox{\rm argmin}& [ \,\sum_{\mbox{\small points}} \{\,\left\|\, R[q]\cdot \mbox{\rm reference}[x,y,z] -
\mbox{\rm test}[u,v[,w]]\, \right\|^2,\nonumber\\
& {q_{0}}^{2} + {q_{1}}^{2} + {q_{2}}^{2}+ {q_{3}}^{2} = 1 \},\nonumber\\
&\{ q_{0}, q_{1}, q_{2}, q_{3}\} \ ]\nonumber\\[0.25in]
\mbox{\rm argmin}& [ \,\sum_{\mbox{\small points}} \{\,\left\|\, R[q_{ij}]\cdot \mbox{\rm reference}[x,y,z] -
\mbox{\rm test}[u,v[,w]]\, \right\|^2,\nonumber\\
&\{q_{00} + q_{11} + q_{22} + q_{33} = 1, q_{00} q_{11} = {q_{01}}^2, q_{00} q_{22} = {q_{02}}^2, \nonumber\\
& q_{00} q_{33} = {q_{03}}^2,
q_{22} q_{33} = {q_{23}}^2, q_{11} q_{33} = {q_{13}}^2,
q_{11} q_{22} = {q_{12}}^2 \}\}, \nonumber\\
&\{ q_{00}, q_{01}, q_{02}, q_{03}, q_{11}, q_{12}, q_{13}, q_{22}, q_{23}, q_{33} \} \ ] \ . \label{UniversalArgmin.eq}
\end{align}
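As a concrete illustration of the first form above, here is a minimal numerical
sketch in Python/SciPy (our own illustration, not the Matlab or Mathematica
implementations referred to in the text; all helper names are ours), which
minimizes the 3D matching loss over $q$ with the single constraint
$q\cdot q = 1$ imposed as an SLSQP equality constraint:
\begin{quote}
\begin{small}
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def rotation_from_quaternion(q):
    """R(q) for a unit quaternion q = (q0, q1, q2, q3), as in Eqn (Rofqq.eq)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3]])

def loss(q, X, U):
    """Least squares loss  sum_k || R(q) x_k - u_k ||^2  over paired point clouds."""
    return np.sum((rotation_from_quaternion(q) @ X - U) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))                         # reference cloud (columns are points)
q_true = rng.normal(size=4)
q_true /= np.linalg.norm(q_true)
U = rotation_from_quaternion(q_true) @ X + 0.01 * rng.normal(size=X.shape)  # noisy test cloud

# argmin over q subject to the single constraint q . q = 1
result = minimize(loss, x0=np.array([1.0, 0.0, 0.0, 0.0]), args=(X, U), method="SLSQP",
                  constraints=[{"type": "eq", "fun": lambda q: q @ q - 1.0}])
q_opt = result.x / np.linalg.norm(result.x)
print(q_opt, q_true)   # typically agrees up to an overall sign for small noise
\end{verbatim}\end{small} \end{quote}
The second, adjugate-variable form is handled in the same way, with the ten
$q_{ij}$ as unknowns and the seven relations of \Eqn{TheqqConstraints.Eq}
supplied as equality constraints.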
Appropriately configured machine learning applications should, in principle,
behave like \emph{argmin}. However, there are many choices
to be made in designing neural networks for such purposes, and it can
be challenging to design an implementation that makes those choices reliably,
which may account for the observed misbehavior of such networks.
We hope to deal with these issues more thoroughly in a separate venue.
Here, we will focus on a top-level overview of the ways in which quaternion
adjugate variables can provide insights not only into the relationship between
quaternions and rotations in numerical settings, but also into the
least squares equations for point-cloud matching and their algebraic solutions.
In the remaining parts of this Section, we will show how these two contexts
are intimately related.
\subsection{2D Point Cloud Orientation Matching}
We begin with the simplest example, which we will label as
``2D Matching,'' namely the problem of aligning a pair of 2D point clouds,
where $\{\Vec{x}_{k}\} = \{[x,y]_{k}\}$ is a set of $K$ 2D column vectors
describing points in a reference set, and $\{\Vec{u}_{k}\} = \{[u,v]_{k}\}$ describes
a set of 2D test points. Following our conventions in \Eqn{UniversalLoss.eq},
the least squares loss function then takes the form
\begin{align}
\mathbf{S}_{\mbox{\small (2D Match)}} =& \sum_{k=1}^{K}
\left\| R \cdot \Vec{x}_{k} - \Vec{u}_{k} \right\|^2 \ .
\end{align}
The corresponding optimization problem is easily solved in closed form in terms of the $2\times 2$
cross-covariance matrix
\[ E_{ab}= \sum_{k}{x_{a}}^{k} {u_{b}}^{k} =
\left[ \begin{array}{cc} x\! \cdot \! u & x\! \cdot \! v \\ y\! \cdot \! u & y\! \cdot \! v \\ \end{array} \right] \]
using a wide variety
of methods, including quaternion forms exploiting the parameterization $R(a,b)$
\citep[see, e.g.,][supporting information, Sec.~5]{Hanson:ib5072}.
We now examine what happens if we parameterize $R$ not by the 2D quaternion
itself, $q = [a,b]$, but by the adjugate-inspired variables $\{\alpha,\beta,\gamma\}$
replacing $\{a^2,b^2, a b\}$, subject to the constraints $\alpha+\beta = 1$
and $\alpha \beta = \gamma^{2}$. With $R(\alpha,\beta,\gamma)$, we thus find the loss function %
\begin{align} \label{S2DCloudPair.eq}
\mathbf{S}_{\mbox{\small \rm (2D Match) }} =&
\, u\!\cdot\! u + v\!\cdot\! v - 2 \alpha \, x\!\cdot\! u + 2 \beta \,x\!\cdot\! u
- 4 \gamma\, x\!\cdot\! v + {\alpha}^2 x\!\cdot\! x + 4 {\gamma}^2 x\!\cdot\! x -
2 \alpha \beta \, x\!\cdot\! x + \nonumber\\
& {\beta}^2\, x\!\cdot\! x + 4 \gamma\, y\!\cdot\! u- 2 \alpha \,y\!\cdot\! v
+ 2 \beta \, y\!\cdot\! v + {\alpha}^2 y\!\cdot\! y +
4 {\gamma}^2 \,y\!\cdot\! y - 2\alpha \beta \,y\!\cdot\! y +{\beta}^2 y\!\cdot\! y \nonumber\\
\MoveEqLeft[6]{=
\, u\!\cdot\! u + v\!\cdot\! v - 2 \alpha \, x\!\cdot\! u + 2 \beta \,x\!\cdot\! u
- 4 \gamma\, x\!\cdot\! v + 4 \gamma\, y\!\cdot\! u- 2 \alpha \,y\!\cdot\! v
+ 2 \beta \, y\!\cdot\! v + x\!\cdot\! x + y\!\cdot\! y }\ ,
\end{align}
where we used both constraints to remove the quaternion dependence of the $x\!\cdot\! x + y\!\cdot\! y$ terms,
since $(\alpha-\beta)^2 + 4\gamma^2 = (\alpha+\beta)^2 = 1$ when $\alpha\beta = \gamma^{2}$ (this
is equivalent to exploiting $R^{\top}\!\cdot R = I_{2}$ in \Eqn{UniversalLoss.eq}).
The least squares solution minimizing $\mathbf{S}$ in \Eqn{S2DCloudPair.eq} is easy to
find in the adjugate variable framework by using the constraints to eliminate $\beta$ and
$\gamma$ in terms of $\alpha$, requiring the derivative with respect to $\alpha$ to
vanish, and solving the resulting quadratic equation for $\alpha$. There is one subtle
point, which is that, because of the form of the constraints, the relative sign of $a$ and $b$
(the sign of $\gamma$) can be indeterminate. Both signs give an adjugate matrix that handles the possible singularities at $[a,b]=[0,1]$ and $[a,b]=[1,0]$, and in fact
we can determine the appropriate sign from the data,
yielding a result identical with the sign-resolved quadratic
products of the results from the 2D quaternion eigensystem methods noted
in earlier sections.
The solution for the $2\times 2$ adjugate matrix can be written as
\begin{equation}
A_{R\cdot x \to u} = \left[ \begin{array}{cc}
\alpha & \gamma\\ \gamma & \beta\\
\end{array} \right] \; = \; \left[ \begin{array}{cc} a^2 & a b \\ a b & b^2 \\
\end{array} \right] = \left[
\begin{array}{cc}
\frac{1}{2} \left(1+ \frac{\textstyle{x\cdot u}+{y\cdot v}}
{\textstyle \lambda(x,y,u,v)} \right) &
\frac{1}{2} \;\frac{\textstyle{x\cdot v}-{y\cdot u}}
{\textstyle \lambda(x,y,u,v) } \\
\frac{1}{2}\; \frac{\textstyle{x\cdot v}-{y\cdot u}}
{\textstyle \lambda(x,y,u,v) } &
\frac{1}{2} \left(1-\frac{\textstyle{x\cdot u}+{y\cdot v}}
{\textstyle \lambda(x,y,u,v) } \right) \\
\end{array}
\right] \ . \label{SolveMatch2D.eq}
\end{equation}
where $ \lambda(x,y,u,v) = \sqrt{(x \cdot u + y \cdot v)^2 + ( x \cdot v - y \cdot u)^2 }$
actually turns out to be the maximal eigenvalue appearing naturally also in the
quaternion matrix approach.
One can easily verify that $\alpha + \beta =1$ and $\alpha \beta = \gamma^2$.
The optimal aligning 2D quaternion $[a_{\mbox{\small opt}},b_{\mbox{\small opt}}]$
is found as usual by identifying
the larger of $\alpha$ and $\beta$ and normalizing the corresponding row of \Eqn{SolveMatch2D.eq}, and the
corresponding rotation matrix is $R_{\mbox{\small opt}} = R(a_{\mbox{\small opt}},b_{\mbox{\small opt}})$. We can write
the latter also in closed form as
\begin{equation}
R_{\mbox{\small opt}} (X,U )\, = \,\left[ \begin{array}{cc}
\frac{\textstyle{x\cdot u}+{y\cdot v}} {\textstyle \lambda(x,y,u,v)} &
- \frac{\textstyle{x\cdot v}-{y\cdot u}} {\textstyle \lambda(x,y,u,v) } \\[0.2in]
\frac{\textstyle{x\cdot v}-{y\cdot u}} {\textstyle \lambda(x,y,u,v) } &
\ \frac{\textstyle{x\cdot u}+{y\cdot v}} {\textstyle \lambda(x,y,u,v) } \\
\end{array}
\right] \ . \label{SolveMatch2DRot.eq}
\end{equation}
Equations for this problem equivalent to \Eqn{SolveMatch2DRot.eq} are well-known
\citep[see, e.g.,][]{Haralick-pose-1989}, but our adjugate-based derivation is novel.
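As a quick numerical sanity check of \Eqn{SolveMatch2D.eq} and \Eqn{SolveMatch2DRot.eq}
(a minimal sketch in Python rather than the symbolic tools used elsewhere in this paper;
the variable names are ours), one can assemble the four data dot products, form the
adjugate variables, and compare the resulting rotation with the one used to generate the data:
\begin{quote}
\begin{small}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2*np.pi)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X = rng.normal(size=(2, 40))                        # reference cloud, rows [x; y]
U = R_true @ X + 0.05 * rng.normal(size=X.shape)    # noisy test cloud, rows [u; v]

x, y = X
u, v = U
xu, yv = x @ u, y @ v
xv, yu = x @ v, y @ u
lam = np.hypot(xu + yv, xv - yu)                    # lambda(x, y, u, v)

# adjugate variables (alpha, beta, gamma) of Eqn (SolveMatch2D.eq)
alpha = 0.5 * (1 + (xu + yv) / lam)
beta  = 0.5 * (1 - (xu + yv) / lam)
gamma = 0.5 * (xv - yu) / lam
assert np.isclose(alpha + beta, 1) and np.isclose(alpha * beta, gamma**2)

# optimal rotation of Eqn (SolveMatch2DRot.eq); approaches R_true as noise -> 0
R_opt = np.array([[(xu + yv) / lam, -(xv - yu) / lam],
                  [(xv - yu) / lam,  (xu + yv) / lam]])
print(np.round(R_opt - R_true, 3))                  # entries are small for low-noise data
\end{verbatim}\end{small} \end{quote}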
\qquad
\subsection{2D Pose Estimation}
Our next topic is the 2D pose estimation problem, ``2D Pose,'' which is actually a more
complicated orientation problem than the 2D matching problem, even though
its least squares loss function is one-dimensional and has only a single term.
The problem concerns the action of a projection on the 2D cloud points, producing
an image that is a line of matched points. The projection can be
obtained by truncating the $2\times 2$ rotation matrix \Eqn{2DabRotN2.eq}
to its first row, so $P(a,b)= [ a^2 - b^2, -2\, ab]$ or
$P(\alpha,\beta,\gamma)= [\alpha - \beta, -2 \gamma]$,
with the adjugate matrix coordinates $\alpha = a^2,\,\beta = b^2,\, \gamma = a b$.
In principle we should enforce the constraints $\alpha + \beta =1$ and $\alpha \beta = \gamma^2$ to guarantee compatibility with the quaternion roots of our task, but
it turns out that the reduced dimension of the pose estimation problem allows us
some interesting additional freedom. Our least squares optimization problem
takes this explicit form:
\begin{align} \label{LSQ2DPose.eq}
{\mathbf S}_{\mbox{\footnotesize (2D Pose)}} =&
\sum_{k=1}^{K} \left( P( \alpha,\beta,\gamma) \cdot [x_{k},\,y_{k}]^{\top} - u_{k} \right)^2
\, = \, \sum_{k=1}^{K} \left( (\alpha -\beta)\, x_{k} -2 \gamma \, y_{k} - u_{k} \right)^2 \\
\MoveEqLeft[4] {= ( \alpha -\beta)^2\, x \! \cdot \! x -4 \gamma( \alpha -\beta) \, x \! \cdot \! y
+ 4 {\gamma}^{2} \, y \! \cdot \! y
-2 (\alpha -\beta) \, x \! \cdot \! u + 4 \gamma \, y \! \cdot \! u \ + u \! \cdot \! u
\label{LSQ2DPoseExpr.eq} } \ .
\end{align}
Already we see something that might be interesting: while this equation is \emph{quartic} in the quaternion $(a,b)$ variables, making it unapproachable by the matrix methods
applicable for the ``2D Match''
problem, it is only \emph{quadratic} in the adjugate $(\alpha,\beta,\gamma)$ variables. Might it
be possible to complete the square and obtain an elegant system solvable as a transformed
``2D Match'' problem? Unfortunately, this fails because the square-completion transformation
requires that the quadratic terms be represented by a symmetric
nonsingular matrix (as its inverse is required), and the corresponding matrix
following from \Eqn{LSQ2DPoseExpr.eq} has \emph{vanishing determinant}.
To exploit the reduced dimension of the adjugate parameters in the loss function
\Eqn{LSQ2DPoseExpr.eq}, we proceed by requiring the vanishing of the derivatives of the loss
function with respect to each variable, ignoring the constraints for the moment.
We obtain three equations, but only the following two linear equations for our three variables
are independent:
\begin{equation}
\left. \begin{aligned}
\frac{d S}{d \alpha} &= x \! \cdot \! u - \alpha \, x \! \cdot \! x + \beta \, x \! \cdot \! x
+ 2 \gamma \, x \! \cdot \! y =0 \\
\frac{d S}{d \gamma} &= y\! \cdot \! u - \alpha \, x\! \cdot \! y\ + \beta \, x\!\cdot \! y
+ 2 \gamma \, y \! \cdot \!y = 0
\end{aligned} \right\} \ ,
\label{2DPosedervs.eq}
\end{equation}
We find the partial solutions
\begin{equation}
\left. \begin{aligned}
\alpha &= \beta + \frac{x\! \cdot \!u\ y\! \cdot \!y - y\! \cdot \!u \ x\! \cdot \!y }
{x\! \cdot \!x \ y\! \cdot \! y \ - \ (x\! \cdot \!y)^2}\\
\gamma &= \frac{1}{2} \left( \frac{ x\! \cdot \!u \ x\! \cdot \!y - y\! \cdot \!u \ x\! \cdot \!x }
{ x\! \cdot \!x \ y \! \cdot \! y \ - \ (x\! \cdot \!y)^2} \right)
\end{aligned} \right\} \ ,
\label{2DPoseaa:ab.eq}
\end{equation}
At this point we are ready to use one constraint, $\alpha = 1-\beta$, inserted into
the first line of \Eqn{2DPoseaa:ab.eq} to solve for $\beta$; inserting \emph{that} back
into our equation for $\alpha$, we have a complete solution in terms of only the
cross-covariances: introducing the notation $\sum x_k x_k \to \text{xx}$, $\sum x_k y_k \to \text{xy}$,
$\sum y_k y_k \to \text{yy}$, $\sum u_k x_k \to \text{ux}$, $\sum u_k y_k \to \text{uy}$,
and $\sum u_k u_k \to \text{uu}$
for the cross-covariance sums over $k$, we end up with
\begin{align}
\alpha & = \frac{1}{2} \left( 1 + \frac{ \text{ux} \ \text{yy} \ - \ \text{uy}\ \text{xy} }
{ \text{xx}\ \text{yy}-\text{xy}^2} \right) \\
\beta & = \frac{1}{2}
\left(1 - \frac{ \text{ux} \ \text{yy} \ - \ \text{uy}\ \text{xy} } { \text{xx}\ \text{yy}-\text{xy}^2}\right)\\
\gamma & = \frac{1}{2} \frac{\text{ux}\ \text{xy} - \text{uy}\ \text{xx}}
{ \text{xx}\ \text{yy} - \text{xy}^2 } \ .
\label{lsq2DPoseSoln.eq}
\end{align}
We will find it useful to
employ a notation using the $3\times 3$ matrix of all term-by-term cross-covariance
elements, including the self-covariance elements, of $[x,y]$ and $[u ]$
summed over $k$. Accordingly, we define
\begin{align}
C &= \left[ \begin{array}{ccccc}
\text{xx} & \text{xy} & \text{ux} \\
\text{xy} & \text{yy} & \text{uy} \\
\text{ux} & \text{uy} &\text{uu} \\
\end{array} \right] \ .
\label{3by3ccarray.eq}
\end{align}
Our algebraic solutions reduce to ratios of
$2 \times 2$ subdeterminants of \Eqn{3by3ccarray.eq}, which we choose
to write using the notation
\begin{equation} \label{CC2Dsubdets.eq}
\begin{aligned}
d_{1}\to \left[
\begin{array}{ccc}
\text{xx} & \text{xy} \\
\text{xy} & \text{yy} \\
\end{array} \right] & \ \
d_{2}\to \left[ \begin{array}{ccc}
\text{ux} & \text{xy} \\
\text{uy} & \text{yy} \\
\end{array} \right] &
d_{3} \to \left[ \begin{array}{ccc}
\text{ux} & \text{xx} \\
\text{uy} & \text{xy} \\
\end{array} \right]\\
\end{aligned}
\end{equation}
Thus
\begin{equation}\label{lsqDet2DPoseSoln.eq}
\begin{array} {c@{\ \ \ }c@{\ \ \ }c}
\alpha \ = \ \frac{\textstyle 1}{\textstyle 2}
\left( 1 + \frac{\textstyle \mktall{\!}{d_{2}}}{\textstyle d_{1}} \right) &
\beta \ = \ \frac{\textstyle 1}{\textstyle 2} \left( 1 - \frac{\textstyle \mktall{\!}d_{2}}{\textstyle d_{1}} \right) &
\gamma \ = \ \frac{\textstyle 1}{\textstyle 2} \, \frac{\textstyle \mktall{\!}d_{3}} {\textstyle d_{1}}
\end{array}
\end{equation}
To complete our solution of the 2D Pose problem, we note that while
Eqs.~(\ref{lsqDet2DPoseSoln.eq}) satisfy the first constraint $\alpha +\beta = 1$,
they do not satisfy our second constraint $\alpha \beta = \gamma^{2}$. However,
it is important to remember that the loss function \Eqn{LSQ2DPose.eq} involves
one fewer rotation-matrix row than a full rotation matrix loss.
If we attempt to construct the projection, that is, the top row of the 2D rotation
matrix, and insert the solutions \Eqn{lsqDet2DPoseSoln.eq}, we find the following
approximate first step, which in fact gives perfect solutions for noise-free test data:
\begin{equation} \label{lsqDetSoln2DRot.eq}
\left. \begin{aligned}
\tilde{P}(\alpha, \beta, \gamma) & = \left[ \alpha - \beta, \ \ -2 \gamma \right] \\[0.05in]
& = \left[ \frac{u\! \cdot \!x \ y\! \cdot \!y - u\! \cdot \!y \ x\! \cdot \!y}
{ x\! \cdot \!x \ y \! \cdot \! y \ - \ (x\! \cdot \!y)^2}, \ \
\frac{u\! \cdot \!y \ x\! \cdot \! x - u\! \cdot \! x \ x\! \cdot \!y}
{ x\! \cdot \!x \ y \! \cdot \! y \ - \ (x\! \cdot \!y)^2} \right] \\
& = \left[ \frac{d_{2}}{d_{1}} , -\frac{ d_{3}}{d_{1}} \right]
\end{aligned} \right\} \\[0.05in]
\end{equation}
Evaluating this tentatively as the projection in the 2D Pose loss
function, \Eqn{LSQ2DPose.eq}, against an arbitrary
list of pure or noisy data sets, we find that even though its scale
varies through a range near unity, unlike a rotation matrix row, it
still scores very well as a target for a minimizer of \Eqn{LSQ2DPose.eq}.
Remarkably, with no further computation, simply normalizing
\Eqn{lsqDetSoln2DRot.eq} to produce a legal partial rotation matrix row
results in mean losses of $\approx 10^{-30}$ for pure data,
and in mean losses smaller than those obtained using the \emph{known} initial value for
the 2D projection \Eqn{lsqDetSoln2DRot.eq}. Combining the projection matrix row
\Eqn{lsqDetSoln2DRot.eq} with its orthogonal partner (the 2D cross-product)
and normalizing produces a perfect orthogonal rotation matrix, which is the
solution to the 2D Pose least squares problem:
\begin{equation} \label{lsqDetSoln2DRotFull.eq}
\left. \begin{aligned}
{R}(\alpha, \beta, \gamma)
& = \frac{ \left[ \begin{array}{cc}
(u\! \cdot \!x \ y\! \cdot \!y - u\! \cdot \!y \ x\! \cdot \!y ) &
(u\! \cdot \! y \ x\! \cdot \! x - u\! \cdot \! x \ x\! \cdot \!y)\\
( u\! \cdot \! x \ x\! \cdot \!y - u\! \cdot \!y \ x\! \cdot \! x ) &
(u\! \cdot \!x \ y\! \cdot \!y - u\! \cdot \!y \ x\! \cdot \!y ) \\
\end{array}\right] }
{\left( (u\! \cdot \!x \ y\! \cdot \!y - u\! \cdot \!y \ x\! \cdot \!y )^2 +
(u\! \cdot \! y \ x\! \cdot \! x - u\! \cdot \! x \ x\! \cdot \!y)^2 \right)^{1/2} }\\
& = \frac{1}{\sqrt{{d_{2}}^2 + {d_{3}}^{2}}}
\left[ \begin{array}{cc} d_{2} & - d_{3} \\ d_{3} & \ d_{2} \\ \end{array} \right] \ .
\end{aligned} \right\} \\[0.05in]
\end{equation}
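For concreteness, the chain \Eqn{CC2Dsubdets.eq}--\Eqn{lsqDetSoln2DRotFull.eq} can be
exercised numerically as follows (again an illustrative Python sketch with our own
names, assuming simulated data):
\begin{quote}
\begin{small}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2*np.pi)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X = rng.normal(size=(2, 60))                          # 2D reference cloud
u = (R_true @ X)[0] + 0.02 * rng.normal(size=60)      # noisy 1D projected image (first row)

x, y = X
xx, xy, yy = x @ x, x @ y, y @ y
ux, uy = u @ x, u @ y

# 2x2 cross-covariance subdeterminants of Eqn (CC2Dsubdets.eq)
d1 = xx * yy - xy * xy
d2 = ux * yy - uy * xy
d3 = ux * xy - uy * xx

# unnormalized projection of Eqn (lsqDetSoln2DRot.eq), then the rotation of
# Eqn (lsqDetSoln2DRotFull.eq) from normalizing and adding the orthogonal row
P_tilde = np.array([d2 / d1, -d3 / d1])
R_opt = np.array([[d2, -d3], [d3, d2]]) / np.hypot(d2, d3)
print(np.round(R_opt - R_true, 3))                    # entries are small for low-noise data
\end{verbatim}\end{small} \end{quote}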
\qquad
\noindent{\bf Remark:} We will see in the 3D pose problem that, to handle noisy data that
move the least squares solution away from a pure rotation, we will take one more step:
the transition from the projection-matrix solution that works for noise-free data to a full
rotation matrix that preserves its form for noisy data will require applying the Bar-Itzhack
procedure to find the \emph{optimal} pure rotation matrix given an approximate candidate.
The 2D case has much less structure, and can avoid that complication. One can check
that applying the Bar-Itzhack process to the numerator of \Eqn{lsqDetSoln2DRotFull.eq}
produces exactly the same rotation matrix.
In \Fig{2DPoseLosses.fig}, we show that $R(\alpha,\beta,\gamma)$ in \Eqn{lsqDetSoln2DRotFull.eq}
is indeed an exact rotation matrix giving the relation between an initial point cloud
$[x,y]$ and its projection $[u]$ after a rotation and added noise.
We show that it outperforms the rotation that was used to simulate the pose data (referred
to as \emph{rot\_mat}); the simulating rotation is of course very good, but for noisy data it is
necessarily somewhat less accurate than the least squares solution, since
it has \emph{no way of knowing} the particular noise realization that the least squares formula sees.
\mypar{Final remarks on the 2D Pose least squares problem.} The fact that
\Eqn{lsqDetSoln2DRot.eq} satisfies both constraints and achieves basically
vanishing loss for perfect data was somewhat unexpected, but is very likely
related to the fact that the eigenvalues for the quaternion profile matrix approach
to solving the ``Match'' class of problems are rotation-invariant; this feature is
maintained in all the numerical experiments we have done, and presumably there
is a direct proof related to our invariance proof for the ``Match'' problem in
Appendix \ref{RMSDRotInvariance.app}.
The literature on the subject of pose estimation contains numerous papers mentioning closed form
solutions and good approximation methods, but typically the exact solutions are lists
of alternative roots of equations
that must be evaluated one by one against the loss function, and, in the end, numerical
optimization methods such as Newton's method or neural networks
are often preferred to achieve experimental results
\cite[see, e.g.,][]{Haralick-pose-1989,OlssonCVPR2006,WientapperACCV2016,WientapperCVIU2018,ZhouWangKaess-ICRA2020}.
In 2D, the simple least squares solution \Eqn{lsqDetSoln2DRotFull.eq} that we found
can undoubtedly be obtained by any of many equivalent methods; however, our derivation based on the
quaternion adjugate matrix provides a clear picture of what is happening.
The key is that rotation parameterizations are subtle structures, even in 2D, as we saw in Section
\ref{2DRotations.sec}. There are several fundamental insights that appear: one
is that if you drop one row of a rotation matrix to produce a projection matrix, there is still
enough information present to reproduce the full $N \times N$ rotation matrix by simply
taking the cross-product of the rows of the projection matrix to find the missing row.
Next is that both the algebraic
methods for extracting a least-squares solution from a squared-difference loss function such
as \Eqn{LSQ2DPose.eq} and numerical methods such as \emph{argmin}
produce answers without anomalies \emph{only if the correct, reduced, number of
constraints is imposed}. What we found was that for the 2D Match problem both constraints
must be applied, while for the 2D Pose problem only the first constraint is necessary, in addition to
normalization; otherwise one obtains an excess of ambiguous branched solutions (we will see a similar situation
in the 3D Pose problem later). Finally, the particular constraints that are successful in guiding
\emph{argmin} include, for example, the \emph{topological constraints on orthonormality of the
rows of the projection matrix}, which for 2D are simply $\alpha +\beta = 1$ plus the
normalization of the single row. These are the
key observations; we conjecture that any successful pose estimation method has these
properties, which are clarified by formulating the problem using quaternion adjugate variables to
parameterize the rotation, which is then computed \emph{directly}, without going through an
isolated quaternion stage. We conclude by noting that \emph{if} a quaternion is desired, which
can often be the case if one wants to visualize global features of families of rotations,
the quaternion can be extracted at once using fundamental methods such as the
Bar-Itzhack optimization described in Sections \ref{2DRotations.sec} and
\ref{3DRotations.sec}.
\begin{figure}[t]
\vspace{-0.5in}
\figurecontent{
\centering
\includegraphics[width=6.1in]{figspdf/Figure4-v2}}
\caption[]{\ifnum0=1 {\bf PointClouds-dir/figspdf/Figure4-v2.pdf. }\fi
\ifnum0=1 {\it recent: PointClouds-dir/figspdf/Fig4-2D.pdf. }\fi
\ifnum0=1 {\it originals: lsq2DSVDPoseLosses.eps. }\fi
\footnotesize
{\bf Results using Analytical Solution to the 2D Point-Cloud Projection Problem.}
(a) Example data for a small rotation with noise $\sigma=0.1$: original reference
data in grey, rotated sample points in orange, points using rotation matrix
without noise in cyan (mostly hidden), results using
our analytical solution $R(\alpha,\beta,\gamma)$ in black,
and projected points in magenta.
(b) Comparison of least squared errors between our analytical solution
$R(\alpha,\beta,\gamma)$ and the original \emph{rot\_mat} for 100 random
2D point clouds with $N=50$ and $\sigma=0.1$. Data are sorted by the
$R(\alpha,\beta,\gamma)$ results, and dashed lines indicate the mean.
(c) Exploration of the dependency of the least squared errors
on $\sigma$, here with $N= 50$, and we plot the mean results over 100 iterations.
The coloring is as in (b).
(d,e) Exploring the no-noise case for $R(\alpha,\beta,\gamma)$: The original \emph{rot\_mat}
performs consistently better, especially as the number of points $N$ in the cloud increases;
one set of 100 iterations with $N=50$ is shown for clarity. In both cases note
that the scale of the $y$ axis is $10^{-29}$. }
\label{2DPoseLosses.fig}
\end{figure}
\comment{
\begin{figure}[h!]
\vspace{0.0in}
\figurecontent{
\centering
\includegraphics[width=4.5 in ]{lsq2DSVDPoseLosses.eps}}
\caption[]{\ifnum0=1 {\bf lsq2DSVDPoseLosses.eps. }\fi
\footnotesize The ranges of the 2D pose problem losses from
\Eqn{LSQ2DPose.eq} using noisy data
with projection matrices derived from alternate formulas.
[Green] original, simulation-creating, rotation; [Black] the bare least squares
solution that is perfect for noise-free data, but veers away from a good rotation
for noisy data; [Magenta] altering the least squares form by normalizing by
the square root of the determinant; [Blue] the Bar-Izhack optimal rotation
approximating the least squares solution $\to$ this is seen to be
\emph{identical} to the normalized least squares matrix. This is unusual,
and the 3D case does not have this simple relation.}
\label{2DPoseAdjArgQ.fig}
\end{figure}
}
\clearpage
\subsection{3D Point-Cloud Matching}
\label{3DMatch.sec}
Moving on to 3D data, we consider first the classic ``3D Match'' problem, also known as the RMSD or
``Generalized Procrustes'' problem,
whose task is to find the 3D rotation best aligning a possibly noisy test cloud with a reference cloud
to which it corresponds. As noted, here we choose a least squares
loss function for our purposes that
reverses the common order and applies the rotation matrix
to align the reference cloud with the test cloud.
We assume we have two 3D point clouds of size $K$,
a reference set $\{\Vec{x}_{k}\}=\{[x,y,z]_{k}\}$ that
we consider as a list of $K$ columns of 3D points, and a test set of noisy measured
3D points $\{\Vec{u}_{k}\}=\{[u,v,w]_{k}\}$
that is believed to be related, pointwise, to the reference set by an unknown rotation.
If we choose to express that rotation using
\Eqn{Rofqq.eq} as $R(q)$, then we can write the least-squares optimization
target as \cite[see, e.g.,][]{Horn1987,Hanson:ib5072}.
\begin{equation}
{\mathbf S}_{ \mbox{\footnotesize(3D Match) }} (q)= \sum_{k=1}^{K} \| R(q) \cdot \Vec{x}_{k} - \Vec{u}_{k} \| ^{2}\ .
\label{RMSDeqn.eq}
\end{equation}
There are well-known procedures using quaternion eigensystems to solve
this problem in closed form either from the least squares functional ${\mathbf S}(q)$
(see \citet{FaugerasHebert1983,FaugerasHebert1986}), or from the
cross-term, which reduces to a trace over the rotated $3\times 3$ cross-covariance matrix $E_{ab}$
\cite[see, e.g.,][]{Horn1987},
\begin{equation}
\Delta(q) = \mathop{\rm tr}\nolimits R(q) \cdot \left( {X} \cdot {U}^{\top} \right) = \mathop{\rm tr}\nolimits R(q) \cdot E
\label{basic3DCross.eq} \ .
\end{equation}
Because the $R^{\top}\!\cdot R$ factor in \Eqn{RMSDeqn.eq} disappears from
this least squares optimization function, we can always express the problem of
finding the optimal quaternion using a \emph{quadratic} function
of quaternions, and the problem is solvable using standard linear algebra.
(Non-quaternion methods such as SVD are also widely used.) We reiterate that in the
3D$\to$2D pose-estimation problem to be dealt with in the next subsection, those terms no
longer cancel, and the expression for $\mathbf {S}(q)$ becomes \emph{quartic} in $q$,
and \Eqn{basic3DCross.eq} is no longer applicable.
\mypar{The Adjugate.} We now explore how the adjugate variables can be incorporated into this problem.
Starting from the standard framework of \Eqn{RMSDeqn.eq}, we see there is little
motivation to use the full $\mathbf S(q)$ form, since all the quartic terms disappear,
and only the non-constant cross-term $\Delta(q)$ shown in \Eqn{basic3DCross.eq} is relevant.
In the standard quaternion solution of the optimal rotation problem, we rearrange
the cross-term into a form that is optimized by the maximal eigenvector of the
profile matrix, $M(E)$ (see, e.g., \citet{Hanson:ib5072} for further references and a review).
We find
\begin{equation}
\Delta(q)\, = \, \mathop{\rm tr}\nolimits R(q) \cdot E \, = \, (q_0,q_1,q_2,q_3) \cdot M(E) \cdot
(q_0,q_1,q_2,q_3)^{\top} \equiv q \cdot M(E) \cdot q \ ,
\label{qM3qnn.eq}
\end{equation}
where $M(E)$ is the traceless, symmetric $4\times 4$ matrix that is composed of
linear functions of the elements of the
$3\times 3$ cross-covariance matrix $E={X} \cdot {U}^{\top}$ of the data:
\begin{equation}
M(E) \! = \!
\left[ \begin{array}{cccc}
\!\! E_{xx} + E_{yy} + E_{zz} & E_{yz} - E_{zy} & E_{zx} - E_{xz} &
E_{xy} - E_{yx} \! \\
\! E_{yz} - E_{zy} & E_{xx} - E_{yy} - E_{zz} & E_{xy} + E_{yx} &
E_{zx} +E_{xz} \! \\
\! E_{zx} - E_{xz} & E_{xy} + E_{yx} & - E_{xx} + E_{yy} - E_{zz} &
E_{yz} + E_{zy} \! \\
\! E_{xy} - E_{yx} & E_{zx} +E_{xz} & E_{yz} + E_{zy} &
- E_{xx} - E_{yy} + E_{zz} \!\!
\end{array} \right] \ .
\label{basicHornnn.eq}
\end{equation}
In the usual method, the transformed loss function \Eqn{qM3qnn.eq}, now a
maximization problem, is solved by computing the maximal eigenvalue
$\lambda_{\mbox{\small opt}}$ of $M(E)$, and identifying its normalized eigenvector as
exactly $q_{\mbox{\small opt}}$, with $R_{\mbox{\small opt}} = R(q_{\mbox{\small opt}})$ solving the matching problem.
However, in every such calculation, there is an often-hidden
step that relates precisely to one of our main points in this paper: there are
\emph{always} fourteen submanifolds of possible quaternion solutions that
can obstruct obtaining a normalizable quaternion from $M(E)$ and
$\lambda_{\mbox{\small opt}}$! These are avoided by scaling one element of the unknown
eigenvector to unity, and solving the eigenvector equation for the three remaining
elements using Cramer's rule;
if that eigenvector element happens to vanish, this fails, and one sets the \emph{next}
element to unity, repeating until successful.
Whatever methods a library eigensystem program
uses to return a valid eigenvector of this system corresponding to the maximal
eigenvalue of $M(E)$, it will always be exactly equivalent to computing
the characteristic matrix and its adjugate,
\begin{align*}
\chi(E) & = \left[ M(E) \, - \, \lambda_{\mbox{\small opt}}\, I_{4} \right] \\
A(E) &= \mbox{Adjugate} \, ( \chi(E)) \ ,
\label{RMSDAdjugate.eq}
\end{align*}
finding the maximum-magnitude diagonal of $A(E)$, and normalizing that row to guarantee
the computation will not encounter one of the singular domains of unnormalizable
eigenvectors.
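The following minimal Python sketch (illustrative only; the helper names are ours)
carries out exactly this sequence for the 3D Match problem: form $E$, build $M(E)$
as in \Eqn{basicHornnn.eq}, subtract the maximal eigenvalue, compute the adjugate of
the characteristic matrix, and normalize its largest-diagonal row:
\begin{quote}
\begin{small}
\begin{verbatim}
import numpy as np

def profile_matrix(E):
    """Profile matrix M(E) of Eqn (basicHornnn.eq) from the 3x3 cross-covariance E."""
    Exx, Exy, Exz = E[0]
    Eyx, Eyy, Eyz = E[1]
    Ezx, Ezy, Ezz = E[2]
    return np.array([
        [Exx + Eyy + Ezz, Eyz - Ezy,        Ezx - Exz,        Exy - Eyx],
        [Eyz - Ezy,       Exx - Eyy - Ezz,  Exy + Eyx,        Ezx + Exz],
        [Ezx - Exz,       Exy + Eyx,       -Exx + Eyy - Ezz,  Eyz + Ezy],
        [Exy - Eyx,       Ezx + Exz,        Eyz + Ezy,       -Exx - Eyy + Ezz]])

def adjugate(A):
    """Classical adjugate (transpose of the cofactor matrix), valid even for singular A."""
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, 0), j, 1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

rng = np.random.default_rng(3)
X = rng.normal(size=(3, 100))                        # reference cloud
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q if np.linalg.det(Q) > 0 else -Q           # unknown rotation
U = R_true @ X + 0.02 * rng.normal(size=X.shape)     # noisy test cloud

E = X @ U.T                                          # cross-covariance E = X . U^T
M = profile_matrix(E)
lam_max = np.max(np.linalg.eigvalsh(M))
Adj = adjugate(M - lam_max * np.eye(4))              # adjugate of the characteristic matrix
row = Adj[np.argmax(np.abs(np.diag(Adj)))]           # row with largest-magnitude diagonal
q_opt = row / np.linalg.norm(row)                    # optimal quaternion, up to overall sign

# sanity check: q_opt reproduces the maximal eigenvector found by a library routine
w, V = np.linalg.eigh(M)
v_max = V[:, np.argmax(w)]
print(np.round(q_opt - np.sign(q_opt @ v_max) * v_max, 6))
\end{verbatim}\end{small} \end{quote}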
For our purposes, we now
rephrase the 3D Match problem to employ the quaternion adjugate variables $q_{ij} = q_i q_j$
instead of the quaternions $q_{i}$ themselves. Then, with $R(q)$'s quadratic form
in $q$ replaced by the adjugate form
\begin{equation}
R(q_{ij}) = \left[
\begin{array}{ccc}
q_{00} +q_{11}-q_{22} - q_{33} & 2 q_{12} - 2 q_{03} & 2 q_{13} + 2 q_{02} \\
2 q_{12} + 2 q_{03} & q_{00} -q_{11} + q_{22} - q_{33} & 2 q_{23} - 2 q_{01} \\
2 q_{13} - 2 q_{02} & 2 q_{23} + 2 q_{01} & q_{00} - q_{11} - q_{22} + q_{33}
\end{array} \right] \ ,
\label{qadjrmsdeq}
\end{equation}
we find that $\Delta(q_{ij})$ now defines a superficially \emph{linear} optimization problem,
in ten dimensions, that takes the form
\begin{align}
\Delta(q_{ij})\, = &\, \mathop{\rm tr}\nolimits R(q_{ij}) \cdot E \, = \, \sum_{i\le j} q_{ij} \, M(E)_{ij} \ .
\label{M3qadj.eq}
\end{align}
If all the $q_{ij}$ were independent up to an overall scale, the solution
$q_{ij} \propto M_{ij}$ would immediately be seen to maximize $\Delta$.
This opportunity is unfortunately obstructed by the fact that the ten
adjugate variables are not independent, but are related by the
seven constraints of \Eqn{TheqqConstraints.Eq}
that reduce the superficial problem
in ten free variables down to the required three parameters of 3D rotations.
We can now recast the problem traditionally solved using the maximal eigenvalue of $M(E)$
and its associated normalized eigenvector, which is just $q_{\mbox{\small opt}}$, in several ways. First,
we can simply use \Eqn{TheqqConstraints.Eq} to reduce all the ten $q_{ij}$ to functions
of just three independent adjugate variables such as $(q_{11},\, q_{22},\, q_{33})$ , and
require that all three corresponding derivatives of \Eqn{M3qadj.eq} vanish. This is an
effective solution with the drawback that there are eight alternative sign-permuted solutions
due to the square roots in the constraint equations; the correct values of
$(q_{11},\, q_{22},\, q_{33})$ appear in only one of these branches for any given data set, and
which branch that is appears to be indeterminate. In addition, the signs of the remaining
seven entries in the list of adjugate variables are indeterminate as well. The correct
adjugate matrix can always be found by checking all sign permutations substituted into
the optimization function \Eqn{M3qadj.eq} and keeping the choice giving the maximal
value, but that is an awkward algorithm compared to the maximal quaternion eigensystem
method.
\qquad
We note one other alternative approach that can be used, driven by the proof in Appendix
\ref{RMSDRotInvariance.app} that for \emph{error-free data}, the maximal eigenvalue of
$M(E)$ is independent of $R(q)$. Thus the maximal eigenvalue
is the same as the maximal eigenvalue of
$M(E_{0})$, where $E_{0} = X \cdot X^{\top}$ is the self-covariance of the reference data,
and that value is simply
\begin{equation}
\lambda_{\mbox{\small opt}} = \mathop{\rm tr}\nolimits E_{0} \ .
\end{equation}
A candidate quaternion applicable to error-free data thus emerges from the hybrid
characteristic equation
\[ \chi(E, E_{0}) = \left[ M(E) - \mathop{\rm tr}\nolimits(E_{0}) I_{4} \right] \]
upon computing the adjugate,
\[ A(\chi) = \mbox{Adjugate} \left(\chi(E,E_{0})\right) \ , \]
and computing the optimal quaternion $q_{\mbox{\small opt}}$ from the largest-magnitude row of $A(\chi)$.
For noise-containing data, the procedure becomes somewhat circular for the particular task
of 3D cloud-matching, as the rotation $R_{\mbox{\small opt}} = R(q_{\mbox{\small opt}})$ becomes inexact, and one
unavoidably has to compute a more complicated maximal eigenvalue to solve the Bar-Itzhack
problem, producing an optimal exact rotation matrix $R_{\mbox {\tiny BI}} \approx R(q_{\mbox{\small opt}})$ that
produces an acceptable solution to the 3D cloud matching problem.
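A brief numerical check of the error-free identity $\lambda_{\mbox{\small opt}} = \mathop{\rm tr}\nolimits E_{0}$
(an illustrative Python sketch, with our own helper names, assuming simulated data):
\begin{quote}
\begin{small}
\begin{verbatim}
import numpy as np

def profile_matrix(E):
    """M(E) of Eqn (basicHornnn.eq), written compactly."""
    z = np.array([E[1, 2] - E[2, 1], E[2, 0] - E[0, 2], E[0, 1] - E[1, 0]])
    M = np.zeros((4, 4))
    M[0, 0] = np.trace(E)
    M[0, 1:] = M[1:, 0] = z
    M[1:, 1:] = E + E.T - np.trace(E) * np.eye(3)
    return M

rng = np.random.default_rng(4)
X = rng.normal(size=(3, 80))                   # reference cloud
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q if np.linalg.det(Q) > 0 else -Q     # random rotation
U = R_true @ X                                 # error-free test cloud

lam_max = np.max(np.linalg.eigvalsh(profile_matrix(X @ U.T)))
print(lam_max, np.trace(X @ X.T))              # equal for error-free data
\end{verbatim}\end{small} \end{quote}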
\qquad
\subsection{3D to 2D Pose Estimation }
\label{3DPose.sec}
Finally, we turn our attention to the ``3D Pose'' pose-estimation problem that corresponds
most closely to the classic 3D RMSD squared-difference optimization.
We suppose we have a 3D point cloud reference set ${X}$ that
we consider as a list of $K$ columns of 3D points $\{\Vec{x}_{k}\}$, and
a 2D test set of points ${U}$, with 2D image-plane components $\{\Vec{u}_{k}\}$
that are considered as paired images of each point in the 3D cloud projected
in parallel from some to-be-determined camera orientation. Here we study only
the idealized case of the parallel projection described by a $2\times 3$ projection
matrix $P(q)$ that is extracted from the top two rows of a 3D rotation matrix; this
is of course quite relevant for applications like microscopy for which the effective
focal length relative to the scale of the data is infinite.
If we choose to express our rotation using \Eqn{Rofqq.eq} as $R(q)$, then
we may write the projection as
\begin{equation}
P(q) = \left[
\begin{array}{ccc}
{q_0}^2+{q_1}^2-{q_2}^2 - {q_3}^2 & 2 q_1 q_2 -2 q_0 q_3
& 2 q_1 q_3 +2 q_0 q_2 \\
2 q_1 q_2 + 2 q_0 q_3 & {q_0}^2-{q_1}^2 + {q_2}^2 - {q_3}^2
& 2 q_2 q_3 - 2 q_0 q_1 \\
\end{array} \right] \ ,
\label{qrotproj.eq}
\end{equation}
and the least-squares optimization target can be written
\begin{align} \label{3D2DPoseLSQ.eq}
\mathbf{S}_{\mbox{\small 3D Pose}} =&
\sum_{k=1}^{K} \| P(q) \cdot \Vec{x}_{k} -\Vec{u}_{k} \| ^{2}\ .
\end{align}
While the 3D point cloud matching loss function in Section \ref{3DMatch.sec}
can be reduced to the quadratic cross-term $\Delta$ and solved using
an optimal quaternion eigenvector, \emph{this approach fails for pose estimation}.
In the pose-estimation problem we can no longer
eliminate the quartic quaternion part of the optimization and the problem
becomes potentially much more complex.
The adjugate formalism now comes into play: we replace the individual quaternions
in \Eqn{3D2DPoseLSQ.eq}, as they appear in \Eqn{qrotproj.eq},
by their adjugate quadratic forms, $q_{i}q_{j} \to q_{ij}$, so our adjugate-valued
projection matrix becomes
\begin{equation}
P(q_{ij}) = \left[
\begin{array}{ccc}
q_{00} +q_{11}-q_{22} - q_{33} & 2 q_{12} - 2 q_{03} & 2 q_{13} + 2 q_{02} \\
2 q_{12} + 2 q_{03} & q_{00} -q_{11} + q_{22} - q_{33} & 2 q_{23} - 2 q_{01} \\
\end{array} \right] \ .
\label{qadjproj.eq}
\end{equation}
We note that the projection matrix lacks the bottom-row matrix elements of \Eqn{qadjrmsdeq},
built from the combinations $q_{13} -q_{02}$, $q_{23} + q_{01}$, and $q_{00} -q_{11} - q_{22} + q_{33}$,
so the number of constraints needed can in principle be reduced.
Secondly, those variables are in a specific sense
recoverable because, since $P(q_{ij})$ is part of an orthonormal $3\times 3$ matrix, the
missing bottom row can be computed, if the first two rows have been
determined, by taking the cross-product of the two rows of ${P(q_{ij})}_{\mbox{\small opt}}$ and
normalizing the result (if necessary) to get the missing last row of a full orthonormal rotation matrix.
The resulting form of \Eqn{3D2DPoseLSQ.eq}
now becomes a quadratic function in the adjugate variables $q_{ij}$ that can be
solved, in principle, using least squares algebraic
methods, resulting in a computable adjugate matrix from
which a guaranteed non-singular quaternion can be extracted.
Our loss function, written in terms of the measured data components summed
over $K$, is complicated by the appearance of both linear and quadratic terms
in $q_{ij}$, and takes the form:
\begin{align}
\MoveEqLeft{
\mathbf{S}_{\mbox{\small 3D Pose}}(q_{ij},x,y,z,u,v) =
} \nonumber \\
&{q_{00}}^2 \,{x\!\cdot \!x} +{q_{11}}^2 {x\! \cdot \! x}+{q_{22}}^2 {x\! \cdot \! x} +{q_{33}}^2 {x\! \cdot \! x}
+{q_{00}}^2\, {y\!\cdot \!y} +{q_{11}}^2 {y\! \cdot \! y}+{q_{22}}^2 {y\! \cdot \! y}
+{q_{33}}^2 {y\! \cdot \! y} \nonumber \\ &
-4 {q_{00}} {q_{01}} \, {y\! \cdot \! z} +4 {q_{00}} {q_{02}}\, {x\! \cdot \! z}+2 {q_{00}} {q_{11}} \,{x\! \cdot \! x}
-2 {q_{00}} {q_{11}}\, {y\! \cdot \! y}+8 {q_{00}} {q_{12}}\, {x\! \cdot \! y} \nonumber \\ &
+4 {q_{00}} {q_{13}}\, {x\! \cdot \! z}-2 {q_{00}} {q_{22}}\, {x\! \cdot \! x}
+2 {q_{00}} {q_{22}}\, {y\! \cdot \! y}+4 {q_{00}} {q_{23}}\, {y\! \cdot \! z}
-2 {q_{00}} {q_{33}}\, {x\! \cdot \! x}-2 {q_{00}} {q_{33}}\, {y\! \cdot \! y} \nonumber \\ &
-2 {q_{00}}\, {u\! \cdot \! x}-2 {q_{00}}\, {v\! \cdot \! y}
+4 {q_{01}}^2 {z\! \cdot \! z}-8 {q_{01}} {q_{03}}\, {x\! \cdot \! z}
+4 {q_{01}} {q_{11}}\, {y\! \cdot \! z}-8 {q_{01}} {q_{12}}\, {x\! \cdot \! z} \nonumber \\ &
-4 {q_{01}} {q_{22}}\, {y\! \cdot \! z}
-8 {q_{01}} {q_{23}}\, {z\! \cdot \! z} +4 {q_{01}} {q_{33}}\, {y\! \cdot \! z}
+4 {q_{01}}\, {v\! \cdot \! z}+4 {q_{02}}^2 {z\! \cdot \! z}-8 {q_{02}} {q_{03}}\, {y\! \cdot \! z}
\nonumber \\ &
+4 {q_{02}} {q_{11}}\, {x\! \cdot \! z} +8 {q_{02}} {q_{12}}\, {y\! \cdot \! z}
+8 {q_{02}} {q_{13}}\, {z\! \cdot \! z}-4 {q_{02}} {q_{22}}\, {x\! \cdot \! z}
-4 {q_{02}} {q_{33}}\, {x\! \cdot \! z}-4 {q_{02}}\, {u\! \cdot \! z} \nonumber \\ &
+4 {q_{03}}^2 {x\! \cdot \! x}
+4 {q_{03}}^2 {y\! \cdot \! y} -8 {q_{03}} {q_{11}}\, {x\! \cdot \! y}
+8 {q_{03}} {q_{12}}\, {x\! \cdot \! x} -8 {q_{03}} {q_{12}}\, {y\! \cdot \! y}
-8 {q_{03}} {q_{13}}\, {y\! \cdot \! z} \nonumber \\ &
+8 {q_{03}} {q_{22}}\, {x\! \cdot \! y}+8 {q_{03}} {q_{23}}\, {x\! \cdot \! z}
+4 {q_{03}}\, {u\! \cdot \! y} -4 {q_{03}}\, {v\! \cdot \! x}
+4 {q_{11}} {q_{13}}\, {x\! \cdot \! z}-2 {q_{11}} {q_{22}}\, {x\! \cdot \! x} \nonumber \\ &
-2 {q_{11}} {q_{22}}\, {y\! \cdot \! y}
-4 {q_{11}} {q_{23}}\, {y\! \cdot \! z}-2 {q_{11}} {q_{33}}\, {x\! \cdot \! x}
+2 {q_{11}} {q_{33}}\, {y\! \cdot \! y}
-2 {q_{11}}\, {u\! \cdot \! x}+2 {q_{11}}\, {v\! \cdot \! y} \nonumber \\ &
+4 {q_{12}}^2 {x\! \cdot \! x}+4 {q_{12}}^2 {y\! \cdot \! y}
+8 {q_{12}} {q_{13}}\, {y\! \cdot \! z}+8 {q_{12}} {q_{23}}\, {x\! \cdot \! z}
-8 {q_{12}} {q_{33}}\, {x\! \cdot \! y} -4 {q_{12}}\, {u\! \cdot \! y} \nonumber \\ &
-4 {q_{12}}\, {v\! \cdot \! x}
+4 {q_{13}}^2 {z\! \cdot \! z}-4 {q_{13}} {q_{22}}\, {x\! \cdot \! z}
-4 {q_{13}} {q_{33}}\, {x\! \cdot \! z} -4 {q_{13}}\, {u\! \cdot \! z}
+4 {q_{22}} {q_{23}}\, {y \! \cdot \! z} \nonumber \\ &
+2 {q_{22}} {q_{33}}\, {x\! \cdot \! x}-2 {q_{22}} {q_{33}}\, {y\! \cdot \! y}
+2 {q_{22}}\, {u\! \cdot \! x}-2 {q_{22}}\, {v\! \cdot \! y}
+4 {q_{23}}^2 {z\! \cdot \! z}-4 {q_{23}} {q_{33}}\, {y\! \cdot \! z} \nonumber \\ &
-4 {q_{23}}\, {v\! \cdot \! z}
+2 {q_{33}}\, {u\! \cdot \! x} +2 {q_{33}}\, {v\! \cdot \! y}+{u\! \cdot \! u}+{v\! \cdot \! v} \label{3DposeLossFcn.eq} \ .
\end{align}
(Note that it is the sum of the $q_{ii}$, not the sum of the ${q_{ii}}^2$, that equals unity, so there
is no simplification in the first line.)
Parallel to the 2D case, one cannot complete the squares to recover a simpler quadratic form
in a transformed variable set because the $10 \times 10$ matrix incorporating
the quadratic products of the adjugate variables is singular.
As usual, the redundancy of the adjugate variables has to be reduced by the
imposition of constraints such as those in \Eqn{TheqqConstraints.Eq}.
However, we are potentially lacking some degrees of freedom
in the projection-matrix adjugate variables, so if we try to constrain all the
variables, we may come up with no solutions. Thus it appears possible
that we do not need (or cannot utilize) all seven adjugate constraints
in order to obtain the canonical three rotational degrees of freedom in a rotation.
\qquad
%
%
\subsubsection{Solving the 3D Pose Least-Squares Loss Function Algebraically}
We can get full solutions of the least squares problem
defined by the general form of \Eqn{UniversalArgmin.eq}
using a specific choice of the adjugate constraints in \Eqn{TheqqConstraints.Eq}.
When we impose the four constraints containing
the adjugate variable $q_{00}$, with \verb|lossPose3DAdj| denoting
the algebraic expression \Eqn{3DposeLossFcn.eq} with symbols for the
cross-covariance terms $x\cdot y$, etc., this Mathematica
expression yields a list of eight candidate solutions:
\begin{quote}
\begin{small}
\begin{verbatim}
the3D2DAdjSolns =
Module[{eqn = lossPose3DAdj},
Solve[ { D[eqn, q00] == 0,
D[eqn, q11] == 0, D[eqn, q22] == 0, D[eqn, q33] == 0,
D[eqn, q01] == 0, D[eqn, q02] == 0, D[eqn, q03] == 0,
D[eqn, q23] == 0, D[eqn, q13] == 0, D[eqn, q12] == 0,
q00 + q11 + q22 + q33 == 1,
q00 q11 == q01 q01, q00 q22 == q02 q02, q00 q33 == q03 q03
(* q22 q33==q23 q23,q11 q33==q13 q13,q11 q22==q12 q12 *) },
     {q00, q11, q22, q33, q01, q02, q03, q23, q13, q12}]]
\end{verbatim}\end{small} \end{quote}
(The three unused constraints are commented out, retained for later reference.)
The resulting set of eight algebraic expressions can be tested by substituting
randomly generated rotations applied to a cloud of points, and adding noise
to generate a 2D projected data image. We test each of the list of 8 against
100 data sets to see, first,
whether they produce an adjugate matrix that provides a solution,
and then to see whether the resulting solutions
obey all seven constraints in \Eqn{TheqqConstraints.Eq}. For exact data,
four of the solutions are usually complex, and thus unusable. Four of the
solutions are always real, and, strangely, exactly \emph{one} of them
always produces the quaternion (via the adjugate procedure) that was used
to generate the data. However, we have more work to do to achieve a
deterministic algorithm. For pure data, all the constraints are in fact obeyed,
while for errorful data, the constraint identities that
were \emph{enforced} are always maintained, while those that were not
enforced (commented out in the \verb|the3D2DAdjSolns| expression)
are no longer valid. We can do better, but first we need some notation,
as the immediate algebraic solutions in some cases are ten megabytes
in length.
We have found a useful symbolic representation of the first four solutions,
one of which always gives the right quaternion for error-free data sets. We
introduce first the $5\times 5$ matrix of all term-by-term cross-covariance
elements, including the self-covariance elements, of $[x,y,z]$ and $[u,v]$
summed over $k$, denoting $\sum x_k x_k \to \text{xx}$, $\sum x_k y_k \to \text{xy}$, \ldots,
$\sum z_k v_k \to \text{vz}$, so we have
\begin{align}
C &= \left[ \begin{array}{ccccc}
\text{xx}& \text{xy} & \text{xz}& \text{ux} & \text{vx} \\
\text{xy}& \text{yy} & \text{yz}& \text{uy} & \text{vy} \\
\text{xz}& \text{yz} &\text{zz}& \text{uz} & \text{vz} \\
\text{ux}& \text{uy} & \text{uz} &\text{uu} &\text{uv} \\
\text{vx}& \text{vy} & \text{vz} & \text{uv} &\text{vv} \\
\end{array} \right] \ .
\label{5by5ccarray.eq}
\end{align}
All of the algebraic solutions reduce to ratios of order 3 products of the elements
of cross-covariances \Eqn{5by5ccarray.eq} or square roots of appropriate powers
of such elements. We define the $3 \times 3$ subdeterminants of \Eqn{5by5ccarray.eq}
using the notation
\begin{equation} \label{CCsubdets.eq}
\begin{aligned}
& \ \ d_{1}\to \left[
\begin{array}{ccc}
\text{xx} & \text{xy} & \text{xz} \\
\text{xy} & \text{yy} & \text{yz} \\
\text{xz} & \text{yz} & \text{zz} \\
\end{array} \right] \\
d_{2}\to \left[ \begin{array}{ccc}
\text{xx} & \text{xy} & \text{ux} \\
\text{xy} & \text{yy} & \text{uy} \\
\text{xz} & \text{yz} & \text{uz} \\
\end{array} \right] & \ \
d_{3} \to \left[ \begin{array}{ccc}
\text{xx} & \text{xy} & \text{vx} \\
\text{xy} & \text{yy} & \text{vy} \\
\text{xz} & \text{yz} & \text{vz} \\
\end{array} \right] &
d_{4}\to \left[ \begin{array}{ccc}
\text{xx} & \text{xz} & \text{ux} \\
\text{xy} & \text{yz} & \text{uy} \\
\text{xz} & \text{zz} & \text{uz} \\
\end{array} \right] \ \ \ \\
d_{5}\to \left[ \begin{array}{ccc}
\text{xx} & \text{xz} & \text{vx} \\
\text{xy} & \text{yz} & \text{vy} \\
\text{xz} & \text{zz} & \text{vz} \\
\end{array} \right] & \ \
d_{6}\to \left[ \begin{array}{ccc}
\text{xx} & \text{ux} & \text{vx} \\
\text{xy} & \text{uy} & \text{vy} \\
\text{xz} & \text{uz} & \text{vz} \\
\end{array} \right] &
d_{7} \to \left[ \begin{array}{ccc}
\text{xy} & \text{xz} & \text{ux} \\
\text{yy} & \text{yz} & \text{uy} \\
\text{yz} & \text{zz} & \text{uz} \\
\end{array} \right] \ \ \ \\
d_{8} \to \left[ \begin{array}{ccc}
\text{xy} & \text{xz} & \text{vx} \\
\text{yy} & \text{yz} & \text{vy} \\
\text{yz} & \text{zz} & \text{vz} \\
\end{array} \right] & \ \
d_{9}\to \left[ \begin{array}{ccc}
\text{xy} & \text{ux} & \text{vx} \\
\text{yy} & \text{uy} & \text{vy} \\
\text{yz} & \text{uz} & \text{vz} \\
\end{array} \right]
& d_{10}\to \left[ \begin{array}{ccc}
\text{xz} & \text{ux} & \text{vx} \\
\text{yz} & \text{uy} & \text{vy} \\
\text{zz} & \text{uz} & \text{vz} \\
\end{array} \right] \ , \\
\end{aligned}
\end{equation}
where $d_{1}$ plays a special role as the \emph{self-covariance} of the reference
cloud's point values.
The four usable versions of the least-squares solutions differ by pairs of signs
of square roots in the expressions for $(q_{01},q_{02},q_{23},q_{13})$,
so we can write the 3D pose least squares solutions as a function of
their corresponding square root signs, which we denote by $s_{ij}$.
We can then express the four solutions in terms of
the combinations of signs that distinguish one
from another as $\omega(s_{01},s_{02},s_{23},s_{13})$, where
\begin{equation}
\left. \begin{aligned}
\mbox{\rm soln}(1) &= \omega(+1,+1; \, +1,+1) \\
\mbox{\rm soln}(2) &= \omega(+1,-1; \, +1,-1) \\
\mbox{\rm soln}(3) &= \omega(-1,+1; \, -1,+1) \\
\mbox{\rm soln}(4) &= \omega(-1, -1; \, -1,-1)
\end{aligned} \ \ \right\} \ .
\end{equation}
Then a more explicit form of the solutions, in terms of the cross-covariance
determinants of \Eqn{CCsubdets.eq}, takes the form
\begin{equation}
\label{pose3Dsoln.eq}
\begin{aligned}
\begin{array}{lr}
\hspace*{-5.25in}{\omega(s_{01},s_{02},s_{23},s_{13}) = } & \ \\[.05in]
\end{array}\\
\left[ \begin{array}{l}
{q_{00}}\to \frac{\sqrt{{d_{1}}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}+{d_1}
({d_7}-{d_5})}{4 {d_1}^2} \\[0.1in]
{q_{11}}\to \frac{{d_1} (2 {d_1}+{d_5}+{d_7})-\sqrt{{d_1}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}}{4 {d_1}^2} \\[0.1in]
{q_{22}}\to -\frac{\sqrt{{d_1}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}+{d_1} (-2
{d_1}+{d_5}+{d_7})}{4 {d_1}^2} \\[0.1in]
{q_{33}}\to \frac{\sqrt{{d_1}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}+{d_1}
({d_5}-{d_7})}{4 {d_1}^2} \\[0.1in]
{q_{01}}\to -\frac{{s_{01}} \sqrt{2 ({d_1}+{d_5}) \left(\sqrt{{d_1}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}+{d_1}
({d_7}-{d_5})\right)-{d_1} ({d_4}+{d_8})^2}}{4 \sqrt{{d_1}^3}} \\[0.15in]
{q_{02}}\to -\frac{{s_{02}} \sqrt{2 ({d_1}-{d_7}) \left(\sqrt{{d_1}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}+{d_1}
({d_7}-{d_5})\right)-{d_1} ({d_4}+{d_8})^2}}{4 \sqrt{{d_1}^3}} \\[0.15in]
{q_{03}}\to \frac{{d_4}+{d_8}}{4 {d_1}} \\[0.1in]
{q_{23}}\to \frac{{d_3}}{2 {d_1}}-\frac{{s_{23}} \sqrt{2 ({d_1}+{d_5})
\left(\sqrt{{d_1}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}+{d_1}
({d_7}-{d_5})\right)-{d_1} ({d_4}+{d_8})^2}}{4 \sqrt{{d_1}^3}} \\[0.15in]
{q_{13}}\to \frac{{d_2}}{2 {d_1}} + \frac{{s_{13}} \sqrt{2 ({d_1}-{d_7}) \left(\sqrt{{d_1}^2
\left(-{d_4}+({d_5}-{d_7})^2-{d_8}\right)}+{d_1}
({d_7}-{d_5})\right)-{d_1} ({d_4}+{d_8})^2}}{4 \sqrt{{d_1}^3}} \\[0.1in]
{q_{12}}\to \frac{{d_8}-{d_4}}{4 {d_1}} \\
\end{array} \right] \ .
\end{aligned}
\end{equation}
With careful examination and experimentation, the daunting form of \Eqn{pose3Dsoln.eq} reveals
some remarkable structure. If one takes a list of error-free data sets and
evaluates all four functions in \Eqn{pose3Dsoln.eq} \emph{against the pose loss function} \Eqn{3DposeLossFcn.eq}, all four least-squares alternate solutions produce a \emph{perfect
match}.
This seems impossible until one carefully checks the steps, and discovers that, although the elements
$(q_{01},\,q_{02},\,q_{23},\,q_{13})$ differ among the four functions, when all are combined
together to form the $2 \times 3$ \emph{projection matrix} $P(q_{ij})$, \emph{the differences cancel
and all four produce the same projection with no square roots}.
Since only the top two rows enter into the least squares loss formula, this is completely logical:
our solution only asks to minimize those two rows, and the third row does not appear
at all; in fact, if we substitute the four versions of the $q_{ij}$ solutions in \Eqn{pose3Dsoln.eq}
into the third row of the adjugate-parameterized rotation matrix \Eqn{qadjrmsdeq}, they are
\emph{all different}. Now we come to a procedure that remarkably takes us full circle
back to the calculations for optimal matches to noisy rotation matrices in Section
\ref{BarItzh-3D-noise.sec}. First, since we get a perfect first two rows of the rotation matrix
for all four solutions, we can simply construct the third row by taking the cross-product of
the two projection-matrix rows. In the second step, we observe that for noisy
test data, the perfect behavior of \Eqn{pose3Dsoln.eq} breaks down and, in general, we
do not know exactly what process is going on. However, the basic fact is that what
we need is an optimal rotation matrix that is the \emph{ideal} approximation, with perfect
orthonormality, to our solution that (so far) behaves well only for perfect data. We already know how to do that, from our work in previous sections on extracting adjugate vectors,
and hence quaternions, using methods like those in the introductory sections.
For pedagogical reasons that will become clear, we first write
the initial form of the projection-matrix solution in terms of the cross-covariance
determinants in \Eqn{CCsubdets.eq} as follows:
\begin{equation}
\tilde{P}(x,y,z;u,v) = \left[ \begin{array}{ccc}
\displaystyle\frac{\textstyle d_{7}}{\textstyle d_{1}} &
- \displaystyle\frac{\textstyle d_{4}}{\textstyle d_{1}}&
\displaystyle \frac{\textstyle d_{2}}{\textstyle d_{1}}\\[0.15in]
\displaystyle \frac{\textstyle d_{8}}{\textstyle d_{1}} &
- \displaystyle \frac{\textstyle d_{5}}{\textstyle d_{1}}&
\displaystyle \frac{\textstyle d_{3}}{\textstyle d_{1}}\\
\end{array} \right] \ .
\label{PoseSolnPmatUN.eq}
\end{equation}
On any error-free data set, this projection remarkably is a perfect least-squares solution,
is orthonormal, and produces a loss that vanishes to roughly 30 orders of magnitude (of order $10^{-30}$).
If we ignore the disagreements among the four $q_{ij}$ solutions over the form of the third
rotation-matrix row, we can simply take the cross-product of the two rows,
that is $P_{1} \times P_{2}$, and that will give a unique answer for
a third row that will also be orthonormal on pure data, establishing
our initial form for the full 3D Pose rotation matrix solution of the form
\begin{equation}
\tilde{R}(x,y,z;u,v) = \left[ \begin{array}{ccc}
\displaystyle\frac{\textstyle d_{7}}{\textstyle d_{1}} &
- \displaystyle\frac{\textstyle d_{4}}{\textstyle d_{1}}&
\displaystyle \frac{\textstyle d_{2}}{\textstyle d_{1}}\\[0.15in]
\displaystyle \frac{\textstyle d_{8}}{\textstyle d_{1}} &
- \displaystyle \frac{\textstyle d_{5}}{\textstyle d_{1}}&
\displaystyle \frac{\textstyle d_{3}}{\textstyle d_{1}}\\[0.15in]
\displaystyle \frac{\textstyle d_{6}}{\textstyle d_{1}} &
\displaystyle \frac{\textstyle d_{9}}{\textstyle d_{1}}&
\displaystyle \frac{\textstyle d_{10}}{\textstyle d_{1}}\\
\end{array} \right] \ .
\label{PoseRotSolnPmatUN.eq}
\end{equation}
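The construction of $\tilde{R}$ from the data can be checked numerically with a short
sketch (Python, illustrative, with our own names); for error-free data the result
reproduces the simulating rotation exactly, in line with the discussion above:
\begin{quote}
\begin{small}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(3, 60))                          # 3D reference cloud [x; y; z]
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q if np.linalg.det(Q) > 0 else -Q            # unknown camera rotation
u, v = (R_true @ X)[:2]                               # error-free 2D projected image

x, y, z = X
xx, xy, xz = x @ x, x @ y, x @ z
yy, yz, zz = y @ y, y @ z, z @ z
ux, uy, uz = u @ x, u @ y, u @ z
vx, vy, vz = v @ x, v @ y, v @ z

det3 = lambda *cols: np.linalg.det(np.column_stack(cols))
c_x, c_y, c_z = (xx, xy, xz), (xy, yy, yz), (xz, yz, zz)   # columns of the self-covariance
c_u, c_v = (ux, uy, uz), (vx, vy, vz)

# the ten 3x3 subdeterminants of Eqn (CCsubdets.eq)
d1      = det3(c_x, c_y, c_z)
d2, d3  = det3(c_x, c_y, c_u), det3(c_x, c_y, c_v)
d4, d5  = det3(c_x, c_z, c_u), det3(c_x, c_z, c_v)
d6      = det3(c_x, c_u, c_v)
d7, d8  = det3(c_y, c_z, c_u), det3(c_y, c_z, c_v)
d9, d10 = det3(c_y, c_u, c_v), det3(c_z, c_u, c_v)

# initial rotation estimate of Eqn (PoseRotSolnPmatUN.eq)
R_tilde = np.array([[d7, -d4, d2],
                    [d8, -d5, d3],
                    [d6,  d9, d10]]) / d1
print(np.round(R_tilde - R_true, 6))                   # ~0 for error-free data
\end{verbatim}\end{small} \end{quote}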
Why must we call this our ``initial form'' instead of our final solution?
Our issue here is virtually identical to the distinctions we found in
Sections \ref{3DdirectSoln.sec}, \ref{3DVariationalSoln.sec},
and \ref{BarItzh-3D-noise.sec} in the treatment of exact rotation
matrices vs error-containing measurements of rotation matrices.
When we use error-containing data for the pose problem, the perfect
match of \Eqn{PoseSolnPmatUN.eq} and its extension to
an actual $3\times 3$ camera model matrix in \Eqn{PoseRotSolnPmatUN.eq}
breaks down, just as it did when we introduced the data-generic Bar-Itzhack
method in Section \ref{BarItzh-3D-noise.sec}. As soon as we insert data
with errors in these equations, the different components are \emph{not even
normalized to unity}, much less orthogonal. This cannot be the optimal answer
for a rotation matrix placing a noisy 2D point image into a corresponding
3D cloud scene. It will still be a least-squares solution minimizing the cost
function \Eqn{3D2DPoseLSQ.eq}, but since it does not preserve the properties
of a rotation matrix, it will not actually correspond to an optimal \emph{rotation},
which is what we require of the pose estimation problem.
Finally, we can \emph{reverse-engineer} a new version of the adjugate variables
in \Eqn{pose3Dsoln.eq}, which were found by hand-solving the least-squares optimization. We know that \Eqn{PoseRotSolnPmatUN.eq}
corresponds to the adjugate variables via \Eqn{qadjrmsdeq}, so if we simply solve that relation for
the $q_{ij}$, we find the adjugate variables directly in terms of the cross-covariance determinants
that produce \Eqn{PoseRotSolnPmatUN.eq}:
\begin{equation} \label{cleanPose3Dsoln.eq}
\left. \begin{array}{l}
{q_{00}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_1}+{d_{10}}-{d_5}+{d_7}) \\ [0.1in]
{q_{11}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_1}-{d_{10}}+{d_5}+{d_7}) \\ [0.1in]
{q_{22}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_1}-{d_{10}}-{d_5}-{d_7}) \\ [0.1in]
{q_{33}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_1}+{d_{10}}+{d_5}-{d_7}) \\ [0.1in]
{q_{01}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_9}-{d_3}) \\ [0.1in]
{q_{02}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_2}-{d_6}) \\ [0.1in]
{q_{03}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_4}+{d_8}) \\ [0.1in]
{q_{23}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_3}+{d_9}) \\ [0.1in]
{q_{13}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_2}+{d_6}) \\ [0.1in]
{q_{12}}\to \frac{\textstyle 1}{\textstyle 4 d_1} ({d_8}-{d_4}) \\
\end{array} \right\} \ .
\end{equation}
\mypar{The Adjugate and the Solution to the 3D Pose Estimation Problem.} \nopagebreak
Let us state our problem clearly: we have a least squares solution that works on all data,
but that solution corresponds to a rotation matrix only for perfect data. This latter behavior
is almost certainly a manifestation of the rotation-invariant eigenvalues that occur for
perfect data in the 3D cloud alignment problem of Section \ref{3DMatch.sec}, outlined
in Appendix \ref{RMSDRotInvariance.app}. To solve the problem, we must
\emph{restrict the subspace of solutions to pure rotations}. But we know \emph{exactly}
how to accomplish that! We simply take our ``good approximation,'' namely our
initial solution $\tilde{R}(\Vec{x};\Vec{u})$ given in \Eqn{PoseRotSolnPmatUN.eq},
consider it as an \emph{error-containing rotation matrix}, and apply the Bar-Itzhack
optimization as a \emph{second iteration optimization} to produce a new quaternion
adjugate corresponding to the perfect rotation matrix \emph{best approximating}
our initial formula \Eqn{PoseRotSolnPmatUN.eq}; this rotation will then apply
in both the case of perfect data and in the more general case when, as we can see,
the data themselves produce a completely reasonable example of an inexact
rotation matrix that must be optimized. We would argue that no more accurate
least-squares-related pure rotation matrix solving the pose estimation problem can
be found; without question this solves the perfect-data pose problem, and very
plausibly is the best solution to the errorful-data pose estimation problem.
For completeness, we review the actual steps producing a closed form algebraic
solution to the 3D Pose task. First, we examine the matrix \Eqn{PoseRotSolnPmatUN.eq},
which in the errorful-data case effectively is the cross-covariance matrix
appearing in the 3D Match task \Eqn{basic3DCross.eq}. While the expression $\tilde{R}$ of
\Eqn{PoseRotSolnPmatUN.eq} is a rotation matrix for perfect data, and continues
to give a least squares solution minimizing \Eqn{3D2DPoseLSQ.eq} for noisy data,
it no longer obeys all the constraints needed to make it a valid rotation matrix.
We can thus apply the Bar-Itzhack optimization we explored earlier to find the
\emph{nearest} rotation matrix to \Eqn{PoseRotSolnPmatUN.eq}. Since the
Bar-Itzhack loss function needs the \emph{inverse} of the desired approximate
matrix to find the quaternion of the closest pure rotation matrix, we now compute
the profile matrix of the \emph{transpose} of $\tilde{R}$, \Eqn{PoseRotSolnPmatUN.eq},
which, from \Eqn{basicHornnn.eq}, now becomes:
\begin{equation}
M(x,y,z; u,v) \! = \!
\left[ \begin{array}{cccc}
\!\! d_{7} - d_{5} + d_{10} &-( d_{9} - d_{3} )& -( d_{2} - d_{6})& -( d_{8} + d_{4}) \! \\
\! -( d_{9} - d_{3} )& d_{7} + d_{5} - d_{10} & d_{8} - d_{4} & d_{6} +d_{2} \! \\
\! -( d_{2} - d_{6}) & d_{8} - d_{4} & - d_{7} - d_{5} - d_{10} & d_{3} + d_{9} \! \\
\! -( d_{8} +d_{4}) & d_{6} +d_{2} & d_{3} + d_{9} & - d_{7} + d_{5} + d_{10} \!\!
\end{array} \right] \ ,
\label{poseHornrot.eq}
\end{equation}
that is, the last three entries of the first column and the last three entries of the first
row are negated to correspond to the transposed matrix ${\tilde{R}}^{\top}$.
We then compute the maximal eigenvalue using any method we like, but we note that
it is not hard to compute the analytic algebraic formula using the Cardano equations \citep{Hanson:ib5072}. Given that eigenvalue
\begin{equation} \label{maxeigMxyzyv.eq}
\lambda_{\mbox{\footnotesize max}} = \mbox{\bf (Maximal Eigenvalue) }\left( M(x,y,z; u,v)\right) \ ,
\end{equation}
we form the characteristic matrix $\chi$ with vanishing determinant by subtracting
$\lambda_{\mbox{\footnotesize max}}$,
\begin{equation} \label{3DposeCharMat.eq}
\chi(x,y,z;u,v) = \left[ M(x,y,z;uv) - \lambda_{\mbox{\footnotesize max}} \; I_{4} \right] \ .
\end{equation}
Recall that the critical feature is the maximal eigenvector, whose normalized value is the
quaternion giving the optimal solution for the sought-for rotation matrix. As usual, we now
just compute the adjugate, which, up to a normalization, will now always be four copies
of the needed optimal quaternion,
\begin{equation} \label{poseHornrotAdj.eq}
A(x,y,z;u,v) \, = \, \mbox{Adjugate}(\chi(x,y,z;u,v)) =
\left[ \begin{array}{cccc}
{q_0}^2 & q_0 q_1 & q_0 q_2 & q_0 q_3 \\
q_0 q_1 & {q_1}^2 & q_1 q_2 & q_1 q_3 \\
q_0 q_2 & q_1 q_2 & {q_2}^2 & q_2 q_3 \\
q_0 q_3 & q_1 q_3 & q_2 q_3 & {q_3}^2 \\
\end{array} \right] \ .
\end{equation}
The final answer is found by choosing a nonsingular
row from the adjugate $A(x,y,z;u,v)$ for normalization to determine $q_{\mbox{\small opt}}$:
\begin{equation} \label{maxeigvecQ.eq}
q_{\mbox{\small opt}} = \mbox{\bf (Normalize Row with Largest Diagonal)}\left( A(x,y,z; u,v)\right) \ .
\end{equation}
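{\bf Numerical sketch: } The four steps above translate directly into a few lines of code. The following Python/NumPy sketch is illustrative only: the symmetric $4\times 4$ profile matrix \texttt{M} of \Eqn{poseHornrot.eq} is assumed to be supplied, and the overall sign of the returned quaternion is immaterial since $q$ and $-q$ describe the same rotation.
\begin{verbatim}
import numpy as np

def quaternion_from_profile(M):
    # Maximal eigenvalue of the symmetric profile matrix M.
    lam_max = np.max(np.linalg.eigvalsh(M))
    # Singular characteristic matrix chi = M - lam_max * I.
    chi = M - lam_max * np.eye(4)
    # Adjugate of chi via cofactors (chi is singular, so det(chi)*inv(chi)
    # is not available); adj[j, i] is the cofactor C_ij.
    adj = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            minor = np.delete(np.delete(chi, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    # Each row of adj is proportional to the optimal quaternion; normalize
    # the row whose diagonal entry has the largest magnitude.
    row = np.argmax(np.abs(np.diag(adj)))
    return adj[row] / np.linalg.norm(adj[row])
\end{verbatim}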
We noted in previous simpler examples that, while $q_{\mbox{\small opt}}$ can be very complicated,
reassembling the quaternions
into the rotation matrix itself, $R_{\mbox{\small opt}} = R(q_{\mbox{\small opt}})$, can result in a simplified final form;
we have not yet accomplished that explicitly for the 3D problem, but a differentiable
final form of the optimal quaternion can be obtained using the noted algebraic form
of the solution.
\qquad
{\bf Note: } One finds experimentally an odd feature of this optimization process: because
our fundamental standard for optimization is the Frobenius norm of the $3\times 3$ rotation
matrix differences, the appropriate test of differences is based on the \emph{rotation matrices}.
If one compares the \emph{quaternions} resulting from \Eqn{maxeigvecQ.eq} using a
\emph{quaternion distance measure}, one occasionally finds quaternions that are nearly
inverses of one another, and thus very far apart. However, examining the resulting rotation
matrices, which will also be close to inverses of the expected matrix, one finds that, indeed,
the inverse-like rotation matrix will correspond to a \emph{smaller} Frobenius norm, and
so is technically correct. This phenomenon, which seems to appear for $|q_0| \ll 1$,
is one of many odd features appearing when we add substantial noise to rotation matrices.
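{\bf Numerical sketch: } A well-known ingredient of such discrepancies is the double covering $q \leftrightarrow -q$: two quaternions can be maximally distant as vectors while encoding the identical rotation matrix, which is one reason comparisons are best made on the rotation matrices themselves. The minimal Python sketch below (our own illustration; \texttt{rotation\_from\_quaternion} is a hypothetical helper implementing the standard unit-quaternion-to-rotation-matrix formula) makes this explicit.
\begin{verbatim}
import numpy as np

def rotation_from_quaternion(q):
    # Standard 3x3 rotation matrix from a unit quaternion q = (q0, q1, q2, q3).
    q0, q1, q2, q3 = q
    return np.array(
        [[q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
         [2*(q1*q2 + q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
         [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3]])

q = np.random.randn(4)
q /= np.linalg.norm(q)
print(np.linalg.norm(q - (-q)))                       # 2.0: maximally far as vectors
print(np.linalg.norm(rotation_from_quaternion(q)
                     - rotation_from_quaternion(-q))) # 0.0: identical rotations
\end{verbatim}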
\qquad
\comment{
\begin{figure}[h!]
\vspace{0.0in}
\figurecontent{
\centering
\includegraphics[width=4.5in]{lsq3DSVDPoseLosses.eps}}
\caption[]{\ifnum0=1 {\bf lsq3DSVDPoseLosses.eps. }\fi
\footnotesize
Plotting the values of the 3D Pose loss function \Eqn{3D2DPoseLSQ.eq}
for 100 randomly chosen, noise injected, reference and test pose data sets.
The projection values are chosen from [Green] the original rotation used
in the creation of the simulated data; [Black] the least squares solution
that is perfect for perfect data, but does not exactly give a rotation matrix
for noisy data; [Magenta] the Bar-Itzhack optimal pure rotation \emph{closest}
to the not-quite-a-rotation least squares solution. The differences between
these losses for various permutations at the bottom show that indeed the
least squares solution can be less than the optimal rotation solution, and
that the original rotation is always a poorer choice than the optimal rotation.
}
\label{3DPoseAdjArgQ0.fig}
\end{figure}
}
\begin{figure}[t]
\vspace{-0.75in}
\figurecontent{
\centering \hspace*{-0.5in}
\includegraphics[trim=1in 0 0 0, clip=true,width=6.9in]{figspdf/Figure5-v2}}
\caption[]{\ifnum0=1 {\bf PointClouds-dir/figspdf/Figure5-v2.pdf. }\fi
\ifnum0=1 {\it newer: PointClouds-dir/figspdf/Fig4-3D.pdf. }\fi
\ifnum0=1 {\it originals: lsq3DSVDPoseLosses.eps. }\fi
\footnotesize
{\bf Results using Analytical Solution to the 3D Point-Cloud Projection Problem.}
(a) Example data for a small rotation with noise $\sigma=0.1$: original reference
data in grey, rotated sample points in orange, points using rotation matrix
without noise in cyan,
our analytical solution $\tilde{R}(x,y,z;u,v)$ in blue, the rotation-corrected
solution $R_{\mbox{\tiny BI}}$ in black, and projected points in magenta.
(b) Comparison of least squared errors between our analytical solution
$\tilde{R}(x,y,z;u,v)$ in blue, the rotation-corrected
solution $R_{\mbox{\tiny BI}}$ in black, and
the original \emph{rot\_mat} in cyan for 100 random
3D point clouds with $N=50$ and $\sigma=0.1$. Data are sorted by the
$R_{\mbox{\tiny BI}}$ results, and dashed lines indicate the mean.
(c) Exploration of the dependency of the least squared errors
on $\sigma$, here with $N= 50$, and we plot the mean results over 100 iterations.
The coloring is as in (b).
(d,e) Exploring the no-noise case for $\tilde{R}(x,y,z;u,v)$ and $R_{\mbox{\tiny BI}}$:
the original \emph{rot\_mat}
performs consistently better, but without any dependency on the number of points;
one set of 100 iterations with $N=50$ is shown for clarity. In both cases note
that the scale of the $y$ axis is $10^{-29}$.
}
\label{3DPoseLosses.fig}
\end{figure}
To bring this to a close, we test our new solutions using randomly generated 3D point clouds
and corresponding 2D projections after a rotation and added noise (\Fig{3DPoseLosses.fig}).
For each data set, we consider both the exact solution \Eqn{PoseRotSolnPmatUN.eq}, which
gives a valid rotation matrix only for noise-free data but always minimizes the least-squares,
as well as the solution after applying Bar-Itzhack via the profile matrix \Eqn{poseHornrot.eq}.
When we compute these and compare the list of
losses to those of the original rotation used to simulate the
pose data, we find we outperform this original rotation matrix (referred to as \emph{rot\_mat})
and our results improve as more noise is added (\Fig{3DPoseLosses.fig}). The least squares
solution, which is not a rotation for noisy data, can be even better, but of
course those results are not useful. We also checked these
results for zero noise, and our optimal rotation and the original
rotation substituted into the loss function are uniformly zero to machine
accuracy $\approx 10^{-30}$, consistent with an exact least-squares
loss minimizing solution to the pose estimation optimization problem.
\clearpage
\subsection{3D Pose Estimation with Perspective Projection}
We have assumed that orthographic projection could be used in
our 2D and 3D Pose estimation exercises so far in order to find closed-form
least squares solutions for optimal adjugate matrices of unnormalized
quaternions and their corresponding rotation matrices. Pieces of our
solution methods can be applied also to the more difficult problem
of perspective projection, with finite focal length. For our final
application, we now explore an approach to perspective 3D pose
estimation that exploits the same methods as the previous parts
of this Section.
We will make no attempt to review the vast literature on this subject,
but it is appropriate to mention a few of the influential developments
along with more current pieces of literature,
starting with the classic work of Haralick et al. \cite{Haralick-pose-1989}, which
defines the problem for the case of corresponding points, which has been our
context throughout. Wientapper et al. and Zhou et al.
\cite{WientapperACCV2016,WientapperCVIU2018,ZhouWangKaess-ICRA2020}
continue with some recent developments of the classic methods.
\cite{LuHagerMj-FastPose-2000} invoke an approach
similar to ours, utilizing multiple stages, but without the closed-form aspects that
are available to us; \cite{GuoEtAl-CamOrientwFocal-2021}
study other approaches, while related quaternion methods are employed by
\cite{ForbesPoseEst2011}. We focus exclusively on the rotation aspects, but a
number of authors examine full 6 degree-of-freedom methods, most recently making
heavy use of machine learning, such as, e.g.,
\cite{XiangY2018PoseCNN,FuaEtAl-6DPoseEst-CVPR2020}.
We start as usual with a loss function based on least squares
that needs to be minimized in order to find the rotation matrix that rotates
a 3D reference cloud to its best possible alignment with a planar 2D image.
While we were able
to get away with just the top two lines of the rotation matrix in the orthographic
projection squared-error function, now we need the entire matrix because the
bottom row determines the depth coordinate that implements the perspective
division. Thus we start with the two-row projection \Eqn{qadjproj.eq} determining
the numerator of the rotated 3D cloud, and adjoin to that the 3rd line in
quaternion adjugate coordinates
\[ R_{3} = D(q)= \left[
\begin{array}{ccc}
2 q_{13} - 2 q_{02} & 2 q_{23} + 2 q_{01} & q_{00} - q_{11} - q_{22} + q_{33}
\end{array} \right] \]
to produce the relevant depth element $z' = D(q)\cdot [x,y,z]$.
{\bf Alternate Choices of Camera Location.}
We can choose from two alternative ways of looking at the least squares loss
function for perspective projection, illustrated in \Fig{3DPoseGraphs.fig}
and \Fig{QmeanRotErr-vs-Npoints.fig},
noting that a perspective
projection loss function analogous to the orthographic loss \Eqn{3D2DPoseLSQ.eq}
must incorporate an additional division by the depth. One traditional approach
places the point cloud's center of mass at the origin and the image plane at $z=0$,
with the camera looking down from a pinhole at focal distance $f$,
so $x_{\mbox{cam}} = (0,0,f)$. This has the advantage that the similar-triangles
perspective formula contains only the inverse focal length $\bar{f} = 1/(\mbox{focal length})$
multiplying the depth:
\begin{align} \label{3D2DFPoseLSQ.eq}
\mathbf{S}_{\mbox{\small 3D Pose $\bar{f}$}} &= \ \mathbf{S}(\bar{f} ) \ = \
\sum_{k=1}^{K} \left\| \frac{\textstyle P(q) \cdot \Vec{x}_{k}}
{\textstyle (1 - \bar{f} D(q)\cdot\Vec{x}_{k})} - \Vec{u}_{k} \right\| ^{2}\ .
\end{align}
This form allows one to take the orthographic limit $\bar{f} \to 0$ directly, without introducing an infinite focal length. An alternative perspective formula commonly used in machine vision
reverses the camera and the point cloud, so the camera is at the origin, the image
is at $z = f$, and the cloud center of mass is off-center at $(0,0,f)$, so the similar-triangles
computation yields the least squares expression involving only $f$, as opposed to only
$\bar{f}= 1/f$:
\begin{align} \label{3D2DFPoseLSQAlt.eq}
\mathbf{S}_{\mbox{\small 3D Pose $f$}} &= \ \mathbf{S}(f) \ = \
\sum_{k=1}^{K} \left\| \frac{\textstyle f \,P(q) \cdot \Vec{x}_{k}}
{\textstyle D(q)\cdot\Vec{x}_{k}} - \Vec{u}_{k} \right\| ^{2}\ .
\end{align}
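{\bf Numerical sketch: } To make the two conventions concrete, the following Python/NumPy sketch (illustrative only) evaluates both loss functions; it assumes a full $3\times 3$ rotation matrix \texttt{R} whose first two rows play the role of $P(q)$ and whose third row is $D(q)$, together with hypothetical arrays \texttt{X} of 3D reference points and \texttt{U} of 2D image points.
\begin{verbatim}
import numpy as np

def loss_fbar(R, X, U, fbar):
    # S(fbar): cloud centered at the origin, camera at (0, 0, f), fbar = 1/f.
    P, D = R[:2], R[2]
    proj = (P @ X.T).T / (1.0 - fbar * (X @ D))[:, None]
    return np.sum((proj - U) ** 2)

def loss_f(R, X, U, f):
    # S(f): camera at the origin, cloud centered near (0, 0, f).
    P, D = R[:2], R[2]
    proj = f * (P @ X.T).T / (X @ D)[:, None]
    return np.sum((proj - U) ** 2)
\end{verbatim}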
The cloud-driven geometry of these two approaches is shown in \Fig{3DPoseGraphs.fig}
and \Fig{QmeanRotErr-vs-Npoints.fig}.
In either case, we will assume that
the focal point of the pinhole camera appears well outside the cloud so that the optimization problem has smooth mathematical behavior.
{\bf Remark on focal length determination.} The focal length can be determined
from the data at any stage by setting the derivative of $\mathbf{S}(f)$ or
$\mathbf{S}(\bar{f})$ to zero and solving
for $f$ or $\bar{f}$, provided one has a candidate for the rotation matrix
$R(q_{\mbox{\small opt}}) = \left[P(q_{\mbox{\small opt}}),D(q_{\mbox{\small opt}})\right]$.
However, upon differentiation the $\bar{f}$ version yields a polynomial of very high
degree in $\bar{f}$ whose root must be found numerically, though it is typically well-behaved. In contrast,
given some known $R(q_{\mbox{\small opt}})$ that
produces $R(q_{\mbox{\small opt}})\cdot \left[x,y,z\right]= \left[x',\,y',\,z'\right]$,
the $f$ version gives a closed form solution that is simply
\begin{align} \label{3D2DSolveALTforF.eq}
f =&
\frac{ \textstyle \sum_{k=1}^{K} \left( (u_{k} x'_{k}+ v_{k} y'_{k})/z'_{k}\right) }
{ \textstyle \sum_{k=1}^{K} \left( ({x'_{k}}^2+ {y'_{k}}^2)/{z'_{k}}^2\right) } \ .
\end{align}
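{\bf Numerical sketch: } \Eqn{3D2DSolveALTforF.eq} is a one-line computation once the rotated coordinates are available; a hedged Python/NumPy version, with hypothetical arrays \texttt{XYZp} holding the rotated points $[x'_k,y'_k,z'_k]$ and \texttt{U} holding the image points $[u_k,v_k]$, reads:
\begin{verbatim}
import numpy as np

def focal_length_closed_form(XYZp, U):
    # f = sum_k (u_k x'_k + v_k y'_k)/z'_k  divided by  sum_k (x'_k^2 + y'_k^2)/z'_k^2
    xp, yp, zp = XYZp[:, 0], XYZp[:, 1], XYZp[:, 2]
    u, v = U[:, 0], U[:, 1]
    return np.sum((u * xp + v * yp) / zp) / np.sum((xp**2 + yp**2) / zp**2)
\end{verbatim}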
\qquad
\begin{figure}[h!]
\vspace{0.0in}
\figurecontent{
\centerline{
\includegraphics[width=6.5in]{figspdf/Figure6-v4} }}
\caption[]{\ifnum0=1 {\bf Figure6-v4.eps} \fi \footnotesize
{\bf 3D Perspective Pose Problem and Loss Spectra: Camera at $\mathbf{ z=f}$.}
(a) Geometry of perspective projection from a camera at $(0,0,f)$ mapping
the 3D reference data to a noisy 2D image centered at the origin, with $u/f=x/(f-z)$ defining
the projected coordinates $(u,v)$.
(b) Plotting the values of the 3D perspective loss function of \Eqn{3D2DFPoseLSQ.eq}
with focal length $f$ for the 3D Pose estimation problem,
comparing the original data-generating rotation \emph{rot\_mat},
the LHM method's results \citep{LuHagerMj-FastPose-2000}, our closed-form least
squares solution $\tilde{R}$,
and the corrected exact rotation $R_{\mbox{\tiny BI}}$.
An error distribution with $\sigma = 0.1$ is used for the simulated projection data with
50 points.
(c) The mean least-squares error averaged over 100 sample data sets of size 50 with
normal error $\sigma=0.1$ as a function of the focal length $f$, on a logarithmic scale.
The $R_{\mbox{\tiny BI}}$ rotation solution does very well at smaller focal lengths,
while the $\tilde{R}$ (not a rotation) least-squares solution gets better at large camera
distances, and $R_{\mbox{\tiny BI}}$ appears to outperform LHM. }
\label{3DPoseGraphs.fig}
\end{figure}
While we propose an iterative three-step solution in Appendix \ref{perspective_soln.app},
here we simply test our orthographic solutions $\tilde{R}$ and $R_{\mbox{\tiny BI}}$, described
above by \Eqn{PoseRotSolnPmatUN.eq}, and
\Eqn{poseHornrot.eq}, against existing solutions for the perspective pose estimation problem,
with the difference that the least-squares loss is measured as described in \Eqn{3D2DFPoseLSQ.eq}
or \Eqn{3D2DFPoseLSQAlt.eq}, appropriately. In \Fig{3DPoseGraphs.fig} we study the case in which
the point cloud is at the origin, and compare to the iterative LHM method \citep{LuHagerMj-FastPose-2000}.
In \Fig{QmeanRotErr-vs-Npoints.fig}, we have adapted the available MatLab code for the MLPnP
method as in \citep{MLPnP-Urban-2016} to study the case where the camera is at the origin,
and compare our orthographic results to both LHM and MLPnP.
In all cases, we employ noisy ($\sigma = 0.1$) data, comparing different focal lengths and, in the case of
the MLPnP tests, different numbers of points $N$. For those tests, we also supplement our least-squares measure with
a quaternion-quaternion distance measure, to be more consistent with the existing literature (e.g., \citet{MLPnP-Urban-2016}).
Here we use the quaternion-quaternion angle measure,
\begin{equation}
\mbox{Rotation Error}(q_{\mbox{\small opt}}, q_{0}) = 2 \arccos (q_{\mbox{\small opt}} \cdot q_{0}) \ ,
\label{qoptvsqinput.eq}
\end{equation}
where $q_{0}$ is chosen to be the RMSD solution between the reference 3D point cloud and
the sample 3D point cloud after rotation and added noise, but before projection.
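{\bf Numerical sketch: } A direct implementation of \Eqn{qoptvsqinput.eq} is shown below (illustrative only); the clipping merely guards against roundoff pushing the dot product outside $[-1,1]$, and taking the absolute value of the dot product, if one wished to identify $q$ with $-q$, would be our own addition rather than part of the equation.
\begin{verbatim}
import numpy as np

def rotation_error(q_opt, q_0):
    # 2 * arccos(q_opt . q_0), with the dot product clipped for numerical safety.
    dot = np.clip(np.dot(q_opt, q_0), -1.0, 1.0)
    return 2.0 * np.arccos(dot)
\end{verbatim}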
It is plain to see that in both \Fig{3DPoseGraphs.fig} and \Fig{QmeanRotErr-vs-Npoints.fig},
our $R_{\mbox{\tiny BI}}$ rotation matrix outperforms both LHM and MLPnP, with improved performance
at longer focal lengths, as would be expected given our orthographic assumption.
\begin{figure}[h!]
\vspace{0.0in}
\figurecontent{
\centering \hspace{-0.5in}
\includegraphics[width= 6.25in]{figspdf/Figure7-v3}}
\caption[]{\ifnum0=1 {\bf figspdf/Figure7-v3.pdf }\fi
\footnotesize
{\bf 3D Perspective Pose Problem and Loss Spectra: Camera at Origin.}
(a) Geometry of perspective projection from a camera at the origin $(0,0,0)$ to the
3D reference data centered at $(0,0,f)$, producing a noisy 2D image,
with $u/f=x/z$ defining the projected coordinates $(u,v)$.
(b) For 3D point cloud sizes ranging from 10 to 200, random quaternion
rotations were applied to produce noisy 2D projected images with standard
error $\sigma = 0.1$ and focal length 6. We plot the
relative performances of the LHM and MLPnP algorithms compared to
ours using our version of the ``Mean (Quaternion) Rotation Error'' in
\Eqn{qoptvsqinput.eq}. In this context, our
idealized least-squares solution $\tilde{R}$ and our corrected-to-robust-rotation
solution $R_{\mbox{\tiny BI}}$ are basically indistinguishable, with $\tilde{R}$'s
black dots hidden behind the blue dots of $R_{\mbox{\tiny BI}}$.
(c) For the same collection of point-cloud sizes and parameters, we plot
the corresponding 3D pose least-squares measure \Eqn{3D2DFPoseLSQAlt.eq},
and find similar results, with the optimal-rotation solution $R_{\mbox{\tiny BI}}$
responding best to this measure.
(d) Here we fix the cloud sample size at 60 and plot mean least-squares measure
across a spectrum of different focal lengths. Again, $R_{\mbox{\tiny BI}}$ shows
a good response.}
\label{QmeanRotErr-vs-Npoints.fig}
\end{figure}
\clearpage
\subsection{Remarks}
{\bf Properties of variational approaches:} The algebraic solution methods we have outlined here are not necessarily the most
effective approach. In particular, numerical approximation methods involving
\emph{argmin} or \emph{argmax} numerical searches for the numerical optimization
of the loss functions that we have considered here can be quite effective, and
can efficiently enforce constraints on either the quaternion variables themselves,
or on the adjugate variables. The latter have the advantage of course that the
singular domains of the quaternion determination can be explicitly avoided,
and the constraints essentially function as Lagrange multipliers if the
optimization is considered as a dynamical system. Similarly, trainable
neural networks can function more or less equivalently to numerical search
optimization, with the particular advantage that, if successfully trained, the
expensive search process in an \emph{argmin} implementation, which is repeated
for each and every new data instance, is skipped in every later application of a neural network.
Using the loss functions that we have presented to implicitly guide the training
target \emph{without explicit training data} is basically the same context as
\emph{argmin}, except that the resulting successful search path can be efficiently
encoded for arbitrary future exploitation using one-time trained weights.
Strategies parallel to the \emph{argmin} method using the adjugate variable
approach to the loss function can clearly be implemented using
neural networks, and such approaches should be effective for pose estimation.
Constraints can supplement the loss function used to update the weights (to function
much the same as Lagrange multipliers) in a neural network's virtual
dynamical system. We intend to treat these issues in detail elsewhere.
\qquad
{\bf Possible applications to multiple datasets, bundle adjustment, and cryo-EM:} Here we have studied optimal alignment of single datasets with a single rigid point cloud and a single camera model. Many important applications examine collections of camera models providing an assembly of
data imposing restrictions on one or more imprecisely known point clouds.
For example, the bundle adjustment problem \citep{schoenberger2016sfm,schoenberger2016mvs,TriggsEtAl:BundleAdj2000,ChenChenWanArxiv2016,Remondino-CIPA-2017} examines a collection of camera models, optimizes them individually, and then optimizes them collectively in combination with the optimization of the candidate point cloud coordinates. This problem has many properties in parallel with the field of cryo-EM single particle analysis
\citep[see, e.g.][]{relion-2012,cryosparc-2017,relion-2018, singer-singlecryo-2020}
which determines the 3D structure of a molecule from a collection of 2D images of that molecule. The techniques that we have introduced here may also be able to contribute to the multiple dataset problem.
\qquad
\clearpage
\section{Conclusion}
Our objective in this paper has been to establish a clear framework for
understanding how quaternions must be treated in the context of measurable rotation
matrices, whatever the source. There are many domains in which the use of
quaternions for representing rotations is attractive, and some recent papers \citep{zhou2019continuity,Peretroukhin2020,zhao2020quaternion,xiang2020revisiting}
have cast doubt on the validity of quaternions as an output parameterization
for automatic learning of the quaternion corresponding to implicit or explicit rotation matrices.
We have shown that a variational method based on the work of \citet{BarItzhack2000}
recasts both ideal and noisy rotation measurements into the framework of
an adjugate matrix; this matrix contains four separate algebraic formulas for the
same quaternion corresponding to a given rotation matrix. Each expression
is valid in a certain region of the quaternion manifold $\Sphere{3}$, but breaks
down with a singularity in the normalization outside its own region. Combined,
however, these four
formulas, actually eight if we include their opposite signs, completely
cover the quaternion manifold with nonsingular patches. The natural
occurrence of these singular regions and the ways to escape them by crossing
between formulas to cover the whole manifold then allow us to understand
quaternions from a consistent mathematical viewpoint.
Having established the importance of the adjugate, we adopted the quaternion adjugate variables, substituting single
adjugate variables for all possible quadratic quaternion forms, as a framework
for treating matching and pose estimation problems. This is of interest
because algebraic problems using quaternion variables are reduced in degree
by a factor of two in the adjugate variables. The cost of this transformation is
the introduction of additional constraints, but in certain cases the advantage
of using the adjugate variables instead of bare quaternions can be significant.
Using this framework, we were able to solve the pose estimation problem with orthographic
projection, resulting in closed-form least squares solutions valid
for perfect data, and correctable to optimal rotations for noisy data using a second-stage
Bar-Itzhack optimization. Furthermore, we applied this result successfully to the pose estimation
problem with perspective projection, and found that even with the imperfect orthographic approximation
our results outperformed those of existing methods. Thus we argue
that the adjugate variables not only solve the question of how to understand the
quaternion manifold in rotation-determination tasks, but have applications of
their own in simplifying certain least squares problems for optimal rotations.
\newpage
\section*{Acknowledgments}
We are indebted to B.K.P.~Horn for his crucial role in introducing
us to this problem, and for continuing support and encouragement,
and to Pascal Fua and Yinlin Hu for their generous advice and assistance.
SMH acknowledges the support of the Flatiron Institute and
many helpful interactions with her colleagues there.
| {
"timestamp": "2022-05-20T02:00:09",
"yymm": "2205",
"arxiv_id": "2205.09116",
"language": "en",
"url": "https://arxiv.org/abs/2205.09116",
"abstract": "Quaternions are important for a wide variety of rotation-related problems in computer graphics, machine vision, and robotics. We study the nontrivial geometry of the relationship between quaternions and rotation matrices by exploiting the adjugate matrix of the characteristic equation of a related eigenvalue problem to obtain the manifold of the space of a quaternion eigenvector. We argue that quaternions parameterized by their corresponding rotation matrices cannot be expressed, for example, in machine learning tasks, as single-valued functions: the quaternion solution must instead be treated as a manifold, with different algebraic solutions for each of several single-valued sectors represented by the adjugate matrix. We conclude with novel constructions exploiting the quaternion adjugate variables to revisit several classic pose estimation applications: 2D point-cloud matching, 2D point-cloud-to-projection matching, 3D point-cloud matching, 3D orthographic point-cloud-to-projection matching, and 3D perspective point-cloud-to-projection matching. We find an exact solution to the 3D orthographic least squares pose extraction problem, and apply it successfully also to the perspective pose extraction problem with results that improve on existing methods.",
"subjects": "Image and Video Processing (eess.IV); Computer Vision and Pattern Recognition (cs.CV); Quantitative Methods (q-bio.QM)",
"title": "Exploring the Adjugate Matrix Approach to Quaternion Pose Extraction",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540728763411,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7099046514599049
} |
https://arxiv.org/abs/1701.07339 | Loci of Points Inspired by Viviani's Theorem | We consider loci of points such that their sum of distances or sum of squared distances to each of the sides of a given triangle is constant. These loci are inspired by Viviani's theorem and its extension. The former locus is a line segment or the whole triangle and the latter locus is an ellipse. | \section*{Constant sum of distances}
Samelson \cite[p. 225]{Sam} gave a proof of Viviani's theorem that uses
vectors and Chen \& Liang \cite[p. 390-391]{CL} used this vector method to
prove a converse: if inside a triangle there is a circular region in which
the sum of the distances from the sides is constant, then the triangle is
equilateral. In \cite{Abb}, this converse is generalized in the form of two
theorems that we will use in addressing Question 1. To state these theorems,
we need the following terminology: Let $\mathcal{P}$ be a polygon consisting
of both boundary and interior points. Define the \textit{distance sum
function} $\mathcal{V}:\mathcal{P}\rightarrow \mathbb{R},$ where for each
point $P\in \mathcal{P}$ $\ $the value $\mathcal{V}(P)$ is defined as the
sum of the distances from $P$ to the sides of $\mathcal{P}.$\ \
\begin{theorem}
\label{equilateral}Any triangle can be divided into parallel line segments
on which $\mathcal{V}$ is constant. Furthermore, the following conditions
are equivalent:
\begin{itemize}
\item $\mathcal{V}$ is constant on the \textit{triangle} $\Delta .$
\item \textit{There are three non-collinear points, inside the triangle, at
which }$\mathcal{V}$ takes the same value\textit{.}
\item \textit{\ }$\Delta $ is \textit{equilateral. }
\end{itemize}
\end{theorem}
\begin{theorem}
\label{convex polygon}(a) Any convex polygon $\mathcal{P}$ can be divided
into parallel line segments, on which $\mathcal{V}$ is constant. \
(b) If $\ \mathcal{V}$ takes equal values at three non-collinear points,
inside a convex polygon, then $\mathcal{V}$ is constant on $\mathcal{P}$ .
\end{theorem}
The discussion in \cite[p. 207-210]{Abb} lays out a connection between
linear programming and Theorems \ref{equilateral} and \ref{convex polygon}.
Theorem \ref{equilateral} is proved explicitly by defining a suitable linear
programming problem and Theorem \ref{convex polygon} is proved by means of
analytic geometry, where the distance sum function $\mathcal{V}$ is computed
directly and shown to be linear in two variables. We shall apply these
theorems to find the loci $T_{k}(\Delta )$ for isosceles and scalene
triangles $\Delta .$
\subsection*{Isosceles triangle}
For isosceles triangles we exploit their reflection symmetry to find
directly the line segments on which $\mathcal{V}$ is constant, and for
scalene triangles we apply any of the above theorems to show that $\mathcal{V}$
is constant on certain parallel line segments. To find the direction of
these line segments we compute explicitly the equation of $\mathcal{V}$. The
idea is illustrated through an example.
Since an isosceles triangle has a reflection symmetry across the altitude,
we conclude that, if the sum of distances of a point $P$ from the sides of the
triangle is $k$, then the reflection point $P^{\prime }$ across the
altitude satisfies the same property. For such triangles we have the
following proposition.
\begin{proposition}
If $\Delta $ is a non-equilateral isosceles triangle and $k$ ranges between
the lengths of the smallest and the largest altitudes of the triangle, then
$T_{k}(\Delta )$ is a line segment, whose end points are on the boundary of
the triangle, parallel to the base.
\end{proposition}
\begin{proof}
In Fig. \ref{fig1}, if the line segment $DE$ is parallel to the base $BC$
then, when $P$ moves along $DE$, the length $a$ remains constant and
$b+c=DP\sin \alpha +PE\sin \alpha =DE\sin \alpha ,$ which is also constant.
Hence, $a+b+c=k$ is constant on the segment $DE.$ By Theorem
\ref{equilateral}, there are no other points in $\Delta $ with distance sum $k$
unless the triangle is equilateral.
If $a$ approaches zero then the length of $DE$ approaches the length of the
base $BC,$ and $k=a+DE\sin \alpha $ approaches the length of the altitude
from $B$ (or $C$). On the other hand, if the length of $DE$ approaches zero
then $k$ approaches the length of the altitude from $A$. Hence,
$T_{k}(\Delta )$ is defined whenever $k$ ranges between the lengths of the
smallest and the largest altitudes of the triangle.
\end{proof}
\FRAME{ftbphFU}{3.9807in}{2.6472in}{0pt}{\Qcb{The locus of points is a line
segment parallel to the base. }}{\Qlb{fig1}}{isoceles.eps}{\special{language
"Scientific Word";type "GRAPHIC";display "USEDEF";valid_file "F";width
3.9807in;height 2.6472in;depth 0pt;original-width 6.4013in;original-height
3.7395in;cropleft "0.1641";croptop "1";cropright "0.8111";cropbottom
"0";filename 'isoceles.eps';file-properties "XNPEU";}}
\subsection*{Scalene triangle}
Similarly, by Theorem \ref{equilateral}, the locus of points $T_{k}(\Delta )$
for a scalene triangle $\Delta $ is a line segment, provided $k$ ranges
between the lengths of the smallest and the largest altitudes of the
triangle. Otherwise, $T_{k}(\Delta )$ will be empty. Indeed, the connection
with linear programming allows us to deduce the result. The convex polygon
is taken to be the feasible region, and the distance sum function $\mathcal{V}$
corresponds to the objective function. The parallel line segments, on
which $\mathcal{V}$ is constant, correspond to the isoprofit lines. The
mathematical theory behind linear programming states that an optimal
solution to any problem will lie at a corner point of the feasible region.
If the feasible region is bounded, then both the maximum and the minimum are
attained at corner points. Now, the distance sum function $\mathcal{V}$ is a
linear continuous function in two variables. The values of $\mathcal{V}$ at
the vertices of the feasible region, which is the triangle $\Delta ,$ are
exactly the lengths of the altitudes. Moreover, this function attains its
minimum and its maximum at the vertices of the triangle, and ranges
continuously between its extremal values. Hence, \textbf{it takes\ on every
value between its minimum and its maximum}. Therefore, we have the following
proposition.
\begin{proposition}
If $\Delta $ is a scalene triangle and $k$ ranges between the lengths of the
smallest and the largest altitudes of the triangle, then $T_{k}(\Delta )$ is
a line segment, whose end points are on the boundary of the triangle.
\end{proposition}
The question is: How can we determine this segment for a general triangle?
We shall illustrate the method by the following example. Given the vertices
of a triangle, we first compute the equations of the sides then we find the
distances from a general point $(x,y)$ inside the triangle to each of the
sides. In this way we obtain the corresponding distance sum function
$\mathcal{V}$. Taking $\mathcal{V}=c,$ we get a family of parallel lines
which, for certain values of the constant $c,$ intersect the given triangle
in the desired line segments.
\begin{example}
\label{Ex1}Let $\Delta $ be the right angled triangle with vertices
$(0,0),(0,3)$ and $(4,0),$ respectively (see Fig. \ref{fig2}). If the
constant sum of distances from the sides is $k,$ $2.4\leq k\leq 4,$ then
the locus is a line segment inside the triangle parallel to the line
$2x+y=0$.
In Fig. \ref{fig2}, $T_{2.4}(\Delta )=\{A\},T_{2.8}(\Delta
)=DG,T_{3.2}(\Delta )=EH,T_{3.6}(\Delta )=FI$ and $T_{4}(\Delta )=\{C\}.$
Note that at the extremal values of $k,$ the segments shrink to a corner
point of the triangle $\Delta .$
Indeed, the smallest altitude of the triangle is $2.4$ and the largest
altitude is $4.$ Hence, by the previous proposition, the\ locus of points is
a line segment. To find this line segment, we compute the distance sum
function $\mathcal{V}.$ The equation of the hypotenuse is $3x+4y=12.$ Hence
\begin{equation*}
\mathcal{V}=x+y-\frac{3x+4y-12}{5}=\frac{2}{5}x+\frac{1}{5}y+\frac{12}{5}.
\end{equation*}
Therefore, the lines $\mathcal{V}=c$ are parallel to the line $2x+y=0$ and
the result follows.
\end{example}
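A quick numerical check of this example (a sketch only, using Python/NumPy) confirms that $\mathcal{V}$ is constant along lines parallel to $2x+y=0$:
\begin{verbatim}
import numpy as np

def distance_sum(x, y):
    # Distances from an interior point (x, y) to the sides x = 0, y = 0 and
    # 3x + 4y = 12 of the triangle with vertices (0,0), (0,3), (4,0).
    return x + y + (12 - 3*x - 4*y) / 5.0

# Sample points on the segment 2x + y = 2 inside the triangle:
x = np.linspace(0.1, 0.9, 5)
y = 2 - 2*x
print(distance_sum(x, y))   # 2.8 at every sampled point
\end{verbatim}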
Note that the proofs of Theorems \ref{equilateral} and \ref{convex polygon}
follow the same line.
\FRAME{ftbphFU}{3.7879in}{2.4068in}{0pt}{\Qcb{ $T_{k}(\Delta ),2.4\leq k\leq
$ $4,$ are line segments inside the triangle parallel to the line $2x+y=0$.
}{\Qlb{fig2}}{constant-sum.eps}{\special{language "Scientific Word";type
"GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width
3.7879in;height 2.4068in;depth 0pt;original-width 46.4612in;original-height
19.4764in;cropleft "0.1687";croptop "1";cropright "0.8312";cropbottom
"0";filename 'constant-sum.eps';file-properties "XNPEU";}}
\section*{Constant sum of squares of distances}
Motivated by the previous results, we deal with the second question of
finding the locus of points which have a constant sum of squares of
distances from the sides of a given triangle. Referring to Example \ref{Ex1
, $DE^{2}+DF^{2}+DG^{2}=5$ (the number 5 is chosen arbitrary) implies the
following equation of a quadratic curve;
\begin{equation*}
x^{2}+y^{2}+\left( \frac{3x+4y-12}{5}\right) ^{2}=5.
\end{equation*}
Simplifying, one gets the equivalent equation
\begin{equation*}
34x^{2}+41y^{2}+24xy-72x-96y+19=0.
\end{equation*}
This is exactly the equation of the ellipse shown in Fig. \ref{fig3}.
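The algebra can be verified symbolically; the following SymPy check (ours, for verification only) expands the left-hand side and recovers the displayed quadratic:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
lhs = x**2 + y**2 + ((3*x + 4*y - 12)/5)**2 - 5
# Multiply by 25 to clear denominators and compare with the stated quadratic.
print(sp.expand(25*lhs - (34*x**2 + 41*y**2 + 24*x*y - 72*x - 96*y + 19)))  # 0
\end{verbatim}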
\FRAME{ftbphFU}{5.0185in}{3.0528in}{0pt}{\Qcb{Each point $D$ on the ellipse
satisfies $DE^{2}+DF^{2}+DG^{2}=5$}}{\Qlb{fig3}}{square-of-distances-is-5.ep
}{\special{language "Scientific Word";type "GRAPHIC";maintain-aspect-ratio
TRUE;display "USEDEF";valid_file "F";width 5.0185in;height 3.0528in;depth
0pt;original-width 35.376in;original-height 15.0728in;cropleft
"0.2287";croptop "0.9562";cropright "0.8807";cropbottom "0.0155";filename
'square-of-distances-is-5.eps';file-properties "XNPEU";}}
\subsection*{ A new characterization of the ellipse}
In view of the previous example we may prove the following theorem, which
states that among all quadratic curves the ellipse can be characterized as
the locus of points that have a constant sum of squares of distances from
the sides of an appropriate triangle.
\begin{theorem}
The locus $S_{k}(\Delta ABC)$ of points that have a constant sum of squares
of distances from the sides of an appropriate triangle $ABC$ is an ellipse
(and vice versa).
\end{theorem}
\begin{proof}
We shall prove the following two claims:
(a) Given a triangle, the locus of points which have a constant sum of
squares of distances from the sides is an ellipse. These loci, for different
values of the constant, are homothetic ellipses with respect to their common
center (their corresponding axes are proportional with the same factor).
(b) Given an ellipse, there is a triangle for which the sum of the squares
of the distances from the sides, for all points on the ellipse, is constant.
Using analytic geometry, we choose for part (a) the coordinate system such
that the vertices of the triangle lie on the axes. Suppose the coordinates
of the vertices of the triangle are $A(0,a),B(-b,0),C(c,0),$ where $a,b,c>0$
(see Fig. \ref{fig4}). Let $(x,y)$ be any point in the plane and let
$d_{1},d_{2},d_{3}$ be the distances of $(x,y)$ from the sides of the
triangle $ABC$. It follows that
\begin{equation*}
\underset{i=1}{\overset{3}{\sum }}d_{i}^{2}=\frac{(ax+cy-ac)^{2}}{a^{2}+c^{2}}+\frac{(ax-by+ab)^{2}}{a^{2}+b^{2}}+y^{2}.
\end{equation*}
Hence, $\underset{i=1}{\overset{3}{\sum }}d_{i}^{2}=k$ (constant) if and
only if the point $(x,y)$ lies on the quadratic curve:
\begin{equation}
\frac{(ax+cy-ac)^{2}}{a^{2}+c^{2}}+\frac{(ax-by+ab)^{2}}{a^{2}+b^{2}}
+y^{2}=k. \label{eq1}
\end{equation}
In general, a quadratic equation in two variables,
\begin{equation}
\mathcal{A}x^{2}+\mathcal{B}xy+\mathcal{C}y^{2}+\mathcal{D}x+\mathcal{E}y+\mathcal{F}=0, \label{eq2}
\end{equation}
represents an ellipse provided the discriminant $\delta =\mathcal{B}^{2}-4\mathcal{AC}$ is negative.
To prove our claim, note that from equation (\ref{eq1}) we have
\begin{equation}
\mathcal{A}=\frac{a^{2}}{p}+\frac{a^{2}}{q},\quad \mathcal{B}=\frac{2ac}{p}-\frac{2ab}{q},\quad \mathcal{C}=\frac{c^{2}}{p}+\frac{b^{2}}{q}+1 \label{eq3}
\end{equation}
where $p=a^{2}+c^{2}$ and $q=a^{2}+b^{2}.$ Therefore,
\begin{equation*}
\delta =\mathcal{B}^{2}-4\mathcal{AC}=-4\frac{a^{2}}{pq}\left(
b^{2}+2bc+c^{2}+p+q\right) \text{,}
\end{equation*}
and the result follows.
Now, we prove part (b). Applying rotation or translation of the axes we may
assume that the ellipse has the canonical form $\frac{x^{2}}{\alpha ^{2}}+\frac{y^{2}}{\beta ^{2}}=1,$ $\alpha \geq \beta >0.$ If we find positive
real numbers $a,b$ with $\alpha ^{2}=a^{2}+3b^{2},\beta ^{2}=2a^{2},$ then
the isosceles triangle with vertices $A^{\prime }(0,a-l),B^{\prime}(-b,-l),C^{\prime }(b,-l)$ gives the required property in part (b), where
$l=\frac{2ab^{2}}{a^{2}+3b^{2}}.$
Indeed, because of the symmetry of the ellipse, we choose first an isosceles
triangle with vertices $A(0,a),B(-b,0),C(b,0)$ and compute the sum of the
squared distances from its sides. Substituting $c=b$ in equation (\ref{eq1})
gives the equation of the locus $S_{k}(\Delta ABC)$ as
\begin{equation*}
\frac{(ax+by-ab)^{2}}{a^{2}+b^{2}}+\frac{(ax-by+ab)^{2}}{a^{2}+b^{2}}
+y^{2}=k.
\end{equation*}
Equivalently, we get a translation of a canonical ellipse:
\begin{equation}
\frac{x^{2}}{a^{2}+3b^{2}}+\frac{(y-\frac{2ab^{2}}{a^{2}+3b^{2}})^{2}}{2a^{2}}=\frac{(a^{2}+b^{2})k-2a^{2}b^{2}}{2a^{2}(a^{2}+3b^{2})}+\frac{2b^{4}}{(a^{2}+3b^{2})^{2}}. \label{eq3.5}
\end{equation}
Thus, it is enough to take
\begin{equation*}
\frac{(a^{2}+b^{2})k-2a^{2}b^{2}}{2a^{2}(a^{2}+3b^{2})}+\frac{2b^{4}}{(a^{2}+3b^{2})^{2}}=1,
\end{equation*}
and therefore
\begin{equation}
k=\frac{2a^{2}(a^{4}+7a^{2}b^{2}+10b^{4})}{(a^{2}+b^{2})(a^{2}+3b^{2})}.
\label{eq4}
\end{equation}
Translating downward by $l=\frac{2ab^{2}}{a^{2}+3b^{2}},$ we get that the
ellipse $\frac{x^{2}}{a^{2}+3b^{2}}+\frac{y^{2}}{2a^{2}}=1$ is the locus of
points, $S_{k}(\Delta A^{\prime }B^{\prime }C^{\prime }),$ which have a
constant sum of squares of distances from the sides of the triangle with
vertices $A^{\prime }(0,a-l),B^{\prime }(-b,-l),C^{\prime }(b,-l).$
Moreover, this constant is given by equation (\ref{eq4}).
\end{proof}
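The discriminant computation in part (a) of the proof can also be checked symbolically; the following SymPy sketch (ours, for verification only) reproduces the displayed factorization of $\delta$:
\begin{verbatim}
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
p, q = a**2 + c**2, a**2 + b**2
A = a**2/p + a**2/q
B = 2*a*c/p - 2*a*b/q
C = c**2/p + b**2/q + 1
target = -4*a**2/(p*q) * (b**2 + 2*b*c + c**2 + p + q)
print(sp.simplify(B**2 - 4*A*C - target))   # 0, so the discriminant is always negative
\end{verbatim}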
\FRAME{ftbphFU}{4.3145in}{2.2615in}{0pt}{\Qcb{The locus is an ellipse}}{\Qlb
fig4}}{ellipse1.eps}{\special{language "Scientific Word";type
"GRAPHIC";maintain-aspect-ratio TRUE;display "USEDEF";valid_file "F";width
4.3145in;height 2.2615in;depth 0pt;original-width 8.0652in;original-height
4.7063in;cropleft "0.0825";croptop "0.9036";cropright "0.7603";cropbottom
"0.0778";filename 'ellipse1.eps';file-properties "XNPEU";}}\
Table \ref{table1} includes two examples that demonstrate Theorem 3.
\begin{table}[H] \centering
\begin{tabular}{ccccccc}
\hline
$a$ & $b$ & $\alpha ^{2}=a^{2}+3b^{2}$ & $\beta ^{2}=2a^{2}$ & $\text{Ellipse}$ & $k$ & $\text{Figure}$ \\ \hline
$1$ & $1$ & $4$ & $2$ & $\frac{x^{2}}{4}+\frac{y^{2}}{2}=1$ & $\frac{9}{2}$ & $\ref{fig5}$ \\
$\sqrt{3}$ & $1$ & $6$ & $6$ & $x^{2}+y^{2}=6$ & $10$ & $\ref{fig6}$ \\
\hline
\end{tabular}
\caption{Examples}\label{table1}
\end{table}
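The two rows of Table \ref{table1} follow directly from equation (\ref{eq4}); a short numerical check (a sketch in plain Python) reproduces them:
\begin{verbatim}
import math

def k_value(a, b):
    # Equation (eq4): k = 2a^2(a^4 + 7a^2 b^2 + 10b^4) / ((a^2+b^2)(a^2+3b^2)).
    return 2*a**2*(a**4 + 7*a**2*b**2 + 10*b**4) / ((a**2 + b**2)*(a**2 + 3*b**2))

print(k_value(1, 1))             # 4.5, for the ellipse x^2/4 + y^2/2 = 1
print(k_value(math.sqrt(3), 1))  # 10 (up to roundoff), for the circle x^2 + y^2 = 6
\end{verbatim}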
Notice that, though a given triangle and a given constant $k$ yield at most
one ellipse, the same ellipse can be obtained using different triangles.
This can be easily seen in Fig. \ref{fig5}. By the above computations, the
ellipse $\frac{x^{2}}{4}+\frac{y^{2}}{2}=1$ is the locus of points
$S_{4.5}(\Delta ),$ where $\Delta $ is the triangle with vertices $A^{\prime
}=$ $(0,0.5),B^{\prime }=(-1,-0.5)$ and $C^{\prime }=(1,-0.5).$ If we
reflect the whole figure across the $x$-axis, then the same ellipse $\frac{
x^{2}}{4}+\frac{y^{2}}{2}=1$ will be the locus of points $S_{4.5}(\widetilde
\Delta }),$ where $\widetilde{\Delta }$ is the reflection of $\Delta $
across the $x-$axis with vertices: $(0,-0.5),(-1,0.5)$ and $(1,0.5).$
Notice also that, in the second example, the locus of points is a circle. In
general, the locus of points is a circle exactly when, in equation (\ref{eq2}),
\begin{equation*}
\mathcal{A}=\mathcal{C}\text{ and }\mathcal{B}=0.
\end{equation*}
Equivalently, from (\ref{eq3}) we have
\begin{equation}
\frac{a^{2}}{p}+\frac{a^{2}}{q}=\frac{c^{2}}{p}+\frac{b^{2}}{q}+1\text{ and }
\frac{2ac}{p}-\frac{2ab}{q}=0. \label{eq5}
\end{equation}
Substituting the values of $p$ and $q$ and simplifying we get
\begin{equation*}
\frac{2ac}{p}-\frac{2ab}{q}=0\Leftrightarrow (a^{2}-bc)(c-b)=0.
\end{equation*}
Consequently, two cases have to be considered:
First, $a^{2}=bc,$ in which case the triangle $\Delta ABC$ is right angled.
This case does not occur, since the conditions in (\ref{eq5}) lead to a
contradiction as follows: $\frac{2ac}{p}-\frac{2ab}{q}=0$ implies $\frac{c}{p}=\frac{b}{q}$ or equivalently $\frac{bc}{p}=\frac{b^{2}}{q}.$ Since
$a^{2}=bc$ we get $\frac{a^{2}}{p}=\frac{b^{2}}{q}.$ Hence, the first
condition in (\ref{eq5}) simplifies into $\frac{a^{2}}{q}=\frac{c^{2}}{p}+1.$ This equation, together with the relations $a^{2}=bc$ and $\frac{c}{p}=\frac{b}{q},$ yields a contradiction.
Second, $c=b,$ in which case the triangle $\Delta ABC$ is isosceles. In this
case, we have $p=q$. Substituting in the first condition of (\ref{eq5}) we
get $a^{2}+3b^{2}=2a^{2},$ which is equivalent to $a=\sqrt{3}b.$ Hence, the
vertices of the triangle are $A(0,\sqrt{3}b),B(-b,0)$ and $C(b,0)$. This
implies that the triangle is equilateral.
Therefore, we have the following result.
\begin{conclusion}
The locus $S_{k}(\Delta )$ of points that have a constant sum of squares of
distances from the sides of a given triangle $\Delta $ is a circle if and
only if the triangle $\Delta $ is equilateral. \ \ \
\end{conclusion}
\FRAME{ftbphFU}{3.723in}{2.7959in}{0pt}{\Qcb{The locus is an ellipse with
k=4.5$}}{\Qlb{fig5}}{square-of-distances-k-is-4.eps}{\special{language
"Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display
"USEDEF";valid_file "F";width 3.723in;height 2.7959in;depth
0pt;original-width 16.8647in;original-height 7.1857in;cropleft
"0.1750";croptop "1";cropright "0.7439";cropbottom "0";filename
'square-of-distances-k-is-4.eps';file-properties "XNPEU";}}
\FRAME{ftbphFU}{3.4463in}{2.949in}{0pt}{\Qcb{The locus is a circle with
k=10 $}}{\Qlb{fig6}}{square-of-distances-k-is-10.eps}{\special{language
"Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display
"USEDEF";valid_file "F";width 3.4463in;height 2.949in;depth
0pt;original-width 27.1205in;original-height 11.5513in;cropleft
"0.2234";croptop "1";cropright "0.7218";cropbottom "0";filename
'square-of-distances-k-is-10.eps';file-properties "XNPEU";}}
\subsection*{Minimal sum of squared distances}
Another interesting question is to find the minimal sum of squared
distances, from the sides of a given triangle. For the isosceles triangle
with vertices $A(0,a),B(-b,0)$ and $C(b,0),$ equation (\ref{eq3.5})
indicates that, as the right hand side approaches $0,$ the ellipse
degenerates to one point: $(0,\frac{2ab^{2}}{a^{2}+3b^{2}}).$ Thus, the
locus of points is defined exactly when
\begin{equation*}
\frac{(a^{2}+b^{2})k-2a^{2}b^{2}}{2a^{2}(a^{2}+3b^{2})}+\frac{2b^{4}}{(a^{2}+3b^{2})^{2}}\geq 0.
\end{equation*}
Equivalently,
\begin{equation*}
k\geq \frac{2a^{2}b^{2}}{a^{2}+3b^{2}}.
\end{equation*}
Therefore the following consequence holds.
\begin{conclusion}
\label{Con2}For the isosceles triangle with vertices $A(0,a),B(-b,0)$ and
$C(b,0),$ the minimal sum of squared distances from the sides is $\frac{2a^{2}b^{2}}{a^{2}+3b^{2}},$ attained at the point $(0,\frac{2ab^{2}}{a^{2}+3b^{2}})$ inside the triangle.
\end{conclusion}
When $a^{2}=3b^{2}$, the triangle is equilateral with vertices $A(0,
\sqrt{3}b),B(-b,0)$ and $C(b,0).$ In this case, the minimal sum of squared
distances is $b^{2}$, attained at the point $(0,\frac{\sqrt{3}}{3}b)$, which
is exactly the incenter of the equilateral triangle. In Fig. \ref{fig-minimal-k-circle}, $b=1$ and the loci of points are circles which
degenerate to the incenter of the equilateral triangle as $k$ approaches $1.$
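A quick numerical minimization (a sketch using SciPy, for the equilateral case $a=\sqrt{3}$, $b=1$) recovers both the minimal value $b^{2}=1$ and the incenter $(0,\sqrt{3}/3)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

a, b = np.sqrt(3.0), 1.0

def sum_of_squares(point):
    # Squared distances from (x, y) to the sides of the triangle A(0,a), B(-b,0), C(b,0).
    x, y = point
    d1 = (a*x + b*y - a*b)**2 / (a**2 + b**2)   # side AC
    d2 = (a*x - b*y + a*b)**2 / (a**2 + b**2)   # side AB
    d3 = y**2                                   # side BC
    return d1 + d2 + d3

res = minimize(sum_of_squares, x0=[0.3, 0.3])
print(res.fun, res.x)   # ~1.0 at ~(0, 0.577), i.e. the incenter (0, sqrt(3)/3)
\end{verbatim}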
\FRAME{ftbphFU}{4.3422in}{2.8314in}{0pt}{\Qcb{The circles degenerate to the
incenter $G$ of the equilateral triangle, where the minimum of $k$ is $1.$}}
\Qlb{fig-minimal-k-circle}}{minimal-k-circle.eps}{\special{language
"Scientific Word";type "GRAPHIC";maintain-aspect-ratio TRUE;display
"USEDEF";valid_file "F";width 4.3422in;height 2.8314in;depth
0pt;original-width 34.193in;original-height 14.3213in;cropleft
"0.1340";croptop "1";cropright "0.7785";cropbottom "0";filename
'minimal-k-circle.eps';file-properties "XNPEU";}}
\section*{Concluding remarks}
Other related results are the following: Kawasaki \cite[p. 213]{Kaw}, with a
proof without words, used only rotations to establish Viviani's theorem.
Polster \cite{Pol}, considered a natural twist to a beautiful one-glance
proof of Viviani's theorem and its implications for general triangles.
The restriction of the locus, in the first question, to subsets of the
closed triangle is intended to avoid the use of the signed distances. These
signed distances, when considered, allow us to search for loci of points
outside the triangle for ``large'' values of the constant $k.$ The reader is
encouraged to do some examples.
Theorem \ref{convex polygon} allows us to address Question 1 for any convex
polygon in the plane.
The question about loci of points, with constant sum of squares of
distances, can be generalized to any polygon or any set of lines in the
plane. In this case, one should distinguish whether all the lines are
parallel or not. The reader is encouraged to work out some examples using
GeoGebra.
Conclusion \ref{Con2} can be restated for general triangles, by a change of
the coordinate system. This demands familiarity with the following theorem
(which is beyond our discussion): Every real quadratic form $q=X^{T}AX$ with
symmetric matrix $A$ can be reduced by an orthogonal transformation to a
canonical form.
Finally, this characterization of the ellipse was exploited to build an
algorithm for drawing ellipses using GeoGebra (see \cite{abb0}).
\
{\large Acknowledgement}: \textit{The author is indebted to the referees,
who read the manuscript carefully, and whose valuable comments concerning
the style, the examples, the theorems and the proofs improved substantially
the exposition of the paper. Special thanks are also due to the editor for
his valuable remarks}.
\textit{This work is part of research which was supported by the Beit
Berl College Research Fund.}
\textit{\ }
| {
"timestamp": "2017-01-26T02:06:54",
"yymm": "1701",
"arxiv_id": "1701.07339",
"language": "en",
"url": "https://arxiv.org/abs/1701.07339",
"abstract": "We consider loci of points such that their sum of distances or sum of squared distances to each of the sides of a given triangle is constant. These loci are inspired by Viviani's theorem and its extension. The former locus is a line segment or the whole triangle and the latter locus is an ellipse.",
"subjects": "History and Overview (math.HO)",
"title": "Loci of Points Inspired by Viviani's Theorem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540722737479,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7099046510231031
} |
https://arxiv.org/abs/1602.05476 | On concordances in 3-manifolds | We describe an action of the concordance group of knots in the three-sphere on concordances of knots in arbitrary 3-manifolds. As an application we define the notion of almost-concordance between knots. After some basic results, we prove the existence of non-trivial almost-concordance classes in all non-abelian 3-manifolds. Afterwards, we focus the attention on the case of lens spaces, and use a modified version of the Ozsvath-Szabo-Rasmussen's tau-invariant to obstruct almost-concordances and prove that each L(p,1) admits infinitely many nullhomologous non almost-concordant knots. Finally we prove an inequality involving the cobordism PL-genus of a knot and its tau-invariants. | \section*{Introduction}\label{sec:intro}
A classical and extensively studied feature of knots in the $3$-sphere is the group structure induced by connected sum on concordance classes. Much is known about the concordance group $\mathcal{C}$, and much recent progress has been made by Heegaard Floer theoretic techniques. On the other hand, concordances in manifolds other than $S^3$ lack a clear algebraic structure.
The purpose of this paper is to describe an action of the concordance group of knots in $S^3$ on concordances of knots in an arbitrary $3$-manifold. The action consists simply of taking the connected sum of a concordance class of a knot in a $3$-manifold with a concordance class of a knot in $S^3$. After showing that the action is well defined, we introduce the related notion of \emph{almost-concordance}. This can be thought of as concordance up to connected sum with knots in $S^3$.
Similar constructions and definitions have been previously considered by Rolfsen in \cite{rolfsen85} (see also Hillman's book \cite[Sec. 1.5]{hillman2012algebraic}).
Denote the set of almost-concordances in a closed, oriented $3$-manifold $Y$ by $\widetilde{\mathcal{C}}^Y$. We deduce, as a result of explicit computations (Sections \ref{sec:concrete} and \ref{sec:extension}), the following result establishing the non-triviality of the equivalence relation provided by almost-concordance:
\begin{thm}\label{teo:qconcnonbanalelens}
Each lens space $L(p,1)$ contains infinitely many almost-concordance classes of knots, so $| \widetilde{\mathcal{C}}^{L(p,1)}| = \infty$. Moreover, all these classes are represented by nullhomologous knots.
\end{thm}
Instead, in the case of $3$-manifolds with non-abelian fundamental group we obtain:
\begin{thm}\label{teo:qconcnonbanale}
All $3$-manifolds with non-abelian fundamental group have non-trivial almost-concordance classes.
\end{thm}
Theorem \ref{teo:qconcnonbanalelens} is established by defining the \emph{$\tau$-shifted invariant} $\tau_{sh}$ for knots in lens spaces, which is derived from the usual Ozsv{\'a}th-Szab{\'o}-Rasmussen $\tau$-invariant.
We then make use of Hedden's generalisation \cite{hedden2008ozsvath} of the $\tau$ invariants to extend the definition of $\tau_{sh}$ to a much larger set of $3$-manifolds (Definition \ref{taushiftgener}).
We prove that $\tau_{sh}$ is unchanged under the previously defined action:
\begin{prop}\label{proptaushinv}
The $\tau$-shifted invariant $\tau_{sh}$ is an almost-concordance invariant of knots.
\end{prop}
The proof of Theorem \ref{teo:qconcnonbanalelens} then follows from the computation of $\tau_{sh}$ for a knot $\widetilde{K} \subset L(3,1)$ (Example \ref{esempio:0134}), coupled with a sequence of chain homotopies described in Section \ref{sec:extension}.
Theorems \ref{teo:qconcnonbanalelens} and \ref{teo:qconcnonbanale} can be equivalently stated in terms of non-transitivity of the concordance action; the same result in the case of the 3-torus is a direct consequence of D. Miller's paper \cite{miller1995extension} on Milnor's invariants.\\
Following a remark of A.~Levine, we relate almost-concordance to $PL$-concordance (Definition \ref{def:plcobo}), define some generalisations of the slice genus, and establish the following inequality:
\begin{thm}\label{thm:plcobotau}
Suppose $K_0$ and $K_1$ are two knots in a lens space $L(p,q)$, connected by a PL-cobordism $\Sigma$. Then
$$D(\tau(K_0), \tau(K_1) ) \le \widetilde{g}_{PL}(\Sigma),$$
where $D$ is the distance on $\mathbb{Z}^p$ described in Definition \ref{def:distanzalattice}, and $\widetilde{g}_{PL}$ is the (cobordism) $PL$-genus of a knot (Definition \ref{defigeneri}).
\end{thm}
As an aside, we generalize a result of Kirby and Lickorish \cite{kirby1979prime}, showing in Theorem \ref{thm:conctogenuine} that every knot in a $3$-manifold is concordant to an \emph{l-prime} knot (Definition \ref{def:genuino}).
Lastly we present some results on local knots, and outline possible improvements and future directions with some conjectures.\\
\textbf{Acknowledgments:} The author wishes to thank Adam Levine for helpful comments on the first version of this paper, Marco Golla for his invaluable expertise, Paolo Lisca and Paolo Aceto for useful observations and corrections, and Agnese Barbensi for her constant support. Also, the results of Section \ref{sec:fund} would not have been possible without the helpful remarks of Eylem Zeliha Yildiz and Patrick Orson. A special thanks also to Mark Powell and Kent Orr for suggesting and improving some references, making me learn new topics in the process.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 674978).
\section{Definitions}\label{sec:defi}
In the following $Y$ is always going to denote a closed, connected and oriented $3$-manifold. None of these conditions is actually strictly needed, but they encompass all the relevant examples that follow.\\
A knot in $Y$ is the ambient isotopy class of a smooth embedding $\iota : S^1 \hookrightarrow Y$, and $\mathcal{K}(Y)$ will denote the set of oriented knots in $Y$. To avoid confusion we will often use the pair $(Y,K)$ to denote $K \in \mathcal{K}(Y)$.
In every such $Y$ there is a unique knot bounding an embedded disk, the unknot, denoted by $\bigcirc$. Given a knot $(Y,K)$, call $[K] \in H_1 (Y; \mathbb{Z})$ the homology class it represents; if $[K] = 0$, we say it is \emph{nullhomologous}, and \emph{rationally nullhomologous} if it represents a torsion element of $H_1 (Y; \mathbb{Z})$.
A nullhomologous knot bounds an embedded surface in $Y$, while in all other cases it does not\footnote{However there are several ways to define Seifert surfaces for rationally nullhomologous knots, see \cite{calegari2009knots} and \cite{rasmussen2007lens}.}.
Given two oriented knots $(Y_i,K_i)$ for $i=0,1$, we can consider their \emph{connected sum} $(Y_0 \# Y_1, K_0 \# K_1)$, given by removing from each $Y_i$ a $3$-disk $D_i$ intersecting $K_i$ in an unknotted arc, and gluing $Y_0 \setminus D_0$ to $Y_1 \setminus D_1$ with an orientation reversing diffeomorphism, matching the orientations of the knots.
A knot $(Y,K)$ is said to be \emph{local} if there exists an embedded 2-sphere in $Y$ bounding a $3$-ball $B$, such that $K \subset B$. Clearly a local knot is nullhomologous, but the converse is generally far from true (cf. example \ref{esempio:0134}). Alternatively one could define local knots as those which admit a decomposition of the form $(Y,\bigcirc) \# (S^3, K)$.
Besides isotopy, there is another weaker equivalence relation we can consider on the set of embeddings $S^1 \hookrightarrow Y$:
\begin{defi}
We say that two knots $K_0, K_1 \subset Y$ are \emph{concordant} if there exists a smooth properly embedded annulus $$A \cong S^1 \times [0,1] \hookrightarrow Y \times [0,1],$$ such that $\partial A \cap \left(Y \times \{0,1\} \right) = K_0 \sqcup \overline{K}_1$. If $K_0$ is concordant to $K_1$ we write $K_0 \sim K_1$.
Concordance is an equivalence relation on $\mathcal{K}(Y)$; the set of equivalence classes is denoted by $\mathcal{C}^Y$.
\end{defi}
Concordances preserve homology classes. In other words if $K_0 \sim K_1$, then $[K_0] = [K_1]$. This implies that $\mathcal{C}^Y$ splits:
\begin{equation}\label{eq:splitting}
\mathcal{C}^Y = \bigoplus_{m \in H_1 (Y;\mathbb{Z})} \concom{m} .
\end{equation}
If $Y = S^3$, we can endow $\mathcal{C} \coloneqq \mathcal{C}^{S^3} $ with a group structure. The operation is provided by the oriented connected sum, and the inverse of a class $[K]$ is represented by the reverse mirror of $K$.
The structure of this usual concordance group, despite its importance in low-dimensional topology, still remains elusive. Recently many improvements in the understanding of $\mathcal{C}$ were made by several authors, especially by means of Heegaard Floer theoretic constructions.
For an excellent survey on concordances in $S^3$ see \cite{concordanceliv}, and for a recent survey on the interactions between Heegaard Floer homology and $\mathcal{C}$ see \cite{2015arXiv151200383H}.\\
It is clear from the definition that the connected sum of knots does not preserve the ambient manifold, so it does not provide a binary operation on $\mathcal{C}^Y$ whenever $Y \neq S^3$. Hence $\mathcal{C}^Y$ has no natural group operation, and is only a set. So there seems to be a total loss of algebraic structure when dealing with a manifold other than $S^3$.
There is however a natural action $\mathcal{K}(S^3) \curvearrowright \mathcal{C}^Y$ which respects the splitting into homology components. \\This action is simply defined as:
\begin{equation}\label{azione}
(S^3,K) \cdot [(Y, K^\prime)] = [(Y, K \# K^\prime)]
\end{equation}
\begin{rmk}
This action is well defined. That is, if $K_0,K_1 \in \mathcal{K}(Y) $ and $K_0 \sim K_1$, then for each knot $(S^3,K)$ we have $K_0 \# K \sim K_1 \# K$. To see why this is the case, denote by $A$ a concordance between $K_0$ and $K_1$. Choose a simple properly embedded arc $a$ on $A$, such that the endpoints are on the two knots, and $a$ does not intersect any critical point\footnote{For the restriction to $A$ of the Morse function induced by the projection $Y \times [0,1] \rightarrow [0,1]$.} of $A$ (Figure \ref{fig:cactus}).
\begin{figure}
\includegraphics[width=6cm]{cactus.png}
\caption{The path $a$ on a concordance in $Y\times [0,1]$ between $K_0$ and $K_1$.}
\label{fig:cactus}
\end{figure}
Consider the product $K \times [0,1] \subset S^3 \times [0,1]$, and remove from it $D \times [0,1]$, where $D$ is a $3$-disk intersecting $K$ in an unknotted arc. Now remove a small tubular neighborhood $\nu (a)$ of $a$, and replace it with $\left( S^3 \setminus D \right) \times [0,1]$, making the boundaries of $\left( K \setminus (K \cap D) \right) \times [0,1]$ and $\left( A \setminus (\nu(a) \cap A) \right)$ coincide. The result is a concordance from $K \# K_0$ to $K \# K_1$.
\end{rmk}
Moreover this action factors through the concordance group $\mathcal{C}$:
\begin{prop}\label{prop:buonadef}
If $K_0, K_1 \in \mathcal{K}(S^3)$ and $K_0 \sim K_1$, then for each knot $(Y,K)$:
$$(S^3,K_0) \cdot [(Y, K)] \sim (S^3,K_1) \cdot [(Y, K)]$$
\end{prop}
\begin{proof}
Denote by $A \subset S^3 \times [0,1]$ an annulus realizing the concordance between $K_0$ and $K_1$. Then, as in \cite[Thm 3.3.2]{concordanceliv} we can suppose that, up to isotopy, $A$ is the product $a \times [0,1]$ for a small arc $a \subset K_0$. Remove from $S^3 \times [0,1]$ the product $D \times [0,1]$, where $D$ is a $3$-disk intersecting $K_0$ only in $a$; the complement is diffeomorphic to $\mathbb{D}^3 \times [0,1]$.
Take the trivial concordance $K \times [0,1] \subset Y \times [0,1]$ and remove a product $D^\prime \times [0,1]$, where $D^\prime$ is a $3$-disk intersecting $K$ in an unknotted arc. Then we just need to glue $\left(S^3 \setminus D\right) \times [0,1]$ to $\left(Y \setminus D^\prime\right) \times [0,1]$, in such a way that the two concordances are glued along their vertical boundaries\footnote{By vertical boundary we mean the part created by removing the intersection with the disks $D$ or $D^\prime$ times $[0,1]$.}, making the edges of the annuli coincide: the resulting annulus is a concordance from $K_0 \# K$ to $K_1 \# K$ in $Y \times [0,1]$.
\end{proof}
So we have in fact an action $\mathcal{C} \curvearrowright \mathcal{C}^Y$, which is easily seen to preserve the splitting of Equation \eqref{eq:splitting}.\\
We can introduce yet another equivalence relation on $\mathcal{K}(Y)$, by taking this $\mathcal{C}$-action into account:
\begin{defi}\label{def:almconc}
Two knots $K_0$ and $K_1$ in $Y$ are \emph{almost-concordant}, written $K_0 \dot{\sim} K_1$, if there exist two knots $K_0^\prime , K_1^\prime \subset S^3$ such that
$$K_0 \# K_0^\prime \sim K_1 \# K_1^\prime .$$
Almost-concordance is an equivalence relation on $\mathcal{K}(Y)$, and we denote by $\widetilde{\mathcal{C}}^Y$ the quotient.
\end{defi}
Clearly two concordant knots are also almost-concordant (just choose $K_0^\prime = K_1^\prime = \bigcirc$), but the converse does not hold. In particular this means that almost-concordance classes are unions of concordance classes. It is thus natural to ask whether this relation is trivial, \emph{i.e.} if there is only one almost-concordance class of knots for each pair $(Y,m)$, with $m \in H_1(Y;\mathbb{Z})$. As anticipated by Theorem \ref{teo:qconcnonbanale} this is not always the case.
\begin{rmk}\label{rmk:solouno}
Note that we could equivalently use only one auxiliary knot in the definition: if $K_0 \# K_0^\prime \sim K_1 \# K_1^\prime$, then $K_0 \sim K_1 \# K^\prime$, where $K^\prime = K_1^\prime \# r\overline{K_0^\prime}$ and $r\overline{K_0^\prime}$ denotes the reverse mirror of $K_0^\prime$.
\end{rmk}
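This can be seen, for instance, by connect summing both sides with the reverse mirror $r\overline{K_0^\prime}$, which represents the inverse of $K_0^\prime$ in $\mathcal{C}$: since the action \eqref{azione} is well defined and factors through $\mathcal{C}$ (Proposition \ref{prop:buonadef}), we get
$$K_0 = K_0 \# \bigcirc \sim K_0 \# \left( K_0^\prime \# r\overline{K_0^\prime} \right) \sim K_1 \# \left( K_1^\prime \# r\overline{K_0^\prime} \right) = K_1 \# K^\prime .$$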
\begin{rmk}
It is immediate from the definition that $|\widetilde{\mathcal{C}}^{S^3}| = 1$, that is all knots in the three-sphere are almost-concordant to each other.
In the next section we are going to outline a way to obstruct the existence of almost-concordances, after introducing a new invariant $\tau_{sh}$ capable of distinguishing them.
\end{rmk}
Shortly after the first version of this paper appeared online, A. Levine proved that almost-concordance is in fact equivalent to the older notion of $PL$-concordance; the same statement was in fact proved in greater generality by Rolfsen in \cite{rolfsen85}.
\begin{defi}\label{def:plcobo}
Two knots $K_0, K_1 \in \mathcal{K}(Y)$ are $PL$-concordant, if they are connected by a properly embedded annulus in the product cobordism $Y \times [0,1]$, which is everywhere smooth except for a finite number of singular points which are cones over knots in $S^3$.
$PL$-cobordisms are defined in an analogous manner, allowing the singular surface cobounding the knots to have non-zero genus.
\end{defi}
The equivalence between almost-concordance and $PL$-concordance goes as follows: given a $PL$-concordance between $K_0$ and $K_1$, we can connect each singular point on the concordance to either knot with simple non-intersecting arcs. Carving out a small neighborhood of these arcs yields a smooth concordance between the original knots with extra connect-summands given by the links of the singular points.
Conversely, given an almost-concordance, we can push $K_0^\prime$ and $K_1^\prime$ inside the cobordism, capping them off with their cones.
In \cite{levine2014non} Levine proves the existence of a non-surjective satellite operator, and uses it to exhibit a knot in a homology $3$-sphere $Y$ which does not bound a $PL$ disk in any contractible $4$-manifold with boundary $Y$.
In Section \ref{sec:concrete} we are going to produce an invariant of almost-concordance for knots in lens spaces, which by Levine's remark is also an invariant of $PL$-concordance. We are also going to show that this invariant bounds (in an appropriate setting) from below the minimal $PL$-genus of a surface cobounding two knots.
\begin{defi}\label{defigeneri}
One can define several kinds of $4$-dimensional genera for a knot $K \in \mathcal{K}(Y)$; given a $3$-manifold $Y$ and a contractible $4$-manifold $W$ such that $\partial W = Y$, one can define the \emph{smooth slice genus} $g_{*}$, the \emph{PL genus} $g_{PL}$, the \emph{topological $4$-genus} $g_{TOP}$ and the \emph{topological PL $4$-genus } $g_{TOPL}$. They are defined as follows:
\begin{equation}
g_{\circ}^W (K)= \min \{g(\Sigma) \;|\; (\Sigma ,\partial \Sigma) \xhookrightarrow{\iota} (W,Y), \partial \Sigma = K \}
\end{equation}
where $\circ \in \{*, TOP, PL, TOPL\}$, and the embedding $\iota$ is required to be smooth if $\circ = *$, topologically locally flat if $\circ = TOP$, and smooth (respectively locally flat) except for a finite number of points which are cones over knots in $S^3$ if $\circ = PL$ (respectively $\circ = TOPL$).
One can also define related genera by taking the minimum of the previous quantities over all $4$-manifolds $W$ cobounding a fixed $Y$, with some restriction on the algebraic topology of these fillings.
In what follows however we will be more concerned with yet another, slightly different notion of $4$-dimensional genus for knots; suppose the homology class of $K \in \mathcal{K}(Y)$ contains a ``standard" representative $T_{[K]}$. In the case where $[K] = 0 \in H_1(Y;\mathbb{Z})$ one can always choose the unknot $\bigcirc$, but this is not the only possibility: in the case of lens spaces one has the simple knots $K(p,q,k)$ (in the notation of \cite{rasmussen2007lens}), which share many interesting properties with the unknot (cf. \cite{rasmussen2007lens}, and \cite{hedden2011floer}), despite being only rationally nullhomologous.
We can then define the genera
\begin{equation}\label{generitilde}
\widetilde{g}_{\circ} (K) = \min\{g(\Sigma) \;|\; \Sigma \subset Y \times [0,1], \partial \Sigma = K \sqcup \overline{T}_{[K]} \},
\end{equation}
where again $\circ$ belongs to the set $\{*, TOP, PL, TOPL\}$, and the surface $\Sigma$ is embedded in the trivial cobordism $Y\times [0,1]$ accordingly (\emph{i.e.}$\:$in the appropriate category).
We will refer to each of the $\widetilde{g}_{\circ} (K)$ genera as the \emph{$\circ$-cobordism genus} of the knot $K$.
\end{defi}
Note that in the $3$-sphere case there is no difference between $\widetilde{g}_{\circ}$ and $g_\circ$, since removing a $4$-ball from the interior of $\mathbb{D}^4$ yields $S^3 \times [0,1]$.
For a fixed filling $W$ (which is omitted from the notation), there are some obvious relations between these genera:
\begin{align*}
g_{TOPL} (K) \le g_{PL} (K) \;\;\;\;&\;\;\;\; g_{TOPL} (K) \le g_{TOP} (K) \\
g_{PL} (K) \le g_*(K) \;\;\;\;&\;\;\;\; g_{TOP} (K) \le g_*(K),
\end{align*}
and $g_{TOPL} = g_{PL} = 0$ for all knots in $S^3$.
It would be interesting to determine if there is a relation between $g_{TOP}$ and $g_{PL}$.
Note that if a knot has minimal PL-genus greater than $0$, then in particular, it is not concordant in a homology cobordism to any knot in $S^3$ (cf. \cite{levine2014non}).
The following elementary inequality always holds for $K \in \mathcal{K}(Y)$ and any choice of $\circ$ as above (after picking a suitable $T_{[K]}$ where available):
\begin{equation}
g_{\circ} (K) \le \widetilde{g}_{\circ} (K) + g_{\circ}(T_{[K]}).
\end{equation}
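This inequality follows, for instance, by stacking cobordisms: a surface in $Y \times [0,1]$ from $K$ to $T_{[K]}$ realizing $\widetilde{g}_{\circ} (K)$ can be glued, along $T_{[K]}$, to a surface in $W$ realizing $g_{\circ}(T_{[K]})$; the result is a surface of genus $\widetilde{g}_{\circ} (K) + g_{\circ}(T_{[K]})$ bounded by $K$ in $W \cup_Y \left( Y \times [0,1] \right) \cong W$.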
\newline
The notion of almost-concordance is closely related to primeness of a knot. The definition makes sense also in a $3$-manifold other than $S^3$:
\begin{defi}\label{def:genuino}
Call a knot $(Y, K)$ \emph{l-prime} if it is not a connected sum with a non-trivial knot in $S^3$, that is, there is no embedded\footnote{The boundary of a ball as such is usually called a \emph{Conway sphere}.} $3$-ball $B$ intersecting $K$ non-trivially. By triviality of the intersection we mean that the pair $(B, K \cap B)$ is isotopic (relatively to the boundary) to the pair $(\mathbb{D}^2 \times \mathbb{D}^1 , \{0\}\times \mathbb{D}^1 )$.
In other words $(Y,K)$ is l-prime \emph{iff} for every decomposition $(Y,K) = (Y,K_0) \# (S^3,K_1)$ we have $K_1 = \bigcirc$.
\end{defi}
It was first proven by Kirby and Lickorish in \cite{kirby1979prime} that every knot in $S^3$ is concordant to a prime knot. Using the same argument as Livingston \cite{livingston1981homology}, we can obtain a generalization of their result to arbitrary $3$-manifolds. To the best of the author's knowledge this result (despite being an almost straightforward generalization of the techniques of \cite{livingston1981homology}) does not appear in the literature, so we include it for the sake of completeness.
\begin{thm}\label{thm:conctogenuine}
Every knot $K \subset Y$ is concordant to an l-prime knot.
\end{thm}
\begin{proof}
Consider the knot $P\subset S^1 \times \mathbb{D}^2$ shown in Figure \ref{fig:dimgenuini}.
\begin{figure}
\includegraphics[width=9cm]{patternlivingst.png}
\caption{The pattern $P$ for the satellite construction. Attaching the gray band, and capping the nullhomologous component, yields a concordance in $S^1 \times \mathbb{D}^2 \times [0,1]$ between $P$ and the core of the solid torus.}
\label{fig:dimgenuini}
\end{figure}
Given any knot $(Y, K)$, we can remove a tubular neighborhood $\nu(K)$ and glue in\footnote{We do not need to specify the framing with respect to which we are attaching the solid torus, since the result holds for any choice.} the solid torus containing the pattern, obtaining a new knot $(Y , K_P)$, the satellite of $K$ with pattern $P$. The concordance suggested in Figure \ref{fig:dimgenuini} induces a concordance from $K_P$ to $K$. Now we just need to check that $K_P$ is in fact l-prime;
we start by noting that the pattern inside the solid torus is prime\footnote{This is proven in a more general setting in \cite{livingston1981homology}.}, \emph{i.e.}$\:$it cannot be split into non-trivial knots by a Conway sphere. Then we just need to argue by contradiction that any sphere giving a decomposition of $K_P$ can be isotoped away from the torus given by the boundary of the neighborhood for the original knot $K$. So necessarily the sphere would be contained in $\nu (K)$, and we can conclude by the primeness of $P$.
This can be done similarly to \cite[Thm. 4.2]{livingston1981homology}.
We sketch the construction here; call $S$ an embedded sphere giving a decomposition in two summands of $K_P$, and $R$ the annulus obtained by deleting a small neighborhood of the two points $K_P \cap S$. Generically, the intersections between $R$ and $\partial \nu (K)$ are composed of circles which are nullhomologous in $R$, and circles which are parallel to $\partial R$. The first kind can be eliminated by isotopies, starting from the innermost ones.
We want to show that there can not be any intersection which is parallel to $\partial R$; if such intersections existed, then by considering one close to $\partial R$, we would have found a disk bounded by a meridian of $\partial \nu (K)$ and intersecting $K_P$ in a single point. But this is absurd, since the minimal geometric number of intersections between $P$ and a disk bounding a meridian of the solid torus is 3.
\end{proof}
\section{Relation with the fundamental group}\label{sec:fund}
As independently pointed out by Eylem Zeliha Yildiz and Patrick Orson, there is a simple relation between almost-concordance and free homotopy classes of loops in a $3$-manifold $Y$:
\begin{prop}\label{prop:free}
If $K_0, K_1 \in \mathcal{K}(Y)$ are almost-concordant, then they are freely homotopic.
\end{prop}
\begin{proof}
By hypothesis there exists a smoothly embedded annulus $A$ in $Y \times [0,1]$ connecting $K_0 \# K^\prime_0$ to $K_1 \#K^\prime_1$, for some knots $K^\prime_0 , K^\prime_1$ in $S^3$.
Consider two regular homotopies\footnote{The $h_i$ can be \emph{e.g.} a sequence of crossing changes taking each knot to $\bigcirc$.} $h_i$ for $i=0$ and $1$, between $K^\prime_i$ and the unknot in $S^3$; we can suppose that the $h_i$ are the identity outside a ball $D$ (times an interval) containing $K^\prime_i$, and that they fix a point of $K^\prime_i$. Removing a small neighborhood of the fixed point produces a relative homotopy from $K^\prime_i$ (minus a small interval) to the unknot (again, minus a small interval) supported in $\mathbb{D}^3 \times [0,1]$.
One can then attach these two resulting $4$-balls to the identity cobordisms $Y\times [0,1] \setminus D^\prime_i \times [0,1]$, where $D^\prime_i$ is a $3$-ball intersecting $K_i$ trivially.
The result of these attachments are two homotopies, going from $K_0 \# K^\prime_0$ to $K_0$ and from $K_1 \# K^\prime_1$ to $K_1$. Attaching these to the two ends of $A$ provides the needed homotopy from $K_0$ to $K_1$.
\end{proof}
\begin{proof}[Proof of Thm. \ref{teo:qconcnonbanale}]
It is a well known fact that free homotopy classes of loops in $Y$ are in bijection with conjugacy classes of $\pi_1 (Y)$.
If the fundamental group of a closed $3$-manifold $Y$ is not abelian, there must be a commutator, say $[a,b] = aba^{-1} b^{-1}$ with $a, b \in \pi_1 (Y)$, which is not the identity. Taking an embedded and connected representative for $[a,b]$ yields a nullhomologous knot which is not almost-concordant to the unknot. Therefore\footnote{See for example \cite{hempel20043} or \cite{friedlintroduction} for a classification of $3$-manifolds with non-abelian fundamental group.} ``most" $3$-manifolds must have $\left|\widetilde{\mathcal{C}}^Y_0 \right| >1$.
\end{proof}
This implies, \emph{e.g.}$\:$if $Y$ is a $\mathbb{Z} HS^3$, that any smooth and simple representative of a non-trivial element of $\pi_1 (Y)$ is a nullhomologous knot which is not freely homotopic to the unknot in $Y$, hence not almost-concordant to it.
In the following two sections we are going to prove that a similar result holds also in a special case of closed $3$-manifolds with abelian\footnote{Recall that the only closed and orientable $3$-manifolds with non-trivial abelian fundamental group are lens spaces, the $3$-torus and $S^1\times S^2$.} fundamental groups, that is the lens spaces $L(p,1)$.
\begin{figure}[h]
\includegraphics[width=10cm]{knotsin3manifolds.png}
\caption{The partition of the set of knots in a $3$-manifold into homology classes, free homotopy classes, almost-concordances and smooth concordance classes.}
\end{figure}
Proposition \ref{prop:free} implies that for each $3$-manifold $Y$ we have a splitting into free homotopy classes of loops in $Y$:
\begin{equation}
\widetilde{\mathcal{C}}^Y = \bigoplus_{l \in \faktor{\pi_1(Y)}{\mbox{\tiny{conj.}}}} \widetilde{\mathcal{C}}^Y_l
\end{equation}
\begin{rmk}
Note that the partitions determined by these equivalence relations (homology, free homotopy, $PL$-concordance and regular concordance) are, in general, each strictly finer than the previous one, meaning that there are knots which are distinguished by one relation but not by the coarser ones.
As an example of the only case which is not well known, we are going to show in the next section the existence of knots (in fact an infinite family) which are not $PL$-concordant to the unknot, but nonetheless belong to the same free homotopy\footnote{This is an easy consequence of the commutativity of the fundamental group of lens spaces.} class of $\bigcirc$. Thus the converse of Proposition \ref{prop:free} does not hold.
\end{rmk}
\section{A ``concrete" example}\label{sec:concrete}
In this section we are going to exploit Ozsv{\'a}th-Szab{\'o}-Rasmussen's Floer theoretic invariant $\tau$ to distinguish almost-concordance classes, in the case where $Y = L(p,q)$. We remark here that the following construction holds in a more general context (Definition \ref{taushiftgener}), but it is generally hard to compute the invariants we will use.
On the other hand, the combinatorial versions of knot Floer homology, first developed by Manolescu, Ozsv{\'a}th and Sarkar \cite{manolescu2009combinatorial} for links in $S^3$, and by Baker, Hedden and Grigsby \cite{BGH} for links in $L(p,q)$ are easily computable, and the latter can be used to provide many interesting examples.
We adopt the conventions of \cite{gompf4} and \cite{BGH} regarding lens spaces, according to which $L(p,q) = S^3_{-\frac{p}{q}} (\bigcirc)$.
What follows is a brief recap on the needed results on knot Floer homology. Great sources on the subject are \cite{holdisknot} and \cite{manolescu2014introduction}.\\
Heegaard Floer homology is a package of invariants of $\mbox{Spin}^c$ $3$-manifolds, introduced by Ozsv{\'a}th and Szab{\'o} in \cite{holdisk3man}. The simplest of these invariants is denoted by $\widehat{HF} (Y,\mathfrak{s})$, where $\mathfrak{s} \in \mbox{Spin}^c (Y)$.
Soon after its definition, it was realized in \cite{holdisknot} and \cite{rasmussenknot} that a nullhomologous\footnote{The same holds for rationally nullhomologous knots, and similar results hold also in the general case, see \cite{2010arXiv1012.3088S}.} knot $(Y,K)$ induces a filtration on the complexes $\widehat{CF} (Y,\mathfrak{s})$ computing the Heegaard Floer homology group of $(Y,\mathfrak{s})$.\\
The filtered chain homotopy type of these filtered complexes, denoted by $\widehat{CFK} (Y,K,\mathfrak{s})$, is an invariant of the triple $(Y,K,\mathfrak{s})$. In particular to each such triple $(Y,K,\mathfrak{s})$, with $\mathfrak{s} \in \mbox{Spin}^c (Y)$ and $K \in \mathcal{K}(Y)$, we associate a relatively bigraded group $\widehat{HFK}(Y,K,\mathfrak{s})$, finitely generated over $\mathbb{F} = \mathbb{Z} /2 \mathbb{Z}$. This is just the homology of the graded object associated to the filtered chain homotopy type of $\widehat{CFK}(Y,K,\mathfrak{s})$.
The two gradings are known as the Maslov and Alexander degrees. The first one can be thought of as a homological degree (it decreases by 1 under the action of the differential), while the latter is the degree associated to the filtration induced on $\widehat{CF}(Y,\mathfrak{s})$ by $K$.
If $Y$ is a lens space (or more generally a rational homology $3$-sphere), the gradings lift to an absolute $\mathbb{Q}$-valued bigrading by results of \cite{absolutely}.
An $L$-space $\overline{Y}$ is a rational homology $3$-sphere such that for each $\mathfrak{s} \in \mbox{Spin}^c (\overline{Y})$:
$$\mbox{rk}_\mathbb{F} \left(\widehat{HF}(\overline{Y},\mathfrak{s}) \right) = 1.$$
All lens spaces are $L$-spaces, and in the following we will fix an identification of $\mbox{Spin}^c (L(p,q))$ with $\mathbb{Z} / p \mathbb{Z}$, as described in \cite{absolutely}.\\
Knot Floer homology is known (see \emph{e.g.} \cite{holdisknot}) to satisfy a formula\footnote{We do not specify here the various conventions involved for $\mbox{Spin}^c$ structures, since in what follows we will only deal with the case $Y_0 = S^3$.} for the connected sum of two knots; if $ (Y,K,\mathfrak{s}) = (Y_0,K_0,\mathfrak{s}_0)\#(Y_1,K_1,\mathfrak{s}_1)$, then
\begin{equation}\label{eqn:connectedsum}
\widehat{HFK} \left( Y,K,\mathfrak{s} \right) \cong \widehat{HFK} \left( Y_0 , K_0 ,\mathfrak{s}_0 \right) \otimes \widehat{HFK} \left(Y_1, K_1 ,\mathfrak{s}_1 \right).
\end{equation}
The isomorphism on the complex level is a filtered chain homotopy equivalence.
The knot Floer homology of the unknot in a lens space is readily seen to be
\begin{equation}\label{eqbanale}
\widehat{HFK} (L(p,q), \bigcirc ,\mathfrak{s}) \cong \mathbb{F}_{[d(p,q,\mathfrak{s}),0]}
\end{equation}
where the subscript of the module indicates the bidegree (Maslov, Alexander) of the generator, and $d(p,q,\mathfrak{s})$ is a rational number known as the correction term\footnote{See \cite{absolutely} for the definition and a recursive formula.} for $L(p,q)$ in the $\mbox{Spin}^c$ structure $\mathfrak{s}$.
\begin{rmk}\label{rmk:nongenuino}
If a knot $K$ in an $L$-space $\overline{Y}$ is such that
$$\mbox{rk}_\mathbb{F} \left( \widehat{HFK} (\overline{Y},K,\mathfrak{s})\right) =1 \;\; \mbox{ and } \;\; \mbox{rk}_\mathbb{F} \left( \widehat{HFK} (\overline{Y},K,\mathfrak{s}^\prime)\right) \neq 1$$ for some $\mathfrak{s},\mathfrak{s}^\prime \in \mbox{Spin}^c (Y)$, then it is l-prime by Equations \eqref{eqn:connectedsum} and \eqref{eqbanale}, coupled with the unknot detection of $\widehat{HFK}$ in $S^3$ (first proved in \cite{ozsvath2004genus}).
\end{rmk}
The main tool we are going to use in order to study the notions defined in the previous section will be a modified version of the $\tau$-invariant. This invariant was first defined for knots in the $3$-sphere in the holomorphic setting in \cite{ozsvath2003knot}, and it has proven to be extremely useful since. It is a concordance invariant\footnote{In fact it is an homomorphism $\tau :\mathcal{C} \rightarrow \mathbb{Z}$, see also Theorem \ref{thm:tauadd}.} of knots in $S^3$, and provides a lower bound on the slice genus. Its properties can be exploited \emph{e.g.} to give a self-contained combinatorial proof of the Milnor conjecture and to exhibit exotic $\mathbb{R}^4$s (see \cite[Ch. 8]{SOS}).
\begin{defi}\label{def:tau}
For $\mathfrak{s} \in \mbox{Spin}^c(L(p,q)) $, $a \in \mathbb{Q}$ and $K \in \mathcal{K}(L(p,q))$, denote by $\mathcal{F}_{a}(L(p,q),K,\mathfrak{s})$ the elements in the complex $\widehat{CFK}(L(p,q),K,\mathfrak{s})$ with Alexander degree $\le a$.
There is a natural inclusion map:
\begin{equation*}
\iota^a : \mathcal{F}_{a}(L(p,q),K,\mathfrak{s}) \hookrightarrow \widehat{CF}(L(p,q),\mathfrak{s}).
\end{equation*}
The $\tau$-invariant associated to the $\mbox{Spin}^c$ structure $\mathfrak{s} \in \mbox{Spin}^c(L(p,q))$ for a knot $(L(p,q), K)$, denoted by $\tau^\mathfrak{s} (K)$, is the minimal $a \in \mathbb{Q}$ such that the induced map in homology
\begin{equation*}
\iota^a_* : H_* \left( \mathcal{F}_{a}(L(p,q),K,\mathfrak{s}) \right) \hookrightarrow \widehat{HF}(L(p,q),\mathfrak{s})
\end{equation*}
is non-trivial.
Furthermore define $$\tau (K) = \left( \tau^0 (K), \ldots , \tau^{p-1} (K) \right) \in \mathbb{Q}^{p}.$$
\end{defi}
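As a first example, Equation \eqref{eqbanale} implies that $\tau^\mathfrak{s} (\bigcirc) = 0$ for every $\mathfrak{s} \in \mbox{Spin}^c (L(p,q))$: since the homology of the associated graded object is concentrated in Alexander degree $0$, the map $\iota^a_*$ is an isomorphism for $a \ge 0$ and trivial for $a < 0$.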
\begin{rmk}\label{tauraz}
Despite being $\mathbb{Q}$-valued, the Alexander degrees of the elements in $\widehat{CFK} (L(p,q),K,\mathfrak{s})$ differ by integers. So, for each $\mbox{Spin}^c$ structure $\mathfrak{s} \in \mbox{Spin}^c (L(p,q))$, these degrees are in fact a subset of $\{r_\mathfrak{s} (K) + \mathbb{Z}\}$ for some $r_\mathfrak{s} (K)\in \mathbb{Q}$.
In each lens space, there is a canonical choice for $r_{\mathfrak{s}}(K)$, which depends\footnote{In each homology class of a lens space there is a unique Floer simple knot, called \emph{simple knot} (see \emph{e.g.} \cite{rasmussen2007lens}); $r_{\mathfrak{s}}(K)$ is then just the Alexander degree of the only element of its knot Floer homology.} on the homology class of $K$, the $\mbox{Spin}^c$ structure $\mathfrak{s}$ and the parameters $p,q$.
Moreover, if the knot is nullhomologous, the degrees are all integers, and we can choose $r_\mathfrak{s} = 0 \; \forall \mathfrak{s} \in \mbox{Spin}^c (L(p,q))$.
In the interest of simplicity, in what follows we are going to regard each component of the $\tau$-invariant of knots in lens spaces as being $\mathbb{Z}$-valued.
\end{rmk}
The following result was proved for knots in $S^3$ in \cite{ozsvath2003knot}, building on \cite[Theorem 7.1]{holdisknot} (which instead works for general $3$-manifolds). It was then proved in full generality by Hedden in \cite[Prop. 3.6]{hedden2008ozsvath}.
We reformulate it here as follows:
\begin{thm}[\cite{hedden2008ozsvath}]\label{thm:tauadd}
Suppose $(L(p,q),K) = (L(p,q), K_0)\# (S^3,K_1)$. Then for each $\mathfrak{s} \in \mbox{Spin}^c (L(p,q))$:
$$\tau^\mathfrak{s} (K) = \tau^\mathfrak{s} (K_0) + \tau(K_1)$$
\end{thm}
In other words, the $\mathcal{C}$-action shifts the $\tau$-invariants of $(L(p,q),K^\prime)$ in a uniform manner in each $\mbox{Spin}^c$ structure.\\
The next theorem is a generalization of a well known result for knots in the $3$-sphere, first proven for knots in $S^3$ by Sarkar \cite{sarkar2010grid} in a purely combinatorial setting. In the same paper it is used to give an elementary proof of the Milnor Conjecture, first proven by Kronheimer and Mrowka \cite{kronheimer1993gauge} using gauge-theoretic techniques.
The result holds with small modifications for arbitrary $3$-manifolds (see \cite{heddenunpublished}), but in what follows we will only need a version for lens spaces.
\begin{thm}[\cite{heddenunpublished}]\label{thm:cobo}
Let $\Sigma$ be a smooth cobordism of genus $g(\Sigma)$ in $L(p,q) \times [0,1]$ between the knots $K_0 , K_1 \in \mathcal{K} (L(p,q))$.\\
Then $\forall \mathfrak{s} \in \mbox{Spin}^c (L(p,q))$:
$$ |\tau^\mathfrak{s} (K_0) - \tau^\mathfrak{s} (K_1)| \le g(\Sigma).$$
\end{thm}
\begin{cor}
Suppose $(L(p,q), K_0) \sim (L(p,q) , K_1)$. Then for all $\mathfrak{s} \in \mbox{Spin}^c (L(p,q))$ $$\tau^\mathfrak{s} (K_0) = \tau^\mathfrak{s} (K_1),$$ that is the $p$-tuple of $\tau$-invariants is a concordance invariant.
\end{cor}
\begin{proof}
By hypothesis there is a genus-0 surface $\Sigma$ connecting $K_0$ and $K_1$ in $L(p,q) \times [0,1]$, so for all $\mathfrak{s} \in \mbox{Spin}^c (L(p,q))$:
$$0 \le |\tau^\mathfrak{s} (K_0) - \tau^\mathfrak{s} (K_1)| \le g(\Sigma) = 0.$$
\end{proof}
\begin{rmk}
By adapting the techniques of \cite{SOS} and \cite{BGH}, Theorem \ref{thm:cobo} can be proven in a combinatorial fashion also for knots in lens spaces (see \cite{PHDtesi}).
\end{rmk}
We can now turn to the study of the almost-concordance classes of knots in $L(p,q)$.
The key fact that will allow us to distinguish them is Theorem \ref{thm:tauadd}:
\begin{defi}\label{def:taushifted}
Let $(L(p,q) , K)$ be a knot; define the \emph{shifted $\tau$-invariant} as the $p$-tuple
$$\tau_{sh} (K) = (\tau^0(K) + n , \ldots , \tau^{p-1}(K) + n)$$
where $n \in \mathbb{Z}$ is the only integer\footnote{\emph{cf.} Remark \ref{tauraz}.} such that $\displaystyle \min_{\mathfrak{s}} \{\tau^\mathfrak{s} (K) + n\} = 0$.
\end{defi}
We can now turn to the proof of Proposition \ref{proptaushinv}.
\begin{proof}[Proof of Prop. \ref{proptaushinv}]
Using Theorem \ref{thm:tauadd}, it is immediate to show that
the $\tau_{sh}$-invariant is unchanged under the action described in Equation \eqref{azione}, hence almost-concordant knots have the same $\tau_{sh}$-invariant.
\end{proof}
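Explicitly, if $K^\prime \in \mathcal{K}(S^3)$ and $t = \tau(K^\prime)$, Theorem \ref{thm:tauadd} gives $\tau^\mathfrak{s} (K \# K^\prime) = \tau^\mathfrak{s} (K) + t$ for every $\mathfrak{s}$, so the normalizing integers of $K$ and $K \# K^\prime$ satisfy $n^\prime = n - t$, and
$$\tau^\mathfrak{s} (K \# K^\prime) + n^\prime = \left( \tau^\mathfrak{s} (K) + t \right) + (n - t) = \tau^\mathfrak{s} (K) + n .$$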
\begin{prop}\label{prop:localeshif}
If $K$ is a local knot in $L(p,q)$, then $\tau_{sh} (K) = (0,\ldots,0)$.
\end{prop}
\begin{proof}
It follows immediately from Theorem \ref{thm:tauadd}, and the fact that the unknot has trivial $\tau_{sh}$-invariant.
\end{proof}
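Spelled out: a local knot decomposes as $K = (L(p,q), \bigcirc) \# (S^3, K^\prime)$, so by Theorem \ref{thm:tauadd} and the vanishing of $\tau^\mathfrak{s} (\bigcirc)$ we get $\tau^\mathfrak{s} (K) = \tau(K^\prime)$ for every $\mathfrak{s}$; all the components coincide, hence the shifted tuple is $(0, \ldots, 0)$.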
It is possible to greatly generalise the definition of the $\tau_{sh}$-invariant by considering Hedden's approach to $\tau$ invariants \cite{hedden2008ozsvath}:
\begin{defi}\label{taushiftgener}
Given a knot $K \in \mathcal{K}(Y)$, one can choose an ordered tuple of elements $(x_1, \ldots, x_m) \in \left( \widehat{CF}(Y) \right)^m$ which are non-trivial and distinct in the homology $\widehat{HF}(Y)$. To each of these elements we can associate a numerical invariant $\tau_{[x_i]}(Y,K)$.
\end{defi}
\begin{rmk}
In the previous definition of $\tau_{sh}$, we chose $[x_i]$ to be the generator of $\widehat{HF}(L(p,q),\mathfrak{s}_i)$, that is the only non-trivial element of $\widehat{HF}(L(p,q))$ in the $i$-\emph{th} $\mbox{Spin}^c$ structure.\\
The $\tau_{[x_i]}(Y,K)$ invariants have the same behaviour (\cite[Prop. 3.6]{hedden2008ozsvath}) under connected sum with knots in the three-sphere as the ``regular" $\tau$-invariants, hence can be used to obstruct the existence of almost-concordances as well.
\end{rmk}
In particular if $[x] \neq [y]$ are two non-trivial classes in $\widehat{HF}(Y)$ for a $\mathbb{Q} HS^3$ $Y$, and $K_0,K_1$ are two knots in $\mathcal{K}(Y)$ such that
\begin{equation}
\left(\tau_{[x]}(Y,K_0),\tau_{[y]}(Y,K_0)\right) \neq \left(\tau_{[x]}(Y,K_1) + r,\tau_{[y]}(Y,K_1) + r\right),
\end{equation}
where $r\in \mathbb{Q}$ is the only rational such that $\tau_{[x]}(Y,K_0) = \tau_{[x]}(Y,K_1) + r$, then $K_0 $ is not almost-concordant to $ K_1$.
\begin{ex}\label{esempio:0134}
Adapting the grid-diagrammatic approach\footnote{See also \cite{manolescu2007combinatorial} and \cite{SOS}.} developed in \cite{BGH}, we were able to compute in \cite{PHDtesi} the $\tau$-invariants for the knot $(L(3,1),\widetilde{K})$ shown in the twisted grid diagram form\footnote{For the definitions of twisted grid diagrams for knots in lens spaces see \cite{BGH}.} in Figure \ref{nodo:0134}.
The following tables provide the sets of generators for the complexes $\widehat{GC}(L(3,1), \widetilde{K},\mathfrak{s})$, which are combinatorially defined and quasi-isomorphic to the complexes $\widehat{CFK}(L(3,1), \widetilde{K},\mathfrak{s})$ by \cite[Thm.1.1]{BGH}.
These are finitely generated $\mathbb{F}[U]$-modules, where $U$ denotes a graded endomorphism decreasing the Maslov and Alexander degrees of a generator by 2 and 1 respectively. Note that the resulting homology is a finitely generated $\mathbb{F}$-module.
There are 6 generators (as an $\mathbb{F} [U]$-module) for each complex $\widehat{GC}(L(3,1),\widetilde{K},\mathfrak{s})$ computing the knot Floer homology of $(L(3,1), \widetilde{K})$ in the $\mbox{Spin}^c$ structure $\mathfrak{s}$. We denote them by $x_i^\mathfrak{s}$ for $i = 0, \ldots,5$ and $\mathfrak{s} = 0,1$, and display their filtered differentials below. We omit the computations for $\mathfrak{s} = 2$, since in that case the complex is identical to the $\mathfrak{s} = 1$ one.
The filtration on the complex is used to compute the $\tau$-invariants according to definition \ref{def:tau}. To obtain the homology of the associated graded $\widehat{HFK}(L(3,1),\widetilde{K},\mathfrak{s}) \cong \widehat{GH}(L(3,1),\widetilde{K},\mathfrak{s})$, just delete all the differentials which do not preserve the Alexander degree (these are the grey dashed lines in Figure \ref{fig:complessi}).
\begin{tabular}{ccc}\label{contazzi}
Generator & $\left(M, A \right)$ & Differential\\
\hline
$\mbox{Spin}^c$ degree = 0 & & \\
\hline
& &\\
$x_0^0$ & $\left(\frac{3}{2}, 1 \right)$ & $\partial (x_0^0) = x_1^0 + x_2^0$\\
$x_1^0$ & $\left(\frac{1}{2}, 0 \right)$ & $\partial (x_1^0) = Ux_0^0 + x_3^0 + x_4^0$\\
$x_2^0$ & $\left(\frac{1}{2}, 0 \right)$ & $\partial (x_2^0) = Ux_0^0 + x_3^0 + x_4^0$\\
$x_3^0$ & $\left(-\frac{1}{2}, -1 \right)$ & $ \partial (x_3^0) = U (x_1^0 + x_2^0)$\\
$x_4^0$ & $\left(-\frac{1}{2}, -1 \right)$ & $\partial (x_4^0) = 0$\\
$x_5^0$ & $\left(-\frac{3}{2}, -2 \right)$ & $\partial (x_5^0) = U x_4^0$ \\
& &\\
\hline
$\mbox{Spin}^c$ degree = 1 & & \\
\hline
& &\\
$x_0^1$ & $\left(\frac{7}{6}, 0 \right)$ & $ \partial (x_0^1) = x_1^1 + x_2^1$\\
$x_1^1$ & $\left(\frac{1}{6}, 0 \right)$ & $\partial (x_1^1) = x_4^1 + x_5^1 $\\
$x_2^1$ & $\left(\frac{1}{6}, 0 \right)$ & $\partial (x_2^1) = x_4^1 + x_5^1$\\
$x_3^1$ & $\left(\frac{1}{6}, -1 \right)$ & $\partial (x_3^1) = x_4^1 + x_5^1$\\
$x_4^1$ & $\left(-\frac{5}{6}, -1\right)$ & $\partial (x_4^1) = U (x_1^1 + x_3^1)$\\
$x_5^1$ & $\left(-\frac{5}{6}, -1 \right)$ & $\partial (x_5^1) = U (x_1^1 + x_3^1) $ \\
& &\\
\end{tabular}\\
\begin{center}
\begin{figure}
\includegraphics[width = 9cm]{complesso0134.png}
\caption{The complexes $\widehat{GC}(L(3,1),\widetilde{K},\mathfrak{s})$ for $\mathfrak{s} = 0$ (top) and $\mathfrak{s} = 1$ (bottom). The axes are labeled by Alexander degree and powers of $U$. The dotted line highlights the filtration level where the map\newline \hspace{\linewidth} $\;\;\;\;\;\;\; \iota^a_* : H_* \left( \mathcal{F}_{a}(L(3,1),\widetilde{K},0) \right) \hookrightarrow \widehat{HF}(L(3,1),0)\;\;\;\;\;\;\;\;\;\;\; $ becomes surjective.}
\label{fig:complessi}
\end{figure}
\end{center}
We display the homology of the associated graded complex here:
\begin{equation}
\widehat{HFK} (L(3,1),\widetilde{K},\mathfrak{s}) =
\begin{cases}
\mathbb{F}_{\left[ \frac{3}{2} , 1 \right]} \oplus\mathbb{F}_{\left[ \frac{1}{2} , 0\right]} \oplus \mathbb{F}_{\left[ -\frac{1}{2} , -1\right]} & \mbox{ if } \mathfrak{s} = 0\\
\mathbb{F}_{\left[ \frac{1}{6},0\right]} & \mbox{ if } \mathfrak{s} = 1\\
\mathbb{F}_{\left[ \frac{1}{6},0\right]} & \mbox{ if } \mathfrak{s} = 2\\
\end{cases}
\end{equation}
$\widetilde{K}$ is a nullhomologous knot, and $$\tau (\widetilde{K}) = (-1,0,0) \;\;\;\; \Longrightarrow \;\;\;\; \tau_{sh}(\widetilde{K}) = (0,1,1).$$
In particular this means that $\widetilde{K}$ is not even almost-concordant to a local knot. Furthermore, since its knot Floer homology has rank one in the $\mbox{Spin}^c$ structures 1 and 2, and rank greater than one
in the $\mbox{Spin}^c$ structure 0, by Remark \ref{rmk:nongenuino} it can not be the connected sum with a non-trivial knot in $S^3$, hence it is also l-prime.
\begin{rmk}
The computation of the $\tau$-invariants displayed in Figure \ref{fig:complessi} was partially aided by a grid homology calculator for knots in lens spaces. This program was developed in Sage, and a GUI version is freely available on my homepage \url{http://poisson.phc.dm.unipi.it/~celoria/#programs}.
\end{rmk}
\begin{figure}[h]
\includegraphics[width=6cm]{0134.png}
\caption{A grid diagrammatic representation of the knot $\widetilde{K}$ in $L(3,1)$.}
\label{nodo:0134}
\end{figure}
\end{ex}
\section{Extension to $L(p,1)$}\label{sec:extension}
Now we are going to extend the result obtained from the previous computations to obtain a nullhomologous knot $\widetilde{K}_p \subset L(p,1) $ for each $p\ge 3$, such that $\widetilde{K}_p $ is not almost-concordant to $\bigcirc$.
The knots $\widetilde{K}_p$ are constructed by \emph{expanding} (Definition \ref{def:expansion}) the grid diagram presentation for the knot $\widetilde{K} = \widetilde{K}_3$ of example \ref{esempio:0134}. Every such knot can be described by a five-tuple of integers $(p,X_0,X_1,O_0,O_1)$: $p$ is the parameter of $L(p,1)$, while the other four values describe the position of the $\mathbb{X}$ and $\mathbb{O}$ markings appearing in the grid.
In our case all the knots $K_p$ can be described by $\mathbb{X} = (2,3)$ and $\mathbb{O} = (5,0)$, so one can easily see that $K_3 = \widetilde{K}$ from example \ref{esempio:0134}.
\begin{rmk}\label{rmklevine}
As pointed out by Levine \cite{levineprivate}, a knot in a lens space whose homology in each $\mbox{Spin}^c$ structure is isomorphic to either the trefoil or the unknot (and both cases occur) has non-trivial $\tau_{sh}$-invariant. This is due to the fact that if $\widehat{HFK}(L(p,q),K, \mathfrak{s})\cong \widehat{HFK}(S^3,3_1)$ (up to a shift in the Alexander grading), the spectral sequence (see \cite[Lemma 3.6]{holdisknot}) $$\widehat{HFK}(L(p,q),K, \mathfrak{s}) \Longrightarrow \widehat{HF}(L(p,q),\mathfrak{s}) \cong \mathbb{F}$$ has two consecutive terms canceling each other out, leaving a surviving generator in Alexander degree $\pm 1$.
\end{rmk}
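As a model for this cancellation, recall the $S^3$ picture: $\widehat{HFK}(S^3, T_{2,3})$ of the right-handed trefoil has a single generator in each of the (Maslov, Alexander) bidegrees $(0,1)$, $(-1,0)$ and $(-2,-1)$; the spectral sequence converging to $\widehat{HF}(S^3) \cong \mathbb{F}_{(0)}$ cancels the last two, so the surviving generator sits in Alexander degree $1$, giving $\tau(T_{2,3}) = 1$ (for the mirror the survivor sits in degree $-1$ and $\tau = -1$).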
\begin{defi}\label{def:expansion}
The operation of \emph{expansion} consists in adding an $n\times n$ box on the right of an $n$-dimensional grid representing a knot in $L(p,1)$, taking it to a new grid, with the same dimension, but representing a knot in $L(p+1,1)$ (see Figure \ref{nodo:kappap}).
\end{defi}
In what follows we are going to exhibit a (filtered) chain homotopy equivalence between the grid complexes of $\widetilde{K}_p$ and $\widetilde{K}_{p+1}$ in the $\mbox{Spin}^c$ structures $0$ and $1$. Using Remark \ref{rmklevine}, we are going to prove that each of these knots has a non-trivial $\tau_{sh}$-invariant, hence cannot be almost-concordant to the unknot.
\begin{figure}[h]
\includegraphics[width=9cm]{0134p.png}
\caption{A grid diagram representing the knots $K_p \in \mathcal{K}(L(p,1))$ (the grid is composed by $4p$ squares). }
\label{nodo:kappap}
\end{figure}
\begin{rmk}
This expansion procedure can be carried out in slightly greater generality, and is going to be fully detailed in an upcoming paper.
\end{rmk}
We find it more convenient to prove the following statements in terms of the \emph{tilde} version\footnote{Cf. \cite[Ch. 4]{SOS}.} of grid complex and homology, denoted by $\widetilde{GC}(L(p,1),K, \mathfrak{s})$ and $\widetilde{GH}(L(p,1),K, \mathfrak{s})$ respectively. In the case at hand (so for grid diagrams of dimension 2), we have an isomorphism
\begin{equation}
\widetilde{GH}(L(p,1),K, \mathfrak{s}) \cong \widehat{HFK}(L(p,1),K,\mathfrak{s}) \otimes \left(\mathbb{F}_{(0,0)} \oplus \mathbb{F}_{(-1,-1)} \right).
\end{equation}
The differential $\widetilde{\partial}$ of the tilde version of grid homology can be represented by empty\footnote{That is not containing any $\mathbb{X}$ or $\mathbb{O}$ marking or component of the generators.} embedded rectangles in the grid.
The computation of the differentials is an easy but quite tedious exercise; we report here the results. First denote by $G_p$ the grid diagram describing the knot $K_p$;
each generator of the complex $\widetilde{GC}(G_p)$ can be regarded as a pair $(\sigma,(a,b)) \in \mathfrak{S}_2 \times \{0,\ldots, p-1\}^2$.
In order to ease the notation a bit, we call $x_{a,b} = (Id,(a,b))$ and $y_{a,b} = ((12),(a,b))$, and refer to $(a,b)$ as the \emph{$p$-coordinates} of the generator (note that in the following they will be regarded as integers $\text{mod}\;p$).
The assumption on the $\mbox{Spin}^c$ structure being 0 implies (\cite[Sec. 2.2]{BGH}) that $ a+b \equiv 2 \; (mod \;p)$. We can thus divide the generators according to whether $0 \le a,b\le 2$, or $a,b \ge 3$.
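For instance, for $p = 3$ and $\mathfrak{s} = 0$ the constraint $a + b \equiv 2 \; (mod \; 3)$ selects the pairs $(0,2), (1,1), (2,0)$, so $\widetilde{GC}(G_3)$ has, in this $\mbox{Spin}^c$ structure, the six generators $x_{0,2}, x_{1,1}, x_{2,0}, y_{0,2}, y_{1,1}, y_{2,0}$.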
If $\mathfrak{s} = 0$ the differential is:
\begin{equation}\label{eqdiff1}
\widetilde{\partial}(x_{a,b}) =
\begin{cases}
0 & \mbox{ if } a,b \in \{0,1,2\}\\
y_{b,a} + y_{a-1,b+1} & \mbox{ if } 3 \le a\le b\\
y_{a,b} + y_{b-1,a+1} & \mbox{ if } 3 \le b \le a
\end{cases}
\end{equation}
\begin{equation}
\widetilde{\partial}(y_{a,b}) =
\begin{cases}
0 & \mbox{ if } a,b \in \{0,1,2\} \mbox{ or } a = p-1\\
x_{a,b} + x_{b,a} & \mbox{ if } 3 \le a < b\\
x_{a+1,b-1} + x_{b-1,a+1} & \mbox{ if } 3 \le b \le a < p-1
\end{cases}
\end{equation}
If instead $\mathfrak{s} = 1$, the generators $(\sigma,(a,b)) \in \mathfrak{S}_2 \times \{0,\ldots, p-1\}^2$ satisfy $a + b \equiv 3 \; (mod\; p)$.
If $p\ge 5$, one can divide the generators in two sets as before: those in which both indices $a,b$ are strictly smaller than $4$, and the other ones in which both $p$-coordinates are $\ge 4$. The case in which $p=4$ can be worked out by hand (or even better with a computer), and one can prove that there are (filtered) chain homotopy equivalences relating the complexes of $\widetilde{K}_3, \widetilde{K}_4$ and $\widetilde{K}_5$ for $\mathfrak{s}=1$. The corresponding differentials for $p\ge 5$ are:
\begin{equation}
\widetilde{\partial}(x_{a,b}) =
\begin{cases}
0 & \mbox{ if } (a,b) = (0,3),(1,2),(2,1)\\
y_{2,1} + y_{0,3} & \mbox{ if } (a,b) = (3,0)\\
y_{b,a} + y_{a-1,b+1} & \mbox{ if } 4\le a\le b \le p-1\\
y_{a,b} + y_{b-1,a+1} & \mbox{ if } 4 \le b\le a\le p-1
\end{cases}
\end{equation}
\begin{equation}\label{eqdiff2}
\widetilde{\partial}(y_{a,b}) =
\begin{cases}
0 & \mbox{ if } (a,b) = (0,3),(2,1) \\
x_{0,3} & \mbox{ if } (a,b) = (3,0),(p-1,4)\\
x_{2,1} + x_{1,2} & \mbox{ if } (a,b) = (1,2)\\
x_{a,b} + x_{b,a} & \mbox{ if } 4 \le a < b\\
x_{a+1,b-1} + x_{b-1,a+1} & \mbox{ if } 4 < b \le a
\end{cases}
\end{equation}
In both cases we can apply the \emph{cancellation lemma} (see \emph{e.g.} \cite{krcatovich2015reduced}) to ``cancel'' the two rightmost generators (see Figure \ref{evolutioncpx1}).\\
We obtain two complexes which are chain homotopic to both $\widetilde{GC}(G_p,\mathfrak{s})$ and $\widetilde{GC}(G_{p-1},\mathfrak{s})$.
\begin{figure}[h]
\includegraphics[width=11cm]{complessiquasiiso.png}
\caption{The ``evolution'' of the complexes $\widetilde{GC}(L(p,1),K_p,\mathfrak{s})$ for $\mathfrak{s} = 0$ (top part) and $\mathfrak{s} = 1$ (lower part), and $p = 3,4$ and $7$ from left to right. In this case increasing the dimension of the grid produces chain homotopy equivalent complexes.}
\label{evolutioncpx}
\vspace{0.5cm}
\includegraphics[width=9cm]{acyclicpart.png}
\caption{A closer look at the lower parts of the previous figure. The circled generator is the only one surviving in homology; cancelling the two generators enclosed by the dotted grey line produces a chain homotopy equivalence to the previous complex.}
\label{evolutioncpx1}
\end{figure}
Equivalently, the homology of the complexes in the two relevant $\mbox{Spin}^c$ structures in minimal Alexander degree is, for every $p$, generated by the cycle $[y_{2,0}]$ if $\mathfrak{s} = 0$, and vanishes if $\mathfrak{s}=1$ (see Figures \ref{evolutioncpx} and \ref{evolutioncpx1}).
In these last sections we have proved that $\widetilde{\mathcal{C}}^{L(p,1)}_0$ is non-trivial, by exhibiting the nullhomologous knots $\widetilde{K}_p \in \mathcal{K}(L(p,1))$, and showing that they are not almost-concordant to the unknot:
\begin{prop}\label{ellepiuno}
All the knots $\widetilde{K}_p \in \mathcal{K}(L(p,1))$ represent non-trivial almost-concordance classes in $\widetilde{\mathcal{C}}_0^{L(p,1)}$.
\end{prop}
\begin{proof}
The results of this section imply that each of the $\widetilde{K}_p$ (for $p\ge 3$) is a nullhomologous knot in $L(p,1)$, such that
$$\tau^0 (\widetilde{K}_p) = \tau^0 (\widetilde{K}_3) = -1 \neq 0 = \tau^1 (\widetilde{K}_3) = \tau^1 (\widetilde{K}_p).$$
In particular their $\tau_{sh}$-invariants are non-trivial.
\end{proof}
\begin{rmk}\label{rmk:l21}
We left out the case of $L(2,1)$ up to now, but it is possible to prove that there is a nullhomologous knot $K_2 \in \mathcal{K}(L(2,1))$ such that $\widehat{GH}(L(2,1),K_2,0) \equiv \widehat{GH}(3_1)$ and $\widehat{GH}(L(2,1),K_2,1) \equiv \widehat{GH}(\bigcirc)$. The dimension of a minimal grid representing $K_2$ is $3$, and it can be represented as $\mathbb{X} = (0,1,2), \mathbb{O} = (2,3,4)$ (see Figure \ref{fig:nododue}).
\begin{figure}
\includegraphics[width=7cm]{gridnodo2.png}
\caption{A grid representing the ``smallest" knot in $L(2,1)$ with non-trivial $\tau_{sh}$ invariant. }
\label{fig:nododue}
\end{figure}
\end{rmk}
Now we can just recall a result by Hedden \cite{hedden2009knot} adapted to our situation to actually obtain infinitely many non-trivial almost-concordance classes in each $L(p,1)$.
Since we are dealing with nullhomologous knots we do not need to worry about framing issues\footnote{The ambiguity in this case is removed by choosing the Seifert longitude for the pattern attachment.}, which arise in the general case. Note also that any choice of the parameters for the cabling applied to $\widetilde{K}_p$ yields another nullhomologous knot.
\begin{thm}[2.2 of \cite{hedden2009knot}]\label{teo:cabling}
Let $K \in \mathcal{K}(L(p,1))$, and choose a sufficiently large integer $n$; then if $K_{m, -mn+1}$ denotes the $(m, -mn+1)$-cable of $K$, the following holds for each $\mathfrak{s} \in \mbox{Spin}^c (L(p,1))$:
\begin{equation}
\tau^\mathfrak{s} (K_{m,-mn+1}) =
\begin{cases}
m \tau^\mathfrak{s} (K) + \frac{mn(m-1)}{2} + m-1 \; \mbox{ or }\\
m \tau^\mathfrak{s} (K) + \frac{mn(m-1)}{2}.
\end{cases}
\end{equation}
\end{thm}
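For instance, specializing the formula to $m = 2$ gives $\tau^\mathfrak{s} (K_{2,-2n+1}) = 2\tau^\mathfrak{s} (K) + n + 1$ or $2\tau^\mathfrak{s} (K) + n$; in particular, for any two $\mbox{Spin}^c$ structures $\mathfrak{s}, \mathfrak{s}^\prime$ the difference $\tau^\mathfrak{s} - \tau^{\mathfrak{s}^\prime}$ of the cable differs from $2\left( \tau^\mathfrak{s} (K) - \tau^{\mathfrak{s}^\prime} (K) \right)$ by at most $1$, so it is non-zero whenever $|\tau^\mathfrak{s} (K) - \tau^{\mathfrak{s}^\prime} (K)| \ge 1$.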
\begin{proof}[Proof of Thm. \ref{teo:qconcnonbanalelens}]
The chain homotopies
from Section \ref{sec:extension}, together with Remark \ref{rmk:l21} ensure the existence of a nullhomologous knot $\widetilde{K}_p$ in each $L(p,1)$ ($p\ge2$) which is not almost-concordant to the unknot. Theorem \ref{teo:cabling} ensures that by cabling the knots $\widetilde{K}_p$ we obtain other knots whose first two components of the $\tau$-invariant differ, hence represent non-trivial almost-concordance classes.
It is also easy to show that even keeping the parameter $m$ fixed, \emph{e.g.}$\; m=2$, the $\tau$-invariants one obtains by iterated cabling are all distinct.
Together with Proposition \ref{ellepiuno} this concludes the proof of the Theorem.
\end{proof}
\begin{rmk}\label{rmk:tanteclassi}
Extensive computer-aided computations allow one to prove that for $p \le 20$ it is also possible to exhibit non-trivial almost-concordance classes in each homology class of each lens space $L(p,q)$.
\end{rmk}
\begin{rmk}
It is also possible to use methods and computations by Grigsby (\cite{grigsby2006knot}) and Levine \cite{levine2008computing}, to establish the existence of non-trivial almost-concordance classes in rational homology spheres (many of which are not lens spaces).
\end{rmk}
\begin{rmk}
The behavior of the $\tau$-invariants under mirroring (described in \cite[Prop. 3.5]{hedden2008ozsvath}), coupled with the previous computations imply the existence of a knot $\overline{K}$ in $L(3,2)$ such that $\tau_{sh} (\overline{K}) = \tau (\overline{K}) = (1,0,0)$.
More generally, the procedure detailed in this section allows one to exhibit non-trivial almost-concordance classes in $L(p,p-1)$ as well.
The present techniques do not seem however to be directly applicable to the more general case of $q \neq \pm 1$; in particular the \emph{expansion} procedure described in Definition \ref{def:expansion} does not work whenever $q \neq \pm 1$. It seems nonetheless likely (cf. Remark \ref{rmk:tanteclassi}) that even in these other cases one can obtain infinitely many almost-concordance classes in each lens space.
\end{rmk}
\section{Final remarks}
We collect in this last section some results on local knots and prove Theorem \ref{thm:plcobotau}.
It seems natural to ask whether locality and concordance are in some way related. The answer to the following question shows that locality is not preserved by concordance.
\begin{ques}\label{questionlocal}
Can a local knot be concordant to a non local knot?
\end{ques}
The answer is positive: the easiest way to produce infinitely many examples was suggested by Marco Golla. Take a non-local and non-nullhomologous knot $(Y,K)$, with $Y$ a rational homology $3$-sphere, and a ribbon pattern $P \subset S^1 \times \mathbb{D}^2$, as in Figure \ref{fig:patternlocale}.
Suppose moreover that $K$ has rational genus\footnote{For the definition see \cite{calegari2009knots}.} $g_{\mathbb{Q}} (K) >0$. Then consider $K_P$, the satellite of $K$ with pattern $P$, embedded in $Y \times \{0\} \subset Y \times [0,1]$. Note that $K_P$ bounds a ribbon disk, and it is nullhomologous in $Y$. Push the ribbon disk inwards, and remove a small disk from its interior. Tubing the boundary of the removed disk to $Y \times \{1\}$ provides the needed concordance from $K_P$ to $(Y ,\bigcirc)$.
Now we need to prove the non-locality of $K_P$; suppose there existed an embedded 2-sphere $S\subset Y$ bounding a ball containing $K_P$.
If the sphere does not intersect $\partial \nu(K)$, then either it is contained in $\nu(K)$ or it contains it. The first case can be easily dismissed by looking at the pattern $P$\footnote{It can be proven that there can not be any such sphere whenever the minimal (geometric) number of intersections with a disk cobounding a meridian is not $0$, see \cite{livingston1981homology}.}.
In the second case we would have found a sphere containing $\nu (K)$, which is absurd by the non-locality of $K$. Then we just need to argue, similarly to what was done in Theorem \ref{thm:conctogenuine}, that all intersections between $S$ and $\partial \nu (K)$ can be removed up to isotopy.
These intersections appear on $S$ as simple and disjoint circles, which might be nested. Consider an innermost circle; if the corresponding intersection is nullhomologous on $\partial \nu (K)$, we can find a $3$-ball bounded by the union of a disk on $S$ and one on $\partial \nu (K)$. So this kind of intersections can be eliminated by an isotopy. There are two qualitatively different kinds of intersections which are not nullhomologous on $\partial \nu (K)$: the ones which are parallel to a meridian, and those which can instead also wind along a longitude for $\partial \nu (K)$. In the former case (again, by considering an innermost circle on $S$), we would have found a disk cobounding a meridian of $\nu(K)$ and not intersecting $K_P$, which is absurd. In the latter, the rational genus hypothesis on $K$ prevents the existence of such a disk.
\begin{figure}
\includegraphics[width=5cm]{conctolocale.png}
\caption{The band attachment shown produces a concordance in $S^1 \times \mathbb{D}^2 \times [0,1]$ between $P$ and a pair of unknots.}
\label{fig:patternlocale}
\end{figure}
\begin{rmk}\label{obstructiontolocality}
Theorem \ref{thm:tauadd}, coupled with Equation \eqref{eqbanale} implies that the quantity
$$\max_{i,j \in \mathbb{Z}/p\mathbb{Z}} |\tau^i (K) - \tau^j (K)|$$
is an obstruction to locality (up to almost-concordance) for a knot $(L(p,q),K)$, \emph{i.e.} if it is nonzero the knot can not be local.
\end{rmk}
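For the knot $\widetilde{K}$ of Example \ref{esempio:0134}, for instance, $\tau(\widetilde{K}) = (-1,0,0)$, so this quantity equals $1$, recovering the fact that $\widetilde{K}$ is not almost-concordant to any local knot.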
It is easy to realize that if an (almost-)concordance class contains a local knot, then its cobordism $PL$-genus is equal to 0. The following easy lemma strengthens slightly the result:
\begin{lemma}
Let $K \in \mathcal{K}(Y)$ be a knot concordant to a local knot. Then for every satellite pattern $P$ we have
$$K \dot{\sim} P(K).$$
In other words, almost-concordance classes of local knots are preserved by satellite operators.
\end{lemma}
Note that we are not taking care of framings when performing the satellite construction. There is no ambiguity, since the result holds for every possible choice of framing for the knot.
\begin{proof}
Just take $K_0^\prime = \overline{P(K)}$ and $K_1^\prime = \overline{K}$ in the notation of Definition \ref{def:almconc}, to obtain the product concordance.
\end{proof}
The converse of this Lemma is quite interesting:
\begin{conj}
A knot $K \in \mathcal{K}(Y)$ is local if and only if its almost-concordance class is preserved by all satellite operators.
\end{conj}
More generally it would be interesting to find a characterization of the satellite operators which preserve the splitting into almost-concordance classes. Clearly all connected sum operators have this property, but this does not seem to be the case for other winding number 1 operators.
\newline
Before proving Theorem \ref{thm:plcobotau} we need to introduce some notation.
Consider the lattice $L(p) = \mathbb{Z}^{p}$ for $p \ge 2$; we want to endow $L(p)$ with a sort of path metric that encompasses the behaviour of the $\tau_{sh}$-invariant under cobordism.
\begin{defi}\label{def:distanzalattice}
Given two points $x = (x_1, \ldots , x_p)$ and $y = (y_1, \ldots, y_p) $ in $L(p)$, consider $x^\prime = (x_2 - x_1, \ldots, x_p - x_1), y^\prime = (y_2 - y_1, \ldots, y_p - y_1)$ in $L(p-1)$; now consider the undirected graph $\mathcal{G}_p$ whose vertices are the points in $\mathbb{Z}^{p-1}$ and any two vertices are connected with an edge whenever their coordinates differ by $\epsilon = (\epsilon_1, \ldots, \epsilon_{p-1})$, with $\epsilon_i \in \{-1,0,1\}$ for all $i = 1, \ldots ,p-1$. In other words we are adding all ``diagonals" to the lattice. Endow $\mathcal{G}_p$ with the path metric $\widetilde{D}$, and define $$D(x,y) \coloneqq \widetilde{D} (x^\prime, y^\prime).$$
\end{defi}
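Note that, since diagonal steps are allowed, $\widetilde{D}$ is just the $\ell^\infty$ distance on $\mathbb{Z}^{p-1}$, namely $\widetilde{D}(x^\prime, y^\prime) = \max_i |x^\prime_i - y^\prime_i|$. As an example, for $p = 3$, the points $x = \tau(\widetilde{K}) = (-1,0,0)$ and $y = \tau(\bigcirc) = (0,0,0)$ give $x^\prime = (1,1)$, $y^\prime = (0,0)$, hence $D(x,y) = 1$.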
\begin{proof}[Proof of Thm. \ref{thm:plcobotau}]
We want to reduce ourselves to a situation in which we can apply Theorem \ref{thm:cobo}. According to Remark \ref{rmk:solouno}, a genus-$g$ $PL$-cobordism induces a smooth genus-$g$ cobordism between $K_0\# K_0^\prime$ and $K_1 \# K_1^\prime$ for some $K_0^\prime, K_1^\prime \in \mathcal{K}(S^3)$; the idea of the proof thus consists in estimating the minimal distance on the lattice $\mathbb{Z}^p$ between $\tau(K_0\# K_0^\prime)$ and $\tau(K_1 \# K_1^\prime)$ for varying $K_0^\prime, K_1^\prime$. This is just the minimal distance between the two equivalence classes of $\tau$-invariants induced by almost-concordance.
We can consider two representatives $\widehat{K}_0$ and $\widehat{K}_1$ of the almost-concordance classes of $K_0$ and $K_1$ respectively, such that $\tau^0 (\widehat{K}_0) = \tau^0 (\widehat{K}_1) = 0$.
By Theorem \ref{thm:cobo}, each component of $\tau$ can change by at most $g$ under a cobordism of genus $g$. Since diagonal steps are allowed in $\mathcal{G}_p$, the distance $\widetilde{D}$ on $\mathbb{Z}^{p-1}$ between the points associated to $\widehat{K}_0$ and $\widehat{K}_1$ coincides with $$\max_{\mathfrak{s} \in \{1,\ldots, p-1\}} \{|\tau^\mathfrak{s}(\widehat{K}_0) - \tau^\mathfrak{s} (\widehat{K}_1)|\},$$
which by Theorem \ref{thm:cobo} bounds from below the genus of a cobordism between the two representatives of the almost-concordance classes.
\end{proof}
In a previous version of this paper, we posed as a conjecture a question raised by A.Levine, on whether there exists a pair $(Y,m)$ such that $|\mathcal{C}^Y_m| < + \infty$. Recently Yildiz \cite{yildiz} proved, among other things, that for the pair $(S^2\times S^1, \{p\}\times S^1)$ there is only one almost-concordance class (cf. also \cite{friedl2016satellites}, \cite{nagel2017smooth} and \cite{arunima}).
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
| {
"timestamp": "2018-01-11T02:06:10",
"yymm": "1602",
"arxiv_id": "1602.05476",
"language": "en",
"url": "https://arxiv.org/abs/1602.05476",
"abstract": "We describe an action of the concordance group of knots in the three-sphere on concordances of knots in arbitrary 3-manifolds. As an application we define the notion of almost-concordance between knots. After some basic results, we prove the existence of non-trivial almost-concordance classes in all non-abelian 3-manifolds. Afterwards, we focus the attention on the case of lens spaces, and use a modified version of the Ozsvath-Szabo-Rasmussen's tau-invariant to obstruct almost-concordances and prove that each L(p,1) admits infinitely many nullhomologous non almost-concordant knots. Finally we prove an inequality involving the cobordism PL-genus of a knot and its tau-invariants.",
"subjects": "Geometric Topology (math.GT)",
"title": "On concordances in 3-manifolds",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540716711547,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7099046505863013
} |
https://arxiv.org/abs/1506.04629 | The 3-colorability of planar graphs without cycles of length 4, 6 and 9 | In this paper, we prove that planar graphs without cycles of length 4, 6, 9 are 3-colorable. | \section{Introduction}
The well-known Four Color Theorem
states that every planar graph is 4-colorable. On the 3-colorability
of planar graphs, a famous theorem due to Gr\"{o}tzsch \cite{Grotzsch1959109} states
that every planar graph without cycles of length 3 is 3-colorable.
Therefore, further sufficient conditions that guarantee 3-colorability of planar graphs should always allow the presence of cycles of length 3.
In 1976, Steinberg conjectured that every planar graph without cycles
of length 4 and 5 is 3-colorable. Erd\H{o}s \cite{Steinberg1993211} suggested a relaxation of Steinberg's Conjecture: does there exist a constant $k$ such
that every planar graph without cycles of length from 4 to $k$ is
3-colorable? Abbott and Zhou \cite{AbbottZhou1991203} proved that such a constant exists and
$k\leq 11$. This result was later on improved to $k\leq 9$ by
Borodin \cite{Borodin1996183} and, independently, Sanders and Zhao \cite{SandersZhao199591}, and to
$k\leq 7$ by Borodin, Glebov, Raspaud and Salavatipour \cite{BorodinEtc2005303}.
Besides, much attention has been paid to sufficient conditions that forbid cycles of certain other lengths.
The results concerning four forbidden cycle lengths were obtained in several different papers and are summarized in \cite{LuEtc20094596}:
\begin{theorem}
A planar graph is 3-colorable if it
has no cycle of length $4$, $i$, $j$ and $k$, where $5\leq i<j<k\leq
9$.
\end{theorem}
A more general problem than Steinberg's was formulated also in \cite{LuEtc20094596}:
\begin{problem} \label{pro_forbidden 4 and i}
What is $\cal{A}$, a set of integers between 5 and 9, such that for $i\in \cal{A}$, every planar graph without cycles
of length 4 or $i$ is 3-colorable?
\end{problem}
Settling Problem \ref{pro_forbidden 4 and i} seems out of reach at present, since no element of such a set $\cal{A}$ has been found.
Therefore, a reasonable way to deal with this problem is to ask the following question:
\begin{problem} \label{pro_forbidden 4 i and j}
What is $\cal{B}$, a set of pairs of integers
$(i,j)$ with $5\leq i<j\leq 9$, such that planar graphs without
cycles of length 4, $i$ and $j$ are 3-colorable?
\end{problem}
The first step towards Problem \ref{pro_forbidden 4 i and j} was made by
Xu \cite{Xu2006958}, who proved that a planar graph is 3-colorable if it has no 5- or 7-cycles and no adjacent 3-cycles. Unfortunately, there is a gap in his proof, as pointed out by Borodin et al. \cite{BorodinEtc2009668}, who later gave a new proof of the same statement. Afterwards, Xu \cite{Xu2009347} fixed this gap. Hence $(5,7)\in \cal{B}$.
Other known elements of $\cal{B}$ include the pair (6,8) given by Wang and Chen \cite{WangChen20071552}, the pair (7,9) given by Lu et al. \cite{LuEtc20094596}, and the pair (6,7) given by Borodin, Glebov and Raspaud \cite{BorodinEtc20102584}. Actually, the theorem proved in \cite{BorodinEtc20102584} states that planar graphs without triangles adjacent to cycles of length from 4 to 7 are 3-colorable, which implies $(6,7)\in \cal{B}$.
In this paper, we show that $(6,9)\in \cal{B}$, that is, we prove the following theorem:
\begin{theorem} \label{thm469}
Every planar graph without cycles of length 4, 6, 9 is 3-colorable.
\end{theorem}
The graphs considered in this paper are finite and simple.
Let $G$ be a plane graph and $C$ a cycle of $G$.
By $Int(C)$ (or $Ext(C)$) we denote the subgraph of $G$ induced by the vertices lying inside (or outside) of $C$.
Cycle $C$ is \textit{separating} if both $Int(C)$ and $Ext(C)$ are nonempty.
By $\overline{Int}(C)$ (or $\overline{Ext}(C)$) we denote the subgraph of $G$ consisting of $C$ and its interior (or exterior).
Denote by $G[S]$ the subgraph of $G$ induced by $S$, where either $S\subseteq V(G)$ or $S\subseteq E(G)$.
A vertex is a \textit{neighbor} of another vertex if they are adjacent.
A \textit{chord} of $C$ is an edge of $\overline{Int}(C)$ that connects two nonconsecutive vertices on $C$.
If $Int(C)$ has a vertex $v$ with three neighbors $v_1,v_2,v_3$ on $C$, then $G[\{vv_1,vv_2, vv_3\}]$ is called a \textit{claw} of $C$.
If $Int(C)$ has two adjacent vertices $u$ and $v$ such that $u$ has two neighbors $u_1,u_2$ on $C$ and $v$ has two neighbors $v_1,v_2$ on $C$, then $G[\{uv,uu_1,uu_2,vv_1,vv_2\}]$ is called a \textit{biclaw} of $C$.
If $Int(C)$ has three pairwise adjacent vertices $u,v,w$ such that $u,v$ and $w$ have a neighbor $u',v'$ and $w'$ on $C$ respectively, then $G[\{uv,vw,uw,uu',vv',ww'\}]$ is called a \textit{triclaw} of $C$ (see Figure \ref{fig_claw}).
\begin{figure}[h]
\centering
\includegraphics[width=13cm]{claw.pdf}\\
\caption{chord, claw, biclaw and triclaw of a cycle}\label{fig_claw}
\end{figure}
Let $C$ be a cycle and let $T$ be one of the chords, claws, biclaws and triclaws of $C$. We call the graph $H$ consisting of $C$ and $T$ a \textit{bad partition} of $C$. The boundary of any one of the parts into which $C$ is divided by $H$ is called a \textit{cell} of $H$. Clearly, every cell is a cycle.
To avoid confusion, we always order the cells $c_1,\ldots,c_t$ of $H$ as shown in Figure \ref{fig_claw}.
For every cell $c_i$ of $H$, let $k_i$ be the length of $c_i$. Then $T$ is further called a $(k_1,k_2)$-chord, a $(k_1,k_2,k_3)$-claw, a $(k_1,k_2,k_3,k_4)$-biclaw or a $(k_1,k_2,k_3,k_4)$-triclaw, respectively.
Let $k$ be a positive integer.
A $k$-cycle is a cycle of length $k$. A $k^-$-cycle (or $k^+$-cycle) is a cycle of length at most (or at least) $k$.
A \textit{good cycle} is a 12$^-$-cycle that has none of claws, biclaws and triclaws.
A \textit{bad cycle} is a 12$^-$-cycle that is not good.
We say a 9-cycle is \textit{special} if it has a (3,8)-chord or a (5,5,5)-claw.
Let $\cal{G}$ be the class of connected plane graphs with no 4-cycle, no 6-cycle and no special 9-cycle.
Instead of Theorem \ref{thm469}, it is easier for us to prove the following stronger one:
\begin{theorem} \label{thm46s9}
Let $G\in \cal{G}$. We have
\begin{enumerate}[(1)]
\item $G$ is 3-colorable; and
\item If $D$, the boundary of the exterior face of $G$, is a good cycle, then every proper 3-coloring of $G[V(D)]$ can be extended to a proper 3-coloring of $G$.
\end{enumerate}
\end{theorem}
We conclude this section with some notation used in the next section. Let $G$ be a plane graph.
Denote by $d(v)$ the degree of a vertex $v$, by $|C|$ the length of a cycle $C$ and by $|f|$ the size of a face $f$. Let $k$ be a positive integer.
A \textit{$k$-vertex} is a vertex of degree $k$, and a \textit{$k$-face} is a face of size $k$. A \textit{$k^+$-vertex} (or \textit{$k^-$-vertex}) is a vertex of degree at least (or at most) $k$, and a \textit{$k^+$-face} (or \textit{$k^-$-face}) is a face of size at least (or at most) $k$.
A \emph{$k$-path} is a path that contains $k$ edges.
A $k$-cycle containing vertices $v_1,\ldots,v_k$ in cyclic order is denoted by $[v_1\ldots v_k]$.
Denote by $N(v)$ the set of neighbors of a vertex $v$.
Let $N_H(v)=N(v)\cap V(H)$ whenever $v$ is a vertex of a subgraph $H$ of $G$.
A vertex is \textit{external} if it lies on the exterior face, \textit{internal} otherwise.
A vertex incident with a triangle is called a \textit{triangular vertex}.
We say a vertex is \textit{bad} if it is an internal triangular 3-vertex; \textit{good} otherwise.
A path is a \textit{splitting path} of a cycle $C$ if it has two end-vertices on $C$ and all other vertices inside $C$.
We say a path is \textit{good} if it contains only internal 3-vertices and has an end-edge incident with a triangle.
A cycle or a face $C$ is \textit{triangular} if $C$ is adjacent to a triangle $T$. Furthermore, if $C$ is a cycle and $T\in \overline{Ext}(C)$, then we say $C$ is an \textit{ext-triangular} cycle. A triangular 7-face is \textit{light} if it has no external vertex and every incident nontriangular vertex has degree 3.
\section{Proof of Theorem \ref{thm46s9}}
Suppose to the contrary that Theorem \ref{thm46s9} is false.
From now on, let $G$ be a counterexample to Theorem \ref{thm46s9} with fewest vertices.
Actually, $G$ violates the second conclusion of Theorem \ref{thm46s9}, since conclusion (2) implies conclusion (1).
We still use $D$ to denote the boundary of the exterior face of $G$, and let $\phi$ be a proper 3-coloring of $G[V(D)]$ which cannot be extended to a proper 3-coloring of $G$. Clearly, $D$ is a good cycle. By the minimality of $G$, $D$ has no chord.
\subsection{Structural properties of minimal counterexample $G$}
\begin{lemma} \label{lem_min degree}
Every internal vertex of $G$ has degree at least 3.
\end{lemma}
\begin{proof}
Suppose to the contrary that $G$ has an internal vertex $v$ such that $d(v)\leq 2$. We can extend $\phi$ to $G-v$ by the minimality of $G$, and then to $G$ by coloring $v$ different from its neighbors.
\end{proof}
\begin{lemma}
$G$ is 2-connected and therefore, the boundary of each face of $G$ is a cycle.
\end{lemma}
\begin{proof}
Otherwise, we may assume that $G$ has a pendant block $B$ with cut vertex $v$ such that $B-v$ does not intersect with $D$.
We first extend $\phi$ to $G-(B-v)$, and then 3-color $B$ such that the color assigned to $v$ is unchanged.
\end{proof}
\begin{lemma} \label{lem_sep good cycle}
$G$ has no separating good cycle.
\end{lemma}
\begin{proof}
Suppose to the contrary that $G$ has a separating good cycle $C$. We extend $\phi$ to $G-Int(C)$.
Furthermore, since $C$ is a good cycle, the color of $C$ can be extended to its interior.
\end{proof}
One can easily deduce the following three lemmas.
\begin{lemma} \label{lem_cycle of G}
Every $9^-$-cycle of $G$ is facial except that an 8-cycle of $G$ might have a chord, which is a (3,7)- or (5,5)-chord.
\end{lemma}
\begin{lemma} \label{bad cycle}
Let $H\in \cal{G}$. If $C$ is a bad cycle of $H$, then $C$ has length either 11 or 12. Furthermore, if $|C|=11$, then $C$ has a (3,7,7)- or (5,5,7)-claw; if $|C|=12$, then $C$ has a (5,5,8)-claw, a (3,7,5,7)- or (5,5,5,7)-biclaw, or a (3,7,7,7)-triclaw.
\end{lemma}
\begin{lemma} \label{lem triangular bad cycle}
Every bad cycle $C$ of $G$ is adjacent to at most one triangle. Furthermore, if $C$ is ext-triangular, then $C$ has either a (5,5,7)-claw or a (5,5,5,7)-biclaw.
\end{lemma}
\begin{lemma}\label{lem_splitting path}
Let $P$ be a splitting path of $D$ which divides $D$ into two cycles $D'$ and $D''$.
\begin{enumerate}[(1)]
\item If $|P|=2$, then there is a 3-face between $D'$ and $D''$;
\item If $|P|=3$, then there is a 5-face between $D'$ and $D''$;
\item If $|P|=4$, then there is a 5- or 7-face between $D'$ and $D''$;
\item If $|P|=5$, then there is a $9^-$-cycle between $D'$ and $D''$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $D$ has length at most 12, we have $|D'|+|D''|=|D|+2|P|\leq 12+2|P|$.
Recall that every $7^-$-cycle of $G$ is a facial cycle by Lemma \ref{lem_cycle of G}.
(1) ~Let $P=xyz$. Suppose to the contrary that $|D'|, |D''| \geq 5$.
By Lemma \ref{lem_min degree}, $y$ has a neighbor other than $x$ and $z$, say $y'$. It follows that $y'$ is internal since otherwise $D$ is a bad cycle with a claw. Without loss of generality, let $y'$ lie inside $D'$. Thus $|D'|\geq 11$ by Lemma \ref{lem_sep good cycle}. Since $|D'|+|D''|\leq 16$, we have $|D'|=11$ and $|D''|=5$. Now $D'$ has a claw by Lemma \ref{bad cycle}, which implies that $D$ has a biclaw, a contradiction.
(2) ~Let $P=wxyz$. Suppose to the contrary that $|D'|, |D''| \geq 7$.
Let $x'$ and $y'$ be a neighbor of $x$ and $y$ not on $P$, respectively.
If both $x'$ and $y'$ are external, then $D$ has a biclaw.
Hence, we may assume $x'$ lies inside $D'$.
By Lemma \ref{bad cycle} and the inequality $|D'|+|D''|\leq 18$, we have $|D'|=11$ and $|D''|=7$.
Thus $D'$ has a claw which divides $D'$ into three faces. Since $D''$ is facial, $y'$ can only coincide with $x'$. Now $D$ has a triclaw.
(3) ~Let $P=vwxyz$. Suppose to the contrary that $|D'|, |D''| \geq 8$.
Since $|D'|+|D''|\leq 20$, we have $|D'|, |D''| \leq 12$.
If $G$ has an edge $e$ connecting two nonconsecutive vertices on $P$, then $e$ together with $P$ can form only a triangle.
Without loss of generality, let $e=wy$ and $e$ belongs to $\overline{Int}(D')$. Now path $vwyz$ is a splitting 3-path of $D$ and hence $D'$ is a 6-cycle with a (3,5)-chord, a contradiction.
Therefore, no pair of nonconsecutive vertices on $P$ are adjacent.
Let $w', x', y'$ be a neighbor of $w, x, y$ not on $P$, respectively.
If $x'$ is external, say $xx'$ is a chord of $D'$, then both paths $vwxx'$ and $x'xyz$ are splitting 3-paths of $D$. It follows that $D'$ is an 8-cycle with a (5,5)-chord $xx'$.
Hence $y'$ must lie inside $D''$, and so must $w'$.
By noticing that $w'$ cannot coincide with $y'$, we know $D''$ is a bad 12-cycle. It follows that $G$ has an edge connecting $w'$ and $y'$, which yields a special 9-cycle of $G$.
Therefore, vertex $x'$ is internal.
We may assume $x'$ lies inside of $D'$.
Thus $D'$ is a bad 11- or 12-cycle, which implies $D''$ has length 8 or 9.
If $|D''|=9$, then $D''$ is facial and $D'$ is a bad 11-cycle with a claw, which is impossible because of the locations of $w'$ and $y'$. Hence we may assume $|D''|=8$. It follows that not both $w'$ and $y'$ lie in $\overline{Int}(D'')$ and that $w',x',y'$ are pairwise distinct. Now $G$ has a 4-cycle that is either $[wxx'w']$ or $[xyy'x']$, a contradiction.
(4) ~Let $P=uvwxyz$. Suppose to the contrary that $|D'|, |D''| \geq 10$.
Since $|D'|+|D''|\leq 22$, we have $|D'|, |D''| \leq 12$.
By similar argument as in (3), one can conclude that $G$ has no edge connecting two nonconsecutive vertices on $P$.
Let $v', w', x', y'$ be a neighbor of $v, w, x, y$ not on $P$, respectively.
We claim that both vertices $w'$ and $x'$ are internal. Otherwise, let $ww'$ be a chord of $D'$. Since both $uvww'$ and $w'wxyz$ are splitting paths of $D$, $D'$ is a 10-cycle with a (5,7)-chord $ww'$. Thus all of $v',x'$ and $y'$ belong to $\overline{Int}(D'')$. If $x'$ is external, then similarly, $D''$ is a 10-cycle with a (5,7)-chord $xx'$, which is impossible because of the location of $y'$. Hence, we may assume that $x'$ lies inside $D''$. Furthermore, $v'$ also lies inside $D''$, since otherwise $G$ has 3-face $[uvv']$ adjacent to a 5-face. Clearly, $v'\neq x'$. Hence $D''$ is a bad 12-cycle containing two adjacent vertices $v'$ and $x'$ inside. A contradiction is obtained by noticing both the location of $y'$ and the specific interior of $D''$.
Let $w'$ lie inside $D'$. If $x'$ lies inside $D''$, then both $D'$ and $D''$ are bad 11-cycles.
It follows that $v'=w', y'=x'$, and both $D'$ and $D''$ have a $(3,7,7)$-claw, yielding special 9-cycles of $G$.
Hence, we may assume that $x'$ also lies inside $D'$. It follows that $x'$ coincides with $w'$, since otherwise the adjacency of $x'$ and $w'$ gives a 4-cycle of $G$.
Thus $D'$ is a bad cycle with either a $(3,7,7)$-claw or a (3,7,5,7)-biclaw, which implies both $v'$ and $y'$ belong to $\overline{Int}(D'')$.
If $v'$ lies on $D''$, then $G$ has triangle $[uvv']$ adjacent to an 8-cycle of $D'$, a contradiction.
Hence we may assume $v'$ lies inside $D''$ and so does $y'$.
It follows that either $v'= y'$ or $v'y'\in E(G)$, yielding 6-cycles of $G$ in both cases.
The proof of this lemma is completed.
\end{proof}
\begin{lemma} \label{operation}
Let $G'$ be a plane graph obtained from $G$ by a graph operation $T$.
Let $T$ consist of deleting a nonempty set of internal vertices and either identifying two vertices or
adding an edge between two nonadjacent vertices.
If after $T$ we\\
$(a)$~ identify no two vertices on $D$, and create no edge connecting two vertices on $D$, and\\
$(b)$~ create neither $6^-$-cycle nor ext-triangular 7- or 8-cycle,\\
then $\phi$ can be extended to $G'$.
Let $T$ consist of deleting a nonempty set $S$ of internal vertices and identifying
two edges $u_1u_2$ and $v_1v_2$ so that $u_1$ is identified with $v_1$.
For $i\in \{1,2\}$, let $T_i$ denote the operation on $G$ that consists of deleting all vertices in $S$ and identifying $u_i$ and $v_i$.
If at least one of $u_1u_2$ and $v_1v_2$ is contained in no $8^-$-cycle of $G-S$, and if conditions $(a)$ and $(b)$ above hold for both $T_1$ and $T_2$,
then $\phi$ can be extended to $G'$.
\end{lemma}
\begin{proof}
First let $T$ consist of deleting a nonempty set of internal vertices and identifying two other vertices $t_1$ and $t_2$.
Let $t'$ denote the vertex obtained from $t_1$ and $t_2$ after $T$.
Conditions $(a)$ and $(b)$ imply that (i) in order to show $G'\in \cal{G}$, it suffices to show that $G'$ has no special 9-cycle; and (ii) $D$ bounds $G'$ and $\phi$ is a proper 3-coloring of $G'[V(D)]$.
Therefore, $\phi$ can be extended to $G'$ by the minimality of $G$ if we can show both that $G'$ has no special 9-cycles and that $D$ is good in $G'$.
Suppose $G'$ has a special 9-cycle $C$. Let $H$ be a bad partition of $C$. We have $t'\in V(H)$ since otherwise $C$ is a special 9-cycle in $G$. Condition $(b)$ implies that every vertex of $N_H(t')$ is adjacent to precisely one of $t_1$ and $t_2$ in $G$.
If all the vertices of $N_H(t')$ are adjacent to $t_1$, then $C$ is a special 9-cycle in $G$.
Hence, we may assume that $N_H(t')$ has a vertex adjacent to $t_2$ and similarly, has another vertex adjacent to $t_1$.
Thus after $T$ a cell of $H$ containing $t'$ is created, that is, we have created a 3- or 5-cycle or an ext-triangular 8-cycle, contradicting $(b)$. Therefore, $G'$ has no special 9-cycle.
Suppose $D$ is bad in $G'$. Let $H$ be a bad partition of $D$. We have $t'\in V(H)$ since otherwise $D$ is bad in $G$.
If $t'$ has degree 2 in $H$, then $t_1, t_2 \in V(D)$ since otherwise $D$ is bad in $G$.
Now we identify two vertices on $D$, contradicting $(a)$.
Hence $t'$ has degree 3 in $H$. Similarly to the paragraph above, we may assume that $N_H(t')$ has a vertex $w_1$ adjacent to $t_1$ and two other vertices $w'_2, w''_2$ adjacent to $t_2$ in $G$. It follows that $H$ has two cells created by $T$, one containing $w_1t'w'_2$ and the other containing $w_1t'w''_2$.
Clearly, $G' \in \cal{G}$.
Hence, after $T$ we create a 3- or 5-cycle, or an ext-triangular 7-cycle, contradicting $(b)$.
Therefore, $D$ is good in $G'$.
Next let $T$ consist of deleting a nonempty set of internal vertices and adding an edge $e$ between two nonadjacent vertices.
Similarly, to complete the proof in this case, it suffices to guarantee that $G'$ has no special 9-cycles and that $D$ is good in $G'$.
Suppose $G'$ has a special 9-cycle $C$. Let $H$ be a bad partition of $C$. We have $e\in E(H)$ since otherwise $C$ is a special 9-cycle of $G$.
Hence, every cell of $H$ containing $e$ is created, which implies that we have created a 3- or 5-cycle or an ext-triangular 8-cycle, contradicting $(b)$.
Suppose $D$ is bad in $G'$. Let $H$ be a bad partition of $Int(D)$. Similarly, one can conclude that every cell of $H$ containing $e$ is created. Since $e\notin E(D)$ and $G' \in \cal{G}$, we create a 3- or 5-cycle or an ext-triangular 7-cycle, a contradiction.
At last, let $T$ consist of deleting all vertices in $S$ and identifying two edges $u_1u_2$ and $v_1v_2$.
Denote by $w_1$ the vertex of $G'$ obtained from $u_1$ and $v_1$ after $T$, and by $w_2$ one obtained from $u_2$ and $v_2$.
Since condition $(a)$ holds for both $T_1$ and $T_2$, $D$ bounds $G'$ and $\phi$ is a proper 3-coloring of $G'[V(D)]$.
Suppose we create a $6^-$-cycle $C'$ after $T$. Since condition $(b)$ holds for both $T_1$ and $T_2$,
we have $w_1,w_2\in V(C')$ and furthermore, one of the two paths of $C'$ between $w_1$ and $w_2$ connects $u_1$ and $u_2$, and the other connects $v_1$ and $v_2$. Clearly, $w_1$ and $w_2$ are nonconsecutive on $C'$, since otherwise $C'$ is a $6^-$-cycle of $G$. It follows that both $u_1u_2$ and $v_1v_2$ are contained in $5^-$-cycles of $G-S$, a contradiction. Therefore, we create no $6^-$-cycle by $T$.
Furthermore, by a similar argument, one can conclude that we create no ext-triangular 7- or 8-cycle by $T$.
Suppose we create a special 9-cycle $C$ after $T$. Let $H$ be a bad partition of $C$.
Clearly, no cell of $H$ is created by $T$. It follows that $G$ has a 2-path between $u_1$ and $u_2$ and a 7-path between $v_1$ and $v_2$ so that edge $w_1w_2$ is a (3,8)-chord of $C$, since otherwise $C$ is a special 9-cycle of $G$. Now both $u_1u_2$ and $v_1v_2$ are contained in an $8^-$-cycle of $G$, a contradiction.
Therefore, we create no special 9-cycle after $T$.
Suppose $D$ is bad in $G'$. Let $H$ be a bad partition of $D$.
Notice that by $T$ we identify one pair of edges, and that each cell of $H$ has more than one edge shared with some other cell. If no cell of $H$ is created by $T$, then $D$ is bad in $G$.
Hence, we may assume that $H$ has a cell $C_H$ that is created by $T$.
Recall that condition $(b)$ holds for $T$, too. It follows that $H$ has either a (5,5,7)- or (5,5,8)-claw or a (5,5,5,7)-biclaw, and $C_H$ is the cell of length at least 7.
Furthermore, since $D$ is unchanged and no $6^-$-cycle is created after $T$, it is impossible that we create $C_H$ but no other cells of $H$ by $T$.
Therefore, $D$ is good in $G'$.
By the conclusions above, $\phi$ can be extended to $G'$ because of the minimality of $G$.
\begin{comment}
Suppose we create a special 9-cycle $C$ after $T$. Let $H$ be a bad partition of $C$.
Since condition $(b)$ holds for both $T_1$ and $T_2$,
every cell of $H$ is not created if it contains at most one of vertices $w_1$ and $w_2$.
It follows that $H$ has a cell $C_H$ that contains both $w_1$ and $w_2$, since otherwise $C$ is a special 9-cycle of $G$. Let $P_1$ and $P_2$ be the two paths of $C_H$ between $w_1$ and $w_2$. Since again condition $(b)$ holds for both $T_1$ and $T_2$,
we may assume that $P_1$ connects $u_1$ and $u_2$ and $P_2$ connects $v_1$ and $v_2$.
If $w_1$ and $w_2$ are consecutive on $C'$, then $C$ is a special 9-cycle of $G$, a contradiction.
Hence, we may assume that $w_1$ and $w_2$ are not consecutive on $C'$, that is, $|P_1|,|P_2|\geq 2$. Clearly, $C_H$ has length at most 8, which implies $|P_1|,|P_2|\leq 6$. Hence, both $u_1u_2$ and $v_1v_2$ are contained in a $7^-$-cycles of $G-S$, a contradiction.
Therefore, $G'$ has no special 9-cycle.
\end{comment}
\begin{comment}
At last, let $T$ consist of deleting all vertices in $S$ and identifying two edges $u_1u_2$ and $v_1v_2$.
Denote by $w_1$ the vertex of $G'$ obtained from $u_1$ and $v_1$ after $T$, and by $w_2$ one obtained from $u_2$ and $v_2$.
Since conditions $(a)$ and $(b)$ hold for both $T_1$ and $T_2$, we can compete the proof of this lemma by showing in this case that $G'$ has no special 9-cycles and that $D$ is good in $G'$.
Suppose $G'$ has a special 9-cycle $C$. Let $H$ be a bad partition of $C$.
Denote by $G_1$ and $G_2$ the graphs obtained from $G$ by $T_1$ and $T_2$, respectively.
By the proof of the first part of this lemma, both $G_1$ and $G_2$ have no special 9-cycle.
If $w_1\notin V(H)$ or if $u_1$ is adjacent to all vertices from $N_H(w_1)\setminus \{w_2\}$, then $C$ is a special 9-cycle of $G_2$, a contradiction.
Hence, we may assume that $w_1\in V(H)$, and $N_H(w_1)\setminus \{w_2\}$ has a vertex adjacent to $u_1$ but not to $v_1$, and another vertex adjacent to $v_1$ but not to $u_1$.
Similarly, $w_2\in V(H)$ and $N_H(w_2)\setminus \{w_1\}$ has a vertex adjacent to $u_2$ but not to $v_2$, and another vertex adjacent to $v_2$ but not to $u_2$.
Since every cell of $H$ has length at most 8, no matter whether $w_1w_2$ is an edge of $H$ or not, both $u_1u_2$ and $v_1v_2$ are contained in an $8^-$-cycles of $G-S$, a contradiction.
Suppose $D$ is bad in $G'$. A contradiction can also be derived by a similar argument above.
\end{comment}
\end{proof}
\begin{lemma} \label{lem_good path}
No face of $G$ contains a good path.
\end{lemma}
\begin{proof}
Suppose to the contrary that $G$ has a $k$-face $f$ that contains a good path $Q$. Since $G\in \cal{G}$, we have $k\geq 7$.
Let $f=[v_1\ldots v_k]$ and $Q=v_2\ldots v_5$. Let $t$ be a common neighbor of $v_2$ and $v_3$ not on $Q$, and $x$ be a neighbor of $v_4$ other than $v_3$ and $v_5$. Clearly, $x\neq v_1$. We do a graph operation $T$ on $G$ as follows: delete all vertices on $Q$ and identify $v_1$ and $x$, obtaining a plane graph $G'$.
Suppose that through $T$ we identify two vertices on
$D$, or create an edge connecting two vertices on $D$.
Then $G$ has a splitting 4- or 5-path $P$ of $D$ that contains the
path $v_1\ldots v_4x$. Thus by Lemma \ref{lem_splitting path}, $G$ has a $9^-$-cycle $C$ formed by $P$ and $D$.
Clearly, $C$ is a good cycle and thus none of $t$ and $v_5$ lies inside $C$, which implies $t$ lies on $C$.
Now $C$ has two chords $tv_2$ and $tv_3$, a contradiction with Lemma \ref{lem_cycle of G}. Therefore, item $(a)$ in Lemma \ref{operation} holds for $T$.
Suppose that through $T$ we create a $6^-$-cycle or an ext-triangular 7- or 8-cycle.
Thus $G-v_5$ has a $12^-$-cycle $C$ containing path $v_1\ldots v_4x$, such that $\overline{Ext}(C)$ has a triangle adjacent to $C$ with common edge on $C-\{v_2,v_3,v_4\}$ when $|C|\in\{11,12\}$.
It follows that $t\notin V(C)$ since otherwise $G$ has a $6^-$-face adjacent to triangle $[tv_2v_3]$.
Hence, $C$ is a bad cycle containing either $t$ or $v_5$ inside.
Now $C$ is adjacent to two triangles, contradicting Lemma \ref{lem triangular bad cycle}.
Therefore, item $(b)$ in Lemma \ref{operation} holds for $T$.
Hence $\phi$ can be extended to $G'$ by Lemma \ref{operation}. Next we extend $\phi$ from $G'$ to $G$: first properly color $v_5$ and $v_4$ in turn, then $v_2$ and $v_3$ can be properly colored since $v_1$ and $v_4$ receive different colors.
\end{proof}
\begin{lemma} \label{lem_face all 3}
$G$ has no $k$-face containing $k$ internal 3-vertices, where $k\in \{5,7\}$.
\end{lemma}
\begin{proof}
Suppose to the contrary that $G$ has such a $k$-face $f$. Let $f=[v_1\ldots v_k]$.
For $1\leq i\leq k$, denote by $u_i$ the neighbor of $v_i$ not on $f$.
Clearly, vertices $u_1,\cdots,u_k$ are pairwise distinct.
(1)~ Let $k=5$.
Since $G$ has no special 9-cycles, $f$ has a vertex incident with two 7$^+$-faces.
Without loss of generality, let $v_1$ be such a vertex.
We do a graph operation $T$ on $G$ as follows: delete all the vertices on $f$ and insert an edge between $u_5$ and $u_2$. Denote by $G'$ the resulting plane graph.
Suppose that $u_2,u_5\in V(D)$.
As a splitting 4-path of $D$, path $u_5v_5v_1v_2u_2$ together with $D$ forms a $5$- or $7$-face of $G$, an obvious contradiction. Therefore, item $(a)$ holds for $T$.
Suppose through $T$ we create a $6^-$-cycle or an ext-triangular 7- or 8-cycle.
Then $G-\{v_3,v_4\}$ has an $11^-$-cycle $C$ containing the path $u_5v_5v_1v_2u_2$ such that $\overline{Ext}(C)$ has a triangle adjacent to $C$ with common edge on $C-\{v_5,v_1,v_2\}$ when $|C|\in \{10,11\}$.
If $C$ is a good cycle, then none of $u_1,v_3$ and $v_4$ lies inside $C$, which implies $u_1\in V(C)$.
Now $u_1v_1$ divides $C$ into two cycles $C_1$ and $C_2$.
On one hand, since $v_1$ is incident with two 7$^+$-faces, we have $|C_1|, |C_2|\geq 7$.
On the other hand, we have $|C_1|+|C_2|=|C|+2\leq 13$. A contradiction is obtained.
Hence, we may assume $C$ is a bad 11-cycle.
It follows that $C$ has a $(5,5,7)$-claw by Lemma \ref{lem triangular bad cycle}, which is impossible since now either $C$ contains two vertices $v_3$ and $v_4$ inside or $\overline{Int}(C)$ has two 7$^+$-faces.
Therefore, item $(b)$ holds for $T$.
Hence by Lemma \ref{operation}, $\phi$ can be extended to $G'$. Notice that $u_1$ receives a color different from at least one of $u_2$ and $u_5$. Without loss of generality, let us say $u_2$.
We extend $\phi$ from $G'$ to $G$ in the following way: color $v_2$ the same as $u_1$; then $v_3,v_4,v_5$ and $v_1$ can be properly colored in turn.
(2)~ Let $k=7$. We do the following operation $T$ on $G$: delete all vertices on $f$ and insert an edge between $u_1$ and $u_5$, obtaining a plane graph $G'$.
Suppose both $u_1$ and $u_5$ belong to $D$. Let $P=u_1v_1v_7v_6v_5u_5$. Since $P$ is a splitting path of $D$, $G$ has a $9^-$-cycle $C$ formed by $P$ and $D$ by Lemma \ref{lem_splitting path}. Clearly, $C$ is good. Thus $u_6,u_7\in V(C)$. Now $C$ has two chords, a contradiction with Lemma \ref{lem_cycle of G}. Therefore, item $(a)$ holds for $T$.
Suppose through $T$ we create a $6^-$-cycle or an ext-triangular 7- or 8-cycle.
Then $G-\{v_2,v_3,v_4\}$ has a $12^-$-cycle $C$ containing path $P$ such that $\overline{Ext}(C)$ has a triangle adjacent to $C$ with common edge on $C-\{v_1,v_7,v_6,v_5\}$ when $|C|\in \{11,12\}$.
If $C$ is a good cycle, then both $u_6$ and $u_7$ lie on $C$.
Since $|C|\leq12$, edges $v_6u_6$ and $v_7u_7$ divide $C$ into three cycles, each of which has length 5.
It follows that $|C|=11$ and hence $\overline{Int}(C)$ has a 5-face adjacent to a triangle, a contradiction.
Hence, we may assume $C$ is a bad cycle.
By Lemma \ref{lem triangular bad cycle}, $C$ has either a $(5,5,7)$-claw or a $(5,5,5,7)$-biclaw, which is impossible obviously.
Therefore, item $(b)$ holds for $T$.
\begin{comment}
Suppose through $T$ we create a $6^-$-cycle or a triangular 7- or 8-cycle. Then $G$ has a $5^-$-path or a triangular 6- or 7-path, say $P$, between $u_1$ and $u_5$ such that $P$ contains no deleted vertex.
Let $C$ be the cycle formed by paths $P$ and $u_1v_1v_7v_6v_5u_5$. Clearly, $C$ has length at most 12.
If $C$ is a good cycle, then both $u_6, u_7$ can only lie on $C$.
This yields that $|C| =11$ and furthermore, every face incident with one of edges $v_1v_7,v_7v_6$ and $v_6v_5$ other than $f$ has length 5.
Thus $P$ is a triangular 6-path, which is impossible since otherwise $G$ has a 6-cycle with a (3,5)-chord.
Hence $C$ is a bad cycle and thus $P$ is a triangular 6- or 7-path. It follows that $C$ has either a $(5,5,7)$-claw or a (5,5,5,7)-biclaw by remark 2(2).
This is impossible since now $C$ either contains vertices $v_2,v_3,v_4$ inside, or $u_6,u_7$ inside whose adjacency implies a 4-cycle $v_7v_6u_6u_7v_7$ of $G$.
\end{comment}
Hence by Lemma \ref{operation}, $\phi$ can be extended to $G'$.
Furthermore, $\phi$ can be extended from $G'$ to $G$ in a similar way as part (1) of this lemma.
\end{proof}
\begin{lemma} \label{lem_two light 7-face}
$G$ has no two 7-faces $[xv_1\ldots v_6]$ and $[xu_1\ldots u_6]$ such that $x$ is their unique common vertex, $u_1$ and $v_1$ are adjacent, both $x$ and $u_1$ are internal 4-vertices, and all other vertices on these two 7-faces are internal 3-vertices.
\end{lemma}
\begin{proof}
Suppose to the contrary that $G$ has two such 7-faces. Let $f=[xv_1\ldots v_6]$ and $g=[xu_1\ldots u_6]$. Let $y$ and $z$ be the neighbors of $u_1$ and $v_6$ not on $f\cup g$, respectively.
We do the following operation $T$ on $G$: delete both $V(f)$ and $V(g)$, and identify $z$ and $y$, obtaining a plane graph $G'$.
Suppose through $T$ we identify two vertices on
$D$, or create an edge connecting two vertices on $D$.
Then $G$ has a splitting 4- or 5-path $P$ of $D$ containing
path $yu_1xv_6z$. It follows from Lemma \ref{lem_splitting path} that $G$ has a $9^-$-cycle $C$ formed by $P$ and $D$.
Hence, $C$ is a good cycle and thus not separating, contradicting that $C$ has either $u_2$ or $v_1$ inside.
Therefore, item $(a)$ holds for $T$.
Suppose through $T$ we create a $6^-$-cycle or an ext-triangular 7- or 8-cycle.
Then $G-(V(f)\cup V(g))$ has an $8^-$-path between $y$ and $z$, which together with the path $yu_1xv_6z$ forms a $12^-$-cycle $C$.
It follows that $G$ has at most three vertices inside $C$, contradicting the fact that now either $u_2,\ldots, u_6$ or $v_1,\ldots,v_5$ lie inside $C$.
Therefore, item $(b)$ holds for $T$.
Hence by Lemma \ref{operation}, $\phi$ can be extended to $G'$.
We further extend $\phi$ from $G'$ to $G$ in the following way: first color $x$ the same as $y$; then $u_6,\ldots,u_1$ can be properly colored in turn, and so can $v_1,\ldots,v_6$.
\end{proof}
\begin{lemma} \label{lem_theta}
$G$ has no 8-cycle $[xyzu_1\ldots u_5]$ with a chord $xz$ such that $z$ is an internal 4-vertex and all other vertices of this 8-cycle are internal 3-vertices.
\end{lemma}
\begin{proof}
Suppose to the contrary that $G$ has such an 8-cycle $C$.
Let $z'$ and $y'$ be the neighbors of $z$ and $y$ not on $C$, respectively.
We remove $C$ from $G$ to obtain a plane graph $G'$ with fewer vertices.
By the minimality of $G$, $\phi$ can be extended to $G'$. We complete the proof by extending $\phi$ from $G'$ to $G$ in the following way: if $z'$ and $y'$ receive the same color, then we color $x$ the same as $z'$ and finally, $u_5,\ldots,u_1,z,y$ can be properly colored in turn; otherwise, we color $z$ the same as $y'$, and then $u_1,\ldots,u_5,x,y$ can be properly colored in turn.
\end{proof}
\begin{lemma} \label{lem_9-face}
$G$ has no 9-face $[u_1\ldots u_9]$ such that $u_1,u_2,u_3,u_5,u_6,u_7$ are six bad vertices and $u_4$ is a 4-vertex incident with two 3-faces.
\end{lemma}
\begin{proof}
Suppose to the contrary that $G$ has such a 9-face $f$.
Then $G$ has four 3-faces $[xu_1u_2]$, $[yu_3u_4]$, $[zu_4u_5]$, $[wu_6u_7]$ adjacent to $f$.
Let $S=\{u_1,u_2,u_3,u_5,u_6,u_7\}$.
We apply the following graph operation $T$ to $G$ to obtain a plane graph $G'$ with fewer vertices: delete all vertices of $S$ and identify the two edges $u_8u_9$ and $zu_4$ so that $u_8$ is identified with $z$.
Denote by $T_1$ (or $T_2$) the graph operation on $G$ that consists of deleting all vertices in $S$ and identifying $u_8$ and $z$ (or $u_9$ and $u_4$).
Similarly to the proof of Lemma \ref{lem_good path}, one can conclude that items $(a)$ and $(b)$ hold for both $T_1$ and $T_2$.
Besides, $u_4z$ is contained in no $8^-$-cycle of $G-S$. Hence, $\phi$ can be extended to $G'$ by Lemma \ref{operation}.
Furthermore, we can extend $\phi$ from $G'$ to $G$ in a similar way as Lemma \ref{lem_good path}.
\end{proof}
\begin{comment}
\newtheorem{lem5}[lemsure1]{Lemma}
\begin{lem5}
$G$ has no 9-face $f=[u_1u_2\cdots u_9]$ such that $f$ contains five triangular edges $u_1u_2$, $u_2u_3$, $u_4u_5$, $u_6u_7$, $u_8u_9$, and all the vertices incident with $f$ have degree 3 other than $v_2$, $v_6, v_7$, each of which has degree 4.
\end{lem5}
\newtheorem{lem6}[lemsure1]{Lemma}
\begin{lem6}
$G$ has no completely internal 9-face $f=[u_1,u_2,\cdots,u_9]$ such that $f$ contains five triangular edges $u_2u_3, u_4u_5, u_6u_7, u_7u_8, u_9u_1$, and all the vertices incident with $f$ have degree 3 other than $u_2, u_7$, both of which has degree 4.
\end{lem6}
\end{comment}
\subsection{Discharging in $G$}
\label{secch}
Let $V=V(G)$, $E=E(G)$, and $F$ be the set of faces of $G$.
Denote by $f_0$ the exterior face of $G$.
Assign an initial charge $ch(x)$ to each element $x$ of $V\cup F$, where $ch(f_0)=|f_0|+4$, $ch(v)=d(v)-4$ for $v\in V$, and $ch(f)=|f|-4$ for $f\in F\setminus \{f_0\}$.
Discharge the elements of $V\cup F$ according to the following rules:
\begin{enumerate}[$R1.$]
\item Every 3-face receives $\frac{1}{3}$ from each incident vertex.
\item Let $v$ be an internal 3-vertex and $f$ be a face containing $v$.
\begin{enumerate}[(1)]
\item Vertex $v$ receives $\frac{1}{4}$ from $f$ if $|f|=5$.
\item Suppose $|f|\geq7$. Let $a$ and $b$ denote the lengths of the two faces containing $v$ other than $f$, with $a\leq b$. Vertex $v$ receives from $f$ charge $\frac{2}{3}$ if $a=3$, charge $\frac{1}{2}$ if $a=b=5$, charge $\frac{3}{8}$ if $a=5$ and $b\geq7$, and charge $\frac{1}{3}$ if $a\geq 7$.
\end{enumerate}
\item Let $v$ be an internal 4-vertex and $f$ be a $7^+$-face containing $v$.
\begin{enumerate}[(1)]
\item If $v$ is incident with precisely two 3-faces, then $v$ receives $\frac{1}{3}$ from $f$.
\item If $v$ is incident with precisely one 3-face that is adjacent to $f$, then $v$ receives $\frac{1}{6}$ from $f$.
\end{enumerate}
\item Let $f$ be a light 7-face adjacent to a 3-face $T$ on edge $xy$, $z$ be the vertex on $T$ other than $x$ and $y$, and $h$ be the face containing edge $yz$ other than $T$.
\begin{enumerate}[(1)]
\item If $d(x)=3$ and $d(y)\geq 5$, then $y$ sends $\frac{1}{24}$ to $f$.
\item If $z\in V(D)$, then $z$ sends $\frac{5}{24}$ to $f$ through $T$.
\item If $d(x)=3, d(y)=4, z\notin V(D)$ and $d(z)\geq4$, then $h$ sends $\frac{5}{24}$ to $f$ through $y$.
\end{enumerate}
\item The exterior face $f_0$ sends $\frac{4}{3}$ to each incident vertex.
\item Let $v$ be an external vertex and $f$ be a $5^+$-face containing $v$ other than $f_0$.
\begin{enumerate}[(1)]
\item If $d(v)=2$, then $v$ receives $\frac{2}{3}$ from $f$.
\item Suppose $d(v)=3$. If $v$ is triangular, then $v$ receives $\frac{1}{12}$ from $f$; otherwise, $v$ sends $\frac{1}{12}$ to $f$.
\item If $d(v)\geq4$, then $v$ sends $\frac{1}{3}$ to $f$.
\end{enumerate}
\end{enumerate}
Let $ch^*(x)$ denote the final charge of each element $x$ of $V\cup F$ after discharging.
On one hand, by Euler's formula we deduce $\sum\limits_{x\in V\cup F}ch(x)=0.$
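Explicitly, this uses only the handshake identities $\sum_{v\in V}d(v)=2|E|$ and $\sum_{f\in F}|f|=2|E|$ together with Euler's formula $|V|-|E|+|F|=2$:
\begin{align*}
\sum_{x\in V\cup F}ch(x)&=\sum_{v\in V}\big(d(v)-4\big)+\sum_{f\in F}\big(|f|-4\big)+8\\
&=\big(2|E|-4|V|\big)+\big(2|E|-4|F|\big)+8=-4\big(|V|-|E|+|F|-2\big)=0,
\end{align*}
where the extra $8$ accounts for the fact that $f_0$ carries charge $|f_0|+4$ rather than $|f_0|-4$.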
Since the sum of charge over all elements of $V\cup F$ is unchanged, we have $\sum\limits_{x\in V\cup F}ch^*(x)=0.$ On the other hand, we show that $ch^*(x)\geq 0$ for $x\in V\cup F$ and $ch^*(x_0)> 0$ for some vertex $x_0$. Hence, this obvious contradiction completes the proof of Theorem \ref{thm46s9}.
It remains to show that $ch^*(x)\geq 0$ for $x\in V\cup F$ and $ch^*(x_0)> 0$ for some vertex $x_0$.
\begin{claim}
$ch^*(f)\geq0$ for $f\in F$.
\end{claim}
Denote by $V(f)$ the set of vertices of $f$.
First suppose that $f$ contains no external vertex.
Let $|f|=3$. By $R1$, we have $ch^{*}(f)=|f|-4+3\times \frac{1}{3}=0$.
Let $|f|=5$. Lemma \ref{lem_face all 3} implies that $f$ contains at most four 3-vertices. Hence, we have $ch^*(f)\geq|f|-4-4\times \frac{1}{4}=0$ by $R2(1)$.
Let $|f|=7$. If $G$ has no 3-face adjacent to $f$, then $f$ sends at most $\frac{1}{2}$ to each incident 3-vertex by $R2(2)$. Since Lemma \ref{lem_face all 3} implies that $f$ contains at most six 3-vertices, we have $ch^*(f)\geq|f|-4-6\times \frac{1}{2}=0.$ Hence, we may assume that $f$ is adjacent to a 3-face $T=[xyz]$ on edge $xy$, where $d(x)\leq d(y).$ Since $G$ has no special 9-cycle, $f$ is adjacent to no other 3-face than $T$.
Notice that now only rules $R2(2), R3(2)$ and $R4(3)$ might make $f$ send charge out.
Suppose $d(y)=3$. In this case $f$ sends $\frac{2}{3}$ to both $x$ and $y$, and at most $\frac{1}{2}$ to each of other incident 3-vertices.
Moreover, it follows from Lemma \ref{lem_good path} that $f$ contains at least two $4^+$-vertices. Hence, we have $ch^*(f)\geq |f|-4-2\times \frac{2}{3}-3\times \frac{1}{2}>0.$
Suppose $d(x)=3$ and $d(y)=4$. In this case $f$ sends $\frac{2}{3}$ to $x$, at most $\frac{1}{6}$ to $y$, and at most $\frac{3}{8}$ to the neighbor of $x$ on $f$ other than $y$.
If $z$ is not an internal 3-vertex, then $f$ receives charge $\frac{5}{24}$ either from $z$ by $R6(3)$ or from the face containing $yz$ other than $T$ by $R4(3)$, yielding $ch^*(f)\geq |f|-4-\frac{2}{3}-\frac{1}{6}-\frac{3}{8}-4\times \frac{1}{2}+\frac{5}{24}=0$.
Hence, we may assume $z$ is an internal 3-vertex.
Since Lemma \ref{lem_theta} implies $f$ is not light, we have $ch^*(f)\geq |f|-4-\frac{2}{3}-\frac{1}{6}-4\times \frac{1}{2}>0$.
It remains to suppose $d(x)\geq 4$. In this case, $f$ might send charge out through $x$ and $y$ by $R4(3)$.
If $f$ is not light, then $ch^*(f)\geq |f|-4-2(\frac{1}{6}+\frac{5}{24})-4\times \frac{1}{2}>0$.
If $d(y)\geq 5$, then $f$ sends nothing to $y$ or through $y$, yielding $ch^*(f)\geq |f|-4-(\frac{1}{6}+\frac{5}{24})-5\times \frac{1}{2}>0$.
Hence, we may assume that $f$ is light and $d(x)=d(y)=4$. Lemma \ref{lem_two light 7-face} implies that $f$ sends nothing out through $x$ or $y$.
It follows that $ch^*(f)\geq 7-4-2\times \frac{1}{6}-5\times \frac{1}{2}> 0.$
Let $|f|=8$. Since $f$ sends at most $\frac{1}{2}$ to each incident vertex by $R2(2)$, we have $ch^*(f)\geq 8-4-8\times \frac{1}{2}=0$.
Let $|f|\geq 9$. We define
\begin{align*}
&A(f)=\{v\colon\ \textit{uvw is a path on f, both u and w are bad, and v is good}\},\\
&B(f)=\{v\colon\ \textit{uvw is a path on f, u is bad, and both v and w are good}\},\\
&C(f)=\{v\colon\ \textit{uvw is a path on f, and all of u, v and w are good}\},\\
&D(f)=\{v\colon\ \textit{v is a bad vertex on f}\}.
\end{align*}
Clearly, $A(f),B(f),C(f)$ and $D(f)$ are pairwise disjoint sets whose union is $V(f)$.
By our rules, $f$ sends at most $\frac{1}{3}$ to each vertex in $A(f)$,
at most $\frac{3}{8}$ in total to and through each vertex in $B(f)$,
at most $\frac{1}{2}$ in total to and through each vertex in $C(f)$ and
$\frac{2}{3}$ to each vertex in $D(f)$.
Hence, we have
\begin{align*}
ch^*(f)&\geq |f|-4-\frac{1}{3}|A(f)|-\frac{3}{8}|B(f)|-\frac{1}{2}|C(f)|-\frac{2}{3}(|f|-|A(f)|-|B(f)|-|C(f)|)\\
&=\frac{1}{3}|A(f)|+\frac{7}{24}|B(f)|+\frac{1}{6}|C(f)|+\frac{1}{3}|f|-4.\tag{$\ast$}
\end{align*}
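The rearrangement in ($\ast$) is just a matter of collecting coefficients:
\[
-\tfrac{1}{3}+\tfrac{2}{3}=\tfrac{1}{3},\qquad -\tfrac{3}{8}+\tfrac{2}{3}=\tfrac{7}{24},\qquad -\tfrac{1}{2}+\tfrac{2}{3}=\tfrac{1}{6},\qquad 1-\tfrac{2}{3}=\tfrac{1}{3},
\]
for $|A(f)|$, $|B(f)|$, $|C(f)|$ and $|f|$ respectively.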
Clearly, $|B(f)|$ is always even, and if $B(f)=\emptyset$, then either $C(f)=\emptyset$ or $C(f)=V(f)$.
Also note that $f$ sends nothing through a vertex $u$ of $f$ if $f$ has a vertex $v$ such that $uv$ is a common edge of $f$ and a 3-face of $G$.
Suppose $|f|=9$. By inequality ($\ast$), it suffices to consider following three cases.
Case 1: $|A(f)|\leq 2$ and $|B(f)|=|C(f)|=0$. By Lemma \ref{lem_good path}, we have $|A(f)|=2$ (say $A(f)=\{u,v\}$), $D(f)$ is divided by $u$ and $v$ as 3+4 on $f$, and $d(u),d(v)\geq 4$. By drawing the 3-faces adjacent to $f$, one can see that Lemma \ref{lem_9-face} implies that not both $u$ and $v$ have degree 4. Hence, we have $ch^*(f)\geq |f|-4-7\times \frac{2}{3}-\frac{1}{3}=0.$
Case 2: $|A(f)|=1, |B(f)|=2$ and $|C(f)|=0$. By Lemma \ref{lem_good path}, $D(f)$ is divided by $B(f)\cup A(f)$ as 3+3 or 2+4 on $f$.
In the former case 3+3, let $A(f)=\{u\}$. By Lemma \ref{lem_9-face}, $u$ is not a 4-vertex incident with two 3-faces, and thus receives at most $\frac{1}{6}$ from $f$. Hence, we have $ch^*(f)\geq |f|-4-6\times \frac{2}{3}-2\times \frac{3}{8}-\frac{1}{6}>0.$
In the latter case 2+4, let $f=[u_1\ldots u_9]$, $u_1\in A(f)$, and $u_4,u_5\in B(f)$. Lemma \ref{lem_good path} implies $d(u_1),d(u_5)\geq 4$.
Furthermore, $u_1$ is a 4-vertex incident with two 3-faces, since otherwise $f$ sends at most $\frac{1}{6}$ to $u_1$ so that $ch^*(f)\geq |f|-4-6\times \frac{2}{3}-2\times \frac{3}{8}-\frac{1}{6}>0.$ By drawing the 3-faces adjacent to $f$, one can see that $d(u_4)\geq 4$. Hence, $f$ sends nothing through $u_4$ and $u_5$, and at most $\frac{1}{3}$ to each of them, yielding $ch^*(f)\geq |f|-4-6\times \frac{2}{3}-3\times \frac{1}{3}=0.$
Case 3: $|A(f)|=0, |B(f)|=2$ and $|C(f)|\leq 2$. It follows that $f$ contains five consecutive bad vertices, and hence has a good path, contradicting Lemma \ref{lem_good path}.
Suppose $|f|\geq 10$.
If $|A(f)|+\frac{|B(f)|}{2}\geq 2$, then by inequality ($\ast$) we are done.
Hence, we may assume either $|A(f)|\leq 1$ and $|B(f)|=0$, or $|A(f)|=0$ and $|B(f)|=2$.
Lemma \ref{lem_good path} implies a contradiction in the former case, and $|C(f)|\geq 4$ in the latter case.
Hence, by inequality ($\ast$) we are also done in the latter case.
Next suppose $f$ contains external vertices. Since $|f_0|\leq 12$, if $f=f_0$ then by $R5$ we have $ch^*(f)=|f_0|+4-|f_0|\times \frac{4}{3}\geq0.$
Hence, we may assume $f\neq f_0.$ By our rules, $f$ sends at most $\frac{2}{3}$ to each incident vertex.
Lemma \ref{lem_splitting path} implies that if $|f|\leq 8$, then the external vertices on $f$ are consecutive. Furthermore, $f$ has at most one 2-vertex if $|f|=5$, and has at most two 2-vertices if $|f|\in \{7, 8\}$.
Let $|f|=3$. We have $ch^{*}(f)=|f|-4+3\times \frac{1}{3}=0$ by $R1$.
Let $|f|=5$. If $f$ has no 2-vertex, then $f$ sends at most $\frac{1}{4}$ to each vertex, yielding $ch^*(f)\geq |f|-4-4\times \frac{1}{4}=0$.
Hence, we may assume $f$ has precisely one $2$-vertex. It follows that $f$ has two external 3-vertices, both of which send at least $\frac{1}{12}$ to $f$ by $R6$. Hence, we have $ch^*(f)\geq |f|-4-\frac{2}{3}+2\times \frac{1}{12}-2\times \frac{1}{4}=0$.
Let $|f|=7$. Since in this case $f$ is adjacent to at most one 3-face, $f$ has an internal vertex that is not bad. By our rules, $f$ sends at most $\frac{1}{2}$ to this vertex.
If $f$ has an external $4^+$-vertex, then $f$ receives $\frac{1}{3}$ from this vertex by $R6(3)$, yielding $ch^*(f)\geq |f|-4+\frac{1}{3}-4\times \frac{2}{3}-\frac{1}{2}>0$.
Hence, we may assume that $f$ has no external $4^+$-vertex, which implies $f$ has two external 3-vertices $u$ and $v$.
If neither $u$ nor $v$ is triangular, and thus both send $\frac{1}{12}$ to $f$, then we have $ch^*(f)\geq |f|-4+2\times \frac{1}{12}-4\times \frac{2}{3}-\frac{1}{2}=0$.
Hence, we may assume that $u$ is triangular but $v$ is not. Now $f$ has at most one bad vertex, yielding $ch^*(f)\geq |f|-4+\frac{1}{12}-\frac{1}{12}-3\times \frac{2}{3}-2\times \frac{1}{2}=0$.
Let $|f|=8$.
If $f$ has no 2-vertex, then $f$ sends at most $\frac{1}{2}$ to each incident vertex, yielding $ch^*(f)\geq |f|-4-8\times \frac{1}{2}=0$.
Hence, we may assume that $f$ has precisely one or two $2$-vertices. It follows that $f$ has two external $3^+$-vertices, both of which send at least $\frac{1}{12}$ to $f$. Hence we have $ch^*(f)\geq |f|-4-2\times \frac{2}{3}+2\times \frac{1}{12}-4\times \frac{1}{2}>0$.
It remains to suppose $|f|\geq 9$. If $f$ has an external $4^+$-vertex, then $f$ receives $\frac{1}{3}$ from this vertex by $R6(3)$, yielding $ch^*(f)\geq |f|-4+\frac{1}{3}-(|f|-1)\times \frac{2}{3}\geq 0$.
Hence, we may assume that $f$ has no external $4^+$-vertex, which implies $f$ has at least two external 3-vertices. By $R6$, we have $ch^*(f)\geq |f|-4-2\times \frac{1}{12}-(|f|-2)\times \frac{2}{3}> 0$.
\begin{claim}
$ch^{*}(v)\geq0$ for $v\in V$.
\end{claim}
First suppose that $v$ is internal. We have $d(v)\geq3$ by Lemma \ref{lem_min degree}.
Let $d(v)=3$. Since $G\in \cal{G}$, the set of lengths of the faces containing $v$ is one of the following: $\{3,7^+,7^+\},\{5,5,7^+\},\{5,7^+,7^+\}$ and $\{7^+,7^+,7^+\}$. Hence, we are done in each case by $R1$ and $R2$.
If $d(v)=4,$ then by $R1$ and $R3$ the charge $v$ sends out equals the charge $v$ receives, yielding $ch^*(v)=d(v)-4=0.$
It remains to suppose $d(v)\geq 5$. By $R1$ and $R4(1)$, $v$ sends $\frac{1}{3}$ to each incident 3-face and at most $\frac{1}{24}$ to each other incident face, which gives $ch^*(v)> d(v)-4-\frac{d(v)}{2}\times \frac{1}{3}-\frac{d(v)}{2}\times \frac{1}{24}>0$.
Next suppose that $v$ is external. Clearly, $d(v)\geq 2$.
By $R1$, $R5$ and $R6$, we have $ch^*(v)=d(v)-4+\frac{4}{3}+\frac{2}{3}=0$ if $d(v)=2$, $ch^*(v)=d(v)-4+\frac{4}{3}-\frac{1}{3}+\frac{1}{12}>0$ if $d(v)=3$ and $v$ is triangular, and $ch^*(v)=d(v)-4+\frac{4}{3}-\frac{1}{12}-\frac{1}{12}>0$ if $d(v)=3$ and $v$ is not triangular.
It remains to suppose $d(v)\geq 4$. Then $v$ receives $\frac{4}{3}$ from $f_0$ by $R5$, sends $\frac{1}{3}$ to each other incident face by $R1$ and $R6(3)$, and might send $\frac{5}{24}$ through each incident 3-face whose other two vertices are internal. It follows that $ch^*(v)\geq d(v)-4+\frac{4}{3}-(d(v)-1)\times \frac{1}{3}-\frac{d(v)-2}{2}\times \frac{5}{24}>0$.
\begin{claim}
$D$ contains a vertex $x_0$ such that $ch^*(x_0) >0.$
\end{claim}
By the computations above, any $3^+$-vertex $x_0$ on $D$ satisfies $ch^*(x_0)>0$, as desired.
The proof of Theorem \ref{thm46s9} is completed.
| {
"timestamp": "2015-06-16T02:17:34",
"yymm": "1506",
"arxiv_id": "1506.04629",
"language": "en",
"url": "https://arxiv.org/abs/1506.04629",
"abstract": "In this paper, we prove that planar graphs without cycles of length 4, 6, 9 are 3-colorable.",
"subjects": "Combinatorics (math.CO)",
"title": "The 3-colorability of planar graphs without cycles of length 4, 6 and 9",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540710685613,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7099046501494991
} |
https://arxiv.org/abs/1405.2073 | Topological Invariants and Fibration Structure of Complete Intersection Calabi-Yau Four-Folds | We investigate the mathematical properties of the class of Calabi-Yau four-folds recently found in [arXiv:1303.1832]. This class consists of 921,497 configuration matrices which correspond to manifolds that are described as complete intersections in products of projective spaces. For each manifold in the list, we compute the full Hodge diamond as well as additional topological invariants such as Chern classes and intersection numbers. Using this data, we conclude that there are at least 36,779 topologically distinct manifolds in our list. We also study the fibration structure of these manifolds and find that 99.95 percent can be described as elliptic fibrations. In total, we find 50,114,908 elliptic fibrations, demonstrating the multitude of ways in which many manifolds are fibered. A sub-class of 26,088,498 fibrations satisfy necessary conditions for admitting sections. The complete data set can be downloaded atthis http URL. | \section{Introduction and review of the classification of CICY four-folds}\seclabel{review}
Calabi-Yau manifolds have played a central role in many aspects of the development of string theory, from phenomenology to formal theory. Several constructions of Calabi-Yau three-folds have seen extensive use in the literature including the hypersurfaces in toric ambient spaces~\cite{Kreuzer:2000xy,Kreuzer:2002uu} and the complete intersections in products of projective spaces~\cite{Hubsch:1986ny,Candelas:1987kf,Green:1986ck,Candelas:1987du}. Complete classes of Calabi-Yau four-folds are somewhat rarer in the literature however~\cite{Brunner:1996bu,Klemm:1996ts,Kreuzer:1997zg,Lynker:1998pb,Gray:2013mja,Anderson:2014gla}, partly due to the greater computational power which is required to exhaust these much larger data sets.
In a previous paper~\cite{Gray:2013mja}, the authors attempted to improve upon this situation by presenting a complete classification of the Calabi-Yau four-folds which can be described as complete intersections in products of projective spaces. In this paper we will expand upon various mathematical properties of these manifolds which are important for their use in physics. In order to make the present paper self-contained, however, we will begin with a brief review of the central findings of the classification presented in ref.~\cite{Gray:2013mja}.
\vspace{0.1cm}
To set up our notation, we consider a complete intersection of $K$ polynomials $p_\alpha$ in an ambient space ${\cal A}$ which is a product of $m$ projective spaces ${\cal A}= \bigotimes_{r=1}^m \field{P}^{n_r} $ of total dimension $K+4=\sum_{r=1}^mn_r$. We shall use indices $r,s,\ldots =1,\ldots,m$ to label the ambient projective space factors $\field{P}^{n_r}$ and indices $\alpha,\beta,\dots =1,\ldots ,K$ to label the defining polynomials $p_\alpha$. We can succinctly describe families of such manifolds in terms of a \emph{configuration matrix}
\begin{equation}\eqlabel{conf2}
[{\bf n}|{\bf q}] \equiv \left[\begin{array}{c|ccc}n_1 & q^1_1&\dots&q^1_K\\ \vdots & \vdots&\ddots&\vdots\\ n_m & q^m_1&\hdots&q^m_K\\\end{array}\right] ,
\end{equation}
where the entries $q_\alpha^r$ are non-negative integers. The columns of the configuration matrix ${\bf q}_\alpha =(q_\alpha^r)_{r=1,\ldots ,m}$ denote the multi-degrees of the defining polynomials $p_\alpha$. In other words, the $\alpha$'th polynomial, $p_\alpha$, is of degree $q_\alpha^r$ in the homogeneous coordinates $x_{r,i}$ of $\field{P}^{n_r}$. In order to ensure that the common zero locus of these equations gives rise to a well-defined four-dimensional manifold, we demand that the $K$-form
\begin{equation}
{\rm d} p_1 \wedge \cdots \wedge {\rm d} p_K
\end{equation}
is nowhere vanishing.
The family of complete intersection varieties which are described by a given configuration matrix $[{\bf n}|{\bf q}]$ is redundantly parametrized by the coefficients in the polynomials $p_\alpha$. A generic choice of these coefficients in any particular case defines a smooth complete intersection manifold, thanks to an application of Bertini's theorem~\cite{Green:1986ck}. Many of the key properties of the manifolds depend only on the configuration matrix and not on the specific variety within the family. Given this, in the rest of this paper, we will often not need to distinguish between the family described by the matrix $[{\bf n}|{\bf q}]$ and a specific variety therein.
A complete intersection variety of the type described above defines a Calabi-Yau manifold, denoted ${\cal X}$, if and only if the first Chern class vanishes, $c_1 ({\cal X}) = 0$. This is equivalent to the conditions
\begin{equation}\eqlabel{c1zero}
\sum_{\alpha = 1}^K q_\alpha^r = n_r + 1
\end{equation}
on each row of the configuration matrix. In ref.~\cite{Gray:2013mja} the authors classified a complete set of such configuration matrices which describe all CICY four-folds. A priori, there is an infinite number of configuration matrices of the form~\eqref*{conf2} obeying~\eqref*{c1zero}. However, the same CICY four-fold can be described by an infinite number of \emph{different} configuration matrices. To avoid such infinite repetitions, it is possible to identify suitable equivalence relations between configurations and only keep one representative per class~\cite{Candelas:1987kf,Gray:2013mja}. A simple example of such an equivalence relation is the permutation of rows or columns. Clearly, configuration matrices related by such row or column permutations describe the same complete intersection variety since the ordering of ambient space factors and polynomials in the configuration matrix is completely arbitrary. In addition, we made use of other equivalence relations which are both practical and rigorously provable~\cite{Candelas:1987kf,Gray:2013mja}. This led to a finite list classifying all topological types of CICY four-folds~\cite{Gray:2013mja}.
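Before moving on, note that the row-sum condition~\eqref*{c1zero} is straightforward to test for any given configuration matrix. The following is a small hypothetical helper (not taken from the classification code; the conventions \texttt{n[r]} and \texttt{q[a][r]} are ours):
\begin{verbatim}
def is_calabi_yau(n, q):
    # Vanishing first Chern class test: every row sum of the degree matrix
    # must equal n_r + 1.
    #   n[r]    : dimension of the r-th projective factor
    #   q[a][r] : degree of the a-th polynomial in the r-th factor
    K = len(q)
    return all(sum(q[a][r] for a in range(K)) == n[r] + 1
               for r in range(len(n)))

print(is_calabi_yau([5], [[6]]))   # sextic hypersurface in P^5  -> True
print(is_calabi_yau([5], [[5]]))   # quintic hypersurface in P^5 -> False
\end{verbatim}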
The complete list presented in ref.~\cite{Gray:2013mja} contains 921,497 configuration matrices in 587 different ambient spaces with a maximum matrix size of $16\times 20$. A subset of 15,813 matrices corresponds to product manifolds. These fall into four types, namely $T^8$, $T^2 \times$CY$_3$, $T^4 \times K3$ and $K3 \times K3$. The Euler characteristic $\chi$ for each matrix was computed and found to be in the range $0\leq \chi \leq 2610$. All configurations with $\chi=0$ correspond to direct product manifolds and the non-zero values for the Euler characteristic were found to be in the range $288\leq \chi\leq 2610$. In total, the list contains 206 different values of $\chi$. This topological data provided a very weak lower bound on the number of inequivalent CICY four-folds.
In the present paper we compute a number of additional topological invariants associated to these manifolds, such as Hodge numbers, Chern classes and intersection numbers. In addition to being useful in mathematical and physical applications, these results enable us to establish a significantly larger lower bound of at least 36,779 topologically distinct CICY four-folds. In the Hodge data, we find an approximate linear relation between $h^{2,2}$ and $h^{3,1}$, which is, at least to our knowledge, a mere consequence of the construction of CICY four-folds. Analyzing the pair $(h^{1,1}, h^{3,1})$, which is interchanged under mirror symmetry, we conclude that the mirror of a CICY four-fold is in most cases not itself a CICY four-fold. The only exceptions are the 153 Hodge theoretically self-mirror configurations for which $h^{1,1} = h^{3,1}$ holds. In view of potential applications for F-theory compactifications, we also study the elliptic fibration structure of CICY four-folds. We concentrate on a specific, easy-to-handle class of elliptic fibrations, which provides a rich data set of 50,114,908 elliptic fibrations distributed among 99.95 percent of the CICY four-folds. In addition, we present a classification of the different types of almost fano three-folds that occur as base manifolds and we find 26,088,498 fibrations that satisfy necessary conditions for admitting sections.
In the next section, we will describe how to compute several topological invariants associated to CICY four-folds. These will include Chern classes, Hodge data, intersection numbers of favourable divisors, and invariants constructed from the intersection numbers which do not depend upon a choice of basis for $H^{1,1}({\cal X})$. In \secref{fib}, we will describe how many of the CICY four-folds can be written as an elliptic fibration over an almost fano three-fold base. In addition we will discuss how to compute some necessary conditions for the existence of certain types of sections. In \secref{results} we will provide a cartography of the results of the computations described in \secref{topinv,fib} for the data set of the CICY four-folds computed in ref.~\cite{Gray:2013mja}. Finally, a few technical results required in the text will be provided in \appref{perms} and the format in which we present our data is explained in \appref{dataformat}.
\section{Topological invariants}\seclabel{topinv}
In this section, we will describe how to compute various topological invariants of the CICY four-folds, including Chern classes, Hodge numbers and intersection numbers. These numerical characteristics are of importance in both the mathematical and physical investigation of these manifolds. Mathematically, topological invariants contain significant information about the structure of the Calabi-Yau four-fold and they help to establish which configuration matrices could describe the same four-folds. Physically, these quantities are of central importance in questions ranging from determining the number of moduli fields in four dimensions, to the structure of tadpole cancellation conditions.
\subsection{Chern classes}\seclabel{chernclasses}
For a general complete intersection manifold ${\cal X}$, not necessarily Calabi-Yau, with configuration matrix $[{\bf n}|{\bf q}]$, the first four Chern classes are given by the following expressions~\cite{Green:1986ck,Gray:2013mja}
\begin{align}
c_1({\cal X})&=c_1^r J_r = \left[n_r+1-\sum_{\alpha=1}^Kq^r_\alpha\right]J_r \; , \eqlabel{c1} \\
c_2({\cal X})&= c_2^{rs} J_r J_s =
\oneon{2} \left[-(n_r+1)\delta^{rs} + \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s + c_1^r c_1^s \right] J_r J_s \; , \eqlabel{c2} \\
c_3({\cal X})&= c_3^{rst} J_r J_s J_t =
\oneon{3} \left[(n_r+1)\delta^{rst} - \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t + 3 c_1^r c_2^{st} - c_1^r c_1^s c_1^t \right] J_r J_s J_t \; ,\eqlabel{c3} \\
c_4 ({\cal X}) &= c_4^{rstu} J_r J_s J_t J_u = \oneon{4}
\left[ -(n_r+1)\delta^{rstu} + \sum_{\alpha=1}^K q_\alpha^r q_\alpha^s q_\alpha^t q_\alpha^u + 2 c_2^{rs} c_2^{tu} \right. \nonumber
\\ & \left. \qquad\qquad\qquad\qquad\qquad\qquad\qquad + 4 c_1^r c_3^{stu} - 4 c_1^r c_1^s c_2^{tu} + c_1^r c_1^s c_1^t c_1^u \vphantom{\sum_{\alpha=1}^K} \right] J_r J_s J_t J_u \; . \eqlabel{c4}
\end{align}
The multi-index Kronecker symbol appearing above is defined to be $\delta^{r_1 \ldots r_n} = 1$ if $r_1 = r_2 = \ldots = r_n$ and zero otherwise. In these expressions, $J_r$ denotes the K\"ahler form of the $r$'th ambient projective space $\field{P}^{n_r}$, which is normalized in such a way that
\begin{equation}\eqlabel{Pnorm}
\int_{\field{P}^{n_r}} J_r^{n_r} = 1 \; .
\end{equation}
For a configuration to describe a family of Calabi-Yau manifolds we need $c_1({\cal X})=0$. This leads to the Calabi-Yau constraint~\eqref*{c1zero} presented in \secref{review}. If a configuration does indeed represent a family of Calabi-Yau four-folds, the expressions~\eqref{c2,c3,c4} for the higher Chern classes simplify substantially, since all terms containing a factor of the first Chern class then vanish.
The fourth Chern class can be used to compute the Euler characteristic $\chi$ by a version of the Gauss-Bonnet formula
\begin{equation}\eqlabel{Euler_c4}
\chi ({\cal X}) = \int_{{\cal X}} c_4({\cal X}) \; .
\end{equation}
This expression is easily evaluated using the fact that the integral of a top-form $\omega$ over ${\cal X}$ can be pulled back to an integration over the ambient space ${\cal A} = \field{P}_1^{n_1} \times\cdots\times \field{P}_m^{n_m}$. To do this we use
\begin{equation}\eqlabel{mudef}
\int_{{\cal X}} \omega = \int_{\cal A} \omega \wedge\mu_{{\cal X}} \; ,\qquad
\mu_{{\cal X}} \equiv \bigwedge_{\alpha=1}^K\left(\sum_{r=1}^mq^r_\alpha J_r\right) ,
\end{equation}
and the normalizations~\eqref*{Pnorm} of the K\"ahler forms $J_r$. The $(K,K)$-form $\mu_{{\cal X}}$ is a representative of the class which is Poincar\'e dual to the homology class of the family of sub-manifolds ${\cal X} = [{\bf n}|{\bf q}]$ in the ambient space ${\cal A}$.
Given the preceding paragraph, the explicit formula for the Euler characteristic $\chi$ of a four-fold configuration ${\cal X} = [{\bf n}|{\bf q}]$ is simply given by the following
\begin{equation}\eqlabel{chi_c4}
\chi ({\cal X}) = \left[c_4({\cal X})\wedge \mu_{{\cal X}}\right]_{\text{top}} \; .
\end{equation}
Here, the subscript ``top'' refers to the coefficient of the volume form $J_1^{n_1} \wedge\cdots\wedge J_m^{n_m}$ of ${\cal A}$ which should be extracted from the enclosed expression.
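As an illustration, this coefficient extraction can be automated with a computer algebra system. The following is a minimal Python sketch using \texttt{sympy}; the function name and the input format, with the degrees stored as \texttt{q[a][r]}~$=q_\alpha^r$, are our own choices and this is not the code used for the actual scan.
\begin{verbatim}
# Minimal sketch: Euler characteristic chi(X) = [c_4(X) ^ mu_X]_top for a
# Calabi-Yau configuration [n|q], treating the Kahler forms J_r as commuting
# polynomial variables and extracting the coefficient of J_1^{n_1}...J_m^{n_m}.
import sympy as sp

def euler_characteristic(n, q):
    m, K = len(n), len(q)
    J = sp.symbols(f"J1:{m + 1}")                       # J_1, ..., J_m
    # the first Chern class must vanish for a Calabi-Yau configuration
    assert all(n[r] + 1 == sum(q[a][r] for a in range(K)) for r in range(m))
    Q = [sum(q[a][r] * J[r] for r in range(m)) for a in range(K)]
    # c_2 and c_4 with c_1 = 0, as polynomials in the J_r (cf. the formulae above)
    c2 = sp.Rational(1, 2) * (-sum((n[r] + 1) * J[r]**2 for r in range(m))
                              + sum(Qa**2 for Qa in Q))
    c4 = sp.Rational(1, 4) * (-sum((n[r] + 1) * J[r]**4 for r in range(m))
                              + sum(Qa**4 for Qa in Q) + 2 * c2**2)
    mu = sp.prod(Q)                                     # Poincare dual of X in A
    top = sp.prod(J[r]**n[r] for r in range(m))         # volume form of A
    return sp.Poly(sp.expand(c4 * mu), *J).coeff_monomial(top)

# Example: the sextic four-fold [5|6] gives chi = 2610, the maximal value
# occurring in the CICY four-fold list.
assert euler_characteristic([5], [[6]]) == 2610
\end{verbatim}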
\subsection{Hodge data}\seclabel{hodgecalc}
In terms of bundle valued cohomology, the Hodge data of the CICY four-folds may be expressed as follows
\begin{equation}\eqlabel{mrhodge1}
H^{p,q}({\cal X}) \cong H^p({\cal X},\wedge^q {\cal TX}^*) \; .
\end{equation}
On a Calabi-Yau four-fold there are four non-trivial Hodge numbers, namely $h^{1,1}$, $h^{2,1}$, $h^{3,1}$ and $h^{2,2}$, which need to be determined for the $921,497$ configuration matrices in our data set. As such, we need an efficient procedure to calculate the cohomologies~\eqref*{mrhodge1} on a computer. We will begin our discussion with some simple relations amongst the Hodge data of a Calabi-Yau four-fold that allow us to avoid calculating some of the individual cohomologies directly. We will then describe how we compute the remaining bundle valued cohomologies directly for the geometries of interest.
On a Calabi-Yau four-fold, the Betti numbers are determined by the Hodge numbers as follows
\begin{equation}
\begin{split}
&b^0=b^8=1 \;,\qquad b^1=b^7 =0 \;,\qquad b^2=b^6=h^{1,1} \;,\\&b^3=b^5=2 h^{2,1} \;,\qquad b^4=2 h^{3,1}+h^{2,2}+2 \; .
\end{split}
\end{equation}
The Euler characteristic of the manifold, a topological invariant for which we reviewed a simple formula in \secref{chernclasses}, can likewise be expressed in terms of the Betti numbers. We obtain the following expression relating the Euler and Hodge numbers
\begin{equation}
\chi= \sum_{q=0}^8 (-1)^q b^q = 4 + 2 h^{1,1}-4 h^{2,1} + 2 h^{3,1} + h^{2,2} \; . \eqlabel{chi}
\end{equation}
Thus, one of the Hodge numbers is determined by the others and the Euler characteristic, enabling us to avoid calculating one of the cohomologies~\eqref*{mrhodge1} explicitly.
Another simplification of this type can be achieved by considering the indices $\chi_q = \chi({\cal X}, \wedge^q {\cal TX}^*)$. From the index theorem, we have
\begin{equation}\eqlabel{indextheorem}
\chi_q = \sum_{p=0}^4 (-1)^p h^{p,q}({\cal X}) = \int_{\cal X} \text{ch}(\wedge^q {\cal TX}^*) \wedge \text{Td}({\cal TX}) \;.
\end{equation}
The splitting principle formulae, $c({\cal TX}) = \prod_i(1+x_i)$, $\text{ch}({\cal TX})= \sum_i e^{x_i}$ and
\begin{equation}
\text{ch}(\wedge^q {\cal TX}^*) \wedge \text{Td}({\cal TX}) = \sum_{i_1 > \ldots > i_q} e^{-x_{i_1}} \ldots e^{-x_{i_q}} \prod_j \frac{x_j}{1-e^{-x_j}},
\end{equation}
together with the Calabi-Yau condition $c_1({\cal X})=0$ of the four-fold, can be used to show that the indices~\eqref*{indextheorem} take the following form
\begin{align}\eqlabel{eulerc2c2}
\chi_0 &= 2 =\frac{1}{720} \int_{\cal X} (3 c_2^2 -c_4) \; ,\\
\chi_1 &= -h^{1,1}+h^{2,1}-h^{3,1} = \frac{1}{180} \int_{\cal X} (3 c_2^2 - 31 c_4) \; ,\\
\chi_2 &= -2 h^{2,1}+ h^{2,2} = \frac{1}{360} \int_{\cal X}(9c_2^2 + 237 c_4) \; ,\\
\chi_3 &= \chi_1 \;\;,\;\; \chi_4 = \chi_0 \; .
\end{align}
From this we get one additional non-trivial relation: $22 \chi_0 - 4 \chi_1 - \chi_2 =0$, which gives us
\begin{equation}\eqlabel{rel_hodge_numbers}
-4 h^{1,1} + 2 h^{2,1} - 4 h^{3,1} + h^{2,2} =44 \; .
\end{equation}
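For concreteness, the two constraints~\eqref*{chi} and~\eqref*{rel_hodge_numbers} can be packaged into elementary helper functions, sketched below in Python (the function names are ours). As a check we use the well-known Hodge data $(h^{1,1}, h^{2,1}, h^{3,1}) = (1,0,426)$ of the sextic four-fold $[5|6]$.
\begin{verbatim}
# Sketch: eq. (rel_hodge_numbers) used to eliminate h^{2,2}, and eq. (chi)
# rewritten so that the Euler characteristic depends on the remaining
# Hodge numbers only.
def h22_from_relation(h11, h21, h31):
    return 44 + 4 * h11 - 2 * h21 + 4 * h31

def euler_from_hodge(h11, h21, h31):
    # chi = 4 + 2 h11 - 4 h21 + 2 h31 + h22, with h22 eliminated as above
    return 6 * (8 + h11 - h21 + h31)

# Sextic four-fold [5|6]: (h11, h21, h31) = (1, 0, 426)
assert h22_from_relation(1, 0, 426) == 1752
assert euler_from_hodge(1, 0, 426) == 2610
\end{verbatim}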
\vspace{0.1cm}
The direct computation of the remaining two Hodge numbers is performed using the theory of spectral sequences~\cite{Hartshorne:1977,Griffiths:1978,Distler:1987ee,Hubsch:1992nu}\footnote{See also ref.~\cite{Anderson:2008ex} for a nice introduction to these kinds of computations.}. We will make use of two key short exact sequences
\begin{equation}\eqlabel{euler}
0 \to {\cal O}_{{\cal X}}^m \to {\cal R}\to {\cal T}_{\cal A}|_{\cal X} \to 0\;,\qquad
0 \to {\cal T}_{\cal X} \to {\cal T}_{\cal A} |_{\cal X} \to {\cal N} \to 0 \; ,
\end{equation}
referred to as the Euler and adjunction sequence, respectively. The normal bundle ${\cal N}$ and the bundle ${\cal R}$ can both be written as sums of line bundles and are explicitly given by\footnote{In this paper, we are using the following standard notation for line bundles on products of projective spaces and CICYs. The line bundle ${\cal O}_{\cal A}( k^r)$ on ${\cal A}$ is that whose first Chern class is given by $c_1({\cal O}_{\cal A}(k^r))= k^r J_r$. The line bundle ${\cal O}_{\cal X}(k^r)$ is the restriction of ${\cal O}_{\cal A}( k^r)$ to the Calabi-Yau four-fold.}
\begin{equation}
{\cal N}= \bigoplus_{\alpha=1}^K {\cal O}_{\cal X}({\bf q}_\alpha) \;,\qquad {\cal R}=\bigoplus_{r=1}^m {\cal O}_{\cal X}({\bf e}_r)^{\oplus (n_r+1)}\; ,
\end{equation}
where ${\bf e}_r$ denotes the $r$-th standard unit vector. The long exact sequences associated to these two short exact sequences can be written in the form
\begin{equation*}
\begin{array}{c|ccccc|cccccc}
&{\cal O}_{{\cal X}}^m&\to& {\cal R}&\to& {\cal T}_{\cal A}|_{\cal X}&&{\cal T}_{\cal X}&\to&{\cal T}_{\cal A} |_{\cal X}&\to&{\cal N} \\\hline
H^0({\cal X},\cdot)& \mathbb{C}^m&&H^0({\cal X},{\cal R})&&H^0({\cal X}, {\cal T}_{\cal A}|_{\cal X})&&0&&
H^0({\cal X},{\cal T}_{\cal A} |_{\cal X})&&H^0({\cal X},{\cal N})\\
H^1({\cal X},\cdot)& 0&&H^1({\cal X},{\cal R})&&H^1({\cal X}, {\cal T}_{\cal A}|_{\cal X})&&H^{3,1}({\cal X})&&
H^1({\cal X},{\cal T}_{\cal A} |_{\cal X})&&H^1({\cal X},{\cal N})\\
H^2({\cal X},\cdot)& 0&&H^2({\cal X},{\cal R})&&H^2({\cal X}, {\cal T}_{\cal A}|_{\cal X})&&H^{2,1}({\cal X})&&
H^2({\cal X},{\cal T}_{\cal A} |_{\cal X})&&H^2({\cal X},{\cal N})\\
H^3({\cal X},\cdot)& 0&&H^3({\cal X},{\cal R})&&H^3({\cal X}, {\cal T}_{\cal A}|_{\cal X})&&H^{1,1}({\cal X})&&
H^3({\cal X},{\cal T}_{\cal A} |_{\cal X})&&H^3({\cal X},{\cal N})\\
H^4({\cal X},\cdot)& \mathbb{C}^m&&H^4({\cal X},{\cal R})&&H^4({\cal X}, {\cal T}_{\cal A}|_{\cal X})&&0&&
H^4({\cal X},{\cal T}_{\cal A} |_{\cal X})&&H^4({\cal X},{\cal N})
\end{array}
\end{equation*}
From the first of these long exact sequences we learn that
\begin{equation}
\begin{array}{lllllll}
H^0({\cal X},{\cal T}_{\cal A}|_{\cal X})&\cong& \frac{H^0({\cal X},{\cal R})}{\mathbb{C}^m}&\quad\quad&
H^1({\cal X},{\cal T}_{\cal A}|_{\cal X})&\cong&H^1({\cal X},{\cal R})\\
H^2({\cal X},{\cal T}_{\cal A}|_{\cal X})&\cong&H^2({\cal X},{\cal R})&\quad\quad&
H^4({\cal X},{\cal T}_{\cal A}|_{\cal X})&\cong& H^4({\cal X},{\cal N})\;
\end{array}
\end{equation}
and
\begin{equation}
H^3({\cal X},{\cal T}_{\cal A}|_{\cal X})\cong H^3({\cal X},{\cal R})\oplus{\rm Ker}(\mathbb{C}^m\rightarrow H^4({\cal X},{\cal R}))\eqlabel{H3}\; .
\end{equation}
Combining these results with the second long exact sequence gives
\begin{align}
H^{3,1}({\cal X})&\cong \frac{H^0({\cal X},{\cal N})}{H^0({\cal X},{\cal R})/\mathbb{C}^m}\oplus {\rm Ker}(H^1({\cal X},{\cal R})\rightarrow H^1({\cal X},{\cal N}))\label{h31}\\
H^{2,1}({\cal X})&\cong {\rm Coker}(H^1({\cal X},{\cal R})\rightarrow H^1({\cal X},{\cal N}))\oplus {\rm Ker}(H^2({\cal X},{\cal R})\rightarrow H^2({\cal X},{\cal N}))\\
H^{1,1}({\cal X})&\cong {\rm Coker}(H^2({\cal X},{\cal R})\rightarrow H^2({\cal X},{\cal N}))\oplus {\rm Ker}(H^3({\cal X},{\cal T}_{\cal A}|_{\cal X})\rightarrow H^3({\cal X},{\cal N}))
\end{align}
for the desired Hodge cohomologies. The main observation from these results is that the Hodge numbers can be computed entirely from the bundle cohomology of the line bundle sums ${\cal N}$ and ${\cal R}$ on ${\cal X}$.
A particularly interesting sub-class consists of those CICY four-folds which are favour\-able. We call a CICY four-fold favourable if its complete second cohomology descends from the second cohomology of the ambient space, so that $H^{1,1}({\cal X})\cong\mathbb{C}^m$, where $\mathbb{C}^m$ is the space which appears in \eqref{H3}. A sufficient (although slightly too strong) set of conditions for this to be the case is
\begin{equation}
H^2({\cal X},{\cal N})=H^3({\cal X},{\cal N})=H^3({\cal X},{\cal R})=H^4({\cal X},{\cal R})=0\; . \eqlabel{favour}
\end{equation}
Provided these conditions hold the Hodge numbers for favourable CICYs satisfy
\begin{equation}\eqlabel{hfav}
\begin{split}
h^{1,1}({\cal X})&=m\; ,\\ h^{3,1}({\cal X})-h^{2,1}({\cal X})&=m-h^0({\cal X},{\cal R})+h^1({\cal X},{\cal R})-h^2({\cal X},{\cal R})+h^0({\cal X},{\cal N})-h^1({\cal X},{\cal N})\; .
\end{split}
\end{equation}
Together with the Euler number constraint, \eqref{chi}, this fixes three Hodge numbers in terms of the line bundle cohomology of ${\cal N}$ and ${\cal R}$ without the need to compute ranks of maps.\footnote{It seems that, in this situation, the additional constraint, \eqref{rel_hodge_numbers}, fixes the fourth Hodge number. However, it turns out that this constraint is usually automatically implied by \eqref{hfav} and \eqref{chi}.}
To complete the Hodge number calculation, we need to be able to compute line bundle cohomology on CICYs and, in general, determine the ranks of maps between such cohomologies. The first step in this direction is to relate a line bundle ${\cal L}_{\cal A}$ on the ambient space ${\cal A}$ to its restriction ${\cal L}={\cal L}_{\cal A}|_{\cal X}$ onto ${\cal X}$ by the Koszul resolution, a long exact sequence given by~\cite{Hartshorne:1977,Griffiths:1978,Matsumura:1986,Hubsch:1992nu}
\begin{equation}
0\rightarrow \wedge^K{\cal N}_{\cal A}^*\otimes {\cal L}_{\cal A}\rightarrow \dots \rightarrow \wedge^2{\cal N}_{\cal A}^*\otimes {\cal L}_{\cal A}\rightarrow {\cal N}_{\cal A}^*\otimes {\cal L}_{\cal A}\rightarrow {\cal L}_{\cal A}\rightarrow {\cal L}\rightarrow 0\; .
\end{equation}
This long exact sequence can be broken up into short exact sequences each of which have associated long exact sequences in cohomology or, alternatively, we can study the spectral sequence associated to the Koszul resolution. Either way, this allows for the computation of line bundle cohomology on the CICY ${\cal X}$ in terms of ambient space line bundle cohomology. Line bundle cohomology on a single projective space is described by a theorem due to Bott, Borel and Weil, see for example~\cite{Hubsch:1992nu}. To obtain the cohomology for line bundles on our ambient space, which are products of projective spaces, we can simply apply a version of K\"unneth's formula to the result for single projective spaces. In this way, we can develop an algorithm to compute line bundle cohomology on CICYs and, combined with the above results, this allows for a computation of Hodge numbers.
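To make the last two ingredients concrete, the following Python sketch (our own naming and conventions, not the production code) implements Bott's formula for ${\cal O}(k)$ on $\field{P}^n$ and assembles the cohomology of a line bundle on the ambient space via the K\"unneth formula.
\begin{verbatim}
# Sketch: ambient-space line bundle cohomology from Bott's formula and Kunneth.
from math import comb
from itertools import product

def bott_pn(n, k):
    """Dimensions (h^0, ..., h^n) of H^q(P^n, O(k))."""
    h = [0] * (n + 1)
    if k >= 0:
        h[0] = comb(n + k, n)          # global sections
    elif k <= -n - 1:
        h[n] = comb(-k - 1, n)         # Serre-dual top cohomology
    return h                           # all other groups vanish

def ambient_cohomology(ns, ks):
    """h^q(A, O(k^1,...,k^m)) on A = P^{n_1} x ... x P^{n_m} via Kunneth."""
    factors = [bott_pn(n, k) for n, k in zip(ns, ks)]
    total = [0] * (sum(ns) + 1)
    for qs in product(*[range(n + 1) for n in ns]):
        dim = 1
        for f, qr in zip(factors, qs):
            dim *= f[qr]
        total[sum(qs)] += dim
    return total

# Example: O(-2,-3) on P^1 x P^2 has a single cohomology class, in degree 3.
assert ambient_cohomology([1, 2], [-2, -3]) == [0, 0, 0, 1]
\end{verbatim}
These ambient space dimensions provide the input for the Koszul computation described above; the ranks of the maps appearing in the associated long exact sequences must, in general, still be determined separately.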
\subsection{Intersection numbers and distinguishing invariants}\seclabel{intnums}
Besides Chern classes, Hodge numbers and the Euler characteristics, there are a number of additional invariants which can be used to distinguish different topological types of CICY four-folds. We will focus, in particular, on those that are easily computable from the configuration matrix.
We begin by introducing a basis $\{\nu^i\}$ of $H^6({\cal X})$, dual to the integral basis, $\{J_i\}$, of $H^2({\cal X})$ such that
\begin{equation}
\int_{\cal X} J_i \wedge \nu^j = \delta_i^j
\end{equation}
as usual. Of course, the products $J_i \wedge J_j \wedge J_k$ can be written as linear combinations of the basis $\{\nu^i\}$ and it follows easily that
\begin{equation}
J_i \wedge J_j \wedge J_k = d_{ijkl} \nu^l \; , \qquad
d_{ijkl} = \int_{\cal X} J_i \wedge J_j \wedge J_k \wedge J_l \; ,
\end{equation}
where $d_{ijkl}$ are the quadruple intersection numbers.
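For favourable configurations, where the restrictions of the ambient K\"ahler forms $J_r$ span $H^{1,1}({\cal X})$, the quadruple intersection numbers can be evaluated with the same pullback trick used for the Euler characteristic in \secref{chernclasses}. The sketch below (a hypothetical helper reusing the conventions of the earlier snippet, not the authors' code) returns them as a dictionary keyed by sorted index tuples.
\begin{verbatim}
# Sketch: d_{ijkl} = int_X J_i J_j J_k J_l = [J_i J_j J_k J_l ^ mu_X]_top,
# evaluated in the ambient space (appropriate for favourable configurations).
import sympy as sp
from itertools import combinations_with_replacement

def intersection_numbers(n, q):
    m, K = len(n), len(q)
    J = sp.symbols(f"J1:{m + 1}")
    mu = sp.prod(sum(q[a][r] * J[r] for r in range(m)) for a in range(K))
    top = sp.prod(J[r]**n[r] for r in range(m))
    d = {}
    for idx in combinations_with_replacement(range(m), 4):
        integrand = sp.expand(sp.prod(J[i] for i in idx) * mu)
        d[idx] = sp.Poly(integrand, *J).coeff_monomial(top)
    return d

# Example: the sextic [5|6] has the single intersection number d_0000 = 6.
assert intersection_numbers([5], [[6]]) == {(0, 0, 0, 0): 6}
\end{verbatim}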
The products $J_i \wedge J_j$ for $i \leq j$ can be thought of as elements of $H^4({\cal X})$, but it is not clear that they are linearly independent. Consider a linear relation $\lambda^{ij} J_i \wedge J_j = 0$ among them. Then it follows that $d_{ijkl} \lambda^{kl} = 0$. In other words, if $d_{ijkl} \lambda^{kl} = 0$ does not have non-trivial solutions $\lambda^{kl}$ or, equivalently, if the matrix $d_{(ij)(kl)}$ has maximal rank, then the forms $J_i \wedge J_j$ for $i \leq j$ are linearly independent.
Now, consider the total Chern class expanded as
\begin{equation}
c({\cal X}) = \cdots + C_2^{ij} J_i \wedge J_j + C_3^{ijk} J_i \wedge J_j \wedge J_k + \cdots = \cdots + C_2^{ij} J_i \wedge J_j + c_{3,i} \nu^i + \cdots \; ,
\end{equation}
where we define $c_{3,i} = d_{ijkl} C_3^{jkl}$, $c_{2,ij} = d_{ijkl} C_2^{kl}$ and so on. Clearly, one may form an invariant in the following way
\begin{equation}\eqlabel{c2inv}
I = \int_{\cal X} c_2({\cal X}) \wedge c_2({\cal X}) = c_{2,ij} C_2^{ij} = \mathrm{tr}(c_2 C_2) \; .
\end{equation}
However, due to~\eqref*{eulerc2c2}, this invariant carries the same information as the Euler characteristic in the case of a Calabi-Yau four-fold. The problem with other contractions which involve $c_2$ or $C_2$ is that they may represent the second Chern class in a redundant way, since the forms $J_i \wedge J_j$ may not be linearly independent. So, in the absence of other assumptions, \eqref{c2inv} seems to be the only further invariant which involves Chern classes.
Let us, for the moment, assume that the forms $J_i \wedge J_j$ for $i \leq j$ are linearly independent, a condition which can be explicitly checked from the intersection numbers in any given case. Then we have the following additional invariants
\begin{align}
I_p &= c_3 C_2 (c_2 C_2)^p c_3 \; , \\
\tilde{I}_q &= \mathrm{tr}((c_2 C_2)^q) \; ,
\end{align}
for $p \geq 0$ and $q \geq 1$. Unfortunately, with the exception of the small configurations at the beginning of the list, the forms $J_i \wedge J_j$ practically always turn out to be linearly dependent and hence the above invariants $I_p$ and $\tilde{I}_q$ are of little practical use.
Next, we turn to invariants extracted solely from the quadruple intersection numbers. We will follow the logic of ref.~\cite{Green:1988fr} and generalise their results to CICY four-folds. We begin by defining the intersection form
\begin{equation}
\Lambda(K_1, K_2, K_3, K_4) = \int_{\cal X} K_1 \wedge K_2 \wedge K_3 \wedge K_4 \; ,
\end{equation}
where $K_1, \ldots, K_4$ represent classes in $H^2({\cal X}, \field{Z})$. In terms of this form, the quadruple intersection numbers are of course given by
\begin{equation}\eqlabel{dLambda}
d_{ijkl} = \Lambda(J_i, J_j, J_k, J_l) \; .
\end{equation}
The next step is to define the following sets
\begin{align}
S_1 &= \{\Lambda(K_1, K_2, K_3, K_4) \,|\, K_a \in H^2({\cal X},\field{Z}) \} \; , \\
S_2 &= \{\Lambda(K_1, K_2, K_3, K_3) \,|\, K_a \in H^2({\cal X},\field{Z}) \} \; , \\
S_3 &= \{\Lambda(K_1, K_2, K_2, K_2) \,|\, K_a \in H^2({\cal X},\field{Z}) \} \; , \\
S_4 &= \{\Lambda(K_1, K_1, K_1, K_1) \,|\, K_1 \in H^2({\cal X},\field{Z}) \} \; ,
\end{align}
and ${\cal I}_p = \mathrm{gcd}(S_p)$. The virtue of the above sets is that they are not only topologically invariant but, unlike the intersection numbers themselves, they are also basis-independent and hence the ${\cal I}_p$ are genuine invariants. This invariance is due to the sets being defined by scanning over the entire integral lattice spanned by $K_a \in H^2({\cal X},\field{Z})$. However, computing the intersection form on all elements of $H^2({\cal X},\field{Z})$ is not practical. Instead, a simplification is achieved by expanding $K_a = n_a^i J_i$ with integer coefficients $n_a^i$, which leads to
\begin{equation}\eqlabel{expandLambdad}
\Lambda(K_1, K_2, K_3, K_4) = d_{ijkl} n_1^i n_2^j n_3^k n_4^l \; .
\end{equation}
If two or more arguments in $\Lambda(K_1, K_2, K_3, K_4)$ are identical, the quadruple sums on the right hand side can be decomposed into smaller building blocks according to
\begin{align}
&d_{ijkl} n_1^i n_1^j n_2^k n_3^l = d_{iijk} (n_1^i)^2 n_2^j n_3^k + 2 \sum_{i<j} d_{ijkl} n_1^i n_1^j n_2^k n_3^l \ , \eqlabel{d1123_expansion}\\
&d_{ijkl} n_1^i n_1^j n_1^k n_2^l = d_{iiij} (n_1^i)^3 n_2^j + 3 \sum_{i<j} \left[ d_{ijjk} n_1^i (n_1^j)^2 n_2^k + d_{iijk} (n_1^i)^2 n_1^j n_2^k \right] \nonumber\\
&\qquad\qquad\qquad\,+ 6 \sum_{i<j<k} d_{ijkl} n_1^i n_1^j n_1^k n_2^l \; , \eqlabel{d1112_expansion}\\
&d_{ijkl} n_1^i n_1^j n_1^k n_1^l = d_{iiii} (n_1^i)^4 + 6 \sum_{i<j} d_{iijj} (n_1^i)^2 (n_1^j)^2 + 4 \sum_{i<j} \left[ d_{ijjj} n_1^i (n_1^j)^3 + d_{iiij} (n_1^i)^3 n_1^j \right] \nonumber\\
&\qquad\qquad\qquad\,+ 12 \sum_{i<j<k} \left[ d_{ijkk} n_1^i n_1^j (n_1^k)^2 + d_{ijjk} n_1^i (n_1^j)^2 n_1^k + d_{iijk} (n_1^i)^2 n_1^j n_1^k \right] \nonumber\\
&\qquad\qquad\qquad\,+ 24 \sum_{i<j<k<l} d_{ijkl} n_1^i n_1^j n_1^k n_1^l \; . \eqlabel{d1111_expansion}
\end{align}
Given this, we define, in addition,\footnote{The sign choices in the definitions of $\tilde{S}_3$ and $\tilde{S}_4$ arise because we want to compare these sets to $S_3$ and $S_4$ which scan over the entire integral lattice. This includes, in particular, those elements which have an expansion of the form $K = \pm J_1 \pm J_2 \pm \ldots$ in terms of the basis $\{J_i\}$. From \eqref{d1112_expansion,d1111_expansion}, we see that all possible relative signs appear in the ordered sums that involve ambiguities in the grouping of indices. Thus, we need to include all possible relative signs in order for the entire lattice to be scanned by $\tilde{S}_3$ and $\tilde{S}_4$.}
\begin{align}
\tilde{S}_1 &= \{ d_{ijkl} \,|\, i,j,k,l = 1,\ldots,h^{1,1}({\cal X}) \} \; , \\
\tilde{S}_2 &= \{d_{iijk}| i,j,k = 1,\ldots,h^{1,1}({\cal X})\} \cup \{ 2 d_{ijkl} | i,j,k,l = 1,\ldots,h^{1,1}({\cal X})\} \; , \\
\tilde{S}_3 &= \{d_{iiij}| i,j = 1,\ldots,h^{1,1}({\cal X})\} \cup \{ 3 (d_{iijk} \pm d_{ijjk}) | i,j,k = 1,\ldots,h^{1,1}({\cal X})\} \nonumber\\ &\cup \{ 6 d_{ijkl} | i,j,k,l = 1,\ldots,h^{1,1}({\cal X})\} \; , \\
\tilde{S}_4 &= \{d_{iiii}| i = 1,\ldots,h^{1,1}({\cal X})\} \cup \{6d_{iijj}| i,j = 1,\ldots,h^{1,1}({\cal X})\} \nonumber\\ &\cup \{4 (d_{iiij} \pm d_{ijjj})| i,j = 1,\ldots,h^{1,1}({\cal X})\}
\nonumber\\ &\cup \{ 12 (d_{ijkk} \pm d_{ijjk} \pm d_{iijk}) | i,j,k = 1,\ldots,h^{1,1}({\cal X})\} \nonumber\\ &\cup \{ 24 d_{ijkl} | i,j,k,l = 1,\ldots,h^{1,1}({\cal X})\} \; .
\end{align}
From \eqref{dLambda}, it follows that $\tilde{S}_p \subset S_p$. Therefore, a common divisor of $S_p$ is also a common divisor of $\tilde{S}_p$. Conversely, a common divisor of $\tilde{S}_p$ divides all
$\Lambda(K_1, K_2, K_3, K_4)$ owing to the expansion~\eqref*{expandLambdad} and the fact that the $n_a^i$ are integers. Altogether, this shows that $S_p$ and $\tilde{S}_p$ have equal sets of common divisors and hence, in particular
\begin{equation}
{\cal I}_p = \mathrm{gcd}(S_p) = \mathrm{gcd}(\tilde{S}_p) \; .
\end{equation}
In practice these invariants can, of course, only be explicitly calculated for favourable configurations where we know all of the intersection numbers. These are roughly half of the CICYs in our data set.
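For such favourable cases, a possible implementation of the invariants ${\cal I}_1,\ldots,{\cal I}_4$ via the sets $\tilde{S}_p$ is sketched below (written for clarity rather than speed; it consumes the intersection number dictionary of the earlier snippet, and all naming is ours).
\begin{verbatim}
# Sketch: the gcd invariants I_p = gcd(tilde-S_p) from the quadruple
# intersection numbers d[(i<=j<=k<=l)] of a favourable configuration.
from math import gcd
from functools import reduce
from itertools import product

def dd(d, *idx):
    return d.get(tuple(sorted(idx)), 0)      # symmetrised lookup

def gcd_invariants(d, h11):
    R = range(h11)
    S1 = [dd(d, i, j, k, l) for i, j, k, l in product(R, repeat=4)]
    S2 = ([dd(d, i, i, j, k) for i, j, k in product(R, repeat=3)]
          + [2 * x for x in S1])
    S3 = ([dd(d, i, i, i, j) for i, j in product(R, repeat=2)]
          + [3 * (dd(d, i, i, j, k) + s * dd(d, i, j, j, k))
             for i, j, k in product(R, repeat=3) for s in (1, -1)]
          + [6 * x for x in S1])
    S4 = ([dd(d, i, i, i, i) for i in R]
          + [6 * dd(d, i, i, j, j) for i, j in product(R, repeat=2)]
          + [4 * (dd(d, i, i, i, j) + s * dd(d, i, j, j, j))
             for i, j in product(R, repeat=2) for s in (1, -1)]
          + [12 * (dd(d, i, j, k, k) + s1 * dd(d, i, j, j, k) + s2 * dd(d, i, i, j, k))
             for i, j, k in product(R, repeat=3) for s1 in (1, -1) for s2 in (1, -1)]
          + [24 * x for x in S1])
    return tuple(reduce(gcd, (abs(int(x)) for x in S), 0) for S in (S1, S2, S3, S4))
\end{verbatim}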
Another invariant we consider is the signature of the intersection matrix. Denote by $G$ the matrix $d_{(ij)(kl)}$ with $i \leq j$ and $k \leq l$, where we combine the first and last two indices each into a single index. Then $G$ transforms under a change of basis as $G \to P^T G P$ with certain general linear matrices $P$. The eigenvalues of $G$ are of course not invariant under such a transformation but, by Sylvester's law of inertia, the numbers of positive and negative eigenvalues are. Hence, the two invariants obtained in this way are the number of positive and negative eigenvalues of $G$. Of course, the actual computation of these invariants is also restricted to the favourable configurations.
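The signature invariants can be extracted, for instance, as follows (again only a sketch; \texttt{numpy} is used to diagonalize the real symmetric matrix $G$, and the tolerance used to decide the sign of an eigenvalue is an ad-hoc choice).
\begin{verbatim}
# Sketch: number of positive and negative eigenvalues of G = d_(ij)(kl),
# with the index pairs i <= j and k <= l combined into single indices.
import numpy as np
from itertools import combinations_with_replacement

def intersection_signature(d, h11):
    pairs = list(combinations_with_replacement(range(h11), 2))
    G = np.array([[float(d.get(tuple(sorted(p + q)), 0)) for q in pairs]
                  for p in pairs])
    evals = np.linalg.eigvalsh(G)                 # G is symmetric
    tol = 1e-9 * max(1.0, float(np.abs(evals).max()))
    return int((evals > tol).sum()), int((evals < -tol).sum())
\end{verbatim}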
\section{Fibration structure}\seclabel{fib}
\subsection{A class of elliptic fibrations}\seclabel{fib_class}
We would like to enumerate and present the different ways in which the CICY four-folds discussed in \secref{review} and ref.~\cite{Gray:2013mja} can be written as an elliptic fibration over a three dimensional base. Finding every rewriting of a Calabi-Yau four-fold as an elliptic fibration turns out to be a formidable task, especially in non-favourable cases where not all of the divisors in ${\cal X}$ descend from divisors on the ambient space projective factors. Nevertheless, there exist specific types of elliptic fibration which can be simply distinguished from the structure of the configuration matrices~\eqref*{conf2} themselves. These bear some similarity to methods of identifying fibrations in other Calabi-Yau four-fold constructions~\cite{Kreuzer:1997zg}. We have performed an exhaustive classification of these readily accessible fibration structures within our data set.
Consider a configuration matrix which can, by row and column permutations, be put in the following form
\begin{equation}\eqlabel{mrfib}
{\cal X}= \left[\begin{array}{c|cc} {\cal A}_1 & 0 & {\cal F} \\
{\cal A}_2 & {\cal B} & {\cal T} \end{array}\right] .
\end{equation}
Here ${\cal A}_1$ and ${\cal A}_2$ are two products of $N_1$ and $m-N_1$ projective spaces respectively, while ${\cal F}, {\cal B}$ and ${\cal T}$ are sub-block matrices. If ${\cal X}$ is a Calabi-Yau four-fold then all of the rows, in particular the first $N_1$, obey the condition~\eqref*{c1zero}. Thus, if the components of ${\cal F}$ are denoted $f_{\hat{\alpha}}^{\hat{r}}$ where $\hat{r}=1,\ldots,N_1$ and $\hat{\alpha}=1,\ldots, \hat{K}$, then we have $\sum_{\hat{\alpha}=1}^{\hat{K}} f_{\hat{\alpha}}^{\hat{r}} = n_{\hat{r}}+1$ and $[{\cal A}_1| {\cal F}]$ is also Calabi-Yau. In examples where $\sum_{\hat{r}=1}^{N_1} n_{\hat{r}} - \hat{K} =1$ this Calabi-Yau is a one-fold, that is, $[{\cal A}_1| {\cal F}]$ is an elliptic curve. In such a situation, the configuration matrix~\eqref*{mrfib} describes an elliptic fibration over the almost fano three-fold base $\left[ {\cal A}_2| {\cal B} \right]$ (here, ``almost fano'' is shorthand for a three-fold configuration whose anticanonical bundle is almost-ample~\cite{Hubsch:1992nu}) with the fibre being described by the matrix $[{\cal A}_1| {\cal F}]$. The twisting of the fibre over the base is encoded in the matrix ${\cal T}$. We shall refer to an elliptic fibration of the form~\eqref*{mrfib} as an ``obvious elliptic fibration'' or ``OEF'' for short.
A given configuration matrix may admit many different OEFs of the form~\eqref*{mrfib}. In enumerating the inequivalent fibrations of this type, we face redundancy issues similar to those encountered in the compilation of the CICY four-fold list itself. It is clear that two different configuration matrices in the form~\eqref*{mrfib} can describe the same OEF, for example, if they are related by permutations of rows and columns which do not mix up the block form of the matrix. In general, the redundancy between fibrations could be due to any of the types of identities between configurations we have discussed in ref.~\cite{Gray:2013mja}. Redundancies can be removed from the description of sets of possible elliptic fibrations using very similar observations to those made in the compilation of the CICY four-fold list~\cite{Gray:2013mja}, and results such as those in \appref{perms}. We remove all of the redundancies that are enumerated in Section 4.III of reference~\cite{Gray:2013mja}, as well as row and column permutations which do not mix up the fibre and base structure described in \eqref{mrfib}. Even once such redundancies are removed, we will see that the CICY four-folds generically admit many OEFs, especially for manifolds with larger Picard number. It should be noted that not all such elliptic fibrations of a manifold can be manifest in the configuration matrix simultaneously. Some of the rows comprising the fibre in one elliptic fibration may also appear in the fibre description of an inequivalent OEF. As such, while the configuration matrix can always be put in the form \eqref{mrfib} for any single fibration, further nesting of such structure can not be assumed in the case of multiple fibrations. We also remind the reader that each configuration matrix describes an entire complex structure moduli space of configurations. Thus two `equivalent' fibrations could differ from one another for different choices of complex structure. The statement is simply that, for two equivalent fibrations, given a choice of complex structure of the first, there is a choice of complex structure of the second such that the two fibrations are identical. For more general recent results on the subject of elliptic fibrations of Calabi-Yau four-folds see ref.~\cite{Kollar:2012pv}.
Testing whether a given matrix is of the form~\eqref*{mrfib} can be straightforwardly implemented on a computer, as can the redundancy removal described above. We have analysed the list of CICY four-folds in this way and the results of this investigation will be presented in \secref{results_fibs}.
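A naive enumeration of candidate OEF structures, before the redundancy removal discussed above, can be sketched as follows: for every subset of projective space rows one collects the constraints with non-zero entries on these rows; if the resulting sub-configuration is a one-fold it is an elliptic fibre, and the remaining rows and columns form the base. The function below is our own sketch of this candidate search, not the scanning code itself.
\begin{verbatim}
# Sketch: enumerate row subsets for which [n|q] takes the OEF block form
# with a one-dimensional (elliptic curve) fibre; the removal of residual
# redundancies between the resulting fibrations is not included here.
from itertools import combinations

def candidate_oefs(n, q):
    m, K = len(n), len(q)
    for size in range(1, m):
        for rows in combinations(range(m), size):
            cols = tuple(a for a in range(K) if any(q[a][r] for r in rows))
            if sum(n[r] for r in rows) - len(cols) == 1:
                # fibre = [rows | cols], base = remaining rows and columns
                yield rows, cols
\end{verbatim}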
\subsection{The class of sections} \seclabel{sec}
Much of the physics literature that has been developed for describing F-theory compactifications on Calabi-Yau four-folds relies not only on the existence of an elliptic fibration, but also on that fibration admitting a section. We thus wish to study a class of sections of the OEFs discussed in the previous subsection.
If a section exists for a given fibration, it constitutes a divisor of the Calabi-Yau four-fold itself. In the description of our manifolds we have one set of divisors over which we have particularly good computational control, namely those that descend from hyperplanes in the ambient space. Divisors which descend in this way from ${\cal A}$ to ${\cal X}$ are frequently referred to as ``favourable'' in the literature. For computational ease we will restrict our attention to sections which correspond to favourable divisors, referring to these as ``favourable sections''. As we will see, this will provide us with a very large set of examples with which to work, and thus this choice is not overly restrictive.
Deciding which of the OEFs we shall enumerate admit a favourable section is somewhat beyond the computational scope of the current paper. Instead, we will check a condition which is necessary if a fibration is to admit a section which is a generic representative of a favourable divisor class. We first define a form on the base $[{\cal A}_2|{\cal B}]$ as follows
\begin{equation}
\mu_{\text{points}}=\bigwedge_{\check{r}=N_1+1}^m J_{\check{r}}^{n_{\check{r}}} \; .
\end{equation}
The form $\mu_{\text{points}}$ is dual to a fixed number of points in the usual way
\begin{equation}
\int_{[{\cal A}_2|{\cal B}]} \mu_{\rm{points}} = \#_{\rm points} \; .
\end{equation}
We then write a form, $S = a^r J_r$, which is dual to a general favourable divisor class. We wish to find coefficients $a^r$ for which this divisor class could contain the putative section. To do this we demand that the divisor class $S$ intersects the form dual to $\#_{\rm points}$ fibres, described by the pullback under the fibration map $\pi$ of $\mu_{\rm points}$, $\#_{\rm points}$ times (once for each fibre)
\begin{equation} \eqlabel{secc}
\int_{\cal X} \pi^* \mu_{\rm{points}} \wedge S = \#_{\rm points} \; .
\end{equation}
If there is a solution, $a^r$, to~\eqref*{secc} then the intersection numbers of ${\cal X}$ satisfy the necessary condition for a generic element of the divisor class dual to $S=a^rJ_r$ to be a section. If not, no such section can exist. We emphasize that even in the case of a positive result one has to be careful. In order to prove the existence of a section, one would need to show that there is a representative of the relevant divisor class which is nowhere vertical over the base.
\section{Results}\seclabel{results}
We have applied the methods discussed in this paper to the list of 921,497 CICY four-fold configuration matrices presented in ref.~\cite{Gray:2013mja} in order to further explore their mathematical properties. In this section, we present the main results of this analysis. The complete data, which is the output of several computer programs running in parallel on a computer cluster for several months, can be downloaded from~\cite{cicylist4} in a format that is described in \appref{dataformat}.
\subsection{Cartography of properties: Hodge data and distinguishing invariants}
Using the techniques presented in \secref{hodgecalc}, we have computed all Hodge numbers of all CICY four-folds. We have excluded from this analysis the 15,813 block-diagonal configuration matrices since they correspond to product manifolds, which generally have more non-zero entries in their Hodge diamond than an indecomposable four-fold. However, the Hodge numbers in these cases follow from those of their lower-dimensional constituents and K\"unneth's formula.
\begin{figure}[!t]\centering
\includegraphics[width=0.6\textwidth]{eulerhisto_nonprod.pdf}
\caption{Distribution of the Euler characteristic $\chi$ in the CICY four-fold list (excluding product manifolds), as a logarithmic plot. The values lie in the range $288 \leq \chi \leq 2610$.}
\figlabel{eulerhisto}
\end{figure}
For the remaining $\text{921,497} - \text{15,813} = \text{905,684}$ CICY four-folds, we find the following mean values for the Euler characteristic $\chi$ and the Hodge numbers $h^{p,q}$
\begin{equation}\eqlabel{hodge_mean_values}
\begin{aligned}
\langle\chi\rangle = 341^{2610}_{288} \; , \quad
&\langle h^{1,1} \rangle = 10.1^{24}_{1} \; ,
&\langle h^{2,1} \rangle = 0.817^{33}_{0} \; , \\
&\langle h^{3,1} \rangle = 39.6^{426}_{20} \; ,
&\langle h^{2,2} \rangle = 241^{1752}_{204} \; ,
\end{aligned}
\end{equation}
where the superscripts and subscripts respectively denote the maximal and minimal values that occur. For further details, we refer to the logarithmic plots of the distribution of the Euler and Hodge numbers which can be found in \figref{eulerhisto,hodgehisto} respectively.
\begin{figure}[!t]\centering
\includegraphics[width=0.85\textwidth]{hodgehisto.pdf}
\caption{Logarithmic plots of the abundance of Hodge numbers in the CICY four-fold list (excluding product manifolds). Here, $N$ is the number of times a given value of the Hodge number appears in the CICY four-fold list.}
\figlabel{hodgehisto}
\end{figure}
The four-dimensional space spanned by the Hodge numbers is depicted in \figref{hodgehisto2dsections}, where the six canonical two-dimensional subspaces are shown.
\begin{figure}[!t]\centering
\subfigure{\includegraphics[width=0.328\textwidth]{hodgehistoh11h21.pdf}}
\subfigure{\includegraphics[width=0.328\textwidth]{hodgehistoh11h31.pdf}}
\subfigure{\includegraphics[width=0.328\textwidth]{hodgehistoh11h22.pdf}}\\
\subfigure{\includegraphics[width=0.328\textwidth]{hodgehistoh21h31.pdf}}
\subfigure{\includegraphics[width=0.328\textwidth]{hodgehistoh21h22.pdf}}
\subfigure{\includegraphics[width=0.328\textwidth]{hodgehistoh31h22.pdf}}
\caption{The canonical two-dimensional sections of the space spanned by the Hodge numbers. The colouring encodes the abundance with which the particular combination of Hodge numbers occurs in the CICY four-fold list (excluding product manifolds).}
\figlabel{hodgehisto2dsections}
\end{figure}
There is one noteworthy peculiarity evident in one of the plots.\phantomsection\label{rel_h31_h22_origin} Upon inspection of the bottom right graph in \figref{hodgehisto2dsections}, one discovers an approximately linear correlation between the values of $h^{3,1}$ and $h^{2,2}$. This correlation follows from combining the exact relation~\eqref*{rel_hodge_numbers} with the empirical fact that $h^{1,1}$ and $h^{2,1}$ vary over a much smaller range than $h^{3,1}$ in our data set. To see this, rewrite~\eqref*{rel_hodge_numbers} as
\begin{equation}
h^{2,2} = 4 h^{3,1} + ( 44 + 4 h^{1,1} - 2 h^{2,1} ) \; .
\end{equation}
Comparing with~\eqref*{hodge_mean_values} shows that, on average, $h^{2,2}$ and $4 h^{3,1}$ are considerably larger than the expression in parentheses. To a good approximation, we may thus replace $h^{1,1}$ and $h^{2,1}$ by their mean values to obtain a linear relationship between $h^{3,1}$ and $h^{2,2}$
\begin{equation}\eqlabel{linrel}
h^{2,2} \approx 4 h^{3,1} + ( 44 + 4 \langle h^{1,1} \rangle - 2 \langle h^{2,1} \rangle ) = 4 h^{3,1} + 82.8 \; .
\end{equation}
The graph of this approximate relationship overlaid with the exact density histogram is plotted in \figref{rel_h31_h22} showing good agreement between the linear curve and the density distribution.
\begin{figure}[!t]\centering
\includegraphics[width=0.50\textwidth]{rel_h31_h22.pdf}
\caption{Density histogram of the pair $(h^{3,1}, h^{2,2})$ in the CICY four-fold list (excluding product manifolds) overlaid with the linear equation $h^{2,2} \approx 4 h^{3,1} + 82.8$ (orange curve). The origin of this approximate linear relation between $h^{3,1}$ and $h^{2,2}$ is explained on page \pageref{rel_h31_h22_origin}.}
\figlabel{rel_h31_h22}
\end{figure}
A somewhat similar relation is also known to hold for a different class of explicitly constructed Calabi-Yau four-folds~\cite{Lynker:1998pb}. It is important to stress however that the approximate linear correlation~\eqref*{linrel} is, as far as we know, merely an artefact of the construction of CICY four-folds.
Under mirror symmetry, the two Hodge numbers $h^{1,1}$ and $h^{3,1}$ are interchanged~\cite{Batyrev:1993}. In order to illustrate the situation of mirror symmetry for CICY four-folds, we show a mirror plot in \figref{mirrorplot} -- that is, a plot of $(h^{1,1} + h^{3,1})$ against $(h^{1,1} - h^{3,1})$.
\begin{figure}[!t]\centering
\includegraphics[width=0.60\textwidth]{mirrorplot.pdf}
\caption{A plot of $(h^{1,1} + h^{3,1})$ against $(h^{1,1} - h^{3,1})$. The dashed lines bound the region $h^{1,1} \geq 0$, $h^{3,1} \geq 0$.}
\figlabel{mirrorplot}
\end{figure}
From the highly asymmetrical plot, we can conclude that the mirror of a CICY four-fold is in most cases not itself a CICY four-fold, with the notable exception of 153 configurations with $h^{1,1} = h^{3,1}$. This situation is very similar to the case of CICY three-folds and, as was noted for example in ref.~\cite{Candelas:2007ac}, it is a consequence of the fact that CICYs are a rather special sub-class among \emph{all} Calabi-Yau four-folds. Indeed, for more general constructions the mirror plot typically becomes more symmetrical~\cite{Lynker:1998pb}.
Clearly, it is desirable to know how many topologically distinct manifolds there are in the list of CICY four-folds. We have thus also computed the topological invariants discussed in \secref{intnums}. Taking all of these into account, we find in total 36,779 different sets of topological invariants. This number serves as a new lower bound for the number of topologically distinct manifolds in the list of CICY four-folds. It improves the lower bound given in ref.~\cite{Gray:2013mja} by an order of magnitude. It may well be possible to raise this lower bound further by considering additional topological invariants, such as the ones studied in ref.~\cite{Bizet:2014uua}.
\subsection{The list of elliptic fibrations and their favourable sections}\seclabel{results_fibs}
We have performed an exhaustive computer scan to find all OEF structures of the type described in \secref{fib_class} among the list of CICY four-folds. The resulting data set contains 50,114,908 elliptic fibrations\footnote{We would like to remark that the numbers quoted in this section are to be regarded with some care. Despite modding out by some types of equivalences such as row and column permutations, we expect the presence of residual equivalences among the elliptic fibrations, just as with the four-fold configuration matrices themselves.} distributed among 921,020 CICY four-folds. The remaining 477 CICY four-folds cannot be brought into the OEF form~\eqref*{mrfib}.
For the rest of this section, we exclude the 15,813 product manifolds. This reduces the number of elliptic fibrations by 648,660 to 49,466,248. On average a CICY four-fold thus admits 54.6 OEFs and the range of the number of OEFs per configuration is 0--354. A logarithmic plot of the distribution of the elliptic fibration abundance is shown in \figref{fibhisto}.
\begin{figure}[!t]\centering
\includegraphics[width=0.60\textwidth]{fibhisto.pdf}
\caption{Distribution of elliptic fibration abundance in the CICY four-fold list (excluding product manifolds). The values lie in the range 0 - 354. We find 49,466,248 OEFs in total and on average each CICY four-fold configuration is elliptically fibered in 54.6 different ways.}
\figlabel{fibhisto}
\end{figure}
It should be noted that every configuration matrix in the list with $h^{1,1} > 12$ admits at least one OEF. This ubiquity of elliptic fibrations at high Picard number echoes a structure that was found in ref.~\cite{Taylor:2012dr}, for the Kreuzer-Skarke classification of Calabi-Yau three-folds constructed as hypersurfaces in toric ambient spaces~\cite{Kreuzer:2000xy,Kreuzer:2002uu}.
There are two different types of fibre configurations in our list. The first type is given by block-diagonal fibre configurations, such as, for example
\begin{equation}
\left[\begin{array}{c|cc} 1 & 2 & 0 \\ 2 & 0 & 3 \end{array}\right] \; .
\end{equation}
This configuration describes two points in $\field{P}^1$ times a torus $[2|3]$. The fibres of a total of 2,149,222 OEFs are block-diagonal. The remaining 47,317,026 non-block-diagonal fibre configuration matrices describe irreducible tori. These fibrations can degenerate over special loci in the base. It should be noted that some $99.4\%$ of the fibre descriptions contain linear constraints in the coordinates of a single projective space. However, such linear constraints cannot be removed (by replacing the relevant $\field{P}^n$ with $\field{P}^{n-1}$) as different redundant descriptions of the fibre can be twisted over the base of the OEF in inequivalent ways.
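Whether a given fibre (or four-fold) configuration is block-diagonal can be decided by a connectivity check on the bipartite graph whose vertices are the rows and columns of the configuration matrix, as in the following sketch (our own illustration, applied to the example displayed above).
\begin{verbatim}
# Sketch: a configuration [n|q] is block-diagonal (i.e. a product) iff the
# bipartite graph "row r -- column a whenever q[a][r] != 0" does not connect
# all rows.  Isolated rows (untouched by any constraint) also split off.
def is_block_diagonal(n, q):
    m, K = len(n), len(q)
    if m < 2:
        return False
    seen_rows, seen_cols, stack = {0}, set(), [("row", 0)]
    while stack:
        kind, x = stack.pop()
        if kind == "row":
            for a in range(K):
                if q[a][x] and a not in seen_cols:
                    seen_cols.add(a)
                    stack.append(("col", a))
        else:
            for r in range(m):
                if q[x][r] and r not in seen_rows:
                    seen_rows.add(r)
                    stack.append(("row", r))
    return len(seen_rows) < m

# The fibre configuration displayed above is block-diagonal:
assert is_block_diagonal([1, 2], [[2, 0], [0, 3]])
\end{verbatim}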
It is also of interest to analyse the base manifolds that occur in our list. There are three main types of base manifolds, namely products of projective spaces, almost fano complete intersections in products of projective spaces and $\field{P}^1$ times almost del Pezzo complete intersections in products of projective spaces. In \tabref{bases} we further sub-divide the three main types and present a complete classification of the base manifolds that occur in our list.
\begin{table}[!t]\centering
\setlength\extrarowheight{4pt}
\begin{tabular}{c@{\hspace{3.7mm}}c@{\hspace{3.7mm}}c@{\hspace{3.7mm}}c} \toprule
Type & $\#$ & $\chi$ & Example configurations \\[4pt] \toprule
$\field{P}^3$ & 562,342 & 4 & --- \\[4pt] \hline
$\field{P}^1 \times \field{P}^2$ & 9,745,787 & 6 & --- \\[4pt] \hline
$(\field{P}^1)^3$ & 10,030,442 & 8 & --- \\[4pt] \hline
\begin{tabular}{@{}c@{}}almost \\ fano ${\cal B}_3$ \end{tabular} & 6,252,997 & \begin{tabular}{@{}c@{}}$\{-60, -50,$ \\ $-48, -46\}$ \\ $\cup \{2\, n | -21$ \\ $\leq n\leq 12\}$\end{tabular} & \begin{tabular}{@{}cccc@{}} $\left[\begin{smallmatrix} 1 \\ 4 \end{smallmatrix} \big| \begin{smallmatrix} 0 & 2 \\ 3 & 1 \end{smallmatrix}\right]_{\text{\tiny -30}}$, & $\left[\begin{smallmatrix} 1 \\ 3 \end{smallmatrix} \big| \begin{smallmatrix} 2 \\ 2 \end{smallmatrix}\right]_{\text{\tiny 0}}$, & $\left[\begin{smallmatrix} 2 \\ 2 \end{smallmatrix} \big| \begin{smallmatrix} 1 \\ 3 \end{smallmatrix}\right]_{\text{\tiny 6}}$, & {\footnotesize $\ldots$}, \end{tabular} \\[4pt] \hline
fano ${\cal B}_3^\prime$ & 6,995,514 & \begin{tabular}{@{}c@{}}$\{-56, -36,$ \\ $-24, -14,$ \\ $-12, -6,$ \\ $-4, 0, 2, 4,$ \\ $6, 8, 10\}$\end{tabular} & \begin{tabular}{@{}cccc@{}} {\footnotesize $[4|4]_{_{-56}}$}, & {\footnotesize $[5|2\;\,2]_{_{0}}$}, & {\footnotesize $[4|2]_{_{4}}$}, & $\left[\begin{smallmatrix} 2 \\ 3 \end{smallmatrix} \big| \begin{smallmatrix} 1 & 1 \\ 1 & 2 \end{smallmatrix}\right]_{\text{\tiny -4}}$, \\ $\left[\begin{smallmatrix} 3 \\ 3 \end{smallmatrix} \big| \begin{smallmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \end{smallmatrix}\right]_{\text{\tiny 0}}$, & $\left[\begin{smallmatrix} 2 \\ 2 \end{smallmatrix} \big| \begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right]_{\text{\tiny 6}}$, & {\footnotesize $\ldots$}, & \end{tabular} \\[4pt] \hline
\begin{tabular}{@{}c@{}}$\field{P}^1 \times {\cal B}_2$ \\ (${\cal B}_2$ almost \\ del Pezzo) \end{tabular} & 15,879,166 & \begin{tabular}{@{}c@{}}$\{8, 10, 12,$ \\ $14, 16, 18,$ \\ $20, 24\}$\end{tabular} & $\field{P}^1 \times\begin{dcases} \\ \\ \\ \\ \\ \end{dcases}$\hspace{-0.1cm}\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
$\left[\begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \big| \begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right]_{\text{\tiny 4}}$, & $\left[\begin{smallmatrix} 1 \\ 1 \\ 2 \end{smallmatrix} \Big| \begin{smallmatrix} 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{smallmatrix}\right]_{\text{\tiny 5}}$, & $\left[\begin{smallmatrix} 1 \\ 1 \\ 1 \end{smallmatrix} \Big| \begin{smallmatrix} 1 \\ 1 \\ 1 \end{smallmatrix}\right]_{\text{\tiny 6}}$, & $\left[\begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \big| \begin{smallmatrix} 1 \\ 2 \end{smallmatrix}\right]_{\text{\tiny 7}}$, \\
{\footnotesize$[4|2\;\,2]_{_{8}}$}, & {\footnotesize$[3|3]_{_{9}}$}, & $\left[\begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \big| \begin{smallmatrix} 2 \\ 2 \end{smallmatrix}\right]_{\text{\tiny 10}}$, & $\left[\begin{smallmatrix} 1 \\ 2 \end{smallmatrix} \big| \begin{smallmatrix} 1 \\ 3 \end{smallmatrix}\right]_{\text{\tiny 12}}$, \\
$\left[\begin{smallmatrix} 2 \\ 2 \end{smallmatrix} \big| \begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right]_{\text{\tiny 6}}$, & $\left[\begin{smallmatrix} 1 \\ 1 \\ 1 \end{smallmatrix} \Big| \begin{smallmatrix} 1 \\ 1 \\ 2 \end{smallmatrix}\right]_{\text{\tiny 8}}$, & $\left[\begin{smallmatrix} 1 \\ 1 \\ 2 \end{smallmatrix} \Big| \begin{smallmatrix} 0 & 1 \\ 1 & 0 \\ 1 & 2 \end{smallmatrix}\right]_{\text{\tiny 8}}$, & $\left[\begin{smallmatrix} 1 \\ 3 \end{smallmatrix} \big| \begin{smallmatrix} 0 & 1 \\ 3 & 1 \end{smallmatrix}\right]_{\text{\tiny 12}}$, \\
$\left[\begin{smallmatrix} 1 \\ 4 \end{smallmatrix} \big| \begin{smallmatrix} 0 & 0 & 1 \\ 2 & 2 & 1 \end{smallmatrix}\right]_{\text{\tiny 12}}$, & $\left[\begin{smallmatrix} 1 \\ 1 \\ 1 \end{smallmatrix} \Big| \begin{smallmatrix} 1 \\ 2 \\ 2 \end{smallmatrix}\right]_{\text{\tiny 12}}$, & $\left[\begin{smallmatrix} 1 \\ 1 \\ 2 \end{smallmatrix} \Big| \begin{smallmatrix} 0 & 1 \\ 2 & 0 \\ 2 & 1 \end{smallmatrix}\right]_{\text{\tiny 12}}$
\end{tabular}\hspace{-0.1cm}$\begin{rcases} \\ \\ \\ \\ \\ \end{rcases}$ \\[4pt]
\bottomrule
\end{tabular}
\caption[Classification of base manifolds]{Classification of base manifolds that occur in our list. The first column lists the different types of three-fold bases. By ${\cal B}_3$ (${\cal B}_2$) we denote almost fano (almost del Pezzo) complete intersections in products of projective spaces, that is three-(two-)fold configurations whose anticanonical bundle is almost-ample. In contrast, ${\cal B}_3^\prime$ denote fano complete intersections in products of projective spaces. Their anticanonical bundle is ample. The subscripts on the configuration matrices denote the Euler characteristics. The second column counts how many times the types of base manifolds occur in the list of fibrations. In the third column, we list all of the different values for the Euler characteristic $\chi$ that occur. The last column contains example configurations and for the case of $\field{P}^1 \times {\cal B}_2$ this list is actually complete.\footnotemark}
\tablabel{bases}
\end{table}
We remark that bases of the form $(\field{P}^1)^2 \times {\cal B}_1$ and $\field{P}^2 \times {\cal B}_1$, where ${\cal B}_1$ is an almost ample complete intersection $1$-fold, such as $[2|2]$, $\left[\begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \big| \begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right]$ or $\left[\begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \big| \begin{smallmatrix} 1 \\ 2 \end{smallmatrix}\right]$, do not occur in the classification of base manifolds. This is a consequence of the redundancy removal (more precisely, the modding out by ineffective splittings and identities) that was employed in the compilation of the CICY 4-fold list~\cite{Gray:2013mja}, since the ${\cal B}_1$ merely describe different embeddings of $\field{P}^1$~\cite{Candelas:1987kf}. These cases are thus already captured by $(\field{P}^1)^3$ and $\field{P}^1 \times \field{P}^2$.
\vspace{0.1cm}
Of the 50,114,908 OEFs in our data set 26,088,498 satisfy the necessary condition for admitting a section which is a generic element of a favourable divisor class as described in \secref{sec}.
\footnotetext{Moreover, the topological types of almost del Pezzo surfaces are classified by their Euler characteristic, except for the case $\chi({\cal B}_2) = 4$~\cite{Hubsch:1992nu}. Hence, the configurations in the last two rows in the list of $\field{P}^1 \times {\cal B}_2$ configurations are equivalent to the respective configurations in the first two rows that have the same Euler characteristics.}
Restricting ourselves to the 49,466,248 elliptic fibrations of CICY four-folds which are not direct products, we find 25,999,860 examples which obey the conditions. \Figref{sec} shows the multiplicity of configuration matrices (omitting the direct products) which admit a given number of fibrations satisfying this necessary condition. The largest number of fibrations of a single configuration matrix potentially admitting a generic favourable section is 312. It should be noted that, because of the form of condition~\eqref*{secc}, any fibration which satisfies these conditions will admit multiple divisor classes which are suitable. This would correspond to, potentially, multiple sections (as opposed to a multi-section, which all of the fibrations admit).
\begin{figure}[!t]\centering
\includegraphics[width=0.80\textwidth]{sechisto.pdf}
\caption{Distribution of the multiplicity of configuration matrices (excluding product manifolds) admitting a given number of OEFs which satisfy the necessary conditions to admit a generic favourable section, as described in \secref{sec}.}
\figlabel{sec}
\end{figure}
\section*{Acknowledgements}
We would like to thank Lara Anderson, Ron Donagi and Martijn Wijnholt for useful discussions. A.~L.~is partially supported by the EPSRC network grant EP/l02784X/1. The computations presented here were (partially) carried out on the cluster system at the Leibniz University of Hanover, Germany.
\section*{Appendix}
| {
"timestamp": "2014-09-19T02:09:39",
"yymm": "1405",
"arxiv_id": "1405.2073",
"language": "en",
"url": "https://arxiv.org/abs/1405.2073",
"abstract": "We investigate the mathematical properties of the class of Calabi-Yau four-folds recently found in [arXiv:1303.1832]. This class consists of 921,497 configuration matrices which correspond to manifolds that are described as complete intersections in products of projective spaces. For each manifold in the list, we compute the full Hodge diamond as well as additional topological invariants such as Chern classes and intersection numbers. Using this data, we conclude that there are at least 36,779 topologically distinct manifolds in our list. We also study the fibration structure of these manifolds and find that 99.95 percent can be described as elliptic fibrations. In total, we find 50,114,908 elliptic fibrations, demonstrating the multitude of ways in which many manifolds are fibered. A sub-class of 26,088,498 fibrations satisfy necessary conditions for admitting sections. The complete data set can be downloaded atthis http URL.",
"subjects": "High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG)",
"title": "Topological Invariants and Fibration Structure of Complete Intersection Calabi-Yau Four-Folds",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540686581882,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7099046484022913
} |
https://arxiv.org/abs/2302.00468 | LS-category and topological complexity of several families of fibre bundles | In this paper, we study upper bounds for the topological complexity of the total spaces of some classes of fibre bundles. We calculate a tight upper bound for the topological complexity of an $n$-dimensional Klein bottle. We also compute the exact value of the topological complexity of $3$-dimensional Klein bottle. We describe the cohomology rings of several classes of generalized projective product spaces with $\mathbb{Z}_2$-coefficients. Then we study the LS-category and topological complexity of infinite families of generalized projective product spaces. We reckon the exact value of these invariants in many specific cases. We calculate the equivariant LS-category and equivariant topological complexity of several product spaces equipped with $\mathbb{Z}_2$-action. | \section{Introduction}\label{sec:intro}
Lusternik and Schnirelmann \cite{LS-paper1} introduced a homotopy invariant of a topological space, known as the `LS-category', to study some problems in variational calculus. This invariant has been studied widely since the 1940s, see \cite{Fox, LS-paper2, CLOT}. Two decades later, the `genus', a generalization of LS-category, of a fibration was introduced by Schwarz in \cite{Sva}. This overused term `genus' was replaced by `sectional category' in the exposition \cite{James} and subsequent articles on this topic. The notion of LS-category in the presence of a group action was first studied in \cite{eqlscategory}.
In a different context, Farber \cite{FarberTC} introduced the notion of topological complexity to study motion planning in mechanical systems. The configuration space of a mechanical system is the set of all admissible points of the system. Usually, this space has a topology. The continuous motions in the system can be characterized by continuous paths in the space. Topological complexity is a homotopy invariant and a close relative of the LS-category. Later, generalizing the concept of sectional category, Colman and Grant \cite{EqTCGrant} introduced the equivariant sectional category. It was noted that the topological complexity is a particular case of the sectional category. However, an excellent physical interpretation of topological complexity has been mentioned in \cite{FarberTC, Farber}.
Let $G$ be a group and $E, B$ be two $G$-spaces such that $f\colon E \to B$ is an equivariant map. The equivariant sectional category of $f$, denoted by ${\rm secat}_G(f)$, is the least positive integer $k$ such that there exists a $G$-invariant open cover $\{V_1, \ldots, V_k\}$ of $B$ and a $G$-map $\sigma_i \colon V_i \to E$ satisfying $f \sigma_i \simeq_G \iota_{V_i} \colon V_i \hookrightarrow B$ for $i=1, \ldots, k$. If no such $k$ exists, we say ${\rm secat}_G(f)=\infty$. If $f$ is a $G$-fibration, then one can replace the $G$-homotopy $\simeq_G$ by the equality $=$. Moreover, if $G$ is trivial then ${\rm secat}_G(f)$ is called the sectional category of $f$, and in addition, if $E$ is contractible then it becomes the LS-category of $B$, denoted by $\mathrm{cat}(B)$. If $X$ and $Y$ are path connected spaces and $X \times Y$ is completely normal, then Fox \cite{Fox} showed that
\begin{equation}\label{eq_ls_cat_prod}
\mathrm{cat}(X\times Y)\leq \mathrm{cat}(X)+\mathrm{cat}(Y)-1.
\end{equation}
Marzantowicz \cite{eqlscategory} introduced equivariant LS-category, denoted by $\mathrm{cat}_G(B)$ for a $G$-space $B$, generalizing the notion of LS-category. Here $\mathrm{cat}_G(B)$ is the smallest integer $r$ such that $B$ can be covered by $r$ $G$-invariant open subsets $U_1, \dots, U_r$ of $B$ such that each inclusion $U_j\xhookrightarrow{} B$ is $G$-homotopic to an orbit inclusion $Gb_j \hookrightarrow B$ for some $b_j \in B$, for $j=1, \ldots, r$. The sets $U_j$ are called $G$-categorical open subsets of $B$. We note that if $b \in B^G$ and $B$ is a $G$-connected space then ${\rm secat}_G(\iota) = \mathrm{cat}_G(B)$ for the inclusion map $\iota \colon \{b\} \to B$, see \cite[Corollary 4.7]{EqTCGrant}. Also, if $G$ is trivial then $\mathrm{cat}_G(B)=\mathrm{cat}(B)$.
Let $PB := \{\gamma ~|~ \gamma \colon [0,1]\to B ~ \mbox{is a path in}~ B\}$. Consider the compact open topology on $PB$. Then $PB$ is a $G$-space defined by $(g \cdot \gamma) (t) = g\gamma(t)$,
and the fibration $$\pi_B \colon PB \to B \times B$$ defined by $\pi_{B}(\gamma)=(\gamma(0),\gamma(1))$ is a $G$-fibration. The \emph{equivariant topological complexity} of $B$, denoted by $\mathrm{TC}_G(B)$, is defined by $\mathrm{TC}_G(B) = {\rm secat}_G(\pi_B)$, see \cite{EqTCGrant}. A $G$-invariant open cover $\{V_1, \dots, V_k\}$ of $B \times B$ is called an (equivariant) \emph{motion planning cover} for $B$ if it satisfies the definition of the equivariant sectional category of $\pi_B$.
If $G$ is trivial in ${\rm secat}_G(\pi_B)$ then it is called the topological complexity of $B$, and denoted by $\mathrm{TC}(B)$. Farber \cite{FarberTC} proved that the global section of $\pi_B$ cannot be continuous unless the space $B$ is contractible.
The following inequalities show how LS-category and topological complexity are related.
$$\mathrm{cat}(B)\leq \mathrm{TC}(B)\leq \mathrm{cat}(B\times B)\leq 2\mathrm{cat}(B)-1.$$
If $X$ and $Y$ are path connected spaces, then Farber \cite[Theorem 11]{FarberTC} showed that
\begin{equation}\label{eq_tc_prod}
\mathrm{TC}(X\times Y)\leq \mathrm{TC}(X)+\mathrm{TC}(Y)-1.
\end{equation}
We note that a product space is an example of a fibre bundle, see \cite{Steenrod}. The topological complexity of fibrations has been studied in several articles such as \cite{strongeqtc, Farbergrant, Naskar}. We recall some related results here.
Let $G$ be a group acting properly on a space $F$.
In \cite[Section 3]{strongeqtc}, Dranishnikov introduced the strong equivariant topological complexity $\mathrm{TC}_G^{\star}(F)$ of $F$ and showed that
\begin{equation}\label{eq:Dra}
\mathrm{TC}(E)\leq \mathrm{TC}(B)+\mathrm{TC}_G^{\star}(F)-1
\end{equation}
where $E$, $B$ are locally compact metric ANR-spaces and $p \colon E \to B$ is a fibre bundle with fibre $F$ having structure group $G$. On the other hand, Farber and Grant \cite[Lemma 7]{Farbergrant} showed that
\begin{equation}\label{eq:FG}
\mathrm{TC}(E)\leq \mathrm{TC}(F)\cdot \mathrm{cat}(B\times B)
\end{equation}
where $p \colon E \to B$ is a Hurewicz fibration with fibre $F$ and base $B$.
Later, Grant \cite[Theorem 3.1]{Grantfibrations} improved the upper bound in the above inequality.
Therefore, it is natural to ask the following.
\begin{question}\label{que}
Let $p \colon E \to B$ be a fibre bundle with fibre $F$. When can we say that $$ \mathrm{TC}(E)\leq \mathrm{TC}(B)+\mathrm{TC}(F)-1?$$
\end{question}
In this paper, we answer \Cref{que} for a class of fibre bundles, see \Cref{thm:TC-fibrebundle}. Then, as a consequence, we compute tight upper bounds for the topological complexity of a class of generalized projective product spaces and Dold manifolds \cite{sarkargpps}, which includes the $n$-dimensional Klein bottles introduced by Davis in \cite{DDavis}. We also study the LS-category and topological complexity of several generalized projective product spaces, including `Dold manifolds of Grassmann type'. We note that \Cref{thm:TC-fibrebundle} gives much better upper bounds than \eqref{eq:Dra} and \eqref{eq:FG} for many fibre bundles.
We recall the definition of generalized projective product spaces following \cite{sarkargpps}. Let $M$ and $N$ be manifolds with involutions $\tau \colon M \to M$ and $\sigma \colon N \to N$ such that $\sigma$ is fixed point free. Define the identification space:
\begin{equation}\label{eq_gen_dman}
X(M, N) :=\displaystyle\frac{M \times N}{(x,y)\sim (\tau(x), \sigma(y))}.
\end{equation}
Then $X(M, N)$ is a manifold of dimension $\dim(M) + \dim(N)$ and there is a fibre bundle
\begin{equation}\label{eq:indc_fib_bndl}
M \xhookrightarrow{} X(M, N) \stackrel{\mathfrak{p}}\longrightarrow N/\sigma
\end{equation}
defined by $\mathfrak{p}([(x,y)])=[y]$, where $N/\sigma$ is the orbit space induced by the involution $\sigma$. Note that this class of manifolds contains all projective product spaces \cite{Davis} and Dold manifolds \cite{Dold}. Note that $M, N$ in \eqref{eq:indc_fib_bndl} can be replaced by topological spaces with similar involutions. However, in this paper, we work in the manifold category unless otherwise mentioned.
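For example (with notation as in \eqref{eq_gen_dman}), taking $M=\mathbb{C}P^{n}$ with the conjugation involution $\tau$ and $N=S^{m}$ with the antipodal involution $\sigma$ gives
\[X(\mathbb{C}P^{n}, S^{m})=\frac{\mathbb{C}P^{n}\times S^{m}}{(z,x)\sim(\bar{z},-x)},\]
which is the classical Dold manifold of \cite{Dold}, and \eqref{eq:indc_fib_bndl} becomes the familiar fibre bundle $\mathbb{C}P^{n}\hookrightarrow X(\mathbb{C}P^{n}, S^{m})\to \mathbb{R}P^{m}$.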
This article is organized as follows.
In \Cref{sec: TC of fb}, we give a counter-example of \cite[Theorem 3.4]{Naskar}. Then, we obtain an upper bound for the topological complexity of the total space of a class of fibre bundles as the sum of the topological complexities of the base and the fiber, see Theorem \ref{thm:TC-fibrebundle}. We recall the result \cite[Theorem 2.6]{Naskar} which is used in the later Sections to compute LS-category of several fibre bundles. We discuss some applications.
In \Cref{sec:TCKn}, we recall the definition of an $n$-dimensional Klein bottle $K_n$ for $2 \leq n \in \Z$. Then we show that $\mathrm{TC} (K_2)=5$ and $\mathrm{TC} (K_3)=6$. For higher $n$, we calculate almost tight upper bounds on the topological complexity. Here, the computation $\mathrm{TC} (K_2)=5$ gives an alternative proof for the topological complexity of the classical $2$-dimensional Klein bottle.
We begin \Cref{sec: Cat TC some gpps} by recalling the definition of projective product spaces and some generalized projective product spaces. Then we compute the mod-$2$ cohomology ring of certain generalized projective product spaces, see \Cref{thm: cohoringYM}. This result extends some previous results in \cite{Davis} and \cite{sarkargpps}.
Then, we compute bounds (probably tight) as well as exact values for the LS-category and topological complexity of infinite families of generalized projective product spaces.
In \Cref{sec: Cat TC some gen Dold}, we introduce a new class of generalized projective product spaces called `Dold manifolds of Grassmann type' whose topological properties have not been studied till now. Then we describe their cohomology ring and compute their LS-category, see \Cref{thm_lscat_gras}. We discuss some lower and upper bounds for the topological complexity of these spaces, see \Cref{prop_tc_grdn} and \Cref{thm:TC-cpn-pn}.
In \Cref{sec: eq Cat TC some gen Dold}, we study the equivariant LS-category and equivariant topological complexity of several $\Z_2$-spaces related to the generalized projective product spaces and generalized Dold manifolds. We compute them for several cases, see \Cref{prop:eqi_lstc} and \Cref{ex:eqi_lstc}.
\section{LS-category and topological complexity of some fiber bundles}\label{sec: TC of fb}
An upper bound for the topological complexity of the total space of a fibre bundle has been studied in \cite{Farbergrant, Grantfibrations} as the product of the topological complexities of the base and the fibre, and in \cite{Naskar} as the sum of the topological complexities of the base and the fibre (under a hypothesis). The authors of this paper found that the arguments in the proof of \cite[Theorem 3.4]{Naskar} are inadequate. However, importantly, this observation does not affect the main goals (computing the LS-category and topological complexity of Dold manifolds) of the paper \cite{Naskar}.
We begin this section with a counter-example to \cite[Theorem 3.4]{Naskar} observed by the first author of this paper. Then, we study upper bounds for the topological complexity of the total spaces of a class of fibre bundles. We compute an upper bound for the LS-category and topological complexity of manifolds defined in \eqref{eq_gen_dman}.
\begin{example}[{Counter-example to \cite[Theorem 3.4]{Naskar}}] Let $K_2$ be the classical $2$-dimensional Klein bottle. Note that $K_2$ can be considered as the total space of a fibre bundle over $S^1$ with fibre $S^1$, see \cite[Chapter 1]{Steenrod}. We now construct an open cover for $S^1\times S^1$ with three open sets satisfying the hypothesis of \cite[Theorem 3.4]{Naskar}. Let $a_i\in S^1$ for $1\leq i\leq 3$ be three distinct points on the circle. We write $a_i$ instead of $\{a_i\}$ for short.
Suppose that $$R_i:=S^1\setminus a_i, \mbox{ and } V_i:=R_i\times R_i, \mbox{ for } 1\leq i\leq 3 .$$ Note that $S^1= \cup_{i=1}^{3}R_i$ and $S^1\times S^1= \cup_{i=1}^{3}V_i$.
Let $(x,y)\in V_i$. Then $x\neq a_i$ and $y\neq a_i$. Therefore there is a unique
geodesic $\gamma_{(x,y)}^i$ from $x$ to $y$ which does not contain $a_i$. Thus the path $\gamma_{(x,y)}^i$ lies inside $R_i$.
Therefore, for $1\leq i\leq 3$, we can define continuous sections $$\sigma_i \colon V_i \to PS^1 \mbox{ by } \sigma_i(x,y):=\gamma_{(x,y)}^i$$ of $\pi_{S^1} \colon PS^1\to S^1\times S^1$.
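Explicitly, one may describe these sections as follows (taking $a_1=1$ for concreteness): write $x=e^{i\theta}$ and $y=e^{i\varphi}$ with $\theta,\varphi\in(0,2\pi)$, and set
\[\sigma_1(x,y)(t)=e^{i\left((1-t)\theta+t\varphi\right)},\qquad t\in[0,1].\]
Since $(1-t)\theta+t\varphi\in(0,2\pi)$, this path stays in $R_1$ and depends continuously on $(x,y)\in V_1$.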
Note that $\sigma_i(V_i)\subseteq PR_i$. Since each $V_j$ is contractible, we have local trivialization
\[h_j \colon V_j\times(S^1\times S^1)\to (p\times p)^{-1}(V_j)\] for the bundle $K_2\times K_2\stackrel{p\times p}{\longrightarrow} S^1\times S^1$ with fibre $S^1\times S^1$ for $j=1,2,3$.
The above discussion shows that the hypotheses of \cite[Theorem 3.4]{Naskar} are satisfied for this fibre bundle. From the conclusion of \cite[Theorem 3.4]{Naskar}, we would get $\mathrm{TC}(K_2)\leq 2+3-1=4$, which is not possible, since $\mathrm{TC}(K_2)=5$, see \cite{TCKleinbottle}.
\end{example}
\begin{theorem}\label{thm:TC-fibrebundle}
Let $F\xhookrightarrow {}E\stackrel{p}{\longrightarrow} B$ be a fibre bundle where $F\times F$, $E \times E$ and $B \times B$ are completely normal spaces. Let $\{U_1,\dots,U_m\}$ and $\{V_1,\dots,V_n\}$ be motion planning covers of $B\times B$ and $F\times F$, respectively. Let \[h_i \colon (p\times p)^{-1}(U_i)\to U_i \times F \times F\] be a local trivialization of $F\times F\xhookrightarrow {}E\times E\stackrel{p\times p}{\longrightarrow} B\times B$ such that $(U_i\cap U_j)\times V_k$ is invariant under the transition function $h_j h_i^{-1}$ for all $1\leq i<j\leq m$ and $1\leq k\leq n$. Then $\mathrm{TC}(E)\leq m+n-1$.
\end{theorem}
\begin{proof}
Let $X_i:=U_1\cup \dots \cup U_i$ and $Y_j:=V_1\cup \dots \cup V_j$ for $1\leq i \leq m$ and $1\leq j\leq n$. Then, by \cite[Proposition 4.12]{Farber}, we have a section (as a map) $s \colon B\times B \to PB$ of $\pi_B$ and $s'\colon F\times F\to PF$ of $\pi_F$ with $$\emptyset=X_0\subseteq X_1\subseteq \dots \subseteq X_m=B\times B \mbox{ and } \emptyset=Y_0\subseteq Y_1\subseteq \dots \subseteq Y_n=F\times F $$ such that $s|_{U_{i+1}-U_{i}}$ and $s'|_{V_{j+1}-V_{j}}$ are continuous for all $i=0,1,\dots,m-1$ and $j=0,1,\dots,n-1$. Now we define a sequence of open sets which converges to $E\times E$ using $X_i$'s and $Y_j$'s. For each $\ell \in \{1,\dots, m+n-1\}$, define
\[Q_\ell:=Q_{\ell-1}\bigcup \bigg(\bigcup_{r=1}^{\ell}h_{r}^{-1}((X_r-X_{r-1})\times (Y_{\ell-r+1}-Y_{\ell-r})) \bigg).\]
Observe that $\emptyset=Q_0\subseteq Q_1\subseteq \dots \subseteq Q_{m+n-1}=E\times E.$
Here we use the convention that $X_i:=B\times B$ for $i>m$ and $Y_j:=F\times F$ for $j>n$, so that the corresponding differences $X_i-X_{i-1}$ and $Y_j-Y_{j-1}$ are empty whenever the index exceeds $m$ or $n$, respectively; in particular, only $h_1,\dots,h_m$ are ever applied to non-empty sets.
Note that all $Q_\ell$'s are open subsets of $E \times E$ and
\begin{equation}\label{disjunion}
Q_{\ell+1}-Q_\ell= \bigcup_{r=1}^{\ell+1}h_r^{-1}((X_r-X_{r-1})\times (Y_{\ell+2-r}-Y_{\ell+1-r})).
\end{equation}
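For instance, unwinding the definition for small $\ell$ (with $Q_0=\emptyset$), we get
\[Q_1=h_1^{-1}(U_1\times V_1), \qquad Q_2=Q_1\cup h_1^{-1}\big(U_1\times (Y_2-Y_1)\big)\cup h_2^{-1}\big((X_2-X_1)\times V_1\big),\]
since $X_1=U_1$, $Y_1=V_1$ and $X_0=Y_0=\emptyset$.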
Now our aim is to define a section $s_E \colon E\times E\to PE$ of $\pi_E$ such that $s_E|_{Q_{\ell+1}-Q_{\ell}}$ is continuous for all $\ell=0,1,\dots, m+n-2$.
Let $(e_1,e_2)\in E\times E$. We want to define a path between $e_1$ and $e_2$. Note that $(p(e_1),p(e_2))\in B\times B$. Recall that there is a section $s \colon B\times B \to PB$ of $\pi_B$. Therefore, we have a path $s(p(e_1),p(e_2))$ between $p(e_1)$ and $p(e_2)$. By the path lifting property, there exists a lift of $s(p(e_1),p(e_2))$ starting at $e_1$; we denote this lift by $\tilde{s}$. Notice that $\tilde{s}(1)\in p^{-1}(p(e_2))$. Again recall that we have a section $s' \colon F\times F\to PF$ of $\pi_F$. Therefore, identifying the fibre over $p(e_2)$ with $F$ via a local trivialization, we have a path $s'(\tilde{s}(1),e_2)$. Now we define $s_E(e_1,e_2)$ as the concatenation of the paths $\tilde{s}$ and $s'(\tilde{s}(1),e_2)$, \emph{i.e.},
\[s_E(e_1,e_2)(t)=
\begin{cases}
\tilde{s}(2t) & 0\leq t\leq 1/2\\
s'(\tilde{s}(1),e_2)(2t-1)& 1/2\leq t\leq 1
\end{cases}.\]
Next we show that $s_E|_{Q_{\ell+1}-Q_{\ell}}$ is continuous for all $\ell=0,1,\dots, m+n-2$.
First we use complete normality and the invariance condition on the transition functions to show that the right hand side of \eqref{disjunion} is in fact a disjoint union.
For each $\ell \in \{0,1,\dots, m+n-1\}$, we write $A^\ell_r:=(X_r-X_{r-1})\times (Y_{\ell+2-r}-Y_{\ell+1-r})$. We now show that $h_r^{-1}(A_r^\ell)\cap \overline{h_{r'}^{-1}(A_{r'}^\ell)}=\emptyset$ for $1\leq r<r'\leq \ell+1$. On the contrary, suppose $x \in h_r^{-1}(A_r^\ell)\cap \overline{h_{r'}^{-1}(A_{r'}^\ell)}$. Since $h_{r'}$ is a homeomorphism, $\overline{h_{r'}^{-1}(S)}=h_{r'}^{-1}(\overline{S})$ for any subset $S$. Therefore, $h_{r'}(x)\in h_{r'}(h_r^{-1}(A_r^\ell))\cap \overline{A_{r'}^\ell}$.
Observe that the invariance condition on the transition functions gives $h_{r'}(h_r^{-1}(A_r^\ell))=A_r^\ell$.
This implies $h_{r'}(x)\in A_r^\ell \cap \overline{A_{r'}^\ell}$. Let $h_{r'}(x)=(a_{r'},b_{r'})\in A_r^\ell$, \emph{i.e.}, $a_{r'}\in X_r-X_{r-1}$.
Now we note that $$(a_{r'},b_{r'})\in \overline{A_{r'}^\ell}\subseteq \overline{(X_{r'}-X_{r'-1})}\times \overline{(Y_{\ell+2-r'}-Y_{\ell+1-r'})}.$$ Since $X_{r'-1}$ is open, its complement $X_{r'-1}^c$ is closed, and $X_{r'}-X_{r'-1}\subseteq X_{r'-1}^c$ gives $\overline{X_{r'}-X_{r'-1}}\subseteq X_{r'-1}^c$. Hence $a_{r'}\in X_{r'-1}^c$. This is a contradiction to the fact that $a_{r'}\in X_{r}-X_{r-1}\subseteq X_{r'-1}$. Similarly, we can show that $\overline{h_r^{-1}(A_r^\ell)}\cap h_{r'}^{-1}(A_{r'}^\ell)=\emptyset$ for $1\leq r<r'\leq \ell+1$.
Therefore, complete normality shows that the right hand side of \eqref{disjunion} is a disjoint union.
Now it is enough to show that the restriction of the section $s_E \colon E\times E\to PE$ to each of the sets $h_r^{-1}(A_r^\ell)$ is continuous. Note that, in the trivialization $h_r$, the restriction of $s_E$ to $(p\times p)^{-1}(U_r)$ looks like $s\times s'$.
Therefore, $s_E|_{h_r^{-1}(A_r^\ell)}$ is continuous, since $$h_r^{-1}(A_r^\ell)\subseteq h_r^{-1}((U_r-U_{r-1})\times (V_{\ell+2-r}-V_{\ell+1-r})).$$
Finally, we have a section $s_E \colon E\times E\to PE$ of $\pi_E$ and an increasing sequence of $m+n-1$ open sets $Q_1\subseteq Q_2\subseteq \dots \subseteq Q_{m+n-1}=E\times E$ with the property that $s_E|_{Q_{\ell+1}-Q_{\ell}}$ is continuous for each $\ell = 0,1,\dots, m+n-2$. Note that here $s_E$ is just a set map. Now the proof follows from \cite[Proposition 4.12]{Farber}.
\end{proof}
Naskar and the second author \cite{Naskar} discussed an upper bound for the LS-category of the total spaces of certain fibre bundles. We recall that result in the following.
\begin{theorem}[{\cite[Theorem 2.6]{Naskar}}]\label{thm:cat}
Let $F\xhookrightarrow{} E\stackrel{p}{\longrightarrow} B$ be a fibre bundle, where $F$, $E$ and $B$ are path-connected, Hausdorff, second countable topological spaces. Suppose that $E$ is completely normal.
Let $\{U_1,\dots,U_{m} \}$ be a categorical cover of $B$
such that $\phi_i \colon \overline{p^{-1}(U_i)}\to \overline{U_i}\times F$ is a homeomorphism for $1\leq i\leq m$. Moreover, there is a categorical cover $\{V_1,\dots,V_{n}\}$ of $F$ such that $(U_{i'}\cap U_i)\times V_j$ are invariant under $\phi_{i'}\phi_{i}^{-1}$ for all $\{i,i'\}\subseteq \{1,\dots,m\}$ and $j\in\{1,\dots,n\}$. Then $\mathrm{cat}(E)\leq \mathrm{cat}(F)+\mathrm{cat}(B)-1$.
\end{theorem}
The following result is a consequence of \Cref{thm:cat}.
\begin{proposition}\label{prop cat}
Let $X(M, N)$ be a generalized projective product space as defined in \eqref{eq_gen_dman}. Let $\{V_1,\dots, V_q\}$ be a $\left<\tau \right>$-invariant categorical cover of $M$. Then $\mathrm{cat}(X(M,N)) \leq q +\mathrm{cat}(N/\sigma)-1$.
\end{proposition}
\begin{proof}
Let $\mathrm{cat}(N/\sigma)=r$ and $\{U_1,\dots,U_r\}$ be a categorical cover of $N/\sigma$. Let $\pi \colon N \to N/\sigma$ be the orbit map, which is a double cover. Therefore, for $1\leq i\leq r$, we have the following. \[\mathfrak{p}^{-1}(U_i)=\frac{M\times \pi^{-1}(U_i) }{(x,y)\sim (\tau(x),\sigma(y))}\cong U_i\times M.\]
Let $\phi_i \colon \mathfrak{p}^{-1}(U_i)\to U_i\times M$ be this homeomorphism. So, it is a local trivialization of $M \xhookrightarrow{} X(M, N) \stackrel{\mathfrak{p}}\longrightarrow N/\sigma$ for $i=1, \ldots, r$. Now $\phi_{i}^{-1}(([y],x))=[(x,y)]\in M\times \pi^{-1}(U_i)/\sim$. Since the structure group for the bundle $M \hookrightarrow X(M, N) \xrightarrow{\mathfrak{p}} N/\sigma$ is $\Z_2 \cong \left < \tau \right>$, either
\[\phi_{j}\circ\phi_{i}^{-1}(([y],x))=
([y],x) \mbox{ or } \phi_{j}\circ\phi_{i}^{-1}(([y],x)) = ([y],\tau(x)).\]
Thus, $(U_i\cap U_j)\times V_k$ is invariant under $\phi_{j}\circ\phi_{i}^{-1}$ for all $1\leq i,j\leq r$ and $1\leq k\leq q$. Then, the proposition follows from \Cref{thm:cat}, since $M$, $N$ and $X(M,N)$ are manifolds.
\end{proof}
Let $\left<\tau \right>$ be the group generated by the involution $\tau$ on $M$. So, $\left<\tau \right>$ acts on $M$ via $\tau$. Then we have the following as an application of \Cref{thm:TC-fibrebundle}.
\begin{proposition}\label{cor:TC of gpps}
Let $X(M, N)$ be a generalized projective product space as defined in \eqref{eq_gen_dman}. Let $\{V_1,\dots,V_q\}$ be an $(\left<\tau \right>\times \left< \tau \right>)$-invariant motion planning cover of $M$.
Then \[\mathrm{TC}(X(M,N))\leq q + \mathrm{cat}(N/\sigma \times N/\sigma)-1.\] In particular, \[\mathrm{TC}(X(M,N))\leq \mathrm{TC}_{\left<\tau \right>}(M)+\mathrm{cat}(N/\sigma \times N/\sigma)-1.\]
\end{proposition}
\begin{proof}
The proof is similar to the proof of \Cref{prop cat}, and we briefly indicate the arguments. Let $U_1, \ldots, U_m$ be a categorical cover of $N/\sigma \times N/\sigma$. So, each inclusion $U_i \hookrightarrow N/\sigma \times N/\sigma$ is null-homotopic, and hence $U_1, \ldots, U_m$ is also a motion planning cover of $N/\sigma \times N/\sigma$. Therefore, there is a trivialization $h_i \colon (\mathfrak{p} \times \mathfrak{p})^{-1}(U_i) \to U_i \times (M \times M)$ for $i=1, \ldots, m$.
Since, $\{V_1,\dots,V_q\}$ is an $(\left<\tau \right>\times \left< \tau \right>)$-invariant motion planning cover of $M$,
the set $(U_i \cap U_j) \times V_\ell$ is invariant under the transition function $h_j h_i^{-1}$ for $1\leq i, j \leq m$ and $1 \leq \ell \leq q$.
Thus $U_i$'s and $V_\ell$'s satisfy the hypothesis of \Cref{thm:TC-fibrebundle}. Hence, the conclusion follows.
\end{proof}
\begin{remark}
\begin{enumerate}
\item The conclusions of \Cref{thm:TC-fibrebundle} and \cite[Theorem 3.4]{Naskar} are the same. However, the hypotheses in those theorems are different.
\item The arguments of the proof of \Cref{cor:TC of gpps} and \Cref{prop cat} can be applied if $M, N$ are metric spaces.
\end{enumerate}
\end{remark}
\section{Topological complexity of an $n$-dimensional Klein bottle}\label{sec:TCKn}
Consider $S^1=\{z \in \C ~ |~ z\bar{z}=1\}$, the $n$-dimensional torus $(S^1)^n$, and the identification $\sim$ on $(S^1)^n$ defined by $(z_{1},\dots,z_{n-1},z_{n})\sim( \bar{z}_{1},\dots,\bar{z}_{n-1},-z_{n})$. Then the identification space $K_n :=(S^1)^n/\sim$ is called an $n$-dimensional Klein bottle in \cite{DDavis}.
Observe that $K_n$ is a generalized projective product space, where $\tau$ is the involution on $M=(S^1)^{n-1}$ generated by conjugation on each component and $\sigma$ is the antipodal involution on $S^1$.
In this section, we compute an upper bound for $\mathrm{TC}(K_n)$ and show that this bound coincides with $\mathrm{TC}(K_n)$ for $n=2 $ and $ 3$. Therefore, in some sense, we give an alternative proof for the computation of the topological complexity of the classical $2$-dimensional Klein bottle.
\begin{theorem}\label{TCKn} We have the following bounds for $\mathrm{TC}(K_n)$.
\[n+3\leq \mathrm{TC}(K_n)\leq
\begin{cases}
3k+3 & \text{ if } n=2k+1\\
3k+2& \text{ if } n=2k.
\end{cases}\] In particular, $\mathrm{TC}(K_2)=5$ and $\mathrm{TC}(K_3)=6$.
\end{theorem}
\begin{proof}
Note that we have a fibre bundle \[(S^1)^{n-1}\xhookrightarrow{} K_n\xrightarrow{\mathfrak{p}_n} S^1/\sigma =\mathbb{R}P^1\] as in \eqref{eq:indc_fib_bndl} for each $n \geq 2$, and it induces a product bundle \[(S^1)^{n-1}\times (S^1)^{n-1} \xhookrightarrow{} K_n\times K_n \xrightarrow{\mathfrak{p}_n \times \mathfrak{p}_n} \mathbb{R}P^1\times \R P^1.\]
Recall that the action of $\Z_{2}$ on $(S^1)^{n-1}$ is defined by the conjugation on each component. So, $(\Z_{2}\times \Z_2)$ acts on $(S^1)^{n-1}\times (S^1)^{n-1}$ diagonally.
First we prove the claim for $n=2 $ and $3$. Note that $\R P^1 \cong S^1$. Then, using these cases, we get an upper bound for the higher-dimensional cases.
\smallskip
\vspace{1mm}
\textbf{Case 1}: \emph{Suppose $n=2$.}
Note that $K_2$ is the usual $2$-dimensional Klein bottle. Consider the fibre bundles $S^1\xhookrightarrow{} K_2 \xrightarrow{\mathfrak{p}_2} \mathbb{R}P^1$ and $S^1\times S^1\xhookrightarrow{}K_2\times K_2 \xrightarrow{\mathfrak{p}_2 \times \mathfrak{p}_2} \R P^1\times \R P^1$.
Let
$A :=\{e^{i\theta}\in S^1 \mid \theta \neq \pm\pi/2 \}$.
Define \[U_1:= (S^1\setminus \{1\})\times (S^1\setminus \{1\}), \quad U_2:= (S^1\setminus \{-1\})\times (S^1\setminus \{-1\}), ~\mbox{ and }~
U_3:=A\times A.\]
Note that all the $U_i$'s are $(\Z_2\times \Z_2)$-invariant and cover $S^1\times S^1$. Moreover, $U_3$ is a disjoint union of contractible subsets and $U_1$, $U_2$ are contractible. Thus, the open cover $\{U_1,U_2,U_3\}$ forms a categorical open cover of $S^1\times S^1$. So, $\{U_1, U_2, U_3\}$ is a motion planning cover for $S^1 \times S^1$.
Therefore, \Cref{cor:TC of gpps} gives $\mathrm{TC}(K_2)\leq \mathrm{cat}(\R P^1\times \R P^1)+3-1=5$. Recall that \cite[Proposition 5.2]{DDavis} gives $5\leq \mathrm{TC}(K_2)$. Hence, $\mathrm{TC}(K_2)=5$.
\smallskip
\vspace{1mm}
\textbf{Case 2}: \emph{Suppose $n=3$.}
In this case, we have the fibre bundles $S^1\times S^1\xhookrightarrow{} K_3 \xrightarrow{\mathfrak{p}_3} \mathbb{R}P^1$ and $(S^1)^2\times (S^1)^2\xhookrightarrow{}K_3\times K_3 \xrightarrow{\mathfrak{p}_3 \times \mathfrak{p}_3} \R P^1\times \R P^1$. Let $$V_1:=\{(z_1,z_2)\in S^1\times S^1 \mid z_1\neq -z_2 \} \mbox{ and } V_2:=\{(z_1,z_2)\in S^1\times S^1 \mid z_1\neq z_2\}.$$ Note that these form Farber's motion planning cover of $S^1$. Observe that both $V_1$ and $V_2$ are $\Z_2$-invariant open subsets of $(S^1)^2$. For each $1\leq i, j\leq 2$, the set $V_i\times V_j$ is a $(\Z_2\times \Z_2)$-invariant open set and $\{V_i \times V_j\}_{i,j=1}^2$ covers $(S^1)^2\times (S^1)^2$. Thus, the product of local continuous sections of $\pi_{S^1}$ on $V_i$ and $V_j$ gives a local continuous section of $\pi_{(S^1)^2}$ on $V_i \times V_j$ for $1\leq i,j\leq 2$. Therefore, $\mathrm{TC}(K_3)\leq \mathrm{cat}(\R P^1\times \R P^1)+4-1=6$. Now \cite[Proposition 5.2]{DDavis} gives $6\leq \mathrm{TC}(K_3)$. Hence, we have $\mathrm{TC}(K_3)=6$.
\smallskip
\vspace{1mm}
\textbf{Case 3}: \emph{Suppose $n=2k+1$ where $k\geq 2$.}
Note that we have to find $(\Z_2\times \Z_2)$-invariant open subsets of $(S^1)^{2k}\times (S^1)^{2k}$.
We write
\[\begin{split}(S^1)^{2k}\times (S^1)^{2k}&=(X_1\times \cdots \times X_k)\times (Y_1\times \cdots \times Y_k)\\
&= X_1\times Y_1\times\cdots \times X_k\times Y_k,
\end{split}
\]
where $X_i=(S^1)^2=Y_i$ for all $1\leq i\leq k$.
Using {\bf Case 2} and the proof of the (generalized) product formula for the topological complexity \cite[Theorem 4.2]{EqTCGrant}, we get a motion planning cover for $(S^1)^{2k}\times (S^1)^{2k}$ consisting of $4k-(k-1)=3k+1$ many $(\Z_2\times \Z_2)$-invariant open subsets. Therefore, \Cref{cor:TC of gpps} and \cite[Proposition 5.2]{DDavis} give $2k+4\leq\mathrm{TC}(K_{2k+1})\leq 3k+3$.
\smallskip
\vspace{1mm}
\textbf{Case 4}: \emph{Suppose $n=2k$ where $k\geq 2$.}
We write
\[\begin{split}(S^1)^{2k-1}\times (S^1)^{2k-1}&=(X\times Y)\times (X\times Y )\\
&= (X\times X)\times (Y\times Y),
\end{split}
\]
where $X=(S^1)^{2(k-1)}$ and $Y=S^1$. Using {\bf Case 3} and the proof of the product formula for the topological complexity \eqref{eq_tc_prod}, we get a motion planning cover for ${(S^1)^{2k-1}\times (S^1)^{2k-1}}$ consisting of ${3k-2+3-1=3k}$ many $(\Z_2\times \Z_2)$-invariant open subsets. Therefore, \Cref{cor:TC of gpps} and \cite[Proposition 5.2]{DDavis} give $2k+3\leq\mathrm{TC}(K_{2k})\leq 3k+2$.
\end{proof}
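For instance, for the first values not settled exactly by \Cref{TCKn}, the theorem gives
\[7\leq \mathrm{TC}(K_4)\leq 8 \qquad \text{and} \qquad 8\leq \mathrm{TC}(K_5)\leq 9.\]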
\section{LS-category and topological complexity of some generalized projective product spaces}\label{sec: Cat TC some gpps}
Let $S^n :=\{(u_1, \ldots, u_{n+1})\in \R^{n+1} ~|~ u_1^2 + \cdots + u_{n+1}^2 = 1\}$.
Davis \cite{Davis} introduced the projective product spaces and studied some of their topological properties. Let $(n_1,\dots,n_r)$ be a tuple of non-negative integers. Define
\begin{equation}\label{eq_ppsp}
P(n_1, \ldots, n_r):= \frac{S^{n_1}\times \cdots \times S^{n_r}}{({\bf x}_1, \dots, {\bf x}_{r})\sim (-{\bf x}_1, \dots, -{\bf x}_{r})}.
\end{equation}
Then $P(n_1, \ldots, n_r)$ is called a projective product space corresponding to the tuple $(n_1, \ldots, n_r)$.
Now, consider an involution $\tau_j$ on $S^{n_j} \subset \R^{n_j+1}$ defined as follows:
\begin{equation}\label{eq: invo prodsphere}
\tau_j( (y_1, \ldots, y_{p_j}, y_{p_j+1}, \ldots, y_{n_j+1}) ):= (y_1, \ldots, y_{p_j}, -y_{p_j+1}, \ldots, -y_{n_j+1}),
\end{equation}
for some $0 \leq p_j \leq n_j$ and $1\leq j\leq r$.
Then we have a $\Z_2$-action on the product $S^{n_1}\times \dots \times S^{n_{r}}$ via the product of involutions $\tau_1\times \dots \times \tau_{r}$. Note that if $p_j=0$ then $\tau_j$ acts antipodally on $S^{n_j}$, and if $p_j=n_j$, then $\tau_j$ is the reflection across the hyperplane $y_{n_j+1}=0$ in $\R^{n_j+1}$.
Let $N$ be a topological space with a free involution $\sigma$. Consider the identification space:
\begin{equation}\label{eq:prodsphere_N}
X((n_1, p_1), \ldots, (n_r,p_r), N) :=\frac{ S^{n_1}\times \cdots \times S^{n_r}\times N}{({\bf x}_1, \dots, {\bf x}_{r}, y)\sim (\tau_1({\bf x}_1),\dots ,\tau_r({\bf x}_r), \sigma(y))},
\end{equation}
where $\tau_j$ is the involution defined in \eqref{eq: invo prodsphere} for $1\leq j\leq r$. So, $X((n_1, p_1), \ldots, (n_r,p_r), N)$ is a generalized projective product space.
In this section, we study the LS-category and topological complexity of these spaces for several $N$. We note that,
in particular, if $p_j=0$ for $j=1, \ldots, r$ and $N=S^{n_{r+1}}$ with the involution $\sigma$ given by the antipodal action, then $X((n_1, 0), \ldots, (n_r, 0), N)$ is the projective product space $P(n_1, \ldots, n_{r+1})$.
We consider the trivial sphere bundle $$ S^{n_1}\times \cdots \times S^{n_{i-1}} \times S^{n_i} \times N \to S^{n_1}\times \cdots \times S^{n_{i-1}}\times N$$
for $i=2, \ldots, r$. This induces the sphere bundle
\[ S^{n_i} \xhookrightarrow{} X((n_1, p_1), \ldots, (n_i,p_i), N)\xrightarrow{q_i} X((n_1, p_1), \ldots, (n_{i-1},p_{i-1}), N)\] for $i=2, \ldots, r$.
Similarly, we can have a sphere bundle
\begin{equation}\label{eq_sp_bundle}
S^{n_1} \xhookrightarrow{} X((n_1,p_1), N)\xrightarrow{q_1} N/\sigma.
\end{equation}
Let $\alpha$ be the (first) Stiefel-Whitney class of the line bundle associated to the principal $\Z_2$-bundle $N \to N/\sigma$. We denote an exterior algebra by $\Lambda(-)$ and the total Steenrod square by ${\rm Sq} = \sum_{n \geq 0} {\rm Sq}^n$.
\begin{theorem}\label{thm: cohoringYM}
Let $n_1 \leq \cdots \leq n_r$. Then $H^*(X((n_1, p_1), \ldots, (n_r,p_r), N);\Z_2)$ is isomorphic as a graded $\Z_2$-algebra to
\begin{align*}
H^*(N/\sigma, \Z_2) \otimes \Lambda(\beta_1,\ldots, \beta_r),
\end{align*} where $|\beta_j|=n_j,$ ${\rm Sq}(\beta_j)=(1+\alpha)^{n_j+1-p_j} \beta_j$, and $p_j \geq 1$ for $1 \leq j \leq r$.
\end{theorem}
\begin{proof}
Consider the trivial vector bundle $\R^{n_1+1} \times N \to N$. This induces an $(n_1+1)$-vector bundle $$\frac{\R^{n_1+1} \times N} {((x_1, \ldots, x_{n_1}, x_{n_1+1}, y) \sim (\tau_1(x_1, \ldots, x_{n_1}, x_{n_1+1}), \sigma(y)))} \xrightarrow{\eta_1} \frac{N}{(y \sim \sigma(y))} = N/\sigma.$$
Then we have the cofibre sequence $S(\eta_1) \to D(\eta_1) \to {\rm Th}(\eta_1)$ where $S(\eta_1)$ and $D(\eta_1)$ are the total spaces of the sphere bundle and disk bundle of $\eta_1$, and ${\rm Th}(\eta_1)$ is the Thom space of $\eta_1$. So $S(\eta_1) \cong X((n_1,p_1), N)$ and $D(\eta_1) \simeq N/\sigma$. Thus we have the following long exact sequence
\begin{equation}\label{eq_split}
\cdots \to H^{\ast}({\rm Th} (\eta_1)) \to H^{\ast}(N/\sigma) \to H^{\ast}(X((n_1,p_1), N)) \to H^{\ast+ 1}({\rm Th}(\eta_1)) \to \cdots .
\end{equation}
The point $(1, 0, \ldots, 0) \in S^{n_1}$ is a fixed point of the involution $\tau_1$ on $S^{n_1}$. Thus the map $N \to S^{n_1} \times N$ defined by $y \mapsto ((1, 0, \ldots, 0), y)$ induces a section $s \colon N/\sigma \to X((n_1,p_1), N)$ of the sphere bundle in \eqref{eq_sp_bundle}, defined by $[y] \mapsto [((1, 0, \ldots, 0), y)]$. Therefore, \eqref{eq_split}
is a split exact sequence, that is;
$$H^{\ast}(X((n_1,p_1), N)) \cong H^{\ast}(N/\sigma) \oplus H^{\ast+ 1}({\rm Th}(\eta_1)).$$ Let $\beta_1$ be the image of the Thom class (in $H^{n_1+1}({\rm Th} (\eta_1))$) under this isomorphism.
Then, by the Thom isomorphism, we have $$H^{\ast}(X((n_1,p_1), N)) \cong H^{\ast}(N/\sigma) \oplus H^{\ast}(N/\sigma)\cdot \beta_1.$$ Note that, as a real bundle, $\eta_1$ is isomorphic to the sum of $p_1$ copies of the trivial line bundle $\epsilon$ and $(n_1+1-p_1)$ copies of the line bundle $\zeta_1$ associated with the double cover $N \to N/\sigma$. Therefore, the total Steenrod square and the total Stiefel-Whitney class are related as follows in our setting, using the arguments in \cite[Page 94]{MiSt}.
\begin{align*}
{\rm Sq}(\beta_1)&=W(p_1\epsilon \oplus (n_1+1-p_1)\zeta_1) \beta_1 \\ &= (1+\alpha)^{n_1+1-p_1}\beta_1
\end{align*}
where $\alpha = w_1(\zeta_1)$ and $|\beta_1| = n_1$. So, $\beta^2_1 = \binom{n_1+1-p_1}{n_1} \alpha^{n_1} \beta_1$ if $p_1=1$ and $\beta_1^2 = 0$ if $p_1 > 1$.
One can prove the claim for $r>1$ inductively using similar arguments and the naturality of the Stiefel-Whitney classes.
\end{proof}
\begin{remark} We observe the following things.
\begin{enumerate}
\item When $r=1$ and $n_r=1$, then \Cref{thm: cohoringYM} is \cite[Lemma 2.1]{DDavis}.
\item Suppose $m_1\leq \dots\leq m_k \leq n_1 \leq \cdots \leq n_r$ and $N=S^{m_1}\times \dots \times S^{m_k}$ with the involution $\sigma$ generated by the antipodal on each sphere $S^{m_j}$ for $1\leq j\leq k$. Then $N/\sigma$ is a projective product space $P(m_1,\dots,m_k)$, and \Cref{thm: cohoringYM} coincides with \cite[Theorem 4.1]{sarkargpps}.
\end{enumerate}
\end{remark}
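To illustrate the formula in \Cref{thm: cohoringYM} in the smallest case, take $r=1$ and $n_1=p_1=1$. Then ${\rm Sq}(\beta_1)=(1+\alpha)\beta_1$, so
\[\beta_1^2={\rm Sq}^1(\beta_1)=\alpha\,\beta_1,\]
which is exactly the relation appearing in \cite[Lemma 2.1]{DDavis} and, later, the relation $y_s^2=w_1y_s$ in the cohomology of $X_g^{n-2}$.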
Let $R$ be a commutative ring with unity. Let $X$ be a path-connected topological space with cohomology ring $H^{\ast}(X;R)$. The cup-length of $X$ over $R$ is the maximal number $r$ such that there exist elements $x_1,\ldots,x_r\in H^{\ast}(X;R)$ of positive degree with $\prod_{i=1}^{r}x_i\neq 0$. We denote this number by ${\rm cl}_{R}(X)$. It is well known \cite[Proposition 1.5]{CLOT} that this number gives a lower bound for $\mathrm{cat}(X)$, that is, ${\rm cl}_{R}(X)+1 \leq \mathrm{cat}(X)$.
Let \[\cup \colon H^{\ast}(X;R)\otimes H^{\ast}(X;R) \longrightarrow H^{\ast}(X;R)\] be the cup product. The zero-divisor cup-length of $X$ with respect to the coefficient ring $R$ is defined as the maximal number $k$ such that there exist elements $u_1,\dots,u_k\in H^{\ast}(X;R)\otimes H^{\ast}(X;R)$ with $\cup(u_i)=0$ for all $1\leq i \leq k$ and $\prod_{i=1}^{k} u_i\neq 0$. We denote this number by ${\rm zl}_{R}(X)$. It is known \cite{FarberTC} that the number ${\rm zl}_{R}(X)$ gives a lower bound for $\mathrm{TC}(X)$, that is, ${\rm zl}_{R}(X) +1\leq \mathrm{TC}(X)$.
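To illustrate these notions, let $\alpha\in H^{n}(S^n;\Q)$ be the fundamental class. Then ${\rm cl}_{\Q}(S^n)=1$, and $\bar{\alpha}:=1\otimes\alpha-\alpha\otimes 1$ is a zero-divisor with
\[\bar{\alpha}^{2}=-\big(1+(-1)^{n}\big)\,\alpha\otimes\alpha,\]
so ${\rm zl}_{\Q}(S^n)=1$ if $n$ is odd and ${\rm zl}_{\Q}(S^n)=2$ if $n$ is even, recovering the lower bounds $\mathrm{TC}(S^n)\geq 2$ and $\mathrm{TC}(S^n)\geq 3$, respectively. Essentially the same computation appears for the factors $\bar{\beta}_j$ in the proof of \Cref{eq tc} below.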
We now compute some bounds on the LS-category and the topological complexity of $X((n_1,p_1) \ldots, (n_r,p_r), N)$.
\begin{proposition}\label{Prop_cat}
Let $1 \leq p_j \leq n_j$. Then,
we have the following inequalities.
\[{\rm cl}_{\Z_2}(N/\sigma)+r+1\leq \mathrm{cat}(X((n_1,p_1) \ldots, (n_r,p_r), N))\leq \mathrm{cat}(N/\sigma)+r.\]
\end{proposition}
\begin{proof}
Let $u=\prod_{i=1}^{{\rm cl}_{\Z_2}(N/\sigma)} y_i$ be a largest non-zero product in $H^*(N/\sigma; \Z_2)$. Then the product $u\cdot\prod_{j=1}^{r}\beta_j$ is non-zero in $H^{\ast}(X((n_1,p_1), \ldots, (n_r,p_r), N);\Z_2)$. Then we get the left inequality using \cite[Proposition 1.5]{CLOT}.
We use \Cref{prop cat} to obtain the upper bound.
Let $e_{n_i}(1)=(1, 0, \ldots, 0) \in S^{n_i}$.
Consider $U_{i_1}=S^{n_i}\setminus \{e_{n_i}(1)\}$ and $U_{i_2}=S^{n_i}\setminus \{-e_{n_i}(1)\}$. Then $\{U_{i_1}, U_{i_2}\}$ is a $\left<\tau_i\right>$-invariant open cover of $S^{n_i}$. Observe that they are also $\left<\tau_i \right>$-categorical for $i=1, \ldots, r$.
Now using \eqref{eq_ls_cat_prod}, we have $\mathrm{cat}(\prod_{i=1}^{r}S^{n_i}) \leq r+1$. Thus, one can construct a categorical open cover of $\prod_{i=1}^{r}S^{n_i}$ with $r+1$ open sets invariant under the $\Z_2$ action. Then we get the right inequality using \Cref{prop cat}.
\end{proof}
\begin{remark}
We observe the following things.
\begin{enumerate}
\item Recall that the LS-category of the projective product spaces was computed in \cite{Vandembroucq}. If $N=S^{n_{r+1}}$ and $\sigma$ is the antipodal action on $N$, then \Cref{Prop_cat} coincides with \cite[Theorem 1.2]{Vandembroucq}.
\item Suppose $m_1\leq \dots\leq m_k \leq n_1 \leq \cdots \leq n_r$ and $N=S^{m_1}\times \dots \times S^{m_k}$ with the involution $\sigma$ generated by the antipodal map on each sphere $S^{m_j}$ for $1\leq j\leq k$. Then $N/\sigma=P(m_1,\dots,m_k)$. Recall that the cup-length of $P(m_1,\dots,m_k)$ is $m_1+k-1$ and $\mathrm{cat}(P(m_1,\dots,m_k))=m_1+k$. Therefore, $\mathrm{cat}(X((n_1,p_1), \ldots, (n_r,p_r), N))=m_1+k+r$ if $1 \leq p_i \leq n_i$ for $i=1, \ldots, r$.
\end{enumerate}
\end{remark}
Recall the involution $\tau_i$ on the sphere $S^{n_i}$ defined in \eqref{eq: invo prodsphere}. Let $\Z_2 \cong \left < \tau_i \right >$. We compute the $\Z_2$-equivariant topological complexity of $S^{n_i}$.
\begin{example}
Let $n_i\geq 2$ and $p_i\geq 2$. Then $\mathrm{TC}_{\Z_2}(S^{n_i})=3$ if either $n_i$ is even or $p_i$ is odd, and $2\leq \mathrm{TC}_{\Z_2}(S^{n_i})\leq 3$ if $n_i$ is odd and $p_i$ is even.
Suppose $p_i\geq 2$. Then $(S^{n_i})^{\Z_2}=S^{p_i-1}$ is path connected. If $n_i$ is even then \[3=\mathrm{TC}(S^{n_i})\leq\mathrm{TC}_{\Z_2}(S^{n_i})\leq 2\mathrm{cat}_{\Z_2}(S^{n_i})-1=3.\] Now suppose both $n_i$ and $p_i$ are odd. Then, using \cite[Corollary 5.4]{EqTCGrant}, \[3=\mathrm{TC}(S^{p_i-1})\leq \mathrm{TC}_{\Z_2}(S^{n_i})\leq 2\mathrm{cat}_{\Z_2}(S^{n_i})-1= 3,\] since $S^{p_i-1}$ is an even dimensional sphere. Now if $n_i$ is odd and $p_i$ is even, then clearly $2\leq \mathrm{TC}_{\Z_2}(S^{n_i})\leq 3$.
\end{example}
We now prove the corresponding result for the topological complexity.
\begin{proposition}\label{prop:TCrefaction}
Let $2\leq n_1 \leq \cdots \leq n_r$ and $p_j >1$ for $j=1, \ldots, r$.
Then
\begin{equation}\label{eq: TCYM}
{\rm zl}_{\Z_2}(N/\sigma)+ r+1\leq\mathrm{TC}(X((n_1,p_1),\dots,(n_r,p_r),N))\leq \mathrm{cat}(N/\sigma\times N/\sigma)+ 2r.
\end{equation}
\end{proposition}
\begin{proof}
The following inequality follows from \Cref{cor:TC of gpps}:
\[\mathrm{TC}(X((n_1,p_1),\dots,(n_r,p_r),N))\leq \mathrm{cat}(N/\sigma\times N/\sigma)+\mathrm{TC}_{\Z_2}(\prod_{i=1}^{r}S^{n_i})-1.\]
Since $n_i\geq 2$ and $p_i\geq 2$ for each $i$, the fixed point set $(\prod_{i=1}^{r}S^{n_i})^{\Z_2}$ is path connected. Therefore, by \cite[Corollary 5.8]{EqTCGrant}, we have $\mathrm{TC}_{\Z_2}(\prod_{i=1}^{r}S^{n_i})\leq 2\mathrm{cat}_{\Z_2}(\prod_{i=1}^{r}S^{n_i})-1$. Using the product inequality for the equivariant LS-category (see \cite[Theorem 2.23]{eqicatprodineq}), it can be shown that $\mathrm{cat}_{\Z_2}(\prod_{i=1}^{r}S^{n_i}) \leq r+1$. Thus we get the right inequality of \eqref{eq: TCYM}.
The left inequality of \eqref{eq: TCYM} follows from the zero-divisor cup-length calculation using \Cref{thm: cohoringYM}. This proves the proposition.
\end{proof}
Let $\Sigma_g$ be the orientable surface of genus $g$, embedded in $\R^3$ so that it is invariant under the antipodal map. Observe that the quotient of $\Sigma_g$ by the antipodal action is the non-orientable surface $N_{g+1}$ of genus $g+1$.
Then the following result is a straightforward consequence of \Cref{Prop_cat} and \Cref{prop:TCrefaction}.
\begin{corollary}
Let $2\leq n_1 \leq \cdots \leq n_r$ and $p_j >1$ for $j=1, \ldots, r$. Then
\begin{enumerate}
\item $\mathrm{cat}(X((n_1,p_1)\dots,(n_r,p_r), \Sigma_g))=r+3$.
\item $r+4\leq \mathrm{TC}(X((n_1,p_1)\dots,(n_r,p_r), \Sigma_g))\leq 2r+5$.
\end{enumerate}
\end{corollary}
Next, we consider the following class of generalized projective product spaces.
\begin{equation}\label{eq: Xgn}
X_g^{n-2}:= \frac{S^1\times \cdots \times S^1 \times \Sigma_g}{(z_1,\dots ,z_{n-2},x)\sim (\bar{z}_1,\dots ,\bar{z}_{n-2},-x) }.
\end{equation}
Note that for any $g\geq 0$ and $n \geq 2$ the spaces $X_g^{n-2}$ are closed, smooth manifolds of dimension $n$. If $n=2$, $X_g^{n-2}=N_{g+1}$. Moreover, we have a fibre bundle $(S^1)^{n-2}\xhookrightarrow{} X_g^{n-2}\to N_{g+1}$. Observe that $X_g^{n-2}=X((n_1,p_1),\dots,(n_r,p_r), \Sigma_g)$, where $r=n-2$ and $n_i=p_i=1$ for $1\leq i\leq n-2$.
The following proposition is a consequence of \cite[Lemma 2.1]{DDavis} and \Cref{thm: cohoringYM}.
\begin{proposition}
The cohomology ring of $X_g^{n-2}$ is given by the following.
\[H^{\ast}(X_g^{n-2};\Z_2)=\displaystyle\frac{\Z_2\left< x_1,\dots, x_{g+1},y_1,\dots,y_{n-2} \right>}{\left< \{x_ix_j,~ x_i^2-x_j^2,~ x_i^3,~ y_s^2-w_1y_s \mid 1\leq i\neq j\leq g+1,~ 1\leq s\leq n-2\} \right>},\]
where $|x_i|=1=|y_i|$ and $w_1\in H^1(N_{g+1};\Z_2)$ classifies the double cover $\Sigma_g \to N_{g+1}$.
\end{proposition}
Now we are ready to compute the LS-category and some bounds for the topological complexity of $X_g^{n-2}$.
\begin{proposition}
The LS-category of $X_g^{n-2}$ is $\mathrm{cat}(X_g^{n-2})=n+1$.
\end{proposition}
\begin{proof}
From the description of the cohomology ring of $X_g^{n-2}$, one can show that the product $x_1\cdot w_1\cdot\prod_{i=1}^{n-2}y_i$ is non-zero in $H^{\ast}(X_g^{n-2};\Z_2)$. Therefore, $n+1\leq \mathrm{cat}(X_g^{n-2})$. Also, $\mathrm{cat} (X_g^{n-2}) \leq \mathrm{dim} (X_g^{n-2})+1 = n+1$.
\end{proof}
\begin{theorem}
Let $n\geq 3$. Then we have the following bounds for $\mathrm{TC}(X_g^{n-2})$.
\[n+4\leq \mathrm{TC}(X_g^{n-2})\leq
\begin{cases}
3k+2 & \text{ if } n=2k,\\
3k+4& \text{ if } n=2k+1.
\end{cases}\]
In particular, $\mathrm{TC}(X_g^1)=7$ and $\mathrm{TC}(X_g^2)=8$.
\end{theorem}
\begin{proof}
First we compute a lower bound for the topological complexity using zero-divisor cup-length.
Consider the zero-divisors $X_1=1\otimes x_1 + x_1\otimes 1$ and $Y_i=1\otimes y_i+y_i\otimes 1$ for $1\leq i\leq n-2$. Consider the product \[X_1^3\cdot Y_1^3\cdot\prod_{i=2}^{n-2}Y_i=(x_1^2\otimes x_1+x_1\otimes x_1^2)\cdot (w_1^2y_1\otimes 1 +w_1y_1\otimes y_1+y_1\otimes w_1y_1+1\otimes w_1^2y_1)\cdot\prod_{i=2}^{n-2}Y_i. \]
Note that the above product contains a term $x_1w_1 \prod_{i=1}^{n-2}y_i\otimes x_1^2y_1$ which is not killed by any other term in the product. Therefore, $n+4\leq \mathrm{TC}(X_g^{n-2})$.
Consider the fibre bundle \[(S^1)^{n-2}\times (S^1)^{n-2}\xhookrightarrow{} X_g^{n-2}\times X_g^{n-2}\to N_{g+1}\times N_{g+1}.\]
We have to find a categorical cover of $N_{g+1}\times N_{g+1}$ and a $(\Z_2\times \Z_2)$-invariant motion planning cover of $(S^{1})^{n-2}\times (S^{1})^{n-2}$. Recall that we have constructed such an open cover in the proof of \Cref{TCKn}, consisting of $3k$ many open sets if $n=2k+1$ and $3k-2$ many open sets if $n=2k$. Using the K\"unneth formula, the cup-length of $H^*(N_{g+1}\times N_{g+1};\Z_2)$ is $4$, since the cup-length of $H^*(N_{g+1};\Z_2)$ is $2$. Therefore, $\mathrm{cat}(N_{g+1}\times N_{g+1})=5$. Thus, we get the desired upper bound using \Cref{cor:TC of gpps}.
\end{proof}
Let $\tau$ be an involution on $M$. This map induces an automorphism $$\tau^* \colon H^*(M; \Z_2) \to H^*(M; \Z_2).$$
\begin{theorem}\label{thm:toric_mod2}
Let $M$ be a compact, simply connected and path-connected space with an involution $\tau$ such that $\tau^*$ is identity. Let $N$ be a simply connected path-connected space with free involution $\sigma$. Then
$$H^*(X(M, N); \Z_2) \cong H^*(N/\sigma; \Z_2) \otimes H^*(M; \Z_2).$$
\end{theorem}
\begin{proof}
The projection $M \times N \to N$ induces a fibre bundle $M \xhookrightarrow{} X(M, N) \xrightarrow{\mathfrak{p}} N/\sigma$ with fibre $M$, see \eqref{eq:indc_fib_bndl}. By the hypothesis, we have $\pi_1(X(M, N)) \cong \Z_2$. Since $M$ is compact, the cohomology groups of the fibre and base of this bundle have finite dimension over the field $\Z_2$. The fibre $M$ and the base $N/\sigma$ are path connected, and the map $\tau^*$ is the identity. Then, by applying \cite[Proposition 5.5]{Mcc}, one gets that the corresponding spectral sequence collapses at $E_2$. Hence $M$ is totally non-homologous to zero in $X(M, N)$ with respect to $\Z_2$. Therefore, by \cite[Theorem 5.10]{Mcc}, one gets the claim in the theorem.
\end{proof}
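As a relevant instance of the hypothesis on $\tau^*$: for the conjugation involution $\tau$ on $\mathbb{C}P^{n}$, or more generally on $\mathrm{Gr}_d(\C^n)$, one has
\[\tau^*(c_i)=(-1)^{i}c_i\]
on the Chern classes of the tautological bundle; since these classes generate the integral cohomology, $\tau^*$ is the identity on mod-$2$ cohomology, so \Cref{thm:toric_mod2} applies to these spaces (this is how \eqref{eq:cohm_dm} below is obtained).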
If $n_1, \ldots, n_r$ are positive integers greater than one, then $N=\prod_{i=1}^r S^{n_i}$ is simply connected. Consider the involution $\sigma$ on $N$ determined by the antipodal action on each factor. Let us denote the generalized projective product space for this $M$ and $N$ by $X(M, n_1, \ldots, n_r)$. Note that if $M=N$ with a free involution and $p_j=0$ for $j=1, \ldots, r$ in \eqref{eq:prodsphere_N}, then $X((n_1, 0), \ldots, (n_r, 0), N) = X(M, n_1, \ldots, n_r)$. We have the following result.
\begin{corollary}\label{cor:toric_mod2}
Let $M$ be a compact, simply connected and path-connected space with an involution $\tau$ such that $\tau^*$ is identity, and $n_1, \ldots, n_r$ be positive integers greater than one. Then
$H^*(X(M, n_1, \ldots, n_r); \Z_2) \cong H^*(P(n_1, \ldots, n_r);\Z_2) \otimes H^*(M; \Z_2).$
\end{corollary}
\begin{proposition}
Let $M$ be a compact, simply connected and path-connected space with an involution $\tau$ such that $\tau^*$ is identity, and ${\rm cl}_{\Z_2}(M)$ be the cup-length of $H^*(M; \Z_2)$ and $2 \leq n_1 \leq \cdots \leq n_r$. Then $\mathrm{cat}(X(M, n_1, \ldots, n_r)) \geq {\rm cl}_{\Z_2}(M) + r + n_1$.
\end{proposition}
\begin{proof}
Let $\prod_{i=1}^{{\rm cl}_{\Z_2}(M)} y_i$ be non-zero in $H^*(M; \Z_2)$. Then the cup product $\prod_{i=1}^{{\rm cl}_{\Z_2}(M)} y_i \,\alpha_1^{n_1} \prod_{j=2}^{r}\alpha_j$ is non-zero in $H^{\ast}(X(M, n_1, \ldots, n_r); \Z_2)$, by Corollary \ref{cor:toric_mod2}. Therefore, the result follows from \cite[Proposition 1.5]{CLOT}.
\end{proof}
\begin{proposition}\label{prop:tc-bundle}
Let $M$ be a compact, simply connected and path-connected space with an involution $\tau$ such that $\tau^*$ is identity. Let ${\rm zl}_{\Z_2}(M)$ be the zero-divisor cup-length of $H^*(M; \Z_2)$ and $2 \leq n_1 \leq \cdots \leq n_r$. Then $\mathrm{TC}(X(M, n_1, \ldots, n_r)) \geq {\rm zl}_{\Z_2}(M)+ {\rm zl}_{\Z_2}(\R P^{n_1}) + r$.
\end{proposition}
\begin{proof}
This follows from \Cref{thm:toric_mod2} and the computation of the zero-divisor cup-length of the space $X(M, n_1, \ldots, n_r)$.
\end{proof}
\begin{remark}
If the space $M$ in \Cref{prop:tc-bundle} satisfies the hypothesis in \Cref{thm:TC-fibrebundle}, then $\mathrm{TC}(X(M, n_1)) \leq \mathrm{TC}(\mathbb{R} P^{n_1}) + \mathrm{TC}(M)-1$. In particular, if $M$ is the product of finitely many spheres with antipodal actions, then we get \cite[Theorem 1.3]{Vandembroucq}.
\end{remark}
\section{LS-category and topological complexity of some generalized Dold manifolds}\label{sec: Cat TC some gen Dold}
In this section, we study LS-category and topological complexity of certain generalized projective product spaces called `Dold manifolds of Grassmann type'.
Let $1\leq d < n$ and $\mathrm{Gr}_d(\C^n)$ be the set of all $d$-dimensional subspaces of $\C^n$. The space $\mathrm{Gr}_d(\C^n)$ is called a complex Grassmann manifold and its complex dimension is $d(n-d)$. Note that $\mathrm{Gr}_1(\mathbb{C}^n) = \mathbb{C} P^{n-1}$. Let \[X(\mathrm{Gr}_d(\C^n), n_1,\dots,n_r):=\frac{\mathrm{Gr}_d(\C^n) \times S^{n_1}\times \cdots \times S^{n_r}}{(y, {\bf x}_1, \dots, {\bf x}_{r})\sim (\tau(y), -{\bf x}_1, \dots, -{\bf x}_{r})},\] where $\tau$ is the conjugation involution, whose fixed point set is the real Grassmann manifold $\mathrm{Gr}_d(\mathbb{R}^n)$. We call $ X(\mathrm{Gr}_d(\C^n), n_1,\dots,n_r)$ a \emph{Dold manifold of Grassmann type}. We have a fibre bundle
$$\mathrm{Gr}_d(\C^n)\xhookrightarrow{}X(\mathrm{Gr}_d(\C^n), n_1,\dots,n_r)\xrightarrow{\mathfrak{p}}P(n_1,\dots,n_r),$$ where $P(n_1,\dots,n_r)$ is the projective product space defined in \eqref{eq_ppsp}. If $n_1 \leq \cdots \leq n_r$, by Theorem \ref{thm:toric_mod2}, we have
\begin{equation}\label{eq:cohm_dm}
H^*(X(\mathrm{Gr}_d(\C^n), n_1, \ldots, n_r); \Z_2) \approx H^*(P(n_1, \ldots, n_r);\Z_2) \otimes H^*(\mathrm{Gr}_d(\C^n); \Z_2).
\end{equation}
We recall the cell structure on $\mathrm{Gr}_d(\C^n)$. A $d$-tuple $\lambda = (\lambda_1, \ldots, \lambda_d)$ is called a Schubert symbol if $1 \leq \lambda_1 < \cdots < \lambda_d \leq n$. Consider $\C ^{\ell} := \{(z_1, \ldots, z_{\ell}, 0, \ldots, 0) \in \C^n\}$. For a Schubert symbol $\lambda =(\lambda_1, \ldots, \lambda_d)$, define $$E(\lambda):=\{H \in \mathrm{Gr}_d(\C^n) ~|~ \dim(H \cap \C^{\lambda_j})=j, \dim(H \cap \C^{\lambda_j-1})=j-1 ~\mbox{for}~j=1, \ldots, d\}.$$ Then $E(\lambda)$ is even dimensional and is called a Schubert cell for the Schubert symbol $\lambda$. These Schubert cells give a cell structure on $\mathrm{Gr}_d(\C^n)$, see \cite{MiSt}. The cup-length of $\mathrm{Gr}_d(\C^n)$ is $d(n-d)$. Observe that each Schubert cell is invariant under the conjugation action and any two same dimensional cells corresponding to different Schubert symbols are disjoint.
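For instance, for $d=1$ the Schubert symbols are the singletons $\lambda=(\lambda_1)$ with $1\leq\lambda_1\leq n$, and
\[E((\lambda_1))=\{L\in \mathbb{C}P^{n-1} ~|~ L\subseteq \C^{\lambda_1},~ L\not\subseteq\C^{\lambda_1-1}\}\cong \C^{\lambda_1-1},\]
recovering the usual cell structure of $\mathbb{C}P^{n-1}=\mathrm{Gr}_1(\C^n)$ with one cell in each even real dimension $0,2,\dots,2(n-1)$.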
It is well known that the integral cohomology ring of $\mathrm{Gr}_d(\C^n)$ is described as follows: \[H^*(\mathrm{Gr}_d(\C^n),\Z)=\frac{\Z[c_1,\dots,c_d]}{\left<h_{n-d+1},\dots,h_n\right>},\]
where $|c_i|=2i$ and $h_j$ is defined as the degree-$2j$ term in the expansion of $(1+c_1+\dots+c_d)^{-1}$. Note that $\mathrm{Gr}_d(\C^n)$ is a K\"{a}hler manifold with ${\rm rank}(H^2(\mathrm{Gr}_d(\C^n);\Z))=1$. Therefore, $c_1^{d(n-d)}\neq 0$. Thus ${\rm cl}(\mathrm{Gr}_d(\C^n))=d(n-d)$. Since $\mathrm{Gr}_d(\C^n)$ is simply connected, $\mathrm{cat}(\mathrm{Gr}_d(\C^n))\leq d(n-d)+1$. Therefore, we have $\mathrm{cat}(\mathrm{Gr}_d(\C^n))=d(n-d)+1$.
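For example, for $d=1$ we have $\mathrm{Gr}_1(\C^n)=\mathbb{C}P^{n-1}$; here $(1+c_1)^{-1}=\sum_{j\geq0}(-1)^{j}c_1^{j}$, the ideal is generated by $h_n=(-1)^n c_1^{n}$, and we recover
\[H^*(\mathbb{C}P^{n-1};\Z)=\Z[c_1]/\left<c_1^{n}\right>,\]
with ${\rm cl}(\mathbb{C}P^{n-1})=n-1$ and $\mathrm{cat}(\mathbb{C}P^{n-1})=n$.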
\begin{theorem}\label{thm_lscat_gras}
Let $n_1 \leq \cdots \leq n_r$. Then ${\rm cat}(X(\mathrm{Gr}_d(\C^n), n_1, \ldots, n_r)) = d(n-d) + n_1+r$.
\end{theorem}
\begin{proof}
The cohomology of the projective product spaces and \eqref{eq:cohm_dm} give that the cup-length of $X(\mathrm{Gr}_d(\C^n),n_1, \ldots, n_r)$ is $d(n-d) + n_1+r-1$. So, \[{\rm cat}(X(\mathrm{Gr}_d(\C^n),n_1, \ldots, n_r)) \geq d(n-d) + n_1+r.\]
Let $0 \leq i \leq d(n-d)$ and $U_i:= \bigcup_{|\lambda|\leq i} E(\lambda)$, where $|\lambda|:=\sum_{j=1}^{d}(\lambda_j-j)$ is the complex dimension of $E(\lambda)$. So, $U_{i}$ is a subcomplex of $\mathrm{Gr}_d(\C^n)$. Thus there is a conjugation invariant open neighborhood $V_{i}$ of $U_{i}$ such that $V_{i}$ retracts onto $U_{i}$. Then, $\{V_{i} - U_{i -1}\}_{i=0}^{d(n-d)}$ gives a conjugation invariant categorical cover with $d(n-d)+1$ many open sets. Here $U_{-1}=\emptyset$. Therefore, by \Cref{prop cat} and ${\rm cat}({P(n_1,\dots,n_r)})=n_1 +r$, we get the following inequality \[{\rm cat}(X(\mathrm{Gr}_d(\C^n), n_1, \ldots, n_r)) \leq d(n-d) + n_1+r.\]
This proves the claim.
\end{proof}
\begin{proposition}\label{prop_tc_grdn}
Let $n_1 \leq \cdots \leq n_r$. Then
\begin{equation}\label{eq:tc-GRpps}
{\rm zl}_{\Z_2}(\mathrm{Gr}_d(\C^n))+ {\rm zl}_{\Z_2}(\R P^{n_1}) +r \leq\mathrm{TC}(X(\mathrm{Gr}_d(\C^n), n_1, \ldots, n_r)) \leq 2d(n-d) + 2(n_1+r)-1.
\end{equation}
\end{proposition}
\begin{proof}
Using \eqref{eq:cohm_dm}, one can get that
\[{\rm zl}_{\Z_2}(X(\mathrm{Gr}_d(\C^n), n_1, \ldots, n_r))={\rm zl}_{\Z_2}(\mathrm{Gr}_d(\C^n))+ {\rm zl}_{\Z_2}(\R P^{n_1}) +r-1.\]
So, the left inequality of \eqref{eq:tc-GRpps} follows.
To get the right inequality, we have to find a $(\Z_2\times \Z_2)$-invariant motion planning cover of $\mathrm{Gr}_d(\C^n)\times \mathrm{Gr}_d(\C^n)$. We get such a cover with $2d(n-d)+1$ sets, since we have a conjugation invariant categorical cover of $\mathrm{Gr}_d(\C^n)$ with $d(n-d)+1$ many open sets. Now observe that ${\rm cat}(P(n_1,\dots,n_r)\times P(n_1,\dots,n_r))\leq 2(n_1+r)-1$. This proves the right inequality of \eqref{eq:tc-GRpps}.
\end{proof}
\begin{theorem}\label{thm:cat_CPnN}
Let $N$ be a simply connected space with a free involution $\sigma$, and consider $\mathbb{C}P^{n}$ with the conjugation involution. Then
\begin{equation}\label{eq:cl-cat}
n + {\rm cl}_{\Z_2}(N/\sigma)+1 \leq \mathrm{cat}(X(\mathbb{C} P^n, N))\leq n+ \mathrm{cat}(N/\sigma).
\end{equation}
\end{theorem}
\begin{proof}
Using \Cref{thm:toric_mod2}, we get the cup-length ${\rm cl}_{\Z_2}(X(\mathbb{C} P^n, N))=n + {\rm cl}_{\Z_2}(N/\sigma)$. Therefore, we have $\mathrm{cat}(X(\mathbb{C} P^n, N)) \geq n +{\rm cl}_{\Z_2}(N/\sigma) +1.$
Note that $\mathrm{cat}_{\Z_2}(\C P^n)=n+1$, see the proof of \cite[Proposition 3.10]{Naskar}.
Therefore, using \Cref{thm:cat}, we get the right inequality of \eqref{eq:cl-cat}.
\end{proof}
\begin{remark}
Observe that if $\mathrm{cat}(N/\sigma)={\rm cl}_{\Z_2}({N/\sigma})+1$ then $\mathrm{cat}(X(\mathbb{C} P^n, N))= n+ {\rm cat}(N/\sigma)$. In particular, $\mathrm{cat}(X(\mathbb{C} P^n, n_1,\dots,n_r))= n+ n_1+r$, and $\mathrm{cat}(X(\mathbb{C} P^n, n_1))= n+ n_1+1$ which is \cite[Corollary 2.7]{Naskar}.
\end{remark}
\begin{theorem}\label{thm:TC-cpn-pn}
Let $n_1 \leq \cdots \leq n_r$ and $M=\mathbb{C}P^{n}$ with the conjugation involution. Then
\[{\rm zl}_{\Z_2}(\mathbb{C}P^{n}) + r + {\rm zl}_{\Z_2}(\mathbb{R}P^{n_1}) \leq \mathrm{TC}(X(\mathbb{C} P^n, n_1,\dots,n_r)) \leq \mathrm{TC}(\mathbb{R} P^{n_1}) + r +k + 2n,\] where $k$ is the number of even integers among $n_1, \ldots, n_r$.
\end{theorem}
\begin{proof}
The left inequality can be obtained from \Cref{thm:toric_mod2} and the computation of the zero-divisor cup-length of $X(\mathbb{C} P^n, n_1,\dots,n_r)$.
Recall that $\C P^n$ has a categorical cover consisting of $n+1$ open sets, each homeomorphic to $\C^n$. These open sets are invariant under the conjugation action. Therefore, we have a conjugation invariant categorical cover of $\C P^n\times \C P^n$ consisting of $2n+1$ open sets. This categorical cover of $\C P^n\times \C P^n$ can be regarded as a motion planning cover, since $\mathrm{TC}(\C P^n)=2n+1$. Thus, using \Cref{thm:TC-fibrebundle}, we get the following inequality. \[\mathrm{TC}(X(\mathbb{C} P^n, n_1,\dots,n_r))\leq \mathrm{TC}(P(n_1,\dots,n_r))+2n+1-1.\]
Now the right inequality follows from \cite[Theorem 1.3]{Vandembroucq}.
\end{proof}
\section{Equivariant LS-category and topological complexity of a class of $\Z_2$-spaces}\label{sec: eq Cat TC some gen Dold}
Let $N$ be a topological space and $\sigma$ a fixed point free involution on $N$. Recall the involution $\tau_i$ on $S^{n_i}$ for $i=1, \ldots, r$. Define an involution $\tau' \colon S^{n_1}\times \cdots \times S^{n_r}\times N \to S^{n_1}\times \cdots \times S^{n_r}\times N$ by
\begin{equation}\label{eq:free_inv}
\tau'({\bf x}_1, \dots, {\bf x}_{r},y)=(\tau_1({\bf x}_1), \dots, \tau_r({\bf x}_r), \sigma(y)).
\end{equation}
Then $\tau'$ is a free involution on $S^{n_1}\times \cdots \times S^{n_r}\times N$ and $\Z_2$ acts freely on $S^{n_1}\times \cdots \times S^{n_r}\times N$ via $\tau'$. We denote the orbit space by $X((n_1,p_1) \ldots, (n_r,p_r), N)$.
In this section, we study the equivariant LS-category and equivariant topological complexity of several $\Z_2$-spaces related to the generalized projective product spaces and generalized Dold manifolds.
\begin{proposition}\label{prop:eqi_lstc}
Let $N$ be a metrizable space with a free involution $\sigma$ and $1 \leq p_i \leq n_i$ for $i=1, \ldots, r$. Then for the $\Z_2$-action determined by \eqref{eq:free_inv}, we have
\begin{equation}\label{eq:equi_cat_1}
{\rm cl}_{\Z_2}(N/\sigma)+r+1\leq \mathrm{cat}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N)\leq \mathrm{cat}(N/\sigma)+r.
\end{equation}
In particular, if $N=\Sigma_g$ and $\sigma$ is the antipodal involution on $\Sigma_g$, then $\mathrm{cat}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times \Sigma_g)=r+3$.
\end{proposition}
\begin{proof}
Since $N$ is metrizable, so is $S^{n_1}\times \cdots \times S^{n_r}\times N$. Therefore, it follows from \cite[Theorem 1.15]{eqlscategory} that \[\mathrm{cat}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N)=\mathrm{cat}(X((n_1,p_1), \ldots, (n_r,p_r), N)).\]
Then the inequalities follow from \Cref{Prop_cat}.
Now assume that $N=\Sigma_g$. We know that the cup-length of $H^*(N_{g+1};\Z_2)$ is $2$. Moreover, $\mathrm{cat}(N_{g+1})=3$. Therefore,
\[2+r+1\leq \mathrm{cat}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times \Sigma_g)\leq 3+r.\] Hence the proposition follows.
\end{proof}
\begin{remark}
Let $m_1\leq \dots\leq m_{\ell}$ and $N=S^{m_1}\times \dots \times S^{m_{\ell}}$. Consider the involution $\sigma$ generated by the antipodal map on each sphere $S^{m_j}$ for $1\leq j\leq \ell$. Then $N/\sigma=P(m_1,\dots,m_{\ell})$. Recall that the cup-length of $P(m_1,\dots,m_{\ell})$ is $m_1+\ell-1$ and $\mathrm{cat}(P(m_1,\dots,m_{\ell}))=m_1+\ell$. Therefore, $\mathrm{cat}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N)=m_1+\ell+r$.
\end{remark}
\begin{proposition}\label{eq tc}
Let $N$ be a smooth manifold with a free involution $\sigma$ and $n_i\geq 2$ for $i=1, \ldots, r$. Then for the $\Z_2$-action determined by \eqref{eq:free_inv}, we have the following.
\begin{equation}\label{eq: TCpi0}
r+k+{\rm zl}_{\Q}(N)+1\leq\mathrm{TC}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N)\leq r+k+\mathrm{TC}_{\Z_2}(N),
\end{equation}
where $k$ is the number of even $n_i$'s and $p_i=0$ for $1\leq i\leq r$. Moreover,
\begin{equation}\label{eq: TCpi2}
r+k+{\rm zl}_{\Q}(N)+1\leq\mathrm{TC}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N)\leq 2r+\mathrm{TC}_{\Z_2}(N),
\end{equation}
when $p_i\geq 2$ for $1\leq i\leq r$.
\end{proposition}
\begin{proof}
Observe that \[H^*(S^{n_1}\times \cdots \times S^{n_r}\times N;\Q) \cong \Lambda_{\Q}(\alpha_1,\dots,\alpha_{r-k})\otimes \bigotimes_{j=1}^{k}\displaystyle\frac{\Q[\beta_{j}]}{\left<\beta_j^2\right>}\otimes H^*(N;\Q),\] where the classes $\alpha_i$ correspond to the odd-dimensional spheres and the classes $\beta_j$ to the even-dimensional ones.
Let $\bar{\alpha}_i:=\alpha_i\otimes 1-1\otimes\alpha_i$, for $1\leq i\leq r-k$. Let $\bar{\beta}_j:=\beta_j\otimes 1-1\otimes\beta_j$, for $1\leq j\leq k$. Suppose that ${\rm zl}_{\Q}(N)$ is the zero-divisor cup-length of $H^*(N; \Q)$. That is, there are elements $X_1,\dots, X_{\ell}$ in the kernel of $\triangle_N^* \colon H^*(N\times N)\to H^*(N)$, where $\triangle_N(x)=(x,x)$, such that $\prod_{i=1}^{\ell}X_i\neq 0$. Then note that the product \[\prod_{i=1}^{r-k}\bar{\alpha}_{i}\cdot\prod_{j=1}^{k}\bar{\beta}_{j}^{2}\cdot \prod_{s=1}^{\ell}X_s\neq 0.\] Therefore, $r+k+ {\rm zl}_{\Q}(N)+1 \leq \mathrm{TC}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N) $. Consider the case $p_i=0$ for $i=1,\dots, r$. Then it follows from \cite[Lemma 4.1]{EqTCGrant} that
\[\mathrm{TC}_{\Z_2}(S^n)=
\begin{cases}
2& \text{ if $n$ is odd,} \\
3& \text{ if $n$ is even}.
\end{cases}
\]
Therefore, using generalized additive inequality of topological complexity \cite[Theorem 4.2]{EqTCGrant}, we get
\begin{align*}
\mathrm{TC}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N) &\leq \mathrm{TC}_{\Z_2}(\prod_{i=1}^{r}S^{n_i})+ \mathrm{TC}_{\Z_2}(N)-1\\
& \leq 2(r-k)+3k-r+1+\mathrm{TC}_{\Z_2}(N)-1\\
&= r+k +\mathrm{TC}_{\Z_2}(N).
\end{align*}
Now assume that $p_i\geq 2$. Then
\begin{align*}
\mathrm{TC}_{\Z_2}(S^{n_1}\times \cdots \times S^{n_r}\times N) &\leq \mathrm{TC}_{\Z_2}(\prod_{i=1}^{r}S^{n_i})+ \mathrm{TC}_{\Z_2}(N)-1\\
& \leq 2\mathrm{cat}_{\Z_2}(\prod_{i=1}^{r}S^{n_i})-1+\mathrm{TC}_{\Z_2}(N)-1\\
&= 2r +\mathrm{TC}_{\Z_2}(N),
\end{align*}
since $\mathrm{cat}_{\Z_2}(\prod_{i=1}^{r}S^{n_i})\leq r+1$ when $p_i\geq 2$.
This proves the proposition.
\end{proof}
\begin{remark} We observe the following.
\begin{enumerate}
\item If the $\Z_2$-equivariant topological complexity of $N$ coincides with the zero-divisor cup-length of $N$ plus one, then we have the equality in \eqref{eq: TCpi0}.
\item If $k=r$ and the $\Z_2$-equivariant topological complexity of $N$ coincides with the zero-divisor cup-length of $N$ plus one, then we have the equality in \eqref{eq: TCpi2}.
\end{enumerate}
\end{remark}
Let $M$ be a topological space and $\tau$ be an involution on $M$ with non-empty fixed point set.
Define an involution $\sigma' \colon M\times S^{n_1}\times \cdots \times S^{n_r} \to M\times S^{n_1}\times \cdots \times S^{n_r}$
by \[\sigma'(y,{\bf x}_1,\dots,{\bf x}_r)=(\tau(y),-{\bf x}_1,\dots,-{\bf x}_r).\] Then $\Z_2$ acts freely on $M\times S^{n_1}\times \cdots \times S^{n_r}$ via $\sigma'$. We denote the orbit space by $X(M, n_1, \ldots, n_r)$.
\begin{proposition} Let $M$ be a compact, simply connected and path-connected metrizable space with an involution $\tau$ such that $\tau^*$ is identity. Then
\begin{equation}\label{eq:equi_cat2}
n_1+r + {\rm cl}_{\Z_2}(M)\leq \mathrm{cat}_{\Z_2}(M\times S^{n_1}\times \cdots \times S^{n_r})\leq n_1+r+\mathrm{cat}_{\Z_2}(M)-1.
\end{equation}
\end{proposition}
\begin{proof}
Since $M\times S^{n_1}\times \cdots \times S^{n_r}$ is a metrizable space and $\sigma'$ acts freely on it, then it follows from \cite[Theorem 1.15]{eqlscategory} that $$\mathrm{cat}_{\Z_2}(M\times S^{n_1}\times \cdots \times S^{n_r})=\mathrm{cat}(X(M,n_1,\dots,n_r)).$$ Thus the right inequality of \eqref{eq:equi_cat2} follows from the fact that $\mathrm{cat}(P(n_1,\dots,n_r))=n_1+r$ and \Cref{prop cat}.
One can compute the cup-length of $H^*(X(M,n_1,\dots,n_r);\Z_2)$ using \Cref{thm:toric_mod2} which is $n_1-1+r+{\rm cl}_{\Z_2}(M)$. So, the left inequality of \eqref{eq:equi_cat2} follows.
\end{proof}
\begin{example}\label{ex:eqi_lstc}
Let $M=\mathbb{C}P^{n}$ with the conjugation involution. One can show that $\mathrm{cat}_{\Z_2}(\mathbb{C}P^n)=n+1$, see the proof of \cite[Proposition 3.10 ]{Naskar}. Therefore, $\mathrm{TC}_{\Z_2}(\mathbb{C}P^n)\leq 2n+1$. Moreover, ${\rm zl}_{\Q}(\mathbb{C}P^n)=2n$. Now using \Cref{eq tc}, we get
\[\mathrm{TC}_{\Z_2}(\mathbb{C} P^n\times S^{n_1}\times \cdots \times S^{n_r})=r+k+2n+1,\] where
$k$ is the number of even $n_i$'s. More generally, for $M=\mathrm{Gr}_d(\C^n)$ and $\tau$ the conjugation on $\mathrm{Gr}_d(\C^n)$, we have
\[2d(n-d)+r+k+1 \leq \mathrm{TC}_{\Z_2}(\mathrm{Gr}_d(\C^n)\times S^{n_1}\times \cdots \times S^{n_r})\leq 2d(n-d)+r+k+1,\] where $k$ is the number of even $n_i$'s. That is, $\mathrm{TC}_{\Z_2}(\mathrm{Gr}_d(\C^n)\times S^{n_1}\times \cdots \times S^{n_r})=2d(n-d)+r+k+1$.
\end{example}
\noindent {\bf Acknowledgement}: The authors thank Mark Grant and Stephan Mescher for some helpful discussions. The first author thanks IIT Bombay and the second author thanks ICSR of IIT Madras.
\bibliographystyle{plain}
| {
"timestamp": "2023-02-02T02:15:12",
"yymm": "2302",
"arxiv_id": "2302.00468",
"language": "en",
"url": "https://arxiv.org/abs/2302.00468",
"abstract": "In this paper, we study upper bounds for the topological complexity of the total spaces of some classes of fibre bundles. We calculate a tight upper bound for the topological complexity of an $n$-dimensional Klein bottle. We also compute the exact value of the topological complexity of $3$-dimensional Klein bottle. We describe the cohomology rings of several classes of generalized projective product spaces with $\\mathbb{Z}_2$-coefficients. Then we study the LS-category and topological complexity of infinite families of generalized projective product spaces. We reckon the exact value of these invariants in many specific cases. We calculate the equivariant LS-category and equivariant topological complexity of several product spaces equipped with $\\mathbb{Z}_2$-action.",
"subjects": "Algebraic Topology (math.AT)",
"title": "LS-category and topological complexity of several families of fibre bundles",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.979354068055595,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7099046479654894
} |
https://arxiv.org/abs/1706.04069 | Fast Inverse Nonlinear Fourier Transform | This paper considers the non-Hermitian Zakharov-Shabat (ZS) scattering problem which forms the basis for defining the SU$(2)$-nonlinear Fourier transform (NFT). The theoretical underpinnings of this generalization of the conventional Fourier transform is quite well established in the Ablowitz-Kaup-Newell-Segur (AKNS) formalism; however, efficient numerical algorithms that could be employed in practical applications are still unavailable. In this paper, we present two fast inverse NFT algorithms with $O(KN+N\log^2N)$ complexity and a convergence rate of $O(N^{-2})$ where $N$ is the number of samples of the signal and $K$ is the number of eigenvalues. These algorithms are realized using a new fast layer-peeling (LP) scheme ($O(N\log^2N)$) together with a new fast Darboux transformation (FDT) algorithm ($O(KN+N\log^2N)$) previously developed by the author. The proposed fast inverse NFT algorithm proceeds in two steps: The first step involves computing the radiative part of the potential using the fast LP scheme for which the input is synthesized under the assumption that the radiative potential is nonlinearly bandlimited, i.e., the continuous spectrum has a compact support and the discrete spectrum is empty. The second step involves addition of bound states using the FDT algorithm. Finally, the performance of these algorithms is demonstrated through exhaustive numerical tests. | \section*{Notations}
\label{sec:notations}
The set of positive real numbers ($\subset\field{R}$) is denoted by
$\field{R}_+$. Positive (negative) integers are denoted by
$\field{Z}_+$ ($\field{Z}_-$). For any complex number $\zeta$,
$\Re(\zeta)$ and $\Im(\zeta)$ refer to the real
and the imaginary parts of $\zeta$, respectively. Its complex conjugate is
denoted by $\zeta^*$. The upper half (lower half) of the complex plane, $\field{C}$,
is denoted by $\field{C}_+$ ($\field{C}_-$).
\section{Introduction}
The nonlinear Fourier (NF) spectrum offers a novel way of encoding information
in optical pulses where the nonlinear effects are adequately taken into
account as opposed to being treated as a source of distortion. This idea
has its origin in the work of Hasegawa and Nyu~\cite{HN1993} who were the first
to propose the use of discrete eigenvalues of the NF spectrum for encoding
information. Recent
advances in coherent optical communication have made it possible to reconsider
this old idea with some extensions and improvements. Extension of this scheme
consists in using additional degrees of freedom offered by the NF
spectrum such as the norming constants and the continuous spectrum. For an overview
of the recent progress in theoretical as well as experimental aspects of various
optical communication methodologies that are based on the nonlinear Fourier
transform (NFT), we refer the reader to the review article~\cite{TPLWFK2017} and
the references therein.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=1]{schema_inft}
\caption{\label{fig:schema-inft}The figure shows a schematic of the
fast inverse NFT (INFT) algorithm where the dashed line depicts
the missing part of the algorithm to be discussed in this article
(the FDT algorithm has been reported in~\cite{V2017INFT1}).
Here, $q_R(t)$ refers to the ``radiative'' part of the signal
$q(t)$ which is obtained as a result of removing the bound states. Note that
$q_R(t)$ has the reflection coefficient $\rho_R(\xi)$ (see
Sec.~\ref{sec:akns-sys} for the connection between $\rho_R(\xi)$ and $\rho(\xi)$).}
\end{figure*}
In order to realize any NFT-based modulation methodology, it is
imperative to have a suitable low-complexity
NFT algorithm which forms the primary motivation behind this work. The central
idea is to use a fast version of the well-known \emph{layer-peeling} (LP) algorithm
within the framework of an appropriate discretization scheme applied to the
Zakharov-Shabat (ZS) problem. This approach has been characterized as the
differential approach by Bruckstein~\textit{et~al.}~\cite{BLK1985,BK1987} where fast realizations
of the LP algorithm which achieves a complexity of $\bigO{N\log^2N}$
for $N$ samples of the reflection data are also discussed. However, the earliest
work on fast LP is that of McClary~\cite{McClary1983} which appeared
in the geophysics literature. More recently, this method has been adopted by
Brenne and Skaar~\cite{BS2003} in the design of grating-assisted codirectional
couplers. However, the latter work reports a complexity of $\bigO{N^2}$\footnote{In
this paper, we do not consider the method of discretization presented
in~\cite{BS2003}; however, let us briefly mention that on account of the
piecewise constant assumption used in that work for the scattering potential, the
order of convergence gets artificially restricted to $\bigO{N^{-1}}$. This
problem has been remedied in~\cite{V2017INFT1} where this
discretization scheme is termed as the \emph{split-Magnus} method.}. It is
interesting to note that, at the heart of it, all of the aforementioned versions of
LP are similar; however, the manner in which the discrete system is obtained
seems to vary. In this work, we consider the discrete system obtained as a result of
applying (exponential) trapezoidal rule to the ZS problem as discussed
in~\cite{V2017INFT1}.
The next important idea is to recognize that the
\emph{Darboux transformation} (DT) provides a
promising route to the most general inverse NFT
algorithm. A fast version of DT (referred to as FDT) is developed
in~\cite{V2017INFT1} which is based on the pioneering work of
Lubich on convolution quadrature~\cite{Lubich1994} and a
fast LP algorithm. The schematic of the fast inverse NFT is shown
in Fig.~\ref{fig:schema-inft} where we note that FDT is
capable of taking a \emph{seed} potential $q_R(t)$ and augmenting it by
introducing the bound states corresponding to $\mathfrak{S}_K$ (the discrete
spectrum to be introduced in Sec.~\ref{sec:akns-sys}, where $K$ is the number of
bound states or eigenvalues). If $q_R(t)$ is the
\emph{radiative} part of $q(t)$, i.e., it is generated from NF spectrum which
has an empty discrete spectrum and $\rho_R(\xi)$ as the reflection coefficient, then $q(t)$
is the full inverse of the NF spectrum characterized by $\mathfrak{S}_K$ and
$\rho(\xi)$. The preliminary results of this approach
were reported in~\cite{VW2017OFC}. In this paper, we describe two fast inverse NFT
algorithms that exhibit a complexity of $\bigO{N(K+\log^2N)}$ and a rate of
convergence of $\bigO{N^{-2}}$ where $N$ is
the number of samples and $K$ is the number of eigenvalues (or bound states).
Finally, we note that the LP algorithm (irrespective of the underlying
discrete system) has the reputation of being ill-conditioned or unstable in the
presence of noise~\cite{BKK1986,SF2002} in the reflection coefficient. For optical
communication, this observation is important but not critical as the reflection
coefficient is known exactly at the stage of encoding of information at the
transmitter end. A more relevant question here, therefore, is the stability of the
algorithm in the presence of round-off errors. We provide exhaustive numerical tests
in order to understand the ill-conditioning effects; however, no
theoretical results for stability are provided.
This paper is organized as follows: Sec.~\ref{sec:akns-sys} discusses the basic
theory of scattering. Sec.~\ref{sec:discrete-system} introduces the discrete
framework for forward/inverse scattering, which admits of the layer-peeling
property. This section also introduces a recipe for
computing a class of signals dubbed as the \emph{nonlinearly bandlimited}
signals. Finally, the inverse NFT
is described in Sec.~\ref{sec:fast-inverse-NFT} and the numerical results are
presented in Sec.~\ref{sec:num-res}. Sec.~\ref{sec:final} concludes this paper.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=1]{sm}
\caption{The figure depicts the equivalent \emph{layered-media} for the discrete
scattering problem in Sec.~\ref{sec:discrete-system}. In each of the layers, the ZS-problem
is approximated by two instantaneous scatterers and a ``free-space'' propagation
between them.\label{fig:sm}}
\end{figure*}
\section{The AKNS System}
\label{sec:akns-sys}
The NFT of any complex-valued signal $q(t)$ is introduced
via the associated Zakharov-Shabat scattering problem~\cite{ZS1972}
which can be stated as follows:
Let $\zeta\in\field{R}$ and $\vv{v}=(v_1,v_2)^{\tp}\in\field{C}^2$, then
\begin{equation}\label{eq:zs-prob}
\vv{v}_t = -i\zeta\sigma_3\vv{v}+U\vv{v},
\end{equation}
where $\sigma_3=\diag(1,-1)$, and, the matrix elements of $U$ are $U_{11}=U_{22}=0$ and
$U_{12}=q(t)=-U_{21}^*=-r^*(t)$. Here, $q(t)$ is identified as the \emph{scattering potential}. The
solution of the scattering
problem~\eqref{eq:zs-prob}, henceforth referred
to as the ZS problem, consists in finding the so called
\emph{scattering coefficients} which are defined through
special solutions of~\eqref{eq:zs-prob}
known as the \emph{Jost solutions}. The Jost solution of the \emph{first kind}, denoted
by $\vs{\psi}(t;\zeta)$, has the asymptotic behavior
$\vs{\psi}(t;\zeta)e^{-i\zeta t}\rightarrow(0,1)^{\tp}$ as $t\rightarrow\infty$.
The Jost solution of the \emph{second kind}, denoted by $\vs{\phi}(t;\zeta)$, has
the asymptotic behavior $\vs{\phi}(t;\zeta)e^{i\zeta t}\rightarrow(1,0)^{\tp}$
as $t\rightarrow-\infty$. The so-called scattering coefficients, $a(\zeta)$ and
$b(\zeta)$, are obtained from the asymptotic behavior
$\vs{\phi}(t;\zeta)\rightarrow (a(\zeta)e^{-i\zeta t}, b(\zeta)e^{i\zeta
t})^{\tp}$ as $t\rightarrow\infty$. The process
of computing these scattering coefficients will be referred
to as \emph{forward scattering}.
In general, the nonlinear Fourier spectrum for the potential $q(t)$ comprises
a \emph{discrete} and a \emph{continuous} spectrum. The discrete spectrum consists
of the so called \emph{eigenvalues} $\zeta_k\in\field{C}_+$, such that
$a(\zeta_k)=0$, and, the \emph{norming constants} $b_k$ such that
$\vs{\phi}(t;\zeta_k)=b_k\vs{\psi}(t;\zeta_k)$. Note that $(\zeta_k,\,b_k)$
describes a \emph{bound state} or a \emph{solitonic state}
associated with the potential. For convenience, let the
discrete spectrum be denoted by the set
\begin{equation}
\mathfrak{S}_K=\{(\zeta_k,\,b_k)\in\field{C}^2|\,\Im{\zeta_k}>0,\,k=1,2,\ldots,K\}.
\end{equation}
The continuous spectrum, also
referred to as the \emph{reflection coefficient}, is
defined by $\rho(\xi)={b(\xi)}/{a(\xi)}$ for $\xi\in\field{R}$. In preparation
for the discussion in the following sections, let us define
\begin{equation}\label{eq:a-S}
a_S(\zeta)=\prod_{k=1}^{K}\left(\frac{\zeta-{\zeta}_k}{\zeta-{\zeta}^*_k}\right),
\end{equation}
and $\rho_R(\xi)=a_S(\xi)\rho(\xi)$. The reflection coefficient $\rho_R(\xi)$
now corresponds to a purely radiative potential.
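For illustration, the map $\rho\mapsto\rho_R$ can be realized numerically as in the following minimal Python sketch (not part of the original formulation; the eigenvalues and the Gaussian reflection coefficient below are placeholder data):
\begin{verbatim}
import numpy as np

def a_S(xi, zeta):
    """Product over eigenvalues zeta_k (Im zeta_k > 0), cf. the definition of a_S."""
    out = np.ones_like(np.asarray(xi, dtype=complex))
    for zk in zeta:
        out *= (xi - zk) / (xi - np.conj(zk))
    return out

# Placeholder spectrum: two eigenvalues and a Gaussian reflection coefficient.
xi = np.linspace(-20.0, 20.0, 4097)
zeta = np.array([0.5j, 1.0 + 0.75j])
rho = 0.3 * np.exp(-xi**2)
rho_R = a_S(xi, zeta) * rho   # reflection coefficient of the radiative part
\end{verbatim}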
Next, let us note that the class of integrable nonlinear evolution problems that can be treated by the
methods proposed in this article are those described by the Ablowitz-Kaup-Newell-Segur
formalism~\cite{AKNS1974,AS1981}. In optical fiber communication, the propagation of optical
field in a loss-less single mode fiber under Kerr-type focusing nonlinearity
is governed by the nonlinear Schr\"odinger equation (NSE)~\cite{HK1987,Agrawal2013} which
can be cast into the following standard form
\begin{equation}\label{eq:NSE}
i\partial_xq=\partial_t^2q+2|q|^2q,\quad(t,x)\in\field{R}\times\field{R}_+,
\end{equation}
where $q(t,x)$ is a complex valued function associated with the slowly varying
envelope of the electric field, $t$ is the retarded time and $x$
is position along the fiber. If the potential evolves according
to~\eqref{eq:NSE}, then, the scattering data evolves as:
$b_k(x)=b_ke^{-4i\zeta_k^2x}$ and $\rho(\xi,x)=\rho(\xi)e^{-4i\xi^2x}$
($a(\zeta)$ and, consequently, $\zeta_k$ do not evolve). In the rest of
the paper, we suppress the dependence on $x$ for the sake of brevity.
\section{Discrete Inverse Scattering}
\label{sec:discrete-system}
In order to discuss the discretization scheme, we take an equispaced grid defined
by $t_n= T_1 + nh,\,\,n=0,1,\ldots,N,$ with $t_{N}=T_2$ where $h$ is the grid spacing.
Define $\ell_-,\ell_+\in\field{R}$ such that $h\ell_-= -T_1$, $h\ell_+= T_2$.
Further, let us define $z=e^{i\zeta h}$. For the potential functions supported
in $[T_1, T_2]$, we set $Q_n=2hq(t_n)$,
$R_{n}=2hr(t_n)$. In the following, we summarize the discrete framework reported
in~\cite{V2017INFT1} which is based on the trapezoidal rule of integration.
Setting $\Theta_n=1-Q_nR_n$, the recurrence
relation for the Jost solution reads as
$\vv{v}_{n+1}=z^{-1}M_{n+1}(z^2)\vv{v}_n$, which is referred to as the
\emph{discrete scattering} problem. Here $M_{n+1}(z^2)$
is known as the \emph{transfer matrix} which is given by
\begin{equation}\label{eq:scatter-TR}
M_{n+1}(z^2)=\frac{1}{\Theta_{n+1}}
\begin{pmatrix}
1+z^2Q_{n+1}R_n& z^2Q_{n+1}+Q_n\\
R_{n+1}+z^2R_n & R_{n+1}Q_n + z^2
\end{pmatrix}.
\end{equation}
Note that the transfer matrix approach introduced above is analogous
to that used to solve wave-propagation problems in dielectric
layered-media~\cite[Chap.~1]{BW1999}. In particular, from the factorization
\begin{equation*}
\begin{split}
\vv{v}_{n+1}&=\frac{1}{\Theta_{n+1}}
\begin{pmatrix}
1&Q_{n+1}\\
R_{n+1}& 1
\end{pmatrix}
\begin{pmatrix}
z^{-1}&0\\
0& z
\end{pmatrix}
\begin{pmatrix}
1&Q_{n}\\
R_{n}& 1
\end{pmatrix}\vv{v}_n,
\end{split}
\end{equation*}
it can be inferred that the continuous system in~\eqref{eq:zs-prob} is
approximated by two instantaneous scatterers with ``free-space''
propagation between them in each of the layers as shown in Fig.~\ref{fig:sm}.
The error analysis of the discrete system
presented above is carried out in~\cite{V2017INFT1} where it is shown that
the global order of convergence is $\bigO{h^2}$ for fixed $\zeta$.
In order to express the discrete approximation to the Jost solutions, let us
define the vector-valued polynomial
\begin{equation}\label{eq:poly-vec}
\vv{P}_n(z)=\begin{pmatrix}
P^{(n)}_{1}(z)\\
P^{(n)}_{2}(z)
\end{pmatrix}
=\sum_{k=0}^n
\vv{P}^{(n)}_{k}z^k
=\sum_{k=0}^n
\begin{pmatrix}
P^{(n)}_{1,k}\\
P^{(n)}_{2,k}
\end{pmatrix}z^k.
\end{equation}
The Jost solution $\vs{\phi}$ can be written in the form $\vs{\phi}_n =
z^{\ell_-}z^{-n}\vv{P}_n(z^2)$ with the initial condition given by
$\vs{\phi}_0=z^{\ell_-}(1,0)^{\tp}$ that translate into
$\vv{P}_0=(1,0)^{\tp}$. The recurrence relation for $\vv{P}_n(z^2)$
takes the form
\begin{equation}\label{eq:poly-scatter}
\vv{P}_{n+1}(z^2)= M_{n+1}(z^2)\vv{P}_n(z^2).
\end{equation}
The discrete system discussed above facilitated the development of a
fast forward scattering algorithm in~\cite{V2017INFT1}. This relied on the fact that the
transfer matrices have polynomial entries--a form that is amenable to FFT-based
fast polynomial arithmetic~\cite{Henrici1993}.
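To make the polynomial recurrence concrete, the following sequential $O(N^2)$ Python sketch accumulates the coefficient vectors of $\vv{P}_N(z^2)$ directly from \eqref{eq:scatter-TR} and \eqref{eq:poly-scatter} (the potential samples are assumed to be given; the fast algorithm of~\cite{V2017INFT1} replaces this sweep by a divide-and-conquer tree of FFT-based polynomial products and is not reproduced here):
\begin{verbatim}
import numpy as np

def forward_scatter(q, h):
    """Sequential O(N^2) accumulation of P_N(z^2); q holds the samples q(t_n)."""
    Q = 2 * h * np.asarray(q, dtype=complex)   # Q_n = 2 h q(t_n)
    R = -np.conj(Q)                            # R_n = -Q_n^*
    N = len(Q) - 1
    P1 = np.zeros(N + 1, dtype=complex); P1[0] = 1.0   # P^{(0)} = (1, 0)^T
    P2 = np.zeros(N + 1, dtype=complex)
    for n in range(N):
        Theta = 1.0 - Q[n + 1] * R[n + 1]
        zP1 = np.roll(P1, 1); zP1[0] = 0.0     # multiplication by z^2
        zP2 = np.roll(P2, 1); zP2[0] = 0.0
        P1, P2 = ((P1 + Q[n + 1] * R[n] * zP1 + Q[n] * P2 + Q[n + 1] * zP2) / Theta,
                  (R[n + 1] * P1 + R[n] * zP1 + R[n + 1] * Q[n] * P2 + zP2) / Theta)
    return P1, P2   # coefficients of P_1^{(N)}(z^2) and P_2^{(N)}(z^2)
\end{verbatim}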
In the following sections, we provide details of the fast inverse NFT algorithm
by first developing the methods needed for inversion of the continuous
spectrum to compute what can be viewed as a purely radiative potential. The
general version of the inverse NFT is then developed using
the FDT algorithm presented in~\cite{V2017INFT1}.
\subsection{The layer-peeling algorithm}
\label{sec:discrete-TR-summary}
Borrowing the terminology from the theory of layered dielectric
media~\cite[Chap.~1]{BW1999}, let the interval $[t_n,t_{n+1}]$ correspond to the
$(n+1)$-th \emph{layer} which is
completely characterized by the transfer matrix $M_{n+1}(z^2)$ (see Fig.~\ref{fig:sm}). The
\emph{discrete forward scattering} consists in ``accumulating'' all the layers to
form $\vv{P}_N(z^2)$. The problem of recovering the discrete samples of the
scattering potential from the discrete scattering coefficients or
$\vv{P}_N(z^2)$ is referred to as the \emph{discrete inverse
scattering} which is facilitated by the so-called
\emph{layer-peeling} (LP) algorithm. Starting from the recurrence
relation~\eqref{eq:poly-scatter}, one LP step consists in using
$\vv{P}_{n+1}(z^2)$ to retrieve the samples of the potential
needed to compute the transfer matrix
$\wtilde{M}_{n+1}(z^2)=z^{-2}[M_{n+1}(z^2)]^{-1}$ so that the entire step can be
repeated with $\vv{P}_{n}(z^2)$ until all the samples of the potential are
recovered. The mathematical details of this algorithm can be found
in~\cite{V2017INFT1}. For the sake of reader's convenience, some of the main results
are summarized below.
Assume $Q_0=0$. Then the recurrence relation~\eqref{eq:poly-scatter} yields
\begin{equation}
\label{eq:TR-cond}
P^{(n+1)}_{1,0}
=\Theta^{-1}_{n+1}\prod_{k=1}^{n}\biggl(\frac{1+Q_kR_k}{1-Q_kR_k}\biggl)
=\Theta^{-1}_{n+1}\prod_{k=1}^{n}\biggl(\frac{2-\Theta_k}{\Theta_k}\biggl),
\end{equation}
and $\vv{P}^{(n+1)}_{n+1}= 0$. The last relationship follows from the assumption $Q_0=0$. For sufficiently
small $h$, it is reasonable to assume that
$1+Q_nR_n>0$ so that $P^{(n)}_{1,0}>0$ (it also implies that
$|Q_n|=|R_n|<1$). The layer-peeling step consists in computing the samples of
the potential, $R_{n+1}$ and $R_n$ (with $Q_{n+1}=-R^*_{n+1}$ and
$Q_{n}=-R^*_{n}$) as follows:
\begin{equation}
R_{n+1} = \frac{P^{(n+1)}_{2,0}}{P^{(n+1)}_{1,0}},\quad
R_n = \frac{\chi}{1 + \sqrt{1+|\chi|^2}},
\end{equation}
where
\[
\chi=\frac{[P^{(n+1)}_{2,1}-R_{n+1}P^{(n+1)}_{1,1}]}{[P^{(n+1)}_{1,0}-Q_{n+1}P^{(n+1)}_{2,0}]}.
\]
Note that $P^{(n+1)}_{1,0}\neq0$ and ${P^{(n+1)}_{1,0} -
Q_{n+1}P^{(n+1)}_{2,0}}\neq0$.
As evident from~\eqref{eq:scatter-TR}, the transfer matrix, $M_{n+1}(z^2)$,
connecting $\vv{P}_n(z^2)$ and $\vv{P}_{n+1}(z^2)$ is completely determined by
these relations.
If the steps of the LP algorithm are carried out sequentially, one ends up with a
complexity of $\bigO{N^2}$. It turns out that a fast implementation of this LP
algorithm does exist~\cite{V2017INFT1}, which has a complexity of
$\bigO{N\log^2N}$ for the discrete system
considered in this article. In the following sections, we describe how to
synthesize the input for the LP algorithm in order to compute the radiative part of the
scattering potential.
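As a sketch of one LP step, the recovery of $(R_{n+1}, R_n)$ from the leading $z^2$-coefficients of $\vv{P}_{n+1}(z^2)$ reads as follows in Python (the subsequent application of $\wtilde{M}_{n+1}(z^2)$, i.e. the actual peeling of the layer, follows~\cite{V2017INFT1} and is omitted here):
\begin{verbatim}
import numpy as np

def lp_extract(P1, P2):
    """Recover (R_{n+1}, R_n) from the coefficients of P^{(n+1)}(z^2).

    P1, P2 are the coefficient arrays of P_1^{(n+1)} and P_2^{(n+1)} in powers
    of z^2; only the constant and the linear coefficients are needed.
    """
    R_np1 = P2[0] / P1[0]
    Q_np1 = -np.conj(R_np1)
    chi = (P2[1] - R_np1 * P1[1]) / (P1[0] - Q_np1 * P2[0])
    R_n = chi / (1.0 + np.sqrt(1.0 + abs(chi) ** 2))
    return R_np1, R_n
\end{verbatim}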
\subsection{Nonlinearly bandlimited signals}
\label{sec:nbs-rho}
A signal is said to be \emph{nonlinearly bandlimited} if it has an empty discrete
spectrum and a reflection coefficient $\rho(\xi)$ that is compactly supported in
$\field{R}$. This is a direct generalization of the notion of bandlimited signals
for conventional Fourier transform. However, nonlinearly
bandlimited signals are not bandlimited, in general. Let us consider
the reflection coefficient $\rho(\xi)$ as
input. Let the support of $\rho(\xi)$ be contained in $[-\Lambda, \Lambda]$ so
that its Fourier series representation is
\begin{equation}
\rho(\xi)=\sum_{k\in\field{Z}}\rho_k e^{\frac{ik\pi \xi}{\Lambda}}.
\end{equation}
If $|\rho_k|$ is significant only for $k\geq -n$ ($n\in\field{Z}_+$), then
$\rho(\xi)=\sum_{k=-n}^{\infty}\rho_kz^{2k}+\mathcal{R}_n(z^2)$,
where $z=\exp(i\pi\xi/2\Lambda)$ and $\mathcal{R}_n$ denotes the remainder terms.
Putting $h=\pi/2\Lambda$ and $T_2=nh\equiv h\ell_+$, we have $\exp(2i\xi T_2)=z^{2n}$ so that
\begin{equation}\label{eq:series}
\breve{\rho}(\xi)=\rho(\xi)z^{2n}
=\sum_{k=0}^{\infty}\breve{\rho}_kz^{2k}+z^{2n}\mathcal{R}_n(z^2).
\end{equation}
Now, it follows that $\breve{\rho}_k=2h\breve{p}(2hk)$ where
\begin{equation}\label{eq:lubich-nbl}
\breve{p}(\tau) = \fourier^{-1}[\breve{\rho}](\tau)
=\frac{1}{2\pi}\int_{-\Lambda}^{\Lambda}\breve{\rho}(\xi) e^{-i\xi\tau}d\xi.
\end{equation}
Let $2\Lambda_0$ be the fundamental period
and $\Lambda = m\Lambda_0$, where $m\in\field{Z}_+$; then, $h =
\pi/2m\Lambda_0\equiv h_0/m$; therefore, $h\leq h_0$. Now, if we ignore the
remainder term and truncate the series after $N$ terms in~\eqref{eq:series},
the input to the fast LP algorithm can be
\begin{equation}
P^{(N)}_1(z^2) = 1,\quad P^{(N)}_2(z^2) =\sum_{k=0}^{N-1}\breve{\rho}_kz^{2k}.
\end{equation}
This accomplishes the inversion of the reflection coefficient which is assumed
to be compactly supported. Let $\xi_j=j\Delta\xi$ for $j=-M,\ldots,M-1$, where
\[
\Delta\xi = \frac{\pi}{2Mh}.
\]
Then the coefficients $\breve{\rho}_k$ can be estimated using the Fourier sum
\begin{align*}
\breve{\rho}_k=2h\breve{p}(2hk)
&\approx \frac{1}{2M}\sum_{j=-M}^{M-1}\breve{\rho}(\xi_j)e^{-i2hk\xi_j}\\
&= \frac{1}{2M}\sum_{j=-M}^{M-1}\breve{\rho}(\xi_j)e^{-i\frac{2\pi jk}{2M}},
\end{align*}
for $k=0,1,\ldots,N$. The quantity $M$ is chosen to be some multiple of $N$, say,
$M=n_{\text{os}}\times N$ where $n_{\text{os}}$ is referred to as the
\emph{oversampling factor}. Therefore, the overall complexity of synthesizing the
input for the LP algorithm works out to be $\bigO{N\log N}$.
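A minimal Python sketch of this synthesis step is given below; for clarity the Fourier sum is written out explicitly (costing $O(MN)$), whereas an implementation aiming at the $O(N\log N)$ complexity quoted above would evaluate it with the FFT. The choice $\ell_+=N$ made here is an illustrative assumption.
\begin{verbatim}
import numpy as np

def lp_input(rho, Lam, N, n_os=8, ell_plus=None):
    """Coefficients of P_1^{(N)}(z^2) and P_2^{(N)}(z^2) from a compactly
    supported reflection coefficient rho (callable), supp(rho) in [-Lam, Lam]."""
    if ell_plus is None:
        ell_plus = N                       # assumption: T_2 = N h
    h = np.pi / (2 * Lam)
    T2 = ell_plus * h
    M = n_os * N
    dxi = np.pi / (2 * M * h)
    xi = dxi * np.arange(-M, M)
    v = rho(xi) * np.exp(2j * xi * T2)     # samples of breve{rho}(xi_j)
    k = np.arange(N)
    rho_breve = np.exp(-2j * h * np.outer(k, xi)) @ v / (2 * M)
    P1 = np.zeros(N, dtype=complex); P1[0] = 1.0
    P2 = rho_breve
    return P1, P2
\end{verbatim}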
Before we conclude this discussion, let us consider the problem of estimation of
$T_2$. It is of interest to determine a $T_2$ such that the energy in the tail
of the scattering potential, which is to be neglected, is below a certain
threshold, say, $\epsilon$. Fortunately, there
is an interesting result due to
Epstein~\cite{Epstein2004} that allows us to do exactly that. From the theory of Gelfand-Levitan-Marchenko
equations, it can be shown that there exists a time $T$ such that
\begin{equation}\label{eq:epstien-thm}
\mathcal{E}_+(T)=\int^{\infty}_{T}|q(t)|^2dt
\leq\frac{2\mathcal{I}^2_2(T)}{[1-\mathcal{I}^2_1(T)]},
\end{equation}
assuming $\mathcal{I}_1(T)<1$ where
\[
\mathcal{I}_m(T)=\left[\int^{\infty}_{2T}|p(-\tau)|^md\tau\right]^{1/m}
\]
for $m=1,2$ (see Appendix~\ref{app:epstein} for a proof which, in essence, is contained
in the work of Epstein~\cite{Epstein2004}). Let $T=T(\epsilon)$ be such that
\begin{equation}\label{eq:T-eps}
\frac{2\mathcal{I}^2_2(T)}{[1-\mathcal{I}^2_1(T)]}\leq\epsilon,
\end{equation}
then $\mathcal{E}_+(T)\leq\epsilon$. Consequently, it suffices to choose $T_2\geq
T(\epsilon)$.
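In practice, $T(\epsilon)$ can be estimated from samples of the nonlinear impulse response, as in the following Python sketch (the grid and the assumption that the tail beyond the last sample is negligible are ours):
\begin{verbatim}
import numpy as np

def estimate_T(tau, p_abs, eps):
    """Smallest T on the grid with 2 I_2(T)^2 / (1 - I_1(T)^2) <= eps.

    tau   : increasing grid of tau >= 0
    p_abs : samples of |p(-tau)| on that grid (tail beyond tau[-1] neglected)
    """
    def tail(m):
        f = p_abs ** m
        c = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))))
        return c[-1] - c                   # integral from tau[i] to tau[-1]
    I1 = tail(1)                           # I_1(T), with 2T = tau[i]
    I2sq = tail(2)                         # I_2(T)^2
    with np.errstate(divide="ignore", invalid="ignore"):
        ok = (I1 < 1.0) & (2.0 * I2sq / (1.0 - I1 ** 2) <= eps)
    if not ok.any():
        return None
    return tau[np.argmax(ok)] / 2.0        # 2T = tau[i]  =>  T = tau[i]/2
\end{verbatim}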
\subsubsection{Alternative approach}
\label{sec:nbs-b}
It is possible to compute the polynomial approximation to the scattering
coefficients $a(\xi)$ and $b(\xi)$ using $\rho(\xi)$, which can be then used to
synthesize the input to the fast LP algorithm. There is no apparent
benefit of this approach compared to the method described above; however, we
describe it for the sake of completeness. The first step consists of constructing
a polynomial approximation for
$a(\zeta)$ in $|z|<1$ where $z=e^{i\zeta h}$ (under the assumption that no bound
states are present). To this end, let
\begin{equation}\label{eq:b-series}
\rho(\xi)=\sum_{k\in\field{Z}}\rho_kz^{2k},\quad z=e^{i\xi h}.
\end{equation}
With a slight abuse of notation, let us denote this expansion as $\rho(z^2)$.
Let us note that in this case, $a(\xi)$ is not analytic in $\field{R}$ which
means that it is also not analytic on the unit circle $|z|=1$. Here, the
relation~\cite{AKNS1974,AS1981} $|a(\xi)|^2+|b(\xi)|^2=1$ allows us to set up a Riemann-Hilbert (RH)
problem for a sectionally analytic function
\begin{equation}
\tilde{g}(z^2)=\begin{cases}
g(z^2) & |z|<1,\\
-{g}^*(1/z^{*2}) & |z|>1,
\end{cases}
\end{equation}
such that the jump condition is given by
\begin{equation}\label{eq:RH-circ}
\tilde{g}^{(-)}(z^2) - \tilde{g}^{(+)}(z^2) =
\log\left[\frac{1}{1+|\rho(z^2)|^2}\right],\quad |z|=1,
\end{equation}
where $\tilde{g}^{(-)}(z^2)$ and $\tilde{g}^{(+)}(z^2)$ denote the boundary values when
approaching the unit circle from $|z|<1$ and $|z|>1$, respectively. Let
the jump function on the RHS of~\eqref{eq:RH-circ} be denoted by $f(z^2)$ which
can be expanded as a Fourier series
\begin{equation}\label{eq:f-series}
f(z^2)=\sum_{k\in\field{Z}}f_kz^{2k},\quad |z|=1.
\end{equation}
Now, the solution to the RH problem can be stated using the Cauchy integral~\cite[Chap.~14]{Henrici1993}
\begin{equation}
\tilde{g}(z^2) = \frac{1}{2\pi i}\oint_{|w|=1}\frac{f(w)}{z^2-w}dw.
\end{equation}
The function $g(z^2)$ analytic in $|z|<1$ then works out to be
\begin{equation}
g(z^2)=\sum_{k\in\field{Z}_+\cup\{0\}}f_kz^{2k},\quad |z|<1.
\end{equation}
Finally, $a_N(z^2)=\{\exp[g(z^2)]\}_{N}$ with $z=e^{i\zeta h}$ where
$\{\cdot\}_N$ denotes truncation after $N$ terms. The implementation of the
procedure laid out above can be carried out using the FFT
algorithm, which involves computation of the coefficients $f_k$ and the
exponentiation in the last step~\cite[Chap.~13]{Henrici1993}. Note that, in the computation of
$g(z^2)$, we discarded half of the coefficients; therefore, in the numerical implementation
it is necessary to work with at least $2N$ samples of
$f(z^2)$ in order to obtain $a_N(z^2)$ which is a polynomial of degree $N-1$.
The next step is to compute the polynomial approximation for $\breve{b}(\xi)$. To
this end, consider
\begin{equation}
\breve{b}(\xi)=b(\xi)z^{2n}
=\left[\sum_{k=0}^{\infty}\breve{\rho}_kz^{2k}+z^{2n}\mathcal{R}_n(z^2)\right]\exp[g(z^2)].
\end{equation}
In the following, we will again discard the remainder term. The polynomial
approximation for $\breve{b}(\xi)$ reads as
\begin{equation}
\breve{b}_N(z^2)=\left\{a_N(z^2)\sum_{k=0}^{N-1}\breve{\rho}_kz^{2k}\right\}_N
=\sum_{k=0}^{N-1}\breve{b}_kz^{2k}.
\end{equation}
Now, the input to the fast LP algorithm works out to be
\begin{equation}
P^{(N)}_1(z^2)=\sum_{k=0}^{N-1}a_kz^{2k},\quad P^{(N)}_2(z^2)=\sum_{k=0}^{N-1}\breve{b}_kz^{2k}.
\end{equation}
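The procedure of this subsection may be summarized by the following Python sketch (a crude numerical realization, not the implementation of~\cite{V2017INFT1}; it assumes a grid of size $L\geq 2N$ on the unit circle and uses the jump function of \eqref{eq:RH-circ}):
\begin{verbatim}
import numpy as np

def a_and_b_coeffs(rho_vals, rho_breve, N):
    """Polynomial coefficients of a_N(z^2) and breve{b}_N(z^2).

    rho_vals  : rho(z^2) sampled at z^2 = exp(2 pi i j / L), j = 0,...,L-1 (L >= 2N)
    rho_breve : the coefficients breve{rho}_k, k = 0,...,N-1, from the synthesis above
    """
    L = len(rho_vals)
    f = np.log(1.0 / (1.0 + np.abs(rho_vals) ** 2))   # jump function on |z| = 1
    fk = np.fft.fft(f) / L                            # Fourier coefficients f_k
    mask = np.zeros(L); mask[: L // 2] = 1.0          # keep non-negative powers of z^2
    g = L * np.fft.ifft(fk * mask)                    # g(z^2) on the grid
    a_vals = np.exp(g)
    a_coeffs = (np.fft.fft(a_vals) / L)[:N]           # truncation after N terms
    b_coeffs = np.convolve(a_coeffs, rho_breve)[:N]   # {a_N(z^2) sum rho_k z^{2k}}_N
    return a_coeffs, b_coeffs
\end{verbatim}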
\subsection{Fast inverse NFT}
\label{sec:fast-inverse-NFT}
In the previous sections, we restricted ourselves to the case of empty discrete
spectrum. In this section, we describe how a fast inverse NFT algorithm can be
developed for the general NF spectrum
using either the Classical DT (CDT) or the FDT algorithm reported
in~\cite{V2017INFT1}.
Given a reflection coefficient $\rho(\xi),\,\xi\in\field{R},$ and
the discrete spectrum $\mathfrak{S}_K$, define
$a_S(\xi)$ as in~\eqref{eq:a-S} and $\rho_R(\xi)=a_S(\xi)\rho(\xi)$. Let $q(t)$
denote the scattering potential
corresponding to the aforementioned NF spectrum.
Now, as illustrated in Fig.~\ref{fig:schema-inft}, the inverse NFT can be carried out
in the following two steps:
\begin{itemize}
\item[I.] Generate the signal $q_R(t)$ corresponding to the reflection
coefficient $\rho_R(\xi)$ using the method described in
Sec.~\ref{sec:nbs-rho}. This amounts to computing the purely radiative part of
the complete potential $q(t)$. The complexity of this step is
$\bigO{N\log^2N}$ if the number of nodes used for the FFT operation
involved there is given by $M=n_{\text{os}}N$ where $n_{\text{os}}\ll N$. Here,
$n_{\text{os}}$ can be identified as the
oversampling factor (typically $\leq8$).
\item[II.] Use the signal $q_R(t)$ as the seed potential and add
bound states described by $\mathfrak{S}_K$ using the CDT or the FDT algorithm to
obtain $q(t)$. The complexity of this step is $\bigO{N(K+\log^2N)}$
when FDT is employed while $\bigO{K^2N}$ when CDT is employed. Here we
also consider the partial-fraction (PF) variant of the FDT
algorithm (labeled as FDT-PF), which is shown to offer a small increase in speed~\cite{V2017INFT1}.
Finally, let us note that the overall complexity of the inverse NFT is given by
$\bigO{N(K+\log^2N)}$ when FDT is used and $\bigO{N(K^2+\log^2N)}$ when CDT is
used.
\end{itemize}
\section{Numerical Experiments}
\label{sec:num-res}
Let $q^{(\text{num}.)}$ denote the numerically computed potential for a given NF
spectrum. If the exact potential $q$ is known, then we quantify the error as
\begin{equation}\label{eq:e_rel-q}
e_{\text{rel.}}={\|q^{(\text{num.})}-q\|_{\fs{L}^2}}/{\|q\|_{\fs{L}^2}},
\end{equation}
where the integrals are evaluated numerically using the trapezoidal rule. For
the purpose of convergence analysis, only those examples are deemed to be admissible
where closed-form solutions are available. However,
on account of the scarcity of such examples, an exhaustive test for the universality of
the algorithm cannot be carried out in this manner. To remedy this, we choose a
higher-order convergent algorithm for the forward scattering problem and compute the NF
spectrum of the potential generated by the fast inverse NFT. The
error between the computed NF spectrum and the provided NF spectrum can serve
as a good metric to measure the robustness of the algorithm.
For the higher-order
scheme, we choose the (exponential) $3$-step \emph{implicit Adams}
method (IA$_3$)~\cite{HNW1993} which has an order of convergence $4$, i.e.,
$\bigO{N^{-4}}$ (see Appendix~\ref{app:IA} for details). Fortunately,
this method can also be made fast by the use of
FFT-based polynomial arithmetic, which allows us to test for a large number
of samples ($N\in\{2^{10},2^{11},\ldots,2^{20}\}$). Note that this procedure
by no means qualifies as the test for total numerical error
on account of the fact that the error metric is not the \emph{true} numerical
error. Therefore, the results in this case must be interpreted with caution.
Further, for the sake of comparison, we also consider the \emph{T\"oplitz inner bordering}
(TIB) algorithm for inverse scattering
(Belai~\textit{et~al.}~\cite{BFPS2007}) whenever the
discrete spectrum is empty. We use the second order convergent version of this
algorithm which has also been reported in Frumin~\textit{et~al.}~\cite{FBPS2015}. The latter
paper provides an improved understanding of the
original algorithm presented in~\cite{BFPS2007}; therefore, we choose to refer
to~\cite{FBPS2015} in this article whenever we mention the TIB algorithm.
Finally, let us emphasize that the primary objective behind the numerical tests in
this section is to verify the trends expected from the theory. The actual values
of any defined performance metric observed in the results are merely
representative of what can be achieved\footnote{The total run-time, for
instance, may differ on different computing machines; therefore, we would only be
interested in trends as far as the complexity analysis of the
algorithms are concerned.}, and, admittedly, better results
can be obtained by appropriately tuning the parameters used in the
test. For instance, a good choice of the computational domain
helps to maintain a smaller step-size in the numerical discretization and,
hence, lowers the numerical error.
\begin{figure*}[!tbh]
\centering
\includegraphics[scale=1]{convg_cs}
\caption{\label{fig:TR-comp}The figure shows a comparison of the
algorithms LP and TIB for the secant-hyperbolic
potential ($A_R=0.4$) with respect to convergence rate (left) and run-time per sample (right).}
\end{figure*}
\begin{figure*}[!th]
\centering
\includegraphics[scale=1]{convg}
\caption{\label{fig:fast-INFT}The figure shows the performance of the algorithms
INFT-A/-B/-C for a fixed number of eigenvalues ($K\in\{12,16,20\}$) and
varying number of samples ($N$) for the secant-hyperbolic potential
(see Sec.~\ref{sec:sech-ex}). The error plotted on the
vertical axis is defined by~\eqref{eq:ds-sech}.}
\end{figure*}
\begin{figure*}[!th]
\centering
\includegraphics[scale=1]{convg_new}
\caption{\label{fig:fast-INFT-EV}The figure shows the performance of the algorithms
INFT-A/-B/-C for a fixed number of samples ($N\in\{2^{13},2^{14},2^{15}\}$) and varying number of eigenvalues
($K$) for the secant-hyperbolic potential (see Sec.~\ref{sec:sech-ex}).
The error is quantified by~\eqref{eq:ds-sech}.}
\end{figure*}
\subsection{Secant-hyperbolic potential}
\label{sec:sech-ex}
Here, we would like to devise tests to confirm the order of
convergence and the complexity of computations for the algorithms proposed
thus far. To this end, we choose the secant-hyperbolic potential given by
$q(t)=(A_R+K)\sech t$, which is treated exactly in~\cite{SY1974}. Here
$A_R\in[0,0.5)$ and $K$ is a positive integer. The discrete spectrum can be stated as
\begin{equation}\label{eq:ds-sech}
\mathfrak{S}_{K}=\left\{(\zeta_k,b_k)
\left|\begin{aligned}
&\zeta_k=i(A_R+0.5+K-k),\\
&b_k=(-1)^{k},\,k=1,2,\ldots,K\end{aligned}\right.
\right\},
\end{equation}
and the continuous spectrum is given by $\rho=\rho_R/a_S$
where $a_S(\xi)$ is defined by~\eqref{eq:a-S} and
\begin{equation}\label{eq:cs-R}
\rho_R(\xi)=b(\xi)\frac{\Gamma(0.5+A_R-i\xi)\Gamma(0.5-A_R-i\xi)}{[\Gamma(0.5-i\xi)]^2}.
\end{equation}
with $b(\xi)=-\sin[(A_R+K)\pi]\sech(\pi\xi)$. This test consists
in studying the behavior of the fast INFTs
for different numbers of samples ($N$) as well as eigenvalues ($K$).
We set $A_R=0.4$. The scattering potential is scaled by
$\kappa= 2(\sum_{k=1}^K\Im\zeta_k)^{1/2}$ and $[-T,\,T]$,
$T=30\kappa/\min_{k}(\Im\zeta_k)$, is taken as the computational
domain and we set
$N_{\text{th}}=N/8$ for FDT-PF as in~\cite{V2017INFT1}.
Let us first consider the case $K=0$ so that $\rho=\rho_R$ (setting the convention that
$a_S=1$ when $K=0$). Note that on account of the exponential decay of $\rho$, it can
be assumed to be effectively supported in a bounded domain. Besides, the knowledge of the true
potential allows us to provide a good estimate of the computational domain.
Set $T=\log(2A_R/\epsilon)\approx 30$ for $\epsilon=10^{-12}$, then $[-T,\,T]$
can be taken as the computational domain\footnote{For the ZS problem,
let us note that the error in the initial condition at the left-boundary
can be kept below $\epsilon>0$, if
$\|q\chi_{(-\infty,T_1]}\|_{\fs{L}^1}\leq\sinh^{-1}\epsilon$~\cite{V2017INFT1}.}. The
result for $A_R=0.4$ is plotted in Fig.~\ref{fig:TR-comp} which shows that
the performance of LP is comparable to that of TIB. Further, each of these algorithms
exhibits a second order of convergence (i.e., error vanishing as
$\bigO{N^{-2}}$). The run-time
behavior in Fig.~\ref{fig:TR-comp} shows that LP-based INFTs have a poly-log complexity
per sample as opposed to the $\bigO{N}$ complexity per sample exhibited by TIB.
For $K>0$, the results are plotted in Fig.~\ref{fig:fast-INFT} which reveal that the fast INFTs
based on FDT (labeled as `INFT-B') and FDT-PF (labeled as `INFT-C') are superior to
that based on CDT (labeled as `INFT-A') which becomes unstable with increasing
number of eigenvalues. The latter, however, can be useful for a small number
of eigenvalues. The figure also confirms the second
order of convergence of INFT-B/-C which is consistent with the underlying
one-step method, namely, the trapezoidal rule. For a small number of eigenvalues,
INFT-A also exhibits a second order of convergence. Finally, let us
observe that, for fixed $N$, INFT-A has a
complexity of $\bigO{K^2}$ and that for INFT-B/-C is $\bigO{K}$. While these trends can be
confirmed from Fig.~\ref{fig:fast-INFT-EV}, let us mention that, with an improved
implementation, INFT-B/-C can be made even more competitive to INFT-A in complexity.
\begin{figure}[!th]
\centering
\includegraphics[scale=1]{sig_rc}
\caption{\label{fig:QPSK-sig}The figure shows the potential corresponding to a QPSK modulated
continuous spectrum given by~\eqref{eq:QPSK-rho} with number of symbols
$N_{\text{sym}}\in\{16,32\}$. The number of samples used is $N=2^{12}$ and
the computational domain is $[-15T_2, T_2]$ where $T_2$ is given
by~\eqref{eq:QPSK-T2}. Also, we set $A_{\text{eff.}}=10$ which is defined
by~\eqref{eq:A_eff}.}
\end{figure}
\begin{figure*}[!th]
\centering
\includegraphics[scale=1]{convg_cs_rc}
\caption{\label{fig:convg-rc} The figure shows the error
analysis for the signal generated from the continuous spectrum given
by~\eqref{eq:RC-spec} which is the frequency-domain description of the raised-cosine
filter (see Sec.~\ref{sec:nbs-test}). The error is quantified
by~\eqref{eq:e_rel-rho}.}
\end{figure*}
\begin{figure*}[!th]
\centering
\includegraphics[scale=1]{convg_qpsk_rc}
\caption{\label{fig:convg-qpskrc} The figure shows the error analysis for
the QPSK modulated continuous spectrum given by~\eqref{eq:QPSK-rho} for varying
number of symbols $N_{\text{sym}}$ (see Sec.~\ref{sec:nbs-test}). Here, we
set $A_{\text{eff.}}=10$ which is defined
by~\eqref{eq:A_eff}.}
\end{figure*}
\subsection{Nonlinearly bandlimited signals}
\label{sec:nbs-test}
Let us consider a soliton-free signal whose continuous spectrum is given by
\begin{equation}\label{eq:RC-spec}
H_{\text{rc}}(\xi)=
\begin{cases}
A_{\text{rc}}&|\tau_s\xi|\leq 1-\beta,\\
\frac{A_{\text{rc}}}{2}
\left[1+\cos\left(\frac{\pi}{2\beta}\Xi\right)\right]&
||\tau_s\xi|-1|\leq{\beta},\\
0 & |\tau_s\xi|>{1+\beta},
\end{cases}
\end{equation}
where $\Xi=|\tau_s\xi|-(1-\beta)$ with $\beta\in[0,1]$, and,
$A_{\text{rc}}$ and $\tau_s$ are positive constants.
The nonlinear impulse response (NIR) $h_{\text{rc}}(\tau)$ can be worked out exactly;
however, we do not use this information for constructing the input to the fast
LP algorithm. Note that $H_{\text{rc}}(\xi)$ and $h_{\text{rc}}(\tau)$ describe
the well-known \emph{raised-cosine} filter in the frequency-domain and the time-domain, respectively.
In order to estimate the computational domain, we use Epstein's result discussed
in Sec.~\ref{sec:nbs-rho} which consists in finding a time $T$ such that
\begin{equation}\label{eq:epstien}
\mathcal{E}_+(T)=\int^{\infty}_{T}|q(t)|^2dt\leq\frac{2\mathcal{I}^2_2(T)}{[1-\mathcal{I}^2_1(T)]},
\end{equation}
assuming $\mathcal{I}_1(T)<1$ where
\[
\mathcal{I}_m(T)=\left[\int^{\infty}_{2T}|h_{\text{rc}}(-\tau)|^md\tau\right]^{1/m}
\]
for $m=1,2$. A crude estimate for $T$ such that
$\mathcal{I}^2_2(T)=\epsilon$ is given by
\begin{equation}
2T(\epsilon)\sim\left({A_{\text{rc}}^2\tau_s^4}\right)^{1/5}
\beta^{-4/5}\epsilon^{-1/5},
\end{equation}
which uses the asymptotic form of $h_{\text{rc}}(\tau)$. If $\epsilon\ll1$, we
may assume that the potential is effectively supported\footnote{Epstein's theorem
provides an estimate for the right boundary if the right NIR
is used; therefore, strictly speaking, the computational domain must be of the
form $(-\infty, T(\epsilon)]$.} in
$[-T(\epsilon),T(\epsilon)]$ where we set $\epsilon=10^{-9}$. Also, let
$\beta=0.5$ and $\tau_s=1$ in the following.
For this example, we devise two kinds of tests. For the first kind of tests, we
disregard any modulation scheme and carry out the inverse NFT for varying number
of samples ($N$) for each of the values of $A_{\text{rc}}\in\{10,\ldots,50\}$.
In the second kind of tests, we consider the quadrature-phase-shift-keyed
(QPSK) modulation scheme which is
described later. Let $\Omega_h = [-\pi/2h,\pi/2h]$, then
the error is quantified by
\begin{equation}\label{eq:e_rel-rho}
e_{\text{rel.}}=
{\|\rho^{(\text{num.})}-\rho\|_{\fs{L}^2(\Omega_h)}}/{\|\rho\|_{\fs{L}^2(\Omega_h)}},
\end{equation}
where the integrals are computed from $N$ equispaced samples in $\Omega_h$ using the
trapezoidal rule. As stated in the beginning, the quantity $\rho^{(\text{num.})}$
is computed using the (exponential) IA$_3$.
The results of the first kind of tests are shown in Fig.~\ref{fig:convg-rc} where a
comparison is made between LP and
TIB\footnote{The complexity of TIB becomes prohibitive for increasing
$N$; therefore, we restrict ourselves to $N\leq2^{18}$.}. From the plots in the
top row of Fig.~\ref{fig:convg-rc}, the second order of
convergence is readily confirmed for both of these algorithms with LP performing
somewhat better than TIB. The plateauing of the error in these plots can be
attributed to accumulating numerical errors in the inverse NFT algorithm as well as
the implicit Adams method. The behavior of the error with respect to
$A_{\text{rc}}$ is shown in the bottom row of Fig.~\ref{fig:convg-rc} where
LP shows better performance than TIB.
Now, for the second kind of tests, we consider the QPSK
modulation of the continuous spectrum as follows
\begin{equation}\label{eq:QPSK-rho}
\rho(\xi)=\left(\sum_{n\in J}
s_ne^{-in\pi\tau_s\xi}\right)H_{\text{rc}}(\xi)=S(\xi)H_{\text{rc}}(\xi),
\end{equation}
where the index set is
$J=\{-N_{\text{sym}}/2,\ldots,N_{\text{sym}}/2-1\}$ and $s_n\in\{\pm1, \pm i\}$
with $N_{\text{sym}}>0$ being an even integer. The
estimate for the right boundary works out to be
\begin{equation}\label{eq:QPSK-T2}
T_2=T(\epsilon)+\pi\tau_s N_{\text{sym}}/4;
\end{equation}
however, an estimate for the left boundary is not available in closed
form. Here, we take a heuristic approach by setting $T_1=-W \times T_2$ where $W$
is chosen by trial and error. The scale factor $A_{\text{rc}}$ is chosen
such that $A_{\text{eff.}}=10$ where
\begin{equation}\label{eq:A_eff}
A_{\text{eff.}}=\|\rho\|_2/\|H_{\text{rc}}\|_2.
\end{equation}
It is important
to observe here that the signal generated from~\eqref{eq:QPSK-rho} is highly
asymmetric with poor decay behavior as
$t\rightarrow-\infty$ (see Fig.~\ref{fig:QPSK-sig}). Higher values of the quantities
$N_{\text{sym}}$ and $A_{\text{eff.}}$ both tend to worsen this behavior.
Therefore, this example turns out to be very challenging for the numerical
algorithm. In Fig.~\ref{fig:convg-qpskrc}, we provide results of numerical experiments conducted with
$N_{\text{sym}}\in\{4,8,\ldots,256\}$ number of symbols where
$W=5\log_2N_{\text{sym}}$ is used to determine the computational domain. The accuracy of
both LP and TIB tends to worsen with an increasing number of symbols, with
LP performing slightly better than TIB. Based on these results it is evident
that any method of pulse-shaping must take into account the
relationship between the signal and its NF spectrum as opposed to directly
applying conventional Fourier transform based techniques of
pulse-shaping.
\begin{figure*}[!th]
\centering
\includegraphics[scale=1]{convg_inft_rc}
\caption{\label{fig:convg-rc-dt} The figure shows the results of the error
analysis for an example where the discrete spectrum is $\mathfrak{S}_K$ and the continuous spectrum is
identical to the Fourier spectrum of the raised-cosine filter (see
Sec.~\ref{sec:bound-states-rc}). The error plotted on the vertical axis corresponds to
the continuous spectrum, which is quantified by~\eqref{eq:e_rel-rho}. Here
$A_{\text{rc}}=20$.}
\end{figure*}
\subsubsection{Addition of Bound states}
\label{sec:bound-states-rc}
Here, we fix $A_{\text{rc}}=20$ and assume no modulation of the continuous spectrum. The bound states
to be added are described by~\eqref{eq:ds-sech}. Let us observe that the ``augmented''
potential has a reflection coefficient which is given by
$\rho^{(\text{aug.})} = \rho/a_S$. Now the error can be
quantified by~\eqref{eq:e_rel-rho}. The potentials are scaled by $\kappa$ as in
Sec.~\ref{sec:sech-ex} and the computational domain is chosen such that
$-T_1=T_2=T(\epsilon)\kappa/\min_{k}(\Im\zeta_k)$.
The results for the continuous spectrum are shown in Fig.~\ref{fig:convg-rc-dt}
where the order of convergence can be confirmed from the plots in the top
row. The plots in the bottom row reveal that INFT-A, which is based on CDT, is
unstable for increasing number of eigenvalues. On the other hand,
the algorithms INFT-B/-C, which are based on FDT/FDT-PF, respectively, seem to
perform equally well without showing any signs of instability.
For the discrete spectrum, we assume that the discrete eigenvalues are known
exactly, and, then use this information to compute the norming constants using
the method discussed in~\cite{V2017INFT1}. The
error is quantified by
\begin{equation}\label{eq:err-RMS}
e_{\text{rel.}}=\sqrt{\left({\sum_{k=1}^K|b^{(\text{num.})}_k-b_k|^2}\right)\biggl/
{\sum_{k=1}^K|b_k|^2}},
\end{equation}
where $b^{(\text{num.})}_k$ is the numerically computed norming constant using IA$_3$.
The results are shown in Fig.~\ref{fig:convg-rc-dt-nconst}
where the order of convergence turns out to be $\bigO{N^{-1}}$ from the plots
in the top row. This decrease in the
order of convergence can be attributed to the use of the true eigenvalues,
as opposed to the numerically computed ones, for computing the norming constants. Again,
the plots in the bottom row reveal that INFT-A is unstable for increasing number
of eigenvalues. On the other hand, the algorithms INFT-B/-C seem to perform equally well while
showing no signs of instability.
\begin{figure*}[!th]
\centering
\includegraphics[scale=1]{convg_nconst_rc}
\caption{\label{fig:convg-rc-dt-nconst} The figure shows the results of the error
analysis for an example where the discrete spectrum is $\mathfrak{S}_K$ and the continuous spectrum is
identical to the Fourier spectrum of the raised-cosine filter (see
Sec.~\ref{sec:nbs-test}). The error plotted on the vertical axis corresponds to
the norming constant, which is quantified by~\eqref{eq:err-RMS}. Here
$A_{\text{rc}}=20$.}
\end{figure*}
\section{Conclusion}
\label{sec:final}
To conclude, we have presented two new fast INFT algorithms
with $\bigO{KN+N\log^2N}$ complexity and a
convergence rate of $\bigO{N^{-2}}$. These algorithms are based on the discrete
framework introduced in~\cite{V2017INFT1} for the ZS scattering problem
where the well-known one-step method, namely, the
\emph{trapezoidal rule} is employed for the numerical discretization. Further, our
algorithm depends on the fast LP and the FDT algorithm presented
in~\cite{V2017INFT1}. Numerical tests reveal
that both variants of the INFT algorithm are
capable of handling a larger number of eigenvalues than previously
reported (within the limitations of double-precision arithmetic).
Further, for the cases considered in this article, our algorithms perform
better than the TIB algorithm~\cite{BFPS2007,FBPS2015} in terms of
accuracy while being faster by an order of magnitude. Let us also note that the
TIB algorithm does not carry over to the fast inverse NFT in the general case.
Next, let us mention that
we have not included simulations of a realistic
optical fiber link in order to demonstrate the effectiveness of our algorithms. A thorough
testing for various NFT-based modulation schemes for a realistic optical fiber
link is beyond the scope of this paper. This omission, however, does not impact
the study of the limitation of the proposed algorithms from a numerical analysis
perspective.
Future research on fast INFTs will further focus on the stability properties of the
LP algorithm and the DT iterations. Moreover,
we would also like to consider other linear multistep
methods to obtain a higher-order convergent forward/inverse NFTs. The implicit
Adams method used in this paper for the purpose of testing already demonstrates
that such possibilities do exist, at least, for the solution of the direct ZS
problem.
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
| {
"timestamp": "2018-05-09T02:12:00",
"yymm": "1706",
"arxiv_id": "1706.04069",
"language": "en",
"url": "https://arxiv.org/abs/1706.04069",
"abstract": "This paper considers the non-Hermitian Zakharov-Shabat (ZS) scattering problem which forms the basis for defining the SU$(2)$-nonlinear Fourier transform (NFT). The theoretical underpinnings of this generalization of the conventional Fourier transform is quite well established in the Ablowitz-Kaup-Newell-Segur (AKNS) formalism; however, efficient numerical algorithms that could be employed in practical applications are still unavailable. In this paper, we present two fast inverse NFT algorithms with $O(KN+N\\log^2N)$ complexity and a convergence rate of $O(N^{-2})$ where $N$ is the number of samples of the signal and $K$ is the number of eigenvalues. These algorithms are realized using a new fast layer-peeling (LP) scheme ($O(N\\log^2N)$) together with a new fast Darboux transformation (FDT) algorithm ($O(KN+N\\log^2N)$) previously developed by the author. The proposed fast inverse NFT algorithm proceeds in two steps: The first step involves computing the radiative part of the potential using the fast LP scheme for which the input is synthesized under the assumption that the radiative potential is nonlinearly bandlimited, i.e., the continuous spectrum has a compact support and the discrete spectrum is empty. The second step involves addition of bound states using the FDT algorithm. Finally, the performance of these algorithms is demonstrated through exhaustive numerical tests.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Fast Inverse Nonlinear Fourier Transform",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540662478147,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7099046466550832
} |
https://arxiv.org/abs/1309.7275 | Superelliptical laws for complex networks | All dynamical systems of biological interest--be they food webs, regulation of genes, or contacts between healthy and infectious individuals--have complex network structure. Wigner's semicircular law and Girko's circular law describe the eigenvalues of systems whose structure is a fully connected network. However, these laws fail for systems with complex network structure. Here we show that in these cases the eigenvalues are described by superellipses. We also develop a new method to analytically estimate the dominant eigenvalue of complex networks. | \section*{Results}
\subsection*{Symmetric Matrices}
\begin{sidewaysfigure}
\includegraphics[width
= \linewidth]{Figures/5000-1-Normal.pdf} \caption{\textit{Probability
density function for the eigenvalues of symmetric matrices whose
underlying structure is a complex network. The network has average
degree $k$ (columns), and degree distribution determined by
different algorithms (rows). For a given degree distribution, the
network is then built using the configuration
model\cite{molloy1995critical,newman2003structure}. The values of
the nonzero elements are sampled from a symmetric bivariate normal
distribution centered at zero (SI). The red line marks the
prediction of Wigner's semicircle
distribution\cite{wigner1958distribution}. The blue line shows the
semi-superelliptical distribution when $\mu_2$ and $\mu_4$ (top
left in the panel) are used to solve the equations determining its
parameters ($a$ and $n$ in the panels). Each graph is obtained by
computing the eigenvalues of a single matrix of size $5000 \times
5000$.}}
\end{sidewaysfigure}
We analyze $s \times s$ matrices with $0$ on the diagonal, and
off-diagonal pairs $(M_{ij},M_{ji})$ obtained by multiplying the
corresponding entries of two matrices $(M_{ij},M_{ji}) = (A_{ij},
A_{ji}) \cdot (N_{ij}, N_{ji})$. $A$ is the adjacency matrix of a
random undirected graph---with a given degree distribution---built
using the configuration
model\cite{molloy1995critical,newman2003structure}. The use of the
configuration model is important, as it ensures that the networks do
not typically have unwanted ``secondary structures'' (e.g., modules,
bipartite or lattice structure) that would affect results. $N$ is a
matrix whose off-diagonal pairs $(N_{ij}, N_{ji})$ are sampled from a
bivariate normal distribution $(X, Y)$, with $\mathbb E[X] = \mathbb
E[Y] = 0$, $\mathbb E[X^2] = \mathbb E[Y^2] = \sfrac{1}{k}$, and
$\mathbb E[X Y] = \sfrac{\rho}{k}$, where $k$ is the average degree of
the network and $-1 \leq \rho \leq 1$ is Pearson's correlation
coefficient. The choice of parameters ensures that for $k \to \infty$,
we recover the type of matrices studied by Wigner and Girko. Our
results also hold for non-normal distributions (e.g., uniform, SI).
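A minimal Python sketch of this construction (using NetworkX; the Poisson degree sequence below is only an illustrative stand-in for the degree-distribution algorithms detailed in the SI) is as follows:
\begin{verbatim}
import numpy as np
import networkx as nx

def build_matrix(s, k, rho, seed=0):
    """Matrix M = A * N: configuration-model structure times correlated pairs."""
    rng = np.random.default_rng(seed)
    deg = rng.poisson(k, size=s)                     # placeholder degree sequence
    if deg.sum() % 2:                                # the sum of degrees must be even
        deg[0] += 1
    G = nx.configuration_model(deg, seed=seed)
    G = nx.Graph(G)                                  # collapse parallel edges
    G.remove_edges_from(nx.selfloop_edges(G))
    A = nx.to_numpy_array(G)
    cov = np.array([[1.0, rho], [rho, 1.0]]) / k     # E[X^2] = E[Y^2] = 1/k, E[XY] = rho/k
    X = rng.multivariate_normal([0.0, 0.0], cov, size=(s, s))
    N = np.triu(X[..., 0], 1) + np.triu(X[..., 1], 1).T   # correlated pairs (N_ij, N_ji)
    return A * N
\end{verbatim}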
We start with the analog of Wigner's case, in which matrices are
symmetric ($\rho = 1$). In this case, all eigenvalues are real, and
for $k \to \infty$ we recover Wigner's semicircle probability
distribution function:
\begin{equation}
\Pr(\lambda = x) = P(x) = \frac{2 \sqrt{(2r)^2 - x^2}}{\pi (2r)^2}
\end{equation}
The variance of this distribution is a function of $r$: $\mu_2(r) =
\int x^2 P(x) dx = r^2$. The variance of the eigenvalues of a matrix
with zero diagonal is $\Tr(M^2)/s = \rho = \mu_2(r)$; since in
our matrices $\rho =1$, we have $r = 1$ and thus the horizontal radius is
$a = 2r = 2$. We next generalize Wigner's formula to the case where
$k \ll s$, which leads to a semi-superelliptical distribution:
\begin{equation}
P(x) = \frac{2 \sqrt[n]{(2r)^n - x^n}}{4 (2r)^2 \Gamma(1 +
\sfrac{1}{n})^2 \Gamma(1 + \sfrac{2}{n})^{-1}}
\end{equation}
\noindent where the numerator is the superelliptical equivalent of
Wigner's formulation, and the denominator is the area of a
superellipse with $a = b = 2r$. In this case, we need to solve for two
parameters, $r$ and $n$. Hence, we write equations for the second
($\mu_2(r,n) = \int x^2 P(x) dx = \Tr(M^2)/s$) and fourth ($\mu_4(r,n)
= \int x^4 P(x) dx = \Tr(M^4)/s$) central moments of the eigenvalue
distribution ($\mu_3(r, n) = 0$, due to symmetry), thereby obtaining
the values of $n$ and $r$ (SI).
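The two moment conditions can also be imposed numerically; the following rough Python sketch (quadrature plus a root finder, seeded at the Wigner values $r=\sqrt{\mu_2}$, $n=2$; these numerical choices are ours, not the procedure of the SI) illustrates the idea:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve
from scipy.special import gamma

def moment(z, r, n):
    """z-th moment of the semi-superelliptical density with a = b = 2r."""
    area = 4 * (2 * r) ** 2 * gamma(1 + 1 / n) ** 2 / gamma(1 + 2 / n)
    pdf = lambda x: 2 * ((2 * r) ** n - abs(x) ** n) ** (1 / n) / area
    return quad(lambda x: x ** z * pdf(x), -2 * r, 2 * r)[0]

def fit_r_n(M):
    """Match the second and fourth moments Tr(M^z)/s of a symmetric matrix M."""
    s = M.shape[0]
    mu2 = np.trace(M @ M) / s
    mu4 = np.trace(np.linalg.matrix_power(M, 4)) / s
    eqs = lambda p: [moment(2, p[0], p[1]) - mu2, moment(4, p[0], p[1]) - mu4]
    r, n = fsolve(eqs, x0=[np.sqrt(mu2), 2.0])
    return r, n
\end{verbatim}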
In Figure 1, we show numerical simulations in which we take a single
$5000 \times 5000$ matrix, whose network structure is determined by
the average degree $k$ (columns) and a specific algorithm used to
construct the degree distribution (rows). In all cases, the density of
the eigenvalues is described by a semi-superellipse, which captures
the tails especially well. This is important, given the role of
dominant eigenvalues in determining the properties of dynamical
systems. The distribution tends to underestimate (small $k$) or
overestimate (large $k$) the number of zeros, especially for very
skewed degree distributions -- an effect similar to that found for
small matrices in Girko's circular law\cite{tao2010random}.
\subsection*{Asymmetric Matrices}
\begin{sidewaysfigure}
\includegraphics[width =
0.9\linewidth]{Figures/5000-0-Normal.pdf} \caption{\textit{Distribution
of the eigenvalues of asymmetric matrices ($\mathbb E[\rho] = 0$) in
the complex plane. As in Figure 1, the columns specify the average
degree of the network, and the rows specify the algorithm used to
build the degree distribution. The red line is obtained using
Girko's circular
law\cite{ginibre1965statistical,girko1985circular,tao2010random}.
The blue line is obtained by solving for $a$, $b$ and $n$ (top right
in the panels) for the superellipse using the second and fourth
moments of the eigenvalue distribution (top left in the panels). The
superelliptical distribution is expected to accurately describe the
case with $k$-regular graph structure, for which the eigenvalues are
approximately uniform in the superellipse. Each graph is obtained by
computing the eigenvalues of a single matrix of size $5000 \times
5000$.}}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\includegraphics[width =
0.9\linewidth]{{Figures/5000-0.5-Normal}.pdf}
\caption{\textit{As Figure 2, but with positive correlation $\mathbb
E[\rho] = 0.5$. In this case the red line describes the elliptical
law\cite{girko1985,sommers1988spectrum,Naumov20xx}.}}
\end{sidewaysfigure}
We now move to the case in which $\rho \neq 1$ (Figures 2 and 3). For
$k \to \infty$ and $\rho = 0$, we recover Girko's circular law, which
states that the eigenvalues are uniformly distributed in the unit
circle. For $k \to \infty$ and $-1< \rho <1$, the eigenvalues are
uniformly distributed in an ellipse with horizontal radius $a = 1 +
\rho$ and vertical radius $b = 1 -\rho$.
Although all the eigenvalue distributions appear to be described by
superellipses in the complex plane, only matrices with $k$-regular
graph structure have a distribution that is close to uniform. For
these matrices, we can approximate the p.d.f. as:
\begin{equation}
\Pr(\Re(\lambda) = x, \Im(\lambda) =y ) = P(x,y) = \frac{1}{4 a b
\Gamma(1 + \sfrac{1}{n})^2 \Gamma(1 + \sfrac{2}{n})^{-1}}
\end{equation}
Setting $a = r (1 + \rho)$ and $b = r (1-\rho)$ (SI), we can replicate
the approach above and write the second and fourth central moments as
$\mu_2(r,n) = \iint (x^2 - y^2) P(x,y) dy dx = \Tr(M^2)/s = \rho$ and
$\mu_4(r, n) = \iint (x^4 + y^4 - 6x^2y^2) P(x,y) dy dx = \Tr(M^4)/s$,
respectively. Solving these equations, we obtain $n$ and $r$ (SI).
\subsection*{Adjacency Matrices}
Many applications deal with matrices that do not comply with the
strict requirements we set above. For example, adjacency matrices of
undirected graphs have entries whose value is either zero or one. As
such, the mean of the matrix is not zero, and thus the eigenvalues do
not follow the semi-superellipse introduced above.
Consider a graph generated by the configuration model with arbitrary
degree distribution: the density of all eigenvalues but the dominant
one follows a semi-superellipse, while the dominant eigenvalue lies to
the right of the semi-superellipse (SI). We can exploit this fact to
accurately estimate the value of the dominant eigenvalue (Figure
4).
One notable characteristic of semi-superelliptical distributions is
that they are symmetric about the mean. As such, the distribution
obtained by taking all the eigenvalues but the dominant one should
have odd central moments $\tilde{\mu}_{2z + 1} \approx 0$. Hence, one
can solve a system of equations of the form $\Tr(A^z)
= \sum_j \lambda_j^z = (s - 1)\tilde{\mu'}_{z} + \lambda^z_1$ (where
$\tilde{\mu'}_{z}$ is the $z^{th}$ raw moment of the distribution
obtained by removing the dominant eigenvalue) by assuming, for example,
that $\tilde{\mu}_{3} =0$ or that $\tilde{\mu}_{5} =0$ (SI).
We contrast the two approximations obtained by setting the $3^{rd}$ or
$5^{th}$ central moment to zero with the remarkably simple one
proposed by Chung \textit{et al.}\cite{chung2003spectra}, $\lambda_1
\approx \overline{k^2} / \overline{k}$ (i.e., the average of the
squared degrees divided by the average degree). Chung's approximation
holds as long as the minimum degree in the network is not too small
compared to the mean degree, and was independently obtained by
Nadakuditi \& Newman\cite{nadakuditi2013spectra} using free
probability. Figure 4 shows that the approximation based on the fifth
moment works better than the others for non-regular graphs.
\begin{figure}
\begin{centering}
\includegraphics[width = 0.75\linewidth]{Figures/ApproxL1Rnd.pdf}
\caption{\textit{Approximation of the dominant eigenvalues of
matrices built using the configuration model and choosing the
degree distribution according to the algorithm specified in the
rows. The columns stand for the desired average degree. The
x-axis specifies the approximation used to obtain
$\lambda_1^\text{approx}$, while the y-axis represents the
absolute value of the relative error $|(\lambda_1^\text{approx}
- \lambda_1)|/\lambda_1$. }}
\end{centering}
\end{figure}
\section*{Discussion}
A semi-superellipse approximates the density of the eigenvalues of the
symmetric matrix $M$. When $M$ is asymmetric, the eigenvalues fall in
superellipses in the complex plane. When the structure of $M$ is a
random $k$-regular graph, then the distribution is approximately
uniform and we can estimate the parameters of the superellipse.
The spectrum of the adjacency matrix of a random graph with arbitrary
degrees can be described by a ``semi-superellipse plus $\lambda_1$''
distribution. Because a semi-superellipse is symmetric about the mean,
we can analytically approximate the value of $\lambda_1$. This allows,
for example, a more accurate prediction of the occurrence of epidemics
in simple Susceptible-Infected-Susceptible and
Susceptible-Infected-Recovered models that incorporate an explicit
network of contacts between
individuals\cite{chakrabarti2008epidemic,pastor2001epidemic,youssef2011individual,li2012susceptible},
for which the epidemic threshold is defined by $1/\lambda_1$.
Interestingly, we can connect the value of $\lambda_1$ with the
presence of small motifs\cite{milo2002network} in the network. Our
approximation of the dominant eigenvalue of adjacency matrices
produced by the configuration model shows that the dominant eigenvalue
of a network of a given size and connectivity strongly depends on the
number of triangles (approximation using $\tilde{\mu}_{3}$) and
pentagons (using $\tilde{\mu}_{5}$) it contains.
We also considered departures from the configuration model (SI). We
chose the configuration model in order to prevent the emergence of
``secondary structures'' in the network, but the ultimate test for
these methods is to approximate the behavior of real, empirical
networks, which are known to contain interesting structural
features\cite{newman2003structure}.
For symmetric matrices parametrized using the structure of networks of
biological interest, the density of the eigenvalues is captured by the
semi-superelliptical distribution. As expected, the asymmetric case is
not well-described by the superelliptical distribution derived for
$k$-regular graphs. Even in this case, however, the approximation of the
dominant eigenvalue continues to perform very well, and better than
current methods.
The results presented here open the door for the analytic study of
large dynamical systems with complex network structure.
\clearpage
\part*{Supporting Information}
\renewcommand{\thetable}{S\arabic{table}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\theequation}{S\arabic{equation}}
\setcounter{figure}{0}
\setcounter{equation}{0}
\section*{List of symbols}
For reference, we provide a list of the main symbols used in the
following sections.
\begin{longtable}{p{0.15\linewidth}p{0.85\linewidth}}
{Symbol} & {Description} \\ \hline
$\mathcal N(\boldsymbol \mu, \boldsymbol \Sigma)$ & Bivariate normal
distribution with mean $\boldsymbol \mu$ and covariance matrix
$\boldsymbol \Sigma$.\\
$\mathbb E[X]$ & Expectation of the random variable $X$.\\
$\text{Var}[X]$ & Variance of $X$.\\
$\rho$ & Pearson's correlation coefficient.\\
$\mathcal U(-t,t)$ & Uniform distribution on the interval $[-t, t]$. \\
$\mu'_z$ & $z^{th}$ raw moment (moment about the origin) of a
distribution. We sometimes write $\mu'_z(r,n)$ to stress that the
moment depends on the parameters of the distribution.\\
$\mu_z$ & $z^{th}$ central moment (moment about the mean) of a
distribution.\\
$A$ & The adjacency matrix of an undirected graph.\\
$A_{ij}$ & Coefficient of the adjacency matrix.\\
$N$ & A matrix whose entries $(N_{ij}, N_{ji})$ are sampled from a bivariate normal distribution.\\
$U$ & A matrix whose entries are sampled from a uniform distribution.\\
$M$ & Matrix obtained by multiplying the elements of $A$ and $N$ (or
$U$).\\
$\Tr(M)$ & Trace of the matrix $M$.\\
$M^z$ & Matrix $M$ raised to the $z^{th}$ power.\\
$k$ & The average degree of the nodes in a network described by $A$.\\
$s$ & Size of the network.\\
$\lambda_i$ & $i^{th}$ eigenvalue of a matrix.\\
$i$ & $\sqrt{-1}$.\\
$a$ & Horizontal radius of a superellipse.\\
$b$ & Vertical radius of a superellipse.\\
$r$ & $(a + b)/2$.\\
$n$ & Shape parameter of a superellipse.\\
$\Gamma(\cdot)$ & Gamma function.
\end{longtable}
\section*{Superelliptical distributions}
\subsection*{Matrices and Graphs.}
We analyze matrices $M$ of size $s \times s$ that are obtained by
multiplying the elements of two matrices: $M_{ij} = A_{ij}
N_{ij}$. $A$ is the adjacency matrix of an undirected, simple graph
without self-loops, generated by the configuration
model\cite{molloy1995critical,newman2003structure} (for a particular
degree distribution). The nodes in the graph have average degree
$k$. We consider the case of integer $k$, with $k>1$, so that the
graph is almost surely connected, and $k \leq (s - 1)$, as the graph
does not contain self-loops. Thus, the adjacency matrix contains
exactly $s k$ nonzero coefficients. $N$ is a matrix whose elements
$(N_{ij}, N_{ji})$ are sampled from a normal bivariate distribution
$\mathcal N(\boldsymbol \mu, \boldsymbol \Sigma)$, with
\begin{equation}
\boldsymbol \mu = \left[
\begin{array}{c}
0 \\
0
\end{array}
\right]
\end{equation}
\noindent and
\begin{equation}
\boldsymbol \Sigma = \left[
\begin{array}{cc}
\frac{1}{k} & \frac{\rho}{k} \\
\frac{\rho}{k} & \frac{1}{k}
\end{array}
\right]
\end{equation}
Clearly, $\mathbb E [M_{ij}] = 0$, and $\text{Var}[M_{ij}] = \mathbb E
[M_{ij}^2] = \sfrac{1}{s}$. For $k \to \infty$, each value of the
Pearson's correlation coefficient $\rho$ determines which of the
following well-known laws describes the distribution of the
eigenvalues of $M$:
\begin{table}[h]
\begin{center}
\begin{tabular}{c|l}
$\rho$ & Law\\
\hline
$1$ & Wigner's semicircle law\cite{wigner1958distribution}\\
$0$ & Girko's circular law\cite{ginibre1965statistical,girko1985circular,tao2010random}\\
$-1 < \rho < 1$ & Elliptical law\cite{girko1985,sommers1988spectrum,Naumov20xx}
\end{tabular}
\end{center}
\end{table}
We want to study the case in which $k \ll s$, and the structure of the
graph $A$ is generated using different degree distributions. In
particular, we analyze degree distributions obtained from $k$-regular
random graphs, Erd\H{o}s-R\'enyi random graphs\cite{erdHos1959random},
and Power-law (scale-free) graphs\cite{barabasi1999emergence}. All
graphs were generated using the \texttt{igraph} library\cite{igraph}
for the statistical software \texttt{R}\cite{R}. For Power-law graphs,
undirected networks were obtained using the routine
\texttt{static.power.law.game} with a power-law exponent of $2.5$.
To obtain higher precision, we transform the nonzero coefficients of
the matrix $M$ so that the sample mean of the entries is exactly
$\overline{M_{ij}} = 0$ and the sample second moment is exactly $\overline{M_{ij}^2} = \sfrac{1}{s}$.
\subsection*{Superellipses.}
As we will show, the eigenvalues of matrices with complex network
structure are described by superellipses. A superellipse is defined by
the equation:
\begin{equation}
\frac{|x|^n}{a^n} + \frac{|y|^n}{b^n} = 1
\end{equation}
For $n = 2$ we recover the equation describing an ellipse (a circle
when $a = b$). The area of a superellipse is $4 a b \Gamma(1
+\sfrac{1}{n})^2 \Gamma(1 + \sfrac{2}{n})^{-1}$. Superellipses can
take dramatically different shapes depending on the parameter values
(Figure~\ref{Super}).
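\noindent In particular, for $n = 2$ the area reduces to $4 a b \, \Gamma(\sfrac{3}{2})^2 \, \Gamma(2)^{-1} = \pi a b$, the familiar area of an ellipse.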
\begin{figure}
\begin{centering}
\includegraphics[width
= 0.7\linewidth]{Figures/Shapes.pdf} \caption{\textit{Shapes of
$\sfrac{|x|^n}{a^n} + \sfrac{|y|^n}{b^n} = 1$, for different values
of $a$, $b$ and $n$ (panel titles). }} \label{Super}
\end{centering}
\end{figure}
\subsection*{Moments of the eigenvalue distribution.}
A superellipse is defined by three parameters: $a$, $b$ and $n$. When
deriving the parameters, we will use some well-known identities
relating the traces of the powers of a matrix to its eigenvalue
distribution.
In particular, we will construct equations whose left-hand-side is
obtained by integrating the probability density function of the
eigenvalue distribution, and the right-hand-side by exploiting
identities based on the traces of the powers of a matrix.
We write $\mu'_z$ for the $z^{th}$ raw moment (moment about the
origin) of the eigenvalue distribution, and $\mu_z$ for the
corresponding central moment (moment about the mean). For any matrix
with zero on the diagonal (and hence $\overline{\lambda} = 0$), the
raw and central moments are the same:
\begin{equation}
\mu'_z = \mu_z = \frac{1}{s} \sum_{j = 1}^s \lambda_j^z
= \frac{\Tr(M^z)}{s}
\end{equation}
Because we are dealing with real matrices, all eigenvalues are either
real or occur in complex-conjugate pairs. It follows that all the moments $\mu'_z$
are real. Moreover, this greatly simplifies the integrals for the
moments. We write $\lambda_j = x_j + iy_j$, where $i = \sqrt{-1}$. The
integral for the second moment is:
\begin{equation}
\mu'_2 = \mu_2 = \int \lambda^2 P(\lambda) d\lambda = \iint (x + iy)^2 P(x,y) dx dy =
\iint (x^2 - y^2 + 2ixy) P(x,y) dx dy
\end{equation}
\noindent where $P( \cdot)$ is the probability density
function. Because the complex eigenvalues are paired, if $x + iy$ is
an eigenvalue, so is $x -iy$. Hence, the imaginary part vanishes from
the integral, leaving
\begin{equation}
\mu'_2 = \mu_2 = \iint (x^2 - y^2) P(x,y) dx dy
\end{equation}
Similarly, we can write
\begin{equation}
\mu'_4 = \mu_4 = \iint (x^4 + y^4 - 6 x^2 y^2) P(x,y) dx dy
\end{equation}
\subsection*{Semi-superelliptical distribution.}
As in the main text, we start by setting $\rho = 1$, so that $M$ is
symmetric (Hermitian). In this case, all eigenvalues are real,
simplifying the problem. Whenever $\rho = -1$, $M$ is
skew-symmetric, and all eigenvalues are purely imaginary.
For $\rho =1$ and $k \to \infty$, the eigenvalue distribution is
described by Wigner's semicircle law:
\begin{equation}
\Pr(\lambda = x) = P(x) = \frac{2 \sqrt{(2r)^2 - x^2}}{(2r)^2 \pi}
\end{equation}
\noindent where we write the radius as $a = 2r$ to make the derivation
consistent with the asymmetric case below. The distribution of the
eigenvalues has mean 0, and variance:
\begin{equation}
\mu_2(r) = \int_{-2r}^{2r} x^2 P(x) dx = r^2
\end{equation}
Because, for these matrices,
\begin{equation}
\mu_2(r) = \frac{\Tr(M^2)}{s} = s\, \mathbb E [M_{ij}M_{ji}] = s\, \mathbb
E [M_{ij}^2] = 1,
\end{equation}
\noindent we necessarily have $r = 1$.
We want to extend Wigner's semicircle law to the case of matrices with
complex network structure. We hypothesize that in this case, the
density of the eigenvalues follows a semi-superellipse:
\begin{equation}
\Pr(\lambda = x) = P(x)= \frac{2 \sqrt[n]{(2r)^n - |x|^n}}{4 (2r)^2 \Gamma(1 +
\sfrac{1}{n})^2 \Gamma(1 + \sfrac{2}{n})^{-1}}
\end{equation}
Because we have two parameters, $r$ and $n$, we need to compute two
moments. The first two nonzero moments for this distribution are
$\mu_2(r,n)$ and $\mu_4(r,n)$.
\begin{equation}
\mu_2(r,n) = \int_{-2r}^{2r} x^2 P(x) dx =
r^2 \frac{2 \Gamma\left( \frac{2}{n} \right) \Gamma\left( \frac{3}{n} \right)}
{\Gamma\left( \frac{1}{n} \right) \Gamma\left( \frac{4}{n} \right)}
\end{equation}
Given that also in this case we have $\mu_2(r,n) = 1$, we can solve
for the positive $r$:
\begin{equation}
r
= \sqrt{\frac{\Gamma\left( \frac{1}{n} \right) \Gamma\left( \frac{4}{n} \right)}{
2 \Gamma\left( \frac{2}{n} \right) \Gamma\left( \frac{3}{n} \right)}}
\label{myr}
\end{equation}
Integrating to obtain $\mu_4(r,n)$:
\begin{equation}
\mu_4(r,n) = \int_{-2r}^{2r} x^4 P(x) dx = \frac{8 r^4 \Gamma
\left(\frac{1}{n}\right) \Gamma \left(1 + \frac{2}{n}\right) \Gamma
\left(1 + \frac{5}{n}\right)} {15 \Gamma
\left(1+\frac{1}{n}\right)^2 \Gamma \left(\frac{6}{n}\right)}
\end{equation}
\noindent and substituting the value of $r$:
\begin{equation}
\mu_4(n) = \frac{
2
\Gamma\left( \frac{1}{n} \right)^3
\Gamma\left( \frac{4}{n} \right)^2
\Gamma\left( 1 + \frac{2}{n} \right)
\Gamma\left( 1 + \frac{5}{n} \right)
}{
15
\Gamma\left( 1 + \frac{1}{n} \right)^2
\Gamma\left( \frac{2}{n} \right)^2
\Gamma\left( \frac{3}{n} \right)^2
\Gamma\left( \frac{6}{n} \right)
}
\end{equation}
Hence, knowing the value of $\mu_4(n) = \Tr(M^4)/s$, we can
numerically solve the equation above for $n$, and recover $a = 2 r$
from $n$ using Eq. (\ref{myr}).
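\noindent As an illustration of this step, the following is a minimal sketch in base \texttt{R} (the function names are ours, for illustration only, and are not taken from the released analysis code). It computes the observed $\Tr(M^4)/s$ directly from a symmetric matrix $M$, solves $\mu_4(n) = \Tr(M^4)/s$ with \texttt{uniroot}, and recovers $a = 2r$ from Eq. (\ref{myr}); note that \texttt{uniroot} requires the observed moment to be attainable within the chosen search bracket for $n$.
\begin{verbatim}
# mu_4 as a function of n only, after substituting r (symmetric case, rho = 1)
mu4_symmetric <- function(n) {
  2 * gamma(1/n)^3 * gamma(4/n)^2 * gamma(1 + 2/n) * gamma(1 + 5/n) /
    (15 * gamma(1 + 1/n)^2 * gamma(2/n)^2 * gamma(3/n)^2 * gamma(6/n))
}
# r as a function of n (the expression labelled "myr" above)
r_of_n <- function(n) sqrt(gamma(1/n) * gamma(4/n) / (2 * gamma(2/n) * gamma(3/n)))

fit_semi_superellipse <- function(M) {
  s  <- nrow(M)
  m4 <- sum(diag(M %*% M %*% M %*% M)) / s   # observed Tr(M^4)/s
  # search bracket for n; may need adjusting for extreme shapes
  n  <- uniroot(function(n) mu4_symmetric(n) - m4, c(0.2, 100))$root
  c(n = n, a = 2 * r_of_n(n))
}
\end{verbatim}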
\subsection*{$\boldsymbol k$-regular graphs: superelliptical distribution.}
We now derive the superelliptical distribution for eigenvalues that
are uniformly distributed in a superellipse. Our simulations show that
this is the case for matrices whose underlying structure is a
$k$-regular random graph. For those generated using Erd\H{o}s-R\'enyi
random graphs, the distribution slightly departs from uniformity, and
for those built using Power-law graphs, it departs severely. Hence,
only for the case of $k$-regular graphs do we expect the
superelliptical uniform distribution to hold.
We make only one assumption in order to derive the values of $n$ and
$r$. We assume that $a + b = 2 r$ irrespective of $\rho$. This
assumption is strongly supported by numerical simulations for the case
of $k$-regular random graphs (the only case in which we believe the
following results to hold). When $a + b = 2 r$ for any $\rho$, the
semi-superelliptical distribution derived above is recovered for $\rho
= 1$. As such, the value of $r$ is that in Eq. (\ref{myr}).
As we did above, we first compute the moments using the eigenvalue
distribution, and then derive the right-hand-sides of the equations
using traces.
The probability density function for eigenvalues uniformly distributed
in the superellipse $\sfrac{|x|^n}{a^n} + \sfrac{|y|^n}{b^n} \leq 1$
is:
\begin{equation}
P(x,y) = \frac{1}{4 a b \Gamma(1 +
\sfrac{1}{n})^2 \Gamma(1 + \sfrac{2}{n})^{-1}}
\end{equation}
Integrating $(x^2 - y^2)$ we obtain $\mu_2(a,b,n)$:
\begin{equation}
\mu_2(a,b,n) = \int_{-a}^{a} \int_{-\frac{b}{a} \sqrt[n]{a^n -
|x|^n}}^{\frac{b}{a} \sqrt[n]{a^n - |x|^n}} (x^2 - y^2) P(x,y)
dy dx = \frac{(a-b) (a+b) \Gamma \left(\frac{2}{n}\right) \Gamma
\left(\frac{3}{n}\right)}{2 \Gamma \left(\frac{1}{n}\right) \Gamma
\left(\frac{4}{n}\right)}
\end{equation}
Because $\mu_2(a,b,n) = \Tr(M^2)/s = s\,\mathbb E[M_{ij}M_{ji}] = \rho$, we can set $b
= 2r - a$ and solve for $a$:
\begin{equation}
a = r
+ \frac{\rho \Gamma \left(\frac{1}{n}\right) \Gamma \left(\frac{4}{n}\right)}{
2 r \Gamma \left(\frac{2}{n}\right) \Gamma \left(\frac{3}{n}\right)}
= r(1 + \rho)
\end{equation}
Similarly, $b = r (1 -\rho)$, consistent with the fact that, when
$k \to \infty$, $a = 1 + \rho$ and $b = 1 - \rho$ (elliptical
law\cite{girko1985,sommers1988spectrum,Naumov20xx}).
We then move to the fourth moment:
\begin{equation}
\begin{aligned}
\mu_4(a,b,n) =& \int_{-a}^{a} \int_{-\frac{b}{a} \sqrt[n]{a^n -
|x|^n}}^{\frac{b}{a} \sqrt[n]{a^n - |x|^n}} (x^4 + y^4 - 6 x^2
y^2 ) P(x,y) dy dx =\\
=&
\frac{a^4 \Gamma\left( \frac{1}{n} \right) \Gamma\left(1
+ \frac{2}{n} \right) \Gamma\left( 1
+ \frac{5}{n} \right)}{30 \Gamma\left(1 + \frac{1}{n} \right)^2 \Gamma\left( \frac{6 }{n} \right)} + \\
& \frac{b^4 \Gamma\left( 1 + \frac{2}{n} \right) \Gamma\left( 1
+ \frac{5}{n} \right)}{5 \Gamma\left( 1
+ \frac{1}{n} \right) \Gamma\left( 1 + \frac{6}{n} \right)} - \\
& \frac{2\, a^2 b^2\, \Gamma\left( 1 + \frac{3}{n} \right)^2 \Gamma\left( 1
+ \frac{2}{n} \right)}{3\, \Gamma\left(1 + \frac{1}{n} \right)^2 \Gamma\left( 1 + \frac{6}{n} \right)}
\end{aligned}
\end{equation}
Substituting $a = r(1 + \rho)$, $b = r(1-\rho)$ and replacing $r$ with
the value in Eq. (\ref{myr}) greatly simplifies the expression:
\begin{equation}
\mu_4(n,\rho) = \frac{2^{\frac{4}{n}-2}
\Gamma \left(\frac{1}{2}+\frac{2}{n}\right)
\Gamma \left(\frac{4}{n}\right)
\left(\frac{\left({\rho}^4+6 {\rho}^2+1\right)
\Gamma \left(\frac{1}{n}\right)
\Gamma \left(\frac{5}{n}\right)}
{\Gamma \left(\frac{3}{n}\right)^2}-
3 \left(1 - {\rho}^2\right)^2\right)}{3 \sqrt{\pi }
\Gamma \left(\frac{6}{n}\right)}
\end{equation}
Knowing $\rho$ and $\Tr(M^4)$, we solve this equation numerically to
obtain $n$, use Eq. (\ref{myr}) to obtain $r$, and, finally, use $r$
and $\rho$ to compute $a$ and $b$.
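\noindent A minimal sketch of this procedure in base \texttt{R} (function names are ours; \texttt{r\_of\_n} is the helper defined in the sketch for the symmetric case above):
\begin{verbatim}
# mu_4 as a function of n and rho, after substituting a, b and r
mu4_asymmetric <- function(n, rho) {
  2^(4/n - 2) * gamma(1/2 + 2/n) * gamma(4/n) *
    ((rho^4 + 6 * rho^2 + 1) * gamma(1/n) * gamma(5/n) / gamma(3/n)^2 -
       3 * (1 - rho^2)^2) / (3 * sqrt(pi) * gamma(6/n))
}

fit_superellipse <- function(M) {
  s   <- nrow(M)
  M2  <- M %*% M
  rho <- sum(diag(M2)) / s                   # Tr(M^2)/s, an estimate of rho
  m4  <- sum(diag(M2 %*% M2)) / s            # Tr(M^4)/s
  n   <- uniroot(function(n) mu4_asymmetric(n, rho) - m4, c(0.2, 100))$root
  r   <- r_of_n(n)
  c(n = n, a = r * (1 + rho), b = r * (1 - rho))
}
\end{verbatim}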
\subsection*{Computing expected traces.}
Above, we computed the values of $n$, $a$ and $b$ using the actual,
observed traces of $M^2$ and $M^4$: $\mu_2 = \Tr(M^2)/s$ and
$\mu_4=\Tr(M^4)/s$. Alternatively, one can compute the expectations
for the traces when the distribution of the coefficients is known and
one can count certain structures in the graph. Clearly $\mathbb
E[\Tr(M^2)/s] = \rho$. The computation of $\mathbb E[\Tr(M^4)/s]$ is
detailed below for the case of symmetric and asymmetric
matrices. Because the results are particularly simple for matrices
with $k$-regular graph structure, we explore the effect of the average
degree on the parameters of the superellipses in this case.
\paragraph*{Symmetric matrices.}
Only four possible network structures contribute to the fourth power
of $M$ in the graphs analyzed here. In fact, the diagonal element
$(M^4)_{jj}$ is the sum over all closed walks of four
links that start and end at node $j$. There are four possibilities
(Table S2, all indices are taken to be different -- $j \neq l \neq n
\neq m$):
\begin{itemize}
\item[T1)] $M_{jl}M_{lj}M_{jl}M_{lj}$, i.e., the same undirected
link traversed twice. For symmetric matrices, we can write $M_{jl}^4$.
\item[T2)] $M_{jl}M_{lj}M_{jm}M_{mj}$, i.e., a path where we move
from $j$ to $l$, come back and then move to $m$ and come back. For
symmetric matrices, we can write $M_{jl}^2 M_{jm}^2$.
\item[T3)] $M_{jl}M_{lm}M_{ml}M_{lj}$, i.e., a path where we move
from $j$ to $l$, from $l$ to $m$ and then go back to $l$ and
finally to $j$. For symmetric matrices, we can write $M_{jl}^2
M_{lm}^2$.
\item[T4)] $M_{jl}M_{lm}M_{mn}M_{nj}$, i.e., a four-link directed
cycle.
\end{itemize}
For each graph, we can count how many ways there are to obtain each
structure:
\begin{itemize}
\item[T1)] There is an occurrence of T1 for each connection in the
graph. Thus, the total number of structures of this kind is $s k$.
\item[T2)] For a node of degree $k_j$, there are $\binom{k_j}{2}$
ways of choosing the two partners. However, for each choice of
partners ($l$ and $m$) we have two possibilities: visit $l$ first,
and visit $m$ first. Hence, the number of structures is: $2
\sum_{j=1}^{s} \binom{k_j}{2}$.
\item[T3)] For each edge connecting $j$ to $l$, we have $k_l -1$
ways of choosing the third partner. Thus, the number of structures
is $\sum_{j=1}^{s} \sum_{l=1}^{s} A_{jl}(k_l - 1)$.
\item[T4)] The number of four-cycles in the graph cannot be
expressed as a simple function of the degrees of the nodes.
\end{itemize}
Finally, we need to compute the expectation for their values. For the
normal bivariate distribution described above, we have:
\begin{enumerate}
\item[T1)] $\mathbb E[M_{jl}M_{lj}M_{jl}M_{lj}] = \mathbb E[M_{jl}^4]
= \frac{3}{k^2}$, as the fourth central moment of a normal
distribution $\mathcal N(0, \sigma^2)$ is simply $3 \sigma^4$.
\item[T2)] $\mathbb E[M_{jl}M_{lj}M_{jm}M_{mj}] = \mathbb E[M_{jl}^2
M_{jm}^2] = \frac{1}{k^2}$ (the product of the two variances).
\item[T3)] $\mathbb E[M_{jl}M_{lm}M_{ml}M_{lj}] = \mathbb E[M_{jl}^2
M_{lm}^2] = \frac{1}{k^2}$.
\item[T4)] $\mathbb E[M_{jl}M_{lm}M_{mn}M_{nj}] = 0$, as the
expectation of the product is simply the product of the expectations
-- all the coefficients are independent.
\end{enumerate}
Given that $\mathbb E[T4] = 0$, summing $(\# T1 \cdot \mathbb E[T1] +
\# T2 \cdot \mathbb E[T2] + \# T3 \cdot \mathbb E[T3])$ we obtain
$\mathbb E[\Tr(M^4)]$. Dividing by $s$, we obtain $\mathbb E[\Tr(M^4)/s]
= \mu_4(n)$. The calculation is especially simple for a $k$-regular
graph, in which we can count the number of T2 and T3 structures very
easily. For this type of graph, $\mathbb E[\Tr(M^4)/s] = 2
+ \frac{1}{k}$.
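\noindent Explicitly, for a $k$-regular graph (where every $k_j = k$) the counts and expectations above give
\begin{equation}
\mathbb E\left[\frac{\Tr(M^4)}{s}\right] = \frac{1}{s}\left( s k \cdot \frac{3}{k^2} + s k (k-1) \cdot \frac{1}{k^2} + s k (k-1) \cdot \frac{1}{k^2} \right) = \frac{3}{k} + \frac{2(k-1)}{k} = 2 + \frac{1}{k}.
\end{equation}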
\setcounter{table}{0}
\begin{table}
\begin{tabular}{zzzzz}
\hline
Type & Graph & Coefficients & Number of Occurrences & Expectation\\
\hline
T1 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=3,y=0]{l}
\Edge[style={->,bend left}](j)(l)
\Edge[style={->,bend left=60}](j)(l)
\Edge[style={->,bend left}](l)(j)
\Edge[style={->,bend left=60}](l)(j)
\end{tikzpicture}
&
$M_{jl}M_{lj}M_{jl}M_{lj} = M_{jl}^4$
&
$s k$
&
$\frac{3}{k^2}$\\
T2 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=2,y=2]{l}
\Vertex[x=2,y=-2]{m}
\Edge[style={->,bend left}](j)(l)
\Edge[style={->,bend left}](j)(m)
\Edge[style={->,bend left}](l)(j)
\Edge[style={->,bend left}](m)(j)
\end{tikzpicture}
&
$M_{jl}M_{lj}M_{jm}M_{mj} = M_{jl}^2 M_{jm}^2$
&
$2 \sum_{j=1}^{s} \binom{k_j}{2}$
&
$\frac{1}{k^2}$\\
T3 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=3,y=0]{l}
\Vertex[x=6,y=0]{m}
\Edge[style={->,bend left}](j)(l)
\Edge[style={->,bend left}](l)(m)
\Edge[style={->,bend left}](m)(l)
\Edge[style={->,bend left}](l)(j)
\end{tikzpicture}
&
$M_{jl}M_{lm}M_{ml}M_{lj} = M_{jl}^2 M_{lm}^2$
&
$\sum_{j=1}^{s} \sum_{l=1}^{s} A_{jl}(k_l - 1)$
&
$\frac{1}{k^2}$\\
T4 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=3,y=0]{l}
\Vertex[x=3,y=3]{m}
\Vertex[x=0,y=3]{n}
\Edge[style={->,bend right}](j)(l)
\Edge[style={->,bend right}](l)(m)
\Edge[style={->,bend right}](m)(n)
\Edge[style={->,bend right}](n)(j)
\end{tikzpicture}
&
$M_{jl}M_{lm}M_{mn}M_{nj}$
&
Depends on graph
&
$0$\\
\hline
\end{tabular}
\caption{\textit{The structures contributing to $\Tr(M^4)$. For each
structure, we report the form of the coefficients, the number of
occurrences in a graph, and the expected value when the
coefficients are taken from a bivariate normal distribution with
correlation $\rho =1$.}}
\end{table}
\paragraph*{Asymmetric matrices.}
As in the case of symmetric matrices, the expectation $\mathbb
E[\Tr(M^4)/s]$ can be obtained counting the structures T1, T2 and T3
in the graph. However, we need to re-compute the expectations for the
structures for arbitrary $\rho$ (Table S3). For a $k$-regular random
graph, we have $\mathbb E[\Tr(M^4)/s] = 2 \rho^2 + \frac{1}{k}$.
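\noindent Explicitly, combining the counts from the symmetric case with the expectations in the table below, a $k$-regular graph gives
\begin{equation}
\mathbb E\left[\frac{\Tr(M^4)}{s}\right] = \frac{1}{s}\left( s k \cdot \frac{1 + 2\rho^2}{k^2} + 2\, s k (k-1) \cdot \frac{\rho^2}{k^2} \right) = \frac{1 + 2\rho^2}{k} + \frac{2 \rho^2 (k-1)}{k} = 2 \rho^2 + \frac{1}{k}.
\end{equation}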
\begin{table}
\begin{tabular}{zzzzz}
\hline
Type & Graph & Coefficients & Number of Occurrences & Expectation\\
\hline
T1 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=3,y=0]{l}
\Edge[style={->,bend left}](j)(l)
\Edge[style={->,bend left=60}](j)(l)
\Edge[style={->,bend left}](l)(j)
\Edge[style={->,bend left=60}](l)(j)
\end{tikzpicture}
&
$M_{jl}M_{lj}M_{jl}M_{lj} = M_{jl}^2 M_{lj}^2$
&
$s k$
&
$\frac{1 + 2 \rho^2}{k^2}$\\
T2 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=2,y=2]{l}
\Vertex[x=2,y=-2]{m}
\Edge[style={->,bend left}](j)(l)
\Edge[style={->,bend left}](j)(m)
\Edge[style={->,bend left}](l)(j)
\Edge[style={->,bend left}](m)(j)
\end{tikzpicture}
&
$M_{jl}M_{lj}M_{jm}M_{mj}$
&
$2 \sum_{j=1}^{s} \binom{k_j}{2}$
&
$\frac{\rho^2}{k^2}$\\
T3 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=3,y=0]{l}
\Vertex[x=6,y=0]{m}
\Edge[style={->,bend left}](j)(l)
\Edge[style={->,bend left}](l)(m)
\Edge[style={->,bend left}](m)(l)
\Edge[style={->,bend left}](l)(j)
\end{tikzpicture}
&
$M_{jl}M_{lm}M_{ml}M_{lj}$
&
$\sum_{j=1}^{s} \sum_{l=1}^{s} A_{jl}(k_l - 1)$
&
$\frac{\rho^2}{k^2}$\\
T4 &
\begin{tikzpicture}[scale=0.35]
\Vertex{j}
\Vertex[x=3,y=0]{l}
\Vertex[x=3,y=3]{m}
\Vertex[x=0,y=3]{n}
\Edge[style={->,bend right}](j)(l)
\Edge[style={->,bend right}](l)(m)
\Edge[style={->,bend right}](m)(n)
\Edge[style={->,bend right}](n)(j)
\end{tikzpicture}
&
$M_{jl}M_{lm}M_{mn}M_{nj}$
&
Depends on graph
&
$0$\\
\hline
\end{tabular}
\caption{\textit{The structures contributing to $\Tr(M^4)$. For each
structure, we report the form of the coefficients, the number of
occurrences in a graph, and the expected value when the
coefficients are taken from a bivariate normal distribution with
correlation $\rho$.}}
\end{table}
\paragraph*{$\boldsymbol k$-regular graphs.}
As shown in the two sections above, for a matrix $M$ with $k$-regular
graph structure, we have that $\mathbb E[\Tr(M^2)/s] = \rho$, and
$\mathbb E[\Tr(M^4)/s] = 2 \rho^2 + \frac{1}{k}$. As such, we can
easily find the expected $n$ and $r$ for any value of $\rho$ and
$k$. We find that, when increasing $k$, $r$ converges to 1 quite
rapidly, while the value of $n$ converges to 2 more slowly. Because
the value of $r$ depends only on $n$, and the value of $n$ depends on
the average degree $k$ and on $\rho^2$, without loss of generality we
can assume $\rho$ to be positive. The results show that a strong
correlation (large $\rho$) speeds up the convergence of both $r$ and
$n$ (Figure~\ref{kreg}).
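\noindent A minimal sketch of this computation in base \texttt{R}, reusing the functions \texttt{mu4\_asymmetric} and \texttt{r\_of\_n} defined in the sketches above (function names are ours):
\begin{verbatim}
expected_superellipse <- function(k, rho) {
  m4 <- 2 * rho^2 + 1 / k    # E[Tr(M^4)/s] for a k-regular graph structure
  n  <- uniroot(function(n) mu4_asymmetric(n, rho) - m4, c(0.2, 100))$root
  c(n = n, r = r_of_n(n))
}
\end{verbatim}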
\begin{figure}
\begin{centering}
\includegraphics[width = 0.65\linewidth]{Figures/RegularGraphExpected.pdf}
\caption{\textit{Expected $r$ and $n$ for matrices whose underlying
structure is a $k$-regular graph of degree $k$ (x-axis). Colors
represent the value of $\rho$ (from 0 -- darker color, to 1).}}
\label{kreg}
\end{centering}
\end{figure}
\subsection*{Supplementary Results.}
\paragraph*{Normal bivariate distribution, negative correlation.}
In the main text, we drew the eigenvalue distributions for asymmetric
matrices with complex network structure when $\mathbb E[\rho] = 0$
(Figure 2) and $\mathbb E[\rho] = 0.5$ (Figure 3). In
Figure \ref{NegCorr}, we show the case of $\mathbb E[\rho] = - 0.5$.
\begin{sidewaysfigure}
\includegraphics[width
= \linewidth]{Figures/{5000--0.5-Normal}.pdf} \caption{\textit{As
main text Figure 2 and Figure 3, but with negative correlation:
$\mathbb E[\rho] = -0.5$.}} \label{NegCorr}
\end{sidewaysfigure}
\paragraph*{Uniform distribution.}
Girko's circular law has recently been shown to be universal, i.e.,
the law holds under very mild conditions on the distribution of the
coefficients in the matrix\cite{tao2010random}. Hence, it is
interesting to test whether choosing different distributions for the
coefficients in the matrix significantly alters the superelliptical
distribution. As with Girko's circular law, we find that our results
hold for different distributions.
For example, take symmetric matrices whose coefficients are sampled
from a uniform distribution. We set $M_{ij}=M_{ji}=A_{ij}U_{ij}$,
where $U$ is a symmetric matrix whose coefficients come from $\mathcal
U(-\sqrt{\sfrac{3}{k}}, \sqrt{\sfrac{3}{k}})$, so that $\mathbb E
[M_{ij}] = 0$, and $\text{Var}[M_{ij}] = \mathbb E [M_{ij}^2] =
\sfrac{1}{s}$. The eigenvalue density follows the semi-superelliptical
distribution, as shown in Figure \ref{UniSymm}.
\begin{sidewaysfigure}
\includegraphics[width = \linewidth]{Figures/5000-1-Uniform.pdf}
\caption{\textit{As main text Figure 1, but with entries $M_{ij} =
M_{ji} = A_{ij} U_{ij}$, where $U$ is a symmetric matrix with
uniformly distributed coefficients (see text). }}
\label{UniSymm}
\end{sidewaysfigure}
Similarly, if the coefficients of $U$ are independent and identically
distributed, then matrices whose underlying structure is a $k$-regular
graph follow the superelliptical distribution
(Figure~\ref{UniAsymm}).
\begin{sidewaysfigure}
\includegraphics[width = \linewidth]{Figures/5000-0-Uniform.pdf}
\caption{\textit{As main text Figure 2, but with entries $M_{ij} =
A_{ij} U_{ij}$, where $U$ is a matrix whose coefficients are
uniformly distributed.}}
\label{UniAsymm}
\end{sidewaysfigure}
\section*{Approximating the largest eigenvalue of adjacency matrices}
In the analysis above, we assumed the mean of the coefficients of $M$
to be $0$. The problem is more complicated when this is not the
case. However, many applications deal with non-negative matrices
(i.e., whose entries are $\geq 0$), and therefore it is important to
extend the methods above to encompass such cases.
In the following paragraphs, we deal with adjacency matrices of
undirected graphs with complex network structure. This case is
particularly well-studied, given the potential for applications, and
many bounds on the largest eigenvalue (spectral radius) have been
proposed (for a brief survey, see Das \& Kumar\cite{das2004some}). One
of the simplest and most beautiful approximations for the largest
eigenvalue is that proposed by Chung \textit{et
al.}\cite{chung2003spectra}: $\lambda_1 \approx \overline{k^2}
/ \overline{k}$ (the average of the squared degrees divided by the
average degree). This approximation---which was also derived using a
completely different approach by Nadakuditi \&
Newman\cite{nadakuditi2013spectra}---is expected to hold whenever the
minimum degree is large enough compared to the average degree:
$\min(k_i) \gg \sqrt{\overline{k}} \log^3 s$.
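\noindent This approximation requires only the degree sequence; a minimal sketch in base \texttt{R} (the function name is ours) is
\begin{verbatim}
# Chung et al. approximation: mean of the squared degrees over the mean degree
lambda1_chung <- function(A) {
  deg <- rowSums(A)          # degrees of the undirected graph
  mean(deg^2) / mean(deg)
}
\end{verbatim}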
The spectrum of the adjacency matrix of a complex network generally
departs severely from Wigner's
semicircle\cite{farkas2001spectra}. Moreover, although the ``bulk'' of
the eigenvalues falls in a well-defined region, the largest
eigenvalue, $\lambda_1$---and, possibly, other large eigenvalues---is
not localized in this region.
Here we present a novel method to estimate the value of $\lambda_1$
obtained by assuming that all the other eigenvalues fall in a symmetric
distribution, such as a semi-superellipse. This seems to be the case
for adjacency matrices of random graphs with arbitrary degree
distributions built using the configuration model
(Figure~\ref{AdjSpec}).
\begin{figure}
\includegraphics[width
= \linewidth]{Figures/SpectraRndAdj.pdf} \caption{\textit{Spectra
of adjacency matrices of size $2500$ built using the configuration
model with average degree $k=15$, and degree distributions specified
by different models. In all cases, the spectrum can be approximated
by a semi-superellipse containing all the eigenvalues but the
dominant one, plus $\lambda_1$ itself.}} \label{AdjSpec}
\end{figure}
We consider the adjacency matrix $A$, describing a simple, undirected
graph without self-loops. In this matrix, the diagonal is $0$, and
therefore the mean of the eigenvalues is zero. We can write:
\begin{equation}
\Tr(A) = 0 = \sum_{i=1}^s \lambda_i = \lambda_1 + \sum_{i=2}^s
\lambda_i
\end{equation}
\noindent where $\lambda_1$ is the largest eigenvalue (spectral
radius). We want to describe the distribution of the non-dominant
eigenvalues $\lambda_2, \ldots, \lambda_s$. In general, we can write:
\begin{equation}
\Tr(A^j) = \sum_{i=1}^s \lambda^j_i = s \mu'_j
\end{equation}
That is, the trace of the $j^{th}$ power of $A$ is equal to the size
$s$ times the $j^{th}$ raw moment of the eigenvalue distribution. When
we consider $\lambda_1$ separately, we have:
\begin{equation}
\Tr(A^j) = s \mu'_j = (s-1) \tilde{\mu'}_j + \lambda_1^j
\end{equation}
\noindent where with $\tilde{\mu'}_j$ we indicate the $j^{th}$ raw
moment of the distribution of the eigenvalues when we exclude the
dominant one. Because the trace of the matrix is zero, we have:
\begin{equation}
\begin{aligned}
\Tr(A) = 0 = (s-1) \tilde{\mu'}_1 + \lambda_1 & \to &
\tilde{\mu'}_1 = - \frac{\lambda_1}{s-1}
\end{aligned}
\end{equation}
The trace of $A^2$ is simply the number of connections:
\begin{equation}
\begin{aligned}
\Tr(A^2) = s k = (s-1) \tilde{\mu'}_2 + \lambda_1^2 & \to &
\tilde{\mu'}_2 = \frac{s k- \lambda_1^2}{s-1}
\end{aligned}
\end{equation}
In general, the $j^{th}$ raw moment is:
\begin{equation}
\tilde{\mu'}_j = \frac{\Tr(A^j) - \lambda_1^j}{s-1}
\end{equation}
If the distribution of all the eigenvalues but $\lambda_1$ is
approximately symmetric, then the odd central moments $\tilde{\mu}_{2j
+1} \approx 0$. We can exploit this fact to approximate the value of
$\lambda_1$. First, we need the following formulas relating raw and
central moments:
\begin{equation}
\begin{aligned} \mu'_3 =& \mu_3 + 3 \mu'_1\mu'_2 - 2
(\mu'_1)^3\\
\mu'_5 =& \mu_5 + 5 \mu'_1\mu'_4 - 10
(\mu'_1)^2\mu'_3 + 10 (\mu'_1)^3\mu'_2 - 4
(\mu'_1)^5\\
\end{aligned}
\label{rawcentral}
\end{equation}
Then we can, for example, assume $\tilde{\mu}_{3} \approx 0$. In this
case,
\begin{equation}
\Tr(A^3) = (s-1) \tilde{\mu'}_3 + \lambda_1^3 \approx
(s-1)\left(
3 \tilde{\mu'}_1 \tilde{\mu'}_2 - 2 (\tilde{\mu'}_1)^3
\right)+ \lambda_1^3
\end{equation}
Substituting the values for $\tilde{\mu'}_1$ and $\tilde{\mu'}_2$, we
obtain:
\begin{equation}
\lambda_1^3 \frac{s (s + 1)}{(s-1)^2} - \lambda_1 \frac{3 k
s}{s-1} \approx \Tr(A^3)
\end{equation}
\noindent which, assuming equality, we can solve numerically, or
analytically (taking the unique real root):
\begin{equation}
\footnotesize
\lambda_1 \approx
\frac{-\sqrt[3]{2} \left(s^2 \left(s^2-1\right)
\left(\sqrt{\left(s^2-1\right)
\left(\left(s^2-1\right)
\Tr(A^3)^2-4 k^3 s^2\right)}
+s^2 (-\Tr(A^3))+\Tr(A^3)\right)\right)^{2/3}
-2 k s^2 \left(s^2-1\right)}{2^{2/3} s (s+1)
\sqrt[3]{s^2 \left(s^2-1\right)
\left(\sqrt{\left(s^2-1\right) \left(\left(s^2-1\right)
\Tr(A^3)^2-4 k^3 s^2\right)}
+s^2 (-\Tr(A^3))+\Tr(A^3)\right)}}
\end{equation}
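\noindent In practice it is often simpler to solve the cubic numerically. A minimal sketch in base \texttt{R} (function names are ours) computes the required traces directly from $A$ and extracts the real root with \texttt{polyroot}:
\begin{verbatim}
# lambda_1 approximation obtained by setting the third central moment
# of the non-dominant eigenvalues to zero (the cubic above)
approx_lambda1_mu3 <- function(A) {
  s  <- nrow(A)
  A2 <- A %*% A
  A3 <- A2 %*% A
  k  <- sum(diag(A2)) / s                  # average degree, since Tr(A^2) = s k
  cf <- c(-sum(diag(A3)),                  # coefficients, lowest degree first
          -3 * k * s / (s - 1),
          0,
          s * (s + 1) / (s - 1)^2)
  roots <- polyroot(cf)
  Re(roots[which.min(abs(Im(roots)))])     # the real root of the cubic
}
\end{verbatim}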
Similarly, we can choose to zero a higher moment. For example, if
$\tilde{\mu}_{5} \approx 0$, we have:
\begin{equation}
\Tr(A^5) = (s-1) \tilde{\mu'}_5 + \lambda_1^5 \approx
(s-1)\left(
5 \tilde{\mu'_1}\tilde{\mu'}_4 - 10
(\tilde{\mu'}_1)^2\tilde{\mu'}_3 + 10 (\tilde{\mu'}_1)^3\tilde{\mu'}_2 - 4
(\tilde{\mu'}_1)^5 \right)+ \lambda_1^5
\end{equation}
\noindent which yields
\begin{equation}
\lambda_1^5 \frac{s (s + 1) (s^2 + 1)}{(s-1)^4}
- \lambda_1^3 \frac{10 s k}{(s-1)^3}
- \lambda_1^2 \frac{10 \Tr(A^3)}{(s-1)^2}
- \lambda_1 \frac{5 \Tr(A^4)}{(s-1)} \approx \Tr(A^5)
\end{equation}
Assuming equality, one can solve the equation numerically. Clearly,
one can derive an approximation assuming any of the odd central
moments $\tilde{\mu}_{2j + 1}$ to be zero.
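\noindent For completeness, a minimal sketch of the fifth-moment version in base \texttt{R} (function names are ours); since the quintic may admit more than one real root, we take the largest real root as a working assumption:
\begin{verbatim}
# lambda_1 approximation obtained by setting the fifth central moment
# of the non-dominant eigenvalues to zero (the quintic above)
approx_lambda1_mu5 <- function(A) {
  s  <- nrow(A)
  A2 <- A %*% A; A3 <- A2 %*% A; A4 <- A3 %*% A; A5 <- A4 %*% A
  k  <- sum(diag(A2)) / s
  cf <- c(-sum(diag(A5)),                  # coefficients, lowest degree first
          -5  * sum(diag(A4)) / (s - 1),
          -10 * sum(diag(A3)) / (s - 1)^2,
          -10 * s * k / (s - 1)^3,
          0,
          s * (s + 1) * (s^2 + 1) / (s - 1)^4)
  roots <- polyroot(cf)
  real  <- Re(roots[abs(Im(roots)) < 1e-6])  # numerical tolerance for real roots
  max(real)                                  # assumption: take the largest real root
}
\end{verbatim}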
\section*{Beyond the configuration model: the spectra of real networks}
In the previous sections, we dealt with matrices whose underlying
structure was built using the configuration model. That is, the
networks contain no special structure, besides the imposition of a
degree distribution. They are, in a way, ``random networks''. Here we
explore the sensitivity of our results to the use of networks that
depart considerably from this assumption. In fact, the ultimate test
for any of these methods is to be able to deal with real-world
networks.
We repeat the analysis above using three different data sets: i)
networks built according to the Watts-Strogatz
model\cite{watts1998collective}; ii) networks built with the
Barab\'asi-Albert model\cite{barabasi1999emergence}; and iii) a set
of four networks of biological interest, describing a) the structure
of the contact network in a high school\cite{salathe2010high}, b) the
neural network of the worm
\textit{C. elegans}\cite{White12111986,watts1998collective}, c) the
metabolic network of \textit{C. elegans}\cite{duch2005community} and,
d) the food web structure of the Weddell Sea
ecosystem\cite{jacob2005trophic}. For simplicity, all the networks are
taken to be undirected.
For the networks built using the Watts-Strogatz model, we set $s =
5000$, and arranged the nodes in a one-dimensional lattice without
boundaries. Each node is connected to the $\sfrac{k}{2}$ nodes on its
left and to the $\sfrac{k}{2}$ nodes on its right, forming a regular,
one-dimensional lattice. Links are then rewired with probability
$0.05$. This can be accomplished by calling the
routine~\texttt{watts.strogatz.game(1, 5000,
k/2, 0.05)} in~\texttt{igraph}.
For the networks built using the Barab\'asi-Albert model, we built
networks of size $s =5000$ using preferential attachment with exponent
$1$, where each node introduced in the network attaches to
$\sfrac{k}{2}$ nodes (if possible). The corresponding routine
in~\texttt{igraph} is~\texttt{barabasi.game(5000, power = 1, m = k/2,
directed = FALSE)}.
\paragraph*{Symmetric matrices.} We used the empirical networks and
those built using the two models above to construct the symmetric
matrices $M$ (i.e., $M_{ij} = A_{ij}N_{ij}$, where $N$ is symmetric).
The semi-superelliptical distribution captures the eigenvalues of $M$
(Figures~\ref{OtherSymm} and \ref{BioSymm}).
\begin{sidewaysfigure}
\includegraphics[width = \linewidth]{Figures/Other5000-1-Normal.pdf}
\caption{\textit{As main text Figure 1, but using different models
to construct the networks.}}
\label{OtherSymm}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\begin{centering}
\includegraphics[width = 0.8\linewidth]{Figures/RealEmpirical.pdf}
\caption{\textit{As main text Figure 1, but using networks of
biological interest.}}
\label{BioSymm}
\end{centering}
\end{sidewaysfigure}
\paragraph*{Asymmetric matrices.} We repeated the exercise using
asymmetric matrices, and found that the superelliptical
distribution describes the eigenvalues quite poorly---as
expected---with the exception of the Watts-Strogatz model and the
High school contact network (Figures~\ref{OtherAsymm}
and \ref{BioAsymm}).
\begin{sidewaysfigure}
\includegraphics[width = \linewidth]{Figures/Other5000-0-Normal.pdf}
\caption{\textit{As main text Figure 2, but using different models
to construct the networks.}}
\label{OtherAsymm}
\end{sidewaysfigure}
\begin{sidewaysfigure}
\begin{centering}
\includegraphics[width = 0.75\linewidth]{Figures/ComplexEmpirical.pdf}
\caption{\textit{As main text Figure 2, but using networks of
biological interest.}}
\label{BioAsymm}
\end{centering}
\end{sidewaysfigure}
\paragraph*{Approximating $\boldsymbol \lambda_1.$} Finally, we
constructed a hundred $1000 \times 1000$ adjacency matrices for each
choice of $k$ using the two models and attempted to approximate
$\lambda_1$. For the Watts-Strogatz model, the approximation of
Chung \textit{et al.} is clearly superior, while for the
Barab\'asi-Albert model and the empirical networks the new
approximations introduced here produce better results
(Figures~\ref{AdjNRnd} and \ref{AdjBio}).
\begin{figure}
\begin{centering}
\includegraphics[width = 0.8\linewidth]{Figures/ApproxL1NRnd.pdf}
\caption{\textit{As main text Figure 4, but using different models.}}
\label{AdjNRnd}
\end{centering}
\end{figure}
\begin{figure}
\includegraphics[width
= \linewidth]{Figures/AdjacencyEmpirical.pdf} \caption{\textit{The
spectrum of the adjacency matrix of four networks of biological
interest. The solid black line marks the value of $\lambda_1$. The
dashed lines represent the approximations of Chung \textit{et al.}
(red) and those obtained by setting $\tilde{\mu}_3 \approx 0$
(green) and $\tilde{\mu}_5 \approx 0$ (turquoise).}} \label{AdjBio}
\end{figure}
\section*{Code}
Code performing the analysis described here is freely available for
download in the public repository:
\noindent \texttt{
https://bitbucket.org/AllesinaLab/superellipses}.
\section*{Acknowledgments}
Research supported by NSF DEB \#1148867. We thank J. Grilli and
P. Staniczenko for comments.
\clearpage
| {
"timestamp": "2013-11-08T02:10:26",
"yymm": "1309",
"arxiv_id": "1309.7275",
"language": "en",
"url": "https://arxiv.org/abs/1309.7275",
"abstract": "All dynamical systems of biological interest--be they food webs, regulation of genes, or contacts between healthy and infectious individuals--have complex network structure. Wigner's semicircular law and Girko's circular law describe the eigenvalues of systems whose structure is a fully connected network. However, these laws fail for systems with complex network structure. Here we show that in these cases the eigenvalues are described by superellipses. We also develop a new method to analytically estimate the dominant eigenvalue of complex networks.",
"subjects": "Populations and Evolution (q-bio.PE)",
"title": "Superelliptical laws for complex networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540722737479,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.7099046452022363
} |
https://arxiv.org/abs/1903.01566 | On Additive Divisor Sums and minorants of divisor functions | We establish asymptotic formulae for various correlations involving general divisor functions $d_k(n)$ and partial divisor functions $d_l(n,A)=\sum_{q|n:q\leq n^A}d_{l-1}(q)$, where $A\in[0,1]$ is a parameter and $k,l\in\mathbb{N}$ are fixed. Our results relate the parameter $A$ to the lengths of arithmetic progressions in which $d_k(n)$ is uniformly distributed. As applications to additive divisor sums, we establish new lower bounds and a new equivalent condition for the conjectured asymptotic. We also prove a Tauberian theorem for general additive divisor sums. | \section{}
\begin{abstract}
We establish asymptotic formulae for various correlations involving general divisor functions $d_k(n)$ and partial divisor functions $d_l(n,A)=\sum_{q|n:q\leq n^A}d_{l-1}(q)$, where $A\in[0,1]$ is a parameter and $k,l\in\mathbb{N}$ are fixed. Our results relate the parameter $A$ to the lengths of arithmetic progressions in which $d_k(n)$ is uniformly distributed. As applications to additive divisor sums, we establish new lower bounds and a new equivalent condition for the conjectured asymptotic. We also prove a Tauberian theorem for general additive divisor sums.
\end{abstract}
\tableofcontents
\newpage
\section{Introduction}
\noindent The focus of this paper is the problem of finding asymptotic formulae for `additive divisor sums'. That is, correlations
\begin{eqnarray}\label{def}
D_{h,k,l}(x)=\sum_{n\leq x}d_k(n+h)d_l(n),
\end{eqnarray}
where $h,k,l\in\mathbb{N}$ are fixed and $d_k(n)$ denotes the number of ordered ways of writing $n$ as a product of $k$ factors. In other words, this is the problem of counting the number of ordered solutions of the Diophantine equation
\begin{eqnarray}
h=n_1\cdots n_k-m_1\cdots m_l \nonumber
\end{eqnarray}
where $(m_1,...,m_l)\in\mathbb{N}^l$, $(n_1,...,n_k)\in\mathbb{N}^k$ and $m_1\cdots m_l\leq x$. \\
\noindent Our results on the correlations in (\ref{def}) are given in Section \ref{secres2}. These results are immediate corollaries of the results given in Section \ref{secres1}, which deal with correlations involving $d_k(n)$ and partial divisor functions
\begin{eqnarray} \label{ddefa}
d_l(n,A)=\sum_{q|n:q\leq n^A}d_{l-1}(q)\hspace{1cm}A\in(0,1].
\end{eqnarray}
As such, we emphasise the results given in Section \ref{secres1}.
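\noindent To fix ideas, for $n=12$ and $A=1/2$ we have $n^{A}=\sqrt{12}\approx 3.46$, so that $d_2(12,1/2)=\#\{q : q\mid 12,\ q\leq\sqrt{12}\}=\#\{1,2,3\}=3$, whereas $d_2(12)=6$.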
\subsection{Additive divisor sums}
When $k=l$, the correlations in (\ref{def}) arise in connection with the problem of finding asymptotic formulae for the $2k$th moments of the Riemann zeta function on the critical line.
This connection was first exploited by Ingham \cite{Ing} in the course of proving his asymptotic formula for the fourth moment of the Riemann zeta function. Ingham proved that
\begin{eqnarray}
D_{h,2,2}(x)\sim\frac{6}{\pi^2}\sigma_{-1}(h)\log^2x \nonumber
\end{eqnarray}
where $\sigma_{z}(n)=\sum_{d|n}d^{z}$,
and subsequently Estermann \cite{Est} established the asymptotic expansion
\begin{eqnarray}\label{est}
D_{h,2,2}(x)=xP_{h,2,2}(\log x)+O\left(x^{11/12+\epsilon}\right)
\end{eqnarray}
where $P_{h,2,2}$ is a polynomial of degree $2$. Estermann demonstrated that $D_{h,2,2}(x)$ is related to the spectral theory of modular forms; his result made crucial use of a non-trivial bound for Kloosterman sums. Heath-Brown \cite{Heath} subsequently used Weil's improved bound \cite{weil} for Kloosterman sums to obtain the error term $O\left(x^{5/6+\epsilon}\right)$ in (\ref{est}), which was later improved by Motohashi \cite{mot} to $O\left(x^{2/3+\epsilon}\right)$ uniformly for $h\leq x^{20/27}$. Each of these improvements led to corresponding improvements of the error term in the asymptotic expansion for the fourth moment of the Riemann zeta function. \\
\noindent Due to the work of Hooley \cite{H}, Linnik \cite{lin}, Fouvry and Tenenbaum \cite{ft}, Heath-Brown \cite{H2}, Drappeau \cite{drap}, Motohashi \cite{moto}, Deshouillers and Iwaniec \cite{DI}, Bykovski and Vinogradov \cite{by} and Topacogullari \cite{to1,to2,to3}, it is now also known that for any fixed $k$ there is a $\delta>0$ and a polynomial $P_{h,k,2}$ of degree $k$ such that
\begin{eqnarray}\label{adv}
D_{h,k,2}(x)=xP_{h,k,2}(\log x)+O_{h,k}(x^{1-\delta}).
\end{eqnarray}\\
\noindent Despite these significant advances, asymptotic formulae for $D_{h,k,l}(x)$ remain elusive when both $k,l\geq 3$. The central conjecture---formulated by
Conrey and Gonek \cite{CG} and Ivi\'c \cite{I1,i2} via the `$\delta$-method' of Duke, Friedlander and Iwaniec \cite{Duke}, and recently refined by Ng and Thom \cite{NgT} and Tao \cite{Tao}---is as follows.
\begin{conjecture}\label{manc} If $h,k,l\in\mathbb{N}$ with $k,l$ fixed and $h=O(x^{1-\epsilon})$ for each fixed $\epsilon>0$, then there is a $\delta>0$ and a polynomial $P_{h,k,l}$ of degree $k+l-2$ such that
\begin{eqnarray}\label{mainp}
D_{h,k,l}(x)=xP_{h,k,l}(\log x)+O_{\epsilon,k,l}(x^{1-\delta}).\nonumber
\end{eqnarray}
The asymptotic is conjectured to be
\begin{eqnarray}\label{lead}
\frac{D_{h,k,l}(x)}{x\log^{k+l-2}x}\sim\frac{C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!}
\end{eqnarray}
as $x\rightarrow\infty$, where
\begin{eqnarray}\label{c}
C_{k,l}=\prod_p \left(\left(1-p^{-1}\right)^{l-1}+\left(1-p^{-1}\right)^{k-1}-\left(1-p^{-1}\right)^{k+l-2}\right)
\end{eqnarray}
and
\begin{eqnarray}\label{fform}
f_{k,l}(h)=\prod_{p|h} \frac{(1-p^{-1})\sum_{\alpha=0}^{\gamma}d_{l-1}(p^{\alpha})
\sum_{\beta=\alpha}^{\infty}d_k(p^{\beta})p^{-\beta}+d_k(p^{\gamma})\sum_{\alpha=\gamma+1}^{\infty}d_{l-1}(p^{\alpha}) p^{-\alpha}}{(1-p^{-1})^{1-k}+(1-p^{-1})^{1-l} -1} \nonumber\\
\end{eqnarray}
where $h=\prod p^{\gamma}$.
\end{conjecture}
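\noindent For orientation, when $k=l=2$ the constant (\ref{c}) reduces to
\begin{eqnarray}
C_{2,2}=\prod_p \left(2\left(1-p^{-1}\right)-\left(1-p^{-1}\right)^{2}\right)=\prod_p\left(1-p^{-2}\right)=\frac{6}{\pi^2},\nonumber
\end{eqnarray}
and one can check that $f_{2,2}(h)=\sigma_{-1}(h)$, so that (\ref{lead}) recovers the leading term of Ingham's asymptotic quoted above.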
\noindent The general forms of the coefficients $C_{k,l}$ and $f_{k,l}(h)$ appearing in (\ref{lead}) were calculated by Ng and Thom \cite{NgT} based on the techniques introduced by Conrey and Gonek \cite{CG}, and the same prediction was made by Tao \cite{Tao} based on pseudorandomness heuristics. \\
\noindent The asymptotic order of $D_{h,k,l}(x)$ is fairly well understood. Regarding upper bounds, it follows from the general theorem of Nair and Tenenbaum \cite{NandT} that
\begin{eqnarray}\label{boundsd}
D_{h,k,l}(x)=O_{h,k,l}(x\log^{k+l-2}x),
\end{eqnarray}
with uniformity in the $h$ aspect following from the work of Henriot \cite{Hen}---the paper of Ng and Thom \cite{NgT} discusses these matters in detail. However, when $k,l\geq 3$, it is notable that explicit bounds on the size of the constant implied in (\ref{boundsd}) have not yet appeared in the literature. Regarding lower bounds, the best general result in the literature is due to Ng and Thom \cite{NgT}, who showed that for $k,l\geq 3$ there is a $B_{k,l}>0$ such that for
\begin{eqnarray}
h\leq \exp\left(B_{k,l}(\log x\log\log x)^{(\min(k,l)-1)/(\min(k,l)-1.99)}\right)\nonumber
\end{eqnarray}
we have
\begin{eqnarray}\label{nandthombound}
\frac{D_{h,k,l}(x)}{x\log^{k+l-2}x}\geq \left(1+O_{k,l}\left(\frac{\log\log h}{\log x}\right)\right)\frac{2^{2-k-l}C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!}.
\end{eqnarray} \\
\noindent Regarding averages over $h$, Matomaki, Radziwill and Tao \cite{TMR} have recently shown that the conjectured asymptotic (\ref{lead}) holds for $k$, $l\geq 2$ and almost all $h\leq H$, provided that $x^{8/33+\epsilon}\leq H\leq x^{1-\epsilon}$, improving on previous work of Baier, Browning, Marasingha and Zhao \cite{BB} on the case $k=l=3$. \\
\subsection{Exponents of distribution}
\noindent The problem of the asymptotic behaviour of additive divisor sums is closely related to the problem of improving the `exponent of distribution' for the generalised divisor problem in arithmetic progressions. An exponent of distribution is a lower bound on the lengths of arithmetic progressions $n\equiv h\pmod q$, $(h,q)=g$, in which $d_k(n)$ is uniformly distributed.
\begin{definition}\label{def1}
A real number $0< \theta_{g,k}\leq 1$ is an exponent of distribution for $d_k(n)$ if for every $q\leq x^{\theta_{g,k}-\epsilon}$ and each residue class $h\not\equiv 0 \pmod q$, we have
\begin{eqnarray}\label{sume}
\sum_{\substack{n\leq x\\n\equiv h \pmod q}} d_k(n)= \frac{1}{\phi\left(q/g\right)}
{\emph{Res}}\left(\frac{x^{s}}{s}\sum_{(n,q)=g}\frac{d_k(n)}{n^s},s=1 \right)
+O_{\epsilon,\delta,k}\left(\frac{x^{1-\delta}}{\phi\left(q/g\right)}\right)
\end{eqnarray}
for some fixed $\delta>0$ and $\epsilon>0$.
\end{definition}
\noindent An alternative way of writing (\ref{sume}) is
\begin{eqnarray}\label{sume2}
\sum_{\substack{n\leq x\\n\equiv h \pmod q}} d_k(n)= \frac{1}{\phi\left(q/g\right)} \sum_{n\leq x/g}\chi_0(n)d_k(gn)+O_{\epsilon,\delta,k}\left(\frac{x^{1-\delta}}{\phi\left(q/g\right)}\right)
\end{eqnarray}
where $\chi_0$ is the principal Dirichlet character to the modulus $q/g$. Definition \ref{def1} is motivated by the fact that we expect $d_k(n)$ to be uniformly distributed even in short arithmetic progressions (i.e. with $q\leq x^{1-\epsilon}$ for every fixed $\epsilon>0$). In other words, we expect that $\theta_{g,k}=1$ for all $k$ provided that $g$ is not large, say $g\leq x^{1-\epsilon}$. \\
\noindent Currently however, we can only prove uniform distribution in sufficiently long arithmetic progressions. The best results in the literature are as follows. For $k=2$, Hooley \cite{H} established that
we may take $\theta_{1,2}= 2/3$. We have $\theta_{g,3}= 21/41$ for all $g$ due to Heath-Brown \cite{H2}, $\theta_{1,4}=1/2$ due to Lavrik \cite{Lav}, and $\theta_{1,5}= 9/20$, $\theta_{1,6}= 5/12$ and $\theta_{1,k}= 8/(3k)$ for $k\geq 7$ due to Friedlander and Iwaniec \cite{FI}.
For $k> 2$, the only $k$ for which an exponent of distribution greater than $1/2$ is known is $k=3$, and both proofs (including the inferior exponent $58/115$ due to Friedlander and Iwaniec \cite{FI}) depend on Deligne's Riemann hypothesis for algebraic varieties over finite fields. For specific moduli, further increments have also been achieved. For instance, via general estimates for sums of trace functions over finite fields twisted by Fourier coefficients of Eisenstein series, Fouvry, Kowalski and Michel \cite{FKM} have shown that (\ref{sume}) holds for $k=3$ for all primes $q\leq x^{12/23}$ (albeit with $x^{-\delta}$ in the error term replaced with $\log^{-C} x$ for every $C>0$).
The error term can be given explicitly in specific cases, although we will not be concerned with these details here. \\
\noindent With the exception of Heath-Brown's
result for $k=3$, the above results on exponents of distribution are stated only for $g=1$. This is usually because $g=1$ is the only value that is required in applications to primes in arithmetic progressions. Yet, for applications to additive divisor sums, we require exponents of distribution for all $g$ in some range as $x\rightarrow\infty$. In this regard, by generalising Heath-Brown's argument, Chace \cite{Chace} has shown that
\begin{eqnarray}\label{chace}
\theta_{k}=\max\left(\frac{1}{k},\theta_{1,k}+\left(1-k\theta_{1,k}\right)\limsup_{x\rightarrow\infty}\frac{\log g}{\log x}\right)
\end{eqnarray} is an exponent of distribution for $d_k(n)$ for all $g$.
\subsection{Partial divisor functions} A central principle in this paper is that partial divisor functions
\begin{eqnarray}
d_k(n,A)=\sum_{\substack{d|n\\d\leq n^A}}d_{k-1}(d)\hspace{1cm}A\in(0,1]
\end{eqnarray}
provide robust approximations to $d_k(n)$ in arithmetic progressions. This property is essential in applications to correlation problems such as (\ref{def}). We return to this in due course. \\
\noindent The pointwise relationship between $d_k(n)$ and $d_k(n,A)$ is generally unpredictable. In this regard, Tenenbaum \cite{Ten1} showed that
\begin{eqnarray}\label{limt}
\lim_{\substack{n\rightarrow\infty\\ n\in S}}\frac{d_2(n,A)}{d_2(n)}
\end{eqnarray}
does not exist for any fixed $A\in (0,1)$ when $S\subseteq \mathbb{N}$ has positive measure. Furthermore, Tenenbaum \cite{Ten2} showed that for every pair $A,B\in (0,1)$ there is an $S$ of positive measure in which $d_2(n,A)=d_2(n,B)$ for every $n\in S$. Presumably, the same conclusions hold for every $k\geq 2$. \\
\noindent On the other hand, the limit in (\ref{limt}) exists on particular sets $S$ of zero measure. For example, if $p$ is prime then we have
\begin{eqnarray}\label{poiu}
\lim_{\alpha\rightarrow\infty}\frac{d_k(p^{\alpha},A)}{d_k(p^{\alpha})}=A^{k-1}
\end{eqnarray}
and so, by partial summation and (\ref{poiu}), it follows that
\begin{eqnarray}
\sum_{\alpha\leq X}a_{\alpha}d_k(p^{{\alpha}},A)\sim A^{k-1}\sum_{{\alpha}\leq X}a_{\alpha}d_k(p^{{\alpha}})
\end{eqnarray}
whenever $(a_{\alpha})$ is a sequence of non-negative real numbers such that $\sum_{{\alpha}\leq X}a_{\alpha}\rightarrow\infty$. \\
\noindent On average, the relationship between $d_k(n,A)$ and $d_k(n)$ is predictable. In this direction, Deshouillers, Dress and Tenenbaum \cite{Des} proved that the mean value of $d_{2}(n,A)/d_2(n)$ converges to an arcsine distribution. This has been generalised by Bareikis \cite{Bareikis}, giving a beta distribution
\begin{eqnarray}
\frac{1}{x}\sum_{n\leq x}\frac{d_k(n,A)}{d_k(n)}\sim\frac{\int_{0}^{A}u^{-1/k}(1-u)^{1/k-1}du}{\Gamma(1/k)\Gamma(1-1/k)}
\end{eqnarray}
uniformly for $0\leq A\leq 1$ as $x\rightarrow\infty$, for any fixed $k\geq 2$. \\
\noindent Roughly speaking, the mean of a partial divisor sum corresponds to the logarithmic mean over the partial range. That is, if $f:\mathbb{N}\rightarrow\mathbb{C}$, then
\begin{eqnarray}\label{approxa}
\sum_{n\leq x}\left(\sum_{\substack{d|n\\d\leq n^A}}f(d)\right)=x\sum_{n\leq x^A}\frac{f(n)}{n}+O\left(\sum_{n\leq x^A}\frac{|f(n)|}{n^{1-1/A}}\right)\hspace{1cm}A\in(0,1],\nonumber\\
\end{eqnarray}
uniformly for $A\geq A_0>0$. Taking $f(n)=d_{k-1}(n)$ in (\ref{approxa}) and using Perron's formula to evaluate the r.h.s, it is easily seen that $d_k(n,A)$
approximates $A^{k-1}d_k(n)$ in the mean, that is
\begin{eqnarray}\label{diffb}
\sum_{n\leq x}d_k(n,A)=A^{k-1}\sum_{n\leq x}d_k(n)+O_A\left(x\log^{k-2}x\right).
\end{eqnarray}
Here, it is notable that the non-multiplicativity of $d_k(n,A)$ is crucial to the quality of the approximation in (\ref{diffb}). Indeed, since $d_k(p^{\alpha},A)=d_k(p^{\lfloor {\alpha}A\rfloor})$, the mean value of $\prod_{p^{\alpha}\| n}d_k(p^{\alpha},A)$ exists for every $A<1$ and $k\in\mathbb{N}$, whereas the mean of $d_k(n)$ is $\sim \log^{k-1} x/(k-1)!$.\\
\noindent An elementary refinement of (\ref{approxa}) is that
\begin{eqnarray}\label{approxa2}
\sum_{\substack{n\leq x\\n\equiv h\pmod q}}\left(\sum_{\substack{d|n\\d\leq n^A}}f(d)\right)=\frac{x}{q}\sum_{\substack{n\leq x^A\\(n,q)|h}}\frac{(n,q)f(n)}{n}+O\left(\sum_{n\leq x^A}\frac{|f(n)|}{n^{1-1/A}}\right),
\end{eqnarray}
which may be proved by interchanging the order of summation and trivially estimating the length of the resulting arithmetic progression. However, in applications to correlation problems, the error term in (\ref{approxa2}) is not strong enough. Typically we require an additional factor of $1/q$, uniformly for $q\leq x^{C}$ as $x\rightarrow\infty$ for some $C>1-A$. Our first theorem (Theorem \ref{arith}) establishes the requisite refinement of (\ref{approxa2}) in the case $f(n)=d_{k-1}(n)$, with a fairly strong value of $C$. Our second theorem (Theorem \ref{anotherpoly}) is deduced from Theorem \ref{arith}.
\section{Results}
\subsection{On partial divisor functions}\label{secres1}
\noindent The theorems in this section do all the work in proving the corollaries on additive divisor sums in Section \ref{secres2}. Theorems \ref{arith} and \ref{anotherpoly} are proved in Section \ref{pots} and Theorem \ref{mainlem} is proved in Section \ref{pott}. In light of the structural connection to additive divisor sums, theorems of this type are potentially of further use in such applications. Moreover, in accordance with the conjecture that $\theta_k=1$, we expect that the ranges of $A$, $B$ and $q$ for which these formulae hold may be improved significantly. \\
\noindent Theorem \ref{arith} shows that $d_k(n,A)$ provides a robust approximation to $A^{k-1}d_k(n)$ in arithmetic progressions.
\begin{theorem}\label{arith}If $h,k\in\mathbb{N}$ are fixed and $q\leq x^{\min(\theta_k,A\theta_{k-1})-\epsilon}$, then
\begin{eqnarray}\label{mainpoly0}
\sum_{\substack{n\leq x\\n\equiv h \pmod q}} d_k(n,A)=A^{k-1}\sum_{\substack{n\leq x\\n\equiv h \pmod q}}d_k(n)+ O_{A,\epsilon,h,k}\left(\frac{x\log^{k-2}x}{q}\right).
\end{eqnarray}
In other words
\begin{eqnarray}
\sum_{\substack{n\leq x\\n\equiv h \pmod q}} d_k(n,A)=\frac{x}{q}\sum_{\substack{n\leq x^A\\(n,q)|h}}\frac{(n,q)d_{k-1}\left(n\right)}{n}+O_{A,h,k}\left(\frac{x\log^{k-2}x}{q}\right).
\end{eqnarray}\end{theorem}
\noindent Theorem \ref{anotherpoly} gives an asymptotic formula for the correlation of $d_k(n,A)$ with $d_l(n,B)$.
\begin{theorem}\label{anotherpoly} If $A\leq 1$, $B<\min(\theta_k,A\theta_{k-1})$ and $h,k,l\in\mathbb{N}$ are fixed, then
\begin{eqnarray}\label{mainpoly1}
\sum_{n\leq x}d_k(n+h,A)d_l(n,B)&=&\frac{A^{k-1}B^{l-1}C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!}x\log^{k+l-2} x\nonumber\\
&+&O_{A,B,h,k,l}\left(x \log^{k+l-3}x\right),
\end{eqnarray}
where $C_{k,l}$ and $f_{k,l}(h)$ are defined in (\ref{c}) and (\ref{fform}).
\end{theorem}
\noindent Theorem \ref{mainlem} gives an asymptotic expansion with power saving error term for the correlation of $d_k(n)$ and $d_l(n,A)$.
\begin{theorem}\label{mainlem}If $A<\theta_{k}$ and $h,k,l\in\mathbb{N}$ are fixed,
then there is a $\delta>0$ and a polynomial $P_{A,h,k,l}$ of degree $k+l-2$ such that
\begin{eqnarray}\label{mainpoly2}
\sum_{n\leq x}d_k(n+h)d_l(n,A)=xP_{A,h,k,l}(\log x)+O_{A,\delta,h,k,l}\left(x^{1-\delta}\right).
\end{eqnarray}
An explicit formula for $P_{A,h,k,l}$ is given in (\ref{polydef}). In particular, the coefficient of the leading term is $A^{l-1}C_{k,l}f_{k,l}(h)/(k-1)!(l-1)!$.
\end{theorem}
\noindent We note that if $\theta_k>1/2$ and $l=2$, then $A=1/2$ is admissible in Theorem \ref{mainlem}, which thus yields an alternative proof of (\ref{adv}) in such cases. For example, in Section \ref{append} we carry out this calculation in the case $k=l=2$ to reproduce Estermann's asymptotic expansion (\ref{est}) explicitly.
\subsection{On additive divisor sums}\label{secres2}
Corollaries I, II and III follow immediately from the theorems of Section \ref{secres1}. \\
\noindent Corollary I sharpens the lower bound (\ref{nandthombound}) given by Ng and Thom in \cite{NgT} when $h$ is fixed and $k$ is sufficiently large in comparison with $l$.
\begin{corollarya}\label{t1}
For fixed $h,k,l\in\mathbb{N}$ we have
\begin{eqnarray}\label{mybound2}
\liminf_{x\rightarrow\infty}\frac{D_{h,k,l}(x)}{x\log^{k+l-2}x}\geq \theta_k^{l-1}\frac{C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!}.
\end{eqnarray}
\end{corollarya}
\begin{proof}
Note that $d_l(n)\geq d_l(n,A)$, and use Theorem \ref{mainlem}.
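In detail: since $d_l(n)\geq d_l(n,A)$, Theorem \ref{mainlem} gives, for any fixed $A<\theta_k$,
\begin{eqnarray}
D_{h,k,l}(x)\geq\sum_{n\leq x}d_k(n+h)d_l(n,A)=\frac{A^{l-1}C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!}x\log^{k+l-2}x+O_{A,h,k,l}\left(x\log^{k+l-3}x\right),\nonumber
\end{eqnarray}
and letting $A\rightarrow\theta_k$ yields (\ref{mybound2}).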
\end{proof}
\noindent
For instance, given Heath-Brown's exponent $\theta_3=21/41$, it follows from Corollary I that
\begin{eqnarray}\label{mybound3}
\liminf_{x\rightarrow\infty}\frac{D_{h,3,3}(x)}{x\log^{4}x}\geq 0.262\frac{C_{3,3}f_{3,3}(h)}{4}. \nonumber
\end{eqnarray}\\
\noindent Corollary II gives an equivalent condition for the conjectured asymptotic (\ref{lead}).
\begin{corollaryb}\label{t2} For fixed $h,k,l\in\mathbb{N}$, the asymptotic (\ref{lead}) holds if and only if
\begin{eqnarray}\label{diff}
\sum_{n\leq x}d_k(n+h)\left(d_l(n)-B^{1-l}d_l(n,B)\right)=o\left( x\log^{k+l-2} x \right)
\end{eqnarray}
for some (and hence every) $B< \theta_k$.
\end{corollaryb}
\begin{proof}
Compare (\ref{lead}) with Theorem \ref{mainlem} or Theorem \ref{anotherpoly} with $A=1$.
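In detail: applying Theorem \ref{mainlem} with $A=B<\theta_k$ and multiplying by $B^{1-l}$, the left-hand side of (\ref{diff}) equals
\begin{eqnarray}
D_{h,k,l}(x)-\frac{C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!}x\log^{k+l-2}x+O_{B,h,k,l}\left(x\log^{k+l-3}x\right),\nonumber
\end{eqnarray}
so (\ref{diff}) holds for some (equivalently, for every) $B<\theta_k$ if and only if (\ref{lead}) holds.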
\end{proof}
\noindent In support of the plausibility of (\ref{diff}), we note that
\begin{corollaryc}\label{t6} If $A<\theta_l$, $B<\min(\theta_k,A\theta_{k-1})$ and $h,k,l\in\mathbb{N}$ are fixed, then
\begin{eqnarray}
\sum_{n\leq x}d_k(n+h,A)\left(d_l(n)-B^{1-l}d_l(n,B)\right)=O_{A,B,h,k,l}\left( x\log^{k+l-3} x \right).
\end{eqnarray}
\end{corollaryc}
\begin{proof}
This follows from Theorem \ref{anotherpoly} by swapping variables $A,B$ and $k,l$.
\end{proof}
\noindent The last result of this section is Theorem \ref{t3}. This is a Tauberian theorem and may be viewed as an analogue of the relationship between the Prime Number Theorem and the non-vanishing of $\zeta(1+it)$, in which we view $D_{h,k,l}(x)$ analogously to $\pi(x)$. Theorem \ref{t3} is proved in Section \ref{pot3}.
\begin{theorem}\label{t3}Let $h,k,l\in\mathbb{N}$ and $0\leq y<\infty$ be fixed. Then the function
\begin{eqnarray}
\mathcal{D}_{h,k,l}(s,y)=\sum_{1}^{\infty}\frac{d_k(n+h)d_l\left(n,\frac{y}{\log n}\right)}{(n+h)^s}\hspace{1cm}(\sigma>1) \nonumber
\end{eqnarray}
has an analytic continuation to the complex plane except for a pole of order $k$ at $s=1$ and, if the limit
\begin{eqnarray}
\lim_{y\rightarrow\infty}\mathcal{D}_{h,k,l}(1+it,y) \nonumber
\end{eqnarray} exists and is continuous for $t\neq 0$, then we have
\begin{eqnarray}\label{asymor}
\frac{D_{h,k,l}(x)}{x\log^{k+l-2}x}\sim\frac{C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!} \nonumber
\end{eqnarray}
as $x\rightarrow\infty$.
\end{theorem}
\section{Definitions}\label{coeffs}
\noindent Definitions \ref{ftg}--\ref{forma} arise in the course of the proofs.
\begin{definition} \label{ftg} For $j\in\mathbb{N}$ and $s\in\mathbb{C}$, we define
\begin{eqnarray}
(s-1)^j\zeta^j(s)=\sum_{r=0}^{\infty}\frac{a_r(j)}{r!}(s-1)^r \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\frac{1}{n!}\frac{d^n}{ds^n}\frac{(s-1)^j\zeta^j(s)}{s}\biggr\vert _{s=1}=\sum_{r=0}^{n}\frac{(-1)^{n-r}a_r(j)}{r!}=c_n(j). \nonumber
\end{eqnarray}
\end{definition}
\begin{definition}\label{fth}
For $h,k,l\in\mathbb{N}$, $h=\prod p^{\gamma}$, $\Re w>-1-\frac{\sigma-1}{l-1}$ and $\sigma>-\Re w$, we also define
\begin{eqnarray}
C_{k,l}(s,w)= \prod_p \left((1-p^{-w-1})^{l-1}+\frac{(1-p^{-s})^k} {1-p^{-1}} - \frac{(1-p^{-s})^k(1-p^{-w-1})^{l-1}}{1-p^{-1}}\right),\nonumber
\end{eqnarray}
\begin{eqnarray}\label{fdef}
f_{h,k,l}(s,w)=\prod_{p|h} \frac{(1-p^{-1})\sum_0^{\gamma}d_{l-1}(p^{\alpha})
\sum_{\alpha}^{\infty}d_k(p^{\beta})p^{-\beta s-\alpha w}+d_k(p^{\gamma})\sum_{\gamma+1}^{\infty}d_{l-1}(p^{\alpha}) p^{-\alpha (w+1)}}{(1-p^{-1})(1-p^{-s})^{-k}+(1-p^{-w-1})^{1-l} -1} ,
\end{eqnarray}
and
\begin{eqnarray}\label{dr}
C_{k,l}(s,w)f_{h,k,l}(s,w)=\sum_{1}^{\infty}\frac{\varphi_{h,k,l}(q,s)}{q^{w}}.
\end{eqnarray}
\end{definition}
\begin{definition}\label{form}
For $m<k$ and $n<l$ we define
\begin{eqnarray}\label{coeffse}
b_{h,k,l,m,n}=\sum_{i=0}^{k-1-m}\sum_{j=0}^{l-1-n}\frac{a_{l-1-n-j}(l-1)c_{k-1-m-i}(k)}{(l-1-n-j)!}
\frac{\partial^{i}}{i!\partial s^{i}}\frac{\partial ^{j}}{j!\partial w^{j}}\sum_{1}^{\infty}\frac{\varphi_{h,k,l}(q,s)}{q^{w}}\biggr\vert _{w=0,s=1} \nonumber \\
\end{eqnarray}
and note that the Dirichlet series in (\ref{coeffse}) converge absolutely. In particular, we have $b_{h,k,l,k-1,l-1}=C_{k,l}f_{k,l}(h)$
where $C_{k,l}$ and $f_{k,l}(h)$ are defined in (\ref{c}) and (\ref{fform}).
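(Indeed, for $m=k-1$ and $n=l-1$ the only term in (\ref{coeffse}) is $i=j=0$, and since $a_0(l-1)=c_0(k)=1$ the sum reduces to $C_{k,l}(1,0)f_{h,k,l}(1,0)$ by (\ref{dr}).)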
\end{definition}
\begin{definition}\label{forma}
Lastly, for $m<k+l-2$ and $0<A\leq 1$ we define
\begin{eqnarray}
a_{A,h,k,l,m}&=&(-1)^{m}\sum_{j=m-l+2}^{k-1}\sum_{r=m-l+2}^{j}\sum_{v=0}^{r-m+l-2}\frac{(-A)^{r-j-v+l-1}a_v(l-1)(v-l+1)_r}{v!}\nonumber\\&\times &{l-v-2\choose j-r}
\sum_{i=j}^{k-1}{i\choose j}\frac{c_{k-1-i}(k)}{i!} \frac{\partial^{i-j}}{\partial s^{i-j}} \frac{\partial^{j+l-r-2}}{\partial w^{j+l-r-2}} \sum_{1}^{\infty}\frac{\varphi_{h,k,l}^{}(d,s)}{d^w}\biggr\vert _{w=0,s=1}.
\end{eqnarray}
\end{definition}
\section{Proofs}
\subsection{Theorem \ref{mainlem}}\label{pott}
\begin{proof}
We have
\begin{eqnarray}\label{p1}
\sum_{n\leq x}d_k(n+h)d_l(n,A)&=&\sum_{n\leq x}d_k(n+h)\sum_{\substack{q|n\\q\leq n^A}}d_{l-1}(q)\nonumber\\
&=&\sum_{q\leq x^A}d_{l-1}(q)\sum_{\substack{q^{1/A}+h\leq n\leq x+h\\n\equiv h \pmod q}}d_k(n).
\end{eqnarray}\\
\noindent Using Definition \ref{def1} to evaluate the inner summations on the r.h.s of (\ref{p1}), for $0<A<\theta_{k}$ we see that (\ref{p1}) is
\begin{eqnarray}\label{p2}
\sum_{n\leq x}d_k(n+h)d_l(n,A)
&=&\sum_{q\leq x^A}\frac{d_{l-1}(q)}{\phi\left(\frac{q}{(h,q)}\right)}
\textrm{Res}\left(\frac{(x+h)^{s}}{s}\sum_{(n,q)=(h,q)}\frac{d_k(n)}{n^s},s=1\right)
\nonumber
\end{eqnarray}
\begin{eqnarray}
&-& \sum_{q\leq x^A}\frac{d_{l-1}(q)}{\phi\left(\frac{q}{(h,q)}\right)}
\textrm{Res}\left(\frac{(q^{1/A}+h-\delta_A(q))^{s}}{s}\sum_{(n,q)=(h,q)}\frac{d_k(n)}{n^s},s=1\right)\\
&+&O_{A,\delta,k}\left(x^{1-\delta}\sum_{q\leq x^A}\frac{d_{l-1}(q)}{\phi\left(\frac{q}{(h,q)}\right)}\right)\nonumber\\&+&O_{A,\delta,k}\left(x^{1-\delta}\sum_{q\leq x^A}\frac{d_{l-1}(q)(q^{1/A}+h-\delta_A(q))^{1-\delta}}{\phi\left(\frac{q}{(h,q)}\right)}\right)\nonumber
\end{eqnarray}
\noindent where $\delta_A(q)=1$ if $q^{1/A}$ is an integer and $\delta_A(q)=0$ otherwise. The summations in the error terms in the third and fourth lines of (\ref{p2}) are $O_{A,h}(\log^{l-1} x)$, so it remains to evaluate the first two terms.
\subsubsection{Evaluation of the primary term}
We begin by evaluating the first term on the r.h.s of (\ref{p2}). Let $\chi_0$ denote the principal character to the modulus $q/g$, where $q=\prod p^{\alpha}$, $h=\prod p^{\gamma}$ and $g=(h,q)=\prod p^{\delta}$ so $\delta=\min(\alpha,\gamma)$. We have
\begin{eqnarray}\label{g}
\sum_{(n,q)=g}\frac{d_k(n)}{n^s}=\sum_{1}^{\infty}\frac{\chi_0(n)d_k(gn)}{(gn)^s}&=&\prod_p\sum_{0}^{\infty}d_k(p^{\beta+\delta})\chi_0(p^{\beta})p^{-(\beta+\delta)s}
\nonumber\\&=&L^k(s,\chi_0)b_{h,k}(s,q) \nonumber
\end{eqnarray}
where
\begin{eqnarray}\label{adef}
b_{h,k}(s,q)=\prod_{p|g}(1-\chi_0(p)p^{-s})^k\sum_{\delta}^{\infty}d_k(p^{\beta})\chi_0(p^{\beta-\delta})p^{-\beta s} \nonumber
\end{eqnarray}
is a multiplicative function of $g$ for all $k,s$. By Cauchy's theorem, the first term on the r.h.s of (\ref{p2}) is
\begin{eqnarray}\label{mainres}
&=&\frac{1}{(k-1)!}\frac{\partial^{k-1}}{\partial s^{k-1}}\frac{(s-1)^k\zeta^k(s)Z_{h,k,l}\left(s,x^A\right)(x+h)^s}{s}\biggr\vert _{s=1}\nonumber\\
&=&\frac{1}{(k-1)!}\sum_{i=0}^{k-1}{k-1\choose i}\frac{\partial^{i}}{\partial s^{i}}Z_{h,k,l}\left(s,x^A\right)\biggr\vert _{s=1}\frac{\partial^{k-1-i}}{\partial s^{k-1-i}}\frac{(s-1)^k\zeta^k(s)(x+h)^s}{s}\biggr\vert _{s=1}
\end{eqnarray}
where
\begin{eqnarray}\label{sum}
Z_{h,k,l}\left(s,x^A\right)=\sum_{q\leq x^A}\frac{d_{l-1}(q) }{\phi \left(\frac{q}{(h,q)}\right)}\prod_{p|\frac{q}{(h,q)}}\left(1-p^{-s}\right)^kb_{h,k}(s,q),
\end{eqnarray}
and so our first task is to find asymptotic formulae for $\frac{\partial^i}{\partial s^i}Z_{h,k,l}\left(s,Q\right)$ at $s=1$ as $Q\rightarrow\infty$. To proceed, we note that the factor
\begin{eqnarray}
\frac{d_{l-1}(q) }{\phi \left(\frac{q}{(q,h)}\right)}\prod_{p|\frac{q}{(q,h)}}\left(1-p^{-s}\right)^k \nonumber
\end{eqnarray}
of the summand in (\ref{sum}) is a multiplicative function of $q$ for all $h,k,s$, and we shall now show that $b_{h,k}(s,q)$ is also. From (\ref{adef}) we have
\begin{eqnarray}\label{nine}
b_{h,k}(s,q)&=&\prod_{\substack{p|(q,h)\\p|\frac{q}{(q,h)}}}\frac{d_k(p^{\delta(q)})}{p^{\delta(q)s}} \prod_{\substack{p|(q,h)\\p\nmid\frac{q}{(q,h)}}}
(1-p^{-s})^k\sum_{\delta(q)}^{\infty}d_k(p^{\beta})p^{-\beta s}.
\end{eqnarray}
If $q=rt$ with $(r,t)=1$, we have $(rt,h)=(r,h)(t,h)$ and $\delta(rt)=\delta(r)+\delta(t)$ for every $p$ so the inclusion $p|(rt,h)$ in (\ref{nine}) is multiplicative, that is
\begin{eqnarray}\label{ff}
b_{h,k}(s,rt)&=&\prod_{\substack{p|(r,h)\\p|\frac{r}{ (r,h)} \frac{t}{(t,h)} }}\frac{d_k(p^{\delta(r)})}{p^{\delta(r)s}}\prod_{\substack{ p|(r,h)\\ p\nmid\frac{r}{ (r,h)} \frac{t}{(t,h)}}}(1-p^{-s})^k\sum_{\delta(r)}^{\infty}d_k(p^{\beta})p^{-\beta s}
\nonumber\\
&\times&\prod_{\substack{p|(t,h)\\p|\frac{r}{ (r,h)} \frac{t}{(t,h)}}}\frac{d_k(p^{\delta(t)})}{p^{\delta(t)s}}\prod_{\substack{ p|(t,h)\\ p\nmid \frac{r}{ (r,h)} \frac{t}{(t,h)}}}(1-p^{-s})^k\sum_{\delta(t)}^{\infty}d_k(p^{\beta})p^{-\beta s}.
\end{eqnarray}
Since $p|r$ implies $p\nmid t$, the intersection of the sets $p|(r,h)$ and $p| t/(t,h)$ in (\ref{ff}) is already empty. It follows that if $p|(r,h)$ then the inclusion $p| t/(t,h)$ and exclusion $p\nmid t/(t,h)$ is superfluous, and vice-versa. Therefore
\begin{eqnarray}\label{ffs}
b_{h,k}(s,rt)&=&\prod_{\substack{p|(r,h)\\p|\frac{r}{ (r,h) }}}\frac{d_k(p^{\delta(r)})}{p^{\delta(r)s}}\prod_{\substack{ p|(r,h)\\ p\nmid\frac{r}{ (r,h) }}}(1-p^{-s})^k\sum_{\delta(r)}^{\infty}d_k(p^{\beta})p^{-\beta s}
\nonumber\\
&\times&\prod_{\substack{p|(t,h)\\p|\frac{t}{ (t,h) }}}\frac{d_k(p^{\delta(t)})}{p^{\delta(rt)s}}\prod_{\substack{ p|(t,h)\\ p\nmid \frac{t}{(t,h) }}}(1-p^{-s})^k\sum_{\delta(t)}^{\infty}d_k(p^{\beta})p^{-\beta s} \nonumber \\
&=&b_{h,k}(s,r)b_{h,k}(s,t). \nonumber
\end{eqnarray}
Thus
\begin{eqnarray}\label{sum2}
Z_{h,k,l}\left(s,x^A\right)=\sum_{q\leq x^A}\phi_{h,k,l}(s,q) \nonumber
\end{eqnarray}
where the summand
\begin{eqnarray}
\phi_{h,k,l}(s,q)=\frac{d_{l-1}(q) }{\phi \left(\frac{q}{(q,h)}\right)}\prod_{p|\frac{q}{(q,h)}}\left(1-p^{-s}\right)^k\prod_{\substack{p|(q,h)\\p|\frac{q}{(q,h)}}}\frac{d_k(p^{\delta(q)})}{p^{\delta(q)s}} \prod_{\substack{p|(q,h)\\p\nmid\frac{q}{(q,h)}}}
(1-p^{-s})^k\sum_{\delta(q)}^{\infty}d_k(p^{\beta})p^{-\beta s}\nonumber
\end{eqnarray}
is a multiplicative function of $q$ for all $h,k,l,s$. \\
\noindent Since $\phi_{h,k,l}(s,q)$ is multiplicative, we define the Euler product
\begin{eqnarray}\label{prod}
\Phi_{h,k,l}(s,w)=\prod_p \sum_{0}^{\infty}\phi_{h,k,l}(s,p^{\alpha})p^{-\alpha w} \nonumber
\end{eqnarray}
for values of $w\in\mathbb{C}$ for which the r.h.s converges absolutely. If $p\nmid h$ then $\phi_{h,k,l}(s,p^{\alpha})=\phi_{1,k,l}(s,p^{\alpha})$ which, after some routine algebra, gives
\begin{eqnarray}\label{fac}
\Phi_{h,k,l}(s,w)=
C_{k,l}(s,w)f_{h,k,l}(s,w)\zeta^{l-1}(w+1),
\end{eqnarray}
where $f_{h,k,l}(s,w)$ is defined in (\ref{fdef}) and the Euler product
\begin{eqnarray}\label{poi}
C_{k,l}(s,w)= \prod_p \left(1-p^{-w-1}\right)^{l-1}+\frac{(1-p^{-s})^k\left(1-(1-p^{-w-1})^{l-1}\right)}{1-p^{-1}}
\end{eqnarray}
converges absolutely for $\Re w>-1-\frac{\sigma-1}{l-1}$ and $\sigma>-\Re w$ (this follows from the fact that the largest power of $p$ appearing in each factor of the Euler product has real part strictly less than $-1$ when $s,w$ are in this range). Consequently, $C_{k,l}(s,w)$ is analytic and bounded on compact subsets of the half planes $\Re w>-1-\frac{\sigma-1}{l-1}$ and $\sigma>-\Re w$. It follows that for fixed $h,i,k,l$ the Dirichlet series
\begin{eqnarray}\label{ser}
\frac{\partial^i}{\partial s^i}C_{k,l}(s,w)f_{h,k,l}(s,w)=\sum_{1}^{\infty}\frac{\partial^i}{\partial s^i}\frac{\varphi_{h,k,l}(q,s)}{q^{w}} \nonumber
\end{eqnarray}
is absolutely convergent and bounded for such values of $s,w$. Thus, using the relation
\begin{eqnarray}\label{conv}
\phi_{h,k,l}(s,q)=\sum_{d|q}\varphi_{h,k,l}(d,s)\frac{d_{l-1}(q/d)}{q/d},
\end{eqnarray}
we have
\begin{eqnarray}\label{lop}
\frac{\partial^i}{\partial s^i}Z_{h,k,l}\left(s,Q\right)&=&\frac{\partial^i}{\partial s^i}\sum_{q\leq Q}\sum_{d|q}\varphi_{h,k,l}(d,s)\frac{d_{l-1}(q/d)}{q/d}\nonumber\\
&=&\frac{\partial^i}{\partial s^i}\sum_{d\leq Q}\varphi_{h,k,l}(d,s)\sum_{q\leq Q/d}\frac{d_{l-1}(q)}{q}\nonumber\\
&=&\frac{\partial^i}{\partial s^i}\sum_{d\leq Q}\varphi_{h,k,l}(d,s)\Bigg(\sum_{j=0}^{l-1}\frac{a_{l-1-j}(l-1)\log^j(Q/d)}{(l-1-j)!j!}\\
&+&\frac{1}{2\pi i}\int_{\left(\epsilon-2/l\right)}\zeta^{l-1}(w+1)\frac{\left(Q/d\right)^wdw}{w}\Bigg)\nonumber\\
&=&\sum_{0}^{l-1}\frac{a_{l-1-j}(l-1)}{j!(l-1-j)!}\frac{\partial^i}{\partial s^i}\sum_{d\leq Q}\varphi_{h,k,l}(d,s)\log^j(Q/d)\nonumber\\
&+&
O\left(Q^{\epsilon-2/l}\sum_{d\leq Q} \left| \frac{\partial^i}{\partial s^i} \varphi_{h,k,l}(d,s) \right| d^{2/l-\epsilon} \right)
\nonumber
\end{eqnarray}
where the notation $\int_{(c)}$ in the fourth line of (\ref{lop}) denotes integration along a vertical line from $c-i\infty$ to $c+i\infty$. That this integral is $O\left((Q/d)^{\epsilon-2/l}\right)$ follows from classical results on the error term in the generalised Dirichlet divisor problem (see Titchmarsh \cite{Titch}, for instance). Expanding $\log^j(Q/d)$ as a polynomial in $\log Q$, (\ref{lop}) is
\begin{eqnarray}\label{A}
&=&\sum_{j=0}^{l-1}\frac{a_{l-1-j}(l-1)}{j!(l-1-j)!}\sum_{n=0}^{j}{j\choose n}\log ^nQ\frac{\partial^i}{\partial s^i}\sum_{d\leq Q}\varphi_{h,k,l}(d,s)(-\log d)^{j-n}
+O\left(Q^{\epsilon-2/l}\right)
\nonumber\\
&=&\sum_{n=0}^{l-1}\frac{\log ^nQ}{n!}\sum_{j=0}^{l-1-n}\frac{a_{l-1-n-j}(l-1)}{j!(l-1-n-j)!}\frac{\partial^i}{\partial s^i}\frac{\partial^{j}}{\partial w^{j}}\sum_{d\leq Q}\frac{\varphi_{h,k,l}(d,s)}{d^{w}}\biggr\vert _{w=0}+O\left(Q^{\epsilon-2/l}\right).\nonumber\\
\end{eqnarray}\\
\noindent We also have
\begin{eqnarray}\label{zalc}
&&\frac{\partial^{k-1-i}}{\partial s^{k-1-i}}\frac{(s-1)^k\zeta^k(s)(x+h)^s}{s}\biggr\vert _{s=1}\nonumber\\&=&
(x+h)(k-1-i)!\sum_{r=0}^{k-1-i}\frac{a_r(k)}{r!}\sum_{m=0}^{k-1-i-r}\frac{(-1)^{k-1-i-r-m}\log^m(x+h)}{m!}\nonumber\\
&=&(x+h)(k-1-i)!\sum_{m=0}^{k-1-i}\frac{\log^{k-1-i-m}(x+h)}{(k-1-i-m)!}c_m(k)
\end{eqnarray}
Setting $Q=x^A$ in (\ref{A}) and using (\ref{zalc}), we conclude that (\ref{mainres}) is
\begin{eqnarray}\label{ftl}
&=&(x+h)\sum_{m=0}^{k-1}\sum_{n=0}^{l-1}\frac {A^nb_{h,k,l,m,n}}{m!n!}\log^{m}(x+h)\log^{n}x\nonumber\\
&+&O\left((x+h)x^{\epsilon-2A/l}\sum_{i=0}^{k-1}\sum_{m=i}^{k-1}\frac{\log^{k-1-m}(x+h)|c_{m-i}(k)|}{(k-1-m)!}
\sum_{d\leq x^A} \left| \frac{\partial^i}{\partial s^i} \varphi_{h,k,l}(d,s) \right|_{s=1} d^{2/l-\epsilon} \right),\nonumber\\
&=&x\sum_{m=0}^{k-1}\sum_{n=0}^{l-1}\frac {A^nb_{h,k,l,m,n}}{m!n!}\log^{m+n}x+O_{h,k,l}\left(x^{1-2A/l+\epsilon} \right),\nonumber\\
\end{eqnarray}
where the coefficients $b_{h,k,l,m,n}$ are defined in Section \ref{coeffs}.
\subsubsection{Evaluation of the secondary term}
We now evaluate the second term on the r.h.s of (\ref{p2}). By Cauchy's theorem, this is
\begin{eqnarray}\label{mainres4}
&=&\frac{1}{(k-1)!}\frac{\partial^{k-1}}{\partial s^{k-1}}\frac{(s-1)^k\zeta^k(s)W_{A,h,k,l}\left(s,x^A\right)}{s}\biggr\vert _{s=1}\nonumber\\
&=&\frac{1}{(k-1)!}\sum_{i=0}^{k-1}{k-1\choose i}\frac{\partial^{i}}{\partial s^{i}}W_{A,h,k,l}\left(s,x^A\right)\biggr\vert _{s=1}\frac{\partial^{k-1-i}}{\partial s^{k-1-i}}\frac{(s-1)^k\zeta^k(s)}{s}\biggr\vert _{s=1}\nonumber\\
&=&\sum_{i=0}^{k-1}\frac{1}{i!}\frac{\partial^{i}}{\partial s^{i}}W_{A,h,k,l}\left(s,x^A\right)\biggr\vert _{s=1}c_{k-1-i}(k),
\end{eqnarray}
where
\begin{eqnarray}
W_{A,h,k,l}\left(s,Q\right)=\sum_{q\leq Q}\phi_{h,k,l}(s,q) \nonumber
(q^{1/A}+h-\delta_A(q))^s.
\end{eqnarray}
By (\ref{conv}) we have
\begin{eqnarray}\label{w}
\frac{\partial^i}{\partial s^i}W_{A,h,k,l}\left(s,Q\right)\biggr\vert _{s=1}&=&\frac{\partial^i}{\partial s^i}\sum_{d\leq Q}\varphi_{h,k,l}(d,s)\sum_{q\leq Q/d}\frac{d_{l-1}(q)((qd)^{1/A}+h-\delta_A(q))^s}{q}\biggr\vert _{s=1}\nonumber\\
&=&\sum_{j=0}^i{i\choose j}\sum_{d\leq Q} \frac{\partial^{i-j}}{\partial s^{i-j}} \varphi_{h,k,l}(d,s)\frac{\partial^{j}}{\partial s^{j}}V_{A,h,l,Q}(d,s)\biggr\vert _{s=1},
\end{eqnarray}
where
\begin{equation}
V_{A,h,l,Q}(d,s)=\sum_{q\leq Q/d}\frac{d_{l-1}(q)((qd)^{1/A}+h-\delta_A(q))^s}{q}\nonumber
\end{equation}
and
\begin{eqnarray}\label{part}
\frac{\partial^{j}}{\partial s^{j}}V_{A,h,l,Q}(d,s)\biggr\vert _{s=1}&=&(-1)^j\sum_{q\leq Q/d}\frac{d_{l-1}(q)((qd)^{1/A}+h-\delta_A(q))\log^j((qd)^{1/A}+h-\delta_A(q))}{q}
\nonumber\\
&=&(-A)^{-j}d^{1/A}\sum_{q\leq Q/d}\frac{d_{l-1}(q)\log^j(qd)}{q^{1-1/A}}+O_{A,h,j,l}\left((Q/d)^{\epsilon}\right)\nonumber\\
&=&(-A)^{-j}d^{1/A}\sum_{0}^j{j\choose m}\log^{j-m}d\sum_{q\leq Q/d}\frac{d_{l-1}(q)\log^{m}q}{q^{1-1/A}}+O_{A,h,j,l}\left((Q/d)^{\epsilon}\right).
\end{eqnarray}
The inner summation on the r.h.s of (\ref{part}) may be written as
\begin{eqnarray}
\sum_{q\leq Q/d}\frac{d_{l-1}(q)\log^{m}q}{q^{1-1/A}}=\frac{(-1)^m}{2\pi i}\int_{(\epsilon)}\frac{d^m}{dw^m}\zeta^{l-1}(w+1)\frac{(Q/d)^{w+1/A}dw}{w+1/A},\nonumber
\end{eqnarray}
which may be evaluated using Cauchy's Theorem and classical results on the error term in the generalised Dirichlet divisor problem (see Titchmarsh \cite{Titch}). The error term is $O_{A,h,l}\left((Q/d)^{1/A-2/l+\epsilon}\right)$, and the residue at the pole at $w=0$ is
\begin{eqnarray}\label{po}
&=&\frac{(-1)^m}{(m+l-2)!}\sum_{v=0}^{m+l-2}{m+l-2 \choose v}\frac{\partial^{v}}{\partial w^{v}}w^{m+l-1}\frac{\partial^{m}}{\partial w^{m}}\zeta^{l-1}(w+1)\biggr\vert _{w=0}\frac{\partial^{m+l-2-v}}{\partial w^{m+l-2-v}}\frac{(Q/d)^{w+1/A}}{w+1/A}\biggr\vert _{w=0}
\nonumber\\&=&(-1)^{m-1}(Q/d)^{1/A}\sum_{v=0}^{l+m-2}\frac{a_v(l-1)(v-l+1)_m}{v!}\sum_{r=0}^{l+m-2-v}\frac{(-A)^{l+m-1-v-r}\log^r(Q/d)}{r!}\nonumber\\
&=&(-1)^{m-1}(Q/d)^{1/A}\sum_{r=0}^{l+m-2}\frac{\log^r(Q/d)}{r!}\sum_{v=0}^{l+m-2-r}\frac{(-A)^{l+m-1-v-r}a_{v}(l-1)(v-l+1)_m}{v!}.\nonumber
\end{eqnarray}
As such, (\ref{part}) is
\begin{eqnarray}\label{dddd}
&=&(-A)^{-j}Q^{1/A}\sum_{0}^j{j\choose m}(-1)^{m-1}\log^{j-m}d
\sum_{r=0}^{l+m-2}\frac{\log^r(Q/d)}{r!}\nonumber\\
&\times & \sum_{v=0}^{l+m-2-r}\frac{(-A)^{l+m-1-v-r}a_{v}(l-1)(v-l+1)_m}{v!}\\
&+& O_{A,h,j,l}\left(Q^{1/A-2/l+\epsilon}d^{2/l-\epsilon}\right)\nonumber
\end{eqnarray}
\noindent and so, expanding $\log^j(Q/d)$ as a polynomial in $\log Q$ on the r.h.s of (\ref{dddd}), it follows that (\ref{w}) is
\begin{eqnarray}\label{pof}
&=&-Q^{1/A}\sum_{j=0}^{i}{i\choose j}(-1)^j \sum_{m=0}^j {j\choose m}\sum_{r=0}^{l+m-2}\sum_{u=0}^{r}\frac{\log^uQ}{u!(r-u)!}\sum_{v=0}^{l+m-2-r}
\nonumber\\
&\times& \frac{(-A)^{l+m-1-j-v-r}a_{v}(l-1)(v-l+1)_m}{v!}
\frac{\partial^{i-j}}{\partial s^{i-j}} \sum_{d\leq Q}\varphi_{h,k,l}^{}(d,s)(-\log d)^{j-m+r-u}\biggr\vert _{s=1}
\nonumber\\&+&O\left(Q^{1/A+\epsilon-2/l}\right)\nonumber
\end{eqnarray}
\begin{eqnarray}
&=&-Q^{1/A}\sum_{u=2-l}^{i}\frac{\log^{l+u-2} Q}{(l+u-2)!}\sum_{j=u}^{i}{i\choose j}(-1)^j\sum_{m=u}^{j}{j\choose m}\sum_{r=0}^{m-u}\sum_{v=0}^{r}\nonumber\\
&\times&\frac{(-A)^{r-j-v+1}a_v(l-1)(v-l+1)_m}{(m-r-u)!v!}
\frac{\partial^{i-j}}{\partial s^{i-j}} \frac{\partial^{j+l-r-u-2}}{\partial w^{j+l-r-u-2}} \sum_{d\leq Q}\frac{\varphi_{h,k,l}^{}(d,s)}{d^w}\biggr\vert _{s=1,w=0}
\nonumber\\&+&O\left(Q^{1/A+\epsilon-2/l}\right)\nonumber\\
&=&-Q^{1/A}\sum_{u=2-l}^{i}\frac{\log^{l+u-2} Q}{(l+u-2)!}\sum_{j=u}^{i}{i\choose j}(-1)^j\sum_{m=u}^{j}{j\choose m}\sum_{r=u}^{m}\sum_{v=0}^{r-u}\nonumber\\
&\times&\frac{(-A)^{r-j-u-v+1}a_v(l-1)(v-l+1)_m}{(m-r)!v!}
\frac{\partial^{i-j}}{\partial s^{i-j}} \frac{\partial^{j+l-r-2}}{\partial w^{j+l-r-2}} \sum_{d\leq Q}\frac{\varphi_{h,k,l}^{}(d,s)}{d^w}\biggr\vert _{s=1,w=0}
\nonumber\\&+&O\left(Q^{1/A+\epsilon-2/l}\right)\nonumber
\end{eqnarray}
\begin{eqnarray}
&=&-Q^{1/A}\sum_{u=2-l}^{i}\frac{\log^{l+u-2} Q}{(l+u-2)!}\sum_{j=u}^{i}{i\choose j}(-1)^j\sum_{r=u}^{j}\sum_{v=0}^{r-u}\frac{(-A)^{r-j-u-v+1}a_v(l-1)}{v!}\nonumber\\
&\times&\left(\sum_{m=r}^{j}{j\choose m}\frac{(v-l+1)_m}{(m-r)!}\right)
\frac{\partial^{i-j}}{\partial s^{i-j}} \frac{\partial^{j+l-r-2}}{\partial w^{j+l-r-2}} \sum_{d\leq Q}\frac{\varphi_{h,k,l}^{}(d,s)}{d^w}\biggr\vert _{s=1,w=0}
+O\left(Q^{1/A+\epsilon-2/l}\right)\nonumber\\
&=&-Q^{1/A}\sum_{u=2-l}^{i}\frac{\log^{l+u-2} Q}{(l+u-2)!}\sum_{j=u}^{i}{i\choose j}\sum_{r=u}^{j}\sum_{v=0}^{r-u}\frac{(-A)^{r-j-u-v+1}a_v(l-1)(v-l+1)_r}{v!}\nonumber\\
&\times&{l-v-2\choose j-r}
\frac{\partial^{i-j}}{\partial s^{i-j}} \frac{\partial^{j+l-r-2}}{\partial w^{j+l-r-2}} \sum_{d\leq Q}\frac{\varphi_{h,k,l}^{}(d,s)}{d^w}\biggr\vert _{s=1,w=0}
+O\left(Q^{1/A+\epsilon-2/l}\right)\nonumber
\end{eqnarray}
so (\ref{mainres4}) is
\begin{eqnarray}\label{polycalc2}
&=&-Q^{1/A}\sum_{u=2-l}^{k-1}\frac{\log^{l+u-2} Q}{(l+u-2)!} \sum_{j=u}^{k-1}\sum_{r=u}^{j}\sum_{v=0}^{r-u}\frac{(-A)^{r-j-u-v+1}a_v(l-1)(v-l+1)_r}{v!}\nonumber\\
&\times&{l-v-2\choose j-r}
\sum_{i=j}^{k-1}{i\choose j}\frac{c_{k-1-i}(k)}{i!} \frac{\partial^{i-j}}{\partial s^{i-j}} \frac{\partial^{j+l-r-2}}{\partial w^{j+l-r-2}} \sum_{d\leq Q}\frac{\varphi_{h,k,l}^{}(d,s)}{d^w}\biggr\vert _{s=1,w=0}
+O\left(Q^{1/A+\epsilon-2/l}\right)\nonumber \\
&=&-Q^{1/A}\sum_{u=0}^{k+l-3}\frac{\log^{u} Q}{u!} \sum_{j=u-l+2}^{k-1}\sum_{r=u-l+2}^{j}\sum_{v=0}^{r-u+l-2}\frac{(-A)^{r-j-u-v+l-1}a_v(l-1)(v-l+1)_r}{v!}\nonumber\\
&\times&{l-v-2\choose j-r}
\sum_{i=j}^{k-1}{i\choose j}\frac{c_{k-1-i}(k)}{i!} \frac{\partial^{i-j}}{\partial s^{i-j}} \frac{\partial^{j+l-r-2}}{\partial w^{j+l-r-2}} \sum_{d\leq Q}\frac{\varphi_{h,k,l}^{}(d,s)}{d^w}\biggr\vert _{s=1,w=0}
+O\left(Q^{1/A+\epsilon-2/l}\right).\nonumber\\
\end{eqnarray}
Taking $Q=x^A$ in (\ref{polycalc2}) yields
\begin{eqnarray}\label{ftm}
\frac{1}{(k-1)!}\frac{\partial^{k-1}}{\partial s^{k-1}}\frac{(s-1)^k\zeta^k(s)W_{A,h,k,l}\left(s,x^A\right)}{s}\biggr\vert _{s=1}=-x\sum_{u=0}^{k+l-3}\frac{a_{A,h,k,l,u}\log^{u} x}{u!}+O\left(x^{1+\epsilon-2A/l}\right).\nonumber \\
\end{eqnarray}
From (\ref{p2}), (\ref{ftl}) and (\ref{ftm}), for $A<\theta_k$ we have
\begin{eqnarray}\label{polydef}
\sum_{n\leq x}d_k(n+h)d_l(n,A)&=&x\sum_{m=0}^{k-1}\sum_{n=0}^{l-1}\frac {A^nb_{h,k,l,m,n}}{m!n!}\log^{m+n}x+x\sum_{m=0}^{k+l-3}\frac{a_{A,h,k,l,m}\log^mx}{m!}\nonumber\\&+&O_{A,h,k,l}\left(x^{1+\epsilon-2A/l}\right)+O_{A,\delta,k}\left(x^{1-\delta}\right),
\end{eqnarray}
where the coefficients $b_{h,k,l,m,n}$ and $a_{A,h,k,l,u}$ are defined in Section \ref{coeffs}. This concludes the proof of Theorem \ref{mainlem}.
\end{proof}
\subsection{Theorems \ref{arith} and \ref{anotherpoly}}\label{pots}
\begin{proof}[Proof of Theorem \ref{arith}]
We begin by writing
\begin{eqnarray}
d_k(n,A)=d_k(n)-\sum_{\substack{d|n\\d<x^{1-A}}}d_{k-1}\left(\frac{n}{d}\right)+\sum_{\substack{d|n\\nx^{A-1}<d\leq n^A}}d_{k-1}(d),
\end{eqnarray}
so that
\begin{eqnarray}\label{prof}
\sum_{\substack{n\equiv h\pmod q\\n\leq x}}d_k(n,A) &=& \sum_{\substack{n\equiv h\pmod q\\n\leq x}}d_{k}(n)
-\sum_{d<x^{1-A}} \sum_{\substack{dn\equiv h\pmod q\\n\leq x/d}} d_{k-1} \left(n\right)\nonumber\\
&+& \sum_{d\leq x^A}d_{k-1}(d) \sum_{\substack{dn\equiv h\pmod q\\d^{\frac{1-A}{A}}\leq n< x^{1-A}}} 1.
\end{eqnarray}
Firstly, using Definition \ref{def1} in the form (\ref{sume2}), the first term on the r.h.s of (\ref{prof}) is
\begin{eqnarray}
\frac{1}{\phi\left(q/(h,q)\right)} \sum_{n\leq x/(h,q)}\chi_0(n)d_k((h,q)n)+O_{\epsilon,\delta,k}\left(\frac{x^{1-\delta}}{\phi\left(q/(h,q)\right)}\right)
\end{eqnarray}
for $q\leq x^{\theta_k-\epsilon}$, where $\chi_0$ is the principal character to the modulus $q/(q,h)$. Secondly, the third term on the r.h.s of (\ref{prof}) is
\begin{eqnarray}\label{exa}
&\leq & \sum_{n< x^{1-A}}\sum_{\substack{dn\equiv h\pmod q\\ d\leq x^{A}}}d_{k-1}(d)\nonumber\\
&\leq &\sum_{\substack{n< x^{1-A}\\(n,q)|h}}\sum_{\substack{d\equiv \overline{(n/(n,q))}(h/(n,q))\pmod {q/(n,q)}\\ d\leq x^{A}}}d_{k-1}(d).
\end{eqnarray}
Using Definition \ref{def1} with $q\leq x^{A\theta_{k-1}-\epsilon}$ and the fact that $(\overline{(n/(n,q))}(h/(n,q)),q/(n,q))=(h/(n,q),q/(n,q))=(h,q)/(n,q)$ when $(n,q)|h$, (\ref{exa}) is
\begin{eqnarray}
= O_{A,\epsilon,k}\left(\frac{\phi^{k-2}\left(q/(h,q)\right)x^A\log^{k-2}x}{(q/(h,q))^{k-1}}\sum_{\substack{n< x^{1-A}\\(n,q)|h}}1\right)=O_{A,\epsilon,h,k}\left(\frac{x\log^{k-2}x}{q}\right).\nonumber\\
\end{eqnarray}
As such, (\ref{prof}) is
\begin{eqnarray}\label{prof2}
&=&\frac{1}{\phi\left(q/(h,q)\right)} \sum_{n\leq x/(h,q)}\chi_0(n)d_k((h,q)n)
\nonumber\\&-&\sum_{d<x^{1-A}} \sum_{\substack{dn\equiv h\pmod q\\n\leq x/d}} d_{k-1} \left(n\right)+ O_{A,\epsilon,h,k}\left(\frac{x\log^{k-2}x}{q}\right)\nonumber\\
&=&\frac{1}{\phi\left(q/(h,q)\right)} \sum_{n\leq x/(h,q)}\chi_0(n)d_k((h,q)n)\nonumber\\&-&\frac{1}{\phi\left(q/(h,q)\right)}\sum_{\substack{d<x^{1-A}\\(d,q)|h}}\sum_{n\leq (x/(h,q))/(d/(d,q))}\chi_0(n)d_{k-1}\left(\frac{(h,q)n}{(d,q)}\right)+O_{A,\epsilon,h,k}\left(\frac{x\log^{k-2}x}{q}\right)\nonumber\\
\end{eqnarray}
for $q\leq x^{\min(\theta_k,A\theta_{k-1})-\epsilon}$, where we have used Definition \ref{def1} again to write the second term on the r.h.s of (\ref{prof2}) as a character sum. We now write the second term as
\begin{eqnarray}\label{prof3}
&-&\sum_{m\leq x/(h,q)}\sum_{\substack{d<x^{1-A}\\ (d,q)|h \\ d/(d,q)|m }}\chi_0\left(\frac{(d,q)m}{d} \right)d_{k-1}\left(\frac{(h,q)m}{d}\right)\nonumber\\
&=&-\sum_{m\leq x/(h,q)}\sum_{\substack{d<x^{1-A}\\ (r,q/(d,q))=1 \\ r|m }}\chi_0\left(\frac{m}{r} \right)d_{k-1}\left(\frac{(h,q)m}{(d,q)r}\right)
\end{eqnarray}
where $r=d/(d,q)$. Since $(d,q)|h$ we have $(d,q)|(h,q)$ so $q/(h,q)|q/(d,q)$, therefore the condition $(r,q/(d,q))=1$ may be replaced with $(r,q/(h,q))=1$ and
so (\ref{prof3}) is
\begin{eqnarray}\label{proo6}
&-&\sum_{m\leq x/(h,q)}\sum_{\substack{d<x^{1-A}\\ r|m }}\chi_0 \left(r\right)\chi_0\left(\frac{m}{r} \right)d_{k-1}\left(\frac{(h,q)m}{(d,q)r}\right)\nonumber\\
&=&-\sum_{m\leq x/(h,q)}\chi_0 \left(m\right)\sum_{\substack{d<x^{1-A}\\ r|m }}d_{k-1}\left(\frac{(h,q)m}{(d,q)r}\right)\nonumber
\end{eqnarray}
\begin{eqnarray}
&=&-\sum_{m\leq x/(h,q)}\chi_0 \left(m\right)\sum_{\substack{d<x^{1-A}\\ d/(d,q)|m\\(d,q)|h }}d_{k-1}\left(\frac{(h,q)m}{d}\right)\nonumber\\
&=&-\sum_{m\leq x/(h,q)}\chi_0 \left(m\right)\sum_{\substack{ d/(d,q)|m\\(d,q)|h }}d_{k-1}\left(\frac{(h,q)m}{d}\right)
\nonumber
\end{eqnarray}
\begin{eqnarray}
&+&\sum_{m\leq x/(h,q)}\chi_0 \left(m\right)\sum_{\substack{x^{1-A}\leq d\leq x/(h,q)\\ d/(d,q)|m \\(d,q)|h}}d_{k-1}\left(\frac{(h,q)m}{d}\right)\nonumber\\
&=&-\sum_{m\leq x/(h,q)}\chi_0 \left(m\right)d_{k-1}\left((h,q)m\right)
\nonumber\\&+&\sum_{m\leq x/(h,q)}\chi_0 \left(m\right)\sum_{\substack{x^{1-A}\leq d\leq x/(h,q)\\ d/(d,q)|m\\(d,q)|h }}d_{k-1}\left(\frac{(h,q)m}{d}\right),
\end{eqnarray}
where in the fourth line of (\ref{proo6}) we have used the identity
\begin{eqnarray}
d_{k}\left((h,q)m\right)=\sum_{\substack{ d/(d,q)|m\\(d,q)|h }}d_{k-1}\left(\frac{(h,q)m}{d}\right).
\end{eqnarray}
Therefore, by (\ref{prof2}) and (\ref{proo6}), the main term in (\ref{prof}) is
\begin{eqnarray}\label{prof4}
&=&\frac{1}{\phi\left(q/(h,q)\right)}\sum_{m\leq x/(h,q)}\chi_0 \left(m\right)\sum_{\substack{x^{1-A}\leq d\leq x/(h,q)\\ d/(d,q)|m\\(d,q)|h }}d_{k-1}\left(\frac{(h,q)m}{d}\right)
\end{eqnarray}
and the error term is $O_{A,\epsilon,h,k}(x\log^{k-2}x/q)$. To evaluate the main term, we note that (\ref{prof4}) is
\begin{eqnarray}
&=&\frac{1}{\phi\left(q/(h,q)\right)}\sum_{\substack{x^{1-A}\leq d\leq x/(h,q)\\ (d,q)|h }} \sum_{m\leq (x/(h,q))/(d/(d,q))}\chi_0 \left(\frac{dm}{(d,q)}\right) d_{k-1}\left(\frac{(h,q)m}{(d,q)}\right)\nonumber\\
&=&\frac{1}{\phi\left(q/(h,q)\right)}\sum_{\substack{x^{1-A}\leq d\leq x/(h,q)\\ (d,q)|h }}\chi_0(d) \sum_{\substack{n\leq x/d\\(h,q)|n}}\chi_0 \left(\frac{n}{(h,q)}\right) d_{k-1}\left(n\right)\nonumber
\end{eqnarray}
\begin{eqnarray}
&=&\frac{1}{\phi\left(q/(h,q)\right)}\sum_{\substack{n\leq x\\(h,q)|n}}\chi_0 \left(\frac{n}{(h,q)}\right) d_{k-1}\left(n\right)\sum_{\substack{x^{1-A}\leq d\leq x/(h,q)\\d\leq x/n }}\chi_0(d), \nonumber\\
\end{eqnarray}
where the condition $(d,q)|h$ is removed because $\chi_0(d)=0$ if $(d,q)>1$. This is
\begin{eqnarray}
&=&\frac{1}{\phi\left(q/(h,q)\right)}\sum_{\substack{n\leq x^A}}\chi_0 \left(n\right) d_{k-1}\left((h,q)n\right)\sum_{\substack{x^{1-A}\leq d\leq x/n(h,q) }}\chi_0(d), \nonumber\\
&=&\frac{1}{\phi\left(q/(h,q)\right)}\sum_{\substack{n\leq x^A}}\chi_0 \left(n\right) d_{k-1}\left((h,q)n\right)\sum_{\substack{d\leq x/n(h,q) }}\chi_0(d)+O_{A,h,k}\left(\frac{x\log^{k-2}x}{q}\right) \nonumber\\
&=&\frac{x}{q}\sum_{\substack{n\leq x^A}}\frac{\chi_0 \left(n\right) d_{k-1}\left((h,q)n\right)}{n}+O_{A,h,k}\left(\frac{x\log^{k-2}x}{q}\right). \nonumber\\
\end{eqnarray}
Since $(n/(h,q),q/(h,q))=1$ is equivalent to $(n,q)=(h,q)$ which implies that $(n,q)|h$, we conclude that (\ref{prof}) is
\begin{eqnarray}
\frac{x}{q}\sum_{\substack{n\leq x^A\\(n,q)|h}}\frac{(n,q)d_{k-1}\left(n\right)}{n}+O_{A,h,k}\left(\frac{x\log^{k-2}x}{q}\right).
\end{eqnarray}
By partial summation or otherwise, the remainder of the proof is trivial.
\end{proof}
\begin{proof}[Proof of Theorem \ref{anotherpoly}] This is a straightforward consequence of Theorem \ref{arith} and the method of proof of Theorem \ref{mainlem}. We have
\begin{eqnarray}\label{pota}
\sum_{n\leq x}d_k(n+h,A)d_l(n,B)&=&\sum_{q\leq x^B}d_{l-1}(q)\sum_{\substack{n\equiv h\pmod q\\q^{1/B}\leq n\leq x+h}}d_k(n,A)
\nonumber\\
&=&A^{k-1}\sum_{q\leq x^B}d_{l-1}(q)\sum_{\substack{n\equiv h\pmod q\\q^{1/B}\leq n\leq x+h}}d_k(n)\nonumber\\ &+&O_{A,B,h,k}\left(x\log^{k-2}x\sum_{q\leq x^B}\frac{d_{l-1}(q)}{q}\right)
\end{eqnarray}
provided that $B< \min(\theta_k,A\theta_{k-1})$, by Theorem \ref{arith}. Up to the factor $A^{k-1}$, the first term on the r.h.s. of (\ref{pota}) is of the same form as (\ref{p1}) (with $B$ in place of $A$) and is evaluated in exactly the same way, while the summation in the error term is $O(\log^{l-1}x)$.
\end{proof}
\subsection{Theorem \ref{t3}}\label{pot3}
\begin{proof}
We begin by establishing the analytic continuation of $\mathcal{D}_{h,k,l}(s,Q)$. We have
\begin{eqnarray}
\mathcal{D}_{h,k,l}(s,Q)&=&\sum_{1}^{\infty}\frac{d_k(n+h)d_l\left(n,\frac{Q}{\log n}\right)}{(n+h)^s}\nonumber\\
&=&\sum_{1}^{\infty}\frac{d_k(n+h)}{(n+h)^s}\sum_{\substack{q\leq Q\\q|n}}d_{l-1}(q)\nonumber\\
&=& \sum_{q\leq Q}d_{l-1}(q)\sum_{\substack{n\equiv h\pmod q\\n>h}}\frac{d_k(n)}{n^s}\nonumber\\
&=&\sum_{q\leq Q}d_{l-1}(q)\sum_{n\equiv h\pmod q}\frac{d_k(n)}{n^s}-\frac{d_k(h)}{h^s}\sum_{q\leq Q}d_{l-1}(q).\nonumber
\end{eqnarray}
We have
\begin{eqnarray}
\sum_{n\equiv h\pmod q}\frac{d_k(n)}{n^s}=\frac{1}{\phi \left(\frac{q}{g}\right)}\sum_{\chi \left(\textrm{mod } \frac{q}{g}\right)}\overline{\chi}\left(\frac{h}{g}\right)\sum_{1}^{\infty}\frac{\chi(n)d_k(gn)}{(gn)^s}\nonumber
\end{eqnarray}
where
\begin{eqnarray}\label{g}
\sum_{1}^{\infty}\frac{\chi(n)d_k(gn)}{(gn)^s}&=&\prod_p\sum_{0}^{\infty}d_k(p^{\beta+\delta})\chi(p^{\beta})p^{-(\beta+\delta)s}
\nonumber\\&=&L^k(s,\chi)b_k(s,\chi,g)
\end{eqnarray}
is a meromorphic function of $s$ for all $h,k,l$.\\
\noindent This shows that $\mathcal{D}_{h,k,l}(s,Q)$ is a meromorphic function of $s$ for all $h,k,l,Q$, and we observe that
\begin{eqnarray}\label{9}
\mathcal{D}_{h,k,l}(s,Q)=\zeta^k(s)Z_{h,k,l}(s,Q) +B_{h,k,l}(s,Q)
\end{eqnarray}
say, where
$Z_{h,k,l}(s,Q)$ is defined in (\ref{sum}) and $B_{h,k,l}(s,Q)$ is an analytic function of $s$ for all fixed $h,k,l,Q$. Writing
\begin{eqnarray}
D_{h,k,l}(x,Q)=\sum_{n\leq x}d_k(n+h)d_l\left(n,\frac{Q}{\log n}\right),\nonumber
\end{eqnarray}
we have
\begin{eqnarray}\label{mat}
\mathcal{D}_{h,k,l}(s,Q)=s\int_1^{\infty}D_{h,k,l}(x,Q)\frac{dx}{(x+h)^{s+1}}\nonumber
\end{eqnarray}
and, by (\ref{9}), we have
\begin{eqnarray}\label{pole}
\mathcal{D}_{h,k,l}(s,Q)=\frac{Z_{h,k,l}(s,Q)}{(s-1)^k}+C_{h,k,l}(s,Q)
\end{eqnarray}
for $\sigma>1$, where $C_{h,k,l}(s,Q)=O_{h,k,l,Q}((s-1)^{1-k})$ as $s\rightarrow 1$.
By (\ref{9}) we know that $\mathcal{D}_{h,k,l}(1+it,Q)$ is continuous for $t\neq 0$ (in fact it is analytic in a neighbourhood of the line). As such, the Delange-Ikehara Tauberian theorem \cite{Del} applies, i.e.
\begin{eqnarray}\label{asy2}
\lim_{x\rightarrow\infty}\frac{D_{h,k,l}(x,Q)}{x\log^{k-1}x}=Z_{h,k,l}(1,Q).\nonumber
\end{eqnarray}
Arguing in the same way as in the proof of Theorem \ref{mainlem}, we have
\begin{eqnarray}
\frac{Z_{h,k,l}(1,Q)}{\log^{l-1} Q}=\frac{C_{k,l}f_{k,l}(h)}{(k-1)!(l-1)!}+O_{h,k,l}\left(\frac{1}{\log Q}\right).\nonumber
\end{eqnarray}
Now, if $\lim_{Q\rightarrow\infty}\mathcal{D}_{h,k,l}(1+it,Q)$
is continuous when $t\neq 0$, then the restriction that $Q$ is fixed in (\ref{pole}) can be removed and the Delange-Ikehara Tauberian theorem still applies. Taking $Q=x$ we have $D_{h,k,l}(x,Q)=D_{h,k,l}(x)$, which completes the proof.
\end{proof}
\subsection{The coefficients in the case $k=l=2$}\label{append}
\noindent We conclude this paper with a demonstration that Theorem \ref{mainlem} recovers Estermann's asymptotic expansion for $D_{h,2,2}(x)$ precisely.
We take $k=l=2$ in (\ref{polydef}), so that
\begin{eqnarray}\label{polynom2}
\sum_{n\leq x}d_2(n+h)d_2(n,A)&=&x\sum_{m=0}^{1}\sum_{n=0}^{1}\frac {A^nb_{h,2,2,m,n}}{m!n!}\log^{m+n}x+x\sum_{m=0}^{1}\frac{a_{A,h,2,2,m}\log^mx}{m!}+O_{A,h}\left(x^{1-\delta}\right)\nonumber\\
&=&Ab_{h,2,2,1,1}x\log^2x+\left(b_{h,2,2,1,0}+Ab_{h,2,2,0,1}+a_{A,h,2,2,1}\right)x\log x\nonumber\\
&+&\left(b_{h,2,2,0,0}+a_{A,h,2,2,0}\right)x+O_{A,h}\left(x^{1-\delta}\right).
\end{eqnarray}
Thus, putting $A=1/2$ and using the symmetry of the divisors of $n$ about $n^{1/2}$ in (\ref{polynom2}), we obtain
\begin{eqnarray}\label{polynom3}
D_{h,2,2}(x)&=&b_{h,2,2,1,1}x\log^2x+\left(2b_{h,2,2,1,0}+b_{h,2,2,0,1}+2a_{1/2,h,2,2,1}\right)x\log x\nonumber\\
&+&2\left(b_{h,2,2,0,0}+a_{1/2,h,2,2,0}\right)x+O_{h}\left(x^{1-\delta}\right).
\end{eqnarray}
We now use Definitions \ref{form} and \ref{forma} to calculate the coefficients in (\ref{polynom3}). We use Estermann's \cite{Est} notation
\begin{eqnarray}
\sigma_{-1}'(h)=\sum_{d|h}\frac{\log d}{d}, \hspace{0.8cm} \sigma_{-1}''(h)=\sum_{d|h}\frac{\log^2 d}{d}
\end{eqnarray}
and
\begin{eqnarray}
a'=-\sum_{2}^{\infty}\frac{\mu(n)\log n}{n^2}, \hspace{0.8cm} a''=\sum_{2}^{\infty}\frac{\mu(n)\log^2 n}{n^2}.
\end{eqnarray}
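(For reference: differentiating $\sum_{n\geq 1}\mu(n)n^{-s}=\zeta(s)^{-1}$ once and twice at $s=2$ gives the closed forms $a'=-\zeta'(2)/\zeta(2)^2$ and $a''=\frac{d^{2}}{ds^{2}}\zeta(s)^{-1}\big\vert_{s=2}$.)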
Firstly, for the coefficient of $x\log^2 x$ we have
\begin{eqnarray}\label{e2}
b_{h,2,2,1,1}=C_{2,2}(1,0)f_{h,2,2}(1,0)=\frac{6}{\pi^2}\sigma_{-1}(h).\nonumber
\end{eqnarray}
Secondly, for the coefficient of $x\log x$, we have
\begin{eqnarray}\label{e1}
&&2b_{h,2,2,1,0}+b_{h,2,2,0,1}+2a_{1/2,h,2,2,1}\nonumber\\
&=&2a_1(1)c_0(2)C_{2,2}(1,0)f_{h,2,2}(1,0)+2a_0(1)c_0(2) \frac{\partial}{\partial w}C_{2,2}(1,w)f_{h,2,2}(1,w)\biggr\vert _{w=0}\nonumber\\
&+&a_0(1)c_1(2)C_{2,2}(1,0)f_{h,2,2}(1,0)+a_0(1)c_0(2)\frac{\partial}{\partial s}C_{2,2}(s,0)f_{h,2,2}(s,0)\biggr\vert _{s=0}\nonumber\\
&-&a_0(1)c_0(2)C_{2,2}(1,0)f_{h,2,2}(1,0)\nonumber
\end{eqnarray}
\begin{eqnarray}
&=&2\gamma C_{2,2}(1,0)f_{h,2,2}(1,0)+2 \frac{\partial}{\partial w}C_{2,2}(1,w)f_{h,2,2}(1,w)\biggr\vert _{w=0}\nonumber\\
&+&(2\gamma-1)C_{2,2}(1,0)f_{h,2,2}(1,0)+\frac{\partial}{\partial s}C_{2,2}(s,0)f_{h,2,2}(s,0)\biggr\vert _{s=0}\nonumber\\
&-&C_{2,2}(1,0)f_{h,2,2}(1,0)\nonumber\\
&=&\frac{12}{\pi^2}(2\gamma-1)\sigma_{-1}(h)+2 \frac{\partial}{\partial w}C_{2,2}(1,w)f_{h,2,2}(1,w)\biggr\vert _{w=0}\nonumber\\
&+&\frac{\partial}{\partial s}C_{2,2}(s,0)f_{h,2,2}(s,0)\biggr\vert _{s=0}\nonumber\\
&=&\left(\frac{12}{\pi^2}(2\gamma-1)+2\frac{\partial}{\partial w}C_{2,2}(1,w)\biggr\vert _{w=0}+\frac{\partial}{\partial s}C_{2,2}(s,0)\biggr\vert _{s=0}\right)\sigma_{-1}(h)\nonumber\\
&+&\frac{6}{\pi^2}\left(2\frac{\partial}{\partial w}f_{h,2,2}(1,w)\biggr\vert _{w=0}+\frac{\partial}{\partial s}f_{h,2,2}(s,0)\biggr\vert _{s=0}\right)\nonumber\\
&=&\left(\frac{12}{\pi^2}(2\gamma-1)+4a'\right)\sigma_{-1}(h)-\frac{24}{\pi^2}\sigma_{-1}'(h).
\nonumber
\end{eqnarray}
Lastly, for the coefficient of $x$ we have
\begin{eqnarray}\label{e0}
&&2b_{h,2,2,0,0}+2a_{1/2,h,2,2,0}\nonumber\\
&=&2a_1(1)c_1(2)C_{2,2}(1,0)f_{h,2,2}(1,0)+2a_1(1)C_{2,2}(1,0)\frac{\partial}{\partial s}f_{h,2,2}(s,0)\biggr\vert _{s=1}
\nonumber\\
&+&2a_1(1)f_{h,2,2}(1,0)\frac{\partial}{\partial s}C_{2,2}(s,0)\biggr\vert _{s=1}+ 2a_0(1)c_1(2)C_{2,2}(1,0)\frac{\partial}{\partial w}f_{h,2,2}(1,w)\biggr\vert _{w=0}\nonumber\\&+&2c_1(2)f_{h,2,2}(1,0)\frac{\partial}{\partial w}C_{2,2}(1,w)\biggr\vert _{w=0}+2C_{2,2}(1,0)\frac{\partial}{\partial s}\frac{\partial}{\partial w}f_{h,2,2}(s,w)\biggr\vert _{w=0,s=1}\nonumber\\
&+&2\frac{\partial}{\partial w}C_{2,2}(s,w)\biggr\vert _{w=0}\frac{\partial}{\partial s}f_{h,2,2}(s,w)\biggr\vert _{s=1}+2\frac{\partial}{\partial s}C_{2,2}(s,w)\biggr\vert _{s=1}\frac{\partial}{\partial w}f_{h,2,2}(s,w)\biggr\vert _{w=0}\nonumber
\end{eqnarray}
\begin{eqnarray}
&+&2f_{h,2,2}(1,0)\frac{\partial}{\partial w}\frac{\partial}{\partial s}C_{2,2}(s,w)\biggr\vert _{w=0,s=1}
\nonumber\\&-&c_1(2)C_{2,2}(1,0)f_{h,2,2}(1,0)-f_{h,2,2}(1,0)\frac{\partial}{\partial s} C_{2,2}(s,0) \biggr\vert _{s=1}-C_{2,2}(1,0)\frac{\partial}{\partial s} f_{h,2,2}(s,0) \biggr\vert _{s=1}\nonumber\\
&+&C_{2,2}(1,0)f_{h,2,2}(1,0)
\nonumber
\end{eqnarray}
\begin{eqnarray}
&=&\frac{12\gamma}{\pi^2}(2\gamma-1)\sigma_{-1}(h)-\frac{24\gamma}{\pi^2}\sigma_{-1}'(h)+4\gamma a'\sigma_{-1}(h)-\frac{12}{\pi^2}(2\gamma-1)\sigma_{-1}'(h)
+2(2\gamma-1)a'\sigma_{-1}(h)
\nonumber\\
&+&\frac{24}{\pi^2}\sigma_{-1}''(h)-8a'\sigma_{-1}'(h)+4a''\sigma_{-1}(h)-\frac{6}{\pi^2}(2\gamma-1)\sigma_{-1}(h)+\frac{12}{\pi^2}\sigma_{-1}'(h)-2a'\sigma_{-1}(h)+\frac{6}{\pi^2}\sigma_{-1}(h)
\nonumber\\
&=&\left(\frac{6}{\pi^2}(2\gamma-1)^2+\frac{6}{\pi^2}+4a'(2\gamma-1)+4a'' \right)\sigma_{-1}(h)-\left(\frac{24}{\pi^2}(2\gamma-1)+8a'\right)\sigma_{-1}'(h)+\frac{24}{\pi^2}\sigma_{-1}''(h). \nonumber
\end{eqnarray}
\vspace{1cm}
\noindent \textbf{Acknowledgment:} We would like to thank Professor Aleksandar Ivi\'{c}, Professor Steve Gonek, Professor Zeev Rudnick and Professor Terence Tao for their suggestions on a previous version of this paper. The first author is also grateful to the Leverhulme Trust (RPG-2017-320) for the support through the research project grant ``Moments of $L$-functions in Function Fields and Random Matrix Theory". The second author is grateful for a PhD studentship supported by the College of Engineering, Mathematics and Physical Sciences at the University of Exeter.
\vspace{1cm}
| {
"timestamp": "2019-03-06T02:04:21",
"yymm": "1903",
"arxiv_id": "1903.01566",
"language": "en",
"url": "https://arxiv.org/abs/1903.01566",
"abstract": "We establish asymptotic formulae for various correlations involving general divisor functions $d_k(n)$ and partial divisor functions $d_l(n,A)=\\sum_{q|n:q\\leq n^A}d_{l-1}(q)$, where $A\\in[0,1]$ is a parameter and $k,l\\in\\mathbb{N}$ are fixed. Our results relate the parameter $A$ to the lengths of arithmetic progressions in which $d_k(n)$ is uniformly distributed. As applications to additive divisor sums, we establish new lower bounds and a new equivalent condition for the conjectured asymptotic. We also prove a Tauberian theorem for general additive divisor sums.",
"subjects": "Number Theory (math.NT)",
"title": "On Additive Divisor Sums and minorants of divisor functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540692607816,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7099046430182268
} |
https://arxiv.org/abs/2108.05569 | Agnostic Online Learning and Excellent Sets | We use algorithmic methods from online learning to revisit a key idea from the interaction of model theory and combinatorics, the existence of large "indivisible" sets, called "$\epsilon$-excellent," in $k$-edge stable graphs (equivalently, Littlestone classes). These sets arise in the Stable Regularity Lemma, a theorem characterizing the appearance of irregular pairs in Szemerédi's celebrated Regularity Lemma. Translating to the language of probability, we find a quite different existence proof for $\epsilon$-excellent sets in Littlestone classes, using regret bounds in online learning. This proof applies to any $\epsilon < {1}/{2}$, compared to $< {1}/{2^{2^k}}$ or so in the original proof. We include a second proof using closure properties and the VC theorem, with other advantages but weaker bounds. As a simple corollary, the Littlestone dimension remains finite under some natural modifications to the definition. A theme in these proofs is the interaction of two abstract notions of majority, arising from measure, and from rank or dimension; we prove that these densely often coincide and that this is characteristic of Littlestone (stable) classes. The last section lists several open problems. | \section{Background and motivation}
\section{Overview}
In this section we briefly present three complementary points of view (combinatorics \S 1.1, online learning \S 1.2, model theory \S 1.3) which inform this work, and state our main results in \S 1.4.
The aim is to allow the paper to be readable by people in all three communities
and hopefully to stimulate further interaction.
In the recent papers \cite{almm},
\cite{blm} ideas from model theory played a role in the conjecture, and then the proof, that Littlestone classes are precisely those which can be PAC learned in a differentially private way
(we direct the reader to the introductions of those papers for precise statements and further literature review).
The present work may be seen as complementary to those papers in that it shows, perhaps even more surprisingly,
that ideas and techniques can travel profitably in the other direction.
\subsection{{Context from combinatorics}}
{Szemer\'edi's celebrated Regularity Lemma for finite graphs says essentially that any huge finite graph $G$ can be well approximated by
a much smaller random graph.
The lemma gives a partition of any such
$G$ into pieces of essentially equal size
so that edges are distributed uniformly between most pairs of pieces (``$\epsilon$-regularity'').
Szemer\'edi's original proof allowed for some pairs to be irregular, and he asked if this was necessary \cite{sz1}.
As described in \cite[\S 1.8]{ks},
it was observed by several researchers including
Alon, Duke, Lefmann, R\"odl and Yuster \cite{alon} and
Lov\'asz, Seymour, Trotter that irregular pairs are unavoidable due to the counterexample of half-graphs.
A $k$-half graph has distinct vertices
$a_1, \dots, a_k$, $b_1, \dots, b_k$ such that there is an edge between $(a_i, b_j)$ if and only if $i<j$.}
{Malliaris and Shelah showed that half-graphs characterize the existence of irregular pairs in Szemer\'edi's lemma, by proving a
stronger regularity lemma for $k$-edge stable graphs called the Stable Regularity Lemma \cite{MiSh:978}.
(A graph is called $k$-edge stable if it contains no $k$-half graph.\footnote{The Stable Regularity Lemma
says that a finite $k$-edge stable graph can be equipartitioned into $\leq m$ pieces
(where $m$ is polynomial in $\frac{1}{\epsilon}$) such that \emph{all} pairs of pieces are regular, with densities close to $0$ or $1$.
Note that two of these conditions, the improved size of the partition and the densities of regular pairs being close to $0$ or $1$,
are already expected from finite VC dimension, see \cite{alon}, \cite{lovasz-szegedy}, though here by a different proof.})
A central idea in the stable regularity lemma was
that $k$-edge stability for small $k$ means it is possible to find large ``indivisible'' sets,
so-called \emph{$\epsilon$-excellent} sets. A partition into sets of this kind is quickly seen to have no irregular pairs (for a related $\epsilon$).
For an exposition of this proof, and the model theoretic ideas behind it, see \cite{MiSh:E98}. }
{To recall the definition, let $0 < \epsilon < \frac{1}{2}$. Let $G = (V,E)$ be a finite graph. Following \cite{MiSh:978},
say $B \subseteq V$ is \emph{$\epsilon$-good} if for any $a \in V$, one of $\{ b \in B : (a,b) \in E \}$,
$\{ b \in B : (a,b) \notin E \}$ has size $<\epsilon|B|$. If the second is the small one [most $b \in B$ connect to $a$], write $\mathbf{t}(a,B) = 1$, and
if the first is [most $b \in B$ do not connect to $a$] write $\mathbf{t}(a,B) = 0$.
Say that $A \subseteq V$ is \emph{$\epsilon$-excellent} if for any $B \subseteq V$ which is
$\epsilon$-good, one of $\{ a \in A : \mathbf{t}(a,B) = 1 \}$, $\{ a \in A : \mathbf{t}(a,B) = 0 \}$ has size $<\epsilon|A|$.
Informally, any $a \in A$ has a majority opinion about any $\epsilon$-good $B$ by definition of good, and
excellence says that additionally, a majority of elements of $A$ have the same majority opinion.
Observe that if $A$ is $\epsilon$-excellent it is $\epsilon$-good, because any set of size one is $\epsilon$-good.}
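{A trivial example for orientation: if $G$ has no edges at all, then every nonempty $B \subseteq V$ is $\epsilon$-good with $\mathbf{t}(a,B)=0$ for every $a \in V$, and so every nonempty $A \subseteq V$ is $\epsilon$-excellent.}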
{Notice that while, e.g., $\frac{1}{4}$-good implies $\frac{1}{3}$-good, the same is
a priori not true for $\epsilon$-excellent, because the definition of $\epsilon$-excellence quantifies over $\epsilon$-good sets.
For the stable regularity lemma, it was sufficient to show that large $\epsilon$-excellent sets exist in
$k$-edge stable graphs for $\epsilon < \frac{1}{2^{2^k}}$ or so. In this language, one contribution of the present paper is
a new proof for existence of $\epsilon$-excellent sets in $k$-edge stable graphs, which works for any $\epsilon < \frac{1}{2}$, i.e. any $\epsilon$ for which excellence is well defined.}
\subsection{Context from online learning}
The online learning setting shifts the basic context from graphs to \emph{hypothesis classes}, i.e. pairs $(X, \mathcal{H})$ where
$X$ is a finite or infinite set and $\mathcal{H} \subseteq \mathcal{P}(X)$ is a set of subsets of $X$, called {hypotheses} or predictors. We will identify
elements $h \in \mathcal{H}$ with their characteristic functions, and write ``$h(x) = 1$'' for ```$x \in h$'' and $h(x) =0$ otherwise.
[It is also usual to consider $h$ as having range $\{ -1, 1 \}$.]
Any such hypothesis class can be naturally viewed as a bipartite graph on the disjoint sets of vertices $X$ and $\mathcal{H}$
with an edge between $x \in X$ and $h \in \mathcal{H}$ if and only if $h(x) = 1$.
However, something which is a priori lost in this translation is a powerful understanding in the computer science community of the role of
dynamic/adaptive/predictive arguments. This perspective is an important contribution to the
proofs below, and seems to highlight some understanding currently missing in the other contexts.
A mistake tree is a binary decision tree whose internal nodes are labeled by elements of~$X$. We can think of the process of
traversing a root-to-leaf path in a mistake tree of height $d$ as being described by a sequence of pairs $(x_i, y_i) \in X \times \{ 0, 1 \}$
(for $1 \leq i \leq d$) recording that at
step $i$ the node we are at is labeled by $x_i$ and we then travel right (if $y_i = 1$) or left (if $y_i = 0$) to a node labeled by $x_{i+1}$,
and so on. Say that $h \in \mathcal{H}$ realizes a given branch (root-to-leaf path) $(x_1, y_1), \dots (x_d, y_d)$ if $h(x_i) = y_i$ for all
$1 \leq i \leq d$, and say that a given mistake tree is shattered by $\mathcal{H}$ if each branch is realized by some $h \in \mathcal{H}$.
The Littlestone dimension of $\mathcal{H}$, denoted $\mathsf{Ldim}(\mathcal{H})$, is the depth of the largest complete [binary] tree that is shattered by $\mathcal{H}$.
$\mathcal{H}$~is called a Littlestone class if it has finite Littlestone dimension; for reasons explained in the next subsection, we may prefer to say that $(X, \mathcal{H})$ is a Littlestone pair. Littlestone~\cite{littlestone88} and Ben-David, P{\'{a}}l, and Shalev-Shwartz~\cite{ben-david09agnostic} proved that $\mathsf{Ldim}$ characterizes online learnability of the class.
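For orientation, two standard examples (not needed below): over an infinite domain $X$, the class of singletons $\{\{x\} : x \in X\}$ has Littlestone dimension $1$, since no complete mistake tree of depth $2$ can be shattered; by contrast, the class of thresholds $\{\{x \in \mathbb{N} : x \leq t\} : t \in \mathbb{N}\}$ has VC dimension $1$ but infinite Littlestone dimension, as a shattered mistake tree of any depth can be produced by binary search.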
A basic setting within online learning is that of \emph{prediction with expert advice}.
Informally, we are given a set $X$ and a set of experts $H$;
an expert may be {\it oblivious} in which case it is a function $h$ from $X$ to $\{0,1\}$,
or {\it adaptive} in which case it may change its prediction based on the past.
(So, an oblivious expert always predicts according to the same rule $h:X\to\{0,1\}$, whereas
an adaptive expert may change its prediction rule based on past experience.)
The following repeated game is then played between two players called {\it learner} and {\it adversary} (denoted $L$ and $A$ respectively): in each round~$t=1,\ldots, T$ the adversary $A$ poses a query $x_t\in X$ to $L$; the learner $L$ then responds with a prediction\footnote{In general (in the fully agnostic setting), any optimal learner must be randomized and use random predictions $\hat y_t$, in this case the adversary only sees the expected value of $\hat y_t$. (I.e.\ the randomness of the learner is private and inaccessible by the adversary.)} $\hat y_t$, and the round ends with the adversary revealing the ``true'' label $y_t$.
The goal of $L$ is to make as few mistakes as possible:
specifically to minimize the {\it regret} with respect to the set of experts $H$,
which is defined as the difference between the number of mistakes $L$ makes and the number of mistakes the best expert in $H$ makes.
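In symbols, if $\hat y_1,\ldots,\hat y_T$ are the learner's predictions, the regret after $T$ rounds is $\sum_{t=1}^{T}\mathbf{1}[\hat{y}_t\neq y_t]-\min_{h\in H}\sum_{t=1}^{T}\mathbf{1}[h(x_t)\neq y_t]$ (with the number of mistakes replaced by its expectation when the learner randomizes).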
A fundamental theorem in the area uses the Littlestone dimension to characterize the optimal (minimal) possible regret that can be achieved when the class $H$ contains only oblivious experts~\cite{littlestone88,ben-david09agnostic,rakhlin2010online,rakhlin2015sequential,rakhlin2015online,alon21}.
This characterization hinges on an important property of Littlestone dimension which informally asserts that
every (possibly infinite) Littlestone class can be covered by a finite number of {\it adaptive} experts. More formally: for every Littlestone class $H$ there exists a finite class of adaptive experts $E=E(H,T)$ such that for every $x_1,\ldots , x_T\in X$
and every $h\in H$ there exists an expert in $E$ which given an input sequence $x_1,\ldots, x_T$, produces labels $y_1=h(x_1),\ldots, y_T=h(x_T)$.
This property essentially reduces any (possibly infinite) Littlestone class to a finite one. Below, in Theorem~\ref{dssp} we prove a mild extension of this important property.
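Quantitatively, in the form proved below (Theorem~\ref{dssp}), the covering class $E$ can be taken of size $\binom{T}{\leq d}=\sum_{i=0}^{d}\binom{T}{i}$, where $d=\mathsf{Ldim}(H)$.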
\subsection{Context from model theory}
Consider again the case of a finite bipartite graph $G$ with vertex set $X \cup Y$ and edge relation~$R$ (abstracting the study of
a formula $\varphi(\bar{x}, \bar{y})$). Following logical notation we write
$R(a,b)$ or~$\neg R(a,b)$ to denote an edge or a non-edge. Define a [full] \emph{special tree} \label{tree-page} of
height $n$ to have
internal nodes $\{ a_\eta : \eta \in {^{n>}2} \}$ from $X$ and indexed by binary sequences of length $<n$ and leaves
$\{ b_\rho : \rho \in {^n 2} \}$ from $Y$ and indexed by binary sequences of length exactly $n$, which satisfy the following.\footnote{On this notation, see Convention \ref{conv:notation}.}
For any $a_\eta$ and $b_\rho$, if $\eta$ is an initial segment of $\rho$ (notation: $\eta \trianglelefteq \rho$), then $R(a_\eta, b_\rho)$ if
${\eta^\smallfrown \langle 1 \rangle} \trianglelefteq \rho$ and $\neg R(a_\eta, b_\rho)$ if ${\eta^\smallfrown \langle 0 \rangle} \trianglelefteq \rho$.
A key ingredient in the proof of the Stable Regularity Lemma was the following special case of Shelah's Unstable Formula Theorem
\cite[II.2]{shelah}. (The bounds are due to Hodges \cite{hodges}.)
For a bipartite graph $G$ as above,
if $G$ has a full special tree of height $n$, then it has a half-graph of size about $\log n$, with the $a$'s chosen from $X$ and the
$b$'s chosen from $Y$ $($i.e., it is not $k$-edge stable in the sense above$)$. Moreover,
if $G$ has a half-graph of size $k$, it has a full special tree of height about $\log k$.
It was noticed by Chase and Freitag \cite{cf} that the condition of model-theoretic stability (Shelah's 2-rank;
e.g., in this language, no full special tree of height $n$ for some finite $n$)
corresponds to finite Littlestone dimension, and
they used this to give natural examples of Littlestone classes using stable theories.
The following discussion reflects an understanding developed in the learning theoretic papers Alon-Livni-Malliaris-Moran \cite{almm}, Bun-Livni-Moran \cite{blm}, where
model theoretic ideas had played a role in the proof that the Littlestone classes are precisely those which can be PAC-learned in a
differentially private way. One contribution of the model theoretic point of view for online learning in general, and
for our present argument in particular, is that a condition in online learning which appears inherently asymmetric,
namely the Littlestone dimension (it treats elements and hypotheses as different kinds of objects; they play fairly different roles in the
partitioning) is equivalent to a condition which is extremely symmetric, namely existence of half-graphs (when switching the roles of
$X$ and $Y$ in a half-graph, it suffices to rotate the picture). Thus if $\mathcal{H}$ is a Littlestone class,
the ``dual'' class obtained by setting $X^\prime = \mathcal{H}$ and $\mathcal{H}^\prime = \{ \{ h \in \mathcal{H} : h(x) = 1 \} : x \in X \}$ is also.
In online learning, $k$-edge stability also has a natural meaning: threshold dimension less than $k$, that is, there do not exist elements
$a_1, \dots, a_k$ from $X$ and hypotheses $h_1, \dots, h_k$ from $\mathcal{H}$ such that $h_j(a_i) = 1$ if and only if $i<j$.
In what follows, we sometimes refer to $(X, \mathcal{H})$ as a \emph{Littlestone pair}, rather than simply saying that $\mathcal{H}$ is a
Littlestone class, to emphasize this line of thought.
\subsection{Results of the paper}
{The plan for the rest of the paper is as follows.
In section two, we spell out the translation from \cite{MiSh:978}
of good and excellent to Littlestone classes and explain why existence of nontrivial good sets characterizes such classes.
For the rest of the paper we work primarily in the language of online learning with hypothesis classes $(X, \mathcal{H})$.
In section three, we prove a dynamic Sauer-Shelah-Perles lemma for Littlestone classes in the form we will need (a mild extension of Lemma 12 in \cite{ben-david09agnostic}).
The theorem says that given any Littlestone class $\mathcal{H}$ of dimension $d$ and $T \in \mathbb{N}$ there exists a collection $\mathbf{A}$ of
$\binom{T}{\leq d}$ algorithms such that for every binary tree $\mathcal{T}$ of height $T$ with $X$-labeled internal nodes, every branch of $\mathcal{T}$ which is
realized by some $h \in \mathcal{H}$ is also realized by some algorithm from $\mathbf{A}$.
Section four gives the existence proof for $\epsilon$-excellent sets in Littlestone classes for any $\epsilon < \frac{1}{2}$ using regret bounds.
We first translate the definitions to the language of probability and define an $\epsilon$-good tree to be a binary tree whose nodes are labelled
by $\epsilon$-good distributions over $X$. The section's main theorem says that an $\epsilon$-good complete binary tree of
finite depth $T$ which is shattered by $\mathcal{H}$ (naturally defined)
witnesses a lower bound on the regret of any online learning algorithm.
This is used to show that if $\mathcal{H}$ is a Littlestone class of dimension $d$ then the maximum
possible depth of a complete $\epsilon$-good tree which is shattered by $\mathcal{H}$ is about $d/(\frac{1}{2}-\epsilon)^2$ times a universal constant.
In section five we give an alternate argument for bounding the maximal height of an $\epsilon$-good tree shattered by a Littlestone class
$\mathcal{H}$ using some closure properties of Littlestone classes and the VC theorem.
In section six, we analyze two a priori strengthenings of Littlestone dimension, which
depend on a choice of $\epsilon$: the ``virtual''
Littlestone dimension considers $\epsilon$-good sets to be essentially virtual elements and asks that the maximal height of a tree labeled by
actual or virtual elements be finite, and the ``approximate'' Littlestone dimension asks that the maximal height of an $X$-labeled tree shattered by $\mathcal{H}$
be finite even when we allow for a small fraction of mistaken edges in the definition of ``$h$ realizes a branch.'' As a corollary of earlier sections,
these are both finite exactly when the original $\mathsf{Ldim}$ is finite (though we do not investigate the distance). In section seven, we compare the
notion of majority arising from measure (such as good, excellent) and from rank or dimension (such as $\mathsf{Ldim}$).
We give some basic axioms for a notion of largeness
and prove that any class admitting such a notion contains nontrivial sets on which a given notion of majority
arising from measure and the notion of majority arising from largeness are both well defined and agree. Moreover, this characterizes Littlestone classes.
Section eight contains several open questions suggested by these proofs. }
\section{Prior results and a characterization}
The original facts about good and excellent sets in \cite{MiSh:978} were proved for $k$-edge stable graphs; the translation to Littlestone
classes is immediate, but we record this here for completeness.
\begin{defn} \label{d:excellence}
Let $0 < \epsilon < \frac{1}{2}$ and let $(X, \mathcal{H})$ be a hypothesis class.
\begin{enumerate}
\item[(1)] Say
$B \subseteq X$ is \emph{$\epsilon$-good} if for any $h \in \mathcal{H}$, one of $\{ b \in B : h(b) = 1 \}$,
$\{ b \in B : h(b) = 0 \}$ has size $<\epsilon|B|$.
Write $\mathbf{t}(h,B) = 1$ in the first case, $\mathbf{t}(h,B) = 0$ in the second.
\item[(2)]
Say that $H \subseteq \mathcal{H}$ is \emph{$\epsilon$-excellent} if for any $B \subseteq X$ which is
$\epsilon$-good, one of $\{ h \in H : \mathbf{t}(h,B) = 1 \}$, $\{ h \in H : \mathbf{t}(h,B) = 0 \}$ has size $<\epsilon|H|$.
Write $\mathbf{t}(H,B) = 1$ in the first case, $\mathbf{t}(H,B) = 0$ in the second.
\item[(3)] Define ``$H$ is an $\epsilon$-good subset of $\mathcal{H}$'' and ``$A$ is an $\epsilon$-excellent subset of $X$'' in the parallel way
switching the roles of $X$ and $\mathcal{H}$.
\end{enumerate}
\end{defn}
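For a very small finite pair $(X, \mathcal{H})$ these definitions can be checked directly. The following brute-force sketch (Python; illustration only, the function names are ours, and the search over candidate good sets is exponential in $|X|$) makes the quantifiers concrete, following the orientation convention for $\mathbf{t}$ of Definition \ref{d:excellence}; hypotheses are $0/1$ tuples indexed by $X = \{0,\dots,n-1\}$.
\begin{verbatim}
from itertools import combinations

def is_good(B, H, eps):
    # B (a list of points of X) is eps-good: for every h in H, one of the
    # two parts of B cut by h has size < eps*|B|.
    for h in H:
        ones = sum(h[b] for b in B)
        if min(ones, len(B) - ones) >= eps * len(B):
            return False
    return True

def t(h, B, eps):
    # t(h,B) as in item (1) above: 1 if {b in B : h(b)=1} is the small side.
    return 1 if sum(h[b] for b in B) < eps * len(B) else 0

def is_excellent(Hsub, H, X, eps):
    # Nonempty Hsub (a sublist of H) is eps-excellent: for every eps-good
    # B, one of the two parts of Hsub cut by t(.,B) has size < eps*|Hsub|.
    for r in range(1, len(X) + 1):
        for B in combinations(X, r):
            if is_good(B, H, eps):
                side1 = sum(t(h, B, eps) for h in Hsub)
                if min(side1, len(Hsub) - side1) >= eps * len(Hsub):
                    return False
    return True
\end{verbatim}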
\begin{rmk} \label{d:monot}
\emph{The definition of $\epsilon$-good is monotonic in $\epsilon$: it becomes weaker as $\epsilon$ increases $($below $\frac{1}{2}$$)$.
This is a priori not the case for excellence, since as $\epsilon$ becomes larger
the collection of $\epsilon$-good sets $B$ quantified over may increase.}
\end{rmk}
\begin{fact}[\cite{MiSh:978} Claim 5.4 or \cite{MiSh:E98} Claim 1.8, in our language] \label{fact:978}
Suppose $(X, \mathcal{H})$ is a Littlestone class of dimension $d$ and $0 < \epsilon < \frac{1}{2^d}$.
Then for any finite $H \subseteq \mathcal{H}$ there exists $A \subseteq H$, $|A| \geq \epsilon^d |H|$ such that
$A$ is $\epsilon$-excellent.
\end{fact}
The proof of fact \ref{fact:978} proceeds by noting that if
$H = H_\emptyset$ is not $\epsilon$-excellent, then there is some $\epsilon$-good $A = A_\emptyset$ which witnesses this failure,
splitting $H_\emptyset$ naturally into $H_{\langle 0 \rangle}$ and $H_{\langle 1 \rangle}$ according to $\mathbf{t}$.
If either of these is excellent, we stop; if not, continue inductively
to label the internal nodes and leaves of a full binary tree with $A$'s and $H$'s respectively. Suppose we arrive at height $d$.
To extract a Littlestone tree, or equivalently a full special tree
(p. \pageref{tree-page} above), choose an $h_\rho$ from each $H_\rho$
and then show it is possible to choose a suitable
$a_\eta$ from each $A_\eta$ by using $\epsilon < 2^{-d}$, the definition of good (really, of $\mathbf{t}$)
and the union bound. Since this contradicts $\mathsf{Ldim}(\mathcal{H})=d$, at some earlier level some $H_\rho$ must become excellent.
Note moreover that the same proof works to show existence of $\epsilon$-good sets, simply by taking the sets $A$ to be singletons
(note that any singleton is trivially $\epsilon$-good). In this case, the union bound is not needed and the proof works for any $\epsilon < \frac{1}{2}$.
\begin{fact} \label{c:good}
Suppose $(X, \mathcal{H})$ is a Littlestone class, $\mathsf{Ldim}(\mathcal{H}) = d$ and $0 < \epsilon < \frac{1}{2}$.
Then for every finite $A \subseteq \mathcal{H}$ there exists $B \subseteq A$,
$|B| \geq \epsilon^{d}|A|$ such that $B$ is $\epsilon$-good.
\end{fact}
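One concrete way to carry out such an extraction is by repeated half-space splitting, as in Section seven below: if the current set is not $\epsilon$-good, some element of $X$ splits it into two parts each of size at least $\epsilon$ times the current set, and at least one of the two parts has strictly smaller Littlestone dimension, so keeping that part terminates within $d$ rounds. A minimal brute-force sketch (Python; illustration only, names ours, and it assumes the input class is nonempty):
\begin{verbatim}
def ldim(H, X):
    # Brute-force Littlestone dimension, as in the earlier sketch.
    best = 0
    for x in X:
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

def eps_good_subset(H, X, eps):
    # Returns an eps-good subset of the nonempty finite class H (in the dual
    # sense: cut by elements of X) of size >= eps^Ldim(H) * |H|.
    current = list(H)
    while True:
        witness = None
        for x in X:
            H0 = [h for h in current if h[x] == 0]
            H1 = [h for h in current if h[x] == 1]
            if min(len(H0), len(H1)) >= eps * len(current):
                witness = (H0, H1)
                break
        if witness is None:        # every element cuts off a < eps fraction
            return current
        H0, H1 = witness           # keep the half of smaller dimension
        current = H0 if ldim(H0, X) <= ldim(H1, X) else H1
\end{verbatim}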
We conclude this section by observing that
existence of large good sets (both in $X$ and in $\mathcal{H}$) is characteristic of Littlestone classes. At this point, a similar result
for excellence could also be stated, for some or for all sufficiently small $\epsilon$.
\begin{claim} \label{c:equiv} The following are equivalent for any hypothesis class $(X, \mathcal{H})$.
\begin{enumerate}
\item For every $\epsilon < \frac{1}{2}$ there is a constant $c= c(\epsilon) > 0$ such that for every finite $A \subseteq \mathcal{H}$ there exists $B \subseteq A$,
$|B| \geq c|A|$ such that $B$ is $\epsilon$-good.
\item For some $\epsilon < \frac{1}{2}$ and some constant $c > 0$, for every finite $A \subseteq \mathcal{H}$ there exists $B \subseteq A$,
$|B| \geq c|A|$ such that $B$ is $\epsilon$-good.
\item $\mathcal{H}$ is a Littlestone class.
\item For every $\epsilon < \frac{1}{2}$ there is a constant $c= c(\epsilon) > 0$ such that for every finite $A \subseteq X$ there exists $B \subseteq A$,
$|B| \geq c|A|$ such that $B$ is $\epsilon$-good.
\item For some $\epsilon < \frac{1}{2}$ and some constant $c > 0$, for every finite $A \subseteq X$ there exists $B \subseteq A$,
$|B| \geq c|A|$ such that $B$ is $\epsilon$-good.
\end{enumerate}
\end{claim}
\begin{proof}
By the remarks above (on the unstable formula theorem),
$(X, \mathcal{H})$ is a Littlestone class if and only if the ``dual class''
$X^\prime = \mathcal{H}$ and $\mathcal{H}^\prime = \{ \{ h \in \mathcal{H} : h(x) = 1 \} : x \in X \}$ is also. So it suffices to prove that (1), (2), (3) are equivalent, and
equivalence of (3), (4), (5) will hold by a parallel argument.
(1) implies (2) is immediate, and (3) implies (1) is Fact \ref{c:good}.
It remains to prove (2) implies (3).
Suppose we are given $\epsilon$ and $c$ from (2). Since any hypothesis class over a finite domain is necessarily Littlestone, we may assume
$X$ is infinite. Choose $n$ large enough so that
$\lfloor \epsilon \lfloor cn \rfloor \rfloor \geq 1$ and so that for any $k \geq \lceil c n \rceil$, we have
$\min \{ \lfloor \frac{k}{2} \rfloor -1, \lfloor \frac{k-1}{2} \rfloor \} \geq \epsilon k$.
If $\mathcal{H}$ is not a Littlestone class, then we know it has infinite (i.e. not finite) Threshold dimension and so for our chosen $n$,
there are elements $\{ x_i : i < n \}$ from $X$ and $H := \{ h_j : j < n \}$ from $\mathcal{H}$ such that
$x_i \in h_j$ if and only if $i<j$. But for any $H^\prime \subseteq H$ of size $k \geq cn$, we can pick out all the cuts of $H^\prime$
using the $x_i$'s. In particular, if $H^\prime = \{ h_{i_\ell} : \ell < k \}$ with $i_0 < \cdots < i_{k-1}$, let $m = \lceil \epsilon k \rceil$ (by choice of $n$, this
is not larger than whichever of $k/2$ or $(k-1)/2$ is an integer).
Then $x_{i_{m-1}}$ partitions $H^\prime$ into $\{ h_{i_\ell} : 0 \leq \ell < m \}$ and $\{ h_{i_\ell} : m \leq \ell < k \}$, both of which have size
$\geq \epsilon k$. So no such $H^\prime$ can be $\epsilon$-good, contradicting (2).
\end{proof}
\begin{conv} \label{conv:notation}
We clarify some notational points which hopefully will not cause confusion if explicitly pointed out.
\begin{itemize}
\item The word ``label'' in online learning usually refers to a value such as $0$ or $1$ attached to an element of $X$.
Although we will generally follow this, we also use phrases such as ``$X$-labeled tree'' to mean a tree in which we associate to each node an element of $X$.
\item In set theoretic notation,
for each integer $n$, $n = \{ 0, \dots, n-1 \}$. Also, $^x y$ denotes the set of functions from
$x$ to $y$, as distinguished from $y^x$ which is the \emph{size} of the set of functions from $x$ to $y$.
In this notation, a tree of height $T$ has levels $0$ to $T-1$, whereas in online learning
the same tree would have levels $1$ to $T$.
\end{itemize}
\end{conv}
\section{Dynamic Sauer-Shelah-Perles lemmas for Littlestone classes}
We take the occasion to state and prove a variant of the celebrated Sauer-Shelah-Perles (SSP) lemma~\cite{sauer}
which is adapted to Littlestone classes; this is a mild extension of versions known in the literature. The use of this lemma could be circumvented below by quoting prior work, but it seems to us possibly fruitful for future interactions
of online learning, model theory, and combinatorics, and so worth spelling out.
Let $\mathcal{H}$ be a class with Littlestone dimension $d<\infty$.
Two results which could be considered variants of the SSP lemma are known for Littlestone classes:
the first one, observed by Bhaskar~\cite{bhaskar}, provides an upper bound of ${T \choose \leq d}$ on the number
of leaves in a binary tree of height $T$ with $X$-labeled nodes that are reachable by $\H$.
The second, {dynamic} version is due to Ben-David, Pal, and Shalev-Shwartz~\cite{ben-david09agnostic}.
This lemma is a key ingredient in the characterization of (agnostic) online-learnability by Littlestone dimension;
it asserts the existence of ${T \choose \leq d}$ online algorithms (or experts or dynamic-sets)
such that for every sequence $x_1,\ldots, x_T$ and for every $h\in \mathcal{H}$
there exists an algorithm among the ${T \choose \leq d}$ algorithms
which produces the labels $h(x_1),\ldots , h(x_T)$ when given the sequence $x_1,\ldots, x_T$
as input.
The version we shall prove is a mild extension of the
Ben-David, Pal, Shalev-Shwartz lemma to the case of trees.
After stating the result, we define the key terms and then prove the characterization.
\begin{theorem} \label{dssp} Let $(X, \mathcal{H})$ be a Littlestone class of dimension $d$.
For every $T \in \mathbb{N}$ there exists a collection $\mathbf{A}$ of $\binom{T}{\leq d}$ algorithms $($dynamic sets$)$ such that for every binary tree
$\mathcal{T}$ of height $T$ with $X$-labeled internal nodes, every branch in $\mathcal{T}$ which is realized by some $h \in \mathcal{H}$ is also realized by some
algorithm from $\mathbf{A}$.
\end{theorem}
\begin{rmk}
{For simplicity, we define dynamic sets to be deterministic, but it is also reasonable for them to be random.
In general, a randomized algorithm is simply a distribution over deterministic algorithms. When a randomized algorithm is a distribution over prefix-dependent deterministic algorithms (defined below), we may say it is prefix-dependent.}
\end{rmk}
\begin{definition}[In the language of online learning] Fix $T \in \mathbb{N}$ and a set $X$.
A dynamic set (or adaptive expert) $\mathcal{A}$ is a function which assigns to each internal node in each $X$-labeled binary tree of height $\leq T$ a value in $\{ 0, 1 \}$ in a prefix-dependent way.
\end{definition}
To explain, notice that $\mathcal{A}$ naturally defines a walk in any such tree: it starts at the root which is labeled by
some $a_\emptyset =: a_0$, it outputs $\mathcal{A}( \langle a_0 \rangle) =: t_0$, then travels left (if its output was $0$) or right (if its output was $1$) to a node
labeled by $a_{\langle t_0 \rangle} =: a_1$, where it outputs $\mathcal{A}(\langle a_0, a_1 \rangle ) =: t_1$ and so on.
Prefix-dependence means that for any $\ell \leq T$, if in two different trees the sequences of values $a_0, \dots, a_\ell$ produced
in this way are the same, then also the output $t_\ell$ of $\mathcal{A}$ in both cases is the same.
It should now be clear what it means for an algorithm $\mathcal{A}$ to realize a branch in a tree (the directions it gives
instruct us to walk along this root-to-leaf path). Note that we can think of each $h \in \mathcal{H}$ as a very simple dynamic set in
its guise as a characteristic function.
\begin{rmk}
In online learning one distinguishes between adaptive and oblivious experts (or between experts with and without memory): an oblivious expert is simply an $X\to\{0,1\}$ function,
whereas an adaptive expert has memory and can change its prediction based on previous observations.
The above definition captures adaptive experts. The above definition slightly deviates from the standard definition of adaptive experts. In the standard definition, one usually only considers sequences (or oblivious trees), rather than general trees. Notice that the distinction between oblivious and general trees can be expressed analogously with respect to the adversary: the adversary, who presents the examples to the online learner, can be oblivious -- in which case it decided on the sequence of examples in advance, or it can be adaptive -- in which case it decides which example to present at time $t$ based on the predictions the online learning algorithm made up to time $t$. In this language, our version of the dynamic SSP applies also to adaptive adversaries, whereas the previous version was restricted to oblivious adversaries.
\end{rmk}
\begin{definition}[In more set-theoretic language]
Let $T \in \mathbb{N}$, and $\kappa = |X|^T$.
Consider the set $\mathcal{E} = \langle e_i : i < \kappa \rangle$ of all $T$-element sequences of elements of $X$.
A \emph{dynamic set} assigns to each enumeration $e_i$ a function $f_i : T \rightarrow \{ 0, 1 \}$, and the assignment must be
\emph{coherent} in the sense that if $e_i \upharpoonright \beta = e_j \upharpoonright \beta$ then $f_i \upharpoonright \beta = f_j \upharpoonright \beta$.
\end{definition}
\begin{expl}
Let $X = \mathbb{N}$. Let $\mathcal{A}$ be the algorithm which receives $a_t$ at time $t$ and
outputs $1$ if $a_t$ is the largest prime it has seen so far and $0$ otherwise.
\end{expl}
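In code, the expert of this example might look as follows (a sketch; the wrapper and names are ours). Its state, the list of primes seen so far, depends only on the prefix of elements presented, so it is prefix-dependent in the sense above.
\begin{verbatim}
def make_expert():
    # Predict 1 iff the current element is the largest prime seen so far.
    def is_prime(n):
        return n >= 2 and all(n % p for p in range(2, int(n**0.5) + 1))
    primes_seen = []
    def predict(a_t):
        if is_prime(a_t):
            primes_seen.append(a_t)
        return int(bool(primes_seen) and is_prime(a_t)
                   and a_t == max(primes_seen))
    return predict

expert = make_expert()
print([expert(a) for a in [4, 7, 5, 11, 6, 11]])  # [0, 1, 0, 1, 0, 1]
\end{verbatim}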
\begin{definition}
Given a possibly partial characteristic function $g$ with $\operatorname{dom}(g) \subseteq X$ and $\operatorname{range}(g) \subseteq \{ 0, 1 \}$, define $H_g = \{ h \in \mathcal{H} : g \subseteq h \}$.
\end{definition}
\begin{rmk} \label{l:half}
Observe that if $\mathcal{H}$ is a Littlestone class,
$H \subseteq \mathcal{H}$ and $f$ is a possibly partial, possibly empty characteristic function with $\operatorname{dom}(f) \subseteq X$ and $a \in X$, then
\[ (\star) ~~~\min \{ \mathsf{Ldim}(H_{f \cup \{ (a, 0) \}}), \mathsf{Ldim}(H_{f \cup \{ (a, 1) \}}) \} < \mathsf{Ldim} (H_{f}) \]
i.e., on one side of any partition by a half-space the dimension must drop,
since $\mathsf{Ldim}(H_{f}) \leq \mathsf{Ldim}(\mathcal{H}) = d$ is defined and finite. $($This property is what enables the notion of Littlestone majority vote.$)$
\end{rmk}
\begin{proof}[Proof of Theorem \ref{dssp}]
To define our algorithms some notation will be useful. By ``tree'' in this proof we always mean an $X$-labeled binary tree of height $T$.
Given an algorithm $\mathcal{A}$ and a tree $\mathcal{T}$, let $\sigma = \sigma(\mathcal{A}, \mathcal{T}) = \langle a_i : i < T \rangle$
and let $\tau = \tau(\mathcal{A}, \mathcal{T}) = \langle t_i : i < T \rangle$ denote the sequence of elements of $X$ associated to the nodes traversed,
and the corresponding outputs of $\mathcal{A}$, respectively. Again, given $\mathcal{A}$ and $\mathcal{T}$, let $\gamma = \gamma(\mathcal{A}, \mathcal{T}) = \langle g_i : i < T \rangle$ be the sequence of partial characteristic functions
given by $g_i = \{ (a_j, t_j) : j < i \}$.
We define the $\binom{T}{\leq d}$ algorithms as follows. Each algorithm $\mathcal{A}$ is parametrized by a set $A \subseteq\{ 0, \dots, T-1 \}$ of size $\leq d$
(and there is an algorithm for each such set).
Given any tree $\mathcal{T}$, the algorithm proceeds as follows. Upon reaching a node at level $i$ labeled by $a_i$, it computes the values
$\mathsf{Ldim}(H_{g_i \cup \{ (a_i, 0) \}})$ and $\mathsf{Ldim}(H_{g_i \cup \{ (a_i, 1) \}})$.
Informally, it asks how the Littlestone dimension of the set
$H_{g_i}$ will change according to the decision on~$a_i$. It then makes its decision by cases.
If $i \in A$, then the algorithm chooses the value of $t_i$ which will make $\mathsf{Ldim}(H_{g_i \cup \{ (a_i, t_i) \}})$ smaller, and in case of ties chooses $0$.
If $i \notin A$, then the algorithm chooses the value of $t_i$ which will make $\mathsf{Ldim}(H_{g_i \cup \{ (a_i, t_i) \}})$ larger, and in case of ties chooses~$1$.
This finishes the definition of our class $\mathbf{A}$. Clearly the algorithms involved are all prefix dependent.
Let us verify that for any tree $\mathcal{T}$ and any $h \in \mathcal{H}$ there is an algorithm in $\mathbf{A}$ realizing the same branch as $h$.
Let $(b_0, s_0), \dots, (b_{T-1}, s_{T-1})$ denote the root-to-leaf path traversed by $h$. For each $i < T$, let
$f_i = \{ (b_j, s_j) : j < i \}$ denote the partial characteristic function in play as we arrive at $b_i$.
(Notice that necessarily each $f_i \subseteq h$.)
Let us consider how we may use $A$ to signal what to do.
Let $d^i = \mathsf{Ldim}(H_{f_i})$,
let $d^i_0 = \mathsf{Ldim} (H_{f_i \cup \{ (b_i, 0) \}})$ and let
$d^i_1 = \mathsf{Ldim} (H_{f_i \cup \{ (b_i, 1) \}})$.
There are several cases. If we know that at stage $i$ the $\mathsf{Ldim}$
does not drop then by \ref{l:half} the choice is determined.
If we know that the $\mathsf{Ldim}$ drops and $d^i_0 \neq d^i_1$ then the
choice is determined by knowing whether we chose the larger or smaller. If we know that $\mathsf{Ldim}$ drops and $d^i_0 = d^i_1$ then the
choice is determined by knowing whether or not we went left. With this in mind,
define $B \subseteq \{ 0, \dots, T-1 \}$ to be $B = \{ i < T :
( ~\mathsf{Ldim}(H_{f_i}) \geq \mathsf{Ldim} (H_{f_i \cup \{ (b_i, 1-s_i) \}})
> \mathsf{Ldim} (H_{f_i \cup \{ (b_i, s_i) \}})~) $ or $ (\mathsf{Ldim}(H_{f_i \cup \{ (b_i, s_i) \}}) = \mathsf{Ldim}(H_{f_i \cup \{ (b_i, 1-s_i) \}}) = \mathsf{Ldim}(H_{f_i})-1$ and
$s_i = 0 ) \}$.
In English, $B$ is the set of all $i < T$ at which either there was only one way to make the dimension drop as much as possible,
or both ways the dimension dropped by the same amount and we went left. Since at every $i \in B$ the Littlestone dimension drops, necessarily $|B| \leq d$.
Consider the algorithm $\mathcal{A} \in \mathbf{A}$ parameterized by $B$. We argue by induction on $i<T$ that
$g_i = f_i$, that is, $a_i = b_i$ and $t_i = s_i$. To start, $a_0 = b_0$ is the label of the root.
If $i \notin B$, then at this stage along the path traversed by $h$, either the
Littlestone dimension did not drop as much as possible or $s_i =1$.
In the first case,
there is only one value of $t_i$ which will keep the dimension larger,
and that is $t_i = s_i$. If the dimension went down equally for both successors, $\mathcal{A}$ will choose $t_i = 1 = s_i$.
If $i \in B$, then here the Littlestone dimension must drop as much as possible, so either there is only one way to achieve this and
so $t_i = s_i$, or both successors drop equally and
$t_i = 0 = s_i$. This completes the proof.
\end{proof}
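For a small finite pair $(X, \mathcal{H})$, the construction in the proof can be made executable. The following sketch (Python; brute force, names ours) builds the dynamic set parameterized by $A \subseteq \{0, \dots, T-1\}$: at level $i$ it chooses the label minimizing the Littlestone dimension of the class consistent with the partial function built so far (ties towards $0$) when $i \in A$, and the label maximizing it (ties towards $1$) otherwise; the full family ranges over all $A$ of size at most $d$.
\begin{verbatim}
from itertools import chain, combinations

def ldim(H, X):
    # Brute-force Littlestone dimension (empty class: -1), as in earlier sketches.
    if not H:
        return -1
    best = 0
    for x in X:
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

def expert(A, H, X):
    # The dynamic set parameterized by A: feed it the node labels a_0, a_1, ...
    # met along its walk; it returns its output t_i at each step.
    g, step = [], [0]      # partial characteristic function, current level
    def predict(a_i):
        i = step[0]; step[0] += 1
        restrict = lambda v: [h for h in H
                              if h[a_i] == v and all(h[y] == w for y, w in g)]
        d0, d1 = ldim(restrict(0), X), ldim(restrict(1), X)
        t_i = (0 if d0 <= d1 else 1) if i in A else (1 if d1 >= d0 else 0)
        g.append((a_i, t_i))
        return t_i
    return predict

def parameter_sets(T, d):
    # The binom(T, <= d) parameter sets of the family in the theorem above.
    return [set(A) for A in
            chain.from_iterable(combinations(range(T), r) for r in range(d + 1))]
\end{verbatim}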
We now verify that this is a characterization.
\begin{lemma}
Suppose $(X, \mathcal{H})$ does not have finite Littlestone dimension. Then for every $d \in \mathbb{N}$, for all sufficiently large $T \in \mathbb{N}$
and every collection $\mathbf{A}$ of $\binom{T}{\leq d}$ dynamic sets, there is some binary tree $\mathcal{T}$ of height $T$ with
$X$-labeled internal nodes and some $h \in \mathcal{H}$ which realizes a branch in $\mathcal{T}$ not realized by any algorithm from $\mathbf{A}$.
\end{lemma}
\begin{proof}
Suppose $d$ is given. Choose $T$ large enough so that $2^T > T^d$.
Since $\mathsf{Ldim}$ is not finite, we may construct a full binary tree $\mathcal{T}$ of height $T$ whose nodes are labeled by elements of $X$ and such that every branch is
realized by some $h \in \mathcal{H}$. However, every algorithm $\mathcal{A} \in \mathbf{A}$ realizes one and only one branch in $\mathcal{T}$, so there are
not enough of them to cover all branches.
\end{proof}
One final point. We have verified that the algorithms $\mathbf{A}$ built for $\mathcal{H}$ in \ref{dssp} can realize any branch in any relevant tree
which is realized by an element of $\mathcal{H}$. It is useful to observe that $\mathbf{A}$ simulates $\mathcal{H}$ in an even stronger way:
its algorithms can continue to simulate the realization of branches
by $\mathcal{H}$ even when we weaken the notion of realization to allow a certain number of mistakes.
\begin{corollary} \label{c:path}
Let $\mathcal{H}$ be a Littlestone class of dimension $d$, let $T \in \mathbb{N}$, and let $\mathbf{A}$ be the family of $\binom{T}{\leq d}$ algorithms
constructed for $\mathcal{H}$ in Theorem $\ref{dssp}$. Let $\mathcal{T}$ be any binary tree of height $T$ with $X$-labeled internal nodes. Given any
branch $(a_0, t_0), \dots, (a_{T-1}, t_{T-1})$ and any $h \in \mathcal{H}$, let $S = \{ i < T : h(a_i) \neq t_i \}$ be the set of ``mistakes'' made by
$h$ for this branch. Then there is an algorithm $\mathcal{A} \in \mathbf{A}$ which makes the same set of mistakes for this branch.
\end{corollary}
\begin{proof}
Consider a new tree $\mathcal{T}_*$ where all the nodes at level $i$ have the same label $a_i$ (the tree is ``oblivious'').
So branches through $\mathcal{T}_*$ amount to choosing subsets of $\{ a_i : i < T \}$. This special tree also
falls under the jurisdiction of Theorem \ref{dssp}, and so gives our corollary.
\end{proof}
\section{Existence}
{In this section we give an existence proof for $\epsilon$-excellent sets via regret bounds. To begin we re-present the definitions of
good and excellent in the language of probability.}
\paragraph{$\epsilon$-Good and $\epsilon$-Excellent Distributions.}
Let $\H\subseteq \{0,1\}^\mathcal{X}$.
We say a distribution $P$ over $\mathcal{X}$ is $\epsilon$-good w.r.t $\H$ if
\[(\forall h\in \H): \Pr_{x\sim P}[h(x)=1]\in [0,\epsilon]\cup[1-\epsilon,1]. \]
Similarly, a distribution $Q$ over $\H$ is $\epsilon$-good if
\[(\forall x\in \mathcal{X}): \Pr_{h\sim Q}[h(x)=1]\in [0,\epsilon]\cup[1-\epsilon,1].\]
Next, a distribution $P$ over $\mathcal{X}$ is $\epsilon$-excellent if
\[(\forall \text{ $\epsilon$-good } Q ): \Pr_{h\sim Q, x\sim P}[h(x)=1]\in [0,\epsilon]\cup[1-\epsilon,1].\]
Finally, a distribution $Q$ over $\H$ is $\epsilon$-excellent if
\[(\forall \text{ $\epsilon$-good } P ): \Pr_{h\sim Q, x\sim P}[h(x)=1]\in [0,\epsilon]\cup[1-\epsilon,1].\]
We say that a subset of $\H$ or of $\mathcal{X}$ is $\epsilon$-good ($\epsilon$-excellent) if the uniform distribution
on it is $\epsilon$-good ($\epsilon$-excellent).
In the process of extracting large $\epsilon$-excellent subsets of $\H$
we use trees whose nodes are labelled by $\epsilon$-good distributions;
let us refer here to such trees as $\epsilon$-good trees.
\begin{definition}
{Note that each hypothesis $h \in \mathcal{H}$ naturally realizes a branch in an $\epsilon$-good tree $\mathcal{T}$: at a node labelled by an $\epsilon$-good distribution $P$, the branch of $h$ follows the direction $1$ if $\Pr_{x\sim P}[h(x)=1]\geq 1-\epsilon$ and the direction $0$ otherwise.
The tree $\mathcal{T}$ is said to be \underline{shattered} by $\H$ if every branch
is realized by some $h\in \H$.}
\end{definition}
\begin{disc}
{Note that for any $\mathcal{H}$ the definitions of $\epsilon$-good and excellent distributions
are always trivially meaningful: consider a distribution concentrating on a single element. When $(X, \mathcal{H})$ is a Littlestone pair, the
natural adaptation of Fact \ref{fact:978} to this setting will give many nontrivial examples.}
\end{disc}
\begin{theorem}\label{thm:regret}
Let $\H$ be an hypothesis class, let $\mathcal{T}$ be an $\epsilon$-good complete binary tree that is shattered by $\H$, and let $T$ denote the depth of $\mathcal{T}$.
Then, for every online learning algorithm~$\mathcal{A}$, the tree $\mathcal{T}$ witnesses
a lower bound on the regret of $\mathcal{A}$ in the following sense.
There exist distributions $\mathcal{D}_1,\ldots, \mathcal{D}_T$ over $X\times\{0,1\}$
such that an independent sequence of random examples $(x_t,y_t)\sim \mathcal{D}_t, t=1,\ldots,T$ satisfies the following:
\begin{itemize}
\item The expected number of mistakes $\mathcal{A}$ makes on the random sequence is at least $\frac{T}{2}$.
\item $\exists h\in \H$ whose expected number of mistakes on the random sequence is at most $\epsilon\cdot T$.
\end{itemize}
Thus, the expected regret of $\mathcal{A}$ w.r.t $\H$ on the random sequence is at least $(\frac{1}{2}-\epsilon)\cdot T$.
\end{theorem}
Before we prove this theorem, let us demonstrate how one can use
it to bound the maximum depth of an $\epsilon$-good
tree which is shattered by a Littlestone class $\H$:
it is known that for every class $\H$ and for every $T\in\mathbb{N}$ there exists an algorithm $\mathcal{A}$ whose expected\footnote{The algorithm $\mathcal{A}$ is randomized.}
regret w.r.t any sequence of examples $(x_1,y_1),\ldots, (x_T,y_T)$ is
\begin{equation}
O\bigl(\sqrt{d\cdot T}\bigr),
\end{equation}
where $d$ is the Littlestone dimension of $\H$, and the big oh notation conceals a fixed numerical constant.\footnote{The derivation of the (optimal) bound of $O(\sqrt{d\cdot T})$ is somewhat involved~\cite{alon21}, however a slightly weaker bound of $O(\sqrt{d\cdot T \log T})$ can be proven using elementary arguments~\cite{ben-david09agnostic}.}
Thus, by Theorem~\ref{thm:regret}, it follows that if there exists a (complete) $\epsilon$-good
tree that is shattered by $\H$ of depth $T$ then $T$ must satisfy the following inequality:
\[\Bigl(\frac{1}{2}-\epsilon\Bigr)\cdot T \leq O\Bigl(\sqrt{d\cdot T}\Bigr).\]
Indeed, the LHS in the above inequality is a lower bound on the expected regret of $\mathcal{A}$, whereas the RHS is an upper bound on it. A simple arithmetic manipulation yields that $T=O(d/(1/2 - \epsilon)^2)$.
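Spelled out, writing $C$ for the constant hidden in the $O(\sqrt{d\cdot T})$ regret bound and assuming $T \geq 1$:
\[\Bigl(\frac{1}{2}-\epsilon\Bigr) T \;\leq\; C\sqrt{d T}
\quad\Longrightarrow\quad
\Bigl(\frac{1}{2}-\epsilon\Bigr)\sqrt{T} \;\leq\; C\sqrt{d}
\quad\Longrightarrow\quad
T \;\leq\; \frac{C^2\, d}{\bigl(\frac{1}{2}-\epsilon\bigr)^2}.\]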
Thus, we get the following corollary:
\begin{corollary}\label{c:approxldim}
Let $\H$ be a class with Littlestone dimension $d<\infty$ and let $\epsilon\in \bigl[0,\frac{1}{2}\bigr)$. Denote by $d_\epsilon$
the maximum possible depth of a complete $\epsilon$-good tree which is shattered by $\H$. (Note that $d_0=d$.)
Then,
\begin{equation*}
d_\epsilon = O\Bigl(\frac{d}{\bigl(\frac{1}{2}-\epsilon\bigr)^2}\Bigr).
\end{equation*}
\end{corollary}
\begin{proof}[Proof of Theorem~\ref{thm:regret}]
Let $\mathcal{T}$ be a tree and $\mathcal{A}$ be an online algorithm as in the premise of the theorem.
We begin with defining the distributions $\mathcal{D}_t$.
We first note that the label in each distribution $\mathcal{D}_t$ is deterministic;
that is, there exist a distribution $D_t$ over $X$ and a label $y_t\in\{0,1\}$
such that a random example $(x,y)\sim \mathcal{D}_t$ satisfies that $y=y_t$ always (with probability $=1$) and $x\sim D_t$.
The distributions $D_i$ and labels $y_i$ correspond
to a branch of~$\mathcal{T}$ as follows:
\begin{itemize}
\item Initialize $t=1$, set the ``current'' node $v_t$ to be the root of the tree.
\item For $t=1,\ldots, T$
\begin{enumerate}
\item Let $D_t$ denote the $\epsilon$-good distribution $D_{v_t}$ which is associated with $v_t$.
\item Define the label $y_t$ to be $1$ if and only if
\[\Pr_{(x_i)_{i=1}^t \sim \prod_{i=1}^t D_i, \mathcal{A}}\Bigl[\mathcal{A}\bigl(x_t; (x_{t-1},y_{t-1}),\ldots, (x_1,y_1)\bigr)=1\Bigr]\leq 1/2,\]
where $\mathcal{A}$ is the considered online algorithm.
(Note that the above probability is taken w.r.t the sampling of the $x_i$'s, as well as the randomness of $\mathcal{A}$ in case it is a randomized algorithm.)
I.e.\ the adversary forces that the algorithm errs with probability at least $1/2$ on $x_t$ when given an input sequence $(x_1,y_1),\ldots, (x_{t-1},y_{t-1}), x_t$, where the $x_i$'s are sampled from the $D_i$'s.
\item Set $v_{t+1}$ to be the root of the subtree corresponding to the label $y_t$.
\end{enumerate}
\item Output the sequence $(D_1,y_1),\ldots, (D_T, y_T)$.
\end{itemize}
Let $(x_t)_{t=1}^T \sim \prod_{t=1}^T D_t$ and fix $t\leq T$.
Let $\hat y_t = \mathcal{A}\bigl(x_t; (x_{t-1},y_{t-1}),\ldots, (x_1,y_1)\bigr)$ be the prediction of $\mathcal{A}$ on $x_t$. Thus, by construction $\hat y_t\neq y_t$ with probability at least $1/2$, and therefore, by linearity of expectation:
\[\mathop \mathbb{E}_{(x_t)_{t=1}^T \sim \prod_{t=1}^T D_t, \mathcal{A}}\Bigl[\sum_{t=1}^T 1[y_t\neq \hat y_t]\Bigr]\geq \frac{T}{2}.\]
It thus remains to show that there exists $h\in \H$ whose expected
number of mistakes is at most $\epsilon\cdot T$.
Indeed, this follows by considering
a hypothesis $h\in \H$ which realizes the branch corresponding to $(D_t,y_t)_{t=1}^T$:
for each fixed distribution $D_t$ on this branch, the probability that $h$ errs on $x_t\sim D_t$ is at most $\epsilon$.
Thus, by linearity of expectation,
the expected number of mistakes is at most $\epsilon\cdot T$,
and therefore there exists $h$ as stated.
\end{proof}
\begin{disc}
To explain the use of the Littlestone dimension hidden in this argument, we highlight the ideas from online learning on which the above proof relies, although doing so is not formally necessary at this point. We also emphasize that this discussion surveys much prior work and not only what we do here.
There are two main ideas: the first one has to do with no-regret algorithms (such as the multiplicative-weights algorithm, see e.g.~\cite{LittlestoneW94}).
Consider a set of $m<\infty$ experts (say weather forecasters), and every evening each of them tells us whether they think it will rain tomorrow or not.
Then, we use this list of predictions to make a prediction of our own.
Can we come up with a strategy that will guarantee that over the course of $T$ days our prediction will not be much worse than that of the best expert in hindsight?
No-regret algorithms address this problem and provide {\it regret bounds} of roughly $\sqrt{T\cdot\log m}$:
namely such algorithms make at most roughly $\sqrt{T\cdot\log m}$ mistakes more than the best expert.
We stress that the $m$ experts can be arbitrary algorithms; in particular they may base their prediction at day $t$
on all information up to that point (i.e.\ prefix-dependence).
The second main idea, which is where the Littlestone dimension is used, is that any (possibly infinite) Littlestone class $\mathcal{H}$
can be covered by a finite set of experts: that is, for every $T<\infty$ there is a set of ${T \choose \leq \mathsf{Ldim}(\mathcal{H})}$
dynamic-sets which cover all hypotheses in $\mathcal{H}$ with respect to sequences of length $T$.
Thus, applying a no-regret algorithm to this (finite!) set of experts ensures that our regret is small relative to the best $h\in\mathcal{H}$.
These two ideas imply that there are no-regret algorithms for any Littlestone class.
Our bound on $d_\epsilon$, the maximal height of an $\epsilon$-good complete shattered tree,
exploits these connections in the new setting of $\epsilon$-good trees.
\end{disc}
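For concreteness, here is a minimal sketch of the first ingredient, a randomized exponential-weights forecaster over a finite list of (possibly stateful) experts (Python; the interface and the particular learning rate are our choices, made so that the expected regret is of order $\sqrt{T\cdot\log m}$; it is not the exact algorithm of the cited works). Feeding it the $\binom{T}{\leq \mathsf{Ldim}(\mathcal{H})}$ dynamic sets of Theorem~\ref{dssp} as the expert list is exactly the combination of the two ideas described above.
\begin{verbatim}
import math, random

def exponential_weights(experts, T):
    # experts: callables taking the current element x_t and returning 0 or 1
    # (they may keep internal state).  Call step(x_t, y_t) once per round;
    # it returns the forecaster's prediction, made before seeing y_t.
    m = len(experts)
    eta = math.sqrt(8 * math.log(max(m, 2)) / T)
    weights = [1.0] * m
    def step(x_t, y_t):
        preds = [e(x_t) for e in experts]
        p1 = sum(w for w, p in zip(weights, preds) if p == 1) / sum(weights)
        y_hat = 1 if random.random() < p1 else 0   # randomized prediction
        for i, p in enumerate(preds):              # penalize mistaken experts
            if p != y_t:
                weights[i] *= math.exp(-eta)
        return y_hat
    return step
\end{verbatim}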
\begin{corollary} \label{c:one-half}
{Let $\mathcal{H}$ be a Littlestone class of finite Littlestone dimension $d$, let $0 < \epsilon < \frac{1}{2}$, and let
$d_\epsilon$ be as in Corollary $\ref{c:approxldim}$. Then for any finite
$H \subseteq \mathcal{H}$ there is $A \subseteq H$, $|A| \geq \epsilon^{d_\epsilon} |H|$ such that $A$ is $\epsilon$-excellent.}
\end{corollary}
\section{An Alternative Derivation Using Closure Properties}
In this section we lay out an alternative argument for upper bounding $d_\epsilon$, the maximal depth of an $\epsilon$-good tree which is shattered by $\mathcal{H}$
when $\epsilon < \frac{1}{2}$ and $\mathcal{H}$ is a Littlestone class.
The resulting bound is significantly weaker than the one stated in Corollary~\ref{c:approxldim},
but the reasoning may perhaps be more intuitive. In particular, it does not rely on the notion of regret from online learning.
In the rest of this section, let $\epsilon < \frac{1}{2}$ be arbitrary but fixed.
The first idea is to exploit certain closure properties of Littlestone classes.
Fix for a while some $k \in \mathbb{N}$ and some function $B:\{0,1\}^k\to \{0,1\}$.
Given $(X, \mathcal{H})$, let $(X, \mathcal{H}^{(B)})$ denote the class
\[\mathcal{H}^{(B)} = \Bigl\{B(h_1,\ldots, h_k) : h_i\in \mathcal{H}\Bigr\},\]
where $B(h_1,\ldots, h_k)$ denotes the function which takes $x\in X$ to $B(h_1(x),\ldots h_k(x))\in\{0,1\}$.
Informally, we enrich $\mathcal{H}$ by adding some additional hypotheses which come from applying $B$
to $k$-tuples of elements of $\mathcal{H}$. We stress that although $B$ can be arbitrary, it is fixed for any instance of this construction.
It has been shown that if $\mathcal{H}$ is a Littlestone class (i.e.\ $\mathsf{Ldim}(X, \mathcal{H})<\infty$)
then also $\mathcal{H}^{(B)}$ is a Littlestone class~\cite{alon20,ghazi21}. In particular,
\[\mathsf{Ldim}\bigl(X, \mathcal{H}^{(B)} \bigr) = O\Bigl(\mathsf{Ldim}(X, \mathcal{H})\cdot k \cdot\log k\Bigr),\]
where the big oh notation conceals a universal numerical constant.
Let us sketch a proof of this fact using the language of section three. If $\mathsf{Ldim}(X, \mathcal{H}) = d$ then for any integer $T$ we have a set $\mathcal{E}_T$ of
$\binom{T}{\leq d}$ dynamic sets which simulate $\mathcal{H}$ on any $X$-labeled binary tree of height~$T$. To see that $\mathcal{H}^{(B)}$ is also a
Littlestone class it would suffice to show that the same is true for some $d^\prime$ replacing $d$.
For each $T$, and for each $k$-tuple of dynamic sets
$E_1, \dots, E_k$ from $\mathcal{E}_T$, let $B(E_1, \dots, E_k)$ denote the dynamic set which operates by applying $B$ to the outputs of
$E_1, \dots, E_k$. Let $\mathcal{E}_T(B) = \{ B(E_1, \dots, E_k) : E_1, \dots, E_k \in \mathcal{E}_T \}$. Observe that this collection of dynamic sets
simulates $\mathcal{H}^{(B)}$ on any $X$-labeled binary tree of height $T$ and its size will remain polynomial in $T$ (at most roughly $T^{dk}$).
On the other hand, had $\mathcal{H}^{(B)}$ not been a Littlestone class, then one would need $2^T \gg T^{dk}$ dynamic sets to cover it.
One more step is needed here: we may also apply $B$ dually to $X$ rather than $\mathcal{H}$. To make sense of this, consider $(X, \mathcal{H})$ as a bipartite graph
with an edge between $x \in X$ and $h \in \mathcal{H}$ if $h(x) = 1$. In this picture, $\mathcal{H}^{(B)}$ added some new points to the side of $\mathcal{H}$ and
defined a rule for putting an edge between any such new point and any given element of $X$. To apply $B$ dually, we carry out the
parallel operation for $X$ instead.
That is, let $(X^{(B)}, \mathcal{H})$ be the class where $X$ is enriched by new elements as follows: for any $x_1, \dots, x_k \in X$ define an element
$B(x_1,\dots, x_k)$ and for any $h \in \mathcal{H}$, define $h(B(x_1, \dots, x_k)) = 1$ if and only if $B(h(x_1), \dots, h(x_k)) = 1$.
Recalling section one, the dual of a Littlestone class is a Littlestone class, so $(X^{(B)}, \mathcal{H})$ is Littlestone, though the $\mathsf{Ldim}$ may be quite a bit larger.\footnote{It is known that $\mathsf{Ldim}(X,\mathcal{H})\leq 2^{2^{\mathsf{Ldim}(\mathcal{H},X)}}$.}
We may summarize this part by saying: for any $k \in \mathbb{N}$ and any function $B: \{ 0, 1 \}^k \rightarrow \{ 0, 1 \}$,
if $(X, \mathcal{H})$ is a Littlestone class, then $(X^{(B)}, \mathcal{H})$ is a Littlestone class too.
How is this useful for deriving a bound on $d_\epsilon$? Suppose we are given an $\epsilon$-good tree~$\mathcal{T}$ which is shattered by $\mathcal{H}$.
Choose $k$ large enough: $k=O(\mathsf{VCdim}(X, \mathcal{H})/(\frac{1}{2}-\epsilon)^2)$ will suffice.
Let $B: \{ 0, 1 \}^k \rightarrow \{ 0, 1 \}$ be the
majority vote operation given by $(x_1, \dots, x_k) \mapsto 0$ if $|\{ 1\leq i \leq k : x_i = 0 \}| \geq \frac{1}{2} k$
and $(x_1, \dots, x_{k}) \mapsto 1$ otherwise.
Suppose we independently sample $k$ elements $x_1, \dots, x_k$
from one of the $\epsilon$-good distributions labeling our given tree. Then by our choice of $k$, the VC theorem tells us
that, with positive probability, the trace of each $h \in \mathcal{H}$ on this sample is close enough to its true proportion.
Here ``close enough'' means that the error is less than $\frac{1}{2}-\epsilon$. In particular,
with positive probability, for \underline{every} $h \in \mathcal{H}$ the majority vote $B(h(x_1), \dots, h(x_k))$ on this sample agrees with the opinion of
the $\epsilon$-good distribution on $h$. We can therefore sample $k$ elements from each of the distributions labeling the nodes of the tree
and with positive probability, all samples will be correct in this way.
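In symbols, the event we need from each sample $x_1, \dots, x_k$ drawn i.i.d.\ from an $\epsilon$-good distribution $P$ is
\[ \sup_{h \in \mathcal{H}}\ \Bigl|\ \tfrac{1}{k}\,\bigl|\{ i \leq k : h(x_i) = 1 \}\bigr| - \Pr_{x\sim P}[h(x)=1]\ \Bigr| \;<\; \tfrac{1}{2}-\epsilon, \]
which the VC theorem grants with positive probability once $k \geq C\cdot \mathsf{VCdim}(X,\mathcal{H})/(\frac{1}{2}-\epsilon)^2$ for a suitable universal constant $C$ (we do not track the constant). On this event, for every $h \in \mathcal{H}$, the majority vote $B(h(x_1), \dots, h(x_k))$ equals $1$ exactly when $\Pr_{x\sim P}[h(x)=1] \geq 1-\epsilon$, i.e.\ it agrees with the opinion of $P$ about $h$.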
The crucial observation follows: any full binary $\epsilon$-good tree $\mathcal{T}$ which is shattered by $\H$
can be transformed to a (standard) full binary tree $\mathcal{T}'$ of the same height which is shattered by the class $(X^{(B)}, \mathcal{H})$.
That is, there exists a choice of $k$ elements for each node in $\mathcal{T}$ such that the corresponding tree $\mathcal{T}'$ whose nodes are labelled by
the $k$-wise majority votes of these elements (i.e. by the appropriate $B(x_1, \dots, x_k)$) is shattered by $(X^{(B)}, \mathcal{H})$. This shows that the height of $\mathcal{T}'$ (and also of $\mathcal{T}$) is bounded by $\mathsf{Ldim}(X^{(B)}, \mathcal{H})$, which is finite.
(Note that this argument implicitly gives an inequality between the approximate and virtual Littlestone dimension, which are defined in the next section.)
This completes the sketch of the proof.
This argument is closer in spirit to similar arguments in VC theory concerning the behavior of the VC dimension under natural operations. The obtained bounds, however, are much weaker (at least double-exponentially weaker than the bound in Corollary~\ref{c:approxldim}).
\section{Approximate and virtual Littlestone dimension}
In this section we apply our earlier results to analyze two a priori strengthenings of Littlestone dimension. Both depend on a choice of
$\epsilon < \frac{1}{2}$. The first comes from observing that $\epsilon$-good sets behave much like elements;
we might call them ``virtual elements.'' The ``virtual'' Littlestone dimension asks about the maximal height of a tree where the nodes
can be labeled by possibly virtual elements of $X$. A second definition comes from allowing
a small fraction of mistaken edges in the definition of ``$h$ realizes a branch.''
\begin{conv}
For this section, let $(X, \mathcal{H})$ be some hypothesis class, and for most of this section, let $\epsilon < \frac{1}{2}$ be arbitrary but fixed.
\end{conv}
\begin{definition} \label{d:ve}
A \emph{virtual element} is an $\epsilon$-good set.
\end{definition}
Notice that all elements of $X$ are also virtual elements. In \ref{d:ve} ``virtual element'' really abbreviates
``possibly virtual element;'' we may use ``strictly virtual element'' otherwise.
\begin{definition} \label{d:virtual}
Virtual dimensions:
\begin{enumerate}
\item
The \emph{virtual Littlestone dimension} of $\mathcal{H}$ is the depth $\ell$ of the largest complete binary tree whose nodes are labeled by virtual elements
and such that every branch is realized by some element of $\mathcal{H}$, \emph{i.e.} for every root-to-leaf path $(a_0, t_0), \dots, (a_{\ell-1}, t_{\ell-1})$,
where the $a_i$'s are virtual elements and the $t_i$'s are $0$ or $1$,
there is $h \in \mathcal{H}$ such that $\mathbf{t}(a_i, h) = t_i$ for $i = 0, \dots, \ell-1$.
\item The \emph{virtual Threshold dimension} of $\mathcal{H}$ is the largest $k$ such that there exist $\bar{a} = \langle a_i : i < k \rangle$,
$\bar{h} = \langle h_i : i < k \rangle$ where $\bar{a}$ is a sequence of distinct virtual elements of $X$ and $\bar{h}$ is a sequence of
distinct elements of $\mathcal{H}$ and $\mathbf{t}(a_i, h_j) = 1$ if and only if $i<j$.
\end{enumerate}
\end{definition}
\begin{rmk} In $\ref{d:virtual}$,
it would also be reasonable to consider a case where the elements of~$\mathcal{H}$ are allowed to be virtual;
note that for the definitions to make sense,
the $h$'s would then need to possess some degree of excellence $($not necessarily faced with any good set, but at least towards those relevant to the present
configuration$)$.
\end{rmk}
\begin{rmk}
Also, in $\ref{d:virtual}$, it may be interesting to consider the case where virtual elements
are required to have some minimal size $($say, $\geq \eta |H|$ for some
$\eta = \eta(\epsilon)$$)$, since such elements are likely to be visible to random sampling;
however, we do not follow this direction here.
\end{rmk}
\begin{definition} \label{d:approx}
Approximate dimensions:
\begin{enumerate}
\item
The \emph{approximate Littlestone dimension} of $\mathcal{H}$ is the depth $\ell$ of the largest complete binary tree whose nodes are labeled by elements
and such that every branch is $\epsilon$-realized by some element of $\mathcal{H}$, meaning that
for every root-to-leaf path $(a_0, t_0), \dots, (a_{\ell-1}, t_{\ell-1})$,
where the $a_i$'s are elements of $X$ and the $t_i$'s are $0$ or $1$, there is $h \in \mathcal{H}$ such that $|\{ i < \ell : h(a_i) \neq t_i \}| < \epsilon \ell$.
\item The \emph{approximate Threshold dimension} of $\mathcal{H}$ is the largest $k$ such that there exist $\bar{a} = \langle a_i : i < k \rangle$,
$\bar{h} = \langle h_j : j < k \rangle$ where $\bar{a}$ is a sequence of distinct elements of $X$ and $\bar{h}$ is a sequence of
distinct elements of $\mathcal{H}$ and at most $\epsilon\cdot k^2$ of the pairs $(i,j)\in [k]\times[k]$ satisfy $h_j(a_i)\neq 1[i<j]$. Equivalently, a uniformly random pair $(i,j)$
satisfies the half-graph relation with probability at least $1-\epsilon$.
\end{enumerate}
\end{definition}
In both cases, we may say ``$\epsilon$-approximate'' or ``$\epsilon$-virtual'' when the specific value of $\epsilon$ is important.
\begin{theorem} \label{ldim-equiv} Let $\epsilon < \frac{1}{2}$ be given.
Then the following are equivalent:
\begin{enumerate}
\item The Littlestone dimension of $\mathcal{H}$ is finite.
\item The Threshold dimension of $\mathcal{H}$ is finite.
\item The virtual Littlestone dimension of $\mathcal{H}$ is finite.
\item The virtual Threshold dimension of $\mathcal{H}$ is finite.
\item The approximate Littlestone dimension of $\mathcal{H}$ is finite.
\end{enumerate}
\end{theorem}
\begin{proof}
Clearly (3), (5) imply (1), and (4) implies (2).
We know (1) $\iff$ (2) by the unstable formula theorem (or ``Hodges' lemma'')
and (3) $\iff$ (4) by an identical proof, working with the tree and graph given by
considering each~$a_i$ to be an element and putting an edge between $a_i$ and $h_j$ if and only if $\mathbf{t}(a_i, h_j) = 1$.
(1) implies (3) is the special case of our theorem above (see Corollary~\ref{c:approxldim}) where each distribution assigns zero measure to every element not in the
given $\epsilon$-good set and is uniform (normalized counting measure) on that set.
(1) implies (5) also follows just as in the proof of Theorem~\ref{thm:regret} (in fact, it is simpler because we don't
need to deal with distributions).
Given a full binary $X$-labeled tree $\mathcal{T}$ of height $T$ which is $\epsilon$-shattered by $\mathcal{H}$ and an online algorithm~$\mathcal{A}$, we can pick a branch on $\mathcal{T}$ which forces $\mathcal{A}$ to err in each step (or to err with probability $\geq 1/2$ if $\mathcal{A}$ is randomized).
By $\epsilon$-shattering we know that there exists $h\in\mathcal{H}$ which errs on at most $\epsilon$-fraction of the nodes in this branch,
and thus this branch witnesses that the regret of $\mathcal{A}$ is at least $(1/2-\epsilon)\cdot T$. This implies an upper bound on $T$ by picking $\mathcal{A}$ to be an online learner which exhibits an optimal regret of $O(\sqrt{d\cdot T})$. (Just like in Corollary~\ref{c:approxldim}).
\end{proof}
We have left approximate Threshold dimension to a separate claim, because its relation to $\epsilon$ is different.
(We could also have given a definition in \ref{d:approx}(2) which would have fit the equivalence, but this seemed perhaps more natural.)
\begin{claim}
The Threshold dimension of $\mathcal{H}$ is finite if and only if for some $\epsilon_* > 0$, for all $\epsilon_* > \epsilon > 0$
the $\epsilon$-approximate Threshold dimension is finite.
\end{claim}
\begin{proof}
If Threshold dimension is infinite, clearly $\epsilon$-approximate Threshold dimension is infinite for any $\epsilon$.
Suppose the $\epsilon$-approximate Threshold dimension is infinite.
Begin with an approximate half-graph of size $n\geq k:=\lfloor \sqrt{1/\epsilon}\rfloor$. We will use the probabilistic method:
pick uniformly at random $k$ distinct indices $\ell_1,\ldots, \ell_k\in [n]$ and consider $\langle a_{\ell_i} \rangle_{i\leq k}$
and $\langle h_{\ell_j} \rangle_{j\leq k}$. Note that for every $i,j\leq k$
the probability that $h_{\ell_j}(a_{\ell_i}) \neq 1[\ell_i< \ell_j]$ is at most $\epsilon$.
(Let us call a pair $i,j$ for which this happens a bad pair.)
Thus, with probability $\leq k^2\cdot \epsilon < 1$ there exists a bad pair;
equivalently, with probability $>0$ no pair is bad and we get a half-graph of size~$k$.
Since our assumption allows $\epsilon \rightarrow 0$, we find that the Threshold dimension is infinite.
\end{proof}
\section{Majorities in Littlestone classes}
So far we have been guided by the thesis that Littlestone classes are characterized by frequent, large sets
with well-defined notions of majority. However, there are at least two candidate notions of majority which are quite distinct:
the majority arising from the counting measure, which we have been exploring via $\epsilon$-excellent and $\epsilon$-good, and the notion of
majority arising from Littlestone rank.
In this section we prove that these two notions of majority ``densely often agree'' in Littlestone classes
and indeed this is true of any simple axiomatic notion of majority, as defined below.
\begin{definition}
Say that $A \subseteq \mathcal{H}$ is \emph{Littlestone-opinionated} if for any $a \in X$, one and only one of
\[ \mathsf{Ldim}(\{ h \in A : h(a) = 0 \}), \mathsf{Ldim}(\{ h \in A : h(a) = 1 \}) \]
is strictly smaller than $\mathsf{Ldim}(A)$.
\end{definition}
As a warm-up, we prove several claims.
As above, a \emph{partition of $H$ by a half-space}
means that for some element
$a \in X$ we separate $H$ into $\{ h \in H : h(a) = 0 \}$ and $\{ h \in H : h(a) = 1 \}$.
\begin{claim} \label{c:42}
Suppose $\mathsf{Ldim}(\mathcal{H}) = d$ and $0 < \epsilon < \frac{1}{2}$. Then for any finite
$H \subseteq \mathcal{H}$ there is $A \subseteq H$ of size $\geq \epsilon^d|H|$ such that $A$ is both $\epsilon$-good and Littlestone-opinionated and
these two notions of majority agree, i.e. for any $a \in X$
\[ \mathsf{Ldim}(\{ h \in A : h(a) = t \}) = \mathsf{Ldim}( A ) \mbox{ iff } | \{ h \in A : h(a) = t \} | \geq (1-\epsilon)|A|.
\]
\end{claim}
\begin{proof}
It suffices to observe that given any finite $H \subseteq \mathcal{H}$ which (a) is not $\epsilon$-good, (b) is $\epsilon$-good but is not Littlestone-opinionated, or
(c) is both $\epsilon$-good and Littlestone-opinionated but the two notions of majority do not always agree, we can find
$G \subseteq H$ (arising from a partition of $H$ by a half-space)
with $|G| \geq \epsilon |H|$ and $\mathsf{Ldim}(G) < \mathsf{Ldim}(H)$, because this initiates a recursion which cannot continue
more than $\mathsf{Ldim}(H) \leq \mathsf{Ldim}(\mathcal{H}) = d$ steps.
Why? In case (a), there is a partition into two pieces of size $\geq \epsilon|H|$; choose the one of smaller $\mathsf{Ldim}$.
In case (b), there is a partition into two pieces each of $\mathsf{Ldim}$ strictly smaller than $\mathsf{Ldim}(H)$; choose the one of larger counting measure.
In case (c), there is a partition where the majorities disagree, and we can choose the piece of larger counting measure and thus smaller
$\mathsf{Ldim}$. This completes the proof.
\end{proof}
\begin{definition}
Call $P$ a \emph{good property} $($or: $\epsilon$-good property$)$ for $\mathcal{H}$
if it is a property of finite subsets of $\mathcal{H}$ which implies $\epsilon$-good and which satisfies: for some constant
$c = c(P) > 0$, for any finite $H \subseteq \mathcal{H}$ there is $B \subseteq H$ of size $\geq \epsilon^c|H|$ with property $P$.
\end{definition}
\begin{corollary}
By $\ref{c:one-half}$ above, if $\mathcal{H}$ is a Littlestone class and $\epsilon < \frac{1}{2}$ then
``$\epsilon$-excellent'' is an $\epsilon$-good property for $\mathcal{H}$.
\end{corollary}
\begin{lemma} \label{lemma:ep}
Let $\mathcal{H}$ be a Littlestone class of dimension $d$ and $0 < \epsilon < \frac{1}{2}$. Let $P$ be a good property for $\mathcal{H}$ and $c = c(P)$.
Then for any finite $H \subseteq \mathcal{H}$ there is $A \subseteq H$ of size $\geq \epsilon^{(c+1)d}|H|$ such that $A$ has property $P$
$($so is also $\epsilon$-good$)$ and is Littlestone-opinionated, and for any $a \in X$,
\[ \mathsf{Ldim}(\{ h \in A : h(a) = t \}) = \mathsf{Ldim}( A ) \mbox{ iff } | \{ h \in A : h(a) = t \} | \geq (1-\epsilon)|A| \]
i.e. the $\epsilon$-good majority and the Littlestone majority agree.
\end{lemma}
\begin{proof}
Modify the recursion in the previous proof as follows. At a given step, if $H$ does not have property $P$,
replace it by a subset $C$ of size $\geq \epsilon^c|H|$ which does.
Since $P$ implies $\epsilon$-good, if we are not finished, then we are necessarily in case (b) or (c)
and at the cost of an additional factor of $\epsilon$ we can find $B \subseteq C$ where the $\mathsf{Ldim}$ drops.
At the end of each such round, we have replaced $H$ by $B \subseteq H$ with $|B| \geq \epsilon^{c+1} |H|$ and $\mathsf{Ldim}(B) < \mathsf{Ldim}(H)$.
\end{proof}
\begin{disc}
\emph{Note that this majority agreement deals with half-spaces, which is arguably the
interesting case for ``Littlestone-opinionated'' as it relates to the SOA.
In \ref{lemma:ep}, it is a priori not asserted that every subset of $A$ which is large in the sense of counting measure (but does not
arise from a half-space) has large $\mathsf{Ldim}$.}
\end{disc}
\begin{definition}[Axiomatic largeness] \label{d:arl}
Define $\mathcal{M}$ to be an axiomatic notion of relative largeness for the class $\mathcal{H}$ if it satisfies the following properties.\footnote{This captures \emph{relative majority} or
\emph{relative largeness} since ``being large'' is a two-place relation.}
\begin{enumerate}
\item $\mathcal{M}$ is a subset of $\mathcal{P} = \{ (B, A) : B \subseteq A \subseteq \mathcal{H} \}$.
\item Define $\mathcal{P}_{half} := \{ (B, A) \in \mathcal{P} : B$ arises as the intersection of $A$ with a half-space $\}$.
\item If $(B, A) \in \mathcal{M}$, we say ``$B$ is a large subset of $A$'' and we may write $B \subseteq_\mathcal{M} A$.
\item The rules are:
\begin{enumerate}
\item {\emph{(monotonicity in the set)} if $C \subseteq B \subseteq A$ and $C \subseteq_\mathcal{M} A$ then $B \subseteq_\mathcal{M} A$.}
\item {\emph{(monotonicity in the superset)} if $C \subseteq B \subseteq A$ and
$C \subseteq_\mathcal{M} A$ then $C \subseteq_\mathcal{M} B$.}
\item \emph{(identity)} $(A, A) \in \mathcal{M}$.
\item \emph{(non-contradiction)} If $(B, A) \in \mathcal{P}_{half}$ and $C = A \setminus B$ [so also $(C, A) \in \mathcal{P}_{half}$]
then at most one of $(B, A)$ and $(C, A)$ belongs to $\mathcal{M}$.
\item \emph{(chain condition)} There is $n = n(\mathcal{M}) < \omega$ such that if $\langle A_i : i < m \rangle$ is a sequence of subsets of $\mathcal{H}$ with $A_{i+1} \subseteq A_i$ and $(A_{i+1}, A_{i}) \notin \mathcal{M}$ for all $i < m-1$, then $m \leq n$. In other words, the length of any descending chain
\[ A_{m-1} \subseteq \cdots \subseteq A_0 \]
of non-large subsets is upper bounded by $n$.
\end{enumerate}
\end{enumerate}
\end{definition}
\begin{expl} \label{e:11a}
Suppose $\mathcal{H}$ is a Littlestone class. Then
\[ \mathcal{M} = \{ (B, A) \in \mathcal{P} : \mathsf{Ldim}(A) = \mathsf{Ldim}(B) \} \]
satisfies Definition $\ref{d:arl}$.
\end{expl}
\begin{proof}
Conditions (4)(a),(b),(c) are immediate; (4)(d) follows by the definition of $\mathsf{Ldim}$.
Condition (4)(e) is clear because if $(A_{i+1}, A_i) \notin \mathcal{M}$ then
$\mathsf{Ldim}(A_{i+1}) < \mathsf{Ldim}(A_i)$, so $n(\mathcal{M}) \leq d$.
\end{proof}
\begin{expl}
A range of other examples are provided by various model theoretic notions of rank $($using multiplicity one$)$.
\end{expl}
In the next result, we don't need to assume a priori that $\mathcal{H}$ is a Littlestone class, though it will follow from the proof that it is.
\begin{lemma} \label{l:majority-b}
Suppose $\mathcal{H}$ admits a notion of relative largeness $\mathcal{M}$. Let $0 < \epsilon < \frac{1}{2}$ and let
$P$ be an $\epsilon$-good property for $\mathcal{H}$. Let $c = c(P)$ and $n = n(\mathcal{M})$. Then for any nonempty finite
$H \subseteq \mathcal{H}$ there is $A \subseteq H$ of size $\geq \epsilon^{(c+1)n}|H|$ such that:
\begin{enumerate}
\item $A$ has property $P$, and thus is $\epsilon$-good, so for any partition of $A$ by a half-space into $B \cup C$, at least one
$($so exactly one$)$ of $B, C$ has size $<\epsilon|A|$.
\item For any partition of $A$ by a half-space into $B \cup C$, at least one $($so exactly one$)$ of $(B, A)$, $(C, A)$ belongs to $\mathcal{M}$.
\item The two notions agree, i.e. $(B, A) \in \mathcal{M}$ if and only if $|B| \geq (1-\epsilon)|A|$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $n = n(\mathcal{M})$ and set $A_0 = H$. By induction on $t \geq 0$ we shall prove that if
$A_t$ does not satisfy conditions 1, 2, and 3 then either it contains a subset of size $\geq \epsilon^c|A_t|$ which does, or
there is $A_{t+1} \subseteq A_t$ such that
$|A_{t+1}| \geq \epsilon^{c+1}|A_t|$ and $(A_{t+1}, A_t) \notin \mathcal{M}$.
Our chain condition \ref{d:arl}(4)(e) will then ensure $t$ is bounded above by $n$.
For each $t \geq 0$ proceed as follows.
If $A_{t}$ has property $P$, define $A^\prime_t = A_t$.
If not, replace $A_t$ by a subset of size $\geq \epsilon^c|A_t|$ which does, and set this to be $A^\prime_{t}$.
A priori, we have no information on whether $(A^\prime_t, A_t) \in \mathcal{M}$.
Since $A^\prime_t$ has property $P$, if $A^\prime_t$ does not already satisfy 1, 2, and 3, then condition 2 or 3 must fail; in either case, there must be some half-space which
partitions $A^\prime_t$ into two non-trivial sets at least one of which, call it $B$, has size at least $\epsilon|A^\prime_t|$ and satisfies
$(B, A^\prime_t) \notin \mathcal{M}$. Set $A_{t+1} = B$. Then $|B| \geq \epsilon^{c+1}|A_t|$ and by condition \ref{d:arl}(4)(b), $(A_{t+1}, A_t) \notin \mathcal{M}$.
This completes the inductive step and the proof.
\end{proof}
\begin{theorem}
The following are equivalent for $(X, \mathcal{H})$.
\begin{enumerate}
\item $\mathcal{H}$ admits a notion of relative largeness $\mathcal{M}$.
\item $\mathcal{H}$ is a Littlestone class.
\item For every $\mathcal{M}$ and $0 < \epsilon < \frac{1}{2}$ there is $n = n(\epsilon, \mathcal{M})$ such that every finite nonempty $H \subseteq \mathcal{H}$
has a subset $A$ which satisfies:
\begin{enumerate}
\item $|A| \geq \epsilon^n|H|$, and
\item $A$ is $\epsilon$-good, and
\item for every partition of $A$ by a half-space into $B \cup C$, $(B, A) \in \mathcal{M}$ if and only if $|B| \geq \epsilon|A|$ if and only if
$|B| \geq (1-\epsilon)|A|$.
\end{enumerate}
i.e. the counting majority and the $\mathcal{M}$-majority are well defined and agree.
\item In item (3) we may replace $(b)$ by ``$A$ has property $P$''
when $P$ is an $\epsilon$-good property for $\mathcal{H}$, at the cost of changing the exponent $n$ in $(a)$ to $(c+1)n$ for $c = c(P)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(2) implies (1) is Example \ref{e:11a}. (1) implies (3) [or (4)] is Lemma \ref{l:majority-b}. Clearly (4) implies (3). For (3) implies (2), note that (3)
tells us a fortiori that we can always find large $\epsilon$-good subsets, so $\mathcal{H}$ must be a Littlestone class by \ref{c:equiv}.
\end{proof}
\begin{rmk}
Although we have formulated these largeness properties for subsets of $\mathcal{H}$, the symmetric results would hold for subsets of $X$.
\end{rmk}
It is interesting to inspect relative largeness from the perspective of online learning.
Indeed, note that any such notion gives rise to an online learning strategy with a bounded mistake bound:
the online learner maintains a version space $\mathcal{H}_i\subseteq \mathcal{H}$, starting with $\mathcal{H}_0=\mathcal{H}$.
For each input example $x_i$ received, the learner predicts $\hat y_i$ such that
\[ (\{h\in \mathcal{H}_i : h(x_i) = \hat y_i\}, \mathcal{H}_i)\in \mathcal{M} \]
and note that there can be at most one such $\hat y_i$; if no such $\hat y_i$ exists, the learner predicts $\hat y_i = 0$.
Then, upon receiving the true label $y_i$,
if $y_i=\hat y_i$ then the learner sets $\mathcal{H}_{i+1}=\mathcal{H}_i$ and else,
when $y_i\neq \hat y_i$, the learner sets $\mathcal{H}_{i+1} = \{h\in \mathcal{H}_i : h(x_i)=y_i\}$.
Observe that given any sequence $(x_1,y_1),\ldots, (x_T,y_T)$, this learner makes at most $n(\mathcal{M})$ mistakes:
indeed, if the learner makes a mistake on $x_i$ then $(\mathcal{H}_{i+1},\mathcal{H}_{i})\notin\mathcal{M}$,
and $\mathcal{H}_{i+1}$ is obtained by intersecting $\mathcal{H}_i$ with a half-space.
This point of view offers an alternative explanation to the fact that only Littlestone classes admit notions of relative largeness.
Moreover, it implies that for every notion of relative largeness $\mathcal{M}$,
we have that $n(\mathcal{M})$ is at least the Littlestone dimension.
This follows because the Littlestone dimension is equal to the optimal mistake bound.
Thus, the notion of relative largeness which arises from the Littlestone dimension is
optimal in the sense that it minimizes $n(\mathcal{M})$.
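As an illustrative aside, the strategy just described can be sketched in a few lines of Python. The sketch below assumes a finite hypothesis class given as a list of $\{0,1\}$-valued functions, and it represents the abstract notion of relative largeness by a user-supplied predicate \texttt{is\_large}; the counting majority used in the example is only a hypothetical stand-in for $\mathcal{M}$, and all function names are ours rather than taken from the text.
\begin{verbatim}
# Version-space learner driven by an abstract "relative largeness" predicate.
# is_large(B, A) is a stand-in for (B, A) belonging to M; the counting
# majority below is only an illustrative choice of such a predicate.

def predict(version_space, x, is_large):
    """Predict the label whose supporters form a relatively large subset."""
    for y in (0, 1):
        agree = [h for h in version_space if h(x) == y]
        if is_large(agree, version_space):
            return y
    return 0  # no relatively large side: default prediction

def update(version_space, x, y):
    """Keep only the hypotheses consistent with the revealed label."""
    return [h for h in version_space if h(x) == y]

if __name__ == "__main__":
    hypotheses = [lambda x, t=t: int(x < t) for t in range(5)]  # thresholds
    counting_majority = lambda B, A: 2 * len(B) >= len(A)
    target, vs, mistakes = hypotheses[2], list(hypotheses), 0
    for x in range(4):
        y_hat, y = predict(vs, x, counting_majority), target(x)
        if y_hat != y:
            mistakes += 1
            vs = update(vs, x, y)   # the version space shrinks only on mistakes
    print("mistakes:", mistakes)
\end{verbatim}
With the counting majority this is essentially the classical Halving algorithm; plugging in an $\mathcal{M}$-majority instead yields the mistake bound $n(\mathcal{M})$ discussed above.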
\section{Discussion and open problems}
To conclude the paper we mention several natural open problems and directions
for further work suggested by the proofs and interactions above.
For VC classes, recall that we have the usual Sauer-Shelah-Perles lemma, and
Haussler's covering lemma which says that every VC class of VC-dimension $d$
can be $\epsilon$-covered by roughly $\frac{1}{\epsilon^d}$ hypotheses~\cite{Haussler95}.\footnote{Informally, there is a list of approximately $\frac{1}{\epsilon^d}$ hypotheses
such that every other hypothesis in our class is $\epsilon$-close to one
of the hypotheses in our list.} (The SSP lemma can be thought of as the special case of Haussler's covering lemma where the domain has size $n$ and $\epsilon = \frac{1}{n}$.)
This is clearly useful for learning.
It is natural to ask whether there is a dynamic version of this
covering lemma for Littlestone classes. That is, is there a function
$f=f(\epsilon, d)$ such that for any Littlestone class $\mathcal{H}$ of $\mathsf{Ldim}$ $d$
we can always find $\leq f(\epsilon, d)$ dynamic sets which approximately
cover the whole class $\mathcal{H}$, meaning that for every sequence $x_1, \dots, x_n$
from $X$ and every $h \in \mathcal{H}$ there is a dynamic set in our list
which is $\epsilon$-close to it. This is also related to \cite{alon21}.
In the classical case, there is a fundamental relationship between the
Littlestone dimension and the half-graph/Threshold dimension, as
explained by Shelah's unstable formula theorem: both are finite together,
and bounds are known \cite{shelah}, \cite{hodges}. Note, however, that tight quantitative bounds
still remain open. To reiterate what we said in section one, a useful
aspect of this relationship is connecting a symmetric or self-dual
quantity with Littlestone dimension (see 1.3 above). In addition to
sorting out the quantitative bounds in the classical case, it may be
interesting to explore related questions for the approximate and
virtual variants of Littlestone dimension directly.
It may also be worthwhile
to explore further the significance of dynamic Sauer-Shelah-Perles lemmas for model theory.
\newpage
| {
"timestamp": "2021-09-06T02:09:52",
"yymm": "2108",
"arxiv_id": "2108.05569",
"language": "en",
"url": "https://arxiv.org/abs/2108.05569",
"abstract": "We use algorithmic methods from online learning to revisit a key idea from the interaction of model theory and combinatorics, the existence of large \"indivisible\" sets, called \"$\\epsilon$-excellent,\" in $k$-edge stable graphs (equivalently, Littlestone classes). These sets arise in the Stable Regularity Lemma, a theorem characterizing the appearance of irregular pairs in Szemerédi's celebrated Regularity Lemma. Translating to the language of probability, we find a quite different existence proof for $\\epsilon$-excellent sets in Littlestone classes, using regret bounds in online learning. This proof applies to any $\\epsilon < {1}/{2}$, compared to $< {1}/{2^{2^k}}$ or so in the original proof. We include a second proof using closure properties and the VC theorem, with other advantages but weaker bounds. As a simple corollary, the Littlestone dimension remains finite under some natural modifications to the definition. A theme in these proofs is the interaction of two abstract notions of majority, arising from measure, and from rank or dimension; we prove that these densely often coincide and that this is characteristic of Littlestone (stable) classes. The last section lists several open problems.",
"subjects": "Discrete Mathematics (cs.DM); Machine Learning (cs.LG); Logic in Computer Science (cs.LO); Combinatorics (math.CO); Logic (math.LO)",
"title": "Agnostic Online Learning and Excellent Sets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540656452213,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.7099046403974144
} |
https://arxiv.org/abs/1806.02399 | Maximum and minimum nullity of a tree degree sequence | The nullity of a graph is the multiplicity of the eigenvalue zero in its adjacency spectrum. In this paper, we give a closed formula for the minimum and maximum nullity among trees with the same degree sequence, using the notion of matching number and annihilation number. Algorithms for constructing such minimum-nullity and maximum-nullity trees are described. | \section{Introduction}
Collatz and Sinogowitz (1957), see \cite{von1957spektren}, first raised the problem of characterizing all singular or nonsingular graphs. This problem is difficult to solve. On one hand, the nullity is related to the rank of the symmetric matrices described by graphs. On the other hand, the nullity has a strong chemical background: a singular bipartite graph indicates the chemical instability of the corresponding molecule. For all these reasons, the nullity has aroused the interest of many mathematicians and chemists. Topics on the nullity of graphs include computing the nullity, the distribution of the nullity, bounds
on the nullity, the characterization of graphs with a given nullity, and so on. The nullity of trees has been studied in many works, for example \cite{fiorini2005trees}, \cite{li2006trees}, \cite{ghorbani2016integral}, and \cite{jaume2018null}.
In \cite{fiorini2005trees}, Fiorini, Gutman and Sciriha determined the greatest nullity among $n$-vertex trees in which no vertex has degree greater than a fixed value \(D\), and they explicitly constructed the corresponding trees. Our work can be seen as an extension of their results. Let $\mathbf{s}:d_1,d_2,\ldots,d_n$ be a degree sequence of length $n$, that is, a sequence with $d_i\leq d_{i+1}$ for $1\leq i \leq n-1$. A degree sequence \(\mathbf{s}\) is a \textbf{tree degree sequence} if \(\sum_{i=1}^{n} d_{i}=2n-2\), see \cite{chartrand2010graphs}. Note that if a graph has a tree degree sequence \(\mathbf{s}\) and is disconnected, then one of its connected components can be a unicyclic graph. Let \(\mathcal{G}_\mathbf{s}\) be the set of all graphs with degree sequence \(\mathbf{s}\). With $\mathcal{T}_\mathbf{s}$ we denote the set of all connected graphs in \(\mathcal{G}_\mathbf{s}\). If \(\mathbf{s}\) is a tree degree sequence, then $T \in \mathcal{T}_\mathbf{s}$ if and only if \(T\) is a tree with degree sequence $\mathbf{s}$.
Finally, some comments about the notation. All graphs in this work are labeled (even when we do not write the labels), finite, undirected, and have neither loops nor multiple edges. Let \(G\) be a graph; $V(G)$ and $E(G)$ denote the set of vertices and the set of edges of $G$, respectively. For \(v \in V(G)\), the neighborhood of \(v\), denoted by \(N(v)\), is the set \(\{u \in V(G) \, : \, u \sim v\}\). The neighborhood of a subset \(S\) of \(V(G)\) is
\[
N(S):= \bigcup_{v \in S} N(v).
\]
With \(\deg(v)\) we denote the degree of a vertex \(v\) of a graph \(G\), that is, \(\deg(v)=|N(v)|\). A vertex \(v\) of a graph \(G\) is a leaf if \(\deg(v)=1\). With \(l(G)\) we denote the set of all the leaves of \(G\), i.e., \(l(G)=\{v \in V(G) : \deg v =1\}\). If the graph \(G\) is clear from the context, then we just write \(l\). The vertices of \(G\) that are not leaves are called \textbf{internal} vertices of \(G\). Thus, if \(v\) is an internal vertex of \(G\), then \(\deg v >1\). For \(u,v \in V(G)\), with \(G+uv\) we denote the graph obtained by adding the edge \(uv\) to \(E(G)\). Let $G_1$ and $G_2$ be two graphs; with $G_1\cup G_2$ we denote their union, where $V(G_1\cup G_2)=V(G_1)\cup V(G_2)$ and $E(G_1\cup G_2)=E(G_1)\cup E(G_2)$.
Regarding algebraic notions, the only concept that is relevant to this work is the nullity of a graph, that is, the multiplicity of eigenvalue zero in the spectrum of the adjacency matrix of a graph. For all graph-theoretic and algebra-theoretic notions not defined here, the reader is referred to~\cite{chartrand2010graphs} and \cite{meyer2000matrix}, respectively.
The following two well-known results are crucial to this work.
\begin{theorem}[\cite{D1972} and \cite{bevis1995ranks}]\label{bevis}
For any tree $T$, $\rank{T}=2\nu(T)$.
\end{theorem}
\begin{theorem}[K\"{o}nig-Egerv\'ary] \label{konig}
In any bipartite graph $G$, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover: $\nu(G)=\tau(G)$.
\end{theorem}
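For readers who wish to experiment, the following small Python sketch illustrates Theorem~\ref{bevis} numerically on a toy tree. It assumes the tree is given as an edge list on the vertices $0,\dots,n-1$; the greedy leaf-matching routine is a standard procedure that is optimal on trees (it is not taken from this paper), and the rank of the adjacency matrix is computed with \texttt{numpy}.
\begin{verbatim}
# Sanity check of rank(T) = 2*nu(T) for a small tree given as an edge list.
import numpy as np

def tree_matching_number(n, edges):
    """Maximum matching in a tree: repeatedly match a leaf to its neighbour."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    matched, removed = 0, set()
    leaves = [v for v in range(n) if len(adj[v]) == 1]
    while leaves:
        v = leaves.pop()
        if v in removed or not adj[v]:
            continue
        u = next(iter(adj[v]))        # the unique neighbour of the leaf v
        matched += 1
        removed.update((u, v))
        for w in list(adj[u]):        # delete u and record new leaves
            adj[w].discard(u)
            if len(adj[w]) == 1 and w not in removed:
                leaves.append(w)
        adj[u].clear()
        adj[v].clear()
    return matched

# Example: a small caterpillar on 7 vertices.
n = 7
edges = [(0, 1), (1, 2), (1, 3), (3, 4), (4, 5), (4, 6)]
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1
nu = tree_matching_number(n, edges)
rank = np.linalg.matrix_rank(A)
print(nu, rank, rank == 2 * nu)       # prints 2 4 True; the nullity is n - 2*nu
\end{verbatim}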
The remainder of this work is organized as follows. In Section \ref{minimum nullity} we give an algorithm to construct a tree, named $\mna{\mathbf{s}}$, from a given tree degree sequence $\mathbf{s}$. We prove, using the notion of matching number, that $\mna{\mathbf{s}}$ has minimum nullity among all trees with $\mathbf{s}$ as its degree sequence. In Section \ref{maximum nullity} we give an algorithm to construct a tree, named $\MNA{\mathbf{s}}$, from a given tree degree sequence $\mathbf{s}$. We prove, using the notion of matching number and annihilation number, that $\MNA{\mathbf{s}}$ has maximum nullity among all trees with $\mathbf{s}$ as its degree sequence. We also give a characterization of sequences with equal minimum and maximum nullity.
\section{Minimum Nullity}\label{minimum nullity}
The nullity of a graph is the nullity of its adjacency matrix. Let \(\mathbf{s}\) be a tree degree sequence of length $n$. The goal of this section is to prove that the minimum nullity among all the trees in \(\mathcal{T}_{\mathbf{s}}\) is $2l-n$ if $n-l \leq \lfloor \frac{n}{2} \rfloor$, and either $0$ or $1$ if $n-l > \lfloor \frac{n}{2} \rfloor$, where \(l\) is the number of vertices of degree 1 in \(\mathbf{s}\); we usually say that \(\mathbf{s}\) has \(l\) leaves instead of saying that it has \(l\) 1's.
\begin{proposition}\label{matchbound}
Let $T$ be a tree of order $n> 2$ and $l$ be the number of leaves of $T$. If $M$ is a matching in $T$, then $|M|\leq \min\{n-l,\lfloor \frac{n}{2}\rfloor\}$.
\end{proposition}
\begin{proof}
Suppose first that $\min\{n-l,\lfloor \frac{n}{2}\rfloor\}=\lfloor \frac{n}{2}\rfloor$, and assume that $|M|>\lfloor \frac{n}{2}\rfloor$. Thus $\nu (T)>\lfloor \frac{n}{2}\rfloor$ and, by Theorem \ref{bevis}, $\rank{T}>2\lfloor \frac{n}{2}\rfloor$. If $n$ is even, then $\rank{T}>n$, which is absurd. If $n$ is odd, then $\rank{T}=n$. But this is impossible because, by Theorem \ref{bevis}, the rank of a tree is always even. Therefore, $|M|\leq \lfloor \frac{n}{2}\rfloor$.
Suppose now that $n-l<\lfloor \frac{n}{2}\rfloor$, and assume that $|M|>n-l$. Since $n-l=|\{v\in V(T) : \deg{(v)}\geq 2 \}|$ and $M$ is a matching, by the pigeonhole principle there exists an edge $vu\in M$ such that $v$ and $u$ are both leaves, which is impossible because $n>2$. Hence, $|M|\leq n-l$.
\end{proof}
The above proposition holds for tree degree sequences.
\begin{corollary}
Let $\mathbf{s}$ be a tree degree sequence of length $n$ and $l$ be the number of 1's in $\mathbf{s}$. If $M$ is a matching in $T\in \mathcal{T}_{\mathbf{s}}$, then $|M|\leq \min\{n-l,\lfloor \frac{n}{2}\rfloor\}$.
\end{corollary}
The nullity of any tree is strongly associated with the maximum matching structure of the tree, see \cite{jaume2018null}. For a given tree degree sequence, a tree with minimum (maximum) nullity among all trees with that degree sequence is a tree with maximum (minimum) matching number among them.
The next algorithm constructs a tree with the minimum nullity (maximum matching number) among all the trees with a given tree degree sequence $\mathbf{s}$ of length $n$.
\begin{algorithm} \label{minalgo}
Minimum nullity algorithm, $\mna{\mathbf{s}}$
\begin{enumerate}[1)]
\item INPUT. $\mathbf{s}:d_1,d_2,\ldots,d_n$
\item $V=\{v_1,v_2,\ldots,v_n\}$
\item $l=|\{d_i\in \mathbf{s} : d_i=1\}|$
\item $k=n-(d_{n}-1)$
\item $H(v_n)=\{v_1,v_{n-1},v_{n-2},\ldots,v_k\}$
\item $E=\{v_nv, \text{for } v\in H(v_n)\}$
\item $i=2$
\item IF: $n-l\leq \lfloor \frac{n}{2} \rfloor$:
\begin{enumerate}[8.1)]
\item WHILE: $i\leq n-l$:
\begin{enumerate}[8.1.1)]
\item $H(v_{n-i+1})=\{v_i,v_{k-1},v_{k-2},\ldots,v_{k-(d_{n-i+1}-2)}\}$
\item $E=E\cup \{v_{n-i+1}v, \text{for } v\in H(v_{n-i+1})\}$
\item $k=k-(d_{n-i+1}-2)$
\item $i= i+1$
\end{enumerate}
\item RETURN: $G(V,E)$
\end{enumerate}
\item WHILE: $i\leq l$
\begin{enumerate}
\item $H(v_{n-i+1})=\{v_i,v_{k-1},v_{k-2},\ldots,v_{k-(d_{n-i+1}-2)}\}$
\item $E=E\cup \{v_{n-i+1}v,\text{for }v\in H(v_{n-i+1})\}$
\item $k=k-(d_{n-i+1}-2)$
\item $i=i+1$
\end{enumerate}
\item $j=1$
\item WHILE: $j<n-2l$
\begin{enumerate}
\item $A(v_{l+j})=\{v_{l+j+1}\}$
\item $E=E\cup \{v_{l+j}v,\text{for }v\in A(v_{l+j})\}$
\item $j=j+1$
\end{enumerate}
\item $E=E\cup \{v_1v_{n-l-1}\}$
\item RETURN: $T(V,E)$
\end{enumerate}
\end{algorithm}
Clearly the return of Algorithm \ref{minalgo} is a tree. From now on, $\mna{\mathbf{s}}$ denotes the tree $T(V,E)$ obtained by Algorithm \ref{minalgo} from a tree degree sequence $\mathbf{s}$. Figure \ref{fig minalgo} shows two examples: in (a) the sequence satisfies $n-l\leq \lfloor \frac{n}{2} \rfloor$, and in (b) it satisfies $n-l> \lfloor \frac{n}{2} \rfloor$.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[thick,scale=0.2]%
\draw
(12,15) node[]{$v_{11}$}
(24,10) node[]{$v_1$} -- (12,15)
(0,10) node{$v_{10}$} -- (12,15)
(8,10) node{$v_9$} -- (12,15)
(16,10) node{$v_8$} -- (12,15)
(-2,5) node{$v_7$} -- (0,10)
(2,5) node{$v_2$} -- (0,10)
(6,5) node{$v_6$} -- (8,10)
(10,5) node{$v_3$} -- (8,10)
(16,5) node{$v_4$} -- (16,10)
(-2,0) node{$v_5$} -- (-2,5);
\end{tikzpicture}
\caption{$\mna{1,1,1,1,1,1,2,2,3,3,4}$}
\end{subfigure}
\hspace{2cm}
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[thick,scale=0.2]%
\draw
(8,15) node{$v_{11}$}
(0,10) node{$v_{10}$} -- (8,15)
(8,10) node{$v_9$} -- (8,15)
(16,10) node{$v_1$} -- (8,15)
(-2,5) node{$v_8$} -- (0,10)
(2,5) node{$v_2$} -- (0,10)
(8,5) node{$v_3$} -- (8,10)
(-2,0) node{$v_4$} -- (-2,5)
(16,5) node{$v_7$} -- (16,10)
(16,0) node{$v_6$} -- (16,5)
(16,-5) node{$v_5$} -- (16,0);
\end{tikzpicture}
\caption{$\mna{1,1,1,1,2,2,2,2,2,3,3}$}
\end{subfigure}
\caption{Example of the two cases of $\mna{\mathbf{s}}$}
\label{fig minalgo}
\end{figure}
\begin{remark} \label{rem1}
On one hand, when $n-l\leq \lfloor \frac{n}{2} \rfloor$, the vertices $v_i\in V(\mna{\mathbf{s}})$ with $1\leq i\leq n-l$, are all leaves. On the other hand, when $n-l> \lfloor \frac{n}{2} \rfloor$ the vertices $v_i\in V(\mna{\mathbf{s}})$ with $2\leq i\leq l+1$, are all leaves.
\end{remark}
The objective is to prove that $\nu(\mna{\mathbf{s}})=\min\{n-l,\lfloor\frac{n}{2}\rfloor\}$ for every tree degree sequence \(\mathbf{s}\), where \(n\) is its length and \(l\) is its number of 1's (leaves). Since $\min\{n-l,\lfloor\frac{n}{2}\rfloor\}$ depends on the number of leaves in the sequence, we will split the proof into two cases: when $n-l\leq \lfloor\frac{n}{2}\rfloor$ and when $n-l>\lfloor\frac{n}{2}\rfloor$.
Let \(\mathbf{s}\) be a tree degree sequence. Note that $l\geq \lceil\frac{n}{2}\rceil$ if and only if $n-l\leq \lfloor\frac{n}{2}\rfloor$. Therefore, $l<\lceil\frac{n}{2}\rceil$ if and only if $n-l>\lfloor\frac{n}{2}\rfloor$.
Let $M_s=\{v_iv_j\in E(\mna{\mathbf{s}}):~1\leq i\leq n-l \text{ and } j=n-i+1\}$. Proposition \ref{numin 1} states that $M_s$ is a maximum matching in $\mna{\mathbf{s}}$ if \(l \geq \lceil \frac{n}{2} \rceil\).
\begin{proposition} \label{numin 1}
Let $\mathbf{s}$ be a tree degree sequence of length $n>2$. If $l\geq \lceil \frac{n}{2}\rceil$, then $\nu{(\mna{\mathbf{s}})}=n-l$.
\end{proposition}
\begin{proof}
On one hand, assume that there are two different edges in $M_s$ that are adjacent, say the edges $v_i v_{n-i+1}$ and $v_k v_{n-k+1}$, with $1\leq i\leq n-l$ and $1\leq k\leq n-l$. By Remark \ref{rem1} and the fact that $n-l\leq \lfloor \frac{n}{2}\rfloor$, the vertices $v_i$ and $v_k$ are leaves. Thus, $v_{n-i+1}=v_{n-k+1}$ and $i=k$, which contradicts the fact that $v_i v_{n-i+1}$ and $v_k v_{n-k+1}$ are two different edges. Therefore, $M_s$ is a matching in $\mna{\mathbf{s}}$.
On the other hand, by Proposition \ref{matchbound} and since $l\geq \lceil \frac{n}{2}\rceil$, we have that $\nu(\mna{\mathbf{s}})\leq n-l$. Thus, $M_s$ is a maximum matching in $\mna{\mathbf{s}}$ and $\nu (\mna{\mathbf{s}})=n-l$, because $|M_s|=n-l$.
\end{proof}
Using Proposition \ref{numin 1}, Theorem \ref{bevis}, and Theorem \ref{konig}, we obtain the nullity and the independence number of \(\mna{\mathbf{s}}\), when $l\geq \lceil \frac{n}{2}\rceil$.
\begin{corollary} \label{nullmin 1}
Let $\mathbf{s}$ be a tree degree sequence of length $n>2$. If $l\geq \lceil \frac{n}{2}\rceil$, then $\nulidad{\mna{\mathbf{s}}}=2l-n$.
\end{corollary}
\begin{corollary}
Let $\mathbf{s}$ be a tree degree sequence of length $n>2$. If $l\geq \lceil \frac{n}{2}\rceil$, then $\alpha({\mna{\mathbf{s}}})=l$.
\end{corollary}
When $l< \lceil \frac{n}{2}\rceil$, we have the following results.
\begin{proposition} \label{numin 2}
Let $\mathbf{s}$ be a tree degree sequence of length $n>2$. If $l< \lceil \frac{n}{2}\rceil$, then \(\nu({\mna{\mathbf{s}}})=\left\lfloor\frac{n}{2}\right\rfloor.\)
\end{proposition}
\begin{proof}
Let $S_1=\{v_1,v_2,\cdots,v_l,v_{n-l+1},\cdots,v_n\} \subset V(\mna{\mathbf{s}})$ and $S_2=V(\mna{\mathbf{s}}) \setminus S_1$. Let $T_1$ and $T_2$ be the subgraphs of $\mna{\mathbf{s}}$ induced by $S_1$ and $S_2$, respectively. On one hand, since $\mna{\mathbf{s}}$ is a tree and, by construction, $T_1$ is a connected subgraph of $\mna{\mathbf{s}}$, we have that $T_1$ is a tree of order $n_1=2l$. On the other hand, $T_2$ is, by construction, a path of order $n_2=n-2l$. Therefore, by Proposition \ref{numin 1}, we have that $\nu({T_1})=l$ and $\nu(T_2)=\lfloor \frac{n-2l}{2}\rfloor$ (because \(T_{2}\) is a path).
Let $M_1$ and $M_2$ be maximum matchings in $T_1$ and $T_2$, respectively. Note that $M=M_1 \cup M_2$ is a matching in $\mna{\mathbf{s}}$, since $E(T_1)\cap E(T_2)=\emptyset$. Moreover,
\begin{eqnarray*}
|M|&=&|M_1|+|M_2| \\
&=&l+\left\lfloor \frac{n-2l}{2}\right\rfloor \\
&=&\left\lfloor \frac{n}{2}\right\rfloor.
\end{eqnarray*}
But, since $l<\lceil\frac{n}{2}\rceil$ and $\mna{\mathbf{s}}=T_1\cup T_2 + v_1v_{l+1}$, by Proposition \ref{matchbound}, $M$ is a maximum matching in $\mna{\mathbf{s}}$ and $\nu(\mna{\mathbf{s}})=\left\lfloor \frac{n}{2}\right\rfloor$.
\end{proof}
Using Proposition \ref{numin 2}, Theorem \ref{bevis}, and Theorem \ref{konig}, we obtain the nullity and the independence number of \(\mna{\mathbf{s}}\) when $l < \lceil \frac{n}{2}\rceil$.
\begin{corollary}\label{nullmin 2}
Let $\mathbf{s}$ be a tree degree sequence. If $l< \lceil \frac{n}{2}\rceil$, then \[\nulidad{\mna{\mathbf{s}}}=\left\{
\begin{tabular}{c l}
$1$&\text{if } $n$ \text{ is odd},\\
$0$&\text{if } $n$ \text{ is even}.
\end{tabular}
\right.\]
\end{corollary}
\begin{corollary}
\label{indmin 2}
Let $\mathbf{s}$ be a tree degree sequence. If $l< \lceil \frac{n}{2}\rceil$, then \(\alpha({\mna{\mathbf{s}}})=\left\lceil \frac{n}{2} \right\rceil\).
\end{corollary}
Proposition~\ref{numin 1} and Proposition~\ref{numin 2}, together with their respective corollaries, state that for a given tree degree sequence $\mathbf{s}$ there is always a tree $T\in \mathcal{T}_{\mathbf{s}}$ such that $T$ has maximum matching number, minimum nullity and minimum independence number among the trees of $\mathcal{T}_{\mathbf{s}}$.
\begin{definition}
Let $\mathbf{s}$ be a tree degree sequence. The \textbf{maximum matching number of $\mathcal{T}_{\mathbf{s}}$}, denoted $\nu_M({\mathcal{T}_{\mathbf{s}}})$, is the maximum matching number among all trees in $ \mathcal{T}_{\mathbf{s}}$. The \textbf{minimum nullity of $\mathcal{T}_{\mathbf{s}}$}, denoted $\nulidadm{\mathcal{T}_{\mathbf{s}}}$, is the minimum nullity among all trees in $ \mathcal{T}_{\mathbf{s}}$. The \textbf{minimum independence number of $\mathcal{T}_{\mathbf{s}}$}, denoted $\alpha_m({\mathcal{T}_{\mathbf{s}}})$, is the minimum independence number among all trees in $ \mathcal{T}_{\mathbf{s}}$.
\end{definition}
We collect all the previous results in a theorem.
\begin{theorem}\label{teominimum}
If $\mathbf{s}$ is a tree degree sequence of length $n>2$, then \begin{align*}
\nu_M(\mathcal{T}_{\mathbf{s}}) &=\left\{\begin{tabular}{c l}
$n-l$&\text{if } $l\geq \lceil \frac{n}{2}\rceil$,\\
$\lfloor \frac{n}{2}\rfloor$&\text{if } $l< \lceil \frac{n}{2}\rceil$,
\end{tabular}
\right.\\
\nulidadm{\mathcal{T}_{\mathbf{s}}}&=\left\{
\begin{tabular}{c l}
$2l-n$&\text{if } $l\geq \lceil \frac{n}{2}\rceil$,\\
1&\text{if } $l< \lceil \frac{n}{2}\rceil$ \text{and} $n$ \text{is odd},\\
0&\text{if } $l< \lceil \frac{n}{2}\rceil$ \text{and} $n$ \text{is even,}
\end{tabular}
\right.
\end{align*}
and
\begin{align*}
\alpha_m(\mathcal{T}_{\mathbf{s}}) & =\left\{\begin{tabular}{c l}
$l$&\text{if } $l\geq \lceil \frac{n}{2}\rceil$, \\
$\lceil \frac{n}{2}\rceil$&\text{if } $l< \lceil \frac{n}{2}\rceil$. \phantom{\text{and} $n$ \text{ is even,}}
\end{tabular}
\right.
\end{align*}
\end{theorem}
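The formulas of Theorem~\ref{teominimum} translate directly into code. The short Python sketch below is only a convenience for checking examples (the function name is ours); it takes a tree degree sequence, given as a nondecreasing list of positive integers summing to $2n-2$, and returns the maximum matching number, the minimum nullity, and the minimum independence number of $\mathcal{T}_{\mathbf{s}}$.
\begin{verbatim}
# Closed formulas of Theorem (minimum case), evaluated from a degree sequence.
import math

def minimum_nullity_data(degrees):
    n = len(degrees)
    l = sum(1 for d in degrees if d == 1)   # number of leaves in the sequence
    if l >= math.ceil(n / 2):
        return n - l, 2 * l - n, l          # (nu_M, minimum nullity, alpha_m)
    return n // 2, n % 2, math.ceil(n / 2)  # nullity is 1 if n is odd, 0 if even

# The two sequences of Figure (minalgo); both print (5, 1, 6).
print(minimum_nullity_data([1, 1, 1, 1, 1, 1, 2, 2, 3, 3, 4]))
print(minimum_nullity_data([1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3]))
\end{verbatim}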
\section{Maximum Nullity}\label{maximum nullity}
In this section, given a tree degree sequence \(\mathbf{s}\), we find a closed formula for the maximum nullity among all trees in $\mathcal{T}_{\mathbf{s}}$, see Theorem \ref{teomaximum}. We explicitly find a tree attaining the maximum possible nullity, see Algorithm \ref{maxalgo}. We use the notion of annihilation number, which was introduced by Pepper in \cite{pepper2004binding}. This structural parameter is associated with the independence number for graphs in general. If the graph is bipartite, it is also associated with the matching number.
The next algorithm constructs a tree with the maximum nullity (minimum matching number) among all the trees with a given tree degree sequence $\mathbf{s}$ of length $n$.
\begin{algorithm}
\label{maxalgo}
Maximum nullity algorithm, $\MNA{\mathbf{s}}$
\begin{enumerate}
\item INPUT: $s:d_1,d_2,\ldots,d_n$
\item $V=\{v_1,v_2,\ldots,v_n\}$
\item $l=\sum_{d_i=1}1$
\item $\deg(v_n)=d_n$
\item $H(v_n)=\{v_1,v_2,\ldots,v_{d_n}\}$
\item $E=\{v_{n}v: v\in H(v_{n})\}$
\item $t=d_n+1$, $k=d_{n}$, $i=0$, $h=n$, $V_K=\emptyset$
\item IF $t=n$:
\begin{enumerate}
\item $\omega=i$
\item RETURN $G(V,E)$, $V_K$, $\omega$
\end{enumerate}
\item $i=i+1$
\item WHILE $t<n$:
\begin{enumerate}
\item $V_K=V_K\cup \{v_k\}$
\item $\deg(v_k)=d_{l+i}$
\item $H(v_k)=\{v_{h-1},v_{h-2},\ldots,v_{h-d_{l+i}+1}\}$
\item $E=E\cup \{v_{k}v: v\in H(v_{k})\}$
\item $t=t+d_{l+i}-1$
\item IF $t=n$:
\begin{enumerate}
\item $\omega=i$
\item RETURN $G(V,E)$, $V_K$, $\omega$
\end{enumerate}
\item FOR $j\in ind(H(v_k))=\{f\in [n]:v_f\in H(v_k) \}$:
\begin{enumerate}
\item $\deg(v_j)=d_j$
\item $H(v_j)=\{v_{k+1},v_{k+2},\ldots,v_{k+d_j-1}\}$
\item $E=E\cup \{v_{j}v: v\in H(v_{j})\}$
\item $t=t+d_{j}-1$
\item IF $t=n$:
\begin{enumerate}
\item $\omega=i$
\item RETURN $G(V,E)$, $V_K$, $\omega$
\end{enumerate}
\item $k=k+d_{j}-1$
\end{enumerate}
\item $h=h-d_{l+i}+1$
\item $i=i+1$
\end{enumerate}
\end{enumerate}
\end{algorithm}
From now on we denote by $\MNA{\mathbf{s}}$ the tree obtained from Algorithm \ref{maxalgo}. In Figure \ref{fig maxalgo} we show examples of the possible execution forms of Algorithm \ref{maxalgo}.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[thick,scale=0.3]%
\draw
(0,0) node{$v_{9}$}
(5,0) node{$v_1$} -- (0,0)
(3.535533,3.535533) node{$v_2$} -- (0,0)
(0,5) node{$v_3$} -- (0,0)
(-3.535533,3.535533) node{$v_4$} -- (0,0)
(-5,0) node{$v_5$} -- (0,0)
(-3.535533,-3.535533) node{$v_6$} -- (0,0)
(0,-5) node{$v_7$} -- (0,0)
(3.535533,-3.535533) node{$v_8$} -- (0,0);
\end{tikzpicture}
\caption{$V_K=\emptyset$, $\omega=0$, $\MNA{1,1,1,1,1,1,1,1,8}$}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[thick,scale=0.2]%
\draw
(0,0) node{$v_{15}$}
(-5,0) node[]{$v_2$} -- (0,0)
(0,5) node{$v_{1}$} -- (0,0)
(0,-5) node{$v_3$} -- (0,0)
(5,0) node{$v_4$} -- (0,0)
(10,0) node{$v_{14}$} -- (5,0)
(10,5) node{$v_{6}$} -- (10,0)
(15,0) node{$v_5$} -- (10,0)
(10,-5) node{$v_7$} -- (10,0)
(5,-5) node{$v_{13}$} -- (10,-5)
(10,-10) node{$v_{12}$} -- (10,-5)
(15,-5) node{$v_{11}$} -- (10,-5)
(20,-5) node{$v_8$} -- (15,-5)
(15,-10) node{$v_9$} -- (15,-5)
(20,0) node{$v_{10}$} -- (15,-5);
\end{tikzpicture}
\caption{$V_K=\{v_4,v_7\}$, $\omega=2$, $\MNA{1,1,1,1,1,1,1,1,1,1,2,4,4,4,4}$}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[thick,scale=0.2]%
\draw
(0,0) node{$v_{18}$}
(-5,0) node[]{$v_2$} -- (0,0)
(0,5) node{$v_{1}$} -- (0,0)
(0,-5) node{$v_3$} -- (0,0)
(5,0) node{$v_4$} -- (0,0)
(5+3.535533,3.535533) node{$v_{17}$} -- (5,0)
(5+3.535533,-3.535533) node{$v_{16}$} -- (5,0)
(5+2*3.535533,2*3.535533) node{$v_5$} -- (5+3.535533,3.535533)
(5+3.535533,3.535533+5) node{$v_6$} -- (5+3.535533,3.535533)
(5,2*3.535533) node{$v_7$} -- (5+3.535533,3.535533)
(5+3.535533+5,-3.535533) node{$v_9$} -- (5+3.535533,-3.535533)
(5+3.535533,-3.535533-5) node{$v_8$} -- (5+3.535533,-3.535533)
(10+2*3.535533,-2*3.535533) node{$v_{15}$} -- (10+3.535533,-3.535533)
(10+2*3.535533,0) node{$v_{14}$} -- (10+3.535533,-3.535533)
(15+2*3.535533,-2*3.535533) node{$v_{10}$} -- (10+2*3.535533,-2*3.535533)
(10+2*3.535533,-2*3.535533-5) node{$v_{11}$} -- (10+2*3.535533,-2*3.535533)
(10+2*3.535533,5) node{$v_{12}$} -- (10+2*3.535533,0)
(15+2*3.535533,0) node{$v_{13}$} -- (10+2*3.535533,0)
;
\end{tikzpicture}
\caption{\phantom{ } $V_K=\{v_4,v_9\}$, $\omega=2$, $\MNA{1,1,1,1,1,1,1,1,1,1,1,3,3,3,3,3,4,4}$}
\end{subfigure}
\caption{Examples of the three possible execution forms of Algorithm \ref{maxalgo}}
\label{fig maxalgo}
\end{figure}
\begin{remark} \label{rem i}From Algorithm \ref{maxalgo}, we can deduce the following statements about $\MNA{\mathbf{s}}$:
\begin{itemize}
\item[$\bullet$]$\MNA{\mathbf{s}}$ is a tree of order $n$.
\item[$\bullet$]$\omega$ counts the number of times the WHILE loop runs, and each time it runs a vertex is added to $V_K$. Thus $|V_K|=\omega$.
\item[$\bullet$]Every vertex of $V_K$ is an internal vertex, i.e. they are vertices of degree 2 or more.
\item[$\bullet$]The subindices of the vertices in $V_K$ are increasing and the distance between two consecutive vertices of $V_K$ is 2. Therefore, the distance between two vertices of $V_K$ is always even.
\item[$\bullet$]Every internal vertex that is not a vertex of $V_K$ is adjacent to, at most, two vertices of $V_K$.
\item[$\bullet$]Every internal vertex that is not a vertex of $V_K$ is adjacent to, at least, one leaf.
\item[$\bullet$]The only vertex of $V_K$ that can be a neighbor of a leaf is the vertex with maximum subindex (the last vertex added to $V_K$). We call this vertex $v_{mk}$, and denote by $l_{mk}$ the number of leaves adjacent to $v_{mk}$.
\item[$\bullet$] The distance from a leaf, which is not a neighbor of $v_{mk}$, to some vertex of $V_K$ is even.
\item[$\bullet$] The vertex of $V_K$ with minimum subindex is $v_{l+1}$ (the first vertex added to $V_{K}$).
\item[$\bullet$] Let $P_K$ be the path from $v_{l+1}$ to $v_{mk}$. Clearly $V(P_K)\cap V_K=V_K$.
\end{itemize}
\end{remark}
\begin{lemma} \label{iedges}
If $\mathbf{s}$ is a tree degree sequence of length $n$, then $$n-1-l=-l_{mk}+\sum_{v\in V_K}\deg(v),$$
where \(l_{mk}\) is the number of leaves adjacent to \(v_{mk}\), see Algorithm \ref{maxalgo}.
\end{lemma}
\begin{proof} Given a vertex $v$, let $\deg_{\text{ext}}(v)$ denote the number of leaves adjacent to $v$ and let $\deg_{\text{int}}(v)$ denote the number of internal vertices adjacent to $v$. Thus,
\begin{eqnarray}
-l_{mk}+\sum_{v\in V_K}\deg(v)&=&-l_{mk}+\sum_{j=l+1}^{l+\omega}d_j \nonumber \\
&=&-l_{mk}+d_{l+\omega}+\sum_{j=l+1}^{l+\omega-1}d_j \nonumber \\
&=&-l_{mk}+\deg(v_{mk})+\sum_{j=l+1}^{l+\omega-1}d_j \nonumber \\
&=&-l_{mk}+\deg_{\text{ext}}(v_{mk})+\deg_{\text{int}}(v_{mk})+\sum_{j=l+1}^{l+\omega-1}d_j \nonumber \\
&=&\deg_{\text{int}}(v_{mk})+\sum_{j=l+1}^{l+\omega-1}d_j. \nonumber
\end{eqnarray}
Clearly, by Remark \ref{rem i}, $\deg_{\text{int}}(v_{mk})+\sum_{j=l+1}^{l+\omega-1}d_j$ counts the number of internal edges (edges whose endpoints are both internal) that are incident to some vertex of $V_K$; and by Remark \ref{rem i}, every internal edge is incident to exactly one vertex of $V_K$. Thus, since we are in a tree, \[\deg_{\text{int}}(v_{mk})+\sum_{j=l+1}^{l+\omega-1}d_j=n-l-1. \qedhere
\]
\end{proof}
\begin{definition}[\cite{pepper2004binding}]
For a graph $G$ with vertices $V= \{v_1, v_2, \ldots, v_n\}$, having degrees $d_i =
\deg (v_i)$, with $d_1 \leq d_2 \leq \ldots \leq d_n$, and having $m$ edges, the \textbf{annihilation number}, $a = a(G)$, is
defined to be the largest index such that $\sum_{i=1}^{a}d_i\leq m$.
\end{definition}
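For concreteness, the definition above can be computed with a few lines of Python; the sketch below is only an illustrative helper (the name is ours), with the number of edges $m$ recovered from the degree sum.
\begin{verbatim}
# Annihilation number: largest index a such that the a smallest degrees sum to <= m.
def annihilation_number(degrees):
    m = sum(degrees) // 2          # the degree sum of a graph equals 2m
    total, a = 0, 0
    for d in sorted(degrees):      # d_1 <= d_2 <= ... <= d_n
        if total + d > m:
            break
        total += d
        a += 1
    return a

# For a path on 5 vertices (degrees 1,1,2,2,2) this prints 3.
print(annihilation_number([1, 1, 2, 2, 2]))
\end{verbatim}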
\begin{theorem}[\cite{pepper2004binding}] \label{pepper ind. bound}
For any graph $G$, $\alpha(G)\leq a(G)$.
\end{theorem}
\begin{lemma} \label{nu lower bound}
If $G$ is a bipartite graph of order $n$, then $\nu(G)\geq n-a(G)$.
\end{lemma}
\begin{proof}
It is a direct consequence of Theorem \ref{pepper ind. bound} and Theorem \ref{konig}.
\end{proof}
\begin{theorem}
If $T$ is a tree of order $n$, then $\nulidad{T}\leq 2a(T)-n$.
\end{theorem}
\begin{proof}
It is a direct consequence of Lemma \ref{nu lower bound} and Theorem \ref{bevis}.
\end{proof}
Two graphs that have the same degree sequence have the same annihilation number. Hence, we can define the annihilation number of a degree sequence as $a(\mathbf{s})=a(G)$ for any $G\in \mathcal{G}_{\mathbf{s}}$. Usually, when the tree degree sequence \(\mathbf{s}\) is clear from the context, we just write \(a\) instead of \(a(\mathbf{s})\).
The following result gives a relation between $\omega$ from Algorithm \ref{maxalgo} and the annihilation number of the sequence $\mathbf{s}$.
\begin{lemma} \label{i}
Let $\mathbf{s}$ be a tree degree sequence of length $n$. If $\omega$ is the output from Algorithm \ref{maxalgo} for $\mathbf{s}$, then $a(\mathbf{s})-l(\mathbf{s})\leq \omega \leq a(\mathbf{s})-l(\mathbf{s})+1$.
\end{lemma}
\begin{proof}
From the proof of Lemma \ref{iedges} we have that:
\begin{eqnarray}
\sum_{v\in V_K}\deg(v)&=&\sum_{j=l+1}^{l+\omega}d_j \nonumber \\
&\geq&n-1-l \nonumber \\
&\geq&\sum_{j=l+1}^{a}d_j , \nonumber
\end{eqnarray}
which shows that $\omega \geq a-l$.
Now, as $v_{mk}$ is the only vertex of $V_K$ that can be adjacent to a leaf, we have:
\begin{eqnarray}
\sum_{v\in (V_K-v_{mk})}\deg(v)&=&\sum_{j=l+1}^{l+\omega-1}d_j \nonumber \\
&<&n-1-l \nonumber \\
&<&\sum_{j=l+1}^{a+1}d_j, \nonumber
\end{eqnarray}
which shows that $\omega \leq a-l+1$. Hence \[a-l\leq \omega \leq a-l+1. \qedhere \]
\end{proof}
As a consequence of the proof of Lemma \ref{i}, we have that $a-l=\omega$ if and only if $l_{mk}=0$, and $a-l+1=\omega$ if and only if $l_{mk}>0$.\\
Let $P_K$ be the path defined in Remark \ref{rem i}. Since \(P_K\) has even length, we can choose a maximum matching $M_K$ of $P_K$ such that the edge of $P_K$ incident to $v_{l+1}$ is in $M_K$ and the edge incident to $v_{mk}$ is not in $M_K$. Hence, every vertex of $V_K-v_{mk}$ is saturated by $M_K$ and $|M_K|=\omega-1$. Let $V_J=\{v\in V(\MNA{\mathbf{s}}):v\in N(V_K)-V(P_K)\text{ and } \deg(v)>1\}$. By Remark \ref{rem i}, every vertex of $V_J$ is a neighbor of a leaf. Thus, choosing for each $v\in V_J$ exactly one leaf $u_v\in N(v)$, the set $M_J=\{vu_v : v\in V_J\}$ is a matching in \(\MNA{\mathbf{s}}\) such that \(M_{J} \cap E(P_{K})=\emptyset\).
Note that $M_J \cup M_K$ is a matching in $\MNA{\mathbf{s}}$. If $u\in N(v_{mk})$ such that $\deg(u)=1$, then $M_J \cup M_K \cup uv_{mk}$ is also a matching in $\MNA{\mathbf{s}}$, due to the fact that $v_{mk}$ is unsaturated by $M_K$ and \(M_{J}\). We set
\[
M_{\mathbf{s}}:=\left\{
\begin{array}{ll}
M_J \cup M_K \cup uv_{mk}, & \text{if } l_{mk}>0,\\
M_J \cup M_K & \text{otherwise},
\end{array}
\right.
\]
where \(u \in N(v_{mk})\cap l(\MNA{\mathbf{s}})\). An example of $M_{\mathbf{s}}$ is given in Figure \ref{fig Ms}, using the same trees from Figure \ref{fig maxalgo}.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\begin{tikzpicture}[thick,scale=0.2]%
\draw[decorate, decoration={snake,amplitude=2mm,
segment length=2mm},draw=red]
(5+3.535533,-3.535533) -- (5,0);
\draw[decorate, decoration={snake,amplitude=2mm,
segment length=2mm},draw=blue]
(0,5) -- (0,0)
(6+3.535533,3.535533+5) -- (6+3.535533,3.535533)
(11+2*3.535533,5) -- (11+2*3.535533,0)
(15+2*3.535533,-2*3.535533-1) -- (10+2*3.535533,-2*3.535533-1);
\draw
(0,0) node{$v_{18}$}
(-5,0) node[]{$v_2$} -- (0,0)
(0,5) node{$v_{1}$} -- (0,0)
(0,-5) node{$v_3$} -- (0,0)
(5,0) node{$v_4$} -- (0,0)
(5+3.535533,3.535533) node{$v_{17}$} -- (5,0)
(5+3.535533,-3.535533) node{$v_{16}$} -- (5,0)
(5+2*3.535533,2*3.535533) node{$v_5$} -- (5+3.535533,3.535533)
(5+3.535533,3.535533+5) node{$v_6$} -- (5+3.535533,3.535533)
(5,2*3.535533) node{$v_7$} -- (5+3.535533,3.535533)
(5+3.535533+5,-3.535533) node{$v_9$} -- (5+3.535533,-3.535533)
(5+3.535533,-3.535533-5) node{$v_8$} -- (5+3.535533,-3.535533)
(10+2*3.535533,-2*3.535533) node{$v_{15}$} -- (10+3.535533,-3.535533)
(10+2*3.535533,0) node{$v_{14}$} -- (10+3.535533,-3.535533)
(15+2*3.535533,-2*3.535533) node{$v_{10}$} -- (10+2*3.535533,-2*3.535533)
(10+2*3.535533,-2*3.535533-5) node{$v_{11}$} -- (10+2*3.535533,-2*3.535533)
(10+2*3.535533,5) node{$v_{12}$} -- (10+2*3.535533,0)
(15+2*3.535533,0) node{$v_{13}$} -- (10+2*3.535533,0)
;
\end{tikzpicture}
\caption{Tree with matchings:\\ $M_K=\{v_4v_{16}\}$,\\
$M_J=\{v_{18}v_1,v_{17}v_{6},v_{15}v_{10},v_{14}v_{12}\}$,\\ $M_{\mathbf{s}}=M_K\cup M_J$}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[thick,scale=0.2]%
\draw[decorate, decoration={snake,amplitude=2mm,
segment length=2mm},draw=red]
(10,0) -- (5.5,0);
\draw[decorate, decoration={snake,amplitude=2mm,
segment length=2mm},draw=green]
(10,-5) -- (10,-10);
\draw[decorate, decoration={snake,amplitude=2mm,
segment length=2mm},draw=blue]
(0,0) -- (0,4.5)
(15,-6) -- (19.5,-6);
\draw
(0,0) node{$v_{15}$}
(-5,0) node[]{$v_2$} -- (0,0)
(0,5) node{$v_{1}$} -- (0,0)
(0,-5) node{$v_3$} -- (0,0)
(5,0) node{$v_4$} -- (0,0)
(10,0) node{$v_{14}$} -- (5,0)
(10,5) node{$v_{6}$} -- (10,0)
(15,0) node{$v_5$} -- (10,0)
(10,-5) node{$v_7$} -- (10,0)
(5,-5) node{$v_{13}$} -- (10,-5)
(10,-10) node{$v_{12}$} -- (10,-5)
(15,-5) node{$v_{11}$} -- (10,-5)
(20,-5) node{$v_8$} -- (15,-5)
(15,-10) node{$v_9$} -- (15,-5)
(20,0) node{$v_{10}$} -- (15,-5);
\end{tikzpicture}
\caption{Tree with matchings:\\ $M_K=\{v_4v_{14}\},$\\
$M_J=\{v_{15}v_1,v_{11}v_{8}\},$\\ $M_{\mathbf{s}}=M_K\cup M_J\cup\{v_7v_{12}\}$}
\end{subfigure}
\caption{The two cases of $M_{\mathbf{s}}$}
\label{fig Ms}
\end{figure}
In the next proposition we prove, using Berge's Lemma, that \(M_{\mathbf{s}}\) is a maximum matching of \(\MNA{\mathbf{s}}\).
\begin{lemma}[Berge's Lemma, see \cite{berge1957two}] \label{bergetheo}
$M$ is a maximum matching if and only if there is no augmenting path relative to $M$.
\end{lemma}
\begin{proposition} \label{min nu}
If $\mathbf{s}$ is a tree degree sequence of length $n$, then $M_{\mathbf{s}}$ is a maximum matching of $\MNA{\mathbf{s}}$. Moreover $\nu(\MNA{\mathbf{s}})=n-a(\mathbf{s})$.
\end{proposition}
\begin{proof}
Note that if $v\in V(\MNA{\mathbf{s}})$ is unsaturated by $M_{\mathbf{s}}$, then $v$ is a leaf or $v=v_{mk}$. If $u$ and $v$ are two unsaturated vertices such that $\text{d}(u,v)$ is even, then there is no augmenting path relative to $M_{\mathbf{s}}$ from $u$ to $v$ (the unique $u$--$v$ path in a tree has even length, while an augmenting path must have odd length). In particular, by Remark \ref{rem i}, if $u$ and $v$ are two unsaturated leaves such that $u,v\notin N(v_{mk})$, then the distance from $u$ to $v$ is even, and therefore there is no augmenting path relative to $M_{\mathbf{s}}$ from $u$ to $v$. Hence, there are only two cases left to analyze: the path from a leaf to $v_{mk}$, when $v_{mk}$ is unsaturated, i.e. $a-l=\omega$; and the path between two leaves, one adjacent to $v_{mk}$ and one not, i.e. $a-l+1=\omega$.
Case 1: Since $a-l=\omega$, by Remark \ref{rem i} the distance between \(v_{mk}\) and any leaf of \(\MNA{\mathbf{s}}\) is even. Thus, there is no augmenting path relative to $M_{\mathbf{s}}$ from a leaf to \(v_{mk}\). Therefore, by Berge's Lemma, $M_{\mathbf{s}}$ is a maximum matching of $\MNA{\mathbf{s}}$.
By Remark \ref{rem i}, $M_{\mathbf{s}}$ saturates all neighbors of each vertex in $V_K$, and between two consecutive vertices of $V_K$ there is a saturated vertex. Hence, by Lemma \ref{iedges} we have that:
\begin{eqnarray*}
\nu(\MNA{\mathbf{s}})&=&|M_{\mathbf{s}}| \\
&=&-(\omega-1)+\sum_{v\in V_K}\deg(v) \\
&=&1-\omega+n-1-l \\
&=&n-a.
\end{eqnarray*}
Case 2: $a-l+1=\omega$. Let $v$ and $u$ be two unsaturated leaves such that $v\notin N(v_{mk})$ and $u\in N(v_{mk})$. As $v_{mk}$ is matched to a leaf in $M_{\mathbf{s}}$, the path between \(v\) and \(u\) is not augmenting, because its two final edges are not in \(M_{\mathbf{s}}\).
Consequently, there is no augmenting path relative to $M_{\mathbf{s}}$. Therefore, by Theorem \ref{bergetheo}, $M_{\mathbf{s}}$ is a maximum matching of $\MNA{\mathbf{s}}$.
As $M_{\mathbf{s}}$ saturates all non-leaf vertices that are neighbors of the vertices in $V_K$, as well as one leaf adjacent to $v_{mk}$, and between two consecutive vertices of $V_K$ there is a saturated vertex, by Lemma \ref{iedges} we have that:
\begin{eqnarray*}
\nu(\MNA{\mathbf{s}})&=&|M_{\mathbf{s}}| \\
&=&-(\omega-1)-(l_{mk}-1)+\sum_{v\in V_K}\deg(v) \\
&=&2-\omega+n-1-l \\
&=&n-a.
\end{eqnarray*} \qedhere
\end{proof}
A consequence of Proposition \ref{min nu} and Lemma \ref{nu lower bound} is that, given a tree degree sequence $\mathbf{s}$, there is always a tree $T\in \mathcal{T}_{\mathbf{s}}$ that has minimum matching number among all trees in $\mathcal{T}_{\mathbf{s}}$, and one such tree is $\MNA{\mathbf{s}}$.
\begin{corollary}
If $\mathbf{s}$ is a tree degree sequence of length $n$, then $$\nulidad{\MNA{\mathbf{s}}}=2a-n.$$
\end{corollary}
\begin{proof}
A direct implication of Proposition \ref{min nu} and Theorem \ref{bevis}.
\end{proof}
\begin{corollary}
If $\mathbf{s}$ is a tree degree sequence of length $n$, then $$\alpha(\MNA{\mathbf{s}})=a.$$
\end{corollary}
Proposition~\ref{min nu}, together with its corollaries, states that $\MNA{\mathbf{s}}$ has minimum matching number, maximum nullity and maximum independence number among all the trees of $\mathcal{T}_{\mathbf{s}}$.
Theorem~\ref{teomaximum} is the main result of this section, and a direct consequence of Proposition \ref{min nu} and its corollaries. But, in order to state it properly, some definitions are necessary.
\begin{definition}
Let $\mathbf{s}$ be a tree degree sequence. The \textbf{minimum matching number of $\mathbf{\mathcal{T}_{\mathbf{s}}}$}, denoted $\nu_{\text{m}}(\mathcal{T}_{\mathbf{s}})$, is the minimum matching number among all trees $T$ in $\mathcal{T}_{\mathbf{s}}$. The \textbf{maximum nullity of $\mathbf{\mathcal{T}_{\mathbf{s}}}$}, denoted $\nulidadM{\mathcal{T}_{\mathbf{s}}}$, is the maximum nullity among all trees $T$ in $\mathcal{T}_{\mathbf{s}}$. The \textbf{maximum independence number of $\mathbf{\mathcal{T}_{\mathbf{s}}}$}, denoted $\alpha_{\text{M}}(\mathcal{T}_{\mathbf{s}})$, is the maximum independence number among all trees $T$ in $\mathcal{T}_{\mathbf{s}}$.
\end{definition}
\begin{theorem}\label{teomaximum}
If $\mathbf{s}$ is a tree degree sequence of length $n$, then
\begin{align*}
\nu_{\text{m}}(\mathcal{T}_{\mathbf{s}})&=n-a(\mathbf{s}),\\
\nulidadM{\mathcal{T}_{\mathbf{s}}}&=2a(\mathbf{s})-n,
\end{align*}
and
\begin{align*}
\alpha_{\text{M}}(\mathcal{T}_{\mathbf{s}})&=a(\mathbf{s}).\phantom{-n}
\end{align*}
\end{theorem}
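As with Theorem~\ref{teominimum}, the quantities in Theorem~\ref{teomaximum} can be checked mechanically. The self-contained Python sketch below (names ours, purely illustrative) recomputes the annihilation number from the degree sequence and returns the minimum matching number, the maximum nullity, and the maximum independence number of $\mathcal{T}_{\mathbf{s}}$.
\begin{verbatim}
# Closed formulas of Theorem (maximum case), evaluated from a degree sequence.
def maximum_nullity_data(degrees):
    n, m = len(degrees), sum(degrees) // 2     # a tree degree sequence has m = n-1
    total, a = 0, 0
    for d in sorted(degrees):                  # annihilation number a(s)
        if total + d > m:
            break
        total, a = total + d, a + 1
    return n - a, 2 * a - n, a                 # (nu_m, maximum nullity, alpha_M)

# Path on 5 vertices (degrees 1,1,2,2,2): prints (2, 1, 3).  The path is the only
# tree with this degree sequence, so minimum and maximum nullity coincide (both 1).
print(maximum_nullity_data([1, 1, 2, 2, 2]))
\end{verbatim}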
Finally, the following results give a characterization of the tree degree sequences with equal maximum and minimum matching number (equivalently, nullity and independence number).
\begin{theorem}
Let $\mathbf{s}$ be a tree degree sequence of length $n>2$. Then $\nu_{\text{m}}(\mathcal{T}_{\mathbf{s}})=\nu_{\text{M}}(\mathcal{T}_{\mathbf{s}})$ if and only if $a(\mathbf{s})=l$ or $a(\mathbf{s})=\lceil\frac{n}{2}\rceil$.
\end{theorem}
\begin{proof} Since $\nu_{\text{M}}(\mathcal{T}_{\mathbf{s}})$ depends on the number of leaves of $\mathbf{s}$ and $\nu_{\text{m}}(\mathcal{T}_{\mathbf{s}})$ is always equal to $n-a$, it is easy to see that, when $l\geq \lceil\frac{n}{2}\rceil$,
\[\nu_{\text{m}}(\mathcal{T}_{\mathbf{s}})=\nu_{\text{M}}(\mathcal{T}_{\mathbf{s}})\iff a=l, \]
and, when $l< \lceil\frac{n}{2}\rceil$,
\[\nu_{\text{m}}(\mathcal{T}_{\mathbf{s}})=\nu_{\text{M}}(\mathcal{T}_{\mathbf{s}})\iff a=\left\lceil\frac{n}{2}\right\rceil.\]
\end{proof}
\begin{corollary}
Let $\mathbf{s}$ be a tree degree sequence of length $n>2$. Then $\nulidadm{\mathcal{T}_{\mathbf{s}}}=\nulidadM{\mathcal{T}_{\mathbf{s}}}$ if and only if $a(\mathbf{s})=l$ or $a(\mathbf{s})=\lceil\frac{n}{2}\rceil$.
\end{corollary}
\begin{corollary}
Let $\mathbf{s}$ be a tree degree sequence of length $n>2$. Then $\alpha_{\text{m}}(\mathcal{T}_{\mathbf{s}})=\alpha_{\text{M}}(\mathcal{T}_{\mathbf{s}})$ if and only if $a(\mathbf{s})=l$ or $a(\mathbf{s})=\lceil\frac{n}{2}\rceil$.
\end{corollary}
\begin{conjecture}
If $\mathbf{s}$ is a tree degree sequence of length $n>2$, then for every positive integer $k$ with $\nu_m(\mathcal{T}_{\mathbf{s}})\leq k \leq \nu_M(\mathcal{T}_{\mathbf{s}})$, there exists a tree $T\in \mathcal{T}_{\mathbf{s}}$ such that $\nu(T)=k$.
\end{conjecture}
\section*{Acknowledgement}
The authors would like to thank Vilmar Trevisan, Emilio Allem, Rodrigo Orsini Braga and Maikon Machado Toledo of Universidade Federal do Rio Grande do Sul for a wonderful time during our working visit.
\section*{}
\textbf{Funding}: This work was partially supported by the Universidad Nacional de San Luis, grant PROIPRO 03-2216, and MATH AmSud, grant 18MATH-01. Dr. Daniel A. Jaume is funded by ``Programa de Becas de Integraci\'{o}n Regional para argentinos'', grant 2075/2017.
\section*{References}
\bibliographystyle{plain}
| {
"timestamp": "2018-06-08T02:02:30",
"yymm": "1806",
"arxiv_id": "1806.02399",
"language": "en",
"url": "https://arxiv.org/abs/1806.02399",
"abstract": "The nullity of a graph is the multiplicity of the eigenvalue zero in its adjacency spectrum. In this paper, we give a closed formula for the minimum and maximum nullity among trees with the same degree sequence, using the notion of matching number and annihilation number. Algorithms for constructing such minimum-nullity and maximum-nullity trees are described.",
"subjects": "Combinatorics (math.CO)",
"title": "Maximum and minimum nullity of a tree degree sequence",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969689263264,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7099044418315325
} |
https://arxiv.org/abs/1207.0840 | On Rainbow Cycles and Paths | In a properly edge colored graph, a subgraph using every color at most once is called rainbow. In this thesis, we study rainbow cycles and paths in proper edge colorings of complete graphs, and we prove that in every proper edge coloring of K_n, there is a rainbow path on (3/4-o(1))n vertices, improving on the previously best bound of (2n+1)/3 from Gyarfas and Mhalla. Similarly, a k-rainbow path in a proper edge coloring of K_n is a path using no color more than k times. We prove that in every proper edge coloring of K_n, there is a k-rainbow path on (1-2/(k+1)!)n vertices. | \chapter{Introduction}
\section{Rainbow cycles and paths}
Consider an edge colored graph $G$.
A subgraph of $G$ is called \emph{rainbow} (or \emph{heterochromatic})
if no two of its edges receive the same color. We are concerned with
rainbow paths and, to a lesser extent, cycles in proper edge colorings of the complete
graph $K_n$.
Hahn conjectured that every proper edge coloring of $K_n$ admits a
Hamiltonian rainbow path (a rainbow path visiting every vertex of $K_n$) (cf.~\cite{Maamoun1984}).
Maamoun and Meyniel~\cite{Maamoun1984} disproved this conjecture
by constructing counterexamples for the case where $n$ is a power of two, as follows.
Let $n=2^m$. Then we can identify the vertices of $K_n$ with distinct elements of the group
$(\mathbold{Z}/2\mathbold{Z})^m$, and color every edge $\{a,b\}$ of $K_n$
with the sum of the group elements corresponding to $a$ and $b$.
This is a proper edge coloring, because for two edges $\{a,b\}$ and $\{a,c\}$,
the group property implies $a+b\neq a+c$. Maamoun and Meyniel proved that
this coloring admits no Hamiltonian rainbow paths.
The reader is invited
to check this fact for the case of $K_4$.
\begin{center}
\begin{pspicture}(-4,-2.5)(4,3.5)
\psset{fillstyle=solid,fillcolor=black}
\cnode(0,0){.08}{a} \nput{30}{a}{$00$}
\cnode(0,3){.08}{b} \nput{0}{b}{$01$}
\cnode(-2.5981,-1.5){.08}{c} \nput{180}{c}{$11$}
\cnode(2.5981,-1.5){.08}{d} \nput{0}{d}{$10$}
\ncline{a}{b}\naput{$01$}
\ncline{a}{c}\naput{$11$}
\ncline{a}{d}\nbput{$10$}
\ncline{b}{c}\nbput{$10$}
\ncline{b}{d}\naput{$11$}
\ncline{c}{d}\nbput{$01$}
\end{pspicture}
\end{center}
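The reader in a hurry can also let a computer do the checking: the short Python sketch below, a brute force over all vertex orderings and our own illustration rather than part of the original argument, verifies that the coloring of $K_4$ shown above admits no Hamiltonian rainbow path.
\begin{verbatim}
# Brute-force check of the Maamoun-Meyniel colouring for small powers of two,
# with vertices 0..n-1 and the colour of {a,b} taken to be a XOR b.
from itertools import permutations

def has_hamiltonian_rainbow_path(n, colour):
    for perm in permutations(range(n)):
        colours = [colour(perm[i], perm[i + 1]) for i in range(n - 1)]
        if len(set(colours)) == n - 1:      # all n-1 edge colours distinct
            return True
    return False

xor_colour = lambda a, b: a ^ b
print(has_hamiltonian_rainbow_path(4, xor_colour))   # False, as in the figure
\end{verbatim}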
Conversely, it is widely believed that in every proper edge coloring of $K_n$, there is a
rainbow path on $n-1$ vertices (see for example~\cite{GyarfasMhalla2010}).
Still, this is far from proved, and to date, the best general lower bound on the number of vertices
in a maximum rainbow path in (a properly edge colored) $K_n$ is $(2n+1)/3$,
as proved by Gyárfás and Mhalla in~\cite{GyarfasMhalla2010}.
The main result of this thesis improves this bound to $(3/4-o(1))n$.
\begin{thm}
\label{thm:intro}
In every proper edge coloring of $K_n$, there is a rainbow path of length
\[\left(\frac{3}{4}-o(1)\right)n\text. \]
\end{thm}
Several theorems and conjectures on rainbow cycles can be found in a paper by Akbari, Etesami, Mahini
and Mahmoody~\cite{Akbari2007}.
Most importantly (for our purposes),
it is proved that in every proper edge coloring of $K_n$, there is a rainbow cycle of length at least
$n/2-1$. This result was later improved on by Gyárfás, Ruszinkó, Sarközy and Schelp in~\cite{Gyarfas2011},
where a bound of $(4/7-o(1))n$ is given.
A related topic is that of colorful Hamiltonian cycles in proper edge colorings of $K_n$. In~\cite{Akbari2007},
it is conjectured that every proper edge coloring of $K_n$ contains a Hamiltonian cycle using
at least $n-2$ colors, and it is proved that there is always one with at least $(2/3-o(1))n$
colors. The construction used in our proof of Theorem \ref{thm:intro} can be used to show
that there are Hamiltonian cycles using at least $(3/4-o(1))n$ colors.
In~\cite{Hahn1986}, Hahn and Thomassen studied
rainbow cycles and paths in \emph{$k$-bounded edge colorings} of $K_n$, that is,
(not necessarily proper) edge colorings that use every color at most $k$ times.
It is shown that for fixed $k$ and large enough $n$, every such coloring contains
a Hamiltonian rainbow path; and the authors conjecture that there are
Hamiltonian rainbow paths even if $k = a n$ for some suitably small constant factor $a$.
\section{Paths with repeated colors}
Generalizing the notion of a rainbow path, we consider paths in $K_n$
that use every color at most a constant number of times. We call such paths \emph{$k$-rainbow paths},
where $k$ is the number of times a color may appear on the path.
We will prove that in every proper edge coloring of $K_n$ and for every integer $k>0$, there are $k$-rainbow
paths on at least $n-O(n/k!)$ vertices. As far as we know, there are no previous results in this direction.
\section{Note on Latin squares}
We now give some motivation for the study of rainbow cycles and paths
by relating it to problems whose nature is not inherently graph-theoretic.
Latin squares have been a popular topic in combinatorics at least since
the times of Euler, who studied them extensively.
An array of $n$ rows and $n$ columns
is called a \emph{Latin square of order $n$} if every number in $[n] = \{1,\dotsc,n\}$
appears exactly once in each of its rows and columns.
A (complete) \emph{transversal} of a Latin square is a
selection of $n$ cells of the square, choosing exactly one from each row and column, such that every
number in $[n]$ is contained in exactly one cell of the selection.
Similarly, a \emph{partial transversal} of a Latin square is
a maximal selection of cells, each cell again being from a different row and column,
such that no two chosen cells contain the same symbol.
As shown by Maillet (1894), there are many Latin squares which do not have complete transversals.
However, a famous conjecture of Ryser (1967) states that every Latin square of odd order has
a transversal and, moreover, Brualdi conjectured that every Latin square of
order $n$ admits a partial transversal of size at least $n-1$.
Proofs for both conjectures seem out of reach, even though it is known
that in every Latin square of order $n$, there are partial transversals of size $n-o(n)$.
All this and more can be found in~\cite{Denes}.
How do Latin squares relate to rainbow cycles, or graph theory in general?
Consider any Latin square $A$, and let $R$ and $C$ denote the sets of its rows and columns respectively.
Then $A$ defines a proper edge coloring of the complete bipartite graph with partite sets
$C$ and $R$, as follows: for $c\in C$ and $r\in R$, the edge $\{c,r\}$ is
colored with the number contained in the cell determined
by the row $r$ and the column $c$. Then a transversal of $A$ corresponds to a perfect bipartite matching in this graph,
in which no two edges use the same color: a rainbow perfect matching.
Rainbow matchings are studied for example in~\cite{Wang2008}.
Conversely, every proper edge coloring of $K_n$ with
$n-1$ colors can be used to construct a Latin square $A$ of order $n$, in the following way.
Let $\{v_1,\dotsc,v_n\}$ denote the vertex set of $K_n$. Then for every $i\in [n]$ and $j\neq i$, let $A_{i,i}=n$
and let $A_{i,j}$ be the color of the edge $\{v_i,v_j\}$.
A complete transversal of this Latin square corresponds to
a 2-regular rainbow subgraph (i.e., a subgraph consisting of vertex-disjoint cycles)
that covers all but at most one vertex of $K_n$. This can be seen as follows. For every vertex $v_i$,
either $A_{i,i}$ is in the transversal, or two cells $A_{i,j}$ and $A_{k,i}$ (with $i\neq j$ and $i \neq k$) are in the transversal.
In the former case, $v_i$ does not belong to the subgraph. In the latter case, the two edges
$\{v_i,v_j\}$ and $\{v_i,v_k\}$ are included in the subgraph. Hence, all included vertices
have degree two.
By the defining property of
a transversal, all selected edges have different colors, so the subgraph is really rainbow,
and there are no cycles of length two. Moreover, for every vertex $v_i$, we have $A_{i,i}=n$,
so at most one vertex does not belong to the subgraph.
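To experiment with these correspondences on small examples, one can search for complete transversals by brute force; the Python sketch below is purely illustrative (the function name is ours) and treats a transversal as a permutation $\sigma$ with all entries $A_{i,\sigma(i)}$ distinct.
\begin{verbatim}
# Brute-force search for complete transversals of a small Latin square.
from itertools import permutations

def complete_transversals(A):
    """All complete transversals of a Latin square A, as column permutations."""
    n = len(A)
    return [sigma for sigma in permutations(range(n))
            if len({A[i][sigma[i]] for i in range(n)}) == n]

cyclic3 = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]     # cyclic Latin square of order 3
print(len(complete_transversals(cyclic3)))      # 3 transversals
print(complete_transversals([[1, 2], [2, 1]]))  # []: the order-2 square has none
\end{verbatim}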
\section{Thesis structure}
In Chapter 2, some results on rainbow paths are proved; in particular, we prove Theorem
\ref{thm:intro}.
The chapter also contains an informal overview over the ideas used in the proof.
Chapter 3 takes a look at $k$-rainbow paths.
It is proved that every proper edge coloring of $K_n$ contains a $k$-rainbow path
on at least $(1-2/(k+2)!)n$ vertices.
The final chapter is a short conclusion.
\section{Notation}
In this section, we define the notation used throughout the thesis and briefly introduce
the basic graph-theoretic notions.
\subsection{Sets}
We write $\mathbold{N}$ for the set $\{1,2,3,\dotsc\}$ of natural numbers. If $n$ is a natural number, then
we write $[n]$ for the set $\{1,2,\dotsc,n\}$.
We use capital letters for sets.
If $A$ is a set, then $\abs{A}$ is the cardinality of $A$
and $\binom{A}{k}$ is the set of all $k$-element subsets of $A$.
We write $A^c$ for the complement of $A$ (relative to some universe).
\subsection{Graphs}
For graphs, we follow the notation from~\cite{Diestel}, although we will restate the most important
definitions here.
A \emph{graph} is a tuple $(V,E)$, where $V$ is the finite set of \emph{vertices}
and $E \subseteq \binom{V}{2}$ is the set of \emph{edges}.
The \emph{endpoints} of an edge are its elements, and the edge is \emph{incident}
to them and only them.
Two edges are \emph{coincident} if they intersect,
and two vertices $u$ and $v$ are \emph{adjacent} if $\{u,v\}\in E$.
If $G$ is a graph, then we write $V(G)$ for its vertex set and $E(G)$ for its edge set.
A graph $H=(V',E')$ is a \emph{subgraph} of $G=(V,E)$, written $H\subseteq G$, if $V'\subseteq V$ and $E'\subseteq E$.
For a graph $G=(V,E)$ and an arbitrary $e\in \binom{V}{2}$, we write
$G + e$ and $G-e$ for the graphs $(V,E \cup \{e\})$ and $(V,E\setminus \{e\})$.
If $G=(V,E)$ and $H=(V',E')$ are graphs,
then $G\cup H$ denotes the graph $(V\cup V', E\cup E')$.
The \emph{complete graph on $n$ vertices} is the graph with
vertex set $[n]$ and edge set $\binom{[n]}{2}$. It is denoted by $K_n$.
Given a graph $G = (V,E)$, a map $c\colon E \to \mathbold{N}$ is called a \emph{proper edge coloring}
(or simply a coloring) of $G$
if for every two coincident edges $e$ and $e'$ of $G$, we have $c(e)\neq c(e')$.
The colors in the image $c(E)$ of $c$ are called the colors \emph{used} by $G$, and
we usually write $c(G)$ for this set.
For an edge $\{u,v\}\in E$, we usually write $c(u,v)$ instead of $c(\{u,v\})$. Slightly
abusing this notation, if $A$ is a set of vertices, then we also write $c(u,A)$ for
the set $c(E(u,A))$.
\subsection{Cycles and paths}
A \emph{path} is a non-empty graph $P=(V,E)$ of the form
\[V = \{p_1,p_2,\dotsc,p_k\} \quad \text{and}\quad E = \{\{p_1,p_2\},\{p_2,p_3\},\dotsc,\{p_{k-1},p_k\}\},\]
which we usually denote by the sequence $(p_1,p_2,\dotsc,p_k)$. Then $p_1$ and $p_k$ are the
\emph{start} and \emph{end} vertices of $P$, respectively. The number of edges in $E$ is called the \emph{length} of $P$. We call $p_i$ a \emph{$k$-successor} of $p_j$
if $i>j$ and there are at most $k-1$ vertices between $p_i$ and $p_j$ on $P$. In other words,
$p_i$ is a $k$-successor of $p_j$ if $0< i-j\leq k$. Equivalently,
$p_j$ is a \emph{$k$-predecessor} of $p_i$.
If $P = (p_1,p_2,\dotsc,p_k)$ is a path, then the graph $C=P + \{p_k,p_1\}$ is a \emph{cycle},
and $\abs{E(C)}$ is the \emph{length} of $C$. We represent this cycle by the cyclic sequence of
its vertices, for example $C = (p_1,p_2,\dotsc,p_k,p_1)$.
If $G$ is a graph and $H\subseteq G$ is a path or cycle such that $V(H)=V(G)$, then
$H$ is called \emph{Hamiltonian}.
\chapter{Rainbow Paths}
\section{Introduction}
Consider the complete graph $K_n =(V,E)$ with a proper edge coloring $c\colon E \to \mathbold{N}$.
Given this coloring, the rainbow paths and cycles are exactly the paths
and cycles in $K_n$ that use every color at most once.
If $P$ is a rainbow path, then we will refer to the colors in $c(P)$ as \emph{old}
and to those in $c(P)^c=c(E)\setminus c(P)$ as \emph{new}. Edges colored with
old colors are \emph{old}, edges colored with new colors are \emph{new}.
In~\cite{GyarfasMhalla2010}, Gyárfás and Mhalla proved that regardless of how the coloring
is chosen, there always are rainbow paths on at least $(2n+1)/3$ vertices.
Now we give the basic idea behind their proof, the details of which we will see later.
Consider any maximum rainbow path $P=(p_1,\dotsc,p_k)$ in $K_n$, and consider an edge
$\{p_i,p_{i+1}\}$ such that $\{p_1,p_{i+1}\}$ is new, as in the following
figure.
\begin{center}
\begin{pspicture}(-5.2,-1.5)(5.2,2)
\psset{fillstyle=solid,fillcolor=black}
\cnode(-5,0){.08}{p1} \nput{-90}{p1}{$p_1$}
\cnode(0,0){.08}{pi} \nput{-90}{pi}{$p_i$}
\cnode(1,0){.08}{psi} \nput{-90}{psi}{$p_{i+1}$}
\cnode(5,0){.08}{pt} \nput{-90}{pt}{$p_k$}
\pnode(-2.75,0){a}
\ncline{p1}{a}
\pnode(-2.25,0){b}
\ncline[nodesep=3pt,linestyle=dotted]{a}{b}
\ncline{b}{pi}
\pnode(2.75,0){a1}
\ncline{psi}{a1}
\pnode(3.25,0){b1}
\ncline[nodesep=3pt,linestyle=dotted]{a1}{b1}
\ncline{b1}{pt}
\psset{fillstyle=none,arcangle=45}
\ncarc{p1}{psi}
\ncline[linewidth=1.75pt]{pi}{psi}
\end{pspicture}
\end{center}
Clearly, any edge $\{p_k,r\}$ with $r\in V(P)^c$ cannot use the color
of $\{p_{i+1},p_i\}$, as otherwise the path
\[ (p_i,\dotsc,p_1,p_{i+1},\dotsc,p_k,r) \]
would be a rainbow path on $\abs{V(P)}+1$ vertices, contradicting
the choice of $P$. Viewed the other way around, we can say that a certain number
of edges in $E(P)$ are not allowed to have colors in $c(p_k,V(P)^c)$. But all
the edges in $E(p_k,V(P)^c)$ must be old, so their colors appear somewhere on the path.
As Gyárfás and Mhalla observed, this conflict leads to the bound $k\geq (2n+1)/3$.
But what if we knew that starting in any vertex $r\in V(P)^c$, there is a
rainbow path in $V(P)^c$ (of a certain minimum length $l$) that uses no colors of $c(P)$?
Assume that this is the case, and, moreover, that given two arbitrary new colors,
this path can be chosen in such a way
that it does not use any one of them. Then instead of forbidding colors
in $c(p_k,V(P)^c)$ to appear only on edges $\{p_i,p_{i+1}\}$ as above, we
can also forbid them to appear on any edge $\{p_i,p_{i+1}\}$ such that
$p_{i+1}$ has an $l$-successor $p_j$ with $c(p_1,p_j)\not\in c(P)$.
While this does not immediately
lead to a good bound on the length of $P$, it does give us more flexibility; as we will see
we can now often simply `forget' about constant terms.
This is the main idea behind the upcoming
proof that in every proper edge coloring of $K_n$, there are rainbow paths of length $(3/4-o(1))n$.
Now for some definitions.
For any vertex $v\in V$, we define the \emph{new neighborhood} of $v$ relative
to a rainbow path $P$ by
\[ \newn{v,P} = \{ u \in V\setminus \{v\} : c(u,v) \not\in c(P) \}\text.\]
Analogously, if $C$ is a rainbow cycle, then
\[ \newn{v,C} = \{u \in V\setminus \{v\} : c(u,v) \not\in c(C) \}\text. \]
Moreover, for any rainbow path $P=(p_1,\dotsc,p_k)$, we define the sets
\begin{align*} &A(P) = \{ p_i \in V(P) : p_{i+1}\in \newn{p_1,P}\}\\
\shortintertext{and}
&B(P) = \{ p_i \in V(P) : p_{i-1}\in \newn{p_{k},P}\}\text. \end{align*}
Note that these definitions are symmetric in the sense that if $P'=(p_k,\dotsc,p_1)$, then
$A(P)=B(P')$ and $A(P') = B(P)$. Figure \ref{fig:a} serves as a visual aid for
the formal definitions of $A(P)$ and $B(P)$.
\begin{figure}
\centering
\begin{pspicture}(-5.2,-2)(5.2,2)
\psset{fillstyle=solid,fillcolor=black}
\cnode(-5,0){.08}{p1} \nput{-90}{p1}{$p_1$}
\cnode(-3,0){.08}{x}
\cnode(-2,0){.08}{y}
\cnode(0,0){.08}{pi}
\cnode(1,0){.08}{psi}
\cnode(2,0){.08}{u}
\cnode(3,0){.08}{v}
\cnode(5,0){.08}{pk} \nput{-90}{pk}{$p_k$}
\pnode(-4.25,0){a}
\ncline{p1}{a}
\pnode(-3.75,0){b}
\ncline[nodesep=3pt,linestyle=dotted]{a}{b}
\ncline{b}{x}
\pnode(-1.25,0){a1}
\ncline{y}{a1}
\pnode(-.75,0){b1}
\ncline[nodesep=3pt,linestyle=dotted]{a1}{b1}
\ncline{b1}{pi}
\pnode(3.75,0){a2}
\ncline{v}{a2}
\pnode(4.25,0){b2}
\ncline[nodesep=3pt,linestyle=dotted]{a2}{b2}
\ncline{b2}{pk}
\psset{fillstyle=none,arcangle=45}
\ncarc{p1}{psi}
\ncline{x}{y}
\ncline{pi}{psi}
\ncline{psi}{u}
\ncline{u}{v}
\ncarc{p1}{y}
\ncarc{u}{pk}
\pscircle(-3,0){.25}
\pscircle(0,0){.25}
\psframe(2.75,-.25)(3.25,.25)
\end{pspicture}
\caption{Vertices in $A(P)$ (circled) and $B(P)$ (framed)}
\label{fig:a}
\end{figure}
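For readers who prefer to experiment, the following short Python sketch (purely illustrative and not used anywhere in the proofs; the data layout, a dictionary mapping vertex pairs to colors, is our own assumption) computes $\newn{v,P}$, $A(P)$ and $B(P)$ for a given coloring.
\begin{verbatim}
# Illustrative sketch only: N(v,P), A(P) and B(P) for a proper edge coloring
# given as a dict mapping frozenset({u, w}) -> color.

def path_colors(c, P):
    """Colors c(P) used by the path P = [p_1, ..., p_k]."""
    return {c[frozenset(e)] for e in zip(P, P[1:])}

def new_neighborhood(c, P, v, vertices):
    """N(v,P): all u != v with c(u,v) not in c(P)."""
    used = path_colors(c, P)
    return {u for u in vertices if u != v and c[frozenset((u, v))] not in used}

def A(c, P, vertices):
    """A(P): vertices p_i whose successor p_{i+1} lies in N(p_1,P)."""
    nn = new_neighborhood(c, P, P[0], vertices)
    return {P[i] for i in range(len(P) - 1) if P[i + 1] in nn}

def B(c, P, vertices):
    """B(P): vertices p_i whose predecessor p_{i-1} lies in N(p_k,P)."""
    nn = new_neighborhood(c, P, P[-1], vertices)
    return {P[i] for i in range(1, len(P)) if P[i - 1] in nn}
\end{verbatim}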
We will be working mostly with maximum rainbow paths, that is,
rainbow paths of maximum length. Clearly, if $P = (p_1,\dotsc,p_k)$ is a maximum
rainbow path, then adding any edge from $E(p_k,V(P)^c)$ to it cannot result
in a rainbow path (otherwise $P$ would not be maximum).
This means that all edges in $E(p_k,V(P)^c)$ use colors that are also used by $P$.
The same argument can be made for edges in $E(p_1,V(P)^c)$.
Moreover, since the coloring is proper, the $n-1$ edges incident to $p_1$ all have distinct colors and at most $k-1$ of these colors lie in $c(P)$; hence $p_1$ has at least $n-k$ new neighbors, all of which lie on $P$ and each of which is the successor of a distinct vertex of $A(P)$. The same holds for $p_k$ and $B(P)$, which explains the cardinality bounds below.
\begin{proposition}
\label{prop:maximality}
If $P=(p_1,\dotsc,p_k)$ is a maximum rainbow path with respect to some coloring $c$,
then $c(p_1,V(P)^c)\subseteq c(P)$ and $c(p_k,V(P)^c)\subseteq c(P)$.
In particular, $\abs{A(P)} = \abs{\newn{p_1,P}} \geq n-k$ and $\abs{B(P)} = \abs{\newn{p_k,P}} \geq n-k$.
\end{proposition}
Using this simple observation, we directly obtain the following lower bound on the length of
a maximum rainbow path.
\begin{proposition}
\label{prop:n/2-bound}
In every proper edge coloring of $K_n$, there are rainbow paths on at least
$(n+1)/2$ vertices.
\end{proposition}
\begin{proof}
Consider an arbitrary proper edge coloring $c$ of $K_n$,
and let $P = (p_1,\dotsc,p_k)$ be a maximum rainbow path in this coloring.
We have
\[ \abs{c(p_k,V(P)^c)} = \abs{V(P)^c} = n-k\text, \]
and
\[ \abs{c(P)} = \abs{E(P)} = k-1\text. \]
By Proposition \ref{prop:maximality}, we have $c(p_k,V(P)^c)\subseteq c(P)$, and hence
$\abs{c(p_k,V(P)^c)} \leq \abs{c(P)}$. Thus
$n-k \leq k-1$,
and so we have
\[ k \geq \frac{n+1}{2}\text, \]
as claimed.
\end{proof}
Actually, we proved something slightly stronger, namely, that given any vertex $v$ of $K_n$,
there is a rainbow path on $(n+1)/2$ vertices starting in this vertex. To see this,
revisit the proof and let $P$ be the longest rainbow path starting in $v$,
and observe that the argument made for Proposition \ref{prop:maximality} still applies.
\section{Rotations}
The paths in $K_n$ admit many symmetries; in fact, every
permutation of the vertices of a path results in another path.
We consider a special kind of permutation, which we call rotation here. Rotations were
already used by Pósa in~\cite{Posa1976}.
For every $i\in[k]$, there is a rotation $\rho_i$
which acts on the path $P = (p_1,\dotsc,p_k)$ to produce
the path \[\rho_i \cdot P = (p_i,p_{i-1},\dotsc,p_1,p_{i+1},p_{i+2},\dotsc,p_k)\text,\]
as shown in figure \ref{fig:rotation}.
\begin{figure}
\centering
\begin{pspicture}(-5.2,-2)(5.2,2)
\psset{fillstyle=solid,fillcolor=black}
\cnode(-5,0){.08}{p1} \nput{-90}{p1}{$p_1$}
\cnode(0,0){.08}{pi} \nput{-90}{pi}{$p_i$}
\cnode(1,0){.08}{psi} \nput{-90}{psi}{$p_{i+1}$}
\cnode(5,0){.08}{pt} \nput{-90}{pt}{$p_k$}
\pnode(-2.75,0){a}
\ncline{p1}{a}
\pnode(-2.25,0){b}
\ncline[nodesep=3pt,linestyle=dotted]{a}{b}
\ncline{b}{pi}
\pnode(2.75,0){a1}
\ncline{psi}{a1}
\pnode(3.25,0){b1}
\ncline[nodesep=3pt,linestyle=dotted]{a1}{b1}
\ncline{b1}{pt}
\psset{fillstyle=none,arcangle=45}
\ncarc{p1}{psi}
\end{pspicture}
\caption{The path $\rho_i\cdot (p_1,\dotsc,p_k)$}
\label{fig:rotation}
\end{figure}
The point is that
if $P = (p_1,\dotsc,p_k)$ is a rainbow path and
$p_i\in A(P)$, then
$\rho_i\cdot P$ is a rainbow path that does not use the color
$c(p_i,p_{i+1})$, but that is still very similar to $P$. In particular,
$\rho_i\cdot P$ ends in the same vertex as $P$.
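As a minimal illustration (again not part of the formal argument), the rotation $\rho_i$ is just a prefix reversal of the vertex sequence:
\begin{verbatim}
# Illustrative sketch: the rotation rho_i on P = [p_1, ..., p_k] (i is 1-based).
# If p_i lies in A(P), the rotated path is again rainbow: it drops the color
# c(p_i, p_{i+1}) and uses the new color c(p_1, p_{i+1}) instead.

def rotate(P, i):
    """Return rho_i . P = [p_i, ..., p_1, p_{i+1}, ..., p_k]."""
    return P[:i][::-1] + P[i:]

# Example: rotate(["p1", "p2", "p3", "p4"], 2) == ["p2", "p1", "p3", "p4"]
\end{verbatim}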
\begin{proposition}
\label{prop:maximality-rot}
If $P = (p_1,\dotsc,p_k)$ is a maximum rainbow path with respect to some coloring $c$, then
for every $p_i\in A(P)$ we have $c(p_i,p_{i+1})\not\in c(p_k,V(P)^c)$. Similarly,
for every $p_i\in B(P)$ we have $c(p_i,p_{i-1}) \not\in c(p_1,V(P)^c)$.
\end{proposition}
\begin{proof}
Suppose that $p_i\in A(P)$ is such that $c(p_i,p_{i+1})\in c(p_k,V(P)^c)$.
Then there is a vertex $v\in V(P)^c$ such that $c(p_k,v)=c(p_i,p_{i+1})\not\in c(\rho_i\cdot P)$.
Since $\rho_i\cdot P$ ends in $p_k$ and $v\not\in V(P) = V(\rho_i\cdot P)$,
the rainbow path $\rho_i\cdot P$ violates Proposition
\ref{prop:maximality}.
The second part follows by symmetry.
\end{proof}
This fact was used by Gyárfás and Mhalla to find rainbow paths on $(2n+1)/3$ vertices. Because
the proof is quite elegant and uses a technique similar to those used later on, we
give it here.
\begin{thm}[{\cite{GyarfasMhalla2010}}]
\label{thm:gm2010}
In every proper edge coloring of $K_n$, there is a rainbow path on at least
$(2n+1)/3$ vertices.
\end{thm}
\begin{proof}
Consider an arbitrary proper edge coloring $c$ of $K_n$,
and let $P = (p_1,\dotsc,p_k)$ be a maximum rainbow path with respect to $c$.
By Proposition \ref{prop:maximality},
$c(p_k,V(P)^c)\subseteq c(P)$.
Now let
\[ X = \{ c(p_i,p_{i+1}) : p_i\in A(P) \}\text. \]
Then we have $\abs{X} = \abs{A(P)} \geq n-k$.
Since $X\subseteq c(P)$, we get \[X\cup c(p_k,V(P)^c)\subseteq c(P)\text,\] and hence
$\abs{X \cup c(p_k,V(P)^c)} \leq \abs{c(P)} = \abs{E(P)} = k-1$.
By Proposition \ref{prop:maximality-rot}, we have
$X \cap c(p_k,V(P)^c) = \emptyset$, and thus
\[ \abs{X\cup c(p_k,V(P)^c)} = \abs{X} + \abs{c(p_k,V(P)^c)} \geq n-k + n-k\text,\] and so
$2n - 2k\leq k-1$, or
\[ k \geq \frac{2n+1}{3}\text, \]
which is what we needed to prove.
\end{proof}
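To see the counting at work on concrete (purely illustrative) numbers: suppose $n=100$ and a maximum rainbow path had only $k=60$ vertices. Then $\abs{X}\geq 40$ and $\abs{c(p_k,V(P)^c)}=40$, and these two disjoint sets would both have to fit into $c(P)$, which has only $59$ elements; this is impossible, so in fact $k\geq (2\cdot 100+1)/3 = 67$.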
Now we come to the main result of this work.
\section{A theorem on the length of rainbow paths}
In this section, we are going to prove the following result, which is equivalent to Theorem
\ref{thm:intro}.
\begin{thm}
\label{thm:awesome}
For every $\epsilon > 0$, there is some $n_0 = n_0(\epsilon)$ such that
for $n> n_0$, every proper edge coloring of $K_n$ contains a rainbow path on at least
$(3/4-\epsilon)n$ vertices.
\end{thm}
The proof proceeds by contradiction, that is, we assume that for some
$\epsilon > 0$, there are no such rainbow paths, and show that we can construct a
rainbow path that is longer than the length
of a (supposedly) maximum rainbow path. There are two steps to how this is done:
\begin{enumerate}
\parskip 0pt
\item We show that there is a maximum rainbow path $P$ such that almost all (i.e., all but constantly many)
vertices in $V(P)^c$ have at least some constant number of new neighbors in $V(P)^c$.
\item We show how such a rainbow path can be extended to a longer rainbow path, resulting in a contradiction.
\end{enumerate}
\subsection{Preliminaries}
Let $\epsilon > 0$ and let $n>n_0$ for some suitably large value $n_0 = n_0(\epsilon)$.
We will not give an explicit value for $n_0$; rather, we will tacitly assume that $n$ is large enough for any term that grows with $n$ to dominate the constants appearing in the proof.
We are given a proper edge coloring $c$ of $K_n$.
Then we denote by $t$ the number of vertices in a maximum length rainbow path, that is,
there are no rainbow paths of length $t$ (on $t+1$ vertices).
We will assume throughout that we have
\[ t \leq \left(\frac{3}{4}-\epsilon\right)n\text,\]
trying to arrive at a contradiction.
During the proof, let $a$ be a `large enough' constant. For example $a = 100+\ceil{100/\epsilon}$
is more than enough.
\subsection{Nice rainbow paths}
For any rainbow path $P$, we define the set
\[ R(P) = \{ r\in V(P)^c : \abs{\newn{r,P} \cap V(P)^c} > a\} \text. \]
So $R(P)$ is the set of vertices in $V(P)^c$ that have more than $a$ new neighbors
in $V(P)^c$. Then we have the following definition.
\begin{define}[Nice rainbow path]
A \emph{nice} rainbow path is a rainbow path $P$ satisfying
\[ \abs{R(P)}> n-t- 1/\epsilon\text. \]
\end{define}
In other words, nice rainbow paths are such that all but at most $1/\epsilon$ vertices in $V(P)^c$
have more than $a$ new neighbors in $V(P)^c$.
We will be interested in nice maximum rainbow paths, that is, nice rainbow paths on $t$ vertices.
Before showing that such paths exist, we will motivate them by proving that they have some nice properties.
\begin{proposition}
\label{prop:long-paths}
If $P$ is a nice maximum rainbow path,
then for any vertex $r\in R(P)$ and any set $F$ of colors, there is a rainbow path $Q$
starting in $r$ in the subgraph induced by $R(P)$ such that
$\abs{V(Q)} = \floor{(a-\abs{F}-1/\epsilon)/2}$
and $c(Q)\cap (F \cup c(P)) = \emptyset$.
\end{proposition}
\begin{proof}
Let $k=\floor{(a-\abs{F}-1/\epsilon)/2}$.
We construct a series of rainbow paths $Q_1,\dotsc,Q_{k}$ starting in $r$
and using only vertices in $R(P)$, such that for every $i \in [k]$, we have $\abs{V(Q_i)} = i$. Furthermore,
we shall make sure that no $Q_i$ uses colors in $F\cup c(P)$. Clearly, the path $Q_k$
will have the desired properties.
Let $Q_1 = (r)$ and define $Q_{i+1}$ in terms of $Q_i$ as follows.
By construction, $Q_i$ starts in $r$ and ends in some vertex $v\in R(P)$.
If there is a vertex $u\in \newn{v,P} \cap R(P)\setminus V(Q_i)$ such that
\[ c(u,v)\not\in c(Q_i)\cup F\text, \]
then we can simply define $Q_{i+1}$ to be $Q_i\cup(v,u)$.
Hence it is enough to prove that $\abs{\newn{v,P} \cap R(P) \setminus V(Q_i)} > \abs{c(Q_i)\cup F}$.
Because $i< k$, we have
\[ \abs{c(Q_i)\cup F} \leq \abs{c(Q_i)} + \abs{F} < k + \abs{F}\text. \]
Since $P$ is nice, we have $\abs{V(P)^c\setminus R(P)} < 1/\epsilon$, and as $v\in R(P)$, we get
\[ \abs{\newn{v,P} \cap R(P)} > a-1/\epsilon\text, \]
and hence
\[ \abs{\newn{v,P} \cap R(P) \setminus V(Q_i)} > a- 1/\epsilon -k\text. \]
Now the claim follows from
\[ a-1/\epsilon -k \geq k + \abs{F}\text, \]
which holds because
$k \leq (a-\abs{F}-1/\epsilon)/2$.
\end{proof}
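The construction in this proof is a simple greedy procedure; the following Python sketch (illustrative only; the names and the color-dictionary layout are our own) mirrors it. The counting argument above guarantees that, for a nice maximum rainbow path, a suitable next vertex exists for $\floor{(a-\abs{F}-1/\epsilon)/2}$ steps.
\begin{verbatim}
# Illustrative sketch of the greedy growth of Q inside R(P), avoiding the
# forbidden colors F, the colors of P, and the colors already used by Q.

def grow_in_R(c, R, P_colors, F, r, steps):
    Q = [r]
    used = set()                          # colors already used by Q
    forbidden = set(F) | set(P_colors)
    for _ in range(steps - 1):
        v = Q[-1]
        candidates = [u for u in R
                      if u not in Q
                      and c[frozenset((u, v))] not in used | forbidden]
        if not candidates:                # cannot happen under the counting bound
            break
        u = candidates[0]
        used.add(c[frozenset((u, v))])
        Q.append(u)
    return Q
\end{verbatim}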
The importance of this fact comes from the following lemma.
\begin{lemma}
\label{lemma:no-paths}
If $P$ is a nice maximum rainbow path, then
every rainbow path \[Q = (q_1,\dotsc,q_k)\] satisfies at least one of the following properties:
\begin{enumerate}[(P1)]
\parskip 0pt
\item $V(Q)\not\subseteq V(P)$,
\item $(\newn{q_1,Q}\cup \newn{q_k,Q}) \cap R(P)= \emptyset$,
\item $\abs{c(Q)\setminus c(P)}> 2$, or
\item $k< t-a/3$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that there is a rainbow path $Q=(q_1,\dotsc,q_k)$ satisfying none of the properties (P1--4).
Because (P2) is violated, either $\newn{q_1,Q}\cap R(P)\neq \emptyset$ or
$\newn{q_k,Q}\cap R(P)\neq \emptyset$.
In any case, there is an endpoint $q$ of $Q$ such that
for some vertex $s_1\in R(P)$, we have $c(q,s_1)\not\in c(Q)$.
Then let \[F = \{c(q,s_1)\}\cup (c(Q)\setminus c(P))\text,\]
and since (P3) is violated, we have $\abs{F}\leq 3$.
By Proposition \ref{prop:long-paths}, we know that the subgraph induced by $R(P)$ contains
a rainbow path $S = (s_1,\dotsc,s_l)$ with $l\geq \floor{(a-3-1/\epsilon)/2}$
such that $c(S)\cap (F\cup c(P)) = \emptyset$. So $S$ does not use any colors in
\[ F\cup c(P) = \{c(q,s_1)\}\cup c(Q)\cup c(P)\text, \]
and hence the path $S' = (q,s_1,\dotsc,s_l)$ is a rainbow path with $c(S')\cap c(Q)=\emptyset$.
Because (P1) is violated, we have $V(Q)\subseteq V(P)$, and since $R(P)\subseteq V(P)^c$, this gives $V(Q)\cap R(P) = \emptyset$. Thus
\[ V(Q)\cap V(S') = \{q\}\text,\]
so $Q\cup S'$ is a rainbow path. As (P4) is violated, we have
\begin{align*}
\abs{V(Q\cup S')} &=\abs{V(Q)}+\abs{V(S')}-1\\
&= k + l-1\\
&\geq t-\frac{a}{3} + \floor{(a-3-1/\epsilon)/2}-1\\
&> t-\frac{a}{3} + \frac{a}{2} -\frac{1}{2\epsilon}- 4\\
&> t\text,
\end{align*}
where the last inequality holds because $a$ was chosen large enough (for example, $a\geq 100+100/\epsilon$).
But by definition of $t$, there are no rainbow paths this long.
\end{proof}
Thus Lemma \ref{lemma:no-paths} suggests that we can prove Theorem \ref{thm:awesome} in two steps. Under the assumption
that $t\leq(3/4-\epsilon)n$:
\begin{itemize}
\item Show that there is at least one nice maximum rainbow path $P$.
\item Then show that for this $P$, there is a rainbow path $Q$ violating Lemma \ref{lemma:no-paths}.
\end{itemize}
\subsection{Existence of nice maximum rainbow paths}
In this section, we will prove that there are nice maximum rainbow paths.
First, let us reiterate what it means for a rainbow path not to be nice. If $P$
is a rainbow path that is not nice, then we have
\[ \abs{R(P)}\leq n-t-1/\epsilon\text, \]
or, equivalently,
\[ \abs{R(P)^c} \geq n-(n-t-1/\epsilon) = t+1/\epsilon\text. \]
For any rainbow path, we have $\abs{V(P)}\leq t$, so we get
\[ \abs{R(P)^c\cap V(P)^c} = \abs{R(P)^c\setminus V(P)}\geq 1/\epsilon\text. \]
This means that there are at least $1/\epsilon$ vertices in $V(P)^c$
that have no more than $a$ new neighbors in $V(P)^c$.
\begin{proposition}
\label{cons:no-cycles}
Suppose that there are no nice maximum rainbow paths.
Then there are no rainbow cycles of length $t$.
\end{proposition}
\begin{proof}
Suppose that $C$ is such a cycle. Then removing any edge $e\in E(C)$ from $C$,
we get a maximum rainbow path $P=C-e$, with $V(P)=V(C)$.
By assumption, this maximum rainbow path cannot be nice,
so there is a vertex $v \in V(C)^c$ such that at least $\abs{\newn{v,P}} - a\geq n-t-a$
new neighbors of $v$ are in $V(C)$.
Since, for large enough $n$, we have
$n-t-a\geq 2$, at
least one of those neighbors, call it $u$, is such that $c(u,v)\neq c(e)$.
So $c(u,v)\not\in c(P+e) = c(C)$.
Let $u'$ be a vertex adjacent to $u$ on $C$. Then
$(C-\{u,u'\}) \cup (u,v)$ is a rainbow path of length $t$. This contradicts the definition of $t$.
\end{proof}
\begin{proposition}
Suppose that there are no nice maximum rainbow paths, and let
$P = (p_1,\dotsc,p_t)$ be a maximum rainbow path. Then for every vertex
$p_i\in A(P)\cap B(P)$ we have $c(p_1,p_{i+1}) = c(p_{i-1},p_t)$.
\end{proposition}
\begin{proof}
Assume by way of contradiction
that there is a vertex $p_i \in A(P)\cap B(P)$ such that $c(p_1,p_{i+1}) \neq c(p_{i-1},p_t)$.
By definition, $c(p_1,p_{i+1})\not\in c(P)$ and $c(p_t,p_{i-1})\not\in c(P)$,
so the cycle
\[ C = (p_1,\dotsc,p_{i-1},p_t,p_{t-1},\dotsc,p_{i+1},p_1) \]
shown in figure \ref{fig:easy-cycle} is a rainbow cycle of length $t-1$.
\begin{figure}
\centering
\begin{pspicture}(-5.2,-2)(5.2,2)
\psset{fillstyle=solid,fillcolor=black}
\cnode(-5,0){.08}{x1} \nput{-90}{x1}{$p_1$}
\cnode(-1,0){.08}{xpi} \nput{-90}{xpi}{$p_{i-1}$}
\cnode(0,0){.08}{xi} \nput{-90}{xi}{$p_i$}
\cnode(1,0){.08}{xsi} \nput{-60}{xsi}{$p_{i+1}$}
\cnode(5,0){.08}{xt} \nput{-90}{xt}{$p_t$}
\pnode(-3.25,0){a}
\ncline{x1}{a}
\pnode(-2.75,0){b}
\ncline[nodesep=3pt,linestyle=dotted]{a}{b}
\ncline{b}{xpi}
\pnode(2.75,0){a1}
\ncline{xsi}{a1}
\pnode(3.25,0){b1}
\ncline[nodesep=3pt,linestyle=dotted]{a1}{b1}
\ncline{b1}{xt}
\psset{fillstyle=none,arcangle=45}
\ncarc{xsi}{x1}
\ncarc{xpi}{xt}
\end{pspicture}
\caption{A rainbow cycle if $p_i\in A(P)\cap B(P)$ and $c(p_1,p_{i+1})\neq c(p_{i-1},p_t)$}
\label{fig:easy-cycle}
\end{figure}
By assumption, $P$ could not have been nice, so there is a vertex
$v \in V(P)^c$ with
\[ \abs{\newn{v,P}\cap V(P)} \geq \abs{\newn{v,P}} - a\text. \]
Some thought shows that
\[ \abs{\newn{v,C} \cap V(C)}\geq \abs{\newn{v,C}}-a-5\text, \]
because at most three vertices of $\newn{v,P}\cap V(P)$ are not in
$\newn{v,C}\cap V(C)$, and at most two vertices of $\newn{v,C}$ are
not in $\newn{v,P}$.
But no two vertices $u,u'\in \newn{v,C}\cap V(C)$ can be adjacent in $C$, because otherwise,
the cycle $(C - \{u,u'\}) \cup (u,v, u')$ would be
a rainbow cycle of length $t$, contradicting Proposition \ref{cons:no-cycles}.
In other words, no edge $e\in E(C)$ is incident to two vertices in $\newn{v,C} \cap V(C)$. Hence,
if we write $X = \{e \in E(C) : e\cap \newn{v,C} \neq \emptyset\}$, then
\begin{align*}
\abs{X} &= 2\abs{\newn{v,C}\cap V(C)}\\
&\geq 2\abs{\newn{v,C}}-2a-10\\
&\geq 2n-2t-2a-12\text.
\end{align*}
Now let $Y = V(C)^c \cap \newn{v,C}^c$. We have
\begin{align*}
\abs{Y} &= \abs{V(C)^c \cap \newn{v,C}^c}\\
&= \abs{V(C)^c} - \abs{\newn{v,C} \cap V(C)^c}\\
&= n-t+1 - \abs{\newn{v,C}} + \abs{\newn{v,C}\cap V(C)}\\
&\geq n-t+1 - \abs{\newn{v,C}} + \abs{\newn{v,C}}-a-5\\
&= n-t-a-4\text.
\end{align*}
Clearly, $c(X) \subseteq c(C)$ and $c(v,Y\setminus \{v\})\subseteq c(C)$.
Then $c(X)$ and $c(v,Y\setminus \{v\})$ have to intersect, because
\begin{align*}
\abs{c(X)} + \abs{c(v,Y\setminus \{v\})} &= \abs{X} + \abs{Y} - 1\\
&\geq 3n-3t - 3a -17 \\
&> t-1\\
&= \abs{c(C)}\text.
\end{align*}
Here the strict inequality uses $t\le(3/4-\epsilon)n$, which gives $3n-4t\ge 4\epsilon n>3a+16$ for $n$ large enough.
Therefore, there exists an edge $e\in X$ with $c(e) = c(v,u)$ for some $u\in V(C)^c$.
By definition, $e$ is incident to some vertex $x\in V(C)$ with $c(x,v)\not\in c(C)$. Then
the path $(C-e)\cup (x,v,u)$ is a rainbow path of length $t$, contradicting the definition
of $t$.
\end{proof}
This suggests that one could choose $P$ in such a way that
$\abs{A(P)\cap B(P)}$ is small. Indeed, we can do this, as we will
show next.
\begin{proposition}
\label{cons:a-b}
Suppose that there are no nice maximum rainbow paths.
Then there is a maximum rainbow path $P$ such that $\abs{A(P)\cap B(P)}\leq \epsilon n$.
\end{proposition}
\begin{proof}
Let $k = \ceil{1+2/\epsilon}$ and let $P$ be an arbitrary maximum rainbow path.
Now we define a sequence of maximum rainbow paths $(P_0,\dotsc,P_k)$ as follows:
\begin{itemize}
\parskip 0pt
\item $P_0 = P$.
\item $P_l$ is defined in terms of $P_{l-1}=(p_1,\dotsc,p_t)$. Because $P_{l-1}$ is a maximum rainbow path,
we have $\abs{A(P_{l-1})} \geq n-t > k$. Then choose a vertex $p_i \in A(P_{l-1})$ that
is \emph{not} the starting vertex of any of the paths $P_0,\dotsc,P_{l-1}$,
and let $P_l = \rho_i \cdot P_{l-1}$. Then, because $p_i\in A(P_{l-1})$, the path $P_l$
is a maximum rainbow path starting in $p_i$ and ending in $p_t$.
\end{itemize}
By this construction, all paths $P_i$ end in the same vertex $p_t$, but no two different
paths start in the same vertex.
We now show that at least one of the paths $P_l$ must
satisfy $\abs{A(P_l)\cap B(P_l)}\leq \epsilon n$. By way of contradiction,
assume that for every $l\in \{0,\dotsc,k\}$, we have
\[ \abs{A(P_l)\cap B(P_l)} > \epsilon n\text.\]
For every path $P_l = (p_1,\dotsc,p_t)$, we define two sets $X_l$ and $Y_l$.
Let $X_l$ be the smallest set
that, for every vertex $p_i\in \newn{p_t,P_l}$, contains the triples $(p_i,p_{i+1},p_{i+2})$ and
$(p_i,p_{i-1},p_{i-2})$, if those vertices are defined.
Observe that $\abs{X_l\setminus X_{l-1}} \leq 4$, because the path $P_l$ is built from $P_{l-1}$ by a single rotation.
Then, since $\abs{X_0} \leq 2t$, we get
\[ \abs{X_0\cup \dotsb\cup X_k} \leq 2t + 4k\text. \]
Now let $Y_l$ be the smallest set that, for every vertex $p_i \in A(P_l)\cap B(P_l)$, contains the
triple $(p_{i-1},p_i,p_{i+1})$. If $p_i\in B(P_l)$, then $p_{i-1}\in \newn{p_t,P_l}$, so we have $Y_l\subseteq X_l$.
Assuming that $n$ is large enough, we get
\[ \frac{\abs{X_0\cup \dotsb\cup X_k}}{\abs{Y_l}} < \frac{2t+4k}{\epsilon n} < 1+2/\epsilon\leq k\text.\]
Thus $k\abs{Y_l}> \abs{X_0\cup \dotsb \cup X_k}$ for every $l$. Since $Y_l\subseteq X_0\cup \dotsb \cup X_k$,
and because there are $k+1$ sets $Y_l$ altogether, this means that
there are
two sets $Y_i$ and $Y_j$ that intersect, say in the triple $(u,v,w)$.
But if we write $P_i = (p_1,\dotsc,p_t)$ and $P_j=(p_1',\dotsc,p_{t-1}',p_t)$, then, since $p_1\neq p_1'$ and the coloring is proper, $c(p_1,w)\neq c(p_1',w)$, so either
$c(p_1,w)\neq c(u,p_t)$ or $c(p_1',w)\neq c(u,p_t)$, contradicting the result above stating that
for any maximum rainbow path $P=(p_1,\dotsc,p_t)$, every vertex $p_i\in A(P)\cap B(P)$
satisfies $c(p_1,p_{i+1}) = c(p_{i-1},p_t)$.
\end{proof}
Now we are ready to prove the following.
\begin{lemma}
There exists at least one nice maximum rainbow path.
\end{lemma}
\begin{proof}
Suppose, for the sake of contradiction, that there are no nice maximum rainbow paths.
Then, by Proposition \ref{cons:a-b}, there is a maximum rainbow path $P$ with
$\abs{A(P)\cap B(P)}\leq \epsilon n$. Writing $A$ and $B$ for
$A(P)$ and $B(P)$ respectively, this means that
\[ \abs{A \cup B} = \abs{A} + \abs{B} - \abs{A\cap B} \geq 2n-2t - \epsilon n\text. \]
Observe that every vertex in $A\cup B$ can have at most one new neighbor in $V(P)^c$,
as otherwise we would get a rainbow path of length $t$.
Now suppose that $r$ is a vertex in $V(P)^c$ such that
\[ \abs{\newn{r,P}\cap V(P)} \geq \abs{\newn{r,P}}-a\geq n-t-a\text. \]
For brevity, let us write $X = \newn{r,P}\cap V(P)$.
Then, using $\abs{A \cup B \cup X}\leq t$,
\begin{align*}
\abs{(A \cup B) \cap X}& = \abs{A\cup B} + \abs{X} - \abs{A \cup B \cup X}\\
& \geq 2n-2t-\epsilon n + n - t-a - t\\
& = 3n - 4t -a- \epsilon n\\
& \geq \epsilon n\text.
\end{align*}
Therefore, at least $\epsilon n$ new neighbors of $r$
are in $A\cup B$.
Hence, by the observation above, there may be at most
\[ \frac{\abs{A\cup B}}{\epsilon n} \leq \frac{t}{\epsilon n} < \frac{1}{\epsilon} \]
vertices $r\in V(P)^c$ with $\abs{\newn{r,P} \cap V(P)} \geq \abs{\newn{r,P}}- a$, that is, with at most $a$ new
neighbors in $V(P)^c$. In
other words, more than $n-t-1/\epsilon$ vertices in $V(P)^c$ satisfy
\[ \abs{\newn{r,P}\cap V(P)^c}> a\text, \]
so $P$ is nice, contradicting our assumption.
\end{proof}
\subsection{Establishing a contradiction}
In the following, let $P = (p_1,\dotsc,p_t)$ be a nice maximum rainbow path, that is, such that
\[ \abs{R(P)} > n-t-1/\epsilon\text.\]
We will now try to reach a contradiction by constructing a rainbow path $Q$ violating
Lemma \ref{lemma:no-paths}. We will start by proving the following useful proposition.
\begin{proposition}
\label{prop:counting}
Let $A$ be any subset of $V(P)$. Then for any $k \in \mathbold{N}$,
there are at most $1+t/k$ vertices in $A$ that do not have a $k$-successor in $A$ (on $P$).
\end{proposition}
\begin{proof}
Let $X\subseteq A$ be the set of vertices in $A$ that do not have a $k$-successor in $A$.
Clearly, there can be only one vertex in $A$ that does not have a $t$-successor in $A$.
So for all but at most one vertex in $X$, there are $k$ vertices following that vertex that are not in $A$.
If we also count the vertices in $X$ themselves, then in total we count at least
$k(\abs{X}-1)+\abs{X}$ vertices. Altogether, there are only $t$ vertices in $P$. Therefore
\[ k(\abs{X}-1)+\abs{X} \leq t \]
and hence
\[ \abs{X} \leq \frac{t+k}{k+1} \leq \frac{t}{k}+1\text. \]
This completes the proof.
\end{proof}
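For a concrete (illustrative) instance of this bound, take $t=12$, $k=3$ and $A=\{p_1,p_5,p_9\}$: no vertex of $A$ has a $3$-successor in $A$, and indeed $\abs{A}=3\leq 1+12/3=5$.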
\begin{proposition}
\label{prop:a-b-bound}
We have $\abs{A(P) \cap B(P)} \leq \epsilon n$.
\end{proposition}
\begin{proof}
Suppose that \[\abs{A(P) \cap B(P)} > \epsilon n\text.\]
Let $k=\ceil{t/(\epsilon n-1)}$.
If $p_i \in A(P) \cap B(P)$, then by definition $p_{i-1} \in \newn{p_t,P}$.
By Proposition \ref{prop:counting}, at most $1+t/k$ vertices in $\newn{p_t,P}$ have no $k$-predecessor in $\newn{p_t,P}$
on $P$.
Therefore, there are more than
\[ \epsilon n - 1-t/k \geq 0\]
vertices $p_i\in A(P)\cap B(P)$ such that $p_{i-1}$ does have a $k$-predecessor $p_j$ in $\newn{p_t,P}$.
Take any such vertex;
at least one of the edges $\{p_t,p_{i-1}\}$ and $\{p_t,p_{j}\}$ is colored differently from $\{p_1,p_{i+1}\}$.
As shown in figure \ref{fig:cycle-ab}, there is a rainbow cycle $C$ using this edge, of length
at least \[t-k-1 \geq t-\ceil{t/(\epsilon n-1)}-1\geq t- a/3\text.\] Now we distinguish two cases.
\begin{figure}
\centering
\begin{pspicture}(-5.2,-2)(5.2,2)
\psset{fillstyle=solid,fillcolor=black}
\cnode(-5,0){.08}{x1} \nput{-90}{x1}{$p_1$}
\cnode(-2,0){.08}{xa} \nput{-90}{xa}{$p_j$}
\cnode(-1,0){.08}{xpi} \nput{-90}{xpi}{$p_{i-1}$}
\cnode(0,0){.08}{xi} \nput{-90}{xi}{$p_i$}
\cnode(1,0){.08}{xsi} \nput{-60}{xsi}{$p_{i+1}$}
\cnode(5,0){.08}{xt} \nput{-90}{xt}{$p_t$}
\pnode(-3.75,0){a}
\ncline{x1}{a}
\pnode(-3.25,0){b}
\ncline[nodesep=3pt,linestyle=dotted]{a}{b}
\ncline{b}{xa}
\pnode(2.75,0){a1}
\ncline{xsi}{a1}
\pnode(3.25,0){b1}
\ncline[nodesep=3pt,linestyle=dotted]{a1}{b1}
\ncline{b1}{xt}
\ncline[linestyle=dotted,nodesep=3pt]{xa}{xpi}
\psset{fillstyle=none,arcangle=45}
\ncarc{xsi}{x1}
\ncarc{xa}{xt}
\ncarc{xpi}{xt}
\end{pspicture}
\caption{Building a rainbow cycle for $p_i \in A(P)\cap B(P)$}
\label{fig:cycle-ab}
\end{figure}
\begin{enumerate}[{Case }1.]
\parskip 0pt
\item There is a vertex $r\in R(P)$ such that for some $v \in V(C)$, we have $c(r,v)\not\in c(C)$.
Let $e\in E(C)$ be incident to $v$. Then it is easily verified that
the path $C-e$ violates Lemma \ref{lemma:no-paths}.
\item No vertex $r\in R(P)$ has a neighbor $v\in V(C)$ such that $c(r,v) \not\in c(C)$.
Let $r$ be any vertex in $R(P)$. Since there are at most $\abs{c(C)}$
vertices $u$ such that $c(r,u) \in c(C)$, and because by assumption all of them are in $V(C)$,
we have $c(r,p_i)\not\in c(C)$.
Then let $e\in E(C)$ be incident to $p_{i+1}$; in this case the path $(C-e)\cup (p_{i+1},p_i)$
violates Lemma \ref{lemma:no-paths}.
\end{enumerate}
Both cases result in a contradiction.
\end{proof}
\begin{proposition}
There are at least $\epsilon n$ vertices $p_i \in A(P)$ such that
$c(p_i,p_{i+1}) \in c(p_1,R(P))$.
\end{proposition}
\begin{proof}
Let \[X = \{ p_i \in V(P) : c(p_{i},p_{i+1}) \in c(p_1,R(P)) \} \text.\]
First we show that $X\cap B(P) = \emptyset$. This is the case, because if
$p_i \in X\cap B(P)$, then $c(p_i,p_{i+1}) = c(p_1,r)$ for some $r\in R(P)$, so
$P' = (r,p_1,\dotsc,p_{i-1},p_t,p_{t-1},\dotsc,p_{i+1})$ is a rainbow path on $t$ vertices.
But since $r\in R(P)$, at least one vertex of $\newn{r,P'}$ is not in $V(P')$, so
$P'$ cannot be maximum by Proposition \ref{prop:maximality}, contradicting the fact that every rainbow path on $t$ vertices is maximum.
Hence $X\cap B(P) = \emptyset$.
Then we can partition $X$ as follows:
\[ X = \big(X \cap (A(P)\cup B(P))^c\big) \cup \big(X\cap A(P)\big)\text. \]
By Proposition \ref{prop:a-b-bound}, we have $\abs{A(P)\cap B(P)}\leq \epsilon n$.
So we get
\begin{align*}
\abs{X \cap (A(P)\cup B(P))^c} &\leq \abs{V(P)} - \abs{A(P)\cup B(P)}\\
&= \abs{V(P)} - \abs{A(P)} -\abs{B(P)} + \abs{A(P)\cap B(P)}\\
&\leq t - n+t - n+t + \epsilon n\\
& = 3t-2n +\epsilon n\text,
\end{align*}
and hence
\[ \abs{X\cap A(P)} = \abs{X} - \abs{X \cap (A(P)\cup B(P))^c}
> 3n - 4t - 1/\epsilon - \epsilon n > \epsilon n\text, \]
using $\abs{X} = \abs{R(P)}>n-t-1/\epsilon$.
\end{proof}
By Proposition \ref{prop:counting}, at most $1+\epsilon t$ vertices in $A(P)$ do not have a
$\ceil{1/\epsilon}$-successor in $A(P)$.
Thus at least $\epsilon n -\epsilon t-1 > 0$ vertices $p_i\in A(P)$ satisfy the following properties:
\begin{enumerate}
\parskip 0pt
\item $c(p_i,p_{i+1}) \in c(p_1,R(P))$ and
\item $p_i$ has a $\ceil{1/\epsilon}$-successor $p_j\in A(P)$.
\end{enumerate}
Take any such vertices $p_i$ and $p_j$.
\begin{proposition}\label{prop:inter}
We have \[\abs{\newn{p_i,P}\cap \newn{p_j,P} \cap V(P)} > \epsilon n\text.\]
\end{proposition}
\begin{proof}
Because $p_i,p_j\in A(P)$,
the paths $P_i = \rho_i \cdot P$ and $P_j = \rho_j \cdot P$ are maximum rainbow paths.
Both paths $P_i$ and $P_j$ end in the same vertex $p_t$. It will be useful to
consider $P_i$ and $P_j$ as directed paths, so let $E_i$ and $E_j$ be the
edge sets of $P_i$ and $P_j$, but with the edges directed towards $p_t$.
Because $p_j$ is a $\ceil{1/\epsilon}$-successor
of $p_i$, we have
\[ \abs{E_i\cup E_j} \leq t-1+\ceil{1/\epsilon} \quad \text{and}\quad \abs{E_i\cap E_j} \geq t-1-\ceil{1/\epsilon}\text. \]
Since $P_i$ and $P_j$ are maximum rainbow paths, the colors in $c(p_t,V(P)^c)$
must appear on edges of $P_i$ and $P_j$.
Let $Z$ be the set of edges in $E_i\cap E_j$ that are colored with
colors in $c(p_t,V(P)^c)$.
We have \[\abs{Z} \geq \abs{c(p_t,V(P)^c)} - \ceil{1/\epsilon} = n-t-\ceil{1/\epsilon}\text.\]
If we write $P_i = (x_1,\dotsc,x_t)$ and $P_j = (y_1,\dotsc,y_t)$, then we can define
\[X = \{(x_i,x_{i+1}) : x_{i}\in A(P_i) \} \quad \text{and}\quad Y = \{(y_i,y_{i+1}) : y_{i}\in A(P_j) \}\text. \]
Note that it is sufficient to show that $\abs{X\cap Y}> \epsilon n+2$.
By maximality of $P_i$ and $P_j$,
\[ Z \cap X = Z \cap Y = \emptyset\text.\]
Therefore, we get
\begin{align*} \abs{Z \cup X \cup Y}
& = \abs{Z} + \abs{X \cup Y}\\
& = \abs{Z} + \abs{X} + \abs{Y}
- \abs{X \cap Y}\\
& \geq 3n - 3t - \ceil{1/\epsilon}- \abs{X\cap Y}\text.
\end{align*}
But $Z \cup X\cup Y\subseteq E_i\cup E_j$, so we have
\[ \abs{Z \cup X \cup Y} \leq \abs{E_i\cup E_j} \leq t+\ceil{1/\epsilon}\text,\]
and therefore
\[ \abs{X\cap Y} \geq 3n-4t-2\ceil{1/\epsilon}
>\epsilon n+2 \text,\]
which is what we needed to prove.
\end{proof}
In the following, let us write
\[ X = \newn{p_i,P}\cap \newn{p_j,P}\cap V(P)\text.\]
We have just shown that $\abs{X}\geq \epsilon n$.
Because there are less than $1/\epsilon$ vertices of $P$ between $p_i$ and $p_j$, we can say
that if $n$ is large enough, then either there are at least $\epsilon n /3$ vertices
of $X$ preceding $p_i$ on $P$ (Case 1), or there are at least $\epsilon n /3$
vertices of $X$ succeeding $p_j$ (Case 2).
Now we distinguish between the Cases 1 and 2.
\begin{enumerate}[{Case }1.]
\parskip 0pt
\item In this case, there are at least $\epsilon n/3$ vertices of $X$ preceding $p_i$.
Using Proposition \ref{prop:counting}, there are at most $1+\epsilon t/10<\epsilon n/10$ vertices in $X$
that do not have a $\ceil{10/\epsilon}$-successor in $X$.
Therefore, at least
\[ \frac{\epsilon n}{3}-\frac{\epsilon n}{10} = \frac{7\epsilon n}{30} \]
of the vertices of $X$ preceding $p_i$ have a $\ceil{10/\epsilon}$-successor in $X$.
Note that there is an injection from these vertices into their successors: to every such vertex in $X$
we associate its closest successor in $X$.
So by the same argument, of those successors, at least
\[ \frac{7\epsilon n}{30} - \frac{\epsilon n}{10} = \frac{4\epsilon n}{30} \]
have themselves a $\ceil{10/\epsilon}$-successor in $X$.
Hence there is a vertex $u \in X$ preceding $p_i$
on $P$ that has two $\ceil{20/\epsilon}$-successors in $X$ which also
precede $p_i$. In particular, $u$ has a $\ceil{20/\epsilon}$-successor $v$ that precedes $p_i$
and that satisfies $c(p_i,u)\neq c(p_j,v)$.
Then the path shown in figure \ref{fig:case1}
is a rainbow path starting in $p_1$ and ending in $p_t$, visiting at least $t-\ceil{1/\epsilon}-\ceil{20/\epsilon}$ vertices,
that uses only two colors not in $c(P)$.
\item As in the previous case, there is a vertex
$u \in X$ succeeding $p_j$ on $P$
that has a $\ceil{20/\epsilon}$-successor $v\in X$ which satisfies $c(p_i,u)\neq c(p_j,v)$.
The path shown in figure \ref{fig:case2}
is a rainbow path starting in $p_1$ and ending in $p_t$, visiting at least $t-\ceil{1/\epsilon}-\ceil{20/\epsilon}$ vertices,
that uses only two colors not in $c(P)$.
\end{enumerate}
\begin{figure}
\centering
\begin{pspicture}(-5.2,-2)(5.2,2)
\psset{fillstyle=solid,fillcolor=black}
\cnode(-5,0){.08}{p1} \nput{-90}{p1}{$p_1$}
\cnode(-2.5,0){.08}{u} \nput{-90}{u}{$u$}
\cnode(-1.5,0){.08}{v} \nput{-90}{v}{$v$}
\cnode(1.5,0){.08}{pi} \nput{-90}{pi}{$p_i$}
\cnode(2.5,0){.08}{pj} \nput{-90}{pj}{$p_{j}$}
\cnode(5,0){.08}{pt} \nput{-90}{pt}{$p_t$}
\pnode(-4,0){a}
\ncline{p1}{a}
\pnode(-3.5,0){b}
\ncline[nodesep=3pt,linestyle=dotted]{a}{b}
\ncline{b}{u}
\pnode(3.5,0){a1}
\ncline{pj}{a1}
\pnode(4,0){b1}
\ncline[nodesep=3pt,linestyle=dotted]{a1}{b1}
\ncline{b1}{pt}
\psset{fillstyle=none,arcangle=45}
\ncline{v}{pi}
\ncarc{u}{pi}
\ncarc{pj}{v}
\end{pspicture}
\caption{Figure for Case 1}
\label{fig:case1}
\end{figure}
\begin{figure}
\centering
\begin{pspicture}(-5.2,-2)(5.2,2)
\psset{fillstyle=solid,fillcolor=black}
\cnode(-5,0){.08}{p1} \nput{-90}{p1}{$p_1$}
\cnode(-2.5,0){.08}{u} \nput{-90}{u}{$p_i$}
\cnode(-1.5,0){.08}{v} \nput{-90}{v}{$p_j$}
\cnode(1.5,0){.08}{pi} \nput{-90}{pi}{$u$}
\cnode(2.5,0){.08}{pj} \nput{-90}{pj}{$v$}
\cnode(5,0){.08}{pt} \nput{-90}{pt}{$p_t$}
\pnode(-4,0){a}
\ncline{p1}{a}
\pnode(-3.5,0){b}
\ncline[nodesep=3pt,linestyle=dotted]{a}{b}
\ncline{b}{u}
\pnode(3.5,0){a1}
\ncline{pj}{a1}
\pnode(4,0){b1}
\ncline[nodesep=3pt,linestyle=dotted]{a1}{b1}
\ncline{b1}{pt}
\psset{fillstyle=none,arcangle=45}
\ncline{v}{pi}
\ncarc{u}{pi}
\ncarc{pj}{v}
\end{pspicture}
\caption{Figure for Case 2}
\label{fig:case2}
\end{figure}
Notice that by this construction, the path that we get is always such that
some color in $c(p_1,R(P))$ is not used, namely, $c(p_i,p_{i+1})$.
But since $t-\ceil{1/\epsilon}-\ceil{20/\epsilon} > t-a/3$, in both cases the resulting path violates
Lemma \ref{lemma:no-paths}. This is the desired contradiction.
\section{Conclusion}
We have proved that for every $\epsilon > 0$, every proper edge coloring of the graph $K_n$
contains a rainbow path on
\[t > \left(\frac{3}{4}-\epsilon\right)n\]
vertices, assuming that $n$ is larger than some value $n_0$ depending on $\epsilon$.
This is a significant improvement over Theorem \ref{thm:gm2010}.
In~\cite{Akbari2007}, the authors proved that every proper edge coloring of $K_n$ contains Hamiltonian
cycles using at least $(2/3-o(1))n$ different colors.
Clearly, extending a rainbow path on $t$ vertices to a Hamiltonian cycle in $K_n$ gives
a Hamiltonian cycle using at least $t-1$ colors.
So as a bonus we get the following corollary to Theorem \ref{thm:awesome}.
\begin{corollary}
In every proper edge coloring of $K_n$, there are Hamiltonian cycles using at least
$(3/4-o(1))n$
different colors.
\end{corollary}
\chapter{Paths with Repeated Colors}
\section{Introduction}
In this chapter, we take a look at a natural generalization of rainbow paths.
Given a proper edge coloring of $K_n$, a rainbow path uses no color more than once;
now we allow for \emph{$k$-rainbow paths}, using every color at most a fixed number $k$ of times.
Note that $1$-rainbow paths are just rainbow paths, so we have the following theorem.
\begin{thm}[{\cite{GyarfasMhalla2010}}]
\label{thm:1-rainbow}
In every proper edge coloring of $K_n$, there is a $1$-rainbow path on at least
$(2n+1)/3$ vertices.
\end{thm}
Mostly, we are interested in an asymptotic statement: how does the length of a maximum
$k$-rainbow path increase with $k$? The following is a simple bound, although for simplicity
we only prove it for powers of two.
\begin{proposition}
\label{prop:naive}
Let $n = 2^m$, for some $m \in \mathbold{N}$.
In every proper edge coloring of $K_n$ and for any $k\geq 1$,
there is a $k$-rainbow path of length $(1-1/2^k)n - O(k)$.
\end{proposition}
\begin{proof}
We actually prove a stronger statement: that for every vertex $v$ of $K_n$, there is a $k$-rainbow path starting
in $v$ of length $(1-1/2^k)n - O(k)$.
The proof goes by induction on the number of vertices. In the base case of $K_1$, there are no paths of nonzero length, so the
claim is trivially satisfied.
Now assume that the claim is true for some value $n$ and consider the graph $K_{2n}$.
Let $v$ and $k$ be given. Starting in $v$, there is a rainbow path $P=(v,p_2,\dotsc,p_n)$ on
$n$ vertices in $K_{2n}$. This was noted after the proof of Proposition \ref{prop:n/2-bound} in
the previous chapter.
Then we can invoke the induction hypothesis
to get a $(k-1)$-rainbow path starting in $p_n$ and avoiding the vertices of $P$, of length
$(1-1/2^{k-1})n-O(k-1)$.
Appending the two paths together, we get a $k$-rainbow path of length
\[ n-1 + \left(1-\frac{1}{2^{k-1}}\right)n - O(k-1) = \left(1-\frac{1}{2^k}\right)
2n - O(k)\text, \]
concluding the proof.
\end{proof}
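For instance (purely as an illustration of the recursion), for $k=2$ the construction in $K_{2n}$ concatenates a rainbow path on $n$ vertices (length $n-1$) with a rainbow path of length $\tfrac{n}{2}-O(1)$ on the remaining vertices, for a total length of $\tfrac{3}{2}n-O(1)=\bigl(1-\tfrac{1}{2^2}\bigr)\cdot 2n-O(1)$.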
This proof, while simple, already contains an important idea: that we can use $(k-1)$-rainbow paths
to build $k$-rainbow paths.
The rest of this chapter is about improving on this result, but first we define some notation.
Consider the complete graph $K_n = (V,E)$ and a proper edge coloring $c$ of $K_n$.
If $P$ is a $k$-rainbow path with respect to $c$, then let $C_0(P) = c(P)^c$ and for
$i\in [k]$, let $C_i(P)$ be the set of colors used exactly $i$ times
on edges of $P$. Clearly, $c(E) = \bigcup_{i=0}^k C_i(P)$.
Furthermore, for a $k$-rainbow path $P=(p_1,\dotsc,p_t)$, we define
\[C_A(P) = \{c(p_{i},p_{i+1}) : c(p_1,p_{i+1}) \not\in C_k(P)\}\text.\]
Now we define what we mean by a maximal $k$-rainbow path.
A $k$-rainbow path $P=(p_1,\dotsc,p_t)$ is \emph{maximal} if it satisfies both
\begin{gather}
c(p_1, V(P)^c) \subseteq C_k(P)\label{eq:maximality}\\
\shortintertext{and}
c(p_t,V(P)^c) \subseteq C_k(P) \setminus C_A(P)\text.
\label{eq:maximality-rot}
\end{gather}
We shall see that every maximum length $k$-rainbow path is also
maximal in this sense.
\section{Two lemmas on maximal $k$-rainbow paths}
\begin{lemma}
If $P$ is a $(k-1)$-rainbow path, then
there is a maximal $k$-rainbow path $P'$ with
$\abs{C_{k}(P')} \leq \abs{V(P')}-\abs{V(P)}$.
\end{lemma}
\begin{proof}
Let $P$ be any $(k-1)$-rainbow path. Then
$P$ is also a $k$-rainbow path with $C_{k}(P) = \emptyset$.
We will show that for any non-maximal $k$-rainbow path $P$, there is a $k$-rainbow
path $P'$ with $\abs{V(P')}=\abs{V(P)}+1$ and $\abs{C_k(P')}\leq \abs{C_k(P)} +1$.
Since we cannot add vertices indefinitely, we will
eventually get a maximal $k$-rainbow path with the required properties.
So if $P= (p_1,\dotsc,p_t)$ is a non-maximal $k$-rainbow path, then
one of the following is the case.
\begin{enumerate}[{Case }1.]
\parskip0pt
\item If $c(p_1, V(P)^c) \not\subseteq C_{k}(P)$, then
there is an edge $\{p_1,r\}\in E(p_1,V(P)^c)$ colored with a color $c\not\in C_{k}(P)$.
Hence the path $P \cup \{p_1,r\}$ is a $k$-rainbow path with
$\abs{C_{k}(P\cup\{p_1,r\})} \le \abs{C_{k}(P)}+1$.
\item If $c(p_t, V(P)^c) \not\subseteq C_{k}(P)$, then
we can proceed just as in the first case,
after reversing the order of the vertices on $P$.
\item If $c(p_t, V(P)^c) \subseteq C_{k}(P)$ but $c(p_t, V(P)^c) \not\subseteq C_{k}(P)\setminus C_A(P)$,
then there are vertices $p_i\in V(P)$ and $r\in V(P)^c$ such that $c(p_i,p_{i+1})=c(p_t,r)\in C_k(P)$
and, furthermore, $c(p_1,p_{i+1})\not\in C_{k}(P)$.
Recall that $\rho_i \cdot P$ is the path $(p_i,p_{i-1},\dotsc,p_1,p_{i+1},\dotsc,p_t)$.
Then $\rho_i \cdot P$ is a $k$-rainbow path with $\abs{C_k(\rho_i\cdot P)}\leq \abs{C_k(P)}$
and $\abs{V(\rho_i\cdot P)} = \abs{V(P)}$.
Moreover, $\rho_i\cdot P$ ends in $p_t$ and we have $c(p_t,r)\in C_{k-1}(\rho_i\cdot P)$.
This means that
$c(p_t, V(\rho_i\cdot P)^c) \not\subseteq C_{k}(\rho_i\cdot P)$, and
so we can proceed as in the second case.
\end{enumerate}
This completes the proof of the lemma. Note that in addition, we have proved that every maximum
length $k$-rainbow path is maximal.
\end{proof}
\newcommand{\edges}[1]{\mathcal{E}[#1]}
\begin{lemma}
\label{lemma:ck}
If $P$ is a maximal $k$-rainbow path, then
\[ \abs{C_k(P)} \geq (k+1)n - (k+1)\abs{V(P)}\text.\]
\end{lemma}
\begin{proof}
Let $P=(p_1,\dotsc,p_t)$ be a maximal $k$-rainbow path.
If $C\subseteq c(E)$ is a set of colors, then we write
\[ \edges{C} = \{ e\in E(P) : c(e)\in C \} = c^{-1}(C)\cap E(P) \]
for the set of edges of $P$ colored with a color in $C$.
First, we would like to find a lower bound for $\abs{C_A(P) \cap C_k(P)}$.
Since every color appears at most $k$ times on $P$,
we have
\[ k\abs{C_A(P)\cap C_k(P)} \geq \abs{\edges{C_A(P)\cap C_k(P)}}\text. \]
By maximality condition \eqref{eq:maximality},
every vertex $v$ with $c(p_1,v) \not\in C_k(P)$ is in $V(P)$.
Since the colors of the $n-1$ edges incident to $p_1$ are pairwise distinct, there are at least $n-1-\abs{C_k(P)}$ such vertices, and each of them is of the form $p_{i+1}$ and contributes a distinct edge $\{p_i,p_{i+1}\}$ to $\edges{C_A(P)}$. Hence
\[ \abs{\edges{C_A(P)}} \geq n-1-\abs{C_k(P)}\text. \]
Moreover,
\[ \abs{\edges{C_k(P)^c}} = \abs{E(P)}-\abs{\edges{C_k(P)}} = t-1-k\abs{C_k(P)}\text. \]
Then we get
\begin{align*} k\abs{C_A(P)\cap C_k(P)} &\geq \abs{\edges{C_A(P)\cap C_k(P)}}\\
& = \abs{\edges{C_A(P)}\setminus \edges{C_k(P)^c}}\\
&\geq \abs{\edges{C_A(P)}} - \abs{\edges{C_k(P)^c}}\\
&\geq n-1-\abs{C_k(P)} - (t-1-k\abs{C_k(P)})\\
&= n-t +(k-1)\abs{C_k(P)}\text.
\end{align*}
By maximality condition \eqref{eq:maximality-rot},
\begin{align*} k\abs{c(p_t,V(P)^c)} &\leq k\abs{C_k(P) \setminus C_A(P)} \\
&= k\abs{C_k(P)} - k\abs{C_A(P) \cap C_k(P)} \\
&\leq k\abs{C_k(P)} - n + t - (k-1)\abs{C_k(P)}\\
&= \abs{C_k(P)} - n + t\text.
\end{align*}
With $\abs{c(p_t,V(P)^c)} = n-t$, we get
\[ \abs{C_k(P)} \geq (k+1)n - (k+1)t\text, \]
as claimed.
\end{proof}
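As a sanity check (this observation is not needed later), for $k=1$ a maximum rainbow path $P$ on $t$ vertices is a maximal $1$-rainbow path, and $C_1(P)=c(P)$ has exactly $t-1$ elements; Lemma~\ref{lemma:ck} then gives $t-1\geq 2n-2t$, that is, $t\geq(2n+1)/3$, recovering Theorem~\ref{thm:1-rainbow}.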
\section{A theorem on the length of $k$-rainbow paths}
\begin{thm}
\label{thm:k-bound}
In every proper edge coloring of $K_n$ and for any $k\geq 1$,
there is a $k$-rainbow path on at least
\[ \left(1-\frac{2}{(k+2)!}\right)n \]
vertices.
\end{thm}
\begin{proof}
The proof goes by induction on $k$.
The induction basis is provided by Theorem \ref{thm:1-rainbow},
since \[ \frac{2n+1}{3} \geq \left(1-\frac{2}{3!}\right)n\text.\]
In the induction step, assume that there is a
$(k-1)$-rainbow path using
\[t_{k-1} > \left(1-\frac{2}{(k+1)!}\right)n\] vertices.
Then, by the first lemma of the previous section, there is a maximal $k$-rainbow path $P$ with $\abs{C_k(P)}\leq \abs{V(P)}-t_{k-1}$.
Using Lemma~\ref{lemma:ck}, we get
\[ \abs{C_k(P)} \geq (k+1)n - (k+1)\abs{V(P)}\text,\]
so if we write $t_k$ for $\abs{V(P)}$, then
\[ t_k-t_{k-1} \geq (k+1)n - (k+1)t_k\text,\]
or
\[ (k+2)t_k \geq t_{k-1} + (k+1)n\text.\]
Using the induction hypothesis, we get
\begin{align*}
t_k& \geq \frac{t_{k-1} + (k+1)n}{k+2}\\
&> \frac{\left(1-\frac{2}{(k+1)!}\right)n}{k+2} + \frac{(k+1)n}{k+2}\\
&= \frac{n}{k+2} + \frac{(k+1)n}{k+2} - \frac{2n}{(k+2)!}\\
&= \frac{(k+2)n}{k+2} - \frac{2n}{(k+2)!}\\
&= \left(1 - \frac{2}{(k+2)!}\right)n\text,
\end{align*}
so $P$ is a $k$-rainbow path of sufficient length.
\end{proof}
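As a quick check of the recursion (illustrative only), for $k=2$ the recurrence together with the base case gives
\[ t_2\geq\frac{t_1+3n}{4}\geq\frac{\frac{2n}{3}+3n}{4}=\frac{11n}{12}=\left(1-\frac{2}{4!}\right)n\text, \]
in line with the claimed bound.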
\section{Conclusion}
From the statement of Theorem \ref{thm:k-bound}, it is easily seen that for fixed $n$,
the fraction of vertices not included in a maximum $k$-rainbow path is of the order $1/k!$.
This is clearly an improvement over Proposition \ref{prop:naive}, which only
shows that this fraction
is of the order $1/2^k$;
the decay guaranteed by Theorem \ref{thm:k-bound} is asymptotically faster.
In the proof, we essentially used a generalized version of the argument
made by Gyárfás and Mhalla in~\cite{GyarfasMhalla2010}, a discussion of which can be found in
the previous chapter.
In this light, it might be interesting to generalize the techniques used in the proof of
the bound of $(3/4-o(1))n$ for the length of maximum rainbow paths in $K_n$, and to
apply them to $k$-rainbow paths.
\chapter{Conclusion}
In the preceding chapters, we have derived two novel results on the existence of certain paths
in proper edge colorings of the complete graph $K_n$.
Most importantly, we have shown that in every proper edge coloring of $K_n$ there is a
rainbow path of length at least \[\left(\frac{3}{4}-o(1)\right)n\text.\]
This result improves on the previously best known bound of $(2n+1)/3$ proved by
Gyárfás and Mhalla in~\cite{GyarfasMhalla2010}.
As a corollary, there are Hamiltonian cycles using at least $(3/4-o(1))n$ colors;
here we improve on a result by Akbari, Etesami, Mahini and Mahmoody~\cite{Akbari2007}.
Moreover, we have proved that in every proper edge coloring of $K_n$,
there is a $k$-rainbow path on at least \[ \left(1-\frac{2}{(k+2)!}\right)n \]
vertices, for any $k>0$.
Thus, for fixed $n$, the number of vertices not included in a maximum $k$-rainbow
path decreases with $k$
faster than any exponential function, and hence asymptotically faster than what
we got using a naive approach.
We believe that the techniques used in the proof
of the first result, which relied heavily on pigeonhole-style arguments,
do not immediately lend themselves to proving stronger bounds.
Moreover, the proof itself does not seem to reveal deep insights into
the structure of rainbow paths, leading us to believe that different
methods will have to be used to prove the existence of rainbow paths of length $n-o(n)$.
However, it seems likely that applying the same methods, suitably generalized,
to the problem of $k$-rainbow paths in $K_n$ might prove to be fruitful. Indeed, we would
expect the resulting bound to be asymptotically stronger (in $k$, for fixed $n$)
than our bound, whose proof relied on comparatively simple techniques.
\backmatter
\bibliographystyle{alpha}
| {
"timestamp": "2012-07-05T02:01:04",
"yymm": "1207",
"arxiv_id": "1207.0840",
"language": "en",
"url": "https://arxiv.org/abs/1207.0840",
"abstract": "In a properly edge colored graph, a subgraph using every color at most once is called rainbow. In this thesis, we study rainbow cycles and paths in proper edge colorings of complete graphs, and we prove that in every proper edge coloring of K_n, there is a rainbow path on (3/4-o(1))n vertices, improving on the previously best bound of (2n+1)/3 from Gyarfas and Mhalla. Similarly, a k-rainbow path in a proper edge coloring of K_n is a path using no color more than k times. We prove that in every proper edge coloring of K_n, there is a k-rainbow path on (1-2/(k+1)!)n vertices.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "On Rainbow Cycles and Paths",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969679646668,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7099044411374612
} |
https://arxiv.org/abs/1203.6397 | Max-Sum Diversification, Monotone Submodular Functions and Dynamic Updates | Result diversification is an important aspect in web-based search, document summarization, facility location, portfolio management and other applications. Given a set of ranked results for a set of objects (e.g. web documents, facilities, etc.) with a distance between any pair, the goal is to select a subset $S$ satisfying the following three criteria: (a) the subset $S$ satisfies some constraint (e.g. bounded cardinality); (b) the subset contains results of high "quality"; and (c) the subset contains results that are "diverse" relative to the distance measure. The goal of result diversification is to produce a diversified subset while maintaining high quality as much as possible. We study a broad class of problems where the distances are a metric, where the constraint is given by independence in a matroid, where quality is determined by a monotone submodular function, and diversity is defined as the sum of distances between objects in $S$. Our problem is a generalization of the {\em max sum diversification} problem studied in \cite{GoSh09} which in turn is a generalization of the {\em max sum $p$-dispersion problem} studied extensively in location theory. It is NP-hard even with the triangle inequality. We propose two simple and natural algorithms: a greedy algorithm for a cardinality constraint and a local search algorithm for an arbitrary matroid constraint. We prove that both algorithms achieve constant approximation ratios. | \section{The Greedy Algorithm Applied to Diversification with a Matroid
Constraint}
\label{greedy-matroid}
We observe that
for the more general matroid constraint diversification problem,
the greedy algorithm in section \ref{sec:submo} no longer achieves any constant approximation ratio. More specifically, consider the max-sum diversification problem
as in Gollapudi and Sharma \cite{GoSh09} (that is, for a modular quality
function $f()$) but now subject to a partition matroid constraint.
Partition the universe into
$A = \{a,b\}$ with cardinality constraint 1 and $C = \{c_1, c_2, \ldots c_r\}$ with no cardinality
constraint. Let the objective be
$f(S) = \sum_{u \in S} q_u + \sum_{\{u,v\} \subseteq S} d(u,v)$
where the quality and distance functions are defined as follows:
$q(a) = \ell + \epsilon$, $q(x) = 0$ for all $x \neq a$; furthermore,
$d(b,x) = \ell$ for all $x \neq b$, and $d(u,x) = \epsilon$ for all distinct $u,x \neq b$. The greedy algorithm
(starting with $a$ or with the best pair $(a,b)$) will yield
$f(S) = f(C \cup \{a\}) = \ell + \epsilon + \epsilon \cdot {r \choose 2} + r \epsilon$
while the optimal solution
will be $f(C \cup \{b\}) = r \cdot \ell + \epsilon \cdot {r \choose 2}$. Hence the
approximation can be made arbitrarily bad by choosing
$\epsilon = \frac{1}{{r \choose 2}}$ and making $r$ sufficiently large.
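A quick numerical check of this gap (purely illustrative; the concrete values of $\ell$, $\epsilon$ and $r$ below are hypothetical choices):
\begin{verbatim}
# Illustrative check of the example above: greedy value vs. optimal value.
from math import comb

def greedy_value(L, eps, r):
    # f(C + {a}) = (L + eps) + eps * C(r,2) + r * eps
    return (L + eps) + eps * comb(r, 2) + r * eps

def optimal_value(L, eps, r):
    # f(C + {b}) = r * L + eps * C(r,2)
    return r * L + eps * comb(r, 2)

L, r = 1.0, 1000
eps = 1.0 / comb(r, 2)
print(optimal_value(L, eps, r) / greedy_value(L, eps, r))
# The ratio grows roughly like r/2, i.e., without bound as r increases.
\end{verbatim}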
By the reduction in \cite{GoSh09} to the metric dispersion problem, the above example
shows that the greedy algorithm will also suffer the same unbounded
approximation ratio for the metric dispersion problem.
\section{Conclusion}
We study the max-sum diversification problem with monotone submodular set functions and give a natural 2-approximation
greedy algorithm for the problem when there is a cardinality constraint.
We further extend the problem to matroid constraints and give a 2-approximation
local search algorithm for the problem.
We examine the dynamic update setting for modular set functions, where the
weights and distances are constantly changing over time and the goal is to maintain a solution with good quality with a limited number of updates. We propose a simple update rule: the oblivious (single swap) update rule, and show that if the weight-perturbation is not too large, we can maintain an approximation ratio of 3 with a single update.
The diversification problem has many important applications and there are many interesting future directions.
Although in this paper we restricted ourselves to the max-sum objective, there are many other well-defined notions of diversity that can be considered; see for example~\cite{Chandra:1996:FDR:645898.756652} and \cite{GoSh09}. The max-sum case can also be studied for specific metrics such as the $\ell_1$-norm
in Euclidean space as
considered by Fekete and Meijer \cite{FeketeM03} who provide a linear
time optimal algorithm for constant $p$ and a PTAS when $p$ is
part of the input. Their PTAS
algorithm also provides a $(2+\epsilon)$-approximation for the $\ell_2$-norm.
Their algorithms exploit the geometric nature of the metric space. Other
specific metric spaces are also of interest.
In the general matroid case, the greedy algorithm given in Section~\ref{sec:submo} fails to achieve any constant approximation ratio, but how about other ``greedy-like algorithms'' such as the partial enumeration
greedy method used (for example) successfully for monotone submodular
maximization subject to
a knapsack constraint in Sviridenko \cite{Sviridenko04}? Can such a technique
also be used to provide an approximation for our diversification problem?
Can our results be extended to provide a constant approximation for
the diversification problem subject to a knapsack constraint?
In a dynamic update setting, we only considered the oblivious
single swap update rule.
It is interesting to see if it is possible to maintain a better ratio than 3 with a limited number of updates, by larger cardinality swaps, and/or by
a non-oblivious update rule. We leave this as an interesting open question.
Finally, a crucial property used throughout our results
is the triangle inequality.
For a relaxed version of the triangle inequality
can we relate the approximation ratio to the parameter of a relaxed triangle inequality?
\section{Dynamic Update}
\label{sec:dynup}
In this section, we discuss dynamic updates for the max-sum diversification problem with modular set functions.
The setting is that we have initially computed a good solution with some approximation guarantee. The weights are changing over time, and upon seeing a change of weight, we want to maintain the quality (the same approximation ratio) of the solution by modifying the current solution without completely recomputing it.
We use the number of updates to quantify the amount of modification needed to
maintain the desired approximation.
An {\em update} is a single swap of an element in $S$ with an element outside $S$, where $S$ is the current solution. We ask the following question:
\begin{quote}
Can we maintain a good approximation ratio with a limited number of updates?
\end{quote}
Since the best known approximation algorithm achieves approximation ratio of 2, it is natural to ask whether it is possible to maintain that ratio through local updates.
And if it is possible, how many such updates are required.
To simplify the analysis, we restrict to the following oblivious update rule.
Let $S$ be the current solution, and let $u$ be an element in $S$ and $v$ be an element outside $S$. The marginal gain
$v$ has over $u$ with respect to $S$ is defined to be
$$\phi_{v\rightarrow u}(S)=\phi(S\setminus\{u\}\cup\{v\})-\phi(S).$$
\noindent{\sc Oblivious (single element swap) Update Rule}\\
Find a pair of elements $(u,v)$ with $u\in S$ and $v\not\in S$ maximizing $\phi_{v\rightarrow u}(S)$.
If $\phi_{v\rightarrow u}(S)\le 0$, do nothing; otherwise swap $u$ with $v$.
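For concreteness, here is a minimal sketch of this rule for the modular objective $\phi(S)=\sum_{u\in S}w(u)+\lambda\sum_{\{u,v\}\subseteq S}d(u,v)$; the data layout and names below are our own assumptions, not notation fixed by the paper.
\begin{verbatim}
# Illustrative sketch of the oblivious single-swap update rule.
from itertools import combinations

def phi(S, w, d, lam):
    return (sum(w[u] for u in S)
            + lam * sum(d[frozenset(p)] for p in combinations(S, 2)))

def oblivious_update(S, U, w, d, lam):
    """One update: best swap of u in S with v outside S, if it improves phi."""
    base = phi(S, w, d, lam)
    best_gain, best_swap = 0, None
    for u in S:
        for v in U - S:
            gain = phi((S - {u}) | {v}, w, d, lam) - base
            if gain > best_gain:
                best_gain, best_swap = gain, (u, v)
    if best_swap is None:
        return S                      # no improving swap: do nothing
    u, v = best_swap
    return (S - {u}) | {v}
\end{verbatim}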
Since the oblivious local search in Theorem~\ref{thm:ls} uses the same
single element swap update rule, it is not hard to see that we can maintain
the approximation ratio of 2.
However, it is not clear how many updates
are needed to maintain that ratio. We conjecture that the number of updates
can be made relatively small (i.e., constant) by a non-oblivious update rule and carefully maintaining some desired configuration of the solution set. We leave this as an open question.
However, we are able to show that if we relax the requirement slightly, i.e.,
aiming for an approximation ratio of 3 instead of 2, and restrict slightly the magnitude of the weight-perturbation,
we are able to maintain the desired ratio with a single update.
Note that the weight restriction is only used for the case of a weight decrease
(Theorem \ref{thm:wd}).
We divide weight-perturbations into four types: a weight increase (decrease) which occurs on an element,
and a distance increase (decrease) which occurs between two elements.
We denote these four types by {\sc (i), (ii), (iii), (iv)}, and we prove a corresponding theorem for each case.
Before getting to the theorems, we first prove the following two lemmas.
After a weight-perturbation, let $S$ be the
current solution set, and $O$ be the
optimal solution.
Let $S^*$ be the solution set after a single update using the oblivious update rule,
and let $\Delta=\phi(S^*)-\phi(S)$.
We again let $Z=O\cap S$, $X=O\setminus S$ and $Y=S\setminus O$.
\begin{lemma}
\label{lem:up}
There exists $z\in Y$ such that $$\phi_z(S\setminus\{z\})\le\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
\end{lemma}
\proof
If we sum up the marginal gains $\phi_y(S\setminus\{y\})$ over all $y\in Y$, then, since $f$ is modular, each distance within $Y$ is counted twice and each distance between $Z$ and $Y$ is counted once, so
$$\sum_{y\in Y}\phi_y(S\setminus\{y\}) = f(Y)+2\lambda d(Y)+\lambda d(Z,Y).$$
By
an averaging argument, there must exist $z\in Y$ such that
$$\phi_z(S\setminus\{z\})\le\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
\qed
Lemma~\ref{lem:up} ensures the existence of an element in $S$
such that after removing it from $S$, the objective function value does not decrease much.
The following lemma ensures that there always exists
an element outside $S$ which can increase the objective function value substantially if we bring it in.
\begin{lemma}
\label{lem:low}
If $\phi(S^*)< \frac{1}{3}\phi(O)$, then for all $y\in Y$, there exists $x\in X$ such that
$$\phi_x(S\setminus\{y\})>\frac{1}{|X|}[2\phi(Z)+3\phi(Y)+3\lambda d(Z,Y)+3\Delta].$$
\end{lemma}
\proof
For any $y\in Y$, and by Lemma~\ref{lem:rrt}, we have
\begin{eqnarray*}
&&f(X)+\lambda d(S\setminus\{y\}, X)\\
&=&f(X)+\lambda d(Z,X)+\lambda d(Y\setminus\{y\}, X)\\
&\ge& f(X)+\lambda d(Z,X)+\lambda d(X).
\end{eqnarray*}
Note that since $\phi(S^*)=\phi(S)+\Delta< \frac{1}{3}\phi(O)$, we have
\begin{eqnarray*}
\phi(O)&=&\phi(Z)+f(X)+\lambda d(X)+\lambda d(Z,X)\\
&>&3\phi(Z)+3\phi(Y)+3\lambda d(Z,Y)+3\Delta.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
&&f(X)+\lambda d(S\setminus\{y\}, X)\\
&\ge& f(X)+\lambda d(Z,X)+\lambda d(X)\\
&>&2\phi(Z)+3\phi(Y)+3\lambda d(Z,Y)+3\Delta.
\end{eqnarray*}
This implies there must exist $x\in X$ such that
$$\phi_x(S\setminus\{y\})>\frac{1}{|X|}[2\phi(Z)+3\phi(Y)+3\lambda d(Z,Y)+3\Delta].$$
\qed
Combining Lemma~\ref{lem:up} and \ref{lem:low}, we can give a lower bound for $\Delta$.
We have the following corollary.
\begin{corollary}
\label{cor:delta}
If $\phi(S^*)< \frac{1}{3}\phi(O)$, then we have $|Y|> 3$ and furthermore
$$\Delta>\frac{1}{|Y|-3}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)].$$
\end{corollary}
\proof
By Lemma~\ref{lem:up}, there exists $y\in Y$ such that $$\phi_y(S\setminus\{y\})\le\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
Since $\phi(S^*)< \frac{1}{3}\phi(O)$,
by Lemma~\ref{lem:low}, for this particular $y$, there exists $x\in X$ such that
$$\phi_x(S\setminus\{y\})>\frac{1}{|X|}[2\phi(Z)+3\phi(Y)+3\lambda d(Z,Y)+3\Delta].$$
Since $|X|=|Y|$, we have
$$\Delta>\frac{1}{|Y|}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)+3\Delta].$$
If $|Y|\le 3$, then since $\Delta\ge 0$ the right-hand side is at least $\Delta$, a contradiction. Therefore $|Y|> 3$.
Rearranging the inequality, we have
$$\Delta>\frac{1}{|Y|-3}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)].$$
\qed
\begin{corollary}
\label{cor:small-p}
If $p \leq 3$, then for any weight or distance perturbation, we
can maintain an approximation ratio of 3 with a single update.
\end{corollary}
\proof
This is an immediate consequence of Corollary~\ref{cor:delta} since
$p \geq |Y|$.
\qed
Given Corollary~\ref{cor:small-p}, we will assume $p > 3$ for
all the remaining results in this section. We first discuss weight-perturbations on elements.
\begin{theorem}
\label{thm:wi}
{\sc [type (i)]}
For any weight increase, we can maintain an approximation ratio of 3 with a single update.
\end{theorem}
\proof
Suppose we increase the weight of $s$ by $\delta$.
Since the optimal solution can increase by at most $\delta$, if $\Delta \geq \frac{1}{3} \delta$,
then we have maintained a ratio of 3. Hence we assume
$\Delta < \frac{1}{3} \delta$.
If $s\in S$ or $s\not\in O$, then it is clear the ratio of $3$ is maintained. The only interesting case is when $s\in O\setminus S$. Suppose, for the sake of contradiction, that $\phi(S^*)<\frac{1}{3}\phi(O)$, then by Corollary~\ref{cor:delta}, we have $|Y|> 3$ and
$$\Delta>\frac{1}{|Y|-3}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)].$$
Since $\Delta<\frac{1}{3}\delta$, we have
$$\delta>\frac{1}{|Y|-3}[6\phi(Z)+6f(Y)+3\lambda d(Y)+6\lambda d(Z,Y)].$$
On the other hand, by Lemma~\ref{lem:up}, there exists $y\in Y$ such that $$\phi_y(S\setminus\{y\})\le\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
Now consider swapping $s$ with $y$: the loss from removing $y$ from $S$ is
$\phi_y(S\setminus\{y\})$, while the gain that $s$ brings to the set $S\setminus\{y\}$ is at
least $\delta$ (as the weight of $s$ is increased by $\delta$ and its original weight is
non-negative). Therefore the marginal gain of swapping $s$ with $y$ satisfies
$\phi_{s \rightarrow y}(S) \geq \delta - \phi_y(S\setminus\{y\})$ and hence
$$\phi_{s\rightarrow y}(S)\ge\delta-\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
However, $\phi_{s\rightarrow y}(S)\le\Delta<\frac{1}{3}\delta$. Therefore, we have
$$\frac{1}{3}\delta>\delta-\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
This implies
$$\delta<\frac{1}{|Y|}[\frac{3}{2}f(Y)+3\lambda d(Y)+\frac{3\lambda}{2}d(Z,Y)],$$
which is a contradiction.
\qed
\begin{theorem}
\label{thm:wd}
{\sc [type (ii)]}
For a weight decrease of magnitude $\delta$,
we can maintain an approximation ratio of 3 with
$$\lceil\log_{\frac{p-2}{p-3}}\frac{w}{w-\delta}\rceil$$
updates, where $w$ is the weight of the solution before the weight decrease.
In particular, if $\delta\le \frac{w}{p-2}$, we only need a single update.
\end{theorem}
\proof
Suppose we decrease the weight of $s$ by $\delta$. Without loss of generality, we can assume $s\in S$.
Suppose, for the sake of contradiction, that $\phi(S^*)<\frac{1}{3}\phi(O)$, then
by Corollary~\ref{cor:delta}, we have $|Y|> 3$ and
\begin{eqnarray*}
\Delta&>&\frac{1}{|Y|-3}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)]\\
&\ge&\frac{1}{p-3}\phi(S).
\end{eqnarray*}
Therefore $$\phi(S^*)>\frac{p-2}{p-3}\phi(S).$$
This implies that we can maintain the approximation ratio with
$$\lceil\log_{\frac{p-2}{p-3}}\frac{w}{w-\delta}\rceil$$ number of updates.
In particular, if $\delta\le \frac{w}{p-2}$, we only need a single update.
\qed
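As an illustration of the bound in Theorem~\ref{thm:wd}, the following short sketch (with hypothetical numbers) evaluates the guaranteed number of updates.
\begin{verbatim}
import math

def updates_needed(p, w, delta):
    # ceil( log_{(p-2)/(p-3)} ( w / (w - delta) ) ), as in Theorem thm:wd
    return math.ceil(math.log(w / (w - delta), (p - 2) / (p - 3)))

# For example, with p = 10 and w = 100: a decrease of delta = 12.5 = w/(p-2)
# needs a single update, while a larger decrease of delta = 30 needs
# updates_needed(10, 100, 30) = 3 updates.
\end{verbatim}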
We now discuss perturbations of the distance between two elements (types {\sc (iii)} and {\sc (iv)}). We assume that such perturbations preserve the metric condition.
Furthermore, recall that we assume $p>3$; otherwise, by Corollary~\ref{cor:small-p}, the ratio of 3 is maintained with a single update.
\begin{theorem}
\label{thm:di}
{\sc [type (iii)]}
For any distance increase, we can maintain an approximation ratio of 3 with a single update.
\end{theorem}
\proof
Suppose we increase the distance of $(x,y)$ by $\delta$.
If $\Delta\ge\frac{1}{3}\delta$, then, as in the proof of Theorem~\ref{thm:wi}, the ratio of 3 is maintained; hence we assume $\Delta<\frac{1}{3}\delta$.
Suppose, for the sake of contradiction, that $\phi(S^*)<\frac{1}{3}\phi(O)$; then
by Corollary~\ref{cor:delta}, we have $|Y|> 3$ and
$$\Delta>\frac{1}{|Y|-3}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)].$$
Since $\Delta<\frac{1}{3}\delta$, we have
\begin{eqnarray*}
\delta&>&\frac{3}{|Y|-3}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)]\\
&\ge&\frac{3}{p-3}\phi(S).
\end{eqnarray*}
If both $x$ and $y$ are in $S$, then it is not hard to see that the ratio of $3$ is maintained. Otherwise,
there are two cases:
\be
\item
Exactly one of $x$ and $y$ is in $S$; without loss of generality, we assume $y\in S$.
Consider swapping $x$ with any vertex $z\in S$ other than $y$. Since after the swap both $x$ and $y$ are in the solution, by the triangle inequality of the metric condition we have
$$\Delta\ge(p-1)\delta-\phi(S)>(\frac{2}{3}p-2)\delta.$$
Since $p>3$, we have
$$\Delta>(\frac{2}{3}p-2)\delta\ge\frac{2}{3}\delta>2\Delta,$$
which is a contradiction.
\item
Both $x$ and $y$ are outside $S$.
By Lemma~\ref{lem:up},
there exists $z\in Y$ such that $$\phi_z(S\setminus\{z\})\le\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
Consider the set $T=\{x,y\}$ together with $S\setminus\{z\}$; by the triangle inequality of the metric condition, we have $d(T,S\setminus\{z\})\ge (p-1)\delta$. Therefore, at least one of $x$ and $y$, say $x$, has the following property:
$$d(x,S\setminus\{z\})\ge\frac{(p-1)\delta}{2}.$$
Now consider swapping $x$ with $z$; we have
$$\Delta\ge\frac{(p-1)}{2}\delta-\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
Since $\Delta<\frac{1}{3}\delta$, we have
$$\frac{1}{3}\delta>\frac{(p-1)}{2}\delta-\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
This implies that
$$\delta<\frac{6}{3p-5}\cdot\frac{1}{|Y|}[f(Y)+2\lambda d(Y)+\lambda d(Z,Y)].$$
Since $p>3$, we have
$$\delta<\frac{1}{|Y|}[\frac{6}{7}f(Y)+\frac{12\lambda}{7}d(Y)+\frac{6\lambda}{7}d(Z,Y)],$$
which is a contradiction.
\ee
Therefore, $\phi(S^*)\ge\frac{1}{3}\phi(O)$; this completes the proof.
\qed
\begin{theorem}
\label{thm:de}
{\sc [type (iv)]}
For any distance decrease, we can maintain an approximation ratio of 3 with a single update.
\end{theorem}
\proof
Suppose we decrease the distance of $(x,y)$ by $\delta$. Without loss of generality,
we assume both $x$ and $y$ are in $S$, for otherwise, it is not hard to see the ratio
of 3 is maintained.
Suppose, for the sake of contradiction, that $\phi(S^*)<\frac{1}{3}\phi(O)$, then
by Corollary~\ref{cor:delta}, we have $|Y|> 3$ and
\begin{eqnarray*}
\Delta&>&\frac{1}{|Y|-3}[2\phi(Z)+2f(Y)+\lambda d(Y)+2\lambda d(Z,Y)]\\
&\ge&\frac{1}{p-3}\phi(S).
\end{eqnarray*}
If $\Delta\ge\delta$, then the ratio of 3 is maintained.
Otherwise, $$\delta>\Delta\ge\frac{1}{p-3}\phi(S).$$
By the triangle inequality of the metric condition, we have
$$\phi(S)\ge(p-2)\delta>\frac{p-2}{p-3}\phi(S)>\phi(S),$$
which is a contradiction.
\qed
Combining Theorems~\ref{thm:wi}, \ref{thm:wd}, \ref{thm:di} and \ref{thm:de},
we have the following corollary.
\begin{corollary}
\label{cor:main}
If the initial solution achieves an approximation ratio of 3,
then for any weight-perturbation of {\sc type (i), (iii)} or {\sc (iv)},
and for any weight-perturbation of {\sc type (ii)} whose magnitude is at most
$\frac{1}{p-2}$ of the current solution value when $p>3$ (and is arbitrary when $p\le 3$),
we can maintain the ratio of 3 with a single update.
\end{corollary}
\section{Experiments}
\label{sec:expmt}
While the results in this paper are mainly theoretical in nature,
we present some
experimental results in
this section to provide additional insight
about the relative performance and efficiency of our algorithms.
In section \ref{synthetic-greedy}, we will first consider the relative
performance of two greedy algorithms and local search with respect to
a synthetic
data set, followed in section \ref{letor-greedy}
by similar experiments for a
well-known dataset (LETOR) that has been actively used for different information
and machine learning problems and especially for ``learning to rank'' research \cite{Qin-etal10}. In section \ref{dynamic-experiments}, we again consider the
synthetic data set as in section \ref{synthetic-greedy} and make some
observations on the performance of local search for dynamically changing data.
For the synthetic data as well as the LETOR data set, we consider the max-sum diversification problem with modular set functions
and a cardinality constraint $p$ so as to be able to compare the greedy
and local search algorithms as well as comparing our greedy algorithm with
the algorithm
of Gollapudi and Sharma \cite{GoSh09} whose work motivated this paper.
We will refer to their diversification algorithm as Greedy A. We recall
that their algorithm
consists of a reduction to the max-sum p-dispersion problem and then uses the
Hassin, Rubinstein and Tamir \cite{HassinRT97}
algorithm that greedily chooses {\it edges}
yielding an approximation ratio of $2$. We will experimentally compare the performance and
time complexity of their algorithm against our greedy by vertices algorithm
which also has approximation ratio 2. We will refer to our greedy algorithm
as Greedy B. We also consider how much a limited amount of
local search improves the results obtained by our Greedy B algorithm. That is, we follow Greedy B by a 1-swap local
search algorithm that looks for the best improvement in each iteration.
We refer to this local search algorithm as LS with the understanding that it is
being initialized by Greedy B and terminated when either a local maximum is
reached or when the algorithm runs for
ten times the time of the Greedy B initialization.
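To fix ideas, the following sketch (Python, illustrative names) shows a vertex-by-vertex greedy of the kind used by Greedy B for the modular case; it adds in each step the element with the largest immediate marginal contribution, which conveys the flavor of the algorithm but is not necessarily the exact selection rule used in our implementation (which optimizes the potential function of Section~\ref{sec:submo}).
\begin{verbatim}
def greedy_by_vertices(U, w, d, lam, p):
    # Vertex-by-vertex greedy sketch for the modular case: repeatedly add
    # the element with the largest immediate contribution w(v) + lam*d(v, S).
    # (Illustrative only; the exact criterion used by Greedy B optimizes a
    # closely related potential function.)
    S = set()
    while len(S) < p:
        v = max(U - S, key=lambda x: w[x] + lam * sum(d(x, u) for u in S))
        S.add(v)
    return S
\end{verbatim}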
\begin{table*}[t]
\caption{Comparison of Greedy A and Greedy B ($N=50$)}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$p$ & $OPT$ & $Greedy A$ & $Greedy B$ & $AF_{Greedy A}$ & $AF_{Greedy B}$ & $AF_{\frac{Greedy B}{Greedy A}}$ \\
\hline
3 & 7.088 & 6.140 & 7.088 & 1.154 & 1.000 & 1.154 \\
4 & 10.02 & 10.020 & 10.000 & 1.000 & 1.002 & 0.998 \\
5 & 12.571 & 12.470 & 12.570 & 1.008 & 1.000 & 1.008 \\
6 & 15.315 & 15.060 & 15.060 & 1.017 & 1.017 & 1.000 \\
7 & 18.54 & 17.290 & 17.949 & 1.072 & 1.033 & 1.038 \\
\hline
\end{tabular}
\label{table:4}
\end{table*}
\begin{table*}[t]
\caption{Comparison of Greedy A, Greedy B and LS}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$p$ & $Greedy A$ & $Greedy B$ & $LS$
& $AF_{\frac{Greedy B}{Greedy A}}$ & $AF_{\frac{LS}{Greedy B}}$ & $Time_{Greedy A}$ & $Time_{Greedy B}$ & $Time_{\frac{Greedy A}{Greedy B}}$\\
\hline
5 & 13.996 & 13.999 & 13.999 & 1.000 & 1.000 & 2365 ms & 426 ms & 5.552 \\
10 & 37.570 & 37.970 & 37.970 & 1.011 & 1.000 & 2370 ms & 504 ms & 4.702 \\
15 & 69.590 & 71.600 & 71.600 & 1.029 & 1.000 & 2694 ms & 421 ms & 6.399 \\
20 & 110.900 & 113.640 & 113.640 & 1.025 & 1.000 & 3280 ms & 470 ms & 6.979 \\
25 & 154.590 & 162.400 & 162.480 & 1.051 & 1.000 & 3223 ms & 587 ms & 5.491 \\
30 & 192.260 & 220.450 & 220.730 & 1.147 & 1.001 & 4364 ms & 785 ms & 5.559 \\
35 & 253.790 & 288.490 & 288.970 & 1.137 & 1.002 & 4762 ms & 758 ms & 6.282 \\
40 & 317.290 & 366.520 & 367.215 & 1.155 & 1.002 & 4599 ms & 864 ms & 5.323 \\
45 & 397.230 & 454.500 & 455.100 & 1.144 & 1.001 & 6088 ms & 1028 ms & 5.922 \\
50 & 486.440 & 552.500 & 553.150 & 1.136 & 1.001 & 5323 ms & 1155 ms & 4.609 \\
55 & 584.830 & 660.430 & 661.370 & 1.129 & 1.001 & 7360 ms & 1536 ms & 4.792 \\
60 & 686.970 & 778.140 & 779.220 & 1.133 & 1.001 & 5585 ms & 1684 ms & 3.317 \\
65 & 805.520 & 905.660 & 906.880 & 1.124 & 1.001 & 7349 ms & 1855 ms & 3.962 \\
70 & 930.600 & 1042.970 & 1044.120 & 1.121 & 1.001 & 5381 ms & 2041 ms & 2.636 \\
75 & 1054.940& 1189.970 & 1191.360 & 1.128 & 1.001 & 8480 ms & 2212 ms & 3.834 \\
\hline
\end{tabular}
\label{table:5}
\end{table*}
\begin{table*}[t]
\caption{Comparison of Greedy A and Greedy B (N=50, Average over 5 Queries)}
\centering
\begin{tabular}{|c|c|c|}
\hline
$p$ & $AF_{Greedy A}$ & $AF_{Greedy B}$ \\
\hline
3 & 1.030 & 1.000 \\
4 & 1.009 & 1.004 \\
5 & 1.020 & 1.012 \\
6 & 1.059 & 1.018 \\
7 & 1.096 & 1.022 \\
\hline
\end{tabular}
\label{table:6}
\end{table*}
\begin{table*}[t]
\caption{Comparison of Greedy A, Greedy B and LS (Average over 5 Queries)}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$p$ & $AF_{\frac{Greedy B}{Greedy A}}$ & $AF_{\frac{LS}{Greedy B}}$ & $Time_{Greedy A}$ & $Time_{Greedy B}$ & $Time_{\frac{Greedy A}{Greedy B}}$\\
\hline
5 & 1.005 & 1 & 1714 & 303 & 5.657 \\
10 & 1.016 & 1 & 1997 & 289 & 6.910 \\
15 & 1.036 & 1 & 2387 & 381 & 6.265 \\
20 & 1.056 & 1.002 & 2767 & 522 & 5.301 \\
25 & 1.047 & 1.003 & 3280 & 574 & 5.714 \\
30 & 1.086 & 1.003 & 2959 & 537 & 5.510 \\
35 & 1.081 & 1.003 & 3387 & 622 & 5.445 \\
40 & 1.105 & 1.003 & 3208 & 704 & 4.557 \\
45 & 1.119 & 1.002 & 4154 & 837 & 4.963 \\
50 & 1.146 & 1.002 & 4126 & 1035 & 3.986 \\
55 & 1.141 & 1.002 & 5559 & 1298 & 4.283 \\
60 & 1.156 & 1.002 & 5059 & 1411 & 3.585 \\
65 & 1.152 & 1.002 & 5722 & 1534 & 3.730 \\
70 & 1.157 & 1.002 & 4766 & 1691 & 2.818 \\
75 & 1.151 & 1.001 & 7272 & 2180 & 3.336 \\
\hline
\end{tabular}
\label{table:7}
\end{table*}
\subsection{Experiments on synthetic data sets}
\label{synthetic-greedy}
Our synthetic data sets are generated by uniformly at random assigning
each vertex $v$ (i.e. element of
the metric space) a value $f(v) \in [0,1]$, and each distance $d(u,v)$
a value in [1,2]. We note that the \{1,2\} metric is the metric relative to
which the suggested hardness of approximation is derived. We construct
such data sets for various values of $N$, the size of the universe, and for
$p$, the cardinality constraint. In all cases, we set $\lambda = .2$,
where $\lambda$ is the parameter defining
the relative weight between
the quality $f(S)$ of a set $S$ and its max-sum dispersion
$d(S) = \sum_{u,v \in S}d(u,v)$.
For small $N$, we can compute the optimal value and can therefore compute and
compare the
experimental approximation ratios for Greedy A and Greedy B.
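The following sketch (hypothetical helper names) shows how such a synthetic instance can be generated and how, for small $N$, the optimal value can be obtained by exhaustive enumeration.
\begin{verbatim}
import random
from itertools import combinations

def make_instance(N, seed=0):
    # vertex values uniform in [0,1]; pairwise distances uniform in [1,2]
    # (any values in [1,2] automatically satisfy the triangle inequality)
    rng = random.Random(seed)
    w = {v: rng.random() for v in range(N)}
    dist = {frozenset(e): 1 + rng.random()
            for e in combinations(range(N), 2)}
    return w, (lambda u, v: dist[frozenset((u, v))])

def phi(S, w, d, lam):
    return sum(w[u] for u in S) + \
        lam * sum(d(u, v) for u, v in combinations(S, 2))

def brute_force_opt(N, w, d, lam, p):
    # feasible only for small N and p: enumerate all p-subsets
    return max(phi(set(S), w, d, lam)
               for S in combinations(range(N), p))
\end{verbatim}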
In Table 1 (resp. Table 2), we present results on the relative performance and time
elapsed for Greedy A, Greedy B, and LS for $N = 50$ (resp. $N = 500$).
For each setting of the $N,p$
parameters we ran 5 trials and averaged the results. We observe these
average values for each parameter setting for an algorithm $ALG$, and
report the ``observed average approximation ratio'',
namely $\frac{\text{OPT-average}}{\text{ALG-average}}$, denoted $AF_{ALG}$
for the $N = 50$ data where we are able to compute the optimum value.
Similarly,
we denote the ``relative average approximation'' between two algorithms
as $AF_{\frac{ALG_2}{ALG_1}}$. We also report the average time elapsed
\footnote{The time is reported in milliseconds (ms), with algorithms
implemented in Java running on a Macbook Pro with 2.4 GHz Intel Core i7 processor and 8 GB 1600 MHz DDR3 memory.} for each algorithm,
denoted as $T_{ALG}$. We make the following
observations based on these trials:
\begin{itemize}
\item Given that max-sum dispersion is a supermodular function, as $N$ grows the
objective value becomes dominated by the dispersion contribution to the diversification result. For each algorithm $ALG$ we show
its average value $ALG(S)$. We observed in our experiments that it is the max-sum dispersion term that is the cause of non-optimality.
\item
In all cases, the Greedy algorithms and LS perform quite well with regard to
the optimum (when it is computed); this is not surprising as it
is often the case that algorithms perform well for random
or ``real'' data in contrast to worst case approximation ratios.
More specifically, for $N = 50$ and $p \leq 7$, the approximation ratio for
GreedyB is roughly $1.02$.
\item The performance of Greedy A for odd values of $p$ is marred by
the fact that, as defined, Greedy A chooses an arbitrary last vertex rather than
the best last vertex. For larger $p$, this does not have a significant impact,
but it is perhaps best to ignore small odd values of $p$.
The performance of Greedy B is marred by the fact that, as defined, it chooses
its first vertex arbitrarily rather than choosing a best pair.
\item As expected, the time bounds for Greedy B are substantially better as
Greedy B is iterating over all vertices rather than over all edges as
in Greedy A.
\item In all cases (for average performance), Greedy B outperforms Greedy A.
For $N = 500$, the relative improvement
appears generally to be decreasing as $p$ increases, where for the largest
values of $p = 70$ and $75$, the relative improvement is roughly
1.5\%. We actually observed in our experiments that the relative improvement was 2.5\% if one
just compared the dispersion results $d(\cdot)$.
\item As expected, local search can sometimes improve upon the results of Greedy B, but
obviously at a cost.
Stopping LS at 10 times the Greedy B time, the results improve by at most $5\%$,
and for large $p = 70,75$ the improvement is only $1.5\%$.
\end{itemize}
Our results for average performance raise the question as to whether or
not Greedy B might outperform Greedy A for all inputs, that is, for all
parameter settings.
In order to make the comparison fair, for Greedy A we will
choose the best final node rather than an arbitrary node when $p$ is odd, and for Greedy B, we
will start with the best pair of nodes rather than an arbitrary node.
These minor changes do not affect the approximation ratios but can improve the
observed performance of the algorithms. In Table~\ref{table:3}, we report on these
improved algorithms, running them for one trial for each of the reported
parameter settings. We see that
for $N = 50, p = 4$, there is one setting
where Greedy A outperformed Greedy B. However, running the algorithms with
these improvements does not alter the basic observations above.
\begin{table}[t]
\caption{Comparison of documents being returned for the top 50 document data set}
\centering
\subcaption*{N=50, p=3}
\begin{tabular}{|c|c|c|}
\hline
Greedy A & Greedy B & OPT \\
\hline
4 & 4 & 4 \\
29 & 29 & 29 \\
\textbf{46} & 24 & 24 \\
\hline
\end{tabular}
\bigskip
\centering
\subcaption*{N=50, p=4}
\begin{tabular}{|c|c|c|}
\hline
Greedy A & Greedy B & OPT \\
\hline
4 & 4 & 4 \\
29 & 29 & 29 \\
24 & 24 & 24 \\
12 & 12 & 12 \\
\hline
\end{tabular}
\bigskip
\centering
\subcaption*{N=50, p=5}
\begin{tabular}{|c|c|c|}
\hline
Greedy A & Greedy B & OPT \\
\hline
4 & 4 & 4 \\
29 & 29 & 29 \\
24 & 24 & 24 \\
12 & 12 & 12 \\
\textbf{46} & 49 & 49 \\
\hline
\end{tabular}
\bigskip
\centering
\subcaption*{N=50, p=6}
\begin{tabular}{|c|c|c|}
\hline
Greedy A & Greedy B & OPT \\
\hline
4 & 4 & 4 \\
29 & 29 & 29 \\
24 & 24 & 24 \\
12 & 12 & 12 \\
46 & 46 & 46 \\
49 & 49 & \textbf{35} \\
\hline
\end{tabular}
\bigskip
\centering
\subcaption*{N=50, p=7}
\begin{tabular}{|c|c|c|}
\hline
Greedy A & Greedy B & OPT \\
\hline
4 & 4 & 4 \\
29 & 29 & 29 \\
24 & 24 & 24 \\
12 & 12 & 12 \\
\textbf{0} & \textbf{49} & \textbf{37} \\
\textbf{8} & 46 & 46 \\
\textbf{14} & 35 & 35 \\
\hline
\end{tabular}
\label{table:8}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=3in]{2.eps}
\caption{Approximation Ratio in Dynamic Updates}
\label{fig:wr}
\end{figure}
\subsection{Experiments with The LETOR data set}
\label{letor-greedy}
We considered popular queries in creating a number of
LETOR data sets.
Each item in a LETOR data set represents a document related to a query. As such,
each item $u$ has an integral relevance score $r(u)$ (relative to the query)
ranging from 0 to 5,
a set of feature attributes with their respective values, and a query id. Thus, we take (as ground truth), the quality score $f(S) = \sum_{u \in S} r(u)$.
We define (and take as ground truth) a metric distance function $d(u,v)$
based on the cosine similarity between the feature vectors for $u$ and $v$.
For Table~\ref{table:4} and Table~\ref{table:5}, we
chose one data set (chosen at random from the original LETOR dataset) and
created a data set consisting of the top (by relevance score)
50 and top 370 documents. We applied the Greedy A, Greedy B and limited local search algorithms to these two
data sets for various settings of the cardinality parameter $p$.
For the 50 document data set we also computed the optimal values.
We observe a qualitative
difference between these ``real data'' experiments and the experiments
for synthetic data.
Namely, Greedy B now has a more substantial advantage over
Greedy A, and there is a correspondingly smaller benefit from running local search for
10 times the run time of Greedy B on a given input.
In contrast to the synthetic data experiments, the advantage of Greedy B
over Greedy A is more pronounced for larger values of $p$, the cardinality
constraint. For the $N = 50$ data set and small values of $p$, the advantage
stays between 3 and 4\%. For the $N = 370$ data set, the advantage of Greedy B
over Greedy A rises to about 15\% and then levels off at around 12\%.
The improvement due to the limited use of local search never exceeds .2\% and sometimes results in
no improvement. We also ran 5 different data sets (i.e. generated by
5 different queries) and averaged the results with respect to the top 50 results and all documents in each data set
as shown in Table~\ref{table:6} and Table~\ref{table:7}. Note that in these tables we omit the objective function
values that were included in the earlier tables: since we are averaging our results over different LETOR datasets (i.e. queries), reporting average objective function values would not be fully meaningful. These average results support what we found in Table~\ref{table:4} and Table~\ref{table:5}, namely that Greedy B significantly outperforms Greedy A and that limited local search provides a very small advantage over Greedy B.
In Table~\ref{table:8}, we present the difference in the documents being returned for
the 50 document data set. Here the OPT documents are the true
set of optimal documents with respect to the diversification function applied
to the values of the document relevance scores and the cosine
distance function. As an example, consider the results for the $N = 50, p = 7$
setting of the parameters. Here OPT and Greedy B differ on one document
while Greedy A differs on 3 documents.
\subsection{Approximation Ratio in Dynamic Updates}
\label{dynamic-experiments}
For dynamic updates, we use the same synthetic data as in
Section~\ref{synthetic-greedy}.
We have three different dynamically changing environments:
\be
\item {\sc vperturbation}: each perturbation is a weight change on an item;
that is, an item (vertex) $u$ is randomly chosen and its value is reset uniformly
at random from $[0,1]$.
\item {\sc eperturbation}: each perturbation is a distance change between two
items; that is, a pair of distinct items $\{u,v\}$ is randomly chosen
and the distance $d(u,v)$ is reset uniformly at random from $[1,2]$.
\item {\sc mperturbation}: each perturbation is one of the above two with equal probability.
\ee
For each of the environments above and every value of $\lambda$, we start with our greedy solution (a 2-approximation) and run 20 steps of simulation, where each step consists of
a random perturbation of the stated type, followed by a single application of the oblivious update rule.
We repeat this 100 times and record the worst approximation ratio observed over these runs. The results are shown in Fig.~\ref{fig:wr}; the horizontal axis measures $\lambda$ values, and the vertical axis measures the approximation ratio.
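In outline, each simulation run proceeds as in the sketch below (illustrative names; it reuses the $\phi$, greedy, exhaustive-optimum and oblivious-update sketches given earlier, so it is a schematic of the protocol rather than the exact experimental code).
\begin{verbatim}
import random

def simulate(N, w, dist, lam, p, steps=20, seed=0, mode="mperturbation"):
    # One run: start from the greedy solution, then alternate a random
    # perturbation of the stated type with one oblivious update, tracking
    # the worst observed ratio phi(OPT) / phi(S).
    rng = random.Random(seed)
    U = set(range(N))
    d = lambda u, v: dist[frozenset((u, v))]
    S = greedy_by_vertices(U, w, d, lam, p)
    worst = 1.0
    for _ in range(steps):
        if mode == "vperturbation" or \
           (mode == "mperturbation" and rng.random() < 0.5):
            w[rng.randrange(N)] = rng.random()            # weight change
        else:
            u, v = rng.sample(range(N), 2)
            dist[frozenset((u, v))] = 1 + rng.random()    # distance change
        S = oblivious_update(S, U, w, d, lam)
        worst = max(worst,
                    brute_force_opt(N, w, d, lam, p) / phi(S, w, d, lam))
    return worst
\end{verbatim}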
We have the following observations:
\be
\item In any dynamically changing environment, the maintained ratio is well below the provable ratio of 3.
The worst observed ratio is about 1.11.
\item The maintained ratios decrease toward 1 as $\lambda$ increases beyond $0.6$.
\ee
From the experiment, we see that the simple local search update rule seems effective for maintaining a good approximation ratio in a dynamically changing environment.
\section{Problem Formulation}
\label{sec:formu}
Although the notion of ``diversity'' naturally arises in the context of databases, social media and web search,
the underlying mathematical object is not new.
As presented in~\cite{GoSh09}, there is a rich and long line of research
in location theory dealing with a similar concept; in particular,
one objective is the placement of facilities on a network to maximize some function of the distances between facilities. The situation arises when
proximity of facilities is undesirable, for example, the distribution of business franchises in a city. Such location problems are often referred to as {\em dispersion} problems; for more
motivation and early work, see~\cite{RePEc:eee:ejores:v:46:y:1990:i:1:p:48-60, RePEc:eee:ejores:v:40:y:1989:i:3:p:275-291, GEAN:GEAN133}.
Analytical models for the dispersion problem assume that the given network is represented by a set
$V=\{v_1,v_2, \dots, v_n\}$ of $n$ vertices along with a distance function
between every pair of vertices. The objective is to locate
$p$ facilities ($p\le n$)
among the $n$ vertices, with at most one facility per vertex, such that some function of distances between facilities is maximized. Different objective functions are considered for the dispersion problems in the literature including: the max-sum criterion (maximize the total distances between all pairs of facilities) in~\cite{Wang:1988:STG:49310.49312,RePEc:eee:ejores:v:46:y:1990:i:1:p:48-60,RRT94}, the max-min criterion (maximize the minimum distance between a pair of facilities) in~\cite{GEAN:GEAN133,RePEc:eee:ejores:v:46:y:1990:i:1:p:48-60,RRT94}, the max-mst (maximize the minimum spanning tree among all facilities) and many other related criteria in~\cite{HIKT95, DBLP:journals/jal/ChandraH01}.
When the distances are arbitrary, the max-sum
problem is a weighted generalization
of the densest subgraph problem, a known difficult problem
that does not admit a PTAS~\cite{Khot06} and is not known to have a constant-factor approximation algorithm. Sometimes the problem is studied for specific metric distances
(e.g., as in Fekete and Meijer \cite{FeketeM03}) or
for restricted classes of weights (e.g., as in Czygrinow \cite{Czygrinow00}),
where there can be a PTAS.
Our diversification problem is a generalization of the max-sum $p$-dispersion problem with
arbitrary metric distances. For the max-sum criterion, as for most of the
objective criteria, the dispersion problem
is NP-hard, and approximation algorithms have been developed and studied; see~\cite{DBLP:journals/jal/ChandraH01} for a summary of known results.
Most relevant to this paper is the max-sum dispersion problem with metric distances, stated formally below.
{\sc Problem 1.} {\tt Max-Sum $p$ Dispersion} \\
\\
Let $U$ be the underlying ground set, and let $d(\cdot,\cdot)$ be a metric
distance function on $U$.
Given a fixed integer $p$, the goal of the problem is to find a subset $S \subseteq U$
that:
$$\begin{array}{ll}
\rm{maximizes} & \sum_{\{u,v\}:{u,v\in S}} d(u,v)\\
\\
\rm{subject\;\; to} & |S|=p,
\end{array}$$
The problem is known to be NP-hard
by an easy reduction from Max-Clique, and as noted by
Alon \cite{Alon14}, there is evidence that the problem is hard to
approximate in polynomial time within a factor of
$2-\epsilon$ for any $\epsilon > 0$
when $p = n^r$ for $1/3 \leq r < 1$.
Namely, based on the assumption that the planted clique problem is hard,
Alon et al \cite{Alonammw11}
show that it is hard to distinguish between a graph having a large
planted clique of
size $p$
and one in which the densest subgraph of size $p$ is of
density at most an arbitrarily small constant
$\delta$ (for sufficiently large $n$).
Considering the complement of a random graph $G$ in ${\cal G}(n,1/2)$,
their result says that it
is hard to distinguish between a graph having an independent set of size $p$ and
one in which the density of edges in any size $p$-subgraph is at least
$(1-\delta)$.
Adding another node, connected to all nodes in $G$, to the complement graph,
the graph distance metric becomes the $\{1,2\}$ metric formed by the
transitive closure, so that
adjacent nodes have distance 1 and non-adjacent nodes have distance 2.
We therefore cannot distinguish between graphs where there exists a set of
nodes $S$ of size $p$ (for
$p$ as above) with $\sum_{\{u,v\}:u,v \in S} d(u,v) = 2{p \choose 2}$ and graphs
where every set $S$ of size $p$ has
$\sum_{\{u,v\}:u,v \in S} d(u,v) \leq {p \choose 2} [(1-\delta) + 2 \delta]$.
In~\cite{RRT94}, Ravi, Rosenkrantz and Tayi give a greedy algorithm (greedily
choosing vertices)
that is shown to have an approximation ratio no worse than $4$ and no better
than $\frac{2}{1+2/p(p-1)}$.
Hassin, Rubinstein and Tamir \cite{HassinRT97} improve upon the Ravi et al
result by giving
an algorithm that greedily chooses {\it edges},
yielding an approximation ratio of $2$.
Hassin et al also give an algorithm based
on maximum matching that provides a $2-\frac{1}{\lceil p/2 \rceil}$
approximation for a more
general problem;
namely, the algorithm must find a subset $U'$ which
is partitioned
into $k$ disjoint subsets, each of
size $p$, so as to maximize the sum of distances over all pairs of vertices in $U'$.
The more general $(p,k)$ problem is similar to a
partition matroid constraint but in a partition matroid, the partition
is given as part of the definition of the matroid and each block
of the partition has its own cardinality constraint.
Answering an open problem stated
in Hassin et al., Birnbaum and Goldman \cite{BirnbaumG09} give an
improved analysis proving that
the Ravi et al greedy algorithm results in a $\frac{2p-2}{p-1}$ approximation
for the max-sum $p$ dispersion problem. This then shows that a 2-approximation
is a tight bound (as $p$ grows) for the Ravi et al greedy algorithm.
More generally, Birnbaum and Goldman show that greedily choosing a
set of $d$ nodes provides a $\frac{2p-2}{p+d-2}$ approximation. Our
analysis in Section~\ref{sec:submo} yields an alternative proof
that the Ravi et al greedy
algorithm approximation ratio is no worse than $2$ even when extended to
the max-sum $p$ diversification problem (with a monotone submodular
value function) considered in Section~\ref{sec:submo}.
\medskip
{\sc Problem 2.} {\tt Max-Sum $p$ Diversification} \\
\\
Let $U$ be the underlying ground set, and let $d(\cdot,\cdot)$ be a metric
distance function on $U$.
For any subset of $U$, let $f(\cdot)$ be
a non-negative set function measuring
the value of a
subset. Given a fixed integer $p$, the goal of the problem is to find a subset $S \subseteq U$
that:
$$\begin{array}{ll}
\rm{maximizes} & f(S)+\lambda\sum_{\{u,v\}:{u,v\in S}} d(u,v)\\
\\
\rm{subject\;\; to} & |S|=p,
\end{array}$$
where $\lambda$ is a parameter specifying a desired trade-off between the two objectives.
The max-sum diversification problem is first proposed and studied in the context of result diversification in~\cite{GoSh09}~\footnote{In fact, they have a slightly different but equivalent formulation.}, where the function $f(\cdot)$ is modular.
In their paper, the value of $f(S)$ measures the relevance of a given subset to a search query, and the value $\sum_{\{u,v\}:{u,v\in S}} d(u,v)$ gives a diversity measure on $S$. The parameter $\lambda$ specifies a desired trade-off between diversity and relevance. They reduce the problem to the max-sum dispersion problem, and using an algorithm in~\cite{HassinRT97}, they obtain an approximation ratio of 2.
In this paper, we first study the problem with more general valuation functions; namely, normalized, monotone submodular set functions.
For notational convenience, for any two sets $S$, $T$ and an element $e$, we write $S\cup\{e\}$ as $S+e$, $S\setminus\{e\}$ as $S-e$, $S\cup T$ as $S+T$,
and $S\setminus T$ as $S-T$.
A set function $f$ is {\em normalized} if $f(\emptyset)=0$.
The function is {\em monotone} if for any $S\subseteq T\subseteq U$, $$f(S)\le f(T).$$ It is {\em submodular} if for any $S\subseteq T\subseteq U$ and any $u\in U$, $$f(T+u)-f(T)\le f(S+u)-f(S).$$
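As a simple illustration (not used elsewhere in the paper), a coverage function is normalized, monotone and submodular; the snippet below exhibits the defining decrease in marginal gains.
\begin{verbatim}
def coverage(S, sets):
    # f(S) = number of ground elements covered by the chosen sets;
    # coverage is normalized (f(empty set) = 0), monotone and submodular
    return len(set().union(*(sets[i] for i in S))) if S else 0

sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
S, T, u = {1}, {1, 2}, 3
# marginal gain of u w.r.t. the smaller set S (here 2) is at least
# its marginal gain w.r.t. the larger set T (here 1)
assert coverage(S | {u}, sets) - coverage(S, sets) >= \
       coverage(T | {u}, sets) - coverage(T, sets)
\end{verbatim}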
In the remainder of the paper, all functions considered are normalized.
We proceed to our
first contribution, a greedy algorithm (different than the one
in~\cite{GoSh09}) that obtains a 2-approximation for
monotone submodular
set functions.
\section{Introduction}
Result diversification has many important applications in databases, operations research, information retrieval, and finance. In this paper, we study and extend a particular version of result diversification, known as max-sum diversification. More specifically, we consider the setting where we
are given a set of elements in a metric space and a set valuation function $f$ defined on every subset. For any given
subset $S$, the overall objective is a linear combination of $f(S)$ and
the sum of the distances induced by $S$. The goal is to find a subset
$S$ satisfying some constraints that maximizes the overall objective.
This diversification problem is first studied by Gollapudi and Sharma in~\cite{GoSh09} for
modular (i.e. linear) set functions and for sets satisfying a
cardinality constraint (i.e. a
uniform matroid). (See \cite{GoSh09} for some closely related work.)
The max-sum $p$-dispersion problem seeks to find a subset $S$
of cardinality $p$ so as to maximize $\sum_{x,y \in S} d(x,y)$.
The diversification problem is then a linear combination of a quality function
$f()$ and the max-sum dispersion function.
Gollapudi and Sharma give a 2-approximation greedy algorithm for
some metric distance diversification problems
by reducing to the analogous dispersion problem. More specifically for max-sum
diversification they
use the greedy algorithm of Hassin, Rubinstein and Tamir \cite{HassinRT97}.
Hassin et al give a
non-greedy algorithm
for a more general problem where the goal is to construct
$k$ subsets each having $p$ elements. (We will restrict attention to
the case $k = 1$.) Their non-greedy algorithm obtains the
ratio $2-\frac{1}{\lceil p/2 \rceil}$
and hence the same
approximation holds for the Gollapudi and Sharma diversification problem.
The first part of our paper considers an extension of the modular case to the monotone submodular case,
for which the algorithm in~\cite{GoSh09} no longer applies.
We are able to maintain the same 2-approximation using
a natural, but different greedy algorithm. We then further extend the
problem by considering any matroid constraint and show that a natural
single swap local search algorithm provides a 2-approximation in this more general setting. This extends the Nemhauser, Wolsey and Fisher~\cite{NWF78} approximation result for the problem of submodular function maximization subject to a matroid constraint (without the distance function component).
We note that the dispersion function is a supermodular function
\footnote{Motivated by the analysis in this paper, Borodin et al \cite{BorodinLY14} introduce the
class of {\it weakly submodular functions} and show that the max-sum dispersion
measure as well as all monotone submodular functions are weakly submodular.
Furthermore, it is shown that the problem of maximizing such functions
subject to cardinality (resp. general matroid) constraints can be
polynomial time approximated
within a constant factor by a greedy (resp. local search) algorithm.}
and hence
the Nemhauser et al.\ result does not immediately extend to our diversification
problem.
\vspace{.1in}
Submodular functions have been extensively considered
since they model many natural phenomena. For example,
in terms of keyword based search in database systems,
it is well understood that users begin to gradually (or sometimes abruptly)
lose interest
the more results they have to consider \cite{vieira11_2, vieira11}. But on the other hand,
as long as a user continues to gain some benefit, additional query results can
improve the overall quality but at a decreasing rate. In a related application,
Lin and Bilmes \cite{LinB2011} argue that monotone submodular functions
are an ideal class of functions for text summarization. Following
and extending the results in \cite{GoSh09},
we consider the case of maximizing a linear combination of
a submodular quality function $f(S)$ and the max-sum dispersion subject to
a cardinality constraint (i.e., $|S| \leq p$ for some given $p$).
We present a greedy algorithm that is somewhat unusual in that it does not
try to optimize the objective in each iteration but rather optimizes a closely
related potential function. We show that our greedy approach matches
the greedy $2$-approximation
\footnote{Clearly, in the modular case
for $p$ constant, a brute force trial of all subsets of
size $p$ is an optimum, albeit
inefficient, algorithm.}
in \cite{GoSh09} obtained for diversification with a modular quality function.
We note that the greedy algorithm in \cite{GoSh09} utilizes
the max dispersion algorithm of Hassin, Rubinstein and Tamir
\cite{HassinRT97} which greedily adds edges whereas our algorithm
greedily adds vertices.
Our next result continues with the submodular case
but now we go beyond a cardinality constraint (i.e., the uniform matroid)
on $S$ and allow the constraint to be that $S$ is independent in
a given matroid. This allows a substantial increase in generality.
For example, while diversity might be represented by the distance
between retrieved database tuples under a given criterion
(for instance, a kernel-based diversity measure
called \textit{answer tree kernel} is used in \cite{fengzhao11}), we could use a partition matroid
to ensure that (for example) the retrieved database tuples come from a variety
of different sources. That is, we may wish to have
$n_i$ tuples from a specific database field $i$. This is, of course, another form
of diversity but one orthogonal to diversity based on the given criterion. Similarly
in the stock portfolio example, we might wish to have a balance
of stocks in terms of say risk and profit profiles (using some statistical
measure of distances) while using a submodular quality function to reflect
a user's submodular utility for profit and using a partition
matroid to ensure that different sectors of
the economy are well represented. Another important class of matroids
(relevant to the above applications) is that
of transversal matroids. Suppose we have a collection
$\{C_1, C_2, \ldots, C_m\}$ of (possibly) {\it overlapping} sets (i.e., the collection is
not necessarily a partition) of
database tuples (or stocks). Our goal might be to derive a set $S$
such that the database tuples in $S$ form a set of representatives for
the collection; that is, every database tuple in $S$ represents (and is in) a unique
set $C_i$ in the collection. The set $S$ is then an
independent set in the
transversal matroid induced by the collection. We also note
\cite{schrijver03} that
the intersection of any matroid with a uniform matroid is still a matroid
so that in the above examples, we could further impose the constraint
that the set S has at most $p$ elements.
Our final theoretical result concerns dynamic updates. Here we
restrict attention to a modular set function $f(S)$; that is, we now
have weights on the elements and $f(S) = \sum_{u \in S} w(u)$
where $w(u)$ is the weight of element $u$. This allows us to consider
changes to the weight of a single element as well as changes to the
distance function.
The rest of the paper is organized as follows.
In Section 2, we discuss related work in
dispersion and result diversification.
In Section 3, we formulate the problem as a combinatorial optimization problem
and discuss the complexity of the problem.
In Section 4, we consider max-sum diversification with monotone submodular
set quality functions subject to a cardinality constraint and give a
conceptually simple greedy algorithm that achieves a 2-approximation.
We extend the problem to the matroid case in Section 5 and discuss dynamic
updates in Section 6. Section 7 carries out a number of experiments. In
particular, we compare our greedy algorithm with the greedy
algorithm of Gollapudi and Sharma. Section 8 concludes the paper.
\vspace{.1in}
\section{Matroids and Local Search}
\label{sec:matroid}
Theorem~\ref{thm:main} provides a
2-approximation for max-sum diversification when the set function is
submodular and the set constraint is a cardinality constraint, i.e.,
a uniform matroid. It is natural to ask if the same approximation
guarantee can be obtained for an arbitrary matroid.
In this section, we show that the max-sum diversification problem with
monotone submodular function admits a 2-approximation
subject to a general matroid constraint.
Matroids are well studied objects in combinatorial optimization.
A matroid $\cal M$ is a pair $<U, {\cal F}>$, where $U$ is a set of ground elements and $\cal F$ is a collection of subsets of $U$, called {\em independent sets}, with the following properties
:
\bi
\item {\bf Hereditary:} The empty set is independent and if $S\in {\cal F}$ and $S'\subset S$, then $S'\in {\cal F}$.
\item {\bf Augmentation:} If $A, B\in {\cal F}$ and $|A|>|B|$, then $\exists e\in A-B$ such that $B\cup \{e\}\in {\cal F}$.
\ei
The maximal independent sets of a matroid are called {\em bases} of $\cal M$.
Note that all bases have the same number of elements, and this number
is called the {\em rank} of $\cal M$.
The definition of a matroid captures the key notion of independence from linear algebra and extends that notion so as to apply to many combinatorial objects.
We have already mentioned two classes of matroids relevant to our results,
namely
partition matroids and transversal matroids. In a partition matroid, the
universe $U$ is partitioned into sets $S_1, \ldots, S_m$, and a set $S$ is
independent if $|S \cap S_i| \leq k_i$ for each $i$,
for some given bounds $k_i$ on the parts of the partition.
A uniform matroid is a special case of a partition matroid with $m = 1$.
In a transversal matroid, the universe $U$ is a union of (possibly) intersecting sets
${\cal C} = C_1, \ldots, C_m$ and a set
$S = \{s_1, \ldots, s_r\} \subseteq U$ is independent if there is an
injective function $g$
from $S$ into ${\cal C}$ with $s_i \in g(s_i)$ for each $i$. That is, $S$ forms a set of representatives for the collection, with each element of $S$ representing a distinct set $C_i$; equivalently,
there is a matching between $S$ and ${\cal C}$.
(Note that a given $s_i$ could occur in other sets $C_j$.)
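For illustration, a minimal independence oracle for a partition matroid (hypothetical names and data) can be written as follows; a transversal matroid oracle would instead test for a matching between $S$ and the collection.
\begin{verbatim}
def partition_matroid_independent(S, parts, bounds):
    # S is independent iff it contains at most bounds[i] elements
    # from each part parts[i] of the partition
    return all(len(S & parts[i]) <= bounds[i] for i in range(len(parts)))

# e.g., two database fields with capacities 2 and 1:
parts, bounds = [{"a", "b", "c"}, {"d", "e"}], [2, 1]
assert partition_matroid_independent({"a", "b", "d"}, parts, bounds)
assert not partition_matroid_independent({"a", "d", "e"}, parts, bounds)
\end{verbatim}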
\medskip
{\sc Problem 3.} {\tt Max-Sum Diversification for Matroids} \\
\\
Let $U$ be the underlying ground set, and $\cal F$ be the set of independent subsets of $U$ such that ${\cal M}=<U, {\cal F}>$ is a matroid.
Let $d(\cdot,\cdot)$ be a (non-negative) metric distance function measuring the distance on every pair of elements. For any subset of $U$, let $f(\cdot)$ be a non-negative monotone submodular set function measuring the weight of the subset. The goal of the problem is to find a subset $S \in {\cal F}$ that:
$$\begin{array}{ll}
\rm{maximizes} & f(S)+\lambda\sum_{\{u,v\}:u,v\in S} d(u,v)
\end{array}$$
where $\lambda$ is a parameter specifying a desired trade-off between the two objectives.
As before, we let $\phi(S)$ be the value of the objective function.
Note that since the function $\phi(\cdot)$ is monotone,
$S$ is essentially a basis of the matroid ${\cal M}$.
The greedy algorithm in Section~\ref{sec:submo} still applies, but it fails to achieve any constant approximation ratio even for a linear quality function
$f(\cdot)$, including the identically zero function; that is, for
max-sum dispersion. (See the Appendix.)
This is in contrast to the seminal result of Nemhauser, Wolsey and Fisher
\cite{NWF78} showing that the greedy algorithm is optimal (respectively,
a 2-approximation)
for linear functions (respectively, monotone submodular functions) subject to a matroid constraint.
Note that the problem is trivial if the rank of the matroid is less than two.
Therefore, without loss of generality, we assume the rank is greater or equal to two.
Let $$\{x,y\}=\argmax_{\{x,y\}\in {\cal F}}[f(\{x,y\})+\lambda d(x,y)].$$
We now consider the following oblivious local search algorithm:
\medskip
\noindent{\sc Local Search Algorithm}
\noindent let $S$ be a basis of $\cal M$ containing both $x$ and $y$\\
while there is a $u\in U-S$ and a $v\in S$ such that $S+u-v\in {\cal F}$ and $\phi(S+u-v) >\phi(S)$ \\
\indent \ \ \ \ \ \ $S=S+u-v$\\
end while\\
return $S$
\medskip
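A minimal sketch of this local search (illustrative names; it assumes a generic independence oracle and an evaluation routine for $\phi$, and a practical implementation would insist on an $\epsilon$-improvement per swap, as discussed after Theorem~\ref{thm:ls}) is the following.
\begin{verbatim}
def matroid_local_search(U, is_independent, phi, start):
    # Oblivious single-swap local search under a matroid constraint,
    # starting from a basis containing the best feasible pair {x, y}.
    # (A polynomial-time variant would insist on an epsilon-improvement.)
    S = set(start)
    improved = True
    while improved:
        improved = False
        for v in set(S):
            for u in U - S:
                T = (S - {v}) | {u}
                if is_independent(T) and phi(T) > phi(S):
                    S, improved = T, True
                    break
            if improved:
                break
    return S
\end{verbatim}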
\begin{theorem}
\label{thm:ls}
The local search algorithm achieves an approximation ratio of 2 for max-sum diversification with a matroid constraint.
\end{theorem}
Note that if the rank of the matroid is two, then the algorithm is clearly optimal.
From now on, we assume the rank of the matroid is greater than two.
Before we prove the theorem, we first give several lemmas.
All the lemmas assume the problem and the underlying matroid without explicitly mentioning it.
Let $O$ be the optimal solution, and $S$, the solution at the end of the local search algorithm.
Let $A=O\cap S$, $B=S-A$ and $C=O-A$.
\begin{lemma}
\label{lem:bj}
For any two sets $X, Y\in {\cal F}$ with $|X|=|Y|$, there is a bijective mapping $g: X\rightarrow Y$ such that $X-x+g(x)\in {\cal F}$ for any $x\in X$.
\end{lemma}
This is a known property of matroids and its proof can be found in~\cite{Brualdi-1969}.
Since both $S$ and $O$ are bases of the matroid, they have the same cardinality.
Therefore, $B$ and $C$ have the same cardinality.
By Lemma~\ref{lem:bj}, there is a bijective mapping $g: B\rightarrow C$ such that $S-b+g(b)\in {\cal F}$ for any $b\in B$.
Let $B=\{b_1, b_2, \dots, b_t\}$, and let $c_i=g(b_i)$ for all $i$.
We may assume $t\ge 2$, for otherwise the algorithm is optimal by the local optimality condition.
\begin{lemma}
\label{lem:sm1}
$f(S)+\sum_{i=1}^{t} f(S-b_i+c_i)\ge f(S-\sum_{i=1}^{t} b_i)+\sum_{i=1}^{t} f(S+c_i)$.
\end{lemma}
\proof
Since $f$ is submodular,
$$f(S)-f(S-b_1)\ge f(S+c_1)-f(S+c_1-b_1)$$
$$f(S-b_1)-f(S-b_1-b_2)\ge f(S+c_2)-f(S+c_2-b_2)$$
$$\vdots$$
$$f(S-\sum_{i=1}^{t-1}b_i)-f(S-\sum_{i=1}^{t}b_i)\ge f(S+c_t)-f(S+c_t-b_t).$$
Summing up these inequalities, we have
$$f(S)- f(S-\sum_{i=1}^{t} b_i)\ge \sum_{i=1}^{t} f(S+c_i) - \sum_{i=1}^{t} f(S-b_i+c_i),$$
and the lemma follows.
\qed
\begin{lemma}
\label{lem:sm2}
$\sum_{i=1}^t f(S+c_i)\ge (t-1)f(S)+f(S+\sum_{i=1}^t c_i)$.
\end{lemma}
\proof
Since $f$ is submodular,
$$f(S+c_t)-f(S)= f(S+c_t)-f(S)$$
$$f(S+c_{t-1})-f(S)\ge f(S+c_t+c_{t-1})-f(S+c_t)$$
$$f(S+c_{t-2})-f(S)\ge f(S+c_t+c_{t-1}+c_{t-2})-f(S+c_t+c_{t-1})$$
$$\vdots$$
$$f(S+c_1)-f(S)\ge f(S+\sum_{i=1}^t c_i)-f(S+\sum_{i=2}^t c_i)$$
Summing up these inequalities, we have
$$\sum_{i=1}^t f(S+c_i) - tf(S)\ge f(S+\sum_{i=1}^t c_i)-f(S),$$
and the lemma follows.
\qed
\begin{lemma}
\label{lem:sm3}
$\sum_{i=1}^{t} f(S-b_i+c_i)\ge (t-2)f(S)+f(O)$.
\end{lemma}
\proof
Combining Lemma~\ref{lem:sm1} and Lemma~\ref{lem:sm2}, we have
\begin{eqnarray*}
& &f(S)+ \sum_{i=1}^{t} f(S-b_i+c_i)\\
&\ge&f(S-\sum_{i=1}^{t} b_i)+\sum_{i=1}^{t} f(S+c_i)\\
&\ge&(t-1)f(S)+f(S+\sum_{i=1}^{t} c_i)\\
&=&(t-1)f(S)+f(S+C)\\
&\ge&(t-1)f(S)+f(O).
\end{eqnarray*}
Therefore, the lemma follows.
\qed
\begin{lemma}
\label{lem:eg1}
If $t>2$, $d(B,C) - \sum_{i=1}^{t} d(b_i, c_i)\ge d(C)$.
\end{lemma}
\proof
For any $b_i, c_j, c_k$, we have
$$d(b_i, c_j)+d(b_i,c_k)\ge d(c_j, c_k).$$
Summing up these inequalities over all $i,j,k$ with $i\neq j$, $i\neq k$, $j\neq k$, we find that
each $d(b_i, c_j)$ with $i\ne j$ is counted $(t-2)$ times, and each $d(c_j, c_k)$ with $j\ne k$ is counted $(t-2)$ times.
Therefore $$(t-2)[d(B,C) - \sum_{i=1}^{t} d(b_i, c_i)]\ge (t-2)d(C),$$ and the lemma follows.
\qed
\begin{lemma}
\label{lem:eg2}
$\sum_{i=1}^t d(S-b_i+c_i)\ge (t-2)d(S)+d(O)$.
\end{lemma}
\proof
\begin{eqnarray*}
& &\sum_{i=1}^t d(S-b_i+c_i)\\
&=&\sum_{i=1}^t [d(S)+d(c_i, S-b_i)-d(b_i, S-b_i)]\\
&=& td(S)+\sum_{i=1}^{t}d(c_i, S-b_i)-\sum_{i=1}^{t}d(b_i, S-b_i)\\
&=& td(S)+\sum_{i=1}^{t}d(c_i, S)-\sum_{i=1}^{t}d(c_i,b_i)-\sum_{i=1}^{t}d(b_i, S-b_i)\\
&=& td(S)+d(C, S)-\sum_{i=1}^{t}d(c_i,b_i)-d(A, B)-2d(B).
\end{eqnarray*}
There are two cases.
If $t>2$ then by Lemma~\ref{lem:eg1}, we have
\begin{eqnarray*}
& &d(C,S)-\sum_{i=1}^{t}d(c_i,b_i)\\
&=&d(A,C)+d(B,C)-\sum_{i=1}^{t}d(c_i,b_i)\\
&\ge&d(A,C)+d(C).
\end{eqnarray*}
Furthermore, since $d(S)=d(A)+d(B)+d(A,B)$,
we have $2d(S)-d(A,B)-2d(B)\ge d(A)$.
Therefore
\begin{eqnarray*}
& &\sum_{i=1}^t d(S-b_i+c_i)\\
&=& td(S)+d(C, S)-\sum_{i=1}^{t}d(c_i,b_i)-d(A, B)-2d(B)\\
&\ge& (t-2)d(S)+d(A,C)+d(C)+d(A)\\
&\ge& (t-2)d(S)+d(O).
\end{eqnarray*}
If $t=2$, then since the rank of the matroid is greater than two, $A\neq\emptyset$.
Let $z$ be an element in $A$, then we have
\begin{eqnarray*}
& & 2d(S)+d(C, S)-\sum_{i=1}^{t}d(c_i,b_i)-d(A, B)-2d(B)\\
&=&d(A,C)+d(B,C)-\sum_{i=1}^{t}d(c_i,b_i)+2d(A)+d(A,B)\\
&\ge&d(A,C)+d(c_1,b_2)+d(c_2,b_1)+d(A)+d(z,b_1)+d(z,b_2)\\
&\ge&d(A,C)+d(A)+d(c_1, c_2)\\
&\ge&d(A,C)+d(A)+d(C)\\
&=&d(O).
\end{eqnarray*}
Therefore
\begin{eqnarray*}
& &\sum_{i=1}^t d(S-b_i+c_i)\\
&=& td(S)+d(C, S)-\sum_{i=1}^{t}d(c_i,b_i)-d(A, B)-2d(B)\\
&\ge& (t-2)d(S)+d(O).
\end{eqnarray*}
This completes the proof.
\qed
With Lemma~\ref{lem:sm3} and Lemma~\ref{lem:eg2} in hand,
we are ready to complete the proof of Theorem~\ref{thm:ls}.
\proof
Since $S$ is a locally optimal solution, we have
$\phi(S)\ge\phi(S-b_i+c_i)$ for all $i$.
Therefore, for all $i$ we have
$$f(S)+\lambda d(S)\ge f(S-b_i+c_i)+\lambda d(S-b_i+c_i).$$
Summing up over all $i$, we have
$$tf(S)+\lambda td(S)\ge \sum_{i=1}^{t}f(S-b_i+c_i)+\lambda \sum_{i=1}^{t}d(S-b_i+c_i).$$
By Lemma~\ref{lem:sm3}, we have
$$tf(S)+\lambda td(S)\ge (t-2)f(S)+f(O)+\lambda \sum_{i=1}^{t}d(S-b_i+c_i).$$
By Lemma~\ref{lem:eg2}, we have
$$tf(S)+\lambda td(S)\ge (t-2)f(S)+f(O)+\lambda [(t-2)d(S)+d(O)].$$
Therefore,
$$2f(S)+2\lambda d(S)\ge f(O)+\lambda d(O),$$
i.e.,
$$\phi(S)\ge\frac{1}{2}\phi(O),$$
which completes the proof.
\qed
Theorem~\ref{thm:ls} shows that even in the more general case of a matroid constraint,
we can still achieve the approximation ratio of 2.
As is standard in such local search algorithms,
with a small sacrifice on the approximation ratio,
the algorithm can be modified to run in polynomial time by requiring at least an
$\epsilon$-improvement at each iteration rather than just any
improvement.
\section{Related Work}
With the proliferation of today's social media, database and web content, ranking becomes an important problem, as it decides what gets selected and what does not, and what is displayed first and what is displayed last.
Many early ranking algorithms, for example in web search, are based on the notion of ``relevance'', i.e., the closeness of the object to the search query.
However, there has been rising interest in incorporating some notion of ``diversity'' into
measures of quality.
One early work in this direction is the notion of ``Maximal Marginal Relevance'' (MMR) introduced by Carbonell
and Goldstein in~\cite{Carbonell:1998:UMD:290941.291025}. More specifically, MMR is defined as follows:
$${\rm MMR}=\max_{D_i\in R\setminus S}[\lambda\cdot sim_1(D_i, Q)-(1-\lambda) \max_{D_j\in S}sim_2(D_i,D_j)],$$
where $Q$ is a query; $R$ is the ranked list of documents retrieved; $S$ is the subset of documents in
$R$ already selected; $sim_1$ is the similarity measure between a document and a query, and
$sim_2$ is the similarity measure between two documents. The parameter $\lambda$ controls the trade-off
between novelty (a notion of diversity) and relevance. The MMR algorithm iteratively selects the next document with
respect to the MMR objective function until a given cardinality condition is met.
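For concreteness, a minimal sketch of the MMR selection loop (illustrative names; $sim_1$ and $sim_2$ are the two similarity measures above) is the following.
\begin{verbatim}
def mmr_select(R, Q, sim1, sim2, lam, k):
    # Iteratively pick the document maximizing
    #   lam * sim1(D, Q) - (1 - lam) * max_{D' in S} sim2(D, D')
    # until the cardinality condition |S| = k is met.
    S = []
    while len(S) < k and len(S) < len(R):
        S.append(max((D for D in R if D not in S),
                     key=lambda D: lam * sim1(D, Q)
                     - (1 - lam) * max((sim2(D, Dp) for Dp in S),
                                       default=0.0)))
    return S
\end{verbatim}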
The MMR heuristic has been widely used, but to the best of our knowledge, it has not been theoretically justified.
Our paper provides some theoretical evidence why MMR is a legitimate approach for diversification.
The greedy algorithm we propose in this paper can be viewed as a natural extension of MMR.
There is extensive research on how to diversify returned ranking results to satisfy multiple users. Namely, the result diversity issue occurs when many facets of queries are discovered and a set of multiple users expect to find their desired facets in the first page of the results. Thus, the challenge is to find the best strategy for ordering the results such that many users would find their relevant pages in the top few slots.
Rafiei et al.~\cite{DBLP:conf/www/RafieiBS10} modeled this as a continuous optimization problem. They introduce a weight vector $W$ for the search results, where the total weight sums to one.
They define the portfolio variance to be $W^TCW$, where $C$ is the covariance matrix of the result set. The goal then is to minimize the portfolio variance while the expected relevance is fixed at a certain level. They report that their proposed algorithm can improve upon Google in terms of the diversity on random queries, retrieving $14\%$ to $38\%$ more aspects of queries in top five, while maintaining a precision very close to Google.
Bansal et al.~\cite{DBLP:conf/icalp/BansalJKN10} considered the setting in which various types of users exist and each is interested in a subset of the search results. They use a performance measure based on {\em discounted cumulative gain}, which discounts the usefulness (gain) of a document based on its position in the resulting list.
Based on this measure, they suggest a general approach to develop approximation algorithms for ranking search results that captures different aspects of users' intents.
They also take into account that the relevance of one document cannot be treated independently of the relevance of other documents in a collection returned by a search engine.
They consider both the scenario where users are interested in only a single search result (e.g., navigational queries)
and the scenario where users have different requirements on the number of search results, and develop good approximation solutions for them.
The database community has recently studied the query diversification problem, which is mainly for keyword
search in databases \cite{liu09, Yu:2009,drosou09,vieira11, fengzhao11,vieira11_2, Demidova10}.
Given a very large database, an exploratory query can easily lead to a vast answer set. Typically, an answer's relevance to the user query is based on \textit{top-k} or \textit{tf-idf}. As a way of increasing user
satisfaction, different query diversification techniques have been proposed, including some system-based ones that take into account query parameters, evaluation algorithms,
and dataset properties. For many of these, a max-sum type objective function is usually used.
Other than those discussed above, there are many recent papers studying result diversification in different settings, via different approaches and through different perspectives, for example~\cite{DBLP:conf/sigir/ZhaiCL03,DBLP:conf/sigir/ChenK06,DBLP:conf/naacl/ZhuGGA07,DBLP:conf/icml/YueJ08,DBLP:conf/icml/RadlinskiKJ08,DBLP:conf/wsdm/AgrawalGHI09,DBLP:conf/wsdm/BrandtJYB11,SantosMO11,DBLP:conf/wsdm/DouHCSW11,DBLP:conf/icml/SlivkinsRG10}.
The reader is referred to~\cite{DBLP:conf/wsdm/AgrawalGHI09,DrosouP10} for a good summary of the field.
Most relevant to our work is the paper by Gollapudi and Sharma~\cite{GoSh09}, where they develop an axiomatic approach to characterize and design diversification systems. Furthermore, they consider three different
diversification objectives and using earlier results in facility dispersion, they are able to give algorithms with good worst case approximation guarantees.
This paper is a continuation of research along this line.
Recently, Minack et al.~\cite{minack11} have studied the problem of incremental diversification for very large data sets.
Instead of viewing the input of the problem as a set, they consider the input as a stream, and use a simple online algorithm to process each element in an incremental fashion, maintaining a near-optimal diverse set at any point in the stream. Although their results are largely experimental, this approach significantly reduces CPU and memory consumption, and hence is applicable to large data sets. Our dynamic update algorithm deals with a problem of a similar nature, but in addition to our experimental results, we are also able to prove theoretical guarantees. To the best of our knowledge,
our work is the first of its kind to obtain a near-optimality condition for result diversification in a dynamically changing environment.
Independent of our conference paper \cite{BorodinLY12}, Abbassi, Mirrokni and
Thakur \cite{AbbassiMT13} have also shown that the (Hamming distance 1) local search algorithm provides a 2-approximation
for the max-sum dispersion problem subject to a matroid constraint. Their
version of the dispersion problem is somewhat more general in that
they additionally consider that the points are chosen from different
clusters. They indirectly consider a quality measure
by first restricting the universe of objects to high-quality objects
and then applying dispersion. They provide a number of interesting experimental
results.
\section{Submodular Functions}
\label{sec:submo}
Submodular set functions can be characterized by
the property of a decreasing marginal gain as the size of the set increases.
As such, submodular functions are well-studied objects in economics, game theory and combinatorial optimization.
More recently, submodular functions have attracted attention in many practical fields of computer science.
For example, Kempe et al.~\cite{Kempe:2003:MSI:956750.956769} study the problem of selecting a set of most influential nodes to maximize
the total information spread in a social network. They have shown that under two basic stochastic diffusion models, the expected influence of an initially chosen
set is submodular,
hence the problem admits a good approximation algorithm.
In natural language processing, Lin and Bilmes \cite{LinBX2009, LinB10, LinB2011} have studied a class of submodular functions for document summarization.
These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity.
Their experimental results show that a greedy algorithm with the objective of maximizing these submodular functions outperforms the existing state-of-art results
in both generic and query-focused document summarization.
Both of the above mentioned results are based on the fundamental work of
Nemhauser, Wolsey and Fisher~\cite{NWF78}, which has
shown an $\frac{e}{e-1}$-approximation for maximizing monotone submodular set functions
over a uniform matroid; and this bound is known to be tight even for a general matroid~\cite{CCPV11}.
Our max-sum diversification problem with monotone submodular set functions can be viewed as an extension of that problem:
the objective function now not only contains a submodular part, but also has a supermodular part: the sum of distances.
Since the max-sum diversification problem with modular set functions studied in~\cite{GoSh09} admits a 2-approximation algorithm, it is natural to ask what approximation ratio is obtainable for the same problem with monotone submodular set functions.
Note that the algorithm in~\cite{GoSh09} does not apply
to the submodular case.
In what follows we assume (as is standard when considering submodular
functions) access to an oracle for finding an element
$u \in U-S$ that maximizes $f(S+u)-f(S)$. When $f$ is modular, this simply
means accessing the element $u \in U-S$ having maximum weight.
\begin{theorem}
\label{thm:main}
There is a simple linear time greedy algorithm
that achieves a 2-approximation for the max-sum diversification problem
with monotone submodular set functions satisfying a cardinality constraint.
\end{theorem}
Before giving the proof of Theorem~\ref{thm:main}, we first introduce our notation.
We extend the notion of distance function to sets. For disjoint subsets $S, T\subseteq U$, we let $d(S)=\sum_{\{u,v\}:{u,v\in S}} d(u,v)$, and $d(S, T)=\sum_{\{u,v\}:{u\in S, v\in T}} d(u,v)$.
\medskip
Now we define various types of marginal gain.
For any given subset $S\subseteq U$ and an element $u\in U-S$: let $\phi(S)$ be the value of the objective function, $d_u(S)=\sum_{v\in S} d(u,v)$ be the marginal gain on the distance, $f_u(S)=f(S+u)-f(S)$ be the marginal gain on the weight, and $\phi_u(S)=f_u(S)+\lambda d_u(S)$ be the total marginal gain on the objective function.
Let $f'_u(S)=\frac{1}{2}f_u(S)$, and $\phi'_u(S)=f'_u(S)+\lambda d_u(S)$.
We consider the following simple greedy algorithm:
\medskip
\noindent{\sc Greedy Algorithm}
\noindent$S=\emptyset$\\
while $|S|<p$\\
\indent \ \ \ \ \ \ find $u\in U-S$ maximizing $\phi'_u(S)$\\
\indent \ \ \ \ \ \ $S=S+u$\\
end while\\
return $S$
\medskip
Note that the above greedy algorithm is ``non-oblivious" (in the sense of~\cite{KMSV}) as it does not select the next element with respect to the objective function $\phi(\cdot)$. This might be of independent interest.
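As a concrete illustration (and not part of the original analysis), the following Python sketch implements the above greedy rule for the special case of a modular quality function $f$ given by per-element weights; the element set, distances, weights and the value of $\lambda$ used at the end are hypothetical toy inputs.
\begin{verbatim}
import itertools

def greedy_diversify(universe, dist, weight, lam, p):
    # Non-oblivious greedy for max-sum diversification (sketch).
    # universe : list of elements U
    # dist     : dict mapping frozenset({u, v}) -> metric distance d(u, v)
    # weight   : dict u -> w(u); f is modular here, so f_u(S) = w(u)
    # lam, p   : trade-off parameter lambda and cardinality bound
    # Each step adds the element maximizing phi'_u(S) = (1/2) f_u(S) + lam * d_u(S).
    S = []
    d_to_S = {u: 0.0 for u in universe}   # d_u(S), maintained incrementally
    while len(S) < p:
        candidates = [u for u in universe if u not in S]
        best = max(candidates, key=lambda u: 0.5 * weight[u] + lam * d_to_S[u])
        S.append(best)
        for u in candidates:              # O(n) update of all d_u(S)
            if u != best:
                d_to_S[u] += dist[frozenset((u, best))]
    return S

# Hypothetical toy input: four equidistant points with unit weights.
U = ["a", "b", "c", "d"]
d = {frozenset(pair): 1.0 for pair in itertools.combinations(U, 2)}
w = {u: 1.0 for u in U}
print(greedy_diversify(U, d, w, lam=1.0, p=3))
\end{verbatim}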
We utilize the following lemma in~\cite{RRT94}.
\begin{lemma}
\label{lem:rrt}
Given a metric distance function $d(\cdot, \cdot)$, and two disjoint sets $X$ and $Y$, we have the following inequality:
$$(|X|-1)d(X,Y)\ge |Y|d(X).$$
\end{lemma}
Now we are ready to prove Theorem~\ref{thm:main}.
\proof
Let $O$ be the optimal solution, and $G$, the greedy solution at the end of the algorithm.
Let $G_i$ be the greedy solution at the end of step $i$, $i<p$; and let $A=O\cap G_i$, $B=G_i-A$ and
$C=O-A$. By lemma~\ref{lem:rrt}, we have the following three inequalities:
\begin{eqnarray}
(|C|-1)d(B,C)\ge |B|d(C)\\
(|C|-1)d(A,C)\ge |A|d(C)\\
(|A|-1)d(A,C)\ge |C|d(A)
\end{eqnarray}
Furthermore, we have
\begin{eqnarray}
d(A,C)+d(A)+d(C)=d(O)
\end{eqnarray}
Note that the algorithm clearly achieves the optimal solution if $p=1$.
If $|C|=1$, then $i=p-1$ and $G_i\subset O$.
Let $v$ be the element in $C$, and let $u$ be the element taken by the greedy algorithm in the next step, then
$\phi'_u(G_i) \ge \phi'_v(G_i)$. Therefore,
$$ \frac{1}{2}f_u(G_i) + \lambda d_u(G_i) \ge \frac{1}{2}f_v(G_i) + \lambda d_v(G_i),$$
which implies
\begin{eqnarray*}
\phi_u(G_i)&=&f_u(G_i) + \lambda d_u(G_i)\\
&\ge&\frac{1}{2}f_u(G_i) + \lambda d_u(G_i)\\
&\ge&\frac{1}{2}f_v(G_i) + \lambda d_v(G_i)\\
&\ge&\frac{1}{2}\phi_v(G_i);
\end{eqnarray*}
and hence $\phi(G)\ge\frac{1}{2}\phi(O)$.
Now we can assume that
$p>1$ and $|C|>1$. We apply the following non-negative multipliers to equations (1), (2), (3), (4) and add them:
$(1)*\frac{1}{|C|-1}+(2)*\frac{|C|-|B|}{p(|C|-1)}+(3)*\frac{i}{p(p-1)}+(4)*\frac{i|C|}{p(p-1)}$; we then have
$$d(A,C)+d(B,C)-\frac{i|C|(p-|C|)}{p(p-1)(|C|-1)}d(C)\ge \frac{i|C|}{p(p-1)}d(O).$$
Since $p > |C|$,
$$d(C, G_i)\ge \frac{i|C|}{p(p-1)}d(O).$$
By submodularity and monotonicity of $f'(\cdot)$, we have
$$\sum_{v\in C}f'_v(G_i)\ge f'(C\cup G_i)-f'(G_i)\ge f'(O)-f'(G).$$
Therefore,
\begin{eqnarray*}
\sum_{v\in C}\phi'_{v}(G_i)&=&\sum_{v\in C}[f'_v(G_i)+\lambda d(\{v\}, G_i)]\\
&=&\sum_{v\in C}f'_v(G_i)+\lambda d(C, G_i)\\
&\ge& [f'(O)-f'(G)]+\frac{\lambda i|C|}{p(p-1)}d(O).
\end{eqnarray*}
Let $u_{i+1}$ be the element taken at step $(i+1)$, then we have
$$\phi'_{u_{i+1}}(G_i)\ge \frac{1}{p}[f'(O)-f'(G)]+\frac{\lambda i}{p(p-1)}d(O).$$
Summing over all $i$ from $0$ to $p-1$, we have
$$\phi'(G)=\sum_{i=0}^{p-1}\phi'_{u_{i+1}}(G_i)\ge[f'(O)-f'(G)]+\frac{\lambda}{2}d(O).$$
Hence,
$$f'(G)+\lambda d(G)\ge f'(O)-f'(G)+\frac{\lambda}{2}d(O),$$
and
$$\phi(G)=f(G)+\lambda d(G)\ge \frac{1}{2}[f(O)+\lambda d(O)]=\frac{1}{2}\phi(O).$$
This completes the proof.
\qed
The greedy algorithm runs in time proportional to $p$
(for the $p$ iterations) times the
cost of computing $\phi'_u(S)$ for a given $u$ and $S$. When $f$ is
modular, the time for updating
$\phi'_u(S)$ can be bounded by $O(n)$. Namely,
each iteration costs $O(n)$ time
(to search over all elements $u$
in $U \setminus S$) and to update $\phi'(S)$.
Updating $f'(S)$ is clearly $O(1)$, while naively updating
$d_u(S)$ would take time $O(p)$. But as observed by
Birnbaum and Goldman \cite{BirnbaumG09},
$d_u(S)$ can be maintained for all $u \in U \setminus S$ within the same
$O(n)$ time needed to search $U \setminus S$, so that updating $\phi'(S)$ only costs
time $O(1)$.
Hence the total time is $O(np)$,
linear in $n$ when $p$ is a constant.
\begin{corollary}
The Ravi et al.\ \cite{RRT94}
greedy algorithm for dispersion has approximation
ratio no worse than 2.
\end{corollary}
\proof
The identically zero function $f$ is monotone submodular and
for this $f$, our greedy algorithm is precisely the dispersion
algorithm of Ravi et al.
\qed
We note that for the dispersion
problem, Birnbaum and Goldman \cite{BirnbaumG09}
show that their bound for the greedy
algorithm is tight. In particular, for the greedy algorithm that adds
one element at a time, the precise bound is $\frac{2p-2}{p-1}$.
| {
"timestamp": "2014-08-21T02:05:32",
"yymm": "1203",
"arxiv_id": "1203.6397",
"language": "en",
"url": "https://arxiv.org/abs/1203.6397",
"abstract": "Result diversification is an important aspect in web-based search, document summarization, facility location, portfolio management and other applications. Given a set of ranked results for a set of objects (e.g. web documents, facilities, etc.) with a distance between any pair, the goal is to select a subset $S$ satisfying the following three criteria: (a) the subset $S$ satisfies some constraint (e.g. bounded cardinality); (b) the subset contains results of high \"quality\"; and (c) the subset contains results that are \"diverse\" relative to the distance measure. The goal of result diversification is to produce a diversified subset while maintaining high quality as much as possible. We study a broad class of problems where the distances are a metric, where the constraint is given by independence in a matroid, where quality is determined by a monotone submodular function, and diversity is defined as the sum of distances between objects in $S$. Our problem is a generalization of the {\\em max sum diversification} problem studied in \\cite{GoSh09} which in turn is a generaliztion of the {\\em max sum $p$-dispersion problem} studied extensively in location theory. It is NP-hard even with the triangle inequality. We propose two simple and natural algorithms: a greedy algorithm for a cardinality constraint and a local search algorithm for an arbitary matroid constraint. We prove that both algorithms achieve constant approximation ratios.",
"subjects": "Data Structures and Algorithms (cs.DS); Information Retrieval (cs.IR)",
"title": "Max-Sum Diversification, Monotone Submodular Functions and Dynamic Updates",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969660413473,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7099044397493185
} |
https://arxiv.org/abs/1903.04968 | A quantitative Lovász criterion for Property B | A well known observation of Lovász is that if a hypergraph is not $2$-colorable, then at least one pair of its edges intersect at a single vertex. %This very simple criterion turned out to be extremly useful . In this short paper we consider the quantitative version of Lovász's criterion. That is, we ask how many pairs of edges intersecting at a single vertex, should belong to a non $2$-colorable $n$-uniform hypergraph? Our main result is an {\em exact} answer to this question, which further characterizes all the extremal hypergraphs. The proof combines Bollobás's two families theorem with Pluhar's randomized coloring algorithm. | \section{Introduction}\label{sec:intro}
A hypergraph ${\cal H}=(V,E)$ consists of a vertex set $V$ and a set
of edges $E$ where each
$X \in E$ is a subset of $V$. If all edges of ${\cal H}$ have size $n$
then ${\cal H}$
is called an $n$-uniform hypergraph, or $n$-graph for short.
A hypergraph is $2$-colorable if one can assign each vertex $v\in V$
one of two colors, say $Red$/$Blue$, so that each
$X \in E$ contains vertices of both colors. Miller \cite{M}, and later Erd\H{o}s in various papers, referred
to this property as {\em Property B}, after F.~Bernstein \cite{B}
who introduced it in 1907. Since deciding if a hypergraph is
$2$-colorable is $NP$-hard, one cannot hope to find a simple
characterization of all $2$-colorable hypergraphs. Instead, one
looks for general sufficient/necessary conditions for
having this property. For example, a famous result of Seymour \cite{S}
states that if ${\cal H}$ is not $2$-colorable then
$|E| \geq |V|$. Probably the most well studied question of this type
asks for the smallest number of edges in an $n$-graph that
is not $2$-colorable. The study of this quantity, denoted $m(n)$, was
popularized by Erd\H{o}s, see \cite{AS} for a comprehensive treatment.
Despite much effort by many researchers, even the asymptotic value of $m(n)$ has not been determined yet.
A pair of edges $X,Y \in E({\cal H})$ is \emph{simple} if $|X\cap Y|=1$. Let $m_2({\cal H})$ denote the number of ordered simple pairs
of edges of ${\cal H}$. A well known observation of Lov\'asz \cite{L} states that if ${\cal H}$ is not $2$-colorable then $m_2({\cal H}) >0$. Despite its simplicity, this observation underlies the best known bounds for $m(n)$, see
\cite{CK,Pluhar}. It is natural to ask if one can obtain a quantitative version of Lov\'asz's observation, that is,
to estimate how small $m_2({\cal H})$ can be in an $n$-graph not satisfying property $B$.
Our main result in this paper states that (somewhat surprisingly), one can give an exact answer to the above extremal question
as well as characterize the extremal $n$-graphs.
Let $K_{2n-1}^n$ denote the complete $n$-graph on $2n-1$ vertices. It is easy to see that $K_{2n-1}^n$ is not $2$-colorable
and that $m_2(K_{2n-1}^n)=n\cdot\binom{2n-1}{n}$. We first observe that this simple upper bound is tight.
\begin{prop}\label{prop:main}
If an $n$-graph is not $2$-colorable then $m_2({\cal H}) \geq n\cdot\binom{2n-1}{n}$.
\end{prop}
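As a sanity check on the count $m_2(K_{2n-1}^n)=n\cdot\binom{2n-1}{n}$ stated above (and not part of the proof), here is a small brute-force Python sketch; it only verifies the formula for a few small values of $n$.
\begin{verbatim}
from itertools import combinations
from math import comb

def m2_complete(n):
    # Brute-force count of ordered simple pairs (|X & Y| = 1) in K_{2n-1}^n.
    V = range(2 * n - 1)
    edges = [set(e) for e in combinations(V, n)]
    return sum(1 for X in edges for Y in edges if len(X & Y) == 1)

# The brute-force count agrees with n * C(2n-1, n) for small n.
for n in (2, 3, 4):
    assert m2_complete(n) == n * comb(2 * n - 1, n)
\end{verbatim}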
As with any extremal problem, one would like to know which graphs or hypergraphs are extremal with respect to this problem.
For example, Tur\'an's theorem states that among all $n$-vertex graphs not containing a complete
$t$-vertex subgraph, there is only one graph maximizing the number of edges. In the setting of
our problem, it is easy to see that $K_{2n-1}^n$ is not the only non $2$-colorable $n$-graph satisfying
$m_2({\cal H}) = n\cdot\binom{2n-1}{n}$, since one can take a copy of $K_{2n-1}^n$ and add to it more
vertices and edges without increasing the number of simple pairs. Our main result in this paper
characterizes the extremal $n$-graphs, by showing that this is in fact the only way to construct an $n$-graph meeting the bound of Proposition \ref{prop:main}.
\begin{theo}\label{thm:main}
If a non $2$-colorable $n$-graph ${\cal H}$ satisfies $m_2({\cal H}) = n\cdot\binom{2n-1}{n}$ then it contains a copy of $K_{2n-1}^n$.
\end{theo}
While the proof of Proposition \ref{prop:main} is implicit in Pluhar's \cite{Pluhar} argument for bounding $m(n)$,
the proof of Theorem \ref{thm:main} is more intricate,
relying on Bollob\'as's two families theorem \cite{Bo} as well as on a refined analysis of Pluhar's randomized algorithm for $2$-coloring $n$-graphs.
\section{Proof of Proposition \ref{prop:main}}
In this section we describe several preliminary observations regarding a coloring algorithm introduced in~\cite{Pluhar}, and use them to derive Proposition \ref{prop:main}. The algorithm is the following:
\bigskip
\noindent{\bf Algorithm }$\text{Col}({\cal H},\pi)$. The input is a hypergraph ${\cal H}=(V,E)$ and an ordering $\pi:V \mapsto \{1,\ldots,|V|\} $ (that is, $\pi$ is a bijection). The output is a $2$-coloring of $V$ (not necessarily a proper one). The algorithm runs in $|V|$ steps, where in each time step $1\leq i\leq |V|$, the vertex $\pi^{-1}(i)$ is being colored $Blue$ if this does not form any monochromatic $Blue$ edge. Otherwise, $\pi^{-1}(i)$ is colored $Red$.
\bigskip
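For concreteness (and not as part of the paper's argument), the following short Python sketch implements $\text{Col}({\cal H},\pi)$ under the conventions above; the toy hypergraph in the last line is a hypothetical input.
\begin{verbatim}
def col(edges, pi):
    # Col(H, pi) (sketch): process the vertices in the order pi, coloring each
    # vertex Blue unless that would complete an all-Blue edge, in which case
    # it is colored Red.  Returns a dict mapping vertex -> color.
    color = {}
    for v in pi:
        makes_blue_edge = any(
            v in X and all(color.get(u) == "Blue" for u in X if u != v)
            for X in edges
        )
        color[v] = "Red" if makes_blue_edge else "Blue"
    return color

# Hypothetical toy input: the triangle K_3^2 is not 2-colorable, so the
# returned coloring has a monochromatic edge for every ordering pi.
print(col([{0, 1}, {1, 2}, {0, 2}], pi=[0, 1, 2]))
\end{verbatim}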
We now state an important property of $\text{Col}({\cal H},\pi)$.
For two disjoint subsets $X,Y\subseteq V$, we use the notation $\pi(X)<\pi(Y)$ whenever $\max_{x \in X}\pi(x)<\min_{y \in Y}\pi(y)$, that is, the elements of $X$ precede all the elements of $Y$ in the ordering $\pi$.
Suppose $(X,Y)$ is a simple pair of edges in ${\cal H}$ with\footnote{Here, and in what follows, we slightly abuse notation by writing $y$ instead of the more appropriate $\{y\}$.} $X\cap Y=y$.
We say that $\pi$ {\em separates} $(X,Y)$ if $\pi(X\setminus y)<\pi(y)<\pi(Y\setminus y)$.
\begin{claim}\label{claim1}
If $\text{Col}({\cal H},\pi)$ fails to properly color ${\cal H}$ then $\pi$ separates at least one pair of simple edges.
\end{claim}
\begin{proof}
We first observe that (by definition) for every ordering $\pi$, the algorithm $\text{Col}(H,\pi)$ does not produce monochromatic $Blue$ edges.
Suppose then it produced a $Red$ edge $Y\in E$. Let $y$ be the first vertex of $Y$ according to the ordering $\pi$.
If $y$ was colored $Red$, then there must have been an edge $X$ so that $y\in X$, and all other vertices of $X$ were already colored $Blue$ (otherwise the algorithm would have colored $y$ $Blue$). Since the vertices of $X\setminus y$ precede $y$ in $\pi$ while the vertices of $Y\setminus y$ follow it, we have $X\cap Y=y$, so $(X,Y)$ is simple and $\pi$ separates it.
\end{proof}
Note that the claim above already shows that if ${\cal H}$ is not 2-colorable then $m_2({\cal H})>0$.
For the proof of Proposition \ref{prop:main} we will also need the following simple fact.
\begin{claim}\label{claim0}
A random permutation separates any given simple pair with probability $\frac{1}{n\binom{2n-1}{n}}$.
\end{claim}
\begin{proof}
Let $(X,Y)$ be a simple pair, and let $X\cap Y=y$. A permutation $\pi$ separates $(X,Y)$ if and only if $\pi(X\setminus y)<\pi(y)<\pi(Y\setminus y)$,
and this happens with probability exactly
$$\frac{(n-1)!(n-1)!}{(2n-1)!}=\frac{1}{n\binom{2n-1}{n}}$$
as desired.
\end{proof}
The above claims suffice for proving Proposition \ref{prop:main}.
\begin{proof}[Proof (of Proposition \ref{prop:main}):]
Assume $m_2({\cal H}) < n\binom{2n-1}{n}$. Suppose we pick a uniformly random $\pi$.
Then by the union bound and Claim \ref{claim0}, we infer that with positive probability $\pi$ does not separate any simple pair of edges.
Hence, there is a $\pi$ not separating any simple pair. Claim \ref{claim1} then implies that $\text{Col}({\cal H},\pi)$ will produce a legal $2$-coloring of ${\cal H}$.
\end{proof}
\section{Proof of Theorem \ref{thm:main}}
For the rest of this section fix some non $2$-colorable $n$-graph ${\cal H}=(V,E)$ satisfying $m_2({\cal H})=n\binom{2n-1}{n}$.
We need to show that ${\cal H}$ contains a copy of $K^n_{2n-1}$. We start with a few preliminary claims regarding ${\cal H}$.
First, we show that no $\pi$ separates more than one simple pair.
\begin{claim}
\label{claim: separates at most one pair}
Every ordering $\pi$ separates at most one simple pair.
\end{claim}
\begin{proof}
Suppose $\pi$ separates two simple pairs. By Claim \ref{claim0}, the assumption on $m_2(\cal H)$, and by linearity of expectation, the expected number of simple pairs separated by a random permutation is exactly $1$. Hence, if $\pi$ separates $2$ simple pairs, then there must exist a permutation $\sigma$ which separates less than $1$, and therefore $0$, simple pairs. Therefore, by Claim \ref{claim1} we obtain that $\text{Col}({\cal H},\sigma)$ produces a legal $2$-coloring of ${\cal H}$, which is a contradiction to the assumption that $\mathcal H$ is not $2$-colorable.
\end{proof}
\begin{claim}\label{claim: no two edges intersect same vertex}
If $(X,Y)$ and $(X',Y)$ are simple pairs, then $X \cap Y \neq X' \cap Y$.
\end{claim}
\begin{proof}
We observe that if $X \cap Y = X' \cap Y=y$, then there is a $\pi$ that separates both $(X,Y)$ and $(X',Y)$, and this will contradict Claim \ref{claim: separates at most one pair}. Indeed, if $(X,Y)$ and $(X',Y)$ are simple pairs and $X\cap Y=X'\cap Y=y$, then $(X\cup X')\setminus y$ and $Y$ are disjoint. Therefore,
any $\pi$ satisfying
$$
\pi((X\cup X')\setminus y)<\pi(y)<\pi(Y \setminus y)
$$
separates $(X,Y)$ and $(X',Y)$. This completes the proof.
\end{proof}
In addition to the above observations about ${\cal H}$, the last ingredient we will need is the following theorem of Bollob\'as \cite{Bo}.
\begin{lemma}\label{lem:bol}
Let $I$ be an index set. For all $i\in I$, let $A_i$ and $B_i$ be subsets of a set $V$ of $p$ elements satisfying the following conditions:
\begin{enumerate}
\item $A_i\cap B_i=\emptyset$ for all $i\in I$, and
\item $A_j\nsubseteq A_i\cup B_i$ for all $i\neq j\in I$.
\end{enumerate}
Then, we have
$$\sum_{i\in I}\frac{1}{\binom{p-|B_i|}{|A_i|}}\leq 1,$$
with equality if and only if $B_i=B$ for all $i\in I$ and the sets $A_i$ are all the $q$-tuples of the set $V\setminus B$ for some value of $q$.
\end{lemma}
Let us now show how to use Lemma \ref{lem:bol} in order to derive Theorem \ref{thm:main}. Recall that $V$ is the vertex set of ${\cal H}$ and set $p:=|V|$.
Let $M(\mathcal H)$ be a collection of simple pairs $(X,Y)$ defined as follows; out of all the simple pairs $(X,Y)$ with the same ``second'' set $Y$, put
in $M(\mathcal H)$ one of these pairs. Observe that by Claim \ref{claim: no two edges intersect same vertex} each $Y$ belongs to at most $|Y|=n$ simple pairs of the form $(X,Y)$ (i.e, with $Y$ as the second set), implying that $t:=|M(\mathcal H)|\geq \frac{1}{n} \cdot m_2(\mathcal H)= \binom{2n-1}{n}$.
We now define a collection $\mathcal F$ consisting of pairs of subsets of $V$ as follows: For every simple pair $s:=(X,Y)\in M(\mathcal H)$, define $A_s=X\setminus Y$ and $B_s=V\setminus (X\cup Y)$, and let $\mathcal F=\{(A_s,B_s): \text{ }s\in M(\mathcal H)\}$. For convenience, let
us rename the pairs in ${\cal F}$ as $(A_i,B_i)$ with $1 \leq i \leq t$.
Now we wish to show that $\mathcal F$ satisfies the conditions in Lemma \ref{lem:bol}. Observe that if it does, then since
$$\sum_{i=1}^t\frac{1}{\binom{p-|B_i|}{|A_i|}}=\sum_{i=1}^t \frac{1}{\binom{2n-1}{n-1}}\geq 1,$$
it follows by the first part of Lemma \ref{lem:bol} that the last inequality is in fact an equality. Therefore, by the second part of Lemma \ref{lem:bol}, we conclude that all the $B_i$'s are the same set $B$, and the set of all the $A_i$'s consists of all the $(n-1)$-subsets of a ground set of size $2n-1$. That is, let $B=B_i$ and $U=V\setminus B$. Then we have that $|U|=2n-1$, and that the sets $A_i$ are all the $(n-1)$-subsets of $U$. Since by construction we have that $U\setminus A_i\in E(\mathcal H)$ for all $i$, we conclude that $\mathcal H$ restricted to the set $U$ is a copy of $K^n_{2n-1}$ as desired. It thus remains to show the following:
\begin{claim}
$\mathcal F$ satisfies the conditions in Lemma \ref{lem:bol}.
\end{claim}
\begin{proof}
The first condition $A_i\cap B_i=\emptyset$ for all $i$ is trivially satisfied by construction. For the second condition, let $(A,B)$ and $(A',B')$ be two elements in $\mathcal F$ coming from simple pairs $(X,Y)$ and $(X',Y')$ belonging to $M({\cal H})$, respectively. Recall that by the way we defined $M(\mathcal H)$ and $\mathcal F$ we have $Y\neq Y'$. Let us use $y$ and $y'$ to denote the unique elements in $X\cap Y$ and $X'\cap Y'$, respectively.
We wish to show that $A\nsubseteq A'\cup B'$, which, by construction, is implied by $(X\setminus y)\cap Y'\neq \emptyset$. Assuming $(X\setminus y)\cap Y'=\emptyset$, we will derive a contradiction to Claim \ref{claim: separates at most one pair} by showing that there is a permutation $\pi$ separating two distinct simple pairs.
Observe that it cannot be that $y\in Y'$. Indeed, if it was the case, then together with the assumption that $(X\setminus y)\cap Y'=\emptyset$ we would infer
that $(X,Y)$ and $(X,Y')$ are both simple pairs intersecting at $y$ (and distinct as $Y\neq Y'$), contradicting Claim \ref{claim: no two edges intersect same vertex}.
Assume then that $y\not \in Y'$ (so in particular $y \neq y'$). We claim that we can find a $\pi$ satisfying
$$
\pi(X\setminus y)<\pi(y)< \pi((X'\setminus y')\setminus X)<\pi(y')<\pi((Y\cup Y')\setminus (X \cup X')).
$$
Indeed, the only thing that needs to be justified is the ability to place $y'$ as above, which follows from the fact that $y' \in Y'$ and the
assumption $(X\setminus y)\cap Y'=\emptyset$ which together imply that $y' \not \in X$. Observe that since $\pi$ first places $X\setminus y$ and then $y$, the pair $(X,Y)$ is separated by $\pi$. Such a $\pi$ clearly places $X' \setminus y'$ before $y'$ and the assumption $(X\setminus y)\cap Y'=\emptyset$ together with the fact that $y \not \in Y'$ imply that such a $\pi$ places all of $Y' \setminus y'$ after $y'$, so it
separates $(X',Y')$ as well, giving us the desired contradiction.
\end{proof}
This completes the proof of Theorem \ref{thm:main}.
| {
"timestamp": "2019-03-13T01:17:59",
"yymm": "1903",
"arxiv_id": "1903.04968",
"language": "en",
"url": "https://arxiv.org/abs/1903.04968",
"abstract": "A well known observation of Lovász is that if a hypergraph is not $2$-colorable, then at least one pair of its edges intersect at a single vertex. %This very simple criterion turned out to be extremly useful . In this short paper we consider the quantitative version of Lovász's criterion. That is, we ask how many pairs of edges intersecting at a single vertex, should belong to a non $2$-colorable $n$-uniform hypergraph? Our main result is an {\\em exact} answer to this question, which further characterizes all the extremal hypergraphs. The proof combines Bollobás's two families theorem with Pluhar's randomized coloring algorithm.",
"subjects": "Combinatorics (math.CO)",
"title": "A quantitative Lovász criterion for Property B",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969636371976,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7099044380141397
} |
https://arxiv.org/abs/1601.07678 | Extremal Relations Between Shannon Entropy and $\ell_α$-Norm | The paper examines relationships between the Shannon entropy and the $\ell_{\alpha}$-norm for $n$-ary probability vectors, $n \ge 2$. More precisely, we investigate the tight bounds of the $\ell_{\alpha}$-norm with a fixed Shannon entropy, and vice versa. As applications of the results, we derive the tight bounds between the Shannon entropy and several information measures which are determined by the $\ell_{\alpha}$-norm, e.g., Rényi entropy, Tsallis entropy, the $R$-norm information, and some diversity indices. Moreover, we apply these results to uniformly focusing channels. Then, we show the tight bounds of Gallager's $E_{0}$ functions with a fixed mutual information under a uniform input distribution. | \section{Introduction}
Information measures of random variables are used in several fields.
The Shannon entropy \cite{shannon} is one of the famous measures of uncertainty for a given random variable.
On the studies of information measures, inequalities for information measures are commonly used in many applications.
For instance, Fano's inequality \cite{fano} gives the tight upper bound on the conditional Shannon entropy with a fixed error probability.
Here, \emph{tight} means that there exists a distribution attaining equality in the bound.
Later, the reverse of Fano's inequality, i.e., the tight lower bound on the conditional Shannon entropy with a fixed error probability, was established \cite{kovalevsky, tebbe, feder}.
On the other hand, Harremo\"{e}s and Tops{\o}e \cite{topsoe} derived the exact range between the Shannon entropy and the index of coincidence (or the Simpson index) for all $n$-ary probability vectors, $n \ge 3$.
Note that, in the above studies, the error probability and the index of coincidence are closely related to the $\ell_{\infty}$-norm and the $\ell_{2}$-norm, respectively.
Similarly, several axiomatic definitions of the entropies \cite{renyi, tsallis2, havrda, daroczy, behara, boekee} are also related to the $\ell_{\alpha}$-norm.
Furthermore, the $\ell_{\alpha}$-norm is also related to some diversity indices, such as the index of coincidence.
In this study, we examine extremal relations between the Shannon entropy and the $\ell_{\alpha}$-norm for $n$-ary probability vectors, $n \ge 2$.
More precisely, we establish the tight bounds of $\ell_{\alpha}$-norm with a fixed Shannon entropy in Theorem \ref{th:extremes}.
Similarly, we also derive the tight bounds of the Shannon entropy with a fixed $\ell_{\alpha}$-norm in Theorem \ref{th:extremes2}.
Directly extending Theorem \ref{th:extremes} to Corollary \ref{cor:extremes}, we can obtain the tight bounds of several information measures which are determined by the $\ell_{\alpha}$-norm with a fixed Shannon entropy, as shown in Table \ref{table:extremes}.
In particular, we illustrate the exact feasible regions between the Shannon entropy and the R\'{e}nyi entropy in Fig. \ref{fig:Renyi} by using \eqref{eq:Renyi_bound1} and \eqref{eq:Renyi_bound2}.
In Section \ref{subsect:focusing}, we consider applications of Corollary \ref{cor:extremes} for a particular class of discrete memoryless channels, defined in Definition \ref{def:focusing}, which is called \emph{uniformly focusing} \cite{massey} or \emph{uniform from the output} \cite{fano2}.
\section{Preliminaries}
\label{sect:pre}
\subsection{$n$-ary probability vectors and its information measures}
Let the set of all $n$-ary probability vectors be denoted by
\begin{align}
\mathcal{P}_{n} \!
\triangleq \!
\left\{ (p_{1}, p_{2}, \dots, p_{n}) \in \mathbb{R}^{n} \left| \; p_{j} \ge 0 \ \mathrm{and} \ \sum_{i=1}^{n} p_{i} = 1 \! \right. \right\}
\end{align}
for an integer $n \ge 2$.
For $\bvec{p} = (p_{1}, p_{2}, \dots, p_{n}) \in \mathcal{P}_{n}$, let
\begin{align}
p_{[1]} \ge p_{[2]} \ge \dots \ge p_{[n]}
\end{align}
denote the components of $\bvec{p}$ in decreasing order, and let
\begin{align}
\bvec{p}_{\downarrow}
\triangleq
(p_{[1]}, p_{[2]}, \dots, p_{[n]})
\label{def:rearrangement}
\end{align}
denote the decreasing rearrangement%
\footnote{This rearrangement is denoted by reference to the notation of \cite{marshall}.}
of $\bvec{p}$.
In particular, we define the following two $n$-ary probability vectors:
(i) an $n$-ary deterministic distribution
\begin{align}
\bvec{d}_{n}
\triangleq
(d_{1}, d_{2}, \dots, d_{n}) \in \mathcal{P}_{n}
\end{align}
is defined by $d_{1} = 1$ and $d_{i} = 0$ for $i \in \{ 2, 3, \dots, n \}$ and
(ii) the $n$-ary equiprobable distribution
\begin{align}
\bvec{u}_{n}
\triangleq
(u_{1}, u_{2}, \dots, u_{n}) \in \mathcal{P}_{n}
\end{align}
is defined by $u_{i} = \frac{1}{n}$ for $i \in \{ 1, 2, \dots, n \}$.
For an $n$-ary random variable $X \sim \bvec{p} \in \mathcal{P}_{n}$, we define the Shannon entropy \cite{shannon} as
\begin{align}
H( X )
=
H( \bvec{p} )
\triangleq
- \sum_{i=1}^{n} p_{i} \ln p_{i} ,
\end{align}
where $\ln$ denotes the natural logarithm and assume that
$0 \ln 0 = 0$.
Moreover, we define the $\ell_{\alpha}$-norm of $\bvec{p} \in \mathcal{P}_{n}$ as
\begin{align}
\| \bvec{p} \|_{\alpha}
\triangleq
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha}}
\end{align}
for $\alpha \in (0, \infty)$.
Note that $\lim_{\alpha \to \infty} \| \bvec{p} \|_{\alpha} = \| \bvec{p} \|_{\infty} \triangleq \max \{ p_{1}, p_{2}, \dots, p_{n} \}$ for $\bvec{p} \in \mathcal{P}_{n}$.
In works extending the Shannon entropy, the $\ell_{\alpha}$-norm appears in several information measures.
For instance, R\'{e}nyi \cite{renyi} generalized the Shannon entropy axiomatically to the R\'{e}nyi entropy of order $\alpha \in (0, 1) \cup (1, \infty)$, defined as
\begin{align}
H_{\alpha}( X )
=
H_{\alpha}( \bvec{p} )
\triangleq
\frac{ \alpha }{ 1 - \alpha } \ln \| \bvec{p} \|_{\alpha}
\label{def:renyi}
\end{align}
for $X \sim \bvec{p} \in \mathcal{P}_{n}$.
Note that it is usually defined that $H_{1}( X ) \triangleq H(X)$ since $\lim_{\alpha \to 1} H_{\alpha}(X) = H(X)$ by L'H\^{o}pital's rule.
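For example, for the equiprobable distribution we have $\| \bvec{u}_{n} \|_{\alpha} = n^{\frac{1}{\alpha}-1}$, and hence $H_{\alpha}( \bvec{u}_{n} ) = \frac{ \alpha }{ 1 - \alpha } \ln n^{\frac{1}{\alpha}-1} = \ln n$ for every order $\alpha \in (0, 1) \cup (1, \infty)$.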
In other axiomatic definitions of entropies \cite{tsallis2, havrda, daroczy, behara, boekee}, we can also define them by using the $\ell_{\alpha}$-norm, as with \eqref{def:renyi}.
In this study, we analyze relations between $H( \bvec{p} )$ and $\| \bvec{p} \|_{\alpha}$ to examine relationships between the Shannon entropy and several information measures.
Note that $H( \bvec{p} )$ and $\| \bvec{p} \|_{\alpha}$ are invariant for any permutation of the indices of $\bvec{p} \in \mathcal{P}_{n}$;
that is,
\begin{align}
H( \bvec{p} ) = H( \bvec{p}_{\downarrow} )
\qquad \mathrm{and} \qquad
\| \bvec{p} \|_{\alpha} = \| \bvec{p}_{\downarrow} \|_{\alpha}
\end{align}
for any $\bvec{p} \in \mathcal{P}_{n}$.
Hence, we only consider $\bvec{p}_{\downarrow}$ for $\bvec{p} \in \mathcal{P}_{n}$ in the analyses of the study.
Since $\| \bvec{p} \|_{1} = 1$ for any $\bvec{p} \in \mathcal{P}_{n}$, we have no interest in the case $\alpha = 1$;
hence, we omit the case $\alpha = 1$ in this study.
Furthermore, since
\begin{align}
H( \bvec{p} ) = \ln n
& \iff
\| \bvec{p} \|_{\alpha} = n^{\frac{1}{\alpha}-1}
\iff
\bvec{p}
=
\bvec{u}_{n} ,
\label{eq:uniform} \\
H( \bvec{p} ) = 0
& \iff
\| \bvec{p} \|_{\alpha} = 1
\iff
\bvec{p}_{\downarrow}
=
\bvec{d}_{n} ,
\label{eq:deterministic}
\end{align}
the cases $\bvec{p} = \bvec{u}_{n}$ and $\bvec{p}_{\downarrow} = \bvec{d}_{n}$ are trivial;
thus, we also omit these cases in the analyses of this study.
\subsection{Properties of two distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$}
\label{subsect:vw}
For a fixed $n \ge 2$,
let the $n$-ary distribution
$
\bvec{v}_{n}( p )
\triangleq
(v_{1}(p), v_{2}(p), \dots v_{n}(p)) \in \mathcal{P}_{n}
$
be defined by
\begin{align}
v_{i}( p )
=
\begin{cases}
1 - (n-1) p
& \mathrm{if} \ i = 1 , \\
p
& \mathrm{otherwise}
\end{cases}
\end{align}
for $p \in [0, \frac{1}{n-1}]$, and let the $n$-ary distribution%
\footnote{The definition of $\bvec{w}_{n}( \cdot )$ is similar to the definition of \cite[Eq. (26)]{verdu}.}
$
\bvec{w}_{n}( p )
\triangleq
(w_{1}( p ), w_{2}( p ), \dots, w_{n}( p )) \in \mathcal{P}_{n}
$
be defined by
\begin{align}
w_{i}( p )
=
\begin{cases}
p
& \mathrm{if} \ 1 \le i \le \lfloor p^{-1} \rfloor , \\
1 - \lfloor p^{-1} \rfloor p
& \mathrm{if} \ i = \lfloor p^{-1} \rfloor + 1 , \\
0
& \mathrm{otherwise}
\end{cases}
\end{align}
for $p \in [\frac{1}{n}, 1]$, where $\lfloor \cdot \rfloor$ denotes the floor function.
Note that $\bvec{v}_{n}( p )_{\downarrow} = \bvec{w}_{n}( p )$ for $p \in [\frac{1}{n}, \frac{1}{n-1}]$.
In this subsection, we examine the properties of the Shannon entropies and the $\ell_{\alpha}$-norms for $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$.
For simplicity, we define
\begin{align}
H_{\sbvec{v}_{n}}( p )
& \triangleq
H( \bvec{v}_{n}( p ) )
\\
& =
- (1 - (n-1) p ) \ln (1 - (n-1) p ) - (n-1) p \ln p ,
\\
H_{\sbvec{w}_{n}}( p )
& \triangleq
H( \bvec{w}_{n}( p ) )
\\
& =
- \lfloor p^{-1} \rfloor p \ln p - (1 - \lfloor p^{-1} \rfloor p ) \ln (1 - \lfloor p^{-1} \rfloor p ) .
\end{align}
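As a quick numerical illustration (not part of the formal argument), the following Python sketch builds $\bvec{v}_{n}(p)$ and $\bvec{w}_{n}(p)$ and evaluates their Shannon entropies and $\ell_{\alpha}$-norms; the particular values of $n$, $p$ and $\alpha$ below are arbitrary.
\begin{verbatim}
import math

def v(n, p):
    # v_n(p): one mass 1 - (n-1)p and n-1 masses p, for p in [0, 1/(n-1)].
    return [1 - (n - 1) * p] + [p] * (n - 1)

def w(n, p):
    # w_n(p): floor(1/p) masses p, one remainder mass, zeros elsewhere (p in [1/n, 1]).
    m = math.floor(1 / p)
    return ([p] * m + [1 - m * p] + [0.0] * n)[:n]

def shannon(q):
    # Shannon entropy in nats, with the convention 0 ln 0 = 0.
    return -sum(x * math.log(x) for x in q if x > 0)

def lp_norm(q, alpha):
    return sum(x ** alpha for x in q) ** (1 / alpha)

# Both distributions reduce to the equiprobable one at p = 1/n, so H = ln n there.
print(shannon(v(4, 0.25)), shannon(w(4, 0.25)), math.log(4))
print(lp_norm(v(4, 0.1), 2.0), lp_norm(w(4, 0.6), 2.0))
\end{verbatim}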
Then, we first show the monotonicity of $H_{\sbvec{v}_{n}}( p )$ with respect to $p \in [0, \frac{1}{n}]$ in the following lemma.
\begin{lemma}
\label{lem:Hv}
$H_{\sbvec{v}_{n}}( p )$ is strictly increasing for $p \in [0, \frac{1}{n}]$.
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:Hv}]
It is easy to see that
\begin{align}
H_{\sbvec{v}_{n}}( p )
& =
- \sum_{i=1}^{n} v_{i}( p ) \ln v_{i}( p )
\\
& =
- v_{1}( p ) \ln v_{1}( p ) - \sum_{i=2}^{n} v_{i}( p ) \ln v_{i}( p )
\\
& =
- (1-(n-1)p) \ln (1-(n-1)p) - \sum_{i=2}^{n} v_{i}( p ) \ln v_{i}( p )
\\
& =
- (1-(n-1)p) \ln (1-(n-1)p) - (n-1) p \ln p .
\end{align}
Then, the first-order derivative of $H_{\sbvec{v}_{n}}( p )$ with respect to $p$ is
\begin{align}
\frac{ \partial H_{\sbvec{v}_{n}}( p ) }{ \partial p }
& =
\frac{ \partial }{ \partial p } \left( \vphantom{\sum} - (n-1) p \ln p - (1 - (n-1) p) \ln (1 - (n-1) p) \right)
\\
& =
- (n-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} p } (p \ln p) \right) - \left( \frac{ \partial }{ \partial p } ((1 - (n-1) p) \ln (1 - (n-1) p)) \right)
\\
& =
- (n-1) \left( \vphantom{\sum} \ln p + 1 \right) + (n-1) \left( \vphantom{\sum} \ln (1 - (n-1)p) + 1 \right)
\\
& =
(n-1) \left( \vphantom{\sum} \ln (1 - (n-1)p) - \ln p \right)
\\
& =
(n-1) \ln \frac{1 - (n-1) p}{p} .
\label{eq:diff1_Hv}
\end{align}
Since $1 - (n-1) p > p > 0$ for $p \in (0, \frac{1}{n})$, it follows from \eqref{eq:diff1_Hv} that
\begin{align}
\frac{ \partial H_{\sbvec{v}_{n}}( p ) }{ \partial p } > 0
\end{align}
for $p \in (0, \frac{1}{n})$.
Note that $H_{\sbvec{v}_{n}}( p )$ is continuous for $p \in [0, \frac{1}{n}]$ since $\lim_{p \to \frac{1}{n}} H_{\sbvec{v}_{n}}( p ) = H_{\sbvec{v}_{n}}( \frac{1}{n} ) = \ln n$ and $\lim_{p \to 0^{+}} H_{\sbvec{v}_{n}}( p ) = H_{\sbvec{v}_{n}}( 0 ) = 0$ by the assumption $0 \ln 0 = 0$.
Therefore, $H_{\sbvec{v}_{n}}( p )$ is strictly increasing for $p \in [0, \frac{1}{n}]$.
\end{IEEEproof}
Lemma \ref{lem:Hv} implies the existence of the inverse function of $H_{\sbvec{v}_{n}}( p )$ for $p \in [0, \frac{1}{n}]$.
We second show the monotonicity of $H_{\sbvec{w}_{n}}( p )$ with respect to $p \in [\frac{1}{n}, 1]$ as follows:
\begin{lemma}
\label{lem:Hw}
$H_{\sbvec{w}_{n}}( p )$ is strictly decreasing for $p \in [\frac{1}{n}, 1]$.
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:Hw}]
For an integer $m \in [2, n]$, assume that $p \in [\frac{1}{m}, \frac{1}{m-1}]$.
Then, note that $\lfloor p^{-1} \rfloor = m$.
It is easy to see that
\begin{align}
H_{\sbvec{w}_{n}}( p )
& =
- \sum_{i=1}^{n} w_{i}( p ) \ln w_{i}( p )
\\
& =
- \sum_{i=1}^{m} w_{i}( p ) \ln w_{i}( p ) - w_{m+1}( p ) \ln w_{m+1}( p ) - \sum_{j=m+2}^{n} w_{j}( p ) \ln w_{j}( p )
\\
& \overset{\text{(a)}}{=}
- \sum_{i=1}^{m} w_{i}( p ) \ln w_{i}( p ) - w_{m+1}( p ) \ln w_{m+1}( p )
\\
& =
- m \, p \ln p - w_{m+1}( p ) \ln w_{m+1}( p )
\\
& =
- m \, p \ln p - (1 - m \, p) \ln (1 - m \, p) ,
\end{align}
where (a) follows by the assumption $0 \ln 0 = 0$.
Then, the first-order derivative of $H_{\sbvec{w}_{n}}( p )$ with respect to $p$ is
\begin{align}
\frac{ \partial H_{\sbvec{w}_{n}}( p ) }{ \partial p }
& =
\frac{ \partial }{ \partial p } \left( \vphantom{\sum} - m \, p \ln p - (1 - m \, p) \ln (1 - m \, p) \right)
\\
& =
- m \left( \frac{ \mathrm{d} }{ \mathrm{d} p } (p \ln p) \right) - \left( \frac{ \partial }{ \partial p } ((1 - m \, p) \ln (1 - m \, p)) \right)
\\
& =
- m \left( \vphantom{\sum} \ln p + 1 \right) + m \left( \vphantom{\sum} \ln (1 - m \, p) + 1 \right)
\\
& =
m \left( \vphantom{\sum} \ln (1 - m \, p) - \ln p \right)
\\
& =
m \ln \frac{ 1 - m \, p }{ p } .
\label{eq:diff1_Hw}
\end{align}
Since $p > 1 - m \, p > 0$ for $p \in (\frac{1}{m}, \frac{1}{m-1})$, it follows from \eqref{eq:diff1_Hw} that
\begin{align}
\frac{ \partial H_{\sbvec{w}_{n}}( p ) }{ \partial p } < 0
\end{align}
for $p \in (\frac{1}{m}, \frac{1}{m-1})$.
On the other hand, we observe that
\begin{align}
\lim_{p \to (\frac{1}{m})^{-}} H_{\sbvec{w}_{n}}( p )
& =
\lim_{p \to (\frac{1}{m})^{-}} \left( \vphantom{\sum} - \lfloor p^{-1} \rfloor p \ln p - (1 - \lfloor p^{-1} \rfloor p) \ln (1 - \lfloor p^{-1} \rfloor p) \right)
\\
& =
\lim_{p \to (\frac{1}{m})^{-}} \left( \vphantom{\sum} - m \, p \ln p - (1 - m \, p) \ln (1 - m \, p) \right)
\\
& =
\ln m - \lim_{p \to (\frac{1}{m})^{-}} \left( \vphantom{\sum} (1 - m \, p) \ln (1 - m \, p) \right)
\\
& =
\ln m - \lim_{x \to 0^{+}} \left( \vphantom{\sum} x \ln x \right)
\\
& =
\ln m
\label{eq:Hw}
\end{align}
for an integer $m \in [1, n-1]$ and
\begin{align}
\lim_{p \to (\frac{1}{m})^{+}} H_{\sbvec{w}_{n}}( p )
& =
\lim_{p \to (\frac{1}{m})^{+}} \left( \vphantom{\sum} - \lfloor p^{-1} \rfloor p \ln p - (1 - \lfloor p^{-1} \rfloor p) \ln (1 - \lfloor p^{-1} \rfloor p) \right)
\\
& =
\lim_{p \to (\frac{1}{m})^{+}} \left( \vphantom{\sum} - (m-1) p \ln p - (1 - (m-1) p) \ln (1 - (m-1) p) \right)
\\
& =
\left( 1 - \frac{1}{m} \right) \ln m - \lim_{p \to (\frac{1}{m})^{+}} \left( \vphantom{\sum} (1 - (m-1) p) \ln (1 - (m-1) p) \right)
\\
& =
\left( 1 - \frac{1}{m} \right) \ln m - \left( - \frac{1}{m} \ln m \right)
\\
& =
\ln m
\end{align}
for an integer $m \in [2, n]$.
Note that $H_{\sbvec{w}_{n}}( \frac{1}{m} ) = \ln m$ from \eqref{eq:Hw} and the assumption $0 \ln 0 = 0$.
Hence, for any integer $m \in [2, n-1]$, we get that
\begin{align}
\lim_{p \to (\frac{1}{n})^{+}} H_{\sbvec{w}_{n}}( p ) & = H_{\sbvec{w}_{n}}( {\textstyle \frac{1}{n}} ) = \ln n
\\
\lim_{p \to \frac{1}{m}} H_{\sbvec{w}_{n}}( p ) & = H_{\sbvec{w}_{n}}( {\textstyle \frac{1}{m}} ) = \ln m ,
\\
\lim_{p \to 1^{-}} H_{\sbvec{w}_{n}}( p ) & = H_{\sbvec{w}_{n}}( 1 ) = 0 ,
\end{align}
which imply that $H_{\sbvec{w}_{n}}( p )$ is continuous for $p \in [\frac{1}{n}, 1]$.
Therefore, $H_{\sbvec{w}_{n}}( p )$ is strictly decreasing for $p \in [\frac{1}{n}, 1]$.
\end{IEEEproof}
As with Lemma \ref{lem:Hv}, Lemma \ref{lem:Hw} also implies the existence of the inverse function of $H_{\sbvec{w}_{n}}( p )$ for $p \in [\frac{1}{n}, 1]$.
Since $H_{\sbvec{v}_{n}}( 0 ) = 0$, $H_{\sbvec{v}_{n}}( \frac{1}{n} ) = \ln n$, $H_{\sbvec{w}_{n}}( \frac{1}{n} ) = \ln n$, and $H_{\sbvec{w}_{n}}( 1 ) = 0$, we can denote the inverse functions of $H_{\sbvec{v}_{n}}( p )$ and $H_{\sbvec{w}_{n}}( p )$ with respect to $p$ as follows:
We denote by $H_{\sbvec{v}_{n}}^{-1} : [0, \ln n] \to [0, \frac{1}{n}]$ the inverse function of $H_{\sbvec{v}_{n}}( p )$ for $p \in [0, \frac{1}{n}]$.
Moreover, we also denote by $H_{\sbvec{w}_{n}}^{-1} : [0, \ln n] \to [\frac{1}{n}, 1]$ the inverse function of $H_{\sbvec{w}_{n}}( p )$ for $p \in [\frac{1}{n}, 1]$.
Now, we provide the monotonicity of $\| \bvec{v}_{n}( p ) \|_{\alpha}$ with respect to $H_{\sbvec{v}_{n}}( p )$ in the following lemma.
\begin{lemma}
\label{lem:mono_v}
For any fixed $n \ge 2$ and any fixed $\alpha \in (-\infty, 0) \cup(0, 1) \cup (1, \infty)$, if $p \in [0, \frac{1}{n}]$, the following monotonicity hold:
\begin{itemize}
\item[(i)]
if $\alpha > 1$, then $\| \bvec{v}_{n}( p ) \|_{\alpha}$ is strictly decreasing for $H_{\sbvec{v}_{n}}( p ) \in [0, \ln n]$ and
\item[(ii)]
if $\alpha < 1$, then $\| \bvec{v}_{n}( p ) \|_{\alpha}$ is strictly increasing for $H_{\sbvec{v}_{n}}( p ) \in [0, \ln n]$.
\end{itemize}
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:mono_v}]
The proof of Lemma \ref{lem:mono_v} is given in a similar manner with \cite[Appendix I]{fabregas}.
By the chain rule of the derivation and the inverse function theorem, we have
\begin{align}
\frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) }
& =
\left( \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p } \right) \cdot \left( \frac{ \partial p }{ \partial H_{\sbvec{v}_{n}}( p ) } \right)
\\
& =
\left( \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p } \right) \cdot \left( \frac{ 1 }{ \frac{ \partial H( \sbvec{v}_{n}( p ) ) }{ \partial p } } \right) .
\label{eq:diff1}
\end{align}
Direct calculation shows
\begin{align}
\frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial p }
& =
\frac{ \partial }{ \partial p } \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha}}
\\
& =
\frac{1}{\alpha} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ \partial }{ \partial p } \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right) \right)
\\
& =
\frac{1}{\alpha} \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha (n-1) \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \right)
\\
& =
(n-1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) .
\label{eq:norm_diff1}
\end{align}
Substituting \eqref{eq:diff1_Hv} and \eqref{eq:norm_diff1} into \eqref{eq:diff1}, we obtain
\begin{align}
&
\frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) }
\notag \\
& \quad =
(n-1) \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \left( \frac{ 1 }{ (n-1) \ln \frac{ 1 - (n-1) p}{ p } } \right)
\\
& \quad =
\left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \frac{ 1 }{ \ln \frac{ 1 - (n-1) p}{ p } } .
\label{eq:diff1_N_H_v}
\end{align}
We now define the sign function as
\begin{align}
\operatorname{sgn}( x )
\triangleq
\begin{cases}
1
& \mathrm{if} \ x > 0 , \\
0
& \mathrm{if} \ x = 0 , \\
-1
& \mathrm{if} \ x < 0 .
\end{cases}
\end{align}
Since $0 < p < 1 - (n-1) p$ for $p \in (0, \frac{1}{n})$, we observe that
\begin{align}
\operatorname{sgn} \! \left( \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \right)
& =
1 ,
\\
\operatorname{sgn} \! \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha < 1 , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha > 1 ,
\end{cases}
\\
\operatorname{sgn} \! \left( \frac{ 1 }{ \ln \frac{ 1 - (n-1) p}{ p } } \right)
& =
1
\end{align}
for $p \in (0, \frac{1}{n})$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$;
and therefore, we have
\begin{align}
&
\operatorname{sgn} \! \left( \frac{ \partial \| \bvec{v}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{v}_{n}}( p ) } \right)
\notag \\
& \quad \overset{\eqref{eq:diff1_N_H_v}}{=}
\operatorname{sgn} \! \left( \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \frac{ 1 }{ \ln \frac{ 1 - (n-1) p}{ p } } \right)
\\
& \quad =
\operatorname{sgn} \! \left( \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \right) \! \cdot \operatorname{sgn} \! \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \! \cdot \operatorname{sgn} \! \left( \frac{ 1 }{ \ln \frac{ 1 - (n-1) p}{ p } } \right)
\\
& \quad =
\begin{cases}
1
& \mathrm{if} \ \alpha < 1 , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha > 1 ,
\end{cases}
\label{eq:sign_diff1_N_H_v}
\end{align}
for $p \in (0, \frac{1}{n})$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$, which implies Lemma \ref{lem:mono_v}.
\end{IEEEproof}
It follows from Lemmas \ref{lem:Hv} and \ref{lem:mono_v} that, for each $\alpha \in (0, 1) \cup (1, \infty)$, $\| \bvec{v}_{n}( p ) \|_{\alpha}$ is bijective for $p \in [0, \frac{1}{n}]$.
Similarly, we also show the monotonicity of $\| \bvec{w}_{n}( p ) \|_{\alpha}$ with respect to $H_{\sbvec{w}_{n}}( p )$ in the following lemma.
\begin{lemma}
\label{lem:mono_w}
For any fixed $n \ge 2$ and any fixed $\alpha \in (0, 1) \cup (1, \infty)$, if $p \in [\frac{1}{n}, 1]$, the following monotonicity hold:
\begin{itemize}
\item[(i)]
if $\alpha > 1$, then $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly decreasing for $H_{\sbvec{w}_{n}}( p ) \in [0, \ln n]$ and
\item[(ii)]
if $\alpha < 1$, then $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly increasing for $H_{\sbvec{w}_{n}}( p ) \in [0, \ln n]$.
\end{itemize}
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:mono_w}]
Since $\bvec{w}_{n}( p ) = \bvec{v}_{n}( p )_{\downarrow}$ for $p \in [\frac{1}{n}, \frac{1}{n-1}]$,
we can obtain immediately from \eqref{eq:diff1_N_H_v} that
\begin{align}
\frac{ \partial \| \bvec{w}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{w}_{n}}( p ) }
=
\left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right) \frac{ 1 }{ \ln \frac{ 1 - (n-1) p}{ p } }
\end{align}
for $p \in (\frac{1}{n}, \frac{1}{n-1})$.
Since $0 < 1 - (n-1) p < p$ for $p \in (\frac{1}{n}, \frac{1}{n-1})$, we observe that
\begin{align}
\operatorname{sgn} \! \left( \left( \vphantom{\sum} (n-1) \, p^{\alpha} + (1 - (n-1)p)^{\alpha} \right)^{\frac{1}{\alpha} - 1} \right)
& =
1 ,
\\
\operatorname{sgn} \! \left( \vphantom{\sum} p^{\alpha-1} - (1 - (n-1)p)^{\alpha-1} \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha > 1 , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha < 1 ,
\end{cases}
\\
\operatorname{sgn} \! \left( \frac{ 1 }{ \ln \frac{ 1 - (n-1) p}{ p } } \right)
& =
-1
\end{align}
for $p \in (\frac{1}{n}, \frac{1}{n-1})$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$;
and therefore, we have
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial \| \bvec{w}_{n}( p ) \|_{\alpha} }{ \partial H_{\sbvec{w}_{n}}( p ) } \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha < 1 , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha > 1 ,
\end{cases}
\label{eq:sign_diff1_N_H_w}
\end{align}
for $p \in (\frac{1}{n}, \frac{1}{n-1})$ and $\alpha \in (-\infty, 0) \cup (0, +\infty)$, as with \eqref{eq:sign_diff1_N_H_v}.
Hence, for $\alpha \in (-\infty, 0) \cup (0, +\infty)$, we have that
\begin{itemize}
\item
if $\alpha > 1$, then $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly decreasing for $H_{\sbvec{w}_{n}}( p ) \in [\ln (n-1), \ln n]$ and
\item
if $\alpha < 1$, then $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly increasing for $H_{\sbvec{w}_{n}}( p ) \in [\ln (n-1), \ln n]$.
\end{itemize}
Finally, since $H_{\sbvec{w}_{m}}( p ) = H_{\sbvec{w}_{n}}( p )$ and $\| \bvec{w}_{m}( p ) \|_{\alpha} = \| \bvec{w}_{n}( p ) \|_{\alpha}$ for any integer $m \in [2, n-1]$, any $p \in [\frac{1}{m}, \frac{1}{m-1}]$, and any $\alpha \in (0, 1) \cup (1, +\infty)$, we can obtain that
\begin{itemize}
\item
if $\alpha > 1$, then $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly decreasing for $H_{\sbvec{w}_{n}}( p ) \in [\ln (m-1), \ln m]$ and
\item
if $\alpha < 1$, then $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is strictly increasing for $H_{\sbvec{w}_{n}}( p ) \in [\ln (m-1), \ln m]$
\end{itemize}
for any integer $m \in [2, n]$ and any $\alpha \in (0, 1) \cup (1, \infty)$.
This completes the proof of Lemma \ref{lem:mono_w}.
\end{IEEEproof}
It also follows from Lemmas \ref{lem:Hw} and \ref{lem:mono_w} that, for each $\alpha \in (0, 1) \cup (1, \infty)$, $\| \bvec{w}_{n}( p ) \|_{\alpha}$ is also bijective for $p \in [\frac{1}{n}, 1]$.
\if0
\begin{definition}
\label{def:inverseN}
We denote by $N_{\alpha, \sbvec{v}_{n}}^{-1} : [\min\{ 1, n^{\frac{1}{\alpha}-1} \}, \max\{ 1, n^{\frac{1}{\alpha}-1} \}] \to [0, \frac{1}{n}]$ the inverse function of $\| \bvec{v}_{n}( p ) \|_{\alpha}$ for $p \in [0, \frac{1}{n}]$.
Moreover, we also denote by $N_{\alpha, \sbvec{w}_{n}}^{-1} : [\min\{ 1, n^{\frac{1}{\alpha}-1} \}, \max\{ 1, n^{\frac{1}{\alpha}-1} \}] \to [\frac{1}{n}, 1]$ the inverse function of $\| \bvec{w}_{n}( p ) \|_{\alpha}$ for $p \in [\frac{1}{n}, 1]$.
\end{definition}
\fi
\section{Results}
\label{sect:result}
In Section \ref{subsect:extremes}, we examine the extremal relations between the Shannon entropy and the $\ell_{\alpha}$-norm, as shown in Theorems \ref{th:extremes} and \ref{th:extremes2}.
Then, we can identify the exact feasible region of
\begin{align}
\mathcal{R}_{n}( \alpha )
\triangleq
\{ (H( \bvec{p} ), \| \bvec{p} \|_{\alpha}) \mid \bvec{p} \in \mathcal{P}_{n} \}
\label{def:region_Pn}
\end{align}
for any $n \ge 2$ and any $\alpha \in (0, 1) \cup (1, \infty)$.
Extending Theorems \ref{th:extremes} and \ref{th:extremes2} to Corollary \ref{cor:extremes}, we can obtain the tight bounds between the Shannon entropy and several information measures which are determined by the $\ell_{\alpha}$-norm, as shown in Table \ref{table:extremes}.
In Section \ref{subsect:focusing}, we apply the results of Section \ref{subsect:extremes} to uniformly focusing channels of Definition \ref{def:focusing}.
\subsection{Bounds on Shannon entropy and $\ell_{\alpha}$-norm}
\label{subsect:extremes}
Let the $\alpha$-logarithm function \cite{tsallis} be denoted by
\begin{align}
\ln_{\alpha} x
\triangleq
\frac{ x^{1-\alpha} - 1 }{ 1 - \alpha }
\end{align}
for $\alpha \neq 1$ and $x > 0$;
besides, since $\lim_{\alpha \to 1} \ln_{\alpha} x = \ln x$ by L'H\^{o}pital's rule, it is defined that $\ln_{1} x \triangleq \ln x$.
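For example, $\ln_{2} x = 1 - \frac{1}{x}$, $\ln_{0} x = x - 1$, and $\ln_{\alpha} 1 = 0$ for every $\alpha$.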
For the $\alpha$-logarithm function, we can see the following lemma.
\begin{lemma}
\label{lem:frac_qlog}
For $\alpha < \beta$ and $1 \le x \le y$ $(y \neq 1)$, we observe that
\begin{align}
\frac{ \ln_{\alpha} x }{ \ln_{\alpha} y }
\le
\frac{ \ln_{\beta} x }{ \ln_{\beta} y }
\end{align}
with equality if and only if $x \in \{ 1, y \}$.
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:frac_qlog}]
For $1 \le x \le y$ $(y \neq 1)$, we consider the monotonicity of $\frac{ \ln_{\alpha} x }{ \ln_{\alpha} y }$ with respect to $\alpha$.
Direct calculation shows
\begin{align}
\frac{ \partial }{ \partial \alpha } \left( \frac{ \ln_{\alpha} x }{ \ln_{\alpha} y } \right)
& =
\frac{ \partial }{ \partial \alpha } \left( \frac{ x^{1-\alpha} - 1 }{ y^{1-\alpha} - 1 } \right)
\\
& =
\left( \frac{ \partial }{ \partial \alpha } (x^{1-\alpha} - 1) \right) \left( \frac{ 1 }{ y^{1-\alpha} - 1 } \right) + (x^{1-\alpha} - 1) \left( \frac{ \partial }{ \partial \alpha } \left( \frac{ 1 }{ y^{1-\alpha} - 1 } \right) \right)
\\
& =
- \frac{ x^{1-\alpha} \ln x }{ y^{1-\alpha} - 1 } + (x^{1-\alpha} - 1) \left( - \frac{ 1 }{ (y^{1-\alpha} - 1)^{2} } \right) \left( \frac{ \partial }{ \partial \alpha } (y^{1-\alpha} - 1) \right)
\\
& =
- \frac{ x^{1-\alpha} \ln x }{ y^{1-\alpha} - 1 } + \frac{ y^{1-\alpha} (\ln y) (x^{1-\alpha} - 1) }{ (y^{1-\alpha} - 1)^{2} }
\\
& =
- \frac{ x^{1-\alpha} (\ln x) (y^{1-\alpha} - 1) - y^{1-\alpha} (\ln y) (x^{1-\alpha} - 1) }{ (y^{1-\alpha} - 1)^{2} }
\\
& =
- \frac{ 1 }{ (y^{1-\alpha} - 1)^{2} } \left( \vphantom{\sum} x^{1-\alpha} (\ln x) (y^{1-\alpha} - 1) - y^{1-\alpha} (\ln y) (x^{1-\alpha} - 1) \right) .
\label{eq:diff1_frac_qlog}
\end{align}
Then, we can see that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial }{ \partial \alpha } \left( \frac{ \ln_{\alpha} x }{ \ln_{\alpha} y } \right) \right)
& \overset{\eqref{eq:diff1_frac_qlog}}{=}
\operatorname{sgn} \! \left( - \frac{ 1 }{ (y^{1-\alpha} - 1)^{2} } \left( \vphantom{\sum} x^{1-\alpha} (\ln x) (y^{1-\alpha} - 1) - y^{1-\alpha} (\ln y) (x^{1-\alpha} - 1) \right) \right)
\\
& =
\operatorname{sgn} \! \left( - \frac{ 1 }{ (y^{1-\alpha} - 1)^{2} } \right) \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} x^{1-\alpha} (\ln x) (y^{1-\alpha} - 1) - y^{1-\alpha} (\ln y) (x^{1-\alpha} - 1) \right)
\\
& \overset{\text{(a)}}{=}
- \operatorname{sgn} \! \left( \vphantom{\sum} x^{1-\alpha} (\ln x) (y^{1-\alpha} - 1) - y^{1-\alpha} (\ln y) (x^{1-\alpha} - 1) \right)
\\
& \overset{\text{(b)}}{=}
- \operatorname{sgn} \! \left( (\ln x) \frac{ y^{1-\alpha} - 1 }{ y^{1-\alpha} } - (\ln y) \frac{ x^{1-\alpha} - 1 }{ x^{1-\alpha} } \right)
\\
& =
\operatorname{sgn} \! \left( \vphantom{\sum} (y^{\alpha-1} - 1) \ln x - (x^{\alpha-1} - 1) \ln y \right)
\\
& =
\operatorname{sgn} \! \left( \frac{ (y^{\alpha-1} - 1) \ln x^{\alpha-1} - (x^{\alpha-1} - 1) \ln y^{\alpha-1} }{ \alpha - 1 } \right)
\\
& =
\operatorname{sgn} \! \left( \frac{1}{\alpha-1} \right) \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} (y^{\alpha-1} - 1) \ln x^{\alpha-1} - (x^{\alpha-1} - 1) \ln y^{\alpha-1} \right)
\label{eq:sign_frac_qlog} \\
& \overset{\text{(c)}}{=}
\operatorname{sgn} \! \left( \frac{1}{\alpha-1} \right) \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right)
\end{align}
where
\begin{itemize}
\item
the equality (a) follows from the fact that
\begin{align}
\operatorname{sgn} \! \left( - \frac{ 1 }{ (y^{1-\alpha} - 1)^{2} } \right) = -1
\end{align}
for $y > 0 \ (y \neq 1)$ and $\alpha \in (-\infty, 1) \cup (1, +\infty)$,
\item
the equality (b) follows from the fact that $x^{1-\alpha}, y^{1-\alpha} > 0$ for $\alpha \in (-\infty, +\infty)$ and $x, y > 0$, and
\item
the equality (c) follows by the change of variables: $a = a(x, \alpha) \triangleq x^{\alpha-1}$ and $b = b(y, \alpha) \triangleq y^{\alpha-1}$.
\end{itemize}
Then, it can be easily seen that
\begin{align}
\operatorname{sgn} \! \left( \frac{1}{\alpha-1} \right)
=
\begin{cases}
1
& \mathrm{if} \ \alpha > 1 , \\
-1
& \mathrm{if} \ \alpha < 1 .
\end{cases}
\label{eq:sign_1_over_(1-a)}
\end{align}
Thus, to check the sign of $\frac{ \partial }{ \partial \alpha } \left( \frac{ \ln_{\alpha} x }{ \ln_{\alpha} y } \right)$, we now examine the function $(b - 1) \ln a - (a - 1) \ln b$.
We readily see that
\begin{align}
\left. \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right) \right|_{a = 1}
=
\left. \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right) \right|_{a = b}
=
0
\label{eq:gap_a=b}
\end{align}
for $b > 0$.
We calculate the second order derivative of $(b - 1) \ln a - (a - 1) \ln b$ with respect to $a$ as follows:
\begin{align}
\frac{ \partial^{2} }{ \partial a^{2} } \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right)
& =
\frac{ \partial }{ \partial a } \left( \frac{ \partial }{ \partial a } \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right) \right)
\\
& =
\frac{ \partial }{ \partial a } \left( (b-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} a } (\ln a) \right) - \left( \frac{ \mathrm{d} }{ \mathrm{d} a } (a-1) \right) \ln b \right)
\\
& =
\frac{ \partial }{ \partial a } \left( \frac{b-1}{a} - \ln b \right)
\\
& =
(b-1) \left( \frac{ \mathrm{d} }{ \mathrm{d} a } \left( \frac{1}{a} \right) \right)
\\
& =
- \frac{ b-1 }{ a^{2} } .
\end{align}
Hence, we observe that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial^{2} }{ \partial a^{2} } \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right) \right)
& =
\operatorname{sgn} \! \left( - \frac{ b-1 }{ a^{2} } \right)
\\
& =
\begin{cases}
1
& \mathrm{if} \ 0 < b < 1 , \\
0
& \mathrm{if} \ b = 1 , \\
-1
& \mathrm{if} \ b > 1
\end{cases}
\end{align}
for $a > 0$, which implies that
\begin{itemize}
\item
if $b > 1$, then $(b - 1) \ln a - (a - 1) \ln b$ is strictly concave in $a > 0$ and
\item
if $0 < b < 1$, then $(b - 1) \ln a - (a - 1) \ln b$ is strictly convex in $a > 0$.
\end{itemize}
Therefore, it follows from \eqref{eq:gap_a=b} that
\begin{itemize}
\item
if $b > 1$, then
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right)
=
\begin{cases}
1
& \mathrm{if} \ 1 < a < b , \\
0
& \mathrm{if} \ a = 1 \ \mathrm{or} \ a = b , \\
-1
& \mathrm{if} \ 0 < a < 1 \ \mathrm{or} \ a > b
\end{cases}
\end{align}
and
\item
if $0 < b < 1$, then
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} (b - 1) \ln a - (a - 1) \ln b \right)
=
\begin{cases}
1
& \mathrm{if} \ 0 < a < b \ \mathrm{or} \ a > 1 , \\
0
& \mathrm{if} \ a = b \ \mathrm{or} \ a = 1 , \\
-1
& \mathrm{if} \ b < a < 1 .
\end{cases}
\end{align}
\end{itemize}
Since $a = x^{\alpha-1}$ and $b = y^{\alpha-1}$, note that
\begin{itemize}
\item
if $\alpha > 1$, then $1 \le a \le b \ (b \neq 1)$ for $1 \le x \le y \ (y \neq 1)$ and
\item
if $\alpha < 1$, then $0 < b \le a \le 1 \ (b \neq 1)$ for $1 \le x \le y \ (y \neq 1)$.
\end{itemize}
Hence, we obtain
\begin{align}
\operatorname{sgn} \! \left( \vphantom{\sum} (y^{\alpha-1} - 1) \ln x^{\alpha-1} - (x^{\alpha-1} - 1) \ln y^{\alpha-1} \right)
=
\begin{cases}
1
& \mathrm{if} \ 1 < x < y \ \mathrm{and} \ \alpha > 1, \\
0
& \mathrm{if} \ x = 1 \ \mathrm{or} \ x = y \ \mathrm{or} \ \alpha = 1 , \\
-1
& \mathrm{if} \ 1 < x < y \ \mathrm{and} \ \alpha < 1
\end{cases}
\label{eq:sign_gap_(a-1)ln(b)}
\end{align}
for $\alpha \in (-\infty, +\infty)$ and $1 \le x \le y \ (y \neq 1)$.
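The sign pattern \eqref{eq:sign_gap_(a-1)ln(b)} can be spot-checked numerically; a minimal Python sketch (the helper names and sample values are ours):
\begin{verbatim}
import math

def sgn(t, eps=1e-12):
    return 0 if abs(t) < eps else (1 if t > 0.0 else -1)

def gap(x, y, alpha):
    # (y^(alpha-1) - 1) ln x^(alpha-1) - (x^(alpha-1) - 1) ln y^(alpha-1),
    # written via a = x^(alpha-1) and b = y^(alpha-1)
    a, b = x ** (alpha - 1.0), y ** (alpha - 1.0)
    return (b - 1.0) * math.log(a) - (a - 1.0) * math.log(b)

for alpha in (-2.0, 0.5, 3.0):
    for x, y in ((1.0, 4.0), (2.0, 2.0), (1.5, 4.0)):
        expected = 0 if (x == 1.0 or x == y) else (1 if alpha > 1.0 else -1)
        assert sgn(gap(x, y, alpha)) == expected
\end{verbatim}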
Combining the above analyses, we have
\begin{align}
\operatorname{sgn} \! \left( \frac{ \partial }{ \partial \alpha } \left( \frac{ \ln_{\alpha} x }{ \ln_{\alpha} y } \right) \right)
& \overset{\eqref{eq:sign_frac_qlog}}{=}
\operatorname{sgn} \! \left( \frac{1}{\alpha-1} \right) \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} (y^{\alpha-1} - 1) \ln x^{\alpha-1} - (x^{\alpha-1} - 1) \ln y^{\alpha-1} \right)
\\
& =
\begin{cases}
1
& \mathrm{if} \ 1 < x < y , \\
0
& \mathrm{if} \ x = 1 \ \mathrm{or} \ x = y
\end{cases}
\end{align}
for $\alpha \in (-\infty, 1) \cup (1, \infty)$, where the last equality follows from \eqref{eq:sign_1_over_(1-a)} and \eqref{eq:sign_gap_(a-1)ln(b)}.
Note that
\begin{align}
\lim_{\alpha \to 1} \left( \frac{ \ln_{\alpha} x }{ \ln_{\alpha} y } \right)
=
\frac{ \ln_{1} x }{ \ln_{1} y }
=
\frac{ \ln x }{ \ln y }
\end{align}
for $x, y > 0 \ (y \neq 1)$, which implies that $\frac{ \ln_{\alpha} x }{ \ln_{\alpha} y }$ is continuous at $\alpha = 1$.
Therefore, if $1 < x < y$, then $\frac{ \ln_{\alpha} x }{ \ln_{\alpha} y }$ is strictly increasing in $\alpha$ on $(-\infty, +\infty)$, which implies Lemma \ref{lem:frac_qlog}.
\end{IEEEproof}
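As a numerical illustration of this monotonicity, the following Python sketch evaluates the ratio $\frac{ \ln_{\alpha} x }{ \ln_{\alpha} y }$ on a grid of $\alpha$; here we assume the normalization $\ln_{\alpha} x = (x^{1-\alpha} - 1)/(1-\alpha)$ for $\alpha \neq 1$ and $\ln_{1} x = \ln x$, which is consistent with the limit computed above (the helper names are ours):
\begin{verbatim}
import math

def qlog(x, alpha):
    # assumed q-logarithm: ln_alpha(x) = (x^(1-alpha) - 1)/(1 - alpha), ln_1(x) = ln(x)
    if abs(alpha - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

x, y = 2.0, 5.0                                   # any pair with 1 < x < y
alphas = [-3.0 + 0.01 * i for i in range(601)]    # grid over [-3, 3]
values = [qlog(x, a) / qlog(y, a) for a in alphas]
# the ratio should be strictly increasing in alpha for 1 < x < y
assert all(v1 < v2 for v1, v2 in zip(values, values[1:]))
print(values[0], math.log(x) / math.log(y), values[-1])
\end{verbatim}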
The following two lemmas play important roles in proving Theorem \ref{th:extremes}.
\begin{lemma}
\label{lem:vector_v}
For any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$, there exists $p \in [0, \frac{1}{n}]$ such that $H_{\sbvec{v}_{n}}( p ) = H( \bvec{p} )$ and $\| \bvec{v}_{n}( p ) \|_{\alpha} \ge \| \bvec{p} \|_{\alpha}$ for all $\alpha \in (0, \infty)$.
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:vector_v}]
If $n = 2$, then it can be easily seen that $\bvec{p}_{\downarrow} = \bvec{v}_{2}( p )$ for any $\bvec{p} \in \mathcal{P}_{2}$ and some $p \in [0, \frac{1}{2}]$;
therefore, the lemma obviously holds when $n = 2$.
Moreover, since
\begin{align}
H( \bvec{p} ) & = \ln n
& \iff &&
\bvec{p} & = \bvec{u}_{n} = \bvec{v}_{n}( {\textstyle \frac{1}{n}} ),
\\
H( \bvec{p} ) & = 0
& \iff &&
\bvec{p}_{\downarrow} & = \bvec{d}_{n} = \bvec{v}_{n}( 0 ) ,
\end{align}
the lemma obviously holds if $H(\bvec{p}) \in \{ 0, \ln n \}$.
Thus, we omit the cases $n = 2$ and $H(\bvec{p}) \in \{ 0, \ln n \}$ from the following analysis and consider $\bvec{p} \in \mathcal{P}_{n}$ with $H( \bvec{p} ) \in (0, \ln n)$.
For a fixed $n \ge 3$ and a constant $A \in (0, \ln n)$, we assume for $\bvec{p} \in \mathcal{P}_{n}$ that
\begin{align}
H( \bvec{p} ) = A .
\label{eq:fixed_H}
\end{align}
For that $\bvec{p}$, let $k \in \{ 2, 3, \dots, n-1 \}$ be the index such that $p_{[k-1]} > p_{[k+1]} = p_{[n]}$;
namely, the index $k$ is chosen to satisfy the following inequalities:
\begin{align}
p_{[1]} \ge p_{[2]} \ge \dots \ge p_{[k-1]} \ge p_{[k]} \ge p_{[k+1]} = p_{[k+2]} = \dots = p_{[n]}
\qquad (p_{[k-1]} > p_{[k+1]}) .
\label{eq:equal_k+1_to_n}
\end{align}
Since
$
p_{1} + p_{2} + \dots + p_{n} = 1
$,
we observe that
\begin{align}
&&
\sum_{i=1}^{n} p_{i}
& =
1
\\
& \ \Longrightarrow \ &
\frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } \left( \sum_{i=1}^{n} p_{i} \right)
& =
\frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (1)
\\
& \iff &
\frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } \left( \sum_{i=1}^{n} p_{[i]} \right)
& =
0
\\
& \iff &
\frac{ \mathrm{d} p_{[k]} }{ \mathrm{d} p_{[k]} } + \sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
& =
0
\\
& \iff &
1 + \sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
& =
0
\\
& \iff &
\sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
& =
- 1 .
\label{eq:total_diff_prob}
\end{align}
In this proof, we further assume that
\begin{align}
\frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
=
0
\label{eq:hypo1}
\end{align}
for $i \in \{ 2, 3, \dots, k-1 \}$ and
\begin{align}
\frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} }
=
\frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} }
\label{eq:hypo2}
\end{align}
for $j \in \{ k+1, k+2, \dots, n-1 \}$.
By constraints \eqref{eq:hypo1} and \eqref{eq:hypo2}, we get
\begin{align}
&&
\sum_{i=1}^{n} p_{i}
& =
1
\\
& \ \overset{\eqref{eq:total_diff_prob}}{\Longrightarrow} \ &
\sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
& =
- 1
\\
& \iff &
\sum_{i = 1}^{k-1} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } + \sum_{j = k+1}^{n} \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} }
& =
- 1
\\
& \overset{\eqref{eq:hypo1}}{\iff} &
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } + \sum_{j = k+1}^{n} \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} }
& =
-1
\\
& \overset{\eqref{eq:hypo2}}{\iff} &
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } + (n-k) \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} }
& =
-1
\\
& \iff &
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} }
& =
- 1 - (n-k) \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } .
\label{eq:total_prob_hypo}
\end{align}
Moreover, since
$
H( \bvec{p} )
=
A
$,
we observe that
\begin{align}
&&
- \sum_{i = 1}^{n} p_{i} \ln p_{i}
& =
A
\\
& \ \Longrightarrow \ &
\frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } \left( - \sum_{i = 1}^{n} p_{i} \ln p_{i} \right)
& =
\frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (A)
\\
& \iff &
\frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } \left( - \sum_{i = 1}^{n} p_{[i]} \ln p_{[i]} \right)
& =
0
\\
& \iff &
- \sum_{i = 1}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]} \ln p_{[i]})
& =
0
\\
& \iff &
- \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[k]} \ln p_{[k]}) - \sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]} \ln p_{[i]})
& =
0
\\
& \iff &
- (\ln p_{[k]} + 1) - \sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]} \ln p_{[i]})
& =
0
\\
& \iff &
- \sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]} \ln p_{[i]})
& =
\ln p_{[k]} + 1
\\
& \overset{\text{(a)}}{\iff} &
- \sum_{i = 1 : i \neq k}^{n} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) \left( \frac{ \mathrm{d} }{ \mathrm{d} p_{[i]} } (p_{[i]} \ln p_{[i]}) \right)
& =
\ln p_{[k]} + 1
\\
& \iff &
- \sum_{i = 1 : i \neq k}^{n} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[i]} + 1)
& =
\ln p_{[k]} + 1
\label{eq:diff1_H_halfway} \\
& \iff &
- \sum_{i = 1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[i]} + 1) - \sum_{j = k+1}^{n} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[j]} + 1)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:equal_k+1_to_n}}{\iff} &
- \sum_{i = 1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[i]} + 1) - (\ln p_{[n]} + 1) \sum_{j = k+1}^{n} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:hypo1}}{\iff} &
- \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} + 1) - (\ln p_{[n]} + 1) \sum_{j = k+1}^{n} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:hypo2}}{\iff} &
- \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} + 1) - (\ln p_{[n]} + 1) (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:total_prob_hypo}}{\iff} &
- \left( - 1 - (n-k) \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} + 1) - (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[n]} + 1)
& =
\ln p_{[k]} + 1
\\
& \iff & \! \! \! \! \!
(\ln p_{[1]} + 1) + (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} + 1) - (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[n]} + 1)
& =
\ln p_{[k]} + 1
\\
& \iff &
(n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} + 1) - (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[n]} + 1)
& =
\ln p_{[k]} - \ln p_{[1]}
\\
& \iff &
(n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} - \ln p_{[n]})
& =
\ln p_{[k]} - \ln p_{[1]}
\\
& \iff &
(n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right)
& =
\frac{ \ln p_{[k]} - \ln p_{[1]} }{ \ln p_{[1]} - \ln p_{[n]} }
\\
& \iff &
\frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} }
& =
- \frac{ 1 }{ n-k } \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right) \!
\label{eq:total_entropy_hypo}
\end{align}
where the equivalence (a) follows by the chain rule.
We now check the sign of the right-hand side of \eqref{eq:total_entropy_hypo}.
If $1 > p_{[1]} > p_{[k]} > p_{[n]} > 0$, then
\begin{align}
0
<
\frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} }
<
1
\label{eq:cond1}
\end{align}
since
$
0 > \ln p_{[1]} > \ln p_{[k]} > \ln p_{[n]}
$;
therefore, we get from \eqref{eq:total_entropy_hypo} that
\begin{align}
- \frac{1}{n-k}
<
\frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} }
<
0
\label{eq:sign_dndk_1}
\end{align}
for $1 > p_{[1]} > p_{[k]} > p_{[n]} > 0$.
Note that $n - k \ge 1$.
Moreover, if $1 > p_{[1]} = p_{[k]} > p_{[n]} > 0$, then
\begin{align}
\frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} }
& =
- \frac{ 1 }{ n-k } \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right)
\\
& =
- \frac{ 1 }{ n-k } \left( \frac{ 0 }{ \ln p_{[1]} - \ln p_{[n]} } \right)
\\
& =
0 .
\label{eq:sign_dndk_2}
\end{align}
Furthermore, if $1 > p_{[1]} > p_{[k]} = p_{[n]} > 0$, then
\begin{align}
\frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} }
& =
- \frac{ 1 }{ n-k } \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right)
\\
& =
- \frac{ 1 }{ n-k } .
\label{eq:sign_dndk_3}
\end{align}
Combining \eqref{eq:sign_dndk_1}, \eqref{eq:sign_dndk_2}, and \eqref{eq:sign_dndk_3}, we get under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, and \eqref{eq:hypo2} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right)
=
\begin{cases}
0
& \mathrm{if} \ p_{[1]} = p_{[k]} , \\
-1
& \mathrm{otherwise}
\end{cases}
\label{eq:sign_dndk}
\end{align}
for $1 > p_{[1]} \ge p_{[k]} \ge p_{[n]} > 0 \ (p_{[1]} > p_{[n]})$.
Note for the constraint \eqref{eq:fixed_H} that
\begin{align}
\lim_{(p_{[k+1]}, p_{[k+2]}, \dots, p_{[n]}) \to (0^{+}, 0^{+}, \dots, 0^{+})} H( p_{[1]}, p_{[2]}, \dots, p_{[n]} )
=
H( p_{[1]}, p_{[2]}, \dots, p_{[k]}, 0, 0, \dots, 0 )
\end{align}
since $\lim_{x \to 0^{+}} x \ln x = 0$, which is consistent with the convention $0 \ln 0 = 0$.
Thus, it follows from \eqref{eq:sign_dndk} that, for all $j \in \{ k+1, k+2, \dots, n \}$, $p_{[j]}$ is strictly decreasing in $p_{[k]}$ under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, and \eqref{eq:hypo2}.
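For a quick numerical check of this behavior, the closed form \eqref{eq:total_entropy_hypo} can be evaluated directly together with the bounds \eqref{eq:sign_dndk_1}--\eqref{eq:sign_dndk_3}; a minimal Python sketch (the sample values are ours):
\begin{verbatim}
import math

def dpn_dpk(p1, pk, pn, n, k):
    # closed form of d p_[n] / d p_[k] derived above, valid under the constraints of this proof
    return -(1.0 / (n - k)) * (math.log(p1) - math.log(pk)) / (math.log(p1) - math.log(pn))

n, k = 6, 3
for p1, pk, pn in ((0.5, 0.2, 0.1), (0.4, 0.4, 0.05), (0.3, 0.1, 0.1)):
    d = dpn_dpk(p1, pk, pn, n, k)
    # the value should lie in [-1/(n-k), 0], matching the sign analysis above
    assert -1.0 / (n - k) <= d <= 0.0
\end{verbatim}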
Similarly, we check the sign of the right-hand side of \eqref{eq:total_prob_hypo}:
\begin{align}
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} }
& =
- 1 - (n-k) \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } .
\end{align}
By \eqref{eq:sign_dndk_1}, \eqref{eq:sign_dndk_2}, and \eqref{eq:sign_dndk_3}, we can see that
\begin{align}
-1 \le \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } < 0
\end{align}
for $1 > p_{[1]} \ge p_{[k]} > p_{[n]} > 0$ and
\begin{align}
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} }
& =
0
\end{align}
for $1 > p_{[1]} > p_{[k]} = p_{[n]} > 0$;
therefore, we also get under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, and \eqref{eq:hypo2} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right)
=
\begin{cases}
0
& \mathrm{if} \ p_{[k]} = p_{[n]} , \\
-1
& \mathrm{otherwise}
\end{cases}
\label{eq:sign_d1dk}
\end{align}
for $1 > p_{[1]} \ge p_{[k]} \ge p_{[n]} > 0 \ (p_{[1]} > p_{[n]})$.
As with \eqref{eq:sign_dndk}, it follows from \eqref{eq:sign_d1dk} that $p_{[1]}$ is strictly decreasing in $p_{[k]}$ under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, and \eqref{eq:hypo2}.
On the other hand, for a fixed $\alpha \in (-\infty, 1) \cup (1, +\infty)$, we have
\begin{align}
\frac{ \mathrm{d} \| \bvec{p} \|_{\alpha} }{ \mathrm{d} p_{[k]} }
& =
\frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha}}
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } \sum_{i=1}^{n} p_{i}^{\alpha} \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } \sum_{i=1}^{n} p_{[i]}^{\alpha} \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \sum_{i=1}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]}^{\alpha}) \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[k]}^{\alpha}) + \sum_{i=1 : i \neq k}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]}^{\alpha}) \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]}^{\alpha}) \right)
\label{eq:diff_norm_pk_halfway} \\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{n} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) \left( \frac{ \mathrm{d} }{ \mathrm{d} p_{[i]} } (p_{[i]}^{\alpha}) \right) \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{n} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\alpha \, p_{[i]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{n} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (p_{[i]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \sum_{i=1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (p_{[i]}^{\alpha-1}) + \sum_{j=k+1}^{n} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) (p_{[j]}^{\alpha-1}) \right)
\\
& \overset{\eqref{eq:equal_k+1_to_n}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \sum_{i=1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (p_{[i]}^{\alpha-1}) + (p_{[n]}^{\alpha-1}) \sum_{j=k+1}^{n} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) \right)
\\
& \overset{\eqref{eq:hypo1}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right) (p_{[1]}^{\alpha-1}) + (p_{[n]}^{\alpha-1}) \sum_{j=k+1}^{n} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) \right)
\\
& \overset{\eqref{eq:hypo2}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right) (p_{[1]}^{\alpha-1}) + (p_{[n]}^{\alpha-1}) (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) \right)
\\
& \overset{\eqref{eq:total_prob_hypo}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \left( - 1 - (n-k) \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (p_{[1]}^{\alpha-1}) + (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (p_{[n]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} - p_{[1]}^{\alpha-1} - (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (p_{[1]}^{\alpha-1}) + (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (p_{[n]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( (p_{[k]}^{\alpha-1} - p_{[1]}^{\alpha-1}) + (n-k) \left( \frac{ \mathrm{d} p_{[n]} }{ \mathrm{d} p_{[k]} } \right) (p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1}) \right)
\\
& \overset{\eqref{eq:total_entropy_hypo}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( (p_{[k]}^{\alpha-1} - p_{[1]}^{\alpha-1}) + (n-k) \left( - \frac{ 1 }{ n-k } \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right) \right) (p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( (p_{[k]}^{\alpha-1} - p_{[1]}^{\alpha-1}) - \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right) (p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \left( \frac{ p_{[k]}^{\alpha-1} - p_{[1]}^{\alpha-1} }{ p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} } - \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \left( \frac{ p_{[1]}^{\alpha-1} \left( \left( \frac{ p_{[k]} }{ p_{[1]} } \right)^{\alpha-1} - 1 \right) }{ p_{[1]}^{\alpha-1} \left( \left( \frac{ p_{[n]} }{ p_{[1]} } \right)^{\alpha-1} - 1 \right) } - \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \left( \frac{ \left( \frac{ p_{[k]} }{ p_{[1]} } \right)^{\alpha-1} - 1 }{ \left( \frac{ p_{[n]} }{ p_{[1]} } \right)^{\alpha-1} - 1 } - \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \left( \frac{ \left( \frac{ p_{[1]} }{ p_{[k]} } \right)^{1 - \alpha} - 1 }{ \left( \frac{ p_{[1]} }{ p_{[n]} } \right)^{1 - \alpha} - 1 } - \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[n]} } \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[k]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[n]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[k]} } }{ \ln \frac{ p_{[1]} }{ p_{[n]} } } \right) .
\end{align}
Hence, we can see that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} \| \bvec{p} \|_{\alpha} }{ \mathrm{d} p_{[k]} } \right)
& =
\operatorname{sgn} \! \left( \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[k]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[n]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[k]} } }{ \ln \frac{ p_{[1]} }{ p_{[n]} } } \right) \right)
\\
& =
\underbrace{ \operatorname{sgn} \! \left( \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \right) }_{ = 1 } \cdot \, \operatorname{sgn} \! \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \cdot \operatorname{sgn} \! \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[k]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[n]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[k]} } }{ \ln \frac{ p_{[1]} }{ p_{[n]} } } \right)
\\
& =
\operatorname{sgn} \! \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \cdot \operatorname{sgn} \! \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[k]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[n]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[k]} } }{ \ln \frac{ p_{[1]} }{ p_{[n]} } } \right)
\label{eq:diff1_norm_pk_1}
\end{align}
for $\alpha \in (-\infty, 0) \cup (0, +\infty)$.
Since $\bvec{p} \neq \bvec{u}_{n}$, i.e., $p_{[1]} > p_{[n]}$, we readily see that
\begin{align}
\operatorname{sgn} \! \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha < 1 , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha > 1 .
\end{cases}
\label{eq:sign_pn-p1}
\end{align}
Moreover, for $1 \le \frac{ p_{[1]} }{ p_{[k]} } \le \frac{ p_{[1]} }{ p_{[n]} } \ (\frac{ p_{[1]} }{ p_{[n]} } \neq 1)$, we observe from Lemma \ref{lem:frac_qlog} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[k]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[n]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[k]} } }{ \ln \frac{ p_{[1]} }{ p_{[n]} } } \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha > 1 \ \mathrm{and} \ p_{[1]} > p_{[k]} > p_{[n]} , \\
0
& \mathrm{if} \ \alpha = 1 \ \mathrm{or} \ p_{[1]} = p_{[k]} \ \mathrm{or} \ p_{[k]} = p_{[n]} , \\
-1
& \mathrm{if} \ \alpha < 1 \ \mathrm{and} \ p_{[1]} > p_{[k]} > p_{[n]} .
\end{cases}
\label{eq:sign_f(p1pkpn)}
\end{align}
Therefore, under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, and \eqref{eq:hypo2}, we have
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} \| \bvec{p} \|_{\alpha} }{ \mathrm{d} p_{[k]} } \right)
& \overset{\eqref{eq:diff1_norm_pk_1}}{=}
\operatorname{sgn} \! \left( p_{[n]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \cdot \operatorname{sgn} \! \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[k]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[n]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[k]} } }{ \ln \frac{ p_{[1]} }{ p_{[n]} } } \right)
\\
& =
\begin{cases}
0
& \mathrm{if} \ \alpha = 1 \ \mathrm{or} \ p_{[1]} = p_{[k]} \ \mathrm{or} \ p_{[k]} = p_{[n]} , \\
-1
& \mathrm{if} \ \alpha \neq 1 \ \mathrm{and} \ p_{[1]} > p_{[k]} > p_{[n]}
\end{cases}
\label{eq:sign_diff_norm}
\end{align}
for $\alpha \in (-\infty, 0) \cup (0, +\infty)$, where the last equality follows from \eqref{eq:sign_pn-p1} and \eqref{eq:sign_f(p1pkpn)}.
Hence, we have that $\| \bvec{p} \|_{\alpha}$ with a fixed $\alpha \in (-\infty, 0) \cup (0, 1) \cup (1, +\infty)$ is strictly decreasing in $p_{[k]}$ under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, and \eqref{eq:hypo2}.
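This sign behavior can be spot-checked by evaluating the last closed-form expression above for $\frac{ \mathrm{d} \| \bvec{p} \|_{\alpha} }{ \mathrm{d} p_{[k]} }$; a minimal Python sketch, again assuming the $q$-logarithm normalization $\ln_{\alpha} x = (x^{1-\alpha} - 1)/(1-\alpha)$ and using a sample vector of ours that satisfies \eqref{eq:equal_k+1_to_n}:
\begin{verbatim}
import math

def qlog(x, alpha):
    return math.log(x) if abs(alpha - 1.0) < 1e-12 else (x ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

def dnorm_dpk(p, k, n, alpha):
    # final closed form derived above; p is sorted, k and n are 1-based indices of p_[k], p_[n]
    p1, pk, pn = p[0], p[k - 1], p[n - 1]
    prefac = sum(x ** alpha for x in p) ** (1.0 / alpha - 1.0)
    return prefac * (pn ** (alpha - 1.0) - p1 ** (alpha - 1.0)) * (
        qlog(p1 / pk, alpha) / qlog(p1 / pn, alpha) - math.log(p1 / pk) / math.log(p1 / pn))

p = [0.4, 0.2, 0.16, 0.08, 0.08, 0.08]   # p_[1] > p_[k] > p_[k+1] = ... = p_[n] with k = 3
for alpha in (0.3, 0.7, 2.0, 5.0):
    assert dnorm_dpk(p, 3, 6, alpha) < 0.0   # consistent with the sign table above
\end{verbatim}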
Using the above results, we now prove this lemma.
If $p_{[k]} = p_{[k+1]}$, then we reset the index $k \in \{ 3, 4, \dots, n-1 \}$ to $k - 1$;
namely, we now choose the index $k \in \{ 2, 3, \dots, n-1 \}$ to satisfy the following inequalities:
\begin{align}
p_{[1]} \ge p_{[2]} \ge \dots \ge p_{[k-1]} \ge p_{[k]} > p_{[k+1]} = p_{[k+2]} = \dots = p_{[n]} \ge 0 .
\label{eq:choose_k}
\end{align}
Then, we consider decreasing $p_{[k]}$ under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, and \eqref{eq:hypo2}.
It follows from \eqref{eq:sign_d1dk} that $p_{[1]}$ strictly increases as $p_{[k]}$ decreases.
Hence, if $p_{[k]}$ is decreased, then the condition $p_{[1]} > p_{[2]}$ remains satisfied.
Similarly, it follows from \eqref{eq:hypo2} and \eqref{eq:sign_dndk} that, for all $j \in \{ k+1, k+2, \dots, n \}$, $p_{[j]}$ also strictly increases as $p_{[k]}$ decreases.
Hence, if $p_{[k]}$ is decreased, then the condition $p_{[k+1]} = p_{[k+2]} = \dots = p_{[n]} > 0$ remains satisfied.
Let $\bvec{q} = (q_{1}, q_{2}, \dots, q_{n})$ denote the probability vector obtained from $\bvec{p}$ by continuing this operation until $p_{[k]} = p_{[k+1]}$ holds, under the constraints \eqref{eq:fixed_H}, \eqref{eq:equal_k+1_to_n}, \eqref{eq:hypo1}, \eqref{eq:hypo2}, and \eqref{eq:choose_k}.
Namely, the probability vector $\bvec{q}$ satisfies the following inequalities:
\begin{align}
q_{[1]} > q_{[2]} \ge q_{[3]} \ge \dots \ge q_{[k-1]} > q_{[k]} = q_{[k+1]} = \dots = q_{[n]} > 0 .
\end{align}
Since $\bvec{q}$ is obtained from $\bvec{p}$ under the constraint \eqref{eq:fixed_H}, note that
\begin{align}
H( \bvec{p} ) = H( \bvec{q} ) .
\end{align}
Moreover, it follows from \eqref{eq:sign_diff_norm} that $\| \bvec{p} \|_{\alpha}$ with a fixed $\alpha \in (0, 1) \cup (1, +\infty)$ also strictly increases as $p_{[k]}$ decreases;
that is, we observe that
\begin{align}
\| \bvec{p} \|_{\alpha} \le \| \bvec{q} \|_{\alpha}
\end{align}
for $\alpha \in (0, 1) \cup (1, +\infty)$.
Repeating this operation until $k = 2$ and $p_{[k]} = p_{[n]}$ hold, we obtain
\begin{align}
H( \bvec{p} )
& =
H_{\sbvec{v}_{n}}( p ) ,
\\
\| \bvec{p} \|_{\alpha}
& \le
\| \bvec{v}_{n}( p ) \|_{\alpha}
\end{align}
for all $\alpha \in (0, 1) \cup (1, +\infty)$ and some $p \in [0, \frac{1}{n}]$; since $\| \bvec{p} \|_{1} = \| \bvec{v}_{n}( p ) \|_{1} = 1$, the norm inequality trivially extends to all $\alpha \in (0, +\infty)$.
That completes the proof of Lemma \ref{lem:vector_v}.
\end{IEEEproof}
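Lemma \ref{lem:vector_v} can also be explored numerically. The Python sketch below assumes $\bvec{v}_{n}( p ) = (1 - (n-1)p, p, \dots, p)$ for $p \in [0, \frac{1}{n}]$, which is how we read the definition given in the earlier sections (treat this as an assumption here); it matches $H_{\sbvec{v}_{n}}( p )$ to $H( \bvec{p} )$ by bisection and compares the $\ell_{\alpha}$-norms:
\begin{verbatim}
import math, random

def H(p):
    return -sum(x * math.log(x) for x in p if x > 0.0)

def lp_norm(p, alpha):
    return sum(x ** alpha for x in p) ** (1.0 / alpha)

def v(n, t):
    # assumed form of v_n(t): one large mass and n-1 equal small masses
    return [1.0 - (n - 1) * t] + [t] * (n - 1)

def solve_t(n, target):
    # bisection on t in [0, 1/n]; H(v_n(t)) increases from 0 to ln(n)
    lo, hi = 0.0, 1.0 / n
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(v(n, mid)) < target else (lo, mid)
    return 0.5 * (lo + hi)

random.seed(0)
n = 6
for _ in range(100):
    q = [random.random() for _ in range(n)]
    s = sum(q)
    q = [x / s for x in q]
    t = solve_t(n, H(q))
    for alpha in (0.4, 0.8, 2.0, 4.0):
        assert lp_norm(v(n, t), alpha) >= lp_norm(q, alpha) - 1e-6
\end{verbatim}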
\begin{lemma}
\label{lem:vector_w}
For any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$, there exists $p \in [\frac{1}{n}, 1]$ such that $H_{\sbvec{w}_{n}}( p ) = H( \bvec{p} )$ and $\| \bvec{w}_{n}( p ) \|_{\alpha} \le \| \bvec{p} \|_{\alpha}$ for all $\alpha \in (0, \infty)$.
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:vector_w}]
This proof is similar to the proof of Lemma \ref{lem:vector_v}.
If $n = 2$, then it can be easily seen that $\bvec{p}_{\downarrow} = \bvec{w}_{2}( p )$ for any $\bvec{p} \in \mathcal{P}_{2}$ and some $p \in [\frac{1}{2}, 1]$;
therefore, the lemma obviously holds when $n = 2$.
Moreover, since
\begin{align}
H( \bvec{p} ) & = \ln n
& \iff &&
\bvec{p} & = \bvec{u}_{n} = \bvec{w}_{n}( {\textstyle \frac{1}{n}} ),
\\
H( \bvec{p} ) & = 0
& \iff &&
\bvec{p}_{\downarrow} & = \bvec{d}_{n} = \bvec{w}_{n}( 1 ) ,
\end{align}
the lemma obviously holds if $H(\bvec{p}) \in \{ 0, \ln n \}$.
Furthermore, if $\bvec{p}_{\downarrow} = \bvec{w}_{n}( \frac{1}{m} )$ for an integer $2 \le m \le n-1$, then the lemma also obviously holds.
Thus, we omit the cases $n=2$, $H(\bvec{p}) \in \{ 0, \ln n \}$, and $\bvec{p}_{\downarrow} = \bvec{w}_{n}( \frac{1}{m} )$ from the following analysis.
For a fixed $n \ge 3$ and a constant $A \in (0, \ln n)$, we assume for $\bvec{p} \in \mathcal{P}_{n}$ that
\begin{align}
H( \bvec{p} ) = A .
\label{eq:fixed_H_w}
\end{align}
For that $\bvec{p}$, let $k, l \in \{ 2, 3, \dots, n \} \ (k < l)$ be the indices such that $p_{[1]} = p_{[k-1]} > p_{[k+1]}$ and $p_{[l]} > p_{[l+1]} = 0$;
namely, the indices $k, l$ are chosen to satisfy the following inequalities:
\begin{align}
p_{[1]} = \dots = p_{[k-1]} \ge p_{[k]} \ge p_{[k+1]} \ge \dots \ge p_{[l-1]} \ge p_{[l]} > p_{[l+1]} = \dots = p_{[n]} = 0
\quad (p_{[k-1]} > p_{[k+1]}) .
\label{eq:equal_1_to_k-1}
\end{align}
Since
$
p_{1} + p_{2} + \dots + p_{n} = 1
$,
we observe as with \eqref{eq:total_diff_prob} that
\begin{align}
\sum_{i=1}^{n} p_{i}
=
1
\qquad \Longrightarrow \qquad
\sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
=
- 1 .
\label{eq:total_diff_prob_w}
\end{align}
In this proof, we further assume that
\begin{align}
\frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
=
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} }
\label{eq:hypo1_w}
\end{align}
for $i \in \{ 2, 3, \dots, k-1 \}$,
\begin{align}
\frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} }
=
1
\label{eq:hypo2_w}
\end{align}
for $j \in \{ k+1, k+2, \dots, l-1 \}$, and
\begin{align}
\frac{ \mathrm{d} p_{[m]} }{ \mathrm{d} p_{[k]} }
=
0
\label{eq:hypo3_w}
\end{align}
for $m \in \{ l+1, l+2, \dots, n \}$.
Note that \eqref{eq:hypo2_w} implies that, for all $j \in \{ k+1, k+2, \dots, l-1 \}$, the rate of change of $p_{[j]}$ equals that of $p_{[k]}$.
By constraints \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}, we get
\begin{align}
&&
\sum_{i=1}^{n} p_{i}
& =
1
\\
& \ \overset{\eqref{eq:total_diff_prob_w}}{\Longrightarrow} \ &
\sum_{i = 1 : i \neq k}^{n} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} }
& =
- 1
\\
& \iff &
\sum_{i = 1}^{k-1} \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } + \sum_{j = k+1}^{l-1} \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } + \sum_{m = l+1}^{n} \frac{ \mathrm{d} p_{[m]} }{ \mathrm{d} p_{[k]} }
& =
- 1
\\
& \overset{\eqref{eq:hypo1_w}}{\iff} &
(k-1) \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } + \sum_{j = k+1}^{l-1} \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } + \sum_{m = l+1}^{n} \frac{ \mathrm{d} p_{[m]} }{ \mathrm{d} p_{[k]} }
& =
- 1
\\
& \overset{\eqref{eq:hypo2_w}}{\iff} &
(k-1) \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } + (l - k - 1) + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } + \sum_{m = l+1}^{n} \frac{ \mathrm{d} p_{[m]} }{ \mathrm{d} p_{[k]} }
& =
- 1
\\
& \overset{\eqref{eq:hypo3_w}}{\iff} &
(k-1) \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } + (l - k - 1) + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} }
& =
- 1
\\
& \iff &
(k-1) \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} }
& =
- (l-k)
\\
& \iff &
(k-1) \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} }
& =
- (l-k) - \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} }
\\
& \iff &
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} }
& =
- \frac{1}{k-1} \left( (l - k) + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) ,
\label{eq:total_prob_hypo_w}
\end{align}
where note in \eqref{eq:total_prob_hypo_w} that $k \ge 2$.
Moreover, since
$
H( \bvec{p} )
=
A
$,
we observe that
\begin{align}
&&
- \sum_{i = 1}^{n} p_{i} \ln p_{i}
& =
A
\\
& \ \overset{\eqref{eq:diff1_H_halfway}}{\Longrightarrow} \ &
- \sum_{i = 1 : i \neq k}^{n} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[i]} + 1)
& =
\ln p_{[k]} + 1
\\
& \iff &
- \sum_{i = 1 : i \neq k}^{l} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[i]} + 1) - \sum_{m = l+1}^{n} \left( \frac{ \mathrm{d} p_{[m]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[m]} + 1)
& =
\ln p_{[k]} + 1
\\
& \overset{\text{(a)}}{\iff} &
- \sum_{i = 1 : i \neq k}^{l} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[i]} + 1)
& =
\ln p_{[k]} + 1
\\
& \iff &
- \sum_{i = 1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[i]} + 1) - \sum_{j = k+1}^{l-1} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[j]} + 1) - \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[l]} + 1)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:equal_1_to_k-1}}{\iff} &
- (\ln p_{[1]} + 1) \sum_{i = 1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) - \sum_{j = k+1}^{l-1} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[j]} + 1) - \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[l]} + 1)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:hypo2_w}}{\iff} &
- (\ln p_{[1]} + 1) \sum_{i = 1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) - \sum_{j = k+1}^{l-1} (\ln p_{[j]} + 1) - \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[l]} + 1)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:hypo1_w}}{\iff} &
- (k-1) (\ln p_{[1]} + 1) \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right) - \sum_{j = k+1}^{l-1} (\ln p_{[j]} + 1) - \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[l]} + 1)
& =
\ln p_{[k]} + 1
\\
& \overset{\eqref{eq:total_prob_hypo_w}}{\iff} &
(\ln p_{[1]} + 1) \left( (l - k) + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) - \sum_{j = k+1}^{l-1} (\ln p_{[j]} + 1) - \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[l]} + 1)
& =
\ln p_{[k]} + 1
\\
& \iff &
(l - k) (\ln p_{[1]} + 1) - \sum_{j = k+1}^{l-1} (\ln p_{[j]} + 1) + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} - \ln p_{[l]})
& =
\ln p_{[k]} + 1
\\
& \iff &
(l - k) (\ln p_{[1]} + 1) - \sum_{j = k}^{l-1} (\ln p_{[j]} + 1) + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} - \ln p_{[l]})
& =
0
\\
& \iff &
\sum_{j = k}^{l-1} (\ln p_{[1]} - \ln p_{[j]}) + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[1]} - \ln p_{[l]})
& =
0 ,
\label{eq:total_entropy_hypo_w_halfway}
\end{align}
where (a) follows from the fact that $\left( \frac{ \mathrm{d} p_{[m]} }{ \mathrm{d} p_{[k]} } \right) (\ln p_{[m]} + 1) = 0$ for $m \in \{ l+1, l+2, \dots, n \}$ since $\frac{ \mathrm{d} p_{[m]} }{ \mathrm{d} p_{[k]} } = 0$ (see Eq. \eqref{eq:hypo3_w}), $p_{[m]} = 0$ (see Eq. \eqref{eq:equal_1_to_k-1}), and $0 \ln 0 = 0$.
Hence, under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}, we observe that
\begin{align}
\frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} }
=
- \frac{ \sum_{j = k}^{l-1} (\ln p_{[1]} - \ln p_{[j]}) }{ \ln p_{[1]} - \ln p_{[l]} } .
\label{eq:total_entropy_hypo_w}
\end{align}
We now check the sign of the right-hand side of \eqref{eq:total_entropy_hypo_w}.
Note that
\begin{align}
- (l - k) \left( \frac{ \ln p_{[1]} - \ln p_{[l-1]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
\le
\frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} }
\le
- (l - k) \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
\label{ineq:total_entropy_hypo_w}
\end{align}
since $\ln p_{[k]} \ge \ln p_{[j]} \ge \ln p_{[l-1]}$ for all $j \in \{ k, k+1, \dots, l-1 \}$.
If $1 > p_{[1]} > p_{[k]} \ge p_{[l]} > 0$, then
\begin{align}
\frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[l]} }
> 0
\end{align}
since
$
0 > \ln p_{[1]} > \ln p_{[k]} \ge \ln p_{[l]}
$;
therefore, we get for the upper bound of \eqref{ineq:total_entropy_hypo_w} that
\begin{align}
- (l - k) \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
<
0
\label{eq:sign_dndk_1_w}
\end{align}
for $1 > p_{[1]} > p_{[k]} \ge p_{[l]} > 0$, where note that $l - k \ge 1$.
Moreover, if $1 > p_{[1]} = p_{[k]} > p_{[l]} > 0$, then
\begin{align}
- (l - k) \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
& =
- (l - k) \left( \frac{ 0 }{ \ln p_{[1]} - \ln p_{[l]} } \right)
\\
& =
0 .
\label{eq:sign_dndk_2_w}
\end{align}
Combining \eqref{eq:sign_dndk_1_w} and \eqref{eq:sign_dndk_2_w}, we see that the upper bound of \eqref{ineq:total_entropy_hypo_w} is always nonpositive for $1 > p_{[1]} \ge p_{[k]} \ge p_{[l]} > 0 \ (p_{[1]} > p_{[l]})$;
that is, we observe under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right)
& \overset{\eqref{ineq:total_entropy_hypo_w}}{\le}
\operatorname{sgn} \! \left( - (l - k) \left( \frac{ \ln p_{[1]} - \ln p_{[k]} }{ \ln p_{[1]} - \ln p_{[l]} } \right) \right)
\\
& =
\begin{cases}
0
& \mathrm{if} \ p_{[1]} = p_{[k]} , \\
-1
& \mathrm{otherwise}
\end{cases}
\label{eq:sign_dndk_w}
\end{align}
for $1 > p_{[1]} \ge p_{[k]} \ge p_{[l]} > 0 \ (p_{[1]} > p_{[l]})$.
Note for the constraint \eqref{eq:fixed_H_w} that
\begin{align}
\lim_{p_{[l]} \to 0^{+}} H( p_{[1]}, p_{[2]}, \dots, p_{[l-1]}, p_{[l]}, 0, 0, \dots, 0 )
=
H( p_{[1]}, p_{[2]}, \dots, p_{[l-1]}, 0, 0, \dots, 0 )
\end{align}
since $\lim_{x \to 0^{+}} x \ln x = 0$, which is consistent with the convention $0 \ln 0 = 0$.
Thus, it follows from \eqref{eq:sign_dndk_w} that $p_{[l]}$ is strictly decreasing in $p_{[k]}$ under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}.
Similarly, we check the sign of the right-hand side of \eqref{eq:total_prob_hypo_w}.
Substituting the lower bound of \eqref{ineq:total_entropy_hypo_w} into the right-hand side of \eqref{eq:total_prob_hypo_w}, we observe that
\begin{align}
\frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} }
\le
- \frac{l-k}{k-1} \left( 1 - \frac{ \ln p_{[1]} - \ln p_{[l-1]} }{ \ln p_{[1]} - \ln p_{[l]} } \right) .
\label{eq:total_entropy_hypo_w_UB}
\end{align}
If $1 > p_{[1]} \ge p_{[l-1]} > p_{[l]} > 0$, then
\begin{align}
\frac{ \ln p_{[1]} - \ln p_{[l-1]} }{ \ln p_{[1]} - \ln p_{[l]} }
<
1
\end{align}
since
$
0 > \ln p_{[1]} \ge \ln p_{[l-1]} > \ln p_{[l]}
$;
therefore, we get for the upper bound of \eqref{eq:total_entropy_hypo_w_UB} that
\begin{align}
- \frac{l-k}{k-1} \left( 1 - \frac{ \ln p_{[1]} - \ln p_{[l-1]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
<
0
\label{eq:sign_dndk_1_w2}
\end{align}
for $1 > p_{[1]} \ge p_{[l-1]} > p_{[l]} > 0$, where note that $\frac{l-k}{k-1} > 0$.
Moreover, if $1 > p_{[1]} = p_{[l-1]} > p_{[l]} > 0$, then
\begin{align}
- \frac{l-k}{k-1} \left( 1 - \frac{ \ln p_{[1]} - \ln p_{[l-1]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
& =
- \frac{l-k}{k-1} \left( 1 - \frac{ \ln p_{[1]} - \ln p_{[l]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
\\
& =
- \frac{l-k}{k-1} (1 - 1)
\\
& =
0 .
\label{eq:sign_dndk_1_w3}
\end{align}
It follows from \eqref{eq:sign_dndk_1_w2} and \eqref{eq:sign_dndk_1_w3} that the upper bound of \eqref{eq:total_entropy_hypo_w_UB} is always nonpositive for $1 > p_{[1]} \ge p_{[l-1]} \ge p_{[l]} > 0 \ (p_{[1]} > p_{[l]})$;
that is, we observe under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right)
& \overset{\eqref{eq:total_entropy_hypo_w_UB}}{\le}
\operatorname{sgn} \! \left( - \frac{l-k}{k-1} \left( 1 - \frac{ \ln p_{[1]} - \ln p_{[l-1]} }{ \ln p_{[1]} - \ln p_{[l]} } \right) \right)
\\
& =
\begin{cases}
0
& \mathrm{if} \ p_{[l-1]} = p_{[l]} , \\
-1
& \mathrm{otherwise}
\end{cases}
\label{eq:sign_d1dk_w}
\end{align}
for $1 > p_{[1]} \ge p_{[l-1]} \ge p_{[l]} > 0 \ (p_{[1]} > p_{[l]})$.
As with \eqref{eq:sign_dndk_w}, it follows from \eqref{eq:sign_d1dk_w} that, for all $i \in \{ 1, 2, \dots, k-1 \}$, $p_{[i]}$ is strictly decreasing in $p_{[k]}$ under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}.
On the other hand, for a fixed $\alpha \in (0, 1) \cup (1, +\infty)$, we have
\begin{align}
\frac{ \mathrm{d} \| \bvec{p} \|_{\alpha} }{ \mathrm{d} p_{[k]} }
& \overset{\eqref{eq:diff_norm_pk_halfway}}{=}
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]}^{\alpha}) \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{l} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]}^{\alpha}) + \sum_{m=l+1}^{n} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[m]}^{\alpha}) \right)
\\
& \overset{\text{(a)}}{=}
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{l} \frac{ \mathrm{d} }{ \mathrm{d} p_{[k]} } (p_{[i]}^{\alpha}) \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{l} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) \left( \frac{ \mathrm{d} }{ \mathrm{d} p_{[i]} } (p_{[i]}^{\alpha}) \right) \right)
\\
& =
\frac{1}{\alpha} \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \alpha \, p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{l} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) (\alpha \, p_{[i]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \sum_{i=1 : i \neq k}^{l} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) p_{[i]}^{\alpha-1} \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + \sum_{i=1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) p_{[i]}^{\alpha-1} + \sum_{j=k+1}^{l-1} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) p_{[j]}^{\alpha-1} + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) p_{[l]}^{\alpha-1} \right)
\\
& \overset{\eqref{eq:equal_1_to_k-1}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + p_{[1]}^{\alpha-1} \sum_{i=1}^{k-1} \left( \frac{ \mathrm{d} p_{[i]} }{ \mathrm{d} p_{[k]} } \right) + \sum_{j=k+1}^{l-1} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) p_{[j]}^{\alpha-1} + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) p_{[l]}^{\alpha-1} \right)
\\
& \overset{\eqref{eq:hypo1_w}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + p_{[1]}^{\alpha-1} (k-1) \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right) + \sum_{j=k+1}^{l-1} \left( \frac{ \mathrm{d} p_{[j]} }{ \mathrm{d} p_{[k]} } \right) p_{[j]}^{\alpha-1} + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) p_{[l]}^{\alpha-1} \right)
\\
& \overset{\eqref{eq:hypo2_w}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} + p_{[1]}^{\alpha-1} (k-1) \left( \frac{ \mathrm{d} p_{[1]} }{ \mathrm{d} p_{[k]} } \right) + \sum_{j=k+1}^{l-1} p_{[j]}^{\alpha-1} + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) p_{[l]}^{\alpha-1} \right)
\\
& \overset{\eqref{eq:total_prob_hypo_w}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( p_{[k]}^{\alpha-1} - p_{[1]}^{\alpha-1} \left( (l - k) + \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) + \sum_{j=k+1}^{l-1} p_{[j]}^{\alpha-1} + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) p_{[l]}^{\alpha-1} \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \sum_{j=k}^{l-1} p_{[j]}^{\alpha-1} - p_{[1]}^{\alpha-1} (l - k) + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \sum_{j=k}^{l-1} (p_{[j]}^{\alpha-1} - p_{[1]}^{\alpha-1}) + \left( \frac{ \mathrm{d} p_{[l]} }{ \mathrm{d} p_{[k]} } \right) (p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1}) \right)
\\
& \overset{\eqref{eq:total_entropy_hypo_w}}{=}
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \sum_{j=k}^{l-1} (p_{[j]}^{\alpha-1} - p_{[1]}^{\alpha-1}) + \left( - \frac{ \sum_{j = k}^{l-1} (\ln p_{[1]} - \ln p_{[j]}) }{ \ln p_{[1]} - \ln p_{[l]} } \right) (p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1}) \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \left( \frac{ \sum_{j=k}^{l-1} (p_{[j]}^{\alpha-1} - p_{[1]}^{\alpha-1}) }{ p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} } - \frac{ \sum_{j = k}^{l-1} (\ln p_{[1]} - \ln p_{[j]}) }{ \ln p_{[1]} - \ln p_{[l]} } \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \sum_{j=k}^{l-1} \left( \frac{ p_{[j]}^{\alpha-1} - p_{[1]}^{\alpha-1} }{ p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} } - \frac{ \ln p_{[1]} - \ln p_{[j]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \sum_{j=k}^{l-1} \left( \frac{ \left( \frac{ p_{[1]} }{ p_{[j]} } \right)^{1-\alpha} - 1 }{ \left( \frac{ p_{[1]} }{ p_{[l]} } \right)^{1-\alpha} - 1 } - \frac{ \ln p_{[1]} - \ln p_{[j]} }{ \ln p_{[1]} - \ln p_{[l]} } \right)
\\
& =
\left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \sum_{j=k}^{l-1} \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[j]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[l]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[j]} } }{ \ln \frac{ p_{[1]} }{ p_{[l]} } } \right)
\label{eq:diff_norm_pk_w}
\end{align}
where (a) holds since the constraint \eqref{eq:hypo3_w} implies that $p_{[m]}$ does not vary with $p_{[k]}$ for $m \in \{ l+1, l+2, \dots, n \}$.
Hence, we can see that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} \| \bvec{p} \|_{\alpha} }{ \mathrm{d} p_{[k]} } \right)
& =
\operatorname{sgn} \! \left( \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \sum_{j=k}^{l-1} \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[j]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[l]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[j]} } }{ \ln \frac{ p_{[1]} }{ p_{[l]} } } \right) \right)
\\
& =
\underbrace{ \operatorname{sgn} \! \left( \left( \sum_{i=1}^{n} p_{i}^{\alpha} \right)^{\frac{1}{\alpha} - 1} \right) }_{ = 1 } \cdot \, \operatorname{sgn} \! \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \cdot \, \operatorname{sgn} \! \left( \sum_{j=k}^{l-1} \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[j]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[l]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[j]} } }{ \ln \frac{ p_{[1]} }{ p_{[l]} } } \right) \right)
\\
& =
\operatorname{sgn} \! \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \cdot \, \operatorname{sgn} \! \left( \sum_{j=k}^{l-1} \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[j]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[l]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[j]} } }{ \ln \frac{ p_{[1]} }{ p_{[l]} } } \right) \right)
\label{eq:diff1_norm_pk_1_w}
\end{align}
for $\alpha \in (0, 1) \cup (1, \infty)$.
As with \eqref{eq:sign_pn-p1}, we readily see that
\begin{align}
\operatorname{sgn} \! \left( p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha < 1 , \\
0
& \mathrm{if} \ \alpha = 1 , \\
-1
& \mathrm{if} \ \alpha > 1
\end{cases}
\label{eq:sign_pl-p1_w}
\end{align}
for $p_{[1]} > p_{[l]} > 0$.
Moreover, since $1 \le \frac{ p_{[1]} }{ p_{[j]} } \le \frac{ p_{[1]} }{ p_{[l]} } \ (\frac{ p_{[1]} }{ p_{[l]} } \neq 1)$ for $j \in \{ k, k+1, \dots, l-1 \}$, we observe from Lemma \ref{lem:frac_qlog} that
\begin{align}
\operatorname{sgn} \! \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[j]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[l]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[j]} } }{ \ln \frac{ p_{[1]} }{ p_{[l]} } } \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha > 1 \ \mathrm{and} \ p_{[1]} > p_{[j]} > p_{[l]} , \\
0
& \mathrm{if} \ \alpha = 1 \ \mathrm{or} \ p_{[1]} = p_{[j]} \ \mathrm{or} \ p_{[j]} = p_{[l]} , \\
-1
& \mathrm{if} \ \alpha < 1 \ \mathrm{and} \ p_{[1]} > p_{[j]} > p_{[l]} .
\end{cases}
\end{align}
for $j \in \{ k, k+1, \dots, l-1 \}$;
and therefore, we have
\begin{align}
\operatorname{sgn} \! \left( \sum_{j=k}^{l-1} \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[j]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[l]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[j]} } }{ \ln \frac{ p_{[1]} }{ p_{[l]} } } \right) \right)
& =
\begin{cases}
1
& \mathrm{if} \ \alpha > 1 \ \mathrm{and} \ (p_{[1]} > p_{[k]} \ge p_{[l]} \ \mathrm{or} \ p_{[1]} \ge p_{[k]} > p_{[l]}) , \\
0
& \mathrm{if} \ \alpha = 1 \ \mathrm{or} \ (p_{[1]} = p_{[k]} \ \mathrm{and} \ p_{[k+1]} = p_{[l]}) \ \mathrm{or} \ p_{[k]} = p_{[l]} , \\
-1
& \mathrm{if} \ \alpha < 1 \ \mathrm{and} \ (p_{[1]} > p_{[k]} \ge p_{[l]} \ \mathrm{or} \ p_{[1]} \ge p_{[k]} > p_{[l]})
\end{cases}
\label{eq:sign_f(p1pkpn)_w}
\end{align}
for $\bvec{p} \in \mathcal{P}_{n}$ under the constraint \eqref{eq:equal_1_to_k-1}.
Therefore, under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}, we obtain
\begin{align}
\operatorname{sgn} \! \left( \frac{ \mathrm{d} \| \bvec{p} \|_{\alpha} }{ \mathrm{d} p_{[k]} } \right)
& \overset{\eqref{eq:diff1_norm_pk_1_w}}{=}
\operatorname{sgn} \! \left( \vphantom{\sum} p_{[l]}^{\alpha-1} - p_{[1]}^{\alpha-1} \right) \cdot \, \operatorname{sgn} \! \left( \sum_{j=k}^{l-1} \left( \frac{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[j]} } }{ \ln_{\alpha} \frac{ p_{[1]} }{ p_{[l]} } } - \frac{ \ln \frac{ p_{[1]} }{ p_{[j]} } }{ \ln \frac{ p_{[1]} }{ p_{[l]} } } \right) \right)
\\
& =
\begin{cases}
0
& \mathrm{if} \ \alpha = 1 \ \mathrm{or} \ (p_{[1]} = p_{[k]} \ \mathrm{and} \ p_{[k+1]} = p_{[l]}) \ \mathrm{or} \ p_{[k]} = p_{[l]} , \\
-1
& \mathrm{if} \ \alpha \neq 1 \ \mathrm{and} \ (p_{[1]} > p_{[k]} \ge p_{[l]} \ \mathrm{or} \ p_{[1]} \ge p_{[k]} > p_{[l]})
\end{cases}
\label{eq:sign_diff_norm_w}
\end{align}
for $\alpha \in (0, 1) \cup (1, +\infty)$, where the last equality follows from \eqref{eq:sign_pl-p1_w} and \eqref{eq:sign_f(p1pkpn)_w}.
Hence, we have that $\| \bvec{p} \|_{\alpha}$ with a fixed $\alpha \in (0, 1) \cup (1, +\infty)$ is strictly decreasing in $p_{[k]}$ under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}.
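As in the previous proof, this can be spot-checked by evaluating the final closed form \eqref{eq:diff_norm_pk_w} directly; a minimal Python sketch with the same assumed $q$-logarithm normalization, using a sample vector of ours that satisfies \eqref{eq:equal_1_to_k-1}:
\begin{verbatim}
import math

def qlog(x, alpha):
    return math.log(x) if abs(alpha - 1.0) < 1e-12 else (x ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

def dnorm_dpk_w(p, k, l, alpha):
    # closed form of d||p||_alpha / d p_[k] in this proof; p is sorted, k and l are 1-based
    p1, pl = p[0], p[l - 1]
    prefac = sum(x ** alpha for x in p if x > 0.0) ** (1.0 / alpha - 1.0)
    total = sum(qlog(p1 / p[j - 1], alpha) / qlog(p1 / pl, alpha)
                - math.log(p1 / p[j - 1]) / math.log(p1 / pl) for j in range(k, l))
    return prefac * (pl ** (alpha - 1.0) - p1 ** (alpha - 1.0)) * total

p = [0.3, 0.3, 0.15, 0.12, 0.08, 0.05, 0.0]   # p_[1] = p_[2] > p_[3] >= ... >= p_[6] > p_[7] = 0
for alpha in (0.5, 2.0, 4.0):
    assert dnorm_dpk_w(p, 3, 6, alpha) < 0.0   # consistent with the sign result above
\end{verbatim}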
Using the above results, we now prove this lemma.
Note that, if $p_{[k-1]} = p_{[k]}$ and $k = l-1$, then $\bvec{p}_{\downarrow} = \bvec{w}_{n}( p )$ for some $p \in [\frac{1}{n}, 1]$.
If $p_{[k-1]} = p_{[k]}$ and $k < l-1$, then we reset the index $k \in \{ 2, 3, \dots, n-2 \}$ to $k + 1$;
namely, we now choose the indices $k, l \in \{ 2, 3, \dots, n \} \ (k < l)$ to satisfy the following inequalities:
\begin{align}
p_{[1]} = p_{[2]} = \dots = p_{[k-1]} > p_{[k]} \ge p_{[k+1]} \ge \dots \ge p_{[l-1]} \ge p_{[l]} > p_{[l+1]} = p_{[l+2]} = \dots = p_{[n]} = 0 .
\label{eq:choose_k2}
\end{align}
Then, we consider increasing $p_{[k]}$ under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:equal_1_to_k-1}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}.
Note that the constraint \eqref{eq:hypo2_w} implies that, for all $j \in \{ k+1, k+2, \dots, l-1 \}$, $p_{[j]}$ increases at the same rate as $p_{[k]}$.
It follows from \eqref{eq:hypo1_w} and \eqref{eq:sign_d1dk_w} that, for all $i \in \{ 1, 2, \dots, k-1 \}$, $p_{[i]}$ strictly decreases as $p_{[k]}$ increases.
Hence, if $p_{[k]}$ is increased, then it is possible that eventually $p_{[1]} = \dots = p_{[k-1]} = p_{[k]}$.
Similarly, it follows from \eqref{eq:sign_dndk_w} that $p_{[l]}$ also strictly decreases as $p_{[k]}$ increases.
Hence, if $p_{[k]}$ is increased, then it is possible that eventually $p_{[l]} = p_{[l+1]} = \dots = p_{[n]} = 0$.
Let $\bvec{q} = (q_{1}, q_{2}, \dots, q_{n})$ denote the probability vector obtained from $\bvec{p}$ by continuing this operation until $p_{[1]} = p_{[k]}$ or $p_{[l]} = 0$ holds, under the constraints \eqref{eq:fixed_H_w}, \eqref{eq:hypo1_w}, \eqref{eq:hypo2_w}, and \eqref{eq:hypo3_w}.
Namely, the probability vector $\bvec{q}$ satisfies either
\begin{align}
q_{[1]} = q_{[2]} = \dots = q_{[k-1]} = q_{[k]} \ge q_{[k+1]} \ge q_{[k+2]} \ge \dots \ge q_{[l-1]} > q_{[l]} \ge q_{[l+1]} = q_{[l+2]} = \dots = q_{[n]} = 0
\label{ineq:q_w_1}
\end{align}
or
\begin{align}
q_{[1]} = q_{[2]} = \dots = q_{[k-1]} \ge q_{[k]} \ge q_{[k+1]} \ge q_{[k+2]} \ge \dots \ge q_{[l-1]} > q_{[l]} = q_{[l+1]} = q_{[l+2]} = \dots = q_{[n]} = 0 .
\label{ineq:q_w_2}
\end{align}
Note that it is possible that both \eqref{ineq:q_w_1} and \eqref{ineq:q_w_2} hold simultaneously;
that is,
\begin{align}
q_{[1]} = q_{[2]} = \dots = q_{[k-1]} = q_{[k]} \ge q_{[k+1]} \ge q_{[k+2]} \ge \dots \ge q_{[l-1]} > q_{[l]} = q_{[l+1]} = q_{[l+2]} = \dots = q_{[n]} = 0
\end{align}
holds.
Since $\bvec{q}$ is obtained under the constraint \eqref{eq:fixed_H_w}, note that
\begin{align}
H( \bvec{p} ) = H( \bvec{q} ) .
\end{align}
Moreover, it follows from \eqref{eq:sign_diff_norm_w} that $\| \bvec{p} \|_{\alpha}$ with a fixed $\alpha \in (0, 1) \cup (1, +\infty)$ also strictly decreases as $p_{[k]}$ increases;
therefore, we observe that
\begin{align}
\| \bvec{p} \|_{\alpha} \ge \| \bvec{q} \|_{\alpha}
\end{align}
for $\alpha \in (0, 1) \cup (1, +\infty)$.
Repeating this operation until $k = l-1$ and $p_{[1]} = p_{[k]} > p_{[l]} \ge p_{[l+1]} = \dots = p_{[n]} = 0$ hold, we obtain
\begin{align}
H( \bvec{p} )
& =
H_{\sbvec{w}_{n}}( p ) ,
\\
\| \bvec{p} \|_{\alpha}
& \ge
\| \bvec{w}_{n}( p ) \|_{\alpha}
\end{align}
for all $\alpha \in (0, 1) \cup (1, +\infty)$ and some $p \in [\frac{1}{n}, 1]$; since $\| \bvec{p} \|_{1} = \| \bvec{w}_{n}( p ) \|_{1} = 1$, the norm inequality trivially extends to all $\alpha \in (0, +\infty)$.
That completes the proof of Lemma \ref{lem:vector_w}.
\end{IEEEproof}
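Lemma \ref{lem:vector_w} can be explored numerically in the same way. The Python sketch below assumes $\bvec{w}_{n}( p ) = (p, \dots, p, 1 - \lfloor 1/p \rfloor p, 0, \dots, 0)$ with $\lfloor 1/p \rfloor$ components equal to $p$, which is how we read the earlier definition (treat this as an assumption here):
\begin{verbatim}
import math, random

def H(p):
    return -sum(x * math.log(x) for x in p if x > 0.0)

def lp_norm(p, alpha):
    return sum(x ** alpha for x in p if x > 0.0) ** (1.0 / alpha)

def w(n, t):
    # assumed form of w_n(t): floor(1/t) masses equal to t, one remainder, then zeros
    m = int(math.floor(1.0 / t))
    rest = max(0.0, 1.0 - m * t)
    return [t] * m + [rest] + [0.0] * max(0, n - m - 1)

def solve_t(n, target):
    # bisection on t in [1/n, 1]; H(w_n(t)) decreases from ln(n) to 0
    lo, hi = 1.0 / n, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(w(n, mid)) > target else (lo, mid)
    return 0.5 * (lo + hi)

random.seed(1)
n = 6
for _ in range(100):
    q = [random.random() for _ in range(n)]
    s = sum(q)
    q = [x / s for x in q]
    t = solve_t(n, H(q))
    for alpha in (0.4, 0.8, 2.0, 4.0):
        assert lp_norm(w(n, t), alpha) <= lp_norm(q, alpha) + 1e-6
\end{verbatim}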
Lemmas \ref{lem:vector_v} and \ref{lem:vector_w}, both of which rely on Lemma \ref{lem:frac_qlog}, imply that the distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$ have extremal properties with respect to the relation between the Shannon entropy and the $\ell_{\alpha}$-norm.
We can then derive tight bounds on the $\ell_{\alpha}$-norm for a fixed Shannon entropy as follows:
\begin{theorem}
\label{th:extremes}
Let $\bar{\bvec{v}}_{n}( \bvec{p} ) \triangleq \bvec{v}_{n}( H_{\sbvec{v}_{n}}^{-1}( H( \bvec{p} ) ) )$ and $\bar{\bvec{w}}_{n}( \bvec{p} ) \triangleq \bvec{w}_{n}( H_{\sbvec{w}_{n}}^{-1}( H( \bvec{p} ) ) )$ for $\bvec{p} \in \mathcal{P}_{n}$.
Then, we observe that
\begin{align}
\| \bar{\bvec{w}}_{n}( \bvec{p} ) \|_{\alpha} \le \| \bvec{p} \|_{\alpha} \le \| \bar{\bvec{v}}_{n}( \bvec{p} ) \|_{\alpha}
\label{ineq:extremes}
\end{align}
for any $n \ge 2$, any $\bvec{p} \in \mathcal{P}_{n}$, and any $\alpha \in (0, \infty)$.
\end{theorem}
\begin{IEEEproof}[Proof of Theorem \ref{th:extremes}]
It follows from Lemmas \ref{lem:vector_v} and \ref{lem:vector_w} that, for any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$, there exist $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$ such that
\begin{align}
H_{\sbvec{w}_{n}}( p^{\prime} )
& =
H( \bvec{p} )
=
H_{\sbvec{v}_{n}}( p ) ,
\\
\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}
& \le
\| \bvec{p} \|_{\alpha}
\le
\| \bvec{v}_{n}( p ) \|_{\alpha}
\end{align}
for all $\alpha \in (0, +\infty)$.
Next, we consider $\bvec{q}, \bvec{q}^{\prime} \in \mathcal{P}_{n}$ such that
\begin{align}
H( \bvec{q}^{\prime} )
=
H_{\sbvec{w}_{n}}( p^{\prime} )
& =
H_{\sbvec{v}_{n}}( p )
=
H( \bvec{q} ) ,
\label{eq:sameH_q_1} \\
\| \bvec{q}^{\prime} \|_{\alpha}
\le
\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}
& \le
\| \bvec{v}_{n}( p ) \|_{\alpha}
\le
\| \bvec{q} \|_{\alpha}
\end{align}
for $\alpha \in (0, +\infty)$.
It also follows from Lemmas \ref{lem:vector_v} and \ref{lem:vector_w} that there exist $q \in [0, \frac{1}{n}]$ and $q^{\prime} \in [\frac{1}{n}, 1]$ such that
\begin{align}
H_{\sbvec{w}_{n}}( q^{\prime} )
=
H( \bvec{q}^{\prime} )
& =
H( \bvec{q} )
=
H_{\sbvec{v}_{n}}( q ) ,
\label{eq:sameH_q_2} \\
\| \bvec{w}_{n}( q^{\prime} ) \|_{\alpha}
\le
\| \bvec{q}^{\prime} \|_{\alpha}
& \le
\| \bvec{q} \|_{\alpha}
\le
\| \bvec{v}_{n}( q ) \|_{\alpha}
\end{align}
for $\alpha \in (0, +\infty)$.
Note from \eqref{eq:sameH_q_1} and \eqref{eq:sameH_q_2} that
\begin{align}
H_{\sbvec{v}_{n}}( p )
& =
H_{\sbvec{v}_{n}}( q ) ,
\\
H_{\sbvec{w}_{n}}( p^{\prime} )
& =
H_{\sbvec{w}_{n}}( q^{\prime} ) .
\end{align}
Note that it follows from Lemmas \ref{lem:Hv} and \ref{lem:Hw} that $H_{\sbvec{v}_{n}}( p )$ and $H_{\sbvec{w}_{n}}( p^{\prime} )$ are both bijective functions of $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$, respectively.
Therefore, we get
\begin{align}
p
& =
q ,
\\
p^{\prime}
& =
q^{\prime} ,
\end{align}
which imply that, for $\bvec{q}$ and $\bvec{q}^{\prime}$, the following equalities must hold:
\begin{align}
\| \bvec{v}_{n}( p ) \|_{\alpha}
& =
\| \bvec{q} \|_{\alpha}
=
\| \bvec{v}_{n}( q ) \|_{\alpha} ,
\\
\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}
& =
\| \bvec{q}^{\prime} \|_{\alpha}
=
\| \bvec{w}_{n}( q^{\prime} ) \|_{\alpha} .
\end{align}
That completes the proof of Theorem \ref{th:extremes}.
\end{IEEEproof}
Note that the distributions $\bar{\bvec{v}}_{n}( \bvec{p} )$ and $\bar{\bvec{w}}_{n}( \bvec{p} )$ denote $\bvec{v}_{n}( p )$ and $\bvec{w}_{n}( q )$, respectively, such that $H_{\sbvec{v}_{n}}( p ) = H_{\sbvec{w}_{n}}( q ) = H( \bvec{p} )$ for a given $\bvec{p} \in \mathcal{P}_{n}$.
Theorem \ref{th:extremes} shows that, among all $n$-ary probability vectors with a fixed Shannon entropy, the distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$ take the maximum and the minimum $\ell_{\alpha}$-norm, respectively.
Thus, the bounds \eqref{ineq:extremes} of Theorem \ref{th:extremes} are tight in the sense that the distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$ attain the upper and lower bounds of \eqref{ineq:extremes}, respectively.
In other words, Theorem \ref{th:extremes} implies that the boundaries of $\mathcal{R}_{n}( \alpha )$, defined in \eqref{def:region_Pn}, can be attained by $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$.
We illustrate the graphs of the boundaries of $\mathcal{R}_{n}( \alpha )$ in Fig. \ref{fig:region_P6_half}.
Note that $\| \bar{\bvec{v}}_{2}( \bvec{p} ) \|_{\alpha} = \| \bar{\bvec{w}}_{2}( \bvec{p} ) \|_{\alpha}$ for any $\bvec{p} \in \mathcal{P}_{2}$ and any $\alpha \in (0, \infty)$ since $\bvec{v}_{2}( p ) = \bvec{w}_{2}( 1 - p )$ for $p \in [0, \frac{1}{2}]$.
Therefore, Theorem \ref{th:extremes} becomes meaningful for $n \ge 3$.
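As a quick numerical illustration of Theorem \ref{th:extremes}, the following Python sketch draws an arbitrary $\bvec{p} \in \mathcal{P}_{6}$, matches its Shannon entropy by bisection, and checks the two bounds of \eqref{ineq:extremes}.
The closed forms used below for $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$ are assumed stand-ins inferred from the constructions in Lemmas \ref{lem:vector_v} and \ref{lem:vector_w}; the authoritative definitions are those given earlier in the paper.
\begin{verbatim}
# Sketch only: v_n and w_n below are assumed stand-ins for the
# distributions defined earlier in the paper.
import numpy as np
from scipy.optimize import brentq

def v(n, p):   # assumed: (1-(n-1)p, p, ..., p), p in [0, 1/n]
    return np.array([1.0 - (n - 1) * p] + [p] * (n - 1))

def w(n, p):   # assumed: (p, ..., p, 1 - m*p, 0, ..., 0), m = floor(1/p)
    m = min(int(np.floor(1.0 / p)), n)
    head = [p] * m + [max(1.0 - m * p, 0.0)]
    return np.array((head + [0.0] * n)[:n])

def H(q):      # Shannon entropy in nats
    q = q[q > 1e-15]
    return float(-np.sum(q * np.log(q)))

def lp_norm(q, a):
    return float(np.sum(q ** a) ** (1.0 / a))

n, alpha = 6, 0.5
p_vec = np.random.default_rng(0).dirichlet(np.ones(n))
h = H(p_vec)
pv = brentq(lambda t: H(v(n, t)) - h, 1e-12, 1.0 / n)      # H_{v_n}^{-1}(h)
pw = brentq(lambda t: H(w(n, t)) - h, 1.0 / n, 1 - 1e-12)  # H_{w_n}^{-1}(h)
# check of (ineq:extremes): ||w_n(pw)||_a <= ||p||_a <= ||v_n(pv)||_a
print(lp_norm(w(n, pw), alpha), lp_norm(p_vec, alpha), lp_norm(v(n, pv), alpha))
\end{verbatim}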
On the other hand, the following theorem shows that, among all $n$-ary probability vectors with a fixed $\ell_{\alpha}$-norm, the distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$ also take the extreme values of the Shannon entropy.
\begin{theorem}
\label{th:extremes2}
Let $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$ be chosen to satisfy
\begin{align}
\| \bvec{v}_{n}( p ) \|_{\alpha}
=
\| \bvec{p} \|_{\alpha}
=
\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}
\label{ineq:norm_v_to_w}
\end{align}
for a fixed $\alpha \in (0, 1) \cup (1, \infty)$.
Then, we observe that
\begin{align}
0 < \alpha < 1
& \ \Longrightarrow \
H_{\sbvec{v}_{n}}( p ) \le H( \bvec{p} ) \le H_{\sbvec{w}_{n}}( p^{\prime} ) ,
\label{ineq:H_0to1} \\
\alpha > 1
& \ \Longrightarrow \
H_{\sbvec{w}_{n}}( p^{\prime} ) \le H( \bvec{p} ) \le H_{\sbvec{v}_{n}}( p )
\label{ineq:H_1toinfty}
\end{align}
for any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$.
\end{theorem}
\begin{IEEEproof}[Proof of Theorem \ref{th:extremes2}]
From Theorem \ref{th:extremes}, for a fixed $n \ge 2$, we consider $\bvec{p} \in \mathcal{P}_{n}$, $p \in [0, \frac{1}{n}]$, and $p^{\prime} \in [\frac{1}{n}, 1]$ such that
\begin{align}
H_{\sbvec{w}_{n}}( p^{\prime} )
& =
H( \bvec{p} )
=
H_{\sbvec{v}_{n}}( p ) ,
\\
\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}
& \le
\| \bvec{p} \|_{\alpha}
\le
\| \bvec{v}_{n}( p ) \|_{\alpha}
\end{align}
for $\alpha \in (0, 1) \cup (1, +\infty)$.
Note that $p$ and $p^{\prime}$ are uniquely determined for a given $\bvec{p} \in \mathcal{P}_{n}$.
It follows from Lemmas \ref{lem:Hv} and \ref{lem:Hw} that $H_{\sbvec{v}_{n}}( p ) \in [0, \ln n]$ and $H_{\sbvec{w}_{n}}( p^{\prime} ) \in [0, \ln n]$ are strictly increasing for $p \in [0, \frac{1}{n}]$ and strictly decreasing for $p^{\prime} \in [\frac{1}{n}, 1]$, respectively.
Moreover, it follows from Lemmas \ref{lem:mono_v} and \ref{lem:mono_w} that, if $\alpha \in (0, 1)$, then $\| \bvec{v}_{n}( p ) \|_{\alpha}$ and $\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}$ are strictly increasing for $p \in [0, \frac{1}{n}]$ and strictly decreasing for $p^{\prime} \in [\frac{1}{n}, 1]$, respectively.
Therefore, decreasing both $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$, we can obtain $q \in [0, \frac{1}{n}]$ and $q^{\prime} \in [\frac{1}{n}, 1]$ such that
\begin{align}
H_{\sbvec{w}_{n}}( q^{\prime} )
& \ge
H( \bvec{p} )
\ge
H_{\sbvec{v}_{n}}( q ) ,
\\
\| \bvec{w}_{n}( q^{\prime} ) \|_{\alpha}
& =
\| \bvec{p} \|_{\alpha}
=
\| \bvec{v}_{n}( q ) \|_{\alpha}
\end{align}
for a fixed $\alpha \in (0, 1)$.
On the other hand, it follows from Lemmas \ref{lem:mono_v} and \ref{lem:mono_w} that, if $\alpha \in (1, +\infty)$, then $\| \bvec{v}_{n}( p ) \|_{\alpha}$ and $\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}$ are strictly decreasing for $p \in [0, \frac{1}{n}]$ and strictly increasing for $p^{\prime} \in [\frac{1}{n}, 1]$, respectively.
Therefore, increasing both $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$, we can obtain $q \in [0, \frac{1}{n}]$ and $q^{\prime} \in [\frac{1}{n}, 1]$ such that
\begin{align}
H_{\sbvec{w}_{n}}( q^{\prime} )
& \le
H( \bvec{p} )
\le
H_{\sbvec{v}_{n}}( q ) ,
\\
\| \bvec{w}_{n}( q^{\prime} ) \|_{\alpha}
& =
\| \bvec{p} \|_{\alpha}
=
\| \bvec{v}_{n}( q ) \|_{\alpha}
\end{align}
for a fixed $\alpha \in (1, +\infty)$.
Finally, we note that the strict monotonicity shown in Lemmas \ref{lem:mono_v} and \ref{lem:mono_w} proves the uniqueness of the values $q \in [0, \frac{1}{n}]$ and $q^{\prime} \in [\frac{1}{n}, 1]$.
In fact, it follows from Lemmas \ref{lem:Hv}, \ref{lem:Hw}, \ref{lem:mono_v}, and \ref{lem:mono_w} that, for a fixed $n \ge 2$ and a fixed $\alpha \in (0, 1) \cup (1, +\infty)$, $\| \bvec{v}_{n}( p ) \|_{\alpha}$ and $\| \bvec{w}_{n}( p^{\prime} ) \|_{\alpha}$ are both bijective functions of $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$, respectively.
That completes the proof of Theorem \ref{th:extremes2}.
\end{IEEEproof}
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsN_6ary_half_no.pdf}
\put(-5, 30){\rotatebox{90}{$\| \bvec{p} \|_{\alpha}$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(29, 32){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(70, 29){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsN_6ary_2_no.pdf}
\put(-5, 30){\rotatebox{90}{$\| \bvec{p} \|_{\alpha}$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(60, 41){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(40, 23){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}
\caption{
Plot of the boundary of $\mathcal{R}_{n}( \alpha )$ with $n = 6$.
The upper- and lower-boundaries correspond to distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$, respectively.}
\label{fig:region_P6_half}
\end{figure}
In Theorem \ref{th:extremes2}, note that the values $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$ are uniquely determined by the value of $\| \bvec{p} \|_{\alpha}$.
Theorems \ref{th:extremes} and \ref{th:extremes2} show that the extremal relations between the Shannon entropy and the $\ell_{\alpha}$-norm are attained by the distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$.
Following the same approach as \cite[Theorem 2]{fabregas}, we extend the bounds of Theorem \ref{th:extremes} from the $\ell_{\alpha}$-norm to several information measures related to the $\ell_{\alpha}$-norm, as follows:
\begin{corollary}
\label{cor:extremes}
Let $f( \cdot )$ be a strictly monotonic function.
Then, we observe that:
(i) if $f( \cdot )$ is strictly increasing, then
\begin{align}
f( \vphantom{\sum} \| \bar{\bvec{w}}_{n}( \bvec{p} ) \|_{\alpha} )
\le
f( \vphantom{\sum} \| \bvec{p} \|_{\alpha} )
\le
f( \vphantom{\sum} \| \bar{\bvec{v}}_{n}( \bvec{p} ) \|_{\alpha} )
\label{ineq:increasing}
\end{align}
and (ii) if $f( \cdot )$ is strictly decreasing, then
\begin{align}
f( \vphantom{\sum} \| \bar{\bvec{v}}_{n}( \bvec{p} ) \|_{\alpha} )
\le
f( \vphantom{\sum} \| \bvec{p} \|_{\alpha} )
\le
f( \vphantom{\sum} \| \bar{\bvec{w}}_{n}( \bvec{p} ) \|_{\alpha} )
\label{ineq:decreasing}
\end{align}
for any $n \ge 2$, any $\bvec{p} \in \mathcal{P}_{n}$, and any $\alpha \in (0, \infty)$.
\end{corollary}
\begin{IEEEproof}[Proof of Corollary \ref{cor:extremes}]
Since any strictly increasing function $f( \cdot )$ satisfies $f( x ) < f( y )$ for $x < y$, it is easy to see that \eqref{ineq:increasing} follows from \eqref{ineq:extremes} of Theorem \ref{th:extremes}.
Similarly, since any strictly decreasing function $f( \cdot )$ satisfies $f( x ) > f( y )$ for $x < y$, it is also easy to see that \eqref{ineq:decreasing} follows from \eqref{ineq:extremes} of Theorem \ref{th:extremes}.
\end{IEEEproof}
Therefore, we can obtain tight bounds of several information measures, which are determined by the $\ell_{\alpha}$-norm, with a fixed Shannon entropy.
As an example, we present the application of Corollary \ref{cor:extremes} to the R\'{e}nyi entropy as follows:
Let $f_{\alpha}( x ) = \frac{\alpha}{1-\alpha} \ln x$.
Then, we readily see that $H_{\alpha}( \bvec{p} ) = f_{\alpha}( \| \bvec{p} \|_{\alpha} )$.
It can be easily seen that $f_{\alpha}( x )$ is strictly increasing for $x > 0$ when $\alpha \in (0, 1)$ and strictly decreasing for $x > 0$ when $\alpha \in (1, \infty)$.
Hence, it follows from Corollary \ref{cor:extremes} that
\begin{align}
0 < \alpha < 1
& \, \Longrightarrow \,
H_{\alpha}( \bar{\bvec{w}}_{n}( \bvec{p} ) ) \le H_{\alpha}( \bvec{p} ) \le H_{\alpha}( \bar{\bvec{v}}_{n}( \bvec{p} ) ) ,
\label{eq:Renyi_bound1} \\
\alpha > 1
& \, \Longrightarrow \,
H_{\alpha}( \bar{\bvec{v}}_{n}( \bvec{p} ) ) \le H_{\alpha}( \bvec{p} ) \le H_{\alpha}( \bar{\bvec{w}}_{n}( \bvec{p} ) )
\label{eq:Renyi_bound2}
\end{align}
for any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$.
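The identity $H_{\alpha}( \bvec{p} ) = f_{\alpha}( \| \bvec{p} \|_{\alpha} )$ used above can also be checked numerically; the following short Python sketch (an illustration only, with an arbitrary example vector) compares the two expressions.
\begin{verbatim}
import numpy as np

def renyi_direct(p, a):            # (1/(1-a)) ln sum_i p_i^a
    return np.log(np.sum(p ** a)) / (1.0 - a)

def renyi_via_norm(p, a):          # f_a(||p||_a) with f_a(x) = a/(1-a) ln x
    return (a / (1.0 - a)) * np.log(np.sum(p ** a) ** (1.0 / a))

p = np.array([0.5, 0.25, 0.125, 0.125])
for a in (0.5, 2.0, 4.0):
    print(a, renyi_direct(p, a), renyi_via_norm(p, a))   # the two columns agree
\end{verbatim}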
Moreover, if $p \in [0, \frac{1}{n}]$ and $p^{\prime} \in [\frac{1}{n}, 1]$ are chosen to satisfy
$
H_{\alpha}( \bvec{p} )
=
H_{\alpha}( \bvec{v}_{n}( p ) )
=
H_{\alpha}( \bvec{w}_{n}( p^{\prime} ) )
$
for a fixed $\alpha \in (0, 1) \cup (1, \infty)$, then \eqref{ineq:H_0to1} and \eqref{ineq:H_1toinfty} hold for any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$ from Theorem \ref{th:extremes2}.
These bounds between the Shannon entropy and the R\'{e}nyi entropy determine the boundary of the region
$
\mathcal{R}_{n}^{\text{R\'{e}nyi}}( \alpha )
\triangleq
\{ (H( \bvec{p} ), H_{\alpha}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}
$
for any $n \ge 2$ and any $\alpha \in (0, 1) \cup (1, \infty)$.
We illustrate the boundaries of $\mathcal{R}_{n}^{\text{R\'{e}nyi}}( \alpha )$ in Fig. \ref{fig:Renyi}.
Similarly, we can apply Corollary \ref{cor:extremes} to several entropies as shown in Table \ref{table:extremes}, and we illustrate the corresponding exact feasible regions in Figs. \ref{fig:Renyi}--\ref{fig:R}.
\begin{table*}[t]
\centering
\caption{Applications of Corollary \ref{cor:extremes}}
\label{table:extremes}
\begin{tabular}{|c|l|c|c|}\hline
Entropies & function $f_{t}( \cdot )$ & monotonicity ($0 < t < 1$) & monotonicity ($t > 1$) \\ \hline
R\'{e}nyi entropy \cite{renyi} $H_{\alpha}( \bvec{p} ) = f_{\alpha}( \| \bvec{p} \|_{\alpha} )$ & $f_{t}( x ) = \frac{ t }{ 1 - t } \ln x$ & strictly increasing for $x > 0$ & strictly decreasing for $x > 0$ \\ \hline
Tsallis entropy \cite{tsallis2} $S_{q}( \bvec{p} ) = f_{q}( \| \bvec{p} \|_{q} )$ & $f_{t}( x ) = \frac{ 1 }{ 1 - t } (x^{t}-1)$ & strictly increasing for $x > 0$ & strictly decreasing for $x > 0$ \\ \hline
Entropy of type-$\beta$ \cite{havrda, daroczy} $H_{\beta}( \bvec{p} ) = f_{\beta}( \| \bvec{p} \|_{\beta} )$ & $f_{t}( x ) = \frac{ 1 }{ 2^{1-t} - 1 } (x^{t}-1)$ & strictly increasing for $x > 0$ & strictly decreasing for $x > 0$ \\ \hline
$\gamma$-entropy \cite{behara} $H_{\gamma}( \bvec{p} ) = f_{\gamma}( \| \bvec{p} \|_{1/\gamma} )$ & $f_{t}( x ) = \frac{ 1 }{ 1 - 2^{t-1} }( 1 - x )$ & strictly decreasing for $x > 0$ & strictly increasing for $x > 0$ \\ \hline
The $R$-norm information \cite{boekee} $H_{R}( \bvec{p} ) = f_{R}( \| \bvec{p} \|_{R} )$ & $f_{t}( x ) = \frac{ t }{ t - 1 } ( 1 - x )$ & strictly increasing for $x > 0$ & strictly decreasing for $x > 0$ \\ \hline
\end{tabular}
\end{table*}
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsRenyi_6ary_half.pdf}
\put(-5, 25){\rotatebox{90}{$H_{\alpha}( \bvec{p} )$}}
\put(0, 59){\scriptsize [nats]}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(30, 45){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(60, 32){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsRenyi_6ary_2.pdf}
\put(-5, 25){\rotatebox{90}{$H_{\alpha}( \bvec{p} )$}}
\put(0, 59){\scriptsize [nats]}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(75, 30){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(45, 37){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}
\caption{
Plots of the boundaries of $\mathcal{R}_{n}^{\text{R\'{e}nyi}}( \alpha )$ with $n = 6$.
If $0 < \alpha < 1$, then the upper- and lower-boundaries correspond to distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$, respectively.
If $\alpha > 1$, then these correspondences are reversed.}
\label{fig:Renyi}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[The case $q = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsTsallis_6ary_half.pdf}
\put(-5, 25){\rotatebox{90}{$S_{q}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(30, 38){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(60, 27){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $q = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsTsallis_6ary_2.pdf}
\put(-5, 25){\rotatebox{90}{$S_{q}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(65, 33){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(35, 43){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}
\caption{
Plots of the boundaries of $\{ (H( \bvec{p} ), S_{q}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$ with $n = 6$.
If $0 < q < 1$, then the upper- and lower-boundaries correspond to distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$, respectively.
If $q > 1$, then these correspondences are reversed.}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[The case $\beta = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsBeta_6ary_half.pdf}
\put(-5, 25){\rotatebox{90}{$H_{\beta}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(30, 38){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(60, 27){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\beta = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsBeta_6ary_2.pdf}
\put(-5, 25){\rotatebox{90}{$H_{\beta}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(65, 33){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(35, 43){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}
\caption{
Plots of the boundaries of $\{ (H( \bvec{p} ), H_{\beta}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$ with $n = 6$.
If $0 < \beta < 1$, then the upper- and lower-boundaries correspond to distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$, respectively.
If $\beta > 1$, then these correspondences are reversed.}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[The case $\gamma = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsGamma_6ary_half.pdf}
\put(-5, 25){\rotatebox{90}{$H_{\gamma}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(65, 26){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(35, 39){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\gamma = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsGamma_6ary_2.pdf}
\put(-5, 25){\rotatebox{90}{$H_{\gamma}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(40, 38){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(72, 27){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}
\caption{
Plots of the boundaries of $\{ (H( \bvec{p} ), H_{\gamma}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$ with $n = 6$.
If $0 < \gamma < 1$, then the upper- and lower-boundaries correspond to distributions $\bvec{w}_{n}( \cdot )$ and $\bvec{v}_{n}( \cdot )$, respectively.
If $\gamma > 1$, then these correspondences are reversed.}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[The case $R = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsR_6ary_half.pdf}
\put(-5, 25){\rotatebox{90}{$H_{R}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(30, 35){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(60, 22){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $R = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_HvsR_6ary_2.pdf}
\put(-5, 25){\rotatebox{90}{$H_{R}( \bvec{p} )$}}
\put(75, -2.5){$H( \bvec{p} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(75, 35){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(45, 44){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}
\caption{
Plots of the boundaries of $\{ (H( \bvec{p} ), H_{R}( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$ with $n = 6$.
If $0 < R < 1$, then the upper- and lower-boundaries correspond to distributions $\bvec{v}_{n}( \cdot )$ and $\bvec{w}_{n}( \cdot )$, respectively.
If $R > 1$, then these correspondences are reversed.}
\label{fig:R}
\end{figure}
\begin{remark}
Harremo\"{e}s and Tops{\o}e \cite{topsoe} showed that the exact region of $\Delta_{n} = \{ (H( \bvec{p} ), IC( \bvec{p} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$ for $n \ge 3$, where $IC( \bvec{p} ) \triangleq \| \bvec{p} \|_{2}^{2}$ denotes the index of coincidence.
Then, we can see that Corollary \ref{cor:extremes} contains its result by $f( x ) = x^{2}$.
\end{remark}
\subsection{Applications for uniformly focusing channels}
\label{subsect:focusing}
In this subsection, we consider applications of Corollary \ref{cor:extremes} to a particular class of discrete memoryless channels (DMCs), namely, uniformly focusing channels \cite{massey}.
Let the R\'{e}nyi divergence \cite{renyi} of order $\alpha \in (0, 1) \cup (1, \infty)$ be denoted by
\begin{align}
D_{\alpha}(\bvec{p} \; \| \; \bvec{q})
\triangleq
\frac{1}{\alpha - 1} \ln \sum_{i=1}^{n} p_{i}^{\alpha} q_{i}^{1-\alpha} ,
\end{align}
for $\bvec{p}, \bvec{q} \in \mathcal{P}_{n}$.
Since $\lim_{\alpha \to 1} D_{\alpha}(\bvec{p} \, \| \, \bvec{q}) = D(\bvec{p} \, \| \, \bvec{q})$ by L'H\^{o}pital's rule, we write $D_{1}(\bvec{p} \, \| \, \bvec{q}) \triangleq D(\bvec{p} \, \| \, \bvec{q})$, where
\begin{align}
D(\bvec{p} \, \| \, \bvec{q})
\triangleq
\sum_{i=1}^{n} p_{i} \ln \frac{p_{i}}{q_{i}}
\end{align}
denotes the relative entropy.
Since
\begin{align}
D_{\alpha}(\bvec{p} \; \| \; \bvec{u}_{n})
& =
\ln n - H_{\alpha}( \bvec{p} )
\label{eq:RenyiDiv_unif}
\end{align}
for $\alpha \in (0, \infty)$, we can obtain Corollary \ref{cor:RenyiDiv} from \eqref{eq:Renyi_bound1} and \eqref{eq:Renyi_bound2}.
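As a sanity check of \eqref{eq:RenyiDiv_unif}, the following snippet evaluates both sides numerically (an illustrative sketch only, with an arbitrary example vector).
\begin{verbatim}
import numpy as np

def renyi_div(p, q, a):            # (1/(a-1)) ln sum_i p_i^a q_i^(1-a)
    return np.log(np.sum(p ** a * q ** (1.0 - a))) / (a - 1.0)

n = 4
p = np.array([0.4, 0.3, 0.2, 0.1])
u = np.full(n, 1.0 / n)
for a in (0.5, 2.0):
    H_a = np.log(np.sum(p ** a)) / (1.0 - a)     # Renyi entropy of p
    print(renyi_div(p, u, a), np.log(n) - H_a)   # the two values coincide
\end{verbatim}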
\begin{corollary}
\label{cor:RenyiDiv}
If $0 < \alpha < 1$, then
\begin{align}
D_{\alpha}(\bar{\bvec{v}}_{n}( \bvec{p} ) \; \| \; \bvec{u}_{n})
\le
D_{\alpha}(\bvec{p} \; \| \; \bvec{u}_{n})
\le
D_{\alpha}(\bar{\bvec{w}}_{n}( \bvec{p} ) \; \| \; \bvec{u}_{n})
\end{align}
for any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$.
Moreover, if $\alpha > 1$, then
\begin{align}
D_{\alpha}(\bar{\bvec{w}}_{n}( \bvec{p} ) \; \| \; \bvec{u}_{n})
\le
D_{\alpha}(\bvec{p} \; \| \; \bvec{u}_{n})
\le
D_{\alpha}(\bar{\bvec{v}}_{n}( \bvec{p} ) \; \| \; \bvec{u}_{n})
\end{align}
for any $n \ge 2$ and any $\bvec{p} \in \mathcal{P}_{n}$.
\end{corollary}
Since $D(\bvec{p} \, \| \, \bvec{u}_{n}) = \ln n - H( \bvec{p} )$, we note that Corollary \ref{cor:RenyiDiv} gives tight bounds on the R\'{e}nyi divergence from the uniform distribution when the relative entropy from the uniform distribution is fixed.
Namely, Corollary \ref{cor:RenyiDiv} determines the boundary of
\begin{align}
\{ (D( \bvec{p} \, \| \, \bvec{u}_{n} ), D_{\alpha}( \bvec{p} \, \| \, \bvec{u}_{n} )) \mid \bvec{p} \in \mathcal{P}_{n} \}
\end{align}
for any $n \ge 2$ and any $\alpha \in (0, 1) \cup (1, \infty)$.
We illustrate the boundaries of this region in Fig. \ref{fig:RenyiDiv}.
\begin{figure}[!t]
\centering
\subfloat[The case $\alpha = \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_Div_6ary_half.pdf}
\put(-5, 25){\rotatebox{90}{$D_{\alpha}( \bvec{p} \, \| \, \bvec{u}_{n} )$}}
\put(0, 59){\scriptsize [nats]}
\put(75, -2.5){$D( \bvec{p} \, \| \, \bvec{u}_{n} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(75, 24){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(40, 33){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\alpha = 2$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_Div_6ary_2.pdf}
\put(-5, 25){\rotatebox{90}{$D_{\alpha}( \bvec{p} \, \| \, \bvec{u}_{n} )$}}
\put(0, 59){\scriptsize [nats]}
\put(75, -2.5){$D( \bvec{p} \, \| \, \bvec{u}_{n} )$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(37, 46){\color{burgundy} $\bvec{v}_{n}( \cdot )$}
\put(56, 32){\color{navyblue} $\bvec{w}_{n}( \cdot )$}
\end{overpic}
}
\caption{
Plots of the boundaries of $\{ (D( \bvec{p} \, \| \, \bvec{u}_{n} ), D_{\alpha}( \bvec{p} \, \| \, \bvec{u}_{n} )) \mid \bvec{p} \in \mathcal{P}_{n} \}$ with $n = 6$.
If $0 < \alpha < 1$, then the upper- and lower-boundaries correspond to distributions $\bvec{w}_{n}( \cdot )$ and $\bvec{v}_{n}( \cdot )$, respectively.
If $\alpha > 1$, then these correspondences are reversed.}
\label{fig:RenyiDiv}
\end{figure}
We now define DMCs as follows:
Let the discrete random variables $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ denote the input and output of a DMC, respectively, where $\mathcal{X}$ and $\mathcal{Y}$ denote the finite input and output alphabets, respectively.
Let
$P_{Y|X}(y \mid x)$ denote the transition probability of a DMC $(X, Y)$ for $(x, y) \in \mathcal{X} \times \mathcal{Y}$.
Then, we define the following three classes of DMCs.
\begin{definition}
\label{def:dispersive}
A channel $(X, Y)$ is said to be \emph{uniformly dispersive \cite{massey}} or \emph{uniform from the input \cite{fano2}} if there exists a permutation $\pi_{x} : \mathcal{Y} \to \mathcal{Y}$ for each $x \in \mathcal{X}$ such that $P_{Y|X}(\pi_{x}( y ) \mid x) = P_{Y|X}(\pi_{x^{\prime}}( y ) \mid x^{\prime})$ for all $(x, x^{\prime}, y) \in \mathcal{X}^{2} \times \mathcal{Y}$.
\end{definition}
\begin{definition}
\label{def:focusing}
A channel $(X, Y)$ is said to be \emph{uniformly focusing \cite{massey}} or \emph{uniform from the output \cite{fano2}} if there exists a permutation $\pi_{y} : \mathcal{X} \to \mathcal{X}$ for each $y \in \mathcal{Y}$ such that $P_{Y|X}(y \mid \pi_{y}( x )) = P_{Y|X}(y^{\prime} \mid \pi_{y^{\prime}}( x ))$ for all $(x, y, y^{\prime}) \in \mathcal{X} \times \mathcal{Y}^{2}$.
\end{definition}
\begin{definition}
\label{def:strongly}
A channel is said to be \emph{strongly symmetric \cite{massey}} or \emph{doubly uniform \cite{fano2}} if it is both uniformly dispersive and uniformly focusing.
\end{definition}
For a uniformly dispersive channel $(X, Y)$, it is known that
\begin{align}
H(Y \mid X)
=
H(Y \mid X = x)
\end{align}
for any $x \in \mathcal{X}$ (see \cite[Eq. (5.18)]{fano2} or \cite[Lemma 4.1]{massey}),
where the conditional Shannon entropy \cite{shannon} of $(X, Y) \sim P_{X|Y} P_{Y}$ is defined by
\begin{align}
H(X \mid Y)
\triangleq
\mathbb{E}[ H( P_{X|Y}( \cdot \mid Y ) ) ]
\end{align}
and $\mathbb{E}[ \cdot ]$ denotes the expected value of the random variable.
Moreover, let the conditional R\'{e}nyi entropy \cite{arimoto} of order $\alpha \in (0, 1) \cup (1, \infty)$ be denoted by
\begin{align}
H_{\alpha}( X \mid Y )
\triangleq
\frac{ \alpha }{ 1 - \alpha } \ln \mathbb{E}[ \| P_{X|Y}( \cdot \mid Y ) \|_{\alpha} ]
\end{align}
for $(X, Y) \sim P_{X|Y} P_{Y}$.
By convention, we write $H_{1}(X \mid Y) \triangleq H(X \mid Y)$.
As with uniformly dispersive channels, we can provide the following lemma for uniformly focusing channels.
\begin{lemma}
\label{lem:focusing}
If a channel $(X, Y)$ is uniformly focusing and the input $X$ follows a uniform distribution, then
\begin{align}
H_{\alpha}(X \mid Y)
=
H_{\alpha}(X \mid Y = y)
\end{align}
for any $y \in \mathcal{Y}$ and any $\alpha \in (0, \infty)$.
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lem:focusing}]
Consider a uniformly focusing channel $(X, Y)$.
Assume that the input $X$ follows a uniform distribution, i.e., $P_{X}( x ) = \frac{1}{|\mathcal{X}|}$ for all $x \in \mathcal{X}$.
Note from \cite[p. 127]{fano2} or \cite[Vol. I, Lemma 4.2]{massey} that, if the input $X$ follows a uniform distribution, then the output $Y$ also follows a uniform distribution, i.e., $P_{Y}( y ) = \frac{1}{|\mathcal{Y}|}$ for all $y \in \mathcal{Y}$.
Then, since the a posteriori probability of $(X, Y)$ is written as
\begin{align}
P_{X|Y}(x \mid y)
=
\frac{ P_{X}( x ) P_{Y|X}(y \mid x) }{ P_{Y}( y ) }
\end{align}
for $(x, y) \in \mathcal{X} \times \mathcal{Y}$ by Bayes' rule and the fraction $\frac{ P_{X}( x ) }{ P_{Y}( y ) }$ is constant for $(x, y) \in \mathcal{X} \times \mathcal{Y}$, it follows from Definition \ref{def:focusing} that there exists a permutation $\pi_{y} : \mathcal{X} \to \mathcal{X}$ for each $y \in \mathcal{Y}$ such that
\begin{align}
P_{X|Y}(\pi_{y}( x ) \mid y)
=
P_{X|Y}(\pi_{y^{\prime}}( x ) \mid y^{\prime})
\label{eq:a_posteriori_pi}
\end{align}
for all $(x, y, y^{\prime}) \in \mathcal{X} \times \mathcal{Y}^{2}$.
Hence, we get
\begin{align}
H(X \mid Y)
& =
\sum_{y \in \mathcal{Y}} P_{Y}( y ) H(X \mid Y = y)
\\
& =
\sum_{y \in \mathcal{Y}} P_{Y}( y ) \left( - \sum_{x \in \mathcal{X}} P_{X|Y}(x \mid y) \ln P_{X|Y}(x \mid y) \right)
\\
& =
\sum_{y \in \mathcal{Y}} P_{Y}( y ) \left( - \sum_{x \in \mathcal{X}} P_{X|Y}(\pi_{y}( x ) \mid y) \ln P_{X|Y}(\pi_{y}( x ) \mid y) \right)
\\
& \overset{\eqref{eq:a_posteriori_pi}}{=}
\left( \sum_{y \in \mathcal{Y}} P_{Y}( y ) \right) \left( - \sum_{x \in \mathcal{X}} P_{X|Y}(\pi_{y^{\prime}}( x ) \mid y^{\prime}) \ln P_{X|Y}(\pi_{y^{\prime}}( x ) \mid y^{\prime}) \right)
\\
& =
- \sum_{x \in \mathcal{X}} P_{X|Y}(\pi_{y^{\prime}}( x ) \mid y^{\prime}) \ln P_{X|Y}(\pi_{y^{\prime}}( x ) \mid y^{\prime})
\\
& =
- \sum_{x \in \mathcal{X}} P_{X|Y}(x \mid y^{\prime}) \ln P_{X|Y}(x \mid y^{\prime})
\\
& =
H(X \mid Y = y^{\prime} )
\label{eq:cond_H_focusing}
\end{align}
for any $y^{\prime} \in \mathcal{Y}$.
Similarly, we also get
\begin{align}
\mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ]
& =
\sum_{y \in \mathcal{Y}} P_{Y}( y ) \| P_{X|Y}(\cdot \mid y) \|_{\alpha}
\\
& =
\sum_{y \in \mathcal{Y}} P_{Y}( y ) \left( \sum_{x \in \mathcal{X}} P_{X|Y}(x \mid y)^{\alpha} \right)^{\frac{1}{\alpha}}
\\
& =
\sum_{y \in \mathcal{Y}} P_{Y}( y ) \left( \sum_{x \in \mathcal{X}} P_{X|Y}(\pi_{y}( x ) \mid y)^{\alpha} \right)^{\frac{1}{\alpha}}
\\
& \overset{\eqref{eq:a_posteriori_pi}}{=}
\left( \sum_{y \in \mathcal{Y}} P_{Y}( y ) \right) \left( \sum_{x \in \mathcal{X}} P_{X|Y}(\pi_{y^{\prime}}( x ) \mid y^{\prime})^{\alpha} \right)^{\frac{1}{\alpha}}
\\
& =
\left( \sum_{x \in \mathcal{X}} P_{X|Y}(\pi_{y^{\prime}}( x ) \mid y^{\prime})^{\alpha} \right)^{\frac{1}{\alpha}}
\\
& =
\left( \sum_{x \in \mathcal{X}} P_{X|Y}(x \mid y^{\prime})^{\alpha} \right)^{\frac{1}{\alpha}}
\\
& =
\| P_{X|Y}(\cdot \mid y^{\prime}) \|_{\alpha}
\label{eq:cond_N_focusing}
\end{align}
for any $y^{\prime} \in \mathcal{Y}$ and any $\alpha \in (0, \infty)$.
Since $H_{\alpha}(X \mid Y) \triangleq \frac{ \alpha }{ 1 - \alpha } \ln \mathbb{E}[ \| P_{X|Y}(\cdot \mid Y) \|_{\alpha} ]$ for $\alpha \in (0, 1) \cup (1, \infty)$ and $H_{1}(X \mid Y) = H(X \mid Y)$, Eqs. \eqref{eq:cond_H_focusing} and \eqref{eq:cond_N_focusing} imply Lemma \ref{lem:focusing}.
\end{IEEEproof}
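To illustrate Lemma \ref{lem:focusing}, the following Python sketch builds a small strongly symmetric channel (a cyclic-shift channel, which is in particular uniformly focusing), feeds it a uniform input, and compares $H_{\alpha}(X \mid Y)$ with $H_{\alpha}(X \mid Y = y)$ for each output $y$. The particular channel is only an illustrative placeholder; any uniformly focusing channel would do.
\begin{verbatim}
import numpy as np

row = np.array([0.7, 0.2, 0.1])                          # one row of the transition matrix
P_YgX = np.array([np.roll(row, k) for k in range(3)])    # cyclic shifts: strongly symmetric
P_X = np.full(3, 1.0 / 3.0)                              # uniform input
P_XY = P_X[:, None] * P_YgX                              # joint distribution P_{X,Y}(x, y)
P_Y = P_XY.sum(axis=0)

def renyi(q, a):                                         # Renyi entropy of a distribution q
    return np.log(np.sum(q ** a)) / (1.0 - a)

a = 0.5
# Arimoto conditional Renyi entropy: a/(1-a) * ln E_Y ||P_{X|Y}(.|Y)||_a
norms = [np.sum((P_XY[:, y] / P_Y[y]) ** a) ** (1.0 / a) for y in range(3)]
H_cond = (a / (1.0 - a)) * np.log(np.dot(P_Y, norms))
H_each = [renyi(P_XY[:, y] / P_Y[y], a) for y in range(3)]
print(H_cond, H_each)      # all values coincide, as stated in Lemma lem:focusing
\end{verbatim}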
Therefore, it follows from Lemma \ref{lem:focusing} that the results of Corollary \ref{cor:extremes} can be applied to uniformly focusing channels $(X, Y)$ if the input $X$ follows a uniform distribution, as with \eqref{eq:Renyi_bound1} and \eqref{eq:Renyi_bound2}.
For a channel $(X, Y)$, let the mutual information of order $\alpha \in (0, \infty)$ \cite{arimoto} between $X$ and $Y$ be denoted by
\begin{align}
I_{\alpha}(X; Y)
\triangleq
H_{\alpha}(X) - H_{\alpha}(X \mid Y)
\end{align}
for $\alpha \in (0, \infty)$.
Note that $I_{1}(X; Y) \triangleq I(X; Y)$ denotes the (ordinary) mutual information between $X$ and $Y$.
In this paragraph, we assume that a channel $(X, Y)$ is uniformly focusing and the input $X$ follows a uniform distribution.
Since $H_{\alpha}( \bvec{u}_{n} ) = \ln n$ for $\alpha \in (0, \infty)$, it follows from Lemma \ref{lem:focusing} that
\begin{align}
I_{\alpha}(X; Y)
& =
\ln |\mathcal{X}| - H_{\alpha}(X \mid Y = y)
\\
& \overset{\eqref{eq:RenyiDiv_unif}}{=}
D_{\alpha}(P_{X|Y}(\cdot \mid y) \ \| \ \bvec{u}_{|\mathcal{X}|})
\label{eq:Ialpha_focusing}
\end{align}
for any $y \in \mathcal{Y}$ and any $\alpha \in (0, \infty)$, where $| \cdot |$ denotes the cardinality of the finite set.
Therefore, it follows that the tight bounds of $I_{\alpha}(X; Y)$ with a fixed $I(X; Y)$ are equivalent to the bounds of Corollary \ref{cor:RenyiDiv} under the hypotheses.
Furthermore, we consider Gallager's $E_{0}$ function \cite{gallager} of a channel $(X, Y)$, defined by
\begin{align}
E_{0}(\rho, X, Y)
& =
E_{0}(\rho, P_{X}, P_{Y|X})
\\
& \triangleq
- \ln \sum_{y \in \mathcal{Y}} \! \left( \sum_{x \in \mathcal{X}} P_{X}( x ) P_{Y|X}(y \mid x)^{\frac{1}{1+\rho}} \! \right)^{\!\!1+\rho}
\end{align}
for $\rho \in (-1, \infty)$.
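For reference, a direct numerical evaluation of the $E_{0}$ function as defined above might look as follows; the channel and input below are arbitrary placeholders chosen only for the example.
\begin{verbatim}
import numpy as np

def gallager_E0(rho, P_X, P_YgX):
    # E_0(rho) = -ln sum_y ( sum_x P_X(x) P_{Y|X}(y|x)^{1/(1+rho)} )^{1+rho}
    inner = (P_X[:, None] * P_YgX ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log(np.sum(inner ** (1.0 + rho)))

P_X = np.full(3, 1.0 / 3.0)                 # placeholder uniform input
P_YgX = np.array([[0.7, 0.2, 0.1],          # placeholder transition matrix,
                  [0.1, 0.7, 0.2],          # P_YgX[x, y] = P_{Y|X}(y | x)
                  [0.2, 0.1, 0.7]])
for rho in (-0.5, 0.5, 1.0):
    print(rho, gallager_E0(rho, P_X, P_YgX))
\end{verbatim}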
Then, we can obtain the following theorem.
\begin{theorem}
\label{th:E0_focusing}
For a uniformly focusing channel $(X, Y)$, let
\begin{align}
E_{0}^{(\sbvec{v}_{n})}(\rho, X, Y)
& \triangleq
\rho \, D_{\frac{1}{1+\rho}}( \hat{\bvec{v}}_{n}(X \mid Y) \ \| \ \bvec{u}_{n} ) ,
\label{def:Div_vn} \\
E_{0}^{(\sbvec{w}_{n})}(\rho, X, Y)
& \triangleq
\rho \, D_{\frac{1}{1+\rho}}( \hat{\bvec{w}}_{n}(X \mid Y) \ \| \ \bvec{u}_{n} ) ,
\label{def:Div_wn}
\end{align}
where $\hat{\bvec{v}}_{n}(X \mid Y) \triangleq \bvec{v}_{n}( H_{\sbvec{v}_{n}}^{-1}( H(X \mid Y) ) )$, $\hat{\bvec{w}}_{n}(X \mid Y) \triangleq \bvec{w}_{n}( H_{\sbvec{w}_{n}}^{-1}( H(X \mid Y) ) )$, and $n = | \mathcal{X} |$.
If the input $X$ follows a uniform distribution, then we observe that
\begin{align}
E_{0}^{(\sbvec{v}_{n})}(\rho, X, Y)
\le
E_{0}(\rho, X, Y)
\le
E_{0}^{(\sbvec{w}_{n})}(\rho, X, Y)
\label{eq:E0_focusing}
\end{align}
for any $\rho \in (-1, \infty)$.
\end{theorem}
\begin{IEEEproof}[Proof of Theorem \ref{th:E0_focusing}]
We can see from \cite[Eq. (16)]{arimoto} that
\begin{align}
\frac{ E_{0}(\rho, P_{X^{\alpha}}, P_{Y|X}) }{ \rho }
=
I_{\frac{1}{1+\rho}}(X; Y) ,
\label{eq:E0_Ialpha}
\end{align}
where
\begin{align}
P_{X^{\alpha}}( x )
\triangleq
\frac{ P_{X}( x )^{\alpha} }{ \sum_{x^{\prime} \in \mathcal{X}} P_{X}( x^{\prime} )^{\alpha} }
\end{align}
denotes the tilted distribution.
We can see from \eqref{eq:E0_Ialpha} that the $E_{0}$ function is closely related to the mutual information of order $\alpha$.
Note that, if the distribution $P_{X}$ is a uniform distribution, then its tilted distribution $P_{X^{\alpha}}$ is also a uniform distribution for any $\alpha \in (0, \infty)$.
Thus, if a channel $(X, Y)$ is uniformly focusing and the input $X$ follows a uniform distribution, then it follows from \eqref{eq:Ialpha_focusing} and \eqref{eq:E0_Ialpha} that
\begin{align}
E_{0}(\rho, X, Y)
=
\rho \, D_{\frac{1}{1+\rho}}( P_{X|Y}(\cdot \mid y) \ \| \ \bvec{u}_{|\mathcal{X}|} )
\end{align}
for any $\rho \in (-1, \infty)$ and any $y \in \mathcal{Y}$.
Hence, noting the relations
\begin{align}
-1 < \rho < 0
& \iff
1 < \alpha < \infty ,
\\
0 < \rho < \infty
& \iff
0 < \alpha < 1 ,
\end{align}
the $E_{0}$ function can also be evaluated as with Corollary \ref{cor:RenyiDiv}.
\end{IEEEproof}
Note that the distributions $\hat{\bvec{v}}_{n}(X \mid Y)$ and $\hat{\bvec{w}}_{n}(X \mid Y)$ denote $\bvec{v}_{n}( p )$ and $\bvec{w}_{n}( q )$, respectively, such that $H_{\sbvec{v}_{n}}( p ) = H_{\sbvec{w}_{n}}( q ) = H(X \mid Y)$ for a given channel $(X, Y)$.
Since $I(X; Y) = \ln |\mathcal{X}| - H(X \mid Y)$ under a uniform input distribution, Theorem \ref{th:E0_focusing} shows bounds of the $E_{0}$ function with a fixed mutual information.
Note that, since \eqref{def:Div_vn} and \eqref{def:Div_wn} are defined via $\bvec{v}_{n}(\cdot)$ and $\bvec{w}_{n}(\cdot)$, respectively, there exist two strongly symmetric channels that attain the respective equalities of the bounds \eqref{eq:E0_focusing}.
Namely, Theorem \ref{th:E0_focusing} provides tight bounds \eqref{eq:E0_focusing}.
We illustrate graphical representations of Theorem \ref{th:E0_focusing} in Fig. \ref{fig:E0_focusing}, as with Figs. \ref{fig:region_P6_half} and \ref{fig:Renyi}.
Theorem \ref{th:E0_focusing} is a generalization of \cite[Theorem 2]{isit2015} from ternary-input strongly symmetric channels to $n$-ary input uniformly focusing channels under a uniform input distribution.
Finally, we consider the hypothesis of a uniform input distribution.
If a channel $(X, Y)$ is symmetric%
\footnote{Symmetric channels are defined in \cite[p. 94]{gallager}.},
then the mutual information of order $\alpha$ is maximized by a uniform input distribution%
\footnote{This fact can be verified by using, e.g., \cite[Theorem 7.2]{jelinek}.}
for $\alpha \in (0, \infty)$.
Therefore, since a strongly symmetric channel is symmetric, the uniform input distribution is the optimal input if the channel $(X, Y)$ is strongly symmetric.
\begin{figure}[!t]
\centering
\subfloat[The case $\rho = - \frac{1}{2}$.]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_E0_focusing_6ary_minus-half.pdf}
\put(-4, 20){\rotatebox{90}{$E_{0}(\rho, X, Y)$}}
\put(1, 0){\scriptsize [nats]}
\put(75, 48){$I(X; Y)$}
\put(95.5, 54){\scriptsize [nats]}
\put(30, 10){\color{burgundy} $E_{0}^{(\sbvec{v}_{n})}(\rho, X, Y)$}
\put(55, 32){\color{navyblue} $E_{0}^{(\sbvec{w}_{n})}(\rho, X, Y)$}
\end{overpic}
}\hspace{0.05\hsize}
\subfloat[The case $\rho = 1$ (cutoff rate).]{
\begin{overpic}[width = 0.45\hsize, clip]{graph_E0_focusing_6ary_1.pdf}
\put(-5, 20){\rotatebox{90}{$E_{0}(\rho, X, Y)$}}
\put(0, 59){\scriptsize [nats]}
\put(75, -2.5){$I(X; Y)$}
\put(95.5, 1.5){\scriptsize [nats]}
\put(63, 17){\color{burgundy} $E_{0}^{(\sbvec{v}_{n})}(\rho, X, Y)$}
\put(28, 36){\color{navyblue} $E_{0}^{(\sbvec{w}_{n})}(\rho, X, Y)$}
\end{overpic}
}
\caption{
Plots of the bounds between $I(X; Y)$ and $E_{0}(\rho, X, Y)$ for all uniformly focusing channels $(X, Y)$ with $|\mathcal{X}| = 6$ and a uniform input distribution $P_{X}$.
The upper and lower bounds of $E_{0}(\rho, X, Y)$ with a fixed $I(X; Y)$ correspond to $E_{0}^{(\sbvec{w}_{n})}(\rho, X, Y)$ and $E_{0}^{(\sbvec{v}_{n})}(\rho, X, Y)$, respectively.}
\label{fig:E0_focusing}
\end{figure}
\section{Conclusion}
\label{sect:conclusion}
In this study, we established the tight bounds of the $\ell_{\alpha}$-norm with a fixed Shannon entropy in Theorem \ref{th:extremes}, and vice versa in Theorem \ref{th:extremes2}.
Previously, the tight bounds of the Shannon entropy with a fixed error probability were derived \cite{fano, kovalevsky, tebbe, feder, verdu, ben-bassat}.
Since the error probability is closely related to the $\ell_{\infty}$-norm, this study is a generalization of previous studies \cite{fano, kovalevsky, tebbe, feder, verdu, ben-bassat}.
Note that the set of all $n$-ary probability vectors, sorted in decreasing order, with a fixed $\ell_{\infty}$-norm is a convex set.
The previous works \cite{fano, kovalevsky, tebbe, feder, verdu, ben-bassat} used the concavity of the Shannon entropy in probability vectors to examine the Shannon entropy with a fixed $\ell_{\infty}$-norm.
However, since
$\| \bvec{p} \|_{\alpha}$ is strictly concave in $\bvec{p} \in \mathcal{P}_{n}$ when $\alpha \in (0, 1)$ and is strictly convex in $\bvec{p} \in \mathcal{P}_{n}$ when $\alpha \in (1, \infty)$, the concavity of the Shannon entropy in probability vectors turns out to be hard to use when the $\ell_{\alpha}$-norm is fixed.
In this study, we derived Theorems \ref{th:extremes} and \ref{th:extremes2} by using elementary calculus without using the concavity of the Shannon entropy.
\if0
As application, we extend the bounds of Theorem \ref{th:extremes} from the $\ell_{\alpha}$-norm to several information measures, which are determined by the $\ell_{\alpha}$-norm, in Corollary \ref{cor:extremes}.
As instances, we showed some applications of Corollary \ref{cor:extremes} in Table \ref{table:extremes};
in particular, we illustrated the boundary of $\mathcal{R}_{n}^{\text{R\'{e}nyi}}( \alpha )$ in Fig. \ref{fig:Renyi}.
In addition, we can apply Corollary \ref{cor:extremes} to several diversity indices, such as the index of coincidence.
Moreover, we presented further applications of Corollary \ref{cor:extremes} to uniformly focusing channels, defined in Definition \ref{def:focusing}, in Section \ref{subsect:focusing}.
\fi
\section*{Acknowledgment}
This study was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research (C) 26420352.
| {
"timestamp": "2016-01-29T02:05:58",
"yymm": "1601",
"arxiv_id": "1601.07678",
"language": "en",
"url": "https://arxiv.org/abs/1601.07678",
"abstract": "The paper examines relationships between the Shannon entropy and the $\\ell_{\\alpha}$-norm for $n$-ary probability vectors, $n \\ge 2$. More precisely, we investigate the tight bounds of the $\\ell_{\\alpha}$-norm with a fixed Shannon entropy, and vice versa. As applications of the results, we derive the tight bounds between the Shannon entropy and several information measures which are determined by the $\\ell_{\\alpha}$-norm, e.g., Rényi entropy, Tsallis entropy, the $R$-norm information, and some diversity indices. Moreover, we apply these results to uniformly focusing channels. Then, we show the tight bounds of Gallager's $E_{0}$ functions with a fixed mutual information under a uniform input distribution.",
"subjects": "Information Theory (cs.IT)",
"title": "Extremal Relations Between Shannon Entropy and $\\ell_α$-Norm",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969636371976,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7099044380141396
} |
https://arxiv.org/abs/1509.04632 | The Shape of Data and Probability Measures | We introduce the notion of multiscale covariance tensor fields (CTF) associated with Euclidean random variables as a gateway to the shape of their distributions. Multiscale CTFs quantify variation of the data about every point in the data landscape at all spatial scales, unlike the usual covariance tensor that only quantifies global variation about the mean. Empirical forms of localized covariance previously have been used in data analysis and visualization, but we develop a framework for the systematic treatment of theoretical questions and computational models based on localized covariance. We prove strong stability theorems with respect to the Wasserstein distance between probability measures, obtain consistency results, as well as estimates for the rate of convergence of empirical CTFs. These results ensure that CTFs are robust to sampling, noise and outliers. We provide numerous illustrations of how CTFs let us extract shape from data and also apply CTFs to manifold clustering, the problem of categorizing data points according to their noisy membership in a collection of possibly intersecting, smooth submanifolds of Euclidean space. We prove that the proposed manifold clustering method is stable and carry out several experiments to validate the method. | \section{Introduction}
\label{S:intro}
Probing, analyzing and visualizing the shape of complex data are
challenges that are magnified by the intricate dependence of their
structural properties, as basic as dimensionality, on location
and scale (cf.\,\cite{ljm09}). As such, resolving and integrating the
geometry and topology
of data across scales are problems of foremost importance. In this
paper, we develop the notion of multiscale covariance tensor
fields (CTF) associated with Euclidean random variables and
show that many properties of the shape of their distributions
become accessible through CTFs, which provide
stable representations that can be estimated reliably from data.
For a random vector $y \in \ensuremath{\mathbb{R}}^d$, scale dependence is
controlled by a kernel function $K(x,y,\sigma) \geqslant 0$,
where $x,y \in \ensuremath{\mathbb{R}}^d$ and $\sigma > 0$ is the scale parameter.
The idea is that from the standpoint of $x$, at scale $\sigma > 0$,
the kernel masks the distribution by attributing weight
$K(x,y,\sigma)$ to data located at $y$, creating a windowing
effect. More simply put, $K (x,y,\sigma)$ quantifies how well an
observer at $x$ sees data at $y$ at scale $\sigma$. Covariation
of the weighted data is measured relative to every point
$x \in \ensuremath{\mathbb{R}}^d$, not just about the mean as is common practice,
thus giving rise to a multiscale covariance field. Special cases of these
covariance fields were introduced in \cite{mmm13}, targeting
applications to such problems as detection of local scales
and feature rich points in shapes. Here we present a more systematic
treatment that includes a broader formulation of multiscale
CTFs, stability theorems that ensure that properties of probability
measures derived from multiscale CTFs are robust, as well
as consistency results and convergence rates for empirical
CTFs. We prove stability of CTFs with respect to the Wasserstein
distance between probability measures, a metric that is finding
uses in an ever expanding landscape of problems and whose
origins are in optimal transport theory \cite{villani03,villani09}.
Since Wasserstein distance metrizes weak convergence of
probability measures, we obtain a strong stability result that
ensures that if two probability distributions are similar
in a weak sense, then their multiscale CTFs are uniformly
close over the entire domain. Convergence rates are derived
from the stability theorems and results by Fournier and Guillin
\cite{fournier14} and Garc\'{i}a-Trillos and Slep\v{c}ev \cite{gts15} on
convergence of empirical measures. The standard covariance
tensor of a random vector $y \in \ensuremath{\mathbb{R}}^d$ quantifies covariation
of $y$ about the mean, but may be extended to a full covariance
field by considering covariation about arbitrary points.
Nonetheless, this field provides no information about the
organization of the data other than that
already contained in the covariance about the mean.
Thus, a localized formulation is essential for gaining additional
insight into the shape of data.
The trace of a multiscale CTF is a scalar field that gives a
multiscale analogue of the classical Fr\'{e}chet
function $V(x) = \expect{\|y-x\|^2}$ of a random variable $y$
with finite second moment. The Fr\'{e}chet function provides a
more geometric interpretation of the mean as the unique
minimizer of $V$; that is, the point $\mu \in \ensuremath{\mathbb{R}}^d$ with respect
to which the spread of $y$ is minimal. Similarly, the local extrema
and other properties of the multiscale Fr\'{e}chet function
provide a wealth of information about the distribution of $y$.
In fact, we show that the distribution of any random vector
may be fully recovered from the multiscale Fr\'{e}chet
function associated with the Gaussian kernel.
Several variants of empirical localized or weighted covariance
previously have been used in data analysis, but we develop a framework
for the formulation and systematic treatment of such problems.
Allard et al. have developed a computational model termed geometric
multi-resolution analysis for multiscale data analysis based on covariance
localized to hierarchies of dyadic cubes \cite{allardetal12}. In computer
graphics, local principal component analysis (PCA) is commonly
used in the estimation of normals to surfaces
from point-cloud data \cite{berk-caelli} for surface reconstruction;
see also \cite{rusu09} and references therein. In
computer vision, tensor voting by Medioni et al. \cite{medioni00} has been
applied to multiple image analysis and processing tasks.
Brox et al. have used empirical covariance
weighted by the isotropic Gaussian kernel in non-parametric
density estimation targeting applications in motion tracking
\cite{brcs07}. In the literature dealing with clustering, especially clustering of multiple possibly intersecting manifolds, local PCA ideas have been used in the works of Kushnir et al. \cite{kushnir2006fast}, Goldberg et al. \cite{goldberg2009multi}, Gong et al. \cite{gong2012robust}, Wang et al. \cite{wang2011spectral}, and in a series of papers by Arias-Castro and collaborators \cite{arias2011clustering,arias2011spectral,arias2013spectral}.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{ccc}
\begin{tabular}{c}
\includegraphics[width=0.22\linewidth]{2planes2lines}
\end{tabular}
\quad & \quad
\begin{tabular}{c}
\includegraphics[width=0.2\linewidth]{lines_no_box}
\end{tabular}
\quad & \quad
\begin{tabular}{c}
\includegraphics[width=0.22\linewidth]{2circles2}
\end{tabular} \\
(a) \quad & \quad (b) \quad & \quad (c)
\end{tabular}
\end{center}
\caption{Examples of data clustered along intersecting
manifolds.}
\label{F:clusters}
\end{figure}
The special case of affine linear subspaces, known as subspace
clustering, has been addressed in the machine learning and
computer vision literature by many authors using a variety of
techniques (cf.\,\cite{vidal05, govindu05, lerman09, lerman11,
lerman12, liu13, elhamifar13,soltanolkotabi2014robust}). More general manifold clustering has
been considered in \cite{polito2001grouping,gionis2005dimension,pless05,kushnir2006fast,haro2006stratification,goldberg2009multi,gong2012robust,wang2011spectral,arias2013spectral,wang2014riemannian}. In our approach, we exploit the fact that
localized covariance tensors encode rich information about the
tangential structure of the submanifolds that underlie the data.
Combined with information about the (relative) positions
of the data points, they yield an effective data representation for
manifold clustering. Although several different clustering techniques
could be applied to the ``tensorized'' data, we use the single linkage
hierarchical method because it produces provably stable
dendrograms. In conjunction with the stability and consistency
results for covariance fields, this ensures that the manifold clustering
method is stable at all steps. Dendrogram stability is analyzed
in the framework of \cite{memoli10}.
\paragraph{Contributions and Organization of the paper}
The paper includes several
illustrations and applications of
CTFs to data analysis. For example, to illustrate how geometric
information can be extracted from CTFs, we show that the
curvature of plane curves and the principal curvatures of
surfaces in $\ensuremath{\mathbb{R}}^3$
can be calculated from the spectrum of multiscale CTFs.
Thus, multiscale covariance tensors give a way of extending
these infinitesimal measures of geometric complexity to
all scales and general probability distributions, not just
those supported on smooth submanifolds. We also apply
multiscale CTFs to manifold clustering, the problem of
clustering Euclidean data that are organized along a finite
union of possibly intersecting smooth
submanifolds. Fig.\,\ref{F:clusters} shows three such
examples.
The main goals of the paper are: (i) to establish the foundations
for analysis, visualization and management of data with
methods based on multiscale covariance tensor fields,
and (ii) to describe applications that characterize the
usefulness of CTFs in data analysis. In Section \ref{S:covariance},
we formulate the notion of multiscale CTFs for a broad class of
kernels and give examples that illustrate how CTFs reveal
the geometry of data. In Section \ref{S:geometry}, we show that
the curvature of a plane curve and the principal curvatures
of a surface in $\ensuremath{\mathbb{R}}^3$ can be recovered from small-scale
covariance. Section \ref{S:stability}
is devoted to the main theoretical developments. We prove
stability and consistency theorems for multiscale covariance
tensor fields under mild regularity assumptions on the kernel,
and also analyze rates of convergence that are
important for applications in data analysis. Since some
discontinuous kernels are of practical interest, we also
investigate convergence results for such kernels,
including a pointwise central limit theorem. Multiscale
Fr\'{e}chet functions are discussed in Section \ref{S:frechet}
and manifold clustering in Section \ref{S:clustering}. We
close with a summary and some discussion.
\section{Covariance Tensor Fields} \label{S:covariance}
\subsection{Preliminaries}
To define covariance tensor fields, we introduce some notation.
Elements of the tensor product $\ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$ may be
identified with bilinear forms $B \colon \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$
through the Euclidean inner product. More precisely,
a pure tensor $x \otimes y$ corresponds to the bilinear form
\begin{equation} \label{E:bilinear}
x \otimes y \, (u,v) = \inner{x}{u} \cdot \inner{y}{v},
\end{equation}
$\forall u, v \in \ensuremath{\mathbb{R}}^d$, where $\inner{\,}{}$ denotes Euclidean
inner product. Bilinear forms associated with more general
elements of $\ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$ can be
described by linear extension. In Euclidean coordinates, we abuse
notation and also write the coordinate vectors of $x,y \in \ensuremath{\mathbb{R}}^d$ as
$x$ and $y$. With this convention, letting $A$ be the
$d \times d$ matrix $A = xy^T$, we have
\begin{equation}
x \otimes y \, (u,v) = \inner{u}{A v} ,
\end{equation}
where the superscript $T$ denotes transposition.
In this manner, using Euclidean coordinates, an element of
$\ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$ also can be identified with a $d \times d$
matrix by linear extension of the correspondence
$x \otimes y \leftrightarrow A$. Through these identifications,
we refer to an element $\Sigma \in \ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$
interchangeably as a tensor, a bilinear form or a matrix. We equip
$\ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$ with the inner product defined on
pure tensors by
\begin{equation} \label{E:inner}
\inner{x_1 \otimes y_1}{x_2 \otimes y_2} =
\inner{x_1}{x_2} \inner{y_1}{y_2}
\end{equation}
and extended linearly to $\ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$. Thus, the
corresponding norm satisfies
\begin{equation} \label{E:norm}
\|x \otimes y\| = \|x\| \|y\| \,,
\end{equation}
for any $x, y \in \ensuremath{\mathbb{R}}^d$. In matrix representation,
this is the Frobenius norm.
Throughout the paper, we view $\ensuremath{\mathbb{R}}^d$ as a measurable space
equipped with the Borel $\sigma$-algebra for the
Euclidean metric. Let $y$ be an $\ensuremath{\mathbb{R}}^d$-valued random
variable distributed according to the probability measure $\alpha$.
Suppose that $y$ has expected value $\expect{y} = \mu \in \ensuremath{\mathbb{R}}^d$
and finite second moment. As a motivation for the definition of
multiscale CTFs, recall that the covariance
tensor of $y$ is defined as
\begin{equation}
\Sigma_\alpha (\mu) = \expect{(y-\mu) \otimes (y-\mu)} =
\int_{\ensuremath{\mathbb{R}}^d} (y-\mu) \otimes (y-\mu) \, \alpha (dy)
\in \ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d \,.
\end{equation}
In matrix notation,
\begin{equation}
\Sigma_\alpha (\mu) = \int_{\ensuremath{\mathbb{R}}^d} (y-\mu) (y-\mu)^T \, \alpha (dy) \,.
\end{equation}
The bilinear form associated with $\Sigma_\alpha (\mu)$
clearly is symmetric and positive semi-definite.
Covariation of $y$ may be measured with respect
to any $x \in \ensuremath{\mathbb{R}}^d$, not just $\mu$. Thus,
$\Sigma_\alpha (\mu)$ may be extended to a global
covariance tensor field
$\Sigma_\alpha \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$
given by
\begin{equation} \label{E:field}
\Sigma_\alpha (x) = \int_{\ensuremath{\mathbb{R}}^d} (y-x) \otimes (y-x) \, \alpha (dy) \,.
\end{equation}
Note, however, that
\begin{equation}
\Sigma_\alpha (x) = \Sigma_\alpha (\mu) + (\mu-x) \otimes (\mu-x) \,,
\end{equation}
for any $x \in \ensuremath{\mathbb{R}}^d$. Thus, for $x \ne \mu$, $\Sigma_\alpha (x)$
does not reveal any information about the distribution of $y$
other than that already contained in $\Sigma_\alpha (\mu)$.
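Indeed, the identity above follows by writing $y - x = (y - \mu) + (\mu - x)$ and noting that the cross terms vanish because $\expect{y - \mu} = 0$:
\begin{equation*}
\Sigma_\alpha (x)
= \int_{\ensuremath{\mathbb{R}}^d} \big( (y-\mu) + (\mu-x) \big) \otimes \big( (y-\mu) + (\mu-x) \big) \, \alpha (dy)
= \Sigma_\alpha (\mu) + (\mu-x) \otimes (\mu-x) \,.
\end{equation*}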
In contrast, as we shall see below, multiscale
analogues are rich in information about the
shape of $\alpha$.
\subsection{Multiscale Covariance Tensor Fields}
We adopt the notation $\ensuremath{\nu}_d$ for the volume of the unit ball
in $\ensuremath{\mathbb{R}}^d$ and $\ensuremath{\omega}_{d-1}$ for the ``surface area'' of the unit
sphere $\ensuremath{\mathbb{S}}^{d-1} \subset \ensuremath{\mathbb{R}}^d$, $d \geq 1$. Recall that
$\ensuremath{\omega}_{d-1} = 2 \pi^{d/2} / \Gamma(d/2)$, where $\Gamma(\cdot)$ is
the Gamma function, and $\ensuremath{\omega}_{d-1} = d\, \ensuremath{\nu}_d$.
We make the convention that $\ensuremath{\nu}_0 = 1$.
Let $y$ be an $\ensuremath{\mathbb{R}}^d$-valued random variable with distribution
$\alpha$ and let $K$ be a multiscale kernel; that is, a measurable
function $K: \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d \times (0, \infty) \to \ensuremath{\mathbb{R}}$
such that $K(x,y,\sigma) \geqslant 0$, for any $x,y \in \ensuremath{\mathbb{R}}^d$
and $\sigma > 0$.
\begin{definition} \label{D:ctf}
The {\em multiscale covariance tensor field} (CTF) of $y$ associated
with the kernel $K$ is the one-parameter family of tensor fields,
indexed by $\sigma \in (0, \infty)$, given by
\begin{equation} \label{E:ctf}
\Sigma_\alpha (x, \sigma) :=
\int_{\ensuremath{\mathbb{R}}^d} (y-x) \otimes (y-x) K(x,y,\sigma)\, \alpha (dy) \,,
\end{equation}
provided that the integral converges for each
$x \in \ensuremath{\mathbb{R}}^d$ and $\sigma > 0$.
\end{definition}
\begin{remark}
Note that $\Sigma_\alpha$ depends only on the probability measure
$\alpha$, not on $y$. For this reason, we refer to $\Sigma_\alpha$
interchangeably as the multiscale CTF of the random variable $y$
or the probability measure $\alpha$.
\end{remark}
$\Sigma_\alpha (x, \sigma)$ measures the covariation of $y$
about $x$ with probability mass at $y$ weighted by $K (x,y, \sigma)$.
It is simple to verify that the bilinear form
$\Sigma_\alpha (x, \sigma)$ is symmetric and positive semi-definite.
Note that if $K$ is bounded for each $\sigma>0$, that is,
$\exists M_\sigma > 0$ such that $K(x,y,\sigma) \leq M_\sigma$,
$\forall x,y \in \ensuremath{\mathbb{R}}^d$, then $\Sigma_\alpha (x, \sigma)$ is
well defined for any random variable $y$ with finite second
moment. In particular if $K \equiv 1$,
$\Sigma_\alpha (x, \sigma) = \Sigma_\alpha (x)$, $\forall x \in \ensuremath{\mathbb{R}}^d$.
However, as our primary goal is to study the organization of
data and random variables at scales ranging from local
to global, we consider kernels in $\ensuremath{\mathbb{R}}^d$ that satisfy additional
decay conditions as they produce a windowing effect. The
kernels are constructed as follows.
\begin{definition} \label{D:kernel}
Let $d$ be a positive integer and $f \colon [0, \infty) \to \ensuremath{\mathbb{R}}$
a bounded and measurable function satisfying:
\begin{itemize}
\item[(a)] $f (r) \geqslant 0$, $\forall r \in [0, \infty)$;
\item[(b)] $M_d = \int_0^\infty r^{\frac{d}{2}-1} f(r) \,dr < \infty$;
\item[(c)] There is $C > 0$ such that $r f(r) \leq C$,
$\forall r \in [0, \infty)$.
\end{itemize}
The multiscale kernel $K \colon \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d \times (0, \infty) \to \ensuremath{\mathbb{R}}$
associated with $f$ is defined as
\begin{equation} \label{E:kernel}
K(x,y,\sigma) := \frac{1}{C_d (\sigma)} \,
f \left(\frac{\Vert y-x\Vert^2}{\sigma^2}\right) ,
\end{equation}
where $C_d (\sigma) = \frac{1}{2} \sigma^d M_d \,\ensuremath{\omega}_{d-1}$.
\end{definition}
Condition (b) in the definition implies that the normalizing
constant $C_d (\sigma)$ is well defined. The normalization
is adopted so that $\int K(x,y,\sigma) \, dy = 1$,
$\forall x \in \ensuremath{\mathbb{R}}^d$ and $\forall \sigma > 0$.
Condition (c) guarantees that the integral in \eqref{E:ctf}
is convergent for any probability measure $\alpha$. Henceforth,
for convenience, we assume that $\sup f =1$. This is not restrictive
since scaling $f$ does not change the kernel $K$ because of
the normalization.
Whereas we investigate properties of multiscale CTFs in a
more general setting, our examples and
experiments focus on two special kernels:
\begin{itemize}
\item[(i)] The isotropic Gaussian kernel
\begin{equation}
G(x,y, \sigma) = \frac{1}{(2 \pi \sigma^2)^{d/2}}
\exp \left( - \frac{\|y-x\|^2}{2 \sigma^2} \right) ,
\end{equation}
which is associated with the function $f(r) = e^{-r/2}$;
\item[(ii)] The truncation kernel
\begin{equation}
T(x,y,\sigma) = \frac{1}{\sigma^d \ensuremath{\nu}_d} \,
\chi \left(\frac{\|y-x\|^2}{\sigma^2} \right)
\end{equation}
associated with the characteristic function
$\chi \colon [0, \infty) \to \ensuremath{\mathbb{R}}$ of the unit interval $\left[ 0,1\right]$.
In measuring covariation of random variables about $x$, the
kernel $T$ attributes a uniform weight to mass at points
within the closed ball of radius $\sigma$ centered at $x$
and weight zero to mass elsewhere; a short numerical sketch of
both kernels appears right after this list.
\end{itemize}
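To make Definition \ref{D:ctf} concrete, the following Python sketch (all names and the sample data are ours, purely for illustration) evaluates the empirical multiscale CTF $\Sigma_{\alpha_n}(x,\sigma)$ for both kernels above, where $\alpha_n$ is the empirical measure of a point cloud.

\begin{verbatim}
import math
import numpy as np

def gaussian_kernel(x, Y, sigma):
    """G(x, y, sigma) evaluated at all rows y of Y."""
    d = Y.shape[1]
    sq = np.sum((Y - x) ** 2, axis=1)
    return np.exp(-sq / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2) ** (d / 2)

def truncation_kernel(x, Y, sigma):
    """T(x, y, sigma): uniform weight on the closed ball of radius sigma."""
    d = Y.shape[1]
    nu_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # volume of the unit ball
    sq = np.sum((Y - x) ** 2, axis=1)
    return (sq <= sigma ** 2).astype(float) / (sigma ** d * nu_d)

def empirical_ctf(x, Y, sigma, kernel):
    """Sigma_{alpha_n}(x, sigma) = (1/n) sum_i K(x, y_i, sigma) (y_i - x)(y_i - x)^T."""
    w = kernel(x, Y, sigma)
    Z = Y - x
    return (Z * w[:, None]).T @ Z / Y.shape[0]

# Example: 500 points near the unit circle in R^2, evaluated at x = (1, 0).
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
Y = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((500, 2))
x = np.array([1.0, 0.0])
print(empirical_ctf(x, Y, 0.3, gaussian_kernel))
print(empirical_ctf(x, Y, 0.3, truncation_kernel))
\end{verbatim}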
\begin{remark} \label{R:iso}
The kernel $K$ defined in \eqref{E:kernel} is
homogeneous and isotropic;
that is, for any isometry $\varphi \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d$,
$K (\varphi (x), \varphi (y), \sigma) = K (x,y,\sigma)$,
$\forall x,y \in \ensuremath{\mathbb{R}}^d$ and $\sigma > 0$. Moreover,
if we write $\varphi (x) = U x + b$, with $U \in O(d)$ and
$b \in \ensuremath{\mathbb{R}}^d$, then
\begin{equation}
U \, \Sigma_{\alpha}(x,\sigma) \, U^T =
\Sigma_{\varphi_\ast (\alpha)}(\varphi(x),\sigma),
\end{equation}
for any $(x, \sigma) \in \ensuremath{\mathbb{R}}^d \times (0, \infty)$. Here
$O(d)$ is the group of $d \times d$ orthogonal matrices and
$\varphi_\ast (\alpha)$ is the pushforward of $\alpha$ under
$\varphi$.
\end{remark}
\begin{remark} \label{R:support}
Multiscale covariance tensor fields can be defined for
any positive Borel measure $\alpha$ that satisfies
\begin{equation}
\int_{\ensuremath{\mathbb{R}}^d} \|z\|^2 f (\|z\|^2) \, \alpha(dz) < \infty \,,
\end{equation}
not just for probability measures. In particular, if $f$ has
compact support, covariance fields
are defined for any locally finite Borel measure
$\alpha$; that is, measures for which every point $p \in \ensuremath{\mathbb{R}}^d$
has an open neighborhood $U_p$ such that $\alpha (U_p) < \infty$.
\end{remark}
We conclude this section with examples that support
our contention that multiscale covariance tensor fields are
rich in information about the shape of data.
\begin{example} \label{E:dimensionality}
This example shows that the spectrum of multiscale
covariance tensors allows us to estimate the dimensionality of data
in a scale-dependent manner. We consider the data points
$y_1, \ldots, y_n$ in $\ensuremath{\mathbb{R}}^2$, shown in Figure \ref{F:dimension},
and calculate $\Sigma_{\alpha_n}$ centered at one of the data
points for the Gaussian kernel at scales $\sigma = 0.1$ and
$\sigma = 2$. Here $\alpha_n$ denotes the empirical measure
$n^{-1} \sum_{i=1}^n \delta_{y_i}$. The covariance tensors are
depicted as ellipses whose principal axes are in the direction of
the eigenvectors of the covariance matrix and principal radii
are proportional to $\sqrt{\lambda_1}$ and $\sqrt{\lambda_2}$,
where $0 \leq \lambda_1 \leq \lambda_2$ are the eigenvalues of
the covariance. At scale $\sigma = 0.1$,
$\lambda_1/\lambda_2 = 0.908$, showing that the covariance
tensor is nearly isotropic, indicating that the ``dimension'' of the
data is 2. At $\sigma = 2$, the ratio of the eigenvalues is
$0.025$, giving a highly anisotropic covariance tensor, from
which we infer that the dimension is 1.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\begin{tabular}{c}
\includegraphics[width=0.42\linewidth]{band-0_1}
\end{tabular}
\qquad & \qquad
\begin{tabular}{c}
\includegraphics[width=0.4\linewidth]{band-2}
\end{tabular} \\
$\sigma = 0.1$ \qquad & \qquad $\sigma = 2$
\end{tabular}
\end{center}
\caption{Estimating data dimensionality at different scales
through multiscale covariance.}
\label{F:dimension}
\end{figure}
\end{example}
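A computation in the spirit of this example can be scripted as follows. The sketch below is only illustrative: the synthetic ``band'' data, the evaluation point, and the scales are our own choices and differ from those of Figure \ref{F:dimension}.

\begin{verbatim}
import numpy as np

def empirical_ctf_gauss(x, Y, sigma):
    """Empirical CTF at x with the Gaussian kernel."""
    d = Y.shape[1]
    sq = np.sum((Y - x) ** 2, axis=1)
    w = np.exp(-sq / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2) ** (d / 2)
    Z = Y - x
    return (Z * w[:, None]).T @ Z / Y.shape[0]

# Synthetic elongated band: uniform samples from [-2, 2] x [-0.5, 0.5].
rng = np.random.default_rng(1)
Y = np.c_[rng.uniform(-2.0, 2.0, 2000), rng.uniform(-0.5, 0.5, 2000)]
x = Y[np.argmin(np.sum(Y ** 2, axis=1))]   # a data point near the center

for sigma in (0.1, 2.0):
    lam = np.linalg.eigvalsh(empirical_ctf_gauss(x, Y, sigma))  # ascending
    print(sigma, lam[0] / lam[1])
# At sigma = 0.1 the ratio is close to 1 (the band looks 2-dimensional locally);
# at sigma = 2 it is much smaller (the band looks essentially 1-dimensional).
\end{verbatim}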
\begin{example}[A linear subspace of $\ensuremath{\mathbb{R}}^d$]
\label{E:subspace}
Let $v_1, \ldots, v_r \in \ensuremath{\mathbb{R}}^d$, $1 \leq r \leq d$,
be orthonormal vectors and consider the subspace
$H = \langle v_1, \ldots, v_r \rangle$ that they span. Let $\alpha$
denote the singular measure supported on $H$ induced
by the volume form on $H$. The measure $\alpha$ clearly
is locally finite. We calculate multiscale
covariance fields at points $x \in H$ to show that $H$
may be recovered from $\Sigma_\alpha (x, \sigma)$.
By Remark \ref{R:iso}, we may assume that $x = 0$.
A calculation shows that for the Gaussian kernel,
\begin{equation}
\Sigma_\alpha (0, \sigma) =
\frac{1}{(\sqrt{2\pi})^{d-r}\sigma^{d-r-2}} \sum_{i=1}^r \,
v_i \otimes v_i \,.
\end{equation}
For the truncation kernel,
\begin{equation}
\Sigma_\alpha (0, \sigma) = \lambda_r \sum_{i=1}^r \,
v_i \otimes v_i \,,
\end{equation}
where
\begin{equation}
\lambda_r = \frac{1}{\sigma^{d-r-2}}
\frac{\ensuremath{\nu}_{r-1}}{\ensuremath{\nu}_d} \int_{-\pi/2}^{\pi/2}
\sin^2 \theta \cos^r \theta \, d\theta\,.
\end{equation}
For $r=1$, this expression simplifies to
$\lambda_1 = 2/(3 \sigma^{d-3} \ensuremath{\nu}_d)$.
Thus, for both kernels, the orthogonal complement of $H$ is
the null space of $\Sigma_\alpha (0, \sigma)$ and $H$
is the eigenspace associated with the positive
eigenvalue $\lambda_r$.
\end{example}
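As a quick numerical check of the truncation-kernel formula (our own sketch; the plane, scale, and Monte Carlo sample size are arbitrary choices), the snippet below approximates $\Sigma_\alpha(0,\sigma)$ for an $r=2$ plane in $\ensuremath{\mathbb{R}}^3$ by Monte Carlo integration over the disk $H \cap B(0,\sigma)$. For $r=2$ and $d=3$ the displayed integral evaluates, in our computation, to $\lambda_2 = 3\sigma/16$, and the zero eigenvalue is attained along the normal to $H$.

\begin{verbatim}
import math
import numpy as np

rng = np.random.default_rng(2)

# An r = 2 plane H in R^3 spanned by two orthonormal vectors.
v1 = np.array([1.0, 1.0, 0.0]) / math.sqrt(2.0)
v2 = np.array([0.0, 0.0, 1.0])
sigma, d = 0.7, 3
nu_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)

# Monte Carlo approximation of the integral over the disk H cap B(0, sigma)
# against the area measure on H, followed by the truncation-kernel scaling.
N = 200_000
r = sigma * np.sqrt(rng.uniform(size=N))      # uniform points on the disk
phi = rng.uniform(0.0, 2.0 * np.pi, N)
U = np.outer(r * np.cos(phi), v1) + np.outer(r * np.sin(phi), v2)
Sigma = (U.T @ U) * (np.pi * sigma ** 2 / N) / (sigma ** d * nu_d)

lam, vec = np.linalg.eigh(Sigma)
print(lam)                 # approximately (0, 3*sigma/16, 3*sigma/16)
print(3.0 * sigma / 16.0)  # lambda_2 for r = 2, d = 3
print(vec[:, 0])           # approximately +/- the unit normal to H
\end{verbatim}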
\begin{example}[Wedge of $n$ segments]
\label{E:wedge}
Consider the wedge (one-point union) $W$ of $n$ segments
$L_1, \ldots, L_n$ in $\ensuremath{\mathbb{R}}^d$ attached at the origin, as depicted in
Fig.\,\ref{F:wedge}. Each segment $L_i$ is determined by its
length $\ell_i > 0$ and a unit direction vector $v_i$. We assume
that $v_i \ne v_j$, for any $1 \leq i < j \leq n$.
Let $\alpha$ be the singular measure on $\ensuremath{\mathbb{R}}^d$ that is
supported on $W$ and agrees with the measure induced by
arc length on each segment $L_i$. We consider the multiscale
covariance field of $\alpha$ associated with the truncation
kernel. For $x \in L_i$, $x \ne 0$, as in the case $r=1$ in
Example \ref{E:subspace}, we have that $\Sigma_\alpha (x, \sigma)
= \frac{2}{3 \sigma^{d-3} \ensuremath{\nu}_d} \, v_i \otimes v_i$ at small enough
scales. Thus, $\Sigma_\alpha (x, \sigma)$ has rank one. However,
at the origin,
\begin{equation}
\Sigma_\alpha (0, \sigma) = \frac{1}{3 \sigma^d \ensuremath{\nu}_d}
\sum_{i=1}^n \left( \min(\sigma, \ell_i)\right)^3
v_i \otimes v_i \,,
\end{equation}
for any $\sigma > 0$. Thus, for $\sigma \leq
\min \{\ell_i, 1\leq i \leq n\}$,
\begin{equation}
\Sigma_\alpha (0, \sigma) = \frac{1}{3 \sigma^{d-3} \ensuremath{\nu}_d}
\sum_{i=1}^n v_i \otimes v_i \,.
\end{equation}
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\begin{tabular}{c}
\includegraphics[width=0.25\linewidth]{segments1-0_5}
\end{tabular}
\qquad \quad & \quad \qquad
\begin{tabular}{c}
\includegraphics[width=0.25\linewidth]{segments2-0_5}
\end{tabular}
\end{tabular}
\end{center}
\caption{Covariance at the one-point union of line segments.}
\label{F:wedge}
\end{figure}
\end{example}
\section{Geometry of Curves and Surfaces} \label{S:geometry}
In this section, we show how multiscale CTFs associated with the
truncation kernel extract precise local geometric information from plane
curves and surfaces in $\ensuremath{\mathbb{R}}^3$.
\subsection{Plane Curves} \label{S:curves}
\begin{example} \label{E:circle}
We begin with the special case of a circle.
Let $C_R \subset \ensuremath{\mathbb{R}}^2$ be the circle of radius $R$ centered
at the origin in $\ensuremath{\mathbb{R}}^2$ and $\alpha$ the singular measure
supported on $C_R$ induced by arc length. For any $x \in \ensuremath{\mathbb{R}}^2$,
we denote $r = \|x\|$. If $x$ is such that $\left| r - R \right|>\sigma$ then
${\Sigma}_\alpha (x,\sigma) = 0.$ Assume that $x \in \ensuremath{\mathbb{R}}^2$ and $0 < \sigma < R$
are such that $r \in[R-\sigma,R+\sigma]$. In this case, in the coordinate
system given by the directions $n = x/\|x\|$ and $t = n^\perp$, a calculation
shows that $\Sigma_\alpha (x,\sigma)$ is diagonal with entries
\begin{equation}
\begin{split}
\lambda_n (x,\sigma) &= \frac{1}{\pi \sigma^2} \left[ R \phi
\left(R^2+2 r^2\right)+R^2 (R \cos \phi - 4r)\sin \phi \right] \\
\lambda_t (x,\sigma) &= \frac{R^3}{\pi \sigma^2} \left(\phi -\sin \phi \cos \phi \right) \,,
\end{split}
\end{equation}
where $\phi = \arccos\left(\frac{R^2 + r^2-\sigma^2}{2rR}\right)$.
Thus, the normal and tangential vectors, $n$ and $t$, are
eigenvectors with eigenvalues $\lambda_n$ and $\lambda_t$,
respectively. Fig.\,\ref{F:circle} shows the eigenvalues as
functions of $r$, $0.9 \leq r \leq 1.1$, for $\sigma = 0.1$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.35\linewidth]{S1eigenvalues}
\end{center}
\caption{Tangential (blue) and normal (red) eigenvalues
as a function of $r$, $0.9 \leq r \leq 1.1$, at $\sigma = 0.1$,
of the multiscale CTF associated with
the truncation kernel for the singular measure induced
by arc length, supported on the unit circle in $\ensuremath{\mathbb{R}}^2$.}
\label{F:circle}
\end{figure}
\end{example}
Now we consider a general smooth curve $C \subset \ensuremath{\mathbb{R}}^2$,
that is, a 1-dimensional, smooth, properly embedded submanifold of
$\ensuremath{\mathbb{R}}^2$. Let $\alpha$ be the singular measure on $\ensuremath{\mathbb{R}}^2$
supported on $C$ and induced by arc length. This measure is locally
finite because the embedding is proper. We calculate the small-scale
covariance at points on $C$ for the truncation kernel and show
that the curvature can be recovered from the eigenvalues of
$\Sigma_\alpha$. Let $x \in C$ be fixed. The arc-length
parametrization of $C$ near $x$ may be written as
\begin{equation}
X(s) = s-\frac{\kappa ^2 s^3}{6}+ O(s^4)
\quad \text{and} \quad
Y(s) = \frac{\kappa s^2}{2}+\frac{\kappa _s s^3}{6}+O(s^4) \,,
\end{equation}
where $X(s)$ and $Y(s)$ are coordinates along the tangent
and normal to $C$ at $x$, respectively \cite{giblin}. Here, the
curvature $\kappa$ and its derivative $\kappa_s$ are evaluated
at $x$. A calculation yields:
\begin{proposition} \label{P:curves}
Let $\sigma>0$ be small. If $C$ is a smooth plane curve
and $x \in C$, then in the coordinates specified above we have
\begin{equation}
{\Sigma}_\alpha (x, \sigma)=\left(
\begin{matrix}
\frac{2\sigma}{3\pi}-\frac{\kappa^2 \sigma^3}{20 \pi}+O(\sigma^4) \quad &
\frac{\kappa_s \sigma^3}{15 \pi}+O (\sigma^4 ) \\
\frac{\kappa _s \sigma^3}{15 \pi}+O (\sigma^4 ) & \frac{\kappa^2 \sigma^3}{10 \pi}+O (\sigma^4)
\end{matrix}
\right).
\end{equation}
\end{proposition}
Proposition \ref{P:curves} implies that, for $\sigma > 0$ small,
the eigenvalues of $\Sigma_\alpha$ are
\begin{equation}
\lambda_1 = \frac{2\sigma}{3 \pi} - \frac{\kappa^2 \sigma^3}{20 \pi}
+O(\sigma^4)
\quad \text{and} \quad
\lambda_2 = \frac{\kappa^2 \sigma^3}{10 \pi}+O(\sigma^4) \,,
\end{equation}
so that
\begin{equation}
\text{tr} \, \Sigma_\alpha (x, \sigma) = \frac{2 \sigma}{3 \pi} +
\frac{\kappa^2 \sigma^3}{20 \pi} + O(\sigma^4) \,.
\end{equation}
Thus, the curvature at $x \in C$ may be recovered, up to a sign, as
\begin{equation}
\kappa = \pm \lim_{\sigma \to 0} \frac{\sqrt{20 \pi}}{\sigma^{3/2}}
\left( \text{tr} \, \Sigma_\alpha (x, \sigma) - \frac{2 \sigma}{3 \pi} \right)^{1/2}.
\end{equation}
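The curvature formula above lends itself to a direct numerical test. The sketch below is our own illustration: the circle, its discretization, and the choice of scales are arbitrary, and the printed estimates approach $|\kappa| = 1/R$ only up to discretization error and the $O(\sigma^4)$ remainder, so very small values of $\sigma$ require a correspondingly finer discretization.

\begin{verbatim}
import numpy as np

R = 2.0                                    # circle of radius R; |kappa| = 1/R = 0.5
n = 2_000_000                              # fine discretization of arc length
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
Y = R * np.c_[np.cos(t), np.sin(t)]
ds = 2.0 * np.pi * R / n                   # arc-length weight per node
x = np.array([R, 0.0])                     # base point on the curve
Z = Y - x

for sigma in (0.5, 0.3, 0.2):
    inside = np.sum(Z ** 2, axis=1) <= sigma ** 2
    # CTF for the truncation kernel against the arc-length measure (not a
    # probability measure), matching the setting of Proposition P:curves.
    Sigma = (Z[inside].T @ Z[inside]) * ds / (np.pi * sigma ** 2)
    excess = max(np.trace(Sigma) - 2.0 * sigma / (3.0 * np.pi), 0.0)
    kappa_est = np.sqrt(20.0 * np.pi * excess) / sigma ** 1.5
    print(sigma, kappa_est)
\end{verbatim}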
\subsection{Surfaces in $\ensuremath{\mathbb{R}}^3$}
\begin{example}
Let $S_R$ be the sphere of radius $R$ centered at the origin
in $\ensuremath{\mathbb{R}}^3$. For $x \in \ensuremath{\mathbb{R}}^3$, we let $r = \|x\|$. If $x$ is such
that $\left|r - R \right| > \sigma$,
then ${\Sigma_\alpha}(x,\sigma)=0.$ Assume that $x \neq (0,0,0)$
and $\sigma>0$ are such that $r \in [R-\sigma, R + \sigma]$.
In the coordinate system given by the vector
$n = x/ \|x\|$, and any orthonormal basis $\{t_1, t_2\}$
of the orthogonal complement $n^\perp$, a direct calculation
shows that ${\Sigma_\alpha}(x,\sigma)$ is a $3\times 3$ diagonal
matrix with entries
\begin{equation}
\begin{split}
\lambda_{t_1}(x,\sigma) &= \lambda_{t_2}(x,\sigma) =
\frac{R^4}{\sigma^3} \sin ^4\left(\frac{\phi }{2}\right) (\cos \phi + 2) \\
\lambda_n (x,\sigma) &= \frac{R^2}{2 \sigma^3}(1-\cos \phi)
\left( R^2+R \cos \phi (R \cos \phi +R-3 r) \right) \\
&+ \frac{R^2}{2 \sigma^3}(1-\cos \phi) \left(- 3 R r+3 r^2\right)
\end{split}
\end{equation}
where $\phi = \arccos\left(\frac{R^2+ r^2-\sigma^2}{2R r}\right)$.
In particular, this means that $\lambda_n$ is the eigenvalue
corresponding to the eigenvector $n$ along the normal direction
to the sphere at $x/\|x\|$,
and $\{t_1, t_2\}$ span the eigenspace along the tangent directions
with eigenvalue $\lambda_{t_1} = \lambda_{t_2}$. Fig.\,\ref{F:S2}
shows a plot of the eigenvalues as a function of $r$,
$0.9 \leq r \leq 1.1$, for $\sigma = 0.1$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.35\linewidth]{S2eigenvalues}
\end{center}
\caption{Tangential (blue) and normal eigenvalues (red)
of the multiscale CTF associated with the truncation kernel as
a function of $r$, $0.9 \leq r \leq 1.1$, at $\sigma = 0.1$,
for the singular measure induced by surface area, supported
on the unit sphere in $\ensuremath{\mathbb{R}}^3$.}
\label{F:S2}
\end{figure}
\end{example}
Now we consider a general smooth compact surface
$S\subset \ensuremath{\mathbb{R}}^3$. Let $\alpha$ be the singular measure
on $\ensuremath{\mathbb{R}}^3$ supported on $S$ and induced by the area
measure on $S$. We calculate the
small-scale covariance at points on $S$ for the truncation kernel
and show that the principal curvatures may indeed be recovered
from the spectrum of ${\Sigma}$.
Given a non-umbilic point $p \in S$, one can choose a Cartesian
coordinate system centered at $p$ so that the $x$-axis is along
the direction of maximal curvature at $p$, the $y$-axis is along the
direction of minimal curvature at $p$, and the $z$-axis is along
the normal to $S$ at $p$.
\begin{proposition} \label{P:surface}
Let $\sigma>0$ be small, $p \in S$ be non-umbilic, and
$\alpha$ be the surface area measure on $S$. In the coordinate
system described above, the covariance tensor for the truncation
kernel is given by
\begin{equation}
\Sigma_\alpha (p, \sigma) = \begin{bmatrix}
A_{t_1} & O (\sigma^4 ) & O(\sigma^5) \\
O(\sigma^4) & A_{t_2}& O(\sigma^5) \\
O(\sigma^5) & O(\sigma^5) & A_n
\end{bmatrix},
\end{equation}
where
\begin{equation}
\begin{split}
A_{t_1} &= \frac{3\sigma}{16} + \frac{1}{256}
(-3\kappa_1^2 - 6\kappa_1 \kappa_2+\kappa_2^2) \sigma^3
+ O(\sigma^4),\\
A_{t_2} &= \frac{3\sigma}{16} + \frac{1}{256}
(\kappa_1^2 - 6\kappa_1 \kappa_2 - 3\kappa_2^2) \sigma^3
+ O(\sigma^4) ,\\
A_n &= \frac{3 \kappa_1^2 + 2 \kappa_1 \kappa_2 +
3 \kappa_2^2}{128} \sigma^3 + O(\sigma^4),
\end{split}
\end{equation} and
$\kappa_1 > \kappa_2$ are the principal curvatures
of $S$ at $p$.
\end{proposition}
It follows from this result that, for $\sigma>0$ small,
\begin{equation}
\begin{split}
\text{tr} \, \Sigma_\alpha (p, \sigma) &=
\frac{3}{8}\sigma+ \frac{1}{64}(\kappa_1-\kappa_2)^2\sigma^3+O(\sigma^4)
\ \text{and} \\
\text{det} \, \Sigma_\alpha (p, \sigma) &= \big(3\kappa_1^2+2\kappa_1\kappa_2+3\kappa_2^2\big)\frac{\pi^2}{2048}\sigma^{11} + O(\sigma^{12}) \,.
\end{split}
\end{equation}
As a consequence, $\kappa_1$ and $\kappa_2$ can be recovered from
the spectrum of ${\Sigma}_\alpha(p,\sigma)$ as a function of $\sigma$.
Indeed, from the small scale asymptotics of the trace and determinant
of $\Sigma_\alpha (p, \sigma)$, we can extract the values of
$(\kappa_1-\kappa_2)^2$ and
$3\kappa_1^2+2\kappa_1\kappa_2+3\kappa_2^2$ from which we
can determine the values of $\kappa_1$ and $\kappa_2$.
\begin{proof}[Proof of Proposition \ref{P:surface}]
Using cylindrical coordinates in the chosen reference system,
we can parametrize the patch $S\cap B(p,\sigma)$
as $(\rho\cos\phi,\rho\sin\phi,z(\rho,\phi))$, for $\phi\in[0,2\pi]$,
$\rho\in[0,\rho_\sigma(\phi)]$, where $\rho_\sigma(\phi)
= \sigma-\frac{1}{8}\big(\kappa_1(\cos\phi)^2
+ \kappa_2(\sin\phi)^2\big)^2\sigma^3+O(\sigma^4)$, and $z(\rho,\phi)
= \frac{\rho^2}{2}\big(\kappa_1(\cos\phi)^2
+ \kappa_2(\sin\phi)^2\big)+O(\sigma^3).$
The area element on the patch is given by
\begin{equation}
dA = \left( \rho+\frac{\rho^3}{2}(\kappa_1^2(\cos\phi)^2 +
\kappa_2^2(\sin\phi)^2)+O(\rho^5) \right) d\rho\,d\phi.
\end{equation}
Now we have all the ingredients needed to compute
$\Sigma_\alpha (p,\sigma)$. For example, to calculate the
$(1,1)$-entry, we express $\iint_{S\cap B(0,\sigma)}x^2\,dA$ as
\begin{equation}
\int_0^{2\pi}\int_0^{\rho_\sigma(\phi)}
\left[ \rho ^3 \cos ^2 \phi + \frac{\rho ^5 \cos ^2 \phi}{2}
\left(\kappa _2^2 \sin ^2 \phi +\kappa _1^2 \cos ^2 \phi \right)
+ O (\rho ^6) \right] d\rho\, d\phi\,,
\end{equation}
which after a simple but tedious calculation yields the
desired result. The computation of other
entries of the matrix follows similar steps.
\end{proof}
\section{Stability and Consistency} \label{S:stability}
For each $p \in [1, \infty)$, let $\ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$ denote the collection
of all Borel probability measures $\alpha$ on $\ensuremath{\mathbb{R}}^d$ whose
$p$th moment $M_p (\alpha) = \int \|z\|^p \alpha (dz)$ is finite. We
adopt the notation $m_p (\alpha) = M^{1/p}_p (\alpha)$. For $p=\infty$,
we let $\ensuremath{\mathcal{P}}_\infty(\ensuremath{\mathbb{R}}^d)$ be the collection of all Borel
probability measures on $\ensuremath{\mathbb{R}}^d$ with bounded support and
$m_\infty(\alpha)= \sup \{\|z\|,\,z\in \mathrm{supp}\,[\alpha]\}$.
By Jensen's inequality, if $1 \leq q \leq p \leq \infty$,
then $\ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d) \subset \ensuremath{\mathcal{P}}_q (\ensuremath{\mathbb{R}}^d)$ and
$m_q (\alpha) \leq m_p (\alpha)$,
for any $\alpha \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$.
\begin{definition} \label{D:moments}
For $p \in [1, \infty]$ and $\lambda>0$, we define
$\ensuremath{\mathcal{P}}_p^\lambda(\ensuremath{\mathbb{R}}^d) \subset \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$ as the
subset of all $\alpha \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$ such that
$\alpha(A)\leq \lambda \,\mathcal{L}(A)$, for all measurable
sets $A$, where $\mathcal{L}$ stands for Lebesgue measure.
\end{definition}
\begin{example}
If $\alpha \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$ is absolutely continuous with respect
to the Lebesgue measure with density function
$f \in \mathbb{L}^\infty (\ensuremath{\mathbb{R}}^d)$ satisfying $\|f\|_\infty \leq \lambda$,
then $\alpha \in \ensuremath{\mathcal{P}}_p^\lambda (\ensuremath{\mathbb{R}}^d)$.
\end{example}
Let us recall the definition of the $p$-Wasserstein distance
$\ensuremath{W_p} (\alpha,\beta)$ between
$\alpha, \beta \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$. Let $\ensuremath{\Gamma} (\alpha,\beta)$ be the
collection of all couplings of $\alpha$ and $\beta$; that is, probability
measures $\mu$ on $\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^d$
such that $(\pi_1)_\ast \mu=\alpha$ and $(\pi_2)_\ast \mu = \beta$,
where $\pi_1, \pi_2 \colon \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d$ denote
projections onto the first and second components, respectively.
\begin{definition}
For $p \in [1, \infty)$, the {\em $p$-Wasserstein distance}
between $\alpha, \beta \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$ is given by
\[
\ensuremath{W_p} (\alpha,\beta) := \inf_{\mu \in \ensuremath{\Gamma} (\alpha,\beta)}
\left(\iint\| z_1 - z_2 \|^p \mu(dz_1 \times dz_2) \right)^{1/p},
\]
and the $\infty$-Wasserstein distance between $\alpha, \beta
\in \ensuremath{\mathcal{P}}_\infty (\ensuremath{\mathbb{R}}^d)$ by
\[
\ensuremath{W_\infty} (\alpha,\beta) := \inf_{\mu \in \ensuremath{\Gamma} (\alpha,\beta)}
\sup\left\{\|z_1-z_2\|,\,(z_1,z_2)\in\mathrm{supp} \, [\mu]\right\}.
\]
\end{definition}
\begin{remark} \label{R:wasserstein}
\hfill
\begin{itemize}
\item[(i)] For any $\alpha, \beta \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$, $p\in[1,\infty]$,
there exists a coupling that realizes the infimum in the definition
of $W_p (\alpha,\beta)$ (cf. \cite{givens84}).
\item[(ii)] It is a standard result that, for each $p \in [1, \infty)$, $\ensuremath{W_p}$
defines a metric on $\ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$ that is compatible with
weak convergence of probability measures \cite{villani03}.
\item[(iii)] If $\varphi \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d$ is an isometry, then
$\ensuremath{W_p} (\alpha, \beta) =
\ensuremath{W_p} (\varphi_\ast (\alpha), \varphi_\ast (\beta))$,
for any $\alpha, \beta \in \ensuremath{\mathcal{P}}_p (\ensuremath{\mathbb{R}}^d)$.
\end{itemize}
\end{remark}
\subsection{Smooth Kernels}
\begin{theorem}[Stability for Smooth Kernels] \label{T:stab}
Let $f \colon [0, \infty) \to \ensuremath{\mathbb{R}}$ be as in Definition \ref{D:kernel}
with multiscale kernel $K$. Suppose that $f$ is differentiable
and there exists a constant $A_1>0$ such that
$r^{3/2} \, |f'(r)| \leq A_1$, $\forall r \geq 0$. Then,
there is a constant $A_f >0$, that depends only on $f$, such
that
\[
\sup_{x\in\ensuremath{\mathbb{R}}^d} \left\| \Sigma_\alpha (x, \sigma) -
\Sigma_\beta (x, \sigma) \right\| \leq
\frac{\sigma A_f}{C_d(\sigma)} \,
\ensuremath{W_1} (\alpha,\beta),
\]
for any $\alpha,\beta\in\ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$ and any $\sigma>0$.
Here $\| \cdot \|$ is the norm associated
with the inner product defined in \eqref{E:inner}.
\end{theorem}
Theorem \ref{T:stab} shows that multiscale covariance fields
yield a robust representation of probability measures that makes
their geometric properties more readily accessible, as illustrated in
our examples. In Section \ref{S:frechet}, we show that not only
is $\Sigma_\alpha (\cdot, \sigma)$ stable, but all the information
contained in the probability measure $\alpha$ is
fully absorbed into the multiscale CTF associated with the
Gaussian kernel. In fact, $\alpha$ may
be recovered from the multiscale scalar field given by
$V_\alpha (x, \sigma) = \mathrm{tr} \, \Sigma_\alpha (x, \sigma)$,
$x \in \ensuremath{\mathbb{R}}^d$ and $\sigma > 0$.
The following lemma will be used in the proof of the stability
theorem for smooth kernels. To simplify notation, we define
$K_\sigma \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ by
\begin{equation} \label{E:center}
K_\sigma (z) = K (z, 0, \sigma) =
\frac{1}{C_d (\sigma)} \,
f \left(\frac{\Vert z\Vert^2}{\sigma^2}\right)
\end{equation}
and $Q_\sigma \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$ by
\begin{equation}
Q_\sigma (z) = (z \otimes z) K_\sigma (z) \,.
\end{equation}
\begin{lemma}\label{L:kernel}
Let $f$ be as in Definition \ref{D:kernel} and suppose
that $f$ is differentiable and there is a constant $A_1 >0$
such that $r^{3/2} \, |f'(r)| \leq A_1$, $\forall r \geq 0$.
Then, there is a constant $A_f > 0$, that depends only on $f$,
such that
\[
\left\| Q_\sigma(z_1) - Q_\sigma(z_2) \right\| \leq
\frac{A_f \sigma}{C_d(\sigma)} \|z_1 - z_2\| ,
\]
for any $z_1, z_2 \in \ensuremath{\mathbb{R}}^d$ and all $\sigma > 0$.
\end{lemma}
\begin{proof}
Let $z(t) = t z_1 + (1-t) z_2$, $0 \leq t \leq 1$. Then,
\begin{equation} \label{E:estimate1}
\begin{split}
Q_\sigma(z_1) - Q_\sigma(z_2) =
\int_0^1 \frac{d}{dt} Q_\sigma(z(t)) \,dt \,.
\end{split}
\end{equation}
Since
\begin{equation}
\begin{split}
\frac{d}{dt} Q_\sigma(z) &= \frac{(z_1-z_2) \otimes z}{C_d (\sigma)}
f \left( \frac{\|z\|^2}{\sigma^2} \right) +
\frac{z \otimes (z_1-z_2)}{C_d (\sigma)}
f \left( \frac{\|z\|^2}{\sigma^2} \right) \\
&+ (z \otimes z) \frac{2}{\sigma^2 C_d(\sigma)}
f' \left(\frac{\Vert z\Vert^2}{\sigma^2}\right) \left( z \cdot
(z_1-z_2) \right) \,,
\end{split}
\end{equation}
it follows that
\begin{equation} \label{E:estimate2}
\begin{split}
\left\| \frac{d}{dt} Q_\sigma(z) \right\| & \leq
\frac{ 2 \|z\|}{C_d (\sigma)} f \left( \frac{\|z\|^2}{\sigma^2} \right)
\|z_1 - z_2\| \\
&+ \frac{2 \|z\|^3}{\sigma^2 C_d(\sigma)}
\left| f' \left(\frac{\Vert z\Vert^2}{\sigma^2}\right) \right| \|z_1 - z_2\| \\
&= \frac{ 2 \sigma}{C_d (\sigma)} \frac{\|z\|}{\sigma}
f \left( \frac{\|z\|^2}{\sigma^2} \right) \|z_1 - z_2\| \\
&+ \frac{2 \sigma}{C_d(\sigma)} \frac{\|z\|^3}{\sigma^3}
\left| f' \left(\frac{\Vert z\Vert^2}{\sigma^2}\right) \right|
\|z_1 - z_2\| \,.
\end{split}
\end{equation}
Since $f$ is bounded, condition (c) of Definition \ref{D:kernel}
ensures that there is a constant $A_2 > 0$ such that
$\sqrt{r} f(r) \leq A_2$, $\forall r \geq 0$; indeed,
$\sqrt{r} f(r) = \sqrt{r f(r)} \, \sqrt{f(r)} \leq \sqrt{C}$,
since $\sup f = 1$. Moreover, by hypothesis
$r^{3/2} |f' (r)| \leq A_1$. Thus, \eqref{E:estimate2} implies that
\begin{equation} \label{E:estimate3}
\left\| \frac{d}{dt} Q_\sigma(z) \right\| \leq
\frac{\sigma A_f}{C_d(\sigma)} \|z_2 - z_1\|\,,
\end{equation}
where $A_f = 2(A_1 + A_2)$. The lemma follows from
\eqref{E:estimate1} and \eqref{E:estimate3}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:stab}]
Without loss of generality, we may assume that $x=0$.
We express the covariance fields as
\begin{equation}
\Sigma_\alpha(x,\sigma)=\int Q_\sigma(z_1)\,\alpha(dz_1)
\quad \text{and} \quad
\Sigma_\beta(x,\sigma) = \int Q_\sigma(z_2)\,\beta(dz_2).
\end{equation}
Given $\eta > 0$ satisfying $\ensuremath{W_1} (\alpha,\beta) < \eta$,
let $\mu \in \ensuremath{\Gamma} (\alpha,\beta)$ be a coupling such that
\begin{equation} \label{E:wass}
\iint\|z_1 - z_2\| \mu (dz_1 \times dz_2) < \eta \,.
\end{equation}
We may write
\begin{equation}
\begin{split}
\Sigma_\alpha(x,\sigma) =\int Q_\sigma(z_1)\,\mu (dz_1 \times dz_2)
\ \ \text{and} \ \
\Sigma_\beta (x,\sigma) =\int Q_\sigma(z_2)\,\mu (dz_1 \times dz_2) \,.
\end{split}
\end{equation}
Therefore,
\begin{equation} \label{E:cestimate0}
\left\| \Sigma_\alpha (x,\sigma) - \Sigma_\beta (x,\sigma) \right\|
\leq \iint \|Q_\sigma( z_1) - Q_\sigma( z_2)\| \,\mu(dz_1\times dz_2) \,.\end{equation}
Lemma \ref{L:kernel} and \eqref{E:cestimate0} imply that
\begin{equation} \label{E:cestimate1}
\begin{split}
\left\| \Sigma_\alpha (x,\sigma) - \Sigma_\beta (x,\sigma) \right\|
&\leq \frac{\sigma A_f}{C_d(\sigma)}
\iint \|z_1- z_2\| \,\mu(dz_1\times dz_2) \\
&\leq \frac{\sigma A_f}{C_d(\sigma)} \eta \,.
\end{split}
\end{equation}
Since \eqref{E:cestimate1} holds for any
$\eta > \ensuremath{W_1} (\alpha,\beta)$,
we can conclude that
\begin{equation}
\left\| \Sigma_\alpha(x,\sigma) - \Sigma_\beta(x,\sigma) \right\|
\leq \frac{\sigma A_f}{C_d(\sigma)}
\ensuremath{W_1} (\alpha, \beta) \,,
\end{equation}
as claimed.
\end{proof}
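The inequality of Theorem \ref{T:stab} can be checked numerically. In the hedged sketch below (dimension, samples, grid, and constants are all our own illustrative choices) we work in dimension $d=1$ with the Gaussian kernel $f(r) = e^{-r/2}$, for which one admissible choice of constants is $A_2 = \sup_r \sqrt{r}\,e^{-r/2} = e^{-1/2}$ and $A_1 = \sup_r \tfrac{1}{2} r^{3/2} e^{-r/2} = \tfrac{3^{3/2}}{2} e^{-3/2}$, hence $A_f = 2(A_1 + A_2)$, and $C_1(\sigma) = \sqrt{2\pi}\,\sigma$. The distance $\ensuremath{W_1}(\alpha,\beta)$ between the two empirical measures is computed with \texttt{scipy.stats.wasserstein\_distance}.

\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)
sigma = 0.5
A_f = 2.0 * (0.5 * 3.0 ** 1.5 * np.exp(-1.5) + np.exp(-0.5))   # 2 (A_1 + A_2)
C_1 = np.sqrt(2.0 * np.pi) * sigma                              # C_d(sigma) for d = 1

def ctf_1d(x, samples, sigma):
    """Empirical CTF (a scalar when d = 1) with the Gaussian kernel."""
    z = samples - x
    w = np.exp(-z ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return np.mean(w * z ** 2)

a = rng.normal(0.0, 1.0, 400)           # samples of alpha
b = rng.normal(0.3, 1.2, 400)           # samples of beta
grid = np.linspace(-4.0, 4.0, 401)      # sup taken over a finite grid only
lhs = max(abs(ctf_1d(x, a, sigma) - ctf_1d(x, b, sigma)) for x in grid)
rhs = sigma * A_f / C_1 * wasserstein_distance(a, b)
print(lhs, "<=", rhs)                   # the bound holds, typically with room to spare
\end{verbatim}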
In what follows, given random vectors $y_i\in \ensuremath{\mathbb{R}}^d$,
$i \in \mathbb{N}$, we let $\alpha_n =
\sum_{i=1}^n \delta_{y_i}/n$.
\begin{corollary}[Consistency for Smooth Kernels]
\label{C:consistency}
Let $K$ be a multiscale kernel as in Theorem \ref{T:stab}
and $\sigma > 0$. If $\alpha \in \mathcal{P}_1 (\ensuremath{\mathbb{R}}^d)$ and
$y_i \in \ensuremath{\mathbb{R}}^d$, $i \in \mathbb{N}$,
are i.i.d. random variables with distribution $\alpha$, then
\[
\sup_{x \in \ensuremath{\mathbb{R}}^d}
\left\| \Sigma_{\alpha_n}(x,\sigma)
-\Sigma_\alpha(x,\sigma) \right\|
\xrightarrow{n\uparrow\infty} 0
\]
almost surely.
\end{corollary}
\begin{proof}
Theorem \ref{T:stab} implies that
\begin{equation}
\sup_{x \in \ensuremath{\mathbb{R}}^d} \left\| \Sigma_{\alpha_n}(x,\sigma) -
\Sigma_\alpha(x,\sigma) \right\| \leq
\frac{\sigma A_f}{C_d(\sigma)}
\, \ensuremath{W_1} (\alpha_n,\alpha).
\end{equation}
The conclusion follows from the fact that $\ensuremath{W_1}$ metrizes weak
convergence of probability measures in $\ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$
and Varadarajan's Theorem \cite{dudley} about convergence of
empirical measures on Polish spaces that ensures that
$\alpha_n$ converges weakly to $\alpha$ almost surely.
\end{proof}
Corollary \ref{C:consistency} guarantees the asymptotic consistency
of empirical CTFs. However, in applications, it is important to have
estimates of the rate of convergence, which we derive
from the stability theorem and a result of Fournier and
Guillin \cite[Theorem 1]{fournier14}.
\begin{theorem}[Fournier and Guillin, \cite{fournier14}]
Let $\alpha \in \ensuremath{\mathcal{P}}_s (\ensuremath{\mathbb{R}}^d)$, where $s > 1$. If $y_1, \ldots, y_n$
are i.i.d. random variables with distribution $\alpha$ and $p \in [1,s)$,
then there exists a constant $b> 0$ that depends only on
$p,s$ and $d$ such that
\begin{equation*}
\begin{split}
\mathbb{E} \left[W_p (\alpha,\alpha_n)\right] & \leq \\
b \, m_s^p (\alpha) \,
&\cdot \begin{cases}
n^{-\frac{s-p}{s}} + n^{-\frac{1}{2}} & \text{if $p>d/2$ and $s\neq 2p$;} \\
n^{-{\frac{s-p}{s}}} + n^{-\frac{1}{2}}\log(1+n) &
\text{if $p=d/2$ and $s\neq 2p$;} \\
n^{-\frac{s-p}{s}} +n^{-\frac{p}{d}} & \mbox{if $p\in[1,d/2)$ and $s\neq d/(d-p)$,}
\end{cases}
\end{split}
\end{equation*}
for any $n \geq 1$.
\end{theorem}
\begin{corollary} \label{C:rates}
Let $f$ be as in Theorem \ref{T:stab} and $\sigma > 0$.
Suppose that $\alpha \in \ensuremath{\mathcal{P}}_3 (\ensuremath{\mathbb{R}}^d)$ and
$y_i$, $i \in \mathbb{N}$, are i.i.d. random
variables with distribution $\alpha$. Then, there is
a constant $b> 0$, that depends only on $d$,
such that
\[
\begin{split}
\mathbb{E} &\left[ \, \sup_{x\in\ensuremath{\mathbb{R}}^d} \left\| \Sigma_{\alpha_n} (x, \sigma)
- \Sigma_\alpha (x, \sigma) \right\| \right] \leq \\
&\frac{\sigma A_f b}{C_d (\sigma)} m_3 (\alpha) \cdot
\begin{cases}
n^{-\frac{2}{3}} + n^{-\frac{1}{2}} & \text{if $d=1$;}\\
n^{-{\frac{2}{3}}} + n^{-\frac{1}{2}}\log(1+n) & \text{if $d=2$;}\\
n^{-\frac{2}{3}} +n^{-\frac{1}{d}} & \text{if $d\geq 3$.}
\end{cases}
\end{split}
\]
\end{corollary}
\begin{proof}
Since $\ensuremath{\mathcal{P}}_3 (\ensuremath{\mathbb{R}}^d) \subseteq \ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$,
$\alpha$ is also an element of $\ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$. The
conclusion follows by invoking Theorem \ref{T:stab} and the
result of Fournier and Guillin with $p=1$ and $s=3$. The
constant $b$ depends only on $d$ because we are
fixing $p$ and $s$.
\end{proof}
Theorem \ref{T:stab} ensures that multiscale covariance fields
are stable. However, the results do not apply
to some discontinuous kernels of practical interest. Nonetheless,
we prove a stability theorem for the truncation kernel, as well as
pointwise convergence results for more general kernels.
\subsection{The Truncation Kernel}
We begin our discussion of covariance fields associated
with the truncation kernel with a stability theorem with
respect to the $\infty$-Wasserstein metric. In preparation for
the proof of the theorem, we introduce some notation.
For $0 \leq a< b$, let
\begin{equation} \label{E:annulus}
R_d (a,b) = \{y \in \ensuremath{\mathbb{R}}^d \, \colon \, a < \|y\| \leq b \}
\end{equation}
and
\begin{equation} \label{E:moment}
s_d (a,b) = \int_{R_d (a,b)} \|y\|^2 \, dy
\end{equation}
be its radial moment of inertia.
We will use the fact that for any $B \geq b$, the inequality
\begin{equation} \label{E:inertia}
s_d (a,b) \leq (b-a) \, \frac{\ensuremath{\omega}_{d-1}}{d+2} \, \frac{B^{d+2}}{B-a}
\end{equation}
holds. Indeed,
\begin{equation}
\begin{split}
s_d(a,b) &= \frac{\ensuremath{\omega}_{d-1}}{d+2} \left(b^{d+2} - a^{d+2} \right) =
\frac{\ensuremath{\omega}_{d-1}}{d+2} a^{d+2} \left((b/a)^{d+2}-1\right) \\
&=\frac{\ensuremath{\omega}_{d-1}}{d+2}a^{d+2} \frac{(b-a)}{a}
\left(1+ \left(\frac{b}{a}\right) + \cdots + \left(\frac{b}{a}\right)^{d+1}\right) \\
&\leq \frac{b-a}{B-a} \, \frac{\ensuremath{\omega}_{d-1}}{d+2} a^{d+2}
\frac{B-a}{a} \left(1+ \left(\frac{B}{a}\right) + \cdots +
\left(\frac{B}{a}\right)^{d+1}\right) \\
&\leq (b-a) \, \frac{\ensuremath{\omega}_{d-1}}{(d+2)} \frac{B^{d+2} - a^{d+2}}{B-a}
\leq (b-a) \, \frac{\ensuremath{\omega}_{d-1}}{d+2} \, \frac{B^{d+2}}{B-a} \,.
\end{split}
\end{equation}
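A quick numerical sanity check of this inequality (the parameter values below are arbitrary and the sketch is purely illustrative):

\begin{verbatim}
import math
import numpy as np

def omega(d):
    """Surface area of the unit sphere S^{d-1}."""
    return 2.0 * math.pi ** (d / 2) / math.gamma(d / 2)

def s(d, a, b):
    """Closed form s_d(a, b) = (omega_{d-1} / (d+2)) (b^{d+2} - a^{d+2})."""
    return omega(d) / (d + 2) * (b ** (d + 2) - a ** (d + 2))

d, a, b, B = 3, 0.8, 1.1, 2.0
# Direct radial quadrature: int_{R_d(a,b)} ||y||^2 dy = omega_{d-1} int_a^b t^{d+1} dt.
t = np.linspace(a, b, 100_001)
f = t ** (d + 1)
quad = omega(d) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
bound = (b - a) * omega(d) / (d + 2) * B ** (d + 2) / (B - a)
print(s(d, a, b), quad, bound)   # closed form matches the quadrature; both <= bound
\end{verbatim}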
Let $\ensuremath{\mathcal{P}}_\infty (\ensuremath{\mathbb{R}}^d)$ and $\ensuremath{\mathcal{P}}^\lambda_\infty (\ensuremath{\mathbb{R}}^d)$
be as in Definition \ref{D:moments}.
\begin{theorem}[Stability for the Truncation Kernel] \label{T:stab-trunc}
Let $\sigma>0$ and $\lambda>0$. Suppose $\Omega \subset \ensuremath{\mathbb{R}}^d$ is
a compact set and let $c > 0$ satisfy $\mathrm{diam} (\Omega) \leq c$.
There is a constant $A = A(\sigma, d, c) > 0$ such that if
$\alpha \in \ensuremath{\mathcal{P}}_\infty^\lambda (\ensuremath{\mathbb{R}}^d)$ and $\beta \in
\ensuremath{\mathcal{P}}_\infty (\ensuremath{\mathbb{R}}^d)$ have their supports contained in $\Omega$,
then the multiscale covariance tensor fields of $\alpha$ and
$\beta$ associated with the truncation kernel $T$ satisfy
\begin{equation*}
\sup_{x\in\ensuremath{\mathbb{R}}^d}
\|\Sigma_\alpha (x, \sigma) - \Sigma_\beta (x, \sigma)\| \leq
\lambda A \ensuremath{W_\infty} (\alpha, \beta) \,.
\end{equation*}
\end{theorem}
\begin{proof}
Without loss of generality, we may assume that $x \in \ensuremath{\mathbb{R}}^d$
is the origin. We abbreviate $\eta = W_\infty (\alpha, \beta)$
and let $\mu \in \Gamma (\beta, \alpha)$ be a coupling that realizes
$\eta$. Clearly, $\eta \leq \text{diam} (\Omega)\leq c$.
Using the notation
introduced in \eqref{E:annulus}, let $A_{\sigma, \eta} =
R_d (\sigma, \sigma + \eta)$. We write
\begin{equation} \label{E:diff1}
\begin{split}
\Sigma_\alpha (x, \sigma) - \Sigma_\beta (x, \sigma) &=
\underbrace{
\Sigma_\alpha (x, \sigma) - \frac{(\sigma + \eta)^d}{\sigma^d}\,
\Sigma_\alpha (x, \sigma + \eta)
}_{T_1} \\
&+
\underbrace{
\frac{(\sigma + \eta)^d}{\sigma^d}\, \Sigma_\alpha (x, \sigma + \eta)
- \Sigma_\beta (x, \sigma)
}_{T_2} \,.
\end{split}
\end{equation}
Using the fact that $\alpha \in \ensuremath{\mathcal{P}}^\lambda_\infty (\ensuremath{\mathbb{R}}^d)$,
we bound the norm of $T_1$ as follows:
\begin{equation} \label{E:diff2}
\begin{split}
\left\|T_1 \right\| &=
\frac{1}{\sigma^d \ensuremath{\nu}_d}
\Big\| \int_{A_{\sigma, \eta}} y \otimes y \, d\alpha (y) \Big\|
\leq \frac{1}{\sigma^d \ensuremath{\nu}_d}
\int_{A_{\sigma, \eta}} \| y\|^2 \, d\alpha (y) \\
&\leq \frac{\lambda}{\sigma^d \ensuremath{\nu}_d}
\int_{A_{\sigma, \eta}} \| y\|^2 \, d y
= \frac{\lambda}{\sigma^d \ensuremath{\nu}_d}
s_d (\sigma, \sigma + \eta) \,.
\end{split}
\end{equation}
Using \eqref{E:inertia} with $a = \sigma$, $b = \sigma + \eta$
and $B = \sigma + c$, we have that
\begin{equation} \label{E:bound3}
s_d(\sigma, \sigma + \eta)\leq \eta \,\frac{\ensuremath{\omega}_{d-1}}{d+2}
\frac{(\sigma + c)^{d+2}}{c} = \eta \, \ensuremath{\nu}_d \,
\frac{d}{d+2} \frac{(\sigma + c)^{d+2}}{c} \,.
\end{equation}
Thus,
\begin{equation} \label{E:diff3}
\left\| T_1 \right\| \leq
\frac{\lambda d}{d+2} \frac{(\sigma + c)^{d+2}}{c \sigma^d}
\, \ensuremath{W_\infty} (\alpha, \beta) \,.
\end{equation}
Now we examine $T_2$. Let $\mathbb{I} \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ be the
characteristic function of the closed ball of radius 1 centered
at the origin. Using the coupling $\mu$, we write
\begin{equation} \label{E:diff4}
\begin{split}
T_2 &= \frac{1}{\sigma^d \ensuremath{\nu}_d} \iint_{\ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d}
\left[ (y_1 \otimes y_1) \,
\mathbb{I} \left(\frac{y_1}{\sigma + \eta}\right)
- (y_2 \otimes y_2) \, \mathbb{I} \left(\frac{y_2}{\sigma}\right) \right]
\mu (dy_1 \times dy_2) \,.
\end{split}
\end{equation}
Note that the integrand in \eqref{E:diff4} vanishes on
$(\ensuremath{\mathbb{R}}^d \setminus B_{\sigma + \eta}) \times
(\ensuremath{\mathbb{R}}^d \setminus B_{\sigma})$ and
the integral also vanishes over $\left(\ensuremath{\mathbb{R}}^d \setminus
B_{\sigma + \eta} \right) \times B_\sigma$ because this
subdomain is disjoint from $\text{supp} \,[\mu]$. Combining
these remarks with
\begin{equation}
y_1 \otimes y_1 - y_2 \otimes y_2 = (y_1 - y_2) \otimes y_1 +
y_2 \otimes (y_1- y_2) \,,
\end{equation}
we may rewrite \eqref{E:diff4} as
\begin{equation} \label{E:diff5}
\begin{split}
T_2 &= \frac{1}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times B_\sigma} \left[ (y_1 \otimes y_1) \,
- (y_2 \otimes y_2) \right] \mu (dy_1 \times dy_2) \\
&+ \frac{1}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times (\ensuremath{\mathbb{R}}^d \setminus B_\sigma)}
y_1 \otimes y_1 \, \mu (dy_1 \times dy_2) \\
&= \frac{1}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times B_\sigma}
(y_1-y_2) \otimes y_1 \, \mu (dy_1 \times dy_2) \\
&+ \frac{1}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times B_\sigma}
y_2 \otimes (y_1-y_2) \, \mu (dy_1 \times dy_2) \\
&+ \frac{1}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times A_{\sigma, 2\eta}}
y_1 \otimes y_1 \, \mu (dy_1 \times dy_2) \,.
\end{split}
\end{equation}
For the last equality, we used again the fact that
\begin{equation} \label{E:support}
(y_1, y_2) \in \text{supp}\,[\mu] \Longrightarrow
\|y_1 - y_2 \|\leq \eta \,.
\end{equation}
From \eqref{E:diff5} and \eqref{E:support}, using the facts
that $\|y_1\| \leq \sigma + \eta$ for $y_1 \in B_{\sigma+\eta}$
and $\|y_2\| \leq \sigma$ for $y_2 \in B_\sigma$,
we may conclude that
\begin{equation} \label{E:diff6}
\begin{split}
\|T_2\| &\leq \frac{\sigma + \eta}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times B_\sigma}
\|y_1-y_2\|\, \mu (dy_1 \times dy_2) \\
&+ \frac{\sigma}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times B_\sigma}
\|y_1-y_2\|\, \mu (dy_1 \times dy_2) \\
&+ \frac{1}{\sigma^d \ensuremath{\nu}_d}
\iint_{B_{\sigma + \eta} \times A_{\sigma, 2\eta}}
\|y_1\|^2 \, \mu (dy_1 \times dy_2) \\
&\leq \frac{\sigma + c}{\sigma^d \ensuremath{\nu}_d}
\eta \int_{B_{\sigma + \eta}} \alpha (dy_1)
+ \frac{\sigma}{\sigma^d \ensuremath{\nu}_d}
\eta \int_{B_{\sigma + \eta}} \alpha (dy_1) \\
&+ \frac{1}{\sigma^d \ensuremath{\nu}_d}
\int_{A_{(\sigma-\eta)^+, 2\eta}} \|y_1\|^2 \, \alpha (dy_1) \,,
\end{split}
\end{equation}
where $(\sigma - \eta)^+ = \text{max} \{\sigma - \eta, 0\}$.
Since $\alpha \in \ensuremath{\mathcal{P}}^\lambda_\infty (\ensuremath{\mathbb{R}}^d)$ and
$\eta \leq c$,
\begin{equation} \label{E:ball}
\int_{B_{\sigma + \eta}} \alpha (dy_1) \leq
\lambda \int_{B_{\sigma + \eta}} dy_1 =
\lambda (\sigma + \eta)^d \nu_d \leq
\lambda (\sigma + c)^d \nu_d
\end{equation}
and
\begin{equation} \label{E:inertia2}
\begin{split}
\int_{A_{(\sigma-\eta)^+, 2\eta}} \|y_1\|^2 \, \alpha (dy_1) & \leq
\lambda \int_{A_{(\sigma-\eta)^+, 2\eta}} \|y_1\|^2 \, dy_1 \\
&=\lambda s_d ((\sigma - \eta)^+, \sigma + \eta) \\
&\leq 2 \lambda \eta \frac{\omega_{d-1}}{d+2}
\frac{(\sigma + c)^{d+2}}{\eta + c} \\
&\leq 2 \lambda \eta \frac{d}{d+2} \ensuremath{\nu}_d
\frac{(\sigma + c)^{d+2}}{c} \,,
\end{split}
\end{equation}
where we used \eqref{E:inertia} with $a= (\sigma - \eta)^+$,
$b = \sigma + \eta$ and $B = \sigma + c$.
Combining \eqref{E:diff6}, \eqref{E:ball} and \eqref{E:inertia2},
we obtain
\begin{equation} \label{E:diff7}
\|T_2\| \leq \frac{2 \sigma + c}{\sigma^d}
\lambda (\sigma + c)^d \eta
+ \frac{2 \lambda}{\sigma^d}
\frac{d}{d+2}
\frac{(\sigma + c)^{d+2}}{c} \eta
\end{equation}
From \eqref{E:diff1}, \eqref{E:diff3} and \eqref{E:diff7},
it follows that
\begin{equation}
\begin{split}
\|\Sigma_\alpha (x, \sigma) - \Sigma_\beta (x, \sigma) \|
&\leq \lambda \frac{ d}{d+2} \frac{(\sigma + c)^{d+2}}{c \sigma^d}
\ensuremath{W_\infty} (\alpha, \beta) \\
&+ \lambda \frac{(2 \sigma + c)(\sigma + c)^d}{\sigma^d}
\, \ensuremath{W_\infty} (\alpha, \beta) \\
&+ \lambda \frac{2d}{(d+2)} \frac{(\sigma + c)^{d+2}}{c \sigma^d }
\, \ensuremath{W_\infty} (\alpha, \beta) \\
&= \lambda A(\sigma, d, c) \ensuremath{W_\infty} (\alpha, \beta) \,.
\end{split}
\end{equation}
Since $x \in \ensuremath{\mathbb{R}}^d$ is arbitrary, the claim follows.
\end{proof}
We now derive a consistency result and estimates for
the rate of convergence of empirical approximations to multiscale
covariance fields. The following result is a $W_\infty$-counterpart
to the theorem by Fournier and Guillin stated above.
\begin{theorem}[Garc\'{\i}a-Trillos and Slep\v{c}ev \cite{gts15}]
\label{T:slepcev}
Let $\Omega\subset \ensuremath{\mathbb{R}}^d$ be a bounded connected open subset with Lipschitz boundary. Let $\alpha$ be a probability measure on $\Omega$ with density $f_\alpha \colon \Omega \rightarrow(0,\infty)$ such that there exists $\lambda \geq 1$ with $\lambda^{-1} \leq f_\alpha(x) \leq \lambda$, for all $x\in \Omega$, and let $y_1, \ldots, y_n$ be i.i.d. random variables with distribution $\alpha$. Then, there exist constants $c_1, C_1, C_2 > 0$, depending only on $\Omega$ and $\lambda$, such that for all $n \in \mathbb{N}$ and $p > 1$,
\begin{equation*}\mathbb{P} \big( W_\infty(\alpha,\alpha_n)
\leq (C_1+ C_2 \sqrt{p}) \,r_d(n) \big) \geq 1- c_1 n^{-p} \,,
\end{equation*} where $r_2(n) = \frac{\ln(n)^{3/4}}{n^{1/2}}$ and
$r_d(n) = \frac{\ln(n)^{1/d}}{n^{1/d}}$, for $d\geq 3.$
\end{theorem}
\begin{corollary}[Consistency for the Truncation Kernel] \label{C:ctf-infty}
Let $\alpha$ be a probability measure on $\ensuremath{\mathbb{R}}^d$ with density $f_\alpha$ and let $\Omega_\alpha$ be the interior of the support of $\alpha$. Assume that $\Omega_\alpha$ is bounded and connected with Lipschitz boundary $\partial \Omega_\alpha$. Furthermore, assume that there exists $\lambda \geq 1$ such that $\lambda^{-1}\leq f_\alpha(z) \leq \lambda$, for all $z \in \Omega_\alpha$. If $y_i$,
$i \in \mathbb{N}$, are i.i.d. random variables with
distribution $\alpha$, then, for any $p > 1$, there are constants
$C = C (\Omega_\alpha, \lambda, p) > 0$ and
$c_1 = c_1 (\Omega_\alpha, \lambda) >0$ such that
\begin{equation*}\mathbb{P} \left( \, \sup_{x\in\ensuremath{\mathbb{R}}^d}
\left\| \Sigma_{\alpha_n} (x, \sigma) - \Sigma_\alpha (x, \sigma) \right\|
\leq C \,r_d(n) \right) \geq 1- c_1 n^{-p} \,.
\end{equation*}
Here, $\alpha_n = \sum_{i=1}^n \delta_{y_i} /n$.
\end{corollary}
\begin{proof}
We use Theorem \ref{T:slepcev} and write
$C' = C_1 + \sqrt{p} \, C_2$.
Theorem \ref{T:stab-trunc} implies that there is a constant
$C'' = C'' (\Omega_\alpha, \lambda) > 0$
such that
\begin{equation}
\sup_{x\in\ensuremath{\mathbb{R}}^d} \left\| \Sigma_{\alpha_n} (x, \sigma) -
\Sigma_\alpha (x, \sigma) \right\| \leq C'' \,
\ensuremath{W_\infty} (\alpha, \alpha_n) \,.
\end{equation}
Thus,
\begin{equation}
\begin{split}
\mathbb{P} \, &\Big(\,\sup_{x\in\ensuremath{\mathbb{R}}^d}
\left\| \Sigma_{\alpha_n} (x, \sigma) - \Sigma_\alpha (x, \sigma) \right\|
\leq C' C'' r_d (n) \Big) \geq \\ &\geq \mathbb{P}
\big( \ensuremath{W_\infty} (\alpha, \alpha_n) \leq C' r_d (n) \big) \geq 1 - c_1 n^{-p} \,.
\end{split}
\end{equation}
The claim follows by setting $C = C' C''$.
\end{proof}
\begin{corollary}\label{C:trunc-stab}
Let $\sigma > 0$ and $p > 1$. Under the assumptions of Corollary \ref{C:ctf-infty}, for the truncation kernel, there exist $N=N(\sigma, \Omega_\alpha, \lambda) \in \mathbb{N}$ and a constant $A = A (\sigma, \Omega_\alpha,\lambda) > 0$ such that
\begin{equation*}
\mathbb{E} \left[ \, \sup_{x\in\ensuremath{\mathbb{R}}^d} \left\| \Sigma_{\alpha_n} (x, \sigma)
- \Sigma_\alpha (x, \sigma) \right\| \right] \leq A r_d(n),
\end{equation*} for all $n\geq N.$
\end{corollary}
\begin{proof}
We apply to $\ensuremath{W_\infty} (\alpha, \alpha_n)$ the identity
$\mathbb{E}(Z) = \int_{0}^\infty\mathbb{P}(Z>t)\,dt$
that is valid for any non-negative random variable $Z$ with finite
first moment. Since
$W_\infty(\alpha,\alpha_n) \leq D =\mathrm{diam}(\Omega_\alpha)$,
we get
\begin{equation} \label{E:ewass1}
\expect{\ensuremath{W_\infty} (\alpha, \alpha_n)} = \int_0^D
\mathbb{P} \left(\ensuremath{W_\infty} (\alpha, \alpha_n) > t \right) \, dt \,.
\end{equation}
Theorem \ref{T:slepcev} implies that
\begin{equation} \label{E:ewass2}
\mathbb{P} \left( W_\infty(\alpha,\alpha_n) >
(C_1+ \sqrt{p} \, C_2) \,r_d(n) \right) \leq n^{-p} \,.
\end{equation}
Let $t_0 = \min \{D, (C_1+ \sqrt{p} \, C_2)\,r_d(n) \}$.
From \eqref{E:ewass1} and \eqref{E:ewass2},
\begin{equation} \label{E:ewass3}
\begin{split}
\hspace{-0.1in}
\expect{\ensuremath{W_\infty} (\alpha, \alpha_n)} &= \int_0^{t_0}
\mathbb{P} \left(\ensuremath{W_\infty} (\alpha, \alpha_n) > t \right) dt
+ \int_{t_0}^D \mathbb{P}
\left(\ensuremath{W_\infty} (\alpha, \alpha_n) > t \right) dt \\
&\leq t_0 + D \,
\mathbb{P} \left(\ensuremath{W_\infty} (\alpha, \alpha_n) >
(C_1+ \sqrt{p} \, C_2)\,r_d(n) \right) \\
&\leq (C_1+ \sqrt{p} \, C_2) \, r_d (n) + D n^{-p} \,.
\end{split}
\end{equation}
Fixing $p$, say $p=2$, for $n$ sufficiently large, the dominant term
in this last expression is the one involving $r_d (n)$. Thus,
the claim follows from \eqref{E:ewass3} and
Theorem \ref{T:stab-trunc} applied to $\alpha$ and
$\beta = \alpha_n$.
\end{proof}
\begin{remark}
We carry out an experiment to test the convergence
rates obtained in Corollary \ref{C:trunc-stab}. We consider
the probability measure $\alpha$ supported on the unit circle
$\ensuremath{\mathbb{S}}^1 \subset \ensuremath{\mathbb{R}}^2$ induced by the normalized arc length
element $(2 \pi)^{-1} ds$. In this case, for the truncation kernel,
$\Sigma_\alpha$ was calculated explicitly in Example \ref{E:circle}.
We consider sets of i.i.d.\ samples of size $n$,
$10 \leq n \leq 10^6$. For each $n$, thirty sets of samples
are taken. For each such set, we compute
$\Sigma_{\alpha_n}$ and estimate the ``error'' as
$\max \|\Sigma_{\alpha_n} (x,\sigma) -
\Sigma_\alpha (x, \sigma)\|$, for $\sigma = 0.6$, where the
maximum is taken over gridpoints on a $24 \times 24$ grid
on the square $[-1.5,1.5] \times [-1.5,1.5]$. We let
$\varepsilon_n$ be the average error over all thirty sets of samples.
Figure \ref{F:error} shows a plot (in blue) of $\varepsilon_n$ in
log-log scale. To compare $\varepsilon_n$ with the predicted rates,
we use a least-squares fit, in log-log scale, of the form
$\varepsilon = C r_2 (n) = C \frac{\ln(n)^{3/4}}{n^{1/2}}$, also
shown in Figure \ref{F:error} (in red). The discrepancy between
the predicted and observed rates suggests that
Corollary \ref{C:trunc-stab} might not be optimal. A curve of the
form $\varepsilon = C n^{-1/2}$, shown in green, produces a
tighter fit to the data, suggesting that the optimal bound
might be $O(n^{-1/2})$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\linewidth]{error}
\end{center}
\caption{Log-log plots of experimental error rates (in blue) for
empirical covariance fields, rates predicted by Corollary
\ref{C:trunc-stab} (in red), and a least-squares fit of order
$n^{-1/2}$ (in green).}
\label{F:error}
\end{figure}
\end{remark}
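A scaled-down version of this experiment can be reproduced with the sketch below. All choices are ours and deliberately lighter than in the remark: a single replicate per sample size rather than thirty, and a fine deterministic discretization of the normalized arc-length measure as a stand-in for the closed-form field of Example \ref{E:circle}.

\begin{verbatim}
import numpy as np

def trunc_ctf(x, Y, sigma, weights):
    """CTF at x for the truncation kernel in R^2, with point weights."""
    Z = Y - x
    inside = np.sum(Z ** 2, axis=1) <= sigma ** 2
    W = weights[inside]
    return (Z[inside] * W[:, None]).T @ Z[inside] / (np.pi * sigma ** 2)

rng = np.random.default_rng(4)
sigma = 0.6

# Reference field: normalized arc length on the unit circle, discretized finely.
t_ref = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
Y_ref = np.c_[np.cos(t_ref), np.sin(t_ref)]
w_ref = np.full(t_ref.size, 1.0 / t_ref.size)

gx = np.linspace(-1.5, 1.5, 24)
grid = [np.array([a, b]) for a in gx for b in gx]
ref = [trunc_ctf(x, Y_ref, sigma, w_ref) for x in grid]

for n in (100, 1_000, 10_000):
    t_n = rng.uniform(0.0, 2.0 * np.pi, n)
    Y_n = np.c_[np.cos(t_n), np.sin(t_n)]
    w_n = np.full(n, 1.0 / n)
    err = max(np.linalg.norm(trunc_ctf(x, Y_n, sigma, w_n) - S)
              for x, S in zip(grid, ref))
    r2 = np.log(n) ** 0.75 / np.sqrt(n)
    print(n, err, r2)        # compare the sup-grid error with r_2(n)
\end{verbatim}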
\subsection{General Kernels}
We conclude the discussion of convergence of empirical
CTFs with a pointwise central limit theorem (CLT) that holds
for kernels in the full generality of Definition \ref{D:kernel}.
One may think of it as a CLT for each entry of the matrix
$\Sigma_\alpha (x, \sigma)$.
If $e_1, \ldots, e_d$ is an orthonormal basis of $\ensuremath{\mathbb{R}}^d$, the
$(i,j)$-entry of the covariance matrix in this coordinate system
is given by $\Sigma_\alpha (x, \sigma) (e_i, e_j)$, the bilinear
form $\Sigma_\alpha (x, \sigma)$ evaluated at $(e_i, e_j)$.
In matrix notation, this is the same as
$\inner{e_i}{\Sigma_\alpha (x, \sigma) e_j}$.
More generally, for fixed $u, v, x \in \ensuremath{\mathbb{R}}^d$ and $\sigma > 0$,
we consider
\begin{equation}
\Sigma_\alpha (x, \sigma) (u,v) =
\int (y-x) \otimes (y-x) (u,v) K(x,y, \sigma) \, \alpha (d y).
\end{equation}
Consider the random variable
\begin{equation} \label{E:randomz}
z_{uv} (y) = (y-x) \otimes (y-x) \, (u,v) K(x,y,\sigma),
\end{equation}
where $y$ has distribution $\alpha$. Clearly,
\begin{equation} \label{E:zmean}
\expect{z_{uv}} = \Sigma_\alpha (x, \sigma) (u,v) \,.
\end{equation}
\begin{theorem}[Central Limit] \label{T:clt}
If $f$ is as in Definition \ref{D:kernel}, then $z_{uv}$
has finite variance $\sigma^2_{uv}$. Moreover, if
$z_i$, $i \in \mathbb{N}$, are i.i.d. random variables with
the same distribution as $z_{uv}$, then
\[
\sqrt{n} \left( \frac{1}{n} \sum_{i=1}^n z_i -
\Sigma_\alpha (x, \sigma) (u,v) \right) \xrightarrow{d}
\mathcal{N} (0, \sigma_{uv}^2) \,,
\]
as $n \to \infty$, where the convergence is in distribution and
$\mathcal{N} (0,\sigma_{uv}^2)$ denotes the normal distribution with
mean zero and variance $\sigma_{uv}^2$.
\end{theorem}
\begin{proof}
We show that $z_{uv}$ has finite second moment. From
\eqref{E:randomz} and \eqref{E:norm},
\begin{equation} \label{E:zvariance}
\begin{split}
\int z^2_{uv} \, \alpha (dy) &\leqslant \|u\|^2 \|v\|^2
\int \|y-x\|^4 K^2 (x,y, \sigma) \, \alpha (dy) \\
&\leq \frac{\sigma^4 \|u\|^2 \|v\|^2}{C^2_d (\sigma)}
\int \frac{\|y-x\|^4}{\sigma^4}
f^2 \left(\frac{\|y-x\|^2}{\sigma^2} \right) \, \alpha (dy) \\
&\leq \frac{\sigma^4 \|u\|^2 \|v\|^2}{C^2_d (\sigma)} \, C^2\,.
\end{split}
\end{equation}
The last inequality follows from condition (c) in Definition
\ref{D:kernel} that ensures that $r^2 f^2(r) < C^2$, for any
$r>0$. The theorem now follows from a direct application
of the classical CLT.
\end{proof}
\begin{remark}
Note that \eqref{E:zvariance} implies that if
$\|u\| = \|v\| =1$, then
\begin{equation}
\sigma^2_{uv} = \int z^2_{uv} \, \alpha (dy)
- \left(\expect{z_{uv}} \right)^2
\leq \frac{C^2 \sigma^4}{C^2_d (\sigma)} \,,
\end{equation}
giving a uniform bound on the variance of $z_{uv}$ over
$x \in \ensuremath{\mathbb{R}}^d$ and $u,v \in \ensuremath{\mathbb{S}}^{d-1}$.
\end{remark}
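The theorem can be illustrated by simulation. In the sketch below (the distribution, directions, and sample sizes are all our own choices) we take $\alpha$ to be the standard Gaussian on $\ensuremath{\mathbb{R}}^2$, $x = 0$, and orthogonal directions $u, v$, so that $\Sigma_\alpha(0,\sigma)(u,v) = 0$ by symmetry; the scaled empirical means then behave like draws from $\mathcal{N}(0, \sigma_{uv}^2)$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
sigma, d = 1.0, 2
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def z_uv(Y):
    """z_uv(y) = <y - x, u><y - x, v> K(x, y, sigma) at x = 0, Gaussian kernel."""
    K = np.exp(-np.sum(Y ** 2, axis=1) / (2.0 * sigma ** 2))
    K /= (2.0 * np.pi * sigma ** 2) ** (d / 2)
    return (Y @ u) * (Y @ v) * K

n, reps = 500, 2000
stats = np.array([np.sqrt(n) * np.mean(z_uv(rng.standard_normal((n, d))))
                  for _ in range(reps)])
sd = np.std(z_uv(rng.standard_normal((200_000, d))))   # Monte Carlo proxy for sigma_uv
print(np.std(stats) / sd)                   # close to 1
print(np.mean(np.abs(stats) <= 1.96 * sd))  # close to 0.95
\end{verbatim}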
\section{Multiscale Fr\'{e}chet Functions} \label{S:frechet}
The mean of a random vector $y \in \ensuremath{\mathbb{R}}^d$ is a simple,
yet oftentimes informative, ``one-element'' summary of
the distribution of $y$. If $y$ has finite second moment and
is distributed according to the probability measure $\alpha$,
then the mean may be characterized more
geometrically as the unique minimizer of the Fr\'{e}chet
function
\begin{equation}
F_\alpha (x) = \expect{\|y-x\|^2} = \int \|y-x\|^2 \, \alpha (dy) \,,
\end{equation}
which measures the spread of $y$ about
$x \in \ensuremath{\mathbb{R}}^d$.
The mean, however, is not as effective
for complex distributions of practical interest such as multimodal
distributions or those supported on
nonlinear subsets of $\ensuremath{\mathbb{R}}^d$. In this section, we introduce a multiscale
analogue of the Fr\'{e}chet function that is rich in information
about the shape of the distribution of $y$. At each fixed scale,
the local minima of the function may be viewed as localized
analogues of the
mean, as illustrated in examples below. However, instead of
just focusing on the local extrema, we take the view that it is
more informative to investigate the behavior of the
full multiscale Fr\'{e}chet function, as this lets us
uncover more information about the distribution of $y$.
\begin{definition} \label{D:frechet}
Let $f \colon [0, \infty) \to \ensuremath{\mathbb{R}}$ be as in Definition
\ref{D:kernel} with associated kernel
$K \colon \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d \times (0, \infty) \to \ensuremath{\mathbb{R}}$.
The {\em multiscale Fr\'{e}chet function}
$V_\alpha \colon \ensuremath{\mathbb{R}}^d \times (0, \infty) \to \ensuremath{\mathbb{R}}$ is
defined as
\[
V_\alpha (x, \sigma) := \int \|y-x\|^2 K(x,y,\sigma) \, \alpha(d y)\,.
\]
\end{definition}
\begin{proposition} \label{P:trace}
For each $\sigma > 0$, the multiscale Fr\'{e}chet
function satisfies
\[
V_\alpha (x, \sigma) = \mathrm{tr} \, \Sigma_\alpha (x, \sigma) \,.
\]
\end{proposition}
\begin{proof}
Let $\{e_1, \ldots, e_d\} \subset \ensuremath{\mathbb{R}}^d$ be an orthonormal
basis. Then,
\begin{equation}
\|y-x\|^2 = \sum_{i=1}^d \inner{y-x}{e_i}^2 =
\sum_{i=1}^d (y-x) \otimes (y -x) (e_i, e_i) \,.
\end{equation}
Hence,
\begin{equation}
\begin{split}
V_\alpha (x, \sigma) &=
\sum_{i=1}^d \int (y-x) \otimes (y -x) (e_i, e_i)
K(x,y,\sigma) \, \alpha (dy) \\
&= \sum_{i=1}^d \left( \int (y-x) \otimes (y -x)
K(x,y,\sigma) \, \alpha (dy) \right) (e_i, e_i) \\
&= \sum_{i=1}^d \Sigma_\alpha (x, \sigma) (e_i, e_i)
= \text{tr} \, \Sigma_\alpha (x, \sigma) \,,
\end{split}
\end{equation}
as claimed.
\end{proof}
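As a small illustration of Definition \ref{D:frechet} and of the trace identity just proved, the sketch below (the two-cluster data, the scale, and the grid are our own illustrative choices and differ from those of Example \ref{E:frechet1} below) evaluates the empirical multiscale Fr\'{e}chet function of a bimodal sample along a line, verifies $V = \mathrm{tr}\,\Sigma$ at one point, and reports the grid local minima, which sit near the two cluster centers at this scale.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
# Two clusters in R^2, 200 points each, centered at (-2, 0) and (2, 0).
Y = np.vstack([rng.normal([-2.0, 0.0], 0.6, (200, 2)),
               rng.normal([2.0, 0.0], 0.6, (200, 2))])
sigma = 1.0

def gauss_w(x, Y, sigma):
    d = Y.shape[1]
    sq = np.sum((Y - x) ** 2, axis=1)
    return np.exp(-sq / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2) ** (d / 2)

def V(x, Y, sigma):
    """Empirical multiscale Frechet function."""
    return np.mean(gauss_w(x, Y, sigma) * np.sum((Y - x) ** 2, axis=1))

def ctf(x, Y, sigma):
    """Empirical multiscale CTF, for checking Proposition P:trace."""
    w = gauss_w(x, Y, sigma)
    Z = Y - x
    return (Z * w[:, None]).T @ Z / Y.shape[0]

x0 = np.array([0.5, -0.3])
assert np.isclose(V(x0, Y, sigma), np.trace(ctf(x0, Y, sigma)))  # V = tr Sigma

# V along the horizontal axis: local minima appear near the two cluster centers.
xs = np.linspace(-4.0, 4.0, 81)
vals = np.array([V(np.array([x, 0.0]), Y, sigma) for x in xs])
mins = xs[1:-1][(vals[1:-1] < vals[:-2]) & (vals[1:-1] < vals[2:])]
print(mins)
\end{verbatim}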
\begin{corollary}[Stability] \label{C:stability}
Let $f \colon [0, \infty) \to \ensuremath{\mathbb{R}}$ be as in Definition \ref{D:kernel}
with multiscale kernel $K$. Suppose that $f$ is differentiable
and there exists a constant $A>0$ such that
$r^{3/2} \, |f'(r)| \leq A$, $\forall r \geq 0$. Then,
there is a constant $A_f >0$, that depends only on $f$, such
that
\[
\sup_{x\in\ensuremath{\mathbb{R}}^d} \left| V_\alpha (x, \sigma) -
V_\beta (x, \sigma) \right| \leq
\frac{\sigma d A_f}{C_d(\sigma)} \, \ensuremath{W_1} (\alpha,\beta) \,,
\]
for any $\alpha,\beta\in\ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$ and any $\sigma>0$.
\end{corollary}
\begin{proof}
The result follows from Proposition \ref{P:trace},
Theorem \ref{T:stab} and the fact that for any $d \times d$
matrix $X$, $| \text{tr} \, X| \leq d \, \|X\|$, where
$\| X \|$ is the Frobenius norm of $X$.
\end{proof}
Similarly, Corollary \ref{C:consistency} and
Proposition \ref{P:trace} yield the following consistency
result for multiscale Fr\'{e}chet functions.
\begin{corollary}[Consistency]
Suppose that $\alpha \in \ensuremath{\mathcal{P}}_1 (\ensuremath{\mathbb{R}}^d)$. Let $y_i \in \ensuremath{\mathbb{R}}^d$,
$i \in \mathbb{N}$, be i.i.d. random variables with distribution
$\alpha$ and $K$ a multiscale kernel as in Theorem \ref{T:stab}.
Then, for each fixed $\sigma > 0$,
\begin{equation*}
\sup_{x \in \ensuremath{\mathbb{R}}^d}
\left| V_\alpha(x,\sigma) - V_{\alpha_n}(x,\sigma) \right|
\xrightarrow{n\uparrow\infty} 0
\end{equation*} almost surely.
\end{corollary}
The following result about the convergence of multiscale
Fr\'{e}chet functions is an immediate consequence of
Corollary \ref{C:rates} and Corollary \ref{C:stability}.
\begin{corollary}
Let $f$ be as in Theorem \ref{T:stab} and $\sigma > 0$.
Suppose that $\alpha \in \ensuremath{\mathcal{P}}_3 (\ensuremath{\mathbb{R}}^d)$ and
$y_i$, $i \in \mathbb{N}$, are i.i.d. random
variables with distribution $\alpha$. Then, there is
a constant $\beta> 0$, that depends only on $d$,
such that
\begin{equation*}
\begin{split}
\expect{\, \sup_{x \in \ensuremath{\mathbb{R}}^d}\left| V_\alpha(x,\sigma)
- V_{\alpha_n}(x,\sigma) \right| }&\leq \\
\frac{\sigma d A_f \beta}{C_d (\sigma)} m_3 (\alpha) &\cdot
\begin{cases}
n^{-\frac{2}{3}} + n^{-\frac{1}{2}} & \text{if $d=1$;}\\
n^{-{\frac{2}{3}}} + n^{-\frac{1}{2}}\log(1+n) & \text{if $d=2$;}\\
n^{-\frac{2}{3}} +n^{-\frac{1}{d}} & \text{if $d\geq 3$.}
\end{cases}
\end{split}
\end{equation*}
\end{corollary}
\begin{remark}
Analogous stability and consistency results for the truncation kernel
follow from Theorem \ref{T:stab-trunc}, Corollary \ref{C:ctf-infty}
and Corollary \ref{C:trunc-stab}.
\end{remark}
For more general kernels, the following pointwise
central limit theorem holds. For fixed $x \in \ensuremath{\mathbb{R}}^d$
and $\sigma > 0$, let
\begin{equation}
t (y) = \|y-x\|^2 K(x,y,\sigma),
\end{equation}
whose expected value is
$\expect{t} = V_\alpha (x, \sigma)$. As in
Theorem \ref{T:clt}, the variance of $t$ is finite and
denoted $\sigma^2_t$.
\begin{theorem}[Central Limit] \label{T:frechetclt}
Let $f$ be as in Definition \ref{D:kernel}. If $t_i \in \ensuremath{\mathbb{R}}$,
$i \in \mathbb{N}$, are i.i.d. random variables with the same
distribution as $t$, then
\[
\sqrt{n} \left( \frac{1}{n} \sum_{i=1}^n t_i -
V_\alpha (x, \sigma) \right) \xrightarrow{d}
\mathcal{N} (0, \sigma_t^2) \,,
\]
as $n \to \infty$, where the convergence is in distribution and
$\mathcal{N} (0,\sigma_t^2)$ denotes the normal distribution with
mean zero and variance $\sigma_t^2$.
\end{theorem}
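As a quick numerical illustration of Theorem \ref{T:frechetclt}, the following sketch (in Python) simulates the statistic $t$ for the Gaussian kernel and checks that the standardized sample mean of the $t_i$ is approximately standard normal. The normalization $C_d(\sigma)=(2\pi\sigma^2)^{d/2}$ used below is an assumption made for concreteness, consistent with the convolution representation in Proposition \ref{P:fourier}; all variable names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 2, 1.0
x = np.zeros(d)
C = (2 * np.pi * sigma**2) ** (d / 2)   # assumed normalization C_d(sigma)

def t_stat(y):
    # t(y) = ||y - x||^2 K(x, y, sigma) for the Gaussian kernel
    r2 = np.sum((y - x) ** 2, axis=-1)
    return r2 * np.exp(-r2 / (2 * sigma**2)) / C

# alpha = standard Gaussian on R^2; many replications of the mean of n samples
n, reps = 500, 2000
ti = t_stat(rng.standard_normal((reps, n, d)))   # shape (reps, n)
means = ti.mean(axis=1)

V_hat = ti.mean()                                # proxy for V_alpha(x, sigma)
z = np.sqrt(n) * (means - V_hat) / ti.std()      # standardized statistic
print("mean of z:", z.mean(), " variance of z:", z.var())   # expected: ~0 and ~1
\end{verbatim}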
Multiscale Fr\'{e}chet functions
not only give stable representations of probability measures,
but any probability measure $\alpha$ may be fully recovered
from its multiscale Fr\'{e}chet function associated with the
Gaussian kernel, as the following result shows.
\begin{proposition} \label{P:fourier}
Let $\sigma > 0$ be fixed.
Any probability measure $\alpha$ is completely determined by the
Fr\'{e}chet function $V_\alpha (\cdot, \sigma)$ associated
with the Gaussian kernel at scale $\sigma$.
\end{proposition}
\begin{proof}
Let $h_\sigma \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ be given by
\begin{equation}
h_\sigma (x) = \frac{\|x\|^2}{(2 \pi \sigma^2)^{d/2}}
\exp{\left(- \frac{\|x\|^2}{2 \sigma^2}\right)} \,.
\end{equation}
Then, for the Gaussian kernel, we may express the
multiscale Fr\'{e}chet function as the convolution
$V_\alpha (x, \sigma) = (h_\sigma \ast \alpha) (x)$.
Under Fourier transform, for each fixed $\sigma > 0$, we
obtain
\begin{equation}
\widehat{V}_\alpha (\xi, \sigma) =
\widehat{h}_\sigma (\xi) \, \phi_\alpha (-2 \pi \xi) \,,
\end{equation}
where $\phi_\alpha$ is the characteristic function of
$\alpha$ defined as $\phi_\alpha (\xi) =
\int e^{i \inner{x}{\xi}} \, \alpha (dx)$. Therefore,
\begin{equation} \label{E:phi}
\phi_\alpha (\xi) = \widehat{V}_\alpha
(-\xi / 2 \pi, \sigma) / \, \widehat{h}_\sigma (-\xi / 2 \pi)
\end{equation}
provided that $\widehat{h}_\sigma (-\xi / 2 \pi) \ne 0$.
A calculation shows that
\begin{equation}
\widehat{h}_\sigma (- \xi / 2 \pi) =
\sigma^2 \left(d - \frac{\sigma^2 \|\xi\|^2}{\pi} \right)
\exp{\left(- \frac{\sigma^2 \|\xi\|^2}{2 \pi} \right)} \,,
\end{equation}
which only vanishes at points $\xi$ on the sphere of radius
$\rho_\sigma = \sqrt{\pi d}/ \sigma$ about the origin.
Thus, \eqref{E:phi} implies that we can recover
$\phi_\alpha (\xi)$ from $\widehat{V}_\alpha (\cdot, \sigma)$,
if $\|\xi\| \ne \sqrt{\pi d}/ \sigma$. By continuity, we can recover
$\phi_\alpha (\xi)$, for any $\xi$. The claim now follows
from the fact that the characteristic function $\phi_\alpha$
determines $\alpha$ \cite{dudley}.
\end{proof}
The following examples illustrate how information about
the shape of data can be extracted from multiscale
Fr\'{e}chet functions.
\begin{example} \label{E:frechet1}
We consider $n=400$ data points distributed into two clusters
of 200 points, each sampled from a Gaussian of variance $0.36$
centered at different points. The data points are plotted in
blue in Fig.\,\ref{F:frechet1}(a), which also shows
the empirical Fr\'{e}chet function $V_n$ at scale $\sigma = 3$. The
local minima of $V_n$ capture what is perceived as the ``centers''
of the two clusters at that scale. However, more information about
the data distribution can be uncovered from $V_n$. For example,
the local minima may be viewed as attractors of the (negative)
gradient field $- \nabla V_n$, indicated by the arrows in the figure.
The stable manifold of each attractor, which comprises the points that
move toward that attractor under the associated flow, may be
viewed as a cluster inferred from the data at that scale. These
clusters are delimited by the repellers of the system, which
correspond to the local maxima of $V_n$. Fig.\,\ref{F:frechet1}(b)
shows how $V_n$ varies across scales, highlighting the bifurcation
of the attractors (in red) and repellers (in green) as $\sigma$ changes.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\linewidth]{vfunction-v1}
\qquad & \qquad
\includegraphics[width=0.3\linewidth]{vsurface} \\
(a) \qquad & \qquad (b)
\end{tabular}
\end{center}
\caption{(a) Fr\'{e}chet function for data on the line
(highlighted in blue) computed with the Gaussian kernel at
scale $\sigma = 3$; (b) Fr\'{e}chet function across scales.}
\label{F:frechet1}
\end{figure}
In data analysis, such bifurcation diagrams may find
several applications. For example, if the data
represent the distribution of some phenotypic trait for
two species that have evolved from a single group, the
multiscale Fr\'{e}chet function and the associated
bifurcation diagram let us create an evolutionary model
for the trait from the observed data.
\end{example}
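The behavior described in this example can be reproduced with a short computation. The sketch below, which assumes the normalized Gaussian kernel of Proposition \ref{P:fourier} and uses illustrative function names, evaluates the empirical Fr\'{e}chet function of two one-dimensional clusters on a grid and reports its local minima (attractors) and maxima (repellers) at several scales; these are the raw ingredients of the bifurcation diagram in Fig.\,\ref{F:frechet1}(b).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# two clusters of 200 points each, sampled from Gaussians of variance 0.36
data = np.concatenate([rng.normal(-2.0, 0.6, 200), rng.normal(2.0, 0.6, 200)])

def V_emp(x, sigma):
    # empirical Frechet function with an (assumed) normalized Gaussian kernel
    diff = x[:, None] - data[None, :]
    K = np.exp(-diff**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    return (diff**2 * K).mean(axis=1)

grid = np.linspace(-6, 6, 1201)
for sigma in [0.5, 1.0, 2.0, 3.0]:
    v = V_emp(grid, sigma)
    mins = grid[1:-1][(v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])]   # attractors
    maxs = grid[1:-1][(v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])]   # repellers
    print(f"sigma={sigma}: attractors {np.round(mins, 2)}, repellers {np.round(maxs, 2)}")
\end{verbatim}
At small scales each cluster is expected to produce its own attractor, and as $\sigma$ grows the attractors eventually merge, in line with the bifurcation shown in panel (b).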
\begin{example} \label{E:frechet2}
Here we consider the dataset in $\ensuremath{\mathbb{R}}^2$ shown in panel (a)
of Fig.\,\ref{F:frechet2}. Panels (b)--(h) show the Fr\'{e}chet
function for the Gaussian kernel calculated at increasing scales.
The gradient field $- \nabla V_n$ at scale $\sigma = 2.25$ is
depicted in panel (a) of Fig.\,\ref{F:2dfield} along with the
two attractors $p_1$ and $p_2$, and their stable manifolds
that were estimated numerically.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{ccccc}
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-data}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-1}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-1_5}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-2}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-3}
\end{tabular} \\
data & $\sigma = $1.0 & $\sigma = 1.5 $ & $\sigma = 2.0$ &
$\sigma = 3.0 $
\vspace{0.1in} \\
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-4_5}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-5}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-6}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-7}
\end{tabular} &
\begin{tabular}{c}
\includegraphics[width=0.14\linewidth]{2clusters-trace-9}
\end{tabular} \\
$\sigma = 4.5$ & $\sigma = 5.0 $ & $\sigma = 6.0 $ &
$\sigma = 7.0$ & $\sigma = 9.0$
\end{tabular}
\end{center}
\caption{Heat maps of the multiscale Fr\'{e}chet function
for 2D data at increasing scales computed with the
Gaussian kernel.}
\label{F:frechet2}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\begin{tabular}{c}
\includegraphics[width=0.32\linewidth]{2clusters-manifolds-2_25}
\end{tabular}
\qquad & \qquad
\begin{tabular}{c}
\includegraphics[width=0.35\linewidth]{2clusters-localpca-2_25}
\end{tabular}
\\
(a) \qquad & \qquad (b)
\end{tabular}
\end{center}
\caption{(a) 2D data, attractors and their stable manifolds at
a fixed scale ($\sigma=2.25$); (b) gradient vector field and
covariance tensors at the attractors.}
\label{F:2dfield}
\end{figure}
The stable manifolds may be viewed as estimations
at scale $\sigma = 2.25$ of clusters of the underlying probability
measure $\alpha$ from which the data was sampled. Panel (b)
shows the gradient field and the covariance tensors at the attractors
depicted as ellipses with principal radii proportional to the
square root of the eigenvalues of the covariance matrix. This
may be viewed as a localized analogue of principal component
analysis (PCA) that is able to uncover geometry that is not
detectable with standard PCA. Analysis of the spectra
of $\Sigma_n (p_i, \sigma)$, $i =1,2$, suggests that the data
is organized around two one-dimensional clusters, whereas
standard PCA is not sensitive to the local dimensionality because
of the orientation of the clusters.
\end{example}
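The localized PCA step of this example admits a simple numerical sketch. We assume here that the empirical covariance tensor takes the kernel-weighted form $\Sigma_n(p,\sigma)=\frac{1}{n}\sum_i K(p,y_i,\sigma)\,(y_i-p)(y_i-p)^T$ with a Gaussian kernel; the kernel normalization and the function names below are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
# two noisy, roughly one-dimensional clusters in R^2
t1, t2 = rng.uniform(-3, 3, 300), rng.uniform(-3, 3, 300)
data = np.vstack([np.c_[t1, 0.1 * rng.standard_normal(300) + 2.0],
                  np.c_[0.1 * rng.standard_normal(300) - 2.0, t2]])

def sigma_emp(p, scale):
    # empirical covariance tensor at p with an (assumed) normalized Gaussian kernel
    diff = data - p
    w = np.exp(-np.sum(diff**2, axis=1) / (2 * scale**2)) / (2 * np.pi * scale**2)
    return (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).mean(axis=0)

for p in [np.array([0.0, 2.0]), np.array([-2.0, 0.0])]:
    evals, evecs = np.linalg.eigh(sigma_emp(p, scale=1.0))
    # one dominant eigenvalue indicates a locally one-dimensional cluster;
    # the leading eigenvector approximates its local direction
    print("point", p, "eigenvalues", np.round(evals, 4),
          "leading direction", np.round(evecs[:, -1], 3))
\end{verbatim}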
These examples are intended as proof-of-concept
illustrations. Topological and other methods for extracting
structural information from $V_n$ will be explored in
forthcoming work.
\section{Hierarchical Manifold Clustering} \label{S:clustering}
Clustering is a central theme in pattern analysis with a rich
history; cf. \cite{jain}. One of the most studied forms of the problem
is that of partitioning a dataset into various subsets when there
is some form of spatial separation of the data into subgroups.
Motivated by problems in such areas as computer vision and
video analysis, cf.\,\cite{vidal07}, there has been a
growing interest in clustering data that are organized as a finite
union of possibly intersecting subspaces that have some
special geometric structure \cite{chen2009foundations,arias2011clustering,arias2013spectral,soltanolkotabi2014robust}. As illustrated in
Fig.\,\ref{F:clusters}, the data may
consist of noisy samples from an arrangement of (affine) linear
subspaces of a Euclidean space such as a collection of lines in
a plane, or an arrangement of lines and planes in $\ensuremath{\mathbb{R}}^3$. More
generally, the clusters may comprise a finite collection
of possibly non-linear, smooth submanifolds of a Euclidean space
that intersect transversely. Here we propose an approach to
manifold clustering based on CTFs. The basic idea is to use
covariance fields to incorporate directional information at
each data point. Formally, this is achieved via a section of the
tensor bundle $\ensuremath{\mathbb{R}}^d \times (\ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d)$ over
$\ensuremath{\mathbb{R}}^d$, as follows. Given a probability measure $\alpha$
and a multiscale kernel, let $\Sigma_\alpha (x, \sigma)$ be
the associated CTF. For each $\sigma > 0$, consider the section
$\iota_{\alpha; \sigma} \colon \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}^d \times (\ensuremath{\mathbb{R}}^d
\otimes \ensuremath{\mathbb{R}}^d)$ given by
$x \mapsto \left( x, \Sigma_\alpha (x, \sigma) \right)$.
On the total space of the tensor bundle, define the metric
\begin{equation}
\| (x, \Sigma) - (x', \Sigma')\|_\gamma =
\left( \Vert \Sigma - \Sigma' \Vert^2
+ \gamma^2 \Vert x - x' \Vert^2 \right)^{1/2} \,,
\end{equation}
where $x, x' \in \ensuremath{\mathbb{R}}^d$, $\Sigma, \Sigma'
\in \ensuremath{\mathbb{R}}^d \otimes \ensuremath{\mathbb{R}}^d$, and $\gamma \geq 0$
is a parameter that balances the contributions of the spatial
and tensor components. Note that $\|\cdot\|_0$ only defines
a pseudo-metric since it disregards ``horizontal''
distances.
For any subset $X \subseteq \ensuremath{\mathbb{R}}^d$, we denote
by $\mathbb{X}_{\alpha; \gamma, \sigma}$ the metric
space $(X, d_{\alpha; \gamma, \sigma})$, where
\begin{equation} \label{E:metric}
d_{\alpha; \gamma, \sigma} (x, x') =
\left\| \iota_{\alpha; \sigma} (x) -
\iota_{\alpha; \sigma} (x') \right\|_\gamma \,.
\end{equation}
For a dataset $A = \{a_1, \ldots, a_n\} \subset \ensuremath{\mathbb{R}}^d$,
the proposed clustering method is based on the single-linkage
method \cite{sibson73} applied to the finite metric space
$\mathbb{A}_{\alpha_n; \gamma, \sigma}$ associated with
the empirical measure $\alpha_n = n^{-1} \sum_{i=1}^n \delta_{a_i}$.
Equivalently, clustering is based
on the $n \times n$ affinity matrix $D$ whose $(i,j)$-entry is
\begin{equation} \label{E:affinity}
d_{ij} = d_{\alpha_n; \gamma,\sigma} (a_i, a_j) \,.
\end{equation}
Recall that single linkage
on a finite metric space $\mathbb{A}=(A, d_A)$ starts from $n$ clusters,
each a singleton $\{a_i\}$, $1 \leqslant i \leqslant n$,
sequentially merging the closest clusters until all data points
coalesce into a single cluster. Closeness of two clusters, say
$A_1, A_2 \subset A$, is measured by the inter-cluster distance
\begin{equation}
d_{sl}(A_1, A_2)=\min_{a \in A_1 , a' \in A_2} d_A (a, a') \,.
\label{E:distance}
\end{equation}
We choose single linkage because, as expounded below, it yields
stable dendrograms under rather mild assumptions on the
probability measure from which the data is sampled.
Combined with our stability and consistency
results for covariance fields, this guarantees that the
manifold clustering method is stable at all stages.
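A minimal sketch of this pipeline is given below. It assumes the Gaussian kernel and the kernel-weighted empirical covariance tensor used in the earlier sketches, builds the affinity matrix \eqref{E:affinity} from the tensor and spatial components, and relies on SciPy's single-linkage implementation; the names and parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
t = rng.uniform(-3, 3, 200)
data = np.vstack([np.c_[t, t], np.c_[t, -t]])          # two intersecting lines in R^2

def ctf(p, scale):
    # kernel-weighted empirical covariance tensor (assumed normalization)
    diff = data - p
    w = np.exp(-np.sum(diff**2, axis=1) / (2 * scale**2)) / (2 * np.pi * scale**2)
    return (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).mean(axis=0)

gamma, scale = 0.0, 0.4                                # gamma = 0: tensor information only
tensors = np.stack([ctf(p, scale) for p in data])      # shape (n, 2, 2)

# affinity matrix d_ij: Frobenius norm on tensors, gamma weight on positions
dT = np.linalg.norm(tensors[:, None] - tensors[None, :], axis=(2, 3))
dX = np.linalg.norm(data[:, None] - data[None, :], axis=2)
D = np.sqrt(dT**2 + gamma**2 * dX**2)

Z = linkage(squareform(D, checks=False), method="single")
labels = fcluster(Z, t=3, criterion="maxclust")        # cut the dendrogram at three clusters
print("cluster sizes:", np.bincount(labels)[1:])
\end{verbatim}
As in Example \ref{S:3lines} below, cutting the dendrogram at more clusters than the number of underlying manifolds helps isolate the neighborhood of the intersection.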
\subsection{Dendrogram Stability}
We denote a metric space by $\mathbb{X} = (X,d_X)$.
An ultrametric space is a pair $(X,u_X)$, where $u_X \colon
X\times X \to \mathbb{R}^+$ is a metric on $X$ that satisfies
the \emph{strong triangle inequality}
\begin{equation}
u_X(x,x')\leq \max \left\{ u_X(x,x''),u_X(x'',x') \right\} \,,
\end{equation}
for all $x,x',x''\in X$. Any such function $u_X$ is called
an \emph{ultrametric} on $X$.
As proved in \cite{memoli10}, dendrograms over a finite set
$X$ are in structure-preserving, bijective correspondence with
ultrametrics on $X$. In this formulation, a hierarchical clustering method
can be regarded as a map $\mathcal{H}:\mathcal{M}\rightarrow\mathcal{U}$
from finite metric spaces into finite ultrametric spaces. Henceforward,
$\mathcal{H}$ will denote the map given by single linkage hierarchical
clustering. It is known \cite{memoli10} that if $\mathbb{X} = (X,d_X)
\in \mathcal{M}$, then $\mathcal{H}(\mathbb{X})=(X,u_X)$ is given by
\begin{equation}
u_X(x,x') = \min_{x=x_0,\ldots,x_r=x'}\max_{i}d_X(x_i,x_{i+1}) \,.
\end{equation}
The minimum above is taken over all finite ordered sequences
$x_0, x_1,\ldots,x_r$ of points in $X$ such that $x_0=x$ and $x_r=x'$.
If $x, x' \in X$, then $u_X (x,x')$ may be interpreted as the
dendrogram level at which the clusters containing $x$ and $x'$
first merge. This is known as the {\em cophenetic\/}
distance between $x$ and $x'$.
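For a finite metric space the formula above can be evaluated directly. The following sketch computes the all-pairs minimax-path distances with a Floyd--Warshall-style recursion and verifies that they coincide with the cophenetic distances returned by SciPy's single-linkage routine; the names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 2))
D = squareform(pdist(X))

# u_X(x, x') = min over paths of the max edge: a Floyd-Warshall-style minimax closure
U = D.copy()
for k in range(len(X)):
    U = np.minimum(U, np.maximum(U[:, k][:, None], U[k, :][None, :]))

# cross-check against the cophenetic distances of single linkage
coph = squareform(cophenet(linkage(pdist(X), method="single")))
print("max discrepancy:", np.abs(U - coph).max())   # expected to be (numerically) zero
\end{verbatim}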
The main goal of this section is to formulate and prove
stability of the map $\mathcal{M} \ni
\mathbb{X}\mapsto\mathcal{H}(\mathbb{X})\in \mathcal{U}$.
The question of stability of single linkage clustering
can be approached using ideas related to the Gromov-Hausdorff
distance \cite{burago}, as follows. A correspondence $R$ between
two sets $X$ and $Y$ is a subset of $X\times Y$ such that
$\pi_1(R)=X$ and $\pi_2(R)=Y$, where $\pi_1$ and $\pi_2$
denote projections onto the first and second factors. Given
$X$ and $Y$, we denote by $\mathcal{R}(X,Y)$ the set of
all correspondences between $X$ and $Y$.
\begin{definition} Let $\mathbb{X}$ and $\mathbb{Y}$ be
compact metric spaces.
\begin{itemize}
\item[(i)]
The {\em distortion} of a correspondence $R$ between
$\mathbb{X}$ and $\mathbb{Y}$ is defined by
\[
\text{dis}\, (R;\mathbb{X},\mathbb{Y}) :=
\max_{(x,y),(x',y')\in R} \left| d_X(x,x')-d_Y(y,y') \right|.
\]
\item[(ii)] The {\em Gromov-Hausdorff} distance between
$\mathbb{X}$ and $\mathbb{Y}$ is given by
\[
d_{GH}(\mathbb{X},\mathbb{Y}):=\frac{1}{2}\inf_{R}
\text{dis}\,(R; \mathbb{X},\mathbb{Y}),
\]
where the infimum is taken over all correspondences between
$\mathbb{X}$ and $\mathbb{Y}$.
\end{itemize}
\end{definition}
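For finite metric spaces the distortion of a given correspondence is straightforward to evaluate and yields an upper bound on twice the Gromov-Hausdorff distance, as in the following small sketch (illustrative names).
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)
X = rng.standard_normal((40, 2))
Y = X + 0.05 * rng.standard_normal((40, 2))      # a small perturbation of X

dX, dY = squareform(pdist(X)), squareform(pdist(Y))

# the "diagonal" correspondence R = {(x_i, y_i)}:
# dis(R; X, Y) = max_{i,j} |d_X(x_i, x_j) - d_Y(y_i, y_j)|
dis_R = np.abs(dX - dY).max()
print("dis(R) =", dis_R, "  so  d_GH(X, Y) <=", dis_R / 2)
\end{verbatim}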
The following stability result is a generalization of
\cite[Proposition 26]{memoli10}.
\begin{proposition} \label{P:dendro}
For any $\mathbb{X}, \mathbb{Y} \in \mathcal{M}$ and
any correspondence $R \in \mathcal{R}(X,Y)$,
\[
\mathrm{dis}(R;\mathcal{H}(\mathbb{X}),\mathcal{H}
(\mathbb{Y}))\leq \mathrm{dis}(R;\mathbb{X},\mathbb{Y}) \,.
\]
As a consequence,
$d_{GH}(\mathcal{H}(\mathbb{X}),\mathcal{H}(\mathbb{Y}))
\leq d_{GH}(\mathbb{X},\mathbb{Y})$.
\end{proposition}
\begin{remark}
The claim of the proposition may be written, equivalently,
as follows. If $u_X$ and $u_Y$ denote the ultrametrics
produced by single linkage hierarchical clustering on
$\mathbb{X}$ and $\mathbb{Y}$, then
\begin{equation}\label{E:stab-gh}
|u_X(x,x')-u_Y(y,y')|\leq \max_{(x,y),(x',y')\in
R}|d_X(x,x')-d_Y(y,y')| \,,
\end{equation}
for any correspondence $R$ between $X$ and $Y$ and
all $(x,y),(x',y')\in R$.
\end{remark}
\begin{proof}[Proof of Proposition \ref{P:dendro}]
We prove (\ref{E:stab-gh}). Given a correspondence
$R \in \mathcal{R}(X,Y)$ and $(x,y),(x',y')\in R$, let
$x=x_0,x_1,\ldots,x_n=x'$ in $X$ be such that
$\max_i d_X(x_i,x_{i+1}) = u_X(x,x')$. Let $y_0=y$,
$y_n=y'$ and choose $y_1,\ldots,y_{n-1}\in Y$ such that
$(x_i,y_{i})\in R$ for all $i=1,\ldots,n-1$. This is possible since
any correspondence $R$ satisfies $\pi_1(R)=X$.
Notice that
\begin{equation} \label{E:ubound}
\begin{split}
u_Y(y,y') &\leq \max_{i} d_Y(y_{i},y_{i+1}) \\
&\leq \max_{i} \left( d_X(x_{i},x_{i+1}) + |d_X(x_i,x_{i+1})
- d_Y(y_i,y_{i+1})| \right)\\
&\leq \max_i d_X(x_i,x_{i+1}) + \max_{(x,y),(x',y')\in R}
| d_X(x,x')-d_Y(y,y')| \\
&=u_X(x,x') + \max_{(x,y),(x',y')\in R} | d_X(x,x')-d_Y(y,y')|.
\end{split}
\end{equation}
The claim follows since \eqref{E:ubound} also holds
if we reverse the roles of $X$ and $Y$.
\end{proof}
\begin{lemma}\label{lemma:sigma-ab}
Let $\alpha,\beta\in\mathcal{P}_\infty(\ensuremath{\mathbb{R}}^d)$ and $\sigma>0$.
If a kernel satisfies the conditions of Lemma \ref{L:kernel}, then
\begin{equation*}
\sup_{(a,b) \in R_\mu}
\left\|\Sigma_\alpha(a,\sigma) - \Sigma_\beta(b,\sigma) \right\|
\leq \frac{2 A_f \sigma}{C_d(\sigma)} \,
\sup_{(a,b) \in R_\mu} \|a-b\|\,,
\end{equation*}
for any coupling $\mu\in\Gamma(\alpha,\beta)$, where
$R_\mu := \text{supp} \, [\mu]$ and $A_f>0$ is as in
Lemma \ref{L:kernel}.
\end{lemma}
\begin{proof}
Let $\mu\in\Gamma(\alpha,\beta)$ and set
$\zeta = \sup_{(a,b) \in \text{supp} \, [\mu]}\|a-b\|$. Let $(y,y'),(a,b) \in
\text{supp}\, [\mu]$. In the notation of Lemma \ref{L:kernel},
setting $z_1 = y-a$ and $z_2 = y' - b$, we have
\begin{equation}
\|Q_\sigma(y-a) - Q_\sigma(y'-b)\|
\leq \frac{A_f \sigma}{C_d(\sigma)} \left(\|y-y'\| + \|a-b\|\right)
\leq \frac{2 A_f \sigma}{C_d(\sigma)} \zeta \,,
\end{equation}
where in the last inequality we used $\|y-y'\| \leq \zeta$ and
$\|a-b\| \leq \zeta$. Since
\begin{equation}
\left\|\Sigma_\alpha(a,\sigma) - \Sigma_\beta(b,\sigma) \right\|
\leq \iint \|Q_\sigma(y-a) - Q_\sigma(y'-b)\| \,\mu (dy\times dy') \,,
\end{equation}
the lemma follows.
\end{proof}
\begin{lemma}[Lemma 2.2 of \cite{dghlp}] \label{L:R-mu}
Let $\alpha, \beta \in \ensuremath{\mathcal{P}}_\infty (\ensuremath{\mathbb{R}}^d)$.
Then, for any coupling $\mu \in \Gamma(\alpha,\beta)$,
$R_\mu = \text{supp}\, [\mu]$ gives a correspondence between
$A=\text{supp} \, [\alpha]$ and $B=\text{supp} \, [\beta]$.
\end{lemma}
\begin{theorem}\label{T:stab-metric}
Let $\alpha, \beta \in \ensuremath{\mathcal{P}}_\infty(\ensuremath{\mathbb{R}}^d)$, $A = \text{supp} \, [\alpha]$, $B=\text{supp}\,[\beta]$, $\sigma > 0$ and $\gamma \geq 0$. Then, for any kernel satisfying the conditions of Lemma \ref{L:kernel},
\begin{equation*}
d_{GH} \left((A,d_{\alpha;\sigma,\gamma}),(B,d_{\beta;\sigma,\gamma})
\right) \leq \left(\frac{2 A_f \sigma}{C_d(\sigma)}+\gamma\right)
\ensuremath{W_\infty} (\alpha,\beta)\,,
\end{equation*}
with $A_f>0$ as in Lemma \ref{L:kernel}.
\end{theorem}
\begin{proof}
Let $\mu\in\Gamma(\alpha,\beta)$ be a coupling that realizes
$\ensuremath{W_\infty} (\alpha, \beta)$. By Lemma \ref{L:R-mu},
$R_\mu = \text{supp}\, [\mu]$ is a correspondence between
$A$ and $B$. Thus,
\begin{equation}
\begin{split}
d_{GH} \big(\mathbb{A}_{\alpha;\sigma,\gamma},\,
&\mathbb{B}_{\beta;\sigma,\gamma} \big) \leq
\frac{1}{2} \, \text{dis} (R_\mu;A,B) \\
&=\frac{1}{2} \sup_{(a,b),(a',b') \in R_\mu}
\left| d_{\alpha;\sigma,\gamma}(a,a') -
d_{\beta;\sigma,\gamma} (b, b') \right| \\
&\leq \frac{1}{2}\sup_{(a,b),(a',b')\in R_\mu}
\Big( \left\| \Sigma_\alpha(a,\sigma) - \Sigma_\beta (b,\sigma) \right\| + \\
&+ \big\|\Sigma_\alpha(a',\sigma) - \Sigma_\beta (b',\sigma) \big\| +
\gamma\|a-b\| + \gamma \|a'-b'\| \Big) \\
&\leq \sup_{(a,b )\in R_\mu} \left\|\Sigma_\alpha(a,\sigma) -
\Sigma_\beta (b,\sigma) \right\| +\gamma \sup_{(a,b)\in R_\mu}\|a-b\| \\
&\leq \left(\frac{2 A_f \sigma}{C_d(\sigma)}+\gamma\right)
\sup_{(a,b)\in R_\mu}\|a-b\|,
\end{split}
\end{equation}
where the last step follows from Lemma \ref{lemma:sigma-ab}.
The conclusion follows since $\ensuremath{W_\infty} (\alpha,\beta) =
\sup_{(a,b)\in R_\mu}\|a-b\|.$
\end{proof}
Combining Proposition \ref{P:dendro} and
Theorem \ref{T:stab-metric}, we obtain:
\begin{corollary}[Stability of Hierarchical Manifold Clustering]
Let $\alpha, \beta \in \ensuremath{\mathcal{P}}_\infty (\ensuremath{\mathbb{R}}^d)$ be probability
measures with finite support, $A = \text{supp} \, [\alpha]$,
$B=\text{supp}\,[\beta]$, $\sigma > 0$ and $\gamma \geq 0$. Then,
\begin{equation*}
d_{GH} \left(\mathcal{H}(\mathbb{A}_{\alpha; \gamma, \sigma}),
\mathcal{H}(\mathbb{B}_{\beta; \gamma, \sigma}) \right) \leq
\left(\frac{2 A_f \sigma}{C_d(\sigma)}+\gamma\right)
W_\infty(\alpha,\beta).\end{equation*}
\end{corollary}
\subsection{Comments About Consistency of Hierarchical Manifold Clustering}
A question that our paper leaves open is whether, under a reasonable generative model for sampling from a collection of intersecting manifolds, one may be able to prove that the empirical dendrogram converges in probability to a dendrogram that represents the spatial organization of the underlying manifolds. In the simpler context of clustering $n$ i.i.d. samples $\{x_1,\ldots,x_n\}$ from a compact metric space $(X,d_X)$ endowed with a Borel probability measure $\alpha_X$, it has been established in \cite{memoli10} that single linkage hierarchical clustering converges to a dendrogram whose hierarchical structure depends on the support of $\alpha_X$ in a precise way.
In the case of flat clustering, i.e. when the goal is to obtain a single partition of the dataset, consistency results for some multi-manifold clustering methods based on local PCA are given in \cite{arias2011clustering,arias2011spectral,arias2013spectral}.
\subsection{Examples and Applications}
Let $X = \{x_1, \ldots, x_n\}$ be a dataset in $\ensuremath{\mathbb{R}}^d$.
For $\gamma, \sigma > 0$, we apply the single linkage method to
the metric space $\mathbb{X}_{\alpha_n; \gamma,\sigma} =
(X, d_{\alpha_n; \gamma,\sigma})$, where $\alpha_n$ is
the empirical measure associated with $X$ and
$d_{\alpha_n; \gamma,\sigma}$ is the distance defined
in \eqref{E:metric}. The ultrametric associated with
$\mathcal{H} (\mathbb{X}_{\alpha_n; \gamma,\sigma})$
is abbreviated $u_{\alpha_n; \gamma,\sigma}$.
In this setting, identifying informative dendrogram cutoff levels
is often an important task, which can be approached in
different ways, depending on the nature of the problem.
For example, a cutoff level $h$ may be based on a
pre-assigned number of clusters, be learned from
training data, or be more exploratory. We give examples
that illustrate all three viewpoints.
\begin{example}[Lines and planes]
In this experiment we consider the unlabeled point cloud in
Fig.\,\ref{F:clusters}(a) that represents an arrangement of two
parallel planes and two lines that intersect the planes transversely.
Each plane contains $225$ points on a uniform grid and each
line contains $30$ equally spaced points. Cutting the dendrogram
at four clusters, our method finds the four affine
linear subspaces accurately with the Gaussian kernel at
$\sigma = 0.6$. In this case, it is important to choose
$\gamma \ne 0$ since the spatial component
of \eqref{E:affinity} is needed to discriminate the parallel planes.
In Fig.\,\ref{F:clusters}(a), the points are colored according to
cluster membership. In this case, the covariance tensors at
data points on the planes that are away from the cluster intersections
have two dominating eigenvalues, whereas for points on the lines
they have only one such eigenvalue. Thus, an
analysis of the spectrum of the covariance tensors at data
points lets us infer the dimension of each cluster.
\end{example}
\begin{example}[A Line Arrangement] \label{S:3lines}
In this example, the point-cloud data represents
three intersecting lines in $\ensuremath{\mathbb{R}}^2$, as shown in Fig.\,\ref{F:3lines}(a).
Each line segment is sampled at $200$ equally spaced points.
Since the slopes of the lines are different, we expect the covariance
matrices to be able to cluster the points without the aid of additional
spatial information. Thus, we set $\gamma = 0$ in \eqref{E:affinity} and $\sigma = 0.4$.
The number of clusters was set to six to test the ability of the
algorithm to detect not only the lines, but
also the three intersections. Fig.\,\ref{F:3lines}(b) shows the
single-linkage dendrogram, highlighting each of the six clusters.
The data points are colored according to cluster
membership.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cccc}
\includegraphics[width=0.22\linewidth]{3lines} &
\includegraphics[width=0.2\linewidth]{3lines-dendrogram}
\quad & \quad
\includegraphics[width=0.22\linewidth]{3lines-noise} &
\includegraphics[width=0.24\linewidth]{3lines-noise-dendrogram} \\
(a) & (b) \quad & \quad (c) & (d)
\end{tabular}
\end{center}
\caption{(a) an arrangement of three lines and (b) clustering
dendrogram; (c) noisy lines with outliers and (d) clustering
dendrogram.}
\label{F:3lines}
\end{figure}
As expected, well delineated clusters are detected away from the
intersection points because the covariance matrices are highly anisotropic
with principal axes that align well with the corresponding line
segments. Although the covariance matrices are not as anisotropic
near the intersection points, there are enough differences in their
behavior near the three intersection loci for the algorithm to be able to
place them into different clusters.
\end{example}
The next two examples are of a more exploratory nature in
that dendrogram cutoff was chosen through experimentation with
the data.
\begin{example}[Noisy Lines with Outliers] \label{E:noisylines}
This is a noisy version of Example \ref{S:3lines}, as shown
in Fig.\,\ref{F:3lines}(c). As before, each line is represented by
200 points, but we have added Gaussian noise of width $0.015$
to the data, as well as 180 outliers sampled from the uniform distribution
on a rectangle containing the lines. Because of the nature of the data, the
number of clusters was set to $m=80$ so that the three main clusters
did not get merged because of the outliers. The figure also shows
a line fitted to each of the three largest clusters using principal component
analysis. The method was able to sharply recover the three lines, even in
the presence of noise and outliers. The majority of the 80 clusters
are singletons of outliers and these are colored black in the figure.
We remark that the choice of $\sigma = 0.51$ is crucial when dealing with
data contaminated by noise. In this case, it was also important to set
$\gamma \ne 0$ to better cope with noise.
\end{example}
\begin{example}[Floor cracks]
We apply the clustering method to segmentation of two images of concrete
floor cracks. Panel (a) of Fig.\,\ref{F:cracks} shows the original images,
whereas panel (b) shows binary images obtained from an edge detection
algorithm. We cluster the foreground pixels of the binary
images. As in Example \ref{E:noisylines}, it is important to allow a fairly
large number of clusters so that the clusters that detect the main
cracks do not get merged because of the noisy pixels.
Panels (c) and (d) show the outputs (not to scale) of the clustering
algorithm.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cccc}
\begin{tabular}{c}
\includegraphics[width=0.17\linewidth]{f1} \\
\includegraphics[width=0.17\linewidth]{f2}
\end{tabular}
&
\begin{tabular}{c}
\includegraphics[width=0.17\linewidth]{f1_c} \\
\includegraphics[width=0.17\linewidth]{f2_c}
\end{tabular}
\quad & \quad
\begin{tabular}{c}
\includegraphics[width=0.2\linewidth]{f1_a}
\end{tabular}
&
\begin{tabular}{c}
\includegraphics[width=0.2\linewidth]{f2_a}
\end{tabular} \\
(a) & (b) \quad & \quad (c) & (d)
\end{tabular}
\end{center}
\caption{(a) original and (b) processed images of floor cracks;
(c) and (d) show clustering based on CTFs.}
\label{F:cracks}
\end{figure}
\end{example}
To further test the ability of the method to cluster intersecting
manifolds, we experimented with synthetic data comprising
multiple arrangements such as the intersecting lines in
Fig.\,\ref{F:clusters}(b).
\begin{example}
We consider three synthetic datasets of point clouds
representing random arrangements of: (i) three line segments
in $\ensuremath{\mathbb{R}}^2$; (ii) four curves in $\ensuremath{\mathbb{R}}^2$ that are either
line segments or arcs of parabolas; and (iii) three
patches of planes in $\ensuremath{\mathbb{R}}^3$. Each of these datasets
contains a total of 250 point clouds, 50 used for training
the algorithm and 200 test samples. The points in each
point cloud are labeled to allow quantification of the
accuracy of the output of the algorithm.
Fig.\,\ref{F:arrangements} shows a few samples from
each of these datasets.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.15\linewidth]{lines1}
\includegraphics[width=0.15\linewidth]{lines2}
\includegraphics[width=0.15\linewidth]{lines3}
\includegraphics[width=0.15\linewidth]{lines4}
\includegraphics[width=0.15\linewidth]{lines5}
\includegraphics[width=0.15\linewidth]{lines6}
\vspace{0.1in}\\
\includegraphics[width=0.15\linewidth]{mix1}
\includegraphics[width=0.15\linewidth]{mix2}
\includegraphics[width=0.15\linewidth]{mix7}
\includegraphics[width=0.15\linewidth]{mix4}
\includegraphics[width=0.15\linewidth]{mix8}
\includegraphics[width=0.15\linewidth]{mix6}
\vspace{0.1in}\\
\includegraphics[width=0.18\linewidth]{planes1}
\includegraphics[width=0.18\linewidth]{planes2}
\includegraphics[width=0.18\linewidth]{planes4}
\includegraphics[width=0.18\linewidth]{planes5}
\includegraphics[width=0.18\linewidth]{planes6}
\end{center}
\caption{Random arrangements of line segments (row 1),
segments of lines and parabolas (row 2), and plane
patches (row 3).}
\label{F:arrangements}
\end{figure}
Parameter values that optimize classification performance
are learned from the training samples. Note, however,
that even though the number $k$ of clusters is known, specifying a
height $h$ that yields precisely $k$ clusters may yield
undesirable results. For example, important clusters representing
different components of an arrangement of manifolds may
get merged due to the presence of outliers or the behavior near
the intersections. Thus, it is often
preferable to choose a lower cutoff level before this phenomenon
occurs, at the expense of getting a larger number $\ell$ of clusters.
In such situations, we select the largest $k$ clusters and
assign each point in the remaining $(\ell - k)$ clusters
to the closest of the top $k$ clusters. Experiments indicate
that a good baseline for the cutoff level $h$ is the mean
cophenetic distance, which for a point cloud $\{x_1, \ldots,
x_n\} \subset \ensuremath{\mathbb{R}}^d$ is given by
\begin{equation}
h_0 = \frac{2}{n (n-1)} \sum_{i<j}
u_{\alpha_n; \gamma,\sigma} (x_i, x_j) \,.
\end{equation}
In the learning phase, we
typically search for $h$ in a neighborhood of $h_0$
whose width is determined by the variance of the
distribution of the cophenetic distances.
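A sketch of this baseline computation is given next; it uses SciPy's \texttt{cophenet}, which returns the pairwise cophenetic distances of the dendrogram, and a generic point cloud as a stand-in for the affinity matrix \eqref{E:affinity}.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 2))
dvec = pdist(X)        # stand-in for the condensed form of the affinity matrix D

Z = linkage(dvec, method="single")
h0 = cophenet(Z).mean()                        # baseline cutoff: mean cophenetic distance
labels = fcluster(Z, t=h0, criterion="distance")
print("h0 =", h0, "  clusters at h0:", labels.max())
\end{verbatim}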
With the learned parameter values, the algorithm performs
well in all three cases. For each point cloud
we count the number of misclassified points and
calculate the average error (AE) and the mean error (ME) rates
over all test samples obtaining:
\begin{itemize}
\item[(i)] arrangements of lines: 9.59\% (AE) and 4.17\% (ME);
\item[(ii)] lines and parabolas: 9.93\% (AE) and 3.38\% (ME);
\item[(iii)] arrangements of planes: 7.00\% (AE) and 2.42\% (ME).
\end{itemize}
As expected, a closer inspection of the results reveals that
most of the errors occur at points near the intersections
of the clusters, where the covariance tensors are not as
informative for clustering purposes.
\end{example}
\section{Concluding Remarks} \label{S:summary}
We introduced the notion of multiscale covariance tensor
fields associated with Euclidean random variables and developed
a framework for the systematic study of the shape of data
using localized covariance tensors. We investigated foundational
questions such as stability and consistency of multiscale
CTFs, provided illustrations of how CTFs let us uncover
geometry underlying data, and applied the methods to manifold
clustering. We also introduced multiscale Fr\'{e}chet functions,
which are scalar fields derived from CTFs that fully capture
the distributions of random vectors. Multiscale Fr\'{e}chet
functions are particularly well suited for extension
of the methods of this paper to non-Euclidean random variables,
a problem that is receiving ever increasing attention in
data science. In this setting, the goal is to devise methods that
can cope with random variables taking values in spaces such
as Riemannian manifolds and more general metric spaces.
Unless restrictive assumptions are imposed on the sample
space and the distributions, CTFs may be difficult to define
in this nonlinear realm. In contrast, the Fr\'{e}chet function
formulation can be easily extended to metric spaces supporting
a diffusion kernel \cite{coifman06}. In forthcoming work, we will
investigate theoretical and computational aspects of such extensions,
including the accessibility of information residing
in multiscale Fr\'{e}chet functions, a problem that poses
computational challenges even in the case of high-dimensional
Euclidean random variables.
In this paper, we only considered radial basis kernels;
however, many results extend easily to more general kernels.
We emphasized the multiscale formulation largely because
of the questions that motivated this work. Nevertheless, the
majority of the results apply to kernels that are not
scale dependent.
Covariance tensor fields also suggest ways of formalizing
the notion of shape of Euclidean data and probability measures.
For example, for a distribution $\alpha$ with the property
that the covariance tensor field $\Sigma_\alpha (\cdot, \sigma)$
associated with a smooth kernel (such as the Gaussian kernel) is
non-singular for every $x \in \ensuremath{\mathbb{R}}^d$,
$\Sigma_\alpha^{-1}(\cdot, \sigma)$ defines a metric tensor
with close ties to $\alpha$. This poses the problem
of uncovering relationships between Riemannian metrics
derived from CTFs, such as $\Sigma_\alpha^{-1}(\cdot, \sigma)$,
and the shape of $\alpha$.
In a different direction, for a fixed point $x\in \ensuremath{\mathbb{R}}^d$, an interesting problem
is that of capturing the values of $\sigma$ for which $\Sigma_\alpha(x,\sigma)$
exhibits a ``jump'' in behavior. This study, in the context of images, gives
rise to notions of {\em local scales}. Knowledge of local scales for each
point $x$ leads to criteria for selecting important, salient points in the
spirit of SIFT \cite{lowe2004,mmm13}. The concept of local scales arose
first in the context of images \cite{jones-le} and was later extended to
probability distributions \cite{le-scales}. The notions of local scales in
\cite{jones-le,le-scales} were isotropic. Thus, future developments
related to characterizing shape using CTFs are suggested by the
possibility of constructing notions of local scales on general shapes
\cite{le-scales,mmm13} which ---by exploiting the tensorial nature
of $\Sigma_\alpha(x,\sigma)$--- become sensitive to direction.
\section*{Data Accessibility}
The synthetic data used in the manifold clustering experiments
is available at \url{https://bitbucket.org/diegodiaz-math/ctf-files/}.
\section*{Acknowledgements}
This research was supported in part by NSF grants IIS-1422400,
DMS-1418007 and DBI-1262351, and by the Erwin Schr\"{o}dinger Institute
in Vienna. We thank Dejan Slep\v{c}ev for a discussion about
the results of \cite{gts15}.
\bibliographystyle{elsarticle-num}
| {
"timestamp": "2017-03-01T02:03:07",
"yymm": "1509",
"arxiv_id": "1509.04632",
"language": "en",
"url": "https://arxiv.org/abs/1509.04632",
"abstract": "We introduce the notion of multiscale covariance tensor fields (CTF) associated with Euclidean random variables as a gateway to the shape of their distributions. Multiscale CTFs quantify variation of the data about every point in the data landscape at all spatial scales, unlike the usual covariance tensor that only quantifies global variation about the mean. Empirical forms of localized covariance previously have been used in data analysis and visualization, but we develop a framework for the systematic treatment of theoretical questions and computational models based on localized covariance. We prove strong stability theorems with respect to the Wasserstein distance between probability measures, obtain consistency results, as well as estimates for the rate of convergence of empirical CTFs. These results ensure that CTFs are robust to sampling, noise and outliers. We provide numerous illustrations of how CTFs let us extract shape from data and also apply CTFs to manifold clustering, the problem of categorizing data points according to their noisy membership in a collection of possibly intersecting, smooth submanifolds of Euclidean space. We prove that the proposed manifold clustering method is stable and carry out several experiments to validate the method.",
"subjects": "Machine Learning (stat.ML); Metric Geometry (math.MG); Statistics Theory (math.ST)",
"title": "The Shape of Data and Probability Measures",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969703688159,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7099044369856161
} |
https://arxiv.org/abs/1305.2944 | Linear combinations of frame generators in systems of translates | A finitely generated shift invariant space $V$ is a closed subspace of $L^2(\R^d)$ that is generated by the integer translates of a finite number of functions. A set of frame generators for $V$ is a set of functions whose integer translates form a frame for $V$. In this note we give necessary and sufficient conditions in order that a minimal set of frame generators can be obtained by taking linear combinations of the given frame generators. Surprisingly the results are very different to the recently studied case when the property to be a frame is not required. | \section{Introduction}
{\it Shift invariant spaces} (SISs) are closed subspaces of $L^2(\mathbb{R}^d)$ that are invariant under integer translations. They play an important role in approximation theory, harmonic analysis, wavelet
theory, sampling and signal processing \cite{AG01, Gro01, HW96, Mal89}. The structure of these spaces has been deeply analyzed (see for example \cite{dBDVR94, dBDVRI94, Bow00, Hel64, RS95}).
A set of functions $\Phi$ is a {\it set of generators} for a shift invariant space $V$ if the closure of the space spanned by all integer translations of the functions in $\Phi$
agrees with $V.$ When there exists a finite set of generators $\Phi$ for $V$, we say that $V$ is finitely generated. In this case, there exists a positive integer $\ell$, called the length of $V$, that is defined as the minimal number of functions that generate $V.$ Any set of generators of $V$ with $\ell$ elements will be called a {\it minimal set of generators.}
Let $\Phi=\{\phi_1,\cdots,\phi_m\}$ be a set of generators for a shift invariant space $V$. It is interesting to know whether it is possible to obtain a minimal set of generators from the given generators in $\Phi$. There are many examples with the property that no subset of $\Phi$ is a minimal set of generators.
So, deleting elements from $\Phi$ may not be a successful procedure.
Concerning this question, Bownik and Kaiblinger in \cite{BK06}, showed that a minimal set of generators for $V$ can be obtained from $\Phi$ by linear combinations of its elements. Moreover, they proved that almost every set of $\ell$ functions that are linear combinations of
$\{\phi_1,\cdots,\phi_m\}$ is a minimal set of generators for $V$ (see \cite[Theorem 1]{BK06}).
We emphasize that the linear combinations only involve the functions $\{\phi_1,\cdots,\phi_m\}$ and not their translations.
Since linear combinations of a finite number of functions preserve properties such as smoothness, compact support, bandlimitedness, decay, etc, an interesting consequence of Bownik and Kaiblinger's result is that
if the generators for $V$ have some additional property, there exists a minimal set of generators that inherits this property.
In many problems involving shift invariant spaces, it is important that the system of translates $\{T_k\phi_j\colon k\in\mathbb{Z}^d,\,\,j=1,\cdots, m\}$ bears a particular functional analytic structure such
as being an orthonormal basis, a Riesz basis or a frame. Therefore, it is interesting to know when a minimal
set of generators obtained by taking linear combinations of the original one has the same structure.
More precisely, suppose that $\Phi=\{\phi_1,\cdots,\phi_m\}$ generates a shift invariant space $V$ of length $\ell$,
and assume that a new set of generators $\Psi=\{\psi_1,\cdots, \psi_{\ell}\}$ for $V$ is produced by taking linear combinations of the functions in $\Phi$. That is, assume that $\psi_i=\sum_{j=1}^m a_{ij}\phi_j$ for $i=1,\cdots,\ell$ for some complex scalars $ a_{ij}$.
If we collect the coefficients in a matrix $A=\{a_{ij}\}_{i,j}\in \mathbb{C}^{\ell\times m}$, then we can write in matrix notation
$\Psi=A\Phi$. We would like to know which matrices $A$ transfer the structure of $\Phi$ to $\Psi$. Precisely, we study the following question: If we know that $\{T_k\phi_i\colon k\in\mathbb{Z}^d,\,\,i=1,\cdots, m\}$ is a frame for $V$, when will $\{T_k\psi_i\colon k\in\mathbb{Z}^d,\,\,i=1,\cdots, \ell\}$ also be a frame for $V$?
In this paper we answer this question completely.
As we mentioned before, the property of being a ``set of generators'' for a SIS $V$ is generically preserved
by the action of a matrix $A$ (\cite{BK06}). This is no longer valid in the case of frames, which is unexpected.
More than that, we were able to construct a surprising example of a shift invariant space $V$
with a set of generators $\Phi$ such that their integer translates
$\{T_k\phi_i\colon k\in\mathbb{Z}^d,\,\,i=1,\cdots,m\}$ form a frame for $V$ and with the property that no matrix $A,$ of size $\ell\times m$ with $\ell<m,$
transforms $\Phi$ into a new set of generators that form a frame for $V$.
Our main result gives exact conditions in order that the frame property is preserved by a matrix $A$.
These conditions are in terms of a particular geometrical relation that has to be satisfied between the
nullspace of $A$ and the column space of $G_{\Phi}(\omega)$ for almost all $\omega$.
The proof uses recent results about singular values of compositions of operators
and involves the Friedrichs angle between subspaces of Hilbert spaces.
We also provide an equivalent analytic condition between $A$ and $G_{\Phi}(\omega)$ in order that this same result holds. Although we are interested in the case $\ell=\ell(V)$ (the length of the SIS $V$ under study), most of our results are still valid when $\ell(V)\le \ell\le m.$
For completeness we include the particular cases of Riesz bases and orthonormal bases, which are known.
The paper is organized as follows. In Section \ref{preliminaries} we set the definitions and results that we need. We include some
results about the eigenvalues of conjugated matrices in Section \ref{matrix}. Finally, in Section \ref{results} we state and prove our main results.
\section{Preliminaries}\label{preliminaries}
We start this section by giving the basic definitions. Then, we state some known results about shift invariant spaces that we will need later.
\begin{definition}
Let $\mathcal{H}$ be a separable Hilbert space and $\{f_k\}_{k\in \mathbb{Z}}$ be a sequence in $\mathcal{H}.$
\begin{enumerate}
\item[$(a)$]
The sequence $\{f_k\}_{k\in \mathbb{Z}}$ is said to be a {\it Riesz basis} for $\mathcal{H}$
if it is complete in
$\mathcal{H}$ and if there exist $0<\alpha\leq \beta$ such that
for every finite scalar sequence
$\{c_k\}_{k\in \mathbb{Z}}$ one has
$$\alpha \sum_{k\in\mathbb{Z}} |c_k|^2 \leq \|\sum_{k\in\mathbb{Z}} c_k f_k\|^2\leq
\beta\sum_{k\in\mathbb{Z}} |c_k|^2.$$
The constants $\alpha$ and $\beta$ are called {\it Riesz bounds}.
\item[$(b)$]
The sequence $\{f_k\}_{k\in \mathbb{Z}}$ is said to be a {\it frame} for $\mathcal{H}$ if there exist $0<\alpha\leq \beta$ such that
\begin{equation}\label{eq-frame}
\alpha\|f\|^2\leq \sum_{k\in \mathbb{Z}} |\langle f,f_k\rangle|^2\leq \beta\|f\|^2
\end{equation}
for all $f\in\mathcal{H}$. The constants $\alpha$ and $\beta $ are called {\it frame bounds}.
When only the right hand side inequality in (\ref{eq-frame}) is satisfied, we say that $\{f_k\}_{k\in \mathbb{Z}}$ is a {\it Bessel sequence} with Bessel bound $\beta$.
\end{enumerate}
\end{definition}
In this paper we will work with the above definitions in the following context. We will assume that $\mathcal{H}$ is a closed subspace of $L^2(\mathbb{R}^d)$ and the sequence $\{f_k\}_{k\in \mathbb{Z}}\subseteq \mathcal{H}$ consists of integer translates of a fixed finite set of functions $\Phi\subseteq L^2(\mathbb{R}^d).$
\begin{definition}
We say that a closed subspace $V\subseteq L^2(\mathbb{R}^d)$ is {\it shift invariant} if
$$f\in V\Longrightarrow T_kf\in V, \,\, \textrm{ for any }\,\,k\in\mathbb{Z}^d,$$
where $T_k$ is the translation by the vector $k\in\mathbb{Z}^d,$ i.e. $T_kf(x)=f(x-k)$.
For any subset $\Phi\subseteq L^2(\mathbb{R}^d)$ we define \[
S(\Phi)= \overline{\mbox{span}}\{T_k\phi\colon \phi\in\Phi, k\in\mathbb{Z}^d\}\,\,\textrm{ and }\,\,
E(\Phi)= \{T_k\phi\colon \phi\in\Phi, k\in\mathbb{Z}^d\}.
\]
We call $S(\Phi)$ the shift invariant space (SIS) generated by $\Phi$.
If $V=S(\Phi)$ for some finite set $\Phi$ we say that $V$ is a {\it finitely generated} SIS, and a
{\it principal} SIS if $V$ can be generated by the translates of a single function.
\end{definition}
For a finitely generated SIS $V\subseteq L^2(\mathbb{R}^d)$ we define the length of $V$ as
\[
\ell(V)= \min\{n\in\mathbb{N}\colon \exists \,\,\phi_1, \cdots,\phi_n \in
V \textrm{ with } V=S(\phi_1, \cdots,\phi_n)\}.
\]
We say that $\Phi$ is a {\it minimal set of generators for $V$} if $V=S(\Phi)$ and $\Phi$ has exactly $\ell(V)$ elements.
Helson in \cite{Hel64} introduced range functions and used this notion to completely characterize shift invariant spaces.
Later on, several authors have used this framework to describe and characterize frames and bases of these spaces.
See for example \cite{dBDVR94, dBDVRI94, RS95, Bow00, CP10}.
We are not going to review here the complete theory of Helson. We will only mention the required definitions and the properties that we need in this note. We refer to \cite{Bow00} for a clear and complete description.
\begin{definition}
Given $f\in L^2(\mathbb{R}^d)$ and $\omega\in[-\frac{1}2, \frac{1}2]^d,$ the {\it fiber} $\widetilde{f_{\omega}}$ of $f$ at $\omega$ is the sequence
\[
\widetilde{f_{\omega}} \equiv \{\widehat{f}(\omega+k)\}_{k\in\mathbb{Z}^d}.
\]
\end{definition}
Here $\widehat{f}$ denotes the Fourier transform of the function $f,$ $\widehat{f}(\omega)\!= \int_{\mathbb{R}^d}e^{-2\pi i\omega x}f(x)\, dx$ when $f\in L^1(\mathbb{R}^d).$
We observe that if $f\in L^2(\mathbb{R}^d),$ then the fiber $\widetilde{f_{\omega}}$ belongs to $\ell^2(\mathbb{Z}^d)$ for almost every $\omega\in[-\frac{1}2, \frac{1}2]^d.$
Let $\Phi= \{\phi_1,\cdots,\phi_m\}$ be a finite collection of functions in $L^2(\mathbb{R}^d)$.
The \emph{Gramian} $G_\Phi$ of $\Phi$ is the
$m\times m$ matrix of $\mathbb{Z}^d$-periodic functions
\begin{equation} \label{gram}
[G_{\Phi}(\omega)]_{ij}
=
\sum_{k \in \mathbb{Z}^d} \widehat{\phi}_i(\omega+k) \, \overline{\widehat{\phi}_j(\omega+k)}.
\end{equation}
The Gramian of $\Phi$ is determined a.e. by its values at $\omega\in [-\frac{1}2, \frac{1}2]^d$ and satisfies $G_{\Phi}(\omega)^*=G_{\Phi}(\omega)$ for a.e. $\omega\in [-\frac{1}2, \frac{1}2]^d$. We denote the set of eigenvalues of $G_{\Phi}(\omega)$ by $\Sigma(G_{\Phi}(\omega))$.
For a finitely generated SIS $V$, the length of $V$ can be expressed in terms of the Gramian as follows (see \cite{Bow00, dBDVR94, TW12})
\begin{equation}\label{length-gramian}
\ell(V)=\mathop{\rm ess\,sup}_{\omega\in [-\frac1{2}, \frac1{2}]^d}\text{rk}(G_{\Phi}(\omega))
\end{equation}
where $\text{rk}(B)$ denotes the rank of a matrix $B$ and $\Phi$ is a generator set for $V$.
For $B_1, B_2\in \mathbb{C}^{m\times m}$, we write $B_1\leq B_2$ meaning that $B_2-B_1$ is a positive semidefinite matrix. Using the Gramian matrix, the following characterizations hold (see \cite{Bow00}).
\begin{proposition}\label{gramiano-frame}
Let $\Phi$ be a finite set of functions in $L^2(\mathbb{R}^d)$ and $V=S(\Phi)$. Then,
\begin{enumerate}
\item the following statements are equivalent:
\begin{enumerate}
\item $E(\Phi)$ is a Bessel sequence with bound $0<\beta$.
\item $\mathop{\rm ess\,sup}_{\omega\in [-\frac1{2}, \frac1{2}]^d} \|G_{\Phi}(\omega)\|\le \beta.$
\end{enumerate}
\item the following statements are equivalent:
\begin{enumerate}
\item $E(\Phi)$ is a Riesz basis for $V$ with bounds $0<\alpha\leq \beta$.
\item For a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d$,
$\alpha I\leq G_{\Phi}(\omega)\leq \beta I.$
\item For a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d$, $\Sigma(G_{\Phi}(\omega))\subseteq [\alpha, \beta]$.
\item For a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d,$ $G_{\Phi}(\omega)$ is invertible, $\|G_{\Phi}(\omega)\|\le \beta$ and $\|(G_{\Phi}(\omega))^{-1}\|\le \frac{1}{\alpha}.$
\end{enumerate}
\item the following statements are equivalent:
\begin{enumerate}
\item $E(\Phi)$ is a frame for $V$ with bounds $0<\alpha\leq \beta$.
\item For a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d$,
$\alpha G_{\Phi}(\omega)\leq G_{\Phi}^2(\omega)\leq \beta G_{\Phi}(\omega).$
\item For a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d$, $\Sigma(G_{\Phi}(\omega))\subseteq [\alpha, \beta]\cup\{0\}$.
\end{enumerate}
\end{enumerate}
\end{proposition}
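As a concrete illustration of part (3) of Proposition \ref{gramiano-frame}, the following sketch computes the Gramian fibers for the bandlimited generators $\widehat{\phi_1}=\chi_{[0,\frac{1}{2}]}$ and $\widehat{\phi_2}=\chi_{[\frac{1}{2},1]}$ that appear in Section \ref{results}; for these generators the series \eqref{gram} has only finitely many nonzero terms, so the truncation below is exact. Every fiber has eigenvalues $0$ and $1$, so $\Sigma(G_{\Phi}(\omega))\subseteq\{0,1\}$ a.e. and $E(\Phi)$ is a frame for $S(\Phi)$ with frame bounds $\alpha=\beta=1$.
\begin{verbatim}
import numpy as np

def phi_hat(j, xi):
    # Fourier transforms of the generators: chi_[0,1/2) and chi_[1/2,1)
    # (boundary conventions are immaterial a.e.)
    if j == 0:
        return float((0.0 <= xi) and (xi < 0.5))
    return float((0.5 <= xi) and (xi < 1.0))

def gramian(omega, kmax=5):
    # truncated Gramian G_Phi(omega); exact here since only k = 0, 1 contribute
    ks = range(-kmax, kmax + 1)
    F = np.array([[phi_hat(j, omega + k) for k in ks] for j in range(2)])
    return F @ F.T

for omega in np.linspace(-0.5, 0.5, 7, endpoint=False):
    evals = np.linalg.eigvalsh(gramian(omega))
    print(f"omega = {omega:+.3f}   eigenvalues = {np.round(evals, 6)}")
# every fiber has eigenvalues 0 and 1, consistent with E(Phi) being a frame for V
\end{verbatim}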
As we mentioned above, we are interested in when a set of linear combinations of the generators for a finitely generated SIS inherits some particular structure from the original generators. In order to make our exposition clear, we use the following notation.
Let $\Phi=\{\phi_1,\cdots,\phi_m\}$ be a set of functions in $L^2(\mathbb{R}^d)$.
By taking linear combinations of the elements of $\Phi$, we construct a new set $\Psi=\{\psi_1,\cdots, \psi_{\ell}\}$
where $\psi_i=\sum_{j=1}^ma_{ij}\phi_j$ and
$A=\{a_{ij}\}_{i,j}\in \mathbb{C}^{\ell\times m}$ is a matrix.
Using a matrix notation we set $\Psi=A\Phi$.
Then, we consider the following questions:
Let $V=S(\Phi)$ where $\Phi=\{\phi_1,\cdots,\phi_m\}$.
\begin{itemize}
\item If $E(\Phi)$ is a frame
for $V$ and $\ell(V)\leq \ell\leq m$, for which matrices $A\in \mathbb{C}^{\ell\times m}$, is $E(A\Phi)$ a frame
for $V$?
\item If $E(\Phi)$ is an orthonormal basis (Riesz basis)
for $V$, for which square matrices $A$, is $E(A\Phi)$ an orthonormal basis (Riesz basis)
for $V$?
\end{itemize}
Sometimes in the paper, by a convenient abuse of notation,
we will say that a set $\Phi=\{\phi_1,\cdots, \phi_m\}$ is a frame for a SIS $V$ to indicate that $E(\Phi)$ forms a frame for $V$.
In order to study when $E(A\Phi)$ is a frame or a Riesz basis for $V$ we will use Proposition \ref{gramiano-frame}.
So, we need to know the Gramian associated to
$A\Phi$.
\begin{proposition}\label{gramiano-A}
Let $V=S(\Phi)$ be a SIS where $\Phi=\{\phi_1,\cdots,\phi_m\}$ and let
$A=\{a_{ij}\}_{i,j}\in \mathbb{C}^{\ell\times m}$ be a matrix. Consider the set
$\Psi=\{\psi_1,\cdots, \psi_{\ell}\}$ where $\Psi=A\Phi$.
Then, the Gramian $G_{\Psi}$ is a conjugation of $G_{\Phi}$ by $A,$ i.e.
$$G_{\Psi}(\omega)=AG_{\Phi}(\omega)A^*$$
for a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d.$
\end{proposition}
\begin{proof}
Let $i,j\in\{1,\cdots,\ell\}$ be fixed. Then,
\begin{align*}
[G_{\Psi}(\omega)]_{ij}
=
\Big\langle (\widetilde{\psi_i})_{\omega}, (\widetilde{\psi_j})_{\omega}\Big\rangle
&=
\Big\langle \sum_{k=1}^ma_{ik}(\widetilde{\phi_k})_{\omega}, \sum_{r=1}^ma_{jr}(\widetilde{\phi_r})_{\omega}\Big\rangle\\
=
\sum_{k=1}^m\sum_{r=1}^ma_{ik}\overline{a_{jr}}\Big\langle (\widetilde{\phi_k})_{\omega}, (\widetilde{\phi_r})_{\omega}\Big\rangle
&=\sum_{k=1}^m\sum_{r=1}^ma_{ik}\overline{a_{jr}}[G_{\Phi}(\omega)]_{kr}
=
[AG_{\Phi}(\omega)A^*]_{ij}.
\end{align*}
\end{proof}
We study in the following section the eigenvalues of a conjugated matrix,
and provide the definition of the Friedrichs angle between subspaces, that we will need to state the main results.
\section{Eigenvalues of a conjugated matrix}\label{matrix}
According to Proposition \ref{gramiano-frame}, we will need to study the eigenvalues of the Gramian $G_{\Psi}(\omega)$
which is, as Proposition \ref{gramiano-A} shows, a conjugation of $G_{\Phi}(\omega)$ by $A$.
The behaviour of the eigenvalues of the conjugation of a given matrix is in general
difficult to describe. In our case, we need to find uniform bounds for the eigenvalues of
$G_{\Psi}(\omega)$.
In this section we first set some matrix notation and then we state the known results
that we will later apply to shift invariant spaces.
For a matrix $B\in\mathbb{C}^{\ell\times m}$
we denote by $\sigma(B)$ the smallest non-zero singular value of $B$. By $Ker(B)$ and $Im(B)$ we denote the nullspace and column space
of $B$ respectively, as an operator acting by right multiplication, i.e., matrix-vector multiplication.
For a square positive-semidefinite matrix $B$ such that $B=B^*$, the eigenvalues and the singular values of $B$ agree. In particular, $\sigma(B)=\lambda_-(B)$ where
$\lambda_-(B)$ denotes the smallest non-zero eigenvalue of $B$.
We now state a recent result that we will need in the next section. It was proven by Antezana et al. in \cite{ACRS05}.
For this, we need the notion of Friedrichs angle between subspaces.
The Friedrichs angle can be defined for subspaces of a general Hilbert space (see \cite{Deu95, HJ95, Ka84}).
However, we will define it for subspaces of $\mathbb{C}^n$, since this is the context
in which we will use it.
Let $N, M\neq \{0\}$ be subspaces of $\mathbb{C}^n$. The {\it Friedrichs angle between $M$ and $N$}
is the angle in $[0, \frac{\pi}{2}]$ whose cosine is defined by
$$
{\bm{\mathcal{G}}}[M,N]=\sup\{|\langle x,y\rangle|:\, x\in M\cap (M\cap N)^{\perp},\, \|x\|=1, \, y\in N\cap (M\cap N)^{\perp},\, \|y\|=1\}.
$$
We define ${\bm{\mathcal{G}}}[M,N]=0$ if $M=\{0\}$, $N=\{0\}$, $M\subseteq N$ or $N\subseteq M$.
As usual, the sine of the Friedrichs angle is defined as $\bm{\mathcal{F}}\,[M,N]=\sqrt{1-{\bm{\mathcal{G}}}[M,N]^2}.$
It satisfies $\bm{\mathcal{F}}[M,N]=\bm{\mathcal{F}}[N,M]=\bm{\mathcal{F}}[M^{\perp},N^{\perp}]$ (see \cite{ACRS05}).
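Numerically, the Friedrichs angle can be obtained from the principal angles between the two subspaces: the principal angles equal to zero correspond to directions in $M\cap N$, and the Friedrichs angle is the smallest positive one. The sketch below uses \texttt{scipy.linalg.subspace\_angles} and checks the computation on an example where the angle is known in closed form; the tolerance and function name are illustrative.
\begin{verbatim}
import numpy as np
from scipy.linalg import subspace_angles

def friedrichs_cosine(M, N, tol=1e-7):
    # cosine of the Friedrichs angle between the column spaces of M and N;
    # principal angles below tol are treated as zero (directions in the
    # intersection), and by the convention above the cosine is 0 when
    # one subspace is contained in the other
    angles = subspace_angles(M, N)            # principal angles, descending order
    positive = angles[angles > tol]
    return float(np.cos(positive.min())) if positive.size else 0.0

# Example in R^3: M = span{e1, e2} and N = span{e2, cos(t) e1 + sin(t) e3}.
# Then the intersection of M and N is span{e2}, and the Friedrichs cosine is cos(t).
t = 0.7
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
N = np.array([[0.0, np.cos(t)], [1.0, 0.0], [0.0, np.sin(t)]])
print(friedrichs_cosine(M, N), np.cos(t))     # the two values should agree
\end{verbatim}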
The following theorem is a particular case of the result stated in Remark 2.10 in \cite{ACRS05} for square matrices.
\begin{theorem}\label{prop-jorge}
Let $A, B$ be non-zero square matrices in $\mathbb{C}^{m\times m}$. Then,
$$\sigma(A)\sigma(B) \,\bm{\mathcal{F}}[Ker(A), Im(B)]\leq \sigma(AB)\le \|A\|\|B\| \,\bm{\mathcal{F}} [Ker(A), Im(B)].$$
\end{theorem}
In order to adapt Theorem \ref{prop-jorge} to our setting we need the following lemma.
\begin{lemma}\label{lema-rango-2}
Let $A\in \mathbb{C}^{\ell\times m}$ and $G\in \mathbb{C}^{m\times m}$ with $\ell\le m$.
If $\text{rk}(AGA^*)=\text{rk}(G)$ then $Ker(AG)=Ker(G)$.
\end{lemma}
\begin{proof}
Clearly $Ker(G)\subseteq Ker(AG)$. Suppose that $\dim Ker(G)<\dim Ker(AG)$. Then $\text{rk}(AG)<\text{rk}(G)$. Now,
$\text{rk}(AGA^*)\leq \min\{\text{rk}(AG), \text{rk}(A)\}\leq \text{rk}(AG)<\text{rk}(G)$ which is a contradiction. Thus, $\dim Ker(G)=\dim Ker(AG)$, and since $Ker(G)\subseteq Ker(AG)$, we conclude that $Ker(AG)=Ker(G)$.
\end{proof}
Now, from Theorem \ref{prop-jorge} and Lemma \ref{lema-rango-2} we obtain:
\begin{proposition}\label{prop-jorge-rectangular}
Let $G$ be a positive-semidefinite matrix in $\mathbb{C}^{m\times m}$ such that $G=G^*$ and $A=\{a_{ij}\}_{i,j}\in\mathbb{C}^{\ell\times m}$ with $\ell\le m$ such that $\text{rk}(G)=\text{rk}(AGA^*)$.
Then,
$$\sigma(A)^2\lambda_-(G) \bm{\mathcal{F}}[Ker(A), Im(G)]^2\leq \lambda_-(AGA^*)\le \|A\|^2\|G\| \bm{\mathcal{F}}[Ker(A), Im(G)].$$
\end{proposition}
\begin{proof}
Let $\widetilde A\in \mathbb{C}^{m\times m}$ be the matrix defined by
$\widetilde{a}_{ij}=
\begin{cases}
a_{ij}& \textrm{ if } 1\leq i\leq \ell\\
0& \textrm{ if } \ell< i\leq m\\
\end{cases}$. Then,
$$\widetilde{A}G(\widetilde{A})^*=
\Big(
\begin{array}{c|c}
AGA^* & 0\\ \hline
0 & 0
\end{array}
\Big),
$$
and therefore $ \lambda_-(AGA^*)=\lambda_-(\widetilde{A}G\widetilde{A}^*)$.
Now, using Theorem \ref{prop-jorge} and Lemma \ref{lema-rango-2}, we have
\begin{align*}
\lambda_-(\widetilde{A}G\widetilde{A}^*)&\geq
\sigma(\widetilde{A}G)\sigma(\widetilde{A}^*)\bm{\mathcal{F}}[Ker(\widetilde{A}G), Im(\widetilde{A}^*)]\\
&=\sigma(\widetilde{A}G)\sigma(\widetilde{A}^*)\bm{\mathcal{F}}[Ker(G), Im(\widetilde{A}^*)]\\
&\geq \sigma(\widetilde{A})\lambda_-(G) \bm{\mathcal{F}}[Ker(\widetilde{A}), Im(G)]\sigma(\widetilde{A}^*)\bm{\mathcal{F}}[Ker(G), Im(\widetilde{A}^*)],
\end{align*}
and
$$
\lambda_-(\widetilde{A}G\widetilde{A}^*)\leq
\|\widetilde{A}G\|\|\widetilde{A}^*\| \bm{\mathcal{F}}[Ker(\widetilde{A}G), Im(\widetilde{A}^*)]
\le \|\widetilde{A}\|^2\|G\| \bm{\mathcal{F}}[Ker(G), Im(\widetilde{A}^*)].
$$
By the properties of the sine of the Friedrichs angle it can be seen that $\bm{\mathcal{F}}[Ker(G), Im(\widetilde{A}^*)]= \bm{\mathcal{F}}[Ker(\widetilde{A}), Im(G)].$
Using that $\sigma(A)=\sigma(\widetilde{A})=\sigma(\widetilde{A}^*)$ and $\|\widetilde{A}\|=\|A\|,$ we finally obtain
$$
\sigma(A)^2\lambda_-(G)\bm{\mathcal{F}}[Ker(\widetilde{A}), Im(G)]^2\le \lambda_-(\widetilde{A}G\widetilde{A}^*)\le
\|A\|^2\|G\| \bm{\mathcal{F}}[Ker(\widetilde{A}), Im(G)].
$$
We finish the proof by observing that $Ker(A)=Ker(\widetilde{A})$.
\end{proof}
The last result of this section gives an equivalent condition for $\text{rk}(AGA^*)=\text{rk}(G)$ to hold, and we
will use it in the next section.
\begin{lemma}\label{lemma-Radu}
Let $A\in \mathbb{C}^{\ell\times m}$ with $\ell\le m$ and $G\in \mathbb{C}^{m\times m}$ such that $G$ is positive-semidefinite and $G=G^*$.
Then, $\text{rk}(AGA^*)=\text{rk}(G)$ if and only if $Ker(A)\cap Im(G)=\{0\}$.
\end{lemma}
\begin{proof}
Note that, since $G$ is positive semidefinite and $G=G^*,$ it is always true that $\text{rk}(AGA^*)=\text{rk}(AG^{1/2}G^{1/2}A^*)= \text{rk}(AG^{1/2})$. Then, $\text{rk}(AGA^*)=\text{rk}(G)$ if and only if
$\dim(Ker(AG^{1/2}))=\dim(Ker(G))$. Thus, since $Im(G)=Im(G^{1/2})$ and hence $Ker(G)=Ker(G^{1/2})$, we want to prove that
$\dim(Ker(AG^{1/2}))=\dim(Ker(G^{1/2}))$ if and only if $Ker(A)\cap Im(G^{1/2})=\{0\}$.
Now, using that $Ker(G^{1/2})\subseteq Ker(AG^{1/2})$, we have $\dim(Ker(AG^{1/2}))=\dim(Ker(G^{1/2}))$ if and only if
$Ker(AG^{1/2})=Ker(G^{1/2})$.
Finally, the condition $Ker(AG^{1/2})=Ker(G^{1/2})$ is equivalent to
$Ker(A)\cap Im(G)=\{0\}$: if $x\in Ker(AG^{1/2})\setminus Ker(G^{1/2})$, then $G^{1/2}x$ is a non-zero vector in $Ker(A)\cap Im(G^{1/2})=Ker(A)\cap Im(G)$; conversely, every non-zero $y\in Ker(A)\cap Im(G)$ can be written as $y=G^{1/2}x$ with $x\notin Ker(G^{1/2})$ and $AG^{1/2}x=Ay=0$.
\end{proof}
\section{Main results}\label{results}
As we have mentioned in the introduction, there are examples of sets of generators that do not
contain any {\it minimal} subset of generators. For instance,
consider the shift invariant space $V=S(\phi)$ where $\phi$ is such that
$ \widehat{\phi}=\chi_{[0,1]}$. (Here we denote by $\chi_M$ the characteristic
function of a set $M$). It can be seen that $V=S(\phi_1, \phi_2)$ with $\phi_1, \phi_2$
such that $\widehat{\phi_1}=\chi_ {[0,\frac1{2}]}$ and $\widehat {\phi_2}=\chi_ {[\frac1{2}, 1]}$.
However, neither $\phi_1$ nor $\phi_2$ generates $V$ by itself.
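To see why (a brief sketch): every $f\in S(\phi_1)$ satisfies $\widehat{f}=m\,\widehat{\phi_1}$ a.e. for some measurable $1$-periodic function $m$, so $\widehat{f}$ vanishes a.e. on $[\frac1{2},1]$, whereas $\widehat{\phi}=\chi_{[0,1]}$ does not; hence $\phi\notin S(\phi_1)$ and $S(\phi_1)\neq V$. The same argument, with the roles of the two intervals exchanged, shows that $S(\phi_2)\neq V$.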
An alternative way to overcome this problem is to try to obtain a smaller set of generators, for example a minimal one, by considering instead linear
combinations of the original generators. Related to this, Bownik and
Kaiblinger \cite{BK06} proved the following:
\begin{theorem}\cite[Theorem 1]{BK06}\label{thm-bownik}
Let $V$ be a finitely generated SIS with length $\ell(V)$, and $\Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ where $\ell(V)\leq m$ and such that $V=S(\Phi)$.
For every $\ell(V)\leq \ell\leq m$, consider the set of matrices
$\mathcal{R}=\{A\in\mathbb{C}^{\ell\times m}\,:\, V=S(\Psi), \Psi=A\Phi \}.$ Then,
$\mathbb{C}^{\ell\times m}\setminus \mathcal{R}$ has zero Lebesgue measure.
\end{theorem}
Briefly, this result says that, given any set of generators of a finitely generated SIS, almost every matrix (of the right size) transforms it into a new set of
generators. In particular, a {\it minimal} set of generators can be obtained with this procedure.
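To illustrate this with the space $V=S(\phi)$, $\widehat{\phi}=\chi_{[0,1]}$, considered above (a sketch): take $\Phi=\{\phi_1,\phi_2\}$ and $\ell=\ell(V)=1$. For $A=(a_1\,\, a_2)\in\mathbb{C}^{1\times 2}$, the new generator $\psi=a_1\phi_1+a_2\phi_2$ satisfies $\widehat{\psi}=a_1\chi_{[0,\frac1{2}]}+a_2\chi_{[\frac1{2},1]}$, and $S(\psi)=V$ precisely when $a_1\neq 0$ and $a_2\neq 0$; the exceptional set $\{a_1a_2=0\}$ has zero Lebesgue measure in $\mathbb{C}^{1\times 2}$, in accordance with Theorem \ref{thm-bownik}.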
An interesting question arises here: when is the set of generators obtained in this way a set of {\it frame} generators, i.e., when do its integer translates form a frame for the SIS?
More precisely, we want to obtain new sets of generators of the form $\Psi=A\Phi$ with the additional property that $E(\Psi)$ is a frame.
First, we show that in order for $E(\Psi)$ to be a frame, $E(\Phi)$ needs to be a frame. More precisely, we prove:
\begin{proposition}\label{frametoframe}
Let $ \Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ be a generator set for a SIS $V$ of length $\ell(V)\leq m$ and suppose that $E(\Phi)$ is a Bessel sequence but not a frame for $V$. Then, for each matrix $A\in\mathbb{C}^{\ell\times m}$, with $\ell(V)\leq \ell\leq m$, $E(\Psi)$ is not a frame for $V$ where $\Psi=A\Phi$.
\end{proposition}
In view of this result, the right question is: which matrices (of the right size) map frame generator sets into {\it new} frame generator sets?
The answer to this question is not as direct as in the case of a plain set of generators, and it will take up the rest of this section.
Let us first start with the proof of Proposition \ref{frametoframe}.
\begin{proof}[Proof of Proposition\ \ref{frametoframe}]
Let $A\in\mathbb{C}^{\ell\times m}$. If $\Psi=A\Phi$ is not a generator set for $V$, then $E(\Psi)$ is not a frame for $V$.
Thus, suppose that $\Psi$ is a set of generators for $V$.
Since $E(\Phi)$ is a Bessel sequence but not a frame for $V$, the lower frame inequality in (\ref{eq-frame}) is not satisfied. Therefore, there exists $\{f_n\}_{n\in\mathbb{N}}\subseteq V$ such that
$$t_n(\Phi):=\sum_{j=1}^{m}\sum_{k\in\mathbb{Z}^d}|
\langle f_n, T_k\phi_j\rangle|^2\rightarrow 0, \textrm{ when } n\to+\infty.$$
Now,
\begin{align*}
t_n(\Psi)&=\sum_{i=1}^{\ell}\sum_{k\in\mathbb{Z}^d}|
\langle f_n, T_k\psi_i\rangle|^2
=\sum_{i=1}^{\ell}\sum_{k\in\mathbb{Z}^d}|\sum_{j=1}^{m}\overline{a_{ij}}
\langle f_n, T_k\phi_j\rangle|^2\\
&\leq \sum_{i=1}^{\ell}\sum_{k\in\mathbb{Z}^d}
\left(\sum_{j=1}^{m}|a_{ij}|^2\right)\left(\sum_{j=1}^{m}
|\langle f_n, T_k\phi_j\rangle|^2\right)\\
&=\left(\sum_{i=1}^{\ell}\sum_{j=1}^{m}|a_{ij}|^2\right) t_n(\Phi).
\end{align*}
Then, $t_n(\Psi)\rightarrow 0, \textrm{ when } n\to+\infty$ and thus, $E(\Psi)$ does not satisfy the lower frame inequality.
\end{proof}
When $E(\Phi)$ is not a Bessel sequence, it can happen that for a certain matrix $A,$ $E(A\Phi)$ is a Bessel sequence. To construct
an easy example, take $\phi\in L^2(\mathbb{R})$ such that $E(\phi)$ is a frame for $S(\phi).$ Now, choose a second generator $\widetilde{\phi}\in S(\phi)$
such that $E(\widetilde{\phi})$ is not a Bessel sequence. Thus, if $\Phi= \{\phi, \widetilde{\phi}\},$ then $S(\Phi)= S(\phi)$ and $E(\{\phi, \widetilde{\phi}\})$ is
not a Bessel sequence. However, taking $A= (1, 0),$ we get that $E(A\Phi)$ is a Bessel sequence. The hypothesis in the above proposition that $E(\Phi)$ is a Bessel sequence is not very restrictive and greatly simplifies the treatment.
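One concrete choice for this construction (a sketch): take $\widehat{\phi}=\chi_{[0,1]}$, so that $E(\phi)$ is an orthonormal basis (in particular a frame) for $S(\phi)$, and let $\widetilde{\phi}$ be defined by $\widehat{\widetilde{\phi}}(\omega)=\omega^{-1/4}\chi_{(0,1]}(\omega)$. Then $\widetilde{\phi}\in L^2(\mathbb{R})$ and $\widetilde{\phi}\in S(\phi)$, but $\sum_{k\in\mathbb{Z}}|\widehat{\widetilde{\phi}}(\omega+k)|^2=\omega^{-1/2}$ for $\omega\in(0,1]$ is not essentially bounded, so $E(\widetilde{\phi})$ is not a Bessel sequence.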
We now point out a property about the result of Bownik and Kaiblinger that will be important in what follows.
From the proof of Theorem \ref{thm-bownik}, it follows that the set $\mathcal{R}$ can be described in terms of the Gramian as
\begin{equation}\label{pinta-R}
\mathcal{R}=\{A\in\mathbb{C}^{\ell\times m}\,:\, \text{rk}(G_{\Phi}(\omega))=\text{rk}(AG_{\Phi}(\omega)A^*)
\textrm{ for a.e. } \omega\in [-\frac1{2}, \frac1{2}]^d \},
\end{equation}
where $\ell$ is a number between the length of $S(\Phi)$ and $m.$ Now, using Lemma \ref{lemma-Radu},
$$\mathcal{R}=\{A\in\mathbb{C}^{\ell\times m}\,:\, Ker(A)\cap Im(G_{\Phi}(\omega))=\{0\}
\textrm{ for a.e. } \omega\in [-\frac1{2}, \frac1{2}]^d \}.$$
\begin{remark}
When $\ell$ is exactly the length of $S(\Phi)$, note that if $A\in\mathcal{R}$, by $(\ref{length-gramian})$, $\ell=\text{rk}(A)$ and then $A$ is full rank.
\end{remark}
As we have already discussed, a key point in our problem is the behavior of the eigenvalues of conjugated matrices.
Here one wants to get a smaller
set of generators from a given larger set of generators. In terms of matrices, this translates
into conjugating the Gramian by rectangular matrices.
The behavior of the eigenvalues in this case is not very well understood.
However, we are able to determine exactly those matrices that yield frames, in terms of the Friedrichs angle,
using recent results by Antezana et al.~\cite{ACRS05} on singular values of compositions of operators.
The main result of this section is the following theorem:
\begin{theorem}\label{thm-frames}
Let $ \Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ such that $E(\Phi)$ is a frame for $V=S(\Phi)$ and suppose that $\ell(V)\leq\ell\leq m$.
Let $A\in\mathbb{C}^{\ell\times m}$ be a matrix and consider $\Psi=\{\psi_1,\cdots, \psi_{\ell}\}$ where $\Psi=A\Phi$. Then, $E(\Psi)$ is a
frame for $V$ if and only if $A$ satisfies the following two conditions
\begin{enumerate}
\item $A\in\mathcal{R}$ where $\mathcal{R}$ is as in (\ref{pinta-R}).
\item There exists $\delta>0$ such that $\bm{\mathcal{F}}\,[Ker(A), Im(G_{\Phi}(\omega))]\ge \delta$ for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d.$
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem\ \ref{thm-frames}]
Let $0<\alpha\leq\beta$ be the frame bounds for $E(\Phi)$.
First suppose that $E(\Psi)$ is a frame for $V$ and let $\beta'\ge \alpha'>0$ be its frame bounds. Since, in particular, $\Psi$ is a generator
set for $V,$ $A$ belongs to $\mathcal{R}.$
By Proposition \ref{prop-jorge-rectangular}, we have that
$$
\lambda_{-}(G_{\Psi}(\omega))= \lambda_{-}(AG_{\Phi}(\omega)A^*)\le \|A\|^2\|G_{\Phi}(\omega)\| \,\bm{\mathcal{F}}[Ker(A), Im(G_{\Phi}(\omega))],
$$
for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d.$
Using Proposition \ref{gramiano-frame},
$\alpha'\le\lambda_{-}(G_{\Psi}(\omega))$ and $\|G_{\Phi}(\omega)\|\leq \beta$. Thus,
$$
\alpha'\le \|A\|^2\beta \, \bm{\mathcal{F}}[Ker(A), Im(G_{\Phi}(\omega))].
$$
Then, item (2) is satisfied taking $\delta= \frac{\alpha'}{\|A\|^2\beta}.$
Conversely, suppose that conditions (1) and (2) hold. Since $A\in \mathcal{R},$ we have $V=S(\Psi)$ and $\text{rk}(AG_{\Phi}(\omega)A^*)= \text{rk}(G_{\Phi}(\omega))$ for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d.$
Now, for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d,$ we apply the lower inequality that Proposition \ref{prop-jorge-rectangular} gives to $G_{\Phi}(\omega)$ and $A.$ Then,
$$
\lambda_{-}(AG_{\Phi}(\omega)A^*)\ge \sigma(A)^2\lambda_{-}(G_{\Phi}(\omega))\,\bm{\mathcal{F}}[Ker(A), Im(G_{\Phi}(\omega))]^2\ge \sigma(A)^2 \alpha \delta^2.
$$
On the other hand, by Proposition \ref{gramiano-frame},
$$
\|G_{\Psi}(\omega)\|= \|AG_{\Phi}(\omega)A^*\|\le \|A\|^2\|G_{\Phi}(\omega)\|\le \|A\|^2\beta
$$
and from this it follows that the eigenvalues of $G_{\Psi}(\omega)$ are bounded above by $\|A\|^2\beta.$
Therefore, $\Sigma(G_{\Psi}(\omega))\subseteq \big[ \,\sigma(A)^2 \alpha \delta^2, \|A\|^2\beta\,\big] \cup\{0\}$ for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d$
and the result follows from Proposition \ref{gramiano-frame}.
\end{proof}
As a consequence of the above theorem, we have the following result. We impose a more restrictive condition on $G_{\Phi}(\omega)$ than in Theorem \ref{thm-frames}.
However, the new hypothesis is easy to check and avoids the calculation of the sine of the Friedrichs angle.
\begin{corollary}\label{thm-frames-2}
Let $ \Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ such that $E(\Phi)$ is a frame for $V=S(\Phi)$ and suppose that $\ell(V)\leq\ell\leq m$. Consider $A\in\mathbb{C}^{\ell\times m}$ such that $Ker(A)=
Ker(G_{\Phi}(\omega))$ for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d$, and $\dim(Ker(A)) = m-\ell$.
If $A\in\mathcal{R}$ where $\mathcal{R}$ is as in (\ref{pinta-R}), and $\Psi=\{\psi_1,\cdots, \psi_{\ell}\}$ where $\Psi=A\Phi$, then $E(\Psi)$ is a frame for $V$.
\end{corollary}
\begin{proof}
Since $Ker(A)\subseteq Ker(G_{\Phi}(\omega))$ and $Ker(G_{\Phi}(\omega))=Im(G_{\Phi}(\omega))^{\perp}$, it follows that $\bm{\mathcal{F}}[Ker(A), Im(G_{\Phi}(\omega))]=1$ for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d$. Hence condition $(2)$ of Theorem \ref{thm-frames} is satisfied with $\delta=1$, and since $A\in\mathcal{R}$ by hypothesis, Theorem \ref{thm-frames} gives that $E(\Psi)$ is a frame for $V$.
\end{proof}
\begin{example}
Let $V= S(\Phi)$ be the shift invariant space generated by $\Phi= \{\phi_1, \phi_2,\phi_3\}\subseteq L^2(\mathbb{R}),$ where $\phi_1, \phi_2,\phi_3$ are defined by
\begin{align*}
\widehat{\phi_1}(\omega)&= -8\chi_{[-\frac1{2}, \frac1{2}]}(\omega)+ 4\chi_{[\frac1{2}, \frac3{2}]}(\omega),\\
\widehat{\phi_2}(\omega)&= \chi_{[-\frac1{2}, \frac1{2}]}(\omega)+ 4\chi_{[\frac3{2}, \frac5{2}]}(\omega),\\
\widehat{\phi_3}(\omega)&= \chi_{[\frac1{2}, \frac3{2}]}(\omega)+ 8\chi_{[\frac3{2}, \frac5{2}]}(\omega).
\end{align*}
Then, the associated Gramian is
$$G_{\Phi}(\omega)=
\left(
\begin{array}{ccc}
80& -8& 4\\
-8& 17& 32\\
4& 32& 65\\
\end{array}
\right).
$$
It can be seen that $G_{\Phi}^2(\omega)= 81\,G_{\Phi}(\omega)$ and $\text{rk}(G_{\Phi}(\omega))=2$ for all $\omega\in[-\frac1{2}, \frac1{2}].$
Thus, by Proposition \ref{gramiano-frame}, $E(\Phi)$ is a frame for $V$ and using \eqref{length-gramian}, $\ell(V)=2.$
On the other hand, $Ker(G_{\Phi}(\omega))= \text{span} \{(1,8,-4)\}.$ Then, $V$ satisfies the hypotheses of Corollary \ref{thm-frames-2}. In what follows,
we construct all possible matrices $A\in\mathbb{C}^{2\times3}$ such that $Ker(A)= \text{span} \{(1,8,-4)\}$ and $A\in\mathcal{R}$.
These matrices give frames $E(\Psi)$ with $\Psi=A\Phi.$
Since $Ker(A)= \text{span} \{(1,8,-4)\},$ $A$ has the form
$$
A= \left(
\begin{array}{ccc}
(4b-8a)& a& b\\
(4d-8c)& c& d\\
\end{array}
\right),
$$
with $a,b,c,d\in\mathbb{C}.$
Now, $A\in\mathcal{R}$ if and only if $\text{rk}(AG_{\Phi}(\omega)A^*)= 2,$ which holds if and only if $\det(AG_{\Phi}(\omega)A^*)\neq 0.$
Since $\det(AG_{\Phi}(\omega)A^*)= 81^3\,|ad-bc|^2,$ we conclude that $A\in\mathcal{R}$ if and only if $ad-bc\neq 0.$
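One way to verify this determinant identity (a sketch, not needed for the conclusion): since $G_{\Phi}(\omega)^2=81\,G_{\Phi}(\omega)$, $G_{\Phi}(\omega)=G_{\Phi}(\omega)^*$ and $\text{rk}(G_{\Phi}(\omega))=2$, we have $G_{\Phi}(\omega)=81P$, with $P$ the orthogonal projection onto $Im(G_{\Phi}(\omega))=\text{span}\{(1,8,-4)\}^{\perp}$. Since $Ker(P)=\text{span}\{(1,8,-4)\}=Ker(A)$, every $v$ satisfies $v-Pv\in Ker(A)$, so $AP=A$ and hence $AG_{\Phi}(\omega)A^*=81\,AA^*$. By the Cauchy--Binet formula, $\det(AA^*)$ equals the sum of the squared absolute values of the three $2\times 2$ minors of $A$, which are $ad-bc$, $-4(ad-bc)$ and $-8(ad-bc)$; therefore $\det(AG_{\Phi}(\omega)A^*)=81^2(1+16+64)|ad-bc|^2=81^3|ad-bc|^2.$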
\hfill
$\blacksquare$
\end{example}
Condition $(2)$ in Theorem \ref{thm-frames} is a geometric property that $A$ and $G_{\Phi}$
need to satisfy in order for $A$ to transfer the frame property of $E(\Phi)$ to $E(\Psi)$. We now state a result
in which we give an analytic way to express
conditions $(1)$ and $(2)$ of Theorem \ref{thm-frames}.
In this case, $\ell$ will be exactly the length of the SIS.
\begin{theorem}\label{thm-Radu}
Let $ \Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ such that $E(\Phi)$ is a frame for $V=S(\Phi)$ and suppose that $\ell(V)=\ell\leq m$.
Let $A\in\mathbb{C}^{\ell\times m}$ be a matrix and consider $\Psi=\{\psi_1,\cdots, \psi_{\ell}\}$ where $\Psi=A\Phi$.
Then, $E(\Psi)$ is a
frame for $V$ if and only if the following condition between $A$ and $G_{\Phi}$ is satisfied: $AA^*$ is invertible and
\begin{equation}\label{condition-Radu}
\mathop{\rm ess\,sup}_{\omega\in [-\frac1{2}, \frac1{2}]^d} \|(I_m-A^*(AA^*)^{-1}A)G_{\Phi}(\omega)G_{\Phi}^{\dagger}(\omega)\|<1.
\end{equation}
Here, $I_m$ is the identity in $\mathbb{C}^{m\times m}$ and $G_{\Phi}^{\dagger}(\omega)$ is the Moore-Penrose pseudoinverse of
$G_{\Phi}(\omega)$.
\end{theorem}
\begin{proof}
Let us first prove that conditions $(1)$ and $(2)$ of Theorem \ref{thm-frames}
imply condition (\ref{condition-Radu}).
Note that condition $(2)$ is equivalent to $\bm{\mathcal{G}} [Ker(A), Im(G_{\Phi}(\omega))]\leq \sqrt{1-\delta^2}$ for
a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d$. Now, since $Ker(A)\cap Im(G_{\Phi}(\omega))=\{0\}$ for a.e
$\omega\in [-\frac1{2}, \frac1{2}]^d$, it follows from Proposition 2.2 in \cite{ACRS05} that
$\bm{\mathcal{G}} [Ker(A), Im(G_{\Phi}(\omega))]=\|P_{Ker(A)}P_{Im(G_{\Phi}(\omega))}\|$ for a.e
$\omega\in [-\frac1{2}, \frac1{2}]^d$, where $P_{Ker(A)}$ and $P_{Im(G_{\Phi}(\omega))}$ denote the orthogonal projections onto
$Ker(A)$ and $Im(G_{\Phi}(\omega))$, respectively.
Using the Moore-Penrose pseudoinverse we can write
$\|P_{Ker(A)}P_{Im(G_{\Phi}(\omega))}\|=\|(I_m-A^{\dagger}A)G_{\Phi}(\omega)G_{\Phi}^{\dagger}(\omega)\|$.
Finally, since $A$ is full rank, we replace $A^{\dagger}=A^*(AA^*)^{-1}$ and then the assertion follows.
Conversely, suppose that (\ref{condition-Radu}) holds. Then
$\|P_{Ker(A)}P_{Im(G_{\Phi}(\omega))}\|\leq\gamma<1$ for all
$\omega\in [-\frac1{2}, \frac1{2}]^d\setminus Z$, where $Z$ is a set of Lebesgue measure zero and
$\gamma=\mathop{\rm ess\,sup}_{\omega\in [-\frac1{2}, \frac1{2}]^d} \|(I_m-A^*(AA^*)^{-1}A)G_{\Phi}(\omega)G_{\Phi}^{\dagger}(\omega)\|$.
Fix $\omega\in[-\frac1{2}, \frac1{2}]^d\setminus Z$ and suppose there exists $x\in Ker(A)\cap Im(G_{\Phi}(\omega))$ with $\|x\|=1$.
Then, $\|P_{Ker(A)}P_{Im(G_{\Phi}(\omega))}x\|=\|x\|=1$ which is a contradiction. Therefore,
$Ker(A)\cap Im(G_{\Phi}(\omega))=\{0\}$ for all $\omega\in[-\frac1{2}, \frac1{2}]^d\setminus Z$ and this gives that $(1)$
in Theorem \ref{thm-frames} is satisfied. Having $Ker(A)\cap Im(G_{\Phi}(\omega))=\{0\}$ for a.e.
$\omega\in[-\frac1{2}, \frac1{2}]^d$, condition $(2)$ of Theorem \ref{thm-frames} can be obtained with similar arguments as in the
first part of this proof.
\end{proof}
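Condition (\ref{condition-Radu}) is easy to evaluate in the example above (a sketch): there $\ell=\ell(V)=2$, the matrix $G_{\Phi}(\omega)G_{\Phi}^{\dagger}(\omega)$ is the orthogonal projection onto $Im(G_{\Phi}(\omega))=\text{span}\{(1,8,-4)\}^{\perp}$, and for any $A\in\mathcal{R}$ constructed there (so that $ad-bc\neq 0$ and $AA^*$ is invertible) the matrix $I_3-A^*(AA^*)^{-1}A$ is the orthogonal projection onto $Ker(A)=\text{span}\{(1,8,-4)\}$. The product of these two projections is zero, so the essential supremum in (\ref{condition-Radu}) equals $0<1$, confirming once more that every such $A$ yields a frame $E(A\Phi)$ for $V$.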
\medskip
Now we consider the case of Riesz bases. We obtain necessary and sufficient conditions on $A$ in order to preserve Riesz bases of translates.
In contrast to the frame case,
the conditions on $A$ do not depend on the shift invariant space.
First, observe that if $ \Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ is such that $E(\Phi)$ is a Riesz (orthonormal)
basis for $V=S(\Phi)$, then, by Proposition \ref{gramiano-frame}, $G_{\Phi}(\omega)$ is an $m\times m$ invertible matrix for a.e.
$\omega\in[-\frac1{2}, \frac1{2}]^d$. Thus, using \eqref{length-gramian}, we have that $\ell(V)=m$.
Therefore, in order to preserve Riesz (orthonormal) bases, we need to consider square matrices $A\in\mathbb{C}^{m\times m}$.
\begin{proposition}\label{thm-for-riesz}
Let $ \Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ such that $E(\Phi)$ is a Riesz basis for $V=S(\Phi)$.
Let $A\in\mathbb{C}^{m\times m}$ be a matrix and consider $\Psi=\{\psi_1,\cdots, \psi_m\}$ with
$\Psi=A\Phi$.
Then, $E(\Psi)$ is a Riesz basis for $V$ if and only if $A$ is an invertible matrix.
\end{proposition}
\begin{proof}
Let $0<\alpha\leq \beta$ be the Riesz bounds for $E(\Phi)$.
Suppose first that $A$ is invertible.
Then, for a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d,$
\begin{align*}
\|G_{\Psi}(\omega)\|&= \|AG_{\Phi}(\omega)A^*\|\le \|A\|^2\|G_{\Phi}(\omega)\|\le \|A\|^2\beta,\\
\|(G_{\Psi}(\omega))^{-1}\|&\le \|A^{-1}\|^2\|(G_{\Phi}(\omega))^{-1}\|\le \|A^{-1}\|^2\frac{1}{\alpha}.
\end{align*}
Therefore, by Proposition \ref{gramiano-frame}, $E(\Psi)$ is a Riesz basis for $V$.
Conversely, if $E(\Psi)$ is a Riesz basis for $V,$ it follows from Proposition \ref{gramiano-frame} that
$G_{\Psi}(\omega)$ is invertible for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d.$ Since $G_{\Phi}(\omega)$ is invertible
as well for a.e. $\omega\in[-\frac1{2}, \frac1{2}]^d$
we have that $A$ is invertible.
\end{proof}
\begin{remark}
The above result shows that every invertible matrix preserves Riesz bases of translates.
On the other hand, it is known that the set $\{A\in\mathbb{C}^{m\times m}\colon \det(A)=0\}$ has Lebesgue measure zero.
Thus, almost every matrix (namely, exactly those that are invertible) preserves Riesz bases of translates.
We can then connect this with Bownik and Kaiblinger's result as follows. If, in addition to the hypotheses of Theorem \ref{thm-bownik},
we require $E(\Phi)$ to be a Riesz basis, then the set $\mathcal{R}$ is equal to $\{A\in\mathbb{C}^{m\times m}\colon \det(A)\neq0\}.$
\end{remark}
It is worth mentioning that Proposition \ref{thm-for-riesz} can be proven as a corollary of Theorem \ref{thm-frames}, or independently of that result using Ostrowski's
Theorem \cite[Theorem 4.5.9]{HJ90}.
Applying Ostrowski's Theorem to
$G=G_{\Phi}(\omega)$ for a.e. $\omega\in [-\frac1{2}, \frac1{2}]^d$, uniform bounds for the eigenvalues of $AG_{\Phi}(\omega)A^*$ can be found.
For frames, in the special case when the initial set of generators has exactly $\ell(V)$ elements, we have that
every invertible matrix yields a set of generators that is a frame for $V.$
This is stated in the next result, and its proof is analogous to that of Proposition \ref{thm-for-riesz}. It can also be viewed as a corollary of Theorem \ref{thm-Radu}.
\begin{theorem}
Let $ \Phi=\{\phi_1,\cdots, \phi_{\ell}\}\subseteq L^2(\mathbb{R}^d)$ such that $E(\Phi)$ is a frame for $V=S(\Phi)$ and suppose that $\ell(V)=\ell$.
Let $A\in\mathbb{C}^{\ell\times \ell}$ be a matrix and consider $\Psi=\{\psi_1,\cdots, \psi_{\ell}\}$ where $\Psi=A\Phi$.
Then, $E(\Psi)$ is a frame for $V$ if and only if $A$ is an invertible matrix.
\end{theorem}
Finally, in the case of orthonormal bases, we have:
\begin{proposition}\label{thm-for-bons}
Let $ \Phi=\{\phi_1,\cdots, \phi_m\}\subseteq L^2(\mathbb{R}^d)$ such that $E(\Phi)$ is an orthonormal basis for $V=S(\Phi)$ and let $A\in\mathbb{C}^{m\times m}$ be a matrix. Consider $\Psi=A\Phi$. Then,
$E(\Psi)$ is an orthonormal basis for $V$
if and only if $A$ is a unitary matrix.
\end{proposition}
\begin{proof}
Note that if $A=\{a_{ij}\}_{i,j}$ then,
$$
\langle T_k\psi_i, T_{k'}\psi_{i'}\rangle=\sum_{j,j'=1}^ma_{ij}\overline{a_{i'j'}}
\langle T_k\phi_j, T_{k'}\phi_{j'}\rangle=(AA^*)_{ii'}\delta(k-k')
$$
and from here it follows that $E(\Psi)$ is an orthonormal set if and only if $A$ is unitary.
For the completeness of $E(\Psi)$ in $V$ we use that, since $A$ is unitary and $\Psi=A\Phi,$ we can write $\Phi=A^* \Psi.$ Then
$
T_k\phi_j=\sum_{i=1}^m \overline{a_{ij}} T_k\psi_i, \quad k\in\mathbb{Z}^d,\, j=1, \ldots, m
$
and the result follows.
\end{proof}
Proposition \ref{thm-for-riesz} shows that almost every square matrix maps Riesz basis generators into Riesz basis generators.
For the case of frames it might happen that
condition (2) of Theorem \ref{thm-frames} is not satisfied for any matrix $A.$
That is exactly what we show in the following example, in which we present a finitely
generated SIS for which every linear combination of its generators
yields a minimal set of generators that is not a frame of translates.
\begin{example}
Let $\phi_1, \phi_2\in L^2(\mathbb{R}^2)$ be defined by
$$\widehat{\phi_1}(\omega_1,\omega_2)=-\sin(2\pi\omega_1)\chi_{[-\frac1{2}, \frac1{2}]^2}(\omega_1,\omega_2)$$
and
$$\widehat{\phi_2}(\omega_1,\omega_2)=e^{2\pi i \omega_2}\cos(2\pi\omega_1)\chi_{[-\frac1{2}, \frac1{2}]^2}(\omega_1,\omega_2).$$
Consider the shift invariant space generated by $\Phi=\{\phi_1,\phi_2\}$, $V=S(\phi_1,\phi_2)$.
We will see that $E(\phi_1,\phi_2)$
is a frame for $V$, that $V$ is a principal SIS and that $E(A\Phi)$ is not a frame for $V$ for any matrix $A\in \mathbb{C}^{1\times2}$.
We first compute $G_{\Phi}(\omega_1,\omega_2)$. Let $(\omega_1,\omega_2)\in [-\frac1{2}, \frac1{2}]^2$. Then, we have
$$G_{\Phi}(\omega_1,\omega_2)=
\Big(
\begin{array}{cc}
\sin^2(2\pi\omega_1)& -e^{-2\pi i \omega_2}\sin(2\pi\omega_1)\cos(2\pi\omega_1)\\
-e^{2\pi i \omega_2}\sin(2\pi\omega_1)\cos(2\pi\omega_1) & \cos^2(2\pi\omega_1)
\end{array}
\Big).
$$
Note that $G_{\Phi}(\omega_1,\omega_2)=G_{\Phi}^2(\omega_1,\omega_2)$ for all
$(\omega_1,\omega_2)\in [-\frac1{2}, \frac1{2}]^2$. Thus, by Proposition \ref{gramiano-frame}, $E(\phi_1,\phi_2)$
is a frame for $V.$ Further, it can be seen that $\text{rk}(G_{\Phi}(\omega_1, \omega_2))=1$
for $a.e.$ $(\omega_1,\omega_2)\in [-\frac1{2}, \frac1{2}]^2$
and hence $V$ has length 1. Moreover, $V$ is the Paley--Wiener space
$PW=\{f\in L^2(\mathbb{R}^2): \,supp(\widehat{f})\subseteq [-\frac1{2}, \frac1{2}]^2\}$.
Let $A\in \mathbb{C}^{1\times2}$. Without loss of generality we can suppose that $A=(a_1 \,\,a_2)$ with
$|a_1|^2+|a_2|^2=1$.
Then, $A$ can be written as $A=(\cos(\theta)e^{2\pi i \beta}\,\, \,\,\sin(\theta)e^{2\pi i \beta'})$ for
$\theta\in [0,\frac{\pi}{2}]$ and $\beta, \beta'\in \mathbb{R}$.
Therefore, the Gramian associated to $A\Phi$ is
\begin{align*}
AG_{\Phi}(\omega_1,\omega_2)A^*&=
\sin^2(2\pi\omega_1) \cos^2(\theta) + \cos^2(2\pi\omega_1) \sin^2(\theta)\\
&-2\cos(2\pi(\omega_2-\beta'+\beta))\sin(2\pi\omega_1)\cos(2\pi\omega_1)\sin(\theta)\cos(\theta).
\end{align*}
Observe that for each $\theta, \beta$ and $\beta'$ fixed,
$$
AG_{\Phi}(\omega_1,\omega_2)A^*\neq 0\quad \mbox {for a.e. } (\omega_1,\omega_2)\in [-\frac1{2}, \frac1{2}]^2.
$$
In particular,
$$
\text{rk}(G_{\Phi}(\omega_1,\omega_2))=\text{rk}(AG_{\Phi}(\omega_1,\omega_2)A^*) \quad \mbox {for a.e. } (\omega_1,\omega_2)\in [-\frac1{2}, \frac1{2}]^2
$$
and then, every matrix $A$ preserves generators.
Let $\tilde{\omega}_2\in[-\frac1{2}, \frac1{2}]$ such that $\tilde{\omega}_2=\beta'-\beta+k$ for some $k\in\mathbb{Z}$.
Then,
\begin{align*}
AG_{\Phi}(\omega_1,\tilde{\omega}_2)A^*&=
\sin^2(2\pi\omega_1) \cos^2(\theta) + \cos^2(2\pi\omega_1) \sin^2(\theta)\\
&-2\sin(2\pi\omega_1)\cos(2\pi\omega_1)\sin(\theta)\cos(\theta)\\
&=\sin^2(2\pi\omega_1-\theta).
\end{align*}
Now, taking $\tilde{\omega}_1=\frac{\theta}{2\pi},$ we get
$AG_{\Phi}(\tilde{\omega}_1,\tilde{\omega}_2)A^*=0$.
Then, since the Gramian associated to $A\Phi$ is a continuous function with a zero, condition $(b)$ in item $(3)$
of Proposition \ref{gramiano-frame} can never be fulfilled. Thus,
$E(A\Phi)$ cannot be a frame for $V$.
\vspace{-0.5cm}
\hfill
$\blacksquare$
\end{example}
{\bf Acknowledgments.} The authors thank J. Antezana and P. Massey for fruitful conversations. We also thank J. Antezana
for pointing out to us the result in Remark 2.10 from \cite{ACRS05}, and R. Balan for his suggestions
that gave rise to Theorem \ref{thm-Radu}.
Finally, we thank the anonymous referee for her/his comments that helped to improve the manuscript.
| {
"timestamp": "2013-12-13T02:01:04",
"yymm": "1305",
"arxiv_id": "1305.2944",
"language": "en",
"url": "https://arxiv.org/abs/1305.2944",
"abstract": "A finitely generated shift invariant space $V$ is a closed subspace of $L^2(\\R^d)$ that is generated by the integer translates of a finite number of functions. A set of frame generators for $V$ is a set of functions whose integer translates form a frame for $V$. In this note we give necessary and sufficient conditions in order that a minimal set of frame generators can be obtained by taking linear combinations of the given frame generators. Surprisingly the results are very different to the recently studied case when the property to be a frame is not required.",
"subjects": "Functional Analysis (math.FA); Classical Analysis and ODEs (math.CA)",
"title": "Linear combinations of frame generators in systems of translates",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969674838368,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.709904434903402
} |
https://arxiv.org/abs/2201.09115 | Disproof of a Conjecture by Woodall | In 2001, Woodall conjectured that for every pair of integers $s,t \ge 1$, all graphs without a $K_{s,t}$-minor are $(s+t-1)$-choosable. In this note we refute this conjecture in a strong form: We prove that for every choice of constants $\varepsilon>0$ and $C \ge 1$ there exists $N=N(\varepsilon,C) \in \mathbb{N}$ such that for all integers $s,t $ with $N \le s \le t \le Cs$ there exists a graph without a $K_{s,t}$-minor and list chromatic number greater than $(1-\varepsilon)(2s+t)$. | \section{Introduction}
\paragraph{\textbf{Preliminaries.}} All graphs considered in this paper are loopless and have no parallel edges. Given numbers $s,t \in \mathbb{N}$ we denote by $K_t$ the complete graph of order $t$ and by $K_{s,t}$ the complete bipartite graph with bipartition classes of size $s$ and $t$, respectively. Given graphs $G$ and $F$, we say that $G$ \emph{contains $F$ as a minor}, in symbols, $G \succeq F$, if there exists a collection of pairwise disjoint non-empty subsets $(Z_f)_{f \in V(F)}$ of the vertex-set of $G$ such that for every $f \in V(F)$, the induced subgraph $G[Z_f]$ of $G$ is connected, and furthermore, for every edge $f_1f_2 \in E(F)$, there exists at least one edge in $G$ with endpoints in $Z_{f_1}$ and $Z_{f_2}$. It is easily seen that our definition above is equivalent to the standard definition of graph minors, i.e., $G \succeq F$ if and only if $G$ can be transformed into a graph isomorphic to $F$ by performing a sequence of vertex or edge deletions, and edge contractions.
A \emph{proper coloring} of a graph $G$ with color-set $S$ is a mapping $c:V(G) \rightarrow S$ such that $c^{-1}(s)$ is an independent set, for every $s \in S$. A \emph{list assignment} for $G$ is an assignment $L:V(G) \rightarrow 2^\mathbb{N}$ of finite sets $L(v)$ (called lists) to the vertices $v \in V(G)$. An \emph{$L$-coloring} of $G$ is a proper coloring $c:V(G) \rightarrow \mathbb{N}$ of $G$ in which every vertex must be assigned a color from its respective list, i.e., $c(v) \in L(v)$ for every $v \in V(G)$.
The chromatic number $\chi(G)$ of a graph $G$ is defined as the smallest integer $k \ge 1$ such that $G$ admits a proper coloring with color-set $[k]$. Similarly, the \emph{list chromatic number} $\chi_\ell(G)$ of a graph $G$ is defined as the smallest number $k \ge 1$ such that $G$ admits an $L$-coloring for \emph{every} assignment $L(\cdot)$ of color lists to the vertices of $G$, provided that $|L(v)| \ge k$ for every $v \in V(G)$ (we refer to this property by saying that $G$ is \emph{$k$-choosable}).
Clearly, $\chi(G) \le \chi_\ell(G)$ for every graph $G$, but in general $\chi_\ell(G)$ is not bounded from above by a function in $\chi(G)$, as shown by complete bipartite graphs.
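A standard small example (a sketch): the complete bipartite graph $K_{3,3}$ satisfies $\chi(K_{3,3})=2$, but it is not $2$-choosable. Indeed, assign to the three vertices of each bipartition class the lists $\{1,2\}, \{1,3\}, \{2,3\}$. In any attempted $L$-coloring, the set of colors used on the first class cannot consist of a single color (the three lists have empty intersection), hence it contains one of the pairs $\{1,2\}, \{1,3\}, \{2,3\}$; the vertex of the second class whose list is exactly this pair then has no available color. Therefore $\chi_\ell(K_{3,3})\ge 3$.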
\bigskip
Hadwiger's conjecture, a vast generalization of the four-color-theorem~\cite{appelhaken1,appelhaken2} and arguably one of the most important open problems in graph theory, states the following upper bound on the chromatic number of graphs containing no $K_t$-minor:
\begin{conjecture}[Hadwiger~\cite{hadwiger}, 1943]\label{hadwiger}
For every $t \in \mathbb{N}$, if $G$ is a graph such that $G \not\succeq K_t$, then $\chi(G) \le t-1$.
\end{conjecture}
Hadwiger's conjecture has given rise to many beautiful results and open problems in the past. For a good overview of this field of research, encompassing the major results until about $2$ years ago, we refer the reader to the survey article~\cite{survey} by Seymour. Very recently, there has been considerable progress on the asymptotic version of Hadwiger's conjecture, see~\cite{norine, norine2, postle, postle2, postle3, del} for further reference.
The remarkable difficulty of Hadwiger's conjecture has led to the study of several relaxations. One natural such relaxation is to prove the conjecture for graphs which (more strongly) exclude not only $K_t$, but a fixed sparser graph $H$ on $t$ vertices as a minor. In particular the case when $H$ is a complete bipartite graph has received attention. In his survey article~\cite{woodall} on list coloring, Woodall made the following two conjectures, both of which have remained open problems thus far. The first conjecture is a weakening of Hadwiger's conjecture, that was independently proposed by Seymour (cf.~\cite{kostochkabip3}).
\begin{conjecture}[cf.~\cite{woodall}, Conjecture ``$C(r,s,\chi)$'']\label{con:chrom}
For every $s, t \in \mathbb{N}$, if $G$ is a graph with $G \not\succeq K_{s,t}$, then $\chi(G) \le s+t-1$.
\end{conjecture}
\begin{conjecture}[cf.~\cite{woodall}, Conjecture ``$C(r,s,\text{ch})$'']\label{con:lists}
For every $s, t \in \mathbb{N}$, if $G$ is a graph with $G \not\succeq K_{s,t}$, then $\chi_\ell(G) \le s+t-1$.
\end{conjecture}
For both conjectures, several partial results have been obtained in the past, which we summarize in the following.
Woodall proved in~\cite{woodall,woodall2} that Conjecture~\ref{con:lists} holds if $s \le 2$. The correctness for $s=2$ can also be obtained as a consequence of the main result of Chudnovsky et al.~\cite{chudnovsky}: They proved, extending a result by Myers~\cite{myers}, that every $n$-vertex graph with no $K_{2,t}$-minor has at most $\frac{t+1}{2}(n-1)$ edges. Hence the average degree of such graphs (and of all their subgraphs) is less than $t+1$. The latter implies that every graph with no $K_{2,t}$-minor is $t$-degenerate and therefore $(t+1)$-choosable, by coloring greedily along a degeneracy ordering: each vertex then has at most $t$ previously colored neighbors, so a color from its list of size $t+1$ is always available.
For $s=t=3$, Conjecture~\ref{con:lists} was confirmed by Woodall in~\cite{woodall}. The conjecture is also valid for $s=3$ and $t=4$, since J{\o}rgensen proved (cf. Corollary~7 in~\cite{jorgensen}) that every $K_{3,4}$-minor free graph is $5$-degenerate, and hence $6$-choosable. For $s=3$ and large values of $t$, it was proved in~\cite{kostochkabip3} by Kostochka and Prince that for $t \ge 6300$ every $n$-vertex graph without a $K_{3,t}$-minor has at most $\frac{t+3}{2}(n-2)+1$ edges. Therefore (by considering the average degree), every such graph is $(t+2)$-degenerate. Using greedy coloring one can conclude that every graph without a $K_{3,t}$-minor is $(3+t)$-choosable for $t \ge 6300$, which misses Woodall's conjecture by an additive constant of $1$ only. With an additional argument\footnote{which does not seem to extend to list coloring}, Kostochka and Prince proved in~\cite{kostochkabip3} that Conjecture~\ref{con:chrom} holds for $s=3$ and $t \ge 6300$.
Finally, the case $s=4 \le t$ was considered by Kawarabayashi~\cite{kawarabayashi}, who proved that graphs without a $K_{4,t}$-minor are $4t$-choosable, for every $t \ge 1$. For $s=t=4$, a better result is known: J{\o}rgensen proved (cf. Corollary~6 in~\cite{jorgensen}) that every graph without a $K_{4,4}$-minor is $7$-degenerate, and hence $8$-choosable.
Let us now turn to asymptotic bounds for large $s$ and $t$. By recent results of Delcourt and Postle~\cite{del}, every graph with no $K_t$-minor is $O(t \log \log t)$-colorable and $O(t (\log \log t)^2)$-choosable. This in particular means that the maximum (list) chromatic number of graphs with no $K_{s,t}$-minor is bounded by $O((s+t)\log \log (s+t))$ (respectively $O((s+t)(\log \log (s+t))^2)$). These are the asymptotically best known bounds for Conjectures~\ref{con:chrom} and~\ref{con:lists} when $s$ and $t$ are of comparable size; however, if $t$ is significantly bigger than $s$, then better bounds are known. Notably, Kostochka~\cite{kostochkabip1,kostochkabip2} proved that for every $s \ge 1$, Conjecture~\ref{con:chrom} holds if $t \ge t_0(s)$, where $t_0(s)=O(s^3\log^3(s))$. However, an analogous result is not known for the list chromatic number. The only similar result in this direction was proved in~\cite{kostochkadens1,kostochkadens2}, see also~\cite{kuehn} for related results: If $t$ is huge compared to $s$, concretely if $t>(240s\log_2(s))^{8s\log_2(s)+1}$, then every graph with no $K_{s,t}$-minor is $(3s+t)$-degenerate, and therefore $(3s+t+1)$-choosable.
\medskip
In this note, we disprove Conjecture~\ref{con:lists}, by constructing counterexamples for sufficiently large values of $s$ and $t$ which have a comparable size.
\begin{theorem}\label{main}
For every choice of constants $\varepsilon>0$ and $C \ge 1$ there exists $N=N(\varepsilon,C) \in \mathbb{N}$ such that for all integers $s,t $ with $N \le s \le t \le Cs$ there exists a graph without a $K_{s,t}$-minor and list chromatic number greater than $(1-\varepsilon)(2s+t)$.
\end{theorem}
For instance, if $s=t$, then the above implies that the maximum list chromatic number of graphs with no $K_{t,t}$-minor is at least $3t-o(t)$, which substantially exceeds the conjectured upper bound of $2t-1$ for large $t$. It remains an interesting open problem whether Conjecture~\ref{con:lists} remains true if $s$ is fixed and $t$ is sufficiently large in terms of $s$. It would also be interesting to determine the smallest values of $s$ and $t$ (or of $s+t$) for which Conjecture~\ref{con:lists} fails, since the bounds coming from our proof of Theorem~\ref{main} are almost astronomical. In that regard, it could be interesting to study the following smallest open cases of Conjecture~\ref{con:lists}.
\begin{question}
Is every graph without a $K_{4,4}$-minor $7$-choosable? Is every graph without a $K_{3,5}$-minor $7$-choosable?
\end{question}
Regarding the true asymptotics of the list chromatic number of graphs with no $K_{s,t}$-minor, the following natural problem arises.
\begin{question}
Is it true that for all integers $1 \le s \le t$, every graph $G$ with $G \not\succeq K_{s,t}$ satisfies $\chi_\ell(G) \le 2s+t$?
\end{question}
In the remainder of this note we present the proof of Theorem~\ref{main}. It is probabilistic and relies on a few modifications of an argument previously used by the author in~\cite{steiner} to prove that the maximum list chromatic number of graphs without a $K_t$-minor is at least $2t-o(t)$, addressing a conjecture by Kawarabayashi and Mohar~\cite{kawarabayashimohar}, see also~\cite{op}, known as the \emph{List Hadwiger Conjecture}.
\section{Proof of Theorem~\ref{main}}
Following standard notation, for a pair of natural numbers $m, n \in \mathbb{N}$ and a probability $p \in [0,1]$, we denote by $G(m,n,p)$ the bipartite Erd\H{o}s--R\'enyi random graph, that is, a random bipartite graph $G$ with bipartition $A ; B$ such that $|A|=m$, $|B|=n$, and in which every pair $ab$ with $a \in A, b \in B$ is selected as an edge of $G$ with probability $p$, independently from all other such pairs.
\begin{lemma}\label{random}
Let $\varepsilon \in (0,1)$, $C \ge 1$, $f \in \mathbb{N}$ and $\delta \in (0,1)$ be constants such that $f^2 \delta<1$. For every $n \in \mathbb{N}$, let $p=p(n):=n^{-\delta}$ and $m=m(n):=\lfloor Cn \rfloor$. Then with probability tending to $1$ as $n \rightarrow \infty$, the random graph $G=G(m(n),n,p(n))$ with bipartition $A \cup B$ simultaneously satisfies the following two properties:
\begin{itemize}
\item For every collection of pairwise disjoint non-empty sets $X_1,\ldots,X_k \subseteq A$, $Y_1,\ldots,Y_k \subseteq B$ such that $k \ge \varepsilon n$ and $\max\{|X_1|,\ldots,|X_k|,|Y_1|,\ldots,|Y_k|\} \le f$, there exists a pair of indices $(i,j) \in [k] \times [k]$ such that $G$ contains all the edges $xy, (x,y) \in X_i \times Y_j$.
\item $G$ has maximum degree at most $\varepsilon n$.
\end{itemize}
\end{lemma}
\begin{proof}
\noindent
It will clearly be sufficient to prove that for each of the two events above individually, the probability for them not to occur tends to $0$ as $n \rightarrow \infty$. It then follows using a union bound that also the probability that at least one of the two events does not occur tends to $0$ as $n \rightarrow \infty$, proving the claim of the lemma.
\begin{itemize}
\item Let us first consider the probability event $E_n$ that $G$ does not satisfy the property claimed by the first item. We want to show that $\mathbb{P}(E_n) \rightarrow 0$ as $n \rightarrow \infty$. So consider a fixed collection $X_1,\ldots,X_k \subseteq A, Y_1,\ldots,Y_k \subseteq B$ of disjoint non-empty sets, where $k \ge \varepsilon n$ and $\max\{|X_1|,\ldots,|X_k|,|Y_1|,\ldots,|Y_k|\} \le f$. Let $E(X_1,\ldots,X_k,Y_1,\ldots,Y_k)$ be the probability event ``there exists no pair $(i,j) \in [k] \times [k]$ such that all the edges $xy$ with $x \in X_i$ and $y \in Y_j$ are included in $G$''. Fixing a pair of indices $(i,j) \in [k] \times [k]$, clearly the probability of the event that ``$X_i$ is not fully connected to $Y_j$'' equals $1-p^{|X_i||Y_j|} \le 1-p^{f^2}$. Since these events are independent for different choices of $(i,j)$, it follows that
$$\mathbb{P}(E(X_1,\ldots,X_k,Y_1,\ldots,Y_k)) \le (1-p^{f^2})^{k^2}$$ $$\le (1-p^{f^2})^{\varepsilon^2n^2}\le \exp(-p^{f^2}\varepsilon^2 n^2)=\exp(-\varepsilon^2n^{2-f^2\delta}).$$
With a (very) rough estimate, there are at most $$(m+n+1)^{m+n}=\exp(\ln(m+n+1)(m+n)) \le \exp(\ln((C+1)n+1)(C+1)n)$$ different ways to select the sets $X_1,\ldots,X_k,Y_1,\ldots,Y_k$. Hence, applying a union bound we find that
$$\mathbb{P}(E_n) \le \exp(\ln((C+1)n+1)(C+1)n-\varepsilon^2n^{2-f^2\delta}).$$
The right hand side of the above inequality tends to $0$ as $n \rightarrow \infty$, since $f^2\delta<1$ and hence $\varepsilon^2n^{2-f^2\delta}=\Omega(n^{2-f^2\delta})$ grows faster than $\ln((C+1)n+1)(C+1)n=O(n\ln n)$. This proves that $G$ satisfies the properties claimed by the first item w.h.p., as required.
\item To show that also the property claimed by the second item holds true w.h.p., consider the probability that a fixed vertex $x \in A \cup B$ has more than $\varepsilon n$ neighbors in $G$. The degree of $x$ in $G(m,n,p)$ is distributed like a binomial random variable $B(n,p)$ if $x \in A$ and like $B(m,p)$ if $x \in B$. Hence the expected degree of $x$ is $np=n^{1-\delta}$ if $x \in A$ and $mp \in [n^{1-\delta},Cn^{1-\delta}]$ if $x \in B$. Hence, $\mathbb{E}(d_G(x))$ is smaller than $\frac{\varepsilon n}{2}$ for $n$ sufficiently large in terms of $\varepsilon$, $\delta$ and $C$. Applying Chernoff's bound we find for every sufficiently large $n$:
$$\mathbb{P}(d_G(x)>\varepsilon n ) \le \mathbb{P}(d_G(x)>2\mathbb{E}(d_G(x))) \le \exp\left(-\frac{1}{3}\mathbb{E}(d_G(x))\right) \le \exp\left(-\frac{1}{3}n^{1-\delta}\right).$$
Since this bound holds for every choice of $x \in A \cup B$, applying a union bound we find that the probability that $G$ has a vertex of degree more than $\varepsilon n$ is at most
$$(m+n)\exp\left(-\frac{1}{3}n^{1-\delta}\right) \le \exp\left(\ln((C+1)n)-\frac{1}{3}n^{1-\delta}\right)$$ which tends to $0$ as $n \rightarrow \infty$, as desired (here we used that $\delta<1$ and hence $n^{1-\delta}$ grows faster than $\ln((C+1)n)$).
\end{itemize}
\end{proof}
In the next intermediate result we derive a useful deterministic statement from Lemma~\ref{random} about the existence of graphs with certain properties, which will come in handy when we construct the lower-bound examples for Theorem~\ref{main}.
\begin{lemma}\label{cor}
For every $\varepsilon \in (0,1)$ and $C \ge 1$, there exists $n_0=n_0(\varepsilon,C)$ such that for all integers $m,n$ satisfying $n_0 \le n \le m \le Cn$, there exists a graph $H$ whose vertex-set $V(H)=A \cup B$ is partitioned into two disjoint sets $A$ of size $m$ and $B$ of size $n$, and such that the following properties hold:
\begin{itemize}
\item Both $A$ and $B$ form cliques of $H$,
\item every vertex in $H$ has at most $\varepsilon n$ non-neighbors in $H$, and
\item for all integers $1 \le s \le t$ such that $n \le s$ and $m \le (1-2\varepsilon)(s+t)$, $H$ does not contain $K_{s,t}$ as a minor.
\end{itemize}
\end{lemma}
\begin{proof}
Let $f:=\lceil \frac{C}{\varepsilon} \rceil \in \mathbb{N}$ and $\delta:=\frac{\varepsilon^2}{4C^2}$. Then $f^2\delta <1$, and hence we may apply Lemma~\ref{random}. It follows directly that there exists $n_0=n_0(\varepsilon,C) \in \mathbb{N}$ such that for every $n \ge n_0$ there exists a bipartite graph $G'$, whose bipartition classes $A'$ and $B'$ are of size $\lfloor Cn \rfloor$ and $n$ respectively, and such that the following hold:
\begin{itemize}
\item For every collection of pairwise disjoint non-empty sets $X_1,\ldots,X_k \subseteq A', Y_1,\ldots,Y_k \subseteq B'$ such that $k \ge \varepsilon n$ and $\max\{|X_1|,\ldots,|X_k|,|Y_1|,\ldots,|Y_k|\} \le f$, there exists a pair $(i,j) \in [k] \times [k]$ such that $G'$ contains all the edges $xy, (x,y) \in X_i \times Y_j$.
\item $G'$ has maximum degree at most $\varepsilon n$.
\end{itemize}
Since $n \le m \le \lfloor Cn \rfloor$, we may select and fix a subset $A \subseteq A'$ such that $|A|=m$. Also, put $B:=B'$. In the following, let $G:=G'[A \cup B]$ denote the induced subgraph of $G'$ with bipartition $A; B$.
We now define $H$ as the complement of $G$ (also with vertex-set $A \cup B$). It is clear from the definition of $G$ that $A$ and $B$ form cliques in $H$ and have the required size, verifying the first item in the claim of the lemma. The second item follows directly from the fact that $\Delta(G) \le \Delta(G') \le \varepsilon n$.
It hence remains to verify the last item. Towards a contradiction, suppose that there exist numbers $1 \le s \le t $ with $n \le s$, $m \le (1-2\varepsilon)(s+t)$, such that $H$ contains $K_{s,t}$ as a minor. This implies that there exists a collection $\mathcal{Z}=\mathcal{Z}_1 \cup \mathcal{Z}_2$ of non-empty and pairwise disjoint subsets of $V(H)$ such that $|\mathcal{Z}_1|=s$, $|\mathcal{Z}_2|=t$ and such that for every pair $Z_1 \in \mathcal{Z}_1, Z_2 \in \mathcal{Z}_2$, there exists at least one edge in $H$ connecting a vertex in $Z_1$ to a vertex in $Z_2$.
Let us now consider $\mathcal{Z}_{A,1}:=\{Z \in \mathcal{Z}_1|Z \cap A \neq \emptyset\}$ and $\mathcal{Z}_{A,2}:=\{Z \in \mathcal{Z}_2|Z \cap A \neq \emptyset\}$. Since the sets in $\mathcal{Z}$ are pairwise disjoint, we can see that $|\mathcal{Z}_{A,1}|+|\mathcal{Z}_{A,2}| \le |A|=m \le (1-2\varepsilon)(s+t)$. We therefore must have $|\mathcal{Z}_{A,1}| \le (1-2\varepsilon)s$ or $|\mathcal{Z}_{A,2}| \le (1-2\varepsilon)t$. In the following, we distinguish these two cases and lead both to a contradiction. This will then show that our assumption on the existence of $s$ and $t$ above was incorrect, and hence complete the proof that $H$ satisfies all three properties required by the lemma.
\medskip
\textbf{Case 1.} Suppose first that $|\mathcal{Z}_{A,1}| \le (1-2\varepsilon)s$. Then this means that $|\mathcal{Z}_1 \setminus \mathcal{Z}_{A,1}| \ge 2\varepsilon s \ge 2\varepsilon n$. The sets in $\mathcal{Z}_1 \setminus \mathcal{Z}_{A,1}$ are exactly those $Z \in \mathcal{Z}_1$ such that $Z \subseteq B$. Since $|B|=n$, and since the sets in $\mathcal{Z}_1 \setminus \mathcal{Z}_{A,1}$ are pairwise disjoint, it follows that $\mathcal{Z}_1 \setminus \mathcal{Z}_{A,1}$ contains at most $\varepsilon n$ sets of size more than $\frac{1}{\varepsilon}$. Consequently, at least $2\varepsilon n -\varepsilon n=\varepsilon n$ sets in $\mathcal{Z}_1 \setminus \mathcal{Z}_{A,1}$ have size at most $\frac{1}{\varepsilon} \le f$. Fix a list $Y_1,\ldots,Y_k \subseteq B$ of $k=\lceil \varepsilon n \rceil$ distinct sets in $\mathcal{Z}_1\setminus \mathcal{Z}_{A,1}$, each of size at most $f$.
Next, consider the set $\mathcal{Z}_{B,2}:=\{Z \in \mathcal{Z}_2|Z \cap B \neq \emptyset\}$. Since the elements of $(\mathcal{Z}_1\setminus \mathcal{Z}_{A,1}) \cup \mathcal{Z}_{B,2}$ are pairwise disjoint and all intersect $B$, it follows that $|\mathcal{Z}_1\setminus \mathcal{Z}_{A,1}|+|\mathcal{Z}_{B,2}| \le |B|=n$, and hence that $|\mathcal{Z}_{B,2}| \le n-|\mathcal{Z}_1\setminus \mathcal{Z}_{A,1}| \le n-2\varepsilon n$. Since $t \ge s \ge n$, we conclude that $|\mathcal{Z}_2\setminus\mathcal{Z}_{B,2}| \ge t-(n-2\varepsilon n)=t-n+2\varepsilon n \ge 2 \varepsilon n$. All the sets $Z \in \mathcal{Z}_2\setminus \mathcal{Z}_{B,2}$ are fully included in $A$. Therefore, and since $|A|=m \le Cn$, there can be at most $\varepsilon n$ sets in $\mathcal{Z}_2\setminus \mathcal{Z}_{B,2}$ whose size exceeds $\frac{C}{\varepsilon}$. Hence, at least $2\varepsilon n-\varepsilon n=\varepsilon n$ sets in $\mathcal{Z}_2\setminus \mathcal{Z}_{B,2}$ have size at most $\frac{C}{\varepsilon} \le f$. Let $X_1,\ldots,X_k \subseteq A$ be $k=\lceil \varepsilon n\rceil$ distinct sets in $\mathcal{Z}_2\setminus \mathcal{Z}_{B,2}$, each of size at most $f$. By the property of $G'$ listed in the beginning of this proof, we know that there exists a pair $(i,j) \in [k] \times [k]$ such that all the edges $xy, (x,y) \in X_i \times Y_j$ are contained in $G'$ (and hence in $G$). This, however, means that there exists no edge in $H$ which connects a vertex in $X_i \in \mathcal{Z}_2$ to a vertex in $Y_j \in \mathcal{Z}_1$, contradicting our initial assumptions on the collection $\mathcal{Z}=\mathcal{Z}_1 \cup \mathcal{Z}_2$. This contradiction concludes the proof in Case~1.
\medskip
The analysis for Case~2 below is almost identical to the analysis of Case~1, except for the fact that $\mathcal{Z}_1$ and $\mathcal{Z}_2$ play interchanged roles\footnote{We avoided reducing the second case to the first with a simple ``w.l.o.g.'' assumption, since formally the cases are not exactly symmetric (note that the sizes $s$ and $t$ of $\mathcal{Z}_1$ and $\mathcal{Z}_2$ can be different)}.
\medskip
\textbf{Case 2.} Suppose now that $|\mathcal{Z}_{A,2}| \le (1-2\varepsilon)t$. This implies $|\mathcal{Z}_2 \setminus \mathcal{Z}_{A,2}| \ge 2\varepsilon t \ge 2\varepsilon s \ge 2\varepsilon n$. Note that all the sets in $\mathcal{Z}_2 \setminus \mathcal{Z}_{A,2}$ are fully included in $B$. Since the sets in $\mathcal{Z}_2 \setminus \mathcal{Z}_{A,2}$ are pairwise disjoint, it follows that there are at most $\varepsilon n$ sets of size more than $\frac{1}{\varepsilon}$ in $\mathcal{Z}_2 \setminus \mathcal{Z}_{A,2}$. Thus there are at least $2\varepsilon n -\varepsilon n=\varepsilon n$ sets in $\mathcal{Z}_2 \setminus \mathcal{Z}_{A,2}$ of size at most $\frac{1}{\varepsilon} \le f$. Fix $k=\lceil \varepsilon n \rceil$ distinct sets $Y_1,\ldots,Y_k \subseteq B$ in $\mathcal{Z}_2\setminus \mathcal{Z}_{A,2}$, each of size at most $f$.
Let us now consider the subcollection $\mathcal{Z}_{B,1}:=\{Z \in \mathcal{Z}_1|Z \cap B \neq \emptyset\}$. The elements of $\mathcal{Z}_{B,1} \cup (\mathcal{Z}_2\setminus \mathcal{Z}_{A,2})$ all intersect $B$, and hence $|\mathcal{Z}_{B,1}|+|\mathcal{Z}_{2}\setminus \mathcal{Z}_{A,2}| \le |B|=n$. Therefrom, we have $|\mathcal{Z}_{B,1}| \le n-|\mathcal{Z}_2\setminus \mathcal{Z}_{A,2}| \le n-2\varepsilon n$. Since $s \ge n$, it follows that $|\mathcal{Z}_1\setminus\mathcal{Z}_{B,1}| \ge s-(n-2\varepsilon n)=s-n+2\varepsilon n \ge 2 \varepsilon n$. By definition, every set $Z \in \mathcal{Z}_1\setminus \mathcal{Z}_{B,1}$ is fully included in $A$. Due to $|A|=m \le Cn$, this implies that there are at most $\varepsilon n$ sets in $\mathcal{Z}_1\setminus \mathcal{Z}_{B,1}$ whose size exceeds $\frac{C}{\varepsilon}$. Hence, at least $2\varepsilon n-\varepsilon n=\varepsilon n$ sets in $\mathcal{Z}_1\setminus \mathcal{Z}_{B,1}$ have size at most $\frac{C}{\varepsilon} \le f$. Let $X_1,\ldots,X_k \subseteq A$ be $k=\lceil \varepsilon n\rceil$ distinct sets in $\mathcal{Z}_1\setminus \mathcal{Z}_{B,1}$, each of size at most $f$. Applying the first of the two properties of $G'$ as mentioned above, it follows that there must be $(i,j) \in [k] \times [k]$ such that all pairs $xy, (x,y) \in X_i \times Y_j$ are contained as edges in $G$. Since $H$ and $G$ are complements, this means that there exists no edge in $H$ which connects a vertex in $X_i \in \mathcal{Z}_1$ to a vertex in $Y_j \in \mathcal{Z}_2$, contradicting our initial assumptions. This contradiction concludes the proof also in Case~2.
\end{proof}
We are now almost ready to prove Theorem~\ref{main}. The only remaining ingredient is a simple lemma, which states that gluing two $K_{s,t}$-minor-free graphs together along a sufficiently small clique separator results in a graph that is again $K_{s,t}$-minor-free. We strongly suspect that this statement has appeared elsewhere before, but we decided to include the (simple) proof here for the reader's convenience.
\begin{lemma}\label{glue}
Let $1 \le s \le t$ be integers, and let $G_1$ and $G_2$ be graphs not containing $K_{s,t}$ as a minor. Let $C:=V(G_1) \cap V(G_2)$. If $C$ forms a clique in both $G_1$ and $G_2$, and if $|C|<s$, then the graph $G_1 \cup G_2$ also does not contain $K_{s,t}$ as a minor.
\end{lemma}
\begin{proof}
Towards a contradiction, suppose that $G:=G_1 \cup G_2$ contains $K_{s,t}$ as a minor. By definition, this means that there exists a collection of disjoint non-empty subsets $\mathcal{Z}=\mathcal{Z}_1 \cup \mathcal{Z}_2$ of $V(G_1) \cup V(G_2)$ such that $G[Z]$ is connected for every $Z \in \mathcal{Z}$, $|\mathcal{Z}_1|=s, |\mathcal{Z}_2|=t$, and for every pair $X \in \mathcal{Z}_1, Y \in \mathcal{Z}_2$, there exists an edge of $G$ with endpoints in $X$ and $Y$. Since $|C|<s=|\mathcal{Z}_1| \le |\mathcal{Z}_2|$, there exist $Z_1 \in \mathcal{Z}_1$ and $Z_2 \in \mathcal{Z}_2$ such that $Z_1 \cap C=Z_2 \cap C=\emptyset$. Since $Z_1$ and $Z_2$ induce connected subgraphs of $G$, this means that for both $i \in \{1,2\}$ we have $Z_i \subseteq V(G_1) \setminus C$ or $Z_i \subseteq V(G_2) \setminus C$ (since no edge in $G$ connects a vertex in $V(G_1) \setminus C$ to a vertex in $V(G_2) \setminus C$). Furthermore, by assumption there exists an edge with endpoints in $Z_1$ and $Z_2$, which implies that either $Z_1, Z_2 \subseteq V(G_1) \setminus C$, or $Z_1, Z_2 \subseteq V(G_2) \setminus C$. W.l.o.g. (possibly after renaming $G_1$ and $G_2$) we may assume from now on that $Z_1, Z_2 \subseteq V(G_1) \setminus C$. Then every $Z \in \mathcal{Z} \setminus \{Z_1,Z_2\}$ must also be linked with an edge to one of $Z_1$ or $Z_2$, and hence cannot be entirely contained in $V(G_2) \setminus C$. Hence, we have $Z \cap V(G_1) \neq \emptyset$ for every $Z \in \mathcal{Z}$. We now claim that the collection $\mathcal{Z}':=\{Z \cap V(G_1)|Z \in \mathcal{Z}\}$ of disjoint vertex-subsets in $G_1$ certifies that $G_1$ also contains a $K_{s,t}$-minor.
Firstly, for every $Z \in \mathcal{Z}$, the graph $G_1[Z \cap V(G_1)]$ is connected. Namely, we either have $Z \subseteq V(G_1)$, and hence $G_1[Z\cap V(G_1)]=G_1[Z]=G[Z]$ is a connected subgraph of $G_1$, or we have $Z \cap C \neq \emptyset$. In the latter case, the facts that $G[Z]$ is connected and that $C$ is a clique yield that $Z \cap V(G_1)$ also induces a connected subgraph of $G$ (and hence of $G_1$), as desired.
Secondly, for every pair of distinct sets $X, Y \in \mathcal{Z}$ with $X \in \mathcal{Z}_1, Y \in \mathcal{Z}_2$, there exists an edge $e$ in $G$ with endpoints in $X$ and $Y$. If these endpoints both lie in $V(G_1)$, then the same edge links $X \cap V(G_1)$ and $Y \cap V(G_1)$ in $G_1$. Otherwise, at least one endpoint of $e$ is contained in $V(G_2)\setminus C$, and in this case the connectivity of $G[X], G[Y]$ implies that $X \cap C \neq \emptyset \neq Y \cap C$. Since $C$ is a clique, the latter directly implies that there is an edge in $G_1$ joining a vertex in $X \cap C$ to a vertex in $Y \cap C$, again certifying that $X \cap V(G_1)$ and $Y \cap V(G_1)$ are linked by an edge in $G_1$. All in all, this shows that the collection $\mathcal{Z}'$ certifies the existence of a $K_{s,t}$-minor in $G_1$, a contradiction to the assumptions made in the lemma. This concludes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{main}]
Let fixed constants $\varepsilon \in (0,1)$ and $C \ge 1$ be given, and assume w.l.o.g. that $\varepsilon<\frac{1}{2}$. Define $\varepsilon':=\frac{\varepsilon}{2}$ and $C':=2C+2 \ge 1$. Let $n_0=n_0(\varepsilon', C') \in \mathbb{N}$ be chosen as in Lemma~\ref{cor} applied with parameters $\varepsilon', C'$, and define $N:=\max\{n_0+1,\lceil \frac{4}{\varepsilon} \rceil\} \in \mathbb{N}$.
Let us now go about proving the claim of Theorem~\ref{main}. For that purpose, let $s,t$ be any given integers such that $N \le s \le t \le Cs$, and let us show that there exists a graph with no $K_{s,t}$-minor and list chromatic number greater than $(1-\varepsilon)(2s+t)$. For that purpose, define $n:=s-1$ and $m:= \lfloor (1-\varepsilon)(s+t)\rfloor$, noting that we have $n_0 \le n \le m$ as well as $m \le s+t \le (C+1)s=(C+1)(n+1) \le C'n$.
We may therefore apply Lemma~\ref{cor} to the parameters $m$ and $n$, which yields a graph $H$ whose vertex-set is partitioned into two non-empty sets $A$ and $B$ of size $m$ and $n$ respectively, such that both $A$ and $B$ form cliques in $H$, every vertex in $H$ has at most $\varepsilon' n$ non-neighbors, and $H$ is $K_{s,t}$-minor free (since $n \le s$ and $m \le (1-\varepsilon)(s+t)=(1-2\varepsilon')(s+t)$, by definition of $m$ and $n$).
For each possible choice of an assignment $c \in [m+n-1]^B$ of colors from $[m+n-1]$ to vertices in $B$, denote by $H(c)$ an isomorphic copy of $H$, such that the vertex-set of $H(c)$ decomposes into the cliques $A(c)$ and $B$ of size $m$ and $n$, respectively. More precisely, the distinct copies $H(c), c \in [m+n-1]^B$ of $H$ share the same set $B$ but have pairwise disjoint sets $A(c)$. Since $B$ forms a clique of size $n=s-1<s$ in the $K_{s,t}$-minor-free graph $H(c)$ for every coloring $c:B \rightarrow [m+n-1]$, it follows by repeated application of Lemma~\ref{glue} that the graph $\mathbf{G}$ with vertex set $\bigcup_{c \in [m+n-1]^B}{A(c)} \cup B$, defined as the union of the graphs $H(c), c \in [m+n-1]^B$, is $K_{s,t}$-minor free as well.
Now, consider an assignment $L:V(\mathbf{G}) \rightarrow 2^\mathbb{N}$ of color lists to the vertices of $\mathbf{G}$ as follows:
For every vertex $b \in B$, we define $L(b):=[m+n-1]$, and for every vertex $a \in A(c)$ for some coloring $c \in [m+n-1]^B$ of $B$, we define $L(a):= [m+n-1] \setminus \{c(b)|b \in B, ab \notin E(H(c))\}$.
Note that since every vertex in $A(c)$ has at most $\varepsilon' n$ non-neighbors in $H(c)$, we have $|L(v)| \ge m+n-1-\varepsilon' n$ for every vertex $v \in V(\mathbf{G})$.
We now claim that $\mathbf{G}$ does not admit an $L$-coloring, which will then imply the inequality $\chi_\ell(\mathbf{G}) \ge m+n-\varepsilon' n$. Indeed, suppose towards a contradiction there exists a proper coloring $c_\mathbf{G}:V(\mathbf{G}) \rightarrow \mathbb{N}$ of $\mathbf{G}$ such that $c_\mathbf{G}(v) \in L(v)$ for every $v \in V(\mathbf{G})$. Let $c$ denote the restriction of $c_\mathbf{G}$ to $B$, and consider the proper coloring of $H(c)$ obtained by restricting $c_\mathbf{G}$ to the vertices in $H(c)$. Since $v(H(c))=m+n$ and $c_\mathbf{G}(v) \in [m+n-1]$ for every $v \in V(H(c))$, there must exist two (necessarily non-adjacent) vertices in $H(c)$ which have the same color with respect to $c_\mathbf{G}$. Concretely, there exist $a \in A(c)$, $b\in B$ such that $ab \notin E(H(c))$ and $c_\mathbf{G}(a)=c_\mathbf{G}(b)$. This however yields a contradiction, since $c_\mathbf{G}(a) \in L(a)$ and by definition $c(b)=c_\mathbf{G}(b)$ is not included in the list of $a$.
We conclude that indeed, $\mathbf{G}$ is a $K_{s,t}$-minor-free graph which satisfies $$\chi_\ell(\mathbf{G}) \ge m+n-\varepsilon' n=\lfloor (1-\varepsilon)(s+t) \rfloor+\left(1-\frac{\varepsilon}{2}\right)(s-1)$$ $$>(1-\varepsilon)(s+t)-1+(1-\varepsilon)s+\frac{\varepsilon}{2}s-\left(1-\frac{\varepsilon}{2}\right)$$
$$=(1-\varepsilon)(2s+t)+\frac{\varepsilon}{2}s-2+\frac{\varepsilon}{2}>(1-\varepsilon)(2s+t),$$
where for the last inequality we used that $s \ge N \ge \frac{4}{\varepsilon}$.
\end{proof}
| {
"timestamp": "2022-01-25T02:15:07",
"yymm": "2201",
"arxiv_id": "2201.09115",
"language": "en",
"url": "https://arxiv.org/abs/2201.09115",
"abstract": "In 2001, Woodall conjectured that for every pair of integers $s,t \\ge 1$, all graphs without a $K_{s,t}$-minor are $(s+t-1)$-choosable. In this note we refute this conjecture in a strong form: We prove that for every choice of constants $\\varepsilon>0$ and $C \\ge 1$ there exists $N=N(\\varepsilon,C) \\in \\mathbb{N}$ such that for all integers $s,t $ with $N \\le s \\le t \\le Cs$ there exists a graph without a $K_{s,t}$-minor and list chromatic number greater than $(1-\\varepsilon)(2s+t)$.",
"subjects": "Combinatorics (math.CO)",
"title": "Disproof of a Conjecture by Woodall",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969670030071,
"lm_q2_score": 0.7217432122827967,
"lm_q1q2_score": 0.7099044345563663
} |
https://arxiv.org/abs/1804.10920 | Partial complementation of graphs | A partial complement of the graph $G$ is a graph obtained from $G$ by complementing all the edges in one of its induced subgraphs. We study the following algorithmic question: for a given graph $G$ and graph class $\mathcal{G}$, is there a partial complement of $G$ which is in $\mathcal{G}$? We show that this problem can be solved in polynomial time for various choices of the graphs class $\mathcal{G}$, such as bipartite, degenerate, or cographs. We complement these results by proving that the problem is NP-complete when $\mathcal{G}$ is the class of $r$-regular graphs. | \section{Introduction}
One of the most important questions in graph theory concerns the efficiency of recognition of a graph class $\mathcal{G}$. For example, how fast can we decide whether a graph is chordal,
2-connected,
triangle-free,
of bounded treewidth,
bipartite,
$3$-colorable,
or excludes some fixed graph as a minor? In particular, the recent developments in parameterized algorithms are driven by the problems of recognizing graph classes which do not differ by more than a ``small disturbance'' from graph classes recognizable in polynomial time. The amount of disturbance is quantified by the number of ``atomic'' operations required for modifying an input graph into the ``well-behaving'' graph class $\mathcal{G}$. The standard operations could be edge/vertex deletions, additions, or edge contractions.
Many problems in graph algorithms fall into this graph modification category: is it possible to add at most $k$ edges to make a graph $2$-edge connected or to make it chordal? Or is it possible to delete at most $k$ vertices such that the resulting graph has no edges or contains no cycles?
A rich subclass of modification problems concerns edge editing problems. Here the ``atomic'' operation is the change of adjacency, i.\,e.\@\xspace{} for a pair of vertices $u,v$, we can either add an edge $uv$ or delete the edge $uv$. For example, the \textsc{Cluster Editing} problem asks to transform an input graph into a cluster graph, that is a disjoint union of cliques, by flipping at most $k$ adjacency relations.
Besides the basic edge editing, it is natural to consider problems where the set of removed and added edges should satisfy some structural constraints. In particular, such problems were considered for \emph{complementation} problems. Recall that the \emph{complement} of a graph $G$ is a graph $H$ on the same vertices such that two distinct vertices of $H$ are adjacent if and only if they are not adjacent in $G$.
Seidel (see~\cite{Seidel74,Seidel76,Seidel81}) introduced the operation that is now known as the \emph{Seidel switch}. For a vertex $v$ of a graph $G$, this operation complements the adjacencies of $v$, that is, it removes the edges incident to $v$ and makes $v$ adjacent to the non-neighbors of $v$ in $G$. Respectively, for a set of vertices $U$, the Seidel switching, that is, the consecutive switching for the vertices of $U$, complements the adjacencies between $U$ and its complement $V(G)\setminus U$.
The study of the algorithmic question whether it is possible to obtain a graph from a given graph class by the Seidel switch was initiated by Ehrenfeucht et al.~\cite{EhrenfeuchtHHR98}.
Further results were established in~\cite{JelinekJK16,JelinkovaK14,JelinkovaSHK11,KratochvilNZ92,Kratochvil03}.
Another important operation of this type is the \emph{local complementation}. For a vertex $v$ of a graph $G$, the \emph{local complementation of $G$ at $v$} is the graph obtained from $G$ by replacing $G[N(v)]$ by its complement. This operation plays a crucial role in the definition of \emph{vertex-minors}~\cite{Oum05} and was investigated in this context (see, e.g.~\cite{CourcelleO07,Oum17}). See also~\cite{Bouchet93,KAMINSKI20092747} for some algorithmic results concerning local complementations.
In this paper we study the \emph{partial complement} of a graph, which was introduced by Kami{\'n}ski, Lozin, and Milani{\v c} in \cite{KAMINSKI20092747} in their study of the clique-width of a graph.
A \emph{partial complement} of a graph $G$ is a graph obtained from $G$ by complementing all the edges of one of its induced subgraphs.
More formally, for a graph $G$ and $S\subseteq V(G)$, we define
$G \oplus S$ as the graph with the vertex set $V(G)$ whose edge set is defined as follows:
a pair of distinct vertices $u,v$ is an edge of $G \oplus S$ if and only if one of the following holds:
\begin{itemize}
\item $uv \in E(G) \land (u \notin S \lor v \notin S)$, or
\item $uv \notin E(G) \land u \in S \land v \in S$.
\end{itemize}
Thus when the set $S$ consists only of two vertices $\{u, v\}$, then the operation changes the adjacency between $u$ and $v$, and for a larger set $S$,
$G \oplus S$ changes the adjacency relations for all pairs of vertices of $S$.
We say that a graph $H$ is a partial complement of the graph $G$ if $H$ is isomorphic to $G \oplus S$ for some $S\subseteq V(G)$.
For a graph class $\mathcal{G}$ and a graph $G$, we say that there is a \emph{partial complement of $G$ to $\mathcal{G}$} if for some $S\subseteq V(G)$, we have $G \oplus S \in \mathcal{G}$. We denote by $\mathcal{G}^{(1)}$ the class of graphs such that its members can be partially complemented to $\mathcal{G}$.
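For concreteness, the operation $G \oplus S$ can be computed directly from this definition; the following Python sketch of ours (using a plain vertex-list/edge-set representation rather than any particular graph library) returns the edge set of $G \oplus S$.
\begin{verbatim}
from itertools import combinations

def partial_complement(vertices, edges, S):
    # Edge set of the partial complement of G with respect to S:
    # adjacency inside S is flipped, every pair with an endpoint
    # outside S is left unchanged.
    S = set(S)
    edges = {frozenset(e) for e in edges}
    result = set()
    for u, v in combinations(vertices, 2):
        pair = frozenset((u, v))
        inside = u in S and v in S
        if (pair in edges) != inside:   # XOR: edge of the partial complement
            result.add(pair)
    return result
\end{verbatim}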
Let $\mathcal{G}$ be a graph class. We consider the following generic algorithmic problem.
\defsimpleproblem{\textsc{Partial Complement to $\mathcal{G}$}\xspace (\textsc{PC$\mathcal{G}$}\xspace{})}%
{A simple undirected graph $G$.}%
{Is there a partial complement of $G$ to $\mathcal{G}$?}
In other words, how difficult is it to recognize the class $\mathcal{G}^{(1)}$?
In this paper we show that there are many well-known graph classes $\mathcal{G} $ such that
$\mathcal{G}^{(1)}$ is recognizable in polynomial time. We show that
\begin{itemize}
\item \textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable in time $\mathcal{O}(f(n)\cdot n^4 + n^6)$ when $\mathcal{G}$ is a triangle-free graph class recognizable in time $f(n)$. For example, this implies that when $\mathcal{G}$ is the class of bipartite graphs, the class $\mathcal{G}^{(1)}$ is recognizable in polynomial time.
\item \textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable in time $f(n)\cdot n^{\mathcal{O}(1)}$ when $\mathcal{G}$ is a $d$-degenerate graph class recognizable in time $f(n)$.
Thus when $\mathcal{G}$ is the class of planar graphs, the class of cubic graphs, the class of graphs of bounded treewidth, or the class of $H$-minor-free graphs, the class $\mathcal{G}^{(1)}$ is recognizable in polynomial time.
\item \textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable in polynomial time when $\mathcal{G}$ is a class of bounded clique-width expressible in monadic second-order logic (with no edge set quantification). In particular, if $\mathcal{G}$ is the class of $P_4$-free graphs (cographs), then $\mathcal{G}^{(1)}$ is recognizable in polynomial time.
\item \textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable in polynomial time when $\mathcal{G}$ can be described by a $2 \times 2$ $M$-partition matrix. Therefore $\mathcal{G}^{(1)}$ is recognizable in polynomial time when $\mathcal{G}$ is the class of split graphs, as they can be described by such a matrix.
\end{itemize}
Nevertheless, there are cases when the problem is \ensuremath{\operatorClassNP}-hard. In particular, we prove that this holds when $\mathcal{G}$ is the class of $r$-regular graphs.
\section{Partial complementation to triangle-free graph classes}\label{sec:triangle_free}
A triangle is a complete graph on three vertices. Many graph classes do not allow the triangle as a subgraph, for instance trees, forests, or graphs of large girth. In this section
we show that partial complementation to triangle-free graphs can be decided in polynomial time.
More precisely, we show that if a graph class ${\cal G}$ can be recognized in polynomial time and it is triangle-free, then we can also solve \textsc{Partial Complement to $\mathcal{G}$}\xspace{} in polynomial time.
Our algorithm is constructive and returns a \emph{solution} $S \subseteq V(G)$, that is, a set $S$ such that $G\oplus S$ is in $\mathcal{G}$. We say that a solution \emph{hits} an edge $uv$ (or a non-edge $\overline{uv}$) if both $u$ and $v$ are contained in $S$.
Our algorithm considers each of the following cases.
\begin{enumerate}
\item[($i$)] There is a solution $S$ of size at most two.
\item[($ii$)] There is a solution $S$ containing two vertices that are non-adjacent in $G$.
\item[($iii$)] There is a solution $S$ that forms a clique of size at least $3$ in $G$.
\item[($iv$)] $G$ is a no-instance.
\end{enumerate}
Case~($i$) can be resolved in polynomial time by brute-force, and thus we start from analyzing the structure of a solution in Case~($ii$). We need the following observation.
\begin{observation} \label{tf:obs:small-independentset-in-image}
Let $ \mathcal{G}$ be a class of triangle-free graphs and let $G$ be an instance of \PCG, where $S \subseteq V(G)$ is a valid solution. Then
\begin{enumerate}[a)]
\item $G[S]$ does not contain an independent set of size 3, and
\item for every triangle $\{u, v, w\} \subseteq V(G)$, at least two vertices are in $S$.
\end{enumerate}
\end{observation}
\noindent Because all non-edges between vertices in $G[S]$ become edges in $G \oplus S$ and vice versa, whereas all (non-) edges with an endpoint outside $S$ remain untouched, we see that the observation holds.
Let us recall that a graph $G$ is a \emph{split graph} if its vertex set can be partitioned into $V(G)=C\cup I$, where $C$ is a clique and $I$ is an independent set. Let us note that the vertex set of a split graph can have several \emph{split partitions}, i.e.\ partitions into a clique and an independent set. However, the number of split partitions of an $n$-vertex split graph is at most $n$.
The analysis of Case~($ii$) is based on the following lemma.
\begin{lemma} \label{tf:lem:nonclique-solution-properties}
Let $ \mathcal{G}$ be a class of triangle-free graphs and let $G$ be an instance of \PCG{}. Let $S \subseteq V(G)$ be a valid solution which is not a clique, and let $u, v \in S$ be distinct vertices such that $uv \notin E(G)$. Then
\begin{enumerate}[a)]
\item the entire solution $S$ is a subset of the union of the closed neighborhoods of $u$ and $v$, that is $S \subseteq N_G[u] \cup N_G[v]$;
\item every common neighbor of $u$ and $v$ must be contained in the solution $S$, that is $N_G(u) \cap N_G(v) \subseteq S$;
\item the graph $ G[N(u) \setminus N(v)]$ is a split graph. Moreover, $(N(u) \setminus N(v))\cap S$ is a clique and $(N(u) \setminus N(v))\setminus S$ is an independent set.
\end{enumerate}
\end{lemma}
\begin{proof}
We will prove each point separately, and in order.
\begin{enumerate}[a)]
\item Assume for the sake of contradiction that the solution $S$ contains a vertex $w \notin N_G[u] \cup N_G[v]$. But then $\{u, v, w\}$ is an independent set in $G$, which contradicts item a) of Observation~\ref{tf:obs:small-independentset-in-image}.
\item Assume for the sake of contradiction that the solution $S$ does not contain a vertex $w \in N_G(u) \cap N_G(v)$. Then the edges $uw$ and $vw$ will both be present in $G \oplus S$, as well as the edge $uv$. Together, these form a triangle.
\item We first claim that the solution $S$ is a vertex cover for $G[N(u) \setminus N(v)]$. If it was not, then there would exist an edge $u_1u_2$ of $G[N(u)\setminus N(v)]$ such that both endpoints $u_1, u_2\not\in S$, yet $u_1, u_2$ would form a triangle with $u$ in $G\oplus S$, which would be a contradiction. Hence $(N(u) \setminus N(v))\setminus S$ is an independent set.
Secondly, we claim that $(N(u) \setminus N(v))\cap S$ forms a clique. If not, then there would exist
$u_1, u_2 \in (N(u) \setminus N(v))\cap S$ which are nonadjacent.
In this case $\{u_1, u_2, v\}$ is an independent set, which contradicts item a) of Observation~\ref{tf:obs:small-independentset-in-image}. Taken together, these claims imply the last item of the lemma.
\end{enumerate}
\end{proof}
\noindent We now move on to examine the structure of a solution for the third case, when there exists a solution which is a clique of size at least three.
\begin{lemma} \label{tf:lem:clique-solution-properties}
Let $ \mathcal{G}$ be a class of triangle-free graphs and let $G$
be an instance of \PCG. Let $S \subseteq V(G)$ be a solution such that $|S| \geq 3$ and $G[S]$ is a clique. Let $u, v \in S$ be distinct. Then
\begin{enumerate}[a)]
\item the solution $S$ is contained in their common neighborhood, that is $S \subseteq N_G[u] \cap N_G[v]$, and
\item the graph $G[N_G[u] \cap N_G[v]]$ is a split graph where $(N_G[u] \cap N_G[v])\setminus S$ is an independent set.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove each point separately, and in order.
\begin{enumerate}[a)]
\item Assume for the sake of contradiction that the solution $S$ contains a vertex $w$ which is not in the neighborhood of both $u$ and $v$. This contradicts that $S$ is a clique.
\item We claim that $S$ is a vertex cover of $G[N_G[u] \cap N_G[v]]$. Because $S$ is also a clique, the statement of the lemma will then follow immediately. Assume for the sake of contradiction that $S$ is not a vertex cover. Then there exists an uncovered edge $w_1w_2$, where $w_1, w_2 \in N_G[u] \cap N_G[v]$, and also $w_1, w_2 \notin S$. Since $\{u, w_1, w_2\}$ form a triangle, we have by b) of Observation~\ref{tf:obs:small-independentset-in-image} that at least two of these vertices are in $S$. That is a contradiction, so our claim holds. %
\end{enumerate} %
\end{proof}
\noindent
We now have everything in place to present the algorithm.
\begin{algo}[{\textsc{Partial Complement to $\mathcal{G}$}\xspace{} where $\mathcal{G}$ is triangle-free}] \label{tf:algo}
$ $ \newline{}
Input: An instance $G$ of \textsc{PC$\mathcal{G}$}\xspace{} where $\mathcal{G}$ is a triangle-free graph class recognizable in time $f(n)$ for some function $f$.
\newline{}
Output: A set $S \subseteq V(G)$ such that $G \oplus S$ is in $\mathcal{G}$, or a correct report that no such set exists.
\begin{enumerate}
\item By brute force, check if there is a solution of size at most 2. If yes, return this solution.
\item For every non-edge $\overline{uv}$ of $G$:
\begin{enumerate}
\item If either $G[N_G(u) \setminus N_G(v)]$ or $G[N_G(v) \setminus N_G(u)]$ is not a split graph, skip this iteration and try the next non-edge.
\item Let $(I_u, C_u)$ and $(I_v, C_v)$ denote a split partition of $G[N_G(u)\setminus N_G(v)]$ and $G[N_G(v)\setminus N_G(u)]$ respectively. For each pair of split partitions $(I_u, C_u), (I_v, C_v)$:
\begin{enumerate}
\item Construct solution candidate $S' := \{u, v\} \cup (N_G(u) \cap N_G(v)) \cup C_u \cup C_v$
\item If $G \oplus S'$ is a member of $\mathcal{G}$, return $S'$
\end{enumerate}
\end{enumerate}
\item Find a triangle $\{x, y, z\}$ of $G$; if none exists, go to Step 5
\item For each edge in the triangle $uv \in \{xy, xz, yz\}$:
\begin{enumerate}
\item If $G[N_G(u) \cap N_G(v)]$ is not a split graph, skip this iteration and try the next edge.
\item For each possible split partition $(I, C)$ of $G[N_G(u) \cap N_G(v)]$:
\begin{enumerate}
\item Construct solution candidate $S' := \{u, v\} \cup C$
\item If $G \oplus S'$ is a member of $\mathcal{G}$, return $S'$
\end{enumerate}
\end{enumerate}
\item Return `\textsc{None}'
\end{enumerate}
\end{algo}
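As an illustration of the control flow, the following Python skeleton of ours mirrors the five steps; \texttt{in\_class} (a membership oracle for $\mathcal{G}$) and \texttt{split\_partitions} (assumed to have access to $G$ and to return all split partitions $(I,C)$ of the induced subgraph on a given vertex set, or nothing if it is not split) are assumed subroutines, and \texttt{partial\_complement} is the sketch given in the introduction.
\begin{verbatim}
from itertools import combinations

def pc_to_triangle_free(vertices, edges, in_class, split_partitions):
    # Skeleton of the algorithm above; in_class and split_partitions are
    # assumed oracles, partial_complement is the earlier sketch.
    edges = {frozenset(e) for e in edges}
    N = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        N[u].add(v); N[v].add(u)
    def ok(S):
        return in_class(vertices, partial_complement(vertices, edges, S))
    # Step 1: solutions of size at most two.
    for size in range(3):
        for S in combinations(vertices, size):
            if ok(set(S)):
                return set(S)
    # Step 2: solutions hitting a non-edge uv.
    for u, v in combinations(vertices, 2):
        if frozenset((u, v)) in edges:
            continue
        for _, Cu in split_partitions(N[u] - N[v]):
            for _, Cv in split_partitions(N[v] - N[u]):
                S = {u, v} | (N[u] & N[v]) | set(Cu) | set(Cv)
                if ok(S):
                    return S
    # Steps 3-4: clique solutions; one triangle of G suffices.
    for x, y, z in combinations(vertices, 3):
        if all(frozenset(p) in edges for p in ((x, y), (y, z), (x, z))):
            for u, v in ((x, y), (x, z), (y, z)):
                for _, C in split_partitions(N[u] & N[v]):
                    S = {u, v} | set(C)
                    if ok(S):
                        return S
            break
    return None          # Step 5
\end{verbatim}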
\begin{theorem}
Let $\mathcal{G}$ be a class of triangle-free graphs such that deciding whether an $n$-vertex graph is in $\mathcal{G}$ is solvable in time $f(n)$ for some function $f$. Then
\textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable
in time $\mathcal{O}(n^6 + n^4 \cdot f(n))$.
\end{theorem}
\begin{proof}
We will prove that Algorithm~\ref{tf:algo} is correct, and that its running time is $\mathcal{O}(n^4 \cdot (n^2 + f(n)))$. We begin by proving correctness. Step 1 is trivially correct. After Step 1 we can assume that any valid solution has size at least three, and we have handled Case~($i$) when there exists a solution of size at most two. We have the three cases left to consider: ($ii$) There exists a solution which hits a non-edge, ($iii$)
there is a solution $S$ whose vertices form a clique of size at least $3$ in $G$, and
($iv$) no solution exists.
In the case that there exists a solution $S$ hitting a non-edge $uv$, we will at some point guess this non-edge in Step 2 of the algorithm. By Lemma~\ref{tf:lem:nonclique-solution-properties}, we have that both $G[N_G(u) \setminus N_G(v)]$ and $G[N_G(v) \setminus N_G(u)]$ are split graphs, so we do not miss the solution $S$ in Step 2a. Since we try every possible combination of split partitions in Step 2b, we will by Lemma~\ref{tf:lem:nonclique-solution-properties} at some point construct $S'$ correctly such that $S' = S$.
In the case that every solution forms a clique in $G$, we first find some triangle $\{x, y, z\}$ of $G$. Such a triangle must exist, since a solution $S$ is a clique of size at least three. By Observation~\ref{tf:obs:small-independentset-in-image}b, at least two vertices of the triangle must be in $S$. At some point in step 4 we guess these two vertices correctly. By Lemma~\ref{tf:lem:clique-solution-properties}b we know that $G[N_G(u) \cap N_G(v)]$ is a split graph, so we will not miss $S$ in Step 4a. Since we try every split partition in Step 4b, we will by Lemma~\ref{tf:lem:clique-solution-properties} at some point construct $S'$ correctly such that $S' = S$.
Lastly, in the case that there is no solution, we know that there neither exists a solution of size at most two, nor a solution which hits a non-edge, nor a solution which hits a clique of size at least three. Since these three cases exhaust the possibilities, we can correctly report that there is no solution when none was found in the previous steps.
For the runtime, we start by observing that Step 1 takes time $\mathcal{O}(n^2 \cdot f(n))$. The sub-procedure of Step 2 is performed $\mathcal{O}(n^2)$ times, where step 2a takes time $\mathcal{O}(n \log n)$. The sub-procedure of Step 2b takes time at most $\mathcal{O}(n^2 + f(n))$, and it is performed at most $\mathcal{O}(n^2)$ times. In total, Step 2 will use no more than $\mathcal{O}(n^4 \cdot (n^2 + f(n)))$ time. Step 3 is trivially done in time $\mathcal{O}(n^3)$. The sub-procedure of Step 4 is performed at most three times. Step 4a is done in $\mathcal{O}(n \log n)$ time, and step 4b is done in $\mathcal{O}(n \cdot (n^2 + f(n)))$ time, which also becomes the asymptotic runtime of the entire step 4. The worst running time among these steps is Step 2, and as such the runtime of Algorithm~\ref{tf:algo} is $\mathcal{O}(n^4 \cdot (n^2 + f(n)))$.
\end{proof}
\section{Complement to degenerate graphs}\label{sec:degenerated}
For $d>0$, we say that a graph $G$ is $d$-degenerate, if every induced (not necessarily proper) subgraph of $G$ has a vertex of degree at most $d$. For example, trees are $1$-degenerate, while planar graphs are $5$-degenerate.
\begin{theorem} \label{thm:ddeg}
Let $\mathcal{G}$ be a class of $d$-degenerate graphs such that deciding
whether an $n$-vertex graph is in $\mathcal{G}$ is solvable in time $f(n)$ for some function $f$.
Then
\textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable in time $f(n) \cdot n^{2^{\mathcal{O}(d)}}$.
\end{theorem}
\begin{proof}Let $G$ be an $n$-vertex graph. We are looking for a vertex subset
$S$ of $G$ such that $G\oplus S\in \mathcal{G}$.
We start by trying all vertex subsets of $G$ of size at most $2d$ as candidates for $S$.
Thus, in time $\mathcal{O}(n^{2d}\cdot f(n))$ we either find a solution or conclude that a solution, if it exists, should be of size more than $2d$.
Now we assume that $|S|>2d$. We try all subsets of $V(G)$ of size $2d+1$. Then if $G$ can be complemented to $\mathcal{G}$, at least one of these sets, say $X$, is a subset of $S$. In total, we enumerate ${\binom{n}{2d+1}}$ sets.
First we consider the set $Y$ of all vertices in $ V(G)\setminus X$ with at least $d+1$ neighbors in $X$. The observation here is that most vertices from $Y$ are in $S$. More precisely, if more than
\[
\alpha= \binom{|X|}{d+1} \cdot (d+1)= \binom{2d+1}{d+1} \cdot (d+1)
\]
vertices of $Y$ are not in $S$, then $G\oplus S$ contains a complete bipartite graph $K_{d+1,d+1}$ as a subgraph, and hence $G\oplus S$ is not $d$-degenerate. Thus, we make at most
$
\sum_{i=0}^{\alpha}\binom{n}{i}
$
guesses on which subset of $Y$ is excluded from $S$.
Similarly, when we consider the set $Z$ of all vertices from $ V(G)\setminus X$ with at most $d$ neighbors in $X$, we have that at most $\alpha$ vertices from $Z$ could belong to $S$. Since $V(G)=X\cup Y\cup Z$, if there is a solution $S$, it will be found among at most
\[
\binom{n}{2d+1} \cdot \Big(\sum_{i=0}^{\alpha}\binom{n}{i}\Big)^2=n^{2^{\mathcal{O}(d)}}
\]
guesses.
Since for each set $S$ we can check in time $f(n)$ whether $G\oplus S\in \mathcal{G}$, this concludes the proof.
\end{proof}
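The guessing scheme of the proof can be written out as the following Python sketch of ours (again \texttt{in\_class} is an assumed membership oracle for the $d$-degenerate class $\mathcal{G}$ and \texttt{partial\_complement} is the sketch from the introduction); it mirrors the $n^{2^{\mathcal{O}(d)}}$ enumeration rather than attempting to be efficient.
\begin{verbatim}
from itertools import combinations
from math import comb

def pc_to_d_degenerate(vertices, edges, d, in_class):
    # Enumeration scheme of the proof above (a sketch, not an optimized
    # algorithm); in_class and partial_complement are assumed helpers.
    edges = {frozenset(e) for e in edges}
    adj = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v); adj[v].add(u)
    alpha = comb(2 * d + 1, d + 1) * (d + 1)
    def ok(S):
        return in_class(vertices, partial_complement(vertices, edges, S))
    # candidates of size at most 2d
    for size in range(2 * d + 1):
        for S in combinations(vertices, size):
            if ok(set(S)):
                return set(S)
    # larger candidates: X of size 2d+1 inside S, then the exceptional sets
    for X in combinations(vertices, 2 * d + 1):
        Xs = set(X)
        rest = [v for v in vertices if v not in Xs]
        Y = [v for v in rest if len(adj[v] & Xs) >= d + 1]
        Z = [v for v in rest if len(adj[v] & Xs) <= d]
        for ky in range(alpha + 1):
            for Y0 in combinations(Y, ky):            # vertices of Y left out of S
                for kz in range(alpha + 1):
                    for Z1 in combinations(Z, kz):    # vertices of Z put into S
                        S = Xs | (set(Y) - set(Y0)) | set(Z1)
                        if ok(S):
                            return S
    return None
\end{verbatim}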
\section{Complement to M-partition}
Many graph classes can be defined by whether it is possible to partition the vertices of their members such that certain internal and external edge requirements of the parts are met. For instance, a complete bipartite graph is one whose vertex set can be partitioned into two sets such that every edge between the two sets is present (external requirement) and no edge exists within either of the two sets (internal requirement). Other examples are split graphs and $k$-colorable graphs.
Feder et al.~\cite{feder2003list} formalized such partition properties of graph classes by making use of a symmetric matrix over $\{0, 1, \star\}$, called an \emph{$M$-partition}.
\begin{definition}[{$M$-partition}]
For a $k \times k$ matrix $M$, we say that a graph $G$ belongs to the graph class $\mathcal{G}_M$ if its vertices can be partitioned into $k$ (possibly empty) sets $X_1, X_2, \dots, X_k$ such that, for every $i \in [k]$,
\begin{itemize}
\item if $M[i,i] = 1$, then $X_i$ is a clique, and if $M[i,i] = 0$, then $X_i$ is an independent set,
\end{itemize}
and for every $i, j \in [k]$, $i\neq j$,
\begin{itemize}
\item if $M[i,j] = 1$, then every vertex of $X_i$ is adjacent to all vertices of $X_j$, and
\item if $M[i,j] = 0$, then there are no edges between $X_i$ and $X_j$.
\end{itemize}
\end{definition}
\noindent Note that if $M[i,j] = \star$, then there is no restriction on the edges between vertices from $X_i$ and $X_j$.
For example, for matrix
\[M=\left(\begin{array}{cc}0 & \star \\ \star & 0\end{array}\right)\]
the corresponding class of graphs is the class of bipartite graphs, while matrix
\[M=\left(\begin{array}{cc}0 & \star \\ \star& 1\end{array}\right)\]
identifies the class of split graphs.
In this section we prove the following theorem.
\begin{theorem}\label{thm:Mpartition}
Let $\mathcal{G}= \mathcal{G}_M$ be a graph class described by an $M$-partition matrix of size $2 \times 2$. Then \textsc{Partial Complement to $\mathcal{G}$}\xspace{} is solvable in polynomial time.
\end{theorem}
In particular, Theorem~\ref{thm:Mpartition} yields
polynomial algorithms for \textsc{Partial Complement to $\mathcal{G}$}\xspace{} when $\mathcal{G}$ is the class of split graphs or (complete) bipartite graphs. The proof of our theorem is based on the following beautiful dichotomy result of Feder et al.~\cite{feder2003list} on the recognition of classes $\mathcal{G}_M$ described by $4\times 4$ matrices.
\begin{proposition}[{\cite[Corollary 6.3]{feder2003list}}] \label{mpart:prop:mpart44}
Suppose $M$ is a symmetric matrix over $\{0, 1, \star\}$ of size $k = 4$. Then the recognition problem for $\mathcal{G}_M$ is
\begin{itemize}
\item NP-complete when $M$ contains the matrix for 3-coloring or its complement, and no diagonal entry is $\star$.
\item Polynomial time solvable otherwise.
\end{itemize}
\end{proposition}
\begin{lemma}\label{lem:Mpart}
Let $M$ be a symmetric $k\times k$ matrix giving rise to the graph class $\mathcal{G}_M = \mathcal{G}$. Then there exists a $2k \times 2k$ matrix $M'$ such that for any input $G$ to \textsc{Partial Complement to $\mathcal{G}$}\xspace{}, it is a yes-instance if and only if $G$ belongs to $\mathcal{G}_{M'}$.
\end{lemma}
\begin{proof}
Given $M$, we construct a matrix $M'$ in linear time. We let $M'$ be a matrix of dimension $2k \times 2k$, where entry $M'[i,j]$ is defined as $M[\lceil\frac{i}{2}\rceil,\lceil\frac{j}{2}\rceil]$ if at least one of $i,j$ is even, and $\neg{M[\frac{i+1}{2},\frac{j+1}{2}]}$ if $i,j$ are both odd. Here, $\neg{1} = 0$, $\neg{0} = 1$, and $\neg{\star} = \star$. For example, for matrix
\[M=\left(\begin{array}{cc}0 & \star \\ \star& 1\end{array}\right)\]
the above construction results in
\[ M'=
\left(\begin{array}{cccc}1 & 0 & \star & \star \\
0 & 0 & \star & \star \\
\star & \star & 0 & 1 \\
\star & \star & 1 & 1
\end{array}\right) .\]
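To make the construction concrete, a small Python sketch of ours (the routine name and the use of the string \texttt{'*'} for $\star$ are our own choices) builds $M'$ from $M$ exactly as described; applied to the split-graph matrix it reproduces the matrix displayed above.
\begin{verbatim}
def lift_matrix(M):
    # Build the 2k x 2k matrix M' from the k x k matrix M over {0, 1, '*'}:
    # an entry with at least one even index copies the corresponding entry
    # of M, an entry with two odd indices takes the negated value
    # (0 <-> 1, '*' unchanged).  Indices below are 1-based, as in the text.
    neg = {0: 1, 1: 0, '*': '*'}
    k = len(M)
    Mp = [[None] * (2 * k) for _ in range(2 * k)]
    for i in range(1, 2 * k + 1):
        for j in range(1, 2 * k + 1):
            a, b = (i + 1) // 2 - 1, (j + 1) // 2 - 1   # ceil(i/2), ceil(j/2), 0-based
            if i % 2 == 0 or j % 2 == 0:
                Mp[i - 1][j - 1] = M[a][b]
            else:
                Mp[i - 1][j - 1] = neg[M[a][b]]
    return Mp

# lift_matrix([[0, '*'], ['*', 1]]) gives
# [[1, 0, '*', '*'], [0, 0, '*', '*'], ['*', '*', 0, 1], ['*', '*', 1, 1]]
\end{verbatim}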
We prove the two directions separately.
($\implies$) Assume there is a partial complementation $G\oplus S$ into $\mathcal{G}_M$.
Let $X_1, X_2, \dots, X_k$ be an $M$-partition of $G\oplus S$. We define partition $X'_1, X'_2, \dots, X'_{2k}$ of $G$ as follows.
For every vertex $v \in X_i$, $1\leq i \leq k$, we assign $v$ to $X'_{2i-1}$ if $v\in S$ and to $X'_{2i}$ otherwise.
We now show that every edge of $G$ respects the requirements of $M'$. Let $uv \in E(G)$ be an edge, and let
$u\in X_i$ and $v\in X_j$. If at least one vertex from $\{u,v\}$, say $v$ is not in $S$, then $uv$ is also an edge in $G\oplus S$, thus $M[i,j] \neq 0$.
Since $v\not\in S$, it belongs to the set $X'_{2j}$. Vertex $u$ is assigned to a set $X'_{\ell}$, where $\ell$ is either $2i$ or $2i-1$, depending on whether $u$ belongs to $S$ or not. But because $2j$ is even, irrespective of $\ell$ we have $M'[\ell, 2j]=M[i,j]\neq 0$.
Now consider the case when both $u,v \in S$. Then the edge does not persist after the partial complementation by $S$, and thus $M[i ,j ] \neq 1$. We further know that $u$ is assigned to $X'_{2i-1}$ and $v$ to $X'_{2j-1}$. Both $2i-1$ and $2j-1$ are odd, and
by the construction of $M'$, we have that $M'[2i-1,2j-1] \neq 0$, and again the edge $uv$ respects $M'$. An analogous argument shows that also all non-edges respect $M'$.
($\impliedby$) Assume that there is a partition $X'_1, X'_2, \dots, X'_{2k}$ of $G$ according to $M'$. Let the set $S$ consist of all vertices in odd-indexed parts of the partition. We now show that $G\oplus S$ can be partitioned according to $M$. We define partition
$X_1, X_2, \dots, X_k$ by assigning
each vertex $u\in X'_i$ to $X_{\lceil\frac{i}{2}\rceil}$. It remains to show that
$X_1, X_2, \dots, X_k$ is an $M$-partition of
$G\oplus S$.
Let $u\in X_i$, $v\in X_j$.
Suppose first that $uv\in E(G\oplus S)$.
If at least one of $u,v$ is not in $S$, we assume without loss of generality that $v \notin S$.
Then $uv\in E(G)$ and $v\in X'_{2j}$. The vertex $u$ lies in some $X'_{\ell}$, where $\ell$ is either $2i$ or $2i-1$. Since the partition respects $M'$ and $uv\in E(G)$, we have $M'[\ell, 2j]\neq 0$; because $2j$ is even, $M'[\ell,2j]=M[i,j]$, and hence $M[i,j] \neq 0$.
Otherwise we have $u,v \in S$. Then $uv$ is a non-edge in $G$, and thus $M'[2i-1,2j-1] \neq 1$. But by the construction of $M'$, we have that $M[i ,j] \neq 0$, and there is no violation of $M$. An analogous argument shows that if $u$ and $v$ are not adjacent in $G\oplus S$, it holds that $M[i,j] \neq 1$. Thus
$X_1, X_2, \dots, X_k$ is an $M$-partition of
$G\oplus S$, which concludes the proof.
\end{proof}
Now we are ready to prove Theorem~\ref{thm:Mpartition}.
\begin{proof}[Proof of Theorem~\ref{thm:Mpartition}]
For a given matrix $M$, we use Lemma~\ref{lem:Mpart} to construct a matrix $M'$. Let us note that by the construction of matrix $M'$, for
every $2 \times 2$ matrix $M$ we have that the matrix $M'$ has at most two $1$'s and at most two $0$'s along the diagonal. Then by Proposition~\ref{mpart:prop:mpart44}, the recognition of whether $G$ admits an $M'$-partition is in P. Thus by Lemma~\ref{lem:Mpart}, \textsc{Partial Complement to $\mathcal{G}$}\xspace{} is solvable in polynomial time.
\end{proof}
\section{Partial complementation to graph classes of bounded clique-width}
We show that \textsc{Partial Complement to $\mathcal{G}$}\xspace{} can be solved in polynomial time when $\mathcal{G}$ has bounded clique-width and can be expressed by an \text{$\textbf{MSO}_1$}{} property.
We refer to the book~\cite{Courcelle:2012:GSM:2414243} for the basic definitions.
We will use the following result of Hlin{\v e}n{\'y} and Oum~\cite{HlinenyO08}.
\begin{proposition}[\cite{HlinenyO08}] \label{prop:cliquewappro}
There is an algorithm that for every integer $k$ and graph $G$ in time $O(|V(G)|^3)$ either computes a $(2^{k+1}-1)$-expression for the graph $G$ or correctly concludes that the clique-width of $G$ is more than $k$.
\end{proposition}
Note that the algorithm of Hlin{\v e}n{\'y} and Oum
only approximates the clique-width but does not provide an
algorithm to construct an optimal $k$-expression tree for a graph
$G$ of clique-width at most $k$. But this approximation is usually
sufficient for algorithmic purposes.
Courcelle, Makowsky and Rotics~\cite{Courcelle2000} proved that every graph property that can be expressed in \text{$\textbf{MSO}_1$}{} can be recognized in linear time for graphs of bounded clique-width when given a $k$-expression.
\begin{proposition}[{\cite[{Theorem~4}]{Courcelle2000}}]\label{rw:prop:courcelle} Let $\mathcal{G}$ be some class of graphs of clique-width at most $k$ such that for each graph $G \in \mathcal{G}$, a corresponding $k$-expression
can be found in $\mathcal{O}(f(n, m))$ time. Then every \text{$\textbf{MSO}_1$}{} property on $\mathcal{G}$ can be recognized in time $\mathcal{O}(f(n, m)+n)$.
\end{proposition}
The nice property of graphs with bounded clique-width is that their partial complementation is also bounded. In particular, Kami{\'n}ski, Lozin, and Milani{\v c} in \cite{KAMINSKI20092747} observed that if $G$ is a graph of clique-width $k$, then any partial complementation of $G$ is of clique-width at most $g(k)$ for some computable function $g$. For completeness, we provide a more accurate upper bound.
\begin{lemma}\label{lem:upper-cw}
Let $G$ be a graph, $S\subseteq V(G)$. Then $\textsc{cwd}(G\oplus S)\leq 3\textsc{cwd}(G)$.
\end{lemma}
\begin{proof}
Let $\textsc{cwd}(G)=k$.
To show the bound, it is more convenient to use expression trees instead of $k$-expressions.
An {\em expression tree} of a graph $G$ is a rooted tree $T$ with nodes of four types $i$, $\dot{\cup}$, $\eta$ and $\rho$:
\begin{itemize}
\item \emph{Introduce nodes $i(v)$} are leaves of $T$, each corresponding to an initial single-vertex graph whose vertex $v$ is labeled by $i$.
\item\emph{Union node $\dot{\cup}$} stands for a disjoint union of graphs associated with its children.
\item \emph{Relabel node $\rho_{i\to j}$} has one child and is associated with the $k$-graph obtained by applying the relabeling operation to the graph corresponding to its child.
\item \emph{Join node $\eta_{i,j}$} has one child and is associated with the $k$-graph obtained by applying the join operation to the graph corresponding to its child.
\end{itemize}
The graph $G$ is isomorphic to the graph associated with the root of $T$ (with all labels removed).
The {\em width} of the tree $T$ is the number of different labels appearing in $T$. If $G$ is of clique-width $k$, then by parsing the corresponding $k$-expression, one can construct an expression tree of width $k$ and, vise versa, given an expression tree of width $k$, it is straightforward to construct a $k$-expression.
Throughout the proof we call the elements of $V(T)$ \emph{nodes} to distinguish them from the vertices of $G$.
Given a node $x$ of an expression tree, $T_x$ denotes the subtree of $T$ rooted in $x$ and the graph $G_x$ represents the $k$-graph formed by $T_x$.
An expression tree $T$ is \emph{irredundant} if for any join node $\eta_{i,j}$, the vertices labeled by $i$ and $j$ are not adjacent in the graph associated with its child.
It was shown by Courcelle and Olariu~\cite{CourcelleO00} that every expression tree $T$ of $G$ can be transformed into an irredundant expression tree $T'$ of the same width in time linear in the size of $T$.
Let $T$ be an irredundant expression tree of $G$ with the width $k$ rooted in $r$. We construct the expression tree $T'$ for $G'=G\oplus S$ by modifying $T$.
Recall that the vertices of the graphs $G_x$ for $x\in V(T)$ are labeled $1,\ldots,k$. We introduce three groups of distinct labels $\alpha_1,\ldots,\alpha_k$, $\beta_1,\ldots,\beta_k$ and $\gamma_1,\ldots,\gamma_k$. The labels $\alpha_1,\ldots,\alpha_k$ and $\beta_1,\ldots,\beta_k$ correspond to the labels $1,\ldots,k$ for the vertices in $S$ and $V(G)\setminus S$ respectively. The labels $\gamma_1,\ldots,\gamma_k$ are auxiliary.
Then for every node $x$ of $T$ we construct $T_x'$ using $T_x$ starting the process from the leaves. We denote by $G_x'$ the $k$-graph corresponding to the root $x$ of $T_x'$.
For every introduce node $i(v)$, we construct an introduce node $\alpha_i(v)$ if $v\in S$ and an introduce node $\beta_i(v)$ if $v\notin S$. Let $x$ be a non-leaf node of $T$ and assume that we already constructed the modified expression trees of the children of $x$.
Let $x$ be a union node $\dot{\cup}$ of $T$ and let $y$ and $z$ be its children.
We construct $k$ relabel nodes $\rho_{\alpha_i\to\gamma_i}$ for $i\in \{1,\ldots,k\}$ that form a path, make one end-node of the path adjacent to the root $y$ of $T_y'$, and make the other end-node, denoted by $y'$, the root of the tree $T_{y'}'$ constructed from $T_y'$. Notice that in the corresponding graph $G_{y'}'$ all the vertices of $S$ are now labeled by $\gamma_1,\ldots,\gamma_k$ instead of $\alpha_1,\ldots,\alpha_k$.
Next, we construct a union node $\dot{\cup}$ denoted by $x^{(1)}$ with the children $y'$ and $z$. This way we construct the disjoint union of $G_{y'}'$ and $G_z'$.
Notice that the vertices that are labeled by the same label in $G_y$ and $G_z$ are not adjacent in $G$. Respectively, we should make the vertices of $V(G_y)\cap S$ and $V(G_z)\cap S$ with the same label adjacent in $G'$. We achieve it by adding $k$ join nodes $\eta_{\alpha_i,\gamma_i}$ for $i\in \{1,\ldots,k\}$, forming a path out of them and making one end-node of the path adjacent to $x^{(1)}$. We declare the other end-node of the path, denoted by $x^{(2)}$, the new root.
Observe now that for the set of vertices $Y_i$ of $G_y$ labeled $i$ and the set of vertices $Z_j$ of $G_z$ labeled by $j$ where $i,j\in \{1,\ldots,k\}$ are distinct, it holds that the vertices of $Y_i$ and $Z_j$ are either pairwise adjacent in $G$ or pairwise nonadjacent. Respectively, at this stage of the construction we ensure that if the vertices of $Y_i$ are not adjacent to the vertices of $Z_j$, then the vertices of $Y_i\cap S$ and $Z_j\cap S$ are made adjacent in $G'$. To do it, for every two distinct $i,j\in\{1,\ldots,k\}$ such that the vertices of $Y_i$ and $Z_j$ are not adjacent in $G$, we construct a new join node $\eta_{\gamma_i,\alpha_j}$ and form a path with all these nodes whose one end-node is adjacent to $x^{(2)}$ and whose other end-node $x^{(3)}$ is the new root (we assume that $x^{(3)}=x^{(2)}$ if no new nodes were constructed).
Finally, we add $k$ relabel nodes $\rho_{\gamma_i\to\alpha_i}$ for $i\in \{1,\ldots,k\}$ that form a path, make one end-node of the path adjacent to $x^{(3)}$ and make the other end-node denoted by $x$ the root of the obtained $T_{x}'$. Clearly, all the vertices of $S$ in $G_x'$ are labeled by $\alpha_1,\ldots,\alpha_k$.
Let $x$ be a relabel node $\rho_{i\to j}$ of $T$ and let $y$ be its child. We construct two relabel nodes $\rho_{\alpha_i\to \alpha_j}$ and $\rho_{\beta_i\to \beta_j}$ denoted by $x$ and $x'$ respectively. We make $x'$ the child of $x$ and we make the root $y$ of $T_y'$ the child of $x'$.
Now, let $x$ be a join node $\eta_{i,j}$ of $T$ and let $y$ be its child. Recall that $T$ is irredundant, that is, the vertices labeled by $i$ and $j$ in $G_y$ are not adjacent. Clearly, we should avoid making the vertices of $S$ adjacent to each other in the construction of $G'$. We do it by constructing three new join nodes $\eta_{\alpha_i, \beta_j}$, $\eta_{\alpha_j, \beta_i}$ and $\eta_{\beta_i, \beta_j}$, denoted by $x,x',x''$ respectively. We make $x'$ the child of $x$, $x''$ the child of $x'$, and the root $y$ of $T_y'$ is made the child of $x''$.
This completes the description of the construction of $T'$. Using standard inductive arguments, it is straightforward to verify that $G'$ is isomorphic to the graph associated with the root of $T'$, that is, $\textsc{cwd}(G')\leq 3k$.
\end{proof}
\begin{lemma} \label{rw:lem:msoone}
Let $\varphi$ be an \text{$\textbf{MSO}_1$}{} property describing the graph class $\mathcal{G}$. Then there exists an \text{$\textbf{MSO}_1$}{} property $\phi$ describing the graph class $\mathcal{G}^{(1)}$ of size $|\phi| \in \mathcal{O}(|\varphi|)$.
\end{lemma}
\begin{proof}
We will construct $\phi$ from $\varphi$ in the following way: We start by prepending $\exists S \subseteq V(G)$. Then for each assessment of the existence of an edge in $\varphi$, say $uv \in E(G)$, replace that term with $((u \notin S \lor v \notin S) \land uv \in E(G)) \lor (u \in S \land v \in S \land uv \notin E(G))$. Symmetrically, for each assessment of the non-existence of an edge $uv \notin E(G)$, replace that term with $((u \notin S \lor v \notin S) \land uv \notin E(G)) \lor (u \in S \land v \in S \land uv \in E(G))$.
We observe that if $\varphi$ is satisfied by $G\oplus S$ for some $S \subseteq V(G)$, then this set $S$ witnesses that $G$ satisfies $\phi$, since the substituted predicates evaluate adjacency in $G \oplus S$. Conversely, if $\phi$ is satisfied by a graph $G$, then there exists some $S$ such that $\varphi$ is satisfied by $G \oplus S$. Hence $\phi$ describes exactly the class $\mathcal{G}^{(1)}$. For the size, we note that each adjacency check blows up only by a constant factor.
\end{proof}
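To illustrate the construction on a concrete (small, hypothetical) example of ours: if $\varphi$ expresses triangle-freeness,
\[
\varphi = \neg \exists u \exists v \exists w \, \big(uv \in E(G) \wedge vw \in E(G) \wedge uw \in E(G)\big),
\]
then $\phi$ prepends $\exists S \subseteq V(G)$ and replaces each atom, e.g.\ $uv \in E(G)$ becomes
\[
\big((u \notin S \lor v \notin S) \land uv \in E(G)\big) \lor \big(u \in S \land v \in S \land uv \notin E(G)\big),
\]
so that $\phi$ holds on $G$ exactly when some partial complement of $G$ is triangle-free.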
We are ready to prove the main result of this section.
\begin{theorem}\label{thm:clique-width}
Let $\mathcal{G}$ be a graph class expressible in \text{$\textbf{MSO}_1$}{} which has bounded clique-width.
Then \textsc{Partial Complement to $\mathcal{G}$}\xspace{} is solvable in polynomial time.
\end{theorem}
\begin{proof}
Let $\varphi$ be the \text{$\textbf{MSO}_1$}{} formula which describes $\mathcal{G}$, let $k$ be an upper bound on the clique-width of the graphs in $\mathcal{G}$, and let $G$ be an $n$-vertex input graph.
We apply
Proposition~\ref{prop:cliquewappro} to $G$ and in time $O(n^3)$ either obtain a $(2^{3k+1}-1)$-expression for $G$ or conclude that the clique-width of $G$ is more than $3k$.
In the latter case,
by Lemma~\ref{lem:upper-cw}, $G$ cannot be partially complemented to
$\mathcal{G}$.
We then obtain an \text{$\textbf{MSO}_1$}{} formula $\phi$ from Lemma~\ref{rw:lem:msoone}, and apply Proposition~\ref{rw:prop:courcelle}, which works in time $f(k, \phi) \cdot n$ for some function $f$. In total, the runtime of the algorithm is $f(k, \phi) \cdot n + n^3$.
\end{proof}
We remark that if clique-width expression is provided along with the input graphs, and $\mathcal{G}$ can be expressed in \text{$\textbf{MSO}_1$}{}, then there is a linear time algorithm for \textsc{Partial Complement to $\mathcal{G}$}\xspace. This follows directly from Lemma~\ref{rw:lem:msoone} and Proposition~\ref{rw:prop:courcelle}.
Theorem~\ref{thm:clique-width} implies that for every class of graphs $\mathcal{G}$ of bounded clique-width characterized by a finite set of finite forbidden induced subgraphs, e.\,g.\@\xspace $P_4$-free graphs (also known as cographs) or classes of graphs discussed in \cite{BlancheD0LPZ17}, the \textsc{Partial Complement to $\mathcal{G}$}\xspace problem is solvable in polynomial time. However, Theorem~\ref{thm:clique-width} does not imply that \textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable in polynomial time for $\mathcal{G}$ being the class of graphs having clique-width at most $k$. This is because such a class $\mathcal{G}$ cannot be described by \text{$\textbf{MSO}_1$}{}. Interestingly, for the related class $\mathcal{G}$ of graphs of \emph{rank-width} (see~\cite{CourcelleO00} for the definition) at most $k$, the result of Oum and Courcelle \cite{CourcelleO07} combined with Theorem~\ref{thm:clique-width}
implies that \textsc{Partial Complement to $\mathcal{G}$}\xspace is solvable in polynomial time.
\section{Hardness of partial complementation to r-regular graphs}\label{sec:regulargraphs}
\noindent
Let us recall that a graph $G$ is $r$-regular if all its vertices are of degree $r$.
We consider the following restricted version of \textsc{Partial Complement to $\mathcal{G}$}\xspace{}.
\defsimpleproblem{\textsc{Partial Complement to $r$-Regular}\xspace{} (\textsc{PC}$r$\textsc{R}{})}{A simple undirected graph $G$, a positive integer $r$.}{Does there exist a vertex set $S \subseteq V(G)$ such that $G \oplus S$ is $r$-regular?}
\noindent
In this section, we show that \textsc{Partial Complement to $r$-Regular}\xspace{} is NP-complete by a reduction from \textsc{Clique in}\text{ $r$}\textsc{-regular Graph}{}.
\defsimpleproblem{\textsc{Clique in}\text{ $r$}\textsc{-regular Graph}{} (\textsc{K\text{$r$}R}{})}{A simple undirected graph $G$ which is $r$-regular, a positive integer $k$.}{Does $G$ contain a clique on $k$ vertices?}
\noindent We will need the following well-known proposition.
\begin{proposition}[{\cite{GareyJ79}}] \label{rr:prop:krr-hard}
\textsc{Clique in}\text{ $r$}\textsc{-regular Graph}{} is NP-complete.
\end{proposition}
\begin{theorem}\label{thm_NP_regular}
\textsc{Partial Complement to $r$-Regular}\xspace
is NP-complete.
\end{theorem}
\begin{proof}
\noindent
We begin by defining a gadget which we will use in the reduction.
\begin{figure}[h]
\centering
\includegraphics[scale=.4]{fig/gdg.pdf}
\caption{The graph $\textsc{gdg}_{k, r}$ is built of $k$ parts, namely a clique $K_{k-1}$, and $k-1$ complete bipartite graphs $K^1_{r,r}, \ldots, K^{k-1}_{r,r}$ with some rewiring.}\label{rr:fig:gdg}
\end{figure}
For integers $r>k$ such that $r-k$ is even, we build the graph $\textsc{gdg}_{k, r}$ as follows. Initially, we let $\textsc{gdg}_{k, r}$ consist of one clique on $k-1$ vertices, as well as $k-1$ distinct copies of $K_{r,r}$. These are all the vertices of the gadget, which is a total of $(k-1) + 2r \cdot (k-1)$ vertices. We denote the vertices of the clique $c_1, c_2, \ldots ,c_{k-1}$, and we let the complete bipartite graphs be denoted by $K^1_{r,r}, K^2_{r,r}, \ldots, K^{k-1}_{r,r}$. For a bipartite graph $K^i_{r,r}$, let the vertices of the two parts be denoted by $a^i_1, a^i_2, \ldots, a^i_r$ and $b^i_1, b^i_2, \ldots , b^i_r$ respectively.
We will now do some rewiring of the edges to complete the construction of $\textsc{gdg}_{k, r}$. Recall that $r-k$ is even and positive. For each vertex $c_i$ of the clique, add one edge from $c_i$ to each of $a^i_1, a^i_2, \ldots ,a^i_{\frac{r-k}{2}}$. Similarly, add an edge from $c_i$ to each of $b^i_1, b^i_2, \ldots ,b^i_{\frac{r-k}{2}}$. Now remove the edges $a^i_1b^i_1, a^i_2b^i_2, \ldots ,a^i_{\frac{r-k}{2}}b^i_{\frac{r-k}{2}}$. Once this is done for every $i \in [k-1]$, the construction is complete. See Figure~\ref{rr:fig:gdg}.
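As a concrete companion to this description, the following Python sketch of ours constructs the edge set of $\textsc{gdg}_{k,r}$ (the vertex names and the routine itself are our own choices); on small instances it reproduces the degrees recorded in the observation below.
\begin{verbatim}
from itertools import combinations

def gadget(k, r):
    # Edge set of gdg_{k,r} for r > k with r - k even: a clique on
    # c_1..c_{k-1} and, for each i, a copy of K_{r,r} on a^i_1..a^i_r,
    # b^i_1..b^i_r, rewired as described in the text.
    assert r > k and (r - k) % 2 == 0
    E = set()
    C = [('c', i) for i in range(1, k)]
    E |= {frozenset(p) for p in combinations(C, 2)}            # clique K_{k-1}
    for i in range(1, k):
        A = [('a', i, j) for j in range(1, r + 1)]
        B = [('b', i, j) for j in range(1, r + 1)]
        E |= {frozenset((a, b)) for a in A for b in B}          # K_{r,r}
        for j in range(1, (r - k) // 2 + 1):
            E.add(frozenset((('c', i), ('a', i, j))))            # add c_i a^i_j
            E.add(frozenset((('c', i), ('b', i, j))))            # add c_i b^i_j
            E.discard(frozenset((('a', i, j), ('b', i, j))))     # remove a^i_j b^i_j
    return E
\end{verbatim}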
We observe the following property of vertices $a^i_j$, $b^i_j$, and $c_i$ of $\textsc{gdg}_{k, r}$.
\begin{observation} For every $i \in [k-1]$ and $j\in [r]$, it holds that the degrees of $a^i_j$ and $b^i_j$ in $\textsc{gdg}_{k, r}$ are both exactly $r$, whereas the degree of $c_i$ is $r-2$ (namely $k-2$ neighbors inside $K_{k-1}$ and $r-k$ neighbors in $K^i_{r,r}$).
\end{observation}
\noindent
We are now ready to prove that \textsc{Clique in}\text{ $r$}\textsc{-regular Graph}{} is many-one reducible to
\textsc{Partial Complement to $r$-Regular}\xspace{}.
\begin{algo}[Reduction \textsc{K\text{$r$}R}{} to \textsc{PC}$r$\textsc{R}{}] \label{rr:alg:reduction}
$ $ \newline
Input: An instance $(G, k)$ of \textsc{K\text{$r$}R}{}. \newline
Output: An instance $(G', r)$ of \textsc{PC}$r$\textsc{R}{} such that it is a yes-instance if and only if $(G, k)$ is a yes-instance of \textsc{K\text{$r$}R}{}.
\begin{enumerate}
\item If $k < 7$ or $k \geq r$, solve the instance of \textsc{K\text{$r$}R}{} by brute force. If it is a yes-instance, return a trivial yes-instance to \textsc{PC}$r$\textsc{R}{}, if it is a no-instance, return a trivial no-instance to \textsc{PC}$r$\textsc{R}{}.
\item If $r-k$ is odd, modify $G$ by taking two copies of $G$ which are joined by a perfect matching between corresponding vertices. Then $r$ increases by one, whereas $k$ remains the same.
\item Construct the graph $G'$ by taking the disjoint union of $G$ and the gadget $\textsc{gdg}_{k, r}$. Here, $r$ denotes the regularity of $G$ after step 2 is performed. Return $(G', r)$.
\end{enumerate}
\end{algo}
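For illustration, Steps 2 and 3 can be sketched in Python as follows (our sketch; it reuses the \texttt{gadget} helper from the previous sketch, and Step 1, which is handled by brute force, is omitted).
\begin{verbatim}
def reduce_krr_to_pcrr(vertices, edges, k, r):
    # Steps 2 and 3 of the reduction; the returned triple encodes the
    # instance (G', r) together with its vertex set.
    edges = {frozenset(e) for e in edges}
    if (r - k) % 2 == 1:
        # Step 2: two copies of G joined by a perfect matching; r grows by one.
        doubled = set()
        for e in edges:
            u, v = tuple(e)
            doubled.add(frozenset(((1, u), (1, v))))
            doubled.add(frozenset(((2, u), (2, v))))
        doubled |= {frozenset(((1, v), (2, v))) for v in vertices}
        vertices = [(1, v) for v in vertices] + [(2, v) for v in vertices]
        edges, r = doubled, r + 1
    # Step 3: disjoint union with the gadget gdg_{k,r}.
    gadget_edges = gadget(k, r)
    gadget_vertices = {v for e in gadget_edges for v in e}
    return list(vertices) + list(gadget_vertices), edges | gadget_edges, r
\end{verbatim}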
\noindent
Let $n=|V(G)|$.
We observe that the number of vertices in the returned instance is at most $2n + (k-1) + 2r \cdot (k-1)$, which is $\mathcal{O}(n^2)$. The running time of the algorithm is $\mathcal{O}(n^7)$ and thus is polynomial.
The correctness of the reduction follows from the two lemmata below.
\begin{lemma}\label{lem_NPhardness1}
Let $(G, k)$ be the input of Algorithm~\ref{rr:alg:reduction}, and let $(G', r)$ be the returned result. If $(G, k)$ is a yes-instance to \textsc{Clique in}\text{ $r$}\textsc{-regular Graph}{}, then $(G', r)$ is a yes-instance of \textsc{Partial Complement to $r$-Regular}\xspace{}.
\end{lemma}
\begin{proof}
Let $C \subseteq V(G)$ be a clique of size $k$ in $G$. If the clique is found in step 1, then $(G', r)$ is a trivial yes-instance, so the claim holds. Thus, we can assume that the graph $G'$ was constructed in step 3. If $G$ was altered in step 2, we let $C$ be the clique in one of the two copies that was created. Let $S \subseteq V(G')$ consist of the vertices of $C$ as well as the vertices of the clique $K_{k-1}$ of the gadget $\textsc{gdg}_{k, r}$. We claim that $S$ is a valid solution to $(G', r)$.
We show that $G' \oplus S$ is $r$-regular. Any vertex not in $S$ will have the same number of neighbors as it had in $G'$. Since the only vertices that weren't originally of degree $r$ were those in the clique $K_{k-1}$, all vertices outside $S$ also have degree $r$ in $G' \oplus S$. What remains is to examine the degrees of vertices of $C$ and of $K_{k-1}$.
Let $c_i$ be a vertex of $K_{k-1}$ in $G'$. Then $c_i$ lost its $k-2$ neighbors from $K_{k-1}$, gained $k$ neighbors from $C$, and kept $r-k$ neighbors in $K^i_{r,r}$. We see that its new neighborhood has size $k + r - k = r$.
Let $u \in C$ be a vertex of the clique from $G$. Then $u$ lost $k-1$ neighbors from $C$, gained $k-1$ neighbors from $K_{k-1}$, and kept $r-(k-1)$ neighbors from $G - C$. In total, $u$ will have $r-(k-1)+(k-1) = r$ neighbors in $G' \oplus S$. Since every vertex of $G' \oplus S$ has degree $r$, it is $r$-regular, and thus $(G', r)$ is a yes-instance.
\end{proof}
\begin{lemma}\label{lem_NPhardness2}
Let $(G, k)$ be the input of Algorithm~\ref{rr:alg:reduction}, and let $(G', r)$ be the returned result. If $(G', r)$ is a yes-instance to \textsc{Partial Complement to $r$-Regular}\xspace{}, then $(G, k)$ is a yes-instance of \textsc{Clique in}\text{ $r$}\textsc{-regular Graph}{}.
\end{lemma}
\begin{proof}
Let $S \subseteq V(G')$ be a solution witnessing that $(G', r)$ is a yes-instance. If $(G', r)$ was the trivial yes-instance returned in step 1 of Algorithm~\ref{rr:alg:reduction}, the statement trivially holds. Going forward we may thus assume $(G', r)$ was returned in step 3, and that $k \geq 7$.
Because $G' \oplus S$ is $r$-regular, it must be the case that every vertex of $K_{k-1}$ is in $S$, since by construction these are the vertices which do not have degree $r$ in $G'$.
We claim that $|S| = 2k - 1$, and moreover, that no neighbor of $K_{k-1}$ is in $S$. To show this, we let $p = |S \setminus K_{k-1}|$, and proceed to show that $p = k$. Towards this end, consider a vertex $c_i \in K_{k-1}$. This vertex has some number of neighbors in $S\setminus K_{k-1}$, denoted $x_i = |N_{G'}(c_i) \cap (S \setminus K_{k-1})|$. We know that $c_i$ has $r$ neighbors in $G' \oplus S$. Let us count them: Some neighbors are preserved by the partial complementation, namely $r-k-x_i$ of its neighbors found in $K^i_{r,r}$. Some neighbors are gained, namely $p-x_i$ of the vertices in $S$. Thus, we have that $r = r-k-x_i+p-x_i$. The $r$'s cancel, and we get $0 = p - k - 2x_i$. This is true for every $i \in [k-1]$, so we simply denote the number by $x = x_i$, and get $p = k + 2x$.
Towards the claim, it remains to show that $x = 0$. Because the neighborhoods of distinct $c_i$ and $c_j$ are disjoint outside $K_{k-1}$, we get that $p \geq (k-1)\cdot x$. We substitute $p$, and get
$$k+2x \geq (k-1) \cdot x $$
$$k \geq (k-3)\cdot x$$
$$\frac{k}{k-3} \geq x$$
Recalling that $k \geq 7$, we have that $x$ is either $1$ or $0$. Assume for the sake of contradiction that $x = 1$. Then without loss of generality, each $c_i$ has some neighbor $a^i_j$ which is in $S$. Since $a^i_j$ had degree $r$ in $G'$, it must hold that $a^i_j$ has equally many neighbors as non-neighbors in $S$. At most one of $a^i_j$'s neighbors is outside of $K^i_{r,r}$, this means that at least $\frac{|S|-3}{2}$ vertices of $K^i_{r,r}$ are in $S$. Because $k \geq 7$ and the $K^i_{r,r}$'s are completely disjoint for different values of $i \in [k-1]$, we get that
$$|S| \geq \frac{|S|-3}{2} \cdot (k-1) \geq \frac{|S|-3}{2} \cdot 6$$
$$|S| \geq 3\cdot|S| - 9$$
$$9 \geq 2\cdot|S|$$
Seeing that $|S| \geq k-1 \geq 6$, this is a contradiction. Thus, $x$ must be $0$, so $p = k + 2x = k$ and the claim holds.
We now show that $S \setminus K_{k-1}$ is a clique in $G'$. Assume for the sake of contradiction it is not, and let $u,v \in S \setminus K_{k-1}$ be vertices such that $uv \notin E(G')$. Consider the vertex $u$. By the above claim we know that $u$ does not have a neighbor in $K_{k-1}$. It will thus gain at least $k$ edges going to $K_{k-1} \cup \{v\}$, and lose at most $k-2$ edges going to $S \setminus (K_{k-1} \cup \{u, v\})$. Because $u$ was of degree $r$ in $G'$ yet gained more edges than it lost by the partial complementation, its degree is strictly greater than $r$ in $G \oplus S$. This is a contradiction, hence $S \setminus K_{k-1}$ is a clique in $G'$.
Because $k \geq 3$, the clique $S \setminus K_{k-1}$ can not be contained in the gadget $\textsc{gdg}_{k, r}$ nor span across both copies of $G$ created in step 2 of the reduction (if that step was applied). It must therefore be contained in the original $G$. Thus, $G$ has a clique of size $k$, and $(G, k)$ is a yes-instance of \textsc{Clique in}\text{ $r$}\textsc{-regular Graph}{}.
\end{proof}
Lemmata~\ref{lem_NPhardness1} and ~\ref{lem_NPhardness2} together with Proposition~\ref{rr:prop:krr-hard} conclude the proof of NP-hardness. Membership in NP is trivial, so NP-completeness holds.
\end{proof}
\noindent We remark that if $r$ is a constant not given with the input, the problem becomes polynomial time solvable by Theorem~\ref{thm:ddeg}.
\section{Conclusion and open problems}\label{secconcl_openpr}
In this paper we initiated the study of \textsc{Partial Complement to $\mathcal{G}$}\xspace. Many interesting questions remain open. In particular, what is the complexity of the problem when $\mathcal{G}$ is
\begin{itemize}
\item the class of chordal graphs,
\item the class of interval graphs,
\item the class of graphs excluding the path $P_5$ as an induced subgraph,
\item the class of graphs with maximum degree at most $r$, or
\item the class of graphs with minimum degree at least $r$?
\end{itemize}
More broadly, it is also interesting to see what happens as we allow more than one partial complementation; how quickly can we recognize the class $\mathcal{G}^{(k)}$ for some class $\mathcal{G}$? It will also be interesting to investigate what happens if we combine partial complementation with other graph modifications, such as the Seidel switch.
\medskip\noindent\textbf{Acknowledgment} We thank Saket Saurabh for helpful discussions, and we also thank the anonymous reviewers who provided valuable feedback.
| {
"timestamp": "2018-05-01T02:09:14",
"yymm": "1804",
"arxiv_id": "1804.10920",
"language": "en",
"url": "https://arxiv.org/abs/1804.10920",
"abstract": "A partial complement of the graph $G$ is a graph obtained from $G$ by complementing all the edges in one of its induced subgraphs. We study the following algorithmic question: for a given graph $G$ and graph class $\\mathcal{G}$, is there a partial complement of $G$ which is in $\\mathcal{G}$? We show that this problem can be solved in polynomial time for various choices of the graphs class $\\mathcal{G}$, such as bipartite, degenerate, or cographs. We complement these results by proving that the problem is NP-complete when $\\mathcal{G}$ is the class of $r$-regular graphs.",
"subjects": "Computational Complexity (cs.CC); Discrete Mathematics (cs.DM)",
"title": "Partial complementation of graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969665221771,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7099044342093307
} |
https://arxiv.org/abs/1601.02085 | Approximating Stochastic Evolution Equations with Additive White and Rough Noises | In this paper, we analyze Galerkin approximations for stochastic evolution equations driven by an additive Gaussian noise which is temporally white and spatially fractional with Hurst index less than or equal to $1/2$. First we regularize the noise by the Wong-Zakai approximation and obtain its optimal order of convergence. Then we apply the Galerkin method to discretize the stochastic evolution equations with regularized noises. Optimal error estimates are obtained for the Galerkin approximations. In particular, our error estimates remove an infinitesimal factor which appears in the error estimates of various numerical methods for stochastic evolution equations in existing literatures. | \section{Introduction}
\label{sec1}
In this paper we consider the Galerkin approximation of the stochastic evolution equation (SEE)
\begin{align}\label{see}
L u(t,x)=b(u(t,x))+\xi(t,x),\quad (t,x)\in I\times \mathcal O,
\end{align}
with either homogeneous Dirichlet boundary condition
\begin{align}\label{dbc}
u(t,0)=u(t,1)=0,\quad t\in I
\end{align}
or Neumann boundary condition
\begin{align}\label{nbc}
\partial_x u(t,0)=\partial_x u(t,1)=0,\quad t\in I,
\end{align}
where $I=[0,T]$ and $\mathcal O=(0,1)$. Here $L$ is a second order partial differential operator, the shift coefficient $b$ is a real-valued Lipschitz continuous function, and $\xi$ is a white-fractional noise, i.e., $\xi=\frac{\partial^2W}{\partial t \partial x}$ where $W=\{W(t,x),\ (t,x)\in I\times \mathcal O\}$ is a fractional Gaussian sheet on a stochastic basis $(\Omega,\mathscr F,(\mathscr F_t)_{t\in I},\mathbb P)$ such that
\begin{align}\label{wfn}
\mathbb E\bigg[W(s,x)W(t,y)\bigg]
=(s\wedge t)\frac{x^{2H}+y^{2H}-|x-y|^{2H}}{2}
\end{align}
for all $(s,x),(t,y)\in I\times \mathcal O$.
Here the parameter $H$ is called the Hurst index. When $H=1/2$,
the white-fractional noise $\xi$ becomes the standard space-time white noise.
There have been many studies on numerical approximations of SEEs with space-time white noise or smoother noises (cf. \cite{CP12, FLP14, Hau03, JR15, KLM11, Kru14, WGT14, ZTRK14} and references therein). In this paper we focus on the case when $H<1/2$, which makes the noise ``rougher'' than the white noise in the spatial dimension.
In some practical applications such as flows in porous media, such rough noises are more suitable to model physical properties (cf. \cite{CCCSV13} and references therein).
Though the methodology developed in this paper is applicable to a variety of partial differential operators, in this study, we focus on the parabolic partial differential operator $L=L^I=\partial_t-\partial_{xx}$, which makes Eq. \eqref{see} a stochastic heat equation (SHE), and the hyperbolic partial differential operator $L=L^{II}=\partial_{tt}-\partial_{xx}$, which makes Eq. \eqref{see} a stochastic wave equation (SWE). For the SHE we impose the initial condition $u(0,x)=u_0(x)$; for the SWE we impose the initial conditions $u(0,x)=u_0(x)$ and $\partial_t u(0,x)=v_0(x)$.
A key step of our Galerkin approximation for Eq. \eqref{see} is to apply the approximation
\begin{align}\label{eq:wz}
\tilde \xi(t,x)=\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}
\bigg[\frac{1}{kh}\int_{I_i}\int_{\mathcal O_j}\xi(ds,dy)\bigg]
\chi_{i,j}(t,x),\ (t,x)\in I\times \mathcal O,
\end{align}
to regularize the white-fractional noise $\xi(t,x)$.
Here $\{I_i\}_{i=0}^{m-1}$ and $\{\mathcal O_j\}_{j=0}^{n-1}$ are uniform partitions of $I$ and $\mathcal O$, respectively, with grid sizes $k=T/m$ and $h=1/n$,
and $\chi_{i,j}$ denotes the indicator function of the cell $I_i\times \mathcal O_j$.
The spatial partition will also serve as the finite element mesh in the Galerkin approximation of the SHE. We note that $\tilde \xi$ is a piecewise constant process in each time-space grid $I_i\times \mathcal O_j$. In this sense \eqref{eq:wz} is a Wong-Zakai type approximation for the stochastic process $\xi(t,x)$ (see \cite{WZ65} for the origin of the Wong-Zakai approximation). For simplicity, we call \eqref{eq:wz} the Wong-Zakai approximation of $\xi$. The Wong-Zakai approximation is a commonly used method in the numerical study of stochastic differential equations (cf. \cite{MT04} and references therein). It has also been widely used in theoretical analysis as well as numerical approximations of SPDEs. For instance, based on the Wong-Zakai approximation for the space-time white noise, the authors in \cite{BMS95} obtained a support theorem for the law of the solution of SHEs. For numerical solutions using the Wong-Zakai approximation, we refer to \cite{CYY07, CHL15b, CHL15a} for the finite element method for stochastic elliptic equations and \cite{ANZ98, CY07, DZ02} for SEEs.
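For orientation (an elementary rewriting, with the integral of an indicator understood as the corresponding rectangular increment of $W$, as made precise in Section \ref{sec2}), the bracketed cell average in \eqref{eq:wz} can be expressed through increments of the sheet: writing $I_i=(t_i,t_{i+1}]$ and $\mathcal O_j=(x_j,x_{j+1}]$ with $t_i=ik$ and $x_j=jh$,
\begin{align*}
\int_{I_i}\int_{\mathcal O_j}\xi(ds,dy)
= W(t_{i+1},x_{j+1})-W(t_i,x_{j+1})-W(t_{i+1},x_j)+W(t_i,x_j),
\end{align*}
so $\tilde \xi$ is obtained by averaging these rectangular increments over each time-space cell.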
To obtain error estimates for the Galerkin approximation of Eq. \eqref{see}, we first study the well-posedness and Sobolev regularity of its mild solution (defined in Section \ref{sec2}). Specifically, we prove that if the initial datum possesses finite $p$-th moment for $p\geq 2$, then Eq. \eqref{see} has a unique mild solution with uniformly bounded $p$-th moment. Moreover, this mild solution is in $\dot{\mathbb H}^\beta$ (whose definition is given in Section \ref{sec2}) provided $u_0$ is in $\dot{\mathbb H}^\beta$ and/or $v_0$ is in $\dot{\mathbb H}^{\beta-1}$ for any $\beta\in [0,H)$ (see Theorem \ref{wel}).
The main results of this study are error estimates both for the Wong-Zakai approximation of the exact solution of the SEE and for the numerical solutions obtained by the Galerkin finite element method for the SHE and by the spectral Galerkin method for the SWE.
Let $u$ be the mild solution of Eq. \eqref{see} and $ \tilde u$ be the mild solution of the SEE with the noise term replaced by the Wong-Zakai approximation.
Then (see Theorem \ref{uu}) for SHE,
\begin{align}\label{eq:error-W-Z-SHE}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \bigg)^\frac1p
\le C \left(h^H+k^\frac14 h^{H-\frac12}\right),
\end{align}
and for SWE,
\begin{align}\label{eq:error-W-Z-SWE}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \bigg)^\frac1p
\le C \left(h^H+k^\frac12 h^{H-\frac12}\right).
\end{align}
Here and in the rest of the paper, $C$ denotes a generic constant whose value may be different at different appearances.
A similar result for the SHE driven by the space-time white noise ($H=1/2$) was obtained in \cite{ANZ98}. There
the error estimate is $O(k^\frac14+h k^{-\frac14})$, which coincides with ours only after an additional condition on $k$ and $h$ is enforced. For the Galerkin finite element approximation $ \tilde u _h$ of the SHE we have the following error estimate (see Theorem \ref{fem-ord})
\begin{align}\label{eq:error-SHE}
\sup_{t\in I}\bigg(\mathbb E\bigg[\| \tilde u (t)- \tilde u _h(t)\|_{\mathbb L^2}^p\bigg]\bigg)^\frac1p
\le C (h^H+ h^{\frac{3}{2}-\epsilon}k^{-\frac{1}{2}}),
\end{align}
and for the spectral Galerkin approximation $ \tilde u _N$ for the SWE we have (see Theorem \ref{spe-ord0})
\begin{align}\label{eq:error-SWE}
\sup_{t\in I}\bigg(\mathbb E\bigg[\| \tilde u (t)- \tilde u _N(t)\|_{\mathbb L^2}^p\bigg]\bigg)^\frac1p
\le C N^{-1} h^{H-1}
\end{align}
where $N$ is the number of terms in the spectral approximation. We note that our error estimates remove the infinitesimal loss of order $\epsilon$ which appears in many error estimates for numerical solutions of SPDEs (see \cite{ANZ98, CY07, DZ02, Yan05}).
The paper is organized as follows.
In Section \ref{sec2}, we provide some preliminaries about stochastic integrals with respect to white-fractional noise with $H\le 1/2$, followed by establishing the existence and uniqueness as well as the Sobolev regularity of the mild solution. In Section \ref{sec3}, we study the regularization of the noise with Wong-Zakai approximation and derive the error estimates \eqref{eq:error-W-Z-SHE} and \eqref{eq:error-W-Z-SWE}. Finally in Section \ref{sec4}, we apply the Galerkin approximation to spatially discretize the regularized SEEs and derive the error estimates \eqref{eq:error-SHE} and \eqref{eq:error-SWE}.
\section{Well-posedness and regularity of the SEEs}
\label{sec2}
In this section, we first introduce the stochastic integral with respect to the white-fractional noise $\xi$ with deterministic functions as integrands.
Then we use such integrals to define the mild solution and establish the well-posedness and Sobolev regularity of Eq. \eqref{see}.
\subsection{Stochastic integrals with respect to the white-fractional noise}
We follow the approach of \cite{CHL15a} to define the stochastic integrals with respect to the white-fractional noise $\xi$.
To this end, we introduce the set $\mathcal E$ of all step functions on $I\times \overline{\mathcal O}$ of the form
\begin{align*}
f(t,x)=\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} f_{ij}\chi_{(a_i, a_{i+1}]\times (b_j, b_{j+1}]}(t,x),\quad (t,x)\in I\times \overline{\mathcal O},
\end{align*}
where $0=a_0<a_1<\cdots<a_M=T$ and $0=b_0<b_1<\cdots<b_N=1$ are partitions of $I$ and $\mathcal O$, respectively, and $f_{ij}\in\mathbb R$, $i=0,1,\cdots,M-1$, $j=0,1,\cdots,N-1$, $M,N\in \mathbb N_+$.
For $f\in \mathcal E$, we define its integral with respect to $W$ by the Riemann sum
\begin{align*}
&\int_I \int_{\mathcal O} f(s,y) \xi(ds,dy) \\
&:=\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} f_{ij}
\bigg(W(a_{i+1},b_{j+1})-W(a_i,b_{j+1})-W(a_{i+1},b_j)+W(a_i,b_j)\bigg),
\end{align*}
and for $f,g\in \mathcal E$, we define their scalar product as
\begin{align*}
\Psi(f,g):=\mathbb E\bigg[\bigg(\int_I\int_{\mathcal O} f(s,y) \xi(ds,dy)\bigg) \bigg(\int_I\int_{\mathcal O}g(s,y) \xi(ds,dy)\bigg)\bigg].
\end{align*}
Next we extend $\mathcal E$, through completion with respect to the scalar product $\Psi$, to a Hilbert space, denoted by $\mathcal H$, and define the stochastic integral for any function $f\in \mathcal H$ accordingly.
By \cite[Theorem 2.9]{BJQ15},
we have a characterization for $\mathcal H$:
\begin{align*}
\mathcal H
&=\bigg\{f\ \text{is Lebesgue measurable}:\ \int_I\int_{\mathcal O}\int_{\mathcal O}
\frac{\left|f(s,x)-f(s,y)\right|^2}{|x-y|^{2-2H}}dxdyds \nonumber \\
&\qquad\qquad+\int_I\int_{\mathcal O} f^2(s,x) \bigg( x^{2H-1}+(1-x)^{2H-1} \bigg)dxds<\infty \bigg\}
\end{align*}
and the following It\^{o} isometry (cf. \cite{BJQ15, CHL15a}).
\\
\begin{tm}\label{ito}
For all $f, g \in \mathcal H$,
\begin{align}\label{ito0}
&\mathbb E\bigg[\bigg(\int_I\int_{\mathcal O} f(s,y) \xi(ds,dy)\bigg)
\bigg(\int_I\int_{\mathcal O}g(s,y) \xi(ds,dy)\bigg)\bigg] \\
&=\frac{H(1-2H)}{2} \int_I\int_{\mathcal O}\int_{\mathcal O}
\frac{\big(f(s,x)-f(s,y)\big) \big(g(s,x)-g(s,y)\big)}{|x-y|^{2-2H}}dxdyds \nonumber \\
&\quad+H \int_I\int_{\mathcal O} f(s,x)g(s,x)
\bigg( x^{2H-1}+(1-x)^{2H-1} \bigg) dxds.\nonumber
\end{align}
\end{tm}
We remark that in the case of space-time white noise, i.e., $H=1/2$, \eqref{ito0} becomes the It\^o isometry for space-time white noise:
\begin{align*}
&\mathbb E\bigg[\bigg(\int_I\int_{\mathcal O} f(s,y) \xi(ds,dy)\bigg)
\bigg(\int_I\int_{\mathcal O}g(s,y) \xi(ds,dy)\bigg)\bigg] \\
&\qquad \qquad \qquad =\int_I\int_{\mathcal O} f(s,y)g(s,y)dyds.
\end{align*}
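This reduction can be read off directly from the constants in \eqref{ito0} (a one-line check, recorded for convenience):
\begin{align*}
\frac{H(1-2H)}{2}\bigg|_{H=\frac12}=0,
\qquad
H\Big( x^{2H-1}+(1-x)^{2H-1} \Big)\bigg|_{H=\frac12}=1 .
\end{align*}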
We will frequently use the following inequalities about the Lebesgue integrals of the singular kernels associated with the fractional Brownian motion (see \cite{CHL15a}, Lemma 2.2 and Lemma 2.3):
\begin{align}
\sum_{j=0}^{n-1}\int_{\mathcal O_j}\int_{\mathcal O_j}|y-z|^{2H-1}dydz
&\le C h^{2H}, \label{same} \\
\sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l}|y-z|^{2H-2}dydz
&\le C h^{2H-1}. \label{dif}
\end{align}
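For instance, \eqref{same} follows from an exact one-cell computation (a short verification, not taken from the cited lemmas): by translation invariance,
\begin{align*}
\int_{\mathcal O_j}\int_{\mathcal O_j}|y-z|^{2H-1}dydz
=2\int_0^{h}\int_0^{y}(y-z)^{2H-1}dzdy
=\frac{h^{2H+1}}{H(2H+1)},
\end{align*}
and summing over the $n=1/h$ cells gives $\frac{h^{2H}}{H(2H+1)}\le C h^{2H}$.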
\subsection{Well-posedness and Sobolev regularity}
In this subsection, we study the existence and uniqueness of the mild solution of Eq. \eqref{see} and its regularity.
The mild solution is defined via the Green's function of the given partial differential operator $L$, which we define as follows.
Let $\{(\lambda_\alpha, \varphi_\alpha)\}_{\alpha\in \mathbb N_+}$ be the eigensystem of the negative Laplacian $-\Delta$.
Under Dirichlet condition \eqref{dbc}, it is given by
\begin{align*}
\lambda_\alpha:=(\alpha\pi)^2,\quad
\varphi_\alpha(x):=\sqrt{2}\sin(\sqrt{\lambda_\alpha} x),
\quad x\in \overline{\mathcal O}, \quad \alpha\in \mathbb N_+. \
\end{align*}
In this paper, all discussions are concerned with the Dirichlet condition \eqref{dbc}.
However, the main results are also valid for the Neumann condition \eqref{nbc} since the main estimates about $\lambda_\alpha$ in Lemma \ref{lm-re} also hold for $\psi_\alpha(\cdot):=\sqrt{2}\cos(\sqrt{\lambda_\alpha}\cdot)$, $\alpha\in \mathbb N_+$, which are eigenfunctions corresponding to the
eigenvalues $\lambda_\alpha$ of $-\Delta$ with the Neumann boundary condition.
With the above eigensystem, the Green's function for $L$ can be represented as (cf. \cite{Duf15})
\begin{align}\label{gre}
G_t(x,y)
&=\sum_{\alpha=1}^\infty \phi_\alpha(t)\varphi_\alpha(x)\varphi_\alpha(y),\quad t\in I,\ x,y \in \overline{\mathcal O},
\end{align}
where $\phi_\alpha(t)=e^{-\lambda_\alpha t}$ for SHE and $\phi_\alpha(t)=\frac{\sin(\sqrt{\lambda_\alpha} t)}{\sqrt{\lambda_\alpha}}$ for SWE, $t\in I$.
For convenience, we set $\phi_\alpha(t-s)=0$, and thus $G_{t-s}(x,y)=0$, for any $0\le t<s\le T$ and $x,y\in \overline{\mathcal O}$.
Denote by $S$ the stochastic convolution
\begin{align}\label{con}
S (t,x):=\int_0^t\int_{\mathcal O} G_{t-s}(x,y)\xi(ds,dy),\quad (t,x)\in I\times \overline{\mathcal O}.
\end{align}
Then the mild solution $u$ of Eq. \eqref{see} is defined as the solution of the following stochastic integral equation (cf. \cite{Dal09}):
\begin{align}\label{mild}
u(t,x)
=\omega(t,x)+\int_0^t\int_{\mathcal O} G_{t-s}(x,y)b(u(s,y))dsdy+S (t,x)
\end{align}
for all $(t,x)\in I\times \overline{\mathcal O}$,
where $\omega$ is the solution of the deterministic evolution equation $Lu=0$ with the same initial and boundary conditions, i.e., for SHE,
\begin{align*}
\omega(t,x)=\int_{\mathcal O} G_t(x,y) u_0(y) dy,
\quad (t,x)\in I\times \overline{\mathcal O},
\end{align*}
and for SWE,
\begin{align*}
\omega(t,x)=\int_{\mathcal O} G_t(x,y) v_0(y) dy
+\int_{\mathcal O} \frac{\partial G_t(x,y)}{\partial t} u_0(y) dy,
\quad (t,x)\in I\times \overline{\mathcal O}.
\end{align*}
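In terms of the eigensystem, and using $\phi_\alpha'(t)=\cos(\sqrt{\lambda_\alpha}\,t)$ for the SWE, the deterministic part can equivalently be written as (a routine eigenfunction expansion, included only for the reader's convenience)
\begin{align*}
\omega(t,x)=\sum_{\alpha=1}^\infty\bigg[\cos\big(\sqrt{\lambda_\alpha}\,t\big)(u_0,\varphi_\alpha)
+\frac{\sin\big(\sqrt{\lambda_\alpha}\,t\big)}{\sqrt{\lambda_\alpha}}(v_0,\varphi_\alpha)\bigg]\varphi_\alpha(x),
\end{align*}
the standard series solution of the homogeneous wave equation with data $(u_0,v_0)$.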
The following lemma will be frequently used in the derivation of the regularity for the mild solution and in the error estimates of the Wong-Zakai approximation. \\
\begin{lm}\label{lm-re}
\begin{enumerate}
\item [{\rm (i).}]
For any $y,z\in \mathbb R$, there exists a constant $C$ such that
\begin{align}\label{sin-sum}
\begin{split}
\sum_{\alpha=1}^\infty \frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}{\lambda_\alpha}
&\le C|y-z|,\\
\sum_{\alpha=1}^\infty \frac{|\psi_\alpha(y)-\psi_\alpha(z)|^2}{\lambda_\alpha}
&\le C|y-z|.
\end{split}
\end{align}
Moreover, for any $\kappa\in (1/2,3/2)$, there exists a constant $C=C(\kappa)$ such that
\begin{align}\label{sin-sum1}
\begin{split}
\sum_{\alpha=1}^\infty \frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}{\lambda_\alpha^\kappa}
&\le C|y-z|^{2\kappa-1},\\
\sum_{\alpha=1}^\infty \frac{|\psi_\alpha(y)-\psi_\alpha(z)|^2}{\lambda_\alpha^\kappa}
&\le C|y-z|^{2\kappa-1}.
\end{split}
\end{align}
\item [{\rm (ii).}]
For any $H<1/2$ and any $\alpha\in \mathbb N_+$, there exists a constant $C=C(H)$ such that
\begin{align}\label{sob-int}
\int_{\mathcal O}\int_{\mathcal O} \frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}
{|y-z|^{2-2H}} dydz
\le C \lambda_\alpha^{\frac{1}{2}-H}.
\end{align}
\item [{\rm (iii).}]
For any $t>0$ and any $y,z\in \mathbb R$, there exists a constant $C=C(T)$ such that
\begin{align} \label{phi}
\sum_{\alpha=1}^\infty |\varphi_\alpha(y)-\varphi_\alpha(z)|^2
\bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
\le C |y-z|.
\end{align}
\item [{\rm (iv).}]
For any $y,z\in \overline{\mathcal O}$, there exists a constant $C=C(T)$ such that
\begin{align}\label{gg}
\int_I \int_{\mathcal O}|G_{t-s}(x,y)-G_{t-s}(x,z)|^2 dxds
\le C |y-z|.
\end{align}
\end{enumerate}
\end{lm}
\begin{proof}
(i).
For any $y,z\in \mathbb R$,
\begin{align*}
& \sum_{\alpha=1}^\infty \frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}{\lambda_\alpha} \\
&\le \sum_{\alpha=1}^\infty \frac{8\wedge 2\lambda_\alpha |y-z|^2}{\lambda_\alpha}
\le \int_0^\infty \frac{8}{\pi^2 u^2}\wedge 2|y-z|^2 du \nonumber \\
&\le \int_0^{\frac{2}{\pi |y-z|}} 2|y-z|^2du+ \int_{\frac{2}{\pi |y-z|}}^\infty \frac{8}{\pi^2 u^2}du
=\frac{8|y-z|}{\pi}.
\end{align*}
This proves the first inequality of \eqref{sin-sum}. The proof of the second inequality and \eqref{sin-sum1} are analogous.
(ii). Let $u=\alpha\pi y$ and $v=\alpha\pi z$. Then
\begin{align}\label{eq:inequality(ii)}
&\int_{\mathcal O}\int_{\mathcal O} \frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}
{|y-z|^{2-2H}}dydz \nonumber \\
&=\frac{2}{\lambda_\alpha^H}\int_0^{\sqrt{\lambda_\alpha}}\int_0^{\sqrt{\lambda_\alpha}}\frac{(\sin u-\sin v)^2}{|u-v|^{2-2H}}dudv.
\end{align}
Set $K_1=\{(u,v)\in [0,\sqrt{\lambda_\alpha}]^2: |u-v|\le 1\}$ and $K_2=\{(u,v)\in [0,\sqrt{\lambda_\alpha}]^2: |u-v|> 1\}$. It is easy to see that
\begin{align*}
\int_{K_1} \frac{|\sin u-\sin v|^2}{|u-v|^{2-2H}} dudv
\le \int_{K_1} |u-v|^{2H}dudv
\le 2\sqrt{\lambda_\alpha}-1.
\end{align*}
On the other hand, since $(\sin u-\sin v)^2\le 4$ when $(u,v)\in K_2$ and each of the inner integrals below is bounded by $\int_1^\infty w^{2H-2}dw=\frac{1}{1-2H}$, we have that
\begin{align*}
& \int_{K_2} \frac{|\sin u-\sin v|^2}{|u-v|^{2-2H}} dudv \\
&\le 4\int_0^{\sqrt{\lambda_\alpha}}
\bigg[\int_0^{v-1} (v-u)^{2H-2}du+\int_{v+1}^{\sqrt{\lambda_\alpha}} (u-v)^{2H-2}du\bigg] dv \\
&\le \frac{8}{1-2H}\sqrt{\lambda_\alpha}.
\end{align*}
From the above two estimates and \eqref{eq:inequality(ii)} we obtain
\begin{align*}
\int_{\mathcal O}\int_{\mathcal O}
\frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}{|y-z|^{2-2H}} dydz
\le \bigg(4+\frac{16}{1-2H}\bigg)\lambda_\alpha^{\frac{1}{2}-H}
\le C \lambda_\alpha^{\frac{1}{2}-H},
\end{align*}
which is \eqref{sob-int}.
(iii). Direct calculations (using $\int_0^t \sin^2(\sqrt{\lambda_\alpha}\,u)du=\frac t2-\frac{\sin(2\sqrt{\lambda_\alpha}t)}{4\sqrt{\lambda_\alpha}}$ for the SWE) yield
\begin{align*}
\int_0^t \phi_\alpha^2(t-s)ds
=\begin{cases}
\frac{1-e^{-2\lambda_\alpha t}}{2\lambda_\alpha},&\quad {\rm for} \ \text{SHE}, \\
\frac{2\sqrt{\lambda_\alpha} t-\sin(2\sqrt{\lambda_\alpha}t)}{4\lambda_\alpha^{3/2}},&\quad {\rm for} \ \text{SWE}.
\end{cases}
\end{align*}
It follows from \eqref{sin-sum} that
\begin{align*}
&\sum_{\alpha=1}^\infty
\bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
|\varphi_\alpha(y)-\varphi_\alpha(z)|^2 \\
&\le C \sum_{\alpha=1}^\infty \frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}{\lambda_\alpha}
\le C |y-z|,
\end{align*}
which proves \eqref{phi}.
(iv).
By \eqref{gre} and orthogonality of $\phi_\alpha$, we have
\begin{align*}
&\int_I \int_{\mathcal O} |G_{t-s}(x,y)-G_{t-s}(x,z)|^2 dxds \\
&=\sum_{\alpha=1}^\infty
\bigg(\int_I \phi_\alpha^2(t-s) ds\bigg) |\varphi_\alpha(y)-\varphi_\alpha(z)|^2.
\end{align*}
Then \eqref{gg} follows immediately from \eqref{phi}.
\end{proof}
Denote by $\dot{\mathbb H}^\beta=\dot{\mathbb H}^\beta (\mathcal O)$ the usual interpolation space with its norm defined by $\|\cdot\|_{\beta}:=\|(-\Delta)^\frac{\beta}{2}\cdot\|_{\mathbb L^2}$, $\beta\in \mathbb R$.
In particular, $\dot{\mathbb H}^0=\mathbb L^2$.
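In terms of the eigensystem introduced above, this norm admits the spectral representation (standard, and used implicitly in several estimates below)
\begin{align*}
\|v\|_\beta^2=\sum_{\alpha=1}^\infty \lambda_\alpha^{\beta}(v,\varphi_\alpha)^2,\qquad v\in\dot{\mathbb H}^\beta,
\end{align*}
where $(\cdot,\cdot)$ denotes the inner product in $\mathbb L^2$.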
We have the following well-posedness and Sobolev regularity of Eq. \eqref{see}.\\
\begin{tm}\label{wel}
Let $p\ge 2$ and $\beta\in [0,H)$.
Assume that $u_0\in \mathbb L^p(\Omega; \dot \mathbb H^\beta)$ and
$v_0\in \mathbb L^p(\Omega; \dot \mathbb H^{\beta-1})$.
Then Eq. \eqref{see} associated with Dirichlet condition \eqref{dbc} or Neumann condition \eqref{nbc} with initial data $u_0$ and/or $v_0$ has a unique mild solution $u$ defined by \eqref{mild}.
Furthermore, there exists a constant $C=C(p,T,H)$ such that for SHE,
\begin{align}\label{bon-she}
\mathbb E\bigg[\sup_{t \in I}\|u(t)\|_{\beta}^p\bigg]
\le C\bigg(1+\mathbb E\bigg[\|u_0\|_{\beta}^p\bigg]\bigg).
\end{align}
and for SWE,
\begin{align}\label{bon-swe}
\mathbb E\bigg[\sup_{t \in I}\|u(t)\|_{\beta}^p\bigg]
\le C\bigg(1+\mathbb E\bigg[\|u_0\|_{\beta}^p\bigg]
+\mathbb E\bigg[\|v_0\|_{\beta-1}^p\bigg]\bigg).
\end{align}
\end{tm}
\begin{proof}
Substituting \eqref{gre} for $G_{t-s}(x,y)$, using the fact that $S(t)$ is a Gaussian random field, and applying the It\^o isometry \eqref{ito0}, we obtain
\begin{align*}
& \mathbb E\bigg[\|S(t)\|_{\mathbb L^2}^p \bigg] \\
&\le C\Bigg[\int_{\mathcal O}\int_{\mathcal O}
\frac{\sum\limits_{\alpha=1}^\infty |\varphi_\alpha(y)-\varphi_\alpha(z)|^2
\Big( \int_0^t \phi_\alpha^2(t-s)ds \Big) } {|y-z|^{2-2H}} dydz \\
&\qquad +\bigg(\int_{\mathcal O} \Big( y^{2H-1}+(1-y)^{2H-1}\Big)dy\bigg)
\bigg(\sum_{\alpha=1}^\infty \bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg) \bigg) \Bigg]^\frac p2,
\end{align*}
where $C$ is a constant depending only on $p$ and $H$.
By the estimation \eqref{phi} and the facts that
\begin{align*}
&\int_{\mathcal O}\int_{\mathcal O} |y-z|^{2H-1}dydz
=\frac{1}{H(2H+1)},\\
&\int_{\mathcal O}\Big( y^{2H-1}+(1-y)^{2H-1}\Big)dy
=\frac{1}{H},
\end{align*}
we have
\begin{align*}
\mathbb E\bigg[\|S(t)\|_{\mathbb L^2}^p \bigg]
&\le C\bigg[1+\sum_{\alpha=1}^\infty
\bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg) \bigg]^\frac p2
\end{align*}
For SHE, simple calculations yield that
\begin{align*}
\sum_{\alpha=1}^\infty \bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
=\sum_{\alpha=1}^\infty\frac{1-e^{-2\lambda_\alpha t}}{2\lambda_\alpha}
\le \sqrt{\frac t{2\pi}},
\end{align*}
and for SWE we have
\begin{align*}
\sum_{\alpha=1}^\infty \bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
=\sum_{\alpha=1}^\infty\frac{2\sqrt{\lambda_\alpha}t-\sin(2\sqrt{\lambda_\alpha}t)}{4\lambda_\alpha^{3/2}}
<\infty.
\end{align*}
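The finiteness here is quantitative: since $|\phi_\alpha(t)|\le \lambda_\alpha^{-1/2}$ for the SWE, one has the crude but sufficient bound (a short verification added for completeness)
\begin{align*}
\sum_{\alpha=1}^\infty \bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
\le t\sum_{\alpha=1}^\infty \frac{1}{\lambda_\alpha}
= \frac{t}{\pi^2}\sum_{\alpha=1}^\infty\frac{1}{\alpha^2}
= \frac{t}{6}\le\frac T6.
\end{align*}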
This shows that $\mathbb E\left[\|S(t) \|_{\mathbb L^2}^p\right]<\infty$, which in turn ensures the existence of the unique mild solution of Eq. \eqref{see}, as well as the uniform boundedness of its moments, through the standard Picard iteration argument (cf. \cite{Dal09} or \cite[Theorem 3.1]{HL16}).
It remains to show \eqref{bon-she} and \eqref{bon-swe}.
By the orthogonality and uniform boundedness of $\varphi_\alpha$ and It\^o isometry \eqref{ito0}, there exists a constant $C=C(p,H)$ such that
\begin{align*}
&\mathbb E\bigg[\sup_{t \in I}\|S(t)\|_\beta^p\bigg] \\
&\le C\Bigg(\sum_{\alpha=1}^\infty
\bigg(\mathbb E\bigg[\sup_{t \in I} \bigg|\int_0^t\int_{\mathcal O} \lambda_\alpha^\frac{\beta}{2}\phi_\alpha(t-s)\varphi_\alpha(y)\xi(ds,dy)\bigg|^2\bigg] \bigg) \Bigg)^\frac p2 \\
&\le C\sup_{t \in I}\Bigg[\sum_{\alpha=1}^\infty \lambda_\alpha^\beta
\bigg(\int_0^t \phi_\alpha^2(t-s) ds \bigg)
\bigg(\int_{\mathcal O}\int_{\mathcal O} \frac{|\varphi_\alpha(y)-\varphi_\alpha(z)|^2}{|y-z|^{2-2H}} dydz \bigg) \Bigg]^\frac p2 \\
& + C\sup_{t \in I} \Bigg[\bigg(\int_{\mathcal O} \bigg(y^{2H-1}+(1-y)^{2H-1} \bigg) dy\bigg)
\sum_{\alpha=1}^\infty \lambda_\alpha^\beta
\bigg(\int_0^t \phi_\alpha^2(t-s) ds\bigg) \Bigg]^\frac p2 \\
&=:S_1+S_2.
\end{align*}
For $S_1$, the inequality \eqref{sob-int} yields
\begin{align*}
S_1
\le C \sup_{t \in I} \Bigg[\sum_{\alpha=1}^\infty
\lambda_\alpha^{\beta-H+\frac12}
\bigg(\int_0^t \phi_\alpha^2(t-s) ds\bigg) \Bigg]^\frac p2
\le C \Bigg(\sum_{\alpha=1}^\infty \lambda_\alpha^{\beta-H-\frac{1}{2}} \Bigg)^\frac p2,
\end{align*}
which converges if and only if $\beta<H$.
The second term $S_2$ can be estimated as
\begin{align*}
S_2
\le C \sup_{t \in I} \Bigg[\sum_{\alpha=1}^\infty \lambda_\alpha^\beta
\bigg(\int_0^t \phi_\alpha^2(t-s) ds\bigg) \Bigg]^\frac p2
\le C \Bigg(\sum_{\alpha=1}^\infty \lambda_\alpha^{\beta-1} \Bigg)^\frac p2,
\end{align*}
which is finite if and only if $\beta<1/2$.
Therefore, for any $\beta\in [0,H)$ we have
\begin{align*}
\mathbb E\bigg[\sup_{t \in I}\|S(t)\|_\beta^p\bigg]<\infty.
\end{align*}
Since $b$ is Lipschitz continuous, the standard arguments imply \eqref{bon-she} and \eqref{bon-swe}.
\end{proof}
We remark that well-posedness results were recently established for $H>1/4$ in \cite{BJQ15} for linear ($b=0$) SEEs whose diffusion coefficient is given by an affine function $\sigma(u)=a_1u+a_2$ with $a_1,a_2\in \mathbb R$, and in \cite{HHLNT15} for linear SHE where $\sigma(u)$ is differentiable with a Lipschitz derivative and $\sigma(0) = 0$.
We also note that the authors in \cite{HL16} proved the optimal H\"older regularity for the solution of Eq. \eqref{see} on the real line, i.e., for $(t,x)\in I\times \mathbb R$.
\section{Wong-Zakai approximations}
\label{sec3}
In this section, we regularize the white-fractional noise $\xi$ through the Wong-Zakai approximation and establish the rate of convergence of the approximate mild solution of the SEE with $\xi$ replaced by its Wong-Zakai approximation.
First we recall the Wong-Zakai approximation described in Section \ref{sec1}. For partitions $\{I_i=(t_i,t_{i+1}],\ t_i=ik,\ i=0,1,\cdots,m-1\}$ and
$\{\mathcal O_j=(x_j,x_{j+1}],\ x_j=jh,\ j=0,1,\cdots,n-1\}$ of $I$ and $\mathcal O$, with $k=T/m$ and $h=1/n$, the Wong-Zakai approximation to $\xi(t,x)$ is given by
\begin{align}\label{w'}
\tilde \xi(t,x)=\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}
\bigg[\frac{1}{kh}\int_{I_i}\int_{\mathcal O_j}\xi(ds,dy)\bigg]
\chi_{i,j}(t,x),\ (t,x)\in I\times \overline{\mathcal O}.
\end{align}
It is easy to see from the It\^{o} isometry \eqref{ito0} that $\tilde \xi(t)\in \mathbb L^2$ a.s.\ for any $t\in I$;
moreover, for any $p\ge 2$, there exists a constant $C=C(p,T,H)$ such that
\begin{align}\label{xi}
\sup_{t\in I}\bigg(\mathbb E\bigg[\|\tilde \xi(t)\|^p_{\mathbb L^2}\bigg]\bigg)^\frac1p
\le C k^{-\frac12} h^{-\frac12}.
\end{align}
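The scaling in \eqref{xi} is easy to anticipate in the special case $H=1/2$ (a heuristic only, not a replacement for the argument below): each cell integral is then a centred Gaussian with variance $kh$, so each cell average in \eqref{w'} has standard deviation
\begin{align*}
\frac{1}{kh}\bigg(\mathbb E\bigg[\bigg|\int_{I_i}\int_{\mathcal O_j}\xi(ds,dy)\bigg|^2\bigg]\bigg)^{\frac12}
=\frac{(kh)^{\frac12}}{kh}=k^{-\frac12}h^{-\frac12}.
\end{align*}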
\iffalse
In fact,
\begin{align*}
\|\tilde \xi(t)\|_{\mathbb L^p(\Omega; \mathbb L^2)}
&=\sqrt[p]{\mathbb E\left[ \int_{\mathcal O}\left|\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\frac{1}{kh}\int_{I_i}\int_{\mathcal O_j}\xi(ds,dz)\chi_{I_i\times \mathcal O_j}(t,x)\right|^2 dx\right]^\frac p2 } \\
&\le C k^{-1}h^{-\frac12} \sqrt[p]{\mathbb E\left[\left(\sum_{j=0}^{n-1} \left |\int_{I_M}\int_{\mathcal O_j}\xi(ds,dz)\right|^2\right)^\frac p2\right]} \\
&\le k^{-1}h^{-\frac12} h^{-\frac{p-2}{2p}} \sqrt[p]{\sum_{j=0}^{n-1} \mathbb E\left[\left |\int_{I_M}\int_{\mathcal O_j}\xi(ds,dz)\right|^p\right]}
\le C k^{-\frac12} h^{-\frac12}.
\end{align*}
\fi
Now we consider the regularized SEE with $\xi$ replaced by $\tilde \xi$ in Eq. \eqref{see}:
\begin{align} \label{spde-dis}
L \tilde u (t,x)=b( \tilde u (t,x))+\tilde \xi(t,x),\quad (t,x)\in I\times \overline{\mathcal O},
\end{align}
with the same initial and boundary values.
Similarly to \eqref{mild}, we define the mild solution of \eqref{spde-dis} as $\tilde u$ such that
\begin{align}\label{mild-dis}
\tilde u (t,x)
=\omega(t,x)+\int_0^t\int_{\mathcal O} G_{t-s}(x,y)b( \tilde u (s,y))dsdy+\tilde S(t,x)
\end{align}
for all $(t,x)\in I\times \overline{\mathcal O}$,
where $\tilde S$ denotes the approximate convolution:
\begin{align}\label{con-a}
\tilde S(t,x):=\int_0^t\int_{\mathcal O} G_{t-s}(x,y)\tilde \xi(s,y)dsdy,
\quad (t,x)\in I\times \overline{\mathcal O}.
\end{align}
Using \eqref{w'} we can rewrite $\tilde S(t,x)$ as a stochastic integral:
\begin{align*}
\tilde S(t,x)=\int_I\int_{\mathcal O}
G_{t,s}^{m,n}(x,y) dW(s,y),\quad
(t,x)\in I\times \overline{\mathcal O},
\end{align*}
where
\begin{align*}
G_{t,s}^{m,n}(x,y)=\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\frac{\chi_{i,j}(s,y)}{kh} \int_{I_i}\int_{\mathcal O_j}
G_{t-\tau}(x,z)d\tau dz
\end{align*}
for $(t,x),\ (s,y)\in I\times \overline{\mathcal O}$.
As a result,
\begin{align*}
S (t,x)-\tilde S(t,x)
=\int_I\int_{\mathcal O} \Big( G_{t-s}(x,y)-G^{m,n}_{t,s}(x,y) \Big)\xi(ds,dy),
\end{align*}
where $(t,x)\in I\times \overline{\mathcal O}$.
In what follows, we derive an error estimate for the Wong-Zakai approximation of $\xi$ and then establish the convergence rate of the mild solution
$ \tilde u$ of \eqref{spde-dis} to the mild solution $u$ of \eqref{see} in terms of $k$ and $h$. For this purpose we define, for $t\in I$,
\begin{align*}
\Psi_\alpha(t):
&=\sum_{i=0}^{m-1} \int_{I_i}
\bigg[\int_{I_i}
\bigg( \phi_\alpha(t-s)- \phi_\alpha(t-\tau) \bigg) d\tau \bigg]^2ds,\\
\Upsilon_\alpha(t):
&=\sum_{i=0}^{m-1} \int_{I_i}\int_{I_i} \phi_\alpha(t-s)
\bigg( \phi_\alpha(t-s)- \phi_\alpha(t-\tau) \bigg) d\tau ds.
\end{align*}
The following estimates are frequently used in our analysis.\\
\begin{lm}
Let $\phi_{\alpha}, \ \alpha\in \mathbb N_+$ be the basis functions related to SHE. Then there exists a constant $C=C(T,H)$ such that
\begin{align}\label{lm-she}
\sup_{t\in I}\sum_{\alpha=1}^\infty\Psi_\alpha(t)
\le C k^\frac{5}{2},
\quad
\sup_{t\in I}\sum_{\alpha=1}^\infty \Upsilon_\alpha(t)
\le C k^\frac{3}{2}.
\end{align}
\end{lm}
\begin{proof}
Let $M$ be the integer such that $t\in [t_{M-1},t_M)$.
Define for $i\in [0,M-1]$,
\begin{align*}
\Psi_i^\alpha(t):=\int_{I_i} \left[\int_{I_i}
\Big( \phi_\alpha(t-s)- \phi_\alpha(t-\tau) \Big)
d\tau \right]^2ds.
\end{align*}
Then
$\Psi_i^{\alpha}(t)
=\int_{I_i} \Big[\int_{I_i}\int_\tau^s \lambda_\alpha e^{-\lambda_\alpha(t-u)}dud\tau \Big]^2ds$.
When $i\in [0,M-2]$,
\begin{align*}
\Psi_i^{\alpha}(t)
&\le \int_{I_i} \bigg[\int_{I_i}\int_{t_i}^{s\vee \tau} \lambda_\alpha e^{-\lambda_\alpha(t-u)}dud\tau \bigg]^2ds \\
&\le 2\int_{I_i} \bigg[\int_{I_i}\int_{t_i}^s \lambda_\alpha e^{-\lambda_\alpha(t-u)}dud\tau \bigg]^2ds \\
&\quad +2\int_{I_i} \bigg[\int_{I_i}\int_{t_i}^\tau \lambda_\alpha e^{-\lambda_\alpha(t-u)}dud\tau \bigg]^2ds \\
&\le 4k^2\int_{I_i} \bigg[\int_{t_i}^s \lambda_\alpha e^{-\lambda_\alpha(t-u)}du \bigg]^2ds \\
&\le 2k^2\frac{(1-e^{-\lambda_\alpha k})^2
(e^{2\lambda_\alpha k}-1)}{\lambda_\alpha} e^{-2\lambda_\alpha(t-t_i)}.
\end{align*}
Summing up $\Psi_i^{\alpha}(t)$ from $0$ to $M-2$, and using that the factors $(e^{2\lambda_\alpha k}-1) e^{-2\lambda_\alpha(t-t_i)}=e^{-2\lambda_\alpha(t-t_{i+1})}-e^{-2\lambda_\alpha(t-t_i)}$ telescope, we obtain
\begin{align*}
\sum_{i=0}^{M-2} \Psi_i^\alpha(t)
&\le 2k^2\frac{(1-e^{-\lambda_\alpha k})^2}{\lambda_\alpha}.
\end{align*}
On the other hand,
\begin{align*}
\Psi_{M-1}^\alpha(t)
&=\int_{t_{M-1}}^t \bigg[\int_{t_{M-1}}^t\int_\tau^s \lambda_\alpha e^{-\lambda_\alpha(t-u)}dud\tau+\int_t^{t_M}e^{-\lambda_\alpha(t-s)} d\tau\bigg]^2ds \nonumber \\
&\quad + \int_t^{t_M}\bigg[\int_{t_{M-1}}^t e^{-\lambda_\alpha(t-\tau)}d\tau\bigg]^2ds
:=\Psi_{M-1,1}^\alpha(t)+\Psi_{M-1,2}^\alpha(t).
\end{align*}
The first term $\Psi_{M-1,1}^\alpha(t)$ has the estimation:
\begin{align*}
\Psi_{M-1,1}^\alpha(t)
&\le 2\int_{t_{M-1}}^t \bigg[\int_{t_{M-1}}^t\int_\tau^s \lambda_\alpha e^{-\lambda_\alpha(t-u)}dud\tau \bigg]^2ds \\
&\qquad \qquad +\frac{1-e^{-2\lambda_\alpha(t-t_{M-1})}}{\lambda_\alpha} k^2.
\end{align*}
Similar to the analysis and calculations for $\Psi_i^\alpha(t)$, $i\in [0,M-2]$, the first term on the right hand side above can be controlled by
\begin{align*}
&2\int_{t_{M-1}}^t \bigg[\int_{t_{M-1}}^t\int_\tau^s \lambda_\alpha e^{-\lambda_\alpha(t-u)}dud\tau \bigg]^2ds \\
&\le 8k^2\int_{t_{M-1}}^t \bigg[\int_{t_{M-1}}^s \lambda_\alpha e^{-\lambda_\alpha(t-\tau)}d\tau \bigg]^2ds \\
&\le 4k^2\frac{(1-e^{-2\lambda_\alpha k})^3}{\lambda_\alpha}.
\end{align*}
As a result,
\begin{align*}
\Psi_{M-1,1}^\alpha(t)
&\le 4k^2\frac{(1-e^{-2\lambda_\alpha k})^3}{\lambda_\alpha}+ \frac{1-e^{-2\lambda_\alpha(t-t_{M-1})}}{\lambda_\alpha} k^2
\le 5k^2\frac{1-e^{-2\lambda_\alpha k}}{\lambda_\alpha}.
\end{align*}
Since $1-e^{-x}\le x$ for any $x\ge 0$ and $t-t_{M-1}\le k$,
\begin{align*}
\Psi_{M-1,2}^\alpha(t)
&=\int_t^{t_M}\bigg[\int_{t_{M-1}}^t e^{-\lambda_\alpha(t-\tau)}d\tau \bigg]^2ds
=(t_M-t)\frac{\big(1-e^{-\lambda_\alpha(t-t_{M-1})}\big)^2}{\lambda_\alpha^2} \\
&\le \frac{1-e^{-\lambda_\alpha(t-t_{M-1})}}{\lambda_\alpha} k^2
\le \frac{1-e^{-2\lambda_\alpha k}}{\lambda_\alpha} k^2.
\end{align*}
Therefore,
$\Psi_{M-1}^\alpha(t)\le 6k^2\frac{1-e^{-2\lambda_\alpha k}}{\lambda_\alpha}$.
As a consequence,
\begin{align*}
\sum_{\alpha=1}^\infty\Psi_\alpha(t)
\le 8k^2\sum_{\alpha=1}^\infty\frac{1-\phi_\alpha(2k)}{\lambda_\alpha}
\le C k^\frac{5}{2}.
\end{align*}
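The last inequality follows from a sum-integral comparison (a short justification added here; the same computation gives the bound $\sqrt{t/(2\pi)}$ used in the proof of Theorem \ref{wel}): since $\phi_\alpha(2k)=e^{-2\lambda_\alpha k}$ for the SHE and the summand is decreasing in $\alpha$,
\begin{align*}
\sum_{\alpha=1}^\infty\frac{1-e^{-2\lambda_\alpha k}}{\lambda_\alpha}
\le\int_0^\infty\frac{1-e^{-2\pi^2u^2k}}{\pi^2u^2}du
=\sqrt{\frac{2k}{\pi}}\le C k^{\frac12}.
\end{align*}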
Define for $i\in [0,M-1]$,
\begin{align*}
\Upsilon_i^\alpha(t):=\int_{I_i}\int_{I_i} \phi_\alpha(t-s)
\Big( \phi_\alpha(t-s)-\phi_\alpha(t-\tau) \Big) d\tau ds.
\end{align*}
When $i\in [0,M-2]$, similarly to the arguments for $\Psi_i^\alpha(t)$, we have
\begin{align*}
\Upsilon_i^\alpha(t)
&=\int_{I_i} e^{-\lambda_\alpha(t-s)}
\bigg[\int_{I_i}\int_\tau^s \lambda_\alpha e^{-\lambda_\alpha(t-u)} dud\tau \bigg] ds\\
&\le \int_{I_i} e^{-\lambda_\alpha(t-s)}
\bigg[\int_{I_i}\int_{t_i}^s \lambda_\alpha e^{-\lambda_\alpha(t-u)} dud\tau\bigg] ds\\
&\quad +\int_{I_i} e^{-\lambda_\alpha(t-s)}
\bigg[\int_{I_i}\int_{t_i}^\tau \lambda_\alpha e^{-\lambda_\alpha(t-u)} dud\tau\bigg] ds\\
&\le 2k \Big(1-e^{-\lambda_\alpha k}\Big)
\bigg(\int_{I_i} e^{-2\lambda_\alpha(t-s)}ds\bigg).
\end{align*}
Summing up $\Upsilon_i^\alpha(t)$ both for $i$ from $0$ to $M-2$ and $\alpha\in \mathbb N_+$, we obtain
\begin{align*}
\sum_{\alpha=1}^\infty \sum_{i=0}^{M-2} \Upsilon_i^\alpha(t)
\le k\sum_{\alpha=1}^\infty\frac{1-e^{-\lambda_\alpha k}}{\lambda_\alpha}
\le C k^\frac{3}{2}.
\end{align*}
On the other hand, since $\phi_\alpha(t-\tau)=0$ for $\tau>t$ and $0\le \phi_\alpha\le 1$ (so that $\phi_\alpha^2\le \phi_\alpha$),
\begin{align*}
\Upsilon_{M-1}^\alpha(t)
&\le\int_{t_{M-1}}^t \int_{t_{M-1}}^t \phi_\alpha(t-s)
\Big(\phi_\alpha(t-s)-\phi_\alpha(t-\tau)\Big) d\tau ds \\
&\quad +\int_{t_{M-1}}^t \int_t^{t_M} \phi_\alpha(t-s) d\tau ds \\
&=(t-t_{M-1})\bigg(\int_{t_{M-1}}^t \phi^2_\alpha(t-s) ds\bigg)
-\bigg(\int_{t_{M-1}}^t \phi_\alpha(t-s) ds\bigg)^2 \\
&\quad +(t_M-t) \bigg(\int_{t_{M-1}}^t \phi_\alpha(t-s) ds\bigg) \\
&\le (t-t_{M-1}) \bigg(\int_{t_{M-1}}^t \phi^2_\alpha(t-s) ds\bigg) \\
&\quad +(t_M-t) \bigg(\int_{t_{M-1}}^t \phi_\alpha(t-s) ds\bigg).
\end{align*}
Thus
\begin{align*}
\sum_{\alpha=1}^\infty \Upsilon_{M-1}^\alpha(t)
&\le k\sum_{\alpha=1}^\infty
\bigg(\frac{1-e^{-2\lambda_\alpha (t-t_{M-1})}}{2\lambda_\alpha}\bigg) \\
&\quad +k\sum_{\alpha=1}^\infty
\bigg(\frac{1-e^{-\lambda_\alpha (t-t_{M-1})}}{\lambda_\alpha} \bigg)
\le C k^\frac{3}{2},
\end{align*}
which completes the proof of \eqref{lm-she}.
\end{proof}
Following the same arguments as in the above lemma, we have the following estimates for SWE.
\begin{lm}
Let $\phi_{\alpha}, \ \alpha\in \mathbb N_+$ be the basis functions related to SWE. Then there exists a constant $C=C(T,H)$ such that
\begin{align}\label{lm-swe}
\sup_{t\in I}\sum_{\alpha=1}^\infty\Psi_\alpha(t)
\le C k^3,
\quad
\sup_{t\in I}\sum_{\alpha=1}^\infty \Upsilon_\alpha(t)
\le C k^2.
\end{align}
\end{lm}
Applying It\^o isometry \eqref{ito0} and the above two lemmas, we have the following estimate for the error between the convolution
$S$ and $\tilde S$ given by \eqref{con} and \eqref{con-a}, respectively.
\begin{tm}\label{ww} For $p\geq 2$, there exists a constant $C=C(p,T,H)$ such that for SHE,
\begin{align}\label{ww-she}
\sup_{t\in I}\left(\mathbb E\left[\|S (t)-\tilde S(t)\|_{\mathbb L^2}^p\right] \right)^\frac1p
\le C (h^H+k^\frac14 h^{H-\frac12})
\end{align}
and for SWE,
\begin{align}\label{ww-swe}
\sup_{t\in I}\left(\mathbb E\left[ \|S (t)-\tilde S(t)\|_{\mathbb L^2}^p\right] \right)^\frac1p
\le C(h^H+k^\frac12 h^{H-\frac12}).
\end{align}
\end{tm}
\begin{proof}
We only prove \eqref{ww-she} for $p=2$; the other cases can be handled using the fact that $S-\tilde S$ is a Gaussian field.
By It\^o isometry \eqref{ito0},
\begin{align*}
\mathbb E\bigg[\|S(t)-\tilde S(t)\|_{\mathbb L^2}^2\bigg]
=\frac{H(1-2H)}{2} I_1(t)+H I_2(t),
\end{align*}
where
\begin{align*}
I_1(t)
&=\int_I \int_{\mathcal O} \int_{\mathcal O} \int_{\mathcal O}
\left| G_{t-s}(x,y)-G^{m,n}_{t,s}(x,y)
-G_{t-s}(x,z)+G^{m,n}_{t,s}(x,z) \right|^2 \\
&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
|y-z|^{2H-2}dxdydzds, \\
I_2(t)
&=\int_I \int_{\mathcal O} \int_{\mathcal O}
\left| G_{t-s}(x,y)-G^{m,n}_{t,s}(x,y)
\right|^2 \Big( y^{2H-1}+(1-y)^{2H-1}\Big) dxdyds.
\end{align*}
For the first term, we have
\begin{align*}
I_1(t)&=\sum_{j=0}^{n-1}\int_{\mathcal O_j}\int_{\mathcal O_j}
\frac{\int_I \int_{\mathcal O}|G_{t-s}(x,y)-G_{t-s}(x,z)|^2 dxds}{|y-z|^{2-2H}} dydz \\
&\quad+\frac{1}{(kh)^2}\sum_{i=0}^{m-1}\sum_{j\neq l}\int_{\mathcal O} \int_{I_i}\int_{\mathcal O_j}\int_{\mathcal O_l} \\
&\qquad \Bigg[\bigg(\int_{I_i}\int_{\mathcal O_j}
\Big(G_{t-s}(x,y)- G_{t-\tau}(x,r)\Big) drd\tau\bigg) |y-z|^{2H-2}
\\
&\qquad \qquad \times \bigg(\int_{I_i}\int_{\mathcal O_l}\Big(G_{t-s}(x,z)- G_{t-\tau}(x,r)\Big) drd\tau \bigg) \Bigg] dydzdsdx \\
&=:I_{11}(t)+I_{12}(t).
\end{align*}
By \eqref{gg} and \eqref{same}, we have
\begin{align*}
I_{11}(t)
&\le C \sum_{j=0}^{n-1}\int_{\mathcal O_j}\int_{\mathcal O_j} |y-z|^{2H-1} dydz
\le C h^{2H}.
\end{align*}
Applying H\"older inequality repetitively and Fubini theorem, we obtain
\begin{align*}
I_{12}(t)
&=\frac{1}{(kh)^2}\sum_{i=0}^{m-1}\sum_{j\neq l}\sum_{\alpha=1}^\infty \int_{I_i}\int_{\mathcal O_j}\int_{\mathcal O_l} \Bigg[ |y-z|^{2H-2} \nonumber \\
&\quad \Bigg(\int_{I_i}\int_{\mathcal O_j}
\bigg[\phi_\alpha (t-s)\varphi_\alpha (y)-
\phi_\alpha (t-\tau_1)\varphi_\alpha (r_1) \bigg]
d\tau_1 dr_1 \Bigg) \\
&\quad \Bigg( \int_{I_i}\int_{\mathcal O_l}
\bigg[\phi_\alpha (t-s)\varphi_\alpha (z)-
\phi_\alpha (t-\tau_2)\varphi_\alpha (r_2)\bigg]
d\tau_2 dr_2\Bigg) \Bigg] dydzds\\
&=:I_{121}(t)+I_{122}(t)+I_{123}(t)+I_{124}(t),
\end{align*}
where
\begin{align*}
I_{121}(t)
&=\frac{1}{h^2}\sum_{j\neq l}\sum_{\alpha=1}^\infty
\bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
\bigg(\int_{\mathcal O_j}\int_{\mathcal O_l}|y-z|^{2H-2}dydz\bigg) \\
&\qquad\qquad
\bigg( \int_{\mathcal O_j}[\varphi_\alpha(y)-\varphi_\alpha(r_1)]dr_1\bigg)
\bigg(\int_{\mathcal O_l} [\varphi_\alpha(z)-\varphi_\alpha(r_2)] dr_2\bigg), \\
I_{122}(t)&=\frac{1}{(kh)^2}\sum_{j\neq l}
\bigg( \int_{\mathcal O_j}\int_{\mathcal O_l} |y-z|^{2H-2} dydz\bigg) \\
&\qquad\qquad \left[\sum_{\alpha=1}^\infty \Psi_\alpha(t)
\bigg(\int_{\mathcal O_j}\varphi_\alpha(r_1) dr_1\bigg)
\bigg(\int_{\mathcal O_l}\varphi_\alpha(r_2) dr_2\bigg) \right], \\
I_{123}(t)&=\frac{1}{kh^2}\sum_{\alpha=1}^\infty \Upsilon_\alpha(t) \bigg( \int_{\mathcal O_l}\varphi_\alpha(r_2) dr_2 \bigg) \\
&\qquad\qquad
\Bigg(\sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l}\int_{\mathcal O_j}
\frac{\varphi_\alpha(y)-\varphi_\alpha(r_1)}{|y-z|^{2-2H}} dr_1dydz\Bigg),
\\
I_{124}(t)&=\frac{1}{kh^2}\sum_{\alpha=1}^\infty \Upsilon_\alpha(t)
\bigg(\int_{\mathcal O_j}\varphi_\alpha(r_1) dr_1 \bigg) \\
&\qquad\qquad
\Bigg(\sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l}\int_{\mathcal O_l}
\frac{\varphi_\alpha(z)-\varphi_\alpha(r_2)}{|y-z|^{2-2H}} dr_2dydz\Bigg).
\end{align*}
By Young's inequality, estimates \eqref{phi} and \eqref{dif}, we have that
\begin{align*}
& |I_{121}(t)| \\
&\le \frac{C}{h^2} \sum_{j\neq l}
\int_{\mathcal O_j}\int_{\mathcal O_l}
\frac{\int_{\mathcal O_j} \sum\limits_{\alpha=1}^\infty
|\varphi_\alpha(y)-\varphi_\alpha(r_1)|^2
\Big( \int_0^t \phi_\alpha^2(t-s)ds \Big) dr_1}
{|y-z|^{2-2H}} dydz \\
&\le \frac{C}{h^2} \sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l}
\frac{\int_{\mathcal O_j}\int_{\mathcal O_l} \big(|y-r_1|+|z-r_2| \big) dr_1dr_2}
{|y-z|^{2-2H}} dydz \\
&\le C h \sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l}|y-z|^{2H-2}dydz
\le C h^{2H}.
\end{align*}
From \eqref{lm-she} and \eqref{dif}, we have, for the second term $I_{122}(t)$,
\begin{align*}
|I_{122}(t)|
&\le C \frac{1}{k^2}
\bigg(\sum_{\alpha=1}^\infty \Psi_\alpha(t)\bigg)
\bigg(\sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l} |y-z|^{2H-2} dydz\bigg)\\
&\le C k^{-2}h^{2H-1} \bigg(\sum_{\alpha=1}^\infty \Psi_\alpha(t)\bigg)
\le C (h^{2H}+k^\frac{1}{2}h^{2H-1}).
\end{align*}
By \eqref{lm-she}, we have, for the third term $I_{123}(t)$,
\begin{align*}
|I_{123}(t)|
&\le \frac C{kh}
\sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l}
\frac{\sum\limits_{\alpha=1}^\infty \Upsilon_\alpha(t)
\Big(\int_{\mathcal O_j} \Big(\varphi_\alpha(y)-\varphi_\alpha(r_1)\Big) dr_1\Big)}
{|y-z|^{2-2H}} dydz\\
&\le \frac Ck \bigg(\sum_{\alpha=1}^\infty \Upsilon_\alpha(t)\bigg)
\bigg(\sum_{j\neq l}\int_{\mathcal O_j}\int_{\mathcal O_l} |y-z|^{2H-2} dydz\bigg) \\
&\le C (h^{2H}+k^\frac{1}{2}h^{2H-1}).
\end{align*}
Analogously, the last term $I_{124}(t)$ satisfies
\begin{align*}
|I_{124}(t)|
&\le C (h^{2H}+k^\frac{1}{2}h^{2H-1}).
\end{align*}
The above four estimates yield
\begin{align*}
\left|I_{12}(t) \right|
\le C (h^{2H}+k^\frac{1}{2}h^{2H-1}).
\end{align*}
Combining the estimations of $I_{11}$ and $I_{12}$, we get
\begin{align} \label{e1}
|I_1(t)|\le C (h^{2H}+k^\frac{1}{2}h^{2H-1}).
\end{align}
Next we estimate $I_2$. Young's inequality yields
\begin{align*}
I_2(t)
&\le \frac{C}{(kh)^2}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}
\Bigg[ \int_{\mathcal O}\int_{I_i}\int_{\mathcal O_j}
\Big( y^{2H-1}+(1-y)^{2H-1} \Big) \\
&\qquad \bigg|\int_{I_i}\int_{\mathcal O_j}
\Big( G_{t-s}(x,y)-G_{t-s}(x,z) \Big)dzd\tau \bigg|^2 \Bigg] dsdydx \\
&\quad+\frac{C}{(kh)^2}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}
\Bigg[ \int_{\mathcal O}\int_{I_i}\int_{\mathcal O_j} \Big( y^{2H-1}+(1-y)^{2H-1} \Big) \\
&\qquad \bigg|\int_{I_i}\int_{\mathcal O_j}
\Big(G_{t-s}(x,z)-G_{t-\tau}(x,z)\Big) dzd\tau \bigg|^2 \Bigg] dsdydx \\
&=:I_{21}(t)+I_{22}(t).
\end{align*}
By \eqref{gre}, orthogonality of $\phi_\alpha$, and \eqref{phi}, we have
\begin{align*}
I_{21}(t)
&=\frac{C}{h^2}\sum_{j=0}^{n-1}\int_{\mathcal O_j}
\Bigg[ \sum_{\alpha=1}^\infty \bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg) \\
&\qquad \bigg(\int_{\mathcal O_j}|\varphi_\alpha(y)-\varphi_\alpha(z)|dz \bigg)^2 \Big( y^{2H-1}+(1-y)^{2H-1}\Big)\Bigg] dy \\
&\le \frac{C}{h}\sum_{j=0}^{n-1}\int_{\mathcal O_j}\int_{\mathcal O_j} \Bigg(\bigg[\sum_{\alpha=1}^\infty
\bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
|\varphi_\alpha(y)-\varphi_\alpha(z)|^2\bigg] \\
&\qquad\qquad\qquad\qquad
\Big( y^{2H-1}+(1-y)^{2H-1}\Big)\Bigg) dydz \\
&\le \frac{C}{h}\sum_{j=0}^{n-1}\int_{\mathcal O_j}\int_{\mathcal O_j} |y-z| \Big( y^{2H-1}+(1-y)^{2H-1} \Big)dydz\le C h.
\end{align*}
Similarly, for $I_{22}(t)$, by \eqref{lm-she}, we have
\begin{align*}
I_{22}(t)
&=\frac{C}{(kh)^2}\sum_{j=0}^{n-1} \sum_{\alpha=1}^\infty
\Psi_\alpha(t)
\bigg[\int_{\mathcal O_j} \bigg|\int_{\mathcal O_j}\varphi_\alpha(z)dz\bigg|^2 \\
&\qquad \qquad \qquad \qquad \Big( y^{2H-1}+(1-y)^{2H-1} \Big)dy\bigg] \\
&\le \frac{C}{k^2} \Bigg[\int_{\mathcal O} \Big( y^{2H-1}+(1-y)^{2H-1} \Big) dy\Bigg]
\Bigg[\sum_{\alpha=1}^\infty \Psi_\alpha(t) \Bigg]
\le C k^\frac12.
\end{align*}
It follows from the above two estimates that
\begin{align}\label{e2}
I_2(t)
\le C (h+k^\frac{1}{2}).
\end{align}
Combining \eqref{e1} and \eqref{e2}, we obtain \eqref{ww-she}.
\end{proof}
Now we are ready to estimate the error between the exact solution $u$ of Eq. \eqref{see} and the approximate solution $ \tilde u $ of Eq. \eqref{spde-dis}.
\begin{tm}\label{uu}
Let $p\ge 2$.
Assume that $u_0\in \mathbb L^p(\Omega; \mathbb L^2)$ and
$v_0\in \mathbb L^p(\Omega; \dot{\mathbb H}^{-1})$.
Let $u$ and $\tilde u$ be the mild solutions of Eq. \eqref{see} and Eq. \eqref{spde-dis}, respectively.
Then there exists a constant $C=C(p,T,H,u_0,v_0)$ such that for SHE,
\begin{align}\label{uu-she}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \bigg)^\frac1p
\le C (h^H+k^\frac14 h^{H-\frac12})
\end{align}
and for SWE,
\begin{align}\label{uu-swe}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \bigg)^\frac1p
\le C(h^H+k^\frac12 h^{H-\frac12}).
\end{align}
\end{tm}
\begin{proof} By Theorem \ref{ww} and the Lipschitz continuity of the drift coefficient function $b$, it suffices to prove that
\begin{align}\label{uw}
\mathbb E\bigg[\|u(t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg]
\le C \mathbb E\bigg[\|S(t)-\tilde S(t)\|_{\mathbb L^2}^p\bigg].
\end{align}
Subtracting \eqref{mild-dis} from \eqref{mild}, we get
\begin{align*}
u(t,x)- \tilde u (t,x)
&=\int_0^t\int_{\mathcal O} G_{t-s}(x,y)
\Big(b(u(s,y))-b( \tilde u (s,y))\Big) dsdy \\
&\quad +S (t,x)-\tilde S(t,x).
\end{align*}
Taking $\mathbb L^2$-norm and then the expectation in the above equation, we have
\begin{align*}
&\mathbb E\bigg[\|u(t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \\
&\le C \mathbb E\Bigg[\bigg\|\int_0^t\int_{\mathcal O} G_{t-s}(\cdot,y)
\Big( b(u(s,y))-b(\tilde u (s,y)) \Big) dsdy\bigg\|_{\mathbb L^2}^p\Bigg] \\
&\quad +C\mathbb E\bigg[\|S(t)-\tilde S(t)\|_{\mathbb L^2}^p\bigg] \\
&\le C \mathbb E \Bigg[\bigg(\int_0^t \sum_{\alpha=1}^\infty \phi^2_\alpha(t-s) (\varphi_\alpha, u(s)- \tilde u (s) )^2ds \bigg)^\frac p2 \Bigg] \\
&\quad +C\mathbb E\bigg[\|S(t)-\tilde S(t)\|_{\mathbb L^2}^p\bigg] \\
&\le C \int_0^t \bigg(\sup_{\alpha\in \mathbb N_+}\phi_\alpha^p(t-s)\bigg)
\mathbb E\bigg[\|u(s)-\tilde u(s)\|_{\mathbb L^2}^p\bigg] ds \\
&\quad +C\mathbb E\bigg[\|S(t)-\tilde S(t)\|_{\mathbb L^2}^p\bigg].
\end{align*}
Since $\sup\limits_{t\in I}\sup\limits_{\alpha\in \mathbb N_+}|\phi_\alpha(t)|\le 1$, we obtain \eqref{uw} by Gronwall's inequality.
\end{proof}
\begin{rk}\label{rk-wz}
(i) If $\xi$ reduces to the space-time white noise, i.e., $H=1/2$, then for SHE we have
\begin{align*}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \bigg)^\frac1p
\le C (h^\frac12+k^\frac14),
\end{align*}
and for SWE we have
\begin{align*}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \bigg)^\frac1p
\le C(h^\frac12+k^\frac12),
\end{align*}
which shows that $\tilde u$ converges to $u$ as $h,k\rightarrow 0^+$ without any restriction on $h$ and $k$.
This estimate for SHE improves the related result in \cite[Theorem 2.3]{ANZ98}, where the authors proved the convergence of the Wong-Zakai approximation under the assumption that $h/k^\frac{1}{4}\rightarrow 0$:
\begin{align*}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^2\bigg] \bigg)^\frac12
&\le C(k^\frac14+h k^{-\frac14}).
\end{align*}
A similar result for SWE was also established in \cite[Theorem 2]{CY07}.
(ii) When $H<1/2$, the Wong-Zakai approximation converges only if
$k/h^{2-4H}\rightarrow 0$ for SHE and
$k/h^{1-2H}\rightarrow 0$ for SWE.
Moreover, the convergence rate is optimal if we set
$k=h^2$ for SHE and $k=h$ for SWE (see the short computation following this remark):
\begin{align*}
\sup_{t\in I}\bigg(\mathbb E\bigg[ \|u (t)-\tilde u(t)\|_{\mathbb L^2}^p\bigg] \bigg)^\frac1p
\le C h^H.
\end{align*}
(iii) In much of the existing literature (see, e.g., \cite{KLS10, Yan05}), the order of convergence of various numerical discretizations for SEEs includes an infinitesimal loss $\epsilon$. Our results remove this factor.
\end{rk}
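To make the balancing in item (ii) of the preceding remark explicit (simple arithmetic, added for convenience): for the SHE with $k=h^2$,
\begin{align*}
h^H+k^\frac14 h^{H-\frac12}=h^H+h^{\frac12} h^{H-\frac12}=2h^H,
\end{align*}
and similarly for the SWE with $k=h$ one gets $h^H+k^\frac12 h^{H-\frac12}=2h^H$.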
\section{Galerkin approximations for regularized equations}
\label{sec4}
In this section, we apply the Galerkin methods to spatially discretize the regularized equation \eqref{spde-dis} and conduct the error estimates. Specifically, we apply the Galerkin finite element method to discretize the regularized SHE and the
spectral Galerkin method to discretize the regularized SWE.
\subsection{Galerkin finite element method for regularized SHE}
Let $V_h\subset \dot{\mathbb H}^1$ be a family of linear finite element spaces, i.e., $V_h$ consists of continuous piecewise linear functions with respect to the same partition $\{\mathcal O_j\}_{j=0}^{n-1}$ of $\mathcal O$ as in Section \ref{sec3}.
To introduce the finite element formulation for the regularized SHE \eqref{spde-dis}, we use $P_h:\mathbb L^2\rightarrow V_h$ to denote the orthogonal projection operator defined by $(P_h u,v)=(u,v)$ for any $u\in \mathbb L^2$ and $v\in V_h$, and $R_h:\dot{\mathbb H}^1\rightarrow V_h$ to denote the Ritz projection operator defined by
$(\nabla R_hu, \nabla v)=(\nabla u,\nabla v)$ for any $u\in \dot{\mathbb H}^1$ and $v\in V_h$, where $(\cdot,\cdot)$ denotes the inner product in $\mathbb L^2$.
Then the semidiscrete Galerkin finite element approximation for Eq. \eqref{spde-dis} is to find an $\tilde u _h\in V_h$ such that $ \tilde u _h(0)=P_h u_0$ and for any $t>0$,
\begin{align}\label{fem}
d \tilde u _h(t)
=\Delta_h \tilde u _h(t)dt+P_h\Big(b( \tilde u _h(t))+\tilde \xi(t)\Big) dt,
\end{align}
where $\Delta_h$ is the discrete analogue of Dirichlet Laplacian defined by
$(-\Delta_h u,v)=(\nabla u, \nabla v)$ for any $u,v\in V_h$.
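In the nodal basis of $V_h$ this operator has a familiar matrix representation (standard, and recorded only for concreteness; the hat functions $\eta_i$ and the matrices $M$, $K$ are notation introduced here and not used elsewhere in the paper): writing $u=\sum_i U_i\eta_i$ and letting $M_{il}=(\eta_l,\eta_i)$ and $K_{il}=(\nabla \eta_l,\nabla \eta_i)$ denote the mass and stiffness matrices,
\begin{align*}
-\Delta_h u=\sum_{i}\big(M^{-1}KU\big)_i\,\eta_i .
\end{align*}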
Define $E(t)=e^{t \Delta}$ and $E_h(t)=e^{t \Delta_h}$ for $t\geq 0$.
Then Eq. \eqref{spde-dis} and Eq. \eqref{fem} admit unique mild solutions
\begin{align}\label{fem-mild0}
\tilde u (t)=E(t)u_0+\int_0^t E(t-s) \Big(b( \tilde u (s))+\tilde \xi(s)\Big) ds,
\quad t\in I
\end{align}
and respectively,
\begin{align}\label{fem-mild}
\tilde u _h(t)
=E_h(t)P_h u_0+\int_0^t E_h(t-s)P_h \Big(b( \tilde u _h(s))+\tilde \xi(s)\Big)ds,
\quad t\in I.
\end{align}
We note that the solution of the homogeneous equation $\partial_t u=\Delta u$ under the Dirichlet boundary condition \eqref{dbc} with initial data $v$ is smooth for positive time, due to the fact that the solution operator $E(t)$ of the initial value problem is an analytic semigroup satisfying (cf. \cite[Lemma 3.2]{Tho06})
\begin{align}\label{sem-smo}
\|E(t)v\|_\beta\le C t^{-\frac{\beta-\alpha}{2}} \|v\|_\alpha,
\quad t>0,\ 0\le \alpha\le \beta.
\end{align}
Define $F_h(t):=E(t)-E_h(t)P_h$.
The following error estimate will play an important role in our error analysis (cf. \cite{Tho06}, Theorem 3.5):
\begin{align}\label{smo}
\|F_h(t) v\|_{\mathbb L^2}
\le C h^\beta t^{-\frac{\beta-\alpha}{2}} \|v\|_\alpha,
\quad t>0,\ 0\le \alpha\le \beta\le 2.
\end{align}
\begin{tm}\label{fem-ord}
Let $p\ge 2$.
Assume that $u_0\in \mathbb L^p(\Omega; \dot \mathbb H^H)$.
Then for any $\epsilon>0$, there exists a constant $C=C(p,T,H,u_0)$ such that
\begin{align}\label{fem-ord1}
\sup_{t\in I}\bigg(\mathbb E\bigg[\| \tilde u (t)- \tilde u _h(t)\|_{\mathbb L^2}^p\bigg]\bigg)^\frac1p
\le C \Big(h^H+\epsilon^{-1}h^{\frac32-\epsilon} k^{-\frac12}\Big).
\end{align}
\end{tm}
\begin{proof} Denote $e_h(t):= \tilde u (t)- \tilde u _h(t)$.
Subtracting \eqref{fem-mild} from \eqref{fem-mild0}, we have
\begin{align}\label{fem1}
e_h(t)
&=F_h(t)u_0+\int_0^t F_h(t-s)\tilde \xi(s)ds+\int_0^t F_h(t-s)b( \tilde u (s))ds \nonumber \\
&\quad +\int_0^t E_h(t-s)P_h \Big( b( \tilde u (s))-b( \tilde u _h(s))\Big)ds.
\end{align}
The finite element estimate \eqref{smo} with $\alpha=\beta=H$ yields
\begin{align*}
\|F_h(t)u_0\|_{\mathbb L^p(\Omega; \mathbb L^2)}
\le C h^H \|u_0\|_{\mathbb L^p(\Omega; \dot{\mathbb H}^H)}
\le C h^H.
\end{align*}
By Minkowski's inequality, \eqref{smo} with $\alpha=0$ and $\beta=2-\epsilon$, and \eqref{xi}, we have
\begin{align*}
&\left\|\int_0^t F_h(t-s)\tilde \xi(s)ds\right\|_{\mathbb L^p(\Omega; \mathbb L^2)} \\
&\le C \int_0^t \left\|F_h(t-s)\tilde \xi(s)\right\|_{\mathbb L^p(\Omega; \mathbb L^2)} ds\nonumber \\
&\le C \sup_{t\in I}\|\tilde \xi(t)\|_{\mathbb L^p(\Omega; \mathbb L^2)}
\sup_{t\in I} \left(\int_0^t h^{2-\epsilon} (t-s)^{-(1-\frac{\epsilon}{2})}ds\right) \\
&\le C \epsilon^{-1}h^{\frac32-\epsilon} k^{-\frac12}.
\end{align*}
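The factor $\epsilon^{-1}$ comes from the elementary integral (spelled out here for clarity)
\begin{align*}
\int_0^t (t-s)^{-(1-\frac{\epsilon}{2})}ds=\frac{2}{\epsilon}t^{\frac{\epsilon}{2}}\le\frac{2}{\epsilon}T^{\frac{\epsilon}{2}}.
\end{align*}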
Similarly, by the Lipschitz continuity of $b$ and \eqref{bon-she} we have
\begin{align*}
\left\|\int_0^t F_h(t-s)b( \tilde u (s))ds\right\|_{\mathbb L^p(\Omega; \mathbb L^2)}
\le C \epsilon^{-1} h^{2-\epsilon}.
\end{align*}
To estimate the last term on the right hand side of \eqref{fem1}, we note that, from the smoothness property \eqref{sem-smo} and the finite element estimate \eqref{smo},
$\|E_h(t)P_hv\| \le C \|v\|$.
As a consequence,
\begin{align*}
&\left\|\int_0^t E_h(t-s)P_h \Big(b( \tilde u (s))-b( \tilde u _h(s)) \Big) ds \right\|_{\mathbb L^p(\Omega; \mathbb L^2)} \\
&\le C \int_0^t \|e_h(s)\|_{\mathbb L^p(\Omega; \mathbb L^2)} ds.
\end{align*}
Combining the above estimations, we obtain
\begin{align*}
\sup_{t\in I}\|e_h(t)\|_{\mathbb L^p(\Omega; \mathbb L^2)}
\le C \Big(h^H+\epsilon^{-1}h^{\frac32-\epsilon} k^{-\frac12} \Big)
+C \int_0^t \|e_h(s)\|_{\mathbb L^p(\Omega; \mathbb L^2)} ds,
\end{align*}
from which we conclude \eqref{fem-ord1} by Gronwall's inequality.
\end{proof}
\begin{rk}
(i) In the derivation of the convergence rate of Galerkin finite element approximation for SHE, we do not need a priori regularity information about
$ \tilde u $ such as the estimates of $\|\partial_t \tilde u (t)\|_{\mathbb L^p(\Omega; \mathbb L^2)}$ and $\|\tilde u (t)\|_{\mathbb L^p(\Omega; \dot{\mathbb H}^2)}$ needed in \cite{ANZ98, DZ02}.
(ii) Moreover, when $H<1/2$ and $k=h^2$, we take $\epsilon=1/2-H$ and then (the underlying arithmetic is spelled out after this remark)
\begin{align*}
\sup_{t\in I}\bigg(\mathbb E\bigg[\| \tilde u (t)- \tilde u _h(t)\|_{\mathbb L^2}^p\bigg]\bigg)^\frac1p
\le C h^H.
\end{align*}
\end{rk}
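For completeness, the arithmetic behind item (ii) of the preceding remark: with $k=h^2$ and $\epsilon=\frac12-H$,
\begin{align*}
\epsilon^{-1}h^{\frac32-\epsilon} k^{-\frac12}
=\Big(\tfrac12-H\Big)^{-1}h^{\frac32-(\frac12-H)} h^{-1}
=\Big(\tfrac12-H\Big)^{-1}h^{H},
\end{align*}
so both terms in \eqref{fem-ord1} are of order $h^H$.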
\subsection{Spectral Galerkin method for regularized SWE}
In this subsection, we use the spectral Galerkin method to spatially discretize the regularized SWE and derive its rate of convergence.
Let $V_N:=\text{span}\{\varphi_\alpha\}_{\alpha=1}^N$ and $P_N:\mathbb L^2\rightarrow V_N$ the corresponding orthogonal projection operator defined by $(P_Nu,v)=(u,v)$ for any $v\in V_N$. Then the spectral Galerkin method for Eq. \eqref{spde-dis} is to find an $\tilde u _N\in V_N$ such that $ \tilde u _N(0)=P_Nu_0$ and for any $t>0$, $v\in V_N$,
\begin{align}\label{spe}
(\partial_t \tilde u _N(t),v)=(P_Nv_0,v)+\int_0^t ( \tilde u _N(s), \Delta v)ds+\int_0^t (b( \tilde u _N(s))+\tilde \xi(s), v)ds.
\end{align}
It is well-known that $\|P_N\|=1$, $\Delta P_N u=P_N \Delta u$ for any $u\in \dot{\mathbb H}^2$ and $\|u-P_N u\|_{\mathbb L^2} \le N^{-1}\|u\|_1$ for any $u\in \dot{\mathbb H}^1$ (cf. \cite[Lemma 4]{CY07}). To estimate the error between $ \tilde u (t)$ and $ \tilde u _N(t)$ we first split it into two parts as follows.
\begin{align}\label{div}
\tilde u (t)- \tilde u _N(t)
=\Big(\tilde u (t)-P_N \tilde u (t)\Big)
+\Big(P_N \tilde u (t)- \tilde u _N(t)\Big).
\end{align}
The first part has the following estimation.
\begin{prop}\label{reg}
Let $p\ge 2$.
Assume that $u_0\in \mathbb L^p(\Omega; \dot{\mathbb H}^1)$ and $v_0\in \mathbb L^p(\Omega; \mathbb L^2)$.
Then the approximate solution $\tilde u$ of the regularized SWE is in $L^\infty([0,T]; \mathbb L^p(\Omega; \dot\mathbb H ^1))$. Moreover, there exists a constant $C=C(p,T,H,u_0,v_0)$ such that
\begin{align}\label{u2}
\sup_{t\in I}\bigg(\mathbb E\bigg[\| \tilde u (t)-P_N \tilde u (t)\|_{\mathbb L^2}^p\bigg]\bigg)^\frac1p
\le CN^{-1} h^{H-1}.
\end{align}
\end{prop}
\begin{proof}
By the above property of $P_N$, we get
\begin{align*}
\|\tilde u (t)-P_N \tilde u (t)\|_{\mathbb L^2}
\le N^{-1}\|\tilde u (t)\|_1
\end{align*}
Thus to prove \eqref{u2} it suffices to show that
\begin{align}\label{u20}
\sup_{t\in I} \bigg(\mathbb E\bigg[\|\tilde u (t)\|_1^p\bigg] \bigg)^\frac1 p
\le C h^{H-1}.
\end{align}
The definition of the $\dot \mathbb H ^1$-norm and Young's inequality yield
\begin{align}
& \mathbb E\bigg[\|\tilde u(t)\|_1^p\bigg] \nonumber \\
&\le C\mathbb E\bigg[\bigg(\sum_{\alpha=1}^\infty \lambda_\alpha
(\varphi_\alpha, \omega(t,\cdot))^2 \bigg)^\frac p2\bigg] \nonumber \\
&\quad+C\mathbb E \Bigg[\bigg( \sum_{\alpha=1}^\infty\lambda_\alpha
\bigg(\varphi_\alpha, \int_0^t\int_{\mathcal O} G_{t-s}(\cdot,y)b( \tilde u (s,y))dsdy\bigg)^2 \bigg)^\frac p2\Bigg] \nonumber \\
&\qquad +C\mathbb E \Bigg[\bigg( \sum_{\alpha=1}^\infty\lambda_\alpha
\bigg(\varphi_\alpha, \int_0^t\int_{\mathcal O} G_{t-s}(\cdot,y)\tilde \xi(ds,dy)\bigg)^2 \bigg)^\frac p2 \Bigg] \nonumber \\
& =: F_1(t)+F_2(t)+F_3(t).\label{u21}
\end{align}
For SWE, it is clear that
$\sup\limits_{t\in I}|\phi_\alpha(t)|\le \lambda_\alpha^{-\frac12}$ and
$\sup\limits_{t\in I}|\phi_\alpha'(t)| \le 1$ for any $\alpha\in \mathbb N_+$.
Then
\begin{align}
& F_1(t) \\
&=C\mathbb E \Bigg[\bigg(\sum_{\alpha=1}^\infty\lambda_\alpha
\bigg(\varphi_\alpha, \int_{\mathcal O} G_t(\cdot,y)v_0(y)dy+\int_{\mathcal O} \frac{\partial}{\partial t}G_t(\cdot,y) u_0(y)dy\bigg)^2 \bigg)^\frac p2\Bigg] \nonumber \\
&\le C\mathbb E\Bigg[\bigg(\sum_{\alpha=1}^\infty \lambda_\alpha \phi^2_\alpha(t) (\varphi_\alpha,v_0)^2\bigg)^\frac p2 \Bigg]
+C\mathbb E\Bigg[\bigg(\sum_{\alpha=1}^\infty \lambda_\alpha |\phi'_\alpha(t)|^2
(\varphi_\alpha, u_0)^2\bigg)^\frac p2 \Bigg] \nonumber \\
&\le C\mathbb E\Bigg[\bigg(\sum_{\alpha=1}^\infty (\varphi_\alpha,v_0)^2\bigg)^\frac p2 \Bigg]
+C\mathbb E\Bigg[\bigg(\sum_{\alpha=1}^\infty \lambda_\alpha
(\varphi_\alpha, u_0)^2\bigg)^\frac p2 \Bigg] \nonumber \\
&\le C \bigg(\mathbb E\bigg[\|u_0\|_1^p\bigg]
+\mathbb E\bigg[\|v_0\|_{\mathbb L^2}^p\bigg]\bigg).\nonumber
\end{align}
Following a similar argument and combining the Lipschitz continuity of $b$ and the estimation \eqref{bon-swe}, we get
\begin{align}
F_2(t)
&=C\mathbb E\Bigg[\bigg(\sum_{\alpha=1}^\infty\lambda_\alpha \int_0^t \phi^2_\alpha(t-s)
(\varphi_\alpha, b(\tilde u (s)))^2 ds\bigg)^\frac p2\Bigg] \nonumber \\
&\le \mathbb E\bigg[\bigg( \int_0^t \|b(\tilde u (s))\|_{\mathbb L^2}^2 ds\bigg)^\frac p2\bigg] \nonumber \\
&\le C \bigg(1+\mathbb E\bigg[\|u_0\|_{\mathbb L^2}^p\bigg]
+\mathbb E\bigg[\|v_0\|_{-1}^p\bigg]\bigg) . \label{f2}
\end{align}
For the last term $F_3(t)$, since $\int_0^t\int_{\mathcal O} G_{t-s}(x,y)\tilde \xi(ds,dy)$ is Gaussian, we have
\begin{align*}
F_3(t)
&\le C\Bigg(\frac{1}{(kh)^2}\sum_{\alpha=1}^\infty\lambda_\alpha
\mathbb E\bigg[\bigg|\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} \int_{I_i}\int_{\mathcal O_j} \widehat{g}^{i,j}_{t,x}(s,y)\xi(ds,dy)\bigg|^2\bigg]\Bigg)^\frac p2,
\end{align*}
where $\widehat{g}^{i,j}_{t,x}(s,y)
=\left(\int_{I_i} \phi_\alpha(t-\tau)d\tau\right) \left(\int_{\mathcal O_j}\varphi_\alpha(z)dz\right)$, which is constant on $I_i\times \mathcal O_j$ and also depends on $\alpha$.
Similarly to the estimate between $S$ and $\tilde S$ in Theorem \ref{ww}, we have
\begin{align*}
& F_3(t) \\
&\le\frac C{(kh)^p}\Bigg(\sum_{\alpha=1}^\infty\lambda_\alpha\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\int_{I_i}\int_{\mathcal O_j}\int_{\mathcal O_j}
\frac{ |\widehat{g}_{t,x}^{i,j}(s,y)-\widehat{g}_{t,x}^{i,j}(s,z)|^2}
{|y-z|^{2-2H}} dydzds\Bigg)^\frac p2 \nonumber \\
&\ +\frac C{(kh)^p} \Bigg(\sum_{\alpha=1}^\infty\lambda_\alpha\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\int_{I_i}\int_{\mathcal O_j} |\widehat{g}_{t,x}^{i,j}(s,y)|^2
\Big( y^{2H-1}+(1-y)^{2H-1}\Big)dsdy \Bigg)^\frac p2 \nonumber \\
&\quad+\frac C{(kh)^p} \Bigg(\sum_{\alpha=1}^\infty\lambda_\alpha\sum_{i=0}^{m-1}\sum_{j\neq l}\int_{I_i}\int_{\mathcal O_j}\int_{\mathcal O_l}
\frac{|\widehat{g}_{t,x}^{i,j}(s,y)|\cdot |\widehat{g}_{t,x}^{i,l}(s,z)|}
{|y-z|^{2-2H}} dydzds \Bigg)^\frac p2 \nonumber\\
&=:F_{31}(t)+F_{32}(t)+F_{33}(t).
\end{align*}
The first term $F_{31}(t)$ vanishes, since $\widehat{g}^{i,j}_{t,x}(s,\cdot)$ is constant on each cell $\mathcal O_j$:
\begin{align*}
F_{31}(t)=0.
\end{align*}
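Before estimating $F_{32}(t)$ and $F_{33}(t)$, we record the elementary identity that explains how the cosine functions $\psi_\alpha$ enter the bounds below (a one-line computation, added for clarity):
\begin{align*}
\int_{\mathcal O_j}\varphi_\alpha(z)dz
=\sqrt2\int_{x_j}^{x_{j+1}}\sin\big(\sqrt{\lambda_\alpha}\,z\big)dz
=\frac{\psi_\alpha(x_j)-\psi_\alpha(x_{j+1})}{\sqrt{\lambda_\alpha}},
\end{align*}
so that $\lambda_\alpha\big(\int_{\mathcal O_j}\varphi_\alpha(z)dz\big)^2=|\psi_\alpha(x_{j+1})-\psi_\alpha(x_j)|^2$.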
It follows from the inequalities \eqref{phi} and \eqref{sin-sum} that
\begin{align*}
F_{32}(t)
&\le \frac C{h^p}\Bigg[\sum_{j=0}^{n-1}\sum_{\alpha=1}^\infty\lambda_\alpha
\bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg)
\bigg(\int_{\mathcal O_j}\varphi_\alpha(z)dz \bigg)^2 \\
&\qquad\qquad \bigg(\int_{\mathcal O_j} \Big( y^{2H-1}+(1-y)^{2H-1} \Big)dy\bigg) \Bigg]^\frac p2 \\
&\le \frac C{h^p} \Bigg[\sum_{j=0}^{n-1}
\bigg(\sum_{\alpha=1}^\infty \frac{[\psi_\alpha(x_{j+1})-\psi_\alpha(x_j)]^2}{\lambda_\alpha} \bigg) \\
&\qquad \qquad \bigg(\int_{\mathcal O_j} \Big( y^{2H-1}+(1-y)^{2H-1} \Big)dy\bigg)\Bigg]^\frac p2
\le C h^{-\frac p2}.
\end{align*}
By \eqref{sin-sum} in Lemma \ref{lm-re} and the estimate \eqref{dif}, we have
\begin{align*}
F_{33}(t)
&\le\frac C{h^p} \Bigg(\sum_{\alpha=1}^\infty\lambda_\alpha
\bigg( \int_0^t \phi_\alpha^2(t-s)ds \bigg) \\
&\qquad \qquad\qquad \Bigg[\sum_{j\neq l}\bigg(\bigg(\int_{\mathcal O_j}\varphi_\alpha(r)dr\bigg)^2+\bigg(\int_{\mathcal O_l}\varphi_\alpha(r)dr\bigg)^2\bigg) \\
&\qquad \qquad\qquad \qquad \qquad
\bigg(\int_{\mathcal O_j}\int_{\mathcal O_l} |y-z|^{2H-2} dydz\bigg) \Bigg] \Bigg)^\frac p2 \\
&\le \frac C{h^p}\Bigg[ \sum_{j\neq l}
\bigg(\int_{\mathcal O_j}\int_{\mathcal O_l} |y-z|^{2H-2} dydz\bigg) \\
&\qquad \bigg(\sum_{\alpha=1}^\infty
\frac{|\psi_\alpha(x_{j+1})-\psi_\alpha(x_j)|^2}{\lambda_\alpha}
+\sum_{\alpha=1}^\infty
\frac{|\psi_\alpha(x_{l+1})-\psi_\alpha(x_l)|^2}{\lambda_\alpha}\bigg)
\Bigg]^\frac p2 \\
&\le C h^{(H-1)p}.
\end{align*}
Thus we obtain
\begin{align}\label{f33}
F_3(t)\le C h^{(H-1)p}.
\end{align}
Combining the estimations \eqref{u21}-\eqref{f33}, we conclude \eqref{u20}.
\end{proof}
\begin{tm}\label{spe-ord0}
Let $p\ge 2$.
Assume that $u_0\in \mathbb L^p(\Omega; \dot{\mathbb H}^1)$ and $v_0\in \mathbb L^p(\Omega; \mathbb L^2)$.
Then there exists a constant $C=C(p,T,H,u_0,v_0)$ such that
\begin{align}\label{spe-ord00}
\sup_{t\in I}\bigg(\mathbb E\bigg[\| \tilde u (t)- \tilde u _N(t)\|_{\mathbb L^2}^p\bigg]\bigg)^\frac1p
\le C N^{-1} h^{H-1}.
\end{align}
\end{tm}
\begin{proof} The weak formulation of \eqref{spde-dis} reads, for any $v\in V_N$,
\begin{align*}
(\partial_t \tilde u (t),v)=(v_0,v)+\int_0^t ( \tilde u (s), \Delta v)ds+\int_0^t (b( \tilde u (s))+\tilde \xi(s), v)ds.
\end{align*}
Since $\Delta P_N u=P_N \Delta u$ for any $u\in \dot{\mathbb H}^2$, we get for any $v\in V_N$,
\begin{align}\label{wea-pro}
(P_N\partial_t \tilde u (t),v)
&=(P_Nv_0,v)+\int_0^t (P_N \tilde u (s), \Delta v)ds \nonumber \\
&\quad +\int_0^t (b( \tilde u (s))+\tilde \xi(s), v)ds.
\end{align}
Set $v(t)=\partial_t (P_N \tilde u (t)- \tilde u _N(t))$. Then $v(t)$ is an element of $V_N$ for any $t>0$. Subtracting \eqref{wea-pro} from \eqref{spe}, we obtain
\begin{align*}
&\|\partial_t (P_N \tilde u (t)- \tilde u _N(t))\|_{\mathbb L^2}^2 \\
&=\int_0^t (P_N \tilde u (s)- \tilde u _N(s), \Delta [\partial_t (P_N \tilde u (s)- \tilde u _N(s))])ds \\
&\quad+\int_0^t (b( \tilde u (s))-b( \tilde u _N(s)), \partial_t (P_N \tilde u (s)- \tilde u _N(s)))ds \\
&=-\|\nabla (P_N \tilde u (t)-\tilde u _N(t))\|_{\mathbb L^2}^2 \\
&\quad +\int_0^t (b( \tilde u (s))-b( \tilde u _N(s)), \partial_t (P_N \tilde u (s)- \tilde u _N(s)))ds.
\end{align*}
By the Cauchy--Schwarz inequality and the Lipschitz continuity of $b$, we have
\begin{align*}
&\|\partial_t (P_N \tilde u (t)- \tilde u _N(t))\|_{\mathbb L^2}^2
+\|\nabla (P_N \tilde u (t)- \tilde u _N(t))\|_{\mathbb L^2}^2 \\
&\le C \int_0^t \|\partial_t (P_N \tilde u (s)- \tilde u _N(s))\|_{\mathbb L^2}^2 ds
+\int_0^t \| \tilde u (s)- \tilde u _N(s)\|_{\mathbb L^2}^2 ds,
\end{align*}
which in turn, by the classical Gronwall's inequality, yields
\begin{align*}
\|\nabla (P_N \tilde u (t)- \tilde u _N(t))\|_{\mathbb L^2}^2
\le C \int_0^t \| \tilde u (s)- \tilde u _N(s)\|_{\mathbb L^2}^2 ds.
\end{align*}
By Poincar\'{e}'s inequality we have that
\begin{align}\label{spe2}
\|P_N \tilde u (t)- \tilde u _N(t)\|_{\mathbb L^2}^2
& \le C \|\nabla (P_N \tilde u (t)- \tilde u _N(t))\|_{\mathbb L^2}^2 \nonumber \\
& \le C \int_0^t \| \tilde u (s)- \tilde u _N(s)\|_{\mathbb L^2}^2 ds.
\end{align}
Combining \eqref{div}, \eqref{u2} and \eqref{spe2}, we conclude \eqref{spe-ord00} by Gronwall's inequality.
\end{proof}
\begin{rk}
For the space-time white noise, i.e., $H=1/2$, the above convergence result slightly improves on the corresponding result of \cite[Theorem 4]{CY07}, where the authors proved that
\begin{align*}
\sup_{t\in I} \mathbb E\bigg[\| \tilde u (t)- \tilde u _N(t)\|_{\mathbb L^2}^2\bigg]
\le C h |\ln h|
\end{align*}
provided that $N=h^{-1}$, $u_0\in \mathbb L^2(\Omega; \dot{\mathbb H}^{\beta+1})$, and $v_0\in \mathbb L^2(\Omega; \dot{\mathbb H}^\beta)$ for some $\beta>1/2$. Indeed, taking $N=h^{-1}$ and $p=2$ in \eqref{spe-ord00} with $H=1/2$ yields $\sup_{t\in I}\mathbb E[\| \tilde u (t)- \tilde u _N(t)\|_{\mathbb L^2}^2]\le C h$, which removes the logarithmic factor $|\ln h|$.
\end{rk}
\bibliographystyle{amsalpha}
| {
"timestamp": "2017-03-21T01:10:32",
"yymm": "1601",
"arxiv_id": "1601.02085",
"language": "en",
"url": "https://arxiv.org/abs/1601.02085",
"abstract": "In this paper, we analyze Galerkin approximations for stochastic evolution equations driven by an additive Gaussian noise which is temporally white and spatially fractional with Hurst index less than or equal to $1/2$. First we regularize the noise by the Wong-Zakai approximation and obtain its optimal order of convergence. Then we apply the Galerkin method to discretize the stochastic evolution equations with regularized noises. Optimal error estimates are obtained for the Galerkin approximations. In particular, our error estimates remove an infinitesimal factor which appears in the error estimates of various numerical methods for stochastic evolution equations in existing literatures.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Approximating Stochastic Evolution Equations with Additive White and Rough Noises",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969650796875,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7099044331682236
} |
https://arxiv.org/abs/1812.02235 | The Hamiltonian Circuit Polytope | The hamiltonian circuit polytope is the convex hull of feasible solutions for the circuit constraint, which provides a succinct formulation of the traveling salesman and other sequencing problems. We study the polytope by establishing its dimension, developing tools for the identification of facets, and using these tools to derive several families of facets. The tools include necessary and sufficient conditions for an inequality to be facet defining, and an algorithm for generating all undominated circuits. We use a novel approach to identifying families of facet-defining inequalities, based on the structure of variable indices rather than on subgraphs such as combs or subtours. This leads to our main result, a hierarchy of families of facet-defining inequalities and polynomial-time separation algorithms for them. | \section{Introduction.}\label{intro}
The {\em circuit constraint} \cite{Lau78,CasLab97,ShuBer94} requires that a sequence of vertices in
a directed graph define a hamiltonian circuit. Given a directed graph $G$ on vertices $1, \ldots, n$, the constraint is written
\begin{equation}
\mbox{circuit$(x_1, \ldots, x_n)$} \label{eq:cir}
\end{equation}
where variable $x_i$ denotes the vertex that follows vertex $i$ in the
sequence. The constraint
requires that $x=(x_1, \ldots, x_n)$ describe a hamiltonian
circuit of $G$. For brevity, we will say that an $x$ satisfying
(\ref{eq:cir}) is a {\em circuit}.
We define the {\em hamiltonian circuit polytope} to be the convex hull of the feasible
solutions of (\ref{eq:cir}) when $G$ is a complete graph. Thus if the {\em domain} $D_i$ of variable $x_i$ is the
set of values $x_i$ can take, we suppose that each $D_i=\{1, \ldots, n\}$. To our
knowledge, this polytope has not been studied. Our objective is to establish its basic properties and provide tools for identifying classes of facets of the polytope. We use these tools to describe several families of facets. In particular, we identify a hierarchy of families of facets, along with polynomial-time separation algorithms.
A circuit should be distinguished from a permutation. Although a circuit $x=(x_1, \ldots, x_n)$ is always a permutation of $(1, \ldots, n)$, a permutation is not necessarily a circuit. For example, $(x_1,x_2,x_3,x_4)=(3,4,2,1)$ is a circuit that goes from 1 to 3 to 2 to 4, and back to 1. However, the permutation $(x_1,x_2,x_3,x_4)=(3,4,1,2)$ is not a circuit because it contains two subtours (1 to 3 to 1, 2 to 4 to 2). If the domain of each $x_i$ is $\{1, \ldots, n\}$, then $n!$
values of $x$ are permutations but only $(n-1)!$ of these are
circuits.
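Whether a vector of successors is a circuit is easy to test computationally. The following Python sketch (our illustration, not part of the development; the function name is ours) checks the two vectors above:
\begin{verbatim}
def is_circuit(x):
    # x[i] = vertex that follows vertex i+1 (values are 1-based, list is 0-based)
    n = len(x)
    seen, v = set(), 1
    for _ in range(n):          # follow successors starting from vertex 1
        seen.add(v)
        v = x[v - 1]
    return v == 1 and len(seen) == n   # back at 1 after visiting all n vertices

print(is_circuit([3, 4, 2, 1]))   # True:  1 -> 3 -> 2 -> 4 -> 1
print(is_circuit([3, 4, 1, 2]))   # False: subtours 1 -> 3 -> 1 and 2 -> 4 -> 2
\end{verbatim}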
The convex hull of permutations of $1, \ldots, n$ is the {\em permutohedron}, which has been studied for at least a century \cite{Sch1911}. The permutohedron is well understood and quite different from the hamiltonian circuit polytope, although we will see that they have some facets in common.
The paper is organized as follows. We begin by clarifying the connection between the circuit constraint and the traveling salesman problem, and how facets identified here can provide lower bounds for the problem. We then introduce general variable domains and establish the dimension of the hamiltonian circuit polytope for an arbitrary domain. Following this, we develop two tools for identifying facets of the polytope: (a) necessary and sufficient conditions for an inequality with at most $n-4$ variables to be facet-defining, stated in terms of {\em undominated} circuits; and (b) a simple greedy algorithm that generates all undominated circuits, along with a proof of its completeness.
We then apply these tools to analyze the structure of the hamiltonian circuit polytope. A key element of the analysis is a novel approach to identifying families of facets. Rather than associate facet-defining inequalities with graphical substructures such as combs and subtours, we associate them with the position of their variables in the sequence $x_1, \ldots, x_n$. Different patterns of variable indices give rise to different classes of facets.
We first describe a family of inequalities that are facet defining for both the permutohedron and the hamiltonian circuit polytope, and we provide an exhaustive list of two-term facets. We then proceed to our main result, which is a hierarchy of facets of increasing combinatorial complexity. We explicitly describe the facets on levels~0, 1 and~2 of the hierarchy and show how similar analysis can identify facets on higher levels. We conclude by presenting polynomial-time separation algorithms for all families of facets identified here. The algorithms yield a separating cut for each family whenever one exists.
\section{Sequencing Problems}
The circuit constraint is useful for formulating combinatorial
problems that involve permutations or sequencing. One of the best
known such problems is the {\em traveling salesman problem} (TSP), which may be
very succinctly written
\begin{equation}
\begin{array}{l}
{\displaystyle \min \; \sum_{i=1}^n c_{ix_i}
} \vspace{1ex} \\
\mbox{circuit}(x_1, \ldots, x_n), \;\; x_i\in D_i, \; i=1, \ldots n
\vspace{.5ex}
\end{array} \label{eq:tsp}
\end{equation}
where $c_{ij}$ is the distance from city $i$ to city $j$. The
objective is to visit each city once, and return to the starting
city, in such a way as to minimize the total travel distance.
The facet-defining inequalities we obtain for the hamiltonian circuit polytope can be used to obtain lower bounds on the optimal value of the TSP and related problems. Bounds of this sort can be indispensable for solving the problem. In addition, domain filtering methods developed elsewhere \cite{CasLab97,KayHoo06,ShuBer94} for the circuit constraint can be useful for eliminating infeasible values from the variable domains.
Bounds are normally obtained for the TSP by formulating it with \mbox{0--1} variables $y_{ij}$, where $y_{ij}=1$ if vertex $j$
immediately follows vertex $i$ in the hamiltonian circuit. The
problem (\ref{eq:tsp}) can then be written
\begin{equation}
\begin{array}{l}
{\displaystyle \min \;\sum_{ij} c_{ij}y_{ij}
} \vspace{1ex} \\
{\displaystyle \sum_j y_{ij} = \sum_j y_{ji} = 1, \;\; i=1, \ldots,
n
} \vspace{1ex} \\
{\displaystyle \sum_{\scriptsize
\begin{array}{@{}c@{}}
i\in V \\
j\not\in V
\end{array}
} \hspace{-.5ex} y_{ij} \geq 1, \;\; \mbox{all $V\subset \{1,
\ldots, n\}$ with $2\leq |V|\leq n-2$}
} \vspace{.5ex} \\
y_{ij} \in \{0,1\}, \;\;\mbox{all $i,j$}
\end{array} \label{eq:tsp2}
\end{equation}
The polyhedral structure of problem (\ref{eq:tsp2}) has been
intensively analyzed, and surveys of this work may be found in
\cite{BalFis02,JunReiRin95,Nad02}. Bounds are obtained by solving a linear programming problem that minimizes the objective function in (\ref{eq:tsp2}) subject to valid inequalities for this problem, including facet-defining inequalities.
Although the objective function of model (\ref{eq:tsp}) is nonlinear, valid inequalities for (\ref{eq:tsp}) can be mapped into the \mbox{0--1} model (\ref{eq:tsp2}), where the \mbox{objective} function is linear. This is accomplished by the simple change of variable $x_i=\sum_j jy_{ij}$, which transforms linear inequalities in the variables $x_i$ into linear inequalities in the \mbox{0--1} variables $y_{ij}$. These can be combined with valid inequalities that have been developed for the \mbox{0--1} model, so as to obtain a lower bound on the objective function value.
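To illustrate the substitution with a concrete (illustrative) cut: the inequality $x_3+x_5\geq 3$ is valid for the circuit model because $x_3$ and $x_5$ must take distinct values in $\{1,\ldots,n\}$, and under $x_i=\sum_j jy_{ij}$ it becomes the linear \mbox{0--1} inequality
\[
\sum_{j=1}^n j\,(y_{3j}+y_{5j}) \;\geq\; 3,
\]
which can be added directly to the linear relaxation of (\ref{eq:tsp2}).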
This strategy is applied in \cite{BerHoo12,BerHoo13} to graph coloring problems. Facet-defining inequalities for a formulation in terms of finite-domain variables $x_i$ are transformed into valid inequalities for the standard \mbox{0--1} model. The resulting cuts are quite different from known classes of valid inequalities. They yield tighter bounds in substantially less computation time.
We leave to future research the question of how the valid inequalities obtained here compare with known valid cuts when mapped into the \mbox{0--1} model. Our focus is on the structure of the hamiltonian circuit polytope, which is an interesting object of study in its own right.
The {\em all-different constraint} \cite{Lau78,Reg94} provides a third formulation for the TSP, which may be written
\begin{equation}
\begin{array}{l}
{\displaystyle \min \; \sum_{i=1}^n c_{x_i x_{i+1}}
} \vspace{1ex} \\
\mbox{all-different}(x_1, \ldots, x_n), \;\;\; x_i\in\{1, \ldots, n\}, \; i=1,\ldots, n
\vspace{.5ex}
\end{array} \label{eq:tsp3}
\end{equation}
where $x_{n+1}$ is identified with $x_1$. The all-different constraint simply requires that $x_1,\ldots, x_n$ be a permutation of $1, \ldots, n$, and the convex hull of its solutions is the permutohedron. Although the facets of the permutohedron are well known (see Section~\ref{permutation}), they cannot be transformed into linear inequalities for the \mbox{0--1} model (\ref{eq:tsp2}) because the variables $x_i$ have a different meaning than in the circuit model (\ref{eq:tsp}). In addition, missing edges in the graph $G$ cannot be represented by removing elements from the domains $D_i$ as in (\ref{eq:tsp}).
\section{General Domains}
A peculiar characteristic of the circuit constraint is that the
values of its variables are indices of other variables. Because the
vertex immediately after $x_i$ is $x_{x_i}$, the value of $x_i$ must
index a variable. The numbers $1, \ldots, n$ are normally used as
indices, but this is an arbitrary choice. One could just as well
use any other set of distinct numbers, which would give rise to a
different polytope. Thus the hamiltonian circuit polytope cannot be
fully understood unless it is characterized for general numerical
domains, and not just for $1, \ldots, n$.
We therefore generalize the circuit constraint so that each domain
$D_i$ is drawn from an arbitrary set $\{v_1, \ldots, v_n\}$ of
nonnegative real numbers. The constraint is written
\begin{equation}
\mbox{circuit}(x_{v_1}, \ldots, x_{v_n}) \label{eq:cir2}
\end{equation}
It is convenient to assume $v_1<\cdots< v_n$. Thus
circuit$(x_0,x_{2.3},x_{3.1})$ is a well-formed circuit constraint
if the variable domains are subsets of $\{0,2.3,3.1\}$. The
nonnegativity of the $v_i$s does not sacrifice generality when the domains are finite, since one
can always translate the origin so that the feasible points lie in
the nonnegative orthant.
Most of the results stated here are valid for a general finite domain. However, to simplify notation we develop the facets in the hierarchy mentioned earlier only for $\{1, \ldots, n\}$.
To avoid an additional layer of subscripts, we will consistently
abuse notation by writing $x_{v_i}$ as $x_i$. We therefore write
the constraint (\ref{eq:cir2}) as (\ref{eq:cir}), with the understanding that $x=(x_1, \ldots, x_n)$ satisfies (\ref{eq:cir}) if and
only if $\pi_1, \ldots, \pi_n$ is a permutation of $1, \ldots,
n$, where $\pi_1=1$ and $v_{\pi_i} = x_{\pi_{i-1}}$ for $i=2,
\ldots, n$.
We define the hamiltonian circuit polytope $H_n(v)$ with respect to $v=(v_1,
\ldots, v_n)$ to be the convex hull of the feasible solutions of
(\ref{eq:cir}) for full domains; that is, each domain $D_i$ is
$\{v_1, \ldots, v_n\}$. All of the facet-defining inequalities
we identify for full domains are valid inequalities for
smaller domains, even if they may not define facets of the convex
hull.
\section{Dimension of the Polytope}
We begin by establishing the dimension of the hamiltonian circuit polytope.
\begin{theorem} \label{th:dimension}
The dimension of $H_n(v)$ is $n-2$ for $n=2,3$
and $n-1$ for $n \geq 4$.
\end{theorem}
\begin{proof}
The polytope $H_n(v)$ is a point $(v_2,v_1)$ for $n=2$
and the line segment from $(v_2,v_3,v_1)$ to $(v_3,v_1,v_2)$ for
$n=3$. In either case the dimension is $n-2$.
To prove the theorem for $n\geq 4$, note first that all feasible
points for (\ref{eq:cir}) satisfy
\begin{equation}
\sum_{i=1}^n x_i = \sum_{i=1}^n v_i \label{eq:affine}
\end{equation}
(Recall that $x_i$ is shorthand for $x_{v_i}$.) Thus, $H_n(v)$ has
dimension at most $n-1$. To show it has dimension exactly $n-1$, it
suffices to exhibit $n$ affinely independent points in $H_n(v)$.
Consider the following $n$ permutations of $v_1, \ldots, v_n$,
where the first $n-1$ permutations consist of $v_1$ followed by
cyclic permutations of $v_2, \ldots, v_n$. The last permutation
is obtained by swapping $v_{n-1}$ and $v_n$ in the first
permutation:
\begin{equation}
\begin{array}{l@{\hspace{1ex}}l@{\hspace{1ex}}l@{\hspace{1ex}}c@{\hspace{1ex}}l@{\hspace{1ex}}l@{\hspace{1ex}}l}
v_1, & v_2, & v_3, & \ldots, & v_{n-2}, & v_{n-1}, & v_n \\
v_1, & v_3, & v_4, & \ldots, & v_{n-1}, & v_n, & v_2 \\
v_1, & v_4, & v_5, & \ldots, & v_n, & v_2, & v_3 \\
& & & \vdots & & & \\
v_1, & v_{n-1}, & v_n, & \ldots, & v_{n-4}, & v_{n-3}, & v_{n-2} \\
v_1, & v_n, & v_2, & \ldots, & v_{n-3}, & v_{n-2}, & v_{n-1} \\
v_1, & v_2, & v_3, & \ldots, & v_{n-2}, & v_n, & v_{n-1}
\end{array} \label{eq:perm0}
\end{equation}
The rows of the following matrix correspond to circuit
representations of the above permutations. Thus row $i$ contains
the values $x_1, \ldots, x_n$ for the $i$th permutation in
(\ref{eq:perm0}).
\begin{equation}
\left[
\begin{array}{ccccccc}v_2 & v_3 & v_4 & \cdots & v_{n-1} & v_n & v_1 \\
v_3 & v_1 & v_4 & \cdots & v_{n-1} & v_n & v_2 \\
v_4 & v_3 & v_1 & \cdots & v_{n-1} & v_n & v_2 \\
\vdots & \vdots
& \vdots
& & \vdots & \vdots & \vdots \\
v_{n-1} & v_3 & v_4 & \cdots & v_1 & v_n & v_2 \\
v_n & v_3 & v_4 & \cdots & v_{n-1} & v_1 & v_2 \\
v_2 & v_3 & v_4 & \cdots & v_n & v_1 & v_{n-1}
\end{array}
\right] \label{eq:00}
\end{equation}
Since each row of (\ref{eq:00}) is a point in $H_n(v)$, it suffices
to show that the rows are affinely independent. Subtract $[v_n
\;\; v_3 \;\; v_4 \; \cdots \; v_{n-1} \;\; v_n \;\; v_2]$ from
every row of (\ref{eq:00}) to obtain
\begin{equation}
{\small \left[
\begin{array}{c@{\hspace{1ex}}c@{\hspace{1ex}}ccccc}
v_2-v_n & 0 & 0 & \cdots & 0 & 0 & v_1-v_2 \\
v_3-v_n & v_1-v_3 & 0 & \cdots & 0 & 0 & 0 \\
v_4-v_n & 0 & v_1-v_4 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\
v_{n-1}-v_n & 0 & 0 & \cdots & v_1-v_{n-1} & 0 & 0 \\
0 & 0 & 0 & \cdots & 0 & v_1-v_n & 0 \\
v_2-v_n & 0 & 0 & \cdots & v_n-v_{n-1} & v_1-v_n & v_{n-1}-v_2
\end{array}
\right] } \label{eq:01}
\end{equation}
The rows of (\ref{eq:00}) are affinely independent if and only if
the rows of (\ref{eq:01}) are. It now suffices to show that
(\ref{eq:01}) is nonsingular, and we do so through a series of row
operations. The first step is to subtract $(v_{n-1}-v_2)/(v_1-v_2)$
times row 1, $(v_n-v_{n-1})/(v_1-v_{n-1})$ times row $n-2$, and
row $n-1$ from row $n$ to obtain
\begin{equation}
{\small \left[
\begin{array}{ccccccc}
v_2-v_n & 0 & 0 & \cdots & 0 & 0 & v_1-v_2 \\
v_3-v_n & v_1-v_3 & 0 & \cdots & 0 & 0 & 0 \\
v_4-v_n & 0 & v_1-v_4 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\
v_{n-1}-v_n & 0 & 0 & \cdots & v_1-v_{n-1} & 0 & 0 \\
0 & 0 & 0 & \cdots & 0 & v_1-v_n & 0 \\
E_n & 0 & 0 & \cdots & 0 & 0 & 0
\end{array}
\right] } \label{eq:02}
\end{equation}
where
\[
E_n = -\frac{v_n-v_{n-1}}{v_{n-1}-v_1}(v_n-v_{n-1})
-\frac{v_{n-1}-v_1}{v_2-v_1}(v_n-v_2)
\]
Interchange the first and last rows of (\ref{eq:02}) to obtain
\begin{equation}
{\small \left[
\begin{array}{ccccccc}
E_n & 0 & 0 & \cdots & 0 & 0 & 0 \\
v_3-v_n & v_1-v_3 & 0 & \cdots & 0 & 0 & 0 \\
v_4-v_n & 0 & v_1-v_4 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\
v_{n-1}-v_n & 0 & 0 & \cdots & v_1-v_{n-1} & 0 & 0 \\
0 & 0 & 0 & \cdots & 0 & v_1-v_n & 0 \\
v_2-v_n & 0 & 0 & \cdots & 0 & 0 & v_1-v_2
\end{array}
\right] } \label{eq:03}
\end{equation}
Note that $E_n<0$ since $v_1<\cdots<v_n$. Thus (\ref{eq:03}) is
a lower triangular matrix with nonzero diagonal elements and is
therefore nonsingular.
\end{proof}
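As a quick numerical sanity check of this construction (it is not part of the proof), the following Python sketch builds the $n$ circuits of (\ref{eq:perm0}) for a sample domain and verifies that they are affinely independent; the helper names are ours.
\begin{verbatim}
import numpy as np

def successor_rep(order, v):
    # order: the values v_i in the order visited; returns x with x_i = value after v_i
    nxt = {order[k]: order[(k + 1) % len(order)] for k in range(len(order))}
    return [nxt[vi] for vi in v]

n = 7
v = list(range(1, n + 1))                     # any increasing domain values would do
orders = [[v[0]] + v[1:][k:] + v[1:][:k] for k in range(n - 1)]  # v_1 + cyclic shifts
orders.append(v[:n - 2] + [v[n - 1], v[n - 2]])                  # swap v_{n-1} and v_n
rows = np.array([successor_rep(o, v) for o in orders], dtype=float)
rank = np.linalg.matrix_rank(rows[1:] - rows[0])
print(rank)   # n - 1 = 6, so the n circuits are affinely independent
\end{verbatim}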
\section{Facet-Defining Inequalities}
We now develop necessary and sufficient conditions for an inequality containing at most $n-4$ variables to be facet defining for the hamiltonian circuit polytope. The
following lemma is key.
\begin{lemma} \label{le:zeroes}
Suppose that the inequality
\begin{equation}
\sum_{j\in J} a_jx_j \geq \alpha \label{eqineq}
\end{equation}
is valid for circuit$(x_1, \ldots, x_n)$ and is satisfied as an
equation by at least one circuit $x$. If $|J| \leq n-4$ and
\begin{equation}
\sum_{j=1}^n d_jx_j = \delta \label{eqeq}
\end{equation}
is satisfied by all circuits $x$ that satisfy (\ref{eqineq}) as an
equation, then $d_i=d_j$ for all $i,j\not\in J$.
\end{lemma}
\begin{proof}
Because $|J| \leq n-4$, it suffices to prove that
$d_{j_1}=d_{j_2}=d_{j_3}=d_{j_4}$ for any four distinct indices
$j_1, \ldots, j_4 \not\in J$.
Let $x^0$ be any circuit that satisfies (\ref{eqineq}) as an
equation, and let the permutation described by $x^0$ be
\vspace{-1ex}
\[
\begin{array}{l}
\hspace{-1.2ex} v_1,\ldots, v_{j_1-1},v_{j_1},v_{j_1+1},\ldots,v_{j_2-1},v_{j_2},
v_{j_2+1},\ldots,v_{j_3-1},v_{j_3},v_{j_3+1}, \ldots, v_{j_4-1},
v_{j_4}
\end{array}
\label{perm0}
\]
Consider the circuits $x^1, \ldots, x^5$ that describe the following
permutations, respectively:
\[
\begin{array}{l}
\hspace{-1.2ex} v_1,\ldots,v_{j_1-1},v_{j_1},v_{j_3+1},\ldots,v_{j_4-1},v_{j_4},v_{j_1+1},\ldots,v_{j_2-1},v_{j_2},v_{j_2+1},\ldots,v_{j_3-1},v_{j_3} \vspace{.5ex} \\
\hspace{-1.2ex} v_1,\ldots,v_{j_1-1}, v_{j_1},v_{j_2+1},\ldots,v_{j_3-1},v_{j_3},v_{j_3+1},\ldots,v_{j_4-1},v_{j_4},v_{j_1+1},\ldots,v_{j_2-1},v_{j_2} \vspace{.5ex} \\
\hspace{-1.2ex} v_1,\ldots, v_{j_1-1},v_{j_1},v_{j_2+1},\ldots,v_{j_3-1},v_{j_3},v_{j_1+1},\ldots,v_{j_2-1},v_{j_2},v_{j_3+1},\ldots,v_{j_4-1},v_{j_4} \vspace{.5ex} \\
\hspace{-1.2ex} v_1,\ldots, v_{j_1-1},v_{j_1},v_{j_1+1},\ldots,v_{j_2-1},v_{j_2},v_{j_3+1},\ldots,v_{j_4-1},v_{j_4}, v_{j_2+1},\ldots,v_{j_3-1},v_{j_3} \vspace{.5ex} \\
\hspace{-1.2ex} v_1,\ldots,v_{j_1-1},v_{j_1},v_{j_3+1},\ldots,v_{j_4-1},v_{j_4},v_{j_2+1},\ldots,v_{j_3-1},v_{j_3},v_{j_1+1},\ldots,v_{j_2-1},v_{j_2}
\vspace{.5ex}
\end{array}
\]
We obtain $x^1, \ldots, x^5$ from $x^0$ by viewing the permutation
represented by $x^0$ as a concatenation of four subsequences,
each ending in one of the values $v_{j_i}$. We fix the first
subsequence and obtain $x^1$ and $x^2$ by cyclically permuting the
remaining three subsequences. We obtain $x^3$, $x^4$ and $x^5$ by
interchanging a pair of subsequences.
Note that variables $x_{j_1}, \ldots, x_{j_4}$ have the values shown
below in each circuit $x^i$:
\[
\begin{array}{ccccl}
x_{j_1} & x_{j_2} & x_{j_3} & x_{j_4} &\vspace{.5ex} \\
\cline{1-4}
v_{j_1+1} & v_{j_2+1} & v_{j_3+1} & v_{1} & (x^0) \\
v_{j_3+1} & v_{j_2+1} & v_{1} & v_{j_1+1} & (x^1) \\
v_{j_2+1} & v_{1} & v_{j_3+1} & v_{j_1+1} & (x^2) \\
v_{j_2+1} & v_{j_3+1} & v_{j_1+1} & v_{1} & (x^3) \\
v_{j_1+1} & v_{j_3+1} & v_{1} & v_{j_2+1} & (x^4) \\
v_{j_3+1} & v_{1} & v_{j_1+1} & v_{j_2+1} & (x^5)
\end{array}
\]
and all other variables $x_j$ have value $x^0_j$ in each circuit
$x^i$. Thus all six circuits $x^0, \ldots,x^5$ satisfy
(\ref{eqineq}) at equality, so that $dx^i=\delta$ for
$i=0,\ldots,5$. This implies
\[
{\textstyle \frac{1}{2}}
\left[
\begin{array}{c}
(dx^0+dx^1+dx^5)-(dx^2+dx^3+dx^4) \vspace{.5ex} \\
(dx^0+dx^2+dx^5)-(dx^1+dx^3+dx^4) \vspace{.5ex} \\
(dx^0+dx^3+dx^5)-(dx^1+dx^2+dx^4) \vspace{.5ex}
\end{array}
\right]
\begin{array}{c}
=
\left[
\begin{array}{@{}c@{}}
0 \\ 0 \\ 0
\end{array}
\right] \\
\ \vspace{-1.7ex}
\end{array}
\]
Substituting the values of $x^0, \ldots, x^5$, we obtain
\[
\begin{array}{@{}c@{}}
\left[
\begin{array}{cccc}
v_{j_3+1}-v_{j_2+1} & v_{j_2+1}-v_{j_3+1} & 0 & 0 \\
0 & v_{1}-v_{j_3+1} & v_{j_3+1}- v_{1} & 0 \\
0 & 0 & v_{j_1+1}- v_{1} & v_{1}-v_{j_1+1}
\end{array}
\right]
\\
\ \vspace{-.5ex}
\end{array}
\left[
\begin{array}{@{}c@{}}
d_{j_1} \\ d_{j_2} \\ d_{j_3} \\ d_{j_4}
\end{array}
\right]
\begin{array}{@{}c@{}}
=
\left[
\begin{array}{@{}c@{}}
0 \\ 0 \\ 0
\end{array}
\right] \\
\ \vspace{-.5ex}
\end{array}
\]
from which we can conclude that $d_{j_1}=d_{j_2}=d_{j_3}=d_{j_4}$.
$\Box$
\end{proof}
Lemma~\ref{le:zeroes} applies only when $|J| \leq n-4$ because its proof relies on the absence of at least four variables from (\ref{eqineq}). The theorems below are therefore stated only for $|J| \leq n-4$. We conjecture that they also hold for the densest facets ($|J|>n-4$), but proof seems to require the analysis of several special cases that substantially complicate the argument. This slightly stronger result would be of little additional value for identifying useful families of facets.
For a given $x$, we denote by $x(J)$ the tuple
$(x_{j_1}, \ldots, x_{j_m})$ when $J=\{j_1, \ldots, j_m\}$. We say
that $x(J)$ is a {\em \mbox{$J$-circuit}} if it creates no cycles and is
therefore a partial solution of the circuit constraint. That is,
$x(J)$ is a \mbox{$J$-circuit} if there is no subsequence $j_{i_1}, \ldots,
j_{i_k}$ of the indices in $J$ such that $x_{j_{i_t}}=v_{j_{i_{t+1}}}$
for $t=1, \ldots, k-1$ and $x_{j_{i_k}}=v_{j_{i_1}}$. The following
lemma is straightforward, but its proof introduces notation we will
need later.
\begin{lemma} \label{le:projection}
If $\bar{x}(J)$ is a \mbox{$J$-circuit}, then there is a circuit $x$ such
that \mbox{$x(J)=\bar{x}(J)$}.
\end{lemma}
\begin{proof}
Let $J=\{j_1, \ldots, j_m\}$, and let $\{v_{i_1},
\ldots, v_{i_r}\}$ be the subset of domain values $v_1, \ldots,
v_n$ that occur in neither $\{v_{j_1}, \ldots, v_{j_m}\}$ nor
$\{ \bar{x}_{j_1},\ldots,\bar{x}_{j_m}\}$. Consider the directed
graph $G_{\bar{x}(J)}$ that contains a vertex $v_i$ for each
$i\in\{1, \ldots, \mbox{$n$}\}$, a directed edge
$(v_{j_k},\bar{x}_{j_k})$ for $k=1, \ldots, m$, and a directed edge
$(v_{i_k},v_{i_{k+1}})$ for each $k=1, \ldots, r-1$. The maximal
subchains of $G_{\bar{x}(J)}$ have the form
\[
\begin{array}{l}
v_{j_{k_1}} \rightarrow \cdots \rightarrow v_{j_{k'_1}} \rightarrow \bar{x}_{j_{k'_1}} \vspace{.5ex} \\
v_{j_{k_2}} \rightarrow \cdots \rightarrow v_{j_{k'_2}} \rightarrow \bar{x}_{j_{k'_2}} \\
\vdots \\
v_{j_{k_p}} \rightarrow \cdots \rightarrow v_{j_{k'_p}} \rightarrow \bar{x}_{j_{k'_p}} \vspace{.5ex} \\
v_{i_1} \rightarrow \cdots \rightarrow v_{i_r}
\end{array}
\]
Because
maximal subchains are disjoint, we can form a hamiltonian circuit in
$G_{\bar{x}(J)}$ by linking the last element of each subchain to the
first element of the next, and linking $v_{i_r}$ to $v_{k_1}$. Let
$v_{s_1}, \ldots, v_{s_n}$ be the resulting circuit. If $x$ is
given by $x_{s_i} = v_{s_{(i\,\mbox{\tiny mod} \, n) + 1}}$ for
$i=1, \ldots, n$, then $x$ is a circuit and $x(J)=\bar{x}(J)$.
$\Box$
\end{proof}
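The construction in this proof is easily implemented. The following Python sketch (our illustration; the function name and interface are ours) extends a cycle-free partial assignment to a circuit in the successor representation by chaining the unused vertices and then linking the maximal subchains into a single tour, exactly as above.
\begin{verbatim}
def complete_circuit(partial, v):
    # partial: {index i: value assigned to x_i}, assumed cycle-free (a J-circuit)
    # v: domain values v_1 < ... < v_n; returns the full circuit (x_1, ..., x_n)
    n = len(v)
    idx = {val: k + 1 for k, val in enumerate(v)}        # value -> vertex index
    succ = dict(partial)
    # vertices occurring neither as an index in J nor as an assigned value
    free = [i for i in range(1, n + 1)
            if i not in succ and v[i - 1] not in partial.values()]
    for a, b in zip(free, free[1:]):                     # chain the free vertices
        succ[a] = v[b - 1]
    # heads of the maximal subchains are the vertices with no incoming edge
    pointed = {idx[val] for val in succ.values()}
    heads = [i for i in range(1, n + 1) if i not in pointed]
    # link the tail of each subchain to the head of the next, closing the tour
    for k, h in enumerate(heads):
        tail = h
        while tail in succ:
            tail = idx[succ[tail]]
        succ[tail] = v[heads[(k + 1) % len(heads)] - 1]
    return [succ[i] for i in range(1, n + 1)]

# example: v = (1,...,7) and the J-circuit (x_1, x_3, x_4) = (2, 1, 3)
print(complete_circuit({1: 2, 3: 1, 4: 3}, list(range(1, 8))))
# [2, 5, 1, 3, 6, 7, 4], i.e. the circuit 1 -> 2 -> 5 -> 6 -> 7 -> 4 -> 3 -> 1
\end{verbatim}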
The concept of {\em domination} between $J$-circuits is central to identifying facets of $H_n(v)$, because inequality (\ref{eqineq}) is valid if and only if it is satisfied by all undominated \mbox{$J$-circuits}. If $(J_+,J_-)$ is a partition of $J$, we say that $x(J)$ dominates $y(J)$ with respect to $(J_+,J_-)$ when $x_j\leq y_j$ for all $j\in J_+$ and $x_j\geq y_j$ for all $j\in J_-$. A
\mbox{$J$-circuit} $x(J)$ is {\em undominated} with respect to $(J_+,J_-)$ if no other \mbox{$J$-circuit} dominates it with respect to $(J_+,J_-)$.
\begin{lemma} \label{le:valid}
Inequality (\ref{eqineq}) is valid for the hamiltonian circuit polytope if and only if it is satisfied by all undominated \mbox{$J$-circuits} with respect to $(J_+,J_-)$, where $J_+=\{j\;|\;a_j>0\}$ and $J_-=\{j\;|\;a_j<0\}$.
\end{lemma}
\begin{proof}
A valid inequality must be satisfied by all circuits. This means, due to Lemma~\ref{le:projection}, that it must be satisfied by all \mbox{$J$-circuits} and therefore by all undominated \mbox{$J$-circuits}. For the converse, suppose (\ref{eqineq}) is satisfied by all undominated \mbox{$J$-circuits}, and let $x$ be any circuit. Then $x(J)$ is dominated by some undominated \mbox{$J$-circuit} $x'(J)$ with respect to $(J_+,J_-)$, which means that $a_j(x_j - x'_j) \geq 0$ for all $j\in J$.
Thus we have
\[
\sum_{j \in J} a_jx_{j} \geq \sum_{j \in J} a_j x'_{j} \geq \alpha
\]
because $x'(J)$ satisfies (\ref{eqineq}), and so $x$ satisfies
(\ref{eqineq}). This shows (\ref{eqineq}) is valid.
$\Box$
\end{proof}
The following theorem provides sufficient conditions under which an inequality is facet defining.
\begin{theorem} \label{th:main}
Consider any inequality of the form (\ref{eqineq}). Let $S$ be the set of \mbox{$J$-circuits} that are undominated with respect
to $(J_+,J_-)$, where $J_+=\{j\;|\;a_j>0\}$, $J_-=\{j\;|\;a_j<0\}$, and $1\leq |J|\leq n-4$. If all \mbox{$J$-circuits} in $S$ satisfy (\ref{eqineq}) and at least $|J|$ affinely independent \mbox{$J$-circuits} satisfy
\begin{equation}
\sum_{j \in J} a_j x_{j} = \alpha
\label{+facet_eq}
\end{equation}
then (\ref{eqineq}) defines a facet of $H_n(v)$.
\end{theorem}\label{th:undom_facets}
\begin{proof}
Inequality (\ref{eqineq}) is valid by Lemma~\ref{le:valid}. To show (\ref{eqineq}) is facet defining, let (\ref{eqeq}) be any equation satisfied by all circuits $x$
that satisfy (\ref{eqineq}) at equality. Recall that all circuits satisfy (\ref{eq:affine}). It suffices to show that (\ref{eqeq}) is a linear combination of (\ref{+facet_eq}) and (\ref{eq:affine}).
Let $S=\{x^1(J), \ldots, x^m(J)\}$. By hypothesis at least one \mbox{$J$-circuit} satisfies (\ref{+facet_eq}), and it is dominated by some $x^i(J)\in S$. Domination gives $\sum_{j\in J}a_jx^i_j\leq\alpha$, while the hypothesis on $S$ gives $\sum_{j\in J}a_jx^i_j\geq\alpha$, so $x^i(J)$ satisfies (\ref{eqineq}) at equality. Lemma~\ref{le:projection} therefore implies that at least one circuit $x^i$ satisfies (\ref{eqineq})
at equality. Thus, since $|J| \leq n-4$, we have from
Lemma~\ref{le:zeroes} that $d_i=d_j$ for all $i,j \notin J$.
We first suppose that $d_j=0$ for all $j\notin J$. Then (\ref{eqeq}) has the form
\begin{equation}
\sum_{j \in J} d_j x_j=\delta
\label{eqeq1}
\end{equation}
Because $|J|$ affinely independent \mbox{$J$-circuits} satisfy (\ref{+facet_eq}) and therefore (\ref{eqeq1}), these two equations are
the same up to a scalar multiple. Thus (\ref{eqeq}) is a linear combination of (\ref{+facet_eq}) and (\ref{eq:affine}), where the latter has multiplier zero.
We now suppose that $d_j\neq 0$ for $j\notin J$. Because the $d_j$s are equal for all $j\notin J$, we can without loss of generality write (\ref{eqeq}) as
\[
\sum_{j\in J} d_jx_j + \sum_{j\notin J} x_j = \delta
\]
This is a linear combination of (\ref{+facet_eq}) and (\ref{eq:affine}) if the following is a scalar multiple of (\ref{+facet_eq}):
\begin{equation}
\sum_{j\in J} (d_j-1)x_j = \delta - \sum_{j=1}^n v_j
\label{eq_dJ2}
\end{equation}
But this follows from the fact that $|J|$ affinely independent \mbox{$J$-circuits} satisfy (\ref{+facet_eq}) and (\ref{eq_dJ2}).
$\Box$
\end{proof}
A simple corollary sometimes suffices to show that inequalities are facet defining.
\begin{corollary} \label{co:main}
If $J$ is as in Theorem~\ref{th:main}, (\ref{eqineq}) is valid, and at least $|J|$ affinely independent \mbox{$J$-circuits} satisfy (\ref{eqineq}) at equality, then (\ref{eqineq}) is facet defining.
\end{corollary}
\begin{proof}
If (\ref{eqineq}) is valid, then it is satisfied by all undominated \mbox{$J$-circuits}, and the conditions of Theorem~\ref{th:main} apply.
$\Box$
\end{proof}
To apply Theorem~\ref{th:main} (or Corollary~\ref{co:main}), one must identify a set of affinely independent $J$-circuits. However, the number of circuits required is only the number $|J|$ of terms included in the facet-defining inequality, as opposed to $n$ circuits in traditional arguments based on affine independence. The theorem can therefore be regarded as a lifting lemma. It will allow us to exploit patterns in the selection of terms to be included, so as to establish several classes of facets.
Finally, we note that the conditions of Theorem~\ref{th:main} are necessary as well as sufficient for (\ref{eqineq}) to be facet defining.
\begin{theorem} \label{th:main2}
Consider any inequality (\ref{eqineq}) that is facet-defining for a
hamiltonian circuit polytope $H_n(v)$. Let $J_+=\{j\;|\;a_j>0\}$ and
$J_-=\{j\;|\;a_j<0\}$. Then (\ref{eqineq}) is satisfied by all undominated \mbox{$J$-circuits} with respect to $(J_+,J_-)$, and at least $|J|$ affinely independent \mbox{$J$-circuits} satisfy (\ref{+facet_eq}).
\end{theorem}
\begin{proof}
Because (\ref{eqineq}) is valid, Lemma~\ref{le:valid} implies that it is satisfied by all undominated \mbox{$J$-circuits}. Furthermore, because (\ref{eqineq}) is facet defining, it is
satisfied at equality by $n$ affinely independent circuits
$\bar{x}^1, \ldots, \bar{x}^n$. Then $\{\bar{x}^1(J), \ldots,
\bar{x}^n(J)\}$ contains some subset $\{\bar{x}^{j_1}(J), \ldots,
\bar{x}^{j_m}(J)\}$ of $|J|=m$ affinely independent
\mbox{$J$-circuits}, which satisfy (\ref{+facet_eq}).
$\Box$
\end{proof}
\section{Generating Undominated Circuits}
\label{greedy}
A simple greedy procedure can be used to generate all \mbox{$J$-circuits}
$\bar{x}(J)$ that are undominated with respect to $(J_+,J_-)$.
It is applied for each ordering $j_1, \ldots, j_m$ of the elements
of $J$. First, let $\bar{x}_{j_1}$ be the smallest domain value
$v_i$ if $j_1\in J_+$, or the largest if $j_1\in J_-$. Then let
$\bar{x}_{j_2}$ be the smallest (or largest) remaining domain value
that does not create a cycle. Continue until all $\bar{x}_j$ for
$j\in J$ are defined. The precise algorithm appears in
Fig.~\ref{fig:greedy}.
\begin{figure}
\centering
\fbox{
\parbox[c]{6in}{
\begin{tabbing}
xxx \= xxx \= xxx \= xxx \= \kill
For each ordering $j_1, \ldots, j_m$ of the elements of $J$: \\
\> Let $\bar{J}=\{1, \ldots, n\}$ and $J'=\emptyset$. \\
\> For $i=1, \ldots, m$: \\
\> \> Add $j_i$ to $J'$. \\
\> \> If $j_i \in J_+$ then let $\bar{x}_{j_i}$ be the minimum value $v_k$ in $\{v_r\;|\;r\in \bar{J}\}$ \\
\> \> \> such that $\bar{x}(J')$ is a $J'$-circuit. \\
\> \> Else let $\bar{x}_{j_i}$ be the maximum value $v_k$ in $\{v_r\;|\;r\in \bar{J}\}$ \\
\> \> \> such that $\bar{x}(J')$ is a $J'$-circuit. \\
\> \> Remove $k$ from $\bar{J}$. \\
\> Add $\bar{x}(J)$ to the list of undominated \mbox{$J$-circuits}.
\end{tabbing}
}
}
\vspace{-1ex}
\caption{Greedy procedure for generating undominated \mbox{$J$-circuits}. Input: tuple $v$ of domain values, index set $J$, and partition $(J_+,J_-)$ of $J$. Output: a complete list of \mbox{$J$-circuits} that are undominated with respect to $(J_+,J_-)$.}
\label{fig:greedy}
\vspace{2ex}
\end{figure}
To prove that the greedy procedure is correct, it is convenient to write $x_j\prec y_j$ when either $x_j < y_j$ and $j\in J_+$ or $x_j > y_j$ and $j\in J_-$.
\begin{theorem} \label{th:greedy}
The greedy procedure of Fig.~\ref{fig:greedy} generates \mbox{$J$-circuits}
that are undominated with respect to $(J_+,J_-)$.
\end{theorem}
\begin{proof}
Let $\bar{x}(J)$ be a \mbox{$J$-circuit} generated by the
procedure for a given ordering $j_1, \ldots, j_m$. To see that
$\bar{x}(J)$ is undominated with respect to $(J_+,J_-)$, assume
otherwise. Then there exists a \mbox{$J$-circuit} $\bar{y}(J)$ that dominates $\bar{x}(J)$ such that $\bar{y}_{j_t} \prec \bar{x}_{j_t}$ for some $t\in\{1, \ldots, m\}$. Let $t$ be the
smallest such index, so that $\bar{x}_{j_k} = \bar{y}_{j_k}$ for
$k=1, \ldots, t-1$. This contradicts the greedy construction of
$\bar{x}$, because $\bar{y}_{j_t}$ is available when $\bar{x}_{j_t}$
is assigned to $x_{j_t}$. $\Box$
\end{proof}
As an example, consider circuit$(x_1,\ldots,x_7)$ where each $x_j$ has domain $\{v_1, \ldots, v_7\}$. The undominated \mbox{$J$-circuits} of $J=\{1,3,4\}$ with respect to $(J,\emptyset)$ can be generated by considering the six orderings of $1,3,4$ listed on the left below. The
resulting undominated \mbox{$J$-circuits} appear on the right.
\[
\begin{array}{c@{\hspace{5ex}}c}
(j_1,j_2,j_3) & (x_1,x_3,x_4) \\
\ \vspace{-2.4ex} \\
\hline \vspace{-2.4ex} \\
(1,3,4) & (v_2,v_1,v_3) \vspace{.5ex} \\
(1,4,3) & (v_2,v_4,v_1) \vspace{.5ex} \\
(3,1,4) & (v_2,v_1,v_3) \vspace{.5ex} \\
(3,4,1) & (v_4,v_1,v_2) \vspace{.5ex} \\
(4,1,3) & (v_2,v_4,v_1) \vspace{.5ex} \\
(4,3,1) & (v_3,v_2,v_1)
\end{array}
\]
There is only one undominated $J$-circuit with respect to $(\{1,3\},\{4\})$, \mbox{because} all six orderings result in the
same \mbox{$J$-circuit} $(v_2,v_1,v_7)$.
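The greedy procedure of Fig.~\ref{fig:greedy} is likewise straightforward to implement. The following Python sketch (ours; the function names are illustrative) runs the greedy construction once per ordering of $J$ and collects the distinct results, reproducing the two examples above for $v=(1,\ldots,7)$.
\begin{verbatim}
from itertools import permutations

def has_cycle(succ, idx):
    # succ: {vertex: value of its successor}; idx: value -> vertex index
    for start in succ:
        cur = idx[succ[start]]
        for _ in range(len(idx)):
            if cur == start:
                return True
            if cur not in succ:
                break
            cur = idx[succ[cur]]
    return False

def undominated_J_circuits(v, J_plus, J_minus):
    # greedy procedure of the figure: one candidate assignment per ordering of J
    idx = {val: k + 1 for k, val in enumerate(v)}
    J = sorted(J_plus | J_minus)
    found = set()
    for order in permutations(J):
        succ, remaining = {}, sorted(v)
        for j in order:
            cand = remaining if j in J_plus else remaining[::-1]
            for val in cand:                       # smallest (largest) value, no cycle
                if not has_cycle({**succ, j: val}, idx):
                    succ[j] = val
                    remaining.remove(val)
                    break
        found.add(tuple(succ[j] for j in J))
    return sorted(found)

v = list(range(1, 8))
print(undominated_J_circuits(v, {1, 3, 4}, set()))
# [(2, 1, 3), (2, 4, 1), (3, 2, 1), (4, 1, 2)] -- the four J-circuits in the table
print(undominated_J_circuits(v, {1, 3}, {4}))
# [(2, 1, 7)]
\end{verbatim}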
It remains to show that the greedy procedure finds all undominated \mbox{$J$-circuits}. We will first prove this for the partition $(J,\emptyset)$ because the \mbox{argument} simplifies considerably in this case. Thus we assume that circuit $x$ dominates circuit $x'$ when $x\leq x'$. The proof for the general case appears in the Appendix.
\begin{theorem} \label{th:simplifiedgreedycomplete} Any undominated $J$-circuit with respect to $(J,\emptyset)$ can be generated in a greedy fashion for some ordering of the indices in $J$.
\end{theorem}
\begin{proof}
Let $\bar{x}(J)$ be a $J$-circuit that is undominated with
respect to $(J,\emptyset)$. Let $J=\{i_1, \ldots,
i_m\}$ where $\bar{x}_{i_1} < \cdots < \bar{x}_{i_m}$, and let $y=(y_{i_1}, \ldots, y_{i_m})$ be the greedy solution with respect to the ordering $i_1, \ldots, i_m$. We claim that $\bar{x}_{i_{\ell}}=y_{i_{\ell}}$ for $\ell=1,
\ldots, m$, which suffices to prove the theorem.
Supposing to the contrary, let $t$ be the smallest index for which \mbox{$\bar{x}_{i_t}\neq y_{i_t}$}. Clearly $\bar{x}_{i_t} < y_{i_t}$ is inconsistent with the greedy choice, because $\bar{x}_{i_t}$ is available when $y_{i_t}$ is assigned a value.
Thus we have $\bar{x}_{i_t} > y_{i_t}$.
By hypothesis, $\bar{x}$ is undominated with respect to $J$.
We therefore have $\bar{x}_{i_{\ell}} < y_{i_{\ell}}$ for some $\ell\in\{t+1, \ldots, m\}$.
Let $u$ be the smallest such index.
Finally, let $t'$ be the largest index in $\{t, \ldots, u-1\}$ such that $\bar{x}_{i_{t'}} > y_{i_{t'}}$.
We know that $t'$ exists because $\bar{x}_{i_t} > y_{i_t}$.
Thus we have two sequences of values related as follows:
\[
\begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c}
\bar{x}_{i_1} & < & \cdots & < &
\bar{x}_{i_{t-1}} & < & \bar{x}_{i_t} & <
& \cdots & < & \bar{x}_{i_{t'-1}} & < &
\bar{x}_{i_{t'}} & < & \cdots & < &
\bar{x}_{i_{u-1}} & < & \bar{x}_{i_u} \\
$\rotatebox[origin=c]{270}{$=$}$ & & & &
$\rotatebox[origin=c]{270}{$=$}$ & & $\rotatebox[origin=c]{270}{$>$}$ & &
& & $\rotatebox[origin=c]{270}{$\geq$}$ & &
$\rotatebox[origin=c]{270}{$>$}$ & & & &
$\rotatebox[origin=c]{270}{$\geq$}$ & & $\rotatebox[origin=c]{270}{$<$}$ \\
y_{i_1} & & \cdots & & y_{i_{t-1}} & &
y_{i_t} & & \cdots & & y_{i_{t'-1}} & &
y_{i_{t'}} & & \cdots & & y_{i_{u-1}} & &
y_{i_u}
\end{array}
\]
We first show that value $\bar{x}_{i_u}$ has not yet been assigned
in the greedy algorithm when $y_{i_u}$ is assigned a value. That is, we show that $\bar{x}_{i_u}\not\in\{y_{i_1}, \ldots, y_{i_{u-1}}\}$.
Suppose to the contrary that $\bar{x}_{i_u}=y_{i_{w}}$ for some $w\in \{1, \ldots, u-1\}$.
But this is impossible, because $\bar{x}_{i_u}>\bar{x}_{i_w}\geq y_{i_w}$.
We next show that value $\bar{x}_{i_{t'}}$ has not yet been assigned
in the greedy algorithm when $y_{i_u}$ is assigned a value. That is, we show that $\bar{x}_{i_{t'}}\not\in \{y_{i_1}, \ldots, y_{i_{u-1}}\}$.
To begin with, we have that $\bar{x}_{i_{t'}}\not\in \{y_{i_1}, \ldots,
y_{i_{t'-1}}\}$, by virtue of the same reasoning just applied
to $\bar{x}_{i_u}$. Also $\bar{x}_{i_{t'}}\neq y_{i_{t'}}$,
since by hypothesis $\bar{x}_{i_{t'}} > y_{i_{t'}}$. To show
that $\bar{x}_{i_{t'}}\not\in \{y_{i_{t'+1}}, \ldots,
y_{i_{u-1}}\}$, suppose to the contrary that $\bar{x}_{i_{t'}} = y_{i_w}$ for some $w\in \{t'+1, \ldots, u-1\}$.
Then since $\bar{x}_{i_{t'}} < \bar{x}_{i_w}$, we must have $\bar{x}_{i_w} >
y_{i_w}$. But this contradicts the definition of $t'$ ($< w$)
as the largest index in $\{t, \ldots, u-1\}$ such that
$\bar{x}_{i_{t'}} > y_{i_{t'}}$. Thus $\bar{x}_{i_{t'}} \neq y_{i_w}$.
Because $\bar{x}_{i_u} < y_{i_u}$ and value $\bar{x}_{i_u}$ has
not yet been assigned, setting $y_{i_u} = \bar{x}_{i_u}$ must create
a cycle in $y$, because otherwise setting $y_{i_u} = \bar{x}_{i_u}$
would have been the greedy choice. Also, setting
$y_{i_u}=\bar{x}_{i_{t'}}$ was not the greedy choice because
$y_{i_u} > \bar{x}_{i_u} > \bar{x}_{i_{t'}}$. Thus setting
$y_{i_u}=\bar{x}_{i_{t'}}$ must likewise create a cycle in
$y$, because $\bar{x}_{i_{t'}}$ has not yet been assigned. Now
define $G_{y(J)}$ as before and consider the maximal subchain
in $G_{y(J)}$ that contains $y_{i_u}$. Let the segment
of the subchain up to $y_{i_u}$ be
\[
v_z \rightarrow \cdots \rightarrow v_{i_u} \rightarrow y_{i_u}
\]
Because setting $y_{i_u}=\bar{x}_{i_u}$ creates a cycle in
$y$, we must have $\bar{x}_{i_u} = v_z$. Similarly,
because setting $y_{i_u}=\bar{x}_{i_{t'}}$ creates a cycle in
$y$, we must have $\bar{x}_{i_{t'}} = v_z$. This implies
$\bar{x}_{i_u}=\bar{x}_{i_{t'}}$, which is impossible because
$\bar{x}_{i_u}>\bar{x}_{i_{t'}}$. $\Box$
\end{proof}
\begin{theorem} \label{th:greedycomplete} Any undominated $J$-circuit with respect to $(J_+,J_-)$ can be generated in a greedy fashion for some ordering of the indices in $J$.
\end{theorem}
\begin{proof}
See the Appendix.
\end{proof}
\section{Permutation and Two-term Facets}
\label{permutation}
We begin by identifying two special classes of facets of
$H_n(v)$, namely, permutation facets and two-term facets.
The {\em permutohedron} $P_n(v)$ for an arbitrary domain $\{v_1,\ldots, v_n\}$ can be defined as the convex hull of all points whose coordinates are permutations of $v_1, \ldots, v_n$. We refer to the facets of $P_n(v)$ as {\em permutation facets}. The circuit
polytope $H_n(v)$ is contained in $P_n(v)$ because every circuit
$(x_1, \ldots, x_n)$ is a permutation of $v_1, \ldots, v_n$.
This means that every facet-defining inequality for $P_n(v)$ is
valid for circuit but not necessarily facet defining. This raises
the question as to which permutation facets are also circuit facets.
We will identify a large family of permutation facets that can be
immediately recognized as circuit facets.
The permutohedron $P_n(v)$ has dimension $n-1$, and its affine hull is described by
\begin{equation}
\sum_{j=1}^n x_j = \sum_{j=1}^n v_j
\label{eq:2termaffine}
\end{equation}
The facets of $P_n(v)$ are identified in \cite{Hoo00,WilYan01}, and they are
defined by
\begin{equation}
\sum_{j\in J} x_j \geq \sum_{j=1}^{|J|} \hspace{0ex} v_j
\label{eq:permfacet}
\end{equation}
for all $J\subset \{1, \ldots, n\}$ with $1\leq |J|\leq n-1$.
(Recall that $0\leq v_1 <\cdots < v_n$.) This result is generalized
in \cite{Hooker12} to domains with more than $n$ elements.
For example, the permutohedron $P_3(v)$ with $v=(2,4,5)$ is
defined by
\[
\begin{array}{ll}
x_1+x_2+x_3=11 \vspace{.5ex} \\
x_i \geq 2, \;\mbox{for $i=1,2,3$} \vspace{.5ex} \\
x_i+x_j \geq 6, \; \mbox{for distinct $i,j\in\{1,2,3\}$}
\end{array}
\]
We can see at this point that a facet-defining inequality for
$P_n(v)$ need not be facet-defining for $H_n(v)$. The inequality
$x_1+x_2\geq 6$ is facet-defining for $P_3(v)$ but not for $H_3(v)$,
which is the line segment from $(4,5,2)$ to $(5,2,4)$. However, a large family of inequalities is facet defining for both $H_n(v)$ and $P_n(v)$.
\begin{theorem}\label{th:permfacet1}
The inequality (\ref{eq:permfacet}) defines a facet of $H_n(v)$ if
$1\leq |J| \leq n-4$ and $j>2$ for all $j\in J$.
\end{theorem}
\begin{proof}
Let $J=\{j_1, \ldots, j_m\}$. Inequality (\ref{eq:permfacet}) is clearly valid because the variables $x_{j_1},\ldots,x_{j_m}$ must have pairwise distinct values. By Corollary~\ref{co:main}, it suffices to exhibit $m$ affinely independent \mbox{$J$-circuits} that satisfy (\ref{eq:permfacet}) at equality. Consider the following assignments to $(x_{j_1},\ldots,x_{j_m})$:
\begin{equation}
\begin{array}{ccccccc}
x_{j_1} & x_{j_2} & x_{j_3} & \cdots & x_{j_{m-1}} & x_{j_m} \vspace{.5ex} \\
\hline
v_1 & v_2 & v_3 & \cdots & v_{m-1} & v_m \\
v_2 & v_1 & v_3 & \cdots & v_{m-1} & v_m \\
v_1 & v_3 & v_2 & \cdots & v_{m-1} & v_m \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
v_1 & v_2 & v_3 & \cdots & v_m & v_{m-1}
\end{array} \label{eq:fam31}
\end{equation}
The $i$th assignment is obtained from the first by swapping $v_{i-1}$ and $v_i$.
These \mbox{assign}ments obviously satisfy (\ref{eq:permfacet}) at equality. They are also affinely independent,
as can be seen by subtracting the first row from each row. It \mbox{remains} to show that the assignments create no cycles and are therefore \mbox{$J$-circuits}. For this, it suffices to show that each $x_{j_i}$ is assigned a value $v_k$ with $k<j_i$. The first assignment satisfies this condition because $2<j_1$ and $j_1<\cdots <j_m$ imply that $i<j_i-1$ for $i=1, \ldots, m$. The $i$th \mbox{assign}ment agrees with the first on the \mbox{values} of all variables except $x_{j_{i-1}}, x_{j_i}$. It sets $(x_{j_{i-1}},x_{j_i})=(v_i,v_{i-1})$, which satisfies $i<j_{i-1}$ because $i-1<j_{i-1}-1$, and satisfies $i-1<j_i$ because $i<j_i-1$. The $i$th assignment therefore satisfies the condition and is a \mbox{$J$-circuit} for $i=2, \ldots, m$. $\Box$
\end{proof}
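For small instances the theorem can also be verified by brute force. The sketch below (ours, for illustration only) enumerates all $(n-1)!$ circuits for $n=7$ and $J=\{3,5,6\}$, confirms that (\ref{eq:permfacet}) is valid, and confirms that the circuits satisfying it at equality affinely span a set of dimension $n-2$, as a facet requires.
\begin{verbatim}
import numpy as np
from itertools import permutations

def circuits(n):
    # successor representations of all (n-1)! hamiltonian circuits on vertices 1..n
    for tour in permutations(range(2, n + 1)):
        order = (1,) + tour
        x = [0] * n
        for k in range(n):
            x[order[k] - 1] = order[(k + 1) % n]
        yield x

n, J = 7, [3, 5, 6]                      # indices all larger than 2, |J| <= n - 4
rhs = sum(range(1, len(J) + 1))          # v_1 + v_2 + v_3 = 6 for v = (1,...,n)
pts = np.array(list(circuits(n)), dtype=float)
lhs = pts[:, [j - 1 for j in J]].sum(axis=1)
print(lhs.min() >= rhs)                  # True: the inequality is valid
tight = pts[lhs == rhs]
print(np.linalg.matrix_rank(tight[1:] - tight[0]))   # n - 2 = 5: facet dimension
\end{verbatim}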
Another special class of facet-defining inequalities are those
containing two terms, which can be listed in closed form.
\begin{corollary} \label{co:twoterm}
If $n\geq 6$, the two-term facets of $H_n(v)$ are precisely those
defined by
\begin{eqnarray}
&& x_i+x_j \geq v_1+v_2, \;\;\mbox{for distinct $i,j\in \{3,\ldots,n\}$} \vspace{.5ex} \label{eq:2term1} \\
&& (v_3-v_1)x_1 + (v_3-v_2)x_2 \geq v_3^2 - v_1v_2 \vspace{.5ex} \label{eq:2term2} \\
&& (v_2-v_1)x_2 + (v_3-v_1)x_i \geq v_2v_3 - v_1^2, \;\; \mbox{for $i\in \{3, \ldots, n\}$} \vspace{.5ex} \label{eq:2term3} \\
&& (v_{n-1}-v_{n-2})x_{n-1} + (v_n-v_{n-2})x_n \leq v_n v_{n-1} - v_{n-2}^2 \vspace{.5ex} \label{eq:2term5} \\
&& (v_n-v_{n-2})x_i + (v_n-v_{n-1})x_{n-1} \leq v_n^2 - v_{n-1}v_{n-2}, \label{eq:2term6} \\
&& \hspace{40ex} \mbox{for $i\in \{1, \ldots, n-2\}$} \nonumber
\end{eqnarray}
\end{corollary}
\vspace{-2ex}
\begin{proof}
Consider an arbitrary two-term inequality $a_ix_i+a_jx_j\geq \alpha$. If we suppose $a_i,a_j>0$, four cases can be distinguished. {\em Case 1:} $i,j>2$. The two permutations of $i,j$ generate the two undominated \mbox{$J$-circuits} $(v_1,v_2)$ and $(v_2,v_1)$, where $J=\{i,j\}$. The only equation satisfied by these two affinely independent \mbox{$J$-circuits}, up to a positive scalar multiple, is $x_i+x_j=v_1+v_2$. So by Theorems~\ref{th:main} and~\ref{th:main2}, all facet-defining inequalities for this case have the form (\ref{eq:2term1}). {\em Case 2:} $(i,j)=(1,2)$. The undominated \mbox{$J$-circuits} are $(v_2,v_3)$ and $(v_3,v_1)$, which satisfy only (\ref{eq:2term2}) at equality, up to a positive scalar multiple. {\em Case 3:} $i=1$, $j>2$. The two permutations of $1,j$ generate the same undominated \mbox{$J$-circuit} $(v_2,v_1)$. Thus no two affinely independent \mbox{$J$-circuits} satisfy $a_1x_1+a_jx_j=\alpha$, and by Theorem~\ref{th:main2} there are no facet-defining inequalities in this case. {\em Case 4:} $i=2$, $j>2$. The undominated \mbox{$J$-circuits} are $(v_1,v_2)$ and $(v_3,v_1)$, which satisfy only (\ref{eq:2term3}) at equality.
Now if we suppose $a_i,a_j<0$, similar reasoning yields the facets (\ref{eq:2term5})--(\ref{eq:2term6}) and
\[
x_i+x_j \leq v_{n-1}+v_n, \;\;\mbox{for distinct $i,j\in \{1,\ldots,n-2\}$}
\]
which is redundant of (\ref{eq:2term1}) because it is the sum of (\ref{eq:2term1}) and the negation of (\ref{eq:2termaffine}). Finally, if $a_i>0$ and $a_j<0$, we consider four cases: $i>1$ and $j<n$; $i=1$ and $j<n$; $i>1$ and $j=n$; and $(i,j)=(1,n)$. The two permutations of $i,j$ generate only one \mbox{$J$-circuit} in each case, respectively $(v_1,v_n)$, $(v_2,v_n)$, $(v_1,v_{n-1})$, and $(v_2,v_{n-1})$. This means by Theorem~\ref{th:main2} that there are no additional facets. The situation is similar when $a_i<0$ and $a_j>0$. $\Box$
\end{proof}
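For concreteness, substituting the standard domain $u=(1,\ldots,n)$ into (\ref{eq:2term1})--(\ref{eq:2term6}) gives the explicit list
\[
\begin{array}{ll}
x_i+x_j \geq 3, & \mbox{distinct } i,j\in\{3,\ldots,n\}, \\
2x_1+x_2 \geq 7, & \\
x_2+2x_i \geq 5, & i\in\{3,\ldots,n\}, \\
x_{n-1}+2x_n \leq 3n-4, & \\
2x_i+x_{n-1} \leq 3n-2, & i\in\{1,\ldots,n-2\}.
\end{array}
\]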
\section{A Hierarchy of Facets}
We now describe a hierarchy of facets of increasing complexity. To simplify discussion, we suppose in this section that each variable has domain $\{v_1,\ldots,v_n\}=\{1, \ldots, n\}$, and we consider only facets defined by inequalities with nonnegative coefficients. We therefore focus on $H_n(u)$, where $u=(1, \ldots, n)$.
The intuition behind the hierarchy is as follows. On level~0 of the hierarchy, the number of variables in an inequality (\ref{eqineq}) is less than the smallest index in $J$. The undominated \mbox{$J$-circuits} are simply the permutations of $1, \ldots, m$, because the greedy algorithm of Section~\ref{greedy} never encounters a \mbox{cycle}. As a result, the only facets on level~0 are permutation facets. In higher levels of the hierarchy, the index of the first variable is smaller than the number of variables in the facet, which increases the combinatorial complexity of undominated \mbox{$J$-circuits} and yields more complicated facets. We will exhaustively identify facets for levels~0, 1, and~2, although one can in principle use similar methods to identify facets on higher levels.
Let level~$d$ of the hierarchy consist of inequalities of the form
\begin{equation}
\sum_{j=m-d+1}^m \hspace{-2.5ex} a_jx_j + \hspace{-.7ex} \sum_{i=d+1}^m \hspace{-1ex} a_{j_i}x_{j_i} \geq \alpha
\label{eq:fam1}
\end{equation}
where each $a_j>0$, where $m<j_{d+1}< \cdots < j_m$, and where $\{x_{j_{d+1}},\ldots,x_{j_m}\}$ is any subset of $m-d$ variables in $\{x_{m+1}, \ldots, x_n\}$. Thus (\ref{eq:fam1}) contains $m$ variables, and $m-d$ variables are absent before the first variable. Note also that the first $d$ variables are consecutive. We will identify one family of facet-defining inequalities on level 0, two families on level 1, and five families on level 2.
First, we have immediately from Theorem~\ref{th:permfacet1} that level 0 contains a class of permutation facets.
\begin{corollary}
The following level~0 inequalities are facet defining for $H_n(u)$:
\[
\sum_{i=1}^m x_{j_i} \geq {\textstyle\frac{1}{2}} m(m+1), \;\;\; m=2, \ldots, n
\]
for any set $\{x_{j_1}, \ldots, x_{j_m}\}$ of $m$ variables in $\{x_{m+1}, \ldots, x_n\}$, provided $n-m\geq 4$.
\end{corollary}
For level 1 we have the following.
\begin{theorem}
The following level 1 inequalities are facet defining for $H_n(u)$:
\begin{equation}
x_m + \sum_{i=2}^m x_{j_i} \geq {\textstyle\frac{1}{2}}m(m+1), \;\;\;m=3, \ldots, \lceil n/2 \rceil
\label{eq:fam2a}
\end{equation}
\begin{equation}
x_m + 2\sum_{i=2}^m x_{j_i} \geq m^2+1, \;\;\;m=2, \ldots, \lceil n/2 \rceil
\label{eq:fam2}
\end{equation}
for any subset $\{x_{j_2}, \ldots, x_{j_m}\}$ of $m-1$ variables in $\{x_{m+1}, \ldots, x_n\}$, provided \mbox{$n-m\geq 4$}.
\end{theorem}
\proof{Proof.} Here $J=\{m,j_2,\ldots,j_m\}$. Inequality (\ref{eq:fam2a}) is facet defining due to Theorem~\ref{th:permfacet1}. To show that (\ref{eq:fam2}) is facet defining, it suffices to show that it is satisfied by all undominated \mbox{$J$-circuits} and is satisfied at equality by $m$ affinely independent \mbox{$J$-circuits}. From Theorem~\ref{th:simplifiedgreedycomplete}, all undominated \mbox{$J$-circuits} correspond to permutations of the elements of $J$, or equivalently, permutations $x'$ of $(x_m,x_{j_2},\ldots,x_{j_m})$. We distinguish two cases: permutations in which $x_m$ is last, resulting in {\em type~1} circuits, and permutations in which $x_m$ is not last, resulting in {\em type~2} circuits. Type~1 \mbox{$J$-circuits} have the form $x'=(1,\ldots,m-1,m+1)$, because once the first $m-1$ variables in $x'$ are assigned $1, \ldots, m-1$, $x_m$ cannot be assigned the next value $m$ and must be assigned $m+1$. For all such \mbox{$J$-circuits}, the left-hand side of (\ref{eq:fam2}) has value
\[
(m+1) + 2\left(1 + 2 + \cdots + (m-1)\right) = m^2 + 1
\]
which satisfies (\ref{eq:fam2}). Type~2 \mbox{$J$-circuits} have the form $x''=(1,\ldots,m)$ where $x''$ is any permutation of $(x_m,x_{j_2},\ldots,x_{j_m})$ in which $x_m$ is not last. Because $x_m$ has the smallest coefficient in (\ref{eq:fam2}), the LHS of (\ref{eq:fam2}) is minimized over type~2 \mbox{$J$-circuits} when $x_m$ occurs next to last in $x''$, in which case the LHS has value
\[
(m-1) + 2\left( 1 + 2 + \cdots + (m-2) + m \right) = m^2+1
\]
Thus (\ref{eq:fam2}) is again satisfied.
We now exhibit $m$ affinely independent \mbox{$J$-circuits} satisfying (\ref{eq:fam2}) at equality. The first $m-1$ \mbox{$J$-circuits} below are of type~2 (with $x_m$ next to last), and the last is of type~1:
\begin{equation}
\begin{array}{c@{\hspace{6ex}}cccccc}
x_m & x_{j_2} & x_{j_3} & x_{j_4} & \cdots & x_{j_{m-1}} & x_{j_m} \vspace{.5ex} \\
\hline
m-1 & 1 & 2 & 3 & \cdots & m-2 & m \\
m-1 & 2 & 1 & 3 & \cdots & m-2 & m \\
m-1 & 1 & 3 & 2 & \cdots & m-2 & m \\
\vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\
m-1 & 1 & 2 & 3 & \cdots & m & m-2 \\
\ \\
m+1 & 1 & 2 & 3 & \cdots & m-2 & m-1
\end{array} \label{eq:fam29}
\end{equation}
These satisfy (\ref{eq:fam2}) at equality, as noted above. The $(m-1)\times(m-1)$ submatrix in the upper right is obtained by swapping pairs of elements in the first row. After suitable row operations, (\ref{eq:fam29}) becomes
\begin{equation}
\begin{array}{c@{\hspace{6ex}}cccccc}
x_m & x_{j_2} & x_{j_3} & x_{j_4} & \cdots & x_{j_{m-1}} & x_{j_m} \vspace{.5ex} \\
\hline
(m-1)/s & 1 & 0 & 0 & \cdots & 0 & 0 \\
(m-1)/s & 0 & 1 & 0 & \cdots & 0 & 0 \\
(m-1)/s & 0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\
(m-1)/s & 0 & 0 & 0 & \cdots & 0 & 1 \\
\ \\
m+1 & 1 & 2 & 3 & \cdots & m-2 & m-1
\end{array} \label{eq:fam29a}
\end{equation}
where $s=\frac{1}{2}m(m-1)+1$ is the sum of the entries in each row of the submatrix. After further row operations, the last row is reduced to \mbox{$2+(m-1)/s$} followed by $m-1$ zeros, resulting in a triangular matrix (after \mbox{rearranging} columns) with nonzeros on the diagonal. The matrix (\ref{eq:fam29}) is therefore nonsingular, and the rows are affinely independent. $\Box$
\endproof \medskip
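As with the permutation facets, (\ref{eq:fam2}) can be verified by brute force for small instances. The sketch below (ours, for illustration) checks the case $m=3$, $n=7$ with $(j_2,j_3)=(5,7)$, i.e.\ $x_3+2(x_5+x_7)\geq 10$, by enumerating all circuits.
\begin{verbatim}
import numpy as np
from itertools import permutations

def circuits(n):
    for tour in permutations(range(2, n + 1)):
        order = (1,) + tour
        x = [0] * n
        for k in range(n):
            x[order[k] - 1] = order[(k + 1) % n]
        yield x

n, m, js = 7, 3, [5, 7]
pts = np.array(list(circuits(n)), dtype=float)
lhs = pts[:, m - 1] + 2 * pts[:, [j - 1 for j in js]].sum(axis=1)
print(lhs.min())                                    # 10.0 = m^2 + 1: valid and tight
tight = pts[lhs == m * m + 1]
print(np.linalg.matrix_rank(tight[1:] - tight[0]))  # n - 2 = 5: defines a facet
\end{verbatim}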
Finally, we identify five classes of level~2 facets.
\begin{theorem}
The following level~2 inequalities are facet defining for $H_n(u)$:
\begin{eqnarray}
&& \hspace{-8ex} x_{m-1} + x_m + \sum_{i=3}^m x_{j_i} \geq {\textstyle\frac{1}{2}}m(m+1), \;\;\; m=4, \ldots, \lceil (n+1)/2 \rceil \label{eq:fam10} \vspace{.5ex} \\
&& \hspace{-8ex} 2x_{m-1} + x_m + 2\sum_{i=3}^m x_{j_i} \geq m^2+1, \;\;\; m=4, \ldots, \lceil (n+1)/2 \rceil \label{eq:fam11} \vspace{.5ex} \\
&& \hspace{-8ex} 2x_{m-1} + x_m + 4\sum_{i=3}^m x_{j_i} \geq m(2m-3)+5, \;\;\; m=3, \ldots, \lceil (n+1)/2 \rceil \label{eq:fam12} \vspace{.5ex} \\
&& \hspace{-8ex} 3x_{m-1} + 2x_m + 4\sum_{i=3}^m x_{j_i} \geq m(2m-1)+4, \;\;\; m=3, \ldots, \lceil (n+1)/2 \rceil \label{eq:fam13} \vspace{.5ex} \\
&& \hspace{-8ex} 3x_{m-1} + 2x_m + 5\sum_{i=3}^m x_{j_i} \geq {\textstyle\frac{5}{2}}m(m-1) + 6, \;\;\; m=3, \ldots, \lceil (n+1)/2 \rceil \label{eq:fam14}
\end{eqnarray}
for any given set $\{x_{j_3}, \ldots, x_{j_m}\}$ of $m-2$ variables in $\{x_{m+1}, \ldots, x_n\}$, if \mbox{$n-m\geq 4$}.
\end{theorem}
\proof{Proof.} Inequality (\ref{eq:fam10}) is facet defining due to Theorem~\ref{th:permfacet1}. For the remaining inequalities we apply Theorem~\ref{th:main}. First, we show that (\ref{eq:fam11})--(\ref{eq:fam14}) are satisfied by all undominated \mbox{$J$-circuits}, where $J=\{m-1,m,j_3, \ldots, j_m\}$. This can be shown individually for each inequality, but we can establish the result for all at once by showing that
\begin{equation}
ax_{m-1} + bx_m + c\sum_{i=3}^m x_{j_i} \geq \beta
\label{eq:fam20}
\end{equation}
is satisfied by all undominated \mbox{$J$-circuits}, given that
\[
\vspace{-2ex}
\beta = (m-2)a + (m+1)b + {\textstyle\frac{1}{2}}(m-3)(m-2)c + (m-1)c
\]
and
\begin{equation}
2a\geq c, \;\;\; c\geq a, \;\;\; 3a\geq 2b+c, \;\;\; 2a\geq 3b, \;\;\; c\geq 2b
\label{eq:fam21}
\end{equation}
Note that the inequalities (\ref{eq:fam11})--(\ref{eq:fam14}) have the form (\ref{eq:fam20}) and satisfy the relations (\ref{eq:fam21}). It can also be checked that $\beta$ is equal to the right-hand side of each inequality (\ref{eq:fam11})--(\ref{eq:fam14}). It therefore suffices to show that all undominated \mbox{$J$-circuits} satisfy (\ref{eq:fam20}).
To show this, we again apply Theorem~\ref{th:simplifiedgreedycomplete}. We partition permutations $x'$ of $x=(x_{m-1},x_m,x_{j_3}, \ldots, x_{j_m})$ into 5 classes, which give rise to 5 types of \mbox{$J$-circuits}. It suffices to show that \mbox{$J$-circuits} of all 5 types satisfy (\ref{eq:fam20}).
\begin{description}
\item {\em Type 1.} $x_m$ occurs last and $x_{m-1}$ next to last in $x'$. Circuits constructed in a greedy fashion have the form $x'=(1,\ldots,m-2,m,m+1)$. This is because once the first $m-2$ variables in $x'$ are assigned $1, \ldots, m-2$, variable $x_{m-1}$ cannot be assigned $m-1$ and is therefore assigned $m$. Now $x_m$ cannot be assigned $m-1$ without creating a cycle with $x_{m-1}$ and is therefore assigned $m+1$. The LHS of (\ref{eq:fam20}) is
\[
ma + (m+1)b + (m-2)c + {\textstyle\frac{1}{2}}(m-3)(m-2)c \geq \beta
\]
where the inequality follows from the fact that $2a\geq c$. So \mbox{$J$-circuits} of type~1 satisfy (\ref{eq:fam20}).
\item {\em Type 2.} $x_m$ occurs last but $x_{m-1}$ does not occur next to last in $x'$. The circuits have the form $x'=(1,\ldots,m-1,m+1)$. Because $a\leq c$, the LHS of (\ref{eq:fam20}) is minimized when $x_{m-1}$ occurs second from last in $x'$ (i.e., in position $m-2$), in which case the LHS has value equal to $\beta$. So \mbox{$J$-circuits} of type~2 satisfy (\ref{eq:fam20}).
\item {\em Type 3.} $x_{m-1}$ occurs last and $x_m$ next to last in $x'$. The circuits have the form $x'=(1,\ldots,m-1,m+1)$, for which the LHS of (\ref{eq:fam20}) is
\[
(m+1)a + (m-1)b + (m-2)c + {\textstyle\frac{1}{2}}(m-3)(m-2)c \geq \beta
\]
where the inequality follows from the fact that $3a\geq 2b+c$. So \mbox{$J$-circuits} of type~3 satisfy (\ref{eq:fam20}).
\item {\em Type 4.} $x_{m-1}$ occurs last but $x_m$ does not occur next to last in $x'$. The circuits have the form $x'=(1,\ldots,m)$. Because $b\leq c$, the LHS of (\ref{eq:fam20}) is minimized when $x_m$ occurs second from last in $x'$, in which case the LHS has value
\[
ma + (m-2)b + (m-1)c + {\textstyle\frac{1}{2}}(m-3)(m-2)c \geq \beta
\]
where the inequality follows from the fact that $2a\geq 3b$. So \mbox{$J$-circuits} of type~4 satisfy (\ref{eq:fam20}).
\item {\em Type 5.} Neither $x_{m-1}$ nor $x_m$ occurs last in $x'$. The circuits have the form $x'=(1,\ldots,m)$. Because $b\leq a\leq c$, the LHS of (\ref{eq:fam20}) is minimized when $x_{m-1}$ is second from last and $x_m$ is next to last in $x'$, in which case the LHS has value
\[
(m-2)a + (m-1)b + mc + {\textstyle\frac{1}{2}}(m-3)(m-2)c \geq \beta
\]
where the inequality follows from the fact that $c\geq 2b$. So \mbox{$J$-circuits} of type~5 satisfy (\ref{eq:fam20}).
\end{description}
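Before turning to the affine-independence part of the proof, the algebra above admits a quick numerical spot check. The following Python sketch is purely a verification aid (the function names are ours): it reads the coefficient triples $(a,b,c)$ and right-hand sides off (\ref{eq:fam11})--(\ref{eq:fam14}) and confirms that each family satisfies the relations (\ref{eq:fam21}), that $\beta$ from (\ref{eq:fam20}) coincides with the stated right-hand side, and that the left-hand side values computed above for the five circuit types are never below $\beta$.
\begin{verbatim}
from fractions import Fraction as F

def beta(a, b, c, m):
    # Right-hand side of (eq:fam20), as defined in the proof.
    return (m - 2)*a + (m + 1)*b + F(1, 2)*(m - 3)*(m - 2)*c + (m - 1)*c

def type_lhs(a, b, c, m):
    # Minimum left-hand sides of (eq:fam20) over circuit types 1-5.
    tail = F(1, 2)*(m - 3)*(m - 2)*c
    return [m*a + (m + 1)*b + (m - 2)*c + tail,         # type 1
            beta(a, b, c, m),                           # type 2 (tight)
            (m + 1)*a + (m - 1)*b + (m - 2)*c + tail,   # type 3
            m*a + (m - 2)*b + (m - 1)*c + tail,         # type 4
            (m - 2)*a + (m - 1)*b + m*c + tail]         # type 5

families = {  # coefficients (a, b, c) and RHS of (eq:fam11)-(eq:fam14)
    'fam11': (2, 1, 2, lambda m: m*m + 1),
    'fam12': (2, 1, 4, lambda m: m*(2*m - 3) + 5),
    'fam13': (3, 2, 4, lambda m: m*(2*m - 1) + 4),
    'fam14': (3, 2, 5, lambda m: F(5, 2)*m*(m - 1) + 6),
}
for name, (a, b, c, rhs) in families.items():
    assert 2*a >= c and c >= a and 3*a >= 2*b + c       # (eq:fam21)
    assert 2*a >= 3*b and c >= 2*b
    for m in range(3, 25):
        assert beta(a, b, c, m) == rhs(m)               # beta equals the RHS
        assert min(type_lhs(a, b, c, m)) >= beta(a, b, c, m)
print('all checks passed')
\end{verbatim}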
It remains to exhibit, for each inequality (\ref{eq:fam11})--(\ref{eq:fam14}), $m$ affinely independent \mbox{$J$-circuits} that satisfy it at equality. The scheme for doing so is very similar for (\ref{eq:fam12})--(\ref{eq:fam14}), but somewhat different for (\ref{eq:fam11}). Beginning with (\ref{eq:fam12}), suppose for the moment that $m>3$. We use circuits of type~1, 2, and 3, which are the only types that can satisfy (\ref{eq:fam20}) at equality:
\begin{equation}
\begin{array}{cc@{\hspace{6ex}}cccccc}
x_{m-1} & x_m & x_{j_3} & x_{j_4} & x_{j_5} & \cdots & x_{j_{m-1}} & x_{j_m} \vspace{.5ex} \\
\hline
m-2 & m+1 & 1 & 2 & 3 & \cdots & m-3 & m-1 \\
m-2 & m+1 & 2 & 1 & 3 & \cdots & m-3 & m-1 \\
m-2 & m+1 & 1 & 3 & 2 & \cdots & m-3 & m-1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\
m-2 & m+1 & 1 & 2 & 3 & \cdots & m-1 & m-3 \\
\ \\
m & m+1 & 1 & 2 & 3 & \cdots & m-3 & m-2 \\
m+1 & m-1 & 1 & 2 & 3 & \cdots & m-3 & m-2
\end{array} \label{eq:fam25}
\end{equation}
The first $m-2$ rows are type~2 \mbox{$J$-circuits}, all of which satisfy (\ref{eq:fam20}) at equality. The last two rows are type~1 and type~3 \mbox{$J$-circuits}, respectively, chosen as above to satisfy (\ref{eq:fam20}) at equality. The nonsingular $(m-2)\times (m-2)$ submatrix in the upper right is obtained by swapping pairs of elements in the first row. After suitable row operations (\ref{eq:fam25}) becomes a matrix that is triangular after rearranging columns:
\[
\begin{array}{cc@{\hspace{6ex}}cccccc}
x_{m-1} & x_m & x_{j_3} & x_{j_4} & x_{j_5} & \cdots & x_{j_{m-1}} & x_{j_m} \vspace{.5ex} \\
\hline
(m-2)/s & (m+1)/s & 1 & 0 & 0 & \cdots & 0 & 0 \\
(m-2)/s & (m+1)/s & 0 & 1 & 0 & \cdots & 0 & 0 \\
(m-2)/s & (m+1)/s & 0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\
(m-2)/s & (m+1)/s & 0 & 0 & 0 & \cdots & 0 & 1 \\
\ \\
2+(m-2)/s
& (m+1)/s & 0 & 0 & 0 & \cdots & 0 & 0 \\
3+2(2s-3)/(m+1)
& 0 & 0 & 0 & 0 & \cdots & 0 & 0
\end{array}
\]
where $s=\frac{1}{2}(m-1)(m-2)+1$ is the sum of the elements in an arbitrary row of the $(m-2)\times (m-2)$ submatrix. Because each element on the diagonal is nonzero, the entire matrix is nonsingular, and the rows are affinely independent. When $m=3$, we use instead the affinely independent \mbox{$J$-circuits} $(3,4,1)$, $(1,4,2)$, and $(4,2,1)$, which again are of types~1, 2 and 3 and satisfy (\ref{eq:fam12}) at equality.
Affinely independent \mbox{$J$-circuits} of types 2, 4 and 5 can be similarly exhibited for (\ref{eq:fam13}), and circuits of types 2, 3 and 4 for (\ref{eq:fam14}). Affinely independent \mbox{$J$-circuits} for (\ref{eq:fam11}) are slightly different because only circuits of types~2 and 5 can satisfy (\ref{eq:fam11}) at equality. Here we are given that $m\geq 4$. We use the first $m-2$ circuits in (\ref{eq:fam25}) and the following two circuits of type~5:
\[
\begin{array}{cc@{\hspace{6ex}}cccccc}
x_{m-1} & x_m & x_{j_3} & x_{j_4} & x_{j_5} & \cdots & x_{j_{m-1}} & x_{j_m} \vspace{.5ex} \\
\hline
1 & m-1 & 2 & 3 & 4 & \cdots & m-2 & m \\
2 & m-1 & 1 & 3 & 4 & \cdots & m-2 & m
\end{array}
\]
These satisfy (\ref{eq:fam11}) at equality because in (\ref{eq:fam11}) the coefficient of $x_{m-1}$ equals that of the variables $x_{j_3}, \ldots, x_{j_m}$, so exchanging the values 1 and 2 between $x_{m-1}$ and $x_{j_3}$ leaves the left-hand side unchanged. An argument similar to the above shows that the \mbox{$J$-circuits} are affinely \mbox{independent}. $\Box$
\endproof \medskip
The above theorems provide a complete description of facets that appear for all $m\geq d+2$ on levels~$d=0,1,2$. We can verify this by exhaustive enumeration of facets for $m=d+2$ using Theorems~\ref{th:main} and~\ref{th:main2}. That is, for each $d$ we use the greedy algorithm to generate all undominated \mbox{$J$-circuits} for $J=\{3,\ldots,d+4\}$. We then consider the set $I$ of all inequalities (\ref{eqineq}), up to a positive scalar multiple, that are satisfied at equality by an affinely independent subset of $d+2$ undominated \mbox{$J$-circuits}. Finally, we list the inequalities in $I$ that are satisfied by all the undominated \mbox{$J$-circuits}. This list contains all inequalities that are facet defining for $m=d+2$, and all of them are described above. This method can, in principle, be used to identify families of facets on levels 3 and higher, although for each family one must prove that it is facet defining for all $m\geq d+2$, as is done above.
\section{Separation Algorithms}
\label{separation}
There are polynomial-time separation algorithms for all of the classes of facets described in the previous two sections. Each algorithm identifies a separating facet whenever one exists.
The separation problem is to identify a facet that separates a given solution value $\bar{x}$ of $x=(x_1, \ldots, x_n)$ from the hamiltonian circuit polytope; that is, to find a facet-defining inequality $ax\geq\alpha$ that is violated by $x=\bar{x}$. Consider first the family (\ref{eq:permfacet}) of permutation facets. Let $j_1, \ldots, j_{n-2}$ be an ordering of the indices $3, \ldots, n$ such that $\bar{x}_{j_1}\leq\cdots\leq \bar{x}_{j_{n-2}}$. Then for $m=1, \ldots, n-2$, check whether
\begin{equation}
\sum_{i=1}^m x_{j_i} \geq {\textstyle\frac{1}{2}} m(m+1)
\label{eq:sep1}
\end{equation}
is violated by setting $(x_{j_1}, \ldots, x_{j_m})=(\bar{x}_{j_1}, \ldots, \bar{x}_{j_m})$. Continue until (\ref{eq:sep1}) is violated, at which point a separating facet is discovered. The procedure has worst-case running time of $\mathcal{O}(n\log n)$, the time required to sort $n$ values.
This procedure identifies a separating permutation facet in the family (\ref{eq:permfacet}) if one exists. To see this, suppose $\sum_{j\in J'} x_j\geq \frac{1}{2}m(m+1)$ is a separating permutation facet, where $m=|J'|$ and $1,2\not\in J'$. Then because $\bar{x}_{j_1}, \ldots, \bar{x}_{j_m}$ are the $m$ smallest values among $\bar{x}_{j_1}, \ldots, \bar{x}_{j_{n-2}}$, we have
\[
\sum_{i=1}^m \bar{x}_{j_i} \leq \sum_{j\in J'} \bar{x}_j < {\textstyle\frac{1}{2}}m(m+1)
\]
Thus (\ref{eq:sep1}) is also separating.
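A minimal implementation of this separation procedure might look as follows (a Python sketch; the function name and the 0-based list encoding of $\bar{x}$ are our own conventions, and only the permutation family is handled). On the point $(7,2.6,1,6.25,7,2.2,1.95)$ used in the illustration below, the routine returns the index set $\{3,7\}$, i.e.\ the violated cut $x_3+x_7\geq 3$.
\begin{verbatim}
def separate_permutation_facet(xbar):
    # xbar[0], ..., xbar[n-1] hold the values of x_1, ..., x_n.
    # Sort the indices 3, ..., n by value and compare prefix sums
    # with m(m+1)/2, as in (eq:sep1).
    n = len(xbar)
    order = sorted(range(3, n + 1), key=lambda j: xbar[j - 1])
    prefix = 0.0
    for m, j in enumerate(order, start=1):
        prefix += xbar[j - 1]
        if prefix < m * (m + 1) / 2:
            return order[:m]      # index set J' of a violated facet
    return None                   # no violated facet in this family

print(separate_permutation_facet([7, 2.6, 1, 6.25, 7, 2.2, 1.95]))
\end{verbatim}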
Separation requires only $\mathcal{O}(n)$ time for the two-term facets (\ref{eq:2term1})--(\ref{eq:2term5}). A separating facet of the form (\ref{eq:2term1}) can be found, if one exists, by checking whether (\ref{eq:2term1}) is violated by setting $(x_i,x_j)=(\bar{x}_{j_1},\bar{x}_{j_2})$, where $\bar{x}_{j_1}$ and $\bar{x}_{j_2}$ are the two smallest values among $\bar{x}_1, \ldots, \bar{x}_n$. If so, then (\ref{eq:2term1}) is separating with $(i,j)=(j_1,j_2)$. Facets (\ref{eq:2term2})--(\ref{eq:2term5}) can be separated by enumerating at most $n$ values of the index $i$.
Level~0 facets, level~1 facets of the form (\ref{eq:fam2a}), and level~2 facets of the form (\ref{eq:fam10}) can be separated with the algorithms just described. A single initial sort of the values $\bar{x}_1, \ldots, \bar{x}_n$ provides the basis for separating all other facets on levels~1 and~2. For any fixed $m\geq 2$, we can find a separating level~1 facet of the form (\ref{eq:fam2}) as follows, if one exists. Let $\bar{x}_{j_2}, \ldots, \bar{x}_{j_m}$ be the $m-1$ smallest values in $\{\bar{x}_{m+1}, \ldots, \bar{x}_n\}$. These values can be identified in $\mathcal{O}(n)$ time by looking through the sorted elements of $\{\bar{x}_1, \ldots, \bar{x}_n\}$ and selecting the first $m-1$ elements $\bar{x}_j$ with $j> m$. Now check whether (\ref{eq:fam2}) is violated by setting $(x_m,x_{j_1}, \ldots, x_{j_m})$ equal to $(\bar{x}_m,\bar{x}_{j_1}, \ldots, \bar{x}_{j_m})$. If so, then (\ref{eq:fam2}) is separating. It can be shown as above that this procedure finds a separating facet for any fixed $m$ if one exists. We use a similar procedure for the level~2 facets (\ref{eq:fam11})--(\ref{eq:fam14}). Thus for each $m$, we can identify a separating level~1 and level~2 facet of each type in $\mathcal{O}(n)$ time, if one exists. By enumerating $\mathcal{O}(n)$ values of $m$, we can execute the entire separation algorithm in time $\mathcal{O}(n\log n+n^2)=\mathcal{O}(n^2)$.
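As a concrete instance of the level-2 step, the sketch below separates over the family (\ref{eq:fam11}) only: for each $m$ it inserts the $m-2$ smallest values among $\bar{x}_{m+1},\ldots,\bar{x}_n$ and tests for violation. The range of $m$ follows the theorem; the additional proviso on $n-m$ and the remaining families (\ref{eq:fam12})--(\ref{eq:fam14}) are omitted here, so this is an illustrative fragment rather than the complete algorithm.
\begin{verbatim}
import math

def separate_fam11(xbar):
    # Look for a violated member of (eq:fam11):
    #   2 x_{m-1} + x_m + 2 (x_{j_3} + ... + x_{j_m}) >= m^2 + 1,
    # with j_3, ..., j_m chosen among m+1, ..., n.
    n = len(xbar)                 # xbar[0] is x_1, ..., xbar[n-1] is x_n
    for m in range(4, math.ceil((n + 1) / 2) + 1):
        tail = sorted(range(m + 1, n + 1), key=lambda j: xbar[j - 1])[:m - 2]
        if len(tail) < m - 2:     # not enough variables x_{m+1}, ..., x_n
            continue
        lhs = 2*xbar[m - 2] + xbar[m - 1] + 2*sum(xbar[j - 1] for j in tail)
        if lhs < m*m + 1:
            return m, tail        # separating cut found
    return None

# On the illustration point below this returns m = 4 with indices [7, 6],
# i.e. the cut 2 x_3 + x_4 + 2 x_6 + 2 x_7 >= 17.
print(separate_fam11([7, 2.6, 1, 6.25, 7, 2.2, 1.95]))
\end{verbatim}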
As an illustration, consider circuit$(x_1,\ldots,x_7)$ with each $D_j=\{1, \ldots, 7\}$. Suppose that $(\bar{x}_1, \ldots, \bar{x}_7)=(7,2.6,1,6.25,7,2.2,1.95)$. This point belongs to the affine hull described by (\ref{eq:2termaffine}), but it is infeasible, if only because not all of its components are values in the domain. The following separating cuts are identified by the above algorithms:
\begin{eqnarray}
&& x_3+x_7\geq 3 \vspace{.5ex} \label{eq:sep11} \\
&& x_2 + 2x_3 \geq 5 \vspace{.5ex} \label{eq:sep12} \\
&& x_3+2x_6+2x_7 \geq 10 \vspace{.5ex} \label{eq:sep13} \\
&& 2x_3 + x_4 + 2x_6 + 2x_7 \geq 17 \vspace{.5ex} \label{eq:sep14} \\
&& 2x_3 + x_4 + 4x_6 + 4x_7 \geq 25 \vspace{.5ex} \label{eq:sep15} \\
&& 3x_2 + 2x_3 + 4x_7 \geq 19 \vspace{.5ex} \label{eq:sep16} \\
&& 3x_2 + 2x_3 + 5x_7 \geq 21 \label{eq:sep17}
\end{eqnarray}
Here, (\ref{eq:sep11}) is a permutation facet as well as a 2-term facet, (\ref{eq:sep12}) is a level~1 facet as well as a 2-term facet, (\ref{eq:sep13}) is a level~1 facet, and (\ref{eq:sep14})--(\ref{eq:sep17}) are level~2 facets of the form (\ref{eq:fam11})--(\ref{eq:fam14}), respectively.
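These violations take only a few lines of arithmetic to reproduce; the following Python fragment simply evaluates the left-hand sides of (\ref{eq:sep11})--(\ref{eq:sep17}) at the given point.
\begin{verbatim}
xbar = {1: 7, 2: 2.6, 3: 1, 4: 6.25, 5: 7, 6: 2.2, 7: 1.95}
cuts = [                          # coefficients and RHS of (sep11)-(sep17)
    ({3: 1, 7: 1}, 3),
    ({2: 1, 3: 2}, 5),
    ({3: 1, 6: 2, 7: 2}, 10),
    ({3: 2, 4: 1, 6: 2, 7: 2}, 17),
    ({3: 2, 4: 1, 6: 4, 7: 4}, 25),
    ({2: 3, 3: 2, 7: 4}, 19),
    ({2: 3, 3: 2, 7: 5}, 21),
]
for coeffs, rhs in cuts:
    lhs = sum(c * xbar[j] for j, c in coeffs.items())
    assert lhs < rhs              # every listed cut is violated
    print(f'{lhs:6.2f} < {rhs}')
\end{verbatim}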
\section{Conclusions and Future Research}
We studied the structure of the hamiltonian circuit polytope by establishing its dimension, developing tools for the identification of facets, and using these tools to derive several families of facets. The tools include necessary and sufficient conditions for an inequality with at most $n-4$ variables to be facet defining, stated in terms of undominated circuits, and a greedy algorithm for generating undominated circuits, for which we proved completeness. We used a novel approach to identifying families of facet-defining inequalities, based on the structure of variable indices rather than on structured subgraphs. Finally, we described a \mbox{hier}archy of facets of increasing combinatorial complexity and derived all facets on the first three levels. We also presented complete polynomial-time separation algorithms for all facets described here.
\section*{Appendix. Proof of Theorem~\ref{th:greedycomplete}}
To prove Theorem~\ref{th:greedycomplete}, we first define for any given circuit $\bar{x}$ an {\em implied ordering} with respect to $(J_+,J_-)$. The proof will show that if $\bar{x}$ is undominated with respect to $(J_+,J_-)$, then a \mbox{$J$-circuit} that is greedily constructed according to the implied ordering is identical to $\bar{x}(J)$.
For a given $J$-circuit $\bar{x}(J)$, and partition $(J_+,J_-)$, let $J_+=\{i_1, \ldots,i_p\}$ where $\bar{x}_{i_1} < \cdots < \bar{x}_{i_p}$, and let $J_-=\{j_1, \ldots, j_q\}$ where $\bar{x}_{j_1} > \cdots > \bar{x}_{j_q}$.
The implied ordering will be $k_1, \ldots, k_m$. As we construct the ordering, we construct a \mbox{$J$-circuit} $y(J)$ that is greedy with respect to the ordering. The basic idea is that at each step $\ell$ of the procedure, we assign the greedy value to $y_{i_r}$ for the next $i_r\in J_+$ (if any remain) and let $k_{\ell}=i_r$, provided this assigns $y_{i_r}$ the same value as $\bar{x}_{i_r}$. Otherwise, we assign the greedy value to $y_{j_s}$ for the next $j_s\in J_-$ and let $k_{\ell}=j_s$. If no indices $j_s$ remain in $J_-$, we assign the greedy value to $y_{i_r}$ regardless of whether it agrees with $\bar{x}_{i_r}$. The precise algorithm appears in Fig.~\ref{fig:impliedorder}.
As an example, suppose $\bar{x}=(v_2,v_3,v_4,v_7,v_6,v_1,v_5)$, $J_+=\{1,3,6,7\}$, and $J_-=\{4,5\}$. Thus $\bar{x}(J)=(\bar{x}_1,\bar{x}_3,\bar{x}_4,\bar{x}_5,\bar{x}_6,\bar{x}_7)=(v_2,v_4,v_7,v_6,v_1,v_5)$. Based on the values in $\bar{x}(J)$, we order the contents of $J_+$ so that $J_+=\{i_1, \ldots, i_4\}=\{6,1,3,7\}$. Similarly, $J_-=\{j_1,j_2\}=\{4,5\}$. The progress of the algorithm appears in \mbox{Table~\ref{ta:impliedorder}}. Note that when $\ell=4$, we first consider assigning $v_{\min}$ to $y_{i_r}$. But this results in $y_7=v_3$, which deviates from $\bar{x}$ because $\bar{x}_7=v_5$. We therefore assign $v_{\max}$ to $y_{j_s}$, which yields $y_4=v_7$. When $\ell=5$, we again consider assigning $v_{\min}$ to $y_{i_r}$, but because $v_{\min}$ has changed, we now obtain an assignment $y_7=v_5$ that agrees with $\bar{x}$. When $\ell = 6$, the indices in $J_+$ are exhausted, and we therefore assign $v_{\max}$ to $y_{j_s}$, so that $y_5=v_6$. The resulting $y(J)$ is identical to $\bar{x}(J)$, and the implied ordering is $(k_1, \ldots, k_6)=(6,1,3,4,7,5)$.
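A runnable rendition of the algorithm of Fig.~\ref{fig:impliedorder} may help in reproducing such traces. The Python sketch below is ours and rests on one assumption inferred from the worked example: an assignment $y_i=v_j$ is read as the arc $i\to j$ in a partial successor map, and ``creates a cycle'' means that this arc would close a directed cycle (including a self-loop). Under this reading the run below returns a $y$ with $y(J)=\bar{x}(J)$, as in Table~\ref{ta:impliedorder}.
\begin{verbatim}
def creates_cycle(succ, i, j):
    # Would the arc i -> j close a directed cycle in the partial map succ?
    while True:
        if j == i:
            return True
        if j not in succ:
            return False
        j = succ[j]

def implied_ordering(xbar, Jplus, Jminus):
    # xbar maps every index 1..n to its value (values encoded as 1..n);
    # only entries with index in J are compared, but n = len(xbar) fixes
    # the value set V.  A feasible value is assumed to exist at each step.
    n = len(xbar)
    V = set(range(1, n + 1))
    Jp = sorted(Jplus, key=lambda i: xbar[i])    # increasing values
    Jm = sorted(Jminus, key=lambda i: -xbar[i])  # decreasing values
    r = s = 0
    order, y = [], {}
    for _ in range(len(Jp) + len(Jm)):
        def greedy(i, reverse):
            for v in sorted(V, reverse=reverse):
                if not creates_cycle(y, i, v):
                    return v
        v_min = greedy(Jp[r], False) if r < len(Jp) else None
        v_max = greedy(Jm[s], True) if s < len(Jm) else None
        if r < len(Jp) and (xbar[Jp[r]] == v_min or s >= len(Jm)):
            order.append(Jp[r])
            y[Jp[r]] = v_min
            V.discard(v_min)
            r += 1
        else:
            order.append(Jm[s])
            y[Jm[s]] = v_max
            V.discard(v_max)
            s += 1
    return order, y

# Worked example: xbar = (v_2, v_3, v_4, v_7, v_6, v_1, v_5).
xbar = {1: 2, 2: 3, 3: 4, 4: 7, 5: 6, 6: 1, 7: 5}
order, y = implied_ordering(xbar, Jplus={1, 3, 6, 7}, Jminus={4, 5})
assert all(y[i] == xbar[i] for i in y)           # y(J) agrees with xbar(J)
print(order, y)
\end{verbatim}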
\bigskip
\noindent
{\bf Proof of Theorem~\ref{th:greedycomplete}.} Let $\bar{x}(J)$ be a $J$-circuit that is undominated with respect to $(J_+,J_-)$. Let $J_+=\{i_1, \ldots,i_p\}$ where $\bar{x}_{i_1} < \cdots < \bar{x}_{i_p}$, and let $J_-=\{j_1, \ldots, j_q\}$ where $\bar{x}_{j_1} > \cdots > \bar{x}_{j_q}$.
Let $k_1, \ldots, k_m$ be the implied ordering for $\bar{x}$ with respect to $(J_+,J_-)$ as computed above, and let $(y_{k_1}, \ldots, y_{k_m})$ be the
greedy solution with respect to this ordering. We claim that $\bar{x}_{k_{\ell}}=y_{k_{\ell}}$ for $\ell=1,
\ldots, m$, which suffices to prove the theorem. Supposing to the
contrary, let $\bar{\ell}$ be the smallest index for which
$\bar{x}_{k_{\bar{\ell}}}\neq y_{k_{\bar{\ell}}}$. Clearly
$\bar{x}_{k_{\bar{\ell}}}\prec y_{k_{\bar{\ell}}}$ is
inconsistent with the greedy choice, because
$\bar{x}_{k_{\bar{\ell}}}$ is available when
$y_{k_{\bar{\ell}}}$ is assigned a value. Thus
we have $\bar{x}_{k_{\bar{\ell}}} \succ y_{k_{\bar{\ell}}}$.
\begin{figure}[t]
\centering
\fbox{
\parbox[c]{6in}{
\begin{tabbing}
xxx \= xxx \= xxx \= xxx \= \kill
Let $V = \{v_1, \ldots, v_n\}$. \\
Let $J_+ = \{i_1, \ldots, i_p\}$ where $\bar{x}_{i_1} < \cdots < \bar{x}_{i_p}$. \\
Let $J_- = \{j_1, \ldots, j_q\}$ where $\bar{x}_{j_1} > \cdots > \bar{x}_{j_q}$. \\
Let $r=1$ and $s=1$. \\
For $\ell = 1, \ldots, m$: \\
\> Let $v_{\min}$ be the smallest value in $V$ such that setting $y_{i_r}=v_{\min}$ \\
\> \> creates no cycle with the elements of $y$ assigned so far. \\
\> Let $v_{\max}$ be the largest value in $V$ such that setting $y_{j_s}=v_{\max}$ \\
\> \> creates no cycle with the elements of $y$ assigned so far. \\
\> If $r\leq p$ and ($\bar{x}_{i_r}=v_{\min}$ or $s>q$) then \\
\> \> Let $k_{\ell}=i_r$, $y_{i_r}=v_{\min}$, and $r=r+1$. \\
\> \> Remove $v_{\min}$ from $V$. \\
\> Else \\
\> \> Let $k_{\ell}=j_s$, $y_{j_s}=v_{\max}$, and $s=s+1$. \\
\> \> Remove $v_{\max}$ from $V$.
\end{tabbing}
}
}
\vspace{0ex} \caption{Algorithm for generating an implied ordering $k_1, \ldots, k_m$ for $J$-circuit $\bar{x}(J)$ with respect to $(J_+,J_-)$, where $m=|J|$. The resulting \mbox{$J$-circuit} $y(J)$ is greedily constructed with respect to the ordering $k_1, \ldots, k_m$ and $(J_+,J_-)$. The algorithm is used to help prove Theorem~\ref{th:greedycomplete}, not to identify undominated \mbox{$J$-circuits} or construct facets.} \label{fig:impliedorder}
\vspace{2ex}
\end{figure}
\begin{table}
\caption{Computation of the implied ordering for $\bar{x}=(v_2,v_3,v_4,v_7,v_6,v_1,v_5)$, where $J_+=\{1,3,6,7\}$ and $J_-=\{4,5\}$ (indicated by the signs above $\bar{x}$).}
\label{ta:impliedorder}
\vspace{1ex}
\begin{center}
$
\begin{array}{c@{\hspace{3ex}}c@{\hspace{2ex}}c@{\hspace{3ex}}c@{\hspace{1ex}}c@{\hspace{2ex}}cc@{\hspace{5ex}}ccccccc@{\hspace{3ex}}c}
& & & & & & & + & & + & - & - & + & + & \vspace{-.5ex} \\
& & & & & & \hspace{4ex} \bar{x}= \hspace{-4ex}
& v_2 & v_3 & v_4 & v_7 & v_6 & v_1 & v_5 & \\
\ell & r & s & i_r & j_s & v_{\min} & v_{\max} & y_1 & y_2 & y_3 & y_4 & y_5 & y_6 & y_7 & k_{\ell} \\
\hline
1 & 1 & 1 & 6 & 4 & v_1 & v_7 & & & & & & v_1 & & 6 \\
2 & 2 & 1 & 1 & 4 & v_2 & v_7 & v_2 & & & & & v_1 & & 1 \\
3 & 3 & 1 & 3 & 4 & v_4 & v_7 & v_2 & & v_4 & & & v_1 & & 3 \\
4 & 4 & 1 & 7 & 4 & v_3 & v_7 & v_2 & & v_4 & v_7 & & v_1 & & 4 \\
5 & 4 & 2 & 7 & 5 & v_5 & v_6 & v_2 & & v_4 & v_7 & & v_1 & v_5 & 7 \\
6 & 5 & 2 & & 5 & & v_6 & v_2 & & v_4 & v_7 & v_6 & v_1 & v_5 & 5 \\
\hline
\end{array}
$
\end{center}
\vspace{4ex}
\end{table}
By hypothesis, $\bar{x}$ is undominated with respect to $(J_+,J_-)$. We therefore have $\bar{x}_{k_{\ell}} \prec
y_{k_{\ell}}$ for some $\ell\in\{\bar{\ell}+1, \ldots, m\}$.
Let $\hat{\ell}$ be the smallest such index. Then there are two
cases: (1) $k_{\bar{\ell}}$ and $k_{\hat{\ell}}$ are both in $J_+$
or both in $J_-$, or (2) they are in different sets.
\bigskip
Case 1: $k_{\bar{\ell}}$ and $k_{\hat{\ell}}$ are both in $J_+$ or
both in $J_-$. We will suppose that both are in $J_+$. The
argument is similar if both are in $J_-$.
Let $t$ be the index such that $i_t=k_{\bar{\ell}}$, and $u$ the
index such that $i_u=k_{\hat{\ell}}$. Then
$\bar{x}_{i_t}>y_{i_t}$ because $\bar{x}_{i_t}\succ
y_{i_t}$ and $i_t\in J_+$. Let $t'$ be the largest index in
$\{t, \ldots, u-1\}$ such that $\bar{x}_{i_{t'}} >
y_{i_{t'}}$. We know that $t'$ exists because $\bar{x}_{i_t}
> y_{i_t}$. Thus we have two sequences of values related as
follows:
\[
\begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c}
\bar{x}_{i_1} & < & \cdots & < &
\bar{x}_{i_{t-1}} & < & \bar{x}_{i_t} & <
& \cdots & < & \bar{x}_{i_{t'-1}} & < &
\bar{x}_{i_{t'}} & < & \cdots & < &
\bar{x}_{i_{u-1}} & < & \bar{x}_{i_u} \\
$\rotatebox[origin=c]{270}{$=$}$ & & & &
$\rotatebox[origin=c]{270}{$=$}$ & & $\rotatebox[origin=c]{270}{$>$}$ & &
& & $\rotatebox[origin=c]{270}{$\geq$}$ & &
$\rotatebox[origin=c]{270}{$>$}$ & & & &
$\rotatebox[origin=c]{270}{$\geq$}$ & & $\rotatebox[origin=c]{270}{$<$}$ \\
y_{i_1} & & \cdots & & y_{i_{t-1}} & &
y_{i_t} & & \cdots & & y_{i_{t'-1}} & &
y_{i_{t'}} & & \cdots & & y_{i_{u-1}} & &
y_{i_u}
\end{array}
\]
We first show that value $\bar{x}_{i_u}$ has not yet been assigned
in the greedy algorithm when $y_{i_u}$ is assigned a value. That is, we show that $\bar{x}_{i_u}\not\in
\{y_{i_1}, \ldots, y_{i_{u-1}}\}$ and
$\bar{x}_{i_u}\not\in \{y_{j_1}, \ldots, y_{j_{u'}}\}$, where $u'$ is the number of indices in $J_-$ that have been assigned values at that point.
To see that $\bar{x}_{i_u}\not\in \{y_{i_1}, \ldots,
y_{i_{u-1}}\}$, suppose to the contrary that
$\bar{x}_{i_u}=y_{i_{w}}$ for some $w\in \{1, \ldots, u-1\}$.
This is impossible, because $\bar{x}_{i_u}>\bar{x}_{i_w}\geq
y_{i_w}$. Also $\bar{x}_{i_u}\not\in \{y_{j_1}, \ldots,
y_{j_{u'}}\}$, because assigning value $\bar{x}_{i_u}$ to
$y_{j_w}$ for some $w\in \{1, \ldots, u'\}$ contradicts the greedy
construction of $y$, due to the fact that value
$y_{i_u}$ was available at that time and is a superior choice.
We next show that value $\bar{x}_{i_{t'}}$ has not yet been assigned
in the greedy algorithm when $y_{i_u}$ is assigned a value. That is, we show that \mbox{$\bar{x}_{i_{t'}}\not\in
\{y_{i_1}, \ldots, y_{i_{u-1}}\}$} and
$\bar{x}_{i_{t'}}\not\in \{y_{j_1}, \ldots,
y_{j_{u'}}\}$. To begin with, we have that
$\bar{x}_{i_{t'}}\not\in \{y_{i_1}, \ldots,
y_{i_{t'-1}}\}$, by virtue of the same reasoning just applied
to $\bar{x}_{i_u}$. Also $\bar{x}_{i_{t'}}\neq y_{i_{t'}}$,
since by hypothesis $\bar{x}_{i_{t'}} > y_{i_{t'}}$. To show
that $\bar{x}_{i_{t'}}\not\in \{y_{i_{t'+1}}, \ldots,
y_{i_{u-1}}\}$, suppose to the contrary that $\bar{x}_{i_{t'}}
= y_{i_w}$ for some $w\in \{t'+1, \ldots, u-1\}$. Then since
$\bar{x}_{i_{t'}} < \bar{x}_{i_w}$, we must have $\bar{x}_{i_w} >
y_{i_w}$. But this contradicts the definition of $t'$ ($< w$)
as the largest index in $\{t, \ldots, u-1\}$ such that
$\bar{x}_{i_{t'}} > y_{i_{t'}}$. Thus $\bar{x}_{i_{t'}} \neq
y_{i_w}$. Finally, $\bar{x}_{i_{t'}}\not\in \{y_{j_1},
\ldots, y_{j_{u'}}\}$ because assigning value
$\bar{x}_{i_{t'}}$ to $y_{j_w}$ for some $w\in \{1, \ldots, u'\}$
contradicts the greedy construction of $y$, due to the fact
that $y_{i_u}$ was available at the time and $y_{i_u} >
\bar{x}_{i_u} > \bar{x}_{i_{t'}}$.
Because $\bar{x}_{i_u} < y_{i_u}$ and value $\bar{x}_{i_u}$ has
not yet been assigned, setting \mbox{$y_{i_u} = \bar{x}_{i_u}$} must create
a cycle in $y$, because otherwise setting $y_{i_u} = \bar{x}_{i_u}$
would have been the greedy choice. Also, setting
$y_{i_u}=\bar{x}_{i_{t'}}$ was not the greedy choice because
$y_{i_u} > \bar{x}_{i_u} > \bar{x}_{i_{t'}}$. Thus setting
$y_{i_u}=\bar{x}_{i_{t'}}$ must likewise create a cycle in
$y$, because $\bar{x}_{i_{t'}}$ has not yet been assigned. Now
define $G_{y(J)}$ as before and consider the maximal subchain
in $G_{y(J)}$ that contains $y_{i_u}$. Let the segment
of the subchain up to $y_{i_u}$ be
\[
v_z \rightarrow \cdots \rightarrow v_{i_u} \rightarrow y_{i_u}
\]
Because setting $y_{i_u}=\bar{x}_{i_u}$ creates a cycle in
$y$, we must have $\bar{x}_{i_u} = v_z$. Similarly,
because setting $y_{i_u}=\bar{x}_{i_{t'}}$ creates a cycle in
$y$, we must have $\bar{x}_{i_{t'}} = v_z$. This implies
$\bar{x}_{i_u}=\bar{x}_{i_{t'}}$, which is impossible because
$\bar{x}_{i_u}>\bar{x}_{i_{t'}}$.
\bigskip
Case 2: $k_{\bar{\ell}}\in J_+$ and $k_{\hat{\ell}}\in J_-$, or
$k_{\bar{\ell}}\in J_-$ and $k_{\hat{\ell}}\in J_+$. We can rule
out the latter subcase immediately, because $k_{\bar{\ell}}$ can be
in $J_-$ only if $r>p$ when $y_{k_{\bar{\ell}}}$ is assigned
a value. This means $k_{\hat{\ell}}$ must be in
$J_-$ as well, because $y_{k_{\hat{\ell}}}$ is assigned a value after
$y_{k_{\bar{\ell}}}$ is assigned a value, and the situation reverts to Case 1. We
therefore suppose $k_{\bar{\ell}}\in J_+$ and $k_{\hat{\ell}}\in
J_-$.
Let $t$ be the index such that $i_t=k_{\bar{\ell}}$, and $u$ the
index such that $j_u=k_{\hat{\ell}}$. Again $\bar{x}_{i_t} >
y_{i_t}$ because $\bar{x}_{i_t} \succ y_{i_t}$ and
$i_t\in J_+$. Thus, at the time $y_{i_t}$ was assigned
a value, we had $\bar{x}_{j_s}<v_{\max}$ for the current value
of $s$. So we have two sequences of values related as follows:
\begin{equation}
\begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c}
\bar{x}_{j_1} & > & \cdots & > &
\bar{x}_{j_{s-1}} & > & \bar{x}_{j_{s}} & > &
\cdots & \bar{x}_{j_{u-1}} & > & \bar{x}_{j_u} \\
$\rotatebox[origin=c]{270}{$=$}$ & & & &
$\rotatebox[origin=c]{270}{$=$}$ & & $\rotatebox[origin=c]{270}{$\leq$}$ & & &
$\rotatebox[origin=c]{270}{$\leq$}$ & & $\rotatebox[origin=c]{270}{$>$}$ \\
y_{j_1} & & \cdots & & y_{j_{s-1}} & &
y_{j_{s}} & & \cdots & y_{j_{u-1}} & & y_{j_u}
\end{array} \label{eq:J+}
\end{equation}
where $v_{\max}> \bar{x}_{j_s}$. Let $t'$ be the largest index for
which $y_{i_{t'}}$ has been assigned a value at the time
$y_{j_u}$ is assigned a value. We have two sequences of
values related as follows:
\[
\begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c}
\bar{x}_{i_1} & < & \cdots & < &
\bar{x}_{i_{t-1}} & < & \bar{x}_{i_t} & < &
\cdots & < & \bar{x}_{i_{t'}} \\
$\rotatebox[origin=c]{270}{$=$}$ & & & &
$\rotatebox[origin=c]{270}{$=$}$ & & $\rotatebox[origin=c]{270}{$>$}$ & & & & \\
y_{i_1} & & \cdots & & y_{i_{t-1}} & &
y_{i_t} & & \cdots & & y_{i_{t'}}
\end{array}
\]
We first show that a cycle must be created if value $\bar{x}_{j_u}$
is assigned to $y_{j_u}$. Because
$y_{j_u}<\bar{x}_{j_u}$, it suffices to show that value
$\bar{x}_{j_u}$ has not yet been assigned in the greedy algorithm
when $y_{j_u}$ is assigned a value. That is, we show
that $\bar{x}_{j_u}\not\in \{y_{j_1}, \ldots,
y_{j_{u-1}}\}$ and $\bar{x}_{j_u}\not\in \{y_{i_1},
\ldots, y_{i_{t'}}\}$. If $\bar{x}_{j_u}=y_{j_{w}}$ for
some $w\in \mbox{$\{1, \ldots, u-1\}$}$, then
$\bar{x}_{j_u}<\bar{x}_{j_w}\leq y_{j_w}$, which is
impossible. Thus $\bar{x}_{j_u}\not\in \{y_{j_1}, \ldots,
y_{j_{u-1}}\}$. Also $\bar{x}_{j_u}\not\in \{y_{i_1},
\ldots, y_{i_{t'}}\}$, because assigning value $\bar{x}_{j_u}$
to $y_{i_w}$ for some $w\in \{1, \ldots, t'\}$ contradicts the
greedy construction of $y$, due to the fact that value
$y_{j_u}$ was available at that time and is a superior choice.
We next show that a cycle must be created if value $v_{\max}$ is assigned to $y_{j_u}$. Note that
$v_{\max}\not\in \{y_{i_1}, \ldots, y_{i_{t'}}\}$,
because assigning value $v_{\max}$ to $y_{i_w}$ for some $w\in \{1,
\ldots, t'\}$ contradicts the greedy construction of $y$, due
to the fact that value $y_{j_u}$ was available at that time
and is a superior choice because
$v_{\max}>\bar{x}_{j_s}>\bar{x}_{j_u}$. Now suppose, contrary to
the claim, that assigning $v_{\max}$ to $y_{j_u}$ does not create a
cycle. Then since $v_{\max}>y_{j_u}$, the value $v_{\max}$
must have already been assigned in the greedy algorithm at the time
$y_{j_u}$ is assigned a value. This implies $v_{\max}\in
\{y_{j_s}, \ldots, y_{j_{u-1}}\}$. But in this case we
must have $y_{j_s}=v_{\max}$, because assigning $v_{\max}$ to
$y_{j_s}$ does not create a cycle and, by definition, is the most
attractive choice at the time. Thus (\ref{eq:J+}) becomes
\[
\begin{array}{c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c@{\;}c}
\bar{x}_{j_1} & > & \cdots & > & \bar{x}_{j_{s-1}} & > & \bar{x}_{j_s} & > & \cdots & > & \bar{x}_{j_{s'-1}} & > & \bar{x}_{j_{s'}} & > & \cdots & > & \bar{x}_{j_{u-1}} & > & \bar{x}_{j_u} \\
$\rotatebox[origin=c]{270}{$=$}$ & & & & $\rotatebox[origin=c]{270}{$=$}$ & & $\rotatebox[origin=c]{270}{$<$}$ & & & & $\rotatebox[origin=c]{270}{$\leq$}$ & & $\rotatebox[origin=c]{270}{$<$}$ & & & & $\rotatebox[origin=c]{270}{$\geq$}$ & & $\rotatebox[origin=c]{270}{$<$}$ \\
y_{j_1} & & \cdots & & y_{j_{s-1}} & &
y_{j_s} & & \cdots & & y_{j_{s'-1}} & &
y_{j_{s'}} & & \cdots & & y_{j_{u-1}} & &
y_{j_u}
\end{array}
\]
where $y_{j_s} = v_{\max}$ and where $s'$ is the largest index
in $\{s, \ldots, u-1\}$ such that
$y_{j_{s'}}<\bar{x}_{j_{s'}}$. Now we can argue as in Case 1
that assigning $\bar{x}_{j_u}$ to $y_{j_u}$ creates a cycle, and
assigning $\bar{x}_{j_{s'}}$ to $y_{j_u}$ creates a cycle, which
implies $\bar{x}_{j_{s'}}=\bar{x}_{j_u}$, a contradiction because
$\bar{x}_{j_{s'}}>\bar{x}_{j_u}$. We conclude that assigning
$v_{\max}$ to $y_{j_u}$ creates a cycle.
Having shown that assigning $\bar{x}_{j_u}$ to $y_{j_u}$ creates a
cycle, and assigning $v_{\max}$ to $y_{j_u}$ creates a cycle, we
derive as in Case 1 that $v_{\max}=\bar{x}_{j_u}$, a contradiction
because $v_{\max}\geq \bar{x}_{j_s}> \bar{x}_{j_u}$. The theorem
follows.
$\Box$ \medskip
| {
"timestamp": "2018-12-07T02:02:18",
"yymm": "1812",
"arxiv_id": "1812.02235",
"language": "en",
"url": "https://arxiv.org/abs/1812.02235",
"abstract": "The hamiltonian circuit polytope is the convex hull of feasible solutions for the circuit constraint, which provides a succinct formulation of the traveling salesman and other sequencing problems. We study the polytope by establishing its dimension, developing tools for the identification of facets, and using these tools to derive several families of facets. The tools include necessary and sufficient conditions for an inequality to be facet defining, and an algorithm for generating all undominated circuits. We use a novel approach to identifying families of facet-defining inequalities, based on the structure of variable indices rather than on subgraphs such as combs or subtours. This leads to our main result, a hierarchy of families of facet-defining inequalities and polynomial-time separation algorithms for them.",
"subjects": "Combinatorics (math.CO); Optimization and Control (math.OC)",
"title": "The Hamiltonian Circuit Polytope",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969650796874,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7099044331682235
} |
https://arxiv.org/abs/1205.4603 | The maximal energy of classes of integral circulant graphs | The energy of a graph is the sum of the moduli of the eigenvalues of its adjacency matrix. We study the energy of integral circulant graphs, also called gcd graphs, which can be characterized by their vertex count $n$ and a set $\cal D$ of divisors of $n$ in such a way that they have vertex set $\mathbb{Z}_n$ and edge set $\{\{a,b\}:\ a,b\in\mathbb{Z}_n,\ \gcd(a-b,n)\in {\cal D}\}$. For a fixed prime power $n=p^s$ and a fixed divisor set size $|{\cal D}| =r$, we analyze the maximal energy among all matching integral circulant graphs. Let $p^{a_1} < p^{a_2} < ... < p^{a_r}$ be the elements of ${\cal D}$. It turns out that the differences $d_i=a_{i+1}-a_{i}$ between the exponents of an energy maximal divisor set must satisfy certain balance conditions: (i) either all $d_i$ equal $q:=\frac{s-1}{r-1}$, or at most the two differences $[q]$ and $[q+1]$ may occur; (ii) there are rules governing the sequence $d_1,...,d_{r-1}$ of consecutive differences. For particular choices of $s$ and $r$ these conditions already guarantee maximal energy and its value can be computed explicitly. | \section{Introduction}
Integral circulant graphs have attracted much research attention lately, in particular since
more and more people have become aware that they
play a role in quantum physics \cite{SAX}, \cite{BAS4}. A characteristic property of circulant graphs
is that their vertices can be numbered such that any cyclic rotation of the vertex numbering results
in a graph isomorphic to the original graph. Circulant graphs have been the object of research
for quite some time \cite{DAV} and belong to the important family of Cayley graphs.
The integral circulant graphs, having only integer eigenvalues,
form a small but rather distinguished subclass since integral graphs are quite rare among graphs in general \cite{AHM}.
Given an integer $n$ and a set ${\cal D}$ of positive divisors of $n$, the integral circulant graph $\Icg{\cal D}{n}$ is
defined as the graph having vertex set $\ZZ_n=\{0,1,\ldots,n-1\}$ and edge set $\{ \{a,b\}:~a,b\in \ZZ_n,~ \gcd(a-b,n)\in {\cal D}\}$.
We consider only loopless gcd graphs, i.e. $n\notin {\cal D}$.
For $\vert {\cal D}\vert =1$ we obtain the subclass of so-called unitary Cayley graphs.
Over the years, the general structural properties of integral circulant graphs have been well
researched \cite{DEJ}, \cite{BER}, \cite{SO}, \cite{KLO}, \cite{AKH},
\cite{BAS}, \cite{KLO2}, \cite{DRO}, \cite{BEA}, \cite{BAS2}.
Due to the connection with quantum physics, emphasis has lately been placed
on researching the energy of integral circulant graphs
\cite{SHP}, \cite{ILI}, \cite{RAM}, \cite{BAS3}, \cite{PET}, \cite{SA1}, \cite{SA2}.
The \textit{energy} $E(G)$ of a graph $G$ on $n$ vertices is defined as
\[
E(G)=\sum_{i=1}^n \vert\lambda_i\vert,
\]
where $\lambda_1,\ldots,\lambda_n$ are the
eigenvalues of the adjacency matrix of $G$.
Refer to \cite{BRU} and \cite{GUT} for general results on graph energy.
Let us abbreviate $\Ene{\cal D}{n}=E(\Icg{\cal D}{n})$.
Given a positive integer $n$, we consider
\[ \Emin{n} := \min\,\{ \Ene{\cal D}{n}:\;\; {\cal D}\subseteq \{1\le d<n:\; d\mid n\} \}\]
and
\[ \Emax{n} := \max\,\{ \Ene{\cal D}{n}:\;\; {\cal D}\subseteq \{1\le d<n:\; d\mid n\} \}.\]
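For small $n$ both quantities can be computed by brute force. The following sketch (Python with NumPy; illustrative only, the helper name is ours) enumerates all divisor sets ${\cal D}$ for $n=12$ and reads $\Emin{12}$ and $\Emax{12}$ off the adjacency spectra.
\begin{verbatim}
import itertools
import numpy as np
from math import gcd

def energy(n, D):
    # Energy of ICG_n(D): sum of the absolute eigenvalues of the
    # adjacency matrix with a ~ b iff gcd(a-b, n) lies in D.
    A = np.array([[1 if gcd((a - b) % n, n) in D else 0 for b in range(n)]
                  for a in range(n)])
    return np.abs(np.linalg.eigvalsh(A)).sum()

n = 12
divisors = [d for d in range(1, n) if n % d == 0]
energies = [energy(n, set(D))
            for k in range(1, len(divisors) + 1)
            for D in itertools.combinations(divisors, k)]
print('E_min =', min(energies), ' E_max =', max(energies))
\end{verbatim}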
Consider a prime power $n=p^s$ and a divisor set ${\cal D} = \{ p^{a_1}, p^{a_2} , \ldots , p^{a_r}\}$ with exponents $0\le a_1 < \ldots < a_r \le s-1$.
According to
Theorem 2.1 in \cite{SA1} we have
\begin{equation}
\Ene{{\cal D}}{p^s} = 2(p-1)p^{s-1}\left(r -(p-1)h_p(a_1,\ldots,a_r) \right), \label{ft3}
\end{equation}
where
\begin{equation}
h_p(x) = h_p(x_1,\ldots,x_r) := \sum_{k=1}^{r-1}\sum_{i=k+1}^r \,\frac{1}{p^{x_i-x_k}} \label{basishp}
\end{equation}
for $x=(x_1, \ldots, x_r) \in \mathbb{R}^r$. Observe that $h_p$ has the symmetry property
\begin{equation}
h_p(s-1-a_r,\ldots,s-1-a_1) = h_p(a_1,\ldots,a_r) \label{symmhp}
\end{equation}
for all integral exponents $0\leq a_1 < a_2 < \ldots < a_{r-1} < a_r \leq s-1$.
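Formula (\ref{ft3}) is easy to confirm numerically. The sketch below (Python with NumPy; the helper names are ours) compares the right-hand side of (\ref{ft3}), with $h_p$ taken from (\ref{basishp}), against a direct spectral computation for a few prime powers and all exponent sets.
\begin{verbatim}
import itertools
import numpy as np
from math import gcd

def spectral_energy(n, D):
    A = np.array([[1 if gcd((a - b) % n, n) in D else 0 for b in range(n)]
                  for a in range(n)])
    return np.abs(np.linalg.eigvalsh(A)).sum()

def h(p, exps):
    # h_p(a_1, ..., a_r) as in (basishp).
    return sum(p**-(exps[i] - exps[k])
               for k in range(len(exps)) for i in range(k + 1, len(exps)))

def energy_ft3(p, s, exps):
    r = len(exps)
    return 2*(p - 1)*p**(s - 1)*(r - (p - 1)*h(p, exps))

for p, s in [(3, 3), (5, 2), (7, 2)]:
    for r in range(1, s + 1):
        for exps in itertools.combinations(range(s), r):
            D = {p**a for a in exps}
            assert abs(spectral_energy(p**s, D) - energy_ft3(p, s, exps)) < 1e-6
print('formula (ft3) confirmed on the sampled cases')
\end{verbatim}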
A straightforward consequence of (\ref{ft3}) is that $\Emin{p^s}$
is attained precisely for the singleton divisor sets ${\cal D}=\{p^t\}$ with $0\le t \le s-1$ (cf. \cite{SA1}, Theorem 3.1).
In \cite{SA2} divisor sets $\cal D$ producing graphs with maximal energy
$ \Emax{p^s}$ were studied. Equivalently, exponent tuples $(a_1,\ldots,a_r)$ minimizing $h_p$ had to be found.
By the result cited above, such minimizers satisfy $r\geq 2$, and they obviously must have the entries $a_1=0$ and $a_r=s-1$. Accordingly, a corresponding $a=(a_1,\ldots,a_r)$ lies in the set
\[ A(s,r) := \{(a_1,\ldots,a_r)\in \mathbb{Z}^r:\; 0=a_1<a_2<\ldots <a_{r-1}<a_r=s-1\},\]
and such an $a$ is called an \emph{admissible} exponent tuple.
Hence the quest for minimizers of $h_p$ is only interesting in case $r\geq 3$, which we shall assume in the sequel. It was shown by use of methods from convex optimization that, for fixed $s$ and $r$, the function $h_p$ becomes almost minimal if only $0=a_1<a_2<\ldots<a_{r-1}<a_r=s-1$ are chosen in nearly equidistant position (\cite{SA2}, Theorem 4.2). Note here that perfect equidistance can only be achieved if $(r-1) \mid (s-1)$ because the $a_i$ are integers.
It is the purpose of this article to use combinatorial instead of analytic arguments in order to refine the earlier approximative results.
The nearly equidistant positioning just mentioned indicates that the key to maximizing the energy lies
in considering the successive exponent differences. Hence, for a given $a\in A(s,r)$, we define its \emph{delta vector} as
\[
\delta(a):= (\delta_1(a), \delta_2(a), \ldots, \delta_{r-1}(a)) \in \mathbb{N}^{r-1}
\]
with $\delta_j(a):=a_{j+1}-a_j$ ($1\le j \leq r-1$). Obviously, we have $\sum_{j=1}^{r-1} \delta_j(a) = s-1$. Thus, introducing
\[
D(s,r):= \{(d_1,\ldots,d_{r-1})\in \mathbb{N}^{r-1}: \; \sum_{j=1}^{r-1} d_j = s-1 \},
\]
the function
\[
\delta: \left\{ \begin{array}{ccl}
A(s,r) & \longrightarrow & D(s,r) \\
(a_1,a_2,\ldots,a_r) & \mapsto & (a_2-a_1,a_3-a_2,\ldots,a_r-a_{r-1})
\end{array}
\right.
\]
is
1--1
with its inverse
\[ \quad\quad \quad\quad\;\;\,
\delta^{-1}: \left\{ \begin{array}{ccl}
D(s,r) & \longrightarrow & A(s,r) \\
(d_1,d_2,\ldots,d_{r-1}) & \mapsto & (0,\,d_1,\,d_1+d_2,\,\ldots,\, d_1+d_2+\ldots+d_{r-2},\,s-1).
\end{array}
\right.
\]
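In code, the passage between exponent tuples and delta vectors is just a matter of consecutive differences and partial sums; the following small Python helpers (ours, used only for later experiments) realize $\delta$ and $\delta^{-1}$.
\begin{verbatim}
from itertools import accumulate

def delta(a):
    # Delta vector of an admissible exponent tuple a = (a_1, ..., a_r).
    return tuple(a[j + 1] - a[j] for j in range(len(a) - 1))

def delta_inv(d):
    # Inverse map: the partial sums (0, d_1, d_1 + d_2, ..., s - 1).
    return (0,) + tuple(accumulate(d))

assert delta_inv((2, 1, 3)) == (0, 2, 3, 6)   # here r = 4 and s - 1 = 6
assert delta(delta_inv((2, 1, 3))) == (2, 1, 3)
\end{verbatim}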
The mentioned divisor set structure becomes apparent by restrictions
on the delta vector $\delta(a)$ corresponding to an energy maximal exponent tuple $a$ as follows:\newline
Firstly, the set $\{\delta_j(a):\; j=1,\ldots,r-1\}$ of differences is either a singleton or has only two elements that are successive positive integers.
Secondly, the distribution of the differences must satisfy certain balance conditions, in the sense that the differences taking the value that occurs less often must be distributed somewhat ``evenly'' among the differences taking the other value.
In some cases, these restrictions will already characterize the
delta vectors, and consequently the divisor set(s) imposing maximal energy on the corresponding class of integral circulant graphs. In other words, for some fixed $s$ and $r$, we will be able to determine precisely
\[ \min h_p := \min \{h_p(a): a \in A(s,r)\} \]
along with all admissible $a$ satisfying $h_p(a)= \min h_p$.
Open questions and conjectures in the final section disclose our view on how a ``perfect balancing'' process might look in order to determine the admissible $a$ satisfying $h_p(a)= \min h_p$ in all cases.
\section{Main results}
In what follows, we shall consider $3\le r < s$ to be fixed integers and set $q:=\frac{s-1}{r-1}$.
Furthermore, $p$ will always be a fixed prime.
\bigskip
If $s\equiv 1 \bmod (r-1)$ or $s\equiv 0 \bmod (r-1)$, we are able to determine $\min h_p$ and all minimizers of $h_p$ precisely.
\begin{theorem}{\label{Thm2.1}}
Let $p\geq 3$ be a fixed prime, and let $3\leq r < s$.
Assume that $a\in A(s,r)$ is a minimizer of $h_p$, i.e. $h_p(a) = \min h_p$.
\begin{itemize}
\item[(i)]
If $(r-1)\mid (s-1)$, i.e. $q=\frac{s-1}{r-1}$ is an integer, then
$a = \delta^{-1}(q,\ldots,q)$,
and we have
\[ h_p(a) = \min h_p = \frac{1}{p^q-1}\left(r-1 - \frac{1}{p^q-1}\left(1-\frac{1}{p^{q(r-1)}}\right)\right). \]
\item[(ii)]
If $(r-1)\mid s$, then
$a=\delta^{-1}([q],[q+1],\ldots,[q+1])$ or $a=\delta^{-1}([q+1],\ldots,[q+1],[q])$,
and we have
\[
h_p(a) = \min h_p = \frac{1}{p^{[q+1]}-1}\left(r-1 + \left(p-1-\frac{1}{p^{[q+1]}-1}\right)\left(1-\frac{1}{p^{[q+1](r-1)}}\right)\right).
\]
\end{itemize}
\end{theorem}
Inserting the explicit values of $\min h_p$ into formula (\ref{ft3}), one can easily compute the maximal energies of the corresponding classes of integral circulant graphs.
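Both closed forms (and hence the resulting maximal energies) are easy to cross-check by brute force over $A(s,r)$. The following Python sketch does so with exact rational arithmetic for two sample parameter choices of ours, one for each case of the theorem.
\begin{verbatim}
from fractions import Fraction as F
from itertools import combinations

def h(p, a):
    return sum(F(1, p**(a[i] - a[k]))
               for k in range(len(a)) for i in range(k + 1, len(a)))

def min_h(p, s, r):
    # Brute force over admissible tuples (0, a_2, ..., a_{r-1}, s-1).
    return min(h(p, (0,) + mid + (s - 1,))
               for mid in combinations(range(1, s - 1), r - 2))

p = 3
# Case (i): (r-1) | (s-1), e.g. s = 7, r = 4, q = 2.
s, r, q = 7, 4, 2
closed = F(1, p**q - 1)*(r - 1 - F(1, p**q - 1)*(1 - F(1, p**(q*(r - 1)))))
assert min_h(p, s, r) == closed
# Case (ii): (r-1) | s, e.g. s = 4, r = 3, [q+1] = 2.
s, r, Q = 4, 3, 2
closed = F(1, p**Q - 1)*(r - 1 + (p - 1 - F(1, p**Q - 1))*(1 - F(1, p**(Q*(r - 1)))))
assert min_h(p, s, r) == closed
print('Theorem 2.1 confirmed for the sampled parameters')
\end{verbatim}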
\bigskip\medskip
Complementing Theorem \ref{Thm2.1}, we have the following
\begin{theorem}{\label{Thm2.2}}
Let $p\geq 3$ be a fixed prime, and let $3\leq r < s$ be such that $(r-1)\nmid (s-1)$ and $(r-1)\nmid s$. Define the integer $g$ as the least positive residue satisfying $g \equiv s-1 \bmod (r-1)$. Assume that $a\in A(s,r)$ is a minimizer of $h_p$, i.e. $h_p(a) = \min h_p$.
\medskip\newline
For $2g\geq r-1$ and $q_2:=\frac{g}{r-g-2}$ we have:
\begin{itemize}
\item[(i)] If $(r-g-2)\mid g$, then
\[ a= \delta^{-1}\Big([q],\underbrace{[q+1],\ldots,[q+1]}_{q_2-fold},[q],\underbrace{[q+1],\ldots,[q+1]}_{ q_2-fold},[q],\mbox{\rm etc.},\underbrace{[q+1],\ldots,[q+1]}_{q_2-fold},[q]\Big).
\]
\item[(ii)] If $(r-g-2)\nmid g$, then $d=(d_1,\ldots,d_{r-1}):=\delta(a)$ has the following properties:
\begin{itemize}
\item[$\bullet$] There are exactly $r-g-1$ entries $[q]$, two of which are $d_1=d_{r-1}=[q]$. Moreover, neighboring entries $d_j=d_{j+1}=[q]$ do not occur.
\item[$\bullet$] The remaining $g$ entries of $\delta(a)$ all equal $[q+1]$ and appear in blocks of length either $[q_2]$ or $[q_2+1]$. More precisely, $\delta(a)$ has exactly $\,e\,$ $[q+1]$-blocks of length $[q_2+1]$ and
$\,(r-g-2-e)\,$ $[q+1]$-blocks of length $[q_2]$, where $e \equiv g \bmod (r-g-2)$ is the least positive residue.
\end{itemize}
\end{itemize}
For $2g\leq r-2$ and $q_1:=\frac{r-g-1}{g+1}$ we have:
\begin{itemize}
\item[(iii)] If $(g+1)\mid (r-g-1)$, then
\[ a= \delta^{-1}\Big(\underbrace{[q],\ldots,[q]}_{q_1-fold},[q+1],\underbrace{[q],\ldots,[q]}_{ q_1-fold},[q+1],\mbox{etc.},\underbrace{[q],\ldots,[q]}_{q_1-fold}\Big)
\]
\item[(iv)] If $(g+1)\nmid (r-g-1)$, then $d=(d_1,\ldots,d_{r-1}):=\delta(a)$ has the following properties:
\begin{itemize}
\item[$\bullet$] There are exactly $g$ entries $[q+1]$, but $d_1\neq [q+1]$ and $d_{r-1}\neq [q+1]$. Moreover, neighboring entries $d_j=d_{j+1}=[q+1]$ do not occur.
\item[$\bullet$] The remaining $r-g-1$ entries of $\delta(a)$ all equal $[q]$ and appear in blocks of length either $[q_1]$ or $[q_1+1]$. More precisely, $\delta(a)$ has exactly $\,f\,$ $[q]$-blocks of length $[q_1+1]$ and
$\,(g+1-f)\,$ $[q]$-blocks of length $[q_1]$, where $f \equiv r-g-1 \bmod (g+1)$ is the least positive residue.
\end{itemize}
\end{itemize}
\end{theorem}
As in Theorem \ref{Thm2.1}, the computation of $\min h_p$ in (i) and (iii) is just a matter of evaluating certain multi-geometric sums, and again by use of (\ref{ft3})
this would give explicit formulae for the maximal energies of the corresponding classes of integral circulant graphs.
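The block structure asserted in Theorem \ref{Thm2.2} can be illustrated in the same brute-force manner. For instance, $s=5$, $r=4$, $p=3$ satisfies $(r-1)\nmid(s-1)$ and $(r-1)\nmid s$ with $g=1$, $2g\leq r-2$ and $q_1=1$, so case (iii) applies; enumerating $A(5,4)$ in the short Python sketch below confirms that the unique minimizer has delta vector $([q],[q+1],[q])=(1,2,1)$.
\begin{verbatim}
from fractions import Fraction as F
from itertools import combinations

def h(p, a):
    return sum(F(1, p**(a[i] - a[k]))
               for k in range(len(a)) for i in range(k + 1, len(a)))

p, s, r = 3, 5, 4       # here g = 1 and case (iii) of Theorem 2.2 applies
tuples = [(0,) + mid + (s - 1,) for mid in combinations(range(1, s - 1), r - 2)]
best = min(h(p, a) for a in tuples)
minimizers = [a for a in tuples if h(p, a) == best]
deltas = [tuple(a[j + 1] - a[j] for j in range(r - 1)) for a in minimizers]
print(deltas)           # [(1, 2, 1)], the pattern ([q], [q+1], [q])
\end{verbatim}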
\section{ Bivalence -- Proof of Theorem \ref{Thm2.1}}
For $d=(d_1,\ldots,d_{r-1}) \in D(s,r)$, let
\begin{align*}
\max d &:= \max\{d_j: \; 1\le j\le r-1\}, \\
\min d &:= \min\{d_j: \; 1\le j\le r-1\}.
\end{align*}
By the definition of $D(s,r)$, we clearly have
\begin{equation}
1 \le \min d \le q \le \max d \le s-r+1. \label{diffdelta}
\end{equation}
For
$d\in D(s,r)$ we call $\rho(d):= \max d - \min d$
the \emph{range} of $d$.
Any vector containing only entries $m$ or $m+1$ for some positive integer $m$ shall be called {\em bivalent}.
Hence, $d$
is bivalent if $\rho(d) \leq 1$.
It is an immediate consequence of (\ref{diffdelta}) that
$\min d = [q]$ and $\max d = [q+1]$ for a bivalent $d\in D(s,r)$ in case $q$ is not integral, and $\min d =\max d = q$ for a bivalent $d\in D(s,r)$ in case $q$ is an integer.
For the set
\[
\mathrm{Biv}(s,r):= \{d\in D(s,r):\; \rho(d)\le 1\} \subseteq D(s,r),
\]
containing all bivalent elements of $D(s,r)$, we thus have
\begin{equation}
\mathrm{Biv}(s,r) = \left\{\begin{array}{ll}
\{d\in D(s,r):\; \forall j\;\; d_j=[q] \mbox{ or } d_j=[q+1]\}& \mbox{ if $q\notin\mathbb{N}$, } \\
\{(q,q,\ldots,q)\}& \mbox{ if $q\in\mathbb{N}$. }
\end{array} \right.
\label{Abar}
\end{equation}
\begin{proposition}{\label{PropSmallRange}}
Let $p$ be a fixed prime.
If $a\in A(s,r)$ satisfies $h_p(a) = \min h_p$,
then $\delta(a) \in \mathrm{Biv}(s,r)$.
\end{proposition}
{\sc Proof. }
We make the assumption that $d=(d_1,\ldots,d_{r-1}):=\delta(a) \notin \mathrm{Biv}(s,r)$
and shall derive a contradiction.
Let $u$ be some index such that $d_u = \min d$, and let $v$ be some index such that $d_v= \max d$. By assumption, $u\neq v$. By the symmetry property (\ref{symmhp}) of $h_p$, we may assume w.l.o.g. that $u<v$, and also that
\begin{equation}
\min d < d_j < \max d \quad\quad (u<j<v).
\label{intermediate}
\end{equation}
For $a=(a_1,\ldots,a_r)$, say, we define $b=(b_1,\ldots,b_r)\in A(s,r)$ by setting
\begin{equation} \label{bjot}
b_j := \left\{\begin{array}{ll}
a_j & \mbox{for $j\le u$ or $j\ge v+1$,} \\
a_j+1 & \mbox{for $u+1\leq j \leq v$,}
\end{array} \right.
\end{equation}
i.e. we simultaneously extend one of the smallest subintervals of the partition $(a_1,\ldots,a_r)$ by $1$ and shorten one of its longest subintervals by $1$, while all other subintervals remain unchanged in length.
Then, by (\ref{basishp}),
\[h_p(a)-h_p(b)=
\sum_{k=1}^{r-1}\sum_{i=k+1}^r \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{b_i-b_k}}\right).\]
According to the definition of $b$ in (\ref{bjot}),
the two quotients enclosed in parentheses differ from each other only if $1\leq k \leq u$ and $u+1\leq i \leq v$, or if $u+1\leq k \leq v$ and $v+1\leq i \leq r$. Therefore,
and since $\sum_{k=1}^u p^{a_k-a_u} \geq 1$ and $\sum_{i=v+1}^r p^{a_{v+1}-a_i} < \sum_{j=0}^{\infty}p^{-j} = \frac{p}{p-1}$,
\begin{align}
h_p(a)&-h_p(b) = \nonumber \\
&= \sum_{k=1}^{u}\sum_{i=u+1}^v \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{(a_i+1)-a_k}}\right)
+ \sum_{k=u+1}^{v}\sum_{i=v+1}^r \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_i-(a_k+1)}}\right) \nonumber\\
&= (p-1)\left(\frac{1}{p} \sum_{k=1}^{u}\, p^{a_k} \sum_{i=u+1}^v \,\frac{1}{p^{a_i}} -
\sum_{k=u+1}^{v}\, p^{a_k} \sum_{i=v+1}^r \,\frac{1}{p^{a_i}} \right) \label{diffhp} \\
&= (p-1)\left(p^{a_u- a_{u+1}-1} \sum_{k=1}^{u}\, \frac{1}{p^{a_u -a_k}} \sum_{i=u+1}^v \,\frac{1}{p^{a_i-a_{u+1}}} - p^{a_v- a_{v+1}} \sum_{k=u+1}^{v}\, \frac{1}{p^{a_v-a_k}} \sum_{i=v+1}^r \,\frac{1}{p^{a_i-a_{v+1}}} \right) \nonumber\\
&= (p-1)\left(p^{-\min d -1} \sum_{k=1}^{u}\, \frac{1}{p^{a_u -a_k}} \sum_{i=u+1}^v \,\frac{1}{p^{a_i-a_{u+1}}}
- p^{-\max d} \sum_{k=u+1}^{v}\, \frac{1}{p^{a_v-a_k}} \sum_{i=v+1}^r \,\frac{1}{p^{a_i-a_{v+1}}} \right) \nonumber \\
&> (p-1)\left(p^{-\min d-1} \sum_{i=u+1}^v \,\frac{1}{p^{a_i-a_{u+1}}}
- \frac{p^{- \max d +1}}{p-1} \sum_{k=u+1}^{v}\, \frac{1}{p^{a_v-a_k}} \right). \nonumber
\end{align}
Since $\sum_{i=u+1}^v \,p^{a_{u+1}-a_i}\geq 1$ and $\sum_{k=u+1}^{v}\, p^{a_k-a_v}<\sum_{j=0}^{\infty}p^{-j} = \frac{p}{p-1}$, it follows from (\ref{diffhp}) that
\[
h_p(a)-h_p(b) > (p-1)\left(p^{- \min d -1} - \frac{p^{-\max d +2}}{(p-1)^2}\right).
\]
In case $\rho(d)\geq 3$, i.e. $\min d \leq \max d -3$, we conclude that
\[
h_p(a)-h_p(b) > (p-1) p^{2-\max d} \left( 1 - \frac{1}{(p-1)^2}\right)\geq 0
\]
for all primes $p$, which proves the proposition.
We are left with the case $\rho(d)=2$, i.e. $\min d = \max d -2$. By (\ref{intermediate}), we have
\[
d_j = \min d +1 = \max d -1 \quad\quad (u<j<v).
\]
Consequently
\[
a_i-a_{u+1} = \sum_{j=u+1}^{i-1} d_j = (i-u-1)(\max d -1) \quad\quad (i= u+1,\ldots,v)
\]
and
\[
a_v-a_k = \sum_{j=k}^{v-1} d_j = (v-k)(\max d -1) \quad\quad (k= u+1,\ldots,v).
\]
Hence by (\ref{diffhp})
\begin{align*}
h_p(a)-h_p(b) &> (p-1)p^{1-\max d} \left(\sum_{i=u+1}^v \,\frac{1}{p^{(i-u-1)(\max d-1)}}
- \frac{1}{p-1} \sum_{k=u+1}^{v}\, \frac{1}{p^{(v-k)(\max d-1)}} \right)\\
&= (p-1)p^{1-\max d} \sum_{i=u+1}^v \,\frac{1}{p^{(i-u-1)(\max d-1)}} \left(1- \frac{1}{p-1}\right) \geq 0
\end{align*}
for all primes $p$, which completes our proof.
\hfill \ensuremath{\Box}\bigskip
{\sc Proof of Theorem \ref{Thm2.1}{\rm(i)}. }\newline
Let $a\in A(s,r)$ have the property $h_p(a) = \min h_p$. Then we know by Proposition \ref{PropSmallRange} that $\delta(a)\in \mathrm{Biv}(s,r)$. It follows from (\ref{Abar}) that $\delta(a)=(q,q,\ldots,q)$.
The proof of the formula for $\min h_p$ is an easy exercise with geometric sums.
\hfill \ensuremath{\Box}\bigskip
\medskip
Up to this point we know that $\min h_p$ can only be attained by admissible tuples $a$ having bivalent delta vectors,
that is $\delta(a)\in \mathrm{Biv}(s,r)$. In the sequel, we shall derive further restrictions for minimizers of $h_p$. For $(r-1)\nmid (s-1)$, the number $q$ is not an integer. If $\delta(a)\in \mathrm{Biv}(s,r)$, then $\delta(a) \in \{[q],[q+1]\}^{r-1}$ by (\ref{Abar}).
\begin{proposition}{\label{PropUnframed}}
Let $a\in A(s,r)$ satisfy $h_p(a)= \min h_p$, hence $d=(d_1,\ldots,d_{r-1}):=\delta(a)\in \mathrm{Biv}(s,r)$ by Proposition \ref{PropSmallRange}.
If
$d_1=[q+1]$ or $d_{r-1}=[q+1]$, then
$d=([q],[q+1],[q+1],\ldots,[q+1])$ or
$d=([q+1],[q+1],\ldots,[q+1],[q])$.
\end{proposition}
{\sc Proof. }
By the symmetry of $h_p$ (see (\ref{symmhp})), we may assume w.l.o.g. that
$d_1=[q+1]$.
Clearly, $d_j=[q]$ for at least one $j$. Hence let
\[
d_1 = d_2 = \ldots = d_{\ell} = [q+1], \quad d_{\ell +1} = [q]
\]
for a suitable $1\leq \ell\leq r-2$. For $a=(a_1,\ldots,a_r)$, say, we define $b=(b_1,\ldots,b_r) \in A(s,r)$ by setting
\[ b_j := \left\{ \begin{array}{cl}
a_j & \mbox{ for $j=1$ or $\ell +2 \leq j \leq r$, }\\
a_j-1& \mbox{ for $2\leq j \leq \ell +1$. }
\end{array} \right.
\]
Clearly, $\delta(a)\in \mathrm{Biv}(s,r)$ implies $\delta(b)\in \mathrm{Biv}(s,r)$.
By (\ref{basishp}), we have
\[
h_p(a)-h_p(b) =
\sum_{k=1}^{r-1}\sum_{i=k+1}^r \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{b_i-b_k}}\right).
\]
According to our definition of $b$,
the two quotients enclosed in parentheses differ from each other only if $k=1$ and $2\leq i \leq \ell+1$, or if $2\leq k \leq \ell+1$ and $\ell+2 \leq i \leq r$. Therefore,
\begin{align*}
h_p(a)-h_p(b)
&= \sum_{i=2}^{\ell+1} \,\left(\frac{1}{p^{a_i-a_1}} - \frac{1}{p^{(a_i-1)-a_1}}\right)
+ \sum_{k=2}^{\ell+1}\sum_{i=\ell+2}^r \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_i-(a_k-1)}}\right) \\
&= (p-1)\left(\frac{1}{p} \sum_{k=2}^{\ell+1}\, p^{a_k} \sum_{i=\ell+2}^r \,\frac{1}{p^{a_i}} -
p^{a_1} \sum_{i=2}^{\ell+1} \,\frac{1}{p^{a_i}} \right).
\end{align*}
Observe that $a_1=0$ and $a_k= [q+1](k-1)$ for $2\leq k \leq \ell+1$. Hence
\begin{align}\label{hpdiff}
\begin{split}
h_p(a)-h_p(b)
&= (p-1)\left(\frac{1}{p} \sum_{k=0}^{\ell-1}\, p^{[q+1](k+1)} \sum_{i=\ell+2}^r \,
\,\frac{1}{p^{a_i}}
- \sum_{i=0}^{\ell-1} \,\frac{1}{p^{[q+1](i+1)}} \right)\\
&= \frac{p-1}{p^{[q+1]}-1}
\left(\frac{p^{[q+1]}}{p} \left(p^{[q+1]\ell}-1\right) \sum_{i=\ell+2}^r
\,\frac{1}{p^{a_i}}
- \left( 1- \frac{1}{p^{[q+1]\ell}} \right)\right).
\end{split}
\end{align}
If $\ell \leq r-3$, we obtain
\[
\sum_{i=\ell+2}^r \, \frac{1}{p^{a_i}} > \frac{1}{p^{a_{\ell+2}}} =
\frac{1}{p^{[q+1]\ell + [q]}}.
\]
Using this lower bound in (\ref{hpdiff}) shows that the righthand side of (\ref{hpdiff}) is positive. Thus $h_p(a) >h_p(b)$, which would contradict the minimality of $h_p(a)$.
It remains to consider the case $\ell = r-2$, but then $\delta(a)=
([q+1],[q+1],\ldots,[q+1],[q])$.
\hfill \ensuremath{\Box}\bigskip
{\sc Proof of Theorem \ref{Thm2.1}{\rm(ii)}. }\newline
Let $a\in A(s,r)$ satisfy $h_p(a) = \min h_p$. It follows from Proposition \ref{PropSmallRange} that
$(d_1,\ldots,d_{r-1}):=\delta(a)\in \mathrm{Biv}(s,r)$.
The condition $(r-1)\mid s$ means that $s-1\equiv -1 \bmod (r-1)$, hence
$d_j = [q]$ for exactly one $1\leq j \leq r-1$ and $d_j = [q+1]$ otherwise.
By Proposition \ref{PropUnframed}, the condition $h_p(a) = \min h_p$ implies that $d$
equals one of the two
$(r-1)$-tuples given there. Hence $\delta(a)$ has the desired form.
The proof of the formula for $\min h_p$ is an easy exercise with geometric sums.
\hfill \ensuremath{\Box}\bigskip
\section{Separability}
In case $\mbox{$s\equiv 1 \bmod (r-1)$}$ or $s \equiv 0 \bmod (r-1)$, we know all minimizers $a\in A(s,r)$ of $h_p$ by Theorem \ref{Thm2.1}.
If $s$ belongs to another residue class $\bmod\, (r-1)$, we have a further restriction for minimizers of $h_p$. To this end, we shall call any vector {\em framed} if its first and last entry are the same. We indicate that these entries have value $x$, say, by calling the vector \emph{$x$-framed}.
Let
\[
\mathrm{Biv}^{\star}(s,r):= \{ d=(d_1,\ldots,d_{r-1}) \in \mathrm{Biv}(s,r): \; d_1=d_{r-1} = [q]\}
\]
denote the set of all bivalent, $[q]$-framed delta vectors.
\begin{proposition}{\label{PropFraming}}
Let $p\geq 3$ be a fixed prime, and let
$(r-1)\nmid (s-1)$ and $(r-1)\nmid s$. If $h_p(a) = \min h_p$ for some $a\in A(s,r)$, then
$\delta(a) \in \mathrm{Biv}^{\star}(s,r)$.
\end{proposition}
{\sc Proof. }
Proposition \ref{PropSmallRange} tells us that $(d_1,\ldots,d_{r-1}):=\delta(a)\in \mathrm{Biv}(s,r)$.
The condition
$(r-1)\nmid (s-1)$ implies that $d_j = [q+1]$ for at least one $j$, and $(r-1)\nmid s$ guarantees indices $j_1\neq j_2$ such that $d_{j_1}=d_{j_2}=[q]$. All this shows that $\delta(a)$ cannot be one of the two $(r-1)$-tuples in Proposition \ref{PropUnframed}, which under our minimality assumption for $h_p(a)$ yields $d_1=d_{r-1} = [q]$.
\hfill \ensuremath{\Box}\bigskip
For $d\in \mathrm{Biv}^{\star}(s,r)$, we have $d= \big([q], d_2,\ldots, d_{r-2}, [q] \big)$, where $\mbox{$d_j \in \{[q], [q+1]\}$}$ for all $j$ by (\ref{Abar}). Now we study the sequences of successive $d_j$ of equal value. For suitable positive integers $t_i=t_i(d)$ $(1\leq i \leq 2w+1)$, say, we have
\[ d =
\Big(\underbrace{[q],\ldots,[q]}_{t_1-fold},\underbrace{[q+1],\ldots,[q+1]}_{t_2-fold},\underbrace{[q],\ldots,[q]}_{t_3-fold},{\rm etc.}, \underbrace{[q+1],\ldots,[q+1]}_{t_{2w}-fold},\underbrace{[q],\ldots,[q]}_{t_{2w+1}-fold}\Big).
\]
To put it another way, $d$
is composed of a $[q]$-block of length $t_1$, followed by a $[q+1]$-block of length $t_2$, and then alternately by $[q]$-blocks and $[q+1]$-blocks of the respective lengths $t_3,\ldots,t_{2w+1}$.
Setting $T_{\ell} = T_{\ell}(d) := \sum_{i=1}^{\ell} t_i$ for $1\leq \ell \leq 2w+1$, we have $T_{2w+1}=r-1$ and
\begin{equation} \label{deftj}
\left.
\begin{array}{ccccccccc}
d_1 &=& d_2 &=& \ldots &=& d_{T_1} &=& [q] \\
d_{T_1+1} &=& d_{T_1+2} &=& \ldots &=& d_{T_2} &=& [q+1] \\
d_{T_2+1} &=& d_{T_2+2} &=& \ldots &=& d_{T_3} &=& [q] \\
\vdots & & \vdots & & && \vdots & & \vdots\\
d_{T_{2w-1}+1} &=& d_{T_{2w-1}+2} &=& \ldots &=& d_{T_{2w}} &=& [q+1] \\
d_{T_{2w}+1} &=& d_{T_{2w}+2} &=& \ldots &=& d_{T_{2w+1}} &=& [q]
\end{array}
\right\}
\end{equation}
Denote by $g$ the least non-negative integer satisfying $g \equiv s-1 \bmod (r-1)$.
It is easily seen that
\[g = \# \{1\leq j \leq r-1: \; d_j = [q+1] \}.
\]
In particular, $g$ does not depend on $d$.
The definition of the $t_i(d)$ clearly implies
\begin{equation}
\sum_{\ell=0}^w t_{2\ell+1} = r-g-1 \quad \mbox{ and } \quad \sum_{\ell=1}^w t_{2\ell} = g.
\label{sumstl}
\end{equation}
In case $(r-1)\nmid (s-1)$, i.e. $q$ is not integral, we define for $d\in \mathrm{Biv}^{\star}(s,r)$ the maximal and minimal lengths of $[q]$-blocks and $[q+1]$-blocks, respectively, occurring in $d$, namely
\[
\begin{array}{rclrcl}
\eta_{\max}(d) &:=& \max\{t_{2\ell+1}:\; 0\le \ell \leq w\},
&\eta_{\min}(d) &:=& \min\{t_{2\ell+1}:\; 0\le \ell \leq w\}, \\
\theta_{\max}(d) &:=& \max\{t_{2\ell}:\; 1\le \ell \leq w\},
&\theta_{\min}(d) &:=& \min\{t_{2\ell}:\; 1\le \ell \leq w\}.
\end{array}
\]
Then $q_1:= \frac{r-g-1}{w+1}$ and $q_2:=\frac{g}{w}$ are the average lengths of $t_{2\ell+1}$ and $t_{2\ell}$ respectively, i.e. the average lengths of the $[q]$-blocks and the $[q+1]$-blocks, and we obviously have
\begin{equation} \label{eta}
1\le \eta_{\min}(d) \leq q_1 \leq \eta_{\max}(d)
\end{equation}
and
\begin{equation} \label{theta}
1 \leq \theta_{\min}(d) \leq q_2 \leq \theta_{\max}(d).
\end{equation}
A bivalent
vector, containing both entries $m$ and $m+1$, say, shall be called {\em separable} if no consecutive entries $m$ or no consecutive entries $m+1$ occur.
Our next result shows that for an $a\in A(s,r)$ with $h_p(a)= \min h_p$, thus $\delta(a)\in \mathrm{Biv}^{\star}(s,r)$ under suitable congruence restrictions, either all $[q]$ in $\delta(a)$ are separated from each other by entries $[q+1]$ or vice versa. Hence these delta vectors are separable.
\begin{proposition}{\label{PropSeparable}}
Let $(r-1)\nmid (s-1)$, and let $g \equiv s-1 \bmod (r-1)$ be the least positive residue.
For any $d\in \mathrm{Biv}^{\star}(s,r)$ satisfying $h_p(\delta^{-1}(d))=\min h_p$,
we have:
\begin{itemize}
\item[(i)] If $2g\geq r-1$, then $\eta_{\max}(d)=1$ and $q_1=1$, $q_2=\frac{g}{r-g-2}$.
\item[(ii)] If $2g \leq r-2$, then $\theta_{\max}(d)=1$ and $q_2=1$, $q_1=\frac{r-g-1}{g+1}$.
\end{itemize}
\end{proposition}
{\sc Proof. }\vspace{7pt}\newline
(i) Let $d=(d_1,\ldots,d_{r-1})$ and assume that $\eta_{\max}(d)\geq 2$. Hence there is some $1\leq u\leq r-2$ such that
$d_u=d_{u+1}=[q]$. Since $d_1=d_{r-1} =[q]$ and $g\geq r-g-1$, there is some $2\leq v \leq r-3$ such that $d_v=d_{v+1}= [q+1]$. By (\ref{symmhp}) we may assume that
$u<v$. Moreover, we may assume w.l.o.g. that $d_j\neq d_{j+1}$ for $u+1\leq j \leq v-1$ (otherwise we could choose $u$ larger or $v$ smaller, respectively).
Thus, and since $d_j\in \{[q],[q+1]\}$, the sequence $(d_j)_{u+1\leq j \leq v}$
is alternating, starting with $d_{u+1}=[q]$ and terminating with $d_v= [q+1]$.
Consequently, $v-u$ is even, and for $u+1\leq j \leq v$ we have
\[
d_j = \left\{ \begin{array}{cl}
\mbox{$[q]$} & \mbox{ if $j\not\equiv u \bmod 2$, } \\
\mbox{$[q+1]$} & \mbox{ if $j\equiv u \bmod 2$.}
\end{array} \right.
\]
This means that
\begin{equation}\label{alternating}
a_{u+2+j} = a_{u+2} + [q]j +\left[\frac{j+1}{2}\right] \quad\quad (0\leq j \leq v-u-1).
\end{equation}
For $a=(a_1,\ldots,a_r):= \delta^{-1}(d)\in A(s,r)$, we define $b=(b_1,\ldots,b_r)\in A(s,r)$ by setting
\begin{equation}\label{vectorb}
b_j := \left\{ \begin{array}{cl}
a_j & \mbox{ for $1\leq j\leq u+1$ or $v+1\leq j \leq r$, }\\
a_{j+1}-[q]& \mbox{ for $u+2\leq j \leq v$, }
\end{array} \right.
\end{equation}
i.e. we swap $d_{u+1}$ and $d_v$ in $d$.
Clearly, $\delta(b)\in \mathrm{Biv}^{\star}(s,r)$.
Then, by (\ref{basishp}),
\begin{align}\label{diffhptotal}
\begin{split}
h_p(a)-h_p(b)
&= \left(\sum_{k=1}^{u+1}\sum_{i=u+2}^v + \sum_{k=u+2}^{v-1}\sum_{i=k+1}^v + \sum_{k=u+2}^v\sum_{i=v+1}^r\right) \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{b_i-b_k}}\right) \\
&= \sum_{k=1}^{u+1}\sum_{i=u+2}^v \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{(a_{i+1}-[q])-a_k}}\right) \\
&\quad\quad + \sum_{k=u+2}^{v-1}\sum_{i=k+1}^v \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{(a_{i+1}-[q])-(a_{k+1}-[q])}}\right) \\
&\quad\quad + \sum_{k=u+2}^v\sum_{i=v+1}^r \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_i-(a_{k+1}-[q])}}\right) \\
&= \sum_{k=1}^{u+1}\,p^{a_k}\sum_{i=u+2}^v \,\left(\frac{1}{p^{a_i}} - \frac{1}{p^{a_{i+1}-[q]}}\right)
+ \sum_{k=u+2}^{v-1}\sum_{i=k+1}^v \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_{i+1}-a_{k+1}}}\right) \\
&\quad\quad + \sum_{k=u+2}^v\,\left(p^{a_k}-p^{a_{k+1}-[q]}\right) \sum_{i=v+1}^r \,\frac{1}{p^{a_i}}\, .
\end{split}
\end{align}
For the middle double sum, we obtain
\begin{align}\label{hpmiddle1}
\begin{split}
\sum_{k=u+2}^{v-1}\sum_{i=k+1}^v \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_{i+1}-a_{k+1}}}\right) &= \sum_{k=u+2}^{v-1} p^{a_k} \sum_{i=k+1}^v \,\frac{1}{p^{a_i}} -
\sum_{k=u+2}^{v-1} p^{a_{k+1}} \sum_{i=k+1}^v \,\frac{1}{p^{a_{i+1}}} \\
&= \sum_{k=u+2}^{v-1} p^{a_k} \sum_{i=k+1}^v \,\frac{1}{p^{a_i}} -
\sum_{k=u+3}^v p^{a_k} \sum_{i=k+1}^{v+1} \,\frac{1}{p^{a_i}} \\
&= p^{a_{u+2}} \sum_{i=u+3}^v \,\frac{1}{p^{a_i}} - \frac{1}{p^{a_{v+1}}} \sum_{k=u+3}^v p^{a_k}\\
&= \sum_{j=1}^{v-u-2} \,\frac{1}{p^{a_{u+2+j}-a_{u+2}}} - \sum_{j=1}^{v-u-2} \,
\frac{1}{p^{a_{v+1}-a_{v+1-j}}}\,.
\end{split}
\end{align}
Now (\ref{alternating}) implies that
$a_{u+2+j}-a_{u+2} = [q]j + \left[\frac{j+1}{2}\right] = a_{v+1}-a_{v+1-j}$
for $1\leq j \leq v-u-2$. Hence the last two sums in (\ref{hpmiddle1}) cancel termwise, and we conclude
\begin{equation}
\sum_{k=u+2}^{v-1}\sum_{i=k+1}^v \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_{i+1}-a_{k+1}}}\right) = 0. \label{hpmiddle2}
\end{equation}
By (\ref{alternating}), we also have
\begin{align}\label{part1}
\begin{split}
\sum_{i=u+2}^v \,\left(\frac{1}{p^{a_i}} - \frac{1}{p^{a_{i+1}-[q]}}\right) &=
\sum_{j=0}^{v-u-2} \,\left(\frac{1}{p^{a_{u+2+j}}} - \frac{1}{p^{a_{u+2+j+1}-[q]}}\right) \\
&= \frac{1}{p^{a_{u+2}}} \sum_{j=0}^{v-u-2} \,\frac{1}{p^{[q]j}}\left(\frac{1}{p^{[\frac{j+1}{2}]}} -\frac{1}{p^{[\frac{j+2}{2}]}} \right) \\
&= \frac{1}{p^{a_{u+2}}} \sum_{j=0}^{\frac{v-u}{2}-1}
\,\frac{1}{p^{(2[q]+1)j}}\left(1 -\frac{1}{p} \right)\\
&= (p-1)\frac{1}{p^{a_{u+2}+1}} \sum_{j=0}^{\frac{v-u}{2}-1}
\,\frac{1}{p^{(2[q]+1)j}}\,,
\end{split}
\end{align}
and similarly
\begin{align}\label{part2}
\begin{split}
\sum_{k=u+2}^v\,\left(p^{a_k}-p^{a_{k+1}-[q]}\right) &=
p^{a_{u+2}} \sum_{j=0}^{v-u-2}\, p^{[q]j}\left(p^{[\frac{j+1}{2}]} -p^{[\frac{j+2}{2}]} \right) \\
&= p^{a_{u+2}} \sum_{j=0}^{\frac{v-u}{2}-1}\,p^{(2[q]+1)j}(1-p)\\
&= (1-p)p^{a_{u+2}+(2[q]+1)(\frac{v-u}{2}-1)} \sum_{j=0}^{\frac{v-u}{2}-1}\,\frac{1}{p^{(2[q]+1)j}}\,.
\end{split}
\end{align}
Using (\ref{hpmiddle2}), (\ref{part1}) and (\ref{part2}) altogether in (\ref{diffhptotal}) implies that
\begin{equation}\label{final1}
Q:=\frac{h_p(a)-h_p(b)}{(p-1)\sum\limits_{j=0}^{\frac{v-u}{2}-1}\,\frac{1}{p^{(2[q]+1)j}}}=
\frac{1}{p^{a_{u+2}+1}} \sum_{k=1}^{u+1}\,p^{a_k} \; - \;\;\;
p^{a_{u+2}+(2[q]+1)(\frac{v-u}{2}-1)}\sum_{i=v+1}^r \,\frac{1}{p^{a_i}}\,.
\end{equation}
Since $a_{v+2}-a_{v+1} = d_{v+1} = [q+1]$ by definition, and since $d_j \geq [q]$ for all $j$, it follows that
\begin{align*}
\begin{split}
\sum_{i=v+1}^r \,\frac{1}{p^{a_i}} &=
\frac{1}{p^{a_{v+1}}}\sum_{i=v+1}^r\, \frac{1}{p^{a_i-a_{v+1}}} \\
&\leq \frac{1}{p^{a_{v+1}}} \left( 1 + \frac{1}{p^{[q+1]}} \sum_{i=0}^{\infty}
\frac{1}{p^{[q]i}} \right) = \frac{1}{p^{a_{v+1}}} \left( 1 + \frac{1}{p^{[q+1]}-p} \right)\,.
\end{split}
\end{align*}
By (\ref{alternating}), we have $a_{v+1}-a_{u+2} = [q](v-u-1) + \frac{v-u}{2}$.
Applying this as well as the last inequality and $\sum_{k=1}^{u+1}\,p^{a_k} \geq p^{a_{u+1}} + p^{a_u}$ to (\ref{final1}), we obtain
\begin{align*}
Q &\geq p^{a_{u+1}-a_{u+2}-1} + p^{a_u -a_{u+2}-1} -p^{a_{u+2}-a_{v+1} +(2[q]+1)(\frac{v-u}{2}-1)}\left( 1 + \frac{1}{p^{[q+1]}-p} \right) \\
&= p^{-[q]-1} + p^{-2[q]-1} -p^{-[q]-1}\left( 1 + \frac{1}{p^{[q+1]}-p} \right) \\
&= p^{-2[q]-1}\left( 1 - \frac{1}{p-p^{1-[q]}}\right)\,.
\end{align*}
This last term is positive because of $q\geq 1$. By definition of $Q$ in (\ref{final1}), we conclude that $h_p(a) > h_p(b)$. This contradicts the minimality condition for $h_p(a)$, and thus our initial assumption $\eta_{\max}(d)\geq 2$ must be wrong.
Therefore, $\eta_{\max}(d)=1$, which means that $q_1=\frac{r-g-1}{w+1}=1$. Hence $w=r-g-2$ and $q_2=\frac{g}{w}=\frac{g}{r-g-2}$.
\bigskip\newline
(ii) We assume that $\theta_{\max}(d)\geq 2$. Hence there is some $2\leq v\leq r-3$ such that
$d_v=d_{v+1}=[q+1]$. Since $g\leq r-g-2$, there is some $1\leq u \leq r-2$ such that $d_u=d_{u+1}= [q]$. By (\ref{symmhp}) we may assume that
$u<v$. Moreover, we may assume w.l.o.g. that $d_j\neq d_{j+1}$ for $u+1\leq j \leq v-1$ (otherwise we could choose $u$ larger or $v$ smaller, respectively). At this point we are exactly in the same situation as in the proof of part (i). Again $b$ as defined in (\ref{vectorb}) reveals that $h_p(a)> \min h_p$, and this contradiction completes the proof of the proposition.
\hfill \ensuremath{\Box}\bigskip
\section{Bivalence of second degree -- Proof of Theorem \ref{Thm2.2}}
We denote by $\mathrm{Sep}^{\star}(s,r)$ the set of all $d\in \mathrm{Biv}^{\star}(s,r)$ having no neighbouring entries $[q]$ in case $2g\geq r-1$ and no neighbouring entries $[q+1]$ in case $2g \leq r-2$, respectively. Then Proposition \ref{PropSeparable} mainly says that $h_p(a)=\min h_p$ implies $\delta(a) \in \mathrm{Sep}^{\star}(s,r)$.
Assuming that $q\notin\mathbb{N}$,
we shall now see that in case $2g\ge r-1$ all $[q+1]$-blocks in $\delta(a)$, lying between two successive entries $[q]$, are of length $[q_2]$ or $[q_2+1]$. In case $2g\leq r-2$ all $[q]$-blocks in $\delta(a)$ have length either $[q_1]$ or $[q_1+1]$ (cf. (\ref{eta}) and (\ref{theta})).
If $(r-1)\nmid (s-1)$, we define for $d\in \mathrm{Sep}^{\star}(s,r)$
\[ \eta(d) := \eta_{\max}(d) - \eta_{\min}(d)\]
and
\[ \theta(d) := \theta_{\max}(d) - \theta_{\min}(d).\]
i.e. $\eta(d)$ is the difference between the lengths of the longest and the shortest maximal sequence of successive values $[q]$ in $d$, and $\theta(d)$ is the corresponding difference for successive values $[q+1]$.
By Proposition \ref{PropSeparable} we know for any minimizer $a\in A(s,r)$ of $h_p$ that $\eta(\delta(a))=0$ in case $g\geq r-g-1$ and
$\theta(\delta(a))=0$ in case $g \leq r-g-2$.
For a bivalent, separable integer vector $v$ we may formally derive a vector
$\Lambda(v)$ as follows. Let $m\neq k$ be the two entries of $v$ and assume w.l.o.g.
that $v$ contains no consecutive entries $m$. If the same holds for $k$, then we assume $m<k$ for tie-breaking. Set $\Lambda(v):=(\lambda_1,\ldots,\lambda_{\ell})$ for suitable $\ell$,
where $\lambda_i$ is the length of the $i$-th maximal sequence of consecutive $k$-entries,
as separated by the $m$-entries.
If $\Lambda(v)$, like $v$, is bivalent we shall call $v$ {\em bivalent of second degree}.
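\medskip
For illustration, $\Lambda(v)$ can be computed by the following short Python routine, which follows the description above, including the tie-breaking rule $m<k$; applied to the vector $(3,2,3,2,2,3,2,3,2,3)$ appearing in Example \ref{s44r35ex} below, it returns $(1,2,1,1)$.
\begin{verbatim}
def lam(v):
    # Lambda(v) for a bivalent, separable integer vector v.
    vals = sorted(set(v))
    assert len(vals) == 2, "Lambda is only defined for bivalent vectors"
    small, large = vals

    def repeats(x):
        # True if x occurs in two consecutive positions of v.
        return any(v[i] == x == v[i + 1] for i in range(len(v) - 1))

    # m is the entry without consecutive occurrences; if neither entry
    # repeats, the tie-breaking rule m < k picks the smaller one.
    if not repeats(small):
        m, k = small, large
    elif not repeats(large):
        m, k = large, small
    else:
        raise ValueError("v is not separable")

    # lambda_i = length of the i-th maximal run of k-entries,
    # as separated by the m-entries.
    runs, current = [], 0
    for x in v:
        if x == m:
            if current:
                runs.append(current)
            current = 0
        else:
            current += 1
    if current:
        runs.append(current)
    return tuple(runs)

print(lam((3, 2, 3, 2, 2, 3, 2, 3, 2, 3)))   # (1, 2, 1, 1)
\end{verbatim}
\medskip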
For $d\in \mathrm{Sep}^{\star}(s,r)$ we clearly have $\min\{\eta_{\max}(d),\theta_{\max}(d)\}=1$ due to
separability. The following proposition strengthens Proposition \ref{PropSeparable}
in the sense that, under the same assumptions on $r$ and $s$,
any $d\in\mathrm{Biv}^{\star}(s,r)$ with $h_p(\delta^{-1}(d))= \min h_p$ is not only separable
but also satisfies $\eta(d)+\theta(d) \le 1$. The latter amounts to the fact that $d$ is bivalent of second degree.
\begin{proposition}{\label{ProcBivSecond}}
Let $(r-1)\nmid (s-1)$, and let $g \equiv s-1 \bmod (r-1)$ be the least positive residue. If $d\in \mathrm{Biv}^{\star}(s,r)$ satisfies $h_p(\delta^{-1}(d))= \min h_p$, then we have:
\begin{itemize}
\item[(i)] If $2g\geq r-1$, then $\eta_{\max}(d)=1$ and $\theta(d)\leq 1$.
\item[(ii)] If $2g \leq r-2$, then $\theta_{\max}(d)=1$ and $\eta(d)\leq 1$.
\end{itemize}
\end{proposition}
{\sc Proof. }\vspace{7pt} \newline
(i) Let $d=(d_1,\ldots,d_{r-1})$ satisfy the conditions of the proposition, in particular $d_1=d_{r-1}=[q]$, and there are integers
$1=j_1 < j_2 < \ldots < j_{r-g-2} < j_{r-g-1}=r-1$ with the property
\[
d_j = \left\{ \begin{array}{cl}
\mbox{$[q]$} & \mbox{ for $j\in \{j_1,j_2,\ldots,j_{r-g-1}\}$, } \\
\mbox{$[q+1]$} & \mbox{ for $j\notin \{j_1,j_2,\ldots,j_{r-g-1}\}$ }
\end{array} \right. \quad\quad\quad (1\leq j \leq r-1).
\]
It follows from Proposition \ref{PropSeparable}(i) that $\eta_{\max}(d)=1$, hence
$j_{i+1}-j_i \geq 2$ for $1\leq i \leq r-g-2$.
In order to prove the other assertion of (i) we make the assumption that $\theta(d)\geq 2$, i.e. there are two groups of successive entries $[q+1]$ in $d$ whose lengths differ by at least $2$.
Hence, using the notation introduced in (\ref{deftj}), we can find integers $1\leq u\leq w$ and $1\leq v \leq w$ such that $j_{u+1} - j_u-1 = t_{2u}$ and $j_{v+1} - j_v-1 = t_{2v}$ satisfy $t_{2v}-t_{2u} \geq 2$, and we may assume that $|v-u|$ is minimal with this property. By (\ref{symmhp}) we can also assume w.l.o.g. that $u<v$. We therefore have
\begin{align*}
d &=
(\ldots,d_{j_u},\underbrace{[q+1],\ldots,[q+1]}_{t_{2u}-fold},d_{j_{u+1}},\ldots\ldots,d_{j_v},\underbrace{[q+1],\ldots,[q+1]}_{t_{2v}-fold},d_{j_{v+1}}\ldots)\\
&= (\ldots,[q],\underbrace{[q+1],\ldots,[q+1]}_{t_{2u}-fold},[q],\ldots\ldots,[q], \underbrace{[q+1],\ldots,[q+1]}_{t_{2v}-fold},[q]\ldots),
\end{align*}
and the desired contradiction will be derived in two steps: We first deal with the case where merely a single $[q]$-block separates the two $[q+1]$-blocks of lengths $t_{2u}$ and $t_{2v}$, and later we shall handle greater distances between them. In both situations, we construct some $b\in A(s,r)$ satisfying $\delta(b) \in \mathrm{Biv}^{\star}(s,r)$ and $h_p(b)<h_p(a)$ by counterbalancing the lengths of the two $[q+1]$-blocks. We set
$a=(a_1,\ldots,a_r):= \delta^{-1}(d)$.
\underline{Case 1}: $v=u+1$.
\newline
It follows that
\begin{equation}\label{ajot1}
a_{j_v+1} -a_k = \left\{
\begin{array}{ll}
(j_v-k+1)[q+1]-1 & \mbox{ for $j_u+1\leq k \leq j_v$, } \\
(j_v-j_u+1)[q+1]-2 & \mbox{ for $k=j_u$, }
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{ajot2}
a_i - a_{j_v+1} = (i-j_v-1)[q+1] \quad\quad (j_v+2 \leq i \leq j_{v+1})\,.
\end{equation}
We define $b=(b_1,\ldots,b_r)\in A(s,r)$ by setting
\begin{equation}\label{vectorb1}
b_j := \left\{ \begin{array}{cl}
a_j & \mbox{ for $1\leq j\leq j_v$ or $j_v+2\leq j \leq r$, }\\
a_{j_v+1}+1 & \mbox{ for $j=j_v+1$. }
\end{array} \right.
\end{equation}
Clearly, $\delta(a)=d \in \mathrm{Biv}^{\star}(s,r)$ implies $\delta(b) \in \mathrm{Biv}^{\star}(s,r)$.
Then, by (\ref{basishp}),
\begin{align*}
\begin{split}
h_p(a)-h_p(b)
&= \sum_{i=j_v+2}^r \,\left(\frac{1}{p^{a_i-a_{j_v+1}}} - \frac{1}{p^{b_i-b_{j_v+1}}}\right)
+ \sum_{k=1}^{j_v}\,\left(\frac{1}{p^{a_{j_v+1}-a_k}} - \frac{1}{p^{b_{j_v+1}-b_k}}\right) \\
&= \sum_{i=j_v+2}^r \, \frac{1}{p^{a_i}}\left(p^{a_{j_v+1}} - p^{a_{j_v+1}+1}\right)
+ \sum_{k=1}^{j_v}\,p^{a_k} \left(\frac{1}{p^{a_{j_v+1}}} - \frac{1}{p^{a_{j_v+1}+1}}\right) \\
&= (p-1)\left(\sum_{k=1}^{j_v}\,\frac{1}{p^{a_{j_v+1}-a_k+1}} - \sum_{i=j_v+2}^r\, \frac{1}{p^{a_i-a_{j_v+1}}}\right) .
\end{split}
\end{align*}
We obtain
\begin{align*}
\begin{split}
\frac{h_p(a)-h_p(b)}{p-1}
&> \frac{1}{p^{a_{j_v+1}-a_{j_u}+1}}+ \sum_{k=j_u+1}^{j_v}\,\frac{1}{p^{a_{j_v+1}-a_k+1}} \\
&\quad\quad - \sum_{i=j_v+2}^{2j_v-j_u+1}\, \frac{1}{p^{a_i-a_{j_v+1}}} -
\sum_{i=2j_v-j_u+2}^{\infty}\, \frac{1}{p^{a_i-a_{j_v+1}}}
\end{split}
\end{align*}
and observe that
the first two sums on the righthand side have the same number of terms. Since
\begin{equation}
2j_v-j_u+2 = j_v+j_{u+1}-j_u+2 = j_v + t_{2u} +3 \leq j_v + t_{2v} +1 =j_{v+1},
\label{ajot3}
\end{equation}
we can apply (\ref{ajot1}) and (\ref{ajot2}) to deduce termwise cancellation of those two sums. Hence, and by (\ref{ajot1}), (\ref{ajot3}) and (\ref{ajot2}) again, it follows that
\begin{align*}
\begin{split}
\frac{h_p(a)-h_p(b)}{p-1}
&> \frac{1}{p^{a_{j_v+1}-a_{j_u}+1}} -
\sum_{i=2j_v-j_u+2}^{\infty}\, \frac{1}{p^{a_i-a_{j_v+1}}} \\
&\geq \frac{1}{p^{a_{j_v+1}-a_{j_u}+1}} -
\frac{1}{p^{a_{2j_v-j_u+2}-a_{j_v+1}}} \sum_{i=0}^{\infty}\, \frac{1}{p^i} \\
&= \frac{1}{p^{(j_v-j_u+1)[q+1]-1}} - \frac{1}{p^{(j_v-j_u+1)[q+1]}}\, \frac{p}{p-1} \\
&= \frac{1}{p^{(j_v-j_u+1)[q+1]-1}}\left( 1 - \frac{1}{p-1}\right) \geq 0\,.
\end{split}
\end{align*}
This contradicts the minimality condition for $h_p(a)$, and thus our initial assumption $\theta(d)\geq 2$ must be wrong in this case.
\underline{Case 2}: $v\geq u+2$.
\newline
By Case 1, we know that
\[ t_{2v} - t_{2u} = ( t_{2v} - t_{2(u+1)}) + (t_{2(u+1)} - t_{2u}) \leq
\vert t_{2v} - t_{2(u+1)}\vert +1.\]
Now the assumption $t_{2v}-t_{2u} \geq 3$ would imply $\vert t_{2v} - t_{2(u+1)}\vert \geq 2$, contradicting the minimality of $\vert v-u\vert$.
We are left with $t_{2v}-t_{2u} = 2$. The minimality of $\vert v-u\vert$ implies in this special situation that
\[
t_{2u}+1 =t_{2(u+1)} = t_{2(u+2)} = \ldots = t_{2(v-1)} = t_{2v}-1,
\]
i.e. we have for $\Delta_u(d):= j_{u+1} -j_u+1$ that
\begin{equation}
\Delta_u(d)= j_{u+2}-j_{u+1} = j_{u+3}-j_{u+2} = \ldots = j_v-j_{v-1}= j_{v+1}-j_v-1.
\label{grossdelta}
\end{equation}
We also have
\begin{equation}\label{ajot11}
a_{j_{u+1}+1} -a_k = \left\{
\begin{array}{ll}
(j_{u+1}-k+1)[q+1]-1 & \mbox{ for $j_u+1\leq k \leq j_{u+1}$, } \\
(j_{u+1}-j_u+1)[q+1]-2 & \mbox{ for $k=j_u$, }
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{ajot12}
a_i - a_{j_v+1} = (i-j_v-1)[q+1] \quad\quad (j_v+2 \leq i \leq j_{v+1})\,.
\end{equation}
We define $b=(b_1,\ldots,b_r)\in A(s,r)$ by setting
\begin{equation}\label{vectorb11}
b_j := \left\{ \begin{array}{cl}
a_j & \mbox{ for $1\leq j\leq j_{u+1}$ or $j_v+2\leq j \leq r$, }\\
a_{j-1}+[q+1]& \mbox{ for $j_{u+1}+1\leq j \leq j_v+1$. }
\end{array} \right.
\end{equation}
i.e. we increase by one the number of intervals of length $[q+1]$ between $a_{j_u+1}$ and $a_{j_{u+1}}$ and decrease by one the number of these intervals between $a_{j_v+1}$ and $a_{j_{v+1}}$. Clearly, $\delta(a)=d \in \mathrm{Biv}^{\star}(s,r)$ implies $\delta(b) \in \mathrm{Biv}^{\star}(s,r)$.
Then, by (\ref{basishp}),
\begin{align}\label{diffhp11}
\begin{split}
h_p(a)-h_p(b)
&= \left(\sum_{k=1}^{j_{u+1}}\sum_{i=j_{u+1}+1}^{j_v+1} + \sum_{k=j_{u+1}+1}^{j_v+1}\sum_{i=k+1}^{j_v+1} + \sum_{k=j_{u+1}+1}^{j_v+1}\sum_{i=j_v+2}^r\right) \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{b_i-b_k}}\right) \\
&= \sum_{k=1}^{j_{u+1}}\sum_{i=j_{u+1}+1}^{j_v+1} \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{(a_{i-1}+[q+1])-a_k}}\right) \\
&\quad\quad + \sum_{k=j_{u+1}+1}^{j_v}\sum_{i=k+1}^{j_v+1} \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{(a_{i-1}+[q+1])-(a_{k-1}+[q+1])}}\right) \\
&\quad\quad + \sum_{k=j_{u+1}+1}^{j_v+1}\sum_{i=j_v+2}^r \,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_i-(a_{k-1}+[q+1])}}\right) \\
&= \sum_{k=1}^{j_{u+1}}\, p^{a_k} \sum_{i=j_{u+1}+1}^{j_v+1}
\,\left(\frac{1}{p^{a_i}} - \frac{1}{p^{a_{i-1}+[q+1]}}\right) \\
&\quad\quad + \sum_{k=j_{u+1}+1}^{j_v}\sum_{i=k+1}^{j_v+1}\,\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_{i-1}-a_{k-1}}}\right) \\
&\quad\quad +\sum_{k=j_{u+1}+1}^{j_v+1} \,\left(p^{a_k}-p^{a_{k-1}+[q+1]}\right) \sum_{i=j_v+2}^r\,\frac{1}{p^{a_i}}\, .
\end{split}
\end{align}
By definition of the $j_{\ell}$, we have for $j_{u+1}+1 \leq i \leq j_v+2-1$
\begin{equation}
a_i -a_{i-1} = d_{i-1} = \left\{ \begin{array}{cl}
\mbox{$[q]$} & \mbox{ for $i=j_{\ell}+1$, $u+1\leq \ell \leq v$, }\\
\mbox{$[q+1]$} & \mbox{ otherwise. }
\end{array} \right.
\label{diffai}
\end{equation}
This implies that
\begin{equation}
\sum_{i=j_{u+1}+1}^{j_v+1} \,\left(\frac{1}{p^{a_i}} - \frac{1}{p^{a_{i-1}+[q+1]}}\right)
= \sum_{\ell=u+1}^v \,\left(\frac{1}{p^{a_{j_{\ell}+1}}} - \frac{1}{p^{a_{j_{\ell}+1}+1}}\right)
= \left(1-\frac{1}{p}\right) \sum_{\ell=u+1}^v \,\frac{1}{p^{a_{j_{\ell}+1}}}
\label{inter1}
\end{equation}
and
\begin{equation}
\sum_{k=j_{u+1}+1}^{j_v+1} \,\left(p^{a_k}-p^{a_{k-1}+[q+1]}\right)
= \sum_{\ell=u+1}^v \,\left(p^{a_{j_{\ell}+1}} - p^{a_{j_{\ell}+1}+1}\right)
= \left(1-p\right) \sum_{\ell=u+1}^v \,p^{a_{j_{\ell}+1}}.
\label{inter2}
\end{equation}
Moreover
\begin{align}\label{diffmiddle}
\begin{split}
\sum_{k=j_{u+1}+1}^{j_v}\sum_{i=k+1}^{j_v+1}\,&\left(\frac{1}{p^{a_i-a_k}} - \frac{1}{p^{a_{i-1}-a_{k-1}}}\right) \\
&= \sum_{k=j_{u+1}+1}^{j_v} p^{a_k} \sum_{i=k+1}^{j_v+1}\,\frac{1}{p^{a_i}} -
\sum_{k=j_{u+1}}^{j_v-1} p^{a_k} \sum_{i=k+1}^{j_v}\,\frac{1}{p^{a_i}} \\
&= \frac{1}{p^{a_{j_v+1}}} \sum_{k=j_{u+1}+1}^{j_v} p^{a_k} -
p^{a_{j_{u+1}}} \sum_{i=j_{u+1}+1}^{j_v}\, \frac{1}{p^{a_i}}\, \\
&= \sum_{k=j_{u+1}+1}^{j_v} \frac{1}{p^{a_{j_v+1}-a_k}} - \sum_{i=j_{u+1}+1}^{j_v}\, \frac{1}{p^{a_i-a_{j_{u+1}}}} \\
&= \sum_{k=1}^{j_v-j_{u+1}} \frac{1}{p^{a_{j_v+1}-a_{j_v+1-k}}} - \sum_{i=1}^{j_v-j_{u+1}}\, \frac{1}{p^{a_{j_{u+1}+i}-a_{j_{u+1}}}}\, .
\end{split}
\end{align}
It is easy to deduce from (\ref{diffai}) and (\ref{grossdelta}) that, by symmetry of the spacing,
\[ a_{j_v+1} -a_{j_v+1-k}
= a_{j_{u+1}+k} -a_{j_{u+1}} \]
for $0 \leq k \leq j_v- j_{u+1} +1$. Hence the last two sums in (\ref{diffmiddle}) cancel termwise, and we obtain from (\ref{diffhp11}), (\ref{inter1}) and (\ref{inter2}) that
\begin{align}\label{diffhp15}
\begin{split}
\frac{h_p(a)-h_p(b)}{p-1} &= \frac{1}{p} \sum_{k=1}^{j_{u+1}}\, p^{a_k} \sum_{\ell=u+1}^v \,\frac{1}{p^{a_{j_{\ell}+1}}} - \sum_{i=j_v+2}^r\,\frac{1}{p^{a_i}}
\sum_{\ell=u+1}^v \,p^{a_{j_{\ell}+1}} \\
&= \frac{1}{p^{a_{j_{u+1}+1}+1}} \sum_{k=1}^{j_{u+1}}\, p^{a_k} \sum_{\ell=u+1}^v \,\frac{1}{p^{a_{j_{\ell}+1}-a_{j_{u+1}+1}}} \\
&\quad\quad\quad - p^{a_{j_v+1}} \sum_{i=j_v+2}^r\,\frac{1}{p^{a_i}}
\sum_{\ell=u+1}^v \,\frac{1}{p^{a_{j_v+1}-a_{j_{\ell}+1}}} \\
&= \frac{1}{p^{a_{j_{u+1}+1}+1}} \sum_{k=1}^{j_{u+1}}\, p^{a_k} \sum_{\ell=0}^{v-u-1} \,\frac{1}{p^{(\Delta_u(d)[q+1]-1)\ell}} \\
&\quad\quad\quad - p^{a_{j_v+1}} \sum_{i=j_v+2}^r\,\frac{1}{p^{a_i}}
\sum_{\ell=0}^{v-u-1} \,\frac{1}{p^{(\Delta_u(d)[q+1]-1)\ell}} \\
&= \sum_{\ell=0}^{v-u-1} \,\frac{1}{p^{(\Delta_u(d)[q+1]-1)\ell}} \left( \sum_{k=1}^{j_{u+1}}\, \frac{1}{p^{a_{j_{u+1}+1}-a_k+1}} - \sum_{i=j_v+2}^r\,\frac{1}{p^{a_i-a_{j_v+1}}} \right).
\end{split}
\end{align}
We have
\begin{align*}
\begin{split}
\sum_{k=1}^{j_{u+1}}\, &\frac{1}{p^{a_{j_{u+1}+1}-a_k+1}} -
\sum_{i=j_v+2}^r\,\frac{1}{p^{a_i-a_{j_v+1}}} \\
&\quad\quad\quad> \frac{1}{p^{a_{j_{u+1}+1}-a_{j_u}+1}}+ \sum_{k=j_u+1}^{j_{u+1}}\,\frac{1}{p^{a_{j_{u+1}+1}-a_k+1}} \\
&\quad\quad\quad\quad - \sum_{i=j_v+2}^{j_v+j_{u+1}-j_u+1}\, \frac{1}{p^{a_i-a_{j_v+1}}} -
\sum_{i=j_v+j_{u+1}-j_u+2}^{\infty}\, \frac{1}{p^{a_i-a_{j_v+1}}}
\end{split}
\end{align*}
and observe that the positive sum and the first negative sum on the righthand side have the same number of terms. Since, by (\ref{grossdelta}),
\begin{equation}
j_v+j_{u+1}-j_u+2 = j_{v+1}-(j_{v+1}-j_v)+(j_{u+1}-j_u)+2 =j_{v+1},
\label{ajot13}
\end{equation}
we can apply (\ref{ajot11}) and (\ref{ajot12}) to deduce termwise cancellation of those two sums. Hence, and by (\ref{ajot11}), (\ref{ajot13}) and (\ref{ajot12}) again, it follows that
\begin{align*}
\begin{split}
\sum_{k=1}^{j_{u+1}}\, \frac{1}{p^{a_{j_{u+1}+1}-a_k+1}} -
\sum_{i=j_v+2}^r\,\frac{1}{p^{a_i-a_{j_v+1}}}
&> \frac{1}{p^{a_{j_{u+1}+1}-a_{j_u}+1}}
- \sum_{i=j_v+j_{u+1}-j_u+2}^{\infty}\, \frac{1}{p^{a_i-a_{j_v+1}}} \\
&\geq \frac{1}{p^{a_{j_{u+1}+1}-a_{j_u}+1}} -
\frac{1}{p^{a_{j_{v+1}}-a_{j_v+1}}} \sum_{i=0}^{\infty}\, \frac{1}{p^i} \\
&= \frac{1}{p^{(j_{u+1}-j_u+1)[q+1]-1}} -
\frac{1}{p^{(j_{v+1}-j_v-1)[q+1]}} \, \frac{p}{p-1} \\
&= \frac{1}{p^{\Delta_u(d)[q+1]-1}} -
\frac{1}{p^{\Delta_u(d)[q+1]}} \, \frac{p}{p-1} \\
&= \frac{1}{p^{\Delta_u(d)[q+1]-1}}\left( 1 - \frac{1}{p-1}\right) \geq 0\,.
\end{split}
\end{align*}
With this inequality, (\ref{diffhp15}) implies $h_p(a)>h_p(b)$, contradicting the minimality condition for $h_p(a)$. Again the initial assumption $\theta(d) \geq 2$ cannot hold, which completes the proof of (i).
\bigskip\newline
(ii) For $2g \leq r-2$, it follows from Proposition \ref{PropSeparable}(ii) that $\theta_{\max}(d)=1$. By arguments corresponding directly to the ones used in (i), the assumption $\eta(d)\geq 2$ now leads to a contradiction.
\hfill \ensuremath{\Box}\bigskip
\bigskip
{\sc Proof of Theorem \ref{Thm2.2}. }\newline
(i) We know from Proposition \ref{PropFraming} that $d=(d_1,\ldots,d_{r-1}):=\delta(a)\in \mathrm{Biv}^{\star}(s,r)$.
Since $(r-g-2)\mid g$, the number $q_2$ is an integer. It follows from (\ref{sumstl}) that the existence of a $t_{2k}< q_2$ would imply the existence of a $t_{2\ell}> q_2$ and vice versa, both cases contradicting $\theta_{\max}(d)-\theta_{\min}(d)=\theta(d)\leq 1$, which holds by Proposition \ref{ProcBivSecond}(i). Hence $t_{2\ell}= q_2$ for all $\ell$, and the assertion follows.
\bigskip\newline
(ii) The argument in the proof of (i) showed that the existence of a $t_{2k}< q_2$ implies the existence of a $t_{2\ell} > q_2$ and vice versa. Hence $\theta_{\min}(d)=[q_2]$ and $\theta_{\max}(d)=[q_2]+1$. Denote by $x$ the number of sequences of $[q+1]$-blocks of length $[q_2]$ in $d$. Then there are $r-g-2-x$ sequences of $[q+1]$-blocks of length $[q_2]+1$ in $d$. It follows that
\[ x[q_2] + (r-g-2-x)([q_2]+1) =g,\]
hence $x=(r-g-2)[q_2+1]-g$. By definition, $(r-g-2)q_2 = g$ and $e= (r-g-2)(q_2-[q_2])$. These identities imply
\[ x -(r-g-2) = (r-g-2)[q_2] -g =(r-g-2)q_2 -e -g = -e,\]
which completes the proof of (ii).
\bigskip\newline
(iii) can be shown by the same reasoning as (i).
\bigskip\newline
(iv) follows like (ii).
\hfill \ensuremath{\Box}\bigskip
\section{Continued balancing}
Recall that we cited an analytical result from \cite{SA2}
stating that $h_p(a)$ becomes minimal if $0=a_1<a_2<\ldots<a_{r-1}<a_r=s-1$ are chosen in nearly equidistant position. In terms of delta vector structure, we now see that bivalence is the first balancing step towards
this goal. Further balancing is achieved by placing the rarer of the two elements of the delta vector
as singletons. This is the separability property. Finally, we expect that the separating
singletons are again distributed in nearly equidistant position, which amounts to
bivalence of second degree.
\medskip
The following example demonstrates the balancing effect numerically.
\begin{example} Let $s=22$ and $r=17$. Then
\[\delta(a_1)=(1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1)\]
gives the minimal possible value of $h_3(a_1)\approx 5.36266$, thus maximizing the energy among
all tuples of $A(s,r)$. On the other hand, the vector
\[\delta(a_2)=(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 6)\]
gives a particularly large value of $h_3(a_2)\approx 7.25206$.
Restricting ourselves to bivalent delta vectors, the maximal value of $h_3$ achievable
is $h_3(a_3)\approx 5.96811$ for
\[\delta(a_3)=(2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2).\]
A further restriction to delta vectors that are also $[q]$-framed yields
a maximal $h_3(a_4) \approx 5.79688$ for
\[\delta(a_4)=(1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1).\]
If we additionally impose separability we get a maximal $h_3(a_5) \approx 5.47795$ for
\[\delta(a_5)=(1, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1).\]
Finally also requiring bivalence of second degree, we arrive at a maximal
$h_3(a_6) \approx 5.37484$ for
\[\delta(a_6)=(1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1).\]
We can see that this is now quite close to $h_3(a_1) \approx 5.36266$.
\end{example}
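\medskip
For the reader who wishes to reproduce such values, the following Python sketch evaluates $h_p(a)$ as the pairwise sum $\sum_{1\leq k<i\leq r}p^{-(a_i-a_k)}$, the form in which $h_p$ enters the computations above via (\ref{basishp}), after reconstructing $a=\delta^{-1}(d)$ from a given delta vector; for $\delta(a_1)$ above and $p=3$ it returns a value of approximately $5.36266$.
\begin{verbatim}
def delta_inverse(d):
    # Rebuild the exponent tuple a = delta^{-1}(d), starting at a_1 = 0.
    a = [0]
    for step in d:
        a.append(a[-1] + step)
    return a

def h(a, p):
    # Pairwise form of h_p: sum over all index pairs k < i of p^(a_k - a_i).
    return sum(p ** (a[k] - a[i])
               for i in range(len(a)) for k in range(i))

d1 = (1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1)   # delta(a_1) from above
print(h(delta_inverse(d1), 3))   # approximately 5.36266
\end{verbatim}
\medskip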
In view of this example one tends to expect that the balancing continues as far as possible,
finally resulting in the desired energy maximizing divisor set.
A definition of balancing of a certain degree is readily derived.
Let us formally define
\[
\begin{split}
\Lambda^0(d) &=d,\\
\Lambda^{i}(d)&=\Lambda(\Lambda^{i-1}(d)), \quad \mbox{for}~i\in\NN
\end{split}
\]
and say that $d$
is {\em balanced of $i$-th degree} if
$\Lambda^{i}(d)$ exists, i.e.~$\Lambda^{i-1}(d)$ is bivalent and separable.
We exclude cases where $\Lambda^{i}(d)$ would formally exist but be an empty vector due to
$\Lambda^{i-1}(d)$ having only identical entries.
Let us call $\Lambda^{0}(d),\ldots,\Lambda^{j}(d)$ the {\em $\Lambda$ sequence of $d$}
if $\Lambda^{i}(d)$ exists for $i=0,\ldots,j$, but not for $i=j+1$.
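\medskip
In this notation, and reusing the routine \texttt{lam} sketched after the definition of $\Lambda$, the $\Lambda$ sequence can be produced by the following Python fragment, whose stopping rules mirror the exclusions made above. Applied to the $34$-entry delta vector of Example \ref{s44r35ex} below, it reproduces the displayed sequence of five vectors.
\begin{verbatim}
def lambda_sequence(d):
    # Lambda^0(d), Lambda^1(d), ... as long as the next iterate exists:
    # the current vector must be bivalent and separable, and a vector
    # with only identical entries is not iterated any further.
    seq = [tuple(d)]
    while True:
        cur = seq[-1]
        if len(set(cur)) != 2:       # constant vector: Lambda is excluded
            return seq
        try:
            seq.append(lam(cur))     # raises ValueError if cur is not separable
        except ValueError:
            return seq
\end{verbatim}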
\medskip
The effect of continued balancing becomes strikingly apparent in the following example, where
we have several levels of balancing.
\begin{example}\label{s44r35ex}
For $s=44$ and $r=35$ the $\Lambda$ sequence of the energy maximal divisor tuple (up to symmetry of the delta vector) is
\[
\begin{split}
&(1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1), \\
&(3, 2, 3, 2, 2, 3, 2, 3, 2, 3),\\
&(1, 2, 1, 1),\\
&(1, 2), \\
&(1).
\end{split}
\]
\end{example}
It seems that framing is an important aspect in continued balancing.
\begin{conjecture}
Let $a$ be an energy maximal exponent tuple. Suppose that for $d:=\delta(a)$ all
$\Lambda^0(d),\ldots,\Lambda^j(d)$ exist and that $\Lambda^j(d)$ is unframed.
Then $\Lambda^0(d),\ldots,\Lambda^{j-1}(d)$ are all framed.
\end{conjecture}
Formally, a vector $\Lambda^i(d)$ can be interpreted as the delta vector
$\delta(a')$ of some unique admissible exponent tuple $a'$ (with length $r'$ and largest entry $s'-1$).
In this sense, one could shorten a given sequence $\Lambda^0(d),\ldots,\Lambda^j(d)$
to obtain the tail $\Lambda^0(d'),\ldots,\Lambda^{j-i}(d')$ for $d'=\Lambda^i(d)=\delta(a')$. This is shown in the next example.
\begin{example}\label{s26r11ex}
Based on the $\Lambda$ sequence given in Example \ref{s44r35ex}, consider the following vectors:
\[
\begin{split}
d=&(1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1), \\
a=&(0,1,2,3,5,6,7,9,10,11,\ldots,32,33,34,36,37,38,40,41,42,43),\\
d'=&(3, 2, 3, 2, 2, 3, 2, 3, 2, 3),\\
a'=&(0,3,5,8,10,12,15,17,20,22,25).
\end{split}
\]
\medskip
Shortening the $\Lambda$ sequence of $d$ with $a=\delta^{-1}(d)$ by omitting the first delta vector
gives the $\Lambda$ sequence of $d'=\Lambda(d)$ with admissible exponent tuple $a'=\delta^{-1}(d')$, in which case we have $s'=26$ and $r'=11$:
\[
\begin{split}
&(3, 2, 3, 2, 2, 3, 2, 3, 2, 3),\\
&(1, 2, 1, 1),\\
&(1, 2), \\
&(1).
\end{split}
\]
\end{example}
It would be a most desirable property that, whenever $a$ is energy maximal within $A(s,r)$,
the same holds for $a'$ within $A(s',r')$. Examples indicate that this is often the case,
but not in general. Consider the following example:
\begin{example}
Consider the $\Lambda$ sequence given in Example \ref{s26r11ex}.
Clearly, the vector $(3, 2, 3, 2, 2, 3, 2, 3, 2, 3)$ does not define an energy maximal
divisor tuple since it does not have the $[q]$-framing property required by Proposition \ref{PropFraming}.
\medskip
And indeed, the $\Lambda$ sequence of the energy maximal divisor tuple (again, up to symmetry) is
\[
\begin{split}
&(2, 3, 2, 3, 3, 2, 3, 2, 3, 2),\\
&(1, 2, 1, 1),\\
&(1, 2),\\
&(1).
\end{split}
\]
\end{example}
Although a continued balancing with longest possible sequences $\Lambda^0(d),\ldots,\Lambda^j(d)$
yields divisor tuples $a:=\delta^{-1}(d)\in A(s,r)$ with high energy, it does not automatically guarantee maximal energy
among the elements of $A(s,r)$. This can be seen from the next example. However, we suspect that
this effect is due to a probably not yet completely suitable formal notion of continued balancing.
\begin{example}
For $s=16$ and $r=12$ the $\Lambda$ sequence of the energy maximal divisor tuple (up to symmetry of the delta vector) is
\[
\begin{split}
&(1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1), (2, 1, 1, 1, 2), (3)
\end{split}
\]
but the $\Lambda$ sequence of the runner-up is longer:
\[
\begin{split}
& (1, 2, 1, 1, 2, 1, 2, 1, 2, 1, 1), (1, 2, 1, 1, 2), (1, 2), (1).
\end{split}
\]
Interestingly, this situation is reversed for $s=16$ and $r=11$:
\[
\begin{split}
&(1, 2, 1, 2, 1, 2, 2, 1, 2, 1), (1, 1, 2, 1), (2, 1), (1)
\end{split}
\]
is the $\Lambda$ sequence of the energy maximal divisor tuple, whereas
\[
\begin{split}
&(1, 2, 1, 2, 1, 2, 1, 2, 2, 1), (1, 1, 1, 2), (3)
\end{split}
\]
is the $\Lambda$ sequence of the runner-up.
\medskip
Note that in the first case we have $2g \le r-2$ and in the second case
$2g \ge r-1$. So, in view of the case distinction in Theorem \ref{Thm2.2},
there is a notable difference between the two situations that may be related to this effect.
\end{example}
To better understand this process and to embed it properly in a theory will be the object of
future work. In this context, let us remark that the continued balancing somewhat resembles what
happens in leap year calculations, which in turn are related to the Bresenham line drawing algorithm,
continued fractions and the Euclidean algorithm (cf. \cite{HAR}). Balancing also seems to be reminiscent of Beatty sequences and the way they partition $\NN$ into two sets (cf. \cite{ST}). Successfully linking these concepts
with maximizing the energy of integral circulant graphs of prime power order is certainly a goal
inviting further research. | {
| {
"timestamp": "2012-05-22T02:05:08",
"yymm": "1205",
"arxiv_id": "1205.4603",
"language": "en",
"url": "https://arxiv.org/abs/1205.4603",
"abstract": "The energy of a graph is the sum of the moduli of the eigenvalues of its adjacency matrix. We study the energy of integral circulant graphs, also called gcd graphs, which can be characterized by their vertex count $n$ and a set $\\cal D$ of divisors of $n$ in such a way that they have vertex set $\\mathbb{Z}_n$ and edge set ${{a,b}: a,b\\in\\mathbb{Z}_n, \\gcd(a-b,n)\\in {\\cal D}}$. For a fixed prime power $n=p^s$ and a fixed divisor set size $|{\\cal D}| =r$, we analyze the maximal energy among all matching integral circulant graphs. Let $p^{a_1} < p^{a_2} < ... < p^{a_r}$ be the elements of ${\\cal D}$. It turns out that the differences $d_i=a_{i+1}-a_{i}$ between the exponents of an energy maximal divisor set must satisfy certain balance conditions: (i) either all $d_i$ equal $q:=\\frac{s-1}{r-1}$, or at most the two differences $[q]$ and $[q+1]$ may occur; %(for a certain $d$ depending on $r$ and $s$) (ii) there are rules governing the sequence $d_1,...,d_{r-1}$ of consecutive differences. For particular choices of $s$ and $r$ these conditions already guarantee maximal energy and its value can be computed explicitly.",
"subjects": "Combinatorics (math.CO)",
"title": "The maximal energy of classes of integral circulant graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969645988576,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7099044328211879
} |
https://arxiv.org/abs/1403.0103 | On vanishing theorems for local systems associated to Laurent polynomials | We prove some vanishing theorems for the cohomology groups of local systems associated to Laurent polynomials. In particular, we extend one of the results of Gelfand-Kapranov-Zelevinsky into various directions. | \section{Introduction}\label{sec:1}
The study of the cohomology groups of
local systems is an important subject in
algebraic geometry, hyperplane arrangements,
topology and hypergeometric
functions of several variables.
Many mathematicians are interested
in the conditions under which these
cohomology groups are concentrated
in the middle degree (for a review of this
subject, see for example \cite[Section 6.4]{Dimca}).
Here let us consider this problem in the
following situation.
Let $B=\{ b(1), b(2), \ldots ,
b(N)\} \subset \ZZ^{n-1}$ be a finite subset
of the lattice $\ZZ^{n-1}$.
Assume that the affine lattice generated by
$B$ in $\ZZ^{n-1}$ coincides with $\ZZ^{n-1}$.
For $z=(z_1, \ldots, z_N) \in \CC^N$
we consider Laurent polynomials $P(x)$
on the algebraic torus
$T_0=( \CC^*)^{n-1}$ defined by
$P(x)=\sum_{j=1}^N z_j x^{b(j)}$
($x=(x_1, \ldots, x_{n-1})
\in T_0=( \CC^*)^{n-1}$).
Then for $c=(c_1, \ldots, c_n) \in \CC^n$
we obtain a possibly multi-valued function
$P(x)^{-c_n}
x_1^{c_1-1} \cdots x_{n-1}^{c_{n-1}-1}$ on
$W=T_0 \setminus P^{-1}(0)$.
It generates the
rank one local system
\begin{equation}
\LL = \CC_{W}
P(x)^{-c_n}
x_1^{c_1-1} \cdots x_{n-1}^{c_{n-1}-1}
\end{equation}
on $W$. Under the nonresonance
condition (see Definition \ref{NRC}) on
$c \in \CC^n$,
Gelfand-Kapranov-Zelevinsky
\cite{G-K-Z-2} proved that
we have the concentration
\begin{equation}
H^j(W ; \LL ) \simeq
0 \qquad (j \not= n-1)
\end{equation}
for non-degenerate Laurent polynomials $P(x)$.
This result was obtained as a byproduct
of their study of the integral
representations of $A$-hypergeometric
functions in \cite{G-K-Z-2}. Since
their proof of this concentration
relies heavily on
the framework of $\D$-module
theory, it is desirable to
prove it more directly.
In this paper, by applying the
twisted Morse theory to perverse sheaves,
we extend the result of
Gelfand-Kapranov-Zelevinsky
in various directions.
First in Theorem \ref{VTM}
we relax the non-degeneracy
condition on $P(x)$ by replacing it
with a weaker one (see Definition
\ref{WND}). We thus
extend the result of \cite{G-K-Z-2}
to the case where the hypersurface $P^{-1}(0)
\subset T_0$ may have isolated
singular points in $T_0$.
In fact, in Theorem \ref{VTM}
we also relax the condition that
$B$ generates $\ZZ^{n-1}$ to the weaker one that
the dimension of the convex
hull $\Delta \subset \RR^{n-1}$ of
$B$ in $\RR^{n-1}$ is $n-1$. In Theorem \ref{MVTM}
we extend these results
to more general local systems
associated to several Laurent polynomials.
Namely we obtain a vanishing theorem for
arrangements of toric hypersurfaces
with isolated singular points. Our proofs
of Theorems \ref{VTM} and \ref{MVTM}
are very natural and obtained by taking
(possibly singular) ``minimal" toric
compactifications of $T_0$. In order to work
on such singular varieties, we use
our previous idea in the proof of
\cite[Lemma 4.2]{E-T-2}.
See Section \ref{sec:3} for the details.
Moreover in Theorem \ref{NTM}
(assuming the non-degeneracy of
Gelfand-Kapranov-Zelevinsky \cite{G-K-Z-2}
for Laurent polynomials) we relax
the nonresonance condition of $c \in \CC^n$
in Theorem \ref{MVTM} by replacing it with the
much weaker one $c \notin \ZZ^n$.
To prove Theorem \ref{NTM}, we first perturb
the Laurent polynomials by multiplying
them by monomials. Then we apply the
twisted Morse theory to the
real-valued functions
associated to them
by using some standard properties
of vanishing cycles of perverse
sheaves. See Sections \ref{sec:4}
and \ref{sec:5} for the details.
In the course of the proof of Theorem \ref{NTM},
we also obtain the following result, which
might be of independent interest.
Let $Q_1, \ldots, Q_l$ be
Laurent polynomials on $T=( \CC^*)^{n}$
and for $1 \leq i \leq l$ denote by
$\Delta_i \subset \RR^{n}$ the Newton
polytope $NP(Q_i)$ of $Q_i$.
Set $\Delta = \Delta_1 + \cdots + \Delta_l$.
\begin{theorem}\label{NNVTM}
Let $\LL$ be a non-trivial local system
of rank one on $T=( \CC^*)^{n}$.
Assume that for any $1 \leq i \leq l$
we have $\dim \Delta_i =n$ and the subvariety
\begin{equation}
Z_i = \{ x \in T \ | \
Q_1(x)= \cdots = Q_i(x)=0 \} \subset T
\end{equation}
of $T$ is a non-degenerate complete
intersection.
Then for any $1 \leq i \leq l$ we have the
concentration
\begin{equation}
H^j(Z_i ; \LL ) \simeq
0 \qquad (j \not= n-i).
\end{equation}
Moreover we have
\begin{equation}
\dim H^{n-i} (Z_i ; \LL ) =
\dsum_{\begin{subarray}{c}
m_1,\ldots,m_i \geq 1\\
m_1+\cdots +m_i=n
\end{subarray}}\Vol_{\ZZ}(
\underbrace{\Delta_1,
\ldots,\Delta_1}_{\text{
$m_1$-times}},\ldots,
\underbrace{\Delta_i,
\ldots,\Delta_i}_{\text{$m_i$-times}}),
\end{equation}
where $\Vol_{\ZZ}(\underbrace{\Delta_1,
\ldots,\Delta_1}_{\text{$m_1$-times}},
\ldots,\underbrace{\Delta_i,\ldots,
\Delta_i}_{\text{$m_i$-times}})\in \ZZ$
is the normalized $n$-dimensional mixed volume
with respect to the lattice $\ZZ^n
\subset \RR^n$
\end{theorem}
Note that this result can be
considered as a refinement of
the classical Bernstein-Khovanskii-Kushnirenko
theorem (see \cite{Khovanskii}).
On the other hand, Matusevich-Miller-Walther
\cite{M-M-W} and Saito-Sturmfels-Takayama
\cite{S-S-T} studied the condition on
the parameter vector $c \in \CC^n$
for which the corresponding local
system of $A$-hypergeometric
functions is non-rank-jumping. They also
relaxed the nonresonance condition of
$c \in \CC^n$. It would be an
interesting problem to study the
relationship between Theorem \ref{NTM}
and their results.
\bigskip
\noindent{\bf Acknowledgement:}
We express our hearty gratitude to
Professor N. Takayama for drawing
our attention to this problem.
Moreover some discussions with
Professor M. Yoshinaga were very
useful during the preparation
of this paper.
\section{Preliminary results}\label{sec:2}
In this section, we recall basic
notions and results which will be used
in this paper. We essentially
follow the terminology of
\cite{Dimca}, \cite{H-T-T} etc.
For example, for a topological
space $X$ we denote by $\Db(X)$ the
derived category whose objects are
bounded complexes of sheaves
of $\CC_X$-modules on $X$.
Denote by $\Dbc(X)$ the full
subcategory of $\Db(X)$ consisting of
constructible objects.
Let $\Delta \subset \RR^n$ be a lattice polytope
in $\RR^n$. For an element $u \in \RR^n$ of
(the dual vector space of) $\RR^n$ we define the
supporting face $\gamma_u \prec \Delta$
of $u$ in $ \Delta$ by
\begin{equation}
\gamma_u = \left\{ v \in \Delta \ | \
\langle u , v \rangle
=
\min_{w \in \Delta }
\langle u ,w \rangle \right\},
\end{equation}
where for $u=(u_1,\ldots,u_n)$
and $v=(v_1,\ldots, v_n)$ we set
$\langle u,v\rangle =\sum_{i=1}^n u_iv_i$.
For a face $\gamma$ of $\Delta$ set
\begin{equation}
\sigma (\gamma) = \overline{ \{ u \in \RR^n \ | \
\gamma_u = \gamma \} } \subset \RR^n .
\end{equation}
\noindent Then $\sigma (\gamma )$
is an $(n- \dim \gamma )$-dimensional
rational convex polyhedral
cone in $\RR^n$. Moreover
the family $\{ \sigma (\gamma ) \ | \
\gamma \prec \Delta \}$ of cones in $\RR^n$
thus obtained is a subdivision of $\RR^n$.
We call it the dual subdivision of $\RR^n$ by
$\Delta$. If $\dim \Delta =n$ it
satisfies the axiom
of fans (see \cite{Fulton} and
\cite{Oda} etc.). We call it the dual fan of
$\Delta$.
Let $\Delta_1, \ldots, \Delta_p
\subset \RR^n$ be lattice polytopes
in $\RR^n$ and $\Delta =
\Delta_1 + \cdots + \Delta_p
\subset \RR^n$ their Minkowski sum.
For a face $\gamma \prec \Delta$ of
$\Delta$, by taking a point $u \in \RR^n$
in the relative interior of its dual cone $\sigma (\gamma)$
we define the supporting face
$\gamma_i \prec \Delta_i$ of $u$ in $\Delta_i$.
Then it is easy to see that
$\gamma = \gamma_1 + \cdots + \gamma_p$.
Now we recall Bernstein-Khovanskii-Kushnirenko's
theorem \cite{Khovanskii}.
\begin{definition}
Let $g(x)=\sum_{v \in \ZZ^n} c_vx^v$ be a
Laurent polynomial on the algebraic torus
$T=(\CC^*)^n$ ($c_v\in \CC$).
\begin{enumerate}
\item We call the convex hull of
$\supp(g):=\{v\in \ZZ^n \ | \ c_v\neq 0\}
\subset \ZZ^n \subset \RR^n$ in $\RR^n$ the
Newton polytope of $g$ and denote it by $NP(g)$.
\item For a face $\gamma \prec NP(g)$ of $NP(g)$,
we define the $\gamma$-part
$g^{\gamma}$ of $g$ by
$g^{\gamma}(x):=\sum_{v \in \gamma} c_vx^v$.
\end{enumerate}
\end{definition}
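\medskip
The following small Python sketch illustrates these two notions: a Laurent polynomial is encoded as a dictionary mapping exponent vectors to coefficients, and the $\gamma_u$-part with respect to a covector $u$ is obtained by keeping the terms whose exponents minimize $\langle u , \cdot \rangle$ over $\supp(g)$, since these are exactly the points of $\supp(g)$ lying on the supporting face $\gamma_u \prec NP(g)$.
\begin{verbatim}
def support(g):
    # supp(g): the exponent vectors v with c_v != 0.
    return [v for v, c in g.items() if c != 0]

def face_part(g, u):
    # The gamma_u-part of g: keep the terms whose exponents minimize <u, v>
    # over supp(g), i.e. the exponents lying on the supporting face gamma_u.
    pairing = lambda v: sum(ui * vi for ui, vi in zip(u, v))
    m = min(pairing(v) for v in support(g))
    return {v: c for v, c in g.items() if c != 0 and pairing(v) == m}

# Example: g(x1, x2) = 1 + x1 + x2 + x1*x2, encoded by its exponent vectors.
g = {(0, 0): 1, (1, 0): 1, (0, 1): 1, (1, 1): 1}
print(face_part(g, (1, 0)))   # {(0, 0): 1, (0, 1): 1}: terms where <u, v> is minimal
\end{verbatim}
\medskip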
\begin{definition} (see \cite{Oka} etc.)
Let $g_1, g_2, \ldots , g_p$ be
Laurent polynomials on $T=(\CC^*)^n$.
Set $\Delta_i=NP(g_i)$ $(i=1,\ldots, p)$ and
$\Delta = \Delta_1 + \cdots + \Delta_p$.
Then we say that the subvariety
$Z=\{ x\in T=(\CC^*)^n \ | \ g_1(x)=g_2(x)=
\cdots =g_p(x)=0 \}$ of $T=(\CC^*)^n$ is a
non-degenerate complete intersection
if for any face $\gamma \prec \Delta$ of
$\Delta$ the $p$-form $dg_1^{\gamma_1} \wedge
dg_2^{\gamma_2} \wedge
\cdots \wedge dg_p^{\gamma_p}$ does not vanish
on $\{ x\in T=(\CC^*)^n \ | \
g_1^{\gamma_1}(x)= \cdots =
g_p^{\gamma_p}(x)=0 \}$.
\end{definition}
\begin{theorem}\label{BKK}
[\cite{Khovanskii}]\label{thm:2-14}
Let $g_1, g_2, \ldots , g_p$ be
Laurent polynomials on $T=(\CC^*)^n$.
Assume that the subvariety $Z
=\{ x\in T=(\CC^*)^n \ | \ g_1(x)=g_2(x)=
\cdots =g_p(x)=0 \}$ of $T=(\CC^*)^n$ is a
non-degenerate complete intersection.
Set $\Delta_i=NP(g_i)$ $(i=1,\ldots, p)$. Then we have
\begin{equation}
\chi(Z)=(-1)^{n-p}
\dsum_{\begin{subarray}{c}
m_1,\ldots,m_p \geq 1\\ m_1+\cdots +m_p=n
\end{subarray}}\Vol_{\ZZ}(
\underbrace{\Delta_1,\ldots,\Delta_1}_{\text{
$m_1$-times}},\ldots,
\underbrace{\Delta_p,
\ldots,\Delta_p}_{\text{$m_p$-times}}),
\end{equation}
where $\Vol_{\ZZ}(\underbrace{\Delta_1,
\ldots,\Delta_1}_{\text{$m_1$-times}},
\ldots,\underbrace{\Delta_p,\ldots,
\Delta_p}_{\text{$m_p$-times}})\in \ZZ$
is the normalized $n$-dimensional mixed volume
with respect to the lattice $\ZZ^n
\subset \RR^n$ $($see the remark below$)$.
\end{theorem}
\begin{remark}\label{rem:2-13}
Let $\Delta_1,\ldots,\Delta_n$
be lattice
polytopes in $\RR^n$. Then
their normalized $n$-dimensional
mixed volume
$\Vol_{\ZZ}( \Delta_1,\ldots,\Delta_n)
\in \ZZ$ is defined by the formula
\begin{equation}
\Vol_{\ZZ}( \Delta_1, \ldots , \Delta_n)=
\frac{1}{n!}
\dsum_{k=1}^n (-1)^{n-k}
\sum_{\begin{subarray}{c}I\subset
\{1,\ldots,n\}\\ \sharp I=k\end{subarray}}
\Vol_{\ZZ}\left(
\dsum_{i\in I} \Delta_i \right)
\end{equation}
where $\Vol_{\ZZ}(\ \cdot\ )
= n! \Vol (\ \cdot\ ) \in \ZZ$ is
the normalized $n$-dimensional volume
with respect to the lattice $\ZZ^n
\subset \RR^n$.
\end{remark}
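\medskip
For $n=2$ the above formula reads $\Vol_{\ZZ}(\Delta_1,\Delta_2)=\tfrac{1}{2}\left(\Vol_{\ZZ}(\Delta_1+\Delta_2)-\Vol_{\ZZ}(\Delta_1)-\Vol_{\ZZ}(\Delta_2)\right)$, and the following Python sketch evaluates it from vertex sets, computing $\Vol_{\ZZ}$ as twice the Euclidean area via a monotone-chain convex hull and the shoelace formula. For the unit segments $\Delta_1=\mathrm{conv}\{(0,0),(1,0)\}$ and $\Delta_2=\mathrm{conv}\{(0,0),(0,1)\}$ it returns $1$, in accordance with Theorem \ref{BKK} applied to the non-degenerate system $x_1-1=x_2-1=0$.
\begin{verbatim}
def hull(points):
    # Andrew's monotone chain; hull vertices in counter-clockwise order.
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def vol_Z(points):
    # Normalized 2-dimensional volume: 2! times the Euclidean area (shoelace).
    h = hull(points)
    if len(h) < 3:
        return 0
    return abs(sum(h[i][0]*h[(i+1) % len(h)][1] - h[(i+1) % len(h)][0]*h[i][1]
                   for i in range(len(h))))

def minkowski(P, Q):
    # A generating point set of the Minkowski sum P + Q.
    return [(p[0]+q[0], p[1]+q[1]) for p in P for q in Q]

def mixed_vol_Z(P, Q):
    # Inclusion-exclusion formula of the remark, specialized to n = 2.
    return (vol_Z(minkowski(P, Q)) - vol_Z(P) - vol_Z(Q)) // 2

print(mixed_vol_Z([(0, 0), (1, 0)], [(0, 0), (0, 1)]))   # 1
\end{verbatim}
\medskip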
\section{A vanishing theorem
for local systems}\label{sec:3}
Let $B=\{ b(1), b(2), \ldots ,
b(N)\} \subset \ZZ^{n-1}$ be a finite subset
of the lattice $\ZZ^{n-1}$.
Let $\Delta \subset \RR^{n-1}$ be the convex
hull of $B$ in $\RR^{n-1}$.
Assume that $\dim \Delta =n-1$.
For $z=(z_1, \ldots, z_N) \in \CC^N$
we define a Laurent polynomial $P(x)$
on $T_0=( \CC^*)^{n-1}$ by
$P(x)=\sum_{j=1}^N z_j x^{b(j)}$
($x=(x_1, \ldots, x_{n-1})
\in T_0=( \CC^*)^{n-1}$).
Then for $c=(c_1, \ldots, c_n) \in \CC^n$
the possibly multi-valued function
$P(x)^{-c_n}
x_1^{c_1-1} \cdots x_{n-1}^{c_{n-1}-1}$ on
$W=T_0 \setminus P^{-1}(0)$ generates the
local system
\begin{equation}
\LL = \CC_{W}
P(x)^{-c_n}
x_1^{c_1-1} \cdots x_{n-1}^{c_{n-1}-1}.
\end{equation}
Set $a(j)=(b(j), 1) \in \ZZ^n$
($1 \leq j \leq N$) and $A=\{ a(1), a(2), \ldots ,
a(N)\} \subset \ZZ^{n}$.
Then $K= \RR_+ A \subset \RR^n$ is an
$n$-dimensional
closed convex polyhedral cone in $\RR^n$.
For a face $\Gamma \prec K$ of $K$
let $\Lin (\Gamma) \simeq
\CC^{\dim \Gamma} \subset \CC^n$
be the $\CC$-linear subspace of
$\CC^n$ generated by $\Gamma$.
\begin{definition}\label{NRC}
(Gelfand-Kapranov-Zelevinsky
\cite[page 262]{G-K-Z-2})
We say that the parameter vector
$c \in \CC^n$ is nonresonant
(with respect to $A$) if
for any face $\Gamma \prec K$ of $K$
such that $\dim \Gamma =n-1$
we have $c \notin \{ \ZZ^n+
\Lin (\Gamma ) \}$.
\end{definition}
\begin{definition}\label{WND}
(see \cite{Oka} etc.)
We say that the Laurent polynomial
$P(x)= \sum_{j=1}^N z_j x^{b(j)}$ is
``weakly" non-degenerate if
for any face $\gamma$ of
$\Delta$ such that
$\dim \gamma < \dim \Delta =n-1$ the hypersurface
\begin{equation}
\{ x \in T_0=(\CC^*)^{n-1} \ | \
P^{\gamma}(x)= \sum_{j: b(j) \in \gamma}
z_j x^{b(j)}=0 \} \subset T_0
\end{equation}
is smooth and reduced.
\end{definition}
Let $\iota : W=T_0 \setminus P^{-1}(0)
\hookrightarrow T_0$ be the inclusion
map and set $\M = R \iota_* \LL
\in \Dbc (T_0)$. Then the following theorem
generalizes one of the results in
Gelfand-Kapranov-Zelevinsky \cite{G-K-Z-2}
to the case where the hypersurface
$P^{-1}(0) \subset T_0$ may have
isolated singular points.
\begin{theorem}\label{VTM}
Assume that $\dim \Delta =n-1$,
the parameter vector
$c \in \CC^n$ is nonresonant and
the Laurent polynomial $P(x)$ is
weakly non-degenerate. Then there exists
an isomorphism
\begin{equation}
H^j_c(T_0; \M ) \simeq H^j(T_0; \M )
\simeq H^j(W ; \LL )
\end{equation}
for any $j \in \ZZ$. Moreover we have the
concentration
\begin{equation}
H^j(W ; \LL ) \simeq
0 \qquad (j \not= n-1).
\end{equation}
\end{theorem}
\begin{proof}
Let $\Sigma_0$ be the dual fan of $\Delta$ in $\RR^{n-1}$
and $X$ the (possibly singular) toric variety associated to it.
Then there exists a natural action of $T_0$ on
$X$ whose orbits are parametrized by the faces of
$\Delta$. For a face $\gamma$ of $\Delta$
denote by $X_{\gamma} \simeq
(\CC^*)^{\dim \gamma}$ the $T_0$-orbit
associated to $\gamma$. Note that
$X_{\Delta} \simeq T_0$ is the unique
open dense $T_0$-orbit in $X$ and
its complement $X \setminus X_{\Delta}$
is the union of $X_{\gamma}$ for
$\gamma \prec \Delta$ such that
$\dim \gamma <n-1$. Let $i :
X_{\Delta} \simeq T_0 \hookrightarrow X$ be
the inclusion map. Then by the weak
non-degeneracy of $P(x)$, the closure
$S= \overline{i (P^{-1}(0))}
\subset X$
of the hypersurface $i( P^{-1}(0)) \subset i (T_0)$
in $X$ intersects $T_0$-orbits $X_{\gamma}$
in $X \setminus X_{\Delta}$ transversally.
Moreover by the nonresonance
of $c \in \CC^n$, for any
$\gamma \prec \Delta$ such that $\dim
\gamma =n-2$ the monodromy of the local
system $\LL$ around the codimension-one
$T_0$-orbit $X_{\gamma} \subset X$ in $X$
is non-trivial.
Indeed, let $\gamma \prec \Delta$
be such a facet of $\Delta$.
We denote by $\Gamma$ the facet of the cone
$K= \RR_+ A$ generated by
$\gamma \times \{ 1 \} \subset K$.
Let $\nu \in \ZZ^{n-1}
\setminus \{ 0 \}$ be the primitive
inner conormal vector of the facet
$\gamma$ of $\Delta \subset
\RR^{n-1}$ and set
\begin{equation}
m= \min_{v \in \Delta}
\langle \nu, v \rangle =
\min_{v \in \gamma}
\langle \nu, v \rangle \in \ZZ.
\end{equation}
Then the primitive
inner conormal vector
$\widetilde{\nu} \in \ZZ^{n}
\setminus \{ 0 \}$ of the facet
$\Gamma$ of $K \subset \RR^{n}$
is explicitly given by the formula
\begin{equation}
\widetilde{\nu} =
\left( \begin{array}{c}
\nu \\
-m
\end{array} \right) \in \ZZ^{n}
\setminus \{ 0 \}.
\end{equation}
Moreover the condition
$c=(c_1, \ldots, c_{n-1}, c_n) \notin
\{ \ZZ^n+ \Lin (\Gamma ) \}$ is
equivalent to the condition
\begin{equation}
m( \gamma ):=
\biggl\langle \nu , \quad
\left( \begin{array}{c}
c_1-1 \\
\vdots \\
c_{n-1}-1
\end{array} \right) \biggr\rangle
- m \cdot c_n
\quad \notin \ZZ.
\end{equation}
We can easily see that the
order of the (multi-valued) function
$P(x)^{-c_n}
x_1^{c_1-1} \cdots x_{n-1}^{c_{n-1}-1}$
along the codimension-one
$T_0$-orbit $X_{\gamma} \subset X$ in $X$
is equal to $m( \gamma ) \notin \ZZ$.
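For instance, in the simplest case $n=2$ this can be checked by hand: take $B=\{ 0,1 \} \subset \ZZ$, so that $P(x)=z_1+z_2x_1$, $\Delta =[0,1]$ and $X$ is the projective line. For the facet $\gamma =\{ 0 \}$ of $\Delta$ we have $\nu =1$ and $m=0$, and since $P=z_1 \neq 0$ at the corresponding fixed point $x_1=0$ for generic $z$, the order of $P(x)^{-c_2}x_1^{c_1-1}$ there is $c_1-1=m( \gamma )$. For $\gamma =\{ 1 \}$ we have $\nu =-1$ and $m=-1$, and in the coordinate $w=1/x_1$ at infinity we get $P(x)^{-c_2}x_1^{c_1-1}=(z_1w+z_2)^{-c_2}\, w^{-(c_1-1)+c_2}$, whose order at $w=0$ is $-(c_1-1)+c_2=m( \gamma )$.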
Then by constructing suitable distance functions
as in the proof of
\cite[Lemma 4.2]{E-T-2}, we can show that
for the open embedding $i: T_0 \hookrightarrow
X$ we have
\begin{equation}
(Ri_* \M )_p \simeq 0
\qquad \text{for any} \ p \in
X \setminus i(T_0)
\end{equation}
as follows. Let us first assume that the point $p \in
X \setminus i(T_0)$ lies in a $0$-dimensional
$T_0$-orbit $X_{\gamma}$. Let $U_{\gamma}
\subset X$ be an $(n-1)$-dimensional affine toric
variety containing $p=X_{\gamma}$ and regard it
as a subvariety of $\CC^l$ for some $l$.
Then as in the proof of \cite[Lemma 4.2]{E-T-2}
we can construct a distance function on $\CC^l$
to prove the isomorphism
$(Ri_* \M )_p \simeq 0 $.
When the point $p \in
X \setminus i(T_0)$ lies in a
$T_0$-orbit $X_{\gamma}$ such that
$\dim X_{\gamma} = \dim \gamma >0$, by
taking a normal slice of $X_{\gamma}$ in $X$
we can reduce the problem to the case
where $\dim X_{\gamma} =0$. We thus obtain an
isomorphism
$i_! \M \simeq Ri_* \M$ in $\Dbc (X)$.
Applying the functor $R \Gamma_c(X; \cdot ) =
R \Gamma (X; \cdot )$ to it we obtain the
desired isomorphisms
\begin{equation}
H^j_c(T_0; \M ) \simeq H^j(T_0; \M )
\simeq H^j(W ; \LL )
\end{equation}
for $j \in \ZZ$. Now recall that $T_0$ is
an affine variety and $\M \in \Dbc (T_0)$
is a perverse sheaf on it (up to some
shift). Then by Artin's vanishing theorem
for perverse sheaves over affine varieties
(see \cite[Corollaries 5.2.18 and 5.2.19]{Dimca}
etc.) we have
\begin{equation}
H^j_c(T_0; \M ) \simeq 0 \quad \text{for} \
j< \dim T_0 =n-1
\end{equation}
and
\begin{equation}
H^j(T_0; \M ) \simeq 0 \quad \text{for} \
j> \dim T_0 =n-1,
\end{equation}
from which the last assertion
immediately follows.
This completes the proof.
\end{proof}
By Theorem \ref{BKK} we obtain the following
corollary of Theorem \ref{VTM}.
\begin{corollary}
In the situation of Theorem \ref{VTM},
let $p_1, \ldots, p_r \in P^{-1}(0)$
be the (isolated) singular points of
$P^{-1}(0) \subset T_0$ and for
$1 \leq i \leq r$ let $\mu_i>0$ be the
Milnor number of $P^{-1}(0)$ at $p_i$.
Then we have
\begin{equation}
\dim H^{n-1}(W ; \LL ) =
\Vol_{\ZZ}( \Delta ) -
\sum_{i=1}^r \mu_i.
\end{equation}
\end{corollary}
We can generalize Theorem \ref{VTM}
to the case where the hypersurface
$S= \overline{i (P^{-1}(0))}
\subset X$ has
(stratified) isolated
singular points $p$ also in
$T_0$-orbits $X_{\gamma}
\subset X \setminus i(T_0)$ as follows.
For such a point
$p \in S \cap X_{\gamma}$ of $S$
let us show that we have the
vanishing $(Ri_* \M )_p
\simeq 0$ in general.
First consider the case where
the codimension of $X_{\gamma}$ in $X$ is one.
The question being local,
it suffices to consider the case where
$X= \CC^{n-1}_y \supset
X_{\gamma}= \{ y_{n-1}=0 \}$,
$S= \{ f(y)=0 \} \ni p=0$,
$T_0= \CC^{n-1} \setminus \{ y_{n-1}=0 \}$,
$i: \CC^{n-1} \setminus \{ y_{n-1}=0 \} \hookrightarrow
\CC^{n-1}$ and
\begin{equation}
\LL = \CC_{\CC^{n-1} \setminus
\{ f(y) \cdot y_{n-1}=0 \} }
f(y)^{\alpha} y_{n-1}^{\beta}
\end{equation}
for $\alpha = -c_n$ and some
$\beta \in \CC$
(by the notation in the
proof of Theorem \ref{VTM} we have
$\beta = m ( \gamma )$). Here $f(y)$ is a
polynomial on
$\CC^{n-1}$ such that $S=f^{-1}(0)$
has a (stratified) isolated singular point
at $p=0 \in S \cap X_{\gamma}$. Moreover for
the inclusion map
$\iota : \CC^{n-1}
\setminus \{ f(y) \cdot y_{n-1}=0 \}
\hookrightarrow \CC^{n-1}
\setminus \{ y_{n-1}=0 \}$
we have $\M \simeq R \iota_* \LL$.
By the nonresonance
of $c \in \CC^n$ we have $\beta = m ( \gamma )
\notin \ZZ$ and there exists an isomorphism
\begin{equation}\label{EQ=1}
i_! ( \CC_{\CC^{n-1}
\setminus \{ y_{n-1}=0 \} }
y_{n-1}^{\beta} )
\simto
Ri_* ( \CC_{\CC^{n-1}
\setminus \{ y_{n-1}=0 \} }
y_{n-1}^{\beta} ) .
\end{equation}
Set $\N = i_! ( \CC_{\CC^{n-1}
\setminus \{ y_{n-1}=0 \} }
y_{n-1}^{\beta} )$. Then $\N$ is a perverse
sheaf (up to some shift)
on $X= \CC^{n-1}$ and satisfies
the condition
$\psi_{f}( \N )_p
\simeq \phi_{f}( \N )_p$ (use \eqref{EQ=1}), where
\begin{equation}
\psi_f, \phi_f : \Dbc(X)
\longrightarrow \Dbc ( \{ f=0 \} )
\end{equation}
are the nearby and vanishing
cycle functors associated to $f$ respectively
(see \cite{Dimca} etc.).
By the $t$-exactness
of the functor $\phi_{f}$
the constructible
sheaf $\phi_{f}( \N )$
on $S=f^{-1}(0)$
is perverse (up to some
shift). Moreover by our assumption its
support is contained in the
point $\{ p \} = \{ 0 \}
\subset X = \CC^{n-1}$.
This implies that
we have the concentration
\begin{equation}
H^j \psi_{f}( \N )_p
\simeq H^j \phi_{f}( \N )_p
\simeq 0 \qquad (j \not= n-2).
\end{equation}
Hence in order to show the vanishing
$(Ri_* \M )_p \simeq 0$ it suffices to prove
that the monodromy operator
$\Phi : H^{n-2} \psi_{f}( \N )_p \simto
H^{n-2} \psi_{f}( \N )_p$
does not have the
eigenvalue $\exp (- 2 \pi i \alpha )$.
For this purpose, we shall use the results
in \cite[Section 5]{M-T-2}.
Let $\Gamma_+(f)
\subset \RR_+^{n-1}$ be the convex hull of
$\cup_{v \in \supp (f)} (v+ \RR^{n-1}_+)$ in $\RR^{n-1}_+$.
We call it the Newton polyhedron of
$f$ at the origin $p=0 \in
\CC^{n-1}$.
\begin{definition} (see \cite{Oka} etc.)
We say that $f$ is Newton non-degenerate
at the origin $p=0 \in
\CC^{n-1}$ if for any compact face $\gamma \prec
\Gamma_+(f)$ of $\Gamma_+(f)$
the hypersurface
$\{ y \in (\CC^*)^{n-1} \ | \ f^{\gamma}
(y)=0 \}$ of $(\CC^*)^{n-1}$ is
smooth and reduced.
\end{definition}
For each subset
$I \subset \{ 1,2, \ldots, n-1 \}$
we set
\begin{equation}
\RR_+^I= \{ v=(v_1, \ldots, v_{n-1})
\in \RR_+^{n-1}
\ | \ v_i=0 \
\text{for any} \ i \notin I \} \simeq \RR_+^{\sharp I}.
\end{equation}
Let $\gamma_1^I,
\ldots, \gamma_{n(I)}^I \prec
\Gamma_+(f) \cap \RR_+^I$
be the compact facets of
$\Gamma_+(f) \cap \RR_+^I$.
For $1 \leq i \leq n(I)$
denote by $d_i^I \in \ZZ_{>0}$ the lattice
distance of $\gamma_i^I$ from the origin
$0 \in \RR^{I}_+$
and let $u_i^I=(u_{i,1}^I, \ldots,
u_{i,n-1}^I) \in
\RR_+^I \cap \ZZ^{n-1}$ be the
unique (non-zero) primitive
vector which takes its
minimum exactly on
$\gamma_i^I$. For simplicity
we set $\delta_i^I:=u_{i,n-1}^I$.
Finally we define
a finite subset
$E_p \subset \CC$ of $\CC$ by
\begin{equation}
E_p= \bigcup_{I: I \ni n-1}
\bigcup_{i=1}^{n(I)}
\{ \lambda \in \CC \ |
\ \lambda^{d_i^I} =
\exp (2 \pi \sqrt{-1}
\beta \cdot \delta_i^I) \}.
\end{equation}
Then the following result
is a special case of \cite[Theorem 5.5]{M-T-2}.
\begin{proposition}
In the above situation, assume moreover that
$f$ is Newton non-degenerate
at the origin $p=0 \in
\CC^{n-1}$. Then the set
of the eigenvalues of
the monodromy operator
$\Phi : H^{n-2}
\psi_{f}( \N )_p \simto
H^{n-2} \psi_{f}( \N )_p$
is contained in $E_p$.
\end{proposition}
\begin{corollary}
Assume that $\dim \Delta =n-1$,
$c \in \CC^n$ is nonresonant,
$\exp (- 2 \pi \sqrt{-1} \alpha )
= \exp ( 2 \pi \sqrt{-1} c_n )
\notin E_p$ and
$f$ is Newton non-degenerate
at the origin $p=0 \in \CC^{n-1}$. Then we have
$(Ri_* \M )_p \simeq 0$.
\end{corollary}
In fact, by \cite[Theorem 5.5]{M-T-2}
we can generalize this corollary to the case
where the codimension of the $T_0$-orbit
$X_{\gamma}$ in $X_{\gamma}
\subset X \setminus i(T_0)$ containing
the (stratified) isolated
singular point $p$ of $S$ is larger than one.
We leave the precise formulation
to the reader and omit the details here.
In this way, our Theorem \ref{VTM} can be generalized
to the case where $S$ has (stratified) isolated
singular points $p$ also in $T_0$-orbits $X_{\gamma}
\subset X \setminus i(T_0)$.
In particular we have the following result.
For a face $\gamma$ of $\Delta$ let
$L_{\gamma} \simeq \RR^{\dim \gamma}$
be the linear subspace of $\RR^{n-1}$
parallel to the affine span of $\gamma$
in $\RR^{n-1}$ and consider the $\gamma$-part
$P^{\gamma}$ of $P$ as a function on
$T_{\gamma}= \Spec ( \CC [ L_{\gamma} \cap
\ZZ^{n-1} ] ) \simeq ( \CC^*)^{\dim \gamma}$.
\begin{theorem}\label{SVTM}
Assume that $\dim \Delta =n-1$ and
for any face $\gamma$ of $\Delta$
the hypersurface $(P^{\gamma})^{-1}(0)
\subset T_{\gamma}$ of $T_{\gamma}$
has only isolated singular points.
Then for generic parameter vectors
$c \in \CC^n$ we have the
concentration
\begin{equation}
H^j(W ; \LL ) \simeq
0 \qquad (j \not= n-1).
\end{equation}
\end{theorem}
\medskip \par
From now, let us generalize Theorem \ref{VTM}
to the following more general situation.
For $0<k<n$ let $B_i=\{ b_i(1), b_i(2), \ldots ,
b_i(N_i)\} \subset \ZZ^{n-k}$
($1 \leq i \leq k$) be $k$ finite subsets
of the lattice $\ZZ^{n-k}$ and set
$N=N_1 + N_2 + \cdots +N_k$.
For $1 \leq i \leq k$ and
$(z_{i1}, \ldots, z_{iN_i}) \in \CC^{N_i}$
we define a Laurent polynomial $P_i(x)$
on $T_0=( \CC^*)^{n-k}$ by
$P_i(x)=\sum_{j=1}^{N_i} z_{ij} x^{b_i(j)}$
($x=(x_1, \ldots, x_{n-k}) \in T_0=( \CC^*)^{n-k}$).
Let us set $W=T_0 \setminus \cup_{i=1}^k P_i^{-1}(0)$.
Then for
$c=(c_1, \ldots, c_{n-k}, \tl{c_1},
\ldots, \tl{c_k}) \in \CC^n$
the possibly multi-valued function
\begin{equation}
P_1(x)^{-\tl{c_1}} \cdots P_k(x)^{-\tl{c_k}}
x_1^{c_1-1} \cdots x_{n-k}^{c_{n-k}-1}
\end{equation}
on $W$ generates the local system
\begin{equation}
\LL = \CC_{W}
P_1(x)^{-\tl{c_1}} \cdots P_k(x)^{-\tl{c_k}}
x_1^{c_1-1} \cdots x_{n-k}^{c_{n-k}-1}.
\end{equation}
Let $e_i=(0,0, \ldots, 0,1,0, \ldots, 0) \in \ZZ^k$
($1 \leq i \leq k$) be the standard basis of $\ZZ^k$
and set $a_i(j)=(b_i(j), e_i) \in \ZZ^{n-k}
\times \ZZ^k=\ZZ^n$
($1 \leq i \leq k$, $1 \leq j \leq N_i$)
and
\begin{equation}
A=\{ a_1(1), \ldots , a_1(N_1),
\ldots \ldots ,
a_k(1), \ldots , a_k(N_k)
\} \subset \ZZ^{n}.
\end{equation}
For $1 \leq i \leq k$ let $\Delta_i \subset
\RR^{n-k}$ be the convex
hull of $B_i$ in $\RR^{n-k}$.
Denote by $\Delta \subset \RR^{n-k}$
their Minkowski sum $\Delta_1 + \cdots +
\Delta_k$. Assume that $\dim \Delta =n-k$.
Then by using the $n$-dimensional
closed convex polyhedral cone
$K= \RR_+ A \subset \RR^n$
generated by $A$ in $\RR^n$
we can define the nonresonance of
the parameter $c \in \CC^n$ as in
Definition \ref{NRC}.
For a face $\gamma \prec \Delta$
of $\Delta$ let $\gamma_i \prec \Delta_i$
be the faces of $\Delta_i$ ($1 \leq i \leq k$)
canonically associated to $\gamma$ such that
$\gamma = \gamma_1 + \cdots +
\gamma_k$.
\begin{definition}\label{MND}
(see \cite{Oka} etc.)
We say that the $k$-tuple of the
Laurent polynomials
$(P_1, \ldots, P_k)$ is
``weakly" (resp. ``strongly") non-degenerate if
for any face $\gamma$ of
$\Delta$ such that
$\dim \gamma < \dim \Delta = n-k$
(resp. $\dim \gamma \leq \dim \Delta = n-k$)
and non-empty subset
$J \subset \{ 1,2, \ldots, k \}$
the subvariety
\begin{equation}
\{ x \in T_0=(\CC^*)^{n-k} \ | \
P_i^{\gamma_i}(x)=0 \ \
(i \in J) \} \subset T_0
\end{equation}
is a non-degenerate complete intersection.
\end{definition}
\begin{remark}
Denote the convex hull of
$\cup_{i=1}^k (\Delta_i \times \{ e_i \}) \subset
\RR^{n-k} \times \RR^{k} =
\RR^{n}$ in $\RR^{n}$ by $\Delta_1 * \cdots * \Delta_k$.
Then $\Delta_1 * \cdots * \Delta_k$ is naturally identified
with the Newton polytope of the Laurent polynomial
$R(x,t)=\sum_{i=1}^k P_i(x)t_i $ on
$\widetilde{T_0}:= T_0 \times (\CC^*)_t^k \simeq
(\CC^*)_{x,t}^{n}$. In \cite{G-K-Z-2} the authors considered
the condition that for any face $\gamma$ of
$\Delta_1 * \cdots * \Delta_k$ the hypersurface
$\{ (x,t) \in \widetilde{T_0} \ | \
R^{\gamma}(x,t)=0 \} \subset \widetilde{T_0}$ of
$\widetilde{T_0}$ is smooth and reduced. It is easy to
see that our strong non-degeneracy of the
$k$-tuple $(P_1, \ldots, P_k)$
in Definition \ref{MND}
is equivalent to their condition.
\end{remark}
Let $\iota :
W=T_0 \setminus \cup_{i=1}^k P_i^{-1}(0)
\longrightarrow T_0$ be the inclusion
map and set $\M = R \iota_* \LL
\in \Dbc (T_0)$.
\begin{theorem}\label{MVTM}
Assume that $\dim \Delta = n-k$,
the parameter vector
$c \in \CC^n$ is nonresonant and
$(P_1, \ldots, P_k)$ is
weakly non-degenerate. Then there exists
an isomorphism
\begin{equation}
H^j_c(T_0; \M ) \simeq H^j(T_0; \M )
\simeq H^j(W ; \LL )
\end{equation}
for any $j \in \ZZ$. Moreover we have the
concentration
\begin{equation}
H^j(W ; \LL ) \simeq
0 \qquad (j \not= n-k).
\end{equation}
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem \ref{VTM}.
Let $\Sigma_0$ be the dual fan of $\Delta$ in $\RR^{n-k}$
and $X$ the (possibly singular) toric
variety associated to it.
For a face $\gamma$ of $\Delta$
we denote by $X_{\gamma} \simeq
(\CC^*)^{\dim \gamma}$ the $T_0$-orbit
associated to $\gamma$. Let $i :
X_{\Delta} \simeq T_0 \hookrightarrow X$ be
the inclusion map. Then by the weak
non-degeneracy of the $k$-tuple
$(P_1, \ldots, P_k)$, for
any $T_0$-orbit $X_{\gamma}$
in $X \setminus X_{\Delta}$ and the closure
$S= \overline{i (\cup_{i=1}^k P_i^{-1}(0))}
\subset X$
of the hypersurface $i(\cup_{i=1}^k
P_i^{-1}(0)) \subset i (T_0)$
in $X$ their intersection
$S \cap X_{\gamma}
\subset X_{\gamma}$ is a normal
crossing divisor in $X_{\gamma}$. In fact
$S$ itself is normal crossing on a neighborhood
of such $X_{\gamma}$ and any irreducible
component of it intersects $X_{\gamma}$
transversally.
Moreover by the nonresonance
of $c \in \CC^n$, for any
$\gamma \prec \Delta$ such that $\dim
\gamma =n-k-1$ the monodromy of the local
system $\LL$ around the codimension-one
$T_0$-orbit $X_{\gamma} \subset X$ in $X$
is non-trivial. Indeed, let $\gamma \prec \Delta$
be such a facet of $\Delta$ and
$\gamma_i \prec \Delta_i$
the faces of $\Delta_i$ ($1 \leq i \leq k$)
associated to $\gamma$ such that
$\gamma = \gamma_1 + \cdots +
\gamma_k$. We denote the convex hull of
$\cup_{i=1}^k (\Delta_i \times \{ e_i \})$
(resp. $\cup_{i=1}^k (\gamma_i \times \{ e_i \})$)
$\subset \RR^{n-k} \times \RR^{k} = \RR^{n}$
in $\RR^{n}$ by $\Delta_1 * \cdots * \Delta_k$
(resp. $\gamma_1 * \cdots * \gamma_k$).
Then $\Delta_1 * \cdots * \Delta_k$ is the
join of $\Delta_1, \ldots, \Delta_k$ and
$\gamma_1 * \cdots * \gamma_k$ is its
facet. We denote by $\Gamma$ the facet of the cone
$K= \RR_+ A$ generated by
$\gamma_1 * \cdots * \gamma_k \subset K$.
Let $\nu \in \ZZ^{n-k}
\setminus \{ 0 \}$ be the primitive
inner conormal vector of the facet
$\gamma$ of $\Delta \subset
\RR^{n-k}$ and for $1 \leq i \leq k$ set
\begin{equation}
m_i= \min_{v \in \Delta_i}
\langle \nu, v \rangle =
\min_{v \in \gamma_i}
\langle \nu, v \rangle \in \ZZ.
\end{equation}
Then the primitive
inner conormal vector
$\widetilde{\nu} \in \ZZ^{n}
\setminus \{ 0 \}$ of the facet
$\Gamma$ of $K \subset \RR^{n}$
is explicitly given by the formula
\begin{equation}
\widetilde{\nu} =
\left( \begin{array}{c}
\nu \\
-m_1 \\
\vdots \\
-m_k
\end{array} \right) \in \ZZ^{n}
\setminus \{ 0 \},
\end{equation}
and the condition
$c=(c_1, \ldots, c_{n-k}, \tl{c_1},
\ldots, \tl{c_k}) \notin
\{ \ZZ^n+ \Lin (\Gamma ) \}$ is
equivalent to the condition
\begin{equation}
m( \gamma ):=
\biggl\langle \nu , \quad
\left( \begin{array}{c}
c_1-1 \\
\vdots \\
c_{n-k}-1
\end{array} \right) \biggr\rangle
- \sum_{i=1}^k m_i \cdot \tl{c_i}
\quad \notin \ZZ.
\end{equation}
Moreover we can easily see that the
order of the (multi-valued) function
\begin{equation}
P_1(x)^{-\tl{c_1}} \cdots P_k(x)^{-\tl{c_k}}
x_1^{c_1-1} \cdots x_{n-k}^{c_{n-k}-1}
\end{equation}
along the codimension-one
$T_0$-orbit $X_{\gamma} \subset X$ in $X$
is equal to $m( \gamma ) \notin \ZZ$.
Finally, by constructing suitable distance functions
as in the proof of
\cite[Lemma 4.2]{E-T-2}, we can show that
\begin{equation}
(Ri_* \M )_p \simeq 0 \qquad \text{for any} \ p \in
X \setminus T_0.
\end{equation}
Namely, there exists an isomorphism
$i_! \M \simeq Ri_* \M$ in $\Dbc (X)$.
Applying the functor $R \Gamma_c(X; \cdot ) =
R \Gamma (X; \cdot )$ to it we obtain the
desired isomorphisms
\begin{equation}
H^j_c(T_0; \M ) \simeq H^j(T_0; \M )
\simeq H^j(W ; \LL )
\end{equation}
for $j \in \ZZ$. Then
the remaining assertion can be proved
as in the proof of Theorem \ref{VTM}.
This completes the proof.
\end{proof}
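The nonresonance check used in the above proof is completely explicit.
The following Python sketch (the polytopes, the facet conormal and the
parameters are hypothetical) computes, for a facet of $\Delta$ with
primitive inner conormal $\nu$, the numbers $m_i$, the lifted conormal
$\widetilde{\nu}$ and the quantity $m(\gamma)$, whose non-integrality
is the condition appearing in the proof.
\begin{verbatim}
import numpy as np

def facet_data(nu, vertex_lists, c, c_tilde):
    nu = np.asarray(nu)
    m = [min(np.dot(nu, v) for v in V) for V in vertex_lists]
    nu_tilde = np.concatenate([nu, -np.asarray(m)])     # conormal of Gamma
    m_gamma = np.dot(nu, np.asarray(c) - 1) - np.dot(m, c_tilde)
    return nu_tilde, m_gamma

nu_tilde, m_gamma = facet_data(
    nu=[1, 0],
    vertex_lists=[[(0, 0), (1, 0), (0, 1)], [(0, 0), (2, 0)]],
    c=[0.3, 0.7], c_tilde=[0.25, 0.5])
print(nu_tilde, m_gamma, m_gamma % 1 != 0)   # nonresonant along this facet?
\end{verbatim}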
As in the case where $k=1$ we have the following
result. For a face $\gamma$ of $\Delta$ let
$L_{\gamma} \simeq \RR^{\dim \gamma}$
be the linear subspace of $\RR^{n-k}$
parallel to the affine span of $\gamma$
in $\RR^{n-k}$ and for $1 \leq i \leq k$
consider the $\gamma_i$-part
$P_i^{\gamma_i}$ of $P_i$ as a function on
$T_{\gamma}= \Spec ( \CC [ L_{\gamma} \cap
\ZZ^{n-k} ] ) \simeq ( \CC^*)^{\dim \gamma}$.
\begin{theorem}\label{SMVTM}
Assume that $\dim \Delta = n-k$ and
for any $1 \leq i \leq k$ the hypersurface
$P_i^{-1}(0) \subset T_0$ of $T_0$ has only
isolated singular points. Assume moreover that
for any face $\gamma$ of
$\Delta$ such that $\dim \gamma <
\dim \Delta = n-k$
and non-empty subset
$J \subset \{ 1,2, \ldots, k \}$
the $k$-tuple of the Laurent polynomials
$(P_1, \ldots, P_k)$ satisfies the following
condition:
\medskip \par
If $J=\{ i \}$ for some $1 \leq i \leq k$
and $\dim \gamma_i= \dim \gamma =
\dim \Delta -1= n-k-1$,
then the hypersurface
$(P_i^{\gamma_i})^{-1}(0)
\subset T_{\gamma}$ of $T_{\gamma}$
has only isolated singular points.
Otherwise,
the subvariety
\begin{equation}
\{ x \in T_0=(\CC^*)^{n-k} \ | \
P_i^{\gamma_i}(x)=0 \ \
(i \in J) \} \subset T_0
\end{equation}
of $T_0$ is a non-degenerate
complete intersection.
\medskip \par
\noindent Then for generic parameter vectors
$c \in \CC^n$ we have the
concentration
\begin{equation}
H^j(W ; \LL ) \simeq
0 \qquad (j \not= n-k).
\end{equation}
\end{theorem}
\begin{proof}
Let $\Sigma_0$ be the dual fan of $\Delta$ in $\RR^{n-k}$
and $X$ the (possibly singular) toric
variety associated to it. Then our assumptions imply
that for any $1 \leq i \leq k$ the hypersurface
$S_i= \overline{i ( P_i^{-1}(0))}
\subset X$ has only stratified
isolated singular points in $X$ and we can prove
the assertion following the proofs of Theorems
\ref{SVTM} and \ref{MVTM}.
\end{proof}
For a face $\gamma$ of
$\Delta$ and $1 \leq i \leq k$ such that $\dim \gamma_i <
\dim \gamma \leq n-k-1$
the hypersurface
$(P_i^{\gamma_i})^{-1}(0)
\subset T_{\gamma}$ of $T_{\gamma}$ is smooth or
has non-isolated singularities. In the latter case,
we cannot prove the concentration in
Theorem \ref{SMVTM} by our methods.
This is the reason why we do not allow
such cases in our assumptions of Theorem \ref{SMVTM}.
However, in the very special case where the Newton polytopes
$\Delta_1, \ldots, \Delta_k$ are similar to each other,
we do not have this problem and obtain
the following simpler result.
\begin{theorem}\label{SSMVTM}
Assume that $\dim \Delta = n-k$, the Newton polytopes
$\Delta_1, \ldots, \Delta_k$ are similar to each other and
for any face $\gamma$ of $\Delta$ and $1 \leq i \leq k$
the hypersurface $(P_i^{\gamma_i})^{-1}(0)
\subset T_{\gamma}$ of $T_{\gamma}$
has only isolated singular points. Assume moreover that
for any face $\gamma$ of
$\Delta$ such that $\dim \gamma <
\dim \Delta = n-k$
and any subset
$J \subset \{ 1,2, \ldots, k \}$ such that
$\sharp J \geq 2$ the subvariety
\begin{equation}
\{ x \in T_0=(\CC^*)^{n-k} \ | \
P_i^{\gamma_i}(x)=0 \ \
(i \in J) \} \subset T_0
\end{equation}
of $T_0$ is a non-degenerate
complete intersection.
Then for generic parameter vectors
$c \in \CC^n$ we have the
concentration
\begin{equation}
H^j(W ; \LL ) \simeq
0 \qquad (j \not= n-k).
\end{equation}
\end{theorem}
\section{Some results on the twisted
Morse theory}\label{sec:4}
In this section, we prepare some
auxiliary results on the twisted
Morse theory which will be used in
Section \ref{sec:5}.
The following proposition
is a refinement of the
results in \cite[page 10]{Esterov}.
See also \cite[Proposition 7.1]{E-T-2}.
\begin{proposition}\label{MIS}
Let $T$ be an algebraic torus $( \CC^*)^n_x$
and $T= \sqcup_{\alpha} Z_{\alpha}$ its
algebraic stratification. In particular
we assume that each stratum
$Z_{\alpha}$ in it is smooth.
Let $h(x)$ be a Laurent
polynomial on $T=( \CC^*)^n_x$
such that the hypersurface $\{ h=0 \} \subset
T$ intersects $Z_{\alpha}$
transversally for
any $\alpha$. For
$a \in \CC^n$ consider the
(possibly multi-valued) function $g_a(x):=
h(x) x^{-a}$ on $T$.
Then there exists a non-empty
Zariski open subset $\Omega \subset \CC^n$ of
$\CC^n$ such that the restriction
$g_a|_{Z_{\alpha}}:
Z_{\alpha} \longrightarrow
\CC$ of $g_a$ to $Z_{\alpha}$
has only isolated
non-degenerate (i.e.
Morse type) critical points
for any $a \in \Omega
\subset \CC^n$ and $\alpha$.
\end{proposition}
\begin{proof}
We may assume that each stratum $Z_{\alpha}$
is connected. We fix a stratum $Z_{\alpha}$
and set $k= \dim Z_{\alpha}$. For a
subset $I \subset \{ 1,2, \ldots, n \}$ such that
$|I|=k= \dim Z_{\alpha}$
denote by $\pi_I : T=( \CC^*)^n_x
\longrightarrow ( \CC^*)^k$ the projection
associated to $I$. We also denote by
$Z_{\alpha, I} \subset Z_{\alpha}$ the
maximal Zariski open subset of $Z_{\alpha}$
such that the restriction of $\pi_I$ to
it is locally biholomorphic.
By the implicit function theorem, the
variety $Z_{\alpha}$ is covered by such
open subsets $Z_{\alpha, I}$. For simplicity,
let us consider the
case where $I= \{ 1,2, \ldots,
k \}
\subset \{ 1,2, \ldots, n \}$. Then
we may regard $g_a|_{Z_{\alpha}}$
locally as a
function $g_{a, \alpha, I}
(x_1, \ldots, x_k)$ on
the Zariski open subset $\pi_I (Z_{\alpha, I})
\subset ( \CC^*)^k$ of the form
\begin{equation}
g_{a, \alpha, I}(x_1, \ldots, x_k) =
\frac{h_{a, \alpha, I}(x_1, \ldots,
x_k)}{x_1^{a_1} \cdots x_k^{a_k}}.
\end{equation}
By our assumption, the hypersurface
$\{ h_{a, \alpha, I}=0 \} \subset
\pi_I (Z_{\alpha, I}) \subset ( \CC^*)^k$
is smooth. Then as in the proof of
\cite[Proposition 7.1]{E-T-2} we can show
that there exists a non-empty
Zariski open subset
$\Omega_{\alpha, I} \subset \CC^n$
such that the (possibly multi-valued) function
$g_{a, \alpha, I}(x_1, \ldots, x_k)$ on
$\pi_I (Z_{\alpha, I}) \subset ( \CC^*)^k$
has only isolated
non-degenerate (i.e. Morse type) critical points
for any $a \in \Omega_{\alpha, I}
\subset \CC^n$. This completes the proof.
\end{proof}
\begin{corollary}\label{NCR}
In the situation of Proposition \ref{MIS},
assume moreover that for the Newton polytope
$NP(h) \subset \RR^n$ of $h$ we have
$\dim NP(h)=n$. Then there exists
$a \in \Int NP(h)$ such that the restriction
$g_a|_{Z_{\alpha}}: Z_{\alpha} \longrightarrow
\CC$ of $g_a$ to $Z_{\alpha}$ has only isolated
non-degenerate (i.e. Morse type) critical points
for any $\alpha$.
\end{corollary}
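A one-dimensional toy version of this statement can be checked
directly on a computer. In the following Python sketch (the Laurent
polynomial $h$ and the exponent $a$ are hypothetical), the critical
points of $g_a(x)=h(x)x^{-a}$ on $\CC^*$ are the roots of
$xh'(x)-ah(x)=0$, and their non-degeneracy is tested by a crude
numerical second derivative.
\begin{verbatim}
import numpy as np

h  = np.poly1d([1, 1, 1])      # h(x) = x^2 + x + 1, Newton polytope [0, 2]
a0 = 1.3                       # a "generic" exponent in Int NP(h)
crit = (np.poly1d([1, 0]) * h.deriv() - a0 * h).roots

def g(z):
    return h(z) * z ** (-a0)

def d2g(z, eps=1e-5):          # numerical second derivative of g at z
    return (g(z + eps) - 2 * g(z) + g(z - eps)) / eps ** 2

for z in map(complex, crit):
    print(z, 'non-degenerate' if abs(d2g(z)) > 1e-6 else 'degenerate')
\end{verbatim}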
Now let $Q_1, \ldots, Q_l$ be
Laurent polynomials on $T=( \CC^*)^{n}$
and for $1 \leq i \leq l$ denote by
$\Delta_i \subset \RR^{n}$ the Newton
polytope $NP(Q_i)$ of $Q_i$.
Set $\Delta = \Delta_1 + \cdots + \Delta_l$.
Then by Corollary \ref{NCR}
we obtain the following result
which might be of independent interest.
\begin{theorem}\label{NVTM}
Let $\LL$ be a non-trivial local system
of rank one on $T=( \CC^*)^{n}$.
Assume that for any $1 \leq i \leq l$
we have $\dim \Delta_i =n$ and the subvariety
\begin{equation}
Z_i = \{ x \in T \ | \
Q_1(x)= \cdots = Q_i(x)=0 \} \subset T
\end{equation}
of $T$ is a non-degenerate complete
intersection.
Then for any $1 \leq i \leq l$ we have the
concentration
\begin{equation}
H^j(Z_i ; \LL ) \simeq
0 \qquad (j \not= n-i).
\end{equation}
Moreover we have
\begin{equation}
\dim H^{n-i} (Z_i ; \LL ) =
\dsum_{\begin{subarray}{c}
m_1,\ldots,m_i \geq 1\\
m_1+\cdots +m_i=n
\end{subarray}}\Vol_{\ZZ}(
\underbrace{\Delta_1,
\ldots,\Delta_1}_{\text{
$m_1$-times}},\ldots,
\underbrace{\Delta_i,
\ldots,\Delta_i}_{\text{$m_i$-times}}).
\end{equation}
\end{theorem}
\begin{proof}
We prove the assertion
by induction on $i$. For $i=0$
we have $Z_i=T$ and the assertion is
obvious. Since $Z_i \subset T$ is affine,
by Artin's vanishing theorem we have
the concentration
\begin{equation}\label{avt}
H^j(Z_i ; \LL ) \simeq
0 \qquad (j > n-i= \dim Z_i).
\end{equation}
On the other hand, by Corollary \ref{NCR}
there exists
$a_i \in \Int NP(Q_i) \subset \RR^n$
such that the real-valued function
\begin{equation}
g_i: Z_{i-1} \longrightarrow \RR, \qquad
x \longmapsto
| Q_i(x) x^{-a_i} |
\end{equation}
has only isolated non-degenerate (Morse type)
critical points. Note that
the Morse index of $g_i$
at each critical point is
$\dim Z_{i-1} = n-i+1$.
Let $\Sigma_0$ be the dual fan of the
$n$-dimensional polytope $\Delta$ in
$\RR^n$ and $\Sigma$ its smooth
subdivision. We denote by $X_{\Sigma}$
the toric variety associated to $\Sigma$.
Then $X_{\Sigma}$ is a smooth compactification
of $T$ such that $D= X_{\Sigma} \setminus T$
is a normal crossing divisor in it.
By our assumption, the closures
$\overline{Z_{i-1}}, \overline{Z_{i}}
\subset X_{\Sigma}$
of $Z_{i-1}, Z_i$
in $X_{\Sigma}$ are smooth.
Moreover they intersect $D$ and its strata
transversally.
Let $U$ be a sufficiently small tubular
neighborhood of $\overline{Z_i}
\cap D$ in $\overline{Z_{i-1}}$. Then by
\cite[Section 3.5]{Zaharia} (see also
\cite{L-S}), for any $t \in \RR_+$
there exist isomorphisms
\begin{equation}
H^j( \{ g_i<t \} ; \LL ) \simeq
H^j( \{ g_i<t \} \setminus
U ; \LL )
\qquad (j \in \ZZ ).
\end{equation}
Moreover the level set
$g_i^{-1}(t) \cap (Z_{i-1} \setminus
U)$ of $g_i$ in $Z_{i-1} \setminus
U$ is compact in $Z_{i-1}$
and intersects $\partial U$ transversally
for any $t \in \RR_+$.
Hence for $t \gg 0$ we
have isomorphisms
\begin{equation}
H^j( \{ g_i<t \} ; \LL )
\simeq H^j(Z_{i-1} ; \LL )
\qquad (j \in \ZZ ).
\end{equation}
Moreover for $0 < t \ll 1$
we have isomorphisms
\begin{equation}
H^j( \{ g_i<t \} ; \LL )
\simeq H^j(Z_{i} ; \LL )
\qquad (j \in \ZZ ).
\end{equation}
When $t \in \RR$ decreases passing through
one of the critical values of $g_i$, only
the dimensions of
$H^{n-i+1}( \{ g_i<t \} ; \LL )$ and
$H^{n-i}( \{ g_i<t \} ; \LL )$
may change and
the other cohomology groups
$H^j( \{ g_i<t \} ; \LL )$
$(j \not= n-i+1, n-i)$
remain the same. Then by our induction
hypothesis for $i-1$ and \eqref{avt} we obtain
the desired concentration
\begin{equation}
H^j(Z_i ; \LL ) \simeq
0 \qquad (j \not= n-i).
\end{equation}
Moreover the last assertion follows
from Theorem \ref{thm:2-14}.
This completes the proof.
\end{proof}
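The normalized mixed volumes in the dimension formula above can be
computed by brute force for small examples. The following Python sketch
(with hypothetical vertex data) takes $\Vol_{\ZZ}$ to be $n!$ times the
Euclidean mixed volume (conventions vary) and evaluates it by
inclusion-exclusion over volumes of Minkowski sums; this is only
practical in low dimensions.
\begin{verbatim}
import numpy as np
from itertools import combinations, product
from scipy.spatial import ConvexHull

def vol_of_sum(polys):
    """Euclidean volume of the Minkowski sum of the given polytopes."""
    pts = np.array([np.sum(c, axis=0) for c in product(*polys)])
    try:
        return ConvexHull(pts).volume
    except Exception:                 # lower-dimensional sum
        return 0.0

def vol_Z(polys):
    """n!-normalized mixed volume of an n-tuple of polytopes in R^n."""
    n = len(polys)
    return round(sum((-1) ** (n - len(S)) * vol_of_sum([polys[i] for i in S])
                     for r in range(1, n + 1)
                     for S in combinations(range(n), r)))

square  = [np.array(v) for v in [(0, 0), (1, 0), (0, 1), (1, 1)]]
simplex = [np.array(v) for v in [(0, 0), (1, 0), (0, 1)]]
print(vol_Z([square, simplex]))       # 2 = 2! * (mixed volume)
\end{verbatim}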
From now on, assume also that the $l$-tuple
$(Q_1, \ldots, Q_l)$ is
strongly non-degenerate and
$\dim \Delta_l=n$. Let $T= \sqcup_{\alpha}
Z_{\alpha}$ be the algebraic stratification
of $T$ associated to the hypersurface
$S= \cup_{i=1}^{l-1} Q_i^{-1}(0) \subset
T$ and set $M=T \setminus S$. Then
by Corollary \ref{NCR}
there exists $a \in \Int ( \Delta_l)$
such that the restriction
of the (possibly multi-valued) function
$Q_l(x)x^{-a}$
to $Z_{\alpha}$ has only isolated
non-degenerate (i.e. Morse type)
critical points
for any $\alpha$. In particular, it has only
stratified isolated singular points.
We fix such $a \in \Int ( \Delta_l)$
and define a real-valued function
$g: T \longrightarrow \RR_+$ by
$g(x)=| Q_l(x)x^{-a} |$.
For $t \in \RR_+$ we set also
\begin{equation}
M_t= \{ x \in M=T \setminus S \ | \
g(x)<t \} \subset M.
\end{equation}
Then we have the following result.
\begin{lemma}\label{VL}
Let $\LL$ be a local system on
$M=T \setminus S$.
Then for any $c >0$
there exists a sufficiently small
$0 < \e \ll 1$ such that we have the
concentration
\begin{equation}
H^j( M_{c+ \e}, M_{c- \e} ; \LL ) \simeq
0 \qquad (j \not= n).
\end{equation}
\end{lemma}
\begin{proof}
Let $\Sigma_0$ be the dual fan of the
$n$-dimensional polytope $\Delta$ in
$\RR^n$ and $\Sigma$ its smooth
subdivision. We denote by $X_{\Sigma}$
the toric variety associated to $\Sigma$.
Then $X_{\Sigma}$ is a smooth compactification
of $T$ such that $D= X_{\Sigma} \setminus T$
is a normal crossing divisor in it.
By the strong non-degeneracy
of $(Q_1, \ldots, Q_l)$, the hypersurface
$\overline{Q_l^{-1}(0)} \subset X_{\Sigma}$
intersects $D$ and its strata transversally.
Let $U$ be a sufficiently small tubular
neighborhood of $\overline{Q_l^{-1}(0)}
\cap D$ in $X_{\Sigma}$ and for $t \in \RR_+$
set $M_t^{\prime}=M_t \setminus
U$. Then by
\cite[Section 3.5]{Zaharia},
for any $t \in \RR_+$ there exist isomorphisms
\begin{equation}
H^j( M_t ; \LL ) \simeq
H^j( M_t^{\prime} ; \LL )
\qquad (j \in \ZZ ).
\end{equation}
Moreover the level set
$g^{-1}(t) \cap (T \setminus
U)$ of $g$ in $T \setminus
U$ is compact in $T$
and intersects $\partial U$ transversally
for any $t \in \RR_+$.
For $c>0$ let $p_1, \ldots, p_r
\in T \setminus g^{-1}(0)=
T \setminus Q_l^{-1}(0)$
be the stratified isolated singular
points of the function
$h(x)=Q_l(x)x^{-a}$ in $T$
such that $g(p_i)=|h(p_i)|=c$.
Note that we have
\begin{equation}
g(x)=|h(x)|= \exp [ {\rm Re}
\{ \log h(x) \} ].
\end{equation}
Then there exist small open balls
$B_i$ centered at $p_i$ in $T$ and
$0 < \e \ll 1$ such
that we have isomorphisms
\begin{equation}
H^j( M_{c+ \e}^{\prime},
M_{c- \e}^{\prime} ; \LL )
\simeq \bigoplus_{i=1}^r
H^j( B_i \cap M_{c+ \e},
B_i \cap M_{c- \e} ; \LL )
\qquad (j \in \ZZ ).
\end{equation}
For $1 \leq i \leq r$ by taking a
local branch $\log h$ of the
logarithm of the function $h \not= 0$ on a
neighborhood of $p_i \in T \setminus
h^{-1}(0)$ we set
$f_i= \log h - \log h(p_i)$.
Then $f_i$ has also a stratified
isolated singular point at $p_i$.
Let $F_i \subset B_i$ be the Milnor fiber
of $f_i$ at $p_i \in f_i^{-1}(0)$.
Then for any $1 \leq i \leq r$
by shrinking $B_i$
if necessary we can
easily prove the isomorphisms
\begin{equation}
H^j( B_i \cap M_{c+ \e},
B_i \cap M_{c- \e} ; \LL )
\simeq
H^j( B_i \setminus S,
F_i \setminus S ; \LL )
\qquad (j \in \ZZ ).
\end{equation}
Let $j : M=T \setminus S \hookrightarrow T$
be the inclusion. Since the Milnor fibers
$F_i \subset B_i$ intersect
each stratum $Z_{\alpha}$
transversally, we have also isomorphisms
\begin{equation}
H^j( B_i \setminus S,
F_i \setminus S ; \LL )
\simeq H^{j-1} \phi_{f_i}
( Rj_* \LL )_{p_i}
\qquad (j \in \ZZ ),
\end{equation}
where $\phi_{f_i}$ are
Deligne's vanishing cycle
functors. Hence by (the proof of)
\cite[Proposition 6.1.1]{Dimca}
the assertion follows from
\begin{equation}
{\rm supp} \ \phi_{f_i}
( Rj_* \LL ) \subset \{ p_i \}
\qquad (1 \leq i \leq r)
\end{equation}
and the fact that
$Rj_* \LL$ and
$\phi_{f_i}( Rj_* \LL )$
are perverse sheaves
(up to some shifts). This completes
the proof.
\end{proof}
\section{A new vanishing theorem}\label{sec:5}
Now let $P_1, \ldots, P_k$ be
Laurent polynomials on $T_0=( \CC^*)^{n-k}$
and for $1 \leq i \leq k$ denote by
$\Delta_i \subset \RR^{n-k}$ the Newton
polytope $NP(P_i)$ of $P_i$.
Set $\Delta = \Delta_1 + \cdots + \Delta_k$.
Let us set $W=T_0 \setminus
\cup_{i=1}^k P_i^{-1}(0)$
and for $(c, \tl{c} )
=(c_1, \ldots, c_{n-k}, \tl{c_1},
\ldots, \tl{c_k}) \in \CC^n$
consider the local system
\begin{equation}
\LL = \CC_{W}
P_1(x)^{\tl{c_1}} \cdots P_k(x)^{\tl{c_k}}
x_1^{c_1} \cdots x_{n-k}^{c_{n-k}}
\end{equation}
on $W$.
\begin{theorem}\label{NTM}
Assume that the $k$-tuple of the
Laurent polynomials $(P_1, \ldots, P_k)$
is strongly non-degenerate,
$(c, \tl{c} ) =
(c_1, \ldots, c_{n-k}, \tl{c_1},
\ldots, \tl{c_k}) \notin \ZZ^n$
and for any $1 \leq i \leq k$ we have
$\dim \Delta_i =n-k$.
Then we have the concentration
\begin{equation}
H^j(W ; \LL ) \simeq
0 \qquad (j \not= n-k).
\end{equation}
\end{theorem}
\begin{proof}
Set $T=T_0 \times
( \CC^*)^k_{t_1, \ldots, t_k}
\simeq ( \CC^*)^n_{x,t}$ and consider the
Laurent polynomials
\begin{equation}
\tl{P_i}(x,t)=t_i -P_i(x)
\qquad (1 \leq i \leq k)
\end{equation}
on $T$. For $1 \leq i \leq k$ we set also
\begin{equation}
Z_i = \{ (x,t) \in T
\ | \ \tl{P_1}(x,t)= \cdots
= \tl{P_i}(x,t)=0 \}.
\end{equation}
We define a local system $\tl{\LL}$ on
$T$ by
\begin{equation}
\tl{\LL} =
\CC_T x_1^{c_1}
\cdots x_{n-k}^{c_{n-k}}
t_1^{\tl{c_1}} \cdots t_k^{\tl{c_k}}.
\end{equation}
Then $Z_k \simeq W$
and we have isomorphisms
\begin{equation}
H^j(W ; \LL ) \simeq
H^j (Z_k ; \tl{\LL} )
\qquad (j \in \ZZ ).
\end{equation}
First let us consider the case where
$ \tl{c} = ( \tl{c_1},
\ldots, \tl{c_k}) \notin \ZZ^k$.
In this case, without loss of generality
we may assume that $\tl{c_k} \notin \ZZ$.
Then by the K\"unneth formula,
for $i=1,2, \ldots, k-1$ we have the
vanishings
\begin{equation}
H^j (Z_i ; \tl{\LL} )
\simeq 0 \qquad (j \in \ZZ ).
\end{equation}
Moreover we can naturally identify $Z_{k-1}
\subset T$ with $(T_0 \setminus
\cup_{i=1}^{k-1} P_i^{-1}(0)) \times
\CC^*_{t_k}$. Consider $\tl{P_k}$ as a
Laurent polynomial on $T_1=T_0 \times
\CC^*_{t_k} \simeq ( \CC^*)^{n-k+1}$.
Note that we have $\dim NP(\tl{P_k})
=n-k+1= \dim T_1$.
By taking a sufficiently generic
\begin{equation}
( a_1, \ldots, a_{n-k},
a_{n-k+1} ) \in \Int NP(\tl{P_k})
\subset \RR^{n-k+1}
\end{equation}
we define a real-valued function $g$ on
$T_1=T_0 \times \CC^*_{t_k}$ by
\begin{equation}
g(x, t_k)= \left|
\tl{P_k} (x, t_k) \times
x_1^{- a_1} \cdots
x_{n-k}^{- a_{n-k}}
t_k^{- a_{n-k+1}} \right|.
\end{equation}
Then by applying Lemma \ref{VL} to the
Morse function $g: T_1=T_0 \times \CC^*
\longrightarrow \RR$ and arguing
as in the proof of Theorem \ref{NVTM}
we obtain the desired concentration
\begin{equation}
H^j (Z_k ; \tl{\LL} )
\simeq 0 \qquad (j \not= n-k).
\end{equation}
The proof for the remaining case where
$(c, \tl{c} )=
(c_1, \ldots, c_{n-k}, \tl{c_1},
\ldots, \tl{c_k}) \notin \ZZ^n$ and
$ \tl{c} = ( \tl{c_1},
\ldots, \tl{c_k}) \in \ZZ^k$
is similar. In this case, $Z_1 \subset T$
is isomorphic to the product
$Z_1^{\prime} \times ( \CC^*)^{k-1}$
for a hypersurface $Z_1^{\prime}$ in
$T_0 \times \CC^*_{t_1}$ and $\tl{\LL}$
is isomorphic to the pull-back of a
local system on $T_0
\times \CC^*_{t_1}$.
Hence by the K\"unneth
formula and the proof
of Theorem \ref{NVTM}
we obtain the concentration
\begin{equation}
H^j (Z_1 ; \tl{\LL} )
\simeq 0 \qquad (j \not= n-k, \ldots, n-1).
\end{equation}
Repeating this argument with the help of
Lemma \ref{VL} and the proof
of Theorem \ref{NVTM} we obtain also
\begin{equation}
H^j(W ; \LL ) \simeq
H^j (Z_k ; \tl{\LL} )
\simeq 0 \qquad (j \not= n-k, \ldots, n-1).
\end{equation}
Then the assertion
is obtained by applying
Artin's vanishing theorem to
the $(n-k)$-dimensional affine variety
$Z_k \subset T$. This completes the proof.
\end{proof}
| {
"timestamp": "2014-10-23T02:08:17",
"yymm": "1403",
"arxiv_id": "1403.0103",
"language": "en",
"url": "https://arxiv.org/abs/1403.0103",
"abstract": "We prove some vanishing theorems for the cohomology groups of local systems associated to Laurent polynomials. In particular, we extend one of the results of Gelfand-Kapranov-Zelevinsky into various directions.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "On vanishing theorems for local systems associated to Laurent polynomials",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969703688159,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7099044310985926
} |
https://arxiv.org/abs/0704.3564 | The critical temperature for the BCS equation at weak coupling | For the BCS equation with local two-body interaction $\lambda V(x)$, we give a rigorous analysis of the asymptotic behavior of the critical temperature as $\lambda \to 0$. We derive necessary and sufficient conditions on $V(x)$ for the existence of a non-trivial solution for all values of $\lambda>0$. | \section{Introduction}
The BCS model has played a prominent role in condensed matter physics
in the fifty years since its introduction \cite{BCS}. Originally
introduced as a model for electrons displaying superconductivity, it
has recently also been used to describe dilute cold gases of fermionic
atoms in the case of weak interactions among the atoms
\cite{Leg,NS,rand,andre,parish,Chen,zwerger}. We will not be
concerned here with a mathematical justification of the approximations
leading to the BCS model, but rather with an investigation of its
precise predictions.
We consider the BCS equation for a Fermi gas at chemical potential $\mu
\in \mathbb{R}$ and temperature $T>0$, with local two-body interaction
$2\lambda V(x)$. Here, $\lambda>0$ denotes a coupling constant, and
the factor $2$ is introduced for convenience. Because of the many
different applications of the BCS equation, it is important to keep the
discussion as general as possible. Our only assumption on the
interaction potential $V$ will be that it is real-valued and $V\in
L^1(\mathbb{R}^3)\cap L^{3/2}(\mathbb{R}^3)$.
It was shown in \cite{HHSS} that the existence of a non-trivial
solution to the BCS gap equation
\begin{equation}\label{bcse}
\Delta(p) = -\frac \lambda{(2\pi)^{3/2}} \int_{\mathbb{R}^3} \hat V(p-q)
\frac{\Delta(q)}{E(q)} \tanh \frac{E(q)}{2T} \, dq
\end{equation}
with $E(p)= \sqrt{(p^2-\mu)^2 + |\Delta(p)|^2}$ at some temperature
$T>0$ is {\it equivalent} to the fact that a certain {\it linear}
operator has a negative eigenvalue. Here, $\hat V(p)=(2\pi)^{-3/2}
\int V(x) e^{-ipx} dx$ denotes the Fourier transform of $V(x)$. In
particular, it was shown that this property holds for $T$ less than a
certain {\it critical temperature}, which we denote by $T_c$, whereas
there are no non-trivial solutions to Eq.~\eqref{bcse} for $T\geq T_c$.
According to the usual interpretation of solutions to the
BCS gap equation, the system displays superfluid behavior for all
temperatures $T<T_c$, while it is in a normal phase for $T\geq T_c$.
The analysis in \cite{HHSS} shows that $T_c$ is non-zero for purely
attractive (i.e., non-positive) $V$, and exponentially small in $\lambda$.
The fact that the critical temperature in the {\it non-linear} BCS
equation can be expressed in terms of spectral properties of a linear
operator allows for a more thorough investigation of its properties.
This is the purpose of this paper. In particular, we shall be
concerned here with the asymptotic behavior of $T_c$ at weak coupling,
i.e., for small $\lambda$. We shall derive necessary and sufficient
conditions on $V$ for the positivity of $T_c$ for {\it all} $\lambda>0$, as
well as its precise asymptotics as $\lambda\to 0$. The precise
statement of our results is given in Theorem~\ref{asymptofcrittemp}
below.
The linear operator one is led to analyze is of the form
$K_{T,\mu}+\lambda V$ where $K_{T,\mu}$ is a multiplication operator
in momentum space that represents an `effective' kinetic energy. By a
modification of the Birman-Schwinger principle we need to study the
diverging part of the compact operator
\begin{equation}\label{eq:bsop}
(\sgn V) |V|^{1/2} K_{T,\mu}^{-1} |V|^{1/2}
\end{equation}
as $T\to 0$. Note that, if $V$ is not of definite sign, then the latter
operator is not self-adjoint and standard perturbation arguments based on
the variational principle will fail. Still we are able to give a
variational characterization for the leading behavior of the critical
temperature in the weak coupling limit.
Our analysis is somewhat similar in spirit to that of the lowest
eigenvalue of the Schr\"odinger operator $p^2+\lambda V$ in \emph{two}
space dimensions, see~\cite{simon}. This latter case is considerably simpler,
however, since $p^2$ has a unique minimum at $p=0$, whereas
$K_{T,\mu}(p)$ takes its minimal value on the Fermi sphere $p^2=\mu$.
Technically, this is reflected in the fact that the singular part of
the Birman-Schwinger operator $(\sgn V) |V|^{1/2} (p^2+T)^{-1}
|V|^{1/2}$ is of rank one in contrast to that of \eqref{eq:bsop},
which is of infinite rank. In particular, the difficulties stemming
from the non-selfadjointness are not present in the case of $p^2+\lambda
V$.
We would like to emphasize that our approach is not restricted to the
kinetic energy $K_{T,\mu}$ appearing in the BCS model, but can be adapted
to any symbol vanishing on a manifold of codimension one or higher.
Operators of this form arise naturally in various fields of Mathematical
Physics, e.g. in the quantum-mechanical description of particles in a
homogeneous magnetic field or in the analysis of trapped modes in
elasticity theory \cite{F,FW,LSW,Sob}.
\section{Main results and discussion}
According to the analysis in \cite{HHSS}, the critical temperature
in the BCS model is, in appropriate units, given by the following
expression.
\begin{defn}
For $\mu \in \mathbb{R}$ and $T>0$, let $K_{T,\mu}$ be the multiplication operator in momentum space
$$
K_{T,\mu}(p) = \left( p^2 -\mu\right) \frac { e^{(p^2-\mu)/T}+1
}{e^{(p^2-\mu)/T}-1}\, .
$$
Let $V \in L^1(\mathbb{R}^3) \cap L^{3/2}(\mathbb{R}^3)$ be real-valued. The critical
temperature in the BCS model is given by
\begin{equation}\label{deftc}
T_c(V) = \inf \left\{ T>0\, : \, \infspec \left( K_{T,\mu} + V \right) \geq
0\right\}\,.
\end{equation}
\end{defn}
\medskip
More precisely, it was shown in \cite{HHSS} that Eq.~(\ref{bcse}) has
a non-trivial solution for $T<T_c(\lambda V)$, whereas for $T\geq
T_c(\lambda V)$ it does not. Note that $K_{T,\mu} \geq 2 T$, and that the
essential spectrum of $K_{T,\mu} + V$ is $[2T,\infty)$. Hence, in case
$T_c(V)>0$, it is the largest $T$ such that $K_{T,\mu}+V$ has a zero
eigenvalue. Note also that $K_{T,\mu}$ becomes $|p^2-\mu|$ as $T\to
0$.
We assume that $\mu>0$ henceforth. For weak potentials $V$, the
critical temperature is determined by the behavior of the potential on
the Fermi sphere $\Omega_\mu$, the sphere in momentum space with
radius $\sqrt{\mu}$. We denote the Lebesgue measure on $\Omega_\mu$
by $d\omega$.
Let $\V_\mu: \,
L^2(\Omega_\mu) \to L^2(\Omega_\mu)$ be the self-adjoint operator
\begin{equation}\label{defvm}
\big(\V_\mu u\big)(p) =
\frac 1{(2\pi)^{3/2}} \frac 1{\sqrt{\mu}}\int_{\Omega_\mu}\hat V(p-q) u(q) \,d\omega(q)\,.
\end{equation}
We note that $\V_\mu$ is non-vanishing if $\hat V(p)$ does not vanish
identically for $|p|\leq 2\sqrt\mu$. Since $V\in L^1(\mathbb{R}^3)$ by
assumption, $\hat V$ is a bounded continuous function, and hence
$\V_\mu$ is a Hilbert-Schmidt operator. It is, in fact, trace class,
as will be shown below, and its trace equals
$\frac{\sqrt{\mu}}{2\pi^2} \int V(x)dx$.
Let $a_{\mu}(V)= \infspec(\V_\mu)$ denote the infimum of the spectrum
of $\V_\mu$. Since $\V_\mu$ is compact, we have $a_\mu(V)\leq 0$.
Note that, in particular, $a_\mu(V)$ is negative if the trace of
$\V_\mu$ is negative, that is, $a_\mu(V)<0$ if $\hat V(0) =
(2\pi)^{-3/2} \int V(x) dx < 0$. Moreover, by considering a trial
function that is supported on two small sets on the Fermi sphere
separated by a distance $|p|$, it is easy to see that $a_\mu(V)<0$ if
$|\hat V(p)|>\hat V(0)$ for some $p$ with $|p|<2\sqrt{\mu}$.
Our main result concerning the critical temperature (\ref{deftc}) is as follows.
\begin{Theorem}\label{asymptofcrittemp}
Let $V\in L^{3/2}(\mathbb{R}^3)\cap L^1(\mathbb{R}^3)$ be real-valued, and let
$\lambda>0$.
\begin{itemize}
\item[(i)] Assume that $a_{\mu}(V)<0$. Then $T_c(\lambda V)$ is non-zero for
all $\lambda >0$, and
\begin{equation}\label{crittempformula}
\lim_{\lambda\to 0} \lambda\, \ln \frac{\mu}{T_c(\lambda V)} =
-\frac{1}{a_{\mu}(V)} \,.
\end{equation}
\item[(ii)] Assume that $a_{\mu}(V)= 0$. If $T_c(\lambda V)$ is
non-zero, then $\ln (\mu/{T_c(\lambda V)})\geq c \lambda^{-2}$ for some $c>0$ and
small $\lambda$.
\item[(iii)] If there exists an $\epsilon>0$ such that $a_{\mu}(V-\epsilon|V|)= 0$,
then $T_c(\lambda V) = 0$ for small enough $\lambda$.
\end{itemize}
\end{Theorem}
Note that Eq.~\eqref{crittempformula} implies that, in case
$a_\mu(V)<0$, the critical temperature has the asymptotic behavior
$$
T_c(\lambda V) \sim \mu e^{ 1/(\lambda a_{\mu}(V))}$$ in the limit of
small $\lambda$.
On the other hand, if $a_\mu(V)=0$ then part (ii) of Theorem~\ref{asymptofcrittemp} implies that
$T_c(\lambda V)$ is at most as big as
$e^{-\const/\lambda^2}$ for some positive constant. If $a_\mu(V)$
remains zero if $\epsilon |V(x)|$ is subtracted from $V(x)$, then
$T_c(\lambda V)=0$ for small enough $\lambda$, and there is no
superfluid phase at weak coupling.
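To get a feeling for the exponential smallness in the first case, the
following short Python computation (with a purely hypothetical value
$a_\mu(V)=-0.05$) evaluates the leading-order expression
$\mu\, e^{1/(\lambda a_\mu(V))}$ for a few couplings.
\begin{verbatim}
import numpy as np
mu, a_mu = 1.0, -0.05           # hypothetical values, for illustration only
for lam in (0.5, 0.25, 0.1):
    print(lam, mu * np.exp(1.0 / (lam * a_mu)))
\end{verbatim}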
Although we restrict our attention to local potentials $V$ here, we
remark that a similar analysis can be applied in the case of non-local
potentials as well.
\subsection{Radial Potentials.}
In the special case of radial potentials $V$, depending only on
$|x|$, the spectrum of $\V_\mu$ can be determined more explicitly.
Since $\V_\mu$ commutes with rotations in this case, all its
eigenfunctions are given by spherical harmonics. For $\ell$ a
non-negative integer, the eigenvalues of $\V_\mu$ are then given by
$\frac{\sqrt{\mu}}{2\pi^2} \int V(x) |j_\ell(\sqrt\mu |x|)|^2 dx$,
with $j_\ell$ denoting the spherical Bessel functions. These
eigenvalues are $(2\ell+1)$-fold degenerate. In particular, we then
have
$$
a_\mu(V) = \inf_{\ell \in \mathbb{N}} \, \frac{\sqrt{\mu}}{2\pi^2} \int V(x)
\left|j_\ell(\sqrt\mu |x|)\right|^2 dx
$$
in the case of radial potentials $V$.
We remark that
$\sum_{\ell\in\mathbb{N}}(2\ell+1)|j_\ell(r)|^2=1$, hence the expression for
the trace of $\V_\mu$ stated above is recovered.
If $\hat V$ is non-positive, it is easy to see that the infimum is
attained at $\ell=0$. This follows since the lowest eigenfunction can
be chosen non-negative in this case, and is thus not orthogonal to the
constant function. Since $j_0(r)=\sin(r)/r$, this means that $a_\mu(V)
=(2\pi^2 \sqrt\mu)^{-1} \int V(x) \frac{\sin^2(\sqrt{\mu}|x|)}{|x|^2}
dx$ for radial potentials $V$ with non-positive Fourier transform.
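For a concrete radial potential these eigenvalues are one-dimensional
integrals and can be evaluated numerically. The following Python sketch
(the Gaussian potential and the truncation at $\ell\leq 5$ are
hypothetical choices) approximates $a_\mu(V)$ by minimizing
$\frac{\sqrt{\mu}}{2\pi^2}\int V(x)|j_\ell(\sqrt\mu|x|)|^2dx$ over the
first few angular momenta; for this potential $\hat V\leq 0$, so the
minimum is expected at $\ell=0$.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import quad

def eigenvalue(V, mu, l):
    # sqrt(mu)/(2 pi^2) * int_{R^3} V(|x|) j_l(sqrt(mu)|x|)^2 dx, radially
    integrand = lambda r: V(r) * spherical_jn(l, np.sqrt(mu)*r)**2 * 4*np.pi*r**2
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return np.sqrt(mu) / (2 * np.pi**2) * val

V  = lambda r: -np.exp(-r**2)        # attractive Gaussian (hypothetical)
mu = 1.0
evs = [eigenvalue(V, mu, l) for l in range(6)]
print(evs, 'a_mu(V) ~', min(evs))
\end{verbatim}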
In the limit of small $\mu$ we can use the asymptotics $j_\ell(r)
\approx r^\ell/(2\ell+1)!!$ to observe that, in case $\int V(x)dx<0$,
$a_\mu(V) \approx \frac{\sqrt\mu}{2\pi^2} \int V(x) dx$ as $\mu\to 0$.
Note that $(\lambda/4\pi)\int V(x) dx$ is the first Born approximation
to the {\it scattering length} of $2\lambda V$, which we denote by
$a_0$. Thus, replacing $\lambda a_\mu(V)$ by $2\sqrt\mu a_0/\pi$ and
writing $\mu = k_{\rm f}^2$, we arrive at the expression
$T_c\sim e^{\pi/(2k_{\rm f}a_0)}$ for the critical temperature, which
is well established in the physics literature \cite{gorkov,NS,zwerger}.
In the remainder of this paper, we shall give the proof of
Theorem~\ref{asymptofcrittemp}.
\section{Proof of Theorem~\ref{asymptofcrittemp}}
Note that in case $T_c>0$, the essential spectrum of
$K_{T_c,\mu}+\lambda V$ starts at $2T_c>0$, and hence $T_c$ is the
largest $T$ such that $0$ is an eigenvalue of $K_{T,\mu}+\lambda V$ in
this case. Therefore there exists an eigenstate $|\psi\rangle\in
L^2(\mathbb{R}^3)$ such that $K_{T_c,\mu} |\psi\rangle = -\lambda V
|\psi\rangle$. For a (not necessarily sign-definite) potential $V(x)$ let
us use the notation
\begin{equation*}
V(x)^{1/2} = (\sgn V(x)) |V(x)|^{1/2} \,.
\end{equation*}
The Birman-Schwinger principle then implies that $|\varphi\rangle =
V^{1/2} |\psi\rangle$ satisfies $B_{T_c} |\varphi\rangle = -
|\varphi\rangle$, where
\begin{equation}\label{defofbt}
B_T = \lambda V^{1/2}K_{T,\mu}^{-1}|V|^{1/2}\,.
\end{equation}
Conversely, if $B_T|\varphi\rangle=-|\varphi\rangle$ and
$|\psi\rangle=K_{T,\mu}^{-1}|V|^{1/2}|\varphi\rangle$, then
$|\psi\rangle\in L^2(\mathbb{R}^3)$ and $K_{T,\mu}|\psi\rangle=-\lambda
V|\psi\rangle$. The existence of a zero eigenvalue for $K_{T,\mu}
+\lambda
V$ is thus equivalent to the fact that $B_T$ has an eigenvalue $-1$.
Note that $B_T$ is not a self-adjoint operator, however.
With the aid
of the Birman-Schwinger operator $B_T$, we can thus state the
following alternative characterization of the critical temperature
$T_c(\lambda V)$.
\begin{Lemma}\label{l1}
For any $T>0$, the Birman-Schwinger operator $B_T$ defined in
(\ref{defofbt}) is Hilbert-Schmidt and has real spectrum. If
$T_c(\lambda V)>0$, the smallest eigenvalue of $B_{T_c}$ equals
$-1$. Moreover, in case $T_c(\lambda V)=0$, the spectrum of $B_T$ is
contained in $(-1,\infty)$ for any $T>0$.
\end{Lemma}
\begin{proof}
The Hilbert-Schmidt property follows from the
Hardy-Littlewood-Sobolev inequality \cite[Thm.~4.3]{LL}, using that
$V\in L^{3/2}(\mathbb{R}^3)$ and that $K_{T,\mu} \geq \const (1+
p^2)$. Moreover, $B_T$ is the product of a self-adjoint operator
(multiplication by $\sgn(V(x))$) and a non-negative operator, hence it has
real spectrum.
We have already shown above that $-1$ is an eigenvalue of $B_{T_c}$
in case $T_c>0$. Moreover, because of strict monotonicity of
$K_{T,\mu}$ in $T$, $-1$ is not an eigenvalue of $B_T$ for all $T >
T_c$. This implies that $B_{T_c}$ has no eigenvalue less than $-1$,
for otherwise there would be a $T>T_c$ for which $B_T$ has
eigenvalue $-1$ since the eigenvalues of $B_T$ depend continuously
on $T$ and approach $0$ as $T\to \infty$.
In the same way, one argues that $B_T$ does not have an eigenvalue
less than or equal to $-1$ if $T_c=0$.
\end{proof}
Let $J$ be the unitary operator that multiplies by $\sgn(V(x))$. To be
precise, we define $\sgn(V(x))=1$ in case $V(x)=0$. Moreover, let $X$
denote the self-adjoint operator on $L^2(\mathbb{R}^3)$ with integral kernel
$$
X(x,y)= |V(x)|^{1/2} \frac1{2\pi^2} \frac{\sin\sqrt\mu|x-y|}{|x-y|} |V(y)|^{1/2}\,.
$$
We note that $X$ is a non-negative trace-class operator, with trace $\tr [X]
=\frac{\sqrt\mu}{2\pi^2} \int |V(x)| dx$. Hence also $JX$ is trace-class, and
$\tr [JX] =\frac{\sqrt\mu}{2\pi^2}\int V(x) dx$.
Define $Y_T$ by
\begin{equation}\label{defY}
B_T = \lambda \ln\left(1+\frac\mu{2T}\right) J X + \lambda Y_T \,.
\end{equation}
We have
\begin{Lemma}\label{decomp}
Let $V\in L^1(\mathbb{R}^3)\cap L^{3/2}(\mathbb{R}^3)$. Then, for any $T>0$, the
operator $Y_T$ defined in (\ref{defY}) is Hilbert-Schmidt, and its
Hilbert-Schmidt norm is bounded
uniformly in $T$, i.e., $\sup_{T>0} \tr[ Y_T^\dagger Y_T]\! <\! \infty$.
\end{Lemma}
The proof of this lemma will be given in the next section.
Lemma~\ref{decomp} shows that the singular part of the operator
$B_T$ as $T\to 0$ is given by $JX$. This observation will
enable us to recover the exact asymptotics of $T_c(\lambda V)$ as
$\lambda \to 0$.
The operator $JX$ is closely related to $\V_\mu$ defined in
(\ref{defvm}). In fact, the two operators are isospectral.
\begin{Lemma}\label{aequivbvdv}
The spectrum of $JX$ on $L^2(\mathbb{R}^3)$ equals the
spectrum of $\V_\mu$ on $L^2(\Omega_\mu)$.
\end{Lemma}
\begin{proof}
Let $A: L^2(\mathbb{R}^3)\mapsto L^2(\Omega_\mu)$ denote the operator which
maps $\psi\in L^2(\mathbb{R}^3)$ to the Fourier transform of
$|V|^{1/2}\psi$, restricted to the sphere $\Omega_\mu$. Note that
$|V|^{1/2}\psi\in L^1(\mathbb{R}^3)$ and hence it has a bounded and continuous
Fourier transform. Moreover, let $B: L^2(\Omega_\mu)\mapsto
L^2(\mathbb{R}^3)$ be defined by
$$
(Bu)(x) = V(x)^{1/2} \frac 1{(2\pi)^{3/2}}\frac 1{\sqrt\mu}
\int_{\Omega_\mu} u(p) e^{ipx}
\, d\omega(p)\,.
$$
Using the fact that $\int_{\Omega_\mu} e^{ipx}
d\omega(p)=4\pi\sqrt{\mu}|x|^{-1}\sin \sqrt{\mu}|x|$ it is easy to see
that $JX = BA$, while $AB = \V_\mu$. Hence they have the same
spectrum, except possibly at zero.
Indeed, if $AB|f\rangle=\lambda |f\rangle$ with $\lambda\neq 0$, then
$|g\rangle=B|f\rangle \neq 0$ and $BA|g\rangle=\lambda |g\rangle$.
Since both operators are Hilbert-Schmidt operators on
infinite-dimensional spaces, 0 is an element of both spectra.
\end{proof}
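The finite-dimensional analogue of this argument ($AB$ and $BA$ share
their non-zero spectrum) is easily illustrated numerically, e.g.\ with
random rectangular matrices as in the following Python snippet.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 7))
B = rng.standard_normal((7, 4))
print(np.sort_complex(np.linalg.eigvals(A @ B)))
print(np.sort_complex(np.linalg.eigvals(B @ A)))   # same values, plus zeros
\end{verbatim}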
We now study the behavior of the spectrum of $JX$ under small
perturbations. We will show that for $\alpha>0$ the spectrum of
\begin{equation}\label{vme}
\alpha J X + \lambda Y_T
\end{equation}
differs from the spectrum of $\alpha J X$ by at most
$\ord(\sqrt{\alpha\lambda})+ \ord(\lambda)$, uniformly in $T$. Here and in
the following, we use the notation $\ord(t)$ to indicate an expression
that is bounded as $c t\leq \ord(t)\leq C t$ for constants $0<c<C$.
Pick a $z$ at distance at least $d$ from the spectrum of $\alpha
JX$. By expanding in a Neumann series, we see that $\alpha J X +
\lambda Y_T-z$ has a bounded inverse provided
$$
\lambda \| Y_T \| \, \left\| \frac 1 {\alpha JX - z}\right\| < 1\,.
$$
We have
$$
\frac 1{\alpha JX - z} = -\frac 1z +\frac \alpha z J X^{1/2} \frac {1}{\alpha X^{1/2}J X^{1/2} - z} X^{1/2}\,.
$$
Since $X^{1/2}J X^{1/2}$ is a self-adjoint operator having the same
spectrum as $JX$, we can bound $\|1/(\alpha X^{1/2}J X^{1/2} - z)\|
\leq 1/d$ for any $z$ a distance $d$ away from the spectrum of $\alpha
JX$. We conclude that $\left\|(\alpha JX - z)^{-1}\right\| \leq 1/d+\alpha \| X\|/d^2$.
Hence $z$ is not in the spectrum of (\ref{vme}) if $d \geq
\ord(\sqrt{\lambda \alpha})+\ord(\lambda)$.
Since the spectrum of $\alpha JX + \lambda Y_T$ depends continuously
on $\lambda$, we have thus proved the claim. In particular, it follows
that the lowest eigenvalue of \eqref{vme} equals the lowest eigenvalue
of $\alpha JX$ plus terms that are at most of order $\ord(\sqrt{\alpha\lambda})+
\ord(\lambda)$.
We now have the necessary prerequisites to give the proof of
Theorem~\ref{asymptofcrittemp}.
\begin{proof}[Proof of Part (i)]
According to Lemma~\ref{aequivbvdv}, we have $a_{\mu}(V)=\infspec
JX$. Assume now that $\infspec JX<0$. Since $Y_T$ is bounded uniformly
in $T$, we see that the spectrum of $B_T = \lambda \ln(1+\mu/2T) JX +
\lambda Y_T$ becomes arbitrarily negative for $T\to 0$, and hence
$T_c(\lambda V)>0$ for any $\lambda >0$.
Moreover, $\lambda \ln (1+\mu/2T_c)$ is bounded away from zero as
$\lambda\to 0$.
We have shown above that the lowest eigenvalue of~$B_T$ is bounded
from above and below by $\lambda a_{\mu}(V)\ln(1+\mu/2T) +
\ord(\sqrt{\lambda})$, uniformly in $T$. Since at $T=T_c$ this lowest
eigenvalue equals $-1$, we conclude Eq.~(\ref{crittempformula}).
\end{proof}
\begin{proof}[Proof of Part (ii)]
For $T>0$, let $\alpha = \lambda \ln (1+\mu/2T)$ for simplicity. Under
the assumption that the spectrum of $J X$ is non-negative, the lowest
eigenvalue of $B_T = \alpha J X + \lambda Y_T$
is bigger than $-\ord( \sqrt{\alpha \lambda})$, as shown above. This immediately
implies that $B_T$ can only have an eigenvalue $-1$ if
$\alpha\lambda \geq \ord(1)$, or $\ln(\mu/T) \geq \ord(1/\lambda^2)$ for small
$\lambda$.
\end{proof}
\begin{proof}[Proof of Part (iii)]
Let again $\alpha = \lambda \ln(1+\mu/2T)$, and recall that $B_T = \alpha J X + \lambda Y_T$.
Since the operator $1+\lambda Y_T$ is
invertible for small enough $\lambda$, we are able to rewrite
$$
1+B_T = (1+\lambda Y_T)\left(1+(1+\lambda
Y_T)^{-1}\alpha JX \right)\,.
$$
Hence $1+B_T$ does not have a zero eigenvalue for any
$\alpha\geq 0$ if the spectrum of
$(1+\lambda Y_T)^{-1} J X$ is non-negative. Note that $JY_T$ is self-adjoint, since $JB_T$ is
self-adjoint and $J^2=1$. Hence $(1+\lambda Y_T)^{-1} J X$ has the same spectrum
as the self-adjoint operator
\begin{equation}\label{aax}
X^{1/2} \frac1{ J + \lambda J Y_T } X^{1/2}\,.
\end{equation}
This operator is non-negative for small $\lambda$ if $X^{1/2}J X^{1/2}
\geq \epsilon X$ for some $\epsilon>0$, since then
$$
\eqref{aax} = X^{1/2} J X^{1/2}
- \lambda X^{1/2}
Y_T \frac1{ J + \lambda JY_T }X^{1/2} \geq X
\left(\epsilon - \lambda \| Y_T \| \, \| (1 + \lambda Y_T )^{-1}\|\right)\,.
$$
Note that the range of $X$ is dense in the range of $X^{1/2}$, and
hence it is enough to check the inequality $J\geq \epsilon$ on the
range of $X$. Let $|\psi\rangle$ be in the range of $X$, i.e.,
$|\psi\rangle = X |\phi\rangle$ for some $|\phi\rangle\in L^2(\mathbb{R}^3)$.
Then $\langle \psi |J \psi\rangle \geq \epsilon \langle
\psi|\psi\rangle$ is equivalent to the statement that, for
$|\chi\rangle = |V|^{1/2}|\phi\rangle$,
\begin{align*}\nonumber
&\int_{\Omega_\mu\times \Omega_\mu}
\overline{\hat \chi(p)} \hat V(p-q) \hat\chi(q) \,d\omega(p)\, d\omega(q) \\
& \geq \epsilon \int_{\Omega_\mu\times \Omega_\mu}
\overline{\hat \chi(p)} \widehat{|V|}(p-q) \hat\chi(q)\, d\omega(p)\,
d\omega(q) \,.
\end{align*}
This, in turn, is equivalent to $a_\mu(V-\epsilon
|V|)=0$. Under this assumption, we have thus shown that, for small
enough $\lambda$, the operator $B_T$ does not have an eigenvalue $-1$,
for arbitrary $T> 0$. Together with Lemma~\ref{l1}, this proves the claim.
\end{proof}
\section{Proof of Lemma~\ref{decomp}}
By scaling we may assume that $\mu=1$, and we set $K_{T,1}=K_T$
for simplicity. The operator $K_{T}$ can be rewritten as
\begin{equation*}
K_{T}(p) = (|p^2-1|+2T) g(|p^2-1|/T)
\end{equation*}
where $g(t)=t(1+e^{-t})/((t+2)(1-e^{-t}))$. The
integral kernel of $K_T^{-1}$ is given by
$$
K_T^{-1}(x,y) = \frac 1{(2\pi)^{3}} \int_{\mathbb{R}^3} \frac{e^{ip(x-y)}}{(|p^2-1|+2T)
g(|p^2-1|/T)} dp\,.
$$
We decompose $K_T(p)^{-1}$ as $K_T(p)^{-1} = L_T^{(1)}(p) + M_T^{(1)}(p)$,
where $L_T^{(1)}(p) = \theta(\sqrt{2}-|p|) K_T(p)^{-1}$ and
$M_T^{(1)}(p) = \theta(|p|-\sqrt{2}) K_T(p)^{-1}$.
Since $b=\inf_t g(t)>0$ one has
\begin{equation*}
M_T^{(1)}(p) \leq \theta(|p|-\sqrt{2}) b^{-1} |p^2-1|^{-1}\,.
\end{equation*}
Using that $V\in L^{3/2}(\mathbb{R}^3)$, we find with the aid of the
Hardy-Littlewood-Sobolev inequality \cite[Thm.~4.3]{LL} that $
\|V^{1/2} M_T^{(1)} |V|^{1/2}\|_2 $ is bounded independently of
$T$. Here, $\|\,\cdot\,\|_2=(\tr |\,\cdot\,|^2)^{1/2}$ denotes the
Hilbert-Schmidt norm.
Note that the integral kernel of $L_T^{(1)}$ is given by
\begin{equation*}
\frac1{2\pi^2} \int_0^{\sqrt{2}} \frac k{(|k^2-1|+2T)
g(|k^2-1|/T) } \frac{\sin k|x-y|}{|x-y|} \,dk \,.
\end{equation*}
We further decompose $L_T^{(1)} = L_T^{(2)} + M_T^{(2)}$,
where
\begin{equation*}
L_T^{(2)}(x,y) =
\frac1{2\pi^2} \int_0^{\sqrt{2}} \frac k{(|k^2-1|+2T) } \frac{\sin
k|x-y|}{|x-y|} \,dk\,.
\end{equation*}
Estimating $|\sin k|x-y|| \leq \sqrt2 |x-y|$ for $k\leq\sqrt 2$ and
changing variables one easily finds
$$
|M_T^{(2)}(x,y)| \leq \frac{\sqrt 2}{2\pi^2} \int_0^{1/T} \frac
1{t+2}\left(\frac1{g(t)}-1\right) \,dt.
$$
This is bounded independently of $T$ since
$1/g(t)-1\sim 2/t$ as $t\to\infty$. Since $V\in L^1(\mathbb{R}^3)$, we
can bound $\|V^{1/2} M_T^{(2)} |V|^{1/2}\|_2 \leq \int |V(x)|dx\,
\sup_{x,y}|M_T^{(2)}(x,y)|$, and hence we see that also $\|V^{1/2}
M_T^{(2)} |V|^{1/2}\|_2$ is bounded uniformly in $T$.
Finally, we decompose $L_T^{(2)} = L_T^{(3)} + M_T^{(3)}$,
where
\begin{align*}
L_T^{(3)}(x,y) & =
\frac1{2\pi^2} \int_0^{\sqrt{2}} \frac k{(|k^2-1|+2T)} \,dk\,
\frac{\sin |x-y|}{|x-y|}
\\ & \ = \ln\big(1/(2T)+1\big) \frac1{2\pi^2} \frac{\sin |x-y|}{|x-y|}\,.
\end{align*}
Since $|\sin a -\sin b|\leq |a-b|$ one easily sees that
\begin{equation*}
|M_T^{(3)}(x,y)| \leq \frac1{2\pi^2} \int_0^{\sqrt{2}} \frac k{k + 1
} \,dk\,.
\end{equation*}
Again, since $V\in L^1(\mathbb{R}^3)$, $\|V^{1/2} M_T^{(3)} |V|^{1/2}\|_2$ is
uniformly bounded. This completes the proof.
\hfill\qed
| {
"timestamp": "2007-10-24T22:13:38",
"yymm": "0704",
"arxiv_id": "0704.3564",
"language": "en",
"url": "https://arxiv.org/abs/0704.3564",
"abstract": "For the BCS equation with local two-body interaction $\\lambda V(x)$, we give a rigorous analysis of the asymptotic behavior of the critical temperature as $\\lambda \\to 0$. We derive necessary and sufficient conditions on $V(x)$ for the existence of a non-trivial solution for all values of $\\lambda>0$.",
"subjects": "Superconductivity (cond-mat.supr-con); Other Condensed Matter (cond-mat.other); Mathematical Physics (math-ph)",
"title": "The critical temperature for the BCS equation at weak coupling",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969698879862,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.709904430751557
} |
https://arxiv.org/abs/1207.0765 | Maharaja Nim, Wythoff's Queen meets the Knight | New combinatorial games are introduced, of which the most pertinent is Maharaja Nim. The rules extend those of the well-known impartial game of Wythoff Nim in which two players take turn in moving a single Queen of Chess on a large board, attempting to be the first to put her in the lower left corner. Here, in addition to the classical rules a player may also move the Queen as the Knight of Chess moves. We prove that the second player's winning positions of Maharaja Nim are close to the ones of Wythoff Nim, namely they are within a bounded distance to the lines with slope $\frac{\sqrt{5}+1}{2}$ and $\frac{\sqrt{5}-1}{2}$ respectively. For a close relative to Maharaja Nim, where the Knight's jumps are of the form $(2,3)$ and $(3,2)$ (rather than $(1,2)$ and $(2,1)$), we also demonstrate polynomial time complexity to the decision problem of the outcome of a given position. | \section{Maharaja Nim}
We introduce a 2-player combinatorial game called
\emph{Maharaja Nim}, an extension of the well-known game
of Wythoff Nim \cite{Wy}. (The name `Maharaja' is taken from a variation
of Chess, `The Maharaja and the Sepoys', \cite{Fa}.)
Both these games are impartial, that is, the set of options
is the same regardless of whose turn it is. For a background on impartial
games see \cite{BeCoGu}.
Place a Queen (of Chess) on a given position of a large Chess board, with
the position in the lower left corner labeled $(0,0)$.
In the game of Wythoff Nim, here denoted by $\text{W}$, the two players
move the Queen alternately
as it moves in Chess, but with the restriction that, by moving,
no coordinate increases, see Figure \ref{figure:1}. Only non-negative
coordinates are allowed so that the first player
to reach the position $(0, 0)$ wins.
In Maharaja Nim, denoted by $\text{M}$, the rules are as in Wythoff Nim, except
that the Queen is exchanged for a `Maharaja',
a piece which may move both as the Queen and the Knight of Chess,
again, provided by moving no coordinate increases.
See Figure \ref{figure:1}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\textwidth]{Maharaja.eps}
\caption{The move options, from a given position, of Wythoff Nim and Maharaja Nim respectively.}\label{figure:1}
\end{figure}
We say that a position is P if the second player to move
has a winning strategy, otherwise N. Also, let $\mathcal{P}_\text{M}$ and $\mathcal{P}_\text{W}$ denote
the set of P-positions of Maharaja Nim and Wythoff Nim respectively. See Figure \ref{figure:2} for a computation of the initial P-positions of the respective games and the Appendix, Section \ref{A:1} for the corresponding code.
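The following Python sketch (a straightforward dynamic program,
independent of the code in the Appendix) computes the P-positions of
both games on an $N\times N$ board by processing positions in
increasing order of coordinate sum; for Wythoff Nim its output can be
compared with the classical formula (\ref{ban}) below.
\begin{verbatim}
def p_positions(N, knight_moves=()):
    P = set()
    for s in range(2 * N - 1):                    # order of increasing x+y
        for x in range(max(0, s - N + 1), min(s, N - 1) + 1):
            y = s - x
            opts  = [(x - k, y) for k in range(1, x + 1)]              # Rook
            opts += [(x, y - k) for k in range(1, y + 1)]              # Rook
            opts += [(x - k, y - k) for k in range(1, min(x, y) + 1)]  # Bishop
            opts += [(x - a, y - b) for (a, b) in knight_moves
                     if x >= a and y >= b]                             # Knight
            if not any(o in P for o in opts):     # P iff all options are N
                P.add((x, y))
    return P

wythoff  = p_positions(50)
maharaja = p_positions(50, knight_moves=[(1, 2), (2, 1)])
print(sorted(p for p in maharaja if p[0] <= p[1])[:10])
\end{verbatim}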
We let $\mathbb{N}$ denote the positive integers and $\mathbb{N}_{0}$ the non-negative integers.
Let
\begin{align*}
\phi = \frac{1+\sqrt{5}}{2}
\end{align*}
denote the golden ratio. The well-known winning strategy
of Wythoff Nim \cite{Wy} is
\begin{align}\label{ban}
\mathcal{P}_\text{W} = \{(\lfloor \phi n \rfloor, \lfloor \phi^2 n \rfloor),
(\lfloor \phi^2 n \rfloor, \lfloor \phi n \rfloor)\mid n\in \mathbb{N}_{0}\}.
\end{align}
From this it follows that there is precisely one P-position
of Wythoff Nim in each row and each column of the board (see also \cite{Be}).
The purpose of this paper is to explore the P-positions of Maharaja Nim
and some related games. Clearly $(0, 0)$ is P. Another trivial observation
is that, since the rules of the game are symmetric, if $(x, y)$ is P then
$(y, x)$ is P. It is also easy to see that there is at most one
P-position in each row and each column
(corresponding to the Rook-type moves). But, in fact, the same assertion
as for Wythoff Nim holds:
\begin{Prop}\label{rowcolumn}
There is precisely one P-position of Maharaja Nim in each row and
each column of $\mathbb{N}_{0}\times \mathbb{N}_{0}$.
\end{Prop}
\noindent{\bf Proof.}
Since all Nim-type moves are allowed
in Maharaja Nim, there is at most one P-position in each row and column
of $\mathbb{N}_{0}\times \mathbb{N}_{0}$. This implies that there are at most $k$ P-positions
strictly to the left of the $k^{th}$ column (row). Each such
P-position is an option for at most
three N-positions in column (row) $k$. This implies that
there is a least position in column (row)
$k$ which has only N-positions as options. By definition this
position is P and so, since $k$ is an arbitrary index,
the result follows. \hfill $\Box$\\
\begin{figure}[ht]
\centering
\includegraphics[width=0.47\textwidth]{Wyth50.eps}
\includegraphics[width=0.47\textwidth]{Maha50.eps}
\caption{The initial P-positions of Wythoff Nim and Maharaja Nim
respectively.}\label{figure:2}
\end{figure}
Another claim holds for both Wythoff Nim and Maharaja Nim.
There is \emph{at most} one P-position on each \emph{diagonal} of the form
\begin{align}\label{diag}
\{\{x, x + C\}\mid x\in \mathbb{N}_{0}\}, C\in \mathbb{N}_{0},
\end{align}
(corresponding to the Bishop-type moves). But (\ref{ban})
readily gives that, for Wythoff Nim, there is \emph{precisely} one
P-position on each such diagonal. Even more is true: If
\begin{align}\label{abW}
\mathcal{P}_\text{W} = \{(a_i,b_i),(b_i,a_i)\},
\end{align}
with $(a_i)$ increasing and for all $i$, $a_i\le b_i$, then for all $n$,
\begin{align}\label{strengthening}
\{0, 1, \ldots , n\} = \{b_i - a_i\mid i\in \{0, 1,\ldots ,n\}\}.
\end{align}
As we will see in Section \ref{Sec:2}, a somewhat weaker, but crucial,
property holds also for Maharaja Nim, but let us now state
our main result (see also Figure \ref{figure:3}). We
let $O(1)$ denote bounded functions on $\mathbb{N}_{0}$.
\begin{Thm}[Main Theorem]\label{mainthm}
Each P-position of Maharaja Nim lies on one of the `bands'
$\phi n + O(1)$ or $\phi^{-1} n + O(1)$, that is, if $(x, y)\in \mathcal{P}_\text{M}$,
with $y\ge x$, then $y - \phi x$ is $O(1)$.
\end{Thm}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{WPlines.eps}\includegraphics[width=0.45\textwidth]{M100.eps}
\caption{To the left, the P-positions of Wythoff Nim lie `on'
the lines $\phi x$ and $\phi^{-1}x$, $x\ge 0$. The figure to the right
illustrates a main result of this paper, that the P-positions
of Maharaja Nim are bounded below and above
by the `bands', $\phi x + O(1)$ and $\phi^{-1} x + O(1)$.}\label{figure:3}
\end{figure}
We give the proof of this result in Section \ref{Sec:2}. It is
quite satisfactory in one sense, but for the two gamesters trying
to figure out how to quickly find safe positions, it does not
quite suffice. The following question is left open.
\begin{Ques}\label{Ques:1}
Does Maharaja Nim's \emph{decision problem}, to determine whether a given
position $(x, y)$, with input length $\log (xy)$, is P,
have polynomial time complexity in $\log (xy)$?
\end{Ques}
In Section \ref{Section:5} we provide an affirmative answer to
this question for a close relative
of Maharaja Nim, namely the extension of Wythoff Nim where moves
of type $(2,3)$ and $(3,2)$ are adjoined (but not $(1,2)$ or $(2,1)$).
This result builds upon
an analog result, of `approximately linear' P-positions,
as that for Maharaja Nim in Theorem \ref{mainthm}. See also \cite{FrPe}, which was the inspiration for some results in this paper,
although its methods do not seem to encompass the complexity of Maharaja Nim.
\subsection{Complementary sequences and a central lemma}
We say that two sequences of positive integers
are \emph{complementary} if
each positive integer is contained in precisely one of these sequences. In our setting this corresponds to Proposition \ref{rowcolumn} together with the claim before (\ref{diag}).
In \cite{FrPe} the authors proved the following result.
\begin{Prop}[Fraenkel, Peled]\label{AUthm}
Suppose $x$ and $y$ are complementary and increasing
sequences of positive integers. Suppose further that
there is a positive real constant, $\delta$, such that, for all $n$,
\begin{align}\label{yxn}
y_n - x_n = \delta n + O(1).
\end{align}
Then there are constants, $1<\alpha < 2 < \beta $, such that, for all $n$,
\begin{align}\label{xn}
x_n - \alpha n = O(1)
\end{align}
and
\begin{align}\label{yn}
y_n - \beta n = O(1).
\end{align}
\end{Prop}
As they have remarked (see also \cite{HeLa}),
by simple density estimates one may decide the constants
$\alpha$ and $\beta$ as functions of $\delta$. Namely, notice
that (\ref{yxn}) and (\ref{xn}) together imply
\begin{align}\label{betaalpha}
\beta = \alpha + \delta
\end{align}
and, by complementarity, we must have
\begin{align}\label{alphabeta}
\frac{1}{\alpha} + \frac{1}{\beta} = 1.
\end{align}
(Thus $\alpha$ and $\beta$ are algebraic numbers if and only if $\delta$ is.)
By this we get the relation
\begin{align}\label{deltalpha}
\delta(1-\alpha) +\alpha = (\alpha -1)\alpha,
\end{align}
which will turn out to be useful in what will come next, namely we have
found a short proof of an extension of their theorem---an extension which is
easier to adapt to the circumstances of Maharaja Nim. Let us explain.
If we denote
\begin{align}\label{abM}
\mathcal{P}_\text{M} = \{(a_n, b_n),(b_n,a_n)\mid n\in \mathbb{N}_{0}\},
\end{align}
with $(a_n)$ increasing and for all $n, b_n\ge a_n$, then, for all $n, b_n$
is uniquely defined by the rules of M. At this point, one might want
to observe that, if the $b$-sequence were increasing
(by Figure \ref{figure:2} it is not) then
Theorem \ref{mainthm} would follow from Proposition \ref{AUthm}
if one could only establish the following claim: $b_n - a_n - n$ is $O(1)$.
Namely in (\ref{deltalpha}) $\delta = 1$ gives $\alpha = \phi$
in Proposition \ref{AUthm}. Now, interestingly enough, it turns out that Proposition
\ref{AUthm} holds without the condition that the $y$-sequence is
increasing, namely (\ref{yxn}) together with an increasing $x$-sequence
suffices.
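For the record, the value $\alpha=\phi$ claimed above for $\delta=1$
can be confirmed symbolically; the following small Python/Sympy check
solves (\ref{alphabeta}) together with (\ref{betaalpha}) for $\alpha$.
\begin{verbatim}
import sympy as sp

alpha, delta = sp.symbols('alpha delta')
sols = sp.solve(sp.Eq(1/alpha + 1/(alpha + delta), 1), alpha)
print([sp.simplify(s.subs(delta, 1)) for s in sols])  # contains (1+sqrt(5))/2
\end{verbatim}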
\begin{Lem}[Central Lemma]\label{centralthm}
Suppose $x$ and $y$ are complementary
sequences of positive integers with $x$ increasing. Suppose further that
there is a positive real constant, $\delta$, such that, for all $n$,
\begin{align}\label{one}
y_n - x_n = \delta n + O(1).
\end{align}
Then there are constants, $1 < \alpha < 2 < \beta $, such that, for all $n$,
\begin{align}\label{xn2}
x_n - \alpha n = O(1)
\end{align}
and
\begin{align}\label{yn2}
y_n - \beta n = O(1).
\end{align}
\end{Lem}
\noindent{\bf Proof.}
We begin by demonstrating that, for all $n\in \mathbb{N}$,
\begin{align}
x_{n+1} = x_n + O(1),\label{two}
\intertext{ and }
y_{n+1} = y_n + O(1).\label{three}
\end{align}
By (\ref{one}), for all $k,n\in \mathbb{N}$ we have that
\begin{align}
y_{n+k} - y_n &= x_{n+k} + \delta(n+k) - x_n - \delta n + O(1),\notag\\
&= x_{n+k} - x_n + \delta k + O(1).\label{yy}
\end{align}
Since for all $k, n \in \mathbb{N}$,
$x_{n+k} - x_n \ge k$ and $\delta > 0$ this means that, for all $k, n \in \mathbb{N}$,
\begin{align}\label{yC}
y_{n+k} \ge y_n - C,
\end{align}
where $C$ is some universal positive constant (which may depend on $\delta$).
But, with $C$ as in (\ref{yC}), we can find another universal constant $\kappa = \kappa(C)\in \mathbb{N}$ such that, for all $n$,
\begin{align}\label{2C}
y_{n+\kappa} - y_n \ge \kappa + 2C + 1.
\end{align}
This follows since, in (\ref{yy}), for any $C$, we can find $k=k(C)$ such that, for all $n$, $\delta k + O(1)> 2C$. Any such $k$ suffices as our $\kappa$. On the one hand there can be at most $\kappa - 1$ numbers from the $y$-sequence strictly between $y_n$ and $y_{n+\kappa}$ (with indexes strictly in-between $n$ and $n+\kappa$).
On the other hand the inequality (\ref{yC}) gives that there can be at
most $C$ numbers from the $y$-sequence with index greater
than $n + \kappa$ but less than $y_{n+\kappa}$. It also gives that there can be
at most $C$ numbers with index less than $n$ but greater than $y_{n}$.
Therefore, by complementarity and (\ref{2C}), there has to be a number
from the $x$-sequence in every interval of length $\kappa+2C+1$. Thus the jumps
in the $x$-sequence are bounded, which is (\ref{two}). But
then (\ref{three}) follows from (\ref{one}) and (\ref{two}) since
\begin{align*}
y_{n+1} - y_n &= x_{n+1} + \delta(n + 1) - x_n - \delta n + O(1)\\
&=x_{n+1} + \delta - x_n + O(1)\\
&= O(1).
\end{align*}
By (\ref{three}) we may define $m$ as a function of $n$ with
\begin{align}\label{cons}
x_{n} = y_{m} + O(1).
\end{align}
(For example, one can take $m = m(n)$ to be the least number
such that $x_n < y_m$. Then $y_m-x_n$ has to be bounded, for otherwise $y_m-y_{m-1}$ would not be bounded.)
This has two consequences, of which the first one is
\begin{align}\label{five}
x_n = n + m + O(1).
\end{align}
This follows since the numbers $1,2,\ldots , x_n$ are partitioned
in $n$ numbers from the $x$-sequence, and the rest, by complementarity, $m + O(1)$ numbers from the $y$-sequence.
The second consequence of (\ref{cons}) is that, by using (\ref{one}),
\begin{align}\label{six}
x_n = x_m + \delta m + O(1).
\end{align}
If $\lim x_n/n$ and $\lim y_n/n$ exist, then clearly they must
satisfy (\ref{betaalpha}) and (\ref{alphabeta}) with $\delta$ as
in the lemma. Thus, using this definition of $\alpha = \alpha (\delta)$,
for all $n$, denote
\begin{align*}
\Delta_n := x_n - \alpha n.
\end{align*}
We want to use (\ref{five}) and (\ref{six}) to express
$\Delta_n$ in terms of $\Delta_m$.
Equation (\ref{six}) expresses $x_n$ in terms of $x_m$ and $m$. Therefore,
we wish to combine (\ref{five}) and (\ref{six}) to express $n$ in terms
of $x_m$ and $m$, that is, we wish to eliminate $x_n$ from (\ref{five}).
If we plug in the expression (\ref{six}) for $x_n$ in (\ref{five}) and
solve for $n$ we get
\begin{align}\label{seven}
n = x_m + (\delta-1)m + O(1).
\end{align}
Combining (\ref{six}) and (\ref{seven}) gives
\begin{align}
\Delta_n &= x_m + \delta m - \alpha(x_m + (\delta - 1)m) + O(1)\notag\\
&= (1-\alpha)x_m + (\delta (1-\alpha)+\alpha)m + O(1)\notag\\
&= (1-\alpha)\Delta_m + O(1),\label{eight}
\end{align}
where the last equality is by (\ref{deltalpha}).
Notice that, by (\ref{six}), for sufficiently large $n$ we have that $m < n$.
Hence we may use strong induction and by (\ref{eight}) conclude
that $\Delta_n$ is $O(1)$ which is (\ref{xn2}).
Then (\ref{yn2}) follows from (\ref{one}).
\hfill $\Box$\\
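As a quick numerical illustration of the Central Lemma (not part of the proof), one may take the classical Beatty sequences $x_n=\lfloor \phi n\rfloor$ and $y_n=\lfloor \phi^2 n\rfloor$, which are complementary, have $x$ increasing and satisfy $y_n-x_n=n$, so $\delta=1$; the lemma then predicts bounded deviation from the lines $\phi n$ and $\phi^2 n$. A minimal sketch in Python:
\begin{verbatim}
# Minimal illustration (not a proof): Beatty sequences with delta = 1 stay
# within bounded distance of the lines alpha*n and beta*n, alpha = phi.
import math

phi = (1 + math.sqrt(5)) / 2
N = 10_000                     # double precision is accurate enough here

x = [math.floor(phi * n) for n in range(1, N + 1)]
y = [math.floor(phi * phi * n) for n in range(1, N + 1)]

assert set(x).isdisjoint(y)                          # complementary ...
assert set(range(1, x[-1] + 1)) <= set(x) | set(y)   # ... on an initial segment
assert all(b - a == n for n, (a, b) in enumerate(zip(x, y), start=1))

print(max(abs(a - phi * n) for n, a in enumerate(x, start=1)))        # < 1
print(max(abs(b - phi * phi * n) for n, b in enumerate(y, start=1)))  # < 1
\end{verbatim}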
\section{Perfect sectors, a dictionary and the proof
of Theorem \ref{mainthm}}\label{Sec:2}
This whole section is devoted to the proof of Theorem \ref{mainthm}. We begin
by proving that there is precisely one P-position of Maharaja Nim
on each diagonal of the form in (\ref{diag}).
Then we explain how the proof of this result leads to the second part
of the theorem, the bounding of the P-positions within the
desired `bands' (Figure \ref{figure:3}).
A position, say $(x,y)$, is an \emph{upper} position if it is strictly
above the \emph{main diagonal}, that is if $y > x$.
Otherwise it is \emph{lower}.
We call a \emph{$(C,X)$-perfect sector}, or simply a \emph{perfect sector},
all positions strictly above some diagonal of the form in (\ref{diag})
and strictly to the right of column $X$.
Suppose that we have computed all P-positions in the
columns $1,2, \ldots , a_{n-1}$ and that, when we
erase each upper position from which a player can move to an upper P-position
(Figures \ref{figure:5} and \ref{figure:6}), then the remaining
upper positions strictly to the right of $a_{n-1}$ constitute
an $(n-1,a_{n-1})$-perfect sector (Figure \ref{figure:6}). Then we say that $a_{n-1}$ is \emph{perfect} and, in fact, it is easy to see that property (\ref{strengthening}) also holds for all such $n$. On the other hand, we will see that the converse statement holds if and only if for any such $n$,
\begin{align}\label{nth}
b_{n} - a_{n} = n,
\end{align}
given that the lower P-positions do not interfere. It is crucial to our approach that the first implication can be made stronger to also include (\ref{nth}).
\begin{Lem}\label{LemmaPerfect}
Let $n\in \mathbb{N}$ be sufficiently large so that Knight type moves from lower P-positions do not affect the coordinates of upper P-positions and define $(a_i)$ and $(b_i)$ as in (\ref{abM}). Suppose also that
\begin{align}\label{12n}
\{0, 1, \ldots , n-1\} = \{b_i - a_i\mid 0\le i < n \}
\end{align}
holds. Then (\ref{nth}) must hold if and only if $a_{n-1}$ is perfect.
\end{Lem}
\noindent{\bf Proof.} Suppose that (\ref{nth}) does not hold. Then clearly $b_n-a_n>n$. This must be due to a Knight type move from the upper P-position $(a_{n-1},b_{n-1})$, that is, to the position $(a_{n-1}+1,b_{n-1}+2)$. Hence $a_{n-1}$ is not perfect. For the other direction, whenever there is no $i<n$ such that $b_i=a_{n-1}+1$, we get $a_n=a_{n-1}+1$; this excludes a Knight type move as above and hence assures a perfect sector.
\hfill $\Box$\\
\subsection{Constructing Maharaja Nim's bit-string}
We study a bit-string, a sequence of `0's and `1's, where the
$i^{th}$ bit equals `0' if and only if there is an upper
P-position of Maharaja Nim in column $i$.
By Proposition \ref{rowcolumn}, if there is no upper P-position in
column $i$, there is a lower ditto (the $i^{th}$ bit equals $1$).
Suppose that $a_x=n$ is perfect. Then, by symmetry we know some
lower P-positions in columns to the right of $n$. The next step
is to erase each column in this perfect sector which has a
lower P-position, a `1' in the bit-string (see Figure \ref{figure:7}) and
to, recursively in the non-erased part of the perfect sector, compute new
upper P-positions. We do this until we reach the next perfect
sector (for the moment assume that this will happen) at say column
$n + m$, $m > 0$. Thus, using this notation, we
may define a word of length $m$, containing the information of whether
the P-position in column $i\in \{n, n+1, \ldots , n+m-1\}$ is below or
above the main diagonal.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{imperfectSector2.eps}
\caption{All upper positions from which a player
can move to an upper P-position are erased. (The `sector' continues
above the figure.) However, the `sector' is not perfect.}
\label{figure:5}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{perfectSector2.eps}
\caption{(Step 1) A perfect sector together
with the corresponding initial P-positions.}
\label{figure:6}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{columnErase.eps}
\caption{
(Step 2) Each column in the perfect sector which
corresponds to a lower P-position (a `1' in the bit-string)
has been erased.}
\label{figure:7}
\end{figure}
At this point we adjoin this \emph{word} together with its unique
\emph{translate} to Maharaja Nim's \emph{dictionary}.
The translate is obtained accordingly: For each P-position in the columns
$n$ to $n + m - 1$ define the $i^{th}$ bit in the translation as a `1'
if and only if row $k + i$ has an upper P-position, where $k$ is the
largest row index strictly below the perfect sector. See
also Figure \ref{figure:8} and the next section for examples.
Then the translate has length $m + l$, where $l$ denotes the number of
`0's in the word.
We then concatenate the translate at the end of the existing bit-string.
In this way, provided a next perfect sector will be detected,
the bit-string will always grow faster than we read from it.
However, there is no immediate guarantee that we will be able to repeat
the procedure---that the next word exists---or
for that matter that the size of
the dictionary will be finite, so that the process may be described by a
finite system of words and translates. But, in the coming,
we aim to prove that, in fact, the next perfect sector will always
(in the sense outlined above) be detected
within a `period' of at most 7 P-positions, that is `0's in the bit-string.
As we will see, a complete dictionary needs only (between 9 and)
14 translations.
Let us describe a bit more in detail how the first part of Maharaja's
bit-string is constructed.
\subsection{A detailed example}\label{Section:2.2}
Initially there is some interference which does not allow a recursive
definition of words and translates, see Figure \ref{figure:2}.
The first perfect sector beyond the origin is attained
when the four first P-positions strictly above the main diagonal has
been computed. This happens to the right of column 8. To the right of
column 12 a new perfect sector is detected. Thus the first word
(left hand side entry) in the
dictionary will be `00100', corresponding to the P-positions
$(8,13)$, $(9,16)$, $(10,7)$, $(11,19)$ and $(12, 18)$. (Here there is
no interference since the $y$-coordinate of the first P-position
is greater than the $x$-coordinate of the last P-position.)
Let us verify that this word translates to `100101100'.
Notice that the first `1'-bit means that the P-position $(8, 13)$ is
to the left of the main diagonal---by symmetry this corresponds to a
lower P-position in column 13. The second bit is `0'. This means that
the next upper P-position is in column 14. Then, by the rules of the game,
it has to be at least in row $16$, which indeed will be attained, so
that the next P-position will be $(9, 16)$. By the rules of the game, the
rows 14 and 15 cannot have P-positions to the left of the main diagonal,
so that a prefix is `1001'. Continuing up to the last P-position of
this translate, $(12, 18)$, extends the prefix to `1001011'. The next
upper P-position will be in at least row $22$ since the least
unused diagonal is $22 - 13 = 9$.
After this a new perfect sector will start. This gives the two last `0's in
the translate, `100101100', which may now be concatenated at the end
of the first part of the bit-string, `00100', so that
the new bit-string becomes `00100\underline{1}00101100'.
In column 13 there is a lower P-position (corresponding to the $6^{th}$
bit in the string), which gives a new perfect sector by default,
that is, the next left hand side word is `1'. This corresponds
to that the first column in a perfect sector is erased and we get a
new perfect sector without adding any upper P-position.
By the property of a perfect sector, there can be
no P-position to the left of the main diagonal in row 22, so
the translate of the word `1' must be `0'. A concatenation of this `0'
at the end of the existing string gives `001001\underline{0}01011000'.
As we continue to read from the `0' in the seventh position it turns out that,
this time, we need to read `0010110' (Figure \ref{figure:8} to the right)
to obtain a new perfect sector and also
that this word translates to `10010011000'. Again, concatenating
this translate at the end of the
existing string gives `0010010010110\underline{0}010010011000', and so on.\\\\
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4\textwidth]{Mtransl.eps}
\caption{To the left, the unique (upper) P-positions of Maharaja Nim
in the columns 8 to 12 are computed. The corresponding translation is $00100\rightarrow 100101100$. To the right are the P-positions in the columns 14 to 20 together with the translate $0010110\rightarrow 10010011000$. (Here we
have omitted column 13 with its translation $1\rightarrow 0$.)
See also Figure \ref{figure:2} and Section~\ref{Section:2.2}.}
\label{figure:8}
\end{figure}
\subsection{Maharaja Nim's dictionary}
The dictionary of $M$ is
\begin{align}
1 &\rightarrow 0\label{dic1}\\%\notag
01 &\rightarrow 100\\%\notag
00100 &\rightarrow 100101100\\
00110 &\rightarrow 10010100\\
000100 &\rightarrow 10010110100\\
001110 &\rightarrow 100100100\\
0010110 &\rightarrow 10010011000\\
00000100 &\rightarrow 100101100111000\\
000010010 &\rightarrow 1001001111000100\label{dic9}\\
0000000 &\rightarrow 10010110110100\\
0010100 &\rightarrow 100100110100\\
0011110 &\rightarrow 1001000100\\
00000010 &\rightarrow 100101101100100\\
00001000 &\rightarrow 100100111100100.
\end{align}
By computer simulations we have verified that each one of the words (\ref{dic1}) to (\ref{dic9}) does appear in Maharaja Nim's bit-string. We have included the code in the Appendix, Section \ref{A.2}. By our method of proof, we have found no way to exclude the latter five words, but we guess that they do not appear; at least they do not appear among the first 20000 bits of the bit-string. The Completeness Lemma below gives the first part of the theorem.\\
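This verification is easy to reproduce. The following minimal sketch (our own illustration, not the Appendix code) grows the bit-string from the seed `00100' of Section \ref{Section:2.2}, at each step translating the unique left hand side word found at the read head, and records which words occur among the first 20000 bits:
\begin{verbatim}
# Minimal sketch (illustration only): expand Maharaja Nim's bit-string from
# the dictionary above, starting from the seed of Section 2.2, and record
# which left hand side words actually occur.
DICT = {
    "1": "0",                  "01": "100",
    "00100": "100101100",      "00110": "10010100",
    "000100": "10010110100",   "001110": "100100100",
    "0010110": "10010011000",  "00000100": "100101100111000",
    "000010010": "1001001111000100",
    "0000000": "10010110110100",
    "0010100": "100100110100", "0011110": "1001000100",
    "00000010": "100101101100100",
    "00001000": "100100111100100",
}

def expand(seed="00100", target_len=20000):
    s, pos, used = seed, 0, set()
    while len(s) < target_len:
        # the dictionary is prefix free, so at most one word can match; by
        # the Completeness Lemma below, exactly one always does
        word = next(w for w in DICT if s.startswith(w, pos))
        used.add(word)
        s += DICT[word]
        pos += len(word)
    return s, used

bits, used = expand()
print(bits[:26])               # 00100100101100010010011000, cf. Section 2.2
print(sorted(used, key=len))   # the left hand side words that were observed
\end{verbatim}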
\begin{Lem}[Completeness Lemma]\label{Lemma}
When we read from Maharaja Nim's bit-string each prefix is
contained in our extended dictionary of (left hand side) words
of Maharaja Nim.
\end{Lem}
\newpage
\noindent{\bf Proof.}
Let us present a list in lexicographic order of all words in
our extended dictionary together with the words we need to exclude:
\begin{align*}
0000000 &\rightarrow 10010110110100\\
00000010 &\rightarrow 100101101100100\\
00000011 &\text{ 'to exclude' (a)}\\
00000100 &\rightarrow 100101100111000\\
00000101 &\text{ 'to exclude' (b)}\\
0000011 &\text{ 'to exclude' (c)}\\
00001000 &\rightarrow 100100111100100\\
000010010 &\rightarrow 1001001111000100\\
000010011 &\text{ 'to exclude' (d)}\\
0000101 &\text{ 'to exclude' (e)}\\
000011 &\text{ 'to exclude' (f)}\\
000100 &\rightarrow 10010110100\\
000101 &\text{ 'to exclude' (g)}\\
00011 &\text{ 'to exclude' (h)}\\
00100 &\rightarrow 100101100\\
0010100 &\rightarrow 100100110100\\
0010101 &\text{ 'to exclude' (i)}\\
0010110 &\rightarrow 10010011000\\
0010111 &\text{ 'to exclude' (j)}\\
00110 &\rightarrow 10010100\\
001110 &\rightarrow 100100100\\
0011110 &\rightarrow 1001000100\\
0011111 &\text{ 'to exclude' (k)}\\
01 &\rightarrow 100\\
1 &\rightarrow 0
\end{align*}
This list is `complete' in the sense that any bit-string has precisely
one of the words on the left hand side as a prefix.
This motivates why it suffices to exclude the words `to exclude'. For example
(a) needs to be excluded since the only word in our list beginning
with `0000001' ends with a `0'. Neither can we translate words beginning
with `000001's and ending with `01' or `1'. This motivates
why we need to exclude (b) and (c). All left hand side
words in our dictionary beginning with 4 `0's continue with
`100', which motivates that (e) and (f) need to be excluded, and so on.
We move on to verify that the strings (a) to (k) are not contained
in the bit-string.
No translate can contain more than three consecutive `0's.
To get a longer string one has to finish off one translate and start a new.
The only translate which starts with `0' is `0'. Thus, when a sequence of
four or more `0's is interrupted it means that a new translate has begun.
But all translates that begin with a `1' begin with `100'. Thus, a
sequence of four or more `0's cannot be followed by `11' or `101'. This
gives that the exclusion of the words (a),(b), (c), (e) and (f) is correct.
Clearly, the string `100' in (d) has to be the prefix of some translate. Since
the next two bits are `11', by the dictionary, this translate has to
be `100'. But then the next translate has the prefix `11',
which is impossible.
For the exclusion of (g) and (h), notice that any string of
three consecutive `0's
within a translate either occurs at the end or is followed
by the string `100'. Therefore, a string of three `0's cannot be followed
by `11' or `101'.
For (i), notice that the sub-string `101010' is not contained
in any translate. If it were, it would have to be either at
the beginning of a translate, which is impossible (since all of them
except `0' begin with `100'), or be split between two.
The latter is impossible since all translates except `0' end with `00'.
In analogy to this, also (j) must be excluded, and similarly for
(k), since no translate contains 5 consecutive
`1's and all translates end in a `0' but start with
either `0' or `10'.\hfill $\Box$\\
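As a side remark, the completeness of this list can also be checked mechanically: no listed word is a prefix of another, and a finite prefix-free set of binary words has every sufficiently long bit-string beginning with exactly one of its members precisely when its Kraft sum $\sum_w 2^{-|w|}$ equals $1$. A minimal sketch (our own illustration, not part of the proof):
\begin{verbatim}
# Check that the lexicographic list above is prefix free and has Kraft sum 1,
# i.e. every sufficiently long bit-string starts with exactly one of the words.
from itertools import combinations
from fractions import Fraction

WORDS = """0000000 00000010 00000011 00000100 00000101 0000011 00001000
000010010 000010011 0000101 000011 000100 000101 00011 00100 0010100
0010101 0010110 0010111 00110 001110 0011110 0011111 01 1""".split()

assert not any(a.startswith(b) or b.startswith(a)
               for a, b in combinations(WORDS, 2))
assert sum(Fraction(1, 2 ** len(w)) for w in WORDS) == 1
print(len(WORDS), "words, complete")
\end{verbatim}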
Since the left hand side words have at most 7 `0's we adjoin at
most 6 P-positions in a sequence with $b_n - a_n$ distinct from $n$. Namely,
by Lemma \ref{LemmaPerfect}, when we start a
new perfect sector we know that the next P-position will satisfy
$b_n - a_n = n.$ The number of bits in a translate is bounded (by $16$)
so that $b_n$ can never deviate more than a bounded number of
positions from $a_n + n$. Hence, by Proposition \ref{rowcolumn},
the conditions of Lemma \ref{centralthm} are satisfied with
the $a$-sequence as $x$, the $b$-sequence as $y$ and $\delta = 1$. Thus
$b_n- a_n - n$ is $O(1)$ and, as discussed in the paragraph
before Lemma \ref{centralthm}, this concludes the proof
of Theorem \ref{mainthm}. By inspecting the dictionary one can see that, in fact, for all $n$, $-4 \le b_n-a_n -n\le 3$.
\section{Dictionary processes and undecidability}
Let us briefly discuss a problem related to the method used in this paper.
Given a dictionary (of binary words and translations) and a starting string,
will the translation process of the bit-string `terminate' or not?
More precisely, let us assume that we have a finite
list of words $A = \{A_1, A_2, \ldots , A_m\}$ with
translates $B_1, B_2, \ldots , B_m$ respectively,
each word being a string of `0's and
`1's, and where we assume that none of the words in $A$ is a prefix
of another. (The latter is a convenient, but not necessary, condition
as we will explore further in Section \ref{Section:5}.) Namely, as
the read head reads from the bit-string, a natural generalization of
a prefix free dictionary is to translate precisely the longest word
containing a certain prefix.
Take any string $S$ as a starting string (for example $A_1$ but it could
be an arbitrary string, not necessarily in the list). A `read head' `$\_$'
starts to read $S$ from left to the right and as soon as it finds a
string $A_i$ in $A$ it stops, sends a signal to a printer at the other end
which concatenates the translation $B_i$ at the end of $S$. Then the read
head continues to read from where it ended until it finds the next word
in $A$, its translation being concatenated at the end, and so on.
If the read head gets to the end of the string without finding a word in
the list $A$, the process stops with the current string as `output'.
Otherwise, the process continues and gives as output an infinite string.
It follows from E. Post's tag productions \cite{Mi, Po}
that it is algorithmically undecidable whether our
`dictionary processes' stop or not. We give a proof in the Appendix, part B.
\section{Approximate linearity, converging dictionaries
and polynomial time complexity}\label{Sec:4}
There are infinitely many relatives to Maharaja Nim of the form `adjoin
a finite set of moves to Wythoff Nim'. It is easy to see that Proposition \ref{rowcolumn} and (\ref{diag}) hold also for these types of games.
For any given such generalization, is it possible to determine the greatest
departure from $n$ for $b_n - a_n$? For example see the games in
Figure \ref{figure:10} and \ref{figure:11}.
Even simpler, is it decidable, whether there is a P-position above some
straight line? More precisely:
\begin{Ques}\label{Ques:2}
Given the moves of Wythoff Nim together with
some finite list of moves, that is
ordered pairs of integers (in Maharaja Nim the list is $\{(1, 2), (2, 1)\}$)
and a linear inequality in two variables $x$ and $y$, is it
decidable whether there is a P-position in the game which satisfies
the inequality?
\end{Ques}
On the one hand it is not even clear if
a `generalized Maharaja Nim' has a finite dictionary in the sense
of Section \ref{Sec:2}. On the other hand the solution of a similar
game may or may not depend on the possible outcome of a dictionary process
as in Section \ref{Sec:2}. In fact, in Section \ref{Section:5} we prove that
a related dictionary process is successful in giving a polynomial time
algorithm for the decision problem of whether a certain position is P.
Therefore, let us look into some questions regarding some close
relatives of Maharaja Nim.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.17\textwidth]{35M.eps}\includegraphics[width=0.17\textwidth]{46M.eps}\includegraphics[width=0.17\textwidth]{47M.eps}\includegraphics[width=0.17\textwidth]{58M.eps}\includegraphics[width=0.17\textwidth]{610M.eps}\includegraphics[width=0.17\textwidth]{711M.eps}
\caption{The initial P-positions (the coordinates are less than 100)
of $(k,l)$M for $(k,l) = (3,5),(4,6),(4,7),(5,8),(6,10)$ and $(7,11)$
respectively. In support of Conjecture \ref{conjecture:1}, the ratios of
the respective coordinates seem to closely approximate $\phi$ or $1/\phi$.
(For $(2,3)$M, see Section \ref{Section:5}.)}
\label{figure:10}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.25\textwidth]{10_12_1500M.eps}\includegraphics[width=0.25\textwidth]{50_12_1500M.eps}\includegraphics[width=0.25\textwidth]{100_12_1500M.eps}\includegraphics[width=0.25\textwidth]{1500_12_1500M.eps}
\caption{The initial P-positions (the coordinates are less than 1500)
of four extensions of Maharaja Nim where the adjoined moves are
$\{(t,2t),(2t,t)\}$ where $t\in \{1, 2, \ldots, 10\}$, $\{1, 2, \ldots, 50\}$,
$\{1,2,\ldots, 100\}$ and $\mathbb{N}$ respectively. That is, the first three games
have a finite number of moves adjoined to Wythoff Nim but the last one
has infinitely many. Notice the seemingly emerging `bounded split' of the (upper) P-positions in the middle two figures; the ratios of the coordinates still seem to stay close to $\phi$. In the last figure, where an infinite number of moves are adjoined, the convergence to $\phi$ is destroyed, a fact which is proved in \cite{La12}, and an `unbounded split' (as in the rightmost figure) is established, which was recently proved in \cite{La}.}
\label{figure:11}
\end{figure}
To begin with, one might want to pay special attention to the
family of extensions of Wythoff Nim, where the
adjoined moves are of the form $(k, l)$ and $(l, k)$,
$k, l\in \mathbb{N}$, $k < l$. We call a game in this family $(k,l)$-Maharaja Nim,
$(k,l)$-M. (Another problem is indicated in Figure \ref{figure:11} and its discussion.) The P-positions are distinct from those of Wythoff Nim, see \cite{La}, if and only if $(k, l)$ is a so-called `Wythoff pair' or a `dual Wythoff pair', that is of the form $(\lfloor\phi n\rfloor, \lfloor\phi^2 n\rfloor)$ or
$(\lceil\phi n\rceil, \lceil\phi^2n\rceil )$, $n\in \mathbb{N}$.
Thus, in Maharaja Nim we take the first Wythoff pair
$(1, 2) = (\lfloor\phi\rfloor, \lfloor\phi^2\rfloor)$, whereas in the next
section we study $(2,3)$-Maharaja Nim, that is we let $(k,l)$ take
the values of
the first dual Wythoff pair $(2,3) = (\lceil\phi \rceil, \lceil\phi^2\rceil )$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{polynomiskMaharaja3.eps}
\caption{A `telescope' with `focus' $O(1)$ and `reflectors' along
the lines $\phi n$ and $n/\phi$ attempts to determine the outcome (P or N)
of some position, $(x, y)$ at the top of the picture. As we demonstrate
in Section \ref{Section:5} the method is successful for
$(2,3)$-Maharaja Nim. (It gives the correct value
for all extensions of Wythoff Nim with a finite non-terminating converging
dictionary). The focus is kept sufficiently wide (a constant)
to provide correct translations in each step.
The number of steps is linear in $\log (xy)$.}
\label{figure:12}
\end{figure}
\begin{Conj}\label{conjecture:1}
Let $k,l\in\mathbb{N},$ $k<l$. Then each upper P-position $(x, y)$
of $(k, l)$-M satisfies $y = \phi x + O(1)$.
\end{Conj}
Does this conjecture hold for any game of the form `a
finite number of moves adjoined to Wythoff Nim'?
Suppose that a given game $(k, l)$M has a finite (non-terminating)
dictionary (as for Maharaja Nim) thus, hypothetically,
providing an affirmative answer to Conjecture \ref{conjecture:1}.
Suppose further that the dictionary \emph{converges},
that is, given an arbitrary string-position, we can, within the distance
of a bounded number of bits, precisely determine when a new word starts.
For this particular game, let us sketch a polynomial time algorithm which
determines whether a given position $(x, y)$ (with $\frac{y}{x}$ approximately
$\phi$) is P, see also Figure \ref{figure:12}.
Suppose that we have computed an initial (sufficiently large)
sequence of the bit-string.
We sketch the steps of the decision procedure for $(k,l)$M as follows (a small numerical illustration of the back-tracking step follows the list):
\begin{itemize}
\item Back track $(x, y)$ via orthogonal reflections along the
lines $\phi n$ and $n/\phi$. Here we do not need to use our dictionary,
only to put marks at the precise locations of our reflecting points on
the lines $\phi n$ and $n/\phi$. That is, we get a finite sequence of
pairs of the form
$$(x, \phi x), (x, x/\phi), (x/\phi^2, x/\phi),\ldots , (x/\phi^p, x/\phi^{p-1}),$$
some $p\in \mathbb{N}$.
\item When we have back tracked as far as to our initial bit-string,
the `forward' translations can begin. Suppose that we know that the
dictionary converges within $q$ (which is supposed to be
much less than $x$ and $y$) bits and that the maximal length of
a translate is $c\le q$ bits.
\item Then it suffices to translate
$< \phi q$ bits in each step. If the first left hand side word begins with,
say the bit
$\lfloor x/\phi^p\rfloor - \phi q \le b_1 \le \lfloor x/\phi^p\rfloor - q$
we may translate it and be assured to find another left hand side word
beginning at a bit
$\lfloor x/\phi^{p-1}\rfloor -\phi q \le b_2 \le \lfloor x/\phi^{p-1}\rfloor -q$
and so on. For the final computation of the value of $(x,y)$ it suffices to,
given the left hand side word which contains $x$,
compute the P-positions in some area of size less than $ c\times c$ squares.
(Alternatively, given a short dictionary, the list of P-positions
corresponding to each word may be computed beforehand.)
\item This procedure takes $p$ steps where $\phi^p$ is proportional to $x+y$.
\end{itemize}
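As a small numerical illustration of the back-tracking step (our own sketch; it only tracks the successive column coordinates $x, x/\phi, x/\phi^2, \ldots$ and ignores the translation step), the number of reflection points grows like $\log_\phi x$:
\begin{verbatim}
# Minimal sketch (illustration only): the back-tracking step, keeping only the
# successive column coordinates x, x/phi, x/phi^2, ... down to a precomputed
# initial segment of the bit-string.
import math

phi = (1 + math.sqrt(5)) / 2

def backtrack(x, initial_segment=100):
    points = [float(x)]
    while points[-1] > initial_segment:
        points.append(points[-1] / phi)
    return points

for x in (10**6, 10**9, 10**12):
    p = len(backtrack(x)) - 1
    print(x, p, round(math.log(x / 100, phi), 1))   # p is linear in log x
\end{verbatim}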
\section{The close relative $(2,3)$-Maharaja Nim has polynomial time complexity}\label{Section:5}
The game $(2, 3)$-Maharaja Nim, $(2, 3)$-M, is as Maharaja Nim except
that, for this game, the Knight's jumps are of the form $(2,3)$ and $(3,2)$
(and not $(1,2)$ and $(2,1)$). In this section we let
$(a_1,b_1), (a_2,b_2), \ldots$ denote the upper P-positions of $(2,3)$-M,
where $(a_i)$ is increasing. As we have remarked in Section \ref{Sec:4}
analogs of Proposition \ref{rowcolumn} and (\ref{diag}) hold for $(2,3)$-M.
Hence $(a_i)$ and $(b_i)$ are complementary.
\begin{figure}[ht]
\centering
\includegraphics[width=0.35\textwidth]{23Maharaja.eps}
\caption{The move options from a given position of $(2,3)$-Maharaja Nim.}\label{figure:13}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.55\textwidth]{23Mbitstring.eps}
\caption{The initial P-positions of $(2,3)$-Maharaja Nim together
with its initial bit-string.}\label{figure:16}
\end{figure}
Since Lemma \ref{LemmaPerfect} does not hold for $(2,3)$-M, for the
analysis of this game we use a relaxation of the approach in
Section \ref{Sec:2}. As we saw at the end of that section, the crucial
property for approximate linearity to hold is that the dictionary promised
a sufficiently frequent reappearance of property (\ref{nth}).
Hence, for a new left hand side word to be translated it is not necessary
that we require a perfect sector as defined for $(1,2)$-Maharaja to be detected.
It turns out that the condition (\ref{12n}) in Lemma \ref{LemmaPerfect} suffices for our purposes.
This almost corresponds to a perfect sector, by which we mean that at most a finite number
of positions are deleted from a perfect sector.
That is, the requirement is still that (\ref{nth}) and (\ref{12n}) hold simultaneously.
Suppose that the initial P-positions up to column $a_n$ have been coded in a
unique $(2,3)$-M bit-string, where,
as before, a `1' (`0') in the $i^{th}$ position denotes a lower (upper)
P-position in column $i$. That is, the read head is about to read the
${a_n}^{th}$ bit in the string. As in Section \ref{Sec:2}, by
symmetry of P-positions, a finite number of bits follow to
the right of the read head's current position.
Then a (new) left hand side word $\omega\ne 1$ (the word `1' is translated to `0')
is included to the dictionary if and only if the following two criteria
are satisfied. Each one of the numbers $0, 1, \ldots , n-1$ is represented as
the difference $b_i-a_i$ of the coordinates of precisely
one of the first $n - 1$ upper P-positions \emph{and} $b_n-a_n=n$.
As usual, the translation of $\omega$ is computed and concatenated at the end
of the bit-string. The next left hand side word begins by the ${a_n}^{th}$ column.
\subsection{$(2,3)$-Maharaja Nim's dictionary process}
Given a finite binary dictionary, we define unique \emph{non-prefix free} translations by the following rule. Suppose that the read head has finished one translation in the (infinite) binary string $x$ and starts reading at position $n$. Suppose further that it detects the left hand side entries $\omega_1,\ldots , \omega_k$ of the dictionary, reading from position $n$ and onwards, where $\omega_i$ is a prefix of $\omega_{i+1}$ for all $i<k$, so that $\omega_k$ is not the prefix of any other entry. Then it accepts the translation of $\omega_k$ and it is unique if it exists.
Let us illustrate this definition by defining the following (very short) non-prefix free dictionary of $(2,3)$-M:
\begin{align}
0 &\rightarrow 10\label{0}\\
1 &\rightarrow 0\label{1}\\
01000 &\rightarrow 100011100\label{01000}\\
01010 &\rightarrow 10001100\label{01010}.
\end{align}
Since the bit `0' is a prefix of the words `01000' and `01010' we need
some external rule to decide which translation to use in the construction
of the bit-string. Suppose that the next bit detected by the read head
is `0'. Then the translation is as in (\ref{0}), except if the next
four bits are either `1000' or `1010'. For these cases the
translations are as in (\ref{01000}) and (\ref{01010}) respectively.
Notice that, by these translation rules, by (\ref{0}) and (\ref{1}), \emph{any} bit-string
has a (longer) translation, and therefore the construction cannot terminate. Before proving that
this dictionary is correct, let us provide some initial translations.
Column-wise, the first non-terminal
P-positions of $(2,3)$M are $(1, 2),$ $(2, 1), (3, 6), (4, 8), (5, 7),
(6, 3), (7, 5)$ and $(8, 4)$. These P-positions
correspond to the bit-string `01000111' on the $x$-axis and
`0100011100000' on the $y$-axis, see Figure \ref{figure:16}. That is, we can assume that the first word to be translated
starts in position $(9, 14)$, which corresponds to the first diagonal (of the form (\ref{diag})) in the $9^{th}$ column. It is clear that the initial
interference between rows and columns has ended here, so, to begin with,
the read head's position is `01000111\underline{0}0000'. Thus, the first
four translations are of the type (\ref{0}) which produce the bit-string
`010001110000\underline{0}10101010'. Then a type (\ref{01010}) translation
follows which produces `01000111000001010\underline{1}01010001100', and so on.
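The longest-match rule and the initial translations above can be reproduced mechanically. The following minimal sketch (our own illustration) starts from the string `0100011100000' with the read head at the $9^{th}$ bit, as described above, and always translates the longest matching left hand side word:
\begin{verbatim}
# Minimal sketch (illustration only): grow (2,3)-Maharaja Nim's bit-string
# with the non-prefix free dictionary, translating the longest matching word.
DICT23 = {"0": "10", "1": "0", "01000": "100011100", "01010": "10001100"}

def expand_23M(seed="0100011100000", first_read=9, target_len=200):
    s, pos = seed, first_read - 1      # positions in the text are 1-indexed
    while len(s) < target_len:
        word = max((w for w in DICT23 if s.startswith(w, pos)), key=len)
        s += DICT23[word]
        pos += len(word)
    return s

bits = expand_23M()
print(bits[:29])   # 01000111000001010101010001100, as in the example above
\end{verbatim}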
It is easy to see that, given a perfect
sector to the right of the column of the read head's position, each
translation in $(2,3)$-M's dictionary is correct.
However, since there is no a priori guarantee that a new translation
starts at a perfect sector, we need to exclude certain combinations of translations, thus preventing any $(2,3)$-type move to short-circuit two P-positions. The translations (\ref{0}) and (\ref{01000}) could potentially interfere with a succeeding translation but (\ref{1}) and (\ref{01010}) cannot. Precisely, if the word `0' were followed by a `1' and then any of the words beginning with `0', or if the word `01000' were followed by a `0', then the translation rules would be wrong, because of a $(2,3)$-type ``short-circuit" of P-positions. These are all cases that we need to exclude. Let us begin to rule out the latter case.\\
\noindent {Claim 1:} If the left hand side word
`01000' is detected by the read head, then it is succeeded by the left hand side word `1'.\\
Suppose, on the contrary, that the read
head reads the left hand side word `01000' followed by a `0'.
This string, `010000', which we say is part of our \emph{original} string, must have been translated from the left hand side words
`$x$',`0','1','1','1' (in this order and where $x$ is the left hand side word in either
(\ref{0}), (\ref{01000}) or (\ref{01010})). But the string `00111'
only appears as a translation in (\ref{01000}). Further, the string `01000111'
is forced since `11000111' cannot appear, but it cannot be that the read
head detected the first five bits `01000'
as the word in (\ref{01000}), since it would have translated to `100011100000' which does not include the original string '01000' in the right place.
Thus, to prevent this, preceding the pattern `01000111' there must have been either `0100', `01', or `0101'. It follows that either of the strings
\begin{align}
& 010001000111,\label{first}\\
& 0101000111, \text{ or}\label{second}\\
& 010101000111\label{third}
\end{align}
must have been read in the stage before the original string, \emph{and} where a new left hand side word starts from the first `0'.
But then, for the case (\ref{first}),
this translates to the original pattern `1000111000101010000', which
forces that a left hand side word starts after the consecutive words `1',`1',`1', that is `01010' will be detected as a word and so the word `01000' would not have been read in the original string, which contradicts our initial assumption.
For the latter cases (\ref{second}) and (\ref{third}), we get the translates `100011001010000' and `100011000101010000' respectively, both which may be treated in analogy to the first case, but here it is forced that new left hand side words start after the consecutive words `1',`1' respectively.
\hfill $\Box$\\
\noindent {Claim 2:} Any sequence of left hand side words beginning with `0', `1' and then some pattern beginning with a `0' is impossible.\\
We are here concerned with the possibility that the read head detects a sequence of left
hand side words beginning with `0', `1' and then some sequence
`$0xy$', where $x$ and $y$ represent two bits.
It is immediate by the translation rules that we may exclude the cases where
$xy$ represents `00' or `10'. Namely, for these two cases, by
the `prefix-rule' of choosing the longest left hand side word in
the dictionary, we would rather have used one of the translations
in (\ref{01000}) or (\ref{01010}). Also, the case where $xy$ is `11' may be excluded since
the string `01011' does not appear in any combination of the right hand side translates. Thus, it only remains to analyze the case where the two bits are `01'.
That is, we want to exclude the pattern `01001'. By looking at the translations it is obvious that the string `1001' must have been translated from the left
hand side words `0', `1' and then a word beginning with a `0'. This means that
precisely the pattern which we want to exclude has appeared in a
previous translation (and thereby also short-circuiting two P-positions in columns strictly to the left of the current position). Thus (using Figure \ref{figure:16} as a base case) strong induction resolves this case.
\subsection{Polynomiality}
We have proved that the dictionary in (\ref{0}) to (\ref{01010}) is correct
and thereby also that the P-positions of
$(2,3)$-Maharaja Nim lie within a bounded distance of either the
`line' $\phi n$ or $\phi^{-1}n$. Next, we will demonstrate that
this dictionary gives a polynomial strategy, as outlined
in Section \ref{Sec:4}.
For this, it suffices to prove that, given an arbitrary position in
the infinite bit-string, by a search within a bounded number of bits
we can determine which one of the four given translations is correct.
If the read head reads the pattern `11' then, by the left hand side words
in the dictionary and in particular (\ref{1}),
we can conclude that a new word starts by the first `1'.
Hence we assume that no two consecutive `1's are detected.
By analyzing the translations in the dictionary one can see that at most five consecutive `0's can appear. Therefore, we may assume that the read
head reads the pattern `010' within a bounded distance,
which by previous arguments means either `01000' or `01010'. Both these strings are detected as words, unless the preceding pattern ends
with `0100', `01' or `0101'. Hence one needs to investigate the following
six ambiguous strings:
\begin{enumerate}[(a)]
\item 010001000,
\item 0101000,
\item 010101000,
\item 010001010,
\item 0101010,
\item 010101010.
\end{enumerate}
The pattern `10001000' in (a) cannot have been translated from the string `011011'. This follows by viewing the possible combinations of right hand side translates. Hence, the combination of translations comes from first `0', `1' and `1' and then `01000' or `01010'. But these combinations are also impossible since they both enforce the impossible pattern ``1101''. Hence (a) cannot appear.
The string in (b) must have been translated from `0', `0', `1', `1',
which, by (\ref{01000}) and (\ref{01010}) and since all
translates end with a `0', implies that the three preceding bits must have been `010'. Hence, we can extend the pattern to be translated to `0100011'. It
is given that the prefix `01000' of this string cannot be
detected as a left hand side word. Therefore, the translation
of `0100011' must be `10010101000' which has the prefix
\begin{align}\label{1001}
\text{`1001'}.
\end{align}
But, by the left hand side words in the dictionary,
any string containing (\ref{1001}) must converge between the two `0's.
Hence a new word must start as `01010' followed by `1', `0', `0',$\ldots$.
Notice that (c) has this string as a suffix and hence
it may also be included in the argument. Also, by (\ref{1001}) and by the argument in (a), in any attempt to disprove convergence (d) must be preceded by the pattern `01', but then again, we may analyze (d) as (b).
We are left with the strings (e) and (f).
Since a repetition of more than five consecutive patterns `01' implies that
more than five consecutive `0's have been translated, which is impossible, we
may assume that the repetitions of `01' in (f) have been preceded by
either of the patterns `10' or `00' (`11' is already ruled out). Again,
the first case leads to (\ref{1001}). Notice that (e) can also
be included in this argument. For the second case, notice that any
string beginning with `00001' converges after the first three `0's, that is,
a new word must begin with `01', so it suffices to study the string
`1000101010', which (since the pattern `11' is excluded) has been
treated already in (d).
We have proved that, given an arbitrary position in the bit-string, at most
a bounded number of preceding bits need to be searched in order to find the
correct translation. By Section \ref{Sec:4} this convergence gives a polynomial
time winning strategy of $(2,3)$-Maharaja Nim.
| {
"timestamp": "2012-07-04T02:06:02",
"yymm": "1207",
"arxiv_id": "1207.0765",
"language": "en",
"url": "https://arxiv.org/abs/1207.0765",
"abstract": "New combinatorial games are introduced, of which the most pertinent is Maharaja Nim. The rules extend those of the well-known impartial game of Wythoff Nim in which two players take turn in moving a single Queen of Chess on a large board, attempting to be the first to put her in the lower left corner. Here, in addition to the classical rules a player may also move the Queen as the Knight of Chess moves. We prove that the second player's winning positions of Maharaja Nim are close to the ones of Wythoff Nim, namely they are within a bounded distance to the lines with slope $\\frac{\\sqrt{5}+1}{2}$ and $\\frac{\\sqrt{5}-1}{2}$ respectively. For a close relative to Maharaja Nim, where the Knight's jumps are of the form $(2,3)$ and $(3,2)$ (rather than $(1,2)$ and $(2,1)$), we also demonstrate polynomial time complexity to the decision problem of the outcome of a given position.",
"subjects": "Combinatorics (math.CO)",
"title": "Maharaja Nim, Wythoff's Queen meets the Knight",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969694071564,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7099044304045213
} |
https://arxiv.org/abs/2110.04716 | Spectral structure of the Neumann-Poincaré operator on thin ellipsoids and flat domains | We investigate the spectral structure of the Neumann-Poincaré operator on thin ellipsoids. Two types of thin ellipsoids are considered: long prolate ellipsoids and flat oblate ellipsoids. We show that the totality of eigenvalues of the Neumann-Poincaré operators on a sequence of the prolate spheroids is densely distributed in the interval [0,1/2] as their eccentricities tend to 1, namely, as they become longer. We then prove that eigenvalues of the Neumann-Poincaré operators on the oblate ellipsoids are densely distributed in the interval [-1/2, 1/2] as the ellipsoids become flatter. In particular, this shows that even if there are at most finitely many negative eigenvalues on the oblate ellipsoids, more and more negative eigenvalues appear as the ellipsoids become flatter. We also show a similar spectral property for flat three dimensional domains. | \section{Introduction}
For a bounded domain $\Omega$ with the Lipschitz continuous boundary in $\mathbb{R}^d$, $d=2,3$, the Neumann--Poincar\'e (abbreviated by NP) operator associated with $\partial\Omega$ is the boundary integral operator on $\partial\Omega$ defined by
\begin{equation}
\mathcal{K}_{\partial\Omega} [\varphi](x) = \frac{1}{\omega_d} \int_{\partial\Omega} \frac{\langle y-x, {\nu(y)} \rangle}{|x-y|^d} \varphi(y) dS(y), \quad x \in \partial\Omega,
\end{equation}
where $\omega_d=2\pi$ if $d=2$ and $4\pi$ if $d=3$, and $\nu(y)$ denotes the outward unit normal to $\partial\Omega$ at $y \in \partial\Omega$. It naturally appears when solving the classical Dirichlet problem using layer potentials, and is commonly called the double layer potential. The NP operator can be realized as a self-adjoint operator on $H^{1/2}(\partial\Omega)$, the Sobolev space on $\partial\Omega$ \cite{KPS} (see also the recent survey \cite{AKMP}). If $\partial\Omega$ is smooth ($C^{1, \alpha}$ for some $\alpha>0$ to be precise), then it is a compact operator and has a countable number of eigenvalues accumulating to $0$. It is known that the NP eigenvalues (eigenvalues of the NP operator) are confined to the interval $(-1/2, 1/2]$ (see, for example, \cite[Chapter XI, Section 11]{Kellog-book} or \cite{AKMP}).
The NP spectrum depends heavily on the geometry of the surface (or the curve) on which the operator is defined. In particular, as the boundary $\partial\Omega$ becomes `singular' in some sense, the spectrum seems to approach the bounds $\pm 1/2$. For example, if $\Omega$ consists of two strictly convex planar domains and their boundaries get closer, then more and more eigenvalues of the corresponding NP operator approach $\pm 1/2$ \cite{BT, BT2}. If a planar curvilinear domain has corners, then the essential spectrum of the NP operator is an interval whose end-points are determined by the smallest angle of the corners \cite{PP1, PP2} (see also \cite{BZ}) (in this case, the essential spectrum, possibly except $0$, consists of absolutely continuous spectrum \cite{KLY, Perfekt}). If a corner gets sharper and the domain becomes needle-like around the corner, then the essential spectrum approaches $[-1/2,1/2]$. For rectangles whose corner angles are $\pi/2$, the essential spectrum is fixed to be $[-1/4,1/4]$. However, it is shown in \cite{HKL} by numerical computations that there appear more and more eigenvalues outside the interval, which approach $\pm 1/2$, as the aspect ratio of the rectangle becomes larger.
Motivated by the above observation, it is proved in the recent paper \cite{AKM21} that if $D_R$ is a rectangular planar domain of aspect ratio $R$, then for any sequence $R_j$ of positive numbers tending to $\infty$ as $j \to \infty$, the NP spectra are densely distributed in $[-1/2, 1/2]$. More precisely,
\begin{equation}\label{2D}
\overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial D_{R_j}})} = [-1/2, 1/2].
\end{equation}
Here and afterwards, $\sigma(\mathcal{K}_{\partial \Omega})$ denotes the spectrum of the NP operator $\mathcal{K}_{\partial \Omega}$ on $H^{1/2}(\partial \Omega)$. This proves that more and more NP eigenvalues appear outside the essential spectrum $[-1/4,1/4]$ to densely fill up $[-1/2, 1/2] \setminus [-1/4,1/4]$. This is in accordance with the numerical finding in \cite{HKL}. A similar spectral property is shared by a sequence of ellipses. Since the NP eigenvalues on ellipses are explicitly known, it can be shown without difficulty that the spectral property \eqnref{2D} holds for ellipses of the form $x_1^2/R_j^2 + x_2^2 < 1$ (see \cite{AKM21} for a proof). It says that even if the totality of the spectrum is countable, it is dense in $[-1/2, 1/2]$, and it holds regardless of the choice of the sequence $R_j$.
The purpose of this paper is to investigate the spectral structure of the NP operator on thin domains including ellipsoids and prove results similar to \eqnref{2D}. Unlike the two-dimensional case, there are two different kinds of thinness in three dimensions: thin and long (like prolate spheroids), thin and flat (like oblate ellipsoids). As we will see,
three-dimensional bounded domains exhibit an
NP spectral structure different from that of two-dimensional ones. In two dimensions, the NP spectrum always appears in pairs $\pm \lambda$, which is due to the existence of harmonic conjugates. However, there are domains in three dimensions where the NP operators have only positive eigenvalues: the NP eigenvalues on a sphere are $1/(4n+2)$ for $n = 0, 1, 2, \ldots$ \cite{Poi1}, and they are all positive on prolate spheroids \cite{AA}. Thus, the property \eqnref{2D} does not hold for prolate spheroids. It is shown in \cite{Ahner} that there is an oblate ellipsoid having a negative eigenvalue. To the best of our knowledge, this is the first example of a three-dimensional domain with a negative NP eigenvalue.
Recently, it was proved in \cite{MR-SPMJ-20} that the NP operator on the boundary of a strictly convex domain in three dimensions can have at most finitely many negative eigenvalues. If the boundary of the domain has a concave part like tori, then there are (infinitely) many negative eigenvalues (see \cite{AJKKM, JK, MR-SPMJ-20}).
Here we discuss a possible advantage of having negative eigenvalues. Suppose that a three-dimensional domain $\Omega$ has $k$ as its dielectric constant, while the background matrix $\mathbb{R}^3 \setminus \Omega$ has $1$. Then plasmon resonance in the quasi-static limit occurs if
\begin{equation}\label{fredholm}
\frac{k+1}{2(k-1)}= \lambda,
\end{equation}
where $\lambda$ is an eigenvalue of the NP operator on $\partial\Omega$ (see \cite{Grieser}). Since $\lambda$ lies in $(-1/2, 1/2]$, \eqnref{fredholm} can be fulfilled only when $k$ is negative (so that $\Omega$ is a meta-material with a negative dielectric constant). The relation \eqnref{fredholm} can be achieved by a larger $k$ (that is, a smaller $|k|$) if $\lambda$ is negative (see Figure \ref{negative}). This may yield an advantage in practice even though verifying it is out of reach of mathematical research. We also mention a recent work \cite{AKMN} where it is shown by numerical computation that the spectral property of the NP operator (in relation to the cloaking by anomalous localized resonance) on the torus is quite different from that on strictly convex surfaces.
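For a concrete comparison, note that solving \eqnref{fredholm} for $k$ gives
\begin{equation*}
k = \frac{2\lambda+1}{2\lambda-1};
\end{equation*}
for instance, the eigenvalue $\lambda = 1/4$ requires $k=-3$, whereas $\lambda=-1/4$ requires $k=-1/3$, a dielectric constant of much smaller modulus.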
\begin{figure}[ht!]
\begin{center}
\epsfig{figure=figure.eps,width=6cm}
\end{center}
\caption{The curve is the graph of $\frac{k+1}{2(k-1)}$ and $\lambda_+, \ \lambda_-$ are positive and negative NP eigenvalues, respectively. The relation \eqnref{fredholm} for $\lambda_-$ can be satisfied by a larger $k$ (the smaller $|k|$) than that for $\lambda_+$.} \label{negative} \end{figure}
We now present the main results of this paper. Let us begin with the prolate spheroids. Let $\Pi_R$ be a prolate spheroid, namely, for $R \ge 1$,
\begin{equation}\label{prolate}
\Pi_R:= \Big\{ (x_1,x_2, x_3): x_1^2 + x_2^2 + \frac{x_3^2}{R^2} < 1 \Big\}.
\end{equation}
If we dilate $\Pi_R$ by $R^{-1}$, $\Pi_R$ becomes thin. That is why we call them `thin' domains. The NP spectrum is invariant under dilation.
We obtain the following proposition for prolate spheroids.
\begin{prop}\label{prop:prolate}
Let $\Pi_R$ be the prolate spheroid defined by \eqnref{prolate}. If $R_j$ is a sequence of numbers such that $R_j \ge 1$ for all $j$ and $R_j \to \infty$ as $j \to \infty$, then
\begin{equation}\label{main:prolate1}
[0, 1/2] \subset \overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial \Pi_{R_j}})} .
\end{equation}
\end{prop}
Since $\sigma(\mathcal{K}_{\partial \Pi_{R}}) \subset (0, 1/2]$ if $R \ge 1$ as proved in \cite{AA}, we obtain the following theorem as an immediate consequence.
\begin{theorem}\label{thm:prolate}
Let $\Pi_R$ be the prolate spheroid defined by \eqnref{prolate}. If $R_j$ is a sequence of numbers such that $R_j \ge 1$ for all $j$ and $R_j \to \infty$ as $j \to \infty$, then
\begin{equation}\label{main:prolate}
\overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial \Pi_{R_j}})} = [0, 1/2].
\end{equation}
\end{theorem}
Theorem \ref{thm:prolate} shows that the totality of eigenvalues of $\mathcal{K}_{\partial \Pi_{R_j}}$ is dense in $[0, 1/2]$ regardless of the choice of the sequence $R_j$ as long as $R_j \to \infty$.
There are significant works on the NP spectrum on ellipsoids \cite{Ahner, AA, ADR, Mart, Ritt}. For example, NP eigenvalues on prolate spheroids are expressed in terms of values of Legendre functions (see \eqnref{ev:prolate}). However, it is unlikely that Theorem \ref{thm:prolate} (and Theorem \ref{thm:oblate} below) can be proved using those results since we do not have enough knowledge about value distributions of Legendre functions. Nonetheless, we are able to prove the following theorem based on those results, which is in good comparison with Theorem \ref{thm:prolate}: It shows that the totality (in continuum) of the NP eigenvalues on prolate spheroids covers the interval $(0,1/2]$ while Theorem \ref{thm:prolate} shows that the NP eigenvalues on a sequence of prolate spheroids, which is countable, are dense in $[0,1/2]$ regardless of the choice of the sequence.
\begin{theorem}\label{thm:prolate2}
Let $\Pi_R$ be the prolate spheroid defined by \eqnref{prolate}. It holds that for any $R_0 \ge 1$,
\begin{equation}\label{prolate2}
\cup_{R\geq R_0}\sigma(\mathcal{K}_{\partial\Pi_R}) = (0, 1/2].
\end{equation}
\end{theorem}
We then turn our attention to oblate ellipsoids. Let $a_j$ ($j=1,2$) be positive numbers. For a positive number $R$, let $\Omega_R$ be an oblate ellipsoid defined by
\begin{equation}\label{oblate}
\Omega_R:= \left\{ (x_1, x_2, x_3): \frac{x_1^2}{(a_1 R)^2} + \frac{x_2^2}{(a_2 R)^2} + x_3^2 < 1 \right\}.
\end{equation}
If $a_1=a_2$, then $\Omega_R$ is an oblate spheroid.
We obtain the following theorem for oblate ellipsoids:
\begin{theorem}\label{thm:oblate}
Let $\Omega_R$ be the oblate ellipsoid defined by \eqnref{oblate}. If $R_j$ is a sequence of positive numbers such that $R_j \to \infty$ as $j \to \infty$, then
\begin{equation}\label{main:oblate}
\overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial \Omega_{R_j}})} = [-1/2, 1/2].
\end{equation}
\end{theorem}
Theorem \ref{thm:oblate} shows that the totality of eigenvalues of $\mathcal{K}_{\partial \Omega_{R_j}}$ is dense in $[-1/2, 1/2]$. This is rather surprising. As mentioned earlier, $\mathcal{K}_{\partial \Omega_{R_j}}$ admits at most finitely many negative eigenvalues since $\Omega_{R_j}$ is strictly convex. However, \eqnref{main:oblate} says that negative eigenvalues in $\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial \Omega_{R_j}})$ are dense in $[-1/2,0]$.
Proposition \ref{prop:prolate} and Theorem \ref{thm:oblate} are proved by investigating the limiting behaviour of the NP operators as $R \to \infty$. We show that the NP operator on the prolate spheroids converges (on some test functions) to a certain one-dimensional convolution operator as $R \to \infty$ (see \eqnref{L0}). We prove that the Fourier transform of the convolution kernel has values in $(0,1/2]$ and hence the operator has continuous spectrum $(0,1/2]$, and use this fact to prove Proposition \ref{prop:prolate}. The NP operator on oblate ellipsoids converges to the two-dimensional Poisson integral evaluated at $2$ or $-2$ (this is so because oblate ellipsoids have the upper and lower parts) (see \eqnref{4100}). The Poisson integral operator has continuous spectrum $(0,1/2]$. But, since this operator is evaluated at $\pm 2$, we are able to prove Theorem \ref{thm:oblate}.
The property \eqnref{main:oblate} seems to be a generic property of thin, flat domains. To demonstrate it, we consider typical thin, flat domains other than oblate ellipsoids. To define such a domain, let $U$ be a bounded planar domain with the Lipschitz continuous boundary $\partial U$. Let $\Phi$ be the domain in $\mathbb{R}^3$ whose boundary consists of three pieces, namely,
\begin{equation}\label{Fboundary}
\partial \Phi = \Sigma^+ \cup \Sigma^- \cup \Sigma^s
\end{equation}
where the top and bottom are given by $\Sigma^{\pm}= U \times \{ \pm 1\}$ and $\Sigma^s$ is a surface connecting $\partial U \times \{ +1\}$ and $\partial U \times \{ -1\}$. We assume that $\partial \Phi$ is Lipschitz continuous. For $R>0$ let
\begin{equation}\label{thinflat}
\Phi_R:= \{(Rx_1,Rx_2, x_3): (x_1,x_2, x_3)\in \Phi \}.
\end{equation}
We obtain the following theorem using the method of proving Theorem \ref{thm:oblate}.
\begin{theorem}\label{thm:thinflat}
Let $\Phi_R$ be the domain defined by \eqnref{thinflat}. If $R_j$ is a sequence such that $R_j \to \infty$ as $j \to \infty$, then
\begin{equation}\label{main:thinflat}
\overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial\Phi_{R_j}})} = [-1/2, 1/2].
\end{equation}
\end{theorem}
One may naturally ask whether Theorem \ref{thm:prolate} holds for cylinder-like domains or even prolate ellipsoids. One can show that \eqnref{main:prolate1} holds for such domains, but we do not know if the reverse inclusion is true. In the oblate case, the reverse inclusion is always true, namely, the NP spectrum is contained in $[-1/2, 1/2]$.
The rest of the paper is devoted to proving main results:
Proposition \ref{prop:prolate} and Theorem \ref{thm:prolate2} in Section \ref{sec:2}; Theorem \ref{thm:oblate} in Section \ref{sec:3}; Theorem \ref{thm:thinflat} in Section \ref{sec:4}.
We use the standard notation $A \lesssim B$, which means that there is a constant $C$, independent of the parameter $R$ of the given ellipsoids, such that $A \le C B$. The meaning of $A \gtrsim B$ is analogous, and $A \sim B$ means that both $A \lesssim B$ and $A \gtrsim B$ hold.
\section{Proof of Proposition \ref{prop:prolate} and Theorem \ref{thm:prolate2}}\label{sec:2}
In this section we prove Proposition \ref{prop:prolate} (and Theorem \ref{thm:prolate} as its consequence) and Theorem \ref{thm:prolate2}. Since $\partial\Pi_R$ is smooth, a non-zero eigenvalue of $\mathcal{K}_{\partial \Pi_{R}}$ on $L^2(\partial\Pi_R)$ is automatically an eigenvalue on $H^{1/2}(\partial\Pi_R)$. Thus it is enough to prove \eqnref{main:prolate1} with $\sigma(\mathcal{K}_{\partial \Pi_{R}})$ understood as the spectrum on $L^2(\partial\Pi_R)$.
\subsection{Parametrization of the NP operator on prolate spheroids}
Let
$$
\eta(t)= \eta_R(t):= \sqrt{1-t^2/R^2}.
$$
We parametrize the prolate spheroid $\Pi_R$ given by \eqnref{prolate} by $x= (\eta(x_3) \cos\theta, \eta(x_3) \sin \theta, x_3)$. Let $y = (\eta(y_3) \cos\phi, \eta(y_3) \sin \phi, y_3)$. Then, straightforward calculations yield that
\begin{equation}\label{dSpro}
\nu(y) = \left(1-\frac{y_3^2}{R^2} + \frac{y_3^2}{R^4} \right)^{-1/2} \left(\eta(y_3) \cos\phi, \eta(y_3) \sin \phi, \frac{y_3}{R^2} \right)
\end{equation}
and
\begin{equation}\label{dSpro2}
dS(y) = \left(1-\frac{y_3^2}{R^2} + \frac{y_3^2}{R^4} \right)^{1/2} d\phi dy_3.
\end{equation}
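These formulas can be checked directly: since $\partial\Pi_R$ is the level set $\{x_1^2+x_2^2+x_3^2/R^2=1\}$, we have $\nu(y)=\nabla F(y)/|\nabla F(y)|$ with $F(y)=y_1^2+y_2^2+y_3^2/R^2$ and
$$
|\nabla F(y)|= 2\sqrt{\eta(y_3)^2+ \frac{y_3^2}{R^4}} = 2\left(1-\frac{y_3^2}{R^2}+\frac{y_3^2}{R^4}\right)^{1/2},
$$
while the surface element of the surface of revolution is $dS(y)= \eta(y_3)\sqrt{1+\eta'(y_3)^2}\, d\phi\, dy_3$ with $\eta'(t)=-t/(R^2\eta(t))$, which gives \eqnref{dSpro2}.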
Thus we have
\begin{align*}
&\frac{1}{4\pi} \frac{\langle y-x, {\nu(y)} \rangle}{|x-y|^3} dS(y) \\
&= \frac{1}{4\pi} \frac{(\eta(y_3)^2- \eta(x_3)\eta(y_3) \cos(\theta-\phi) + \frac{y_3}{R^2} (y_3-x_3))}{\,\,[\big( \eta(x_3)^2+ \eta(y_3)^2- 2\eta(x_3)\eta(y_3) \cos(\theta-\phi) \big) + (x_3-y_3)^2]^{3/2}\,}\, d\phi dy_3.
\end{align*}
Let $g(x_3)$ be a function supported in $(-R, R)$. Define $\psi$ on $\partial\Pi_R$ by
\begin{equation}\label{extpro}
\psi(x)= \psi (\eta(x_3) \cos\theta, \eta(x_3) \sin \theta, x_3)= g(x_3).
\end{equation}
Thanks to \eqnref{dSpro2}, we have
\begin{equation}
\| \psi \|_{L^2(\partial \Pi_R)} \lesssim \| g \|_2.
\end{equation}
Additionally, if $g(x_3)$ is supported in $(-R^{1-\sigma}, R^{1-\sigma})$ for some $\sigma \in (0, 1)$, then
we have
\begin{equation}\label{psig}
\| \psi \|_{L^2(\partial \Pi_R)} \sim \| g \|_2.
\end{equation}
Moreover, $\mathcal{K}_{\partial \Pi_R}[\psi]$ can be expressed as
\begin{equation}\label{KcalHcal}
\mathcal{K}_{\partial \Pi_R}[\psi](x) = \mathcal{H}_R [g](x_3) ,
\end{equation}
where $\mathcal{H}_R$ is the integral operator defined by the integral kernel $H_R(x_3, y_3)$ given by
\begin{equation}\label{H-kernel}
H_R(x_3,y_3) = \frac{1}{2\pi} \int_{0}^{\pi} \frac{ 1-R^{-2}x_3y_3- \eta(x_3)\eta(y_3) \cos\theta}{ [\big( \eta(x_3)^2+ \eta(y_3)^2- 2\eta(x_3)\eta(y_3) \cos\theta \big) + (x_3-y_3)^2]^{3/2} }\, d\theta.
\end{equation}
If $x_3$ and $y_3$ lie in $(-R^{1-\sigma}, R^{1-\sigma})$, then $\eta(x_3)$ and $\eta(y_3)$ tend to $1$ as $R \to \infty$. Thus, formally speaking, $H_R(x_3,y_3)$ tends to $L_0(x_3-y_3)$, where
\begin{equation}\label{L0}
L_0(t) := \frac{1}{2\pi} \int_{0}^{\pi} \frac{1- \cos\theta}{ [\big( 2- 2\cos\theta \big) + t^2]^{3/2} } d\theta .
\end{equation}
Let $\widehat{f}$ denote the Fourier transform on $\mathbb{R}^d$, namely,
\begin{equation}\label{Fourdef}
\widehat{f}(\xi)=\mathcal{F}[f](\xi):= \int_{\mathbb{R}^d} e^{-2\pi i \xi \cdot x} f(x) dx.
\end{equation}
Note that
\begin{equation}
\widehat{L_0}(\xi) = \frac{1}{4\pi} \int_{0}^{\pi} \widehat{k}(\sqrt{2(1- \cos\theta)} \xi) d\theta,
\end{equation}
where
\begin{equation}\label{functionk}
k(t):= \frac{1}{(1+t^2)^{3/2}}.
\end{equation}
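This identity follows from the scaling relation: for $a>0$,
$$
\frac{1}{(a^2+t^2)^{3/2}} = \frac{1}{a^3}\, k\Big(\frac{t}{a}\Big),
\qquad
\mathcal{F}\Big[ \frac{1}{(a^2+t^2)^{3/2}} \Big](\xi) = \frac{1}{a^2}\, \widehat{k}(a\xi),
$$
applied in \eqnref{L0} with $a=\sqrt{2(1-\cos\theta)}$, for which $1-\cos\theta=a^2/2$.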
\begin{lemma}\label{lem:khat}
Let $k$ be the function defined by \eqnref{functionk}. Then, $\widehat{k}(\xi)$ is even, decreasing in $\xi \ge 0$, continuously differentiable on $\mathbb{R}$, $0 < \widehat{k}(\xi) \le \widehat{k}(0)=2$, and
\begin{equation}\label{khatdecay}
|\widehat{k}(\xi)| \lesssim \frac{1}{1+|\xi|^N}
\end{equation}
for any positive integer $N$.
\end{lemma}
\begin{proof}
Since
$$
\widehat{k}(\xi)= 2 \int_0^\infty \frac{\cos 2\pi \xi t} {(1+t^2)^{3/2}} dt,
$$
we see that $\widehat{k}$ is even, belongs to $C^1(\mathbb{R})$, and $\widehat{k}(0)=2$. If $|\xi| >1$, then integrations by parts show that $|\widehat{k}(\xi)| \lesssim |\xi|^{-N}$
for any positive integer $N$. Thus we have \eqnref{khatdecay}.
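We also note that the value at the origin can be computed directly by the substitution $t=\tan\varphi$:
$$
\widehat{k}(0)=\int_{-\infty}^{\infty}\frac{dt}{(1+t^2)^{3/2}} = \int_{-\pi/2}^{\pi/2}\cos\varphi\, d\varphi = 2 .
$$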
It now remains to prove that $\widehat{k}(\xi)$ is decreasing in $\xi \ge 0$. To prove it, we recall the relation
$$
\widehat{k}(\xi) = 4\pi |\xi|\, K_1(2\pi|\xi|),
$$
where $K_\nu$ denotes the modified Bessel function of the second kind (see \cite[10.32.11]{NIST}); the factors of $2\pi$ appear due to the definition \eqnref{Fourdef} of the Fourier transform, and the formula is consistent with $\widehat{k}(0)=2$ since $zK_1(z)\to 1$ as $z\to 0+$. Thanks to the recurrence relation $(z K_1(z))'=-z K_0(z)$ (\cite[10.29.2]{NIST}), we infer that $(\widehat{k})'(\xi)=-8\pi^2\xi K_0(2\pi\xi)<0$ for $\xi>0$, since $K_0(z) > 0$ if $z>0$. This completes the proof.
\end{proof}
\begin{lemma}\label{lem:Lhat}
$\widehat{L_0}$ is even, decreasing in $\xi \ge 0$, continuously differentiable on $\mathbb{R}$, $0 < \widehat{L_0}(\xi) \le \widehat{L_0}(0)=1/2$, and for any $\delta >0$
\begin{equation}\label{Lhatdecay}
|\widehat{L_0}(\xi)| \lesssim \frac{1}{1+|\xi|^{1-\delta}}.
\end{equation}
\end{lemma}
\begin{proof}
It follows from Lemma \ref{lem:khat} that $\widehat{L_0}$ is even, decreasing in $\xi \ge 0$, continuously differentiable on $\mathbb{R}$, and $0 < \widehat{L_0}(\xi) \le \widehat{L_0}(0)=1/2$. To prove \eqnref{Lhatdecay},
suppose that $|\xi| >1$ and write
$$
\widehat{L_0}(\xi) = \frac{1}{4 \pi} \left( \int_0^{|\xi|^{-1+\delta}} + \int_{|\xi|^{-1+\delta}}^\pi \right) \widehat{k}(\sqrt{2 (1-\cos\theta)} \xi) d\theta =: I_1(\xi) + I_2(\xi).
$$
Since $0 < \widehat{k} \le 2$, we have $|I_1(\xi)| \lesssim |\xi|^{-1+\delta}$.
Since $|\sqrt{2 (1-\cos\theta)} \xi| \gtrsim |\xi|^{\delta}$ if $|\xi|^{-1+\delta} \le \theta \le \pi$,
it follows from \eqnref{khatdecay} that $|I_2(\xi)| \lesssim |\xi|^{-N}$ for any positive integer $N$.
\end{proof}
\subsection{The NP operator on prolate spheroids and the limiting operator}
In this subsection, we prove that the limiting operator (as $R \to \infty$) of the NP operator on prolate spheroids, acting on suitable test functions, is the convolution with $L_0$ given in \eqnref{L0}. We begin by constructing test functions $g_\rho$ for a parameter $\rho>0$. Eventually, we take $\rho=R^{1-\sigma}$ for some $\sigma \in (0,1)$. By Lemma \ref{lem:Lhat}, for $\lambda \in (0, 1/2]$ there is a unique point
$\xi_0 \in [0, \infty)$ such that
\begin{equation}
\label{GLdef-pro}
\lambda - \widehat{L_0}(\xi_0) =0 .
\end{equation}
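Indeed, by Lemma \ref{lem:Lhat}, $\widehat{L_0}$ is continuous and decreasing on $[0,\infty)$ with $\widehat{L_0}(0)=1/2$, and $\widehat{L_0}(\xi)\to 0$ as $\xi\to\infty$ by \eqnref{Lhatdecay}; hence $\widehat{L_0}$ maps $[0,\infty)$ onto $(0,1/2]$, and the monotonicity guarantees that $\xi_0$ is unique.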
Let $\zeta_1$ be a function on $\mathbb{R}$ such that $\widehat{\zeta_1}$ is a non-negative compactly supported smooth function
with
$$
\int_{\mathbb{R}} \widehat{\zeta_1}(\xi)d\xi=1.
$$
Then, $\rho \widehat{\zeta_1}(\rho (\xi-\xi_0))$ converges weakly to $\delta_{\xi_0}(\xi)$, the one-dimensional Dirac delta function, as $\rho \to \infty$. Let $\chi_1$ be a smooth cut-off function such that $\mbox{supp}(\chi_1) \subset B(0,1)$ and $\chi_1=1$
on $B(0,1/2)$. Define
\begin{equation}\label{gRx}
g_\rho(x):= \rho^{-1/2} e^{2\pi i \xi_0 x} (\chi_1\zeta_1)(\rho^{-1}x).
\end{equation}
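Taking the Fourier transform and changing variables, we have
$$
\widehat{g_\rho}(\xi)= \rho^{1/2}\, \widehat{(\chi_1\zeta_1)}(\rho(\xi-\xi_0)) = \rho^{1/2}\, (\widehat{\chi_1}\ast\widehat{\zeta_1})(\rho(\xi-\xi_0)),
$$
which will be used in the proof of Lemma \ref{lem:limit-pro} below.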
In this section $\| \ \|_2$ denotes the $L^2$-norm on $\mathbb{R}^1$.
\begin{lemma}\label{lem:limit-pro}
Let $\lambda \in (0, 1/2]$ and $g_\rho$ be defined by \eqnref{gRx} with $\xi_0$ satisfying \eqnref{GLdef-pro}.
Then the following hold:
\begin{itemize}
\item[{\rm (i)}] $\| g_\rho \|_{2} \sim 1$.
\item[{\rm (ii)}] $\| \lambda g_\rho - L_0 *g_\rho \|_{2} \to 0$ as $\rho \to \infty$.
\end{itemize}
\end{lemma}
\begin{proof} It is easy to see $\| g_\rho \|_{2}=\|\chi_1\zeta_1\|_2 $, thus (i) follows. To show (ii),
we note that
$$
\mathcal F(\lambda g_\rho - L_0*g_\rho)=(\lambda- \widehat{L_0} (\xi)) \rho^{1/2} ( \widehat{\chi_1}\ast\widehat{\zeta_1}) (\rho(\xi-\xi_0)).
$$
By Plancherel's theorem and changing variables $\xi \to \rho^{-1}\xi+\xi_0$, we have
\[ \| \lambda g_\rho - L_0 *g_\rho \|_{2}^2= \int \big|\lambda- \widehat{L_0} (\rho^{-1}\xi +\xi_0)\big|^2\, \big| ( \widehat{\chi_1}\ast\widehat{\zeta_1}) (\xi)\big|^2\, d\xi. \]
Since $\lambda= \widehat{L_0} (\xi_0)$ and $\widehat{L_0}$ is continuous by Lemma \ref{lem:Lhat}, we obtain (ii) by the dominated convergence theorem.
\end{proof}
\begin{lemma}\label{lem:limit-pro2}
Let $\mathcal{H}_{R}$ be the operator appearing in \eqnref{KcalHcal} and let $\rho=R^{1-\sigma}$ for some $\sigma \in (0,1)$. Then,
\begin{equation}\label{2110}
\| \mathcal{H}_R [g_\rho]- L_0\ast g_\rho \|_2\to 0,
\end{equation}
and hence
\begin{equation}
\label{L2}
\| \lambda g_\rho - \mathcal{H}_{R}[g_\rho] \|_{2} \to 0
\end{equation}
as $R \to \infty$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:limit-pro} (ii), \eqref{L2} is an immediate consequence of \eqnref{2110}.
To prove \eqnref{2110}, we break the integral kernel $H_R$ of $\mathcal{H}_R$ as follows:
\begin{equation}
H_R(x_3,y_3)=H_R^u(x_3,y_3)+H_R^l(x_3,y_3),
\end{equation}
where
\begin{align*}
H_R^u(x_3,y_3) &=H_R(x_3,y_3) \chi_{[R^{\sigma/2},\infty)}(|x_3-y_3| ), \\
H_R^l(x_3,y_3) &=H_R(x_3,y_3) \chi_{[0, R^{\sigma/2})}(|x_3-y_3| ).
\end{align*}
Here and afterwards, $\chi_{[R^{\sigma/2},\infty)}$ and $\chi_{[0, R^{\sigma/2})}$ denote the characteristic functions of the intervals $[R^{\sigma/2},\infty)$ and $[0, R^{\sigma/2})$, respectively.
We also make a similar decomposition for $L_0$, namely,
\[ L_0:=L_0^u+ L_0^l,\]
where $L_0^l(t)=L_0(t)\chi_{[0, R^{\sigma/2})}(|t| ) $. For notational convenience we denote by $\mathcal{H}_R^l,$ $\mathcal{H}_R^u$, $\mathcal{L}^l_0,$ and $\mathcal{L}^u_0$ the operators defined by the integral kernels $H_R^l,$ $H_R^u$, $L^l_0(x_3-y_3),$ and $L^u_0(x_3-y_3)$, respectively. We then have
\begin{equation}
\mathcal{H}_R [g_\rho]- L_0\ast g_\rho = (\mathcal{H}_R^l- \mathcal{L}^l_0) [g_\rho] + (\mathcal{H}_R^u- \mathcal{L}^u_0) [g_\rho].
\end{equation}
The term $(\mathcal{H}_R^u- \mathcal{L}^u_0) [g_\rho]$ is easy to handle. Indeed, by \eqref{H-kernel} and \eqref{L0} it follows that
\[
0\le H_R(x_3,y_3), \ L_0(x_3-y_3)\lesssim |x_3-y_3|^{-3}.
\]
Hence, we have
\begin{align*}
&\int_{-\infty}^\infty \big(|H_R^u(x_3,y_3)| +| L^u_0(x_3-y_3)|\big)\, dy_3 \lesssim R^{-\sigma}.
\end{align*}
Young's inequality yields $\| \mathcal{H}_R^u\|_{2\to 2}\lesssim R^{-\sigma}$ and $\| \mathcal{L}_0^u\|_{2\to 2}\lesssim R^{-\sigma}.$
Here $\|\cdot\|_{2\to 2}$ denotes the operator norm from $L^2$ to $L^2$. Therefore,
$$
\| (\mathcal{H}_R^u- \mathcal{L}^u_0)[g_\rho]\|_2 \lesssim R^{-\sigma}\| g_\rho\|_2 \lesssim R^{-\sigma},
$$
where the last inequality holds thanks to Lemma \ref{lem:limit-pro} (i).
The matter is now reduced to showing
\begin{equation}\label{HL}
\| (\mathcal{H}_R^l- \mathcal{L}^l_0)[g_\rho]\|_2 \to 0
\end{equation}
as $R\to \infty$. In order to prove this we further break the operator $\mathcal{H}_R^l$ by decomposing its kernel. By $D(x_3,y_3,\theta)$ we denote the denominator of the integrand of \eqref{H-kernel}, namely,
\[
D(x_3,y_3,\theta)= 2\pi [\big( \eta(x_3)^2+ \eta(y_3)^2- 2\eta(x_3)\eta(y_3) \cos\theta \big) + (x_3-y_3)^2]^{3/2},
\]
and break the numerator so that
\[ 1-R^{-2}x_3y_3- \eta(x_3)\eta(y_3) \cos\theta= E_1(x_3,y_3,\theta)+E_2(x_3,y_3,\theta)+E_3(x_3,y_3,\theta),
\]
where
\begin{align*}
E_1(x_3,y_3)&:= 1-R^{-2}x_3y_3- \eta(x_3)\eta(y_3),
\\
E_2(x_3,y_3, \theta)&:=(\eta(x_3)\eta(y_3)-1)(1- \cos\theta),
\\
E_3(x_3,y_3,\theta)&:=(1- \cos\theta).
\end{align*}
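One can readily check that the three terms add up to the numerator:
\begin{align*}
E_1+E_2+E_3 &= \big(1-R^{-2}x_3y_3- \eta(x_3)\eta(y_3)\big) + \eta(x_3)\eta(y_3)(1- \cos\theta) - (1-\cos\theta) + (1- \cos\theta)\\
&= 1-R^{-2}x_3y_3- \eta(x_3)\eta(y_3)\cos\theta .
\end{align*}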
We then define
\[
H_R^{l,j} (x_3,y_3) = \chi_{[0, R^{\sigma/2})}(|x_3-y_3| ) \int_0^\pi \frac{ E_j(x_3,y_3,\theta)}{D(x_3,y_3,\theta)} d\theta, \quad j=1,2,3,
\]
so that
\[
H_R^{l}=H_R^{l,1}+H_R^{l,2}+H_R^{l,3}.
\]
As before, we denote by $\mathcal{H}_R^{l,j}$ the operator given by the kernel $H_R^{l,j}$ for $j=1, 2,3$. For the proof of \eqref{HL}, we show that the contributions from the operators $\mathcal{H}_R^{l,1}$ and $\mathcal{H}_R^{l,2}$ are negligible.
This reduces \eqref{HL} to \eqref{Hl} below.
If $y_3$ lies in the support of $g_\rho$, namely, $|y_3|\le R^{1-\sigma}$ and if $|x_3-y_3|\le R^{\sigma/2}$, then $|x_3| \lesssim R^{1-\sigma}$ if $0< \sigma \leq 1/2$. Thus, in order to show \eqref{HL} we may assume
\begin{equation}\label{x3y3}
|y_3|\le R^{1-\sigma} \quad \mbox{and} \quad |x_3| \lesssim R^{1-\sigma}
\end{equation}
for the rest of this proof.
On the other hand, since
$$
\eta(x_3)^2+ \eta(y_3)^2- 2\eta(x_3)\eta(y_3) \cos\theta =( \eta(x_3)-\eta(y_3))^2+ 2\eta(x_3)\eta(y_3)(1- \cos\theta),
$$
we have
\begin{equation}
D(x_3,y_3,\theta)\ge C \big [(1-\cos\theta)+ (x_3-y_3)^2\big]^{3/2}
\end{equation}
for some constant $C>0$.
Because of \eqnref{x3y3}, assuming $R$ is large enough, one can easily see that $
|E_1(x_3,y_3, \theta)| \lesssim R^{-2}|x_3-y_3|^2.
$
Thus, we have
\[
|H_R^{l,1} (x_3,y_3)| \lesssim \frac{\chi_{[0, R^{\sigma/2})}(|x_3-y_3| )}{R^2} \int_0^\pi \frac{ |x_3-y_3|^2}{[ \theta^2+ (x_3-y_3)^2]^{3/2}} d\theta
\lesssim \frac{\chi_{[0, R^{\sigma/2})}(|x_3-y_3| )}{R^2} ,
\]
which clearly yields
$$
\int_{-\infty}^\infty |H_R^{l,1} (x_3,y_3)| dy_3 \lesssim R^{-2+\sigma/2}.
$$
Therefore, by symmetry and Young's inequality as before, we have
$$
\|\mathcal{H}_R^{l,1} [g_\rho] \|_2 \lesssim R^{-2+\sigma/2} .
$$
Thanks to \eqnref{x3y3} it is easy to see $
|E_2(x_3,y_3, \theta)| \lesssim R^{-2\sigma} (1- \cos\theta)
$. So, we have
\begin{align*}
|H_R^{l,2} (x_3,y_3)|
&\lesssim R^{-2\sigma}\chi_{[0, R^{\sigma/2})}(|x_3-y_3| ) \int_0^\pi \frac{ \theta^2}{[\theta^2+ (x_3-y_3)^2]^{3/2}} d\theta
\\
&\lesssim R^{-2\sigma}\chi_{[0, R^{\sigma/2})}(|x_3-y_3| ) (1+ |\log |x_3-y_3||).
\end{align*}
Thus, it follows that
$$
\int_{-\infty}^\infty |H_R^{l,2} (x_3,y_3)| dy_3 \lesssim R^{-3\sigma/2}\log R.
$$
By symmetry and Young's inequality this gives
$$
\|\mathcal{H}_R^{l,2} [g_\rho] \|_2 \lesssim R^{-3\sigma/2}\log R.
$$
The proof of \eqref{HL} is now reduced to showing
\begin{equation}
\label{Hl}
\| (\mathcal{H}_R^{l,3}- \mathcal{L}^l_0)[g_\rho] \|_2 \to 0
\end{equation}
as $R\to \infty$. To prove \eqref{Hl}, we first note
\begin{equation}\label{HLL}
|H_R^{l,3}(x_3,y_3)- L^l_0(x_3-y_3)| \lesssim \chi_{[0, R^{\sigma/2})}(|x_3-y_3| ) \int_0^\pi (1-\cos\theta) |K_\theta(x_3,y_3)| d\theta,
\end{equation}
where
\begin{align*}
K_\theta(x_3,y_3) &=\Big ( \eta(x_3)^2+ \eta(y_3)^2- 2\eta(x_3)\eta(y_3) \cos\theta + (x_3-y_3)^2\Big) ^{-\frac 32} \\
& \qquad - \Big ( 2-2\cos\theta + (x_3-y_3)^2\Big) ^{-\frac 32}.
\end{align*}
Also, note that
\begin{align*}
&\eta(x_3)^2+ \eta(y_3)^2- 2\eta(x_3)\eta(y_3) \cos\theta- (2- 2\cos\theta) \\
&=F( \frac{x_3}{R}, \frac{y_3}{R}) + 2( \eta(x_3)\eta(y_3)-1) (1-\cos\theta),
\end{align*}
where
\[
F(s,t):=2-s^2- t^2 -2\sqrt{(1- s^2)(1-t^2)}\,.
\]
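A direct computation with $s=x_3/R$ and $t=y_3/R$ (so that $\eta(x_3)\eta(y_3)=\sqrt{(1-s^2)(1-t^2)}$) confirms this identity: both sides are equal to
$$
-s^2-t^2+2\cos\theta\big(1-\sqrt{(1-s^2)(1-t^2)}\,\big).
$$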
If $R$ is sufficiently large, then
$
\eta(x_3)^2+ \eta(y_3)^2- 2\eta(x_3)\eta(y_3) \cos\theta \ge 1-\cos\theta .
$
Thus, by the mean value theorem, we have
\begin{align*}
|K_\theta(x_3,y_3)|&\lesssim \frac{ | F( \frac{x_3}{R}, \frac{y_3}{R})| + |( \eta(x_3)\eta(y_3)-1) (1-\cos\theta) | }{\big(1-\cos\theta + (x_3-y_3)^2\big)^{5/2}}.
\end{align*}
Since $F$ is smooth on $[-1/2, 1/2]\times [-1/2, 1/2]$, $\partial_s F(s,s)=0$ and $F(s,s)=0$, we have
$$
|F(s,t)|\le C|s-t|^2
$$
for $s,t\in (-1/2, 1/2)$. We then infer using \eqnref{x3y3} that
\[
|K_\theta(x_3,y_3)| \le C \frac{ R^{-2}| x_3-y_3 |^2 + R^{-2\sigma} (1-\cos\theta) }{\big(2-2\cos\theta + (x_3-y_3)^2\big)^{5/2}}.
\]
Combining this with \eqref{HLL} yields
\begin{align*}
|H_R^{l,3}(x_3,y_3)- L^l_0(x_3-y_3)|
&\le C \chi_{[0, R^{\sigma/2})}(|x_3-y_3| ) \int_0^\pi \frac{ R^{-2}| x_3-y_3|^2\theta^2 + R^{-2\sigma} \theta^4 }{\big(\theta^2 + (x_3-y_3)^2\big)^{5/2}}d\theta
\\
&\le C \chi_{[0, R^{\sigma/2})}(|x_3-y_3| ) \big(R^{-2} + R^{-2\sigma}(1+|\log |x_3-y_3||)\big),
\end{align*}
from which it follows that
$$
\int |H_R^{l,3}(x_3,y_3)- L^l_0(x_3-y_3)| dy_3 \lesssim R^{-3\sigma/2}\log R.
$$
Because of symmetry, the integration with respect to $x_3$ satisfies the same inequality. Therefore, by Young's inequality,
we infer
$$
\| (\mathcal{H}_R^{l,3}- \mathcal{L}^l_0)[g_\rho] \|_2 \lesssim R^{-3\sigma/2}\log R,
$$
which yields \eqref{Hl}. This completes the proof.
\end{proof}
\subsection{Proof of Proposition \ref{prop:prolate}}\label{subsec:proofpro}
Let $\lambda \in (0, 1/2]$ and $g_\rho$ be defined by \eqnref{gRx} where $\xi_0$ satisfies \eqref{GLdef-pro}. Then, we define the function $\psi_\rho$ on $\partial \Pi_R$ by \eqnref{extpro} with $g=g_\rho$.
Applying \eqnref{psig},
\eqnref{KcalHcal}, Lemma \ref{lem:limit-pro}, and Lemma \ref{lem:limit-pro2}, we see
\begin{equation}\label{limit-pro}
\lim_{R \to \infty} \frac{\| ( \lambda I - \mathcal{K}_{\partial\Pi_R}) [\psi_\rho] \|_{L^2(\partial \Pi_R)}}{\| \psi_\rho \|_{L^2(\partial \Pi_R)}} =0.
\end{equation}
Let $R_j$ be a sequence such that $R_j \to \infty$ as $j \to \infty$. Suppose $\lambda \notin \overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial\Pi_{R_j}})}$, then there is an $\epsilon>0$ such that $[\lambda -\epsilon, \lambda+\epsilon] \cap \sigma(\mathcal{K}_{\partial\Pi_{R_j}}) = \emptyset$ for all $j$. Therefore, there is a constant $C$ independent of $j$ such that
$$
C \| \varphi \|_{L^2(\partial\Pi_{R_j})} \le \| ( \lambda I - \mathcal{K}_{\partial\Pi_{R_j}}) [\varphi] \|_{L^2(\partial\Pi_{R_j})}, \quad \forall \varphi \in L^2(\partial\Pi_{R_j})
$$
for all $j$. This contradicts \eqnref{limit-pro}. Thus, we conclude $\lambda \in \overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial\Pi_{R_j}})}$. This completes the proof.
\subsection{Proof of Theorem \ref{thm:prolate2}}
The (confocal) prolate spheroids can be canonically described in terms of the prolate spheroidal coordinates, which are given by
\begin{align*}
x_1 &=\frac{d}{2} [(\xi^2 -1)(1- \eta ^2)]^{1/2} \cos \phi, \\
x_2 &=\frac{d}{2} [(\xi^2 -1)(1- \eta ^2)]^{1/2} \sin \phi, \\
x_3 &= \frac{d}{2} \xi \eta,
\end{align*}
where $d$ is the distance between the two foci $(0,0,d/2)$ and $(0,0,-d/2)$, $1 \leq \xi <\infty$, $-1\leq \eta \leq 1$, and $\phi$ is the azimuthal angle lying in $[0, 2 \pi]$. The surfaces $\xi=L$ (constant) represent (confocal) prolate spheroids. The spheroid $\partial\Pi_R$ is $\xi=L$ with $R= L/\sqrt{L^2-1}$ after dilation by the factor $(d/2)\sqrt{L^2-1}$. Let us recall that the NP spectrum is invariant under dilation. The limiting case $\xi=1$ is degenerate and corresponds to the line segment between the foci, while $L=\infty$ corresponds to the sphere.
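Indeed, on the surface $\xi=L$ we have
$$
\frac{x_1^2+x_2^2}{(d/2)^2(L^2-1)} + \frac{x_3^2}{(d/2)^2 L^2} = 1,
$$
so the semi-axes are $(d/2)\sqrt{L^2-1}$, $(d/2)\sqrt{L^2-1}$ and $(d/2)L$; dividing by $(d/2)\sqrt{L^2-1}$ yields the prolate spheroid with semi-axes $1$, $1$ and $L/\sqrt{L^2-1}=R$.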
It was shown in \cite{AA} that NP eigenvalues on the surface $\xi =L>1$, or on $\partial\Pi_R$ with $R= L/\sqrt{L^2-1}$, are positive and given by an explicit formula
\begin{equation}\label{ev:prolate}
\lambda_{m, n}(L) = \left( -\frac{1}{2} \right) (-1)^m \frac{(n-m)!}{(n+m)!} (L^2 -1)(P_n^m Q_n^m)' (L)
\end{equation}
for $n=1,2, \ldots$ and $m=-n, -n+1, \ldots, -1, 0, 1, \ldots, n$, where $P_n^m(L)$ and $Q_n^m(L)$ denote associated Legendre functions of the first kind and the second kind, respectively. On the surface $L=\infty$, namely, the sphere, we have
\begin{equation}\label{Glinfty}
\lambda_{m, n}(\infty)= \frac{1}{2(2n+1)}, \quad m=-n, -n+1, \ldots, -1, 0, 1, \ldots, n.
\end{equation}
Moreover, it is shown in \cite{Mart} that $\lambda_{m, n}(L)$ enjoys the 1/2-property
\begin{equation}\label{a half property}
\sum_{m=-n}^n \lambda_{m, n}(L)=1/2 \quad\mbox{for all } n.
\end{equation}
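Note that \eqnref{Glinfty} is consistent with the $1/2$-property: on the sphere,
$$
\sum_{m=-n}^{n} \lambda_{m, n}(\infty)= \frac{2n+1}{2(2n+1)}=\frac{1}{2}.
$$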
It is worth mentioning that it is not known whether NP eigenvalues on general surfaces satisfy the 1/2-property. We refer to \cite{AKMU} for a discussion on this.
We obtain the following proposition, from (ii) of which Theorem \ref{thm:prolate2} immediately follows by taking $R_0= L_0/\sqrt{L_0^2-1}$. Note that (i) of the following proposition shows the tunability of the eigenvalues by prolate spheroids, namely, for any $n$ and $\lambda \in (0, 1/2)$ there is a prolate spheroid characterized by $\xi=L$ such that $\lambda_{m, n}(L)=\lambda$ for some $m$. The case $n=1$ was proved in \cite{FK}.
\begin{prop}
\begin{itemize}
\item[{\rm (i)}] $\displaystyle \cup_{L>1} \{\lambda_{m, n}(L): -n \le m \le n\} = (0, 1/2)$ for each $n=1,2, \ldots$.
\item[{\rm (ii)}] For any $L_0 >1$, $\displaystyle \cup_{1<L\le L_0} \{\lambda_{0, n}(L): n=1,2,\ldots \} = (0, 1/2)$.
\end{itemize}
\end{prop}
\begin{proof}
From Rodrigues' formula, which is also known as the Ivory--Jacobi formula, and the standard representation of $Q_n$ (see, e.g., \cite{AS}), we have the following:
\begin{align*}
P_n^0(z)&=:P_n(z) =\frac{1}{2^n n!} \frac{d^n (z^2-1)^n}{dz^n},
\\
Q_n^0(z)&=:Q_n(z) =\frac{1}{2} P_n(z) \log \frac{z+1}{z-1} - \sum_{m=1} ^n \frac{1}{m} P_{m-1}(z) P_{n-m} (z),
\\
P_n (z) Q_n (z) &=
\frac{1}{2} (P_n(z))^2 \log \frac{z+1}{z-1}
- P_n(z)\sum_{m=1} ^n \frac{1}{m} P_{m-1}(z) P_{n-m} (z) .
\end{align*}
Since $P_k$ is a polynomial (it is of degree $k$) for each $k$ and $P_n(1)=1$, we have
$$
(P_n Q_n)'(L)= \frac{1}{2(1-L)} + O(|\log(L-1)|)
$$
as $L \to 1+0$. We then infer from \eqref{ev:prolate} that $\lambda_{0, n}(L) \rightarrow \frac{1}{2}$ as $L \rightarrow 1+0$.
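Indeed, by \eqref{ev:prolate} with $m=0$,
$$
\lambda_{0, n}(L) = -\frac{1}{2}\,(L^2 -1)(P_n Q_n)'(L) = \frac{L+1}{4} + O\big((L^2-1)|\log(L-1)|\big) \longrightarrow \frac{1}{2}
$$
as $L \to 1+0$.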
By \eqref{a half property}, we see that $\lambda_{m, n}(L) \to 0$ as $L \rightarrow 1$ if $m \neq 0$. Since each $\lambda_{m, n}(L)$ is continuous in $L$ on $(1, \infty)$, by \eqnref{Glinfty} we have
$$
\bigcup_{L>1} \{\lambda_{m, n}(L): m \neq 0 \} \supseteq \left(0, \frac{1}{2(2n+1)} \right], \quad
\bigcup_{L>1} \{\lambda_{0, n}(L) \} \supseteq \left[\frac{1}{2(2n+1)},\frac{1}{2} \right)
$$
for each $n$. Thus, (i) follows.
Since $\lambda_{m, n}(L) \rightarrow 0$ as $n \to \infty$ for each fixed $L$, we have in particular $\lambda_{0, n}(L_0) \rightarrow 0$ as $n \to \infty$, and hence (ii) follows.
\end{proof}
\section{Proof of Theorem \ref{thm:oblate}}\label{sec:3}
In this section we prove Theorem \ref{thm:oblate}. Again it is enough to consider the spectrum of $\mathcal{K}_{\partial \Omega_{R}}$ on $L^2(\partial\Omega_R)$ since $\partial\Omega_R$ is smooth.
\subsection{Parametrization of the NP operator on oblate ellipsoids}
In this section and those to follow, we use $X$ to represent points in $\mathbb{R}^3$, saving $x$ for points in the plane, that is to say, $X=(x,x_3)=(x_1,x_2,x_3)$. Let $D_R$ be the projection of $\Omega_R$ onto the $x$-plane, namely,
$$
D_R:=\left \{ x=(x_1,x_2): \frac{x_1^2}{(a_1 R)^2} + \frac{x_2^2}{(a_2 R)^2} \le 1 \right \},
$$
and let
\begin{equation}
\gamma(x)=\gamma_R(x):= \sqrt{1- \frac{x_1^2}{(a_1R)^2} - \frac{x_2^2}{(a_2R)^2}}, \quad x \in D_R.
\end{equation}
Then, $\partial\Omega_R$ consists of two pieces, namely, $\partial\Omega_R = \Gamma^+ \cup \Gamma^-$,
where
\begin{equation}\label{GOboundary}
\Gamma^{\pm} = \{ (x,\pm \gamma(x)): x \in D_R \},
\end{equation}
and the NP operator can be written as
$$
\mathcal{K}_{\partial\Omega_{R}}[\varphi](X)= \Big(\int_{\Gamma^+} + \int_{\Gamma^-}\Big) \frac{\langle Y-X, \nu(Y) \rangle}{4\pi |X-Y|^3} \varphi(Y) dS(Y).
$$
Now, let us set
\begin{align}
\label{K1def}
K_R^1(x,y)= -\frac{1}{4\pi} \frac{\frac{1}{R^2\gamma(y)} \sum_{j=1}^2 (x_j-y_j) \frac{y_j}{a_j^2} + (\gamma(x) - \gamma(y)) } { \big[ |x-y|^2 + (\gamma(x) - \gamma(y))^2 \big]^{3/2} },
\\[4pt]
\label{K2def}
K_R^2(x,y)= -\frac{1}{4\pi} \frac{\frac{1}{R^2\gamma(y)} \sum_{j=1}^2 (x_j-y_j) \frac{y_j}{a_j^2} - (\gamma(x) + \gamma(y)) } { \big[ |x-y|^2 + (\gamma(x) +\gamma(y))^2 \big]^{3/2} }.
\end{align}
For $Y=(y,y_3) \in \Gamma^\pm$, we have
$$
\nu(Y) dS(Y)= \Big( \frac{y_1}{(a_1R)^2 \gamma(y)}, \frac{y_2}{(a_2R)^2 \gamma(y)}, \pm 1 \Big) dy.
$$
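This follows from the graph representation \eqnref{GOboundary}: on the upper part $y_3=\gamma(y)$ one has $\nu(Y)\, dS(Y)=(-\nabla\gamma(y),1)\, dy$ with
$$
\partial_j \gamma(y) = -\frac{y_j}{(a_j R)^2 \gamma(y)}, \qquad j=1,2,
$$
and on the lower part $y_3=-\gamma(y)$ the same formula holds with the third component $-1$.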
Therefore, for $(x, \gamma(x)) \in \Gamma^+$ we obtain
\begin{align}
\mathcal{K}_{\partial\Omega_{R}}[\varphi](x, \gamma(x)) &= \int_{D_R} K_R^{1}(x,y) \varphi^+(y) dy + \int_{D_R} K_R^{2}(x,y) \varphi^-(y) dy \nonumber \\
&=: \mathcal{K}_{R}^{1}[\varphi^+](x) + \mathcal{K}_{R}^{2}[\varphi^-](x), \label{upperform}
\end{align}
where
$$
\varphi^+(y):= \varphi(y, \gamma(y)), \quad \varphi^-(y):= \varphi(y, -\gamma(y)).
$$
Similarly, one can easily see
\begin{equation}\label{lowerform}
\mathcal{K}_{\partial\Omega_{R}}[\varphi](x, -\gamma(x)) = \mathcal{K}_{R}^{2}[\varphi^+](x) + \mathcal{K}_{R}^{1}[\varphi^-](x), \quad (x, -\gamma(x)) \in \Gamma^-.
\end{equation}
The surface measure on $\Gamma^\pm$ is given by
\begin{equation}
dS(x)= \omega_R(x) dx,
\end{equation}
where
\begin{equation}
\omega_R(x)= \frac{\chi_{D_R}(x)}{\gamma(x)} \sqrt{\frac{x_1^2}{(a_1R)^4} + \frac{x_2^2}{(a_2R)^4} + \gamma(x)^2} \, .
\end{equation}
For a measurable subset $U$ of $D_R$, we set
\begin{equation}
\omega_R(U) := \int_U \omega_R(x) dx.
\end{equation}
The following elementary lemma will be used later.
\begin{lemma}\label{lem:ARBR}
Let
\begin{equation}
A_R:= \Big\{ x \in D_R: \frac{x_1^2}{(a_1R)^4} + \frac{x_2^2}{(a_2R)^4} \le \gamma(x)^2 \Big\} , \quad B_R:= D_R \setminus A_R.
\end{equation}
The following hold:
\begin{itemize}
\item[{\rm (i)}] $\omega_R \ge 1$ on $D_R$ and $\omega_R \sim 1$ on $A_R$.
\item[{\rm (ii)}] $\omega_R(A_R) \sim R^2$.
\item[{\rm (iii)}] $\omega_R(B_R) \sim 1$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) is clear since $\omega_R \ge 1$ on $D_R$ and $1 \le \omega_R \le \sqrt{2}$ on $A_R$.
Let $m=\min\{a_1, a_2\}$ and $M=\max\{a_1, a_2\}$. If $x \in A_R$, then
\begin{equation}\label{BRset}
\frac{x_1^2}{(a_1R)^2} + \frac{x_2^2}{(a_2R)^2} < C_1,
\end{equation}
where $C_1:= (MR)^2/((MR)^2+1)$, and hence
$
\omega_R(A_R) \lesssim R^2.
$
Likewise, we have
\begin{equation}\label{BRset2}
C_2 \le \frac{x_1^2}{(a_1R)^2} + \frac{x_2^2}{(a_2R)^2} <1, \quad x \in B_R,
\end{equation}
where $C_2:= (mR)^2/((mR)^2+1)$. After the changes of variables $x_j=a_j R y_j$ ($j=1,2$), we have
$$
\omega_R(B_R) \lesssim R^2 \int_{V} \frac{1}{\sqrt{1-|y|^2}} \sqrt{\frac{y_1^2}{(a_1R)^2} + \frac{y_2^2}{(a_2R)^2}} \, dy \lesssim
R \int_{V} \frac{|y|}{\sqrt{1-|y|^2}} dy,
$$
where $V=\{ \sqrt{C_2} \le |y| <1 \}$. It thus follows that
$$
\omega_R(B_R) \lesssim
R \int_{\sqrt{C_2}}^1 \frac{r}{\sqrt{1-r^2}} dr \sim 1.
$$
On the other hand, it is clear that $\omega_R(B_R) \gtrsim 1$ if $R$ is large enough. This shows ${\rm (iii)}$.
Since
$$
\omega_R(A_R) + \omega_R(B_R) = \omega_R(D_R) \gtrsim R^2,
$$
we have (ii). This completes the proof.
\end{proof}
For a function $f$ defined on $D_R$, we denote
\begin{equation}
\|f\|_{\omega_R}= \| f\|_{L^2(D_R, \omega_R)}.
\end{equation}
Let $f^\pm$ be functions on $\Gamma^\pm$ defined by $f^\pm(x, \pm \gamma(x))=f(x)$. Then, we have
\begin{equation}
\| f^+ \|_{L^2(\Gamma^+)} = \| f^- \|_{L^2(\Gamma^-)} = \|f\|_{\omega_R}.
\end{equation}
\subsection{The NP operator on oblate ellipsoids and the Poisson integral}
Since $\gamma(y)$ tends to $1$ pointwise as $R \to \infty$, one can expect from \eqnref{K1def} and \eqnref{K2def} that $K_R^1(x,y)$ and $K_R^2(x,y)$ respectively tend to $0$ and $\frac{1}{2}P_2(x-y)$ (if $x \neq y$) as $R \to \infty$, where $P_t(x)$ ($x \in \mathbb{R}^2$) is the Poisson kernel
\begin{equation}\label{Pkernel}
P_t(x) = \frac{1}{2\pi} \frac{t}{(|x|^2+t^2)^{\frac{3}{2}}}, \quad (x,t)\in \mathbb R^2\times \mathbb R_+.
\end{equation}
We now construct test functions $f_\rho$ in a similar manner as $g_\rho$ in the previous section. Recall that $\widehat{P_t}(\xi) = \exp(-2\pi t |\xi|)$.
For $\lambda \in (0, 1/2]$, choose $\xi_0 \in \mathbb{R}^2$ such that
\begin{equation}\label{GLdef}
\lambda - \frac{1}{2} e^{-4\pi |\xi_0|} =0.
\end{equation}
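Explicitly, any $\xi_0$ with
$$
|\xi_0| = \frac{1}{4\pi}\log\frac{1}{2\lambda}
$$
will do; in particular, $\xi_0=0$ when $\lambda=1/2$.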
Let $\zeta$ be a function on $\mathbb{R}^2$ such that $\widehat{\zeta}$ is a non-negative compactly supported smooth function satisfying
$$
\int_{\mathbb{R}^2} \widehat{\zeta}(\xi)d\xi=1.
$$
Then, $\rho^2 \widehat{\zeta}(\rho (\xi-\xi_0))$ converges weakly to $\delta_{\xi_0}(\xi)$ as $\rho \to \infty$.
Let $m=\min\{a_1, a_2\}$ as before and let $\chi$ be a smooth cut-off function such that $\mbox{supp}(\chi) \subset B(0,m)$ and $\chi=1$ on $B(0,m/2)$, where $B(0,r)$ denotes the disk centered at the origin of radius $r$. Define
\begin{equation}\label{fRx}
f_\rho(x):= \rho^{-1} e^{2\pi i \xi_0 x} (\chi\zeta)(\rho^{-1}x).
\end{equation}
Note that $f_\rho$ is supported in $D_\rho:=B(0,\rho m)$ and
\begin{equation}\label{fhatrho}
\widehat{f_\rho}(\xi)= \rho \widehat{(\chi\zeta)}(\rho(\xi-\xi_0)).
\end{equation}
\begin{lemma}\label{lem:limit}
Let $\lambda \in (0, 1/2]$ and $f_\rho$ be defined by \eqnref{fRx} with $\xi_0$ satisfying \eqnref{GLdef}. If $\rho=R^{1-\sigma}$ for some $\sigma >0$, then the following hold:
\begin{itemize}
\item[{\rm (i)}] $\| f_\rho \|_{\omega_R} \sim 1$.
\item[{\rm (ii)}] $\| \lambda f_\rho -\frac{1}{2}P_2 *f_\rho \|_{\omega_R} \to 0$ as $R \to \infty$.
\end{itemize}
\end{lemma}
\begin{proof}
Since $f_\rho$ is supported in $D_\rho$ and $D_\rho \subset A_R$ with $\rho=R^{1-\sigma}$, it follows from (i) of Lemma \ref{lem:ARBR} that
$$
\| f_\rho \|_{\omega_R} \sim \| f_\rho \|_{2},
$$
where $\| \ \|_2$ denotes the $L^2$-norm on $\mathbb{R}^2$ with respect to the Lebesgue measure. Using Plancherel's theorem and \eqnref{fhatrho}, we have
$
\| f_\rho \|_{2}^2 = \int_{\mathbb{R}^2} |\widehat{f_\rho}(\xi)|^2 d\xi = \int_{\mathbb{R}^2} |\widehat{(\chi\zeta)}(\xi)|^2 d\xi.
$
Thus, we get (i) combining this with the above.
By \eqnref{BRset2}, we have $|x| \sim R$ for $x \in B_R$. Thus, if $x \in B_R$ and $y \in D_\rho$ with $\rho=R^{1-\sigma}$, then
$
|x-y| \gtrsim R-\rho \gtrsim R.
$
So, we have $
P_2(x-y) \lesssim R^{-3}
$
for $x \in B_R$ and $y \in D_\rho$.
Since the support of $f_\rho$ is contained in $ A_R$, by (ii) in Lemma \ref{lem:ARBR} and H\"older's inequality we have
$$
|(P_2 * f_\rho)(x)| \lesssim R^{-2} \| f_\rho \|_2 \lesssim R^{-2}.
$$
Combining this and (iii) in Lemma \ref{lem:ARBR} we obtain
\begin{equation}\label{BRint1}
\int_{B_R} |(P_2 * f_\rho)(x)|^2 \omega_R(x) dx \lesssim R^{-4}.
\end{equation}
We now proceed to prove (ii). Note that
$$
\big\| \lambda f_\rho -\frac{1}{2}P_2 *f_\rho \big\|_{\omega_R}^2 = \big\| \lambda f_\rho -\frac{1}{2}P_2 *f_\rho \big\|_{L^2(A_R, \omega_R)}^2 + \big\| \frac{1}{2}P_2 *f_\rho \big\|_{{L^2(B_R, \omega_R)}}^2.
$$
Thanks to (i) of Lemma \ref{lem:ARBR} and \eqnref{BRint1}, it suffices to show that
\begin{equation}\label{forii}
\big\| \lambda f_\rho -\frac{1}{2}P_2 *f_\rho \big\|_{2}^2 \to 0 \quad\mbox{as } R \to \infty.
\end{equation}
By \eqnref{fhatrho}, we have
$$
\mathcal{F} \Big( \lambda f_\rho -\frac{1}{2}P_2 *f_\rho \Big)(\xi)= \Big( \lambda - \frac{1}{2} e^{-4\pi |\xi|} \Big) \rho \widehat{(\chi\zeta)}(\rho(\xi-\xi_0)).
$$
Hence, changing variables $\xi\to \xi/\rho$, we see
\[\big\| \lambda f_\rho -\frac{1}{2}P_2 *f_\rho \big\|_{2}^2=\int_{\mathbb{R}^2} \big| \lambda - \frac{1}{2} e^{-4\pi | \frac{\xi}{\rho} + \xi_0 |} \big|^2 \big| \widehat{(\chi\zeta)}(\xi) \big|^2 d\xi.\]
We then break the right hand side as follows:
\begin{align*}
{\rm I + I\!I}:=\Big(\int_{|\xi| \le \sqrt{\rho}} + \int_{|\xi| > \sqrt{\rho}} \Big) \big| \lambda - \frac{1}{2} e^{-4\pi | \frac{\xi}{\rho} + \xi_0 |} \big|^2 \big| \widehat{(\chi\zeta)}(\xi) \big|^2 d\xi.
\end{align*}
If $|\xi| \le \sqrt{\rho}$, we have from \eqnref{GLdef}
$
\big| \lambda - \frac{1}{2} e^{-4\pi | \frac{\xi}{\rho} + \xi_0 |} \big| \lesssim \frac{|\xi|}{\rho}.
$
Thus, $
\mathrm I \lesssim \rho^{-1} $, so $\mathrm I\to 0$ as $R\to \infty$.
Since $\chi\zeta$ is compactly supported and smooth, we have
$
\big| \widehat{(\chi\zeta)}(\xi) \big| \lesssim (1+|\xi|)^{-N}
$
for any $N$. So, we have
$$
{\rm I\!I} \lesssim \int_{|\xi| > \sqrt{\rho}} ( 1+ |\xi| )^{1-2N} d\xi \lesssim \rho^{2-N}.
$$
As a result, ${\rm I\!I} \to 0$ as $R\to \infty$. Therefore, we conclude \eqnref{forii}.
\end{proof}
\begin{lemma}\label{lem:limit2}
Let $\lambda \in (0, 1/2]$ and $f_\rho$ be defined by \eqnref{fRx} with $\xi_0$ satisfying \eqnref{GLdef}. Suppose that $\mathcal{K}_{R}^{1}$ and $\mathcal{K}_{R}^{2}$ are given by \eqref{upperform} and that $\rho=R^{1-\sigma}$ for some $\sigma >0$. Then the following hold:
\begin{itemize}
\item[{\rm (i)}] $\| \mathcal{K}_{R}^{1}[f_\rho] \|_{\omega_R} \to 0$ as $R \to \infty$.
\item[{\rm (ii)}] $\| \lambda f_\rho - \mathcal{K}_{R}^{2}[f_\rho] \|_{\omega_R} \to 0$ as $R \to \infty$.
\end{itemize}
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{lem:limit}, we note that $|x-y| \gtrsim R$ if $x \in B_R$ and $y \in D_\rho$ with $\rho=R^{1-\sigma}$. Since $\gamma(y) \gtrsim 1$, using \eqref{K1def} and \eqref{K2def} we have
$$
|K_R^j (x,y)| \lesssim |x-y|^{-2} \frac{\rho}{R} \lesssim R^{-2-\sigma}.
$$
Since $\|f_\rho\|_1\lesssim \rho$, it follows that $|\mathcal{K}_R^j [f_\rho](x)|\lesssim R^{-1}$ for $x \in B_R$.
By (iii) of Lemma \ref{lem:ARBR} we obtain
\begin{equation}\label{BRint2}
\int_{B_R} |\mathcal{K}_R^j [f_\rho](x)|^2 \omega_R(x) dx \lesssim R^{-2}, \quad j=1,2.
\end{equation}
Thus, in order to prove (i), it suffices to prove
\begin{equation}\label{foriii}
\|\mathcal{K}_{R}^{1}[f_\rho]\|_{L^2(A_R)}^2 \to 0 \quad\mbox{as } R \to \infty.
\end{equation}
Since $\rho=R^{1-\sigma}$, changing variables $x\to Rx$ and $y\to Ry$ yields
\begin{align}
\label{KR1}
\|\mathcal{K}_{R}^{1}[f_\rho]\|_{L^2(A_R)}^2
&= R^{-2+2\sigma} \int_{A_1}\Big| \int G^1_R(x,y) e^{2\pi i R\xi_0\cdot y} (\chi\zeta)( R^\sigma y) dy\Big|^2 dx,
\end{align}
where
\begin{equation}
\label{G1R}
G_R^1(x,y):= R^3 K_R^1(Rx,Ry)=
-\frac{1}{4\pi} \frac{\gamma_1(x) - \gamma_1(y)+ \frac{1}{\gamma_1(y)} \sum_{j=1}^2 (x_j-y_j) \frac{y_j}{a_j^2} } { \big[ |x-y|^2 + R^{-2}(\gamma_1(x) - \gamma_1(y))^2 \big]^{3/2} }.
\end{equation}
Note that $(\chi\zeta)( R^\sigma y)$ is supported in $B(0, m R^{-\sigma})$ and $\partial_j\gamma_1(y)=-\frac{1}{\gamma_1(y)} \frac{y_j}{a_j^2} $. Thus,
if $y \in B(0, m R^{-\sigma})$ and $x \in B(0, m/2)$, by Taylor's theorem we have
$$
\gamma_1(x) - \gamma_1(y)+ \frac{1}{\gamma_1(y)} \sum_{j=1}^2 (x_j-y_j) \frac{y_j}{a_j^2} = \frac{1}{2} \sum_{i,j=1}^2 (\partial_i\partial_j \gamma_1)(x_*) (x_i-y_i)(x_j-y_j)
$$
for some $x_* \in B(0, m/2)$.
Since $|(\partial_i\partial_j \gamma_1)(x)| \lesssim 1$ for $x \in B(0, m/2)$, we have
$$
\Big|\gamma_1(x) - \gamma_1(y)+ \frac{1}{\gamma_1(y)} \sum_{j=1}^2 (x_j-y_j) \frac{y_j}{a_j^2} \Big| \lesssim |x-y|^2,
$$
and hence
\begin{equation}\label{3000}
|G^1_R(x,y)|\lesssim |x-y|^{-1}.
\end{equation}
If $y \in B(0, m R^{-\sigma})$ and $x \in A_1 \setminus B(0, m/2)$, then $|x-y| \gtrsim 1$. Thus, we
see from \eqref{G1R} that \eqnref{3000} remains to be valid for $y \in B(0, m R^{-\sigma})$ and $x \in A_1$.
Therefore, using \eqref{3000} we have
$$
\sup_{x \in A_1} \int_{B(0, m R^{-\sigma})} |G^1_R(x,y)|^p dy \lesssim 1, \quad 1\le p<2.
$$
Taking $p=3/2$, by H\"older's inequality we have
$$
\sup_{x \in A_1} \Big| \int G^1_R(x,y) e^{2\pi i R\xi_0\cdot y} (\chi\zeta)( R^\sigma y) dy\Big| \lesssim R^{-2\sigma/3}.
$$
Combining this and \eqref{KR1} we thus obtain
\begin{equation}\label{2000}
\|\mathcal{K}_{R}^{1}[f_\rho]\|_{L^2(A_R)} \lesssim R^{-1+2\sigma/3},
\end{equation}
which yields \eqnref{foriii} if we take a $\sigma>0$ small enough.
We now show (ii). We prove
$$
\| \mathcal{K}_{R}^{2}[f_\rho] - \frac{1}{2} P_2 * f_\rho\|_{\omega_R} \to 0 \quad\mbox{as } R \to \infty.
$$
Then (ii) follows by Lemma \ref{lem:limit} (ii). As before, thanks to \eqnref{BRint1} and \eqnref{BRint2} it suffices to show
\begin{equation}\label{foriv}
\Big\|\mathcal{K}_{R}^{2}[f_\rho] - \frac{1}{2} P_2 * [f_\rho]\Big\|_{L^2(A_R)} \to 0 \quad\mbox{as } R \to \infty.
\end{equation}
Let us set
\[ K_R^{2,e}(x,y)= \frac{1}{2\pi} \frac{\gamma(x)}{\big[ |x-y|^2 + (\gamma(x) +\gamma(y))^2 \big]^{3/2} }.\]
We note from \eqref{K1def} and \eqref{K2def} that
$$
K_{R}^{2}(x,y)= \widetilde{K}_{R}^{1}(x,y)+ K_R^{2,e}(x,y),
$$
where $\widetilde{K}_{R}^{1}$ has the same numerator as $K_{R}^{1}$ but the denominator of $K_R^{2,e}$, namely,
$$
\widetilde{K}_R^{1}(x,y)= -\frac{1}{4\pi} \frac{\frac{1}{R^2\gamma(y)} \sum_{j=1}^2 (x_j-y_j) \frac{y_j}{a_j^2} + (\gamma(x) - \gamma(y)) } { \big[ |x-y|^2 + (\gamma(x) +\gamma(y))^2 \big]^{3/2} }.
$$
Since $|\widetilde{K}_R^{1}(x,y)| \le |K_R^{1}(x,y)|$ pointwise, the bound \eqnref{3000} also holds for the rescaled kernel $R^3\widetilde{K}_R^{1}(Rx,Ry)$, and hence the estimate \eqref{2000} remains valid with $\mathcal{K}_{R}^{1}$ replaced by the operator $\widetilde{\mathcal{K}}_{R}^{1}$ defined by the kernel $\widetilde{K}_R^{1}$. Thus, writing
$$
\mathcal{K}_{R}^{2}[f_\rho]= \widetilde{\mathcal{K}}_{R}^{1}[f_\rho]+ \mathcal{K}_{R}^{2,e}[f_\rho],
$$
where $\mathcal{K}_{R}^{2,e}$ denotes the operator defined by the integral kernel $K_R^{2,e}$, proving \eqnref{foriv} is reduced to proving
\begin{equation}\label{foriv2}
\Big\|\mathcal{K}_{R}^{2,e}[f_\rho] - \frac{1}{2} P_2 * [f_\rho]\Big\|_{L^2(A_R)} \to 0 \quad\mbox{as } R \to \infty.
\end{equation}
For simplicity we set
\begin{align*}
J(x)&= \mathcal{K}_{R}^{2,e}[f_\rho](x) - \frac{1}{2} P_2 * [f_\rho](x),
\\
K(x,y)&= K_R^{2,e}(x,y) - \frac{1}{2} P_2(x-y).
\end{align*}
So, we have
$$
J(x) = \int K(x,y) f_\rho(y) dy.
$$
Let $\sigma'$ be a number to be determined later, but satisfying $0< \sigma' <\sigma$. If $x \in A_R \setminus B(0, R^{1-\sigma'})$ and $y$ is in the support of $f_\rho$, namely, $y \in B(0, m R^{1-\sigma})$, then
$
|x-y| \gtrsim R^{1-\sigma'}.
$
Hence, we have
$$
|K(x,y)| \lesssim |K_R^{2,e}(x,y)| + |P_2(x-y)| \lesssim R^{-3+3\sigma'}.
$$
By (i) of Lemma \ref{lem:limit} we see that
$$
|J(x)| \lesssim R^{-3+3\sigma'} |B(0, m R^{1-\sigma})|^{1/2} \| f_\rho \|_2 \lesssim R^{-2+3\sigma'-\sigma}.
$$
Therefore, we have
$$
\int_{A_R \setminus B(0, R^{1-\sigma'})} |J(x)|^2 dx \lesssim R^{6\sigma'-2\sigma-2}.
$$
We choose $\sigma'$ so that $3\sigma' <\sigma$. Then we see
\begin{equation}\label{2300}
\int_{A_R \setminus B(0, R^{1-\sigma'})} |J(x)|^2 dx \to 0 \quad\mbox{as } R \to \infty.
\end{equation}
To handle the remaining part we only need to consider $y \in B(0, m R^{1-\sigma})$ and $x \in B(0, R^{1-\sigma'})$. We write
\begin{align*}
2\pi K(x,y) &= \frac{\gamma(x)-1}{\big[ |x-y|^2 + (\gamma(x) +\gamma(y))^2 \big]^{3/2} } \\
& \qquad + \Big( \frac{1}{\big[ |x-y|^2 + (\gamma(x) +\gamma(y))^2 \big]^{3/2}} - \frac{1}{\big[ |x-y|^2 + 2^2 \big]^{3/2}} \Big).
\end{align*}
Since $|\gamma(x)-1| \lesssim R^{-2}|x|^2 \lesssim R^{-2\sigma'}$ for $x \in B(0, R^{1-\sigma'})$, one can easily see that the absolute values of the first and the second terms in the right hand side are respectively bounded by $R^{-2\sigma'} k_3(x-y)$ and $R^{-2\sigma'} k_5(x-y)$ for $y \in B(0, m R^{1-\sigma})$ and $x \in B(0, R^{1-\sigma'})$, where we denote
$$
k_n(x)= \frac{1}{(|x|^2+1)^{n/2}}.
$$
Thus, we have $
|J(x)| \lesssim R^{-2\sigma'} (k_3 * |f_\rho| + k_5 * |f_\rho|)
$
for $x \in B(0, R^{1-\sigma'})$. Therefore,
$$
\int_{B(0, R^{1-\sigma'})} |J(x)|^2 dx \lesssim R^{-4\sigma'} (\| k_3 * |f_\rho| \|_2^2 + \| k_5 * |f_\rho| \|_2^2).
$$
Since $k_n$ ($n>2$) is integrable, applying Young's convolution inequality we obtain
$$
\int_{B(0, R^{1-\sigma'})} |J(x)|^2 dx \lesssim R^{-4\sigma'} \| f_\rho \|_2^2 \lesssim R^{-4\sigma'}.
$$
This together with \eqnref{2300} yields \eqnref{foriv2}, and hence \eqnref{foriv}. This completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm:oblate}}
Let $\lambda \in [-1/2,1/2]\setminus \{0\}$. Let $f_\rho$ denote the function given by \eqnref{fRx} with $\rho=R^{1-\sigma}$ for some $\sigma \in (0,1)$
where $\xi_0$ is given by \eqref{GLdef} with $\lambda$ replaced by $|\lambda|$. Though $f_\rho$ is slightly different from the previous one, we keep using the same notation.
We now define $\varphi_\rho$ on $\partial\Omega_R$.
If $\lambda \in (0, 1/2]$,
\begin{equation}\label{Gvfdef3D}
\varphi_\rho(X)= \varphi_\rho(x, x_3) :=
f_\rho(x), \quad X \in \Gamma^+ \cup \Gamma^-.
\end{equation}
If $\lambda \in [-1/2,0)$, we define
\begin{equation}\label{fodef}
\varphi_\rho(X) :=
\begin{cases}
f_\rho(x) \quad &\mbox{if } X \in \Gamma^+ , \\
-f_\rho(x) \quad &\mbox{if } X \in \Gamma^-.
\end{cases}
\end{equation}
For $\lambda \in (0, 1/2]$, it follows from \eqnref{upperform} and \eqnref{lowerform} that
$$
\mathcal{K}_{\partial\Omega_R}[\varphi_\rho](X)= \mathcal{K}_{R}^{1}[f_\rho](x) + \mathcal{K}_{R}^{2}[f_\rho](x)
, \quad X \in \Gamma^+ \cup \Gamma^-.
$$
As a result, we have
$$
\lambda \varphi_\rho(X) - \mathcal{K}_{\partial\Omega_R}[\varphi_\rho](X)= -\mathcal{K}_{R}^{1}[f_\rho](x) + (\lambda f_\rho - \mathcal{K}_{R}^{2}[f_\rho])(x), \quad X \in \Gamma^+ \cup \Gamma^-.
$$
When $\lambda \in [-1/2,0)$, we similarly have
$$
\mathcal{K}_{\partial\Omega_R}[\varphi_\rho](X)=
\begin{cases}
\mathcal{K}_{R}^{1}[f_\rho](x) - \mathcal{K}_{R}^{2}[f_\rho](x) \quad &\mbox{if } X \in \Gamma^+ , \\
-\mathcal{K}_{R}^{1}[f_\rho](x) + \mathcal{K}_{R}^{2}[f_\rho](x) \quad &\mbox{if } X \in \Gamma^-,
\end{cases}
$$
and, consequently,
$$
\lambda \varphi_\rho(X) - \mathcal{K}_{\partial\Omega_R}[\varphi_\rho](X)=
\begin{cases}
-\mathcal{K}_{R}^{1}[f_\rho](x) + (\lambda f_\rho(x) + \mathcal{K}_{R}^{2}[f_\rho](x)) \quad &\mbox{if } X \in \Gamma^+ , \\
\mathcal{K}_{R}^{1}[f_\rho](x) - (\lambda f_\rho(x) + \mathcal{K}_{R}^{2}[f_\rho](x)) \quad &\mbox{if } X \in \Gamma^-.
\end{cases}
$$
In either case, we therefore have
$$
\big \| \lambda \varphi_\rho - \mathcal{K}_{\partial\Omega_R}[\varphi_\rho] \big \|_{L^2(\partial\Omega_R)} \le \big \| \mathcal{K}_{R}^{1}[f_\rho] \big \|_{\omega_R} + \big \| |\lambda| f_\rho - \mathcal{K}_{R}^{2}[f_\rho] \big \|_{\omega_R}.
$$
Since $\| \varphi_\rho \|_{L^2(\partial\Omega_{R})} \sim \| f_\rho \|_{\omega_R}$, we obtain the next proposition as an immediate consequence of Lemma \ref{lem:limit2}.
\begin{prop}\label{prop:oblate}
If $\lambda \in [-1/2,0) \cup (0, 1/2]$, then
\begin{equation}\label{limit}
\lim_{R \to \infty} \frac{\| ( \lambda I - \mathcal{K}_{\partial\Omega_R}) [\varphi_\rho] \|_{L^2(\partial\Omega_{R})}}{\| \varphi_\rho \|_{L^2(\partial\Omega_{R})}} =0.
\end{equation}
\end{prop}
Once we have this proposition,
the rest of the proof of Theorem \ref{thm:oblate} is the same as that of Proposition \ref{prop:prolate} in subsection \ref{subsec:proofpro}. So, we omit the details.
\section{Proof of Theorem \ref{thm:thinflat}}\label{sec:4}
For $\Sigma^\pm$ and $\Sigma^s$ given in \eqnref{Fboundary}, we define
$$
\Sigma_R^\pm= \{(Rx, x_3): X=(x, x_3) \in \Sigma^\pm \}
$$
and $\Sigma_R^s$ likewise. Then, we have
$$
\partial \Phi_R = \Sigma_R^+ \cup \Sigma_R^- \cup \Sigma_R^s.
$$
Let $U_R= \{Rx: x\in U \}$, which is the projection of $\Sigma_R^\pm$ onto the $x$-plane.
If a function $\varphi$ defined on $\partial \Phi_R$ is supported in $\Sigma_R^+ \cup \Sigma_R^-$, then we write
\begin{align*}
\mathcal{K}_{\partial\Phi_R} [\varphi](x, x_3) & = -\frac{1}{4\pi} \int_{\mathbb{R}^2} \frac{x_3-1}{[|x-y|^2 + (x_3-1)^2]^{3/2}} \varphi^+(y) dy \\
&\quad + \frac{1}{4\pi} \int_{\mathbb{R}^2} \frac{x_3+1}{[|x-y|^2 + (x_3+1)^2]^{3/2}} \varphi^-(y) dy,
\end{align*}
where $\varphi^\pm(y)=\varphi(y, \pm 1)$. Thus, if $x \in U_R$, then
\begin{equation}\label{4100}
\mathcal{K}_{\partial\Phi_R} [\varphi](x, \pm 1) = \frac{1}{2} (P_2 * \varphi^\mp)(x).
\end{equation}
Let $f_\rho$ be the function defined by \eqnref{fRx} (the number $m$ used to define $f_\rho$ is chosen so that $B(0,m) \subset U$ in this case). By slightly modifying the proof of Lemma \ref{lem:limit}, one can prove the following lemma. Note that here we use the $H^{1/2}$ norm since $\partial\Phi_R$ is only assumed to be Lipschitz continuous.
\begin{lemma}\label{lem:limitGF}
Let $\lambda \in (0, 1/2]$ and $f_\rho$ be defined by \eqnref{fRx} with $\xi_0$ satisfying \eqnref{GLdef}. The following hold:
\begin{itemize}
\item[{\rm (i)}] $\| f_\rho \|_{H^{1/2}(\mathbb{R}^2)} \sim 1$.
\item[{\rm (ii)}] $\| \lambda f_\rho -\frac{1}{2}P_2 *f_\rho \|_{H^{1/2}(\mathbb{R}^2)} \to 0$ as $\rho \to \infty$.
\end{itemize}
\end{lemma}
Let $\lambda \in [-1/2,1/2]$ ($\lambda \neq 0$) and let $f_\rho$ be the function defined in \eqnref{fRx} corresponding to $|\lambda|$. Let $\rho=R^{1-\sigma}$ for some $\sigma \in (0,1)$. In the same manner as \eqnref{Gvfdef3D} and \eqnref{fodef}, we define $\varphi_\rho$ on $\partial\Phi_R$:
\begin{align}
\label{GvfGF}
\varphi_\rho(X)&= \varphi_\rho(x, x_3) :=
\begin{cases}
f_\rho(x) \qquad &\mbox{if } X \in \Sigma_R^+ \cup \Sigma_R^-, \\
0 \qquad &\mbox{if } X \in \Sigma_R^s,
\end{cases}
\qquad \lambda \in (0, 1/2],
\\[4pt]
\label{foGF}
\varphi_\rho(X)&= \varphi_\rho(x, x_3) :=
\begin{cases}
f_\rho(x) \quad &\mbox{if } X \in \Sigma_R^+ , \\
-f_\rho(x) \quad &\mbox{if } X \in \Sigma_R^-, \\
0 \quad &\mbox{if } X \in \Sigma_R^s,
\end{cases}
\qquad\qquad \ \lambda \in [-1/2,0).
\end{align}
The following proposition, which is analogous to Proposition \ref{prop:oblate}, yields Theorem \ref{thm:thinflat} in the same way as Proposition \ref{prop:oblate} yields Theorem \ref{thm:oblate}.
\begin{prop}\label{prop:thinflat}
Let $\rho=R^{1-\sigma}$ for some $\sigma \in (0,1)$. If $\lambda \in [-1/2,0) \cup (0, 1/2]$, then
\begin{equation}\label{limitGF}
\lim_{R \to \infty} \frac{\| ( \lambda I - \mathcal{K}_{\partial\Phi_R}) [\varphi_\rho] \|_{H^{1/2}(\partial\Phi_{R})}}{\| \varphi_\rho \|_{H^{1/2}(\partial\Phi_{R})}} =0.
\end{equation}
\end{prop}
\begin{proof}
By (i) in Lemma \ref{lem:limitGF},
$
\| \varphi_\rho \|_{H^{1/2}(\partial\Phi_R)} \sim \| f_\rho \|_{H^{1/2}(\mathbb{R}^2)} \sim 1.
$
So, it suffices to show
\begin{equation}\label{GFbounded}
\lim_{R \to \infty} \| (\lambda I- \mathcal{K}_{\partial\Phi_R})[\varphi_{\rho}] \|_{H^{1/2}(\partial\Phi_R)} = 0.
\end{equation}
For $c>0$ we set
$$
U_R^c := \{ (x,x_3) : x \in U_R, \ \mbox{dist}(x, \partial U_R) \ge c \}.
$$
Let $C$ be a constant such that $\Gamma^s \subset \mathbb{R}^3 \setminus U_R^C$ where
$\Gamma^s$ denotes the projection of $\Sigma_R^s$ onto the $x$-plane. Note that we can choose such a constant independently of $R>1$.
Let $\chi_1(X) = \chi_1(x,x_3) =\chi_1(x)$ be a smooth function supported in $U_R^C $ such that $\chi_1=1$ on $U_R^{2C}$, and let $\chi_2:=1-\chi_1$. Then we have
$$
\| (\lambda I- \mathcal{K}_{\partial\Phi_R})[\varphi_\rho] \|_{H^{1/2}(\partial\Phi_R)} \le \sum_{j=1}^2 \| \chi_j(\lambda I- \mathcal{K}_{\partial\Phi_R})[\varphi_\rho] \|_{H^{1/2}(\partial\Phi_R)} .
$$
Since, by \eqnref{4100}, $\chi_1(\lambda I- \mathcal{K}_{\partial\Phi_R})[\varphi_\rho]$ coincides on each of $\Sigma_R^{\pm}$, up to a sign, with $\chi_1\big( |\lambda| f_\rho -\frac{1}{2} P_2 * f_\rho \big)$, it follows from (ii) of Lemma \ref{lem:limitGF} that
\begin{equation}
\| \chi_1(\lambda I- \mathcal{K}_{\partial\Phi_R})[\varphi_\rho] \|_{H^{1/2}(\partial\Phi_R)} \to 0 \ \text{ as } \ R \to \infty.
\end{equation}
To estimate $\| \chi_2(\lambda I- \mathcal{K}_{\partial\Phi_R})[\varphi_\rho] \|_{H^{1/2}(\partial\Phi_R)}$, let
$$
\Gamma:= \partial \Phi_R \cap (\mathbb{R}^3 \setminus U_R^{2C}).
$$
Then we have
$$
\| \chi_2(\lambda I- \mathcal{K}_{\partial\Phi_R})[\varphi_\rho] \|_{H^{1/2}(\partial\Phi_R)} = \| \chi_2 \mathcal{K}_{\partial\Phi_R}[\varphi_\rho] \|_{H^{1/2}(\Gamma)}.
$$
Note that the shape of $\Gamma$ is independent of $R$.
We use the following characterization of the space $H^{1/2}(\Gamma)$ (see, e.g., \cite{GT}):
\begin{equation}\label{def_H1/2}
\| h \|_{H^{1/2}(\Gamma)}^2= \| h \|_{L^2(\Gamma)}^2 + \int_{\Gamma} \int_{\Gamma} \frac{|h(X)-h(Z)|^2}{|X-Z|^3} dS(X) dS(Z).
\end{equation}
Let $K(X,Y)$ be the integral kernel of $\mathcal{K}_{\partial\Phi_R}$, namely,
$$
K(X,Y) = \frac{1}{4\pi} \frac{\langle Y-X, \nu(Y) \rangle}{|X-Y|^3}.
$$
If $X, Z \in \mbox{supp}(\chi_2)$ and $Y \in \mbox{supp}(\varphi_{\rho})$, then
$$
|X-Z| \lesssim 1, \quad |X-Y| \gtrsim R, \quad |Z-Y| \gtrsim R.
$$
Thus, $|K(X,Y)| \lesssim R^{-2}$. It then follows from \eqnref{fRx} that
\begin{equation}\label{100}
\| \chi_2 \mathcal{K}_{\partial\Phi_R}[\varphi_\rho] \|_{L^2(\Gamma)} \lesssim R^{-2} \int_{\mathbb{R}^2} |f_\rho(y)|dy \lesssim R^{-1-\sigma} \int_{\mathbb{R}^2} |(\chi\zeta)(y)|dy \lesssim R^{-1-\sigma}.
\end{equation}
We also have
\begin{align*}
&|\chi_2(X) K(X,Y) - \chi_2(Z) K(Z,Y)| \\
& \le |\chi_2(X)- \chi_2(Z)| |K(X,Y)| + |\chi_2(Z)| |K(X,Y)-K(Z,Y)| \\
& \lesssim R^{-2} |X- Z| + R^{-3} |X- Z| \lesssim R^{-2} |X- Z|.
\end{align*}
Thus,
\begin{align*}
|\chi_2(X) \mathcal{K}_{\partial\Phi_R}[\varphi_\rho](X) - \chi_2(Z) \mathcal{K}_{\partial\Phi_R}[\varphi_\rho](Z)|
&\lesssim R^{-2}|X- Z| \int_{\mathbb{R}^2} |f_\rho(y)|dy \\
&\le R^{-1-\sigma} |X- Z| \int_{\mathbb{R}^2} |(\chi\zeta)(y)|dy \\
& \lesssim R^{-1-\sigma} |X- Z|.
\end{align*}
It then follows that
$$
\int_{\Gamma} \int_{\Gamma} \frac{|\chi_2(X) \mathcal{K}_{\partial\Phi_R}[\varphi_{\rho}](X) - \chi_2(Z) \mathcal{K}_{\partial\Phi_R}[\varphi_{\rho}](Z)|^2}{|X- Z|^3} dS(X) dS(Z) \lesssim R^{-1-\sigma},
$$
which together with \eqnref{100} implies \eqnref{GFbounded}.
This completes the proof.
\end{proof}
\section*{Acknowledgments}
We thank Graeme Milton for useful discussion on negative NP eigenvalues.
| {
"timestamp": "2021-10-12T02:18:17",
"yymm": "2110",
"arxiv_id": "2110.04716",
"language": "en",
"url": "https://arxiv.org/abs/2110.04716",
"abstract": "We investigate the spectral structure of the Neumann-Poincaré operator on thin ellipsoids. Two types of thin ellipsoids are considered: long prolate ellipsoids and flat oblate ellipsoids. We show that the totality of eigenvalues of the Neumann-Poincaré operators on a sequence of the prolate spheroids is densely distributed in the interval [0,1/2] as their eccentricities tend to 1, namely, as they become longer. We then prove that eigenvalues of the Neumann-Poincaré operators on the oblate ellipsoids are densely distributed in the interval [-1/2, 1/2] as the ellipsoids become flatter. In particular, this shows that even if there are at most finitely many negative eigenvalues on the oblate ellipsoids, more and more negative eigenvalues appear as the ellipsoids become flatter. We also show a similar spectral property for flat three dimensional domains.",
"subjects": "Spectral Theory (math.SP)",
"title": "Spectral structure of the Neumann-Poincaré operator on thin ellipsoids and flat domains",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969694071564,
"lm_q2_score": 0.7217432062975978,
"lm_q1q2_score": 0.7099044304045212
} |
https://arxiv.org/abs/1011.4490 | On the constant in Burgess' bound for the number of consecutive residues or non-residues | We give an explicit version of a result due to D. Burgess. Let $\chi$ be a non-principal Dirichlet character modulo a prime $p$. We show that the maximum number of consecutive integers for which $\chi$ takes on a particular value is less than $\left\{\frac{\pi e\sqrt{6}}{3}+o(1)\right\}p^{1/4}\log p$, where the $o(1)$ term is given explicitly. | \section{Introduction}\label{S:1}
Let $\chi$ be a non-principal Dirichlet character to the prime modulus $p$.
In 1963, D.~Burgess showed (see~\cite{bib:burgess.1963}) that the maximum number of consecutive integers for which $\chi$ takes
on any particular value is $O(p^{1/4}\log p)$.
This still constitutes the best known asymptotic upper bound on this quantity.
However, in some applications, one needs a more explicit result.
Following the general lines of his original argument and
making careful estimates throughout,
we prove an explicit version of Burgess' theorem (see Theorem~\ref{T:3} and Corollary~\ref{C:1}),
thereby obtaining the following:
\begin{theorem}\label{T:1}
If $\chi$ is any non-principal Dirichlet character to the prime modulus $p$
which is constant on $(N,N+H]$, then
$$
H<\left\{\frac{\pi e\sqrt{6}}{3}+o(1)\right\}
p^{1/4}\log p
\,.
$$
\end{theorem}
We note that the constant $(\pi e\sqrt{6})/3$ is approximately $6.97$.
As we have an explicit bound on the $o(1)$ term when $p$ is large,
we are able to obtain the following result which is more useful in applications:
\begin{theorem}\label{T:2}
If $\chi$ is any non-principal Dirichlet character to the prime modulus $p$
which is constant on $(N,N+H]$, then
$$
H
<
\begin{cases}
7.06\,p^{1/4}\log p
\,,\;\;
&
\text{for }p\geq 5\cdot 10^{18}
\\[0.5ex]
7\,p^{1/4}\log p
\,,\;\;
&
\text{for }p\geq 5\cdot10^{55}
\end{cases}
\,.
$$
\end{theorem}
For the special case of $N=0$, which amounts to giving a bound on the smallest non-residue of $\chi$
(i.e., the smallest $n$ such that $\chi(n)\neq 1$),
K.~Norton proves a result analogous to Theorem~\ref{T:2}
which holds for all $p$ with a constant of $4.7$ (see~\cite{bib:norton.1971}).
In addition, a result for arbitrary $N$, similar to the one given in Theorem~\ref{T:2} is stated,
but not proved in~\cite{bib:norton.1973}.
R.~Hudson (see~\cite{bib:hudson.1974}) cites a result slightly improving the
one stated in~\cite{bib:norton.1973} to appear in a future paper,
but the present author cannot locate the purported proof.
It seems a worthwhile endeavor to put down such a proof as it is possible that some
authors avoid using the result in~\cite{bib:norton.1973} due to the lack of proof
(see, for example~\cite{bib:hummel.2003}), while others (see~\cite{bib:hudson.1974})
use the result for further derivations.
To our knowledge, this is the first proof to appear in the literature which makes
the constant in Burgess' theorem explicit.
\\
It is perhaps useful here to comment briefly on the connection between Dirichlet characters and power residues.
Fix an integer $k\geq 2$.
We say that $n\in\mathbb{Z}$ is a $k$-th power residue modulo $p$ if
$(n,p)=1$ and the equation $x^k\equiv n\pmod{p}$ is soluble in $x$.
Suppose $\chi$ is any Dirichlet character modulo $p$ of order $(k,p-1)$.
One can easily show that
$\chi(n)=1$ if and only if $n$ is a $k$-th power residue modulo $p$.
Here we might as well assume $(k,p-1)>1$, or else every integer is a $k$-th power residue modulo $p$
and the only such $\chi$ is the principal character.
If we denote by $C_p=(\mathbb{Z}/p\mathbb{Z})^\star$ the multiplicative group consisting
of the integers modulo $p$ and by $C_p^k$ the subgroup of $k$-th powers modulo $p$, then the value of
$\chi(n)$ determines to which coset of $C_p/C_p^k$ the integer $n$ belongs.
In light of this, theorems~\ref{T:1} and \ref{T:2} also give estimates
(which are the best known)
on the maximum number of consecutive integers that belong to
a given coset of $C_p/C_p^k$.
\\
We should also mention that Burgess' well-known character sum estimate (see~\cite{bib:burgess.1962}) gives a bound
on the quantity (in the title of the paper) of $O(p^{1/4+\varepsilon})$.
However, the constant associated to the $O$-symbol depends on $\varepsilon$ and hence,
although there are explicit versions of Burgess' character sum estimate available (see~\cite{bib:IW}),
theorems~\ref{T:1} and~\ref{T:2} would not follow from
this.
\\
The main idea behind Burgess' proof is to combine upper and lower bounds for the sum:
$$
S(\chi,h,r):= \sum_{x=0}^{p-1}
\left|
\sum_{m=1}^h
\chi(x+m)
\right|^{2r}
$$
In Lemma~\ref{L:1C} of \S\ref{S:2} we give an upper bound for $S(\chi,h,r)$ in terms of $r$ and $h$.
In Proposition~\ref{P:1} of \S\ref{S:3} we give a lower
bound on $S(\chi,h,r)$ in terms of $h$ and $H$, under some additional hypotheses on $H$.
Combining these results, we obtain an upper bound on $H$ in terms of $r$ and $h$ under the same hypotheses;
this result is also given as part of Proposition~\ref{P:1}.
Then, in \S\ref{S:4} we prove our main result (see Theorem~\ref{T:3}) by invoking Proposition~\ref{P:1} with a careful choice of parameters.
Finally, by performing some simple numerical computations, we show that that the extra hypothesis on $H$ can be
dropped when $p$ is large enough (see Corollary~\ref{C:1}); theorems~\ref{T:1} and \ref{T:2} will then follow immediately.
\section{An Upper Bound on $S(\chi,h,r)$}\label{S:2}
The following character sum estimate was first given by A. Weil, as a consequence of his
deep work on the Riemann hypothesis for function fields (see~\cite{bib:weil}).
It is also proved as Theorem~2C' in \cite{bib:schmidt}
using an elementary method due to S.~Stepanov (see~\cite{bib:stepanov}),
which was later extended by both E.~Bombieri (see~\cite{bib:bombieri.1973}) and W. Schmidt (see~\cite{bib:schmidt.1973}).
\begin{lemma}\label{L:Weil}
Let $\chi$ be a non-principal Dirichlet character to the prime modulus $p$,
having order $n$.
Let $f(x)\in\mathbb{Z}[x]$ be a polynomial with $m$ distinct roots
which is not an $n$-th power in $\mathbb{F}_p[x]$,
where $\mathbb{F}_p$ denotes the finite field with $p$ elements.
Then
$$
\left|
\sum_{x\in\mathbb{F}_p}
\chi(f(x))
\right|
\leq
(m-1)\;
p^{1/2}
\,.
$$
\end{lemma}
The next lemma is a slight improvement over
Lemma 2 in~\cite{bib:burgess.1962} which gives
an upper bound on $S(\chi,h,r)$. The proof is not difficult if we grant ourselves
Lemma~\ref{L:Weil}.
\begin{lemma}\label{L:1C}
Suppose $\chi$ is any non-principal Dirichlet character to the prime modulus $p$.
If $r,h\in\mathbb{Z}^+$, then
$$
S(\chi,h,r)
<
\frac{1}{4}(4r)^rp h^r
+
(2r-1)p^{1/2}h^{2r}
\,.
$$
\end{lemma}
\noindent\textbf{Proof.}
First we claim that we may assume, without loss of generality,
that \mbox{$r<h<p$}. We commence by observing
that $h=p$ implies $S(\chi,h,r)=0$, in which case there is nothing to prove.
We see that $h>p$ implies $S(\chi,h-p,r)=S(\chi,h,r)$, which allows us
to inductively bring $h$ into the range $0<h<p$.
Additionally, we notice that if $h\leq r$, then the theorem
is trivial since in this case we would have
$S(\chi,h,r)\leq h^{2r}p\leq (hr)^rp$.
This establishes the claim.
Now, to begin the proof proper,
we observe that
$$
S(\chi,h,r)
=
\sum_{1\leq m_1,\dots,m_{2r}\leq h}\;\;
\sum_{x=0}^{p-1}
\chi(x+m_1)\dots\chi(x+m_r)\overline{\chi}(x+m_{r+1})\dots\overline{\chi}(x+m_{2r})
\,.
$$
Define
$$
\mathcal{M}:=\{\mathbf{m}=(m_1,\dots,m_{2r})\mid 1\leq m_1,\dots,m_{2r}\leq h\}
\,.
$$
We can rewrite the above as
$$
S(\chi,h,r)=
\sum_{\mathbf{m}\in\mathcal{M}}
\sum_{x\in\mathbb{F}_p}
\chi(f_\mathbf{m}(x))
\,,
$$
where
$$
f_\mathbf{m}(x)
=
(x+m_1)\dots(x+m_r)(x+m_{r+1})^{n-1}(x+m_{2r})^{n-1}
\,,
$$
and $n$ denotes the order of $\chi$.
If $f_\mathbf{m}(x)$ is not an $n$-th power mod $p$, then by Lemma~\ref{L:Weil}
we have
$$
\left|
\sum_{x\in\mathbb{F}_p}
\chi(f_\mathbf{m}(x))
\right|
\leq
(2r-1)\sqrt{p}
\,.
$$
Otherwise, we must settle for the trivial bound of $p$.
It remains to count the number of exceptions -- that is, the number of $\mathbf{m}\in\mathcal{M}$ such that
$f_\mathbf{m}(x)$ is an $n$-th power mod $p$.
A little care is required here -- as an example, if $r=n=3$ and $p\geq 5$, then the vectors
$\mathbf{m}=(1,2,3,1,2,3)$ and $\mathbf{m}=(1,1,1,2,2,2)$ are both exceptions, but the way in which they
arise is slightly different;
as $r$ gets larger compared to $n$, the situation only gets worse.
In light of this difficulty, we will actually count (as Burgess does in~\cite{bib:burgess.1963}) the number of
$\mathbf{m}=(m_1,\dots,m_{2r})\in\mathcal{M}$ such that each $m_j$ is repeated at least
once.
We let $u$ denote the number of distinct $m_j$ (so that $u\leq r<h$)
and denote by \mbox{$1=j_1<j_2<\dots<j_u\leq 2r$} the indices corresponding to the first
occurrence of each of the $u$ values among the $m_j$.
The number of ways to choose the $j_k$ is bounded by $\binom{2r-1}{u-1}$,
and there are at most $h$ choices for each $m_{j_k}$ while the remaining $m_{j}$ are restricted
to at most $u$ values.
In light of all this, we find that
the number of exceptions
is bounded above by
\begin{eqnarray*}
\sum_{u=1}^r\binom{2r-1}{u-1}h^uu^{2r-u}
&\leq&
(hr)^r\sum_{u=1}^r\binom{2r-1}{u-1}\left(\frac{u}{h}\right)^{r-u}
\;\leq\;
(hr)^r\sum_{u=1}^r\binom{2r-1}{u-1}
\,.
\end{eqnarray*}
Finally, to complete the proof, we observe
$$
(hr)^r\sum_{u=1}^r\binom{2r-1}{u-1}
\;=\;
(hr)^r2^{2r-2}
\;=\;
\frac{1}{4}
(4rh)^r
\,.
\;\;
\text{$\blacksquare$}
$$
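For small parameters one can also check the bound of Lemma~\ref{L:1C} by brute force; the following Python sketch (an illustration of ours) does so for one choice of $p$, $h$, $r$ and the quadratic character.
\begin{verbatim}
# Brute-force evaluation of S(chi,h,r) for the quadratic character mod p,
# compared with the bound of the lemma above; p, h, r are illustrative choices.
p, h, r = 97, 5, 2

def chi(a):
    a %= p
    if a == 0:
        return 0
    return -1 if pow(a, (p - 1) // 2, p) == p - 1 else 1

S = sum(abs(sum(chi(x + m) for m in range(1, h + 1))) ** (2 * r)
        for x in range(p))
bound = 0.25 * (4 * r) ** r * p * h ** r + (2 * r - 1) * p ** 0.5 * h ** (2 * r)
assert S < bound
print(S, bound)
\end{verbatim}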
\section{A Lower Bound on $S(\chi,h,r)$}\label{S:3}
In obtaining the desired lower bound, the idea is to locate a large number of intervals on which $\chi$ is constant.
The next two lemmas will be useful in accomplishing this end.
The following lemma makes the error term in Lemma 3 of~\cite{bib:burgess.1963} explicit
and improves the main constant from
$1-\pi^2/12\approx 0.178$ to $3/\pi^2\approx 0.304$.
\begin{lemma}\label{L:3B}
Let $X\geq 7$. If $a,b\in\mathbb{Z}$ are coprime with $a\geq 1$, then there are at least
$$
X^2
\left(\frac{3}{\pi^2}-\frac{\log X}{2X}-\frac{1}{X}-\frac{1}{2X^2}\right)
$$
distinct numbers of the form
$$
\frac{at+b}{q}
$$
where $0\leq t<q\leq X$.
\end{lemma}
\noindent\textbf{Proof.}
As in \cite{bib:burgess.1963}, we observe that
$
\#\{q^{-1}(at+b) \mid 0\leq t<q\leq X\}
$
is bounded below by
\begin{eqnarray*}
\sum_{q\leq X}\sum_{\substack{0\leq t<q\\(at+b,q)=1}}
1
&=&
\sum_{q\leq X}
\;
\sum_{0\leq t<q}
\;
\sum_{m|(at+b,q)}
\mu(m)
\end{eqnarray*}
Writing $q=rm$ allows us to rewrite the sum above as
\begin{eqnarray*}
\sum_{m\leq X}\mu(m)
\sum_{r\leq X/m}
\sum_{\substack{0\leq t<rm\\at\equiv -b\pod m}}
1
\,.
\end{eqnarray*}
Since $(a,b)=1$, the congruence $at\equiv -b\pmod m$ has a solution if and only if $(m,a)=1$.
Therefore we can rewrite our sum in the following way:
\begin{eqnarray*}
\sum_{\substack{m\leq X\\ (m,a)=1}}
\mu(m)
\sum_{r\leq X/m}
\sum_{\substack{0\leq t<rm\\at\equiv -b\pod m}}
1
&=&
\sum_{\substack{m\leq X\\ (m,a)=1}}
\mu(m)
\sum_{r\leq X/m}
r
\end{eqnarray*}
A careful lower estimate of the sum on the right-hand side above
will give the desired result.
Using the identity
$$
\sum_{r\leq Y}r=\frac{Y^2}{2}+\frac{Y}{2}\theta_Y
\,,
\quad
\theta_Y\in[-1,1]
\,,
$$
which holds for $Y>0$, we obtain
\begin{eqnarray}\label{E:BigMu}
&&
\sum_{\substack{m\leq X\\ (m,a)=1}}
\mu(m)
\sum_{r\leq X/m}
r
\\
\nonumber
&&\qquad\qquad\qquad
=
\;\;
\frac{X^2}{2}
\sum_{\substack{m\leq X\\ (m,a)=1}}
\frac{\mu(m)}{m^2}
\;+\;
\frac{X}{2}
\sum_{\substack{m\leq X\\ (m,a)=1}}
\frac{\mu(m)}{m}\theta_{X/m}
\,.
\end{eqnarray}
Let $\zeta(s)$ denote the Riemann zeta function.
When $s>1$, we have
\begin{eqnarray*}
\sum_{\substack{m=1\\(m,a)=1}}^\infty
\mu(m) m^{-s}
\;=\;
\zeta(s)^{-1}\prod_{p | a}(1-p^{-s})^{-1}
\;\geq\;
\zeta(s)^{-1}
\,,
\end{eqnarray*}
and the tail of the series is bounded in absolute value by
\begin{eqnarray*}
\sum_{m>X}m^{-s}
\leq
\frac{1}{X^{s}}+\frac{1}{(s-1)}\cdot \frac{1}{X^{s-1}}
\,;
\end{eqnarray*}
therefore
$$
\sum_{\substack{m\leq X\\(m,a)=1}}
\mu(m) m^{-s}
\geq
\zeta(s)^{-1}
-\frac{1}{X^{s}}-\frac{1}{(s-1)}\cdot \frac{1}{X^{s-1}}
\,.
$$
Setting $s=2$ gives
$$
\sum_{\substack{m\leq X\\(m,a)=1}}
\frac{\mu(m)}{m^2}
\geq
\zeta(2)^{-1}
-\frac{1}{X^2}-\frac{1}{X}
\,.
$$
Now we deal with the second sum on the right-hand side of (\ref{E:BigMu});
we have
$$
\vrule\;
\sum_{\substack{m\leq X\\(m,a)=1}}^{\phantom{N}}
\frac{\mu(m)}{m}\theta_{X/m}
\;\vrule
\leq
\sum_{m\leq X}\frac{1}{m}
\leq
1+\log X
\,.
$$
Summarizing, we have shown
\begin{eqnarray*}
\vrule\;
\sum_{\substack{m\leq X\\ (m,a)=1}}^{\phantom{N}}
\mu(m)
\sum_{r\leq X/m}
r
\;\vrule
&\geq&
\frac{X^2}{2}
\left(\frac{1}{\zeta(2)}-\frac{1}{X^2}-\frac{1}{X}\right)
-
\frac{X}{2}
\left(1+\log X\right)
\\
&=&
X^2
\left(\frac{1}{2\zeta(2)}-\frac{\log X}{2X}-\frac{1}{X}-\frac{1}{2X^2}\right)
\,.
\end{eqnarray*}
In light of the fact that $\zeta(2)=\pi^2/6$, we have arrived at the desired conclusion.
The reader may wonder why we never used the hypothesis that $X\geq 7$.
This hypothesis is not necessary for the truth of the conclusion,
but we include it nonetheless to ensure that our estimate is a positive number.
$\blacksquare$
\vspace{1ex}
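The count in Lemma~\ref{L:3B} is also easy to check numerically; the following Python sketch does so for illustrative values of $a$, $b$, and $X$ chosen by us.
\begin{verbatim}
# Count the distinct fractions (a*t + b)/q with 0 <= t < q <= X and compare
# with the lower bound of the lemma; a, b, X are illustrative choices.
from fractions import Fraction
from math import log, pi

a, b, X = 7, 3, 50                      # (a, b) = 1 and X >= 7
count = len({Fraction(a * t + b, q) for q in range(1, X + 1)
                                    for t in range(q)})
bound = X ** 2 * (3 / pi ** 2 - log(X) / (2 * X) - 1 / X - 1 / (2 * X ** 2))
assert count >= bound
print(count, bound)
\end{verbatim}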
Finally we will require Dirichlet's Theorem in Diophantine approximation; see, for example,
Theorem 1 in Chapter 1 of~\cite{bib:cassels}.
\begin{lemma}\label{L:2}
Let $\theta,\,A\in\mathbb{R}$ with $A>1$.
Then there exist $a,b\in\mathbb{Z}$ with $(a,b)=1$ such that
$$
0<a<A
\,,\quad
|a\theta-b|\leq A^{-1}
\,.
$$
\end{lemma}
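For concreteness, such a pair can be found by brute force; in the following Python sketch the value of $\theta$ and the bound $A$ are illustrative choices of ours.
\begin{verbatim}
# Find coprime a, b with 0 < a < A and |a*theta - b| <= 1/A by brute force;
# Dirichlet's theorem guarantees that such a pair exists.
from math import gcd, sqrt

theta, A = sqrt(2), 50.0
for a in range(1, int(A)):
    b = round(a * theta)
    if abs(a * theta - b) <= 1 / A and gcd(a, b) == 1:
        print(a, b, abs(a * theta - b))
        break
\end{verbatim}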
We are now ready to give our lower bound on $S(\chi,h,r)$.
\begin{proposition}\label{P:1}
Let $h,r\in\mathbb{Z}^+$.
Suppose $\chi$ is a non-principal Dirichlet character to the prime modulus $p$ which
is constant on $(N,N+H]$ and such that
$$
14h\leq H\leq (2h-1)^{1/3}p^{1/3}
\,.
$$
If we set $X:=H/(2h)\geq 7$, then
$$
S(\chi,h,r)\geq \left(\frac{3}{\pi^2}\right)X^2 h^{2r+1} f(X)
\,,
$$
where
$$
f(X)=1-\frac{\pi^2}{3}\left(\frac{\log X}{2X}+\frac{1}{X}+\frac{1}{2X^2}\right)
\,,
$$
and therefore
\begin{equation}
\nonumber
H
<
\frac{2\pi h}{\sqrt{3f(X)}}
\,
p^{1/4}
\left[
\frac{1}{4h}
\left(\frac{4r}{h}\right)^{r}p^{1/2}
+
\left(\frac{2r-1}{h}\right)
\right]^{1/2}
\,.
\end{equation}
Note: $f(X)$ is positive and increasing on $[7,\infty)$ and $f(X)\to 1$ as $X\to\infty$.
\end{proposition}
\noindent\textbf{Proof.}
Following the argument given in \cite{bib:burgess.1963},
we define the real interval
$$
I(q,t):=
\left(\frac{N+pt}{q},\;\; \frac{N+H+pt}{q}\right]
\,,
$$
for $0\leq t<q\leq X$.
We take note of two important properties of $I(q,t)$, which we will use later.
First, the length of $I(q,t)$ is $H/q\geq H/X=2h$. Second, $\chi$ is constant on the integers in $I(q,t)$;
indeed, writing $\zeta$ for the constant value of $\chi$ on $(N,N+H]$, any integer $z\in I(q,t)$ satisfies $qz-pt\in(N,N+H]$,
so $\chi(qz-pt)=\zeta$ and hence $\chi(z)=\overline{\chi}(q)\zeta$.
We are interested in locating a large number of non-overlapping intervals of this form.
By Lemma~\ref{L:2},
there exist coprime $a,b\in\mathbb{Z}$ such that
$1\leq a\leq H$ and
\begin{equation}\label{E:I1}
|aNp^{-1}-b|\leq 1/H
\,.
\end{equation}
One shows
that if $I(q_1,t_1)$ and $I(q_2,t_2)$ overlap, then their left endpoints differ by less than
$H/\min(q_1,q_2)$; multiplying through by $q_1q_2/p\leq X\min(q_1,q_2)/p$ then yields
\begin{equation}\label{E:I2}
|Np^{-1}(q_1-q_2)+t_2 q_1- t_1 q_2|
<
p^{-1}XH
\,.
\end{equation}
Equations
(\ref{E:I1}) and (\ref{E:I2}) yield
$$
\left|
\frac{b}{a}
(q_1-q_2)+t_2 q_1-t_1 q_2
\right|
<
\frac{XH}{p}+\frac{|q_1-q_2|}{Ha}
\leq
\frac{XH}{p}+\frac{X}{Ha}
=
\frac{H^2 a+p}{2ahp}
\,.
$$
But since $a\leq H$ and $H^3\leq (2h-1)p$ by hypothesis, we have
$$
\frac{H^2 a+p}{2ahp}
\leq
\frac{H^3+p}{2ahp}
\leq
\frac{1}{a}
\,.
$$
Hence
$$
\left|
\frac{b}{a}
(q_1-q_2)+t_2 q_1-t_1 q_2
\right|
<
\frac{1}{a}
\,,
$$
and it follows that
$I(q_1,t_1)$ and $I(q_2,t_2)$ can only overlap if
$$
\frac{a t_1+b}{q_1}=\frac{a t_2+b}{q_2}
\,.
$$
Invoking Lemma \ref{L:3B}, we find that there will be at least $(3/\pi^2)X^2 f(X)$ disjoint intervals $I(q,t)$ of the given form.
\\
Having located the desired intervals, we are ready to give a lower estimate
for $S(\chi,h,r)$.
Let $z(q,t)$ denote the smallest integer in $I(q,t)$.
Since $I(q,t)$ has length at least $2h$,
the integers $z(q,t)+n+m$, for $n=0,\dots,h-1$ and $m=1,\dots,h$, all lie in $I(q,t)$.
Moreover, as $q,t$ run through the values selected by Lemma~\ref{L:3B}, the $I(q,t)$ are disjoint.
Now, using the fact that $\chi$ is constant on each $I(q,t)$, one obtains the following bound for $S(\chi,h,r)$:
\begin{eqnarray*}
\sum_{x=0}^{p-1}
\left|
\sum_{m=1}^h\chi(x+m)\right|^{2r}
&\geq&
\sum_{q,t}\sum_{n=0}^{h-1}
\left|
\sum_{m=1}^{h}\chi(z(q,t)+n+m)
\right|^{2r}
\\
&=&
\sum_{q,t}
\sum_{n=0}^{h-1}h^{2r}
\\
&=&
h^{2r+1}\sum_{q,t} 1
\\
&\geq&
\left(\frac{3}{\pi^2}\right)X^2 h^{2r+1} f(X)
\,.
\end{eqnarray*}
Now we combine this lower bound on
$S(\chi,h,r)$ with the upper bound given in Lemma~\ref{L:1C}
to obtain
$$
\left(\frac{3}{\pi^2}\right)
\left(\frac{H}{2h}\right)^2 h^{2r+1} f(X)
<
\frac{1}{4}(4r)^{r}ph^r + (2r-1)p^{1/2}h^{2r}
\,,
$$
which implies
\begin{eqnarray*}
H^2
&<&
\frac{4\pi^2h^2}{3f(X)}
\,
p^{1/2}
\left[
\frac{1}{4h}
\left(\frac{4r}{h}\right)^{r}p^{1/2}
+
\left(\frac{2r-1}{h}\right)
\right]
\,.
\end{eqnarray*}
(We have used the fact that $f(X)>0$ for $X\geq 7$ in order to divide both sides
by $f(X)$ and preserve the inequality.)
Taking the square root of both sides yields the result.
$\blacksquare$
\section{The Main Result}\label{S:4}
\begin{theorem}\label{T:3}
Suppose $\chi$ is a non-principal Dirichlet character to the prime modulus $p\geq 5\cdot 10^4$
which is constant on $(N,N+H]$.
If
$H\leq (2e^2\log p-3)^{1/3}p^{1/3}$,
then
$$
H<
C\,g(p)\cdot p^{1/4}\log p\,,
$$
where
$$
C=\frac{\pi e\sqrt{6}}{3}\approx 6.97266
$$
and
$g(p)\to 1$ as $p\to\infty$. In fact,
$$
g(p)=
\sqrt{
f\left(\frac{Cp^{1/4}}{2e^2}\right)^{-1}
\left(
1+\frac{1}{\log p}
\right)
}
\,,
$$
where $f(X)$ is defined in Proposition~\ref{P:1}.
Note that $g(p)$ is positive and decreasing for $p\geq 5\cdot 10^4$.
\end{theorem}
Before launching the proof of Theorem~\ref{T:3}, we will establish the following:
\begin{lemma}\label{L:logs}
Let $p\geq 3$ be an integer.
Suppose that $A,B>0$ are real numbers such that $h=\lfloor A\log p\rfloor$ and $r=\lfloor B\log p\rfloor$ are positive integers with $2r+1\leq h$.
Then
$$
A\geq 4B\cdot\exp\left(\frac{1}{2B}\right)
\;\;\Longrightarrow\;\;
\frac{1}{2h}\left(\frac{4r}{h}\right)^r\leq \frac{1}{Ap^{1/2}\log p}
\,.
$$
\end{lemma}
\noindent\textbf{Proof.}
By convexity, $\log t\geq (2\log 2)(t-1)$ for all $t\in[1/2,1]$ and thus
$$
\log\left(\frac{h}{h+1}\right)\geq
\frac{-2\log 2}{h+1}
\geq
\frac{-\log 2}{r+1}
\,.
$$
This implies
$$
\frac{1}{2}\leq\left(\frac{h}{h+1}\right)^{r+1}
$$
and therefore
$$
\frac{1}{2h}\left(\frac{4r}{h}\right)^{r}
\leq
\frac{1}{h+1}
\left(\frac{4r}{h+1}\right)^r
\leq
\frac{1}{A\log p}
\left(\frac{4B}{A}\right)^r
\,.
$$
Hence, to obtain the desired implication, it suffices to show
\begin{equation}
\nonumber
\left(\frac{4B}{A}\right)^r\leq p^{-1/2}
\,.
\end{equation}
Taking logarithms, this is equivalent to
$$
r\log \left(\frac{4B}{A}\right)\leq -\frac{1}{2}\log p
\,,
$$
which follows from the inequality
$$
B\log \left(\frac{4B}{A}\right)\leq -\frac{1}{2}
\,,
$$
which is true by hypothesis.
$\blacksquare$
\vspace{2ex}
\noindent\textbf{Proof of Theorem~\ref{T:3}.}
We will suppose $H\geq Cp^{1/4}\log p$, or else there is nothing to prove.
Set $h=\lfloor A\log p\rfloor$ and $r=\lfloor B\log p\rfloor$,
where $A:=e^2$ and $B:=1/4$. The constants $A$ and $B$ were chosen so as to minimize the quantity
$AB$ subject to the constraint $A\geq 4B\exp\left(\frac{1}{2B}\right)$.
One easily checks that $14h\leq Cp^{1/4}\log p$ for our choices of $h$ and $C$, provided $p\geq 5\cdot 10^4$, and hence $14h\leq H$.
Also, we note that $H\leq(2h-1)^{1/3}p^{1/3}$ by hypothesis.
We apply Proposition~\ref{P:1} and adopt all notation relevant to its statement.
This gives:
\begin{equation}\label{E:bound}
H
<
\frac{2\pi h}{\sqrt{3f(X)}}
\,
p^{1/4}
\left[
\frac{1}{4h}
\left(\frac{4r}{h}\right)^{r}p^{1/2}
+
\left(\frac{2r-1}{h}\right)
\right]^{1/2}
\end{equation}
In order for the quantity inside the square brackets above to remain bounded as $p$ gets large,
and moreover be as small as possible, we would like
\begin{equation}\label{E:mycond}
\nonumber
\frac{1}{4h}\left(\frac{4r}{h}\right)^rp^{1/2}\to 0
\,.
\end{equation}
As the constants $A$ and $B$ were chosen to satisfy the conditions of Lemma~\ref{L:logs}
(the condition above was precisely the motivation for the lemma),
we have
$$
\frac{1}{2h}
\left(\frac{4r}{h}\right)^{r}
\leq
\frac{1}{Ap^{1/2}\log p}
\,.
$$
To give a clean bound on the quantity $(2r-1)/h$, we notice that $2r\leq h+1$ implies
$$
\frac{2r-1}{h}
\leq\frac{2r}{h+1}
\leq
\frac{2B}{A}
\,.
$$
Thus inequality (\ref{E:bound}) becomes
\begin{eqnarray*}
H
&<&
\frac{2\pi A}{\sqrt{3f(X)}}\,p^{1/4}\log p
\left[
\frac{1}{2A\log p }+\frac{2B}{A}
\right]^{1/2}
\\
&=&
p^{1/4}\log p
\left[
\frac{8\pi^2AB}{3f(X)}\left(
1+\frac{1}{4B\log p}
\right)
\right]^{1/2}
\,.
\end{eqnarray*}
Now it is plain that the asymptotic constant in the above expression is directly proportional to $\sqrt{AB}$,
which motivates our choices of $A$ and $B$.
Plugging in the values of $A$ and $B$, we obtain:
\begin{eqnarray*}
H
&<&
p^{1/4}\log p
\left[
\frac{2\pi^2e^2}{3f(X)}
\left(
1+\frac{1}{\log p}
\right)
\right]^{1/2}
\\
&=&
\frac{e\pi\sqrt{6}}{3}
\,
p^{1/4}\log p
\left[
\frac{1}{f(X)}
\left(
1+\frac{1}{\log p}
\right)
\right]^{1/2}
\end{eqnarray*}
Finally, we note that we have an a priori lower bound on $X$; namely
$$
X=\frac{H}{2h}
\geq
\frac{Cp^{1/4}\log p}{2A\log p}
=
\frac{C\,p^{1/4}}{2e^2}
\,.
$$
In light of the fact that $f(X)$ is increasing, this gives
$$
f(X)^{-1}\leq f\left(\frac{C\,p^{1/4}}{2e^2}\right)^{-1}
\,,
$$
and the result follows.
$\blacksquare$
\begin{corollary}\label{C:1}
If $\chi$ is a non-principal Dirichlet character to the prime modulus $p\geq 5\cdot 10^{18}$
which is constant on $(N,N+H]$,
then
$$
H<
C\,g(p)\cdot p^{1/4}\log p
\,,
$$
where $C$ and $g(p)$ are as in Theorem~\ref{T:3}.
\end{corollary}
\noindent\textbf{Proof.}
In order to apply Theorem~\ref{T:3}, which will give the result, it suffices to show that
$H\leq (2e^2\log p-3)^{1/3}p^{1/3}$. By way of contradiction,
suppose $H>(2e^2\log p-3)^{1/3}p^{1/3}$. In this case
we may replace $H$ by $\lfloor (2e^2\log p -3)^{1/3}p^{1/3}\rfloor$,
noting that $\chi$ is clearly still constant on $(N,N+H]$ for this smaller value of $H$.
We invoke Theorem~\ref{T:3} to conclude that
$H<Cg(p)p^{1/4}\log p$ where $Cg(p)\leq Cg(5\cdot 10^{18})< 7.06$.
Using the fact that $p\geq 5\cdot 10^{18}$, we have
$$
H<7.06 p^{1/4}\log p<(2e^2\log p-3)^{1/3}p^{1/3}-1<H
\,,
$$
which is
a contradiction.
$\blacksquare$
\vspace{1ex}
It remains to derive theorems~\ref{T:1} and \ref{T:2}.
Theorem~\ref{T:1} follows immediately from Corollary~\ref{C:1}, and
Theorem~\ref{T:2} follows immediately as well in light
of the facts that $Cg(5\cdot 10^{18})<7.06$ and $Cg(5\cdot 10^{55})<7$.
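These numerical values are easy to reproduce; the following Python sketch (ours, for illustration) evaluates $C$ and $C\,g(p)$ at the relevant points, with $\log$ the natural logarithm throughout.
\begin{verbatim}
# Evaluate the constant C and the quantity C*g(p) from the theorem above at
# the reference points used in the text; math.log is the natural logarithm.
from math import pi, e, sqrt, log

C = pi * e * sqrt(6) / 3                       # approximately 6.97266

def f(X):
    return 1 - (pi**2 / 3) * (log(X) / (2*X) + 1/X + 1 / (2*X**2))

def g(p):
    X = C * p**0.25 / (2 * e**2)
    return sqrt((1 + 1/log(p)) / f(X))

print(C)                 # 6.9726...
print(C * g(5e18))       # < 7.06
print(C * g(5e55))       # < 7
\end{verbatim}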
\vspace{2ex}
\noindent\textbf{Remark.}
It would be highly desirable to prove a form of Theorem~\ref{T:2} with a reasonable constant
when $p<10^{20}$. For small $p$ the best result appears to be
due to A.~Brauer, using elementary methods. In~\cite{bib:brauer.1932}, he shows that
$H\leq\sqrt{2p}+2$ for all $p$.
\vspace{2ex}
\noindent\textbf{Acknowledgement.}
The author would like to thank Professor H. M. Stark for his
helpful suggestions.
| {
"timestamp": "2010-11-22T02:02:31",
"yymm": "1011",
"arxiv_id": "1011.4490",
"language": "en",
"url": "https://arxiv.org/abs/1011.4490",
"abstract": "We give an explicit version of a result due to D. Burgess. Let $\\chi$ be a non-principal Dirichlet character modulo a prime $p$. We show that the maximum number of consecutive integers for which $\\chi$ takes on a particular value is less than $\\left\\{\\frac{\\pi e\\sqrt{6}}{3}+o(1)\\right\\}p^{1/4}\\log p$, where the $o(1)$ term is given explicitly.",
"subjects": "Number Theory (math.NT)",
"title": "On the constant in Burgess' bound for the number of consecutive residues or non-residues",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969689263264,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7099044300574856
} |
https://arxiv.org/abs/0804.3019 | Three Dimensional Corners: A Box Norm Proof | In an additive group (G,+), a three-dimensional corner is the four points g, g+d(1,0,0), g+d(0,1,0), g+d(0,0,1), where g is in G^3, and d is a non-zero element of G. The Ramsey number of interest is R_3(G), the maximal cardinality of a subset of G^3 that does not contain a three-dimensional corner. Furstenberg and Katznelson have shown R_3(Z_N) is little-o of N^3, and in fact the corresponding result holds in all dimensions, a result that is a far reaching extension of the Szemeredi Theorem. We give a new proof of the finite field version of this fact, a proof that is a common generalization of the Gowers proof of Szemeredi's Theorem for four term progressions, and the result of Shkredov on two-dimensional corners. The principal tools are the Gowers Box Norms. | \section{Introduction}
For any discrete abelian group $ (G,+)$, we define a \emph{$ d$-dimensional corner}
to be the $ d+1$ points in $ G ^{d}$ given by
\begin{equation*}
g\,,\, g+h(1,0,0 ,\dotsc, 0)\,,\,
g+ h(0,1,0 ,\dotsc, 0)\, ,\dotsc, g+ h(0,0,0 ,\dotsc, 1)\,, \qquad h\in G- \{0\}\,.
\end{equation*}
The Ramsey numbers of interest are
$ R (G,d)$, the maximum cardinality of a subset $ A \subset G ^{d}$ which
does not contain a $ d$-dimensional corner.
The principal result in the subject is the Theorem of Furstenberg and Katznelson
\cite{MR833409}, a generalization
of the Szemer{\'e}di Theorem \cite{MR0369312} to arbitrary dimension.
\begin{FurstenbergKatznelson} We have the estimate below, for any dimension $ d$.
\begin{equation*}
R (\mathbb Z _N, d)= o ( N ^{d})\,, \qquad N \to \infty .
\end{equation*}
\end{FurstenbergKatznelson}
Our principal result is a new proof of this Theorem, in dimension
$ d=3$, in the finite field setting.
\begin{mainTheorem} \label{t.main} We have this estimate, where $ N= 5^n= \lvert \mathbb F_5^n\rvert $,
\begin{equation*}
R (\mathbb F _5 ^{n}, 3)= o ( N ^{3})\,, \qquad n \to \infty \,.
\end{equation*}
\end{mainTheorem}
The quantitative bound we provide is of Ackermann type, and accordingly we do not attempt
to specify it. In the two dimensional case, there is a much better quantitative bound,
doubly logarithmic in nature, due to Shkredov
\cites{MR2266965,shkredov-2007}.
\begin{shkredov} There is a $ 0<c<1$ for which we have the estimate below in the two
dimensional case.
\begin{equation*}
R (\mathbb Z _N, 2) \lesssim \frac {N ^{2}} { (\log \log N) ^{c}}\,, \qquad N \to \infty \,.
\end{equation*}
\end{shkredov}
In the simpler case of the finite field, one can get a better estimate, in that the
constant $ c$ can be specified. See \cite{MR2289954}, also \cite{MR2187732}.
Indeed it would appear that any improvement in the constant below would require new ideas.
\begin{theorem}\label{t.laceyMcclain} In the finite field setting, we have the estimate
below in the two dimensional case. Set $ N = p ^{n}$ for prime $ p$.
\begin{equation*}
R ( \mathbb F _p ^{n}, 2) \lesssim { N ^{2}}
\frac {\log \log \log N}{ \log \log N }\,, \qquad N \to \infty \,.
\end{equation*}
\end{theorem}
Our methods of proof are those of arithmetic combinatorics, which in most instances
give better quantitative bounds. However, in this proof, our bounds are of Ackermann type.
It took some time for a purely combinatorial proof of the
Furstenberg-Katznelson Theorem to be found; see \cites{MR2167756,gowers-2007,MR2195580} and the commentary in
\cite{MR2167755}. Thus, our proof using the Gowers norms \cite{MR2167755}, and the double recursion
argument of Shkredov \cite{MR2266965}, might have some independent interest.
The Theorem we discuss is the first `hard' case, as it corresponds to four-term arithmetic
progressions \cites{MR0245555,MR1631259}. The `hardness' is expressed in terms of the
very weak information that we get from the Box Norm, an issue we go into in more depth
in the next section, see also \S~\ref{s.Tbox}.
The rigorous results on the Box Norm are Lemma~\ref{l.BPZ} below, and a more sophisticated variant,
Lemma~\ref{l.Tbox}.
A central question in the subject of Ergodic Theory concerns the identification of the
characteristic factors for multi-linear ergodic averages, especially in the
sense of Host and Kra \cites{MR2150389,MR2090768,MR1827115}.
In the case of commuting transformations, the only complete information about
these factors is in the case of two commuting transformations, a result of
Conze and Lesigne \cite{MR788966}, also \cite{MR1827115}. Incorporating their results into a proof of Shkredov's
Theorem is of substantial interest. Our ignorance of these factors is also
a hindrance in the result of Bergelson, Leibman and Lesigne \cite{0710.4862}. Perhaps this
approach can shed some light on this question.
There should be no essential difficulty in rewriting this proof to treat the estimate $ R (\mathbb Z _N, 3)=
o (N ^{3})$. We have adopted the finite field setting just as a matter of convenience, making the arguments of
\S~\ref{s.uniform} technically a little easier (though admittedly there is little gain in simplicity by
this choice). It appears to be an interesting question, requiring additional insight, to extend this argument
to higher dimensions.
\begin{Acknowledgment}
The first author completed part of this work while in residence at the Fields Institute, Toronto Canada,
as a George Eliot Distinguished Visitor.
Support and hospitality of that Institute is gratefully acknowledged. The second author has been supported
by an NSF VIGRE grant at the Georgia Institute of Technology.
\end{Acknowledgment}
\section{Overview of the Proof}
There is a substantial jump in difficulty of the proof in passing from the two
dimensional case to the three dimensional case. The three dimensional case,
projected back to one dimension, gives a result about four term arithmetic progressions,
explaining part of this difficulty. Accordingly, we begin with a description of the
two dimensional case.
In two dimensions, there are three important coordinate directions:
$ \operatorname e_1=(1,0)$, $ \operatorname e_2=(0,1)$, and $ \operatorname e_3=
\operatorname e_1 +\operatorname e_2$, associated with the endpoints of the corners.
We exploit these three choices of coordinate directions by this mechanism.
Consider three functions $ \lambda _j \;:\; \mathbb Z _N ^{3} \longrightarrow \mathbb Z_N ^2 $
given by
\begin{align} \label{e.olambda}
\lambda _j (x_1,x_2,x_3)= \sum _{k \;:\; k\neq j} x_k \operatorname e_k
\end{align}
The point of these definitions is that $ \lambda _j$ is \emph{not} a function of $ x_
j$.
For a given set $ A\subset \mathbb Z_N ^2 $, the expected number of corners in $ A$ is
\begin{align*}
\mathbb E _{x_1,x_2,x_3\in \mathbb Z _N} &
A (x_1,x_2)A (x_1+x_3,x_2)A (x_1,x_2+x_3)
\\&=
\mathbb E _{x_1,x_2,x_3\in \mathbb Z _N}
A (x_1,x_2)A (x_3-x_2,x_2)A (x_1,x_3-x_1)
& (x_3\to x_3-x_1-x_2)
\\
&=
\mathbb E _{x_1,x_2,x_3\in \mathbb Z _N}
\prod _{j=1} ^{3} A \circ \lambda _j (x_1,x_2,x_3)\,.
\end{align*}
Each of the three functions is a function of just two of the three variables
$ x_1,x_2,x_3$.
There is a specific mechanism to address expectations of such products: the Gowers Box
norms. Define one of these norms on a function $ g$ of $ x_1,x_2$ as follows.
\begin{equation} \label{e.oBox}
\norm g .\Box \{1,2\}.
= \bigl[ \mathbb E _{x_1,x_1',x_2,x_2'\in \mathbb Z _N}
g (x_1,x_2)g (x_1',x_2)g (x_1,x_2')g (x_1',x_2')
\bigr] ^{1/4}
\end{equation}
which is the cross-correlation of $ g$ at the four points of an average rectangle selected
from $ \mathbb Z _N \times \mathbb Z _N$.
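For concreteness, the following Python sketch (an illustration of ours, not part of the argument) computes this norm for a function on $\mathbb Z_N\times\mathbb Z_N$ stored as an $N\times N$ matrix, and illustrates the standard fact that the balanced function of a randomly selected set has small Box Norm.
\begin{verbatim}
# Two-dimensional Box Norm of a real function g on Z_N x Z_N, stored as an
# N x N matrix G, via
#   ||g||^4 = E_{x1,x1'} ( E_{x2} g(x1,x2) g(x1',x2) )^2 = sum((G G^T)^2)/N^4.
import numpy as np

def box_norm_2d(G):
    N = G.shape[0]
    M = G @ G.T
    return (np.sum(M ** 2) / N ** 4) ** 0.25

rng = np.random.default_rng(0)
N = 256
A = (rng.random((N, N)) < 0.5).astype(float)   # a random set of density ~1/2
print(box_norm_2d(A))              # close to the density 1/2
print(box_norm_2d(A - A.mean()))   # small, of order N^{-1/2}
\end{verbatim}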
Write $ \delta = \mathbb P (A)$, and
$ f= A - \delta $, which is, following Gowers's terminology,
the balanced function of $ A$. We then expand one of the $ A$'s in
the expectation above as $ A= \delta + f$,
\begin{gather*}
\mathbb E _{x_1,x_2,x_3\in \mathbb Z _N}
\prod _{j=1} ^{3} A \circ \lambda _j (x_1,x_2,x_3)
= C_1+C_2
\\
C_1=\delta
\mathbb E _{x_1,x_2,x_3\in \mathbb Z _N}
\prod _{j=1} ^{2} A \circ \lambda _j (x_1,x_2,x_3)
\\
C_2= \mathbb E _{x_1,x_2,x_3\in \mathbb Z _N} f \circ \lambda _3
\prod _{j=1} ^{2} A \circ \lambda _j (x_1,x_2,x_3)
\end{gather*}
For the first of these terms, one can check directly that
\begin{equation*}
C_1 \ge \delta \mathbb E _{x_1} \lvert \mathbb E _{x_2} A (x_1,x_2)\rvert ^2 \ge \delta
^{3}\,.
\end{equation*}
For sets $ A$ with the number of corners approximately equal to the number of corners
that
one would naively expect, this should be the dominant term. On the other hand,
it is the import and power of the Gowers Box Norms that we have the inequality
\begin{equation} \label{e...0<}
\lvert C_0\rvert \le \norm f . \Box \{1,2\}.
\end{equation}
Thus, if this last quantity is less than, say, $ \tfrac12 \delta ^{3}$, then $ A$ has
at least one-half of the expected number of corners.
There is, however, the alternative that $ \norm f . \Box \{1,2\}. \ge \tfrac12 \delta
^{3}$,
which brings us to an unfortunate fact concerning these Box Norms:
the definition in \eqref{e.oBox} makes perfect sense on the product of arbitrary probability
spaces. Accordingly, the conclusion one can draw from the Box Norm being large can only be
probabilistic in nature. In the two dimensional case, it is this:
There are subsets $ R_1, R_2\subset \mathbb Z _N$ so that $ A$ correlates with
the product set $ R_1 \times R_2$, namely $ \mathbb P (A\,:\, R_1 \times R_2)
\ge \delta + \tfrac 14 \delta ^{12}$, and the product set $ R_1 \times R_2$ is
non-trivial, in that we have the estimates $ \mathbb P (R_1), \mathbb P (R_2)\ge
c\delta ^{12}$, for an appropriate constant $ c$. There is, however, no additional
structure on the sets $ R_1$ and $ R_2$.
The natural path, originating in Roth's proof \cite{MR0051853} for three term arithmetic progressions,
is to iterate this alternative. We can only hope to achieve an increment in density
of
$ A$ by an amount of $ \delta ^{12}$ a finite number of times.
But without an additional insight, the iteration cannot go forward
as the use of the Gowers Box Norms requires at least a little arithmetic information
through the use of the change of variables. Shkredov \cites{MR2266965}
found a solution to this problem by
introducing a secondary iteration, the result of which is that one finds further subsets
$ R_1'\subset R_1$ and $ R_2'\subset R_2$ which satisfy three conditions. First,
we maintain the property that $ A$ has a higher density on $ R_1' \times R_2'$,
namely $ \mathbb P (A\,:\, R_1' \times R_2')
\ge \delta + \tfrac 18 \delta ^{12}$. Second, the sets $ R_1'$ and $ R_2'$ are non-trivial,
in that they have a lower bound on their probabilities. Third, $ R_1'$ and $ R_2'$
have arithmetic properties, in that their one-dimensional Box Norms are small.
Specifically, $ R_1', R_2'$ are subsets of a subspace $ H\le \mathbb F _2 ^{n}$, where there
is a lower bound on the dimension of $ H$, and the norms
\begin{equation*}
\norm R_j' (x_1+x_2)- \mathbb P (R_j'\,:\, H) H (x_1+x_2). \Box ^{ \{1,2\}} H \times H. \,, \qquad j=1,2
\end{equation*}
are small.
The first two conditions are certainly required. It is the third property that permits
the iteration to continue, as a subtle refinement of the inequality \eqref{e...0<} is
available.
There is one additional feature of this discussion that we should bring forward, as it plays
a decisive role in the three-dimensional case. Namely, the discussion above assigned a distinguished
role to the standard basis $ (\operatorname e_1, \operatorname e_2) $, whereas the formulation of the
question makes sense for any choice of basis from the three vectors $ \{\operatorname e_1, \operatorname e_2,
\operatorname e_3\}$. One can phrase a `coordinate-free' version of Shkredov's argument, which is the
viewpoint of \cite{MR2289954}. This is the viewpoint we adopt in the three-dimensional case.
\medskip
We turn to the three dimensional case. We again have the
standard basis $ \operatorname e _{j}$, for $ j=1,2,3$, in $ \mathbb Z _N ^{3}$.
The fourth relevant basis element is $ \operatorname e_4=\sum _{j=1} ^{3} \operatorname e_j$
associated to the endpoints of the corner. The analogs of the functions $ \lambda _j
$ in
\eqref{e.olambda} are now four distinct functions from $ \mathbb Z _N ^{4}\longrightarrow
\mathbb Z _N ^{3}$ given by
\begin{equation*}
\lambda _{j} (x_1, x_2,x_3,x_4)= \sum _{k \;:\; k\neq j} x_k \operatorname e_k\,.
\end{equation*}
The point to exploit is that $ \lambda _j$ is \emph{not} a function of $ x_j$.
For a given set $ A\subset \mathbb Z _N ^{3}$, the average number of corners in $ A$
is given
by
\begin{equation*}
\mathbb E _{x_1 , x_2,x_3,x_4\in \mathbb Z _N }
A (x_1,x_2,x_3) \prod _{j=1} ^{3} A ((x_1,x_2,x_3)+ x_4 \operatorname e_j)
=
\mathbb E _{x_1 , x_2,x_3,x_4\in \mathbb Z _N }
\prod _{j=1} ^{4} A \circ \lambda _j (x_1, x_2,x_3,x_4)\,.
\end{equation*}
This is a four-linear expression, with each of the four factors depending upon
just three of the four variables.
Again, there is a Gowers Box Norm that is relevant.
This norm, of a function $ g (x_1,x_2,x_3)$,
has a definition that can be given recursively (compare \eqref{e.3Boxexplict} below) as
\begin{equation*}
\norm g(x_1,x_2,x_3). \Box \{1,2,3\}. ^{8}
= \mathbb E _{x_3,x_3'\in \mathbb Z _N} \Norm g ( \cdot , \cdot , x_3)\, g ( \cdot , \cdot , x_3') . \Box \{1,2\}. ^{4}
\end{equation*}
It has a similar interpretation as the average cross-correlation of $ g$ at the eight
corners of a randomly chosen box in $ \mathbb Z _N ^{3}$.
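The recursion just displayed can be evaluated directly for small $N$; the following Python sketch (ours, purely illustrative) does so, and indicates that a randomly selected subset of $\mathbb Z_N^3$ is uniform in this norm.
\begin{verbatim}
# Three-dimensional Box Norm of g on Z_N^3 via the recursion
#   ||g||^8 = E_{x3,x3'} || g(.,.,x3) g(.,.,x3') ||_{Box{1,2}}^4 .
# N is kept small; the loop costs O(N^2) two-dimensional norms.
import numpy as np

def box_norm_2d_pow4(G):
    N = G.shape[0]
    return np.sum((G @ G.T) ** 2) / N ** 4

def box_norm_3d(G):
    N = G.shape[0]
    total = sum(box_norm_2d_pow4(G[:, :, z] * G[:, :, zp])
                for z in range(N) for zp in range(N))
    return (total / N ** 2) ** (1.0 / 8)

rng = np.random.default_rng(1)
N = 12
A = (rng.random((N, N, N)) < 0.5).astype(float)
print(box_norm_3d(A))              # close to the density 1/2
print(box_norm_3d(A - A.mean()))   # small for a random set
\end{verbatim}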
To exploit the norm,
we make the same expansion of $A $: setting $ \delta = \mathbb P (A\,:\, \mathbb Z _N
^{3})$,
we write $ A=\delta +f$. We use this expansion just on $ A \circ \lambda _4$ above, so
that we can write
\begin{gather*}
\mathbb E _{x_1 , x_2,x_3,x_4\in \mathbb Z _N }
\prod _{j=1} ^{4} A \circ \lambda _j =C_1+C_0
\\
C_1= \delta
\mathbb E _{x_1 , x_2,x_3,x_4\in \mathbb Z _N }
\prod _{j=1} ^{3} A \circ \lambda _j
\\
C_0= \mathbb E _{x_1 , x_2,x_3,x_4\in \mathbb Z _N }
f \circ \lambda _4
\prod _{j=1} ^{3} A \circ \lambda _j \,.
\end{gather*}
The Box Norm is introduced because it controls the second term.
\begin{equation} \label{e.OverBox}
\lvert C_0\rvert
\le
\norm f . \Box \{1,2,3\}. \,.
\end{equation}
Thus, if the Box Norm is sufficiently small, $ C_0$ should be negligible.
Turning to the term $ C_1$, typically we would expect $ C_1$ to be of the order of
$ \delta ^{4}$, but we have no simple means of establishing
such a bound. Indeed, $ C_1$ is an instance of the two-dimensional question,
as $ C_1$ is $ \delta $ times the average number of two-dimensional corners
in $ A$, with the two-dimensional corners located on hyperplanes of the form
$ (x_1,x_2,x_3) \cdot \operatorname e_4=c$, for some $ c$.
This suggests to us that we will need to use a two-dimensional Box Norm
on the hyperplanes just described. Namely, and this is an essential point,
control of the Box Norm in \eqref{e.OverBox} is not sufficient to control
the number of corners in $ A$. Control of one more Box Norm, in a second set of
coordinates, is required. This situation can be avoided in the two-dimensional case.
We
adopt a method that places the four coordinate vectors $ \{\operatorname e_j \,:\,
1\le j\le 4\}$ on equal footing. For each choice of a three-element subset $ I\subset \{1,2,3,4\}$,
we have a Box Norm corresponding to the basis for $ \mathbb Z _N ^{3}$
given by $ \{\operatorname e_j\,:\, j\in I\}$. A sufficient condition for
$ A$ to have a corner is that
\begin{equation*}
\max _{\substack{I\subset \{1,2,3,4\}\\ \lvert I\rvert=3 }}
\norm f . \Box I . < 2^{-8}\delta ^{4}\,.
\end{equation*}
These norms are distinct, in that one can have $\norm f . \Box \{1,2,3\} . $
very small, while $ \norm f . \Box \{1,2,4\} .$ is much larger, a situation that
does not arise in the one-dimensional case, as all of these norms turn out to be
the same after a change of variables.
Turning to the alternative, suppose that we have $ \norm f . \Box \{1,2,3\} . > 2^{-8
}\delta
^{4}$. Again, the Box Norm admits a formulation on the three-fold product of probability
spaces. Accordingly we can only have a probabilistic consequence of the Box Norm being
large, and it is a dramatically weaker statement than in the two-dimensional case. It is
this: Associate $ \mathbb Z _N ^{3}$ to $ \mathbb Z _N ^{ \{1,2,3\}}$, with the
superscripts signifying the coordinates. For $ J\subset \{1,2,3\}$ of cardinality $ 2$,
associate $ \mathbb Z _N ^{J}$ to the corresponding face of $ \mathbb Z _N ^{ \{1,2,3
\}}$.
For each such $ J$, there is a subset $ R_J\subset \mathbb Z _N ^{ J}$. Consider the
fibers that lie above this set, denoted by
\begin{equation*}
\overline R_J = \bigl\{ (x_1,x_2,x_3) \in \mathbb Z _N ^{ \{1,2,3\}} \,:\,
\bigl( (x_1,x_2,x_3) \cdot \operatorname e_j \bigr)_{j\in J}\in R_J
\bigr\}\,.
\end{equation*}
Then, the conclusions are twofold. First,
$ A$ has a higher density in $ \prod _{\substack{J\subset \{1,2,3\}\\ \lvert J\rvert
=2 }}
\overline R_J$, and second the latter set is non-trivial, in that it admits a lower
bound on its
probability. Namely, the conclusions are
\begin{gather} \label{e.oo}
\mathbb P \Bigl( A \,:\, \prod _{\substack{J\subset \{1,2,3\}\\ \lvert J\rvert=2 }
} \overline R_J \Bigr)
\ge \delta + c \delta ^{C}\,,
\\ \label{e.op}
\mathbb P \Bigl( \prod _{\substack{J\subset \{1,2,3\}\\ \lvert J\rvert=2 }} \overline R_J \Bigr)
\ge c \delta ^{C}\,.
\end{gather}
Here $ 0<c,C$ are absolute constants. Note that both conclusions are substantive.
There is no \emph{a priori} reason that the set in \eqref{e.op} should admit this
lower bound in its probability. The other conclusion \eqref{e.oo} gives a correlation with a set;
unfortunately, this set has substantially less structure than in the two-dimensional
case.
Another essential complication arises from the fact that one must consider the $ 6$ sets
$ R_J$, for $ J\subset \{1,2,3,4\}$, $ J$ consisting of two elements. If we consider the
three-fold intersection
$\prod _{\substack{J\subset \{1,2,3\}\\ \lvert J\rvert=2 }} \overline R_J\,,$
one can see that it is well-behaved with respect to corners if the individual
sets $ R_J$ are well-behaved with respect to two-dimensional Box Norms, and their one-dimensional
projections are well-behaved with respect to the $ U (3)$ norm.
But, there is no reason
that the 3-dimensional set formed from the $ 6$-fold intersection
$ \prod _{\substack{J\subset \{1,2,3,4\}\\ \lvert J\rvert=2 }} \overline R_J$ should be well-behaved
with respect to any Box Norm. To overcome this difficulty, we introduce an auxiliary set $ T\subset
\overline R_J$ for all $ J$. This set is required to be uniform with respect to all four
three-dimensional Box Norms, but the Box Norm is taken relative to the sets $ R_J$.
We are left with the following task: Find the appropriate `uniformity' conditions on the
sets $ R_J$ and the set $ T$ so that these conditions are met. First, we can
obtain a variant of the inequality \eqref{e.OverBox}, namely
if the set $ A$ is uniform in the `Box Norms adapted to $ T$' then $ A $ has a corner.
Second, assuming that $ A$ is not uniform with respect to a `Box Norm adapted to $ T$,'
then we can find suitable variants of \eqref{e.oo} and \eqref{e.op}.
This must be done in a manner that is consistent with the choice of any of the four
possible coordinate systems from $ \{\operatorname e_1,\operatorname e_2,\operatorname
e_3,\operatorname e_4\}$.
The remainder of the paper is organized as follows.
\begin{itemize}
\item \S~\ref{s.lemmas} presents the most important definitions and three Lemmas which combine to prove
our main result, Theorem~\ref{t.main}. These three Lemmas set out, in broad terms, the iteration scheme of
Shkredov \cite{MR2266965}, but the formulation of the definitions is hardly clear.
\begin{itemize}
\item A critical definition is that of a corner-system, Definition~\ref{d.cornersystem}. Such a system consists of
the set $ A$, in which we seek a corner, and a number of auxiliary sets, such as the sets $ R_J$ mentioned above.
If the auxiliary sets are `suitably uniform,' then the corner-system is called \emph{admissible}; see
Definition~\ref{d.admissible}.
\item A `generalized von Neumann Lemma,' to use the phrase of Ben Green and Terence Tao \cite{math.NT/0404188}.
Lemma~\ref{l.3dvon} states that if the corner-system is admissible, and
$ A$ is suitably `uniform' in a non-obvious sense (and $ A$ is not too small, a weak condition)
then $ A$ has a corner.
\item An `increment Lemma,' Lemma~\ref{l.dinc}. This Lemma tells us that in the event that the hypothesis
of Lemma~\ref{l.3dvon} fails, we can find a new corner-system, which is non-trivial, in which $ A$ has a larger
density. It is this step that provides termination in our iteration, as the density of a set can never exceed
one. The non-triviality comes from suitable lower bounds on the probabilities associated to the sets
in the corner-system. This Lemma, probabilistic in nature, does \emph{not} provide for an admissible
corner-system.
\item A `Uniformizing Lemma,' Lemma~\ref{l.uni},
in which a non-admissible corner-system is made admissible, permitting the recursion to
continue.
\end{itemize}
These three Lemmas are combined, in a known way (see \S~\ref{s.algorithm}), to prove the Main Theorem.
\item \S~\ref{s.box} sets out notation for the Box Norms which are essential for the entire paper,
in particular the Gowers-Cauchy-Schwartz Inequality~\ref{gcsi}.
These considerations have to be set out in some generality, as the later arguments will encounter a
variety of Box Norms, and multi-linear forms consisting of up to $ 56$ functions.
Most, but not all, of this section is standard, but worked out in a setting in which the underlying sets
have relatively large probabilities.
\item \S~\ref{s.linearForms} applies the results on the Box Norm to some classes of linear forms which
arise in the context of the three-dimensional Box Norm. These results have
proofs which are appropriate refinements of the proof of the Gowers-Cauchy-Schwartz Inequality, taking into
account the fact that the underlying sets we are interested in have very small probabilities. This section introduces
a notion of uniformity with respect to linear forms of a bounded complexity, Definition~\ref{d.U}.
An important component of the argument is that the sets we consider only have
a uniformity in the sense of Definition~\ref{d.U} of a bounded complexity. Also in this section, and
particularly
important, is the First Proposition on Conservation of Densities, Proposition~\ref{p.con}, and its corollary
Lemma~\ref{l.Zvar}.
\item \S~\ref{s.formsCorners} is a reprise of the previous section. In principle, we could have written
one section to encompass both this section and \S~\ref{s.linearForms}, but felt that this might make the
paper harder to read. This section contains the Second Proposition on Conservation of Densities,
Proposition~\ref{p.con2}. Both of these sections are central to the remainder of the argument.
\item \S~\ref{s.von} will prove the first of the three Lemmas, Lemma~\ref{l.3dvon}, by a subtle reworking of a
standard Box Norm inequality. In its simplest form, this argument was found by Shkredov \cite{math.NT/0404188}, but
has a more refined elaboration in the current context.
\item \S~\ref{s.Tbox}
presents a Lemma we refer to as a `Paley-Zygmund inequality for the Box Norm,'
see Lemma~\ref{l.BPZ}. Namely, assuming that the Box Norm is big, deduce, e.\thinspace g.\thinspace,
the conclusions \eqref{e.oo} and \eqref{e.op} above. This Lemma is presented in the simplest context
in the two dimensional setting.
We then present the same Lemma as above, but in the `weighted context.' That is, in a context where
the underlying space is \emph{not} just a tensor product space. See Lemma~\ref{l.Tbox}. Both of these Lemmas
are stated in some generality, as the more general formulation is required in \S~\ref{s.uniform}. The main
result of this section, Lemma~\ref{l.Tbox}, requires a careful elaboration of the proof in the `unweighted' case.
\item In \S~\ref{s.uniform} we address the fact that the data provided to us by Lemma~\ref{l.BPZ} and
Lemma~\ref{l.Tbox} does not have any uniformity properties. This is remedied by selecting a variety of
partitions of the underlying space, with most of the `atoms' of the partitions being sufficiently uniform.
It is in this section that the Ackermann function will arise. The main Lemma is
Lemma~\ref{l.uni}.
\item The three Lemmas of \S~\ref{s.lemmas} are combined to prove our main Theorem in \S~\ref{s.algorithm}.
\end{itemize}
\section{Principal Lemmata}
\label{s.lemmas}
Our proof is recursive, with each step in the recursion identifying a
new subspace $ H\le \mathbb F _5 ^{n}$ in which we work. $ H$ is of course
a copy of $ \mathbb F _5 ^n$, just with a smaller value of $ n$. We maintain a lower bound
on the dimension of $ H$.
$ H\times H\times H$ has the standard basis elements
$ \operatorname e _1$, $ \operatorname e _2$, and $ \operatorname e _3$.
We also use the basis element
\begin{equation}\label{e.4}
\operatorname e _4= \operatorname e _1+\operatorname e _2+\operatorname e _3\,,
\end{equation}
which is the element associated with the `endpoints' of the corner. A corner has
an equivalent description in terms of any three elements of the four basis
elements $ \{\operatorname e _i \,:\, 1\le i\le 4\}$.
Below, we will work with sets $ S_i$, $ 1\le i \le 4$. They are
subsets of $ H$. But in addition, we view them as subsets of
$ H\times H\times H$, as follows:
\begin{equation} \label{e.overline}
\overline S _ i = \{ x\in H\times H\times H\,:\, x \cdot \operatorname e_i \in S_i \}
\qquad 1\le i\le 4\,.
\end{equation}
Thus, the fibers over $ \overline S_i$ are copies of $ H\times H$.
Likewise we will work with sets $ R_{i,j}\subset S_i \times S_j$. They can be
viewed as subsets of $ H\times H\times H$ by setting
\begin{equation} \label{e.Overline}
\overline R_{j,k}= \{ x\in H\times H\times H\,:\, (x \cdot \operatorname e
_j , x \cdot \operatorname e_k) \in R_{j,k}\}\,,
\qquad 1\le j<k\le 4\,.
\end{equation}
Thus, the fibers of $ \overline R_{j,k}$ are copies of $ H$.
\begin{definition}\label{d.cornersystem}
By a \emph{corner-system} we mean the data
\begin{equation} \label{e.Asystem}
\mathcal A=\{H\,,\, S_i\,,\, R_{i,j}\, ,\, T\,,\, A \,:\, 1\le i,j\le 4\}
\end{equation}
where these conditions are met.
\begin{enumerate}
\item $ H$ is a subspace of $ \mathbb F _5 ^{n}$.
\item $ S_i \subset H$, $ 1\le i\le 4$.
\item $ R_{j,k}\subset S_j \times S_k$, $ 1\le j<k\le 4$.
\item $ T\subset \overline R_{j,k}$, $ 1\le j<k\le 4$.
\item $ A\subset T$.
\end{enumerate}
By a \emph{$ T$-system} we mean the data
\begin{equation} \label{e.Tsystem}
\mathcal T=\{H\,,\, S_i\,,\, R_{i,j}\, ,\, T \,:\, 1\le i,j\le 4\}
\end{equation}
which is the same as a corner system, except that the set $ A$ is not listed,
and so condition (5) above is not needed.
For such systems we use the notations
\begin{gather} \label{e.Tlll}
T_ \ell \coloneqq \bigcap _{\substack{
1\le j<k\le 4\\ j,k\neq \ell }}
\overline R_{j,k } ,\, \quad 1\le \ell \le 4 \,,
\\
\label{e.delta1}
\delta _j \coloneqq \mathbb P (S_j\,:\, H)\,, \qquad
\delta _{j,k} \coloneqq \mathbb P (R_{j,k}\,:\, S_j \times S_k)\,, \qquad
1\le j<k\le 4\,,
\\ \label{e.delta2}
\delta _{T \,:\, \ell } \coloneqq \mathbb P (T\,:\, T _{\ell })\,, \qquad 1\le \ell \le 4\,.
\end{gather}
\end{definition}
The sets $ T _{\ell }$ play an essential role in this proof for the following reason. They are
built up from lower dimensional objects in a natural way, and presuming that the lower dimensional
objects are themselves well behaved with respect to box norms, the $ T_ \ell $ are as well.
The same conclusion does not seem to hold for the $ 6$-fold intersection
$ \bigcap _{1\le j<k\le 4} \overline R _{j,k}$. That in turn led us to the introduction of the
auxiliary set $ T\subset \overline R _{j,k}$. Working on this indeterminate set $ T$ leads to
most of the complications of this paper.
We use the notation $ R _{j,k}\subset S_j \times S_k$ rather than the (more natural) $ S _{j,k}$, as
we will use the notation $ S _{j,k} \coloneqq S_j \times S_k$, in association with a number of Box Norms throughout
the paper.
\begin{definition}\label{d.admissible}
Let $ C_{\textup{admiss}}\ge 64$ be a fixed large constant, and $ 0< \kappa _{\textup{admiss}} <1$ be a fixed small constant.
Given $ 0<\varepsilon<1$, and $ T$-system $ \mathcal T$ as in \eqref{e.Tsystem},
we say that $ \mathcal T$ is
\emph{$ \varepsilon $-admissible} iff
\begin{gather}
\label{e.Ad3Box}
\frac{\norm T - \delta _{T \,:\, \ell } T _{\ell } . \Box \{ i\,:\, i\neq \ell \}. }
{\norm T _{\ell }. \Box { \{ i\,:\, i\neq \ell \} } . } \le \kappa _{\textup{admiss}} \varepsilon ^{C_{\textup{admiss}}} \cdot
\mathbb P (T\,:\, T _{\ell }) ^{C_{\textup{admiss}}}
\,, \qquad {1\le \ell \le 4}\,,
\\
\label{e.Ad2Box}
{ {\norm R_{i,j}- \delta _{i,j} . \Box ^{\{i,j\}}
(S_i \times S_j). } }
\le \kappa _{\textup{admiss}} \varepsilon ^{C_{\textup{admiss}}}
\mathbb P (T\,:\, H \times H \times H ) ^{C_{\textup{admiss}}}
\,, \qquad {1\le i < j\le 4}\,,
\\
\label{e.Ad1Box}
{ \norm S_{i} - \delta _i . U (3).}
\le \kappa _{\textup{admiss}} \varepsilon ^{C_{\textup{admiss}}}
\mathbb P (T\,:\, H \times H \times H) ^{C_{\textup{admiss}}}
\,, \qquad {1\le i\le 4} \,.
\end{gather}
\end{definition}
All conditions require uniformity of the objects in terms of the density of $ T$ in that object.
But the condition in \eqref{e.Ad3Box} cannot be strengthened in any way, and it is the condition that
turns out to be the most subtle. In particular, it will turn out that we can compute the
expression $ {\norm T _{\ell }. \Box { \{ i\,:\, i\neq \ell \} } . }$ in \eqref{e.Ad3Box}, but
it is also the case that $ T _{\ell }$ is \emph{not} uniform with respect to the norm $ \Box { \{ i\,:\, i\neq \ell \}
}$.
The norms in \eqref{e.Ad3Box} and \eqref{e.Ad2Box}
are detailed in Definition~\ref{d.GowersBox} and \eqref{e.norm-simple},
but also given explicitly in the next definition.
\begin{definition}\label{d.boxes} Let $ X$, $ Y$ and $ Z$ be finite sets.
For any function $ f \;:\; X\to \mathbb C $, we use the notation for expectation, namely
\begin{equation*}
\mathbb E _{x\in X} f (x) = \lvert X\rvert ^{-1} \sum _{x\in X} f (x)\,.
\end{equation*}
Corresponding notation for probability $ \mathbb P (A)$, conditional probabilities,
conditional expectations, and conditional variances is also used.
For a function $ f \;:\; X \times Y \longrightarrow \mathbb R
$, define
\begin{equation}\label{e.2Boxexplicit}
\norm f . \Box ^{ \{x,y\}} (X \times Y). ^{4}
\coloneqq \mathbb E _{\substack{x,x'\in X\\ y,y'\in Y}}
f (x,y) f (x,y') f (x',y) f (x',y') \,.
\end{equation}
Note that the right hand side is the average of the cross-correlation of $ f$ over all
combinatorial rectangles in $ X \times Y$.
For a function $ f \;:\; X \times Y \times Z
\longrightarrow \mathbb R $, define
\begin{equation}\label{e.3Boxexplict}
\begin{split}
\norm f . \Box ^{x,y,z} (X \times Y \times Z). ^{8}
& \coloneqq \mathbb E _{\substack{z,z'\in Z}}
\norm f ( \cdot , \cdot ,z)f ( \cdot , \cdot ,z') . \Box ^{x,y} (X \times Y). ^{4}
\\
& =
\mathbb E _{\substack{x,x'\in X\\ y,y'\in Y \\ z,z'\in Z}}
f (x,y,z) f (x,y',z) f (x',y,z) f (x',y',z)
\\ & \qquad \times
f (x,y,z') f (x,y',z') f (x',y,z') f (x',y',z') \,.
\end{split}
\end{equation}
This has a similar interpretation as the norm in \eqref{e.2Boxexplicit}.
In \eqref{e.Ad3Box}, we use the notation
\begin{equation} \label{e.norm-simple}
\norm g. \Box { \{ i\,:\, i\neq \ell \}}.
\coloneqq
\norm g. \Box ^ { \{ i\,:\, i\neq \ell \}}( H\times H\times H) . \,.
\end{equation}
This notation is consistent with Definition~\ref{d.GowersBox} below.
\end{definition}
The $ U (3)$ norm used in \eqref{e.Ad1Box} has a definition that is similar to the
Box Norms, but has an additive component.
\begin{definition}\label{d.U3} For $ f \;:\; H \longrightarrow \mathbb R $, we define
\begin{equation*}
\norm f. U (3). \coloneqq \norm f (x+y+z). \Box ^{x,y,z} (H \times H \times H) .
\end{equation*}
\end{definition}
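For orientation, the following Python sketch (ours, with $H$ replaced by the cyclic group $\mathbb Z_N$ purely for illustration) evaluates this norm after the change of variables $s=x+y+z$, $a=x'-x$, $b=y'-y$, $c=z'-z$, under which the defining average becomes
$\mathbb E_{s,a,b,c}\prod_{\omega\in\{0,1\}^3}f(s+\omega_1 a+\omega_2 b+\omega_3 c)$.
\begin{verbatim}
# U(3) norm of a real function f on Z_N, evaluated via the additive average
#   ||f||^8 = E_{s,a,b,c} prod_{w in {0,1}^3} f(s + w1*a + w2*b + w3*c).
# N and the random set below are illustrative choices.
import itertools
import numpy as np

def u3_norm(f):
    N = len(f)
    total = 0.0
    for s, a, b, c in itertools.product(range(N), repeat=4):
        prod = 1.0
        for w1, w2, w3 in itertools.product((0, 1), repeat=3):
            prod *= f[(s + w1 * a + w2 * b + w3 * c) % N]
        total += prod
    total = max(total, 0.0)            # guard against rounding below zero
    return (total / N ** 4) ** (1.0 / 8)

rng = np.random.default_rng(4)
N = 16
A = (rng.random(N) < 0.5).astype(float)       # a random subset of Z_N
print(u3_norm(A), u3_norm(A - A.mean()))
\end{verbatim}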
In these definitions, observe
\begin{itemize}
\item A $ \delta $ represents a `density,' and this will most frequently be a
relative density. Thus, $ \delta _{i,j}$ is the density of $ R_{i,j}$ in $ S_i \times
S_j$. In some of these notations, this relative density is indicated explicitly, as in
the definition for $ \delta _{T \,:\, \ell }$.
\item Likewise, the Box Norms in \eqref{e.Ad3Box} and \eqref{e.Ad2Box} are relative
Box Norms. In \eqref{e.Ad2Box}, this relative norm is indicated in the notation. But,
in \eqref{e.Ad3Box} this is indicated by the division by $ \norm T _{\ell }. \Box \{i\,:\, i
\neq \ell \}. $.
\item Notice that the uniformity conditions \eqref{e.Ad3Box}--\eqref{e.Ad2Box} are
phrased relative to the `higher dimensional objects in question.' Thus, the uniformity
condition on $ T $ in \eqref{e.Ad3Box} is phrased in terms of the
densities of $ T$ in $ T _{\ell }$.
\item The previous point, not anticipated by the two-dimensional version of this
Theorem, is important to the proof of our critical Lemma~\ref{l.uni} below.
And it complicates the proof of Lemma~\ref{l.3dvon}.
\item It is possible that the degree of uniformity required on $ S_i$ in \eqref{e.Ad1Box} and
$ R _{i,j}$ in \eqref{e.Ad2Box} is too high. For instance, one could imagine that
\eqref{e.Ad1Box} should be replaced by
\begin{equation} \label{e.ad1box}
{ \norm S_{i} - \delta _i . U (3).}
\le \kappa \varepsilon ^{C_{\textup{admiss}}}
\mathbb P (T\,:\, \overline S_i) ^{C_{\textup{admiss}}}
\,, \qquad {1\le i\le 4} \,.
\end{equation}
As it turns out, the conditions \eqref{e.Ad1Box} and \eqref{e.Ad2Box} are available to us
by this proof, and so we use them. The distinction between \eqref{e.ad1box} and \eqref{e.Ad1Box}
could be important in extensions of this argument to higher dimensions.
\end{itemize}
The three Lemmas are very much as in \cites{MR2266965,MR2289954},
though with more complicated statements in the current setting.
The first Lemma asserts that for admissible corner-systems,
if the dimension is not too small, and the Box Norms $ \norm A - \delta _{A\,:\, T} T
.\Box\{ i\,:\, i\neq \ell \}.$ are sufficiently small, uniformly in $ \ell $, then $ A$ has a corner.
\begin{von} \label{l.3dvon} Suppose that we are given a corner-system $ \mathcal A$ as in
\eqref{e.Asystem}. Set $ \delta _{A\,:\, T}= \mathbb P (A \,:\, T)$,
and assume that $ \mathcal A$ is $ \delta _{A \,:\, T}$-admissible.
The following two conditions are then sufficient for $ A$ to have a corner.
\begin{gather}
\label{e.BigEnough}
\delta _{A\,:\, T} \cdot \prod _{j=1} ^{4} \delta _j \cdot
\prod _{1\le j<k\le 4} \delta _{j,k} \cdot \prod _{\ell =1} ^{4} \delta _{T\,:\, \ell } \cdot \lvert H\rvert ^{4}
>4 \lvert A\rvert \,,
\\
\label{e.UniformEnough}
\max _{1\le \ell \le 4} \frac {\norm A - \delta _{A\,:\, T} T. \Box \{ i\,:\, i\neq \ell \}. }
{\norm T . \Box \{ i\,:\, i\neq \ell \}.} \le \kappa \delta _{A \,:\, T} ^{4} \,.
\end{gather}
\end{von}
The condition \eqref{e.BigEnough} is the condition, typical to the subject, that the `average number of corners'
in $ A$ exceed the number of `trivial corners' in $ A$. The second condition \eqref{e.UniformEnough} is the
all-important uniformity condition.
The second Lemma is the alternative if \eqref{e.UniformEnough} does not hold.
\begin{DensityIncrement}\label{l.dinc} There is an absolute constant $ \kappa $ for
which the following holds.
Suppose that the corner-system in \eqref{e.Asystem} is $ \delta _{A\,:\, T} $-admissible, and that
(\ref{e.UniformEnough}) \emph{does not hold.} Then,
there are sets
\begin{equation*}
S_i'\subset S_i\,,
\quad
R_{i,j}'\subset R_{i,j}\,,
\quad
T'\subset T_\ell ' = \prod _{
\substack{1\le i,j\le 4\\ i,j\neq \ell }}S '_{i,j}
\end{equation*}
These sets satisfy the estimates
$
\mathbb P (T'\,:\, T)
\ge \delta _{A\,:\, T} ^{1/ \kappa}
$ and
$ \mathbb P (A\,:\, T')\ge \delta_{A\,:\, T}+ \
\delta_{A\,:\, T} ^{1/ \kappa}$.
\end{DensityIncrement}
It is the last estimate that provides a termination for our algorithm in \S~\ref{s.algorithm}.
The previous Lemma, which is probabilistic in nature, does not supply us with
admissible data. This is rectified in the next Lemma.
\begin{uniformizing}\label{l.uni}
There are functions
\begin{equation*}
\Psi _{\operatorname {dim}} \,,\, \Psi _{T}\;:\; [0,1] ^{3} \longrightarrow \mathbb N
\end{equation*}
for which the following holds
for all $0< v<\delta <1 $. Let
$ \mathcal A$ be a corner-system as in \eqref{e.Asystem}. Assume that
$ \mathbb P (A\,:\, T)\ge \delta +v$.
There is a new corner-system
\begin{equation*}
\mathcal A' =\{H'\,,\, S_i'\,,\, R_{i,j}'\, ,\, T'\,, A '\,:\, 1\le i,j\le 4\}
\end{equation*}
so that for some $ x\in H$, $ A' \subset A+x$, and similarly for $ T'\subset T+x$.
More importantly, we have:
\begin{gather}
\label{e.H'}
\operatorname {dim} (H')\ge \operatorname {dim} (H)-\Psi _{\operatorname {dim}} (v, \delta )
\\
\label{e.inc5}
\mathbb P (A'\,:\, T')\ge \delta +\tfrac v 4
\\ \label{e.inc0}
\textup{ $ \mathcal A'$ is $ \delta $-admissible,}
\\
\label{e.inc2}
\mathbb P (T'\,:\, H'\times H'\times H')\ge
\Psi _{T} (\delta , v, \mathbb P (T \,:\, H\times H\times H) )\,.
\end{gather}
\end{uniformizing}
We remark that in \eqref{e.H'}, if the dimension of $ H$ is too small, then
$ \mathcal A '$ will be trivial in that $ T'$ consists of only one point.
These Lemmas are combined in a standard way to prove our Main Theorem. The details
are in \S~\ref{s.algorithm}.
\section{Box Norms}\label{s.box}
It will be helpful to recall the Gowers uniformity or Box Norms in
a more general form. In this we follow the the presentation in the
appendices of \cite{math.NT/0606088}, with most, but not all, Lemmas
similar in statement to that reference. The notion of a Box Norm is
critical to all the principal arguments of this paper; accordingly,
we have pulled these general results together into their own section.
\begin{GowersBox} \label{d.GowersBox} Let $\{ X_{u} \}_{ u \in U}$
be a finite non-empty collection of finite non-empty sets indexed by $ u \in U$.
For any $V\subseteq U$ write $X_{V} := \prod_{ v \in V} X_{v} $ for the Cartesian product.
For a complex-valued function
$f_{U}: X_{U} \to \mathbb C $, we define the \emph{Gowers Box Norm} (or just \emph{Box Norm})
$\norm f_{U} . {\Box^U(X_{U})} . \in {\mathbb R}^+$ to be
\begin{equation}\label{fa}
\lVert f_{U}\rVert_{\Box^U(X_{U})}^{2^{\lvert U\rvert }} := \mathbb E_{x^{0}_{U}, x^{1}_{U} \in X_{U}}
\prod_{\omega_{U} \in \{0,1\}^U} {\mathcal C}^{\vert\omega_{U}\rvert} f_U( x^{\omega_{U}}_{U} )
\end{equation}
where ${\mathcal C}: z \mapsto \overline{z}$ is complex conjugation,
and for any $x^{0}_{U} = (x^{0}_{u} )_{ u\in U}$ and
$x^{1}_{U} = (x^{1}_{u} )_{ u\in U}$ in $X_{U}$ and
$\omega_{U} = (\omega_{u} )_{ u\in U}$ in $\{0,1\}^U$, we write
$x^{\omega}_{U} := (x^{\omega_{u} }_{u} )_{ u\in U}$
and $|\omega_{U}| := \sum_{ u\in U} \omega_{u} $.
In the special case that $U$ is empty, forcing $f_{U}$ to be a constant, we have
$\lVert f_{U}\rVert_{\Box^U(X_{U})} := \lvert f_{U}\rvert $.
\end{GowersBox}
Above, we use the notation $ A^B$ for the class of maps from $ B$ into $ A$, which notation will be used throughout
the paper.
If $ U= \{u\}$, then $ \lVert f_{U}\rVert_{\Box^U(X_{U})} = \lvert \mathbb E _{X_u} f \rvert
$. In particular this is non-negative, and can be zero. Note that if $ A\subset X _{U}$,
$\lVert A\rVert_{\Box^U(X_{U})} ^{2 ^{\lvert U\rvert }} $ is the
average number of `boxes' in $ A$. Thus,
$ \lVert A- \mathbb P (A\,:\, X_U)\rVert_{\Box^U(X_{U})} $ measures the degree to which
$ A$ behaves as expected, in regards to the number of boxes it contains.
It is also easy to verify that if $ A$ is a randomly selected subset of $ X _{U}$,
then $ \lVert A- \mathbb P (A\,:\, X_U)\rVert_{\Box^U(X_{U})} $ is small. A similar
point is essential to this section: Sets which are small with respect to this semi-norm behave in a manner
similar to randomly selected subsets. A set $ A$ for which $\lVert A- \mathbb P (A\,:\,
X_U)\rVert_{\Box^U(X_{U})} $ is small we will call \emph{uniform}.
The Box Norms
arise through the following inequality, proved by inductive application of
the Cauchy-Schwartz inequality. For this Lemma, see \cite{math.NT/0606088}*{Lemma B.2}.
\begin{GCSI}\label{gcsi}
Let $ U$ be non-empty, and $\{X_{u}\}_{u\in U}$ be a finite collection of finite non-empty sets.
For every $\omega_{U} \in \{0,1\}^U$ let $f_{U}^{\omega_{U}}: X_{U} \to \mathbb C $ be a function.
Then
\begin{equation}\label{gcz-box}
\biggl\lvert\mathbb E_{x^{0}_{U}, x^{1}_{U} \in X_{U}}
\prod_{\omega_{U} \in \{0,1\}^U} {\mathcal C}^{\lvert\omega_{U}\rvert} f_{U}^{\omega_{U}}(x^{\omega_{U}})\biggr\rvert
\leq \prod_{\omega_{U} \in \{0,1\}^U} \lVert f_{U}^{\omega_{U}} \rVert_{\Box^U(X_{U})}\,.
\end{equation}
\end{GCSI}
From this, it follows that one has the \emph{Gowers Triangle Inequality.}
\begin{equation}\label{e.gti}
\norm f_{U}+g_{U}. \Box ^{U} (X_{U}).
\le \norm f_{U}. \Box ^{U} (X_{U}). +\norm g_{U}. \Box ^{U} (X_{U}).
\end{equation}
Indeed, raise the left-hand side of \eqref{e.gti} to the power $ 2 ^{\lvert U\rvert }$, expand multilinearly,
and apply \eqref{gcz-box} to each of the resulting terms.
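The inequality \eqref{gcz-box} is also easy to test numerically; the following Python sketch (ours, illustration only) does so for $\lvert U\rvert=2$ and four random real-valued functions on $\mathbb Z_N\times\mathbb Z_N$.
\begin{verbatim}
# Check the Gowers-Cauchy-Schwartz inequality for |U| = 2: the multilinear
# average on the left is at most the product of the four Box Norms.
import numpy as np

def box_norm_2d(G):
    N = G.shape[0]
    return (np.sum((G @ G.T) ** 2) / N ** 4) ** 0.25

rng = np.random.default_rng(3)
N = 64
F = {w: rng.standard_normal((N, N)) for w in ((0, 0), (0, 1), (1, 0), (1, 1))}

lhs = abs(np.sum((F[(0, 0)] @ F[(1, 0)].T) * (F[(0, 1)] @ F[(1, 1)].T)) / N ** 4)
rhs = np.prod([box_norm_2d(F[w]) for w in F])
assert lhs <= rhs
print(lhs, rhs)
\end{verbatim}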
We will also refer to this corollary to the Gowers-Cauchy-Schwartz inequality.
\begin{corollary}\label{c.gcsi}
Let $\{X_{u} \}_{ u\in U}$ be a finite collection of finite non-empty sets.
For $ V\subset U$, let $ f _{V} \;:\; X_{V} \to \{z\in \mathbb C
\,:\, \lvert z\rvert\le 1 \} $. Then,
\begin{align}\label{e.gcsi1}
\Abs{ \mathbb E _{x\in X_{U}} \prod _{V\subset U} f_{V} (x_{V}) }
&\le \norm f_{U} . \Box^U (X_{U}). \,.
\end{align}
That is, only the Box Norm associated to the largest set $ U$ is needed.
Here, for $ x\in X_{U}$, $ x_{V}$ is the restriction of the sequence $ x= \{x_{u} \,:\,
u\in U\}$ to the set $ V\subset U$.
\end{corollary}
The inequality \eqref{e.gcsi1} is \cite{math.NT/0606088}*{(B.7)}, and it suggests
that the $ \Box^U$ norm is insensitive to `lower order' perturbations.
We single out a more general inequality that is important to us.
\begin{lemma}\label{l.gcsi1} Under the hypotheses of Corollary~\ref{c.gcsi},
for $ V_0\subsetneq U$, we have
\begin{equation} \label{e.gcsi2}
\Abs{ \mathbb E _{x\in X_{U}} \prod _{\substack{V\subset U\\ \lvert V\rvert\le \lvert V_0\rvert }}
f_{V} (x_{V}) }
\le \norm f_ {V_0} . \Box^ {V_0} (X_ {V_0}). \,.
\end{equation}
\end{lemma}
The inequality \eqref{e.gcsi2} has a proof similar to that of \eqref{e.gcsi1}, which
we omit. (Our proof of the von Neumann Lemma below could also provide a proof,
as we comment when we arrive there.)
It has a similar interpretation to the first inequality: the
$ \Box ^{V_0}$ norm is insensitive to perturbations of the same order in distinct
variables.
\begin{corollary}\label{c.gcsi1}
For all $ \epsilon >0$ and all integers $ k$, and finite sets $ U$ with $ \lvert U\rvert\ge k $
there is a $ C_1 = C_1 (\lvert U\rvert, k, \epsilon ) $ for which the following holds.
Let $\{X_{u} \}_{ u\in U}$ be a finite collection of finite non-empty sets,
and $ X_{V}= \prod _{ u\in V} X_{u} $, for $ V\subset U$.
Let $ \mathcal U_k$ be the collection of subsets of $ U$ of cardinality $ k$, and
for each $ V\in \mathcal U_k$ let $ S_V\subset X_V$ satisfy
\begin{equation}\label{e.xgc1}
\norm S_V - \mathbb P (S_V). \Box ^{V} X_V.
\le \bigl( \tfrac12 \mathbb P (S_V) \bigr) ^{C_1} \,, \qquad V\in \mathcal U _k\,.
\end{equation}
Then, we have the inequality
\begin{equation}\label{e.xgc2}
\ABs{ \mathbb E _{X_U} \prod _{V\in \mathcal U_k} S_V
- \prod _{V\in \mathcal U_k} \mathbb E _{X_V} S_V }
\le
\epsilon \prod _{V\in \mathcal U_k} \mathbb E _{X_V} S_V \,.
\end{equation}
\end{corollary}
Thus, if all the sets $ S_{V}$ are \emph{very uniform} with respect to the natural
Box Norms, the expectation of the products of the $ S_{V}$ behaves as if the
sets are randomly selected.
\begin{proof}
We induct on the number $ w$ of sets $ V\in \mathcal U_k$
for which $ S_V\neq X_V$. That is, we prove that for
all $ \epsilon >0$, integers $ k$, and $ 1\le w \le \lvert \mathcal U_k\rvert $
there is a $ C_1 (\lvert U\rvert, k , \epsilon ,w )$ so that if
the collection $ \{S_V\}$ satisfies \eqref{e.xgc1}, and $ S_V \neq X_V$ for at most $ w$ choices of $ V
\in \mathcal U_k$, then \eqref{e.xgc2} holds.
The case of $ w=1$ is obvious. Let us suppose
that this holds for $ 1\le w < \lvert \mathcal U_k\rvert $, and prove the claim for
$ w+1$. We take
\begin{equation*}
C_2=C_2 (\lvert U\rvert, k, \epsilon , w+1 )
=
w+3 + \log _2 1/\epsilon + C_1 ( \lvert U\rvert, k, \epsilon /2, w )\,.
\end{equation*}
Considering the collections $ S_V$ for $ V\in \mathcal U_k$, we select
$ V_0$ so that $ \mathbb P (S_{V_0})$ is minimal. Thus, in particular
we must have $ S _{V_0} \subsetneq X _{V_0}$. Write
$ S_{V_0}= \mathbb P (S _{V_0})+f _{V_0}$.
Since all the sets in $ \mathcal U_k$
have the same cardinality, we have the inequality
\begin{equation*}
\Abs{\mathbb E _{x_U\in X_U} f _{V_0} \prod _{\substack{V\in \mathcal U_k - \{V_0\} }}
S_V } \le \norm f _{V_0}. \Box ^{V_0} X_ {V_0}.
\le \bigl( \tfrac 12 \mathbb P (S_{V_0}) \bigr) ^{C_2}
\le \frac \epsilon 4 \prod _{V\in \mathcal U_k}\mathbb E _{x_V\in X_V} S_V \,.
\end{equation*}
The last inequality follows from the selection of $ V_0$.
We can then apply the induction hypothesis to estimate
\begin{align*}
\ABs{ \mathbb E _{X_U} \prod _{V\in \mathcal U_k} S_V
- \prod _{V\in \mathcal U_k} \mathbb E _{X_V} S_V }
& \le
\frac \epsilon 4 \prod _{V\in \mathcal U_k}\mathbb E _{x_V\in X_V} S_V
\\& \qquad + \mathbb P (S_{V_0})
\ABs{ \mathbb E _{X_U} \prod _{V\in \mathcal U_k - \{V_0\}} S_V
- \prod _{V\in \mathcal U_k- \{V_0\}} \mathbb E _{X_V}S_V }
\\
& \le \epsilon \prod _{V\in \mathcal U_k} \mathbb E _{X_V} S_V \,.
\end{align*}
So the induction is complete.
We can then conclude the Lemma by taking $ C_1 (\lvert U\rvert, k, \epsilon )
= C_2 (\lvert U\rvert, k, \epsilon/2 , \lvert \mathcal U_k\rvert )$.
\end{proof}
We frequently use the following consequence of the Gowers-Cauchy-Schwartz inequality.
\begin{lemma}\label{l.Ugcsi}
Let $\{X_{u} \}_{ u\in U}$ be a finite collection of finite non-empty sets.
For $ V\subset U$, let $ S_{V} \subset X_{V}$. Then, for an integer $ k \le \lvert U\rvert $
\begin{equation}\label{e.Ugcsi}
\ABs{ \mathbb E _{x\in X_{U}} \prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
S_V (x_{V}) - \prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
\mathbb E _{x_{V}\in X_{V}} S_V (x_{V})}
\le 2 ^{\lvert U\rvert } \cdot \max _{ \substack{V\subset U\\ \lvert V\rvert\le k }}
\norm S_{V} - \mathbb E _{x_{V}\in X_{V}} S_{V} . \Box^{V} (X_V). \,.
\end{equation}
\end{lemma}
As before, if the sets $ S_{V}$ are very uniform with respect to the natural
Box Norms, the expectation of the product of the $ S_{V}$ behaves as if the
sets were randomly selected. In order for this inequality to be non-trivial, we need
\begin{equation*}
\max _{ \substack{V\subset U\\ \lvert V\rvert\le k }}
\norm S_{V} - \mathbb E _{x_{V}\in X_{V}} S_{V} . \Box^{V} (X_V).
\le 2 ^{- \lvert U\rvert }
\prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
\mathbb E _{x_{V}\in X_{V}} S_V (x_{V})
\end{equation*}
Of course, the Lemma is trivial if $ k=1$, and
for $ k>1$, this uniformity requirement is quite
restrictive if the sets $ S_{V}$ have small probabilities.
This is exactly the situation in our proof.
\begin{proof}
We view
\begin{equation} \label{e.1B}
\mathbb E _{x\in X_{U}} \prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
S_V (x_{V})
\end{equation}
as a multi-linear form, with the order of the multi-linearity being
$
\sum _{j=1} ^{k} \binom { \lvert U\rvert } j\,,
$
a term which we have crudely estimated by $ 2 ^{\lvert U\rvert }$ in \eqref{e.Ugcsi}.
For each set $ V\subset U$, we consider the expansion of the function $ S_{V}$
as $ S_{V}=g_{V,0}+g_{V,1}$ where $ g_{V,0}= \mathbb P (S_{V} \,:\, X_{V}) \cdot X_{V}$,
and $ g_{V,1} $ is the balanced function.
We expand the term in \eqref{e.1B}. Let $ \mathcal I $ be the collection of subsets
of $ U$ of cardinality at most $ k$. We have
\begin{equation*}
\eqref{e.1B}=
\sum _{\epsilon \in \{0,1\} ^{\mathcal I}}
\mathbb E _{x_U\in X_{U}} \prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
g_{V, \epsilon (V)} (x_{V})\,.
\end{equation*}
The leading term arises from the choice of $ \epsilon_0 $ which takes the value $ 0$
for all choices of sets $ V$. For this function we have
\begin{equation*}
\mathbb E _{x_U\in X_{U}} \prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
g_{V, \epsilon_0 (V)} (x_{V})
=
\prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
\mathbb E _{x_{V}\in X_{V}} S_V (x_{V})\,,
\end{equation*}
which is part of the expression on the left in \eqref{e.Ugcsi}.
For the remaining choices of $ \epsilon $, let $ B_1\subset U$ be a set of maximal cardinality for which $ \epsilon (B_1)=1$.
Then, for any subset $ V\subset U$ with $ \lvert B_1\rvert< \lvert V\rvert\le k $,
we have $ \epsilon (V)=0$, so that $ g_{V, \epsilon (V)}$ is a constant
function, taking a value of at most one.
It follows from \eqref{e.gcsi2} that we have
\begin{equation*}
\Abs{\mathbb E _{x_U\in X_{U}} \prod _{\substack{V\subset U\\ \lvert V\rvert\le k }}
g_{V, \epsilon (V)} (x_{V})}
\le
\Abs{\mathbb E _{x_U\in X_{U}}
\prod _{\substack{V\subset U\\ \lvert V\rvert\le \lvert B_1\rvert }}
g_{V, \epsilon (V)} (x_{V})}
\le
\norm g_{B_1,1} . \Box ^{B_1} (X_{B_1}). \,.
\end{equation*}
From this, \eqref{e.Ugcsi} follows.
\end{proof}
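For small parameters, the inequality \eqref{e.Ugcsi} can also be checked by direct enumeration. The sketch below is again purely illustrative and not part of the proof: it reuses \texttt{box\_norm} from the first sketch, restricts to nonempty $ V$, and chooses the sets $ S_V$ at random; all sizes and densities are arbitrary.
\begin{verbatim}
import itertools, random
from math import prod

def check_ugcsi(X_sizes, k, density=0.5):
    # Illustrative check of (e.Ugcsi): random S_V for every nonempty V with |V| <= k.
    # Uses box_norm from the first sketch.
    U = range(len(X_sizes))
    X = {u: range(X_sizes[u]) for u in U}
    Vs = [V for r in range(1, k + 1) for V in itertools.combinations(U, r)]
    S = {V: {p for p in itertools.product(*(X[u] for u in V)) if random.random() < density}
         for V in Vs}
    dens = {V: len(S[V]) / prod(X_sizes[u] for u in V) for V in Vs}
    hits = sum(all(tuple(x[u] for u in V) in S[V] for V in Vs)
               for x in itertools.product(*(X[u] for u in U)))
    lhs = abs(hits / prod(X_sizes) - prod(dens[V] for V in Vs))
    rhs = 2 ** len(X_sizes) * max(
        box_norm(lambda p, V=V: (p in S[V]) - dens[V], [X[u] for u in V]) for V in Vs)
    return lhs, rhs

print(check_ugcsi([6, 6, 6], k=2))    # the first entry should not exceed the second
\end{verbatim}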
We note the following Corollary to the proof above, with the main distinction
being that some of the functions are indicators of uniform sets as before, while
others are arbitrary bounded functions. The conclusion is that the uniform
sets matter little to the computation of the expectation.
\begin{corollary}\label{c.Ugcsi}
Let $\{X_{u} \}_{ u\in U}$ be a finite collection of finite non-empty sets and let
$ k$ be a non-zero integer.
Let $ \mathcal V_1$ and $ \mathcal V_2$ be two collections of subsets of $ U$, with
all members of $ \mathcal V_1$ and $ \mathcal V_2$ having cardinality at most $ k$.
For $ V\in \mathcal V_1$, let $ S_{V} \subset X_{V}$. For $ W\in \mathcal V_2$ let
$ f_W \;:\; X_W \longrightarrow [-1,1]$ be a bounded function.
Then,
\begin{equation}\label{e.cUgcsi}
\begin{split}
\ABs{ \mathbb E _{x\in X_{U}} \prod _{\substack{V \in \mathcal V_1}}
S_V (x_{V})
\prod _{\substack{W \in \mathcal V_2}} f_W (x_W)
&- \prod _{\substack{V \in \mathcal V_1}}
\mathbb E _{x_{V}\in X_{V}} S_V (x_{V})
\times
\mathbb E _{x_U\in X_U} \prod _{\substack{W \in \mathcal V_2}} f_W (x_W)
}
\\&\qquad \le 2 ^{\lvert U\rvert } \cdot \max _{ \substack{V\in \mathcal V_1}}
\norm S_{V} - \mathbb E _{x_{V}\in X_{V}} S_{V} . \Box^{V} (X_V). \,.
\end{split}
\end{equation}
\end{corollary}
We turn to a more complicated version of these Lemmas and Corollaries.
\begin{lemma}\label{l.01} Let $ U$ be a finite set, and for each $ u\in U$ let $ X_u$ be
a finite set. Fix $ 1<k< \lvert U\rvert $, and let $ \mathcal V$ be a
collection of subsets of $ U$ of cardinality at most $ k$. Let $ S_U\subset X_U$,
and write $ \delta = \mathbb P (S_U)$. Assume that
\begin{equation}\label{e.0v}
\sup _{V\in \mathcal V}
\mathbb E _{x ^{0} _{U-V} \in X _{U-V}} \norm f _{U} (x^V_U). \Box ^{V} X_V .=\tau
<\delta \lvert \mathcal V\rvert ^{-1} \,,
\qquad
f_U \coloneqq S_U- \delta \,.
\end{equation}
We emphasize that, in the expansion of the Box Norm above, the Box Norm is taken
over the variables associated to $ V$ and the expectation is taken over \emph{all}
variables in $ U$. The conclusion is that we have the inequality below.
\begin{equation}\label{e..0<}
\mathbb E _{x ^{0} _{U} } \ABs{ \delta ^ {\lvert \mathcal V\rvert }
-\mathbb E _{x ^1 _{U}}\prod _{V\in \mathcal V} S _U ( x^V _{U})}
\lesssim \tau \,.
\end{equation}
The implied constant depends upon $ \lvert \mathcal V\rvert $. Above, by very slight abuse
of notation, we mean
\begin{equation*}
x^V _{U} = \begin{cases}
x^1 _{v} & v\in V
\\
x ^0 _v & v\not\in V
\end{cases}
\end{equation*}
\end{lemma}
This is a `conditional' version of Corollary~\ref{c.Ugcsi}.
In particular, note that in \eqref{e.0v}, we impose the Box Norms in
the variables $ X_V$, and take the expectation over all of $ X_U$.
The conclusion is again that if the set is suitably small with respect to
a family of relevant Box Norms, then a range of products of these sets behave
as if the set were randomly selected.
\begin{proof} Let us begin by noting that for $ V\in \mathcal V$, the monotonicity of the
Box Norms as the variables increase implies that
\begin{equation*}
\mathbb E _{x ^{0} _{U} } \Abs{ \delta
-\mathbb E _{x ^1 _{U}} S _U ( x^V _{U})}
\le \norm S_U - \delta . \Box ^{V} X_U. \le \tau \,.
\end{equation*}
It follows by the assumption on the magnitude of $ \tau $ that we can estimate
\begin{align*}
\Abs{\delta ^ {\lvert \mathcal V\rvert }
-\prod _{V\in \mathcal V} \mathbb E _{x ^1 _{U}} S _U ( x^V _{U})}
&\le (\delta + \tau ) ^{\lvert \mathcal V\rvert }- \delta ^{\lvert \mathcal V\rvert }
\\
& \le \delta ^{\lvert \mathcal V\rvert } \bigl[(1+ \tau \delta ^{-1} )^{ \lvert \mathcal V\rvert} -1\bigr]
\lesssim \tau \,.
\end{align*}
Also note that we can estimate, using Lemma~\ref{l.Ugcsi},
\begin{align*}
\mathbb E _{x ^{0}_U}
\Abs{ \mathbb E _{x_U ^{1} }
\prod _{V\in \mathcal V} S_{U} (x^V _{U})
- \prod _{V\in \mathcal V} \mathbb E _{x_U ^{1} } S _{U} (x^V _{U}) }
& \lesssim
\mathbb E _{x ^{0}_U} \sup _{V\in \mathcal V} \Norm S_U (x_V) - \mathbb E _{x_U ^{1} } S
_{U} (x^V _{U}) . \Box ^V X_V.
\lesssim 2 \tau \,.
\end{align*}
Putting these inequalities together proves the Lemma.
\end{proof}
\section{Linear Forms for the Analysis of Box Norms} \label{s.linearForms}
Box Norms and the counting of corners in sets are examples of the multi-linear forms that we will work with.
Their analysis will lead to forms in as many as $ 24$ functions, necessitating some general
remarks on such objects. Moreover, we are analyzing these forms on objects that are far
from tensor products. This is the primary focus of this section.
We will be making a wide variety of approximations to different expectations. In order to codify these
approximations, let us make this definition.
\begin{definition}\label{d.=} Fix a small constant $ 0<\upsilon < 3 ^{-28}$.
For $ A,B>0$ we will write $ A \stackrel u = B$ if $ \lvert A-B\rvert < \upsilon A$.
(We stack a `$ u$' on the equality, as this relation will always come about from uniformity.)
In those (few) instances where it is important to emphasize the role of $ \upsilon $, we will
write $ A \stackrel {u, \upsilon } = B$.
\end{definition}
We will only use the notation for quantities between $ 0$ and $ 1$. Observe the following.
Let $ 0<A,B,\alpha , \beta <1$. If $ A \stackrel {u, \upsilon } = \alpha B $ and $ B \stackrel {u, \upsilon } = \beta $, then
we have
\begin{align*}
\lvert A- \alpha \cdot \beta \rvert &\le
\lvert A- \alpha B\rvert+ \alpha \cdot \lvert \beta -B\rvert
\\
& \le \upsilon A+ \alpha \upsilon B \le 3 \upsilon A \,.
\end{align*}
Thus, we can write $ A \stackrel {u, 3\upsilon } = \alpha \cdot \beta $; that is, this relationship
is weakly transitive. We will need to
use a finite chain of inequalities of this type, with the longest chain associated with the analysis
of a $ 28$-linear form in Lemma~\ref{l.Z} below. By abuse of notation, we will adopt the convention
$ A \stackrel u = B$ and $ B \stackrel u = C$ implies $ A \stackrel u = C$. This transitivity will
only be applied a finite number of times, so that taking an initial $ \upsilon $ in Definition~\ref{d.=} will
lead to a meaningful inequality at every stage of our proof.
A second situation we will have is this. Suppose that $ A \stackrel {u, \upsilon } = B$ and $ A' \stackrel {u,\upsilon } = B'$.
Then,
\begin{align*}
\lvert AA'-BB'\rvert & \le \lvert A-B\rvert A' + \lvert A'-B'\rvert B
\\ &
\le \upsilon (AA'+A'B) \le 3 \upsilon A A' \,.
\end{align*}
Thus, we can write $ A A' \stackrel {u, 3\upsilon } = B B'$; thus this relationship is
weakly multiplicatively transitive. We will need to use a finite chain of these inequalities, mostly related
to computing conditional expectations. By abuse of notation, we will adopt the convention that
$ A \stackrel u = B $ and $ A' \stackrel u = B'$ implies $ A A' \stackrel u = B B'$. This observation is
closely linked with the fact that our definition of admissibility, Definition~\ref{d.admissible} includes
relative measures of uniformity.
Our Lemmas and Definitions
should be coordinate-free, but to ease the burden of notation, we state them singling out
the coordinate $ x_4$ for a special role. They will be applied in their more general formulations, which
are left to the reader.
We are concerned with the evaluation of certain multi-linear forms, especially
those associated with Box Norms. For a collection of maps $ \Omega \subset \{0 ,\dotsc, \lambda -1\} ^{\{1,2,3\}}$,
where $ \lambda \ge 2$ is an integer, let $ \{f _{\omega } \,:\, \omega \in \Omega \}$ be a collection of functions.
The linear forms we are interested in are
\begin{equation}\label{e.Ldef}
\operatorname L( f _{\omega } \,:\, \Omega )
= \mathbb E _{x _{1,2,3} ^{\ell } \in S _{1,2,3} \,, \, 0\le \ell\le \lambda -1 }
\prod _{\omega \in \Omega } f _{\omega } (x _{1,2,3} ^{\omega }) \,.
\end{equation}
This next definition is concerned with the uniform evaluation of forms of this type, where the $ f_\omega $
are particularly simple.
\begin{definition}\label{d.U}
Let $ \lambda \ge 3$ be an integer, and $ 0< \vartheta <1$.
A subset $ U\subset T_4$ is called \emph{$ (\lambda , \vartheta,4) $-uniform} if the following
holds. Set $ \Omega_{3\to \lambda } = \{0 ,\dotsc, \lambda -1 \}^{ \{1,2,3\}}$ .
For any subset $ \Omega \subset \Omega_{3\to \lambda }$ we have the inequalities
\begin{equation} \label{e.Uuniform}
\operatorname L _{\Omega } (U\,:\, \Omega )
\stackrel {u, \vartheta } =
\bigl[ \delta _{ 4} \delta _{U\,:\, 4}
\bigr] ^{ \lvert \Omega \rvert }
\prod _{1\le j<k\le 3}
\delta _{j,k} ^{ \lvert \{\omega \vert _{ \{j,k\}} \,:\, \Omega \} \rvert }
\end{equation}
Here, $ \delta _{U\,:\, 4} =\mathbb P (U\,:\, T_4)$.
That is, the percentage error between the two terms is at most $ \vartheta $.
\end{definition}
It is an important point that we index this notion on the number of linearities that we permit the form to have, as
we must provide an upper bound on this notion of complexity.
Our primary objective is that $ T$ be well-behaved with respect to the Box Norm, in particular that
Lemma~\ref{l.Tbox} holds. This will require that $ T$ be $ (4, \vartheta_1 , 4)$-uniform, where
$ \vartheta _1$ is specified in that Lemma. But this will in turn require that $ T_4$ be
$ (12, \vartheta _2, 4)$-uniform. It is one purpose of this section to explain this relationship.
See Lemma~\ref{l.Uuniform}.
While we will use these results several times, there are two points where either these results
would apply but would lead to an increased order of complexity, as in the proof of \eqref{e.L10},
or the results of this section are not stated in enough generality, as in the proof of \eqref{e.B4var}.
A full understanding of these issues would likely be an aid to extending this argument to higher
dimensions.
In this definition, examining the product of densities, we see that $ \delta _{U\,:\, 4}=
\mathbb P (U\,:\, T_4)$ has the power $ \lvert \Omega \rvert $, that is the total number of
terms in the product. The power on the density $ \delta _{j,k}$ is the number of distinct
maps of the form $ \omega $, restricted to $ \{j,k\}$ in the set $ \Omega $. To set out
an example, a typical
term to which we will apply this definition is to the set $ U=T_4$, in
\begin{equation*}
\mathbb E _{\substack{x _{1}\in S_1,
\\ x_{2,3} ^0, x _{2,3}^1\in S _{2,3}} }
\prod _{ \epsilon \in \{0,1\} ^ {\{2,3\}} }
T_4 (x_1, x_{2,3} ^{\epsilon })
\end{equation*}
Here, it is clear that $ \lvert \Omega \rvert=4 $, while
\begin{equation*}
\lvert \{\omega \vert _{ \{1,2\}} \,:\, \Omega \} \rvert = 2\,,
\qquad
\lvert \{\omega \vert _{ \{1,3\}} \,:\, \Omega \} \rvert = 2\,,
\qquad
\lvert \{\omega \vert _{ \{2,3\}} \,:\, \Omega \} \rvert = 4\,.
\end{equation*}
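The counts entering \eqref{e.Uuniform} are purely combinatorial, and can be generated mechanically. The following sketch is illustrative only; the collection of maps below reproduces the example just given, and computes $ \lvert \Omega \rvert$ and the number of distinct restrictions $ \omega \vert _{\{j,k\}}$.
\begin{verbatim}
import itertools

def exponent_profile(Omega):
    # Omega: maps omega on {1,2,3}, given as triples (omega(1), omega(2), omega(3)).
    # Returns |Omega| and the number of distinct restrictions omega|_{j,k}.
    pairs = [(1, 2), (1, 3), (2, 3)]
    restr = {jk: {(om[jk[0] - 1], om[jk[1] - 1]) for om in Omega} for jk in pairs}
    return len(Omega), {jk: len(restr[jk]) for jk in pairs}

# The example in the text: omega(1) = 0, while (omega(2), omega(3)) ranges over {0,1}^2.
Omega = [(0, e2, e3) for e2, e3 in itertools.product((0, 1), repeat=2)]
print(exponent_profile(Omega))   # (4, {(1,2): 2, (1,3): 2, (2,3): 4})
\end{verbatim}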
The parameter $ \vartheta $ appears on the right in \eqref{e.Uuniform}, and measures,
in percentage terms, how close the expectation is to its expected value.
A set $ U$ is $ (\lambda, \vartheta,4) $-uniform if
a wide set of expectations of $ U$ `behave as expected.'
It is hardly obvious that even the set $ T_4$ satisfies this definition, but it
does, and we prove in Lemma~\ref{l.Uuniform} that both $ T_4$ and $ T$ are uniform.
\begin{lemma}\label{l.Uuniform} For constants $ C_1> C_0>0$ that depend only on $ C _{\textup{admiss}}$ in Definition~\ref{d.admissible},
the following two assertions are true.
\begin{enumerate}
\item For $ \vartheta = \delta _{T\,:\, T_4} ^{C_0}$, the set $ T_4$ is $ (12, \vartheta , 4)$-uniform.
\item For $ \vartheta = \delta _{T\,:\, T_4} ^{C_1}$, the set $ T$ is $ (6, \vartheta ,4)$-uniform.
\end{enumerate}
In fact, $ C_1, C_0$ can be taken to be a small constant multiple of $ C _{\textup{admiss}}$.
\end{lemma}
As the statement of the Lemma indicates, there is a link between the complexity of the linear forms
we need to consider for $ T$ and $ T_4$.
\begin{proof}
Let us discuss $ T_4$ first. Note that by \eqref{e.Ad1Box} and \eqref{e.gcsi2},
\begin{align} \notag
\operatorname L (T_4 \,:\, \Omega ) & =
\mathbb E _{\substack{x_{1,2,3} ^\ell \in S _{1,2,3} \\ 0\le \ell\le11 }}
\prod _{\omega \in \Omega } T_4 (x _{1,2,3} ^{\omega })
\\ \label{e.454}
&=
\mathbb E _{\substack{x_{1,2,3} ^\ell \in S _{1,2,3} \\ 0\le \ell\le11 }}
\prod _{\omega \in \Omega }
S_4 (x_1 ^{\omega (1)}+ x_2 ^{\omega (2)}+ x_3 ^{\omega (3)})
\prod _{1\le j<k\le 3} S _{j,k} (x _{j,k} ^{\omega })
\\
&=
\delta _{4} ^{\lvert \Omega \rvert } \cdot
\mathbb E _{\substack{x_{1,2,3} ^\ell \in S _{1,2,3} \\ 0\le \ell\le11 }}
\prod _{\omega \in \Omega }
\prod _{1\le j<k\le 3} S _{j,k} (x _{j,k} ^{\omega })
\label{e.ER1}
+O(\mathbb P (T\,:\, H \times H \times H) ^{C _{\textup{admiss}} -12}) \,.
\end{align}
The power on $\mathbb P (T\,:\, H \times H \times H) $ accounts for the fact that implicitly the
condition \eqref{e.Ad1Box} is an expectation over $ H$, while above we are taking integration over
$ S _{1,2,3}$.
We continue with the analysis of the expectation above. We can use \eqref{e.gcsi2} and \eqref{e.Ad2Box} to
estimate
\begin{align}
\mathbb E _{\substack{x_{1,2,3} ^\ell \in S _{1,2,3} \\ 0\le \ell\le11 }}
\prod _{\omega \in \Omega }
\prod _{1\le j<k\le 3} S _{j,k} (x _{j,k} ^{\omega })
&=
\prod _{1\le j<k\le 3}
\delta _{j,k} ^{ \lvert \{\omega \vert _{ \{j,k\}} \,:\, \Omega \} \rvert }
\label{e.ER2}
+O(\mathbb P (T \,:\, S_{1,2,3,4}) ^{C _{\textup{admiss}}}) \,.
\end{align}
The leading terms of the expectations are exactly as desired. The two error terms in \eqref{e.ER1} and \eqref{e.ER2}
should be as small as desired, namely they should contribute at most $ \vartheta \operatorname L (T_4 \,:\, \Omega ) $.
But it is straightforward to see that we can take $ C_0$ of the Lemma to be $ C _{\textup{admiss}}-12 - \lvert \Omega \rvert
\ge C _{\textup{admiss}}-12 - 3 ^{12}$, with $ 3 ^{12}$ being the cardinality of $ \Omega _{3\to 12} = \{0 ,\dotsc, 11\}
^{\{1,2,3\}}$.
We turn to the second conclusion of the Lemma. Let $ \Omega \subset \Omega _{3\to 6}$,
and consider the multi-linear expression $ \operatorname L (T\,:\, \Omega )$.
Each occurrence of $ T$ is expanded as $ T=f_1+f_0$ where $ f_1= \delta _{T\,:\, 4} T_4$.
The leading term is when each $ T$ is replaced by $ f_1$, which leads to $ \delta _{T\,:\, 4} ^{\lvert \Omega \rvert }$
times the expectation in \eqref{e.454}. There are $ 2 ^{\lvert \Omega \rvert }-1 $ terms remaining. Each of them
has an occurrence of $ f_0$. All of these terms can be controlled by the assumption \eqref{e.Ad3Box}, and
importantly, the inequality \eqref{e.3Box} below. (We have not yet proved \eqref{e.3Box}, part of
Lemma~\ref{l.goodBox}, but its proof is independent of this argument.)
This last Lemma is applied with $ \lambda =6$, $ V=T_4$, which as we have just seen in the first half of the
proof, is $ (12, \vartheta ', 4)$-uniform, for a very small choice of $ \vartheta '$.
This gives us
\begin{align*}
\Abs{ \operatorname L (T\,:\, \Omega )- \delta _{T\,:\, T_4} ^{\lvert \Omega \rvert }
\operatorname L (T_4\,:\, \Omega )
}
&\le 2 ^{\lvert \Omega \rvert+1 } \operatorname L (T_4 \,:\, \Omega )
\cdot \frac { \norm f_0 . \Box ^{1,2,3} S _{1,2,3}.} { \norm T_4. \Box ^{1,2,3} S _{1,2,3}.}
\\ &
\le 2 ^{\lvert \Omega \rvert+1 } \delta _{T\,:\, 4} ^{C _{\textup{admiss}}} \cdot \operatorname L (T_4 \,:\, \Omega ) \,.
\end{align*}
And this completes the proof.
\end{proof}
Here is a corollary to the previous Lemma that is directly relevant for us.
\begin{lemma}\label{l.UTbox} We have the estimate
\begin{align} \notag
\norm T_4 . \Box ^ {1,2,3} H _{1,2,3}. ^{8}
& =
\mathbb E _{x _{1,2,3}^1, x _{1,2,3}^0\in H _{1,2,3}}
\prod _{\omega \in \{0,1\} ^{1,2,3}} T_4 \circ \lambda _4 (x _{1,2,3} ^{\omega })
\\ \notag
&=
\mathbb E _{x _{1,2,3}^1, x _{1,2,3}^0\in H _{1,2,3}}
\prod _{\omega \in \{0,1\} ^{1,2,3}}
S_4 \circ \lambda _4 (x _{1,2,3} ^{\omega }) \prod _{1\le j<k\le 3} S _{j,k} (x _{1,2,3} ^{\omega })
\\ \label{e.T4Box}
&\stackrel u =
\prod _{j=1} ^{3} \delta _j ^{2} \cdot \delta _4 ^{8} \cdot \prod _{1\le j<k\le 3} \delta _{j,k} ^{2} \,.
\end{align}
\end{lemma}
We return to general considerations, and make a remark that we will refer to several times.
Let $ V\subset T_4$ be $ (\lambda , \vartheta ,4)$-uniform. Let $ \Omega \subset \Omega _{3\to \lambda -1}$,
and assume that the set $ \Omega _{1\to0} $ is non-trivial.
\begin{equation*}
\Omega _{1\to 0}= \{\omega \in \Omega \,:\, \omega (1)=0\}\,, \qquad \Omega _{1\not\to0} = \Omega - \Omega _{1\to 0}\,.
\end{equation*}
Consider the estimate below obtained by applying the Cauchy-Schwartz inequality in all variables except $ x_1 ^0$.
\begin{align} \label{e.con1}
\operatorname L (V\,:\, \Omega ) &\le \bigl[ \operatorname L (V\,:\, \Omega _{1\not\to0} ) \cdot U_2 \bigr] ^{1/2}
\\ \label{e.con2}
U_2 & = \mathbb E \prod _{\omega \in \Omega _{1\not\to0}} V ({x _{1,2,3} ^{\omega }})
\cdot \Abs{\mathbb E _{x_1 ^{0} \in S_1} \prod _{\omega \in \Omega _{1\to0}}
V ({x _{1,2,3} ^{\omega }}) } ^2 \,.
\end{align}
Use \eqref{e.elem} to write the last term as $ U_2 = \operatorname L (V \,:\, \Omega ^{1})$, where
we define
\begin{align} \label{e.con3}
\overline \omega (j) =
\begin{cases}
\lambda & j=1
\\
\omega (j) & j=2,3
\end{cases}
\\ \label{e.con4}
\Omega ^{1} = \Omega _{1\not\to0} \cup \{\omega , \overline \omega \,:\, \omega \in \Omega _{1\to0}\}\,.
\end{align}
\begin{conserve} \label{p.con} If
$ V\subset T_4$ is $ (\lambda , \vartheta ,4)$-uniform, and $ \Omega \subset \Omega _{3\to \lambda -1}$,
with the notation in \eqref{e.con1}---\eqref{e.con4} we have the equality
\begin{equation}\label{e.con5}
\operatorname L (V\,:\, \Omega ) \stackrel {u, \sqrt \vartheta } = \operatorname L (V\,:\, \Omega _{1\not\to0}) ^{1/2} \cdot
\operatorname L (V\,:\, \Omega ^1 ) ^{1/2} \,.
\end{equation}
\end{conserve}
\begin{proof}
The proof is almost trivial. Each $ \omega \in \Omega $ contributes $ 1$ to the densities
$ \delta _{V\,:\, 4}, \delta _4, \delta _{j,k}$ for $ 1\le j<k\le3$. If $ \omega (1)\neq 0$,
it contributes to both terms on the right, so the square root makes contribution $ 1$.
If $ \omega (1)=0$, then it contributes nothing to $ \operatorname L (V\,:\, \Omega _{1\not\to0} )$,
but contributes $ 2$ to the other term $ \operatorname L (V\,:\, \Omega ^1 )$.
\end{proof}
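The bookkeeping in this proof can also be checked mechanically on a concrete collection. The sketch below is illustrative only: it takes $ \Omega $ to be the full cube $ \{0,1\} ^{\{1,2,3\}}$ (an arbitrary choice) and reuses \texttt{exponent\_profile} from the earlier sketch to form $ \Omega _{1\not\to0}$ and $ \Omega ^{1}$ as in \eqref{e.con3}---\eqref{e.con4}; for this choice of $ \Omega $, each exponent attached to $ \Omega $ is the average of the corresponding exponents attached to $ \Omega _{1\not\to0}$ and $ \Omega ^1$, as the Proposition predicts.
\begin{verbatim}
import itertools

lam = 2                                                 # omega takes values in {0,...,lam-1}
Omega = list(itertools.product(range(lam), repeat=3))   # here: the full cube {0,1}^{1,2,3}

Omega_1_to_0   = [om for om in Omega if om[0] == 0]
Omega_1_not_0  = [om for om in Omega if om[0] != 0]
Omega_1_to_lam = [(lam,) + om[1:] for om in Omega_1_to_0]         # \overline{omega}, (e.con3)
Omega_1        = Omega_1_not_0 + Omega_1_to_0 + Omega_1_to_lam    # (e.con4)

for name, col in [("Omega", Omega),
                  ("Omega_{1 not-> 0}", Omega_1_not_0),
                  ("Omega^1", Omega_1)]:
    print(name, exponent_profile(col))   # exponent_profile from the earlier sketch
# For this Omega the exponents for "Omega" are the averages of those for the
# other two collections, matching (e.con5).
\end{verbatim}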
The previous Proposition plays a decisive role in all our applications of the Cauchy-Schwartz inequality, to prove
our weighted versions of these inequalities. This Conservation of Densities has an essentially equivalent
formulation, also important to us, that we give here. With the notation of \eqref{e.con1}---\eqref{e.con4},
set
\begin{equation} \label{e.Zdef}
Z [ \Omega _{1\not\to0} \;:\; \Omega _{1\to0} ] = \mathbb E _{x_1 ^{0} \in S_1}
\prod _{\omega \in \Omega _{1\to0}} V (x _{1,2,3} ^{\omega })
\end{equation}
\begin{lemma}\label{l.Zvar}
Let $ \lambda =1 ,\dotsc, 6$.
Suppose that the set $ V\subset T_4$ is $ (\lambda, \vartheta , 4)$-uniform,
where $ \vartheta \le \mathbb P (V\,:\, T_4) ^{2 \cdot 3 ^{\lambda }}$.
Then, for all choices of $ \Omega \subset \Omega _{3\to \lambda -1 }$ as above, we have
\begin{gather}\label{e.Zvar}
\begin{split}
\operatorname {Var} _{x_j ^{\ell }\in \Omega } \Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}]
&\,:\, \prod _{\omega \in \Omega _{1\not\to0}} V (x _{1,2,3} ^{\omega })\Bigr)
\\&
\le K\sqrt{\vartheta } \cdot
\Bigl[ \mathbb E
\Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}] \,:\,
\prod _{\omega \in \Omega _{1\not\to0}} V (x _{1,2,3} ^{\omega })\Bigr) \Bigr] ^2 \,.
\end{split}
\end{gather}
Here, $ K$ is an absolute constant.
\end{lemma}
Of course the conditional expectation of $ Z$ can be computed.
\begin{proof}
We use the standard formula for the variance of a random variable $ W$ supported on a set $
Y$.
\begin{equation} \label{e.elemVar}
\operatorname {Var} (W\,:\, Y) =
\mathbb P (Y) ^{-1}
\mathbb E W ^2 - (\mathbb P (Y) ^{-1} \cdot \mathbb E W)^2
\end{equation}
The conditional variance will be small if we have
\begin{equation*}
\mathbb E
\Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}] ^2 \,:\,
\prod _{\omega \in \Omega _{1\not\to0}} V (x _{1,2,3} ^{\omega })\Bigr)
\stackrel u =
\mathbb E
\Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}] \,:\,
\prod _{\omega \in \Omega _{1\not\to0}} V (x _{1,2,3} ^{\omega })\Bigr) ^2 \,.
\end{equation*}
But this is a recasting of \eqref{e.con5}.
Namely, using the notation of \eqref{e.con5}, we can write the equation above as
\begin{equation*}
\frac { \operatorname L (V \,:\, \Omega ^{1})} {\operatorname L (V\,:\, \Omega _{1\not\to0})}
\stackrel u =
\frac { \operatorname L (V \,:\, \Omega ) ^2 } {\operatorname L (V\,:\, \Omega _{1\not\to0}) ^2 }
\end{equation*}
which is \eqref{e.con5}.
\end{proof}
We are interested in refinements of
the Gowers Box Norms, in which we estimate $ \operatorname L$ in terms
of a Box Norm of one of its arguments, but do so in a more efficient manner, just as in the
proof of Lemma~\ref{l.3dvon}, which is presented in \S~\ref{s.von}. For this Lemma, let us consider selections of $ f_\omega $
where $ f _{\omega } \in \{f, V\}$, and $ f$ is a fixed function supported on $ V$ and at most
one in absolute value. In
application, $ f$ is a balanced function.
In this Lemma, we will single out the first and second coordinates for a distinguished role,
which is done just for simplicity.
\begin{lemma}\label{l.goodBox} Let $ \lambda = 2 ,\dotsc, 6$.
Suppose that $ V$ is $ (2\lambda, \vartheta , 4)$-Uniform, where
$ \vartheta < \mathbb P (V\,:\, T_4) ^{ 2 \cdot 3 ^{\lambda }}$. Let $ \Omega \subset \Omega _{3\to \lambda } $,
where the value of $ \lambda $ is half of the uniformity assumption imposed on $ V$.
Let $ \{f _{\omega } \,:\, \Omega \}$ be a selection of functions which are
either equal to $ V$ or a fixed function $ f$ which
is supported on $ V$ and bounded by one in absolute value. (In application, $ f$ will be a balanced function.)
\begin{enumerate}
\item Suppose that there is an $ \omega _0\in \Omega $ with $ f _{\omega _0}=f$,
and $ \omega _0 (1)\neq \omega (1)$ for all other $ \omega \in \Omega $ with $ f _{\omega
}=f$.
Then, we have the estimate
\begin{equation}\label{e.1Box}
\abs{ \operatorname L(f _{\omega } \,:\, \Omega )}
< 2 \operatorname L(V\,:\, \Omega ) \cdot
\Biggl[ O( \vartheta )+\frac { \mathbb E _{x_2 , x_3 \in S _{2,3}} \norm f . \Box ^1 S_1 . ^2 }
{ \mathbb E _{x_2 , x_3 \in S _{2,3}} \norm V . \Box ^1 S_1 . ^2 } \Biggr] ^{1/2} \,.
\end{equation}
\item Suppose that there is an $ \omega _0\in \Omega $ with $ f _{\omega _0}=f$,
and $ (\omega _0 (1), \omega _0 (2))\neq (\omega (1), \omega (2))$
for all other $ \omega \in \Omega $ with $ f _{\omega
}=f$.
Then, we have the estimate
\begin{equation}\label{e.2Box}
\abs{ \operatorname L(f _{\omega } \,:\, \Omega )}
< 4 \operatorname L(V\,:\, \Omega ) \cdot
\Biggl[ O( \vartheta )+\frac { \mathbb E _{ x_3 \in S _{2,3}} \norm f . \Box ^{1,2} S_{1,2} . ^{4} }
{ \mathbb E _{x_3 \in S _{2,3}} \norm V . \Box ^{1,2} S_{1,2} . ^{4} } \Biggr] ^{1/4}\,.
\end{equation}
\item If there is at least one $ \omega_0 \in \Omega $ with $ f _{\omega_0 }=f$, we have
\begin{equation}\label{e.3Box}
\abs{ \operatorname L(f _{\omega } \,:\, \Omega )}
< 8 \operatorname L(V\,:\, \Omega ) \cdot
\Biggl[O( \vartheta )+\frac { \mathbb E _{ x_3 \in S _{2,3}} \norm f . \Box ^{1,2,3} S_{1,2,3} . ^{8} }
{ \mathbb E _{x_3 \in S _{2,3}} \norm V . \Box ^{1,2,3} S_{1,2,3} . ^{8} } \Biggr] ^{1/8}\,.
\end{equation}
\end{enumerate}
\end{lemma}
Of course the estimate \eqref{e.3Box} applies in the first two cases of the Lemma. But we will
be in situations, in the proof of Lemma~\ref{l.Tbox},
where we do not wish to use the estimate \eqref{e.3Box}.
We remark that one could read the proof of Lemma~\ref{l.3dvon} in \S~\ref{s.von} before the one below.
This proof in \S~\ref{s.von} is independent of the proof below. It treats a more complicated
situation, in that all the $ T _{j}$ have to be considered, but is only discussed in a single concrete
instance.
\begin{proof}
We can read off a good estimate for $ \operatorname L(V\,:\, \Omega )
$ from \eqref{e.Uuniform}, in all cases $ (1)$---$(3)$ above.
For each of the three cases, we assume that the specified choice of $ \omega _0$
satisfies $ \omega _0\equiv 0$.
In case $ (1)$, we will apply the Cauchy-Schwartz inequality
in all \emph{other } variables. To set notation for this, let
\begin{equation*}
\Omega_{1\to0} = \{\omega \in \Omega \,:\, \omega (1)=0\}\,,
\qquad
\Omega_{1\not\to 0} = \{\omega \in \Omega \,:\, \omega (1)\neq 0 \}\,,
\end{equation*}
and let $ \mathbf X' = \{x _{j} ^{\ell }
\,:\, 1 \le j \le 3\,,\, 0 \le \ell \le \lambda -1 \}- \{x _1 ^0\}$. Then, we apply the Cauchy-Schwartz inequality to
estimate
\begin{gather}\label{e.U11}
\abs{ \operatorname L(f _{\omega } \,:\, \Omega )}
\le
\bigl[ \operatorname L(V \,:\, \Omega _{1\not\to0} ) \cdot W_1 \bigr] ^{1/2}
\\ \label{e.U1=}
\begin{split}
W_1 &= \mathbb E _{x _{j} ^{\ell } \in \mathbf X' }
\prod _{\omega' \in \Omega _{1\not\to0} } V (x _{1,2,3} ^{\omega' })
\ABs{ \mathbb E _{x_1 ^{0}\in S_1}
\prod _{\omega \in \Omega _{1\to 0}} f _{\omega } (x _{1,2,3} ^{\omega })
} ^2
\end{split}
\end{gather}
We continue the analysis of $ W_1$. It follows from the assumption in
part (1) of the Lemma, that $ \omega _0\in \Omega _{1\to 0}$, and $ f _{\omega _0}=f$, but
for all other choices of $ \omega \in \Omega _{1\to 0}$ we have $ f _{\omega }= V$.
In order to expand the square of the expectation, using \eqref{e.elem}, let us define
a new class of maps as follows. For $ \omega \in \Omega _{1\to 0}$, define
\begin{gather} \notag
\overline \omega (j) =
\begin{cases}
\omega (j) & j\neq 1
\\
\lambda & j =1
\end{cases}
\\ \label{e.OO4}
\begin{split}
&\Omega _{1\to \lambda } = \{\overline \omega \,:\, \omega \in \Omega _{1\to0}\}\,,
\qquad
\Omega ^1= \Omega _{1\not\to0} \cup \Omega _{1\to 0} \cup \Omega _{1\to \lambda }\,,
\\
&\Omega _{\{1\}\to\{0,\lambda-1\}} = \{ \omega \in \Omega ^1 \,:\, \omega (1)\in \{0,\lambda \}\}\,.
\end{split}
\end{gather}
Notice that $ \Omega _{\{1\}\to\{0,\lambda-1\}}= \{\omega _0 \,, \, \overline \omega _0\}$, by assumption on $ \Omega $
that holds in this case.
Here and below, we are expanding the set $ \Omega $. We take $ f _{\omega }= V$ for all $ \omega \not \in \Omega $.
We can write
\begin{align} \label{e.use}
W_1 &= \mathbb E _{x _{j} ^{\ell } \in \mathbf X' } \mathbb E _{x_1^0,x_1^\lambda \in S_1}
\prod _{\omega' \in \Omega _{1\not\to0} \cup \Omega _{1\to \lambda } } V (x _{1,2,3} ^{\omega' })
\prod _{\omega \in \Omega _{\{1\}\to\{0,\lambda-1\}} } f _{\omega } (x _{1,2,3} ^{\omega })
\\
& = \label{e.donotuse}
\mathbb E _{\substack{x_1^0,x_1^\lambda \in S_1\\ x_{2,3} ^{0}\in S_{2,3}}}
f (x _{1,2,3} ^{\omega _0}) f (x _{1,2,3} ^{\overline\omega _0 }) \cdot
Z [ \Omega _{\{1\}\to\{0,\lambda-1\}} \;:\; \Omega^1 - \Omega _{\{1\}\to\{0,\lambda-1\}} ] \,,
\end{align}
where the last term is defined in \eqref{e.Zdef}.
It follows from Lemma~\ref{l.Zvar} that $ Z [ \Omega _{\{1\}\to\{0,\lambda-1\}} \;:\; \Omega^1 - \Omega _{\{1\}\to\{0,\lambda-1\}} ]$
is essentially constant on the support of $ V (x _{1,2,3} ^{\omega _0}) V (x _ {1,2,3} ^{\overline \omega _0})$. Namely,
\begin{equation} \label{e.mean}
\begin{split}
\mathbb E\bigl( Z [ \Omega _{\{1\}\to\{0,\lambda-1\}} \;:\; \Omega^1 - \Omega _{\{1\}\to\{0,\lambda-1\}} ]
& \,:\,
V (x _{1,2,3} ^{\omega _0}) V (x _ {1,2,3} ^{\overline
\omega _0})\bigr)
\stackrel {u} = \frac { \operatorname L(V\,:\, \Omega ^{1})}
{ \operatorname L (V,V \,:\, \Omega _{\{1\}\to\{0,\lambda-1\}} )} \,.
\end{split}
\end{equation}
The implied parameter $ \kappa $ in the `$\stackrel {u} = $' is $ \kappa =\sqrt \vartheta $, see Definition~\ref{d.=}. A similar
comment applies to other uses of the symbol `$ \stackrel {u} = $' below.
And the variance of $ Z [ \Omega _{\{1\}\to\{0,\lambda-1\}} \;:\; \Omega^1 - \Omega _{\{1\}\to\{0,\lambda-1\}} ]$ is very small.
Noting that $\operatorname L (V,V \,:\, \Omega _{\{1\}\to\{0,\lambda-1\}} )= \mathbb E _{x_2 , x_3 \in S _{2,3}} \norm V . \Box ^1 S_1 .
^2 $, we can estimate
\begin{equation}\label{e.mmean}
W_1 \le 2
{\operatorname L(V\,:\, \Omega ^{1})} \Biggl[ O( \sqrt \vartheta )+
\frac { \mathbb E _{x_2 , x_3 \in S _{2,3}} \norm f . \Box ^1 S_1 . ^2 }
{ \mathbb E _{x_2 , x_3 \in S _{2,3}} \norm V . \Box ^1 S_1 . ^2 }\Biggr]\,.
\end{equation}
We combine \eqref{e.U11}---\eqref{e.mmean}, to conclude that
\begin{align*}
\Abs{ \operatorname L(f _{\omega } \,:\, \Omega )}
&\le
2
\bigl[ \operatorname L(V \,:\, \Omega _{1\not\to0} ) \cdot
{ \operatorname L(V\,:\, \Omega ^{1} )}
\bigr] ^{1/2}
\times
\Biggl[ O( \sqrt \vartheta )+
\frac { \mathbb E _{x_2 , x_3 \in S _{2,3}} \norm f . \Box ^1 S_1 .^2 }
{ \mathbb E _{x_2 , x_3 \in S _{2,3}} \norm V . \Box ^1 S_1 . ^2 }\Biggr] ^{1/2}
\,.
\end{align*}
And so the proof of \eqref{e.1Box} will follow from the inequality
\begin{equation}\label{e.1Box1}
\begin{split}
\operatorname L(V \,:\, \Omega _{1\not\to0} ) &\cdot
\operatorname L(V\,:\, \Omega ^{1})
\le 2
\operatorname L _{\Omega } (V\,:\, \Omega ) ^2 \,.
\end{split}
\end{equation}
This is the Conservation of Densities Proposition, Proposition~\ref{p.con}.
\medskip
We turn to the proof of the second part, namely \eqref{e.2Box}.
The initial stage of the argument follows the lines of the argument above. Namely, we use the estimate
\eqref{e.U11} and \eqref{e.U1=}. The term $ W_1$ is expanded as in \eqref{e.use}, with the same notation
that we have in \eqref{e.OO4}. But, under the assumptions on $ \Omega $ that hold in this case,
$ \Omega _{\{1\}\to\{0,\lambda-1\}} $ need not consist of just two maps $ \omega $.
We apply the Cauchy-Schwartz inequality to $ W_1$. To do this, we make these definitions, recalling that
$ \Omega ^{1}$ is defined in \eqref{e.OO4}.
\begin{gather*}
\Omega _{2\not\to 0} ^{1} = \{\omega \in \Omega ^{1}\,:\, \omega (2)\neq 0\}\,,
\quad
\Omega _{2\to 0} ^{1} = \{\omega \in \Omega ^{1}\,:\, \omega (2)= 0\}\,,
\\
\mathbf X'' = \{x_1 ^{\ell } \,:\, 0 \le \ell \le \lambda \} \cup \{x_2 ^{\ell } \,:\, 1\le \ell \le \lambda -1 \}
\cup \{x_3 ^{\ell } \,:\, 0\le \ell \le \lambda -1 \}\,.
\end{gather*}
Here, the point is that the only variable omitted from $ \mathbf X''$ is $ x_2 ^{0}$. Then, we can
estimate
\begin{gather}\label{e.oU1<}
W_1 \le \bigl[ \operatorname L (V\,:\, \Omega _{2\not\to 0} ^{1}) \cdot W_2
\bigr] ^{1/2}
\\ \label{e.oU2def}
W_2 = \mathbb E _{x _{j} ^{\ell } \in \mathbf X '' }
\prod _{\omega \in \Omega ^{1} _{2\not\to 0}} V (x _{1,2,3} ^{\omega })
\ABs{\mathbb E _{x_2 ^{0}\in S_2} \prod _{\omega \in \Omega ^{1} _{2\to 0}} f _{\omega } (x _{1,2,3} ^{\omega }) } ^2 \,.
\end{gather}
To expand the square in the definition of $ W_2$, we set
\begin{gather} \notag
\widetilde \omega (j) =
\begin{cases}
\omega (j) & j\neq 2
\\
\lambda & j =2
\end{cases}
\\ \label{e.OOO4}
\begin{split}
&\Omega ^1 _{2\to \lambda } = \{\widetilde \omega \,:\, \omega \in \Omega ^1_{2\to0}\}\,,
\qquad
\Omega ^2= \Omega^1 _{2\not\to0} \cup \Omega^1 _{2\to 0} \cup \Omega ^1_{2\to \lambda }\,,
\\
&\Omega_{\{1,2\}\to\{0,\lambda-1\}} = \bigl\{\omega \in \Omega ^{2} \,:\, \omega (1), \omega (2) \in \{0,\lambda\}\bigr\}\,.
\end{split}
\end{gather}
Observe that $ \Omega_{\{1,2\}\to\{0,\lambda-1\}} = \{\omega _0 \,,\, \overline \omega _{0} \,,\, \widetilde \omega _0 \,,\,
\overline {\widetilde \omega }_0
\}$.
Then, we can write
\begin{equation}\label{e.oU2=}
W_2=
\mathbb E _{x_j ^{\ell } \in \mathbf Y ''} \prod _{\omega \in \Omega_{\{1,2\}\to\{0,\lambda-1\}}} f (x _{1,2,3} ^{\omega } )
\times
Z [\Omega_{\{1,2\}\to\{0,\lambda-1\}} \;:\; \Omega ^2 - \Omega_{\{1,2\}\to\{0,\lambda-1\}}] \,.
\end{equation}
where $ \mathbf Y''= \{x_1^0,x_1^\lambda ,x_2^0,x_2^\lambda ,x_3^{0}\}$, and $ Z [\Omega_{\{1,2\}\to\{0,\lambda-1\}} \;:\; \Omega ^2 -
\Omega_{\{1,2\}\to\{0,\lambda-1\}}]$ is defined in \eqref{e.Zdef}. (We assumed that $ \omega _0 \equiv 0$.)
Using Lemma~\ref{l.Zvar}, and the assumption of $ (2\lambda, \vartheta ,4)$-uniformity on $ V$, we can estimate
\begin{align*}
\mathbb E _{x _{j} ^{\ell }\in \mathbf Y''}
\bigl( Z [\Omega_{\{1,2\}\to\{0,\lambda-1\}} \;:\; \Omega ^2 - \Omega_{\{1,2\}\to\{0,\lambda-1\}}]
&
\,:\, \prod _{\omega \in \Omega_{\{1,2\}\to\{0,\lambda-1\}}} V (x _{1,2,3} ^{\omega } )
\bigr)
\\&
\stackrel u =
\frac { \operatorname L (V\,:\, \Omega ^2 )} {\operatorname L (V\,:\,\Omega_{\{1,2\}\to\{0,\lambda-1\}} )}
\end{align*}
and the conditional variance of $ Z [\Omega_{\{1,2\}\to\{0,\lambda-1\}} \;:\; \Omega ^2 - \Omega_{\{1,2\}\to\{0,\lambda-1\}}]$ is
very small. Thus, we can estimate
\begin{equation}\label{e.W2=}
W_2 \le 2
\operatorname L (V\,:\, \Omega ^2 ) \times
\Bigl[ O( \sqrt \vartheta )+
\frac { \mathbb E _{x_3^0\in S_3} \norm f . \Box ^{1,2} S _{1,2}. ^{4}}
{ \mathbb E _{x_3^0\in S_3} \norm V . \Box ^{1,2} S _{1,2}. ^{4}} \Bigr]\,.
\end{equation}
Combining \eqref{e.U11}, \eqref{e.U1=}, \eqref{e.oU1<}, \eqref{e.oU2def}, and \eqref{e.W2=}, we see that
\begin{align*}
\Abs{\operatorname L (f_\omega \,:\, \Omega )}
& \le 2
\operatorname L(V \,:\, \Omega _{1\not\to0} ) ^{1/2}
\cdot
\operatorname L (V\,:\, \Omega _{2\not\to 0} ^{1}) ^{1/4} \cdot
\operatorname L (V\,:\, \Omega ^2 ) ^{1/4}
\\
& \qquad \times
\Biggl[ O( \sqrt \vartheta )+
\frac { \mathbb E _{x_3^0\in S_3} \norm f . \Box ^{1,2} S _{1,2}. ^{4}}
{ \mathbb E _{x_3^0\in S_3} \norm V . \Box ^{1,2} S _{1,2}. ^{4}}
\Biggr] ^{1/4} \,.
\end{align*}
The last step in the proof of \eqref{e.2Box} is to verify that
\begin{equation}\label{e.LL}
\operatorname L(V \,:\, \Omega _{1\not\to0} ) ^{1/2}
\cdot
\operatorname L (V\,:\, \Omega _{2\not\to 0} ^{1}) ^{1/4} \cdot
\operatorname L (V\,:\, \Omega ^2 ) ^{1/4}
\le 2
\operatorname L (V\,:\, \Omega )\,.
\end{equation}
This is again the Conservation of Densities Proposition, Proposition~\ref{p.con}.
\medskip
We turn to the third point of the Lemma, namely the inequality \eqref{e.3Box}.
We can use earlier parts of the argument.
Let us combine \eqref{e.U11}, \eqref{e.use}, \eqref{e.oU1<}, and \eqref{e.oU2def}. We have
\begin{equation}\label{e.oU3}
\abs{\operatorname L (f _{\omega } \,:\, \omega \in \Omega ) }
\le
2
\operatorname L (V\,:\, \Omega _{1\not\to0}) ^{1/2}
\cdot
\operatorname L (V\,:\, \Omega _{2\not\to0} ^{1}) ^{1/4}
\cdot
W_2 ^{1/4} \,,
\end{equation}
where $ W_2$ is defined in \eqref{e.oU2=}.
The strategy is to repeat an application of the Cauchy-Schwartz inequality in all variables except $ x_3 ^{0}$.
To do this, we define
\begin{gather*}
\Omega _{3\not\to 0} ^{2} = \{\omega \in \Omega ^{2}\,:\, \omega (3)\neq 0\}\,,
\quad
\Omega _{3\to 0} ^{2} = \{\omega \in \Omega ^{2}\,:\, \omega (3)= 0\}\,,
\\
\mathbf X''' = \{x_j ^{\ell } \,:\, j=1,2\,,\ 0 \le \ell \le \lambda \}
\cup \{x_3 ^{\ell } \,:\, 1\le \ell \le \lambda -1 \}\,.
\end{gather*}
Here, the point is that the only variable omitted from $ \mathbf X'''$ is $ x_3 ^{0}$. Then, we can
estimate
\begin{gather}\label{e.oU3<}
W_2 \le \bigl[ \operatorname L (V\,:\, \Omega _{3\not\to 0} ^{2}) \cdot W_3
\bigr] ^{1/2}
\\ \label{e.oU3def}
W_3 = \mathbb E _{x _{j} ^{\ell } \in \mathbf X ''' }
\prod _{\omega \in \Omega ^{2} _{3\not\to 0}} V (x _{1,2,3} ^{\omega })
\ABs{\mathbb E _{x_3 ^{0}\in S_3} \prod _{\omega \in \Omega _{3\to 0} ^2} f _{\omega } (x _{1,2,3} ^{\omega }) } ^2 \,.
\end{gather}
In the product over $ \Omega _{3\to0} ^2$, it is important to observe that if $ f _{\omega }=f$, it must follow that
$ (\omega (1), \omega (2))\in \{0, \lambda \} ^{1,2}$. For if this is not the case, an earlier step would have
switched $ f _{\omega }$ to $ V$.
To expand the square, we define
\begin{gather*} \notag
\underline \omega (j) =
\begin{cases}
\omega (j) & j\neq 3
\\
\lambda & j =3
\end{cases}
\\
\begin{split}
&\Omega _{3\to \lambda } = \{\underline \omega \,:\, \omega \in \Omega ^2, \ \omega (3)=0\}\,,
\qquad
\Omega ^3= \Omega ^2 \cup \Omega _{3\to \lambda }\,,
\\
&\Omega _{ \{1,2,3\} \to \{0, \lambda \}} = \{0, \lambda \} ^{\{1,2,3\}} \,.
\end{split}
\end{gather*}
Then, we can write
\begin{align*}
W_3 & =
\mathbb E _{ x^0 _{1,2,3}, x ^\lambda _{1,2,3} \in S _{1,2,3}}
\prod _{\omega \in \Omega _{ \{1,2,3\} \to \{0, \lambda \}} }
f (x ^{\omega } _{1,2,3}) \times
Z[ \Omega _{ \{1,2,3\} \to \{0, \lambda \}} \;:\; \Omega ^3 - \Omega _{ \{1,2,3\} \to \{0, \lambda \}} ] \,.
\end{align*}
Now, the term $ Z$ is nearly constant, by Lemma~\ref{l.Zvar}, and we have
\begin{align*}
\mathbb E \Bigl( Z[ \Omega _{ \{1,2,3\} \to \{0, \lambda \}} \;:\; \Omega ^3 - \Omega _{ \{1,2,3\} \to \{0, \lambda \}} ]
& \,:\, \prod _{\omega \in \Omega _{ \{1,2,3\} \to \{0, \lambda \}} } V \Bigr)
\stackrel u = \frac { \operatorname L ( V \,:\, \Omega ^{3} )} {\operatorname L (V\,:\, \Omega _{ \{1,2,3\} \to \{0, \lambda \}} )}
\end{align*}
Therefore, we can estimate
\begin{equation}\label{e.W3}
W_3 = \Bigl[ O (\sqrt \vartheta ) +
\frac {\norm f . \Box ^{1,2,3} S _{1,2,3}. ^{8}} {\norm V . \Box ^{1,2,3} S _{1,2,3}. ^{8}}
\Bigr] \times { \operatorname L ( V \,:\, \Omega ^{3} )} \,.
\end{equation}
Combine \eqref{e.oU3}, \eqref{e.oU3<}, \eqref{e.oU3def}, and \eqref{e.W3} to conclude that
\begin{equation}\label{e.W33}
\begin{split}
\abs{\operatorname L (f _{\omega } \,:\, \omega \in \Omega ) }
&\le
2
\operatorname L (V\,:\, \Omega _{1\not\to0}) ^{1/2}
\cdot
\operatorname L (V\,:\, \Omega _{2\not\to0} ^{1}) ^{1/4}
\cdot
\operatorname L (V\,:\, \Omega _{3\not\to 0} ^{2}) ^{1/8}
\\ & \qquad \times
{ \operatorname L ( V \,:\, \Omega ^{3} )} ^{1/8}
\cdot
\Biggl[ O (\sqrt \vartheta ) +
\frac {\norm f . \Box ^{1,2,3} S _{1,2,3}. ^{8}} {\norm V . \Box ^{1,2,3} S _{1,2,3}. ^{8}}
\Biggr] ^{1/8}\,.
\end{split}
\end{equation}
Therefore, it remains for us to check that
\begin{equation}\label{e.33}
\operatorname L (V\,:\, \Omega _{1\not\to0}) ^{1/2}
\cdot
\operatorname L (V\,:\, \Omega _{2\not\to0} ^{1}) ^{1/4}
\cdot
\operatorname L (V\,:\, \Omega _{3\not\to 0} ^{2}) ^{1/8}
\cdot
{ \operatorname L ( V \,:\, \Omega ^{3} )} ^{1/8}
\le 2 \operatorname L (V \,:\, \Omega )\,.
\end{equation}
This again follows from Proposition~\ref{p.con}.
\end{proof}
\section{Linear Forms for the Analysis of Corners} \label{s.formsCorners}
In this section, we reprise the initial portion of the previous section, though
our needs are not quite as significant.
For the uses of this discussion, let us make the definition
\begin{equation}\label{e.tildeTell}
\widetilde T _{\ell } = \prod _{\substack{1\le j<k\le 4\\ j,k\neq \ell }} R _{j,k} \,.
\end{equation}
This is the same definition as for $ T _{\ell }$, but the set $ S _{\ell }$ is missing.
For $ \Omega \subset \Omega _{4\to \lambda }$, where $ \lambda \le 3 $, and choices of functions
$ F _{\omega } \in \{ T _{\ell } \,,\, \widetilde T _{\ell } \,:\, 1\le \ell \le 4 \}$, we have the
linear form
\begin{equation*}
\Lambda (F _{\omega } \,:\, \Omega )=
\mathbb E _{\substack{x _{1,2,3,4} ^{\ell } \in S _{1,2,3,4}\\ 0\le \ell \le 3 }}
\prod _{\omega \in \Omega } F _{\omega } (x _{1,2,3,4} ^{\omega })\,.
\end{equation*}
Here, any $ S_j$ that occurs in this expectation is composed with $ \lambda _j$.
Our first Lemma states that we can easily estimate the values of these forms.
\begin{lemma}\label{l.==L} For $ \Omega $ and choices of $ F _{\omega }$ as above we have
\begin{gather*}
\Lambda (F _{\omega } \,:\, \Omega ) \stackrel u =
\prod _{\ell =1} ^{4} \delta _{\ell } ^{\Phi (\ell )} \cdot \prod _{1\le j<k\le 4} \delta _{j,k} ^{\Psi (j,k)}
\\
\Phi (\ell ) = \lvert\{\omega \,:\, F_\omega = T _{\ell } \}\rvert\,,
\qquad
\Psi (j,k) = \lvert \{ \omega \vert _{j,k} \,:\, \omega \in \Omega \} \rvert\,.
\end{gather*}
In the last display we are counting the number of distinct maps there are when $ \omega $ is restricted to
the sets $ \{j,k\}$.
\end{lemma}
\begin{proof}
We have
\begin{equation*}
\prod _{\omega \in \Omega } F _{\omega } (x _{1,2,3,4} ^{\omega })
=
\prod _{\ell =1} ^{4}
\prod _{\omega \in \phi (\ell )} S _{\ell } \circ \lambda (x _{1,2,3,4} ^{\omega })
\times
\prod _{1\le j<k\le 4}
\prod _{\omega \in \psi (j,k )} S _{j,k} \circ \lambda (x _{j,k} ^{\omega })
\end{equation*}
where $ \phi (\ell ) = \{\omega \,:\, F _{\omega }= T _{\ell }\}$, and $ \psi (j,k)=
\{ \omega\vert _{j,k} \,:\, \omega \in \Omega \}$.
The Lemma then follows from the assumptions of admissibility, namely \eqref{e.Ad1Box} and \eqref{e.Ad2Box},
with application of \eqref{e.gcsi1}.
\end{proof}
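The exponents $ \Phi (\ell )$ and $ \Psi (j,k)$ are again purely combinatorial. The short sketch below is illustrative only; the particular $ \Omega $ and the assignment of the $ F _{\omega }$ are arbitrary choices made for the example.
\begin{verbatim}
import itertools

def corner_exponents(Omega, F):
    # Omega: maps omega on {1,2,3,4}, given as 4-tuples.  F[omega] is ('T', l) or
    # ('tildeT', l); only the 'T' occurrences count toward Phi(l).
    Phi = {l: sum(1 for om in Omega if F[om] == ('T', l)) for l in range(1, 5)}
    Psi = {(j, k): len({(om[j - 1], om[k - 1]) for om in Omega})
           for j, k in itertools.combinations(range(1, 5), 2)}
    return Phi, Psi

Omega = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0)]
F = {Omega[0]: ('T', 1), Omega[1]: ('T', 2), Omega[2]: ('tildeT', 3), Omega[3]: ('T', 4)}
print(corner_exponents(Omega, F))
\end{verbatim}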
We need an analog of the Conservation of Densities Lemma, Proposition~\ref{p.con}.
Let $ \Omega \subset \Omega _{4 \to 3}$, and assume that the set $ \Omega _{1\to0}$ defined below is not empty.
\begin{equation*}
\Omega _{1\to0} = \{\omega \in \Omega \,:\, \omega (1)=0\,,\, F _\omega \not= \widetilde T_1\}\,,
\qquad
\Omega _{1\not\to0} = \Omega-\Omega _{1\to0}\,.
\end{equation*}
Here, we exclude $ \widetilde T_1$, as its expectation does not include any $ \delta _1$.
Consider the estimate below obtained by applying the Cauchy-Schwartz inequality in all variables except $ x_1 ^0$.
\begin{align} \label{e.LFcon1}
\Lambda (F_\omega\,:\, \Omega ) &\le \bigl[ \Lambda (F_\omega \,:\, \Omega _{1\not\to0} ) \cdot U_2 \bigr] ^{1/2}
\\ \label{e.LFcon2}
U_2 & = \mathbb E \prod _{\omega \in \Omega _{1\not\to0}} F _{\omega }({x _{1,2,3,4} ^{\omega }})
\cdot \Abs{\mathbb E _{x_1 ^{0} \in S_1} \prod _{\omega \in \Omega _{1\to0}}
F _{\omega } ({x _{1,2,3,4} ^{\omega }}) } ^2 \,.
\end{align}
Use \eqref{e.elem} to write the last term as $ U_2 = \Lambda (F_\omega \,:\, \Omega ^{1})$, where
we define
\begin{align} \label{e.LFcon3}
\overline \omega (j) =
\begin{cases}
4 & j=1
\\
\omega (j) & j=2,3,4
\end{cases}
\\ \label{e.LFcon4}
\Omega ^{1} = \Omega _{1\not\to0} \cup \{\omega , \overline \omega \,:\, \omega \in \Omega _{1\to0}\}\,.
\end{align}
And we define $ F _{\overline \omega }= F _{\omega }$.
\begin{conserve2} \label{p.con2} If
$ \Omega \subset \Omega _{4\to 3}$, then
with the notation in \eqref{e.LFcon1}---\eqref{e.LFcon4} we have the equality
\begin{equation}\label{e.LFcon5}
\Lambda (F_\omega\,:\, \Omega ) \stackrel u = \Lambda (F_\omega\,:\, \Omega _{1\not\to0}) ^{1/2} \cdot
\Lambda (F_\omega\,:\, \Omega ^1 ) ^{1/2} \,.
\end{equation}
\end{conserve2}
\begin{proof}
Each $ \omega \in \Omega $ contributes $ 1$ to the density $ \delta _{\ell }$,
for $ 2\le \ell \le 4$, on the left-hand side of \eqref{e.LFcon5}.
If $ \omega \in \Omega _{1\not\to0}$, it contributes a $ 1/2$ to this same density
in each of the two terms on the right-hand side. Let $ \omega \in \Omega _{1\to0}$. Then, it
contributes a $ 1$ to the density $ \delta _1$ on the left-hand side, while on the right-hand side
there is no contribution from the first term, and the second term contributes a $ 2 \cdot 1/2=1$, since
there is a new variable $ x_1^4$.
If one considers a density $ \delta _{j,k}$ where $ 2\le j<k\le 4$, it is accounted for much as the case of
$ \delta _2$ above. And a density $ \delta _{1,j}$, with $ j=2,3,4$, is accounted for as is $ \delta _1$ above.
\end{proof}
This Conservation of Densities has an essentially equivalent
formulation, also important to us, that we give here. With the notation of \eqref{e.LFcon1}---\eqref{e.LFcon4},
set
\begin{equation} \label{e.LFZdef}
Z [ \Omega _{1\not\to0} \;:\; \Omega _{1\to0} ] = \mathbb E _{x_1 ^{0} \in S_1}
\prod _{\omega \in \Omega _{1\to0}} F_\omega (x _{1,2,3,4} ^{\omega })
\end{equation}
\begin{lemma}\label{l.LFZvar}
For all choices of $ \Omega \subset \Omega _{4\to 3 }$ as above, we have
\begin{gather}\label{e.LFZvar}
\begin{split}
\operatorname {Var} _{x_j ^{\ell }\in \Omega } \Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}]
&\,:\, \prod _{\omega \in \Omega _{1\not\to0}} F _{\omega } (x _{1,2,3,4} ^{\omega })\Bigr)
\\&
\le K\sqrt{\vartheta } \cdot
\Bigl[ \mathbb E
\Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}] \,:\,
\prod _{\omega \in \Omega _{1\not\to0}} F _{\omega } (x _{1,2,3,4} ^{\omega })\Bigr) \Bigr] ^2 \,.
\end{split}
\end{gather}
Here, $ K$ is an absolute constant.
\end{lemma}
Of course the conditional expectation of $ Z$ can be computed.
\begin{proof}
We use the standard formula for the variance of a random variable $ W$ supported on a set $
Y$ given in \eqref{e.elemVar}.
The conditional variance will be small if we have
\begin{equation*}
\mathbb E
\Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}] ^2 \,:\,
\prod _{\omega \in \Omega _{1\not\to0}} F_\omega (x _{1,2,3,4} ^{\omega })\Bigr)
\stackrel u =
\mathbb E
\Bigl(Z[\Omega _{1\not\to0} \;:\; \Omega _{1\to0}] \,:\,
\prod _{\omega \in \Omega _{1\not\to0}} F_\omega (x _{1,2,3,4} ^{\omega })\Bigr) ^2 \,.
\end{equation*}
But this is a recasting of \eqref{e.LFcon5}.
\end{proof}
There is a variant of the inequality \eqref{e.3Box} which holds. Let us formulate it.
\begin{lemma}\label{l.LF3box} Let $ \Omega \subset \Omega _{4\to 3} $, and let $ F _{\omega } \in \{T _{1} ,
T_2,T_3,T_4\}$. Let $ f _{\omega }$ be a choice of function satisfying $ \lvert f _\omega \rvert\le F _{\omega } $.
Suppose, for the sake of simplicity, that for some $ \omega _0\in \Omega $ we have
$ F _{\omega _0}= T_1$. Then, we have the inequality
\begin{equation}\label{e.LF3Box}
\abs{\Lambda (f _{\omega } \,:\, \Omega )}
\le 2 \abs{\Lambda (F _{\omega } \,:\, \Omega )}
\times \biggl\{ \upsilon + \frac { \norm f _{\omega _0} . \Box ^{2,3,4} H _{2,3,4}. ^{8} }
{\norm T_1 . \Box ^{2,3,4} H _{2,3,4}. ^{8}}
\biggr\} ^{1/8}
\end{equation}
\end{lemma}
In view of the fact that we have the Second Conservation of Densities Proposition, Proposition~\ref{p.con2},
and the variance principle Lemma~\ref{l.LFZvar}, the proof of this inequality is just an iteration of the
proof of \eqref{e.3Box} above, as well as the proof of Lemma~\ref{l.Q} below. Accordingly we omit it.
\section{Proof of the von Neumann Lemma}
\label{s.von}
This is a careful application of a weighted Gowers-Cauchy-Schwartz inequality,
which does not seem to follow from any standard inequality in the literature.
The primary difference with the weighted inequalities of
the work of Green and Tao,
\cites{math.NT/0404188,math.NT/0606088}
is the absence of the von Mangoldt function with its uniformity properties, a difference
overcome by the enforced uniformity, an argument invented by Shkredov \cite{MR2266965}.
In our setting, the sets $ X_ a $ will most frequently be $ H$, the copy of the finite
field. The set $ U$ will for the most part be $ \{1,2,3,4\}$, though there are
larger sets $ U$, with as many as $ 24$ elements, that occur in the analysis of different terms below.
We introduce the following 4-linear form. For four functions
$ f _j \;:\; H \times H \times H \to \mathbb C $, for $ 1\le j\le 4$, define
\begin{equation}\label{e.Q}
\begin{split}
{\operatorname Q}(f_1,f_2,f_3,f_4) \stackrel{\mathrm{def}}{{}={}}
\mathbb E_{\substack{y,x_j \in H\\ 1 \le j \le 3 }} &f_4(x_1,x_2,x_3)f_3(x_1,x_2,x_3 + y)
\\& \qquad \times f_2(x_1, x_2 + y, x_3) f_1(x_1 + y, x_2, x_3)
\end{split}
\end{equation}
If $ A\subset H \times H \times H$, it follows that $ {\operatorname Q}(A,A,A,A)$ is the expected number
of corners in $ A$. It is an important remark that this is defined as an average over copies of $ H$, whereas
earlier sections have been defined over e.\thinspace g.\thinspace $ S _{1,2,3,4}$. This fact introduces
extra factors of $ \delta _{\ell }$ below.
We are deliberately choosing a definition that is slightly asymmetric with respect to
the subscripts on the $ f_j$ on the right above, to make the next display more symmetric.
Using the change of variables $ y=x_4-(x_1+x_2+x_3)$, this is
\begin{align*}
{\operatorname Q}(f_1,f_2,f_3,f_4)
& = \mathbb E_{\substack{x_j \in H\\ 1 \le j \le 4 }} \prod _{j=1} ^{4}
f _{j} \circ \lambda _{j}\,,
\\
\lambda _{j} (x_1,x_2,x_3,x_4) &=
\sum _{k \;:\; k\neq j} x_k \operatorname e_k \,, \qquad 1\le j \le 4 \,.
\end{align*}
The point which dominates the analysis below is that the function $ f _{j} \circ \lambda _{j}$ is
a function of $ \{x _{\ell }\,:\, 1\le \ell \neq j \le 4\}$, i.\thinspace e., is not a function of $ x_j$.
We will write, by small abuse of notation, $ \lambda _{1} (x ^{\omega } _{1,2,3,4})= x ^{\omega }
_{2,3,4}$. This is allowed, as $ \lambda _1 (x ^{\omega } _{1,2,3,4})$ is not a function of $ x _1 ^{\omega (1)}$.
This will allow us to reduce the complexity of some formulas below.
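To keep the normalization in view, the following sketch is a brute-force illustration over a small cyclic group standing in for $ H$; the modulus $ 7$ and the density $ 0.4$ are arbitrary choices. It evaluates \eqref{e.Q} for indicators of a set $ A$, and verifies the change of variables $ y=x_4-(x_1+x_2+x_3)$ by writing each factor so that it omits exactly one of the four coordinates, which is the property emphasized above.
\begin{verbatim}
import itertools, random

def Q(f1, f2, f3, f4, n):
    # The 4-linear form (e.Q) over H = Z_n, averaged over x_1, x_2, x_3, y.
    total = 0.0
    for x1, x2, x3, y in itertools.product(range(n), repeat=4):
        total += (f4(x1, x2, x3) * f3(x1, x2, (x3 + y) % n)
                  * f2(x1, (x2 + y) % n, x3) * f1((x1 + y) % n, x2, x3))
    return total / n ** 4

def Q_substituted(f1, f2, f3, f4, n):
    # The same average after y = x_4 - (x_1 + x_2 + x_3); each factor omits one x_j.
    total = 0.0
    for x1, x2, x3, x4 in itertools.product(range(n), repeat=4):
        total += (f4(x1, x2, x3)                        # no x_4
                  * f3(x1, x2, (x4 - x1 - x2) % n)      # no x_3
                  * f2(x1, (x4 - x1 - x3) % n, x3)      # no x_2
                  * f1((x4 - x2 - x3) % n, x2, x3))     # no x_1
    return total / n ** 4

n = 7
A = {p for p in itertools.product(range(n), repeat=3) if random.random() < 0.4}
ind = lambda x1, x2, x3: 1.0 if (x1, x2, x3) in A else 0.0
print(Q(ind, ind, ind, ind, n), Q_substituted(ind, ind, ind, ind, n))   # equal values
\end{verbatim}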
We codify the outcome of applying the proof of the Gowers-Cauchy-Schwartz Inequality
to the operator $ \operatorname Q$ in the following Lemma. This technical result records
what we need to understand about the sets $ T$ and $ A$ in order to conclude Lemma~\ref{l.3dvon}.
In this Lemma, we single out for a distinguished role the function that falls in the last place of $ \operatorname Q$,
but there is a corresponding estimate for each of the other three functions.
\begin{lemma}\label{l.Q} Let $ \overline T_j$ either be identically $ T$, or $ \overline T_j=T_j$ for all
$ 1\le j \le 4$. Let $ f_j \;:\; \overline T_j \longrightarrow [-1,1] $ be functions. We have the following
estimate.
\begin{gather}\label{e.Q1}
\Abs{ \operatorname Q (f_1,f_2,f_3,f_4)}
\le\operatorname U_1 ^{1/2} \cdot\operatorname U _2 ^{1/4} \cdot\operatorname U _3 ^{1/8} \cdot\operatorname U _4 ^{1/8}\,,
\\ \label{e.Q2}
\operatorname U_1=\operatorname U_1 (\overline T_1)=\mathbb E _{x_2,x_3,x_4\in H} \overline T_1 (x_2,x_3,x_4)
\\ \label{e.Q3}
\operatorname U_2 =\operatorname U_2 (\overline T_2)=
\mathbb E _{\substack{x_3^0,x_4^0\in H\\ x_1^0, x^1_1\in H}}
\prod _{\omega \in \{0,1\}^{ \{1\}} \times \{0\} ^{ \{3,4\}}}
\overline T _{2} ( x
_{\{1,3,4\} } ^{\omega })\,,
\\ \label{e.Q4}
\operatorname U_3 =\operatorname U_3 (\overline T_3) =
\mathbb E _{\substack{x_4^0\in H\\ x _{ \{1,2\} }^0,x _{ \{1,2\} }^1 \in H_{\{1,2\}} }}
\prod _{\omega \in \{0,1\} ^{ \{1,2\}} \times \{0\} ^{ \{4\}}}
\overline T _3 ( x_{\{1,2,4\}} ^{\omega })\,,
\\ \label{e.Q5}
\operatorname U_4 = \operatorname U _4 (f_4, \overline T_1, \overline T_2, \overline T_3)
= \mathbb E _{\substack{ x_{\{1,2,3\}}^0,x_{\{1,2,3\}}^1\in H_{\{1,2,3\}}}} \;
\operatorname Z \cdot
\prod _{\omega \in \{0,1\} ^{ \{1,2,3\}} \times \{0\} ^{ \{4\}} }
f _{4} ( x_{\{1,2,3\}} ^{\omega })
\\ \label{e.QZ}
\operatorname Z= \operatorname Z(\overline T_1,\overline T_2,\overline T_3) =
\mathbb E _{x_4 ^{0} \in H}
\prod _{\omega \in\{0,1\}^{\{1,2,3\}}
\times\{0\}^{\{4\}}} \prod _{j=1} ^{3}
\overline T_j \circ \lambda _j (x _{1,2,3,4} ^{\omega })
\end{gather}
\end{lemma}
This Lemma makes it clear that we need to understand the linear forms $ \operatorname U_1,\operatorname U_2,\operatorname U_3$,
and $ \operatorname Z$ for
both the $ T_j$ and for $ T$.
\begin{remark}\label{r.Z}
The presence of the term $ Z$ in \eqref{e.U32=} can be seen in the argument of
\cite{MR2289954}, but it is not needed in Shkredov's approach \cite{MR2266965}.
However, this term is much more subtle in the three-dimensional case. Similar terms will arise in
\S~\ref{s.Tbox}, and are dealt with systematically in Lemma~\ref{l.Zvar}.
\end{remark}
\begin{proof}
The method of proof is to follow the proof of the Gowers-Cauchy-Schwartz inequality,
especially in the case of \eqref{e.gcsi2}, but keeping track of the
additional information that follows from terms that are neglected in the usual proofs of this inequality.
All earlier applications of the Gowers-Cauchy-Schwartz inequality have
in some sense `lost units of density.' In the present argument, we recover these
lost units by the mechanism of the various functions of $ T$ that appear in the
definitions of $ U _{1}$, $ U _{2}$ and $ U _{3}$ above.
Estimate the left-hand side of \eqref{e.Q1} by
\begin{gather}\label{e..U1}
\lvert \operatorname Q (f _{ 1}, f _{ 2} , f _{ 3}, f _{4})
\rvert
\le \bigl[ U _{1} \cdot U _{1,2} \bigr] ^{1/2} \,
\\ \label{e..U11}
U _{1}=
\mathbb E _{x_2,x_3,x_4\in H}
\lvert f _{1} \circ \lambda _1\rvert ^2 \le \mathbb E _{x_2,x_3,x_4\in H} \overline T_1(x_2,x_3,x_4)\,,
\\ \label{e..U12}
U _{1,2} = \mathbb E _{x_2,x_3,x_4\in H} \overline T_1 (x _{ \{2,3,4\}})
\ABs{\mathbb E _{x_1} \prod _{j=2} ^{4} f _{j} \circ \lambda _{j} (x
_{\{1,2,3,4\}}) } ^2
\end{gather}
We use the Cauchy-Schwartz inequality in the variables $ x_2,x_3,x_4$. The term $ U_1$, bounded in \eqref{e..U11}, proves \eqref{e.Q2}.
In the last line, we are using the notation of the general Gowers-Cauchy-Schwartz Inequalities,
so that $ x_ { \{1,2,3,4\}}= (x_1,x_2,x_3,x_4)$. This will be helpful
in the steps below.
For $ U _{1,2}$, we use the elementary fact that
\begin{equation}\label{e.elem}
\mathbb E _{x\in X} g (x)\Abs{\mathbb E _{y \in Y} f (x,y)} ^2
= \mathbb E _{\substack{x\in X\\ y^0, y^1\in Y }} g (x) \prod _{\epsilon = 0} ^{1}
f (x, y ^{\epsilon })\,.
\end{equation}
This is in fact crucial to the proof of the Gowers-Cauchy-Schwartz inequality.
In particular, it is essential that we insert the $ \overline T_1 (x _{ \{2,3,4\}}) $
on the right in \eqref{e..U12}. Thus,
\begin{equation}\label{e..U12=}
U _{1,2}
=
\mathbb E _{\substack{x_2^0,x_3^0,x_4^0\in H\\ x_1^0, x^1_1\in H}}
\overline T_1 (x _{ \{2,3,4\}})
\prod _{\omega \in \{0,1\}^{ \{1\}} \times \{0\} ^{ \{2,3,4\}}}
\prod _{j=2} ^{4} f _{j} \circ \lambda _{j}( x
_{\{1,2,3,4\} } ^{\omega })\,.
\end{equation}
We refer to this identity as `passing $ x_1$ through the square.'
With this notation, it is clear that the variables $ x_2$ and $ x_3$ will also
need to `pass through the square'.
Thus, we estimate as below, using the Cauchy-Schwartz inequality in the
variables $ x _1 ^{0}, x_1 ^{1}, x_3^0$, and $ x_4 ^{0}$.
\begin{gather}\label{e.U2}
U _{1,2}\le \bigl[ U _{2} \cdot U _{2,2} \bigr] ^{1/2}
\\ \label{e.U21}
U _{2} \le
\mathbb E _{\substack{x_3^0,x_4^0\in H\\ x_1^0, x^1_1\in H}}
\prod _{\omega \in \{0,1\}^{ \{1\}} \times \{0\} ^{ \{3,4\}}}
\overline T _{2} \circ \lambda _{2} ( x
_{\{1,2,3,4\} } ^{\omega })
\\ \label{e.U22}
\begin{split}
U _{2,2}&=
\mathbb E _{\substack{x_3^0,x_4^0\in H\\ x_1^0, x^1_1\in H}}
\prod _{\omega \in \{0,1\}^{ \{1\}} \times \{0\} ^{ \{3,4\}}}
\overline T_2 ( x _{ \{1,3,4\}} ^{\omega } )
\\ & \qquad \times
\ABs{ \mathbb E _{x_2\in H}
\overline T_1(x _{ \{2,3,4\}})
\prod _{\omega \in \{0,1\}^{ \{1\}} \times \{0\} ^{ \{2,3,4\}}}
\prod _{j=3} ^{4} f _{j} \circ \lambda _{j}( x
_{\{1,2,3,4\} } ^{\omega })
} ^2
\end{split}
\end{gather}
The term in \eqref{e.U21} is \eqref{e.Q3}.
For the term \eqref{e.U22}, we write
\begin{equation}\label{e.U22=}
\begin{split}
U _{2,2}=
\mathbb E _{\substack{x_3^0,x_4^0\in H\\ x _{ \{1,2\} }^0,x _{ \{1,2\} }^1 \in H_{\{1,2\}} }}
\prod _{\omega \in \{0,1\} ^{ \{1,2\}} \times \{0\} ^{ \{3,4\}}} &
\Biggl[\overline T_2 (x _{ \{1,3,4\}} ^{\omega }) \overline T_1 (x _{ \{2,3,4\}} ^{\omega })
\\ & \qquad \times
\prod _{j=3} ^{4} f _{j} \circ \lambda _{j}( x
_{\{1,2,3,4\}}^{\omega }) \Biggr]
\end{split}
\end{equation}
We estimate using the Cauchy-Schwartz inequality in the variables
$ x _{1,2} ^0, x _{1,2} ^{1}$ and $ x_4 ^0$.
\begin{gather}\label{e.U3}
U _{2,2}\le \bigl[ U _{3} \cdot U _{3,2} \bigr] ^{1/2} \,,
\\ \label{e.U31}
\begin{split}
U _{3} &=
\mathbb E _{\substack{x_4^0\in H\\ x _{ \{1,2\} }^0,x _{ \{1,2\} }^1 \in H_{\{1,2\}} }}
\prod _{\omega \in \{0,1\} ^{ \{1,2\}} \times \{0\} ^{ \{4\}}}
\overline T _3 ( x_{\{1,2,4\}} ^{\omega })
\end{split}
\\
\begin{split}
U _{3,2}=
\mathbb E _{\substack{ x _{ \{1,2\} }^0,x _{ \{1,2\} }^1 \in H_{\{1,2\}} \\ x_4\in H}}
& \ABs{ \mathbb E _{x_3}
\prod _{\omega \in \{0,1\} ^{ \{1,2\}} \times \{0\} ^{ \{3\}}} \Bigl[
\overline T_2( x _{ \{1,3,4\}} ^{\omega } )\overline T_1 (x _{ \{2,3,4\}} ^{\omega })
\\ & \quad \times
\overline T_3 ( x_{\{1,2,4\}} ^{\omega })
f _{4} \circ \lambda _{4}( x_{\{1,2,3\}} ^{\omega }) \Bigr]
} ^2
\end{split}
\end{gather}
The term $ U _{3}$ is \eqref{e.Q4}.
We write $ U _{3,2}$ as follows, after application of \eqref{e.elem}, and
recalling the definition of $ Z$ in \eqref{e.QZ}.
\begin{gather}\label{e.U32=}
U _{3,2} =
\mathbb E _{\substack{ x_{\{1,2,3\}}^0,x_{\{1,2,3\}}^1\in H_{\{1,2,3\}}}} \;
Z \cdot
\prod _{\omega \in \{0,1\} ^{ \{1,2,3\}} \times \{0\} ^{ \{4\}} }
f _{4} \circ \lambda _{4}( x_{\{1,2,3,4\}} ^{\omega })
\end{gather}
This completes the proof.
\end{proof}
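
The identity \eqref{e.elem}, `passing a variable through the square,' drives each step of the proof just given. The following sketch, again purely illustrative and not part of the argument, verifies the identity numerically on randomly generated data.
\begin{verbatim}
# Numerical check of the identity
#   E_x g(x) |E_y f(x,y)|^2 = E_{x, y0, y1} g(x) f(x,y0) f(x,y1),
# which is how a variable is `passed through the square'.
import random

X, Y = range(6), range(7)
g = {x: random.random() for x in X}
f = {(x, y): random.uniform(-1, 1) for x in X for y in Y}

lhs = sum(g[x] * (sum(f[(x, y)] for y in Y) / len(Y)) ** 2 for x in X) / len(X)
rhs = sum(g[x] * f[(x, y0)] * f[(x, y1)]
          for x in X for y0 in Y for y1 in Y) / (len(X) * len(Y) ** 2)
assert abs(lhs - rhs) < 1e-9
\end{verbatim}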
We now provide the estimates that the previous Lemma calls for, in the case of the sets $ T_j$.
\begin{lemma}\label{l.QTj} For the terms $ U_1, U_2, U_3$ and $ Z$ as defined in \eqref{e.Q2}---\eqref{e.Q4}
and \eqref{e.QZ}, and $ \overline T_j=T_j$ we have these estimates.
\begin{gather}\label{e.QTj}
\operatorname Q (T_1,T_2,T_3,T_4) \stackrel u =
\operatorname U _1 (T_1) ^{1/2} \cdot \operatorname U_2 (T_2) ^{1/4}
\operatorname U _3 (T_3) ^{1/8} \cdot \operatorname U_4 (T_4, T_3,T_2, T_1) ^{1/8} \,.
\end{gather}
The constant $ \vartheta $ in the definition of $ \stackrel u =$, see Definition~\ref{d.U},
can be taken to be $ \vartheta = \mathbb P (T\,:\, H \times H \times H) ^{C}$,
where $ C$ is a large constant, depending only on $ C _{\textup{admiss}}$ in Definition~\ref{d.admissible}.
And for $ \operatorname Z (T_1,T_2,T_3)$, we have this inequality on conditional variance.
\begin{gather}
\label{e.ZTjvar}
\operatorname {Var}
\Bigl( \operatorname Z(T_1,T_2,T_3) \,:\,
\prod _{\omega \in \{0,1\} ^{ \{1,2,3\}} \times \{0\} ^{ \{4\}} }
T _{4} ( x_{\{1,2,3\}} ^{\omega })
\Bigr) \le \vartheta \mathbb P (A\,:\, H \times H \times H) ^{C} \,.
\end{gather}
\end{lemma}
\begin{proof}
The first claim \eqref{e.QTj} follows from (an iteration of) the Second Proposition on Conservation
of Densities, Proposition~\ref{p.con2}. The second from Lemma~\ref{l.LFZvar}.
\end{proof}
The content of the next Lemma is that, in the case where $ A\subset T$ has full relative probability,
the set $ A$ has the expected number of corners.
\begin{lemma}\label{l.QTTTT} Let $ \mathcal A$ be an admissible corner system. Then, we have
\begin{equation}
\label{e.QTTTT}
\operatorname Q (T,T,T,T) \stackrel u =
\prod _{\ell =1} ^{4} \delta _{T\,:\, \ell } \times
\operatorname Q (T_1,T_2,T_3,T_4) \,.
\end{equation}
Here, the constant $ \vartheta $ implicit in the $ \stackrel u =$ can be taken to be
$ \vartheta = \kappa ' \epsilon $, where these two constants are determined by
$ \kappa _{\textup{admiss}}$ and $ \epsilon _{\textup{admiss}}$ in Definition~\ref{d.admissible},
and can be made arbitrarily small.
\end{lemma}
\begin{proof}
One considers the expression in \eqref{e.QTTTT} as a $ 4$-linear form, and expands
$ T$ in the $ j$th argument as $ T=f _{j,1}+ f _{j,0}$, where $ f _{j,1}= \delta _{T\,:\, j} T_j$. This
leads to an expansion of $ \operatorname Q (T,T,T,T)$ into $ 2 ^{4}$ terms, of which
the leading term is
\begin{align*}
\operatorname Q (f _{1,1},f _{2,1},f _{3,1},f _{4,1})
&= \prod _{j=1} ^{4} \delta _{T\,:\, j}
\cdot \operatorname Q (T_1,T_2,T_3,T_4) \,.
\end{align*}
The remaining $ 2 ^{4}-1$ terms all have at least one $ f _{j,0}$. We can show that
each of these terms is at most a small constant times the expression above
by appealing to \eqref{e.Ad3Box} and
\eqref{e.gcsi2}. In particular, we show that we can estimate
\begin{align} \label{e.Qv}
\Abs{\operatorname Q (f _{1,\epsilon (1)},f _{2,\epsilon(2)},f _{3,\epsilon(3)},f _{4,0})}
& \le 2
\operatorname Q (T_1, T_2, T_3, T_4) \cdot
\Biggl[ \upsilon + \frac { \norm f _{4,0} . \Box ^{1,2,3} S _{1,2,3}. ^{8}}
{ \norm T_4 . \Box ^ {1,2,3} S _{1,2,3}. ^{8}} \Biggr] ^{1/8}\,.
\end{align}
By \eqref{e.Ad3Box}, this proves that this term is very small. This inequality singles out the fourth
coordinate for a special role, but the argument, presented in full in this case, holds in full generality,
and so completes the proof.
Apply Lemma~\ref{l.Q}, with $ \overline T_j=T_j$ and $ f _{j}= f _{j,\epsilon (j)}$ as above.
The estimate we get from this Lemma is \eqref{e.Q1}, with the terms in \eqref{e.Q2}---\eqref{e.QZ}
estimated in Lemma~\ref{l.QTj}. The particular point to observe is that the function $ Z$ has
a small conditional variance \eqref{e.ZTjvar}.
These conditional estimates hold on the support of the product that occurs in \eqref{e.Q5}. Hence,
we can estimate
\begin{equation}\label{e.Qvv}
\begin{split}
\Abs{\operatorname Q (f _{1,\epsilon (1)},f _{2,\epsilon(2)},f _{3,\epsilon(3)},f _{4,0})}
& \le
\operatorname U_1 (T_1) ^{1/2} \cdot
\operatorname U_2 (T_2) ^{1/4}\cdot
\operatorname U_3 (T_3) ^{1/8} \cdot
\operatorname U_4 (T_1,T_2,T_3,f_{4,0}) ^{1/8}
\\
&=\operatorname U_1 (T_1) ^{1/2} \cdot
\operatorname U_2 (T_2) ^{1/4}\cdot
\operatorname U_3 (T_3) ^{1/8} \cdot
\\ \notag
& \qquad \times
\mathbb E \Biggl(\operatorname Z (T_1,T_2,T_3) \,:\,
\prod _{\omega \in \{0,1\} ^{ \{1,2,3\}} \times \{0\} ^{ \{4\}} }
T _{4} ( x_{\{1,2,3\}} ^{\omega })
\Biggr) ^{1/8}
\\ \notag
& \qquad \times { \norm T_4 . \Box ^ {1,2,3} H _{1,2,3}. } \cdot
\Biggl[ \upsilon + \frac { \norm f _{4,0} . \Box ^{1,2,3} H _{1,2,3}. } { \norm T_4 . \Box ^ {1,2,3} H _{1,2,3}. } \Biggr]
\end{split}
\end{equation}
In the last line, $ \upsilon $ is a small quantity arising from the conditional variance estimate
\eqref{e.Zvar}.
The key identity is \eqref{e.QTj}. In it, observe that
\begin{align*}
\operatorname U_4 (T_4, T_3,T_2, T_1)
&\stackrel u = \norm T_4 . \Box ^{1,2,3} H _{1,2,3} . ^{8}
\cdot \mathbb E \Biggl(\operatorname Z (T_1,T_2,T_3) \,:\,
\prod _{\omega \in \{0,1\} ^{ \{1,2,3\}} \times \{0\} ^{ \{4\}} }
T _{4} ( x_{\{1,2,3\}} ^{\omega }) \Biggr)\,.
\end{align*}
Therefore, we have
\begin{align*}
\operatorname Q (T_1,T_2,T_3,T_4) &\stackrel u =
\operatorname U_1 (T_1) ^{1/2} \cdot
\operatorname U_2 (T_2) ^{1/4}\cdot
\operatorname U_3 (T_3) ^{1/8}
\\ & \quad \times
\mathbb E \Biggl(\operatorname Z (T_1,T_2,T_3) \,:\,
\prod _{\omega \in \{0,1\} ^{ \{1,2,3\}} \times \{0\} ^{ \{4\}} }
T _{4} ( x_{\{1,2,3\}} ^{\omega }) \Biggr)^{1/8}
\\& \quad
\times \bigl\{ \upsilon + \norm T_4 . \Box ^ {1,2,3} H _{1,2,3}. \bigr\}
\end{align*}
This completes the proof of \eqref{e.Qv}, and hence of the Lemma.
\end{proof}
To apply Lemma~\ref{l.Q} to prove Lemma~\ref{l.3dvon}, we will need estimates for the terms in
\eqref{e.Q2}---\eqref{e.Q5}. We turn to this next, discussing first the estimates for the
terms $ \operatorname U_j$. The estimates for $ \operatorname Z(T,T,T)$ as defined in \eqref{e.QZ}
are discussed in Lemma~\ref{l.Z} below.
\begin{lemma}\label{l.QT} We have the estimates below for the forms $ \operatorname U_j$ defined in
\eqref{e.Q2}---\eqref{e.Q5}.
\begin{align}
\label{e.QT}
\operatorname U_1 (T) &\stackrel u = \delta _{T\,:\, 1}
\operatorname U _1 (T_1)\,,
\\
\label{e.QTT}
\operatorname U_2 (T)
&\stackrel u = \delta _{T\,:\, 2} ^2
\operatorname U _2 (T_2)\,,
\\ \label{e.QTTT}
\operatorname U_3 (T)
&\stackrel u = \delta _{T \,:\, 3} ^{4}
\operatorname U _3 (T_3)\,,
\\ \label{e.Tbox}
\norm T . \Box \{1,2,3\}. ^{8}
&\stackrel u =
\delta _{T\,:\, 4} ^{8} \cdot
\norm T_4 . \Box \{1,2,3\}. ^{8}
\end{align}
The implied constant $ \vartheta $ in the definition of $ \stackrel u =$ can be taken to be
$ \mathbb P (T\,:\, H \times H \times H)$ to some large power.
\end{lemma}
\begin{proof}
The equality \eqref{e.Tbox} is a corollary to part 2 of Lemma~\ref{l.Uuniform}, and Definition~\ref{d.U}.
The other parts of the Lemma are also corollaries to the same fact, not as stated, but with the
role of $ T_4$ in Definition~\ref{d.U} replaced by that of $ T_2$ for \eqref{e.QTT}, and $ T_3$ for \eqref{e.QTTT}.
\end{proof}
We turn to the analysis of the term $ \operatorname Z (T,T,T)$ as defined in \eqref{e.QZ}.
\begin{lemma}\label{l.Z}
We have the estimates below where $ Z= \operatorname Z (T,T,T)$.
\begin{gather}\label{e..Zmean}
\mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}}
(Z \,:\, U )
\stackrel u = \prod _{j=1} ^{3} \delta _{T\,:\, j} ^{4}
\times \mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}}
(Z (T_1,T_2,T_3) \,:\, U )
\,,
\\
\label{e..Zvar}
\operatorname {Var} _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}}
(Z \,:\, U)
\le \delta _{A\,:\, T} ^{12}\,,
\\ \label{e.Udef}
\textup{where } \quad
U=\prod _{\omega \in\{0,1\}^{\{1,2,3\}} }\prod _{1\le j <k\le 3} R_{j,k} (x ^{\omega
} _{j,k})\,.
\end{gather}
The implied constant in $ \stackrel u =$ can be taken as in Lemma~\ref{l.QTTTT}.
\end{lemma}
Here, note that we are using the conditional expectation notation. As the random
variable $ Z$ is supported on the event $ U\subset H _{ \{1,2,3\}} ^{0} \times H _{ \{1,2,3\}} ^{1} $,
we have
\begin{gather}\label{e.condExp1}
\mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}}
(Z \,:\, U)
=
\frac {\mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}} Z}
{ \mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}} U}
\\ \label{e.condVar1}
\operatorname {Var} (Z\,:\, U)
=
\frac {
\mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}} Z ^2 -
\Bigl( \mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}} Z \Bigr) ^2
\Bigl(\mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}} U \Bigr) ^{-1} }
{ \mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}} U}
\end{gather}
And the point of the Lemma is that the random variable $ Z$ is nearly constant on
the set $ U$, and we can compute that constant.
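
As a sanity check on the conditional expectation and variance formulas \eqref{e.condExp1} and \eqref{e.condVar1}, the following sketch (illustrative only, with a randomly generated event standing in for $ U$ and a variable supported on it standing in for $ Z$) compares them against a direct computation on the conditioned sample.
\begin{verbatim}
# For a random variable Z supported on an event U,
#   E(Z : U)   = E[Z] / P(U),
#   Var(Z : U) = ( E[Z^2] - (E[Z])^2 / P(U) ) / P(U).
import random

n = 1000
U = [random.random() < 0.3 for _ in range(n)]        # the event
Z = [random.random() if u else 0.0 for u in U]       # Z supported on U

EU = sum(U) / n
EZ = sum(Z) / n
EZ2 = sum(z * z for z in Z) / n

cond_mean = EZ / EU
cond_var = (EZ2 - EZ ** 2 / EU) / EU

sample = [z for z, u in zip(Z, U) if u]
direct_mean = sum(sample) / len(sample)
direct_var = sum((z - direct_mean) ** 2 for z in sample) / len(sample)

assert abs(cond_mean - direct_mean) < 1e-9
assert abs(cond_var - direct_var) < 1e-9
\end{verbatim}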
\begin{proof}
We first calculate the denominator in \eqref{e.condExp1} and \eqref{e.condVar1}.
This is relatively simple as the sets $ R_{j,k}$ are uniform in $ S_j \times S_k$,
so that we can estimate
\begin{equation}\label{e.U=}
\mathbb E _{x ^0 _{ \{1,2,3\}} , x ^1 _{ \{1,2,3\}} \in H _{ \{1,2,3\}}} U
\stackrel u = \prod _{j=1} ^{3} \delta _{j} ^2 \prod _{1\le j<k\le 3} \delta _{j,k} ^{4} \,.
\end{equation}
We now turn to the numerator in \eqref{e.condExp1}.
The expectation of $ Z$ in \eqref{e.condExp1} is thought of as a $ 12$-linear form.
Set
\begin{equation*}
\Omega _{\neq j} = \{0,1\}^{\{ 1\le k \neq j \le 3\}} \times \{0\} ^{4}\,, \qquad 1\le j\le 3\,.
\end{equation*}
Set $ \Omega _{\neq }= \bigcup _{j=1} ^{3} \Omega _{\neq j}$.
For functions $ \{f _{\omega }\,:\, \omega \in \Omega _{\neq } \}$
define
\begin{equation*}
\operatorname L (f _{\omega } \,:\, \Omega _{\neq} )= \mathbb E _{ \substack{x _{1,2,3} \in H _{1,2,3}\\ x_4\in H }}
\prod _{\omega \in \Omega_{\neq}} f _{\omega } \,.
\end{equation*}
We are to prove the estimate
\begin{equation}\label{e.L1}
\operatorname L ( T \,:\, \Omega _{\neq}) \stackrel u =
\prod _{j=1} ^{3} \delta _{T\,:\, j} ^{4} \cdot
\operatorname L ( T _j \,:\, \Omega _{\neq j}\,,\, 1\le j\le 3)\,.
\end{equation}
Expand $ T \circ \lambda _j=f _{j,1}+f _{j,0}$, where $ f _{j,1}= \delta _{T\,:\, j} T_j$.
The leading term is then when $ f _{j,1}$ occurs in all twelve positions.
But, then we have the Second Conservation of Densities Proposition at our disposal, so that
\eqref{e.L1} follows from Proposition~\ref{p.con2}.
The ratio of \eqref{e.L1} and \eqref{e.U=} proves \eqref{e..Zmean}, provided the other
terms arising from the expansion of the $ 12$-linear form are all sufficiently small.
That is, we should see that
for all $ 2 ^{12}-1$ selections of $ f _{j,\epsilon (\omega )} \in \{ f _{j,0}\,,\, f _{j,1}
\}$ for $ \omega \in \Omega _{\neq j}$, $ 1\le j \le 3$, with at least one $ f _{j,\epsilon (\omega )}= f _{j,0}$
we have
\begin{equation}\label{e.L10}
\Abs{ \operatorname L ( f _{j,\epsilon (\omega )} \,:\, \Omega _{\neq} )}
\le \kappa \operatorname L (T\,:\, \Omega _{\neq })\,,
\end{equation}
for a suitably small constant $ \kappa $.
If we use the same line of reasoning that we have before, this
would lead to a (yet) longer multi-linear form. We therefore present the following variant
of the argument used thus far. We prove \eqref{e.L10} under the following assumptions.
For some $ \omega \in \Omega _{\neq 1}$, we have $ f _{1,\epsilon (\omega )}= f _{1,0}= T - \delta _{T\,:\, 1} T_1$.
Moreover, this happens for $ \omega \equiv 0$, which we can assume after a change of variables.
Finally, let $ J _{\textup{small}} = \{ j=2,3 \,:\, \delta _{T\,:\, j} < \delta _{T\,:\, 1}\}$. We
assume that $ f _{j, \epsilon (\omega )}= \delta _{T\,:\, j} T_j$ for all $ j\in J _{\textup{small}} $.
This can also be assumed, after a permutation of the coordinates. We now prove the inequality
\begin{equation}\label{e.L100}
\Abs{ \operatorname L ( f _{j,\epsilon (\omega )} \,:\, \Omega _{\neq} )}
\le \prod _{j\in J _{\textup{small}}} \delta _{T\,:\, j} ^{4} \cdot
\operatorname L ( T _j \,:\, \Omega _{\neq j}\,,\, 1\le j\le 3)
\cdot
\Biggl[ \upsilon + \frac { \norm f _{1,0} . \Box \{2,3,4\}. ^{8} } { \norm T_1 . \Box \{2,3,4\}. ^{8} } \Biggr]
^{1/8}\,.
\end{equation}
Here, $ \upsilon $ will be a very small positive constant. Our assumption \eqref{e.Ad3Box}, together with
the assumption about $ J _{\textup{small}}$ permits us to conclude \eqref{e.L10} from this inequality.
In particular, we can accumulate a large number of powers of $ \delta _{T\,:\, 1}$ from \eqref{e.Ad3Box}.
The essential point, is that we accumulate the correct power on the densities $ \delta _{T\,:\, j}$ for
$ j\in J _{\textup{small}}$, as there is no \emph{a priori} reason that the different densities $ \delta _{T\,:\, j}$
need be comparable.
But, \eqref{e.L100} follows from application of the inequality \eqref{e.LF3Box}, and so our proof of the Lemma
is complete.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{l.3dvon}.]
Write
$ A=f_{0}+f _{1}$ where $ f _{1}= \delta _{A\,:\, T} T$.
We expand
\begin{equation} \label{e.;q15}
\operatorname Q (A,A,A,A)=
\sum _{\epsilon \in M_4}
\operatorname Q (f _{ \epsilon (1)}, f _{ \epsilon (2)}, f _{ \epsilon (3)}, f _{
\epsilon (4)})\,.
\end{equation}
The leading term is for the function $ \epsilon\equiv 1 $.
It is $ \delta _{A\,:\, T} ^{4} \operatorname Q (T,T,T,T)$, with the
latter expression estimated in \eqref{e.QTTTT}.
All other choices of $ \epsilon $ have at least one choice of $ 1\le j\le 4$ for
which we have $ \epsilon (j)=0$. We claim that for all of these we have the estimate
\begin{equation}\label{e.;q0}
\lvert
\operatorname Q (f _{ \epsilon (1)}, f _{ \epsilon (2)}, f _{ \epsilon (3)}, f _{
\epsilon (4)})\rvert
\le \kappa \delta _{A\,:\, T} ^{4} \operatorname Q (T,T,T,T) \,.
\end{equation}
This depends upon the assumption \eqref{e.UniformEnough}.
For $ \kappa < 2 ^{-32} $, this will show that
$ \operatorname Q (A,A,A,A) \ge \tfrac 14 \delta _{A\,:\, T} ^{4} \operatorname Q (T,T,T,T) $.
From this, we conclude that the number of corners in $ A$ is at least
\begin{equation*}
\operatorname Q (A,A,A,A) \lvert H\rvert ^{4}- \lvert A\rvert\ge
\tfrac 14 \delta _{A\,:\, T} ^{4} \operatorname Q (T,T,T,T) \lvert H\rvert ^{4} -
\lvert A\rvert >0
\end{equation*}
Here, we subtract off $ \lvert A\rvert $, as the average $ \operatorname Q (A,A,A,A)$
includes the `trivial corners' where all four points in the corner are the same.
The inequality holds by \eqref{e.BigEnough}, and this completes the proof.
\medskip
We prove \eqref{e.;q0} for $ \epsilon (4)=0$, with the other cases following by symmetry.
Apply Lemma~\ref{l.Q}, with $ \overline T_j=T$, and $ f_4=f _{0}$. This gives us
the inequality
\begin{equation*}
\lvert
\operatorname Q (f _{ \epsilon (1)}, f _{ \epsilon (2)}, f _{ \epsilon (3)}, f _{0})\rvert
\le \operatorname U_1 (T) ^{1/2}
\cdot\operatorname U _2 (T) ^{1/4} \cdot\operatorname U _3 (T)^{1/8} \cdot\operatorname U _4 (f_0,T,T,T)^{1/8}\,.
\end{equation*}
The terms $ \operatorname U_j (T)$ for $ j=1,2,3$ are estimated in Lemma~\ref{l.QT}.
The definition of $ \operatorname U_4 (f_0, T,T,T)$ in \eqref{e.Q5} depends upon $ \operatorname Z$,
which has its properties listed in Lemma~\ref{l.Z}. This leads us to the estimate
\begin{align*}
\lvert \operatorname Q (f _{ \epsilon (1)}, f _{ \epsilon (2)}, f _{ \epsilon (3)}, f _{0})\rvert
& \le
\operatorname U_1 (T) ^{1/2}
\cdot\operatorname U _2 (T) ^{1/4} \cdot\operatorname U _3 (T)^{1/8} \cdot
\mathbb E (Z\,:\, U) ^{1/8}
\\ & \qquad \times \norm T. \Box \{1,2,3\}. \cdot
\Biggl[ \upsilon + \frac {\norm f_0. \Box \{1,2,3\}.} {\norm T. \Box \{1,2,3\}. } \Biggr]
\\& \le
\prod _{\ell =1} ^{4} \delta _{T\,:\, \ell } \times
\operatorname U_1 (T_1) ^{1/2}
\cdot\operatorname U _2 (T_2) ^{1/4} \cdot\operatorname U _3 (T_3)^{1/8}
\\& \quad \qquad \times
\mathbb E (Z (T_1,T_2,T_3)\,:\, U) ^{1/8}
\\ & \qquad \times \norm T_4. \Box \{1,2,3\}. \cdot
\Biggl[ \upsilon + \frac {\norm f_0. \Box \{1,2,3\}.} {\norm T. \Box \{1,2,3\}. } \Biggr]
\\
& \le \operatorname Q (T,T,T,T)
\Biggl[ \upsilon + \frac {\norm f_0. \Box \{1,2,3\}.} {\norm T. \Box \{1,2,3\}. } \Biggr]\,.
\end{align*}
Our proof is complete.
\end{proof}
\section{The Paley-Zygmund Inequality for the Box Norm and the set $ T$} \label{s.Tbox}
Let us recall the following classical result.
\begin{paleyZygmund} \label{p.pz} There is a $ 0<c<1$ so that for all random variables
$ -1<Z<1$ with $ \mathbb E Z=0$ we have
$
\mathbb P ( Z> c \mathbb E Z ^2 )\ge c \mathbb E Z ^2 \,.
$
\end{paleyZygmund}
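
The following sketch is purely illustrative: it checks the Paley-Zygmund inequality stated above on a simple mean-zero random variable by Monte Carlo, with the constant $ c=1/4$ chosen only as a plausible value; the proposition asserts merely that some $ 0<c<1$ works.
\begin{verbatim}
# Paley-Zygmund check: for |Z| <= 1 with E Z = 0,
#   P( Z > c * E[Z^2] ) >= c * E[Z^2]   for a small absolute constant c.
import random

c = 0.25
a = 0.6        # Z = +a or -a, each with probability 1/2, so E Z = 0
samples = [a if random.random() < 0.5 else -a for _ in range(20000)]

second_moment = sum(z * z for z in samples) / len(samples)   # about a^2
tail = sum(1 for z in samples if z > c * second_moment) / len(samples)

print(tail, c * second_moment)   # tail is about 1/2, well above c*a^2
assert tail >= c * second_moment
\end{verbatim}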
Our central purpose in this section is to provide extensions of this result
to the case where the assumption on the standard deviation of the random variable is
replaced by an assumption on the Box Norm. Extensions are provided into
two different settings, an `unweighted' and a `weighted' one.
Indeed, in the unweighted case, we will only require the two dimensional version of this inequality.
\begin{BoxPaleyZygmund} \label{l.BPZ} There is a constant $ c (2)$,
and $ t (2)>1$ so that the following holds.
For all finite sets $ X _{t}$, $ 1\le t\le 2$, and subsets
$ A \subset X_ {\{1 ,2\}}$, set $ \delta = \mathbb P (A)$ and
$
\sigma = \norm A - \mathbb P (A) . \Box ^{ \{1 ,2 \}} X_ {\{1 ,2 \}} .
$.
There are subsets
\begin{gather} \label{e.bpz1}
X' _{ i} \subset X_i\,, \qquad i= 1,2\,,
\\ \label{e.bpz3}
\mathbb P \bigl( X'_i \bigr)
\ge c (2)(\sigma \delta) ^{t(2)}
\,,
\\ \label{e.bpz4}
\mathbb P ( A \,:\, X' _{1,2})
\ge \delta + c (2) (\delta \sigma ) ^{t(2)}\,.
\end{gather}
\end{BoxPaleyZygmund}
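
To make the quantities $ \delta $ and $ \sigma $ in this Lemma concrete, the following sketch (illustrative only; the sets are random placeholders) computes the two-dimensional box norm of the balanced function $ A-\mathbb P (A)$ by brute force.
\begin{verbatim}
# Two-dimensional box norm of the balanced function f = 1_A - P(A):
#   ||f||_box^4 = E_{x0,x1 in X1; y0,y1 in X2}
#                   f(x0,y0) f(x0,y1) f(x1,y0) f(x1,y1).
import itertools, random

X1, X2 = range(8), range(8)
A = {(x, y) for x in X1 for y in X2 if random.random() < 0.4}
delta = len(A) / (len(X1) * len(X2))
f = lambda x, y: (1.0 if (x, y) in A else 0.0) - delta

box4 = sum(f(x0, y0) * f(x0, y1) * f(x1, y0) * f(x1, y1)
           for x0, x1 in itertools.product(X1, repeat=2)
           for y0, y1 in itertools.product(X2, repeat=2))
box4 /= (len(X1) ** 2 * len(X2) ** 2)
sigma = max(box4, 0.0) ** 0.25    # box norm; max() guards rounding error
print(delta, sigma)
\end{verbatim}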
We refer the reader to \cite{MR2187732}*{Proposition 5.7} or \cite{MR2289954}*{Lemma 3.4} for a proof of this Lemma.
We need a more general version of the
Paley-Zygmund Inequality for the Box Norm,
one based upon the properties of the sets $ A\subset T\subset T_j$. We need two
Lemmas, with very similar proofs; accordingly we state only one Lemma. Our Lemmas
should be coordinate-free, but to ease the burden of notation, we state them
singling out the coordinate $ x_4$ for a special role.
\begin{lemma}\label{l.Tbox} There are constants $ c>0$ and $ C, p>1$ so that the following
holds. Suppose that $ \mathcal T$ is a $ T$-system as in
\eqref{e.Tsystem}, which satisfies \eqref{e.Ad1Box} and \eqref{e.Ad2Box}.
Let $ U\subset V\subset T_4$. Assume that $ V\in \{T_4,T\}$, that
\begin{equation}\label{e.Tbox0}
\frac{\norm U- \mathbb P (U\,:\, V) V . \Box ^{ \{1,2,3\}} S_{ \{1,2,3\}}. }
{ \norm V . \Box ^{ \{1,2,3\}} S_{ \{1,2,3\}}. }
\ge \tau
\end{equation}
and that $ V$ is $ (4,\vartheta , 4)$-uniform (recall Definition~\ref{d.U}), where
\begin{equation}\label{e.zvoDef}
\vartheta = ( \tau \mathbb P (U\,:\, V )) ^{C}\,.
\end{equation}
Then, there is a $ T$-system
\begin{equation}\label{e.T'system}
\mathcal T'=\{H\,,\, S'_k\,,\, R ' _{k, \ell } \,,\, T' \,:\, 1\le k, \ell \le 4\,,\ k< \ell \}
\end{equation}
and a set $ V'\subset T'_4$, which satisfy
\begin{gather} \label{e.V'}
\begin{cases}
V'=T'_4 & V=T_4
\\
V'\subset V & V=T
\end{cases}
\\
\label{e.T'Uniform}
\begin{cases}
\mathbb P (T'_4 \,:\, T_4) \ge (\tau \mathbb P (U\,:\, T_4)) ^{p} & V=T_4
\\
\mathbb P (T'\,:\, T)\ge (\tau \mathbb P (U\,:\, T)) ^{p} & V=T
\end{cases}
\\
\label{e.Tbox6}
\mathbb P (U \,:\, T'\cap V) \ge \mathbb P (U\,:\, V)+ c(\tau \cdot \mathbb P (U\,:\, V) ) ^{p}\,.
\end{gather}
\end{lemma}
The point of these estimates is that we have a little information about the new data, in \eqref{e.V'}.
There are some lower bounds on the probabilities
of the elements of the new $ T$-system given by the estimate \eqref{e.T'Uniform}.
And in \eqref{e.Tbox6}, we have that
$ U$ has a slightly larger probability in $ T'\cap V$. Note that we certainly do not
assume that the new $ T$-system $ \mathcal T'$ satisfies the uniformity assumptions
in the definition of admissibility, Definition~\ref{d.admissible}.
\begin{proof}[Proof of Lemma~\ref{l.dinc}.]
To prove Lemma~\ref{l.dinc}, apply Lemma~\ref{l.Tbox} with $ V=T$, $ U=A$, and
$ \tau = \kappa \delta _{A\,:\, T} ^{4}$, where $ \kappa $ is as in \eqref{e.UniformEnough}.
The conclusions of Lemma~\ref{l.Tbox} then imply those of Lemma~\ref{l.dinc}.
\end{proof}
\subsection{One-Dimensional Obstructions}
We carry out the proof of Lemma~\ref{l.Tbox}. Throughout, we use the expansion
$ U= f_1+f_0$, where $ f_1= \delta _{U\,:\, V} V$ and $ \delta _{U\,:\, V}= \mathbb P (U\,:\,
V)$. We will also use the notation $ \delta _{V\,:\, 4}= \mathbb P (V\,:\, T_4)$.
The key assumption \eqref{e.Tbox0}
could hold due to lower-dimensional obstructions, and so there are two initial stages in
which we address these obstructions.
We begin by considering
the possibility that \eqref{e.Tbox0} holds for some one-dimensional reason. Namely,
let us assume that, for instance, we have
\begin{equation}\label{e.Tbox-oneD}
\begin{split}
\mathbb E _{x _{2,3} \in S_{2,3} } \Abs{ \mathbb E _{x_1\in S_1} f _0 (x_1,x_2,x_3)} ^2
& \ge [c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} ] ^2
\mathbb E _{x _{2,3} \in S_{2,3} } \Abs{ \mathbb E _{x_1\in S_1} V(x_1,x_2,x_3)} ^2
\\
& \ge \tfrac 12 [ c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} ] ^2 \cdot
\delta _4 ^2 \cdot \delta _{V\,:\, 4} ^2 \cdot \delta _{1,2} ^2 \cdot \delta _{1,3} ^2 \cdot \delta _{2,3}\,.
\end{split}
\end{equation}
Note that the last expectation is estimated by virtue of our assumption on
$ (4,\vartheta,4)$-uniformity, recall \eqref{e.Uuniform}.
Here, $ c_1>0$ and $ t_1>1$ are constants that we will specify below,
based upon considerations in the next two stages of our argument.
Let us rephrase \eqref{e.Tbox-oneD} as
\begin{equation} \label{e.Tbox-one'}
\mathbb E _{x _{2,3} \in R_{2,3} } \Abs{ \mathbb E _{x_1\in S_1} f _0 (x_1,x_2,x_3)} ^2
\ge \tfrac 12 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} \cdot
\delta _4 ^2 \cdot \delta _{V\,:\, 4} ^2 \cdot \delta _{1,2} ^2 \cdot \delta _{1,3} ^2
\end{equation}
where we have replaced the expectation over $ S _{2,3}= S_2 \times S_3$ by expectation over the
smaller set $ R _{2,3}$. Of course, we have
$ \abs{\mathbb E _{x_1\in S_1} f _0 (x_1,x_2,x_3)}\le
\mathbb E _{x_1\in S_1} V (x_1,x_2,x_3) $. But this last
random variable is nearly constant over $ R _{2,3}$; its variance is very small. Namely,
\begin{equation} \label{e.Tbox-oneVar}
\operatorname {Var} _{ x _{2,3}\in R _{2,3}} \Bigl( \mathbb E _{x_1\in S_1} V (x_1,x_2,x_3)\Bigr)
\le K \tau ^{C}
\Bigl[ \mathbb E _{\substack{x_1\in S_1\\ x _{2,3}\in R _{2,3}}} V (x_1,x_2,x_3)\Bigr] ^2\,.
\end{equation}
This is a corollary to Lemma~\ref{l.Zvar}.
We are in a situation where we can apply the Paley-Zygmund inequality, Proposition~\ref{p.pz}.
Note that the random variable $ \mathbb E _{x_1\in S_1} f _0 (x_1,x_2,x_3)$
is dominated in absolute value by $ \mathbb E _{x_1\in S_1} V (x_1,x_2,x_3)$, which
has average value (on $ R_{2,3}$) given by
\begin{equation} \label{e.4mean}
\mathbb E _{\substack{x_1\in S_1\\ x _{2,3} \in R _{2,3} }} V (x_1,x_2,x_3)
\stackrel u = \delta _{V\,:\, 4} \cdot \delta _{1,2} \cdot \delta _{1,3} \cdot \delta _{4} \,.
\end{equation}
This follows from assumption and \eqref{e.Uuniform}.
Moreover, by \eqref{e.Tbox-oneVar},
the random variable $ \mathbb E _{x_1\in S_1} V (x_1,x_2,x_3)$ has very small variance on $ R _{2,3}$,
so that except for a negligible probability, it is dominated by, say, twice its expectation.
The key point here is that in applying the Paley-Zygmund inequality, we can use the normalized
variance given by the ratio of \eqref{e.Tbox-one'} to \eqref{e.4mean}:
\begin{align*}
\frac
{\mathbb E _{x _{2,3} \in R_{2,3} } \Abs{ \mathbb E _{x_1\in S_1} f _0 (x_1,x_2,x_3)} ^2}
{ [\mathbb E _{\substack{x_1\in S_1\\ x _{2,3} \in R _{2,3} }} V (x_1,x_2,x_3) ] ^2 }
& \ge
\frac
{\tfrac 12 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1}
\delta _4 ^2 \delta _{V\,:\, 4} ^2 \delta _{1,2} ^2 \delta _{1,3} ^2}
{\delta_{ V\,:\, 4} ^2 \cdot \delta _4 ^2 \cdot \delta _{1,2} ^2 \cdot \delta _{1,3} ^2 }
\\
& = \tfrac 12 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} \,.
\end{align*}
Thus, we can estimate
\begin{gather} \notag
R' _{2,3} = \Bigl\{ x _{2,3} \in R_{2,3} \,:\, \mathbb E _{x_1\in S_1} f _0 (x_1,x_2,x_3)
\ge \tfrac {c_1} {20}
(\delta _{U\,:\, V} \tau ) ^{t_1}
\mathbb E _{\substack{x_1\in S_1\\ x _{2,3} \in R _{2,3} }} V (x_1,x_2,x_3)
\Bigr\}\,,
\\ \label{e.new23}
\mathbb P (R' _{2,3} \,:\, R_{2,3})
\ge
\tfrac 1{10} c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} \,.
\end{gather}
We conclude the proof of the Lemma in this case by taking the set $R' _{2,3} $ in \eqref{e.T'system} as above,
$ T'= T\cap \overline R _{2,3}' $,
and the other data unchanged. If $ V=T_4$, the new set is $ V' = V \cdot \overline R _{2,3}'$, so
that \eqref{e.V'} holds.
That \eqref{e.T'Uniform} holds follows from \eqref{e.new23}, and several
applications of \eqref{e.gcsi2}.
And that \eqref{e.Tbox6} holds follows from the construction of $ R' _{2,3}$.
\subsection{Two-Dimensional Obstructions}
We continue the proof assuming that \eqref{e.Tbox-oneD} fails as written, and also fails
under any permutation of the variables $ x_1,x_2$, and $ x_3$. The
potential lower-dimensional obstructions are now two-dimensional in nature. We could have for instance
\begin{equation}\label{e.Tbox-twoD}
\mathbb E _{x_1 \in S_1 }
\norm f_0 . \Box ^{2,3} S _{2,3}. ^{4}
\ge
c_2 (\delta _{U\,:\, V} \tau ) ^{t_2}
\mathbb E _{x_1 \in S_1 } \norm V . \Box ^{2,3} S _{2,3}. ^{4}\,.
\end{equation}
Here, $ t_2, c_2>0$ are constants that are to be specified, based upon considerations in
the next stage of the argument.
The last expectation can be computed exactly, and is
\begin{equation}\label{e.T44}
\begin{split}
\mathbb E _{x_1 \in S_1 } \norm V . \Box ^{2,3} S _{2,3}. ^{4}
&= \mathbb E _{\substack{x_1\in S_1\\ x _{2,3} ^0, x _{2,3}^1\in S _{2,3} }}
\prod _{ \omega \in \{0,1\}^{\{2,3\}}}
V (x_1,x _{2,3} ^{\omega })
\\
& \stackrel u = [ \delta _{4} \delta _{V\,:\, 4} ] ^{4} \prod _{1\le j<k\le 3}\delta _{j,k} ^{2}\,.
\end{split}
\end{equation}
Of course we have $ \norm f_0 . \Box ^{2,3} S _{2,3}. ^{4} \le \norm V . \Box ^{2,3} S
_{2,3}. ^{4}$. Still, the deduction of the Lemma in this case does not follow from a
straightforward application of Lemma~\ref{l.BPZ} in two dimensions, as we are in the
weighted case. This
argument is the one that relates the constants $ c_1, t_1$ and the constants $ c_2, t_2$.
Following notation used in the proof of Lemma~\ref{l.BPZ}, we define a $ 4$-linear
form which arises from \eqref{e.Tbox-twoD}.
\begin{equation} \label{e.UboxB4}
\operatorname B _4 (f _{0,0}, f _{0,1}, f _{1,0}, f _{1,1})
= \mathbb E _{\substack{x _{1}\in S_1,
\\ x_{2,3} ^0, x _{2,3}^1\in S _{2,3}} }
\prod _{ \epsilon \in \{0,1\} ^2 }
f _{\epsilon } (x_1, x_{2,3} ^{\epsilon }) \,.
\end{equation}
Note that the left-hand-side of \eqref{e.Tbox-twoD} is
$
\operatorname B _4 (f _{0}, f _{0}, f _{0}, f _{0})
$, and that
$
{\mathbb E _{x_1 \in S_1 } \norm V . \Box ^{2,3} S _{2,3}. ^{4}}=
\operatorname B_4 (V, V, V, V)
$, which is given in \eqref{e.T44}.
Our central claims are these inequalities, which hold for $ c_1, t_1$ sufficiently
large, in terms of $ c_2,t_2$.
\begin{gather}\label{e.B4U}
\frac {\operatorname B _4 (U, U, U, U)}
{\operatorname B_4 (V, V, V, V)}
\ge \delta _{U\,:\, V} ^{4} + \tfrac 1 4 c_2 (\delta _{U\,:\, V} \tau ) ^{t_2} \,,
\\
\label{e.B4U4}
\ABS{\delta _{U\,:\, V} ^{3} -
\frac {\operatorname B _4 (U,U,U,V)}
{\operatorname B_4 (V, V, V, V)} }
\le 8 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} \,,
\\
Z_V \coloneqq \mathbb E _{\substack{x_1\in S_1\\ x _{2,3}^1 \in S _{2,3} }} V(x_1, x_2^0,x_3^0)
V(x_1, x_2^0,x_3^1) V(x_1, x_2^1,x_3^0) V(x_1, x_2^1,x_3^1) \,,
\\ \label{e.B4Vratio}
\mathbb E _{ x _{2,3} ^{0} \in S _{2,3} } (Z_V)=
{ \operatorname B_4 (V,V,V,V)} \,,
\\
\label{e.B4Vvar}
\operatorname {Var} _{ x _{2,3} ^{0} \in S _{2,3}} ( Z_V )
\le \sqrt \vartheta \cdot { \operatorname B_4 (V,V,V,V)} ^2
\\ \notag
Z_U \coloneqq \mathbb E _{\substack{x_1\in S_1\\ x _{2,3}^1 \in S _{2,3} }} U (x_1, x_2^0,x_3^0)
U (x_1, x_2^0,x_3^1) U (x_1, x_2^1,x_3^0) V(x_1, x_2^1,x_3^1) \,,
\\ \label{e.B4ratio}
\mathbb E _{ x _{2,3} ^{0} \in S _{2,3} } (Z_U)=
{ \operatorname B_4 (U,U,U,V)} \,,
\\
\label{e.B4var}
\operatorname {Var} _{ x _{2,3} ^{0} \in S _{2,3}} ( Z_U )
\le 32 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} { \operatorname B_4 (V,V,V,V)} ^2 \,.
\end{gather}
Notice that the constant $ t_1$ of \eqref{e.Tbox-oneD} appears in the
estimates \eqref{e.B4U4} and \eqref{e.B4var}. We take $ t_1>2t_2+3$.
In the definition of $ Z_U$, note that we have three occurrences of $ U$ and one of $ V$.
The expectation of $ Z_U$ is the term estimated in \eqref{e.B4U4}.
\begin{proof}[Proof of \eqref{e.B4U}.]
The denominator on the left-hand-side is
estimated in \eqref{e.T44}. So we estimate the numerator. We use the expansion
$ U= f_1+f_0$ four times to write $ \operatorname B _4 (U,U,U,U)$ as
a sum of sixteen terms.
\begin{equation*}
\operatorname B _4 (U,U,U,U)
=\sum _{\epsilon \in M_4} \operatorname B _4 (f _{\epsilon (0,0)},f _{\epsilon (0,1)},
f _{\epsilon (1,0)},f _{\epsilon (1,1)})
\end{equation*}
where $ M_4$ denotes the collection of sixteen maps from $ \{0,1\} ^2 $ into $ \{0,1\}$.
The two significant terms are associated to the maps $ \epsilon \equiv 0$ and $ \epsilon
\equiv1$.
\begin{align*}
\operatorname B _4 (f_1, f_1, f_1, f_1)
&= \delta _{U\,:\, V} ^{4} \operatorname B_4 (V,V,V,V)
\\
\operatorname B _4 (f_0, f_0, f_0, f_0)
&\ge c_2 (\delta _{U\,:\, V} \tau ) ^{t_2} \operatorname B_4 (V,V,V,V)
\end{align*}
The first is by definition of $ f_1= \delta _{U\,:\, V} V$, while the second is
by assumption \eqref{e.Tbox-twoD}. We should argue that the sum of the remaining
fourteen choices of $ \epsilon $ is small. But this follows from
the fact that \eqref{e.Tbox-one'} fails, and the inequality \eqref{e.1Box}. For any
choice of $ \epsilon \not\equiv 0,1$, the central hypothesis leading to that inequality holds.
Of course, it is important to use the fact that the one-dimensional obstructions are not in place
at this point.
\end{proof}
\begin{proof}[Proof of \eqref{e.B4U4}.]
In $ \operatorname B _4 (U,U,U,V)$, expand each $ U$ as $ f_1+f_0$. The leading term is
when each $ U$ is replaced by $ f_1$, giving us
\begin{equation*}
\operatorname B _4 (f_1,f_1,f_1,V)= \delta _{U\,:\, V} ^{3}
\operatorname B _4 (V,V,V,V) \,.
\end{equation*}
The remaining seven terms are of the form
$
\operatorname B _4 (f _{\epsilon (0,0)},f _{\epsilon (0,1)},
f _{\epsilon (1,0)},V )
$, where $ \epsilon \not\equiv 1$. But then, the estimate \eqref{e.1Box} applies,
so this proof is finished.
\end{proof}
\begin{proof}[Proof of \eqref{e.B4Vratio} and \eqref{e.B4Vvar}.]
The equation \eqref{e.B4Vratio} is by definition, and \eqref{e.B4Vvar} is a consequence of
the assumption on $ V$ and Lemma~\ref{l.Zvar}.
\end{proof}
\begin{proof}[Proof of \eqref{e.B4ratio} and \eqref{e.B4var}.]
The equation \eqref{e.B4ratio} is by definition of $ Z_U$.
The inequality \eqref{e.B4var} is very similar in spirit to Lemma~\ref{l.Zvar}, but does
not explicitly follow from that Lemma.
To compute the variance of $ Z_U$, we need the following $ 8$-linear form.
\begin{align*}
\operatorname L_8 (g_1,g_2,g_3,&g_4,g_5,g_6,g_7,g_8)
\\&=
\mathbb E _{\substack{ x _{1,2,3} ^0,x _{1,2,3} ^1,x _{1,2,3} ^2\in S _{1,2,3} \\ }}
g_1 (x_1^0,x_2^0,x_3^0)
g_2 (x_1^0,x_2^0,x_3^1)
g_3 (x_1^0,x_2^1,x_3^0)
g_4 (x_1^0,x_2^1,x_3^1)
\\& \qquad \times g_5 (x_1^1,x_2^0,x_3^0)
g_6 (x_1^1,x_2^0,x_3^2)
g_7 (x_1^1,x_2^2,x_3^0)
g_8 (x_1^1,x_2^2,x_3^2)
\end{align*}
The point of this definition is that $ \mathbb E _{x_{2,3}\in S_{2,3}} Z_U ^2
= \operatorname L_8 (U,U,U,V,U,U,U,V)$, and
we want to establish the estimate
\begin{equation}\label{e.L8<}
\mathbb E _{x_{2,3}\in S_{2,3}} Z_U ^2- \bigl(\mathbb E _{x _{2,3}\in S _{2,3}} Z_U \bigr) ^2
\le 20 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} \bigl(\mathbb E _{x _{2,3}\in S _{2,3}} Z_U \bigr) ^2 \,.
\end{equation}
We already have \eqref{e.B4U4}, which gives us an estimate of $ \mathbb E _{x _{2,3}\in S _{2,3}} Z_U $.
It follows from $ V$ being $ (4, \vartheta ,4)$-uniform that we have
\begin{equation*}
\delta _{U\,:\, V} ^{6}\operatorname L_8 (V,V,V,V,V,V,V,V)
\stackrel u =
[\delta _{U\,:\, V} ^{3} \cdot \operatorname B_4 (V,V,V,V)] ^2
\end{equation*}
And so, we should verify that
\begin{equation}\label{e.L8<<}
\begin{split}
\Abs{ \operatorname L_8 (U,U,U,V,U,U,U,V) - &\delta _{U\,:\, V} ^{6}\operatorname L_8 (V,V,V,V,V,V,V,V) }
\\
&\le 20 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1}\operatorname L_8 (V,V,V,V,V,V,V,V) \,.
\end{split}
\end{equation}
The key assumption is that \eqref{e.Tbox-oneD} fails, which in turn suggests that
we appeal to the inequality \eqref{e.1Box}. But, in the definition of $ \operatorname L_8$,
no single variable occurs in just one function, which is the key hypothesis needed to apply \eqref{e.1Box}.
This fact brings us to the observation that,
for instance, in the definition of $ \operatorname L_8$, only $ g_7$ and $ g_8$ are
functions of $ x_2^2$. Moreover, we are interested in the case where $ g_8=V$,
a `highly uniform' function, and $ g_7=U= f_1+f_0 $. Thus, our strategy is to selectively
replace occurrences of $ U$ in $ \operatorname L_8 (U,U,U,V,U,U,U,V)$ in such a way that
at each stage, there is a single occurrence of $ f_0$, and that there is a variable in $ f_0$ which
otherwise occurs only in instances of $ V$.
Specifically, we write
\begin{gather*}
\operatorname L_8 (U,U,U,V,U,U,U,V) - \delta _{U\,:\, V} ^{6}\operatorname L_8 (V,V,V,V,V,V,V,V)
= \sum _{m=1} ^{6} D_m \,,
\\
D_1 = \operatorname L_8 (U,U,U,V,U,U,f_0,V)\,, \qquad
D_2 = \delta _{U\,:\, V} \operatorname L_8 (U,U,U,V,U,f_0,V,V) \,,
\\
D_3 = \delta _{U\,:\, V} ^2 \operatorname L_8 (U,U,U,V,f_0,V,V,V) \,, \qquad
D_4 = \delta _{U\,:\, V} ^{3} \operatorname L_8 (U,U,f_0,V,V,V,V,V) \,,
\\
D_5 = \delta _{U\,:\, V} ^4 \operatorname L_8 (U,f_0,V,V,V,V,V,V) \,, \qquad
D_6 = \delta _{U\,:\, V} ^{5} \operatorname L_8 (f_0,V,V,V,V,V,V,V) \,.
\end{gather*}
Then, \eqref{e.L8<<} will follow from the estimate
\begin{equation}\label{e.D<}
\lvert D_m\rvert \le 3 c_1 (\delta _{U\,:\, V} \tau ) ^{t_1} \operatorname L_8 (V,V,V,V,V,V,V,V)\,,
\qquad 1\le m\le 6 \,.
\end{equation}
Each of the six inequalities in \eqref{e.D<} follows from the same principle, and so we will
only explicitly discuss the estimate for $ D_1$. Write
\begin{align*}
D_1=
\mathbb E _{\substack{ x _{1,2,3} ^0,x _{1,2,3} ^1\in S _{1,2,3} \\ x _{3} ^2 \in S _{3}}}&
U (x_1^0,x_2^0,x_3^0)
U (x_1^0,x_2^0,x_3^1)
U (x_1^0,x_2^1,x_3^0)
V (x_1^0,x_2^1,x_3^1)
\\& \quad
\times U (x_1^1,x_2^0,x_3^0)
U (x_1^1,x_2^0,x_3^2)
\cdot \mathbb E _{x_2 ^{2} \in S_2}
f_0 (x_1^1,x_2^2,x_3^0)
V (x_1^1,x_2^2,x_3^2) \,.
\end{align*}
Apply the Cauchy-Schwartz inequality in all variables except $ x_2^2\in S_2$. In so doing, apply the
First Proposition on Conservation of Densities, Proposition~\ref{p.con}, and the assumption of $ V$
being $ (4,\vartheta ,4)$-uniform to conclude that
\begin{gather} \label{e.D11}
\lvert D_1\rvert
\le
\operatorname L_8 (V,V,V,V,V,V,V,V)
\Biggl\{ \sqrt \vartheta + \frac { \operatorname L_4 (f_0,f_0,V,V)} { \operatorname L_4 (V,V,V,V)} \Biggr\}
^{1/2}
\\ \notag
\operatorname L_4 (g_1,g_2,g_3,g_4)
=
\mathbb E _{\substack{x_1^1\in S_1\\ x_2^2,x_2^3 \in S_2 \\ x_3^0,x_3^2\in S_3}}
g_1 (x_1^1,x_2^2,x_3^0)
g_2 (x_1^1,x_2^3,x_3^0)
g_3 (x_1^1,x_2^2,x_3^2)
g_4 (x_1^1,x_2^3,x_3^2)\,.
\end{gather}
In the right-hand-side of \eqref{e.D11}, observe that we can write
\begin{gather*}
{ \operatorname L_4 (f_0,f_0,V,V)}
=
\mathbb E _{\substack{x_1^1\in S_1\\ x_2^2,x_2^3 \in S_2 \\ x_3^0\in S_3}}
f_0 (x_1^1,x_2^2,x_3^0)
f_0 (x_1^1,x_2^3,x_3^0) \cdot Y
\\
Y= Y (x_1^1,x_2^2,x_2^3)
= \mathbb E _{x_3^2\in S_3}
V (x_1^1,x_2^2,x_3^2)
V (x_1^1,x_2^3,x_3^2)\,.
\end{gather*}
It follows from Lemma~\ref{l.Zvar} and the assumption on $ V$ that $ Y$ is a random variable with
non-zero mean and very small variance on the event $ V (x_1^1,x_2^2,x_3^0)
V (x_1^1,x_2^3,x_3^0)$. Hence,
\begin{align*}
\frac { \operatorname L_4 (f_0,f_0,V,V)} { \operatorname L_4 (V,V,V,V)}
\le
\sqrt \vartheta +
\frac { \operatorname L_4 (f_0,f_0,1,1)} { \operatorname L_4 (V,V,1,1)}
\end{align*}
But the last ratio is controlled by the failure of \eqref{e.Tbox-oneD}, so our proof of
\eqref{e.D<}, and hence \eqref{e.B4var} is complete.
\end{proof}
We can now conclude the proof of the Lemma in this case, assuming the inequalities \eqref{e.B4U}---\eqref{e.B4var}.
Select a point $ x _{2,3} ^{0}\in S _{2,3}$ at random, and define the data in \eqref{e.T'system}
as follows.
\begin{align*}
S_1'(x _{2,3} ^{0}) &= \{ x_1 \,:\, (x_1, x _{2} ^{0}, x_3 ^0)\in U\}\,,
\\
S _{1,2} ' (x _{2,3} ^{0})&= \{ (x_1,x_2^1) \,:\, (x_1, x_2^0,x_3^0),(x_1, x_2^1,x_3^0)\in U\}\,,
\\
S _{1,3} '(x _{2,3} ^{0}) &= \{ (x_1,x_3^1) \,:\, (x_1, x_2^0,x_3^0),(x_1, x_2^0,x_3^1)\in U\}\,,
\\
T' (x _{2,3} ^{0})&= \{ (x_1,x_2^1,x_3^1) \,:\, (x_1, x_2^0,x_3^0),(x_1, x_2^0,x_3^1)\in U\,,\, (x_1, x_2^1,x_3^1) \in V\}\,.
\end{align*}
With this definition, it is clear that \eqref{e.V'} holds, namely if $ V=T_4$, we have $ V'=T_4'=T' (x_{2,3}^0)$.
No change is made to the data not listed here, namely $ S_2, S_3$ and $ S _{2,3}$. The point of these
definitions is that we have
\begin{equation*}
\mathbb E _{ \substack{x_1\in S_1\\ x _{2,3} ^{0}, x _{2,3} ^{1}\in S _{2,3} }} T' (x _{2,3} ^{0})
= \operatorname B_4 (U,U,U,V)\,,
\end{equation*}
and $ \mathbb P _{\substack{x_1\in S_1\\ x _{2,3} ^{1}\in S _{2,3} }}
(T' (x _{2,3} ^{0})) = Z_U(x _{2,3} ^{0})= Z_U$, in the notation of \eqref{e.B4ratio} and \eqref{e.B4var}.
Define the event
\begin{align*}
\widetilde S _{2,3}=
\bigl\{ x ^{0} _{2,3} \in S _{2,3} & \,:\,
\abs{ Z_U- \operatorname B_4 (U,U,U,V) } < [ c_2 (\delta _{U\,:\, V} \tau )] ^{t_1/2} \operatorname B_4 (V,V,V,V)
\\
&\qquad \abs{ Z_V- \operatorname B_4 (V,V,V,V) } < [ c_2 (\delta _{U\,:\, V} \tau )] ^{t_1/2} \operatorname B_4 (V,V,V,V)
\bigr\}\,.
\end{align*}
It follows from \eqref{e.B4Vratio}---\eqref{e.B4var} that we have
\begin{equation*}
\mathbb P (S _{2,3} - \widetilde S _{2,3}) < 32
[ c_2 (\delta _{U\,:\, V} \tau )] ^{t_1/2} \,.
\end{equation*}
Moreover, for $ t_1>4 t_2$, notice that we would have inequalities that look quite similar to
\eqref{e.B4U} and \eqref{e.B4U4}. In particular, we will have
\begin{equation*}
\Abs{ \mathbb E _{x _{2,3} ^{0} \in \widetilde S _{2,3} } Z_U- \operatorname B_4 (U,U,U,V) }
\le [ c_2 (\delta _{U\,:\, V} \tau )] ^{t_1/2} \operatorname B_4 (V,V,V,V) \,,
\end{equation*}
with a similar inequality for $ Z_V$. Hence,
we can conclude the proof of the Lemma, by noting that
\begin{align*}
\sup _{x _{2,3} ^{0} \in \widetilde S _{2,3}} \frac { Z_U} {Z_V}
& \ge
\frac
{ \mathbb E _{x _{2,3} ^{0} \in \widetilde S _{2,3}} { Z_U} }
{ \mathbb E _{x _{2,3} ^{0} \in \widetilde S _{2,3}} { Z_V} }
\ge \delta _{U\,:\, V} + \tfrac 14 (\delta _{U\,:\, V} \tau ) ^{t_2} \,.
\end{align*}
\subsection{Three-Dimensional Obstructions}
We proceed under the assumption that both \eqref{e.Tbox-oneD} and \eqref{e.Tbox-twoD}
fail, as written and under all permutations of coordinates. We have specified $ c_1,t_1$
as functions of $ c_2,t_2$, and this argument will specify these last two constants.
We need the $ 8$-linear form, the analog of \eqref{e.UboxB4} given by
\begin{equation}\label{e.U8}
\operatorname B _{8} ( f _{\epsilon }\,:\, \epsilon \in \{0,1\} ^{ \{1,2,3\}})
= \mathbb E _{x _{1,2,3} ^{0}, x _{1,2,3} ^{1} \in S _{1,2,3}} \prod _{\epsilon \in \{0,1\} ^{\{1,2,3\}}}
f _{\epsilon } (x _{1,2,3} ^{\epsilon })\,.
\end{equation}
The relevant facts we need about this form concern these values. Set
\begin{gather*}
\operatorname B_8 [W]= \operatorname B_8 (W\,:\, \epsilon \in \{0,1\} ^{ \{1,2,3\}})\,, \qquad W= U,V
\\
\operatorname B_8[U,V]= \operatorname B_8(U ,\dotsc, U,V\,:\, \epsilon \in \{0,1\} ^{ \{1,2,3\}})\,,
\end{gather*}
where the lone $ V$ occurs in the $ \{1\} ^{1,2,3}$ position.
Indeed, note that $ \operatorname B_8[U]= \norm U. \Box ^{1,2,3} S _{1,2,3}. ^{8}$.
The facts we need are these.
\begin{gather}\label{e.B8U}
\frac {\operatorname B_8[U]} {\operatorname B_8[V]}\ge \delta _{U\,:\, V} ^{8}+ \tfrac 12 \tau ^{8}\,,
\\
\label{e.B8UV}
\ABs{ \delta _{U\,:\, V} ^{7}- \frac {\operatorname B_8 [U,V]} {\operatorname B_8[V]}}
\le \tfrac 1 {20} (\delta_{U\,:\, V} \tau )^{30}\,,
\\
\notag
Z= \mathbb E _{x^1 _{1,2,3}\in S _{1,2,3}} V (x ^1 _{1,2,3} )\prod _{ \substack{\epsilon \in \{0,1\} ^{ \{1,2,3\}}\\ \epsilon
\not\equiv 0,1} } U (x_ {1,2,3} ^{\epsilon }) \,,
\\
\label{e.B8mean}
\mathbb E (Z\,:\, U) = \frac { \operatorname B_8 [ U,V]} {\mathbb P (U)} \,,
\\
\label{e.B8var}
\operatorname {Var} _{x _{1,2,3} ^{0} \in S _{1,2,3}} (Z\,:\, U)
\le \tfrac 1 {20} (\delta _{U\,:\, V} \tau ) ^{30} \operatorname B_8[V] ^2 \,.
\end{gather}
\begin{proof}[Proof of \eqref{e.B8U}.]
Consider $ \operatorname B_8[U]$. Expand each occurrence of $ U$ as $ f_1+f_0$, where $ f_1= \delta _{U\,:\, V} V$.
This leads to
\begin{equation} \label{e.B8z}
\operatorname B_8[U]= \sum _{ \rho \in M_8}
\operatorname B_8 (f _{\rho (\epsilon) } \,:\, \epsilon \in \{0,1\} ^{ \{1,2,3\}})
\end{equation}
where $ M_8$ is the class of maps from $ \{0,1\} ^{ \{1,2,3\}} $ into $ \{0,1\}$. The leading term is
$ \rho \equiv 1$, which is
\begin{equation} \label{e.rho==1}
\delta _{U\,:\, V} ^{8}\operatorname B_8 [V]= \delta _{U\,:\, V} ^{8} \norm V. \Box ^{1,2,3} S _{1,2,3}. ^{8}\,.
\end{equation}
The other significant term is $ \rho \equiv 0$, which is
\begin{equation*}
\operatorname B_8 (f_0 \,:\, \epsilon \in \{0,1\} ^{ \{1,2,3\}})= \norm f_0. \Box ^{1,2,3} S _{1,2,3}. ^{8}
\ge \tau ^{8} \norm V. \Box ^{1,2,3} S _{1,2,3}. ^{8}\,.
\end{equation*}
The last inequality follows from \eqref{e.Tbox0}.
That leaves $2 ^{8}-2 $ additional terms in $ M_8$ to consider. For each $ \rho \in M_8$ which is not
equivalent to $ 0$ or $ 1$, the assumption for the inequality \eqref{e.2Box} holds. Namely, there is a
choice of $ \epsilon \in \{0,1\} ^{ \{1,2,3\}}$, and choice of distinct $ j,k\in \{1,2,3\}$ so that
$ \rho (\epsilon )=0$, and for every other $ \epsilon ' $, we have either $ \epsilon (j)\neq \epsilon ' (j)$
or $ \epsilon (k)\neq \epsilon ' (k)$. Therefore, the inequality \eqref{e.2Box} holds. Combining this inequality
with our assumption that \eqref{e.Tbox-twoD} fails, we see that the following holds.
\begin{align} \label{e.28-2}
\Abs{\operatorname B_8 (f _{\rho (\epsilon )} \,:\, \epsilon \in \{0,1\} ^{ \{1,2,3\}})}
& \le c_2 (\delta _{U\,:\, V} \tau ) ^{t_2}
\times \norm V. \Box ^{1,2,3} S _{1,2,3} . ^{8} \,.
\end{align}
For $ c_2$ sufficiently small, and $ t_2\ge 8$, this completes the proof of \eqref{e.B8U}.
\end{proof}
\begin{proof}[Proof of \eqref{e.B8UV}.]
Keeping the notation of \eqref{e.B8z}, we have
\begin{equation*}
\operatorname B_8[U, V]= \delta _{U\,:\, V} ^{-1} \sum _{ \rho \in M_8'}
\operatorname B_8 (f _{\rho (\epsilon) } \,:\, \epsilon \in \{0,1\} ^{ \{1,2,3\}})
\end{equation*}
where $ M_8'$ is the class of maps $ \rho \in M_8$ such that $ \rho ( 1 ^{ \{1,2,3\}})=1$.
The leading term is again $ \rho \equiv 1$, which is \eqref{e.rho==1} above.
The remaining $ 2 ^{7}-1$ terms all admit the bound \eqref{e.28-2}. Therefore,
\begin{equation*}
\Abs{\operatorname B_8[U, V]- \delta _{U\,:\, V} ^{7}
\norm V. \Box ^{1,2,3} S _{1,2,3} . ^{8} }
\le 2 ^{8} (\delta _{U\,:\, V} \tau ) ^{t_2-1}
\times \norm V. \Box ^{1,2,3} S _{1,2,3} . ^{8} \,.
\end{equation*}
This proves \eqref{e.B8UV} for $ c_2$ sufficiently small, and $ t_2\ge 31$.
\end{proof}
\begin{proof}[Proof of \eqref{e.B8mean} and \eqref{e.B8var}.]
The equation \eqref{e.B8mean} is just the definition of conditional expectation.
Note that as $ V$ is $ (4, \vartheta ,4)$-uniform, we have
\begin{align} \notag
\mathbb E _{x _{1,2,3} ^{0}, x _{1,2,3} ^{1} \in S _{1,2,3}}
Z \cdot U& = \operatorname B_8[U,V]
\\& \notag
=
\delta _{U\,:\, V} ^{7}\norm V . \Box ^{1,2,3} S _{1,2,3}. ^{8} + \epsilon \,,
\\ \label{e.EZU}
& = \delta _{U\,:\, V} ^{7} \delta _{V\,:\, 4} ^{8} \prod _{1\le j<k\le 3} \delta _{j,k} ^{4} + \epsilon \,,
\\
\label{e.EZerror}
\lvert \epsilon \rvert & \le
\tfrac 1 {20} (\delta_{U\,:\, V} \tau )^{30} \operatorname B_8 [V]\,,
\end{align}
by \eqref{e.B8UV}, and \eqref{e.Uuniform}.
The
inequality \eqref{e.B8var} is clearly a relative of Lemma~\ref{l.Zvar}, but does not follow from
any principle that we have stated. Indeed, we will see that the failure of \eqref{e.Tbox-twoD} is
instrumental to this inequality, as it has been to the prior inequalities.
Recalling \eqref{e.elemVar}, we see that we
need to estimate $ \mathbb E Z ^2 \cdot U $. This is a multilinear form in $ U$ and $ V$, which we now specify.
Let $ \Omega \subset \{0,1,2\} ^{\{1,2,3\}}$ be the set of maps $ \epsilon \;:\; \{1,2,3\} \to \{0,1,2\} $ such that
the range of $ \epsilon $ does not include both $ 1$ \emph{and } $ 2$. Then,
\begin{equation}\label{e.ZUV=}
\mathbb E _{x _{1,2,3} ^{0} \in S _{1,2,3}} Z ^2 \cdot U
= \mathbb E _{\substack{x _{1,2,3} ^{j} \in S _{1,2,3}\\ j=0,1,2 }}
V (x ^{1} _{1,2,3})V (x ^{2} _{1,2,3}) \prod _{\substack{\epsilon \in \Omega \\ \epsilon \not \equiv 1,2 }}
U (x _{1,2,3} ^{\epsilon }) \,.
\end{equation}
There are $ 13$ occurrences of $ U$ in this expression. (Of the $ 7$ occurrences of $ U$ in $ \operatorname B_8[U,V]$,
all but one get `doubled' in the expression above.) Each occurrence is expanded as
$ f_1+f_0$, where $ f_1= \delta _{U\,:\, V} V$. The leading term is when each occurrence of $ U$
is replaced by $ f_1$. This leads to
\begin{align}\notag
\delta _{U\,:\, V} ^{13}
\mathbb E _{\substack{x _{1,2,3} ^{j} \in S _{1,2,3}\\ j=1,2,3 }}
\prod _{\substack{\epsilon \in \Omega }}
V (x _{1,2,3} ^{\epsilon })
& \stackrel u = \delta _{U\,:\, V} ^{13} \delta _{4} ^{\lvert \Omega \rvert }
\prod _{1\le j<k\le 3} \delta _{j,k} ^{ \lvert \{\omega \vert _{j,k} \,:\, \omega \in \Omega \}\rvert }
\\ \label{e.ZUleading}
& \stackrel u = \delta _{U\,:\, V} ^{13} \delta _{4} ^{15}
\prod _{1\le j<k\le 3} \delta _{j,k} ^{ 7} =\delta _{U\,:\, V} ^{13} \cdot L_V\,.
\end{align}
Recall that this last expectation can be estimated by assumption that $ V$ is $ (4, \vartheta ,4)$-uniform,
see \eqref{e.Uuniform}.
In each of the $ 2 ^{13}-1$ remaining terms, there is at least one occurrence of $ U$ which is replaced
by $ f _{0}$. As in the previous two proofs, we are again in a situation in which \eqref{e.2Box} applies.
Therefore, as \eqref{e.Tbox-twoD} fails, each of these terms is at most
\begin{equation} \label{e.ZUc}
2 L_V \bigl\{ \vartheta' + c_2 (\delta _{U\,:\, V} \tau ) ^{t_2} \bigr\}\,.
\end{equation}
Therefore, for $ c_2$ sufficiently small, and $ t_2$ sufficiently large, we can combine \eqref{e.ZUc},
\eqref{e.ZUleading} and \eqref{e.ZUV=} to conclude that
\begin{align}\label{e.ZUV==}
\mathbb E _{x _{1,2,3} ^{0} \in S _{1,2,3}} Z ^2 \cdot U
&=
\delta _{U\,:\, V} ^{13} L_V + \epsilon '
\\ \label{e.ZUVerror}
\lvert \epsilon '\rvert & \le c_2' L_V (\delta _{U\,:\, V} \tau ) ^{t_2} \,.
\end{align}
Here, the implied constant in `$\stackrel u = $' depends upon the failure of the inequality \eqref{e.Tbox-twoD},
and $ L_V$ is defined in \eqref{e.ZUleading}.
Now observe that combining \eqref{e.EZU} and \eqref{e.ZUV=} and \eqref{e.ZUleading}, we have
\begin{align} \notag
\mathbb P (U \,:\, T_4) \cdot \mathbb E Z ^2 \cdot U
& = \delta _{U\,:\, V} ^{14} \delta _{4} ^{16}
\prod _{1\le j<k\le 3} \delta _{j,k} ^{ 8} + \epsilon' \cdot \mathbb P (U \,:\, T_4)
\\ \label{e.RV}
& = \bigl(\mathbb E Z \cdot U \bigr) ^2 + \epsilon ''
\\ \label{e.e''}
\lvert \epsilon '' \rvert & \le c_2' \operatorname B_8[V] ^2 [
(\delta _{U\,:\, V} \tau ) ^{t_2} +
\tfrac 1 {20} (\delta_{U\,:\, V} \tau )^{30} ] ^2 \,.
\end{align}
In the last line, we have used \eqref{e.EZerror} and
\eqref{e.ZUVerror}. Dividing \eqref{e.RV} by $ \mathbb P (U\,:\, T_4) ^2 $, and using the estimate
in \eqref{e.e''} completes the proof of \eqref{e.B8var}.
\end{proof}
We can complete the proof of Lemma~\ref{l.Tbox}, assuming the inequalities \eqref{e.B8U}---\eqref{e.B8var}.
For a suitably generic point $ x ^{0} _{1,2,3} \in U$, we define the new data in \eqref{e.T'system} to be
\begin{equation*}
S_1' (x^0 _{1,2,3}) = \{x_1 ^{1} \,:\, x _{1,2,3} ^{1,0,0} \in U\}\,,
\end{equation*}
with a corresponding definition for $ S_2'(x^0 _{1,2,3})$ and $ S_3'(x^0 _{1,2,3})$.
The set $ S' _{1,2} (x^0 _{1,2,3})$ is defined as
\begin{equation*}
S' _{1,2} (x^0 _{1,2,3})
= \{ x _{1,2} ^{1} \in S_1' (x^0 _{1,2,3}) \times S_2' (x^0 _{1,2,3}) \,:\,
x ^{1,1,0} _{1,2,3} \in U \} \,,
\end{equation*}
with a corresponding definition for $ S ' _{1,3} (x^0 _{1,2,3})$ and $ S' _{2,3} (x^0 _{1,2,3})$.
Last of all, the set $ T' (x^0 _{1,2,3})$ is taken to be
\begin{equation*}
T' (x^0 _{1,2,3})
= \{ x ^{1} _{1,2,3} \in V \,:\,
x ^{1,1,0} \in S ' _{1,2} (x^0 _{1,2,3}) \,, \,
x ^{1,0,1} \in S ' _{1,3} (x^0 _{1,2,3}) \,, \,
x ^{0,1,1} \in S ' _{2,3} (x^0 _{1,2,3})
\} \,.
\end{equation*}
With these definitions, note that \eqref{e.V'} holds, that is if $ V=T_4$, then $ V'=T' (x^0 _{1,2,3}) =
T'_4$ in the new $ \mathcal T$-system.
The point of this definition is that
\begin{equation}\label{e.T'point}
\mathbb E _{x ^{0} _{1,2,3}, x ^{1} _{1,2,3} \in S _{1,2,3}}
U (x^0 _{1,2,3}) T' (x^0 _{1,2,3}) = \operatorname B_8 [U,V]\,,
\end{equation}
with the last expression found in \eqref{e.B8UV}.
Now, set
\begin{equation*}
U'= \bigl\{ x ^{0} _{1,2,3} \in U \,:\,
\mathbb P _{x^1 _{1,2,3} \in S _{1,2,3} } (T' (x^0 _{1,2,3})) \ge \tfrac 1 {4} \delta _{U\,:\, V} ^{7}
\operatorname B_8 [ V]
\bigr\}\,.
\end{equation*}
It follows from \eqref{e.B8mean} and
\eqref{e.B8var} that we have
\begin{align*}
\mathbb P _{x^0 _{1,2,3} \in S _{1,2,3} } (U-U')
& \le \mathbb P (U) \cdot \bigl( (\tau \delta _{U\,:\, V}) ^{7} \operatorname B_8 [V]\bigr) ^{-2}
\operatorname {Var} (Z\,:\, U)
\\
& \le \mathbb P (U) (\tau \delta _{U\,:\, V} ) ^{14} \,.
\end{align*}
Now, it will follow from the $ (4, \vartheta , 3)$-uniformity of $ V$, and Lemma~\ref{l.Zvar} that
we have
\begin{equation*}
\operatorname {Var} _{x^0 _{1,2,3}} \Bigl( \mathbb E _{x^1 _{1,2,3} \in S _{1,2,3}}
\prod _{\epsilon \in \{0,1\} ^{1,2,3}}
V (x ^{\epsilon } _{1,2,3}) \,:\, V (x^0 _{1,2,3}) \Bigr)
\le \vartheta \operatorname B_8[V] ^2 \,.
\end{equation*}
Here, $ \vartheta $ is as in \eqref{e.zvoDef}. Therefore, it will follow that in the formula \eqref{e.B8U},
we can replace the leading $ U (x^0 _{1,2,3})$ with $ U' (x^0 _{1,2,3})$. Namely, we have
\begin{align} \notag
\operatorname B_8 [ U-U', U ,\dotsc, U]
& \le \operatorname B_8 [ U-U', V ,\dotsc, V]
\\ \label{e.U'}
& \le 2 (\tau \delta _{U\,:\, V} ) ^{14} \operatorname B_8[V] \,.
\end{align}
We can conclude this proof by estimating as follows. For $ x^0 _{1,2,3} \in U'$, we have
\begin{align*}
\sup _{x^0 _{1,2,3} \in U'} \mathbb P (U\,:\, T' (x^0 _{1,2,3})) &
=
\frac { \mathbb E _{x^1 _{1,2,3} \in S _{1,2,3}} \prod _{ \substack{\epsilon \in \{0,1\} ^{1,2,3} \\ \epsilon \not\equiv 0 }}
U (x _{1,2,3} ^{\epsilon }) }
{ \mathbb E _{x^1 _{1,2,3}\in S _{1,2,3}} V (x^1 _{1,2,3})
\prod _{ \substack{\epsilon \in \{0,1\} ^{1,2,3} \\ \epsilon \not\equiv 0,1 }}
U (x _{1,2,3} ^{\epsilon }) }
\\
& \ge
\frac { \mathbb E _{x^0 _{1,2,3},x^1 _{1,2,3}\in S _{1,2,3}}
U' (x^0 _{1,2,3})
\prod _{ \substack{\epsilon \in \{0,1\} ^{1,2,3} \\ \epsilon \not\equiv 0 }}
U (x _{1,2,3} ^{\epsilon }) }
{ \mathbb E _{x^0 _{1,2,3},x^1 _{1,2,3}\in S _{1,2,3}} U' (x^0 _{1,2,3}) V (x^1 _{1,2,3})
\prod _{ \substack{\epsilon \in \{0,1\} ^{1,2,3} \\ \epsilon \not\equiv 0,1 }}
U (x _{1,2,3} ^{\epsilon }) }
\\
& \ge \delta _{U\,:\, V} + \tfrac 14 \tau ^{8}\,.
\end{align*}
The last line follows by combining \eqref{e.B8U}, \eqref{e.B8UV}, and \eqref{e.U'}, with this last inequality
showing that modifications of \eqref{e.B8U} and \eqref{e.B8UV} hold, with the leading $ U (x^0 _{1,2,3})$
replaced by $ U' (x^0 _{1,2,3})$.
\section{Proof of Uniformizing Lemma} \label{s.uniform}
We marshal several facts, and set some notation, before beginning
the main lines of the proof of the Uniformizing Lemma~\ref{l.uni}.
\subsection{Martingales}
We will use basic facts about martingales.
Let $ Z$ be a real-valued random variable on a probability space $ \Omega $,
bounded by one. And let $\mathsf P$ be a finite partition of $ \Omega $.
Elements of the partition we refer to as \emph{atoms}.
The
\emph{conditional expectation of $ Z$ relative to $\mathsf P$}
is
\begin{equation*}
\mathbb E (Z\,:\, \mathsf P )
\coloneqq
\sum _{A\in \mathsf P } A \cdot \mathbb P (A) ^{-1} \mathbb E (Z \cdot A)\,.
\end{equation*}
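As a minimal illustration of this formula, and of the standing convention of identifying a set with its indicator function (the particular space below is purely illustrative): take $ \Omega =\{1,2,3,4\}$ with the uniform probability, $ \mathsf P = \{\{1,2\},\{3,4\}\}$, and $ Z$ the indicator of $ \{1,2,3\}$. Then
\begin{equation*}
\mathbb E (Z\,:\, \mathsf P )
= 1 \cdot \{1,2\} + \tfrac 12 \cdot \{3,4\}\,,
\qquad
\mathbb E [ \mathbb E (Z\,:\, \mathsf P ) ^2 ]
= \tfrac 12 \cdot 1 + \tfrac 12 \cdot \tfrac 14 = \tfrac 58\,.
\end{equation*}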
Partition $ \mathsf P $ \emph{refines} $ \mathsf Q$ iff each element of $\mathsf Q$ is
a finite union of elements of $ \mathsf P$.
In our application, all partitions will be a finite collection of sets.
Let $ \mathsf P _n$, $ n\ge 0$, be a \emph{refining}
sequence of partitions of $ \Omega $;
that is, $ \mathsf P _{n+1}$ refines $ \mathsf P _{n}$ for all integers $ n\ge 0$.
We will take $ \mathsf P _0$ to be the trivial partition, namely $ \mathsf P _0= \{\Omega \}$.
The sequence of random variables $ \mathbb E (Z \,:\, \mathsf P _n)$ is an example of
a \emph{martingale}. The sequence of random variables $ \Delta Z_n =\mathbb E (Z \,:\, \mathsf P _n)
- \mathbb E (Z \,:\, \mathsf P _ {n-1})$ for $ n\ge 1$ is a \emph{martingale difference sequence.}
Then, the sum below is telescoping
\begin{equation*}
\mathbb E (Z \,:\, \mathsf P _n)= \mathbb E (Z \,:\, \mathsf P _0)+\sum _{m=1} ^{n} \Delta Z _m\,.
\end{equation*}
Observe that the martingale difference sequence is a sequence of pairwise orthogonal
random variables. That is, for $ m<n$,
\begin{equation}\label{e.mds}
\mathbb E \Delta Z_m \cdot \Delta Z _n=0\,.
\end{equation}
Indeed, as the partitions $ \mathsf P _n$ are refining and $ m<n$, the random variable $ \Delta Z_m $ is constant on each
element $ E\in \mathsf P _m$, while
$ \mathbb E [\Delta Z _n \cdot E] =0$.
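A consequence used repeatedly below, recorded here for orientation: squaring the telescoping identity and applying the orthogonality \eqref{e.mds}, we obtain the `energy identity'
\begin{equation*}
\mathbb E [ \mathbb E (Z \,:\, \mathsf P _n) ^2 ]
= \mathbb E [ \mathbb E (Z \,:\, \mathsf P _0) ^2 ]
+ \sum _{m=1} ^{n} \mathbb E [ (\Delta Z _m) ^2 ]\,.
\end{equation*}
In particular, the quantities $ \mathbb E [ \mathbb E (Z\,:\, \mathsf P _n) ^2 ]$ are non-decreasing in $ n$, and are bounded by $ \mathbb E Z ^2 \le 1$. This leads us to: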
\begin{proposition}\label{p.Stopping} Let $ 0<u<1$.
Suppose that $ Z$ is a random variable bounded by $ 1$, and that
$ \mathsf P _n$ is the sequence of refining partitions such that
for an increasing sequence of integers $ t_m$ we have
\begin{equation*}
\mathbb E [ \mathbb E (Z\,:\, \mathsf P _{t_m-1} )] ^2 +u
\le
\mathbb E [ \mathbb E (Z\,:\, \mathsf P _ {t_{m}} )] ^2
\,, \qquad 1\le m<M\,.
\end{equation*}
Then, $M\le u ^{-1} $.
\end{proposition}
\begin{remark}\label{r.stoppingtimes} Below, we will refer to an increasing sequence of
integers as `stopping times.' An extension of this definition, to make the stopping
times certain sequences of measurable functions, is an essential tool in martingale
theory.
\end{remark}
\begin{proof}
Notice that the assumption tells us that $ \mathbb E (\Delta Z _{t _{m}}) ^2 \ge u$.
Indeed, since $ \mathbb E (Z\,:\, \mathsf P _ {t_{m}})= \mathbb E (Z\,:\, \mathsf P _ {t_{m}-1})
+ \Delta Z _{t_m}$, and by the orthogonality of martingale difference sequences,
\begin{align*}
\mathbb E (\Delta Z _{t _{m}}) ^2
& =
\mathbb E [ \mathbb E (Z\,:\, \mathsf P _ {t_{m}}) ] ^2
-2 \mathbb E [ \mathbb E (Z\,:\, \mathsf P _ {t_{m}}) \cdot \mathbb E (Z\,:\, \mathsf P _ {t_{m}-1})]
+ \mathbb E [\mathbb E (Z\,:\, \mathsf P _ {t_{m}-1}) ^2 ]
\\
&
=\mathbb E [ \mathbb E (Z\,:\, \mathsf P _ {t_{m}}) ] ^2
- \mathbb E [\mathbb E (Z\,:\, \mathsf P _ {t_{m}-1}) ^2 ]
\\ & \ge u \,.
\end{align*}
We then have
\begin{align*}
1 \ge \mathbb E Z ^2 \ge \sum _{m=1} ^{M} \mathbb E [ (\Delta Z _{t _{m}}) ^2] \ge M u\,.
\end{align*}
\end{proof}
We will use the following extension of the previous proposition.
\begin{corollary}\label{c.Stopping}
Suppose that $ \Omega '\subset \Omega $, where $ (\Omega, \mathbb P ) $ is a probability
space. Let $ 0<u<1$, and let $\mathsf P$ be a partition of $ \Omega '$ into a finite number of sets.
Let $ \mathsf P _m$ be a refining sequence of partitions of $ \Omega $, and for each $ p \in \mathsf P$ let $ t _{m} ( p )$, $ 1\le m\le M(p)$,
be an increasing sequence of stopping times so that
\begin{equation*}
\mathbb E [ \mathbb E ( p \,:\, \mathsf P _{t_m ( p )-1} ) ^2 ]+u
\le
\mathbb E [ \mathbb E ( p \,:\, \mathsf P _ {t_{m} ( p )} ) ^2 ]
\,, \qquad p \in \mathsf P\,,\ 1\le m \le M ( p )\,.
Then,
\begin{equation}\label{e.cStopping}
\sum _{ p \in \mathsf P} M ( p )\le u ^{-1} \,.
\end{equation}
\end{corollary}
\begin{proof}
We have
\begin{align*}
1 & \ge \sum _{ p \in \mathsf P } \mathbb P ( p )
\ge \sum _{ p \in \mathsf P} \sum _{m=1} ^{M ( p )}
\mathbb E [ (\Delta p _{t_m ( p )}) ^2 ]
\ge \sum _{ p \in \mathsf P} \sum _{m=1} ^{M ( p )} u\,.
\end{align*}
And this proves our Corollary.
\end{proof}
Here is an extension of the previous propositions, where the conditional
variance increment is permitted to be much smaller.
\begin{proposition}\label{p.STopping}
Let $ 0<u, \tau <1$, and $ C\ge 1$.
Suppose that $0\le Z\le 1 $ is a random variable, and that
$ \mathsf P _m$ is the sequence of refining partitions, and that $ t_m$ is a sequence
of stopping times such that for all $ 1\le m \le M$,
\begin{gather*}
\mathbb E [Z \cdot E _{m}] \ge \tau
\\
E_m \coloneqq
\bigl\{ p \in \mathsf P _{t_m-1} \,:\, \mathbb E [\mathbb E (Z \cdot p \,:\, \mathsf P _{t_m} )] ^2
\ge \mathbb E (Z\,:\, p ) ^2 + u \mathbb E (Z\,:\, p ) ^C \bigr\}
\end{gather*}
Then, $M\le u ^{-2} \tau ^{-C} $.
\end{proposition}
\begin{proof}
Observe that for
$
\Delta _{m} \coloneqq \mathbb E (Z\,:\, \mathsf P _ {t_m}) - \mathbb E (Z\,:\, \mathsf P _{t_m-1})
$ we have the estimate
\begin{equation*}
\mathbb E [\Delta _m ^2 \cdot E_m ]\ge u ^2 \mathbb E [\mathbb E (Z\,:\, \mathsf P _{t _m-1} ) ^C E_m] \,.
\end{equation*}
Therefore, using Jensen's inequality, available to us as $C\ge 1 $,
\begin{align*}
1 & \ge \sum _{m=1} ^{M} \mathbb E \Delta _m ^2
\ge \sum _{m=1} ^{M} \mathbb E \Delta _m ^2 E_m
\ge \sum _{m=1} ^{M} u ^2 \mathbb E [ \mathbb E (Z\,:\, \mathsf P _{t _m-1} ) ^C E_m ]
\\&\ge \sum _{m=1} ^{M} u ^2 \mathbb E [ \mathbb E (Z\,:\, \mathsf P _{t _m-1} ) E_m ] ^{C}
\ge M u ^2 \tau ^{C} \,.
\end{align*}
This proves the Proposition.
\end{proof}
\subsection{Partitions}
We need several partitions, which `fit together' in an appropriate way.
Let $ \Omega $ be a set with partition $\mathsf P$. Let $ \Omega '\subset \Omega $
have partition $ \mathsf P '$. Say that $ \mathsf P '$ is \emph{subordinate} to $\mathsf P$ iff
each atom $ p'\in \mathsf P '$ is contained in some atom $ p\in \mathsf P$. We do not insist that
every atom of $\mathsf P$ be a union of atoms from $ \mathsf P '$, that is, we do not require that
$ \mathsf P '$ refine $ \mathsf P$.
The \emph{minimum} of two partitions $ \mathsf P $ and $ \mathsf P '$ of the same set $ \Omega $ is
\begin{equation*}
\mathsf P \wedge \mathsf P ' = \{ A\cap B \,:\, A\in \mathsf P \,,\, B\in \mathsf P ' \}.
\end{equation*}
If $ \mathsf P '$ is a partition of a subset $ \Omega '\subset \Omega $, we use the
same notation $ \mathsf P\wedge \mathsf P '$ for a (maximal) partition of $ \Omega '$ subordinate
to both $\mathsf P$ and $ \mathsf P '$.
Suppose that $\mathsf P$ is a partition of $ \Omega $, and that $ \mathsf P '$ is a partition
of $ \Omega '\subset \Omega $, that is subordinate to $\mathsf P$. We define
\begin{equation}\label{e.multi}
\operatorname {multi} (\mathsf P '\,:\, \mathsf P)
=\sup _{ p \in \mathsf P} \sharp \{ p '\in \mathsf P '\,:\, p '\subset p \}\,.
\end{equation}
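A small example, with an illustrative choice of sets: let $ \Omega =\{1,\dotsc,6\}$, let $ \mathsf P = \{\{1,2,3\},\{4,5,6\}\}$, and let $ \mathsf P '= \{\{1\},\{2,3\},\{4,5\}\}$, a partition of $ \Omega '=\{1,2,3,4,5\}$. Then $ \mathsf P '$ is subordinate to $\mathsf P$, though it does not refine it, and
\begin{equation*}
\operatorname {multi} (\mathsf P '\,:\, \mathsf P) = 2\,,
\end{equation*}
since the atom $ \{1,2,3\}$ of $ \mathsf P$ contains the two atoms $ \{1\}$ and $ \{2,3\}$ of $ \mathsf P '$, while $ \{4,5,6\}$ contains only $ \{4,5\}$.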
\subsection{Useful Propositions}
This general proposition provides the motivation for the overall
approach we take.
\begin{proposition}\label{p.partitions}
Let $ 0<v<\delta <1$. Let $ A\subset T\subset X$ be finite sets
with $ \mathbb P (A\,:\, T)\ge \delta + v$.
Let $\mathsf P$ be a partition of $ X$, and let $ \mathsf P '\subset \mathsf P $ be any subset
of $\mathsf P$ for which
\begin{equation}\label{e.piSmall}
\mathbb P \Bigl( \bigcup _{p\in \mathsf P '} p \Bigr)
\le v/4\,.
\end{equation}
Then, there is some element $ p\in \mathsf P - \mathsf P '$ with
\begin{equation}\label{e.piConclude}
\mathbb P (T\,:\, p)\ge \tfrac v 4 \mathbb P (T\,:\, X)\,,
\qquad
\mathbb P (A\,:\, T\cap p) \ge \delta +\tfrac v2\,.
\end{equation}
\end{proposition}
\begin{proof}
Take $ \mathsf P ''$ to be all those elements $p\in \mathsf P $ which are in $ \mathsf P '$, or for which
$ \mathbb P (T\,:\, p)\le \tfrac v 4 \mathbb P (T\,:\, X)$. It is clear that we have
\begin{equation*}
\mathbb P \Bigl( A\cap \bigcup _{p\in \mathsf P ''} p \,:\, T \Bigr)\le \tfrac v2 \,.
\end{equation*}
Applying the pigeonhole principle to those elements of $ \mathsf P - \mathsf P ''$ proves the
Proposition.
\end{proof}
The `energy increment' steps we take are governed by these two general propositions.
\begin{proposition}\label{p.simpleEnergy} Let $ A$ be a subset of a probability
space $ (\Omega , \mathbb P )$. Suppose that there is a subset $ B\subset \Omega $
for which we have
\begin{equation*}
\mathbb P (A\,:\, B)=\mathbb P (A)+ \nu > \mathbb P (A)\,.
\end{equation*}
Then, for the partition $ \mathsf P _B$ of $ \Omega $ generated by $ B$, we have
\begin{equation} \label{e.simpleEnergy}
\mathbb E [\mathbb E (A\,:\, \mathsf P _B)] ^2 \ge \mathbb P (A) ^2 + \mathbb P (B) \cdot \nu ^{2}\,.
\end{equation}
\end{proposition}
In application, we will have $ \nu \,,\, \mathbb P (B)\ge \mathbb P (A) ^{C}$, for
an absolute constant $ C$. Thus, since $ \mathbb P (B)\, \nu ^{2}\ge \mathbb P (A) ^{C} \cdot \mathbb P (A) ^{2C}$, we have
\begin{equation} \label{e.SimpleEnergy}
\mathbb E [\mathbb E (A\,:\, \mathsf P _B)] ^2 \ge \mathbb P (A) ^2 + \mathbb P (A) ^{3C}\,.
\end{equation}
\begin{proof}
Let us set $ \alpha = \mathbb P (A)$, $ \mathbb P (B)= \beta $ so that
\begin{equation*}
\mathbb P (A \cap B)= (\alpha + \nu ) \beta \,, \qquad
\mathbb P (A \cap B ^{c} )= (1- \beta ) \alpha - \nu \beta \,.
\end{equation*}
We can calculate the left-hand side of \eqref{e.simpleEnergy} directly.
\begin{align*}
\mathbb E [\mathbb E (A\,:\, \mathsf P _B)] ^2
&= \mathbb P (B) [ \mathbb P (A\,:\, B)] ^2 +
(1- \mathbb P (B)) [ \mathbb P (A\,:\, B ^{c})] ^2
\\
&= \mathbb P (A \cap B) \cdot \mathbb P (A\,:\, B)
+ \mathbb P (A \cap B ^{c}) \mathbb P (A \,:\, B ^{c})
\\
&=( \alpha + \nu ) ^2 \beta + (1- \beta ) ^{-1} [ (1- \beta )\alpha - \nu \beta ] ^2
\\
&= \alpha ^2 + (1- \beta ) ^{-1} \nu ^2 \beta
\\
&\ge \alpha ^2 + \nu ^2 \beta \,.
\end{align*}
And this proves the proposition.
\end{proof}
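As a numerical check of the computation above (the values are purely illustrative): take $ \alpha = \beta =\tfrac 12$ and $ \nu = \tfrac 14$, so that $ \mathbb P (A\,:\, B)=\tfrac 34$ and $ \mathbb P (A\,:\, B ^{c})= \tfrac 14$. Then
\begin{equation*}
\mathbb E [\mathbb E (A\,:\, \mathsf P _B)] ^2
= \tfrac 12 \cdot \tfrac 9 {16} + \tfrac 12 \cdot \tfrac 1 {16}
= \tfrac 5 {16}
= \alpha ^2 + (1- \beta ) ^{-1} \nu ^2 \beta
\ge \tfrac 9 {32} = \alpha ^2 + \nu ^2 \beta \,.
\end{equation*}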
This trivial extension of the previous proposition is the one that we use.
\begin{proposition}\label{p.Energy} Let $ A$ be a subset of a probability space
$ (\Omega , \mathbb P )$, and let $\mathsf P$ be a finite partition of $ \Omega $.
Suppose that for a subset $ Q\subset \mathsf P$ the following holds:
for each atom $ p \in Q$, there is a subset $ p'\subset p $ so that
\begin{gather*}
\mathbb P (A \,:\, p ') \ge \mathbb P (A\,:\, p )+ \nu
\,, \qquad p \in Q\,,
\\
\mathbb P \Bigl( \bigcup _{ p \in Q} p ' \Bigr)\ge \tau \,.
\end{gather*}
Then, for the partition $ \mathsf P '$ which refines both $\mathsf P$ and $ \{ p '\,:\, p \in Q\}$,
we have the estimate
\begin{equation*}
\mathbb E [\mathbb E (A\,:\, \mathsf P ') ]^{2}\ge
\mathbb E [ \mathbb E (A\,:\, \mathsf P) ] ^2 + \tau \nu ^2 \,.
\end{equation*}
\end{proposition}
\bigskip
We will appeal to a simple bound for the tower notation given by
\begin{equation}\label{e.tower}
2 \uparrow n \coloneqq 2 ^{n}\,, \qquad 2 \uparrow\uparrow 1 \coloneqq 2\,, \qquad 2 \uparrow\uparrow n \coloneqq 2 \uparrow( 2
\uparrow\uparrow (n-1))\,.
\end{equation}
The function $ 2 \uparrow\uparrow n$ is the tower function, a close relative of the Ackermann function, and its
inverse is
\begin{equation}\label{e.log-star}
\log _{\ast } N=\min \{n \,:\, N\le 2 \uparrow\uparrow n\}\,.
\end{equation}
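For orientation, the first few values of the tower function are
\begin{equation*}
2 \uparrow\uparrow 1 = 2\,, \quad
2 \uparrow\uparrow 2 = 4\,, \quad
2 \uparrow\uparrow 3 = 16\,, \quad
2 \uparrow\uparrow 4 = 65536\,, \quad
2 \uparrow\uparrow 5 = 2 ^{65536}\,,
\end{equation*}
so that, for instance, $ \log _{\ast } N = 5$ for every $ 65536< N\le 2 ^{65536}$; in particular, $ \log _{\ast } 10 ^{100}=5$. It is this very slow growth of $ \log _{\ast }$ that makes the tower-type bounds below finite functions of the data.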
\begin{proposition}\label{p.simpleBound} For integers $ \ell , u ,v \ge 2$
define
\begin{equation*}
\psi (0, u ,v)= u \cdot {v}\,, \qquad
\psi (\ell +1, u, v)= 2 \uparrow ( u \cdot \psi (\ell ,u,v))
\end{equation*}
We have the estimate
\begin{equation*}
\psi (\ell ,u,v)\le 2 \uparrow\uparrow [ \ell + \log _{\ast } 2 u v ] \,.
\end{equation*}
\end{proposition}
\begin{proof}
Define
\begin{align*}
\epsilon _{\ell }= \frac { \log _2 u} {u \psi (\ell -1)}\,,
\qquad
\epsilon _{k-1 }= \frac {\log _2 u (1+ \epsilon _{k})} {u \psi (k-1) }\,.
\end{align*}
It is elementary to see that $ \epsilon _1\le 1$.
The point of these definitions is that we have
\begin{align*}
\psi (\ell , u, v)&=2 \uparrow [ (1+ \epsilon _ \ell ) u \psi (\ell -1)]
\\
&= 2 \uparrow [ 2 \uparrow [ (1+ \epsilon _{\ell -1}) u \psi (\ell -2)]]
\\
&\;\,\vdots
\\
&=\stackrel { \textup{$ \ell $ times}}
{\overbrace{2 \uparrow [ 2 \uparrow [ \cdots 2 \uparrow [ (1+ \epsilon _1) u v] \cdots ]]}}
\\
& \le 2 \uparrow\uparrow [ \ell + \log _{\ast } 2 u v ] \,.
\end{align*}
\end{proof}
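As a quick numerical check of the Proposition, with purely illustrative values: for $ \ell =1$ and $ u=v=2$ we have $ \psi (0,2,2)=4$ and $ \psi (1,2,2)= 2 \uparrow 8 = 256$, while $ \log _{\ast } (2uv)= \log _{\ast } 8 = 3$, so the claimed bound is $ 2 \uparrow\uparrow 4 = 65536 \ge 256$.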
The following definition is used to make a quicker appeal to Lemma~\ref{l.BPZ},
and its relative Lemma~\ref{l.Tbox}.
\begin{definition}\label{d.good}
Consider a subset $ S $ of a set $ X $, a partition $\mathsf P$,
and a positive parameter $ \Delta $. Say that $ \mathsf P ' $ is
\emph{$ (S, \Delta ,\mathsf P ) $-good} iff $ \mathsf P ' $ refines $\mathsf P$ and
\begin{equation} \label{e.good}
\mathbb E (\mathbb E (S\,:\, \mathsf P ') ^2 )\ge \mathbb E (\mathbb E (S\,:\, \mathsf P ) ^2 )+\Delta \,.
\end{equation}
\end{definition}
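For instance, in this language, the partition $ \mathsf P '$ produced by Proposition~\ref{p.Energy} is $ (A, \tau \nu ^2, \mathsf P )$-good, and the partition $ \mathsf P _B$ of Proposition~\ref{p.simpleEnergy} is $ (A, \mathbb P (B) \nu ^2, \{ \Omega \})$-good.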
\subsection{The $ U (3)$ Norm}
In this section we discuss the Lemmas needed to obtain sets that are uniform with respect to the
Gowers $ U (3)$ norm.
\begin{definition}\label{d.affine}
We call a partition of $ H \times H \times H$ \emph{affine} iff all atoms of the partition
are of the form $ V_1 \times V_2 \times V_3$, where $ V_i$ are
all translates of the \emph{same} subspace $ V\le H$.
This is an essential definition for us, as an affine partition, in
say the basis $ (\operatorname e_1, \operatorname e_2, \operatorname e_3) $
is also affine in any choice of basis formed from these three vectors.
Each atom of an affine partition is, after translation, a copy of
$ H \times H \times H$ with a lower dimension.
\end{definition}
In particular, given $ S_j$, $ 1\le j\le 4$, and an affine partition $ \mathsf P $,
for each atom $ \alpha \in \mathsf P $, it makes sense to compute the Gowers uniformity
norm of $ S_j$ relative to the atom $ \alpha $. That is, the atom $ \alpha $ determines
an affine subspace $ V_j$ in the coordinate $ \operatorname e_j$. After translation,
we could assume that $ V_j$ is actually a subspace, in which we can unambiguously
compute the Gowers $ U (3)$ norm. This is what we mean by
\begin{equation}\label{e.U3A}
\norm S_j - \mathbb P (S_j\,:\, \alpha ) . U (3), \alpha .
\end{equation}
The \emph{codimemsion} of an affine partition, written as $ \operatorname {codim} (\mathsf P )$
is the maximum codimension
of $ V_1$ in $ H$, for all $ V_1 \times V_2 \times V_3 \in \mathsf P $.
Clearly, we have
\begin{equation}\label{e.index>}
\abs{ \mathsf P }\le 5 ^{\operatorname {codim} (\mathsf P )}\,.
\end{equation}
We need the following version of the Inverse Theorem for the $ U (3)$ Norm, stated in the finite field setting.
\begin{inverse} \label{p.uni0}
There are constants $0<c< C<\infty $ so that the
following holds.
Let $ S\subset H$ and assume that $ \operatorname {dim} (H)> 10C u^{-C} $ and
\begin{equation*}
\norm S - \mathbb P (S\,:\, H). U (3). >u
\end{equation*}
Then, there is an affine subspace $ H'$ of $ H$ so that
$ \operatorname {dim} (H')\ge \operatorname {dim} (H)- Cu ^{-C}$
and
\begin{equation*}
\mathbb P (S\,:\, H')\ge \mathbb P (S\,:\, H)+ c u ^{C}\,.
\end{equation*}
\end{inverse}
We emphasize that the exact values of the estimates on the co-dimensions above are
important in the study of four-term progressions, but the exact form of these
estimates is not important to the proof of our Main Theorem, Theorem~\ref{t.main}.
For this result, see \cite{math.NT/0503014}*{pp.~27--28}.
We will use this elementary observation: If $ \mathsf P ,\mathsf P '$ are
affine partitions, then
\begin{equation}\label{e.index+}
\operatorname {codim} (\mathsf P \wedge \mathsf P ')
\le
\operatorname {codim} (\mathsf P )+\operatorname {codim} (\mathsf P ')\,.
\end{equation}
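To see \eqref{e.index+}, note that if the atoms of $ \mathsf P $ are products of translates of a subspace $ V$, and those of $ \mathsf P '$ are products of translates of $ V'$, then the atoms of $ \mathsf P \wedge \mathsf P '$ are products of translates of $ V \cap V'$, and
\begin{equation*}
\operatorname {codim} (V \cap V') \le \operatorname {codim} (V) + \operatorname {codim} (V')\,.
\end{equation*}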
\begin{proposition}\label{p.uni1} There is a constant $ C$ so that
the following holds for all $ 0<u, \tau <1$.
Let $ S_j$, $ 1\le j\le 4$ be
sets in the $ j$th coordinate. Then there is
an affine partition $ \mathsf P $ of $ H \times H \times H $, satisfying
$ \operatorname {codim} (\mathsf P ) \lesssim ( C /u \tau ) ^{C} $, so that
\begin{equation*}
\mathbb P (A\in \mathsf P \,:\, \sup _{j} \norm S_j. U (3), A. >u)<\tau \,.
\end{equation*}
\end{proposition}
\begin{proof}
Here is an important point in the proof. For an affine partition $ \mathsf P $,
suppose there is an atom $ A\in \mathsf P $ such that
\begin{equation*}
\norm S_j - \mathbb P (S_j \,:\, A). U (3), A. > u
\end{equation*}
Let $ A_j$ denote the affine subspace for coordinate $ \operatorname e_j$.
Then, there is a partition $ \mathsf P _A$ of $ A_j$ into affine subspaces of
codimension $ \le C u ^{-C}$, for which we have
\begin{equation*}
\mathbb E _{A_j} ( \mathbb E (S_j\cap A_j \,:\, \mathsf P _A) ^2 )
\ge
\mathbb E _ {A_j} (S_j\cap A_j) ^2 + c u ^{C}\,.
\end{equation*}
A moment's thought shows that there is then an affine refinement
$ \mathsf P '$ of $ \mathsf P $, in which only the atom $ A$ is further refined, for which
we have
\begin{equation} \label{e.A+}
\mathbb E ( \mathbb E (S_j \,:\, \mathsf P ') ^2 )
\ge
\mathbb E ( \mathbb E (S_j \,:\, \mathsf P ) ^2 )
+ c u ^{C} \mathbb P (A).
\end{equation}
Indeed, since the atom $ A$ is the product of translates of the \emph{same} subspace
$ A_j$, we impose an appropriate translate of the partition $ \mathsf P _A$ on the
two choices of the remaining coordinates. The codimension of the refining partition
has increased by only $ C u ^{-C}$.
\medskip
Here is the principal line of the argument. We construct a sequence of
refining affine partitions $ \mathsf P_n $, and a sequence of stopping times $ \tau _{j,k}$,
for $ 1\le j\le 4$ and $ k\ge 1$,
which are used to control the running time of the recursive procedure below.
Let $ \mathsf P $ be an affine partition.
Notice that there is some $ C>0$ so that the following is a sufficient condition for the
existence of a $ (S_j, u^C \tau , \mathsf P ) $-good partition $ \mathsf P '$:
\begin{equation} \label{e.goodSuff}
\mathbb P (A\in \mathsf P \,:\, \norm S_j. U (3), A.>u)\ge \tau /4
\end{equation}
In addition, $ \mathsf P '$ can be taken to be affine and
$ \operatorname {codim} (\mathsf P ')\le \operatorname {codim} (\mathsf P )+C u ^{-C}$.
This is a consequence of the discussion at the beginning of the proof.
The notion of a \emph{good} partition is defined in Definition~\ref{d.good}.
Initialize variables
\begin{align*}
\mathsf P _0 \leftarrow \{H \times H \times H\}
\,, \qquad
n\leftarrow 0\,,
\qquad
\tau _{j,0}\leftarrow 0\,, \qquad k_j\leftarrow 0\,, \qquad 1\le j\le 4\,.
\end{align*}
\textsf{WHILE} for some
$ 1\le j\le 4$, there is an affine $ (S_j, u^C \tau /4, \mathsf P _n)$-good partition $ \mathsf P '$,
with $ \operatorname {codim} (\mathsf P ')\le \operatorname {codim} (\mathsf P _{n})+C u ^{-C}$,
increment
\begin{equation*}
n\leftarrow n+1\,, \quad k_j\leftarrow k_j+1\,,
\end{equation*}
define $ \tau _{j, k_j}=n$, and set $ \mathsf P _{n}=\mathsf P '$.
As the underlying space is finite dimensional, this \textsf{WHILE} loop must stop.
The number $ K$ of stopping times
$ \tau _{j,1},\dotsc, \tau _{j,K}$ cannot exceed $ (\tau u) ^{-C}$.
Indeed, the hypotheses of Proposition~\ref{p.Stopping} hold, proving this claim
immediately. The conclusions of the present Proposition are then immediate from the recursion
and the observation \eqref{e.goodSuff}.
\end{proof}
In fact, we will rely upon the following variant of the previous
result.
\begin{lemma}\label{l.uni2} There is a constant $ C$ so that
the following holds for all $ 0<u, \tau <1$.
Let $ \mathcal S_j$, $ 1\le j\le 4$ be a collection of sets
in the $ j$th coordinate. Then there is
an affine partition $ \mathsf P $ of $ H \times H \times H $ of
\begin{align*}
\operatorname {codim} (\mathsf P ) \lesssim \Bigl[
(u \tau ) ^{-1} \prod _{j=1} ^{4} \abs{ \mathcal S_j} \Bigr] ^{C}
\quad \textup{and} \quad
\mathbb P (A\in \mathsf P \,:\, \sup _{j} \sup _{S\in \mathcal S_j} \norm S. U (3), A. >u) &<\tau \,.
\end{align*}
\end{lemma}
This proof is a simple variant of the previous proof. Note that the codimension of the
partition admits a substantially worse bound. This is because we have to
keep track of a running time for each possible set $ S \in \bigcup _{j} \mathcal S_j$.
\subsection{The Box Norm in Two Variables}
The goal of this subsection is Lemma~\ref{l.2BU}, which combines the fact about the $ U (3)$ norm in
Lemma~\ref{l.uni2}, with some facts about the Box Norm.
We begin with some generalities on the Box Norm in two variables.
Recall the definition of $ \mathsf P '$ being $ (S, \delta ,\mathsf P)$-good given in \eqref{e.good} above.
\begin{proposition}\label{p.GG} There is a $ C_2$ so that for all $ 0<u , \tau <1$ the
following holds.
Let $ Z\subset X \times Y$, and let $ \mathsf P _X$, $ \mathsf P _Y$ be partitions of $ X$ and $ Y$.
Suppose that the following condition holds.
\begin{gather*}
\mathbb P ( E \,:\, X \times Y)\ge \tau \,, \qquad \textup{where}
\\
E= \{ (p_x,p_y)\in \mathsf P _X \times \mathsf P _Y \,:\, \norm Z - \mathbb P (Z\,:\, p_x \times p_y) . \Box ^{x,y}
p_x \times p_y . \ge u\}\,.
\end{gather*}
Then,
there are partitions $ \mathsf P '_X$ and $ \mathsf P '_Y$ so that
\begin{gather}
\label{e.2good1}
\textup{ $ \mathsf P '_X \times \mathsf P '_Y$ is $ (Z, \tau u^{C_2}, \mathsf P _X \times \mathsf P _Y)$-good. }
\\
\label{e.2good2}
\textup{ $ \operatorname {multi} (\mathsf P '_X\,:\, \mathsf P _X) \le 2 \uparrow \sharp \mathsf P _Y$\,, and likewise for $ \mathsf P '_Y$. }
\end{gather}
Here, $ C_2$ could be taken to be $ 4$.
\end{proposition}
Note that the estimate \eqref{e.2good2}, recursively applied, leads to tower power style
bounds.
\begin{proof}
For each $ (p_x,p_y)\in E$, Lemma~\ref{l.BPZ} assures us the existence of
a partition $ \mathsf P _x (y)$ of $ p_x$ into two elements, and a partition $\mathsf P _y (x)$ of $ p_y$ into
two elements so that $ \mathsf P _x (y) \times \mathsf P _y (x)$ is $ (Z \cap p_x \times p_y, u ^{C_2},
p_x \times p_y)$-good. (There is no $ \tau $ in this last assertion.)
We take
\begin{equation*}
\mathsf P '_X = \mathsf P _X \wedge \bigwedge _{y\in \mathsf P _Y} \mathsf P _{x} (y)\,,
\end{equation*}
and likewise for $ \mathsf P '_Y$. It is clear that \eqref{e.2good2} holds.
By the assumption that $ \mathbb P (E)> \tau $, and the martingale property \eqref{e.mds},
it follows that \eqref{e.2good1} holds.
\end{proof}
\begin{proposition}\label{p.BoxTower} There is a $ C_2>0$ so that for all $0<u, \tau <1 $
the following holds.
Let $ Z\subset X \times Y$, and let $ \mathsf P _X$, $ \mathsf P _Y$ be partitions of $ X$ and $ Y$.
Let $ \mathsf P _Z $ be a partition of $ Z$ that is subordinate to $ \mathsf P _X \times \mathsf P _Y$.
Suppose that the following condition holds.
\begin{gather*}
\mathbb P ( E \,:\, Z)\ge \tau \,,
\\
E= \{ z\in \mathsf P _Z \,:\, \norm z - \mathbb P (z\,:\, X_z \times Y_z) . \Box ^{x,y} X_z
\times Y_z . \ge u\}\,.
\end{gather*}
Here, $ z\subset X_z \times Y_z$, where $ X_z\in \mathsf P _X$ and $ Y_z\in \mathsf P _Y$;
these atoms exist since $ \mathsf P _Z$ is subordinate to $ \mathsf P _X \times \mathsf P _Y$.
Then, there are partitions $ \mathsf P '_X$ and $ \mathsf P '_Y$ so that
\begin{gather}
\label{e.22good1}
\textup{ $ \mathsf P '_X \times \mathsf P '_Y$ is $ (\mathsf P _Z, \tau u^{C_2}, \mathsf P _X \times \mathsf P _Y)$-good. }
\\
\label{e.22good2}
\operatorname {multi} (\mathsf P '_X\,:\, \mathsf P _X) \le 2 \uparrow [(\sharp \mathsf P _Y) \cdot
\operatorname {multi} (\mathsf P _Z \,:\, \mathsf P _X \times \mathsf P _Y)] \,,
\ \textup{ and likewise for $ \mathsf P '_Y$. }
\end{gather}
Here, $ C_2$ could be taken to be $ 4$.
\end{proposition}
Note in particular the form of the tower in \eqref{e.22good2}, with the notation
as in \eqref{e.tower}.
\begin{proof}
For each $ z\in E$, there is a partition $ \mathsf P '_{X_z}$ into two elements, and
likewise for $ \mathsf P ' _{Y_z}$ so that $ \mathsf P ' _{X_z} \times \mathsf P ' _{Y_z} $ is
$ (z, u ^{C_2}, \{X_z\} \times \{Y_z\})$-good.
This follows from \eqref{e.2good1} and \eqref{e.2good2}.
Define the partition $ \mathsf P '_X$ to be
\begin{equation*}
\mathsf P '_X = \mathsf P _X \wedge \bigwedge_{z\in E} \mathsf P '_{X_z}\,.
\end{equation*}
Observe that \eqref{e.22good2} follows. Indeed, for each $ x\in \mathsf P _X$, we
could have up to $ (\sharp \mathsf P _Y) \cdot
\operatorname {multi} (\mathsf P _Z \,:\, \mathsf P _X \times \mathsf P _Y)$ many sets to form the
minimum partition over, leading to \eqref{e.22good2}.
Use the basic fact about martingales, \eqref{e.mds}, and the assumption that
$ \mathbb P (E)\ge \tau $ to conclude that \eqref{e.22good1} holds.
\end{proof}
We make a definition that we use in this section, and the next.
\begin{definition}\label{d.3system} We say that the
data
\begin{equation} \label{e.3sys}
\mathcal S= \{H \times H \times H\,,\,\mathsf P _H\,,\, S_i\,,\, \mathsf P _i \,,\, R_{j,k}\,,\, \mathsf P _{j,k}
\,,\, T\,,\, \mathsf P _T
\,:\, 1\le i\le 4\,,\, 1\le j<k\le 4 \}
\end{equation}
is a \emph{partition-system} iff
\begin{itemize}
\item $ \mathsf P _H$ is an affine partition of $ H \times H \times H$.
\item $ S_i\subset H$, and $ \mathsf P _i$ is a partition of $ \overline S_i$ that is subordinate to
$ \mathsf P _H$, $ 1\le i\le 4$.
\item $ R_{j,k}\subset S_j \times S_k$,
and $ \mathsf P _{j,k}$ is a partition of $ \overline R_{j,k}$ that is subordinate to $ \mathsf P _j \wedge \overline S_k $
and $\overline S_j \times \mathsf P _k$, $ 1\le j<k\le 4$.
\item $ T\subset H\times H\times H$ is such that
$
T\subset \overline R_{j,k}
$, $ 1\le j<k\le 4$.
\item $ \mathsf P _T = \bigwedge _{ 1\le j<k\le 4} \mathsf P _{j,k}$.
\end{itemize}
We stress that all partitions are collections of subsets of $ H \times H \times H$. Set
\begin{gather}
\label{e.dptl}
\mathsf P _{T, \ell } \coloneqq \mathsf P _{\ell } \wedge \bigwedge _{\substack{ 1\le j<k\le 4\\
j,k\neq \ell }} \mathsf P _{j,k}\,, \qquad 1\le \ell \le 4\,,
\\ \label{e.boldP1}
\mathbf P _1 (\mathcal S) = \sum _{i=1} ^{4} \operatorname {multi} (\mathsf P _i \,:\, \mathsf P_H)\,,
\\ \label{e.boldP2}
\mathbf P _2 (\mathcal S) = \sum _{ 1\le j<k\le 4} \operatorname {multi}(\mathsf P_{j,k} \,:\, \mathsf P _{j} \wedge \mathsf P _{k}) \,,
\\ \label{e.boldPT}
\mathbf P _T (\mathcal S) = \operatorname {multi}(\mathsf P_T \,:\, \mathsf P _{H}) \,,
\end{gather}
These last quantities are some counting functions that we will need to keep track of.
\end{definition}
A \emph{trivial partition-system} is a partition-system in which each of the partitions
is trivial.
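For instance, for a trivial partition-system each partition consists of a single atom, so that, in the notation of \eqref{e.boldP1}--\eqref{e.boldPT}, $ \mathbf P _1 (\mathcal S)=4$, $ \mathbf P _2 (\mathcal S)=6$, and $ \mathbf P _T (\mathcal S)=1$.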
For each $ t\in \mathsf P _T$, we take
\begin{equation} \label{e.s3t}
\mathcal S_3 (t)
= \{ H _{t,1} \times H _{t,2} \times H _{t,3}\,,\, s_{t:i} \,,\, r_{t:j,k} \,,\, t\,:\, 1\le i\le 4\,,\, 1\le j<k\le 4 \}
\end{equation}
to be the trivial partition-system associated to $ t$. Namely, we have
\begin{itemize}
\item $ t\subset H _{t,1} \times H _{t,2} \times H _{t,3} $.
Here, $ H _{t,1} \times H _{t,2} \times H _{t,3}$ may be the product of affine subspaces in $ H \times H \times H$,
but all relevant notions extend to this setting.
\item $ s _{t:i} \in \mathsf P _{i}$ and $ r _{t:j,k} \in \mathsf P _{j,k}$, with $ r _{t:j,k}\subset H _{t,1} \times H _{t,2} \times H _{t,3}$,
and $ t = \bigwedge _{ 1\le j<k\le 4} r _{t:j,k}$.
\end{itemize}
This is the Lemma that will be applied in the next section.
\begin{lemma}\label{l.2BU} Let $ C_1\ge 1$ be given.
There are finite functions $ \Psi _{2-\Box} \;:\; [0,1] ^2 \times \mathbb N ^{2}
\longrightarrow \mathbb N $ and $ \Psi _{\textup{codim}} \;:\; [0,1] ^2 \times \mathbb N ^2
\longrightarrow \mathbb N $ so that the following holds for all
$ 0< u_2, u_3, \tau <1$.
For all partition-systems $ \mathcal S$, as in \eqref{e.3sys},
there is a partition-system
\begin{equation} \label{e.2SYS}
\mathcal S'= \{H \times H \times H\,,\, \mathsf P _H'\,,\, S_i\,,\, \mathsf P '_i \,,\, R_{j,k}\,,\,
\mathsf P ' _{j,k}\,,\, T \,,\, \mathsf P _T' \,:\, 1\le i\le 4\,,\, 1\le j<k\le 4 \}
\end{equation}
which refines $ \mathcal S$, so that these conditions are met.
For $ 1\le i \le 4$ and $ 1\le j<k \le 4$,
\begin{gather}
\label{e.BU1}
\operatorname {codim} (\mathsf P '_H) \le \Psi _{\textup{codim}} (u_3,\tau , \mathbf P _1(\mathcal S), \mathbf P
_2(\mathcal S))\,,
\\
\label{e.2BU2}
\operatorname {multi}( \mathsf P '_i \,:\, \mathsf P_i)
\le \Psi _{2-\Box} (u_2, \tau, \mathbf P _1(\mathcal S), \mathbf P _2(\mathcal S))\,,
\\ \label{e.2BU3}
\operatorname {multi} ( \mathsf P ' _{j,k} \,:\, \mathsf P _{j} \wedge \mathsf P _{k} )
\le \operatorname {multi} (\mathsf P _{j,k} \,:\, \mathsf P _j \times \mathsf P _k) \,,
\\ \label{e.2BU5}
\mathbb P (E_{2, j,k}\,:\, S_j \times S_k)\le \tau \,,
\\ \notag
E_{2,j,k}=
\Biggl\{ r_{j,k}\in \mathsf P '_{j,k}
\,:\, r_{j,k} \subset s_j \cap s_k\,,\
s_v \in \mathsf P '_{v}\,, v=j,k\,,
\\ \notag \qquad \qquad \qquad
\norm r_{j,k} - \mathbb P ( r_{j,k}\,:\, s_j \times
s_k). \Box ^{ \{j,k\}} s_j \times s_k .
\ge u_2 [\mathbf P_T (\mathcal S')] ^{-C_1} \Biggr\}
\,,
\\ \label{e.2BU4}
\mathbb P (E_{3, j}\,:\, S_j )\le \tau \,,
\\ \notag
E_{3,j}=
\Biggl\{ s_{j}\in \mathsf P '_{j}
\,:\,
\norm s_{j} - \mathbb P ( s _{j}\,:\, A_j ). U (3), A_j .
\ge u_3 [\mathbf P_T (\mathcal S')] ^{-C_1} \Biggr\} \,.
\end{gather}
Finally, $ \mathbf P_T (\mathcal S')=\mathbf P_T (\mathcal S)$.
We are using the notation \eqref{e.boldP1}--\eqref{e.boldPT}; in \eqref{e.2BU4}, $ A_j$ denotes the atom of the affine partition $ \mathsf P '_H$ containing $ s_j$, as in \eqref{e.U3A}.
\end{lemma}
The conclusion is that virtually all of the elements of the partitions $ \mathsf P _j' $ and $ \mathsf P _ {j,k}'$
are uniform with respect to Gowers Norm, and the Box Norm.
We emphasize that this Lemma provides us with a tower power bound.
In \eqref{e.2BU2}, we have the estimates below, where
note that we have a $ \log _{\ast }$, as in \eqref{e.log-star}, on the left.
\begin{gather}
\label{e.2bu2}
\begin{split}
\log _{\ast } (
\sharp \mathsf P '_i)
&\le
2 u_2 ^{-C_2} \tau ^{-1}
\mathbf P _2 (\mathcal S) ^{C_1 \cdot C_2}
+ \log _{\ast } \mathbf P _1 (\mathcal S) \,.
\end{split}
\end{gather}
Note that by \eqref{e.2BU3},
the multiplicity of the partitions $ \mathsf P '_{j,k} $, defined in \eqref{e.multi},
are not increased in this procedure, though we get a very substantial increase in the
multiplicity of the $ \mathsf P _i'$, from the bound \eqref{e.2BU2}, forming the
principal loss in the application of this Lemma.
The sets $ s_i \not\in E _{3,i}$
are `very uniform,' even with respect to their probabilities in the respective
cell of $ \mathsf P '_H$.
The `tower' notation in \eqref{e.2bu2} is defined in \eqref{e.tower}.
\begin{proof}
We define a sequence of partition-systems. They are
\begin{equation}\label{e.Sm}
\begin{split}
\mathcal S (m)&= \{H \times H \times H\,,\, \mathsf P _H (m)\,,\, S_i\,,\,
\mathsf P _i (m) \,,\, R_{j,k}\,,\, \mathsf P _{j,k} (m)\,,\, T \,,\, \mathsf P _{T} (m)
\\ & \qquad \,:\, 1\le i\le 4\,,\, 1\le j<k\le 4 \}
\end{split}
\end{equation}
where $ \mathcal S (0)$ is the partition-system given to us by assumption.
These partition-systems are \emph{refining}, in the sense that the corresponding sequences
of partitions are refining.
In this process, the only incremental changes made to the partitions $ \mathsf P_T (m)$
are those needed to make them subordinate to the other partitions. Thus, quantities that appear in \eqref{e.2BU5} and
\eqref{e.2BU4} are constant. Namely, $ \mathbf Q= \mathbf P_T (\mathcal S (m))$ is independent of $ m$.
We also define a sequence of stopping times $ \sigma (j,k ; m)$, and $ m (j,k)$
for $ 1\le j<k\le 4$, and $ m \ge 0$.
Initialize
these stopping times as follows, where $ 1\le j<k\le 4$.
\begin{gather*}
m \leftarrow 0, \qquad \sigma ( j,k; 0 ) \leftarrow 0 \,,
\qquad m (j,k)\leftarrow 0 \,.
\end{gather*}
We choose $ C_2$ as in Proposition~\ref{p.BoxTower}.
The main recursion is this: Set
\begin{equation} \label{e.2Delta}
\Delta = u_2 ^{C_2} \tau \,\mathbf Q^{-C_1 \cdot C_2}
\end{equation}
\textsf{WHILE} there are $ 1\le j < k\le 4$ so that there are two
partitions $ \mathsf P'_j $ and $ \mathsf P'_k$
which satisfy \eqref{e.22good1} and \eqref{e.22good2} above for the quantity $ \Delta $. Namely,
\begin{itemize}
\item $ \mathsf P'_j \wedge \mathsf P'_k$
is $ ( \mathsf P _{j,k} ( m ), \Delta , \mathsf P_j (m) \wedge \mathsf P_k (m))$-good.
\item The multiplicity of $ \mathsf P'_j$ satisfies
\begin{equation}\label{e.upUp}
\begin{split}
\operatorname {mult} ( \mathsf P'_j \,:\, \mathsf P_j (m) )
&\le
2 \uparrow [ \operatorname {mult} (\mathsf P_k (m) \,:\, \mathsf P_H ( m ))
\cdot
\operatorname {multi} ( \mathsf P_{j,k}( m ) \,:\,
\mathsf P_j (m)\times \mathsf P_k (m))]
\\&
\le 2 \uparrow [ \operatorname {mult} (\mathsf P_k (m) \,:\, \mathsf P_H ( m ))
\cdot
\operatorname {multi} ( \mathsf P_{j,k}( 0) \,:\,
\mathsf P_j (0)\times \mathsf P_k (0))] \,,
\end{split}
\end{equation}
and likewise for $ \mathsf P'_k $.
\end{itemize}
We then take the following steps.
\begin{enumerate}
\item (Keep track of stopping times.)
\begin{equation*}
m \leftarrow m +1\,, \quad m (j,k)\leftarrow m (j,k)+1\,, \quad
\sigma ( j,k ; m ({j,k}))\leftarrow m \,.
\end{equation*}
\item (Select affine partition.) To each element of the affine partition $ \mathsf P_H (m)$,
apply Lemma~\ref{l.uni2} to $\mathsf P'_j$, $ 1\le j \le 4$, with
the parameter $ \tau $ that is given to us, and the value of $ u$ in Lemma~\ref{l.uni2} equal to
$
u= u_3 \mathbf Q ^{-C_1}
$.
Set the partition that Lemma~\ref{l.uni2}
supplies to us to be $ \mathsf P_H (m+1)$. Observe that
\begin{align} \label{e.BUcodim}
\operatorname {codim} (\mathsf P_H (m+1))
&\le
\operatorname {codim} (\mathsf P_H (m))+ \bigl[ (u_3 \tau ) ^{-1}
\mathbf Q \bigr] ^{D}
\end{align}
This follows from Lemma~\ref{l.uni2} and \eqref{e.2good2}, for appropriate choice of
constant $ D$. Note that the term $ \operatorname {multi}(\mathsf P'_j \,:\, \mathsf P_H (m)) $
is bounded in \eqref{e.upUp}.
\item (Updating the remaining partitions.)
Set $ \mathsf P _j (m+1)$ to be the maximal partition which refines $ \mathsf P'_j$ and is subordinate to
$ \mathsf P _H (m+1)$. Set $ \mathsf P _{j,k} (m+1)$ to be the maximal partition which refines $ \mathsf P _{j,k}
(m)$, and is subordinate to both $ \mathsf P_j (m+1) $ and $ \mathsf P_k (m+1)$.
The last partition $ \mathsf P_T (m+1) $ is then defined.
\end{enumerate}
At the conclusion of the \textsf{WHILE} loop, return this data:
For $ 1\le j<k\le 4$,
\begin{itemize}
\item $ m $, the integers $ m (j,k)$.
\item The sequence of stopping times
$ \sigma (j,k; \lambda )$, for
$ 0\le \lambda \le m ({j,k}) $.
\end{itemize}
It remains to argue that the partitions returned satisfy the conclusions of the Lemma.
We must have \eqref{e.2BU5}, else by the definition of $ \Delta $ in \eqref{e.2Delta} and
Proposition~\ref{p.BoxTower}, the routine
would not have stopped. The conclusion \eqref{e.2BU3} follows from the construction.
The conclusion \eqref{e.2BU4} follows from the manner in which we apply Lemma~\ref{l.uni2}, in
in particular the point (2) above.
The remaining conclusions \eqref{e.BU1} and \eqref{e.2BU2} require us
to know how many recursions were performed. We turn to this next.
We claim that
\begin{equation*}
m \le \Delta ^{-1} = u_2 ^{-C_2} \tau ^{-1} \mathbf Q ^{C_1 \cdot C_2} \,.
\end{equation*}
But this follows from Corollary~\ref{c.Stopping} applied to the construction,
the sets in $ \mathsf P _{j,k}$, and the stopping times $ \sigma ( j,k ; \lambda )$.
Therefore, by induction and \eqref{e.upUp}, we have
\begin{align*}
\operatorname {multi} (\mathsf P ' _{i} \,:\, \mathsf P '_H)
& = \operatorname {multi} (\mathsf P _i (m) \,:\, \mathsf P _H ( m ))
\\
& \le 2 \uparrow [ \mathbf P _2 \cdot
\operatorname {multi} (\mathsf P _i (m -1) \,:\, \mathsf P _H (m -1 ))]
\\
&\le{} \stackrel { m \ \textup{times} }
{
\overbrace{2 \uparrow[ \mathbf P _2 \cdot 2 \uparrow [ \mathbf P _2 \cdots [ \mathbf P _2 \cdot 2 \uparrow
( \mathbf P _2 \cdot \mathbf P _1 )
] \cdots ]]} }
= \psi ( m , \mathbf P _1, \mathbf P _2)\,.
\end{align*}
Here, the notation is from \eqref{e.boldP1}, \eqref{e.boldP2}, and Proposition~\ref{p.simpleBound}, which provides
the crude bound given in \eqref{e.2bu2}. This proves \eqref{e.2BU2}. The final conclusion \eqref{e.BU1} follows from
this last bound and \eqref{e.BUcodim}.
\end{proof}
\subsection{The Box Norm in Three Variables
The goal of this section is to add the considerations about the Box Norm in three variables
into our Lemmas, building up an analog of Lemma~\ref{l.2BU} which also stipulates facts about the
partition $ \mathsf P_T$, about which we have as yet made no statements.
\begin{lemma}\label{l.TBU}
There are finite functions $\Psi _{\textup{codim}}\,,\, \Psi _{T} \;:\; [0,1] ^2 \times \mathbb N ^{2}
\longrightarrow \mathbb N $ so that the following holds for all
$ 0< u_T, \tau _T<1$.
For all trivial partition-systems $ \mathcal S$ there is a partition-system $ \mathcal S'$ as in \eqref{e.2SYS},
such that
\begin{gather}
\label{e.TBU1}
\operatorname {codim} (\mathsf P _H ')
\le \Psi _{\textup{codim}} (u _{T}, \tau _T, \mathbb P (T \,:\, H\times H\times H))\,,
\\
\label{e.TBU2}
\mathbf P _T (\mathcal S')
\le \Psi _T (u _{T}, \tau _T, \mathbb P (T \,:\, H\times H\times H))\,,
\\ \label{e.TBU4}
\mathbb P (E \,:\, H\times H\times H) \le \tau _T \,,
\\ \notag
E \coloneqq
\Bigl\{ t \in \mathsf P ' _{ T }
\,:\, \textup{ $\mathcal S _3 (t)$
is \emph{not} $u_T$-admissible } \Bigr\} \,.
\end{gather}
Here, $ \mathcal S_3 (t)$ is the trivial partition system associated with $ t$, as defined in \eqref{e.s3t}.
\end{lemma}
In \eqref{e.TBU4}, admissibility is as in Definition~\ref{d.admissible}.
This proof will generate a second tower power in the estimates \eqref{e.TBU1}
and \eqref{e.TBU2}, but we do not detail this particular fact.
\begin{proof}
For this proof, we define a sequence of partition-systems $ \mathcal S (m)$ as in \eqref{e.Sm}.
These partition-systems are \emph{refining} in the sense that the corresponding sequences of
partitions are refining. We take $ \mathcal S (0)$ to be the trivial partition-system
given by the hypothesis of the Lemma.
We also define a sequence of stopping times $ \sigma ( \ell ,p_ \ell )$
for $ 1\le \ell \le 4$, with counters $ p _{\ell }\ge 0$.
Initialize
these variables $ \sigma ( \ell , 0 ) \leftarrow 0$ and $ p _{\ell }\leftarrow 0$, where $ 1\le \ell \le 4$.
Here is the recursive algorithm.
\textsf{IF} $ m $ is even,
apply Lemma~\ref{l.2BU} to $ \mathcal S(m )$, taking the values
$ \kappa ( \tfrac 18 u _{T} \tau _T) ^{C} $ and $ \tfrac1{100}\tau_T$ for the parameters of that Lemma; here $ u_T$ and $ \tau _T$ are as specified at the beginning of Lemma~\ref{l.TBU}, the Lemma
we are proving. The value of $ C_1$ in Lemma~\ref{l.2BU} is the value of $ C+1$, where the constants $ \kappa $ and
$ C$ are as in the definition of admissible, Definition~\ref{d.admissible}.
We then update $ m \leftarrow m +1$, and take the
updated data $ \mathcal S ( m)$ to be the partition-system from Lemma~\ref{l.2BU}.
Observe that from \eqref{e.2BU2} we have the estimates:
\begin{gather}
\label{e.T2}
\operatorname {multi}( \mathsf P _i( m )\,:\, \mathsf P_i (m-1))
\le \Psi _{2-\Box} (u_T, \tfrac12\tau_T, \mathbf P _1(m-1 ), \mathbf P _2 (m -1))\,.
\end{gather}
\textsf{IF} $ m $ is odd, by the previous step, the conclusions of
Lemma~\ref{l.2BU} are in force. The observation to make is that we have this condition.
For the event $B$ defined below, we have $ \mathbb P (B )\le \tfrac 18 {\tau _T} $.
\begin{equation}\label{e.BB}
\begin{split}
B = \{t\in \mathsf P _T (m) \,:\, &
\textup{$ \mathcal S_3 (t)$ fails \eqref{e.Ad2Box} or}
\\
& \qquad \textup{ \eqref{e.Ad1Box} in the
definition of $ u_T$-admissible}
\}
\end{split}
\end{equation}
Recall that $ \mathcal S_3 (t)$ is given in \eqref{e.s3t}.
That is, with very high probability, if the trivial partition-system $ \mathcal S_3 (t)$ fails
$ u_T$-admissibility, it must be the condition
\eqref{e.Ad3Box} that fails.
Let us see that this observation is true. The conditions \eqref{e.2BU5} and \eqref{e.2BU4} applied to $ \mathcal S (m)$
hold. Thus, except on a set of probability at most $ \tfrac 1 {10} \tau _T$, we have, using the notation of
\eqref{e.s3t},
\begin{gather*}
\norm r _{t:j,k}- \mathbb P (r _{t:j,k}\,:\, s _{t:j} \times s _{t:k}). \Box ^{j,k} s _{t:j} \times s _{t:k}.
\le \kappa (\tfrac 18 \tau _T u_T )^{C}
[ \mathbf P_T (\mathcal S(m))] ^{-C-2} \,,
\\
\norm s _{t:j}- \mathbb P (s _{t:j}\,:\, H _{t:j}). U (3).
\le \kappa (\tfrac 18 \tau _T u_T) ^{C}
[\mathbf P_T (\mathcal S(m))] ^{-C-2} \,.
\end{gather*}
Therefore, if the trivial partition-system $ \mathcal S_3 (t)$ fails either \eqref{e.Ad2Box} or \eqref{e.Ad1Box}
in the definition of $ u_T$-admissibility, it must follow that $ t$ has very small probability in its affine cell.
Namely, we must have
\begin{equation}\label{e.tooSmall}
\mathbb P (t\,:\, H _{t:1}\times H _{t:2}\times H _{t:3}) \le
\tfrac 18 \mathbf P_T (\mathcal S (m)) ^{-1} \tau _T \,.
\end{equation}
But certainly, by the definition of $ \mathbf P_T (\mathcal S (m))$ in \eqref{e.boldPT}, we have
\begin{equation*}
\sum _{t \;:\; \textup{$ t$ satisfies \eqref{e.tooSmall}}} \mathbb P (t \,:\, H \times H \times H)
\le \tfrac 18 \tau _T \,.
\end{equation*}
This means that $ \mathbb P (B )\le \tfrac18 {\tau _T} $ for $ B$ as in \eqref{e.BB}.
\smallskip
\textsf{IF} there is an $ 1\le \ell \le 4$ for which we have
\begin{gather*}
\mathbb P (F_{\ell } \,:\, H\times H\times H) \ge \tfrac 18 \tau_T \,,
\\
F_{\ell } \coloneqq
\Bigl\{ t \in \mathsf P _{ T } (m) - B
\,:\, \textup{ $\mathcal S _3 (t)$
does not satisfy \eqref{e.Ad3Box} for this value of $ \ell $} \Bigr\} \,.
\end{gather*}
For such a choice of $ \ell $, update $ p _{\ell }\leftarrow p _{\ell }+1$,
and set $ \sigma (\ell , p _{\ell })\leftarrow m$.
For each $ t \in F _{\ell }$, we can apply Lemma~\ref{l.Tbox}.
Write
\begin{equation*}
t _{\ell }= s _{t:\ell }\prod _{\substack{ 1\le j<k\le 4\\ j,k\neq \ell }}
r_{t:j,k}\,.
\end{equation*}
Apply Lemma~\ref{l.Tbox} with $ V= t _{\ell }$, $ U= t$, and $ \tau =\kappa u_T ^{C} $.
Since $ t\not\in B$, it follows that $ V=t _{\ell }$ satisfies the hypothesis of that Lemma,
namely that $ V=t _{\ell }$ is $ (4, \vartheta , \ell )$-uniform, with $ \vartheta $ as in
\eqref{e.zvoDef}.
Then, from the conclusion of Lemma~\ref{l.Tbox}, we read this.
There are partitions $ \mathsf P (s _{t : j }, t _{\ell })$, $ 1\le j\le 4$,
of $ s _{t: j }$ into two sets, and partitions
\begin{equation*}
\mathsf P ( r_{t:j,k}, t _{\ell })\,, \qquad
1\le j<k\le 4\,, \ j,k\neq \ell
\end{equation*}
of $ r_{t:j,k}$ into two sets, so that there is an atom $ V'$ in the partition
\begin{equation*}
\mathsf P (s _{t: \ell } , t _{\ell }) \wedge \bigwedge _{\substack{1\le j < k \le 4\\ j,k\neq \ell }}
\mathsf P ( r_{t:j,k}, t _{\ell })
\end{equation*}
which has a higher correlation with $ t _{\ell }$. Namely,
\begin{gather*}
\mathbb P (V'\,:\, t)
\ge c \bigl[ \kappa u_T ^{C} \mathbb P (t \,:\, t _{\ell }) \bigr] ^{p} \,,
\\
\mathbb P (t \,:\, V') \ge \mathbb P (t \,:\, t _{\ell })+
c \bigl[ \kappa u_T ^{C} \mathbb P (t \,:\, t _{\ell }) ^{C} \bigr] ^{p} \,.
\end{gather*}
Let
\begin{equation*}
\mathsf P ( t _{\ell })
= \bigwedge _{\substack{ 1\le j<k\le 4\\ j,k\neq \ell }} \mathsf P ( r_{t:j,k}, t _{\ell })\,.
\end{equation*}
It follows that we have
\begin{equation}\label{e.tUP}
\mathbb E [\mathbb E (T \cap t _{\ell } \,:\, \mathsf P ( t _{\ell }) ) ]^2
\ge
\mathbb P (T \,:\, t _{\ell }) ^2 +
u_T ^C \mathbb P (T \,:\, t _{\ell })^2 \,.
\end{equation}
We update
\begin{gather*}
\mathsf P _i(m+1) \leftarrow \mathsf P _i(m)\,, \qquad i\neq \ell \,,
\\
\mathsf P _{j,k} (m+1) \leftarrow \mathsf P _{j,k} (m)\wedge \bigwedge _{ t \in F _{\ell }} \mathsf P ( r_{t:j,k}, t _{\ell })\,,
\qquad 1\le j<k\le 4\, ,\ j,k\neq \ell \,.
\end{gather*}
It is these last two steps that create a second tower. Observe that we have,
using the notation of \eqref{e.boldP1} and \eqref{e.boldP2},
\begin{equation}\label{e.bigUP1}
\mathbf P _ u ( \mathcal S( m))
\le \mathbf P _u (\mathcal S(m-1)) 2 \uparrow [2 \mathbf P _2 (\mathcal S(m-1)) ^{6}] \qquad u=1,2\,.
\end{equation}
It follows from \eqref{e.tUP} that we have
\begin{equation}\label{e.TTup}
\mathbb E \,[ \mathbb E (T\,:\, \mathsf P _{T, \ell } (m)) ]^2
\ge
\mathbb E \, [\mathbb E (T\,:\, \mathsf P _{T, \ell } (m-1)) ]^2
+ \tau _T u_T ^{C} \mathbb P (T\,:\, T_ \ell ) ^2 \,.
\end{equation}
The recursion then loops.
\bigskip
Once the recursion has stopped, it follows from the construction, in particular
\eqref{e.TTup}, and Proposition~\ref{p.STopping}, that we must have
\begin{equation} \label{e.90}
p _{\lambda } \le \tau _T ^{-2} u_T ^{-2C} \,.
\end{equation}
The sum $ 2\sum _{\ell =1} ^{4} p _{\ell }$ bounds the running time.
At the end of the recursion, the conclusion \eqref{e.TBU4} holds. The other conclusions
are appropriate upper bounds on the multiplicities in terms of
some (very quickly growing) function of $ u_T$, $ \tau _T$, and the
multiplicities of the given partitions. These estimates follow from
\eqref{e.T2}, and \eqref{e.bigUP1}.
To supply some details, let us set
\begin{align*}
\Gamma (1) & \coloneqq \Psi _{2-\Box} (u_T, \tfrac 12 \tau _T, \mathbf P _1 (\mathcal S), \mathbf P _2 (\mathcal S))
\times [2 \uparrow [2 \mathbf P _2 ^{6}]] \,,
\\
\Gamma (p+1) & \coloneqq
\Psi _{2-\Box} (u_T, \tfrac 12 \tau _T, \Gamma (p), \Gamma (p))
\times [2 \uparrow [2 \Gamma (p) ^{6}]]\,.
\end{align*}
From \eqref{e.boldP1}, \eqref{e.boldP2}, \eqref{e.T2}, \eqref{e.bigUP1},
and \eqref{e.90}, we have
\begin{equation*}
\operatorname {multi} (\mathsf P _i(m) \,:\, \mathsf P _H (m))
\le
\Gamma (m) \le \Gamma (8 \tau _T ^{-2} u_T ^{-2C})\,, \qquad i=1,2\,.
\end{equation*}
Since $ \Psi _{2-\Box}$ is itself a power-tower, defined in terms of the $ 2 \uparrow\uparrow J$
function, we thus have a second power-tower from this estimate.
Since the partition $ \mathsf P_T$ is generated from the prior partitions, this last estimate
proves \eqref{e.TBU2}. The estimate \eqref{e.TBU1} follows from similar considerations, and
the estimate \eqref{e.BU1}.
\end{proof}
\subsection{Proof of Lemma~\ref{l.uni}}
Recall that $ A\subset T$, by assumption, and that $ \mathbb P (A\,:\, T)\ge \delta + \nu $.
Apply Lemma~\ref{l.TBU} to the corner system $ \mathcal A$ as in \eqref{e.Asystem}.
This Lemma also takes the parameters
\begin{equation*}
u _{T}= \delta \,, \qquad \tau _T= c \nu ^ {C_T} \mathbb P (T\,:\, H\times H\times
H)\,.
\end{equation*}
Here the constant $C_T $ is the constant that appears in Lemma~\ref{l.Tbox}; see \eqref{e.Tbox6}.
Let $ \mathcal S'$ be the partition-system given to us by this Lemma, satisfying \eqref{e.TBU2}
and \eqref{e.TBU4}.
Also consider the set
\begin{equation*}
E' \coloneqq
\bigl\{ t \in \mathsf P '_ { T }
\,:\, \mathbb P ( t \,:\, H _{t,1} \times H _{t,2} \times H _{t,3} )\le v [\mathbf P _T (\mathcal S')] ^{-1}
\mathbb P (T \,:\, H\times H\times H)
\bigr\}
\end{equation*}
Here, we are using the notation of \eqref{e.s3t} and \eqref{e.boldPT}.
Then, it is clear that $ \mathbb P \Bigl(\bigcup \{t \,:\, t\in E'\} \Bigr)\le \tau _T$.
Hence, by the pigeonhole principle (see Proposition~\ref{p.partitions}) we can select
$ t\in \mathsf P'_T$ so that $ t\not\in E'$, the $ \mathcal T$-system $ \mathcal S_3 (t)$
is $ \delta $-admissible, which is \eqref{e.inc0}, and $ \mathbb P (A\,:\, T)\ge \delta + \nu /4$,
which is \eqref{e.inc5}.
The estimate \eqref{e.H'} follows from the estimate \eqref{e.TBU1}.
\section{The Algorithm to Conclude the Main Theorem}
\label{s.algorithm}
This is a well-known argument. To prove our main Theorem, we should show that
for any $ 0<\delta <1$ there is an $ n (\delta )$ so that if
$ \operatorname {dim} (H)\ge n (\delta )$, and $ A\subset H \times H \times H $
with $ \mathbb P (A \,:\, H \times H \times H )\ge \delta $, then $ A$ contains
a corner.
We recursively construct a sequence of corner-systems
\begin{equation*}
\mathcal A (m)
=
\{H \,, \, S_i (m)\,,\, R_{i,j} (m)\, ,\, T (m)\,, A (m) \,:\, 1\le i,j\le 4\}\,.
\end{equation*}
$ \mathcal A (0)$ is the `trivial' corner-system
\begin{equation*}
S_{i} (0)=H\,, \quad R _{i,j} (0)= H \times H\,, \quad
T (0) = H \times H \times H \,, \quad A (0)=A\,.
\end{equation*}
Moreover, at each stage, $ A (m) \subset A $, so that a corner in $ A (m)$ is
a corner in $ A$.
The point is that the recursion, when it stops, provides us with
a corner-system $ \mathcal A (m_0)$ so that (1)
$ \mathbb P (A (m_0)\,:\, T (m_0))\ge \delta $,
(2)
$ \mathcal A (m_0)$
is $ \mathbb P (A (m_0)\,:\, T (m_0))$-admissible,
(3) $ \mathcal A (m_0)$ satisfies \eqref{e.UniformEnough},
\begin{gather}
\label{e.codim}
\operatorname {dim} (H (m_0))\ge \operatorname {dim} (H )- \Phi _{\textup{dim}} (\delta )\,,
\\
\label{e.ProbA}
\mathbb P (T (m_0) \,:\, H(m_0) \times H(m_0) \times H(m_0) )
\ge \Phi _{A, \mathbb P } (\delta )\,.
\end{gather}
Here, $ \Phi _{\textup{dim}}$ is a map from $ [0,1]$ to $ \mathbb N $, and
$ \Phi _{A, \mathbb P } (\delta )$ is a finite function from $ [0,1]$ to itself.
Then, it follows that Lemma~\ref{l.3dvon} implies $ A (m_0)$ has a corner
provided \eqref{e.BigEnough} holds, that is
\begin{align*}
\lvert H (m_0)\rvert ^{4}
\ge 100
\Phi _{A, \mathbb P } (\delta ) ^{-3} \,.
\end{align*}
By \eqref{e.codim}, this will clearly hold provided $ \operatorname {dim} (H)>n (\delta )$, for
a computable function $ n (\delta )$.
Thus, our Main Theorem is proved.
\medskip
The recursion is this: Given the corner-system $ \mathcal A (m)$,
it will be $ \mathbb P (A (m)\,:\, T (m))$-admissible. If it does not
satisfy \eqref{e.UniformEnough}, then we apply Lemma~\ref{l.dinc} to
conclude the existence of a corner-system
\begin{equation*}
\mathcal A ' (m)
=\{H '(m)\,,\, S_i '(m)\,,\, R_{i,j} '(m)\, ,\, T '(m)\,, A '(m) \,:\, 1\le i,j\le 4\}
\end{equation*}
satisfying these conditions: $ A' (m)\subset A (m)$,
\begin{equation}\label{e.prime}
\begin{split}
\mathbb P (T' (m)\,:\, T (m)) &\ge \kappa [ \mathbb P (A (m) \,:\, T (m))] ^{1/ \kappa}
\,,
\\
\mathbb P (A' (m)\,:\, T '(m)) &\ge \mathbb P (A (m)\,:\, T (m))+
\kappa [ \mathbb P (A (m) \,:\, T (m))] ^{1/ \kappa}\,.
\end{split}
\end{equation}
These are the conclusions of Lemma~\ref{l.dinc}.
The corner-system $ \mathcal A' (m)$ need not be $\mathbb P (A' (m)\,:\, T '(m)) $-admissible;
therefore, we apply Lemma~\ref{l.uni}, with
\begin{equation*}
\delta = \mathbb P (A (m)\,:\, T (m))\,,
\qquad
v= \kappa [ \mathbb P (A (m) \,:\, T (m))] ^{1/ \kappa}\,.
\end{equation*}
The conclusion of this Lemma gives us a new corner-system $ \mathcal A (m+1)$, which satisfies
\begin{gather}
\begin{split}
\mathbb P (A (m+1)\,:\, T (m+1)) & \ge
\mathbb P (A (m)\,:\, T (m))+
\kappa [ \mathbb P (A (m) \,:\, T (m))] ^{ 1/ \kappa}
\\ \label{e.Aup}
&\ge \delta + {\kappa} \delta ^{1/ \kappa}
\end{split}
\\ \label{e.PT>}
\begin{split}
\mathbb P (T (m+1) \,:\, & H (m+1)
\times H (m+1) \times H (m+1) )
\\&
\ge \widetilde \Psi _{T}
(\mathbb P (A (m)\,:\, T (m)) , \mathbb P (T (m)\,:\, H (m)
\times H (m) \times H (m) ))\,,
\end{split}
\\
\operatorname {codim} (H (m+1)) \le \Psi_ {\textup{codim} }
(\mathbb P (A (m)\,:\, T (m)) , \mathbb P (T (m)\,:\, H (m)
\times H (m) \times H (m) )) \,.
\end{gather}
The functions $\Psi_ {\textup{codim} } $ and $ \widetilde \Psi _{T}$ are derived from
those in \eqref{e.H'} and \eqref{e.inc2} by a change of variables.
Note that \eqref{e.Aup} implies that the recursion can continue for at most
$ m_0\lesssim 4 (\kappa \delta ^{1/ \kappa }) ^{-1} $ times before it must stop, as the
density of $ A (m) $ in $ T (m)$ can never be more than $ 1$.
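For orientation, with purely illustrative values of the parameters: if $ \kappa = \tfrac 1 {10}$ and $ \delta = \tfrac 12$, then each pass through the recursion increases the density $ \mathbb P (A (m)\,:\, T (m))$ by at least $ \kappa \delta ^{1/ \kappa } = \tfrac 1 {10} \cdot 2 ^{-10}$, so the recursion halts after at most roughly $ 4 \cdot 10 \cdot 2 ^{10} \approx 4 \times 10 ^{4}$ steps.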
Note that initially, we have $ T (0)=H (0) \times H (0) \times H (0)$, therefore the
iteration of the estimate \eqref{e.PT>} can be phrased completely in terms of a fixed function
of $ \delta = \mathbb P (A (0))$, therefore the estimate
\eqref{e.ProbA} holds. A similar argument applies to prove the estimate \eqref{e.codim},
completing the proof of our Main Theorem.
\begin{bibsection}
\begin{biblist}
\bib{0710.4862}{article}{
author={Bergelson, Vitaly},
author={Leibman, Alexander},
author={Lesigne, Emmanuel},
title={Intersective polynomials and polynomial Szemer\'edi theorem},
eprint={arXiv.org:0710.4862},
date={2007},
}
\bib{MR788966}{article}{
author={Conze, Jean-Pierre},
author={Lesigne, Emmanuel},
title={Th\'eor\`emes ergodiques pour des mesures diagonales},
language={French, with English summary},
journal={Bull. Soc. Math. France},
volume={112},
date={1984},
number={2},
pages={143--175},
issn={0037-9484},
review={\MR{788966 (86i:28019)}},
}
\bib{MR833409}{article}{
author={Furstenberg, H.},
author={Katznelson, Y.},
title={An ergodic Szemer\'edi theorem for IP-systems and combinatorial
theory},
journal={J. Analyse Math.},
volume={45},
date={1985},
pages={117\ndash 168},
issn={0021-7670},
review={MR833409 (87m:28007)},
}
\bib{MR1631259}{article}{
author={Gowers, W. T.},
title={A new proof of Szemer\'edi's theorem for arithmetic progressions
of length four},
journal={Geom. Funct. Anal.},
volume={8},
date={1998},
number={3},
pages={529--551},
issn={1016-443X},
review={\MR{1631259 (2000d:11019)}},
}
\bib{MR2195580}{article}{
author={Gowers, W. T.},
title={Quasirandomness, counting and regularity for 3-uniform
hypergraphs},
journal={Combin. Probab. Comput.},
volume={15},
date={2006},
number={1-2},
pages={143--184},
issn={0963-5483},
review={\MR{2195580}},
}
\bib{gowers-2007}{article}{
author={Gowers, W.~T. },
title={Hypergraph regularity and the multidimensional Szemer\'edi theorem},
eprint={arXiv.org:0710.3032},
date={2007},
}
\bib{MR1844079}{article}{
author={Gowers, W. T.},
title={A new proof of Szemer\'edi's theorem},
journal={Geom. Funct. Anal.},
volume={11},
date={2001},
number={3},
pages={465\ndash 588},
issn={1016-443X},
review={MR1844079 (2002k:11014)},
}
\bib{math.NT/0404188}{article}{
title={The primes contain arbitrarily long arithmetic progressions},
author={Green, Ben},
author={Tao, Terence},
eprint={arXiv:math.NT/0404188},
}
\bib{MR2187732}{article}{
author={Green, Ben},
title={Finite field models in additive combinatorics},
conference={
title={Surveys in combinatorics 2005},
},
book={
series={London Math. Soc. Lecture Note Ser.},
volume={327},
publisher={Cambridge Univ. Press},
place={Cambridge},
},
date={2005},
pages={1--27},
review={\MR{2187732 (2006j:11030)}},
}
\bib{math.NT/0503014}{article}{
title={An inverse theorem for the Gowers $U^3$ norm},
author={Green, Ben},
author={Tao, Terence},
eprint={arXiv:math.NT/0503014},
}
\bib{math.NT/0606088}{article}{
title={{Linear Equations in Primes}},
author={Green, Ben},
author={Tao, Terence},
eprint={arXiv:math.NT/0606088},
}
\bib{MR2150389}{article}{
author={Host, Bernard},
author={Kra, Bryna},
title={Nonconventional ergodic averages and nilmanifolds},
journal={Ann. of Math. (2)},
volume={161},
date={2005},
number={1},
pages={397--488},
issn={0003-486X},
review={\MR{2150389 (2007b:37004)}},
}
\bib{MR2090768}{article}{
author={Host, Bernard},
author={Kra, Bryna},
title={Averaging along cubes},
conference={
title={Modern dynamical systems and applications},
},
book={
publisher={Cambridge Univ. Press},
place={Cambridge},
},
date={2004},
pages={123--144},
review={\MR{2090768 (2005h:37004)}},
}
\bib{MR1827115}{article}{
author={Host, Bernard},
author={Kra, Bryna},
title={Convergence of Conze-Lesigne averages},
journal={Ergodic Theory Dynam. Systems},
volume={21},
date={2001},
number={2},
pages={493--509},
issn={0143-3857},
review={\MR{1827115 (2002d:28007)}},
}
\bib{MR2289954}{article}{
author={Lacey, Michael T.},
author={McClain, William},
title={On an argument of Shkredov on two-dimensional corners},
journal={Online J. Anal. Comb.},
number={2},
date={2007},
pages={Art. 2, 21 pp. (electronic)},
issn={1931-3365},
review={\MR{2289954}},
}
\bib{MR2167756}{article}{
author={R{\"o}dl, V.},
author={Nagle, B.},
author={Skokan, J.},
author={Schacht, M.},
author={Kohayakawa, Y.},
title={The hypergraph regularity method and its applications},
journal={Proc. Natl. Acad. Sci. USA},
volume={102},
date={2005},
number={23},
pages={8109--8113 (electronic)},
issn={1091-6490},
review={\MR{2167756 (2006g:05095)}},
}
\bib{MR0051853}{article}{
author={Roth, K. F.},
title={On certain sets of integers},
journal={J. London Math. Soc.},
volume={28},
date={1953},
pages={104--109},
issn={0024-6107},
review={\MR{0051853 (14,536g)}},
}
\bib{MR2266965}{article}{
author={Shkredov, I. D.},
title={On a generalization of Szemer\'edi's theorem},
journal={Proc. London Math. Soc. (3)},
volume={93},
date={2006},
number={3},
pages={723--760},
issn={0024-6115},
review={\MR{2266965 (2007i:11018)}},
}
\bib{shkredov-2007}{article}{
author={Shkredov, I.~D.},
title={On a two-dimensional analog of Szemeredi's Theorem in Abelian groups},
eprint={http://www.citebase.org/abstract?id=oai:arXiv.org:0705.0451},
date={2007},
}
\bib{MR2167755}{article}{
author={Solymosi, Jozsef},
title={Regularity, uniformity, and quasirandomness},
journal={Proc. Natl. Acad. Sci. USA},
volume={102},
date={2005},
number={23},
pages={8075--8076 (electronic)},
issn={1091-6490},
review={\MR{2167755 (2006g:05096)}},
}
\bib{MR0245555}{article}{
author={Szemer{\'e}di, E.},
title={On sets of integers containing no four elements in arithmetic
progression},
journal={Acta Math. Acad. Sci. Hungar.},
volume={20},
date={1969},
pages={89--104},
issn={0001-5954},
review={\MR{0245555 (39 \#6861)}},
}
\bib{MR0369312}{article}{
author={Szemer{\'e}di, E.},
title={On sets of integers containing no $k$ elements in arithmetic
progression},
note={Collection of articles in memory of Juri\u\i\ Vladimirovi\v c
Linnik},
journal={Acta Arith.},
volume={27},
date={1975},
pages={199--245},
issn={0065-1036},
review={\MR{0369312 (51 \#5547)}},
}
\end{biblist}
\end{bibsection}
\textsc{Michael Lacey, School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA}\hfill\break
\textsc{Email:} \verb|lacey@math.gatech.edu|
\textsc{William McClain, School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA}
\hfill\break
\textsc{Email:} \verb|bill@math.gatech.edu|
\end{document}
| {
"timestamp": "2008-04-18T15:41:50",
"yymm": "0804",
"arxiv_id": "0804.3019",
"language": "en",
"url": "https://arxiv.org/abs/0804.3019",
"abstract": "In an additive group (G,+), a three-dimensional corner is the four points g, g+d(1,0,0), g+d(0,1,0), g+d(0,0,1), where g is in G^3, and d is a non-zero element of G. The Ramsey number of interest is R_3(G) the maximal cardinality of a subset of G^3 that does not contain a three-dimensional corner. Furstenberg and Katznelson have shown R_3(Z_N) is little-o of N^3, and in fact the corresponding result holds in all dimensions, a result that is a far reaching extension of the Szemeredi Theorem. We give a new proof of the finite field version of this fact, a proof that is a common generalization of the Gowers proof of Szemeredi's Theorem for four term progressions, and the result of Shkredov on two-dimensional corners. The principal tool are the Gowers Box Norms.",
"subjects": "Number Theory (math.NT); Classical Analysis and ODEs (math.CA); Combinatorics (math.CO)",
"title": "Three Dimensional Corners: A Box Norm Proof",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969679646668,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7099044293634144
} |
https://arxiv.org/abs/2210.11961 | Sets of mutually orthogoval projective and affine planes | A pair of planes, both projective or both affine, of the same order and on the same pointset are orthogoval if each line of one plane intersects each line of the other plane in at most two points. In this paper we prove new constructions for sets of mutually orthogoval planes, both projective and affine, and review known results that are equivalent to sets of more than two mutually orthogoval planes. We also discuss the connection between sets of mutually orthogoval planes and covering arrays. | \section{Introduction}
In a projective plane of order $q$ an {\em oval} is a set of $q+1$ points, no three of which are collinear. A beautiful theorem, independently published multiple times, states that in $\PG(2,q)$ there exists a set of $q^2+q+1$ ovals which form the blocks of a second projective plane \cites{baker_projective_1994,MR3248524,glynn_finite_1978,MR420430}.
In the earliest proof, if $D$ is a $(q^2+q+1,q+1,1)$-difference set over $\mathbb{Z}_{q^2+q+1}$, then $-D$ is an oval in the projective plane developed from $D$. Since $-D$ is itself also a difference set, it can be developed into the second projective plane on the same ground set, $\mathbb{Z}_{q^2+q+1}$. Hence we have constructed the blocks of two projective planes with the property that the lines of one are ovals in the other. We introduce the term {\em orthogoval{}} to describe this property.
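As a concrete illustration of the difference-set construction (our own sketch, not code from the original sources), the following Python fragment develops the $(7,3,1)$-difference set $D=\{0,1,3\}$ over $\mathbb{Z}_7$ together with $-D$ and checks that the two resulting Fano planes are orthogoval{}.
\begin{verbatim}
from itertools import combinations

def develop(base, m):
    """All translates base + x (mod m) of a difference set."""
    return [frozenset((b + x) % m for b in base) for x in range(m)]

def orthogoval(lines1, lines2):
    """Every line of one plane meets every line of the other in <= 2 points."""
    return all(len(l1 & l2) <= 2 for l1 in lines1 for l2 in lines2)

m = 7                           # q^2 + q + 1 for q = 2
D = {0, 1, 3}                   # a (7,3,1)-difference set over Z_7
negD = {(-d) % m for d in D}    # -D is again a difference set

plane1 = develop(D, m)          # lines of the first Fano plane
plane2 = develop(negD, m)       # lines of the second Fano plane

# Sanity check: every pair of points lies on exactly one line of each plane.
for lines in (plane1, plane2):
    for pair in combinations(range(m), 2):
        assert sum(set(pair) <= line for line in lines) == 1

print("orthogoval:", orthogoval(plane1, plane2))   # expect True
\end{verbatim}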
\begin{definition}
Two planes, both projective or both affine, of the same order and on the same pointset are {\em orthogoval{}} if the blocks of one are ovals in the other, or in other words a line of one plane intersects any line in the other plane in at most two points. A set of planes is called a {\em set of mutually orthogoval{} planes} if the planes are pairwise orthogoval{}.
\end{definition}
The primary goal of this paper is to prove new constructions for sets of mutually orthogoval{} planes, both projective and affine, and to review known results that are equivalent to sets of more than two mutually orthogoval{} planes.
A pair of orthogoval{} projective planes of order $q$ was used by Raaphorst, Moura and Stevens to construct a family of covering arrays of strength 3, $\CA(2q^3-1;3,q^2+q+1,q)$ (see \cref{def:CA}), from the circulant matrices of linear feedback shift register sequences over finite fields $\mathbb{F}_q$. For $q > 3$ these are still the best known strength 3 covering arrays for their number of columns and alphabet sizes. Torres-Jimenez and Izquierdo-Marquez constructed covering arrays $\CA(2q^3-q;3,q^2-q+3,q)$ which are equivalent to deleting a particular set of $2q-2$ columns of the $\CA(2q^3-1;3,q^2+q+1,q)$ and then noting that, for every non-zero symbol $x$, the array that remains has two copies of the all-$x$ row. Deleting one of each of these pairs completes the construction \cite{torres-jimenez_covering_2018}. Colbourn, Lanus and Sarkar found, via Sherwood covering perfect hash families, a $\CA(1016;3,64,8)$ using a conditional expectation search \cite{MR3770276}. Investigating its structure they observed that if pairs of orthogoval{} Desarguesian affine planes could be constructed, then keeping just the columns corresponding to the points of the affine planes from the $\CA(2q^3-1;3,q^2+q+1,q)$ and deleting one copy of any repeated row would yield $\CA(2q^3-q;3,q^2,q)$ and thereby improve Torres-Jimenez and Izquierdo-Marquez's covering arrays by $q-3$ columns. A second purpose of this paper is to discuss the connection between sets of mutually orthogoval{} planes and covering arrays.
In a set of mutually orthogoval{} planes, any three points collinear in one are non-collinear in all the others. Therefore, taking the collection of all lines from each plane in a set of $s$ mutually orthogoval{} planes gives a strength 3 packing design \cite{colbourn_handbook_2007}. The {\em packing number} $D(v,k,t)$ of a $v$-set is the maximum number of $k$-subsets (blocks)
such that every $t$-subset appears in at most one block. This can be used to calculate an upper bound on $s$.
\begin{theorem} \label{thm_upperbound}
If there exists a set of $s$ mutually orthogoval{}
projective planes of order $q$ then
\[
s (q^2+q+1) \leq \D(q^2+q+1, q+1, 3).
\]
If there exists a set of $s$ mutually orthogoval{}
affine planes of order $q$ then
\[ s (q^2+q) \leq \D(q^2, q, 3).
\]
\end{theorem}
The Johnson bound \cite{colbourn_handbook_2007} on the size of packings is
\[
\D(v,k,3) \leq \left \lfloor \frac{v}{k} \left \lfloor \frac{v-1}{k-1} \left \lfloor \frac{v-2}{k-2} \right \rfloor\right \rfloor \right \rfloor.
\]
Since the blocks in $\AG(2,2)$ have size two, the set of $s$ copies of $\AG(2,2)$ is trivially mutually orthogoval{} for every $s$ (similar to the fact that every set of $s$ latin squares of order 1 is mutually orthogonal). Otherwise, the Johnson bound gives the following result.
\begin{corollary}\label{cor_upperbound}
\begin{itemize}
\item If there exists a set of $s$ mutually orthogoval{}
projective planes of order $q$, then $s \leq \max\{5,q+2\}$.
\item If there exists a set of $s$ mutually orthogoval{}
affine planes of order $q > 2$, then $s \leq \max\{7,q+2\}$.
\end{itemize}
\end{corollary}
We will either tighten or meet these upper bounds for some small values of~$q$.
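The bounds above are easy to evaluate mechanically. The following short Python sketch (illustrative only; the function names are ours) computes the Johnson bound and the resulting upper bound on $s$ from \cref{thm_upperbound}, reproducing the values in \cref{cor_upperbound} for small $q$.
\begin{verbatim}
def johnson_bound(v, k):
    """Johnson bound on the packing number D(v, k, 3)."""
    inner = (v - 2) // (k - 2)
    middle = ((v - 1) * inner) // (k - 1)
    return (v * middle) // k

def max_s_projective(q):
    v, k = q * q + q + 1, q + 1
    return johnson_bound(v, k) // v             # s*(q^2+q+1) <= D(v,k,3)

def max_s_affine(q):
    v, k = q * q, q
    return johnson_bound(v, k) // (q * q + q)   # s*(q^2+q) <= D(q^2,q,3)

for q in (2, 3, 4, 5, 7, 8):
    proj = max_s_projective(q)
    aff = max_s_affine(q) if q > 2 else None    # AG(2,2) is a trivial case
    print(q, proj, aff)
\end{verbatim}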
In \cref{sec_proj} we discuss projective planes. There we give a new proof of the existence of a pair of orthogoval{} projective planes from an algebraic geometric point of view using the Cremona transformation. We also consider whether larger sets of such projective planes exist. In \cref{sec_affine} we turn our attention to affine planes. We prove that in even characteristic there always exists a pair of planes with the desired property. When $\gcd(n,6)=1$ we construct a set of three mutually orthogoval{} affine planes of order $2^n$. We then construct sets of size seven for $q=4,8$. In \cref{sec_ca}, we discuss the connections between sets of mutually orthogoval{} planes and covering arrays. We show that for affine planes we can extend the constructed arrays by two additional columns to construct a $\CA(2q^3-q;3,q^2+2,q)$, and demonstrate the utility of sets of more than two orthogoval{} planes. In \cref{sec_final} we give some final thoughts.
\section{Projective planes}\label{sec_proj}
\subsection{A pair of orthogoval{} projective planes}
We start by giving a new proof of the fact that for every prime power $q$ there exists a pair of Desarguesian orthogoval{} $\PG(2,q)$. This proof uses the Cremona transformation, and its algebraic geometric approach offers another interesting point of view on this theorem. For a general introduction to algebraic geometry and its terminology, see \cite{MR0463157}. While we believe that the result was, broadly speaking, known to algebraic geometers, we do not know of an explicit articulation or proof of it in the literature.
We express the points of $\PG(2,q)$ with homogeneous coordinates using colons between the coordinates to make them distinct from other numbers that appear in round brackets in this article. The {\em Cremona transformation}, or {\em standard quadratic transformation} is the partial rational function $\operatorname{Cr} : \PG(2,q) \dashrightarrow \PG(2,q)$ defined by
\[
\operatorname{Cr}((x:y:z)) = (x^{-1}: y^{-1}:z^{-1}).
\]
We can extend this partial map and will use the same symbol for this extension,
\[
\operatorname{Cr}((x:y:z)) = (xyzx^{-1}: xyzy^{-1}:xyzz^{-1}) = (yz:xz:xy).
\]
This map is undefined on the intersection points of the pairs of lines $yz$, $xz$ and $xy$ (variety $V(yz,xz,xy)$), $\{(1:0:0),(0:1:0),(0:0:1)\}$, and is birational since $\operatorname{Cr}\circ \operatorname{Cr} = \operatorname{id}$. We are interested in the image of lines and conics under $\operatorname{Cr}$, which are the varieties of degree one and degree two homogeneous polynomials respectively. For example, if $\ell$ is a line, we take $\operatorname{Cr}(\ell)$ to mean the set $\{\operatorname{Cr}(p):p \in \ell\}$. This set might be a single point, a set of collinear points or a set of points lying on a conic. It may not contain all the points of this line or conic because $\operatorname{Cr}$ is a partial map. We can ``close'' the image set with the following procedure. Let $C = V(f(x,y,z))$ in $\PG(2,q)$, not contained in the lines $V(x)$, $V(y)$, or $V(z)$. Let $g(x,y,z) = f(1/x,1/y,1/z)x^a y^b z^c$, where $a$, $b$ and $c$ are minimal such that $g$ is a polynomial. Then the closure of the image of $C$ under the Cremona transformation is $\operatorname{Cr}(C) = V(g(x,y,z))$.
The Cremona transformation has the following properties which are not hard to prove. See \cite{MR1042981,MR3100243} for examples of proofs of properties of $\operatorname{Cr}$.
\begin{proposition}
\begin{itemize}
\item $\operatorname{Cr}$ is bijective on $\PG(2,q) - V(xyz)$.
\item The lines $x$, $y$ and $z$ map to points $(1:0:0)$, $(0:1:0)$ and $(0:0:1)$ respectively.
\item The line $\beta y + \gamma z$ through $(1:0:0)$ maps to the line $\gamma y + \beta z$, also through $(1:0:0)$; and similarly for lines $\alpha x + \gamma z$ and $\alpha x + \beta y$ through $(0:1:0)$ and $(0:0:1)$ respectively.
\item Each line $ax+by+cz$ that does not contain any of $(1:0:0)$, $(0:1:0)$ or $(0:0:1)$ maps to the irreducible conic $ayz+bxz+cxy$ which contains all three of $(1:0:0)$, $(0:1:0)$ and $(0:0:1)$.
\end{itemize}
\end{proposition}
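The behaviour described in the proposition can be checked directly on a small example. The following Python sketch (ours, working over the prime field $\mathbb{F}_3$ so that arithmetic is simply modular) applies $\operatorname{Cr}$ to the points of the line $x+y+z=0$, confirms that the images lie on the conic $yz+xz+xy=0$, and verifies the involution property away from the fundamental triangle.
\begin{verbatim}
from itertools import product

p = 3   # work in PG(2,3); over a prime field arithmetic is just mod p

def normalize(v):
    """Scale a nonzero vector so its first nonzero coordinate is 1."""
    for c in v:
        if c:
            inv = pow(c, p - 2, p)
            return tuple((inv * x) % p for x in v)
    raise ValueError("zero vector")

pts = {normalize(v) for v in product(range(p), repeat=3) if any(v)}

def cremona(v):
    x, y, z = v
    return normalize(((y * z) % p, (x * z) % p, (x * y) % p))

# The line x + y + z = 0 avoids (1:0:0), (0:1:0) and (0:0:1).
a, b, c = 1, 1, 1
line = [v for v in pts if (a * v[0] + b * v[1] + c * v[2]) % p == 0]

# Its image lies on the conic a*yz + b*xz + c*xy = 0.
for v in line:
    w = cremona(v)
    assert (a * w[1] * w[2] + b * w[0] * w[2] + c * w[0] * w[1]) % p == 0

# Cr is an involution away from the fundamental triangle.
for v in pts:
    if v[0] and v[1] and v[2]:
        assert cremona(cremona(v)) == v

print("checked", len(line), "points of the line; involution verified")
\end{verbatim}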
\begin{theorem} \label{thm_pair_proj}
For every prime power~$q$, there exists a pair of Desarguesian orthogoval{} $\PG(2,q)$.
\end{theorem}
\begin{proof}
Let $\omega \in \mathbb{F}_{q^3}$ so that $\omega \not\in \mathbb{F}_{q}$ and $\omega$ is not a root of $x^{3q^2} - 3x^{q^2+q+1} + x^{3q}+ x^3$. Since 0 is a root with multiplicity 3 and 1 is also a root, if $q \geq 3$ then such an $\omega$ exists by the pigeonhole principle. If $q=2$ then $\alpha+1$ satisfies these requirements where $\alpha$ is a root of $x^3+x+1$. By these conditions, $(\omega:\omega^{q}:\omega^{q^2})$, $(\omega^{q}:\omega^{q^2}:\omega)$ and $(\omega^{q^2}:\omega:\omega^{q})$ are not collinear. Let $\sigma$ be the projective linear transformation of $\PG(2, q^3)$ that maps $(1:0:0)$, $(0:1:0)$, $(0:0:1)$ and $(1:1:1)$ to $(\omega:\omega^{q}:\omega^{q^2})$, $(\omega^{q}:\omega^{q^2}:\omega)$, $(\omega^{q^2}:\omega:\omega^{q})$ and $(1:1:1)$ respectively. We conjugate the Cremona transformation by $\sigma$
\[
\operatorname{Cr}' = \sigma \circ \operatorname{Cr} \circ \sigma^{-1}
\]
and consider its action on $\PG(2,q^3)$.
Since $\operatorname{Cr}$ maps
\[
\alpha (1:0:0) + \beta (0:1:0) + \gamma (0:0:1)
\]
to
\[
\beta \gamma (1:0:0) + \alpha \gamma (0:1:0) + \alpha \beta (0:0:1),
\]
$\operatorname{Cr}'$ maps
\[ \alpha (\omega:\omega^{q}:\omega^{q^2})+ \beta (\omega^{q}:\omega^{q^2}:\omega) + \gamma (\omega^{q^2}:\omega:\omega^{q})
\]
to
\[
\beta \gamma (\omega:\omega^{q}:\omega^{q^2}) + \alpha \gamma (\omega^{q}:\omega^{q^2}:\omega) + \alpha \beta (\omega^{q^2}:\omega:\omega^{q}).
\]
Lines not containing any of $(\omega:\omega^{q}:\omega^{q^2})$, $(\omega^{q}:\omega^{q^2}:\omega)$, $(\omega^{q^2}:\omega:\omega^{q})$ are mapped to irreducible conics that contain all three of these conjugate points.
Suppose that $(x:y:z) \in \PG(2,q) \subset \PG(2,q^3)$. Express this point as a sum $(x:y:z) = \alpha (\omega:\omega^{q}:\omega^{q^2})+ \beta (\omega^{q}:\omega^{q^2}:\omega) + \gamma (\omega^{q^2}:\omega:\omega^{q})$. Then the fact that the Frobenius map fixes this point ($(x:y:z)^{q} = (x:y:z)$) implies that
\begin{equation}
\beta = \alpha^{q} \quad \mbox{ and }\quad \gamma = \alpha^{q^2}. \label{coef_ident}
\end{equation}
Conversely if $\alpha$, $\beta$ and $\gamma$ satisfy \cref{coef_ident} then $\alpha (\omega:\omega^{q}:\omega^{q^2})+ \beta (\omega^{q}:\omega^{q^2}:\omega) + \gamma (\omega^{q^2}:\omega:\omega^{q})$ is invariant under the Frobenius map and is therefore in the subplane $\PG(2,q)$.
Thus,
\begin{align*}
\operatorname{Cr}'((x:y:z)) &= \beta \gamma (\omega:\omega^{q}:\omega^{q^2}) + \alpha \gamma (\omega^{q}:\omega^{q^2}:\omega) + \alpha \beta (\omega^{q^2}:\omega:\omega^{q}) \\
&= \alpha^{q+q^2} (\omega:\omega^{q}:\omega^{q^2}) + \alpha^{1+q^2} (\omega^{q}:\omega^{q^2}:\omega) + \alpha^{1+q} (\omega^{q^2}:\omega:\omega^{q}).
\end{align*}
The coefficients $\alpha^{q+q^2}$, $\alpha^{1+q^2}$ and $\alpha^{1+q}$ satisfy the identities in \cref{coef_ident} and so $\operatorname{Cr}'((x:y:z)) \in \PG(2,q)$ if and only if $(x:y:z) \in \PG(2,q)$.
Thus $\operatorname{Cr}'$ is a bijection on $\PG(2,q)$ that maps lines to irreducible conics, and the images of the lines form a second plane which is orthogoval{} to the first.
\end{proof}
\subsection{Sets of more than two orthogoval{} projective planes}
A $\PG(2,2)$ is a Steiner triple system (STS) of order 7 and any 3 non-collinear points are an oval. Thus two $\STS(7)$ are orthogoval{} if and only if their blocksets are disjoint. It is known that there does not exist a set of three $\STS(7)$s with mutually disjoint blocksets \cite{colbourn_triple_1999}.
When $q=3$ it is known that the collection of all 715 4-sets of a 13-set can be decomposed into 55 copies of $\PG(2,3)$ \cite{MR704245}.
But not all of the lines in these copies will be ovals in the other copies: if a 4-set intersects a line in three points it is not an oval in the plane containing the line. From \cref{cor_upperbound} we know that if a set of $s$ mutually orthogoval{} projective planes in $\PG(2,3)$ exists then
\[
13s \leq \D(13,4,3) = 65,
\]
and so $s \leq 5$. Chu and Colbourn have shown that a strength 3 packing of 4-sets on 13 points with a cyclic automorphism group has at most 52 blocks \cite{MR2059987}. There does exist such a cyclic packing which can be decomposed into four projective planes of order 3. This consists of the standard Desarguesian plane, together with the points of the conics
\begin{align*}
z^2 + xy & =0\\
z^2+yz+xy &= 0\\
z^2+2yz+2xy &=0
\end{align*}
and all the images of these conics in the Singer cycle. In terms of difference sets the blocks of the Desarguesian plane are the development of $D = \{0,1,3,9\}$. The three conics above correspond to difference sets $-D$, $2D$ and $-2D$ respectively. Baker {et al}\onedot proved that no conics are repeated in this set \cite{baker_projective_1994}, and it is easy to check that no triple of points appears in more than one block. These give the blocks
\begin{align*}
\{0,1,3,9\}+x,\quad \{0,1,5,11\}+x,\quad \{0,1,4,6\}+x,\quad \{0,1,8,10\}+x
\end{align*}
for $x \in \mathbb{Z}_{13}$.
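These four block orbits are easy to verify computationally. The following Python sketch (ours) develops the four base blocks modulo 13, checks that each development is a projective plane of order 3, and checks that the four planes are pairwise orthogoval{}.
\begin{verbatim}
from itertools import combinations

m = 13
bases = [{0, 1, 3, 9}, {0, 1, 5, 11}, {0, 1, 4, 6}, {0, 1, 8, 10}]
planes = [[frozenset((b + x) % m for b in base) for x in range(m)]
          for base in bases]

# Each development is a projective plane of order 3:
# every pair of points lies on exactly one line.
for lines in planes:
    for pair in combinations(range(m), 2):
        assert sum(set(pair) <= line for line in lines) == 1

# Pairwise orthogoval: lines from different planes meet in at most 2 points.
for p1, p2 in combinations(planes, 2):
    assert all(len(l1 & l2) <= 2 for l1 in p1 for l2 in p2)

print("four mutually orthogoval planes of order 3 verified")
\end{verbatim}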
We now show that there does not exist a set of five orthogoval{} projective planes of order~3. Suppose otherwise, for a contradiction. Then the set of all their blocks forms a maximum packing with $\D(13,4,3) =65$ blocks. Each plane corresponds to a clique of size 13 in the 1-block intersection graph of the packing design. In any such packing with 65 blocks, 26 triples are uncovered, the number of blocks containing a pair of points is 5, and each point is contained in exactly 20 blocks. From this it can be deduced that the 26 missing triples are the blocks of a $\STS(13)$. Adjoining one new point to each of these and adding them to the design yields a Steiner quadruple system on 14 points (a $\SQS(14)$). Thus every $v=13$, $k=4$, $t=3$ packing with 65 blocks is obtained from an $\SQS(14)$ by the deletion of one point. There are 4 non-isomorphic $\SQS(14)$ \cite{MR302474}. Using SageMath \cite{sagemath}, the free open-source mathematics software system, for each of these $\SQS(14)$ we can delete each point and construct the 1-block intersection graph of the resulting packing.
None of these graphs has a clique of size larger than 9 (code available upon request), which gives the required contradiction.
To determine the maximum size of a set of mutually orthogoval{} projective planes of orders $q=4$ and $q=5$ we built the 1-block intersection graph of all the ovals in these two projective planes in SageMath. Cliques of size $q^2+q+1$ correspond to projective planes with ovals for blocks. A graph is then formed with all these possible cliques as vertices and adjacency determined by the corresponding pair of planes being orthogoval{}. For both $q=4$ and $q=5$, no two of these oval-planes were orthogoval{} to each other; thus there exists a pair of orthogoval{} projective planes of orders $q=4$ and $5$ but no set of three. Source code is available upon request.
We summarize the results of this subsection.
\begin{theorem}\label{thm_proj_sum}
The size of a largest set of mutually orthogoval{} projective planes in $\PG(2,q)$ is two for $q \in \{2,4,5\}$ and is four for $q=3$.
\end{theorem}
\section{Affine planes}\label{sec_affine}
Because all lines have size two in an affine plane of order 2, a collection of any number of affine planes of order 2 on the same point set is trivially mutually orthogoval{}. If $q=3$ then the large set of $\STS(9)$ gives a collection of seven mutually orthogoval{} affine planes in $\AG(2,3)$. By \cref{cor_upperbound}, this is the most possible. Up to isomorphism there are two distinct large sets of $\STS(9)$~\cite{MR2012539}.
\begin{theorem}
The size of a largest set of mutually orthogoval{} affine planes in $\AG(2,3)$ is seven.
\end{theorem}
This is the only case of orthogoval{} affine planes of odd order which we know of, whereas when $q$ is even we can construct pairs or triples of orthogoval{} affine planes and in some cases larger sets. We identify the points of $\AG(2,q)$ as the subset of points $(x:y:1)$ from $\PG(2,q)$.
\subsection{A pair of orthogoval{} affine planes}\label{sec_two_affine}
To show that there exists a pair of orthogoval{} affine planes of order $q = 2^n$, we will need irreducible cubic polynomials in $\mathbb{F}_{q}[x]$ of the form $x^3 + bx + c$. When $n$ is even, $2^n-1$ is divisible by 3 and thus there exist elements $b \in \mathbb{F}_{2^n}$ which are not cubes. For each such $b$, the polynomial $x^3 - b$ is irreducible. When $n$ is odd, every element is a cube, $f(x) = x^3$ is a bijection and all polynomials of the form $x^3 + b$ are reducible. However, irreducible cubics of the desired form still exist.
\begin{lemma}\label{cubic_lemma}
For each $n$, there exists an irreducible polynomial of the form $x^3+bx+c$ in $\mathbb{F}_{2^n}[x]$.
\end{lemma}
\begin{proof} There exist $q(q-1)$ monic quadratic polynomials $x^2+bx+c$ with $c\neq 0$, of which $(q-1)(q-2)/2 + (q-1) = q(q-1)/2$ are reducible and thus $q(q-1)/2$ are irreducible. There are a total of $q(q-1)$ cubic polynomials $x^3 + bx + c$ with $c \neq 0$. If such a cubic factors into linear terms, it must be of the form $(x-r_0)(x-r_1)(x-(r_0+r_1))$ with $r_0,r_1, r_0+r_1 \neq 0$. Thus there are $(q-1)(q-2)/6$ such reducible cubics. If such a cubic factors into an irreducible quadratic and a linear term, it must be of the form $(x^2+dx+e)(x+d)$ where $d,e \neq 0$ and the linear term is determined by the quadratic term. Thus the number of these reducible cubics matches the number of irreducible quadratics, $q(q-1)/2$. Thus there are $(q-1)(q+1)/3 > 0$ monic irreducible cubics with no $x^2$ term and non-zero constant term.
\end{proof}
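The count in \cref{cubic_lemma} can be corroborated by brute force for a small field. The sketch below (ours) realizes $\mathbb{F}_8$ as $\mathbb{F}_2[x]/(x^3+x+1)$ (this choice of modulus is an assumption made only for the illustration) and counts the polynomials $x^3+bx+c$, $c \neq 0$, that have no root, which for a cubic is equivalent to irreducibility.
\begin{verbatim}
def gf_mul(a, b, mod=0b1011, deg=3):
    """Multiply in GF(2^deg) = F_2[x]/(mod); default modulus x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= mod
    return r

q = 8

def value(b, c, r):
    """Evaluate r^3 + b*r + c in GF(8)."""
    return gf_mul(gf_mul(r, r), r) ^ gf_mul(b, r) ^ c

count = sum(1 for b in range(q) for c in range(1, q)
            # a cubic with no root in the field is irreducible
            if all(value(b, c, r) != 0 for r in range(q)))

print("irreducible cubics x^3+bx+c over GF(8):", count)
print("predicted by the lemma:", (q - 1) * (q + 1) // 3)
\end{verbatim}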
For $q=2^n$, an oval $\mathcal{O}$ in $\PG(2,q)$ such that $(x_0+x_1:y_0+y_1:1) \in \mathcal{O}$ for all $(x_0:y_0:1), (x_1:y_1:1) \in \mathcal{O}$ is a {\em translation oval} \cite{hirschfeld_projective_1998}. Every oval can be transformed in $\PGL(3,q)$ to an oval containing $(1:0:0)$, $(0:0:1)$ and $(1:1:1)$ and having nucleus $(0:1:0)$. The translation ovals in this position are exactly $\{(x^{2^m}:x:1) : x \in \mathbb{F}_q\} \cup \{(1:0:0)\}$ for $\gcd(m,n)=1$. We show that there always exists a pencil of conic translation ovals in $\PG(2,q)$. For this we need to characterize all conic translation ovals, not only those in this normalized position.
\begin{lemma}\label{ovals_as_subspaces}
If $q=2^n$, the conic oval in $\PG(2,q)$ defined by
\[
ax^2+by^2 +cz^2 + fyz + gxz +hxy= 0
\]
is a translation oval if and only if $h=c=0$.
\end{lemma}
\begin{proof}
A translation oval must contain $(0:0:1)$ so $c=0$ is necessary. Since the conic is an oval it contains a pair of points $(x_0:y_0:1)$ and $(x_1:y_1:1)$ such that $x_0y_1 \neq x_1y_0$. Thus $(x_0+x_1:y_0+y_1:1)$ is also on the oval and
\begin{align*}
ax_0^2+by_0^2 + fy_0 + gx_0 +hx_0y_0 &= 0, \\
ax_1^2+by_1^2 + fy_1 + gx_1 +hx_1y_1 &= 0, \\
a(x_0+x_1)^2+b(y_0+y_1)^2 + f(y_0+y_1) + g(x_0+x_1) +h(x_0+x_1)(y_0+y_1) &= 0. \\
\end{align*}
The sum of these three equations is
\[
h(x_0y_1+x_1y_0) = 0
\]
and since $x_0y_1+x_1y_0 \neq 0$ we must have $h=0$.
Conversely if $c=h=0$ and $(x_0:y_0:1)$ and $(x_1:y_1:1)$ are on the conic, then
\begin{align*}
ax_0^2 + by_0^2 + fy_0 + gx_0 &= 0, \\
ax_1^2 + by_1^2 + fy_1 + gx_1 &= 0.
\end{align*}
The quadratic form evaluated at $(x_0+x_1:y_0+y_1:1)$ is
\begin{align*}
a(x_0+x_1)^2 &+ b(y_0+y_1)^2 + f(y_0+y_1) + g(x_0+x_1) \\
&= (ax_0^2 + by_0^2 + fy_0 + gx_0) + (ax_1^2 + by_1^2 + fy_1 + gx_1) = 0
\end{align*}
so the conic is a translation oval.
\end{proof}
We will call two translation ovals which intersect only in $(0:0:1)$ {\em trivially intersecting}.
\begin{corollary}\label{oval_spread}
For every $q=2^n$ there exists a trivially intersecting pencil of non-singular conic translation ovals in $\PG(2,q)$.
\end{corollary}
\begin{proof}
Let $p = x^3 + bx + c$ be an irreducible cubic polynomial in $\mathbb{F}_q[x]$ (note $c \neq 0$). We note that $x^3 + b^2x + c^2$ is also irreducible. Define the conics
\begin{align*}
\phi &: x^2 + yz, \\
\chi &: y^2 + byz + cxz.
\end{align*}
Both these satisfy \cref{ovals_as_subspaces} so they are translation ovals. The conics determine the pencil
\[
\{\alpha \phi + \beta \chi: (\alpha:\beta) \in \PG(1,q)\}.
\]
The partial derivatives of $\alpha \phi + \beta \chi$ are all zero at the point $(\alpha+\beta b:\beta c:0)$ and this is not on the conic because $x^3 + b^2x + c^2$ is irreducible and thus all conics in the pencil are non-singular.
By the irreducibility of $p$, the equation $\phi(x,y,z) = \chi(x,y,z) = 0$ in $\PG(2,q)$ holds if and only if $(x:y:z) = (0:0:1)$. The other three points of intersection are a conjugate triple in $\PG(2,q^3)$. Every conic in the pencil contains these four points, thus the pencil of conics $\{\alpha \phi + \beta \chi: (\alpha:\beta) \in \PG(1,q)\}$ meet inside $\PG(2,q)$ only in $(0:0:1)$ and are otherwise disjoint.
\end{proof}
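For a concrete instance of \cref{oval_spread}, take $q=4$ and $p = x^3 + \omega$, which is irreducible since $\omega$ is a non-cube; this particular choice of $p$ is ours and is made only for illustration. The Python sketch below checks that the five conics $\alpha\phi + \beta\chi$ with $\phi = x^2+yz$ and $\chi = y^2 + \omega xz$ each have $q+1$ points and pairwise meet only in $(0:0:1)$, so that together they cover $\PG(2,4)$.
\begin{verbatim}
from itertools import product

def mul(a, b):
    """Multiplication in GF(4) = F_2[w]/(w^2+w+1), elements encoded 0..3."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b100:
            a ^= 0b111
    return r

w = 2   # the element omega

def normalize(v):
    """Scale so the first nonzero coordinate is 1 (c^{-1} = c^2 in GF(4)*)."""
    for c in v:
        if c:
            inv = mul(c, c)
            return tuple(mul(inv, t) for t in v)
    raise ValueError("zero vector")

points = {normalize(v) for v in product(range(4), repeat=3) if any(v)}
assert len(points) == 21

def phi(x, y, z): return mul(x, x) ^ mul(y, z)           # x^2 + yz
def chi(x, y, z): return mul(y, y) ^ mul(w, mul(x, z))   # y^2 + w*xz

pencil = [(1, 0), (0, 1), (1, 1), (1, 2), (1, 3)]        # (alpha:beta) in PG(1,4)
conics = [frozenset(pt for pt in points
                    if mul(a, phi(*pt)) ^ mul(b, chi(*pt)) == 0)
          for a, b in pencil]

assert all(len(c) == 5 for c in conics)                  # q+1 points each
for i in range(5):
    for j in range(i + 1, 5):
        assert conics[i] & conics[j] == {(0, 0, 1)}      # meet only in (0:0:1)

print("pencil covers PG(2,4):", len(set().union(*conics)) == 21)
\end{verbatim}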
In $\mathbb{F}_{2}^{2n}$, a set of $n$-dimensional subspaces which pairwise intersect only in the origin and which together cover every vector is a {\em spread}. The subspaces of a spread and their cosets form the lines of an affine plane of order $2^n$ which is called a {\em translation plane} because the group of all translations is a transitive automorphism group \cite{mr1434062}. The Desarguesian affine plane of order $q=2^n$ can be constructed this way using the spread of 1-dimensional $\mathbb{F}_q$-subspaces of $\mathbb{F}_q^2$, which is isomorphic to $\mathbb{F}_{2}^{2n}$ as an $\mathbb{F}_2$-vector space. We call this the {\em line spread}. These 1-dimensional subspaces correspond to the lines $\alpha x + \beta y$ for $(\alpha:\beta) \in \PG(1,q)$. They can alternatively be constructed from the cyclotomic cosets in $\mathbb{F}_{q^2}^*$. If $\omega$ is a primitive element of $\mathbb{F}_{q^2}$, the set
\[
C_0 = \langle \omega^{q+1} \rangle = \{\omega^{j(q+1)}: 0 \leq j < q-1\}
\]
is a multiplicative subgroup of $\mathbb{F}_{q^2}^*$. Its cosets are
\begin{equation}\label{line_spread}
C_i = \omega^i \langle \omega^{q+1} \rangle
\end{equation}
for $0 \leq i \leq q$. When we adjoin $0$ to each coset we get exactly the set of 1-dimensional subspaces of $\mathbb{F}_q^2$, a spread. Non-Desarguesian translation planes can also be constructed from spreads \cite{mr1434062}.
We want to understand when the planes constructed from two spreads are orthogoval{}. Let $\mathcal{S} = \{S_i\}$ and $\mathcal{T} = \{T_j\}$ be two spreads in $\mathbb{F}_{2}^{2n}$. If $x,y,z \in (u + S_i) \cap (v + T_j)$ then $0, y-x, z-x \in S_i \cap T_j$ and conversely. This gives the following result.
\begin{lemma}\label{fact:spreads}
The affine planes constructed from $\mathcal{S}$ and $\mathcal{T}$ are orthogoval{} if and only if $|S_i \cap T_j| \leq 2$ for all $i$ and $j$.
\end{lemma}
\cref{oval_spread} shows that there exists a ``spread'' of non-singular conics. The translation plane constructed from this spread is Desarguesian.
\begin{definition}
A map $f$ on $\AG(2,q)$ which is an $\mathbb{F}_2$-linear bijection and maps lines to ovals is {\em affine ovalinear}.
\end{definition}
\begin{lemma}\label{lemma_ovalinear}
If $f$ is an affine ovalinear map on $\AG(2,2^n)$ then the affine planes $\AG(2,2^n)$ and $\AG(2,2^n)^{f}$ are orthogoval{} and both Desarguesian.
\end{lemma}
\begin{proof}
Because $f$ is a bijection, the two planes $\AG(2,2^n)$ and $\AG(2,2^n)^{f}$ are isomorphic and therefore Desarguesian. Since every line in $\AG(2,2^n)^{f}$ is an oval, the two planes are orthogoval{}.
\end{proof}
We note that, by applying $f^{-1}$ to both planes, we have that $\AG(2,2^n)$ and $\AG(2,2^n)^{f}$ are orthogoval{} if and only if $\AG(2,2^n)$ and $\AG(2,2^n)^{f^{-1}}$ are orthogoval{}. We will use \cref{lemma_ovalinear} to prove the next result and again in \cref{sec_three_affine}.
\begin{theorem} \label{pg_map}
The translation plane of order $q=2^n$ constructed from a trivially intersecting pencil of non-singular conics is Desarguesian and orthogoval{} to the translation plane constructed from the line spread.
\end{theorem}
\begin{proof}
Suppose that quadratic forms $\phi$ and $\chi$ generate a trivially intersecting pencil of translation conics. Let $\psi$ be the quadratic form $z^2$. Define the map on $\PG(2,q)$, $f((x:y:z)) = (\phi(x,y,z):\chi(x,y,z):\psi(x,y,z))$. Because $\phi$ and $\chi$ are translation conics, $f$ preserves the $z$-coordinate of all points in $\PG(2,q)$ and is thus $\mathbb{F}_2$-linear on $\AG(2,q)$. Suppose that $(x:y:z)$ is on $\alpha \phi + \beta \chi + \gamma \psi$. Then $f((x:y:z)) $ is on the line $\alpha x + \beta y + \gamma z$. Since each point can be specified by the lines it intersects with, $f$ is a bijection which takes ovals to lines. Thus $f^{-1}$ is affine ovalinear and so by \cref{lemma_ovalinear} $\AG(2,2^n)$ and $\AG(2,2^n)^{f}$ are orthogoval{} and both Desarguesian.
\end{proof}
\begin{corollary}\label{orthog_ag}
There exists a pair of Desarguesian orthogoval{} affine planes of order $q=2^n$.
\end{corollary}
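Continuing the $q=4$ illustration with the pencil generated by $\phi = x^2+yz$ and $\chi = y^2+\omega xz$ (again our concrete choice, made only for illustration), the map from the proof of \cref{pg_map} restricts to the affine points as $f(x,y) = (x^2+y,\; y^2+\omega x)$. The Python sketch below checks that $f$ is a bijection of $\AG(2,4)$ and that the image plane is orthogoval{} to the original.
\begin{verbatim}
from itertools import product

def mul(a, b):
    """Multiplication in GF(4) = F_2[w]/(w^2+w+1), elements encoded 0..3."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b100:
            a ^= 0b111
    return r

q, w = 4, 2
pts = list(product(range(q), repeat=2))       # the 16 points of AG(2,4)

# Lines of AG(2,4): y = m*x + b together with the vertical lines x = c.
lines = [frozenset((x, mul(m, x) ^ b) for x in range(q))
         for m in range(q) for b in range(q)]
lines += [frozenset((c, y) for y in range(q)) for c in range(q)]

def f(pt):
    """Affine restriction of the pencil map: (x,y) -> (x^2+y, y^2+w*x)."""
    x, y = pt
    return (mul(x, x) ^ y, mul(y, y) ^ mul(w, x))

assert len({f(pt) for pt in pts}) == 16       # f is a bijection

image_lines = [frozenset(f(pt) for pt in L) for L in lines]

ok = all(len(L1 & L2) <= 2 for L1 in image_lines for L2 in lines)
print("AG(2,4) and its image under f are orthogoval:", ok)   # expect True
\end{verbatim}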
We note that the pair of $\PG(2,q)$ constructed, one from the lines $\alpha x + \beta y + \gamma z$ and the other from the set of conics $\alpha \phi + \beta \chi + \gamma \psi $, is orthogoval{} {\em except} for the line $z$. Thus any three points, not {\em all} from the line $z$, which are collinear in one $\PG(2,q)$ are non-collinear in the other. We will use this fact when discussing the connections to covering arrays in~\cref{sec_ca}.
\subsection{Sets of three orthogoval{} affine planes}\label{sec_three_affine}
For a restricted set of $n$, we can construct sets of three orthogoval{} affine planes of order $q=2^n$. As in the construction of two orthogoval{} affine planes in \cref{sec_two_affine}, we need to control the roots of specific polynomials.
\begin{lemma} \label{lemma:roots-4} If $k$ is relatively prime to $n$, then
for any $a$ and $b$ in $\mathbb{F}_{2^n}$, the equation $ax^{2^k} + bx = 0$ has at most two roots in $\mathbb{F}_{2^n}$, unless $a=b=0$.
\end{lemma}
\begin{proof}
$x=0$ is one root; any other must satisfy $ax^{2^k-1} + b = 0$. If $a=0$ this has no roots unless $b=0$. For $a \not= 0$, the sole root is the $(2^k-1)$th root of $a^{-1}b$. That root exists and is unique since $\gcd(2^n-1, 2^k-1) = 2^{\gcd(k,n)} - 1 = 1.$
\end{proof}
\begin{lemma} \label{lemma:roots-again}
If $3k$ is relatively prime to $n$, then
the polynomial $x^{2^k+1} + x +1$ has no roots in $\mathbb{F}_{2^n}$.
\end{lemma}
\begin{proof}
Suppose $r \in \mathbb{F}_{2^n}$ is a root of this polynomial and let $m=2^k$. Then from
\[ r^{m+1} = r +1 \] and taking the $m$th power (which is a Frobenius automorphism) of both sides we have
\[ r^{m^2+m} = r^m + 1 \]
so that
\[ r^{m^2+m+1} = r^{m+1} + r = 1. \]
Now taking the $(m-1)$th power gives $r^{m^3-1} = 1$, so that
\[ r \in \mathbb{F}_{m^3} = \mathbb{F}_{2^{3k}}. \]
But as we also have $r \in \mathbb{F}_{2^n}$ we conclude
\[r \in \mathbb{F}_{2^{\gcd(3k,n)}} = \mathbb{F}_2. \]
This is a contradiction, as neither 0 nor 1 is a root.
\end{proof}
A similar argument applies to a second polynomial we will encounter.
\begin{lemma} \label{lemma:more-roots}
If $3k$ is relatively prime to $n$, then
the polynomial $x^{2^{2k}-1} + x^{2^k-1} +1$ has no roots in $\mathbb{F}_{2^n}$.
\end{lemma}
\begin{proof}
Suppose $r \in \mathbb{F}_{2^n}$ is a root of this polynomial and let $m=2^k$. Then from
$r^{m^2-1} = r^{m-1} +1,$
taking the $m$th power of both sides we have
$r^{m^3-m} = r^{m^2-m} + 1.$
Multiplying by $r^{m-1}$ gives
$r^{m^3-1} = r^{m^2-1} + r^{m-1} = 1.$
So $r \in \mathbb{F}_{m^3}$, but as we also have $r \in \mathbb{F}_{2^n}$ we conclude
$r \in \mathbb{F}_{2^{\gcd(3k,n)}} = \mathbb{F}_2.$
This is again a contradiction, as neither 0 nor 1 is a root.
\end{proof}
Incidentally, this also shows (because it is $\mathbb{F}_2$-linear) that $x^{2^{2k}} + x^{2^k} + x$ is a permutation polynomial for $\mathbb{F}_{2^n}$ when $n$ is relatively prime to $3k$.
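A quick numerical check of this remark (our sketch; the moduli $x^5+x^2+1$ for $\mathbb{F}_{32}$ and $x^6+x+1$ for $\mathbb{F}_{64}$ are our choices) is given below: for $n=5$ and $k=1$ the map $x \mapsto x^{4}+x^{2}+x$ is a permutation, while for $n=6$, where $\gcd(3k,n)=3$, it is not.
\begin{verbatim}
def gf_mul(a, b, mod, deg):
    """Carry-less multiplication reduced modulo an irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= mod
    return r

def is_permutation(deg, mod, k=1):
    """Does x^(2^(2k)) + x^(2^k) + x permute GF(2^deg)?"""
    def power_of_two(x, times):          # x -> x^(2^times) by repeated squaring
        for _ in range(times):
            x = gf_mul(x, x, mod, deg)
        return x
    images = {power_of_two(x, 2 * k) ^ power_of_two(x, k) ^ x
              for x in range(1 << deg)}
    return len(images) == 1 << deg

print(is_permutation(5, 0b100101))    # n = 5, gcd(3,5) = 1: expect True
print(is_permutation(6, 0b1000011))   # n = 6, gcd(3,6) = 3: expect False
\end{verbatim}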
Let $\phi_k(x,y) = (x^{2^k}+ y,\; y^{2^k} + x + y)$ be a map defined on $\AG(2,2^n)$. A simple expansion of terms shows
\begin{equation}\label{lemma:phi-step}
\phi_{2k} = \rho \circ \phi_k\circ \phi_k \circ \rho
\end{equation}
where $\rho(x,y) = (y,x)$ is a linear transformation.
\begin{lemma} \label{lemma:phi-bijection}
For all $k \geq 1$, if $3k$ is relatively prime to $n$, then $\phi_k$ is a bijection.
\end{lemma}
\begin{proof}
Suppose $\phi_k(x,y) = \phi_k(s,t)$. Then
\begin{align*}
x^{2^k} + y &= s^{2^k} + t \text{~and} \\
y^{2^k} + x + y &= t^{2^k} + s + t
\end{align*}
giving
\begin{align*}
(x+s)^{2^k} &= y + t \text{~and} \\
(y+t)^{2^k} &= (x+s) + (y+t).
\end{align*}
Substituting $y+t$ from the first equation into the second gives
\[ (x+s)^{2^{2k}} = (x+s) + (x+s)^{2^k}. \]
That is, $x+s$ is a root of $z^{2^{2k}} + z^{2^k} + z$. Then either $x=s$, or $x+s$ is
a root of $z^{2^{2k}-1}+z^{2^k-1}+1$, which \cref{lemma:more-roots} shows to be impossible.
\end{proof}
\begin{lemma} \label{lemma:phi-ortho}
If $3k$ is relatively prime to $n$, then $\phi_k$ is affine ovalinear on $\mathbb{F}_{2^n}$.
\end{lemma}
\begin{proof}
It is clear that $\phi_k$ is $\mathbb{F}_2$-linear, and \cref{lemma:phi-bijection} shows it to be a bijection. We first find the images of lines through the origin.
The image of the line $x=0$ is the set
\begin{equation} \label{eq:line-0}
\{ (y, y^{2^k}+y) \st y \in \mathbb{F}_{2^n} \}
\end{equation}
while the image of the line $y = ux$ for $u \in \mathbb{F}_{2^n}$ is
\begin{equation} \label{eq:line-non0}
\{ (x^{2^k} + ux, u^{2^k} x^{2^k} + (u+1)x) \st x \in \mathbb{F}_{2^n} \}.
\end{equation}
Now we can look at the intersection of these sets with lines through the origin.
The line $x = 0$ meets the set~\eqref{eq:line-0} in the set
indexed by $\{ y \in \mathbb{F} \st y = 0 \}$, which has one member. It meets the set~\eqref{eq:line-non0}
in the set indexed by
\[ \{ x \in \mathbb{F} \st x^{2^k} + ux = 0 \} \]
which \cref{lemma:roots-4} shows to have at most 2 members.
For $w \in \mathbb{F}$, the line $y = wx$ meets the set~\eqref{eq:line-0} in the set
indexed by
\[ \{ y \in \mathbb{F} \st y^{2^k} + (w+1)y = 0 \}, \]
which \cref{lemma:roots-4} shows to have at most 2 members.
It meets the set~\eqref{eq:line-non0} in the set indexed by
\[ \{ x \in \mathbb{F} \st (u^{2^k}+w)x^{2^k} + (uw + u+1)x = 0 \}. \]
\cref{lemma:roots-4} applies again unless
\[ u^{2^k} + w = 0 = uw + u+1 \]
but then $w = u^{2^k}$ and $u^{2^k+1} + u + 1 = 0$.
So $u$ is a root of $x^{2^k+1} + x + 1$,
which \cref{lemma:roots-again} shows to be impossible.
\end{proof}
\begin{theorem}
If $6k$ and $n$ are relatively prime and $q=2^n$,
the standard plane $\AG(2,q)$, its image $\AG(2,q)^{\phi_k}$, and $\AG(2,q)^{\phi_k \circ \phi_k}$ are mutually orthogoval{}.
\end{theorem}
\begin{proof}
By \cref{lemma_ovalinear,lemma:phi-ortho},
we have that $\AG(2,q)$ and $\AG(2,q)^{\phi_k}$ are orthogoval{}.
Applying bijection $\phi_k$ to both of these planes shows that $\AG(2,q)^{\phi_k}$ and $\AG(2,q)^{\phi_k \circ \phi_k}$ are also orthogoval{}.
It remains to show that the standard plane and $\AG(2,q)^{\phi_k \circ \phi_k}$ are orthogoval{}.
\cref{lemma_ovalinear,lemma:phi-ortho} show that
$\AG(2,q)$ is orthogoval{} to $\AG(2,q)^{\phi_{2k}} = \AG(2,q)^{\rho \circ \phi_{k} \circ \phi_k \circ \rho}$.
Since $\rho$ is a bijection
and $\rho \circ \rho$ is the identity, we can first apply $\rho$ to those two planes to show that $\AG(2,q)^\rho$ is orthogoval{} to $\AG(2,q)^{\rho \circ \phi_{k} \circ \phi_k}$, and then, as $\AG(2,q)^\rho = \AG(2,q)$,
we have $\AG(2,q)$ orthogoval{} to $\AG(2,q)^{\phi_k \circ \phi_k}$.
\end{proof}
Clearly $n$ must always be relatively prime to 6 for this construction. For $k \in \{ 2^i3^j \st i, j \geq 0\} = \{1,2,3,4,6,8,\dotsc\}$ the condition $\gcd(n,6)=1$ is also sufficient.
\begin{corollary} \label{cor_3_affine_planes}
If $n$ is relatively prime to 6, there exists a set of three mutually orthogoval{} affine planes of order $2^n$.
\end{corollary}
If $n >3$ is prime then every $k$ satisfying $1 \leq k \leq n-1$ gives a triple of mutually orthogoval{} affine planes.
We do not know whether or not the sets for different $k$ are isomorphic.
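For a computational check of \cref{cor_3_affine_planes} (our sketch; the modulus $x^5+x^2+1$ for $\mathbb{F}_{32}$ is our choice), one can verify the case $n=5$, $k=1$ using the spread criterion of \cref{fact:spreads}: the line spread, its image under $\phi_1$ and its image under $\phi_1 \circ \phi_1$ should pairwise intersect in at most two points.
\begin{verbatim}
from itertools import combinations

DEG, MOD = 5, 0b100101            # GF(32) = F_2[x]/(x^5+x^2+1)
q = 1 << DEG

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> DEG) & 1:
            a ^= MOD
    return r

def phi(pt):
    """phi_1(x, y) = (x^2 + y, y^2 + x + y)."""
    x, y = pt
    return (mul(x, x) ^ y, mul(y, y) ^ x ^ y)

# The line spread of AG(2,32): the q+1 one-dimensional F_q-subspaces of F_q^2.
spread = [frozenset((x, mul(m, x)) for x in range(q)) for m in range(q)]
spread.append(frozenset((0, y) for y in range(q)))

def image(spr):
    return [frozenset(phi(pt) for pt in S) for S in spr]

spreads = [spread, image(spread), image(image(spread))]

# Spread criterion: the three planes are mutually orthogoval iff spread
# members from different spreads intersect in at most two points.
ok = all(len(A & B) <= 2
         for s1, s2 in combinations(spreads, 2) for A in s1 for B in s2)
print("three mutually orthogoval planes of order 32:", ok)   # expect True
\end{verbatim}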
\subsection{Sets of more than three orthogoval{} affine planes}
We can use the construction of translation affine planes from spreads to construct sets of more than three orthogoval{} affine planes for $q=4$ and $8$. The line spread $\mathcal{C}$ defined just after \cref{line_spread} constructs the standard Desarguesian affine plane on pointset $\mathbb{F}_{2^{2n}} \cong \mathbb{F}_{2}^{2n}$. Other spreads which construct isomorphic translation planes can be generated by applying elements $M \in \GL(2n,\mathbb{F}_2)$. By \cref{fact:spreads}, if the intersections of the spaces in the image spread with the spaces from the original line spread all have size no more than two then the pair of affine planes constructed is orthogoval{}. Furthermore if this is true for $M^i$ for each $i$ satisfying $1 \leq i < s$ then the $s$ spreads $\{M^i(\mathcal{C}): 0\leq i < s\}$ generate a set of $s$ mutually orthogoval{} affine planes.
This gives an efficient method for computationally searching for a set of mutually orthogoval{} affine planes of $\AG(2,2^n)$.
Generate matrices $M$ from $\GL(2n,\mathbb{F}_2)$, retaining those for which the plane constructed from $M(\mathcal{C})$ is orthogoval{} to that built from $\mathcal{C}$. Given a large set $M_1, M_2, \dots, M_r$ of such matrices, construct a graph with vertex set $M_0 = I_{2n}, M_1, M_2, \dots, M_r$ in which vertices $M_i$ and $M_j$ are joined when $M_i(\mathcal{C})$ is orthogoval{} to~$M_j(\mathcal{C})$. A clique of size $c$ in this graph is then a set of $c$ mutually orthogoval{} affine planes in $\AG(2,2^n)$. This search yielded a set of seven mutually orthogoval{} affine planes of order~$4$ (\cref{ex:q4sevenoo}) and a set of seven mutually orthogoval{} affine planes of order~$8$ (\cref{ex:q8sevenoo}).
\begin{example}
\label{ex:q4sevenoo}
Let $q=4$ and let $\omega$ be a root of the primitive polynomial $x^4+x+1$.
The cyclotomic cosets $C_i$ of $\mathbb{F}_{16}^*$ and the corresponding additive subgroups $S_i$ of $\mathbb{F}_2^4 \cong \mathbb{F}_{16}$ (using the $\mathbb{F}_2$-linear map $(t_0 t_1 t_2 t_3) \rightarrow t_0 \omega^3 + t_1 \omega^2 + t_2 \omega + t_3$) are given by
\[
\begin{array}{lll}
i & \mbox{$C_i$ (multiplicative)} & \mbox{$S_i$ (additive)}
\\ \hline
0 & \{1, \omega^5,\omega^{10} \} & \{0000, 0001, 0110, 0111\} \\
1 & \{\omega,\omega^6,\omega^{11} \} & \{0000, 0010, 1100, 1110\} \\
2 & \{\omega^2,\omega^7,\omega^{12} \} & \{0000, 0100, 1011, 1111\} \\
3 & \{\omega^3,\omega^8,\omega^{13} \} & \{0000, 1000, 0101, 1101\} \\
4 & \{\omega^4,\omega^9,\omega^{14} \} & \{0000, 0011, 1010, 1001\}.
\end{array}
\]
The 4 lines of the parallel class containing $S_0$ are the cosets
\begin{align*}
L_{01} = 0000 + S_0 &= \{0000, 0001, 0110, 0111\} \\
L_{02} = 0010 + S_0 &= \{0010, 0011, 0100, 0101\} \\
L_{03} = 1000 + S_0 &= \{1000, 1001, 1110, 1111\} \\
L_{04} = 1010 + S_0 &= \{1010, 1011, 1100, 1101\}.
\end{align*}
The remaining 16 lines of the Desarguesian affine translation plane $\AG(2,4)$ are formed similarly from $S_1, \dots, S_4$.
Let
\[
M_{4} =
\begin{bmatrix}
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1
\end{bmatrix}.
\]
Then $\{ M_4^{i}(\AG(2,4)): 0 \leq i < 7\}$ is a set of seven mutually orthogoval{} affine planes of order~$4$.
\end{example}
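The example can be verified with a few lines of Python using the spread criterion of \cref{fact:spreads}. One caveat: the sketch below assumes that $M_4$ acts on the bit strings $t_0t_1t_2t_3$ as a matrix on column vectors over $\mathbb{F}_2$; if the authors' convention differs, replacing $M_4$ by its transpose is the other natural reading.
\begin{verbatim}
from itertools import combinations

# The spread subgroups S_0,...,S_4 listed above, as bit strings t0 t1 t2 t3.
S = [["0000", "0001", "0110", "0111"],
     ["0000", "0010", "1100", "1110"],
     ["0000", "0100", "1011", "1111"],
     ["0000", "1000", "0101", "1101"],
     ["0000", "0011", "1010", "1001"]]
spread = [frozenset(tuple(int(b) for b in s) for s in grp) for grp in S]

M4 = [[0, 1, 1, 0],
      [0, 0, 0, 1],
      [1, 1, 0, 0],
      [0, 0, 1, 1]]

def apply(M, v):
    """M acting on a column vector over F_2 (assumed convention)."""
    return tuple(sum(M[i][j] & v[j] for j in range(4)) & 1 for i in range(4))

def power_image(i):
    """The spread of the plane M4^i(AG(2,4))."""
    def act(v):
        for _ in range(i):
            v = apply(M4, v)
        return v
    return [frozenset(act(v) for v in grp) for grp in spread]

spreads = [power_image(i) for i in range(7)]

# Spread criterion: pairwise orthogoval iff members meet in at most 2 points.
ok = all(len(A & B) <= 2
         for s1, s2 in combinations(spreads, 2) for A in s1 for B in s2)
print("seven mutually orthogoval planes of order 4:", ok)
\end{verbatim}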
With $\mathbb{F}_4$ generated by $x^2+x+1$ and primitive element $\zeta = \omega^5$, the planes $M_4^i(\AG(2,4))$ for $1 \leq i < 7$ correspond to the six pencil spreads of conics formed from the pairs
\begin{align*}
\phi_1 &: x^2 + (\zeta+1) yz + \zeta xz &\chi_1 &: y^2 + yz + xz;\\
\phi_2 &: x^2 + (\zeta+1)yz + xz &\chi_2 &: y^2 + (\zeta+1)xz;\\
\phi_3 &: x^2 + xz +\zeta xy &\chi_3 &: y^2 + \zeta yz + \zeta xz; \\
\phi_4 &: x^2 + \zeta yz +\zeta xz &\chi_4 &: y^2 + yz + (\zeta+1) xz; \\
\phi_5 &: x^2 + yz + xz &\chi_5 &: y^2 + yz + \zeta xz; \\
\phi_6 &: x^2 + \zeta yz &\chi_6 &: y^2 + xz.\\
\end{align*}
The union of the lines from all seven of the planes forms a $\SQS(16)$. A decomposition of an $\SQS(16)$ into affine planes was previously known \cite{mathon_searching_1997} but this is an interesting method of construction. It is unknown if the 1820 4-subsets of a 16-set can be partitioned into $\SQS(16)$s \cite{MR4113538}; this would be a large set of Steiner Quadruple Systems. Because the cyclotomic cosets $C_i$ are all disjoint, the collection of all cyclotomic cosets over these seven spreads forms the blocks of a resolvable Steiner triple system of order 15, a solution to Kirkman's schoolgirl problem of 1850 \cite{kirkman_query_1850}:
\begin{quote}
``Fifteen young ladies in a school walk out three abreast for seven days in succession: it is required to arrange them daily, so that no two shall walk twice abreast.''
\end{quote}
This design is the derived design of the $\SQS(16)$ at the point $0$. Because the planes have a transitive automorphism group, deriving at any other point gives an isomorphic resolvable $\STS(15)$.
\begin{example}
\label{ex:q8sevenoo}
For $q=8$, let $\mathbb{F}_{64}$ be generated by a primitive element with minimum polynomial $x^6+x+1$. Construct the Desarguesian affine translation plane $\AG(2,8)$. Let
\[
M_{6} =
\begin{bmatrix}
0 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 & 0 & 1 \\
0 & 0 & 1 & 1 & 1 & 0
\end{bmatrix}.
\]
Then $\{ M_6^{i}(\AG(2,8)): 0 \leq i < 7\}$ is a set of seven mutually orthogoval{} affine planes of order~$8$.
\end{example}
Only three of these affine planes, those for $i=1,2,4$, are formed from a pencil spread of conics. With $\mathbb{F}_8$ generated from $x^3 + x^2 + 1$ and $\omega$ as its root, the pencils of conics are formed from
\begin{align*}
\phi_1 &: x^2 + \omega yz + \omega xz &\chi_1 &: y^2 + (\omega^2+\omega+1)yz + (\omega+1)xz;\\
\phi_2 &: x^2 + (\omega^2+\omega)yz + xz &\chi_2 &: y^2 + yz + (\omega^2+1) xz;\\
\phi_4 &: x^2 + (\omega^2)yz + (\omega+1) xz +\alpha xy &\chi_4 &: y^2 + (\omega^2+\omega)yz + (\omega^2+\omega) xz.
\end{align*}
The spreads for $i = 3,5,6$ are spreads of conics but are not pencils.
The procedure to find suitable matrices $M \in \GL(2n,\mathbb{F}_2)$ was implemented in~C, and the clique-finding algorithm was implemented in \emph{Mathematica}.
The random search was modified in order to allow the C program to run in reasonable time for $q=16$. Rather than seeking a suitable matrix $M$ directly, we choose the first $2n-1$ rows of $M$ randomly and calculate its product with the first $2n-1$ coordinates of each line~$S_i$. If no four such products from the nonzero points of a line $S_i$ occur in the set of products from a single line $MS_{i'}$, then we proceed to test all possible final rows for~$M$; otherwise we discard the partial matrix~$M$.
For $q=8$, the C program running on a 2013 iMac desktop takes on average less than 0.2 milliseconds to find each suitable matrix $M$, and a set of $1200$ such matrices was sufficient to find the first clique of size 7. We later refined it to the solution given in \cref{ex:q8sevenoo}. No clique of size larger than 7 was found from a set of 50,000 such matrices.
For $q=16$, the C program takes on average 15 seconds to find each suitable matrix~$M$. A set of $1,000,000$ such matrices was collected over a number of months on several machines. This set is too large for the clique-finding algorithm of \emph{Mathematica} to handle, but it is straightforward to search in C for a pair of non-identity matrices $M$, $M'$ from the set for which $M(\AG(2,16))$ is orthogoval{} to $M'(\AG(2,16))$; together with the identity matrix $I_8$, this would give a set of three mutually orthogoval{} affine planes of order~$16$. Unfortunately, no such pair was found. The set of $1,000,000$ suitable matrices does not contain an example $M$ for which $M^2$ is also suitable. It remains an open question as to whether there exists a set of more than two mutually orthogoval{} affine planes of order $2^n$ with $\gcd(n,6) > 1$ and $n>3$.
We summarize the results of this subsection, using \cref{cor_upperbound}.
\begin{theorem} \label{thm_affine_sum}
\begin{itemize}
\item The size of a largest set of mutually orthogoval{} affine planes in $\AG(2,4)$ is seven.
\item There exists a set of seven mutually orthogoval{} affine planes in $\AG(2,8)$.
\end{itemize}
\end{theorem}
\section{Connections to covering arrays}\label{sec_ca}
Here we discuss the connection between sets of $s$ mutually orthogoval{} planes, covering perfect hash families and covering arrays.
\begin{definition} A {\em covering perfect hash family} $\CPHF_{\lambda}(n;t,k,q)$ is an $n \times k$ array of elements from $\mathbb{F}_q^t\setminus \{\vec{0}\}$ (equivalently the points of $\PG(t-1,q)$) such that, for each set $T$ of $t$ columns, there exist at least $\lambda$ rows whose entries in the columns of $T$ are linearly independent. If the vector entries all have non-zero last coordinate (equivalently, they are the affine points of $\PG(t-1,q)$) then the $\CPHF$ is a {\em Sherwood covering perfect hash family} $\SCPHF_{\lambda}(n;t,k,q)$.
\end{definition}
\begin{definition} \label{def:CA} A {\em covering array} $\CA_{\lambda}(N;t,k,v)$ is an $N \times k$ array of elements from a $v$-set such that, for each set $T$ of $t$ columns and each $t$-tuple from the $v$-set, there exist at least $\lambda$ rows whose entries in the columns of $T$ match the $t$-tuple.
\end{definition}
Covering perfect hash families are used to construct covering arrays. We extend the standard constructions \cite{MR3770276,MR2214495} by considering the index of the constructed covering array.
\begin{theorem}
Suppose that $C$ is a $\CPHF_{\lambda}(n; t, k,q)$ and $\lambda' \leq \lambda$. Then there exists a $\CA_{\lambda'}(n(q^t-1) + \lambda' ; t,k, q)$; and
there exists a $\CA_{\lambda'}(n(q^t-q)+\lambda'q; t, k , q)$ if $C$ is an $\SCPHF(n; t,k, q)$.
\end{theorem}
\begin{proof}
See \cite{MR3770276,MR2214495} for the basic proof. The extension to higher index is straightforward. For a given set $T$ of $t$ columns, {\em any} row in which the entries from $T$ are linearly independent contributes one copy of every $t$-tuple over $\mathbb{F}_q$ to the covering array, so every $t$-tuple appears at least $\lambda$ times. Every row of a $\CPHF$ contributes an all-zero row to the covering array. Every row of an $\SCPHF$ contributes each of the $q$ constant rows to the covering array. In each case all but $\lambda'$ copies of each repeated row can be deleted.
\end{proof}
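The expansion step in this construction is mechanical, as the following Python sketch illustrates on a one-row example over $\mathbb{F}_2$ (the four columns $e_1,e_2,e_3,(1,1,1)$ are our illustrative choice and are not taken from the paper). Every three of these vectors are linearly independent, so the $q^t = 8$ dot-product rows already form a $\CA(8;3,4,2)$; with more CPHF rows one would additionally delete the surplus copies of the repeated constant rows as in the proof.
\begin{verbatim}
from itertools import product, combinations

q, t = 2, 3

# A one-row CPHF(1;3,4,2): any three of these columns are linearly
# independent over F_2 (a small illustrative example, not from the paper).
cphf_row = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]

def dot(r, c):
    return sum(a * b for a, b in zip(r, c)) % q

# Expansion: one covering-array row for every vector r in F_q^t.
ca = [[dot(r, c) for c in cphf_row] for r in product(range(q), repeat=t)]

def is_covering_array(rows, t, q):
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < q ** t:
            return False
    return True

print(len(ca), "rows; strength-3 covering array:", is_covering_array(ca, t, q))
\end{verbatim}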
\begin{theorem}\label{thm_orthog2cphf}
Suppose that a set of $s$ mutually orthogoval{} Desarguesian projective (affine) planes exists. Then there exists a $\CPHF_{s-1}(s;3,q^2+q+1,q)$ ($\SCPHF_{s-1}(s;3,q^2,q)$).
\end{theorem}
\begin{proof}
Represent the points of $\PG(2,q)$ as length $3$ vectors in homogeneous coordinates over $\mathbb{F}_q$ and the points of $\AG(2,q)$ as length $3$ vectors in homogeneous coordinates over $\mathbb{F}_q$ with last entry $1$. For each $i$ satisfying $1 \leq i \leq s$, let $\phi_i$ be an isomorphism of the $i$th plane in the set onto the first plane.
Let $C$ be the array with $s$ rows and columns indexed by the elements of the common point set of the planes. Define the entries of $C$ by
\[
C_{iz} = \phi_i(z).
\]
Let $T$ be a set of three columns corresponding to points $z_1$, $z_2$ and $z_3$. When these three points are non-collinear in the plane $i$, the corresponding entries in row $i$ are linearly independent. If $z_1$, $z_2$ and $z_3$ appear on a line in one of the planes, then by the definition of orthogoval{} they must be non-collinear in all other $s-1$ planes. Thus there are at least $s-1$ rows with linearly independent entries in columns $z_1$, $z_2$ and $z_3$.
\end{proof}
The covering arrays constructed from these Sherwood covering perfect hash families can be extended by an additional two columns.
\begin{theorem}
Suppose that a set of $s$ mutually orthogoval{} Desarguesian affine planes exists. Then there exists a $\CA_{s-1}(sq^3-q; 3, q^2+2 , q)$.
\end{theorem}
\begin{proof}
The affine plane embeds uniquely into the projective plane so we can extend the set of $s$ mutually orthogoval{} affine planes to a set of $s$ projective planes which are orthogoval{} {\em except} for the line $z$. From these we construct the $s \times (q^2+q+1)$ array $C$ as in the proof of \cref{thm_orthog2cphf} defined by
\[
C_{iz} = \phi_i(z).
\]
Because the planes have the property that any three points not {\em all} on the line $z$ which are collinear in one are non-collinear in all the others, if we delete all but two of the columns of $C$ which are indexed by the points on the line $z$, the resulting array is a $\CPHF_{s-1}(s;3,q^2+2,q)$.
When constructing the covering array from this extended hash family we observe that the repeated rows arise from multiplying the vectors $r$ that are non-zero only in the last position with the entries of $C$, see~\cite{MR3770276}. The products of these vectors $r$ with the two points kept from the line $z$ ($(1:0:0)$ and $(0:1:0)$ for example) are all zero, so these rows are still repeated in the covering array extended by two columns and the appropriate number can still be deleted.
\end{proof}
From our constructions of sets of mutually orthogoval{} Desarguesian projective planes in \cref{thm_pair_proj,thm_proj_sum} we can build the corresponding covering perfect hash families and covering arrays.
\begin{corollary}
\begin{itemize}
\item For all prime powers $q$ there exists a $\CA(2q^3-1;3,q^2+q+1,q)$.
\item For $\lambda <4$ there exists a $\CA_{\lambda}(27\lambda+26;3,13,3)$.
\end{itemize}
\end{corollary}
From our constructions of sets of mutually orthogoval{} Desarguesian affine planes summarized in \cref{orthog_ag} and \cref{thm_affine_sum} we can build the corresponding covering perfect hash families and covering arrays.
\begin{corollary}
\begin{itemize}
\item For all prime powers $q=2^n$ there exists a $\CA(2q^3-q;3,q^2+2,q)$.
\item For all prime powers $q=2^n$ with $\gcd(n,6)=1$ and $\lambda < 3$, there exists a $\CA_{\lambda}(\lambda q^3+q^3-q;3,q^2+2,q)$.
\item For $\lambda <7$ there exists a $\CA_{\lambda}(27\lambda+24;3,11,3)$.
\item For $\lambda <7$ there exists a $\CA_{\lambda}(64\lambda+60;3,18,4)$.
\item For $\lambda <7$ there exists a $\CA_{\lambda}(512\lambda+504;3,66,8)$.
\end{itemize}
\end{corollary}
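One way to check these row counts against the deletion rule described above: using $\lambda+1$ rows of the hash family produces $(\lambda+1)q^3$ rows of the covering array, and all but $\lambda$ copies of the all-zero row (in the $\CPHF$ case) or of each of the $q$ constant rows (in the $\SCPHF$ case) can be deleted, leaving $(\lambda+1)q^3-1$ and $(\lambda+1)q^3-q$ rows respectively. For example, $q=3$ gives $27\lambda+26$ and $27\lambda+24$, and $q=4$ gives $64\lambda+60$, matching the corollaries above.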
\begin{comment}
\begin{corollary}
If there exists a set of For all $n>2$ there exists a $\CA(2^{3n+1}-2^n;3,2^{2n}+2,2^n)$
\end{corollary}
\begin{proof}
Let $q = 2^n$ and $\alpha$ primitive in $\mathbb{F}_{q^3}$. Let $A$ be the $q^3 \times q^2+q+1$ LFSR array with
\[
A_{ij} = \begin{cases} 0 &\mbox{if } i = q^3-1 \\ \Tr(\alpha^{i+j}) &\mbox{otherwise}. \end{cases}
\]
This array has some important properties \cite{MR3248524}:
\begin{enumerate}
\item \label{prop_1} Columns $\{j_1,j_2,j_3\}$ are covered in $A$ if and only if $\{ \alpha^{j_1}, \alpha^{j_2}, \alpha^{j_3}\}$ is linearly independent over $\mathbb{F}_q$.
\item \label{prop_2} If $\{ \alpha^{j_1}, \alpha^{j_2}, \alpha^{j_3}\}$ is linearly dependent over $\mathbb{F}_q$ then there exists $q$ rows (including the all 0 row) where $A$ is 0 in columns $j$ where $\alpha^j$ is linearly dependent on $\{ \alpha^{j_1}, \alpha^{j_2}, \alpha^{j_3}\}$ and only 0 in these columns.
\item \label{prop_3} Rows $i_1 \neq q^3-1$ and $i_2 \neq q^3-1$ are ($\mathbb{F}_q$) scalar multiples of each other if and only if $i_1-i_2$ is a multiple of $q^2+q+1$.
\end{enumerate}
\noindent If $\alpha^j = c_0 + c_1\alpha + c_2\alpha^2$, we will associate column $j$ with the point $(c_0:c_1:c_2) \in \PG(2,q)$. The 3-sets of columns in $A$ that are not covered correspond exactly to triples of collinear points.
By properties~\ref{prop_2} and~\ref{prop_3}, there exists a $i_0<q^2+q+1$ such that $\Tr(\alpha^{i})=\Tr(\alpha^{i+1})=0$. For each column, $j$, if $\Tr(\alpha^{i_0+j})$ is not zero then divide every entry by $\Tr(\alpha^{i_0+j})$. Thus for any $0 \leq k < q-1$, the row $k(q^2+q+1)+i_0$ has only two values
\begin{itemize}
\item $0$ in columns indexed by the $j$ for which $\alpha^j$ is linearly dependent on $1$ and $\alpha$, or equivalently the columns indexed by the points of the line $z$,
\item $\alpha^{k(q^2+q+1)} \in \mathbb{F}_q$ otherwise.
\end{itemize}
Let $A'$ be the array resulting from permuting the columns of $A$ by map $f$ from Theorem~\ref{pg_map}. We note that $(1:0:0)$ and $(0:1:0)$, corresponding to $1$ and $\alpha$ respectively, are fixed points of $f$ so columns 0 and 1 are the same in $A$ and $A'$. The columns corresponding to remaining points of the line $z$, $j = \log_{\alpha}(1+\lambda\alpha)$, are all permuted amongst each other and we delete these columns from both $A$ and $A'$. Concatenate $A$ and $A'$ vertically.
If $\{ \alpha^{j_1}, \alpha^{j_2}, \alpha^{j_3}\}$ is linearly independent then columns $\{j_1,j_2,j_3\}$ are covered in $A$. If $\{ \alpha^{j_1}, \alpha^{j_2}, \alpha^{j_3}\}$ is linearly dependent, they contain at most one of $1$ and $\alpha$ because we have deleted all the other columns of the line through this pair of points. By Corollary~\ref{orthog_ag} the points corresponding to columns $j_1$, $j_2$ and $j_3$ are not collinear in the $\AG(2,q)$ under the map $f$ and so these columns are covered in $A'$.
The rows $q^3-1$ and $k(q^2+q+1)+i_0$ of $A'$ for $0 \leq k < q-1$ are identical to the same rows of $A$ and thus the copy in $A'$ may be deleted without affecting coverage.
\end{proof}
\end{comment}
We believe these are likely the best known constructions for higher-index covering arrays in the instances where the sets of mutually orthogoval{} planes exist.
\section{Final thoughts}\label{sec_final}
Many open problems remain. Foremost among them is to determine the largest size of a set of mutually orthogoval{} planes of order $q$ for all prime powers $q$. We have some exact values for small $q$ in Theorems \ref{thm_proj_sum} and \ref{thm_affine_sum}. We have the upper bound, usually $q+2$, of \cref{thm_upperbound} and the lower bound of 2 from \cref{thm_pair_proj} and \cref{orthog_ag}, and for some $q$ we have a lower bound of 3 affine planes from \cref{cor_3_affine_planes}. There is a large gap between the lower and upper bounds for all but the smallest $q$.
A secondary question is how many essentially different maximal sets of mutually orthogoval{} planes of order $q$ exist. Even in the case of the pairs or triples of mutually orthogoval{} planes constructed in this paper, we do not know if the constructed sets are non-isomorphic.
It may be possible to use \cref{thm_pair_proj} to find sets of more than two mutually orthogoval{} planes. If we pick two different sets of three conjugate points, we can use each to build other planes on the point set of the standard plane that are each orthogoval{} to the standard plane. The transformation from the second to third plane is just the composition of the two Cremona transformations that formed each. If this composition is itself a conjugated Cremona then we may have constructed a set of three mutually orthogoval{} projective planes.
An alternative approach to construct more than two orthogoval{} projective planes is to use difference sets. Baker {et al}\onedot \cite{baker_projective_1994} prove that if $D \subset \mathbb{Z}_{n^2+n+1}$ is a difference set, then $-D$, $2D$ and $D/2$ are all difference sets that are conics. These could potentially be used to create a set of more than two mutually orthogoval{} planes of orders other than 3. For example $-D/2 = 2D \neq D$ if and only if $-4$ is a multiplier of $D$ and 2 is not. In this case, all three of $D$, $-D$ and $2D $ must be distinct and are each related to each other by multipliers of $-1$, $2$ or $1/2$ and thus form ovals in the planes generated by the other. Gordon, Mills and Welch showed that the only multipliers of a Desarguesian planar difference set in $\mathbb{Z}_{q^2+q+1}$ for $q = p^e$ are the powers of $p$ in $\mathbb{Z}_{q^2+q+1}$ \cite{MR146135}. Thus if there exists $q = p^e$ such that $p^d \equiv -4 \pmod{q^2+q+1}$ and 2 is not generated by $p$, then there exists a set of three Desarguesian orthogoval{} projective planes of order $q$. Unfortunately, 3 is the only prime power up to 10,000,000 for which this is true.
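A brute-force version of this multiplier check is easy to script. The following is a minimal sketch (ours, not the search actually used for the computation reported above); the small bound keeps the naive subgroup-membership loop fast, whereas a search up to 10,000,000 requires something more efficient.
\begin{verbatim}
def in_cyclic_subgroup(target, p, m):
    # is target in the multiplicative subgroup generated by p modulo m?
    x = 1
    while True:
        if x == target % m:
            return True
        x = (x * p) % m
        if x == 1:
            return False

def qualifies(q, p):
    # some power of p is -4 mod q^2+q+1, while 2 is not generated by p
    m = q * q + q + 1
    return in_cyclic_subgroup(-4, p, m) and not in_cyclic_subgroup(2, p, m)

def prime_powers(limit):
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    for p in range(2, limit):
        if is_prime(p):
            q = p
            while q < limit:
                yield q, p
                q *= p

LIMIT = 200  # illustrative; the text above reports a search up to 10,000,000
print([q for q, p in prime_powers(LIMIT) if qualifies(q, p)])  # prints [3]
\end{verbatim}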
It is also natural to generalize the definition of orthogoval{} to other geometric objects. Possibilities that are intriguing include inversive planes and higher dimensional projective and affine spaces.
\section{Acknowledgements}
We would like to thank Erin Lanus and Kaushik Sarkar. They found an $\SCPHF(2;3,8,64)$ in a conditional expectation search. This turned out to have the structure of a pair of orthogoval{} affine planes of order 8 and discussion of this structure in several venues was the impetus for the research presented in this article. We thank Stefaan De Winter who pointed out that the set of seven mutually orthogoval{} affine planes of order 4 constructs a Kirkman triple system of order 15.
\bibliographystyle{plain}
| {
"timestamp": "2022-10-24T02:13:30",
"yymm": "2210",
"arxiv_id": "2210.11961",
"language": "en",
"url": "https://arxiv.org/abs/2210.11961",
"abstract": "A pair of planes, both projective or both affine, of the same order and on the same pointset are orthogoval if each line of one plane intersects each line of the other plane in at most two points. In this paper we prove new constructions for sets of mutually orthogoval planes, both projective and affine, and review known results that are equivalent to sets of more than two mutually orthogoval planes. We also discuss the connection between sets of mutually orthogoval planes and covering arrays.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Sets of mutually orthogoval projective and affine planes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.983596967483837,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7099044290163786
} |
https://arxiv.org/abs/0711.3399 | Improved Poincare inequalities with weights | In this paper we prove that if $\Omega\in\mathbb{R}^n$ is a bounded John domain, the following weighted Poincare-type inequality holds: $$ \inf_{a\in \mathbb{R}}\| (f(x)-a) w_1(x) \|_{L^q(\Omega)} \le C \|\nabla f(x) d(x)^\alpha w_2(x) \|_{L^p(\Omega)} $$ where $f$ is a locally Lipschitz function on $\Omega$, $d(x)$ denotes the distance of $x$ to the boundary of $\Omega$, the weights $w_1, w_2$ satisfy certain cube conditions, and $\alpha \in [0,1]$ depends on $p,q$ and $n$. This result generalizes previously known weighted inequalities, which can also be obtained with our approach. | \section*{}
\setcounter{equation}{0}
\title[]{Improved Poincar\'e inequalities with weights}
\author{Irene Drelichman}
\address{Departamento de Matem\'atica, Facultad de Ciencias Exactas y Naturales,
Universidad de Buenos Aires, 1428 Buenos Aires, Argentina} \email{irene@drelichman.com}
\author{Ricardo G. Dur\'an}
\address{Departamento de Matem\'atica, Facultad de Ciencias Exactas y Naturales,
Universidad de Buenos Aires, 1428 Buenos Aires, Argentina} \email{rduran@dm.uba.ar}
\thanks{Supported by ANPCyT under grant PICT 03-13719, by Universidad de Buenos Aires under grant X052 and by
CONICET under grant PIP 5478. The authors are members of CONICET,
Argentina.}
\keywords{weighted Sobolev inequality, weighted Poincar\'e inequality, reverse doubling weights, John domains}
\subjclass[2000]{46E35, 26D10}
\begin{abstract}
In this paper we prove that if $\Omega\subseteq\mathbb{R}^n$ is a bounded John domain, the
following weighted Poincar\'e-type inequality holds:
$$
\inf_{a\in \mathbb{R}}\| (f(x)-a) w_1(x) \|_{L^q(\Omega)} \le C
\|\nabla f(x) d(x)^\alpha w_2(x) \|_{L^p(\Omega)}
$$
where $f$ is a locally Lipschitz function on $\Omega$, $d(x)$ denotes the distance of $x$ to the boundary of $\Omega$,
the weights $w_1, w_2$ satisfy certain cube conditions, and $\alpha \in [0,1]$ depends on $p,q$ and $n$.
This result generalizes previously known weighted inequalities, which can also be obtained with
our approach.
\end{abstract}
\maketitle
\section{Introduction}
The purpose of this paper is to present a simple unified approach
to prove weighted Poincar\'e-type inequalities in John domains.
The class of John domains was first introduced in \cite{J} and
named after the author of that paper by Martio and Sarvas
\cite{MS}. It contains Lipschitz domains as well as other domains
with very non-regular boundaries, and it has played an important role
in several problems in analysis. In particular, as it has been made
clear in \cite{BK}, it is closely connected to the improved
Poincar\'e inequalities we are interested in.
The Sobolev-Poincar\'e inequality
\begin{equation}
\label{sob-poinc} \inf_{a \in \mathbb{R}} \| f(x) -
a\|_{L^{\frac{np}{n-p}}(\Omega)} \le C \|\nabla f(x) \|_{L^p(\Omega)}
\end{equation}
with $\Omega \subseteq \mathbb{R}^n$ being a John domain, and $f\in W^{1,p}(\Omega)$, was proved in the case $1<p<n$ in
\cite{M}, and later extended to the case $p=1$ in \cite{Bo}. See
also \cite{H} for proofs, other references and a nice
account on the history of this problem.
Moreover, it was proved in \cite{BK} that John domains are
essentially the largest class of domains for which this inequality
can hold, more precisely, if $\Omega \subseteq \mathbb{R}^n$ is a
domain of finite volume that satisfies a separation property
(cf.\cite{BK}) and $1\le p<n$, then $\Omega$ satisfies the
Sobolev-Poincar\'e inequality if and only if it is a John
domain.
The Sobolev-Poincar\'e inequality can be seen as a special case of
a much wider family of so-called improved Poincar\'e inequalities.
Indeed, it was proved in \cite{R} that if $\Omega\subseteq
\mathbb{R}^n$ is a bounded John domain, and $f\in L_{loc}^1
(\Omega)$ is such that $\nabla f(x) d(x)^\alpha \in L^p(\Omega)$,
then
\begin{equation}
\label{improv-poinc}
\inf_{a \in \mathbb{R}} \| f(x) - a\|_{L^q(\Omega)} \le C
\|\nabla f(x) d(x)^\alpha\|_{L^p(\Omega)}
\end{equation}
whenever
$1<p\le q\le \frac{np}{n-p(1-\alpha)}$ with $p(1-\alpha)<n$, and
$\alpha \in [0,1]$, with $d(x)$ being the distance of a point $x$
to the boundary of $\Omega$ (the same inequality holds for
unbounded John domains with $1\le p\le q = \frac{np}{n-p(1-\alpha)}$).
Letting $\alpha=0$ in (\ref{improv-poinc}) one clearly obtains
inequality (\ref{sob-poinc}).
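For instance, taking $n=3$, $p=2$ and $\alpha=1/2$, one has $p(1-\alpha)=1<3$ and $\frac{np}{n-p(1-\alpha)}=3$, so that (\ref{improv-poinc}) holds for every $2\le q\le 3$.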
A further generalization of Poincar\'e inequalities in weighted
spaces was made in \cite{C} for bounded John domains. It was shown in that paper
that under certain cube conditions on the weights $w_1, w_2$, the
following inequality holds for bounded John domains:
\begin{equation}
\label{poinc-pesos}
\inf_{a \in \mathbb{R}} \| (f(x) - a) w_1(x)\|_{L^q(\Omega)} \le C
\|\nabla f(x) w_2(x)\|_{L^p(\Omega)}
\end{equation}
whenever $f$ is a Lipschitz function and $1<p\le q<\infty$. Notice
that the author of \cite{C} refers to domains satisfying the Boman
chain condition, but for connected domains in $\mathbb{R}^n$ this
is exactly the same class as that of John (see \cite{B} for proof
of this inequality even in a much more general context).
Inequality (\ref{poinc-pesos}) can also be extended to unbounded John domains as
it was done in \cite{R} for the case of (\ref{improv-poinc}) (see \cite{R2}). Both
results rely heavily on the main theorem of \cite{V}, which states that an unbounded John
domain can be written as an increasing union of bounded John domains in a way that
allows one to pass to the limit using the dominated convergence theorem.
As we did for inequality (\ref{sob-poinc}), we could also think of
inequality (\ref{poinc-pesos}) as a special case of a wider
family of inequalities explicitly involving powers of the distance
to the boundary. Indeed, we will prove in this paper that if $f$ is a locally
Lipschitz function on $\Omega$
\begin{equation}
\label{improv-pesos}
\inf_{a \in \mathbb{R}} \| (f(x) - a) w_1(x)\|_{L^q(\Omega)} \le C
\|\nabla f(x) d(x)^\alpha w_2(x) \|_{L^p(\Omega)}
\end{equation}
for suitable weights $w_1, w_2$, and with $\alpha$ depending on
$p, q$ as in inequality (\ref{improv-poinc}), thus extending the
results in \cite{C}. Notice that when the locally Lipschitz functions are dense
in the involved weighted norms, this result extends to functions
in the corresponding weighted Sobolev spaces.
It is worth noting that the technique we will use for the proof of
inequality (\ref{improv-pesos}) differs completely from the one used in \cite{C} for the case $\alpha=0$.
Instead of relying on chains of cubes and cube-by-cube
inequalities, we recover the simpler classical ideas which relate
Sobolev-Poincar\'e inequalities with fractional integrals (see,
e.g., \cite{H} and references therein). Similar ideas were
previously used for John domains in \cite{M} to prove the
Sobolev-Poincar\'e inequality, but the fact that they can also be
used in connection with the distance to the boundary seems to be
new.
We will use a representation formula proved in \cite{ADM} that
essentially allows us to recover $f$ from its gradient (an
alternative proof of inequality (\ref{sob-poinc}) can also be
found in that paper). It has, as mentioned before, the advantage
of allowing us to introduce the distance to the boundary without
resorting to Whitney cubes, and it will allow us to reduce the
proof of inequalities (\ref{improv-poinc}) and
(\ref{improv-pesos}) to known continuity results for fractional
integrals and the Hardy-Littlewood maximal function.
Although inequality (\ref{improv-poinc}) can be seen as a special
case of (\ref{improv-pesos}) taking $w_1=w_2=1$, we have chosen to
present them separately for the sake of clarity and because the
hypotheses needed are weaker than those we require for the more general
cases. We will also split inequality (\ref{improv-pesos}) into the
cases $w_1 = w_2$ and $w_1 \not= w_2$. We shall refer to the first
case as the `one-weighted' case and to
the second as the `two-weighted' case. Once the ideas are made
clear in the simpler cases, we shall only sketch
how they can be adapted to the more general case.
The paper is organized as follows. In section 2 we recall some
definitions, obtain the representation formula that we will be
using in the remainder of the paper, and show how it relates to
the distance to the boundary. Section 3 is devoted to the
unweighted and one-weighted cases. We obtain a simpler proof of
the results in \cite{R} and, following the technique presented in
\cite{H}, we extend inequality (\ref{improv-pesos}) for $w_1=w_2$
to the previously unknown case $p=1$ (Theorem \ref{teo34}).
Finally, in section 4 we show how our arguments can be used to generalize the results in \cite{C} and obtain new
inequalities in the two-weighted case (Theorems \ref{teo41} and \ref{teo42}).
\section{Preliminaries}
The notation used in this paper is rather standard. By $C$ we will denote a general constant which can change its value even within a single string of estimates. We will write $C(*,...,*)$ to emphasize that the constant depends on the quantities appearing in the parentheses only.
By a weight function we mean a nonnegative measurable function on ${\mathbb R}^n$.
Given a domain $\Omega\subset{\mathbb R}^n$ and any
$x\in\Omega$, we let $d(x)$ denote the distance of $x$ to the boundary of
$\Omega$. A bounded domain $\Omega\subset{\mathbb R}^n$ is a John domain if for a
fixed $x_0\in\Omega$ and any $y\in\Omega$ there exists a rectifiable
curve, called John curve, given by
$$
\gamma(\cdot,y)\,:\, [0,1]\to\Omega
$$
such that $\gamma(0,y)=y$ and $\gamma(1,y)=x_0$, and there exist constants
$\delta$ and $K$, depending only on the domain $\Omega$ and on $x_0$, such
that
\begin{equation}
\label{john1} d(\gamma(s,y))\ge\delta s
\end{equation}
and
\begin{equation}
\label{john2} |\dot\gamma(s,y)|\le K
\end{equation}
where $\dot\gamma(s,y):=\frac{\partial\gamma}{\partial s}(s,y)$.
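For instance (a standard example, recalled here only for concreteness), the cube $\Omega=(-\tfrac12,\tfrac12)^n$ is a John domain with $x_0=0$: taking $\gamma(s,y)=(1-s)y$, every coordinate of $\gamma(s,y)$ has absolute value at most $(1-s)/2$, so that $d(\gamma(s,y))\ge s/2$ and (\ref{john1}) holds with $\delta=1/2$, while $|\dot\gamma(s,y)|=|y|\le\sqrt{n}/2$ gives (\ref{john2}) with $K=\sqrt{n}/2$.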
In what follows, we will be using that $\gamma(s,y)$ and
$\dot\gamma(s,y)$ are measurable functions. This property need not be
fulfilled if we take $\gamma(\cdot, y)$ to be an arbitrary John curve
for each fixed $y\in\Omega$, but it can be obtained by means of a
slight technical modification of a given family of curves (see \cite[Lemma 2.1]{ADM} for
details). Moreover, to simplify notation we will assume,
without loss of generality, that $x_0=0$.
Let $\varphi\in C_0^\infty$ be such that $\int_{\Omega}\varphi =1$ and
$\mbox{supp\,}\varphi \subset B(0,\delta/2)$. Given a locally Lipschitz function
$f$, we denote by $f_\varphi$ the weighted average of
$f$, namely, $f_\varphi=\int_\Omega f\varphi$.
The following lemmas of this section will be fundamental for the remainder of
this paper. They were proved in \cite{ADM} but
we have chosen to reproduce their proofs here for the sake of completeness.
\begin{lemma}
With the above notations, if $\Omega\subseteq \mathbb{R}^n$ is a John
domain and $y\in \Omega$,
\begin{equation}
\label{green}
f(y)-f_\varphi= \int_\Omega G(x,y)\cdot\nabla f(x)\, dx
\end{equation}
with
\begin{equation}
\label{Gj}
G(x,y):=-\int_0^1\left(\dot\gamma(s,y)+\frac{x-\gamma(s,y)}{s}\right)
\varphi\left (\frac{x-\gamma(s,y)}{s}\right)\frac{1}{s^n}\, ds.
\end{equation}
\end{lemma}
\noindent{\bf Proof.}\,\,
In view of (\ref{john1}), for any $y\in\Omega$ and $z\in
B(0,\delta/2)$ the curve given by
$$
\gamma(s,y) + sz \quad , \quad s\in[0,1]
$$
which joins $y$ and $z$, is contained in $\Omega$. Then
$$
f(y)-f(z)= - \int_0^1 \nabla f(\gamma(s,y)+sz)\cdot (\dot\gamma(s,y)+z)\,
ds.
$$
Multiplying by $\varphi(z)$ and integrating in $z$ we obtain
$$
f(y)- f_\varphi= - \int_\Omega \int_0^1 \nabla f(\gamma(s,y)+sz)\cdot
(\dot\gamma(s,y)+z)\varphi(z)\, ds dz.
$$
Making the change of variable $x=\gamma(s,y)+sz$ we have
$$
f(y)-f_\varphi= - \int_0^1 \int_\Omega \nabla f(x) \cdot
\left(\dot\gamma(s,y)+ \frac{x-\gamma(s,y)}{s}\right)
\varphi\left(\frac{x-\gamma(s,y)}{s}\right)\frac{1}{s^n}\, dx ds
$$
as we wanted to prove. \qed
\begin{lemma}
\label{cotadeG} There exists a constant $C=C(n,\delta,K)$ such that
\begin{equation}
\label{boundofG} |G(x,y)|\le
C\frac{\|\varphi\|_{\infty}}{|x-y|^{n-1}}.
\end{equation}
\end{lemma}
\noindent{\bf Proof.}\,\, If $(x-\gamma(s,y))/s\in \mbox{supp\,}\varphi$ then
$|x-\gamma(s,y)| < (\delta/2) s$. Therefore, using (\ref{john2}) and
$\gamma(0,y)=y$ we have
\begin{equation}
\label{xmenosy} |x-y| \le |x-\gamma(s,y)| +
|\gamma(s,y)-\gamma(0,y)| \le (\delta/2) s + K s.
\end{equation}
Therefore,
$$
G(x,y) = \int_{C |x-y|}^1 \left\{
\dot{\gamma}(s,y)+\frac{x-\gamma(s,y)}{s} \right \} \varphi \left
(\frac{x-\gamma(s,y)}{s}\right)\frac{1}{s^n}\, ds .
$$
And, since
$$
\left|\dot{\gamma}(s,y)+\frac{x-\gamma(s,y)}{s}\right| \le K +
{\delta/2},$$ the above estimate follows easily (for instance, when $n\ge 2$, from $\int_{C|x-y|}^{1} s^{-n}\, ds \le \tfrac{1}{n-1}(C|x-y|)^{-(n-1)}$). \qed
\begin{lemma}
\label{cotax-y} There exists a constant $C=C(\delta,K)$ such that, if
$G(x,y) \not= 0$, then
$|x-y|\le C(\delta,K) d(x)$.
\end{lemma}
\noindent{\bf Proof.}\,\, Notice that, if $G(x,y)\not= 0$, there exists $s$ such that $\varphi\left(\frac{x-\gamma(s,y)}{s}\right) \not = 0$.
Let $\bar x \in \partial\Omega$ be
such that $d(x)=|x- \bar x|$. By (\ref{xmenosy}) and property (\ref{john1}), we have
$$
|x-y| \le \left(\frac{\delta}{2} + K\right) s \le \left(\frac12
+\frac{K}{\delta}\right) \delta s \le \left(\frac12 +\frac{K}{\delta}\right)
d(\gamma(s,y)).
$$
But,
$$
d(\gamma(s,y))\le |\gamma(s,y)- \bar x|\le |\gamma(s,y) - x| + |x
- \bar x|\le \frac{\delta s}{2} + d(x) \le \frac{d(\gamma(s,y))}{2} +
d(x),
$$
whence
$$
|x-y| \le \left(1 +\frac{2K}{\delta}\right) d(x).
$$
\qed
\section{The unweighted and one-weighted cases}
Since the case $p=1$ of the inequalities we are considering is different in nature from the remaining values
of $p$, we will split the proof of both the weighted and unweighted cases into two theorems, respectively.
\begin{theorem}
If $\Omega\subseteq \mathbb{R}^n$ is a bounded John domain,
\begin{equation}
\label{teo1}
\inf_{a\in \mathbb{R}} \| f-a\|_{L^q(\Omega)} \le C \|
\nabla f(x) d(x)^\alpha\|_{L^p(\Omega)}
\end{equation}
whenever $f\in L^q(\Omega)$ is a locally Lipschitz function, $\alpha \in [0,1]$, $p(1-\alpha) <n$, and $1<p\le q\le
\frac{np}{n-p(1-\alpha)}$.
\end{theorem}
\noindent{\bf Proof.}\,\,
By duality,
$$ \| f - f_\varphi\|_{L^q(\Omega)} = \sup_{g\in
L^{q'}(\Omega)} \frac{\int_\Omega (f-f_\varphi) g}{\| g\|_{L^{q'}(\Omega)}},
$$
with $q'$ being the dual
exponent of $q$, $1/q + 1/q'=1$. Therefore, it suffices to obtain a bound for $\int_\Omega (f
-f_\varphi)g$ for $g\in L^{q'}(\Omega)$.
Using the representation formula (\ref{green}), we can write
$$
\int_\Omega (f(y) - f_\varphi) g(y) \, dy = \int_\Omega \int_\Omega
G(x,y)\cdot\nabla f(x)\, dx \, g(y)\, dy
$$
Interchanging the order of integration and using lemmas
\ref{cotadeG} and \ref{cotax-y}, we obtain
\begin{equation}
\label{dualidad}
\int_\Omega |(f(y) - f_\varphi) g(y)| \, dy \le C
\int_\Omega \int_{|x-y|\le C d(x)} \frac{|g(y)|}{|x-y|^{n-1}} \, dy
|\nabla f(x)| \, dx
\end{equation}
We consider separately the cases $\alpha \in [0,1)$ and
$\alpha=1$.
In the case $\alpha \in [0,1)$, if we denote $I_\beta g(x)= \int
g(y) |x-y|^{\beta-n}\, dy$, we can bound the above expression by
\begin{equation}
\label{intfrac}
C \int_{\mathbb{R}^n}|\nabla f (x)| d(x)^\alpha
I_{1-\alpha}|g(x)| \, dx
\end{equation}
where we have assumed that $|\nabla f|$ and $g$ are extended by
zero outside $\Omega$. Applying H\"older's inequality and the
continuity of the fractional integral (see, e.g., \cite{S}), this
expression can be bounded by
$$
\| \nabla f(x) d(x)^\alpha\|_{L^p(\Omega)} \| I_{1-\alpha}
|g(x)|\|_{L^{p'}(\mathbb{R}^n)} \le C \| \nabla f(x)
d(x)^\alpha\|_{L^p(\Omega)} \| g(x)\|_{L^{q'}(\Omega)}
$$
thus proving (\ref{teo1}) in the case $\alpha \in [0,1)$.
In the case $\alpha=1$ (that is, $p=q$), a standard calculation
(see, e.g., \cite[Lemma 2.8.3]{Z}) shows that (\ref{dualidad}) can
be bounded by
$$
C \int_\Omega Mg(x) d(x) |\nabla f(x)| \, dx \le C
\|Mg(x)\|_{L^{q'}(\Omega)} \| \nabla f(x) d(x)\|_{L^q(\Omega)},
$$
and the desired result follows by boundedness of the
Hardy-Littlewood maximal function in $L^{q'}(\Omega)$ (see, e.g.,
\cite{S}). \qed
\begin{theorem}
If $\Omega\subseteq \mathbb{R}^n$ is a bounded John domain,
\begin{equation}
\label{teo1-L1}
\inf_{a\in \mathbb{R}} \| f-a\|_{L^{n/(n-1+\alpha)}(\Omega)} \le C \|
\nabla f(x) d(x)^\alpha\|_{L^1(\Omega)}
\end{equation}
whenever $f\in L^{n/(n-1+\alpha)}(\Omega)$ is a locally Lipschitz function, $1-\alpha<n$, and $\alpha \in [0,1]$.
\end{theorem}
\noindent{\bf Proof.}\,\,
In the case $\alpha=1$, inequality (\ref{teo1-L1}) can be proved
as in the previous theorem, using the continuity of the maximal
function in $L^\infty(\Omega)$.
In the case $\alpha\in [0,1)$, we follow the approach used in
\cite{H} to prove the Sobolev-Poincar\'e inequality for John
domains, modifying it to include the distance to the boundary in
our estimates.
For $g\in L^1(\Omega)$, let
$$
E_t=\left\{ x \in \Omega: \int_{\Omega} \frac{g(y)}{|x-y|^{n-1+\alpha}} \,dy >
t\right\}
$$
Then,
$$
|E_t| \le \int_{E_t} \int_\Omega \frac{g(y)}{t |x-y|^{n-1+\alpha}} \, dy \,
dx
$$
But,
$$
\int_{E_t} \frac{1}{ |x-y|^{n-1+\alpha}} \, dx \le C |E_t|^{(1-\alpha)/n}
$$
(see, e.g., \cite[inequality 7.2.6]{Jo}). Therefore,
$$
|E_t| t^{n/(n-1+\alpha)} \le C \left( \int_\Omega |g(y)| \, dy
\right)^{n/(n-1+\alpha)}
$$
Since, as in the proof of (\ref{teo1}),
$$
|f- f_\varphi| \le C \int_\Omega \frac{|\nabla f(y)|
d(y)^\alpha}{|x-y|^{n-1+\alpha}} \, dy,
$$
we conclude that
$$
\sup_{t>0} \left| \left\{ x\in \Omega : |f-f_\varphi| > t
\right\}\right| t^{n/(n-1+\alpha)} \le C \left( \int_\Omega |\nabla
f(y)| d(y)^\alpha \, dy \right)^{n/(n-1+\alpha)}
$$
This in turn implies, by \cite[Theorem 4]{H}, that
$$
\inf_{a\in \mathbb{R}} \| f(x) - a \|_{L^{n/(n-1+\alpha)}(\Omega)} \le
C \| \nabla f(x) d(x)^\alpha\|_{L^1(\Omega)}
$$
\qed
\begin{theorem}
Let $\Omega\subseteq\mathbb{R}^n$ be a bounded John domain. If $w$ is
a nonnegative function such that there exists a constant
$K<\infty$ such that
\begin{equation}
\label{cond-1peso}
\left( \frac{1}{|Q|}\int_Q w(x)^q \, dx
\right)^{1/q} \left( \frac{1}{|Q|} \int_Q w(x)^{-p'} \, dx
\right)^{1/p'} \le K
\end{equation}
where $Q$ is any $n$ dimensional cube, and $K$ is independent of
$Q$, then
$$
\inf_{a\in \mathbb{R}}\| (f(x)-a) w(x)\|_{L^q(\Omega)} \le C
\|\nabla f(x) d(x)^\alpha w(x)\|_{L^p(\Omega)}
$$
for all locally Lipschitz $f$, where $0< \alpha \le
1$, $p(1-\alpha)< n$ and $1< p \le q \le \frac{np}{n-p(1-\alpha)}$.
\end{theorem}
\noindent{\bf Proof.}\,\,
By duality, it suffices to bound $\int_\Omega (f-f_\varphi)(y) g(y)
\, dy$ for any $g$ such that $\|g(x) w^{-1}(x)\|_{L^{q'}(\Omega)}<
\infty$.
In the case $\alpha \in [0,1)$, using, as before, the bound (\ref{intfrac})
and H\"older's inequality, we obtain
$$
\int_\Omega |(f- f_\varphi)(y) g(y)| \, dy \le C
\| \nabla f(x) d(x)^\alpha w(x) \|_{L^p(\Omega)} \| I_{1-\alpha} |g(x)|
w(x)^{-1}\|_{L^{p'}(\Omega)}
$$
But, by condition (\ref{cond-1peso}), \cite[Theorem 4]{MW} and the fact that $I_{1-\alpha}$ is self-adjoint,
$$\|I_{1-\alpha} |g(x)| w^{-1}(x)\|_{L^{p'}(\Omega)} \le C \| g(x)
w^{-1}(x)\|_{L^{q'}(\Omega)}
$$
and the theorem follows.
In the case $\alpha=1$, bound (\ref{intfrac}), as before, by
$$
C \int_\Omega Mg(x) d(x) |\nabla f(x)| \, dx \le C \| Mg(x) w^{-1}(x)\|_{L^{p'}(\Omega)} \| \nabla f(x) d(x) w(x)\|_{L^p(\Omega)}
$$
and the result follows, since by condition (\ref{cond-1peso}) and \cite[Theorem 1.2]{CU}
(see also references therein for previously known results),
$$
\| Mg(x) w^{-1}(x)\|_{L^{p'}(\Omega)} \le C \| g(x) w^{-1}(x)\|_{L^{q'}(\Omega)}.
$$
\qed
\begin{remark}
Notice that if $w$ satisfies condition (\ref{cond-1peso}), then $w^q$ belongs to Muckenhoupt's class
$A_r$ with $r=\frac{q}{p'}+1$, and therefore it is a doubling weight (which in turn implies that it satisfies the weaker `reverse doubling condition'
required for \cite[Theorem 1.2]{CU}).
\end{remark}
\begin{theorem}
\label{teo34}
Let $\Omega\subseteq\mathbb{R}^n$ be a bounded John domain. If $w$ is
a nonnegative function such that there exists a constant
$K<\infty$ such that
\begin{equation}
\label{cond-1peso-L1}
\left( \frac{1}{|Q|} \int_Q w(x)^{\frac{n}{n-1+\alpha}} \, dx
\right)^{\frac{n-1+\alpha}{n}} \left( \mbox{ess }\sup_{x\in Q} \frac{1}{w(x)}
\right) < K
\end{equation}
where $Q$ is any $n$ dimensional cube and $K$ is independent of $Q$, then
$$
\inf_{a\in \mathbb{R}}\| (f(x)-a)
w(x)\|_{L^{n/(n-1+\alpha)}(\Omega)} \le C \|\nabla f(x)
d(x)^\alpha w(x)\|_{L^1(\Omega)}
$$
for all locally Lipschitz $f$ and $
\alpha \in [0,1)$.
When $\alpha=1$, condition (\ref{cond-1peso-L1}) should be replaced by
$$
M(w(x))\le C w(x)
$$
for almost every $x\in\Omega$ (that is, $w\in A_1$).
\end{theorem}
\noindent{\bf Proof.}\,\,
In the case $\alpha \in [0,1)$, for each $t>0$ let $E_t = \{ |I_{1-\alpha}g(x)|>t\}$. By \cite[Theorem 5]{MW}, if
$w$ satisfies condition (\ref{cond-1peso-L1}),
$$
\int_{E_t} w(x)^{\frac{n}{n-1+\alpha}} \, dx \le C t^{-\frac{n}{n-1+\alpha}} \left( \int_{\mathbb{R}^n}
|g(x)| w(x) \, dx\right)^{\frac{n}{n-1+\alpha}}
$$
But, as before,
$$
|f- f_\varphi| \le C \int_\Omega \frac{|\nabla f(y)|
d(y)^\alpha}{|x-y|^{n-1+\alpha}} \, dy = C I_{1-\alpha} (|\nabla
f| d(x)^\alpha)
$$
Therefore, setting $d\mu = w(x)^{n/(n-1+\alpha)} \, dx$, we obtain
that
$$
\mu\{ |f-f_\varphi| > t\} t^{n/(n-1+\alpha)} \le C \mu \{
I_{1-\alpha} (|\nabla f| d(x)^\alpha)> t\} t^{n/(n-1+\alpha)}
$$
$$
\le C \left(\int_\Omega |\nabla f(x)| d(x)^\alpha w(x) \,
dx\right)^{n/(n-1+\alpha)}
$$
which, by \cite[Lemma 4]{H}, implies
$$
\inf_{a\in \mathbb{R}} \left( \int_\Omega |f-a|^{n/(n-1+\alpha)} \,
d\mu\right)^{(n-1+\alpha)/n} \le C \int_\Omega |\nabla f| \, d\nu
$$
where $d\nu = d(x)^\alpha w(x) \, dx$, that is,
$$
\inf_{a\in \mathbb{R}} \|(f-a)(x) w(x)\|_{L^{n/(n-1+\alpha)}(\Omega)}
\le C \| \nabla f (x) d(x)^\alpha w(x) \|_{L^1(\Omega)}
$$
In the case $\alpha=1$, bound (\ref{intfrac}), as before, by
$$
C \int_\Omega Mg(x) d(x) |\nabla f(x)| \, dx \le C \| Mg(x) w^{-1}(x)\|_{L^{\infty}(\Omega)} \| \nabla f(x) d(x) w(x)\|_{L^1(\Omega)}
$$
and the result follows, since by \cite[Theorem 4]{Mu}, if $w\in A_1$,
$$
\| Mg(x) w^{-1}(x)\|_{L^{\infty}(\Omega)} \le C \| g(x) w^{-1}(x)\|_{L^{\infty}(\Omega)}
$$
\qed
\begin{remark}
If a weight $w$ satisfies condition (\ref{cond-1peso-L1}), then
$w^{n/(n-1+\alpha)}$ belongs to the class $A_1$.
\end{remark}
\section{The two-weighted case}
\begin{theorem}
\label{teo41}
Let $\Omega\subseteq\mathbb{R}^n$ be a bounded John domain. If $w_1$
and $w_2$ are nonnegative functions such that there exists a
constant $K<\infty$ such that
\begin{equation}
\label{cond-2pesos} |Q|^{\frac1n -1}\left(\int_Q w_1(x) \, dx
\right)^{1/q} \left( \int_Q w_2(x)^{1-p'} \, dx \right)^{1/p'} \le
K
\end{equation}
and $w_1, w_2^{1-p'}$ satisfy the following `reverse
doubling' condition:
\begin{equation}
\mbox{ for any } \epsilon \in (0,1)\mbox{ there exists } \delta\in (0,1) \mbox{ such that }
\int_{\epsilon Q} w(x) \, dx \le \delta \int_Q w(x) \, dx \qquad
\end{equation}
where $Q$ is any $n$-dimensional cube, and $K$ is independent of
$Q$, then
$$
\inf_{a\in \mathbb{R}}\| (f(x)-a) w_1^{1/q}(x)\|_{L^q(\Omega)} \le C
\|\nabla f(x) d(x)^\alpha w_2(x)^{1/p}\|_{L^p(\Omega)}
$$
for all locally Lipschitz $f$, whenever
$1< p<q<\infty$ and $\alpha \in [0,1]$. If $p=q$, condition (\ref{cond-2pesos}) should be
replaced by requiring that there exist $r>1$ such that
\begin{equation}
\label{cond-2pesosb} |Q|^{\frac{\alpha}{n}+ \frac1q
-\frac1p}\left(\frac{1}{|Q|} \int_Q w_1(x)^r \, dx \right)^{1/qr}
\left( \frac{1}{|Q|} \int_Q w_2(x)^{(1-p')r} \, dx \right)^{1/p'r}
\le K(r)
\end{equation}
\end{theorem}
\noindent{\bf Proof.}\,\,
As in the previous theorems, by duality it suffices to bound
$\int_\Omega (f-f_\varphi)(y) g(y) \, dy$ for any $g$ such that
$\|g(x) w_1(x)^{-1/q} \|_{L^{q'}}< \infty$.
We begin by the case $\alpha\in [0,1)$. Using the bound (\ref{intfrac}) and H\"older's inequality, we
obtain
$$
\int_\Omega |(f- f_\varphi)(y) g(y)| \, dy \le C
\| \nabla f(x) d(x)^\alpha w_2(x)^{1/p} \|_{L^p(\Omega)} \|
I_{1-\alpha} |g(x)| w_2(x)^{-1/p}\|_{L^{p'}(\Omega)}
$$
But, by condition (\ref{cond-2pesos}) (respectively, condition
(\ref{cond-2pesosb})) and \cite[Theorem 1]{SW},
$$
\|I_{1-\alpha} |g(x)| w_2^{-1/p} \|_{L^{p'}} \le C \|g(x)
w_1(x)^{-1/q} \|_{L^{q'}}
$$
as we wanted to show.
In the case $\alpha=1$, bound (\ref{intfrac}), as before, by
$$
C \int_\Omega Mg(x) d(x) |\nabla f(x)| \, dx \le C \| Mg(x) w_2(x)^{-1/p}\|_{L^{p'}(\Omega)} \| \nabla f(x) d(x) w_2(x)^{1/p}\|_{L^p(\Omega)}
$$
and the result follows, since by condition (\ref{cond-2pesosb}) and \cite[Theorem 1.2]{CU},
$$
\| Mg(x) w_2(x)^{-1/p}\|_{L^{p'}(\Omega)} \le C \| g(x) w_1(x)^{-1/q}\|_{L^{q'}(\Omega)}
$$
\qed
\begin{remark}
In the previous theorem we may assume that $q\le \frac{np}{n- p(1-\alpha)}$ (and thus $p(1-\alpha)<n$),
since otherwise $w_1$ equals zero almost everywhere on $\{ w_2< \infty \}$.
This was observed in \cite[Remark b]{S2}.
\end{remark}
\begin{theorem}
\label{teo42}
Let $\Omega\subseteq\mathbb{R}^n$ be a bounded John domain. If $w_1$
and $w_2$ are nonnegative functions such that there exists a
constant $K<\infty$ such that
\begin{equation}
\label{cond-2pesos-L1}
M(w_2(x)) \le w_1(x)
\end{equation}
for almost all $x$, then
$$
\inf_{a\in \mathbb{R}}\| (f(x)-a) w_1(x)\|_{L^1(\Omega)} \le C
\|\nabla f(x) d(x) w_2(x)\|_{L^1(\Omega)}
$$
for all locally Lipschitz $f$.
\end{theorem}
\noindent{\bf Proof.}\,\,
By duality, it suffices to bound $\int_\Omega (f-f_\varphi)(y) g(y)
\, dy$ for any $g$ such that $\|g(x) w_1^{-1}(x)\|_{L^\infty(\Omega)}<
\infty$.
As before, bound (\ref{intfrac}) by
$$
C \int_\Omega Mg(x) d(x) |\nabla f(x)| \, dx \le C \| Mg(x) w_2^{-1}(x)\|_{L^\infty (\Omega)} \| \nabla f(x) d(x) w_2(x)\|_{L^1(\Omega)}
$$
and the result follows, since by condition (\ref{cond-2pesos-L1}) and \cite[Theorem 4]{Mu},
$$
\| Mg(x) w_2^{-1}(x)\|_{L^\infty(\Omega)} \le C \| g(x) w_1^{-1}(x)\|_{L^\infty(\Omega)}
$$
\qed
\begin{remark}
Notice that if one wanted to prove the more general inequality
$$
\inf_{a\in \mathbb{R}}\| (f(x)-a) w_1^{\frac{n-1+\alpha}{n}}(x)\|_{L^{\frac{n}{n-1+\alpha}}(\Omega)} \le C
\|\nabla f(x) d(x)^\alpha w_2(x)\|_{L^1(\Omega)}
$$
following the proof of the one-weighted case, one would need to know that, if $E_t=\{|I_{1-\alpha}g(x)| > t\}$, then
$$
\int_{E_t} w_1(x) \, dx \le C t^{-\frac{n}{n-1+\alpha}} \left( \int |g(x)| w_2(x) \, dx\right)^{\frac{n}{n+\alpha-1}}.
$$
Unfortunately, we were able to find neither a proof of this inequality under the conditions of the previous theorem (or under any other sufficient
conditions on the weights $w_1$, $w_2$) nor a counterexample to the required weak inequality.
Such a result is beyond the scope of this paper, but it is worth noticing that it would immediately imply the above two-weighted
Sobolev-Poincar\'e inequality, which would complete Theorem \ref{teo41} in the case $p=1$.
\end{remark}
\bigskip
{\bf Acknowledgement.} We wish to thank Jos\'e Sabina de Lis for giving us reference \cite{Jo}.
| {
"timestamp": "2007-11-21T16:48:24",
"yymm": "0711",
"arxiv_id": "0711.3399",
"language": "en",
"url": "https://arxiv.org/abs/0711.3399",
"abstract": "In this paper we prove that if $\\Omega\\in\\mathbb{R}^n$ is a bounded John domain, the following weighted Poincare-type inequality holds: $$ \\inf_{a\\in \\mathbb{R}}\\| (f(x)-a) w_1(x) \\|_{L^q(\\Omega)} \\le C \\|\\nabla f(x) d(x)^\\alpha w_2(x) \\|_{L^p(\\Omega)} $$ where $f$ is a locally Lipschitz function on $\\Omega$, $d(x)$ denotes the distance of $x$ to the boundary of $\\Omega$, the weights $w_1, w_2$ satisfy certain cube conditions, and $\\alpha \\in [0,1]$ depends on $p,q$ and $n$. This result generalizes previously known weighted inequalities, which can also be obtained with our approach.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Improved Poincare inequalities with weights",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969670030071,
"lm_q2_score": 0.7217432062975978,
"lm_q1q2_score": 0.7099044286693429
} |
https://arxiv.org/abs/1405.3133 | Graph Matching: Relax at Your Own Risk | Graph matching---aligning a pair of graphs to minimize their edge disagreements---has received wide-spread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. Its attention can be partially attributed to its computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, therefore enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches. | \section{Introduction}
\IEEEPARstart{S}{everal} problems related to the isomorphism and matching of graphs have been an important and enjoyable challenge for the scientific community for a long time,
with applications in pattern recognition (see, for example, \cite{PA2,PA1}), computer vision (see, for example, \cite{CV1,CV3,CV2}), and machine learning (see, for example, \cite{nips4,nips3}), to name a few.
Given two graphs, the graph isomorphism problem consists of determining whether these graphs are isomorphic or not, that is, if there exists a bijection between the vertex sets of the graphs which exactly preserves the vertex adjacency. The graph isomorphism problem is very challenging from a computational complexity point of view. Indeed, its complexity is still unresolved: it is not currently classified as NP-complete or P \cite{GandJ}. The graph isomorphism problem is contained in the (harder) graph matching problem. The graph matching problem consists of finding the exact isomorphism between two graphs if it exists, or, in general, finding the bijection between the vertex sets that minimizes the
number of adjacency disagreements. Graph matching is a very challenging and well-studied problem in the literature with applications in such diverse fields as pattern recognition, computer vision, neuroscience, etc.\@ (see \cite{ConteReview}). Although polynomial-time algorithms for solving the graph matching problem are known for certain classes of graphs
(e.g., trees \cite{torsello2005polynomial,ullman1974design}; planar graphs \cite{hopcroft1974linear}; and graphs with some spectral properties \cite{fiori2014spectral,alex}), there are no known polynomial-time algorithms for solving the general case. Indeed, in its most general form, the graph matching problem is equivalent to the NP-hard quadratic assignment problem.
Formally, for any two graphs on $n$ vertices with respective $n \times n$ adjacency matrices $A$ and $B$,
the graph matching problem is to minimize $\| A - PBP^T \|_F$ over
all $P \in \varPi$, where $\varPi$ denotes the set of $n \times n$
permutation matrices, and $\| \cdot \|_F$ is the Frobenius matrix norm
(other graph matching objectives have been proposed in the literature as well, this being a common one).
Note that for any permutation matrix $P$, $\frac{1}{2}\|A - PBP^T\|_F^2=
\frac{1}{2} \|AP-PB\|_F^2 $ counts the number of adjacency
disagreements induced by the vertex bijection corresponding to $P$.
An equivalent formulation of the graph matching problem
is to minimize $-\langle AP , PB \rangle$ over~all
$P \in \varPi$, where $\langle \cdot , \cdot \rangle$ is the
Euclidean inner product, i.e., for all $C,D \in \mathbb{R}^{n \times n}$,
$\langle C,D \rangle := \textup{trace}(C^TD)$. This can be seen
by expanding, for any $P \in \varPi$,
\begin{eqnarray*}
\|A -PBP^T\|^2_F &=& \|AP -PB\|_F^2 \\
& = &
\|A\|_F^2 + \|B\|_F^2 -2 \langle AP, PB \rangle ,
\end{eqnarray*}
and noting that $\|A\|_F^2$ and $\|B\|_F^2$ are constants for the optimization
problem over $P \in \varPi$.
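As a quick numerical sanity check of this identity (an illustrative snippet, not from the paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T
B = rng.integers(0, 2, (n, n)); B = np.triu(B, 1); B = B + B.T
P = np.eye(n)[rng.permutation(n)]            # a random permutation matrix
lhs = np.linalg.norm(A - P @ B @ P.T, "fro") ** 2
rhs = (np.linalg.norm(A, "fro") ** 2 + np.linalg.norm(B, "fro") ** 2
       - 2 * np.trace((A @ P).T @ (P @ B)))
assert np.isclose(lhs, rhs)
\end{verbatim}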
Let ${\mathcal D}$ denote the set of $n \times n$ doubly stochastic
matrices, i.e., nonnegative matrices with row and column sums each equal to $1$.
We define the {\it convex relaxed graph matching problem} to be minimizing
$\|AD -DB\|_F^2$ over all $D \in {\mathcal D}$, and we define the
{\it indefinite relaxed graph matching problem} to be minimizing
$-\langle AD , DB \rangle$ over all $D \in {\mathcal D}$. Unlike
the graph matching problem, which is an integer programming problem,
these relaxed graph matching problems are each continuous optimization problems with
a quadratic objective function subject to affine constraints.
Since the quadratic objective $\|AD -DB\|_F^2$ is also convex
in the variables $D$ (it is a composition
of a convex function and a linear function), there is a
polynomial-time algorithm for exactly solving the convex relaxed graph
matching problem (see \cite{gold}). However, $-\langle AD , DB \rangle$ is not
convex (in fact, the Hessian has trace zero and is therefore {\it indefinite}), and nonconvex quadratic programming is (in general) NP-hard. Nonetheless the indefinite relaxation can be
efficiently approximately solved with Frank-Wolfe (F-W) methodology \cite{FW,FAQ}.
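To make the preceding discussion concrete, the following is a minimal sketch of one such F-W scheme for the indefinite relaxation (our illustration in Python, not the algorithm of \cite{FAQ}; it assumes symmetric adjacency matrices, and the uniform initialization, iteration cap, and final projection step are illustrative choices only).
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def indefinite_relaxation_fw(A, B, max_iter=50):
    """Frank-Wolfe on f(D) = -<AD, DB> over doubly stochastic D,
    followed by projection onto the permutation matrices.
    Assumes A and B are symmetric 0/1 adjacency matrices."""
    n = A.shape[0]
    D = np.full((n, n), 1.0 / n)        # uniform (arbitrary) starting point
    for _ in range(max_iter):
        grad = -2.0 * A @ D @ B         # gradient of f at D (A, B symmetric)
        rows, cols = linear_sum_assignment(grad)   # min_P <grad, P>
        P = np.zeros((n, n))
        P[rows, cols] = 1.0
        R = P - D
        # exact line search: f(D + eta R) - f(D) = a*eta^2 + b*eta on [0, 1]
        a = -np.sum(R * (A @ R @ B))
        b = -2.0 * np.sum(R * (A @ D @ B))
        if a > 0:
            eta = min(1.0, max(0.0, -b / (2.0 * a)))
        else:
            eta = 1.0 if a + b < 0 else 0.0
        D = D + eta * R
    rows, cols = linear_sum_assignment(-D)         # nearest permutation
    P = np.zeros((n, n))
    P[rows, cols] = 1.0
    return P
\end{verbatim}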
It is natural to ask how the (possibly different) solutions to these relaxed formulations relate to the solution of the original graph matching problem.
Our main theoretical result, Theorem~\ref{thd}, proves, under mild conditions, that convex relaxed graph matching (which is tractable) almost always yields the wrong matching, and indefinite relaxed graph matching (which is intractable) almost always yields the correct matching. We then illustrate via illuminating simulations that this asymptotic result about the trade-off between tractability and correctness is amply felt even in moderately sized instances.
In light of graph matching complexity results (see for example \cite{alex,hardness,atserias2013sherali}), it is unsurprising that the convex relaxation can fail to recover the true permutation. In our main theorem, we take this a step further and provide an answer from a probabilistic point of view, showing almost sure failure of the convex relaxation for a very rich and general family of graphs when convexly relaxing the graph matching problem. This paints a sharp contrast to the (surprising) almost sure correctness of the solution of the indefinite relaxation. We further illustrate that our theory gives rise to a new state-of-the-art matching strategy.
\subsection{Correlated random Bernoulli graphs}
\label{sec:crbg}
Our theoretical results will be set in the context of correlated random (simple) Bernoulli graphs,\footnote{Also known as \textit{inhomogeneous random graphs} in \cite{boll}.} which can be used to model many real-data scenarios. Random Bernoulli graphs are the most general edge independent random graphs, and contain many important random graph families including Erd\H os-R\'enyi and the widely used stochastic block model of \cite{sbm} (in the stochastic block model, $\Lambda$ is a block constant matrix, with the number of diagonal blocks representing the number of communities in the network).
Stochastic block models, in particular, have been extensively used to model networks with inherent community structure (see, for example, \cite{snijders1997estimation,nowicki2001estimation,newman2004finding,airoldi2009mixed}). As this model is a submodel of the random Bernoulli graph model here used, our main theorem (Theorem \ref{thd}) extends to stochastic block models immediately, making it of highly practical relevance.
These graphs are defined as follows.
Given $n\in\mathbb{Z}^+$, a real number $\rho \in [0,1]$, and a
symmetric, hollow matrix $\Lambda \in [0,1]^{n \times n}$, define $\mathcal{E}~:=~\{ \{i,j\}:i\in[n],j\in[n],i\neq j\},$ where $[n]:=\{1,2,\ldots,n\}.$
Two random graphs with respective $n \times n$
adjacency matrices $A$ and $B$ are
{\it $\rho$-correlated Bernoulli$(\Lambda)$} distributed if, for
all $\{i,j\}\in \mathcal{E}$, the random variables (matrix entries)
$A_{i,j},B_{i,j}$ are Bernoulli$(\Lambda_{i,j})$ distributed,
and all of these random variables are collectively independent except that, for each
$\{i,j\}\in \mathcal{E}$, the Pearson product-moment
correlation coefficient for $A_{i,j},B_{i,j}$ is $\rho$.
It is straightforward to show that the parameters $n$, $\rho$, and $\Lambda$
completely specify the random graph pair distribution, and the distribution may be
achieved by first, for all $\{i,j\}\in\mathcal{E}$,
having $B_{ij} \sim \textup{Bernoulli}(\Lambda_{i,j})$ independently drawn and
then, conditioning on $B$, having
$A_{i,j}\sim \textup{Bernoulli}\left ( (1-\rho)\Lambda_{i,j}+ \rho B_{i,j}
\right )$ independently drawn. While $\rho=1$ would imply the graphs are isomorphic, this model allows for a natural vertex alignment (namely the identity function) for $\rho<1$, i.e. when the graphs are not necessarily isomorphic.
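The two-step description above translates directly into a sampler; the following is a minimal numpy sketch (ours, not code from the paper), in which the block-constant $\Lambda$ of the usage example is purely illustrative.
\begin{verbatim}
import numpy as np

def sample_correlated_bernoulli_graphs(Lambda, rho, rng=None):
    """Draw adjacency matrices (A, B) of rho-correlated Bernoulli(Lambda)
    graphs: B is drawn first, then A conditionally on B with edge
    probabilities (1 - rho) * Lambda + rho * B."""
    rng = np.random.default_rng() if rng is None else rng
    n = Lambda.shape[0]
    iu = np.triu_indices(n, k=1)        # simple graphs: upper triangle only
    B = np.zeros((n, n), dtype=int)
    B[iu] = rng.random(len(iu[0])) < Lambda[iu]
    B = B + B.T
    cond_p = (1 - rho) * Lambda + rho * B
    A = np.zeros((n, n), dtype=int)
    A[iu] = rng.random(len(iu[0])) < cond_p[iu]
    A = A + A.T
    return A, B

# usage example with an (illustrative) two-block stochastic block model Lambda
n = 6
Lambda = np.full((n, n), 0.3)
Lambda[:3, :3] = 0.7
Lambda[3:, 3:] = 0.7
np.fill_diagonal(Lambda, 0.0)           # Lambda is hollow
A, B = sample_correlated_bernoulli_graphs(Lambda, rho=0.6)
\end{verbatim}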
\subsection{The main result}
We will consider a sequence of
correlated random Bernoulli~graphs for $n=1,2,3,\dots$, where $\Lambda$
is a function of $n$. When we say that a sequence of events, $\{E_m\}_{m=1}^{\infty}$, holds {\it almost always} we mean that
almost surely it happens that the events in the sequence occur for all but finitely many $m$.
\begin{theorem} \label{thd}
{\it Suppose $A$ and $B$ are adjacency matrices
for $\rho$-correlated Bernoulli$(\Lambda)$ graphs,
and there is an $\alpha \in (0,1/2)$ such that
$\Lambda_{i,j} \in [\alpha,1-\alpha]$ for all $i\ne j$.
Let $P^* \in \varPi$, and denote $A':=P^* A P^{*T}$.\\
a) If $(1-\alpha)(1-\rho)<1/2$, then it almost always holds that
\[\arg \min_{D \in {\mathcal D}} -\langle A'D,DB \rangle
= \arg \min_{P \in \varPi} \|A'-PBP^T\|_F = \{ P^* \} .\]
b) If the between graph correlation $\rho <\! 1$, then it almost always holds that
$P^* \not \in
\arg\min_{D \in {\mathcal D}} \|A'D-DB\|_F .$}
\end{theorem}
This theorem states that: (part {\it a}) the unique solution of the indefinite relaxation is almost always the correct permutation matrix, while (part {\it b}) the correct permutation is almost always not a solution of the commonly used convex relaxation.
Moreover, as we will show in the experiments section, the convex relaxation can lead to a doubly stochastic matrix that is not even in the Voronoi cell of the true permutation.
In this case, the convex optimum is closest to an incorrect permutation, hence the correct permutation will not be recovered by projecting the doubly stochastic solution back onto $\varPi$.
In the above, $\rho$ and $\alpha$ are fixed. However, the proofs follow {\it mutatis mutandis} if $\rho$ and $\alpha$ are allowed to vary in $n$. If there exist constants $c_1,c_2>0$ such that
$\alpha\geq c_1\sqrt{(\log n)/n}$ and $1/2-c_2\sqrt{(\log n)/n}\geq (1-\rho)(1-\alpha),$ then Theorem \ref{thd}, part {\it a} will hold. Note that $\alpha\geq c_1\sqrt{(\log n)/n}$ also guarantees the corresponding graphs are almost always connected.
For the analogous result for part {\it b}, let us first define
$\sigma(i)=\frac{1}{n-1}\sum_{k\neq i} \Lambda_{ki}(1-\Lambda_{ki}).$
If there exists an $i\in [n]$ such that
$ 1-\frac{3}{2\sigma(i)}\sqrt{(8\log n)/n}>\rho,$ then the results of Theorem \ref{thd}, part {\it b} hold as proven below.
\subsection{Isomorphic versus $\rho$-correlated graphs}
There are numerous algorithms available in the literature for (approximately) solving the graph isomorphism problem (see, for example, \cite{cordella2004sub,fankhauser2012suboptimal}), as well as for (approximately) solving the subgraph isomorphism problem (see, for example, \cite{ullmann1976algorithm}). All of the graph matching algorithms we explore herein can be used for the graph isomorphism problem as well.
We emphasize that the $\rho$-correlated random graph model extends our random graphs beyond isomorphic graph pairs; indeed, when $\rho<1$, $\rho$-correlated graphs $G_1$ and $G_2$ will almost surely have on the order of
$n^2$ edge-wise disagreements (with constants depending on $\alpha$ and $\rho$). As such, these graphs are a.s. {\it not} isomorphic. In this setting, the goal of graph matching is to align the vertices across graphs whilst simultaneously preserving the adjacency structure as well as possible. However, this model does preserve a very important feature of isomorphic graphs: namely the presence of a latent alignment function (the identity function in the $\rho$-correlated model).
We note here that in the $\rho$-correlated Bernoulli($\Lambda$) model, both $G_1$ and $G_2$ are marginally Bernoulli$(\Lambda)$ random graphs, which makes the model amenable to theoretical analysis. Moreover, real data experiments across a large variety of data sets (see Section \ref{data}) and simulated experiments across a variety of robust random graph settings (see Section \ref{sec:otherrand}) both support the result of Theorem \ref{thd}. Indeed, we suspect that an analogue of Theorem \ref{thd} holds over a much broader class of random graphs, and we are presently investigating this extension.
\section{Proof of Theorem \ref{thd}, part a} Without loss of generality, let $P^*=I$.
We will first sketch the main argument of the proof, and then we will spend the remainder of the section filling in all necessary details of the proof.
The proof will proceed as follows.
We will show that, almost always, $-\langle A,B\rangle<-\langle AQ,PB\rangle$ for any $P,\,Q\in\varPi$ such that either $P\neq I$ or $Q\neq I$.
To accomplish this, we count the entrywise disagreements between $AQ$ and $PB$ in two steps (of course, this is the same as the number of entrywise disagreements between $A$ and $PBQ^T$). We first count the entrywise disagreements between $B$ and $PBQ^T$ (Lemma \ref{thy}), and then count the additional disagreements induced by realizing $A$ conditioning on $B$. Almost always, this two step realization will result in more errors than simply realizing $A$ directly from $B$ without permuting the vertex labels (Lemma \ref{ths}). This establishes $-\langle A,B\rangle<-\langle AQ,PB\rangle$, and Theorem \ref{thd}, part {\it a} is then a consequence of the Birkhoff-von Neumann theorem: writing any $D\in{\mathcal D}$ as a convex combination $D=\sum_i\lambda_iP_i$ of permutation matrices, $\langle AD,DB\rangle=\sum_{i,j}\lambda_i\lambda_j\langle AP_i,P_jB\rangle$ is a convex combination of terms each of which (almost always) is strictly smaller than $\langle A,B\rangle$ unless $P_i=P_j=I$, so the unique minimizer of $-\langle AD,DB\rangle$ over ${\mathcal D}$ is $I$.
We begin with two lemmas used to prove Theorem \ref{thd}.
First, Lemma \ref{thg} is adapted from \cite{alon}, presented here
as a variation of the form found in
\cite[Prop. 3.2]{kim}. This lemma lets us tightly estimate the number of disagreements between $B$ and $PBQ^T$, which we do in Lemma \ref{thy}.
\begin{lemma}
{\it For any integer $N>0$ and constant $\alpha \in (0,\frac{1}{2})$,
suppose that the random variable $X$ is a function of at most $N$
independent Bernoulli random variables, each with Bernoulli parameter in the interval $[\alpha, 1-\alpha]$. Suppose that changing the value of any one of the
Bernoulli random variables (and keeping all of the others fixed)
changes the value of $X$ by at most $\gamma$. Then for any
$t$ such that $0 \leq t< \sqrt{\alpha(1-\alpha)}\gamma N$, it holds that
${\mathbb P} \left [ |X- \mathbb{E}X| > t
\right ] \leq 2 \cdot \text{exp}\{-t^2/(\gamma^2 N)\}$. \label{thg}}
\end{lemma}
The next result, Lemma \ref{lem:hoeff1},
is a special case of the classical Hoeffding inequality (see, for example, \cite{fcci}), which we use to tightly bound the number of additional entrywise disagreements between $AQ$ and $PB$ when we realize $A$ conditioning on $B$.
\begin{lemma} \label{lem:hoeff1}{\it Let $N_1$ and $N_2$ be positive integers,
and
$q_1$ and $q_2$ be real numbers in $[0,1]$.
If $X_1 \sim \textup{Binomial}(N_1,q_1)$ and
$X_2 \sim \textup{Binomial}(N_2,q_2)$ are independent, then for any $t \geq 0$ it holds that
\begin{align*}
\mathbb{P} \Big [ \Big | X_1+X_2 &- \mathbb{E} \Big (X_1+X_2
\Big ) \Big | \geq t
\Big ]
\leq 2 \cdot \textup{exp}\left\{\frac{-2t^2}{N_1+N_2}\right\}.
\end{align*}}
\end{lemma}
Setting notation for the next lemmas, let $n$ be given.
Let $\varPi$ denote the set~of~$n \times n$~permutation matrices.
Just for now, fix any $P,Q \in \varPi$ such that
they are not both the identity matrix, and let $\tau, \omega$ be
their respective associated permutations on $[n]$; i.e. for all
$i,j \in [n]$ it holds that $\tau(i)=j$ precisely when $P_{i,j}=1$ and,
for all $i,j \in [n]$, it holds that $\omega(i)=j$ precisely when $Q_{i,j}=1$.
It will be useful to define the following sets:
\begin{align*}
\Delta&:= \{ (i,j) \in [n] \times [n] : \tau(i) \ne i \mbox{ or }
\omega(j) \ne j \},\\
\Delta_t&:= \{ (i,j) \in \Delta : \tau(i)=j
\mbox{ and } \omega(j)=i \},\\
\Delta_d&:= \{ (i,j) \in \Delta : i = j \mbox{ or }
\tau(i)=\omega(j)\},\\
\Delta_\tau&:=\{ (i,j) \in [n] \times [n] : \tau(i) \ne i \},\\
\Delta_\omega&:=\{ (i,j) \in [n] \times [n] :
\omega(j) \ne j \}.
\end{align*}
If we define $m$ to be the maximum of
$| \{ i \in [n]: \tau(i) \ne i \} |$ and
$| \{ j \in [n]: \omega(j) \ne j \} |$, then it follows that
$mn \leq | \Delta | \leq 2mn$.
This is clear from noting that $\Delta_{\omega},\Delta_\tau\subseteq\Delta\subseteq\Delta_\tau\cup\Delta_\omega$.
Also, $|\Delta_t|\leq m$, since for $(i,j)\in\Delta_t$ it is necessary that $\tau(i)\neq i$ and $\omega(j)\neq j$.
Lastly, $|\Delta_d|\leq 4m$, since
$$\Delta_d\subseteq\{(i,i)\in\Delta\}\cup \{ (i,j) \in \Delta : i\neq j,\,
\tau(i)=\omega(j)\},$$
and $|\{(i,i)\in\Delta\}|\leq 2m$, and $|\{ (i,j) \in \Delta : i\neq j,\,
\tau(i)=\omega(j)\}|\leq 2m$.
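For a concrete illustration of these definitions (not needed in the sequel): take $n=3$, let $\tau$ transpose $1$ and $2$, and let $\omega$ be the identity. Then $m=2$, $\Delta=\{(i,j):i\in\{1,2\},\,j\in[3]\}$ has $6$ elements (so that indeed $mn\le|\Delta|\le 2mn$), $\Delta_t=\emptyset$, and $\Delta_d=\{(1,1),(2,2),(1,2),(2,1)\}$ has $4\le 4m$ elements.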
We make the following assumption in all that follows:\\
{\bf Assumption 1:} {\it Suppose that $\Lambda \in [0,1]^{n \times n}$ is a
symmetric, hollow matrix, there is a real number
$\rho \in [0,1]$, and there is a constant
$\alpha \in (0,1/2)$ such that $\Lambda_{i,j} \in [\alpha,1-\alpha]$
for all $i\ne j$, and $(1-\alpha)(1-\rho)<1/2$.
Further, let $A$, $B$ be the adjacency matrices
of two random $\rho$-correlated Bernoulli$(\Lambda)$ graphs.}
Define the (random) set
$$\Theta' := \{ (i,j) \in \!\Delta: \!i \!\ne\!j,\text{ and }B_{i,j} \ne B_{\tau(i),\omega(j)} \}.$$ Note that $|\Theta'|$ counts the entrywise disagreements induced {\it within} the off-diagonal part of $B$ by $\tau$ and $\omega.$
\begin{lemma} \label{thy}
{\it Under Assumption 1, if $n$ is sufficiently large then
$$ \mathbb{P} \left ( | \Theta' | \not \in \left [
\alpha mn/3, \ 2mn
\right ] \right ) \leq
2e^{-\alpha^2mn/128} . $$}
\end{lemma}
\noindent {\bf Proof of Lemma \ref{thy}:}
For any $(i,j) \in \Delta$, note that
$(B_{i,j}-B_{\tau(i),\omega(j)})^2$ has a Bernoulli
distribution; if $(i,j) \in \Delta_t\cup \Delta_d,$
then the Bernoulli parameter is either $0$ or is in the interval
$[\alpha, 1-\alpha]$, and if $(i,j) \in
\Delta \backslash (\Delta_t\cup \Delta_d)$,
then the Bernoulli parameter is
$\Lambda_{i,j}(1-\Lambda_{\tau(i),\omega(j)})+(1-\Lambda_{i,j})
\Lambda_{\tau(i),\omega(j)},$
and this Bernoulli parameter
is in the interval $[\alpha,1-\alpha]$
since it is a convex combination of values in this interval.
Now, $|\Theta'|= \sum_{(i,j)\in \Delta, i \ne j}
(B_{i,j}-B_{\tau(i),\omega(j)})^2$,
so we obtain that
$
\alpha \left ( |\Delta|-|\Delta_t|-|\Delta_d| \right )
\leq \mathbb{E} ( | \Theta' | ) \leq (1-\alpha) |\Delta |,
$
and thus
\begin{equation} \label{thx}
\alpha m (n-5) \ \leq \ \mathbb{E}(| \Theta' |) \ \leq \ 2(1-\alpha)mn.
\end{equation}
Next we apply Lemma \ref{thg}, since
$|\Theta'|$ is a function of the at-most $N:=2mn$ Bernoulli
random variables $\{ B_{i,j} \}_{(i,j)\in \Delta: i \ne j}$,
which as a set (noting that $B_{i,j}=B_{j,i}$ is counted at most once for each $\{i,j\}$) are independent, each with Bernoulli parameter in
$[\alpha,1-\alpha]$.
Furthermore, changing the value of any one
of these random variables would change $|\Theta'|$ by at most $\gamma:=4$, thus
Lemma \ref{thg} can be applied and,
for the choice of $t:=\frac{\alpha}{2}mn$, we obtain that
\begin{equation} \label{thzz}
\mathbb{P} \left [ \big | |\Theta'| - \mathbb{E} (|\Theta'|) \big | > \alpha mn/2
\right ] \leq 2e^{-\alpha^2mn/128}.
\end{equation}
Lemma \ref{thy} follows from (\ref{thx}) and (\ref{thzz}),
since
\begin{align*}
&\mathbb{P} \big [ \big | |\Theta'| - \mathbb{E} (|\Theta'|) \big | > \alpha mn/2
\big ]\\
&=\mathbb{P} \left[ |\Theta'|\notin\left[\mathbb{E} (|\Theta'|)-\alpha mn/2,\mathbb{E} (|\Theta'|)+\alpha mn/2\right]\right]\\
&\geq \mathbb{P} \left [ |\Theta'|\notin\left[\alpha m(n-5)-\alpha mn/2,2(1-\alpha)mn+\alpha mn/2\right]
\right ]\\
&\geq \mathbb{P} \left [ |\Theta'|\notin\left[\alpha m(n-5)-\alpha mn/2,2mn\right]\right],
\end{align*}
and $5\alpha mn /6\leq \alpha m(n-5)$
when $n$ is sufficiently large (e.g., $n \geq 30$).\,\,$\blacksquare$
With the above bound on the number of (non-diagonal) entrywise disagreements between $B$ and $PBQ^T$, we next count the number of additional disagreements introduced by realizing $A$ conditioned on $B$.
In Lemma \ref{ths}, we prove that this two-step realization almost always results in more entrywise errors than simply realizing $A$ from $B$ without permuting the vertex labels.
\begin{lemma} \label{ths}{\it
Under Assumption 1, it almost always holds that,
for all $P,Q \in \varPi$
such that either $P \neq I$ or $Q \neq I$,
$\|A - PBQ^T \|_F > \| A - B \|_F$.}
\end{lemma}
\noindent {\bf Proof of Lemma \ref{ths}:}
Just for now, let us
fix any $P,Q \in \varPi$ such that either $P \neq I$ or $Q \neq I$, and say
$\tau$ and $\omega$ are their respective associated
permutations on $[n]$. Let $\Delta$ and $\Theta'$ be
defined as before.
For every $(i,j) \in \Delta$, a combinatorial argument, combined with $A$ and $B$ being binary valued,
yields (where for an event $C$, $\mathbbm{1}_C$ is the indicator random variable for the event $C$)\vspace{-1.3mm}
\begin{align}
\label{eq:comb}
&\mathbbm{1}_{A_{i,j} \ne B_{i,j}} + \mathbbm{1}_{B_{i,j} \ne B_{\tau(i),
\omega(j)}} = \\
&\hspace{10mm}\mathbbm{1}_{A_{i,j} \ne B_{\tau(i),\omega(j)}} +
2 \cdot \mathbbm{1}_{A_{i,j} \ne B_{i,j} \ \& \ B_{i,j} \ne B_{\tau(i),
\omega(j)}}.\notag
\end{align}
Note that
\begin{align*}
\|A\!-\!PBQ^T\|_F^2&\!=\!\sum_{i,j}(A_{i,j}\!-\!B_{\tau(i),\omega(j)})^2\!=\!\sum_{i,j}\!\mathbbm{1}_{A_{i,j} \ne B_{\tau(i),\omega(j)}}\\
\|A\!-\!B\|_F^2&\!=\!\sum_{i,j}(A_{i,j}\!-\!B_{i,j})^2\!=\!\sum_{i,j}\!\mathbbm{1}_{A_{i,j} \ne B_{i,j}}.
\end{align*}
Summing Eq. (\ref{eq:comb}) over the relevant indices then yields that
\begin{eqnarray} \label{tha}
\|A-PBQ^T\|_F^2-\|A-B\|_F^2= |\Theta|-2|\Gamma|,
\end{eqnarray}
where the sets $\Theta$ and $\Gamma$ are defined as
\begin{align*}
\Theta &:= \{ (i,j)\in[n]\times[n]: B_{i,j} \ne B_{\tau(i),\omega(j)} \}\subseteq\Delta,\\
\Gamma &:=\{ (i,j) \in \Theta : A_{i,j}\ne B_{i,j} \}.
\end{align*}
Now, partition $\Theta$ into sets $\Theta_1$,
$\Theta_2$, $\Theta_d$, and partition
$\Gamma$ into sets $\Gamma_1$,
$\Gamma_2$ where
\begin{align*}
\Theta_1&:= \{ (i,j)\in \Theta : i \ne j \mbox{ and } (j,i) \not \in \Theta \},\\
\Theta_2&:= \{ (i,j)\in \Theta : i \ne j \mbox{ and } (j,i) \in \Theta \},\\
\Theta_d&:= \{ (i,j) \in \Theta : i=j \}, \\
\Gamma_1&:= \{ (i,j) \in \Theta_1: A_{i,j} \ne B_{i,j} \}, \\
\Gamma_2&:= \{ (i,j) \in \Theta_2: A_{i,j} \ne B_{i,j} \}.\end{align*}
Note that all $(i,j)$ such that $i=j$ are not in $\Gamma$.
Also note that $\Theta'\subseteq \Theta$ is precisely the disjoint union $\Theta'=\Theta_1\cup\Theta_2$.
Equation (\ref{tha}) implies
\begin{align*}
|\Gamma_1| + |\Gamma_2| < (
|\Theta_1| + &|\Theta_2| )/2\Rightarrow|\Gamma| < |\Theta|/2\Rightarrow\\
&\|A-B\|_F^2 < \|A-PBQ^T\|_F^2.
\end{align*}
In particular,
\begin{align}
\label{thz}
\big \{ \| A &-B \|_F \geq \|A-PBQ^T\|_F \big \} \Rightarrow \notag\\
&\left \{ |\Gamma_1|+|\Gamma_2| \geq (|\Theta_1| + |\Theta_2|)/2=|\Theta'|/2
\right \}.
\end{align}
Now, conditioning on $B$ (hence, conditioning on $\Theta'$),
we have, for all $i\ne j$, that (see Section \ref{sec:crbg}),
$A_{i,j}\sim\text{Bernoulli}\left((1-\rho)\Lambda_{i,j}+
\rho B_{i,j}\right).$
Thus
${\mathbbm 1}_{A_{i,j} \ne B_{i,j}}$ has a Bernoulli distribution
with parameter bounded above by $(1-\alpha)(1-\rho)$.
Thus, $|\Gamma_1|$ is stochastically dominated by a
Binomial$\left ( |\Theta_1|,(1-\alpha)(1-\rho) \right )$ random variable,
and the independent random variable
$|\Gamma_2|$ is stochastically dominated by a
Binomial$\left (
|\Theta_2|,(1-\alpha)(1-\rho) \right )$ random variable.
An application of Lemma \ref{lem:hoeff1} with $N_1:=|\Theta_1|$,
$N_2:=|\Theta_2|$, $q_1=q_2:=
(1-\alpha)(1-\rho)$, and $t:=\left ( \frac{1}{2}- (1-\alpha)(1-\rho)
\right ) |\Theta'| $, yields (recall that we are conditioning on $B$ here)
\begin{align}
\label{thq}
&\mathbb{P} \left [ |\Gamma_1|+|\Gamma_2| \geq |\Theta'|/2
\right ]\notag \\
&=\!
\mathbb{P} \!\left [ \!|\Gamma_1|\!+\!|\Gamma_2|\! - \!(1\!-\!\alpha)(1\!-\!\rho)|\Theta'|\!
\geq\! \Big (\! 1\!/2\! -\!(1\!-\!\alpha)(1\!-\!\rho) \Big ) |\Theta'|
\right ]\notag \\
& \leq 2\text{exp}\left\{ \frac{-2 \left ( 1/2-(1-\alpha)(1-\rho) \right )^2
|\Theta'|^2 }{|\Theta_1|+ |\Theta_2| }\right\}\notag\\
&\leq
2\text{exp}\left\{-2\Big(1/2-(1-\alpha)(1-\rho)\Big)^2 |\Theta'|\right\}.
\end{align}
Removing the conditioning on $B$, we see that Lemma \ref{thy}, equations (\ref{thz})
and (\ref{thq}), and the assumption $(1-\alpha)(1-\rho)<\frac{1}{2}$
together imply that
\begin{align}
&\mathbb{P} \Big [ \| A -PBQ^T \|_F \leq \|A-B\|_F \Big ] \notag \\
& \leq \mathbb{P} \left ( | \Theta' | \not \in \big [
\alpha mn/3, \ 2mn
\big ] \right ) \notag\\
&\hspace{15mm}
+\mathbb{P} \Big [ |\Gamma_1|+|\Gamma_2| \geq \frac{1}{2}|\Theta'| \
\Big | \ | \Theta' | \in \big [
\frac{\alpha}{3} mn, \ 2mn
\big ] \Big ] \notag \\
&\leq4\,\text{exp}\left\{- \min \bigg\{ \frac{\alpha^2}{128},\frac{2\alpha}{3}
\bigg(\frac{1}{2}-(1-\alpha)(1-\rho)\bigg)^2\bigg\} mn\right\}. \label{thb}
\end{align}
Until this point, $P$ and $Q$---and their associated permutations
$\tau$ and $\omega$---have been fixed.
Now, for each $m\in[n]$, define ${\mathcal H}_m$ to be
the event that $\| A-PBQ^T\|_F \leq \|A-B\|_F$
for {\it any} $P,Q \in \varPi$ with the property that their associated
permutations $\tau,\omega$ are such that the maximum
of $| \{ i \in [n]: \tau(i)\ne i \} |$ and
$| \{ j \in [n]: \omega(j)\ne j \} |$ is exactly $m$.
There are at most ${n \choose m}m!{n \choose m}m! \leq n^{2m}$ such
permutation pairs.
By (\ref{thb}), for every $m\in[n]$, setting
$$c_1=\min \{ \alpha^2/128,2\alpha
(1/2-(1-\alpha)(1-\rho))^2/3\},$$
we have
$\mathbb{P} ( {\mathcal H}_m )\leq n^{2m} \cdot 4\,
\text{exp}\left\{- c_1 mn\right\}\leq \text{exp}\{-c_2n\},$
for some positive constant $c_2$ (the last inequality holding when $n$ is
large enough). Thus, for sufficiently large $n$, $\mathbb{P} ( \cup_{m=1}^n {\mathcal H}_m ) \leq n\cdot\text{exp}\{-c_2n\}$
decays exponentially in $n$, and is thus finitely summable over
$n=1,2,3,\ldots$. Lemma \ref{ths} follows from the Borel-Cantelli
Lemma. $\blacksquare$
\noindent {\bf Proof of Theorem \ref{thd}, part a:}
By Lemma \ref{ths}, it almost always holds that for
every $P,Q \in \varPi$ not both the identity,
$\|A-PBQ^T\|_F > \|A-B\|_F$; since $\|A-PBQ^T\|_F^2=\|A\|_F^2+\|B\|_F^2-2\langle AQ,PB\rangle$ and $\|A-B\|_F^2=\|A\|_F^2+\|B\|_F^2-2\langle A,B\rangle$, this is equivalent to
$ \langle AQ,PB \rangle < \langle A,B \rangle $.
By the Birkhoff--von Neumann Theorem, ${\mathcal D}$ is the convex hull
of $\varPi$, i.e., for every $D \in {\mathcal D}$, there exist
nonnegative constants $\{ a_{D,P} \}_{P \in \varPi}$ such that
$D=\sum_{P \in \varPi}a_{D,P}P$ and $\sum_{P \in \varPi}a_{D,P}=1$.
Thus, if $D$ is not the identity matrix, then almost always
\begin{eqnarray*}
\langle AD, DB \rangle & = & \sum_{P \in \varPi} \sum_{Q \in \varPi} a_{D,P}a_{D,Q}
\langle AQ,PB \rangle \\
& <& \sum_{P \in \varPi} \sum_{Q \in \varPi} a_{D,P}a_{D,Q}
\langle A,B \rangle=\langle A,B \rangle ,
\end{eqnarray*}
and almost always $\text{argmin}_{D\in\mathcal{D}}-\langle AD,DB\rangle=\{I\}$. $\blacksquare$
\section{Proof of Theorem \ref{thd}, part b}
The proof will proceed as follows: we will use Lemma \ref{lem:hoeff2} to prove that the identity is almost always not a KKT (Karush-Kuhn-Tucker) point of the relaxed graph matching problem. Since the relaxed graph matching problem is a constrained optimization problem with convex feasible region and affine constraints, this is sufficient for the proof of Theorem \ref{thd}, part b.
First, we state Lemma \ref{lem:hoeff2}, a variant of Hoeffding's inequality, which we use to prove Theorem \ref{thd}, part b.
\begin{lemma} \label{lem:hoeff2} {\it Let $N$ be a positive integer. Suppose that
the random variable $X$ is the sum of $N$ independent random
variables, each with mean $0$ and each taking values in the
real interval $[-1,1]$. Then for any $t \geq 0$, it holds
that $$\mathbb{P}[ |X| \geq t ]\leq 2 \cdot e^{\frac{-t^2}{2N}}.$$}
\end{lemma}
Again, without loss of generality, we may assume $P^*=I$.
We first note that the convex relaxed graph matching problem can be written as
\begin{align}
\min \,&\| AD-DB\|_F^2\label{convexequation},\\
\text{s.t. } &D \mathbf{1}=\mathbf{1}\label{equalconst1},\\
&\mathbf{1}^T D=\mathbf{1}^T\label{equalconst2},\\
&D\geq 0\label{inequalconst},
\end{align}
where (\ref{convexequation}) is a convex
function (of $D$) subject to affine constraints
(\ref{equalconst1})-(\ref{inequalconst}) (i.e., $D \in {\mathcal D}$). It follows that if $I$ is the global (or local) optimizer of the convex relaxed graph matching problem, then $I$ must be a KKT (Karush-Kuhn-Tucker) point (see, for example,
\cite[Chapter 4]{baz}).
The gradient of $\|AD-DB\|_F^2$ (as a function of $D$) is
$$\boldsymbol\nabla(D):=2(A^TAD+DBB^T-A^TDB-ADB^T).$$
Hence, a $\widehat D$ satisfying (\ref{equalconst1})-(\ref{inequalconst}) (i.e., $\widehat D$ is primal feasible) is a KKT point if it satisfies
\begin{equation}
\label{KKTcondgeneral}
\boldsymbol\nabla(\widehat D)+\boldsymbol\mu+\boldsymbol\mu'-\boldsymbol\nu=0,
\end{equation}
where $\boldsymbol \mu,$ $\boldsymbol \mu',$ and $\boldsymbol \nu$ are as follows:
\[\boldsymbol\mu:=\left [
\begin{array}{cccc}
\mu_1 & \mu_1 & \cdots & \mu_1 \\
\mu_2 & \mu_2 & \cdots & \mu_2 \\
\vdots & \vdots & \ddots & \vdots \\
\mu_n & \mu_n & \cdots & \mu_n
\end{array}
\right ]\in\mathbb{R}^{n\times n},\]
noting that the dual variables $\mu_1, \mu_2, \ldots, \mu_n$ are
not restricted. They correspond to the equality primal constraints (\ref{equalconst1})
that the row-sums of a primal feasible $D$ are all one;
\[
\boldsymbol\mu':=\left [
\begin{array}{cccc}
\mu'_1 & \mu'_2 & \cdots & \mu'_n \\
\mu'_1 & \mu'_2 & \cdots & \mu'_n \\
\vdots & \vdots & \ddots & \vdots \\
\mu'_1 & \mu'_2 & \cdots & \mu'_n
\end{array}
\right ]\in\mathbb{R}^{n\times n},\]
noting that
the dual variables $\mu'_1, \mu'_2, \ldots, \mu'_n$ are
not restricted. They correspond to the equality primal constraints (\ref{equalconst2})
that the column-sums of a primal feasible $D$ are all one;
\[
\boldsymbol\nu:=\left [
\begin{array}{cccc}
0 & \nu_{1,2} & \cdots & \nu_{1,n} \\
\nu_{2,1} & 0 & \cdots & \nu_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
\nu_{n,1} & \nu_{n,2} & \cdots & 0
\end{array}
\right ]\in\mathbb{R}^{n\times n},
\]
noting that the dual
variables $\nu_{i,j}$ are restricted to be nonnegative. They
correspond to the inequality primal constraints (\ref{inequalconst}) that the entries of a primal feasible $D$
be nonnegative. Complementary slackness further constrains the $\nu_{i,j},$ requiring that $\widehat D_{i,j}\nu_{i,j}=0$ for all $i,j.$
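As a quick numerical sanity check on the gradient expression $\boldsymbol\nabla(D)$ above, the following Python/NumPy sketch compares the closed form against central finite differences; it is purely illustrative, and all function and variable names are ours rather than part of the paper.
\begin{verbatim}
import numpy as np

def grad_convex(A, B, D):
    # gradient of ||AD - DB||_F^2 with respect to D, as stated above
    return 2 * (A.T @ A @ D + D @ B @ B.T - A.T @ D @ B - A @ D @ B.T)

def finite_difference_check(n=6, eps=1e-6, seed=0):
    # compare the closed-form gradient with central finite differences
    rng = np.random.default_rng(seed)
    A, B, D = rng.standard_normal((3, n, n))
    f = lambda M: np.linalg.norm(A @ M - M @ B, 'fro') ** 2
    G = grad_convex(A, B, D)
    G_fd = np.zeros_like(D)
    for i in range(n):
        for j in range(n):
            E = np.zeros_like(D); E[i, j] = eps
            G_fd[i, j] = (f(D + E) - f(D - E)) / (2 * eps)
    # the objective is quadratic in D, so only roundoff error remains
    return np.max(np.abs(G - G_fd))
\end{verbatim}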
At the identity matrix $I$, the gradient $\boldsymbol\nabla(I)$, denoted $\boldsymbol\nabla$, simplifies
(using the symmetry of $A$ and $B$) to
$\boldsymbol\nabla =[\nabla_{i,j}]= 2A^2+2B^2-4AB\in\mathbb{R}^{n\times n};$
and $I$ being a KKT point is equivalent to:
\begin{equation}
\label{KKTcond}
\boldsymbol\nabla+\boldsymbol\mu+\boldsymbol\mu'-\boldsymbol\nu=0,\end{equation}
where $\boldsymbol\mu,\,\boldsymbol\mu',\, \text{ and }\boldsymbol\nu$ are as specified above.
At the identity matrix, complementary slackness translates to having
$\nu_{1,1}=\nu_{2,2}=\cdots=\nu_{n,n}=0$.
Now, for Equation (\ref{KKTcond}) to hold,
it is necessary that there exist $\mu_1,\mu_2,
\mu'_1,\mu'_2$ such that
\vspace{-3mm}
\begin{eqnarray}
\nabla_{1,1} + \mu_1 + \mu'_1 & = & 0 ,\label{tif} \\
\nabla_{2,2} + \mu_2 + \mu'_2 & = & 0 ,\label{tig} \\
\nabla_{1,2} + \mu_1 + \mu'_2 & \geq & 0 ,\label{tih} \\
\nabla_{2,1} + \mu_2 + \mu'_1 & \geq & 0 \label{tii} .
\end{eqnarray}
Adding equations (\ref{tih}), (\ref{tii}) and subtracting
equations (\ref{tif}), (\ref{tig}), we obtain\vspace{-3mm}
\begin{eqnarray} \label{thj}
\nabla_{1,2}+\nabla_{2,1} \geq \nabla_{1,1} + \nabla_{2,2}.
\end{eqnarray}
Note that $\frac{1}{2} \boldsymbol\nabla + \frac{1}{2} \boldsymbol\nabla^T =
2(A-B)^T(A-B)$, hence Equation (\ref{thj}) is equivalent to (where $X:=(A-B)^T(A-B)$)
\begin{align} \label{tik}
&2[X]_{1,2}\geq
[X]_{1,1}+[X]_{2,2}.
\end{align}
Next, referring back to the joint distribution of $A$ and $B$ (see Section \ref{sec:crbg}),
we have, for all $i \ne j$,
\begin{align*}
\mathbb{P}\big [A_{i,j}=0,\,B_{i,j}=1 \big ] &=
\mathbb{P}\big [A_{i,j}=1 ,\,B_{i,j}=0 \big ]
\\
&=(1-\rho)\Lambda_{i,j}(1-\Lambda_{i,j}).
\end{align*}
Now, since
\begin{align*}
[X]_{1,1}+[X]_{2,2}=\sum_{i\neq 1} (A_{i,1}-B_{i,1})^2+\sum_{i\neq 2} (A_{i,2}-B_{i,2})^2,
\end{align*}
is the sum of $(n-1)+(n-1)$ Bernoulli random variables which are collectively
independent---besides the two of them which are equal, namely $(A_{12}-B_{12})^2$ and $(A_{21}-B_{21})^2$---we have that
$[X]_{1,1}+[X]_{2,2}$
is stochastically greater than or equal to
a $\text{Binomial}\big (2n-3,2(1-\rho)\alpha(1-\alpha) \big )$
random variable. Also note that
$$[X]_{1,2}=\sum_{i\neq 1,2}(A_{i,1}-B_{i,1})(A_{i,2}-B_{i,2})$$
is the sum of $n-2$ independent random variables (namely, the $(A_{i,1}-B_{i,1})(A_{i,2}-B_{i,2})$'s) each with mean $0$ and
each taking on values in $\{-1,0,1\}$.
Applying Lemma \ref{lem:hoeff1}
and Lemma \ref{lem:hoeff2}, respectively, to
$ X_{11}+X_{22}$ and to
$X_{12}$, with
$t:=(2n-3)2(1-\rho)\alpha(1-\alpha)/4$,
yields
\begin{align*}
&\mathbb{P} \big(
2[X]_{1,2} \geq
[X]_{1,1}+ [X ]_{2,2}
\big)\\
&\hspace{10mm}\leq
\mathbb{P} \big( 2[X]_{1,2} \geq 2t \big) +
\mathbb{P} \big(
[X ]_{1,1}+ [X]_{2,2}
\leq 2t \big)\\
& \hspace{10mm}\leq 2 \cdot e^{\frac{-2t^2}{2n-3}}+ 2 \cdot e^{\frac{-t^2}{2(n-2)}}
\leq e^{-cn},
\end{align*}
for some positive constant $c$ (the last inequality holds when $n$ is large
enough). Hence the probability that Equation (\ref{tik}) holds is seen to
decay exponentially in $n$, and is finitely summable over $n=1,2,3,\ldots$. Therefore, by the Borel-Cantelli Lemma we have that almost always
Equation~(\ref{tik}) does not hold. Theorem \ref{thd}, part {\it b} is now shown,
since Equation (\ref{tik}) is a necessary condition for
$I \in \arg \min_{D \in {\mathcal D}} \| AD-DB \|_F^2$. $\blacksquare$
\section{Experimental results}
In the preceding section, we presented a theoretical result exploring the trade-off between tractability and correctness when relaxing the graph matching problem.
On one hand, we have an optimistic result (Theorem~\ref{thd}, part {\it a}) about an indefinite relaxation of the graph matching problem.
However, since the objective function is nonconvex, there is no efficient algorithm known to exactly solve this relaxation.
On the other hand, Theorem \ref{thd}, part {\it b}, is a pessimistic result about a commonly used efficiently solvable convex relaxation, which almost always provides an incorrect/non-permutation solution.
After solving (approximately or exactly) the relaxed problem, the solution is commonly projected to the nearest permutation matrix.
We have not theoretically addressed this projection step yet. It might be that, even though the solution in $\mathcal{D}$ is not the correct permutation, it is very close to it, and the projection step fixes this. We will numerically illustrate this not being the case.
We next present simulations that corroborate and illuminate the presented theoretical results, address the projection step, and provide intuition and practical considerations for solving the graph matching problem.
Our simulated graphs have $n=150$ vertices and follow the Bernoulli model described above, where the entries of the matrix $\Lambda$ are i.i.d. uniformly distributed in $[\alpha,1-\alpha]$ with $\alpha = 0.1$. In each simulation, we run $100$ Monte Carlo replicates for each value of $\rho$. Note that given this $\alpha$ value, the threshold $\rho$ in order to fulfill the hypothesis of the first part of Theorem \ref{thd} (namely, that $(1-\alpha)(1-\rho)<1/2$) is $\rho = 0.44$. As in Theorem \ref{thd}, for a fixed $P^*\in\varPi$, we let $A':=P^* AP^{*T}$, so that the correct vertex alignment between $A'$ and $B$ is provided by the permutation matrix $P^*$.
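For concreteness, the following Python/NumPy sketch generates one such pair of $\rho$-correlated Bernoulli($\Lambda$) graphs, using the conditional law $A_{i,j}\mid B \sim\text{Bernoulli}((1-\rho)\Lambda_{i,j}+\rho B_{i,j})$ quoted in the proof of Lemma \ref{ths}; it is one consistent way to realize the model, the names are ours, and a shuffled copy $A':=P^*AP^{*T}$ can then be formed with any permutation matrix $P^*$.
\begin{verbatim}
import numpy as np

def correlated_bernoulli_graphs(n=150, alpha=0.1, rho=0.5, rng=None):
    # sample (A, B): rho-correlated Bernoulli(Lambda) adjacency matrices,
    # Lambda_{ij} i.i.d. Uniform[alpha, 1-alpha], symmetric and hollow
    rng = np.random.default_rng() if rng is None else rng
    Lam = rng.uniform(alpha, 1 - alpha, size=(n, n))
    Lam = np.triu(Lam, 1)
    Lam = Lam + Lam.T
    # B_{ij} ~ Bernoulli(Lambda_{ij})
    U = np.triu(rng.random((n, n)), 1)
    B = (U < np.triu(Lam, 1)).astype(float)
    B = B + B.T
    # A_{ij} | B ~ Bernoulli((1-rho)*Lambda_{ij} + rho*B_{ij})
    P = (1 - rho) * Lam + rho * B
    V = np.triu(rng.random((n, n)), 1)
    A = (V < np.triu(P, 1)).astype(float)
    A = A + A.T
    return A, B
\end{verbatim}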
We then highlight the applicability of our theory and simulations in a series of real data examples. In the first set of experiments, we match three pairs of graphs with known latent alignment functions. We then explore the applicability of our theory in matching graphs without a pre-specified latent alignment. Specifically, we match 16 benchmark problems (those used in \cite{FAQ,Zaslavskiy2009}) from the QAPLIB library of \cite{qaplib}. See Section \ref{data} for more detail. As expected by the theory, in all of our examples a smartly initialized local minimum of the indefinite relaxation achieves best performance.
\begin{table}[t!]
\caption{Notation}
\centering
\begin{tabular}{l | l| l }
\hline
{\bf Notation} & {\bf Algorithm used} & {\bf Ref.} \\ [0.5ex]
\hline
$D^*\in\text{argmin}_{D \in {\mathcal D}} \| A'D-DB \|_F^2$ & F-W algorithm & \cite{FW}, \\
& run to convergence &\cite{Zaslavskiy2009}\\
\hline
$P_c=$ projecting $D^*$ to $\Pi$ & Hungarian algorithm & \cite{hungarian} \\
\hline
FAQ:$P^*$ & FAQ init. at $P^*$ & \cite{FAQ} \\
\hline
FAQ:$D^*$ & FAQ init. at $D^*$ & \cite{FAQ} \\
\hline
FAQ:$J$ & FAQ init. at $J$ & \cite{FAQ} \\
\hline
\end{tabular}
\label{table:not}
\end{table}
We summarize the notation we employ in Table \ref{table:not}. To find $D^*$, we employ the F-W algorithm (\cite{FW, Zaslavskiy2009}), run to convergence, to exactly solve the convex relaxation. We also use the Hungarian algorithm (\cite{hungarian}) to compute $P_c$, the projection of $D^*$ to $\varPi$. To find a local minimum of $\min_{D \in {\mathcal D}}-\langle A'D,DB \rangle$, we use the FAQ algorithm of \cite{FAQ}. We use FAQ:$P^*$, FAQ:$D^*$, and FAQ:$J$ to denote the FAQ algorithm initialized at $P^*$, $D^*$, and $J:=\mathbf{1} \cdot\mathbf{1}^T/n$ (the barycenter of $\mathcal{D}$). We compare our results to the GLAG and PATH algorithms, implemented with off-the-shelf code provided by the algorithms' authors.
We restrict our focus to these algorithms (indeed, there are a {\it multitude} of graph matching algorithms present in the literature) as these are the prominent relaxation algorithms; i.e., they all first relax the graph matching problem, solve the relaxation, and then project the solution onto $\Pi$.
\subsection{On the convex relaxed graph matching problem}
\label{sec:conv}
Theorem \ref{thd}, part {\it b}, states that we cannot, in general, expect $D^*=P^*$. However, $D^*$ is often projected onto $\Pi$, which could potentially recover $P^*$.
Unfortunately, this projection step suffers from the same problems as rounding steps in many integer programming solvers, namely that the distance from the best interior solution to the best feasible solution is not well understood.
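Since $\|D-P\|_F^2=\|D\|_F^2+n-2\langle D,P\rangle$ for every permutation matrix $P$, projecting onto $\varPi$ amounts to maximizing $\langle D,P\rangle$, a linear assignment problem. A minimal Python sketch of this projection step, using SciPy's Hungarian solver, is given below (illustrative only; names are ours).
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def project_to_permutation(D):
    # nearest permutation matrix to D in Frobenius norm,
    # i.e. argmax_P <D, P>, via the Hungarian algorithm
    rows, cols = linear_sum_assignment(-D)
    P = np.zeros_like(D)
    P[rows, cols] = 1.0
    return P
\end{verbatim}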
In Figure \ref{fig:energy}, we plot $\| A'D^*-D^*B \|_F^2$ versus the correlation between the random graphs, with $100$ replicates per value of $
\rho$. Each experiment produces a pair of dots, either a red/blue pair or a green/grey pair.
The energy levels corresponding to the red/green dots correspond to $\| A'D^*-D^*B \|_F^2$, while the energies corresponding to the blue/grey dots correspond to $\|A'P_c-P_cB\|_F^2$.
The colors indicate whether $P_c$ was equal to $P^*$ (green/grey pair) or not (red/blue pair).
The black dots correspond to the values of $\| A'P^*-P^*B \|_F^2$.
\begin{figure}[t!]
\centering
\def\svgwidth{270pt}
\begin{scriptsize}
\input{energy.pdf_tex}
\end{scriptsize}
\caption{For $\rho\in[0.1,1]$, we plot $\|A'D^*-D^*B\|_F^2$~(red /green) and $\|A'P_c-P_cB\|_F^2$ (blue/gray). Red/blue dots correspond to simulations where $P_c\neq P^*$, and grey/green dots to $P_c=P^*$. Black dots correspond to $\| A'P^*-P^*B \|_F^2$. For each $\rho,$ we ran $100$ MC~replicates.}
\label{fig:energy}
\end{figure}
Note that, for correlations $\rho<1$, $D^*\neq P^*$, as expected from Theorem \ref{thd}, part {\it b}. Also note that, even for correlations greater than $\rho=0.44$, $P_c\neq P^*$ after projecting to the closest permutation matrix, even though with high probability $P^*$ is the solution to the unrelaxed problem.
We note the large gap between the pre/post projection energy levels when the algorithm fails/succeeds in recovering $P^*$,
the fast decay in this energy (around $\rho\approx 0.8$ in Figure \ref{fig:energy}), and the fact that the value for $\| A'P^*-P^*B \|_F^2$ can be easily predicted from the correlation value. These together suggest that $\| A'P_c-P_cB \|_F^2-\| A'D^*-D^*B \|_F^2$ can be used \textit{a posteriori} to assess whether or not graph matching recovered $P^*$. This is especially true if $\rho$ is known or can be estimated.
How far is $D^*$ from $P^*$? When the graphs are isomorphic (i.e., $\rho=1$ in our setting), then for a large class of graphs with certain spectral constraints, $P^*$ is the unique solution of the convex relaxed graph matching problem \cite{alex}. Indeed, in Figure \ref{fig:energy}, when $\rho=1$ we see that $P^*=D^*$ as expected.
On the other hand, we know from Theorem \ref{thd}, part {\it b} that if $\rho<1,$ it is often the case that $D^*\neq P^*$. We may think that, via a continuity argument, if the correlation $\rho$ is very close to one, then $D^*$ will be very close to $P^*$, and $P_c$ will probably recover $P^*$.
We empirically explore this phenomenon in Figure \ref{fig:dist}. For $\rho\in[0.1,1]$, with 100 MC replicates for each $\rho$, we plot the (Frobenius) distances from $D^*$ to $P_c$ (in blue), from $D^*$ to $P^*$ (in red), and from $D^*$ to a uniformly random permutation in $\Pi$ (in black). Note that all three distances are very similar for $\rho<0.8$, implying that $D^*$ is very close to the barycenter and far from the boundary of $\mathcal{D}$. With this in mind, it is not surprising that the projection fails to recover $P^*$ for $\rho<0.8$ in Figure~\ref{fig:energy}, as at the barycenter, the projection onto $\Pi$ is uniformly random.
For very high correlation values ($\rho>0.9$), the distances to $P_c$ and to $P^*$ sharply decrease, and the distance to a random permutation sharply increases. This suggests that at these high correlation levels $D^*$ moves away from the barycenter and towards $P^*$. Indeed, in Figure \ref{fig:energy} we see for $\rho>0.9$ that $P^*$ is the closest permutation to $D^*$, and is typically recovered by the projection step.
\subsection{On the indefinite relaxed graph matching problem}
\label{sec:indef}
The continuous problem one would like to solve, $\min_{D \in {\mathcal D}} -\langle A'D,DB \rangle$ (since its optimum is $P^*$ with high probability), is indefinite. One option is to look for a local minimum of the objective function, as done in the FAQ algorithm of \cite{FAQ}. The FAQ algorithm uses F-W methodology (\cite{FW}) to find a local minimum of $-\langle A'D,DB \rangle$. Not surprisingly (as there are many local minima), the performance of the algorithm is heavily dependent on the initialization. Below we study the effect of initializing the algorithm at the non-informative barycenter, at $D^*$ (a principled starting point), and at $P^*$. We then compare performance of the different FAQ initializations to the PATH algorithm \cite{Zaslavskiy2009} and to the GLAG algorithm \cite{fiori2013nips}.
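To make the discussion concrete, the sketch below implements one plausible Frank--Wolfe iteration for $\min_{D\in\mathcal{D}}-\langle A'D,DB\rangle$ in Python/NumPy, followed by a projection onto $\varPi$. It is an illustrative sketch under our own naming choices, not the reference FAQ implementation of \cite{FAQ}; the argument \texttt{D0} plays the role of the initializations $J$, $D^*$, or $P^*$ discussed above.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def faq_sketch(A, B, D0=None, max_iter=100, tol=1e-6):
    # Frank-Wolfe on f(D) = -<AD, DB> = -trace(D^T A^T D B) over the
    # doubly stochastic matrices, then projection onto the permutations
    n = A.shape[0]
    D = np.full((n, n), 1.0 / n) if D0 is None else np.array(D0, dtype=float)
    for _ in range(max_iter):
        grad = -(A.T @ D @ B + A @ D @ B.T)      # gradient of f at D
        rows, cols = linear_sum_assignment(grad)  # best permutation vertex
        R = np.zeros((n, n)); R[rows, cols] = 1.0
        S = R - D
        # exact line search: f(D + g*S) = f(D) + b*g + a*g^2 on [0, 1]
        a = -np.trace(S.T @ A.T @ S @ B)
        b = -(np.trace(S.T @ A.T @ D @ B) + np.trace(D.T @ A.T @ S @ B))
        if a > 0:
            g = min(1.0, max(0.0, -b / (2 * a)))
        else:
            g = 1.0 if a + b < 0 else 0.0
        D_new = D + g * S
        if np.linalg.norm(D_new - D) < tol:
            D = D_new
            break
        D = D_new
    # project the doubly stochastic iterate onto the permutation matrices
    rows, cols = linear_sum_assignment(-D)
    P = np.zeros((n, n)); P[rows, cols] = 1.0
    return P, D
\end{verbatim}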
\begin{figure}[t]
\centering
\def\svgwidth{270pt}
\begin{scriptsize}
\input{distances.pdf_tex}
\end{scriptsize}
\caption{Distance from $D^*$ to $P_c$ (in blue), to $P^*$ (in red), and to a random permutation (in black). For each value of $\rho$, we ran 100 MC replicates.}
\label{fig:dist}
\end{figure}
The GLAG algorithm presents an alternate formulation of the graph matching problem. The algorithm convexly relaxes the alternate formulation, solves the relaxation and projects it onto $\Pi$. As demonstrated in \cite{fiori2013nips}, the algorithm's main advantage is in matching weighted graphs and multimodal graphs. The PATH algorithm begins by finding $D^*$, and then solves a sequence of concave and convex problems in order to improve the solution. The PATH algorithm can be viewed as an alternative way of projecting $D^*$ onto $\Pi$. Together with FAQ, these algorithms achieve the current best performance in matching a large variety of graphs (see \cite{fiori2013nips}, \cite{FAQ}, \cite{Zaslavskiy2009}). However, we note that GLAG and PATH often have significantly longer running times than FAQ (even when the time to compute $D^*$ for FAQ:$D^*$ is included); see \cite{FAQ,lyzinski2014spectral}.
Figure \ref{fig:nonconvex} shows the success rate of the graph matching methodologies in recovering $P^*$. The vertical dashed red line at $\rho=0.44$ corresponds to the threshold in Theorem \ref{thd} part {\it a} (above which $P^*$ is optimal whp) for the parameters used in these experiments, and the solid lines correspond to the performance of the different methods: from left to right in gray, FAQ:$P^*$, FAQ:$D^*$, FAQ:$J$; in black, the success rate of $P_c$; the performance of GLAG and PATH are plotted in blue and red respectively.
Observe that, when initializing with $P^*$, the fact that FAQ succeeds in recovering $P^*$ means that $P^*$ is a local minimum, and the algorithm did not move from the initial point. From the theoretical results, this was expected for $\rho>0.44$, and the experimental results show that this is also often true for smaller values of $\rho$. However, this only means that $P^*$ is a local minimum, and the function could have a different global minimum. On the other hand, for very lowly correlated graphs ($\rho<0.3$), $P^*$ is not even a local minimum.
\begin{figure}[t!]
\centering
\def\svgwidth{270pt}
\begin{scriptsize}
\input{methods.pdf_tex}
\end{scriptsize}
\caption{Success rate in recovering $P^*$. In gray, FAQ starting at, from left to right, $P^*$, $D^*$, and $J$; in black, $P_c$; in red, PATH; in blue, GLAG. For each $\rho,$ we ran $100$ MC replicates.}
\label{fig:nonconvex}
\end{figure}
\begin{figure}[t!]
\def\svgwidth{270pt}
\begin{scriptsize}
\input{times2.pdf_tex}
\end{scriptsize}
\caption{Average run time for FAQ:$D^*$ (note that this does not include the time to find $D^*$) and FAQ:$J$ in gray; finding $P_c$ (first finding $D^*$) in black; PATH in red; and GLAG in blue. For each $\rho$, we average over 100 MC replicates. Note that the runtime of PATH drops precipitously at $\rho=0.6,$ which corresponds to the performance increase in Figure \ref{fig:nonconvex}.}
\label{fig:times2}
\end{figure}
The difference in the performance illustrated by the gray lines indicates that the resultant graph matching solution can be improved by using $D^*$ as an initialization to find a local minimum of the indefinite relaxed problem. We see in the figure that FAQ:$D^*$ achieves best performance,
while being computationally less intensive than PATH and GLAG; see Figure \ref{fig:times2} for the runtime results.
This amalgam of the convex and indefinite methodologies (initialize indefinite with the convex solution) is an important tool for obtaining solutions to graph matching problems, providing a computationally tractable algorithm with state-of-the-art performance.
However, for all the algorithms there is still room for improvement. In these experiments, for $\rho\in[0.44,0.7]$ the theory guarantees that with high probability the global minimum of the indefinite problem is $P^*$, yet we cannot find it with the available methods.
When FAQ:$D^*$ fails to recover $P^*$, how close is the objective function at the obtained local minimum to the objective function at $P^*$? Figure \ref{fig:trace} shows $-\langle A'D,DB \rangle$ for the true permutation, $P^*$, and for the pre-projection doubly stochastic local minimum found by FAQ:$D^*$. For $0.35<\rho<0.75$, the state-of-the-art algorithm not only fails to recover the correct bijection, but also the value of the objective function is relatively far from the optimal one. There is a transition (around $\rho\approx 0.75$) where the algorithm moves from getting a wrong local minimum to obtaining $P^*$ (without projection!). For low values of $\rho$, the objective function values are very close, suggesting that both $P^*$ and the pre-projection FAQ solution are far from the true global minimum. At $\rho\approx0.3,$ we see a separation between the two objective function values (agreeing with the findings in Figure \ref{fig:nonconvex}). For $\rho>0.44$, we expect that $P^*$ is the global minimum and the pre-projection FAQ solution is far from $P^*$ until the phase transition at $\rho\approx0.75$.
\begin{figure}[t!]
\hspace{-10mm}
\def260pt{270pt}
\begin{scriptsize}
\input{trace.pdf_tex}
\end{scriptsize}
\caption{Value of $-\langle A'D,DB \rangle$ for $D=P^*$ (black) and for the output of FAQ:$D^*$ (red/blue indicating failure/success in recovering the true permutation). For each $\rho,$ we ran $100$ MC replicates.}
\label{fig:trace}
\end{figure}
\subsection{Real data experiments}
\label{data}
We further demonstrate the applicability of our theory in a series of real data examples. First we match three pairs of graphs where a latent alignment is known. We then compare different graph matching approaches on a set of 16 benchmark problems (those used in \cite{FAQ,Zaslavskiy2009}) from the QAPLIB library of \cite{qaplib}, where no latent alignment is known a priori. Across all of our examples, an intelligently initialized local solution of the indefinite relaxation achieves best performance.
Our first example is from human connectomics. For $45$ healthy patients, we have DT-MRI scans from one of two different medical centers: $21$ patients scanned (twice) at the Kennedy Krieger Institute (KKI), and
$24$ patients scanned (once) at the Nathan Kline Institute (NKI) (all data available at \url{http://openconnecto.me/data/public/MR/MIGRAINE_v1_0/}). Each scan is identically processed via the MIGRAINE pipeline of \cite{wrg1} yielding a $70$ vertex weighted symmetric graph.
In the graphs, vertices correspond to regions in the Desikan brain atlas, which provides the latent alignment of the vertices.
Edge weights count the number of neural fiber bundles connecting
the regions. We first average the graphs within each medical center and then match the averaged graphs across centers.
For our second example, the graphs consist of the two-hop neighborhoods of the ``Algebraic Geometry'' page in the French and English Wikipedia graphs. The 1382 vertices correspond to Wikipedia pages with (undirected) edges representing hyperlinks between the pages. Page subject provides the latent alignment function, and to make the graphs of commensurate size we match the intersection graphs.
Lastly, we match the chemical and electrical connectomes of the C. elegans worm. The connectomes consist of 253 vertices, each representing a specific neuron (the same neuron in each graph). Weighted edges represent the strength of the (electrical or chemical) connection between neurons. Additionally, the electrical graph is directed while the chemical graph is not.
\begin{table}[t!]
\vspace{-4mm}
\caption{$\|A'P-PB\|_F$ for the $P$ given by each algorithm together with the number of vertices correctly matched ($n_{corr.}$) in real data experiments}
\begin{tabular}{c | c | c c c}
\hline
{\bf Algorithm} & & {\bf KKI-NKI} & {\bf Wiki.} & {\bf C. elegans}\\ [0.5ex]
\hline
Truth & $\|A'P-PB\|_F$ &82892.87 & 189.35 & 155.00 \\
& $n_{corr.}$& 70 & 1381 & 253\\ \hline \hline
Convex relax. & $\|A'P-PB\|_F$&104941.16 & 225.27 & 153.38 \\
& $n_{corr.}$& 41 &97 &2\\ \hline
GLAG & $\|A'P-PB\|_F$& 104721.97 & 219.98 & 145.53 \\
& $n_{corr.}$& 36&181&4\\ \hline
PATH & $\|A'P-PB\|_F$&165626.63 & 252.55 & 158.60 \\
&$n_{corr.}$&1&1&1\\% [1ex] adds vertical space
\hline
FAQ:$J$ & $\|A'P-PB\|_F$ &93895.21 & 205.28 & 127.55\\
&$n_{corr.}$& 38&30&1\\ \hline
{\bf FAQ:D$^*$} & {\bf $\|A'P-PB\|_F$}& {\bf 83642.64} & {\bf 192.11} & {\bf 127.50} \\
& {\bf $n_{corr.}$}&{\bf 63}&{\bf 477}&{\bf 5}\\
\hline
\end{tabular}
\label{table:nonlin}
\end{table}
The results of these experiments are summarized in Table \ref{table:nonlin}. In each example, the computationally inexpensive FAQ:$D^*$ procedure achieves the best performance compared to the more computationally expensive GLAG and PATH procedures. This reinforces the theoretical and simulation results presented earlier, and again points to the practical utility of our amalgamated approach. While there is a canonical alignment in each example, the results point to the potential use of our proposed procedure (FAQ:$D^*$) for measuring the strength~of this alignment, i.e., measuring the strength of the correlation between the graphs. If the graphs are strongly aligned, as in the KKI-NKI example, the performance of FAQ:$D^*$ will be close to the truth and a large portion of the latent alignment will be recovered. As the alignment is weaker, FAQ:$D^*$ will perform even better than the true alignment, and the true alignment will be poorly recovered, as we see in the C. elegans example.
What implications do our results have in graph matching problems without a natural latent alignment? To test this, we matched 16 particularly difficult examples from the QAPLIB library of \cite{qaplib}.
We choose these particular examples because they were previously used in \cite{FAQ,Zaslavskiy2009} to assess and demonstrate the effectiveness of their respective matching procedures.
Results are summarized in Table \ref{fig:table}.
We see that in every example, the indefinite relaxation (suitably initialized) obtains the best possible result. Although there is no latent alignment here, if we view the best possible alignment as the ``true'' alignment, then this is indeed suggested by our theory and simulations. As the FAQ procedure is computationally fast (even initializing FAQ at {\it both} $J$ and $D^*$ is often comparatively faster than GLAG and PATH; see \cite{FAQ} and \cite{lyzinski2014spectral}), these results further point to the applicability of our theory. Once again, theory suggests, and experiments confirm, that approximately solving the indefinite relaxation yields the best matching results.
\begin{table}[t!]
\vspace{-4mm}
\caption{$\|A'P-PB\|^2_F$ for the different tested algorithms on 16 benchmark examples of the QAPLIB library.}
\hspace{-2mm}
\includegraphics[width=0.50\textwidth]{qaplib3_crop.pdf}
\vspace{-5mm}
\label{fig:table}
\end{table}
\subsection{Other random graph models}
\label{sec:otherrand}
While the random Bernoulli graph model is the most general edge-independent random graph model, in this section we present analogous experiments for a wider variety of edge-dependent random graph models.
For these models, we are unaware of a simple way to exploit pairwise edge correlation in the generation of these graphs, as was present in Section \ref{sec:crbg}.
Here, to simulate aligned non-isomorphic random graphs, we proceed as follows.
We generate a graph $G_1$ from the appropriate underlying distribution, and then model $G_2$ as an errorful version of $G_1$; i.e., for each vertex pair in $G_1$, we independently flip the corresponding adjacency bit (i.e., $0\mapsto1$ or $1\mapsto0$) with probability $p\in[0,1]$.
We then graph match $G_1$ and $G_2$, and we plot the performance of the algorithms in recovering the latent alignment function across a range of values of $p$.
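A minimal Python/NumPy sketch of this perturbation for an undirected adjacency matrix is given below (names are ours); each strictly upper-triangular adjacency bit is flipped independently with probability $p$ and the result is symmetrized.
\begin{verbatim}
import numpy as np

def bit_flip_graph(A1, p, rng=None):
    # errorful copy of A1: each upper-triangular adjacency bit is
    # flipped independently with probability p, then symmetrized
    rng = np.random.default_rng() if rng is None else rng
    n = A1.shape[0]
    flips = np.triu(rng.random((n, n)) < p, 1).astype(float)
    A2 = np.abs(np.triu(A1, 1) - flips)   # XOR of the upper triangles
    return A2 + A2.T
\end{verbatim}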
We first evaluate the performance of our algorithms on \textit{power law} random graphs \cite{barabasi1999emergence}; these graphs have a degree distribution that follows a power law, i.e., the proportion of vertices of degree $d$ is proportional to $d^{-\beta}$ for some constant $\beta>0$.
These graphs have been used to model many real data networks, from the Internet \cite{albert1999internet,faloutsos1999power}, to social and biological networks \cite{girvan2002community}, to name a few.
In general, these graphs have only a few vertices with high degree, and the great majority of the vertices have relatively low degree.
Figure \ref{fig:powerlaw} shows the performance comparison for the methods analyzed above: FAQ:$P^*$, FAQ:$D^*$, FAQ:$J$, $P_c$, PATH, and GLAG. For a range of $p\in[0,1]$, we generated a 150 vertex power law graph with $\beta=2$, and subsequently graph matched this graph and its errorful version. For each $p$, we have 100 MC replicates.
As with the random Bernoulli graphs, we see from Figure \ref{fig:powerlaw} that the true permutation is a local minimum of the non-convex formulation for a wide range of flipping probabilities ($p\leq0.3$), implying that in this range of $p$, $G_1$ and $G_2$ share significant common structure.
Across all values of $p<0.5$, FAQ:$P^*$ outperforms all other algorithms considered (with FAQ:$D^*$ being second best across this range). This echoes the results of Sections \ref{sec:conv}--\ref{data}, and suggests an analogue of Theorem \ref{thd} may hold in the power law setting. We are presently investigating this.
We next evaluate the performance of our algorithms on graphs with bounded maximum degree (also called {\it bounded valence graphs}). These graphs have been extensively studied in the literature, and for bounded valence graphs, the graph isomorphism problem is in $P$ \cite{luks1982isomorphism}.
For the experiments in this paper we generate a random graph from the model in \cite{balinska2001algorithms} with maximum degree equal to $4$, and vary the graph order from $50$ to $350$ vertices. Figure \ref{fig:bounded} shows the comparison of the different techniques and initializations for these graphs, across a range of bit-flipping parameters $p\in[0,1]$.
\begin{figure}[t!]
\centering
\def\svgwidth{280pt}
\begin{scriptsize}
\input{powerlaw.pdf_tex}
\end{scriptsize}
\caption{Success rate in recovering $P^*$ for $150$ vertex power law graphs with $\beta=2$ for: In gray, from right to left, FAQ:$P^*$, FAQ:$D^*$, and FAQ:$J$; in black, $P_c$; in red, PATH; in blue, GLAG. For each value of the bit-flip parameter $p,$ we ran $100$ MC replicates.}
\label{fig:powerlaw}
\end{figure}
It can be observed that even for isomorphic graphs ($p=0$), all but FAQ:$P^*$ fail to perfectly recover the true alignment.
We did not see this phenomenon in the other random graph models, and this can be explained as follows. It is a well known fact that convex relaxations fail for regular graphs \cite{fiori2014spectral}, and also that the bounded degree model tends to generate almost regular graphs \cite{koponen2012random}. Therefore, even without flipped edges, the graph matching problem with the original graphs is very ill-conditioned for relaxation techniques. Nevertheless, the true alignment is a local minimum of the non-convex formulation for a wide range of values of $p$ (shown by FAQ:$P^*$ performing perfectly over a range of $p$ in Figure \ref{fig:bounded}). We again note that FAQ:$D^*$ outperforms $P_c$, PATH and GLAG across all graph sizes and bit-flip parameters $p$. This suggests that a variant of Theorem \ref{thd} may also hold for bounded valence graphs, and we are presently exploring this.
\begin{figure}[t!]
\vspace{-4mm}
\centering
Success rates for bounded degree graphs
\vspace{3mm}
\def\svgwidth{123pt}
\begin{tiny}
\input{bounded_50.pdf_tex}
\def\svgwidth{123pt}
\input{bounded_100.pdf_tex}
\vspace{0.4cm}
\def\svgwidth{123pt}
\input{bounded_150.pdf_tex}
\def\svgwidth{123pt}
\input{bounded_350.pdf_tex}
\end{tiny}
\caption{Success rate in recovering $P^*$ for bounded degree graphs (max degree $4$). In gray, from right to left, FAQ:$P^*$, FAQ:$D^*$, and FAQ:$J$; in black, $P_c$; in red, PATH; in blue, GLAG. For each probability we ran $100$ MC replicates.}
\label{fig:bounded}
\end{figure}
We did not include experiments with any random
graph models that are highly regular and symmetric (for example, mesh graphs). Symmetry and regularity have two effects on the graph matching problem.
Firstly, it is well known that $P_c\neq P^*$ for non-isomorphic regular graphs (indeed, $J$ is a solution of the convex relaxed graph matching problem).
Secondly, the symmetry of these graphs means that there are potentially several isomorphisms
between a graph and its vertex permuted analogue.
Hence, any
flipped edge could make permutations other than $P^*$ into the minima of the graph matching problem.
\subsection{Directed graphs}
All the theory developed above is proven in the undirected graph setting (i.e., $A$ and $B$ are assumed symmetric). However, directed graphs are common in numerous applications. Figure \ref{fig:directed} repeats the analysis of Figure \ref{fig:nonconvex} with directed graphs, all other simulation parameters being unchanged. The PATH algorithm is not shown in this new figure because it is designed for undirected graphs, and its
performance for directed graphs is very poor.
Recall that in Figure \ref{fig:nonconvex}, i.e., in the undirected setting, FAQ:$J$ performed significantly worse than $P_c$.
In Figure \ref{fig:directed}, i.e., the directed setting, we note that FAQ:$J$ outperforms $P_c$ over a range of $\rho\in[0.4,0.7]$.
As in the undirected case, we again see significant performance improvement (over FAQ:$J$, $P_c$, and GLAG) when starting FAQ from $D^*$ (the convex solution).
Indeed, we suspect that a directed analogue of Theorem \ref{thd} holds, which would explain the performance increase achieved by the nonconvex relaxation over $P_c$.
We note that the remaining examples in this paper are all considered in the undirected setting.
\subsection{Seeded graphs}
In some applications it is common to have some \textit{a priori} information about partial vertex correspondences, and seeded graph matching includes these known partial matchings as constraints in the optimization (see \cite{ModFAQ,JMLR:v15:lyzinski14a,alex}). However, seeds do more than just reducing the number of unknowns in the alignment of the vertices.
Even a few seeds can dramatically increase graph matching performance, and (in the $\rho$-correlated Erd\H os-R\'enyi setting) a logarithmic (in $n$) number of seeds contain enough signal in their seed--to--nonseed adjacency structure to a.s. perfectly align two graphs \cite{JMLR:v15:lyzinski14a}.
Also, as shown in the deterministic graph setting in \cite{alex}, with seeds $D^*$ is very often closer to $P^*$.
In Figure \ref{fig:seeds}, the graphs are generated from the $\rho$-correlated random Bernoulli model with random $\Lambda$ (entrywise uniform over $[0.1,0.9]$). We run the Frank-Wolfe method (modified to incorporate the seeds) to solve the convex relaxed graph matching problem, and the method in \cite{ModFAQ,JMLR:v15:lyzinski14a} to approximately solve the nonconvex relaxation, starting from $J$, $D^*$, and $P^*$. Note that with seeds, perfect matching is achieved even below the theoretical bound on $\rho$ provided in Theorem \ref{thd} (for ensuring $P^*$ is the global minimizer).
This provides a potential way to improve the theoretical bound on $\rho$ in Theorem \ref{thd}, and the extension of Theorem \ref{thd} for graphs with seeds is the subject of future research.
\begin{figure}[t!]
\centering
\def\svgwidth{260pt}
\begin{scriptsize}
\input{directed.pdf_tex}
\end{scriptsize}
\caption{Success rate for directed graphs. We plot $P_c$ (black), the GLAG method (blue), and the nonconvex relaxation starting from different points in green, from right to left: FAQ:$J$, FAQ:$D^*$, FAQ:$P^*$.}
\label{fig:directed}
\end{figure}
With the exception of the nonconvex relaxation starting from $P^*$, all of the different
FAQ initializations and the convex formulation see significantly improved performance as the number of seeds increases. We also observe that the nonconvex relaxation seems to benefit much more from seeds than the convex relaxation.
Indeed, when comparing the performance with no seeds, $P_c$ performs better than FAQ:$J$.
However, with just five seeds, this behavior is inverted.
Also of note, in cases when seeding returns the correct permutation, we have empirically observed that merely initializing the FAQ algorithm with the seeded start, and not enforcing the seeding constraint, also yields the correct permutation as its solution (not shown).
\begin{figure}[t!]
\centering
\def\svgwidth{270pt}
\begin{scriptsize}
\input{seeds1.pdf_tex}
\end{scriptsize}
\caption{Success rate of different methods using seeds. We plot $P_c$ (top left), FAQ:$J$ (top right), FAQ:$D^*$ (bottom left), and FAQ:$P^*$ (bottom right). For each method, the number of seeds increases from right to left: 0 (black), $5$ (green), $10$ (blue) and $15$ (red) seeds. Note that more seeds increases the success rate across the board.}
\label{fig:seeds}
\end{figure}
Figure \ref{fig:times} shows the running time (to obtain a solution) when starting from $D^*$ for the nonconvex relaxation, using different numbers of seeds. For a fixed seed level, the running time is remarkably stable across $\rho$ when FAQ does not recover the true permutation. On the other hand, when FAQ does recover the correct permutation, the algorithm runs significantly faster than when it fails to recover the truth. This suggests that, across all seed levels, the running time might, by itself, be a good indicator of whether the algorithm succeeded in recovering the underlying correspondence or not. Also note that as seeds increase, the overall speed of convergence of the algorithm decreases and, unsurprisingly, the correct permutation is obtained for lower correlation levels.
\subsection{Features}
Features are additional information that can be utilized to improve performance in graph matching methods, and often these features are manifested as additional vertex characteristics besides the connections with other vertices.
For instance, in social networks we may have a complete profile of a person in addition to his/her social connections.
We demonstrate the utility of using features with the nonconvex relaxation, the standard convex relaxation and the GLAG method, each duly modified to include the features in the optimization. Namely, the new objective function to minimize is
$
\lambda F(P) + (1-\lambda)\textup{trace}(C^TP), \,
$
where $F(P)$ is the original cost function ($-\langle AP,PB\rangle$ in the nonconvex setting, $\|AP-PB\|_F^2$ for the convex relaxation and $\sum_{i,j}\|( [AP]_{i,j},[PB]_{i,j})\|_2$ for the GLAG method), the matrix $C$ codes the features fitness cost, and the parameter $\lambda$ balances the trade-off between pure graph matching and fit in the features domain. For each of the matching methodologies, the optimization is very similar to the original featureless version.
For the experiments, we generate $\rho$-correlated Bernoulli graphs as before, and in addition we generate a Gaussian random vector (zero mean, unit variance) of $5$ features for each node of one graph, forming a $5\times n$ matrix of features; we permute that matrix according to $P^*$ to align the new feature vectors with the nodes of the second graph. Then, additive zero-mean Gaussian noise with a range of variance values is added to each feature matrix independently. If for each vertex $v\in[n]$ the resulting noisy feature for $G_i$, $i=1,2$, is $x_v^{(i)}$, then the entries of $C$ are defined to be
$C_{v,w}=\|x_v^{(1)}-x_w^{(2)}\|_2,$
for $v,w\in[n]$.
Finally, we set $\lambda=0.5$.
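The following Python/NumPy sketch shows one way to build the feature matrices and the cost matrix $C$ just described, and to evaluate the combined objective for a candidate permutation matrix; it is illustrative only, the direction in which $P^*$ permutes the feature matrix is one consistent convention, and all names are ours.
\begin{verbatim}
import numpy as np

def feature_cost_matrix(n, Pstar, noise_std, d=5, rng=None):
    # d Gaussian features per vertex, aligned through Pstar, with
    # independent additive Gaussian noise on each side
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((d, n))                   # base features
    X1 = X + noise_std * rng.standard_normal((d, n))  # noisy copy, graph 1
    X2 = X @ Pstar.T + noise_std * rng.standard_normal((d, n))  # graph 2
    # C[v, w] = || x_v^(1) - x_w^(2) ||_2
    diff = X1[:, :, None] - X2[:, None, :]
    return np.sqrt((diff ** 2).sum(axis=0))

def combined_objective(A, B, P, C, lam=0.5):
    # lambda * F(P) + (1 - lambda) * trace(C^T P), nonconvex F
    return -lam * np.trace((A @ P).T @ (P @ B)) + (1 - lam) * np.trace(C.T @ P)
\end{verbatim}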
\begin{figure}[t!]
\centering
\def\svgwidth{280pt}
\begin{scriptsize}
\input{times.pdf_tex}
\end{scriptsize}
\caption{Running time for the nonconvex relaxation when starting from $D^*$, for different number of seeds. A red ``x'' indicates the algorithm failed to recover $P^*$, and a black ``o'' indicates it succeeded. In each, the algorithm was run to termination at discovery of a local min.}
\label{fig:times}
\end{figure}
Figure \ref{fig:features} shows the behavior of the methods when using features for different levels of noise in the feature matrix.
Even for highly noisy features (recalling that both feature matrices are contaminated with noise), this external information still helps in the graph matching problem.
For all noise levels, all three methods improve their performance with the addition of features, and of course, the improvement is greater when the noise level decreases.
Note that, as before, FAQ outperforms both $P_c$ and GLAG across all noise levels.
It is also worth noting that for low noise, FAQ:$D^*$ performs comparably to FAQ:$P^*$, which we did not observe in the seeded (or unseeded) setting.
Even for modestly errorful features, including these features improves downstream matching performance versus the setting without features.
This points to the utility of high fidelity features in the matching task.
Indeed, given that the state-of-the-art graph matching algorithms may not achieve the optimal matching for even modestly correlated graphs, the use of external information like seeds and features can be critical.
\section{Conclusions}
In this work we presented theoretical results showing the surprising fact that the indefinite relaxation (if solved exactly) obtains the optimal solution to the graph matching problem with high probability, under mild conditions. Conversely, we also present the novel result that the popular convex relaxation of graph matching almost always fails to find the correct (and optimal) permutation. In spite of the apparently negative statements presented here, these results have an immediate practical implication: the utility of intelligently initializing the indefinite matching algorithm to obtain a good approximate solution of the indefinite problem.
The experimental results further emphasize the trade-off between tractability and correctness in relaxing the graph matching problem,
with real data experiments and simulations in non edge-independent random graph models suggesting that our theory could be extended to more general random graph settings. Indeed,
all of our experiments corroborate that best results are obtained via approximately solving the intractable indefinite problem.
Additionally, both theory and examples point to the utility of combining the convex and indefinite approaches, using the convex to initialize the indefinite.
\begin{figure}[t!]
\centering
\def\svgwidth{260pt}
\begin{scriptsize}
\input{features.pdf_tex}
\end{scriptsize}
\caption{Success rate of different methods using features: $P_c$ (in black), GLAG (in blue), FAQ:$D^*$ (in red), and FAQ:$P^*$ (in green). For each method, the noise level (variance of the Gaussian random noise) increases from left to right: $0.3$, $0.5$, and $0.7$. In dashed lines, we show the success of the same methods without features.}
\label{fig:features}
\end{figure}
\bibliographystyle{IEEEtran}
| {
"timestamp": "2015-01-13T02:06:04",
"yymm": "1405",
"arxiv_id": "1405.3133",
"language": "en",
"url": "https://arxiv.org/abs/1405.3133",
"abstract": "Graph matching---aligning a pair of graphs to minimize their edge disagreements---has received wide-spread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. Its attention can be partially attributed to its computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, therefore enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches.",
"subjects": "Machine Learning (stat.ML); Optimization and Control (math.OC)",
"title": "Graph Matching: Relax at Your Own Risk",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969665221771,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7099044283223072
} |
https://arxiv.org/abs/1508.05869 | Numerical Approximation of Fractional Powers of Regularly Accretive Operators | We study the numerical approximation of fractional powers of accretive operators in this paper. Namely, if $A$ is the accretive operator associated with an accretive sesquilinear form $A(\cdot,\cdot)$ defined on a Hilbert space $\mathbb V$ contained in $L^2(\Omega)$, we approximate $A^{-\beta}$ for $\beta\in (0,1)$. The fractional powers are defined in terms of the so-called Balakrishnan integral formula. Given a finite element approximation space $\mathbb V_h\subset \mathbb V$, $A^{-\beta}$ is approximated by $A_h^{-\beta}\pi_h$ where $A_h$ is the operator associated with the form $A(\cdot,\cdot)$ restricted to $\mathbb V_h$ and $\pi_h$ is the $L^2(\Omega)$-projection onto $\mathbb V_h$. We first provide error estimates for $(A^\beta-A_h^{\beta}\pi_h)f$ in Sobolev norms with index in [0,1] for appropriate $f$. These results depend on elliptic regularity properties of variational solutions involving the form $A(\cdot,\cdot)$ and are valid for the case of less than full elliptic regularity. We also construct and analyze an exponentially convergent sinc quadrature approximation to the Balakrishnan integral defining $A_h^{\beta}\pi_h f$. Finally, the results of numerical computations illustrating the proposed method are given. | \section{Introduction.}
The mathematical study of integral or nonlocal operators has received much
attention due to their wide range of applications, see for instance \cite{ISI:000175019600004,MR2450437,MR0521262,MR2223347,MR2480109,MR1918950,MR660727,MR1727557,MR1709781}.
Let $\Omega$ be a bounded domain in ${\mathbb R}^d$, $d\geq 1$, with a Lipschitz
continuous boundary $\Gamma$ which is the disjoint union of an open set
$\Gamma_N$ and its complement $\Gamma_D=\Gamma\setminus \Gamma_N$.
We define ${\mathbb V}$ to be the
functions in $H^1(\Omega)$ vanishing on $\Gamma_D$.
We then consider a sesquilinear form
$A(\cdot,\cdot)$ defined for $u,v\in {\mathbb V}$
given by
\begin{equation}\label{e:sesq_form}
\bal
A(u,v)&:= \int_\Omega\bigg( \sum_{i,j=1}^d a_{i,j}(x) u_{i}(x) \overline{v_j(x)}
+\sum_{i=1}^d (a_{i,0}(x) u_{i}(x) \overline{v(x)} \\
&\qquad + a_{0,i}(x) u(x) \overline{v_i(x)}) +a_{0,0}(x) u(x)\overline{v(x)}\bigg).
\eal
\end{equation}
Here the subscript on $u$ and $v$ denotes the partial derivative with
respect to $x_i$, $i=1,..,d$, and $\overline{v}$ denotes the complex
conjugate of $v$.
We further assume that $A(\cdot,\cdot)$ is coercive and bounded
(see, \eqref{coer} and \eqref{bdd} below).
Such a sesquilinear form is called \emph{regular} \cite{kato1961}.
There is an unbounded operator $A$ on $L^2(\Omega)$ with domain of
definition $D(A)$ associated with a regular sesquilinear form (see \cite{kato1961} and Section~\ref{s:notation}
below). The unbounded operator associated with such a form is called a
\emph{regularly accretive} operator \cite{kato1961}. For such operators,
the fractional powers are well defined, typically, in terms of
Dunford-Taylor integrals. When
$0<\beta<1$, one can
also use the Balakrishnan formula
\cite{balakrishnan,kato1960,kato1961}:
\beq
A^{-\beta}=\frac {\sin(\beta \pi)}\pi \int_0^\infty \mu^{-\beta} (\mu I
+A)^{-1}\, d\mu.
\label{A-beta}
\eeq
In this paper,
we propose a numerical method for the approximation of $A^{-\beta}f$ based
on the finite element
method with an approximation space ${\mathbb V}_h\subset {\mathbb V}$.
It is worth mentioning that a similar formula can be used to represent the solution of a Cauchy problem.
This is the focus of \cite{wenyu}.
Several techniques for approximating $S^{-\beta}f$
are available when $S$ is the operator associated with a Hermitian
form (or symmetric and real
valued on a real-valued functional space).
\modif{The most natural technique} involves approximating $S^{-\beta}$ by
$S_h^{-\beta}$ where $S_h$ is a discretization of $S$, e.g., via the
finite element method using ${\mathbb V}_h$. In this case, $S_h^{-\beta}f$ for
$f\in {\mathbb V}_h$ can
be expressed in terms of the discrete eigenvector expansion \cite{MR2300467,MR2252038,MR2800568}:
$$S_h^{-\beta}f= \sum_j c_j \lambda_{j,h}^{-\beta}\psi_{j,h} \quad
\hbox{where}\quad f= \sum_j c_j \psi_{j,h}.$$
Here $(\lambda_{j,h},\psi_{j,h})$ denote the eigenpairs of $S_h$.
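In matrix terms this eigenvector expansion is straightforward to realize; the following Python/NumPy sketch applies $S^{-\beta}$ to a vector for a symmetric positive definite matrix $S$ (illustrative only, with our own names; for a genuine finite element operator $S_h$ the mass matrix would enter as well).
\begin{verbatim}
import numpy as np

def fractional_inverse_apply(S, f, beta):
    # apply S^{-beta} to f via the eigenvector expansion above,
    # for a symmetric positive definite matrix S
    lam, Psi = np.linalg.eigh(S)   # eigenpairs (lambda_j, psi_j)
    c = Psi.T @ f                  # coefficients of f in the eigenbasis
    return Psi @ (lam ** (-beta) * c)
\end{verbatim}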
An alternative approach is based on a representation of $S^{-\beta}f$ via a ``Neumann to Dirichlet'' map \cite{MR2354493}.
\modif{The numerical algorithm proposed and analyzed in \cite{Abner, AbnerPost} consists
of a finite element method in one higher dimension.
It takes advantage of
the rapid decay of the solution in the additional direction enabling
truncation to a bounded domain of modest size.}
A third approach, which is valid for more general $A$,
is based on finite element approximation with an analysis
\modif{employing} the Dunford-Taylor characterization of
$A^{-\beta} f$ \cite{FujitaSuzuki} (see, also, \cite{MR1255054,Ushijima}) and is most closely related to the approach which we will
take in this paper. However, the analysis of \cite{FujitaSuzuki}
only provides errors in
$L^2(\Omega)$, requires full elliptic regularity and fails
to elucidate
the relation between the convergence rate and the smoothness of $f$.
For example, the result in \cite{FujitaSuzuki} does not hold for problems
on non-convex domains or problems with jumps in the higher order
coefficients.
The approach that we shall take in this paper is based on
\eqref{A-beta}.
The introduction of finite elements on a subspace ${\mathbb V}_h\subset {\mathbb V}$
leads to a discrete approximation $A_h$ to $A$. The finite element
approximation to \eqref{A-beta} is then given by
\begin{equation}\label{A-betah}
A_h^{-\beta}\pi_h:=\frac {\sin(\beta \pi)}\pi \int_0^\infty \mu^{-\beta} (\mu I
+A_h)^{-1}\pi_h \, d\mu,
\end{equation}
where $\pi_h$
is the $L^2(\Omega)$-projection onto $\mathbb V_h$.
In \cite{bp-fractional}, we proved the convergence in $L^2(\Omega)$ of
an equivalent version of this method when $A=S$ is real, symmetric and
positive definite. We also showed
the exponential convergence of a sinc quadrature approximation.
The current paper extends the approach of \cite{bp-fractional}
to the case when $A$ is a regularly accretive operator.
The proof provided in \cite{bp-fractional} is based on the fact that, in
the Hermitian case, the domain of $S^\gamma$, for $\gamma\in {\mathbb R}$, is naturally characterized
in terms of the decay of the coefficients in expansions involving the
eigenvectors of $S$. Assuming elliptic regularity, it is then possible
to show that $D(S^{s/2})$ for $0\leq s \leq 1+\alpha$ coincides with
standard Hilbert spaces. Here $\alpha$ is the regularity parameter (see
below). Thus, norms of the operator \modif{$(\mu I + S)^{-1}$} acting between the
standard Sobolev spaces can be bounded using their series expansions,
the norms in $D(S^{s/2})$, and Young's inequalities.
In contrast, the spaces $D(A^\gamma)$ cannot be characterized in such a
simple way when $A$ is not Hermitian.
The main result of this paper is the following error estimate (Theorem~\ref{FEinterror} and Remark~\ref{r:conv_s12}): for any $0\leq r \leq 1$ there exists $\delta \geq 0$ and a constant $C$ independent of $h$ such that for $f\in D(A^\delta)$
\modif{\begin{equation}\label{e:error_intro}
\| (A^{-\beta}-A_h^{-\beta}\pi_h)f\|_{H^{r}(\Omega)} \leq C
h^ {\alpha+\min(1-r,\alpha)}\|A^\delta f\|
\end{equation}
with $C$ being replaced by $C \log(h^{-1})$ for certain combinations of
$r$, $\alpha$, $\delta$ and $\beta$.
Here }
$\alpha>0$ is the so-called elliptic regularity pick-up which is the
regularity above $H^1(\Omega)$ expected for $A^{-1}f$ for appropriate
$f$,
see Assumption~\ref{ellreg}. Even in the case of
Hermitian $A$, the above result extends those in \cite{bp-fractional} to $r>0$.
This paper shows that the general approach for proving \eqref{e:error_intro}
in \cite{bp-fractional} can be extended to the case of regularly accretive
$A$ with additional technical machinery. Some of the
most challenging issues involve the relationship between
$D(A^{s/2})$, for $s\in
[0,1+\alpha]$ and fractional Sobolev spaces.
The case of $s\in [0,1)$ is contained in the acclaimed
paper by Kato \cite{kato1961} showing that for regularly accretive
operators, $D(A^{s/2})$ coincides with the interpolation space
between $L^2(\Omega)$ and ${\mathbb V}$ defined using the real method.
The case of $s=1$ is the celebrated Kato Square Root Problem. This is a
deep result which has been intensively studied
(see, \cite{Agranovich,AxelKeithMc,mcintosh90,MR1255054} and the references in
\cite{mcintosh90}). The results in those papers give conditions when
one can conclude that $D(A^{1/2})=D((A^*)^{1/2})={\mathbb V}$, \modif{where $A^*$ stands for the adjoint of $A$}.
Motivated by the approach of Agranovich and
Selitskii \cite{Agranovich} for proving the Kato Square Root Problem, we
show in this paper that under elliptic regularity assumptions,
$H^s(\Omega)\cap {\mathbb V} \subset D(A^{s/2}) $ and $ H^s(\Omega)\cap {\mathbb V}
\subset D((A^*)^{s/2})$ for $s\in [0,1+\alpha]$
with equality when additional injectivity assumptions on $A^{-1}$ and
$(A^*)^{-1}$ hold. With this information, the norms of $(\mu I+A)^{-1}$
acting between Sobolev spaces can be bounded in terms of the $L^2(\Omega)$
operator norm of $A^t (\mu I+A)^{-1}$ with $t\in [0,1]$. This, in turn,
(see
Lemma~\ref{l:both_bounds})
can be bounded by interpolation using the fact that $D(A^t)$ coincides
with the interpolation scale between $L^2(\Omega)$ and $D(A)$ (using
the complex method).
Similar results for $0\leq s \le 1$ are also required for the finite element approximation $A_h$.
In a discrete setting, the question is to guarantee the existence of a constant $C$ independent of $h$ such that for all $v_h \in \mathbb V_h$
$$
C^{-1} \| v_h \|_{H^{s}(\Omega)} \leq \| A_h^{s/2} v_h\|_{L^2(\Omega)} \leq C \| v_h \|_{H^{s}(\Omega)}.
$$
This is provided by Lemma~\ref{l:characterization_discrete} for $0\leq s
< 1$ and
a solution to the discrete Kato problem ($s=1$) is given in Theorem~\ref{t:discrete_kato}.
We also study a sinc quadrature approximation to $A_h^{-\beta}f$ for
$f\in {\mathbb V}_h$. A change of integration variable shows that
\beq
A_h^{-\beta} =\frac {\sin(\beta \pi)}\pi \int_{-\infty}^\infty
e^{(1-\beta)y} (e^y I
+A_h)^{-1}\, dy. \label{cint}
\eeq
Motivated by \cite{lundbowers}, the sinc quadrature approximation to
$A_h^{-\beta}$ is given by
\beq
{\mathcal Q}^{-\beta}_k (A_h):= \frac{k\sin(\pi \beta)}{\pi} \sum_{\ell=-N}^N e^{(1-\beta) y_\ell} (e^{y_\ell}I+ A_h)^{-1}.
\label{quadh}
\eeq
Here $k:=1/\sqrt N$ is the quadrature step size, $N$ is a positive
integer \modif{and $y_\ell:= \ell k$ for $\ell = -N,...,N$.}
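At the matrix level, each term of \eqref{quadh} amounts to one linear solve. The following sketch (illustrative only; the matrix $\mathrm K$ of $A(\cdot,\cdot)$ on $\mathbb V_h$, the mass matrix $\mathrm M$, and the load vector $\mathbf b$ with $\mathbf b_i=(f,\varphi_i)$ are assumed to be provided by a finite element assembly that is not shown) applies ${\mathcal Q}^{-\beta}_k(A_h)\pi_h$ to $f$:
\begin{verbatim}
# Illustrative sketch of Q_k^{-beta}(A_h) pi_h f at the matrix level.  K is the
# matrix of A(.,.) on V_h, M the mass matrix and b the load vector with
# b_i = (f, phi_i); their assembly is assumed and not shown.
import numpy as np

def sinc_fractional_apply(K, M, b, beta, N):
    k = 1.0 / np.sqrt(N)                        # quadrature step size
    u = np.zeros_like(b, dtype=complex)         # complex to allow complex forms
    for ell in range(-N, N + 1):
        y = ell * k
        # (e^y I + A_h)^{-1} pi_h f corresponds to solving (e^y M + K) x = b
        x = np.linalg.solve(np.exp(y) * M + K, b)
        u += np.exp((1.0 - beta) * y) * x
    return k * np.sin(np.pi * beta) / np.pi * u
\end{verbatim}
Dense solves are used only for brevity; the $2N+1$ systems are independent and can be solved in parallel.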
The standard tools related to the sinc quadrature together with the
characterization of
$D(A^{s/2})$ for $s\in [0,1/2]$ mentioned above
yield the quadrature error estimate (Remark~\ref{rem:MN})
$$
\|(A_h^{-\beta}- {\mathcal Q}^{-\beta}_k(A_h))\pi_h \|_{\dH{2s}\to \dH{2s}}
\le C_Q
e^{-\pi^2/(2k)},
$$
where $C_Q$ is a constant independent of $k$ and $h$.
The outline of this paper is as follows.
In Section~\ref{s:notation}, we introduce the notations and properties related to operator calculus with non-Hermitian operators.
Section~\ref{s:symmetric} is devoted to the study of the Hermitian part of $A$ and the related dotted spaces.
The latter is instrumental for the characterization of $D(A^{\frac s 2})$ discussed in Section~\ref{s:characterization}.
The finite element approximations are then introduced in Section~\ref{s:fem}, which also contains the proof of the error estimate \eqref{e:error_intro}.
This, coupled with the exponentially convergent sinc quadrature studied in Section~\ref{s:SINC}, yields the final error estimate for the fully discrete and implementable approximation.
We end this work with Section~\ref{s:numerical} providing a numerical illustration of the approximation of fractional convection-diffusion problems.
\section{Fractional Powers of non-Hermitian Operators} \label{s:notation}
We recall that $\Omega$ is a bounded domain in ${\mathbb R}^d$ with a Lipschitz
continuous boundary $\Gamma$ which is the disjoint union of an open set
$\Gamma_N$ and its complement $\Gamma_D=\Gamma\setminus \Gamma_N$.
Let $L^2(\Omega)$ be the space of complex valued functions on $\Omega$ with square integrable absolute value and denote by $\|.\|$ and $(.,.)$ the corresponding norm and Hermitian inner product.
Let $H^1(\Omega)$ be the Sobolev space of complex valued functions on
$\Omega$ and set
$$\mathbb V:=\{v\in H^1(\Omega),\ v=0 \hbox{ on } \Gamma_D\},$$
the restriction of $H^1(\Omega)$ to functions with vanishing traces on $\Gamma_D$.
We implicitly assume that $\Gamma_D$ is such that the trace operator
from $H^1(\Omega)$ is bounded into $L^2(\Gamma_D)$, e.g.,
$\Gamma_D$ does
not contain any isolated sets of zero $d-2$ dimensional measure.
We denote $\|.\|_{1}$ and $\|.\|_{\mathbb V}$ to be
the norms on $H^1(\Omega)$ and $\mathbb V$ defined respectively by
\begin{equation*}
\| v \|_{1}:= \left(\int_\Omega |v|^2+ \int_\Omega | \nabla v|^2\right)^{1/2}, \qquad
\| v \|_{\mathbb V}:= \left( \int_\Omega | \nabla v|^2\right)^{1/2}.
\end{equation*}
For convenience, we avoid the situation where the variational space
requires $L^2(\Omega)$-orthogonalization, i.e., the Neumann problem
without a zeroth order term.
For a bounded operator $G:X(\Omega)\to Y(\Omega)$ between two Banach spaces $(X(\Omega),\|.\|_X)$ and $(Y(\Omega),\|.\|_Y)$ we write
$$
\| G \|_{X\to Y} := \sup_{u \in X, \| u \|_X =1} \| Gu\|_Y
$$
and, in short, $\| G \|:= \| G \|_{L^2 \to L^2}$.
Throughout this paper we use the notation $A \preceq B$ to denote $A\leq
C B$ with a constant $C$ independent of $A$, $B$ and the discretization
mesh parameter
$h$ (defined later).
When appropriate, we shall be more explicit on the dependence of $C$.
Consider the sesquilinear form \eqref{e:sesq_form}
for $ u,v$ in $\mathbb V$.
We assume that
$A(\cdot,\cdot)$ is strongly
elliptic and
bounded.
The assumption
of strong ellipticity is the existence of a positive constant $c_0$
satisfying
\beq
\Re(A(v,v))\ge c_0 \|v\|^2_{\mathbb V}, \qquad \hbox{for all } v\in \mathbb V.
\label{coer}
\eeq
The boundedness of $A(\cdot,\cdot)$ on $\mathbb V$ implies the existence of
a positive constant $c_1$ satisfying
\beq
|A(u,v)|\le c_1 \|u\|_{\mathbb V}\|v\|_{\mathbb V}, \qquad \hbox{for all } u,v\in \mathbb V.
\label{bdd}
\eeq
The conditions \eqref{coer} and \eqref{bdd} imply that the sesquilinear form
$A(\cdot,\cdot)$ is regular on ${\mathbb V}$ (see, Section 2 of
\cite{kato1961}).
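For orientation, one concrete instance of \eqref{e:sesq_form} (the coefficients and sign conditions below are illustrative assumptions in the spirit of the convection-diffusion experiments of Section~\ref{s:numerical}; they are not required in the analysis) is
$$
A(u,v)=\int_\Omega \kappa\, \nabla u\cdot \overline{\nabla v} + (\mathbf b\cdot \nabla u)\, \overline{v} + c\, u \overline{v},
$$
which satisfies \eqref{coer} and \eqref{bdd} when, e.g., $\kappa\ge \kappa_0>0$ and $c\ge 0$ are bounded, $\mathbf b$ is a bounded real divergence-free vector field with $\mathbf b\cdot \mathbf n\ge 0$ on $\Gamma_N$, and a Poincar\'e inequality holds on $\mathbb V$.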
Following \cite{kato1961}, we define the Hermitian forms
$$ \Re A(u,v) := \frac {A(u,v)+\overline{A(v,u)} } 2 \quad \hbox{ and }
\quad
\Im A(u,v) := \frac {A(u,v)-\overline{A(v,u)} }{2i}.$$
Note that \eqref{coer} implies that $\Re A(u,u) $ is equivalent to
$\|u\|_{\mathbb V}^2$, for all $u\in {\mathbb V}$, and
\eqref{bdd} is equivalent to
\beq
|\Im A(u,u)| \le \eta\, \Re A(u,u), \qquad \hbox{for all } u\in {\mathbb V},
\label{index}
\eeq
for some $\eta>0$. The smallest constant $\eta$ above is called the
index of $A(\cdot,\cdot)$.
We now define operators associated with regular sesquilinear forms.
Let $A$ be the unique closed m(aximal)-accretive operator
of Theorem 2.1 of \cite{kato1961}, which is defined as follows.
We set
$\widetilde T:L^2(\Omega)\rightarrow {\mathbb V}$ by
$$A(\widetilde Tu,\phi)=(u,\phi),\qquad \hbox{for all } \phi\in {\mathbb V}$$
(which is uniquely defined by the Lax-Milgram Theorem)
and set
$$D(A):=\hbox{Range}(\widetilde T)\subset {\mathbb V}.$$
As $\widetilde T$ is one to one,
we define $Aw :=\widetilde T^{-1}w$, for $w\in D(A)$.
The operator $A$ associated with a regular sesquilinear form is said to be \emph{regularly accretive}.
\modif{We denote by $\mathbb V_a^*$ the set of
bounded antilinear functionals on $\mathbb V$.}
It will be useful to consider also a related bounded operator
$T_a: {\mathbb V}_a^* \rightarrow {\mathbb V}$ defined for $F\in
{\mathbb V}_a^*$ by the unique solution (Lax-Milgram again) to
$$
A(T_aF,\phi)=\langle F,\phi \rangle,\qquad \hbox{for all } \phi \in {\mathbb V}.
$$
Here $\langle \cdot,\cdot\rangle $ denotes the antilinear
functional/function pairing.
The operator $T_a$ is a bijection of ${\mathbb V}_a^*$ onto ${\mathbb V}$ and we denote its inverse
by $A_a$.
From the definition of $T_a$, we readily deduce that for $u\in {\mathbb V}$, $A_a u$ satisfies
\beq
\langle A_a u,v \rangle = A(u,v), \qquad \hbox{for all } v\in \mathbb V.
\label{Fa}
\eeq
It is also clear from the
definition of $A$ that $u\in D(A)$ if and only if $A_au$ extends to a
bounded antilinear functional on $L^2(\Omega)$ and, then,
$$(Au,v) =\langle A_au,v\rangle = A(u,v),
\qquad \hbox{for all } u\in D(A),v\in \mathbb V.
$$
\modif{The above constructions can be repeated for adjoints defining
$\mathbb V_l^*$, the set of
bounded linear functionals on $\mathbb V$,} $\widetilde T^*:L^2(\Omega)\rightarrow {\mathbb V}$, $D(A^*):=\hbox{Range}(\widetilde T^*) \subset {\mathbb V}$,
$A^*:= (\widetilde T^*)^{-1}:D(A^*) \rightarrow L^2(\Omega)$, $T^*_l:{\mathbb V}_l^* \rightarrow {\mathbb V}$ and $A^*_l:=(T^*_l)^{-1}$.
In this case, for $v \in {\mathbb V}$, $A^*_lv$ satisfies
\beq
\langle u,A^*_l v \rangle = A(u,v), \qquad \hbox{for all } u\in \mathbb V,
\label{Fa*}
\eeq
with $\langle \cdot,\cdot\rangle $ also denoting the function/linear
functional pairing.
We also have that $v\in D(A^*)$ if and only if $A^*_lv$ extends to a
bounded linear functional on $L^2(\Omega)$ and, then,
$$(u,A^*v) =\langle u,A_l^*v\rangle = A(u,v),
\qquad \hbox{for all } u\in \mathbb V, v\in D(A^*).$$
Of course, these definitions imply that for $u\in D(A)$ and $v\in D(A^*)$,
$(Au,v)=(u,A^*v)$.
By construction, the operators $A$ and $A^*$ defined from regular sesquilinear
forms are regularly accretive (cf. \cite{kato1961}).
They satisfy
the following theorem (see, also \cite{FujitaSuzuki}):
\begin{theorem} [Theorem 2.2 of \cite{kato1961}] \label{resbound}
Let $A$ be the unique regularly accretive operator
defined from a regular sesquilinear form
$A(\cdot,\cdot) $ with index $\eta$. Set $\omega:=\arctan(\eta)$.
Then the numerical range and the
spectrum
of $A$ are subsets of the sector $S_{\omega}:=\{z\in {\mathbb C}\ : \
|\arg z|\leq\omega\}$. Further, the resolvent set $\rho(A)$
of $A$ contains $S_{\omega}^c:={\mathbb C} \setminus S_{\omega}$ and on this set the resolvent
$R_z(A):= (A-zI)^{-1}$ satisfies
$$\|R_z(A)\| \le \left \{
\begin{array}{ll}
[|z|\sin(\arg(z)-\omega)]^{-1} &\qquad
\text{for} \quad \omega<\arg(z) \le \frac \pi 2 +\omega,\\
|z|^{-1} & \qquad \text{for} \quad \arg(z)>\frac \pi 2 +\omega.
\end{array} \right .
$$
The result also holds for $A$ replaced by $A^*$.
\end{theorem}
\begin{remark} \label{cz2}
It easily follows from \eqref{coer} that for $\Re(z)\le c_0/2$,
$$\frac{c_0}2 \|u\|_1^2 \le |A(u,u)-z(u,u)|$$
which implies that
$$\|R_z(A)f\|_1 \le \frac 2{c_0}\|f\|.$$
\end{remark}
It follows from Theorem~\ref{resbound} and Remark~\ref{cz2} that the Bochner
integral appearing in \eqref{A-beta}, for $\beta\in (0,1)$,
is well defined and gives a bounded operator $A^{-\beta}$
on
$L^2(\Omega)$.
Fractional powers for positive indices can be defined from those with
negative indices.
For $\beta\in
(0,1)$,
$$D(A^\beta)=\{ u\in L^2(\Omega) \ : \ A^{\beta-1}u\in D(A)\}$$
and $A^\beta u := A(A^{\beta-1})u$ for $u\in D(A^\beta)$.
An alternative but equivalent definition of fractional powers of
positive operators (for $\beta\in (-1,1)$) is given in, e.g.,
\cite{lunardi}. We shall recall some additional properties provided
there (for $\beta\in(0,1)$).
Theorem 4.6
of \cite{lunardi} implies that $D(A)\subset D(A^\beta)$ and
for $v\in D(A)$, $A^\beta v = A^{\beta-1}
Av $ and $Av=A^\beta A^{1-\beta}v= A^{1-\beta} A^{\beta}v$. Also,
for any $\beta>0$, $D(A^\beta) = \{A^{-\beta}v\ : v\in L^2(\Omega)\}$ and
$A^\beta v= (A^{-\beta})^{-1}v$ for $v\in D(A^\beta)$.
Set $w:=A^\beta v$. The last statement in the previous paragraph
implies that $w = (A^{-\beta})^{-1}v$. Now as
$ (A^{-\beta})w=v$,
$$\|v\| = \| A^{-\beta} w\| \le \| A^{-\beta} \| \|w\|
= \| A^{-\beta} \| \|A^\beta v\|.$$
This implies that we can take
\begin{equation}\label{d:norm_DA}
\|v\|_{D(A^s)}:=\|A^s v\|
\end{equation}
as our norm on $D(A^s)$ for $s\in [0,1)$.
Using the above and techniques from functional calculus \cite{haase},
we can conclude similar facts concerning products of fractional powers
and the resolvent, namely,
\beq A^{-\beta} R_z(A) u = R_z(A) A^{-\beta} u, \qquad \hbox{for all } u\in
L^2(\Omega),\beta\ge 0,
\label{anegcom}
\eeq
and
\beq A^{\beta} R_z(A) u = R_z(A) A^{\beta} u, \qquad \hbox{for all } u\in
D(A^\beta),\beta\in [0,1].
\label{aposcom}
\eeq
We shall also connect fractional powers of
operators with their adjoints in the $L^2(\Omega)$-inner product.
We have already noted that for $u\in D(A)$ and $v\in D(A^*)$,
$(Au,v)=(u,A^*v)$. This holds for fractional powers as well, i.e.,
$(A^{\beta} u,v)=(u,(A^*)^\beta v)$
provided that $u\in D(A)$ and $v\in D(A^*)$.
\section{The Hermitian Operator and the Dotted Spaces.}\label{s:symmetric}
For notational simplicity, we set $S(u,v):=\Re A(u,v)$.
As already noted, $S(u,u)^{1/2} $ provides an equivalent norm
on ${\mathbb V}$ and we redefine $\|u\|_{\mathbb V}:=S(u,u)^{1/2}$.
As $S(\cdot,\cdot)$ is regular (i.e. satisfies \eqref{coer} and \eqref{bdd} with $A(.,.)$ replaced by $S(.,.)$),
there is an
associated (m-accretive) unbounded operator $S$. The latter is defined similarly as $A$ from $A(.,.)$ in Section~\ref{s:notation} (see also \cite{kato1961}).
This is, upon first defining $T_S:L^2(\Omega)\rightarrow {\mathbb V}$ by
$T_Sf:=w$ where $w\in {\mathbb V}$ is the unique solution of
$$S(w,\phi)=(f,\phi) \qquad \hbox{for all } \phi\in {\mathbb V}$$
and then setting $D(S):=\hbox{Range}(T_S)$, $S:=T_S^{-1}$.
In addition, as $S(\cdot,\cdot)$
is symmetric and coercive, $S$ is self-adjoint and satisfies
$$S(u,v)=(S^{1/2}u,S^{1/2}v).$$
We consider the Hilbert scale
of spaces defined by $\dH s:=D(S^{s/2})$ for $s\ge 0$.
The above discussion implies
\begin{equation}\label{e:dot_endpoints}
\dH1={\mathbb V} \qquad \text{and} \qquad \dH0=L^2(\Omega).
\end{equation}
Moreover, the operator $T_S$ is a compact Hermitian operator on $L^2(\Omega)$
and so there is a countable $L^2(\Omega)$-orthonormal
basis $\{\psi_i, \ i=1,\ldots,\infty\} $ of
eigenfunctions for $T_S$. The corresponding eigenvalues $\{\mu_i\}$ can
be ordered so that they are non-increasing with limit 0 and we set
$\lambda_i=\mu_i^{-1}$. This leads to a realization
of $\dH s$ in terms of eigenfunction expansions, namely, for $s\in (0,1)$
\begin{equation}\label{e:dot_intermediate}
\dH s = D(S^{s/2}) =\bigg\{w=\sum_{j=1}^\infty (w,\psi_j) \psi_j
\in L^2(\Omega) \ : \ \sum_{j=1}^\infty |(w,\psi_j)|^2
\lambda_j^s<\infty\bigg \}.
\end{equation}
The spaces $\dH s $ are Hilbert spaces with inner product
$$(u,v)_s := \sum_{j=1}^\infty \lambda_j^s (u,\psi_j) \overline
{(v,\psi_j)}.$$
Moreover, they are a Hilbert scale of spaces and are also
connected by the real
interpolation method.
As already mentioned $\dH 1 = \mathbb V$ so that the set of antilinear functionals on $\mathbb V$, denoted ${\mathbb V}^*_a$, can be characterized by
$$
{\mathbb V}^*_a=\dH{-1}_a: =\bigg \{
\bigg \langle \sum_{j=1}^\infty c_j \psi_j,\cdot\bigg \rangle \ : \
\sum_{j=1}^\infty |c_j|^2
\lambda_j^{-1}<\infty\bigg\},
$$
where $\bigg \langle \sum_{j=1}^\infty c_j \psi_j, \sum_{j=1}^\infty d_j \psi_j \bigg \rangle := \sum_{j=1}^\infty c_j \overline{d}_j$.
In addition, the set of antilinear functionals on $L^2(\Omega)$,
denoted by
$L^2(\Omega)^*_a$, is given by
$$L^2(\Omega)_a^*=\dH{0}_a: =\bigg \{\bigg\langle \sum_{j=1}^\infty c_j
\psi_j,\cdot\bigg
\rangle\ : \
\sum_{j=1}^\infty |c_j|^2
<\infty\bigg\}.$$
Hence, the intermediate spaces are defined by
$$\dH{-s}_a: =\bigg \{
\bigg \langle \sum_{j=1}^\infty c_j \psi_j,\cdot\bigg \rangle \ : \
\sum_{j=1}^\infty |c_j|^2
\lambda_j^{-s}<\infty\bigg\}$$
and are Hilbert spaces with the obvious inner product. These also
are a Hilbert scale of interpolation spaces for $s\in [-1,0]$.
In addition, these spaces are dual to $\dH s$,
i.e., if $s\in [0,1]$ and
$$\langle w,\cdot \rangle = \bigg \langle \sum_{j=1}^\infty c_j \psi_j,\cdot\bigg \rangle \in \dH {-s}_a$$
then
\beq
\|w\|_{\dH {-s}_a}=\bigg(\sum_{j=1}^\infty \lambda_j^{-s} |c_j|^2
\bigg)^{1/2}
=\sup_{\theta\in \dH s} \frac {\langle w,\theta\rangle}
{\|\theta\|_{\dH s }}
\label{dual}
\eeq
and if $\theta\in {\dH s}$,
\beq
\|\theta\|_{\dH {s}}
=\sup_{w\in \dH {-s}_a} \frac {\langle w,\theta\rangle}
{\|w\|_{\dH {-s}_a }}.
\label{dualother}
\eeq
Considering linear functionals instead of antilinear functionals
and replacing ${\mathbb V}_a^*$ and $L^2(\Omega)^*_a$ by
spaces of linear functionals, ${\mathbb V}_l^*$ and $L^2(\Omega)_l^*$ gives
rise to the analogous Hilbert scale with $\dH{-1}_l={\mathbb V}_l^*$ and
$\dH0_l=L^2(\Omega)_l^*$ as endpoints with equalities similar to
\eqref{dual} and \eqref{dualother} holding for these as well.
As we shall see in Section~\ref{s:characterization}, $D(A^{s/2})$ relates either to $\dH s$ or $H^s(\Omega)\cap {\mathbb V}$
depending on whether $s>0$ is smaller or greater than $1$.
In order to unify the presentation, we introduce the following spaces equipped with their natural
norms:
$$ \tH {s}:= \left \{\bal
\dH s &\qquad \hbox{for }s\in [0,1],\\
H^s(\Omega)\cap {\mathbb V} &\qquad \hbox{for }s\ge 1.
\eal \right.
$$
\section{Characterization of $D(A^{\frac s 2})$} \label{s:characterization}
In this section, we first observe that the dotted spaces for $s\in [0,1)$
coincide with the domains of
fractional powers of $A$ and $A^*$ (c.f., \cite{kato1961}). In addition,
we note that the dotted spaces $\dH{-s}_a$ and $\dH{-s}_l$ can be
identified with the dual
space of $\dH s$, for $s\in [0,1]$. The case of $\tH s:=\dH s$, $s\in (0,1)$
is addressed in the following theorem which is an immediate consequence
of Theorem~3.1 of \cite{kato1961}.
\begin{theorem}[Characterization of $D(A^{\frac s 2})$ for $0\leq s <1$] \label{cont0_1_2}
Assume that \eqref{coer} and \eqref{bdd} hold.
Then for $s\in [0,1 )$,
$$
D(A^{s/2})=D((A^*)^{s/2})=D(S^{s/2})=\tH s,
$$
with equivalent norms.
\end{theorem}
The identification of the negative dotted spaces with the duals
is given in the following remark.
\begin{remark}[Characterization of Negative Spaces] \label{identify}
We identify $f\in L^2(\Omega)$ with the functional
$F^f_a\in L^2(\Omega)^*_a$ defined by
$$\langle F^f_a,\theta\rangle=(f,\theta),\qquad \hbox{for all } \theta\in L^2(\Omega).$$
It follows from Theorem~\ref{cont0_1_2} and \eqref{dual} that the norms
$\|F^f_a\|_{\dH {-2s} _a} $ and $\|A^{-s} f\|$ are equivalent (for $s\in
[0,1/2)$). Indeed,
$$\bal
\|F_a^f\|_{\dH {-2s} _a}&=\sup_{\phi\in \dH {2s} } \frac {\langle F_a^f,\phi
\rangle } {\|\phi\|_{\dH {2s}}}\approx \sup_{\phi\in \dH {2s} } \frac {(f,\phi)}
{\|(A^*)^{s}\phi\|} \\
&=\sup_{\theta\in L^2(\Omega) } \frac {
(f,(A^*)^{-s}\theta)}
{\|\theta\|} =\|A^{-s} f\|.
\eal
$$
Here $\approx$ denotes comparability with constants independent of $f$.
For simplicity, we shall write
$\|f\|_{\dH {-2s} _a}$ instead of $\|F^f_a\|_{\dH {-2s} _a} $.
We can identify $L^2(\Omega)$ with $L^2(\Omega)^*_l$ in an analogous way
and similar norm equivalences hold.
\end{remark}
Elliptic regularity is required to obtain convergence rates for finite
element approximation.
Such results for boundary value problems have been
studied by many authors \cite{brbacuta,costabel,dauge,MR1173209,
kellogg,nic1,Kondratev,nazplam,nicaise}.
The following assumption
illustrates the type of elliptic regularity results available.
\begin{assumption}[Elliptic Regularity]\label{ellreg}
We shall assume elliptic
regularity for the form $A(\cdot,\cdot)$
with index $\alpha \in (0,1]$. Specifically, we assume that for $s\in (0,\alpha]$,
$T_a$ is a bounded map of $\dot H_a^{-1+s}$ into $\tH{1+s}$
and $T_l^*$ is a bounded map of $\dot H_l^{-1+s}$ into $\tH{1+s}$.
\end{assumption}
The above assumptions imply the following theorem.
\begin{theorem}[Property of $D(A^{\frac s 2})$ for $s > 1$]
\label{t:characterization_hdot_onewayc}
Assume that \eqref{coer}, \eqref{bdd} and
the elliptic regularity assumptions (Assumption~\ref{ellreg})
hold. Then for $s\in (1,1+\alpha]$,
$$
D(A^{s/2}) \subset \tH {s}
\qquad \text{and} \qquad D((A^*)^{ s/2}) \subset \tH s,
$$
with continuous embeddings.
\end{theorem}
\begin{remark}[Kato Square Root Problem] \label{re:KatoT}
The case of $s=1$, i.e.,
$
D(A^{ 1/2}) \subset {\mathbb V} =: \tH 1
$ with continuous imbedding
is contained in the Kato
Square Root Theorem. This is a deep theorem which
has been intensively studied, see
\cite{Agranovich,AxelKeithMc,mcintosh90,MR1255054} and the references in
\cite{mcintosh90}. The Kato Square Root Theorem
holds for our problem under fairly weak regularity
assumptions on the coefficients defining our bilinear form
\cite{Agranovich}.
In fact, it requires the existence of $\epsilon>0$ such that $A_a$ and
$A_l^*$ are bounded operators from $\tH{1+\gamma}$ to
$\dH{-1+\gamma}_a$ and $\dH{-1+\gamma}_l$, respectively,
for $| \gamma | \leq \epsilon$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{t:characterization_hdot_onewayc}]
We consider the case of $A$ as the case of $A^*$ is similar.
Suppose
that $u$ is in $D(A)$ and $v$ is in $D(A^*)$.
Then, for $t\in [0,\alpha]$,
$$ \bal A(u,v)&= (Au,v)
=(A^{(1-t)/2} A^{(1+t)/2} u,v)\\ &
= (A^{(1+t)/2}
u,(A^*)^{(1-t)/2} v):=F(v).
\eal
$$
Thus, Theorem~\ref{cont0_1_2} gives
$$\bal
|F(v)|
&\le \|A^{(1+t)/2}u\|
\| (A^*)^{(1-t)/2} v\| \\
& \preceq
\|A^{(1+t)/2}u\| \|v\|_{\dH{1-t}}.
\eal$$
This implies that $F\in \dot H_l^{-1+t}$.
The elliptic regularity Assumption~\ref{ellreg} implies
that $u = T_l^* F$ is in $\tH{1+t}$ and satisfies
$$\|u\|_{H^{1+t}} \preceq \|A^{(1+t)/2}u\|.$$
As $D(A)$ is dense in $D(A^{(1+t)/2})$,
$D(A^{(1+t)/2})\subset \tH{1+t}$
follows.
\end{proof}
\section{Finite element approximation}\label{s:fem}
In this section, we define finite element approximations
to the operator
$A^{-\beta}$
for $\beta\in (0,1)$.
For simplicity, we assume that the domain $\Omega$ is polyhedral so that
it can be partitioned into a conforming subdivision made of simplices.
Further, we assume that we are given a finite dimensional subspace $\mathbb V_h\subset \mathbb V$
consisting of continuous complex valued functions, vanishing on
$\Gamma_D$,
which
are piecewise linear with respect to a conforming subdivision of
simplices of maximal
diameter $h\le 1$.
Notice that when the form $A(v,w)$ is real for real $v,w$, the finite element space may be taken to consist of real valued functions (see Remark~\ref{r:real}).
We also need to assume that the triangulation
matches the partitioning $\Gamma=\Gamma_D\cup \Gamma_N$. This means
that any mesh simplex of dimension less than $d$ which lies on $\Gamma$
is contained in either $\bar \Gamma_N$ or $\Gamma_D$.
Given a universal constant $\rho>0$, we restrict further our considerations to quasi-uniform partitions $\mathcal T$, i.e. satisfying
\begin{equation}\label{e:quasiuniform}
\frac{\max_{T\in \mathcal T} \textrm{diam}(T)}{\min_{T\in \mathcal T} \textrm{diam}(T)} \leq \rho.
\end{equation}
Let $\pi_h$ denote the
$L^2(\Omega)$-orthogonal projector onto $\mathbb V_h$.
Given a sequence of conforming subdivisions $\{ \mathcal T\}_{h}$ satisfying \eqref{e:quasiuniform}, there holds
\beq
\|\pi_h v\|_1\le C \|v\|_1, \qquad \hbox{for all } v\in{\mathbb V},
\label{pihb}
\eeq
where the constant $C$ is independent of $h$; see \cite{bramblexu}.
Obviously, $\pi_h$ is a bounded operator on $L^2(\Omega)$ and, by interpolation,
is a bounded operator on $\dH s$ for $s\in [0,1]$ with bounds
independent of $h$.
We shall need the following lemma providing approximation properties for
$\pi_h$. \modif{ These results are a consequence of the Scott-Zhang approximation operator
\cite{ScottZhang} and operator interpolation. As the specific form of
these results are needed for the analysis in the remainder of this paper,
we include a proof for completeness.}
\begin{lemma} Let $s$ be in $[0,1]$ and $\sigma>0$ be such that
$s+\sigma\le 2$. Then there is a constant $C=C(s,\sigma)$ not
depending on $h$ and satisfying
$$\|(I-\pi_h) u \|_{\tH s } \le C h^{\sigma} \|u\|_{\tH{s+\sigma}},
\qquad \hbox{for all } u\in \tH{s+\sigma}.$$
\end{lemma}
\begin{proof}
Let $\widetilde \pi_h$ denote the Scott-Zhang approximation operator
\cite{ScottZhang} mapping onto the set of piecewise linear polynomials
with respect to
the above triangulation (without any imposed boundary conditions).
This operator satisfies, for $\ell=0,1$ and $k=1,2$,
\beq
\|(I-\widetilde \pi_h) u \|_{H^\ell}\preceq h^{k-\ell} \| u \|
_{H^k}, \qquad \hbox{for all } u\in H^k(\Omega).
\label{scottz}
\eeq
In addition, $\widetilde \pi_hu \in {\mathbb V}_h$ for $u\in{\mathbb V}$.
We first verify the lemma when $s+\sigma\le 1$. We clearly have
\beq
\| (I-\pi_h) u \| \le \| (I-\widetilde \pi_h) u \|\preceq h \| u \|
_{{\mathbb V}}, \qquad \hbox{for all } u\in {\mathbb V}
\label{sz0}.
\eeq
It immediately follows from \eqref{pihb}
that
\beq
\| (I-\pi_h) u \|_{{\mathbb V}} \preceq \| u \|
_{{\mathbb V}}, \qquad \hbox{for all } u\in {\mathbb V}.
\label{sz1}
\eeq
Interpolating this and \eqref{sz0} gives
\beq
\| (I-\pi_h) u \|_{\tH s} \preceq h^{1-s}\| u \|
_{{\mathbb V}}, \qquad \hbox{for all } u\in {\mathbb V},
\label{sz2}
\eeq
for $s\in [0,1]$. As $\pi_h$ is stable on $L^2(\Omega)$ and ${\mathbb V}$,
interpolation implies that it is stable on $\tH s$ and hence
\beq
\| (I-\pi_h) u \|_{\tH s} \preceq \| u \|
_{\tH s}, \qquad \hbox{for all } u\in \tH s,
\label{sz3}
\eeq
for $s\in [0,1]$. Interpolating \eqref{sz2} and \eqref{sz3} and applying
the reiteration theorem gives for $\sigma>0$ and $s+\sigma \le 1$,
\beq
\| (I-\pi_h) u \|_{\tH s} \preceq h^\sigma \| u \|
_{\tH {s+\sigma}}, \qquad \hbox{for all } u\in \tH {s+\sigma}.
\label{sz4}
\eeq
We next consider the case when $s+\sigma\in (1,2]$. Taking $\ell=1$ in
\eqref{scottz} and interpolating between the cases $k=1$ and $k=2$
gives for $\zeta\in [1,2]$,
$$
\|(I-\widetilde \pi_h) u \|_{H^1}\preceq h^{\zeta-1} \| u \|
_{H^\zeta}, \qquad \hbox{for all } u\in H^\zeta(\Omega).
$$
Thus for $u\in \tH {s+\sigma}$, by \eqref{sz4},
$$\bal
\| (I-\pi_h) u \|_{\tH s} &\preceq h^{1-s} \| (I-\pi_h) u \|
_{{\mathbb V}}\preceq h^{1-s} (\| (I-\widetilde \pi_h) u \|_{\mathbb V}
+\| (\widetilde \pi_h -\pi_h) u\|_{\mathbb V})\\
&\preceq h^\sigma \|u\|_{\tH{s+\sigma}} +h^{-s} \| (\widetilde \pi_h -\pi_h)
u\|
\preceq h^\sigma \|u\|_{\tH{s+\sigma}}
\eal
$$
where the last inequality followed from \eqref{scottz} and obvious
manipulations.
\end{proof}
We define $A_h:\mathbb V_h\rightarrow \mathbb V_h$ by
\beq
(A_h v_h,\varphi_h)=A(v_h,\varphi_h), \qquad \hbox{for all } \varphi_h\in \mathbb V_h.
\label{ah}
\eeq
The operator $A_h$ is the discrete analogue of $A$ and we analogously define
$A_h^*$, the discrete analogue of $A^*$. The fractional
powers $A_h^{-\beta}$ for $\beta>0$ are again given by
\eqref{A-beta} but with $A$ replaced by $A_h$, i.e., for $\beta\in(0,1)$,
$A_h^{-\beta}:\mathbb V_h \rightarrow \mathbb V_h$ is given by
\beq
A^{-\beta}_h:=\frac {\sin(\beta \pi)}\pi \int_0^\infty \mu^{-\beta} (\mu I
+A_h)^{-1} \, d\mu.
\label{feA-beta}
\eeq
The goal of this paper is to analyze the error between $A^{-\beta} f$
and $A^{-\beta}_h \pi_hf$.
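For orientation (a standard identification, stated here only to make the implementation concrete; it is not used in the analysis below), fix a basis $\{\varphi_i\}$ of $\mathbb V_h$ and set $\mathrm M_{ij}:=(\varphi_j,\varphi_i)$ and $\mathrm K_{ij}:=A(\varphi_j,\varphi_i)$. Then $A_h$ and $A_h^*$ act on coefficient vectors as $\mathrm M^{-1}\mathrm K$ and $\mathrm M^{-1}\mathrm K^*$, respectively, and
$$
(\mu I+A_h)^{-1}\pi_h f \quad \hbox{corresponds to solving} \quad (\mu \mathrm M+\mathrm K)\mathbf x=\mathbf b,
\qquad \mathbf b_i=(f,\varphi_i),
$$
which is the linear system solved in each term of the quadrature \eqref{quadh}.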
\begin{remark}[Real Valued Bilinear Forms and Finite Element Spaces]\label{r:real} When $A(v,w)$ is real for real $v,w$, the above
operators restricted to real valued functions are real valued and
hence we may use Sobolev spaces and approximation
spaces $\mathbb V_h$ of real valued functions.
\end{remark}
Similarly, let $S_h:{\mathbb V}_h\rightarrow{\mathbb V}_h$ be defined by
$$ (S_h v_h,w_h)=S(v_h,w_h),\qquad \hbox{for all } v_h,w_h\in {\mathbb V}_h.$$
Theorem 3.1 of \cite{kato1961} applied to the discrete operators
$A_h$ and $S_h$ shows that for $s\in [0,1/2)$,
$$
\bigg(1-\tan\frac{\pi s}2\bigg )\|S_h^s v_h\| \le
\|A_h^s v_h\| \le \bigg[1+\big(\frac s\pi \tan \pi s\big)^{1/2}\big
(\eta+\eta^2)\bigg] \|S_h^s v_h\|,
$$
for all $v_h\in {\mathbb V}_h$.
Here $\eta$ is the index of $A(\cdot,\cdot)$ (see, \eqref{index}).
This also holds for $A^*_h$.
The bound \eqref{pihb}
implies (see, e.g., \cite{bankdupont}) that there are positive
constants $c$ and $C$, not depending on $h$ such that for $s\in [0,1]$,
\beq
c\|S_h^{s/2} v_h\|\le \|v_h\|_{\dH s} \le C \|S_h^{s/2} v_h\|,
\qquad \hbox{for all } v_h\in {\mathbb V}_h.
\label{sh1}
\eeq
Combining the preceding two sets of inequalities proves the following
lemma.
\begin{lemma}[Discrete Characterization of $\dH {s} $ for $s\in \lbrack 0,1)$] \label{l:characterization_discrete}
There exist positive constants $c,C$ independent of $h$
such that for all $v_h\in \mathbb V_h$ and $s\in [0,1)$,
$$
c \|v_h\|_{\dH s}\le \|A_h^{s/2} v_h\|\le C \|v_h\|_{\dH s}.
$$
This result holds with $A_h$ replaced by $A_h^*$.
\end{lemma}
\begin{remark}[Quasi-uniformity Assumption]
The above lemma still holds as long as \eqref{pihb} holds.
This is for instance the case for certain mesh refinement strategies \cite{bankyserentant,brPstein}.
\end{remark}
\section{Error Estimates}\label{s:error}
In this section, we study numerical approximation to the operators
$A^{-\beta}$
for $\beta\in (0,1)$. Specifically,
this involves bounding the errors $(A^{-\beta}-A_h^{-\beta}\pi_h)v$
for $v$ having appropriate smoothness.
We shall use our finite element spaces to approximate $A_a^{-1}:=T_a$.
Specifically,
$T_{h,a}:{\mathbb V}_a^* \rightarrow \mathbb V_h$ is defined for $F\in
{\mathbb V}_a^*$ by
$$
A(T_{h,a} F,\phi_h)=\langle F,\phi_h \rangle,\qquad \hbox{for all } \phi_h \in {\mathbb V}_h.
$$
We define $T_{h,l}^*$ corresponding to $T_l^*:=(A_l^*)^{-1}$
analogously.
\modif{
The following lemma provides approximation error bounds in terms of the norms
needed for our subsequent analysis. Although the techniques in the
proof (Galerkin orthogonality and Nitsche finite element duality) are
completely classical, the results are not quotable (as far as we
know).
We include a proof for completeness.}
\begin{lemma}[Finite Element Error]\label{FEerror} Assume that
\eqref{coer}, \eqref{bdd} and the elliptic regularity Assumption~\ref{ellreg} hold.
Let $s\in [0,\frac 12]$ and set $\alpha_*:=\frac 1 2 (\alpha+\min(1-2s,\alpha))$.
There is a positive constant $c$ not depending on $h$
satisfying
\begin{equation}\label{e:dual_estim}
\|T_a-T_{h,a}\|_{\dH {\alpha-1}_a \rightarrow \dH {2s}}
\le c h^{2\alpha_*}.
\end{equation}
The above immediately implies
$$\|T_a-T_{h,a}\|
\le c h^{2\alpha}.$$
\end{lemma}
\begin{proof} The proof of this lemma is classical and we only include
details for completeness.
We distinguish two cases.\\
\step{1} When $2s \leq 1-\alpha$, we can take full advantage of the elliptic regularity assumption.
For $F\in \dH {\alpha-1}_a$,
we set $e=(T_a-T_{h,a})F$. By \eqref{dualother} and the
elliptic regularity
Assumption~\ref{ellreg},
$$\bal
\|e\|_{\dH {2s}} \leq \|e\|_{\dH {1-\alpha}}&\preceq \sup_{G\in \dH {\alpha-1}_l}
\frac{\langle e,G\rangle} {\|G\|_{\dH {\alpha-1}_l}}\preceq
\sup_{G\in \dH {\alpha-1}_l}
\frac {A(e,T^*_l G)} {\|T^*_l G\|_{H^{1+\alpha}}}\\
&=\sup_{w\in \tH{1+\alpha}}
\frac{A(e,w)} {\|w\|_{H^{1+\alpha}}}=\sup_{w\in \tH{1+\alpha}}
\frac{A(e,w-w_h)} {\|w\|_{H^{1+\alpha}}}.
\eal
$$
We used Galerkin orthogonality for the last equality (which holds for
any $w_h\in {\mathbb V}_h$).
Using the approximation property
$$\inf_{w_h\in {\mathbb V}_h} \|w-w_h\|_1 \preceq h^{\alpha}
\|w\|_{H^{1+\alpha}}, \qquad \hbox{for all } w\in \tH{1+\alpha},
$$
\eqref{bdd} and the above inequalities give
$$\|e\|_{\dH {1-\alpha}}\preceq h^{\alpha} \|e\|_1.$$
\step{2} This duality argument yields a reduced order of convergence when $2s>1-\alpha$.
Indeed, proceeding similarly
$$
\bal
\|e\|_{\dH {2s}} &\preceq \sup_{G\in \dH {-2s}_l}
\frac{\langle e,G\rangle} {\|G\|_{\dH {-2s}_l}}\preceq
\sup_{G\in \dH {-2s}_l}
\frac {A(e,T^*_l G)} {\|T^*_l G\|_{H^{2-2s}}}\\
&=\sup_{w\in \tH{2-2s}}
\frac{A(e,w)} {\|w\|_{H^{2-2s}}}=\sup_{w\in \tH{2-2s}}
\frac{A(e,w-w_h)} {\|w\|_{H^{2-2s}}},
\eal
$$
so that together with the approximation property
$$\inf_{w_h\in {\mathbb V}_h} \|w-w_h\|_1 \preceq h^{1-2s}
\|w\|_{H^{2-2s}}, \qquad \hbox{for all } w\in \tH{2-2s},
$$
\modif{we deduce that}
$$
\|e\|_{\dH {2s}}\preceq h^{1-2s} \|e\|_1.
$$
Gathering the two cases $2s > 1-\alpha$ and $2s \leq 1-\alpha$, we get
$$
\|e\|_{\dH {2s}}\preceq h^{\min(1-2s,\alpha)} \|e\|_1.
$$
Combining this with the estimate
$$\|e\|_1\preceq h^\alpha\|T_aF\|_{H^{1+\alpha}}\preceq
h^\alpha\|F\|_{\dH {\alpha-1}_a} $$
guaranteed by Cea's Lemma and the elliptic regularity Assumption~\ref{ellreg},
we obtain \eqref{e:dual_estim} as desired.
\end{proof}
We can now state and prove our main convergence results.
It requires data in the abstract space $D(A^\delta)$ for some $\delta \geq 0$.
A characterization of $D(A^\delta)$ is provided in Theorem~\ref{dissqrrt} below.
\begin{theorem}[Convergence] \label{FEinterror}
Suppose that \eqref{coer} and
\eqref{bdd} as well as
the elliptic regularity Assumption~\ref{ellreg} hold.
Given $s\in [0,\frac 12)$, set $\alpha_*:=\frac 1 2 (\alpha+\min(1-2s,\alpha))$ and
$\gamma:=\max(s+\alpha_*-\beta,0)$ and let $\delta \geq \gamma$.
Assume finally, that $s+\alpha_*\neq 1/2$ when $\delta=\gamma$ and
$s+\alpha_*\ge \beta$.
Then there exists a constant $C$ independent of $h$ and $\delta$ such that
$$
\| (A^{-\beta } - A_h^{-\beta}\pi_h) f\|_{\dH{2s}}
\leq C_{\delta,h} h^{2\alpha_*}\|A^\delta f\|, \qquad \hbox{for all } f\in
D(A^{\delta}),
$$
where
\beq
C_{\delta,h}=\left \{ \bal C\ln(2/h)&:\qquad \hbox{ when } \delta
=\gamma \hbox{ and } s+\alpha_*\ge
\beta, \ s+\alpha_* \not =\frac 1 2\\
C&:\qquad \hbox{ when } \delta>\gamma
\hbox{ and } s+\alpha_*\ge
\beta, \\
C&:\qquad \hbox{ when }
\delta=0 \hbox{ and }\beta>s+\alpha_*.
\eal
\right.
\label{cdelta}
\eeq
\end{theorem}
\begin{remark}[Critical Case $s+\alpha_*=\frac 1 2$] \label{r:conv_s12} The condition $s+\alpha_* \not = \frac 1 2 $ in the above theorem can be removed provided that the Kato Square Root Theorem holds
as well (see, Remark~\ref{re:KatoT}).
\end{remark}
\begin{remark}[Critical Case $2s=1$]\label{r:critical_s1} The above results also hold when
$2s=1$ provided that the continuous and discrete Kato Square Root
Theorem hold. As already mentioned above, the former relies on the
additional assumption requiring the existence of $\epsilon>0$ such
that $A_a$ and $A_l^*$ are bounded from $\tH{1+\gamma}$
to $\dH{\gamma-1}_a$ and $\dH{\gamma-1}_l$, respectively,
for $|\gamma| \leq \epsilon$.
For the discrete Kato Theorem, we will need to assume similar
conditions for operators based on the $S$ form, see
Theorem~\ref{dissqrrt} below.
\end{remark}
The above theorem depends on an auxiliary lemma.
\begin{lemma}\label{l:both_bounds}
For $s \in [0,1]$, there is a constant $C$ not depending on $h$
such that for any $\mu \in (0,\infty)$
$$
\| A^{s}(\mu+A)^{-1}v\|\le C \mu^{s-1} \|v\|,\qquad \hbox{for all }
v\in L^2(\Omega)
$$
and
$$
\|A_h^{s} (\mu+A_h)^{-1}v_h\|\le C \mu^{s-1}\|v_h\|,\qquad \hbox{for all }
v_h\in \mathbb V_h.
$$
\end{lemma}
\begin{proof}
The claim relies on interpolation estimates.
As the same argument is used for both estimates, we only prove the
first.
Theorem~4.29 of \cite{lunardi} implies $A^{it}$ is a bounded
operator satisfying
\beq\|A^{it}\| \le e^{\pi |t|/2},\qquad \hbox{for all } t\in {\mathbb R}.
\label{ait}
\eeq
This, in turn, implies that (e.g., Lemma~4.31 of \cite{lunardi}) for $s\in [0,\frac 1 2]$,
\beq
[L^2(\Omega),D(A^{1/2})]_{2s} =
D(A^{s}).\label{complexi}
\eeq
Here $[X,Y]_{2s}$ denotes the intermediate space between $X$ and $Y$
obtained by the complex interpolation method.
Thus, for $w \in D(A)$, Corollary~2.8 of \cite{lunardi} gives
$$
\| A^{s} w \| \preceq \| w \|_{[L^2(\Omega),D(A)]_{s}} \le \| A w\|^{s} \|w\|^{1-s}.
$$
Now if $w=(\mu+A)^{-1} v$ with $v\in L^2(\Omega)$, then
$$\mu \|w\|^2 +\Re A(w,w)= \Re(v,w)$$
and hence \eqref{coer} immediately implies $\|(\mu+A)^{-1}v\|\le \mu^{-1} \|v\|$.
In addition,
$$\|A w\|= \|v-\mu(\mu+A)^{-1}v\| \le 2\|v\|.
$$
The lemma follows combining the above estimates.
\end{proof}
\begin{proof}[Proof of Theorem \ref{FEinterror}]
Without loss of generality, we may assume that $\delta\le 1+\alpha_*$
since we shall get $2\alpha_*$ order convergence as soon as
$\delta>2\alpha_*-2\beta$ and we always have $2\alpha_*-2\beta \leq 1+\alpha_*$.
\step{1}
We first show that
\begin{equation}\label{e:first_step}
\|(I-\pi_h) A^{-\beta} f\|_{\dH{2s}} \preceq h^{2\alpha_*}\|A^\delta f\|.
\end{equation}
Theorem~4.6 of \cite{lunardi}
implies that $A^{-\beta} f$ is in $ D(A^{\alpha_*})$ when $f$ is in $D(A^{\alpha_*-\beta})$ and we now discuss separately the cases $s+\alpha_* \in (0,\frac 1 2)$, $s+\alpha_*=\frac 1 2$ and $s+\alpha_* \in (\frac 1 2, \frac 1 2 (1+\alpha)]$.
When $s+\alpha_* \in (0,\frac 12)$, we apply Theorem~\ref{cont0_1_2}
and obtain
\begin{equation}\label{e:first_step_AA}
\begin{split}
\|(I-\pi_h) A^{-\beta} f\|_{\dH{2s}} &\preceq h^{2\alpha_*} \| A^{-\beta}
f\|_{\dH{2s+2\alpha_*}} \preceq h^{2\alpha_*} \| A^{s+\alpha_*-\beta}
f\|\\
&\preceq h^{2\alpha_*} \| A^{\delta}
f\|,
\end{split}
\end{equation}
recalling that $\delta\geq \gamma \geq s+\alpha_*-\beta$.
For $s+\alpha_* \in (\frac 12, \frac 12(1+\alpha)]$, we apply
Theorem~\ref{t:characterization_hdot_onewayc} to conclude that
$A^{-\beta} f$ is in
$ \tH{2s+2\alpha_*}$. And again,
\begin{equation}\label{e:first_step_A}
\begin{split}
\|(I-\pi_h) A^{-\beta} f\|_{\dH{2s}} &\preceq h^{2\alpha_*} \| A^{-\beta}
f\|_{H^{2s+2\alpha_*}} \preceq h^{2\alpha_*} \| A^{s+\alpha_*-\beta}
f\|\\
&\preceq h^{2\alpha_*} \| A^{\delta}
f\|.
\end{split}
\end{equation}
Finally, we consider the case $s+\alpha_*=\frac 1 2$, which entails $\alpha \leq 1-s$ so that $\alpha_*=\alpha$ and $2s+2\alpha=1$.
We choose $0<\epsilon<\alpha$ (further restricted below) so that
as above
$$
\|(I-\pi_h) A^{-\beta} f\|_{\dH{2s}} \preceq h^{2\alpha+\epsilon} \| A^{-\beta}
f\|_{H^{1+\epsilon}}\preceq h^{2\alpha+\epsilon} \| A^{\frac{1+\epsilon}2 -\beta}
f\|.
$$
In addition the assumption $\delta > \gamma:=\max(\frac 1 2 - \beta,0)$ yields $\frac 12 -\beta + \frac \epsilon 2 <\delta$ upon choosing a sufficiently small $\epsilon$.
Hence, we deduce
$$
\|(I-\pi_h) A^{-\beta} f\|_{\dH{2s}} \preceq h^{2\alpha} \| A^{\delta}f\|.
$$
This, \eqref{e:first_step_AA} and \eqref{e:first_step_A} yield \eqref{e:first_step}.
\step{2} By the triangle inequality,
it suffices now to bound
\begin{equation}\label{newi}
\begin{split}
&\|(\pi_h A^{-\beta} - A_h^{-\beta}\pi_h)\|_{D(A^\delta)\to\dH{2s}}\\
&\qquad \le\frac{\sin(\beta \pi)}{\pi} \int_0^\infty \mu^{-\beta}\left\|
\pi_h(\mu+ A)^{-1} - (\mu+A_h)^{-1}\pi_h \right\|_{D(A^\delta)\to \dH{2s}}\, d\mu .
\end{split}
\end{equation}
Assuming without loss of generality that $h\leq 1$, we shall break the above integral into integrals on three
subintervals, namely, $(0,1)$,
$(1,h^{-2\alpha_*/\beta})$ and $(h^{-2\alpha_*/\beta},\infty)$.
\step{3} We start with $(h^{-2\alpha_*/\beta},\infty)$.
Recalling the definition \eqref{d:norm_DA} of the norm on $D(A^\delta)$ as well as the characterization of the dotted spaces provided by Theorem~\ref{cont0_1_2}, we get
\begin{align*}
\| \pi_h (\mu+A)^{-1}\|_{D(A^{\delta})\to \dH{2s}} \preceq
\| A^{\max(s-\delta,0)} (\mu+A)^{-1}\|,
\end{align*}
where we used in addition the stability of $\pi_h$ in $D(A^\delta)$ and the boundedness of $A^{-r}$ from $L^2(\Omega)$ to $L^2(\Omega)$, for $r\geq 0$ (see discussion below Remark~\ref{cz2}).
Hence, applying Lemma~\ref{l:both_bounds} yields
$$
\| \pi_h (\mu+A)^{-1}\|_{D(A^{\delta})\to \dH{2s}} \preceq \mu^{\max(s-\delta,0)-1}.
$$
Similarly, but using the discrete characterization provided by Lemma~\ref{l:characterization_discrete}, we obtain
$$
\| (\mu+A_h)^{-1}\pi_h\|_{D(A^{\delta})\to \dH{2s}} \preceq \mu^{\max(s-\delta,0)-1}.
$$
Thus, invoking Lemma~\ref{l:both_bounds}, we deduce that
\begin{align*}
& \int_{h^{-2\alpha_*/\beta}}^\infty \mu^{-\beta} \|
\pi_h(\mu+ A)^{-1} - (\mu+A_h)^{-1}\pi_h \|_{D(A^\delta)\to \dH{2s}} \, d\mu \\
&\qquad \preceq \int_{h^{-2\alpha_*/\beta}}^\infty \mu^{-\beta+\max(s-\delta,0)-1}d\mu
\preceq \int_{h^{-2\alpha_*/\beta}}^\infty \mu^{\max(-\alpha_*,-\beta)-1}d\mu
\preceq h^{2\alpha},
\end{align*}
because $\delta \geq s+\alpha_*-\beta$.
\step{4} For the remaining two cases, we use the identity
$$
\pi_h(\mu+ A)^{-1} - (\mu+A_h)^{-1}\pi_h = (\mu+A_h)^{-1}A_h\pi_h(T_a-T_{h,a})A(\mu+A)^{-1}.
$$
The latter follows from the identification of Remark~\ref{identify} and the observation that,
for $u\in D(A)$ and $v\in {\mathbb V}$,
$$(T_aAu,v)=A(T_aAu,T_l^*v)=(Au,T_l^*v)=A(u,T_l^*v)=(u,v),$$
i.e., $T_aAu=u$. Also, it is easy to see that
$A_h \pi_h T_{h,a} =\pi_h$.
Thus, for $u\in D(A)$,
$$A_h \pi_h (T_a-T_{h,a}) A u = (\mu +A_h )\pi_h u - \pi_h (\mu +A ) u,$$
which leads to the desired identity.
\step{5} For $\mu \in (1,h^{-2\alpha_*/\beta})$, we write
\begin{align*}
\int_1^{h^{-2\alpha_*/\beta}} & \mu^{-\beta}\|
A_h(\mu+A_h)^{-1}\pi_h(T_a-T_{h,a})A(\mu+A)^{-1}\|_{D(A^\delta)\to \dH{2s}} \,
d\mu \\
&\le \int_1^{h^{-2\alpha_*/\beta}} \mu^{-\beta} \|
A_h(\mu+A_h)^{-1}\pi_h\|_{\dH{1-\alpha_*} \rightarrow \dH{2s}}
\|(T_a-T_{h,a})\|_{\dH{-1+\alpha_*}_a\rightarrow \dH{1-\alpha_*}}\\
&\qquad \qquad \qquad
\|A(\mu+A)^{-1}\|_{D(A^\delta) \rightarrow \dH{-1+\alpha_*}_a}
d\mu
\end{align*}
Now, the definition \eqref{d:norm_DA} of $\|.\|_{D(A^\delta)}$, together with the characterization of the negative spaces provided in Remark~\ref{identify},
implies that
$$
\| A(\mu+A)^{-1}\|_{D(A^\delta) \to \dot H^{\alpha_*-1}_a} \preceq \|A^{(1+\alpha_*)/{2}-\delta}(\mu+A)^{-1}\| \preceq \mu^ {(\alpha_*-1)/2-\delta}.
$$
In addition, using Lemma \ref{l:characterization_discrete},
we obtain
\begin{align*}
\| A_h(\mu+A_h)^{-1}\pi_h\|_{\dot H^{1-\alpha_*} \to \dH{2s}} &=
\sup_{f\in \dot H^{1-\alpha_*}} \frac{\|
A_h(\mu+A_h)^{-1}\pi_h f\|_{\dH{2s}}}{\|\pi_h f\|_{\dot H^{1-\alpha_*}}}\\
&\preceq \sup_{f\in \dot H^{1-\alpha_*}}
\frac{\| A_h(\mu+A_h)^{-1}\pi_h
f\|_{\dH{2s}}}{\|A_h^{(1-\alpha_*)/2}\pi_h f\|}\\
&\preceq \sup_{g\in \mathbb V_h} \frac{\|
A_h^{\frac{1+\alpha_*}{2}+s}(\mu+A_h)^{-1}g\|}{\|g\|}\\
&\preceq \mu^{\frac{-1+\alpha_*}{2}+s}.
\end{align*}
The above three estimates together with
Lemma~\ref{FEerror} yield
\begin{align*}
\int_1^{h^{-2\alpha_*/\beta}} &\mu^{-\beta}\| A_h(\mu+A_h)^{-1}\pi_h(T_a-T_{h,a})A(\mu+A)^{-1}\|_{D(A^\delta)\to \dH{2s}} \, d\mu \\
&\preceq h^{2\alpha_*} \int_1^{h^{-2\alpha_*/\beta}} \mu^{-1+s+\alpha_*-\beta-\delta}\, d\mu \preceq \left \{
\begin{array}{ll}
h^{2\alpha_*}\ln(2/h)&:\qquad \hbox{ if } \delta=s+\alpha_*-\beta,\\
h^{2\alpha_*}&:\qquad \hbox{ otherwise.}
\end{array}
\right.
\end{align*}
\step{6} Finally for $\mu \in (0,1)$, we write
\begin{align*}
&\int_0^1 \mu^{-\beta} \| A_h(\mu+A_h)^{-1}\pi_h(T_a-T_{h,a})A(\mu+A)^{-1}\|_{D(A^\delta)\to \dH {2s}} \, d\mu \\
& \qquad \preceq \int_0^1 \mu^{-\beta} \| A_h(\mu+A_h)^{-1}\pi_h\|_{\dH{2s}\to \dH{2s}} \| T_a-T_{h,a}\|_{L^2 \to \dH{2s}} \| A(\mu+A)^{-1}\|_{L^2 \to L^2}\, d\mu \\
& \qquad \preceq h^{2\alpha} \int_0^1 \mu^{-\beta} \, d\mu \preceq h^{2\alpha}.
\end{align*}
\step{7} \modif{The proof of the theorem is complete upon collecting the estimates obtained in Steps 3, 5 and 6}.
\end{proof}
Theorem~\ref{cont0_1_2} and the Kato Square Root Theorem characterize
$D(A^s)$ for $s\in [0,1/2]$. The characterization can be extended to
$s\in (1/2,(1+\alpha)/2]$ when $A_a$ maps $\tH{2s}$ into
$\dH{2s-2}_a$.
This is of particular importance to characterize the regularity assumption $f\in D(A^\delta)$ in Theorem~\ref{FEinterror}.
\begin{theorem}[Characterization of $D(A^{(1+s)/2})$ for $s \in (0,\alpha \rbrack$] \label{dissqrrt}
Suppose that \eqref{coer} and
\eqref{bdd} hold.
Assume furthermore that for $s \in (0,\alpha ]$,
\begin{equation}\label{a:Ta}
T_a \text{ is an isomorphism from }\dot H_a^{-1+s} \text{ into }\tH{1+s}.
\end{equation}
Then,
$$
D(A^{(1+s)/2})=\tH{1+s}
$$
with equivalent norms.
\end{theorem}
\begin{proof} By Theorem~\ref{t:characterization_hdot_onewayc},
we need only prove that $\tH{1+s}\subset D(A^{
(1+s)/2})$.
We first observe that
$D(A)\cap
\tH{1+s}$ is dense in
$\tH{1+s}$.
Indeed, if $w$ is in $\tH{1+s}$ then \eqref{a:Ta} implies that $A_a w$
is in $\dot H_a^{s-1}$. As $\dot H_a^0$ is dense in $\dot H_a^{s-1}$,
there is a sequence $F_n\in \dot H_a^0$ converging to $A_a w$
in $\dot H_a^{s-1}$. Setting
$u_n := (A_a)^{-1} F_n$, elliptic regularity implies that $u_n $
converges to $w$ in $\tH{1+s}$.
Clearly $u_n $ is in $D(A)$,
i.e., $D(A)\cap \tH{1+s}$ is dense in
$\tH{1+s}$ as claimed.
Suppose that $u\in D(A)\cap \tH{1+s}$.
We first show that
\beq
\|A^{ (1+s)/2} u\| \le C \|u \|_{H^{1+s}}.
\label{first}
\eeq
For $v\in D(A^*)$ and $\delta := (1-s)/2\in [0,1/2)$,
$$(A^{(1+s)/2} u,v) = ( A^{-\delta} A u,v) =(Au,(A^*)^{-\delta}v).
$$
Since $u\in D(A)$ and $(A^*)^{-\delta}v\in D(A^*)\subset \mathbb V$,
$$\bal
|( A u, (A^*)^{-\delta} v)|&=|\langle A_a u , (A^*)^{-\delta} v\rangle|
\le \|A_au\|_{\dot H_a^{-1+s}}
\|(A^*)^{-\delta} v\|_{\dH {1-s}}\\
&\preceq \|u\|_{H^{1+s}}
\|(A^*)^{-\delta} v\|_{\dH {1-s}}
\eal
$$
where we also used \eqref{dual} and \eqref{a:Ta}.
Now, Theorem~\ref{cont0_1_2} ensures that
$$\|(A^*)^{-\delta} v\|_{\dH {1-s}}\preceq \|(A^*)^\delta (A^*)^{-\delta} v\|= \|v\|.$$
Combining the above inequalities shows that \eqref{first} holds for $u\in D(A)\cap
\tH{1+s}$.
The inclusion
$\tH{1+s} \subset D(A^{(1+s)/2})$ and
\eqref{first} for \modif{$u\in \tH{1+s} $} hold by density.
\end{proof}
Let $T_{S,a}$ be defined similarly to $T_a$ but using the form $S(\cdot,\cdot)$.
The final result in this section shows that under suitable assumptions,
Lemma~\ref{l:characterization_discrete} holds for $s=1$. This is
a discrete Kato Square Root Theorem. Its proof was motivated by the
proof of the Kato Square Root Theorem given in \cite{Agranovich}.
\begin{theorem} [Discrete Kato Square Root Theorem]\label{t:discrete_kato}
Suppose that \eqref{coer} and
\eqref{bdd} hold.
Assume furthermore that for some $s \in (0,1/2)$,
$T_a$, $T_a^*$ and $T_{S,a}$ are isomorphisms from $\dot H_a^{-1+s}$
into $\tH{1+s}$.
Then, there are positive constants $c,C$ independent of $h$ such that
$$
c \|v_h\|_{{\mathbb V}}\le \|A_h^{1/2} v_h\|\le C \|v_h\|_{{\mathbb V}}, \qquad \hbox{for all } v_h\in {\mathbb V}_h.
$$
The analogous inequalities hold with $A_h^*$ replacing $A_h$ above.
\end{theorem}
\begin{proof}
In this proof $C$ denotes a generic constant independent of $h$.
The assumption on $T_{S,a}$ implies that for
$t\in (0,s]$, $\tH{1+t}=\dH {1+t}$,
with equivalent norms \cite{bp-fractional}.
We first observe that it suffices to show that
\beq
\| A_h^{1/2} u_h \|\le C \|u_h\|_{\mathbb V},\qquad \hbox{for all } u_h\in {\mathbb V}_h
\label{toshow1/2}
\eeq
along with the analogous inequality involving $A_h^*$.
Indeed, if \eqref{toshow1/2} holds then by \eqref{coer},
$$\bal
c_0 \|u_h \|_{\mathbb V}^2 &\le |(A_h u_h,u_h)| \le
\|A_h^{1/2}u_h\| \|(A_h^*)^{1/2} u_h \| \\
&\le C \|(A_h^*)^{1/2} u_h \| \| u_h \|_{\mathbb V}
\eal
$$
and hence
$$ c_0 \|u_h \|_{\mathbb V}\le C \|(A_h^*)^{1/2} u_h \|.$$
The proof of the lower bound involving $A_h^{1/2}$ is similar.
Applying Lemma~\ref{l:characterization_discrete} gives, for $u_h\in {\mathbb V}_h$,
\beq
\bal \|A_h^{(1+s)/2} u_h\|&=\sup_{\theta\in {\mathbb V}_h }
\frac {(A_h^{(1+s)/2}u_h,(A_h^*)^{(1-s)/2}\theta)}
{\|(A_h^*)^{(1-s)/2}\theta\|}\le C\sup_{\theta\in {\mathbb V}_h }
\frac {(A u_h,\theta)}
{\|\theta\|_{\dH {1-s}}}\\
& \le C\|A_au_h\|_{\dH{s-1}} \le C \|u_h \|_{\dH {1+s}}
\eal
\label{ahbound}
\eeq
where we used the assumption on $T_a = (A_a)^{-1}$.
Let $\widetilde \pi_h$ denote the $S$-elliptic projection onto ${\mathbb V}_h$, i.e.,
for $v\in {\mathbb V}$, $w_h=\widetilde \pi_h v\in {\mathbb V}_h$ solves
$$S(w_h,\theta_h)=S(v,\theta_h),\qquad \hbox{for all } \theta_h\in {\mathbb V}_h.$$
It is a consequence
of the isomorphism assumption on $T_{S,a}$ that
$\widetilde \pi_h$ is a uniformly (independent of $h$) bounded operator on
$\tH{1+s}$
(see, e.g., \cite{MR1052086}).
Thus, recalling the eigenvalue decomposition \eqref{e:dot_intermediate}, it holds for $u_h\in {\mathbb V}_h$,
$$\bal \|u_h\|_{\dH {1-s}} &= \sup_{\phi\in \dH {1+s} }
\frac {S(u_h,\phi)} {\|\phi\|_{\dH {1+s}}} =
\sup_{\phi\in \dH {1+s} }
\frac {S(u_h,\widetilde \pi_h \phi)} {\|\widetilde \pi_h\phi\|_{\dH {1+s}}} \frac
{\|\widetilde \pi_h\phi\|_{\dH {1+s}}}
{\|\phi\|_{\dH {1+s}}}\\
&\le C\sup_{\phi_h\in {\mathbb V}_h }
\frac {S(u_h,\phi_h)} {\|\phi_h\|_{\dH {1+s}}}.
\eal
$$
Similarly, for all $u_h\in {\mathbb V}_h$,
$$ \|u_h\|_{\dH {1+s}} = \sup_{\phi_h\in {\mathbb V}_h }
\frac {S(u_h,\phi_h)} {\|\phi_h\|_{\dH {1-s}}}.
$$
Thus, the characterization \eqref{sh1} gives
$$\|u_h\|_{\dH {1+s}}\le C \sup_{\phi_h\in {\mathbb V}_h }
\frac {S(u_h,\phi_h)} {\|S_h^{(1-s)/2}\phi_h\|}
= C\|S_h^{(1+s)/2}u_h\|.$$
Combining this with \eqref{ahbound} gives
$$\|A_h^{(1+s)/2} u_h\|\le C \|S_h^{(1+s)/2}u_h\|, \qquad \hbox{for all } u_h\in {\mathbb V}_h.$$
Interpolating this result with the trivial inequality
$$\|A_h^{0} u_h\|\le \|S_h^0u_h\|,\qquad \hbox{for all } u_h\in {\mathbb V}_h,$$
gives
$$\|A_h^{1/2} u_h\|\le C \|S_h^{1/2}u_h\|=C\|u_h\|_{\mathbb V},\qquad \hbox{for all } u_h\in {\mathbb V}_h$$
and verifies \eqref{toshow1/2}. The proof for the analogous inequality
involving $A_h^*$ is similar.
\end{proof}
\section{Exponentially Convergent sinc Quadrature.}\label{s:SINC}
Theorem~\ref{FEinterror} and Remark~\ref{r:conv_s12} provide estimates for the errors $\| (A^{-\beta} - A_h^{-\beta}\pi_h) f\|_{\dH{2s}}$.
We now apply an exponentially convergent sinc quadrature (see, for
example, \cite{lundbowers}) to approximate the integral in
\eqref{A-betah}.
Since the argument below does not require the operator
$A_h^{-\beta}\pi_h$ to be discrete, the analysis includes and
focuses on the case of
$A^{-\beta}$.
The change of variable and sinc quadrature approximations for $A$
are thus
$$
A^{-\beta}= \frac{\sin(\pi \beta)}{\pi} \int_{-\infty}^\infty e^{(1-\beta)y} (e^yI+A)^{-1} ~ dy
$$
and
$$
{\mathcal Q}^{-\beta}_k(A) := \frac{k\sin(\pi \beta)}{\pi} \sum_{\ell=-N}^N e^{(1-\beta) y_\ell} (e^{y_\ell}I+ A)^{-1} .
$$
Here, for any positive integer $N$, $y_\ell:=\ell k$ and $k:=1/\sqrt N
$.
To estimate the quadrature error, we start by defining $D_{\pi/2} := \{ z\in
{\mathbb C},\ |\Im(z)|<\pi/2\}$ and denote by ${\overline {D}}_{\pi/2} $ its closure.
Here $\omega = \arctan(\eta)$ and $\eta$ is the index of the sesquilinear form $A(.,.)$, see \eqref{index}.
For any functions $u,v \in L^2(\Omega)$ and $z\in {\mathbb C}$ with $-e^z$ not in the spectrum of
$A$, we define
$$
f(z;u,v):= e^{(1-\beta)z}\left( (e^z I+A)^{-1}u,v \right).
$$
Note that $\Re(e^z)$ is non negative for $z\in {\overline {D}}_{\pi/2} $ and hence
Theorem~\ref{resbound} and Remark~\ref{cz2} imply that $f(z,u,v)$ is
well defined for $z\in {\overline {D}}_{\pi/2} $.
We apply the classical analysis for these types of quadrature approximations given in \cite{lundbowers}, paying particular attention to deriving estimates that are uniform in $u,v \in L^2(\Omega)$.
Using the resolvent estimate (Theorem~\ref{resbound}) when $\Re(z) > 0$
and Remark~\ref{cz2} when $\Re(z) \leq 0$ gives
$$ \|(e^zI+A)^{-1}\|\le \left \{\bal |e^{-z}|/\sin(\pi/2-\omega)&:\quad \hbox{ for } z\in
{\overline {D}}_{\pi/2},\ \Re(z)>0,\\
\frac2{c_0} &:\quad \hbox{ for } z\in
{\overline {D}}_{\pi/2},\ \Re(z)\le 0\eal\right .$$
\modif{It is a consequence of the above inequality that $f(t,u,v)$ is an analytic
function of $t$ for
$t\in D_{\pi/2}$. Moreover,
for $z \in {\overline {D}}_{\pi/2} $:}
\begin{equation}
\begin{split}
&|e^{(1-\beta)z} \left( (e^z I+A)^{-1}u,v \right)| \\
& \qquad \qquad \le \|u\| \|v\| \left\lbrace
\begin{array}{ll}
e^{-\beta\Re(z)}/\sin(\pi/2-\omega)&:\qquad \hbox{ for } \Re(z) >0,\\
\frac{2}{c_0} e^{(1-\beta) \Re(z)}&:\qquad \hbox{ for } \Re(z) \le 0.
\end{array} \right .
\end{split}
\label{dbinf}
\end{equation}
This implies that
\begin{equation}
\begin{split}
\int _{-\infty}^\infty &( |f(y-i\pi/2;u,v) | + |f(y+i\pi/2;u,v)
|) \, dy \\& \le \left(2(\beta\sin(\pi/2-\omega))^{-1} + 4((1-\beta)c_0)^{-1}\right)\|u\|\|v\| =: N(D_{\pi/2})\,\|u\|\|v\|.
\end{split}
\label{nbinf}
\end{equation}
In addition, applying \eqref{dbinf} gives
$$
\int_{-\pi/2}^{\pi/2} | f(t+iy,u,v)|dy \le C,\qquad \hbox{for all } t\in {\mathbb R}.
$$
\modif{Hence, we can} apply Theorem 2.20 of \cite{lundbowers} to conclude
that for $k>0$,
\beq
\bigg|
\int_{-\infty}^\infty f(y;u,v) \ dy-
k\sum_{\ell=-\infty}^\infty f(\ell k;u,v)
\bigg|
\le
\frac{N(D_{\pi/2})\|u\|\|v\|}{2\sinh(\pi^2/(4k))} e^{-\pi^2/(4k)}.
\label{220res}
\eeq
\modif{
This leads to the following result for the sinc quadrature error.
}
\begin{theorem}[sinc Quadrature Error] \label{l:exp}
For $N>0$ and $k:=1/\sqrt{N}$, let ${\mathcal Q}^{-\beta}_k(\cdot)$ be defined by
\eqref{quadh}.
Then for $s\in [0,1)$, there exists a constant $C(\beta)$ independent of $k$ such that
\begin{equation}
\begin{split}
\|A^{-\beta} - {\mathcal Q}^{-\beta}_k(A) \|_{\dH{s} \to \dH{s}} \le
C(\beta) &
\bigg[
\frac{N(D_{\pi/2})}{2\sinh(\pi^2/(4k))}e^{-\pi^2/(4k)} \\
&+ \frac 1 \beta e^{-\beta/k} +
\frac{2}{(1-\beta)c_0} e^{-(1-\beta)/k}\bigg].
\end{split}
\label{aquad}
\end{equation}
Similarly, there exists a constant $C(\beta)$ independent of $k$ and $h$ such that
\begin{equation}
\begin{split}
\|(A_h^{-\beta} - {\mathcal Q}^{-\beta}_k(A_h))\pi_h \|_{\dH{s} \to \dH{s}}
\le C(\beta) &
\bigg[
\frac{N(D_{\pi/2})}{2\sinh(\pi^2/(4k))}e^{-\pi^2/(4k)}\\
& + \frac 1 \beta e^{-\beta/k} +
\frac{2}{(1-\beta)c_0} e^{-(1-\beta)/k}
\bigg].
\end{split}
\label{ahquad}
\end{equation}
\end{theorem}
\begin{proof}
Both estimates when $s=0$ directly follow from \eqref{220res}
and the estimates
\begin{align*}
&k \sum_{\ell=-N-1}^{-\infty} |f(\ell k; u,v) |
\le \frac 1 { \beta} e^{-\beta/k} \|u\|\|v\|, \\
&k \sum_{\ell=N+1}^\infty |f(\ell k;u,v)|
\le\frac 2{(1-\beta)c_0}
e^{-(1-\beta)/k} \|u\|\|v\|,
\end{align*}
(which are direct consequences of \eqref{dbinf}).
For the case $s\in (0,1)$, we first consider \eqref{aquad}.
Using Theorem~\ref{cont0_1_2} and the commutativity of $A$ and $(e^{y}I+A)^{-1}$, we have
$$
\|A^{-\beta} - {\mathcal Q}^{-\beta}_k(A) \|_{\dH s \to \dH{s}} \preceq
\|A^{s/2}(A^{-\beta} - {\mathcal Q}^{-\beta}_k(A))A^{-s/2} \|.
$$
Applying \eqref{anegcom} shows that the right hand side above equals
$$ \|A^{-\beta} - {\mathcal Q}^{-\beta}_k(A) \|$$
and so the desired estimate follows again from the $s=0$ case.
For \eqref{ahquad} when $s>0$, we apply
Lemma~\ref{l:characterization_discrete} to see that
\begin{align*}
\|(A_h^{-\beta} - {\mathcal Q}^{-\beta}_k(A_h))\pi_h \|_{\dH{s} \to \dH{s}}
&\le \|A_h^{s/2}(A_h^{-\beta} - {\mathcal Q}^{-\beta}_k(A_h))A_h^{-s/2} \|
\|\pi_h\|_{\dH{s} \to \dH{s}}\\
&\preceq
\|(A_h^{-\beta} - {\mathcal Q}^{-\beta}_k(A_h))\pi_h \|.
\end{align*}
The last inequality follows from commutativity and
the fact that $\pi_h$ is a bounded operator on
$\dH s$.
The result now follows from the $s=0$ case.
\end{proof}
\begin{remark}[Critical Case $s=1$]
The quadrature error estimate still holds when $s=1$ provided
that the discrete and continuous
Kato Square Root Theorems hold. These results are given by
Remark~\ref{re:KatoT} for the continuous operator $A$ and
Theorem~\ref{t:discrete_kato} for the discrete operator $A_h$.
\end{remark}
\begin{remark}[Exponential Decay] \label{rem:MN} The error from the three exponentials above
can essentially be equalized by setting
\beq
{\mathcal Q}^{-\beta}_k (A_h):= \frac{k\sin(\pi \beta)}{\pi} \sum_{\ell=-M}^N
e^{(1-\beta) y_\ell} (e^{y_\ell}I+ A_h)^{-1}
\label{inforder}
\eeq
with
$$\pi^2/(2k)\approx 2\beta k M\approx (2-2\beta) kN.$$
Thus, given $k>0$, we set
$$ M=\bigg\lceil \frac {\pi^2}{4\beta k^2}\bigg\rceil \quad \hbox{ and }\quad
N=\bigg \lceil \frac {\pi^2}{4(1-\beta) k^2}\bigg \rceil
$$
and get the estimate
$$
\|(A_h^{-\beta} - {\mathcal Q}^{-\beta}_k(A_h))\pi_h \|_{\dH{s} \to \dH{s}}
\le C(\beta)
\bigg[ \frac 1 {2\beta} + \frac 1 {2(1-\beta)\lambda_0} \bigg] \bigg[ \frac {e^{-\pi^2/(4k)}}{\sinh(\pi^2/(4k))}+
e^{-\pi^2/(2k)} \bigg].
$$
We note that the right hand side above asymptotically behaves like
$$C(\beta) \bigg[ \frac 1 {2\beta} + \frac 1 {2(1-\beta)\lambda_0} \bigg] e^{-\pi^2/(2k)}$$
as $k\rightarrow 0$.
\end{remark}
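Continuing the sketch above, the following Python lines choose $M$ and $N$ according to the balancing recipe of the remark and evaluate the asymmetric sum \eqref{inforder}; the helper names and any parameter values are again illustrative assumptions rather than part of the method's specification.
\begin{verbatim}
import math
import numpy as np

def balanced_orders(beta, k):
    # M and N chosen so that the three exponential error terms are comparable.
    M = math.ceil(math.pi ** 2 / (4.0 * beta * k ** 2))
    N = math.ceil(math.pi ** 2 / (4.0 * (1.0 - beta) * k ** 2))
    return M, N

def sinc_quadrature_asymmetric(A, beta, k):
    # Asymmetric rule with l = -M, ..., N and y_l = l*k.
    M, N = balanced_orders(beta, k)
    n = A.shape[0]
    I = np.eye(n)
    Q = np.zeros((n, n))
    for l in range(-M, N + 1):
        y = l * k
        Q += np.exp((1.0 - beta) * y) * np.linalg.solve(np.exp(y) * I + A, I)
    return (k * np.sin(np.pi * beta) / np.pi) * Q
\end{verbatim}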
The sinc quadrature error estimate of Theorem~\ref{l:exp}, together with the finite element approximation estimates provided in Theorem~\ref{FEinterror} and Remark~\ref{r:critical_s1}, yields the fully discrete convergence estimate stated below.
\begin{corollary}[Fully Discrete Convergence Estimate]\label{c:fully}
Suppose that \eqref{coer} and
\eqref{bdd} as well as
the elliptic regularity Assumption~\ref{ellreg} hold.
Given $s\in [0,\frac 12)$, set $\alpha_*:=\frac 1 2 (\alpha+\min(1-2s,\alpha))$ and
$\gamma:=\max(s+\alpha_*-\beta,s)$. Then for $\delta \geq \gamma$, there exists a constant $C$ independent of $k<1$ and $h$ such that
$$\bal
\| (A^{-\beta } - {\mathcal Q}_k^{-\beta}(A_h)\pi_h) f\|_{\dH{2s}}
\leq C_{\delta,h} h^{2\alpha_*}\|A^\delta f\| &+ C
e^{-\pi^2/2k}\|f\|_{\dH{2s}},\\
&\quad \forall f\in
D(A^{\delta})\cap \dH{2s},
\eal
$$
where $C_{\delta,h}$ is given by \eqref{cdelta}
and ${\mathcal Q}_k^{-\beta}$ is defined by \eqref{quadh}.
In addition, if the continuous and discrete Kato Square Root Theorems
hold (see Remark~\ref{re:KatoT} and Theorem~\ref{t:discrete_kato})
then the above estimate also holds for $s=\frac 1 2$.
\end{corollary}
\section{Numerical Illustrations for the Convection-Diffusion Problem}\label{s:numerical}
In order to illustrate the performance of the proposed algorithm, we
set ${\mathbb V}=H^1_0(\Omega)$ with $\Omega=(0,1)^2$ and for
$b\in \mathbb R$, consider the sesquilinear form
$$
A(u,v):=\int_\Omega (\nabla u \cdot \nabla
\bar v+b(u_x+u_y)\bar v),\qquad \hbox{for all } u,v\in {\mathbb V}.
$$
This form is regular and the corresponding regularly
accretive operator $A$ has domain $H^2(\Omega)\cap {\mathbb V}$.
In general, it is difficult to \modif{analytically} compute solutions to $u=A^{-\beta} f$
although it is possible in this case. Indeed, we
consider the Hermitian form:
$$
\widetilde S(u,v)=\int_\Omega \bigg (\nabla u \cdot \nabla
\bar v+\frac{b^2}2u\bar v\bigg),\qquad \hbox{for all } u,v\in {\mathbb V}.$$
Fix $f\in L^2(\Omega)$ and for $\mu\ge 0$, let $w\in {\mathbb V}$ solve
$$\mu(w,\phi)+A(w,\phi)=(f,\phi),\qquad \hbox{for all } \phi\in {\mathbb V}.$$
Putting $v=e^{-b(x+y)/2} w$ and $\phi=e^{-b(x+y)/2} \theta$ (for
$\theta\in {\mathbb V}$) in the above
equation and integrating by parts when appropriate, we see that
$v\in {\mathbb V}$ is the solution of
$$\mu(v,\theta)+\widetilde S(v,\theta)=(e^{-b(x+y)/2}f,\theta),\qquad \hbox{for all } \theta\in
{\mathbb V},$$
i.e.,
$$e^{-b(x+y)/2} (\mu+A)^{-1}f= (\mu+\widetilde S)^{-1} e^{-b(x+y)/2}f.$$
Substituting this into \eqref{A-beta} shows that
$$A^{-\beta} f = e^{b(x+y)/2} \widetilde S^{-\beta} (e^{-b(x+y)/2}f).$$
Since, for example, $\sin(\pi x)\sin(2\pi y) $ is an eigenvector of
$\widetilde S$ with eigenvalue $5\pi^2+b^2/2$,
$$u:=A^{-\beta} f =
e^{b(x+y)/2} (5\pi^2+b^2/ 2)^{-\beta}\sin(\pi x)\sin(2\pi y)$$
when
\beq
f=e^{b(x+y)/2}\sin(\pi x)\sin(2\pi y).
\label{fanal}
\eeq
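The same reasoning gives a simple way to evaluate reference solutions numerically: expand $e^{-b(x+y)/2}f$ in the sine eigenbasis of $\widetilde S$, scale each coefficient by the corresponding eigenvalue raised to the power $-\beta$, and multiply back by $e^{b(x+y)/2}$. The Python sketch below does this on a uniform grid; it is only an illustration of the identity above (the grid size and number of modes are arbitrary) and is not the finite element discretization used in the experiments.
\begin{verbatim}
import numpy as np

def reference_solution(f, beta, b, n=64, modes=40):
    # A^{-beta} f = e^{b(x+y)/2} S~^{-beta} ( e^{-b(x+y)/2} f ) evaluated on an
    # n x n interior grid of (0,1)^2, using the eigenpairs
    # sin(i pi x) sin(j pi y), (i^2 + j^2) pi^2 + b^2/2 of S~.
    x = np.arange(1, n + 1) / (n + 1.0)
    X, Y = np.meshgrid(x, x, indexing="ij")
    h = 1.0 / (n + 1.0)
    g = np.exp(-b * (X + Y) / 2.0) * f(X, Y)
    u = np.zeros_like(g)
    for i in range(1, modes + 1):
        for j in range(1, modes + 1):
            phi = np.sin(i * np.pi * X) * np.sin(j * np.pi * Y)
            coef = 4.0 * h * h * np.sum(g * phi)   # approximate L2 coefficient
            lam = (i * i + j * j) * np.pi ** 2 + b ** 2 / 2.0
            u += lam ** (-beta) * coef * phi
    return np.exp(b * (X + Y) / 2.0) * u, X, Y

# For the right-hand side (fanal) the answer is known in closed form:
b, beta = 1.0, 0.5
f = lambda X, Y: np.exp(b * (X + Y) / 2.0) * np.sin(np.pi * X) * np.sin(2.0 * np.pi * Y)
u, X, Y = reference_solution(f, beta, b)
exact = (np.exp(b * (X + Y) / 2.0) * (5.0 * np.pi ** 2 + b ** 2 / 2.0) ** (-beta)
         * np.sin(np.pi * X) * np.sin(2.0 * np.pi * Y))
print(np.max(np.abs(u - exact)))   # essentially zero for this particular f
\end{verbatim}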
The space discretization consists of continuous piecewise bilinear finite elements subordinate to successive quadrilateral refinements of $\Omega$.
Figure \ref{f:convergence_advection} shows the behavior of the errors
$e_h^k:=\| (A^{-\beta} -{\mathcal Q}^{-\beta}_k(A_h)\pi_h ) f \|$
when the advection coefficient is given by $b=1$ and $f$ is given by
\eqref{fanal}. For a fixed number of quadrature points, $e_h^k \sim h^2$, while for a fixed spatial resolution, $e_h^k$ decays exponentially in the number of quadrature points. This is in agreement with the estimate provided in Corollary~\ref{c:fully}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{convergence_advection.eps}
\includegraphics[width=0.45\textwidth]{convergence_advection_quad2_semilog.eps}
\caption{(Left) Decay of $e_h^k$ versus the uniform mesh size $h$ \modif{in a log-log plot}. Second order rate of convergence is observed for all values of $\beta$. The number of quadrature points is taken large enough not to interfere with the spatial discretization error. (Right) Exponential decay of $e_h^k$ as a function \modif{of the square root} of the number of points $M+N+1$ \modif{in a semi-log plot} (see Remark~\ref{rem:MN} for the definition of $k$). The spatial discretization is fixed and consists of 10 uniform refinements of $\Omega$.}\label{f:convergence_advection}
\end{figure}
We now set $b=10$, $f\equiv 1$ and study the boundary layer inherent to convection-diffusion problems.
For this, we consider 8 successive quadrilateral refinements of $\Omega$ for the space discretization.
In particular, the corresponding mesh size $h:=2^{-8}$ is fine enough for the Galerkin representation not to require any stabilization.
The values of the approximations $A_h^{-\beta}1$ for $\beta=0.1,0.3,0.5,0.7,0.9$ over the segment joining the points $(0,0)$ and $(1,1)$ are plotted in Figure \ref{f:con_diff}, together with the graphs of $5A_h^{-\beta}1$ for $\beta=0.1$ and $\beta=0.9$.
The results indicate that the width of the boundary layer for convection-diffusion problems remains proportional to the ratio of diffusion to convection and is therefore independent of $\beta$.
However, its intensity decreases with increasing $\beta$.
\begin{figure}\label{f:con_diff}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{comparison-beta_convdiff_N200_Ref8}&
\includegraphics[width=0.4\textwidth]{convdiff_beta-0_1-0_9}
\end{tabular}
\caption{Approximations of $A^{-\beta}1$ on a subdivision of the unit square using $4^8$ quadrilaterals and $401$ quadrature points.
The width of the boundary layer for convection-diffusion problems appears independent of $\beta$ while its intensity decreases with increasing $\beta$.
(Left) Plots over the segment joining the points $(0,0)$ and $(1,1)$ for $\beta=0.1,0.3,0.5,0.7,0.9$.
(Right) Approximations scaled by a factor $5$ for $\beta=0.9$ and $\beta=0.1$.}
\end{figure}
\section*{Acknowledgment}
The first author was
partially supported by
the National Science Foundation through Grant DMS-1254618 while the
second was partially supported by
the National Science Foundation through Grant DMS-1216551.
The numerical experiments are performed using the \emph{deal.ii} library \cite{BHK:07} and \emph{paraview} \cite{paraview} is used for the visualization.
\bibliographystyle{plain}
| {
"timestamp": "2016-07-15T02:03:38",
"yymm": "1508",
"arxiv_id": "1508.05869",
"language": "en",
"url": "https://arxiv.org/abs/1508.05869",
"abstract": "We study the numerical approximation of fractional powers of accretive operators in this paper. Namely, if $A$ is the accretive operator associated with an accretive sesquilinear form $A(\\cdot,\\cdot)$ defined on a Hilbert space $\\mathbb V$ contained in $L^2(\\Omega)$, we approximate $A^{-\\beta}$ for $\\beta\\in (0,1)$. The fractional powers are defined in terms of the so-called Balakrishnan integral formula. Given a finite element approximation space $\\mathbb V_h\\subset \\mathbb V$, $A^{-\\beta}$ is approximated by $A_h^{-\\beta}\\pi_h$ where $A_h$ is the operator associated with the form $A(\\cdot,\\cdot)$ restricted to $\\mathbb V_h$ and $\\pi_h$ is the $L^2(\\Omega)$-projection onto $\\mathbb V_h$. We first provide error estimates for $(A^\\beta-A_h^{\\beta}\\pi_h)f$ in Sobolev norms with index in [0,1] for appropriate $f$. These results depend on elliptic regularity properties of variational solutions involving the form $A(\\cdot,\\cdot)$ and are valid for the case of less than full elliptic regularity. We also construct and analyze an exponentially convergent sinc quadrature approximation to the Balakrishnan integral defining $A_h^{\\beta}\\pi_h f$. Finally, the results of numerical computations illustrating the proposed method are given.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Numerical Approximation of Fractional Powers of Regularly Accretive Operators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969655605175,
"lm_q2_score": 0.7217432062975978,
"lm_q1q2_score": 0.7099044276282358
} |
https://arxiv.org/abs/1309.0434 | Carries, group theory, and additive combinatorics | Given a group G and a normal subgroup H we study the problem of choosing coset representatives with few carries. | \section{Introduction}
When numbers are added in the usual way {\em carries} occur along the
route. These carries cause a mess and it is natural to seek ways to
minimize them. This paper proves that {\em balanced arithmetic}
minimizes the proportion of carries. It also positions carries as
{\em cocycles} in group theory and shows that if coset
representatives for a finite-index normal subgroup $H$ in a group
$G$ can be chosen so that the proportion of carries is less than
$2/9$, then there is a choice of coset representatives where no
carries are needed (in other words, the extension {\em splits}).
Finally, our paper makes the link between the problems above and
the emerging field of additive combinatorics. Indeed the tools and
techniques of this field are used in our proofs, and our examples
provide an elementary introduction.
\subsection{Carries}
\begin{example}
Table 1 shows a carries matrix for base $b=10$. Thus when $0$ is
added to one of the digits $0,1,\cdots,b-1$, no carries occur. When
$1$ is added, there is a carry of $b$ at $b-1$. There is a carry of
$b$ in position $i,j$ if and only if $i+j\geq b$.
\end{example}
\begin{table}[htb]
\caption{Carries matrix for $b=10$. There is a carry of $b$ if and
only if $i+j\geq b$.}
\begin{small}\begin{equation*}\begin{array}{c|cccccccccc|}
&0&1&2&3&4&5&6&7&8&9\\\hline
0&0&0&0&0&0&0&0&0&0&0\\
1&0&0&0&0&0&0&0&0&0&b\\
2&0&0&0&0&0&0&0&0&b&b\\
3&0&0&0&0&0&0&0&b&b&b\\
4&0&0&0&0&0&0&b&b&b&b\\
5&0&0&0&0&0&b&b&b&b&b\\
6&0&0&0&0&b&b&b&b&b&b\\
7&0&0&0&b&b&b&b&b&b&b\\
8&0&0&b&b&b&b&b&b&b&b\\
9&0&b&b&b&b&b&b&b&b&b\\\hline
\end{array}\end{equation*}\end{small}
\label{tab1}
\end{table}
For an arbitrary base $b>1$ with digits $0$, $1$, $\ldots$, $b-1$, the corresponding matrix has
$\binom{b}{2}$ carries. If the digits are
chosen uniformly at random, the chance of a carry is $\binom{b}{2}/b^2 = \frac 12 - \frac{1}{2b}$.
This is $45\%$ when $b=10$.
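These counts are easy to confirm directly; the short Python sketch below builds the carries matrix of Table 1 for an arbitrary base $b$ (the value $b=10$ merely reproduces the example above).
\begin{verbatim}
b = 10
carries = [[b if i + j >= b else 0 for j in range(b)] for i in range(b)]
num_carries = sum(row.count(b) for row in carries)
print(num_carries, num_carries / b ** 2)   # 45 carries, a 45% chance when b = 10
\end{verbatim}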
If $b\mathbb{Z}\subset\mathbb{Z}$ is the subgroup $\{0,\pm b,\pm 2b,\cdots\}$ and
coset representatives are chosen as $\{0,1,2,\cdots,b-1\}$, the
carries are cocycles \cite{isaksen}: $i+j=(i+j)_b+f(i,j)$ with $(i+j)_b$ the
sum modulo $b$ and $f(i,j)$ the `remainder'. Here $f(i,j)=0$ when
$i+j<b$ and $f(i,j)=b$ when $i+j\geq b$. It is natural to ask if
some other choice of coset representatives has fewer carries. The
answer is classically known.
\begin{example}
For simplicity, take $b$ odd. The
balanced representatives $\{0,\pm 1,\cdots,\pm\tfrac{b-1}{2}\}$ lead
to about half as many carries. For example, when $b=5$, the carries
table for $5\mathbb{Z}\subset\mathbb{Z}$ is shown in Table \ref{tab2}.
\end{example}
\begin{table}[htb]
\caption{Carries matrix for $b=5$ with signed coset representatives $\{ 0, \pm 1, \pm 2\}$. Here
$-i$ is coded as $\bar{i}$.}
\begin{small}\begin{equation*}\begin{array}{c|ccccc|}
&\bar2&\bar1&0&1&2\\\hline
\bar2&\bar{b}&\bar{b}&0&0&0\\
\bar1&\bar{b}&0&0&0&0\\
0&0&0&0&0&0\\
1&0&0&0&0&b\\
2&0&0&0&b&b\\\hline
\end{array}\end{equation*}\end{small}
\label{tab2}
\end{table}
For example $(-2)+(-2)=-5+1$ and $2+2=5-1$. The balanced
representatives lead to $6$ carries while the usual choice leads to
$\binom{5}{2}=10$. Signed digit representations have a long history
going back to Colson \cite{colson} and Cauchy \cite{cauchy}. A
careful history is in Cajori \cite{cajori} with Knuth \cite{knuth}
giving further details. The study of carries has links to probability \cite{holte,diaconis2009a} and various parts of algebra \cite{borodin}.
Can one do better? Why do there have to be any carries? What is
the best that can be done? These are problems in additive
combinatorics. If $X$ is a choice of coset representatives for $b\mathbb{Z}$
in $\mathbb{Z}$, we are asking for connections between $X$ and its sumset $X+X$.
\subsection{Group theory}
These questions make sense for any group. For example, the matrices in
Tables \ref{tab1} and \ref{tab2} also give the carries for the cyclic group
$\mathbb{Z}/b\mathbb{Z}\subset\mathbb{Z}/b^2\mathbb{Z}$ with coset representatives
$\{0,1,2,\cdots,b-1\}$ or $\{0,\pm 1,\cdots,\pm (b-1)/2\}$ and
everything interpreted modulo $b^2$. The proofs for $\mathbb{Z}$ do not
carry over to $\mathbb{Z}/b^2\mathbb{Z}$ since $i+j$ might collapse to a coset
representative modulo $b^2$.
Let us now formulate the carries problem precisely when $G$ is a group,
and $H$ a finite index normal subgroup. Let $X\subset G$ be coset representatives
for $H$ in $G$. Given two elements $x_1$ and $x_2$ in $X$, there
is a unique third element $x_{12}\in X$ such that $x_{12}^{-1}x_1x_2$ lies in the
subgroup $H$. Note that if we multiply $x_1 h_1$ and $x_2 h_2$ the
answer is $x_1 x_2 (x_2^{-1} h_1 x_2 h_2) = x_{12}( x_{12}^{-1} x_1 x_2) (x_2^{-1}h_1x_2h_2)$.
In analogy with the usual addition, we view
$x_{12}^{-1} x_1 x_2$ as the {\sl carry} in performing this multiplication.
Thus carries are elements of the subgroup $H$, and a (non-trivial) carry occurs
for $x,y\in X$ exactly when $x\cdot y$ is not in $X$.
If $X$ is a subgroup (so that necessarily $XH=HX=G$ and $H\cap X=\{1\}$), there are
no carries and the extension $H\subset G$ is said to \textit{split}. For
$\mathbb{Z}/b\mathbb{Z}\subset\mathbb{Z}/b^2\mathbb{Z}$, any choice of coset representatives has $b$
elements and $\mathbb{Z}/b\mathbb{Z}$ is the unique subgroup of $\mathbb{Z}/b^2\mathbb{Z}$ of order
$b$, so the extension fails to split. Our main theorem shows that
if the extension $H \subset G$ is not split, then there must be
many carries. To quantify this notion, let us define
\[
C(X)=\frac{|\{x,y\in X:xy\in X\}|}{|X|^2}.
\]
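For the cyclic example $\mathbb{Z}/b\mathbb{Z}\subset\mathbb{Z}/b^2\mathbb{Z}$ (written additively) the quantity $C(X)$ is easy to compute directly, as in the Python sketch below; the base $b=5$ is an arbitrary illustration. The standard digits give $C(X)=15/25$ and the balanced digits give $19/25$, both below $7/9$, as they must be since this extension does not split.
\begin{verbatim}
def C(X, m):
    # fraction of pairs (x, y) in X whose sum modulo m lands back in X
    S = {x % m for x in X}
    hits = sum(1 for x in X for y in X if (x + y) % m in S)
    return hits / len(X) ** 2

b = 5
m = b * b
standard = list(range(b))                                 # {0, 1, ..., b-1}
balanced = list(range(-(b - 1) // 2, (b - 1) // 2 + 1))   # {0, +-1, ..., +-(b-1)/2}
print(C(standard, m), C(balanced, m), 7 / 9)              # 0.6, 0.76, 0.777...
\end{verbatim}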
\begin{theorem}\label{thm:7/9}
Let $X$ be coset representatives for a normal, finite index subgroup
$H$ in a group $G$. If
\[ C(X)>7/9 \]
then there is a subgroup $K$ with $HK=G$, $H\cap K=\{1\}$.
\end{theorem}
From Theorem 12 on page 182 of \cite{DF} for example, one sees that the
structure of $G$ above may be described as the semi-direct product of the
normal subgroup $H$ and the group $K$.
Further, the constant $7/9$ is sharp as seen by taking $3\mathbb{Z}\subset\mathbb{Z}$ with
balanced coset representatives.
\subsection{Additive combinatorics}
The problems discussed above may be seen as part of additive
combinatorics. A basic question in this area asks how the size $|X\cdot X|$ depends on the
structure of $X$. If $X$ is a subgroup, then $|X\cdot X|=|X|$. For a random set, one may expect
$X\cdot X$ to have about $|X|^2$ elements. What happens if $X\cdot X$ contains unusually few
elements; for example what if $|X\cdot X|\leq 2|X|$? The structure of such sets is
studied in additive combinatorics, which is a burgeoning
area of mathematics with applications in computer science
\cite{trevisan}, harmonic analysis \cite{laba}, number theory \cite{nathanson},
combinatorics and elsewhere. It has spawned a host of new techniques
(e.g. Szemer\'{e}di's regularity lemma \cite{szemeredi,szemeredi-survey}, higher Fourier
analysis \cite{gowers1,gowers2,higher-fourier}). It gives connections between formerly disparate
areas of mathematics (e.g. combinatorics, number theory and ergodic
theory). There are striking results, such as the Green-Tao theorem
that the primes contain arbitrarily long arithmetic progressions
\cite{green-tao1,green-tao2}.
Our theorems offer a gentle introduction to this field in a natural
problem. The proof of the main theorem uses results on {\em
approximate homomorphisms} first studied by computer scientists for
property testing (the study of large systems from properties of
small samples). A second related result is given in
\ref{sec:59/60}; here the situation is more general than in Theorem \ref{thm:7/9} but the
conclusion is weaker. This uses an argument of Fournier, familiar in
additive combinatorics, to show that any finite subset $X$ of a group $G$ is almost a
subgroup if $C(X)$ is large.
Our route to the discovery and proof of Theorem \ref{thm:7/9} has
some lessons. Our first results were limited to $p\mathbb{Z}/p^2\mathbb{Z}$ in
$\mathbb{Z}/p^2\mathbb{Z}$, and they were asymptotic: if $X$ is a set of coset
representatives then $C(X)\leq 3/4+\epsilon$ provided $p$ is a
sufficiently large prime (depending on $\epsilon$). Here the
dependence on $\epsilon$ is exponential. The argument uses
\textit{rectification} \cite{bilu-lev-ruzsa,green-ruzsa}, which
roughly speaking converts additively structured subsets of $\mathbb{Z}/p\mathbb{Z}$
to subsets of $\mathbb{Z}$. Later we found out that we could get rid of the
asymptotics, proving that $C(X)\leq (3p^2+1)/(4p^2)$ for any odd
prime $p$ using a theorem of Lev \cite{lev}. This was done
independently by Alon \cite{alon}. All of these arguments rely on
the primality of the base $p$.
\ref{sec:easy} gives a very easy proof of the optimality of balanced
coset representatives for $b\mathbb{Z}\subset\mathbb{Z}$. Theorem
\ref{thm:7/9} is proved in \ref{sec:7/9}. A different proof
(with $C(X)\geq 59/60$ implying splitting) appears in
\ref{sec:59/60}. In Table 2 above there are three types of carries:
$0,+b,-b$, while only $0$ and $+b$ appear with the usual choice of
digits. This is shown to characterize the usual digits in
\ref{sec:twocarries}. The final section presents some problems and
conjectures. We do not know the answer to some simple related
questions: how well can one do for $(b\mathbb{Z})^2\subset\mathbb{Z}^2$?
\smallskip
{\bf Acknowledgments.} We are grateful to Ben Green, Bob Guralnick and Marty Isaacs
for many valuable discussions.
\section{The easiest case: Minimality of balanced digits for $\mathbb{Z}$}\label{sec:easy}
For $b$ a positive integer, consider $b\mathbb{Z}\subset\mathbb{Z}$. Choose coset
representatives $\mathfrak{X}=\{0,x_1,x_2,\cdots,x_{b-1}\}$ in $\mathbb{Z}$.
There is a carry at $i,j$ if $x_i+x_j\notin\mathfrak{X}$. The
following proposition shows that any choice for $\mathfrak{X}$
results in at least $\lfloor b^2/4\rfloor$ carries. Balanced coset
representatives give this and so are best (in this sense). In fact,
the argument works for any set of $b$ real numbers.
\begin{proposition}\label{prop:Z}
Let $\mathfrak{X}=\{0,x_1,\cdots,x_{b-1}\}$ be distinct real
numbers. Then $\mathfrak{X}$ induces at least $\lfloor b^2/4\rfloor$ carries.
\end{proposition}
\begin{proof}
Let there be $c$ positive and $(b-1-c)$ negative elements in
$\mathfrak{X}$. Say $0<y_1<y_2<\cdots<y_c$ are the positives. Then,
adding $y_c$ results in at least $c$ carries. Adding $y_{c-1}$
results in at least $c-1$ carries. Continuing in this fashion,
adding $y_1$ results in at least $1$ carry. This forces at least
$c(c+1)/2$ carries. Similarly, the negative elements in
$\mathfrak{X}$ force at least $(b-1-c)(b-c)/2$ carries, thus obtaining
altogether
\[
\tfrac{1}{2}[c(c+1)+(b-1-c)(b-c)] = \frac{b^2-1}{4} + \Big( c- \frac{b-1}{2}\Big)^2
\]
carries. This proves the Proposition.
\end{proof}
By examining the above proof, we may check that $\lfloor b^2/4\rfloor$ carries
are attained only if $\mathfrak {X}$ is of the form $\{ xn: \, - \lfloor b/2\rfloor < n\le \lfloor b/2\rfloor\}$
for some $x\neq 0$. Thus, for $b\mathbb{Z} \subset \mathbb{Z}$ balanced coset representatives and
their dilates by any number $a$ relatively prime to $b$ are the only examples with $\lfloor b^2/4\rfloor$
carries.
In the other direction, it is easy to (foolishly) choose coset
representatives $\mathfrak{X}$ for $b\mathbb{Z}$ in $\mathbb{Z}$ such that every sum
results in a carry. For example choose $\{b,b+1,\cdots,2b-1\}$.
\section{The next case: Minimality of balanced digits for cyclic groups}\label{sec:cyclic}
This section studies the following problem: consider
$p(\mathbb{Z}/p^2\mathbb{Z})$ as a subgroup of $\mathbb{Z}/p^2\mathbb{Z}$ for an odd prime $p$. The usual coset representatives are
$\{0,1,2,\dots,p-1\}$. Balanced coset representatives are $\{0,\pm
1,\dots,\pm (p-1)/2\}$. The carries matrices are the same
as for $p\mathbb{Z}\subset\mathbb{Z}$. The following proposition implies that balanced coset representatives again give the minimum number of carries.
\begin{proposition}\label{prop:Zp}
Let $p$ be an odd prime. Let $X\subset\mathbb{Z}/p^2\mathbb{Z}$ be coset representatives for the subgroup $p(\mathbb{Z}/p^2\mathbb{Z})$ in $\mathbb{Z}/p^2\mathbb{Z}$. Then $X$ induces at least $(p^2-1)/4$ carries.
\end{proposition}
Proposition \ref{prop:Zp} is a consequence of the following result,
proved below.
\begin{proposition}\label{prop:sr}
Let $p$ be an odd prime. Let $A_1,A_2,A_3\subset\cg{p^2}$ be three
sets of coset representatives for $p (\cg{p^2}) \subset\cg{p^2}$. Then the
number of solutions to $a_1+a_2=a_3$ with $a_1\in A_1$, $a_2\in
A_2$, and $a_3\in A_3$ is at most $(3p^2+1)/4$.
\end{proposition}
The problem of counting the number of solutions to linear equations
in finite fields has been studied in \cite{lev}. The strategy there
is to use Pollard's theorem \cite{pollard}. Our situation is
slightly different in that we are working in $\cg{p^2}$, which is
not a finite field. However, we can still follow the argument in
\cite{lev}, making use of a version of Pollard's theorem for
composite modulus \cite{pollard}.
\begin{theorem}[Pollard]
Let $m$ be a positive integer. Let $A_1,A_2,\dots,A_k$ be subsets of
$\cg{m}$ and let $A_1',A_2',\dots,A_k'$ be another $k$ subsets of
$\cg{m}$ such that each $A_i'$ consists of consecutive elements and
has $|A_i'|=|A_i|$. Write
\[ S(A_1,A_2,\dots,A_k,r)=\sum_{x\in\cg{m}}\min(r,n(x,A_1,A_2,\dots,A_k)), \]
where $n(x,A_1,A_2,\dots,A_k)$ is the number of representations of
$x$ as $x=a_1+a_2+\dots+a_k$ ($a_i\in A_i$). Define
$S(A_1',A_2',\dots,A_k',r)$ similarly. Suppose that at least $k-1$ of
the sets $A_i$ have the property that
\[ (x-y,m)=1\text{ for }x,y\in A_i\text{ and }x\neq y. \]
Then
\[ S(A_1,A_2,\dots,A_k,r)\geq S(A_1',A_2',\dots,A_k',r). \]
\end{theorem}
To gain an appreciation of Pollard's theorem, consider the special case $m=p$ a prime, $k=2$ and $r=1$.
When $m$ is prime, the hypothesis in Pollard's theorem is automatically satisfied. Now $S(A_1,A_2,1)$
counts the number of elements in the sumset $A_1+A_2$, and Pollard's theorem gives that this
cardinality is smallest when $A_1$ and $A_2$ are intervals. It thus follows that $|A_1+A_2|
\ge \min( p, |A_1|+|A_2|-1)$, which is a fundamental result on set addition known as
the Cauchy-Davenport theorem (a result proved by Cauchy in 1813, and rediscovered by Davenport
in 1935). Thus Pollard's theorem may be viewed as a
generalization of the Cauchy-Davenport result. There has also been extensive
work on extending the Cauchy-Davenport theorem, leading up to Kemperman's very general
theorem \cite{Kemp}; see Serra \cite{Ser} for a recent survey.
For the general case of Pollard's theorem, consider for each natural number $\ell$
the set $S_\ell$ of those elements in ${\Bbb Z}/m{\Bbb Z}$ which can be expressed as $a_1+\ldots+a_k$
in at least $\ell$ ways. Then $S(A_1,A_2,\ldots, A_k,r)$ equals the
sum of the cardinalities of $S_\ell$ for all $1\le \ell \le r$.
\begin{corollary} With notation as in Pollard's theorem
$$
\max_{x} n(x,A_1,\ldots,A_k) \le \max_{x} n(x,A_1^{\prime},\ldots,A_k^{\prime}).
$$
\end{corollary}
\begin{proof} Suppose the corollary does not hold, and take $r = \max_x n(x, A_1^{\prime},\ldots,A_k^{\prime})$ in
Pollard's theorem. Note that
$$
S(A_1^{\prime},\ldots, A_k^{\prime},r) = \sum_{x} n(x,A_1^{\prime}, \ldots, A_k^{\prime}) = |A_1^{\prime}|\cdots |A_k^{\prime}|.
$$
On the other hand, since by assumption $r< n(x,A_1,\ldots, A_k)$ for some $x$,
$$
S(A_1,\ldots, A_k,r) = \sum_x \min(r, n(x,A_1,\ldots,A_k))
< \sum_x n(x,A_1,\ldots, A_k) = |A_1| \cdots |A_k|.
$$
But this contradicts Pollard's theorem, proving the Corollary.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:sr}]
Since $A_1$, $A_2$ and $A_3$ consist of coset representatives for $p (\cg{p^2})$ in $\cg{p^2}$,
the hypothesis in Pollard's theorem is satisfied. Now take $A_1^{\prime} = A_2^{\prime}=A_3^{\prime} =I$
where $I$ is the interval of length $p$ centered around the origin. A simple calculation gives that
$$
\max_x n(x, I, I, I) = n(0,I, I, I) = \frac{3p^2+1}{4}.
$$
By Corollary 3.4 it follows that $n(0,A_1,A_2,-A_3)$ is at most $(3p^2+1)/4$. Since $n(0,A_1,A_2,-A_3)$
precisely counts the number of solutions to $a_1+a_2=a_3$, the Proposition follows.
\end{proof}
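The count entering the proof above is easy to verify by brute force, as in the Python lines below (the choice $p=7$ is an arbitrary example).
\begin{verbatim}
p = 7
half = (p - 1) // 2
I = range(-half, half + 1)        # interval of length p centered at the origin
counts = {}
for a in I:
    for b in I:
        for c in I:
            s = (a + b + c) % (p * p)
            counts[s] = counts.get(s, 0) + 1
print(max(counts.values()), counts[0], (3 * p * p + 1) // 4)   # 37 37 37
\end{verbatim}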
\section{Carries and Approximate Homomorphisms}\label{sec:7/9}
This section proves Theorem \ref{thm:7/9} and gives an introduction
to computer scientists' use of approximate homomorphisms in
cryptography and for verifying program correctness.
\begin{definition}[Approximate homomorphisms]
Let $G_1,G_2$ be arbitrary groups with $G_1$ finite. Let
$\epsilon>0$. A function $f:G_1\rightarrow G_2$ is an
$\epsilon$-homomorphism if, picking $g,g'$ independently and
uniformly in $G_1$,
\[ \mathbf{P}_{g,g'\in G_1}\{f(g)f(g')=f(gg')\}\geq\epsilon. \]
\end{definition}
Checking if a given program or black box is a homomorphism occurs in
cryptography (e.g. checking a random number generator) and in
program checking (e.g. does this matrix multiplication package
really work). Here is a brief description.
\textit{Cryptography.} Despite recent advances, many cryptography
schemes in active use still proceed by taking a message, given as a string of
letters in a finite field $x_1x_2\cdots x_N$, adding \textit{noise}
$\epsilon_1\epsilon_2\cdots\epsilon_N$ to each coordinate, and
sending $x_i+\epsilon_i=y_i$. A receiver in possession of the recipe
for the noise $\epsilon_i$ decodes via $y_i-\epsilon_i=x_i$. The
noise is usually generated by a pseudorandom generator. For example,
if the field is $\mathbb{Z}/p\mathbb{Z}$, the generator might be
$\epsilon_{i+1}=a\epsilon_i+b\pmod p$. Another scheme has the field
$\mathbb{Z}/2\mathbb{Z}$, breaks the message into blocks: $X_1=(x_1\cdots x_{256})$,
$X_2=(x_{257}\cdots x_{512}),\cdots$, and adds vectors of noise
$\tilde{\epsilon}_1,\tilde{\epsilon}_2,\cdots$. These
$\tilde{\epsilon}_i$ are often generated by a simple scheme such as
$\tilde{\epsilon}_{i+1}=A\tilde{\epsilon}_i$ with $A$ a fixed
$256\times 256$ matrix. Someone interested in checking this
generator has to determine $(a,b)$ (or $A$) and the initial seed. A
first task is to decide if such a linear scheme is in use. This
entails testing if the output is a homomorphism! For background and
a fascinating success story in online poker, see \cite{poker}.
\textit{Program checking.} A host of computer scientists have
developed a sophisticated suite of programs for testing if programs
designed to do standard numerical tasks are doing their job. A
readable entry to this literature is \cite{blr} and their
references. As an example, consider a program $P$ to multiply two
$n\times n$ matrices $A,B$ with elements in a finite field. Given
$A,B$, the program outputs $P(A,B)$. A complete test is out of the
question. A test which proves correctness with high probability is
suggested in \cite{blr}. Given $A,B$, form random uniform matrices
$A_1,B_1$. Set $A_2=A-A_1$, $B_2=B-B_1$, and
$C=P(A_1,B_1)+P(A_1,B_2)+P(A_2,B_1)+P(A_2,B_2)$. If the program is
working then $C=P(A,B)$ by simple algebra. The tools of approximate
homomorphisms are used to show this test (amplified by repetitions)
gives an efficient check which works with arbitrarily high
probability. While the examples above involve homomorphisms between
abelian groups $(\mathbb{Z}/p\mathbb{Z})^{n^2}$, the theorists developed their tools
for general groups. One of their theorems turns out to be just what
we need to prove Theorem \ref{thm:7/9}.
The following theorem, due to Ben-Or, Coppersmith, Luby, and
Rubinfeld \cite{approx_hom}, says that for $\epsilon>7/9$, an
$\epsilon$-approximate homomorphism must coincide with a genuine
homomorphism on a large subset of $G_1$.
\begin{theorem}[Structure theorem for approximate homomorphisms]\label{thm:approx_hom}
Let $G_1,G_2$ be arbitrary groups with $G_1$ finite. Suppose that
$f:G_1\rightarrow G_2$ is an $\epsilon$-approximate homomorphism for
some $\epsilon>7/9$. Then there is a genuine homomorphism
$\phi:G_1\rightarrow G_2$ such that $\mathbf{P}_{g\in
G_1}(f(g)\neq\phi(g))\leq\tau$, where $\tau=\tau(\epsilon)$ is the smaller root of
the equation $3x-6x^2=1-\epsilon$.
\end{theorem}
Note that $\tau(\epsilon)$ equals $(3-\sqrt{24\epsilon -15})/12$, and so $\tau(\epsilon) < (3-\sqrt{11/3})/12 = 0.0904\ldots$
when $\epsilon >7/9$. Both the range $\epsilon >7/9$ and the parameter $\tau(\epsilon)$
are sharp. The
genuine homomorphism $\phi$ in the statement is constructed by
taking $\phi(g)$ to be the most frequent value of $f(gg')f(g')^{-1}$ over
all $g'\in G_1$. Under the stated assumptions, it can be shown that
this most frequent value is well-defined, the resulting map $\phi$ is a
genuine homomorphism, and it well approximates $f$.
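To make the construction concrete, the Python sketch below carries it out in the toy case $G_1=G_2=\mathbb{Z}/m\mathbb{Z}$, written additively so that $f(gg')f(g')^{-1}$ becomes $f(g+g')-f(g')$; the modulus, the underlying homomorphism $g\mapsto 3g$, and the corrupted points are arbitrary illustrative choices.
\begin{verbatim}
from collections import Counter

def majority_decode(f, m):
    # phi(g) := most frequent value of f(g + g') - f(g') over g' in Z/mZ
    def phi(g):
        votes = Counter((f[(g + gp) % m] - f[gp]) % m for gp in range(m))
        return votes.most_common(1)[0][0]
    return [phi(g) for g in range(m)]

m = 30
f = [(3 * g) % m for g in range(m)]   # a genuine homomorphism g -> 3g ...
f[4] = (f[4] + 7) % m                 # ... corrupted at two points, so f is still
f[11] = (f[11] + 7) % m               # an eps-approximate homomorphism with eps > 7/9
print(majority_decode(f, m) == [(3 * g) % m for g in range(m)])   # True
\end{verbatim}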
\begin{proof}[Proof of Theorem \ref{thm:7/9}] Since $H$ is a normal subgroup, the quotient $G/H$ forms
a group. Consider now the map $f: G/H \to G$ that sends a coset $gH$ to its unique coset representative in $X$.
Given two cosets (along with their representatives in $X$), $gH = xH$ and $g^{\prime} H = x^{\prime} H$
note that $f(gH )f(g^{\prime}H) = f(gg^{\prime}H)$ if and only if $xx^{\prime}$ belongs to $X$. In other words,
$f$ is a $C(X)$-approximate homomorphism.
Since $C(X)>7/9$ by hypothesis, Theorem \ref{thm:approx_hom} implies that
there is a genuine homomorphism $\phi: G/H \to G$ such that $f(gH) =\phi(gH)$ for
all but at most $\tau |G/H| < \frac 1{10} |G/H|$ cosets. Let $K$ denote the image of the homomorphism $\phi$.
Thus $K$ is a subgroup of $G$ with $|K\cap X| \ge (1-\tau) |X| > \frac{9}{10}|X|=\frac{9}{10} |G/H|$. By the
first isomorphism theorem $K$ is isomorphic to $(G/H)/\text{ker}(\phi)$
and therefore the kernel of $\phi$ is trivial, and $|K|=|G/H|$. If
$K$ contains an element $1\neq \ell \in H$, then for each $k\in K$ at most one
of $k$ or $k\ell$ can be in $X$; this would mean that $|K\cap X|\le |K|/2$ contradicting
our lower bound for $|K\cap X|$. Thus $K\cap H=\{1\}$, and distinct elements of $K$ belong
to distinct cosets of $H$. Therefore $K$ consists of a complete set of
coset representatives for $H$ in $G$,
and we have $G=HK$, as desired.
\end{proof}
\section{An argument of Fournier}\label{sec:59/60}
In this section we study a problem that is a little more general than the
carries question. Let $A$ be a finite set in a group $G$, and set
(in analogy with our earlier definition)
\[
C(A)=\frac{|\{a_1,a_2\in A:a_1a_2\in A\}|}{|A|^2}.
\]
The following result, which is established following an argument of Fournier,
shows that if $C(A)$ is close to $1$, then $A$ is almost a subgroup.
\begin{theorem}\label{Fournier} For a finite set $A$ in a group $G$, if $C(A) \ge 1-\delta$
for some $\delta \le 1/60$, then there exists a subgroup $K$ of $G$
such that
$$
|K| \le 10|A|/9 , \qquad \text{and} \qquad |A\cap K| \ge (1-5\delta)
|A|.
$$
\end{theorem}
Let $A$ be any subset of $G$, and let $\epsilon$ be a real number in
$[0,1]$. Define
$$
\text{Sym}_{1-\epsilon}(A) = \{ x \in G: |A \cap Ax | \ge
(1-\epsilon)|A|\}.
$$
Since $|A\cap Ax| = |Ax^{-1} \cap A|$ the set
$\text{Sym}_{1-\epsilon}(A)$ is symmetric (that is, closed under
inverses). The following monotonicity condition is clear:
$$
\text{Sym}_{1-\epsilon_1}(A) \subset \text{Sym}_{1-\epsilon_2}(A) \
\ \text{if } \ \ \epsilon_1 \le \epsilon_2.
$$
Observe further that if $x_1\in \text{Sym}_{1-\epsilon_1}(A)$ and
$x_2 \in \text{Sym}_{1-\epsilon_2}(A)$ then $x_1 x_2 $ lies in
$\text{Sym}_{1-\epsilon_1-\epsilon_2}(A)$. To see this, note that
\begin{align*}
\#\{a\in A: ax_1x_2 \notin A\} &\le \#\{a \in A: \ ax_1 \notin A\} +
\#\{ a\in A: \ ax_1\in A, \ \ ax_1x_2\notin A\} \\
&\le \epsilon_1 |A| + \# \{b \in A: \ bx_2 \notin A\} \le
(\epsilon_1+\epsilon_2)|A|.
\end{align*}
The identity
$$
\sum_{x\in G} |A\cap Ax| =\sum_{x\in G} \sum_{\substack{ {a_1, a_2
\in A} \\ {a_1 =a_2 x}}} 1 = \sum_{a_1, a_2 \in A} \sum_{x=a_2^{-1}
a_1} 1 = |A|^2
$$
shows that
\begin{equation}
\label{Fournier1} |\text{Sym}_{1-\epsilon}(A)| \le |A|/(1-\epsilon).
\end{equation}
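For subsets of a finite cyclic group, written additively so that $Ax$ becomes $A+x$, these sets are easy to compute directly; in the Python sketch below the modulus, the set $A$, and $\epsilon$ are arbitrary illustrative choices.
\begin{verbatim}
def sym(A, m, eps):
    # Sym_{1-eps}(A) = { x in Z/mZ : |A intersect (A + x)| >= (1 - eps)|A| }
    A = {a % m for a in A}
    return {x for x in range(m)
            if len(A & {(a + x) % m for a in A}) >= (1 - eps) * len(A)}

m, eps = 36, 0.25
A = set(range(0, m, 6)) | {1}   # a subgroup of Z/36Z together with one extra element
S = sym(A, m, eps)
print(sorted(S), len(S) <= len(A) / (1 - eps))   # the subgroup is recovered; the bound holds
\end{verbatim}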
\begin{lemma}
\label{Fournier2} Let $A$ be a subset of $G$ with $C(A) \ge
1-\delta$. Then for any $\epsilon>\delta$ we have
$$
(1-\delta/\epsilon) |A| \le |A\cap \text{Sym}_{1-\epsilon}(A)|.
$$
\end{lemma}
\begin{proof} Note that
$$
C(A) |A|^2 = \# \{a_1 a_2 = a_3\} = \sum_{a_2\in A} |A\cap Aa_2|.
$$
Now $|A\cap Aa_2|\le |A|$ for all $a_2\in A$, and $|A\cap Aa_2| \le
(1-\epsilon)|A|$ for $a_2$ lying in $A$ but not in
$\text{Sym}_{1-\epsilon}(A)$. Thus
$$
(1-\delta)|A|^2 \le |A| |A\cap \text{Sym}_{1-\epsilon}(A)| +
(1-\epsilon)|A| (|A|-|A\cap \text{Sym}_{1-\epsilon}(A)|),
$$
and the lemma follows upon rearranging.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Fournier}] With $\eta = 1/20$ we shall show that
$\text{Sym}_{1-2\eta}(A)$ equals $\text{Sym}_{1-4\eta}(A)$. Then
$\text{Sym}_{1-2\eta}(A) \cdot \text{Sym}_{1-2\eta}(A) \subset
\text{Sym}_{1-4\eta}(A) = \text{Sym}_{1-2\eta}(A)$, and it follows
that $\text{Sym}_{1-2\eta}(A) = \text{Sym}_{1-4\eta}(A)$ is a group.
This is the group $K$ of the Theorem. By \eqref{Fournier1} it
satisfies $|K|\le 10|A|/9$, and by Lemma \ref{Fournier2} we have
$|A\cap K| \ge (1-5\delta) |A|$; thus $K$ has the properties claimed
in the Theorem.
Since $\text{Sym}_{1-2\eta}(A) \subset \text{Sym}_{1-4\eta}(A)$, it
remains only to show the reverse inclusion. Consider any $x\in
\text{Sym}_{1-4\eta}(A)$. The sets $\text{Sym}_{1-\eta}(A)$ and
$x\text{Sym}_{1-\eta}(A)$ both have cardinality at least
$|A|(1-\delta/\eta)$ by Lemma \ref{Fournier2}, and both are
contained in the set $\text{Sym}_{1-5\eta}(A)$ of cardinality at
most $|A|/(1-5\eta)$ by \eqref{Fournier1}. Since $\delta <1/60$, we
deduce that $\text{Sym}_{1-\eta}(A)$ and $x\text{Sym}_{1-\eta}(A)$
have a non-empty intersection, and therefore $x$ may be written as
the product of two elements from $\text{Sym}_{1-\eta}(A)$. Hence
$x$ must lie in $\text{Sym}_{1-2\eta}(A)$, completing the proof.
\end{proof}
\section{Characterizing the traditional choice of
digits}\label{sec:twocarries}
This section returns to the original setting of the cyclic groups
$p(\mathbb{Z}/p^2\mathbb{Z}) \subset\mathbb{Z}/p^2\mathbb{Z}$. The usual choice of coset
representatives $\{0,1,2,\cdots,p-1\}$ results in two types of
carries $\{0,p\}$ (Table 1). Balanced coset representatives (Table
2) need three types of carries $\{0,p,-p\}$. Random coset
representatives almost surely need all $p$ carries. The results
below show that two types of carries characterize the usual choice
of coset representatives. They use some basic tools of additive
combinatorics due to Freiman and make for a nice introduction to
these tools in a natural problem. At present the argument relies
on $p$ being prime, and it would be interesting to extend it to other groups.
\begin{theorem}\label{prop:char} Let $p$ be a prime, and let $A\subset\cg{p^2}$
be a set of coset representatives for $p(\cg{p^2})\subset\cg{p^2}$.
Suppose that the carries matrix associated to $A$ contains only two
distinct entries. Then there exist $c\in (\cg{p^2})^{\times}$ and
$d\in p(\cg{p^2})$ such that after dilating $A$ by $c$ and
translating by $d$ we have either $cA+d=\{0,1,\dots,p-1\}$ or
$cA+d=\{1,2,\cdots,p\}$.
\end{theorem}
If the carries matrix for $A$ contains only two distinct entries then
the sumset $A+A$ is contained in two translates of the set $A$ and thus $|A+A|\le 2|A|$.
Pollard's theorem tells us that $|A+A|\ge 2|A|-1$ (this is essentially the
Cauchy-Davenport theorem, as discussed in Section 3), and so our situation
is very close to the minimal possible doubling of a set. Note that a typical random
set $A$ might be expected to have sumset $A+A$ as large as $|A|^2$ in size,
and one would expect sets with small doubling to be very structured and far from random.
This is the content of a celebrated theorem of Freiman, and we give a sample such
result in the case of subsets of the integers.
\begin{nonumtheorem}[Freiman's $3k-3$ theorem]\label{thm:fz}
Let $A\subset\mathbb{Z}$ with $|A|=k\geq 3$. If $|A+A|=2k-1+b\leq 3k-4$
then $A$ is a subset of an arithmetic progression of length $k+b$.
\end{nonumtheorem}
Freiman's $3k-3$-theorem does not directly apply in our situation, since
we are dealing with a subset of ${\Bbb Z}/{p^2 {\Bbb Z}}$ rather than
a subset of ${\Bbb Z}$. The problem is that the congruence $a+b\equiv c+d \pmod{p^2}$
does not necessarily mean that $a+b=c+d$ as an equation in the
integers. Thus a sumset in ${\Bbb Z}/p^2 {\Bbb Z}$ could look very
different from a sumset in ${\Bbb Z}$. However, if we could choose
representatives for the residue classes of $A \subset \cg{p^2}$ to lie always in
the interval $(-p^2/4,p^2/4]$ then the congruence $a+b\equiv c+d\pmod{p^2}$
is indeed equivalent to the equation $a+b=c+d$. If this can be done, then we may
as well view $A$ as a subset of the integers and results such as Freiman's $3k-3$-theorem
would become applicable. This is a case of a very useful notion of Freiman which
identifies when two subsets of different groups behave additively in a similar way.
\begin{definition}[Freiman isomorphism] Let $A\subset G$ and $B\subset H$ be
two subsets of the abelian groups $G$ and $H$. We say that $A$ and $B$ are
Freiman isomorphic if there is a bijection $\phi: A \to B$ such that the relation
$x+y=z+w$ holds with $x$, $y$, $z$, $w$ in $A$ if and only if
the relation $\phi(x)+\phi(y)=\phi(z)+\phi(w)$ holds in the group $H$.
\end{definition}
Note that if $A$ and $B$ are Freiman isomorphic then $|A+A|= |B+B|$.
Returning to our problem, we would like to show that our set $A\subset \cg{p^2}$
is Freiman isomorphic to a subset of the integers, and then
apply Freiman's $3k-3$ theorem. This follows a strategy pioneered by Freiman himself,
who showed that small subsets of ${\Bbb Z}/p{\Bbb Z}$ with
small doubling are isomorphic to subsets of the integers (also called {\sl rectifiable}) leading to the
following theorem (see Section 2.8 of Nathanson \cite{nathanson}).
\begin{nonumtheorem}[Freiman's $2.4$ theorem]\label{thm:fzp}
Set $c=1/35$ and $\alpha=2.4$. Let $A\subset\cg{p}$ with $|A|=k\leq
cp$. If $ |A+A|=2k-1+b\leq\alpha k-3$ then $A$ is contained in an
arithmetic progression in $\cg{p}$ of length $k+b$.
\end{nonumtheorem}
More recently Bilu, Lev and Ruzsa \cite{bilu-lev-ruzsa} and Green and Ruzsa \cite{green-ruzsa}
have shown how any small subset of $\cg{p}$ with small doubling
may be rectified. By adapting these arguments to our setting of $\cg{p^2}$ we shall
establish the following Proposition.
\begin{proposition} Let $A\subset \cg{p^2}$ be a set of coset representatives for $p(\cg{p^2}) \subset \cg{p^2}$ and
suppose that $|A+A|\le 2|A|$. Then there exists a dilation $c\in (\cg{p^2})^{\times}$ and a translation $d\in \cg{p^2}$ such
that $cA+d$ lies in $(-p^2/4,p^2/4]$. Thus $A$ is Freiman isomorphic to a subset of the integers.
\end{proposition}
Assuming this Proposition, let us now prove Theorem 6.1.
\begin{proof}[Proof of Theorem 6.1 assuming Proposition 6.3] Let $A\subset\mathbb{Z}/p^2\mathbb{Z}$ be a set of coset
representatives with only two distinct carries, so that $|A+A|\leq
2|A|$. By Proposition 6.3 we may dilate $A$ by some $c \in (\cg{p^2})^{\times}$
and obtain a set contained in $(d-p^2/4,d+p^2/4]$ for some $d \in \cg{p^2}$.
This means that $A$ is Freiman isomorphic to a subset of the integers, and applying Freiman's $3k-3$-theorem
we see that $A$ must lie in an arithmetic progression of length at most $|A+A|-|A|+1 \le (p+1)$.
After a dilation if necessary, we may assume that $A$ lies in an
interval of length $p+1$, missing exactly one element from this
interval. Since $A$ consists of coset representatives, the missing
element must be one of the endpoints of the interval, so that
$A$ consists of consecutive elements; say $A = \{u, u+1, \ldots, u+p-1\}$ for some $u$.
It remains to show that $u\equiv 0,1\pmod p$.
To see this, if $u\equiv i\pmod p$ for
some $2\leq i\leq p-1$, then the following examples show that
there must be three types of carries:
\[ (u+p-1)+(u+p-1)-(u+i-2)=2p+u-i; \ \ u+u-(u+i)=u-i;
\]
and
\[
(u+p-1)+u-(u+i-1)=p+u-i. \]
This completes our proof.
\end{proof}
Now we turn to the proof of Proposition 6.3, whose argument involves
two parts. First we establish a combinatorial result which shows that
if a substantial part of $A$ can be translated and dilated into the interval
$(-p^2/4,p^2/4]$ then all of $A$ can be. This result holds for all cyclic
groups $\cg{m}$. Second we use some simple Fourier analysis
to show that a large part of $A$ can be translated and dilated
into $(-p^2/4,p^2/4]$ so that our first argument may be used. This
argument requires that we are working in $\cg{p^2}$.
\subsection{From a large subset to the entire set}
\begin{proposition}\label{prop:rect}
Let $m$ be a positive integer, and let $A$ be a subset
of $\cg{m}$ such that if $x \neq y\in A$ then $(x-y,m)=1$. Over all the sets $cA+d$
(with $c\in (\cg{m})^*$ and $d\in \cg{m}$), let $\ell$ denote the maximum size of the intersection of
such a set with $(-m/4,m/4]$. Suppose that $\ell < |A|$. Then either
$\ell < (|2A|+4)/3$ or $m\le 6(|2A|-\ell)$.
\end{proposition}
\begin{proof} Let us suppose that $A$ has been already translated and dilated to have
maximum intersection with $(-m/4,m/4]$, and let $A_0$ denote this
intersection. Thus $|A_0| =\ell <|A|$ by assumption, and $A_0$ is
Freiman isomorphic to a subset of the integers.
Now $2A_0 \subset 2A$, and write $|2A| = 2\ell -1 + b$. We may assume that $b\le \ell -3$,
else the first alternative in the Proposition holds. Since $|2A_0| \le 2\ell -1 +b$, by Freiman's $(3k-3)$-theorem we see that $A_0$ is contained in an arithmetic progression of
size $\ell +b$. Since the elements of $A$ (and hence $A_0$) satisfy that $(x-y,m)=1$, the
common difference of this arithmetic progression must be coprime to $m$. Therefore
by translating and dilating (using dilations coprime to $m$) we may assume that $A_0$
is contained inside $(-(\ell+b)/2,(\ell+b)/2]$.
Since $\ell <|A|$, there must be an element $a \in A$ such that when reduced $\pmod m$,
$a$ lies either in $(m/4,m/2]$ or $(-m/2,-m/4]$. Now the set $A_0+A_0$ has at least $2\ell -1$ elements, and all of these lie in
$(-\ell-b,\ell+b]$, and the set $A_0 +\{a\}$
has $\ell$ elements all lying in either $(m/4-(\ell+b)/2,m/2+(\ell+b)/2]$ or
$(-m/2-(\ell+b)/2,-m/4+(\ell+b)/2]$. If the second alternative of the proposition doesn't
hold, then the sets $A_0+A_0$ and $A_0+\{a\}$ have at most one element in common,
and thus give at least $3\ell -2$ elements in $2A$ which
is a contradiction.
\end{proof}
\subsection{Obtaining concentration near the origin} Now we carry out the second part of
the argument showing that a large part of $A$ can be put inside $(-p^2/4,p^2/4]$.
\begin{proposition}[Concentration near the origin]\label{cor:rough}
Let $A\subset\mathbb{Z}/p^2\mathbb{Z}$ be as in the statement of Proposition 6.3. Then there exist $c\in (\mathbb{Z}/p^2\mathbb{Z})^{\times}$ and $d\in\mathbb{Z}/p^2\mathbb{Z}$ such
that after dilating $A$ by $c$ and translating by $d$, we have
$$
|(cA+d)\cap (-p^2/4,p^2/4]| \ge \frac p2 \Big( 1 + \Big(\frac{p-2}{2(p-1)}\Big)^{\frac 12}\Big).
$$
\end{proposition}
This
uses a little Fourier analysis: For $A\subset\mathbb{Z}/m\mathbb{Z}$ the Fourier
coefficients are defined by the formula
\[ \hat{A}(r)=\sum_{a\in A}e^{2\pi ira/m} \]
for $r\in\mathbb{Z}/m\mathbb{Z}$.
\begin{lemma}[Obtaining a large Fourier
coefficient]\label{lem:largeFC} Let $m$ be a positive integer, and
let $A\subset \cg{m}$ be a subset. Write $|A|=\alpha_1m$ and
$|A+A|=\alpha_2m$. Then
$$
\max_{r\neq 0} |{\hat A}(r)| \ge |A| \Big( \frac{\alpha_1
(1-\alpha_{2})}{\alpha_{2}(1-\alpha_1)}\Big)^{1/2}.
$$
\end{lemma}
\begin{proof} Write $S=A+A$. Note that
$$
\alpha_1^2 m^2 = |A|^{2} = \sum_{\substack{a_1,a_2\in A\\
a_1+a_2\in S}}1.
$$
Using Parseval's identity this equals
$$
\frac 1m \sum_k {\hat A}(k)^{2} \widehat{S}(-k)= \alpha_1^{2}
\alpha_{2} m^{2} + \frac 1m \sum_{k\neq 0} {\hat A}(k)^{2}
\widehat{S}(-k).
$$
Thus
\begin{align*}
\alpha_1^2 (1-\alpha_2) m^2 = \frac 1m \Big| \sum_{k\neq 0} {\hat
A}(k)^2 \widehat{S}(-k)\Big| \le \Big(\max_{k\neq 0}|{\hat
A}(k)|\Big) \frac{1}{m} \sum_{k\neq 0} |{\hat A}(k)|
|\widehat{S}(-k)|.
\end{align*}
By Cauchy's inequality and Parseval's identity
\begin{align*}
\frac 1m \sum_{k\neq 0} |{\hat A}(k)| |\widehat{S}(-k)| &\le
\Big(\frac 1m \sum_{k\neq 0} |{\hat A}(k)|^2 \Big)^{\frac 12}
\Big(\frac 1m \sum_{k\neq 0} |\widehat{S}(-k)|^2 \Big)^{\frac 12}
\\
&= (\alpha_1(1-\alpha_1))^{\frac 12} (\alpha_2 (1-\alpha_2))^{\frac
12}m,
\end{align*}
and the Lemma follows with a little rearranging.
\end{proof}
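The inequality is easy to test numerically; the Python sketch below does so for the standard coset representatives $\{0,\dots,p-1\}$ of $p(\cg{p^2})$ in $\cg{p^2}$, with the arbitrary choice $p=7$.
\begin{verbatim}
import numpy as np

def fourier(A, m):
    # hat{A}(r) = sum_{a in A} exp(2 pi i r a / m) for r = 0, 1, ..., m-1
    A = np.array(sorted(A))
    r = np.arange(m)
    return np.exp(2j * np.pi * np.outer(r, A) / m).sum(axis=1)

p = 7
m = p * p
A = list(range(p))             # the standard digits as coset representatives
alpha1 = len(A) / m
alpha2 = len({(a + b) % m for a in A for b in A}) / m
lower = len(A) * np.sqrt(alpha1 * (1 - alpha2) / (alpha2 * (1 - alpha1)))
print(np.max(np.abs(fourier(A, m)[1:])), lower)   # about 6.8 versus about 4.8
\end{verbatim}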
We also require the following combinatorial result of Lev \cite{Lev2} (see also Theorem 2.9 of
Nathanson \cite{nathanson}).
\begin{lemma} \label{lem:nathanson}
Let $z_1,\dots,z_m\in\mathbb{C}$ be points on the unit circle. If
\[ |z_1+\dots+z_m|>2n-m+2(m-n)\cos\left(\phi/2\right),\]
then there exists an arc on the unit circle of length $\phi$
containing more than $n$ points. In particular some arc of length
$\pi$ contains at least $\frac{1}{2} (m+ |z_1+\ldots +z_m|)$ points.
\end{lemma}
\begin{proof}[Proof of Proposition 6.5] Apply Lemma \ref{lem:largeFC} with $m=p^2$, $\alpha_1=1/p$, and
$\alpha_2\le 2/p$, to get
\[
\max_{r\neq 0} |\hat{A}(r)|\geq p\left(\frac{p-2}{2(p-1)}\right)^{1/2}. \]
Since $A$ consists of coset representatives for $p(\cg{p^2}) \subset \cg{p^2}$ it follows that ${\hat A}(r)=0$
for those $r$ that are multiples of $p$ but not of $p^2$. Thus the frequency $r$ at which the maximal non-trivial
Fourier coefficient above is attained is coprime to $p$. Hence, after dilating the original $A$ by $r$ if needed, we may
assume that the maximal Fourier coefficient is attained at $r=1$.
Now apply Lemma \ref{lem:nathanson} with the $p$ points $e^{2\pi ia/p^2}$ for $a\in A$. We conclude that
some arc of length $\pi$ contains at least $\frac 12 (p+|{\hat A}(1)|)$ points $e^{2\pi i a/p^2}$,
which is the Proposition.
\end{proof}
\begin{proof}[Proof of Proposition 6.3] When $p=2$ we may easily translate and dilate $A$ to equal $\{0,1\}$.
When $p=3$ or $5$ Proposition 6.5 already shows that $A$ may be translated and dilated
to lie inside $(-p^2/4,p^2/4]$. For $p$ at least $7$, a small calculation shows that Propositions 6.4 and 6.5 may be
combined to give the conclusion of Proposition 6.3.
\end{proof}
\section{Open Problems}
For $X$ coset representatives for a normal, finite index subgroup
$H$ in an arbitrary group $G$, we have shown that either $G$ is a
semidirect product of $H$ and another subgroup $K$, or $C(X)\leq 7/9$. For concrete examples of the pair
$(G,H)$, it is an interesting question to determine what the best
upper bound for $C(X)$ in this statement is. Denote this upper bound
by $C(G,H)$. We showed that $C(\mathbb{Z},b\mathbb{Z})=1 - \lfloor b^2/4\rfloor/b^2$; in particular
$C(\mathbb{Z},b\mathbb{Z})=3/4+o(1)$ as $b\rightarrow\infty$. Consider the
two-dimensional question of determining $C(\mathbb{Z}\times\mathbb{Z},b\mathbb{Z}\times
b\mathbb{Z})$. Clearly $C(\mathbb{Z}\times\mathbb{Z},b\mathbb{Z}\times b\mathbb{Z})\leq C(\mathbb{Z}, b\mathbb{Z})$, and
we conjecture that $C(\mathbb{Z}\times\mathbb{Z},b\mathbb{Z}\times b\mathbb{Z}) = C(\mathbb{Z}, b\mathbb{Z})^2 = (9/16+o(1))$;
this bound may be attained by taking $X=\{-(b-1)/2,\cdots,(b-1)/2\}\times\{-(b-1)/2,\cdots,(b-1)/2\}$.
We are unable to prove this conjecture; however, in \cite{Shao}, Shao makes
partial progress obtaining $C(\mathbb{Z}\times\mathbb{Z},b\mathbb{Z}\times b\mathbb{Z})\leq
1-3\sqrt{3}/(4\pi)\approx 0.59$; note that $9/16=0.5625$, so Shao's bound is not
too far from our conjecture. As mentioned earlier, another open problem is to
extend Theorem 6.1 to other groups.
To end this paper, we make a final remark on Theorem
\ref{thm:approx_hom}. It is natural to wonder what can be said about
an $\epsilon$-approximate homomorphism $f$ for a small positive
constant $\epsilon$ (say $\epsilon=0.01$). We have already seen
that, in general, $f$ need not resemble a genuine homomorphism. On
the other hand, what one can conclude is that $f$ resembles a
genuine {\em local} homomorphism. In the special case when $G$ and
$H$ are vector spaces over finite fields, it turns out that any
$\epsilon$-approximate homomorphism {\em does} resemble a genuine
(global) homomorphism \cite{samorodnitsky}. A quantitative version
of this statement is equivalent to the polynomial Freiman-Ruzsa
(PFR) conjecture, a famous open problem in additive combinatorics.
See \cite{pfr} for the precise statement of this conjecture in the
finite field setting.
\bibliographystyle{plain}
| {
"timestamp": "2013-09-03T02:12:33",
"yymm": "1309",
"arxiv_id": "1309.0434",
"language": "en",
"url": "https://arxiv.org/abs/1309.0434",
"abstract": "Given a group G and a normal subgroup H we study the problem of choosing coset representatives with few carries.",
"subjects": "Combinatorics (math.CO); Group Theory (math.GR); Number Theory (math.NT)",
"title": "Carries, group theory, and additive combinatorics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969641180276,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7099044265871286
} |
https://arxiv.org/abs/cond-mat/0612188 | Optimum exploration memory and anomalous diffusion in deterministic partially self-avoiding walks in one-dimensional random media | Consider $N$ points randomly distributed along a line segment of unitary length. A walker explores this disordered medium moving according to a partially self-avoiding deterministic walk. The walker, with memory $\mu$, leaves from the leftmost point and moves, at each discrete time step, to the nearest point, which has not been visited in the preceding $\mu$ steps. We have obtained analytically the probability $P_N(\mu) = (1 - 2^{-\mu})^{N - \mu - 1}$ that all $N$ points are visited in this open system, with $N \gg \mu \gg 1$. The expression for $P_N(\mu)$ evaluated in the mentioned limit is valid even for small $N$ and leads to a transition region centered at $\mu_1 = \ln N/\ln 2$ and with width $\epsilon = e/\ln2$. For $\mu < \mu_1 - \epsilon/2$, the walker gets trapped in cycles and does not fully explore the system. For $\mu > \mu_1 + \epsilon/2$ the walker explores the whole system. In both cases the walker presents diffusive behavior. Nevertheless, in the intermediate regime $\mu \sim \mu_1 \pm \epsilon/2$, the walker presents anomalous diffusion behavior. Since the intermediate region increases as $\ln N$ and its width is constant, a sharp transition is obtained for large one-dimensional systems. The walker does not need to have full memory of its trajectory to explore the whole system; it suffices to have memory of order $\mu_1$. | \section{Introduction}
While random walks in regular or disordered media have been thoroughly explored~\cite{fisher:1984}, deterministic walks in regular~\cite{grassberger:92} and disordered media~\cite{bunimovich:2004,boyer_2004,boyer_2005,boyer_2006} have been much less studied.
Here we are concerned with the properties of deterministic walks in random media.
Given $N$ points (cities) distributed in a $d$-dimensional space, a possible question to ask is how efficiently these cities can be visited by a walker who follows a simple movement rule.
In the \emph{travelling salesman problem}, one searches for the shortest closed path, which passes through each city exactly once.
This problem has been extensively studied.
In particular, if the coordinates of the cities are distributed following a uniform deviate, results concerning the statistics of the shortest paths have been obtained analytically~\cite{percus:1996, percus:1997, percus:1999}.
To tackle this problem, one necessarily needs to know the coordinates of all the cities in advance.
Global system information must be at the walker's disposal.
Nevertheless, other situations may be envisaged.
For instance, suppose that only local information about the neighborhood ranking of the current city is at the walker's disposal.
In this case, one can think of several deterministic and stochastic strategies to maximize the number of visited cities, while trying to minimize the travelled distance.
Our aim is to study the way a walker explores the medium following the deterministic rule of going to the nearest point, which has not been visited in the previous $\mu$ discrete time steps.
We call this partially self-avoiding walk the \emph{deterministic tourist walk}~\cite{lima_prl2001,stanley_2001,kinouchi:1:2002}.
Each trajectory, produced by this deterministic rule, has an initial transient of length $t$ and ends in a cycle of period $p$.
Both transient time and cycle period can be combined in the joint distribution $S_{\mu, d}^{(N)}(t,p)$.
The $\mu = 0$ deterministic tourist walk is trivial, the walker does not move at each time step.
The transient and period joint distribution is simply $S^{(N)}_{0, d}(t, p) = \delta_{t,0} \delta_{p,1}$, where $\delta_{i,j}$ is the Kronecker delta.
With memory $\mu = 1$, the walker must leave the current city at each time step and the transient time and period joint distribution has been obtained for $N \gg 1$~\cite{tercariol_2005}.
The $\mu = 1$ walks thus do not lead to exploration of the random medium.
This is due to very short transient times: the tourist gets trapped in pairs of cities that are mutually nearest neighbors.
Interesting phenomena occur when $\mu > 1$ is considered.
In this case, the cycle distribution is no longer peaked at $p_{min} = \mu + 1$, but presents a whole spectrum of cycles with period $p \in [\mu+1, N]$, with possible power-law decay~\cite{lima_prl2001,stanley_2001,kinouchi:1:2002}.
These cycles have been used for data clustering~\cite{campiteli_2006}, texture analysis in images~\cite{backes_2006,bruno_2006}, modeling primate foraging~\cite{boyer_2006,boyer_2004} and thesaurus analysis~\cite{kinouchi:1:2002}.
It is interesting to point out that, for 1D systems, determinism imposes serious restrictions.
For any $\mu$ value, cycles with period $p \in [2\mu+1, 2\mu+3]$ are forbidden.
Additionally, for $\mu = 2$ all odd periods except $p_{min} = 3$ are forbidden.
Also, the heavy tail of the period marginal distribution $S_{\mu, 1}^{(N)}(p) = \sum_t S_{\mu, 1}^{(N)}(t,p)$ may lead to frequently visited large-period cycles~\cite{lima_prl2001}.
This allows system exploration even for small memory values ($\mu \ll N$).
In this letter, we consider the deterministic tourist walk along 1D random systems.
Through a simple (though not trivial) derivation, we show the existence of a transition in the walker's exploratory behavior at a critical memory $\mu_1 = \ln N / \ln 2$ in a narrow memory range of width $\varepsilon = e/\ln 2$.
This transition splits the walker's behavior in essentially three regimes.
Also, we show numerically that the final-position distribution of these walks is an estimator of the solution of the fractional non-linear diffusion equation~\cite{metzler_2000,anteneodo_2005}.
For $\mu < \mu_1 - \varepsilon/2$, the walker gets trapped in cycles and for $\mu > \mu_1 + \varepsilon/2$, the walker visits all the cities.
In both cases, the walker presents normal diffusion behavior.
Nevertheless, for $\mu \sim \mu_1 \pm \varepsilon/2$ the walker superdiffuses, indicating that $\mu_1$ is an optimum memory for maximum exploration and minimum movement.
A random static semi-infinite medium is constructed from points randomly and uniformly distributed along a semi-infinite line with mean point density $r$.
The distances $x$ between consecutive points follow an exponential probability density function (pdf): $f(x) = r e^{-r x}$, for $x \ge 0$ and $f(x) = 0$, otherwise.
Consider now the tourist dynamics with a walker who leaves from the city $s_1$, placed at the origin of the line segment.
The probability $S_{\mu, si}^{(\infty)}(n)$ for the walker to explore $n$ distinct cities can be derived as follows.
The first $\mu+1$ cities are indeed explored, because the memory $\mu$ prevents the walker from turning back.
Thus, the distances $x_1$, $x_2$, \ldots, $x_\mu$ may assume any value in the interval $[0, \infty)$.
The following steps are uncertain.
The walker may move either forward to a new city or backward to an already visited city outside the memory window.
In analogy to the geometric distribution, it is useful to define the exploration probability $\tilde{q}_j$ as the probability for the walker to explore a new city at the $j$-th uncertain step.
Thus, $\tilde{q}_1$ can be obtained imposing that the distance $x_{\mu+1}$ must be less than the sum $y_1=\sum_{k=1}^\mu x_k$.
Since the variables $x_1$, $x_2$, \ldots, $x_\mu$ are independent and identically distributed with exponential pdf, $y_1$ has a gamma pdf.
Hence $\tilde{q}_1 = \int_0^\infty dy_1 r^\mu y_1^{\mu-1} e^{-ry_1} /\Gamma(\mu) \int_0^{y_1} dx_{\mu+1} re^{-rx_{\mu+1}} = 1-2^{-\mu}$.
The exploration probability $\tilde{q}_2$ for the second uncertain step is not exactly equal to $\tilde{q}_1$.
Since the distance $x_{\mu+1}$ is constrained to the interval $[0, y_1]$, the variables $x_2$, $x_3$, \ldots, $x_{\mu+1}$ are not all independent, and consequently $y_2=\sum_{k=2}^{\mu+1} x_k$ does not follow exactly a gamma pdf.
However, for $\mu \gg 1$, $x_{\mu+1}$ rarely exceeds $y_1$ [this probability is just $P(x_{\mu+1}>y_1) = 1-\tilde{q}_1=2^{-\mu}$, meaning that a weak correlation is present].
Therefore, one can make an approximation assuming that $y_2$ follows a gamma pdf and considering $\tilde{q}_2 \approx \tilde{q}_1$.
The same argumentation can be used for the succeeding steps.
When the city $s_n$ is reached, the walker must turn back, finishing the medium exploration.
Taking $\tilde{q}_1$ as the common value $\tilde{q}$ of all exploration probabilities, the return probability is $\tilde{p} = 1-\tilde{q} = 2^{-\mu}$ and one has: $S_{\mu, si}^{(\infty)}(n) = 2^{-\mu} (1-2^{-\mu})^{n-\mu-1}$.
Notice that $r$ has been eliminated, indicating that the number of explored cities does not depend on the medium density.
Since the memory $\mu$ assures that the walker visits at least $\mu+1$ cities, it is convenient to define the number of extra explored cities as $n_e = n-\mu-1$.
In this variable, $S_{\mu, si}^{(\infty)}$ begins at $n_e=0$ for all $\mu$ values and one has $\mbox{E}(n_e) = 2^\mu-1$, which may be interpreted as the characteristic reach of the walk, and $\mbox{Var}(n_e) = 2^{2\mu}-2^\mu$.
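These closed-form results are easy to check numerically. The sketch below (Python with NumPy; the density $r$, the memory $\mu$ and the number of samples are arbitrary illustrative choices) estimates $\tilde{q}_1$ by direct sampling and $\mbox{E}(n_e)$ under the geometric approximation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
r, mu, trials = 1.0, 5, 200_000

# q~_1 = P(x_{mu+1} < y_1), where y_1 is a sum of mu iid Exp(r) gaps,
# i.e. a gamma variable with shape mu and scale 1/r.
y1 = rng.gamma(shape=mu, scale=1.0 / r, size=trials)
x_next = rng.exponential(scale=1.0 / r, size=trials)
print("q~_1:", (x_next < y1).mean(), "analytic:", 1 - 2.0 ** -mu)

# Extra explored cities n_e under the geometric approximation:
# forward moves succeed independently with probability 1 - 2^-mu.
n_e = rng.geometric(2.0 ** -mu, size=trials) - 1
print("E(n_e):", n_e.mean(), "analytic:", 2 ** mu - 1)
\end{verbatim}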
The finite disordered medium is constructed by $N$ points, whose coordinates are randomly (uniform deviate) generated in the interval $[0, 1]$.
The exploration and return probabilities obtained for the semi-infinite medium may also be applied to this finite medium.
The equivalence between these two media can be shown restricting the semi-infinite medium length to the first $N$ points and normalizing it to fit in the interval $[0, 1]$.
For both media, the abscissas of the ranked points follow a beta pdf.
The probability $P_N(\mu)$ for the exploration of the whole $N$-point medium can be derived by noticing that the walker must move forward at all $N-(\mu+1)$ uncertain steps and that, when the last city $s_N$ is reached, there is no need to impose a return step.
Therefore
\begin{eqnarray}
\label{Eq:Percolacao}
P_N(\mu) = \tilde{q}^{\, n_e} = \left(1-2^{-\mu}\right)^{N-\mu-1} \; ,
\end{eqnarray}
which is plotted in Fig.~\ref{Fig:Percolacao}(a).
In Fig.~\ref{Fig:Percolacao}(b) one sees that the probability of full medium exploration increases rapidly from 0 to 1, in a well defined transition region.
Considering $N \gg \mu$, Eq.~\ref{Eq:Percolacao} may be approximated by $P_N(\mu) \approx (1-2^{-\mu})^N$.
To obtain the critical memory $\mu_1$, one considers the inflexion point, leading to
\begin{eqnarray}
\label{Eq:MemCritica}
\mu_1 & = & \log_2 N \; ,
\end{eqnarray}
which is the number of bits to represent the system size.
For all $N$, the curve slope at $\mu_1$ is $\ln 2/e$ [Fig.~\ref{Fig:Percolacao}(b)], so that the transition region has a constant width
\begin{eqnarray}
\label{Eq:RegTrans}
\varepsilon = \frac{e}{\ln 2} \approx 3.92 \; .
\end{eqnarray}
This indicates that as $N$ increases, $\mu_1$ slowly increases, but the width of the transition region is independent of $N$, so that a sharp transition is found for $N \gg 1$.
The comparison between the analytical result and computer simulation is depicted in Fig.~\ref{Fig:Percolacao}(b).
Observe that the approximation employed leads to satisfactory results even for small $N$.
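The comparison can be reproduced qualitatively with the minimal Monte Carlo sketch below (Python/NumPy). It implements only the forward-move criterion $x_{\mu+j} < \sum_{k=j}^{\mu+j-1} x_k$ that underlies the derivation of Eq.~\ref{Eq:Percolacao}, not the full tourist dynamics after a backward move; $N$, $\mu$ and the number of maps are illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def fully_explores(N, mu, rng):
    """One map: N uniform points on [0, 1]; the walk explores everything
    iff every forward gap is shorter than the sum of the mu previous gaps."""
    g = np.diff(np.sort(rng.random(N)))                   # N-1 consecutive gaps
    win = np.convolve(g, np.ones(mu), mode="valid")[:-1]  # sums of mu consecutive gaps
    return bool(np.all(g[mu:] < win))

N, M = 1000, 2000
for mu in (5, 10, 15):
    p_mc = np.mean([fully_explores(N, mu, rng) for _ in range(M)])
    p_th = (1 - 2.0 ** -mu) ** (N - mu - 1)
    print(f"mu={mu:2d}  simulated {p_mc:.3f}  analytic {p_th:.3f}")
\end{verbatim}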
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=-90,width=.7\columnwidth]{Perc3D.eps}
{\bf (a)}
\includegraphics[angle=-90,width=.7\columnwidth]{Perc.eps}
{\bf (b)}
\caption{
{\bf (a)} Probability of full exploration (Eq.~\ref{Eq:Percolacao}) as a function of $\mu$ and $N$ for 1D deterministic tourist walks.
One sees the abrupt transition between the localized (bottom) and extended (upper) regimes.
{\bf (b)} Projections of Eq.~\ref{Eq:Percolacao} for some fixed $N$ values.
Analytical results are satisfactory when compared to numerical simulations ($M=10\,000$ maps for each pair of $N$ and $\mu$ values), even for small $N$ and $\mu$ values.
Error bars are smaller than symbol size.
The transition points $\mu_1$ are given by Eq.~\ref{Eq:MemCritica}, which are weakly dependent on $N$ but all of them have the same constant dispersion $\varepsilon \sim 4$ (Eq.~\ref{Eq:RegTrans}).}
\label{Fig:Percolacao}
\end{center}
\end{figure}
Up to now, we have studied the deterministic walks in 1D disordered media with open boundary condition.
Now, let us focus on the diffusion process.
This tells us about the bulk characteristics of the medium exploration.
Using periodic boundary conditions for each map, the walkers start to move, following the tourist rule, from the most central point of the interval $[-1/2, 1/2]$.
After $N$ steps, the walkers enter the periodic part, which can be interpreted as the steady state of the exploratory behavior (no new visited cities).
For a given reduced memory $\tilde{\mu} = \mu/ \mu_1$ and after $t=N$ steps, the final position pdf $\rho(x, t)$ has been estimated by $P_{\tilde{\mu}}(x) = n(x)/(M \Delta x)$, where $M$ is the number of maps and $n(x)$ is the number of travelers with final position between $x$ and $x+\Delta x$.
The normalized distribution is shown in Fig.~\ref{Fig:SupNormalizada}, where one sees the huge dispersion around $\tilde{\mu} = 1$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=.6\columnwidth, angle=-90]{SupNormalizada.eps}
\caption{Distribution of the final position after $t = N$ steps in the 1D tourist walk as a function of $\mu$.
Numerical simulations have been performed with $\Delta x = 1/100$ and $M = 10^5$ maps with $N = 10^4$ cities each, using periodic boundary conditions.
The memory $\mu$ varied from $1$ to $50$.}
\label{Fig:SupNormalizada}
\end{center}
\end{figure}
We have shown numerically that $P_{\tilde{\mu}}(x) \sim \rho(x, t)$ is the solution of the fractional non-linear diffusion equation~\cite{metzler_2000,anteneodo_2005}:
\begin{equation}
\label{Eq:Difusao}
\partial_t \rho(x, t) = D \partial_x^{2+\xi} \rho^{2-q}(x, t) \; .
\end{equation}
Here $\xi$ is the fractionality parameter, which is related to the divergence of the second moment of the step-length pdf; $q$ is the non-linearity parameter, which expresses the influence of correlations between step lengths; and $D$ is the diffusion coefficient.
For $|\mu-\mu_1| > \varepsilon/2$, $\rho(x, t)$ has gaussian ($\xi=0$, $q=1$) shapes indicating normal diffusion.
Further, as $|\mu-\mu_1|$ increases, the variance $\sigma^2$ tends to zero and $\rho(x, t)$ becomes a Dirac delta function: $S_{\mu = 1, 1}^{(N)}(p) = \delta_{p,2}$ or $S_{\mu > \mu_1, 1}^{(N)}(p) \approx \delta_{p,N}$.
Anomalous diffusion occurs for $|\mu - \mu_1| < \varepsilon/2$.
For $|\mu - \mu_1|$ smaller than, but close to, $\varepsilon/2$, the walks have a finite second moment.
The $q$ parameter becomes important and the solutions of Eq.~\ref{Eq:Difusao}, with $\xi=0$, are the generalized $q$-gaussians~\cite{anteneodo_2005}: $G_{q}(x) = \gamma_q e_q(-x^2/a_q^2)/\sqrt{a_q^2 \pi}$, where $e_q(x) = [1+(1-q)x]^{1/(1-q)}$ is a generalization for the exponential function, $a_q^2(t) = [2Dt (2 - q)(3 - q) \gamma^{q-1}]^{2/(3 - q)}$ is related to the variance [$\sigma_q^2 = a_q^2/(5-3q)$] of $\rho$ and normalization is possible only for $q<3$:
$\gamma_q = \sqrt{|q-1|} \{ \Gamma[ 1/(q-1) - \delta] / \Gamma \{ (3-q)/[2(q-1)] - \delta \} \}^{1-2\delta}$,
where $\delta=1$ for $q<1$ (subdiffusion) and $\delta=0$ for $1 \le q < 3$ (superdiffusion).
If $q \rightarrow 1^+$, then $\gamma \rightarrow 1$, and one has normal diffusion.
The variance is finite only for $q < 5/3$; otherwise the $q$-gaussians give way to L\'{e}vy $\alpha$-stable pdfs ($\xi \ne 0$), which are the solutions of Eq.~\ref{Eq:Difusao} around $|\mu-\mu_1| \sim 0$.
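For concreteness, the superdiffusive branch ($1 < q < 3$, $\delta = 0$) of these $q$-gaussians can be coded directly. The sketch below (Python/NumPy, with illustrative $q$ values and $a_q = 1$) checks the normalization and, for $q < 5/3$, the variance relation $\sigma_q^2 = a_q^2/(5-3q)$:
\begin{verbatim}
import numpy as np
from math import gamma, sqrt, pi

def e_q(x, q):
    # generalized exponential; reduces to exp(x) in the limit q -> 1
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def G_q(x, q, a=1.0):
    # q-gaussian of the text for 1 < q < 3 (delta = 0)
    g_q = sqrt(q - 1.0) * gamma(1.0 / (q - 1.0)) / gamma((3.0 - q) / (2.0 * (q - 1.0)))
    return g_q * e_q(-x ** 2 / a ** 2, q) / sqrt(a ** 2 * pi)

x = np.linspace(-200, 200, 400_001)
dx = x[1] - x[0]
for q in (1.2, 1.5):
    dens = G_q(x, q)
    print(f"q={q}: norm={dens.sum() * dx:.3f}  "
          f"var={np.sum(x ** 2 * dens) * dx:.3f}  1/(5-3q)={1 / (5 - 3 * q):.3f}")
\end{verbatim}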
To fit $P_{\tilde{\mu}}(x)$ to the theoretical model $\rho(x, t)$, we have computed its variance for each $\tilde{\mu}$ and used the maximum likelihood estimation method to find $q$ (Inset of Fig.~\ref{Fig:VarQ}).
One sees that $\sigma^2$ diverges in the transition region presenting a heavy tail if compared to a normal distribution (Fig.~\ref{Fig:SupNormalizada}).
For $\tilde{\mu} \le \log 2$ and $\tilde{\mu} \ge 2.2577$, $P_{\tilde{\mu}}(x)$ shows a gaussian behavior ($q = 1$ and $\xi = 0$).
This is like a random walk model with independent steps and finite variance.
On the one hand, for $\tilde{\mu} < 1$, this is due to the short transient lengths and cycle periods.
The walker excursion is limited to the region near the origin.
On the other hand, for $\tilde{\mu} > 1$, this occurs since all cycles have a period of $N$.
Thus, after $N$ steps, the walker returns to the starting point and the variance vanishes.
This implies that the starting point coordinate follows a gaussian pdf.
In the $q$-gaussian model, $\sigma^2$ diverges at $q = 5/3$.
Despite this result being in accordance with the estimated $q$ values, the model does not fit the data for $ 2 \log 2 < \tilde{\mu} < 2.1072$.
In the region $2 \log 2 < \tilde{\mu} < 3 \log 2 $ the distributions are L\'{e}vy $\alpha$-stable, because their tails are heavier than $q$-gaussian ones.
For $3 \log 2 \le \tilde{\mu} < 2.1072$, the distributions are almost uniform, with a clear central peak.
This behavior is probably due to the periodic boundary conditions.
In short, for $0 \le\tilde{\mu} \le \log 2$ and $\tilde{\mu} > 2.2577$, the tourist follows a traditional diffusion process (gaussian solution).
For $\log 2 < \tilde{\mu} \le 2 \log 2$ and $2.1072 \le \tilde{\mu} \le 2.2577$, it is a non-linear superdiffusion process (anomalous diffusion with $q$-gaussian solution).
In this region the anomalous behavior is obtained since the steps are not all independent.
For $2 \log 2 < \tilde{\mu} < 2.1072$, the process presents another kind of superdiffusion, where the anomalous behavior is strongly due to the fact that the variance is not finite.
In the region $2 \log 2 < \tilde{\mu} < 3 \log 2$ the distributions are L\'{e}vy $\alpha$-stable.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=.7\columnwidth, angle=-90]{varq2.eps}
\caption{On the left-hand axis one reads the variance $\sigma^2$ ($\bullet$) of the curves $P_{\tilde{\mu}}(x)$ (estimator of $\rho$) as a function of the normalized memory $\tilde{\mu}$.
Observe that $\sigma^2$ diverges near the critical memory.
This region is equivalent to the smallest amplitude, but with maximum dispersion of $P_{\tilde{\mu}}(x)$, in Fig.~\ref{Fig:SupNormalizada}.
On the right-hand axis one reads the values of $q$ ($\bigtriangleup$) as a function of $\tilde{\mu}$.
Notice $q \approx 5/3$ in the region where $\sigma^2$ diverges.
{\bf Inset:} $P_{\mu}(x)$ as a function of the final walker position for some memory values.
The displayed curves are $q$-gaussians defined in the text.
}
\label{Fig:VarQ}
\end{center}
\end{figure}
Finally, the tourist rule can be relaxed to a stochastic walk.
Thus, the walker goes to nearer cities with greater probabilities.
These probabilities are given by a one-parameter (inverse of the temperature) exponential distribution.
This situation has been studied for $\mu = 0$~\cite{martinez:1:2004} and $\mu = 1$~\cite{risaugusman:1:2003} and we have detected the existence of a critical temperature separating the localized from the extended regimes.
It would be interesting to combine both stochastic movement (driven by a temperature parameter) and memory ($\mu$) in the tourist walks to achieve a full compromise between medium exploration and distance travelled.
In conclusion, if the walker has critical memory $\mu_1$ the solutions of Eq.~\ref{Eq:Difusao} have infinite variance.
This favors the walker to explore the whole system with minimum displacement, indicating that $\mu_1$ is an optimum memory for an exploratory strategy.
It is intriguing that a simple deterministic system such as this one can present a complex behavior given by the full fractional non-linear diffusion equation and that a small memory value allows the global medium exploration.
The authors thank N. A. Alves and F. M. Ramos for fruitful discussions.
ASM acknowledges CNPq (305527/2004-5) and FAPESP (2005/02408-0) for support.
| {
"timestamp": "2006-12-07T14:54:27",
"yymm": "0612",
"arxiv_id": "cond-mat/0612188",
"language": "en",
"url": "https://arxiv.org/abs/cond-mat/0612188",
"abstract": "Consider $N$ points randomly distributed along a line segment of unitary length. A walker explores this disordered medium moving according to a partially self-avoiding deterministic walk. The walker, with memory $\\mu$, leaves from the leftmost point and moves, at each discrete time step, to the nearest point, which has not been visited in the preceding $\\mu$ steps. We have obtained analytically the probability $P_N(\\mu) = (1 - 2^{-\\mu})^{N - \\mu - 1}$ that all $N$ points are visited in this open system, with $N \\gg \\mu \\gg 1$. The expression for $P_N(\\mu)$ evaluated in the mentioned limit is valid even for small $N$ and leads to a transition region centered at $\\mu_1 = \\ln N/\\ln 2$ and with width $\\epsilon = e/\\ln2$. For $\\mu < \\mu_1 - \\epsilon/2$, the walker gets trapped in cycles and does not fully explore the system. For $\\mu > \\mu_1 + \\epsilon/2$ the walker explores the whole system. In both cases the walker presents diffusive behavior. Nevertheless, in the intermediate regime $\\mu \\sim \\mu_1 \\pm \\epsilon/2$, the walker presents anoumalous diffusion behavior. Since the intermediate region increases as $\\ln N$ and its width is constant, a sharp transition is obtained for one-dimensional large systems. The walker does not need to have full memory of its trajectory to explore the whole system, it suffices to have memory of order $\\mu_1$.",
"subjects": "Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech)",
"title": "Optimum exploration memory and anomalous diffusion in deterministic partially self-avoiding walks in one-dimensional random media",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969698879861,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7099044248645333
} |
https://arxiv.org/abs/2202.08522 | Recovering Unbalanced Communities in the Stochastic Block Model With Application to Clustering with a Faulty Oracle | The stochastic block model (SBM) is a fundamental model for studying graph clustering or community detection in networks. It has received great attention in the last decade and the balanced case, i.e., assuming all clusters have large size, has been well studied. However, our understanding of SBM with unbalanced communities (arguably, more relevant in practice) is still limited. In this paper, we provide a simple SVD-based algorithm for recovering the communities in the SBM with communities of varying sizes. We improve upon a result of Ailon, Chen and Xu [ICML 2013; JMLR 2015] by removing the assumption that there is a large interval such that the sizes of clusters do not fall in, and also remove the dependency of the size of the recoverable clusters on the number of underlying clusters. We further complement our theoretical improvements with experimental comparisons. Under the planted clique conjecture, the size of the clusters that can be recovered by our algorithm is nearly optimal (up to poly-logarithmic factors) when the probability parameters are constant. As a byproduct, we obtain an efficient clustering algorithm with sublinear query complexity in a faulty oracle model, which is capable of detecting all clusters larger than $\tilde{\Omega}({\sqrt{n}})$, even in the presence of $\Omega(n)$ small clusters in the graph. In contrast, previous efficient algorithms that use a sublinear number of queries are incapable of recovering any large clusters if there are more than $\tilde{\Omega}(n^{2/5})$ small clusters. | \section{Introduction}
Graph clustering (or community detection) is a fundamental problem in computer science and has wide applications in almost all domains, including biology, social science and physics. Among others, the stochastic block model (SBM) is one of the most basic models for studying graph clustering, offering both a theoretical arena for rigorously analyzing the performance of different types of clustering algorithms, and synthetic benchmarks for evaluating these algorithms in practice. Since the 1980s (e.g., \citep{holland1983stochastic,bui1987graph,dyer1989solution,boppana1987eigenvalues}), there has been much progress towards the understanding of the statistical and computational tradeoffs for community detection in SBM with various parameter regimes. We refer to the recent survey \citep{abbe2017community} for a list of such results.
In this paper, we focus on a very basic version of the stochastic block model.
In this model, there are $k$ unknown disjoint latent clusters (or communities) $V_1,\dots,V_k$. We sample a random graph with vertices $V = V_1\cup\dots\cup V_k$ as follows: for each pair of vertices $u$ and $v$, if $u$ and $v$ are from the same community, we add an edge between them with probability $p$, otherwise we add an edge between them with probability $q$. All edges are sampled independently. We always assume that $p>q$ and let $n=|V|$. We call such a model SBM($n,k,p,q$). We aim to design algorithms to recover communities when $k$ is large and the gap $p-q$ is small. It is well known \citep{abbe2015community} that if
$n = o \left(\frac{k \log k}{(p-q)^{2}} \right)$,
then it is information theoretically impossible to recover the communities. In the past decades, many algorithms were developed with various trade-offs
between $k,p,q$ and $n$. For example, \cite{vu2018simple}, built upon the work of \cite{mcsherry2001spectral}, gave an algorithm that recovers all communities when $n =\Omega(\frac{k^2\cdot p\cdot\log n}{(p-q)^2})$ if $|V_1|=\dots = |V_k|$.
Most of previous algorithms (e.g. \citep{mcsherry2001spectral,bollobas2004max,chen2012clustering,chaudhuri2012spectral,abbe2017community,vu2018simple,cole2019nonuniform}) only work if \emph{all} of the latent clusters are assumed to be sufficiently large, i.e., for each $j$, $|V_{j}| = \tilde{\Omega}(\sqrt{n})$. From a theoretical perspective, one may wonder if one can recover the community structure from SBM if there exist clusters of small sizes. This was termed by \cite{ailon2013breaking} as ``\emph{small cluster barrier}'' for graph clustering. From a practical perspective, many real-world graphs may have many communities of different sizes, that is, large and small clusters co-exist in these graphs. This motivates us to investigate how to recover the communities in SBM if the latent communities have very different sizes.
\cite{ailon2013breaking} proposed an algorithm that recovers all large latent clusters (even in the presence of small latent clusters) in a random graph sampled from the SBM. However, their algorithm needs a \emph{size-gap assumption} about the size of communities (see Corollary 4 in \citep{ailon2013breaking} and our Section \ref{sec:relatedwork}). In this paper, we provide a singular value decomposition (SVD) based algorithm, \emph{without} this size-gap assumption, for recovering large latent clusters. Our algorithm is built upon \citep{vu2018simple} and \citep{mcsherry2001spectral}. In addition, the tradeoff of the parameters in our algorithm is nearly optimal up to polylogarithmic factors for almost all regimes under the \emph{Kesten-Stigum (KS) threshold} conjecture (see Section \ref{sec:whyourresultisinteresting} for more details).
Furthermore, by using a new connection between SBM and a clustering problem in a faulty oracle model observed by \cite{PZ21:clustering}, we provide an improved time-efficient algorithm for the latter problem as well. Again, assuming the KS-threshold conjecture, our new algorithm reaches a nearly optimal query complexity (up to polylogarithmic factors) for clustering in this faulty oracle model. We elaborate more on this in the following.
\subsection{Our contributions}
\paragraph{Recovering the largest cluster in SBM}
We first give a new algorithm that efficiently finds the largest cluster under a mild assumption.
\begin{theorem}[Recovering the largest cluster]\label{thm:mainSBM}
Let $G$ be a graph that is generated from the SBM($n,k,p,q$) model.
Then there exists a polynomial time algorithm \textsc{Cluster} (i.e. Algorithm \ref{alg:mainalgorithm}) that correctly recovers all the clusters of
size at least $\frac{n}{1.1k}$ in $G$, with probability $1-\frac{1}{n^2}$, if the following condition is satisfied:
\begin{eqnarray}
n \geq C\frac{(p+q) k^2 + {k^2\log n}}{(p-q)^2},
\label{cond:recovery}
\end{eqnarray}
where $C$ is some sufficiently large constant.
\end{theorem}
Note that it is guaranteed that the algorithm always finds the largest cluster, as its size is at least $\frac{n}{k}>\frac{n}{1.1k}$. We call the above inequality (\ref{cond:recovery}) a \emph{recovery condition} for SBM($n,k,p,q$). We remark that our algorithm does not put any extra restriction on the size of the smallest cluster, unlike \citep{vu2018simple,mcsherry2001spectral}.
In particular, we break the small cluster barrier, only assuming that the recovery condition is satisfied. We stress that this is a very mild assumption, due to a famous conjecture by \cite{decelle2011asymptotic}.
We refer to Section \ref{sec:whyourresultisinteresting} for more detailed discussions.
\paragraph{Recovering more clusters} Once we have found all the $k_1$ large clusters of size at least $\frac{n}{k}$, say $V_1,\cdots,V_{k_1}$, by using the algorithm in Theorem \ref{thm:mainSBM}, we can remove $V_1,\cdots,V_{k_1}$ and all the edges incident to them from $G$, and obtain a new graph $G'=(V\setminus \cup_{i=1}^{k_1}V_i, E')$. Note that $G'$ is a graph generated from SBM($n',k',p,q$) where $n'=n-|\cup_{i=1}^{k_1}V_i|$ and $k'=k-k_1$. Thus, we can invoke the above algorithm on $G'$ to find more clusters, if it is possible. That is, the corresponding recovery condition is satisfied, i.e., $n' \geq C\frac{(p+q) (k')^2 + {(k')^2\log n}}{(p-q)^2}$. In general, we can repeat the above process to find all large clusters until we reach a point where the recovery condition no longer holds. Formally, we introduce the following definition of \emph{prominent} clusters.
\begin{definition}[Prominent clusters]\label{def:prominent}
Let $V_{1},\dots, V_{k}$ be $k$ latent clusters and $s_1,\dots,s_k$ be the size of each cluster. WLOG, we assume that $s_1\geq\dots\geq s_k$. Let $k'$ be the smallest number such that
\[
s_{k'+1} + \dots + s_{k} < C\cdot\frac{ (p+q)(k-k')^{2}+ (k-k')^2 \log n}{(p-q)^2}.
\]
We call $V_1,\dots,V_{k'}$ the \emph{$C$-prominent} clusters of $V$.
\end{definition}
By the above definition, Theorem \ref{thm:mainSBM}, and the aforementioned algorithm, which we call \textsc{RecursiveCluster}, we can efficiently recover all these prominent clusters as summarized in the following corollary. Note that $s_k$ is the size of the smallest cluster.
\begin{corollary}[Recovering all the prominent communities]\label{corollary:SBMlargeclusters}
Let $G$ be a graph that is generated from the SBM($n,k,p,q$) model. Assume that $s_{k}\geq \log n$ and $k=o((n/\log n)^{1/2})$.
Then there exists a polynomial time algorithm \textsc{RecursiveCluster} that correctly recovers all the $C$-prominent clusters of
$G$, with probability $1-o_n(1)$.
\end{corollary}
Here the condition $s_k \ge \log n$ comes from the analysis of~\citep{PZ21:clustering}, one of the building blocks of our algorithm.
We further remark that our algorithm can be easily adapted to work for a slightly more general model that adds an edge between vertices from clusters $i$ and $j$ with probability $p_{ij}$, for any $1\leq i,j\leq k$. This can be done by appropriately changing the distance parameter $\Delta_b$ and variance parameter $\sigma$ in our algorithm, similarly to \citep{mcsherry2001spectral,vu2018simple}. Here we omit the details of this generalization as we would like to focus on the most basic case SBM($n,k,p,q$) and its application to the problem of clustering with the faulty oracle. Besides the application to the clustering problem with a faulty oracle, we explain in Section \ref{sec:whyourresultisinteresting} why the above algorithm is interesting simply from the perspective of studying SBM.
\paragraph{An algorithm for clustering with a faulty oracle}
We apply the above algorithm to obtain an improved algorithm for a clustering problem in a faulty oracle model (or noisy clustering model), which was proposed by \cite{mazumdar2017clustering}. The model is defined as follows: Given a set $V=[n]:=\{1,\cdots,n\}$ of $n$ items which contains $k$ latent clusters $V_1,\cdots,V_k$ such that $\cup_i V_i=V$ and for any $1\leq i<j\leq k$, $V_i\cap V_j=\emptyset$. The clusters $V_1,\dots,V_k$ are unknown. We wish to recover them by making pairwise queries to an oracle $\ensuremath{\mathcal{O}}$, which answers if the queried two vertices belong to the same cluster or not. This oracle gives correct answer with probability $\frac{1}{2}+\frac{\delta}{2}$, where $\delta\in(0,1)$ is a \emph{bias} parameter. Formally, let $\tau$ be a function $\tau: V\times V\to \{\pm 1\}$ such that $\tau(u,v)=1$ if $u,v$ belong to the same cluster and $\tau(u,v)=-1$ if $u,v$ belong to different clusters. For any $u,v$, let $\eta_{u,v}\in \{\pm 1 \}$ be a random noise in the edge observation such that $\ensuremath{\mathrm{E}}[\eta_{u,v}]=\delta$. The noises $\eta_{u,v}$ are independent for all pairs $u,v\in V$. Then the oracle $\ensuremath{\mathcal{O}}$ returns the sign of $\tau(u,v)\eta_{u,v}$ when the pair $u,v$ is queried (and the sign `$+$' or `$-$' indicates whether $u,v$ belongs to the same cluster or not).
It is assumed that, when the same question is asked repeatedly, the oracle $\ensuremath{\mathcal{O}}$ always returns the same answer. (This was known as \emph{persistent noise} in the literature; see e.g. \citep{goldman1990exact}.) Our goal is to recover the latent clusters \emph{efficiently} (i.e., within polynomial time) with high probability by making as few queries to the oracle $\ensuremath{\mathcal{O}}$ as possible. Due to connections to applications in \emph{entity resolution} (also known as the \emph{record linkage}) problem, the signed edges prediction problem in a social network, and the correlation clustering problem, this model has received increasing attention recently (see Section \ref{sec:relatedwork}). We give the following algorithm
for the problem of clustering with a faulty oracle.
\begin{theorem}\label{thm:faultyoracle}
In the faulty oracle model with parameters $n,k,\delta$, there exists a polynomial time algorithm $\textsc{NoisyClustering}$
that
recovers all the clusters of size $\Omega(\frac{k^2\log n}{\delta^2})$
with success probability $1-o_n(1)$. The total number of queries that $\textsc{NoisyClustering}$ performs to the faulty oracle $\ensuremath{\mathcal{O}}$ is $O(\frac{n k\log^2 n}{\delta^2}+\frac{k^{4}\log^3 n}{\delta^4})$.
%
\end{theorem}
The above algorithm improves upon a sequence of previous work \citep{mazumdar2017clustering,green2020clustering,PZ21:clustering}. \cite{mazumdar2017clustering} gave an \emph{inefficient} algorithm that runs in quasi-polynomial time, performs $O(\frac{nk\log n}{\delta^2})$ queries to the oracle and recovers all the
clusters of size $\Omega(\frac{\log n}{\delta^2})$. The query complexity of this algorithm nearly matches an information-theoretic lower bound $\Omega(\frac{nk}{\delta^2})$ presented by the same authors.
Towards efficient algorithms, they designed another algorithm that runs in polynomial time, makes $O(\frac{nk\log n}{\delta^2}+\min\{\frac{nk^2\log n}{\delta^4}, \frac{k^5\log^2 n}{\delta^8} \})$ queries and recovers all clusters of size at least $\Omega(\frac{k\log n}{\delta^4})$. Later, \cite{green2020clustering} gave an efficient algorithm with improved query complexity $O(\frac{n\log n}{\delta^2}+\frac{\log^2 n}{\delta^6})$ for $k=2$. Recently, \cite{PZ21:clustering} achieved a time-efficient algorithm with query complexity $O(\frac{nk\log n}{\delta^2}+\frac{k^{10}\log^2 n}{\delta^4})$
that recovers all clusters of size $\Omega \left( \frac{k^4 \log n}{\delta^2} \right)$.
Under the KS-threshold conjecture, the dependency on $\delta$ in their algorithm is likely to be almost optimal but the dependency on $k$ is sub-optimal (see the second paragraph of Section 1.3 in \citep{PZ21:clustering}). Our algorithm has nearly optimal dependency on both $k$ and $\delta$ (up to polylogarithmic factors).
\subsection{More discussions on Theorem \ref{thm:mainSBM}}\label{sec:whyourresultisinteresting}
Now we give a bit more discussion on Theorem \ref{thm:mainSBM}.
\paragraph{Kesten-Stigum (KS) threshold} Let us first mention a striking conjecture by Decelle, Krzkala, Moore and Zdeborov{\'a} \citep{decelle2011asymptotic}, which is based on deep, non-rigorous ideas from statistical physics.
\begin{conjecture}[KS-threshold conjecture; \citep{decelle2011asymptotic,mossel2015reconstruction}]
Let $G$ be a graph generated from SBM($n,k,p,q$) such that each cluster has size $\frac{n}{k}$. Then for any $k\geq 2$, it is possible to efficiently cluster in a way correlated with the true partition if and only if
\[
n\geq \frac{k(p+(k-1)q) }{(p-q)^2}
\]
The above inequality is also referred to as the \emph{Kesten-Stigum (KS) threshold}.
\end{conjecture}
In comparison to our algorithm in Theorem \ref{thm:mainSBM}, we are able to recover a true latent cluster as long as
$ n \geq C\frac{(p+q) k^2 + {k^2\log n}}{(p-q)^2}.$
If we assume $p=O(q \log^c n)$
and $q=\Omega \left( \frac{1}{\log^{c'} n}\right)$ for any constant $c,c'>0$ then
\begin{align*}
&C\frac{(p+q) k^2 + {k^2\log n}}{(p-q)^2}
= O(\frac{k^2q \log^c n +k^2\log n }{(p-q)^2})
=O(\frac{k^2 q \cdot (\log n)^{c+c'+1} }{(p-q)^2})
\end{align*}
In this regime, the KS-threshold simplifies to $\Theta(\frac{(p+q) k + k^2 q}{(p-q)^2})=\Theta(\frac{k^2q\cdot \log^c n}{(p-q)^2})$, and thus the tradeoff between the parameters $n,k,p,q$ in our algorithm almost reaches the KS-threshold, up to an extra $O((\log n)^{O(1)})$ factor. Notice that in the faulty oracle model, we mainly focus on the regime where $p, q= \Theta(1)$ (see Section \ref{section: faultyoracle}), so we reach a nearly optimal bound in this model.
\paragraph{Comparison to previous work.}
In
\citep{ailon2013breaking}, the authors gave an algorithm (Corollary 4 in \citep{ailon2013breaking}) recovering a large community with the following size-gap assumption,
\begin{itemize}
\item there is an $\alpha\geq \Omega\left(\frac{\sqrt{p(1-q)n}}{p-q}\right)$ such that none of the cluster sizes falls in the interval $(\alpha,\Theta(\alpha \log^2 n))$.
\end{itemize}
The algorithm in \citep{ailon2013breaking} has to exhaustively search for such a gap, and then apply an SDP-based algorithm to find a large cluster.
In comparison to the above result, we no longer need the size-gap assumption or the minimal size assumption. As a consequence, we provide an improved and simpler algorithm.
\subsection{Other related work}\label{sec:relatedwork}
The model for clustering with a faulty oracle captures some applications in \emph{entity resolution} (also known as the \emph{record linkage}) problem \citep{fellegi1969theory,mazumdar2017theoretical}, the signed edges prediction problem in a social network \citep{leskovec2010predicting,mitzenmacher2016predicting} and the correlation clustering problem \citep{bansal2004correlation}. We refer to references \citep{mazumdar2017clustering,green2020clustering,PZ21:clustering} for more discussions of the motivations for this model.
\section{Recovering the Largest Cluster in SBM}
\label{sec: algorithm}
In this section, we describe our algorithm for recovering the largest cluster in a graph $G=(V,E)$ that is generated from SBM($n,k,p,q$).
The starting point of our algorithm is a Singular Value Decomposition (SVD) based algorithm by \cite{vu2018simple}, which in turn is built upon the seminal work of \cite{mcsherry2001spectral}. The main idea underlying this algorithm is as follows: Given the adjacency matrix $A$ of $G$, project the columns of $A$ to the space $A_k$, which is the subspace spanned by the first $k$ left singular vectors of $A$. Then it is shown that for appropriately chosen parameters, the corresponding geometric representation of the vertices satisfies a \emph{separability} condition. That is, there exists a number $r>0$ such that 1) vertices in the same cluster have distance at most $r$ from each other; 2) vertices from different clusters have distance at least $4r$ from each other. This is proven by showing that each projected point $P_{\ensuremath{\mathbf{u}}}$ is close to its center, which is the point $\ensuremath{\mathbf{u}}$ corresponding to a column in the expected adjacency matrix $\ensuremath{\mathrm{E}}[A]$. There are exactly $k$ centers corresponding to the $k$ clusters. Then one can easily find the clusters according to the distances between the projected points.
The above SVD-based algorithm aims to find all the $k$ clusters at once. Since the distance between two projected points depends on the sizes of the clusters they belong to, the parameter $r$ is inherently related to the size $s$ of the smallest cluster. Slightly more formally, in order to achieve the above separability condition, the work \citep{vu2018simple} requires that the minimum distance (which is roughly $\sqrt{s}(p-q)$) between any two centers is at least $\Omega(\sqrt{\frac{n}{s}})$, which essentially leads to the requirement that the minimum cluster size is large, say $\Omega(\sqrt{n})$, in order to recover all the $k$ clusters.
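To make the projection step concrete, here is a minimal NumPy sketch of this idea (not the actual algorithm below, which additionally uses random partitions of the vertex set; all parameters are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(sizes, p, q, rng):
    """Adjacency matrix of SBM(n,k,p,q) with the given cluster sizes."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    upper = np.triu(rng.random((len(labels), len(labels))) < probs, 1)
    return (upper | upper.T).astype(float), labels

A, labels = sample_sbm(sizes=[300, 250, 50], p=0.7, q=0.3, rng=rng)

# Project the columns of A onto the span of its first k left singular vectors.
k = 3
U, s, Vt = np.linalg.svd(A)
P = U[:, :k] @ (U[:, :k].T @ A)        # projected points, one column per vertex

# Vertices of the same (large) cluster should now sit close together.
d_same  = np.linalg.norm(P[:, 0] - P[:, 1])      # both in cluster 0
d_cross = np.linalg.norm(P[:, 0] - P[:, 350])    # clusters 0 and 1
print(f"within-cluster distance {d_same:.1f}  vs  cross-cluster {d_cross:.1f}")
\end{verbatim}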
\paragraph{High-level idea of our algorithm} In comparison to the work \citep{vu2018simple}, we do not attempt to find all the $k$ clusters at once. Instead, we focus on finding large clusters. As in \citep{vu2018simple}, we first project the vertices to points using the SVD. Then instead of directly finding the ``perfect'' clusters from the projected points, we first aim to find a set $S$ that is somewhat close to a latent cluster that is large enough. Formally, we introduce the following definition of \emph{$V_i$-plural} set.
\begin{definition}\label{def:plural}
Let $V_1,\dots,V_k$ be $k$ latent clusters on $[n]$. WLOG, we assume that $|V_1|\geq \cdots \geq |V_k|$. Let $S\subseteq V$. We say that $S$ is \emph{$V_i$-plural} if
\begin{itemize}
\item $|V_i\cap S|\geq \frac{n}{16k}$;
\item For each $\ell\neq i$, $|S\cap V_\ell|\leq \frac{0.001 n}{k}$.
\end{itemize}
\end{definition}
We will find a $V_i$-plural set for any cluster $V_i$ that is large enough, i.e., $|V_i|\geq \frac{n}{1.1k}$. In particular, we will find a $V_1$-plural set, where $V_1$ is the largest cluster, as it always holds that $|V_1|\geq \frac{n}{k}> \frac{n}{1.1k}$. This is done by setting an appropriate distance threshold $L$ to separate points from any two different and \emph{large} clusters. Then by refining Vu's analysis, we can show that for any $u\in V_i$ with $|V_i|\geq \frac{n}{1.1k}$, the set $S$ that consists of all vertices whose projected points belong to the ball surrounding $u$ with radius $L$ is a $V_i$-plural set. The choice of $L$ is crucial for our improvement, which allows us to use recovery condition (\ref{cond:recovery}) without assuming all latent clusters are large (see Section \ref{sec: analysis} for more details).
Now suppose that the $V_i$-plural set $S$ is independent of the edges in $V\times V$ (which is \emph{not} true and we will show how to remedy this later). Then given $S$, we can run a statistical test to identify all the vertices in $V_i$.
To do so,
for any vertex $v \in V$, observe that the subgraph induced by $S \cup \{v\}$ is also sampled from a stochastic block model.
For each vertex $v\in V_i$, the expected number of its neighbors in $S$ is
\[
p\cdot |S\cap V_{i}| + q\cdot |S\setminus V_{i}| = q|S| + (p-q)\cdot |S\cap V_i|\geq q |S| + (p-q) \frac{n}{16k}.
\]
On the other hand, for each vertex $u\in C'$ for some different cluster $C'\neq V_{i}$, the expected number of its neighbors in $S$ is \[p\cdot |S\cap C'| + q\cdot |S\setminus C'| = q|S| + (p-q)\cdot |S\cap C'|\leq
q|S|+ (p-q)\cdot \frac{0.001 n}{k}\] since $|S\cap C'|\leq \frac{0.001 n}{k}$ for any $C'\neq V_i$.
Hence there exists a $\Theta((p-q)\cdot \frac{n}{k})$ gap between them. Thus, as long as $n\geq \Omega(\frac{k^2\log n}{(p-q)^2})$, with high probability, we can identify if a vertex belong to $V_i$ or not by counting the number of its neighbors in $S$.
To address the issue that the set $S$ does depend on the edge set on $V$, we use a two-phase approach: that is, we first randomly partition $V$ into two parts $U,W$ (of roughly equal size), and then find a $V_i$-plural set $S$ from $U$, then use the above statistical test to find all the vertices of $V_i$ in $W$ (i.e., $V \setminus U$), as described in \textsc{IdentifyCluster}($S,W$) (i.e. Algorithm \ref{alg:givenapluralset}).
Note that the output, say $T^1$, of this test is also a $V_i$-plural set. Then we can find all vertices of $V_i$ in $U$ by running the statistical test again using $T^1$ and $U$, i.e., invoking
\textsc{IdentifyCluster}($T^1,U$). Then the union of the outputs of these two tests gives us $V_i$.
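In code, the statistical test and the two-phase use just described look roughly as follows (a NumPy sketch rather than the exact pseudocode of Algorithm \ref{alg:givenapluralset}; here \texttt{A} is the $0/1$ adjacency matrix and \texttt{S}, \texttt{R}, \texttt{U}, \texttt{W} are integer index arrays):
\begin{verbatim}
import numpy as np

def identify_cluster(A, S, R, p, q, n, k):
    """Keep the vertices v in R whose number of edges into S exceeds
    q|S| + (p - q) * 3n / (64k), mirroring IdentifyCluster."""
    S, R = np.asarray(S), np.asarray(R)
    counts = A[np.ix_(R, S)].sum(axis=1)
    threshold = q * len(S) + (p - q) * 3 * n / (64 * k)
    return R[counts >= threshold]

# Two-phase use: S is a V_i-plural subset of U and W = V \ U.
# T1 = identify_cluster(A, S, W, p, q, n, k)
# T  = np.union1d(T1, identify_cluster(A, T1, U, p, q, n, k))
\end{verbatim}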
\paragraph{Details of the algorithm} Now we describe our algorithm (i.e., Algorithm \ref{alg:mainalgorithm}) in more detail. For a vertex $v$ and a set $T\subset V$, we let $N_{v,T}$ denote the number of neighbors of $v$ in $T$. Given a vector $\mm{x}$, we let $\norm{\mm{x}}$ denote its Euclidean norm.
The first part (i.e., line \ref{alg:vustart} -- \ref{alg:vulast} in Algorithm \ref{alg:mainalgorithm}) is similar to the corresponding part in the algorithm in \cite{vu2018simple}.
That is,
due to technical reasons (e.g., to reduce the correlation between some random variables), instead of directly working on the adjacency matrix of the input graph, we first randomly partition the input vertex set into two subsets {$U$ and $W$, and then randomly partition $U$ into two subsets} $Y$ and $Z$, and further randomly partition $Y$ into two parts $Y_1,Y_2$. Then we define $\widehat{B}$ to be the bi-adjacency matrix of the bipartite graph between $Y$ and $Z$. Then we use the sub-matrix $\widehat{A}$ of $\widehat{B}$
formed by columns indexed by $Y_1$ to find the subspace $\widehat{A}_k$ spanned by the first $k$ left singular vectors of $\widehat{A}$. Then we project the columns of $\widehat{B}$ indexed by $Y_2$ on $\widehat{A}_k$. That is, for each $u\in Y_2$, we obtain the point $\mm{p_u}$ by projecting the column $\widehat{\ensuremath{\mathbf{u}}}$ corresponding to $u$ in $\widehat{B}$ onto ${\widehat{A}_k}$, i.e., $\mm{p_u}=P_{\widehat{A}_k}{\hm{u}}$, where $P_{\widehat{A}_k}$ is the projection on $\widehat{A}_k$.
The second part (i.e. line \ref{alg:oursstart} -- \ref{alg:oursfinish} in Algorithm \ref{alg:mainalgorithm}) deviates from Vu's algorithm. We simply specify a parameter $L_b$ and sample a small number of vertices from $Y_2 \subset U$. If a vertex $v$ is sampled from a large cluster $V_i$ (i.e., $|V_i|\geq \frac{n}{1.1k}$), then we can show that the set $S$ that consists of vertices $u$ whose points $\mm{p_u}$ are contained in the ball $Q$ centered at $\mm{p_v}$ with radius $\Theta(L_b)$, is $V_i$-plural. Then we can use the aforementioned two-phase approach to find $V_i$ (i.e. line \ref{alg:phase1}--\ref{alg:phase2}).
Now, if we sample enough vertices, we are guaranteed that at least one vertex $v$ is sampled from $V_i$ with high probability. However, for a sampled vertex $v$ that does not belong to a large cluster (i.e., the size of the cluster containing $v$ is less than $\frac{n}{1.1 k}$), the set $S$ is not necessarily a $V_i$-plural set. This further implies that the set returned by the two iterations of the algorithm \textsc{IdentifyCluster} is not necessarily a cluster, which we need to detect. Formally, we can guarantee that if the sampled vertex $v$ belongs to a cluster of size less than $\frac{n}{1.5k}$, then we can reject this vertex, and if it belongs to a cluster of size in the interval $[\frac{n}{1.5k}, \frac{n}{1.1k})$, the algorithm \emph{may} find the corresponding cluster. Finally, by outputting all the sets returned by \textsc{IdentifyCluster} with size at least $\frac{n}{1.1k}$ (i.e. line \ref{alg:oursfinish}), we are guaranteed that all the clusters of size at least $\frac{n}{1.1k}$ are found.
\begin{algorithm}[!htb]
\caption{\textsc{Cluster}($G=(V,E),p,q,k$): Recovering all the clusters with size at least $\frac{n}{1.1k}$}\label{alg:mainalgorithm}
\begin{algorithmic}[1]
\STATE\label{alg:vustart} Randomly partition $V$ into two subsets $U$ and $W$
\STATE Randomly partition $U$ into two subsets $Y$ and $Z$
\STATE $Y_1\gets$ a random subset of $Y$ obtained by selecting each element with probability $1/2$ independently;
$Y_2\gets Y\setminus Y_1$
\STATE $\widehat{B} \gets$ the adjacency matrix of the bipartite graph between $Y$ and $Z$
\STATE $\widehat{A}\gets$ the submatrix of $\widehat{B}$ formed by the columns indexed by $Y_1$
\STATE\label{alg:vulast} Project the columns of $\widehat{B}$ indexed by $Y_2$ on $\widehat{A}_k$ to obtain point set ${P}:=\{\mm{p_u}:u\in Y_2\}$
\STATE
\STATE\label{alg:oursstart} $b\gets 0.002$ and $L_b\gets \sqrt{\frac{2nb}{k}}(p-q)$
\FOR{$i=1,\cdots,h=O(k\log n)$}
\STATE\label{alg:sampleavertex} sample a vertex $v$ from $Y_2$
\STATE\label{alg:setS} $S\gets$ the set of all vertices $u\in Y_2$ such that $\norm{\mm{p_u}-\mm{p_v}}\leq \frac{3L_b}{40}$
\IF{$|S|\geq \frac{n}{16k}$}
\STATE\label{alg:phase1} Invoke \textsc{IdentifyCluster}($S,W$) to obtain a set $T_i^1$
\IF{there exists $u\in T_i^1\cup\{v\}$ with $N_{u,T_i^1}\leq (0.9p + 0.1q)\cdot |T_i^1|$ or $|T_i^1|\leq \frac{n}{2.8k}$}\label{alg:if-condition}
\STATE continue
\ELSE
\STATE Invoke \textsc{IdentifyCluster}($T_i^1,U$) to obtain a set $T_i^2$
\STATE\label{alg:phase2} Merge the two sets to form $T_i=T_i^1 \cup T_i^2$
\ENDIF
\ENDIF
\ENDFOR
\STATE\label{alg:oursfinish} {\Return all the sets $T_{i_1},\cdots, T_{i_j}$ of size at least $\frac{n}{1.1k}$}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!htb]
\caption{\textsc{IdentifyCluster}($S,R$): Finding a subcluster $R\cap V_i$ from $R$ using a $V_i$-plural set $S$
}\label{alg:givenapluralset}
\begin{algorithmic}[1]
\STATE
$T\gets \emptyset$
\FOR{each $v\in R$}
\IF{$N_{v,S}\geq q |S| + (p-q) \frac{3n}{64k}$}
\STATE add $v$ to $T$
\ENDIF
\ENDFOR
\STATE \Return $T$
\end{algorithmic}
\end{algorithm}
\section{The Analysis of Algorithm \textsc{Cluster}}
\label{sec: analysis}
In this section, we give the analysis of the algorithm \textsc{Cluster} and prove Theorem \ref{thm:mainSBM}. We let $\ensuremath{\tilde{\mathbf{u}}}$ be the column corresponding to $u$ in the matrix $C$ of expectations w.r.t the columns in $\widehat{B}$, that is, $C_{uv}=p$ if $u,v$ belong to the same cluster and $C_{uv}=q$ otherwise.
For any vertex $u \in Y$, we denote by $\hm{u}$ the corresponding
column vector in $\widehat{B}$.
Furthermore, for any vertex $v \in V$, let $s_v$ be the size of the latent cluster containing
$v$.
Suppose $V_1,V_2,\dots,V_k$ are the latent communities such that $|V_1|\geq |V_2|\geq \dots \geq |V_k|$. Let $n_i=|V_i|$. Given a constant $b\in (0,1]$, we call a cluster $V_i$ \emph{$b$-large}, if $|V_i|>\frac{bn}{k}$.
Let \[
\Delta_{b} :=\min_{u,v}\{\norm{\ensuremath{\tilde{\mathbf{u}}}-\ensuremath{\tilde{\mathbf{v}}}}_2\}=\min_{u,v}\{\sqrt{s_u+s_v} (p-q)\}, \qquad \sigma^2:=\max\{p(1-p),q(1-q)\}
\]
where the minimum is taken over all pairs $u,v$ belonging to different $b$-large clusters, and $s_u$ is the size of the cluster containing $u$.
Let
\[
L_b:=\sqrt{\frac{nb}{k}+\frac{nb}{k}}(p-q)=\sqrt{\frac{2nb}{k}}(p-q), \qquad U_b:=C_0'\sigma \sqrt{k} +C_1'\sqrt{\log n},
\]
where $C_0'$ and $C_1'$ are two constants that will be specified later.
Note that by definition of $\Delta_b$, it holds that
$\Delta_b\geq L_b$.
We show that we can find the first cluster $V_1$, if the following condition holds:
\begin{eqnarray}
n \geq C_2\frac{\sigma^2 k^2 + {k^2\log n}}{b(p-q)^2},
\label{eqn:boundwithb}
\end{eqnarray}
where $C_2$ is some large constant. The Ineq. (\ref{eqn:boundwithb}) guarantees that
$L_b\geq \frac{U_b}{2}$.
Now, we discuss some properties of the problem structure and
some of the results from~\citep{vu2018simple} that we use in building
our algorithm.
\subsection{Basic properties}
{Recall that $Y_1,Y_2,Z, W$ are random subsets of $V$ such that each vertex $v\in V$ belongs to $Y_1,Y_2, Z,W$ with probability $1/8$, $1/8$, $1/4$, and $1/2$ respectively}. Recall that $b$ is a constant and that $\ensuremath{\tilde{\mathbf{u}}}$ is the column corresponding to $u$ in the matrix $C$ of expectations.
\paragraph{Property I}
Since each $b$-large cluster has size at least $\frac{bn}{k}$, we know that with probability $1-\frac{1}{n^4}$, each $b$-large cluster $V_i$ intersects $Z$ with at least $\frac{|V_i|}{6}$ elements. Thus, for any two vertices $u,v\in Y_2$ that belong to two different $b$-large clusters, we have that
\[
\norm{\ensuremath{\tilde{\mathbf{u}}}-\ensuremath{\tilde{\mathbf{v}}}}\geq \frac{\Delta_b}{6}\geq \frac{L_b}{6}.
\]
\paragraph{Property II} Recall that $s_u$ is the size of the cluster containing $u$. Note that for vertex $u$, the corresponding point after projection is $\mm{p_u}=P_{\widehat{A}_k}\hm{u}$.
Now let us recall a result from~\citep{vu2018simple}. We sketch how it can be derived from \citep{vu2018simple} in Appendix \ref{sec:proof}.
We let $e_u:=\hm{u}-\ensuremath{\tilde{\mathbf{u}}}$, i.e., $e_u$ is the random vector with zero mean
in each of its entries.
\begin{lemma}[\citep{vu2018simple}]\label{lem:vu}
It holds that for any $u\in Y_2$, $
\norm{\mm{p_u}-\ensuremath{\tilde{\mathbf{u}}}} \leq \norm{P_{\widehat{A}_k}e_u} + \norm{(P_{\widehat{A}_k}-I)\ensuremath{\tilde{\mathbf{u}}}}
$.
Furthermore, if the minimum cluster size is $\Omega(\log n)$, then with probability at least $1-o(n^{-2})$, it holds that
\begin{itemize}
\item $
\norm{P_{\widehat{A}_k}e_u}\leq \sigma k^{1/2} + C_1\sqrt{\log n}
$, and
\item $\norm{(P_{\widehat{A}_k}-I)\ensuremath{\tilde{\mathbf{u}}}} \leq C_0\sigma \sqrt{\frac{n}{s_u}}$,
\end{itemize}
where $C_0,C_1$ are sufficiently large constants.
\end{lemma}
Now, in line with~\cite{vu2018simple} we show that for each large enough cluster $C$ and any vertex $u \in C$, the projected point $\mm{p_u}$ is close to $\ensuremath{\tilde{\mathbf{u}}}$, the column of $u$ in the matrix of expectation.
\begin{lemma}\label{lemma:pointsinonecluster}
The points $\mm{p_1},\dots, \mm{p_s}$ from \textsc{Cluster} (i.e. Algorithm \ref{alg:mainalgorithm}) satisfy the following:
\begin{itemize}
\item if $u$ belongs to a $b$-large cluster, then
\[
\norm{\mm{p_u}-\ensuremath{\tilde{\mathbf{u}}}} \leq \norm{P_{\widehat{A}_k}e_u} + \norm{(P_{\widehat{A}_k}-I)\ensuremath{\tilde{\mathbf{u}}}}\leq U_b/60 \leq L_b/30
\]
\end{itemize}
\end{lemma}
\begin{proof}
Let $C_0':=60C_0(1/b)^{1/2}+60$ and $C_1':=60C_1$. Then the lemma follows from Lemma \ref{lem:vu}, the fact that $s_u\geq \frac{bn}{k}$, and the assumption $L_b\geq \frac{U_b}{2}$, where $U_b = C_0'\sigma \sqrt{k} +C_1'\sqrt{\log n}$.
\end{proof}
The next lemma says if two vertices $u,v$ belong to two different large clusters, then their projected points $\mm{p_u},\mm{p_v}$ are relatively far from each other.
\begin{lemma}\label{lem:blargefarfromeachother}
Let $u,v$ be two vertices in two different $b$-large clusters. Then $\norm{\mm{p_u}-\mm{p_v}} > \frac{L_b}{30}$.
\end{lemma}
\begin{proof}
If both $s_u,s_v\geq b n/k$, then
$\norm{\mm{p_u}-\ensuremath{\tilde{\mathbf{u}}}} \leq L_b/30, \textrm{ and } \norm{\mm{p_v}-\ensuremath{\tilde{\mathbf{v}}}} \leq L_b/30$ by Lemma \ref{lemma:pointsinonecluster}.
However, we know that $\norm{\ensuremath{\tilde{\mathbf{u}}}-\ensuremath{\tilde{\mathbf{v}}}}\geq L_b/6$, which gives
\[
\norm{\mm{p_u}-\mm{p_v}} \geq \norm{\ensuremath{\tilde{\mathbf{u}}}-\ensuremath{\tilde{\mathbf{v}}}} - \left(\norm{\mm{p_u}-\ensuremath{\tilde{\mathbf{u}}}} + \norm{\ensuremath{\tilde{\mathbf{v}}}-\mm{p_v}}\right)\geq \frac{L_b}{6} - \frac{2L_b}{30} \geq \frac{L_b}{10}
\]
\end{proof}
Given a point set $P$, and a point $\mm{p} \in P$, let $B_r(\mm{p},P)$ be the set of all points that are within distance at most $r$ from $\mm{p}$ in $P$. Recall that $V_1$ is the largest cluster. We then have the following lemma, whose proof is deferred to Appendix~\ref{app:lemma:plural}.
\begin{lemma}\label{lemma:plural}
Let $P=\{\mm{p_u}: u\in Y_2\}$. Let $u$ be an arbitrary vertex in $V_i\cap Y_2$ for some cluster $V_i$ with $|V_i|\geq \frac{n}{1.5k}$, and let $Q:=B_{3L_b/40}(\mm{p_u}, P)$. Then it holds that
\begin{itemize}
\item Let $P_i:=\{\mm{p_v}: v\in V_i\cap Y_2\}$. Then $P_i\subseteq Q$. Furthermore, $|P_i|\geq \frac{n}{16k}$.
\item for any other cluster $V_\ell$ with $\ell\neq i$ and its corresponding point set $P_{\ell}:=\{\mm{p_v}: v\in V_\ell\cap Y_2\}$, it holds that
$|P_{\ell}\cap Q|\leq \frac{bn}{4k}$.
\end{itemize}
\end{lemma}
Note that the above lemma says if $u \in V_i\cap Y_2$ for a cluster $V_i$ with $|V_i|\geq \frac{n}{1.5 k}$, then the vertex set $S$ defined in line \ref{alg:setS} of Algorithm \ref{alg:mainalgorithm} corresponds to the point set $Q=B_{3L_b/40}(\mm{p_u}, P)$, and thus is $V_i$-plural (see Definition \ref{def:plural}) when we specify $b=0.002$. Formally, we have $|S\cap V_i|\geq \frac{n}{16k}$ and for any other clusters $C'$ such that $C'\neq V_i$, $|S\cap C'|\leq \frac{bn}{4k}<\frac{0.001 n}{k}$. Now we show that two invocations
of \textsc{IdentifyCluster} (i.e., line \ref{alg:phase1}--\ref{alg:phase2}) find all the vertices in $V_i$.
\begin{lemma}\label{lemma:setB}
Let $U,W$ be the random partition as specified in Algorithm \ref{alg:mainalgorithm}. Let $b=0.002$ and let $S$ be the $V_i$-plural set defined as above such that $|V_i|\geq \frac{n}{1.5k}$.
Let $T^1:=\textsc{IdentifyCluster}(S,W)$ be such that $|T^1| \geq \frac{n}{3.2k}$ and let $T:=T^1 \cup \textsc{IdentifyCluster}(T^1,U)$.
Then with probability $1-\mathcal{O}(n^{-7})$, it holds that $T=V_i$.
\end{lemma}
\begin{proof}
Let $v \in W$ be an arbitrary vertex.
Let $S_v$ denote the subset of vertices of $S$ that belong to the same cluster as $v$.
Let $N_{v,S}$ be the number of neighbors of $v$ in $S$. Since $S$ and $W$ are disjoint, it holds that $N_{v,S}$ is a sum of independent random variables. Furthermore
$
\ensuremath{\mathrm{E}}[N_{v,S}]=p |S_v| + q |S\setminus S_v|= q |S| + (p-q) |S_v|$.
Let $\lambda=(p-q) \frac{n}{64k}$. Note that $\lambda^2/|S|\geq 4\log n$ as $|S|\leq \frac{n}{4}$
and $n\geq \frac{4096 k^2\log n}{(p-q)^2}$.
Now we consider two cases.
If $v\in V_i$, then $|S_v|=|S\cap V_i|\geq \frac{n}{16k}$ as $S$ is $V_i$-plural, and
$\ensuremath{\mathrm{E}}[N_{v,S}]= q |S| + (p-q) |S\cap V_i| \geq q |S| + (p-q) \frac{n}{16k}$.
By the Chernoff--Hoeffding bound, with probability at least $1-e^{-2\lambda^2/|S|}\geq 1-n^{-8}$, we have
$N_{v,S}\geq q |S| + (p-q) \frac{n}{16k} -\lambda = q |S| + (p-q) \frac{3n}{64k}$.
If $v\in C'$ for some cluster $C'\neq V_i$, then $|S_v|=|C'\cap S|\leq \frac{0.001n}{k}$ as $S$ is $V_i$-plural, and
$\ensuremath{\mathrm{E}}[N_{v,S}]=q|S|+(p-q)|C'\cap S| \leq q|S| + (p-q)\frac{0.001n}{k}.
$
By Chernoff--Hoeffding bound, with probability at least $1-e^{-2\lambda^2/|S|}\geq 1-n^{-8}$, we have
$N_{v,S}\leq q|S| + (p-q)\frac{0.001n}{k} + \lambda < q |S| + (p-q) \frac{3n}{64k}$.
Therefore, with probability at least $1-n^{-7}$, for each vertex $v \in W$, it holds that
1) if $v\in V_i$, then $ N_{v,S}\geq q |S| + (p-q) \frac{3n}{64k}$,
2) if $v\notin V_i$, then $N_{v,S}< q |S| + (p-q) \frac{3n}{64k}$. Thus, $T^1=\textsc{IdentifyCluster}(S,W)= V_i \cap W.$
On the other hand, we observe that for any set $X\subseteq W$ with $|X|\geq \frac{n}{3.2k}$, $X\subset V_i$, we can use the same analysis as above to analyze $\textsc{IdentifyCluster}(X,U)$. That is, since $X$ is a $V_i$-plural set of $W$, we can guarantee with probability at least $1-n^{-7}$ that
$\textsc{IdentifyCluster}(X,U)= V_i \cap U$.
Since $|V_i|\geq \frac{n}{1.5k}$ and $W$ is a random subset of $V$ obtained by including each vertex in $V$ independently with probability $\frac12$,
we have that with probability at least $1-n^{-8}$, $\abs{V_i \cap W}> \frac{\ensuremath{\mathrm{E}}[|W\cap V_i|]}{1.01}>\frac{n}{3.2k}$.
By taking a union bound, we have that with probability at least $1-O(n^{-7})$, $T^1=V_i\cap W$ and $|T^1|\geq \frac{n}{3.2k}$. Hence we can replace $X$ by $T^1$ in the above analysis, and obtain that
$\textsc{IdentifyCluster}(T^1,U)= V_i \cap U$.
Then the statement of the lemma follows from the fact that $T=T^1\cup \textsc{IdentifyCluster}(T^1,U)= (V_i\cap W)\cup (V_i\cap U) = V_i$.
\end{proof}
Using the above lemma and its analysis, we can obtain the following corollary, whose proof is given in Appendix \ref{sec:proof}.
\begin{corollary}\label{cor:indentify}
Let $v$ be a vertex from $V_i\cap Y_2$ such that $|V_i|\geq\frac{n}{1.1k}$, and let $S=\{u\in Y_2: \norm{\mm{p_u}-\mm{p_v}}\leq \frac{3 L_b}{40}\}$ with $b=0.002$. Let $T'$ be the set returned by \textsc{IdentifyCluster}($S,W$). Then with probability at least $1-\frac{1}{n^4}$, $|T'|\geq \frac{n}{2.8 k}$ and $N_{u,T'}\geq (0.9p + 0.1q)|T'|$ for any $u\in T'\cup\{v\}$.
\end{corollary}
Finally, we show in the following lemma that if the sampled vertex does not belong to a large cluster, then the vertex and the corresponding output set $T$ satisfies the if-condition at line \ref{alg:if-condition} of Algorithm \ref{alg:mainalgorithm}, and thus will be rejected.
\begin{lemma}
\label{lemma:indentifycluster}
Let $v$ be a vertex from $V_i\cap Y_2$ such that $|V_i|<\frac{n}{1.5 k}$, and let $S=\{u\in Y_2: \norm{\mm{p_u}-\mm{p_v}}\leq \frac{3 L_b}{40}\}$ with $b=0.002$. Let $T'$ be the set returned by \textsc{IdentifyCluster}($S,W$). Then with probability at least $1-\frac{1}{n^4}$, either $|T'|<\frac{n}{2.8 k}$,
or there exists $u\in T'\cup\{v\}$, $N_{u,T'} \leq (0.9p + 0.1q)|T'|$.
\end{lemma}
The proof of this lemma can also be found in Appendix~\ref{app:lemma:indentifycluster}.
Let us now proceed to the proof of Theorem~\ref{thm:mainSBM}.
\subsection{Proof of Theorem \ref{thm:mainSBM}}
The algorithm \textsc{Cluster} (i.e., Algorithm \ref{alg:mainalgorithm}) clearly runs in polynomial time. Now we prove its correctness. Note that we have chosen $b=0.002$ in the algorithm, and thus by assuming $n \geq C\frac{(p+q) k^2 + {k^2\log n}}{(p-q)^2}$
for some large constant $C$, and noting that $\sigma^2\leq p+q$, we are guaranteed that Ineq. (\ref{eqn:boundwithb}) holds, and thus $L_b\geq U_b/2$.
Now note that with probability at least $1-\frac{1}{n^4}$, for any cluster $V_i$ with $|V_i|\geq \frac{n}{1.1k}$, $|V_i\cap Y_2|\geq \frac{n}{16k}$,
as each vertex is contained in $Y_2$ with probability $\frac18$.
Note further that with probability at least $1-\frac{1}{n^4}$, for any cluster $V_i$ with $|V_i|\geq \frac{n}{1.1 k}$, at least one vertex $v\in V_i$ will be sampled at line \ref{alg:sampleavertex} in the algorithm. Once such a vertex $v$ is sampled, by Corollary \ref{cor:indentify}, with probability at least $1-\frac{1}{n^4}$, the returned set $T^1$ from \textsc{IdentifyCluster}($S,W$) will not satisfy the if-condition at line \ref{alg:if-condition} in Algorithm \ref{alg:mainalgorithm}. Then the algorithm will output
$T=T^1 \cup T^2$ where
$T^1=\textsc{IdentifyCluster}(S,W)$ and $T^2=\textsc{IdentifyCluster}(T^1,U)$.
By Lemma \ref{lemma:setB}, it will output $T=V_i$ with probability at least $1-\mathcal{O}(n^{-7})$.
For any cluster $V_i$ with $\frac{n}{1.5 k}\leq |V_i|< \frac{n}{1.1 k}$, we know that
either 1) the if-condition at line \ref{alg:if-condition} in Algorithm \ref{alg:mainalgorithm} is satisfied, in which case \textsc{IdentifyCluster} will not be invoked, or 2) the if-condition is not satisfied, in which case,
by Lemma \ref{lemma:setB}, with probability $1-O(n^{-7})$,
the two invocations of \textsc{IdentifyCluster} find all the vertices in $V_i$.
Next, for any cluster $V_i$ with $|V_i|<\frac{n}{1.5k}$, we know that by Lemma \ref{lemma:indentifycluster}, the if-condition at line \ref{alg:if-condition} in Algorithm \ref{alg:mainalgorithm} is satisfied, and thus $\textsc{IdentifyCluster}(S,V)$ will not be invoked.
Therefore, with probability at least $1-\frac{1}{n^2}$, the sets $T_i$ output by the \textsc{IdentifyCluster} invocations contain \emph{all} the clusters of size at least $\frac{n}{1.1k}$ and \emph{some of} the clusters of size in the interval $[\frac{n}{1.5k},\frac{n}{1.1k})$.
Finally, since Algorithm \ref{alg:mainalgorithm} outputs all the sets $T_{i_1},\cdots,T_{i_j}$ of size at least $\frac{n}{1.1k}$, we are guaranteed that all the clusters of size at least $\frac{n}{1.1k}$ are found by the algorithm.
\section{Application to clustering with a faulty oracle}
\label{section: faultyoracle}
Now we apply our algorithm in the SBM to the faulty oracle model. Consider the faulty oracle model with parameters $n,k,\delta$. Observe that if we make queries on all pairs $u,v\in V$, then the graph $G$ that is obtained by adding all $+$ edges answered by the oracle $\ensuremath{\mathcal{O}}$ is exactly a graph generated from the SBM($n,k,p,q$) with parameters $n$, $k$, $p=\frac{1}{2}+\frac{\delta}{2}$ and $q=\frac{1}{2}-\frac{\delta}{2}$. However, the goal is to recover the clusters by making a \emph{sublinear} number of queries, i.e., without seeing the whole graph.
We now describe our algorithm \textsc{NoisyClustering} (i.e., Algorithm \ref{alg:bal_cluster}) for clustering with a faulty oracle. Let $V$ be the set of items, which contains $k$ latent clusters $V_1,\dots,V_{k}$, and let $\ensuremath{\mathcal{O}}$ be the faulty oracle. Following the idea of \citep{PZ21:clustering}, we first sample a subset $T\subseteq V$ of size $|T|=\Theta(\frac{k^2\log n}{\delta^2})$, and query $\ensuremath{\mathcal{O}}(u,v)$ for all pairs $u,v\in T$. Then we apply our SBM clustering algorithm (i.e., Algorithm \ref{alg:mainalgorithm} \textsc{Cluster}) on the graph induced by $T$ to obtain clusters $X_1,\dots, X_{t}$ for some $t\leq k$. We can show that each of these sets is a subcluster of some large cluster $V_i$. Then we can use majority voting to find all other vertices that belong to the same cluster as $X_i$, for each $i\leq t$. That is, for each $X_i$ and $v\in V$, we check if the number of neighbors of $v$ in $X_i$ is at least $\frac{|X_i|}{2}$. In this way, we can identify all the large clusters $V_i$ corresponding to $X_i$, $1\leq i\leq t$.
Furthermore, we note that we can choose a small subset of $X_i$ of size $\mathcal{O} (\frac{\log n}{\delta^2})$ for the majority voting
to reduce the query complexity. Then we remove all the vertices of the recovered clusters $V_i$, together with all the edges incident to them, from both $V$ and $T$, and use the remaining subsets $T$ and $V$ and the corresponding subgraphs to find the next set of large clusters. The algorithm \textsc{NoisyClustering} then recursively finds all prominent clusters until we reach a point where the recovery condition on the current graph no longer holds.
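As an illustration, the majority-voting step can be sketched as follows (our own sketch under assumed interfaces: \texttt{oracle(u, v)} stands for a query to $\ensuremath{\mathcal{O}}$ and returns \texttt{True} on a $+$ answer; this is not the pseudocode of Algorithm \ref{alg:bal_cluster}).
\begin{verbatim}
def majority_vote_extend(X_sub, candidates, oracle):
    """Grow a recovered sub-cluster X_sub by adding every candidate
    vertex whose number of + answers towards X_sub is at least
    half of |X_sub| (majority vote)."""
    cluster = set(X_sub)
    for v in candidates:
        votes = sum(1 for u in X_sub if oracle(u, v))
        if votes >= len(X_sub) / 2:
            cluster.add(v)
    return cluster
\end{verbatim}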
Finally, we remark that our algorithm is similar to the one in \cite{PZ21:clustering}; however, their algorithm makes use of Vu's algorithm for the nearly-balanced case and then uses a reduction from the unbalanced case to the nearly-balanced case to find the clusters. Here, thanks to our new algorithm \textsc{Cluster} in the SBM, we do not need the reduction step and obtain a simpler algorithm with an improved performance guarantee.
A simple analysis of this algorithm,
along with a bound on the size of the recoverable clusters and query complexity
can be found in the appendix in Section~\ref{app:th4}.
\begin{algorithm}[!htb]
\caption{\textsc{NoisyClustering}($V,k,\delta$): Clustering algorithm in the faulty oracle model}~\label{alg:bal_cluster}
\begin{algorithmic}[1]
\STATE Let $n=|V|$ and $C$ be the constant from Theorem \ref{thm:mainSBM}
\STATE $V'\gets V$, $n'\gets n$, $t'\gets 0$, $k'\gets k$
\WHILE{$n'\geq C\frac{(k')^2+k'\log n}{\delta^2}$}
\STATE Randomly sample a subset $T\subset V'$ of size $|T| = \frac{1000 C\cdot {k'}^2 \log n }{\delta^2}$
\STATE Query all pairs $u,v\in T$
\STATE Let $G[T]$ be the graph on vertex set $T$ with only the positive edges from the query answers
\STATE Apply \textsc{Cluster}$(G[T],\frac{1}{2}+\delta,\frac{1}{2}-\delta, k')$ to obtain clusters $X_1,\dots, X_t$, for some $t\leq k'$
\FOR{each $i\in[t]$}
\STATE $t'\gets t'+1$
\STATE
Find an arbitrary subset $X_i'\subseteq X_i$ of size $\frac{10^6\log n}{\delta^2}$
\STATE $C_{t'}\gets X_i \cup \{v\in V \setminus T: \textrm{the number of neighbors of $v$ in $X'_i$ is at least } |X'_{i}|/2\}$
\STATE $V'\gets V'\setminus C_{t'}$
\ENDFOR
\STATE $n'\gets |V'|$, $k'\gets k' - t$
\ENDWHILE
\STATE \Return $C_1,\cdots, C_{t'}$
\end{algorithmic}
\end{algorithm}
| {
"timestamp": "2022-02-18T02:12:43",
"yymm": "2202",
"arxiv_id": "2202.08522",
"language": "en",
"url": "https://arxiv.org/abs/2202.08522",
"abstract": "The stochastic block model (SBM) is a fundamental model for studying graph clustering or community detection in networks. It has received great attention in the last decade and the balanced case, i.e., assuming all clusters have large size, has been well studied. However, our understanding of SBM with unbalanced communities (arguably, more relevant in practice) is still limited. In this paper, we provide a simple SVD-based algorithm for recovering the communities in the SBM with communities of varying sizes. We improve upon a result of Ailon, Chen and Xu [ICML 2013; JMLR 2015] by removing the assumption that there is a large interval such that the sizes of clusters do not fall in, and also remove the dependency of the size of the recoverable clusters on the number of underlying clusters. We further complement our theoretical improvements with experimental comparisons. Under the planted clique conjecture, the size of the clusters that can be recovered by our algorithm is nearly optimal (up to poly-logarithmic factors) when the probability parameters are constant.As a byproduct, we obtain an efficient clustering algorithm with sublinear query complexity in a faulty oracle model, which is capable of detecting all clusters larger than $\\tilde{\\Omega}({\\sqrt{n}})$, even in the presence of $\\Omega(n)$ small clusters in the graph. In contrast, previous efficient algorithms that use a sublinear number of queries are incapable of recovering any large clusters if there are more than $\\tilde{\\Omega}(n^{2/5})$ small clusters.",
"subjects": "Machine Learning (cs.LG)",
"title": "Recovering Unbalanced Communities in the Stochastic Block Model With Application to Clustering with a Faulty Oracle",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969694071564,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.7099044245174979
} |
https://arxiv.org/abs/1009.0855 | Level Sets of the Takagi Function: Local Level Sets | The Takagi function \tau : [0, 1] \to [0, 1] is a continuous non-differentiable function constructed by Takagi in 1903. The level sets L(y) = {x : \tau(x) = y} of the Takagi function \tau(x) are studied by introducing a notion of local level set into which level sets are partitioned. Local level sets are simple to analyze, reducing questions to understanding the relation of level sets to local level sets, which is more complicated. It is known that for a "generic" full Lebesgue measure set of ordinates y, the level sets are finite sets. Here it is shown for a "generic" full Lebesgue measure set of abscissas x, the level set L(\tau(x)) is uncountable. An interesting singular monotone function is constructed, associated to local level sets, and is used to show the expected number of local level sets at a random level y is exactly 3/2. | \section{Introduction}
The Takagi function $\tau(x)$ is a function
defined on the unit interval $ x\in [0,1]$
which was introduced by Takagi \cite{Takagi} in 1903
as an example of a continuous nondifferentiable function.
It can be defined by
\begin{equation}\label{eq101}
\tau(x) := \sum_{n=0}^{\infty} \frac{\ll 2^n x \gg}{2^n}
\end{equation}
where $\ll x \gg := \inf_{n \in {\mathbb Z}} |x-n|$ is the distance from $x$ to the nearest integer.
Variants of this function were presented by van der Waerden \cite{vdW30} in
1930 and de Rham \cite{deR57} in 1957.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{tauv3}
\caption{Graph of the Takagi function $\tau(x)$.}
\label{fig11}
\end{figure}
An alternate interpretation of the Takagi function involves the symmetric tent map
$T: [0,1] \to [0,1]$, given by
\begin{equation}
\label{eq101a}
T(x) = \left\{
\begin{array}{cc}
2x & \mbox{if}~~ 0\le x \le \frac{1}{2},\\
~&~\\
2-2x & \mbox{if}~~\frac{1}{2}\le x \le 1
\end{array}
\right.
\end{equation}
(see \cite{FL00} for further references).
Then we have
\begin{equation*}
\tau(x) = \sum_{n=1}^{\infty} \frac{1}{2^n} T^{(n)}(x),
\end{equation*}
where $T^{(n)}(x)$ denotes the $n$-th iterate of $T(x)$.
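As a quick illustration of \eqn{eq101}: for $x=\frac{1}{4}$ only the terms $n=0$ and $n=1$ are nonzero, since $2^n x$ is an integer for $n \ge 2$, so
\begin{equation*}
\tau\left(\frac{1}{4}\right) = \ll \frac{1}{4} \gg + \frac{1}{2} \ll \frac{1}{2} \gg = \frac{1}{4} + \frac{1}{4} = \frac{1}{2},
\end{equation*}
which also matches the tent map series, since $T(\frac14)= \frac12$, $T^{(2)}(\frac14)=1$ and $T^{(n)}(\frac14)=0$ for $n \ge 3$, giving $\frac12\cdot\frac12 + \frac14\cdot 1 = \frac12$.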
The Takagi function appears in many contexts and has been studied
extensively; see the recent surveys of Allaart and Kawamura \cite{AK11}
and of the first author \cite{La11}.
In this paper we consider certain properties of the graph of the Takagi function
$$
{\cal G}(\tau):= \{ (x, \tau(x)): ~0 \le x \le 1 \},
$$
which is pictured in Figure~\ref{fig11}.
It is well known that the values of the Takagi function satisfy $0 \le \tau(x) \le \frac{2}{3}.$
It is also known that this graph has Hausdorff dimension $1$ in ${\mathbb R}^2$ (see Mauldin
and Williams \cite[Theorem 7]{MW86}), and furthermore it has
$\sigma$-finite one-dimensional Hausdorff measure, see Anderson and Pitt \cite[Thm. 6.4]{AP89}.
Here we study the structure of the level sets of this graph. We make the
following definition, which contains a special convention concerning
dyadic rationals which simplifies theorem statements.
\begin{defi}~\label{de11}
{\em
For $0 \le y \le \frac{2}{3}$ the
{\em (global) level set} $L(y)$ at level $y$ is
$$
L(y):= \{ x: ~ \tau(x) =y, ~~0 \le x \le 1 \}.
$$
We make the convention that
$x$ specifies a binary expansion; thus each dyadic rational
value $x= \frac{m}{2^n}$ in a level set occurs twice,
labeled by its two possible
binary expansions.
(Technically $L(y)$ is a multiset, with multiplicities $1$ or $2$.)
}
\end{defi}
Level sets have a complicated structure, depending on the value of $y$.
It is known that there are different levels $y$ where the level set $L(y)$
is finite, countably infinite, or
uncountably infinite, respectively.
In 1959 Kahane \cite[Sec. 1]{Ka59} noted that
the level set $L(\frac{2}{3})$
was a perfect, totally disconnected set of
Lebesgue measure $0$, and in 1984
Baba \cite{Baba84} showed that $L(\frac{2}{3})$ has Hausdorff dimension
$\frac{1}{2}$.
The second author recently proved (\cite{Maddock09}) that the Hausdorff
dimension of any level set is at most $0.668$, and conjectured that
the example of Baba achieves the largest possible Hausdorff dimension.
Recently de Amo, Bhouri, D\'{i}az Carrillo and Fern\'{a}ndez-S\'{a}nchez \cite{ABDF11}
proved this conjecture.
The level sets at a rational level $y \in {\mathbb Q}$ are particularly interesting.
Knuth \cite[Sect. 7.2.1.3, Exercises 82-85]{Kn09} gave a (not necessarily halting)
algorithm
to determine the structure of level
sets at rational levels, revealing very complicated behaviors. For example he determined that
the level set $L(\frac{1}{5})$ is finite with exactly two elements, namely
$L(\frac{1}{5}) = \{ x, 1-x\}~~\mbox{with}~~ x= \frac{83581}{87040}.$
He also noted that
$L(\frac{1}{2})$ is countably infinite; we include a proof in Theorem~\ref{th38} below.
In 2008 Buczolich \cite{Buz08}
proved that, in the sense of Lebesgue measure on $y \in [0, \frac{2}{3}]$,
almost all level sets $L(y)$ are finite sets. \\
The object of this paper is to introduce and study
the notion of ``local level set". These are sets determined
locally by combinatorial operations on the binary expansion of a real number $x$;
they are closed sets and we
show that each level set decomposes into a disjoint union of local level sets.
(The convention on dyadic rationals made in the definition
above is needed for disjointness of the union.)
The structure
of local level sets is completely analyzable: they are either finite sets or Cantor
sets. Information about the Hausdorff dimension of such sets can
readily be deduced from properties of the binary expansion of $x$. \\
We then study the relation of local level sets and level sets.
How many local level sets are there in a given level set?
To approach this question, we study the behavior of the Takagi function
restricted to the set $\Omega^{L}$ of left hand (abscissa) endpoints $x$
of all the local level sets; these endpoints parametrize the totality of all local level sets.
We show that $\Omega^{L}$ is a closed perfect set (Cantor set) which has Lebesgue measure
$0$; in a sequel (\cite{LM10b})
we show it has Hausdorff dimension $1$. We show the Takagi function behaves relatively
nicely when restricted to $\Omega^{L}$, namely that $\tau^{S}(x):=\tau(x)+ x$ is a
monotone singular continuous function on this set.
It is therefore the integral of a singular probability measure on $[0,1]$,
which we call the Takagi singular measure.
Using this function we deduce that the expected
number of local level sets at a random level $0 \le y \le \frac{2}{3}$ is finite, and we
determine that this expected value
is exactly $\frac{3}{2}$. We also show that there is a dense set of values $y$ having
an infinite number of distinct local level sets.
\medskip
Local level sets provide a way to take apart level sets and better understand their
structure.
In the rest of the introduction we state the main results of this paper in more detail.
\subsection{Local level sets}
This notion of local level set is attached to the binary expansion of abscissa point $x \in [0,1]$.
We show that certain combinatorial flipping operations applied
to the binary expansion of $x$ yield new points $x'$ in the same level set.
The totality of points reachable from $x$ by these combinatorial operations
will comprise the local level set $L_x^{loc}$ associated to $x$.
To describe this, let $x \in [0,1]$ have a binary expansion:
\begin{equation*}
x := \sum_{j=1}^{\infty} \frac{b_j}{2^j}= 0.b_1 b_2 b_3..., ~~~~ \mbox{each}~b_j \in \{0, 1\}.
\end{equation*}
The {\em flip operation} (or {\em complementing operation}) on a single binary digit $b$ is
\begin{equation*}
\bar{b} := 1- b.
\end{equation*}
We associate to the binary expansion the {\em digit sum function} ${N}^{1}(x)$ given by
\begin{equation*}
{N}^{1}_j(x) := b_1 +b_2 + \cdots + b_j.
\end{equation*}
We also associate to the binary expansion the {\em deficient digit function} ${D}_j(x)$ given by
\begin{equation*}
{D}_j(x):= j- 2{N}^{1}_j(x) = j- 2(b_1+b_2+ \cdots + b_j) .
\end{equation*}
Here ${D}_j(x)$ counts the excess of binary digits $b_k=0$ over those with $b_k=1$
in the first $j$ digits, i.e. it is positive if there are more $0$'s than $1$'s in the first $j$ digits.
Note that for dyadic rationals
$x= \frac{m}{2^n}$
the function values depend on which binary expansion is used. \\
We next associate to any $x$ the sequence of digit positions $j$
at which tie-values ${D}_j(x)=0$ occur, which we call {\em balance points}; note that all such
$j$ are even. The {\em balance-set} $Z(x)$ associated to $x$ is the set of balance
points, and is denoted
\beql{123}
Z(x) : = \{ c_k:~~{D}_{c_k}(x)=0\}.
\end{equation}
where we define $c_0=c_0(x) = 0$ and set $c_0(x)< c_1(x)< c_2(x) < ...$. This sequence of tie-values may
be finite or infinite. If it is finite, ending in $c_{n}(x)$, we make the
convention to adjoin a final ``balance point" $c_{n+1}(x)= +\infty$.
We call a {\em ``block"} an indexed set of digits between two consecutive balance points,
\begin{equation*}
B_k(x) := \{ b_j: ~c_k(x) < j \le c_{k+1}(x)\},
\end{equation*}
which includes the second balance point but not the first.
We define an equivalence relation on
blocks, written $B_k(x) \sim B_{k'}(x')$ to mean the block endpoints agree
($c_k(x)= c_{k'}(x')$ and $c_{k+1}(x) = c_{k'+1}(x')$)
and either $B_k(x) = B_{k'}(x')$ or $B_k(x) = \bar{B}_{k'}(x')$, where the bar operation flips
all the digits in the block, i.e.
\begin{equation*}
b_j \mapsto \bar{b}_j:= 1- b_j,~~~~~~~ c_k < j \le c_{k+1}.
\end{equation*}
Finally, we define an equivalence relation $x \sim x'$ to mean that
they have identical balance-sets
$Z(x) \equiv Z(x')$, and furthermore every block $B_k(x) \sim B_k(x')$ for $k \ge 0$.
Note that $x \sim 1-x$; this corresponds to a flipping operation being applied to
every binary digit.
We will show (Theorem \ref{th31})
that the equivalence relation $x\sim x'$ implies that $\tau(x)= \tau(x')$ so that $x$ and $x'$
are in the same level set of the Takagi function.
\begin{defi}~\label{de12}
{\em
The {\em local level set} $L_x^{loc}$ associated to $x$ is the set of equivalent points,
\begin{equation*}
L_x^{loc} := \{ x': ~~x' \sim x\}.
\end{equation*}
We use again the convention that $x$ and $x'$ denote binary
expansions, and hence dyadic rational numbers are represented by two
distinct binary expansions.}
\end{defi}
Each local level set $L_x^{loc}$ is
a closed set. It is a finite set if the balance-set $Z(x)$ is finite, and is a
Cantor set (perfect
totally disconnected set) if $Z(x)$ is infinite.
Note that if $x$ is the expansion of
a dyadic rational, then $L_x^{loc}$ is finite and consists entirely of (expansions of) dyadic
rationals.
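For example, take $x= \frac{1}{3} = 0.(01)^{\infty}$. Then ${D}_j(x)=1$ for odd $j$ and ${D}_j(x)=0$ for even $j$, so the balance-set is $Z(\frac{1}{3})=\{2,4,6,\dots\}$ and every block is the two-digit word $01$. Flipping an arbitrary subset of these blocks to $10$ yields exactly
\begin{equation*}
L_{1/3}^{loc} = \left\{\, 0.c_1c_2c_3\ldots :~ c_{2i-1}c_{2i}\in\{01,\, 10\} ~\mbox{for all}~ i \ge 1 \,\right\},
\end{equation*}
an uncountable Cantor set; by Theorem~\ref{th14} below it is contained in the level set $L(\frac{2}{3})$, since $\tau(\frac{1}{3})= \frac{2}{3}$.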
\begin{theorem}~\label{th14} {\em (Local level set partition)} \\
$~~~~~$ (1) Each local level set $L_x^{loc}$ is a closed set contained
in some level set.
(2) Two local level sets $L_x^{loc}$
and $L_{x'}^{loc}$ either coincide or are disjoint.
Thus each level set $L(y)$ partitions into a disjoint union of local level sets.
\end{theorem}
This easy result is proved as part of Theorem \ref{th31} in Sect.~\ref{sec31}.
A priori this
disjoint union could be finite, countable or uncountable.
In Theorem \ref{th38}
we give an example of a level set that is a countably infinite union
of local level sets; for this case $y= \frac{1}{2}$
is a dyadic rational. \smallskip
The Hausdorff dimension of a local level set $L_x^{loc}$ is
restricted by the nature of its balance-set $Z(x)$. A necessary condition
to have positive Hausdorff dimension is that $Z(x)$ must have
positive upper asymptotic density in ${\mathbb N}$.
This allows us to deduce the
following result.
\begin{theorem}~\label{th15} {\em (Generic local level sets)}
For a full Lebesgue measure set of abscissa points $x$
the local level set $L_x^{loc}$ is a Cantor set (closed totally
disconnected perfect set) having Hausdorff dimension $0$.
\end{theorem}
Theorem~\ref{th15} is proved in Sect.~\ref{sec32}.
This result implies that if an abscissa value $x$ is picked at random in $[0,1]$,
then with probability one the level set $L(\tau(x))$ is uncountably infinite.
This result differs strikingly from that of Buczolich \cite{Buz08},
who showed that if an ordinate value $y$ is picked uniformly in $[0, \frac{2}{3}]$
then the level set $L(y)$ is finite with probability one.
There is no inherent contradiction here: drawing an abscissa value $x$
will preferentially select levels whose level set $L(\tau(x))$ is ``large",
and Theorem~\ref{th15} quantifies ``large."\\
In Sect ~\ref{sec33} we completely determine the structure of local level sets that contain
a rational number. We prove they are either a finite set or a Cantor set of
positive Hausdorff dimension, and characterize when each case occurs (Theorem~\ref{th50}).
One can check directly that $x_0= \frac{1}{3}$ has a local level set of Hausdorff
dimension $ \frac{1}{2}$ (at level $y= \frac{2}{3}$), which shows that the Hausdorff dimension
upper bound of $\frac{1}{2}$ for level sets
obtained in \cite{ABDF11} is sharp also for local level sets.
\subsection{Expected number of local level sets}
Our second object is to relate local level sets to level sets. How many
local level sets belong to a given level set?
To approach this problem we first study (in Sect.~\ref{sec4}) the set $\Omega^{L}$ of all left
hand endpoints of local level sets, because this set parameterizes all the local
level sets. In Theorem \ref{th33} we
characterize its members in terms of their binary
expansions: they are exactly the values $x$ with binary expansions such that
$$
{D}_j(x) \ge 0 ~~~\mbox{for all} ~~~ j \ge 1.
$$
We call this latter set the {\em deficient digit set}
and show it is a closed, perfect set (Cantor set) of Lebesgue measure
zero.
In \S5 we define a new function, the {\em flattened Takagi function} $\tau^{L}(x)$,
which agrees with $\tau(x)$ on $\Omega^{L}$ and is defined
by linear interpolation across the gaps removed in constructing $\Omega^{L}$.
We prove $\tau^{L}(x)$ to be of bounded variation, and determine a Jordan decomposition.
This consists of a nondecreasing piece $F_{+}(x) := \tau^{L}(x) + x$
which we establish is a singular function whose points
of increase are supported on $\Omega^{L}$, and a
strictly decreasing piece $F_{-}(x) := -x$ which is absolutely continuous.
We name the function
$$
\tau^{S}(x):= F_{+}(x) = \tau^{L}(x) + x
$$
the {\em Takagi singular function}, based on the following result,
which is needed in the proof that the flattened Takagi function has
bounded variation.
\begin{theorem} \label{th15a} {\em (Takagi singular function)}
The function $\tau^{S}(x)$ defined by $\tau^{S}(x)= \tau(x) + x$ for
$x \in \Omega^{L}$
is a nondecreasing function on $\Omega^{L}$. Define its extension to
all $x \in [0,1]$ by
$$
\tau^{S}(x) := \sup\{ \tau^{S}(x_1): x_1 \le x ~~~\mbox{with} ~~ x_1 \in \Omega^{L}\}.
$$
Then the function $\tau^{S}(x)$ is a monotone singular function. That is, it is
a nondecreasing continuous function
having
$\tau^{S}(0)=0, \tau^{S}(1)=1$, which has derivative zero at (Lebesgue) almost all points of $[0,1]$.
The closure of the set of points of increase of $\tau^{S}(x)$ is the deficient digit set $\Omega^{L}$.
\end{theorem}
In a sequel (\cite{LM10b}) we study
a nonnegative Radon measure $d{\mu_S}$, called the {\em Takagi singular measure},
such that
\begin{equation*}
\tau^{S}(x) = \int_{0}^x d{\mu_S},
\end{equation*}
which is a probability measure on $[0,1]$.
This measure is singular with respect to Lebesgue measure.
There we show that
its support $\mbox{Supp}({\mu_S}) = \Omega^{L}$ has (full) Hausdorff dimension $1$.
The Takagi singular measure is not translation-invariant,
but it has certain self-similarity properties under dyadic
rescalings. These are useful in explicitly computing
the measure of various interesting subsets of $\Omega^{L}$.
One may compare analogous properties
of the Cantor function, see Dovghoshey et al \cite[Sect. 5]{DMRV06}.
\smallskip
The bounded variation property of the flattened Takagi function is used
to count the average number of local level sets, as follows.
\begin{theorem}~\label{th16} {\em (Expected number of local level sets)}
With respect to uniform (Lebesgue) measure on
the ordinate space $[0, \frac{2}{3}]$ a full measure set of points have a
finite number of local level sets. Furthermore
the expected number of local level sets on a given level $y \in [0, \frac{2}{3}]$
is $\frac{3}{2}.$
\end{theorem}
This result is proved as Theorem~\ref{th37}, using the coarea formula for
functions of bounded variation.
We show that this result is non-trivial in that there are infinitely many levels
containing infinitely many distinct local level sets.
\begin{theorem}~\label{th16b} {\em (Infinite Number of Local Level Sets)}
There exists a dense set of ordinate values $y$ in $[0, \frac{2}{3}]$, which
are all dyadic rationals,
such that the level set $L(y)$ contains an infinite number of distinct local level sets.
\end{theorem}
This theorem follows directly from a result proved in Sect. \ref{sec5B} (Theorem \ref{th62a}).
This in turn is derived from the fact that $L(\frac{1}{2})$ is a countable set which
contains a countably infinite number of local level sets (Theorem \ref{th38}). \medskip
In the final Sect. \ref{sec9} we formulate some open questions
suggested by this work.
\subsection{Extensions of results and related work}
The Takagi function has self-affine properties,
and there has been extensive study of
various classes of self-affine functions.
In particular, in the late 1980's Bertoin
\cite{Ber88}, \cite{Ber90} studied
the Hausdorff dimension
of level sets of certain classes of self-affine functions;
however his results do not cover the Takagi function.
In 1997 Yamaguti, Hata and Kigami \cite[Chap. 3]{YHK97} gave
a general definition of a family $F(t, x)$ of Takagi-like functions
depending on a parameter $0 < t < 1$ as follows:
let $g(x)$ be a bounded measurable function defined on $[0,1]$
and let $\Phi: [0, 1] \to [0,1]$ be a continuous mapping, then set
\begin{equation*}
F(t, x) := \sum_{n=0}^{\infty} t^n g( \Phi^n(x)).
\end{equation*}
If we specialize to take $\Phi(x)= 2 g(x)= T(x)$, the tent map in (\ref{eq101a}),
then the parameter value $t=\frac{1}{2}$ gives the Takagi function
$$
F(\frac{1}{2}, x) = \sum_{n=1}^{\infty} \frac{1}{2^n} T^{n}(x) = \tau(x).
$$
If one now changes the parameter value to $t= \frac{1}{4}$,
then one gets instead (\cite[p. 35]{YHK97}) the smooth function
\begin{equation*}
F(\frac{1}{4}, x) = \frac{1}{2}\left( \sum_{n=1}^{\infty} \frac{1}{4^n} T^{n}(x) \right)= \frac{1}{2}x(1-x).
\end{equation*}
The level sets of this function are finite. These examples show
an extreme dependence of level set structure on the parameter $t$.
Our analysis
uses the piecewise linear nature of the function $\Phi(x)$
in a strong way, and also uses specific properties of the geometric scaling
by the parameter $t$ at value $t=\frac{1}{2}$. \smallskip
The methods presented should extend to various functions similar in construction to
the Takagi function, such as van der Waerden's function (\cite{vdW30}).
They also extend to intersections
of the graph of the Takagi function with parallel families of lines having
integer slope, a device used by the second author \cite{Maddock09}.
In this paper we have treated only Hausdorff dimension, while the paper
\cite{Maddock09} also obtained upper bounds for Minkowski
dimension (a.k.a. box counting dimension) of level sets. Some results of this
paper (e.g. Theorem \ref{th15}) may be strengthened to give
Minkowski dimension upper bounds. \smallskip
In \cite{LM10b}
we further analyze the structure of global level sets $L(y)$ using local
level sets. We give a new proof of a theorem of Buczolich \cite{Buz08} showing
that if one draws $y$ uniformly from
$[0, \frac{2}{3}]$, then with probability one the level set $L(y)$ is a finite set;
we improve on it by
showing that the expected number of points in such a ``random" level set $L(y)$
is infinite. We also complement this result by showing that the
set of levels $y$ having a level set of positive Hausdorff dimension is
``large" in the sense that it has full Hausdorff dimension $1$,
although it is of Lebesgue measure $0$.\smallskip
Subsequent to this paper, Allaart \cite{A11} \cite{A11b}
obtains many further results on local level sets. He shows that
dyadic ordinates $y=\frac{k}{2^n}$ have finite or countable level sets, and he
determines information on cardinalities of finite level sets.
Finally we remark that
there has been much study of the non-differentiable nature
of the Takagi function in various directions, see for example
Allaart and Kawamura (\cite{AK06}, \cite{AK10}) and references therein.
It is considered as an example in Tricot \cite[Section 6]{Tri97}. \smallskip
\section{Basic Properties of the Takagi Function}\label{sec2}
\setcounter{equation}{0}
We recall
some basic facts and include proofs for the reader's convenience.
We first give Takagi's formula for his function, which
assigns a value $\tau(x)$ directly
to a binary expansion of $x=0.b_1b_2b_3...$.
Dyadic rationals $\frac{k}{2^n}$ have two distinct binary expansions, and
one checks the assigned value $\tau(x)$
is the same for both expansions.
For $0 \le x \le 1$ the distance to the nearest integer function $\ll x \gg$ is
\begin{equation*}
\ll x \gg ~:= \left\{
\begin{array}{lcl}
x & \mbox{if}& 0 \le x < \frac{1}{2},~~\mbox{i.e.} ~b_1=0\\
~&~&~\\
1-x & \mbox{if}& \frac{1}{2} \le x \le 1, ~~\mbox{i.e.} ~ b_{1}=1.
\end{array}
\right.
\end{equation*}
For $n \ge 0$, we have
\beql{204b}
\ll 2^n x \gg ~= \left\{
\begin{array}{lcl}
0.b_{n+1} b_{n+2} b_{n+3} ... & \mbox{if}& b_{n+1}=0\\
~&~&~\\
0.\bar{b}_{n+1} \bar{b}_{n+2} \bar{b}_{n+3} ...& \mbox{if}& b_{n+1}=1,
\end{array}
\right.
\end{equation}
where we use the bar-notation
\begin{equation*}
\bar{b}= 1-b , ~~~\mbox{for}~~~ b=0 ~\mbox{or} ~1,
\end{equation*}
to mean complementing a bit.
\begin{lemma}~\label{le21} {\rm (Takagi \cite{Takagi})}
For $x= 0.b_1b_2 b_3 ...$ the Takagi function is given by
\beql{201}
\tau(x) = \sum_{m=1}^{\infty} \frac{\ell_m}{2^m},
\end{equation}
in which $0 \le \ell_m= \ell_m(x) \le m-1$ is the integer
\begin{equation*}
\ell_m(x) = \# \{ i:~ 1 \le i < m,~~b_i \ne b_{m} \}.
\end{equation*}
In terms of the digit sum function ${N}^{1}_m(x)= b_1 + b_2 + ...+b_m$,
\beql{203}
\ell_{m+1} (x) = \left\{ \begin{array}{lcl}
{N}^{1}_{m}(x) & \mbox{if}& b_{m+1}=0,\\
~&~&~\\
m-{N}^{1}_{m}(x) & \mbox{if}& b_{m+1}=1.
\end{array}
\right.
\end{equation}
\end{lemma}
\paragraph{Proof.} From the definition
\begin{equation*}
\tau(x) = \sum_{n=0}^{\infty} \frac{ \ll 2^n x \gg}{2^n}
\end{equation*}
Now \eqn{204b} gives
\begin{equation*}
\frac{\ll 2^n x \gg}{2^n} =
\left\{
\begin{array}{lcl}
\sum_{j=1}^{\infty} \frac{b_{n+j}}{2^{n+j}} & \mbox{if}& b_{n+1}=0,\\
~&~&~\\
\sum_{j=1}^{\infty} \frac{\bar{b}_{n+j}}{2^{n+j}} & \mbox{if}& b_{n+1}=1.
\end{array}
\right.
\end{equation*}
We substitute this into the formula for $\tau(x)$ and collect all
terms having a given denominator $\frac{1}{2^{m}}$, coming from $m=n+j$ with
$1 \le j \le m.$
For $m= n+j$ we get a contribution of $\frac{1}{2^m}$ whenever $b_{n+j} :=b_{m}= 1$ if $b_{n+1}=0$,
and whenever $b_{n+j}:=b_m=0$ if $b_{n+1}=1$, otherwise get $0$ contribution. Adding up over $j$,
we find the total contribution is $\frac{\ell_m}{2^m}$ where $\ell_m(x)$ counts the number of $b_j$,
$1 \le j < m$ having the opposite parity to $b_m$, which is \eqn{201}.
Note that $\ell_1(x) \equiv 0$, so the summation \eqn{201} really starts with $m=2$.
The formulas \eqn{203} follow by inspection; note that
$m- {N}^{1}_m(x) = {N}^{1}_m(1-x)$ (making an appropriate convention for dyadic rationals).
$~~~\Box$\\
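For example, for $x = \frac{3}{8}= 0.011000\ldots$ we have $\ell_1 = 0$, $\ell_2=\ell_3 = 1$ and $\ell_m = 2$ for $m \ge 4$, so \eqn{201} gives
\begin{equation*}
\tau\left(\frac{3}{8}\right) = \frac{1}{4} + \frac{1}{8} + 2 \sum_{m=4}^{\infty} \frac{1}{2^m} = \frac{5}{8},
\end{equation*}
in agreement with the direct evaluation $\tau(\frac{3}{8}) = \ll \frac{3}{8} \gg + \frac{1}{2} \ll \frac{3}{4} \gg + \frac{1}{4} \ll \frac{3}{2} \gg = \frac{3}{8} + \frac{1}{8} + \frac{1}{8} = \frac{5}{8}$.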
We next recall two basic functional equations,
see Kairies, Darsow and Frank \cite{KDF88}.
\begin{lemma}~\label{le22}{\em (Takagi functional equations)} \\
The Takagi function satisfies two functional equations,
each valid for
$0 \le x \le 1.$
These are the reflection equation
\beql{206a}
\tau(x) = \tau(1-x),
\end{equation}
and the dyadic self-similarity equation
\beql{206b}
2 \tau(\frac{x}{2})= \tau(x) + x.
\end{equation}
\end{lemma}
\paragraph{Proof.}
Here \eqn{206a} follows directly from \eqn{eq101}, since
$\ll k x\gg ~= ~\ll k(1-x) \gg$ for $k \in {\mathbb Z}$.
To obtain \eqn{206b}, let $x= 0. b_1 b_2 b_3 ...$ and set
$y := \frac{x}{2} = 0.0 b_1 b_2b_3 ...$. Then $\ll y\gg = y$, whence \eqn{eq101} gives
$$
2\tau(y)
= 2 \ll y \gg + 2\left(\sum_{n=1}^{\infty} \frac{ \ll 2^{n} y\gg} {2^n}\right)
=x + \sum_{m=0}^{\infty} \frac{ \ll 2^{m} x\gg} {2^m} = x + \tau(x).
~~~~\Box$$
We note that the Takagi function
can be characterized as the unique continuous
function on $[0,1]$ satisfying these two functional equations (Knuth \cite[Exercise 82, solution p. 740]{Kn09}).
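For instance, taking $x= \frac{1}{2}$ in \eqn{206b} gives $2 \tau(\frac{1}{4})= \tau(\frac{1}{2}) + \frac{1}{2}$; since $\tau(\frac{1}{4})=\tau(\frac{1}{2})= \frac{1}{2}$ by direct evaluation of \eqn{eq101}, both sides equal $1$.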
Next we recall the construction of the Takagi function $\tau(x)$
as a limit of piecewise linear approximations
\begin{equation*}
\tau_n(x) := \sum_{j=0}^{n-1} \frac{ \ll 2^j x\gg}{2^j},
\end{equation*}
which we name the {\em partial Takagi function} of level $n$.
We require some notation concerning
the binary expansion:
\begin{equation*}
x= \sum_{j=1}^{\infty} \frac{b_j}{2^j}= 0.b_1 b_2 b_3..., ~~~~ \mbox{each}~b_j \in \{0, 1\}.
\end{equation*}
\begin{defi}\label{de23}
{\em
Let $x \in [0,1]$ have binary expansion
$x= \sum_{j=1}^{\infty} \frac{b_j}{2^j}= 0.b_1 b_2 b_3...$, with each $b_j \in \{0, 1\}$.
For each $j \ge 1$ we define the following integer-valued functions.
(1) The {\em digit sum function} ${N}^{1}_j(x)$ is
\begin{equation*}
{N}^{1}_j(x) := b_1 +b_2 + \cdots + b_j.
\end{equation*}
We also let $N_j^{0}(x) = j - {N}^{1}_j(x)$ count the number of $0$'s in the
first $j$ binary digits of $x$.
\smallskip
(2) The {\em deficient digit function} ${D}_j(x)$ is given by
\beql{222}
{D}_j(x):= N_j^{0}(x) - {N}^{1}_j(x) = j- 2{N}^{1}_j(x) = j- 2(b_1+b_2+ \cdots + b_j) .
\end{equation}
Here we use the convention that $x$ denotes a binary expansion;
dyadic rationals have two different binary expansions, and all functions
$N_j^0(x)$, ${N}^{1}_j(x)$, ${D}_j(x)$ depend
on which binary expansion is used.
}
\end{defi}
The name ``deficient digit function" reflects the
fact that ${D}_j(x)$ counts the excess of binary digits $b_k=0$ over those with $b_k=1$
in the first $j$ digits, i.e. it is positive if there are more $0$'s than $1$'s.
\begin{lemma}~\label{le23}{\em (Piecewise linear approximations to Takagi function)} \\
The piecewise linear function
$\tau_n(x) = \sum_{j=0}^{n-1} \frac{ \ll 2^j x\gg}{2^j}$
is linear on each dyadic
interval $[\frac{k}{2^n}, \frac{k+1}{2^n}]$.
(1) On each such interval $\tau_n(x)$ has integer slope between $-n$ and $n$ given by
the deficient digit function
$$
{D}_n(x) = N_{n}^0(x) - {N}^{1}_n(x) =n - 2(b_1+b_2 + \cdots + b_n),
$$
Here $x=0.b_1b_2 b_3...$ may be any interior point on the
dyadic interval, and can also be an endpoint provided the dyadic
expansion ending in $0$'s is taken at the left endpoint
$\frac{k}{2^n}$ and that ending in $1's$ is taken
at the right endpoint $\frac{k+1}{2^n}.$
(2) The values $\{ \tau_n(x): n \ge 1\}$
converge uniformly to $\tau(x)$, with
\beql{253a}
|\tau_n(x) - \tau(x)| \le \frac{2}{3}\cdot\frac{1}{2^{n}}.
\end{equation}
The functions $\tau_n(x)$ approximate the
Takagi function monotonically from below
\begin{equation*}
\tau_1(x) \le \tau_2(x) \le \tau_3(x) \le ...
\end{equation*}
For a dyadic rational $x= \frac{k}{2^n}$, perfect approximation
occurs at the $n$-th step with
\begin{equation*}
\tau(x) = \tau_m (x), ~~~\mbox{for all}~~ m \ge n.
\end{equation*}
\end{lemma}
\paragraph{Proof.}
All statements follow easily from the observation that
each function
$f_n(x) := \frac{ \ll 2^n x\gg}{2^n}$
is a piecewise linear sawtooth function, linear on dyadic intervals
$[\frac{k}{2^{n+1}}, \frac{k+1}{2^{n+1}}]$, with slope
having value $+1$ if the binary expansion of $x$ has $b_{n+1}=0$
and slope having value $-1$ if $b_{n+1}=1$. The inequality in
\eqref{253a} also uses the fact that $\max_{x \in [0,1]} \tau(x) =
\frac 2 3.$
$~~~\Box$\\
The Takagi function itself can be directly
expressed in terms of the deficient digit function.
The relation \eqn{222} compared with the definition \eqn{203} of
$\ell_m(x)$ yields
$$
\ell_{m+1}(x) = \frac{m}{2}- \frac{1}{2} (-1)^{b_{m+1}}{D}_{m}(x).
$$
Substituting this in Takagi's formula \eqn{201}
and simplifying yields the formula
\begin{equation*}
\tau(x)
= \frac{1}{2} -\frac{1}{4} \left(\sum_{m=0}^{\infty} (-1)^{b_{m+1}} \frac{{D}_{m}(x)}{2^m}
\right).
\end{equation*}
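As a quick check, for $x= \frac{1}{2}= 0.1000\ldots$ we have $b_1=1$ and $b_{m+1}=0$ for $m \ge 1$, while ${D}_1(x)=-1$ and ${D}_m(x)= m-2$ for $m \ge 2$; the $m=0$ term vanishes (as ${D}_0(x)=0$), so the sum above equals $-\frac{1}{2} + \sum_{m=3}^{\infty} \frac{m-2}{2^m} = -\frac{1}{2}+ \frac{1}{2} = 0$, and the formula returns $\tau(\frac{1}{2})= \frac{1}{2}$.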
We conclude this section with a self-similarity property of the Takagi
function associated to dyadic rationals $x = \frac{k}{2^{n}}$.
\begin{lemma}~\label{le25}{\em (Takagi self-affinity)} \\
For an arbitrary dyadic rational $x_0=\frac{k}{2^n} $ then for $x \in [\frac{k}{2^n}, \frac{k+1}{2^n}]$
given by $x = x_0 + \frac{w}{2^{n}}$
with $w\in [0,1]$,
\begin{equation}
\label{251}
\tau( x) = \tau(x_0) + \frac{1}{2^n} \left( \tau(w) + {D}_n(x_0)w\right).
\end{equation}
That is, the graph of $\tau(x)$ on $ [\frac{k}{2^{n}}, \frac{k+1}{2^{n}}]$
is a miniature version of the tilted Takagi function
$ \tau(x) + D_n(x_0) x$,
vertically shifted by $\tau(x_0)$, and shrunk by a factor
$\frac{1}{2^{n}}$.
\end{lemma}
\paragraph{Proof.}
By Lemma \ref{le23}(1), we have $\tau_{n}(x_0 + \frac{w}{2^{n}}) =
\tau_{n}(x_0) + {D}_n(x_0)\cdot \frac{w}{2^n}$.
Therefore, by \eqref{eq101} it follows that
\begin{align*}
\tau(x) & = \tau_{n}(x) + \sum_{j=n}^\infty
\frac{\ll 2^j x\gg}{2^j}\\
& = \tau_{n}(x_0) + {D}_n(x_0)\cdot \frac{w}{2^n} +
\sum_{j=n}^\infty \frac{\ll 2^j (\frac{w}{2^{n}}) \gg}{2^j}\\
& = \tau(x_0) + \frac{1}{2^n} \left( \tau(w) + {D}_n(x_0) w \right).~~~\Box
\end{align*}
\paragraph{Remark.}
Lemma \ref{le25} simplifies
in the special case that $D_n(x_0)=0$, in which case
we call $x_0$ a {\em balanced dyadic rational};
this can only occur when $n=2m$ is even. The formula \eqn{251} becomes
\begin{equation*}
\tau( x) = \tau(x_0) + \frac{\tau(w)}{2^n},
\end{equation*}
which shows the graph of the Takagi function over the subinterval $[\frac k
{2^n}, \frac{k+1}{2^n}] \subseteq [0,1]$, up to a translation,
consists of the image of the entire graph scaled by $\frac 1 {2^n}$.
Balanced dyadic rationals play a special role in our analysis,
see Definition \ref{de32}.
\section{Properties of Local Level Sets}\label{sec3}
\setcounter{equation}{0}
In this section we derive some basic properties of local level sets.
Then we determine the size of ``abscissa generic" local level sets,
showing these are uncountable sets of Hausdorff dimension $0$.
Finally we determine the structure of local level sets that contain
a rational number $x$, showing they are either finite sets or
Cantor sets of positive Hausdorff dimension.
\subsection{Partition into local level sets} \label{sec31}
We first show that level sets partition into local level sets.
\begin{theorem}~\label{th31}
(1) Local level sets $L_x^{loc}$ are closed sets. Two local level sets
either coincide or are disjoint.
(2) Each local level set $L_x^{loc}$ is contained in a level set:
$L_x^{loc} \subseteq L(\tau(x))$.
That is, if $x_1 \sim x_2$ then $\tau(x_1) = \tau(x_2).$
(3) Each level set $L(y)$ partitions into local level sets
\begin{equation}
\label{300e}
L(y) = \bigcup_{{x \in \Omega^{L}}\atop{\tau(x)= y}} L_{x}^{loc}.
\end{equation}
Here $\Omega^{L}$ denotes the collection of leftmost endpoints of all local level sets.
\end{theorem}
\paragraph{Proof.}
(1) A local level set, specified by a binary expansion of $x$, is
generated by any one of its elements,
by block-flipping operations (allowing an infinite
number of blocks to be flipped at once) including
$x \to 1-x$ : if $x_2 \in L_{x_1}^{loc}$
then $x_1 \in L_{x_2}^{loc}$ and $L_{x_1}^{loc}= L_{x_2}^{loc}$.
Each set $L_x^{loc}$ is closed,
because any Cauchy sequence $\{ x_n: n \ge 1\}$
in $L_x^{loc}$ must eventually ``freeze" the choice made in any finite
initial set of ``blocks", so that for each $k \ge 1$ there is a value
$n(k)$ so that
$B_k(x_n)= B_k(x_m)$ whenever $n,m \ge n(k)$. Since
all balance-sets $Z(x_n)$ coincide, the limit value $x_{\infty}$
has $B_k(x_{\infty}) = B_k(x_n)$ for all $n \ge n(k)$ so
$B_k(x_{\infty}) \sim B_k(x)$ for all $k \ge 0$. This same relation shows
that the first $k$ balance points of $x_{\infty}$ coincide with those of such $x_n$,
and letting $k \to \infty$ we have $Z(x_{\infty}) = Z(x)$. Thus $x_{\infty} \sim x$,
so $x_{\infty} \in L_x^{loc}$. \\
(2) We assert that if $x \sim x'$, with $x'$ obtained from $x$ by flipping a single block of symbols,
then $\tau(x) = \tau(x')$. This holds since
a block-flip after the $k$th binary digit of $x$ corresponds to
a reflection of $x$ about the center of the dyadic interval of length
$\frac 1 {2^k}$ containing $x$; by Lemma \ref{le25} and Lemma \ref{le22}, the Takagi
function restricted to this interval has this reflection symmetry.
The case $\tau(x) = \tau(x'')$ of general $x''$
in $L_x^{loc}$ will then follow by
flipping
the blocks of $x$ in increasing order as necessary
to match those of $x''$, getting a sequence $\{ x_n: n \ge 1\} \subset L_x^{loc} $
with $\tau(x_n) = \tau(x)$. Now $\lim_{n \to \infty} x_n = x''$
so using the fact that $\tau(x)$ is a continuous
function, we conclude $\tau(x'') = \lim_{n \to \infty} \tau (x_n) = \tau(x).$\\
(3) Local level sets, being closed, have a leftmost
endpoint, and we
can then uniquely label local level sets with their leftmost endpoint. $~~~\Box$ \medskip
We immediately deduce that any level set that is countably infinite must
contain infinitely many local level sets.
\begin{coro}~\label{cor32}
{\em(Countable Level Sets)}
(1) Each local level set $L_x^{loc}$ is either finite or
uncountable.
(2) Each level set $L(y)$ that is countably infinite
is necessarily a countable disjoint union of finite local level sets.
\end{coro}
\paragraph{Proof.}
(1) This dichotomy for $L_x^{loc}$ is determined by whether the balance set $Z(x)$ is
finite or infinite, since $L_x^{loc}$ is a Cantor set in the latter case.
(2) If $L(y)$ is countably infinite, all local level sets it contains must
be finite, by (1). Thus there must be infinitely many of them.
$~~~\Box$\\
We will later show that
case (2) above occurs: Theorem \ref{th38} proves that
$L(\frac{1}{2})$ is countably infinite.
\subsection{Generic local level sets}\label{sec32}
We analyze the size of ``abscissa generic" local level sets sampled by choosing
$x$ uniformly on $[0, 1]$, and prove these are
Cantor sets of Hausdorff dimension $0$.
\paragraph{Proof of Theorem \ref{th15}.}
The sequence of binary digits in $x=0.b_1b_2b_3...$ of a real number
in $[0,1]$ corresponds to taking a (random) walk on
the integer lattice, starting at point $s_0=0$, with steps $+1$ or $-1$,
with $b_j=0$ corresponding to taking a step in the positive direction,
and $b_j=1$ corresponding to taking a step in the negative direction, i.e.
at time $k$ the walk is at position
$$
s_k = s_0 + \sum_{j=1}^k (-1)^{b_j}.
$$
The interval $[0,1]$ sampled by drawing a random
point $x$ with the uniform distribution (i.e. using Lebesgue measure on $[0,1]$)
corresponds in probability to taking a simple random walk with equal probability steps.
(See Billingsley \cite[Sect. 3]{Bil60},\cite{Bil65}).\\
The relevant property of the random walk is that
a one-dimensional random walk is {\em recurrent}; that is, with probability one it
returns to the origin infinitely many times.
Thus with probability one the balance-set $Z(x)$ includes infinitely many
balance points, so that with probability one the local level set $L_x^{loc}$ has the
structure of a Cantor set, hence is uncountable. This corresponds to a
full Lebesgue measure set of points $x$ having a local level set that
is uncountable.\\
For a treatment of Hausdorff dimension, see Falconer \cite{Fa03}.
To establish the Hausdorff dimension $0$ assertion, we use the result
that with probability one the number of times a simple random walk
returns to the origin in the first $n$ steps is $o(n)$ as $n \to \infty$; in fact
with probability one it is $O\left( n^{\frac{1}{2} + \epsilon}\right)$ as $n \to \infty$.
(See Feller \cite[Chap III]{Fe68}, \cite[Chap XII]{Fe71}).
The proof is then completed by the following deterministic result.
\paragraph{Claim.}
{\em Any $x=0.b_1b_2b_3...$ that has the
property that the number of returns to the origin in the first $n$ steps
of the corresponding random walk is
$o(n)$
as $n \to \infty$ necessarily has Hausdorff dimension
$\dim_{H} ( L_x^{loc}) = 0.$ }\\
To show this, let the balance points of $Z(x)$ be
$0 < c_1< c_2 < c_3< ...$. The hypothesis on $x$ implies
$$
\lim_{ k \to \infty} \frac{k}{c_k} = 0.
$$
We can now cover $L_x^{loc}$ by
$2^k$ dyadic intervals of length $2^{-c_k}$, since there are only $2^k$ possible
flipped sequences; call this covering ${\cal C}_k$. For any $\delta>0$, we have
$$
\sum_{I_j \in {\cal C}_k} |I_j|^\delta= 2^k 2^{- \delta c_k} \to 0 \quad \mbox{as} \quad k \to \infty,
$$
since $\frac{c_k}{k} \to \infty.$ This proves the claim,
which completes the proof.
$~~~\Box$\\
\subsection{ Local level sets containing rational numbers} \label{sec33}
Knuth~\cite[Sect. 7.2.1.3, Exercise 83]{Kn09} raised the question of determining which
rational $y$ have an uncountable level set $L(y)$.
We address here the easier question
of determining which rational numbers $x$
have an uncountable local level set $L_x^{loc}$.
We also show that uncountable local level sets that contain a rational
necessarily have positive Hausdorff dimension.
\begin{theorem}~\label{th50} {\em (Rational local level sets)}
For a rational number $x=\frac{p}{q} \in [0,1]$, the following properties are equivalent.
\begin{enumerate}
\item[(1)] The local level set $L_x^{loc}$ has positive Hausdorff dimension.
\item[(2)] The local level set $L_x^{loc}$ is uncountable.
\item[(3)] The binary expansion of $x$ has a purely periodic part
with an equal number of zeros and ones, and also has a
preperiodic part with an equal number of zeros and ones.
\end{enumerate}
Moreover, if these equivalent properties hold, then $\dim_H(L_x^{loc})
= \frac k r$, where $r$ is the number of bits in the periodic part of the binary
expansion of $x$ and $k$ is the number of balance points per period.
\end{theorem}
\paragraph{Proof.}
Trivially, (1) implies (2).\medskip
To show (2) implies (3), let $x = 0.b_1b_2b_3\ldots$ be the
binary expansion of $x = \frac{p}{q}$.
Let the balance-set $Z(x) := \{ c_j : {D}_{c_j}(x) = 0 \}$ be the set of balance points of $x$,
as defined in
(\ref{123}). By inspection, condition (3) is equivalent to the set $Z(x)$
having
infinite cardinality. From the definition of local level set
$L_x^{loc}$, we see that the cardinality of $L_x^{loc}$ is
$$ \# L_x^{loc} = 2^{\#Z(x)}.$$
Hence, $L_x^{loc}$ is uncountable if and only if (3) holds. \medskip
It remains to show that (3) implies (1).
Since Hausdorff dimension is invariant under scaling, translation, and
finite union, we may reduce to the case where the rational $x$ has no
preperiodic part; that is, $x = 0.(b_1b_2\ldots b_r)^\infty$, with $r$ the
length of the periodic part. Say
that these $r$-bits are partitioned into precisely $1 \leq k \leq
\frac r 2$
blocks by the balance points of $x$.
Let $B_1,\ldots, B_{2^k}$ be the $2^k$ possible dyadic rationals
obtained by applying block-flipping operations to $B_1 = 0.b_1\ldots
b_r$.
Define the function $S_i(x) := B_i + \frac{x}{2^r}.$ Using the
terminology of Falconer \cite[\S 9.2]{Fa03},
the functions $\{S_1, \ldots, S_{2^k} \}$ form an iterated function
system with attractor equal to the local level set
$L_x^{loc} = \bigcup_{i=1}^{2^k} S_i(L_x^{loc}).$
Furthermore, the open set $V = (0,1)$ is a bounded open set for which
$V \supseteq \bigcup_{i=1}^{2^k} S_i(V)$ with the union disjoint.
Therefore, it satisfies the hypothesis of
\cite[Theorem 9.3]{Fa03}, whose conclusion yields that the Hausdorff
dimension
$\dim_H(L_x^{loc}) = s$
where $s$ is the unique number for which
$ \sum_{i=1}^{2^k}c_i^s = 1$,
and $c_i = 2^{-r}$ is the ratio of similitude of the operator $S_i$.
The equation is easily solved for $s$ yielding $s = \frac k r > 0$.
Thus, (3) implies (1).
(It is also a consequence of \cite[Theorem 9.3]{Fa03} that the
$s$-dimensional Hausdorff measure of $L_x^{loc}$ is a finite positive
value.)
$~~~\Box$\\
\paragraph{Remarks.}
(1) By Theorem \ref{th50}(3), any
local level set of a rational number $x$
that is uncountable necessarily
contains infinitely many rational numbers $x'$,
by using periodic sequences of flippings.
(2) A dyadic rational $x$ always belongs to a
finite local level set $L_x^{loc}$. This follows immediately from
Theorem \ref{th50} since local level sets are either finite or
uncountable.
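As a concrete instance of the dimension formula, consider $x= \frac{1}{5} = 0.(0011)^{\infty}$. Its binary expansion is purely periodic with period $r=4$, and ${D}_1(x)=1$, ${D}_2(x)=2$, ${D}_3(x)=1$, ${D}_4(x)=0$, after which this pattern repeats; thus there is $k=1$ balance point per period, and Theorem~\ref{th50} gives $\dim_H(L_{1/5}^{loc}) = \frac{1}{4}$.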
\section{Left Endpoints of Local Level Sets}\label{sec4}
\setcounter{equation}{0}
We study the set of endpoints of local level sets. The leftmost endpoints
fall in $[0, \frac{1}{3}]$ and the rightmost endpoints in $[\frac{2}{3}, 1]$,
and these sets are related by the operation $x \to 1-x$.
Therefore it suffices to study
the leftmost endpoint set, denoted $\Omega^{L}$.
The usefulness of this set is that it parametrizes
the complete collection of all local level sets,
and the Takagi function turns out to be well-behaved when
restricted to $\Omega^{L}$.
Theorem \ref{th33}
below describes the main properties of $\Omega^{L}$.
\subsection{Deficient digit set} \label{sec41}
We make the following definition, which will be shown later to
coincide with the set of all leftmost endpoints of local level sets.
\begin{defi}\label{de31}
{\em
The {\em deficient digit set} $\Omega^{L}$ consists of all
points
$$
\Omega^{L} := \left\{ x = \sum_{j=1}^{\infty} \frac{b_j}{2^j}: ~~ {D}_j(x)
\ge 0 ~\mbox{for ~all} ~~j \ge 1\right\},
$$
in which the deficient digit function ${D}_j(x)= j-2{N}^{1}_j(x)$ counts the number of binary
digits equal to $0$ minus the number equal to $1$ among the first $j$ digits. \smallskip
Note that dyadic rationals $\frac{m}{2^r}$
have two different binary expansions; at most one of
these two expansions can belong to $\Omega^{L}$;
if one of them does, then by convention
we assign the dyadic rational to the set $\Omega^{L}$.
}
\end{defi}
We will establish in Theorem~\ref{th33} that the deficient digit set $\Omega^{L}$ is a Cantor set
having Lebesgue measure zero.
We need the following result as a preliminary step.
Recall that a dyadic rational binary expansion
$x:= \frac{k}{2^n}=0.b_1 b_2... b_n0^{\infty}$ (with $k$ odd) is
said to be {\em balanced} if
${D}_{n}(x)=0$, and that $n=2m$ is necessarily even.
\begin{lemma}\label{le31} {\em (Balanced dyadic rationals in $\Omega^{L}$)}
For a fixed integer $m\ge 0$,
the set of dyadic rationals $\frac{k}{2^{2m}}= 0.b_1 b_2 \cdots b_{2m}$ that
are balanced and belong to $\Omega^{L}$, i.e. have digit sums satisfying
\begin{equation*}
{D}_{j}\left(\frac{k}{2^{2m}}\right) \ge 0 ~~\mbox{for}~~1\le j \le 2m-1~~~\mbox{and}~~
{D}_{2m}\left(\frac{k}{2^{2m}} \right) =0,
\end{equation*}
has cardinality the $m$-th Catalan number $C_{m} = \frac{1}{m+1} \left( {{2m}\atop{m}} \right),$
with $C_0=1$.
\end{lemma}
\paragraph{Proof.}
Each such dyadic rational describes a lattice path starting from $(0,0)$ and
taking steps $(1,1)$ or $(1, -1)$ in such a way as to stay on or above the line $y=0$.
Here the $j$-th step $(1,1)$ corresponds to $b_j=0$ and $(1,-1)$ to $b_j=1$;
the last step necessarily has $b_{2m}=1$.
Such steps can be counted using Bertrand's ballot theorem
(Feller \cite[p.73] {Fe68}). To apply the theorem
we count paths from $(0,0)$ to $(n, x) = (2m+1, 1)$ that
stay strictly above the $x$-axis. Note that all such paths must go
through $(1,1)$, and
that to end at $(2m+1, 1)$ the last step must be $(1, -1)$. The
number of paths is therefore
$$
\frac{1}{2m+1} \left( {{2m+1}\atop{m}} \right)= \frac{1}{m+1} \left( {{2m}\atop{m}} \right)=C_m,
$$
as asserted.
$~~~\Box$\\
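For example, for $m=2$ the lemma gives $C_2=2$: among the six balanced four-digit expansions, only $0.0011$ and $0.0101$ have all partial sums ${D}_j \ge 0$, while $0.0110$, $0.1001$, $0.1010$ and $0.1100$ each have some ${D}_j < 0$.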
We will determine the set of open intervals removed from $[0,1]$
to create the deficient digit set $\Omega^{L}$. The following set ${\cal B}$ supplies
labels for these open intervals.
\begin{defi} \label{de32}
{\em
(1) The {\em breakpoint set}
${\cal B}^{'}$ is the set of
all balanced dyadic rationals in $\Omega^{L}$.
It consists of ${B_{\emptyset}}'=0$ together with the collection of all dyadic rationals
$B' = \frac{n}{2^{2m}}$ that have binary expansions of the form
$$
B'=0.b_1 b_2 ... b_{2m-1}b_{2m} ~~~\mbox{for some}~~ m \ge 1,
$$
that satisfy the condition
\begin{equation*}
{D}_j(B') \ge 0 ~~~\mbox{for} ~~1 \le j \le 2m-1,
~~~\mbox{and}~~~{D}_{2m}(B') =0.
\end{equation*}
(2) The {\em small breakpoint set} ${\cal B}$ is the subset of
the breakpoint set ${\cal B}'$ consisting of
${B_{\emptyset}}=0$ plus all members of ${\cal B}'$ satisfying the extra
condition that
the last two binary digits $b_{2m-1}= b_{2m}=1.$
\noindent We may rewrite a dyadic rational in the small breakpoint set as
\begin{equation}
\label{332a}
B= 0. b_1 b_2 ... b_\ell 0 1^k, ~~~\mbox{with}~~ k \ge 2,
\end{equation}
where $2m= k+\ell+1.$
}
\end{defi}
We show that values in the small breakpoint set ${\cal B}$
naturally label the left endpoints $x(B)^{-}$
of the intervals removed from
$[0,1]$ to create the deficient digit set $\Omega^{L}$.
(It is also possible to give a nice labeling for the right endpoints $x(B)^{+}$
which we omit here.)
\begin{defi} \label{de33b}
{\em
For each dyadic rational $B=0.b_1b_2 ...b_{\ell} 01^k$, $k \ge 2$
in the small breakpoint set ${\cal B}$ ($B \ne {B_{\emptyset}}$) we associate
the open interval
\begin{equation*}
I_B := (x( B)^{-}, x(B)^{+})
\end{equation*}
having the endpoints
\begin{eqnarray*}
x(B)^{-} & := &0. b_1 b_2 ... b_\ell 0 1^k (01)^{\infty}\\
x(B)^{+} & := & 0. b_1 b_2 ... b_{\ell} 1 0^k (00)^{\infty}.
\end{eqnarray*}
For $B= {B_{\emptyset}}$ we set
\begin{equation*}
I_{{B_{\emptyset}}} := ( x({B_{\emptyset}})^{-} , x({B_{\emptyset}})^{+}) :=( 0.(01)^{\infty},
1.(00)^{\infty}) = \left( \frac{1}{3}, 1\right).
\end{equation*}
}
\end{defi}
Some data on $I_{{\cal B}}$ for small $\ell$ and $k$ appears in
Table \ref{tab2}.
\begin{table}[h]
\begin{center}
$$
\begin{array}{|c|c|c|c|c|c|}
\hline
~& B = j/2^{2m} & x(B)^{-} & x(B)^{+} & \tau(x(B)^{-}) & \tau(x(B)^+)\\
\hline
2m=4&&&&&\\
\hline
&3/16=.0011& 5/24=.0011(01)^\infty& 1/4= .0100& 13/24 &1/2\\
\hline
2m=6&&&&&\\
\hline
&7/64= .000111& 11/96= .000111(01)^\infty& 1/8 = .001000& 37/96 &3/8\\
\hline
&11/64=.001011& 17/96= .001011(01)^\infty& 3/16= .001100& 49/96&1/2\\
\hline
&19/64=.010011&29/96= .010011(01)^\infty& 5/16= .010100& 61/96&5/8\\
\hline
\end{array}
$$
\end{center}
\caption{Binary expansions of $B$, $x(B)^{-},$ and
$x(B)^{+}$ for $B \in {\cal B}$ of the form $B = \frac{j}{2^{2m}}$
and corresponding $\tau(x(B)^{-})$ and $\tau(x(B)^{+})$. }
\label{tab2}
\end{table}
\begin{lemma}~\label{le32}
The open intervals $\{ I_B:~ B \in {\cal B}\}$ have the following properties.\\
(1) The intervals $I_B$ for $B \in {\cal B}$ are all disjoint from the deficient digit set $\Omega^{L}$
and from each other. All the endpoints $x(B)^{\pm}$
belong to $\Omega^{L}$, with the exception of
$x({B_{\emptyset}})^{+}= 1$. \\
(2) For $B$ in the small breakpoint set ${\cal B}$ there holds
\begin{equation*}
x(B)^{+} - x(B)^{-} = \tau(x(B)^{-})- \tau(x(B)^{+})= \frac{1}{2^{k+\ell} \cdot 3} =\frac{1}{2^{2m-1} \cdot 3}
\end{equation*}
so that $x(B)^{+} > x(B)^{-}$. Thus the ratio $\frac{\tau(x(B)^{+})-
\tau(x(B)^{-})} {x(B)^{+} - x(B)^{-}} = -1$.
\end{lemma}
\paragraph{Proof.}
(1) We have $2m= k+\ell +1$ in \eqn{332a}, and with the
definition of $x(B)^{+}, x(B)^{-}$ this implies that there
are odd integers $n_1, n_2$ such that
\begin{eqnarray*}
x(B)^{-} &= &\frac{n_1}{2^{k+\ell+1}} + \frac{1}{2^{k+\ell+1} \cdot 3}\\
x(B)^{+} &=& \frac{n_2}{2^{\ell+1}}.
\end{eqnarray*}
Furthermore one easily sees that $x(B)^{+} > x(B)^{-}$ and that
\begin{equation}
\label{338c}
x(B)^{+} - x(B)^{-} = \frac{ 2}{2^{k+\ell+1}\cdot 3} = \frac{1}{2^{2m-1}\cdot 3}.
\end{equation}
A key property of $x(B)^{-}$ is that it belongs to $\Omega^{L}$ and satisfies
$$
{D}_{k+\ell+1+ 2j} (x(B)^{-}) = 0 ~~~\mbox{for all} ~~j \ge 0.
$$
Now the binary expansion of any number $x$ strictly between $x(B)^{-}$
and $x(B)^{+}$
first differs from $x(B)^{-}$ in a bit at location $\ell' > \ell$,
with $x$ having digit $1$ and $x(B)^{-}$ digit $0$. But a $1$ in digit
location
$k+\ell + 2j, ~~j \ge 1$ would then produce
$$
{D}_{k+\ell+ 2j} (x)= -1,
$$
which certifies $x \not\in \Omega^{L}$. If there is instead
a change from $0$ to $1$ in a digit of $x(B)^{-}$ in location $j \le \ell+1$,
then this would make $x \ge x(B)^{+}$, a contradiction. We conclude that
no point of the open interval $I_B$ belongs to $\Omega^{L}$.
We note by inspection
that all upper endpoints $x(B)^{+} \in \Omega^{L}$, except $x({B_{\emptyset}})^{+}=1$.
This certifies that all open intervals $I_B$
are disjoint (except possibly $I_{B_{\emptyset}}$), because the endpoints of the closure of each
such interval are in $\Omega^{L}$ but the interiors are not.
(This prevents
both overlap and inclusion.) Finally, $I_B \subset (0, \frac{1}{3})$
for each interval with $B \ne {B_{\emptyset}}$, yielding disjointness in all cases. \\
(2) In view of \eqn{338c} it remains to verify that
\beql{339a}
\tau(x(B)^{-}) - \tau( x(B)^{+} ) = \frac{1}{2^{k+\ell}\cdot 3}.
\end{equation}
In the notation of Lemma~\ref{le21} we write
$$
y(B)^{+} :=\tau( x(B)^{+} ) = \sum_{j=1}^{\infty} \frac{\ell_j^{+}}{2^j}; ~~~~~~~
y(B)^{-} := \tau(x(B)^{-}) = \sum_{j=1}^{\infty} \frac{\ell_j^{-}}{2^j}.
$$
Now $\ell_j^{-} = \ell_j^{+}$ for $1 \le j \le \ell$, since the two expansions agree.
We also have $\ell= 2m-k-1$ and find that for $B\in{\cal B}$
the first $\ell$ digits of $B$ necessarily contain
$m_1 = m-k = \frac{\ell-k+1}{2}$ values $b_j=1$
and $\frac{\ell+k-1}{2}$ values $b_j=0$. Now we calculate using Lemma~\ref{le21} that
$$
\ell_{\ell+1}^{-}= \frac{\ell-k+1}{2}, ~\mbox{and}~~ \ell_{\ell+1}^{+}= \frac{\ell+k-1}{2}
$$
while for $2 \le j \le k+1$ we have
$$
\ell_{\ell+j}^{-} = \frac{\ell+k+1}{2},~~\mbox{and} ~~~\ell_{\ell+j}^{+} = \frac{\ell-k+3}{2}.
$$
Thus in the first $2m=k+\ell+1$ terms we have
\beql{339c}
\sum_{j=1}^{k+\ell+1} \frac{\ell_j^{-}- \ell_j^{+}}{2^j} = -\left(\frac{k-1}{2^{\ell+1}}\right)+ \sum_{j=1}^k \frac{k-1}{2^{\ell+1+j}}
= \frac{k-1}{2^{\ell}} \left( -\frac{1}{2} + \sum_{j=2}^{k+1} \frac{1}{2^j} \right) = -\frac{ k-1}{2^{k+\ell+1}}.
\end{equation}
For the remaining terms $j \ge 2m+1$: for $x(B)^{+}$ we have $(m-k+1)$ $1$'s in the first $2m=k+\ell+1$
positions, hence
\beql{339d}
\sum_{j=2m+1}^{\infty} \frac{\ell_j^{+}}{2^j}= \sum_{j=2m+1}^{\infty} \frac{m-k+1}{2^j} =
\frac{m-k+1}{2^{k+\ell+1}}.
\end{equation}
For $x(B)^{-}$, the first $2m$ digits contain equal numbers of $0$'s and $1$'s,
so that $\ell_j^{-} = m + \tilde{\ell}_{j-2m}$ for $j \ge 2m+1$, where $\tilde{\ell}_i$ denotes the corresponding coefficient of Lemma~\ref{le21} for the expansion
of $\frac{1}{3} = 0. (01)^{\infty}$. Since we have $\tau(\frac{1}{3}) = \frac{2}{3}$,
we obtain
\beql{339e}
\sum_{j=2m+1}^{\infty} \frac{\ell_j^{-}}{2^j}= \sum_{j=2m+1}^{\infty} \frac{m}{2^j} +\sum_{j=2m+1}^{\infty}
\frac{ \tilde{\ell}_{j-2m}}{2^{j}} = \frac{m}{2^{2m}} + \frac{2}{3} \cdot \frac{1}{2^{2m}}
= \frac{ m+ \frac{2}{3}}{2^{\ell+k+1}}.
\end{equation}
Combining \eqn{339c}-\eqn{339e} yields
$$
\tau(x(B)^{-} ) - \tau(x(B)^{+}) = \frac{1}{2^{k+\ell+1}} \left( -(k-1) +(m+\frac{2}{3}) -(m-k+1)\right)
= \frac{2}{3} \cdot \frac{1}{2^{\ell+k+1}}= \frac{1}{2^{k+\ell} \cdot 3},
$$
verifying \eqn{339a}.
$~~~\Box$\\
\subsection{Properties of the deficient digit set} \label{sec42}
The following result characterizes the deficient digit set $\Omega^{L}$.
\begin{theorem}\label{th33} {\em(Properties of the deficient digit set)}\\
$~~~~~$(1) The deficient digit set $\Omega^{L}$ comprises the set of leftmost endpoints of all local
level sets. It satisfies $\Omega^{L} \subset [0, \frac{1}{3}]$.
\smallskip
(2) The deficient digit set $\Omega^{L}$ is a closed, perfect set (Cantor set).
It is given by
\beql{342}
\Omega^{L}= [0, 1) \backslash \bigcup_{ B \in {\cal B}} I_{B},
\end{equation}
where the omitted open intervals $I_B$ have right endpoint a dyadic rational and
left endpoint a rational number with denominator $3 \cdot 2^k$ for some $k \ge 1$.
\smallskip
(3) The deficient digit set $\Omega^{L}$ has Lebesgue measure zero.
\end{theorem}
\paragraph{Proof.}
(1) This property is immediate from the definition of local level
set. The leftmost
endpoint of any local level set satisfies ${D}_j(x) \ge 0$ for all $j \ge 1$ and is the only point in
$L_x^{loc}$ with this property. \medskip
(2) The definition of $\Omega^{L}$ shows that it is a closed set, since
the inequalities ${D}_j(x) \ge 0$ are preserved under pointwise limits.
Note here that any infinite binary expansion ending in $.11111...$ is excluded from
membership in $\Omega^{L}$. To see that $\Omega^{L}$ is a perfect set, we show each member of $\Omega^{L}$ is a limit
point of other members of $\Omega^{L}$.
For each member $x \in \Omega^{L}$ whose binary expansion contains an infinite number of $1$'s,
we approximate it from below by
the sequence $x_n= \frac{1}{2^n} \lfloor 2^n x\rfloor \in \Omega^{L}$ obtained by truncating it at the $n$-th digit.
For a dyadic
rational member $x= \frac{k}{2^j}$, which necessarily ends in an infinite string of zeros,
we approximate it from above using the sequence $x_n = x+ \frac{1}{2^{n+j + 2}} \in \Omega^{L}$.
To show the equality \eqn{342} set
$\Omega^{L}_C := [0, 1) \backslash \bigcup_{ B \in {\cal B}} I_{B}.$
We clearly have $\Omega^{L} \subseteq \Omega^{L}_C$, by Lemma~\ref{le32}(1).
It remains to show $\Omega^{L}_ C \subseteq \Omega^{L}$. We check the
contrapositive, that $x \not\in \Omega^{L} $ implies $x \not\in \Omega^{L}_C$. We use the criterion
that if $x \not\in \Omega^{L}$
then ${D}_j(x) <0$ for some $j\ge 1$. Now one can verify that
the removed intervals $I_B$
each detect those $x$ whose first occurrence of
${D}_j(x) <0$ is in a specified digit position $j$, with a specified digit pattern
of the first $k$ digits, followed by some string of $(01)^r$, and these enumerate all
possibilities of this kind. Thus $x \not\in \Omega^{L}_C$, showing \eqn{342}.
The properties of the endpoints of $I_{B}$ are given in Lemma \ref{le32}(2).
\medskip
(3) The set $\Omega^{L}$ is shown to have Lebesgue measure $0$ by
covering it with dyadic boxes at level $2m$ each of size $\frac{1}{2^{2m}}$,
and noting from Lemma~\ref{le31} that exactly $C_m$ such boxes need to be
used to cover $\Omega^{L}$, so that
$$
\meas(\Omega^{L}) \le \frac{C_m}{2^{2m}}.
$$
Stirling's formula gives for the Catalan numbers $C_m$ the estimate
\begin{equation*}
C_m =\left( 1 + o(1)\right) \frac{1}{\sqrt{\pi m^3} }~2^{2m} , ~~\mbox{as}~~ m \to \infty.
\end{equation*}
From this we see that
$ \frac{C_m}{2^{2m}} \to 0$ as $m \to \infty.$
$~~~\Box$.\\
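For a numerical illustration of the covering estimate in part (3) (again not needed for the argument), the following Python sketch computes the bound $C_m/2^{2m}$ for a few values of $m$ and compares it with the Stirling approximation $1/\sqrt{\pi m^3}$.
\begin{verbatim}
import math

def catalan(m):
    # C_m = binomial(2m, m) / (m + 1)
    return math.comb(2 * m, m) // (m + 1)

for m in (5, 10, 50, 200):
    bound = catalan(m) / 4**m                 # covering bound for meas(Omega^L)
    stirling = 1 / math.sqrt(math.pi * m**3)
    print(m, bound, stirling)                 # both tend to 0 as m grows
\end{verbatim}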
\subsection{Takagi function on deficient digit set} \label{sec44}
We next consider the Takagi function restricted to the deficient
digit set $\Omega^{L}$. We show firstly that it has a weak
increasing property approaching any point of $\Omega^{L}$, and secondly that it is nondecreasing
when further restricted to the set
$\frac{1}{2} \Omega^{L}:= \{ \frac{1}{2} x:~ x \in \Omega^{L}\}.$
Note the characterization
\beql{479bb}
\frac{1}{2} \Omega^{L}=\{ x \in
[0,1]: {D}_j(x) > 0 \textrm{ for all } j \geq 1\},
\end{equation}
which shows that $\frac{1}{2} \Omega^{L}$ is a subset of $\Omega^{L}$.
The following weak increasing property will be
used in the proof of Theorem \ref{th15a} to describe the
points of increase of the Takagi singular function, and in
the proof of Theorem \ref{th37} determining the expected number of
local level sets per level.
\begin{theorem}\label{th48a}
Let $x$ belong to the deficient digit set $ \Omega^{L}$.
(1) If $x$ has a binary expansion that does not end in $0^{\infty}$,
then there exists a strictly increasing sequence $\{ x_k\}_{k=1}^{\infty} \subset \Omega^{L}$
such that
\begin{equation*}
\lim_{k \to \infty} x_k = x~~~\mbox{and all}~~~ \tau(x_k) < \tau (x).
\end{equation*}
(2) If $x$ has a binary expansion that does not end in $(01)^{\infty}$,
then there exists a strictly decreasing sequence
$\{ x_k\}_{k=1}^{\infty} \subset \Omega^{L}$
such that
\begin{equation*}
\lim_{k \to \infty} x_k = x~~~\mbox{and all}~~~ \tau(x_k) > \tau (x).
\end{equation*}
(3) If $x \in \frac{1}{2}\Omega^{L}$, then (1), (2) above
hold with the stronger property that all $\{ x_k\}_{k=1}^{\infty} \subset \frac{1}{2} \Omega^{L}$.
\end{theorem}
\paragraph{Proof.}
(1) The condition that the binary expansion of $x \in \Omega^{L}$ not end in $0^{\infty}$
is necessary for there to exist an infinite sequence $x_1 < x_2 < x_3< \cdots \subset \Omega^{L}$
such that $\lim_{k \to \infty} x_k = x.$
Write $x=0.b_1 b_2 b_3...$, and note that there must be infinitely many indices
$m_1< m_2 < \cdots$ with ${D}_{m_k}(x)>0$.
We then choose the $x_k$ to be the dyadic rationals obtained by suitably
truncating the binary expansion of $x$ at these points:
\begin{equation}
\label{480a}
x_k := 0. b_1 b_2 \cdots b_{m_k}0^{\infty}.
\end{equation}
We clearly have $x_k \in \Omega^{L}$ and $\lim_{k \to \infty} x_k =x$, and
$x_1 \le x_2 \le x_3 \le \cdots$.
This sequence contains an infinite strictly increasing subsequence
since the binary expansion of $x$ does not end in $0^{\infty}$.
We also note that if $x \in \frac{1}{2} \Omega^{L},$ i.e. if each ${D}_j(x) \ge 1,$
then each $x_k \in \frac{1}{2} \Omega^{L}.$
It remains to show that $\tau(x_k) < \tau(x).$ By Lemma \ref{le21} we have
\beql{482}
\tau(x) = \sum_{j=1}^{\infty} \frac{\ell_j}{2^j}.
\end{equation}
Letting $N^1(j) := {N}^{1}_j(x)$ (resp. $N^{0}(j): = N_j^{0}(x)$) count the number of $1$'s (resp. $0$'s)
in the first $j$ digits of the binary expansion of $x$, we have
\beql{483}
\tau(x_k) = \sum_{j=1}^{m_k} \frac{\ell_j}{2^j} +
\sum_{j= m_k+1}^{\infty} \frac{N^1(m_k)}{2^j}.
\end{equation}
Now for all $j > m_k$ we have
$$
N^1(m_k) = \min (N^0(m_k) , N^1(m_k)) \le \min( N^0(j-1), N^1(j-1)) \le \ell_j,
$$
and strict inequality holds here for at least one $j > m_k$ because
the binary expansion of $x$ does not end in $0^{\infty}$.
We conclude that $\tau(x_k) < \tau (x)$ by
comparing \eqn{482} and \eqn{483} term by term.
\smallskip
(2) The condition that the binary expansion of $x \in \Omega^{L}$ not end in $(01)^{\infty}$
is necessary for there to exist an infinite sequence $x_1 > x_2 > x_3> \cdots \subset \Omega^{L}$
such that $\lim_{k \to \infty} x_k = x.$
Writing $x=0.b_1 b_2 b_3 \cdots $, there must be infinitely many indices
$m_1 < m_2 < m_3 < \cdots$ such that
\beql{485}
{D}_{j}(x) \ge {D}_{m_k}(x) ~~~\mbox{for all}~~ j \ge m_k.
\end{equation}
We now choose $x_k$ to be the rational numbers:
\beql{485a}
x_k := 0. b_1 b_2 \cdots b_{m_k} (01)^{\infty}.
\end{equation}
We clearly have $x_k \in \Omega^{L}$ and $\lim_{k \to \infty} x_k =x$, and
we have $x_1 \ge x_2 \ge x_3 \ge \cdots$
using the fact that $x \in \Omega^{L}$ together with \eqn{485}, which
implies that ${D}_j(x_{k+1}) \ge {D}_j(x_k)$ for all $j \ge 1$.
This sequence contains an infinite strictly decreasing subsequence
since the binary expansion of $x$ does not end in $(01)^{\infty}$.
Again note that if $x \in \frac{1}{2} \Omega^{L}$, so all ${D}_j(x) \ge 1$,
then all $x_k \in \frac{1}{2} \Omega^{L}.$
It remains to show that $\tau(x_k) > \tau(x)$.
For any $j \ge 1$, set $x[j]:= 0. b_{j+1} b_{j+2} \cdots$ and note that all $x[m_k] \in \Omega^{L}$
by virtue of condition \eqn{485}.
Similarly define $x_k[j]$ and note that
$x_k[m_k] =0.(01)^{\infty} \in \Omega^{L}$.
There are two cases.
\smallskip
{\em Case (i).} If ${D}_{m_k}(x)=0$ then $m_k = 2m$ and
$$
\tau(x) = \sum_{j=1}^{2m} \frac{\ell_j}{2^j} + \frac{m}{2^{m_k}} + \frac{1}{2^{m_k}} \tau(x[m_k])
$$
while
$$
\tau(x_k) = \sum_{j=1}^{2m} \frac{\ell_j}{2^j} + \frac{m}{2^{m_k}} +
\frac{1}{2^{m_k}} \cdot \frac{2}{3}.
$$
Since $\tau(x[m_k]) \le \frac{2}{3}$, and the only point of $\Omega^{L}$ with $\tau$-value $\frac{2}{3}$
is $\frac{1}{3} = 0.(01)^{\infty}$, which is excluded because the expansion of $x$ does not end in $(01)^{\infty}$,
we conclude that the strict inequality $\tau(x_k) > \tau(x)$ holds.
\smallskip
{\em Case (ii).} If ${D}_{m_k}(x) \ge 1$, then since the first $m_k+1$ digits of $x$ and $x_k$ match
we have, using Lemma \ref{le21},
$$
\tau(x) = \sum_{j=1}^{m_k} \frac{\ell_j}{2^j} + \frac{N^1(m_k)}{2^{m_k}}+
\frac{{D}_{m_k}(x)}{2^{m_k}} {x[m_k]}
+ \frac{1}{2^{m_k} } \tau(x[m_k]),
$$
and
$$
\tau(x_k) = \sum_{j=1}^{m_k} \frac{\ell_j}{2^j} + \frac{N^1(m_k)}{2^{m_k}}+
\frac{{D}_{m_k}(x)}{2^{m_k}} x_k[m_k] +
\frac{1}{2^{m_k}} \tau( x_k[m_k]).
$$
Now $x_k[m_k]= 0.(01)^{\infty} = \frac{1}{3} > x[m_k]$ and $\tau(x_k[m_k]) = \frac{2}{3} \ge
\tau(x[m_k])$, so we conclude the strict inequality $\tau(x_k) > \tau(x)$, as required.
(3) Suppose $x \in \frac{1}{2}\Omega^{L}.$
For (1) any truncation $x_k$ given
by \eqn{480a}
will automatically satisfy the defining property \eqn{479bb}
for membership in $\frac{1}{2} \Omega^{L}.$
Similarly for (2) any value $x_k$ given by \eqn{485a} will automatically satisfy \eqn{479bb}.
$~~~\Box$\\
Next consider the Takagi function restricted to the set
$\frac{1}{2} \Omega^{L}.$
We show that the Takagi function is nondecreasing on this set,
and moreover is strictly increasing off a certain specific countable
set of $x$.
We thank P. Allaart for the following proof to establish the
nondecreasing property, which replaces our original argument.
\begin{theorem}\label{th49}
(1) The Takagi function is nondecreasing on the set $\frac{1}{2} \Omega^{L}$.
(2) The Takagi function is strictly increasing on $\frac{1}{2} \Omega^{L}$
away from a countable set of points, which are a subset of those rationals having binary
expansions ending in $0^{\infty}$ or $(01)^{\infty}$. For each level
$y$ the equation $y= \tau(x)$
has at most two solutions with $x \in \frac{1}{2} \Omega^{L}$. Thus if $x_1< x_2< x_3 $ are
all in $\frac{1}{2}\Omega^{L}$ then $\tau(x_3) > \tau(x_1)$.
\end{theorem}
\paragraph{Proof.}\hspace{-.5em}
(1) [Allaart] \hspace{.5em}
Let $x < x' \in \frac 1 2 \Omega^{L}$ have the binary expansions
$x=0.b_1b_2\ldots$ and $x' = 0.b_1'b_2'\ldots$, and let $n$ be the index of the first bit that
differs in the two expansions: that is, $b_i = b_i'$ for $i < n$ and
$b_n = 0$, $b_n' = 1$.
Now set $x_1 := 0.b_1'\ldots b_n'= \frac{k+1}{2^n},$ for some $k \ge 0$. Clearly,
$x< x_1 \leq x'$. Furthermore,
${D}_i(x_1)= {D}_i(x')> 0$ for $1 \le i \le n$, while
${D}_i(x_1)= {D}_n(x_1)+(i-n) >0$ for
all $i >n$, so that
$x_1 \in \frac{1}{2}\Omega^{L}$. Lemma \ref{le23} applied to the interval
$[\frac {k+1}{2^n}, \frac{k+2}{2^n}]$ gives $\tau_n(x_1) \leq
\tau_n(x')$ since $D_n(x_1) = D_n(x') > 0$; hence
$$\tau(x_1) = \tau_n(x_1) \leq \tau_n(x') \leq \tau(x').$$
Therefore to prove the
nondecreasing property, it is enough to prove $\tau(x) \le \tau(x_1)$.
We prove the following stronger claim (which does not require
that either $x$ or $x_1 \in \frac{1}{2} \Omega^{L}$).
\paragraph{Claim.}
If $\frac {k} {2^n} \leq x <\frac{k+1}{2^n}$ and ${D}_i(x) > 0$ for all $i \geq n$, then $\tau(x)
\leq \tau(\frac {k+1}{2^n})$.
\paragraph{Proof of claim.}
The result is proved by induction on the value of $m={D}_n(x)$, at each
step proving it for all $n \ge m.$ We use the self-affine formula
of Lemma \ref{le25}, on $[\frac{k}{2^n}, \frac{k+1}{2^n}]$. Setting $x_0=\frac{k}{2^n}$,
for $x= x_0+ \frac{w}{2^n}$ with $0 \le w \le 1$, it gives
$
\tau(x) = \tau(x_0) + \frac{1}{2^n}( \tau(w) + D_n(x_0) w).
$
Taking $w=1$ gives
$\tau(\frac{k+1}{2^n})= \tau(x_0) +\frac{1}{2^n} D_n(x_0),$
and differencing yields
\beql{491}
\tau(\frac{k+1}{2^n}) - \tau(x) = \frac{1}{2^n} \left(D_n(x_0) -(\tau(w) + D_n(x_0) w)\right).
\end{equation}
Here we note that
\beql{492}
{D}_{i}(w) = {D}_{n+i}(x) -{D}_{n}(x_0), ~~~ ~~ i \ge 1.
\end{equation}
That is, the function ${D}(\cdot)$ itself undergoes a linear shift under the
variable change from $x$ to $w$.
We begin the induction with the base case
$m = 1$. We then have $m={D}_{n}(x) = {D}_{n}(x_0) = 1$,
so that for any $n \ge 1$, \eqn{491} becomes
\beql{493}
\tau(\frac{k+1}{2^n}) - \tau(x)=\frac{1}{2^n} \left(1 -(\tau(w) + w)\right).
\end{equation}
Using \eqn{492} the assumption ${D}_i(x) >0$ for $i \geq n$ implies that
${D}_{i}(w) \ge 0$ for all $i \ge 1$. This says that $w \in \Omega^{L}$, so we have $w \le \frac{1}{3}$
whence $\tau(w) + w \le \frac{2}{3} + \frac{1}{3} = 1. $ Substituting this inequality in \eqn{493}
completes the base case.
For the inductive step, fix $m \geq 1$ and assume the claim holds
for all ${D}_n(x) = m$ and all $n \ge m$.
Now suppose ${D}_n(x) = m+1$. We bisect the line segment
$\left [ \frac {k}{2^n}, \frac {k+1} {2^n}\right)
= \left[ \frac {k}{2^n}, \frac{2k +1}{2^{n+1}} \right) \cup \left[ \frac{2k + 1}{2^{n+1}},
\frac{k+1}{2^n}\right)$ into the sections where the function
$f_n(x) := \tau_{n+1}(x) - \tau_{n}(x) = \frac { \ll 2^n x \gg}{2^n}$ has constant slopes $+1$
and $-1$ respectively and check the claim in these two cases:
{\em Case (i).}
Suppose $ \frac{2k+1}{2^{n+1}} \le x < \frac{k+1}{2^{n}}.$
Here ${D}_{n+1}(x) = m$, since ${D}_{j}(x)$ is the slope
of $\tau_{j}(x)$ at $x$ (see Lemma \ref{le23}). The claim assumption
gives ${D}_{i}(x) > 0$ for all $i \ge n+1$, so the
induction hypothesis now applies to $x$ at level $n'=n+1,$ to give
$\tau(x) \leq \tau(\frac{2k+2}{2^{n+1}})= \tau(\frac{k+1}{2^n})$.
{\em Case (ii).}
Suppose $\frac{k}{2^n} \le x < \frac{2k+1}{2^{n+1}}$. Now \eqn{491} becomes
$$
\tau(\frac{k+1}{2^n}) - \tau(x)=\frac{1}{2^n} \left(m+1 -(\tau(w) + (m+1)w)\right),
$$
and the Case (ii) range of $x$ implies $0 \le w \le \frac{1}{2}.$
However for this range of $w$ we have
$$
\tau(w) + (m+1)w \le 1+ \frac{m+1}{2} \le m+1.
$$
Substituting this in the previous inequality gives $\tau(x) \le \tau(\frac{k+1}{2^n})$.
(Note: no conditions on ${D}_{i}(x)$ are required in Case (ii).)
This completes the induction step, and the claim follows.\smallskip
(2) By Theorem \ref{th48a}(3) we have $\tau(x_1) < \tau(x_2)$
for any two points $x_1$ and $x_2$ in $\frac{1}{2} \Omega^{L}$ with
$x_1 < x_2$ such that
neither $x_1$ nor $x_2$ is a rational number with binary expansion
ending in either
$0^{\infty}$ or $(01)^{\infty}$.
More is true; the conditions in Theorem \ref{th48a}
imply furthermore that equality $t=\tau(x_1) = \tau(x_2)$ for $x_1< x_2$
in $\frac{1}{2} \Omega^{L}$ can occur only if $x_1$ ends in $(01)^{\infty}$
and $x_2$ ends in $0^{\infty}$. Thirdly, using the
nondecreasing property of $\tau(x)$ on $\frac{1}{2} \Omega^{L}$,
we infer that for any level $y$ the equation
$y= \tau(x)$ has at most two solutions $x \in \frac{1}{2} \Omega^{L}$.
(A countable set of values $y$ having two solutions in $\frac{1}{2}\Omega^{L}$ exists,
with solutions being pairs
$(\frac{1}{2}x(B)^{-}, \frac{1}{2} x(B)^{+})$ associated to the intervals $I_B$, $B \in {\cal B}$.)
$~~~\Box$
\section{ Takagi Singular Function}\label{sec5}
\setcounter{equation}{0}
We now study the behavior of the Takagi function restricted to the left hand
endpoints of all local level sets. This leads to defining a singular function whose
points of increase are confined to these endpoints,
which we name the {\em Takagi singular function.}
\subsection{Flattened Takagi function and Takagi singular function} \label{sec51}
We consider the Takagi function restricted to the set $\Omega^{L}$ and linearly
interpolate it across all intervals removed in constructing $\Omega^{L}$, obtaining
a new function, the flattened Takagi function, as follows.
\begin{defi}~\label{de34}
{\em
The
{\em $\Omega^{L}$-projection function} $P^{L}(x) := x_b$ where $x_b$
is the largest point $x_b \in \Omega^{L}$ having $x_b \le x$.
This function is well-defined since $\Omega^{L}$ is a closed set, and $0 \in \Omega^{L}$.
It clearly has the projection property
$$
P^{L}( P^{L}(x)) = P^L(x).
$$
}
\end{defi}
To compute $P^{L}(x)$,
if $x= 0.b_1b_2 ...$ has $x \not\in \Omega^{L}$ then we have
$$
x_b = 0.b_1b_2... b_n(01)^\infty,
$$
where $n$ is the smallest location
in the binary expansion of $x$
such that
${D}_j(x) \ge 0$ for $j \le n$, but ${D}_{n+1}(x) <0$.
If no such $n$ exists then $x_b = x \in \Omega^{L}$.
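The recipe above is easy to implement; the following Python sketch (an illustration only, with our own function name) computes the digit sums $D_j(x)$ and returns $P^{L}(x)$, appending the tail $(01)^{\infty}$ as soon as a negative value $D_{n+1}(x)<0$ is found.
\begin{verbatim}
from fractions import Fraction

def proj_omega_L(x, max_digits=200):
    # Omega^L-projection P^L(x): largest point of Omega^L that is <= x.
    # If D_{n+1}(x) < 0 occurs, return 0.b_1 ... b_n (01)^infty.
    x0 = Fraction(x)
    y, prefix, D = x0, Fraction(0), 0
    for j in range(1, max_digits + 1):
        y *= 2
        b = int(y)                        # binary digit b_j of x
        y -= b
        D += 1 if b == 0 else -1          # D_j(x) = #0's - #1's among b_1,...,b_j
        if D < 0:                         # first failure, at position j = n + 1
            return prefix + Fraction(1, 3) / 2**(j - 1)
        prefix += Fraction(b, 2**j)
    return x0     # no failure detected: x lies in Omega^L (up to max_digits)

# x = 7/32 = 0.00111000... lies in the removed interval I_B for B = 3/16;
# its projection is the left endpoint x(B)^- = 5/24.
print(proj_omega_L(Fraction(7, 32)))      # prints 5/24
\end{verbatim}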
\begin{defi} \label{de35}
{\em
The {\em flattened Takagi function} $\tau^{L}(x)$ is given by
$$
\tau^{L}(x) := \tau(x_b) -(x-x_b) = \tau (P^{L}(x)) +P^{L}(x)- x.
$$
}
\end{defi}
This definition agrees with the Takagi function on the set
$\Omega^{L}$ of left hand endpoints of local level sets,
and it is linear with slope $-1$ across all omitted intervals $I_{B}$ between
such endpoints. According to
Lemma \ref{le32}(2) this function then
linearly interpolates across those intervals,
showing that $\tau^{L}(x)$ is a continuous function.
It captures all the variation of the Takagi function that is
outside local level sets and
``flattens it " inside local level sets.
It is pictured in Figure \ref{fig411}. (See also the data
in Table \ref{tab2}.)
\begin{figure}[h]
\begin{center}
$ \begin{array}{cc}
\includegraphics[width=2.0in]{tauLv3} &
\includegraphics[width=2.0in]{tauSv3}
\end{array}$
\end{center}
\caption{Graph of flattened Takagi function $\tau^{L}(x)$ (left) and Takagi singular function
$\tau^{S}(x)$ (right).}
\label{fig411}
\end{figure}
The flattened Takagi function has a countable set of
intervals on which it has slope $-1$. It seems natural to adjust
the function to have slope $0$ on these intervals.
Thus we study the following function.
\begin{defi} \label{de36}
{\em
The {\em Takagi singular function} $\tau^{S}(x)$ is given by
\begin{equation*}
\tau^{S}(x) := \tau^{L}(x) + x,
\end{equation*}
where $\tau^{L}$ is the flattened Takagi function.
That is, $\tau^{S}(x) = \tau(x_b) + x_b$.}
\end{defi}
It is also pictured in Figure \ref{fig411}.
\smallskip
We now prove Theorem \ref{th15a} in the introduction,
showing that $\tau^{S}(x)$ is indeed a
singular continuous function,
justifying its name.
Note that the statement of Theorem \ref{th15a} gave a different definition of
$\tau^{S}$, and part of the proof below is to show that this alternate definition coincides
with the function given by Definition \ref{de36}.
The distributional derivative of $\tau^{S}$ is a singular measure ${\mu_S}$, the Takagi singular measure,
which we study
in \cite{LM10b}.
\paragraph{Proof of Theorem \ref{th15a}.}
We first show that the function $\tau^{S}(x)$ given by Definition \ref{de36}
is a monotone singular
function.
Lemma~\ref{le32}~(2) implies
that the flattened Takagi function has slope $-1$ across all omitted intervals outside
of $\Omega^{L}$, hence the definition of the singular Takagi function given in
Definition \ref{de36} guarantees that
it is linear with slope $0$ across all such intervals $I_B$ for $B \in {\cal B}$.
The union of these intervals has full Lebesgue measure by Theorem~\ref{th33}~(3), so the variation of $\tau^{S}$ is
confined to the set $\Omega^{L}$, which is of measure $0$. To conclude that it is
a Cantor function it remains to show that
$\tau^{S}(x)$ is a nondecreasing function on $[0,1]$. Since it is
constant away from $\Omega^{L}$, it suffices to show that $\tau^{S}(x)$ is
nondecreasing when restricted to $\Omega^{L}$. Now when $x \in \Omega^{L}$
we have $\tau^{S}(x) = \tau(x) + x$, and
by the dyadic self-similarity equation
in Lemma~\ref{le22},
$\tau(x)+ x = 2\tau(\frac{x}{2})$, and here $\frac{x}{2} \in \frac{1}{2} \Omega^{L}$.
Therefore
the nondecreasing property of $\tau^{S}(x)$ is equivalent
to showing
that $\tau(x)$ is nondecreasing when restricted to $x \in \frac{1}{2}\Omega^{L}$.
This was shown in Theorem \ref{th49}.
Next we show that the function $\tau^{S}(x)$ given by Definition \ref{de36}
coincides with the function defined in the theorem statement. That is, it
has the two properties: (1) $\tau^{S}(x) = \tau(x) + x$ for all $x \in \Omega^{L}$;
and (2)
$
\tau^{S}(x) = \sup\{ \tau^{S}(x_1): x_1 \le x ~~~\mbox{with} ~~ x_1 \in \Omega^{L}\}.
$
The first property holds since for $x \in \Omega^{L}$
one has $x=x_b$ so $\tau^{S}(x) = \tau(x_b) + x_b= \tau(x)+x.$
The second property holds since
$\tau^{S}(x)$ is now known to be nondecreasing, whence
$$
\tau^{S}(x) = \tau(x_b) + x_b
=\sup\{ \tau^{S}(x_1): x_1 \le x ~~~\mbox{with} ~~ x_1 \in \Omega^{L}\}.
$$
Finally we verify that the set $\Omega^{L}$ is the closure of the set of points
of increase of $\tau^{S}(x)$. It follows from Theorem \ref{th48a}
that all points of $\Omega^{L}$ are points of increase
except for a countable set of
$x \in \Omega^{L}$ which are
rational numbers whose binary expansion ends in $0^{\infty}$
or $(01)^{\infty}$.
Since $\Omega^{L}$ is a perfect set, these rational
numbers are limit points of elements
of $\Omega^{L}$ that are irrational, hence they fall in the closure of the set of points of increase
of $\tau^{S}(x)$. $~~~\Box$
\paragraph{\bf Remark.} An alternate proof of the
monotone property of $\tau^{S}(x)$ in Theorem~\ref{th15a} can
be based on defining piecewise linear approximation functions
$\tau^{L}_{n}(x)$ and $\tau^{S}_n(x)$ to $\tau^{L}(x)$ and
$\tau^{S}(x)$, respectively, in an obvious fashion. One can prove by induction
that each $\tau^{S}_{n}(x)$ is a nondecreasing function, using Lemma \ref{le25}(1).
The approximations $\tau^{S}_{n}(x)$ approach $\tau^{S}(x)$ pointwise from below,
giving the result.
\subsection{Jordan decomposition of flattened Takagi function} \label{sec53}
We prove that the flattened Takagi function is of bounded
variation.
Recall that
a {\em function of bounded (pointwise) variation}
$f$ on $U= (0,1)$ is a (possibly discontinuous) function whose
{\em total variation}, denoted ${\rm Var} \, f $ or $V_{0}^{1}(f)$, given by
\begin{equation*}
V_{0}^1(f) = {\rm Var} \, f:= \sup \{ \sum_{i=1}^n |f(x_i)- f(x_{i-1})| : ~0<x_0 < x_1 < \cdots < x_n<1, n \ge 1 \}
\end{equation*}
is finite.
We let $BPV((0,1))$ denote the set of functions of bounded
(pointwise) variation on the open interval
$(0,1)$, following the notation of Leoni \cite[Chap 2]{Le09}.
(In the literature this space is usually denoted $BV(I)$, but this
notation leads to a conflict
with the geometric measure theory notation in Section \ref{sec6N}.)
Any function of bounded variation has a {\em monotone decomposition}
(or {\em Jordan decomposition})
\begin{equation*}
f= f_u + f_d
\end{equation*}
in which $f_u$ is an upward monotone (i.e. non-decreasing) bounded function,
possibly with jump discontinuities, and $f_d$ is a downward monotone
(i.e. non-increasing) bounded function. Such a decomposition is not unique.
Conversely, any function having a Jordan decomposition is of bounded pointwise variation.
A {\em minimal monotone decomposition} is
one such that
$$
V_0^1(f) = V_{0}^1(f_u) + V_{0}^1(f_d).
$$
\begin{theorem}\label{th36} {\em (Jordan decomposition of flattened Takagi function)}
The flattened Takagi function $\tau^{L}(x)$ is of bounded (pointwise) variation,
so is in $BPV((0,1)).$
It has a minimal monotone decomposition given by
\beql{344}
\tau^{L}(x) = f_u(x) + f_d(x),
\end{equation}
with downward part
$f_d(x)= -x$ and upward part
$f_u(x)= \tau^L(x)+x$
both being continuous functions. The upward part is a singular function
whose points of increase are supported on the deficient digit set $\Omega^{L}$.
The total
variation of the flattened Takagi function is $V_0^1(\tau^{L})= 2,$ with $V_0^1(f_u(x))= V_0^1(f_d(x))=1$.
\end{theorem}
\paragraph{Proof.}
(1) The decomposition \eqn{344} holds by definition of $\tau^{S}(x)$.
By Theorem \ref{th15a}
$f_u(x)$ is non-decreasing and
bounded, and is a monotone singular function supported on $\Omega^{L}$.
Clearly $f_d(x) = -x$ is non-increasing and bounded,
thus $\tau^{L}(x)$ is of bounded variation, hence \eqn{344} is a monotone
decomposition.
(2) The minimality of the monotone decomposition \eqn{344} is a consequence of the fact
that the function $f_d$ is
absolutely continuous with respect to Lebesgue measure,
while by Theorem \ref{th15a}
the function $f_u$ is singular with respect to Lebesgue measure.
Thus
$V_0^1(\tau^{L}) = V_0^1(f_u) + V_0^1(f_d) = 1+1= 2,$
as asserted. $~~~\Box$\\
\section{Expected number of local level sets} \label{sec6N}
\setcounter{equation}{0}
The results of the last section show that the Takagi function
restricted to the set $\Omega^{L}$ is well behaved,
giving a function of bounded variation.
We now use the coarea formula
of geometric measure theory for BV-functions to determine
the expected number of local level sets on a random level
$0 \le y \le \frac{2}{3}.$
We formulate the coarea formula
for the one-dimensional case, using the terminology of
Leoni \cite{Le09}; note that the full power of this formula lies in
the $n$-dimensional case.
For an open set $U$ of
the real line, the bounded variation space in the sense of geometric
measure theory $BV(U)$ consists of
all functions $f \in L^{1}(U)$ for which there exists a finite signed
Radon measure $\mu$ on the Borel sets of $U$ such that
$$
\int_{U} f(x) \phi'(x) dx = - \int_{U} \phi(x) d\mu
$$
for all test functions $\phi(x) \in C_{c}^{1}(U)$, where we let $C_c^k(U)$
denote the set of $k$-times continuously differentiable functions with
compact support. This signed measure
$\mu$
is called the {\em weak derivative} of $f(x)$ and is denoted
$Df$.
The Jordan decomposition theorem for measures
says that the finite Radon measure
$Df$ decomposes uniquely into the difference $Df= Df^{+} - Df^{-}$
of mutually singular nonnegative measures,
both of which are finite (\cite[Theorem B.72]{Le09}). The associated
{\em total variation measure} $|Df|$ is $|Df|= Df^{+} + Df^{-}.$ (See
Evans and Gariepy \cite[Sect 5.1, Theorem 1]{EG92} for the
$n$-dimensional case.)
The total variation of a function $f$ in $BV(U)$ is expressible
using test functions as
\begin{equation}\label{test-function-definition}
|Df|(U)= V(f, U) := \sup \Big\{ \int_{U} f(x) \phi^{'}(x) dx:
\phi(x) \in C_c^{1}(U; {\mathbb R}),
||\phi|| = \max_{x \in U}( |\phi(x)|) \le 1\Big\}.
\end{equation}
We define the {\em perimeter} of a Lebesgue measurable set $E \subset U$,
denoted $|\partial E|(U)$ or $P(E, U)$, to be the total variation of its characteristic function $\chi_E$ in $U$, i.e.
$|\partial E|(U) = |D \chi_E|(U).$
It follows from \eqref{test-function-definition} that if
$E$ is an open set in $U= (0,1)$ consisting of a finite number of
non-adjacent intervals (where non-adjacent means no two intervals have a common endpoint),
then the perimeter $|\partial E|(U)$ counts the number
of endpoints of the intervals inside $U$. (The perimeter does not detect the endpoints of $U$, e.g.
the set $E= (0, \frac{1}{2})$ has $|\partial E|(U)=1$.)
For general open sets $E \subset U$, which may have infinitely many open intervals,
the value of the perimeter is more complicated.
Two extreme cases are: (1) The complement $E :=U \smallsetminus C$ of
the middle-third Cantor set $C$ has perimeter $|\partial E|(U)=0$;
(2) An open set $E$ having a partition $E = \bigcup_{i=1}^{\infty} U_i$ in which
each open interval $U_i= (a_i, b_i)$ has an adjacent open interval $J_i = (b_i, b_i +\epsilon_i)$
disjoint from $E$ necessarily has perimeter $|\partial E|(U)=+\infty$.
Since the classical bounded variation space $BPV(U)$ (c.f. Section
\ref{sec53})
consists of functions
while the geometric measure theory space $BV(U)$
consists of
equivalence classes of functions agreeing on sets of
full Lebesgue measure,
these are distinct spaces.
However they are closely related, as follows
(Leoni \cite[Theorem 7.2]{Le09}).
\begin{prop}\label{pr61a} {\em(Relation of $BPV(U)$ and $BV(U)$.)}\\
Let $U \subset {\mathbb R}$ be an open set.
(1) If $f: U \to {\mathbb R}$ is an integrable function belonging to $BPV(U)$, then its
$L^{1}$-equivalence class belongs to $BV(U)$, and satisfies
\begin{equation*}
\var f \ge |Df|(U).
\end{equation*}
(2) Conversely
any $f \in BV(U)$ has a right continuous representative function $\bar{f}$
in its $L^{1}$-equivalence class that belongs to $BPV(U)$, and it
satisfies
\begin{equation*}
\var \bar{f} = |Df|(U).
\end{equation*}
\end{prop}
The following is a one-dimensional version of the coarea formula
for functions in $BV(U)$.
\begin{prop}\label{pr46}
{ \em (Coarea formula for BV functions)}
Let $U=(0,1)$. If $f \in BV (U)$, there holds:
(1) The upper set
\begin{equation*}
E_t:= E_t(f) = \{ x \in U: ~f(x) >t\}
\end{equation*}
has finite perimeter $|\partial E_t|$ for all but a Lebesgue measure $0$ set
of $t \in {\mathbb R}$, and the mapping
$$
t \mapsto |\partial E_t|(U), ~~~t \in {\mathbb R}
$$
is a Lebesgue measurable function.
(2) In addition the variation measure $|Df|$ of $f$ satisfies
\begin{equation*}
|Df|(U) = \int_{-\infty}^{\infty} |\partial E_t|(U) dt.
\end{equation*}
(3) Conversely, if $f \in L^{1}(U)$ and
$
\int_{-\infty}^{\infty} |\partial E_t|(U) dt < \infty,
$
then $f \in BV(U)$.
\end{prop}
\paragraph{Proof.} Versions of the coarea formula for functions
in BV(U) for a given open set $U$ in ${\mathbb R}^n$ are proved in Evans and Gariepy
\cite[Theorem 1, Sec. 5.5]{EG92} and Leoni \cite[Theorem 13.25]{Le09}.
Here we specialize to the case $U=(0,1)$.
$~~~\Box$ \\
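To make the one-dimensional coarea formula concrete, the following Python sketch (a numerical aside with an example of our own choosing) checks it for the tent function $f(x)=\min(x,1-x)$ on $U=(0,1)$: here $|Df|(U)=1$, each upper set $E_t=(t,1-t)$ for $0<t<\frac12$ has perimeter $2$, and the two sides of the formula agree.
\begin{verbatim}
N = 20000
xs = [i / N for i in range(N + 1)]
f = [min(x, 1 - x) for x in xs]

# Discrete total variation, approximating |Df|(U) = 1.
variation = sum(abs(f[i + 1] - f[i]) for i in range(N))

# Riemann sum of the perimeters of the upper sets E_t over 0 < t < 1/2.
M = 200
dt = 0.5 / M
levels = [(j + 0.5) * dt for j in range(M)]
def perimeter(t):
    ind = [1 if v > t else 0 for v in f]
    return sum(abs(ind[i + 1] - ind[i]) for i in range(N))  # jumps of chi_{E_t}
coarea = sum(perimeter(t) for t in levels) * dt

print(variation, coarea)   # both are (approximately) 1
\end{verbatim}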
In our application, all
relevant functions $f \in BPV(U)$ for $U=(0,1)$ are continuous on $[0,1]$,
in which case by Proposition \ref{pr61a}(2) we have
$|Df|(U) = \var f= V_{0}^1(f)$.
We use the Coarea formula for BV functions to
compute the expected number of local level sets
at a random level over the range $0 \le y \le \frac{2}{3}$, with respect to Lebesgue measure.
\begin{theorem}\label{th37} {\em (Expected number of local level sets)}
For a full Lebesgue measure set of ordinate points $y \in [0, \frac{2}{3}]$ the
number $N^{loc}(y)$ of local level sets at level $y$ is
finite. Furthermore $N^{loc}(y)$ is a Lebesgue measurable function satisfying
\beql{361}
\int_{0}^{\frac{2}{3}} N^{loc}(y) dy = 1.
\end{equation}
That is, the expected number of local level sets on
a randomly drawn ordinate level $y$ is $\frac{3}{2}$.
\end{theorem}
\paragraph{Proof.}
Theorem \ref{th36} shows
that the flattened Takagi function $\tau^{L}(x)$ belongs to $BPV(U)$,
for $U=(0,1).$ Using Proposition \ref{pr61a}(1) we may view it in $BV(U)$ and
apply the coarea formula for BV functions (Proposition \ref{pr46}), taking
$f= \tau^{L}$ and $U= (0,1)$
to obtain
\beql{365}
\int_{-\infty}^{\infty} |\partial E_t (\tau^L)|(U) dt=|D\tau^{L}|(U),
\end{equation}
in which $|\partial E_t (\tau^L)|(U)$ is a Lebesgue measurable
function of $t$.
Since the function $\tau^{L}(x)$ is continuous on $[0,1]$,
Proposition \ref{pr61a}(2) applies to give
$$
|D\tau^{L}|(U) = V_{0}^{1}(\tau^{L})= 2.
$$
Since $\tau^{L}$ takes values in $[0, \frac{2}{3}]$, the upper sets $E_t$ equal $U$ for $t<0$ and are empty for $t \ge \frac{2}{3}$, so they contribute zero perimeter outside this range, and we obtain
\begin{equation*}
\int_{0}^{\frac{2}{3}} |\partial E_t (\tau^L)|(U) dt =2.
\end{equation*}
We now study to what extent the integrand $|\partial E_t(\tau^{L})|(U)$ detects
endpoints of local level sets at level $t$. The function
$N^{loc}(t)$ takes nonnegative integer values or $+\infty$.
\smallskip
{\bf Claim.} {\em For irrational $t$ with $0< t < \frac{2}{3}$, there holds
\begin{equation*}
2N^{loc} (t) = |\partial E_t(\tau^L)|(U).
\end{equation*}
In particular, $N^{loc}(t)$ is a Lebesgue
measurable function, because it differs from
$\frac{1}{2} |\partial E_t(\tau^L)|(U)$ on a set of measure zero.}
\smallskip
To prove the claim, we note that for $t >0$ the upper set $E_t := E_t(\tau^{L})= \bigcup_{i} I_i$ is a finite or
countable disjoint union of nonempty open
intervals, which all lie strictly inside $U=(0,1)$.
Let $I=(a, b)$ be one such interval.
If $a \notin \Omega^{L}$, then the point $(a, \tau^{L}(a))$
would lie in the interior of a line segment of slope $-1$ in
the graph ${\cal G}(\tau^{L})$. This is impossible since a point in the interior of a line
of slope $-1$ in ${\cal G}(\tau^{L})$ will have $\tau^{L}(a+ \epsilon) = t- \epsilon < t$ for
all small enough positive $\epsilon$, contradicting $I \subseteq
E_t$. Hence $a \in \Omega^{L}$, and consequently $a$ is the left hand
endpoint of a local
level set in the level set $\tau(x) = t$.
We next assert that the point $b$ must correspond to a point in the
interior of a line segment of slope $-1$ in the graph ${\cal G}(\tau^{L})$.
This holds because, first, it cannot
correspond to an endpoint of such a segment because then it would be a rational number
with expansion ending in $0^{\infty}$, contradicting $\tau(b)=t$ being irrational,
and second, it cannot be a member of
$\Omega^{L}$, since otherwise Theorem~\ref{th48a} (1) would imply there exists
an increasing sequence of values $\{x_k\} \subseteq \Omega^{L}$ with $x_k \to b$,
having $\tau(x_k)< \tau(b)$, which would contradict $b$ being the right
endpoint of an interval of an upper set.
Thus we know $b$ corresponds to a point in the interior of a segment of slope $-1$ in ${\cal G}(\tau^{L})$,
hence there is some positive $\epsilon$ depending on $b$ such that the
adjacent open interval $(b, b+ \epsilon)$ is not contained in $E_t$.
We conclude immediately that if
$E_t$ is an infinite disjoint union of nonempty open intervals,
then $|\partial E_t|(U) = \infty$. Moreover $N^{loc}(t) =
\infty$, proving the claim in this case.
To finish, we may assume $E_t$ is a finite disjoint union of nonempty
open intervals $E_t = \cup_{i = 1}^n (a_i, b_i)$, and hence its boundary $\partial
E_t := \bar{E}_t \smallsetminus E_t= \{a_1,\ldots, a_n, b_1,\ldots, b_n\}$. By the above, each
$a_i \in \Omega^{L}$, but each $b_j \notin \Omega^{L}$, so there are gaps
between each of these intervals, and we conclude the perimeter
$|\partial E_t|(U) = 2n$.
It remains to show $N^{loc}(t)=n$. The fact that
each $a_i \in \Omega^{L}$ implies $N^{loc}(t) \ge n$.
Now let $x \in \Omega^{L}$ be the left hand
endpoint of any local level set in the level set $\tau(x) = t$.
It suffices to show that $x = a_i$ for some $1 \leq i \leq n$ since then
$N^{loc}(t) \le \# \{a_1,\ldots, a_n\} = n$.
We know $x$ is irrational because $\tau(x) = t$ is irrational by hypothesis.
Theorem~\ref{th48a}(2) then applies to show there
is a decreasing sequence $\{x_k\} \subseteq \Omega^{L}$ with
$x_k \to x$ and $\tau(x_k) > \tau(x) = t$.
The existence of the sequence $x_k$ implies that $x \in \partial E_t = \{a_1,\ldots, a_n, b_1,\ldots, b_n\}$ since
$x \notin E_t$ but is the limit of the decreasing sequence of values $x_k \in E_t$. By the above, $x \neq b_j$ for any $j$,
and therefore $x = a_i$ for some $1 \leq i \leq n$, finishing the proof of the claim.
The claim together with \eqn{365} gives \eqn{361}. $~~~\Box$.
\paragraph{Remark.}
Theorem~\ref{th37} gives no information
concerning the multiplicity of local level sets on those levels having an uncountable level
set, because the set of such levels $y$ has Lebesgue measure $0$.
\section{Levels Containing Infinitely Many Local Level Sets}\label{sec5B}
\setcounter{equation}{0}
In this section we show
that there exists a dense set of levels in $[0, \frac{2}{3}]$
that contain a countably infinite
number of local level sets; this complements Theorem~\ref{th37}.
We first show this holds for the particular level set $L(\frac{1}{2})$.
The fact that this level set is countably infinite was previously
noted by Knuth \cite[Sec. 7.2.1.3, Problem 82e]{Kn09}.
\begin{theorem}~\label{th38} {\em (Countably infinite level set)}
The level set $L(\frac{1}{2})$ is
countably infinite, with $L(\frac{1}{2}) = {\cal L}_1 \cup {\cal L}_2$ and
\begin{equation*}
{\cal L}_1:= \left\{ x_k :=\frac{1}{2} - \sum_{j=1}^k (\frac{1}{4})^j : k= 0, 1, 2, ..., \infty \right\},
\end{equation*}
with ${\cal L}_2 = \{ 1-x: ~x \in {\cal L}_1\}.$
It contains an infinite number of distinct local
level sets. One has $L_{x_{\infty}}^{loc}=\{\frac{1}{6} , \frac{5}{6}\} \subset L(\frac{1}{2})$
and $\frac{1}{6} \le x \le \frac{5}{6}$ for all $x \in L(\frac{1}{2}).$
\end{theorem}
\paragraph{Proof.} First we show that each $x \in {\cal L}_1 \cup {\cal L}_2$
satisfies $\tau(x)= 1/2$. By Lemma \ref{le22}(1), it suffices to consider $x \in {\cal L}_1$.
Let $x := x_k := \frac 1 2 - \sum_{j=1}^k (\frac 1 4)^j$ for $0 \leq k \le \infty$.
For $k \ge 1$ the dyadic rational $x_k$ has two binary expansions, the
first expansion
being $x_k^{+}=0 .0 (01)^{k-1} 1$ and the second being
$x_k^{-}=0.0(01)^{k-1}0 1^{\infty}$.
Here the
first binary expansion clearly has all ${D}_j(x_k^{+}) \ge 0$ for $j \ge 1$
which certifies that $x_k^{+} \in \Omega^{L}$. By direct calculation
$\tau(x_0)= \tau(x_1) =\frac{1}{2}$. Now the flip operation shows
for $k \ge 1$ that
$$
x_{k}^{-} =0.0(01)^{k-1} 01^{\infty} =0.(0 (01)^k 1 )1^{\infty}
\sim 0. 0(01)^k 1 0^{\infty} = x_{k+1}^{+}.
$$
Thus we deduce
$$
\tau(x_{k}^{+}) = \tau( x_k^{-}) = \tau(x_{k+1}^{+}),
$$
whence by induction on $k \ge 1$ we conclude that
$\tau(x_k) = \tau(x_k^{+})= \tau(x_k^{-})=\frac{1}{2}$ for all finite
$k \ge 1$, as asserted. Finally the case $k=\infty$, with $x_{\infty}=0.0(01)^{\infty} = \frac{1}{6}$
has
$\tau(x_{\infty}) = \lim_{k \to \infty} \tau(x_k)= \frac{1}{2}$ by continuity of the Takagi function.
Next we observe that the local level sets are
$L_{x_\infty}^{loc} = \{\frac 1 6, \frac 5 6\}$ and
$$L_{x_k^{+}}^{loc} = \{ x_k^+, \, x_{k-1}^-, \, 1- x_k^+, \,
1-x_{k-1}^- \},~~\textrm{ for }k \geq 1.$$
For example, when $k=1$ this is
$L_{x_1^{+}}^{loc} = \{\left(\frac{1}{4}\right)^+,
\left(\frac{1}{2}\right)^-, \left(\frac{3}{4}\right)^-, \left(\frac 1
2 \right)^+ \}.$
This shows that $L(\frac{1}{2})$ has infinitely many
local level sets.
It remains to show that $L(\frac 1 2)$ contains no elements other than those in
${\cal L}_1 \cup {\cal L}_2$. For this, in view of the symmetry of $\tau(x)$, it suffices to prove
two assertions:
\begin{enumerate}
\item[(1)] If $x < \frac 1 6 = x_\infty$,
then $\tau(x) < \frac 1 2$.
\item[(2)] For all $k$, if $x_k < x < x_{k-1}$, then $\tau(x) > \frac 1 2$.
\end{enumerate}
Let $x < \frac 1 6$. Lemma \ref{le22}(1) implies that
$ \tau( x ) = \frac 1 2 \tau(2x) + x$.
Combining this with the inequality $ \tau(2x) \leq \frac 2 3$ gives $\tau(x) \le \frac{1}{3} + x < \frac{1}{2}$, which proves (1).
If $x$ satisfies $x_k < x < x_{k-1} = x_k + \frac {1}{2^{2k}}$ for $k \geq
1$, then $x= x_k + \frac {x'}{2^{2k}}$ for $0 < x' < 1$. Since
${D}_{2k}(x_k) = 0$, then by Lemma \ref{le25},
$$\tau(x) = \tau(x_k) + \frac{\tau(x')}{2^{2k}} > \frac{1}{2}.$$
This proves (2).$~~~\Box$\\
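The values asserted for ${\cal L}_1$ are easy to confirm numerically. The following Python sketch (illustrative only; it reuses the truncated-series helper for $\tau$ from the sketch following Table~\ref{tab2}) evaluates $\tau$ at $x_k=\frac12-\sum_{j=1}^{k}4^{-j}$ for small $k$ and at $x_\infty=\frac16$.
\begin{verbatim}
from fractions import Fraction

def takagi(x, terms=80):
    # Truncation of tau(x) = sum_{n>=0} 2^{-n} dist(2^n x, Z).
    x, total = Fraction(x), Fraction(0)
    for n in range(terms):
        y = (x * 2**n) % 1
        total += min(y, 1 - y) / 2**n
    return total

for k in range(0, 7):
    xk = Fraction(1, 2) - sum(Fraction(1, 4)**j for j in range(1, k + 1))
    print(k, xk, float(takagi(xk)))              # tau(x_k) = 1/2 for every k
print("infty", Fraction(1, 6), float(takagi(Fraction(1, 6))))   # also 1/2
\end{verbatim}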
Now define
\beql{621}
\Lambda_{\infty}^{loc} := \{ y: L(y) ~\mbox{contains infinitely many different local level sets} \}.
\end{equation}
Theorem \ref{th38} above shows that $y= \frac{1}{2} \in
\Lambda_{\infty}^{loc}.$
Also, recall from Definition \ref{de32} that the breakpoint set ${\cal B}^{'}$
is the set of all balanced dyadic rationals $B' \in \Omega^{L}$.
\begin{theorem}~\label{th62a} {\em (Levels with an infinite number of local level sets)}
(1) The set $\Lambda_{\infty}^{loc}$
has Lebesgue measure $0$. It is not a closed set.
(2) For each ${B'}=0.b_1 b_2 \cdots b_{2m}\in {\cal B}^{'}$,
the value
\begin{equation*}
y_{B'} := \tau({B'}) + \frac{1}{2^{2m+1}}
\end{equation*}
is a dyadic rational, and $L(y_{B'})$ contains
infinitely many disjoint local level sets. Furthermore, there are
infinitely many dyadic rationals $x \in \Omega^{L}$ with $y_{B'}=\tau(x).$
(3) The set of levels
\begin{equation*}
\Delta_{\infty}^{loc} := \left\{ y_{B'}= \tau({B'}) + \frac{1}{2^{2m+1}}: {B'} \in {\cal B}^{'} \right\}
\end{equation*}
is dense in $[0, \frac{2}{3}]$. Since $\Delta_{\infty}^{loc} \subseteq \Lambda_{\infty}^{loc} $,
the set $\Lambda_{\infty}^{loc}$ is dense in $[0, \frac{2}{3}]$.
\end{theorem}
\paragraph{Proof.}
(1) This measure $0$ property of $\Lambda_{\infty}^{loc}$ follows immediately from the expected number of local
level sets being finite (Theorem~\ref{th37}).
The fact that this set is not a closed set will follow once property (3) is proved.\smallskip
(2) For each balanced dyadic rational ${B'}=0.b_1 b_2 \cdots b_{2m}$ in $\Omega^{L}$ we
consider for $k \ge 1$ the infinite set of dyadic rationals
$$
x_k({B'}): =0.b_1 b_2 \cdots b_{2m} 0(01)^{k-1} 1 = {B'} + \frac{1}{2^{2m}} x_k,
$$
where $x_k= 0.0(01)^{k-1}1$ has $\tau(x_k)= \frac{1}{2}$ by Theorem~\ref{th38}.
Using the self-affine scaling property in Lemma \ref{le25} we have
$$
\tau(x_k({B'})) = \tau({B'}) + \frac{1}{2^{2m}} \tau(x_k) = \tau({B'}) + \frac{1}{2^{2m+1}},
$$
so all points $\tau(x_k({B'}))$ are on the same level $y= y_{B'},$ and $y_{B'}$ is necessarily a
dyadic rational number.
Clearly each $x_k({B'}) \in \Omega^{L}$, so each determines a different local level set,
establishing (2).\smallskip
(3) It is easy to see that the set of balanced dyadic rationals ${B'}=0. b_1... b_{2m}$
having ${D}_j({B'}) \geq 0$ for all $j \ge 0$ and ${D}_{2m}({B'})=0$
is dense inside the deficient digit set $\Omega^{L}$. Indeed, given
any $x=0.b_1 b_2... \in \Omega^{L}$, the approximation $x_k= 0.b_1 b_2 ... b_k 1^{{D}_k(x)} 0^{\infty}$
is such a dyadic rational having $|x-x_k| \le 2^{-k}.$
Since there
is at least one local level set on each level, we have $\tau(\Omega^{L}) = [0, \frac{2}{3}].$
Since the flattened Takagi function is continuous, we conclude that
the values $y_{B'} = \tau({B'}) +\frac{1}{2^{2m+1}}$ are dense in $[0, \frac{2}{3}],$ as asserted.
$~~~\Box$\\
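For a concrete instance of part (2) (a numerical illustration only, with helper names of our own), take the balanced dyadic rational ${B'}=\frac14=0.01$, so that $2m=2$ and $y_{B'}=\tau(\frac14)+\frac18=\frac58$. The sketch below checks that the points $x_k({B'})$ all lie on this one level.
\begin{verbatim}
from fractions import Fraction

def takagi(x, terms=80):
    # Truncation of tau(x) = sum_{n>=0} 2^{-n} dist(2^n x, Z).
    x, total = Fraction(x), Fraction(0)
    for n in range(terms):
        y = (x * 2**n) % 1
        total += min(y, 1 - y) / 2**n
    return total

Bp = Fraction(1, 4)                          # B' = 0.01 is balanced, with 2m = 2
for k in range(1, 8):
    xk = Fraction(1, 2) - sum(Fraction(1, 4)**j for j in range(1, k + 1))
    point = Bp + xk / 4                      # x_k(B') = B' + x_k / 2^{2m}
    print(k, point, float(takagi(point)))    # the value is always 5/8 = 0.625
\end{verbatim}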
\paragraph{Remarks.}
\smallskip
(1) The proof above shows the stronger result that if $y \in \Lambda_{\infty}^{loc}$ then
for every balanced dyadic rational ${B'}$ that belongs to $\Omega^{L}$, one has
$y_{B'}^{\ast} : = \tau({B'}) + \frac{y}{2^{2m}} \in \Lambda_{\infty}^{loc}.$
\smallskip
(2) One can
ask whether the equality $\Delta_{\infty}^{loc} = \Lambda_{\infty}^{loc}$ might hold,
or (weaker) whether $\Lambda_{\infty}^{loc}$ is a countable set.
\section{Further Questions}\label{sec9}
\setcounter{equation}{0}
This investigation of the structure of local level sets of the Takagi function raises a number of questions for further work.
\medskip
(1) Theorem \ref{th15} shows that an abscissa generic local level set is
uncountable with probability one, with $x$ drawn uniformly from $[0,1]$.
Can one determine the expected number of local level sets at a
level $L(\tau(x))$, with $x$ drawn uniformly from $[0,1]$?
\medskip
We note that Theorem \ref{th16} sheds no light regarding this question.
It seems to involve properties related to a new
measure $\nu$, supported on $\Omega^{L}$, which is mutually singular to both
Lebesgue measure and to
the Takagi singular measure ${\mu_S}$, which we hope to discuss
elsewhere.
\medskip
(2) Theorem~\ref{th16} shows that the expected number of local level sets
at a given height $y$ drawn uniformly in $[0, \frac{2}{3}]$ is $\frac{3}{2}$.
There is an associated probability distribution
$$
\Prob[ N^{loc}(y) = k] := \frac{3}{2} \meas[ y: N^{loc}(y) =k ],
$$
whose mean value is $\frac{3}{2}$. Can one explicitly compute these
probabilities in closed form?
\medskip
(3) Can one explicitly determine the Hausdorff dimension of the local level set $L_x^{loc}$
in terms of properties of the binary expansion of $x$? In particular, to
what extent does the balance set $Z(x)$ determine the Hausdorff dimension of
$L_x^{loc}$?
\medskip
For rational $x$, we have calculated the Hausdorff dimension of
$L_x^{loc}$ in
the proof of Theorem~\ref{th50}. For general $x$,
recall that a necessary condition for positive Hausdorff dimension given in
the proof of Theorem~\ref{th15} is
that $\limsup_{k \to \infty} \frac{k}{c_k} > 0$; this condition
depends only on $Z(x)$.
\medskip
(4) Theorem \ref{th50} characterizes those rationals $x$ which have
an uncountable local
level set $L_x^{loc}$ in terms of their binary expansions.
Can one explicitly characterize (e.g. in terms of binary expansion)
which rational levels $y$ contain some
rational $x$ for which $L_x^{loc}$ is
uncountable?
\medskip
This problem, which is
a weaker version of one proposed by Knuth \cite[Sect. 7.2.1.3, Exercise 83]{Kn09},
may be difficult.
\medskip
(5) The Fourier series of the Takagi function, viewed as a periodic function of
period $1$, is explicitly known in closed form.
Can one explicitly find the Fourier series of the flattened Takagi function $\tau^{L}(x)$
or the Takagi singular function $\tau^{S}(x)$?
\medskip
The structure and behavior
of monotone singular functions, particularly including
their Fourier transforms, is a topic of some interest, tracing back to
work of Hartman and Kershner \cite{HK37},
Salem \cite{Sal42}, \cite{Sal43}, \cite{Sal51}. See Dovgoshey et al
\cite{DMRV06} for a detailed treatment of the Cantor function.
\paragraph{Acknowledgments.}
We thank D. E. Knuth for raising questions on
the Takagi function to one of us. We thank S. T. Kuroda for remarks on Takagi's
original construction, and
Mario Bonk for helpful remarks on Hausdorff dimension
and BV functions. We thank Pieter Allaart for allowing us to include
his simplified proof of Theorem \ref{th49}, and for bringing the work
of Buczolich to our attention. We thank the two reviewers
for many insightful comments and corrections, and for some
additional references.
| {
"timestamp": "2012-04-02T02:00:15",
"yymm": "1009",
"arxiv_id": "1009.0855",
"language": "en",
"url": "https://arxiv.org/abs/1009.0855",
"abstract": "The Takagi function \\tau : [0, 1] \\to [0, 1] is a continuous non-differentiable function constructed by Takagi in 1903. The level sets L(y) = {x : \\tau(x) = y} of the Takagi function \\tau(x) are studied by introducing a notion of local level set into which level sets are partitioned. Local level sets are simple to analyze, reducing questions to understanding the relation of level sets to local level sets, which is more complicated. It is known that for a \"generic\" full Lebesgue measure set of ordinates y, the level sets are finite sets. Here it is shown for a \"generic\" full Lebesgue measure set of abscissas x, the level set L(\\tau(x)) is uncountable. An interesting singular monotone function is constructed, associated to local level sets, and is used to show the expected number of local level sets at a random level y is exactly 3/2.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Level Sets of the Takagi Function: Local Level Sets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969689263265,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7099044241704621
} |
https://arxiv.org/abs/1511.01195 | Small scale equidistribution of random eigenbases | We investigate small scale equidistribution of random orthonormal bases of eigenfunctions (i.e. eigenbases) on a compact manifold M. Assume that the group of isometries acts transitively on M and the multiplicity of eigenfrequency tends to infinity at least logarithmically. We prove that, with respect to the natural probability measure on the space of eigenbases, almost surely a random eigenbasis is equidistributed at small scales; furthermore, the scales depend on the growth rate of multiplicity. In particular, this implies that almost surely random eigenbases on the n-dimensional sphere (n>=2) and the n-dimensional tori (n>=5) are equidistributed at polynomial scales. | \section{Introduction}
Let $(\M,g)$ be a compact and smooth Riemannian manifold of dimension $n$ without boundary. Denote $\Delta=\Delta_g$ the (positive) Laplace-Beltrami operator and $\{e_j\}_{j=0}^\infty$ an orthonormal basis of eigenfunctions (i.e. eigenbasis) of $\Delta$ with eigenvalues $\lambda_j^2$ (counting multiplicities), i.e. $\Delta e_j=\lambda^2_je_j$, where $\lambda_j$ is called the eigenfrequency. Without loss of generality, we assume that the injectivity radius of $\M$ is greater than $1$ throughout the paper.
When the geodesic flow $G_t$ on the cosphere bundle $S^*\M$ of $\M$ is ergodic with respect to the (normalized) Liouville measure $\mu_L$, quantum ergodicity characterizes the asymptotic behaviour of the eigenbases. In particular, the quantum ergodic theorem of \v Snirel'man-Zelditch-Colin de Verdi\`ere \cite{Sn, Ze1, CdV} states that
\begin{thm}[Quantum ergodicity]\label{thm:QE}
Assume that $G_t$ is ergodic on $S^*\M$ with respect to $\mu_L$. Then for any eigenbasis $\{e_j\}$, there is a full density subsequence of eigenfunctions $\{e_{j_k}\}\subset\{e_j\}$ such that
\begin{equation}\label{eq:QE}
\langle Ae_{j_k},e_{j_k}\rangle\to\mu_L(\sigma(A))\quad\text{as }k\to\infty
\end{equation}
for any pseudodifferential operator $A$ of order $0$ with principal symbol $\sigma(A)$.
\end{thm}
Here, the density $D$ of a subsequence $J=\{j_k\}\subset\N$ is defined as
$$D(J)=\lim_{N\to\infty}\frac{\#\{j_k<N\}}{N}\quad\text{if it exists}.$$
When $D=0$ ($>0$ or $=1$), we call such a subsequence a zero (positive or full) density subsequence.
If $\langle Ae_j,e_j\rangle\to\mu_L(\sigma(A))$ as $j\to\infty$ for every eigenbasis $\{e_j\}$, then we say that the system is quantum unique ergodic (QUE). Hassell \cite{Has} showed that classical ergodicity of $G_t$ is not sufficient to guarantee QUE. In fact, in the case of generic Bunimovich stadia, there exists a zero density subsequence of eigenfunctions such that \eqref{eq:QE} fails.
On negatively curved manifolds (i.e. all the sectional curvatures are negative everywhere), $G_t$ has properties stronger than ergodicity, e.g. a central limit theorem, strong mixing, and exponential decay of correlations. The quantum unique ergodicity conjecture states that QUE is valid on any compact negatively curved manifold, see Rudnick-Sarnak \cite{RS}. Restricting to Hecke eigenbases, QUE has been verified in special cases when $\M$ is arithmetic, by Lindenstrauss \cite{Lin}, Silbermann-Venkatesh \cite{SV}, Holowinsky-Soundararajan \cite{HS}, and Brooks-Lindenstrauss \cite{BrLi}.
One consequence of Theorem \ref{thm:QE} is that a full density subsequence of eigenfunctions $\{e_{j_k}\}$ of any eigenbasis $\{e_j\}$ displays equidistribution asymptotically: Let $\Omega\subset\M$ be a Borel subset with measure-zero boundary. Then by the Portmanteau theorem (c.f. Sogge \cite[Theorem 6.2.5]{So1}), we have
\begin{equation}\label{eq:einM}
\int_\Omega|e_{j_k}|^2\,d\mathrm{Vol}\to\frac{\mathrm{Vol}(\Omega)}{\mathrm{Vol}(\M)}\quad\text{as }k\to\infty.
\end{equation}
Here, $\mathrm{Vol}=\mathrm{Vol}_g$ is the Riemannian volume on $\M$, $\Omega$ is a fixed set independent of $\lambda$ and we say that $\Omega$ is of scale $O(1)$ (following the convention in \cite{Han}). Motivated by the applications of eigenfunction equidistribution in regions at small scales $r(\lambda)\to0$ as $\lambda\to\infty$, the author proposed the following question (\cite[Question 1.3]{Han}).
\begin{question}[Small scale equidistribution]\label{q:sseinM}
Let $\rho\in(0,1)$. Given any eigenbasis $\{e_j\}$, does there exist a full density subsequence $\{e_{j_k}\}$ such that
\begin{equation}\label{eq:sseinM}
\int_{B(x,r_{j_k})}|e_{j_k}|^2\,d\mathrm{Vol}=\frac{\mathrm{Vol}(B(x,r_{j_k}))}{\mathrm{Vol}(\M)}+o(r_{j_k}^n)\quad\text{as }k\to\infty
\end{equation}
for $r_{j_k}=r(\lambda_{j_k})=\lambda_{j_k}^{-\rho}$ and all $x\in\M$? Here, $B(x,r)$ is a geodesic ball in $\M$ with center $x$ and radius $r$.
\end{question}
On arithmetic hyperbolic manifolds, such small scale equidistribution properties of eigenfunctions have already been considered in e.g. Luo-Sarnak \cite{LS} and Young \cite{Y}.
On negatively curved manifolds, small scale equidistribution was proved at logarithmic scales by the author \cite{Han} and Hezari-Rivi\`ere \cite{HR1}. Precisely,
\begin{thm}[Small scale equidistribution on negatively curved manifolds]\label{thm:ssencm}
Let $\M$ be negatively curved. Denote $r_j=(\log\lambda_j)^{-\alpha}$.
\begin{enumerate}[(i).]
\item Assume that $\alpha\in[0,1/(2n))$. Fix a point $x_0\in\M$. Then given any eigenbasis $\{e_j\}$, there exists a full density subsequence $\{e_{j_k}\}$ (depending on $x_0$) such that
$$\int_{B(x_0,r_{j_k})}|e_{j_k}|^2\,d\mathrm{Vol}=\frac{\mathrm{Vol}(B(x_0,r_{j_k}))}{\mathrm{Vol}(\M)}+o(r_{j_k}^n)\quad\text{as }k\to\infty.$$
\item Assume that $\alpha\in[0,1/(3n))$. Then given any eigenbasis $\{e_j\}$, there exists a full density subsequence $\{e_{j_k}\}$ such that
\begin{equation}\label{eq:uniforminM}
c\,\mathrm{Vol}(B(x,r_{j_k}))\le\int_{B(x,r_{j_k})}|e_{j_k}|^2\,d\mathrm{Vol}\le C\,\mathrm{Vol}(B(x,r_{j_k}))\quad\text{as }k\to\infty,
\end{equation}
uniformly for all $x\in\M$, where the positive constants $c$ and $C$ depend only on $\M$.
\end{enumerate}
\end{thm}
Note that the range of $\alpha$ in (ii) was improved to $\alpha\in[0,1/(2n))$ (same as in (i)) by Hezari-Rivi\`ere \cite{HR1}. The equidistribution of eigenfunctions at logarithmic scales has since been applied to the $L^p$ norm and nodal set estimates of eigenfunctions by Hezari-Rivi\`ere \cite{HR1} and Sogge \cite{So2} and counting nodal domains of eigenfunctions by Zelditch \cite{Ze3}. See also a recent survey by Sogge \cite{So3}.
Now we switch our attention to the case when the geodesic flow is not necessarily ergodic. For example, on the $2$-$\dim$ sphere $\mathbb S^2$ where the geodesic flow is completely integrable (and thus not ergodic), the standard eigenbasis fails \eqref{eq:QE}. However, there is an infinite dimensional space of eigenbases due to the high multiplicity of eigenvalues; this space carries a natural probability measure. (See \S\ref{sec:spectral} for the precise definition.) Zelditch \cite{Ze2} proved that with respect to this probability measure, almost surely a random eigenbasis $\{u_j\}$ contains a subsequence $\{u_{j_k}\}$ such that $\langle Au_{j_k},u_{j_k}\rangle\to\mu_L(\sigma(A))$ for any pseudodifferential operator $A$ of order $0$; VanderKam \cite{V} improved this result, showing that almost surely $\langle Au_j,u_j\rangle\to\mu_L(\sigma(A))$ for a random eigenbasis $\{u_j\}$. As a consequence, even though the standard eigenbasis is not equidistributed, a typical eigenbasis is.
In this paper, we investigate small scale equidistribution of eigenbases on $\mathbb S^2$. In fact, our main theorem holds on the manifold $\M$ where
\begin{enumerate}
\item[(M1).] the group of isometries acts transitively on $\M$;
\item[(M2).] the multiplicity $m_\lambda$ of eigenfrequency $\lambda$ satisfies
\begin{equation}\label{eq:M2}
\liminf_{\lambda\to\infty}\frac{m_\lambda}{\log\lambda}:=L_\M>0.
\end{equation}
\end{enumerate}
Then the $n$-$\dim$ sphere $\mathbb S^n$ ($n\ge2$) and the $n$-$\dim$ torus $\T^n$ ($n\ge5$) satisfy (M1) and (M2). See \S\ref{sec:ST} for the background about eigenfunctions on the spheres and the tori.
Assuming (M1) and (M2) on $\M$, we study the small scale equidistribution of the whole sequence of a random eigenbasis (as opposed to a full density subsequence as in Question \ref{q:sseinM} and Theorem \ref{thm:ssencm}). Let $\mathcal{B}$ be the space of eigenbases with its natural probability measure. Our main theorem states
\begin{thm}[Small scale equidistribution of random eigenbases]\label{thm:sserandom}
Assume that $\M$ satisfies (M1) and (M2). Let
$$r_j=m_{\lambda_j}^{-\alpha}\quad\text{for }0\le\alpha<\frac{1}{2n}.$$
Then almost surely, a random eigenbasis $\{u_j\}\in\mathcal{B}$ satisfies
\begin{equation}\label{eq:sserandom}
\int_{B(x,r_j)}|u_j|^2\,d\mathrm{Vol}=\frac{\mathrm{Vol}(B(x,r_j))}{\mathrm{Vol}(\M)}+o(r_j^n)\quad\text{as }j\to\infty
\end{equation}
uniformly for all $x\in\M$.
\end{thm}
Therefore, a random eigenbasis is equidistributed at small scales (depending on the multiplicity growth rate) almost surely. The present work is inspired by Burq-Lebeau \cite{BuLe1, BuLe2} and also Shiffman-Zelditch \cite{SZ}. In \cite{BuLe1, BuLe2}, Burq-Lebeau proved (among other results) that, under conditions similar to (M1) and (M2), a random eigenbasis is almost surely bounded in $L^p$ norms, $2\le p<\infty$; in \cite{SZ}, Shiffman-Zelditch proved (among other results) that a random sequence of holomorphic sections on a compact K\"ahler manifold is almost surely bounded in $L^p$ norms, $2\le p<\infty$. Their results, as well as ours, are consequences of multiplicity growth and Levy concentration of measures (see \S\ref{sec:prob}).
\begin{rmk}
We can prove a similar result to Theorem \ref{thm:sserandom} if (M2) is only valid for a subsequence of eigenfrequencies $\{\lambda_{j_k}\}$: Let $\tilde\mathcal{B}$ be the space of subsequences of eigenfunctions with eigenfrequencies $\{\lambda_{j_k}\}$ endowed with a natural probability measure. Then a random subsequence of eigenfunctions $\{u_{j_k}\}\in\tilde\mathcal{B}$ satisfies \eqref{eq:sserandom} almost surely. Moreover, if in addition $m_{\lambda_j}=1$ for all $j\ne j_k$, then the probability measure on $\tilde\mathcal{B}$ is equivalent to the one on $\mathcal{B}$; we know that $u_j$, $j\ne j_k$, is equidistributed at all scales by Lemma \ref{lemma:spectralproj}; hence Theorem \ref{thm:sserandom} is still valid in this case.
We also remark that the logarithmic growth of the multiplicity in (M2) is the minimal growth rate needed to apply the argument in this paper. This is due to the exponential concentration rate in Levy concentration of measures. (See \S\ref{sec:proof} for more details of the proof.)
\end{rmk}
We next apply Theorem \ref{thm:sserandom} to the most well-known examples: the spheres and the tori; some of the basic facts about eigenfunctions on the spheres and the tori are gathered in \S\ref{sec:ST}. On $\mathbb S^n$ ($n\ge2$), $m_\lambda\gtrsim\lambda^{n-1}$. So by Theorem \ref{thm:sserandom},
\begin{cor}[Small scale equidistribution of random eigenbases on the spheres]\label{cor:sserandomspheres}
On $\mathbb S^n$ for $n\ge2$, let
$$r_j=\lambda_j^{-\rho}\quad\text{for }0\le\rho<\frac{n-1}{2n}.$$
Then almost surely, a random eigenbasis $\{u_j\}\in\mathcal{B}$ satisfies \eqref{eq:sserandom} uniformly for all $x\in\mathbb S^n$.
\end{cor}
Note that the multiplicity growth on the spheres achieves the maximal rate $m_\lambda\approx\lambda^{n-1}$. (See \S\ref{sec:spectral}.) Therefore, the range of the scale for equidistribution in Corollary \ref{cor:sserandomspheres} is the best one can get from Theorem \ref{thm:sserandom}.
On $\T^n$ ($n=2,3,4$), (M2) fails; however, there are subsequences of eigenfrequencies for which \eqref{eq:M2} is valid, so one can derive the corresponding result about these random subsequences of eigenfunctions according to the remark below Theorem \ref{thm:sserandom}. On $\T^n$ ($n\ge5$), $m_\lambda\gtrsim\lambda^{n-2}$. So by Theorem \ref{thm:sserandom},
\begin{cor}[Small scale equidistribution of random eigenbases on the tori]\label{cor:sserandomtori}
On $\T^n$ for $n\ge5$, let
$$r_j=\lambda_j^{-\rho}\quad\text{for }0\le\rho<\frac{n-2}{2n}.$$
Then almost surely, a random eigenbasis $\{u_j\}\in\mathcal{B}$ satisfies \eqref{eq:sserandom} uniformly for all $x\in\T^n$.
\end{cor}
In Hezari-Rivi\`ere \cite{HR2} and Lester-Rudnick \cite{LR}, the small scale equidistribution was proved for a \textit{full density} subsequence of \textit{any} eigenbasis. (The result in Hezari-Rivi\`ere \cite{HR2} is in fact about uniform comparability as in \eqref{eq:uniforminM}, which is a weaker version of small scale equidistribution.) Keep this difference with Corollary \ref{cor:sserandomtori} in mind. We remark that the equidistribution of toral eigenfunctions on $\T^n$ ($n\ge2$) was proved by Lester-Rudnick \cite{LR} to the scale $r=\lambda^{-\rho}$ for $\rho\in[0,1/(n-1))$, which has a smaller range when $n\ge5$ than the one in Corollary \ref{cor:sserandomtori} for random toral eigenfunctions.
\subsection*{Related results on small scale equidistribution and outline of the proof}
We shall point out the differences between small scale equidistribution of eigenbases on negatively curved manifolds (Theorem \ref{thm:ssencm}) and on manifolds satisfying (M1) and (M2) (Theorem \ref{thm:sserandom}).
In summary,
\begin{itemize}
\item On negatively curved manifolds, Theorem \ref{thm:ssencm}, in particular (ii), asserts a weaker version of small scale equidistribution (i.e. uniform comparability of volume and $L^2$-mass) of a \textit{full density} subsequence of \textit{any} eigenbasis.
\item On manifolds satisfying (M1) and (M2), Theorem \ref{thm:sserandom} asserts the equidistribution of the \textit{whole} random eigenbasis \textit{almost surely}.
\end{itemize}
In addition, the two approaches are different in nature. That is,
\begin{itemize}
\item On negatively curved manifolds, Theorem \ref{thm:ssencm} applies the correspondence between the classical dynamics of the geodesic flow $G_t$ and the quantum dynamics of the Schr\"odinger propagator $e^{it\Delta/h}$. ($h$ is the Planck parameter.) In the classical dynamics, the exponential decay of correlations of $G_t$ is essential to control the decay rate of the time-average of a symbol at small scales, see Liverani \cite{Liv}.
\item On manifolds satisfying (M1) and (M2), Theorem \ref{thm:sserandom} relies on the fact that the space of eigenbases is infinite dimensional. The proof crucially applies Levy concentration of measures, namely that the distribution of a Lipschitz function on a high-dimensional sphere decays exponentially away from its median value.
\end{itemize}
We, however, choose a strategy similar to the one used in \cite{Han, HR1} to prove Theorem \ref{thm:sserandom}. To be precise, we first show the small scale equidistribution of random eigenbases at a fixed point almost surely; then we use a covering argument to pass this equidistribution property to all points on the manifold uniformly. The key difference is that, due to the exponential concentration of probability measures, we can select a finer covering than the one used in \cite{Han, HR1}; this effectively provides the small scale equidistribution without conceding to uniform comparability as in \eqref{eq:uniforminM}.
\subsection*{Organization}
We gather some standard facts about eigenfunctions and probabilistic estimates in \S\ref{sec:pre}; the proof of Theorem \ref{thm:sserandom} is in \S\ref{sec:proof}.
Throughout this paper, $A\lesssim B$ ($A\gtrsim B$) means $A\le cB$ ($A\ge cB$) for some constant $c$ depending only on the manifold; $A\approx B$ means $A\lesssim B$ and $B\lesssim A$; the constants $c$ and $C$ may vary from line to line.
\section{Preliminaries}\label{sec:pre}
\subsection{The spectral decomposition}\label{sec:spectral}
On a compact manifold $\M$, we denote the spectral decomposition as
$$L^2(\M)=\oplus_{k=0}^\infty E_k,$$
where $E_k$ is the eigenspace of $\Delta$ with eigenvalue $\lambda_k^2$. For notational convenience, denote
$$m_k:=m_{\lambda_k}=\dim(E_k)$$
as the multiplicity of $\lambda_k$. By H\"ormander \cite{Ho}, the Weyl asymptotics of eigenvalues states
\begin{equation}\label{eq:Weyl}
\#\{\text{eigenvalues (counting multiplicities)}\le\lambda^2\}=c_0\lambda^n+O(\lambda^{n-1}),
\end{equation}
where $c_0$ depends only on $\M$. It follows that $\lambda_k\gtrsim k^{1/n}$. Observe also that
$$\#\{\text{eigenvalues (counting multiplicities)}\le\lambda^2\}=\sum_{l=0}^km_l,$$
where $\lambda_k\le\lambda<\lambda_{k+1}$. So $m_k\le c_1\lambda_k^{n-1}$, where $c_1$ depends only on $\M$. Furthermore,
$$\sum_{l=1}^km_l\le\sum_{l=1}^kc_1\lambda_l^{n-1}\le c_1k\lambda_k^{n-1},$$
thus $\lambda_k\le c_2k$ by \eqref{eq:Weyl} again. Hence,
\begin{equation}\label{eq:multibd}
k^\frac1n\lesssim\lambda_k\lesssim k\quad\text{and}\quad m_k\lesssim k^{n-1}.
\end{equation}
Write $\{e_{1,k},...,e_{m_k,k}\}$ as an orthonormal basis of eigenfunctions in $E_k$. Let $\mathbb S^d_\mathbb C\subset\mathbb C^{d+1}$ be the complex unit sphere. Then any eigenfunction in $E_k$ can be written as
$$u(x)=\sum_{i=1}^{m_k}u_ie_{i,k}(x),\quad\text{where }u_i\in\mathbb C.$$
So the space of $L^2$-normalized functions in $E_k$ can be identified with $\mathbb S^{m_k-1}_\mathbb C$. Any eigenbasis can be written as
$$\{u_{i,k}\}_{k\in\N,1\le i\le m_k},$$
where $\{u_{1,k},...,u_{m_k,k}\}$ is an orthonormal basis of eigenfunctions in $E_k$. The space of eigenbases in $E_k$ can be identified as
$$\mathcal{B}_k\cong\mathbb{U}(m_k).$$
Here, $\mathbb{U}(m_k)$ is the unitary group on $\mathbb C^{m_k}$, endowed with the Haar probability measure $\nu_k$.
The space of eigenbases $\mathcal{B}$ can then be identified as
$$\mathcal{B}\cong\times_{k=0}^\infty\mathbb{U}(m_k)\quad\text{endowed with the product probability measure }\nu:=\otimes_{k=0}^\infty\nu_k.$$
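For numerical experiments it is convenient to sample from $\nu_k$ directly. The following Python sketch (an illustration only, not part of the argument; the multiplicity $m_k=7$ and the seed are arbitrary placeholders) draws a Haar-distributed unitary matrix by the standard QR recipe; its columns are then the coefficient vectors, in the fixed basis $\{e_{i,k}\}$, of a random orthonormal basis of $E_k$.
\begin{verbatim}
import numpy as np

def haar_unitary(m, rng):
    """Sample an m x m unitary matrix from the Haar measure on U(m)."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # phase correction so that the law is exactly Haar

rng = np.random.default_rng(0)
m_k = 7                          # placeholder multiplicity of the eigenspace E_k
U = haar_unitary(m_k, rng)       # columns = coefficients of a random eigenbasis of E_k
print(np.allclose(U.conj().T @ U, np.eye(m_k)))   # True: the basis is orthonormal
\end{verbatim}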
\begin{rmk}
One can instead consider the randomization of eigenbases by real coefficients. That is, any $L^2$-normalized eigenfunction in $E_k$ is written as
$$u(x)=\sum_{i=1}^{m_k}u_ie_{i,k}(x),\quad\text{where }u_i\in\R.$$
The space of eigenbases $\tilde\mathcal{B}$ can be identified with
$$\tilde\mathcal{B}\cong\times_{k=0}^\infty\mathbb{O}(m_k)\quad\text{with product probability measure }\tilde\nu:=\otimes_{k=0}^\infty\tilde\nu_k,$$
where $\tilde\nu_k$ is the Haar measure on the orthogonal group $\mathbb{O}(m_k)$ on $\R^{m_k}$.
Then the results in Theorem \ref{thm:sserandom} and Corollaries \ref{cor:sserandomspheres} and \ref{cor:sserandomtori} are also valid, replacing $\mathcal{B}$ with probability measure $\nu$ by $\tilde\mathcal{B}$ with probability measure $\tilde\nu$. For simplicity, we only discuss $\mathcal{B}$ with $\nu$.
\end{rmk}
\begin{lemma}\label{lemma:spectralproj}
Assume that $\M$ satisfies (M1). Then
$$\sum_{i=1}^{m_k}|e_{i,k}(x)|^2=\frac{m_k}{\mathrm{Vol}(\M)}\quad\text{for all }x\in\M.$$
\end{lemma}
\begin{proof}
This is just a more general version of the theorem about zonal harmonics on the sphere (see, e.g. Sogge \cite[\S3.4]{So1}). Write the kernel of the orthogonal projection onto the space $E_k$ as
$$K_k(x,y)=\sum_{i=1}^{m_k}e_{i,k}(x)\overline{e_{i,k}(y)}.$$
It is invariant under any isometry $R$ of $\M$, i.e. $K_k(Rx,Ry)=K_k(x,y)$; therefore $K_k(x,x)$ is constant on $\M$ since the isometries act transitively on $\M$. We then derive that
$$\sum_{i=1}^{m_k}|e_{i,k}(x)|^2=K_k(x,x)=\frac{m_k}{\mathrm{Vol}(\M)}$$
because
$$\int_\M\sum_{i=1}^{m_k}|e_{i,k}(x)|^2\,d\mathrm{Vol}=m_k.$$
\end{proof}
For any $u\in E_k$, we also observe that
\begin{equation}\label{eq:Linfty}
\|u\|_{L^\infty(\M)}\le\sqrt{K_k(x,x)}\|u\|_{L^2(\M)}=\left(\frac{m_k}{\mathrm{Vol}(\M)}\right)^\frac12\|u\|_{L^2(\M)}.
\end{equation}
See e.g. Sogge \cite[\S3.4]{So1}.
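For completeness, we recall the short argument behind \eqref{eq:Linfty}: writing $u=\sum_{i=1}^{m_k}c_ie_{i,k}$ with $\sum_{i=1}^{m_k}|c_i|^2=\|u\|_{L^2(\M)}^2$, the Cauchy-Schwarz inequality and Lemma \ref{lemma:spectralproj} give
$$|u(x)|=\Big|\sum_{i=1}^{m_k}c_ie_{i,k}(x)\Big|\le\Big(\sum_{i=1}^{m_k}|c_i|^2\Big)^\frac12\Big(\sum_{i=1}^{m_k}|e_{i,k}(x)|^2\Big)^\frac12=\left(\frac{m_k}{\mathrm{Vol}(\M)}\right)^\frac12\|u\|_{L^2(\M)}.$$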
\subsection{Spherical harmonics and toral eigenfunctions}\label{sec:ST}
In this subsection, we gather some standard facts about spherical harmonics (i.e. eigenfunctions on the spheres) and toral eigenfunctions on the tori.
On $\mathbb S^n$ ($n\ge2$), any spherical harmonic $u$ in $E_k$ satisfies $\Delta u=k(k+n-1)u$ so the eigenfrequency $\lambda_k=\sqrt{k(k+n-1)}\approx k$; moreover, $\dim(E_k)\approx k^{n-1}$, which achieves the maximal growth rate in the view of \eqref{eq:multibd}. See Sogge \cite[\S3.4]{So1} for more details about spherical harmonics.
On $\T^n$ ($n\ge2$), any toral eigenfunction $u$ in $E_k$ satisfies $\Delta u=\lambda_k^2u$, where $\lambda_k^2=l_1^2+\cdots+l_n^2$ for some integers $l_1,...,l_n$. When $n=2,3,4$, (M2) fails; when $n\ge5$, $\dim(E_k)\approx\lambda_k^{n-2}$, see e.g. Grosswald \cite[(9.20)]{G}.
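The lattice-point counts behind these multiplicities are easy to tabulate. The following Python sketch (an illustration only; the cutoff $30$ is arbitrary) lists $m_\lambda=\#\{(l_1,...,l_n)\in\mathbb Z^n:l_1^2+\cdots+l_n^2=\lambda^2\}$ for the smallest eigenvalues; the counts remain small and fluctuate for $n=2$, in line with the failure of (M2) for $n=2,3,4$, and grow roughly like $\lambda^{n-2}$ for $n=5$.
\begin{verbatim}
from itertools import product

def toral_multiplicities(n, max_square_norm):
    """m_lambda = #{(l_1,...,l_n) in Z^n : l_1^2 + ... + l_n^2 = lambda^2}."""
    counts = {}
    r = int(max_square_norm ** 0.5) + 1
    for l in product(range(-r, r + 1), repeat=n):
        s = sum(x * x for x in l)
        if 0 < s <= max_square_norm:
            counts[s] = counts.get(s, 0) + 1
    return dict(sorted(counts.items()))

print(toral_multiplicities(2, 30))   # small, fluctuating multiplicities
print(toral_multiplicities(5, 30))   # multiplicities of size roughly lambda^{n-2}
\end{verbatim}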
\subsection{Probabilistic estimates}\label{sec:prob}
We now introduce the concentration of measures as the main driving force of our theorems. Let $\mathbb S^d\subset\R^{d+1}$ be the $d$-$\dim$ unit sphere endowed with its geodesic distance $\mathrm{dist}(\cdot,\cdot)$ and the uniform probability measure $\mu_d$. A real-valued function $F$ on $\mathbb S^d$ is said to be Lipschitz if
$$\|F\|_\mathrm{Lip}:=\sup_{u\ne v}\frac{|F(u)-F(v)|}{\mathrm{dist}(u,v)}<\infty.$$
A number $\Me(F)$ is said to be a median value of $F$ if
$$\mu_d(F\ge\Me(F))\ge\frac12\quad\text{and}\quad\mu_d(F\le\Me(F))\ge\frac12.$$
Levy concentration of measures \cite[Theorem 2.3, (1.10), and (1.12)]{Le} then asserts that a Lipschitz function on $\mathbb S^d$ is highly concentrated around its median value when $d$ is large.
\begin{thm}[Levy concentration of measures]\label{thm:Levy}
Consider a Lipschitz function $F$ on $\mathbb S^d$. Then for any $t>0$, we have
$$\mu_d(|F-\Me(F)|>t)\le\exp\left(-\frac{(d-1)t^2}{2\|F\|_\mathrm{Lip}^2}\right).$$
\end{thm}
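A quick Monte Carlo illustration of the phenomenon (a sketch only; the sample sizes and dimensions are arbitrary): the coordinate function $F(u)=u_1$ is $1$-Lipschitz on $\mathbb S^d$, and its typical deviation from its median shrinks like $d^{-1/2}$, as predicted by the exponential bound above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def typical_deviation(d, samples=5000):
    """Median absolute deviation of F(u) = u_1 over uniform points of S^d."""
    g = rng.standard_normal((samples, d + 1))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform points on S^d
    f = u[:, 0]
    return np.median(np.abs(f - np.median(f)))

for d in (10, 100, 1000):
    print(d, typical_deviation(d))   # decreases at the rate d^{-1/2}
\end{verbatim}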
We also need the probability distribution of a random eigenfunction in $E_k$. Let $\{e_{1,k},...,e_{m_k,k}\}$ be an orthonormal basis of $E_k$. Recalling the identification of the $L^2$-normalized functions in $E_k$ by $\mathbb S^{m_k-1}_\mathbb C$, we write
$$u(x)=\sum_{i=1}^{m_k}u_ie_{i,k}(x)=\langle(u_1,...,u_{m_k}),\overline{(e_{1,k}(x),...,e_{m_k,k}(x))}\rangle_{\mathbb C^{m_k}},$$
where $(u_1,...,u_{m_k})\in\mathbb S_\mathbb C^{m_k-1}$ and $(e_{1,k}(x),...,e_{m_k,k}(x))\in\mathbb C^{m_k}$ with length
$$|(e_{1,k}(x),...,e_{m_k,k}(x))|=\left(\frac{m_k}{\mathrm{Vol}(\M)}\right)^\frac12$$
independent of $x\in\M$ by Lemma \ref{lemma:spectralproj}. Thus, for $t\in[0,(m_k/\mathrm{Vol}(\M))^{1/2})$,
$$|u(x)|>t\quad\text{if and only if}\quad|\langle(u_1,...,u_{m_k}),\overline{(e_{1,k}(x),...,e_{m_k,k}(x))}\rangle_{\mathbb C^{m_k}}|>t.$$
We can identify $\mathbb S^{m_k-1}_\mathbb C$, equipped with the probability measure $P_k$, with $\mathbb S^{2m_k-1}$, equipped with the probability measure $\mu_{2m_k-1}$. We therefore have the following fact.
\begin{lemma}\label{lemma:nukt}
$$P_k(|u(x)|>t)=\begin{cases}
\left(1-\frac{\mathrm{Vol}(\M)t^2}{m_k}\right)^{m_k-1} & \text{if }0\le t<\left(\frac{m_k}{\mathrm{Vol}(\M)}\right)^\frac12,\\
0 & \text{if }t\ge\left(\frac{m_k}{\mathrm{Vol}(\M)}\right)^\frac12.
\end{cases}$$
for all $x\in\M$.
\end{lemma}
See e.g. \cite[\S A.1]{BuLe1} for an elementary proof.
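Lemma \ref{lemma:nukt} is also easy to check by simulation. In the sketch below (normalizing $\mathrm{Vol}(\M)=1$; the values of $m_k$ and of the sample size are arbitrary), $|u(x)|$ has the law of $\sqrt{m_k}\,|u_1|$ with $u$ uniform on $\mathbb S^{m_k-1}_\mathbb C$, and its empirical tail matches $(1-t^2/m_k)^{m_k-1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m, samples = 20, 100000            # m = m_k;  Vol(M) normalised to 1

z = rng.standard_normal((samples, m)) + 1j * rng.standard_normal((samples, m))
u = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform on the complex sphere
value = np.sqrt(m) * np.abs(u[:, 0])               # law of |u(x)| when Vol(M) = 1

for t in (0.5, 1.0, 2.0):
    print(t, np.mean(value > t), (1 - t**2 / m) ** (m - 1))
\end{verbatim}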
\section{Proof of Theorem \ref{thm:sserandom}}\label{sec:proof}
In this section, we prove Theorem \ref{thm:sserandom}. We first establish \eqref{eq:sserandom} at a fixed point on the manifold; then we use a covering argument to complete the proof for all points uniformly.
\subsection{Small scale equidistribution at a fixed point}
Fix a point $x_0\in\M$. The small scale equidistribution is a consequence of Lemma \ref{lemma:nukt} and Theorem \ref{thm:Levy}; the proof in fact follows from a further analysis of the case $q=2$ in Burq-Lebeau \cite[\S3]{BuLe1}.
Let $\{e_{1,k},...,e_{m_k,k}\}$ be an orthonormal basis of eigenfunctions in $E_k$. Let $r_k\ge0$ and define
$$F_{x_0,k}(u)=\int_{B(x_0,r_k)}|u(x)|^2\,dx,$$
in which
$$u(x)=\sum_{i=1}^{m_k}u_ie_{i,k}(x),\quad\text{where }(u_1,...,u_{m_k})\in\mathbb S^{m_k-1}_\mathbb C.$$
We know that for $q>0$ and a measurable function $F$ on $\mathbb S_\mathbb C^{m_k-1}$,
$$\int_{\mathbb S_\mathbb C^{m_k-1}}|F(u)|^q\,dP_k=q\int_0^\infty t^{q-1}P_k(|F(u)|>t)\,dt.$$
By Lemma \ref{lemma:nukt}, we compute the average value of $F_{x_0,k}$ as
\begin{eqnarray}
\mathcal A(F_{x_0,k})&=&\int_{\mathbb S_\mathbb C^{m_k-1}}\int_{B(x_0,r_k)}|u(x)|^2\,dxdP_k\nonumber\\
&=&\int_{B(x_0,r_k)}\int_{\mathbb S_\mathbb C^{m_k-1}}|u(x)|^2\,dP_kdx\nonumber\\
&=&\int_{B(x_0,r_k)}2\int_0^\infty tP_k(|u(x)|>t)\,dtdx\nonumber\\
&=&\int_{B(x_0,r_k)}2\int_0^{(m_k/\mathrm{Vol}(\M))^{1/2}}t\left(1-\frac{\mathrm{Vol}(\M)t^2}{m_k}\right)^{m_k-1}\,dtdx\nonumber\\
&=&\frac{m_k}{\mathrm{Vol}(\M)}\int_{B(x_0,r_k)}2\int_0^1t(1-t^2)^{m_k-1}\,dtdx\nonumber\\
&=&\frac{\mathrm{Vol}(B(x_0,r_k))}{\mathrm{Vol}(\M)}.\label{eq:AvFx0}
\end{eqnarray}
This just verifies that the average value of the $L^2$-mass of a random eigenfunction $u$ in any region equals the (normalized) volume of the region, since the probability distribution of $|u(x)|$ is independent of $x\in\M$. Here, we only require that $r_k\ge0$.
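For the reader's convenience: the next-to-last line of the computation comes from the change of variables replacing $t$ by $(m_k/\mathrm{Vol}(\M))^{1/2}\,t$, and the last line \eqref{eq:AvFx0} then follows from the elementary integral
$$2\int_0^1t(1-t^2)^{m_k-1}\,dt=\Big[-\frac{(1-t^2)^{m_k}}{m_k}\Big]_0^1=\frac1{m_k},$$
so that the inner integral equals $1/\mathrm{Vol}(\M)$ for every $x$.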
To compute the median value of $F_{x_0,k}$, we evaluate its Lipschitz norm. For $u,v\in\mathbb S_\mathbb C^{m_k-1}$,
\begin{eqnarray*}
|F_{x_0,k}(u)-F_{x_0,k}(v)|&\le&\int_{B(x_0,r_k)}\big||u(x)|^2-|v(x)|^2\big|\,dx\\
&=&\int_{B(x_0,r_k)}\big|(|u(x)|-|v(x)|)(|u(x)|+|v(x)|)\big|\,dx\\
&\le&\left(\int_{B(x_0,r_k)}|u(x)-v(x)|^2\,dx\right)^\frac12\left(\int_{B(x_0,r_k)}(|u(x)|+|v(x)|)^2\,dx\right)^\frac12\\
&\le&\left\|\sum_{i=1}^{m_k}(u_i-v_i)e_{i,k}(x)\right\|_{L^2(\M)}\left(\int_{B(x_0,r_k)}(|u(x)|+|v(x)|)^2\,dx\right)^\frac12\\
&\le&c\,\mathrm{dist}(u,v).
\end{eqnarray*}
Therefore, $\|F_{x_0,k}\|_\mathrm{Lip}\le c$. By Theorem \ref{thm:Levy}, we have
\begin{eqnarray*}
|\mathcal A(F_{x_0,k})-\Me(F_{x_0,k})|&=&\left|\|F_{x_0,k}\|_{L^1(\mathbb S_\mathbb C^{m_k-1})}-\|\Me(F_{x_0,k})\|_{L^1(\mathbb S_\mathbb C^{m_k-1})}\right|\\
&\le&\|F_{x_0,k}-\Me(F_{x_0,k})\|_{L^1(\mathbb S_\mathbb C^{m_k-1})}\\
&=&\int_0^\infty P_k(|F_{x_0,k}(u)-\Me(F_{x_0,k})|>t)\,dt\\
&\le&\int_0^\infty\exp\left(-\frac{(m_k-1)t^2}{\|F_{x_0,k}\|_\mathrm{Lip}^2}\right)\,dt\\
&\le&cm_k^{-\frac12}.
\end{eqnarray*}
Therefore, when
$$r_k=m_k^{-\alpha}\quad\text{for }0\le\alpha<\frac{1}{2n},$$
we have
$$|\mathcal A(F_{x_0,k})-\Me(F_{x_0,k})|=O(m_k^{-1/2})=o(r_k^n).$$
Hence, seeing \eqref{eq:AvFx0},
\begin{equation}\label{eq:MeFx0}
\Me(F_{x_0,k})=\frac{\mathrm{Vol}(B(x_0,r_k))}{\mathrm{Vol}(\M)}+o(r_k^n).
\end{equation}
In the space of eigenbases $\mathcal{B}_k$ in the eigenspace $E_k$, by Theorem \ref{thm:Levy} again,
\begin{eqnarray*}
&&\nu_k\left(\{u_{i,k}\}_{i=1}^{m_k}\in\mathcal{B}_k:\exists1\le i\le m_k,|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|>t\right)\\
&\le&\sum_{i=1}^{m_k}\nu_k\left(\{u_{i,k}\}_{i=1}^{m_k}\subset E_k:|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|>t\right)\\
&\le&e^{-cm_kt^2}m_k.
\end{eqnarray*}
Here, we use the fact that the map $\{u_{1,k},...,u_{m_k,k}\}\to u_{i,k}$ for each $i=1,...,m_k$ sends the probability measure $\nu_k$ on $\mathbb{U}(m_k)$ to $P_k$ on $\mathbb S_\mathbb C^{m_k-1}$. Then
\begin{eqnarray*}
&&\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\exists k\in\N,\exists1\le i\le m_k,|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|>t\right)\\
&\le&\sum_{k=0}^\infty\nu_k\left(\{u_{i,k}\}_{i=1}^{m_k}\in\mathcal{B}_k:\exists1\le i\le m_k,|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|>t\right)\\
&\le&c\sum_{k=1}^\infty e^{-cm_kt^2}m_k.
\end{eqnarray*}
Here, we use the fact that $E_0$ consists of constant functions which are equidistributed at any scale.
Because $\alpha\in[0,1/(2n))$, we can find $\beta\in(\alpha n,1/2)$. Let $t_l=m_k^{-\beta}l$. Since $m_k\lesssim\lambda_k^{n-1}$ by \eqref{eq:multibd}, we compute
\begin{eqnarray*}
&&\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\exists k\ge1,\exists1\le i\le m_k,|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|>t_l\right)\\
&\le&C\sum_{k=1}^\infty\exp\left(-cm_kt_l^2\right)\lambda_k^{n-1}\\
&\le&C\sum_{k=1}^\infty\exp\left(-c(1-2\beta)l^2L_\M\log\lambda_k+(n-1)\log\lambda_k\right)
\end{eqnarray*}
by Condition (M2) that $m_k\gtrsim L_\M\log\lambda_k$. Using the fact that $k^{1/n}\lesssim\lambda_k\lesssim k$ in \eqref{eq:multibd},
\begin{eqnarray}
&&\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\exists k\ge2,\exists1\le i\le m_k,|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|>t_l\right)\nonumber\\
&\le&C\sum_{k=2}^\infty\exp\left(-c(1-2\beta)l^2L_\M\log k+c'\log k\right)\nonumber\\
&\le&C\sum_{k=2}^\infty\exp\left(-cl^2\log k\right)\label{eq:nux0}\\
&\to&0\quad\text{as }l\to\infty\nonumber.
\end{eqnarray}
This implies that
$$\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\forall l\in\N,\exists k\ge2,\exists1\le i\le m_k,|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|>t_l\right)=0;$$
hence,
$$\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\exists l\in\N,\forall k\ge2,\forall1\le i\le m_k,|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|\le t_l\right)=1.$$
But if an eigenbasis $\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}$ satisfies that for some $l\in\N$,
$$|F_{x_0,k}(u_{i,k})-\Me(F_{x_0,k})|\le t_l\quad\text{for all }k\ge2,1\le i\le m_k,$$
then
$$\int_{B(x_0,r_k)}|u_{i,k}(x)|^2\,dx=\Me(F_{x_0,k})+o(r_k^n)=\frac{\mathrm{Vol}(B(x_0,r_k))}{\mathrm{Vol}(\M)}+o(r_k^n)$$
by \eqref{eq:MeFx0} and $t_l=lm_k^{-\beta}=o(m_k^{-\alpha n})=o(r_k^n)$ since $r_k=m_k^{-\alpha}$. We conclude that \eqref{eq:sserandom} holds almost surely at the fixed point $x_0\in\M$.
\subsection{Small scale equidistribution on the manifold}
To prove the small scale equidistribution uniformly for all points on the manifold, we need a covering lemma.
\begin{lemma}\label{lemma:covering}
Let $s>0$ be sufficiently small. Then there exists a family of geodesic balls that covers $\M$:
$$\bigcup_{p=1}^NB(x_p,s)\supset\M\quad\text{with }N\le c_1s^{-n},$$
where $c_1>0$ depends only on $\M$.
\end{lemma}
The covering lemma follows by choosing $\{B(x_p,s/2)\}_{p=1}^N$ as a maximal family of disjoint balls with radius $s/2$.
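The following Python sketch illustrates the covering lemma numerically on the unit square, which stands in for a coordinate picture of $\M$ (an illustration only; the sample size and the scale $s$ are arbitrary): a greedily chosen maximal $s$-separated subset of a dense sample has $O(s^{-n})$ elements, the $s/2$-balls around the chosen centers are disjoint, and the $s$-balls around them cover the sample.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def greedy_centers(points, s):
    """Maximal s-separated subset of `points`: the s/2-balls around the chosen
    centers are pairwise disjoint, and the s-balls around them cover `points`."""
    centers = []
    for p in points:
        if not centers or np.min(np.linalg.norm(np.asarray(centers) - p, axis=1)) >= s:
            centers.append(p)
    return np.asarray(centers)

n, s = 2, 0.1
points = rng.random((5000, n))           # a dense sample of the unit square
centers = greedy_centers(points, s)
print(len(centers), "centers, to be compared with s^{-n} =", s ** (-n))
\end{verbatim}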
Let $\gamma>0$ be chosen later. Then Lemma \ref{lemma:covering} implies that there exists a covering $\{B(x_p,s)\}_{p=1}^N$ with
\begin{equation}\label{eq:gamma}
s=\lambda_k^{-\gamma}:=s_k\quad\text{and}\quad N\lesssim \lambda_k^{\gamma n}.
\end{equation}
Define for $p=1,...,N$ that
$$F_{x_p,k}(u)=\int_{B(x_p,r_k)}|u(x)|^2\,dx,\quad\text{where }r_k=m_k^{-\alpha}.$$
\begin{rmk}
Notice that $\{B(x_p,r_k)\}_{p=1}^N$ is also a covering of $\M$ if $s_k\le r_k$. In fact, we shall choose $\gamma$ large enough such that $s_k=\lambda_k^{-\gamma}\ll r_k$. It is irrelevant for our purpose that the overlap in the new covering $\{B(x_p,r_k)\}_{p=1}^N$ is not uniformly bounded.
It is however crucial that for any $x\in\M$, there is $x_p$ such that the distance between $x$ and $x_p$ is less than $s_k\ll r_k$; so we can approximate $\mathrm{Vol}(B(x,r_k))$ by $\mathrm{Vol}(B(x_p,r_k))$ and $\int_{B(x,r_k)}|u(x)|^2\,dx$ by $\int_{B(x_p,r_k)}|u(x)|^2\,dx$ better than in the corresponding step of the argument in \cite{Han, HR1}, thereby achieving uniform small scale equidistribution rather than uniform comparability of the volume and $L^2$-mass as in \eqref{eq:uniforminM}.
\end{rmk}
Repeating the process in the previous subsection, we can prove that
\begin{equation}\label{eq:MeFxp}
\Me(F_{x_p,k})=\frac{\mathrm{Vol}(B(x_p,r_k))}{\mathrm{Vol}(\M)}+o(r_k^n)\quad\text{for all }p=1,...,N;
\end{equation}
moreover, similar to \eqref{eq:nux0}, for some $\beta\in(\alpha n,1/2)$,
\begin{eqnarray*}
&&\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\exists k\ge2,\exists1\le i\le m_k,|F_{x_p,k}(u_{i,k})-\Me(F_{x_p,k})|>t_l\right)\\
&\le&C\sum_{k=2}^\infty\exp\left(-cl^2\log k\right).
\end{eqnarray*}
Recalling that $t_l=m_k^{-\beta}l$, by \eqref{eq:multibd},
\begin{eqnarray*}
&&\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\exists1\le p\le N,\exists k\ge2,\exists1\le i\le m_k,|F_{x_p,k}(u_{i,k})-\Me(F_{x_p,k})|>t_l\right)\\
&\le&C\sum_{k=2}^\infty\exp\left(-cl^2\log k\right)N\\
&\le&C\sum_{k=2}^\infty\exp\left(-cl^2\log k\right)\lambda_k^{\gamma n}\\
&\le&C\sum_{k=2}^\infty\exp\left(-cl^2\log k+\gamma n\log\lambda_k\right)\\
&\le&C\sum_{k=2}^\infty\exp\left(-cl^2\log k+c'\log k\right)\\
&\to&0\quad\text{as }l\to\infty.
\end{eqnarray*}
This implies that
\begin{eqnarray*}
&&\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\forall l\in\N,\exists1\le p\le N,\exists k\ge2,\exists1\le i\le m_k,|F_{x_p,k}(u_{i,k})-\Me(F_{x_p,k})|>t_l\right)\\
&=&0;
\end{eqnarray*}
hence,
\begin{eqnarray*}
&&\nu\left(\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}:\exists l\in\N,\forall1\le p\le N,\forall k\ge2,\forall1\le i\le m_k,|F_{x_p,k}(u_{i,k})-\Me(F_{x_p,k})|\le t_l\right)\\
&=&1.
\end{eqnarray*}
But if an eigenbasis $\{u_{i,k}\}_{k\in\N,1\le i\le m_k}\in\mathcal{B}$ satisfies that for some $l\in\N$,
$$|F_{x_p,k}(u_{i,k})-\Me(F_{x_p,k})|\le t_l\quad\text{for all }1\le p\le N,k\ge2,1\le i\le m_k,$$
then
\begin{equation}\label{eq:ssmrandomatxp}
\int_{B(x_p,r_k)}|u_{i,k}(x)|^2\,dx=\Me(F_{x_p,k})+o(r_k^n)=\frac{\mathrm{Vol}(B(x_p,r_k))}{\mathrm{Vol}(\M)}+o(r_k^n)
\end{equation}
by \eqref{eq:MeFxp} and $t_l=o(r_k^n)$. Note also that the above convergence is uniform for $x_p$, $p=1,...,N$, because $l$ is independent of $x_p$. Therefore, \eqref{eq:sserandom} is valid when $x_0$ is replaced by any $x_p$, $p=1,...,N$.
Next we show that by choosing $\gamma>0$ in \eqref{eq:gamma} large enough (depending only on $\M$), \eqref{eq:ssmrandomatxp} guarantees that
$$\int_{B(z,r_k)}|u_{i,k}(x)|^2\,dx=\frac{\mathrm{Vol}(B(z,r_k))}{\mathrm{Vol}(\M)}+o(r_k^n)\quad\text{uniformly for all }z\in\M,$$
which finishes the proof of Theorem \ref{thm:sserandom}.
Indeed, let $z\in\M$. Since $\{B(x_p,s_k)\}_{p=1}^N$ is a covering of $\M$, there exists $1\le p\le N$ such that $z\in B(x_p,s_k)$. Observe that from \eqref{eq:multibd},
$$s_k=\lambda_k^{-\gamma}\le cm_k^{-\frac{\gamma}{n-1}}=cr_k^{\frac{\gamma}{\alpha(n-1)}}<r_k,\quad\text{given that }\gamma>\frac{n-1}{2n}>\alpha(n-1).$$
Then we immediately derive that
\begin{eqnarray}
&&\left|\frac{\mathrm{Vol}(B(z,r_k))}{\mathrm{Vol}(\M)}-\frac{\mathrm{Vol}(B(x_p,r_k))}{\mathrm{Vol}(\M)}\right|\nonumber\\
&\le&c\mathrm{Vol}\left(B(x_p,r_k+s_k)\setminus B(x_p,r_k-s_k)\right)\nonumber\\
&\le&cs_kr_k^{n-1}\nonumber\\
&\le&cr_k^{\frac{\gamma}{\alpha(n-1)}}r_k^{n-1}\nonumber\\
&=&o(r_k^n).\label{eq:volumedif}
\end{eqnarray}
Using the $L^\infty$ estimate of $u_{i,k}$ in \eqref{eq:Linfty} that
$$\|u_{i,k}\|_{L^\infty(\M)}\le cm_k^\frac12=cr_k^{-\frac{1}{2\alpha}},$$
we also have
\begin{eqnarray}
&&\left|\int_{B(z,r_k)}|u_{i,k}(x)|^2\,dx-\int_{B(x_p,r_k)}|u_{i,k}(x)|^2\,dx\right|\nonumber\\
&\le&\int_{B(z,r_k)\cup B(x_p,r_k)}|u_{i,k}(x)|^2\,dx-\int_{B(z,r_k)\cap B(x_p,r_k)}|u_{i,k}(x)|^2\,dx\nonumber\\
&=&\int_{(B(z,r_k)\cup B(x_p,r_k))\setminus(B(z,r_k)\cap B(x_p,r_k))}|u_{i,k}(x)|^2\,dx\nonumber\\
&\le&\int_{B(x_p,r_k+s_k)\setminus B(x_p,r_k-s_k)}|u_{i,k}(x)|^2\,dx\nonumber\\
&\le&cr_k^{-\frac1\alpha}\mathrm{Vol}\left(B(x_p,r_k+s_k)\setminus B(x_p,r_k-s_k)\right)\nonumber\\
&\le&cr_k^{-\frac1\alpha}s_kr_k^{n-1}\nonumber\\
&\le&cr_k^{-\frac1\alpha}r_k^{\frac{\gamma}{\alpha(n-1)}}r_k^{n-1}\nonumber\\
&=&o(r_k^n),\label{eq:integraldif}
\end{eqnarray}
given that $\gamma>2(n-1)>(1+\alpha)(n-1)$.
Combining \eqref{eq:ssmrandomatxp}, \eqref{eq:volumedif}, and \eqref{eq:integraldif}, we see that
\begin{eqnarray*}
&&\left|\int_{B(z,r_k)}|u_{i,k}(x)|^2\,dx-\frac{\mathrm{Vol}(B(z,r_k))}{\mathrm{Vol}(\M)}\right|\\
&\le&\left|\frac{\mathrm{Vol}(B(z,r_k))}{\mathrm{Vol}(\M)}-\frac{\mathrm{Vol}(B(x_p,r_k))}{\mathrm{Vol}(\M)}\right|+\left|\int_{B(z,r_k)}|u_{i,k}(x)|^2\,dx-\int_{B(x_p,r_k)}|u_{i,k}(x)|^2\,dx\right|\\
&&+\left|\int_{B(x_p,r_k)}|u_{i,k}(x)|^2\,dx-\frac{\mathrm{Vol}(B(x_p,r_k))}{\mathrm{Vol}(\M)}\right|\\
&=&o(r_k^n)\quad\text{uniformly for all }z\in\M.
\end{eqnarray*}
This completes the proof of Theorem \ref{thm:sserandom}.
\section*{Acknowledgements}
It is a pleasure to thank Melissa Tacy for all our discussions that are related to randomization and its applications to harmonic analysis. I also want to thank Andrew Hassell, Steve Lester, Gabriel Rivi\`ere, and Ze\'ev Rudnick for reading the manuscript and offering suggestions that helped to improve the presentation.
| {
"timestamp": "2015-11-05T02:05:35",
"yymm": "1511",
"arxiv_id": "1511.01195",
"language": "en",
"url": "https://arxiv.org/abs/1511.01195",
"abstract": "We investigate small scale equidistribution of random orthonormal bases of eigenfunctions (i.e. eigenbases) on a compact manifold M. Assume that the group of isometries acts transitively on M and the multiplicity of eigenfrequency tends to infinity at least logarithmically. We prove that, with respect to the natural probability measure on the space of eigenbases, almost surely a random eigenbasis is equidistributed at small scales; furthermore, the scales depend on the growth rate of multiplicity. In particular, this implies that almost surely random eigenbases on the n-dimensional sphere (n>=2) and the n-dimensional tori (n>=5) are equidistributed at polynomial scales.",
"subjects": "Spectral Theory (math.SP); Analysis of PDEs (math.AP); Probability (math.PR)",
"title": "Small scale equidistribution of random eigenbases",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969689263264,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7099044241704621
} |
https://arxiv.org/abs/1005.5495 | Central Swaths (A Generalization of the Central Path) | We develop a natural generalization to the notion of the central path -- a notion that lies at the heart of interior-point methods for convex optimization. The generalization is accomplished via the "derivative cones" of a "hyperbolicity cone," the derivatives being direct and mathematically-appealing relaxations of the underlying (hyperbolic) conic constraint, be it the non-negative orthant, the cone of positive semidefinite matrices, or other.We prove that a dynamics inherent to the derivative cones generates paths always leading to optimality, the central path arising from a special case in which the derivative cones are quadratic. Derivative cones of higher degree better fit the underlying conic constraint, raising the prospect that the paths they generate lead to optimality quicker than the central path. | \section{{\bf Introduction}} \label{s.a}
Let $ {\mathcal E} $ denote a finite-dimensional Euclidean space and let $ p: {\mathcal E} \rightarrow \reals $ be a hyperbolic polynomial, that is, a homogeneous polynomial for which there is a designated direction vector $ e $ satisfying $ p(e) > 0 $ and having the property that for all $ x \in {\mathcal E} $, the univariate polynomial $ t \mapsto p(x+te) $ has only real roots. Thus, $ p $ is ``hyperbolic in direction $ e $.''
Let $ \Lambdapp $ denote the hyperbolicity cone -- the connected component of $ \{ x: p(x) > 0 \} $ containing $ e $. Let $ \Lambdap $ be the closure.
A simple example is $ p(x) = \prod_j x_j $ and $ e = (1,\ldots,1) $, in which case $ \Lambdapp $ is the strictly positive orthant and $ \Lambdap $ is the non-negative orthant. Perhaps the most fundamental example, however, is $ p(X) = \det(X) $, where $ X $ ranges over $ n \times n $ symmetric matrices and $ e = I $, the identity matrix. Here, $ \Lambdapp $ is the cone of (strictly-)positive definite (pd) matrices and $ \Lambdap $ is the positive semidefinite (psd) cone.
G\r{a}rding \cite{garding} showed for each hyperbolic polynomial $ p $ that every $ \hat{e} \in \Lambdapp $ is a hyperbolicity direction, i.e., for every $ x $, all of the roots of $ t \mapsto p(x+t \hat{e} ) $ are real. One of several remarkable corollaries G\r{a}rding established is that $ \Lambdapp $ is convex. (Of course $ \Lambdap $ is thus convex, too.) (See \S2 of \cite{renegar} for simplified proofs.)
The combination of convexity and rich algebraic structure make hyperbolicity cones promising objects for study in the context of optimization, as was first made evident by G\"{u}ler \cite{guler}, who developed a rich theory of interior-point methods for hyperbolic programs, that is, for problems of the form
\[ \begin{array}{rl}
\min & c^*x \\
\textrm{s.t.} & Ax = b \\
& x \in \Lambdap \end{array} \]
-- linear programming, second-order programming and semidefinite programming being particular cases. Key to G\"{u}ler's development is that the function $ x~\mapsto~-~\ln~p(x) $ is a self-concordant barrier for $ \Lambdap $; thus the general theory of Nesterov and Nemirovski \cite{nn} applies.
A primary purpose of the present paper is to use the viewpoints provided by hyperbolic programming to develop a natural generalization to the notion of the central path\footnote{Central Path := $ \{ x( \eta ) : \eta > 0 \} $ where $ x(\eta ) $ solves $ \min_x \eta c^* x - \ln p(x) $, s.t. $ Ax = b $, $ x \in \Lambdapp $.} (a notion that lies at the heart of interior-point method theory). This is accomplished via derivative cones, which are direct relaxations of the underlying convex conic constraint (be it the non-negative orthant, the cone of positive semidefinite matrices, \ldots).
However, perhaps more important than the ``natural generalization to the notion of the central path'' is our ``use (of) the viewpoints provided by hyperbolic programming'' in developing the generalization. Indeed, it is our conviction that even if results about hyperbolic programming never find application more general than linear programming and semidefinite programming, the setting of hyperbolic programming is favorable for engendering intriguing algorithmic ideas that otherwise would have been unrealized (or at least considerably delayed).
Familiarity with the central path is not required to readily understand our results. (The central path simply provides an initial anchor with which many readers {\em are} familiar.)
The literature focusing on hyperbolic polynomials is relatively small but its growth is accelerating and its quality in general is distinctly impressive. Although the nature of our results is such that during the development we have occasion to cite only a few works, we take the opportunity before beginning the development to draw the reader's attention to the bibliography, which includes a variety of notable papers appearing in recent years. In particular, an appreciation of the breadth and quality of research ideas surrounding hyperbolic polynomials can be fostered by browsing \cite{bgls}, \cite{bb}, \cite{gurvits}, \cite{hl}, and \cite{hv}.
\section{{\bf Overview of Results}} \label{s.b}
Let $ \phi $ be a univariate polynomial all of whose coefficients are real. Between any two real roots of $ \phi $ there lies, of course, a root of $ \phi' $. Consequently, because $ \phi' $ is of degree one less than the degree of $ \phi $, a simple counting argument shows that if all of the roots of $ \phi $ are real, then so are all of the roots of $ \phi' $.
In particular, if $ \phi(t) := p(x+te) $ where $ p $ is a polynomial hyperbolic in direction $ e $ (and where $ x $ is an arbitrary point), then all the roots of $ t \mapsto \phi'(t) = \smfrac{d}{dt} p(x+te) = Dp(x+te)[e] $ are real, where $ Dp(x+te) $ denotes the differential of $ p $ at $ x + te $. Hence, the polynomial $ p'_e(x) := Dp(x)[e] $ is, like $ p $, hyperbolic in direction $ e $.
For example, if $ p(x) = \prod_j x_j $ and all coordinates of $ e $ are nonzero, then $ p_e'(x) = \sum_i e_i \prod_{j \neq i} x_j $ is hyperbolic in direction $ e $.
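As a quick numerical illustration of the interlacing that underlies this construction (a sketch only; the test point is arbitrary), one can compare the roots of $ t \mapsto p(x+te) $ with those of $ t \mapsto p_e'(x+te) $ for $ p(x) = \prod_j x_j $ on $ \reals^3 $ and $ e = (1,1,1) $:
\begin{verbatim}
import numpy as np

x = np.array([3.0, -1.0, 0.5])     # an arbitrary test point
# phi(t) = p(x + t e) = (x_1 + t)(x_2 + t)(x_3 + t); its roots are -x_j, all real.
coeffs  = np.polynomial.polynomial.polyfromroots(-x)   # coefficients of phi
dcoeffs = np.polynomial.polynomial.polyder(coeffs)     # coefficients of phi' = p'_e(x + t e)

print(np.sort(-x))                                                # roots of phi
print(np.sort(np.polynomial.polynomial.polyroots(dcoeffs).real))  # roots of phi', interlacing
\end{verbatim}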
We refer to $ p'_e $ as the ``derivative polynomial (in direction $ e $),'' and denote its hyperbolicity cone by $ \Lambdappe{e}' $. The fact that for every $ x $, the roots of $ t \mapsto p_e'(x+te) $ lie between the roots of $ t \mapsto p(x+te) $ is readily seen to imply $ \Lambdapp \subseteq \Lambdappe{e}' $ -- in words, $ \Lambdappe{e}' $ is a {\em relaxation} of $ \Lambdapp $ (see \S4 of \cite{renegar} for a full discussion).
Of course one can in turn take the derivative in direction $ e $ of the hyperbolic polynomial $ p'_e $, thereby obtaining yet another polynomial -- $ (p'_e)'_e(x) = D^2p(x)[e,e] $ -- hyperbolic in direction $ e $. Letting $ n $ denote the degree of $ p $, repeated differentiation in direction $ e $ results in a sequence of hyperbolic polynomials
\[ \ppie{1}{e} = p_e', \, \ppie{2}{e}, \ldots, \ppie{n-1}{e} \; , \]
where $ \deg( \ppie{i}{e}) = n-i $. (For convenience, let $ \ppie{0}{e} := p $.) The associated hyperbolicity cones $ \Lambdaiepp{i}{e} $ -- and their closures $ \Lambdaiep{i}{e} $ -- form a nested sequence of relaxations of the original cone:
\[ \Lambdap = \Lambdaiep{0}{e} \subseteq \Lambdaiep{1}{e} \subseteq \Lambdaiep{2}{e} \subseteq \cdots \subseteq \Lambdaiep{n-1}{e} \; . \]
The final relaxation, $ \Lambdaiep{n-1}{e} $, is a halfspace, because $ \ppie{n-1}{e} $ is linear.
The cones become tamer as additional derivatives are taken. The halfspace $ \Lambdaiep{n-1}{e} $ is as tame as a cone can be, but extremely tame also is the second-order cone $ \Lambdaiep{n-2}{e} $ -- no cone with curvature could be nicer. As one moves along the nesting towards the original cone $ \Lambdap $, the boundaries gain more and more corners. For example, when $ p(x) = \prod_j x_j $ and all coordinates of $ e $ are positive, the boundary $ \partial \Lambdaiep{i}{e} $ contains all of the non-negative orthant's faces of dimension less than $ n-i $ (hence {\em lots} of corners when $ i $ is small and $ n $ large). On the other hand, everywhere else, $ \partial \Lambdaiep{i}{e} $ has nice curvature properties (no corners), as is reflected in the following motivational theorem pertaining to every hyperbolicity cone whose closure is regular (i.e., has nonempty interior and contains no subspace other than the origin).
\hypertarget{targ_thm_one}{}
\begin{thm_one}
Assume $ \Lambdap $ is regular and $ 0 \leq i \leq n-2 $.
\begin{enumerate}
\item The intersection $ \Lambdap \cap \partial \Lambdaiep{i}{e} $ is independent of $ e \in \Lambdapp $ \\
(thus, a face of $ \Lambdap $ which is a boundary face of $ \Lambdaiep{i}{e} $ for some $ e \in \Lambdapp $ is a boundary face for all $ e \in \Lambdapp $).
\item If $ e \in \Lambdapp $ then any boundary face of $ \Lambdaiep{i}{e} $ either is a face of $ \Lambdap $ \\ or is a single ray contained in $ \Lambdaiepp{i+1}{e} \setminus \Lambdaiep{i-1}{e} $.
\end{enumerate}
\end{thm_one}
We show in \S\ref{s.d} that the theorem is a consequence of results from \cite{renegar}. (In order to make the present section inviting to a broad audience, nearly all proofs are delayed.)
Before moving to discussion of hyperbolic programs, we record a characterization of the derivative cones that is useful both conceptually and in proofs:
\begin{equation} \label{e.a.a}
\Lambdaiep{i}{e} = \{ x:\ppie{j}{e}(x) \geq 0 \textrm{ for all $j = i, \ldots, n-1 $} \} \; .
\end{equation}
(This is immediate from Proposition 18 and Theorem 20 of \cite{renegar}.)
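To illustrate (\ref{e.a.a}) in the simplest case, take $ p(x) = x_1x_2x_3 $ and $ e = (1,1,1) $, so that $ \ppie{1}{e}(x) = x_1x_2 + x_1x_3 + x_2x_3 $ and $ \ppie{2}{e}(x) = 2(x_1 + x_2 + x_3) $. Then (\ref{e.a.a}) gives
\[ \Lambdaiep{1}{e} = \{ x: x_1x_2 + x_1x_3 + x_2x_3 \geq 0 \textrm{ and } x_1 + x_2 + x_3 \geq 0 \} \quad \textrm{and} \quad \Lambdaiep{2}{e} = \{ x: x_1 + x_2 + x_3 \geq 0 \} \; , \]
a nested pair of relaxations of the non-negative orthant, the latter being a halfspace.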
\sep
Consider a hyperbolic program
\[
\left. \begin{array}{rl}
\min & c^*x \\
\mathrm{s.t.} & Ax = b \\
& x \in \Lambdap \end{array} \quad \right\} \, \hp
\]
and its derivative relaxations in direction $ e $,
\[
\left. \begin{array}{rl}
\min & c^*x \\
\mathrm{s.t.} & Ax = b \\
& x \in \Lambdaiep{i}{e} \end{array} \quad \right\} \, \hpie{i}{e} \quad \textrm{($ i=1, \ldots, n-1 $)} \; .
\]
(Strictly speaking, ``$\min$'' should be replaced with ``$\inf$,'' but we focus on instances where a minimizer exists.) The optimal values for the derivative relaxations $ \hpie{i}{e} $ form a decreasing sequence in $ i $, due to the nesting of the derivative cones.
Let $ \feas $ (resp., $ \feasie{i}{e} $) denote the feasible region of $ \hp $ (resp., $ \hpie{i}{e} $) -- the set of points satisfying the constraints. Let $ \opt $ ($ \optie{i}{e} $) denote the set of optimal points -- a.k.a. optimal solutions -- and let $ \val $ denote the optimal value of $ \hp $.
\underline{We assume $ b \neq 0 $} (thus, the origin is infeasible, and the feasible sets are not cones), \underline{$ A $ is surjective} (i.e., onto), \underline{and $ c^* $ is not in the image of $ A^* $} (otherwise every feasible point would be optimal).
\underline{We assume $ \Lambdap $ is a regular cone}. Then, for $ 1 \leq i \leq n-2 $, $ \Lambdaiep{i}{e} $ also is regular (\cite{renegar}, Proposition 13).
From these assumptions and Theorem \hyperlink{targ_thm_one}{1}(B) immediately follows a fact that will play a critical role:
\begin{equation} \label{e.a.b}
\left. \begin{array}{l}
\textrm{If $ 1 \leq i \leq n-2 $, then either} \\
$ \textrm{ ~} $ \quad \textrm{$ \optie{i}{e} = \emptyset $ ,} \\
$ \textrm{ ~} $ \quad \textrm{$ \optie{i}{e} = \opt $ , or} \\ $ \textrm{ ~} $ \quad \textrm{$ \optie{i}{e} $ consists of a single point} \\
$ \textrm{ ~} $ \qquad \quad \textrm{and the point is contained in $ \relint(\feasie{i+1}{e}) \setminus \feasie{i-1}{e} $ ,} \end{array} \quad \right\} \end{equation}
where ``$ \relint $'' denotes relative interior\footnote{The ``relative interior'' of a convex set $ S \subseteq {\mathcal E} $ is the interior of $ S $ when considered as a subset in the smallest affine space containing $ S $ (where the affine space inherits the topology of the Euclidean space $ {\mathcal E}$).}.
Thus, for each $ 1 \leq i \leq n-2 $, the cone $ \Lambdapp $ is naturally partitioned into three regions, one consisting of derivative directions $ e $ for which $ \optie{i}{e} = \opt $, a second consisting of directions for which $ \optie{i}{e} = \emptyset $, and the third consisting of directions for which $ \optie{i}{e} $ consists of a single point lying outside the feasible region for the original optimization problem $ \hp $. We associate names with this partitioning of $ \Lambdapp $, but before doing so, we introduce a restriction.
We shall only be concerned with derivative directions $ e $ satisfying $ Ae = b $ (indeed, key arguments rely heavily on $ A(e-x) = 0 $ for $ x \in \feasie{i}{e} $). Thus, the derivative directions we consider satisfy $ e \in \feasie{i}{e} $ for all $ i $ -- in particular, $ \hpie{i}{e} $ is ``strictly'' feasible, as $ e \in \Lambdapp \subseteq \Lambdaiepp{i}{e} $.
We distinguish two sets of derivative directions for $ 0 \leq i \leq n-1 $:
\begin{quote}
The {\em $ i^{\mathrm{th}} $ central swath} is the set
\[ \swath(i) := \{ e \in \Lambdapp: Ae = b \textrm{ and } \optie{i}{e} \neq \emptyset \} \; , \]
and the set of {\em core} derivative directions is defined by
\[ \core(i) := \{ e \in \swath(i): \optie{i}{e} = \opt \} \; . \]
When $ 1 \leq i \leq n-2 $ and $ e \in \swath(i) \setminus \core(i) $, we use $ \xie{i}{e} $ to denote the unique point in $ \optie{i}{e} $ (unique by (\ref{e.a.b})).
\end{quote}
For reference, we note that from (\ref{e.a.b}),
\begin{equation} \label{e.a.c}
\xie{i}{e} \in \Lambdaiepp{i+1}{e} \cap \partial \Lambdaiep{i}{e} \; .
\end{equation}
Trivially, if $ \opt \neq \emptyset $ (resp., $ = \emptyset $), then $ \swath(0) = \core(0) = \relint(\feas) $ (resp., $ = \emptyset $), that is, the zeroth swath coincides precisely with the relative interior of $ \hp $'s feasible region (resp., is the empty set). More interestingly, the swaths and cores are nested:
\begin{gather*}
\swath(0) \supseteq \swath(1) \supseteq \cdots \supseteq \swath(n-1) \\ \label{}
\core(0) \supseteq \core(1) \supseteq \cdots \supseteq \core(n-1)
\end{gather*}
For the cores, the nesting is an easy consequence of the reverse nesting
\[ \feasie{0}{e} \subseteq \feasie{1}{e} \subseteq \cdots \subseteq \feasie{n-1}{e} \; . \]
This reverse nesting also provides the crux in proving the nesting of the swaths, a proof we defer to \S\ref{s.d}.
A consequence of the nesting of swaths is that if any swath is nonempty, then $ \swath(0) \neq \emptyset $ -- equivalently, $ \opt \neq \emptyset $.
Whereas a path is narrow (one-dimensional), swaths can be broad, just as $ \swath(i) $ typically fills much of the feasible region for $ \hp $ when $ i $ is small. But why do we use the terminology ``{\em central} swaths'' rather than simply ``swaths''? The following elementary theorem (proven in \S\ref{s.d}) gives our first reason.
\hypertarget{targ_thm_two}{}
\begin{thm_two} \quad $ \swath(n-1) = \mathrm{Central \, Path} \; . $
\end{thm_two}
The central path is fundamental in the literature on interior-point methods. The path leads to optimality. Most of the algorithms follow the path, either explicitly or implicitly. A foremost goal of the present paper is to show that not only does the central path lead to optimality, but {\em all} central swaths lead, in a natural manner, to optimality. We show, in particular, that through each point $ e \in \swath(i) $ (for $ 1 \leq i \leq n-2 $), there is naturally defined a trajectory which leads from $ e $ to optimality; moreover, the trajectory remains within $ \swath(i) $ until optimality is reached. An intriguing possibility is that
for small values of $ i $, the trajectory might lead to optimality ``more quickly'' than the central path. (Motivation for this possibility will become clearer as the reader proceeds.)
For $ 1 \leq i \leq n-2 $, consider the idealized setting in which for derivative directions $ e \in \swath(i) $, an exact optimal solution for $ \hpie{i}{e} $ can be computed. If the optimal solution lies in $ \Lambdap $, then clearly it lies in $ \opt $, the set of optimal solutions for the original optimization problem $ \hp $. In this case our goal of solving $ \hp $ has been accomplished. On the other hand, if the optimal solution does not lie in $ \Lambdap $, then $ e \in \swath(i) \setminus \core(i) $, and the optimal solution is the unique point $ \xie{i}{e} $ in $ \optie{i}{e} $. In this case how can we move towards solving $ \hp $? How can we construct a trajectory $ t \mapsto e(t) $ for which $ e(0) = e $ and such that either the trajectory converges to $ \opt $ or the path $ t \mapsto \xie{i}{e(t)} $ converges to $ \opt $ (or both)?
An apparently easier task would be to create a trajectory $ t \mapsto e(t) $ for which $ t \mapsto c^* e(t) $ is monotonically decreasing. Indeed, we could define the trajectory implicitly according to the differential equation $ \smfrac{d}{dt} e(t) = \xie{i}{e(t)} - e(t) $ ($ e(0) \in \swath(i) \setminus \core(i) $) that is, move from $ e(t) $ infinitesimally towards the optimal solution $ \xie{i}{e(t)} $. Assuming this does result in a well-defined trajectory, then clearly, $ t \mapsto c^* e(t) $ is decreasing. However, there are no clear reasons suggesting that the trajectory $ t \mapsto e(t) $ converges to $ \opt $. It is conceivable, for example, that the trajectory reaches the boundary $ \partial \Lambdap $ in finite time, converging to a point having better objective value than $ e(0) $, but not to a point in $ \opt $. Alternatively, in finite time the trajectory might reach $ \core(i) $. It seems plausible that the path $ t \mapsto \xie{i}{e(t)} $ then would have limit in $ \opt $. But how would one prove it? How does one even rule out the possibility that in finite time, the path $ t \mapsto \xie{i}{e(t)} $ goes to infinity while the trajectory $ t \mapsto e(t) $ remains bounded but with no limit points in $ \opt $?
Resolving these issues, and similar ones, is our primary focus. We show that the differential equation $ \smfrac{d}{dt} e(t) = \xie{i}{e(t)} - e(t) $, $ e(0) \in \swath(i) \setminus \core(i) $, does result in well-defined trajectories in $ \swath(i) \setminus \core(i) $, and we show that either the trajectory $ t \mapsto e(t) $ or the path $ t \mapsto \xie{i}{e(t)} $ does converge to $ \opt $. We show many other things as well, but to accurately explain, first we must formalize.
In place of $ \smfrac{d}{dt} e(t) = \xie{i}{e(t)} - e(t) $ we often write $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $. That this dynamics results in well-defined trajectories is immediate from the following theorem, whose (relatively routine) proof is in \S\ref{s.e}.
\hypertarget{targ_thm_three}{}
\begin{thm_three}
Assume $ 1 \leq i \leq n-2 $.
The set $ \swath(i) \setminus \core(i) $ is open in the relative topology of $ \relint(\feas) $.
Moreover, the map $ e \mapsto \xie{i}{e} $ is analytic on $ \swath(i) \setminus \core(i) $.
\end{thm_three}
As an aside, we note that every $ e \in \swath(n-2) $ has a unique optimal solution, simply due to the strict curvature of second-order cones. Thus, we can naturally extend the definition of $ \xie{n-2}{e} $ to include all derivative directions in $ \swath(n-2) $, not just the ones in $ \swath(n-2) \setminus \core(n-2) $. It happens that for the case $ i = n-2 $, virtually all of our results remain valid when ``$ \swath(n-2) $'' is substituted for ``$ \swath(n-2) \setminus \core(n-2) $'' (as we discuss while proving our theorems (\S\S\ref{s.c}-\ref{s.j})). With regards to Theorem \hyperlink{targ_thm_three}{3} in particular, the extended map $ e \mapsto \xie{n-2}{e} $ is analytic on all of $ \swath(n-2) $ (which is open in the relative topology of $ \relint(\feas) $).
Theorem \hyperlink{targ_thm_three}{3} implies that when initiated at $ e(0) \in \swath(i) \setminus \core(i) $, the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ results in a well-defined trajectory. The trajectory remains in $ \swath(i) \setminus \core(i) $ for all time (i.e., is defined for all $ 0 \leq t < \infty $), or is defined only up to some finite time due either to reaching the boundary of $ \swath(i) \setminus \core(i) $ or escaping to infinity. Let $ T(e(0)) $ denote the time at which the trajectory becomes undefined (possibly $ T(e(0)) = \infty $). We refer to $ t \mapsto e(t) $ ($ 0 \leq t < T(e(0)) $) as a ``maximal trajectory.'' For brevity, we often instead write ``$ 0 \leq t < T$'' with the implicit understanding that the time of termination, $ T $, depends on $ e(0) $. (Some of our results distinguish between $ T < \infty $ and $ T = \infty $, but never is distinction made between different finite termination times.)
Here is the formal statement of results partially described earlier:
\hypertarget{targ_main_thm_part_one}{}
\begin{main_thm_part_one}
Assume $ 1 \leq i \leq n-2 $ and let $ t \mapsto e(t) $ $ (0 \leq t < T) $ be a maximal trajectory generated by the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ beginning at $ e(0) \in \swath(i) \setminus \core(i) $.
\begin{enumerate}
\item The trajectory $ t \mapsto e(t) $ is bounded, and $ t \mapsto c^* \xie{i}{e(t)} $ is strictly increasing, with $ \val $ (the optimal value of $ \hp $) as the limit.
\item If $ T = \infty $ then every limit point of the trajectory $ t \mapsto e(t) $ lies in $ \opt $.
\item If $ T < \infty $ then the trajectory $ t \mapsto e(t) $ has a unique limit point $ \bar{e} $ and $ \bar{e} \in \core(i) $; moreover, the path $ t \mapsto \xie{i}{e(t)} $ is bounded and each of its limit points lies in $ \opt $.
\end{enumerate}
\end{main_thm_part_one}
The Main Theorem (Part I) is proven in \S\ref{s.h}.
An immediate consequence of the theorem is that $ T = \infty $ whenever $ \core(i) = \emptyset $, as is the case whenever $ \opt \cap \Lambdaiepp{i}{e} \neq \emptyset $ for some (and hence, by Theorem~\hyperlink{targ_thm_one}{1}(A), for all) $ e \in \Lambdapp $.
Perhaps the reader wonders as to the inspiration for the idea that the trajectories $ t \mapsto e(t) $ arising from the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ lead to optimality, either in the limits of the trajectories themselves or in the limits of the paths $ t \mapsto \xie{i}{e(t)} $. The following theorem (whose proof is in \S\ref{s.f}) serves to clarify the inspiration, as well as to further illuminate our choice of the terminology ``{\em central} swaths'' (as opposed to simply ``swaths'').
\hypertarget{targ_thm_five}{}
\begin{thm_five}
Assume $ t \mapsto e(t) $ $ (0 \leq t < T) $ is a maximal trajectory arising from the dynamics $ \dot{e}(t) = \xie{n-2}{e(t)} - e(t) $, starting at $ e(0) \in \swath(n-2) \setminus \core(n-2) $. \begin{center} If $ e(0) \in \cpath $ then $ \{ e(t): 0 \leq t < T \} \subseteq \cpath $. \end{center}
\end{thm_five}
We remark that when $ i = n-2 $, the termination time $ T $ always is $ \infty $, even when $ \core(n-2) \neq \emptyset $ (see \S\ref{s.f}).
From the two theorems, we see that the central path is but one trajectory in a rich spectrum of paths. Moreover, the central path is at the far end of the spectrum, where the cone $ \Lambdap $ is relaxed to second-order cones $ \Lambdaiep{n-2}{e} $. Second-order cones have nice curvature properties but are far cruder approximations to $ \Lambdap $ than are cones $ \Lambdaiep{i}{e} $ for small $ i $. This raises the interesting prospect that algorithms more efficient than interior-point methods can be devised by relying on a smaller value of $ i $, or on a range of values of $ i $ in addition to $ i = n-2 $.
Some exploration in this vein has been made by Zinchenko (\cite{zinchenko1},\cite{zinchenko3}), who showed for linear programs satisfying standard non-degeneracy conditions that if $ i $ is chosen appropriately and the initial derivative direction $ e(0) $ is within a certain region, then a particular algorithm based on discretizing the flow $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ converges R-quadratically to an optimal solution.
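For concreteness, here is a minimal Python sketch of such a discretization, a plain forward-Euler step of the flow (an illustration only, not Zinchenko's algorithm); the oracle \texttt{x\_opt}, assumed to return $ \xie{i}{e} $ for a given derivative direction $ e $, is hypothetical and left unspecified, since computing it is itself an optimization problem (see the discussion below).
\begin{verbatim}
import numpy as np

def follow_swath(e0, x_opt, step=0.05, iters=200):
    """Forward-Euler discretisation of the flow  e'(t) = x_i(e(t)) - e(t).

    x_opt(e) is a user-supplied (hypothetical) oracle returning the unique
    optimal solution x_i(e) of the relaxed problem HP_i(e); this sketch does
    not specify how to compute it."""
    e = np.asarray(e0, dtype=float)
    path = [e.copy()]
    for _ in range(iters):
        e = e + step * (x_opt(e) - e)    # move a short distance toward x_i(e)
        path.append(e.copy())
    return np.asarray(path)
\end{verbatim}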
Before moving to the next result, we acknowledge that maybe the Main Theorem can be strengthened without restricting its general setting. For example, in Part I(B) there is no statement that limit points of the path $ t \mapsto \xie{i}{e(t)} $ are optimal solutions for $ \hp $ -- there is not even a statement that the path is bounded. This omission seems odd given that the trajectory $ t \mapsto e(t) $ is following the path $ t \mapsto \xie{i}{e(t)} $ (according to the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $) and given that the theorem states limit points of the trajectory are optimal for $ \hp $. Intuitively, it seems the path would converge to optimality and do so even more quickly than the trajectory. The intuition is correct for a wide variety of ``non-degenerate" problems (indeed, quicker convergence of the path than the trajectory underlies Zinchenko's speedup), but we have been unable to find a proof -- or counterexample -- in the general setting of the theorem.\footnote{To gain a sense of the difficulties (and how first impressions can mislead), consider that for any value $ 0 < T \leq \infty $, it is straightforward to define dynamics on $ \mathbb{R}^{2} $ that generates a pair of paths $ a(t) $, $ b(t) $ for which $ \dot{b}(t) = a(t) - b(t) $ and as $ t \rightarrow T $, $ a(t) $ spirals outward to infinity whereas $ b(t) $ spirals inward to a point. (Thus, although $ a(t) $ is ``leading'' $ b(t) $, the paths end (infinitely) far apart.)}
In a similar vein, maybe it is true when $ T = \infty $ that the trajectory $ t \mapsto e(t) $ has a {\em unique} limit point. If, like the central path, the trajectory was a semialgebraic set then the limit point indeed would be unique (simply because every semialgebraic path that has a limit point has exactly one limit point). However, we doubt that the trajectories are necessarily semialgebraic in general, and we see no other approach to proving uniqueness. The theorem leaves open for the general setting the possibility that when $ T = \infty $ (resp., $ T < \infty $), some trajectories $ t \mapsto e(t) $ (resp., some paths $ t \mapsto \xie{i}{e(t)} $) have non-trivial limit cycles -- yet we have no examples of such behavior.
\sep
Of course the dynamics of moving $ e $ towards an optimal solution $ x $ can also be done for $ e \in \core(i) $, in which case $ e $ would be moving towards $ x \in \optie{i}{e} = \opt $. As a matter of formalism, it would be nice to know that such movement would result in a new derivative direction for which $ x $ is still optimal, that is, a new derivative direction that also is in $ \core(i) $. The following theorem (proven in \S\ref{s.j}) establishes a bit more.
\hypertarget{targ_thm_six}{}
\begin{thm_six}
Assume $ e \in \core(i) $ and let $ {\mathcal A} $ be the minimal affine space containing both $ e $ and $ \opt $. Then
\[ {\mathcal A} \cap \Lambdapp \subseteq \core(i) \; . \]
\end{thm_six}
In the following conjecture, the empty set is taken, by default, to be convex.
\begin{conj}
$ \core(i) $ is convex.
\end{conj}
\sep
Much work remains in order to transform the ideas captured in the Main Theorem (Part I) into general and efficient algorithms. For example, devising and analyzing efficient methods for computing $ \xie{i}{e} $ given $ e $ is, in the general case, a challenging research problem. However, computing $ \xie{n-2}{e} $ (that is, the case $ i = n-2 $) amounts simply to solving a least-squares problem and using the quadratic formula. Here, Chua \cite{chua}, starting with -- and extending -- ideas similar to ones above, devised and analyzed an algorithm for semidefinite programming (and, more generally, for symmetric cone programming) with complexity bounds matching the best presently known -- $ O(\sqrt{n} \log (1/\epsilon) ) $ iterations to reduce the duality gap by a factor $ \epsilon $ when $ \Lambdap $ is the cone of $ n \times n $ psd matrices.
Although in the present work we do not analyze methods for efficiently computing $ \xie{i}{e} $, we now present a few results relevant to algorithm design. These results also are important to the proof of the Main Theorem.
For the next two theorems, nothing is gained by distinguishing $ \ppie{i}{e} $ from any other hyperbolic polynomial, so we phrase the results simply in terms of a polynomial $ p $ hyperbolic in direction $ e $, and its first derivative $ p_e' $. (The two theorems, moreover, do not require $ \Lambdap $ to be a regular cone.)
Let
\[ q_e := p/p'_e \; , \]
a rational function. The natural domain for $ q_e $ is $ \Lambdappe{e}' $, because $ p'_e(x) > 0 $ for $ x \in \Lambdappe{e}' $ and $ p'_e(x) = 0 $ for $ x $ in the boundary of $ \Lambdappe{e}' $.
The following result is proven in \S\ref{s.g}.
\hypertarget{targ_thm_seven}{}
\begin{thm_seven}
The function $ q_e: \Lambdappe{e}' \rightarrow \reals $ is concave.
\end{thm_seven}
Previously, $ q_e $ was known to be concave on the smaller cone $ \Lambdapp $ (see \cite{bgls}). For us, the significance of the function being concave on the larger cone $ \Lambdapp' $ is that, as the following theorem illustrates, $ \hp $ can be reformulated as a linearly-constrained convex optimization problem (no explicit conic constraint). For motivation, think of the situation where one has an approximation to an optimal solution for $ \hp $ and the goal is to compute a better approximation.
\hypertarget{targ_thm_eight}{}
\begin{thm_eight}
The optimal solutions $ x \in \Lambdappe{e}' $ for $ \hp $ are the same as for the convex optimization problem
\[ \begin{array}{rl}
\min_{x \in \Lambdappe{e}'} & - \ln c^*( e-x) - q_e(x) \\
\mathrm{s.t.} & Ax = b \; .
\end{array} \]
\end{thm_eight}
\noindent (A caution: The theorem asserts nothing about optimal solutions for $ \hp $ that happen to lie in the intersection of the boundaries $ \partial \Lambdap $ and $ \partial \Lambdape{e}' $.) The theorem is proven in \S\ref{s.g}.
Since, by (\ref{e.a.c}), $ \xie{i}{e} \in \Lambdaiepp{i+1}{e} $ for $ e \in \swath(i) \setminus \core(i) $, the following corollary is immediate, except for the assertion regarding the second differential, which is established in \S\ref{s.g}.
\hypertarget{targ_cor_nine}{}
\begin{cor_nine}
If $ 1 \leq i \leq n-2 $, $ e \in \swath(i) \setminus \core(i) $ and
\[ f(x) := - \ln c^*(e-x) - \frac{\ppie{i}{e}(x)}{\ppie{i+1}{e}(x)} \; , \]
then $ \xie{i}{e} $ is the unique optimal solution for the convex optimization problem
\[ \begin{array}{rl}
\min_x & f(x) \\
\mathrm{s.t.} & Ax = b \; .
\end{array} \]
Moreover, $ D^2f(\xie{i}{e})[v,v] > 0 $ if $ v $ is not a scalar multiple of $ \xie{i}{e} $ (in particular, if $ v \neq 0 $ satisfies $ Av = 0 $).
\end{cor_nine}
A consequence of the assertion regarding the second differential is that Newton's method will converge quadratically to $ \xie{i}{e} $ if initiated nearby. (Which is not to say that Newton's method is the algorithm of choice for this problem.)
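For illustration only, here is a minimal sketch -- ours -- of an equality-constrained Newton iteration for the corollary's convex optimization problem; the gradient and Hessian of $ f $ are assumed to be supplied as callables (the names below are ours), and the starting point is assumed to satisfy $ Ax = b $ and to lie near $ \xie{i}{e} $.
\begin{verbatim}
# A minimal sketch (ours) of Newton's method for  min f(x) s.t. Ax = b,
# with f as in Corollary 9; grad_f and hess_f are user-supplied callables.
import numpy as np

def newton_equality(x, A, grad_f, hess_f, iters=20, tol=1e-12):
    m, n = A.shape
    for _ in range(iters):
        g, H = grad_f(x), hess_f(x)
        # KKT system for the Newton step restricted to {v : Av = 0}.  At
        # xi^{(i)}_e (and hence nearby) it is nonsingular, since D^2 f is
        # positive definite on the nullspace of A there (by the corollary)
        # and A is onto.
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        step = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x
\end{verbatim}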
\sep
Our final result relates the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ to the optimization problem dual to $ \hp $:
\[
\left. \begin{array}{rl}
\sup_{y^*,s^*} & y^*b \\
\mathrm{s.t.} & y^* A + s^* = c^* \\
& s^* \in \Lambdap^* \end{array} \quad \right\} \, \hp^* \; ,
\] where $ \Lambdap^* $ is the cone dual to $ \Lambdap $.\footnote{The dual cone $ \Lambdap^* $ consists of the linear functionals $ s^* $ satisfying $ s^*x \geq 0 $ for all $ x \in \Lambdap $.} A pair $ (y^*,s^*) $ satisfying the constraints is said to be ``strictly'' feasible if $ s^* \in \int(\Lambdap^*) $ (interior).
Letting $ \val^* $ denote the optimal value of $ \hp^* $ ($ \val^* = - \infty $ if $ \hp^* $ is infeasible), we have, just as a matter of tracing definitions, the standard result known as ``weak duality''\footnote{This is proven simply by observing that for feasible $ x $ and $ (y^*,s^*) $,
\[ c^*x = (y^*A + s^*)x = y^*b + s^*x \geq y^*b \; , \] the inequality due to $ x \in \Lambdap $ and $ s^* \in \Lambdap^* $.}: $ \val^* \leq \val $.
Optimizers expect that if a dynamics provides a natural path-following framework for solving a convex optimization problem, then not only do the dynamics generate paths leading to (primal) optimality, but the dynamics also somehow generate paths leading to dual optimality.
For $ e \in \swath(i) \setminus \core(i) $, define
\[ \sie{i}{e} := \frac{c^*(e-x) }{\ppie{i+1}{e}(x)} D \ppie{i}{e}(x) \quad \textrm{where } x = \xie{i}{e} \; , \]
and where $ D \ppie{i}{e}(x) $ is the differential of $ \ppie{i}{e} $ at $ x $, i.e., the linear functional defined on vectors $ v $ by $ D \ppie{i}{e}(x)[v] := \smfrac{d}{dt} \ppie{i}{e}(x+tv)|_{t=0} $.
The following result is proven in \S\ref{s.i}.
\hypertarget{targ_main_thm_part_two}{}
\begin{main_thm_part_two}
Assume $ 1 \leq i \leq n-2 $ and let $ t \mapsto e(t) $ $ ( 0 \leq t < T) $ be a maximal trajectory for the dynamics $ \dot{e} = \xie{i}{e} - e $ starting at $ e(0) \in \swath(i) \setminus \core(i) $. Then $ y^*A + \sie{i}{e(t)} = c^* $ has a unique solution $ y^* = \yie{i}{e(t)} $, and the pair $ (\yie{i}{e(t)},\sie{i}{e(t)}) $ is strictly feasible for $ \hp^* $. Moreover,
\[ \yie{i}{e(t)}b = c^* \xie{i}{e(t)} \xrightarrow[t \rightarrow T]{} \val \]
(in fact, increases to $ \val $ strictly monotonically) and the path $ t \mapsto (\yie{i}{e(t)},\sie{i}{e(t)}) $ is bounded.
\end{main_thm_part_two}
Consequences are, of course, that $ \val^* = \val $ (``strong duality'') and that the limit points of $ t \mapsto (\yie{i}{e(t)},\sie{i}{e(t)}) $ form a nonempty set, each of whose elements is optimal for $ \hp^* $.
Thus, although the path $ t \mapsto \xie{i}{e(t)} $ is infeasible for $ \hp $, and the feasible trajectory $ t \mapsto e(t) $ might fail to converge to optimality (it might instead converge to a point $ \bar{e} \in \core(i) $), there is naturally generated a path $ t \mapsto (\yie{i}{e(t)},\sie{i}{e(t)}) $ that is both feasible for $ \hp^* $ and convergent to optimality.
Perhaps, then, the algorithmic framework we have posed for the primal optimization problem would be better posed for the dual: by interchanging the primal and the dual, a single path would be generated for the primal, and that path would both be feasible and converge to optimality.
\sep
Lastly, we mention that in defining the sequence of derivative polynomials $ \ppi{i}_e $ ($ i=1, \ldots, n-1 $), we could have used various derivative directions $ e_1, \ldots, e_{n-1} $, choosing $ e_1 $ from the hyperbolicity cone for $ p $ and defining $ p_{e_1}'(x) := Dp(x)[e_1] $, choosing $ e_2 $ from the (larger) hyperbolicity cone for $ p_{e_1}' $ and defining $ p_{e_1,e_2}''(x) := Dp_{e_1}'(x)[e_2] = D^2p(x)[e_1,e_2] $, and so on. Several results in the following pages can be extended to this more general setting. However, computing multidirectional derivatives can be prohibitively expensive, even for the innocuous-appearing hyperbolic polynomials $ p(x) = \prod_{i=1}^n a_i^T x $ naturally arising from polyhedral cones $ \{ x \in \reals^n: a_i^Tx \geq 0 \textrm{ for all } i=1, \ldots, n \} $; indeed, choosing $ e_1, \ldots, e_n $ to be the standard basis, $ D^np(x)[e_1, \ldots, e_n ] $ is the permanent of the matrix whose $ i^{ \mathrm{th}} $ column is $ a_i $.
By contrast, if the same direction $ e $ is used for all derivatives ($ i = 1, \ldots, n-1 $), the resulting polynomials $ \ppie{i}{e} $ (resp., their gradients, their Hessians) can be efficiently evaluated at any point if the initial polynomial $ p $ (resp., its gradient, its Hessian) can be efficiently evaluated at any point. This is straightforwardly accomplished by interpolation, and can be sped up via the (inverse) Discrete Fourier Transform (see \S9 of \cite{renegar} for some discussion). As a primary motivation for the present paper is designing efficient algorithms, it thus is sensible to restrict consideration to the same direction $ e $ being used for all derivatives ($ i = 1, \ldots, n-1 $).
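As an illustration of the interpolation idea -- a sketch of ours, assuming only that $ p $ is available as a callable accepting complex arguments -- the values $ p(x), \ppie{1}{e}(x), \ldots, \ppie{n-1}{e}(x) $ at a given point $ x $ can be read off from the coefficients of the univariate polynomial $ t \mapsto p(x + te) $, recovered from its values at the $ (n+1) $-st roots of unity.
\begin{verbatim}
# A sketch (ours): evaluate p(x), p'_e(x), ..., p^{(n-1)}_e(x) by
# interpolating t -> p(x + t e) at roots of unity and recovering the
# coefficients via a discrete Fourier transform, using
# p^{(k)}_e(x) = k! * (coefficient of t^k in p(x + t e)).
import numpy as np
from math import factorial

def derivative_values(p, x, e, n):
    N = n + 1
    ts = np.exp(2j * np.pi * np.arange(N) / N)      # (n+1)-st roots of unity
    vals = np.array([p(x + t * e) for t in ts])     # values of t -> p(x+te)
    coeffs = np.fft.fft(vals) / N                   # p(x+te) = sum_k c_k t^k
    return [float(factorial(k) * coeffs[k].real) for k in range(n)]

# Toy check with the polyhedral polynomial p(x) = x1*x2*x3 (n = 3):
p = lambda z: z[0] * z[1] * z[2]
print(derivative_values(p, np.array([1.0, 2.0, 3.0]), np.ones(3), 3))
# prints approximately [6.0, 11.0, 12.0]
\end{verbatim}
Gradients and Hessians of the $ \ppie{i}{e} $ can be obtained in the same manner by interpolating $ t \mapsto Dp(x+te) $ and $ t \mapsto D^2p(x+te) $; the sketch ignores the conditioning issues a serious implementation would need to address.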
\section{{\bf Prelude to the Analysis}} \label{s.c}
Now we turn to proving the results. The theorems are proven in the order in which they were stated with the exceptions of \hyperlink{targ_main_thm_part_one}{Part I} of the Main Theorem and Theorem~\hyperlink{targ_thm_six}{6}. The proof of the first is delayed because it depends on theorems that were stated later. The proof of Theorem 6 is delayed, until the end, due to the combination of the proof being long and the theorem being less important than others.
To ease burdens on the reader, each theorem is restated before its proof, and is renumbered to match the section in which it is proven (in part so it is clear there is no circularity among the proofs). Additionally, concepts and definitions are recalled as they first retake center stage, and a few supplementary results are presented. Thus, the reader is freed from having to refer to the preceding ``overview of results.''
Several proofs rely fundamentally on results from \cite{renegar}, a paper on structural aspects of hyperbolicity cones (and hyperbolic programs), a paper for which a primary goal was to provide a ready reference of ``lower-level'' details so that subsequent papers (such as the present one) could avoid drawn-out proofs. The relevant results are presented in propositions or theorems at the beginning of sections where the results are first needed, with the exception of the following results.
One result from \cite{renegar} is used time and time again -- the characterization we recorded as (\ref{e.a.a}), that is,
\begin{equation} \label{e.c.a}
\Lambdaiep{i}{e} = \{ x:\ppie{j}{e}(x) \geq 0 \textrm{ for all $j = i, \ldots, n-1 $} \} \; .
\end{equation}
For ease of reference we record the following two characterizations that are similar to the one above:
\begin{equation} \label{e.c.b}
\Lambdaiepp{i}{e} = \{ x:\ppie{j}{e}(x) > 0 \textrm{ for all $j = i, \ldots, n-1 $} \} \; ,
\end{equation}
\begin{equation} \label{e.c.c}
\Lambdaiepp{i}{e} = \{ x:\ppie{i}{e}(x) > 0 \textrm{ and } \ppie{j}{e}(x) \geq 0 \textrm{ for all $j = i+1, \ldots, n-1 $} \} \; .
\end{equation}
As was noted earlier, (\ref{e.c.a}) is immediate from Proposition 18 and Theorem 20 in \cite{renegar}. The representations (\ref{e.c.b}) and (\ref{e.c.c}) are established in the two paragraphs following that theorem.
From the above characterizations it is readily proven that, for $ 1 \leq i \leq n-1 $,
\begin{equation} \label{e.c.d}
x \in \Lambdaiep{i}{e} \setminus \Lambdaiep{i-1}{e} \quad \Rightarrow \quad \ppie{i-1}{e}(x) < 0
\end{equation}
and
\begin{equation} \label{e.c.e}
x \in \partial \Lambdaiep{i}{e} \quad \Rightarrow \quad \ppie{i-1}{e}(x) \leq 0
\; .
\end{equation}
On a different note, recall that the $ k^{th} $ differential $ D^k f(x) $ of a function $ f: \mathbb{R}^n \rightarrow \mathbb{R} $ is defined on $ k $-tuples of vectors by
\[ D^k f(x)[v_1, \ldots, v_k] := \smfrac{d}{dt_1} \cdots \smfrac{d}{dt_k} f(x + t_1 v_1 + \cdots + t_k v_k)|_{t_k = 0} \cdots |_{t_1 = 0} \; . \]
If $ f $ is analytic -- as are hyperbolic polynomials -- the differential is well-defined for every $ k $, and is symmetric (i.e., for all permutations $ \pi $ of $ \{ 1, \ldots, k \} $, the values $ D^k f(x)[v_{ \pi(1)}, \ldots, v_{\pi(k)}] $ are identical). Moreover, the differentials are multilinear, that is, $ v_i \mapsto D^kf(x)[v_1, \ldots, v_k] $ is linear when $ v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_k $ are fixed.
Fixing vectors $ w_1, \ldots, w_j $ results in a symmetric, $ (k-j) $-multilinear form
\[ [v_1, \ldots, v_{k-j}] \mapsto D^kf(x)[w_1, \ldots, w_j, v_1, \ldots, v_{k-j}] \; . \]
This form is denoted $ D^kf(x)[w_1, \ldots, w_j] $.
A fact used extensively, and easily proven (by, say, induction on $ j $ with base case $ j = 0 $), concerns the case when $ f $ is homogeneous of degree $ \ell $ (i.e., $ f(tx) = t^{ \ell} f(x) $): for $ 0 \leq j \leq k \leq \ell $,
\[ D^kf(x)[\underbrace{x, \ldots,x}_{ \textrm{$ j $ times}}] = \smfrac{(\ell -k+j)!}{( \ell -k)!} D^{k-j}f(x) \; . \]
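(For instance, taking $ j = k = 1 $ recovers Euler's identity $ Df(x)[x] = \ell \, f(x) $, while taking $ j = k $ gives $ D^kf(x)[x, \ldots, x] = \smfrac{\ell !}{( \ell - k)!} \, f(x) $.)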
In particular, for $ 0 \leq j \leq k \leq n- i $,
\begin{equation} \label{e.c.f}
D^k\ppie{i}{e}(x)[\underbrace{x, \ldots,x}_{ \textrm{$ j $ times}}] = \smfrac{(n-i-k+j)!}{(n-i-k)!} D^{k-j}\ppie{i}{e}(x) \; .
\end{equation}
A consequence of (\ref{e.c.f}) used occasionally is that if $ 0 \leq k \leq n-i $, then
\begin{equation} \label{e.c.g}
D^k \ppie{i}{e}(e) = D^{k+i} p(e)[\underbrace{e, \ldots,e}_{ \textrm{$ i $ times}}]
= \smfrac{(n-k)!}{(n-i-k)!} D^{k}p(e)
\end{equation}
-- in particular, $ D^k \ppie{i}{e}(e) $ is a positive multiple of $ D^k p(e) $.
Finally, we mention that the nesting of the derivative cones often is used implicitly. Two examples of assertions made without the nesting being mentioned: ``If $ x \in \optie{i}{e} $ and $ x \in \feas $, then $ x \in \opt $.'' ``If $ x \in \Lambdaiepp{i}{e} $ then $ \ppie{i+1}{e}(x) > 0 $.''
\section{{\bf Proofs of Theorems 1 and 2, and the Nesting of Swaths}} \label{s.d}
The following theorem records results from \cite{renegar} that are used in this section.
\begin{thm} \label{t.d.a}
For any polynomial hyperbolic in direction $ e $, the following hold:
\begin{enumerate}
\item For $ i = 1, \ldots, n-1 $, $ \Lambdaiep{i-1}{e} \cap \partial \Lambdaiep{i}{e}= \Lambdap \cap \partial \Lambdaiep{i}{e} $.
\item For $ i = 0, \ldots, n-1 $, the intersection $ \Lambdap \cap \partial \Lambdaiep{i}{e} $ is independent of $ e \in \Lambdapp $.
\item For $ i = 0, \ldots, n-2 $, the lineality space of $ \Lambdaiep{i}{e} $ consists precisely of the points in $ \Lambdaiep{i}{e} \cap \partial \Lambdaiep{n-1}{e} $.
\item The cones $ \Lambdaiep{0}{e}, \Lambdaiep{1}{e}, \ldots, \Lambdaiep{n-2}{e} $ have the same lineality space (thus, if one of the cones is regular, all are regular).
\item If $ \Lambdap $ is regular and $ 0 \leq i \leq n-2 $, then any face of $ \Lambdaiep{i}{e} $ either is a face of $ \Lambdap $ or is an extreme direction of $ \Lambdaiep{i}{e} $.
\end{enumerate}
\end{thm}
\begin{proof} Result (A) is Proposition 16 in \cite{renegar}; (B) is established from Theorem 12 by induction (with base case $ i = 0 $) and by use of Proposition 22 (which shows that although the definition of the sets ``$ \partial^m \Lambdap $'' appearing in Theorem 12 depends on a derivative direction $ e $, the sets actually are independent of $ e \in \Lambdapp $); (C) follows from Proposition 11; (D) is immediate from Proposition 13 by induction with base case $ i = 1 $ (alternatively, follows from (A) and (C)); (E) is Proposition 24. \end{proof}
Here again is Theorem~\hyperlink{targ_thm_one}{1}, but renamed:
\begin{cor} \label{t.d.b}
Assume $ \Lambdap $ is regular and $ 0 \leq i \leq n-2 $.
\begin{enumerate}
\item The intersection $ \Lambdap \cap \partial \Lambdaiep{i}{e} $ is independent of $ e \in \Lambdapp $ \\
(thus, a face of $ \Lambdap $ which is a boundary face of $ \Lambdaiep{i}{e} $ for some $ e \in \Lambdapp $ is a boundary face for all $ e \in \Lambdapp $).
\item If $ e \in \Lambdapp $ then any boundary face of $ \Lambdaiep{i}{e} $ either is a face of $ \Lambdap $ \\ or is a single ray contained in $ \Lambdaiepp{i+1}{e} \setminus \Lambdaiep{i-1}{e} $.
\end{enumerate}
\end{cor}
\begin{proof}
Parts (B) and (E) of Theorem~\ref{t.d.a} mostly prove the corollary. It remains only to show
\begin{equation} \label{e.d.a}
x \in ( \partial \Lambdaiep{i}{e}) \setminus \Lambdap \quad \Rightarrow \quad x \in \Lambdaiepp{i+1}{e} \setminus \Lambdaiep{i-1}{e} \; .
\end{equation}
However, Theorem~\ref{t.d.a}(A) immediately gives
\[ x \in ( \partial \Lambdaiep{i}{e}) \setminus \Lambdap \quad \Rightarrow \quad x \notin \Lambdaiep{i-1}{e} \; , \]
and upon substituting $ i+1 $ for $ i $, immediately gives
\[ x \in \Lambdaiep{i}{e} \setminus \Lambdap \quad \Rightarrow \quad x \notin \partial \Lambdaiep{i+1}{e} \; , \]
that is, gives
\[ x \in \Lambdaiep{i}{e} \setminus \Lambdap \quad \Rightarrow \quad x \in \Lambdaiepp{i+1}{e} \; .\]
The implication (\ref{e.d.a}) is thus established, concluding the proof.
\end{proof}
{\bf Throughout the remainder of the paper, our standard assumptions apply without being made explicit in the statements of theorems, propositions, etc.} Recall these assumptions regard the objective function $ c^* x $, the equations $ Ax = b $, and the cone $ \Lambdap $:
\begin{itemize}
\item $ b \neq 0 $ (in particular, the origin is infeasible)
\item $ A $ is surjective (i.e., onto)
\item $ c^* $ is not in the image of $ A^* $ (otherwise all feasible points would be optimal)
\item $ \Lambdap $ is a regular cone, that is, contains no subspaces other than $ \{ 0 \} $
\end{itemize}
Thus, by Theorem~\ref{t.d.a}(D), $ \Lambdaiep{i}{e} $ is regular for all $ 0 \leq i \leq n-2 $.
Recall the definitions of swaths and cores:
\begin{align*}
& \swath(i) := \{ e \in \Lambdapp: Ae = b \textrm{ and } \optie{i}{e} \neq \emptyset \} \; , \\
& \core(i) := \{ e \in \swath(i): \optie{i}{e} = \opt \} \; .
\end{align*}
Recall, too, that the above corollary (aka Theorem 1) and our standard assumptions easily imply that if $ 1 \leq i \leq n-2 $ and $ e \in \swath(i) \setminus \core(i) $, then $ \optie{i}{e} $ consists of a single point, denoted $ \xie{i}{e} $. The corollary implies, moreover, that
\begin{equation} \label{e.d.b}
\xie{i}{e} \in \relint(\feasie{i+1}{e}) \setminus \feasie{i-1}{e} \; .
\end{equation}
In \S\ref{s.b} we noted that from the nesting
\[ \feasie{0}{e} \subseteq \feasie{1}{e} \subseteq \cdots \subseteq \feasie{n-1}{e} \]
easily follows the nesting of cores,
\[ \core(0) \supseteq \core(1) \supseteq \cdots \supseteq \core(n-1) \; . \]
We now establish (a bit more than) the nesting of swaths.
\begin{prop} \label{t.d.c}
The swaths are nested,
\[ \swath(0) \supseteq \swath(1) \supseteq \cdots \supseteq \swath(n-1) \]
(thus, if any swath is nonempty, so is $ \swath(0) $ -- equivalently, so is $ \opt $).
Moreover, if $ \swath(n-1) \neq \emptyset $, or if $ \swath(i) \setminus \core(i) \neq \emptyset $ for some $ 1 \leq i \leq n-2 $, then $ \opt $ is a bounded set.
\end{prop}
\begin{proof} We know $ \core(i) \subseteq \core(i-1) $, so to prove the nesting of swaths, we assume $ e \in \swath(i) \setminus \core(i) $ and show $ e \in \swath(i-1) $.
First consider the case $ 1 \leq i \leq n-2 $, where $ \optie{i}{e} $ consists of the single point $ \xie{i}{e} $ -- in particular, $ \optie{i}{e} $ is nonempty and bounded, from which follows by standard convexity arguments that the level sets $ \{ x \in \feasie{i}{e}: c^*x \leq \alpha \} $ ($ \alpha \in \mathbb{R} $) are bounded. Since $ \feasie{i-1}{e} \subseteq \feasie{i}{e} $, the level sets $ \{ x \in \feasie{i-1}{e}: c^*x \leq \alpha \} $ also are bounded. From this and the nonemptiness of $ \feasie{i-1}{e} $ (indeed, $ e \in \feasie{i-1}{e} $) easily follows $ \optie{i-1}{e} \neq \emptyset $, that is, $ e \in \swath(i-1) $, as desired.
Observe, too, that the boundedness of the level sets $ \{ x \in \feasie{i}{e}: c^*x \leq \alpha \} $, and the relations $ \feasie{i}{e} \supseteq \feasie{j}{e} $ for $ j = 0, \ldots, i $, imply $ \optie{j}{e} $ is bounded for all $ j = 0, \ldots, i $ -- in particular, $ \opt = \optie{0}{e} $ is bounded, thereby establishing the final statement of the proposition for the case that $ \swath(i) \setminus \core(i) \neq \emptyset $ for some $ 1 \leq i \leq n-2 $. (We note as an aside that always, $ \core(0) = \swath(0) $, so never is $ \core(0) $ a proper subset of $ \swath(0) $.)
It remains to prove that $ \swath(n-1) \subseteq \swath(n-2) $, and that if $ \swath(n-1) \neq \emptyset $, then $ \opt $ is bounded. Here we rely on the following result, which is immediate from (C) and (D) of Theorem~\ref{t.d.a}:
\begin{enumerate}
\addtocounter{enumi}{5}
\item For $ i = 0, \ldots, n-2 $, the lineality space of $ \Lambdap $ consists precisely of the points in $ ( \partial \Lambdaiep{n-1}{e}) \cap \Lambdaiep{i}{e} $.
\end{enumerate}
Now, for every $ e \in \Lambdapp $, the cone $ \Lambdaiep{n-1}{e} $ is a halfspace, i.e., $ \Lambdaiep{n-1}{e} = \{ x: d^*x \geq 0 \} $ for some linear functional $ d^* $. Moreover, since $ \Lambdap $ is a regular cone, the hyperplane $ \{ x: d^*x = 0 \} $ intersects the cone $ \Lambdaiep{n-2}{e} $ only at the origin (by (F) with $ i = n-2 $). Consequently, the level sets $ \{ x \in \Lambdaiep{n-2}{e}: d^* x \leq \alpha \} $ ($ \alpha \in \mathbb{R} $) are bounded.
Assume $ e \in \swath(n-1) $. Then, clearly,
\[ \optie{n-1}{e} = \{ x: Ax = b \textrm{ and } d^*x = 0 \} \; , \]
and the first-order conditions are satisfied:
\[ c^* = y^*A + \lambda d^* \quad \textrm{for some $ \lambda \geq 0 $ and $ y^* $} \; . \]
Since $ c^* $ is not in the image of $ A^* $ (by assumption), it must be that $ \lambda > 0 $, from which follows that each level set $ \{ x \in \Lambdaiep{n-2}{e}: Ax = b \textrm{ and } c^*x \leq \beta \} $ is a level set $ \{ x: Ax = b \textrm{ and } d^*x \leq \alpha \} $ for some $ \alpha $ (depending on $ \beta $), and hence is bounded (according to the conclusion of the preceding paragraph). From this easily follows $ \optie{n-2}{e} \neq \emptyset $, that is, $ e \in \swath(n-2) $. It also follows, of course, that for all $ 0 \leq i \leq n-2 $, the level sets $ \{ x \in \Lambdaiep{i}{e}: Ax = b \textrm{ and } c^*x \leq \beta \} $ are bounded, and hence that $ \optie{i}{e} $ is bounded -- in particular, $ \opt = \optie{0}{e} $ is bounded. \end{proof}
We close this section by restating and proving Theorem \hyperlink{targ_thm_two}{2}.
\begin{thm} \label{t.d.d}
\quad $ \swath(n-1) = \mathrm{Central \, Path} $
\end{thm}
\begin{proof} The central path consists precisely of the points in $ \{ x \in \Lambdapp: Ax = b \} $ which minimize $ f_{\eta}(x) := \eta \, c^* x - \ln p(x) $ for some $ \eta > 0 $. As the functions $ f_{\eta} $ are convex, the first-order optimality conditions are sufficient as well as necessary; thus, the central path consists precisely of the points $ e $ satisfying
\begin{equation} \label{e.d.c}
\left. \begin{array}{l}
\eta c^* - \smfrac{1}{p(e)} Dp(e) = y^*A \quad \textrm{for some $ \eta > 0 $ and $ y^* $} \\
Ae = b \\
e \in \Lambdapp \; . \end{array} \quad \right\}
\end{equation}
For every $ e \in \Lambdapp $, on the other hand, from the linearity of $ x \mapsto \ppie{n-1}{e}(x) $, we have $ \ppie{n-1}{e}(x) = D\ppie{n-1}{e}(z)[x] $ for every point $ z $ -- in particular for $ z = e $ -- and thus
\[ \Lambdaiep{n-1}{e} = \{ x: D\ppie{n-1}{e}(e)[x] \geq 0 \} \; . \]
Clearly, then, $ \optie{n-1}{e} \neq \emptyset $ if and only if
\[ c^* = \lambda D\ppie{n-1}{e}(e) + w^*A \quad \textrm{for some $ \lambda \geq 0 $ and $ w^* $} \; . \]
Since $ c^* $ is not in the range of $ A^* $ (by assumption), and since $ D\ppie{n-1}{e}(e) = (n-1)! \, Dp(e) $ (by (\ref{e.c.g})), we thus have that $ e \in \swath(n-1) $ if and only if
\begin{equation} \label{e.d.d}
\left. \begin{array}{l}
c^* = \lambda \, (n-1)! \, Dp(e) + w^*A \quad \textrm{for some $ \lambda > 0 $ and $ w^* $} \\
Ae = b \\
e \in \Lambdapp \; . \end{array} \quad \right\}
\end{equation}
Obviously, the conditions (\ref{e.d.d}) and (\ref{e.d.c}) are equivalent. \end{proof}
\section{{\bf Proof of Theorem 3}} \label{s.e}
Having finished proving that for each $ e \in \swath(i) \setminus \core(i) $, the set $ \optie{i}{e} $ consists of a unique point $ \xie{i}{e} $, it is time to show that the differential equation
\[ \smfrac{d}{dt} e(t) = \xie{i}{e(t)} - e(t), \quad e(0) \in \swath(i) \setminus \core(i) \]
results in well-defined trajectories. This is immediate from Theorem 3, which we now restate and prove.
\begin{thm} \label{t.e.a}
Assume $ 1 \leq i \leq n-2 $.
The set $ \swath(i) \setminus \core(i) $ is open in the relative topology of $ \relint(\feas) $.
Moreover, the map $ e \mapsto \xie{i}{e} $ is analytic on $ \swath(i) \setminus \core(i) $.
\end{thm}
To establish the theorem, we introduce another theorem and a proposition.
The proof of the proposition is left to the reader, as it follows entirely standard lines, and is primarily an application of the Implicit Function Theorem to optimality conditions (see, for example, \S2.4 of \cite{fm} for similar results).
\begin{prop} \label{t.e.b}
Assume $ f: {\mathcal E}_1 \times {\mathcal E}_2 \rightarrow \mathbb{R} $ is analytic, where $ {\mathcal E}_1 $ and $ {\mathcal E}_2 $ are Euclidean spaces. For $ z \in {\mathcal E}_2 $, define $ f_z: {\mathcal E}_1 \rightarrow \mathbb{R} $ by $ f_z(y) := f(y,z) $, and consider for some linear functional $ y \mapsto d^*y $ the following family of optimization problems parameterized by $ z $:
\begin{equation} \label{e.e.a}
\begin{array}{rl}
\min_y & d^*y \\
\textrm{s.t.} & f_z(y) \geq 0 \; . \end{array}
\end{equation}
For some $ \bar{z} $, assume $ \bar{y} $ is a local optimum with the properties that $ Df_{\bar{z}} ( \bar{y}) \neq 0 $, and $ D^2 f_{\bar{z}}( \bar{y})[v,v] < 0 $ for all $ v \neq 0 $ satisfying $ D f_{ \bar{z}}( \bar{y})[v] = 0 $. Then there exists an analytic function $ z \mapsto y_z $ defined on an open neighborhood of $ \bar{z} $, possessing the properties that $ y_{ \bar{z}} = \bar{y} $ and that, for each $ z $ in the neighborhood, the point $ y_z $ is locally optimal for (\ref{e.e.a}).
\end{prop}
The following theorem collects results from \cite{renegar}. (Keep in mind that, as was emphasized in \S\ref{s.d}, our standard assumptions are always assumed to be in effect. The relevant assumption below is regularity of $ \Lambdap $, but this assumption is immaterial for parts (A) and (B).)
\begin{thm} \label{t.e.c}
Assume $ 1 \leq i \leq n-1 $, $ \bar{e} \in \Lambdapp $ and $ \bar{x} \in ( \partial \Lambdaiep{i}{ \bar{e} }) \setminus \Lambdaiep{i-1}{ \bar{e}} $ (equivalently, by Theorem~\ref{t.d.a}(A), $ \bar{x} \in ( \partial \Lambdaiep{i}{ \bar{e} }) \setminus \Lambdap $).
\begin{enumerate}
\item There exist open neighborhoods $ U $ of $ \bar{e} $, and $ V $ of $ \bar{x} $, with the property that
\[ e \in U \quad \Rightarrow \quad \Lambdaiep{i}{e} \cap V = \{ x \in V: \ppie{i}{e}( x ) \geq 0 \} \; . \]
\item In a neighborhood of $ \bar{x} $, $ \partial \Lambdaiep{i}{ \bar{e} } $ is a manifold, whose tangent space at $ \bar{x} $ is $ \{ v: D\ppie{i}{\bar{e} }(\bar{x} )[v] = 0 \} $.
\item Further assume $ i \leq n-2 $. Then $ D^2 \ppie{i}{\bar{e} }(\bar{x} )[v,v] < 0 $ for all vectors $ v $ that both satisfy $ D \ppie{i}{\bar{e} }(\bar{x} )[v]= 0 $ and are not scalar multiples of $ \bar{x} $.
\end{enumerate}
\end{thm}
\begin{proof}
Recall the characterization (\ref{e.c.a}), that is,
\begin{equation} \label{e.e.b}
\Lambdaiep{i}{e} = \{ x: \ppie{j}{e}(x) \geq 0 \textrm{ for all $ j=i, \ldots, n-1 $} \}.
\end{equation}
For $ i = n-1 $, result (B) is trivial because $ x \mapsto \ppie{n-1}{e}(x) $ is a linear functional, and (A) is immediate with the additional observation that the linear functional varies continuously in $ e $. Henceforth we assume $ 1 \leq i \leq n-2 $.
Theorem~\ref{t.d.a}(A) gives
\[
\bar{x} \in (\partial \Lambdaiep{i}{\bar{e} }) \setminus \Lambdap \quad \Rightarrow \quad \bar{x} \in \Lambdaiepp{i+1}{ \bar{e} } \; ,
\]
and hence,
\begin{equation} \label{e.e.c}
\ppie{j}{\bar{e}}( \bar{x}) > 0 \quad \textrm{for $ j = i+1, \ldots, n-1$} \; .
\end{equation}
Thus, since the polynomials vary continuously in $ e $ as well as in $ x $, there exist open neighborhoods $ U $ of $ \bar{e} $, and $ V $ of $ \bar{x} $, for which
\[ (e,x) \in U \times V \quad \Rightarrow \quad \ppie{j}{e}(x) > 0 \textrm{ for all $ j = i+1 , \ldots, n-1 $} \; . \]
This and (\ref{e.e.b}) establish statement (A) of the present theorem.
In light of (A), to establish (B) we need only show $ D \ppie{i}{ \bar{e}}( \bar{x}) \neq 0 $. However, $ D \ppie{i}{ \bar{e}}( \bar{x})[\bar{e}] = \ppie{i+1}{ \bar{e}}( \bar{x}) > 0 $ (by (\ref{e.e.c})); hence, $ D \ppie{i}{ \bar{e}}( \bar{x}) \neq 0 $.
Finally, (C) is a restatement of Theorem 14 in \cite{renegar}, with $ \Lambdaiep{i-1}{e} $ substituted for $ \Lambdap $ (the hypothesis of regularity is satisfied due to Theorem~\ref{t.d.a}(D)). \end{proof}
\noindent
{\bf {\em Proof of Theorem~\ref{t.e.a}.}} Assume $ 1 \leq i \leq n-2 $ and $ e \in \swath(i) \setminus \core(i) $.
We claim
\begin{equation} \label{e.e.d}
D \ppie{i}{e}(\xie{i}{e})[e - \xie{i}{e}] \neq 0 \; .
\end{equation}
Indeed, as $ \xie{i}{e} \in ( \partial \Lambdaiep{i}{e}) \setminus \Lambdaiep{i-1}{e} $ (using (\ref{e.d.b})), the tangent space of $ \Lambdaiep{i}{e} $ at $ \xie{i}{e} $ is precisely $ \{ v: D\ppie{i}{e}(\xie{i}{e})[v] = 0 \} $, by Theorem~\ref{t.e.c}(B). However, $ e $ is in the interior of the convex cone $ \Lambdaiep{i}{e} $, and hence the difference $ e - \xie{i}{e} $ cannot be in the tangent space. The claim is thus established.
Let $ \bar{{\mathcal E}} := \{ x: Ax = 0 \} $, a Euclidean (sub)space. Define $ f: \bar{{\mathcal E}} \times \bar{{\mathcal E}} \rightarrow \mathbb{R} $ by $ f(y,z) = \ppie{i}{z+e}(y+e) $, and let $ f_z: \bar{{\mathcal E}} \rightarrow \mathbb{R} $ be the function $ f_z(y) := f(y,z) $.
Let $ d^* $ be the projection of $ c^* $ onto $ \bar{{\mathcal E}} $ (i.e., the linear functional on $ \bar{{\mathcal E}} $ satisfying $ d^*y = c^*y $ for all $ y \in \bar{{\mathcal E}} $). For any $ x $ satisfying $ Ax = b $, the projection of $ D\ppie{i}{e}(x) $ onto $ \bar{{\mathcal E}} $ is $ D f_0(y) $ where $ y = x - e $. In particular, for all $ v \in \bar{{\mathcal E}} $,
\[ D \ppie{i}{e}( \xie{i}{e})[v] = D f_0( \bar{y})[v] \quad \textrm{where $ \bar{y} := \xie{i}{e} - e$} \; . \]
Since $ e - \xie{i}{e} \in \bar{{\mathcal E}} $, (\ref{e.e.d}) thus implies $ D f_0( \bar{y}) \neq 0 $.
Similarly, for all $ v \in \bar{{\mathcal E}} $,
\[ D^2 f_0( \bar{y})[v,v] = D^2 \ppie{i}{e}(\xie{i}{e})[v,v] \; . \]
Thus, since $ \xie{i}{e} \notin \bar{{\mathcal E}} $ (because $ b \neq 0 $, by assumption), Theorem~\ref{t.e.c}(C) implies $ D^2 f_0( \bar{y})[v,v] < 0 $ for all $ 0 \neq v \in \bar{{\mathcal E}} $ satisfying $ D f_0( \bar{y})[v] = 0 $.
In all, the hypotheses of Proposition~\ref{t.e.b} are satisfied for $ \bar{z} = 0 $ and $ \bar{y} $. Letting $ z \mapsto y_z $ be the analytic function whose existence is ensured by the proposition, it is readily apparent that $ y_z + e $ is locally optimal for
\[ \begin{array}{rl}
\min_x & c^* x \\
\textrm{s.t.} & Ax = b \\
& \ppie{i}{z+e}(x) \geq 0 \; . \end{array} \]
Since $ z \mapsto y_z $ is continuous (it's even analytic), it thus follows from Theorem~\ref{t.e.c}(A) that for $ z $ in an open neighborhood of the origin in $ \bar{{\mathcal E}} $, the point $ y_z + e $ is locally optimal -- and hence globally optimal -- for the convex optimization problem $ \hpie{i}{z+e} $ -- that is, $ y_z + e \in \optie{i}{z+e} $, and so $ z+e \in \swath(i) $.
However, for $ z $ in a possibly smaller open neighborhood of the origin, $ y_z + e \notin \Lambdap $, because $ y_0 + e = \xie{i}{e} \notin \Lambdap $ and $ z \mapsto y_z $ is continuous. Consequently, for $ z $ in this open neighborhood of the origin, $ \optie{i}{z+e} \setminus \Lambdap \neq \emptyset $, and hence, $ z+e \notin \core(i) $; thus, $ z+e \in \swath(i) \setminus \core(i) $, and clearly, $ \xie{i}{z+e} = y_z + e $.
As $ e $ was an arbitrary point in $ \swath(i) \setminus \core(i) $, the proof is complete. \hfill $ \Box $ \vspace{2mm}
In closing this section, we recall that due simply to the strict curvature of (regular) second order cones, for every $ e \in \swath(n-2) $ there is a unique optimal solution of $ \hpie{n-2}{e} $, thus naturally extending the map $ e \mapsto \xie{n-2}{e} $ to all of $ \swath(n-2) $. Additionally, every $ \bar{e} \in \Lambdapp $ and every $ 0 \neq \bar{x} \in \partial \Lambdaiep{n-2}{\bar{e} } $ satisfies properties (A), (B) and (C) of Theorem~\ref{t.e.c} when $ i = n-2 $ (regardless of whether $ \bar{x} $ satisfies the theorem's hypothesis $ \bar{x} \notin \Lambdap $). Consequently, the proof of Theorem~\ref{t.e.a} is easily adapted to show that the extended map $ e \mapsto \xie{n-2}{e} $ is analytic on $ \swath(n-2) $ (and that $ \swath(n-2) $ is open in the relative topology of $ \relint(\feas) $).
\section{{\bf Proof of Theorem 5}} \label{s.f}
We have been proving the theorems of \S\ref{s.b} in the order they appeared, and have just finished establishing Theorem 3, that is, have just finished establishing the well-definedness of the trajectories arising from the differential equation
\[ \smfrac{d}{dt} e(t) = \xie{i}{e(t)} - e(t) ,\quad e(0) \in \swath(i) \setminus \core(i) \; . \]
To continue proving theorems in the order they appeared, we would next prove Part~I of the Main Theorem, which states that either the trajectory $ t \mapsto e(t) $ or the path $ t \mapsto \xie{i}{e(t)} $ converges to optimality (perhaps both). However, there is groundwork to be laid before proving that result, so we jump to the one stated after it, a motivational theorem pertaining to the trajectories in the special case $ i = n-2 $.
For any $ e $, the polynomial $ x \mapsto \ppie{n-2}{e}(x) $ is quadratic, and hence for $ e \in \Lambdapp $, $ \Lambdaiep{n-2}{e} $ is a second-order cone, its regularity being due to Theorem~\ref{t.d.a}(D).
Recall we say that $ t \mapsto e(t) $ $ ( 0 \leq t < T) $ is a ``maximal trajectory'' for the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ if $ e(0) \in \swath(i) \setminus \core(i) $ and $ T $ ($ = T(e(0)) $) is the time when the trajectory either reaches the boundary of $ \swath(i) \setminus \core(i) $ or escapes to infinity.
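For readers who wish to experiment, here is a toy sketch -- ours -- of numerically following the dynamics by forward Euler steps; the map $ e \mapsto \xie{i}{e} $ and a membership test for $ \swath(i) \setminus \core(i) $ are assumed to be supplied as callables (the names below are ours). Note that each Euler step preserves $ Ae = b $ exactly, since $ A \xie{i}{e} = b $ as well.
\begin{verbatim}
# A toy sketch (ours) of following e' = xi(e) - e by forward Euler;
# xi returns xi^{(i)}_e and in_domain tests membership in swath(i)\core(i).
import numpy as np

def follow_trajectory(e, xi, in_domain, h=1e-2, steps=10000):
    e = np.asarray(e, dtype=float)
    traj = [e.copy()]
    for _ in range(steps):
        if not in_domain(e):
            break                       # left swath(i) \ core(i); stop
        e = e + h * (xi(e) - e)         # Euler step; Ae = b is preserved
        traj.append(e.copy())
    return np.array(traj)
\end{verbatim}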
Following is a restatement and proof of Theorem \hyperlink{targ_thm_five}{5}, whose only purposes in \S\ref{s.b} were to motivate the choice of terminology ``{\em central} swaths'' and to give insight into the origin of the idea that in general the trajectories $ t \mapsto e(t) $ (or paths $ t \mapsto \xie{i}{e(t)} $) converge to optimality. After completing the paper, the reader likely will not have much difficulty in making the statement of the theorem more complete.\footnote{More specifically, after understanding the paper the reader likely will not have much difficulty in showing the following, in which
\begin{align*}
\cpath & = \{ z( \eta ): \eta > 0 \} \; , \\
\ecpath & = \{ z( \eta ): \eta \in \mathbb{R} \}
\end{align*}
(``extended central path''), and $ z( \eta ) $ solves $ \min_x \eta \, c^* x - \ln p(x) $, s.t., $ Ax = b $, $ x \in \Lambdapp $:
\begin{itemize}
\item $ \cpath \subseteq \swath(n-2) $
\item If $ e(0) \in \ecpath \cap (\swath(n-2) \setminus \core(n-2)) $, then $ T(e(0)) = \infty $ and
\[ \{ e(t): 0 \leq t < \infty \} = \{ e \in \ecpath : c^* e \leq c^* e(0) \} \; . \]
\item If $ e \in \ecpath \cap \core(n-2) $ then
\[ \core(n-2) = \ell \cap \Lambdapp = \ecpath \; , \]
where $ \ell $ is the line containing $ e $ and the (unique) point in $ \optie{n-2}{e} = \opt $.
\end{itemize}
}
\begin{thm} \label{t.f.a}
Assume $ t \mapsto e(t) $ $ (0 \leq t < T) $ is a maximal trajectory arising from the dynamics $ \dot{e}(t) = \xie{n-2}{e(t)} - e(t) $, starting at $ e(0) \in \swath(n-2) \setminus \core(n-2) $. \begin{center} If $ e(0) \in \cpath $ then $ \{ e(t): 0 \leq t < T \} \subseteq \cpath $. \end{center}
\end{thm}
\begin{proof} The proof makes use of results known to anyone familiar with the interior-point method literature.
The central path is the set $ \{ z( \eta ): \eta > 0 \} $, where $ z( \eta ) $ solves
\[ \begin{array}{rl}
\min_x & \eta \, c^* x - \ln p(x) \\
\textrm{s.t.} & Ax = b \\
& x \in \Lambdapp . \end{array} \]
In the terminology of Nesterov and Nemirovski \cite{nn}, the barrier function $ x \mapsto - \ln p(x) $ is self-concordant, a fact whose implications were first developed by G\"uler \cite{guler}.
The barrier function is strictly convex on its domain, $ \Lambdapp $ (``strictly'' because $ \Lambdap $ is a regular cone). Hence, for each $ \eta > 0 $, there is at most one optimizer, $ z( \eta ) $. It is well-known from interior-point method theory that if the optimizer $ z( \eta ) $ exists for one positive value of $ \eta $, then an optimizer $ z( \eta ) $ exists for all $ \eta > 0 $, and the path $ \eta \mapsto z( \eta ) $ is analytic (using that $ x \mapsto -\ln p(x) $ is analytic).
To prove the theorem, it suffices to show that
if $ z( \eta ) $ lies in $ \swath(n-2) \setminus \core(n-2) $ (which we know by Theorem~\ref{t.e.a} to be open in the relative topology of $ \relint(\feas)$), then $ \dot{z}( \eta ) $ -- the tangent vector to the central path at $ z( \eta ) $ -- is a positive multiple of the difference $ \xie{n-2}{ z( \eta )} - z( \eta ) $. Indeed, this is sufficient to ensure that if $ z( \eta_0) \in \swath(n-2) \setminus \core(n-2) $, then as $ \eta $ increases from $ \eta_0 $, the central path will remain in $ \{ e(t): 0 \leq t < T(z( \eta_0)) \} $ -- where $ e(0) = z( \eta_0) $ -- until reaching the boundary of $ \swath(n-2) \setminus \core(n-2) $, that is, will remain within the image of the trajectory until the trajectory ends, implying $ \{ e(t): 0 \leq t < T( z( \eta_0)) \} \subseteq \cpath $.
Now we begin proving that $ \dot{z}( \eta ) $ is indeed a positive multiple of $ \xie{n-2}{z( \eta )} - z( \eta ) $.
Assuming the central path exists, for each $ \eta > 0 $ there exists $ y(\eta )^* $ which together with $ z( \eta ) $ satisfies the first-order optimality condition
\begin{equation} \label{e.f.a}
\eta c^* = \smfrac{1}{p(z(\eta ))} D p(z( \eta )) + y( \eta )^* A \; .
\end{equation}
It is well known that if $ A $ is surjective (one of our standard assumptions), then $ y( \eta )^* $ is unique, and $ \eta \mapsto y( \eta )^* $ is analytic (using that $ x \mapsto -\ln p(x) $ is analytic).
Differentiating in $ \eta $ provides equations satisfied by $ \dot{z}( \eta ) $, the tangent vector to the central path:
\begin{equation}
\label{e.f.b}
c^* = \smfrac{1}{p(z(\eta ))} D^2 p(z( \eta )) [\dot{z}( \eta )] - \smfrac{ D p(z( \eta ))[ \dot{z}( \eta )] }{p(z( \eta ))^2} D p(z( \eta )) + \dot{y}( \eta )^* A \; .
\end{equation}
It is well known, moreover, that $ c^* \dot{z}( \eta ) < 0 $.
Using (\ref{e.f.a}) to substitute for $ D p(z( \eta )) $ in (\ref{e.f.b}), and relying on $ A \dot{z}( \eta ) = 0 $, yields
\begin{equation} \label{e.f.c}
\gamma(\eta ) c^* = D^2 p( z( \eta )) [ \dot{z}(\eta ) ] + w( \eta )^* A
\end{equation}
for some constant $ \gamma (\eta ) $ and vector $ w( \eta )^* $. We aim to use this condition to show that if $ z( \eta ) \in \swath(n-2) \setminus \core(n-2) $, then $ \dot{z}( \eta ) $ is a positive multiple of $ \xie{n-2}{z( \eta )} - z( \eta ) $. To accomplish this aim, we need an appropriate characterization of $ \xie{n-2}{ z( \eta )} $, the optimal solution to $ \hpie{n-2}{z( \eta )} $.
Now, for every $ e \in \Lambdapp $, it holds that $ \Lambdaiep{n-2}{e} \cap \partial \Lambdaiep{n-1}{e} = \{ 0 \} $ (by regularity, and parts C and D of Theorem~\ref{t.d.a}). Thus,
$ \feasie{n-2}{e} \cap \partial \Lambdaiep{n-1}{e} = \emptyset $
(because $ b \neq 0 $) and, as a consequence,
\begin{align}
x \in \feasie{n-2}{e} \quad & \Rightarrow \quad \ppie{n-1}{e}(x) > 0 \label{e.f.d} \\
& \Leftrightarrow \quad D\ppie{n-2}{e}(x)[e] > 0 \nonumber \\
& \Rightarrow D\ppie{n-2}{e}(x) \neq 0 \; . \label{e.f.e}
\end{align}
On the other hand, by the characterization (\ref{e.c.a}) applied to $ \Lambdaiep{n-2}{e} $,
\begin{equation} \label{e.f.f}
\feasie{n-2}{e} = \{ x: Ax = b, \, \ppie{n-2}{e}(x) \geq 0 \textrm{ and } \ppie{n-1}{e}(x) \geq 0 \} \; .
\end{equation}
It follows from (\ref{e.f.d}), (\ref{e.f.e}) and (\ref{e.f.f}) that necessary and sufficient conditions for a point $ x $ to be optimal for the convex optimization problem $ \hpie{n-2}{e} $ are
\begin{equation} \label{e.f.g}
\left. \begin{array}{l}
\gamma c^* = D\ppie{n-2}{e}(x) + y^*A \quad \textrm{for some $ \gamma > 0 $ and $ y^* $} \\
Ax = b \\
x \in \partial \Lambdaiep{n-2}{e}
\end{array}\quad \right\} \end{equation}
Assume $ z( \eta ) \in \swath(n-2) \setminus \core(n-2) $. For brevity, write $ z $ for $ z( \eta ) $ and $ \dot{z} $ for $ \dot{z}( \eta ) $. Our goal is to show $ \dot{z} $ is a positive multiple of $ \xie{n-2}{ z} - z $ -- equivalently, to show $ \xie{n-2}{z} = z + t \dot{z} $ for some $ t > 0 $ -- equivalently, to show for some $ t > 0 $ that $ x = z + t \dot{z} $ satisfies the conditions (\ref{e.f.g}) when $ e = z $.
To fix the value of $ t $, consider the condition $ x \in \partial \Lambdaiep{n-2}{z} $. To see that there exists $ t > 0 $ for which $ x = z + t \dot{z} $ is in the boundary, begin by recalling $ c^* \dot{z} < 0 $, and hence by (\ref{e.f.a}), $ D p(z)[ \dot{z} ]< 0 $. However, by (\ref{e.c.g}), $ Dp(z) $ is a positive multiple of $ D \ppie{n-1}{z}(z) $ -- thus, $ D \ppie{n-1}{z}(z )[\dot{z}] < 0 \ $.
Since $ x \mapsto \ppie{n-1}{z }(x) $ is linear and $ \ppie{n-1}{z }(z) > 0 $, it follows that for some $ t> 0 $, the point $ z + t \dot{z} $ lies in $ \partial \Lambdaiep{n-1}{z} $. By the nesting of cones, for a (smaller) value $ t > 0 $, the point $ z + t \dot{z} $ lies in $ \partial \Lambdaiep{n-2}{z} $. Consider the value of $ t $ to be fixed thusly.
Of course $ x = z + t \dot{z} $ satisfies $ Ax = b $. To complete the proof, it remains to show there exist $ \gamma > 0 $ and $ y^* $ satisfying
\begin{equation} \label{e.f.h}
\gamma c^* = D\ppie{n-2}{z}(z + t \dot{z}) + y^*A \; .
\end{equation}
We claim, however, there is no need to be concerned with the sign of $ \gamma $, because so long as (\ref{e.f.h}) is satisfied, $ \gamma $ is forced to be positive. Indeed, if (\ref{e.f.h}) held with $ \gamma < 0 $, then $ z + t \dot{z} $ would satisfy the (sufficient as well as necessary) optimality conditions for the problem obtained from $ \hpie{n-2}{z} $ by replacing ``$ \min c^*x $'' with ``$ \max c^*x $'', contradicting that $ c^*z > c^*(z + t \dot{z}) $. And if (\ref{e.f.h}) held with $ \gamma = 0 $, then by Theorem~\ref{t.e.c}(B), $ \{ x: Ax = 0 \} $ would be a subspace of the tangent space of $ \partial \Lambdaiep{n-2}{z} $ at $ z + t \dot{z} $ -- but this would lead to a contradiction, as $ z $ is in the interior of $ \Lambdaiep{n-2}{z} $ and $ A \dot{z} = 0 $. Thus, we need only be concerned with showing there exist $ \gamma $ and $ y^* $ satisfying (\ref{e.f.h}), and not be concerned with the sign of $ \gamma $.
By linearity of $ x \mapsto D \ppie{n-2}{z}(x) $ (as $ \ppie{n-2}{z} $ is a quadratic polynomial),
\[
D\ppie{n-2}{z}(z + t \dot{z} ) = D\ppie{n-2}{z}(z) + t D\ppie{n-2}{z}( \dot{z}) \; . \]
Since $ D\ppie{n-2}{z}(z) $ is a positive multiple of $ Dp(z) $ (by (\ref{e.c.g})), and since $ Dp(z) $ satisfies (\ref{e.f.a}), to show that $ x = z + t \dot{z} $ satisfies (\ref{e.f.h}) for some $ \gamma $ and $ y^* $, it thus suffices to show
\begin{equation} \label{e.f.i}
D\ppie{n-2}{z}( \dot{z}) = \bar{ \gamma} c^* + \bar{y}^*A \quad \textrm{for some $ \bar{\gamma} $ and $ \bar{y}^* $} \; .
\end{equation}
Now, for any vectors $ u $ and $ v $,
\begin{align*}
D^2 \ppie{n-2}{z}(u)[v] & = \left. \smfrac{d}{dt} \left( D \ppie{n-2}{z}(u + tv) \right) \right|_{t=0} \\ & = D \ppie{n-2}{z}(v) \quad \textrm{(by linearity of $ x \mapsto D \ppie{n-2}{z}(x) $)} \; .
\end{align*}
In particular,
\[
D\ppie{n-2}{z}( \dot{z}) = D^2 \ppie{n-2}{z}(z)[ \dot{z}] \; , \]
and hence, by (\ref{e.c.g}),
\[
D\ppie{n-2}{z}( \dot{z}) \textrm{ is a positive multiple of } D^2 p(z)[ \dot{z}] \; . \]
Consequently, that $ \bar{\gamma} $ and $ \bar{y}^* $ can be chosen to satisfy (\ref{e.f.i}) follows from (\ref{e.f.c}), thus completing the proof. \end{proof}
\section{{\bf Proofs of Theorems 7 and 8}} \label{s.g}
We now consider the optimization problems $ \hpie{i}{e} $ for all $ 0 \leq i \leq n-1 $.
The purpose of the present section is to establish a useful characterization of those optimal solutions of $ \hpie{i}{e} $ which do not lie in $ \partial \Lambdaiep{i+1}{e} $. For $ e \in \swath(i) \setminus \core(i) $, the characterization precisely identifies the unique optimal solution $ \xie{i}{e} $ (because $ \xie{i}{e} \notin \partial \Lambdaiep{i+1}{e} $ by (\ref{e.d.b})). Although only the case $ e \in \swath(i) \setminus \core(i) $ is relevant to the Main Theorem, we record the characterization generally, because it has the potential for computational relevance also when $ e \in \core(i) $.
In this section, the only properties of the polynomials $ \ppie{i}{e} $ that are used are that they are hyperbolic and nonconstant. {\bf Thus, to ease notation, for this section we let $ p $ be any nonconstant polynomial which is hyperbolic in direction $ e $.} We let $ p'_e $ denote the derivative polynomial.
If $ p $ is linear, then $ p_e' \not\equiv 0 $ is a constant polynomial and its hyperbolicity cone $ \Lambdapp' $ is the entire Euclidean space $ {\mathcal E} $.
To aid the reader's intuition, the following proposition (not used in the sequel) explains which points in $ \opt $ can fail to lie in $ \partial \Lambdape{e}' $.
\begin{prop} \label{t.g.a}
If $ e \in \Lambdapp $ then
\begin{center} either \quad $ \opt \subset \partial \Lambdape{e}' $ \quad or \quad $ \opt \setminus \partial \Lambdape{e}' = \relint(\opt) \; . $
\end{center}
\end{prop}
\begin{proof} The proposition is trivially true when $ p $ is linear, because then, $ \partial \Lambdape{e}' = \emptyset $ and $ \relint(\opt) = \opt $ (due to $ \opt $ being an affine space). The proposition also is trivially true if $ \opt $ is the empty set or consists of a single point. Thus, assume $ \deg(p) \geq 2 $, assume $ \opt $ has more than one point, and assume $ \opt $ is not contained in $ \partial \Lambdape{e}' $.
Since $ \opt \setminus \partial \Lambdape{e}' \neq \emptyset $ (by assumption), from the containment $ \opt \subset \Lambdape{e}' $, and from the convexity of the sets $ \opt $ and $ \Lambdape{e}' $, follows by standard arguments that
\[
\relint(\opt) \cap \partial \Lambdape{e}' = \emptyset \; . \]
Thus, fixing optimal $ x $ not contained in $ \relint(\opt) $, it remains to show $ x \in \partial \Lambdape{e}' $, that is, to show $ p_e'(x) = 0 $.
Choose $ y \in \relint(\opt) $, and let $ y(t) := x + t(y-x) $ ($ t \in \mathbb{R} $). Clearly, if $ y(t) \in \feas $ then $ y(t) \in \opt $. Thus, since $ x $ is in the relative boundary of $ \opt $, it holds that $ y(t) \notin \Lambdap $ for $ t < 0 $.
Since the line segment with endpoints $ x $ and $ y $ is contained in $ \partial \Lambdap $, we have $ p(y(t)) = 0 $ for $ 0 \leq t \leq 1 $. Since the only univariate polynomial with infinitely many roots is the polynomial that is identically zero, it follows $ p(y(t)) = 0 $ for {\em all} $ t \in \mathbb{R} $. Hence, from the characterizations
\[
\Lambdaiep{k}{e} = \{ z: \ppie{j}{e}(z) \geq 0 \textrm{ for all $ j = k, \ldots, n-1$} \} \; , \]
and the fact that $ y(t) \notin \Lambdap $ for $ t < 0 $, follows $ p_e'(y(0)) = 0 $, that is, $ p_e'(x) = 0 $, as desired. \end{proof}
For $ e \in \Lambdapp $,
\[
\textrm{define } \; q_e: \Lambdappe{e}' \rightarrow \mathbb{R} \; \textrm{ by } \; q_e(x) := p(x)/ p_e'(x) \; . \]
As already mentioned, the purpose of this section is to develop a useful characterization of the set $ \opt \setminus \partial \Lambdape{e}' $. The characterization is that the points in the set are precisely the optimal solutions to the following linearly constrained optimization problem:
\begin{equation} \label{e.g.a}
\begin{array}{rl}
\min_{x \in \Lambdappe{e}'} & - \ln c^*(e-x) - q_e(x) \\
\textrm{s.t.} & Ax = b \; .
\end{array} \end{equation}
Critical to achieving the characterization (and critical to the characterization's relevance for computation) is Theorem \hyperlink{targ_thm_seven}{7}, now restated and proved.
\begin{thm} \label{t.g.b}
The rational function $ q_e: \Lambdappe{e}' \rightarrow \mathbb{R} $ is concave.
\end{thm}
\begin{proof} Introduce a new variable $ t $ and let $ P(x,t) := t \, p(x) $, a polynomial that is easily seen to be hyperbolic in direction $ E = (e,1) $. Let $ K' $ be the hyperbolicity cone for the derivative polynomial $ P'_E $ -- thus, $ K' $ is the connected component of $ S := \{ (x,t): p(x) + t p'_e(x) > 0 \} $ containing $ E $. We claim $ K' $ is precisely the interior of the epigraph for $ -q_e: \Lambdappe{e}' \rightarrow \reals $. Since $ K' $, being a hyperbolicity cone, is convex, establishing the claim will establish the theorem.
If $ x \in \partial \Lambdape{e}' $ then $ p_e'(x) = 0 $ and, according to (\ref{e.c.e}), $ p(x) \leq 0 $, so for no $ t $ do we have $ (x,t) \in K' $. Thus, now assuming $ x \in \Lambdappe{e}' $ and $ t > -q_e(x) $, to establish the claim it suffices to show there is a path in $ S $ from $ (x,t) $ to $ E = (e,1) $ (because that will imply $ (x,t) $ and $ E $ are in the same connected component of $ S $).
Choose $ \bar{t} $ satisfying $ \bar{t} > \max \{ -q_e(sx + (1-s)e): 0 \leq s \leq 1 \} $ -- the maximum exists because the line segment connecting $ x $ to $ e $ is contained in the convex set $ \Lambdappe{e}' $ and because $ p'_e $ is positive everywhere in $ \Lambdappe{e}' $. It is easily verified that a path in $ S $ from $ (x,t) $ to $ (e,1) $ is obtained with three line segments: the segment between $ (x,t) $ and $ (x, \bar{t} ) $, the segment between $ (x, \bar{t} ) $ and $ (e, \bar{t}) $, and the segment between $ (e, \bar{t}) $ and $ (e,1) $. \end{proof}
We now restate and prove Theorem \hyperlink{targ_thm_eight}{8}, the main result of this section.
\begin{thm} \label{t.g.c}
If $ e \in \Lambdapp $ then
\[ \opt \setminus \partial \Lambdape{e}' = \{ x: x \textrm{ is optimal for (\ref{e.g.a})} \} \]
(possibly the empty set).
\end{thm}
\begin{proof}
For $ x $ in $ \Lambdappe{e}' $ we have $ Dp(x) \neq 0 $ (because $ 0 < p'_e(x) = Dp(x)[e] $).
Thus, from the characterization
\[ (\partial \Lambdap) \setminus \partial \Lambdape{e}' = \{ x: p(x) = 0 \textrm{ and } \ppie{j}{e}(x) > 0 \textrm{ for all $ j = 1, \ldots, n-1 $} \} \; \]
(by (\ref{e.c.c})), it follows that for the convex optimization problem $ \hp $,
\[
\begin{array}{c} x \in \opt \setminus \partial \Lambdape{e} ' \\\Leftrightarrow \\
\lambda c^* = Dp(x) + y^* A \quad \textrm{for some $ \lambda > 0 $ and $ y^* $} \\
A x = b \\
x \in (\partial \Lambdap) \setminus \partial \Lambdape{e}' \; . \end{array}
\]
Observe that these conditions and homogeneity of $ p $ give
\[ \lambda c^*(e-x) = Dp(x)[e-x] = p_e'(x) - n \, p(x) = p_e'(x) \; , \]
that is,
\[
\lambda = \frac{p_e'(x)}{c^*(e-x)} \; .
\]
Clearly, then,
\begin{equation} \label{e.g.b}
\left. \begin{array}{c} x \in \opt \setminus \partial \Lambdape{e} ' \\\Leftrightarrow \\
\smfrac{p_e'(x)}{c^*(e-x)} c^* = Dp(x) + y^* A \quad \textrm{for some $ y^* $} \\
A x = b \\
x \in (\partial \Lambdap) \setminus \partial \Lambdape{e}' \end{array} \quad \right\}
\end{equation}
On the other hand, necessary and sufficient conditions for $ x $ to solve the convex optimization problem (\ref{e.g.a}) are
\begin{equation} \label{e.g.c}
\left. \begin{array}{c}
\smfrac{1}{c^*(e-x)} c^* - Dq_e(x) = w^* A \quad \textrm{for some $ w^* $} \\
Ax = b \\
x \in \Lambdappe{e}' \end{array} \quad \right\}
\end{equation}
Observe that these conditions along with homogeneity of $ p $ and $ p_e' $ give
\begin{align*}
1 & = \smfrac{1}{c^*(e-x)} c^*(e-x) \\ & = Dq_e(x)[e-x] \\
& = \smfrac{1}{p_e'(x)} \left( Dp(x)[e-x] - \smfrac{p(x)}{p_e'(x)} Dp_e'(x)[e-x] \right) \\
& = \smfrac{1}{p_e'(x)} \left( p_e'(x) - n \, p(x) - \smfrac{p(x)}{p_e'(x)} \left( p_e''(x) - (n-1) \, p_e'(x) \right) \right) \\
& = 1 - \frac{p(x)}{p_e'(x)} \left( 1 + \frac{p_e''(x)}{p_e'(x)} \right) \; .
\end{align*}
Consequently, as $ x \in \Lambdappe{e}' $ implies $ p_e'(x) > 0 $ and $ p_e''(x) > 0 $, it must be for optimal $ x $ that $ p(x) = 0 $. Hence, in the necessary and sufficient optimality conditions (\ref{e.g.c}), the containment $ x \in \Lambdappe{e}' $ can be replaced by $ x \in ( \partial \Lambdap) \setminus \partial \Lambdape{e}' $. Since for $ x $ satisfying $ p(x) = 0 $ we also have $ Dq_e(x) = \smfrac{1}{p_e'(x)} Dp(x) $, we see that the conditions (\ref{e.g.c}) are equivalent to
\begin{equation} \label{e.g.d}
\left. \begin{array}{c}
\smfrac{1}{c^*(e-x)} c^* - \smfrac{1}{p_e'(x)} Dp(x) = w^* A \quad \textrm{for some $ w^* $} \\
Ax = b \\
x \in (\partial \Lambdap) \setminus \partial \Lambdape{e}' \end{array} \quad \right\}
\end{equation}
The optimality conditions (\ref{e.g.d}) and (\ref{e.g.b}) are identical. \end{proof}
We close this section with a restatement of Corollary \hyperlink{targ_cor_nine}{9}.
\begin{cor} \label{t.g.d}
If $ 1 \leq i \leq n-2 $, $ e \in \swath(i) \setminus \core(i) $ and
\[ f(x) := - \ln c^*(e-x) - \frac{\ppie{i}{e}(x)}{\ppie{i+1}{e}(x)} \; , \]
then $ \xie{i}{e} $ is the unique optimal solution for the convex optimization problem
\[ \begin{array}{rl}
\min_x & f(x) \\
\mathrm{s.t.} & Ax = b \; .
\end{array} \]
Moreover, $ D^2f(\xie{i}{e})[v,v] > 0 $ if $ v $ is not a scalar multiple of $ \xie{i}{e} $ (in particular, if $ v \neq 0 $ satisfies $ Av = 0 $).
\end{cor}
\begin{proof} The first statement is immediate from Theorem~\ref{t.g.c}. The statement regarding $ D^2 f $ is immediate from Theorem~\ref{t.e.c}(C). \end{proof}
As was noted in \S\ref{s.b}, the corollary's statement regarding $ D^2 f $ implies quadratic convergence of Newton's method when initiated near $ \xie{i}{e} $. This is not pursued in the present paper.
\section{{\bf Proof of Part I of Main Theorem}} \label{s.h}
Recall that for the differential equation
\[ \smfrac{d}{dt} e(t) = \xie{i}{e(t)} - e(t) \; , \quad e(0) = e \in \swath(i) \setminus \core(i) \; , \]
we let $ T = T(e(0)) $ denote the time (possibly $ T = \infty $) at which the trajectory terminates due either to reaching the boundary of $ \swath(i) \setminus \core(i) $ or escaping to infinity. We say that $ t \mapsto e(t) $ $ ( 0 \leq t < T ) $ is a ``maximal trajectory.''
The primary goal of this section is to prove that the trajectory $ t \mapsto e(t) $, or the path $ t \mapsto \xie{i}{e(t)} $ (perhaps both), converges to optimality as $ t \rightarrow T $.
As we shall see, the proof is reasonably straightforward when $ T = \infty $, in which case the bounded trajectory $ t \mapsto e(t) $ has all limit points in $ \opt $.
Perhaps deserving of mention is that for all $ e \in \swath(n-2) \setminus \core(n-2) $, it holds $ T(e) = \infty $, and so our reasonably straightforward proof applies.
Unfortunately, the case $ T < \infty $ is an entirely different matter. Here we show $ t \mapsto e(t) $ converges to a unique point in $ \core(i) $, and we show the path $ t \mapsto \xie{i}{e(t)} $ is bounded, with all limit points in $ \opt $. This proof is subtle and long. It would be great if a more direct proof were discovered.
Before restating the theorem and beginning the proof, we introduce a new use of differentials that plays a significant role in this section and later.
As was recalled for the reader in \S\ref{s.c}, the $ j^{th} $ differential at $ x $ for an analytic function $ f $ is the symmetric multilinear form defined on $ j $-tuples of vectors $ [v_1, \ldots, v_j] $ by
\[ D^j f(x)[v_1, \ldots, v_j] := \smfrac{d}{dt_1} \cdots \smfrac{d}{dt_j} f(x + t_1 v_1 + \cdots + t_j v_j)|_{t_j = 0} \cdots |_{t_1 = 0} \; . \]
Until now, significant roles have been played only by the differentials $ D^j\ppie{k}{e}(x) $ of the hyperbolic polynomials $ x \mapsto \ppie{k}{e}(x) $. Here, $ e $ is viewed as fixed, and $ x $ as the variable.
In this section, we sometimes need to view $ e $ as a variable, not just $ x $. We need to differentiate once with respect to $ e $, and $ j $ times with respect to $ x $. For this we introduce the notation $ D_e ( D^j \ppie{k}{e}(x)) $ to represent the form that assigns to pairs of tuples $ [u] $ and $ [v_1, \ldots, v_j ] $ the value
\[ \smfrac{d}{ds} (D^j \ppie{k}{e+su} (x)[v_1, \ldots, v_j])|_{s=0} \; . \]
(Here we differentiated at $ e $ in direction $ u $ only after differentiating at $ x $ in directions $ v_1, \ldots, v_j $, but the order of differentiation is immaterial, because $ (e,x) \mapsto \ppie{k}{e}(x) $ is a polynomial (in particular, is analytic). Thus, there is good reason to also denote the form by, say, $ D^j( D_e \ppie{k}{e}(x)) $.)
When acting on a specific pair of tuples $ [u] $ and $ [v_1, \ldots, v_j] $, we denote the assigned value by $ D_e( D^j \ppie{k}{e}(x)[v_1, \ldots, v_j])[u] $ (or by $ D^j( D_e \ppie{k}{e}(x)[u])[v_1, \ldots, v_j] $). The form is multilinear in $ u, v_1, \ldots, v_j $, and is symmetric in $ v_1, \ldots, v_j $.
The subscript on ``$ D_e $'' does not refer to a specific hyperbolicity direction, but rather, to the derivative being taken with respect to the hyperbolicity direction. Thus, for example, $ D_e( D^j \ppie{k}{e(t)}(x)) $ is the form that assigns to pairs $ [u] $ and $ [v_1, \ldots, v_j] $ the value
$ \smfrac{d}{ds} (D^j \ppie{k}{e(t)+su} (x)[v_1, \ldots, v_j])|_{s=0} $.
We have occasion to fix the direction $ u $, in which case there results a form on tuples $ [v_1, \ldots, v_j] $; specifically,
\[ [v_1, \ldots, v_j] \mapsto D_e( D^j \ppie{k}{e}(x)[v_1, \ldots, v_j])[u] \; , \]
a form we denote by $ D_e( D^j \ppie{k}{e}(x))[u] $. Similarly, if $ v_1, \ldots, v_m $ and $ u $ are fixed, we use $ D_e(D^j \ppie{k}{e}(x)[v_1, \ldots, v_m])[u] $ to denote the form
\[ [w_1, \ldots, w_{j-m}] \mapsto D_e( D^j \ppie{k}{e}(x)[v_1, \ldots, v_m, w_1, \ldots, w_{j-m}])[u] \; . \]
Time and again we rely on a relation between the forms $ D_e(D^j \ppie{k}{e}(x)) $ and $ D^{j+1} \ppie{k-1}{e}(x) $ when $ k \geq 1 $:
\begin{align}
D_e(D^j\ppie{k}{e }(x)) & = D_e( D^{j+k} p(x)[\underbrace{e, \ldots,e}_{ \textrm{$ k $ times}}]) \nonumber \\ & = k \, D^{j+k} p(x) [\underbrace{e, \ldots,e}_{ \textrm{$ k-1 $ times}}] \quad \textrm{(by multilinearity and symmetry)} \nonumber \\ & = k \, D^{j+1} \ppie{k-1}{e}(x) \; . \label{e.h.a}
\end{align}
That is, for any $ \bar{e} $ and tuples $ [u] $ and $ [v_1, \ldots, v_j] $,
\[ D_e( D^j \ppie{k}{ \bar{e}} (x)[v_1, \ldots, v_j])[u] = k \, D^{j+1} \ppie{k-1}{ \bar{e}} (x) [u, v_1, \ldots, v_j] \]
(a consequence being that the form $ D_e( D^j \ppie{k}{ \bar{e}}(x)) $ is symmetric in $ u, v_1, \ldots, v_j $, not just in $ v_1, \ldots, v_j $). \vspace{5mm}
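As a concrete (and purely illustrative) sanity check of (\ref{e.h.a}) -- our own addition, not part of the development -- the following short \texttt{sympy} script verifies the identity symbolically for the hyperbolic polynomial $ p(x) = x_1 x_2 x_3 $ (hyperbolic with respect to the positive orthant), with $ j = 1 $ and $ k = 2 $; all function and variable names here are ours.
\begin{verbatim}
# Illustrative check (ours, not from the paper) of the identity
#   D_e(D^j p_e^{(k)}(x))[u] = k * D^{j+1} p_e^{(k-1)}(x)[u, v]
# for the hyperbolic polynomial p(x) = x1*x2*x3, with j = 1 and k = 2.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
e1, e2, e3 = sp.symbols('e1 e2 e3')
u1, u2, u3 = sp.symbols('u1 u2 u3')
v1, v2, v3 = sp.symbols('v1 v2 v3')
x = sp.Matrix([x1, x2, x3]); e = sp.Matrix([e1, e2, e3])
u = sp.Matrix([u1, u2, u3]); v = sp.Matrix([v1, v2, v3])
p = x1 * x2 * x3

def p_k(k):
    # k-th derivative polynomial:  p_e^{(k)}(x) = d^k/dt^k p(x + t e) |_{t=0}
    t = sp.Symbol('t')
    shifted = p.subs(list(zip(x, x + t * e)), simultaneous=True)
    return sp.diff(shifted, t, k).subs(t, 0)

def directional(f, base, direction):
    # first differential of f at `base` in the given direction
    t = sp.Symbol('t')
    shifted = f.subs(list(zip(base, base + t * direction)), simultaneous=True)
    return sp.diff(shifted, t).subs(t, 0)

k = 2
lhs = directional(directional(p_k(k), x, v), e, u)          # D_e(D p_e^{(2)}(x)[v])[u]
rhs = k * directional(directional(p_k(k - 1), x, v), x, u)  # 2 D^2 p_e^{(1)}(x)[u, v]
print(sp.expand(lhs - rhs))                                 # expected output: 0
\end{verbatim}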
Here is the restatement of the result to which this section is devoted, \hyperlink{targ_main_thm_part_one}{Part I} of the Main Theorem:
\begin{thm} \label{t.h.a}
Assume $ 1 \leq i \leq n-2 $ and let $ t \mapsto e(t) $ $ (0 \leq t < T) $ be a maximal trajectory generated by the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ beginning at $ e(0) \in \swath(i) \setminus \core(i) $.
\begin{enumerate}
\item The trajectory $ t \mapsto e(t) $ is bounded, and $ t \mapsto c^* \xie{i}{e(t)} $ is strictly increasing, with $ \val $ (the optimal value of $ \hp $) as the limit.
\item If $ T = \infty $ then every limit point of the trajectory $ t \mapsto e(t) $ lies in $ \opt $.
\item If $ T < \infty $ then the trajectory $ t \mapsto e(t) $ has a unique limit point $ \bar{e} $ and $ \bar{e} \in \core(i) $; moreover, the path $ t \mapsto \xie{i}{e(t)} $ is bounded and each of its limit points lies in $ \opt $.
\end{enumerate}
\end{thm}
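Although nothing in the analysis depends on discretization, the dynamics appearing in the theorem lends itself to a simple numerical illustration. The following is a minimal forward-Euler sketch -- our own, not an algorithm proposed in the paper -- under the assumption of a user-supplied oracle \texttt{x\_opt(i, e)} returning $ \xie{i}{e} $ (say, by solving the relaxed problem $ \hpie{i}{e} $ with an off-the-shelf solver).
\begin{verbatim}
# Minimal forward-Euler sketch (ours) of the dynamics  e'(t) = x_e^{(i)} - e(t).
# `x_opt(i, e)` is an assumed, user-supplied oracle returning the optimal
# solution of the relaxed problem HP_e^{(i)}; it is a single point whenever
# e lies in Swath(i) \ Core(i).
import numpy as np

def follow_swath(e0, x_opt, i, step=1e-2, max_steps=10_000, tol=1e-8):
    e = np.asarray(e0, dtype=float)
    x = x_opt(i, e)
    for _ in range(max_steps):
        de = x - e
        if np.linalg.norm(de) * step < tol:   # e has (numerically) stopped moving
            break
        e = e + step * de                     # Euler step toward x_e^{(i)}
        x = x_opt(i, e)                       # re-solve the relaxed problem
    return e, x
\end{verbatim}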
\subsection{The case $ T = \infty $} \label{s.h.a} As mentioned, our proof of Theorem~\ref{t.h.a} when $ T = \infty $ is much easier than when $ T < \infty $. In this subsection, we focus on the case $ T = \infty $, although the following proposition is important also to the case $ T < \infty $.
\begin{prop} \label{t.h.b}
Under the hypotheses of Theorem~\ref{t.h.a}, the trajectory $ t \mapsto e(t) $ is bounded, and $ t \mapsto c^* \xie{i}{e(t)} $ is strictly increasing.
\end{prop}
\begin{proof} Since $ e(0) \in \swath(i) \setminus \core(i) $, the set $ \optie{i}{e(0)} $ is bounded (indeed, consists of the single point $ \xie{i}{e(0)} $), and hence so are the level sets $ \{ x \in \feasie{i}{e(0)}: c^* x \leq \alpha \} $ ($ \alpha \in \mathbb{R} $). As $ \feas \subseteq \feasie{i}{e(0)} $, the level sets $ \{ x \in \feas: c^* x \leq \alpha \} $ are bounded, too. Hence, since the trajectory $ t \mapsto e(t) $ remains in $ \feas $, and since (clearly) $ t \mapsto c^* e(t) $ is decreasing (strictly monotonically), the trajectory lies entirely within a bounded region, that is, the trajectory is bounded.
It remains to prove that $ t \mapsto c^* \xie{i}{e(t)} $ is strictly increasing. To ease notation, let $ x(t) := \xie{i}{e(t)} $.
From $ t \mapsto \ppie{i}{e(t)}(x(t)) \equiv 0 $ we find that
\begin{align*}
0 & = \smfrac{d}{dt} \ppie{i}{e(t)}(x(t)) \\
& = D\ppie{i}{e(t)}(x(t))[ \dot{x}(t)] + D_e(\ppie{i}{e(t)}(x(t)))[ \dot{e}(t)] \\
& = D\ppie{i}{e(t)}(x(t))[ \dot{x}(t)] + i D \ppie{i-1}{e(t)}(x(t)) [ \dot{e}(t)] \quad \textrm{(using (\ref{e.h.a}))} \\
& = D\ppie{i}{e(t)}(x(t))[ \dot{x}(t)] + i \left( (n-i+1) \ppie{i-1}{e(t)}(x(t)) - \ppie{i}{e(t)}(x(t)) \right) \; ,
\end{align*}
where the final equality is due to $ \dot{e}(t) = x(t) -e(t) $ together with the identity $ D \ppie{i-1}{e(t)}(x(t))[e(t)] = \ppie{i}{e(t)}(x(t)) $ and, by (\ref{e.c.f}),
\[ D \ppie{i-1}{e(t)}(x(t))[x(t)] = (n-i+1) \ppie{i-1}{e(t)}(x(t)) \; . \]
Thus, since $ \ppie{i}{e(t)}(x(t)) = 0 $ and $ \ppie{i-1}{e(t)}(x(t)) < 0 $ (by (\ref{e.c.d})), we have
\begin{equation} \label{e.h.b}
D\ppie{i}{e(t)}(x(t))[ \dot{x}(t)] > 0 \; .
\end{equation}
On the other hand, $ x(t) $ satisfies the first-order condition
\[ \lambda(t) c^* = D\ppie{i}{e(t)}(x(t)) + y(t)^* A \qquad
\textrm{for some $ \lambda(t) > 0 $ and $ y(t)^* $} \; .
\]
Applying both sides to $ \dot{x}(t) $, using $ A \dot{x}(t) = 0 $ and substituting (\ref{e.h.b}), shows $ c^* \dot{x}(t) > 0 $. Thus, $ t \mapsto c^* x(t) $ is strictly increasing.
\end{proof}
We are now in a position to easily prove Theorem~\ref{t.h.a} when $ T = \infty $.
\begin{thm} \label{t.h.c}
Under the hypotheses of Theorem~\ref{t.h.a}, if $ T = \infty $ then
\begin{itemize}
\item $ t \mapsto e(t) $ is bounded, with limit points lying in $ \opt $, and
\item $ t \mapsto c^* \xie{i}{e(t)} $ increases strictly monotonically to $ \val $, the optimal value of $ \hp $.
\end{itemize}
\end{thm}
\begin{proof}
Due to Proposition~\ref{t.h.b}, it remains only to prove that both $ t \mapsto c^* e(t) $ and $ t \mapsto c^* \xie{i}{e(t)} $ converge to $ \val $ as $ t \rightarrow \infty $. However, the convergence of $ c^* e(t) $ to $ \val $ is immediate from
\begin{equation} \label{e.h.c}
T = \infty, \quad c^* \xie{i}{e(t)} \leq \val \leq c^* e(t) \quad \textrm{and} \quad \dot{e}(t) = \xie{i}{e(t)} - e(t) \; . \end{equation}
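Spelled out, for the reader who wants the one-line estimate behind ``immediate'' (this elaboration is ours): from (\ref{e.h.c}),
\[ \smfrac{d}{dt}\left( c^* e(t) - \val \right) = c^* \xie{i}{e(t)} - c^* e(t) \leq \val - c^* e(t) \; , \]
and hence $ 0 \leq c^* e(t) - \val \leq \left( c^* e(0) - \val \right) e^{-t} \rightarrow 0 $ as $ t \rightarrow \infty $.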
Due to the monotonicity of $ t \mapsto c^* \xie{i}{e(t)} $ (by Proposition~\ref{t.h.b}), the convergence of $ c^* \xie{i}{e(t)} $ to $ \val $ also is immediate from (\ref{e.h.c}) (indeed, otherwise at some time there would occur $ c^* e(t) < \val $, contradicting $ e(t) \in \feas $).
\end{proof}
\subsection{The case $ T < \infty $} \label{s.h.b}
For the challenging case $ T < \infty $, we split most of the analysis into two propositions.
\begin{prop} \label{t.h.d}
Under the hypotheses of Theorem~\ref{t.h.a}, if $ T < \infty $, then letting $ x(t) := \xie{i}{e(t)} $,
either
\begin{enumerate}
\item The trajectory $ t \mapsto e(t) $ has a unique limit point, the limit point is contained in $ \core(i) $, and the path $ t \mapsto x(t) $ is bounded, with all limit points contained in $ \opt $.
\end{enumerate}
or
\begin{enumerate}
\addtocounter{enumi}{1}
\item It holds that $ \liminf_{t \rightarrow T} \|x(t)\| = \infty $. Moreover, there exists a constant $ \alpha < 0 $ such that for all $ 0 \leq t < T $,
\[ \ppie{i-1}{e(t)}(x(t)) \leq \alpha \|x(t)\|^{n-i+1} \; . \]
\end{enumerate}
\end{prop}
\vspace{1mm}
\begin{prop} \label{t.h.e}
Under the hypotheses of Theorem~\ref{t.h.a}, and regardless of whether $ T $ is finite, letting $ x(t) := \xie{i}{e(t)} $, for all $ 0 \leq t < T $ we have
\[ \frac{d}{dt} \, \ln \frac{\left( -\ppie{i-1}{e(t)}(x(t)) \right)^{n-i}}{\left( \ppie{i+1}{e(t)}(x(t)) \right) ^{n-i+1}} \leq
C_1 \frac{\ppie{i-1}{e(t)}(x(t))}{\ppie{i+1}{e(t)}(x(t))} + C_2 \frac{\ppie{i-2}{e(t)}(x(t))}{\ppie{i-1}{e(t)}(x(t))} + C_3 \; , \]
where
\[ C_1 = i(n-i+1)^2 \; , \quad C_2 = (i-1)(n-i)(n-i+2) \quad \textrm{and} \quad C_3 = 2n-i+1 \]
(and where for the case $ i = 1 $ we define $ \ppie{-1}{e(t)} \equiv 0 $).
\end{prop}
\vspace{1mm}
Before proving the two propositions, we show how they complete the proof of Theorem~\ref{t.h.a}.
\begin{thm} \label{t.h.f}
Under the hypotheses of Theorem~\ref{t.h.a}, if $ T < \infty $ then
\begin{itemize}
\item $ t \mapsto e(t) $ has a unique limit point, the limit point is contained in $ \core(i) $, and
\item $ t \mapsto \xie{i}{e(t)} $ is bounded, having all limit points contained in $ \opt $; \\moreover, $ t \mapsto c^*\xie{i}{e(t)} $ increases strictly monotonically (to $ \val $, the optimal value of $ \hp $).
\end{itemize}
\end{thm}
\begin{proof} We already know $ t \mapsto c^* \xie{i}{e(t)} $ is strictly increasing. Completing the proof thus amounts to showing case (A) of Proposition~\ref{t.h.d} holds. Hence, it suffices to assume that case (B) of Proposition~\ref{t.h.d} holds, and then show Proposition~\ref{t.h.e} yields a contradiction. Thus, assume case (B) holds.
Since the trajectory $ t \mapsto e(t) $ is bounded (by Proposition~\ref{t.h.b}), we can choose a compact set $ K $ containing the entire trajectory (we assume nothing about $ K $ other than it is compact). For $ j = i-2, i-1, i+1 $, define
\[ \gamma_j := \max \{ |\ppie{j}{\bar{e} }(x)|: \bar{e} \in K \textrm{ and } \|x\|=1 \} \]
(let $ \gamma_{i-2} := 0 $ when $ i = 1 $). Since $ \ppie{j}{\bar{e} } $ is homogeneous of degree $ n-j $, we have
\[ | \ppie{j}{\bar{e} }(x) | \leq \gamma_j \, \|x\|^{n-j} \quad \textrm{for all $ \bar{e} \in K $ and all $ x $} \; . \]
Let $ x(t) := \xie{i}{e(t)} $.
Using the assumed bound $ \ppie{i-1}{e(t)}(x(t)) \leq \alpha \| x(t) \|^{n-i+1} $ -- keep in mind especially that $ \alpha $ is negative -- and using the positivity of $ \ppie{i+1}{e(t)}(x(t)) $ (according to (\ref{e.d.b})), from the inequality of Proposition~\ref{t.h.e} we find for all $ 0 \leq t < T $,
\[ \frac{d}{dt} \, \ln \frac{\left( -\ppie{i-1}{e(t)}(x(t)) \right)^{n-i}}{\left( \ppie{i+1}{e(t)}(x(t)) \right) ^{n-i+1}} \leq C_1 \, \frac{ \alpha }{\gamma_{i+1}} \, \|x(t)\|^2 + C_2 \, \frac{ \gamma_{i-2}}{| \alpha |} \, \|x(t)\| + C_3 \; . \]
Hence, there exists a value $ r $ for which
\[ \|x(t)\| > r \quad \Rightarrow \quad \frac{d}{dt} \, \ln \frac{\left( -\ppie{i-1}{e(t)}(x(t)) \right)^{n-i}}{\left( \ppie{i+1}{e(t)}(x(t)) \right) ^{n-i+1}} < 0 \; . \]
Since $ \liminf_{t \rightarrow T} \|x(t)\| = \infty $ (by assumption), it follows that
\[ \sup_{0 \leq t < T} \frac{\left( -\ppie{i-1}{e(t)}(x(t)) \right)^{n-i}}{\left( \ppie{i+1}{e(t)}(x(t)) \right) ^{n-i+1}} < \infty \; . \]
However,
\begin{align*}
\frac{\left( -\ppie{i-1}{e(t)}(x(t)) \right)^{n-i}}{\left( \ppie{i+1}{e(t)}(x(t)) \right) ^{n-i+1}} & \geq \frac{\left( | \alpha | \, \|x(t)\|^{n-i+1} \right)^{n-i}}{\left( \gamma_{i+1} \|x(t) \|^{n-i-1} \right) ^{n-i+1}}\\
& \qquad = \frac{| \alpha |^{n-i}}{\gamma_{i+1}^{n-i+1}} \, \|x(t)\|^{n-i+1} \rightarrow \infty \; ,
\end{align*}
a contradiction, thus concluding the proof of the theorem (except for proving the two propositions). \end{proof}
\subsubsection{{\bf Proving the first of the two propositions}} Now we begin proving Proposition~\ref{t.h.d}. The proof relies on three lemmas.
\begin{lemma} \label{t.h.g}
Assume $ \opt $ is a bounded set and $ 1 \leq i \leq n - 2 $. Let $ \{ e_j \} $ be a sequence of derivative directions converging to $ e \in \Lambdapp $, and assume $ x_j \in \optie{i}{e_j} $.
\begin{enumerate}
\item If $ x_j \rightarrow x $, then $ x \in \optie{i}{e} $.
\item If $ \{ x_j \} $ is an unbounded set, then $ \optie{i}{e} = \emptyset $ and $ \liminf \| x_j \| = \infty $.
\item If $ \{ x_j \} $ is an unbounded set and $ \limsup c^* x_j > - \infty $, then $ x_j/\|x_j\| $ has exactly one limit point $ d $; moreover, $ d $ satisfies
$ \ppie{i-1}{e}(d) < 0 $.
\end{enumerate}
\end{lemma}
\begin{proof} First assume $ x_j \rightarrow x $. Then $ \ppie{k}{e}(x) = \lim \ppie{k}{e_j}(x_j) $ for all $ k $. However, by (\ref{e.c.a}), $ \ppie{k}{e_j}(x_j) \geq 0 $ for all $ k = i, \ldots, n-1 $. Thus, $ \ppie{k}{e}(x) \geq 0 $ for all $ k = i, \ldots, n-1 $. Hence, again invoking (\ref{e.c.a}), $ x \in \Lambdaiep{i}{e} $. Since, trivially, $ Ax = b $, we thus see that $ x \in \feasie{i}{e} $.
To prove that $ x $ not only is feasible but is optimal, we assume otherwise and obtain a contradiction. Thus, assume $ \bar{x} \in \feasie{i}{e} $ satisfies $ c^* \bar{x} < c^* x $. By nudging $ \bar{x} $ towards the strictly feasible point $ e $, we may assume $ \bar{x} $ is strictly feasible in addition to satisfying $ c^* \bar{x} < c^* x $. However, by (\ref{e.c.b}), strict feasibility of $ \bar{x} $ implies $ \ppie{k}{e}( \bar{x}) > 0 $ for all $ k = i, \ldots, n-1 $. Since $ \ppie{k}{e_j}( \bar{x}) \rightarrow \ppie{k}{e}( \bar{x}) $, we thus have, again using (\ref{e.c.a}), that $ \bar{x} \in \feasie{i}{e_j} $ for $ j > J $ (some $ J $). But then
\[ c^*x > c^* \bar{x} \geq \lim c^* x_j = c^*x \; , \]
a contradiction. Hence, $ x $ is optimal, and assertion (A) of the lemma is established.
Now assume $ \{ x_j \} $ is unbounded. Choose a subsequence $ \{ x_{j_{ \ell} } \} $ for which \newline $ \liminf_{ \ell \rightarrow \infty } \|x_{j_{ \ell} } \|=\infty$ and let $ d $ be a limit point of $ \{ x_{j_{ \ell} }/ \| x_{j_{ \ell} } \| \} $. Then $ d $ is a feasible direction for $ \hpie{i}{e} $, that is, satisfies $ Ad = 0 $ and $ d \in \Lambdaiep{i}{e} $. (That $ Ad = 0 $ is trivial. That $ d \in \Lambdaiep{i}{e} $ follows from $ \ppie{k}{e}(d) = \lim_{ \ell \rightarrow \infty} \ppie{k}{e_{j_{\ell}}}(x_{j_{ \ell }}/ \| x_{j_{ \ell }} \| ) \geq 0 $ for $ k = i, \ldots n-1 $, the inequality being due to $ x_{j_{ \ell}} \in \Lambdaiep{i}{e_{j_{ \ell}}} $ along with homogeneity of $ \ppie{i}{e_{j_{\ell}}} $.) The feasible direction $ d $ satisfies $ c^* d \leq 0 $ (because $ e \in \feasie{i}{e_{j_{ \ell}} } $ -- and hence $ \limsup_{ \ell \rightarrow \infty} c^* x_{j_{ \ell}} < \infty $ -- and because $ \liminf_{ \ell \rightarrow \infty} \| x_{j_{ \ell}} \| = \infty $).
Clearly, now, if $ \optie{i}{e} \neq \emptyset $ then $ \optie{i}{e} $ is unbounded (in direction $ d $), implying by Corollary~\ref{t.d.b}(B) that $ \optie{i}{e} = \opt $. But this would contradict our assumption that $ \opt $ is bounded. Thus, $ \optie{i}{e} = \emptyset $.
To conclude the proof of assertion (B) of the lemma, it remains to show $ \liminf \| x_j \| = \infty $. But if this identity did not hold then $ \{ x_j \} $ would have a limit point, and thus, by assertion (A) of the lemma, we would have $ \optie{i}{e} \neq \emptyset $, contradicting what we just proved. The proof of assertion (B) of the lemma is now complete.
Now we prove assertion (C). Assume $ \{ x_j \} $ is unbounded and $ \limsup c^* x_j > - \infty $.
With $ \{ x_j \} $ unbounded, we already know from above that every limit point $ d $ of $ \{ x_j/\|x_j\| \} $ is a feasible direction for $ \hpie{i}{e} $ (that is, satisfies $ Ad = 0 $ and $ d \in \Lambdaiep{i}{e} $) and $ c^* d \leq 0 $.
We claim, however, that every feasible direction $ d $ for $ \hpie{i}{e} $ satisfies $ c^* d \geq 0 $. Indeed, otherwise the optimal objective value of $ \hpie{i}{e} $ would be unbounded and hence there would exist $ \bar{x} \in \relint(\feasie{i}{e}) $ satisfying $ c^* \bar{x} < \limsup c^* x_j $ (using the assumption $ \limsup c^* x_j > - \infty $). But $ \bar{x} \in \feasie{i}{e_j} $ for sufficiently large $ j $ (because $ 0 < \ppie{k}{e}(\bar{x}) = \lim \ppie{k}{e_j}( \bar{x}) $ for $ k = i, \ldots, n-1 $), contradicting that $ x_j $ is optimal for $ \hpie{i}{e_j} $ for all $ j $.
Clearly, from the two preceding paragraphs, every limit point of $ \{ x_j/\|x_j\| \} $ is contained in $ D := \{ d \in \Lambdaiep{i}{e}: Ad = 0 \textrm{ and } c^* d = 0 \} $. To show $ \{ x_j/\|x_j\| \} $ has a unique limit point it thus suffices to show $ D $ consists of a single ray.
We claim $ D \cap \Lambdaiepp{i}{e} = \emptyset $. Otherwise there would exist $ d \in D $ satisfying $ \ppie{k}{e}(d) > 0 $ for $ k = i, \ldots, n-1 $, which from continuity would imply $ d $ to be a feasible direction for $ \hpie{i}{e_j} $ for all $ j > J $ (some $ J $). But then $ \optie{i}{e_j} $ would be an unbounded set for $ j > J $, implying by Corollary~\ref{t.d.b}(B) that $ \optie{i}{e_j} = \opt $, and thus contradicting the assumed boundedness of $ \opt $. Hence, it is indeed the case that $ D \cap \Lambdaiepp{i}{e} = \emptyset $.
It is clear now that $ D = \{ d \in \partial \Lambdaiep{i}{e}: Ad = 0 \textrm{ and } c^* d = 0 \} $. Thus, $ D $ is contained in a boundary face of $ \Lambdaiep{i}{e} $. Hence, $ D $ consists of a single ray or is a subset of $ \Lambdap $, by Corollary~\ref{t.d.b}(B). But if $ D $ were a subset of $ \Lambdap $ then each $ d \in D $ would be a feasible direction for $ \hp $, which, with $ c^* d = 0 $, would imply $ \opt $ to be unbounded, a contradiction to the assumed boundedness of $ \opt $. Thus, $ D $ consists of a single ray, and so $ \{ x_j/\|x_j\| \} $ has a unique limit point $ d $ (and $ d $ satisfies $ d \in \partial \Lambdaiep{i}{e} $, $ Ad = 0 $, $ c^*d = 0 $).
Lastly, we prove $ \ppie{i-1}{e}(d) < 0 $.
As $ x_j \in \partial \Lambdaiep{i}{e_j} $, we have $ \ppie{i-1}{e_j}(x_j) \leq 0 $ (by (\ref{e.c.e})). Thus, by homogeneity and continuity, $ \ppie{i-1}{e}(d) \leq 0 $. However, since $ d \in \partial \Lambdaiep{i}{e} $, if $ d $ satisfied $ \ppie{i-1}{e}(d) = 0 $, then we would have $ d \in \Lambdaiep{i-1}{e} $, and hence $ d \in \Lambdap $ (by Theorem~\ref{t.d.a}(A)) -- so $ d $ would be a feasible direction for $ \hp $, and hence $ \opt $ would be unbounded (using $ c^* d = 0 $), a contradiction. Thus, $ \ppie{i-1}{e}(d) < 0 $, completing the proof of the lemma. \end{proof}
\begin{lemma} \label{t.h.h}
Assume $ \opt $ is a bounded set, $ e \in \Lambdapp $ and $ 1 \leq i \leq n-2 $. Assume there exist sequences $ e_j \in \swath(i) $ and $ x_j \in \optie{i}{e_j} $ for which $ e_j \rightarrow e $, $ \{ x_j \} $ is unbounded and $ \limsup c^*x_j > -\infty $.
\begin{enumerate}
\item There exists a constant $ \alpha < 0 $ and an open neighborhood $ U $ of $ e $ such that for all $ \bar{e} \in U \cap \swath(i) $ and $ \bar{x} \in \optie{i}{\bar{e}} $,
\[ \ppie{i-1}{\bar{e}}(\bar{x}) < \alpha \| \bar{x} \|^{n-i+1} . \]
\item For each $ \ell \in \mathbb{R} $, there exists an open neighborhood $ V( \ell ) $ of $ e $ such that for all $ \bar{e} \in V( \ell ) \cap \swath(i) $ and $ \bar{x} \in \optie{i}{ \bar{e}} $, it holds that $ \| \bar{x} \| \geq \ell $.
\end{enumerate}
\end{lemma}
\begin{proof} Parts (A) and (B) of Lemma~\ref{t.h.g} together imply that if for one sequence $ \{ e_j \} \subset \swath(i) $ there exist $ x_j \in \optie{i}{e_j} $ for which $ \{ x_j \} $ is unbounded -- where $ e_j \rightarrow e $ -- then for every sequence $ \{ e_j \} \subset \swath(i) $ converging to $ e $, and every choice of $ x_j \in \optie{i}{e_j} $, we have $ \liminf \|x_j\| = \infty $. Part (B) of the present lemma easily follows.
Towards proving part (A), let $ e_j $ and $ x_j $ be sequences as in the statement of the present lemma. Part (C) of Lemma~\ref{t.h.g} then shows $ x_j/\|x_j\| $ has a unique limit point $ d $.
We claim $ \bar{x}_j/\|\bar{x}_j\| \rightarrow d $ for any sequences $ \bar{e}_j $ and $ \bar{x}_j \in \optie{i}{\bar{e}_j} $ satisfying $ \bar{e}_j \rightarrow e $. Indeed, the intermingled sequences $ e_1, \bar{e}_1, e_2, \bar{e}_2, \ldots $ and $ x_1, \bar{x}_1, x_2, \bar{x}_2, \ldots $ clearly satisfy the hypotheses of part (C) of Lemma~\ref{t.h.g}, and hence the normalized sequence
\[ x_1/\|x_1\|, \bar{x}_1/\| \bar{x}_1\|, x_2/\|x_2\|, \bar{x}_2/ \| \bar{x}_2\|, \ldots \]
has a unique limit point, which, of course, is $ d $, thus establishing the claim.
Choosing, say, $ \alpha = \smfrac{1}{2} \, \ppie{i-1}{e}(d) $, part (A) of the present lemma is now a consequence of the continuity of the polynomial $ (\bar{e}, \bar{x}) \mapsto \ppie{i-1}{\bar{e}}(\bar{x} ) $, and the fact that $ x \mapsto \ppie{i-1}{ \bar{e}}(x) $ is homogeneous of degree $ n-i+1 $ for all $ \bar{e} $. \end{proof}
Throughout the remainder of the section,
\begin{itemize}
\item assume $ 1 \leq i \leq n- 2 $,
\item assume $ t \mapsto e(t) $ ($ 0 \leq t < T $) is a maximal trajectory defined by the differential equation $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $, starting at $ e(0) \in \swath(i) \setminus \core(i) $,
\item assume $ T < \infty $, and
\item to ease notation, let $ x(t) := \xie{i}{e(t)} $.
\end{itemize}
The third (and final) lemma for proving Proposition~\ref{t.h.d} shows that when the trajectory $ t \mapsto e(t) $ terminates at finite time $ T $, the termination is not due to having reached the boundary of $ \feas $.
\begin{lemma} \label{t.h.i}
\quad $ \{ e(t): 0 \leq t < T \} \subset \Lambdapp $
\end{lemma}
\begin{proof} It suffices to show $ p(e(t)) \geq p(e(0)) e^{-nt} $ for all $ t $, and hence suffices to show $ \smfrac{d}{dt} p(e(t)) \geq -n p(e(t)) $, that is, suffices to show $ Dp(e(t))[ \dot{e}(t)] \geq -n p(e(t)) $. Since $ \dot{e}(t) = x(t) - e(t) $ and since $ Dp(e(t))[e(t)] = n p(e(t)) $ (by (\ref{e.c.f})), it suffices to show $ Dp(e(t))[x(t)] \geq 0 $.
However, by (\ref{e.c.g}), for all $ \bar{e} $, $ Dp(\bar{e} ) $ is a positive multiple of $ D \ppie{n-1}{\bar{e} }(\bar{e} ) $. Since for all $ x $, $ D\ppie{n-1}{\bar{e} }(\bar{e} )[x] = \ppie{n-1}{\bar{e} }(x) $ (because $ x \mapsto \ppie{n-1}{\bar{e} }(x) $ is linear), we have $ Dp(\bar{e} )[x] \geq 0 $ if and only if $ x $ is in the cone $ \Lambdaiep{n-1}{\bar{e} } $. But $ x(t) \in \Lambdaiep{i}{e(t)} \subseteq \Lambdaiep{n-1}{e(t)} $, thus concluding the proof.
\end{proof}
\noindent
{\bf {\em Proof of Proposition~\ref{t.h.d}.}} To rely on Lemmas \ref{t.h.g} and \ref{t.h.h}, we need $ \opt $ to be a bounded set. Boundedness of $ \opt $, however, was established in Proposition~\ref{t.h.b} as a simple consequence of $ \swath(i) \setminus \core(i) $ being nonempty.
Let $ {\mathcal L} $ be the set of limit points for the trajectory $ t \mapsto e(t) $ as $ t \rightarrow T $. By Lemma~\ref{t.h.i}, $ {\mathcal L} \subset \Lambdapp $. Thus, if $ e \in {\mathcal L} $, then either $ e \in \core(i) $ (that is, $ \optie{i}{ e } = \opt $) or $ e \in \Lambdapp \setminus \swath(i) $ (in which case $ \optie{i}{e } = \emptyset $).
Assume first that $ {\mathcal L} $ consists of a single point $ e $.
Then, by Lemma~\ref{t.h.g}(A), each limit point of the path $ t \mapsto x(t) $ lies in $ \optie{i}{e} $. If the path is bounded, the set of limit points is nonempty, hence $ \optie{i}{e} \neq \emptyset $, and thus, from above, $ e \in \core(i) $. Clearly, then, if the path $ t \mapsto x(t) $ is bounded, case (A) of the proposition holds.
On the other hand, if the path $ t \mapsto x(t) $ is unbounded, then Lemma~\ref{t.h.h} implies $ \liminf_{t \rightarrow T} \|x(t) \| = \infty $, and implies there exists $ 0 \leq \bar{t} < T $ and $ \alpha < 0 $ satisfying
\begin{equation} \label{e.h.d}
\ppie{i-1}{e(t)}(x(t)) \leq \alpha \|x(t)\|^{n-i+1} \quad \textrm{for all $ \bar{t} \leq t < T $} \end{equation}
(here we make use of the fact that $ t \mapsto c^* x(t) $ is strictly increasing (Proposition~\ref{t.h.b}), so that the hypothesis $ \limsup c^* x_j > -\infty $ of the lemma is clearly fulfilled).
Hence, from the compactness of the closed interval $ [0, \bar{t}] $ and the fact that for all $ t $, both $ x(t) \neq 0 $ (because $ b \neq 0 $) and $ \ppie{i-1}{e(t)}(x(t)) < 0 $ (by (\ref{e.c.d})), a perhaps larger (but still negative) value of $ \alpha $ satisfies
\begin{equation} \label{e.h.e}
\ppie{i-1}{e(t)}(x(t)) \leq \alpha \|x(t)\|^{n-i+1} \quad \textrm{for all $ 0 \leq t < T $} \; . \end{equation}
Hence, in all, if the path $ t \mapsto x(t) $ is unbounded, then case (B) of the proposition holds, concluding consideration of the case that $ {\mathcal L} $ consists of a single point.
For the remainder of the proof assume $ {\mathcal L} $ contains more than one point. We show case (B) of the proposition holds. We claim that for this it suffices to show $ {\mathcal L} $ is compact, and that each $ e \in {\mathcal L} $ satisfies the hypotheses of Lemma~\ref{t.h.h}. Indeed, compactness and Lemma~\ref{t.h.h}(A) imply $ {\mathcal L} $ can be covered by finitely many open sets $ U_j $ for which there exist $ \alpha_j < 0 $ with the property that for all $ \bar{e} \in U_j \cap \swath(i) $ and $ \bar{x} \in \optie{i}{\bar{e}} $, it holds $ \ppie{i-1}{\bar{e}}( \bar{x} ) < \alpha_j \| \bar{x} \|^{n-i+1} $. Consequently, letting $ \alpha := \max_j \alpha_j $, there exists $ 0 \leq \bar{t} < T $ for which (\ref{e.h.d}) holds. Then, as in the preceding paragraph, for a possibly larger (but still negative) value of $ \alpha $, (\ref{e.h.e}) holds. To conclude proving the claim, it remains only to show $ \liminf_{t \rightarrow T} \|x(t)\| = \infty $. This, however, is easily accomplished by covering $ {\mathcal L} $, for each $ \ell \in \mathbb{R} $, by finitely many open sets $ V( \ell)_j $ as appear in Lemma~\ref{t.h.h}(B).
Thus, to complete the proof of Proposition~\ref{t.h.d}, it remains only to show $ {\mathcal L} $ is compact, and that each $ e \in {\mathcal L} $ satisfies the hypotheses of Lemma~\ref{t.h.h}.
That $ {\mathcal L} $ is compact is trivial -- indeed, the trajectory $ t \mapsto e(t) $ is bounded, by Proposition~\ref{t.h.b}.
Since $ {\mathcal L} $ consists of more than one point, for each $ e \in {\mathcal L} $ and each open neighborhood $ W $ of $ e $, the Euclidean arc length of $ \{ e(t): 0 \leq t < T \} \cap W $ is infinite. Thus, since $ T < \infty $, for each $ e \in {\mathcal L} $ there exists a sequence $ t_1 < t_2 < \ldots $ satisfying $ e(t_j) \rightarrow e $ and $ \|\dot{e}(t_j)\| \rightarrow \infty $. But $ \dot{e}(t_j) = x(t_j) - e(t_j) $, so $ \|x(t_j) \| \rightarrow \infty $. Since, additionally, $ \limsup c^* x(t_j) > - \infty $ (because $ t \mapsto c^* x(t) $ is strictly increasing), $ e $ thus clearly satisfies the hypotheses of Lemma~\ref{t.h.h}. \hfill $ \Box $
\subsubsection{{\bf Proving the second of the two propositions}} In proving Proposition~\ref{t.h.e}, continue to
\begin{itemize}
\item assume $ 1 \leq i \leq n- 2 $,
\item assume $ t \mapsto e(t) $ ($ 0 \leq t < T $) is a maximal trajectory defined by the differential equation $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $, starting at $ e(0) \in \swath(i) \setminus \core(i) $,
\item assume $ T < \infty $, and
\item to ease notation, let $ x(t) := \xie{i}{e(t)} $.
\end{itemize}
Also, for every $ e $, define $ q_e := \ppie{i}{e}/\ppie{i+1}{e} $.
The proof of Proposition~\ref{t.h.e} depends on two lemmas. For understanding the first lemma, recall
the identity given as (\ref{e.h.a}), that is,
\begin{equation} \label{e.h.f}
D_e(D^j\ppie{k}{e}(x)) = k D^{j+1} \ppie{k-1}{e}(x) \; .
\end{equation}
\begin{lemma} \label{t.h.j}
For any $ e $ and $ x $ satisfying $ \ppie{i}{e}(x) = 0 \neq \ppie{i+1}{e}(x) $, we have
\begin{align*}
D_e( D q_e(x))[ x-e] = & \,
D q_e(x) \\ & \quad + \frac{i(n-i)}{\ppie{i+1}{e}(x)} D \ppie{i-1}{e}(x) \\ & \qquad - \frac{i(n-i+1) \ppie{i-1}{e}(x)}{(\ppie{i+1}{e}(x))^2} D\ppie{i+1}{e}(x) \; . \end{align*}
\end{lemma}
\begin{proof} For an arbitrary vector $ \Delta e $ and for any $ x $ at which $ q_e $ is defined (i.e., any $ x $ satisfying $ \ppie{i+1}{e}(x) \neq 0 $), use of (\ref{e.h.f}) gives
\begin{align*}
D_e (Dq_e(x))[ \Delta e] = & \, D_e \left( \smfrac{1}{\ppie{i+1}{e}(x)} D\ppie{i}{e}(x) - \smfrac{\ppie{i}{e}(x)}{\ppie{i+1}{e}(x)^2} D \ppie{i+1}{e}(x) \right) [ \Delta e] \\
= & \, \smfrac{i}{\ppie{i+1}{e}(x)} D^2 \ppie{i-1}{e}(x)[ \Delta e ] \\
& \quad - \smfrac{i+1}{(\ppie{i+1}{e}(x))^2} (D\ppie{i}{e}(x)[ \Delta e]) D\ppie{i}{e}(x) \\
& \qquad - \smfrac{i}{(\ppie{i+1}{e}(x))^2} (D\ppie{i-1}{e}(x)[\Delta e]) D\ppie{i+1}{e}(x)\\
& \qquad \quad + \smfrac{2(i+1)\ppie{i}{e}(x)}{(\ppie{i+1}{e}(x))^3} ( D\ppie{i}{e}(x)[ \Delta e]) D\ppie{i+1}{e}(x)\\
& \qquad \qquad - \smfrac{(i+1)\ppie{i}{e}(x)}{(\ppie{i+1}{e}(x))^2} D^2\ppie{i}{e}(x)[\Delta e ] \; .
\end{align*}
Substitute $ \Delta e = x - e $, then use obvious identities to get rid of ``$ [e] $'' (e.g., $ D\ppie{i}{e}(x)[e] = \ppie{i+1}{e}(x) $, $ D^2 \ppie{i-1}{e}(x)[ e ] = D \ppie{i}{e}(x) $), and use homogeneity to get rid of ``$ [x] $'' -- specifically, use (\ref{e.c.f}). Finally, substitute $ \ppie{i}{e}(x) = 0 $, and $ \frac{1}{\ppie{i+1}{e}(x)} D\ppie{i}{e}(x) = Dq_e(x) $ (because $ \ppie{i}{e}(x) = 0 $), thereby concluding the proof. \end{proof}
\begin{lemma} \label{t.h.k}
For all $ 0 \leq t < T $, there exists $ y(t)^* $ satisfying
\begin{gather*}
D^2q_{e(t)}(x(t))[ \dot{x}(t)] - \frac{c^* \dot{x}(t) }{(c^*(e(t)-x(t)))^2} \, c^* \qquad \qquad \qquad \qquad \qquad \qquad \qquad \\
\qquad \qquad = \frac{i \ppie{i-1}{e(t)}(x(t))}{\ppie{i+1}{e(t)}(x(t))} \left( (n-i+1)\frac{ D\ppie{i+1}{e(t)}(x(t))}{\ppie{i+1}{e(t)}(x(t))} - (n-i)\frac{ D\ppie{i-1}{e(t)}(x(t))}{\ppie{i-1}{e(t)}(x(t))} \right) + y(t)^* A \; .
\end{gather*}
\end{lemma}
\begin{proof} Corollary~\ref{t.g.d} shows $ x(t) $ is optimal for the convex optimization problem
\[ \begin{array}{rl}
\min_x & - \ln c^*(e(t)-x) - q_{e(t)}(x) \\
\mathrm{s.t.} & Ax = b
\end{array} \]
and hence there exists $ w(t)^* $ satisfying the first-order condition
\begin{equation} \label{e.h.g}
\frac{1}{c^*(e(t)-x(t))} \, c^* - Dq_{e(t)}(x(t)) = w(t)^* A \; .
\end{equation}
Since $ t \mapsto e(t) $ and $ t \mapsto \xie{i}{e(t)} $ are analytic, so is $ t \mapsto w^*(t) $ (using that $ A $ is surjective, by assumption). Differentiating in $ t $ gives
\[\frac{c^*( \dot{e}(t)- \dot{x}(t)) }{(c^*(e(t)-x(t)))^2} \, c^* + D^2q_{e(t)}(x(t))[ \dot{x}(t) ] + D_{e(t)}(Dq_{e(t)}(x(t)))[ \dot{e}(t)] = - \dot{w}(t)^* A \; . \]
To complete the proof, substitute
\[ \dot{e}(t) = x(t) - e(t) \quad \textrm{and} \quad \frac{1}{c^*(e(t)-x(t)) } c^* = Dq_{e(t)}(x(t)) + w(t)^*A \; \, \, \textrm{(by (\ref{e.h.g}))} \; , \]
and then use Lemma~\ref{t.h.j} to substitute for $ D_{e(t)}(Dq_{e(t)}(x(t)))[ x(t) - e(t)] $. \end{proof}
\noindent
{\bf {\em Proof of Proposition~\ref{t.h.e}.}} To temper notation, for arbitrary $ t $ satisfying $ 0 \leq t < T $, let
\[ e := e(t), \quad x := x(t), \quad \dot{e} := \dot{e}(t) \quad \textrm{and} \quad \dot{x} := \dot{x}(t). \]
Observe that
\begin{align*}
& \frac{d}{dt} \, \ln \frac{\left( -\ppie{i-1}{e(t)}(x(t))\right)^{n-i}}{\left( \ppie{i+1}{e(t)}(x(t)) \right)^{n-i+1}} \\
& \quad = (n-i) \, \smfrac{d}{dt} \ln \left( -\ppie{i-1}{e(t)}(x(t)) \right) - (n-i+1) \, \smfrac{d}{dt} \ln \ppie{i+1}{e(t)}(x(t)) \\
& \quad = \smfrac{n-i}{\ppie{i-1}{e}(x)} \left( D\ppie{i-1}{e}(x)[ \dot{x} ]
+ D_e (\ppie{i-1}{e}(x))[\dot{e}] \right) \\
& \quad \qquad - \smfrac{n-i+1}{\ppie{i+1}{e}(x)} \left( D\ppie{i+1}{e}(x)[ \dot{x} ] + D_e ( \ppie{i+1}{e}(x) ) [\dot{e}] \right) \; .
\end{align*}
Using Lemma~\ref{t.h.k} to substitute for $ \smfrac{n-i}{\ppie{i-1}{e}(x)} D\ppie{i-1}{e}(x) - \smfrac{n-i+1}{\ppie{i+1}{e}(x)} D\ppie{i+1}{e}(x) $ gives
\begin{align*}
& \frac{d}{dt} \, \ln \frac{\left( -\ppie{i-1}{e(t)}(x(t))\right)^{n-i}}{\left( \ppie{i+1}{e(t)}(x(t)) \right)^{n-i+1}} \\
& \quad = \, - \, \frac{\ppie{i+1}{e}(x)}{i \ppie{i-1}{e}(x)} \left( D^2q_{e}(x)[ \dot{x}, \dot{x} ] - \left( \frac{c^* \dot{x} }{c^*(e-x )} \right)^2 \right) \\
& \quad \qquad + (n-i) \frac{D_e (\ppie{i-1}{e}(x))[ \dot{e}] }{\ppie{i-1}{e}(x)} - (n-i+1) \frac{D_e (\ppie{i+1}{e}(x))[ \dot{e}] }{\ppie{i+1}{e}(x)} \quad \textrm{(using $ y^*(t) A \dot{x}(t) = 0$)} \\
& \quad \leq \frac{\ppie{i+1}{e}(x)}{i \ppie{i-1}{e}(x)} \left( \frac{c^* \dot{x} }{c^*(e-x)} \right)^2 + (n-i) \frac{D_e (\ppie{i-1}{e}(x))[ \dot{e}] }{\ppie{i-1}{e}(x)} - (n-i+1) \frac{D_e (\ppie{i+1}{e}(x))[ \dot{e}] }{\ppie{i+1}{e}(x)} \; ,
\end{align*}
where the inequality is due to the combination of $ \ppie{i+1}{e}(x)/\ppie{i-1}{e}(x) $ being negative (according to (\ref{e.c.d}) and (\ref{e.d.b})) and $ D^2q_e(x) $ being negative semidefinite (Theorem~\ref{t.g.b}).
To complete the proof, simply substitute the following expressions which result from use of $ \dot{e} = x - e $, (\ref{e.c.f}) and (\ref{e.h.a}):
\begin{align*}
D_e (\ppie{i-1}{e}(x))[ \dot{e}] & = (i-1) D\ppie{i-2}{e}(x) [ x -e ] \\
& = (i-1) \left( (n-i+2) \ppie{i-2}{e}(x) - \ppie{i-1}{e}(x) \right) \; , \\ & \\
D_e (\ppie{i+1}{e}(x))[ \dot{e}] & = (i+1) D\ppie{i}{e}(x)[ x - e ] \\
& = (i+1) \left( 0 - \ppie{i+1}{e}(x) \right)
\end{align*}
-- substitute, also,
\begin{align*}
\frac{c^* \dot{x} }{c^*(e-x)} & = Dq_e(x)[ \dot{x}] \qquad \textrm{(first-order condition arising from Corollary \ref{t.g.d})} \\
& = \frac{D\ppie{i}{e}(x)[ \dot{x}] }{\ppie{i+1}{e}(x)} \qquad \textrm{(using $ \ppie{i}{e}(x) = 0 $)} \\
& = - \, \frac{ D_e( \ppie{i}{e}(x)) [\dot{e}] }{\ppie{i+1}{e}(x)} \qquad \textrm{(because $ \smfrac{d}{dt} \ppie{i}{e(t)}(x_{e(t)}) = 0 $)} \\
& = -i \, \frac{ D\ppie{i-1}{e}(x)[x-e] }{\ppie{i+1}{e}(x)} \\
& = -i \, \frac{ (n-i+1) \ppie{i-1}{e}(x) - 0 }{\ppie{i+1}{e}(x)} \; .
\end{align*}
\hfill $ \Box $
\section{{\bf Proof of Part II of the Main Theorem}} \label{s.i}
Recall the optimization problem dual to $ \hp $:
\[
\left. \begin{array}{rl}
\sup_{y^*,s^*} & y^*b \\
\mathrm{s.t.} & y^* A + s^* = c^* \\
& s^* \in \Lambdap^* \end{array} \quad \right\} \, \hp^* \; ,
\] where $ \Lambdap^* $ is the cone dual to $ \Lambdap $. Recall that just as a matter of definitions, the optimal value of $ \hp^* $ satisfies $ \val^* \leq \val $ (``weak duality''), where $ \val $ is the optimal value of $ \hp $.
Recall that a pair $ (y^*,s^*) $ satisfying the constraints is said to be ``strictly'' feasible if $ s^* \in \int(\Lambdap^*) $ (interior).
For $ e \in \swath(i) \setminus \core(i) $, define
\[ \sie{i}{e} := \frac{c^*(e-x) }{\ppie{i+1}{e}(x)} D \ppie{i}{e}(x) \quad \textrm{where } x = \xie{i}{e} \; . \]
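For concreteness (our own illustration, not the paper's), given the primal point $ x = \xie{i}{e} $, the dual slack just defined and the accompanying multiplier of the theorem below can be assembled numerically as follows; the callables \texttt{grad\_p\_i} and \texttt{p\_ip1} (returning $ D \ppie{i}{e}(x) $ and $ \ppie{i+1}{e}(x) $) are assumptions of this sketch, and $ A $ is assumed surjective as elsewhere in the paper.
\begin{verbatim}
# Illustrative sketch (ours): assemble the dual pair (y, s) appearing in the
# theorem below from the primal point x = x_e^{(i)}.  The callables
#   grad_p_i(e, x) = D p_e^{(i)}(x)   and   p_ip1(e, x) = p_e^{(i+1)}(x)
# are assumptions of this sketch, supplied by the user.
import numpy as np

def dual_pair(A, c, e, x, grad_p_i, p_ip1):
    s = (c @ (e - x)) / p_ip1(e, x) * grad_p_i(e, x)   # s_e^{(i)}
    # y solves y A = c - s; since A is surjective, A^T has full column rank
    # and the (consistent) least-squares system below has a unique solution.
    y, *_ = np.linalg.lstsq(A.T, c - s, rcond=None)
    return y, s
\end{verbatim}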
We now restate and prove \hyperlink{targ_main_thm_part_two}{Part II} of the Main Theorem.
\begin{thm} \label{t.i.a}
Assume $ 1 \leq i \leq n-2 $ and let $ \{ e(t): 0 \leq t < T \} $ be a maximal trajectory for the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ starting at $ e(0) \in \swath(i) \setminus \core(i) $. Then $ y^*A + \sie{i}{e(t)} = c^* $ has a unique solution $ y^* = \yie{i}{e(t)} $, and the pair $ (\yie{i}{e(t)},\sie{i}{e(t)}) $ is strictly feasible for $ \hp^* $. Moreover,
\[ \yie{i}{e(t)}b = c^* \xie{i}{e(t)} \xrightarrow[t \rightarrow T]{} \val \]
(in fact, increases to $ \val $ strictly monotonically) and the path $ t \mapsto (\yie{i}{e(t)},\sie{i}{e(t)}) $ is bounded.
\end{thm}
\begin{proof} Assume $ e \in \swath(i) \setminus \core(i) $. By Corollary~\ref{t.g.d}, $ \xie{i}{e} $ is optimal for the convex optimization problem
\[ \begin{array}{rl}
\min_x & - \ln c^*(e-x) - \frac{\ppie{i}{e}(x)}{\ppie{i+1}{e}(x)} \\
\mathrm{s.t.} & Ax = b \; ,
\end{array} \]
and hence there exists $ y^* = \yie{i}{e} $ satisfying the (rearranged) first-order condition
\[ y^* A + \sie{i}{e} = c^* \; ,\]
where, letting $ q_e := \ppie{i}{e}/\ppie{i+1}{e} $ and $ x_e = \xie{i}{e} $,
\[ \sie{i}{e} := \left( c^*(e - x_e) \right) Dq_e( x_e) = \frac{c^*(e-x_e) }{\ppie{i+1}{e}(x_e)} D \ppie{i}{e}(x_e), \]
(using $ \ppie{i}{e}(x_e) = 0 $).
Thus, to show the pair $ (\yie{i}{e}, \sie{i}{e}) $ is strictly feasible for $ \hp^* $, it suffices to show $ \sie{i}{e} \in \int(\Lambdap^*) $, that is, assuming $ z \in \Lambdap $, $ z \neq 0 $, it suffices to show $ Dq_e(x_e)[z] > 0 $.
Corollary~\ref{t.d.b} shows $ ( \partial \Lambdaiep{i}{e}) \setminus \Lambdap $ contains no line segments of positive length other than those lying in rays $ \{ sx: s > 0 \} $. Thus, the line segment connecting $ x_e $ to $ z $ lies entirely within $ \Lambdaiepp{i}{e} $ with the exception of the point $ x_e $ and possibly the point $ z $. Hence, $ z(s) := (1-s)z + sx_e $ satisfies $ q_e(z(s)) > 0 $ for $ 0 < s < 1 $. Fix $ s $ strictly between 0 and 1.
Concavity of $ q_e $ (according to Theorem~\ref{t.g.b}) and $ q_e(x_e) = 0 $ imply $ q_e(z(s)) \leq Dq_e(x_e)[z(s) - x_e] $, whereas $ q_e(x_e) = 0 $ and homogeneity of $ q_e $ give $ Dq_e(x_e)[x_e] = 0 $. Thus,
\[ 0 < q_e(z(s)) \leq Dq_e(x_e)[z(s)-x_e] = (1-s) Dq_e(x_e)[z] \; , \]
completing the proof that $ (\yie{i}{e}, \sie{i}{e}) $ is strictly feasible for $ \hp^* $. Additionally observing
\begin{align*}
\yie{i}{e} b & = \yie{i}{e} A x_e \\
& = c^* x_e - \sie{i}{e} x_e \\
& = c^* x_e - \left( c^*(e-x_e) \right) Dq_e(x_e)[x_e] \\
& = c^* x_e \; ,
\end{align*}
we thus see for a maximal trajectory $ \{ e(t): 0 \leq t < T \} $, each pair $ (\yie{i}{e(t)}, \sie{i}{e(t)}) $ is strictly feasible for $ \hp^* $, and
\begin{align*}
\yie{i}{e(t)} b & = c^* \xie{i}{e(t)} \\
& \qquad \rightarrow \val \quad \textrm{strictly monotonically} \quad \textrm{(by Proposition~\ref{t.h.b})} \; .
\end{align*}
It only remains to show the path $ t \mapsto (\yie{i}{e(t)}, \sie{i}{e(t)}) $ is bounded, for which it suffices to show $ t \mapsto \sie{i}{e(t)} $ is bounded (as $ A $ is surjective, by assumption). In turn, because $ e(0) \in \Lambdapp $ and $ \sie{i}{e(t)} \in \Lambdap^* $, it suffices to show the value $ \sie{i}{e(t)} e(0) $ is bounded from above independent of $ t $. However,
\begin{align*}
\sie{i}{e(t)} e(0) & = c^*e(0) - \yie{i}{e(t)} A e(0) \\
& = c^* e(0) - \yie{i}{e(t)} b \\
& \qquad \qquad \rightarrow \, c^* e(0) - \val \; ,
\end{align*}
concluding the proof. \end{proof}
\section{{\bf Proof of Theorem 6}} \label{s.j}
We have now finished our analysis of the trajectories $ t \mapsto e(t) $ (and paths $ t \mapsto \xie{i}{e(t)} $) arising from the differential equation $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ on the set $ \swath(i) \setminus \core(i) $. However, in order to have a relatively complete picture of all of $ \swath(i) $, results regarding the structure of $ \core(i) $ are needed. That is the purpose of this section.
Specifically, just as the dynamics $ \dot{e}(t) = \xie{i}{e(t)} - e(t) $ on $ \swath(i) \setminus \core(i) $ leads to optimality, it would be nice as a matter of formalism to know that if $ e \in \core(i) $ and $ x \in \opt $, then moving $ e $ towards $ x $ results in a point $ e + t(x-e) $ ($ 0 < t < 1 $) also in $ \core(i) $, thus retaining optimality.
\begin{thm} \label{t.j.a}
Assume $ 1 \leq i \leq n-2 $. If $ e \in \core(i) $ and $ x \in \opt $ then
\[ \{ e + t(x-e) \in \Lambdapp : t \in \mathbb{R} \} \subseteq \core(i) \; . \]
\end{thm}
\noindent
Extending the theorem to include $ i = 0 $ is trivial, because $ \core(0) $ is precisely the interior of $ \feas $. It also is easy to extend the theorem to $ i = n-1 $, simply because $ \core(n-1) = \emptyset $.\footnote{To see $ \core(n-1) = \emptyset $, recall that $ (\partial \Lambdaiep{n-1}{e}) \cap \Lambdap $ is precisely the lineality space of $ \Lambdap $ (by Theorem~\ref{t.d.a}(C)). Thus, since $ \Lambdap $ is regular (by assumption) and since the origin is infeasible (because $ b \neq 0 $), the boundary of $ \feas $ cannot intersect the boundary of $ \feasie{n-1}{e} $; in particular, it cannot happen that $ \opt $ intersects $ \optie{n-1}{e} $.} We state the theorem only for $ 1 \leq i \leq n-2 $ to avoid having to be mindful of the special cases $ i = 0, n-1 $ during the course of the proof.
By bootstrapping the theorem, the following elementary corollary provides some additional insight into the structure of $ \core(i) $. The corollary is a restatement of Theorem \hyperlink{targ_thm_six}{6}.
\begin{cor} \label{t.j.b}
If $ e \in \core(i) $ and $ {\mathcal A} $ is the smallest affine space containing both $ e $ and $ \opt $, then
\[ {\mathcal A} \cap \Lambdapp \subseteq \core(i) \; . \]
\end{cor}
\begin{proof}
If $ \opt $ contains only a single point, then the corollary is nothing more than a restatement of the theorem. Thus, assume $ \opt $ contains more than a single point, and fix $ \tilde{x} \in \relint(\opt) $. Then
\[
{\mathcal A} = \{ \tilde{x} + \alpha \, ( e - \tilde{x}) + \beta \, (x - \tilde{x}): \alpha \in \mathbb{R}, \, \beta \geq 0 \textrm{ and } x \in \opt \} \; .
\]
Assume $ y \in {\mathcal A} \cap \Lambdapp $, and fix $ \alpha \in \mathbb{R} $, $ \beta \geq 0 $ and $ x \in \opt $ satisfying
\[ y = \tilde{x} + \alpha \, ( e - \tilde{x}) + \beta \, (x - \tilde{x}) \; . \]
To complete the proof, we show $ y \in \core(i) $.
Since $ y $ and $ e $ lie in the interior of $ \feas $, we have $ c^* y, c^* e > \val $ ($ = c^* \tilde{x}, c^* x $), from which follows $ \alpha > 0 $. Thus, the point
\[ z := \smfrac{\alpha }{\alpha + \beta } e + \smfrac{\beta }{\alpha + \beta } \, x \]
is well-defined, is a convex combination of $ e $ and $ x $, and lies in $ \Lambdapp $ (because $ e \in \Lambdapp $, $ x \in \Lambdap $ and $ \alpha > 0 $). As $ e \in \core(i) $ and $ x \in \opt $, Theorem~\ref{t.j.a} implies $ z \in \core(i) $.
However, $ y = z + t(\tilde{x} -z) $ for $ t = 1 - \alpha - \beta $. Hence, applying Theorem~\ref{t.j.a} again, now with $ z $ (resp., $ \tilde{x} $) in place of $ e $ (resp., $ x $), we find $ y \in \core(i) $. \end{proof}
Before turning to the proof of Theorem~\ref{t.j.a}, we recall the conjecture made in \S\ref{s.b}.
\begin{conj}
$ \core(i) $ is convex.
\end{conj}
\noindent It is highly unclear whether resolving the conjecture positively could have any relevance to algorithms. Nonetheless, a positive resolution would be interesting from a structural perspective.
Now we begin the process of proving Theorem~\ref{t.j.a}. For motivation, we first consider a reasonably-general case for which the proof is rather straightforward, and afterwards move to the proof for the truly general case. That the proof is straightforward in one setting but not in full generality is reminiscent of the proof of Part I of the Main Theorem, where the case of ``$ T = \infty $'' was vastly easier than the case ``$ T < \infty $'' (i.e., \S\ref{s.h.a} was a breeze compared to \S\ref{s.h.b}).
The straightforward case relies on the following lemma.
\begin{lemma} \label{t.j.c}
If $ 0 \leq i \leq n-2 $ and $ x \in \partial \Lambdaiep{i}{e} $, then
\[ D\ppie{i}{e}(x) = 0 \quad \Leftrightarrow \quad x \in \partial \Lambdaiep{i+1}{e} \; . \]
\end{lemma}
\begin{proof} This is Lemma 7 in \cite{renegar}. \end{proof}
Here is the straightforward case:
\begin{thm} \label{t.j.d}
Assume $ 1 \leq i \leq n-2 $ and $ e \in \core(i) $. If $ x \in \opt \cap \Lambdaiepp{i+1}{e} $ then
\[ \{ e + t(x-e) \in \Lambdapp : t \in \mathbb{R} \} \subseteq \core(i) \; . \]
\end{thm}
\begin{proof} Recall Corollary~\ref{t.g.d}, which shows that for every $ \bar{e} \in \Lambdapp $ and $ 0 \leq i \leq n-2 $, the points lying in $ \optie{i}{ \bar{e}} \setminus \partial \Lambdaiep{i+1}{\bar{e}} $ are precisely the optimal solutions to the linearly constrained, convex optimization problem
\[ \begin{array}{rl}
\min_{x \in \Lambdaiepp{i+1}{\bar{e}}} & - \ln c^*( \bar{e} - x) - \frac{\ppie{i}{\bar{e} }(x)}{\ppie{i+1}{\bar{e}}(x)} \\
\textrm{s.t.} & Ax = b \; , \end{array} \]
that is, for every $ \bar{e} \in \Lambdapp $, letting $ \qie{i}{ \bar{e}} := \ppie{i}{ \bar{e}}/\ppie{i+1}{ \bar{e}} $,
\begin{equation} \label{e.j.a}
\left. \begin{array}{c}
x \in \optie{i}{\bar{e} } \cap \Lambdaiepp{i+1}{\bar{e}} \\ \Leftrightarrow \\
\smfrac{1}{c^*( \bar{e}-x) } \, c^* - D \qie{i}{ \bar{e}}(x) = y^* A \quad \textrm{for some $ y^* $} \\
Ax = b \\
x \in \Lambdaiepp{i+1}{ \bar{e}} \; .
\end{array} \quad \right\} \end{equation}
Now assume $ x \in \opt \cap \Lambdaiepp{i+1}{e} $, where $ e \in \core(i) $ and $ 0 \leq i \leq n-2 $. Then, since $ \Lambdap \cap \partial \Lambdaiep{j}{ \bar{e}} $ ($ 0 \leq j \leq n-1 $) is independent of $ \bar{e} \in \Lambdapp $ (by Theorem~\ref{t.d.a}(B)), we have for all $ \bar{e} \in \Lambdapp $ that
\begin{gather}
x \in \Lambdaiepp{i+1}{ \bar{e}} \; , \label{e.j.b} \\
\ppie{j}{ \bar{e}}(x) = 0 \quad \textrm{for all $ 0 \leq j \leq i $} \; , \label{e.j.c}
\end{gather}
and, using Lemma~\ref{t.j.c},
\begin{equation} \label{e.j.d}
D \ppie{j}{ \bar{e}}(x) = 0 \quad \textrm{for all $ 0 \leq j \leq i-1 $} \; .
\end{equation}
Moreover, by (\ref{e.j.a}), there exists $ y^* $ satisfying
\[
\smfrac{1}{c^*( e-x) } \, c^* - D \qie{i}{ e}(x) = y^* A \; ,
\]
that is, as $ \ppie{i}{e}(x) = 0 $, there exists $ y^* $ satisfying
\begin{equation} \label{e.j.e}
\smfrac{1}{c^*( e-x) } \, c^* - \smfrac{1}{\ppie{i+1}{e}(x)} D \ppie{i}{ e}(x) = y^* A \; .
\end{equation}
Fix $ t $ for which $ e + t(x-e) \in \Lambdapp $ (thus, $ - \infty < t < 1 $). To prove the theorem, it suffices to show $ x \in \optie{i}{e + t(x-e)} $, because then, $ \opt \cap \optie{i}{e+t(x-e)} \neq \emptyset $, and hence, $ \opt = \optie{i}{e+t(x-e)} $ (by Corollary~\ref{t.d.b}(B)). Thus, in light of (\ref{e.j.b}), we see from (\ref{e.j.a}) that to prove the theorem, it suffices to show
\[
\smfrac{1}{(1-t) \, c^*(e -x) } \, c^* - D \qie{i}{ e + t(x-e) }(x) = w^* A \quad \textrm{for some $ w^* $} \; ,
\]
that is, as $ \ppie{i}{ e + t(x-e)}(x) = 0 $ (by (\ref{e.j.c})), it suffices to show
\begin{equation} \label{e.j.f}
\smfrac{1}{(1-t) \, c^*(e -x) } \, c^* - \smfrac{1}{\ppie{i+1}{ e + t(x-e) }(x)} D \ppie{i}{ e + t(x-e) }(x) = w^* A \quad \textrm{for some $ w^* $} \; .
\end{equation}
However,
\begin{align*}
D \ppie{i}{e+t(x-e)}(x) & = D^{i+1}p(x)[\underbrace{(1-t)e + tx, \ldots, (1-t)e + tx}_{\textrm{$ i $ times}}] \\ & = \sum_{j = 0}^i \binom{i}{ j }(1-t)^{ j} t^{i - j} D^{i+1}p(x)[\underbrace{e, \ldots, e}_{\textrm{$ j $ times}}, \underbrace{x, \ldots, x}_{\textrm{$ i - j $ times}}] \\
& = \sum_{j = 0}^i \binom{i}{ j }(1-t)^{ j} t^{i - j} D^{i+1-j }\ppie{j}{e}(x)[\underbrace{x, \ldots, x}_{\textrm{$ i - j $ times}}] \\
& = \sum_{j = 0}^i \binom{i}{ j }(1-t)^{ j} t^{i-j} \, \frac{(n-j-1)!}{(n-i-1)!} D \ppie{j}{e}(x) \quad \textrm{(by (\ref{e.c.f}))} \; \\
& = (1-t)^i D \ppie{i}{e}(x) \quad \textrm{(by (\ref{e.j.d}))} \; ,
\end{align*}
and, similarly,
\begin{align*}
\ppie{i+1}{e+t(x-e)}(x) & = \sum_{j = 0}^{i+1} \binom{i+1}{ j }(1-t)^{ j} t^{i+1-j} \, \frac{(n-j)!}{(n-i-1)!} \ppie{j}{e}(x) \\
& = (1-t)^{i+1} \ppie{i+1}{e}(x) \quad \textrm{(by (\ref{e.j.c}))} \; .
\end{align*}
Hence, $ w^* = \smfrac{1}{1-t} \, y^* $ satisfies (\ref{e.j.f}) (where $ y^* $ is as in (\ref{e.j.e})), completing the proof. \end{proof}
In the course of the proof for the ``truly general'' case, there is a crucial outside result to which we refer, and thus before formally beginning the proof, we explain the result.
For $ e \in \Lambdapp $ and an arbitrary point $ x $, let $ M_e(x) $ denote the number of non-negative roots of $ t \mapsto p(x+te) $. The result is that $ M_e(x) $ is independent of $ e \in \Lambdapp $, as was stated in \cite{hl} (as Theorem 2.12). (In \cite{renegar}, the fact was (essentially) established {\em only} for $ x \in \Lambdap $.)
Perhaps worth recording is that the independence easily follows from a most useful tool in the hyperbolic polynomial literature:
\hypertarget{hv_thm}{}
\begin{rem} Assume $ e \in \Lambdapp $. For any points $ x $ and $ z $, there exist $ n \times n $ symmetric matrices $ X $ and $ Z $ such that
\[ p(rx + sz + te) = p(e) \, \det( rX + sZ + tI) \quad \textrm{for all } r, s, t \in \mathbb{R} \; . \]
\end{rem}
\noindent This formulation of the theorem comes from \cite{lpr}, where it was obtained by straightforward homogenization of the original result in \cite{hv}. (The initial importance of the homogeneous version was that it affirmatively settled the ``Lax Conjecture'' -- see \cite{lpr} for discussion.) (See \cite{branden} for negative results on possibilities of extensions to more than three variables $ r $, $ s $, $ t $.)
\addtocounter{thm}{1}
\begin{cor} \label{t.j.f}
For $ e \in \Lambdapp $ and arbitrary $ x $, let $ M_e(x) $ denote the number of non-negative roots for the polynomial $ t \mapsto p(x+te) $. The value $ M_e(x) $ is the same for all $ e \in \Lambdapp $.
\end{cor}
\begin{proof}
First observe that because all roots of univariate polynomials vary continuously in the coefficients so long as the leading coefficient does not vanish, it suffices to show $ \mult_e(x) $ is independent of $ e \in \Lambdapp $, where $ \mult_e(x) $ is the multiplicity of 0 as a root of $ t \mapsto p(x+te) $. However, for $ e, z \in \Lambdapp $, using the Helton-Vinnikov Theorem above (applied to the points $ x $, $ z $ and the hyperbolicity direction $ e $, and noting that $ Z $ is positive definite because $ z \in \Lambdapp $),
\[ \mult_e(x) = n - \mathrm{rank}(X) = n - \mathrm{rank}(Z^{-1/2}XZ^{-1/2}) = \mult_{z}(x) \; , \]
and thus the corollary is established. \end{proof}
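For a toy illustration of the corollary (ours, not the paper's), take the simplest hyperbolic polynomial, the product of the coordinates, which is hyperbolic with respect to the positive orthant; each factor of $ t \mapsto p(x+te) $ is then linear, so the non-negative roots can be counted directly, and the count is seen not to depend on the choice of $ e $ in the open orthant.
\begin{verbatim}
# Toy illustration (ours) of the corollary for p(x) = x1*x2*...*xn,
# hyperbolic with respect to the positive orthant: the count of
# non-negative roots of t -> p(x + t e) does not depend on e > 0.
import numpy as np

def nonneg_roots(x, e, tol=1e-9):
    roots = -x / e                      # the factor x_i + t*e_i vanishes at t = -x_i/e_i
    return int(np.sum(roots >= -tol))   # generic case: all multiplicities are one

x = np.array([1.0, -2.0, 3.0, -0.5])    # an arbitrary test point
for e in (np.ones(4),
          np.array([1.0, 2.0, 3.0, 4.0]),
          np.array([0.1, 5.0, 0.3, 2.0])):
    print(nonneg_roots(x, e))           # prints 2 for each choice of e
\end{verbatim}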
\noindent
{\bf {\em Proof of Theorem~\ref{t.j.a}.}} Fix $ e \in \core(i) $, where $ 1 \leq i \leq n-2 $.
Let $ x $ denote a point in $ \opt $ ($ = \optie{i}{e} $).
Let $ e(t) := e + t(x-e) $, and let $ I $ denote the open interval consisting of all $ t \in \mathbb{R} $ for which $ e(t) \in \Lambdapp $ -- thus, $ I \subseteq (- \infty, 1) $.
Since $ \Lambdap \cap \partial \Lambdaiep{i}{\bar{e}} $ is independent of $ \bar{e} \in \Lambdapp $ (by Corollary~\ref{t.d.b}(A)), and since $ \optie{i}{e} = \opt $, we have
\begin{equation} \label{e.j.g}
\opt \subset \partial \Lambdaiep{i}{e(t)} \textrm{ for all $ t \in I $} \; .
\end{equation}
Let $ {\mathcal D} := \{ d: Ad = 0 \textrm{ and } c^*d = 0 \} $, and let $ x + {\mathcal D} $ be the set consisting of all points $ x + d $ where $ d \in {\mathcal D} $.
Since $ \relint(\feasie{i}{e(t)}) = (x + \{d: Ad = 0 \}) \cap \Lambdaiepp{i}{e(t)} $ and $ x \in \opt \subset \feasie{i}{e(t)} $, we have
\[ x \in \optie{i}{e(t)} \quad \Leftrightarrow \quad (x + {\mathcal D} ) \cap \Lambdaiepp{i}{e(t)} = \emptyset \; . \]
On the other hand, $ x $ is in $ \optie{i}{e(t)} $, as well as in $ \opt $, if and only if $ e(t) \in \core(i) $ (by Corollary~\ref{t.d.b}(B)). Consequently,
\[
e(t) \in \core(i) \quad \Leftrightarrow \quad (x + {\mathcal D} ) \cap \Lambdaiepp{i}{e(t)} = \emptyset \; . \]
Thus, our goal -- proving $ e(t) \in \core(i) $ for all $ t \in I $ -- will be accomplished if we prove $ (x + {\mathcal D} ) \cap \Lambdaiepp{i}{e(t)} = \emptyset $ for all $ t \in I $. Hence, fixing arbitrary $ d \in {\mathcal D} $, letting $ \ray(d) := \{ sd: s \geq 0 \} $ and
\[ I(d) := \{ t \in I: (x + \ray(d)) \cap \Lambdaiepp{i}{e(t)} = \emptyset \} \; , \]
our goal is to show $ I(d) = I $.
Recall, however, the characterization (\ref{e.c.b}) now applied to $ \Lambdaiepp{i}{e(t)} $:
\begin{equation} \label{e.j.h}
\Lambdaiepp{i}{e(t)} := \{ x: \ppie{j}{e(t)}(x) > 0 \textrm{ for all $ j = i, \ldots, n-1 $} \} \; .
\end{equation}
Clearly, then, $ I \setminus I(d) $ is an open subset of the open interval $ I $. Hence, since $ I(d) \neq \emptyset $ (indeed, $ 0 \in I(d) $), to complete the proof of the theorem, it suffices to show that $ I(d) $ is open.
As $ e $ is only assumed to be an element of $ \core(i) $ -- in particular, $ e $ could be replaced with any $ e(t) $ that happens to be in $ \core(i) $ -- our goal has become:
\begin{equation} \label{e.j.i}
\textrm{Show $ I(d) $ contains an open interval including $ 0 $.}
\end{equation}
For non-negative integers $ j $ and $ k $, define
\[ \alpha_{jk}(t) := D^k \ppie{j}{e(t)}(x)[\underbrace{d, \ldots, d}_{\textrm{$ k $ times}}] \]
and
\[ \beta_j(t) := \begin{cases} 0 & \textrm{if $ \alpha_{jk}(t) = 0 $ for all $ k $ ;} \\
\alpha_{j,k(j,t)} & \textrm{otherwise, where $ k(j,t) := \min \{ k: \alpha_{jk}(t) \neq 0 \} $ .} \end{cases} \]
Note that if $ \beta_j(t) = 0 $, then the polynomial $ s \mapsto \ppie{j}{e(t)}(x+sd) $ is identically zero, whereas if $ \beta_j(t) \neq 0 $, the polynomial evaluated at small, positive $ s $ has the same sign as $ \beta_j(t) $. Thus, since $ x $ lies in the convex set $ \Lambdaiep{i}{e(t)} $, we have by the characterization (\ref{e.j.h}) that
\[ (x + \ray(d)) \cap \Lambdaiepp{i}{e(t)} \neq \emptyset \quad \Leftrightarrow \quad \beta_j(t) > 0 \textrm{ for all $ j = i, \ldots, n-1 $} \; . \]
Our goal (\ref{e.j.i}) has now been reduced to:
\begin{equation} \label{e.j.j}
\textrm{Show } \exists \epsilon > 0 \textrm{ such that } |t| < \epsilon \, \Rightarrow \, \beta_j(t) \leq 0 \textrm{ for some $ j \in \{ i, \ldots, n-1 \} $} \; .
\end{equation}
For this we consider two cases.
First, assume $ \beta_j(0) \geq 0 $ for all $ j = i, \ldots, n-1 $. Then $ x + sd \in \feasie{i}{e} $ for all sufficiently small, positive $ s $. Thus, since $ x \in \optie{i}{e} = \opt $ and $ c^*d = 0 $, we have $ x + sd \in \opt $ for all sufficiently small, positive $ s $. Hence, by (\ref{e.j.g}), for each $ t \in I $, the univariate polynomial $ s \mapsto \ppie{i}{e(t)}(x+sd) $ has value zero on an open interval, and thus is identically zero, so $ \beta_i(t) = 0 $ for all $ t \in I $, (more than) accomplishing (\ref{e.j.j}) for the first case.
Now consider the remaining case, that is, assume $ \beta_j(0) < 0 $ for some $ j \in \{ i, \ldots, n-1 \} $; fix such a $ j $. To accomplish (\ref{e.j.j}) , of course it suffices to show for this fixed value of $ j $ that $ \beta_{j}(t) < 0 $ for all $ t $ in an open interval containing $ 0 $. In turn, by definition of $ \beta_j(t) $, it suffices to:
\begin{equation} \label{e.j.k}
\textrm{Show } \exists \epsilon>0 \textrm{ such that } |t| < \epsilon \quad \Rightarrow \quad
D^{k(j,0)} \ppie{j}{e(t)}(x)[\underbrace{d, \ldots, d}_{\textrm{$ k(j,0) $ times}} ] < 0
\end{equation}
and
\begin{equation} \label{e.j.l}
\textrm{show } (t \in I) \wedge (k < k(j,0)) \quad \Rightarrow \quad D^{k} \ppie{j}{e(t)}(x)[\underbrace{d, \ldots, d}_{\textrm{$ k $ times}} ] = 0 \; .
\end{equation}
The existence of $ \epsilon $ as in (\ref{e.j.k}) is simply a matter of continuity and $ \beta_j(0) < 0 $.
For accomplishing (\ref{e.j.l}), observe
\begin{align*}
D^k \ppie{j}{e(t)}(x) & = D^{k+j}p(x)[\underbrace{(1-t)e + tx, \ldots, (1-t)e + tx}_{\textrm{$ j $ times}}] \\
& = \sum_{\ell = 0}^j \binom{j}{ \ell }(1-t)^{ \ell} t^{j - \ell} D^{k+j- \ell }\ppie{\ell}{e}(x)[\underbrace{x, \ldots, x}_{\textrm{$ j - \ell $ times}}] \\
& = \sum_{\ell = 0}^j \binom{j}{ \ell }(1-t)^{ \ell} t^{j - \ell} \, \frac{(n-k-\ell)!}{(n-k-j)!} D^k \ppie{\ell}{e}(x) \quad \textrm{(using (\ref{e.c.f}))} \; .
\end{align*}
Consequently, (\ref{e.j.l}) is immediately accomplished by the following proposition, thus concluding the proof of the theorem (except for proving the proposition). \hfill $ \Box $
\begin{prop} \label{t.j.g}
Assume $ e \in \Lambdapp $, $ x \in \Lambdap $ and let $ d $ be a vector. Assume $ j $ is a non-negative integer for which there exists a non-negative integer $ k $ satisfying $ D^k \ppie{j}{e}(x) [\underbrace{d, \ldots, d}_{k \textrm{ times}}] \neq 0 $; let $ k(j) $ denote the smallest such $ k $, and assume $ k(j) > 0 $.
Then
\[ D^{k } \ppie{ \ell }{e}(x)[\underbrace{d, \ldots, d}_{ k \textrm{ times}}] = 0 \quad \textrm{for all $ \ell = 0, \ldots, j $ and $ k = 0,\ldots, k(j)-1 $ } \; . \]
\end{prop}
The proof of the proposition makes use of the following lemma.
For $ e \in \Lambdapp $ and arbitrary $ z $, recall that $ M_e(z) $ denotes the number of non-negative roots of $ t \mapsto p(z+te) $.
\begin{lemma} \label{t.j.h}
Assume $ e, x \in \Lambdapp $, let $ d \neq 0 $, and let $ s_1 \leq s_2 \leq \cdots \leq s_k $ denote the positive roots (including multiplicities) of $ s \mapsto p(x+sd) $. Then, for every $ \bar{s} \geq 0 $,
\[ M_e(x+\bar{s} d) = \# \{ i: s_i \leq \bar{s} \} \; . \]
\end{lemma}
\begin{proof} Because $ M_e(x+\bar{s} d) $ is independent of $ e \in \Lambdapp $ (by Corollary~\ref{t.j.f}), in proving the lemma we may assume $ e = x $, in which case we need only consider the hyperbolic polynomial obtained by restricting $ p $ to the subspace spanned by $ x $ and $ d $. However, every hyperbolic polynomial in two variables of degree $ n $ is of the form $ (y_1,y_2) \mapsto \prod_{j=1}^n a_j^T y $ for some vectors $ a_j \in \reals^{2} $ (this easily follows from two facts: (i) every complex homogeneous polynomial in two variables of degree $ n $ is of the form $ (y_1,y_2) \mapsto \prod_{j=1}^n (a_{j,1}y_1 + a_{j,2}y_2) $ for some $ a_j \in \mathbb{C}^2 $; (ii) if $ a \in \mathbb{C}^2 $ is not a (complex) multiple of a real vector, then $ \{ (y_1,y_2) \in \mathbb{R}^2 : a_1y_1 + a_2y_2 = 0 \} $ contains only the origin).
Thus, we need only consider hyperbolic polynomials $ p(y_1, y_2) = \prod_{j=1}^n a_j^T y $ -- where $ a_j \in \mathbb{R}^2 $ -- and $ x = (x_1, x_2) $ which for all $ j $ satisfies $ a_j^T x \neq 0 $. For $ (d_1, d_2) \neq (0,0) $, our goal is to show for $ \bar{s} \geq 0 $ that the number of roots $ 0 < \hat{s} \leq \bar{s} $ (counting multiplicities) for the univariate polynomial
\begin{equation} \label{e.j.m}
s \mapsto \prod_{j=1}^n a_j^T (x + sd)
\end{equation}
is the same as the number of roots $ \hat{t} \geq 0 $ (counting multiplicities) for
\begin{equation} \label{e.j.n}
t \mapsto \prod_{j=1}^n a_j^T (x + \bar{s}d + t x ) \; .
\end{equation}
Of course, however, $ \hat{s} > 0 $ is a root of (\ref{e.j.m}) if and only if $ \hat{s} = \bar{s}/(1+\hat{t} ) $ for some root $ \hat{t} \geq 0 $ of (\ref{e.j.n}). The lemma follows.
\end{proof}
\noindent {\bf {\em Proof of Proposition~\ref{t.j.g}.}} Assume $ e \in \Lambdapp $. For arbitrary $ z $ and $ 0 \leq \ell \leq n - 1 $, let $ \mie{\ell}{e}(z) $ denote the number of non-negative roots (counting multiplicities) for $ t \mapsto \ppie{\ell}{e}(z+te) $. Additionally, for a vector $ d $ and value $ s \geq 0 $, let $ \nie{\ell}{e}(x,d,s) $ denote the number of roots (counting multiplicities) in the closed interval $ [-s, s] $ for the univariate polynomial $ \bar{s} \mapsto \ppie{\ell}{e}(x+ \bar{s} d) $; if the univariate polynomial is identically zero, let $ \nie{\ell}{e}(x,d,s) = \infty $.
If $ x \in \Lambdapp $ (hence $ x \in \Lambdaiepp{\ell}{e} $ for all $ 0 \leq \ell \leq n-1 $), $ d \neq 0 $ and $ s \geq 0 $, Lemma~\ref{t.j.h} can be applied with $ \ppie{\ell}{e} $ in place of $ p $, and applied with $ -d $ as well as with $ d $, yielding
\begin{equation} \label{e.j.o}
\nie{\ell}{e}(x,d,s) = \mie{\ell}{e}(x+sd) + \mie{\ell}{e}(x-sd) \; .
\end{equation}
On the other hand, for any $ z $, the interlacing of the roots of $ t \mapsto \ppie{\ell}{e}(z+te) $ and its derivative $ t \mapsto \ppie{\ell +1}{e}(z+te) $ gives $ \mie{\ell}{e}(z) \geq \mie{\ell +1}{e}(z) $, and thus,
\begin{equation} \label{e.j.p}
\ell \leq j \quad \Rightarrow \quad \mie{\ell}{e}(z) \geq \mie{j}{e}(z) \; .
\end{equation}
From (\ref{e.j.o}) and (\ref{e.j.p}) follows
\begin{equation} \label{e.j.q}
\left( x \in \Lambdapp \right) \wedge \left( \ell \leq j \right) \quad \Rightarrow \quad \nie{\ell}{e}(x,d,s) \geq \nie{j}{e}(x,d,s) \; .
\end{equation}
Assume, now, that $ x $, $ d $, $ j $ and $ k(j) $ satisfy the hypothesis of the proposition. Observe that definitions readily give $ k(j) = \nie{j}{e}(x,d,0) $. Moreover, proving the proposition amounts precisely to showing
\begin{equation} \label{e.j.r}
\nie{\ell}{e}(x,d,0) \geq \nie{j}{e}(x,d,0) \quad \textrm{for $ \ell = 0, \ldots, j $} \; .
\end{equation}
For $ \epsilon > 0 $, let $ x( \epsilon ) := x + \epsilon e $, a point in $ \Lambdapp $. Each polynomial $ \bar{s} \mapsto \ppie{\ell}{e}( x( \epsilon)+ \bar{s}d) $ ($ 0 \leq \ell \leq n-1 $) is not identically zero and has only real roots (indeed, by homogeneity, the roots are the reciprocals of the non-zero roots for $ t \mapsto \ppie{\ell}{e}(d + tx( \epsilon )) $). Thus, since
\begin{equation} \label{e.j.s}
\left. \begin{array}{c}
\textrm{bounded roots of a univariate polynomial} \\ \textrm{vary continuously in the coefficients}\\
\textrm{(so long as the polynomial does not become identically zero)} \end{array} \quad \right\}
\end{equation}
we have for $ 0 \leq \ell \leq j $ that either $ \nie{\ell}{e}(x,d,0) = \infty $ (i.e., $ \bar{s} \mapsto \ppie{\ell}{e}(x + \bar{s} d) \equiv 0 $), or
\begin{align*}
\nie{\ell}{e}(x,d,0) & = \lim_{ s \downarrow 0} \left( \lim_{ \epsilon \downarrow 0} \left( \nie{\ell}{e}(x( \epsilon),d,s) \right) \right) \\
& \geq \lim_{ s \downarrow 0} \left( \lim_{ \epsilon \downarrow 0} \left( \nie{j}{e}(x( \epsilon),d,s) \right) \right) \quad \textrm{(by (\ref{e.j.q}))} \\
& \geq \lim_{ s \downarrow 0} \left( \nie{j}{e}(x,d,s/2) \right) \quad \textrm{(again using (\ref{e.j.s}))} \\
& = \nie{j}{e}(x,d,0) \; ,
\end{align*}
thereby establishing (\ref{e.j.r}) and hence completing the proof. \hfill $ \Box $
| {
"timestamp": "2012-03-20T01:02:38",
"yymm": "1005",
"arxiv_id": "1005.5495",
"language": "en",
"url": "https://arxiv.org/abs/1005.5495",
"abstract": "We develop a natural generalization to the notion of the central path -- a notion that lies at the heart of interior-point methods for convex optimization. The generalization is accomplished via the \"derivative cones\" of a \"hyperbolicity cone,\" the derivatives being direct and mathematically-appealing relaxations of the underlying (hyperbolic) conic constraint, be it the non-negative orthant, the cone of positive semidefinite matrices, or other.We prove that a dynamics inherent to the derivative cones generates paths always leading to optimality, the central path arising from a special case in which the derivative cones are quadratic. Derivative cones of higher degree better fit the underlying conic constraint, raising the prospect that the paths they generate lead to optimality quicker than the central path.",
"subjects": "Optimization and Control (math.OC); Numerical Analysis (math.NA)",
"title": "Central Swaths (A Generalization of the Central Path)",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969684454966,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7099044238234263
} |
https://arxiv.org/abs/1505.04214 | Algorithmic Connections Between Active Learning and Stochastic Convex Optimization | Interesting theoretical associations have been established by recent papers between the fields of active learning and stochastic convex optimization due to the common role of feedback in sequential querying mechanisms. In this paper, we continue this thread in two parts by exploiting these relations for the first time to yield novel algorithms in both fields, further motivating the study of their intersection. First, inspired by a recent optimization algorithm that was adaptive to unknown uniform convexity parameters, we present a new active learning algorithm for one-dimensional thresholds that can yield minimax rates by adapting to unknown noise parameters. Next, we show that one can perform $d$-dimensional stochastic minimization of smooth uniformly convex functions when only granted oracle access to noisy gradient signs along any coordinate instead of real-valued gradients, by using a simple randomized coordinate descent procedure where each line search can be solved by $1$-dimensional active learning, provably achieving the same error convergence rate as having the entire real-valued gradient. Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning, achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters. | \section{Introduction}
The two fields of convex optimization and active learning seem to have evolved quite independently of each other. Recently, \cite{RR09} pointed out their relatedness due to the inherent sequential nature of both fields and the complex role of feedback in taking future actions. Following that, \cite{RS13} made the connections more explicit by tying together the exponent used in noise conditions in active learning and the exponent used in uniform convexity (UC) in optimization. They used this to establish lower bounds (and tight upper bounds) in stochastic optimization of UC functions based on proof techniques from active learning. However, it was unclear if there were concrete algorithmic ideas in common between the fields.
Here, we provide a positive answer by exploiting the aforementioned connections to form new and interesting algorithms that clearly demonstrate that the complexity of $d$-dimensional stochastic optimization is precisely the complexity of $1$-dimensional active learning. Inspired by an optimization algorithm that was adaptive to unknown uniform convexity parameters, we design an interesting one-dimensional active learner that is also adaptive to unknown noise parameters. This algorithm is simpler than the adaptive active learning algorithm proposed recently in \cite{H11} which handles the pool based active learning setting.
Given access to this active learner as a subroutine for line search, we show that a simple randomized coordinate descent procedure can minimize uniformly convex functions with a much simpler stochastic oracle that returns only a Bernoulli random variable representing a noisy sign of the gradient in a single coordinate direction, rather than a full-dimensional real-valued gradient vector. The resulting algorithm is adaptive to all unknown UC and smoothness parameters and achieves minimax optimal convergence rates.
We spend the first two sections describing the problem setup and preliminary insights, before describing our algorithms in Sections 3 and 4.
\subsection{Setup of First-Order Stochastic Convex Optimization}
First-order stochastic convex optimization is the task of approximately minimizing a convex function over a convex set, given oracle access to unbiased estimates of the function and gradient at any point, using as few queries as possible (\cite{NY83}).
We will assume that we are given an arbitrary set $S\subset \mathbb{R}^d$ of known diameter bound $R = \max_{x,y\in S} \|x-y\|$. A convex function $f$ with $x^* = \arg \min_{x \in S} f(x)$ is said to be $k$-uniformly convex if, for some $\lambda > 0, k \geq 2$, we have for all $x,y \in S$
$$f(y) \geq f(x) + \nabla f(x)^\top (y-x) + \frac{\lambda}{2} \|x-y\|^k$$
(strong convexity arises when $k=2$). $f$ is $L$-Lipschitz for some $L>0$ if $\|\nabla f(x)\|_* \leq L$ (where $\|.\|_*$ is the dual norm of $\|.\|$); equivalently for all $x,y \in S$
\begin{equation*}
|f(x) - f(y)| \leq L \|x-y\|
\end{equation*}
A differentiable $f$ is $H$-strongly smooth (or has an $H$-Lipschitz gradient) for some $H>\lambda$ if for all $x,y \in S$, we have $\|\nabla f(x) - \nabla f(y)\|_* \leq H \|x-y\|$, or equivalently
$$f(y) \leq f(x) + \nabla f(x)^\top (y-x) + \frac{H}{2} \|x-y\|^2$$
In this paper we shall always assume $\|.\| = \|.\|_*=\|.\|_2$ and deal with strongly smooth and uniformly convex functions with parameters $\lambda > 0, k \geq 2$, $L,H>0$.\\
A stochastic first order oracle is a function that accepts $x \in S$, and returns
$$\Big(\hat{f}(x),{\hat{g}}(x) \Big) \in \mathbb{R}^{d+1} \mbox{ where } \mathbb{E} \big[\hat{f}(x) \big] = f(x), \mathbb{E}\big[\hat{g}(x)\big]= \nabla f(x)$$
(these unbiased estimates also have bounded variance) and the expectation is over any internal randomness of the oracle. \\
An optimization algorithm is a method that sequentially queries an oracle at points in $S$ and returns $\hat{x}_T$ as an estimate of the optimum of $f$ after $T$ queries (or alternatively tries to achieve an error of $\epsilon$); its performance can be measured by either the function error $f(\hat{x}_T) - f(x^*)$ or the point error $\|\hat{x}_T - x^*\|$.\\
\subsection{Stochastic Gradient-Sign Oracles} \label{sgso}
Define a stochastic sign oracle to be a function of $x \in S, j \in \{1...d\}$, that returns
$${\hat{s}}_j(x) \in \{+,-\} \mbox{ where}\footnote{$f=\bT (g)$ means $f=\mathrm{\Omega}(g)$ and $f=\bO (g)$ (rate of growth)} \ \big |\eta(x) - 0.5 \big | = \bT \Big( [\nabla f(x)]_j \Big) \mbox{ and } \eta(x) = \mathbb{P} \big ( {\hat{s}}_j(x) = + | x \big )$$
where ${\hat{s}}_j(x)$ is a noisy sign$\big( [\nabla f(x)]_j \big)$ and $[\nabla f(x)]_j$ is the $j$-th coordinate of $\nabla f$, and the probability is over any internal randomness of the oracle. This behavior of $\eta(x)$ actually needs to hold only when $\big |[\nabla f(x)]_j \big|$ is small.
In this paper, we consider coordinate descent algorithms that are motivated by applications where computing the overall gradient, or even a function value, can be expensive due to high dimensionality or huge amounts of data, but computing the gradient in any one coordinate can be cheap. \cite{N10} mentions the example of $\min_x \frac1{2}\|Ax-b\|^2 + \frac1{2}\|x\|^2$ for some $n \times d$ matrix $A$ (or any other regularization that decomposes over dimensions). Computing the gradient $A^\top (Ax-b) + x$ is expensive, because of the matrix-vector multiply. However, its $j$-th coordinate is $A^{j\top} (Ax-b) + x_j$ and requires an expense of only $n$ if the residual vector $Ax-b$ is kept track of (this is easy to do, since on a single coordinate update of $x$, the residual change is proportional to $A^j$, an additional expense of $n$).
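To make the cost accounting above concrete, here is a minimal Python sketch (ours, purely illustrative and not part of the paper) of the residual trick: the residual $r = Ax - b$ is maintained so that one partial derivative and one coordinate update each cost $O(n)$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

x = np.zeros(d)
r = A @ x - b                        # residual Ax - b, kept up to date (one-time O(nd) cost)
obj = lambda: 0.5 * r @ r + 0.5 * x @ x
print(obj())

for _ in range(500):                 # cheap coordinate steps
    j = rng.integers(d)              # random coordinate
    g_j = A[:, j] @ r + x[j]         # j-th partial derivative of 0.5||Ax-b||^2 + 0.5||x||^2, O(n)
    delta = -0.01 * np.sign(g_j)     # a sign-only step, in the spirit of the sign oracle above
    x[j] += delta
    r += delta * A[:, j]             # O(n) residual update: only coordinate j of x changed

print(obj())                         # typically noticeably smaller than the initial value
\end{verbatim}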
A sign oracle is weaker than a first order oracle, and can actually be obtained by returning the sign of the first order oracle's noisy gradient if the mass of the noise distribution grows linearly around its zero mean (argued in the next section). At the optimum along coordinate $j$, the oracle returns a $\pm 1$ with equal probability, and otherwise returns the correct sign with a probability that exceeds $\tfrac{1}{2}$ by an amount proportional to the magnitude of the directional derivative at that point (this is reflective of the fact that the larger the derivative's absolute value, the easier it would be for the oracle to approximate its sign, hence the smaller the probability of error). It is not unreasonable that there may be other circumstances where even calculating the (real-valued) gradient in the $j$-th direction could be expensive, but estimating its sign could be a much easier task as it only requires estimating whether function values are expected to increase or decrease along a coordinate (in a similar spirit to function comparison oracles \cite{JNR12}, but with slightly more power).
We will also see that the rates for optimization crucially depend on whether the gradient noise is sign-preserving or not. For instance, with rounding errors or storing floats with small precision, one can get deterministic rates as if we had the exact gradient since the rounding or lower precision doesn't flip signs.
\subsection{Setup of Active Threshold Learning}
The problem of one-dimensional threshold estimation assumes you have an interval of length $R$, say $[0,R]$. Given a point $x$, it has a label $y \in \{+,-\}$ that is drawn from an unknown conditional distribution $\eta(x) = \mathbb{P} \big( Y=+|X=x\big)$ and the threshold $t$ is the unique point where $\eta(x) = 1/2$, with it being larger than half on one side of $t$ and smaller than half on the other (hence it is more likely to draw a $+$ on one side of $t$ and a $-$ on the other side).
The task of active learning of threshold classifiers allows the learner to sequentially query $T$ (possibly dependent) points, observing labels drawn from the unknown conditional distribution after each query, with the goal of returning a guess $\hat{x}_T$ as close to $t$ as possible. In the formal study of classification (cf. \cite{T04}), it is common to study minimax rates when the regression function $\eta(x)$ satisfies Tsybakov's noise or margin condition (TNC) with exponent $k$ at the threshold $t$. Different versions of this boundary noise condition are used in regression, density or level-set estimation and lead to an improvement in minimax optimal rates (for classification, also cf. \cite{AT07}, \cite{H11}). Here, we present the version of TNC used in \cite{CN07} :
$$M |x-t|^{k - 1} \geq | \eta(x) - 1/2 | \geq \mu |x-t|^{k - 1} \mbox{ whenever}\footnote{Note that $|x-t| \leq \delta_0 := \left( \frac{\epsilon_0}{M} \right)^{\frac1{k-1}} \implies |\eta(x) - 1/2| \leq \epsilon_0 \implies |x-t| \leq \left( \frac{\epsilon_0}{\mu} \right)^{\frac1{k-1}}$} \ |\eta(x) - 1/2| \leq \epsilon_0 $$
for some constants $M>\mu>0,\epsilon_0 > 0, k \geq 1$.
A standard measure for how well a classifier $h$ performs is given by its risk, which is simply the probability of classification error (expectation under $0-1$ loss), $\mathcal{R}(h) = \mathbb{P} \big[ h (x) \neq y \big]$. The performance of threshold learning strategies can be measured by the excess classification risk of the resultant threshold classifier at $\hat{x}_T$ compared to the Bayes optimal classifier at $t$ as given by \footnote{$a \vee b := \max(a,b) \mbox{ and } a \wedge b := \min(a,b)$}
\begin{equation} \label{risk}
\mathcal{R} (\hat{x}_T) - \mathcal{R} (t) = \int\limits_{\hat{x}_T \wedge t}^{\hat{x}_T \vee t} | 2 \eta(x) - 1| dx
\end{equation}
In the above expression, akin to \cite{CN07}, we use a uniform marginal distribution for active learning since there is no underlying distribution over $x$. Alternatively, one can simply measure the one-dimensional point error $|\hat{x}_T - t|$ in estimation of the threshold. Minimax rates for estimation of risk and point error in active learning under TNC were provided in \cite{CN07} and are summarized in the next section.
\subsection{Summary of Contributions}
Now that we have introduced the notation used in our paper and some relevant previous work (more in the next section), we can clearly state our contributions.
\begin{itemize}
\item We generalize an idea from \cite{JN10} to present a simple epoch-based active learning algorithm with a passive learning subroutine that can optimally learn one-dimensional thresholds and is adaptive to unknown noise parameters.
\item We show that noisy gradient signs suffice for minimization of uniformly convex functions by proving that a random coordinate descent algorithm with an active learning line-search subroutine achieves minimax convergence rates.
\item Due to the connection between the relevant exponents in the two fields, we can combine the above two methods to get an algorithm that achieves minimax optimal rates and is adaptive to unknown convexity parameters.
\item As a corollary, we argue that with access to possibly noisy non-exact gradients that don't switch any signs (rounding errors or low-precision storage are sign-preserving), we can still achieve exponentially fast deterministic rates.
\end{itemize}
\section{Preliminary Insights}
\subsection{Connections Between Exponents}
Taking one point as $x^*$ in the definition of UC, we see that
$$|f(x) - f(x^*)| \geq \frac{\lambda}{2} \|x-x^*\|^k$$
Since $\|\nabla f(x)\| \|x-x^*\| \geq \nabla f(x)^\top (x-x^*) \geq f(x) - f(x^*)$ (by convexity),
$$\|\nabla f(x) - 0\| \geq \frac{\lambda}{2} \|x-x^*\|^{k-1} $$
Another relevant fact for us will be that uniformly convex functions in $d$ dimensions are uniformly convex along any one direction, or in other words, for every fixed $x \in S$ and fixed unit vector $u \in \mathbb{R}^d$, the univariate function of $\alpha$ defined by $f_{x,u}(\alpha) := f(x + \alpha u)$ is also UC with the same parameters\footnote{Since $f$ is UC, $f_{x,u}(\alpha) \geq f_{x,u}(0) + \alpha \nabla f_{x,u}(0) + \frac{\lambda}{2}|\alpha|^k$}. For $u = e_j$,
$$\big | [\nabla f(x)]_j - 0 \big | \geq \frac{\lambda}{2} \|x-x_{j}^*\|^{k-1}$$
where $x_{j}^* = x + \alpha_j^* e_j$ and $\alpha_{j}^* = \arg \min_{\{\alpha|x + \alpha e_j \in S\}} f(x + \alpha e_j)$. This uncanny similarity to the TNC (since $\nabla f(x^*) = 0$) was mathematically exploited in \cite{RS13} where the authors used a lower bounding proof technique for one-dimensional active threshold learning from \cite{CN07} to provide a new lower bounding proof technique for the $d$-dimensional stochastic convex optimization of UC functions. In particular, they showed that the minimax rate for $1$-dimensional active learning excess risk and the $d$-dimensional optimization function error both scaled like\footnote{we use $\mathrm{{\tilde{O}}}, \mathrm{{\tilde{\Theta}}}$ to hide constants and polylogarithmic factors} $\mathrm{{\tilde{\Theta}}} \left( T^{-\frac{k}{2k-2}}\right)$, and that the point error in both settings scaled like $\mathrm{{\tilde{\Theta}}} \left( T^{-\frac{1}{2k-2}}\right)$, where $k$ is either the TNC exponent or the UC exponent, depending on the setting. The importance of this connection cannot be emphasized enough and we will see this being useful throughout this paper.\\
As mentioned earlier \cite{CN07} require a two-sided TNC condition (upper and lower growth condition to provide exact tight rate of growth) in order to prove risk upper bounds. On a similar note, for uniformly convex functions, we will assume such a Local $k$-Strong Smoothness condition around directional minima
$$\mbox{\textbf{Assumption LkSS} : \ \ \ \ for all $j \in \{1...d\}$\ \ \ } \big | [\nabla f(x)]_j - 0 \big | \leq \Lambda \|x-x_{j}^*\|^{k-1} $$
for some constant $\Lambda > \lambda/2$, so we can tightly characterize the rate of growth as
$$\big | [\nabla f(x)]_j - 0 \big | = \bT \Big( \|x-x_{j}^*\|^{k-1} \Big)$$
This condition is implied by strong smoothness or Lipschitz smooth gradients when $k=2$ (for strongly convex and strongly smooth functions), but is a slightly stronger assumption otherwise.
\subsection{The One-Dimensional Argument}
The basic argument for relating optimization to active learning was made in \cite{RS13} in the context of stochastic first order oracles when the noise distribution $\mathrm{P}(z)$ is unbiased and grows linearly around its zero mean, i.e.
$$ \int_0^\infty \mathrm{dP}(z) = \tfrac{1}{2} \ \mbox{ and } \ \int_0^t \mathrm{dP}(z) = \bT ( t ) $$
for all $0 <t < t_0$, for some constant $t_0$ (and similarly for $-t_0 < t < 0$). This is satisfied for Gaussian, uniform and many other distributions. We reproduce the argument for clarity and then sketch it for stochastic sign oracles as well.
For any $x \in S$, it is clear that $f_{x,j}(\alpha) := f(x+\alpha e_j)$ is convex; its gradient $\nabla f_{x,j}(\alpha) := [\nabla f(x + \alpha e_j)]_j$ is an increasing function of $\alpha$ that switches signs at $\alpha^*_j := \arg\min_{\{\alpha | x+ \alpha e_j \in S\}} f_{x,j}(\alpha)$, or equivalently at directional minimum $x^*_j := x + \alpha^*_j e_j$. One can think of sign$([\nabla f(x)]_j)$ as being the true label of $x$, sign$([\nabla f(x)]_j+z)$ as being the observed label, and finding $x_j^*$ as learning the decision boundary (point where labels switch signs). Define regression function
$$\eta(x) := \mathbb{P} \Big(\mbox{sign}([\nabla f(x)]_j+z) = +|x \Big)$$
and note that minimizing $f_{x_0,j}$ corresponds to identifying the Bayes threshold classifier as $x_j^*$ because the point at which $\eta(x)=0.5$ or $[\nabla f(x)]_j=0$ is $x_j^*$. Consider a point $x = x^*_j + t e_j$ for $t>0$ with $[\nabla f(x)]_j > 0$ and hence has true label $+$ (a similar argument can be made for $t < 0$). As discussed earlier, $\big| [\nabla f(x)]_j \big| = \bT \Big( \|x-x_j^*\|^{k-1} \Big) = \bT (t^{k-1})$. The probability of seeing label $+$ is the probability that we draw $z$ in $\big(-[\nabla f(x)]_j,\infty \big)$ so that the sign of $[\nabla f(x)]_j+z$ is still positive. Hence, the regression function can be written as
\begin{align*}
\eta(x) \ &= \ \mathbb{P} \Big([\nabla f(x)]_j + z > 0 \Big) \\
\ &= \ \mathbb{P} (z>0) + \mathbb{P} \Big(-[\nabla f(x)]_j < z < 0 \Big) \ = \ 0.5 + \bT \Big( [\nabla f(x)]_j \Big)
\end{align*}
$$
\implies \big |\eta(x) - \tfrac{1}{2} \big| \ = \ \bT \Big( [\nabla f(x)]_j \Big) \ = \ \bT \big( t^{k-1} \big) \ = \ \bT \Big( |x-x_j^*|^{k-1} \Big)\label{bz}
$$
Hence, $\eta(x)$ satisfies the TNC with exponent $k$, and an active learning algorithm (next subsection) can be used to obtain a point $\hat{x}_T$ with small point-error and excess risk. Note that function error in convex optimization is bounded above by excess risk of the corresponding active learner using eq (\ref{risk}) because
\begin{align*} \label{ferrorrisk}
f_j(\hat{x}_T) - f_j(x_j^*) \ &= \ \Bigg| \int\limits^{\hat{x}_T \vee x_j^*}_{\hat{x}_T \wedge x_j^*} [\nabla f(x)]_j \mathrm{dx} \Bigg| \\
\ &= \bT \Bigg( \int\limits^{\hat{x}_T \vee x_j^*}_{\hat{x}_T \wedge x^*_j} |2\eta(x)-1|\mathrm{dx} \Bigg)\\
\ &=\ \bT \Big(\mathcal{R} (\hat{x}_T) - \mathcal{R} (x_j^*)\Big)
\end{align*}
Similarly, for stochastic sign oracles (Sec. \ref{sgso}), using $\eta(x) = \mathbb{P} \big ({\hat{s}}_j(x) = + \big) $,
\begin{eqnarray*}
\big| \eta(x) - \tfrac{1}{2} \big| \ = \ \bT \Big([\nabla f(x)]_j\Big) \ = \ \bT \Big (\|x-x^*_j\|^{k-1} \Big)
\end{eqnarray*}
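As a sanity check of the linear-growth argument above, the following tiny Python snippet (ours) takes Gaussian noise as one admissible instance of the condition and verifies that $\eta - \tfrac{1}{2}$ indeed scales linearly in the (small) derivative value:

\begin{verbatim}
from math import erf, sqrt

sigma = 1.0
Phi = lambda u: 0.5 * (1.0 + erf(u / sqrt(2.0)))   # standard normal CDF
for g in [0.001, 0.01, 0.1]:
    eta = Phi(g / sigma)                           # P(g + z > 0) with z ~ N(0, sigma^2)
    print(g, eta - 0.5, (eta - 0.5) / g)           # ratio tends to 1/(sigma*sqrt(2*pi)) ~ 0.399
\end{verbatim}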
\subsection{A Non-adaptive Active Threshold Learning Algorithm}
One can use a grid-based probabilistic variant of binary search called the BZ algorithm \cite{BZ74} to approximately learn the threshold efficiently in the active setting, provided that $\eta(x)$ satisfies the TNC with known $k, \mu, M$ (it is not adaptive to the parameters of the problem -- one needs to know these constants beforehand). The analysis of BZ and the proof of the following lemma are discussed in detail in Theorem 1 of \cite{CN09}, Theorem 2 of \cite{CN07} and the Appendix of \cite{RS13}.
\begin{lemma} \label{BZ}
Given a $1$-dimensional regression function that satisfies the TNC with known parameters $\mu, k$, then after $T$ queries, the BZ algorithm returns a point $\hat{t}$ such that $| \hat{t} - t | = \mathrm{{\tilde{\Theta}}} (T^{-\frac{1}{2k - 2}})$ and the excess risk is $\mathrm{{\tilde{\Theta}}} (T^{-\frac{k}{2k - 2}})$.
\end{lemma}
Due to the described connection between exponents, one can use BZ to approximately optimize a one dimensional uniformly convex function $f_j$ with known uniform convexity parameters $\lambda,k$.
Hence, the BZ algorithm can be used to find a point with low function error by searching for a point with low risk. This, when combined with Lemma \ref{BZ}, yields the following important result.
\begin{lemma} \label{perror}
Given a $1$-dimensional $k$-UC and LkSS function $f_j$, a line search to find $\hat{x}_T$ close to $x^*_j$ up to accuracy $|\hat{x}_T - x^*_j| \leq \eta$ in point-error can be performed in $\mathrm{{\tilde{\Theta}}} (1/\eta^{2k - 2})$ steps using the BZ algorithm. Alternatively, in $T$ steps we can find $\hat{x}_T$ such that $f(\hat{x}_T) - f(x^*_j) = \mathrm{{\tilde{\Theta}}} (T^{-\frac{k}{2k - 2}})$.
\end{lemma}
\section{A 1-D Adaptive Active Threshold Learning Algorithm}
We now describe an algorithm for active learning of one-dimensional thresholds that is adaptive, meaning it can achieve the minimax optimal rate even if the TNC parameters $M,\mu,k$ are unknown. It is quite different from the non-adaptive BZ algorithm in its flavour, though it can be regarded as a robust binary search procedure, and its design and proof are inspired by an optimization procedure from \cite{JN10} that is adaptive to unknown UC parameters $\lambda,k$.
Even though \cite{JN10} considers a specific optimization algorithm (dual averaging), we observe that their algorithm that adapts to unknown UC parameters can use any optimal convex optimization algorithm as a subroutine within each epoch. Similarly, our adaptive active learning algorithm is epoch-based and can use any optimal passive learning subroutine in each epoch. We note that \cite{H11} also developed an adaptive algorithm based on disagreement coefficient and VC-dimension arguments, but it is in a pool-based setting where one has access to a large pool of unlabeled data, and is much more complicated.
\subsection{An Optimal Passive Learning Subroutine}
The excess risk of passive learning procedures for 1-d thresholds can be bounded by $\bO (T^{-1/2})$ (e.g. see Alexander's inequality in \cite{DGL96} to avoid $\sqrt{\log T}$ factors from ERM/VC arguments), and this rate can be achieved while ignoring the TNC parameters.
Consider such a passive learning procedure under a uniform distribution of samples (mimicked by active learning by querying the domain uniformly) in a ball\footnote{Define $B(x,R) := [x-R,x+R]$} $B(x_0,R)$ around an arbitrary point $x_0$ of radius $R$ that is known to contain the true threshold $t$. Then without knowledge of $M,\mu, k$, in $T$ steps we can get a point $\hat{x}_T$ close to the true threshold $t$ such that with probability at least $1-\delta$
$$\mathcal{R} (\hat{x}_T) - \mathcal{R}(t) = \int\limits_{\hat{x}_T \wedge t}^{\hat{x}_T \vee t} |2\eta(x) - 1|dx \leq \frac{C_\delta R}{\sqrt T}$$
for some constant $C_\delta$. Assuming $\hat{x}_T$ lies inside the TNC region,
$$\mu \int\limits_{\hat{x}_T \wedge t}^{\hat{x}_T \vee t} |x - t|^{k-1} dx \leq \int\limits_{\hat{x}_T \wedge t}^{\hat{x}_T \vee t} |2\eta(x) - 1|dx $$
Hence $\frac{\mu |\hat{x}_T-t|^k}{k} \leq \frac{C_\delta R}{\sqrt T}$. Since $k^{1/k} \leq 2$, w.p. at least $1-\delta$ we get a point-error
\begin{equation}\label{pass}
|\hat{x}_T-t| \leq 2\left[ {\frac{C_\delta R}{\mu \sqrt T}} \right]^{1/k}
\end{equation}
We may assume that $\hat{x}_T$ lies within the TNC region: since the interval $|\eta(x) ~-~ \tfrac{1}{2}|~ \leq~ \epsilon_0$ has width at least the constant $\delta_0 = (\epsilon_0/M)^{1/(k-1)}$ around $t$, it will only take a constant number of iterations to find a point within it. A formal way to argue this would be to see that if the overall risk goes to zero like $\frac{C_\delta R}{\sqrt T}$, then the point cannot stay outside this constant sized region of width $\delta_0$ where $|\eta(x) -1/2| \leq \epsilon_0$, since it would accumulate a large constant risk of at least $\int\limits_{t}^{t+\delta_0} \mu |x-t|^{k-1} dx = \frac{\mu \delta_0^k}{k}$. So as long as $T$ is larger than a constant $T_0 := \frac{C_\delta^2 R^2 k^2}{\mu^2 \delta_0^{2k}}$, our bound in eq (\ref{pass}) holds with high probability (we can even assume we waste a constant number of queries to just get into the TNC region before using this algorithm).
\subsection{Adaptive One-Dimensional Active Threshold Learner} \label{subsec1D}
\begin{algorithm}[ht] \label{adapt}
\caption{Adaptive Threshold Learner }
\textbf{Input:} Domain $S$ of diameter $R$, oracle budget $T$, confidence $\delta$\\
\vspace{1mm}
\textbf{Black Box:} Any optimal passive learning procedure $P(x,R,N)$ that outputs an estimated threshold in $B(x,R)$ using $N$ queries\\
\vspace{1mm}
Choose any $x_0 \in S$, $R_1=R, E = \log \sqrt {\frac{2T}{C^2_{\tilde{\delta}} \log T}}, N = \frac{T}{E}$
\vspace{-2mm}
\begin{algorithmic}[1]
\WHILE{$1 \leq e \leq E$}
\STATE $x_e \leftarrow P(x_{e-1},R_e,N)$
\STATE $R_{e+1} \leftarrow \frac{R_e}{2}, e \leftarrow e+1$
\ENDWHILE
\end{algorithmic}
\vspace{1mm}
\textbf{Output:} $x_{E}$ \\
\vspace{1mm}
\end{algorithm}
Algorithm \ref{adapt} is a generalized epoch-based binary search, and we repeatedly perform passive learning in a halving search radius. Let the number of epochs be $E := \log \sqrt {\frac{2T}{C_{\tilde{\delta}}^2 \log T}} \leq \frac{\log T}{2}$ (if$^7$ constant $C_{\tilde{\delta}}^2>2$) and ${\tilde{\delta}} := 2\delta/\log T \leq \delta/E$. Let the time budget per epoch be $N := T/E$ (the same for every epoch) and the search radius in epoch $e \in \{1,...,E\}$ shrink as $R_e := 2^{-e+1} R$.
Let us define the minimizer of the risk within the ball of radius $R_e$ centered around $x_{e-1}$ at epoch $e$ as
$$x^*_e = \arg \min \big\{\mathcal{R} (x) : x \in S \cap B(x_{e-1},R_e) \big\} $$
Note that $x^*_e = t$ iff $t \in B(x_{e-1},R_e)$ and will be one end of the interval otherwise.
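The following Python sketch (ours) illustrates Algorithm \ref{adapt}. The passive black box is implemented as a simple empirical-risk-minimizing threshold fit, the constants in $E$ are simplified, the label oracle is a hypothetical TNC-style noise model with $k=2$, and for concreteness labels are taken to be $+$ with higher probability to the right of the threshold; none of these specific choices are prescribed by the paper.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def oracle(x, t=0.37, mu=1.0, k=2.0):
    # Hypothetical TNC-style noisy label oracle: eta(x) = P(+1 | x) crosses 1/2 at t
    eta = 0.5 + np.clip(mu * np.sign(x - t) * abs(x - t) ** (k - 1), -0.45, 0.45)
    return 1 if rng.random() < eta else -1

def passive_learner(center, radius, N, query):
    # Passive subroutine P(x, R, N): uniform queries in B(center, radius), ERM over thresholds
    xs = np.sort(center + radius * (2 * rng.random(N) - 1))
    ys = np.array([query(v) for v in xs])
    plus_left = np.concatenate(([0], np.cumsum(ys == 1)))                   # '+' left of cut i
    minus_right = np.concatenate((np.cumsum((ys == -1)[::-1])[::-1], [0]))  # '-' right of cut i
    i = int(np.argmin(plus_left + minus_right))                             # empirical risk minimizer
    if i == 0:
        return xs[0]
    if i == N:
        return xs[-1]
    return 0.5 * (xs[i - 1] + xs[i])

def adaptive_threshold_learner(x0, R, T, query):
    # Algorithm 1 sketch: repeat an optimal passive learner within a halving search radius
    E = max(2, int(np.log2(np.sqrt(T / np.log(T)))))   # number of epochs (constants simplified)
    N = T // E                                         # query budget per epoch
    x, radius = x0, R
    for _ in range(E):
        x = passive_learner(x, radius, N, query)
        radius /= 2.0
    return x

print(adaptive_threshold_learner(0.0, 1.0, 4000, oracle))   # close to the threshold 0.37
\end{verbatim}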
\begin{theorem} \label{Tadapt}
In the setting of one-dimensional active learning of thresholds, Algorithm 1 adaptively achieves $\mathcal{R} (x_{E}) - \mathcal{R} (t) = \mathrm{{\tilde{O}}} \left( T ^{-\frac{k}{2k-2}} \right)$ with probability at least $1-\delta$ in $T$ queries when the unknown regression function $\eta(x)$ has unknown TNC parameters $\mu,k$.
\end{theorem}
\begin{proof} Since we use an optimal passive learning subroutine at every epoch, we know that after each epoch $e$ we have with probability at least $1 - {\tilde{\delta}}$ \footnote{By VC theory for threshold classifiers or similar arguments in \cite{DGL96}, $C^2_{\tilde{\delta}} \sim \log(1/{\tilde{\delta}}) \sim\log \log T$ since ${\tilde{\delta}} \sim \delta/ \log T$. We treat it as constant for clarity of exposition, but actually lose $\log \log T$ factors like the high probability arguments in \cite{HK11} and \cite{RS13}}\label{loglog}
\begin{equation}\label{perepoch}
\mathcal{R} (x_{e}) - \mathcal{R} (x^*_e) \leq \frac{C_{\tilde{\delta}} R_e}{\sqrt{T/E}} \leq C_{\tilde{\delta}} R_e \sqrt{\frac{\log T}{2T}}
\end{equation}
Since $\eta(x)$ satisfies the TNC (and is bounded above by $1$), we have for all $x$
$$\mu |x-t|^{k-1} \leq |\eta(x) - 1/2| \leq 1$$
If the set has diameter $R$, one of the endpoints must be at least $R/2$ away from $t$, and hence we get a limitation on the maximum value of $\mu$ as $\mu \leq \frac{1}{(R/2)^{k-1}}$.
Since $k \geq 2$ and $E \geq 2$, and $2^{-E} = C_{\tilde{\delta}} \sqrt{\frac{\log T}{2T}}$, using simple algebra we get
$$ \mu \leq \frac{ 2^{(k-2)E+2}}{(R/2)^{k-1}} = \frac{4\cdot 2^{-E}2^{(k-1)E}2^{(k-1)}}{R^{k-1}} = \frac{4\cdot 2^{-E}2^{(k-1)}}{(2^{-E}R)^{k-1}} = \frac{4 C_{\tilde{\delta}} 2^{k-1}}{R_{E+1}^{k-1}} \sqrt{\frac{\log T}{2T}}$$
We prove that we will be appropriately close to $t$ after some epoch $e^*$ by doing case analysis on $\mu$. When the true unknown $\mu$ is sufficiently small, i.e.
\begin{equation}\label{musmall}
\mu \leq \frac{4C_{\tilde{\delta}} 2^{k-1}}{R_2^{k-1}} \sqrt{\frac{\log T}{2T}}
\end{equation}
then we show that we'll be done after $e^*=1$. Otherwise, we will be done after epoch $2 \leq e^* \leq E$ if the true $\mu$ lies in the range
\begin{equation}\label{mubig}
\frac{4 C_{\tilde{\delta}} 2^{k-1}}{R_{e^*}^{k-1}} \sqrt{\frac{\log T}{2T}} \leq \mu \leq \frac{4C_{\tilde{\delta}} 2^{k-1}}{R_{e^*+1}^{k-1}} \sqrt{\frac{\log T}{2T}}
\end{equation}
To see why we'll be done, equations (\ref{musmall}) and (\ref{mubig}) imply $R_{e^*+1} \leq 2 \left( \frac{8C_{\tilde{\delta}}^2 \log T}{\mu^2 T} \right)^{\frac{1}{2k-2}}$ after epoch $e^*$ and plugging this into equation (\ref{perepoch}) with $R_{e^*} = 2R_{e^*+1}$, we get
\begin{equation}\label{estar}
\mathcal{R} (x_{e^*}) - \mathcal{R} (x^*_{e^*}) \leq C_{\tilde{\delta}} R_{e^*} \left( \frac{\log T}{2T} \right)^{\frac1{2}} = \bO \left( \left( \frac{\log T}{T} \right)^{\frac{k}{2k-2}} \right)
\end{equation}
There are two issues hindering the completion of our proof. The first is that even though $x_1^* = t$ to start off with, it might be the case that $x^*_{e^*}$ is far away from $t$ since we are chopping the radius by half at every epoch. Interestingly, in Lemma \ref{before} we will prove that $x^*_e = t$ holds (with high probability) for every round $e$ up to $e^*$. This would imply from eq (\ref{estar}) that
\begin{equation}\label{eqbefore}
\mathcal{R} (x_{e^*}) - \mathcal{R} (t) = \mathrm{{\tilde{O}}} \left( T^{-\frac{k}{2k-2}} \right)
\end{equation}
Secondly we might be concerned that after the round $e^*$, we may move further away from $t$ in later epochs. However, we will show that since the radii are decreasing geometrically by half at every epoch, we cannot really wander too far away from $x_{e^*}$. This will give us a bound (see lemma \ref{after}) like
\begin{equation}\label{eqafter}
\mathcal{R} (x_{E}) - \mathcal{R} (x_{e^*}) = \mathrm{{\tilde{O}}} \left( T^{-\frac{k}{2k-2}} \right)
\end{equation}
We will essentially prove that the final point $x_{e^*}$ of epoch $e^*$ is sufficiently close to the true optimum $t$, and the final point of the algorithm $x_{E}$ is sufficiently close to $x_{e^*}$. Summing eq (\ref{eqbefore}) and eq (\ref{eqafter}) yields our desired result.
\begin{lemma}\label{before}
For all $e \leq e^*$, conditioned on having $x^*_{e-1}=t$, with probability $1-{\tilde{\delta}}$ we have $x^*_e = t$. In other words, up to epoch $e^*$, the optimal classifier in the domain of each epoch is the true threshold with high probability. \end{lemma}
\begin{proof}
$x_e^* = t$ will hold in epoch $e$ if the first point $x_{e-1}$ of epoch $e$ is such that the ball of radius $R_e$ around it actually contains $t$, or mathematically if $| x_{e-1} - t | \leq R_e$. This is trivially satisfied for $e=1$, and assuming that it is true for epoch $e-1$ we will show by induction that it holds true for epoch $e \leq e^*$ w.p. $1-{\tilde{\delta}}$. Notice that using equation (\ref{pass}), conditioned on the induction going through in previous rounds ($t$ being within the search radius), after the completion of round $e-1$ we have with probability $1 - {\tilde{\delta}}$
$$|x_{e-1} - t | \leq 2 \left[ {\frac{C_{\tilde{\delta}} R_{e-1}}{\mu \sqrt {T/E}}} \right]^{1/k} $$
If this was upper bounded by $R_e$, then the induction would go through. So what we would really like to show is that $2 \left [\frac{C_{\tilde{\delta}} R_{e-1}}{\mu \sqrt{T/E}} \right ]^{\frac{1}{k}} \leq R_e$. Since $R_{e-1} = 2R_{e}$, we effectively want to show $\frac{2^k C_{\tilde{\delta}} 2R_e }{\mu} \sqrt{ \frac{E}{ T}} \leq R_{e}^k $ or equivalently that for all $e \leq e^*$ we would like to have $\frac{4C_{\tilde{\delta}} 2^{k-1}}{R_{e}^{k-1}} \sqrt{ \frac{ E}{ T}} \leq \mu$. Since $E \leq \frac{\log T}{2}$, we would be achieving something stronger if we showed
$$ \frac{4C_{\tilde{\delta}} 2^{k-1}}{R_{e}^{k-1}} \sqrt{ \frac{ \log T}{2 T}} \leq \mu$$
which is known to be true for every epoch up to $e^*$ by equation (\ref{mubig}).
\end{proof}
\begin{lemma} \label{after}
For all $e^* < e \leq E$, $\mathcal{R} (x_{e}) - \mathcal{R} (x_{e^*}) \leq \frac{C_{\tilde{\delta}} R_{e^*}}{\sqrt {T/E}} = \mathrm{{\tilde{O}}} \left( T^{-\frac{k}{2k-2}} \right) $ w.p. $1-{\tilde{\delta}}$, ie after epoch $e^*$, we cannot deviate much from where we ended epoch $e^*$. \end{lemma}
\begin{proof}
For $e > e^*$, we have with probability at least $1-{\tilde{\delta}}$
$$\mathcal{R} (x_{e}) - \mathcal{R} (x_{e-1}) \leq \mathcal{R} (x_{e}) - \mathcal{R} (x^*_e) \leq \frac{C_{\tilde{\delta}} R_e}{\sqrt {T/E}}$$
and hence even for the final epoch $E$, we have with probability $(1 - {\tilde{\delta}})^{E-e^*}$
$$\mathcal{R} (x_{E}) - \mathcal{R} (x_{e^*}) = \sum_{e=e^*+1}^E [\mathcal{R} (x_{e}) - \mathcal{R} (x_{e-1})] \leq \sum_{e=e^*+1}^E \frac{C_{\tilde{\delta}} R_e}{\sqrt {T/E}}$$
Since the radii are halving in size, this is upper bounded (like equation (\ref{estar})) by
$$ \frac{C_{\tilde{\delta}} R_{e^*}}{\sqrt {T/E}} [1/2 + 1/4 + 1/8 +...] \leq \frac{C_{\tilde{\delta}} R_{e^*}}{\sqrt {T/E}} = \mathrm{{\tilde{O}}} \left( T^{-\frac{k}{2k-2}} \right)$$
\end{proof}
These lemmas justify the use of equations (\ref{eqbefore}) and (\ref{eqafter}), whose sum yields our desired result. Notice that the overall probability of success is at least $(1 - {\tilde{\delta}})^E \geq 1 - \delta$, hence concluding the proof of the theorem.
\end{proof}
\section{Randomized Stochastic-Sign Coordinate Descent}
We now describe an algorithm that can do stochastic optimization of $k$-UC and LkSS functions in $d>1$ dimensions when given access to a stochastic sign oracle and a black-box 1-D active learning algorithm, such as our adaptive scheme from the previous section, as a line-search subroutine. The procedure is well-known in the literature, but the idea that one only needs noisy gradient signs to perform minimization optimally, and that one can use active learning as a line-search procedure, is novel to the best of our knowledge.
The idea is to simply perform random coordinate-wise descent with approximate line search, where the subroutine for line search is an optimal active threshold learning algorithm that is used to approach the minimum of the function along the chosen direction. Let the gradient at epoch $e$ be called $\nabla_{e-1} = \nabla f(x_{e-1})$, the unit vector direction of descent $d_e$ be a unit coordinate vector chosen randomly from $\{1...d\}$, and our step size from $x_{e-1}$ be $\alpha_e$ (determined by active learning) so that our next point is $x_e := x_{e-1} + \alpha_e d_e$.
Assume, for analysis, that the optimum of $f_e(\alpha) := f(x_{e-1} + \alpha d_e)$ is
$$\alpha^*_e := \arg \min_\alpha f(x_{e-1} + \alpha d_e) \mbox{ and } x^*_e := x_{e-1} + \alpha_e^* d_e$$
where (due to optimality) the derivative is
\begin{equation} \label{0deriv}
\nabla f_e(\alpha_e^*) = 0 = \nabla f(x^*_e)^\top d_e
\end{equation}
The line search to find $\alpha_e$ and $x_e$ that approximate the minimum $x^*_e$ can be accomplished by any optimal active learning algorithm, once we fix the number of time steps per line search.
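Before turning to the analysis, here is a Python sketch (ours) of the overall procedure described in Algorithm \ref{rscdd} below. The line search is a crude majority-vote bisection standing in for an optimal active learner such as Algorithm \ref{adapt}, and the sign oracle is a toy model for a quadratic objective, so the snippet illustrates the control flow rather than the minimax guarantee.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dim = 5
x_star = rng.standard_normal(dim)        # minimizer of the toy objective below (illustrative only)

def sign_oracle(x, j, noise=0.3):
    # Noisy sign of [grad f(x)]_j for the toy objective f(x) = 0.5 * ||x - x_star||^2
    return np.sign((x[j] - x_star[j]) + noise * rng.standard_normal())

def line_search(x, j, R, n_queries):
    # Black-box 1-D search along coordinate j: a majority-vote bisection used as a
    # stand-in for an optimal active threshold learner (e.g., Algorithm 1)
    lo, hi = x[j] - R, x[j] + R
    steps = 30
    reps = max(1, n_queries // steps)    # oracle calls per bisection step
    z = x.copy()
    for _ in range(steps):
        z[j] = 0.5 * (lo + hi)
        votes = sum(sign_oracle(z, j) for _ in range(reps))
        if votes > 0:                    # derivative likely positive: move left
            hi = z[j]
        else:                            # derivative likely negative (or tied): move right
            lo = z[j]
    return 0.5 * (lo + hi)

def rscd(x0, R, T):
    # Randomized stochastic-sign coordinate descent (Algorithm 2 sketch)
    E = dim * int(np.log(T)) ** 2        # number of epochs, as in the analysis below
    x = x0.copy()
    for _ in range(E):
        j = int(rng.integers(dim))       # uniformly random coordinate direction
        x[j] = line_search(x, j, R, T // E)
    return x

x_hat = rscd(np.zeros(dim), R=5.0, T=200_000)
print(np.linalg.norm(x_hat - x_star))    # much smaller than the initial distance ||x0 - x_star||
\end{verbatim}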
\subsection{Analysis of Algorithm \ref{rscdd}}
\vspace{-4mm}
\begin{algorithm}[h!] \label{rscdd}
\caption{Randomized Stochastic-Sign Coordinate Descent}
\textbf{Input:} set $S$ of diameter $R$, query budget $T$ \\
\vspace{1.2mm}
\textbf{Oracle:} stochastic sign oracle $O_f (x,j)$ returning noisy $\mbox{sign}\big([\nabla f(x)]_j \big)$\\
\vspace{1.2mm}
\textbf{BlackBox:} algorithm $LS (x,d,n)$ : line search from $x$, direction $d$, for $n$ steps\\
\vspace{1.2mm}
Choose any $x_0 \in S$, $E = d(\log T)^2$
\begin{algorithmic}[1]
\WHILE{$1 \leq e \leq E$}
\STATE Choose a unit coordinate vector $d_e$ from $\{1...d\}$ uniformly at random
\STATE $x_e \leftarrow$ $LS(x_{e-1},d_e,T/E)$ using $O_f$
\STATE $e \leftarrow e+1$
\ENDWHILE
\end{algorithmic}
\textbf{Output:} $x_{E}$\\
\vspace{1.5mm}
\end{algorithm}
\vspace{-4mm}
Let the number of epochs be $E = d (\log T)^2$, and the number of time steps per epoch be $T/E$. We can do a line search from $x_{e-1}$ to get $x_e$ that approximates $x^*_e$ well in function error, using the $T/E = \mathrm{{\tilde{O}}}(T)$ steps of the active learning subroutine; let the resulting function-error be denoted by $\epsilon' = \mathrm{{\tilde{O}}} \Big(T^{-\frac{k}{2k-2}} \Big)$.
$$f(x_e) \leq f(x_e^*) + \epsilon'$$
Also, LkSS and UC allow us to infer (for $k^* = \frac{k}{k-1}$, i.e. $1/k + 1/k^* = 1$)
$$ f(x_{e-1}) - f(x^*_e) \ \geq \ \frac{\lambda}{2} \|x_{e-1} - x^*_e\|^k \ \geq \ \frac{\lambda}{2\Lambda^{k^*}} \big| \nabla_{e-1}^\top d_e \big|^{k^*}$$
Eliminating $f(x^*_e)$ from the above equations, subtracting $f(x^*)$ from both sides, denoting $\Delta_e := f(x_e) - f(x^*)$ and taking expectations
$$
\mathbb{E}[\Delta_{e}] \leq \mathbb{E}[\Delta_{e-1}] - \frac{\lambda}{2\Lambda^{k^*}} \mathbb{E} \Big[ \big| \nabla_{e-1}^\top d_e \big|^{k^*} \Big] + \epsilon'
$$
Since\footnote{$k \geq 2 \implies 1 \leq k^* \leq 2 \implies \| . \|_{k^*} \geq \|.\|_2$} $\mathbb{E} \Big[|\nabla_{e-1}^\top d_e|^{k^*} \big| d_1,...,d_{e-1} \Big] = \frac1{d} \|\nabla_{e-1}\|_{k^*}^{k^*} \geq \frac1{d} \|\nabla_{e-1}\|^{k^*}$ we get
$$
\mathbb{E}[\Delta_{e}] \leq \mathbb{E}[\Delta_{e-1}] - \frac{\lambda}{2d\Lambda^{k^*}} \mathbb{E} \Big[\|\nabla_{e-1}\|^{k^*} \Big] + \epsilon'
$$
By convexity, Cauchy-Schwarz and UC\footnote{$\Delta_{e-1}^k \leq [\nabla_{e-1}^\top(x_{e-1} - x^*)]^k \leq \|\nabla_{e-1}\|^k\|x_{e-1} - x^*\|^k \leq \|\nabla_{e-1}\|^k \frac{2}{\lambda}\Delta_{e-1}$}, $\|\nabla_{e-1}\|^{k^*} \geq \left( \frac{\lambda}{2} \right)^{\frac{1}{k-1}}\Delta_{e-1}$, we get
$$
\mathbb{E}[\Delta_{e}] \leq \mathbb{E}[\Delta_{e-1}] \left( 1 - \frac1{d} \left( \frac{\lambda}{2\Lambda} \right)^{k^*} \right ) + \epsilon'
$$
Defining\footnote{Since $1 < k^* \leq 2$ and $\Lambda > \lambda/2$, we have $C<1$} $C:= \frac1{d} \left( \frac{\lambda}{2\Lambda} \right)^{k^*} < 1$, we get the recurrence
$$\mathbb{E}[\Delta_{e}] - \frac{\epsilon'}{C} \leq (1-C)\left( \mathbb{E}[\Delta_{e-1}] - \frac{\epsilon'}{C} \right)$$
Since $E = d (\log T)^2$ and $\Delta_0 \leq L\|x_0 - x^*\| \leq LR$, after the last epoch, we have
\begin{align*}
\mathbb{E}[\Delta_E] - \frac{\epsilon'}{C} \ &\leq \ (1-C)^E \left (\Delta_0 - \frac{\epsilon'}{C} \right ) \ \leq \ \exp \big\{-Cd (\log T)^2 \big\} \Delta_0 \ \\
&\leq \ LR T^{-Cd \log T}
\end{align*}
As long as $T > \exp \left\{ (2\Lambda/\lambda)^{k^*} \right\}$, a constant, we have $Cd \log T \geq 1$ and
$$\mathbb{E}[\Delta_E] = \bO (\epsilon') + \mathrm{o}(T^{-1}) = \mathrm{{\tilde{O}}} \Big(T^{-\frac{k}{2k-2}} \Big)$$
which is the desired result. Notice that in this section we didn't need to know $\lambda, \Lambda, k$, because we simply run randomized coordinate descent for $E = d (\log T)^2$ epochs with $T/E$ steps per subroutine, and the active learning subroutine was also adaptive to the appropriately calculated TNC parameters. In summary,
\begin{theorem} \label{Tsscd}
Given access to only noisy gradient sign information from a stochastic sign oracle, Randomized Stochastic-Sign Coordinate Descent can minimize UC and LkSS functions at the minimax optimal convergence rate of $\mathrm{{\tilde{O}}}(T^{-\frac{k}{2k-2}})$ for expected function error, while being adaptive to all unknown convexity and smoothness parameters. As a special case for $k=2$, strongly convex and strongly smooth functions can be minimized with expected function error $\mathrm{{\tilde{O}}}(1/T)$ after $T$ queries.
\end{theorem}
\subsection{Gradient Sign-Preserving Computations}
A practical concern for implementing optimization algorithms is machine precision, the number of decimals to which real numbers are stored. Finite space may limit the accuracy with which every gradient can be stored, and one may ask how much these inaccuracies may affect the final convergence rate - how is the query complexity of optimization affected if the true gradients were rounded to one or two decimal points? If the gradients were randomly rounded (to remain unbiased), then one might guess that we could easily achieve stochastic first-order optimization rates.
However, our results give a surprising answer to that question, as a similar argument reveals that for UC and LkSS functions (with strongly convex and strongly smooth being a special case), our algorithm achieves exponential rates. Since rounding errors do not flip any sign in the gradient, even if the gradient was rounded or decimal points were dropped as much as possible and we were to return only a single bit per coordinate having the true signs, then one can still achieve the exponentially fast convergence rate observed in non-stochastic settings - our algorithm needs only a logarithmic number of epochs, and in each epoch active learning will approach the directional minimum exponentially fast with noiseless gradient signs using a perfect binary search. In fact, our algorithm is the natural generalization for a higher-dimensional binary search, both in the deterministic and stochastic settings.
We can summarize this in the following theorem:
\begin{theorem}
Given access to gradient signs in the presence of sign-preserving noise (such as deterministic or random rounding of gradients, dropping decimal places for lower precision, etc), Randomized Stochastic-Sign Coordinate Descent can minimize UC and LkSS functions exponentially fast, with a function error convergence rate of $\mathrm{{\tilde{O}}}(\exp\{-T\})$.
\end{theorem}
\section{Discussion}
While the assumption of smoothness is natural for strongly convex functions, our assumption of LkSS might appear strong in general. It is possible to relax this assumption and require the LkSS exponent to differ from the UC exponent, or to only assume strong smoothness - this still yields consistency for our algorithm, but the rate achieved is worse. \cite{JN10} and \cite{RS13} both have epoch-based algorithms that achieve the minimax rates under just Lipschitz assumptions with access to a full-gradient stochastic first order oracle, but it is hard to prove the same rates for a coordinate descent procedure without smoothness assumptions.
Given a target function accuracy $\epsilon$ instead of query budget $T$, a similar randomized coordinate descent procedure to ours achieves the minimax rate with a similar proof, but it is non-adaptive since we presently don't have an adaptive active learning procedure when given $\epsilon$. As of now, we know of no adaptive UC optimization procedure when given $\epsilon$.
Recently, \cite{BM11} analysed stochastic gradient descent with averaging, and showed that for smooth functions, it is possible for an algorithm to automatically adapt between convexity and strong convexity; in comparison we show how to adapt to unknown uniform convexity (strong convexity being the special case $k=2$). It may be possible to combine the ideas from this paper and \cite{BM11} to get a universally adaptive algorithm from convex to all degrees of uniform convexity. It would also be interesting to see if these ideas extend to connections between convex optimization and learning linear threshold functions.
In this paper, we exploit recently discovered theoretical connections
by providing explicit algorithms that take advantage of them.
We show how these could lead to cross-fertilization of fields in both directions and hope that this is just the beginning of a flourishing interaction where these insights may lead to many new algorithms if we leverage the theoretical relations in more innovative ways.
\bibliographystyle{agsm}
| {
"timestamp": "2015-05-19T02:01:14",
"yymm": "1505",
"arxiv_id": "1505.04214",
"language": "en",
"url": "https://arxiv.org/abs/1505.04214",
"abstract": "Interesting theoretical associations have been established by recent papers between the fields of active learning and stochastic convex optimization due to the common role of feedback in sequential querying mechanisms. In this paper, we continue this thread in two parts by exploiting these relations for the first time to yield novel algorithms in both fields, further motivating the study of their intersection. First, inspired by a recent optimization algorithm that was adaptive to unknown uniform convexity parameters, we present a new active learning algorithm for one-dimensional thresholds that can yield minimax rates by adapting to unknown noise parameters. Next, we show that one can perform $d$-dimensional stochastic minimization of smooth uniformly convex functions when only granted oracle access to noisy gradient signs along any coordinate instead of real-valued gradients, by using a simple randomized coordinate descent procedure where each line search can be solved by $1$-dimensional active learning, provably achieving the same error convergence rate as having the entire real-valued gradient. Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning, achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.",
"subjects": "Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Optimization and Control (math.OC); Machine Learning (stat.ML)",
"title": "Algorithmic Connections Between Active Learning and Stochastic Convex Optimization",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969679646669,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7099044234763908
} |
https://arxiv.org/abs/2002.07357 | Constructions of regular sparse anti-magic squares | Graph labeling is a well-known and intensively investigated problem in graph theory. Sparse anti-magic squares are useful in constructing vertex-magic labeling for graphs. For positive integers $n,d$ and $d<n$, an $n\times n$ array $A$ based on $\{0,1,\cdots,nd\}$ is called \emph{a sparse anti-magic square of order $n$ with density $d$}, denoted by SAMS$(n,d)$, if each element of $\{1,2,\cdots,nd\}$ occurs exactly one entry of $A$, and its row-sums, column-sums and two main diagonal sums constitute a set of $2n+2$ consecutive integers. An SAMS$(n,d)$ is called \emph{regular} if there are exactly $d$ positive entries in each row, each column and each main diagonal. In this paper, we investigate the existence of regular sparse anti-magic squares of order $n\equiv1,5\pmod 6$, and it is proved that for any $n\equiv1,5\pmod 6$, there exists a regular SAMS$(n,d)$ if and only if $2\leq d\leq n-1$. | \section{Introduction}
Magic squares and their various generalizations have been objects of
interest for many centuries and in many cultures. A lot of work has
been done on the constructions of magic squares, for more details,
the interested reader may refer to \cite{Abe,Ahmed,Andrews,Hand} and
the references therein.
\emph{An anti-magic square of order $n$} is an $n\times n$ array
with entries consisting of $n^2$ consecutive nonnegative integers
such that the row-sums, column-sums and two main diagonal sums
constitute a set of consecutive integers. Usually, the main diagonal
from upper left to lower right is called \emph{the left diagonal},
another is called \emph{the right diagonal}. The existence of an
anti-magic square has been solved completely by Cormie et al (\cite{Js,J1}).
It was shown that there exists an anti-magic square
of order $n$ if and only if $n\geq 4$.
Sparse magic squares have played an important role in the construction of sparse anti-magic squares. For
positive integers $n$ and $d$ with $d<n$, an $n\times n$ array $A$ based on
$\{0,1,\cdots,nd\}$ is called a \emph{sparse magic square of order $n$
with density $d$}, denoted by SMS$(n,d)$, if each element of $\{1,2,\cdots,nd\}$ occurs in exactly one entry of $A$,
and its row-sums, column-sums and two main diagonal sums are all equal. An SMS$(n,d)$ is
called \emph{regular} if there exist exactly $d$ non-zero elements in each
row, each column and each main diagonal. The existence of a regular
SMS$(n,d)$ has been solved completely by Li et al (\cite{Li}). It was
shown that for any positive integers $n$ and $d$ with $d<n$, there exists a regular SMS$(n,d)$ if and only if $d\geq
3$ when $n$ is odd, and $d$ is even with $d\geq 4$ when $n$ is even.
Sparse anti-magic squares are generalizations of anti-magic
squares. For positive integers $n$ and $d$ with $d<n$, let $A$ be an $n\times
n$ array with entries consisting of $0,1,\cdots,nd$ and let $S_A$ be
the set of row-sums, column-sums and two main diagonal sums of $A$.
We call $S_A$ the \emph{sum set} of $A$.
Then $A$ is called a \emph{sparse anti-magic square of order $n$
with density $d$}, denoted by SAMS$(n,d)$, if each element of $\{1,2,\cdots,nd\}$ occurs in exactly
one entry of $A$ and $S_A$ consists of
$2n+2$ consecutive integers. In \cite {G4}, an SAMS$(n,d)$ is also
called a \emph{sparse totally anti-magic square}. An SAMS$(n,d)$ is
called \emph{regular} if all of its rows, columns and two main
diagonals contain $d$ positive entries. As an example, a regular
SAMS$(5,2)$ is listed below.
\begin{center}
{\renewcommand\arraystretch{0.8}
\setlength{\arraycolsep}{3.5pt}
\footnotesize
$A=\begin{array}{|c|c|c|c|c|} \hline
1& 6& && \\\hline
&& 2& 8 &\\\hline
5&&&& 3\\\hline
& 9& 7&& \\\hline
&&& 4& 10\\\hline
\end{array}$\ .}
\end{center}
\noindent Here, empty entries of $A$ indicate 0.
It is readily checked that the element set of $A$ consists of
$0,1,2,\cdots,10$, $S_A=\{6,7,\cdots,16,17\}$ and all of its
rows, columns and two main diagonals contain $2$ positive entries.
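The arithmetic can be verified mechanically; a short Python check (ours) of the displayed array is:

\begin{verbatim}
# Quick check of the displayed SAMS(5,2): its sums form 2n + 2 consecutive integers
A = [[1, 6, 0, 0, 0],
     [0, 0, 2, 8, 0],
     [5, 0, 0, 0, 3],
     [0, 9, 7, 0, 0],
     [0, 0, 0, 4, 10]]
n = 5
sums = [sum(r) for r in A] + [sum(c) for c in zip(*A)]
sums += [sum(A[i][i] for i in range(n)), sum(A[i][n - 1 - i] for i in range(n))]
print(sorted(sums))          # [6, 7, ..., 17]: twelve consecutive integers
\end{verbatim}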
Sparse anti-magic squares and sparse magic squares are useful in graph theory. In particular,
they can be used to construct the vertex-magic total labeling for
bipartite graphs, trees and cubic graphs, see \cite {G4,GrayJ1,GrayJ2,GrayJ3,Zhux,Zhux1} and the references therein.
Recently, Chen et al.\ (\cite{chen1,chen2,chen3}) proved the following results.
\begin{lem}(\cite{chen1})\label{01}
There exists a regular SAMS$(n,n-1)$ if and only if $n\geq 4$.
\end{lem}
\begin{lem}(\cite{chen2})\label{02}
There exists a regular SAMS$(n,n-2)$ if and only if $n\geq 4$.
\end{lem}
\begin{lem}(\cite{chen3})\label{03}
There exists a regular SAMS$(n,d)$ for $d\in\{3,5\}$ if and only if $n\geq d$.
\end{lem}
In this paper, we investigate the existence of regular sparse anti-magic
squares of order $n\equiv1,5 \pmod 6$ and obtain the
following theorem.
\begin{thm}\label{mainth}
For any $n\geq 5$ and $n\equiv1,5 \pmod 6$, there exists a regular SAMS$(n,d)$ if and only if $2\leq d\leq n-1$.
\end{thm}
For convenience, the following notations are used throughout this paper.
Let $\mathbb{Z}$ be the set of integers, $I_n=\{1,2,3,\cdots, n\}$ and
we always use $I_m$ and $I_n$ to label the rows and columns of an $m\times n$ array respectively.
Let $a,b\in\mathbb{Z}$ and $[a,b]$ be the set of integers $v$
such that $a\leq v\leq b$. Suppose $A$ is an array based on $\mathbb{Z}$, let
$G(A)$, $R(A)$ and $C(A)$ be the set of non-zero elements,
the set of row-sums and the set of column-sums of $A$ respectively.
Let $l(A)$ and $r(A)$ be the sum of the elements in the left diagonal and
the right diagonal of $A$ respectively. Then $S_A=R(A)\cup C(A)\cup \{l(A),r(A)\}$.
Let $a$ and $n$ be integers,
\begin{center}
$ \langle a\rangle_n= \left\{
\begin{array}{ll}
r, & if \ \ \ n\nmid a,\ a = mn+r\ \ and\ \ 0<r<n, \\
n, & if \ \ \ n|a.\\
\end{array}
\right.$
\end{center}
Clearly, $1\leq\langle a\rangle_n\leq n$.
The remainder of this paper is organized as follows. In Section 2, we
show that there exists a regular SAMS$(n,2)$ for $n\equiv1,5 \pmod 6$ and $n\geq 5$ via direct construction.
In Section 3, we introduce a symmetric forward diagonals array which is an important
building block in the construction of regular sparse anti-magic squares.
In Section 4, we prove that there exists a regular SAMS$(n,d)$ for $n\equiv1,5 \pmod 6$ and $d\in [6,n-3]$.
Finally, the proof of Theorem \ref{mainth} is presented in Section 5.
\section{The existence of a regular SAMS$(n,2)$ for $n\equiv1,5 \pmod 6$ and $n\geq5$ }
In this section, we shall prove that there exists a regular SAMS$(n,2)$ for $n\equiv1,5 \pmod 6$ and $n\geq5$.
The idea of our construction is divided into three steps. Firstly, we give a special array $A$ and a Latin square $B$.
Secondly, we put the elements of $A$ into the Latin square $B$ to obtain $W$
such that $W$ is a regular SAMS for $n\equiv 1 \pmod 6$ and a near regular SAMS for $n\equiv5 \pmod 6$.
Thirdly, for $n\equiv 5 \pmod 6$, we adjust some columns of $W$ to obtain a regular SAMS.
We need the definition of a Latin square in the proof that follows.
A \emph{Latin square} of order $n$ is an $n\times n$ array in which each cell contains a
single symbol from an $n$-set $S$, such that each symbol occurs exactly once in each
row and exactly once in each column. A \emph{transversal} in a Latin square of order $n$ is
a set of $n$ cells, one from each row and
column, containing each of the $n$ symbols exactly once.
A Latin square of order $n$ is a \emph{diagonal Latin square} if two main diagonals are transversals.
\begin{thm}\label{SAMS(n,2)}
There exists a regular SAMS$(n,2)$ for $n\equiv1,5 \pmod 6$ and $n\geq5$.
\end{thm}
\begin{proof}
Each $n\equiv1,5 \pmod 6$ with $n\geq5$ can be written as $n=2m+1$, where $m>1$.
Construct a special $2\times n$ array $A=(a_{i,j})$
over $[1,4m+2]$, where $i=1,2$, $j\in I_n$ and
$$a_{1,j}=\left\{
\begin{array}{lll}
n+j, & j\in[1,m-1],\\
n+j+1, & j\in[m,2m],\\
n, & j=2m+1,\\
\end{array}
\right. \ \ \ \ \
a_{2,j}=\left\{
\begin{array}{lll}
j, & j\in[1,m],\\
3m+1,& j= m+1, \\
j-1, & j\in [m+2,2m+1].\\
\end{array}
\right.$$
Let $R_k$, $k=1,2$, be the set of the elements in the $k$-th row of $A$.
It is easy to see that
\vskip 6pt
\mbox{}\hspace{1.0in}
$R_1=[n+1,n+m-1]\cup[n+m+1,n+2m+1]\cup\{n\}$
\mbox{}\hspace{1.22in} $=[2m+2,3m]\cup[3m+2,4m+2]\cup\{2m+1\}$
\mbox{}\hspace{1.22in} $=[2m+1,4m+2]\setminus\{3m+1\}$.
\begin{center}
$R_2=[1,m]\cup\{3m+1\}\cup[m+1,2m]=[1,2m]\cup\{3m+1\}$.
\end{center}
\noindent
Then we have $R_1\cup R_2=[1,4m+2]$.
Let $S_1$ and $S_2$ be the set of column-sums and forward diagonal-sums respectively.
By a simple calculation, we have
$S_1=\bigcup\limits_{j=1}^{n}\{a_{1,j}+a_{2,j}\}$
\mbox{}\hspace{0.18in}$=\bigcup\limits_{j=1}^{m-1}\{a_{1,j}+a_{2,j}\}\bigcup\{a_{1,m}+a_{2,m},
a_{1,m+1}+a_{2,m+1}\}\bigcup\limits_{j=m+2}^{2m}\{a_{1,j}+a_{2,j}\}\bigcup\{a_{1,2m+1}+a_{2,2m+1}\}$
\mbox{}\hspace{0.16in} $=\bigcup\limits_{j=1}^{m-1}\{n+2j\}\bigcup\{2n,3n+1\}\bigcup\limits_{j=m+2}^{2m}\{n+2j\}\bigcup\{n+2m\}$.
\mbox{}\hspace{0.16in} $=[\bigcup\limits_{j=1}^{2m}\{n+2j\}\setminus\{2n+1\}]\bigcup\{2n,3n+1\}$.
$S_2=\bigcup\limits_{j=1}^{n-1}\{a_{1,j}+a_{2,j+1}\}\bigcup\{a_{1,2m+1}+a_{2,1}\}$
\mbox{}\hspace{0.18in}$=\bigcup\limits_{j=1}^{m-1}\{a_{1,j}+a_{2,j+1}\}
\bigcup\{a_{1,m}+a_{2,m+1}\}\bigcup\limits_{j=m+1}^{2m}\{a_{1,j}+a_{2,j+1}\}\bigcup\{a_{1,2m+1}+a_{2,1}\}$
\mbox{}\hspace{0.16in} $=\bigcup\limits_{j=1}^{m-1}\{n+2j+1\}\bigcup\{(n+m+1)+(3m+1)\}
\bigcup\limits_{j=m+1}^{2m}\{n+2j+1\}\bigcup\{n+1\}$
\mbox{}\hspace{0.16in} $=[\bigcup\limits_{j=1}^{2m}\{n+2j+1\}\setminus\{2n\}]\bigcup\{3n,n+1\}$.
\vskip 6pt
\noindent It follows that $S_1\cup S_2=[n+1,3n+1]\backslash\{2n+1\}$.
Let $B=(b_{i,j})$, where $b_{i,j}=\langle2i+j-1\rangle_n,$ $i,j\in I_n$, note that $n\equiv1,5 \pmod 6$ and $n\geq5$,
then it is easy to check that $B$ is a diagonal Latin square of order $n$ over $I_n$ with
the property
$ b_{n+1-i,n+1-j}=\langle2(n+1-i)+(n+1-j)-1\rangle_n=\langle3n+1-(2i+j-1)\rangle_n=(n+1)-b_{i,j},$ i.e.
$$ b_{i,j}+b_{n+1-i,n+1-j}= n+1.$$
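This Latin square and its symmetry are easily checked by computer; the short sketch below (ours, reusing \texttt{bracket} from the sketch in Section 1) verifies the property above and the transversal property of the two main diagonals for a given $n$ coprime to $6$.
\begin{verbatim}
def latin_square_B(n):
    """B = (b_{i,j}) with b_{i,j} = <2i+j-1>_n, rows and columns indexed from 1."""
    return [[bracket(2 * i + j - 1, n) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

n = 7
B = latin_square_B(n)
assert all(B[i][j] + B[n - 1 - i][n - 1 - j] == n + 1
           for i in range(n) for j in range(n))
assert len({B[i][i] for i in range(n)}) == n          # left diagonal is a transversal
assert len({B[i][n - 1 - i] for i in range(n)}) == n  # right diagonal is a transversal
\end{verbatim}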
For each $j\in I_n$, define
\begin{center}
$f(x,j)=i$ \emph{if} $b_{i,j}=x$, that is, $f(b_{i,j},j)=i$
\end{center}
and let
\begin{center}
$g(s)=\langle m+2s-1\rangle_n$,\ $s\in I_n$,
\end{center}
then, since $B$ is a Latin square over $I_n$, for each $j\in I_n$ the map $x\mapsto f(x,j)$ is a bijection from $I_n$ to $I_n$,
and for any given $j\in I_n$ the map $i\mapsto b_{i,j}$ is its inverse;
the map $g$ is also a bijection from $I_n$ to $I_n$.
We put $a_{1,s}$ and $a_{2,s}$, $s\in I_n$, into the entries $(f(m,g(s)),g(s))$ and
$(f(m+2,g(s)),g(s))$ of $B$ respectively, the other entries of $B$ are filled by $0$, denoted by $W=(w_{i,j})$,
where $i,j\in I_n$, that is, $w_{f(m,g(s)),g(s)}=a_{1,s}$ and $w_{f(m+2,g(s)),g(s)}=a_{2,s}$.
It is clear that the entries of the $s$-th column of $A$, $s\in I_n$, are all put into the
$g(s)$-th column of $W$, so the non-zero elements in each column of $W$
are exactly the entries of one column of $A$; hence $C(W)=S_1$. We next show that $a_{1,s}$ and $a_{2,\langle s+1\rangle_n}$,
$s\in I_n$, lie in the same row of $W$; it suffices to prove that for any $s\in I_n$,
$f(m,g(s))=f(m+2,g(s+1))$. Suppose that $f(m,g(s))=\xi$. Then
$b_{\xi, g(s)}=\langle2\xi+g(s)-1\rangle_n=m$ by the definition of the function $f$, and also
\begin{center}
$b_{\xi, g(s+1)}=\langle2\xi+g(s+1)-1\rangle_n=\langle2\xi+g(s)+2-1\rangle_n=\langle2\xi+g(s)-1\rangle_n+2=m+2$.
\end{center}
It follows that $f(m+2,g(s+1))=\xi$. Hence the non-zero elements in each row of $W$
are exactly the entries of one forward diagonal of $A$, so $R(W)=S_2$.
It is clear that
\begin{center}
$R(W)\cup C(W)=S_1\cup S_2=[n+1,3n+1]\backslash\{2n+1\}$.
\end{center}
Next, we shall consider the elements in the two main diagonals of $W$.
There are exactly two non-zero elements in the right diagonal of $W$ according to
the definition of the diagonal Latin square $B$.
It is easy to calculate that
$$a_{1,m+2}=w_{f(m,g(m+2)),g(m+2)}=w_{m,m+2}$$
since $g(m+2)=m+2$,\ \
$b_{m,m+2}=\langle 2m+(m+2)-1\rangle_n=m$, and hence
$f(m,m+2)=m$,
and
$$a_{2,m+1}=w_{f(m+2,g(m+1)),g(m+1)}=w_{m+2,m}$$
since $g(m+1)=m$,\ \
$b_{m+2,m}=\langle 2(m+2)+m-1\rangle_n=m+2$, and hence
$f(m+2,m)=m+2$.
Hence the sum of the elements in the right diagonal of $W$ is
$$w_{m,m+2}+w_{m+2,m}=a_{1,m+2}+a_{2,m+1}=(n+m+2+1)+(3m+1)=3n+2.$$
We divide the computation of the left diagonal-sum into two cases.
\textbf{Case 1:} For $n\equiv 1 \pmod 6$ and $n\geq 7$, it can be written as $n=6k+1$, where $k\geq 1$.
Note that $n=2m+1$, then $m=3k$.
There are exactly two non-zero elements in the left diagonal of $W$ according to
the definition of the diagonal Latin square $B$.
By simple calculation we have
$$a_{1,k+1}=w_{f(m,g(k+1)),g(k+1)}=w_{n-k,n-k}$$
since $g(k+1)=n-k$,\ \
$b_{n-k,n-k}=\langle2(n-k)+(n-k)-1\rangle_n=3k=m$ and
$f(m,n-k)=n-k$, and
$$a_{2,n+1-k}=w_{f(m+2,g(n+1-k)),g(n+1-k)}=w_{k+1,k+1}$$
since $g(n+1-k)=k+1$,\ \
$b_{k+1,k+1}=\langle2(k+1)+(k+1)-1\rangle_n=m+2$ and
$f(m+2,k+1)=k+1$.
Then the sum of the elements in the left diagonal of $W$ is
$$w_{k+1,k+1}+w_{n-k,n-k}=a_{2,n+1-k}+a_{1,k+1}=(n+1-k-1)+(n+k+1)=2n+1.$$
So, $W$ is a regular SAMS$(n,2)$.
\textbf{Case 2:}\ \ For $n\equiv 5 \pmod 6$ and $n\geq 5$,
it can be written as $n=6k-1$, where $k\geq 1$, then $m=3k-1$.
When $k=1$, a regular SAMS$(5, 2)$ is given as an example in Section 1.
When $k>1$, it is easy to check that there are also exactly two non-zero elements
in the left diagonal of $W$ according to the definition of the diagonal Latin square $B$, but their
sum is not equivalent to $2n+1$. In fact,
$$a_{1,5k}=w_{f(m,g(5k)),g(5k)}=w_{k,k}$$
since $g(5k)=k$,\ \
$b_{k,k}=\langle2k+k-1\rangle_n=3k-1=m$ and
$f(m,k)=k$, and
$$a_{2,k+1}=w_{f(m+2,g(k+1)),g(k+1)}=w_{n+1-k,n+1-k}$$
since $g(k+1)=n+1-k$,\ \
$b_{n+1-k,n+1-k}=\langle2(n+1-k)+(n+1-k)-1\rangle_n=3k+1=m+2$ and
$f(m+2,n+1-k)=n+1-k$.
So the sum of the elements in the left diagonal of $W$ is
$$w_{k,k}+w_{n+1-k,n+1-k}=a_{1,5k}+a_{2,k+1}=(5k+1+n)+(k+1)=2n+3\neq 2n+1.$$
The array $W^*=(w_{i,j}^*)$, $i,j\in I_n$, is obtained by exchanging column $k$ with column
$k+2$ and exchanging column $n+1-k$ with column $n+1-k-2$ of $W$. We list the elements in
columns $k$, $k+1$, $k+2$ and $n+1-k-2$, $n+1-k-1$, $n+1-k$ of $B$, $W$ and $W^*$ in the following tables.
\begin{center}
{\renewcommand\arraystretch{0.7}
\setlength{\arraycolsep}{2pt}
\footnotesize
$\begin{array}{|c|c|c|c|}
\multicolumn{4}{c}{B }\\ \hline
& j=k & j=k+1 & j=k+2 \\\hline
i=k-1 & m-2 & m-1 & m \\\hline
i=k & \emph{\textbf{m}} & m+1 & m+2 \\\hline
i=k+1 & m+2 & m+3 & m+4 \\\hline
\end{array}
\hspace{20pt}
\begin{array}{|c|c|c|}
\multicolumn{3}{c}{W }\\ \hline
j=k & j=k+1 & j=k+2 \\\hline
& & a_{1,5k+1} \\\hline
{\color{red}{a_{1,5k}}} & & a_{2,5k+1} \\\hline
a_{2,5k}& & \\\hline
\end{array}
\hspace{20pt}
\begin{array}{|c|c|c|}
\multicolumn{3}{c}{W^* }\\ \hline
j=k & j=k+1 & j=k+2 \\\hline
a_{1,5k+1} & & \\\hline
{\color{red}{a_{2,5k+1}}} & & a_{1,5k} \\\hline
& &a_{2,5k} \\\hline
\end{array}$}
\end{center}
\begin{center}
{\renewcommand\arraystretch{0.7}
\setlength{\arraycolsep}{0.8pt}
\footnotesize
$\begin{array}{|c|c|c|c|}
\multicolumn{4}{c}{B }\\ \hline
& j=n+1-k-2 & j=n+1-k-1 & j=n+1-k \\\hline
i=n+1-k-1 & m-2 & m-1 & m \\\hline
i=n+1-k & \emph{\textbf{m}} & m+1 & m+2 \\\hline
i=n+1-k+1 & m+2 & m+3 & m+4 \\\hline
\end{array}
\hspace{20pt}
\begin{array}{|c|c|c|}
\multicolumn{3}{c}{W }\\ \hline
j=n+1-k-2 & j=n+1-k-1 & j=n+1-k \\\hline
& & a_{1,k+1} \\\hline
a_{1,k} & & {\color{red}{a_{2,k+1}}} \\\hline
a_{2,k}& & \\\hline
\end{array}$}
\end{center}
\begin{center}
{\renewcommand\arraystretch{0.8}
\setlength{\arraycolsep}{2pt}
\footnotesize
$\begin{array}{|c|c|c|c|}
\multicolumn{4}{c}{W^* }\\ \hline
& j=n+1-k-2 & j=n+1-k-1 & j=n+1-k \\\hline
i=n+1-k-1& a_{1,k+1} & & \\\hline
i=n+1-k & a_{2,k+1} & & {\color{red}{a_{1,k}}} \\\hline
i=n+1-k+1 & & &a_{2,k} \\\hline
\end{array}$}
\end{center}
Hence $$w_{k,k}^*+w_{n+1-k,n+1-k}^*=w_{k,k+2}+w_{n+1-k,n+1-k-2}=a_{2,5k+1}+a_{1,k}=(5k+1-1)+(n+k)= 2n+1.$$
The set of row-sums, the set of column-sums and the right diagonal-sum of $W^*$
are the same as those of $W$: exchanging columns leaves the row-sums unchanged and permutes the column-sums,
and the non-zero entries of the exchanged columns do not lie on the right diagonal (see the tables above), so the right diagonal-sum is also unchanged.
Then $W^*$ is a regular SAMS$(n,2)$.
\end{proof}
\noindent\textbf{Remark 1} \ \ For any array $C=(c_{i,j})_{n\times n}$, let $\Omega(C)=\{(i,j)|c_{i,j}\neq0,\ i,j\in I_n\}$; this notation will be used in the rest of the paper. In the proof of Theorem \ref{SAMS(n,2)},
we have
\begin{center}
$\Omega(W)=\{(i,j)|b_{i,j}\in\{m,m+2\},\ i,j\in I_n\}$,
\end{center}
and
\begin{center}
$\Omega(W^*)\subset\{(i,j)|b_{i,j}\in\{m-2,m,m+2,m+4\},\ i,j\in I_n\}$
in \textbf{Case 2},
\end{center}
which can be used in the proof later.
\vskip 4pt
To illustrate the proof of Theorem \ref{SAMS(n,2)}, we give an example in the following.
\begin{exam}\label{Ex1}
There exists a regular SAMS$(7,2)$.
\end{exam}
\begin{proof}
By the proof of Theorem \ref{SAMS(n,2)}, take $n=7$, then $m=3$ and $m+2=5$.
\begin{center}
$A=\renewcommand\arraystretch{0.8}\left(
\begin{smallmatrix}
8 & 9 & 11 & 12 & 13 & 14 & 7 \\
1 & 2 & 3 & 10 & 4 & 5 & 6 \\
\end{smallmatrix}
\right),$\ \ \
$B=\renewcommand\arraystretch{0.8}\left(
\begin{smallmatrix}
2 & \textbf{3} & 4 & \textbf{5} & 6 & 7 & 1 \\
4 & \textbf{5} & 6 & 7 & 1 & 2 & \textbf{3} \\
6 & 7 & 1 & 2 & \textbf{3} & 4 & \textbf{5} \\
1 & 2 & \textbf{3} & 4 & \textbf{5} & 6 & 7 \\
\textbf{3} & 4 & \textbf{5} & 6 & 7 & 1 & 2 \\
\textbf{5} & 6 & 7 & 1 & 2 & \textbf{3} & 4 \\
7 & 1 & 2 & \textbf{3} & 4 & \textbf{5} & 6 \\
\end{smallmatrix}
\right).$
\end{center}
It is readily checked that $g(1)=\langle m+2-1\rangle_n=m+1=4$, $f(3,4)=7$ and $f(5,4)=1$ since $b_{7,4}=3$
and $b_{1,4}=5$, then $w_{7,4}=a_{1,1}=8$ and $w_{1,4}=a_{2,1}=1$, and so on.
The array $W$ is obtained in the following,
\begin{center}
{\renewcommand\arraystretch{0.8}
\setlength{\arraycolsep}{3pt}
\footnotesize
\hspace{15pt}
$W=\begin{array}{|c|c|c|c|c|c|c|}\hline
& 7 & & 1 & & & \\\hline
& 6 & & & & & 14 \\\hline
& & & & 13 & & 5 \\\hline
& & 12 & & 4 & & \\\hline
11 & & 10 & & & & \\\hline
3 & & & & & 9 & \\\hline
& & & 8 & & 2 & \\\hline
\end{array}$}\ ,
\end{center}
\noindent where empty entries of $W$ indicate 0. Clearly, $G(W)=[1,14]$ and there are two non-zero
entries in each row, each column and each main diagonal of $W$. On the other
hand, the set of row-sums $R(W)=\{8,20,18,16,21,12,10\}$, the set of column-sums $C(W)=\{14,13,22,9,17,11,19\}$,
$l(W)=15$ and $r(W)=23$, it follows that
$S_W=R(W)\cup C(W)\cup \{l(W),r(W)\}=[8,23]$. So, $W$ is a regular SAMS$(7,2)$.
\end{proof}
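The whole construction in the proof of Theorem \ref{SAMS(n,2)} can be reproduced mechanically; the sketch below (ours) follows the placement rule of the proof for $n\equiv1\pmod 6$ and, reusing \texttt{bracket}, \texttt{latin\_square\_B} and \texttt{is\_regular\_sams} from the earlier sketches, confirms the array $W$ of Example \ref{Ex1}.
\begin{verbatim}
def sams_n_2(n):
    """The array W constructed above for n = 2m+1; W is regular when n = 1 (mod 6)."""
    m = (n - 1) // 2
    a1 = [n + j if j <= m - 1 else (n + j + 1 if j <= 2 * m else n)
          for j in range(1, n + 1)]
    a2 = [j if j <= m else (3 * m + 1 if j == m + 1 else j - 1)
          for j in range(1, n + 1)]
    B = latin_square_B(n)

    def f(x, j):      # the row index i with b_{i,j} = x
        return next(i for i in range(1, n + 1) if B[i - 1][j - 1] == x)

    def g(s):
        return bracket(m + 2 * s - 1, n)

    W = [[0] * n for _ in range(n)]
    for s in range(1, n + 1):
        W[f(m, g(s)) - 1][g(s) - 1] = a1[s - 1]
        W[f(m + 2, g(s)) - 1][g(s) - 1] = a2[s - 1]
    return W

assert is_regular_sams(sams_n_2(7), 7, 2)   # reproduces the n = 7 example above
\end{verbatim}
For $n\equiv5\pmod 6$ one would additionally perform the two column exchanges described in Case 2 of the proof.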
The following example is very similar to the above, so
we only list the arrays $A$, $B$, $W$ and $W^*$ by using the proof of Theorem \ref{SAMS(n,2)}.
\begin{exam}\label{Ex2}
There exists a regular SAMS$(11,2)$.
\end{exam}
\begin{proof}
We have $m=5$, $m+2=7$ and $k=2$.
\begin{center}
$A=\renewcommand\arraystretch{0.8}\left(
\begin{smallmatrix}
12 & 13 & 14 & 15 & 17 & 18 & 19 & 20 & 21 & 22 & 11 \\
1 & 2 & 3 & 4 & 5 & 16 & 6 & 7 & 8 & 9 & 10 \\
\end{smallmatrix}
\right),$\ \ \
$B=\renewcommand\arraystretch{0.8}\left(
\begin{smallmatrix}
2 & 3 & 4 & \textbf{5} & 6 & \textbf{7} & 8&9&10&11&1 \\
4 & \textbf{5} & 6 & \textbf{7} & 8 & 9 & 10&11&1&2&3 \\
6 & \textbf{7} & 8 & 9 & 10 & 11 & 1&2&3&4&\textbf{5} \\
8 & 9 & 10 & 11 & 1 & 2 & 3&4&\textbf{5}&6&\textbf{7} \\
10 & 11 & 1 & 2 & 3 & 4 & \textbf{5}&6&\textbf{7}&8&9 \\
1 & 2 & 3 & 4 & \textbf{5} & 6 & \textbf{7}&8 & 9 & 10 & 11 \\
3 & 4 & \textbf{5} & 6 & \textbf{7} & 8 & 9 & 10 & 11&1 & 2 \\
\textbf{5} & 6 & \textbf{7} & 8 & 9 & 10 & 11&1 & 2 & 3 & 4 \\
\textbf{7} & 8 &9 & 10 & 11 & 1& 2 & 3 & 4 & \textbf{5} & 6 \\
9 & 10 &11&1& 2 & 3 & 4 & \textbf{5}&6 & \textbf{7} & 8 \\
11 & 1 & 2 & 3 & 4 & \textbf{5}&6 & \textbf{7} & 8 & 9 & 10 \\
\end{smallmatrix}
\right),$
\end{center}
\begin{center}
{\renewcommand\arraystretch{0.8}
\setlength{\arraycolsep}{3pt}
\footnotesize
\hspace{15pt}
$W=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline
& & & 11 & & 1 & & & & & \\\hline
& 22 & & 10 & & & & & & & \\\hline
& 9 & & & & & & & & & 21 \\\hline
& & & & & & & & 20 & & 8 \\\hline
& & & & & & 19 & & 7 & & \\\hline
& & & & 18 & & 6 & & & & \\\hline
& & 17 & & 16 & & & & & & \\\hline
15 & & 5 & & & & & & & & \\\hline
4 & & & & & & & & & 14 & \\\hline
& & & & & & & 13 & & 3 & \\\hline
& & & & & 12 & & 2 & & & \\\hline
\end{array}$}\ .
\end{center}
We exchange column $2$ with column $4$ and column $10$ with column $8$ of $W$ to obtain $W^*$ as follows.
\begin{center}
{\renewcommand\arraystretch{0.8}
\setlength{\arraycolsep}{3pt}
\footnotesize
\hspace{15pt}
$W^*=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline
& 11 & & & & 1 & & & & & \\\hline
& 10 & & 22 & & & & & & & \\\hline
& & & 9 & & & & & & & 21 \\\hline
& & & & & & & & 20 & & 8 \\\hline
& & & & & & 19 & & 7 & & \\\hline
& & & & 18 & & 6 & & & & \\\hline
& & 17 & & 16 & & & & & & \\\hline
15 & & 5 & & & & & & & & \\\hline
4 & & & & & & & 14 & & & \\\hline
& & & & & & & 3 & & 13 & \\\hline
& & & & & 12 & & & & 2 & \\\hline
\end{array}$}\ .
\end{center}
\noindent Here empty entries of $W$ and $W^*$ indicate 0. It is easy to see that $W^*$ is a regular SAMS$(11,2)$.
\end{proof}
\section{Symmetric diagonal Kotzig array and symmetric forward diagonals array }
In this section, we introduce the symmetric diagonal Kotzig array and the symmetric forward diagonals array,
which are the important building blocks in our construction in the next section.
\begin{defi}\label{symdiaKA}
Suppose $n$ and $d$ are positive integers with $d \leq n$. A $d\times n$ rectangular array $A=(a_{i,j})$, $i\in I_d$,
$j\in I_n$, is a \emph{symmetric diagonal Kotzig array} if it has the following properties:
1. Each row is a permutation of the set $I_n=\{1,2,\cdots, n\}$.
2. All columns have the same sum.
3. All forward diagonals have the same sum.
4. $a_{i,j}+a_{d+1-i,n+1-j}=n+1$ for each $(i,j)\in I_d\times I_n$.
\end{defi}
Three-row arrays satisfying the first two conditions of Definition \ref{symdiaKA} were used by A. Kotzig
(\cite{Kotzig}) to construct edge-magic labelings, and there is an account of this in \cite{Wallis,Wallis1},
where they are called \emph{Kotzig arrays}. I. Gray and J. MacDougall constructed a $d$-row generalization of these Kotzig arrays, which has been used to construct vertex-magic labelings for complete bipartite graphs (\cite{Gray1}).
Arrays satisfying the first three conditions of Definition \ref{symdiaKA} were used by I. Gray and J. MacDougall (\cite{Gray}) to construct sparse semi-magic squares and vertex-magic labelings, and they are called \emph{diagonal Kotzig arrays}. Our constructions of squares require diagonal Kotzig arrays with the additional symmetry condition stated
as property 4 above.
\begin{defi}
Suppose $n$ and $t$ are positive integers and $t \leq n$. A $t\times n$ array
$A=(a_{i,j})$, $i\in I_t, j\in I_n$, is a \emph{symmetric forward diagonals array}, denoted by SFD$(t,n)$ for short, if it satisfies
the following properties:
1. The set of elements of $A$ consists of $nt$ consecutive positive integers.
2. All columns have the same sum.
3. All forward diagonals have the same sum.
4. $a_{i,j}+a_{t+1-i,n+1-j}$ is a constant for any $(i,j)\in I_t\times I_n$.
\end{defi}
If $A_1=(a_{i,j}^{(1)})$ is an SFD$(t,n)$ over $I_{nt}$, let
$a_{i,j}^{(2)}=a_{i,j}^{(1)}+l$, where $l$ is a nonnegative integer,
then $A_2=(a_{i,j}^{(2)})$ is also an SFD$(t,n)$ over $[1+l,nt+l]$.
\begin{con}\label{SymDK-SFD}
If there exists a symmetric diagonal Kotzig array of order $d\times n$,
then there exists an SFD$(d,n)$.
\end{con}
\begin{proof}
Let $A=(a_{i,j})$ be a symmetric diagonal Kotzig array of order $d\times n$ and
$B=(b_{i,j})$ be the $d\times n$ array with $b_{i,j}=i-1$,
where $i\in I_d$, $j\in I_n$. Next we shall show that
$S=A+nB=(s_{i,j})$ is an SFD$(d,n)$.
Clearly, $\bigcup\limits_{i=1}^{d}\bigcup\limits_{j=1}^{n}\{s_{i,j}\}=I_{dn}$.
Note that the columns of $A$ and $B$ have constant sum respectively and therefore the columns
of $S = A + nB$ will also have a constant sum $k$. Also the forward diagonals
of $A$ and $B$ have constant sum respectively and so the forward diagonals of $S$ will also
have constant sum, also equal to $k$. Since $a_{i,j}+a_{d+1-i,n+1-j}$ is a constant, then
$$s_{i,j}+s_{d+1-i,n+1-j}=(a_{i,j}+nb_{i,j})+(a_{d+1-i,n+1-j}+nb_{d+1-i,n+1-j})=a_{i,j}+a_{d+1-i,n+1-j}+n(d-1)$$
is a constant. Hence $S$ is an SFD$(d,n)$.
\end{proof}
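Construction \ref{SymDK-SFD} is purely mechanical; the following sketch (ours) takes any symmetric diagonal Kotzig array $A$ and returns $S=A+nB$, with an optional shift $l$ as in the preceding remark.
\begin{verbatim}
def kotzig_to_sfd(A, n, l=0):
    """From a d x n symmetric diagonal Kotzig array A, build S = A + nB, shifted by l."""
    # b_{i,j} = i - 1, so row i of A is shifted by n*(i-1); adding l keeps the SFD property.
    return [[A[i][j] + n * i + l for j in range(n)] for i in range(len(A))]
\end{verbatim}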
So, in order to show the existence of an SFD$(d,n)$, it suffices to show how
to construct a symmetric diagonal Kotzig array.
The following theorem is obtained by direct
construction together with a recursive method.
\begin{thm}\label{SymDKA}
There exists a symmetric diagonal Kotzig array of order $d\times n$ for any odd
integer $n\geq 3$ and integer $d\in [3,n]$.
\end{thm}
\begin{proof}
For $i\in I_3$, $j\in I_n$, let $A_3=(a_{i,j})$, where
\begin{center}
$ a_{1,j}= \left\{
\begin{array}{ll}
n-\frac{j-1}{2}, & if \ \ \ j\ \ is \ \ odd, \\
\frac{n+1-j}{2}, & if \ \ \ j \ \ is\ \ even,\\
\end{array}
\right. \ \ \
a_{2,j}= j,\ \ \ a_{3,j}= n+1-a_{1,n+1-j}.$
\end{center}
For $i\in I_4$, $j\in I_n$, let $A_4=(b_{i,j})=\left(
\begin{array}{c}
B_1 \\
B_2 \\
\end{array}
\right)
$, where
\begin{center}
$ b_{1,j}= \left\{
\begin{array}{ll}
j, & if \ \ \ j\leq {n-1\over 2}, \\
j+1, & if \ \ \ {n+1\over 2 }\leq j\leq n-1,\\
{n+1\over 2}, & if \ \ \ j=n, \\
\end{array}
\right. \ \ \
b_{2,j}= \left\{
\begin{array}{ll}
{n+1\over 2}, & if \ \ \ j=1, \\
n+2-j , & if \ \ \ 2\leq j\leq {n+1\over 2}, \\
n+1-j, & if \ \ \ j> {n+1\over 2},\\
\end{array}
\right.$
\end{center}
$$b_{3,j}=n+1-b_{2,n+1-j},\ \ \ \ \ b_{4,j}=n+1-b_{1,n+1-j}.$$
For $i\in I_5$, $j\in I_n$, let $A_5=(c_{i,j})$, where
$$c_{1,j}=\left(\tfrac{n+1}{2}(j-1)\bmod n\right)+1,\ \ \ c_{2,j}=n+1-j,\ \ \ c_{3,j}=j,$$
$$c_{4,j}=n+1-j,\ \ \ c_{5,j}=j+{n+1\over 2}-c_{1,j}.$$
It is readily checked that $A_3$, $A_4$, $A_5$ and $A_6=\left(\begin{array}{c}
A_3 \\
A_3 \\
\end{array}
\right)$ are the symmetric diagonal Kotzig arrays of order $d\times n$ for $d=3,4,5,6$ respectively.
For $d\geq 7$, it can be written as $d=4k+\alpha$, where $\alpha\in\{3,4,5,6\}$. Let
\begin{center}
$E=\left(
\begin{smallmatrix}
B_1 \\
\vdots\\
B_1 \\
A_{\alpha} \\
B_2 \\
\vdots \\
B_2\\
\end{smallmatrix}
\right),$
\end{center}
where $B_i$ occurs $k$ times for $i=1,2$.
It is clear that $E$ is a symmetric diagonal Kotzig array of order $(4k+\alpha)\times n$.
\end{proof}
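The array $A_3$ and the defining properties of Definition \ref{symdiaKA} are easy to generate and check by computer; the sketch below is ours.
\begin{verbatim}
def kotzig_A3(n):
    """The 3 x n array A_3 of the proof above (n odd)."""
    a1 = [n - (j - 1) // 2 if j % 2 == 1 else (n + 1 - j) // 2
          for j in range(1, n + 1)]
    a2 = list(range(1, n + 1))
    a3 = [n + 1 - a1[n - j] for j in range(1, n + 1)]   # a_{3,j} = n+1 - a_{1,n+1-j}
    return [a1, a2, a3]

def is_sym_diag_kotzig(A, n):
    d = len(A)
    rows = all(sorted(row) == list(range(1, n + 1)) for row in A)
    cols = {sum(A[i][j] for i in range(d)) for j in range(n)}
    diags = {sum(A[i][(j + i) % n] for i in range(d)) for j in range(n)}
    sym = all(A[i][j] + A[d - 1 - i][n - 1 - j] == n + 1
              for i in range(d) for j in range(n))
    return rows and len(cols) == 1 and len(diags) == 1 and sym

assert is_sym_diag_kotzig(kotzig_A3(11), 11)
\end{verbatim}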
\noindent\textbf{Remark 2}\ \ \
(i)\ \ Note that the array $A_4$ also has the property that for any $j\in I_n$,
\begin{center}
$b_{1,j}+b_{2,\langle j+1\rangle_n}=n+1$,
\end{center}
and similarly
\begin{center}
$b_{3,j}+b_{4,\langle j+1\rangle_n}=n+1.$
\end{center}
(ii)\ \ There are many ways to obtain a symmetric diagonal Kotzig array of
order $d\times n$ with $d\geq 7$, here we also give another different combined way below. Let
\begin{center}
$F=\left(
\begin{smallmatrix}
A_3\\
B_1 \\
\vdots\\
B_1 \\
A_{\alpha} \\
B_2 \\
\vdots \\
B_2\\
A_3\\
\end{smallmatrix}
\right),$
\end{center}
where $B_i$ occurs $k-1$ times for $i=1,2$.
Then it is easy to check that $F$ is also a symmetric diagonal Kotzig array of order $(4k+2+\alpha)\times n$.
(iii)\ \ When $d=2e$ and $e\geq 3$, we can obtain a
symmetric diagonal Kotzig array of order $d\times n$ by stacking two symmetric diagonal Kotzig arrays of order $e\times n$ coming from Theorem \ref{SymDKA}; this will be used in the proofs below whenever the
number of rows $d$ of a symmetric diagonal Kotzig array is even and $d\geq 6$.
\qed
\vskip 12pt
Combining Construction \ref{SymDK-SFD} and Theorem \ref{SymDKA}, we obtain the following theorem.
\begin{thm}\label{SFD}
For any odd $n\geq 3$ and $t\in [3,n]$, there exists an SFD$(t,n)$ over $[1+l,nt+l]$ for any nonnegative integer $l$.
\end{thm}
\noindent\textbf{Remark 3} \ \
By (i) and (iii) of Remark 2, for $e\geq 2$ and any nonnegative integer $l$, there exists an
SFD$(2e,n)$, $F=(f_{i,j})$, over $[1+l,2en+l]$
by using Construction \ref{SymDK-SFD} and Theorem \ref{SymDKA}, and it has the additional properties
\begin{center}
$f_{i,\langle i+x\rangle_n}+f_{2e+1-i,n+1-\langle i+x\rangle_n}=2en+1+2l$,
\end{center}
and
\begin{center}
$\sum\limits_{i=1}^ef_{i,\langle i+x\rangle_n}+\sum\limits_{i=e+1}^{2e}f_{i,\langle i+x+y\rangle_n}=(2en+1+2l)e$
\end{center}
for any $x,y\in I_n$.
\section{The existence of a regular SAMS$(n,d)$ for $n\equiv1,5 \pmod 6$ and $d\in [6,n-3]$}
In this section, we shall prove that there exists a regular SAMS$(n,d)$ for any $n\equiv1,5 \pmod 6$ and $d\in [6,n-3]$
by using the arrays $B$, $W$ and $W^*$ in the proof of Theorem \ref{SAMS(n,2)} and the existence of an SFD$(d,n)$
from Theorem \ref{SFD}, which is obtained via Construction \ref{SymDK-SFD} and Theorem \ref{SymDKA}.
To do this, we also introduce a new concept and some simple but useful results in the following.
\begin{defi}\label{compatibleDY}
Two $m\times n$ arrays $M=(m_{i,j})$ and $N=(n_{i,j})$ are \emph{compatible} if $\Omega(M)\cap\Omega(N)=\emptyset$,
where $\Omega(M)=\{(i,j)|m_{i,j}\neq0,\ i\in I_m, j\in I_n\}$ and $\Omega(N)=\{(i,j)|n_{i,j}\neq0,\ i\in I_m, j\in I_n\}$.
\end{defi}
\begin{lem}\label{compat-R1}
If there exists a regular SMS$(n,d_1)$ and an SAMS$(n,d_2)$, and they are compatible,
then there exists an SAMS$(n,d_1+d_2)$.
\end{lem}
\begin{proof}
Let $M=(m_{i,j})$ be a regular SMS$(n,d_1)$ over $\{0,1,2,\cdots,nd_1\}$, and $N=(n_{i,j})$ be an SAMS$(n,d_2)$ over $\{0,1,2,\cdots,nd_2\}$.
Let $M'=(m_{i,j}')$, where
\begin{center}
$m_{i,j}'= \left\{
\begin{array}{ll}
m_{i,j}+nd_2, & if \ \ \ m_{i,j}\neq0, \\
0, & if \ \ \ m_{i,j}=0.\\
\end{array}
\right. $
\end{center}
It is readily checked that $M'+N$ is an SAMS$(n,d_1+d_2)$ over $\{0,1,2,\cdots,n(d_1+d_2)\}$.
\end{proof}
\begin{lem}\label{compat-R2}
If there exists an SMS$(n,d_1)$ and a regular SAMS$(n,d_2)$, and they are compatible,
then there exists an SAMS$(n,d_1+d_2)$.
\end{lem}
\begin{proof}
Let $M=(m_{i,j})$ be an SMS$(n,d_1)$ over $\{0,1,2,\cdots,nd_1\}$, and $N=(n_{i,j})$ be a regular SAMS$(n,d_2)$ over $\{0,1,2,\cdots,nd_2\}$.
Let $N'=(n_{i,j}')$, where
\begin{center}
$n_{i,j}'= \left\{
\begin{array}{ll}
n_{i,j}+nd_1, & if \ \ \ n_{i,j}\neq0, \\
0, & if \ \ \ n_{i,j}=0.\\
\end{array}
\right. $
\end{center}
It is readily checked that $M+N'$ is an SAMS$(n,d_1+d_2)$ over $\{0,1,2,\cdots,n(d_1+d_2)\}$.
\end{proof}
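The combination step in Lemmas \ref{compat-R1} and \ref{compat-R2} amounts to shifting the non-zero entries of one square and adding the two arrays entrywise; a sketch (ours) for the version of Lemma \ref{compat-R1}:
\begin{verbatim}
def shift_nonzero(A, s):
    """Add s to every non-zero entry of A, leaving the zero entries in place."""
    return [[x + s if x else 0 for x in row] for row in A]

def combine_compatible(M, N, n, d2):
    """M a regular SMS(n,d1) over {0,...,n*d1}, N an SAMS(n,d2); M, N compatible."""
    Mp = shift_nonzero(M, n * d2)          # the array M' of the proof above
    return [[Mp[i][j] + N[i][j] for j in range(n)] for i in range(n)]
\end{verbatim}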
\begin{thm}\label{compat-R3}
If there exists a regular SMS$(n,d_1)$ and a regular SAMS$(n,d_2)$, and they are compatible,
then there exists a regular SAMS$(n,d_1+d_2)$.
\end{thm}
\begin{thm}\label{mainth4-1}
There exists a regular SAMS$(n,d)$ for any $n\equiv 1,5 \pmod 6$, $n\geq 11$ and $d\in [6,n-3]$.
\end{thm}
\begin{proof}
Let $d=t+2$, where $t\in [4,n-5]$, and $m=\frac{n-1}{2}$. By Theorem \ref{SAMS(n,2)} there exists a regular SAMS$(n,2)$, $W$ or $W^*$. By Theorem \ref{compat-R3}, to show the conclusion, we need only to
construct a regular SMS$(n,t)$, which is compatible with the regular SAMS$(n,2)$.
By Theorem \ref{SFD} there exists an SFD$(t,n)$ over $[1,nt]$, denoted by $C=(c_{i,j})$.
The Latin square of order $n$, $B$, and the function $f$ are both from the proof of Theorem \ref{SAMS(n,2)}.
When $t=2e+1$, we put $c_{i,j}$, $i\in I_t$, $j\in I_n$, into entry
$(f(\langle2i-2e-1+m\rangle_n,\langle2j+m\rangle_n),\langle 2j+m\rangle_n)$ of $B$,
and the other entries of $B$ are filled with $0$; the resulting array is denoted by $D$.
When $t=2e$, we put $c_{i,j}$,
$i\in I_{t}$, $j\in I_n$, into entry $(f(\langle2i'-2e-1+m\rangle_n,\langle2j+m\rangle_n),\langle 2j+m\rangle_n)$ of $B$,
where
\begin{center}
$ i'= \left\{
\begin{array}{ll}
i, & if \ \ \ i\in[1,e], \\
i+1, & if \ \ \ i\in[e+1,2e].\\
\end{array}
\right. $
\end{center}
The other entries of $B$ are filled with $0$; the resulting array is also denoted by $D$.
Firstly, we shall show that $D$ is a regular SMS$(n,t)$.
(i) Note that we put the elements in the $j$-th column
of $C$ into the $\langle 2j+m\rangle_n$-th column of $D$, where $j\in I_n$,
and $\{j|j\in I_n\}=\{\langle 2j+m\rangle_n|j\in I_n\}=I_n$, therefore there are $t$ non-zero
elements in each column of $D$ and the columns of $D$ will
have a constant sum $\frac{(1+nt)t}{2}$ since the columns of $C$ have constant sum $\frac{(1+nt)t}{2}$.
(ii)
For each $ j\in I_n$, the elements of the set $\mathcal{A}_1=\{c_{i,\langle j+i\rangle_n}| i\in I_t\}$ are put into the
same row of $D$; clearly $|\mathcal{A}_1|=t$, and the elements of $\mathcal{A}_1$ are
exactly the entries of one forward diagonal of $C$. In fact, the element $c_{i,\langle j+i\rangle_n}$ is put into the
$f(\langle2i-2e-1+m\rangle_n,\langle2\langle j+i\rangle_n+m\rangle_n)$-th row of $D$.
Let
$f(\langle2i-2e-1+m\rangle_n,\langle2\langle j+i\rangle_n+m\rangle_n)=\alpha$, then by the definition of the function $f$ from the proof
of Theorem \ref{SAMS(n,2)}, we have
\begin{center}
$b_{\alpha,\langle2\langle j+i\rangle_n+m\rangle_n}=\langle2i-2e-1+m\rangle_n=\langle2\alpha+2(j+i)+m-1\rangle_n$,
\end{center}
it follows that
$\alpha=\langle-j-e\rangle_n$ and $\{j|j\in I_n\}=\{\langle-j-e\rangle_n|j\in I_n\}=I_n$, which is independent of the parameter $i$.
This implies that the elements in the set $\mathcal{A}_1=\{c_{i,\langle j+i\rangle_n}| i\in I_t\}$ lie in the
same row of $D$. Clearly, there are $t$ non-zero elements in each row of $D$ from $|\mathcal{A}_1|=t$ and
so all forward diagonals of $C$ become the rows of $D$.
Then
the rows of $D$ will also have a constant sum $\frac{(1+nt)t}{2}$ since all forward diagonals of $C$ have the same sum $\frac{(1+nt)t}{2}$.
(iii) Let
\begin{center}
$ \Delta= \left\{
\begin{array}{ll}
\bigcup\limits_{i=1}^t\{2i-2e-1+m\}, & if \ \ \ t=2e+1, \\
\bigcup\limits_{i=1}^t\{2i'-2e-1+m\}, & if \ \ \ t=2e.\\
\end{array}
\right. $
\end{center}
Clearly $|\Delta|=t$.
It is easy to check that there are exactly $t$ non-zero elements in
each main diagonal of $D$ since $B$ is a diagonal Latin square and for $i_1,j_1\in I_n$,
\begin{center}
$ \left\{
\begin{array}{ll}
d_{i_1,j_1}=0, & if \ \ \ b_{i_1,j_1}\not\in \Delta, \\
d_{i_1,j_1}\neq0, & if \ \ \ b_{i_1,j_1}\in \Delta.\\
\end{array}
\right. $
\end{center}
Now we compute the main diagonal-sum of $D$.
When $t=2e$, for $i\in I_e$, $j\in I_n$, the elements $c_{i,j}$ and $c_{t+1-i,n+1-j}$ are putted into entries
$(f(\langle2i-2e-1+m\rangle_n,\langle2j+m\rangle_n),\langle 2j+m\rangle_n)$
and $(f(\langle2(t+1-i+1)-2e-1+m\rangle_n,\langle2(n+1-j)+m\rangle_n),\langle 2(n+1-j)+m\rangle_n)$
of $B$ respectively. Let $f(\langle2i-2e-1+m\rangle_n,\langle2j+m\rangle_n)=\sigma$.
Then we have
\begin{center}
$b_{\sigma,\langle2j+m\rangle_n}=\langle2\sigma+\langle2j+m\rangle_n-1\rangle_n=\langle2i-2e-1+m\rangle_n$,
\end{center}
\begin{center}
$\begin{aligned} \ \ \ \ b_{n+1-\sigma,n+1-\langle2j+m\rangle_n}&=(n+1)-b_{\sigma,\langle2j+m\rangle_n}=
(n+1)-\langle (2i-2e-1+m)\rangle_n.
\end{aligned}$
\end{center}
It is easy to compute that
\begin{center}
$\langle2(t+1-i+1)-2e-1+m\rangle_n=\langle2(2e+2-i)-2e-1+m\rangle_n=\langle(n+1)-(2i-2e-1+m)\rangle_n$,
\end{center}
\begin{center}
$\langle 2(n+1-j)+m\rangle_n=(n+1)-\langle2j+m\rangle_n$.
\end{center}
Therefore,
\begin{center}
$\begin{aligned} & \ \ \ \ f(\langle2(t+2-i)-2e-1+m\rangle_n,\langle2(n+1-j)+m\rangle_n) \\ &
=f(\langle(n+1)-(2i-2e-1+m)\rangle_n,\langle (n+1)-(2j+m)\rangle_n)\\ &=
(n+1)-\sigma.
\end{aligned}$
\end{center}
It follows that
for $i\in I_e$, $j\in I_n$, $c_{i,j}$ and $c_{t+1-i,n+1-j}$ are put into the entries
$(\sigma,\langle 2j+m\rangle_n)$ and $(n+1-\sigma,n+1-\langle 2j+m\rangle_n)$
of $B$ respectively. It is easy to see that
\begin{center}
$d_{\sigma,\langle 2j+m\rangle_n}+d_{n+1-\sigma,n+1-\langle 2j+m\rangle_n}=c_{i,j}+c_{t+1-i,n+1-j}=1+nt$.
\end{center}
Then the sum of the elements in each main diagonal of $D$
is also the constant $\frac{(1+nt)t}{2}$, since the $t=2e$ non-zero elements in each main diagonal split into $e$ such pairs, each summing to $1+nt$.
When $t=2e+1$,
we have $d_{m+1,m+1}=c_{e+1,m+1}=\frac{1+nt}{2}$: by property 4 of the SFD $C$ (whose constant is necessarily $1+nt$), taking $i=e+1$ and $j=m+1$ gives $2c_{e+1,m+1}=1+nt$, and the element $c_{e+1,m+1}$ is put into the entry
\begin{center}
$(f(\langle2(e+1)-2e-1+m\rangle_n,\langle2(m+1)+m\rangle_n),\langle 2(m+1)+m\rangle_n)=(f(m+1,m+1),m+1)=(m+1,m+1)$
\end{center}
of $D$, which follows from $b_{m+1,m+1}=\langle2(m+1)+(m+1)-1\rangle_n=m+1$.
In the same way as in the case $t=2e$, for $i\in I_e$, $j\in I_n$,
the elements $c_{i,j}$ and $c_{t+1-i,n+1-j}$ are also put into the entries
$(\sigma,\langle 2j+m\rangle_n)$ and $(n+1-\sigma,n+1-\langle 2j+m\rangle_n)$
of $B$ respectively.
It follows that the sum of elements in each diagonal of $D$
is also a constant sum $(1+nt)e+\frac{1+nt}{2}=\frac{(1+nt)t}{2}$.
Secondly,
we shall show that $D$ is compatible with the regular SAMS$(n,2)$ constructed from Theorem \ref{SAMS(n,2)}.
When $t=2e+1$, denote
\mbox{}\hspace{0.25in} $\Omega(D)=\{(i,j)|d_{i,j}\neq0,\ i,j\in I_n\}$
\mbox{}\hspace{0.62in} $=\{(f(\langle2i-2e-1+m\rangle_n,\langle2j+m\rangle_n),\langle 2j+m\rangle_n)| i\in I_t,\ j\in I_n\}$
\mbox{}\hspace{0.62in} $=\{(x,y)|b_{x,y}\in\bigcup\limits_{i=1}^t \{\langle2i-2e-1+m\rangle_n\},\ x,\ y\in I_n\}$
\vskip 6pt
\noindent
When $t=2e$, denote
\vskip 6pt
\mbox{}\hspace{0.16in} $\Omega(D)=\{(i,j)|d_{i,j}\neq0,\ i,j\in I_n\}
=\{(x,y)|b_{x,y}\in\bigcup\limits_{i=1}^t \{\langle2i'-2e-1+m\rangle_n\},\ x,\ y\in I_n\}$.\\
Clearly $e=\lfloor\frac{t}{2}\rfloor\leq m-2$, since $d=t+2\leq n-3$ implies $t\leq n-5=2m+1-5=2m-4$.
So it is easy to verify that
\begin{center}
$\{m-2,m,m+2,m+4\}\cap \{\langle2i-2e-1+m\rangle_n|i\in I_t\}=\emptyset$ when $t=2e+1$,\\
$\{m-2,m,m+2,m+4\}\cap \{\langle2i'-2e-1+m\rangle_n|i\in I_t\}=\emptyset$ when $t=2e$.
\end{center}
It follows that
\begin{center}
$\{(x,y)|b_{x,y}\in\{m-2,m,m+2,m+4\}\}\cap \Omega(D)=\emptyset$.
\end{center}
Let $W$ and $W^*$ be the same as in Theorem \ref{SAMS(n,2)}, that is,
$W$ is a regular SAMS$(n,2)$ for $n\equiv 1\pmod 6$ and $W^*$ is a regular SAMS$(n,2)$ for $n\equiv 5\pmod 6$.
By Remark 1, we have
$\Omega(W)=\{(i,j)|b_{i,j}\in\{m,m+2\},\ i,j\in I_n\}$
and
$\Omega(W^*)\subset\{(i,j)|b_{i,j}\in\{m-2,m,m+2,m+4\},\ i,j\in I_n\}$.
Then $\Omega(W)\cap \Omega(D)=\emptyset$ and $\Omega(W^*)\cap \Omega(D)=\emptyset$,
it follows that $W$ and $D$ are compatible, and $W^*$ and $D$ are compatible.
So, by Theorem \ref{compat-R3}, $D$ together with $W$ (for $n\equiv1\pmod 6$) or with $W^*$ (for $n\equiv5\pmod 6$) yields a regular SAMS$(n,d)$.
\end{proof}
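The placement of the SFD $C$ into the Latin square $B$ in the proof above is again mechanical. The sketch below (ours, reusing \texttt{bracket} and \texttt{latin\_square\_B}) carries it out for odd $t=2e+1$; the even case only differs by the re-indexing $i\mapsto i'$ described in the proof.
\begin{verbatim}
def place_sfd_odd_t(C, n):
    """Case t = 2e+1 of the proof above: place the t x n SFD C into B to obtain D."""
    t, e, m = len(C), (len(C) - 1) // 2, (n - 1) // 2
    B = latin_square_B(n)

    def f(x, j):
        return next(i for i in range(1, n + 1) if B[i - 1][j - 1] == x)

    D = [[0] * n for _ in range(n)]
    for i in range(1, t + 1):
        for j in range(1, n + 1):
            col = bracket(2 * j + m, n)
            row = f(bracket(2 * i - 2 * e - 1 + m, n), col)
            D[row - 1][col - 1] = C[i - 1][j - 1]
    return D
\end{verbatim}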
To illustrate the proof of Theorem \ref{mainth4-1}, we give an example in the following.
\begin{exam}\label{Ex4}
There exists a regular SAMS$(n,d)$ for $(n,d)\in\{(11,8),(13,9)\}$.
\end{exam}
\begin{proof}
For $(n,d)=(11,8)$ we have $m=5$, and by Theorem \ref{SFD} we get
$$C=\left(\begin{smallmatrix}
11&5&10&4&9& 3 &8&2&7&1& 6 \\
1 & 2 & 3 & 4 & 5 & 6 & 7 &8&9&10&11\\
6&11&5&10&4&9&3&8&2&7&1 \\
11&5&10&4&9& 3 &8&2&7&1& 6 \\
1 & 2 & 3 & 4 & 5 & 6 & 7 &8&9&10&11\\
6&11&5&10&4&9&3&8&2&7&1 \\
\end{smallmatrix}\right)+11\left(\begin{smallmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1& 1 & 1 & 1 & 1 \\
2 & 2 & 2 & 2 & 2 & 2 & 2& 2 & 2 & 2 & 2 \\
3 & 3 & 3 & 3 & 3 & 3 & 3& 3 & 3 & 3 & 3 \\
4 & 4 & 4 & 4 & 4 & 4 & 4& 4 & 4 & 4 & 4 \\
5 & 5 & 5 & 5 & 5 & 5 & 5& 5 & 5 & 5 & 5 \\
\end{smallmatrix}\right)
=\left(\begin{smallmatrix}
11&5&10&4&9& 3 &8&2&7&1& 6 \\
12& 13& 14&15& 16& 17& 18& 19& 20& 21& 22\\
28& 33& 27& 32& 26& 31& 25& 30& 24& 29 &23\\
44&38& 43& 37& 42& 36& 41& 35& 40& 34& 39\\
45& 46& 47& 48& 49& 50& 51& 52& 53& 54& 55\\
61& 66&60& 65& 59& 64& 58& 63& 57& 62& 56\\
\end{smallmatrix}\right).
$$
\noindent
By the proof of Theorem \ref{mainth4-1}, we obtain
$$C'=C+
\left(\begin{smallmatrix}
22 & 22 & 22 & 22 & 22 & 22 & 22& 22 & 22 & 22 & 22 \\
22 & 22 & 22 & 22 & 22 & 22 & 22& 22 & 22 & 22 & 22 \\
22 & 22 & 22 & 22 & 22 & 22 & 22& 22 & 22 & 22 & 22 \\
22 & 22 & 22 & 22 & 22 & 22 & 22& 22 & 22 & 22 & 22 \\
22 & 22 & 22 & 22 & 22 & 22 & 22& 22 & 22 & 22 & 22 \\
22 & 22 & 22 & 22 & 22 & 22 & 22& 22 & 22 & 22 & 22 \\
\end{smallmatrix}\right)
=\left(\begin{smallmatrix}
33& 27& 32& 26& 31& 25& 30& 24& 29& 23& 28\\
34& 35& 36& 37& 38& 39& 40& 41& 42& 43& 44\\
50& 55& 49& 54& 48& 53& 47& 52& 46& 51& 45\\
66& 60& 65& 59& 64& 58& 63& 57& 62& 56& 61\\
67& 68& 69& 70& 71& 72& 73& 74& 75& 76& 77\\
83& 88& 82& 87& 81& 86& 80& 85& 79& 84& 78\\
\end{smallmatrix}\right).
$$
The arrays $B$ and $D$ are listed below by the proof of Theorem \ref{mainth4-1} and $W^*$ comes from Example \ref{Ex2}.
\begin{center}
{\renewcommand\arraystretch{0.7}
\setlength{\arraycolsep}{4pt}
\footnotesize
$ B=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
\textbf{2} & 3 & \textbf{4} & 5 & 6 & 7 & \textbf{8} & 9 & \textbf{10} & \textbf{11} & \textbf{1} \\\hline
\textbf{4} & 5 & 6 & 7 & \textbf{8} & 9 & \textbf{10} & \textbf{11} & \textbf{1} & 2 & 3 \\\hline
6 & 7 & \textbf{8} & 9 & \textbf{10} & \textbf{11} & \textbf{1} & \textbf{2} & 3 & \textbf{4} & 5 \\\hline
\textbf{8} & 9 & \textbf{10} & \textbf{11} & \textbf{1} & \textbf{2} & 3 & \textbf{4 }& 5 & 6 & 7 \\\hline
\textbf{10} & \textbf{11} & \textbf{1} & \textbf{2} & 3 & \textbf{4} & 5 & 6 & 7 & \textbf{8} & 9 \\\hline
\textbf{1} & \textbf{2} & 3 & \textbf{4} & 5 & 6 & 7 & \textbf{8} & 9 & \textbf{10} & \textbf{11} \\\hline
3 & \textbf{4} & 5 & 6 & 7 & \textbf{8} & 9 & \textbf{10} & \textbf{11} & \textbf{1} & \textbf{2} \\\hline
5 & 6 & 7 & \textbf{8} & 9 & \textbf{10} & \textbf{11} & \textbf{1} & \textbf{2} & 3 & \textbf{4} \\\hline
7 & \textbf{8} & 9 & \textbf{10} & \textbf{11} & \textbf{1} & \textbf{2} & 3 & \textbf{4} & 5 & 6 \\\hline
9 & \textbf{10} & \textbf{11} & \textbf{1} & \textbf{2} & 3 & \textbf{4} & 5 & 6 & 7 & \textbf{8} \\\hline
\textbf{11} & \textbf{1} & \textbf{2} & 3 & \textbf{4} & 5 & 6 & 7 & \textbf{8} & 9 & \textbf{10} \\\hline
\end{array}$ \ , \
$D=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
42 & & 51 & & & & 66 & & 68 & 24 & 82 \\\hline
46 & & & & 61 & & 67 & 30 & 88 & 41 & \\\hline
& & 56 & & 77 & 25 & 83 & 40 & & 52 & \\\hline
62 & & 76 & 31 & 78 & 39 & & 47 & & & \\\hline
75 & 26 & 84 & 38 & & 53 & & & & 57 & \\\hline
79 & 37 & & 48 & & & & 63 & & 74 & 32 \\\hline
& 54 & & & & 58 & & 73 & 27 & 85 & 36 \\\hline
& & & 64 & & 72 & 33 & 80 & 35 & & 49 \\\hline
& 59 & & 71 & 28 & 86 & 34 & & 55 & & \\\hline
& 70 & 23 & 81 & 44 & & 50 & & & & 65 \\\hline
29 & 87 & 43 & & 45 & & & & 60 & & 69 \\\hline
\end{array}$}
\end{center}
\begin{center}
{\renewcommand\arraystretch{0.7}
\setlength{\arraycolsep}{3pt}
\footnotesize
\hspace{15pt}
$W^*=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}\hline
& 11 & & & & 1 & & & & & \\\hline
& 10 & & 22 & & & & & & & \\\hline
& & & 9 & & & & & & & 21 \\\hline
& & & & & & & & 20 & & 8 \\\hline
& & & & & & 19 & & 7 & & \\\hline
& & & & 18 & & 6 & & & & \\\hline
& & 17 & & 16 & & & & & & \\\hline
15 & & 5 & & & & & & & & \\\hline
4 & & & & & & & 14 & & & \\\hline
& & & & & & & 3 & & 13 & \\\hline
& & & & & 12 & & & & 2 & \\\hline
\end{array}$}\ .
\end{center}
Here, all of above empty entries indicate 0. It is easy to check that $D+W^*$ is a regular
SAMS$(11,8)$.
For $(n,d)=(13,9)$
we have $m=6$, and by Theorem \ref{SFD} we get
$$C=\left(\begin{smallmatrix}
1& 2& 3& 4& 5& 6& 8& 9& 10& 11& 12& 13& 7\\
7& 13& 12& 11& 10& 9& 8& 6& 5& 4 &3& 2& 1\\
13& 6& 12& 5& 11& 4& 10& 3& 9& 2& 8& 1& 7\\
1& 2& 3& 4& 5& 6& 7& 8& 9& 10& 11& 12& 13\\
7& 13& 6& 12& 5& 11& 4& 10& 3& 9& 2& 8& 1\\
13& 12& 11& 10& 9& 8& 6& 5& 4& 3& 2& 1& 7\\
7& 1& 2& 3& 4& 5& 6& 8& 9& 10& 11& 12& 13\\
\end{smallmatrix}\right)+13\left(\begin{smallmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0& 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1& 1 & 1 & 1 & 1& 1 & 1 \\
2 & 2 & 2 & 2 & 2 & 2 & 2& 2 & 2 & 2 & 2& 2 & 2 \\
3 & 3 & 3 & 3 & 3 & 3 & 3& 3 & 3 & 3 & 3& 3 & 3 \\
4 & 4 & 4 & 4 & 4 & 4 & 4& 4 & 4 & 4 & 4& 4 & 4 \\
5 & 5 & 5 & 5 & 5 & 5 & 5& 5 & 5 & 5 & 5& 5 & 5 \\
6 & 6 & 6 & 6 & 6 & 6 & 6& 6 & 6 & 6 & 6& 6 & 6 \\
7 & 7 & 7 & 7 & 7 & 7 & 7& 7 & 7 & 7 & 7& 7 & 7 \\
\end{smallmatrix}\right)
=\left(\begin{smallmatrix}
1& 2& 3& 4& 5& 6& 8& 9& 10& 11& 12& 13& 7\\
20& 26& 25& 24& 23& 22& 21& 19& 18& 17& 16& 15& 14\\
39& 32& 38& 31& 37& 30& 36& 29& 35& 28& 34& 27& 33\\
40& 41& 42& 43& 44& 45& 46& 47& 48& 49& 50& 51& 52\\
59& 65& 58& 64& 57& 63& 56& 62& 55& 61& 54& 60& 53\\
78& 77& 76& 75& 74& 73& 71& 70& 69& 68& 67& 66& 72\\
85& 79& 80& 81& 82& 83& 84& 86& 87& 88& 89& 90& 91\\
\end{smallmatrix}\right).
$$
By the proof of Theorem \ref{mainth4-1}, we obtain
$$C'=\left(\begin{smallmatrix}
27& 28& 29& 30& 31& 32& 34& 35& 36& 37& 38& 39& 33\\
46& 52& 51& 50& 49& 48& 47& 45& 44& 43& 42& 41& 40\\
65& 58& 64& 57& 63& 56& 62& 55& 61& 54& 60& 53& 59\\
66& 67& 68& 69& 70& 71& 72& 73& 74& 75& 76& 77& 78\\
85& 91& 84& 90& 83& 89& 82& 88& 81& 87& 80& 86& 79\\
104& 103& 102& 101& 100& 99& 97& 96& 95& 94& 93& 92& 98\\
111& 105& 106& 107& 108& 109& 110& 112& 113& 114& 115& 116& 117\\
\end{smallmatrix}\right).
$$
The arrays $D$ and $W$ are listed below by the proof of
Theorem \ref{mainth4-1} and Theorem \ref{SAMS(n,2)} respectively.
\begin{center}
{\renewcommand\arraystretch{0.7}
\setlength{\arraycolsep}{4pt}
\footnotesize
$ D=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
& 42 & &53& & 78 & &85&& 103&& 106&37\\\hline
& 60& & 77& &79 & &104 & &105 &36 && 43\\\hline
& 76 && 86 & &98& & 111& 35 && 44& & 54\\\hline
& 80 && 92 && 117& 34 & &45 && 61 && 75\\\hline
& 93 && 116& 32& & 47 && 55 && 74 && 87\\\hline
& 115& 31& & 48& & 62 && 73 && 81 && 94\\\hline
30& & 49& & 56 & & 72 && 88 && 95 && 114\\\hline
50& & 63 && 71 & & 82 && 96 && 113& 29& \\\hline
57& & 70 && 89 & & 97& &112& 28 && 51 &\\\hline
69& & 83 && 99 & & 110& 27 && 52 && 64& \\\hline
90& & 100 && 109& 33& &46 && 58 && 68& \\\hline
101&& 108&39 & & 40 & &65&& 67 && 84& \\\hline
107& 38& & 41 & & 59& & 66& & 91& & 102& \\\hline
\end{array}$ \ , }
\end{center}
\begin{center}
{\renewcommand\arraystretch{0.7}
\setlength{\arraycolsep}{4pt}
\footnotesize
$W=\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
& & & & 13 && 1 & & & & & &\\\hline
& & 26 && 12 & & & & & & & &\\\hline
25 && 11 & & & & & & & & & &\\\hline
10 & & & & & & & & & & &24 &\\\hline
& & & & & & & & & 23 && 9 &\\\hline
& & & & & & & 22 && 8 & & &\\\hline
& & & & & 21 && 7 & & & & &\\\hline
& & & 20 & &19 & & & & & & &\\\hline
& 18 && 6 & & & & & & & & &\\\hline
& 5 & & & & & & & & & && 17\\\hline
& & & & & & & & & & 16 && 4\\\hline
& & & & & & & & 15& & 3 & &\\\hline
& & & & & & 14 && 2 & & & &\\\hline
\end{array}$ \ . }
\end{center}
It is easy to verify that $D+W$ is a regular
SAMS$(13,9)$.
\end{proof}
\section{The Proof of Theorem \ref{mainth} }
In this section, we shall give the proof of our main Theorem \ref{mainth}. Firstly,
we obtain the existence of a regular SAMS$(n,4)$ for $n\equiv1,5 \pmod 6$ and $n\geq 7$ by direct construction.
\begin{thm}\label{mainth5-1}
There exists a regular SAMS$(n,4)$ for any $n\equiv1,5 \pmod 6$ and $n\geq 7$.
\end{thm}
\begin{proof}
Each $n\equiv1,5 \pmod 6$ with $n\geq 7$ can be written as $n=2m+1$, where $m\geq 3$. We construct a
special array $A=(a_{i,j})$, $i\in I_4$, $j\in I_n$, where
$$ a_{1,j}= \left\{
\begin{array}{ll}
2n+\frac{j+1}{2}, & if \ \ j\ \ is \ \ odd, \\
2n+m+1+ \frac{j}{2}, & if \ \ j\ \ is \ \ even, \\
\end{array}
\right. $$
\vskip 12pt
$\mbox{}\hspace{1.50in} a_{2,j}= n+2-j, $
$$ a_{3,j}= \left\{
\begin{array}{ll}
1 , & if \ \ j=1, \\
n+\frac{j+1}{2}, & if \ \ j\geq3 \ \ is\ \ odd ,\\
n+m+1+\frac{j}{2}, & if \ \ j \ \ is \ \ even, \\
\end{array} \right. $$
\vskip 8pt
$ \mbox{}\hspace{1.50in} a_{4,j}=4n+1-j.$
\noindent It is easy to calculate that
$$A_1=\bigcup\limits_{j=1}^n {\{a_{1,j}\}}=[2n+1,3n],\ \ \
A_2=\bigcup\limits_{j=1}^n {\{a_{2,j}\}}=[2,n+1],$$
$$A_3=\bigcup\limits_{j=1}^n{\{a_{3,j}\}}=[n+2,2n]\cup\{1\}, \ \ \
A_4=\bigcup\limits_{j=1}^n {\{a_{4,j}\}}=[3n+1,4n].$$
Then $G(A)=A_1\cup A_2\cup A_3\cup A_4=[1,4n]$. By a simple calculation, the following is obtained.
\mbox{}\hspace{0.35in}$C(A)=\bigcup\limits_{j=1}^n {\{\sum\limits_{i=1}^4a_{i,j}\}}=\{\sum\limits_{i=1}^4a_{i,1}\}\cup\left(\bigcup\limits_{e=1}^m{
\{\sum\limits_{i=1}^4a_{i,2e+1},\sum\limits_{i=1}^4a_{i,2e}\}}\right)$
\mbox{}\hspace{1.60in} $=\{7n+3\}\cup\left(\bigcup\limits_{e=1}^m\{8n+3-2e,9n+4-2e\}\right)$
\mbox{}\hspace{1.60in} $=\{7n+3,8n+1,9n+2\}\cup\left(\bigcup\limits_{e=1}^{m-1}\{8n+1-2e,9n+2-2e\}\right).$
\mbox{}\hspace{0.80in} $G_3=\bigcup\limits_{e=1}^{m-1}\{a_{1,2e-1}+a_{2,2e}+a_{3,2e+1}+a_{4,2e+2}\}$
\mbox{}\hspace{1.00in} $=\bigcup\limits_{e=1}^{m-1}\{(2n+e)+(n+2-2e)+(n+e+1)+(4n+1-2e-2)\}$
\mbox{}\hspace{1.00in} $=\bigcup\limits_{e=1}^{m-1}\{8n+2-2e\}.$
\mbox{}\hspace{0.80in} $G_4=\bigcup\limits_{e=1}^{m-1}\{a_{1,2e}+a_{2,2e+1}+a_{3,2e+2}+a_{4,2e+3}\}$
\mbox{}\hspace{1.00in} $=\bigcup\limits_{e=1}^{m-1}\{(2n+m+1+e)+(n+2-2e-1)+(n+m+1+e+1)+(4n+1-2e-3)\}$
\mbox{}\hspace{1.00in} $=\bigcup\limits_{e=1}^{m-1}\{9n+1-2e\},$
\mbox{}\hspace{0.80in} $G_5=\{a_{1,n-2}+a_{2,n-1}+a_{3,n}+a_{4,1}\}$
\mbox{}\hspace{1.00in} $=\{(2n+m)+(n+2-n+1)+(n+m+1)+(4n+1-1)\}$
\mbox{}\hspace{1.00in} $=\{8n+3\},$
\mbox{}\hspace{0.80in} $G_6=\{a_{1,n-1}+a_{2,n}+a_{3,1}+a_{4,2}\}$
\mbox{}\hspace{1.00in} $=\{(2n+m+1+m)+(n+2-n)+1+(4n+1-2)\}$
\mbox{}\hspace{1.00in} $=\{7n+2\},$
\mbox{}\hspace{0.80in} $G_7=\{a_{1,n}+a_{2,1}+a_{3,2}+a_{4,3}\}$
\mbox{}\hspace{1.00in} $=\{(2n+m+1)+(n+2-1)+(n+m+1+1)+(4n+1-3)\}$
\mbox{}\hspace{1.00in} $=\{9n+1\}.$
\noindent Denote $F(A)=G_3\cup G_4\cup G_5\cup G_6\cup G_7$; then $F(A)$ is the set of forward diagonal-sums of $A$.
Clearly,
\mbox{}\hspace{0.00in} $C(A)\cup F(A)$
\mbox{}\hspace{-0.20in} $=\{7n+3,8n+1,9n+2,8n+3,7n+2,9n+1\}\cup\left(\bigcup\limits_{e=1}^{m-1}\{8n+1-2e,8n+2-2e,9n+1-2e,9n+2-2e\}\right)$
\mbox{}\hspace{-0.20in} $=\{7n+3,8n+1,9n+2,8n+3,7n+2,9n+1\}\cup[7n+4,8n]\cup[8n+4,9n]$
\mbox{}\hspace{-0.20in} $=[7n+2,9n+2]\setminus\{8n+2\}.$
\vskip 6pt
Let $B=(b_{i,j})=(\langle2i+j-1\rangle_n)$ be the Latin square of order $n$ on $I_n$ which comes from the proof of Theorem \ref{SAMS(n,2)}. Define
\begin{center}
$i=g_r(i',j')=\langle i'-j'-1\rangle_n,\ \ j=g_c(i',j')=\langle2j'-3\rangle_n,\ i'\in I_4,\ j'\in I_n,$
\end{center}
and put the element $a_{i',j'}$ of $A$ into cell $(i,j)$ of $B$; the other cells of $B$ are
filled with $0$, and the resulting array is denoted by $D=(d_{i,j})$.
The elements in the same column of $A$ are also in the same column of $D$ since $j=g_c(i',j')=\langle2j'-3\rangle_n$, and
the elements in the same forward diagonal of $A$ lie in the same row of $D$ since
\begin{center}
$g_r(1,j')=g_r(2,\langle j'+1\rangle_n)=g_r(3,\langle j'+2\rangle_n)=g_r(4,\langle j'+3\rangle_n)=\langle-j'\rangle_n,\ j'\in I_n.$
\end{center}
Then
\begin{center}
$G(D)=[1,4n]$ and $R(D)\cup C(D)=C(A)\cup F(A)=[7n+2,9n+2]\setminus\{8n+2\}.$
\end{center}
We also have
\mbox{}\hspace{1.00in} $b_{i,j}=\langle2i+j-1\rangle_n=\langle2g_r(i',j')+g_c(i',j')-1\rangle_n$
\mbox{}\hspace{1.23in} $=\langle2(i'-j'-1)+(2j'-3)-1\rangle_n$
\mbox{}\hspace{1.23in} $=\langle2i'-6\rangle_n,\ i'\in I_4$.
\noindent
It follows that the element $a_{i',j'}$ of $A$ is put into a cell $(i,j)$ of $B$ with $b_{i,j}=n-4,n-2,n,2$
for $i'=1,2,3,4$ respectively. Since each row, each column and the right diagonal of $B$ contains each of these four values exactly once,
there exist exactly $4$ non-zero elements in each row, each column and the right diagonal of $D$.
It is easy to calculate that
\mbox{}\hspace{0.65in} $r(D)=d_{2,n-1}+d_{n,1}+d_{n-2,3}+d_{n-4,5}$
\mbox{}\hspace{0.98in} $=a_{4,1}+a_{3,2}+a_{2,3}+a_{1,4}$
\mbox{}\hspace{0.98in} $=(4n+1-1)+(n+m+1+1)+(n+2-3)+(2n+m+1+2)$
\mbox{}\hspace{0.98in} $=9n+3$
\noindent
since $d_{2,n-1}=a_{4,1}$ follows from $g_r(4,1)=\langle4-1-1\rangle_n=2,\ g_c(4,1)=\langle2-3\rangle_n=n-1$,
\ \ $d_{n,1}=a_{3,2}$\ \ \ \ follows from $g_r(3,2)=\langle3-2-1\rangle_n=n,\ g_c(3,2)=\langle2\cdot2-3\rangle_n=1$,
\ \ $d_{n-2,3}=a_{2,3}$ follows from $g_r(2,3)=\langle2-3-1\rangle_n=n-2,\ g_c(2,3)=\langle2\cdot3-3\rangle_n=3$,
\ \ $d_{n-4,5}=a_{1,4}$ follows from $g_r(1,4)=\langle1-4-1\rangle_n=n-4,\ g_c(1,4)=\langle2\cdot4-3\rangle_n=5$.
\noindent
Note that when $n\equiv 1,5 \pmod 6$ the left diagonal of $B$ is a transversal, so it contains each of the values $n-4,n-2,n,2$ exactly once,
and hence there exist exactly $4$ non-zero elements in the left diagonal of $D$.
\textbf{Case 1}\ \ $n\equiv 1\pmod 6$ and $n\geq 7$, so that $n=6k+1$ and $m=3k$ for some integer $k\geq 1$.
There exist exactly $4$ non-zero elements $a_{1,1}$, $a_{2,2+4k}$, $a_{3,2+2k}$, $a_{4,2}$ in the left diagonal of $D$
because
\mbox{}\hspace{0.50in} $g_r(1,1)=\langle1-1-1\rangle_n= g_c(1,1)=\langle2\cdot1-3\rangle_n=n-1$,
\mbox{}\hspace{0.50in} $g_r(2,2+4k)=\langle2-(2+4k)-1\rangle_n= g_c(2,2+4k)=\langle2(2+4k)-3\rangle_n=2k$,
\mbox{}\hspace{0.50in} $g_r(3,2+2k)=\langle3-(2+2k)-1\rangle_n=g_c(3,2+2k)=\langle2(2+2k)-3\rangle_n=4k+1$,
\mbox{}\hspace{0.50in} $g_r(4,2)=\langle4-2-1\rangle_n=g_c(4,2)=\langle2\cdot2-3\rangle_n=1$.
\noindent So
\mbox{}\hspace{0.60in} $l(D)=a_{1,1}+a_{2,2+4k}+a_{3,2+2k}+a_{4,2}$
\mbox{}\hspace{0.90in} $=(2n+1)+(n+2-2-4k)+(n+m+1+1+k)+(4n+1-2)$
\mbox{}\hspace{0.90in} $=8n+2$.
\noindent
Then $D$ is a regular SAMS$(n,4)$.
\textbf{Case 2}\ \ $n\equiv 5\pmod 6$ and $n\geq 11$, so that $n=6k-1$ and $m=3k-1$ for some integer $k\geq 2$.
We have exactly four non-zero elements $a_{1,1},a_{2,1+2k},a_{3,1+4k},a_{4,2}$ in the left diagonal of $D$
because
\mbox{}\hspace{0.40in} $g_r(1,1)=\langle1-1-1\rangle_n=g_c(1,1)=\langle2\cdot1-3\rangle_n=n-1$,
\mbox{}\hspace{0.40in} $g_r(2,1+2k)=\langle2-(1+2k)-1\rangle_n=g_c(2,1+2k)=\langle2(1+2k)-3\rangle_n=4k-1$,
\mbox{}\hspace{0.40in} $g_r(3,1+4k)=\langle3-(1+4k)-1\rangle_n=g_c(3,1+4k)=\langle2(1+4k)-3\rangle_n=2k$,
\mbox{}\hspace{0.40in} $g_r(4,2)=\langle4-2-1\rangle_n= g_c(4,2)=\langle2\cdot2-3\rangle_n=1$.
\noindent So
\mbox{}\hspace{0.60in} $l(D)=a_{1,1}+a_{2,1+2k}+a_{3,1+4k}+a_{4,2}$
\mbox{}\hspace{0.90in} $=(2n+1)+(n+2-1-2k)+(n+2k+1)+(4n+1-2)$
\mbox{}\hspace{0.90in} $=8n+2$.
\noindent
Then $D$ is also a regular SAMS$(n,4)$.
\end{proof}
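The proof of Theorem \ref{mainth5-1} is again fully constructive; the sketch below (ours, reusing \texttt{bracket} and \texttt{is\_regular\_sams}) builds the $4\times n$ array $A$, places it via $g_r$ and $g_c$, and checks the result for a small case.
\begin{verbatim}
def sams_n_4(n):
    """The regular SAMS(n,4) of the proof above, for n = 1,5 (mod 6), n >= 7."""
    m = (n - 1) // 2

    def a(i, j):
        if i == 1:
            return 2 * n + (j + 1) // 2 if j % 2 else 2 * n + m + 1 + j // 2
        if i == 2:
            return n + 2 - j
        if i == 3:
            if j == 1:
                return 1
            return n + (j + 1) // 2 if j % 2 else n + m + 1 + j // 2
        return 4 * n + 1 - j

    D = [[0] * n for _ in range(n)]
    for ip in range(1, 5):
        for jp in range(1, n + 1):
            gr = bracket(ip - jp - 1, n)     # g_r(i',j')
            gc = bracket(2 * jp - 3, n)      # g_c(i',j')
            D[gr - 1][gc - 1] = a(ip, jp)
    return D

assert is_regular_sams(sams_n_4(7), 7, 4)
\end{verbatim}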
We restate our main theorem in the following and prove it.
\textbf{Theorem 1.3}\ \ \
For any $n\geq 5$ and $n\equiv1,5 \pmod 6$, there exists a regular SAMS$(n,d)$ if and only if $2\leq d\leq n-1$.
\noindent\textbf{ Proof} \ \ \
It is clear that there does not exist
a regular SAMS$(n,1)$.
For each $n\equiv1,5 \pmod 6$ and $n\geq 5$, there exists a
regular SAMS$(n,2)$ by Theorem \ref{SAMS(n,2)}.
For each $n\equiv1,5 \pmod 6$, $d=3,5$ and $n>d$, there exists a
regular SAMS$(n,d)$ by Lemma \ref{03}.
For each $n\equiv1,5 \pmod 6$ and $n\geq 7$, there exists a
regular SAMS$(n,4)$ by Theorem \ref{mainth5-1}.
For each $n\equiv1,5 \pmod 6$ and $d=n-1,n-2$, there exists a
regular SAMS$(n,d)$ by Lemmas \ref{01}-\ref{02}.
For each $n\equiv1,5 \pmod 6$ and $d\in[6,n-3]$, there exists a
regular SAMS$(n,d)$ by Theorem \ref{mainth4-1}. The proof is completed.
\vskip 10pt
\noindent\textbf{Acknowledgements}\ \ The authors would like to thank Professor Zhu Lie of Suzhou University for his encouragement and many helpful suggestions.
\vskip 12pt
| {
"timestamp": "2020-02-20T02:08:30",
"yymm": "2002",
"arxiv_id": "2002.07357",
"language": "en",
"url": "https://arxiv.org/abs/2002.07357",
"abstract": "Graph labeling is a well-known and intensively investigated problem in graph theory. Sparse anti-magic squares are useful in constructing vertex-magic labeling for graphs. For positive integers $n,d$ and $d<n$, an $n\\times n$ array $A$ based on $\\{0,1,\\cdots,nd\\}$ is called \\emph{a sparse anti-magic square of order $n$ with density $d$}, denoted by SAMS$(n,d)$, if each element of $\\{1,2,\\cdots,nd\\}$ occurs exactly one entry of $A$, and its row-sums, column-sums and two main diagonal sums constitute a set of $2n+2$ consecutive integers. An SAMS$(n,d)$ is called \\emph{regular} if there are exactly $d$ positive entries in each row, each column and each main diagonal. In this paper, we investigate the existence of regular sparse anti-magic squares of order $n\\equiv1,5\\pmod 6$, and it is proved that for any $n\\equiv1,5\\pmod 6$, there exists a regular SAMS$(n,d)$ if and only if $2\\leq d\\leq n-1$.",
"subjects": "Combinatorics (math.CO)",
"title": "Constructions of regular sparse anti-magic squares",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.983596967003007,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.7099044227823195
} |
https://arxiv.org/abs/1807.03462 | A note on breaking ties among sample medians | Given samples $x_1,\cdots,x_n$, it is well known that any sample median value (not necessarily unique) minimizes the absolute loss $\sum_{i=1}^n |q-x_i|$. Interestingly, we show that the minimizer of the loss $\sum_{i=1}^n|q-x_i|^{1+\epsilon}$ exhibits a singular perturbation behaviour that provides a unique definition for the sample median as $\epsilon \rightarrow 0$. This definition is the unique point among all candidate median values that balances the $logarithmic$ moment of the empirical distribution. The result generalizes directly to breaking ties among sample quantiles when the quantile regression loss is modified in the same way. | \section{Introduction}
Given samples $x_1,\cdots,x_n$, it is well known that
the sample mean $n^{-1}\sum_i x_i$ is the unique minimizer of the empirical squared loss $\mathbb{E}_{n}(\theta-X)^{2}=n^{-1}\sum_{i:x_i\le\theta}(\theta-x_i)^{2}+n^{-1}\sum_{i:x_i>\theta}(x_i-\theta)^{2}$.
This follows from the first order condition
\[
n^{-1}\sum_{i:x_i\le\theta}(\theta-x_i) = n^{-1}\sum_{i:x_i>\theta}(x_i-\theta),
\]
which can be seen as finding the point $\theta$ that balances the
first moment of the distribution.
It is also well known that the sample median need not be unique, but can take on an interval of values if $n$ is even. If it is the
absolute loss $\mathbb{E}_{n}|\theta-X|=n^{-1}\sum_{i:x_i\le\theta}(\theta-x_i)+n^{-1}\sum_{i:x_i>\theta}(x_i-\theta)$
that one is interested in minimizing, then any median value satisfying
$F_n(\theta)=1/2$ (where $F_n(x) = n^{-1}\sum_i I(x_i \le x)$ is the empirical distribution) is a solution to the first order condition\footnote{Since $\sum_{i:x_i\leq\theta}(\theta-x_i)$ has a subderivative whenever $\theta=x_i$, $(\theta-x_i)^0$ in the first order condition \eqref{eq:0-moment} is allowed to take on any value in the interval $[0,1]$ when $\theta=x_i$.}
\begin{equation}
\label{eq:0-moment}
\underbrace{n^{-1}\sum_{i:x_i\le\theta} (\theta-x_i)^0}_{F_n(\theta)} = \underbrace{n^{-1}\sum_{i:x_i>\theta} (x_i-\theta)^0}_{1-F_n(\theta)},
\end{equation}
which seeks any point that balances the zero-th moment of the empirical distribution. Informally, the non-uniqueness of the median can be attributed to
the fact that merely balancing the zero-th moment does not provide enough `discriminative' power, while balancing the first moment does.
In order to report a unique sample median, some method of breaking ties among candidate median values is necessary. Textbook treatments and software packages typically define the sample median as the midpoint of the interval \citep{hf}. Equivalent problems emerge in the calculations of sample quantiles in general. A variety of alternative estimators based on interpolation, linear combinations of order statistics, or smoothing-type approaches \citep{hd,parrish,sv,sm,yang} have been proposed, typically under the assumption of IID samples from a population with a uniquely defined quantile (e.g., when the population distribution is continuous).
In this note, we show that balancing an ever so slightly higher order moment than the zero-th one leads to a way to tiebreak among the sample medians. Recalling that $\log x$ is asymptotically dominated by $x^p$ for any $p>0$, consider choosing $\theta$ to balance the \textit{logarithmic} moment:
\begin{equation}
\sum_{i:x_i<\theta}\log(\theta-x_i) = \sum_{i:x_i>\theta}\log(x_i-\theta).\label{eq:log-moment}
\end{equation}
We show that this is equivalent to the minimization of $\mathbb{E}_{n}|\theta-X|^{1+\epsilon}$ in the limit $\epsilon\downarrow0$: The unique minimizer of $\mathbb{E}_{n}|\theta-X|^{1+\epsilon}$ converges to a candidate value for the median as $\epsilon \downarrow 0$. If there are multiple candidate values, then the one that balances (\ref{eq:log-moment}) is the unique limit. This singular perturbation behaviour of the first order condition converging to (\ref{eq:log-moment}) rather than to (\ref{eq:0-moment}) gives rise to an interesting way for defining the median uniquely. The idea generalizes directly to defining unique sample quantiles $q_{\alpha}$ when the quantile regression loss is modified in the same way.
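Numerically, the tie-breaking rule \eqref{eq:log-moment} is easy to apply: on the open interval between the two middle order statistics the difference of the two sides is continuous and strictly increasing, so bisection finds the balancing point. The Python sketch below is ours and only illustrates the median case $\alpha=1/2$.
\begin{verbatim}
import math

def log_moment_median(xs, iters=200):
    """Tie-broken sample median: the point balancing the logarithmic moment."""
    xs = sorted(xs)
    n = len(xs)
    if n % 2 == 1:
        return xs[n // 2]                 # the median is already unique
    lo, hi = xs[n // 2 - 1], xs[n // 2]
    if lo == hi:
        return lo
    def h(q):                             # increasing on (lo, hi)
        return (sum(math.log(q - x) for x in xs if x < q)
                - sum(math.log(x - q) for x in xs if x > q))
    a, b = lo, hi
    for _ in range(iters):
        mid = (a + b) / 2
        if h(mid) < 0:
            a = mid
        else:
            b = mid
    return (a + b) / 2
\end{verbatim}
For a symmetric sample such as $\{0,1\}$ this returns the midpoint $0.5$, while for $\{0,1,2,7\}$ the balancing point lies strictly between $1.5$ and $2$.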
\section{Result}
Given $\alpha \in (0,1)$, we define a modified version of the weighted absolute loss
for quantile regression as
\begin{equation}\label{eq:loss}
L_{\alpha,\epsilon}(x,q)=\begin{cases}
(1-\alpha)(q-x)^{1+\epsilon} & x\leq q\\
\alpha(x-q)^{1+\epsilon} & x> q
\end{cases}.
\end{equation}
If $\epsilon=0$ then we have the usual loss used in quantile regression,
whose expectation with respect to an empirical distribution $F_{n}(x)$
is minimized by any $\alpha$-quantile $q_{\alpha}$ satisfying $F_{n}(q_{\alpha})=\alpha$. The median naturally corresponds to the case where $\alpha = 1/2$.
The expectation of $L_{\alpha,\epsilon}(x,q)$ is
\begin{equation}
\mathbb{E}_{n}L_{\alpha,\epsilon}(x,q)=\frac{1-\alpha}{n}\sum_{i:x_i\le q}(q-x_i)^{1+\epsilon}+\frac{\alpha}{n}\sum_{i:x_i> q}(x_i-q)^{1+\epsilon},\label{eq:E-loss}
\end{equation}
and its derivative at $q$ is
\begin{equation}\label{eq:gradient}
\frac{1-\alpha}{n}\sum_{i:x_i\le q}(q-x_i)^\epsilon-\frac{\alpha}{n}\sum_{i:x_i> q}(x_i-q)^\epsilon
\end{equation}
up to a factor $1+\epsilon$.
When $\epsilon>0$, $\mathbb{E}_{n}L_{\alpha,\epsilon}(x,q)$ has a unique minimizer because it is strictly convex in $q$. The minimizer balances the weighted $\epsilon$-th order moment in (\ref{eq:gradient}), whereas for $\epsilon=0$ the zero-th order moment may be balanced by many values. Lemma \ref{lem:main} below shows that the minimization of $\mathbb{E}_{n}L_{\alpha,\epsilon}(x,q)$ as $\epsilon\downarrow0$ is qualitatively very different from the minimization of $\mathbb{E}_{n}L_{\alpha,0}(x,q)$.
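For $\epsilon>0$ the derivative \eqref{eq:gradient} is nondecreasing in $q$ and changes sign exactly once, so the minimizer $q_{\alpha,\epsilon}$ can be located by bisection on its sign. The short sketch below (ours) makes it easy to observe numerically the convergence described in Lemma \ref{lem:main} below.
\begin{verbatim}
def q_alpha_eps(xs, alpha, eps, iters=200):
    """Unique minimizer of E_n L_{alpha,eps}, via bisection on the derivative's sign."""
    def deriv(q):
        return ((1 - alpha) * sum((q - x) ** eps for x in xs if x <= q)
                - alpha * sum((x - q) ** eps for x in xs if x > q))
    a, b = min(xs) - 1.0, max(xs) + 1.0
    for _ in range(iters):
        mid = (a + b) / 2
        if deriv(mid) < 0:
            a = mid
        else:
            b = mid
    return (a + b) / 2

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, q_alpha_eps([0.0, 1.0, 2.0, 7.0], 0.5, eps))
\end{verbatim}
As $\epsilon$ decreases, the printed values approach the log-moment median of the sample, in line with part (ii) of the lemma.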
\vspace{0.1in}
\begin{lem}\label{lem:main} Let $q_{\alpha,\epsilon}$ be the minimizer of $\mathbb{E}_{n}L_{\alpha,\epsilon}(x,q)$.
\noindent (i) Suppose there exists a unique $\alpha$-quantile $q_{\alpha}$, i.e. $F_{n}(q_{\alpha}-)<\alpha$ and $F_{n}(q_{\alpha})>\alpha$.
Then it is the limit of $q_{\alpha,\epsilon}$ as $\epsilon\downarrow 0$.
\noindent (ii) If no unique $\alpha$-quantile
exists, then $F_{n}(q)=\alpha$ in some interval $[q_{\alpha}^{L},q_{\alpha}^{H})$.
The unique solution $q^{\log}_{\alpha} \in (q_{\alpha}^{L},q_{\alpha}^{H})$ that balances
the weighted log-moment
\begin{equation}
(1-\alpha)\sum_{i:x_i< q}\log(q-x_i)-\alpha\sum_{i:x_i> q}\log(x_i-q)=0\label{eq:wlog-moment}
\end{equation}
is the limit of $q_{\alpha,\epsilon}$ as $\epsilon\downarrow 0$.
\end{lem}
The intuition for the result is simple but elegant: Perturbing $\epsilon$ about 0 yields approximations for the terms
\[
n^{-1}\sum_{i:x_i\le q}(q-x_i)^{\epsilon} \approx F_n(q-)+\frac{\epsilon}{n}\sum_{i:x_i< q}\log(q-x_i),
\]
\[
n^{-1}\sum_{i:x_i > q}(x_i-q)^{\epsilon} \approx 1-F_n(q)+\frac{\epsilon}{n}\sum_{i:x_i > q}\log(x_i-q).
\]
Ignoring differences between $F_n(q-)$ and $F_n(q)$ for a moment, the first order condition obtained from setting the derivative (\ref{eq:gradient}) to zero is
\[
F_{n}(q)-\alpha
+\frac{\epsilon}{n}\left\{ (1-\alpha)\sum_{i:x_i\le q}\log(q-x_i) - \alpha\sum_{i:x_i>q}\log(x_i-q)\right\} \approx 0.
\]
The dominant term above is $F_n(q)-\alpha$, so the limiting minimizer has to be an $\alpha$-quantile. Among the candidate $\alpha$-quantiles $[q_\alpha^L,q_\alpha^H)$ in case (ii), the term in the curly brackets becomes dominant, giving rise to the logarithmic moment condition (\ref{eq:wlog-moment}).
\begin{proof}
For case (i) where there is a unique $\alpha$-quantile $q_{\alpha}$ (at the location of one of the samples $x_i$), set $q=q_{\alpha}-\delta$ for a small $\delta>0$ and use Taylor's theorem to obtain
\[
n^{-1}\sum_{i:x_i\le q}(q-x_i)^{\epsilon} =
F_{n}(q_{\alpha}-)+\mathcal{O}(\epsilon),
\]
\[
n^{-1}\sum_{i:x_i > q}(x_i-q)^{\epsilon} = 1-F_{n}(q_{\alpha}-)+\mathcal{O}(\epsilon).
\]
The derivative (\ref{eq:gradient}) at $q=q_{\alpha}-\delta$ is then $F_{n}(q_{\alpha}-)-\alpha+\mathcal{O}(\epsilon)<0$
for $\epsilon$ small enough. Likewise, the derivative at $q=q_{\alpha}+\delta$
is $F_{n}(q_{\alpha})-\alpha+\mathcal{O}(\epsilon)>0$ for $\epsilon$
small enough. Given that $\mathbb{E}_{n}L_{\alpha,\epsilon}(x,q)$ is strictly convex, its minimizer $q_{\alpha,\epsilon}$ must then lie within $(q_\alpha-\delta,q_\alpha+\delta)$ for $\epsilon$ sufficiently small.
For case (ii), note that $F_{n}(x)$ has atoms at $x=q_{\alpha}^{L},q_{\alpha}^{H}$
but none in $(q_{\alpha}^{L},q_{\alpha}^{H})$. Hence within this
interval, the first sum in (\ref{eq:wlog-moment}) is increasing
in $q$ while the second one is decreasing. Moreover the left hand
side of (\ref{eq:wlog-moment}) approaches $-\infty$ as $q\downarrow q_{\alpha}^{L}$,
and approaches $+\infty$ as $q\uparrow q_{\alpha}^{H}$. Hence (\ref{eq:wlog-moment})
has a unique solution $q_{\alpha}^{\log}$ in $(q_{\alpha}^{L},q_{\alpha}^{H})$. Within
this interval, applying Taylor's theorem shows that
\[
n^{-1}\sum_{i:x_i\le q}(q-x_i)^{\epsilon} = \alpha +\frac{\epsilon}{n}\sum_{i:x_i< q}\log(q-x_i) + \mathcal{O}(\epsilon^{2}),
\]
\[
n^{-1}\sum_{i:x_i > q}(x_i-q)^{\epsilon}=1-\alpha+\frac{\epsilon}{n}\sum_{i:x_i > q}\log(x_i-q)+\mathcal{O}(\epsilon^{2}),
\]
so the derivative \eqref{eq:gradient} is
\[
(1-\alpha)\sum_{i:x_i< q}\log(q-x_i)-\alpha\sum_{i:x_i > q}\log(x_i-q) + \mathcal{O}(\epsilon)
\]
up to a factor $\epsilon/n$. For a small $\delta>0$, the derivative of $\mathbb{E}_{n}L_{\alpha,\epsilon}(x,q)$ at $q = q_{\alpha}^{\log} - \delta$ is then negative for a sufficiently small $\epsilon$, and likewise the derivative at $q = q_{\alpha}^{\log} + \delta$ is positive for $\epsilon$ small enough. The result then follows from the same line of argument for case (i).
\end{proof}
\section{Discussion}
This note serves to show that the introduction of a homotopy between the squared loss (which has a unique minimizer) and the absolute loss (which may have multiple minimizers) can be a means for resolving the non-uniqueness of the sample median. Our result may have implications for a broader family of problems, including non-uniqueness issues that arise in least absolute deviations regression and in quantile regression. While conceptual in value, our result provides insight into a canonical statistical problem and may spur practical innovations in future work.
https://arxiv.org/abs/1002.4361 | A unification of permutation patterns related to Schubert varieties | We obtain new connections between permutation patterns and singularities of Schubert varieties, by giving a new characterization of Gorenstein varieties in terms of so called bivincular patterns. These are generalizations of classical patterns where conditions are placed on the location of an occurrence in a permutation, as well as on the values in the occurrence. This clarifies what happens when the requirement of smoothness is weakened to factoriality and further to Gorensteinness, extending work of Bousquet-Melou and Butler (2007), and Woo and Yong (2006). We also show how mesh patterns, introduced by Branden and Claesson (2011), subsume many other types of patterns and define an extension of them called marked mesh patterns. We use these new patterns to further simplify the description of Gorenstein Schubert varieties and give a new description of Schubert varieties that are defined by inclusions, introduced by Gasharov and Reiner (2002). We also give a description of 123-hexagon avoiding permutations, introduced by Billey and Warrington (2001), Dumont permutations and cycles in terms of marked mesh patterns. | \section{Introduction}
In this paper we exhibit new connections between permutation patterns and singularities
of Schubert varieties $X_\pi$ in the complete flag variety $\Fl(\CC^n)$,
by giving a new characterization of Gorenstein varieties in terms of which \emph{bivincular patterns} the
permutation $\pi$ avoids. Bivincular patterns, defined by Bousquet-M\'elou, Claesson,
Dukes and Kitaev~\cite{BCDK10},
are generalizations of classical patterns where conditions are placed on the
location of an occurrence in a permutation, as well as on the values in
the occurrence.
This clarifies what happens when the requirement of smoothness is weakened
to factoriality and further to Gorensteinness, extending work of
Bousquet-M\'elou and Butler~\cite{MR2376109}, and Woo and Yong~\cite{MR2264071}.
We also prove results that translate some known patterns in the literature into
bivincular patterns. In particular we will give a characterization of the
Baxter permutations.
Table \ref{table} summarizes the main results in the paper related to bivincular patterns.
The first line in the table is due to Ryan~\cite{MR870962}, Wolper~\cite{MR1013667} and Lakshmibai and Sandhya~\cite{MR1051089}
and says that a Schubert variety $X_\pi$ is non-singular (or smooth) if and only if $\pi$ avoids
the patterns $1324$ and $2143$. Note that some authors use a different convention for the correspondence between permutations
and Schubert varieties, which results in the reversal of the permutations. These authors would then use the patterns $4231$, $3412$ to
identify the smooth Schubert varieties. Saying that the variety
$X_\pi$ is non-singular means that every local ring is regular.
\begin{table}[ht]
\caption{Connections between singularity properties and bivincular patterns
}
\label{table}
\begin{tabular}{ l l l }
\hline\noalign{\smallskip}
$X_\pi$ is & \multicolumn{2}{l}{The permutation $\pi$ avoids the patterns} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
smooth & $\vinc{4}{1/2, 2/1, 3/4, 4/3}{}$ and $\vinc{4}{1/1, 2/3, 3/2, 4/4}{}$ & \\
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
factorial & $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$ and $\vinc{4}{1/1, 2/3, 3/2, 4/4}{}$ & \\
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
\multirow{2}{*}{Gorenstein} & \multirow{2}{*}{$\bivinc{5}{1/3,2/1,3/5,4/2,5/4}{2}{3}$,
$\bivinc{5}{1/2,2/4,3/1,4/5,5/3}{3}{2}$} &
and associated Grassmannians avoid \\
& & two bivincular pattern families \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}%
A weakening of
this condition is the requirement that every local ring only be a unique
factorization domain; a variety satisfying this is a \emph{factorial} variety.
Bousquet-M\'elou and Butler~\cite{MR2376109} proved a conjecture
stated by Woo and Yong (personal communication) that factorial
Schubert varieties are those that correspond to permutations avoiding
$1324$ and bar-avoiding $21\overline{3}54$. In
the terminology of Woo and Yong~\cite{MR2264071} the bar-avoidance of the latter
pattern corresponds to avoiding $2143$ with Bruhat restriction
$\brur{1}{4}$, or equivalently, interval avoiding
$[2413, 2143]$ in the terminology of Woo and Yong~\cite{MR2422304}. However, as remarked by
Steingr\'imsson~\cite{steingrimsson-2008}, bar-avoiding $21\overline{3}54$
is equivalent to avoiding the vincular pattern
$\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$. See Theorem \ref{thm:Ulfarsson-Factorial} for the details.
A further weakening is to only require that the local rings of $X_\pi$ be
Gorenstein local rings, in which case we say that $X_\pi$ is a \emph{Gorenstein}
variety. Woo and Yong~\cite{MR2264071} showed that $X_\pi$ is Gorenstein
if and only if it avoids two patterns with two Bruhat restrictions each,
as well as satisfying a certain condition on descents. We will translate
their results into avoidance of bivincular patterns;
see Theorem \ref{thm:Ulfarsson-Gorenstein}.
We also show how \emph{mesh patterns}, introduced by Br\"and\'en and Claesson~\cite{BC11},
subsume many other types of patterns, such as interval patterns defined by Woo and
Yong~\cite{MR2422304}, and define an extension of them
called \emph{marked mesh patterns}. We use these new patterns to further simplify
the description of Gorenstein Schubert varieties (see Theorem \ref{thm:Ulfarsson-Gorenstein2})
and give a new description of Schubert
varieties that are \emph{defined by inclusions}, introduced by Gasharov and Reiner~\cite{MR1934291}
(see Theorem \ref{thm:dbi}).
We also give a description of $123$-hexagon avoiding permutations, introduced by Billey and Warrington~\cite{MR1826948},
in terms of the avoidance of $123$ and one marked mesh pattern (see Proposition \ref{prop:123hex}).
Finally, in Example \ref{ex:markedmesh}, we describe Dumont permutations~\cite{0297.05004} (of the first and second kind) and cycles with marked mesh patterns.
\begin{table}[ht]
\caption{Connections between singularity properties and marked mesh patterns}
\label{table2}
\begin{tabular}{ l l l }
\hline\noalign{\smallskip}
$X_\pi$ is & \multicolumn{2}{l}{The permutation $\pi$ avoids the patterns} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
smooth & \multicolumn{2}{l}{$\patternsbm{scale=0.75}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{}{}$ and $\vinc{4}{1/1, 2/3, 3/2, 4/4}{}$} \\
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
factorial & \multicolumn{2}{l}{$\patternsbm{scale=0.75}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{2/0, 2/1, 2/2, 2/3, 2/4}{}$ and $\vinc{4}{1/1, 2/3, 3/2, 4/4}{}$} \\
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
defined by inclusions &
\multicolumn{2}{l}{$\patternsbm{scale=0.75}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{}{3/1/4/2/{},1/3/2/4/\scriptscriptstyle{1}}$,
$\patternsbm{scale=0.75}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{}{4/2/5/3/\scriptscriptstyle{1},3/0/4/1/\scriptscriptstyle{1}}$ and $\vinc{4}{1/1, 2/3, 3/2, 4/4}{}$} \\
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
\multirow{2}{*}{Gorenstein} & \multirow{2}{*}{$\patternsbm{scale=0.75}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{2/0, 2/1, 2/2, 2/3, 2/4, 0/2, 1/2, 3/2, 4/2}{3/1/4/2/{},1/3/2/4/\scriptscriptstyle{1}}$} &
and associated Grassmannians avoid \\
& & two mesh pattern families \\%is balanced \\
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
\noalign{\smallskip}\noalign{\smallskip}\noalign{\smallskip}
$123$-hexagon av.\ & \!$\patternsbm{scale=0.75}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{}{2/4/3/5/\scriptscriptstyle{1}, 0/2/1/3/\scriptscriptstyle{1}, 4/2/5/3/\scriptscriptstyle{1}, 2/0/3/1/\scriptscriptstyle{1}}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}%
\section{Three types of pattern avoidance} \label{sec:three-types}
Here we recall definitions of different types of patterns. We will
use one-line notation for all permutations, \textit{e.g.}, write $\pi = 312$ for
the permutation in $S_3$ that satisfies $\pi(1) = 3$, $\pi(2) = 1$ and
$\pi(3) = 2$.
The three types correspond to:
\begin{itemize}
\item Bivincular patterns, subsuming vincular patterns and classical patterns.
\item Barred patterns.
\item Bruhat-restricted patterns.
\end{itemize}
\subsection{Bivincular patterns}
We denote the symmetric group on $n$ letters by $S_n$ and refer to its
elements as \emph{permutations}. We write permutations as words $\pi
= a_1a_2\dotsm a_n$, where the letters are distinct and come from the
set $\{1, 2, \dotsc, n\}$. A \emph{pattern} $p$ is also a
permutation, but we are interested in when a pattern is
\emph{contained} in a permutation $\pi$ as described below.
An \emph{occurrence} (or \emph{embedding}) of a pattern $p$ in a
permutation $\pi$ is classically defined as a subsequence in $\pi$, of
the same length as $p$, whose letters are in the same relative order
(with respect to size) as those in $p$. For example, the pattern $123$
corresponds to an increasing subsequence of three letters in a
permutation. If we use the notation $1_\pi$ to denote the first,
$2_\pi$ for the second and $3_\pi$ for the third letter in an occurrence,
then we are simply requiring that $1_\pi < 2_\pi < 3_\pi$.
If a permutation has no occurrence of a pattern $p$ we
say that $\pi$ \emph{avoids} $p$.
\begin{example}
The permutation $32415$ contains two occurrences of the pattern $123$
corresponding to the sub-words $345$ and $245$. It avoids the pattern
$132$.
\end{example}
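For readers who prefer code, the following minimal Python sketch (not taken from any of the cited works) implements this brute-force notion of occurrence and recovers the two occurrences above.
\begin{verbatim}
from itertools import combinations

def flatten(word):
    # standardization: the permutation in the same relative order as word
    order = sorted(word)
    return tuple(order.index(v) + 1 for v in word)

def occurrences(perm, patt):
    # all position sets (0-based) where the classical pattern patt occurs
    return [idx for idx in combinations(range(len(perm)), len(patt))
            if flatten([perm[i] for i in idx]) == tuple(patt)]

pi = (3, 2, 4, 1, 5)
print(occurrences(pi, (1, 2, 3)))   # the sub-words 345 and 245
print(occurrences(pi, (1, 3, 2)))   # empty: 32415 avoids 132
\end{verbatim}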
The occurrence of a pattern in a permutation $\pi$ can also be defined
as a subset of the diagram $G(\pi) = \{ (i,\pi(i)) \,|\, 1 \leq i \leq n \}$,
that ``\hspace{0.15pt}\nolinebreak looks like\hspace{0.5pt}\nolinebreak''\ the diagram of the pattern. Below is the diagram of
the pattern $123$ and two copies of the digram of the permutation $32415$
where we have indicated the two occurrences of the pattern by circling
the dots.
\[
\pattern{scale=1}{3}{1/1, 2/2, 3/3}{}
\qquad
\imopattern{scale=1}{5}{1/3, 2/2, 3/4, 4/1, 5/5}{}{1/3, 3/4, 5/5}{}
\qquad
\imopattern{scale=1}{5}{1/3, 2/2, 3/4, 4/1, 5/5}{}{2/2, 3/4, 5/5}{}
\]
In a \emph{vincular pattern} two adjacent letters may or may not
be underlined. If they are underlined it means that the
corresponding letters in the permutation $\pi$ must be adjacent.
\begin{example}
The permutation $32415$ contains one occurrence of the pattern
$\vinc{3}{1/1,2/2,3/3}{1}$ corresponding to the sub-word $245$. It avoids the
pattern $\vinc{3}{1/1,2/2,3/3}{2}$.
The permutation $\pi = 324615$ has one occurrence of the
pattern $2143$, namely the sub-word $3265$, but no
occurrence of $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$, since $2$ and $6$ are not adjacent in
$\pi$.
\end{example}
It is also convenient to consider vincular patterns as certain types of diagrams.
We use dark vertical strips between dots that are required to be adjacent in the
pattern. Notice that only the second occurrence of the classical pattern $123$ satisfies
the requirements of the vincular pattern, since in the former the dot corresponding
to $2$ in $\pi$ lies in the forbidden strip.
\[
\pattern{scale=1}{3}{1/1, 2/2, 3/3}{1/0, 1/1, 1/2, 1/3}
\qquad
\imopattern{scale=1}{5}{1/3, 2/2, 3/4, 4/1, 5/5}{}{1/3, 3/4, 5/5}{1/0, 1/1, 1/2, 1/3, 1/4, 1/5, 2/0, 2/1, 2/2, 2/3, 2/4, 2/5}
\qquad
\imopattern{scale=1}{5}{1/3, 2/2, 3/4, 4/1, 5/5}{}{2/2, 3/4, 5/5}{2/0, 2/1, 2/2, 2/3, 2/4, 2/5}
\]
\noindent
These types of patterns have been studied sporadically for
a very long time but were not defined in full generality until
Babson and Steingr\'imsson~\cite{MR1758852}.
This notion was generalized further in Bousquet-M\'elou et al~\cite{BCDK10}:
In a \emph{bivincular pattern} we are also allowed to place
restrictions on the values that occur in an embedding of a pattern. We
use two-line notation to describe these patterns. If there is a line over
the letters $i$, $i+1$ in the top row, it means that the corresponding letters in an occurrence
must be adjacent in values. This
is best described by an example:
\begin{example}
An occurrence of the pattern
$\bivinc{3}{1/1,2/2,3/3}{}{2}$ in a permutation $\pi$ is an increasing subsequence
of three letters, such that the third one is larger than the second by exactly
$1$, or more simply, $3_\pi = 2_\pi + 1$.
The permutation $32415$ contains two occurrence of this bivincular pattern
corresponding to the sub-words $345$ and $245$. The second one is
also an occurrence of $\bivinc{3}{1/1,2/2,3/3}{1}{2}$. The permutation
avoids the bivincular pattern $\bivinc{3}{1/1,2/2,3/3}{2}{1,2}$.
\end{example}
By also using horizontal strips we are able to draw diagrams of bivincular patterns.
Below is the diagram of $\bivinc{3}{1/1,2/2,3/3}{1}{2}$ together with one occurrence
of it.
\[
\pattern{scale=1}{3}{1/1, 2/2, 3/3}{1/0, 1/1, 1/2, 1/3, 0/2, 2/2, 3/2}
\qquad
\imopattern{scale=1}{5}{1/3, 2/2, 3/4, 4/1, 5/5}{}{2/2, 3/4, 5/5}{2/0, 2/1, 2/2, 2/3, 2/4, 2/5,
0/4, 1/4, 3/4, 4/4, 5/4}
\]
\noindent
We will also use the notation of \cite{BCDK10} to write bivincular patterns:
A bivincular pattern consists of a triple $(p,X,Y)$ where $p$ is a permutation
in $S_k$ and $X,Y$ are subsets of $\dbrac{0,k} = \{0,\dotsc,k\}$. With this notation an
occurrence of a bivincular pattern in a permutation $\pi = \pi_1\dotsm\pi_n$ in $S_n$
is a subsequence $\pi_{i_1}\dotsm\pi_{i_k}$ such that the letters in the subsequence are in
the same relative order as the letters of $p$ and
\begin{itemize}
\item for all $x$ in $X$, $i_{x+1} = i_x + 1$; and
\item for all $y$ in $Y$, $j_{y+1} = j_y + 1$, where
$\{ \pi_{i_1}, \dotsc, \pi_{i_k} \} = \{ j_1, \dotsc, j_k \}$ and
$j_1 < j_2 < \dotsm < j_k$.
\end{itemize}
By convention we put $i_0 = 0 = j_0$ and $i_{k+1} = n+1 = j_{k+1}$.
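The definition translates directly into a brute-force containment test. The following Python sketch (for illustration only; permutations are assumed to be given in one-line notation as tuples) checks containment of $(p,X,Y)$ with the boundary convention above.
\begin{verbatim}
from itertools import combinations

def flatten(word):
    order = sorted(word)
    return tuple(order.index(v) + 1 for v in word)

def contains_bivincular(perm, p, X=(), Y=()):
    # does perm contain the bivincular pattern (p, X, Y)?
    n, k = len(perm), len(p)
    for idx in combinations(range(1, n + 1), k):      # positions i_1 < ... < i_k
        sub = [perm[i - 1] for i in idx]
        if flatten(sub) != tuple(p):
            continue
        i = (0,) + idx + (n + 1,)                     # i_0, ..., i_{k+1}
        j = (0,) + tuple(sorted(sub)) + (n + 1,)      # j_0, ..., j_{k+1}
        if all(i[x + 1] == i[x] + 1 for x in X) and \
           all(j[y + 1] == j[y] + 1 for y in Y):
            return True
    return False

pi = (3, 2, 4, 1, 5)
print(contains_bivincular(pi, (1, 2, 3), Y=(2,)))            # True: 345 and 245
print(contains_bivincular(pi, (1, 2, 3), X=(1,), Y=(2,)))    # True: 245 only
print(contains_bivincular(pi, (1, 2, 3), X=(2,), Y=(1, 2)))  # False
\end{verbatim}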
\begin{example}
We can translate all of the patterns we have discussed above into this
notation:
\begin{align*}
123 &= (123,\tom,\tom),
&132 &= (132,\tom,\tom),
&\vinc{3}{1/1,2/2,3/3}{1} &= (123,\{1\},\tom),\\
\vinc{3}{1/1,2/2,3/3}{2} &= (123,\{2\},\tom),
&2143 &= (2143,\tom,\tom),
&\vinc{4}{1/2, 2/1, 3/4, 4/3}{2} &= (2143,\{2\},\tom),\\
\bivinc{3}{1/1,2/2,3/3}{}{1} &= (123,\tom,\{1\}),
&\bivinc{3}{1/1,2/2,3/3}{}{1,2} &= (123,\tom,\{1,2\}),
&\bivinc{3}{1/1,2/2,3/3}{2}{1,2} &= (123,\{2\},\{1,2\}).
\end{align*}
\end{example}
\noindent
We have not considered the case when $0$ or $k$ are elements of $X$ or $Y$,
as we will not need those cases. We just remark that if $0 \in X$ then
an occurrence of $(p,X,Y)$ must start at the beginning of a permutation $\pi$,
in other words, $\pi_{i_1} = \pi_1$. The other cases are similar.
The bivincular patterns behave well with respect to the operations
reverse, complement and inverse: Given a bivincular pattern $(p,X,Y)$
we define
\begin{align*}
(p,X,Y)^\rmr &= (p^\rmr,k-X,Y), \qquad
(p,X,Y)^\rmc = (p^\rmc,X,k-Y), \\
(p,X,Y)^\rmi &= (p^\rmi,Y,X),
\end{align*}
where $p^\rmr$ is the usual reverse of the permutation $p$,
$p^\rmc$ is the usual complement of the permutation $p$, and
$p^\rmi$ is the usual inverse of the permutation $p$. Here
$k-M = \{ k-m \,|\, m \in M\}$. In \cite{BCDK10} the following is proved.
\begin{lemma}
Let $\rma$ denote one of the operations above, or a composition of them.
Then a permutation $\pi$ avoids the bivincular pattern $p$ if and only
if the permutation $\pi^\rma$ avoids the bivincular pattern
$p^\rma$.
\end{lemma}
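A short Python sketch (illustration only) of these three maps; the last line checks, for the pattern $(2143,\{2\},\tom)$, the inverse identity used later in the text.
\begin{verbatim}
def reverse(p, X, Y):
    k = len(p)
    return tuple(reversed(p)), {k - x for x in X}, set(Y)

def complement(p, X, Y):
    k = len(p)
    return tuple(k + 1 - v for v in p), set(X), {k - y for y in Y}

def inverse(p, X, Y):
    q = [0] * len(p)
    for i, v in enumerate(p, start=1):
        q[v - 1] = i
    return tuple(q), set(Y), set(X)

print(inverse((2, 1, 4, 3), {2}, set()))   # ((2, 1, 4, 3), set(), {2})
\end{verbatim}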
\subsection{Barred patterns}
We will only consider a single pattern of this type, but the general definition
is easily inferred from this special case.
We say that a permutation $\pi$ \emph{avoids the barred pattern}
$21\overline{3}54$ if $\pi$ avoids the pattern
$2143$ (corresponding to the unbarred elements)
\emph{except} where that pattern is a part of the pattern
$21354$. This notation for barred patterns was
introduced by West~\cite{W90}. It turns out that avoiding this barred
pattern is equivalent to avoiding the vincular pattern $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$; see section
\ref{sec:connections}. See also section \ref{sec:intmesh} on how to write barred
patterns as mesh patterns.
\begin{example}
The permutation $\pi = 4257613$ avoids the barred pattern
$21\overline{3}54$ since
the unique occurrence of $2143$, as the sub-word
$4276$, is contained in the sub-word $42576$ which is an occurrence of
$21354$. Note that it also avoids $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$.
\end{example}
\subsection{Bruhat-restricted patterns}
We recall the definition of Bruhat-restricted patterns from
Woo and Yong~\cite{MR2264071}. First we need the \emph{Bruhat order}
on permutations in $S_n$, defined as follows: Given integers $i<j$ in
$\dbrac{1,n}$ and a permutation $\pi \in S_n$, we define $\pi\brur{i}{j}$
as the permutation that we get from $\pi$ by swapping $\pi(i)$ and $\pi(j)$.
For example $24153\brur{1}{4} = 54123$. We then say that $\pi\brur{i}{j}$
\emph{covers} $\pi$ if $\pi(i) < \pi(j)$ and for every $k$ with $i < k < j$
we have either $\pi(k) < \pi(i)$ or $\pi(k) > \pi(j)$. We then define the
Bruhat order as the transitive closure of the above covering relation.
This definition should be compared to the construction of the graph
$G_\pi$ in subsection \ref{subsec:FacSchubs-forest-like-perms}.
We see in our example above that $24153\brur{1}{4}$ does not
cover $24153$, since $\pi(2) = 4$ lies strictly between $\pi(1) = 2$ and $\pi(4) = 5$. Now, given a pattern
$p$ with a set of transpositions $\mathcal{T} = \{\brur{i_\ell}{j_\ell}\}$
we say that a permutation $\pi$ \emph{contains $(p,\mathcal{T})$}, or that
\emph{$\pi$ contains the Bruhat-restricted pattern $p$} (if $\mathcal{T}$
is understood from the context), if there is an embedding of $p$ in $\pi$
such that if any of the transpositions in $\mathcal{T}$ are carried out
on the embedding the resulting permutation covers $\pi$.
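A small Python sketch (illustration only, not from the paper) of the covering test; it confirms that $24153\brur{1}{4}$ does not cover $24153$.
\begin{verbatim}
def covers_after_swap(pi, i, j):
    # does the permutation obtained by swapping positions i < j (1-based) cover pi?
    if pi[i - 1] >= pi[j - 1]:
        return False
    lo, hi = pi[i - 1], pi[j - 1]
    return all(pi[k - 1] < lo or pi[k - 1] > hi for k in range(i + 1, j))

pi = (2, 4, 1, 5, 3)
print(covers_after_swap(pi, 1, 4))   # False: pi(2) = 4 lies between 2 and 5
print(covers_after_swap(pi, 3, 5))   # True: the only value in between, pi(4) = 5, exceeds 3
\end{verbatim}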
We should note that Bruhat-restricted patterns were further generalized
to \emph{intervals of patterns} in Woo and Yong~\cite{MR2422304}.
We delay the discussion of this type of pattern avoidance until
section \ref{sec:intmesh}, where we also introduce \emph{mesh patterns}
and show that an interval pattern is a special case of a mesh pattern.
In the next section we will show how these three types of patterns are
related to one another.
\section{Connections between the three types} \label{sec:connections}
\subsection{Factorial Schubert varieties and forest-like permutations}
\label{subsec:FacSchubs-forest-like-perms}
Bousquet-M\'elou and Butler~\cite{MR2376109} defined and studied
\emph{forest-like} permutations. Here we recall their
definition: Given a permutation
$\pi$ in $S_n$, construct a graph $G_\pi$ on the vertex set $\{1, 2, \dotsc, n\}$
by joining $i$ and $j$ if
\begin{enumerate}
\item $i < j$ and $\pi(i) < \pi(j)$; and
\item there is no $k$ such that $i < k < j$ and $\pi(i) < \pi(k) < \pi(j)$.
\end{enumerate}
\noindent
The permutation $\pi$ is \emph{forest-like} if the graph $G_\pi$ is a forest.
In light of the definition of Bruhat covering above we see that the vertices
$i$ and $j$ are connected in the graph of $G_\pi$ if and only if
$\pi\brur{i}{j}$ covers $\pi$.
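A Python sketch (illustration only, not the authors' code) that builds $G_\pi$ edge by edge and detects cycles with a union--find structure:
\begin{verbatim}
def is_forest_like(pi):
    n = len(pi)
    parent = list(range(n + 1))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if pi[i - 1] < pi[j - 1] and not any(
                    pi[i - 1] < pi[k - 1] < pi[j - 1] for k in range(i + 1, j)):
                ri, rj = find(i), find(j)     # edge {i, j} of G_pi
                if ri == rj:
                    return False              # this edge closes a cycle
                parent[ri] = rj
    return True

print(is_forest_like((3, 4, 1, 2)))   # True: G has only the edges {1,2} and {3,4}
print(is_forest_like((1, 3, 2, 4)))   # False: {1,2},{1,3},{2,4},{3,4} form a cycle
\end{verbatim}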
They then show that a permutation is forest-like if and only if it
avoids the classical pattern $1324$ and the barred pattern
$p_\text{bar} = 21\overline{3}54$. This barred pattern can
be described in terms of Bruhat-restricted embeddings and in terms of bivincular
patterns, as we now show.
\begin{figure}
\begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}]
\matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em,
text height=1.5ex, text depth=0.25ex]
{ & \substack{\text{avoiding } \\ 21\overline{3}54} & \\
\substack{\text{avoiding } \\ \vinci{4}{1/2, 2/1, 3/4, 4/3}{2}} &
& \substack{\text{avoiding } \\ 2143,\ \brur{1}{4}} \\
\substack{\text{avoiding } \\ 2143,\ \brur{2}{3}} &
& \substack{\text{avoiding } \\ \bivinci{4}{1/2,2/1,3/4,4/3}{}{2}} \\
};
\path[-,font=\scriptsize]
(m-1-2.south west) edge [double] node[description]
{(\ref{enum:barred-Bruhat-bivincular3})}
(m-2-1.north east)
(m-1-2.south east) edge [double] node[description]
{(\ref{enum:barred-Bruhat-bivincular1})}
(m-2-3.160)
(m-2-1) edge [double, shorten >=3pt, shorten <= 3pt] node[description]
{(\ref{enum:barred-Bruhat-bivincular4})}
(m-3-1)
(m-2-3) edge [double, shorten >=3pt, shorten <= 3pt] node[description]
{(\ref{enum:barred-Bruhat-bivincular2})}
(m-3-3);
\end{tikzpicture}
\caption{The barred pattern $21\overline{3}54$ gives a
connection between two bivincular patterns. The labels on the edges
correspond to the enumerated list below}
\label{fig:patterns}
\end{figure}
\begin{enumerate}
\item \label{enum:barred-Bruhat-bivincular1}
Bousquet-M\'elou and Butler \cite{MR2376109} remark that forest-like
permutations $\pi$ correspond to factorial Schubert varieties $X_\pi$ and
avoiding the barred pattern is equivalent to avoiding
$p_\text{Br} = 2143$ with Bruhat restriction $\brur{1}{4}$.
This last part is easily verified.
\item \label{enum:barred-Bruhat-bivincular2}
Avoiding $p_\text{Br} = 2143$ with Bruhat restriction $\brur{1}{4}$
is equivalent to avoiding the bivincular pattern
$p_\text{bi} = \bivinc{4}{1/2,2/1,3/4,4/3}{}{2}$,
as we will now show:
Assume $\pi$ contains the bivincular pattern $p_\text{bi}$,
so we can find an embedding of it in $\pi$ such that $3_\pi = 2_\pi + 1$. This
embedding clearly satisfies the Bruhat restriction.
Now assume that $\pi$ has an embedding of $p_\text{Br}$. If $3_\pi = 2_\pi + 1$ we
are done. Otherwise $2_\pi + 1$ is either to the right of $3_\pi$ or to the left
of $2_\pi$ (because of the Bruhat restriction). In the first
case change $3_\pi$ to $2_\pi + 1$ and we are done. In the second case replace
$2_\pi$ with $2_\pi + 1$, thus reducing the distance in values to $3_\pi$,
then repeat.
\item \label{enum:barred-Bruhat-bivincular3}
The barred pattern $p_\text{bar} = 21\overline{3}54$ has
another connection to bivincular patterns: avoiding it is equivalent
to avoiding the bivincular pattern $q_\text{biv} = \vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$,
as remarked in the survey by Steingr\'imsson~\cite{steingrimsson-2008}.
\item \label{enum:barred-Bruhat-bivincular4}
We can translate this into Bruhat-restricted embeddings as well: Avoiding
the bivincular pattern $q_\text{bi} = \vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$ is equivalent to
avoiding $q_\text{Br} = 2143$ with Bruhat restriction $\brur{2}{3}$:
Assume $\pi$ has an embedding of $q_\text{Br}$. If $1_\pi$ and $4_\pi$ are
adjacent then we are done. Otherwise look at the letter to right of $1_\pi$.
If this letter is larger than $4_\pi$ we can replace $4_\pi$ by it and we are done.
Otherwise this letter must be less than $4_\pi$, which implies, by the Bruhat
restriction, that it must also be less than $1_\pi$. In this case we replace
$1_\pi$ by this letter, and repeat.
Now assume $\pi$ has an embedding of the bivincular pattern $q_\text{bi}$.
If $1_\pi$ and $4_\pi$ are adjacent we are done. Otherwise look at the letter to the
right of $1_\pi$. This letter is either smaller than $1_\pi$ or larger than $4_\pi$.
In the first case, replace $1_\pi$ with this letter; in the second case,
replace $4_\pi$ with this letter. Then repeat if necessary.
\end{enumerate}
The above argument will be generalized in Proposition \ref{prop:Bruhat-bivinc},
but this special case gives us:
\begin{theorem}[\cite{MR2376109},\cite{steingrimsson-2008}]
\label{thm:Ulfarsson-Factorial}
Let $\pi \in S_n$. The Schubert variety $X_\pi$ is factorial if and only if
$\pi$ avoids the patterns $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$ and $1324$.
\end{theorem}
From the equivalence of the patterns in Figure \ref{fig:patterns}
we also get that a permutation $\pi$ avoids the bivincular pattern
\[
\vinc{4}{1/2, 2/1, 3/4, 4/3}{2} = (2143,\{2\},\tom)
\]
if and only if it avoids
\[
\bivinc{4}{1/2, 2/1, 3/4, 4/3}{}{2} = (2143,\tom,\{2\}).
\]
We will prove this without going through the barred pattern, and
then generalize the proof, but first of all we should note that these
bivincular patterns are inverses of one another, and that will simplify
the proof.
Assume $\pi$ contains $\bivinc{4}{1/2, 2/1, 3/4, 4/3}{}{2}$. If $1_\pi$ and $4_\pi$
are adjacent in $\pi$ we are done. Otherwise consider the element immediately
to the right of $1_\pi$. If this element is less than $2_\pi$ then replace $1_\pi$
by it
and we will have reduced the distance between $1_\pi$ and $4_\pi$. If this element
is larger than $2_\pi$ it must also be larger than $3_\pi$, since
$3_\pi = 2_\pi + 1$,
so replace $4_\pi$ by it. This will (immediately, or after several steps) produce
an occurrence of $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$.
Now assume $\pi$ contains $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$. Then
$\pi^\rmi$ contains the inverse pattern
\[
(\vinc{4}{1/2, 2/1, 3/4, 4/3}{2})^\rmi = \bivinc{4}{1/2, 2/1, 3/4, 4/3}{}{2}.
\]
Then by the above, $\pi^\rmi$ contains
$\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$, so $\pi = (\pi^\rmi)^\rmi$ contains
$(\vinc{4}{1/2, 2/1, 3/4, 4/3}{2})^\rmi =
\bivinc{4}{1/2, 2/1, 3/4, 4/3}{}{2}$.
This generalizes to:
\begin{proposition} \label{prop:bivinc-bivinc}
Let $p$ be the pattern
\[
\vinc{8}{1/\cdot, 2/\cdot, 3/\cdot, 4/1, 5/k, 6/\cdot, 7/\cdot, 8/\cdot}{4}
=
(\dotsm 1k \dotsm, \{j\},\tom)
\]
in $S_k$, where $j = p^\rmi(1)$ is the \emph{index} of $1$ in $p$.
A permutation $\pi$ in $S_n$ that avoids the pattern $p$ must also avoid the
bivincular pattern
\[
\bivincs{8}{1/1/\cdot, 2/2/\cdot, 3/\cdot/\cdot, 4/\cdot/1,
5/\cdot/k, 6/\cdot/\cdot, 7/\cdot/\cdot, 8/k/\cdot}{}{2,3,4,5,6}=
(\dotsm 1 k \dotsm, \tom, \{2,3,\dotsc,k-2\}).
\]
\end{proposition}
\begin{proof}
Assume a permutation $\pi$ contains the latter pattern in the proposition.
If $1_\pi$ and $k_\pi$ are adjacent in $\pi$ we are done. Otherwise
consider the element immediately to the right of $1_\pi$. If this element
is larger than $(k-1)_\pi$ we replace $k_\pi$ by it and are done. Otherwise
this element must be less than $(k-1)_\pi$ and therefore less than $2_\pi$,
so we can replace $1_\pi$ by it, and repeat.
\end{proof}
By applying the reverse to everything in Proposition
\ref{prop:bivinc-bivinc} we get:
\begin{corollary}
Let $p$ be the pattern
\[
\vinc{8}{1/\cdot, 2/\cdot, 3/\cdot, 4/k, 5/1, 6/\cdot, 7/\cdot, 8/\cdot}{4}
=
(\dotsm k 1 \dotsm, \{j\},\tom)
\]
in $S_k$, where $j = p^\rmi(k)$ is the \emph{index} of $k$ in $p$.
A permutation $\pi$ in $S_n$ that avoids the pattern $p$ must also avoid the
bivincular pattern
\[
\bivincs{8}{1/1/\cdot, 2/2/\cdot, 3/\cdot/\cdot, 4/\cdot/k,
5/\cdot/1, 6/\cdot/\cdot, 7/\cdot/\cdot, 8/k/\cdot}{}{2,3,4,5,6}
=
(\dotsm k1 \dotsm, \tom, \{2,3,\dotsc,k-2\}).
\]
\end{corollary}
By repeatedly applying the operations of inverse, reverse and complement
we can generate six other corollaries. We will not need them here.
\begin{example}
Let's look at some simple applications:
\begin{enumerate}
\item Consider the bivincular pattern $p_1 = \vinc{4}{1/3, 2/1, 3/4, 4/2}{2}$. Proposition
\ref{prop:bivinc-bivinc} shows that a permutation $\pi$ that avoids $p_1$ must
also avoid $\bivinc{4}{1/3, 2/1, 3/4, 4/2}{}{2}$. In fact, the converse can be
shown to be true, by taking inverses and applying the proposition.
We will say more about the pattern $p_1$ in Example \ref{exampl:Baxter}.
\item Consider the bivincular pattern $p_2 = \vinc{5}{1/3, 2/1, 3/5, 4/2, 5/4}{2}$.
The proposition shows that a permutation $\pi$ that avoids $p_2$ must also
avoid $\bivinc{5}{1/3, 2/1, 3/5, 4/2, 5/4}{}{2,3}$. We
will say more about the pattern $p_2$ in subsection \ref{subsec:Gorenstein-Bivinc}.
\end{enumerate}
\end{example}
\begin{example} \label{exampl:Baxter}
The \emph{Baxter permutations} were originally defined and studied in
relation to the ``commuting function conjecture'' of Dyer, see Baxter~\cite{MR0184217},
and were enumerated by Chung, Graham, Hoggatt and Kleiman~\cite{MR491652}.
Gire~\cite{G93} showed that these permutations
can also be described as those avoiding
the barred patterns $41\overline{3}52$ and $25\overline{1}34$.
It was then pointed out by Ouchterlony~\cite{O05} that this is equivalent to
avoiding the vincular patterns $\vinc{4}{1/3, 2/1, 3/4, 4/2}{2}$ and
$\vinc{4}{1/2, 2/4, 3/1, 4/3}{2}$.
Similarly to what we did above we can show that the Baxter permutations can
also be characterized as those avoiding the bivincular patterns
$\bivinc{4}{1/3, 2/1, 3/4, 4/2}{}{2}$ and $\bivinc{4}{1/2, 2/4, 3/1, 4/3}{}{2}$,
and this is essentially a translation of the description in \cite{MR491652}
into bivincular patterns.
\end{example}
Finally, here is an example that shows the converse of Proposition
\ref{prop:bivinc-bivinc} is not true.
\begin{example}
The permutation $\pi = 423165$ avoids the pattern
$\bivinc{5}{1/2, 2/3, 3/1, 4/5, 5/4}{}{2,3}$ but contains the pattern
$\vinc{5}{1/2, 2/3, 3/1, 4/5, 5/4}{3}$, as the sub-word $23165$.
\end{example}
\subsection{Gorenstein Schubert varieties in terms of bivincular patterns}
\label{subsec:Gorenstein-Bivinc}
Woo and Yong~\cite{MR2264071} classify those permutations $\pi$ that
correspond to Gorenstein Schubert varieties $X_\pi$. They do this using
embeddings of patterns with Bruhat restrictions, which we have described
above, and with a certain condition on the associated Grassmannian permutations
of $\pi$, which we will describe presently:
First, a \emph{descent} in a permutation $\pi$ is an integer $d$ such that
$\pi(d) > \pi(d+1)$, or equivalently, the index of the first letter in
an occurrence of the pattern $\vinc{2}{1/2, 2/1}{1}$.
A \emph{Grassmannian permutation} is a permutation
with a unique descent. Given any permutation $\pi$ we can associate
a Grassmannian permutation to each of its descents as follows: Given a
particular descent $d$ of $\pi$ we construct the sub-word $\gamma_d(\pi)$
by concatenating the right-to-left minima of the segment strictly to the
left of $d+1$ with the left-to-right maxima of the segment strictly to the
right of $d$. More intuitively we start with the descent $\pi(d)\pi(d+1)$
and enlarge it to the left by adding decreasing elements without creating another
descent and similarly enlarge it to the right by adding increasing elements
without creating another descent. We then denote the \emph{flattening}
(or \emph{standardization}) of $\gamma_d(\pi)$ by $\tilde{\gamma}_d(\pi)$,
which is the unique permutation whose letters are in the same relative order as
$\gamma_d(\pi)$.
\begin{example}
Consider the permutation
$\pi = 11 | 6 | 12 | 9 4 1 5 3 7 2 8 | 10$ where we have used the symbol
$\,|\,$ to separate two-digit numbers from other numbers.
For the descent at $d = 4$ we get $\gamma_4(\pi) = 6 9 4 5 7 8 | 10$
and $\tilde{\gamma}_4(\pi) = 3 6 1 2 4 5 7$.
\end{example}
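A Python sketch of this construction (illustration only); it reproduces the example above.
\begin{verbatim}
def gamma(pi, d):
    # sub-word gamma_d(pi) for a descent at position d (1-based)
    mins, cur = [], float("inf")
    for v in reversed(pi[:d]):            # right-to-left minima left of d+1
        if v < cur:
            mins.append(v); cur = v
    maxs, cur = [], 0
    for v in pi[d:]:                      # left-to-right maxima right of d
        if v > cur:
            maxs.append(v); cur = v
    return list(reversed(mins)) + maxs

def flatten(word):
    order = sorted(word)
    return [order.index(v) + 1 for v in word]

pi = (11, 6, 12, 9, 4, 1, 5, 3, 7, 2, 8, 10)
print(gamma(pi, 4))            # [6, 9, 4, 5, 7, 8, 10]
print(flatten(gamma(pi, 4)))   # [3, 6, 1, 2, 4, 5, 7]
\end{verbatim}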
\noindent
Now, given a Grassmannian permutation $\rho$ in $S_n$ with its unique descent
at $d$ we construct its \emph{associated partition} $\lambda(\rho)$ as the
partition inside a bounding box $d \times (n-d)$, with $d$ rows and $n-d$ columns,
whose lower border is the lattice path that starts
at the lower left corner of the bounding box and whose $i$-th step,
for $i \in \dbrac{1,n}$, is vertical if the letter $i$ occurs weakly to the left of
position $d$ in $\rho$ (that is, if $\rho^{-1}(i) \leq d$), and horizontal otherwise. A corner of the lattice path is called
an \emph{inner corner} if it corresponds to a right turn on the path, otherwise it
is called an \emph{outer corner}. We are interested in the
\emph{inner corner distances} of this partition, that is, for every inner
corner we add its distance from the left side and the distance from the top
of the bounding box. If all these inner corner distances are the same then
the inner corners all lie on the same anti-diagonal.
Woo and Yong~\cite{MR2264071} show in their Theorem 1 that a
permutation $\pi \in S_n$ corresponds to a Gorenstein Schubert variety $X_\pi$
if and only if
\begin{enumerate}
\item \label{thm:WY06-cond1}
for each descent $d$ of $\pi$, $\lambda(\tilde{\gamma}_d(\pi))$ has all of
its inner corners on the same anti-diagonal; and
\item \label{thm:WY06-cond2}
the permutation $\pi$ avoids both $31524$ and
$24153$ with Bruhat
restrictions $\{ \brur{1}{5}, \brur{2}{3} \}$ and $\{ \brur{1}{5}, \brur{3}{4} \}$,
respectively.
\end{enumerate}
Let's take a closer look at condition \ref{thm:WY06-cond2}: Proposition
\ref{prop:Bruhat-bivinc} below shows that
avoiding $31524$ with Bruhat restrictions
$\{ \brur{1}{5}, \brur{2}{3} \}$ is equivalent to avoiding the bivincular
pattern
\[
\bivinc{5}{1/3,2/1,3/5,4/2,5/4}{2}{3} = (31524, \{2\}, \{3\} ).
\]
Similarly, avoiding
$24153$ with Bruhat restrictions $\{ \brur{1}{5}, \brur{3}{4} \}$
is equivalent to avoiding the bivincular
pattern
\[
\bivinc{5}{1/2,2/4,3/1,4/5,5/3}{3}{2} = (24153, \{3\}, \{2\} ).
\]
\begin{proposition} \label{prop:Bruhat-bivinc}
\begin{enumerate}
\item \label{prop:Bruhat-bivinc-case1}
Let $p$ be the pattern
\[
\dotsm 1 k \dotsm
\]
in $S_k$. Let $j = p^\rmi(1)$ be the index of $1$ in $p$.
A permutation $\pi$ in $S_n$ avoids $p$ with Bruhat restriction
$\brur{j}{j+1}$ if and only if $\pi$ avoids the vincular pattern
\[
\vinc{8}{1/\cdot, 2/\cdot, 3/\cdot, 4/1, 5/k, 6/\cdot, 7/\cdot, 8/\cdot}{4}
=
(\dotsm 1 k \dotsm, \{j\}, \tom).
\]
\item \label{prop:Bruhat-bivinc-case2}
Let $\ell \in \dbrac{1,k-1}$ and $p$ be the pattern
\[
\ell \dotsm (\ell+1)
\]
in $S_k$.
A permutation $\pi$ in $S_n$ avoids $p$ with Bruhat restriction
$\brur{1}{k}$ if and only if $\pi$ avoids the bivincular pattern
\[
\bivincs{11}{1/1/\ell, 2/\cdot/\cdot, 3/\cdot/\cdot, 4/\ell/\cdot,
5/{\ell}/\cdot, 6/{+}/\cdot, 7/1/\cdot, 8/\cdot/\cdot, 9/\cdot/\ell, 10/\cdot/{+}, 11/k/1}
{}{4,5,6}
=
(\ell \dotsm (\ell+1), \tom, \{ \ell \}).
\]
\end{enumerate}
\end{proposition}
\begin{proof}
We consider each case separately.
\begin{enumerate}
\item Assume $\pi$ contains the vincular pattern mentioned. Then it clearly also
contains an embedding satisfying the Bruhat restriction.
Conversely assume $\pi$ contains an embedding satisfying the Bruhat restriction.
If $1_\pi$ and $k_\pi$ are adjacent then we are done. Otherwise look at
the element immediately to the right of $1_\pi$. This element must be either larger
than $k_\pi$, in which case we can replace $k_\pi$ by it and are done, or smaller,
in which case we replace $1_\pi$ by it, and repeat.
\item Assume $\pi$ contains the bivincular pattern mentioned. Then it clearly also
contains an embedding satisfying the Bruhat restriction.
Conversely assume $\pi$ contains an embedding satisfying the Bruhat restriction.
If $(\ell+1)_\pi = \ell_\pi + 1$ then we are done. Otherwise consider the element
$\ell_\pi + 1$. It must either be to the right of $(\ell+1)_\pi$ or to the left
of $\ell_\pi$. In the first case we can replace $(\ell+1)_\pi$ by $\ell_\pi + 1$ and be
done. In the second case replace $\ell_\pi$ with $\ell_\pi + 1$ and repeat. \qedhere
\end{enumerate}
\end{proof}
As a consequence we get:
\begin{corollary}
A permutation $\pi$ in $S_n$ avoids
\[
\dotsm 1 k \dotsm,\ \brur{j}{j+1},
\]
where $j$ is the index of $1$, if and only if the inverse $\pi^\rmi$ avoids
\[
j \dotsm (j+1),\ \brur{1}{k}.
\]
\end{corollary}
Note that we could have proved the statement of the corollary without
going through bivincular patterns and then used that to prove part
\ref{prop:Bruhat-bivinc-case2} of Proposition \ref{prop:Bruhat-bivinc},
as part \ref{prop:Bruhat-bivinc-case2} is the inverse statement of
the statement in part \ref{prop:Bruhat-bivinc-case1}.
Translating condition \ref{thm:WY06-cond1} of Theorem 1 of Woo and Yong~\cite{MR2264071}
into bivincular patterns is a bit more work. The failure of this condition is easily seen to
be equivalent to the associated partition $\lambda(\tilde{\gamma}_d(\pi))$ of some Grassmannian
permutation $\tilde{\gamma}_d(\pi)$
having an outer corner that is either ``too wide'' or ``too deep''.
More precisely, given a Grassmannian permutation $\rho$
and an outer corner of $\lambda(\rho)$,
we say that it is \emph{too wide} if the distance
upward from it to the next inner corner is smaller than the distance to the
left from it to the next inner corner. Conversely we say that an outer
corner is \emph{too deep} if the distance
upward from it to the next inner corner is larger than the distance to the
left from it to the next inner corner. We say that an outer corner
is \emph{unbalanced} if it is either too wide or too deep. We say that
an outer corner is \emph{balanced} if it is not \emph{unbalanced}.
If a permutation has an associated Grassmannian permutation with an outer corner
that is too wide we say
that the permutation itself is \emph{too wide} and similarly for
\emph{too deep}. If the permutation is either too wide or too deep we
say that it is \emph{unbalanced}, otherwise it is \emph{balanced}.
It is time to see some examples.
\begin{example} \label{ex:partitions}
See Figure \ref{fig:partitions} for drawings of the partitions below.
\begin{enumerate}
\item Consider the permutation $\rho = 14235$ with a unique descent at
$d = 2$. It corresponds to the partition $(2) \subseteq 2 \times 3$
and has just one outer corner. This outer corner is too wide.
\item Consider the permutation $\rho = 13425$ with a unique descent at
$d = 3$. It corresponds to the partition $(1,1) \subseteq 3 \times 2$
and has just one outer corner. This outer corner is too deep.
\item Consider the permutation $\rho = 1 3 4 8 9 2 5 6 7 | 10$ with a
unique descent at $d = 5$. It corresponds to the partition
$(4,4,1,1) \subseteq 5 \times 5$
and has two outer corners. The first outer corner is too deep
and the second is too wide.
\item Consider the permutation $\rho = 1 3 6 7 2 4 5 8$ with a
unique descent at $d = 4$. It corresponds to the partition
$(3,3,1) \subseteq 4 \times 4$
and has two outer corners that are both balanced.
\end{enumerate}
\begin{figure}[ht]
\begin{tikzpicture}[scale = 0.45, dot/.style = {fill,draw,circle, minimum size = 1pt}]
\draw[help lines] (0,0) grid (3,2);
\draw[line width = 1pt]
(0,0) -- (0,1) -- (1,1) -- (2,1) -- (2,2) -- (3,2) ;
\begin{scope}[xshift = 5cm]
\draw[help lines] (0,0) grid (2,3);
\draw[line width = 1pt]
(0,0) -- (0,1) -- (1,1) -- (1,2) -- (1,3) -- (2,3) ;
\end{scope}
\begin{scope}[xshift = 9cm]
\draw[help lines] (0,0) grid (5,5);
\draw[line width = 1pt]
(0,0) -- (0,1) -- (1,1) -- (1,2) -- (1,3) -- (2,3) -- (3,3) -- (4,3) -- (4,4) -- (4,5)-- (5,5) ;
\end{scope}
\begin{scope}[xshift = 16cm]
\draw[help lines] (0,0) grid (4,4);
\draw[line width = 1pt]
(0,0) -- (0,1) -- (1,1) -- (1,2) -- (2,2) -- (3,2) -- (3,3) -- (3,4) -- (4,4) ;
\end{scope}
\end{tikzpicture}
\caption{The associated partitions of the permutations in Example \ref{ex:partitions}}
\label{fig:partitions}
\end{figure}
\end{example}
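The path, its corners, and the too wide/too deep test are easy to compute. The Python sketch below (illustration only; it assumes every outer corner has an inner corner directly above it and directly to its left, as in the four cases of Example \ref{ex:partitions}) reproduces the classification above.
\begin{verbatim}
def steps(rho, d):
    # step i is vertical iff the letter i occurs weakly left of position d
    pos = {v: i for i, v in enumerate(rho, start=1)}
    return ['V' if pos[i] <= d else 'H' for i in range(1, len(rho) + 1)]

def corner_report(rho, d):
    x = y = 0
    inner, outer = [], []                 # corner coordinates, origin at lower left
    path = steps(rho, d)
    for a, b in zip(path, path[1:]):
        x, y = (x, y + 1) if a == 'V' else (x + 1, y)
        if a == 'V' and b == 'H':
            inner.append((x, y))
        elif a == 'H' and b == 'V':
            outer.append((x, y))
    report = []
    for ox, oy in outer:
        up = min(iy - oy for ix, iy in inner if ix == ox and iy > oy)
        left = min(ox - ix for ix, iy in inner if iy == oy and ix < ox)
        report.append('too wide' if up < left
                      else 'too deep' if up > left else 'balanced')
    return report

print(corner_report((1, 4, 2, 3, 5), 2))                    # ['too wide']
print(corner_report((1, 3, 4, 2, 5), 3))                    # ['too deep']
print(corner_report((1, 3, 4, 8, 9, 2, 5, 6, 7, 10), 5))    # ['too deep', 'too wide']
print(corner_report((1, 3, 6, 7, 2, 4, 5, 8), 4))           # ['balanced', 'balanced']
\end{verbatim}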
We now show how these properties of Grassmannian permutations can be detected with
bivincular patterns.
\begin{lemma}
\label{lem:Grassmann-perm}
Let $\rho$ be a Grassmannian permutation.
\begin{enumerate}
\item \label{lem:Grassmann-perm-too-wide}
The permutation $\rho$ is too wide if and only if it contains at least one
of the bivincular patterns from the infinite family
\begin{align*}
{\mathcal F}
&= \left\{
\bivinc{5}{1/1, 2/4, 3/2, 4/3, 5/5}{}{2,3,4},
\bivinc{7}{1/1, 2/5, 3/6, 4/2, 5/3, 6/4, 7/7}{}{2,3,4,5,6},
\bivinc{9}{1/1, 2/6, 3/7, 4/8, 5/2, 6/3, 7/4, 8/5, 9/9}{}{2,3,4,5,6,7,8}, \dotsc
\right\}
\end{align*}
The general member of this family is of the form
\[
\bivincs{11}{1/1/1, 2/2/\ell, 3/\cdot/{+}, 4/\cdot/1, 5/\cdot/\cdot,
6/\cdot/\cdot, 7/\cdot/2, 8/\cdot/\cdot, 9/\cdot/\cdot, 10/\cdot/\ell, 11/k/k}
{}{2,3,4,5,6,7,8,9,10},
\]
where $\ell = (k+1)/2$, and $k$ is odd.
\item \label{lem:Grassmann-perm-too-deep}
The permutation $\rho$ is too deep if and only if it contains at least one
of the bivincular patterns from the infinite family
\begin{align*}
{\mathcal G}
&= \left\{
\bivinc{5}{1/1, 2/3, 3/4, 4/2, 5/5}{}{1,2,3},
\bivinc{7}{1/1, 2/4, 3/5, 4/6, 5/2, 6/3, 7/7}{}{1,2,3,4,5},
\bivinc{9}{1/1, 2/5, 3/6, 4/7, 5/8, 6/2, 7/3, 8/4, 9/9}{}{1,2,3,4,5,6,7}, \dotsc
\right\}
\end{align*}
The general member of this family is of the form
\[
\bivincs{11}{1/1/1, 2/2/\ell, 3/\cdot/{+}, 4/\cdot/1, 5/\cdot/\cdot,
6/\cdot/\cdot, 7/\cdot/2, 8/\cdot/\cdot, 9/\cdot/\cdot, 10/\cdot/\ell, 11/k/k}
{}{1,2,3,4,5,6,7,8,9},
\]
where $\ell = (k-1)/2$, and $k$ is odd.
\end{enumerate}
\end{lemma}
Note that these two infinite families can be obtained from
one another by reverse complement.
\begin{proof}
We only consider part \ref{lem:Grassmann-perm-too-wide}, as part
\ref{lem:Grassmann-perm-too-deep} is proved analogously. Assume
that $\rho$ is a Grassmannian permutation that is too wide, so it
has an outer corner that is too wide. Let $\ell$ be the distance
from this outer corner to the next inner corner above. Then
the distance from this outer corner to the next inner corner to
the left is at least $\ell + 1$. This allows us to construct
an increasing sequence $t$ of length $\ell$ in $\rho$,
starting at a distance at least two to the right of the descent.
We can also choose $t$ so that every element in it is
adjacent both in location and values. Similarly we can construct
an increasing sequence $s$ of length $\ell$ in $\rho$,
located strictly to the left of the descent.
We can also choose $s$ so that every element in it is
adjacent both in location and values.
This produces the required member of the family ${\mathcal F}$.
Conversely, assume $\rho$ contains the $i$-th member of the family
${\mathcal F}$, the pattern
\[
\bivincs{11}{1/1/1, 2/2/\ell, 3/\cdot/{+}, 4/\cdot/1, 5/\cdot/\cdot,
6/\cdot/\cdot, 7/\cdot/2, 8/\cdot/\cdot, 9/\cdot/\cdot, 10/\cdot/\ell, 11/k/k}
{}{2,3,4,5,6,7,8,9,10},
\]
where $k = 2i + 3$. Then the occurrence of the pattern
corresponds to an outer corner in the partition of $\rho$ of width
$\ell - 1$ and depth $\ell - 2$.
\end{proof}
We have now shown that:
\begin{proposition}
A permutation $\pi$ is balanced if and only if every associated Grassmannian
permutation avoids every bivincular pattern in the two infinite families
${\mathcal F}$ and ${\mathcal G}$ in Lemma \ref{lem:Grassmann-perm}.
\end{proposition}
This gives us:
\begin{theorem}
\label{thm:Ulfarsson-Gorenstein}
Let $\pi \in S_n$. The Schubert variety $X_\pi$ is Gorenstein if and only if
\begin{enumerate}
\item $\pi$ is balanced; and
\item
the permutation $\pi$ avoids the bivincular patterns
\[
\bivinc{5}{1/3,2/1,3/5,4/2,5/4}{2}{3} \textrm{ and }
\bivinc{5}{1/2,2/4,3/1,4/5,5/3}{3}{2}.
\]
\end{enumerate}
\end{theorem}
With these descriptions of factorial and Gorenstein varieties it is simple
to show that smoothness implies factoriality, which implies Gorensteinness:
If the variety $X_\pi$ is not factorial then $\pi$ contains either $\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$ or $1324$,
so $\pi$ contains $2143$ or $1324$, and $X_\pi$ is therefore not smooth.
If $X_\pi$ is not Gorenstein then at least one of the following is true:
\begin{enumerate}
\item $\pi$ has an associated Grassmannian permutation that contains
one of the bivincular patterns in the infinite families ${\mathcal F}$ and
${\mathcal G}$, so $\pi$ also contains $1324$ and is therefore not factorial.
\item $\pi$ contains $\bivinc{5}{1/3,2/1,3/5,4/2,5/4}{2}{3}$ or
$\bivinc{5}{1/2,2/4,3/1,4/5,5/3}{3}{2}$, so it also contains
$\vinc{4}{1/2, 2/1, 3/4, 4/3}{2}$ and is therefore not factorial.
\end{enumerate}
\section{Mesh patterns and marked mesh patterns} \label{sec:intmesh}
\subsection{Mesh patterns}
Br\"and\'en and Claesson~\cite{BC11} introduced a new type of pattern called a mesh pattern
and showed they generalize bivincular patterns and
(most) barred patterns. Here we recall their definition: A \emph{mesh pattern} is
a pair $(p,R)$ where $p$ is a permutation of rank $k$ and $R$ is a subset of
the square $\dbrac{0,k} \times \dbrac{0,k}$. An occurrence of that pattern in a permutation
$\pi$ is first of all an occurrence of $p$ in $\pi$ in the usual sense, that is,
a subset of the diagram $G(\pi) = \{ (i,\pi(i)) \,|\, 1 \leq i \leq n \}$. This occurrence
must also satisfy the restrictions determined by $R$, that is, there are order-preserving
injections $\alpha, \beta: \dbrac{1,k} \to \dbrac{1,n}$ such that if
$(i,j) \in R$ then $R_{ij} \cap G(\pi)$ is empty, where
\[
R_{ij} = \dbrac{\alpha(i)+1, \alpha(i+1)-1} \times
\dbrac{\beta(j)+1, \beta(j+1) -1};
\]
with $\alpha(0) = 0 = \beta(0)$ and $\alpha(k+1) = n+1 = \beta(k+1)$.
It is best to unwind this formal definition with a few examples.
\begin{example} \label{ex:markedmesh}
\begin{enumerate}
\item The mesh pattern
$(21, \{ (1,0),(1,1),(1,2) \})$ can be depicted as follows:
\[
\pattern{scale=1}{ 2 }{ 1/2, 2/1 }{1/0, 1/1, 1/2 }
\]
An occurrence of this mesh pattern in a permutation is an inversion
(an occurrence of the classical pattern $21$) with the additional requirement that there
is nothing in between the two elements in the occurrence. We usually refer
to this pattern as the vincular pattern $\vinc{2}{1/2,2/1}{1}$,
that is, a descent.
\item Now consider
the more complicated mesh pattern below.
\[
\pattern{scale=1}{ 2 }{ 1/1, 2/2 }{0/0, 2/0, 0/2, 2/2 }
\]
There are two occurrences of this mesh pattern in the permutation
$\pi = 315426$, shown below.
\[
\imopattern{scale=1}{ 6 }{ 1/3, 2/1, 3/5, 4/4, 5/2, 6/6 }
{}{ 1/3, 6/6 }{
0/0, 0/1, 0/2,
0/6,
6/6,
6/0, 6/1, 6/2 }
\qquad
\imopattern{scale=1}{ 6 }{ 1/3, 2/1, 3/5, 4/4, 5/2, 6/6 }
{}{ 2/1, 6/6 }{
0/0, 1/0,
0/6, 1/6,
6/6,
6/0 }
\]
Every other occurrence of the classical pattern $12$ in $\pi$, \textit{e.g.},
\[
\imopattern{scale=1}{ 6 }{ 1/3, 2/1, 3/5, 4/4, 5/2, 6/6 }
{}{ 1/3, 3/5 }{
0/0, 0/1, 0/2,
0/5, 0/6,
3/5, 4/5, 5/5, 6/5, 3/6, 4/6, 5/6, 6/6,
3/0, 4/0, 5/0, 6/0, 3/1, 4/1, 5/1, 6/1, 3/2, 4/2, 5/2, 6/2 }
\]
fails to satisfy the requirements given, since some of the shaded areas
will have a dot in them.
\item Dukes and Reifegerste~\cite{MR2628782} defined \emph{certified non-inversions} as occurrences
of $132$ that are neither part of $1432$ nor $1342$. Equivalently these are occurrences of the mesh
pattern below.
\[
\pattern{scale=1}{ 3 }{ 1/1, 2/3, 3/2 }{1/3,2/3}%
\]
\end{enumerate}
\end{example}
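A brute-force containment test for mesh patterns is again a direct translation of the definition; the Python sketch below (illustration only) confirms the second item of the example above.
\begin{verbatim}
from itertools import combinations

def flatten(word):
    order = sorted(word)
    return tuple(order.index(v) + 1 for v in word)

def contains_mesh(perm, p, R):
    # does perm contain the mesh pattern (p, R)?
    n, k = len(perm), len(p)
    pts = {(i, perm[i - 1]) for i in range(1, n + 1)}
    for idx in combinations(range(1, n + 1), k):
        sub = [perm[i - 1] for i in idx]
        if flatten(sub) != tuple(p):
            continue
        a = (0,) + idx + (n + 1,)                  # alpha(0), ..., alpha(k+1)
        b = (0,) + tuple(sorted(sub)) + (n + 1,)   # beta(0), ..., beta(k+1)
        if all(not any(a[i] < x < a[i + 1] and b[j] < y < b[j + 1]
                       for x, y in pts) for i, j in R):
            return True
    return False

pi = (3, 1, 5, 4, 2, 6)
print(contains_mesh(pi, (1, 2), {(0, 0), (2, 0), (0, 2), (2, 2)}))  # True
print(contains_mesh(pi, (1, 3, 2), {(1, 3), (2, 3)}))               # True, e.g. 354
\end{verbatim}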
Br\"and\'en and Claesson~\cite{BC11} showed that a barred pattern with only one barred letter can
be written as a mesh pattern\footnote{This had been noticed earlier as well,
see unpublished work of Isaiah Lankham.}. The procedure is as follows: If $\pi(i)$
is the only barred letter in a barred pattern $\pi$ then the corresponding
mesh pattern is $(\pi', \{(i-1,\pi(i)-1)\})$ where $\pi'$ is the standardization
of $\pi$ after the removal of $\pi(i)$. For example
\[
1\overline{2}3 = \pattern{scale=1}{ 2 }{ 1/1, 2/2 }{1/1}.
\]
More general barred patterns can also be translated into mesh patterns
as long as the barred letters are neither adjacent in locations nor in values.
The procedure is essentially the same as above, so for example the barred
pattern $63\overline{4}1\overline{2}5$ is contained in a permutation $\pi$
if and only if at least one of the mesh patterns
\[
\pattern{scale=1}{ 4 }{ 1/4, 2/2, 3/1, 4/3 }{2/2}, \qquad
\pattern{scale=1}{ 4 }{ 1/4, 2/2, 3/1, 4/3 }{3/1}
\]
is contained in $\pi$.
It is possible to classify simsun permutations by the avoidance of mesh patterns as follows:
A permutation in $S_n$ is \emph{simsun} if, for every $k \leq n$, its restriction to the
interval $\dbrac{1,k}$ contains no double descent ($\vinc{3}{1/3, 2/2, 3/1}{1,2}$).
For example the permutation $452613$ is not simsun since if we restrict it to $\dbrac{1,4}$
it becomes $4213$ which contains a double descent. It is now almost trivial to check that
a permutation is simsun if and only if it avoids the mesh pattern
\[
\pattern{scale=1}{ 3 }{ 1/3, 2/2, 3/1 }{1/0, 1/1, 1/2, 2/0, 2/1, 2/2}.
\]
This was also noticed independently and simultaneously by Br\"and\'en and Claesson~\cite{BC11}.
It is easy to see that bivincular patterns are special cases of mesh patterns. Adjacency
conditions on positions in the bivincular pattern become vertical strips, while adjacency
conditions on values become horizontal strips; $R$ is then the union of all the strips.
For example the bivincular
pattern $\bivinc{5}{1/3,2/1,3/5,4/2,5/4}{2}{3}$ from Theorem \ref{thm:Ulfarsson-Gorenstein}
corresponds to the mesh pattern
\begin{equation} \label{eq:transl}
\pattern{scale=1}{ 5 }{ 1/3, 2/1, 3/5, 4/2, 5/4 }{0/3, 1/3, 2/3, 3/3, 4/3, 5/3, 2/0, 2/1, 2/2, 2/3, 2/4, 2/5}.
\end{equation}
We have seen in Proposition \ref{prop:Bruhat-bivinc} that some Bruhat-restricted patterns
can be turned into bivincular patterns. We now show that any Bruhat-restricted pattern can
be turned into a mesh pattern.
Given a pattern $p$ with one Bruhat restriction $\brur{a}{b}$ first note
that this means that $a < b$ and $p(a) < p(b)$. Then a permutation contains $p$ with
the restriction if and only if it contains the mesh pattern
$(p, R)$, where $R$ consists of all the squares in the region with corners
$(a,p(a))$, $(b,p(a))$, $(b,p(b))$, $(a,p(b))$.
For example the pattern $31524$, $\brur{1}{5}$ corresponds to
\begin{equation} \label{eq:transl2}
\pattern{scale=1}{ 5 }{ 1/3, 2/1, 3/5, 4/2, 5/4 }{1/3, 2/3, 3/3, 4/3}.
\end{equation}
Given a pattern $p$ with multiple Bruhat restrictions we superpose the
mesh patterns we get for each individual restriction. For example the
pattern $31524$, $\brur{1}{5}$, $\brur{3}{4}$, which is one of the patterns
that determines whether a permutation is Gorenstein or not, corresponds to
\begin{equation} \label{eq:transl3}
\pattern{scale=1}{ 5 }{ 1/3, 2/1, 3/5, 4/2, 5/4 }{1/3, 2/3, 3/3, 4/3, 2/1, 2/2, 2/3, 2/4}.
\end{equation}
Recall that we had already shown (Proposition \ref{prop:Bruhat-bivinc})
that this Bruhat-restricted pattern corresponds to the bivincular pattern
\eqref{eq:transl}. It is easy to see directly that the mesh pattern
\eqref{eq:transl3} is equivalent to the bivincular pattern \eqref{eq:transl},
in terms of being contained/avoided by a permutation.
It is now possible to translate Theorem
\ref{thm:Ulfarsson-Gorenstein} into mesh patterns, and completely get
rid of the middle step of considering the Grassmannian subpermutations.
But this was essentially done in Woo and Yong~\cite{MR2422304} using interval patterns, which
we now show to be special cases of mesh patterns.
\subsection{Interval patterns}
Woo and Yong~\cite{MR2422304} defined the avoidance of \emph{interval patterns}
as a generalization of Bruhat-restricted patterns.
We recall the definition here, with the modification that we reverse the usual
Bruhat order on $S_n$. We do this so the definition can be directly compared
with the definition of Bruhat-restricted avoidance. The (\emph{reversed})
\emph{Bruhat order} on $S_n$ is the partial order defined by
$\rho < \pi$ if $\pi$ can be obtained from $\rho$ by composing with a transposition
and $\pi$ has more non-inversions than $\rho$. Recall that a \emph{non-inversion}
is an occurrence of the classical pattern $12$; we let $\ell(\pi)$ denote the number of
non-inversions in $\pi$. Now we say that a permutation
$\pi$ \emph{contains the interval} $[p,q]$ if there exists a permutation $\rho \leq \pi$
and a common embedding of $p$ into $\rho$ and $q$ into $\pi$ such that the entries
outside of the embedding agree and the posets $[p,q]$, $[\rho,\pi]$ are isomorphic.
\begin{example}
The interval pattern $[41523,31524]$
corresponds to the Bruhat-restricted pattern $31524$, $\brur{1}{5}$, shown
as a mesh pattern above, \eqref{eq:transl2}; and $[45123,31524]$ to $31524$, $\brur{1}{5}$,
$\brur{3}{4}$ also shown above, \eqref{eq:transl3}.
\end{example}
To show that any interval pattern can be turned into a mesh pattern we need a preliminary
definition: Given a permutation $\pi$ of rank $n$ and integers $j,k \in \dbrac{1,n+1}$,
we define a new permutation $\pi \oplus_j k$ of rank $n+1$ as follows:
\[
(\pi \oplus_j k) (\ell) =
\begin{cases}
\pi(\ell) & \text{ if $\ell < j$ and $\pi(\ell) < k$}, \\
\pi(\ell)+1 & \text{ if $\ell < j$ and $\pi(\ell) \geq k$}, \\
k & \text{ if $\ell = j$}, \\
\pi(\ell-1) & \text{ if $\ell > j$ and $\pi(\ell-1) < k$}, \\
\pi(\ell-1)+1 & \text{ if $\ell > j$ and $\pi(\ell-1) \geq k$}.
\end{cases}
\]
For example $34125 \oplus_3 4 = 354126$.
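A Python sketch (illustration only) of this insertion, together with the non-inversion count $\ell$ used in the lemma below; it reproduces the example just given.
\begin{verbatim}
def oplus(pi, j, k):
    # insert the value k at position j (1-based), shifting larger values up
    bumped = [v + 1 if v >= k else v for v in pi]
    return tuple(bumped[:j - 1] + [k] + bumped[j - 1:])

def ell(pi):
    # number of non-inversions, i.e. occurrences of the classical pattern 12
    return sum(pi[a] < pi[b]
               for a in range(len(pi)) for b in range(a + 1, len(pi)))

print(oplus((3, 4, 1, 2, 5), 3, 4))                  # (3, 5, 4, 1, 2, 6)
print(ell((3, 1, 5, 2, 4)), ell((4, 1, 5, 2, 3)))    # 6 and 5
\end{verbatim}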
\begin{lemma}
A permutation $\pi$ contains an interval pattern $[p,q]$ if and only
if it contains the mesh pattern $(q,R)$ where $R$ consists of boxes
$(i,j)$ such that
\[
\ell(q) - \ell(p) \neq \ell(q\oplus_{i+1} (j+1)) - \ell(p\oplus_{i+1} (j+1))
\]
\end{lemma}
\begin{proof}
This lemma is a direct corollary of Lemma 2.1 in Woo and Yong~\cite{MR2422304}
which states that an embedding $\Phi$ of $[p,q]$ into $[\rho,\pi]$
is an interval pattern embedding if and only if $\ell(q)-\ell(p) = \ell(\pi) - \ell(\rho)$.
\end{proof}
It should be noted that, although the general definition of a mesh pattern did not
exist earlier, many authors had drawn diagrams analogous to the diagrams we have been drawing
for mesh patterns; see, e.g., \cite{MR1990570}, \cite{MR2376109}.
\begin{example}
To realize the interval $[53241,32154]$ as a mesh pattern we start
by drawing $53241$ with white dots and $32154$ with black dots into
the same diagram and consider the boxes $(i,j)$ that satisfy the condition
in the lemma above.
\[
[53241,32154] =
\impattern{scale=1}{ 5 }{ 1/3, 2/2, 3/1, 4/5, 5/4 }
{ 1/5, 2/3, 3/2, 4/4, 5/1 }{ 1/3, 1/4, 2/2, 2/3, 2/4, 3/1, 3/2, 3/3, 3/4, 4/1, 4/2, 4/3 }
\qquad
[215436, 526413] =
\imopattern{scale=1}{ 6 }{ 1/5, 2/2, 3/6, 4/4, 5/1, 6/3 }
{ 1/2, 2/1, 3/5, 5/3, 6/6 }{ 4/4 }{ 1/2, 1/3, 1/4, 2/1, 2/2, 2/3, 2/4, 3/1, 3/2, 3/3, 3/4, 3/5, 4/1, 4/2, 4/3, 4/4, 4/5, 5/3, 5/4, 5/5 }
\]
\end{example}
Woo and Yong~\cite{MR2422304} show in their Theorem 6.6 that a
permutation $\pi \in S_n$ corresponds to a Gorenstein Schubert variety $X_\pi$
if and only if $\pi$ avoids intervals of the form
\begin{enumerate}
\item \label{thm:WY08-cond1}
$g_{a,b} = [(a+2) \dotsm (a+b+2) 1 \dotsm a (a+1), 1 (a+2) \dotsm (a+b+1) 2 \dotsm a (a+1) (a+b+2)]$
for all integers $a,b > 0$ such that $a \neq b$,
\item \label{thm:WY08-cond2}
$h_{a,b} = [(a+2) (a+4) \dotsm (a+b+3) 1 (a+b+4) 2 \dotsm (a+1) (a+3), (a+4) \dotsm (a+b+4) (a+2) (a+3) 1 \dotsm (a+1)]$
for all integers $a,b \geq 0$ such that either $a > 0$ or $b > 0$.
\end{enumerate}
\noindent
See Figure \ref{fig:WooYongGor} for some patterns appearing in these two lists.
\begin{figure}
\[
g_{1,2} =
\impattern{scale=0.75}{ 5 }{ 1/3, 2/4, 3/5, 4/1, 5/2 }
{ 1/1, 2/3, 3/4, 4/2, 5/5 }{1/1, 1/2, 2/1, 2/2, 2/3, 3/1, 3/2, 3/3, 3/4, 4/2, 4/3, 4/4}
\qquad
g_{1,3} =
\impattern{scale=0.75}{ 6 }{ 1/3, 2/4, 3/5, 4/6, 5/1, 6/2 }
{ 1/1, 2/3, 3/4, 4/5, 5/2, 6/6 }{1/1, 1/2, 2/1, 2/2, 2/3, 3/1, 3/2, 3/3, 3/4, 4/1, 4/2, 4/3, 4/4, 4/5, 5/2, 5/3, 5/4, 5/5}
\]
\[
g_{1,4} =
\impattern{scale=0.75}{ 7 }{ 1/3, 2/4, 3/5, 4/6, 5/7, 6/1, 7/2 }
{ 1/1, 2/3, 3/4, 4/5, 5/6, 6/2, 7/7 }{1/1, 1/2, 2/1, 2/2, 2/3, 3/1, 3/2, 3/3, 3/4, 4/1, 4/2, 4/3, 4/4, 4/5, 5/1, 5/2, 5/3, 5/4, 5/5, 5/6, 6/2, 6/3, 6/4, 6/5, 6/6}
\qquad
g_{2,3} =
\impattern{scale=0.75}{ 7 }{ 1/4, 2/5, 3/6, 4/7, 5/1, 6/2, 7/3 }
{ 1/1, 2/4, 3/5, 4/6, 5/2, 6/3, 7/7 }{1/1, 1/2, 1/3, 2/1, 2/2, 2/3, 2/4, 3/1, 3/2, 3/3, 3/4, 3/5, 4/1, 4/2, 4/3, 4/4, 4/5, 4/6, 5/2, 5/3, 5/4, 5/5, 5/6, 6/3, 6/4, 6/5, 6/6}
\]
\[
h_{0,1} =
\impattern{scale=0.75}{ 5 }{ 1/2, 2/4, 3/1, 4/5, 5/3 }
{ 1/4, 2/5, 3/2, 4/3, 5/1 }{1/2, 1/3, 2/2, 2/3, 2/4, 3/1, 3/2, 3/3, 3/4, 4/1, 4/2}
\]
\[
h_{0,2} =
\impattern{scale=0.75}{ 6 }{ 1/2, 2/4, 3/5, 4/1, 5/6, 6/3 }
{ 1/4, 2/5, 3/6, 4/2, 5/3, 6/1 }{1/2, 1/3, 2/2, 2/3, 2/4, 3/2, 3/3, 3/4, 3/5, 4/1, 4/2, 4/3, 4/4, 4/5, 5/1, 5/2}
\qquad
h_{1,1} =
\impattern{scale=0.75}{ 6 }{ 1/3, 2/5, 3/1, 4/6, 5/2, 6/4 }
{ 1/5, 2/6, 3/3, 4/4, 5/1, 6/2 }{1/3, 1/4, 2/3, 2/4, 2/5, 3/1, 3/2, 3/3, 3/4, 3/5, 4/1, 4/2, 4/3, 5/2, 5/3}
\]
\[
h_{0,3} =
\impattern{scale=0.75}{ 7 }{ 1/2, 2/4, 3/5, 4/6, 5/1, 6/7, 7/3 }
{ 1/4, 2/5, 3/6, 4/7, 5/2, 6/3, 7/1 }{1/2, 1/3, 2/2, 2/3, 2/4, 3/2, 3/3, 3/4, 3/5, 4/2, 4/3, 4/4, 4/5, 4/6, 5/1, 5/2, 5/3, 5/4, 5/5, 5/6, 6/1, 6/2}
\qquad
h_{1,2} =
\impattern{scale=0.75}{ 7 }{ 1/3, 2/5, 3/6, 4/1, 5/7, 6/2, 7/4 }
{ 1/5, 2/6, 3/7, 4/3, 5/4, 6/1, 7/2 }{1/3, 1/4, 2/3, 2/4, 2/5, 3/3, 3/4, 3/5, 3/6, 4/1, 4/2, 4/3, 4/4, 4/5, 4/6, 5/1, 5/2, 5/3, 6/2, 6/3}
\]
\caption{The first few patterns in Theorem 6.6 of Woo and Yong~\cite{MR2422304} shown as mesh patterns (patterns
that are the reverse complement of a pattern that has already appeared are omitted)}\label{fig:WooYongGor}
\end{figure}
\subsection{Marked mesh patterns, DBI- and Gorenstein varieties}%
We introduce a generalization of mesh patterns which we call
\emph{marked mesh patterns} and use them to give an alternative description of Schubert
varieties defined by inclusions, Gorenstein Schubert varieties, $123$-hexagon
avoiding permutations, Dumont permutations and cycles in permutations.
\begin{definition}
A \emph{marked mesh pattern} $(p,\mathcal{C})$ of rank $k$ consists of a classical pattern
$p$ of rank $k$ and a collection $\mathcal{C}$ which contains pairs $(C, \square j)$
where $C$ is a subset of the square $\dbrac{0,k} \times \dbrac{0,k}$, $j$ is a non-negative
integer and $\square$ is one of the symbols $\leq, =, \geq$. An occurrence of $(p,\mathcal{C})$ in a permutation $\pi$
is first of all an occurrence of $p$ in $\pi$ in the usual sense, that is, a subset
of the diagram $G(\pi) = \{ (i,\pi(i)) \,|\, 1 \leq i \leq n \}$. This occurrence
must also satisfy the restrictions determined by $\mathcal{C}$, that is, there are order-preserving
injections $\alpha, \beta: \dbrac{1,k} \to \dbrac{1,n}$ such that for each pair $(C, \square j)$
we have
\[
\#(C' \cap G(\pi))\ \square\ j,
\]
where
\[
C' = \bigcup_{(i,j) \in C} R_{ij}.
\]
As above,
$R_{ij} = \dbrac{\alpha(i)+1, \alpha(i+1)-1} \times \dbrac{\beta(j)+1, \beta(j+1) -1}$,
with $\alpha(0) = 0 = \beta(0)$ and $\alpha(k+1) = n+1 = \beta(k+1)$.
\end{definition}
\noindent
Since regions of the form $(C,\geq j)$ are the most common we also write them more simply as $(C,j)$.
\begin{example}
\begin{enumerate}
\item Every mesh pattern $(p,R)$ is an example of a marked mesh pattern: we just define
$\mathcal{C} = \{(R,{=}0)\}$. For example, here is the mesh pattern that identifies
the simsun permutations, written as a marked mesh pattern.
\[
\pattern{scale=1}{ 3 }{ 1/3, 2/2, 3/1 }{1/0, 1/1, 1/2, 2/0, 2/1, 2/2}
=
\patternsbmm{scale=1}{ 3 }{ 1/3, 2/2, 3/1 }{}{1/0/3/3/k}{1/0/3/3/{=0}}
\]
\item Consider the marked mesh pattern
\[
\patternsbm{scale=1}{ 2 }{ 1/1, 2/2 }{}{0/1/3/2/1}.
\]
If a permutation $\pi$ contains it, it has an occurrence of the classical pattern $12$ where
there is at least one element $x$ in the permutation with the property that $1_\pi < x < 2_\pi$.
This is equivalent to saying that $\pi$ contains at least one of the classical patterns $213, 123, 132$.
\item A fixed point of a permutation is an occurrence of the marked mesh pattern
\[
\patternsbmm{scale=1.75}{ 1 }{ 1/1 }{}{0/0/1/2/k, 0/0/2/1/k}{0/1/1/2/{=k}, 1/0/2/1/{=k}},
\]
for some integer $k \geq 0$.
This generalizes to occurrences of cycles in a permutation. For example, a $2$-cycle
is an occurrence of the marked mesh pattern $(21,\mathcal{C})$, where
$\mathcal{C}$ consists of the four marked regions below
\[
\patternsbmm{scale=1.75}{ 2 }{1/2, 2/1}{}{0/0/1/3/k, 0/0/3/1/k}{0/1/1/2/{=k_1}, 1/0/2/1/{=k_1}}, \quad
\patternsbmm{scale=1.75}{ 2 }{1/2, 2/1}{}{0/0/2/3/k, 0/0/3/2/k}{1/2/2/3/{=k_2}, 2/0.5/3/0.5/{=k_2}}
\]
for some integers $k_2 > k_1 \geq 0$. These types of patterns can also be extended to include
unions of cycles and thus subsume the patterns defined in \mbox{McGovern}~\cite{M10}.
\item \emph{Dumont permutations of the first kind}~\cite{0297.05004} are permutations of even rank with the property that every
even integer is followed by a smaller integer and every odd integer is either the last entry in the permutation
or is followed by a larger integer.
Therefore a permutation is a Dumont permutation of the first kind if and only if it avoids the marked mesh patterns
\[
\patternsbmm{scale=1.75}{ 1 }{1/1}{1/0,1/1}{0/0/1/1/k}{0/0/1/1/{=k}}, \quad
\patternsbmm{scale=1.75}{ 2 }{1/1,2/2}{1/0,1/1,1/2}{0/0/1/1/k, 2/0/3/1/k}{0/0/1/1/{=k}}, \quad
\patternsbmm{scale=1.75}{ 2 }{1/2,2/1}{0/1,1/1,2/1}{0/0/1/1/k, 0/2/1/3/k}{0/0/1/1/{=k}}, \quad
\]
where $k$ is an odd integer. Note that in the second pattern there is a single marked region $(\{(0,0),(2,0)\},{=}k)$,
consisting of two separated boxes. Similarly for the third pattern.
Dumont permutations of the second kind are also defined in~\cite{0297.05004}.
They can also be defined by the avoidance of the marked mesh patterns
\[
\patternsbmm{scale=2}{ 1 }{ 1/1 }{}{0/0/1/2/k, 0/0/2/1/k}{0/1/1/2/{=k}, 1/0/2/1/{\geq k}}, \quad
\patternsbmm{scale=2}{ 1 }{ 1/1 }{}{0/0/1/2/k, 0/0/2/1/k}{0/1/1/2/{=\ell}, 1/0/2/1/{\scriptscriptstyle{\leq \ell-1}}}
\]
where $k$ is an odd integer and $\ell$ is an even integer.
\item Green and Losonczy~\cite{MR1980344} defined \emph{freely braided permutations} as those permutations
avoiding the classical patterns $3421$, $4231$, $4312$, $4321$. Equivalently, these are the permutations
avoiding the marked mesh pattern
\[
\patternsbmfreelybraided{scale=1}{ 3 }{1/3,2/2,3/1}{}{1/1/2/2, 2/2/3/3}{0/3/1/4/1},
\]
marked with a single region consisting of the boxes $(2,0)$, $(3,0)$, $(1,1)$, $(3,1)$, $(0,2)$ and $(2,2)$.
\item Labelle, Leroux, Pergola and Pinzani~\cite{MR1887485} defined an \emph{inversion of the $j$-th kind}
in a permutation $\pi$ to be a pair of elements $\pi(s) > \pi(t)$, with $s < t$ and such that there
do not exist $j$ distinct indices $t+1 \leq t_1,t_2,\dotsc,t_j \leq n$ such that $\pi(t) > \pi(t_i)$
for $i = 1, \dotsc, j$.
Alternatively, an inversion of the $j$-th kind is an occurrence of the marked mesh pattern
\[
\patternsbm{scale=2}{ 2 }{ 1/2, 2/1 }{}{2/0/3/1/{\scriptscriptstyle{\leq j-1}}}.
\]
\item Kitaev, see e.g.~\cite{MR2319869}, introduced \emph{partially ordered patterns} (\emph{POP})
as a generalization of vincular patterns. Some POPs can be written as marked mesh patterns, e.g.,
an occurrence of the POP $121$ in a permutation means the occurrence of either $231$ or $132$. Therefore
\[
121 = \patternsbm{scale=1}{ 1 }{ 1/1}{}{0/0/1/1/1, 1/0/2/1/1}.
\]
\item Hou and Mansour~\cite{1161.05301} studied permutations avoiding
$\vinc{4}{1/1, 2/{\ \square\ }, 3/{\, 2}, 4/3}{3}$ which are permutations
avoiding the marked mesh pattern
\[
\patternsbm{scale=1}{ 3 }{ 1/1, 2/2, 3/3 }{2/0, 2/1, 2/2, 2/3}{1/0/2/4/1}.
\]
\end{enumerate}
\end{example}
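As an illustration of how such conditions can be checked mechanically, the following is a minimal sketch (ours, not part of the original development) that tests the Dumont first-kind condition from the example above directly from its verbal definition; the function name \texttt{is\_dumont\_first\_kind} is only an illustrative choice.
\begin{verbatim}
def is_dumont_first_kind(perm):
    # perm is a permutation of 1..n given as a tuple, e.g. (2, 1, 4, 3).
    # Condition: the rank n is even, every even value is followed by a
    # smaller value, and every odd value is either the last entry or is
    # followed by a larger value.
    n = len(perm)
    if n % 2 != 0:
        return False
    for i, v in enumerate(perm):
        last = (i == n - 1)
        if v % 2 == 0:
            # an even value must be followed by a smaller value
            if last or perm[i + 1] > v:
                return False
        else:
            # an odd value is last or is followed by a larger value
            if not last and perm[i + 1] < v:
                return False
    return True

assert is_dumont_first_kind((2, 1, 4, 3))      # 2143 is Dumont of the first kind
assert not is_dumont_first_kind((1, 2, 3, 4))  # the even value 2 is followed by 3
\end{verbatim}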
Gasharov and Reiner~\cite{MR1934291} defined Schubert varieties \emph{defined by inclusions} (or
just \emph{DBI}) and
characterized them by the avoidance of the patterns $24153$, $31524$, $426153$ and $1324$.
We now show how the first three of these patterns can be represented as two marked mesh patterns.
\begin{theorem} \label{thm:dbi}
Let $\pi \in S_n$. The Schubert variety $X_\pi$ is defined by inclusions if and only if
the permutation $\pi$ avoids the patterns
\[
\patternsbm{scale=1}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{}{3/1/4/2/{},1/3/2/4/1},
\qquad
\patternsbm{scale=1}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{}{4/2/5/3/1,3/0/4/1/1},
\qquad
1324.
\]
It should be noted that the first marked mesh pattern is marked with a single region,
$\{ (1,3), (3,1) \}$,
consisting of two boxes, and that the number of dots in this region must be at least $1$.
\end{theorem}
\begin{proof}
For the first marked mesh pattern note that $2143 \oplus_2 4 = 24153$ and $ 2143 \oplus_4 2 = 31524$.
For the second marked mesh pattern note that $(2143 \oplus_4 1) \oplus_6 4 = 426153$.
\end{proof}
We can also use these patterns to describe Gorenstein Schubert varieties:
\begin{theorem} \label{thm:Ulfarsson-Gorenstein2}
Let $\pi \in S_n$. The Schubert variety $X_\pi$ is Gorenstein if and only if it is balanced
and avoids the pattern
\[
\patternsbm{scale=1}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{2/0, 2/1, 2/2, 2/3, 2/4, 0/2, 1/2, 3/2, 4/2}{3/1/4/2/{},1/3/2/4/1}.
\]
\end{theorem}
\begin{proof}
Similar to the proof of Theorem \ref{thm:dbi}.
\end{proof}
In \cite{MR1826948} Billey and Warrington introduced \emph{$123$-hexagon avoiding}
permutations as permutations avoiding the classical patterns $123$,
$53281764$, $53218764$, $43281765$, $43218765$.\footnote{Actually they introduced \emph{$321$-hexagon avoiding}
permutations as permutations avoiding the reverse of the patterns listed here. We consider the reversed definition
to be compatible with what has appeared above.}
We now show how the four patterns of length eight can be combined into one marked mesh pattern.
\begin{proposition} \label{prop:123hex}
A permutation $\pi$ is $123$-hexagon avoiding if and only if it avoids $123$ and
the marked mesh pattern
\[
\patternsbm{scale=1}{ 4 }{ 1/2, 2/1, 3/4, 4/3 }{}{2/4/3/5/{1}, 0/2/1/3/{1}, 4/2/5/3/{1}, 2/0/3/1/{1}}.
\]
\end{proposition}
\begin{proof}
The ``\hspace{0.15pt}\nolinebreak if\hspace{0.5pt}\nolinebreak''\ part is easily verified. Now assume $\pi$ contains the marked mesh pattern.
Let $x$, $y$, $z$, $w$ correspond, respectively, to elements in the marked regions, read clockwise and
starting at the top. Let us assume that $\pi^\rmi(x) < \pi^\rmi(z)$ and $y < w$, as the other
cases are similar. Then $\pi$ contains the pattern $53281764$.
\end{proof}
Billey and Warrington also showed that $123$-hexagon avoiding permutations can be characterized
by the avoidance of $123$ and the avoidance of a hexagon in the \emph{heap} of the permutation.
See \cite{MR1826948}.
Tenner \cite{MR2333139} studied\footnote{Actually, permutations
that avoid the reverse of these patterns were considered.} permutations avoiding $123$ and $2143$ and showed
that a permutation $\pi$ avoids these two patterns if and only if it is \emph{boolean}
in the sense that the principal order ideal in strong Bruhat order $B(\pi)$ is \emph{Boolean} (that is,
isomorphic to the Boolean poset $B_r$ of subsets of $\dbrac{r}$ for some $r$).
So we immediately get that a boolean permutation is $123$-hexagon avoiding.
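Indeed, each of the four patterns of length eight in the definition above contains an occurrence of $2143$ (witnessed in each case by the subsequence $3287$), so avoiding $2143$ already guarantees the avoidance of those four patterns.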
The author is working with a coauthor on determining which patterns control
local complete intersections. The two marked mesh patterns given above for Schubert
varieties defined by inclusions appear in that description, along with one other marked mesh pattern.
We end with a diagram in Figure \ref{fig:diagram} that shows which pattern definitions subsume which.
\begin{figure}[ht]
\begin{tikzpicture}
[scale = 0.75, place/.style = {circle,draw = green!50,fill = green!20,thick,minimum size = 5pt},auto]
\node[] at (0,6) (mm) {marked mesh};
\node[] at (0,5) (m) {mesh};
\node[] at (0,3) (biv) {bivincular};
\node[] at (0,1) (vin) {vincular};
\node[] at (0,0) (cl) {classical};
\node[] at (-3,4) (1bar) {$1$-barred};
\node[] at (3,4) (inter) {interval};
\node[] at (3,2) (Bru) {Bruhat-restricted};
\node[] at (-3,5) (McG) {McGovern};
\draw[-] (mm) to (m);
\draw[-] (mm) to (McG);
\draw[-] (m) to (1bar);
\draw[-] (m) to (biv);
\draw[-] (m) to (inter);
\draw[-] (inter) to (Bru);
\draw[-] (biv) to (vin);
\draw[-] (vin) to (cl);
\end{tikzpicture}
\caption{A diagram showing a hierarchy of permutation pattern definitions. $1$-barred refers
to barred patterns with one bar}\label{fig:diagram}
\end{figure}
There are still other definitions of permutation patterns such as \emph{grid patterns}, defined by
Huczynska and Vatter~\cite{MR2240760}.
The author does not know where they fit into the hierarchy in Figure \ref{fig:diagram}.
\section*{Acknowledgements}
The author thanks Einar Steingr\'imsson and Alexander Woo for many helpful conversations, and Sara Billey
for suggesting the title. The author also thanks an anonymous referee for many helpful comments.
This work is supported by grant no.\ 090038011 from the Icelandic Research Fund.
\bibliographystyle{amsplain}
| {
"timestamp": "2011-06-07T02:07:06",
"yymm": "1002",
"arxiv_id": "1002.4361",
"language": "en",
"url": "https://arxiv.org/abs/1002.4361",
"abstract": "We obtain new connections between permutation patterns and singularities of Schubert varieties, by giving a new characterization of Gorenstein varieties in terms of so called bivincular patterns. These are generalizations of classical patterns where conditions are placed on the location of an occurrence in a permutation, as well as on the values in the occurrence. This clarifies what happens when the requirement of smoothness is weakened to factoriality and further to Gorensteinness, extending work of Bousquet-Melou and Butler (2007), and Woo and Yong (2006). We also show how mesh patterns, introduced by Branden and Claesson (2011), subsume many other types of patterns and define an extension of them called marked mesh patterns. We use these new patterns to further simplify the description of Gorenstein Schubert varieties and give a new description of Schubert varieties that are defined by inclusions, introduced by Gasharov and Reiner (2002). We also give a description of 123-hexagon avoiding permutations, introduced by Billey and Warrington (2001), Dumont permutations and cycles in terms of marked mesh patterns.",
"subjects": "Combinatorics (math.CO); Algebraic Geometry (math.AG)",
"title": "A unification of permutation patterns related to Schubert varieties",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.983596967003007,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7099044227823194
} |
https://arxiv.org/abs/1303.6078 | The Bishop-Phelps-Bollobás theorem for operators on $L_1(μ)$ | In this paper we show that the Bishop-Phelps-Bollobás theorem holds for $\mathcal{L}(L_1(\mu), L_1(\nu))$ for all measures $\mu$ and $\nu$ and also holds for $\mathcal{L}(L_1(\mu),L_\infty(\nu))$ for every arbitrary measure $\mu$ and every localizable measure $\nu$. Finally, we show that the Bishop-Phelps-Bollobás theorem holds for two classes of bounded linear operators from a real $L_1(\mu)$ into a real $C(K)$ if $\mu$ is a finite measure and $K$ is a compact Hausdorff space. In particular, one of the classes includes all Bochner representable operators and all weakly compact operators. | \section{Introduction}
The celebrated Bishop-Phelps theorem of 1961 \cite{BP} states that for a Banach space $X$, every element in its dual space $X^*$ can be approximated by ones that attain their norms. Since then, there has been extensive research on extending this result to bounded linear operators between Banach spaces \cite{Bou,JW,Lin, Par, Sch} and to non-linear mappings \cite{AAP, AGM, AFW, C, CK1, KL}. On the other hand, Bollob\'as \cite{Bol}, motivated by problems arising in the theory of numerical ranges, sharpened the Bishop-Phelps theorem in 1970, obtaining what is nowadays called the Bishop-Phelps-Bollob\'as theorem. Before presenting this result, let us introduce some notation. Given a (real or complex) Banach space $X$, we write $B_X$ for its unit ball, $S_X$ for its unit sphere, and $X^*$ for its topological dual space. If $Y$ is another Banach space, we write $\mathcal{L}(X,Y)$ to denote the space of all bounded linear operators from $X$ into $Y$.
\begin{theorem}[Bishop-Phelps-Bollob\'{a}s theorem]\label{thm:BPBTheorem}
Let $X$ be a Banach space. If $x\in S_X$ and $x^*\in S_{X^*}$ satisfy $|x^*(x)-1|< \varepsilon^2/4$, then there exist $y\in
S_X$ and $y^*\in S_{X^*}$ such that $y^*(y)=1$, $\|x^*-y^*\|<\varepsilon$ and $\|x-y\|<\varepsilon$.
\end{theorem}
In 2008, Acosta, Aron, Garc\'ia and Maestre \cite{AAGM2} introduced the Bishop-Phelps-Bollob\'as property to study
extensions of the theorem above to operators between Banach spaces.
\begin{definition}
Let $X$ and $Y$ be Banach spaces. The pair $(X,Y)$ is said to have the \emph{Bishop-Phelps-Bollob\'as property} (\emph{BPBp}) if for every $0<\varepsilon<1$, there is $\eta(\varepsilon)>0$ such that for every $T\in \mathcal{L}(X,Y)$ with $\|T\|=1$ and $x_0 \in S_X$ satisfying $\|T(x_0)\|>1-\eta(\varepsilon)$, there exist $y_0\in S_X$ and $S\in \mathcal{L}(X,Y)$ with $\|S\|=1$ satisfying the following conditions:
\[
\|Sy_0\|=1, \qquad \|y_0 - x_0 \|<\varepsilon, \quad \text{and} \quad \|S- T\|<\varepsilon.
\]
In this case, we also say that the Bishop-Phelps-Bollob\'{a}s theorem holds for $\mathcal{L}(X,Y)$.
\end{definition}
This property has been studied by many authors; see for instance \cite{ABGM, ACK, ACKLM, CasGuiKad, CK2, ChoiKimSK, Kim-c_0, KimLee}. Observe that the BPBp of a pair $(X,Y)$ obviously implies that the set of norm-attaining operators is dense in $\mathcal{L}(X,Y)$. However, the converse is false, as shown by the pair $(X,Y)$ where $X$ is the $2$-dimensional $L_1$-space and $Y$ is a strictly, but not uniformly, convex space (see \cite{AAGM2} or \cite{ACKLM}). Let us also comment that the Bishop-Phelps-Bollob\'{a}s theorem states that the pair $(X,\mathbb{K})$ has the Bishop-Phelps-Bollob\'{a}s property for every Banach space $X$ ($\mathbb{K}$ being the base scalar field $\mathbb{R}$ or $\mathbb{C}$).
In this paper we first deal with the problem of when the pair $(L_p(\mu),L_q(\nu))$ has the BPBp. Let us start with a presentation of both the already known results and our new ones. Iwanik \cite{Iwa} showed in 1979 that the set of norm-attaining operators from $L_1(\mu)$ to $L_1(\nu)$ is dense in the space $\mathcal{L}(L_1(\mu),L_1(\nu))$ for arbitrary measures $\mu$ and $\nu$. Our first main result in this paper is that the pair $(L_1(\mu),L_1(\nu))$ has the BPBp. This is the content of section~\ref{sec:L_1-L_1}.
On the other hand, Aron et al.\ \cite{ACGM} showed that if $\mu$ is a $\sigma$-finite measure, then the pair $(L_1(\mu), L_\infty[0,1])$ has the BPBp, improving a result of Finet and Pay\'{a} \cite{FP} about the denseness of norm-attaining operators. We generalize this result in section~\ref{sec:L_1-L_infty} showing that $(L_1(\mu), L_\infty(\nu))$ has the BPBp for every measure $\mu$ and every localizable measure $\nu$. This is also a strengthening of a result of Pay\'a and Saleh \cite{PS} which stated only the denseness of norm-attaining operators.
One of the tools used to prove the results above is the fact that one can reduce the proofs to some particular measures. We develop this idea in section~\ref{sec:preliminary}, where, as its first easy application, we extend to arbitrary measures $\mu$ the result in \cite{CK2} that $(L_1(\mu),L_p(\nu))$ has the BPBp for $\sigma$-finite measures $\mu$.
The following result summarizes all that is known about the BPBp for the pair $(L_p(\mu),L_q(\nu))$.
\begin{corollary}
The pair $(L_p(\mu), L_q(\nu))$ has the BPBp
\begin{itemize}
\item[(1)] for all measures $\mu$ and $\nu$ if $p=1$ and $1\leq q<\infty$.
\item[(2)] for any measure $\mu$ and any localizable measure $\nu$ if $p=1$, $q=\infty$.
\item[(3)] for all measures $\mu$ and $\nu$ if $1<p<\infty$ and $1\leq q\leq \infty$.
\item[(4)] for all measures $\mu$ and $\nu$ if $p=\infty$, $q=\infty$, in the real case.
\end{itemize}
\end{corollary}
(1) and (2) follow from the results of this paper (Corollary~\ref{cor:L1Lq}, Theorem~\ref{thm:main0} and Theorem~\ref{thm:localizable}). Since $L_p(\mu)$ is uniformly convex when $1<p<\infty$, (3) follows from \cite{ABGM, KimLee} in the $\sigma$-finite case, generalized here to arbitrary measures $\mu$ (Corollary~\ref{cor:L1Lq}). Finally, (4) follows from \cite{ABCCKLLM}, because every $L_\infty$ space is isometrically isomorphic to a $C(K)$ space.
As far as we know, the cases $(L_\infty(\mu), L_q(\nu))$ for $1\leq q<\infty$ and the complex case of (4) remain open.
Let $\mu$ be a finite measure. Since any $L_\infty$ space is isometrically isomorphic to $C(K)$ for some compact Hausdorff space $K$, it is natural to ask when $(L_1(\mu), C(K))$ has the BPBp. Schachermayer \cite{Sch2} showed that the set of all norm-attaining operators is not dense in $\mathcal{L}(L_1[0,1],C[0,1])$. Hence, $(L_1[0,1], C[0,1])$ cannot have the BPBp. On the other hand, Johnson and Wolfe \cite{JW} proved that if $X$ is a Banach space and if either $Y$ or $Y^*$ is a $L_1(\mu)$ space, then every compact operator from $X$ into $Y$ can be approximated by norm-attaining finite-rank operators. They also showed that every weakly compact operator from $L_1(\mu)$ into $C(K)$ can be approximated by norm-attaining weakly compact ones. In this direction, Acosta et al.\ have shown that $(L_1(\mu), Y)$ has the BPBp for representable operators (in particular, for weakly compact operators) if $(\ell_1, Y)$ has the BPBp, and this is the case of $Y=C(K)$ \cite{ABGKM}.
On the other hand, Iwanik \cite{Iwa2} studied two classes of bounded linear operators from a real $L_1(\mu)$ space to a real $C(K)$ space such that every element of each class can be approximated by norm-attaining elements, and showed that one of the classes strictly contains all Bochner representable operators and all weakly compact operators. In section~\ref{sec:L_1-C(K)}, we deal with Bishop-Phelps-Bollob\'as versions of these results of Iwanik. In particular, we show that for every $0<\varepsilon<1$, there is $\eta(\varepsilon)>0$ such that if $T\in\mathcal{L}(L_1(\mu),C(K))$ with $\|T\|=1$ is Bochner representable (resp.\ weakly compact) and $f_0\in S_{L_1(\mu)}$ satisfy $\|T f_0\|>1-\eta(\varepsilon)$, then there are a Bochner representable (resp.\ weakly compact) operator $S\in\mathcal{L}(L_1(\mu),C(K))$ and $f\in S_{L_1(\mu)}$ such that $\|S f\|=\|S\|=1$, $\|S-T\|<\varepsilon$ and $\|f-f_0\|<\varepsilon$.
Let us finally comment that the proofs presented in sections \ref{sec:L_1-L_1} and \ref{sec:L_1-L_infty} are written for the complex case. The corresponding proofs for the real case are easily (indeed, more easily) obtained from the ones presented there.
\section{Some preliminary results}\label{sec:preliminary}
We start with some terminology and known facts about $L_1(\mu)$. Suppose that $(\Omega,\Sigma,\mu)$ is an arbitrary measure space and put $X=L_1(\mu)$. Suppose $G$ is a countable subset of $X$. Since the closed linear span $[G]$ of $G$ is separable, it is contained in the closed linear span of a countable set $\{ \chi_{E_n} \}$ of characteristic functions of measurable subsets with finite positive measure. Let $E= \bigcup_{n} E_n$ and $Z = \{ f\chi_E\, :\, f\in X\}$. Then $Z= L_1(\mu|_E)$, where $\mu|_E$ is the restriction of the measure $\mu$ to the $\sigma$-algebra $\Sigma|_E = \{ E\cap A\, :\, A\in \Sigma\}$. Since $\mu|_E$ is $\sigma$-finite, $Z$ is isometrically (lattice) isomorphic to $L_1(m)$ for some positive finite Borel regular measure $m$ defined on a compact Hausdorff space by the Kakutani representation theorem (see \cite{Lac} for a reference). This space $Z$ is called the {\it band} generated by $G$, and the {\it canonical band projection} $P:X\longrightarrow Z$, defined by $P(f) := f\chi_E$ for $f\in X$, satisfies $\|f\| = \|Pf\| + \|(\Id-P)f\|$ for all $f\in X$. For more details, we refer the reader to the classical books \cite{Lac, Schaefer}.
Next, we present the following equivalent formulation of the BPBp from \cite{ACKLM} which helps to better understand the property and will be useful for our preliminary results. Given a pair $(X,Y)$ of Banach spaces, let
$$
\Pi(X,Y)= \{(x,T)\in X\times \mathcal{L}(X,Y)\,:\, \|T\|=\|x\|=\|Tx\|=1\}
$$
and define, for $0<\varepsilon<1$,
\begin{equation*}
\eta(X,Y)(\varepsilon)=\inf\bigl\{1-\|Tx\|\,:\ x\in S_X,\, T\in \mathcal{L}(X,Y),\, \|T\|=1,\ \dist\bigl((x,T),\Pi(X,Y)\bigr)\geq\varepsilon\bigr\},
\end{equation*}
where $\dist\bigl((x,T),\Pi(X,Y)\bigr)= \inf\bigl\{\max\{\|x-y\|,\|T-S\|\}\ :\ (y,S)\in \Pi(X,Y)\bigr\}$.
Equivalently, for every $\varepsilon\in (0,1)$, $\eta(X,Y)(\varepsilon)$ is the supremum of those $\xi\geq 0$ such that whenever $T\in \mathcal{L}(X,Y)$ with $\|T\|=1$ and $x\in S_X$ satisfy $\|Tx\|\geq 1-\xi$, then there exists $(y,S)\in \Pi(X,Y)$ with $\|T-S\|\leq \varepsilon$ and $\|x-y\|\leq \varepsilon$. It is clear that $(X,Y)$ has the BPBp if and only if $\eta(X,Y)(\varepsilon)>0$ for all $0<\varepsilon<1$.
Our first preliminary result deals with operators acting on an $L_1(\mu)$ space and shows that the proof of some results can be reduced to the case when $\mu$ is a positive finite Borel regular measure defined on a compact Hausdorff space.
\begin{proposition}\label{prop:reduction}
Let $Y$ be a Banach space. Suppose that there is a function $\eta: (0,1)\longrightarrow (0, \infty)$ such that
$$
\eta\bigl(L_1(m),Y\bigr)(\varepsilon)\geq \eta(\varepsilon)>0 \qquad (0<\varepsilon<1)
$$
for every positive finite Borel regular measure $m$ defined on a compact Hausdorff space. Then, for every measure $\mu$, the pair $(L_1(\mu),Y)$ has the BPBp with $\eta\bigl(L_1(\mu), Y\bigr)\geq \eta$.
Moreover, if $Y=L_1(\nu)$ for an arbitrary measure $\nu$, then it is enough to show that
$$
\eta\bigl(L_1(m_1),L_1(m_2)\bigr)(\varepsilon)\geq \eta(\varepsilon)>0 \qquad (0<\varepsilon<1)
$$
for all positive finite Borel regular measures $m_1$ and $m_2$ defined on Hausdorff compact spaces in order to get that $(L_1(\mu),L_1(\nu))$ has the BPBp with $\eta\bigl(L_1(\mu), L_1(\nu)\bigr)\geq \eta$.
\end{proposition}
\begin{proof}
Let $0<\varepsilon<1$. Suppose that $T\in\mathcal{L}(L_1(\mu),Y)$ is a norm-one operator and $f_0\in S_{L_1(\mu)}$ satisfy $\|Tf_0\|>1-\eta(\varepsilon)$. Let $\{f_n\}_{n=1}^\infty$ be a sequence in $L_1(\mu)$ such that $\|f_n\|\leq 1$ for all $n$ and $\lim_{n\to \infty} \|Tf_n\|=\|T\|=1$. The band $X_1$ generated by $\{f_n\,:\, n\geq 0\}$ is isometric to $L_1(J, m)$ for a finite positive Borel regular measure $m$ defined on a compact Hausdorff space $J$ by the Kakutani representation theorem. Let $T_1$ be the restriction of $T$ to $X_1$. Then $\|T_1\|=1$ and $\|T_1 f_0\| > 1- \eta(\varepsilon)$. By the assumption, there exist a norm-one operator $S_1: X_1 \longrightarrow Y$ and $g\in S_{X_1}$ such that $\|S_1g\|=1$, $\|T_1-S_1\|< \varepsilon$ and $\|f_0-g\|< \varepsilon$. Let $P$ denote the canonical band projection from $L_1(\mu)$ onto $X_1$. Then $S := S_1P + T(\Id-P)$ is a norm-one operator from $L_1(\mu)$ to $Y$, $g$ can be viewed as a norm-one element in $S_{L_1(\mu)}$ (just extending by $0$), $\|Sg\|=1$, $\|S- T\|< \varepsilon$ and $\|f_0-g\|< \varepsilon$. This completes the proof of the first part of the proposition.
In the case when $Y=L_1(\nu)$, we observe that the image $T(X_1)$ is also contained in a band $Y_1$ of $L_1(\nu)$ which, again, is isometric to $L_1(m_2)$ for a finite positive Borel regular measure $m_2$ on a compact Hausdorff space $J_2$. Now, we work with the restriction of $T$ to $X_1$ with values in $Y_1$, follow the proof of the first part, and finally consider the operator $S$ obtained there as an operator with values in $L_1(\nu)$ (just composing with the formal inclusion of $Y_1$ into $L_1(\nu)$).
\end{proof}
Since for every positive finite Borel regular measure $m$ defined on a compact Hausdorff space, $L_1(m)$ is isometric to $L_1(\mu)$ for a probability measure $\mu$, we get the following.
\begin{corollary}\label{cor:main1} Let $Y$ be a Banach space.
Suppose that there is a strictly positive function $\eta: (0,1)\longrightarrow (0, \infty)$ such that $\eta\bigl(L_1(\mu_1),Y\bigr) \geq \eta$ for every probability measure $\mu_1$. Then $(L_1(\mu),Y)$ has the BPBp for every measure $\mu$, with $\eta\bigl(L_1(\mu),Y\bigr)\geq \eta$.
\end{corollary}
Let us present the first application of the above results. For a $\sigma$-finite measure $\mu_1$, it is shown in \cite{CK2} that $(L_1(\mu_1), Y)$ has the BPBp if $Y$ has the Radon-Nikod\'ym property and $(\ell_1,Y)$ has the BPBp. By following the proof of \cite[Theorem~2.2]{CK2}, we conclude that there is a strictly positive function $\eta_Y: (0,1)\longrightarrow (0,\infty)$ such that $\eta(L_1(\mu_1), Y)\geq \eta_Y$ for every probability measure $\mu_1$. Therefore, the corollary above provides the same result without the assumption of $\sigma$-finiteness. We also recall that $L_q(\nu)$ is uniformly convex for all $1<q<\infty$ and for all measures $\nu$, so it has the Radon-Nikod\'ym property and $(\ell_1,L_q(\nu))$ has the BPBp \cite{AAGM2}. Hence we get the following.
\begin{corollary}\label{cor:L1Lq} Let $\mu$ be an arbitrary measure. If $Y$ is a Banach space with the Radon-Nikod\'ym property and such that $(\ell_1,Y)$ has the BPBp, then the pair $(L_1(\mu), Y)$ has the BPBp. In particular, $(L_1(\mu), L_q(\nu))$ has the BPBp for all $1<q<\infty$ and all arbitrary measures $\nu$.
\end{corollary}
We now deal with operators with values in an $\ell_\infty$-sum of Banach spaces, presenting the following result from \cite{ACKLM} which we will use in section~\ref{sec:L_1-L_infty}. Given a family $\{Y_j\, :\, j\in J\}$ of Banach spaces, we
denote by $\left[\bigoplus_{j\in J} Y_j\right]_{\ell_\infty}$ the
$\ell_\infty$-sum of the family.
\begin{proposition}[\mbox{\cite{ACKLM}}]\label{prop:ACKLM}
Let $X$ be a Banach space and let $\{Y_j\, :\, j\in J\}$ be a family of Banach spaces and let $Y=\left[\bigoplus_{j\in J} Y_j\right]_{\ell_\infty}$ denote their $\ell_\infty$-sum. If $\inf\limits_{j\in J} \eta(X,Y_j)(\varepsilon)>0$ for all $0<\varepsilon<1$, then
$\left(X, Y\right)$ has the BPBp with
$$
\eta(X,Y)= \inf\limits_{j\in J} \eta(X,Y_j).
$$
\end{proposition}
We will use this result for operators with values in $L_\infty(\nu)$. To present the result, we first recall that given a localizable measure $\nu$, we have the following representation
\begin{equation}\label{eq:repre_L_infty}
L_\infty(\nu) \equiv \left[ \bigoplus\nolimits_{j\in J} Y_j \right]_{\ell_\infty},
\end{equation}
where each space $Y_j$ is either $1$-dimensional or of the form $L_\infty([0,1]^\Lambda)$ for some finite or infinite set $\Lambda$, with $[0,1]^\Lambda$ endowed with the product of Lebesgue measures. We refer to \cite{Lac} for background. With this in mind, the following corollary follows from the proposition above.
\begin{corollary}\label{cor:to_L_infty-localizable}
Let $X$ be a Banach space. Suppose that there is a strictly positive function $\eta:(0,1)\longrightarrow (0,\infty)$ such that
$$
\eta\bigl(X,L_\infty([0,1]^\Lambda) \bigr)(\varepsilon)\geq \eta(\varepsilon) \qquad (0<\varepsilon<1)
$$
for every finite or infinite set $\Lambda$. Then the pair $(X,L_\infty(\nu))$ has the BPBp for every localizable measure $\nu$ with
$$
\eta\bigl(X,L_\infty(\nu)\bigr)(\varepsilon)\geq \min\{\eta(\varepsilon),\varepsilon^2/2\} \qquad (0<\varepsilon<1).
$$
\end{corollary}
The proof is just an application of Proposition~\ref{prop:ACKLM}, the representation formula given in \eqref{eq:repre_L_infty} and the Bishop-Phelps-Bollob\'{a}s theorem (Theorem~\ref{thm:BPBTheorem}).
Let us comment that the analogue of Proposition~\ref{prop:ACKLM} is false for $\ell_1$-sums in the domain space (see \cite{ACKLM}), so Proposition~\ref{prop:reduction} cannot be derived directly from the decomposition of $L_1(\mu)$ spaces analogous to \eqref{eq:repre_L_infty}.
Before finishing this section, we state the following lemma of \cite{AAGM2} which we will frequently use afterwards.
\begin{lemma}[\mbox{\cite[Lemma~3.3]{AAGM2}}]\label{elementary}
Let $\{c_n\}$ be a sequence of complex numbers with $|c_n|\leq 1$ for every $n$, and let $\eta>0$ be such that for a convex series $\sum \alpha_n$, $\re \sum_{n=1}^\infty \alpha_n c_n >1-\eta$. Then for every $0<r<1$, the set $A : = \{ i \in \mathbb{N} : \re c_i > r \}$, satisfies the estimate
\[ \sum_{i\in A} \alpha_i \geq 1-\frac{\eta}{1-r}.\]
\end{lemma}
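For the reader's convenience, we sketch the short argument behind this estimate: writing $s=\sum_{i\notin A}\alpha_i$ and using that $\re c_i\leq 1$ for $i\in A$ while $\re c_i\leq r$ for $i\notin A$, we get $1-\eta<\re\sum_{n=1}^\infty \alpha_n c_n\leq (1-s)+rs=1-s(1-r)$, hence $s<\eta/(1-r)$ and so $\sum_{i\in A}\alpha_i=1-s\geq 1-\frac{\eta}{1-r}$.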
\section{The Bishop-Phelps-Bollob\'as property of $(L_1(\mu),L_1(\nu))$}\label{sec:L_1-L_1}
Our goal in this section is to prove the following result.
\begin{theorem}\label{thm:main0}
Let $\mu$ and $\nu$ be arbitrary measures. Then the pair $(L_1(\mu), L_1(\nu))$ has the BPBp. Moreover, there exists a strictly positive function $\eta:(0,1)\longrightarrow (0,\infty)$ such that
$$
\eta\bigl(L_1(\mu),L_1(\nu)\bigr)(\varepsilon)\geq \eta(\varepsilon) \qquad (0<\varepsilon<1).
$$
\end{theorem}
By Proposition~\ref{prop:reduction}, it is enough to get the result for finite regular positive Borel measures defined on compact Hausdorff spaces. Therefore, Theorem~\ref{thm:main0} follows directly from the next result.
\begin{theorem}\label{thm:main1}
Let $m_1$ and $m_2$ be finite regular positive Borel measures on compact Hausdorff spaces $J_1$ and $J_2$, respectively.
Let $0<\varepsilon<1$ and suppose that $T\in \mathcal{L}(L_1(m_1), L_1(m_2))$ with $\|T\|=1$ and $f_0\in S_{L_1(m_1)}$ satisfy $\|Tf_0\|>1-\frac{\varepsilon^{18}}{5^32^{27}}$. Then there are $S\in S_{\mathcal{L}(L_1(m_1), L_1(m_2))}$ and $g\in S_{L_1(m_1)}$ such that
\[
\|Sg\|=1,\quad \|f_0-g\|< 4\varepsilon\quad \text{and} \quad \|T-S\|< 4\sqrt\varepsilon.
\]
\end{theorem}
Prior to presenting the proof of this theorem, we recall the following representation result for operators from $L_1(m_1)$ into $L_1(m_2)$. As announced in the introduction, we deal only with complex spaces, the real case being easily (indeed, more easily) deducible from the proof of the complex case.
Let $m_1$ and $m_2$ be finite regular positive Borel measures on compact Hausdorff spaces $J_1$ and $J_2$, respectively. For a complex-valued Borel measure $\mu$ on the product space $J_1\times J_2$, we define their marginal measures $\mu^i$ on $J_i$ ($i=1,2$) as follows:
\[
\mu^1(A) = \mu(A\times J_2) \quad \text{ and } \quad \mu^2(B)= \mu(J_1\times B),
\]
where $A$ and $B$ are Borel measurable subsets of $J_1$ and $J_2$, respectively.
Let $M(m_1, m_2)$ be the complex Banach lattice consisting of all complex-valued Borel measures $\mu$ on the product space $J_1\times J_2$ such that each $|\mu|^i$ is absolutely continuous with respect to $m_i$ for $i=1,2$, equipped with the norm \[\nor{ \frac{d|\mu|^1}{dm_1}}_\infty.\] It is clear that to each $\mu\in M(m_1, m_2)$ there corresponds a unique bounded linear operator $T_\mu\in \mathcal{L}(L_1(m_1), L_1(m_2))$ defined by
\[ \inner{T_\mu (f), g} = \int_{J_1\times J_2} f(x)g(y) \, d\mu(x,y),\]
where $f\in L_1(m_1)$ and $g\in L_\infty(m_2)$.
Iwanik \cite{Iwa} showed that the mapping $\mu\longmapsto T_\mu$ is a surjective lattice isomorphism and
\[ \|T_\mu\| = \nor{ \frac{d|\mu|^1}{dm_1}}_\infty.\]
Even though he showed this for the real case, it can be easily generalized to the complex case. For details, see \cite[Theorem~1]{Iwa} and \cite[IV Theorem 1.5 (ii), Corollary 2]{Schaefer}.
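As a simple illustration of this correspondence (the example is ours, not taken from \cite{Iwa}), take $m_1=m_2=m$ on $J_1=J_2=J$ and let $\mu_\Delta$ be the image of $m$ under the map $x\longmapsto (x,x)$. Then $\inner{T_{\mu_\Delta}(f), g} = \int_{J} f(x)g(x)\, dm(x)$ for all $f\in L_1(m)$ and $g\in L_\infty(m)$, so $T_{\mu_\Delta}$ is the identity operator; accordingly $|\mu_\Delta|^1=m$, $\frac{d|\mu_\Delta|^1}{dm}=1$ and $\|T_{\mu_\Delta}\|=1$.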
Since the proof of Theorem~\ref{thm:main1} is complicated, we divide it into the following two lemmas.
\begin{lemma}\label{lem:complex1}
Let $0<\varepsilon<1$. Suppose that $T_\mu$ is an element of $\mathcal{L}(L_1(m_1), L_1(m_2))$ with $\|T_\mu\|=1$ for some $\mu\in M(m_1, m_2)$ and that $f_0\in S_{L_1(m_1)}$ is a nonnegative simple function such that $\|T_\mu f_0 \|> 1-\frac{\varepsilon^3}{2^6}$. Then there are a norm-one bounded linear operator $T_\nu$ for some $\nu\in M(m_1, m_2)$ and a nonnegative simple function $f_1$ in $S_{L_1(m_1)}$ such that
\[ \norm{T_\mu - T_\nu}<\varepsilon,\ \ \ \|f_1- f_0\|< 3\varepsilon\]
and we have, for all $x\in {\rm supp}( f_1)$,
\[\frac{d|\nu|^1}{dm_1}(x)=1.\]
\end{lemma}
\begin{proof}
Let $f_0 = \sum_{j=1}^n \alpha_j \frac{\chi_{B_j}}{m_1(B_j)}$, where $\{B_j\}_{j=1}^n$ are mutually disjoint Borel subsets of $J_1$, $\alpha_j\geq 0$ and $m_1(B_j)>0$ for all $1\leq j\leq n$, and $\sum_{j=1}^n \alpha_j=1$. Let $D=\{ x\in J_1 : \frac{d|\mu|^1}{dm_1}(x) > 1-\frac{\varepsilon}8\}$. It is clear that $m_1(D)>0$. Since $\|T_\mu f_0 \|> 1-\frac{\varepsilon^3}{2^6}$, there is $g_0\in S_{L_\infty(m_2)}$ such that
\[\re\inner{T_\mu f_0, g_0} > 1-\frac{\varepsilon^3}{2^6}.\]
Let
\[ J = \left\{ j\in\{1,\ldots,n\}\, :\, \frac{1}{m_1(B_j)} \int_{B_j} \frac{d|\mu|^1}{dm_1}(x) \,dm_1(x) > 1-\frac{\varepsilon^2}{2^6}\right\}.
\]
Then we have
\[\sum_{j\in J} \alpha_j \geq 1-\varepsilon>0.\]
Indeed, since
\begin{align*}
1-\frac{\varepsilon^3}{2^6} &< \re\inner{T_\mu f_0, g_0} = \re \int_{J_1\times J_2} f_0(x) g_0(y)\, d\mu(x,y)\\
&\leq \int_{J_1\times J_2} |f_0(x)| \, d|\mu|(x,y) = \int_{J_1} f_0(x)\, d|\mu|^1(x)\\
&=\sum_{j=1}^n \alpha_j \frac{1}{m_1(B_j)} \int_{B_j} d|\mu|^1(x)=\sum_{j=1}^n \alpha_j \frac{1}{m_1(B_j)} \int_{B_j} \frac{d|\mu|^1}{dm_1}(x)\, dm_1(x),
\end{align*} we have $\sum_{j\in J} \alpha_j \geq 1-\varepsilon>0$ by Lemma~\ref{elementary}.
Note also that for each $j\in J$,
\begin{align*}
1-\frac{\varepsilon^2}{2^6} &< \frac{1}{m_1(B_j)} \int_{B_j} \frac{d|\mu|^1}{dm_1}(x)\, dm_1(x) \\
&= \frac{1}{m_1(B_j)} \int_{B_j\cap D} \frac{d|\mu|^1}{dm_1}(x)\, dm_1(x) + \frac{1}{m_1(B_j)} \int_{B_j\setminus D} \frac{d|\mu|^1}{dm_1}(x)\, dm_1(x) \\
&\leq \frac{m_1(B_j\cap D)}{m_1(B_j)} + \left( 1-\frac{\varepsilon}8\right) \frac{m_1(B_j\setminus D)}{m_1(B_j)}\\
&=1-\frac{\varepsilon}8 \frac{m_1(B_j\setminus D)}{m_1(B_j)}.
\end{align*} Hence we deduce that, for all $j\in J$,
\[\frac{m_1(B_j\setminus D)}{m_1(B_j)}\leq \frac{\varepsilon}8.\]
Let $\tilde{B_j}=B_j\cap D$ and $\beta_j =\frac{ \alpha_j }{\sum_{j\in J} \alpha_j}$ for all $j\in J$ and define \[ f_1 = \sum_{j\in J} \beta_j \frac{\chi_{\tilde{B_j}}}{m_1(\tilde{B_j} )}.\]
It is clear that $f_1$ is a nonnegative element in $S_{L_1(m_1)}$ and
\begin{align*}
\|f_0 - f_1\| &\leq \nor{ \sum_{j\in J} \alpha_j \frac{\chi_{B_j}}{m_1(B_j)} - \sum_{j\in J} \beta_j \frac{\chi_{\tilde{B_j}}}{m_1(\tilde{B_j} )} } + \nor{ \sum_{j \in \{1, \dots, n\} \setminus J} \alpha_j \frac{\chi_{B_j}}{m_1(B_j)} }\\
&\leq \nor{ \sum_{j\in J} \alpha_j \left( \frac{\chi_{B_j}}{m_1(B_j)} - \frac{\chi_{\tilde{B_j}}}{m_1(\tilde{B_j} )} \right) } + \nor{ \sum_{j\in J} (\alpha_j-\beta_j) \frac{\chi_{\tilde{B_j}}}{m_1(\tilde{B_j} )} } + \sum_{j \in \{1, \dots, n\} \setminus J} \alpha_j \\
&< \nor{ \sum_{j\in J} \alpha_j \left( \frac{\chi_{B_j}}{m_1(B_j)} - \frac{\chi_{\tilde{B_j}}}{m_1(\tilde{B_j} )} \right) } + \sum_{j\in J} |\alpha_j-\beta_j| +\varepsilon\\
&\leq \nor{ \sum_{j\in J} \alpha_j \left( \frac{\chi_{B_j}}{m_1(B_j)} - \frac{\chi_{\tilde{B_j}}}{m_1(B_j)} \right) } + \nor{ \sum_{j\in J} \alpha_j \left( \frac{\chi_{\tilde{B_j}}}{m_1(B_j)} - \frac{\chi_{\tilde{B_j}}}{m_1(\tilde{B_j} )} \right) } + \left|1-\sum_{j\in J} \alpha_j\right| + \varepsilon\\
&\leq 2 \sum_{j\in J} \alpha_j \frac{m_1(B_j\setminus D)}{m_1(B_j)} + 2\varepsilon\leq 3\varepsilon.\\
\end{align*}
Define
\[d\nu(x,y) = \sum_{j\in J} \chi_{\tilde{B_j}}(x) \left(\frac{d|\mu|^1}{dm_1}(x)\right)^{-1} d\mu(x, y) + \chi_{J_1\setminus \tilde{B}}(x)\,d\mu(x,y),\] where $\tilde{B} = \bigcup_{j\in J} \tilde{B_j}$. It is clear that $\frac{d|\nu|^1}{dm_1}(x) =1$ on $\tilde{B}$ and $\frac{d|\nu|^1}{dm_1}(x)\leq 1$ elsewhere. Note also that for all $x\in J_1$,
\begin{align*}
\frac{d|\nu-\mu|^1}{dm_1}(x) &= \sum_{j\in J} \chi_{\tilde{B_j}}(x) \left(\left(\frac{d|\mu|^1}{dm_1}(x)\right)^{-1} -1\right) \frac{d|\mu|^1}{dm_1}\\
& \leq \frac{1}{1-\varepsilon/8}-1 =\frac{\varepsilon}{8-\varepsilon}< \varepsilon.
\end{align*}
Hence $T_\nu$ is a norm-one operator such that $\norm{T_\mu - T_\nu}<\varepsilon$, $\|f_1- f_0\|< 3\varepsilon$ and $\frac{d|\nu|^1}{dm_1}(x)=1$ for all $x\in {\rm supp}( f_1)$.
\end{proof}
\begin{lemma}\label{lem:complex2}
Let $0<\varepsilon<1$. Suppose that $T_\nu$ is a norm-one operator in $\mathcal{L}(L_1(m_1), L_1(m_2))$ and that $f$ is a nonnegative norm-one simple function in $S_{L_1(m_1)}$ satisfying $\|T_\nu f\| > 1-\frac{\varepsilon^6}{2^7}$ and $\frac{d|\nu|^1}{dm_1}(x)=1$ for all $x$ in the support of $f$. Then there are a nonnegative simple function $\tilde{f}$ in $S_{L_1(m_1)}$ and a norm-one operator $T_{\tilde\nu}$ in $\mathcal{L}(L_1(m_1), L_1(m_2))$ such that
\[
\|T_{\tilde\nu}\tilde f\|=1,\quad \|T_\nu - T_{\tilde\nu}\|< 3\sqrt\varepsilon\quad \text{and}\quad \|f-\tilde f\|< 3\varepsilon.
\]
\end{lemma}
\begin{proof}
Let $f= \sum_{j=1}^n \beta_j \frac{\chi_{B_j}}{m_1(B_j)},$
where $\{B_j\}_{j=1}^n$ are mutually disjoint Borel subsets of $J_1$, $\beta_j\geq 0$ and $m_1(B_j)>0$ for all $1\leq j\leq n$, and $\sum_{j=1}^n\beta_j=1$. Since $\|T_\nu f\| >1-\frac{\varepsilon^6}{2^7}$ , there is $g\in S_{L_\infty(m_2)}$ such that
\[
1-\frac{\varepsilon^6}{2^7}< \re \inner{ T_\nu f, g} = \sum_{j=1}^n \beta_j \re \int_{J_1\times J_2} \frac{\chi_{B_j}(x)}{m_1(B_j)} (g(y))\, d\nu(x,y).
\]
Let $\displaystyle J=\left \{ j\in\{1,\ldots,n\}\, :\,\re \int_{J_1\times J_2} \frac{\chi_{B_j}(x)}{m_1(B_j)}g(y)\, d\nu(x,y) > 1-\frac{\varepsilon^3}{2^6}\right\}$. From Lemma~\ref{elementary} it follows that
\[\sum_{j\in J}\beta_j >1-\frac{\varepsilon^3}{2}.\]
Let $f_1 = \sum_{j\in J} \tilde \beta_j \frac{\chi_{B_j}}{m_1(B_j)}$, where $\tilde \beta_j = \beta_j / (\sum_{j\in J}\beta_j )$ for all $j\in J$. Then
\[ \|f_1 - f\| \leq \nor{ \sum_{j\in J} (\tilde \beta_j - \beta_j) \frac{\chi_{B_j}}{m_1(B_j)} } + \sum_{j\in J} \beta_j \leq \varepsilon^3< \varepsilon.\]
Note that there is a Borel measurable function $h$ on $J_1\times J_2$ such that $d\nu(x,y) = h(x,y)\, d|\nu|(x,y)$ and $|h(x,y)|=1$ for all $(x,y)\in J_1\times J_2$. Let
\[ C= \left\{ (x,y) : |g(y)h(x,y)-1| < \frac{\sqrt\varepsilon}{2^{3/2}} \right\}.\]
Define two measures $\nu_{f}$ and $\nu_{c}$ as follows:
\begin{equation*}
\nu_f(A) = \nu(A\setminus C) \quad \text{and} \quad
\nu_c(A) = \nu(A\cap C)
\end{equation*}
for every Borel subset $A$ of $J_1\times J_2$. It is clear that
$$
d\nu= d\nu_{f} + d\nu_{c},\quad d|\nu_f| =\bar hd\nu_f,\quad d|\nu_c| =\bar hd\nu_c, \quad \text{and}\quad d|\nu| = d|\nu_f|+d|\nu_c|.
$$
Since $\frac{d|\nu|^1}{dm_1}(x)=1$ for all $x\in \bigcup_{j=1}^n B_j$, we have
\[
1=\frac{d|\nu|^1}{dm_1}(x) = \frac{d|\nu_{f}|^1}{dm_1}(x)+ \frac{d|\nu_{c}|^1}{dm_1}(x)
\]
for all $x\in B=\bigcup_{j=1}^n B_j$,
and we deduce that $|\nu|^1(B_j)=m_1(B_j)$ for all $1\leq j\leq n$.
We claim that $\frac{|\nu_{f}|^1(B_j) }{m_1(B_j)} \leq \frac{\varepsilon^2}{2^2}$ for all $j\in J$. Indeed, if $|g(y)h(x,y)-1|\geq \frac{\sqrt\varepsilon}{2^{3/2}}$, then $\re (g(y)h(x,y))\leq 1-\frac{\varepsilon}{2^4}$. So we have
\begin{align*}
1-\frac{\varepsilon^3}{2^6} &\leq \frac{1}{m_1(B_j)}\re \int_{J_1\times J_2} \chi_{B_j}(x) g(y)\, d\nu(x,y)\\
&= \frac{1}{m_1(B_j)} \int_{J_1\times J_2} \chi_{B_j}(x) \re\big(g(y)h(x,y)\big) \, d|\nu|(x,y)\\
&= \frac{1}{m_1(B_j)} \int_{J_1\times J_2} \chi_{B_j}(x) \re\big(g(y)h(x,y)\big) \, d|\nu_f|(x,y) \\
&\ \ \ \ \ + \frac{1}{m_1(B_j)} \int_{J_1\times J_2} \chi_{B_j}(x) \re\big(g(y)h(x,y)\big) \, d|\nu_c|(x,y) \\
&\leq \frac{1}{m_1(B_j)} \left((1-\frac{\varepsilon}{2^4}) |\nu_f|^1(B_j) +|\nu_c|^1(B_j)\right) \\
&= 1 - \frac{\varepsilon}{2^4} \frac{|\nu_{f}|^1(B_j)}{m_1(B_j)}.
\end{align*} This proves our claim.
We also claim that for each $j\in J$, there exists a Borel subset $\tilde B_j$ of $B_j$ such that
\[ \left(1-\frac{\varepsilon}2\right) m_1(B_j) \leq m_1(\tilde B_j) \leq m_1(B_j)\] and
\[ \frac{d|\nu_{f}|^1}{dm_1}(x)\leq \frac{\varepsilon}2 \] for all $x\in \tilde B_j$.
Indeed, set $\tilde B_j = B_j\cap \left\{ x \in J_1 : \frac{d|\nu_{f}|^1}{dm_1}(x)\leq\frac {\varepsilon}2\right\}$. Then
\[ \int_{B_j \setminus \tilde B_j} \frac{\varepsilon}2 \, dm_1(x) \leq \int_{B_j} \frac{d|\nu_{f}|^1}{dm_1}(x) \, dm_1(x) = |\nu_{f}|^1(B_j)\leq \frac{\varepsilon^2}{2^2}m_1(B_j). \] This shows that $m_1(B_j \setminus \tilde B_j) \leq \frac{\varepsilon}2 m_1(B_j)$. This proves our second claim.
Now, we define $\tilde{g}$ by $\tilde g(y) = \frac{g(y)}{|g(y)|}$ if $g(y)\neq 0$ and $\tilde g(y) = 1$ if $g(y)=0$, and we write $\tilde f = \sum_{j\in J} \tilde\beta_j \frac{\chi_{\tilde B_j} }{m_1(\tilde B_j) }$. Finally, we define the measure
\[
d\tilde \nu(x,y) = \sum_{j\in J} \chi_{\tilde B_j}(x) \overline{\tilde g(y)} \overline{ h(x, y)} d\nu_c(x,y) \left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} + \chi_{J_1\setminus \tilde B}(x) d\nu(x,y),
\]
where $\tilde B= \bigcup_{j\in J} \tilde B_j$. It is easy to see that $\frac{d|\tilde\nu|^1}{dm_1}(x) =1$ on $\tilde B$ and $\frac{d|\tilde\nu|^1}{dm_1}(x)\leq 1$ elsewhere. Note that
\begin{align*}
d(\tilde \nu- \nu)(x,y) &= \sum_{j\in J} \chi_{\tilde B_j}(x) \left[ \overline{\tilde g(y)} \overline{ h(x, y)}\left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} -1\right] d\nu_c(x,y) \\
&\ \ \ \ - \sum_{j\in J} \chi_{\tilde B_j}(x) d\nu_f(x,y).
\end{align*}
If $(x,y)\in C$, then $|g(y)|\geq 1-\frac{\sqrt{\varepsilon}}{2^{3/2}} \geq 1-\frac{1}{2^{3/2}}$ and
\begin{align*}
\left| \overline{\tilde g(y)} \overline{ h(x, y)}-1 \right| &= \left| \frac{g(y)}{|g(y)|} h(x, y)-1 \right| \\
& \leq \frac{ \left| g(y) h(x, y)-1 \right| }{|g(y)|} + \frac{\big|1-|g(y)|\big|}{|g(y)|} \\
&\leq 2 \frac{ \left| g(y) h(x, y)-1 \right| }{|g(y)|} \leq 2\frac{\sqrt{\varepsilon}}{2^{3/2}}\frac{2^{3/2}}{2^{3/2}-1}\leq 2\sqrt{\varepsilon}.
\end{align*}
Hence, for all $(x,y)\in C$ we have
\begin{align*}
\left| \overline{\tilde g(y)} \overline{ h(x, y)}\left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} -1\right| &\leq
\left| \overline{\tilde g(y)} \overline{ h(x, y)}-1 \right|~ \left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} + \left| \left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} -1 \right|\\
&\leq 2\sqrt{\varepsilon}~ \left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} +\left| \left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} -1 \right|.
\end{align*}
So, we have for all $x\in J_1$,
\begin{align*}
\frac{d| \tilde \nu - \nu|^1}{dm_1}(x) &\leq \sum_{j\in J} \chi_{\tilde B_j}(x) \left[ 2\sqrt{\varepsilon}~ \left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} +\left| \left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} -1 \right|
\right] \frac{d|\nu_c|^1}{dm_1}(x) \\
&\ \ \ \ + \sum_{j\in J} \chi_{\tilde B_j}(x) \frac{d|\nu_f|^1}{dm_1}(x)\\
&\leq \sum_{j\in J} \chi_{\tilde B_j}(x) \left( 2\sqrt{\varepsilon} + \left( 1-\frac{d|\nu_{c}|^1}{dm_1}(x) \right)\right) + \sum_{j\in J} \chi_{\tilde B_j}(x) \left( \frac{d|\nu_{f}|^1}{dm_1}(x) \right)\\
&\leq 2\sqrt{\varepsilon}+\varepsilon <3\sqrt{\varepsilon}.
\end{align*}
This gives that $\|T_\nu - T_{\tilde \nu} \|< 3\sqrt\varepsilon$. Note also that, for all $j\in J$,
\begin{align*}
\inner{ T_{\tilde \nu} \frac{\chi_{\tilde B_j}}{m_1(\tilde B_j)}, \tilde g} &= \int_{J_1\times J_2} \frac{\chi_{\tilde B_j}(x)}{m_1(\tilde B_j)} \tilde g(y) \,d\tilde \nu(x,y)\\
& =\int_{J_1\times J_2} \frac{\chi_{\tilde B_j}(x)}{m_1(\tilde B_j)} \overline{h(x,y)}\left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} \,\, d\nu_c(x,y)\\
&=\int_{J_1} \frac{\chi_{\tilde B_j}(x)}{m_1(\tilde B_j)}\left( \frac{d|\nu_{c}|^1}{dm_1}(x) \right)^{-1} \, d|\nu_c|^1(x) \\
&= \int_{J_1} \frac{\chi_{\tilde B_j}(x)}{m_1(\tilde B_j)} \, dm_1(x)=1.
\end{align*}
Hence we get $\inner{T_{\tilde \nu} \tilde f, \tilde g} =1$, which implies that $\|T_{\tilde \nu} \tilde f\|=\|T_{\tilde \nu}\|= 1$. Finally,
\begin{align*}
\|\tilde f - f\| &\leq \|\tilde f - f_1 \| + \|f_1 - f\|\\
& = \nor{ \sum_{j\in J} \tilde \beta_j \frac{\chi_{\tilde B_j}}{m_1(\tilde B_j)} - \sum_{j\in J} \tilde \beta_j \frac{\chi_{B_j}}{m_1(B_j)}} +\varepsilon \\
& \leq \sum_{j\in J} \tilde \beta_j \left(\nor{ \frac{\chi_{\tilde B_j}}{m_1(\tilde B_j)} - \frac{\chi_{ B_j}}{m_1(\tilde B_j)} } +\nor{ \frac{\chi_{ B_j}}{m_1(\tilde B_j)} - \frac{\chi_{B_j}}{m_1(B_j)}}\right) +\varepsilon \\
&=2\sum_{j\in J}\tilde \beta_j \frac{ m_1(B_j\setminus \tilde B_j)}{m_1(\tilde B_j)} +\varepsilon\\
&\leq 2\sum_{j\in J}\tilde \beta_j \frac{ \frac{\varepsilon}2 m_1(B_j)}{m_1(\tilde B_j)} +\varepsilon\leq \frac{\varepsilon}{1-\varepsilon/2}+\varepsilon < 3\varepsilon. \qedhere
\end{align*}
\end{proof}
We are now ready to prove Theorem~\ref{thm:main1}.
\begin{proof}[Proof of Theorem~\ref{thm:main1}]
Let $0<\varepsilon<1$. Suppose that $T$ is a norm-one element in $\mathcal{L}(L_1(m_1), L_1(m_2))$ and there is $f\in S_{L_1(m_1)}$ such that $\|Tf\|>1-\frac{\varepsilon^{18}}{5^32^{27}}$. Then there is an isometric isomorphism $\psi$ from $L_1(m_1)$ onto itself such that $\psi(f)=|f|$. Using $T\circ \psi^{-1}$ instead of $T$, we may assume that $f$ is nonnegative. Since simple functions are dense in $L_1(m_1)$, we can choose a nonnegative simple function $f_0\in S_{L_1(m_1)}$ arbitrarily close to $f$ so that
$$
\|Tf_0\| > 1-\frac{\varepsilon^{18}}{5^32^{27}} = 1-\frac{\varepsilon_1^3}{2^6},
$$
where $\varepsilon_1 = \frac{\varepsilon^6}{5 \cdot2^7}$. By Lemma~\ref{lem:complex1}, there exist a norm-one bounded linear operator $T_\nu$ for some $\nu\in M(m_1, m_2)$ and a nonnegative simple function $f_1$ in $S_{L_1(m_1)}$ such that $\norm{T - T_\nu}< \varepsilon_1$, $\|f_1- f\|<3\varepsilon_1$ and $\frac{d|\nu|^1}{dm_1}(x)=1$ for all $x\in {\rm supp}( f_1)$. Then
\[
\|T_\nu f_1\| \geq \|Tf\| - \|Tf - T_\nu f\| - \|T_\nu(f-f_1)\|\geq 1-\frac{\varepsilon_1^3}{2^6} - \varepsilon_1-3\varepsilon_1\geq 1-5\varepsilon_1=1-\frac{\varepsilon^6}{2^7}.
\]
Now, by Lemma~\ref{lem:complex2}, there exist a nonnegative simple function $\tilde{f}$ and an operator $T_{\tilde\nu}$ in $\mathcal{L}(L_1(m_1), L_1(m_2))$ such that $\|T_{\tilde\nu}\tilde f\|=\|T_{\tilde\nu}\|=1$, $\|T_\nu - T_{\tilde\nu}\|\leq 3\sqrt\varepsilon$ and $\|f_1-\tilde f\|\leq 3\varepsilon$. Therefore, $\|T-T_{\tilde \nu}\| < 4\sqrt\varepsilon$ and $\|f-\tilde{f}\|< 4\varepsilon$, which completes the proof.
\end{proof}
\section{The Bishop-Phelps-Bollob\'{a}s property
of $(L_1(\mu),L_\infty(\nu))$}\label{sec:L_1-L_infty}
Our aim now is to show that $(L_1(\mu), L_\infty(\nu))$ has the BPBp for any measure $\mu$ and any localizable measure $\nu$.
\begin{theorem}\label{thm:localizable}
Let $\mu$ be an arbitrary measure and let $\nu$ be a localizable measure. Then the pair $(L_1(\mu),L_\infty(\nu))$ has the BPBp. Moreover,
$$
\eta\bigl(L_1(\mu),L_\infty(\nu)\bigr)(\varepsilon)\geq \left(\frac{\varepsilon}{10}\right)^8 \qquad (0<\varepsilon<1).
$$
\end{theorem}
By Corollaries \ref{cor:main1} and \ref{cor:to_L_infty-localizable}, it is enough to prove the result in the case where $\mu$ is $\sigma$-finite and $\nu$ is the product measure on $[0,1]^\Lambda$. Therefore, we just need to prove the following result.
\begin{theorem}\label{thm:main2}
Assume $\mu$ is a $\sigma$-finite measure and $\nu$ is the product measure of Lebesgue measures on $[0,1]^\Lambda$. Let $0 <\varepsilon
<1/3$, let $T:L_1(\mu)\longrightarrow L_{\infty}(\nu)$ be a bounded linear operator of norm one and let $f_0\in S_{L_1(\mu)}$ satisfy $\|T(f_0)\|_{\infty}>1-\varepsilon^8$. Then there exist $S\in \mathcal{L}(L_1(\mu), L_{\infty}(\nu))$ with $\|S\|=1$ and $g_0\in
S_{L_1(\mu)}$ such that
$$
\|S(g_0)\|_{\infty} =1,\quad \|T-S\|<2\varepsilon
\quad \text{and} \quad \|f_0-g_0\|_1 < 10\varepsilon.
$$
\end{theorem}
Recall that the particular case where $\Lambda$ reduces to one point was established in \cite{ACGM}. Actually, our proof is based on the argument given there.
Prior to giving the proof of Theorem~\ref{thm:main2}, we present the following representation result for operators from $L_1(\mu)$ into $L_\infty\bigl([0,1]^\Lambda\bigr)$ and one lemma.
Let $(\Omega, \Sigma, \mu)$ be a $\sigma$-finite measure space and let $K=[0,1]^\Lambda$ be the product space equipped with the product measure $\nu$ of the Lebesgue measures. Let $J$ be a countable subset of $\Lambda$ and let $\pi_{J}$ be the natural projection from $K$ onto $[0,1]^J$. Fix a sequence $(\Pi_n)$ of finite partitions of $[0,1]^J$ into sets of positive measure such that $\Pi_{n+1}$ is a refinement of $\Pi_n$ for each $n$, and the $\sigma$-algebra generated by $\bigcup_{n=1}^\infty \Pi_n$ is the Borel $\sigma$-algebra of $[0,1]^J$. For each $y\in K$ and $n\in \mathbb{N}$, let $B(n,\pi_J(y))$ be the set in $\Pi_n$ containing $\pi_J(y)$. Then, given a Borel set $F$ of the form $F_0\times [0,1]^{\Lambda\setminus J}$ with $F_0 \subset [0,1]^J$, define
\[ \delta(F) = \left\{ y\in K : \lim_{n\to \infty} \frac{ \nu\big(F\cap \pi_J^{-1}(B(n,\pi_J(y)))\big)}{\nu\big(\pi_J^{-1}(B(n,\pi_J(y)))\big)}=1\right\}.\]
It is easy to check that $\delta(F) = \delta_J(F_0)\times [0,1]^{\Lambda\setminus J}$, where
\[\delta_J(F_0) = \left\{ y\in [0,1]^J : \lim_{n\to \infty} \frac{ \nu(\pi_J^{-1}(F_0\cap B(n,y)))}{\nu(\pi_J^{-1}(B(n,y)))}=1\right\}.\]
Using the martingale almost everywhere convergence theorem \cite{Doob}, we have
\[
\nu(F\Delta \delta(F))=0
\]
where $F\Delta \delta(F)$ denotes the symmetric difference of the sets $F$ and $\delta(F)$.
On the other hand, it is well-known that the space $\mathcal{L}(L_1(\mu), L_\infty(\nu))$ is isometrically isomorphic to the space $L_\infty(\mu\otimes \nu)$, where $\mu\otimes \nu$ denotes the product measure on $\Omega\times K$. More precisely, the operator $\widehat h$ corresponding to $h\in L_\infty(\mu\otimes\nu)$ is given by
\[ \widehat h(f)(t) = \int_\Omega h(\omega, t)f(\omega) \, d\mu(\omega)\] for $\nu$-almost every $t\in K$. For a reference, see \cite{DF}.
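For orientation (this simple example is ours and is not taken from \cite{DF}), if $h=\chi_{A\times B}$ with $0<\mu(A)$ and $0<\nu(B)$, then $\widehat h(f)=\bigl(\int_A f\,d\mu\bigr)\chi_B$ for every $f\in L_1(\mu)$, so $\|\widehat h\|=1=\|h\|_\infty$, the norm being attained at $\chi_{A'}/\mu(A')$ for any measurable $A'\subset A$ with $0<\mu(A')<\infty$.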
\begin{lemma}\label{basic}
Let $M$ be a measurable subset of $\Omega \times K$ with positive measure, $0<\varepsilon <1$, and let $f_0$ be a simple function.
If $\|\widehat{\chi_M}(f_0)\|_{\infty}>1-\varepsilon$, then there exists a simple function $g_0\in
S_{L_1(\mu)}$ such that
$$
\left\|[\widehat{\chi_M} + \widehat{\varphi}](g_0)\right\|_{\infty}=1 \quad \text{and} \quad
\|f_0-g_0\|_1<4\sqrt{\varepsilon}
$$
for every simple function $\varphi$ in
$L_{\infty}(\mu\otimes \nu)$ with $\|\varphi\|_{\infty}\leq 1$ and vanishing on $M$.
\end{lemma}
\begin{proof}
Write $f_0=\sum_{j=1}^m \alpha_j\frac{\chi_{A_j}}{\mu(A_j)}\in S_{L_1(\mu)}$, where each $A_j$ is a
measurable subset of $\Omega$ with finite positive measure, $A_k\cap A_l = \emptyset$ for $k\neq l$, and $\alpha_j$ is a positive real number for every $j=1,\ldots, m$ with $\sum_{j=1}^{m} \alpha_j
=1$. Since $\|\widehat{\chi_M}(f_0)\|_{\infty}>1-\varepsilon$, there is a measurable subset $B$ of $K$ such that $0<\nu(B)$ and
$$
\inner{\widehat{\chi_M}(f_0), \frac{\chi_B}{\nu(B)}} > 1-\varepsilon.
$$
We may assume that there is a countable subset $J$ of $\Lambda$ such that $M=M_0\times [0,1]^{\Lambda\setminus J}$ and $B=B_0\times [0,1]^{\Lambda\setminus J}$ for some measurable subsets $M_0 \subset \Omega\times [0,1]^J$ and $B_0\subset [0,1]^J$. For each $j\in \{1,\ldots, m\}$, we write $M_j=M~\bigcap~ ( A_j\times B)=(M_0\cap (A_j\times B_0))\times[0,1]^{\Lambda\setminus J}$ and define
$$
H_j=\{(x,y) \,:\, x\in A_j, y\in \delta\big((M_j)_x\big)
\}.
$$
As in the proof of \cite[Proposition~5]{PS}, the $H_j$'s are disjoint measurable subsets of $\Omega \times K$. We note
that for each $j\in \{1,\ldots, m\}$, we have $H_j\subset A_j\times \delta(B)$ and $(\mu\otimes \nu)(M_j\Delta H_j) =0$.
Now, by Fubini's theorem, we have that
\begin{align*}
1-\varepsilon &< \langle \widehat{\chi}_M(f_0), \frac{\chi_B}{\nu(B)}\rangle\\
&=\sum_{j=1}^m \frac{\alpha_j}{\mu(A_j)\nu(B)} \int_{\Omega\times K} \chi_{M_j}(x,y)~d(\mu\otimes \nu)\\
&=\sum_{j=1}^m \frac{\alpha_j}{\mu(A_j)\nu(B)} \int_{\Omega\times K} \chi_{H_j}(x,y)~d(\mu\otimes \nu)\\
&=\sum_{j=1}^m \frac{\alpha_j}{\nu(B)} \int_{\delta(B)} \frac{\mu(H_j^y)}{\mu(A_j)}~d\nu(y)\\
&=\frac1{\nu(\delta(B))} \int_{\delta(B)} \sum_{j=1}^m {\alpha_j}\frac{\mu(H_j^y)}{\mu(A_j)}~d\nu(y).
\end{align*}
So, there exists $y_0\in \delta(B)$ such that
\[
\sum_{j=1}^m {\alpha_j}\frac{\mu(H_j^{y_0})}{\mu(A_j)}>1-\varepsilon.
\]
Let $J = \left\{ j\in\{1,\ldots,m\}\, : \, \frac{\mu(H_j^{y_0})}{\mu(A_j)}>1-\sqrt \varepsilon\right\}$. For each $j\in J$, we have that $\mu(A_j\setminus H_j^{y_0}) < \sqrt\varepsilon \mu(A_j)$ and, by Lemma~\ref{elementary}, we also have $\alpha_J := \sum_{j\in J} \alpha_j >1-\sqrt{\varepsilon}$. Define
$$
g_0 = \sum_{j\in J} \beta_j \frac{\chi_{H_j^{y_0}}}{\mu(H_j^{y_0})},
$$
where $\beta_j={\alpha_j}/\alpha_J$. Then
\begin{align*}
\|g_0- f_0\| &< \nor{ \sum_{j\in J} \beta_j \frac{\chi_{H_j^{y_0}}}{\mu(H_j^{y_0})} - \sum_{j\in J} \alpha_j \frac{\chi_{A_j}}{\mu(A_j)} } +\sqrt{\varepsilon}\\
& \leq \nor{ \sum_{j\in J} \beta_j \frac{\chi_{H_j^{y_0}}}{\mu(H_j^{y_0})} - \sum_{j\in J} \beta_j \frac{\chi_{A_j}}{\mu(A_j)} } + \nor { \sum_{j\in J} \beta_j \frac{\chi_{A_j}}{\mu(A_j)} - \sum_{j\in J} \alpha_j \frac{\chi_{A_j}}{\mu(A_j)} } + \sqrt\varepsilon\\
&\leq \nor{ \sum_{j\in J} \beta_j \frac{\chi_{H_j^{y_0}}}{\mu(H_j^{y_0})} - \sum_{j\in J} \beta_j \frac{\chi_{H_j^{y_0}}}{\mu(A_j)} } + \nor{ \sum_{j\in J} \beta_j \frac{\chi_{H_j^{y_0}}}{\mu(A_j)} -\sum_{j\in J} \beta_j \frac{\chi_{A_j}}{\mu(A_j)} } +2\sqrt\varepsilon\\
&\leq 2\sum_{j\in J} \beta_j \frac{\mu(A_j\setminus H_j^{y_0})}{\mu(A_j)} + 2\sqrt{\varepsilon} \leq 4\sqrt\varepsilon.
\end{align*}
We claim that $\widehat{\chi_M}+\widehat{\varphi}$ attains its norm at $g_0$.
Let $B_n = \pi_J^{-1}(B(n,\pi_J({y_0})))$ for each $n$. Note that for every $x\in H_j^{y_0}$ we have $(x,{y_0})\in H_j$, which implies
that
$$
\lim_{n\to \infty} \frac{\nu\big((M_j)_x\cap B_n\big)}{\nu(B_n)} =1.
$$
It follows from the Lebesgue dominated
convergence theorem and Fubini's theorem that, for each $j\in J$,
\begin{eqnarray*} 1 & = &\lim_{n\to \infty} \frac{1}{\mu(H_j^{y_0})} \int_{H_j^{y_0}} \frac{\nu\big((M_j)_x\cap
B_n\big)}{\nu(B_n)} ~d\mu(x)\\ & =& \lim_{n\to \infty} \frac{(\mu \otimes \nu)(M_j\cap (H_j^{y_0}\times
B_n))}{\mu(H_j^{y_0}) \nu(B_n)}.
\end{eqnarray*}
On the other hand, since the simple function $\varphi$ is assumed to vanish on $M$ and $\|\varphi\|_{\infty} \leq 1$, we have
\begin{eqnarray*}
\left|\big\langle \widehat{\varphi}\big( \frac{\chi_{H_j^{y_0}}}{\mu(H_j^{y_0})}\big),
\frac{\chi_{B_n}}{\nu(B_n)}\big\rangle\right|& =& \left| \frac{1}{\mu(H_j^{y_0}) \nu(B_n)}\int_{H_j^{y_0}\times
B_n} \varphi ~d(\mu\otimes \nu)\right| \\ & \leq & \frac{(\mu\otimes \nu)((H_j^{y_0}\times B_n)\setminus
M_j)}{\mu(H_j^{y_0}) \nu(B_n)}\\ & = & 1- \frac{(\mu \otimes \nu)(M_j\cap (H_j^{y_0}\times B_n))}{\mu(H_j^{y_0})
\nu(B_n)} \longrightarrow 0,
\end{eqnarray*} as
$n\to \infty$. Therefore,
\begin{eqnarray*}
1\geq \big\|[\widehat{\chi_M} + \widehat{\varphi}](g_0)\big\|_{\infty} &\geq & \lim_{n\to \infty}
\left|\inner{(\widehat{\chi_M} + \widehat{\varphi})\Big(\sum_{j\in J} \beta_j
\frac{\chi_{H_j^{y_0}}}{\mu(H_j^{y_0})}\Big), \frac{\chi_{B_n}}{\nu(B_n)}}\right|\\
&= & \lim_{n\to \infty} \sum_{j\in J}
\beta_j \frac{(\mu \otimes
\nu)(M\cap (H_j^{y_0}\times B_n))}{\mu(H_j^{y_0}) \nu(B_n)}\\
&\geq&
\lim_{n\to \infty} \sum_{j\in J}
\beta_j \frac{(\mu \otimes\nu)(M_j\cap (H_j^{y_0}\times B_n))}{\mu(H_j^{y_0}) \nu(B_n)}= 1,
\end{eqnarray*}
which shows that $\widehat{\chi_M}+\widehat{\varphi}$ attains its norm at $g_0$.
\end{proof}
We are now ready to present the proof of the main result in this section.
\begin{proof}[Proof of Theorem~\ref{thm:main2}] Since the set of all simple functions is dense in
$L_1(\mu)$, we may assume $$f_0=\sum_{j=1}^m
\alpha_j\frac{\chi_{A_j}}{\mu(A_j)}\in S_{L_1(\mu)},$$ where each
$A_j$ is a measurable subset of $\Omega$ with finite positive
measure, $A_k\cap A_l = \emptyset$ for $k\neq l$, and every $\alpha_j$ is a nonzero complex number with $\sum_{j=1}^{m} |\alpha_j| =1$.
We may also assume that $0<\alpha_j\leq 1$ for every
$j=1,\ldots, m$. Indeed, there exists an isometric isomorphism $\Psi : L_1(\mu) \longrightarrow L_1(\mu)$ such that $\Psi(f_0)=|f_0|$. Hence we may replace $T$ and $f_0$ by
$T\circ \Psi^{-1}$ and $\Psi(f_0)$, respectively.
Let $h$ be the element in $L_{\infty}(\Omega\times K, \mu\otimes
\nu)$ with $\|h\|_{\infty}=1$ corresponding to $T$, that is, $T= \widehat{h}$. We may find a simple function
$$
h_0\in L_{\infty}(\Omega\times K, \mu\otimes \nu),
\quad \|h_0\|_{\infty} =1
$$
such that $\|h-h_0\|_{\infty} < \|T(f_0)\|_{\infty} -
(1-\varepsilon^8)$, hence $\|\widehat{h}_0(f_0)\|_{\infty} > 1-\varepsilon^8$. We can write $h_0= \sum_{l=1}^p c_l \chi_{D_l}$, where each $D_l$ is a measurable subset of $\Omega\times K$ with positive measure, $D_k\cap D_l= \emptyset$ for $k\neq l$, $|c_l|\leq 1$ for every $l=1, \ldots, p$, and $|c_{l_0}| =1$ for
some $1\leq l_0\leq p$.
Let $B$ be a Lebesgue measurable subset of $K$ with $0<\nu(B)<\infty$ such that
$$\left|\inner{\widehat{h}_0(f_0), \frac{\chi_B}{\nu(B)}} \right| > 1-\varepsilon^8.$$
Choose $\theta\in \mathbb{R}$ so that
\begin{eqnarray*}
1- \varepsilon^8 &< & \big|\langle\widehat{h}_0(f_0),
\frac{\chi_B}{\nu(B)}\rangle\big|\\ &=& \e^{i\theta} \langle\widehat{h}_0(f_0),
\frac{\chi_B}{\nu(B)}\rangle\\
&=& \sum_{j=1}^m\alpha_j~ \e^{i \theta}
\inner{\widehat{h}_0(\frac{\chi_{A_j}}{\mu(A_j)}),
\frac{\chi_B}{\nu(B)}}.
\end{eqnarray*}
Set
$$
J=\left\{j\in \{1,\ldots,m\}\, : \, \re \big[~ \e^{i \theta}
\langle\widehat{h}_0(\frac{\chi_{A_j}}{\mu(A_j)}),
\frac{\chi_B}{\nu(B)}\rangle\big] ~> 1-\varepsilon^4\right\}.
$$
By Lemma~\ref{elementary}, we have
$$\alpha_J= \sum_{j\in J} \alpha_j
> 1-\frac{\varepsilon^8}{1-(1-\varepsilon^4)}=1-\varepsilon^4.
$$
We define $$f_1=\sum_{j\in J} \left(\frac{\alpha_j}{\alpha_J} \right)\frac{\chi_{A_j}}{\mu(A_j)}.$$
We can see that $\|f_1\|_1 =1$,
\begin{eqnarray*}
\|f_0 - f_1\|_1 &\leq & \Big\|\sum_{j\notin J}\alpha_j
\frac{\chi_{A_j}}{\mu(A_j)}\Big\|_1
+\Big (\frac{1}{\alpha_J} -1\Big)~\Big\|\sum_{j\in J}\alpha_j
\frac{\chi_{A_j}}{\mu(A_j)}\Big\|_1\\
&=& \sum_{j\notin J}\alpha_j + (1-\alpha_J) = 2 (1-\alpha_J) < 2\varepsilon^4,
\end{eqnarray*}
and
\begin{eqnarray*}
\left|\langle\widehat{h}_0(f_1), \frac{\chi_B}{\nu(B)}\rangle \right| &\geq
&\re \left[
\e^{i\theta}\inner{ \widehat{h}_0(f_1), \frac{\chi_B}{\nu(B)}}\right]\\
&=& \frac{1}{\alpha_J} \sum_{j\in J} \alpha_j~\re \left[
\e^{i\theta}\inner{ \widehat{h}_0(\frac{\chi_{A_j}}{\mu(A_j)}), \frac{\chi_B}{\nu(B)}}\right]\\
&>& \frac{1}{\alpha_J} \sum_{j\in J} \alpha_j (1-\varepsilon^4) =
1-\varepsilon^4.
\end{eqnarray*}
Let $L = \left\{ l\in\{1,\ldots,p\}\, :\, \re(\e^{i\theta} c_l) > 1-\frac{\varepsilon^2}{2}\right\}$. On the other hand, for each $j\in J$, we have
\begin{eqnarray*}
1-\varepsilon^4 & < & \re \left[
\e^{i\theta}\inner{\widehat{h}_0(\frac{\chi_{A_j}}{\mu(A_j)}),
\frac{\chi_B}{\nu(B)}}\right]\\
&=& \sum_{l=1}^p\re(\e^{i\theta} c_l)~ \frac{(\mu\otimes \nu) (D_l\cap
(A_j\times B))}{\mu(A_j)\nu(B)} \\
&\leq& \sum_{l\in \{1, \dots, p\}\setminus L} (1-\frac{\varepsilon^2}{2} )\frac{(\mu\otimes \nu) (D_l\cap
(A_j\times B))}{\mu(A_j)\nu(B)}\\
&\ &\ \ \ \ \ + \sum_{l\in L} \frac{(\mu\otimes \nu) (D_l\cap
(A_j\times B))}{\mu(A_j)\nu(B)}\\
&\leq& 1 - \frac{\varepsilon^2}{2}\sum_{l\in \{1, \dots, p\}\setminus L} \frac{(\mu\otimes \nu) (D_l\cap
(A_j\times B))}{\mu(A_j)\nu(B)}.
\end{eqnarray*}
This implies that for each $j\in J$
\[
\sum_{l\in \{1, \dots, p\}\setminus L} \frac{(\mu\otimes \nu) (D_l\cap
(A_j\times B))}{\mu(A_j)\nu(B)}\leq 2\varepsilon^2.
\]
Since
\[\sum_{l=1}^p \frac{(\mu\otimes \nu) (D_l\cap
(A_j\times B))}{\mu(A_j)\nu(B)}>1-\varepsilon^4,
\]
for every $j\in J$ we have that
\[ \sum_{l\in L}\frac{(\mu\otimes \nu)(D_l\cap (A_j\times B))}{\mu(A_j)\nu(B)}\geq (1-\varepsilon^4-2\varepsilon^2)\geq 1-3\varepsilon^2.\]
Set $D= \bigcup_{l\in L} D_l$. Then we can see
\begin{eqnarray*} \langle\widehat{\chi}_{D} (f_1),
\frac{\chi_B}{\nu(B)}\rangle &=& \sum_{j\in J} \big(\frac{\alpha_j}{\alpha_J}\big) \cdot
\sum_{l\in L}
\frac{(\mu\otimes \nu)(D_l\cap (A_j\times B))}{\mu(A_j)\nu(B)}\geq 1-3\varepsilon^2.
\end{eqnarray*}
By Lemma~\ref{basic}, there is $g_0\in S_{L_1(\mu)}$ such that $\|(\widehat{\chi}_{D} + \widehat{\varphi})(g_0)\|_{\infty} =1$ and $\|f_1 -g_0\| <4\sqrt{3\varepsilon^2}<8\varepsilon$ for every simple function $\varphi$ in $L_{\infty}(\mu\otimes \nu)$ vanishing on $D$ with $\|\varphi\|_{\infty}\leq 1$. Therefore, we have
$$
\|f_0-g_0\|_1\leq
\|f_0-f_1\|_1 + \|f_1-g_0\|_1 \leq 2\varepsilon^4 + 8\varepsilon<10\varepsilon.
$$
Define $$h_1 = \e^{-i\theta}~\chi_{D} + \sum_{l\notin L} c_l~
\chi_{D_l} \in L_{\infty}(\mu\otimes \nu).$$ Let $S$ be the operator in $
\mathcal{L}(L_1(\mu), L_{\infty}(\nu))$ corresponding to $h_1$. Then we get
$$
\|S(g_0)\|_{\infty}= \|\widehat{h_1}(g_0)\|_{\infty}=1
$$
and
$$
\|h_0 -h_1\|_{\infty} = \max_{l\in L} |c_l -
\e^{-i\theta}| = \max_{l\in L} |\e^{i\theta} c_l - 1|.
$$
As $\re
(\e^{i\theta} c_l) > 1- \frac{\varepsilon^2}{2}$ for every $l\in L$, we have that
\begin{eqnarray*}
\big(\im (\e^{i\theta} c_l)\big)^2 &\leq& 1 - \big(\re (\e^{i\theta} c_l)\big)^2 < 1-(1-\frac{\varepsilon^2}{2})^2 = \varepsilon^2 - \frac{\varepsilon^4}{4}.
\end{eqnarray*}
Since
\begin{eqnarray*}
|\e^{i\theta} c_l - 1| &= &\sqrt{\big(1- \re ( \e^{i\theta} c_l)\big)^2 + \big(\im
(\e^{i\theta} c_l)\big)^2}\\ & < & \sqrt{\varepsilon^4/4 + (\varepsilon^2 - \varepsilon^4/4)}
=\varepsilon,
\end{eqnarray*}
we conclude that
$$
\|h_0 - h_1\|_{\infty} < \varepsilon
$$
and
\begin{equation*}
\|T-S\|_{\infty}\leq \|h - h_0\|_{\infty} + \|h_0 - h_1\|_{\infty} < \varepsilon^8 + \varepsilon < 2\varepsilon.\qedhere
\end{equation*}
\end{proof}
\section{The Bishop-Phelps-Bollob\'as Property for some operators from $L_1(\mu)$ into $C(K)$}\label{sec:L_1-C(K)}
Throughout this section, we consider only a finite measure $\mu$ on a measurable space $(\Omega, \Sigma)$ and \textbf{real} Banach spaces $L_1(\mu)$ and $C(K)$. Our aim is to obtain the Bishop-Phelps-Bollob\'{a}s property for some classes of operators from $L_1(\mu)$ to $C(K)$, sharpening the results about denseness of norm-attaining operators given by Iwanik in 1982 \cite{Iwa2}.
We use the following standard representation of operators into $C(K)$ \cite[Theorem~1 in p.~490]{DS}.
\begin{lemma} Given a bounded linear operator $T:X\longrightarrow C(K)$, define $F : K\longrightarrow X^*$ by $F(s) = T^*(\delta_s)$, where $\delta_s$ is the point measure at $s\in K$. Then, for $x\in X$, the relation
$Tx(s) = \inner{x, F(s)}$ defines an isometric isomorphism of $\mathcal{L}(X, C(K))$ onto the space of weak$^*$ continuous functions from $K$ to $X^*$ with the supremum norm. Moreover, compact operators correspond to norm continuous functions.
\end{lemma}
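As a toy illustration of this correspondence (not part of the original argument), take $\mu$ to be counting measure on a finite set $\Omega=\{1,\dots,n\}$ and $K$ a finite set, so that $L_1(\mu)=\ell_1^n$, $C(K)\cong(\mathbb{R}^{|K|},\|\cdot\|_\infty)$, operators are matrices, and $T^*\delta_s$ is simply the $s$-th row of the matrix. The Python sketch below (all identifiers are ours) checks the identity $Tx(s)=\inner{x, F(s)}$ and the resulting formula $\|T\|=\sup_{s}\|F(s)\|_{\infty}$ in this discrete setting.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, kpts = 4, 3                       # |Omega| = n with counting measure, |K| = kpts
T = rng.standard_normal((kpts, n))   # an operator l_1^n -> C(K) ~ R^kpts, sup norm

def F(s):
    # F(s) = T^* delta_s, an element of L_infty(mu) = (L_1(mu))^*: the s-th row of T
    return T[s]

x = rng.standard_normal(n)           # an element of L_1(mu)
Tx = T @ x                           # Tx as a function on K
for s in range(kpts):
    assert abs(Tx[s] - np.dot(x, F(s))) < 1e-12       # Tx(s) = <x, F(s)>

op_norm = max(np.abs(F(s)).max() for s in range(kpts))  # ||T|| = sup_s ||F(s)||_infty
\end{verbatim}
In this finite picture the weak$^*$ continuity of $s\longmapsto F(s)$ is automatic; conditions (1) and (2) below are only meaningful for genuinely infinite $K$ and $\Omega$.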
Iwanik \cite{Iwa2} considered operators $T\in \mathcal{L}(L_1(\mu),C(K))$ satisfying one of the following conditions:
\begin{enumerate}
\item The map $s\longmapsto T^*\delta_s$ is continuous in measure.
\item There exists a co-meager set $G\subset K$ such that $\{T^*\delta_s:s\in G\}$ is norm separable in $L_\infty(\mu)$.
\end{enumerate}
We recall that a subset $A$ is said to be a co-meager subset of $K$ if the set $K\setminus A$ is meager, that is, of first category.
\begin{theorem} \label{KIwa}
Let $0<\varepsilon<1$. Suppose that $T\in \mathcal{L}(L_1(\mu),C(K))$ (real case) has norm one and satisfies condition (1). If $\|Tf\|>1-\frac{\varepsilon^2}6$ for some $f\in S_{L_1(\mu)}$, then there exist $S\in \mathcal{L}(L_1(\mu),C(K))$ with $\|S\|=1$ and $g\in S_{L_1(\mu)}$ such that $\|Sg\|=1$, $\|S-T\|<\varepsilon$, and $\|f-g\|<\varepsilon$. Moreover, $S$ also satisfies condition (1).
\end{theorem}
\begin{proof}
Without loss of generality, we assume that there exists $s_0\in K$ such that
$$
Tf (s_0)>1-\frac{\varepsilon^2}6.
$$
Consider the function $G:L_\infty(\mu)\longrightarrow L_\infty(\mu)$ given by
$$
G(h)= \Big(h \wedge (1-\varepsilon/3)\Big) \vee (-1+\varepsilon/3) \qquad \bigl(h\in L_\infty(\mu)\bigr).
$$
Since the lattice operation $G$ is continuous in the $L_\infty$ norm and $T$ satisfies condition (1), we can see that the mapping $s\longmapsto GT^*\delta_s$ is continuous in measure, hence weak$^*$-continuous. Let $\bar S$ be the element of $\mathcal{L}(L_1(\mu),C(K))$ represented by the function $F(s) := GT^*\delta_s$. Then
\[ \| \bar S - T \| = \sup_{s\in K}\| F(s) - T^*\delta_s \| \leq \frac{\varepsilon}3.
\]
Let
$$
C=\left\{\omega \in \Omega ~:~ \sign\big(f(\omega)\big)T^*\delta_{s_0}(\omega) > 1-\frac{\varepsilon}3\right\}
$$
and define $S= \overline{S}/\|\overline{S}\|$ and $g=f|_C/\|f|_C\|$, where $f|_C$ is the restriction of $f$ to the subset $C$. It is easy to see that $S$ satisfies condition (1) and
$$
\|S-T\|\leq\|S-\overline{S}\|+\|\overline{S}-T\| = | \|\bar S\|-1| + \|\bar S - T\| \leq 2\|\bar S - T\|< \varepsilon.
$$
Moreover, we get
\begin{eqnarray*}
1-\frac{\varepsilon^2}6
&<&Tf (s_0)=\langle T^*\delta_{s_0}, f \rangle=\int_\Omega T^*\delta_{s_0}(\omega) f(\omega)\, d\mu\\
&=&\int_{C} \sign\big(f(\omega)\big)T^*\delta_{s_0}(\omega) |f(\omega)|\, d\mu+\int_{\Omega\backslash C} \sign\big(f(\omega)\big)T^*\delta_{s_0}(\omega) |f(\omega)|\, d\mu\\
&\leq& \int_{C} |f(\omega)|\,d\mu+(1- \frac{\varepsilon}3)\int_{\Omega\backslash C}|f(\omega)|\, d\mu\\
&=&1-\frac{\varepsilon}3\int_{\Omega\backslash C}|f(\omega)|\,d\mu,
\end{eqnarray*}
which implies that
$$
\int_{\Omega\backslash C}|f(\omega)|\,d\mu< \frac{\varepsilon}2.
$$
Therefore,
\begin{eqnarray*}
\|g-f\|
&\leq& \|g-f|_C\|+\|f|_C-f\|=2(1-\|f|_C\|)\\
&=&2\int_{\Omega\backslash C}|f(\omega)|\,d\mu< \varepsilon.
\end{eqnarray*}
On the other hand, we see that $Sg(s_0)=\langle S^* \delta_{s_0}, g \rangle = 1$ because $S^*\delta_{s_0}(\omega) = \sign\big(f(\omega)\big)=\sign\big(g(\omega)\big)$ for every $\omega \in C$. This completes the proof.
\end{proof}
We do not know for which compact Hausdorff spaces $K$ all operators in $\mathcal{L}(L_1(\mu),C(K))$ satisfy condition (1); this is clearly a question of interest.
We recall that a bounded linear operator $T$ from $L_1(\mu)$ into a Banach space $X$ is said to be \emph{Bochner representable} if there is a bounded strongly measurable function $g:\Omega \longrightarrow X$ such that
\[
Tf = \int f(\omega)g(\omega) \, d\mu(\omega) \qquad \bigl(f\in L_1(\mu)\bigr).
\]
The Dunford-Pettis-Phillips Theorem \cite[Theorem~12 p.~75]{DU} says that $T\in \mathcal{L}(L_1(\mu), X)$ is weakly compact if and only if $T$ is Bochner representable by a function $g$ which has an essentially relatively weakly compact range. Iwanik \cite{Iwa2} showed that every Bochner representable operator from $L_1(\mu)$ into $C(K)$ satisfies condition (1). Moreover, we get the following.
\begin{corollary}Let $0<\varepsilon <1$. Suppose that $T\in \mathcal{L}(L_1(\mu),C(K))$ (real case) has norm one and is Bochner representable (resp.\ weakly compact). If $\|Tf\|>1-\frac{\varepsilon^2}6$ for some $f\in S_{L_1(\mu)}$, then there exist a Bochner representable (resp.\ weakly compact) operator $S\in \mathcal{L}(L_1(\mu),C(K))$ with $\|S\|=1$ and $g\in S_{L_1(\mu)}$ such that $\|Sg\|=1$, $\|S-T\|<\varepsilon$, and $\|f-g\|<\varepsilon$.
\end{corollary}
\begin{proof}
By Theorem~\ref{KIwa}, it is enough to show that if $T$ is a Bochner representable operator from $L_1(\mu)$ into $C(K)$, then $F(s) = T^*\delta_s$ is continuous in measure and that the operator $S$ defined in the proof is Bochner representable.
Let $g:\Omega \longrightarrow C(K)$ be a bounded strongly measurable function which represents $T$. It is easy to check that $F(s)= g(\cdot )(s)$ for all $s\in K$.
Since the range of $g$ is separable, the range of $T$ is separable and contained in a separable sub-algebra $A$ of $C(K)$ with unit. By the Gelfand representation theorem, $A$ is isometrically isomorphic to $C(\bar K)$ for some compact metrizable space $\bar K$. So, we may assume that $K$ is metrizable. To show that the mapping $F(s) = T^*\delta_s=g(\omega)(s)$ is continuous in measure, assume that a sequence $(s_n)$ converges to $s$ in $K$. Then for all $\omega\in \Omega$,
\[ \lim_{n\to \infty} |g(\omega)(s_n)-g(\omega)(s)|=0.\]
By the dominated convergence theorem, we have that
\[ \lim_{n\to \infty} \sup_{f\in S_{L_\infty(\mu)}} \int f(\omega) (g(\omega)(s_n) - g(\omega)(s)) \, d\mu(\omega) \leq \lim_{n\to \infty} \int |g(\omega)(s_n) - g(\omega)(s)| \, d\mu(\omega) =0.\]
Hence the sequence $(g(\cdot)(s_n))_n$ converges to $g(\cdot)(s)$ in measure. That is, $(F(s_n))_n$ converges to $F(s)$.
We note that the operator $\bar S$ in the proof of Theorem~\ref{KIwa} is determined by $GT^*\delta_s = G(g(\cdot )(s))$. Since the mapping
\[ s\longmapsto G(g(\cdot)(s))(\omega) = \bigl(g(\omega)(s) \wedge (1-\varepsilon/3)\bigr)\vee(-1+\varepsilon/3)
\]
is continuous for each $\omega\in \Omega$, the operator $\bar S$ is Bochner representable by this mapping. Finally, if $T$ is weakly compact, then the proof is done by the Dunford-Pettis-Phillips theorem.
\end{proof}
As observed in \cite{Iwa2}, the operator $T: L_1[0,1]\longrightarrow C[0,1]$ determined by $T^*\delta_s = \chi_{[0,s]}$ is not Bochner representable, but satisfies condition (1).
For condition (2), we have the following result.
\begin{theorem}\label{thm:iwanik2}
Let $0<\varepsilon<1$. Suppose that $T\in \mathcal{L}(L_1(\mu),C(K))$ (real case) has norm one and satisfies condition (2). If $\|Tf\|>1-{{\varepsilon^2}\over{4}}$ for some $f\in S_{L_1(\mu)}$, then there exist $S\in \mathcal{L}(L_1(\mu),C(K))$ with $\|S\|=1$ and $g\in S_{L_1(\mu)}$ such that $\|Sg\|=1$, $\|S-T\|<\varepsilon$, and $\|f-g\|<\varepsilon$. Moreover, $S$ also satisfies condition (2).
\end{theorem}
\begin{proof}
By using a suitable isometric isomorphism, we may first assume that $f$ is nonnegative. Let $G$ be the co-meager set in the condition (2) and $(T^*\delta_{s_k})_k$ be a sequence which is $\|\cdot \|_\infty$-dense in the closure of $\{T^*\delta_s~:~s\in G\}\subset L_\infty(\mu)$. Observe that the sets
\[ \{\omega \in \Omega: a < T^*\delta_{s_k} (\omega) <b \}\]
where $a, b\in \mathbb{Q}$ and $k\geq 1$, form a countable family $\{A_i\}_i$ of measurable subsets of $\Omega$. We define, for each $i$, the functions
\[
u_i(s) = {\rm ess.}\inf\{ T^*\delta_s(\omega) : \omega \in A_i\} \quad \text{and} \quad v_i(s) = {\rm ess.}\sup\{ T^*\delta_s(\omega) : \omega \in A_i\}.
\]
For each $i$, let $U_i$ and $V_i$ be the sets of continuity points of $u_i$ and $v_i$, respectively, and let $F$ be the intersection of all the $U_i$'s and $V_i$'s. We claim that the functions $u_i$ are upper semi-continuous and the functions $v_i$ are lower semi-continuous. Indeed, recall that
\[ v_i(s) = \inf\Big\{ \lambda \in \mathbb{R} : \mu\{ \omega \in A_i: T^*\delta_s(\omega)>\lambda\} =0\Big\},\] where $\inf \emptyset = \infty$ and $\inf \mathbb{R} = -\infty$. To show that the set
$\{ s : \lambda< v_i(s)\}$ is open in $K$ for all $\lambda\in \mathbb{R}$, suppose that $v_i(s_0)>\lambda_0$ for some $s_0\in K$ and $\lambda_0\in \mathbb{R}$. It suffices to prove that there is an open neighborhood $V$ of $s_0$ such that $V\subset \{ s : v_i(s) >\lambda_0\}$. We note that
$\mu\{ \omega \in A_i: T^*\delta_{s_0}(\omega) >\lambda_0\} >0$ and there exists $\lambda_1>\lambda_0$ such that
\[ \mu\{ \omega\in A_i : T^*\delta_{s_0}(\omega) >\lambda_1\} >0.\]
Let $E = \{ \omega\in A_i : T^*\delta_{s_0}(\omega) >\lambda_1\} $. Then
\[ \frac{1}{\mu(E)}\int_E T^*\delta_{s_0}(\omega) d\mu(\omega) > \lambda_1>\lambda_0.\]
Since the map $s\longmapsto T^*\delta_s$ is weak$^*$ continuous on $L_\infty(\mu)$, the set
\[ V:= \left \{ s\in K : \frac{1}{\mu(E)}\int_E T^*\delta_{s}(\omega) d\mu(\omega) >\lambda_1 \right\}\]
is an open subset containing $s_0$. We note that $\mu\{ \omega\in A_i : T^*\delta_s(\omega) >\lambda_1\}>0$ for all $s\in V$. Otherwise, there is $s_1\in V$ such that $\mu\{ \omega \in A_i : T^*\delta_{s_1}(\omega) >\lambda_1\}=0$. Then
$T^*\delta_{s_1}(\omega)\leq \lambda_1$ almost everywhere $\omega\in A_i$ and
\[ \frac{1}{\mu(E)}\int_E T^*\delta_{s_1}(\omega) d\mu(\omega) \leq \lambda_1,\] contradicting the fact that $s_1$ is an element of $V$. Hence $\mu\{ \omega\in A_i : T^*\delta_s(\omega) >\lambda_1\}>0$ for all $s\in V$, so $v_i(s)\geq\lambda_1>\lambda_0$ for all $s\in V$ and $V\subset \{ s : v_i(s) >\lambda_0\}$. This gives the lower semi-continuity of $v_i$. The upper semi-continuity of $u_i$ follows from the fact that $-u_i$ is lower semi-continuous. The claim is proved.
We then deduce that the set $F$ is co-meager (cf.\ \cite[\S~32~II, p.~400]{Kur}). Since the set $\{s\in K \,:\, |Tf(s)| > 1-\frac{\varepsilon^2}4\}$ is nonempty and open, there exists $s_0\in F\cap G$ such that $|Tf(s_0)|>1-{{\varepsilon^2}\over{4}}$. Without loss of generality, we may assume that
$$Tf(s_0) = \inner{T^*\delta_{s_0}, f}>1-{{\varepsilon^2}\over{4}}.$$
Because of the denseness of the sequence $(T^*\delta_{s_k})_k$, there exists $k_0\in \mathbb{N}$ such that
\[Tf(s_{k_0}) = \inner{T^*\delta_{s_{k_0}}, f}>1-{{\varepsilon^2}\over{4}} \ \ \ \ \text{ and } \ \ \
\|T^* \delta_{s_0} - T^*\delta_{s_{k_0}} \| < \frac{\varepsilon}{4}.\]
Fix $q\in \mathbb{Q}$ such that $1-\frac{3}{4}\varepsilon<q<1-\frac{\varepsilon}2$ and let $$C=\left\{\omega\in \Omega~:~T^*\delta_{s_{k_0}}(\omega)> q\right\}.$$
Then \begin{align*}
1-{{\varepsilon^2}\over{4}}
<&\inner{T^*\delta_{s_{k_0}}, f}=\int_\Omega T^*\delta_{s_{k_0}}(\omega)f(\omega)\,d\mu\\
=&\int_{C} T^*\delta_{s_{k_0}}(\omega)f(\omega)\,d\mu+\int_{\Omega\setminus C} T^*\delta_{s_{k_0}}(\omega)f(\omega)\,d\mu\\
\leq&\int_{C}f(\omega)\,d\mu+\left(1-{{\varepsilon}\over{2}}\right)\int_{\Omega\setminus C}f(\omega)\,d\mu\\
=&1-{{\varepsilon}\over{2}}\int_{\Omega \setminus C}f(\omega)\,d\mu.
\end{align*}
Hence we have that
\[
\int_{\Omega \setminus C} f(\omega) \, d\mu < \frac{\varepsilon}2\quad \text{and}\quad \int_{C} f(\omega) \, d\mu > 1-\frac{\varepsilon}2.
\]
Let $B_n = \{ \omega \,:\, q<T^*\delta_{s_{k_0}}(\omega) <n\}$ for each $n$. Then $C=\bigcup_{n=1}^\infty B_n$ and there exists $n_0$ such that
\[ \int_{B_{n_0}} f(\omega) \, d\mu > 1-\frac{\varepsilon}2.\]
Hence $B_{n_0} = A_{i_0}$ for some ${i_0}$ and $\mu(A_{i_0})>0$. This implies that $u_{i_0}(s_{k_0})\geq q$ and $u_{i_0}(s_0) \geq q-\frac{\varepsilon}{4}>1-\varepsilon$. Setting $A= A_{i_0}$, it is also clear that
\[
\left\|{{f|_A}\over{\|f|_A\|}}-f\right\|<\varepsilon.
\]
Since $u_{i_0}$ is continuous at $s_0$, there exist an open neighborhood $U$ of $s_0$ and a continuous function $h: K\longrightarrow [0,1]$ such that $u_{i_0}(s)>1-{{\varepsilon}}$ for all $s\in U$, $h(s_0)=1$ and $h(U^c)=0$. We define a weak$^*$-continuous map $M:K\longrightarrow L_\infty(\mu)$ by
$$
M(s)(\omega)=T^*\delta_s(\omega)+\chi_A(\omega)h(s)(1-T^*\delta_s(\omega)) \qquad \bigl(\omega\in \Omega,\ s\in K\bigr).
$$
We note that $M(s_{0})(\omega)=1$ for all $\omega \in A$. It is also easy to get that $$
\|M(s)-T^*\delta_s\|_\infty= \|\chi_A\, h(s)(1-T^*\delta_s)\|_\infty< \varepsilon \quad \text{and} \quad \sup_{s\in K}\|M(s)\|_\infty= 1.
$$
Let $S$ be the operator represented by the function $M$. Then $S$ satisfies condition (2), $S\left({{f|_A}\over{\|f|_A\|}}\right)(s_{0})=1$ and $\|S-T\|<\varepsilon$.
\end{proof}
As shown in \cite{Iwa2}, the Dunford-Pettis-Phillips Theorem implies that every weakly compact operator $T$ from $L_1(\mu)$ to an arbitrary Banach space $Y$ has separable range, hence the range of its weakly compact adjoint $T^*$ is also separable and so $T$ satisfies condition (2). On the other hand, there are Bochner representable operators which do not satisfy condition (2) (see \cite{Iwa2}). Indeed, let $\mu$ be a strictly positive probability measure on $\mathbb{N}$ and consider the operator $T\in \mathcal{L}(L_1(\mu), C(\{0,1\} ^{\mathbb{N}}))$ defined by $Tf(s) = \int f(n)\pi_n(s)\, d\mu(n)$, where $\pi_n$ is the $n$-th coordinate projection on $\{0,1\}^\mathbb{N}$. Then $T$ is Bochner representable, while $\{ T^*\delta_s : s\in G\}$ is non-separable in $L_\infty(\mu)$ for every uncountable subset $G$ of $\{0,1\}^\mathbb{N}$.
Finally, let us comment that it is also observed in \cite{Iwa2} that if $K$ has a countable dense subset of isolated points, then condition (2) is automatically satisfied for all $T\in \mathcal{L}(L_1(\mu), C(K))$. Actually, in this case, $C(K)$ has the so-called property $(\beta)$ and then the pair $(X, C(K))$ has the BPBp for all Banach spaces $X$ \cite[Theorem~2.2]{AAGM2}.
It would be of interest to characterize those topological Hausdorff compact spaces $K$ such that $(X,C(K))$ has the BPBp for every Banach space $X$.
| {
"timestamp": "2013-03-26T01:04:05",
"yymm": "1303",
"arxiv_id": "1303.6078",
"language": "en",
"url": "https://arxiv.org/abs/1303.6078",
"abstract": "In this paper we show that the Bishop-Phelps-Bollobás theorem holds for $\\mathcal{L}(L_1(\\mu), L_1(\\nu))$ for all measures $\\mu$ and $\\nu$ and also holds for $\\mathcal{L}(L_1(\\mu),L_\\infty(\\nu))$ for every arbitrary measure $\\mu$ and every localizable measure $\\nu$. Finally, we show that the Bishop-Phelps-Bollobás theorem holds for two classes of bounded linear operators from a real $L_1(\\mu)$ into a real $C(K)$ if $\\mu$ is a finite measure and $K$ is a compact Hausdorff space. In particular, one of the classes includes all Bochner representable operators and all weakly compact operators.",
"subjects": "Functional Analysis (math.FA)",
"title": "The Bishop-Phelps-Bollobás theorem for operators on $L_1(μ)$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969645988575,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.7099044210471408
} |
https://arxiv.org/abs/1903.04284 | Cracking the problem with 33 | Inspired by the Numberphile video "The uncracked problem with 33" by Tim Browning and Brady Haran, we investigate solutions to $x^3+y^3+z^3=k$ for a few small values of $k$. We find the first known solution for $k=33$. | \section{Introduction}
Let $k$ be a positive integer with $k\not\equiv\pm4\pmod{9}$. Then Heath-Brown
\cite{Heath-Brown} has conjectured that there are infinitely many triples
$(x,y,z)\in\mathbb{Z}^3$ such that
\begin{equation}\label{eq:main}
k=x^3+y^3+z^3.
\end{equation}
Various numerical investigations of \eqref{eq:main} have been carried out,
beginning as early as 1954 \cite{MW}; see \cite{BPTY} for a thorough
account of the history of these investigations up to 2000.
The computations performed since that time have been dominated
by an algorithm due to Elkies \cite{Elkies}. The latest that we are aware of
is the paper of Huisman \cite{Huisman} (based on the implementation by
Elsenhans and Jahnel \cite{EJ}),
which determined all solutions to \eqref{eq:main} with $k<1000$
and $\max\{|x|,|y|,|z|\}\le 10^{15}$. In particular, Huisman reports
that solutions are known for all but 13 values of $k<1000$:
\begin{equation}\label{eq:exceptions}
33,\;42,\;114,\;165,\;390,\;579,\;627,\;633,\;732,\;795,\;906,\;921,\;975.
\end{equation}
Elkies' algorithm works by finding rational points near the Fermat
curve $X^3+Y^3=1$ using lattice basis reduction; it is well suited to
finding solutions for many values of $k$ simultaneously. In this paper we
describe a different approach that is more efficient when $k$ is fixed.
It has the advantage of provably finding all solutions with a bound on
the \emph{smallest} coordinate, rather than the largest as in Elkies'
algorithm. This always yields a nontrivial expansion of the search range
since, apart from finitely many exceptions that can be accounted for separately,
one has
$$
\max\{|x|,|y|,|z|\}>\sqrt[3]{2}\min\{|x|,|y|,|z|\}.
$$
Moreover, empirically it is often the case that one of the variables
is much smaller than the other two, so we expect the gain to be even
greater in practice.
Our strategy is similar to some earlier approaches (see especially
\cite{HLT}, \cite{Bremner}, \cite{KTS} and \cite{BPTY}), and is based
on the observation that in any solution, $k-z^3=x^3+y^3$ has $x+y$ as a
factor. Our main contribution over the earlier investigations is to note
that with some time-space tradeoffs, the running time is very nearly
linear in the height bound, and it is quite practical when implemented
on modern 64-bit computers.
In more detail, suppose that $(x,y,z)$ is
a solution to \eqref{eq:main}, and assume without loss of generality
that $|x|\ge|y|\ge|z|$. Then we have
$$
k-z^3=x^3+y^3=(x+y)(x^2-xy+y^2).
$$
If $k-z^3=0$ then $y=-x$, and every value of $x$ yields a
solution. Otherwise,
setting $d=|x+y|=|x|+y\sgn{x}$, we see that $d$ divides $|k-z^3|$, and
\begin{align*}
\frac{|k-z^3|}{d}&=x^2-xy+y^2=x(2x-(x+y))+y^2\\
&=|x|(2|x|-d)+(d-|x|)^2=3x^2-3d|x|+d^2,
\end{align*}
so that
\begin{equation}\label{eq:solution}
\{x,y\}=\left\{\frac12\sgn(k-z^3)\left(
d\pm\sqrt{\frac{4|k-z^3|-d^3}{3d}}\right)\right\}.
\end{equation}
Thus, given a candidate value for $z$, there is an effective procedure
to find all corresponding values of $x$ and $y$, by running through all
divisors of $|k-z^3|$. Already this basic algorithm finds all solutions
with $\min\{|x|,|y|,|z|\}\le B$ in time $O(B^{1+\varepsilon})$, assuming
standard heuristics for the time complexity of integer factorization.
In the next section we explain how to avoid factoring and
achieve the same ends more efficiently.
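For concreteness, here is a minimal Python sketch of the basic algorithm just described. It is illustrative only: the naive trial-division divisor routine stands in for genuine factorization, so it is practical only for very small bounds $B$, and none of it corresponds to the implementation described later in the paper.
\begin{verbatim}
from math import isqrt

def divisors(n):
    # naive trial division, standing in for a real factorization routine
    ds = set()
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            ds.update((d, n // d))
    return sorted(ds)

def search(k, B):
    """Find (x, y, z) with x^3 + y^3 + z^3 = k and |z| <= B."""
    sols = []
    for z in range(-B, B + 1):
        n = k - z**3
        if n == 0:
            continue                      # here y = -x gives a trivial family; skip
        sgn = 1 if n > 0 else -1
        for d in divisors(abs(n)):        # d = |x + y| divides |k - z^3|
            num = 4 * abs(n) - d**3
            if num < 0 or num % (3 * d):
                continue
            r = isqrt(num // (3 * d))
            if r * r != num // (3 * d) or (d + r) % 2:
                continue
            x, y = sgn * (d + r) // 2, sgn * (d - r) // 2
            if x**3 + y**3 + z**3 == k:
                sols.append((x, y, z))
    return sols

# e.g. search(29, 5) includes (3, 1, 1), since 29 = 3^3 + 1^3 + 1^3
\end{verbatim}
The quantity \texttt{num // (3 * d)} is exactly $(4|k-z^3|-d^3)/(3d)$ from \eqref{eq:solution}, and the parity check ensures that the recovered $x$ and $y$ are integers.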
\subsection*{Acknowledgements}
I thank Roger Heath-Brown for helpful comments and suggestions.
\section{Methodology}
For ease of presentation, we will
assume that $k\equiv\pm3\pmod{9}$; note that this
holds for all $k$ in \eqref{eq:exceptions}.
Since the basic algorithm described above is reasonable for finding
small solutions, we will assume henceforth that $|z|>\sqrt{k}$.
Also, if we specialize \eqref{eq:main} to solutions with $y=z$,
then we get the Thue equation $x^3+2y^3=k$, which is efficiently solvable.
Using the Thue solver in PARI/GP \cite{pari}, we verify that
no such solutions exist for the $k$ in \eqref{eq:exceptions}.
Hence we may further assume that $y\ne z$.
Since $|z|>\sqrt{k}\ge\sqrt[3]{k}$, we have
$$\sgn{z}=-\sgn(k-z^3)=-\sgn(x^3+y^3)=-\sgn{x}.$$
Likewise, since $x^3+z^3=k-y^3$ and $|y|\ge|z|$,
we have $\sgn{y}=-\sgn{x}=\sgn{z}$.
Multiplying both sides of \eqref{eq:main} by
$-\sgn{z}$, we thus obtain
\begin{equation}\label{eq:XY}
|x|^3-|y|^3-|z|^3=-k\sgn{z}.
\end{equation}
Set $\alpha=\sqrt[3]{2}-1$, and recall that $d=|x+y|=|x|-|y|$.
If $d\ge\alpha|z|$ then
\begin{align*}
-k\sgn{z}&=|x|^3-|y|^3-|z|^3
\ge(|y|+\alpha|z|)^3-|y|^3-|z|^3\\
&=3\alpha(\alpha+2)(|y|-|z|)z^2+3\alpha(|y|-|z|)^2|z|\\
&\ge3\alpha(\alpha+2)|y-z|z^2.
\end{align*}
Since $3\alpha(\alpha+2)>1$, this is incompatible with our assumptions
that $y\ne z$ and $|z|>\sqrt{k}$. Thus we must have $0<d<\alpha|z|$.
Next, reducing \eqref{eq:XY} modulo $3$ and recalling our assumption
that $k\equiv\pm3\pmod{9}$, we see that
$$
d=|x|-|y|\equiv|z|\pmod{3}.
$$
Let $\epsilon\in\{\pm1\}$ be so that $k\equiv3\epsilon\pmod{9}$. Then,
since every cube is congruent to $0$ or $\pm1\pmod{9}$, we
must have $x\equiv y\equiv z\equiv\epsilon\pmod{3}$, so that
$\sgn{z}=\epsilon\left(\frac{|z|}{3}\right)=\epsilon\left(\frac{d}{3}\right)$.
In view of \eqref{eq:solution}, we get a solution to \eqref{eq:main}
if and only if $d\mid z^3-k$ and
$3d(4|z^3-k|-d^3)=3d(4\epsilon\left(\frac{d}{3}\right)(z^3-k)-d^3)$ is a square.
In summary, to find all solutions to \eqref{eq:main} with
$|x|\ge|y|\ge|z|>\sqrt{k}$, $y\ne z$ and $|z|\le B$, it suffices to solve the
following system for each $d\in\mathbb{Z}\cap(0,\alpha B)$ coprime to $3$:
\begin{equation}\label{eq:system}
\begin{aligned}
&\frac{d}{\sqrt[3]{2}-1}<|z|\le B,
\quad\sgn{z}=\epsilon\left(\frac{d}{3}\right),
\quad z^3\equiv k\pmod{d},\\
&3d\left(4\epsilon\!\left(\frac{d}{3}\right)(z^3-k)-d^3\right)=\square.
\end{aligned}
\end{equation}
Our approach to solving this is straightforward: we work through the
values of $d$ recursively by their prime factorizations, and apply the
Chinese remainder theorem to reduce the solution of $z^3\equiv k\pmod{d}$
to the case of prime power modulus, to which standard algorithms apply.
Let $r_d(k)=\#\{z\pmod{d}:z^3\equiv k\pmod{d}\}$ denote the number of cube
roots of $k$ modulo $d$.
By standard analytic estimates, since $k$ is not a cube, we have
$$
\sum_{d\le\alpha B}r_d(k)\ll_k B.
$$
Heuristically, computing the solutions of $z^3\equiv k\pmod{p}$ for
all primes $p\le\alpha B$ can be done with $O(B)$ arithmetic operations on
integers in $[0,\alpha B]$; see e.g.\ the algorithm described in \cite[\S2.9,
Exercise 8]{NZM}. Assuming this, one can see that with Montgomery's
batch inversion trick \cite[\S10.3.1]{Montgomery2}, the remaining effort
to determine the roots of $z^3\equiv k\pmod{d}$ for all positive integers
$d\le\alpha B$ can again be carried out with $O(B)$ arithmetic operations.
Thus, we can work out all $z$ satisfying the first line of
\eqref{eq:system}, as a union of arithmetic progressions, in linear time.
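As a small illustration of this step (and only as an illustration: the sketch below finds cube roots modulo a prime by brute force rather than by the fast standard algorithm cited above, and it treats only squarefree moduli), the roots of $z^3\equiv k\pmod{d}$ can be assembled from the roots modulo each prime factor via the Chinese remainder theorem.
\begin{verbatim}
def cube_roots_mod_prime(k, p):
    # brute force; in practice one uses a standard fast cube-root algorithm
    return [z for z in range(p) if (z * z * z - k) % p == 0]

def crt(r1, m1, r2, m2):
    # combine z = r1 (mod m1) and z = r2 (mod m2) for coprime m1, m2
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * t) % (m1 * m2)

def cube_roots_mod_squarefree(k, primes):
    # roots of z^3 = k modulo d = product of the given distinct primes
    roots, modulus = [0], 1
    for p in primes:
        roots = [crt(r, modulus, s, p) for r in roots
                                       for s in cube_roots_mod_prime(k, p)]
        modulus *= p
    return sorted(roots), modulus

# cube_roots_mod_squarefree(33, [5, 11]) == ([22], 55),
# since 22^3 = 10648 = 193*55 + 33; here r_d(k) = 1 for d = 55.
\end{verbatim}
By the Chinese remainder theorem the count $r_d(k)$ is multiplicative in $d$, which is consistent with the estimate $\sum_{d\le\alpha B}r_d(k)\ll_k B$ quoted above.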
To detect solutions to the final line, it is crucial to have
a quick method of determining whether
$\Delta:=3d\left(4\epsilon\!\left(\frac{d}{3}\right)(z^3-k)-d^3\right)$
is a square. We first note that for fixed $d$ this condition reduces to
finding an integral point on an elliptic curve; specifically,
writing $X=12d|z|$ and $Y=(6d)^2|x-y|$, from \eqref{eq:solution}
we see that $(X,Y)$ lies on the Mordell curve
\begin{equation}\label{eq:mordell}
Y^2=X^3-2(6d)^3\left(d^3+4\epsilon\left(\frac{d}{3}\right)k\right).
\end{equation}
Thus, for fixed $d$ there are at most finitely many solutions, and they
can be effectively bounded. For some small values of $d$ it is practical
to find all the integral points on \eqref{eq:mordell} and check whether
any yield solutions to \eqref{eq:main}. For instance, using the integral
point functionality in Magma \cite[\S128.2.8]{magma}, we verified that
there are no solutions for $k$ as in \eqref{eq:exceptions} and
$d\le40$, except possibly for $(k,d)\in\{(579,29),(579,34),(975,22)\}$.
Next we note that some congruence and divisibility constraints come for free:
\begin{lemma}
Let $z$ be a solution to \eqref{eq:system}, let $p$ be a prime
number, and set $s=\ord_p{d}$, $t=\ord_p(z^3-k)$. Then:
\begin{enumerate}
\item[(i)]$z\equiv\frac43k(2-d^2)+9(k+d)\pmod{18}$;
\item[(ii)]if $p\equiv2\pmod{3}$ then $t\le 3s$;
\item[(iii)]if $t\le 3s$ then $s\equiv t\pmod{2}$;
\item[(iv)]if $\ord_p{k}\in\{1,2\}$ then $s\in\{0,\ord_p{k}\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\Delta=3d\left(4\epsilon\!\left(\frac{d}{3}\right)(z^3-k)-d^3\right)$.
Writing $\delta=\left(\frac{d}{3}\right)$, we have
$|z|\equiv d\equiv\delta\pmod{3}$. Observing that
$(\delta+3n)^3\equiv \delta+9n\pmod{27}$, modulo $27$ we have
\begin{align*}
\frac{\Delta}{3d}&=
4\epsilon\delta(z^3-k)-d^3=4|z|^3-d^3-4\epsilon\delta k\\
&\equiv4[\delta+3(|z|-\delta)]-[\delta+3(d-\delta)]-4\epsilon\delta k
=3(4|z|-d)-\delta[18+4(\epsilon k-3)]\\
&\equiv3(4|z|-d)-d[18+4(\epsilon k-3)]
=12|z|-9d-4\epsilon dk\\
&\equiv 3|z|-4\epsilon dk.
\end{align*}
This vanishes modulo $9$, so in order for $\Delta$ to be a square, it
must vanish mod $27$ as well. Hence
$$
z=\epsilon\delta|z|\equiv\frac{4\delta dk}{3}
\equiv\frac{4(2-d^2)k}{3}\pmod{9}.
$$
Reducing \eqref{eq:main} modulo $2$ we see that $z\equiv k+d\pmod{2}$,
and this yields (i).
Next set $u=p^{-s}d$ and $v=p^{-t}\epsilon\delta(z^3-k)$, so that
$$
\Delta=3\bigl(4p^{s+t}uv-p^{4s}u^4\bigr).
$$
If $3s<t$ then $p^{-4s}\Delta\equiv-3u^4\pmod{4p}$,
but this is impossible when $p\equiv2\pmod{3}$, since $-3$ is not a
square modulo $4p$. Hence we must have $t\le 3s$ in that case.
Next suppose that $t\le 3s$. We consider the following cases, which
cover all possibilities:
\begin{itemize}
\item If $p=3$ then $s=t=0$, so $s\equiv t\pmod{2}$.
\item If $p\ne3$ and $3s>t+2\ord_p{2}$ then
$\ord_p\Delta=s+t+2\ord_p2$, so $s\equiv t\pmod{2}$.
\item If $3s\in\{t,t+2\}$ then $s\equiv t\pmod{2}$.
\item If $p=2$ and $3s=t+1$ then
$2^{-4s}\Delta=3\bigl(2uv-u^4\bigr)\equiv3\pmod{4}$,
which is impossible.
\end{itemize}
Thus, in any case we conclude that $s\equiv t\pmod{2}$.
Finally, suppose that $p\mid k$ and $p^3\nmid k$.
If $s=0$ then there is nothing to prove, so assume otherwise.
Since $d\mid{z^3-k}$, we must have $p\mid z$, whence
$$0<s\le t=\ord_p(z^3-k)=\ord_p{k}<3s.$$
By part (iii) it follows that $s\equiv\ord_p{k}\pmod{2}$, and thus
$s=\ord_p{k}$.
\end{proof}
Thus, once the residue class of $z\pmod{d}$ is fixed, its residue modulo
$\lcm(d,18)$ is determined. Note also that conditions (ii) and (iii)
are efficient to test for $p=2$.
However, even with these optimizations there are $\gg B\log{B}$
pairs $d,z$ satisfying the first line of \eqref{eq:system} and
conclusions (i) and (iv) of the lemma.
To achieve better than $O(B\log{B})$
running time therefore requires eliminating some values of $z$ from the start.
We accomplish this with a standard time-space tradeoff.
To be precise, set $P=3(\log\log{B})(\log\log\log{B})$, and let
$M=\prod_{5\le p\le P}p$
be the product of primes in the interval $[5,P]$.
By the prime number theorem, we have
$\log{M}=(1+o(1))P$.
If $\Delta$ is a square, then
for any prime $p\mid M$ we have
\begin{equation}\label{eq:legendre}
\left(\frac{\Delta}{p}\right)
=\left(\frac{3d}{p}\right)
\left(\frac{|z|^3-c}{p}\right)\in\{0,1\},
\end{equation}
where $c\equiv\epsilon\left(\frac{d}{3}\right)k+\frac{d^3}{4}\pmod{M}$.
When $\lcm(d,18)\le\alpha B/M$, we first compute this
function for every residue class $|z|\pmod{M}$, and select only those
residues for which \eqref{eq:legendre} holds
for every $p\mid M$.
By Hasse's bound, the number of permissible residues is at
most
$$
\frac{M}{2^{\omega(M/(M,d))}}
\prod_{p\mid\frac{M}{(M,d)}}\left(1+O\!\left(\frac1{\sqrt{p}}\right)\right)
=\frac{M}{2^{\omega(M/(M,d))}}e^{O(\sqrt{P}/\log{P})},
$$
and thus the total number of $z$ values to consider is at most
\begin{align*}
\sum_{\lcm(d,18)\le\frac{\alpha B}{M}}&r_d(k)\left[M+
\frac{e^{O(\sqrt{P}/\log{P})}}{2^{\omega(M/(M,d))}}
\frac{\alpha B}{d}\right]
+\sum_{\substack{d\le\alpha B\\\lcm(d,18)>\frac{\alpha B}{M}}}
\frac{r_d(k)\alpha B}{d}\\
&\ll_k B\log{M}+
\frac{e^{O(\sqrt{P}/\log{P})}}{2^{\omega(M)}}
\sum_{g\mid M}\frac{2^{\omega(g)}r_g(k)}{g}
\sum_{d'\le\frac{\alpha B}{9gM}}\frac{r_{d'}(k)\alpha B}{d'}\\
&\ll_k B\log{M}+B\log{B}
\frac{e^{O(\sqrt{P}/\log{P})}}{2^{\omega(M)}}
\prod_{p\mid M}\left(1+\frac{2r_p(k)}{p}\right)\\
&\ll BP+\frac{B\log{B}}{2^{(1+o(1))P/\log{P}}}
\ll B(\log\log{B})(\log\log\log{B}).
\end{align*}
For the $z$ that are not eliminated in this way, we follow a similar
strategy with a few other auxiliary moduli $M'$ composed of larger
primes, in order to accelerate the square testing. We precompute tables
of cubes modulo $M'$ and Legendre symbols modulo $p\mid M'$, so that
testing \eqref{eq:legendre} is reduced to table lookups.
Only when all of these tests pass do we compute $\Delta$ in
multi-precision arithmetic \cite{GMP} and apply a general square test, and this
happens for a vanishingly small proportion of candidate values.
In fact we expect the number of Legendre tests to be bounded on average,
so in total, finding all solutions
with $|z|\le B$ should require no more than
$O_k\bigl(B(\log\log{B})(\log\log\log{B})\bigr)$
table lookups and arithmetic operations on integers in $[0,B]$.
Thus, when $B$ fits within the machine word size, we expect the running
time to be nearly linear, and this is what we observe in practice for
$B<2^{64}$.
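To make the square testing concrete, here is a simplified Python sketch of the idea; the auxiliary primes below are arbitrary choices for illustration, not the moduli used in the actual computation. Residues are checked against precomputed quadratic-residue tables, and an exact multi-precision square root is taken only when every cheap test passes.
\begin{verbatim}
from math import isqrt

AUX_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23]                     # illustrative choice
QR = {p: {(r * r) % p for r in range(p)} for p in AUX_PRIMES}  # squares mod p

def is_square(n):
    """Exact square test with cheap modular pre-filtering."""
    if n < 0:
        return False
    for p in AUX_PRIMES:
        if n % p not in QR[p]:    # table lookup: n is a quadratic non-residue mod p
            return False
    r = isqrt(n)                  # taken only for the rare surviving candidates
    return r * r == n

assert is_square(144) and not is_square(145)
\end{verbatim}
In the search itself such a test is applied to $\Delta=3d\bigl(4\epsilon\bigl(\tfrac{d}{3}\bigr)(z^3-k)-d^3\bigr)$; heuristically a non-square survives each prime test with probability about $1/2$, so the exact square root is reached only rarely.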
\section{Implementation}
We implemented the above algorithm in \texttt{C}, with a few inline assembly
routines for Montgomery arithmetic \cite{Montgomery1} written by Ben
Buhrow \cite{yafu}, and Kim Walisch's \texttt{primesieve} library
\cite{primesieve} for enumerating prime numbers.
The algorithm is naturally split between values of $d$ with a prime
factor exceeding $\sqrt{\alpha B}$ and those that are
$\sqrt{\alpha B}$-smooth. The former set of $d$ consumes more than
two-thirds of the running time, but is more easily parallelized.
We ran this part on the massively parallel cluster Bluecrystal Phase 3
at the Advanced Computing Research Centre, University of Bristol.
For the smooth $d$ we used a separate small cluster of 32- and 64-core
nodes.
We searched for solutions to \eqref{eq:main} for $k\in\{33,42\}$
and $\min\{|x|,|y|,|z|\}\le10^{16}$, and found the following:
$$
33=8\,866\,128\,975\,287\,528^3+(-8\,778\,405\,442\,862\,239)^3
+(-2\,736\,111\,468\,807\,040)^3.
$$
We also searched for solutions for $k=3$, addressing a question of
Mordell \cite[\S6]{Mordell}. In this case, Cassels \cite{Cassels} observed
that cubic reciprocity forces the additional constraint $x\equiv y\equiv
z\pmod{9}$, and it follows that part (i) of the lemma can be upgraded to a
congruence modulo $162$:
$$
z\equiv 4\left(\frac{d}{3}\right)d+3\bigl(d^2-1\bigr)\pmod{162}.
$$
Despite this added efficiency, we found no solutions,
beyond the known single-digit solutions,
with $\min\{|x|,|y|,|z|\}\le 10^{16}$.
The total computation used approximately 23 core-years over
one month of real time.
\bibliographystyle{amsalpha}
| {
"timestamp": "2019-03-19T01:28:19",
"yymm": "1903",
"arxiv_id": "1903.04284",
"language": "en",
"url": "https://arxiv.org/abs/1903.04284",
"abstract": "Inspired by the Numberphile video \"The uncracked problem with 33\" by Tim Browning and Brady Haran, we investigate solutions to $x^3+y^3+z^3=k$ for a few small values of $k$. We find the first known solution for $k=33$.",
"subjects": "Number Theory (math.NT)",
"title": "Cracking the problem with 33",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969641180276,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.7099044207001051
} |
https://arxiv.org/abs/2108.04147 | Discrete multilinear maximal functions and number theory | Many multilinear discrete operators are primed for pointwise decomposition; such decompositions give structural information but also an essentially optimal range of bounds. We study the (continuous) slicing method of Jeong and Lee -- which when debuted instantly gave sharp multilinear operator bounds -- in the discrete setting. Via several examples, number theoretic connections, pointed commentary, and a unified theory we hope that this useful technique will lead to further applications. This work generalizes, and was inspired by, the author's work with Palsson on a special case. | \section{Introduction}
Studying analytic operators from both a multilinear perspective and a discrete one has been an active area of research. Typically these operators have non-trivial boundedness properties if the underlying surface of integration is curved. A prototypical curved object is the sphere, and maximal spherical averaging operators arise naturally in many contexts. From the multilinear view, optimal Lebesgue space bounds for (multilinear) spherical maximal functions had been pursued in many papers, such as \cite{Oberlin, BGHHO, Geba_et_al, HHYpreprint}, building upon work of Stein \cite{St76} and Bourgain \cite{B86}. From a discrete perspective, Magyar-Stein-Wainger showed optimal bounds that were both different from the continuous ones and heavily employed number theoretic techniques \cite{MSW}. Since their work many analogues and variations have been considered, such as \cite{Hughes, Mirek_etal, MirekTrojan, Ionescu, ACHK} to name just a few.
Very recently, Jeong and Lee debuted a \emph{slicing} or \emph{slice and dice} technique that not only allowed one to get the sharp bounds for the bilinear spherical maximal function \cite{JL} (see also \cite{Dosidis}), but showcased a nice decomposition of the operator into a pointwise product of linear operators. Building on this work, Anderson and Palsson employed this technique in the discrete setting to very quickly deduce optimal bounds there for multilinear versions \cite{AP2} (these are recalled in the next section). This instantly improved their previous work \cite{AP}, which relied on number theoretic techniques, in all but two dimensions. The optimal bounds in those dimensions remain an open question; the difficulty of getting optimal multilinear bounds in low dimensions is indeed due to the dimensional restrictions of the linear operators, and this phenomenon does not seem to appear in the continuous setting. However, the slicing technique provides fast and powerful results in all other dimensions, along with a nice pointwise product bound.
This technique is much more general than the multilinear spherical averaging operator. Here we initiate the study into the generality of Jeong and Lee's slicing technique in the discrete multilinear setting. We prove several bounds for multilinear averaging operators, connect the slicing with the underlying number theory of the arithmetic surface, and formulate a general result.
We begin this paper with a very brief description of the slicing technique in the discrete setting, before showing how it works via the discrete multilinear Hardy-Littlewood maximal function, the discrete multilinear $k$-spherical maximal function, the discrete multilinear annular maximal function, and the discrete multilinear maximal function over the \emph{Waring-Goldbach} surface. The results for the Hardy-Littlewood case are basic, but display the structure of the technique. The spherical results are found in \cite{AP2}, so our treatment is brief. The annular maximal function results complement the recent continuous theory in \cite{DG} and forthcoming linear discrete theory in \cite{AKM}. Finally, the Waring-Goldbach operator is the most intricately connected to number theory via the well-studied \emph{Waring-Goldbach Problem}, which asks for statistics of \emph{prime} lattice points on spheres \cite{Hua_book} (more details appear in the next section). It is here that the multilinearity and the underlying distribution of the averaging set (primes) interact the most subtly. The Waring-Goldbach results that we obtain are indicative of this and have definite connections to how one must take care in relating the Waring-Goldbach problem in $\mathbb{Z}^{2d}$ to one in $\mathbb{Z}^d$. This can be viewed in some sense as a lack of scale invariance or lack of translation-dilation invariance when working over restricted sets such as the primes. There are likely several immediate applications to bilinear ergodic theory, specifically to multilinear operators whose linear versions have restrictions. The slicing method provides a criterion for classifying the operators where one can easily utilize the linear ergodic theorems to piggyback to multilinear results. While we specifically outline the Waring-Goldbach case here, without delving into these applications, it is likely that this framework is useful much more generally for other restricted sets. Finally, based on all of these examples, we describe and discuss a general framework and result in this paper's final section. Likely many extensions are possible; this study is only the start.
\subsection{Acknowledgements}
The author is supported by NSF DMS-1954407.
\section{Introduction of technique and applications}
The basic argument of Jeong and Lee involves slicing the bilinear spherical maximal function into two linear pieces: the Hardy-Littlewood maximal function and the (linear) spherical maximal function. From there the optimal linear bounds for these operators, properly interpolated with the symmetric slicing (i.e.\ switching the functions that go with each piece), and trivial estimates, allow for the optimal bounds.
This technique is extremely natural in the discrete setting, allowing for fluid, short proofs as exhibited in several of the examples below. There is the additional feature, mentioned in the Introduction, that since there are sometimes dimensional restrictions on the linear operators (which arise from underlying number theoretic considerations), these amplify in the multilinear setting, missing the sharp bounds in just a few low dimensions. However, the ease and generality of this technique, along with the wide range of sharp bounds, is appealing, and reduces difficult problems to a few exceptional cases. Hence, when we talk about sharpness in this article, we will mean sharpness in the Lebesgue space exponents, noting that the dimensional restrictions oftentimes miss the conjectured sharp bounds in a few dimensions.
The four specific classes of operators that we study all display different subtleties of this method, particularly in the Waring-Goldbach case. Bilinear cases of the first three types of operators studied are easy to depict pictorially in terms of the exponents $(\frac{1}{p},\frac{1}{q})$ in the domain. For all of these cases, we get the full \emph{H\"older range}; a figure of the bilinear bounds for the operators defined in Sections 2.1, 2.2 and 2.3 appears in Figure 1 at the end of Section 2.3. A key remark is that the \emph{nesting property} (described below) of discrete spaces allows us to extend these bounds from the H\"older range to the full range of bounds.
We use the notation $|\bm{u}|^2 = |u_1|^2 + \dots + |u_d|^2$ for vectors in $\mathbb{Z}^d$. We will frequently and implicitly use the \emph{nesting} feature of the discrete $l^p$ norms, that is, that $\|f\|_q \leq \|f\|_p$ for all $1 \leq p \leq q \leq \infty$. We also use symmetry in our arguments to claim the overall bounds: that is, showing an $l^{p_1}(\mathbb{Z}^d) \times \cdots \times l^{p_\ell}(\mathbb{Z}^d)$ bound also shows a similar bound with any permutation of the $p_i$ indices, unless otherwise indicated.
\subsection{Discrete multilinear Hardy-Littlewood maximal function}
The $\ell$-linear discrete Hardy-Littlewood averages are defined by
\[
T_\lambda(f_1,\ldots,f_{\ell})(\bm{x}) = \left| \frac{1}{B(\lambda)}\sum_{|\bm{u_1}|^2+\cdots + |\bm{u_\ell}|^2\leq\lambda}f_1(\bm{x}-\bm{u_1})\cdots f_\ell (\bm{x}-\bm{u_\ell}) \right|
\]
where $\bm{u_i} \in \mathbb{Z}^d$ and $B(\lambda) = \#\{ (\bm{u_1},\dots ,\bm{u_\ell}) \in \mathbb{Z}^{\ell d}: \sum_i|\bm{u_i}|^2\leq \lambda\}$ is the number of lattice points in the ball of radius $\lambda^{1/2}$ in $\mathbb{R}^{\ell d}$, which is comparable to $\lambda^{\ell d/2}$ for all $d \geq 1$.
We then define the corresponding maximal operator as
\[
T^*(f_1,\ldots,f_{\ell})(\bm{x}) = \sup_{\lambda \in {\mathbb N}} | T_\lambda(f_1,\ldots,f_{\ell})(\bm{x}) |.
\]
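The normalization by $\lambda^{\ell d/2}$ (up to the volume constant of the unit ball in $\mathbb{R}^{\ell d}$) can be sanity-checked numerically in a small case; the following Python sketch, included only as an illustration, counts lattice points directly for $\ell=2$, $d=1$, where $B(\lambda)\approx \pi\lambda$.
\begin{verbatim}
import math

def ball_count(lam):
    # number of (u, v) in Z^2 with u^2 + v^2 <= lam   (the case l = 2, d = 1)
    R = math.isqrt(lam)
    return sum(2 * math.isqrt(lam - u * u) + 1 for u in range(-R, R + 1))

for lam in (10**2, 10**3, 10**4):
    print(lam, ball_count(lam) / lam)   # ratio tends to pi, so B(lam) ~ lam^{l d / 2}
\end{verbatim}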
We now illustrate the \emph{slice and dice} technique to show the following (likely) well-known result.
\begin{thm}
$T^*(f_1, \dots , f_{\ell})$ is bounded on $l^{p_1}(\mathbb{Z}^d)\times\ldots\times l^{p_\ell}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$, $d \geq 1$, $\sum_i\frac{1}{p_i} \geq \frac{1}{r}$, $p_i> 1$ and $r>1/\ell$. This range is sharp.
\end{thm}
\begin{proof}
We prove this by induction. We start with the base case $\ell = 2$. Here we slightly rename our variables and work with
\[
T_\lambda(f,g)(\bm{x}) = \left| \frac{1}{B(\lambda)}\sum_{|\bm{u}|^2+|\bm{v}|^2\leq \lambda}f(\bm{x}-\bm{u})g(\bm{x}-\bm{v}) \right|.
\]
Thus
\[
|T_\lambda (f,g)(\bm{x})| \leq \frac{1}{\lambda^{\frac{d}{2}}}\sum_{|\bm{u}|^2 \leq \lambda}|f(\bm{x}-\bm{u})|\cdot \frac{1}{\lambda^{\frac{d}{2}}} \bigg|\sum_{|\bm{v}|^2 \leq \lambda - |\bm{u}|^2} g(\bm{x}-\bm{v})\bigg|.
\]
Now let $\eta = \lambda - |\bm{u}|^2$, so $\eta \leq \lambda$ and
\[
\frac{1}{\lambda^{\frac{d}{2}}} \bigg|\sum_{|\bm{v}|^2 \leq \lambda - |\bm{u}|^2} g(\bm{x}-\bm{v})\bigg| \leq \sup_{\eta \in{\mathbb N}}\frac{1}{\eta^{\frac{d}{2}}} \bigg|\sum_{|\bm{v}|^2 \leq \eta} g(\bm{x}-\bm{v}) \bigg| = M_{HL}(g)(\bm{x})
\]
where $M_{HL}$ is the (discrete) linear Hardy-Littlewood maximal function. Thus we can slice our original operator and insert the above to get the pointwise product
\[
\sup_{\lambda\in{\mathbb N}}|T_\lambda (f,g)(\bm{x})| \leq \sup_{\lambda\in{\mathbb N}} \frac{1}{\lambda^{\frac{d}{2}}}\sum_{|\bm{u}|^2 \leq \lambda} |f(\bm{x}-\bm{u})| \cdot M_{HL}(g)(\bm{x}) = M_{HL}(f)(\bm{x}) \cdot M_{HL}(g)(\bm{x})
\]
and since each operator in this pointwise product is bounded on $l^p(\mathbb{Z}^d)$ for all $p >1$, the result follows with $r > 1/2$.
Now we assume boundedness of the $(\ell-1)$-linear operator and prove bounds for the $\ell$-linear one. We first slice
\[
T_\lambda(f_1,\ldots,f_{\ell})(\bm{x}) \leq \frac{1}{\lambda^{d/2}}\sum_{|\bm{u_1}|^2\leq\lambda}|f_1(\bm{x}-\bm{u_1})|\cdot \frac{1}{\lambda^{(\ell-1)d/2}} \sum_{|\bm{u_2}|^2+ \cdots + |\bm{u_\ell}|^2\leq\lambda} \left|f_2 (\bm{x}-\bm{u_2})\cdots f_\ell (\bm{x}-\bm{u_\ell}) \right|
\]
and dice
\[
\frac{1}{\lambda^{(\ell-1)d/2}}\sum_{|\bm{u_2}|^2+ \cdots + |\bm{u_\ell}|^2\leq\lambda} \prod_{i=2}^\ell\left|f_i (\bm{x}-\bm{u_i}) \right|
\leq M_{HL}^{\ell-1}(f_2, \dots ,f_\ell)(\bm{x}),
\]
so overall
\[
T^*(f_1, \dots , f_{\ell}) \leq M_{HL}(f_1)\cdot M_{HL}^{\ell-1}(f_2, \dots ,f_\ell).
\]
By the induction hypothesis, we know the $l^{p_2}(\mathbb{Z}^d)\times\ldots\times l^{p_\ell}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$ bounds for $M_{HL}^{\ell-1}$, therefore overall, we can conclude the claimed bounds.
We now show sharpness. Our pointwise products obtained above suggest that we can use pointwise products of the sharpness examples for the linear version of our operator, and indeed this is the case. We briefly outline the argument. Let $f_i = \delta_0$ (the Dirac delta function centered at the origin).
\[
\Big\|\sup_{\lambda\in{\mathbb N}}|T_\lambda (f_1, \dots , f_\ell)|\Big\|^r_{l^r(\mathbb{Z}^d)} =
\sum_{\bm{x} \in \mathbb{Z}^d}\Big(\sup_{\lambda\in{\mathbb N}}\frac{1}{\lambda^{\frac{\ell d}{2}}}\sum_{|\bm{u_1}|^2+\cdots +|\bm{u_\ell}|^2 \leq \lambda}\prod_{i=1}^\ell \delta_0(\bm{x}-\bm{u_i})\Big)^r.
\]
For each nonzero $\bm{x}$, choose $\lambda = \ell|\bm{x}|^2$, so that the term with $\bm{u_1}=\cdots=\bm{u_\ell}=\bm{x}$ appears in the inner sum; this bounds the above from below by
\[
\sum_{\bm{x}\in\mathbb{Z}^d\setminus\{0\}} \frac{1}{(\ell|\bm{x}|^2)^{\ell dr/2}} = c_{\ell, d, r}\sum_{\bm{x} \in \mathbb{Z}^d\setminus\{0\}}\frac{1}{|\bm{x}|^{\ell dr}},
\]
which converges if and only if $r > 1/\ell$ for all $d \geq 1$.
\end{proof}
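The pointwise bound $T^*(f,g)\leq M_{HL}(f)\,M_{HL}(g)$ from the base case can also be checked numerically. The Python sketch below (an illustration only; the functions, the truncation level, and all identifiers are arbitrary choices of ours) does this on $\mathbb{Z}$, i.e.\ $d=1$ and $\ell=2$, using the $\lambda^{d/2}$ normalization from the proof and truncating the suprema to a finite range of radii.
\begin{verbatim}
import math, random

random.seed(0)
f = {n: random.random() for n in range(-5, 6)}   # finitely supported, nonnegative
g = {n: random.random() for n in range(-7, 4)}
LAM = 200                                        # truncation of the sup over radii

def val(h, n):
    return h.get(n, 0.0)

def M_HL(h, x):
    # discrete Hardy-Littlewood maximal function on Z, normalization eta^{d/2}
    best = 0.0
    for eta in range(1, LAM + 1):
        R = math.isqrt(eta)
        best = max(best, sum(val(h, x - u) for u in range(-R, R + 1)) / math.sqrt(eta))
    return best

def T_star(f, g, x):
    # bilinear maximal average, normalization lambda^{(ell d)/2} = lambda
    best = 0.0
    for lam in range(1, LAM + 1):
        s, R = 0.0, math.isqrt(lam)
        for u in range(-R, R + 1):
            Rv = math.isqrt(lam - u * u)
            s += val(f, x - u) * sum(val(g, x - v) for v in range(-Rv, Rv + 1))
        best = max(best, s / lam)
    return best

for x in range(-15, 16):
    assert T_star(f, g, x) <= M_HL(f, x) * M_HL(g, x) + 1e-9   # the slicing bound
\end{verbatim}
The assertion holds for any nonnegative finitely supported inputs, since the slicing argument gives the inequality pointwise with these normalizations and with the same truncation on both sides.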
\begin{remark}
This result can easily be upgraded to weak type restricted estimates at endpoints using known endpoint estimates for the Hardy-Littlewood maximal function. Similar comments also apply to all results stated below, using the known endpoint estimates for the linear operators.
\end{remark}
\begin{remark}
This result also holds in the case of ``$k$-balls'', that is, averages over the regions $B_k := \{\sum_i |\bm{u_i}|^k \leq \lambda\}$. The above proofs all follow through with obvious modifications, leading to the same results.
\end{remark}
\subsection{Discrete multilinear (k)-spherical maximal function}
These results are proved in \cite{AP2}. We recall the main results, make a few comments, and refer the reader to \cite{AP2} for more details.
We begin by defining averages over ``degree $k$'' spheres, $k \geq 3$, $k \in \mathbb{Z}$ (if $k$ is odd, we assume that $y^k = |y|^k$).
Recall that the $\ell$-linear degree $k$ discrete spherical averages are defined as:
\[
T_\lambda(f_1,\ldots,f_{\ell})(\bm{x}) = \left| \frac{1}{N(\lambda)}\sum_{|\bm{u_1}|^k+\cdots + |\bm{u_\ell}|^k=\lambda}f_1(\bm{x}-\bm{u_1})\cdots f_\ell (\bm{x}-\bm{u_\ell}) \right|
\]
where $N(\lambda)$ is asymptotic to $\lambda^{\frac{\ell d}{k}-1}$ for all $d > d_0(k)/\ell$ (see \cite{ACHK2} for precise values). Define the maximal function $T^*$ as in the previous subsection.
Also define \[r_0(d,k) = \frac{2+2\delta_0(d,k)}{(\ell - 1)(2+2\delta_0(d,k)) + (1+2\delta_0(d,k))},\] where $\delta_0(d,k)$, defined on page 2 of \cite{ACHK2}, relates to the best known bounds for the discrete linear degree $k$ operator. Finally define \[p_0(d,k) = \max \left\{ 1+\frac{1}{1+2\delta_0(d,k)}, \frac{d}{d-k}\right\}.\] This $p_0(d,k)$ provides the best known $l^p$ bounds for the discrete linear operator; it is conjectured to be equal to $\frac{d}{d-k}$ for all $k \geq 3$. We can now state (from \cite{AP2}):
\begin{thm}
$T^*(f_1, \dots , f_{\ell})$ is bounded on $l^{p_1}(\mathbb{Z}^d)\times\ldots\times l^{p_\ell}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$, $\frac{1}{p_1} + \ldots + \frac{1}{p_\ell} \geq \frac{1}{r}$, $r>\max\{ r_0(d,k), \frac{d}{\ell d-k}\}$, $p_1,\ldots,p_{\ell}> 1$ and $d > d_{0}(k)$. Moreover, the bound $r > \frac{d}{\ell d-k}$ is a necessary condition.
\end{thm}
The case $k=2$ is much easier to state:
\begin{cor}
$T^*(f_1, \dots , f_{\ell})$ is bounded on $l^{p_1}(\mathbb{Z}^d)\times\ldots\times l^{p_\ell}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$, $\frac{1}{p_1} + \ldots + \frac{1}{p_\ell} \geq \frac{1}{r}$, $r>\frac{d}{\ell d-2}$, $p_1,\ldots,p_{\ell}> 1$ and $d \geq 5$.
\end{cor}
Proofs of these are found in \cite{AP2}; however, we want to mention that there is an even easier argument to show the sharpness of these bounds (or necessary conditions). We briefly sketch this argument below in the most general case.
\begin{proof}
We show the necessary condition via the simpler example $f_i = \delta_0$ for all $i$. Then
\[
\Big\|\sup_{\lambda\in{\mathbb N}}|T_\lambda (f_1, \dots , f_\ell)|\Big\|^r_{l^r(\mathbb{Z}^d)} =
\sum_{\bm{x} \in \mathbb{Z}^d}\Big(\sup_{\lambda\in{\mathbb N}}\frac{1}{\lambda^{\frac{\ell d}{k}-1}}\sum_{|\bm{u_1}|^k+\cdots +|\bm{u_\ell}|^k = \lambda}\prod_{i=1}^\ell\delta_0(\bm{x}-\bm{u_i})\Big)^r.
\]
As earlier, for each nonzero $\bm{x}$ choose $\lambda = \ell|\bm{x}|^k$, so that there is only one nonzero term in the inner sum, and we can bound the above from below by
\[
\sum_{\bm{x}\in\mathbb{Z}^d\setminus\{0\}} \frac{1}{(\ell |\bm{x}|^k)^{(\frac{\ell d}{k}-1)r}} = c_{\ell,d,k,r}\sum_{\bm{x}\in\mathbb{Z}^d\setminus\{0\}} \frac{1}{|\bm{x}|^{(\ell d -k)r}},
\]
which converges if and only if $r > \frac{d}{\ell d-k}$; this matches the sharp range for $k=2$. Moreover, it works for any dimension where there is an infinite sequence of spheres such that the Hardy-Littlewood asymptotic holds (after redefining the operator to only average over the restricted range). For example, if $k=2$, this argument works for $d=4$ whenever $\lambda$ is not divisible by 4.
\end{proof}
\subsection{Discrete multilinear annular maximal function}
Here instead of averaging over spheres we will now average over annuli, which in some way ``interpolate" between ball averages and spherical averages, shedding light on refined properties of these objects. This linear (continuous) problem has a significant history, and as mentioned earlier, multilinear results have recently been shown \cite{DG}. A forthcoming preprint of Anderson, Kumchev and Madrid shows linear bounds in the discrete setting. We introduce these averages.
Let $0 < \theta < 1$, and define the annulus of width $\lambda^{\theta}$ by $A(\lambda) = \{\bm{x} \in \mathbb{Z}^d :\lambda - \lambda^\theta < |\bm{x}|^2 \leq \lambda \}$. We will also abuse notation and use $A(\lambda)$ to represent the number of lattice points. We have
\[
A(\lambda) \sim C_d \lambda^{d/2-1+\theta} \]
for all $d \geq 5$. The same asymptotic holds for the annulus of width $\ell \lambda^\theta$, a fact we will implicitly use below (in fact the following definitions remain valid when defined on an annulus of width $\mu$, as long as $\mu = o(\lambda)$). It is also likely possible to extend this asymptotic to $d=4$, but for the rest of this section we assume $d \geq 5$.
Define the discrete annular averages as
\[
S_{\lambda,\theta}(f)(\bm{x}) = \lambda^{1 - \theta - d/2}\sum_{\bm{u} \in A(\lambda)}|f(\bm{x}-\bm{u})|,
\]
and the maximal function
\[ S^*_{\theta}(f)(\bm{x}) = \sup_{\lambda \in \mathbb N} S_{\lambda,\theta}(f)(\bm{x}).\]
From \cite{AKM} we have the following bounds for the maximal function.
\begin{thm}
$S^*_{\theta}$ is bounded on $\ell^p(\mathbb Z^d)$ for $p > p_0(\theta,d)$, where
\[ p_0(\theta,d) = \frac d{d-2+2\theta}. \]
\end{thm}
Note that \[ p_0(1^-,d) = 1, \quad p_0(0^+,d) = \frac d{d-2}, \] which match the two extremal cases of the Hardy-Littlewood maximal function ($\theta = 1$) and the spherical maximal function ($\theta = 0$). We now define the $\ell$-linear discrete annular averages (over the annulus in $\mathbb{Z}^{\ell d}$)
\[
S_{\lambda,\theta}(f_1, \dots ,f_\ell)(\bm{x}) = \lambda^{1 - \theta - \ell d/2}\sum_{\lambda - \lambda^{\theta} <\sum_i|\bm{u_i}|^2 \leq \lambda}\prod_{i=1}^\ell |f_i(\bm{x}-\bm{u_i})|
\]
and the corresponding maximal function analogously.
We have
\begin{thm}
$S^*_{\theta}(f_1, \dots , f_{\ell})$ is bounded on $l^{p_1}(\mathbb{Z}^d)\times\ldots\times l^{p_\ell}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$, $\frac{1}{p_1} + \ldots + \frac{1}{p_\ell} \geq \frac{1}{r}$, $p_1,\ldots,p_{\ell}> 1$ and $r>\frac{d}{\ell d - 2+2\theta}$. These bounds are also sharp.
\end{thm}
\begin{proof}
We start with the bilinear case. We first bound
\[
S_{\lambda,\theta}(f, g)(\bm{x}) \leq \frac{1}{\lambda^{d/2}} \sum_{0 \leq |\bm{v}|^2 \leq \lambda^\theta }|g(\bm{x} - \bm{v})| \cdot \frac{\lambda^\theta}{\lambda^{d/2-1}}\sum_{\lambda - 2\lambda^{\theta} <|\bm{u}|^2 \leq \lambda}|f(\bm{x}-\bm{u})|
\]
and then let $\eta = \lambda^\theta$ (so $\frac{1}{\lambda^{d/2}} = \frac{1}{\eta^{d/(2\theta)}} \leq \frac{1}{\eta^{d/2}}$), which gives
\[
\frac{1}{\lambda^{d/2}} \sum_{0 \leq |\bm{v}|^2 \leq \lambda^\theta }|g(\bm{x} - \bm{v})| \leq \sup_\eta \frac{1}{\eta^{d/2}} \sum_{0 \leq |\bm{v}|^2 \leq \eta }|g(\bm{x} - \bm{v})| \leq M_{HL}(g).
\]
Then slice and bound
\[
S^*_{\theta}(f, g)(\bm{x}) \leq M_{HL}(g) \cdot \sup_\lambda \frac{\lambda^\theta}{\lambda^{d/2-1}}\sum_{\lambda - 2\lambda^{\theta} <|\bm{u}|^2 \leq \lambda}|f(\bm{x}-\bm{u})| \leq M_{HL}(g)\cdot S^*_{\theta}(f),
\]
by recalling that the factor $2\lambda^\theta$ in the annulus does not affect the definition of the operator. Apply the linear bounds to each operator in the product to get $l^{p}(\mathbb{Z}^d)\times l^{q}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$ bounds for all $p,q>1$ and $\frac{1}{p}+\frac{1}{q} \geq \frac{1}{r}$, $r > \frac{d}{2d-2+2\theta}$. By induction, the $\ell$-linear bounds follow.
Again, sharpness can be shown using Dirac delta functions. For the linear case this brief argument appears in \cite{AKM} and is analogous to the other sharpness arguments in this paper. The multilinear case carries through as in previous sections (and again, only relies on the existence of an asymptotic, so as mentioned earlier, it is quite likely that this holds for $d=4$ as well).
\end{proof}
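The slicing step above rests on an elementary structural fact: a sum over the annulus in $\mathbb{Z}^{2d}$ can be rewritten, for each value of one variable, as a sum over a shifted annulus in the other variable. The following toy Python check (ours; $d=2$, a single value of $\lambda$ and random data are used only to keep the computation small, and no claim about boundedness or constants is made) verifies this exact identity.
\begin{verbatim}
# Structural illustration (ours) of slicing: the bilinear annular sum over
# pairs (u, v) equals, after fixing v, an annular sum in u alone.
import itertools, random

d, lam, theta = 2, 60, 0.5
width = lam ** theta
R = int(lam ** 0.5) + 1
pts = list(itertools.product(range(-R, R + 1), repeat=d))
random.seed(0)
F = {u: random.random() for u in pts}
G = {v: random.random() for v in pts}
sq = lambda x: sum(c * c for c in x)

direct = sum(F[u] * G[v] for u in pts for v in pts
             if lam - width < sq(u) + sq(v) <= lam)
sliced = sum(G[v] * sum(F[u] for u in pts
                        if lam - width - sq(v) < sq(u) <= lam - sq(v))
             for v in pts)
print(abs(direct - sliced) < 1e-9)   # True: the two computations agree
\end{verbatim}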
\begin{figure}[h!]\label{range}
\begin{tikzpicture}
\draw (0,0) rectangle (5,5);
\path[fill=blue!35] (0,0)--(0,5)--(3,5) --(5,3)--(5,0);
\path[fill=green!35] (3,5)--(4,5)--(5,4)--(5,3)--(3,5);
\draw[dash pattern= { on 2pt off 1pt}] (0,5)--(4,5)--(5,4)--(5,0);
\draw (0,5)--(0,0)--(5,0);
\node [below left] at (0,0) {$O$};\node [above] at (0,5) {$A$};
\node [above] at (2.5,5) {$B=(\frac{d-2}{d},1)$};
\draw (4,5) circle [radius=0.04];
\node [above] at (4.2,5) {$C$};
\draw (3,5) circle [radius=0.04];
\node [right] at (5,3) {$B'$};
\draw (5,3) circle [radius=0.04];
\node [right] at (5,4) {$C'=(1,\frac{d-2+2\theta}{d})$};
\draw (5,4) circle [radius=0.04];
\draw (5,5) circle [radius=0.04];
\node [below right] at (5,0) {$A'$};
\draw [<->] (5.8,0.7)--(5.8,0)--(6.5,0);
\node at (6.7,0) {$\frac{1}{p}$};
\node at (5.8,0.7) [right]{$\frac{1}{q}$};
\end{tikzpicture}
\caption{The range of $l^p(\mathbb{Z}^d)\times l^q(\mathbb{Z}^d)$ bounds (in terms of $\frac{1}{p}$ and $\frac{1}{q}$) for the operators in 2.1, 2.2 and 2.3: the full square represents the bounds for the bilinear operator in 2.1, the blue region for the operator in 2.2, and the blue + green region for the operator in 2.3. The notation $X'$ represents the symmetric point to $X$ about the diagonal. These bounds are all sharp.}
\end{figure}
\subsection{Discrete multilinear Waring-Goldbach maximal function}
We now show an example of how this technique works over multilinear averages on ``sparser'' sets via considering multilinear averages over the prime spheres. The delicate interplay between multilinearity, curvature, and primes demonstrates that multilinear surfaces defined over restricted subsets of the integers often require more intricate analysis. This example also indicates the difficulty of directly relating ``restricted'' solutions of Diophantine surfaces between different dimensions.
The discrete averaging operators we consider are averages over the prime vectors on the algebraic surface in $\mathbb{Z}^{\ell d}$
\begin{equation}
\label{sphere}
|\bm{p_1}|^k+ \dots + |\bm{p_\ell}|^k = \lambda.
\end{equation}
Let $P(\lambda)$ denote the number of prime solutions counted with logarithmic weights
\[ P(\lambda) = \sum_{|\bm{p_1}|^k + \dots + |\bm{p_\ell}|^k = \lambda} \prod_{i=1}^\ell \log(\bm{p_i}), \] where $\log\mathbf x := (\log x_1) \cdots (\log x_d)$ for $\mathbf x = (x_1,\ldots , x_d)$, and ${\bf p}$ is a vector with all coordinates prime.
The Waring--Goldbach problem in analytic number theory involves the study of these points (typically stated for $\ell = 1$). Classic work of Hua \cite{Hua_book} gives an asymptotic (for $d$ large enough with respect to $k$ -- see \cite{Kumchev_Wooley1,Kumchev_Wooley2} for best recent results) as long as we restrict to a certain arithmetic progression $\Gamma_{d,k}$:
\begin{equation}
\label{Hua}
P(\lambda) \sim \lambda^{{\ell d/k - 1}},
\end{equation}
where we have ignored the singular series and Gamma function factors that for our purposes can be regarded as constants. Some examples of progressions $\Gamma_{d,k}$ are (see Chapter VIII in Hua~\cite{Hua_book} for more details):
\begin{itemize}
\item $\Gamma_{d,k}$ is the residue class $\lambda \equiv d \pmod 2$ when $k$ is odd;
\item $\Gamma_{5,2}$ is the residue class $\lambda \equiv 5 \pmod {24}$;
\item $\Gamma_{17,4}$ is the residue class $\lambda \equiv 17 \pmod {240}$.
\end{itemize}
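As a side remark of ours (not needed for what follows), the progression in the second example is consistent with a simple congruence: every prime $p\geq 5$ satisfies $p^2 \equiv 1 \pmod{24}$, so a sum of five squares of primes $\geq 5$ is always $\equiv 5 \pmod{24}$. A short Python check of this congruence:
\begin{verbatim}
# Our own illustration of the congruence behind Gamma_{5,2}: p^2 = 1 (mod 24)
# for every prime p >= 5, hence sums of five such prime squares are 5 (mod 24).
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

ps = [p for p in primes_up_to(500) if p >= 5]
print(all(p * p % 24 == 1 for p in ps))                 # True
print(sorted({(5 * p * p) % 24 for p in ps}))           # [5]
\end{verbatim}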
The authors in \cite{ACHK} prove $l^p(\mathbb{Z}^n)$ bounds for the linear discrete spherical maximal function along the primes. They consider the discrete spherical averages
\begin{equation}\label{primeavg}
A_\lambda (f)({\bf x}) := \frac{1}{P(\lambda)} \sum_{|{\bf p}|^k=\lambda} (\log{\bf p}) f({\bf x-p}).
\end{equation}
and corresponding maximal function
\[
A^*(f)(\bm{x}) = \sup_{\lambda \in \Gamma_{d,k}}|A_\lambda (f)(\bm{x})|
\]
and show the following bounds (see \cite{ACHK} for definitions and notation -- the authors use $n$ there instead of $d$ to indicate dimension).
\begin{thm}\label{mainmaxfunction}
Let $k \geq 2$ and $d \geq \max\{ d_1(k), d_2(k) \}$. The maximal function given by
\begin{equation}
A_* (f) := \sup_{\lambda \in \Gamma_{d,k}} |A_\lambda (f)|
\end{equation}
is bounded on \(\ell^p(\mathbb{Z}^d)\) for all $p > p_{k,d}$.
\end{thm}
In particular, this operator is bounded for all $p> d/(d-2)$ when $k=2$ and $d \geq 7$ (and this bound is sharp in terms of $p$).
We extend their definition to the multilinear setting, which requires a careful definition. For ease of notation, consider the discrete bilinear operator:
\[
A_\lambda (f,g)({\bf x}) := \frac{1}{P(\lambda)} \sum_{|{\bf p}|^k+ |{\bf q}|^k=\lambda} (\log{\bf p})(\log{\bf q}) f({\bf x-p}) g({\bf x-q})
\]
and maximal function
\[
A_* (f,g) := \sup_{\lambda \in \Gamma_{2d,k}} |A_\lambda (f,g)| .
\]
It turns out that to say something meaningful about the boundedness of this operator using the slicing method, one must also impose conditions upon both $\bm{p}$ and $\bm{q}$ separately. We first give a thorough commentary before specifying these conditions.
In the definition of $A_\lambda(f,g)$, we would like to replace $P(\lambda)$ by its asymptotic. We have a nice asymptotic as long as $\lambda \in \Gamma_{2d,k}$ (for the $\ell$-linear version, we would need $\lambda \in \Gamma_{\ell d,k}$), and naturally this condition is translated to the supremum in the maximal function. However, this isn't quite enough to be able to apply the slicing method. The slicing method splits the operator into two pieces, one depending on $\bm{p}$ and one on $\bm{q}$ -- we would then like to apply the Hardy-Littlewood maximal function (along the primes) bound to the piece involving $\bm{p}$ and the linear operator $A^*(g)$ bound to the piece involving $\bm{q}$ (and also the symmetric condition switching the roles of $\bm{p}$ and $\bm{q}$). To be able to bound $A^*(g)$ we need that $|\bm{q}|^k \in \Gamma^1_{d,k}$ for some allowable progression. Similarly, switching the roles of the primes, we need $|\bm{p}|^k \in \Gamma^2_{d,k}$ for another allowable progression. However, since we assume $\lambda \in \Gamma_{2d,k}$, we also need
\[
\Gamma^1_{d,k}+\Gamma^2_{d,k} = \Gamma_{2d,k},
\]
a ``sumset" condition. Note that the sumset condition is not enough without assuming both $|\bm{p}|^k \in \Gamma^1_{d,k}$ and $|\bm{q}|^k \in \Gamma^1_{d,k}$ (an easy example as to why is described below). With this is mind, define the $\ell$-linear discrete spherical averages along the primes (for $d, k, \lambda$ where this is defined, as discussed above):
\[
A_\lambda (f_1, \dots ,f_\ell)({\bf x}) := \frac{1}{\lambda^{\ell d/k -1}} \sum_{\sum_i|{\bf p_i}|^k=\lambda} \prod_{i=1}^\ell (\log{\bf p_i}) f_i({\bf x-p_i})
\]
and corresponding maximal function (with supremum over $\lambda \in \Gamma_{\ell d,k}$). We can now state the main result of this section.
\begin{thm}
Fix allowable $d,k,\ell, \Gamma_{\ell d, k}$. Assume that $|\bm{p_i}|^k \in \Gamma^i_{d,k}$ for some allowable progressions $\Gamma^i_{d,k}$, and the sumset condition $\sum_i\Gamma^i_{d,k} = \Gamma_{\ell d,k}$.
Then $A^*(f_1, \dots , f_{\ell})$ is bounded on $l^{p_1}(\mathbb{Z}^d)\times\ldots\times l^{p_\ell}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$, $\frac{1}{p_1} + \ldots + \frac{1}{p_\ell} \geq \frac{1}{r}$, $p_1,\ldots,p_{\ell}> 1$ and $r>\frac{p_{k,d}}{(\ell-1)p_{k,d}+1}$. We also have that $r >\frac{d}{\ell d-k}$ is necessary.
\end{thm}
\begin{remark}
We give an example to show that these conditions on the progression $\Gamma$ aren't empty. Let $k$ be odd, $\ell = 2$, $\Gamma^i_{d,k} = d \mod 2$ and $\Gamma_{\ell d,k} = 2d \mod 2$. Now let $d$ be even. These conditions reduce to restricting $|\bm{p_i}|^k, |\bm{q_i}|^k$ to be even and $\lambda$ to be even. If we just consider odd primes, the conditions on $d$ and $k$ guarantee that $\sum_{j=1}^d|p_j|^k$ and $\sum_{j=1}^d|q_j|^k$ are both even. To guarantee that every even $\lambda$ is hit, notice that since even $\lambda$ are allowable in $\mathbb{Z}^{2d}$, then there must exist prime solutions to the Waring-Goldbach problem. Fix any such $\lambda$. If both $\sum_{j=1}^d|p_j|^k$ and $\sum_{j=1}^d|q_j|^k$ are odd (note they must have the same parity), then there is both an odd number of $p_j = 2$ and an odd number of $q_j = 2$. Therefore, there is a rearrangement of the $p_j$ and $q_j$ such that there is an even number of $p_j=2$ and an even number of $q_j = 2$, and this rearrangement is also a solution to the Waring-Goldbach problem. Therefore every even $\lambda$ is hit by a $\bm{p}$ and a $\bm{q}$ in the specified progressions.
\end{remark}
\begin{proof}
Once the care has been taken regarding the progressions $\Gamma$, the proof follows in a similar fashion to the previous examples. First let $\eta = \lambda - |\bm{p}|^k$ and slice
\[
A_\lambda (f,g)({\bf x}) = \frac{1}{\lambda^{d/k}}\sum_{|{\bf p}|^k\leq \lambda} (\log{\bf p})f({\bf x-p})\cdot \frac{1}{\lambda^{d/k-1}}\sum_{|{\bf q}|^k=\eta}(\log{\bf q}) g({\bf x-q})
\]
and note that
\[
\frac{1}{\lambda^{d/k}}\sum_{|{\bf p}|^k\leq \lambda} (\log{\bf p})f({\bf x-p}) \leq \sup_{\lambda \in \Gamma_{d,k}}\frac{1}{\lambda^{d/k}}\sum_{|{\bf p}|^k\leq \lambda} (\log{\bf p})f({\bf x-p}) = M_{HL}^{primes}(f)
\]
so altogether
\[
A_\lambda (f,g)({\bf x}) \leq M_{HL}^{primes}(f)\cdot \sup_{\eta \in \Gamma_{d,k}} \frac{1}{\eta^{d/k-1}}\sum_{|{\bf q}|^k=\eta}(\log{\bf q}) g({\bf x-q}) = M_{HL}^{primes}(f)\cdot A^*(g).
\]
Since $M_{HL}^{primes}$ is bounded on $l^p(\mathbb{Z}^d)$ for all $p> 1$ \cite{Trojan, Wierdl} and $A^*(g)$ is bounded on $l^p(\mathbb{Z}^d)$ for all $p> p_{k,d}$, the result follows in the bilinear case. Induction concludes the $\ell$-linear case. The necessary conditions yet again follow by testing Dirac delta functions.
\end{proof}
\section{General formulation and further remarks}
In examining these examples, we can see some common elements that are required. These include: a nice asymptotic for the number of lattice points that scales conveniently, and allows us to ``pull off'' the asymptotic for the number of points inside a ball, an additive structure between the $\ell$ variables defining the surface (namely, no mixing of variables via product conditions), positivity in the variables, no ``holes'' in the sequence of $\lambda$ defining the surface (or if there are ``holes'', that these relate to the holes in the linear versions), and a nice projection of the surface in $\mathbb{Z}^{\ell d}$ to the surface in $\mathbb{Z}^{(\ell -1)d}$.
Each of these elements requires some thought. For instance, the asymptotic requirement can be seen by the following argument. Notice that the examples above require an asymptotic for the number of points on our degree $k$ surface in $\mathbb{Z}^{\ell d}$ of $\lambda^{\phi(\ell d)}$ and the following relationship:
\[
\lambda^{\phi(\ell d)} = \lambda^{d/k}\cdot \lambda^{\phi((\ell - 1)d)}.
\]
In other words, we need
\[
\frac{\phi(\ell d) - \phi((\ell-1)d)}{d} = \frac{1}{k}
\]
which is a condition on the derivative in the discrete setting (that is, we want this to hold in all large dimensions $d$ and all $\ell \in {\mathbb N}$). This condition is implied by $\phi'(x) = \frac{1}{k}$, that is $\phi(d) = d/k +C$. These $\phi$ are precisely the types of functions involved in the asymptotics for lattice points on spheres, balls, annuli, and related surfaces. For instance, for the degree $k$ sphere one has $\phi(\ell d) = \ell d/k - 1$, so that $\phi(\ell d) - \phi((\ell-1)d) = d/k$, as required. This implication clearly indicates the types of surfaces that we can handle by our method based on their asymptotics. There are also other criteria to consider, as mentioned above. As we did for the asymptotic, we can phrase these elements more precisely below, allowing us to state a more general theorem about the applicability of the slicing method.
For convenience, we state everything in terms of the bilinear setup, using the variables $\bm{u}$ and $\bm{v}$.
Consider the following conditions on a bilinear operator
\begin{equation}
\label{general operator}
T_\lambda(f,g)(\bm{x}) = \left| \frac{1}{N(\lambda)}\sum_{h(\bm{u}, \bm{v},\lambda)}f(\bm{x}-\bm{u})g (\bm{x}-\bm{v}) \right|
\end{equation}
that satisfy
\begin{enumerate}
\item An asymptotic scaling that allows for a separation into the ball asymptotic and the asymptotic for the underlying surface in $\mathbb{Z}^d$: $N(\lambda) \sim \lambda^{\phi(2d)}$ where $\phi(d) = d/k+C$
\item A positive, additive structure of $h(\bm{u}, \bm{v}, \lambda) := h(\bm{u}, \bm{v}) \leq \lambda$ (or $h(\bm{u}, \bm{v}, \lambda) := h(\bm{u}, \bm{v}) = \lambda$): that is, $h(\bm{u}, \bm{v}) = h_1(\bm{u})+h_2(\bm{v})$, with $h_1, h_2 \geq 0$
\item All large $\lambda$ are covered in the asymptotics in both $\mathbb{Z}^{2d}$ and $\mathbb{Z}^d$, or if certain $\lambda$ are excluded, these relate in a one to one fashion to those $\lambda$ excluded in the linear projections (see next item for more details)
\item There is a one to one correspondence between allowable $\lambda$ and $\eta_1$, where $\eta_1$ is defined via the projection $h_1(\bm{u}) \leq \lambda - h_2(\bm{v}) :=\eta_1$ (or with $``="$ for surfaces $h(\bm{u}, \bm{v}) = \lambda$) onto $\bm{u}$ of $h$ (implicitly assume the projection is well-defined in $\mathbb{Z}^d$)
\item There is a one to one correspondence between allowable $\lambda$ and $\eta_2$, where $\eta_2$ is defined via the projection $h_2(\bm{v}) \leq \lambda - h_1(\bm{u}) :=\eta_2$ onto $\bm{v}$ of $h$ (the symmetric condition).
\end{enumerate}
Now we can state
\begin{thm}
Assume conditions 1-5 above. Also assume that the linear operator
\[
T^*(f) := \sup_{\eta_1} \frac{1}{\eta_1^{\phi(d)}}\sum_{h(\bm{u},\eta_1)}\left|f (\bm{x}-\bm{u}) \right|
\]
is bounded on $l^p$ for all $p> p_d$. Then the bilinear operator $T^*(f,g)$ is bounded on $l^{p}(\mathbb{Z}^d)\times l^{q}(\mathbb{Z}^d) \to l^{r}(\mathbb{Z}^d)$, $\frac{1}{p} + \frac{1}{q} \geq \frac{1}{r}$, $r>\frac{p_d}{p_d + 1}$, $p,q> 1$.
\end{thm}
Analogous $\ell$-linear extensions are also possible. For instance, for the annular averages considered above one has $p_d = \frac{d}{d-2+2\theta}$, so that $\frac{p_d}{p_d+1} = \frac{d}{2d-2+2\theta}$, recovering the bilinear range obtained there. This result underscores that the way that degree $k$ homogeneous (positive) surfaces interact with the additive $\ell$-linear structure is integral to the slicing method.
\bibliographystyle{amsplain}
| {
"timestamp": "2021-08-10T02:37:57",
"yymm": "2108",
"arxiv_id": "2108.04147",
"language": "en",
"url": "https://arxiv.org/abs/2108.04147",
"abstract": "Many multilinear discrete operators are primed for pointwise decomposition; such decompositions give structural information but also an essentially optimal range of bounds. We study the (continuous) slicing method of Jeong and Lee -- which when debuted instantly gave sharp multilinear operator bounds -- in the discrete setting. Via several examples, number theoretic connections, pointed commentary, and a unified theory we hope that this useful technique will lead to further applications. This work generalizes, and was inspired by, the author's work with Palsson on a special case.",
"subjects": "Classical Analysis and ODEs (math.CA); Number Theory (math.NT)",
"title": "Discrete multilinear maximal functions and number theory",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969708496457,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.709904419671581
} |
https://arxiv.org/abs/2111.00456 | A generalized Cantor theorem in ZF | It is proved in $\mathsf{ZF}$ (without the axiom of choice) that, for all infinite sets $M$, there are no surjections from $\omega\times M$ onto $\mathscr{P}(M)$. | \section{Introduction}\label{s014}
Throughout this paper, we shall work in $\mathsf{ZF}$
(i.e., the Zermelo--Fraenkel set theory without the axiom of choice).
In~\cite{Cantor1892}, Cantor proves that, for all sets~$M$, there are no injections from $\scrP(M)$ into~$M$,
from which it follows easily that, for all sets~$M$, there are no surjections from $M$ onto~$\scrP(M)$.
In~\cite{Specker1954}, Specker proves a generalization of Cantor's theorem,
which states that, for all infinite sets~$M$, there are no injections from $\scrP(M)$ into $M^2$.
In~\cite{Forster2003}, Forster proves another generalization of Cantor's theorem,
which states that, for all infinite sets~$M$, there are no finite-to-one functions from $\scrP(M)$ to~$M$.
In~\cite{Shen2017,Shen2020,Shen2021}, several further generalizations of these results are proved,
among which are the following:
\begin{enumerate}[label=\upshape(\roman*), leftmargin=*, widest=iii]
\item For all infinite sets $M$ and all $n\in\omega$, there are no finite-to-one functions from $\scrP(M)$ to $M^n$ or to~$[M]^n$.\label{s012}
\item For all infinite sets $M$, there are no finite-to-one functions from $\scrP(M)$ to $\omega\times M$.
\item For all infinite sets $M$ and all sets $N$, if there is a finite-to-one function from $N$ to~$M$,
then there are no surjections from $N$ onto~$\scrP(M)$.\label{s013}
\end{enumerate}
For a set~$M$, let $\fin(M)$ denote the set of all finite subsets of~$M$.
Although it can be proved in $\mathsf{ZF}$ that, for all infinite sets~$M$,
there are no injections from $\scrP(M)$ into $\fin(M)$ (cf.~\cite[Theorem~3]{HalbeisenShelah1994}),
the existence of an infinite set $A$ such that there is a finite-to-one function from $\scrP(A)$ to $\fin(A)$
and such that there is a surjection from $\fin(A)$ onto $\scrP(A)$ is consistent with $\mathsf{ZF}$
(cf.~\cite[Remark~3.10]{Shen2017} and \cite[Theorem~1]{HalbeisenShelah1994}).
Now it is natural to ask whether the existence of an infinite set $A$ such that
there is a surjection from $A^2$ onto $\scrP(A)$ or from $[A]^2$ onto $\scrP(A)$
is consistent with~$\mathsf{ZF}$, and these questions are originally asked in~\cite{Truss1973}
and in~\cite{Halbeisen2018} respectively. In~\cite[Question~5.6]{ShenYuan2020}, it is asked whether
the existence of an infinite set $A$ such that there is a surjection from $\omega\times A$ onto $\scrP(A)$
is consistent with~$\mathsf{ZF}$, and it is noted there that an affirmative answer to this question
would yield affirmative answers to the above two questions. In this paper,
we give a negative answer to this question; that is, we prove in $\mathsf{ZF}$ that,
for all infinite sets~$M$, there are no surjections from $\omega\times M$ onto~$\scrP(M)$.
We also obtain some related results.
\section{Preliminaries}
In this section, we indicate briefly our use of some terminology and notation.
For a function~$f$, we use $\dom(f)$ for the domain of~$f$, $\ran(f)$ for the range of~$f$,
$f[A]$ for the image of $A$ under~$f$, $f^{-1}[A]$ for the inverse image of $A$ under~$f$,
and $f{\upharpoonright} A$ for the restriction of $f$ to~$A$.
For functions $f$ and~$g$, we use $g\circ f$ for the composition of $g$ and~$f$.
We write $f:A\to B$ to express that $f$ is a function from $A$ to $B$,
and $f:A\twoheadrightarrow B$ to express that $f$ is a function from $A$ \emph{onto} $B$.
\begin{definition}
Let $A,B$ be arbitrary sets.
\begin{enumerate}[leftmargin=*, widest=1]
\item $A\preccurlyeq B$ means that there exists an injection from $A$ into~$B$.
\item $A\preccurlyeq^\ast B$ means that there exists a surjection from a subset of $B$ onto~$A$.
\item $\fin(A)$ denotes the set of all finite subsets of~$A$.
\item $\scrPI(A)$ denotes the set of all infinite subsets of~$A$.
\end{enumerate}
\end{definition}
Clearly, if $A\preccurlyeq B$ then $A\preccurlyeq^\ast B$,
and if $A\preccurlyeq^\ast B$ then $\scrP(A)\preccurlyeq\scrP(B)$ and $\scrPI(A)\preccurlyeq\scrPI(B)$.
\begin{fact}\label{s001}
$\omega_1\preccurlyeq^\ast\scrP(\omega)$.
\end{fact}
\begin{proof}
Cf.~\cite[Theorem~5.11]{Halbeisen2017}.
\end{proof}
In the sequel, we shall frequently use expressions like ``one can explicitly define'' in our formulations,
which is illustrated by the following example.
\begin{theorem}[Cantor-Bernstein]\label{cbt}
From injections $f:A\to B$ and $g:B\to A$,
one can explicitly define a bijection $h:A\twoheadrightarrow B$.
\end{theorem}
\begin{proof}
Cf.~\cite[III.2.8]{Levy1979}.
\end{proof}
\noindent
Formally, Theorem~\ref{cbt} states that one can find a class function $H$ without free variables
such that, whenever $f$ is an injection from $A$ into $B$ and $g$ is an injection from $B$ into~$A$,
$H(f,g)$ is defined and is a bijection of $A$ onto~$B$.
We say that a set $M$ is \emph{Dedekind infinite} if there exists a bijection from $M$ onto some proper subset of~$M$;
otherwise $M$ is \emph{Dedekind finite}. It is well-known that $M$ is Dedekind infinite if and only if there exists
an injection from $\omega$ into~$M$. We say that a set $M$ is \emph{power Dedekind infinite} if the power set of $M$
is Dedekind infinite; otherwise $M$ is \emph{power Dedekind finite}. Recall Kuratowski's celebrated theorem:
\begin{theorem}[Kuratowski]\label{kurt}
A set $M$ is power Dedekind infinite if and only if there exists a surjection from $M$ onto~$\omega$.
\end{theorem}
\begin{proof}
Cf.~\cite[Proposition~5.4]{Halbeisen2017}.
\end{proof}
\section{The main theorem}
In this section, we prove our main theorem, which states that, for all infinite sets~$M$,
there are no surjections from $\omega\times M$ onto~$\scrP(M)$. We first recall the proof of Cantor's theorem.
\begin{theorem}[Cantor]\label{cnt}
From a function $f:M\to\scrP(M)$, one can explicitly define an $A\in\scrP(M)\setminus\ran(f)$.
\end{theorem}
\begin{proof}
It suffices to take $A=\{x\in\dom(f)\mid x\notin f(x)\}$.
\end{proof}
\begin{lemma}\label{s002}
From an infinite ordinal $\alpha$, one can explicitly define an injection $f:\alpha\times\alpha\to\alpha$.
\end{lemma}
\begin{proof}
Cf.~\cite[2.1]{Specker1954} or \cite[IV.2.24]{Levy1979}.
\end{proof}
\begin{lemma}\label{s003}
From an infinite ordinal~$\alpha$, one can explicitly define an injection $f:\fin(\alpha)\to\alpha$.
\end{lemma}
\begin{proof}
Cf.~\cite[Theorem~5.19]{Halbeisen2017}.
\end{proof}
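For orientation, in the special countable case $\alpha=\omega$ such injections can be written down very concretely; the following Python sketch (our own illustration, not the constructions referred to in the proofs of Lemmas~\ref{s002} and~\ref{s003}) checks the Cantor pairing map $\omega\times\omega\to\omega$ and the binary encoding $\fin(\omega)\to\omega$ on samples.
\begin{verbatim}
# Our own illustration for alpha = omega: explicit injections omega x omega
# -> omega (Cantor pairing) and fin(omega) -> omega (binary encoding).
def pair(n, m):                       # injective (indeed bijective) on omega^2
    return (n + m) * (n + m + 1) // 2 + m

def encode(A):                        # injective on finite subsets of omega
    return sum(2 ** a for a in A)

grid = [(n, m) for n in range(60) for m in range(60)]
assert len({pair(n, m) for n, m in grid}) == len(grid)

samples = [frozenset(s) for s in ([], [0], [1], [0, 1], [2, 5], [0, 2, 5])]
assert len({encode(A) for A in samples}) == len(samples)
print("sample injectivity checks passed")
\end{verbatim}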
\begin{lemma}\label{s004}
From an infinite ordinal~$\alpha$, one can explicitly define a bijection $f:\omega^\alpha\twoheadrightarrow\alpha$.
\end{lemma}
\begin{proof}
Let $\alpha$ be an infinite ordinal. Let
\[
\exp(\omega,\alpha)=\bigl\{t:\alpha\to\omega\bigm|\{\gamma<\alpha\mid t(\gamma)\neq 0\}\text{ is finite}\bigr\}
\]
and let $r$ be the right lexicographic ordering of $\exp(\omega,\alpha)$.
It is easy to verify that $r$ well-orders $\exp(\omega,\alpha)$ and
the order type of $\langle\exp(\omega,\alpha),r\rangle$ is $\omega^\alpha$ (cf.~\cite[IV.2.10]{Levy1979}).
Let $g$ be the unique isomorphism of $\langle\omega^\alpha,\in\rangle$ onto $\langle\exp(\omega,\alpha),r\rangle$.
Let $h$ be the function on $\exp(\omega,\alpha)$ defined by
\[
h(t)=t{\upharpoonright}\{\gamma<\alpha\mid t(\gamma)\neq 0\}.
\]
Then $h$ is an injection from $\exp(\omega,\alpha)$ into $\fin(\alpha\times\omega)$.
By Lemmas~\ref{s002} and \ref{s003}, we can explicitly define an injection $p:\fin(\alpha\times\omega)\to\alpha$.
Therefore, $p\circ h\circ g$ is an injection from $\omega^\alpha$ into~$\alpha$.
Now, since the function that maps each $\gamma<\alpha$ to $\omega^\gamma$ is an injection from $\alpha$ into~$\omega^\alpha$,
it follows from Theorem~\ref{cbt} that we can explicitly define a bijection $f:\omega^\alpha\twoheadrightarrow\alpha$.
\end{proof}
\begin{fact}\label{s005}
If $A=B\cup C$ is a set of ordinals which is of order type~$\omega^\delta$,
then either $B$ or $C$ has order type~$\omega^\delta$.
\end{fact}
\begin{proof}
Cf.~\cite[IV.2.22(vii)]{Levy1979}.
\end{proof}
The key step of our proof is the following lemma.
\begin{lemma}\label{s006}
From a surjection $f:\omega\times M\twoheadrightarrow\alpha$, where $\alpha$ is an uncountable ordinal,
one can explicitly define a surjection from $M$ onto~$\alpha$.
\end{lemma}
\begin{proof}
Let $\alpha$ be an uncountable ordinal and let $f$ be a surjection from $\omega\times M$ onto~$\alpha$.
For each $n\in\omega$, let $A_n=f[\{n\}\times M]$, let $\delta_n$ be the order type of~$A_n$,
and let $g_n$ be the unique isomorphism of $\delta_n$ onto~$A_n$.
Let $\delta=\bigcup_{n\in\omega}\delta_n$ and let $g$ be the function on $\omega\times\delta$ defined by
\[
g(n,\gamma)=
\begin{cases}
g_n(\gamma) & \text{if $\gamma<\delta_n$,}\\
0 & \text{otherwise.}
\end{cases}
\]
Then $g$ is a surjection from $\omega\times\delta$ onto~$\alpha$, which implies that $\delta$ is also an uncountable ordinal.
Hence, it follows from Lemma~\ref{s002} that we can explicitly define a surjection from $\delta$ onto~$\alpha$.
So it suffices to explicitly define a surjection from $M$ onto~$\delta$. We consider the following two cases:
\textsc{Case}~1. There exists an $n\in\omega$ such that $\delta=\delta_n$.
Now the function that maps each $x\in M$ to $g_k^{-1}(f(k,x))$ is a surjection of $M$ onto~$\delta$,
where $k$ is the least natural number such that $\delta=\delta_k$.
\textsc{Case}~2. Otherwise. Then $\delta$ is a limit ordinal. Since $\delta>\omega$,
without loss of generality, assume that $\delta_n$ is infinite for all $n\in\omega$.
For each $n\in\omega$, let $\beta_n=\omega^{\delta_n}$. By Lemma~\ref{s004},
for each $n\in\omega$, we can explicitly define a bijection $p_n:\beta_n\twoheadrightarrow\delta_n$.
For each $n\in\omega$, let $h_n$ be the function on $M$ defined by $h_n(x)=p_n^{-1}(g_n^{-1}(f(n,x)))$.
Then, for any $n\in\omega$, $h_n$ is a surjection from $M$ onto~$\beta_n$.
Let $\beta=\omega^\delta$. Clearly, $\beta=\bigcup_{n\in\omega}\beta_n$.
By Lemma~\ref{s004}, it suffices to explicitly define a surjection $h:M\to\beta$.
We first define by recursion two sequences $\langle B_n\rangle_{n\in\omega}$
and $\langle q_n\rangle_{n\in\omega}$ as follows. Let $B_0=M$.
Let $n\in\omega$ and assume that $B_n\subseteq M$ has been defined so that
\begin{equation}\label{s007}
\beta=\bigcup\bigl\{\beta_m\bigm|h_m[B_n]\text{ has order type }\beta_m\bigr\}.
\end{equation}
We define a subset $B_{n+1}$ of $B_n$ and a surjection $q_n:B_n\setminus B_{n+1}\twoheadrightarrow\beta_n$ as follows.
Let $k$ be the least natural number such that $\beta_n<\beta_k$ and such that $h_k[B_n]$ has order type~$\beta_k$.
Let $t$ be the unique isomorphism of $h_k[B_n]$ onto~$\beta_k$. Let $D=\{x\in B_n\mid t(h_k(x))\in\beta_n\}$.
Note that $\beta_n\cdot2<\beta_k$. Now, if \eqref{s007} holds with $B_n$ replaced by~$D$,
we define $B_{n+1}=D$ and let $q_n$ be the function on $B_n\setminus D$ defined by
\[
q_n(x)=
\begin{cases}
\text{the unique $\gamma<\beta_n$ such that $t(h_k(x))=\beta_n+\gamma$} & \text{if $t(h_k(x))<\beta_n\cdot2$,}\\
0 & \text{otherwise.}
\end{cases}
\]
Otherwise, it follows from \eqref{s007} and Fact~\ref{s005} that \eqref{s007} holds with $B_n$ replaced by $B_n\setminus D$,
and then we define $B_{n+1}=B_n\setminus D$ and let $q_n$ be the function on $D$ defined by $q_n(x)=t(h_k(x))$.
Clearly, in either case, $B_{n+1}\subseteq B_n$, \eqref{s007} holds with $B_n$ replaced by~$B_{n+1}$,
and $q_n$ is a surjection from $B_n\setminus B_{n+1}$ onto~$\beta_n$.
Now, it suffices to define $h=\bigcup_{n\in\omega}q_n\cup(\bigcap_{n\in\omega}B_n\times\{0\})$.
\end{proof}
\begin{lemma}\label{s008}
For all infinite sets $M$ and all sets~$N$, if there is a finite-to-one function from $N$ to~$M$,
then there are no surjections from $N$ onto~$\scrP(M)$.
\end{lemma}
\begin{proof}
Cf.~\cite[Theorem~5.3]{Shen2017}.
\end{proof}
Now we are ready to prove our main theorem.
\begin{theorem}\label{gct}
For all infinite sets~$M$, there are no surjections from $\omega\times M$ onto~$\scrP(M)$.
\end{theorem}
\begin{proof}
Assume toward a contradiction that there exist an infinite set $M$
and a surjection $\Phi:\omega\times M\twoheadrightarrow\scrP(M)$.
We first prove that $M$ is power Dedekind infinite.
Clearly, there is a surjection $\Psi\subseteq\Phi$ from a subset of $\omega\times M$
onto $\scrP(M)$ such that, for all $x\in M$, $\Psi{\upharpoonright}(\omega\times\{x\})$ is injective.
If $M$ is power Dedekind finite, then $\dom(\Psi)\cap(\omega\times\{x\})$ is finite for all $x\in M$,
and thus there exists a finite-to-one function from $\dom(\Psi)$ to~$M$,
contradicting Lemma~\ref{s008}. Hence, $M$ is power Dedekind infinite.
Now, it follows from Theorem~\ref{kurt} that $\omega\preccurlyeq^\ast M$,
and thus, by Fact~\ref{s001}, $\omega_1\preccurlyeq^\ast\scrP(\omega)\preccurlyeq\scrP(M)\preccurlyeq^\ast\omega\times M$,
which implies that $\omega_1\preccurlyeq^\ast M$ by Lemma~\ref{s006} and hence $\omega_1\preccurlyeq\scrP(M)$.
Let $h$ be an injection from $\omega_1$ into~$\scrP(M)$. In what follows,
we get a contradiction by constructing by recursion an injection $H$
from $\mathrm{Ord}$ (the class of all ordinals) into~$\scrP(M)$.
For $\gamma<\omega_1$, we take $H(\gamma)=h(\gamma)$.
Now, we assume that $\alpha$ is an uncountable ordinal and that $H{\upharpoonright}\alpha$ is an injection from $\alpha$ into~$\scrP(M)$.
Then $(H{\upharpoonright}\alpha)^{-1}\circ\Phi$ is a surjection from a subset of $\omega\times M$ onto $\alpha$
and hence can be extended by zero to a surjection $f:\omega\times M\twoheadrightarrow\alpha$.
By Lemma~\ref{s006}, $f$ explicitly provides a surjection $g:M\twoheadrightarrow\alpha$.
Since $(H{\upharpoonright}\alpha)\circ g$ is a surjection from $M$ onto~$H[\alpha]$,
it follows from Theorem~\ref{cnt} that we can explicitly define an
$H(\alpha)\in\scrP(M)\setminus H[\alpha]$ from $H{\upharpoonright}\alpha$ (and~$\Phi$).
\end{proof}
\section{A further generalization}
In~\cite{Kirmayer1981}, Kirmayer proves that, for all infinite sets~$M$,
there are no surjections from $M$ onto~$\scrPI(M)$. In this section,
we generalize this result by showing that, for all infinite sets~$M$,
there are no surjections from $\omega\times M$ onto~$\scrPI(M)$,
which is also a generalization of Theorem~\ref{gct}.
The proof is similar to that of Theorem~\ref{gct}, but first we have to
prove that Lemma~\ref{s008} holds with $\scrP(M)$ replaced by~$\scrPI(M)$.
\begin{lemma}\label{s009}
From an infinite ordinal~$\alpha$, one can explicitly define an injection $f:\scrP(\alpha)\to\scrPI(\alpha)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{s002}, we can explicitly define an injection $p:\alpha\times\alpha\to\alpha$.
Let $f$ be the function on $\scrP(\alpha)$ defined by
\[
f(A)=
\begin{cases}
p[A\times\{0\}] & \text{if $A$ is infinite,}\\
p[(\alpha\setminus A)\times\{1\}] & \text{otherwise.}
\end{cases}
\]
Then it is easy to see that $f$ is an injection from $\scrP(\alpha)$ into~$\scrPI(\alpha)$.
\end{proof}
\begin{lemma}\label{s010}
From a set $M$, a finite-to-one function $f:N\to M$, and a surjection $g:N\twoheadrightarrow\alpha$,
where $\alpha$ is an infinite ordinal, one can explicitly define a surjection $h:M\twoheadrightarrow\alpha$.
\end{lemma}
\begin{proof}
Cf.~\cite[Lemma~5.2]{Shen2017}.
\end{proof}
\begin{lemma}\label{s011}
For all infinite sets $M$ and all sets~$N$, if there is a finite-to-one function from $N$ to~$M$,
then there are no surjections from $N$ onto~$\scrPI(M)$.
\end{lemma}
\begin{proof}
Assume toward a contradiction that there exist an infinite set $M$ and a set $N$ such that
there exist a finite-to-one function $f:N\to M$ and a surjection $\Phi:N\twoheadrightarrow\scrPI(M)$.
Clearly, the function that maps each cofinite subset $A$ of $M$ to the cardinality of $M\setminus A$
is a surjection from a subset of $\scrPI(M)$ onto~$\omega$,
and hence $\omega\preccurlyeq^\ast\scrPI(M)\preccurlyeq^\ast N$,
which implies that $\omega\preccurlyeq^\ast M$ by Lemma~\ref{s010}.
Thus $\omega\preccurlyeq\scrPI(\omega)\preccurlyeq\scrPI(M)$.
Let $h$ be an injection from $\omega$ into~$\scrPI(M)$.
In what follows, we get a contradiction by constructing by recursion
an injection $H$ from $\mathrm{Ord}$ into~$\scrPI(M)$.
For $n\in\omega$, we take $H(n)=h(n)$. Now, we assume that $\alpha$ is an infinite
ordinal and that $H{\upharpoonright}\alpha$ is an injection from $\alpha$ into~$\scrPI(M)$.
Then $(H{\upharpoonright}\alpha)^{-1}\circ\Phi$ is a surjection from a subset of $N$ onto $\alpha$
and hence can be extended by zero to a surjection $g:N\twoheadrightarrow\alpha$. By Lemma~\ref{s010},
from $M$, $f$, and~$g$, we can explicitly define a surjection $p:M\twoheadrightarrow\alpha$.
Then the function $q$ on $\scrPI(\alpha)$ defined by $q(A)=p^{-1}[A]$ is an injection
from $\scrPI(\alpha)$ into~$\scrPI(M)$, and thus it follows from Lemma~\ref{s009}
that we can explicitly define an injection $t:\scrP(\alpha)\to\scrPI(M)$.
Then $t^{-1}\circ(H{\upharpoonright}\alpha)$ is a bijection from a subset of $\alpha$ onto $t^{-1}[H[\alpha]]$
and thus can be extended by zero to a function $u:\alpha\to\scrP(\alpha)$.
By Theorem~\ref{cnt}, we can explicitly define a $B\in\scrP(\alpha)\setminus\ran(u)$.
Since $t^{-1}[H[\alpha]]\subseteq\ran(u)$, it follows that $B\notin t^{-1}[H[\alpha]]$,
which implies that $t(B)\notin H[\alpha]$. Now, it suffices to define $H(\alpha)=t(B)$.
\end{proof}
We are now in a position to prove the result mentioned at the beginning of this section.
\begin{theorem}\label{gkt}
For all infinite sets~$M$, there are no surjections from $\omega\times M$ onto~$\scrPI(M)$.
\end{theorem}
\begin{proof}
We proceed along the lines of the proof of Theorem~\ref{gct}.
Assume toward a contradiction that there exist an infinite set $M$
and a surjection $\Phi:\omega\times M\twoheadrightarrow\scrPI(M)$.
We first prove that $M$ is power Dedekind infinite.
Clearly, there is a surjection $\Psi\subseteq\Phi$ from a subset of $\omega\times M$
onto $\scrPI(M)$ such that, for all $x\in M$, $\Psi{\upharpoonright}(\omega\times\{x\})$ is injective.
If $M$ is power Dedekind finite, then $\dom(\Psi)\cap(\omega\times\{x\})$ is finite for all $x\in M$,
and thus there exists a finite-to-one function from $\dom(\Psi)$ to~$M$,
contradicting Lemma~\ref{s011}. Hence, $M$ is power Dedekind infinite.
Now, it follows from Theorem~\ref{kurt} that $\omega\preccurlyeq^\ast M$,
and thus, by Fact~\ref{s001} and Lemma~\ref{s009},
$\omega_1\preccurlyeq^\ast\scrP(\omega)\preccurlyeq\scrPI(\omega)\preccurlyeq\scrPI(M)\preccurlyeq^\ast\omega\times M$,
which implies that $\omega_1\preccurlyeq^\ast M$ by Lemma~\ref{s006}
and hence $\omega_1\preccurlyeq\scrPI(\omega_1)\preccurlyeq\scrPI(M)$.
Let $h$ be an injection from $\omega_1$ into~$\scrPI(M)$. In what follows,
we get a contradiction by constructing by recursion
an injection $H$ from $\mathrm{Ord}$ into~$\scrPI(M)$.
For $\gamma<\omega_1$, we take $H(\gamma)=h(\gamma)$.
Now, we assume that $\alpha$ is an uncountable ordinal and that $H{\upharpoonright}\alpha$ is an injection from $\alpha$ into~$\scrPI(M)$.
Then $(H{\upharpoonright}\alpha)^{-1}\circ\Phi$ is a surjection from a subset of $\omega\times M$ onto $\alpha$
and hence can be extended by zero to a surjection $f:\omega\times M\twoheadrightarrow\alpha$.
By Lemma~\ref{s006}, $f$ explicitly provides a surjection $g:M\twoheadrightarrow\alpha$.
Then the function $q$ on $\scrPI(\alpha)$ defined by $q(A)=g^{-1}[A]$ is an injection
from $\scrPI(\alpha)$ into~$\scrPI(M)$, and thus it follows from Lemma~\ref{s009}
that we can explicitly define an injection $t:\scrP(\alpha)\to\scrPI(M)$.
Then $t^{-1}\circ(H{\upharpoonright}\alpha)$ is a bijection from a subset of $\alpha$ onto $t^{-1}[H[\alpha]]$
and thus can be extended by zero to a function $u:\alpha\to\scrP(\alpha)$.
By Theorem~\ref{cnt}, we can explicitly define a $B\in\scrP(\alpha)\setminus\ran(u)$.
Since $t^{-1}[H[\alpha]]\subseteq\ran(u)$, it follows that $B\notin t^{-1}[H[\alpha]]$,
which implies that $t(B)\notin H[\alpha]$. Now, it suffices to define $H(\alpha)=t(B)$.
\end{proof}
Using the method presented here, we can also show that the statements \ref{s012}--\ref{s013} in Section~\ref{s014}
hold with $\scrP(M)$ replaced by~$\scrPI(M)$ (Lemma~\ref{s011} is just the statement \ref{s013} for~$\scrPI(M)$).
We shall omit the details.
The questions whether the existence of an infinite set $A$ such that
there is a surjection from $A^2$ onto $\scrP(A)$ or from $[A]^2$ onto $\scrP(A)$
is consistent with~$\mathsf{ZF}$ are left open.
\subsection*{Acknowledgements}
Peng was partially supported by NSFC No.~11901562 and the Hundred Talents Program of the Chinese Academy of Sciences.
Shen was partially supported by NSFC No.~12101466.
\normalsize
| {
"timestamp": "2021-11-02T01:18:36",
"yymm": "2111",
"arxiv_id": "2111.00456",
"language": "en",
"url": "https://arxiv.org/abs/2111.00456",
"abstract": "It is proved in $\\mathsf{ZF}$ (without the axiom of choice) that, for all infinite sets $M$, there are no surjections from $\\omega\\times M$ onto $\\mathscr{P}(M)$.",
"subjects": "Logic (math.LO)",
"title": "A generalized Cantor theorem in ZF",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969684454967,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7099044179364028
} |
https://arxiv.org/abs/2203.14535 | Asymptotic analysis of k-hop connectivity in the 1D unit disk random graph model | We propose an algorithm for the closed-form recursive computation of joint moments and cumulants of all orders for k-hop counts in the 1D unit disk random graph model with Poisson distributed vertices. Our approach uses decompositions of k-hop counts into multiple Poisson stochastic integrals. As a consequence, using the Stein method we derive Berry-Esseen bounds for the asymptotic convergence of renormalized k-hop path counts to the normal distribution as the density of Poisson vertices tends to infinity. Computer codes for the recursive symbolic computation of moments and cumulants are provided in appendix. | \section{Introduction}
We consider the statistics and asymptotic behavior of
$k$-hop connectivity of the one-dimensional
unit disk random connection model with connection radius $r>0$
on a finite interval, see \cite{drory1997}.
Random geometric graphs
can model physical systems arising in e.g.
wireless networks,
complex networks,
and statistical mechanics.
\medskip
Early results in the normal approximation of subgraph counts in random graphs
can be traced to the development of
the Erd{\H o}s-R\'enyi random graph $\mathbb{G}_n(p)$
\cite{G}, \cite{ER}.
Necessary and sufficient conditions for the asymptotic normality
of the renormalized count of graphs in $\mathbb{G}_n(p_n)$
that are isomorphic to a fixed graph $G$
have been obtained in \cite{rucinski},
and made more precise in \cite{BKR} by
the derivation of explicit convergence rates in the Wasserstein distance,
see also \cite{BarbourHolstJanson}
for bounds on the total variation distance of subgraph counts
to the Poisson distribution.
Such bounds have been strengthened in \cite{reichenbachsAoP} and \cite{roellin2}
using the Kolmogorov distance in the case of triangle counts.
In \cite{PS2}, those results have been extended to any subgraph $G$ using the
Kolmogorov distance.
\medskip
In this paper, we focus on the counting of $k$-hops in the one-dimensional
unit disk random connection model with connection radius $r>0$
on a finite interval, see \cite{drory1997}.
See also \cite{penrosebk} for the more general setting of random geometric graphs
and \cite{wilsher} for the soft connection model.
Here, the nodes are distributed on $[0,kr]$, $k\geq 1$, according to a
Poisson point process $(N_t)_{t\in [0,kr]}$ with
intensity $\lambda (ds)$ of the form
\begin{equation}
\label{fjkldf1}
\lambda (ds) = \sum_{l=1}^k {\bf 1}_{ ( (l-1)r,lr]}(s) \lambda_l((l-1)r + ds),
\end{equation}
where $\lambda_1(ds)=\lambda_1(s) ds, \ldots , \lambda_k(ds)=\lambda_k(s)ds$ are
absolutely continuous
intensity measures on $[0,r]$, $l=1,\ldots , k$, as illustrated in the next graph.
\begin{center}
\begin{tikzpicture}
\begin{axis}[
height=2cm,
xmin=0, xmax=50,
width=1\textwidth,
axis x line=bottom,
axis line style={-},
hide y axis,
ymin=0,ymax=5,
xticklabels={0,0,$r$,$2r$,$3r$,,,,,,$(k-1)r$,$kr$},
scatter/classes={
a={mark=o,draw=black}}
]
\addplot[scatter,only marks,
mark size = 2pt,
fill = black,
scatter src=explicit symbolic]
table {
3 0
6 0
9 0
12 0
16 0
18 0
28 0
35 0
37 0
42 0
46 0
58 0
};
\end{axis}
\end{tikzpicture}
\end{center}
\vskip-0.3cm
\noindent
We are interested in the count $\sigma_k ( t )$ of
$k$-hops between additional nodes located respectively at $0$ and $t$ for some
$t \in [0,kr]$, where two nodes $s,t\in [0,kr]$ are connected if and only
if $|t-s|\leq r$.
In this model,
the distribution of $k$-hop counts has been
expressed by a combinatorial approach in \cite{giles-privault}.
\begin{figure}[H]
\centering
\includegraphics[width=0.94\linewidth]{figure1.pdf}
\caption{Graphs of three $5$-hop paths linking $x=0$ to $y=4.5$ with $r=1$.}
\label{fklds}
\end{figure}
The moments of $k$-hop counts in the random-connection model
have been expressed in \cite{prkhp} as summations over
non-flat partition diagrams,
however, those expressions are difficult to apply
to the derivation of explicit bounds.
In this paper we use a different approach based on the representation of
$k$-hop counts in terms of multiple Poisson
stochastic integrals, which allows us to derive explicit expressions for
moments and cumulants of all orders by recursive formulas.
\medskip
In Proposition~\ref{p1}
we provide a combinatorial expression for
the computation of the joint moments
of $k$-hop counts at different endpoint locations
within $[(k-1)r,kr]$.
This expression is then specialized to the computation
of variance in Proposition~\ref{djklfs2}
and Corollary~\ref{djklfs2-2}.
\medskip
In Proposition~\ref{433},
a recursive algorithm for the closed-form computation of
joint moments is derived by representing
$k$-hop counts as multiple Poisson stochastic integrals.
A similar recursion formula is derived in Proposition~\ref{fjhkds}
for the computation of joint cumulants,
which yields a cumulant bound in Proposition~\ref{1fdjkl}.
\medskip
As a consequence, we obtain the bound
$$
\frac{
c_{k,n}^{(\lambda)} (kr-t;\ldots ; kr-t)}{ (c_{k,2}^{(\lambda)}(kr-t;kr-t) )^{n/2}}
\leq
(n!)^{k-2} O ( \lambda^{1-n/2} )
, \qquad n \geq 2,
$$
on the cumulant $c_{k,n}^{(\lambda)} (kr-t;\ldots ; kr-t)$ of order $n\geq 2$ of
$\sigma_k( t )$, for $t\in [(k-1)r, kr)$.
Denoting by ${\mathord{\mathbb P}}_\lambda$ the distribution of the 1D unit disk graph
with constant Poisson intensity $\lambda >0$, this implies the Berry-Esseen bound
$$
\sup_{x\in {\mathord{\mathbb R}}} | {\mathord{\mathbb P}}_\lambda ( \widetilde{\sigma}_k (t) \leq x ) - {\mathord{\mathbb P}}({\cal N} \leq x ) |
\leq \frac{C(k,r)}{\sqrt{\lambda}}
$$
for the convergence of the renormalized $k$-hop count
$$
\widetilde{\sigma}_k (t):= \frac{\sigma_k (t)- \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [\sigma_k(t)]}{\sqrt{{\mathrm{{\rm Var}}}_\lambda [ \sigma_k(t)]}}
$$
to the normal distribution ${\cal N}$
as $\lambda$ tends to infinity, see Proposition~\ref{fsklf34}.
A bound of same order is also obtained in Proposition~\ref{fsklf34} using the Wasserstein distance.
\medskip
The content of this paper can be summarized as follows.
In Section~\ref{sec2} we show that $k$-hop counts can be represented in
terms of multiple Poisson stochastic integrals.
In Section~\ref{sec3} we specialize those expressions when the $k$-hops
are made of a single node per cell.
Section~\ref{sec4} presents moment expressions in terms of sums over non-flat
partitions based on results of \cite{prkhp}.
Sections~\ref{sec5} and \ref{sec6} develop recursive expressions for
the explicit calculation of joint moments and cumulants of any order.
In Sections~\ref{sec7} and \ref{sec8} we derive moment and cumulant bounds with
application to Berry-Esseen rates for the convergence of normalized $k$-hop counts
to the normal distribution using the Stein method.
The appendices contain specific moment and cumulant computations,
background on moment computations for Poisson point processes
based on \cite{momentpoi, prob},
and Mathematica codes.
\subsubsection*{Set partitions, moments, cumulants, and M\"obius inversion}
This section gathers some preliminary facts
on the relationships between joint moments, cumulants, and sums over partitions
that will be useful in the sequel.
We let $\Pi[n]$ denote the set of partitions of $\{1,\ldots , n\}$,
and given a symmetric function $f(\pi ) = f(\pi_1,\ldots , \pi_l)$ where
$\pi = \{\pi_1,\ldots , \pi_l \} \in \Pi [n]$ is a partition of $\{1,\ldots , n\}$,
$n\geq 1$, we will use the notation
$$
\sum_{\pi \in \Pi [n]} f(\pi) = \sum_{l=1}^n \sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}} f(\pi_1,\ldots , \pi_l).
$$
We will also use the M\"obius transform $\widehat{G}$ of a function $G$ on
partitions $\pi$ of $\{1,\ldots ,n\}$, defined as
\begin{equation}
\label{g}
\widehat{G} (\sigma ) := \sum_{\pi \preceq \sigma } G(\pi ),
\qquad \sigma \in \Pi[n],
\end{equation}
where the sum \eqref{g} runs over all partitions $\pi$ of
$\{1,\ldots ,n \}$ that are {finer} than $\sigma$.
The M\"obius inversion formula,
see e.g. \cite{rotabk} or \S~2.5 of \cite{peccatitaqqu},
states that the function $G$ in \eqref{g}
can be recovered from its M\"obius transform
$\widehat{G}$ as
\begin{equation}
\label{bpi}
G(\pi) = \sum_{ \sigma \preceq \pi } \mu ( \sigma , \pi ) \widehat{G} (\sigma),
\end{equation}
where $\mu ( \sigma , \pi )$
is the M\"obius function, with
$\mu ( \sigma , \widehat{\bf 1} ) = (|\sigma |-1)! (-1)^{|\sigma |-1}$,
where $|\sigma |$ denotes the number of blocks of the
partition $\sigma \in \Pi [n]$ and
$\widehat{\bf 1} : = \{\{1,\ldots , n\}\}$ is
the one-block partition of $\{1,\ldots , n\}$.
By \eqref{g} and \eqref{bpi} we also have the relation
\begin{equation}
\label{fjlksa1}
G(\pi)
= \sum_{ \sigma \preceq \pi } \mu ( \sigma , \pi )
\sum_{\eta \preceq \sigma } G(\eta )
= \sum_{ \eta \preceq \sigma \preceq \pi }
\mu ( \sigma , \pi ) G(\eta ),
\qquad \pi \in \Pi[n].
\end{equation}
Given $X=(X_1,\ldots , X_n)$ a random vector,
the joint {cumulants} of order $(l_1,\ldots , l_n)$
are the coefficients
$\kappa \big( X_1^{l_1} ; \ldots ; X_n^{l_n} \big)$
appearing in the log-moment generating (MGF) expansion
\begin{equation}
\nonumber
\log \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits\big[ \re^{t_1X_1+\cdots + t_n X_n} \big]
=
\sum_{l_1,\ldots , l_n\geq 1} \frac{t^{l_1}_1\cdots t^{l_n}_n}{l_1! \cdots l_n!}
\kappa \big( X_1^{l_1} ; \ldots ; X_n^{l_n} \big)
,
\end{equation}
for $(t_1,\ldots , t_n)$ in a neighborhood of zero in ${\mathord{\mathbb R}}^n$.
The joint moments of $(X_1,\ldots , X_n)$
are given from its cumulants by the joint moment-cumulant relation
\begin{equation}
\nonumber
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ X_1 \cdots X_n ]
=
\sum_{\pi \in \Pi [n]}
\prod_{A \in \pi }
\kappa \big( (X_i)_{i \in A} \big)
=
\sum_{l=1}^n
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\kappa \big( (X_i)_{i \in \pi_j} \big),
\end{equation}
see Theorem~1 in \cite{elukacs} or Relation~(2.9) in \cite{mccullagh}.
The M\"obius inversion relation \eqref{bpi} also allows us to
recover joint cumulants from joint moments
as
\begin{eqnarray}
\nonumber
\kappa (X_1 ; \ldots ; X_n)
& = &
\sum_{ \sigma \in \Pi [n] }
\mu ( \sigma , \widehat{\bf 1} )
\prod_{A \in \sigma}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \Bigg[ \prod_{i\in A} X_i \Bigg]
\\
\label{jklsda1}
& = &
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \Bigg[ \prod_{i\in \pi_j} X_i \Bigg],
\end{eqnarray}
see Theorem~1 of \cite{elukacs}
or Corollary~5.1.6 in \cite{stanley}.
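These two relations can also be checked mechanically. The following short Python sketch (ours; it is an illustration only, independent of the Mathematica codes referred to in the appendices) starts from arbitrarily chosen cumulants, builds moments by summing over set partitions as above, and recovers the cumulants through the inversion with coefficient $(l-1)!(-1)^{l-1}$.
\begin{verbatim}
# Our own round-trip check of the moment/cumulant relations above, in exact
# rational arithmetic: cumulants -> moments (sum over set partitions) and
# moments -> cumulants (Moebius inversion with (l-1)!(-1)^(l-1)).
from fractions import Fraction
from math import factorial, prod

def partitions(items):                # all set partitions of a list
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        yield [[first]] + p
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

N = 6
kappa = {n: Fraction(n * n, 3) for n in range(1, N + 1)}    # arbitrary values

moment = {n: sum(prod(kappa[len(B)] for B in p)
                 for p in partitions(list(range(n))))
          for n in range(1, N + 1)}

kappa_back = {n: sum((-1) ** (len(p) - 1) * factorial(len(p) - 1)
                     * prod(moment[len(B)] for B in p)
                     for p in partitions(list(range(n))))
              for n in range(1, N + 1)}

print(kappa_back == kappa)            # True
\end{verbatim}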
In particular,
every cumulant of order $n\geq 1$ of a Poisson distributed random
variable $X$ with intensity $\lambda >0$ is equal to $\lambda$, so that
$$
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ X^n ]
=
\sum_{l=1}^n
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\lambda^l
=
\sum_{l=1}^n
S(n,l)
\lambda^l
$$
where
$S(n,l)$ is the Stirling number of the second kind, $1 \leq l \leq n$.
For $\lambda = 1$ this yields the Bell number
$$
B_n
=
\sum_{l=1}^n
S(n,l)
$$
which is the number of partitions of $\{1,\ldots , n \}$, i.e. the
cardinality of $\Pi [n]$.
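This moment formula is easy to verify numerically; the following Python sketch (ours, with the value $\lambda = 3/2$ chosen arbitrarily) computes the moments of $X$ from the Stirling numbers and checks them against the classical Poisson recursion expressing the $(n+1)$-st moment as $\lambda$ times the $n$-th moment of $X+1$, which is not used elsewhere in this paper.
\begin{verbatim}
# Our own check of E[X^n] = sum_l S(n,l) lambda^l for X ~ Poisson(lambda),
# against the recursion E[X^(n+1)] = lambda * E[(X+1)^n]; exact arithmetic.
from math import comb
from fractions import Fraction

lam, N = Fraction(3, 2), 8

S = [[0] * (N + 1) for _ in range(N + 1)]     # Stirling numbers, 2nd kind
S[0][0] = 1
for n in range(1, N + 1):
    for l in range(1, n + 1):
        S[n][l] = l * S[n - 1][l] + S[n - 1][l - 1]

m_stirling = [sum(S[n][l] * lam ** l for l in range(n + 1))
              for n in range(N + 1)]
m_rec = [Fraction(1)]
for n in range(N):
    m_rec.append(lam * sum(comb(n, j) * m_rec[j] for j in range(n + 1)))

print(m_stirling == m_rec)                    # True
print([sum(S[n]) for n in range(N + 1)])      # Bell numbers 1, 1, 2, 5, 15, ...
\end{verbatim}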
\section{Multiple stochastic integral representation of $k$-hop counts}
\label{sec2}
In the sequel we use the notations
$u\wedge v := \min (u,v)$ and $u \vee v := \max (u,v)$, $u,v\geq 0$.
Our approach to the recursive computation of moments and cumulants
relies on the following stochastic integral representation of
$k$-hop counts with respect to the Poisson process $(N_t)_{t\in {\mathord{\mathbb R}}_+}$
with intensity \eqref{fjkldf1}.
\begin{prop}
\label{fjkds}
Let $k\geq 2$.
The number of $k$-hops joining $0$ to $t \in [0,kr]$ can be written as
the (non-compensated) multiple Poisson stochastic integral
\begin{equation}
\label{fjkldsf}
\sigma_k (t)
= \int_0^t
\cdots
\int_0^t
f_k(s_1,\ldots , s_{k-1}) dN_{s_1} \cdots dN_{s_{k-1}},
\quad t\in [0,kr],
\end{equation}
where $f_k$ is the function of $k-1$ variables defined as
\begin{equation}
\label{fjkl23}
f_k(s_1,\ldots , s_{k-1})
:=
\prod_{l=0}^{k-1}
{\bf 1}_{ \{ s_{l+1} < s_l + r \} },
\end{equation}
$s_1,\ldots , s_{k-1}\in [0,t]$, with $s_0:=0$ and $s_k:=t$.
\end{prop}
\begin{Proof}
When $k=2$ we note that if a node is present at $s \in [0,r]$ it
connects to every node inside $[0,s)$, and
therefore it generates $\sigma_1 (s^-)$ new paths,
where $\sigma_1 (s^-)$ denotes the almost sure left limit
$\sigma_1 (s^-):=\lim_{u\nearrow s} \sigma_1 (u)$.
When $s\in (r,2r]$, if a node is present at $s-r \in [0,r]$,
then it falls out of range of the endpoint $s$, and
the count of $1$-hops linking $0$ to $s-r$
has to be deducted, which yields the evolution
$$
d\sigma_2(s) =
\left\{
\begin{array}{ll}
{\bf 1}_{[0,r]}(s) \sigma_1(s^-) dN_s,
\quad s\in [0,r],
\\
\\
- {\bf 1}_{[r,2r]}(s) \sigma_1 (s^--r) dN_{s-r},
\quad s\in (r,2r],
\end{array}
\right.
$$
where $(T_n)_{n\geq 1}$
denotes the jump times sequence of $(N_t)_{t\in [0,2r]}$.
More generally, applying this argument by iterations to any $k\geq 3$
leads to the system of jump stochastic differential equations
$$
d\sigma_k (s) =
\left\{
\begin{array}{ll}
{\bf 1}_{[0,(k-1)r]}(s) \sigma_{k-1} (s^-) dN_s,
\quad s\in [0,(k-1)r],
\\
\\
- {\bf 1}_{[r,kr]}(s) \sigma_{k-1} (s^--r) dN_{s-r},
\quad s\in ((k-1)r,kr],
\end{array}
\right.
$$
or
$$
d\sigma_k (s) =
{\bf 1}_{[0,(k-1)r]}(s) \sigma_{k-1} (s^-) dN_s
- {\bf 1}_{[r,kr]}(s) \sigma_{k-1} (s^--r) dN_{s-r},
\quad s\in [0,kr],
$$
hence the recurrence relation
\begin{eqnarray*}
\sigma_k (t)
& = &
\int_0^{((k-1)r)\wedge t} \sigma_{k-1} (s^-) dN_s
- \int_r^{r\vee t} \sigma_{k-1} (s^--r) dN_{s-r}
\\
& = &
\int_0^{((k-1)r)\wedge t} \sigma_{k-1} (s^-) dN_s
- \int_0^{0\vee (t-r)} \sigma_{k-1} (s^-) dN_s
\\
& = &
\int_{0\vee (t-r)}^{((k-1)r)\wedge t} \sigma_{k-1} (s^-) dN_s
\\
& = &
\left\{
\begin{array}{l}
\displaystyle
\int_{t-r}^{(k-1)r} \sigma_{k-1} (s^-) dN_s, \quad t\in [(k-1)r,kr],
\\
\\
\displaystyle
\int_{t-r}^t \sigma_{k-1} (s^-) dN_s, \qquad t\in [r,(k-1)r],
\\
\\
\displaystyle
\int_0^t \sigma_{k-1} (s^-) dN_s, \qquad t\in [0,r],
\end{array}
\right.
\end{eqnarray*}
for $k \geq 2$.
Finally, by induction we obtain
\begin{eqnarray}
\nonumber
\sigma_k (t)
& = &
\int_{0\vee (t-r)}^{((k-1)r)\wedge t}
\int_{0\vee (s_{k-1}^--r)}^{((k-2)r)\wedge s_{k-1}^-}
\cdots
\int_{0\vee (s_2^--r)}^{r\wedge s_2^-} dN_{s_1} \cdots dN_{s_{k-1}}
\\
\label{fdksfa}
& = &
\int_{0\vee (t-r)}^{((k-1)r)\wedge t}
\int_{0\vee (s_{k-1}^--r)}^{((k-2)r)\wedge t}
\cdots
\int_{0\vee (s_2^--r)}^{r\wedge t} dN_{s_1} \cdots dN_{s_{k-1}},
\end{eqnarray}
and we conclude by letting
$$
f_k(s_1,\ldots , s_{k-1})
: =
{\bf 1}_{ \{ t-r < s_{k-1} < (k-1)r\} }
{\bf 1}_{ \{ s_{k-1} -r < s_{k-2} < (k-2)r \} }
\cdots
{\bf 1}_{ \{ s_2 -r < s_1 < r \} }
,
$$
$s_1,\ldots , s_{k-1}\in [0,t]$.
\end{Proof}
In particular, the $2$-hop count is given by
\begin{equation}
\label{fjlksd23}
\sigma_2 (t) = \int_{0\vee (t-r)}^{r\wedge t} dN_s = N_t{\bf 1}_{[0,r]}(t) +
(N_r - N_{t-r} ){\bf 1}_{[r,2r]}(t).
\end{equation}
In the case of $3$-hops, we have
\begin{align*}
& \sigma_3(t)
=
\int_{0\vee (t-r)}^{(2r)\wedge t} \sigma_2 (s^-) dN_s
\\
& =
\left\{
\begin{array}{ll}
\displaystyle
\int_{t-r}^{2r} \sigma_2 (s^-) dN_s
= \int_{t-r}^{2r} \int_{s^--r}^{s^-} dN_u dN_s
= \int_{t-r}^{2r} (N_{s^-}-N_{s^--r}) dN_s,
\quad t\in [2r,3r],
\\
\\
\displaystyle
\int_{t-r}^t \sigma_2 (s^-) dN_s
=
\int_{t-r}^t N_{s^-}dN_s
-
\int_r^t N_{s^--r} dN_s
\\
\displaystyle
\qquad
\qquad
\qquad
\ \
=
\frac{1}{2} (N_t-1)N_t
-
\frac{1}{2} (N_{t-r}-1)N_{t-r}
-
\sum_{l=1+N_r}^{N_t} N_{T_l^--r}
, \quad t\in [r,2r],
\\
\displaystyle
\int_0^t N_{s^-} dN_s = \frac{1}{2} (N_t-1)N_t,
\qquad t\in [0,r].
\end{array}
\right.
\end{align*}
More generally, when $t\in [(l-1)r,lr]$ for some $l\in \{1,\ldots , k-1 \}$,
by \eqref{fdksfa} we have
$$
\sigma_k (t)
= \int_{0\vee (t-r)}^t
\int_{0\vee (s_{k-1}^--r)}^t
\cdots \int_{0\vee (s_{l+1}^--r)}^t
\int_{s_l^--r}^{(l-1)r}
\cdots
\int_{s_2^--r}^r dN_{s_1} \cdots dN_{s_{k-1}},
$$
hence the identity in distribution
$$
\sigma_k (t) \stackrel{d}{\simeq} \int_{r \vee t}^{t+r}
\int_{0\vee (s_{k-1}^--r)}^{t+r}
\cdots \int_{0\vee ( s_{l+1}^--r)}^{t+r}
\int_{s_l^--r}^{lr}
\int_{s_{l-1}^--r}^{(l-1)r}
\cdots
\int_{s_2^--r}^{2r} dN_{s_1} \cdots dN_{s_{k-1}},
$$
where we let $s_k:=t$.
The multiple compensated Poisson stochastic integral of order
$n\geq 1$ of a deterministic symmetric function
$f_n \in L^2({\mathord{\mathbb R}}_+^n, \lambda^{\otimes n})$
is defined
by
$$
I_n (f_n) :=
n!
\int_0^\infty \int_0^{t_n} \cdots \int_0^{t_2}
{f}_n ( t_1,\ldots , t_n)
d(N_{t_1} - \lambda ( dt_1) )
\cdots
d(N_{t_n} - \lambda ( dt_n) ),
$$
with the isometry property
\begin{equation}
\label{isom1}
\mathbb{E} [ ( I_n (f_n) )^2 ] = (n!)^2
\int_0^\infty \int_0^{t_n} \cdots \int_0^{t_2}
f_n^2(t_1,\ldots , t_n) \lambda (dt_1)\cdots \lambda (dt_n),
\quad n \geq 1,
\end{equation}
see e.g. Propositions~2.7.1 and ~6.2.4 in \cite{privaultbk2},
and references therein.
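In the simplest case $n=1$ and constant intensity $\lambda$ on $[0,T]$, the isometry \eqref{isom1} states that the second moment of $I_1(f)$ equals $\lambda \int_0^T f(t)^2\, dt$; the following Python sketch (ours, with $f(t)=t$, $T=1$ and $\lambda=5$ chosen arbitrarily) is included only as a Monte Carlo sanity check of this relation.
\begin{verbatim}
# Our own Monte Carlo sanity check of the n = 1 isometry: for f(t) = t on [0,1]
# and a Poisson process of constant intensity lam, E[I_1(f)^2] = lam/3.
import random

random.seed(0)
lam, T, trials = 5.0, 1.0, 200000

def poisson_points(intensity, length):
    pts, t = [], random.expovariate(intensity)
    while t <= length:
        pts.append(t)
        t += random.expovariate(intensity)
    return pts

compensator = lam * T ** 2 / 2                 # int_0^T t * lam dt
acc = 0.0
for _ in range(trials):
    i1 = sum(poisson_points(lam, T)) - compensator
    acc += i1 * i1
print(acc / trials, lam * T ** 3 / 3)          # both close to 5/3
\end{verbatim}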
The next corollary of Proposition~\ref{fjkds}
gives the chaos decomposition of $k$-hop counts
in terms of multiple Poisson stochastic integrals.
\begin{corollary}
\label{c1}
For any $t\in [0,kr]$, the number $\sigma_k (t)$ of $k$-hops linking $0$ to $t$ can be
represented as the sum of multiple
compensated Poisson stochastic integrals
\begin{equation}
\label{skt}
\sigma_k (t)
= \frac{1}{(k-1)!} \sum_{l=0}^{k-1}
{k-1 \choose l}
I_l \left( {\bf 1}_{\{ * \in [0,t]^l\}}
\int_0^t \cdots \int_0^t
\widetilde{f}_k (*,s_{l+1},\ldots , s_{k-1} )
ds_{l+1}\cdots ds_{k-1}
\right),
\end{equation}
where $\widetilde{f}_k$ is the symmetrization in $k-1$ variables of the function
$f_k$ defined in \eqref{fjkl23}.
\end{corollary}
\begin{Proof}
This is a direct consequence of
Proposition~\ref{fjkds} and the binomial theorem applied
to $(dN_{t_1} - \lambda (dt_1) )\cdots (dN_{t_{k-1}} - \lambda (dt_{k-1}) )$.
\end{Proof}
\section{Single node per cell}
\label{sec3}
In the case where $k$-hop paths are constrained to have a single
node per cell $[(l-1)r,lr]$, $l=1,\ldots , k$,
we must have $t\in [(k-1)r,kr]$, and $f_k(s_1,\ldots , s_{k-1}) =0$ unless
$$
(s_1,\ldots , s_{k-1})
\in [0,r] \times
[r, 2r] \times
\cdots
\times
[(k-2)r , (k-1)r].
$$
\subsubsection*{Multiple Poisson stochastic integral expression}
In particular, when $t\in [(k-1)r,kr]$ with $k\geq 2$,
Relation~\eqref{fdksfa} yields the identity in distribution
\begin{eqnarray}
\nonumber
\sigma_k (t)
& = & \int_{t-r}^{(k-1)r}
\int_{s_{k-1}^--r}^{(k-2)r}
\cdots
\int_{s_2^--r}^r dN_{s_1} \cdots dN_{s_{k-1}}
\\
\label{fjdksdfa}
& \stackrel{d}{\simeq} &
\int_t^{kr}
\int_{s_{k-1}^--r}^{(k-1)r}
\cdots
\int_{s_2^--r}^{2r} dN_{s_1} \cdots dN_{s_{k-1}}
,
\end{eqnarray}
where $s_k:=t$,
hence the following proposition.
\begin{prop}
\label{fjkldsa}
Let $k\geq 2$.
For $\tau \in [0,r]$ we have the identity in distribution
\begin{equation}
\label{fjld3}
\sigma_k (kr-\tau) \stackrel{d}{\simeq} \int_0^\tau \int_0^{s_{k-1}^-} \cdots \int_0^{s_2^-} dN^{(1)}_{s_1} \cdots dN^{(k-1)}_{s_{k-1}},
\end{equation}
where $\big(N^{(l)}_s\big)_{s\in {\mathord{\mathbb R}}_+}$ is a family of
independent Poisson processes with respective intensities
$\lambda_l(ds )=\lambda_l(s )ds$, $l=1,\ldots , k-1$.
\end{prop}
\begin{Proof}
When $t\in [(k-1)r,kr]$, by \eqref{fjdksdfa} we have the identity in distribution
\begin{eqnarray*}
\sigma_k (t)
& \stackrel{d}{\simeq} &
\int_t^{kr}
\int_{s_{k-1}^--r}^{(k-1)r}
\cdots
\int_{s_2^--r}^{2r} dN_{s_1} \cdots dN_{s_{k-1}}
\\
& = &
(-1)^{k-1} \int_0^{kr-t}
\int_r^{r+u_{k-1}^-}
\cdots
\int_{(k-2)r}^{r + u_2^-} dN_{kr-u_1} \cdots dN_{kr-u_{k-1}}
\end{eqnarray*}
where we let $u_i:=kr-s_i$, $i=1,\ldots , k-1$, hence
the identity in distribution
$$
\sigma_k (t)
\stackrel{d}{\simeq}
\int_0^{kr-t}
\int_r^{r+v_{k-1}^-}
\cdots
\int_{(k-2)r}^{r+v_2^-} dN_{v_1} \cdots dN_{v_{k-1}}.
$$
We note that
in the above integral we have $(l-1)r<v_l\leq l r$ for $l=1,\ldots ,k-1$,
hence the integration intervals are disjoint and the Poisson process
samples $(N_{v_i})$, $i=1,\ldots , k-1$, are independent on
their integration intervals, which yields the identity in distribution
$$
\sigma_k(t) \stackrel{d}{\simeq}
\int_0^{kr-t} \int_0^{s_{k-1}^-} \cdots \int_0^{s_2^-} dN^{(1)}_{s_1} \cdots dN^{(k-1)}_{s_{k-1}},
\qquad
(k-1)r \leq t \leq kr.
$$
\end{Proof}
\subsubsection*{$U$-Statistics formulation}
\noindent
As noted in e.g. \cite{giles-privault}, when $\tau \in [0,r]$,
any node contributing to a $k$-hop path linking $x_0:=0$ to $x_k:=kr-\tau$
must belong to one of the {lenses} pictured in pink in Figure~\ref{fklds},
and defined as the intervals
$$
L_j := [jr-\tau,jr] = [0,\tau] + jr - \tau, \qquad j=1,\ldots , k-1,
$$
of identical length $\tau$.
In particular,
any $k$-hop path linking $x_0:=0$ to $x_k:=kr-\tau$
should have a single
node per cell $[(j-1)r,jr]$, $j=1,\ldots , k$,
hence it must be realized using a sequence
$(x_1, \ldots , x_{k-1})$ of nodes such that
$$
x_{i+1} < x_i + r, \qquad i = 0,1,\dots, k-1,
$$
with $x_0:=0$ and $x_k:=kr-\tau$.
Therefore, any
$k$-hop path $(x_1,\ldots , x_{k-1})$ can be mapped to a
sequence $(y_1,\ldots , y_{k-1}) \in [0,\tau ]^{k-1}$
by the relation
$$
y_j:=x_j - (jr-\tau), \qquad j =1,\ldots , k-1,
$$
with $y_1 > \cdots > y_{k-1}$.
Based on the above description, we can model the random graph using
a Poisson point process $\omega $
on $X:=[0,r]\times \{1,\ldots , k-1\}$, with intensity
$\mu$ of the form
$$
\mu ( ds,\{l\}) := \lambda_l (ds )= \lambda_l (s )ds, \qquad l = 1,\ldots , k-1,
$$
and \eqref{fjld3} can be rewritten as in the next proposition.
\begin{prop}
When $\tau \in [0,r]$, the count $\sigma_k (kr-\tau)$ of $k$-hop paths can be
represented as the $U$-statistics
\begin{equation}
\label{fjda}
\sigma_k (kr-\tau)
=
\sum_{((x_1,l_1),\ldots , (x_{k-1},l_{k-1}) ) \in \omega^{k-1}
\atop (x_i,l_i)\not=(x_j,l_j), 1\leq i\not= j \leq k-1}
f_\tau (x_1,l_1;\ldots ; x_{k-1},l_{k-1} )
\end{equation}
of order $k-1$, where $f_\tau :( [0,r] \times \{1,\ldots , k-1\})^{k-1} \to \{0,1\}$ is
the function of $k-1$ variables in $[0,r]\times \{1,\ldots , k-1\}$
given by
$$
f_\tau (x_1,l_1;\ldots ; x_{k-1},l_{k-1})
=
\prod_{i=0}^{k-1} {\bf 1}_{\{x_i<x_{i+1}, \ l_i<l_{i+1} \}}
=
{\bf1}_{\{ l_1=1,\ldots , l_{k-1}=k-1\}}
{\bf1}_{\{ 0 < x_1 < \cdots < x_{k-1} < \tau \}}
$$
with $(x_0,l_0):=(0,0)$
and $(x_k,l_k):=(\tau,k)$.
\end{prop}
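The representation \eqref{fjda} lends itself to direct evaluation: sampling one independent Poisson set of points on $[0,\tau ]$ for each mark $l=1,\ldots ,k-1$, the count is the number of ways of choosing one point per mark so that the selected positions are increasing. The following brute-force Python sketch is an illustrative addition, with assumed values of $k$, $\tau$ and a common intensity per lens.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
k, tau, lam = 4, 0.8, 3.0   # assumed: number of hops, lens length, intensity per lens

# one independent Poisson sample on [0, tau] per mark l = 1, ..., k-1
levels = [np.sort(rng.uniform(0.0, tau, rng.poisson(lam * tau))) for _ in range(k - 1)]

# U-statistic: number of choices of one point per mark that are increasing in position
count = sum(1 for c in product(*levels) if all(a < b for a, b in zip(c, c[1:])))
print(count)
\end{verbatim}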
\section{Joint moments of $k$-hop counts}
\label{sec4}
Proposition~\ref{p1} provides a combinatorial expression for the joint
moments of $(\sigma_k(kr-\tau_1),\ldots ,\sigma_k(kr-\tau_n))$ for any
$\tau_1 , \ldots , \tau_n \in [0,r]$,
using sums over partitions of $\{1,\ldots , n\}$.
\begin{prop}
\label{p1}
Let $n \geq 1$.
For any $\tau_1,\ldots , \tau_n \in [0,r]$,
letting
$\widehat{\tau}_\pi := \min_{i\in \pi} \tau_i$ for $\pi \subset \{1,\ldots , n\}$,
we have
\begin{align}
\nonumber
& \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma_k(kr-\tau_1) \cdots \sigma_k(kr-\tau_n) ]
\\
\label{djkls12}
& =
\sum_{\pi^1, \ldots , \pi^{k-1} \in \Pi [n] }
\int_{\prod_{l=1}^{k-1} \prod_{j=1}^{|\pi^l|} [0,\widehat{\tau}_{\pi^l_j}]}
\prod_{1 \leq l < k
\atop
{
1 \leq j \leq |\pi^l |
\atop i\in \pi^l_j}
}
{\bf 1}_{\{
z^1_{\zeta^1_i } < \cdots < z^{k-1}_{\zeta^{k-1}_i } \} }
\lambda_1(dz^1_{\pi^1} ) \cdots \lambda_{k-1} (dz^{k-1}_{\pi^{k-1}} )
,
\end{align}
where
$\zeta^j_i$ denotes the block of $\pi^j$ that contains the index $i\in \{1,\ldots , n\}$, and $dz^j_{\pi^j} := (dz^j_i )_{ i \in \pi^j}$,
$j=1,\ldots , k-1$.
\end{prop}
\begin{Proof}
For any $\tau_1,\ldots , \tau_n \in [0,r]$,
by \eqref{fjda} and Corollary~\ref{c1-a} we have
\begin{align*}
\nonumber
& \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma_k(kr-\tau_1) \cdots \sigma_k(kr-\tau_n) ]
\\
& =
\sum_{
\substack{
\pi \in \Pi [n\times (k-1)]
\\
\pi \wedge \rho = \hat{0}
}}
\int_{[0,r]^{|\pi|}}
\sum_{1\leq l_q \leq k-1 \atop 1 \leq q \leq |\pi|}
\prod_{1 \leq j \leq |\pi | \atop
i\in \pi_j}
f_{\tau_i} \big(z_{\zeta^\pi_{i,1}},l_{\zeta^\pi_{i,1}}; \ldots ; z_{\zeta^\pi_{i,k-1}},l_{\zeta^\pi_{i,k-1}}\big)
\lambda_{\bar{\pi}_1} ( d z_1 )
\cdots
\lambda_{\bar{\pi}_{|\pi|}} ( d z_{|\pi|} )
,
\end{align*}
where
$\widehat{\bf 0} : = \{\{1\},\ldots , \{n\}\}$ is
the $n$-block partition of $\{1,\ldots , n\}$,
$\bar{\pi}_i$ denotes the index $j\in \{1,\ldots , k-1\}$ of the unique block
$\eta_j = ((i,j))_{i=1,\ldots , n}$ containing $\pi_i$, $i=1,\ldots , |\pi|$,
and the sum is taken over non-flat
partitions $\pi$; here
${\rm NC} [n\times (k-1)]$ denotes the set of partitions that are non-crossing in the sense that
if $(i,l)$ and $(i',l')$ belong to the same block of $\pi$ then $l=l'$.
This yields
\begin{align*}
\nonumber
& \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma_k(kr-\tau_1) \cdots \sigma_k(kr-\tau_n) ]
\\
& =
\hskip-0.4cm
\sum_{
\substack{
\pi \in \Pi [n\times (k-1)]
\\
\pi \wedge \rho = \hat{0}
}}
\int_0^{\widehat{\tau}_{\pi_1}}
\cdots
\int_0^{\widehat{\tau}_{\pi_{|\pi|}}}
\hskip-0.4cm
\sum_{1\leq l_q \leq k-1 \atop 1 \leq q \leq |\pi|}
\prod_{1 \leq j \leq |\pi | \atop
i\in \pi_j}
f_{\tau_i} \big(z_{\zeta^\pi_{i,1}},l_{\zeta^\pi_{i,1}}; \ldots ; z_{\zeta^\pi_{i,k-1}},l_{\zeta^\pi_{i,k-1}}\big)
\lambda_{\bar{\pi}_1} ( d z_1 )
\cdots
\lambda_{\bar{\pi}_{|\pi|}} ( d z_{|\pi|} )
\\
& =
\sum_{
\substack{
\pi \in \Pi [n\times (k-1)]
\\
\pi \wedge \rho = \hat{0}
}}
{\bf 1}_{\{
\bar{\pi}_1 \leq \cdots \leq \bar{\pi}_{|\pi|}
\}
}
\sum_{l_1\leq \cdots \leq l_{|\pi|}}
\\
& \qquad \quad
\int_0^{\widehat{\tau}_{\pi_1}}
\cdots
\int_0^{\widehat{\tau}_{\pi_{|\pi|}}}
\prod_{1 \leq j \leq |\pi | \atop
i\in \pi_j}
f_{\tau_i} \big(z_{\zeta^\pi_{i,1}},l_{\zeta^\pi_{i,1}}; \ldots ; z_{\zeta^\pi_{i,k-1}},l_{\zeta^\pi_{i,k-1}}\big)
\lambda_{\bar{\pi}_1} ( d z_1 )
\cdots
\lambda_{\bar{\pi}_{|\pi|}} ( d z_{|\pi|} )
\\
& =
\sum_{
\substack{
\pi \in {\rm NC} [n\times (k-1)]
\\
\pi \wedge \rho = \hat{0}
}}
{\bf 1}_{\{
\bar{\pi}_1 \leq \cdots \leq \bar{\pi}_{|\pi|}
\}
}
\int_0^{\widehat{\tau}_{\pi_1}}
\cdots
\int_0^{\widehat{\tau}_{\pi_{|\pi|}}}
\prod_{1 \leq j \leq |\pi |
\atop i\in \pi_j}
{\bf 1}_{\{ z_{\zeta^\pi_{i,1}} < \cdots < z_{\zeta^\pi_{i,k-1}} \} }
\lambda_{\bar{\pi}_1} (dz_1 ) \cdots \lambda_{\bar{\pi}_{|\pi|}} (dz_{|\pi |} ).
\end{align*}
We obtain \eqref{djkls12} by noting that
any non-flat and non-crossing partition $\pi$ in
${\rm NC} [n\times (k-1)]$ can be written as
$$
\pi = \{\pi_1,\ldots , \pi_{|\pi|} \} = \bigcup_{l=1}^{k-1} \pi^l,
$$
where $\pi^l\in \Pi [n]$ is a partition of $\{1,\ldots , n\}$
for every $l=1,\ldots , k-1$.
\end{Proof}
Next, we present the application of Proposition~\ref{p1}
in the particular cases of first and second moments.
\subsubsection*{First moment}
When $n=1$ there is only one non-flat and non-crossing partition
of $\{1\} \times \{1, \ldots , k-1\}$, which is given by
$\rho = \{ \{(1,1)\}, \ldots , \{(1,k-1) \}\}$ and
can be represented as follows for $k=9$:
\\
\hskip-0.5cm
\begin{tikzpicture}[every node/.style={draw}]
\centering
\node (l1) at (2,0) [circle] {};
\node (l2) at (4,0) [circle] {};
\node (l3) at (6,0) [circle] {};
\node (l4) at (8,0) [circle] {};
\node (l5) at (10,0) [circle] {};
\node (l6) at (12,0) [circle] {};
\node (l7) at (14,0) [circle] {};
\node (l8) at (16,0) [circle] {};
\draw let \p1=(l1), \p2=(l1), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l1)!0.5!(l1) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(l2), \p2=(l2), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l2)!0.5!(l2) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(l3), \p2=(l3), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l3)!0.5!(l3) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\draw let \p1=(l4), \p2=(l4), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l4)!0.5!(l4) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\draw let \p1=(l5), \p2=(l5), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l5)!0.5!(l5) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\draw let \p1=(l6), \p2=(l6), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l6)!0.5!(l6) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\draw let \p1=(l7), \p2=(l7), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l7)!0.5!(l7) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\draw let \p1=(l8), \p2=(l8), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l8)!0.5!(l8) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\end{tikzpicture}
\noindent
This yields
\begin{eqnarray}
\nonumber
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma_k(kr-\tau) ]
& = &
\int_0^\tau \cdots \int_0^\tau
{\bf 1}_{\{ z_1 < \cdots < z_{k-1} \} }
\lambda_1 (dz_1 ) \cdots \lambda_{k-1} (dz_{k-1} )
\\
\label{fjkslff}
& = &
\int_0^\tau \int_0^{z_{k-1}} \cdots \int_0^{z_2}
\lambda_1 (dz_1 ) \cdots \lambda_{k-1} (dz_{k-1} ),
\end{eqnarray}
and when $\lambda_i (s)$ is the constant density $\lambda_i >0$ on cell $i$,
$i=1,\ldots , k-1$, we find
\begin{equation}
\nonumber
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma_k (kr-\tau) ]
= \lambda_1 \cdots \lambda_{k-1} \frac{\tau^{k-1}}{(k-1)!}.
\end{equation}
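For constant densities, the iterated integral \eqref{fjkslff} can also be checked symbolically; the following Python/SymPy sketch (an illustrative addition, with $k=4$ assumed) recovers the closed form above.
\begin{verbatim}
import sympy as sp

k = 4                                                 # assumed number of hops
tau = sp.symbols('tau', positive=True)
lams = sp.symbols('lambda1:%d' % k, positive=True)    # lambda_1, ..., lambda_{k-1}
z = sp.symbols('z1:%d' % k, positive=True)            # z_1, ..., z_{k-1}

# int_0^tau int_0^{z_{k-1}} ... int_0^{z_2} lambda_1 dz_1 ... lambda_{k-1} dz_{k-1}
expr = sp.Integer(1)
upper = list(z[1:]) + [tau]        # z_1 integrated up to z_2, ..., z_{k-1} up to tau
for i in range(k - 1):
    expr = sp.integrate(lams[i] * expr, (z[i], 0, upper[i]))

assert sp.simplify(expr - sp.Mul(*lams) * tau**(k - 1) / sp.factorial(k - 1)) == 0
print(expr)   # lambda1*lambda2*lambda3*tau**3/6
\end{verbatim}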
\subsubsection*{Second moment}
\noindent
When $n=2$ the number of blocks
of a non-flat and non-crossing partition of $[2\times (k-1)]$
ranges from $k-1$ to $2k-2$, and each block has size either one or two,
as in the following example with $k=9$.
\\
\hskip-0.3cm
\begin{tikzpicture}[every node/.style={draw}]
\centering
\node (l1) at (2,0) [circle] {};
\node (l2) at (4,0) [circle] {};
\node (l3) at (6,0) [circle] {};
\node (l4) at (8,0) [circle] {};
\node (l5) at (10,0) [circle] {};
\node (l6) at (12,0) [circle] {};
\node (l7) at (14,0) [circle] {};
\node (l8) at (16,0) [circle] {};
\node (m1) at (2,1) [circle] {};
\node (m2) at (4,1) [circle] {};
\node (m3) at (6,1) [circle] {};
\node (m4) at (8,1) [circle] {};
\node (m5) at (10,1) [circle] {};
\node (m6) at (12,1) [circle] {};
\node (m7) at (14,1) [circle] {};
\node (m8) at (16,1) [circle] {};
\draw let \p1=(l1), \p2=(l1), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l1)!0.5!(l1) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(m1), \p2=(m1), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (m1)!0.5!(m1) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(l2), \p2=(m2), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l2)!0.5!(m2) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\draw let \p1=(l3), \p2=(m3), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l3)!0.5!(m3) $) ellipse [x radius=\n2/2+12pt, y radius=0.5cm,rotate=0-\n1];
\draw let \p1=(l4), \p2=(l4), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l4)!0.5!(l4) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(m4), \p2=(m4), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (m4)!0.5!(m4) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(l5), \p2=(l5), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l5)!0.5!(l5) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(m5), \p2=(m5), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (m5)!0.5!(m5) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(l6), \p2=(m6), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l6)!0.5!(m6) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(l7), \p2=(m7), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l7)!0.5!(m7) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\draw let \p1=(l8), \p2=(m8), \n1={atan2(\y2-\y1,\x2-\x1)}, \n2={veclen(\y2-\y1,\x2-\x1)}
in ($ (l8)!0.5!(m8) $) ellipse [x radius=\n2/2+12pt, y radius=0.4cm,rotate=0-\n1];
\end{tikzpicture}
\noindent
As a consequence, in the next proposition we obtain the
second moment of the count $\sigma_k(t)$ of $k$-hop paths.
Higher cumulants and moments of $\sigma_k(t)$ may also
be computed by this method
using Corollary~\ref{c1} above and Corollary~7.4.1 of
\cite{peccatitaqqu}.
\begin{prop}
\label{djklfs2}
The variance
of the $k$-hop path count $\sigma_k(kr-\tau)$, $\tau\in [0,r]$, is given by
$$
{\mathrm{{\rm Var}}}_\lambda [ \sigma_k (kr-\tau)]
=
\sum_{l=1}^{k-1}
\tau^{2k-2-l}
\frac{ \lambda_1^2\cdots \lambda_{k-1}^2}{(2k-2-l)!}
\sum_{ j_0 + \cdots + j_l = k-1-l \atop j_0,\ldots , j_l\geq 0}
\prod_{q=1}^l \frac{1}{\lambda_{j_0+\cdots + j_{q-1}+q}}
\prod_{p=0}^l {2j_p \choose j_p}
.
$$
\end{prop}
\begin{Proof}
We apply Proposition~\ref{p1}
with $\tau = \tau_1 = \tau_2$,
noting that the blocks of size one are even in number, and
denoting by $i_1,\ldots , i_l$ the locations of the blocks of size two, with
$i_1=2,i_2=3,i_3=6,i_4=7,i_5=8$ in the above example, to obtain
\begin{align*}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [\sigma_k^2 (kr-\tau) ]
& =
\lambda_1\cdots \lambda_{k-1}
\sum_{l=0}^{k-1}
\sum_{
0=i_0<i_1< \cdots < i_l <i_{l+1}= k}
\left( \prod_{1\leq q < k \atop
q\notin \{i_1,\ldots ,i_l\}} \hskip-0.3cm \lambda_q
\right)
\\
& \qquad \times \int_0^\tau
\int_0^{z_{i_l}}
\cdots
\int_0^{z_{i_2}}
\prod_{p=0}^l
\left( \frac{ (z_{i_{p+1}}-z_{i_p})^{(i_{p+1}-i_p-1)} }{(i_{p+1}-i_p-1)!}
\right)^2
dz_{i_1}\cdots dz_{i_l}
,
\end{align*}
where we let $z_0:=0$ and $z_k:=\tau$.
To conclude, we check that
\begin{eqnarray}
\nonumber
\lefteqn{
\! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \int_0^1
\int_0^{z_{i_l}}
\cdots
\int_0^{z_{i_2}}
\prod_{p=0}^l
\frac{ (z_{i_{p+1}}-z_{i_p})^{2(i_{p+1}-i_p-1)} }{((i_{p+1}-i_p-1)!)^2}
dz_{i_1}\cdots dz_{i_l}
}
\\
\nonumber
&
=
& \frac{1}{((i_1-1)!)^2}
\prod_{p=1}^l
\int_0^1
\frac{ (1-y)^{2(i_{p+1}-i_p-1)}
y^{2i_p-p-1}
}{((i_{p+1}-i_p-1)!)^2}
dy
\\
\nonumber
&
= &
\frac{1}{((i_1-1)!)^2}
\prod_{p=1}^l
\frac{B(2i_p-p,
2(i_{p+1}-i_p)-1)
}{((i_{p+1}-i_p-1)!)^2}
\\
\nonumber
&
= &
\frac{1}{(2k-2-l)!}
\prod_{p=0}^l
\frac{
(2(i_{p+1}-i_p-1))!}
{
((i_{p+1}-i_p-1)!)^2}
,
\end{eqnarray}
where
$$
B(x,y) := \int_0^1 t^{x-1}(1-t)^{y-1} dt = \frac{(x-1)!(y-1)!}{(x+y-1)!},
\qquad x, y>0,
$$
is the beta function, and
$2^{-(i_{l+1}-i_l-1)}(2(i_{l+1}-i_l-1))! / (i_{l+1}-i_l-1)!$
is the number of pair-partitions of $2(i_{l+1}-i_l-1)$.
\end{Proof}
Alternatively, Proposition~\ref{djklfs2}
can be proved as a consequence of Corollary~\ref{c1},
and the It\^o isometry for multiple Poisson stochastic integrals.
For this, we can use the expression
\begin{align*}
& \sigma_k (kr-\tau ) =
\int_{(k-1)r - \tau}^{(k-1)r}
\int_{s_{k-1}^--r}^{(k-2)r}
\cdots
\int_{s_2^--r}^r dN_{s_1} \cdots dN_{s_{k-1}}
\\
& =
\sum_{l=0}^{k-1}
\sum_{
0=i_0<i_1< \cdots < i_l <i_{l+1}= k}
\int_0^{(k-1)r}
\cdots
\int_0^{(k-1)r}
f_k(s_1,\ldots ,s_{k-1})
\prod_{1\leq q < k \atop
q\notin \{i_1,\ldots ,i_l\}}
\hskip-0.3cm
(\lambda_q ds_q)
\prod_{p=1}^l
(dN_{s_{i_p}} -\lambda_{i_p}ds_{i_p})
\end{align*}
that follows from Corollary~\ref{c1} and
the isometry and orthogonality property \eqref{isom1} of multiple Poisson
stochastic integrals, to show that
\begin{align*}
& \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [\sigma_k^2 (kr-\tau) ]
\\
& =
\sum_{l=0}^{k-1}
\sum_{
0=i_0<i_1< \cdots < i_l <i_{l+1}= k}
\int_{[0,(k-1)r]^l}
\bigg(
\int_{[0,(k-1)r]^{k-1-l}}
f_k(s_1,\ldots ,s_{k-1})
\prod_{1\leq q < k \atop
q\notin \{i_1,\ldots ,i_l\}}
\hskip-0.3cm
(\lambda_q ds_q)
\bigg)^2
\prod_{p=1}^l
(\lambda_{i_p}ds_{i_p}).
\end{align*}
The following table provides variance formulas of $k$-hop counts
computed from Proposition~\ref{djklfs2} by taking $\tau=1$ for simplicity.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|}
\hhline{~-}
\multicolumn{1}{c|}{} &
\addstackgap[3pt]{Variance} \cellcolor{gray!25}
\\
\cline{1-2}
\multicolumn{1}{|c|}{$2$-hops} & \small $\displaystyle \lambda_1$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{$3$-hops}} & \small $\displaystyle \frac{\lambda_1\lambda_2}{2} + 2\frac{\lambda_1^2\lambda_2+\lambda_1\lambda_2^2}{3!}
$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{$4$-hops}} & \small $\displaystyle \frac{\lambda_1 \lambda_2 \lambda_3}{3!} + 2 \frac{\lambda_1^2 \lambda_2 \lambda_3 + \lambda_1 \lambda_2^2 \lambda_3 + \lambda_1 \lambda_2 \lambda_3^2}{4!} + \frac{4 \lambda_1^2 \lambda_2 \lambda_3^2 + 6 \lambda_1^2 \lambda_2^2 \lambda_3 + 4 \lambda_1 \lambda_2^2 \lambda_3^2}{5!}$
\\
\cline{1-2}
\end{tabular}
\caption{Variances of $k$-hop counts.}
\end{table}
\vspace{-0.3cm}
\noindent
In case the Poisson intensities are identical on all cells,
we obtain the following result.
\begin{corollary}
\label{djklfs2-2}
Assume that $\lambda = \lambda_1 = \cdots = \lambda_{k-1}$
and let $\tau \in [0,r]$.
Then, the variance
of the $k$-hop path count $\sigma_k (kr-\tau)$ is given by
\begin{equation}
\label{dfjkvar}
{\mathrm{{\rm Var}}}_\lambda [ \sigma_k (kr-\tau)]
=
\frac{1}{(k-1)!}
\sum_{l=0}^{k-2}
{k-1 \choose l}
(\lambda \tau )^{k-1+l}
\frac{\Gamma ( (k-1-l)/2 +1)}{\Gamma ((k-1+l)/2+1)},
\end{equation}
where $\Gamma$ denotes the gamma function defined as
$$
\Gamma (z) :=
\int_0^\infty x^{z-1} e^{-x} dx, \qquad z>0.
$$
\end{corollary}
\begin{Proof}
Let $X_0,\ldots , X_k$ be independent standard normal random
variables. From the moment relation
$\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \big[ X_p^{2j_p} \big] = 2^{-j_p} (2j_p)!/j_p!$ and
the fact that the sum $X_0^2 + \cdots + X_l^2$ has a chi-square
distribution with $l+1$ degrees of freedom, we have
\begin{align}
\nonumber
\sum_{j_0 + \cdots + j_l = k-1-l\atop j_0,\ldots , j_l\geq 0}
\prod_{p=0}^l {2j_p\choose j_p}
& =
\frac{2^{k-1-l}}{(k-1-l)!}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \left[
\sum_{ j_0 + \cdots + j_l = k-1-l\atop j_0,\ldots , j_l\geq 0}
\frac{(k-1-l)!}{j_0!\cdots j_l!}
\prod_{p=0}^l X_p^{2j_p}
\right]
\\
\nonumber
&
=
\frac{2^{2(k-1-l)}}{(k-1-l)!}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \left[
\left( \frac{X_0^2 + \cdots + X_l^2}{2} \right)^{k-1-l}
\right]
\\
\nonumber
&
= 2^{2(k-1-l)} \frac{\Gamma ( k-1-l + (l+1)/2)}{(k-1-l)!\Gamma ((l+1)/2)}.
\end{align}
Hence from Proposition~\ref{djklfs2} we find
\begin{eqnarray*}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma^2_k (kr-\tau)]
& = &
\sum_{l=0}^{k-1}
\frac{(\lambda \tau)^{2k-2-l}}{(2k-2-l)!}
\frac{2^{2(k-1-l)}}{(k-1-l)!}
\frac{\Gamma ( k-1 + (1-l)/2)}{\Gamma ((l+1)/2)}
\\ & = & \sum_{l=0}^{k-1} \frac{(\lambda \tau)^{k-1+l}}{(k-1+l)!} \frac{2^{2l}}{l!} \frac{\Gamma ( (k-1+l+1)/2)}{\Gamma ((k-1-l+1)/2)}
\\
& = &
\frac{1}{(k-1)!}
\sum_{l=0}^{k-1}
{k-1 \choose l} (\lambda \tau)^{k-1+l} \frac{((k-1-l)/2)!}{((k-1+l)/2)!}.
\end{eqnarray*}
and \eqref{dfjkvar} follows by subtracting the squared mean
$( \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [ \sigma_k (kr-\tau) ] )^2 = (\lambda \tau )^{2k-2} / ((k-1)!)^2$,
which coincides with the term $l=k-1$ in the above sum.
\end{Proof}
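As an illustrative addition (not part of the original text), \eqref{dfjkvar} can be evaluated symbolically for small $k$; the Python/SymPy sketch below reproduces the $3$-hop and $4$-hop entries of the table that follows.
\begin{verbatim}
import sympy as sp

def var_khop(k, lam, tau):
    # variance formula with identical intensities, sum over l = 0, ..., k-2
    s = sum(sp.binomial(k - 1, l) * (lam * tau)**(k - 1 + l)
            * sp.gamma((k - 1 - l) / sp.Integer(2) + 1)
            / sp.gamma((k - 1 + l) / sp.Integer(2) + 1)
            for l in range(k - 1))
    return sp.simplify(s / sp.factorial(k - 1))

tau = sp.symbols('tau', positive=True)
print(var_khop(3, 1, tau))   # expected: tau**2/2 + 2*tau**3/3
print(var_khop(4, 1, tau))   # expected: tau**3/6 + tau**4/4 + 2*tau**5/15
\end{verbatim}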
The following table provides variance formulas of $k$-hop counts
obtained from Corollary~\ref{djklfs2-2},
by taking $\lambda =1$ for simplicity.
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|}
\hhline{~-}
\multicolumn{1}{c|}{} & \cellcolor{gray!25}
\addstackgap[3pt]{Variance}
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[3pt]{$2$-hops}} & $\displaystyle \tau$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{$3$-hops}} & \small $\displaystyle \frac{\tau^2}{2} + 2\frac{\tau^3}{3}
$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{$4$-hops}} & \small $\displaystyle \frac{\tau^3}{3!} + \frac{\tau^4}{4} + 2\frac{\tau^5}{15}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{$5$-hops}} & \small $\displaystyle \frac{\tau^4}{4!} + \frac{\tau^5}{15} + \frac{\tau^6}{24} + \frac{4\tau^7}{315}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{$6$-hops}} & \small $\displaystyle \frac{\tau^5}{5!} + \frac{\tau^6}{72} + \frac{\tau^7}{105} + \frac{\tau^8}{288} + \frac{2 \tau^9}{2835}$
\\
\cline{1-2}
\end{tabular}
\caption{Variances of $k$-hop counts.}
\end{table}
\vspace{-0.3cm}
\noindent
Using the Legendre duplication formula
$(2k-3)! \Gamma ( 3/2)
=
2^{2k-4}
(k-2)! \Gamma (k-1/2)$,
Corollary~\ref{djklfs2-2} also yields the
following asymptotic variance.
In the sequel, for $f$ and $g$ two nonvanishing functions
on ${\mathord{\mathbb R}}_+$ we write $f(\lambda ) \approx g(\lambda )$
if $\lim_{\lambda \to \infty} f(\lambda ) / g(\lambda ) = 1$.
\begin{prop}
As $\lambda$ tends to infinity we have the equivalence
\begin{equation}
\label{fjkdls45}
{\mathrm{{\rm Var}}}_\lambda [ \sigma_k (kr-\tau)]
\approx
\frac{(2 \lambda \tau )^{2k-3}}{2(2k-3)!}.
\end{equation}
\end{prop}
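The equivalence \eqref{fjkdls45} can also be observed numerically by evaluating \eqref{dfjkvar} for increasing values of $\lambda$; the short Python sketch below is an illustrative addition with assumed parameters.
\begin{verbatim}
import math

def var_khop(k, lam, tau):
    # variance of the k-hop count for identical intensities (sum over l = 0, ..., k-2)
    return sum(math.comb(k - 1, l) * (lam * tau)**(k - 1 + l)
               * math.gamma((k - 1 - l) / 2 + 1) / math.gamma((k - 1 + l) / 2 + 1)
               for l in range(k - 1)) / math.factorial(k - 1)

k, tau = 4, 1.0
for lam in (10.0, 100.0, 1000.0):
    leading = (2 * lam * tau)**(2 * k - 3) / (2 * math.factorial(2 * k - 3))
    print(lam, var_khop(k, lam, tau) / leading)   # the ratio approaches 1 as lam grows
\end{verbatim}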
\section{Joint moments recursion}
\label{sec5}
\noindent
In the one-hop case
we simply have
$\sigma_1 (t) = 1$ and $m^{(\lambda )}_{1,n} = 1$, $n \geq 0$.
As for the two-hop count, \eqref{fjlksd23}
yields the joint Poisson moments formula
\begin{equation}
\label{djfkls}
m^{(\lambda )}_{2,n}(\tau_1,\ldots , \tau_n)
=
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma_2(2r-\tau_1)\cdots \sigma_2(2r-\tau_n) ]
=
\sum_{l=1}^n \lambda_1^l
\sum_{\eta_1\cup \cdots \cup \eta_l = \{1,\ldots , n\}}
\prod_{j=1}^l \widehat{\tau}_{\eta_j},
\end{equation}
which follows from the expression
$\lambda_1 \min ( \tau_1, \ldots , \tau_n )$
of the joint Poisson cumulants of
$( \sigma_2(2r-\tau_1) , \ldots , \sigma_2(2r-\tau_n) )$.
The direct application of Proposition~\ref{p1}
to the evaluation of higher order joint moments
of $k$-hop counts
is not an easy task due to the complexity
of the summations over partitions involved in \eqref{djkls12}.
In Proposition~\ref{433} we propose to compute joint moments
by a recursion argument
using the multiple Poisson stochastic integral
representation \eqref{fjld3} %
instead of the $U$-statistics expression \eqref{fjda}.
Particular cases are considered with explicit computations for $n=1,2,3$ in
Appendix~\ref{s9}.
\begin{prop}
\label{433}
For $k\geq 1$, the joint moments
$$
m^{(\lambda )}_{k,n} (\tau_1,\ldots , \tau_n)
: = \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [ \sigma_k(kr-\tau_1)\cdots \sigma_k(kr-\tau_n) ],
\qquad 0 \leq \tau_1,\ldots , \tau_n \leq r,
$$
satisfy the recursion
\begin{eqnarray}
\label{fdfdf}
\lefteqn{
m^{(\lambda )}_{k+1,n} (\tau_1,\ldots , \tau_n)}
\\
\nonumber
& = &
\sum_{l=1}^n
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\int_0^{\widehat{\tau}_{\pi_l}}
\cdots
\int_0^{\widehat{\tau}_{\pi_1}}
m^{(\lambda )}_{k,n} (\widebar{u}_{\pi_1} , \ldots , \widebar{u}_{\pi_l} )
\lambda_k (du_1) \cdots \lambda_k (du_l),
\end{eqnarray}
$0\leq \tau_1 , \ldots , \tau_n \leq r$,
where
$\widebar{u}_{\pi_i}:=( \underbrace{u_i,\ldots, u_i}_{|\pi_i|~ {\rm times }} )$ and
$\widehat{\tau}_{\pi}:= \min_{i\in \pi} \tau_i$ for $\pi \subset \{1,\ldots , n\}$.
\end{prop}
\begin{Proof}
By Proposition~\ref{fjkldsa}, when $\tau \in [0,r]$ we have
$\sigma_{k+1} ( (k+1) r-\tau) \stackrel{d}{\simeq} Z^{(k+1)}_\tau$ in distribution,
where $Z^{(k+1)}_\tau$ satisfies the recursion
\begin{equation}
\label{kldsf}
Z^{(k+1)}_\tau = \int_0^\tau Z^{(k)}_u dN^{(k)}_u, \qquad \tau \in [0,r], \quad k \geq 1,
\end{equation}
and $(N^{(l)}_u)_{u\in {\mathord{\mathbb R}}_+}$ is a family of
independent Poisson processes with respective intensities
$\lambda_l(ds):=\lambda ( ds - (l-1) r)$,
$l=1,\ldots , k$.
Hence, by Proposition~\ref{pr11-2},
for $\tau_1 , \ldots , \tau_n \in [0,r]$ we have
\begin{align}
\nonumber
\lefteqn{
m^{(\lambda )}_{k+1,n} (\tau_1,\ldots , \tau_n) =
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits\big[ Z^{(k+1)}_{\tau_1} \cdots Z^{(k+1)}_{\tau_n} \big]
}
\\
\nonumber
& =
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits\left[
\int_0^{\tau_1} Z^{(k)}_u dN^{(k)}_u
\cdots
\int_0^{\tau_n} Z^{(k)}_u dN^{(k)}_u
\right]
\\
\nonumber
& =
\sum_{l=1}^n
\lambda_k^l
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\int_{{\mathord{\mathbb R}}_+^l}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \Bigg[
\prod_{j=1}^l
\prod_{i\in \pi_j}
\big( Z^{(k)}_{u_j} {\bf 1}_{[0,\tau_i]}(u_j) \big)
\Bigg]
\lambda_k(du_1)\cdots \lambda_k(du_l)
\\
\nonumber
& =
\sum_{l=1}^n
\lambda_k^l
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\int_{{\mathord{\mathbb R}}_+^l}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \Bigg[ \prod_{j=1}^l
\big(
\big(Z^{(k)}_{u_j}\big)^{|\pi_j|}
{\bf 1}_{[0,\widehat{\tau}_{\pi_j}]}(u_j)
\big)
\Bigg]
\lambda_k(du_1)\cdots \lambda_k(du_l)
\\
\nonumber
& =
\sum_{l=1}^n
\lambda_k^l
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\int_0^{\widehat{\tau}_{\pi_l}}
\cdots
\int_0^{\widehat{\tau}_{\pi_1}}
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \Bigg[ \prod_{j=1}^l
\big(Z^{(k)}_{u_j}\big)^{|\pi_j|}
\Bigg]
\lambda_k(du_1)\cdots \lambda_k(du_l)
\\
\label{kldsf-2}
& =
\sum_{l=1}^n
\lambda_k^l
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\int_0^{\widehat{\tau}_{\pi_l}}
\cdots
\int_0^{\widehat{\tau}_{\pi_1}}
m^{(\lambda )}_{k,n} (\widebar{u}_{\pi_1},\ldots , \widebar{u}_{\pi_l} )
\lambda_k(du_1)\cdots \lambda_k(du_l).
\end{align}
\end{Proof}
Table~\ref{table3} lists the first four joint moments
$m^{(\lambda )}_{2,n}(\tau_1,\ldots ,\tau_n )$ of
the two-hop counts, computed
as an application of Proposition~\ref{433}
from the command {\rm mk}[$\{ \tau_1 , \ldots , \tau_n \}, \{\lambda_1 \}$] in the Mathematica codes~\ref{code1}-\ref{code2} in appendix for $n=1,2,3,4$,
when $\lambda_1(ds)=\lambda_1 ds =ds$; the entries are written assuming $\tau_1 \leq \cdots \leq \tau_n$, so that $\min ( \tau_i , \tau_j ) = \tau_{\min (i,j)}$.
The case $\lambda_1 (ds)=\lambda_1 ds$, $\lambda_1>0$, is obtained by replacing
$\tau_i$ with $\lambda_1 \tau_i$, $i=1,\ldots , 4$.
\begin{table}[H]
\centering
\begin{tabular}{c|l|}
\hhline{~-}
\multicolumn{1}{c|}{} & \cellcolor{gray!25}
\addstackgap[3pt]{~~~~~~~~~~~~~~~~~~~~Joint moments of 2-hop counts}
\\
\cline{1-2}
\multicolumn{1}{|c|}{First} & $\tau_1$
\\
\cline{1-2}
\multicolumn{1}{|c|}{Second} & $\displaystyle \tau_1 + \tau_1 \tau_2
$
\\
\cline{1-2}
\multicolumn{1}{|c|}{Third} & $\displaystyle
\tau_1 + \tau_1 \tau_3 + 2 \tau_1 \tau_2 + \tau_1\tau_2\tau_3 $
\\
\cline{1-2}
\multicolumn{1}{|c|}{Fourth} & $\displaystyle
\tau_1 + \tau_1 \tau_4 + 2 \tau_1 \tau_3 + 4 \tau_1 \tau_2
+ \tau_1\tau_3\tau_4 + 2 \tau_1\tau_2\tau_4 + 3 \tau_1\tau_2\tau_3
+ \tau_1\tau_2\tau_3\tau_4
$
\\
\cline{1-2}
\end{tabular}
\caption{Joint moments $m^{(\lambda )}_{2,n}(\tau_1,\ldots ,\tau_n )$ of $2$-hop counts of orders $n=1,2,3,4$.}
\label{table3}
\end{table}
\vskip-0.3cm
\noindent
Tables~\ref{table4} and \ref{table5} provide the first four moments of
the three-hop counts and the first three moments of the four-hop counts,
computed
from the commands {\rm mk}[$\{ \tau_1 , \ldots , \tau_n \}, \{\lambda_1,\lambda_2 \}$]
and {\rm mk}[$\{ \tau_1 , \ldots , \tau_n \}, \{\lambda_1,\lambda_2 ,\lambda_3\}$] in Mathematica,
where for simplicity we take $\lambda_1=\lambda_2 = \lambda_3 = 1$ and
$\tau_1=\cdots = \tau_n=\tau$.
\begin{table}[H]
\centering
\begin{tabular}{c|l|}
\hhline{~-}
\multicolumn{1}{c|}{} & \cellcolor{gray!25}
\addstackgap[3pt]{~~~~~~~~~~Moments of 3-hop counts}
\\ [0.3ex]
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{First}} & \small $\displaystyle \frac{\tau^2}{2} $
\\ [1ex]
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Second}} & \small $\displaystyle \frac{\tau^2}{2} + 2 \frac{\tau^3}{3} + \frac{\tau^4}{4}
$
\\ [1ex]
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Third}} & \small $\displaystyle \frac{\tau^2}{2} + 2\tau^3 + 5\frac{\tau^4}{2} + \tau^5 + \frac{\tau^6}{8} $
\\ [1ex]
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Fourth}} & \small $\displaystyle
\frac{\tau^2}{2} + 14 \frac{\tau^3}{3} + 53 \frac{\tau^4}{4} + 66 \frac{\tau^5}{5} + 67 \frac{\tau^6}{12} + \tau^7 + \frac{\tau^8}{16}$
\\ [1ex]
\cline{1-2}
\end{tabular}
\caption{Moments $m^{(\lambda )}_{3,n}(\tau,\ldots ,\tau )$ of $3$-hop counts of orders $n=1,2,3,4$.}
\label{table4}
\end{table}
\vskip-0.3cm
\begin{table}[H]
\centering
\begin{tabular}{c|l|}
\hhline{~-}
\multicolumn{1}{c|}{} & \cellcolor{gray!25}
\addstackgap[3pt]{~~~~~~~~~~Moments of 4-hop counts}
\\ [0.3ex]
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{First}} & \small $\displaystyle \frac{\tau^3}{3!} $
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Second}} & \small $\displaystyle \frac{\tau^3}{3!} + \frac{\tau^4}{4} + 2 \frac{\tau^5}{15} + \frac{\tau^6}{36}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Third}} & \small $\displaystyle \frac{\tau^3}{3!} + 3 \frac{\tau^4}{4} + 5 \frac{\tau^5}{4} + \frac{59 \tau^6}{60} + \frac{13 \tau^7}{35} + \frac{\tau^8}{15} + \frac{\tau^9}{216}$
\\
\cline{1-2}
\end{tabular}
\caption{Moments $m^{(\lambda )}_{4,n}(\tau,\ldots ,\tau )$ of $4$-hop counts of orders $n=1,2,3$.}
\label{table5}
\end{table}
\vspace{-0.3cm}
\noindent
The following figures plot higher-order moment formulas up to order six,
together with their confirmation by Monte Carlo simulations.
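A minimal Monte Carlo sketch of such a check is included below as an illustrative addition; the $3$-hop count is simulated through the representation \eqref{fjda}, with $\tau=\lambda=1$ assumed.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
tau, lam, n_sim = 1.0, 1.0, 100_000   # assumed parameters, 3-hop case (k = 3)

def khop_count(rng, k, tau, lam):
    # one Poisson sample per lens l = 1, ..., k-1; count increasing selections
    levels = [rng.uniform(0.0, tau, rng.poisson(lam * tau)) for _ in range(k - 1)]
    return sum(1 for c in product(*levels) if all(a < b for a, b in zip(c, c[1:])))

x = np.array([khop_count(rng, 3, tau, lam) for _ in range(n_sim)], dtype=float)
print(x.mean(), (x**2).mean())
# expected for tau = lambda = 1 (Tables above):
#   first moment  1/2             = 0.5
#   second moment 1/2 + 2/3 + 1/4 ~ 1.4167
\end{verbatim}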
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure2_a.pdf}
\caption{Third moments.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure2_b.pdf}
\caption{Fourth moments.}
\end{subfigure}
\caption{Third and fourth joint moments of $4$-hop counts.}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure3_a.pdf}
\caption{Fifth moments.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure3_b.pdf}
\caption{Sixth moments.}
\end{subfigure}
\caption{Fifth and sixth joint moments of $4$-hop counts.}
\end{figure}
\section{Joint cumulants recursion}
\label{sec6}
In the sequel, given
$\{ \pi_1,\ldots , \pi_l \}$ a partition of $\{1,\ldots , n\}$
and $\tau_1,\ldots , \tau_l \in [0,r]$ we denote by
$$
c_{k,n}^{(\lambda)} ( \widebar{\tau}_{\pi_1} ; \ldots ; \widebar{\tau}_{\pi_l} ) : =
\kappa_\lambda \big( ( \sigma_k(kr - \tau_1) )^{|\pi_1|}, \ldots , ( \sigma_k(kr - \tau_l) )^{|\pi_l|}\big)
$$
the joint cumulant
of $\big( ( \sigma_k(kr - \tau_1) )^{|\pi_1|}, \ldots , ( \sigma_k(kr - \tau_l) )^{|\pi_l|}\big)$, and we let
$c_{k,n}^{(\lambda)} ( \tau_1 ; \ldots ; \tau_n )$
denote the joint cumulant
$\kappa_\lambda ( \sigma_k(kr - \tau_1) , \ldots , \sigma_k(kr - \tau_n ) )$.
The following proposition is the counterpart of the moment recursion
of Proposition~\ref{433} obtained by the M\"obius inversion relation
\eqref{bpi}.
Particular cases are considered with explicit computations for $n=2,3,4$ in
Appendix~\ref{s10}.
\begin{prop}
\label{fjhkds}
Let $k, n\geq 1$ and $\tau_1, \ldots , \tau_n \in [0,r]$.
The joint cumulant of order $n$ of
$( \sigma_{k+1} ((k+1)r - \tau_1) , \ldots , \sigma_{k+1} ((k+1)r - \tau_n) )$
satisfies the recursion
\begin{equation}
\label{fjkla}
c_{k+1,n}^{(\lambda)} (\tau_1 ; \ldots ; \tau_n )
=
\sum_{l=1}^n
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\} }
\int_0^{\widehat{\tau}_{\pi_1}} \cdots \int_0^{\widehat{\tau}_{\pi_l}}
c_{k,l}^{(\lambda)} (\widebar{s}_{\pi_1} ; \ldots ; \widebar{s}_{\pi_l} )
\lambda_k ( ds_1 ) \cdots \lambda_k ( ds_l )
,
\end{equation}
where we let
$\widehat{\tau}_{\pi}:= \min_{i\in \pi} \tau_i$ for $\pi \subset \{1,\ldots , n\}$.
\end{prop}
\begin{Proof}
When $k=1$ we have $\sigma_1(\tau ) =1$, $\tau \in [0,r]$,
hence $c_{1,n}^{(\lambda)} (\tau_1;\ldots ; \tau_n ) = {\bf 1}_{\{ n =1\}}$, and
by \eqref{fjlksd23} the cumulants of the two-hop count
$\sigma_2 (2r-\tau) = N_r - N_{r-\tau} \stackrel{d}{\simeq} N^{(1)}_\tau$,
$\tau \in [0,r]$,
are the joint Poisson cumulants
\begin{equation}
\label{fjkldsf-c}
c_{2,n}^{(\lambda)} (\tau_1 ; \ldots ; \tau_n ) = \min (\tau_1,\ldots , \tau_n ),
\qquad
n \geq 1,
\end{equation}
which is consistent with the joint Poisson moments formula
\eqref{djfkls} and shows that \eqref{fjkla} holds at the rank $k=1$.
Next,
assuming that \eqref{fjkla} holds at the rank $k\geq 1$,
by the joint cumulant-moment inversion relation \eqref{jklsda1}
we have
\begin{align*}
&
c_{k+1,n}^{(\lambda)} (\tau_1 ; \ldots ; \tau_n )
=
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{q=1}^l
m^{(\lambda )}_{k+1,|\pi_q|} ( \tau_{\pi_q} )
\\
& =
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\\
& \quad \sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{q=1}^l
\sum_{\eta^q \preceq \pi_q }
\int_0^{\widehat{\tau}_{\eta^q_1}}
\cdots
\int_0^{\widehat{\tau}_{\eta^q_{|\eta^q|}} }
m^{(\lambda )}_{k,|\pi_q|} \big(\widebar{u}_{\eta^q_1} ;\ldots ; \widebar{u}_{\eta^q_{|\eta^q|}} \big)
\lambda_k (du_1) \cdots \lambda_k (du_{|\eta^q|}),
\\
& =
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\\
& \quad \sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\sum_{\eta^i \preceq \pi_i
\atop { 1 \leq i \leq l }
}
\prod_{q=1}^l
\int_0^{\widehat{\tau}_{\eta^q_1}}
\cdots
\int_0^{\widehat{\tau}_{\eta^q_{|\eta^q|}} }
m^{(\lambda )}_{k,|\eta^q|} \big(\widebar{u}_{\eta^q_1} ;\ldots ; \widebar{u}_{\eta^q_{|\eta^q|}} \big)
\lambda_k (du_1) \cdots \lambda_k (du_{|\eta^q|}),
\\
& =
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\\
& \quad
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\sum_{\eta^i \preceq \pi_i
\atop { 1 \leq i \leq l } }
\prod_{q=1}^l
\sum_{\psi^q \preceq \eta^q }
\int_0^{\widehat{\tau}_{\eta^q_1}}
\cdots
\int_0^{\widehat{\tau}_{\eta^q_{|\eta^q|}} }
\prod_{m=1}^{|\psi^q|}
c_{k,n}^{(\lambda)} \big(\big( \widebar{u}_{\eta^q_i}\big)_{i \in \psi^q_m} \big)
\lambda_k (du_1) \cdots \lambda_k (du_{|\eta^q|})
\\
& =
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\\
&
\quad
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\sum_{\psi^i \preceq \eta^i \preceq \pi_i
\atop
{ 1 \leq i \leq l }
}
\int_0^{\widehat{\tau}_{\eta^q_1}}
\cdots
\int_0^{\widehat{\tau}_{\eta^q_{|\eta^q|}} }
\prod_{m=1}^{|\psi^q|}
\prod_{q=1}^l
c_{k,n}^{(\lambda)} \big(\big( \widebar{u}_{\eta^q_i}\big)_{i \in \psi^q_m} \big)
\lambda_k (du_1) \cdots \lambda_k (du_{|\psi^q|})
\\
& =
\sum_{l=1}^n
\mu ( \sigma , \widehat{\bf 1} )
\sum_{\eta \preceq \sigma \preceq \widehat{\bf 1} }
\sum_{\psi \preceq \eta}
\int_0^{\widehat{\tau}_{\psi_1}}
\cdots
\int_0^{\widehat{\tau}_{\psi_{|\psi |}} }
\prod_{A\in \eta}
c_{k,n}^{(\lambda)} \big(\big( \widebar{u}_{A \cap B} \big)_{B \in \psi} \big)
\lambda_k (du_1) \cdots \lambda_k (du_{|\psi |})
\\
& =
\sum_{l=1}^n
\mu ( \sigma , \widehat{\bf 1} )
\sum_{\eta \preceq \sigma \preceq \widehat{\bf 1} }
G(\eta)
\\
& = G ( \widehat{\bf 1})
\\
& =
\sum_{\eta \preceq \widehat{\bf 1} }
\int_0^{\widehat{\tau}_{\eta_1}}
\cdots
\int_0^{\widehat{\tau}_{\eta_{|\eta|}} }
c_{k,n}^{(\lambda)} (\widebar{u}_{\eta_1};\ldots ; \widebar{u}_{\eta_{|\eta|}} )
\lambda_k (du_1) \cdots \lambda_k (du_{|\eta|})
,
\end{align*}
where we applied Relation~\eqref{fjlksa1}
with $\pi := \widehat{\bf 1}$ and
$$
G ( \eta ) :=
\sum_{\psi \preceq \eta}
\int_0^{\widehat{\tau}_{\psi_1}}
\cdots
\int_0^{\widehat{\tau}_{\psi_{|\psi |}} }
\prod_{A\in \eta}
c_{k,n}^{(\lambda)} \big(\big( \widebar{u}_{A \cap B} \big)_{B \in \psi} \big)
\lambda_k (du_1) \cdots \lambda_k (du_{|\psi |})
$$
which shows \eqref{fjkla} by induction on $k\geq 1$.
\end{Proof}
In order to use
Proposition~\ref{fjhkds} as an induction relation,
the higher order cumulants $c_{k,l}^{(\lambda)} ( \widebar{s}_{\pi_1} ;\ldots ; \widebar{s}_{\pi_l} )$
appearing in \eqref{fjkla} can be computed recursively using the
following proposition.
\begin{prop}
\label{jkld}
For any sequence $(X_1,\ldots , X_{n+1})$ of random variables we have
the cumulant relation
$$
\kappa (X_1,\ldots , X_{n-1} , X_n X_{n+1})
=
\kappa (X_1,\ldots ,X_n,X_{n+1} ) + \sum_{\eta_1\cup \eta_2 = \{ 1, \ldots , n+1 \} \atop \eta_1 \ni n, \ \! \eta_2 \ni n+1, \ \! \eta_1 \cap \eta_2 = \emptyset
}
\kappa \big( (X_i)_{i\in \eta_1} \big) \kappa \big( (X_i)_{i\in \eta_2} \big).
$$
\end{prop}
\begin{Proof}
This relation is a particular case of the cumulant-moment relationship
$$
\kappa \big(Z_1,\ldots , Z_n\big)
=
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \Bigg[ \prod_{i\in \pi_j} Z_i \Bigg]
$$
which yields, taking $Z_1:=X_1,Z_2:=X_2,\ldots , Z_n:=X_nX_{n+1}$,
\begin{eqnarray*}
\lefteqn{
\! \! \! \! \kappa (X_1,\ldots , X_{n-1} , X_n X_{n+1})
=
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \Bigg[ \prod_{i\in \pi_j \atop i \not= n} X_i \prod_{ \pi_j \ni n } ( X_nX_{n+1}) \Bigg]
}
\\
& = &
\sum_{l=1}^n
(l-1)!
(-1)^{l-1}
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\prod_{j=1}^l
\sum_{p=1}^{|\pi_j|+1}
\sum_{\eta_1\cup \cdots \cup \eta_p =
\pi_j \cup \{ n, n+1\} }
\prod_{v=1}^p \kappa \big( ( X_u)_{u\in \eta_v} \big)
\\
& = &
\kappa (X_1,\ldots ,X_n,X_{n+1} ) + \sum_{\eta_1\cup \eta_2 = \{ 1, \ldots , n+1 \} \atop \eta_1 \ni n, \ \! \eta_2 \ni n+1, \ \! \eta_1 \cap \eta_2 = \emptyset }
\kappa \big( (X_i)_{i\in \eta_1} \big) \kappa \big( (X_i)_{i\in \eta_2} \big).
\end{eqnarray*}
\end{Proof}
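As an illustrative addition (not part of the original text), the univariate cumulants listed in Table~\ref{table6} below can also be recovered from the moments of Table~\ref{table4} by the classical moment-cumulant inversion; a Python/SymPy sketch:
\begin{verbatim}
import sympy as sp

tau = sp.symbols('tau', positive=True)

# moments of the 3-hop count (Table of 3-hop moments, lambda = 1)
m = [sp.Integer(1),
     tau**2/2,
     tau**2/2 + 2*tau**3/3 + tau**4/4,
     tau**2/2 + 2*tau**3 + 5*tau**4/2 + tau**5 + tau**6/8,
     tau**2/2 + 14*tau**3/3 + 53*tau**4/4 + 66*tau**5/5 + 67*tau**6/12
       + tau**7 + tau**8/16]

# kappa_n = m_n - sum_{j=1}^{n-1} C(n-1, j-1) kappa_j m_{n-j}
kappa = [None, m[1]]
for n in range(2, 5):
    kappa.append(sp.expand(m[n] - sum(sp.binomial(n - 1, j - 1) * kappa[j] * m[n - j]
                                      for j in range(1, n))))
for n in range(1, 5):
    print(n, kappa[n])
# expected: tau**2/2,  tau**2/2 + 2*tau**3/3,  tau**2/2 + 2*tau**3 + 7*tau**4/4,
#           tau**2/2 + 14*tau**3/3 + 23*tau**4/2 + 36*tau**5/5
\end{verbatim}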
\noindent
As an application of Proposition~\ref{jkld},
Tables~\ref{table6} and \ref{table7} present the first five and first three cumulants
$c_{3,n}^{(\lambda)} (\tau ; \ldots ; \tau )$ of
the three-hop count
and $c_{4,n}^{(\lambda)} (\tau ; \ldots ; \tau )$ of the four-hop count
computed by the commands
{\rm ck}[$\{ \tau_1 , \ldots , \tau_n \}, \{ 1 , \ldots , 1 \} , \{ \lambda_1 ,\lambda_2 \} ]$
and
{\rm ck}[$\{ \tau_1 , \ldots , \tau_n \}, \\ \{ 1 , \ldots , 1 \} , \{ \lambda_1 , \lambda_2 , \lambda_3 \} ]$
in the Mathematica codes~\ref{code3}-\ref{code4} in appendix for $n=1,2,3,4,5$,
where for simplicity we take $\lambda_i = 1$, $i=1,2,3$ and
$\tau_j=\tau$, $j=1,\ldots , 5$.
\begin{table}[H]
\centering
\begin{tabular}{c|l|}
\hhline{~-}
\multicolumn{1}{c|}{} & \cellcolor{gray!25}
\addstackgap[3pt]{~~~Cumulants of 3-hop counts}
\\ [0.3ex]
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{First}} & \small $\displaystyle \frac{\tau^2}{2}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Second}} & \small $\displaystyle \frac{\tau^2}{2}+\frac{2\tau^3}{3}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Third}} & \small $\displaystyle \frac{\tau^2}{2} + 2\tau^3 + \frac{7\tau^4}{4}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Fourth}} & \small $\displaystyle
\frac{\tau^2}{2}
+\frac{14 \tau^3}{3}
+\frac{23 \tau^4}{2}
+ \frac{36 \tau^5}{5}
$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Fifth}} & \small $\displaystyle
\frac{\tau^2}{2} + 10 \tau^3 + \frac{215}{4} \tau^4 + 86 \tau^5 + 41 \tau^6
$
\\
\cline{1-2}
\end{tabular}
\caption{Cumulants $c_{3,n}^{(\lambda)} (\tau ; \ldots ; \tau )$ of $3$-hop counts of orders $n=1,\ldots ,5$.}
\label{table6}
\end{table}
\vskip-0.3cm
\begin{table}[H]
\centering
\begin{tabular}{c|l|}
\hhline{~-}
\multicolumn{1}{c|}{} & \cellcolor{gray!25}
\addstackgap[3pt]{~Cumulants of $4$-hop counts}
\\ [0.3ex]
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{First}} & \small $\displaystyle \frac{\tau^3}{3!}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Second}} & \small $\displaystyle \frac{\tau^3}{3!} + \frac{\tau^4}{4} + \frac{2 \tau^5}{15}$
\\
\cline{1-2}
\multicolumn{1}{|c|}{\addstackgap[10pt]{Third}} & \small $\displaystyle \frac{\tau^3}{3!} + \frac{3 \tau^4}{4} + \frac{5 \tau^5}{4} + \frac{9 \tau^6}{10} + \frac{69 \tau^7}{280}$
\\
\cline{1-2}
\end{tabular}
\caption{Cumulants $c_{4,n}^{(\lambda)} (\tau ; \ldots ; \tau )$ of $4$-hop counts of orders $n=1,2,3$.}
\label{table7}
\end{table}
\vskip-0.3cm
\noindent
The following figures present third and fourth order cumulant plots
for $4$-hop counts,
together with their confirmations by Monte Carlo simulations.
\noindent
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure4_a.pdf}
\caption{Third cumulant
$c_{4,3}^{(\lambda)} (\tau ; \tau ; \tau )$.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure4_b.pdf}
\caption{Fourth cumulant
$c_{4,4}^{(\lambda)} (\tau ;\tau ; \tau ; \tau )$.}
\end{subfigure}
\caption{Third and fourth cumulants of the $4$-hop count
$\sigma_4(4-\tau )$.}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure5_a.pdf}
\caption{Skewness of $\sigma_4(4-\tau )$.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure5_b.pdf}
\caption{Kurtosis of $\sigma_4(4-\tau )$.}
\end{subfigure}
\caption{Skewness and kurtosis of
the $4$-hop count $\sigma_4(4-\tau )$.}
\end{figure}
\section{Moment and cumulant bounds}
\label{sec7}
In this section we take
$\lambda_l(ds):= \lambda ds$, $l=1,\ldots , k$,
$\lambda >0$,
and we write $f(\lambda ) = O(\lambda^n )$ if there exist $C_n>0$ and
$\lambda_n >0$ such that
$|f(\lambda )| \leq C_n \lambda^n$ for any $\lambda >\lambda_n$.
\begin{prop}[Moment bound]
For any $\tau \in [0,r]$ and $n\geq 0$, we have
$$
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [ ( \sigma_k (kr - \tau ) )^n ]
\leq
( \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ ( N_{\lambda \tau} )^n ] )^{k-1}
=
O \big( ( \lambda \tau )^{(k-1)n} \big)
$$
as $\lambda$ tends to infinity.
\end{prop}
\begin{Proof}
We show by induction on $k\geq 1$ that
$$
m^{(\lambda )}_{k,n} (\tau_1 ,\ldots , \tau_n )
\leq
( \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ ( N_{\lambda \tau } )^n ] )^{k-1}
,
\qquad
0 \leq \tau_1,\ldots , \tau_n \leq \tau,
$$
where,
denoting by
$S(n,l)$ the Stirling number of the second
kind,
$$
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ ( N_{\lambda \tau } )^n ] =
\sum_{l=1}^n
S(n,l) ( \lambda \tau )^l
= O(\lambda^n).
$$
The case $k=1$ is covered by the fact that
$\sigma_1 (r - \tau ) = 1$ and $m^{(\lambda )}_{1,n}(\tau ) = 1$, $n \geq 0$.
Next, by the recurrence relation \eqref{fdfdf} we have
\begin{eqnarray*}
m^{(\lambda )}_{k+1,n} (\tau ,\ldots , \tau )
& \leq &
\sum_{l=1}^n
( \lambda \tau )^l \sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\}}
\sup_{0 \leq \tau_1,\ldots , \tau_l \leq \tau }
m^{(\lambda )}_{k,n} (\tau_1,\ldots , \tau_l )
\\
& = &
\sup_{0 \leq \tau_1,\ldots , \tau_l \leq \tau}
m^{(\lambda )}_{k,n} (\tau_1,\ldots , \tau_l )
\sum_{l=1}^n
S(n,l) ( \lambda \tau )^l
\\
& = &
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ ( N_{\lambda \tau } )^n ]
m^{(\lambda )}_{k,n} (\tau,\ldots , \tau ),
\qquad k \geq 1.
\end{eqnarray*}
\end{Proof}
The following bound on joint cumulants is also obtained by induction.
\begin{prop}
\label{1fdjkl}
For any $\tau \in [0,r]$,
$k\geq 2$ and $l_1,\ldots , l_p \geq 0$, $p\geq 1$,
we have the joint cumulant bound
\begin{equation}
\label{c3789}
\kappa_\lambda \big( ( \sigma_k(kr - \tau ) )^{l_1}, \ldots , ( \sigma_k(kr - \tau ) )^{l_p}\big)
\leq (2(\lambda + 1) \tau )^{(k-1)(l_1+\cdots + l_p)+1-p} (B_n)^{k-2},
\end{equation}
where $B_n$ denotes the Bell number of order $n := l_1 + \cdots + l_p \geq 1$.
In particular, we have
\begin{equation}
\label{fjkls343}
c_{k,n}^{(\lambda)} (\tau ; \ldots ; \tau ) \leq (2(\lambda + 1) \tau )^{ 1 + (k-2)n } (B_n)^{k-2},
\quad \tau \in [0,r].
\end{equation}
\end{prop}
\begin{Proof}
We note that when $l_1 = \cdots = l_p = 1$, i.e. $n=p$, by \eqref{fjkldsf-c} we have
$c_{2,n}^{(\lambda)} (\tau_1 ; \ldots ; \tau_n) = \min (\tau_1,\ldots , \tau_n)$,
hence by induction from Proposition~\ref{jkld} we obtain
$$
\kappa_\lambda \big( ( \sigma_2(2r - \tau_1) )^{l_1}, \ldots , ( \sigma_2(2r - \tau_p) )^{l_p}\big)
\leq (2(\lambda + 1)\tau )^{l_1+\cdots + l_p +1-p},
$$
$0 \leq \tau_1, \ldots , \tau_p \leq \tau$,
which is \eqref{c3789} for $k=2$.
Next, using Proposition~\ref{fjhkds},
assuming that \eqref{c3789} holds at the rank $k \geq 2$,
by induction for $\lambda \geq 1$
we have
\begin{eqnarray}
\lefteqn{
\nonumber
\! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \!
\big|
\kappa_\lambda \big( \sigma_{k+1}((k+1)r - \tau ) , \ldots , \sigma_{k+1}((k+1)r - \tau ) \big)
\big|
}
\\
& \leq &
\sum_{l=1}^n
\lambda^l \sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\} }
\int_{[0,\tau ]^l}
|
c_{k,l}^{(\lambda)} (\widebar{\tau}_{\pi_1} ;\ldots ; \widebar{\tau}_{\pi_l} )
|
d\tau_1\cdots d\tau_l
\\
\nonumber
& \leq &
(B_n)^{k-2}
\sum_{l=1}^n
( \lambda \tau ) ^l
\sum_{\pi_1\cup \cdots \cup \pi_l = \{1,\ldots , n\} }
(2(\lambda +1) \tau )^{(k-1)n+1-l}
\\
\nonumber
& \leq &
(B_n)^{k-2}
(2(\lambda +1) \tau )^{(k-1)n+1}
\sum_{l=1}^n
S(n,l)
\\
\nonumber
& = &
(B_n)^{k-1}
(2(\lambda +1) \tau )^{(k-1)n+1},
\end{eqnarray}
which yields
$$
\kappa_\lambda \big( ( \sigma_{k+1}((k+1)r - \tau_1) )^{l_1}, \ldots , ( \sigma_{k+1}((k+1)r - \tau_p ) )^{l_p}\big)
\leq (B_n)^{k-1}(2(\lambda + 1)\tau )^{k ( l_1 + \cdots + l_p ) +1-p},
$$
$0 \leq \tau_1, \ldots , \tau_p \leq \tau$,
by induction from Proposition~\ref{jkld}.
\end{Proof}
\section{Berry-Esseen bounds}
\label{sec8}
In this section we will use the Wasserstein and Kolmogorov distances
$d_W(X,Y)$ and $d_K(X,Y)$
between the distributions of random variables $X, Y$, defined as
$$
d_W (X,Y):
=\sup_{h\in\mathrm{Lip}(1)} |\mathrm{E}[h(X)]-\mathrm{E}[h(Y)]|,
$$
where $\mathrm{Lip}(1)$ denotes the class of real-valued
Lipschitz functions with
Lipschitz constant less than or equal to $1$,
and
$$
d_K(X,Y) : = \sup_{x\in {\mathord{\mathbb R}}} | {\mathord{\mathbb P}} ( X \leq x ) - {\mathord{\mathbb P}}( Y \leq x ) |.
$$
For $\lambda>0$ and $t\in {\mathord{\mathbb R}}_+$ we let
\begin{eqnarray}
\label{fjkldsf1}
\lefteqn{
\sigma_k^{(\lambda )} (t)
:= \int_0^{\lambda t}
\cdots
\int_0^{\lambda t}
f_k(s_1/\lambda ,\ldots , s_{k-1} / \lambda ) dN_{s_1} \cdots dN_{s_{k-1}}
}
\\
\nonumber
& = & \frac{1}{(k-1)!} \sum_{l=0}^{k-1}
\lambda^{k-1-l}
{k-1 \choose l}
I_l \left( \int_0^t \cdots \int_0^t
\widetilde{f}_k (*,s_{l+1},\ldots , s_{k-1} ) ds_{l+1}\cdots ds_{k-1} \right)
\end{eqnarray}
according to \eqref{fjkldsf} and \eqref{skt},
so that the distribution of $\sigma_k^{(\lambda )} (t)$
under ${\mathord{\mathbb P}}$ is the distribution of $\sigma_k (t)$ under
the distribution ${\mathord{\mathbb P}}_\lambda$ of the 1D unit disk graph
with constant Poisson intensity $\lambda >0$.
\medskip
In addition, given $k\geq 2$ and $t\in [(k-1)r,kr)$,
we consider the renormalized $k$-hop count
$$
\widetilde{\sigma}_k^{(\lambda )} (t):= \frac{\sigma_k^{(\lambda )} (t)- \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [\sigma_k(t)]}{\sqrt{{\mathrm{{\rm Var}}}_\lambda [ \sigma_k(t)]}}.
$$
From Proposition~\ref{1fdjkl}, for any $t \in [(k-1)r,kr)$, the skewness
of $\sigma_k (t )$ satisfies
$$
\frac{\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [ ( \sigma_k( t ) - \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits [ \sigma_k(t ) ] )^3 ]}{( {\mathrm{{\rm Var}}}_\lambda [ \sigma_k ( t )] )^{3/2}}
=
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits \big[ \big( \widetilde{\sigma}^{(\lambda )}_k( t ) \big)^3 \big]
=
\frac{
c_{k,3}^{(\lambda)} (kr - t ; kr - t ; kr - t )
}{( {\mathrm{{\rm Var}}}_\lambda [ \sigma_k (t)] )^{3/2}}
= O ( \lambda^{-1/2} ).
$$
By Theorem~1 in \cite{Janson1988}, this shows the convergence in distribution
of $\widetilde{\sigma}_k^{(\lambda )} (t)$
to the standard normal distribution ${\cal N}(0,1)$ as $\lambda$ tends to infinity,
as illustrated in Figures~\ref{f1}-\ref{f2} using empirical probability
density plots.
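The following Python sketch (an illustrative addition, with assumed parameters) generates standardized $3$-hop counts in the same way as in the figures below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
tau, lam, n_sim = 1.0, 50.0, 50_000   # assumed parameters, 3-hop case

def three_hop_count(rng, tau, lam):
    # one Poisson sample per lens; count pairs x1 < x2 with x1 in lens 1, x2 in lens 2
    x1 = np.sort(rng.uniform(0.0, tau, rng.poisson(lam * tau)))
    x2 = rng.uniform(0.0, tau, rng.poisson(lam * tau))
    return int(np.searchsorted(x1, x2).sum())

x = np.array([three_hop_count(rng, tau, lam) for _ in range(n_sim)], dtype=float)
z = (x - x.mean()) / x.std()
print(np.quantile(z, [0.1, 0.5, 0.9]))   # close to the N(0,1) quantiles -1.28, 0, 1.28
\end{verbatim}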
\vspace{-0.6cm}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure6_a.pdf}
\vskip-.2cm
\caption{$\lambda = 5$.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure6_b.pdf}
\vskip-.2cm
\caption{$\lambda = 400$.}
\end{subfigure}
\caption{Convergence of $3$-hop counts using probability density functions.}
\label{f1}
\end{figure}
\vspace{-0.6cm}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure7_a.pdf}
\vskip-.2cm
\caption{$\lambda = 5$.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure7_b.pdf}
\vskip-.2cm
\caption{$\lambda = 400$.}
\end{subfigure}
\caption{Convergence of $4$-hop counts using probability density functions.}
\label{f2}
\end{figure}
\vspace{-0.4cm}
\noindent
In addition, by \eqref{fjkdls45} and \eqref{fjkls343} we have
$$
\frac{
c_{k,n}^{(\lambda)} (kr - t ; \ldots ; kr - t )}{( {\mathrm{{\rm Var}}}_\lambda [ \sigma_k(t)] )^{n/2}}
=
(B_n)^{k-2} O ( \lambda^{1-n/2} )
\leq
(n!)^{k-2} O ( \lambda^{1-n/2} )
, \quad n \geq 2,
$$
hence the Statulevi\v{c}ius condition
is satisfied with $\gamma := k-3$ and $\Delta := \sqrt{\lambda}$,
which,
by \cite{rudzkis},
Corollary~2.1 in \S~1.3 in \cite{saulis},
see also Theorem~2.4 in \cite{doering}, yields the Berry-Esseen bound
\begin{equation}
\label{fjkf}
d_K \big( \widetilde{\sigma}_k^{(\lambda )} (t) , {\cal N} \big)
\leq \frac{C(k,r)}{\lambda^{1/(2 + 4(k-3))}}
\end{equation}
for $t\in [(k-1)r,kr)$ as $\lambda$ tends to infinity.
More precisely, \eqref{fjkf} can be improved as in the following
proposition.
\begin{prop}
\label{fsklf34}
Let $k\geq 2$ and $t\in [(k-1)r,kr)$.
The renormalized $k$-hop count
$\widetilde{\sigma}_k^{(\lambda )} (t)$
satisfies the Wasserstein and Kolmogorov bounds
\begin{equation}
\label{fjkldsf-2}
d_{K/W} \big( \widetilde{\sigma}_k^{(\lambda )} (t) , {\cal N} \big)
\leq \frac{C(k,r)}{\sqrt{\lambda}}
\end{equation}
for some constant $C(k,r)>0$ as $\lambda$ tends to infinity.
\end{prop}
\begin{Proof}
The kurtosis of $\sigma_k^{(\lambda )}(t )$ satisfies
\begin{eqnarray*}
\frac{\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [ ( \sigma_k(t) - \mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda [ \sigma_k(t) ] )^4 ]}{( {\mathrm{{\rm Var}}}_\lambda [ \sigma_k (t)] )^2} - 3
& = &
\mathop{\hbox{\rm I\kern-0.20em E}}\nolimits_\lambda \big[ \big( \widetilde{\sigma}_k^{(\lambda )}(t) \big)^4 \big] - 3
\\
& = &
\frac{c_{k,4}^{(\lambda)} (kr-t;kr-t;kr-t;kr-t)}{( {\mathrm{{\rm Var}}}_\lambda [ \sigma_k (t)] )^2}
\\
& = &
(B_4)^{k-2} O ( \lambda^{-1} ),
\end{eqnarray*}
as $\lambda$ tends to infinity.
The Kolmogorov distance bound in \eqref{fjkldsf-2}
then follows from the fourth moment theorem for $U$-statistics and sums of
multiple stochastic integrals, see Corollary~4.10 in \cite{eichelsbacher},
applied to \eqref{fjkldsf1},
see also Theorem~3 in \cite{lachieze-rey}.
\medskip
Regarding the Wasserstein distance bound,
according to \eqref{fjda} we can represent $\sigma_k^{(\lambda )} (t)$
as the $U$-statistics
$$
\sigma_k^{(\lambda )} (t)
=
\sum_{((x_1,l_1),\ldots , (x_{k-1},l_{k-1}) ) \in \omega^{k-1}
\atop (x_i,l_i)\not=(x_j,l_j), 1\leq i\not= j \leq k-1}
\tilde{f}_{\lambda ( kr -t) } (x_1/\lambda ,l_1;\ldots ; x_{k-1}/\lambda ,l_{k-1} )
$$
of order $k-1$, where $\tilde{f}_t:( [0,r] \times \{1,\ldots , k-1\})^{k-1} \to \{0,1\}$,
given by
$$
\tilde{f}_t (x_1,l_1;\ldots ; x_{k-1},l_{k-1})
:= \frac{1}{(k-1)!} \prod_{i=0}^{k-1} {\bf 1}_{\{(l_{i+1}-l_i)x_i<(l_{i+1}-l_i)x_{i+1} \}},
$$
$((x_1,l_1),\ldots , (x_{k-1},l_{k-1}))\in ([0,r]\times \{1,\ldots , k-1\})^{k-1}$
is the symmetrization in
$k-1$ variables in $[0,r]\times \{1,\ldots , k-1\}$ of $f_\tau$.
Theorem~4.7 in \cite{reitzner} yields the bound
$$
d_W\big(\widetilde{\sigma}_k^{(\lambda )}, {\cal N}\big) \leq
\sum_{1\leq i \leq j \leq k-1}
\frac{\sqrt{M_{i,j}}}{{\mathrm{{\rm Var}}}_\lambda [ \sigma_k(t)]}
$$
where
$M_{i,j}$, defined in (14) therein, satisfies
$$
M_{1,1} \leq (k-1)^4
( \lambda r)^{4(k-1)-3}
$$
and
$$
M_{i,j} \leq {k-1 \choose i}^2 {k-1 \choose j}^2
( \lambda r)^{4(k-1)-i-j}, \qquad
2\leq i \leq j \leq k-1.
$$
Hence by \eqref{fjkdls45} and \eqref{dfjkvar} we have
\begin{eqnarray*}
\lefteqn{
d_W\big(\widetilde{\sigma}_k^{(\lambda )}, {\cal N}\big)
}
\\
& \leq &
\frac{1}{{\mathrm{{\rm Var}}}_\lambda [ \sigma_k(t)]}
\left(
(k-1)^2 ( \lambda r)^{2(k-1)-3/2}
+
\sum_{2\leq i \leq j \leq k-1}
{k-1 \choose i} {k-1 \choose j}
( \lambda r)^{2(k-1)-i/2-j/2}
\right)
\\
& \leq &
\frac{C(k,r)}{\sqrt{\lambda r}}
+
C(k,r) \sum_{2\leq i \leq j \leq k-1}
( \lambda r)^{1-i/2-j/2}.
\end{eqnarray*}
Since $1-i/2-j/2 \leq -1$ for $2\leq i \leq j$, this yields the Wasserstein bound in \eqref{fjkldsf-2}.
The above conclusions can also be reached by
noting that $\widetilde{\sigma}_k^{(\lambda )}$ admits a Hoeffding decomposition
and by applying Theorem~1.3 in \cite{doblerpeccati} for the Wasserstein
distance, or Theorem~6.3 in \cite{PS4}
for the Kolmogorov distance, which refine the central limit
theorem of \cite{dejong1990}.
\end{Proof}
\noindent
Figure~\ref{f3} presents numerical estimates that are consistent with the rate in
\eqref{fjkldsf-2}, by plotting
$\log d_K\big( \widetilde{\sigma}^{(\lambda )}_k (t) , {\cal N} \big)$ against $\log \lambda$
and their comparison with the line of slope $-1/2$.
Kolmogorov distances $d_K$ have been estimated in R using the distrEx package.
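Alternatively (an addition, not part of the original text), the Kolmogorov distance to the normal distribution can be estimated directly from the empirical distribution function of the standardized samples; the following Python sketch implements this and can be applied to the standardized counts generated above.
\begin{verbatim}
import numpy as np
from math import erf, sqrt

def kolmogorov_distance_to_normal(z):
    # sup_x |F_n(x) - Phi(x)| evaluated at the sample points of standardized data z
    z = np.sort(np.asarray(z, dtype=float))
    n = len(z)
    phi = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return float(max(np.max(np.abs(ecdf_hi - phi)), np.max(np.abs(ecdf_lo - phi))))

# quick self-test on exactly normal samples (the distance should be small)
rng = np.random.default_rng(5)
print(kolmogorov_distance_to_normal(rng.standard_normal(10_000)))
\end{verbatim}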
\vspace{-0.4cm}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure8_a.pdf}
\vskip-.2cm
\caption{Three-hop counts.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=1\linewidth, height=5cm]{figure8_b.pdf}
\vskip-.2cm
\caption{Four-hop counts.}
\end{subfigure}
\caption{Log-log plot of Kolmogorov distances.}
\label{f3}
\end{figure}
\vspace{-0.3cm}
| {
"timestamp": "2022-03-29T02:44:05",
"yymm": "2203",
"arxiv_id": "2203.14535",
"language": "en",
"url": "https://arxiv.org/abs/2203.14535",
"abstract": "We propose an algorithm for the closed-form recursive computation of joint moments and cumulants of all orders for k-hop counts in the 1D unit disk random graph model with Poisson distributed vertices. Our approach uses decompositions of k-hop counts into multiple Poisson stochastic integrals. As a consequence, using the Stein method we derive Berry-Esseen bounds for the asymptotic convergence of renormalized k-hop path counts to the normal distribution as the density of Poisson vertices tends to infinity. Computer codes for the recursive symbolic computation of moments and cumulants are provided in appendix.",
"subjects": "Probability (math.PR); Combinatorics (math.CO)",
"title": "Asymptotic analysis of k-hop connectivity in the 1D unit disk random graph model",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.983596967003007,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7099044168952957
} |
https://arxiv.org/abs/1310.0852 | Hom Quandles | If $A$ is an abelian quandle and $Q$ is a quandle, the hom set $\mathrm{Hom}(Q,A)$ of quandle homomorphisms from $Q$ to $A$ has a natural quandle structure. We exploit this fact to enhance the quandle counting invariant, providing an example of links with the same counting invariant values but distinguished by the hom quandle structure. We generalize the result to the case of biquandles, collect observations and results about abelian quandles and the hom quandle, and show that the category of abelian quandles is symmetric monoidal closed. | \section{\large \textbf{Introduction}}
Ronnie Brown said it best when declaring, ``One of the irritations of group theory is that the set Hom($H$,$K$) of homomorphisms between groups $H$ and $K$ does not have a natural group structure.'' Of course, when $H$ and $K$ are both commutative, we know that Hom($H$, $K$) is also a commutative group.
Since quandles are algebraic structures having groups as their primordial example, it is natural to wonder when, if ever, the set of quandle homomorphisms from a quandle $X$ to a quandle $X'$, $\mathrm{Hom}(X,X')$, possesses additional structure. It was shown
in \cite{I} that if $Q(K)$ is the fundamental quandle of a knot and $X$ is
an Alexander quandle, the set $\mathrm{Hom}(Q(K),X)$ has an Alexander quandle structure. We
generalize this
result to show that the set $\mathrm{Hom}(Q(K),X)$ has a quandle structure
provided the target quandle $X$ is an `abelian', or `medial', quandle. Moreover,
for links with two or more components, the resulting quandle structure is
not determined by the cardinality of the target quandle and is thus a
stronger invariant than the counting invariant $|\mathrm{Hom}(Q(K),X)|$.
This
paper is organized as follows. In Section \ref{QB} we recall the basics of
quandles, including definitions and examples. In Section \ref{AQ} we turn our focus to the case
of abelian quandles, also called medial quandles. Then, in Section
\ref{HQ}, specifically in Proposition \ref{p:main}, we define a natural quandle structure on the set of quandle
homomorphisms from an arbitrary quandle to a finite abelian quandle.
We continue in this section by illustrating properties of the hom quandle, \textrm{\bf Hom}($Q$,$A$), including those inherited from the quandle $A$.
In Section \ref{HQE} we apply the results of Section \ref{HQ} to
define an enhanced invariant of links associated to finite abelian quandles and in
Section \ref{BQ} we generalize the results of previous sections to the
case of biquandles. In Section \ref{C} we consider the category of abelian
quandles and show that it is symmetric monoidal closed, and we conclude in Section \ref{Q}
with some questions for future work.
\section{\large \textbf{Acknowledgements}}
We thank Tom Leinster for insightful and helpful discussions related to Section \ref{C} and James McCarron for pointing out an error in the initially posted
version. We are also grateful for useful conversations with Lou Kauffman and David Radford when this work was in the beginning stages.
\section{\large \textbf{Quandle Basics}}\label{QB}
We begin with a definition from \cite{J}.
\begin{definition}
\textup{A \textit{quandle} is a set $X$ equipped with a binary operation
$\triangleright:X\times X\to X$ satisfying
\begin{list}{}{}
\item[(i)]{ (idempotence) for all $x\in X$, $x\triangleright x = x$,}
\item[(ii)]{(inverse) for all $x,y\in X$, there is a unique $z\in X$ with $x=z\triangleright y$, and}
\item[(iii)]{(self-distributivity) for all $x,y,z\in X$, $(x\triangleright y)\triangleright z=(x\triangleright z)\triangleright (y\triangleright z).$}
\end{list}}
\end{definition}
The quandle axioms capture the essential properties of group conjugation, and correspond to the Reidemeister moves on oriented link diagrams
with elements corresponding to arcs and the quandle operation $\triangleright$
corresponding to crossings with $x\triangleright y$ the result of $x$ crossing under
$y$ from right to left. In particular, the operation is distinctly
non-symmetrical -- in $x\triangleright y$, $y$ is acting on $x$ and not conversely --
and thus it makes sense to use a non-symmetrical symbol like $\triangleright$.
Axiom (i) says that every element of $X$ is idempotent under $\triangleright$. Axiom (ii)
says that the action of $y$ on $X$ defined by $f_y(x)=x\triangleright y$ is
bijective for every $y\in X$. Hence there are inverse actions, denoted by
$f_y^{-1}(x)=x\triangleright^{-1}y$, and Axiom (ii) is equivalent to
\begin{list}{}{}
\item[(ii$'$)] there is an \textit{inverse}, or {\it dual}, operation $\triangleright^{-1}:X\times X\to X$
satisfying for all $x,y\in X$
\[(x\triangleright y)\triangleright^{-1} y =x = (x\triangleright^{-1} y)\triangleright y.\]
\end{list}
Thus, we can eliminate the existential quantifier in Axiom (ii) at the cost
of adding a second operation. It is a straightforward exercise to show that
the inverse operation is also idempotent and self-distributive, so $X$ is
a quandle under $\triangleright^{-1}$, known as the \textit{dual quandle} of $(X,\triangleright)$.
One can also show that the two triangle operations distribute over each other,
i.e. we have
\[(x\triangleright y)\triangleright^{-1} z= (x\triangleright^{-1} z)\triangleright (y\triangleright^{-1} z)\quad\mathrm{and}\quad
(x\triangleright^{-1} y)\triangleright z= (x\triangleright z)\triangleright^{-1} (y\triangleright z).
\]
Axiom (iii) says that the quandle operation is self-distributive. This axiom
then implies that the action maps $f_y:X\to X$ are endomorphisms of the
quandle structure:
\[f_z(x\triangleright y)= (x\triangleright y)\triangleright z = (x\triangleright z)\triangleright (y\triangleright z) =f_z(x)\triangleright f_z(y).\]
Indeed, Axioms (ii) and (iii) together say that the action of any element $y\in X$
is always an automorphism of $X$. If $(X,\triangleright)$ satisfies only (ii) and (iii), then
$X$ is called a \textit{rack} or \textit{automorphic set}; a quandle is then a rack
in which every element is a fixed point of its own action.
A quandle is {\it involutory} if $\triangleright =\triangleright^{-1}$, i.e. if for all $x$ and $y$
in $X$ we have
$(x \rhd y) \rhd y = x$. Involutory quandles are also known as {\it kei} or
\begin{CJK*}{UTF8}{min}圭\end{CJK*}; see \cite{NP,T} for more details.
\begin{example}
\textup{Any module over $\mathbb{Z}[t^{\pm 1}]$ is a quandle with operations}
\[x\triangleright y=tx+(1-t)y \quad \mathrm{and}\quad
x\triangleright^{-1} y=t^{-1}x+(1-t^{-1})y.\]
\textup{Such a quandle is called an \textit{Alexander quandle}.
Any module over $\mathbb{Z}[t]/(t^2-1)$ is a kei under
\[x\triangleright y= tx+(1-t)y\]
known as an \textit{Alexander kei}.}
\end{example}
\begin{example}
\textup{Any vector space $V$ over a field $k$
is a quandle under the operations }
\[\vec{x}\triangleright\vec{y}=\vec{x}+
\langle\vec{x},\vec{y} \rangle \vec{y}
\quad\mathrm{and}\quad
\vec{x}\triangleright^{-1}\vec{y}=\vec{x}-
\langle\vec{x},\vec{y} \rangle \vec{y}\]
\textup{where $\langle,\rangle:V\times V\to k$ is an antisymmetric
bilinear form (if the characteristic of $k$ is 2, then we also require
$\langle \vec{x},\vec{x}\rangle=0$ for all $\vec{x}\in V$). Such a quandle
is called a \textit{symplectic quandle} \cite{NN}.}
\end{example}
As briefly mentioned already, the quandle axioms can be understood as arising from the oriented Reidemeister
moves where quandle elements are associated to arcs in a knot or link diagram
and the quandle operation $x\triangleright y$ is interpreted as arc $x$ crossing under arc
$y$ from the right. We note that the orientation of the undercrossing arc is not
relevant, but only the orientation of the overcrossing arc. The inverse triangle
operation from Axiom (ii$'$) can be interpreted as the understrand crossing
backwards from left to right as illustrated below:
\[\includegraphics{ac-sn-4.png}\quad \includegraphics{ac-sn-8.png}
\]
\noindent Then the quandle axioms are exactly the conditions required for the diagrams
to match up one-to-one before and after the Reidemeister moves.
\[\includegraphics{ac-sn-5.png} \quad \includegraphics{ac-sn-6.png}\]
\[\includegraphics{ac-sn-7.png}\]
Given an oriented knot or link $K$, the \textit{knot quandle}, denoted $Q(K)$,
is the quandle with generators corresponding to arcs in a diagram of $K$
and relations given by the crossings. More precisely, the elements of the
knot quandle are equivalence classes of quandle words in the generators
under the equivalence relation generated by the crossing relations and the
quandle axioms.
We will find it convenient to specify quandle structures on a finite set
$X=\{x_1,x_2,\dots,x_n\}$ using an $n\times n$ matrix encoding the
quandle operation table. In particular, the entry in row $i$ column $j$ of
the quandle matrix is $k$ where $x_k=x_i\triangleright x_j$. Then, for example, the
Alexander quandle structure on $\mathbb{Z}_3=\{1,2,3\}$ (we use 3 for the class
of zero so we can number our rows and columns starting with 1) with
quandle operation $x\triangleright y= 2x+2y$ has matrix
\[M=\left[\begin{array}{ccc}
1 & 3 & 2 \\
3 & 2 & 1 \\
2 & 1 & 3 \\
\end{array}
\right].\]
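To make the matrix encoding concrete, the following short Python sketch (ours, not part of the original computations) stores a finite quandle as its operation matrix and verifies the three quandle axioms; as a sanity check, it accepts the $3\times 3$ Alexander matrix above.
\begin{verbatim}
def is_quandle(M):
    """Check the quandle axioms for an operation matrix M,
    where M[i][j] is the 1-based label of x_{i+1} > x_{j+1}."""
    n = len(M)
    op = lambda x, y: M[x - 1][y - 1]          # 1-based triangle operation
    idem = all(op(x, x) == x for x in range(1, n + 1))
    # Axiom (ii): each right translation x -> x > y is a bijection.
    invertible = all(len({op(x, y) for x in range(1, n + 1)}) == n
                     for y in range(1, n + 1))
    distrib = all(op(op(x, y), z) == op(op(x, z), op(y, z))
                  for x in range(1, n + 1)
                  for y in range(1, n + 1)
                  for z in range(1, n + 1))
    return idem and invertible and distrib

M = [[1, 3, 2], [3, 2, 1], [2, 1, 3]]  # the Alexander quandle on Z_3 above
print(is_quandle(M))                   # True
\end{verbatim}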
Given a knot $K$ and a finite quandle $X$, the cardinality of the set of
quandle homomorphisms (maps $f:Q(K)\to X$ such that $f(x\triangleright y)=f(x)\triangleright f(y)$)
is a computable knot invariant known as the \textit{quandle counting invariant}. A
quandle
homomorphism $f:Q(K)\to X$ corresponds to a labeling of the arcs in a diagram
of $K$ with elements of $X$ such that the crossing relations are all satisfied.
For instance, the trefoil knot $3_1$ has nine colorings by the quandle structure
on $\mathbb{Z}_3$ listed above, as shown below:
\[\includegraphics{ac-sn-9.png}\]
Hence, the quandle counting invariant here is
$|\mathrm{Hom}(Q(3_1),\mathbb{Z}_3)|=9$.\footnote{The reader may recognize
these as Fox 3-colorings.}
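The counting can be automated directly from a diagram. The Python sketch below (again ours, using the crossing relations of the standard three-crossing trefoil diagram with arcs $a$, $b$, $c$) brute-forces the colorings by the quandle above and recovers the count of nine.
\begin{verbatim}
from itertools import product

def op(x, y):
    """Alexander quandle on {1,2,3}: x > y = 2x + 2y mod 3, with 3 for 0."""
    r = (2 * x + 2 * y) % 3
    return r if r else 3

# Standard trefoil diagram with arcs a, b, c and crossing relations
# a > b = c,  b > c = a,  c > a = b.
colorings = [(a, b, c) for a, b, c in product((1, 2, 3), repeat=3)
             if op(a, b) == c and op(b, c) == a and op(c, a) == b]
print(len(colorings))  # 9, matching |Hom(Q(3_1), Z_3)| above
\end{verbatim}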
\section{\large \textbf{Abelian Quandles}}\label{AQ}
We now turn our attention to a special class of quandles known as
`abelian' quandles. The reader should be aware that, unlike in the group
case, the adjective ``abelian" is not synonymous with ``commutative." Abelian
quandles satisfy the condition below whereas commutative quandles satisfy
$a \rhd b = b \rhd a$.
\begin{definition}
\textup{A quandle $Q$ is \textit{abelian} if for all $x,y,z,w\in Q$ we have}
\[(x\triangleright y)\triangleright(z\triangleright w) =(x\triangleright z)\triangleright(y\triangleright w).\]
\textup{Abelian quandles are also called \textit{medial} quandles.}
\end{definition}
\begin{example}
\textup{Alexander quandles are abelian:}
\begin{eqnarray*}
(x\triangleright y)\triangleright(z\triangleright w) & = & t(tx+(1-t)y)+(1-t)(tz+(1-t)w) \\
& = & t^2x+t(1-t)y+t(1-t)z+(1-t)^2w \\
& = & t(tx+(1-t)z)+(1-t)(ty+(1-t)w) \\
& = & (x\triangleright z)\triangleright(y\triangleright w)
\end{eqnarray*}
\end{example}
\begin{example}
\textup{However, not all abelian quandles are Alexander. The quandle $Q_2$
with operation table
\[\left[\begin{array}{ccc}
1 & 1 & 2 \\
2 & 2 & 1 \\
3 & 3 & 3\\
\end{array}\right]\]
is abelian as can be verified by checking that
$(a\triangleright b)\triangleright (c\triangleright d)=(a\triangleright c)\triangleright (b\triangleright d)$ for $a,b,c,d\in\{1,2,3\}$;
however, $Q_2$ is not isomorphic to any Alexander quandle by the following
lemma.}
\end{example}
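The verification mentioned in the example is a finite check which is easily delegated to a computer; a minimal Python sketch (ours) is the following.
\begin{verbatim}
from itertools import product

def is_abelian(M):
    """Check mediality (x>y)>(z>w) = (x>z)>(y>w) for an operation matrix M
    with 1-based entries M[i][j] = x_{i+1} > x_{j+1}."""
    n = len(M)
    op = lambda x, y: M[x - 1][y - 1]
    return all(op(op(x, y), op(z, w)) == op(op(x, z), op(y, w))
               for x, y, z, w in product(range(1, n + 1), repeat=4))

Q2 = [[1, 1, 2], [2, 2, 1], [3, 3, 3]]  # the quandle Q_2 above
print(is_abelian(Q2))                   # True
\end{verbatim}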
\begin{lemma}
If $Q$ is an Alexander quandle containing an element $y\in Q$ which acts
trivially on $Q$, i.e. if $x\triangleright y=x$ for all $x\in Q$, then $Q$ is
isomorphic to the trivial quandle on $|Q|$ elements.
\end{lemma}
\begin{proof}
Suppose $y$ acts trivially on $Q$, so that $x\triangleright y= x$ for all $x\in Q$. Then
\[0=x-(x\triangleright y)= x-tx-(1-t)y=(1-t)(x-y)\]
for all $x\in Q$. In particular, since every $x\in Q$ has the form $x=(x+y)-y$,
the map $(1-t):Q\to Q$ is the zero map, so $t:Q\to Q$ is the identity map. Then
\[x\triangleright z=tx+(1-t)z= 1x+0z=x\]
and the quandle operation on $Q$ is trivial.
\end{proof}
\begin{example}
\textup{Unlike Alexander quandles, symplectic quandles are generally
non-abelian. Consider the symplectic quandle structure on $(\mathbb{Z}_2)^2$
defined by
\[\left[\begin{array}{c}
x_1 \\
x_2
\end{array}\right]
\triangleright
\left[\begin{array}{c}
y_1 \\
y_2
\end{array}\right]
=
\left[\begin{array}{c}
x_1 \\
x_2
\end{array}\right]+
\left(\left[\begin{array}{cc} x_1 & x_2 \end{array}\right]
\left[\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}\right]
\left[\begin{array}{c}
y_1 \\
y_2
\end{array}\right]\right)
\left[\begin{array}{c}
y_1 \\
y_2
\end{array}\right].
\]
This four-element quandle has operation matrix
\[M=\left[\begin{array}{cccc}
1 & 1 & 1 & 1 \\
2 & 2 & 4 & 3 \\
3 & 4 & 3 & 2 \\
4 & 3 & 2 & 4 \\
\end{array}\right]\]
where
\[x_1=\left[\begin{array}{c} 0 \\ 0\end{array}\right],
x_2=\left[\begin{array}{c} 1 \\ 0\end{array}\right],
x_3=\left[\begin{array}{c} 0 \\ 1\end{array}\right] \quad \mathrm{and}\quad
x_4=\left[\begin{array}{c} 1 \\ 1\end{array}\right].
\]
It is easy to see from the table that this quandle is not
abelian; for instance, we have
\[(2\triangleright 4)\triangleright (1\triangleright 2)= 3\triangleright 1=3 \]
but
\[(2\triangleright 1)\triangleright (4\triangleright 2)= 2\triangleright 3=4\ne 3.\]
We also note that this quandle is a kei, so this example also shows that
kei need not be abelian.
}
\end{example}
\begin{lemma}
In addition to right-distributivity, the operation in an abelian quandle
is left-distributive.
\end{lemma}
\begin{proof}
If $A$ is an abelian quandle, then for all $x,y,z\in A$ we have
\[x\triangleright(y\triangleright z) =
(x\triangleright x)\triangleright (y\triangleright z)
=(x\triangleright y) \triangleright (x\triangleright z).\]
\end{proof}
\section{\large \textbf{Hom Quandles}}\label{HQ}
We now wish to study the structure of the set of quandle homomorphisms
$\mathrm{Hom}(Q,A)$ where $Q$ is any quandle and $A$ is an abelian quandle.
\begin{theorem}
Let $Q$ and $A$ be quandles. If $A$ is abelian, then the set of
quandle homomorphisms $\mathrm{Hom}(Q,A)$ is a quandle under the
pointwise operation $(f\triangleright g)(q)=f(q)\triangleright g(q)$. Moreover,
$\mathrm{Hom}(Q,A)$ is abelian.
\end{theorem}
\begin{proof}
First note that $f\triangleright g$ is again a quandle homomorphism: since $A$ is abelian,
for all $q,q'\in Q$ we have
\[(f\triangleright g)(q\triangleright q')=(f(q)\triangleright f(q'))\triangleright (g(q)\triangleright g(q'))
=(f(q)\triangleright g(q))\triangleright (f(q')\triangleright g(q'))=(f\triangleright g)(q)\triangleright (f\triangleright g)(q'),\]
so the pointwise operation is well defined on $\mathrm{Hom}(Q,A)$.
For Axiom (i), we have
\[(f\triangleright f)(q)= f(q)\triangleright f(q)=f(q).\]
For Axiom (ii), define $(f\triangleright^{-1} g)(q)=f(q)\triangleright^{-1} g(q)$ in $A$.
Then we have
\[((f\triangleright g)\triangleright^{-1} g) (q) = (f(q)\triangleright g(q))\triangleright^{-1} g(q) = f(q).\]
For Axiom (iii), we have
\begin{eqnarray*}
((f\triangleright g)\triangleright h)(q) & = & (f\triangleright g)(q) \triangleright h(q) \\
& = & (f(q)\triangleright g(q))\triangleright h(q) \\
& = & (f(q)\triangleright h(q))\triangleright(g(q)\triangleright h(q)) \\
& = & (f\triangleright h)(q)\triangleright(g\triangleright h)(q) \\
& = & [(f\triangleright h)\triangleright(g\triangleright h)](q).
\end{eqnarray*}
To show that $\mathrm{Hom}(Q,A)$ is abelian, let
$f, g, h, k \in \mathrm{Hom}(Q,A)$. Then
\begin{eqnarray*}
[(f \rhd g) \rhd (h \rhd k)] (q) & = & (f \rhd g) (q) \rhd (h \rhd k) (q) \\
& = & (f(q) \rhd g (q)) \rhd (h(q) \rhd k(q)) \\
& = & (f(q) \rhd h(q)) \rhd (g(q) \rhd k(q)) \\
& = & (f \rhd h)(q) \rhd (g \rhd k)(q) \\
& = & [(f \rhd h) \rhd (g \rhd k)](q)
\end{eqnarray*}
\end{proof}
\begin{remark} \textup{Henceforth, we will use $\mathrm{Hom}(Q,A)$ to denote the set of quandle homomorphisms and $\mathrm{\bf Hom}(Q,A)$ to denote the quandle.}
\end{remark}
When the domain quandle $Q$ is a knot quandle, we can interpret
the pointwise quandle operation in terms of knot diagrams. In particular,
$\mathrm{\bf Hom}(Q(K),A)$ can be represented as the set of
$A$-labelings of a fixed diagram $D$ of $K$; the $\triangleright$ operation on diagrams
is then given by the pointwise operation on the arc labels, i.e.
\[\includegraphics{ac-sn-10.png}\]
\begin{example}\textup{
In $\mathrm{\bf Hom}(Q(3_1),\mathbb{Z}_3)$ we have
\[\includegraphics{ac-sn-11.png}\]
}\end{example}
Then at any crossing, we have
\[\includegraphics{ac-sn-3.png}\]
On the one hand, we have $z=(x\triangleright y)\triangleright(a\triangleright b)$ by definition of the $\triangleright$
operation on diagrams; on the other hand, for the labeling to be a valid
quandle labeling, we must have $z=(x\triangleright a)\triangleright(y\triangleright b)$. Thus, we have
\begin{proposition}\label{p:main}
The set
$\mathrm{Hom}(Q(K),A)$ forms a quandle under the pointwise $\triangleright$ operation
if and only if $(x\triangleright y)\triangleright(a\triangleright b)=(x\triangleright a)\triangleright(y\triangleright b)$, i.e., if and only if
$A$ is abelian.
\end{proposition}
The hom quandle $\mathrm{\bf Hom}(Q,A)$ inherits many properties held by
the target quandle $A$.
\begin{theorem}
Let $Q$ and $A$ be quandles, where $A$ is abelian. If $A$ is
commutative or involutory, then $\mathrm{\bf Hom}(Q,A)$ is also commutative or involutory.
\end{theorem}
\begin{proof}
This follows from a straightforward computation.
\end{proof}
An additional observation:
\begin{theorem}
Let $Q$ be a quandle and $A \cong A'$ be abelian quandles. Then {\bf Hom}$(Q, A) \cong$ {\bf Hom}$(Q, A')$.
\end{theorem}
\begin{proof}
This follows from a straightforward computation.
\end{proof}
\begin{theorem}
Let $Q$ be a finitely generated quandle and $A$ a finite
abelian
quandle. Then $\mathrm{\bf Hom}(Q,A)$ contains a subquandle isomorphic to $A$.
\end{theorem}
\begin{proof} Define maps $f_a:Q\to A$ by $f_a(x)=a$ for all $x\in Q$ and
consider the map $\phi : A \rightarrow \mathrm{\bf Hom}(Q,A)$
defined by $\phi(a)=f_a$. First,
note that $\phi$ is a homomorphism of quandles since for any $a,b\in A$ we have
\[\phi(a\triangleright b) = f_{a\triangleright b}
\quad\mathrm{and}\quad
\phi(a)\triangleright \phi(b)=f_a\triangleright f_b\]
Then for any $x\in Q$, we have
\[(f_a\triangleright f_b)(x)=a\triangleright b=f_{a\triangleright b}(x)\]
as required. Further, $\phi$ is injective since $\phi(a)=\phi(b)$ implies
$f_a=f_b$ which implies $a=b$. Then the image
subquandle
$\mathrm{Im}(\phi)\subset \mathrm{\bf Hom}(Q,A)$ is isomorphic to $A$.
\end{proof}
More generally, we have
\begin{theorem}
Let $Q$ be a finitely generated quandle and $A$ an abelian quandle. Then
$\mathrm{\bf Hom}(Q,A)$ is isomorphic to a subquandle of $A^{c}$ where $c$ is
the minimal number of generators of $Q$.
\end{theorem}
\begin{proof}
Let $q_1,\dots,q_c$ be a set of generators of $Q$ with minimal cardinality.
Any homomorphism $f:Q\to A$ must send each $q_k$ to an element $f(q_k)$ in
$A$, and such an assignment of images to generators defines a quandle
homomorphism if and only if the relations in $Q$ are satisfied by the assignment, i.e.
if and only if
\[f(q_j\triangleright q_k)=f(q_j)\triangleright f(q_k)\]
for all $1\le j,k\le c$.
Then the elements of $\mathrm{\bf Hom}(Q,A)$ can be identified with the subset of
$A^c$ consisting of $c$-tuples of images of generators under $f$ satisfying the
relations in $Q$, i.e.
\[f\leftrightarrow (f(q_1),f(q_2),\dots,f(q_c)).\]
The pointwise operation in $\mathrm{\bf Hom}(Q,A)$ agrees with the componentwise
operation in the Cartesian product $A^c$,
\begin{eqnarray*}
((f\triangleright g)(q_1),\dots,(f\triangleright g)(q_c))
& = & (f(q_1)\triangleright g(q_1),\dots, f(q_c)\triangleright g(q_c)) \\
& = & (f(q_1),\dots, f(q_c)) \triangleright (g(q_1),\dots,g(q_c))
\end{eqnarray*}
so $\mathrm{\bf Hom}(Q,A)$ is isomorphic to the subquandle of $A^c$ consisting of
$c$-tuples satisfying the relations of $Q$.
\end{proof}
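This identification underlies the brute-force computation of hom quandles: enumerate the maps from a finite quandle to $A$, keep those that are homomorphisms, and tabulate the pointwise operation. The Python sketch below is ours (it is not the code used for the computations later in the paper, and is practical only for small quandles); it computes the operation matrix of $\mathrm{\bf Hom}(Q,A)$ directly from the operation matrices of $Q$ and $A$.
\begin{verbatim}
from itertools import product

def hom_quandle_matrix(MQ, MA):
    """Return the homomorphisms Q -> A (as tuples of images of x_1,...,x_n)
    and the operation matrix of Hom(Q, A) under the pointwise operation,
    with A assumed abelian.  All matrix entries are 1-based."""
    n, m = len(MQ), len(MA)
    opQ = lambda x, y: MQ[x - 1][y - 1]
    opA = lambda x, y: MA[x - 1][y - 1]
    homs = [f for f in product(range(1, m + 1), repeat=n)
            if all(f[opQ(i, j) - 1] == opA(f[i - 1], f[j - 1])
                   for i in range(1, n + 1) for j in range(1, n + 1))]
    index = {f: k + 1 for k, f in enumerate(homs)}
    matrix = [[index[tuple(opA(f[i], g[i]) for i in range(n))]
               for g in homs] for f in homs]
    return homs, matrix

# Illustrative usage with the two 3-element quandles defined earlier:
MQ = [[1, 3, 2], [3, 2, 1], [2, 1, 3]]   # Alexander quandle on Z_3
MA = [[1, 1, 2], [2, 2, 1], [3, 3, 3]]   # the abelian quandle Q_2
homs, matrix = hom_quandle_matrix(MQ, MA)
print(len(homs))                         # cardinality of Hom(Q, A)
\end{verbatim}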
In the simplest case, we can identify the structure of the hom quandle.
Recall that the \textit{trivial quandle of $n$ elements} is a set $T_n$ of
cardinality $n$ with quandle operation $x\triangleright y=x$ for all $x,y\in X$. That is,
the trivial quandle has quandle matrix
\[M_{T_n}=\left[\begin{array}{rrrr}
1 & 1 & \dots & 1 \\
2 & 2 & \dots & 2 \\
\vdots & \vdots & \ddots & \vdots \\
n & n & \dots & n
\end{array}\right].\]
\begin{lemma}\label{t-hom}
Any map between trivial quandles is a quandle homomorphism.
\end{lemma}
\begin{proof}
Let $f:T_n\to T_m$ be a map between trivial quandles $T_n$ and $T_m$.
Then for any $x_i,x_j\in X$, we have $f(x_i\triangleright x_j)=f(x_i)$ and
$f(x_i)\triangleright f(x_j)=f(x_i)$. Thus
\[f(x_i\triangleright x_j)=f(x_i)=f(x_i)\triangleright f(x_j)\]
and $f$ is a quandle homomorphism.
\end{proof}
\begin{theorem} Let $T_n$ and $T_m$ be the trivial quandles of orders $n$
and $m$, respectively. Then $\mathrm{\bf Hom}(T_n, T_m) \cong T_{m^n}$.
\end{theorem}
\begin{proof}
Let $T_n=\{x_1,\dots, x_n\}$ and $T_m=\{y_1,\dots, y_m\}$.
Any map $f:T_n\to T_m$ can be encoded as a vector
\[f=(f(x_1),f(x_2),\dots, f(x_n))\]
and there are $m^n$ such maps. Indeed, every such map is a quandle
homomorphism by Lemma \ref{t-hom}.
Now, define $\phi:\mathrm{\bf Hom}(T_n,T_m)\to T_{m^n}$ by
\[\phi(y_1,\dots,y_n)=\sum_{k=1}^n y_k m^{k-1}.\]
Then $\phi$ is a bijection; the inverse map rewrites
$x\in \{1,2,\dots,m^n\}$ in base-$m$.
The quandle structure on $\mathrm{\bf Hom}(T_n,T_m)$ is defined by
\[(f\triangleright g)(x)=f(x)\triangleright g(x)=f(x),\]
so we have $f\triangleright g=f$ for all $f,g\in\mathrm{\bf Hom}(T_n,T_m)$ and
$\mathrm{\bf Hom}(T_n,T_m)$ is a trivial quandle. Then by Lemma \ref{t-hom},
$\phi$ is an isomorphism of quandles.
\end{proof}
\begin{remark}\textup{
The condition that $\mathrm{\bf Hom}(Q,A)\cong A^c$, where $c$ is the minimal number
of generators of $Q$, is not limited to trivial quandles. For instance, the
quandle $\mathrm{\bf Hom}(Q(3_1),R_3)$ is isomorphic to $(R_3)^2$, where
$Q(3_1)$ is the fundamental quandle of the trefoil knot and
$R_3=\mathbb{Z}_3[t]/(t-2)$ is the connected quandle of 3 elements; we note
that $Q(3_1)$ has a presentation with two generators (as do all 2-bridge knots).
In the next
section, we show that $\mathrm{\bf Hom}(Q,A)$ need not be isomorphic to $A^c$
for links with quandle generator index $c$. }
\end{remark}
\section{\large \textbf{Hom Quandle Enhancement}} \label{HQE}
Recall that for any oriented knot $K$ and finite quandle $A$, the cardinality
of the hom set $\mathrm{Hom}(Q(K),A)$ is a computable knot invariant. As we
have seen, if $A$ is abelian then the hom set is not just a set but a quandle.
The natural question is then whether the hom quandle is a stronger invariant
than the counting invariant. In general, an invariant which determines the
counting invariant is an \textit{enhancement} of the counting invariant, and if
there are examples in which the enhancement distinguishes knots or links which
have the same counting invariant, we say the enhancement is a \textit{proper
enhancement}. Thus, we would like to know whether the hom quandle is a proper
enhancement. It turns out, the answer is yes:
\begin{example}\textup{Let $A$ be the quandle defined by the quandle matrix
\[M_a=\left[\begin{array}{cccc}
1 & 4 & 4 & 1 \\
3 & 2 & 2 & 3 \\
2 & 3 & 3 & 2 \\
4 & 1 & 1 & 4
\end{array}\right]\]
and consider the links $L6a1$ and $L6a5$ on the Thistlethwaite link table on
the knot atlas \cite{KA}.
\[\begin{array}{cc}
\includegraphics{ac-sn-2.png} & \includegraphics{ac-sn-1.png} \\
L6a1 & L6a5 \\
\end{array}\]
Our \texttt{Python} computations show that while both hom quandles have the
same cardinality, namely 16, the hom quandles are not isomorphic.
\[M_{\mathrm{\bf Hom}(Q(L6a1),A)}=
\left[\begin{array}{cccccccccccccccc}
1 & 1 & 2 & 2 & 16 & 16 & 15 & 15 & 15 & 15 & 16 & 16 & 2 & 2 & 1 & 1 \\
2 & 2 & 1 & 1 & 15 & 15 & 16 & 16 & 16 & 16 & 15 & 15 & 1 & 1 & 2 & 2 \\
4 & 4 & 3 & 3 & 13 & 13 & 14 & 14 & 14 & 14 & 13 & 13 & 3 & 3 & 4 & 4 \\
3 & 3 & 4 & 4 & 14 & 14 & 13 & 13 & 13 & 13 & 14 & 14 & 4 & 4 & 3 & 3 \\
12 & 12 & 11 & 11 & 5 & 5 & 6 & 6 & 6 & 6 & 5 & 5 & 11 & 11 & 12 & 12 \\
11 & 11 & 12 & 12 & 6 & 6 & 5 & 5 & 5 & 5 & 6 & 6 & 12 & 12 & 11 & 11 \\
9 & 9 & 10 & 10 & 8 & 8 & 7 & 7 & 7 & 7 & 8 & 8 & 10 & 10 & 9 & 9 \\
10 & 10 & 9 & 9 & 7 & 7 & 8 & 8 & 8 & 8 & 7 & 7 & 9 & 9 & 10 & 10 \\
7 & 7 & 8 & 8 & 10 & 10 & 9 & 9 & 9 & 9 & 10 & 10 & 8 & 8 & 7 & 7 \\
8 & 8 & 7 & 7 & 9 & 9 & 10 & 10 & 10 & 10 & 9 & 9 & 7 & 7 & 8 & 8 \\
6 & 6 & 5 & 5 & 11 & 11 & 12 & 12 & 12 & 12 & 11 & 11 & 5 & 5 & 6 & 6 \\
5 & 5 & 6 & 6 & 12 & 12 & 11 & 11 & 11 & 11 & 12 & 12 & 6 & 6 & 5 & 5 \\
14 & 14 & 13 & 13 & 3 & 3 & 4 & 4 & 4 & 4 & 3 & 3 & 13 & 13 & 14 & 14 \\
13 & 13 & 14 & 14 & 4 & 4 & 3 & 3 & 3 & 3 & 4 & 4 & 14 & 14 & 13 & 13 \\
15 & 15 & 16 & 16 & 2 & 2 & 1 & 1 & 1 & 1 & 2 & 2 & 16 & 16 & 15 & 15 \\
16 & 16 & 15 & 15 & 1 & 1 & 2 & 2 & 2 & 2 & 1 & 1 & 15 & 15 & 16 & 16
\end{array}\right]\]
\[M_{\mathrm{\bf Hom}(Q(L6a5),A)}=
\left[\begin{array}{cccccccccccccccc}
1 & 1 & 1 & 1 & 16 & 16 & 16 & 16 & 16 & 16 & 16 & 16 & 1 & 1 & 1 & 1 \\
2 & 2 & 2 & 2 & 15 & 15 & 15 & 15 & 15 & 15 & 15 & 15 & 2 & 2 & 2 & 2 \\
3 & 3 & 3 & 3 & 14 & 14 & 14 & 14 & 14 & 14 & 14 & 14 & 3 & 3 & 3 & 3 \\
4 & 4 & 4 & 4 & 13 & 13 & 13 & 13 & 13 & 13 & 13 & 13 & 4 & 4 & 4 & 4 \\
12 & 12 & 12 & 12 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 12 & 12 & 12 & 12 \\
11 & 11 & 11 & 11 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 11 & 11 & 11 & 11 \\
10 & 10 & 10 & 10 & 7 & 7 & 7 & 7 & 7 & 7 & 7 & 7 & 10 & 10 & 10 & 10 \\
9 & 9 & 9 & 9 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 9 & 9 & 9 & 9 \\
8 & 8 & 8 & 8 & 9 & 9 & 9 & 9 & 9 & 9 & 9 & 9 & 8 & 8 & 8 & 8 \\
7 & 7 & 7 & 7 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 7 & 7 & 7 & 7 \\
6 & 6 & 6 & 6 & 11 & 11 & 11 & 11 & 11 & 11 & 11 & 11 & 6 & 6 & 6 & 6 \\
5 & 5 & 5 & 5 & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 5 & 5 & 5 & 5 \\
13 & 13 & 13 & 13 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 13 & 13 & 13 & 13 \\
14 & 14 & 14 & 14 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 14 & 14 & 14 & 14 \\
15 & 15 & 15 & 15 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 15 & 15 & 15 & 15 \\
16 & 16 & 16 & 16 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 16 & 16 & 16 & 16
\end{array}\right]\]
Recall from \cite{QP} that if $X$ is a quandle, then the polynomial
\[\phi(X)=\sum_{x\in X} s^{r(x)}t^{c(x)},\]
where $r(x)=|\{y\in X\ :\ x\triangleright y=x\}|$ and $c(x)=|\{y\in X\ :\ y\triangleright x=y\}|$, is
an invariant of quandle isomorphism type known as the \textit{quandle
polynomial} of $X$. Then we have
\[\phi(\mathrm{\bf Hom}(Q(L6a1),A))= 16s^4t^4\ne
16s^8t^8=\phi(\mathrm{\bf Hom}(Q(L6a5),A))\]
and the quandles are not isomorphic.
}\end{example}
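Computing a quandle polynomial of this kind amounts to counting fixed rows and columns in the operation matrix; the following Python sketch (ours) returns $\phi(X)$ as a dictionary of coefficients indexed by exponent pairs $(r,c)$, and reproduces $\phi(A)=4s^2t^2$ for the four-element quandle $A$ above.
\begin{verbatim}
from collections import Counter

def quandle_polynomial(M):
    """phi(X) = sum over x of s^{r(x)} t^{c(x)}, returned as a Counter
    mapping exponent pairs (r, c) to coefficients.  M is the 1-based
    operation matrix with M[i][j] = x_{i+1} > x_{j+1}."""
    n = len(M)
    op = lambda x, y: M[x - 1][y - 1]
    terms = Counter()
    for x in range(1, n + 1):
        r = sum(1 for y in range(1, n + 1) if op(x, y) == x)
        c = sum(1 for y in range(1, n + 1) if op(y, x) == y)
        terms[(r, c)] += 1
    return terms

A = [[1, 4, 4, 1], [3, 2, 2, 3], [2, 3, 3, 2], [4, 1, 1, 4]]  # the quandle A above
print(quandle_polynomial(A))   # Counter({(2, 2): 4}), i.e. phi(A) = 4 s^2 t^2
\end{verbatim}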
\section{\large \textbf{Abelian Biquandles}}\label{BQ}
If we think of quandles as the algebraic structure encoding the oriented
Reidemeister moves, where the inbound overcrossing arc acts on the inbound
undercrossing arc at a crossing, it is natural to ask what algebraic structure
results when we allow both inbound \textit{semiarcs}, i.e. portions of the knot divided at both over and undercrossings, to act on each other at a crossing.
The resulting algebraic structure is known as a \textit{biquandle}; see
\cite{K}. More precisely, we have
\begin{definition}\textup{Let $X$ be a set and define $\Delta:X\to X\times X$
by $\Delta(x)=(x,x)$. A \textit{biquandle map}
on $X$ is an invertible map $B:X\times X\to X\times X$ denoted
\[B(x,y)=(B_1(x,y),B_2(x,y))=(y^x,x_y)\]
such that
\begin{itemize}
\item[(i)] There exists a unique invertible \textit{sideways map} $S:X\times X\to X\times X$ such that for all $x,y\in X$, we have
\[S(B_1(x,y),x)=(B_2(x,y),y);\]
\item[(ii)] The component map $(S\Delta)_{1}=(S\Delta)_2:X\to X$ is
bijective, and
\item[(iii)] $B$ satisfies the \textit{set-theoretic Yang-Baxter equation}
\[(B\times I)(I\times B)(B\times I)=(I\times B)(B\times I)(I\times B).\]
\end{itemize}
A \textit{biquandle} is a set $X$ with a choice of biquandle map $B$.
}\end{definition}
\begin{example}\textup{
Examples of biquandles include
\begin{itemize}
\item \textit{Constant Action Biquandles}. For any set $X$ and bijection
$\sigma:X\to X$, the map $B(x,y)=(\sigma(y),\sigma^{-1}(x))$ is a biquandle
map.
\item \textit{Alexander Biquandles.} For any module $X$ over
$\mathbb{Z}[t^{\pm 1}, r^{\pm 1}]$, the map
\[B(\vec{x},\vec{y})=((1-tr)\vec{x}+t\vec{y},r\vec{x})\]
is a biquandle map.
\item \textit{Fundamental Biquandle of an oriented Link}. Given an
oriented link diagram $L$, let $G$ be a set of generators corresponding
bijectively with semiarcs (portions of the link divided at both over and
undercrossing points) in $L$, and define the set of \textit{biquandle words}
$W$ recursively by the rules
\begin{itemize}
\item $G\subset W$ and
\item If $x,y\in W$ then $B_{1,2}^{\pm 1}(x,y),S_{1,2}^{\pm 1}(x,y)\in W$.
\end{itemize}
Then the \textit{fundamental biquandle} of $L$, denoted $B(L)$, is the set of
equivalence
classes of $W$ under the equivalence relation generated by the biquandle
axioms and the \textit{crossing relations} in $L$:
\[\includegraphics{ac-sn-12.png}\quad\raisebox{0.5in}{$z=y^x,\ w=x_y$}\]
\item An $n\times 2n$ matrix $M$ with entries in $X=\{1,2,\dots, n\}$ can be
interpreted as a pair of operation tables, say with $B(i,j)=(M[j,i], M[i,j+n])$.
Then such a matrix defines a biquandle structure on $X$ provided the entries
satisfy the biquandle axioms.
\end{itemize}
}\end{example}
We can generalize the abelian property from quandles to biquandles $X$ by
requiring that the set of biquandle homomorphisms $\mathrm{Hom}(B(L),X)$
forms a biquandle under the diagrammatic operations analogous to the quandle
case.
\[\includegraphics{ac-sn-13.png}\]
More precisely, we have
\begin{definition}\textup{
We say a biquandle map $B:X\times X\to X\times X$ is \textit{abelian} if for
all $a,b,x,y\in X$ we have
\[(b^a)^{y^x}=(b^y)^{a^x}, \quad
(a_b)^{x_y}=(a^x)_{b^y}, \quad
\quad\mathrm{and}\quad
(x_y)_{a_b}=(x_a)_{y_b}.
\]
Note that two of the four conditions determined in the diagram, namely
$(a_b)^{x_y}=(a^x)_{b^y}$ and $(y^x)_{b^a}=(y_b)^{x_a}$, are equivalent.
}\end{definition}
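As with mediality for quandles, this is a finite check when $X$ is finite. The Python sketch below (ours) tests the three displayed identities for a biquandle presented by an $n\times 2n$ operation matrix, using the matrix convention $B(i,j)=(M[j,i],M[i,j+n])$ recalled above; the two-element swap biquandle is included as a trivial sanity check.
\begin{verbatim}
from itertools import product

def is_abelian_biquandle(M):
    """Check the abelian biquandle identities for an n x 2n operation
    matrix M with 1-based entries, encoded as B(i, j) = (M[j, i], M[i, j+n]),
    so that y^x = M[y, x] and x_y = M[x, y+n]."""
    n = len(M)
    over = lambda y, x: M[y - 1][x - 1]        # y^x
    under = lambda x, y: M[x - 1][y + n - 1]   # x_y
    for a, b, x, y in product(range(1, n + 1), repeat=4):
        if over(over(b, a), over(y, x)) != over(over(b, y), over(a, x)):
            return False
        if over(under(a, b), under(x, y)) != under(over(a, x), over(b, y)):
            return False
        if under(under(x, y), under(a, b)) != under(under(x, a), under(y, b)):
            return False
    return True

# Sanity check: the two-element swap biquandle B(x, y) = (y, x) is abelian.
M_swap = [[1, 1, 1, 1], [2, 2, 2, 2]]
print(is_abelian_biquandle(M_swap))  # True
\end{verbatim}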
\begin{example}\textup{
Alexander biquandles are abelian, as we can verify directly.
\begin{eqnarray*}
(b^a)^{y^x} & = & t(b^a)+(1-tr)(y^x) \\
& = & t(tb+(1-tr)a)+ (1-tr)(ty+(1-tr)x) \\
& = & t^2b+t(1-tr)a+ t(1-tr)y+(1-tr)^2x \\
& = & t^2b+t(1-tr)y+ t(1-tr)a+(1-tr)^2x \\
& = & (b^y)^{a^x}, \\
& & \\
(a_b)^{x_y} & = & t(a_b)+(1-tr)(x_y) \\
& = & tra+(1-tr)(rx) \\
& = & r(ta+(1-tr)x) \\
& = & (a^x)_{b^y},
\end{eqnarray*}
and
\[(x_y)_{a_b}=r(x_y)=r^2x=r(x_a)=(x_a)_{y_b}.\]
}\end{example}
As with quandles, we have
\begin{proposition}
If $Y$ is a biquandle and $X$ is an abelian biquandle, then the set of
biquandle homomorphisms
$\mathrm{Hom}(Y,X)$ has a biquandle structure defined by
\[f^g(x)= f(x)^{g(x)}\quad\mathrm{and}\quad g_f(x)=g(x)_{f(x)}.\]
\end{proposition}
In particular, if $X$ is a finite abelian biquandle, then the hom
biquandle $\mathrm{\bf Hom}(B(L),X)$ is a link invariant which determines,
but is not determined by, the biquandle counting invariant.
\begin{example}\textup{
Consider the biquandle with operation matrix
\[M_X=\left[\begin{array}{ccccc|ccccc}
3 & 3 & 3 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\
2 & 2 & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\
5 & 5 & 5 & 5 & 5 & 4 & 4 & 4 & 5 & 5 \\
4 & 4 & 4 & 4 & 4 & 5 & 5 & 5 & 4 & 4 \\
\end{array}\right].\]
Our \texttt{Python} computations indicate that both the $(4,2)$-torus link
$L4a1$ and the Whitehead link $L5a1$ have biquandle counting invariant
value $|\mathrm{Hom}(B(L4a1),X)|=81=|\mathrm{Hom}(B(L5a1),X)|$ with respect
to $X$, but $\mathrm{\bf Hom}(B(L4a1),X)$ and $\mathrm{\bf Hom}(B(L5a1),X)$ are not
isomorphic as biquandles. Indeed, they have distinct upper biquandle
polynomials
\[\phi_1(\mathrm{\bf Hom}(B(L4a1),X))=12s^{77}t^{77} + 4s^{77}t^{68} + 40s^{71}t^{71} + 9s^{68}t^{72} + 16s^{65}t^{65}\]
and \[\phi_1(\mathrm{\bf Hom}(B(L5a1),X))=16s^{77}t^{77} + 40s^{71}t^{71} + 4s^{65}t^{56} + 9s^{56}t^{60} + 12s^{56}t^{56}\]
respectively where
\[\phi_1(X)=\sum_{x\in X} s^{r(x)}t^{c(x)}\]
where $r(x)=|\{y\in X\ :\ x^y=x\}|$ and $c(x)=|\{y\in X\ :\ y_x=y\}|$.
\[
\begin{array}{cc}
\includegraphics{ac-sn-14.png} & \includegraphics{ac-sn-15.png} \\
L4a1 & L5a1 \\
\end{array}
\]
}\end{example}
\section{\large \textbf{Categorical Framework}}\label{C}
Since we know that the collection of homomorphisms between two abelian quandles forms an abelian quandle, a natural question to ask is whether the category of abelian quandles is a `symmetric monoidal closed' category. In this section, we show that the answer to this question is yes.
Recall that a symmetric monoidal category $C$ is \textit{closed} if for any object $Q \in C$, the functor $ - \otimes Q: C \rightarrow C$ has a right adjoint. We will denote this right adjoint by $\mathrm{\bf Hom}(Q,-)$ and the adjointness condition then says:
$$\textrm{Hom}(P \otimes Q, R) = \textrm{Hom}(P, \mathrm{\bf Hom}(Q,R)) \qquad (\ast) $$
for all $P,Q,R \in C$ and where ``=" means natural isomorphism. The right adjoint $\mathrm{\bf Hom}(-,-)$ is uniquely determined by adjointness and defines a functor $C^{op} \times C \rightarrow C$.
\subsection{Tensor Product of Abelian Quandles}
Let $Q$ and $A$ be abelian quandles. We define the {\it tensor product}, $Q \otimes A$, to be the free quandle on the set $Q \times A$ quotiented out by the relations
$$(q, a_1) \rhd (q, a_2) = (q, a_1 \rhd a_2) \qquad \textrm{and} \qquad (q_1, a) \rhd (q_2, a) = (q_1 \rhd q_2, a)$$
for $q, q_1, q_2 \in Q$ and $a, a_1, a_2 \in A$. For abelian quandles $X$, a homomorphism $Q \otimes A \rightarrow X$ is essentially the same thing as a \textit{bihomomorphism}, that is, a function $f: Q \times A \rightarrow X$ such that:
\begin{itemize}
\item $f(q, -): A \rightarrow X$ is a homomorphism for each $q \in Q$, and
\item $f(-, a): Q \rightarrow X$ is a homomorphism for each $a \in A$.
\end{itemize}
The unit for this tensor product is the one-element quandle $1$, which can be checked directly. We note that, in principle, the unit is actually the free quandle on a single generator, but that is, in fact, the one-element quandle due to the first quandle axiom.
We remark that this situation works very much as it does for modules. The key point is that both the theory of abelian quandles and the theory of modules are `commutative' theories. We recall that a \textit{commutative theory} is an algebraic theory such that each operation of the theory is a homomorphism. For example, the theory of abelian groups is commutative because for any abelian group $G$, the map $+: G \times G \rightarrow G$ is a homomorphism. More explicitly, in a commutative theory, given any $n$-ary operation $\alpha$ and any $m$-ary operation $\beta$, the equation
$$\alpha(\beta(x_{11}, \ldots, x_{1m}), \ldots , \beta(x_{n1}, \ldots, x_{nm})) = \beta(\alpha(x_{11}, \ldots, x_{n1}), \ldots , \alpha(x_{1m}, \ldots, x_{nm}))$$
holds. Since the theory of quandles only has one operation $\rhd$, all we need is the equation
$$(x_{11} \rhd x_{12}) \rhd (x_{21} \rhd x_{22}) = (x_{11} \rhd x_{21}) \rhd (x_{12} \rhd x_{22}),$$
and this is precisely what the definition of abelian quandle guarantees.
\subsection{Category of Abelian Quandles}
By Proposition \ref{p:main}, we know that given abelian quandles $Q$ and $A$, the set Hom$(Q,A)$ becomes an abelian quandle under pointwise operations. Indeed, $\textrm{Hom}(Q, A)$ is the underlying set of $\textrm{\bf Hom}(Q,A)$. It is not difficult to show that a homomorphism $Q \otimes A \rightarrow X$ is essentially the same thing as a homomorphism $Q \rightarrow \textrm{\bf Hom}(A, X)$. This fact, more formally, gives us the answer to the question raised at the beginning of this section:
\begin{theorem}
The category of abelian quandles is symmetric monoidal closed under the tensor product $\otimes$ and closed structure \textup{\textbf{Hom}(-,-)} defined above.
\end{theorem}
\begin{proof}
This follows from the main theorem of Linton \cite{L} since the theory of abelian quandles is commutative.
\end{proof}
We note that this means the category of abelian quandles can be enriched over itself, and the adjointness condition $(\ast)$ is a natural isomorphism of quandles
$$\textrm{\bf Hom}(Q \otimes A, X) = \textrm{\bf Hom}(Q, \textrm{\bf Hom}(A,X)).$$
\section{\large \textbf{Questions for Future Research}}\label{Q}
In this section we collect a few questions for future research.
In \cite{J}, Joyce shows that the fundamental abelian quandle of a classical
knot determines, and is determined by, the fundamental Alexander quandle of
the knot. Is the analogous statement true for Alexander biquandles?
What other properties does the hom quandle Hom$(Q,A)$ inherit from the quandles $Q$ and $A$? For example, does it inherit the properties of being connected, dihedral, a Core quandle, or a conjugation quandle? Given connected quandles, is it true that the hom quandle structure is determined by the counting invariant?
Moreover, what is the relationship (if any) between the cardinalities of $Q$, $A$, and Hom$(Q,A)$? How is the homology of $Q \otimes A$ related to the homologies of $Q$ and $A$? Under the conjugation or Core functors, the braid group becomes a quandle. Is this `braid quandle' abelian?
| {
"timestamp": "2014-03-11T01:01:07",
"yymm": "1310",
"arxiv_id": "1310.0852",
"language": "en",
"url": "https://arxiv.org/abs/1310.0852",
"abstract": "If $A$ is an abelian quandle and $Q$ is a quandle, the hom set $\\mathrm{Hom}(Q,A)$ of quandle homomorphisms from $Q$ to $A$ has a natural quandle structure. We exploit this fact to enhance the quandle counting invariant, providing an example of links with the same counting invariant values but distinguished by the hom quandle structure. We generalize the result to the case of biquandles, collect observations and results about abelian quandles and the hom quandle, and show that the category of abelian quandles is symmetric monoidal closed.",
"subjects": "Geometric Topology (math.GT); Quantum Algebra (math.QA)",
"title": "Hom Quandles",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969660413472,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7099044162012242
} |
https://arxiv.org/abs/1502.01944 | Sums of three cubes, II | Estimates are provided for $s$th moments of cubic smooth Weyl sums, when $4\le s\le 8$, by enhancing the author's iterative method that delivers estimates beyond classical convexity. As a consequence, an improved lower bound is presented for the number of integers not exceeding $X$ that are represented as the sum of three cubes of natural numbers. | \section{Introduction} A heuristic application of the Hardy-Littlewood (circle) method
suggests that the set of integers represented as the sum of three cubes of natural
numbers should have positive density. Although intense effort over the past $75$ years
has delivered a reasonable approximation to this expectation, an unconditional proof
remains elusive. However, each phase of progress has been accompanied by technological
advances of value elsewhere in applications of the circle method, and so even modest
advances remain of interest. The most recent progress \cite{Woo2000b} hinges on an
extension of Vaughan's method \cite{Vau1989} utilising smooth numbers, in which
fractional moments of exponential sums are estimated non-trivially. In this paper, we make
further progress on sums of three cubes by exploiting a new mean value estimate to
improve earlier estimates for fractional moments of cubic smooth Weyl sums. Although
these improvements are modest in scale, such estimates have found many applications
(see, for example, \cite{BBW1995}, \cite{BW2007}, \cite{BW2015a}), and it seems
reasonable to expect that our new bounds will also be of considerable utility.\par
We begin with a new lower bound for the number, $N(X)$, of integers not exceeding
$X$ which are the sum of three cubes of natural numbers.
\begin{theorem}\label{theorem1.1} One has $N(X)\gg X^\bet$, where
$\bet=0.91709477$.
\end{theorem}
Lower bounds for $N(X)$ are at least implicit in work of Hardy and Littlewood
\cite{HL1925} from 1925. By developing methods based on diminishing ranges and their
$p$-adic variants, Davenport \cite{Dav1939} established the lower bound
$N(X)\gg X^{13/15-\eps}$, subsequently obtaining $N(X)\gg X^{47/54-\eps}$ (see
\cite{Dav1950}). Thirty-five years later, Vaughan \cite{Vau1985}, \cite{Vau1986}
enhanced these methods, first proving that $N(X)\gg X^{8/9-\eps}$, and later that
$N(X)\gg X^{19/21-\eps}$. His seminal introduction \cite{Vau1989} of methods utilising
smooth numbers led to the lower bound $N(X)\gg X^{11/12-\eps}$ (see also Ringrose
\cite{Rin1986} for an intermediate result). The author's derivation of effective estimates
for fractional moments of smooth Weyl sums \cite{Woo1995} first delivered a lower
bound of the shape $N(X)\gg X^{1-\xi/3-\eps}$, where $\xi=0.24956813\ldots$ denotes
the positive root of the polynomial $\xi^3+16\xi^2+28\xi-8$. Subsequently, the author
obtained a similar estimate in which $\xi=(\sqrt{2833}-43)/41=0.24941301\ldots$
(see \cite{Woo2000b}). With this value of $\xi$, one has $1-\xi/3=0.91686232\ldots $,
which should be compared with the exponent $0.91709477$ of Theorem
\ref{theorem1.1}. Subject to the truth of an unproved Riemann Hypothesis concerning
a certain Hasse-Weil $L$-function, meanwhile, one has the conditional estimate
$N(X)\gg X^{1-\eps}$ due to Hooley \cite{Hoo1986,Hoo1997} and Heath-Brown
\cite{HB1998}.\par
Theorem \ref{theorem1.1} follows from an estimate for the sixth moment of a certain
smooth Weyl sum. Define the set of $R$-smooth numbers of size at most $P$ by
$$\calA(P,R)=\{ n\in [1,P]\cap \dbZ: \text{$p|n$ and $p$ prime}\Rightarrow p\le R\}.$$
Then, with $e(z)=e^{2\pi iz}$, we introduce the smooth and classical Weyl sums
\begin{equation}\label{1.1}
f(\alp;P,R)=\sum_{x\in \calA(P,R)}e(\alp x^3)\quad \text{and}\quad
F(\alp;P)=\sum_{1\le x\le P}e(\alp x^3).
\end{equation}
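For orientation only (the sketch plays no role in the proofs), the definitions above may be unpacked numerically as follows in Python: the first routine lists the $R$-smooth numbers up to $P$, and the second evaluates the smooth Weyl sum $f(\alpha;P,R)$ for modest values of $P$.
\begin{verbatim}
import cmath

def smooth_numbers(P, R):
    """Return A(P, R): integers 1 <= n <= P whose prime factors are all <= R."""
    def largest_prime_factor(n):
        p, m, largest = 2, n, 1
        while p * p <= m:
            while m % p == 0:
                largest, m = p, m // p
            p += 1
        return max(largest, m) if m > 1 else largest
    return [n for n in range(1, P + 1) if largest_prime_factor(n) <= R]

def smooth_weyl_sum(alpha, P, R):
    """f(alpha; P, R) = sum over x in A(P, R) of e(alpha x^3), e(z) = exp(2 pi i z)."""
    return sum(cmath.exp(2j * cmath.pi * alpha * x ** 3)
               for x in smooth_numbers(P, R))

print(len(smooth_numbers(100, 5)))        # number of 5-smooth integers up to 100
print(abs(smooth_weyl_sum(0.123, 100, 5)))
\end{verbatim}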
In \S7 we establish the mean value estimate contained in the following theorem.
\begin{theorem}\label{theorem1.2} Write $\del_6=0.24871567$. Then there exists a
positive number $\eta$ with the property that, whenever $R\le P^\eta$, one has
\begin{equation}\label{1.2}
\int_0^1|F(\alp;P)^2f(\alp;P,R)^4|\d\alp \ll P^{3+\del_6}.
\end{equation}
\end{theorem}
For comparison, \cite[Theorem 1.2]{Woo2000b} yields a similar estimate with
$\del_6=0.24941301\ldots $, whilst the earlier work of Vaughan \cite{Vau1989}
provides an analogous sixth moment estimate for $f(\alp;P,R)$ with associated exponent
$\del_6=\tfrac{1}{4}+\eps$, for any $\eps>0$. Note that in many applications
(see \cite{BW2007, BW2015a, BW2015b}), it is crucial that (\ref{1.2}) hold with
$\del_6<\tfrac{1}{4}$, hence the significance of Theorem \ref{theorem1.2}.\par
The bound (\ref{1.2}) of Theorem \ref{theorem1.2} leads to improvement in estimates
associated with the unrepresentation theory of Waring's problem for cubes. Let $E_s(X)$
denote the number of integers not exceeding $X$ which are {\it not} the sum
of $s$ cubes of natural numbers. Then the arguments of Br\"udern \cite{Bru1991}
and Kawada and Wooley \cite{KW2010} lead to the estimates recorded in the following
theorem.
\begin{theorem}\label{theorem1.3} Write
$\tau=\tfrac{2}{7}\left( \tfrac{1}{4}-0.24871567\right) =1/2725.15\dots $. Then
one has
$$E_4(X)\ll X^{37/42-\tau},\quad E_5(X)\ll X^{5/7-\tau},\quad
E_6(X)\ll X^{3/7-2\tau}.$$
\end{theorem}
The aforementioned work of Br\"udern \cite{Bru1991} yields the bound
$E_4(X)\ll X^{37/42+\eps}$, whilst Kawada and Wooley \cite[Theorem 1.4]{KW2010}
obtain a conclusion similar to that of Theorem \ref{theorem1.3}, though with $\tau$
slightly smaller than $1/5962$. We will not discuss the (routine) proof of Theorem
\ref{theorem1.3} further here, noting merely that the conclusion of Theorem
\ref{theorem1.2} is the key input into the methods of \cite{Bru1991}.\par
We establish Theorem \ref{theorem1.2} as a consequence of estimates for the mean
values
\begin{equation}\label{1.3}
U_s(P,R)=\int_0^1|f(\alp;P,R)|^s\d\alp ,
\end{equation}
with $4\le s\le 8$. The iterative method of \cite{Woo1995} obtains a bound for $U_s(P,R)$
in terms of corresponding bounds for $U_{s-2}(P,R)$ and $U_t(P,R)$, wherein $t$ is a
parameter to be chosen with $\tfrac{4}{3}(s-2)\le t\le 2(s-2)$. A key player in
determining the strength of these estimates is an exponential sum of the shape
$$\Ftil_1(\alp)=\sum_{\substack{u\in \calA(P^\tet R,R)\\ u>P^\tet}}
\sum_{\substack{z_1,z_2\in \calA(P,R)\\ z_1\equiv z_2\mmod{u^3}\\ z_1\ne z_2}}e(\alp
u^{-3}(z_1^3-z_2^3)),$$
in which $\tet$ is a parameter with $0\le \tet\le \tfrac{1}{3}$. This exponential sum is
made awkward to handle by the constraint that the summands $z_1$ and $z_2$ be
smooth. In this paper we estimate the auxiliary integral
$$\int_0^1\Ftil_1(\alp)|f(\alp;P^{1-\tet},R)|^{s-2}\d\alp $$
in terms of the mediating mean value
$$\int_0^1|\Ftil_1(\alp)^2f(\alp;P^{1-\tet},R)^2|\d\alp .$$
By orthogonality, the latter counts the number of solutions of an underlying Diophantine
equation. By discarding the smoothness constraint implicit in the sum $\Ftil_1(\alp)$, much
of the strength of the Hardy-Littlewood method may be preserved in the ensuing minor
arc estimate. After preparing an auxiliary estimate in \S2, we analyse this new mean value
in \S3, and indicate in \S4 how it may be utilised in the method of \cite{Woo1995}. Ideas
relevant for the estimation of the mean value $U_s(P,R)$ when $s=6$, and when $s>6.5$,
are presented in \S5.\par
The Keil-Zhao device (see \cite[page 608]{Kei2014} and the discussion leading to
\cite[equation (3.10)]{Zha2014}) enables us in \S6 to obtain stronger minor arc estimates
for smooth Weyl sums than available hitherto. When $\grm\subseteq [0,1)$, $0<t\le 2$ and
$s\ge 6$, this idea delivers an estimate of the shape
$$\int_\grm |f(\alp;P,R)|^{s+t}\d\alp \ll P^{t/2}\Bigl( \sup_{\alp \in \grm}|F(\alp;P)|
\Bigr)^{t/2}\int_0^1|f(\alp;P,R)|^s\d\alp ,$$
in place of
$$\int_\grm |f(\alp;P,R)|^{s+t}\d\alp \ll \Bigl( \sup_{\alp \in \grm}|f(\alp;P,R)|\Bigr)^t
\int_0^1|f(\alp;P,R)|^s\d\alp .$$
The ease with which classical Weyl sums can be estimated on sets of minor arcs ensures
that this device is of utility when $s$ lies between $6$ and $8$. In particular, in \S7 we
explain how to improve \cite[Theorem 2]{BW2001}, which establishes that when $R$ is a
small enough power of $P$, then $U_s(P,R)\ll P^{s-3}$ for $s\ge 7.691$.
\begin{theorem}\label{theorem1.4} Suppose that $\eta>0$ and $P$ is sufficiently large in
terms of $\eta$, and further that $R\le P^\eta$. Then provided that $s\ge 7.5906$, one
has
$$\int_0^1|f(\alp;P,R)|^s\d\alp \ll P^{s-3}.$$
\end{theorem}
Our estimates for the mean values $U_s(P,R)$ depend on those for $U_t(P,R)$ for
appropriate choices of $t$. In \S7, we describe how computations associated with this
complicated iteration were performed, and discuss the extent to which the computed
exponents reflect the sharpest available from this circle of ideas. These conclusions are
summarised in the following theorem.
\begin{theorem}\label{theorem1.5}
Let $(s,\del_s,\Del_s)$ be a triple listed in Table 1. Suppose that $\eta>0$ and $P$ is
sufficiently large in terms of $\eta$, and further that $R\le P^\eta$. Then
$$\int_0^1|f(\alp;P,R)|^s\d\alp \ll P^{s/2+\del_s}\quad \text{and}\quad
\int_0^1|f(\alp;P,R)|^s\d\alp \ll P^{s-3+\Del_s}.$$
\end{theorem}
Exponents may be derived for values of $s$ between those in the table by linear
interpolation using H\"older's inequality. Values of $\del_s$ and $\Del_s$ computed in \S7
have been rounded up, as appropriate, in the final decimal place recorded.\par
\begin{table}
\begin{tabular}{ | l | l | l || l | l | l | p{5cm} |}
\hline
\ $s$ & \ \ \ \ \ \ $\del_s$ & \ \ \ \ \ $\Del_s$ & \ $s$ & \ \ \ \ \ \ $\del_s$ &
\ \ \ \ \ $\Del_s$ \\ \hline
4.0&0.00000000&1.00000000&6.0&0.24871567&0.24871567\\ \hline
4.1&0.00130000&0.95130000&6.1&0.27667792&0.22667792\\ \hline
4.2&0.00495852&0.90495852&6.2&0.30598066&0.20598066\\ \hline
4.3&0.01069296&0.86069296&6.3&0.33718632&0.18718632\\ \hline
4.4&0.01811263&0.81811263&6.4&0.36984515&0.16984515\\ \hline
4.5&0.02685074&0.77685074&6.5&0.40263501&0.15263501\\ \hline
4.6&0.03754195&0.73754195&6.6&0.43542486&0.13542486\\ \hline
4.7&0.04903470&0.69903470&6.7&0.46851012&0.11851012\\ \hline
4.8&0.06130069&0.66130069&6.8&0.50330866&0.10330866\\ \hline
4.9&0.07426685&0.62426685&6.9&0.53863866&0.08863866\\ \hline
5.0&0.08780854&0.58780854&7.0&0.57423853&0.07423853\\ \hline
5.1&0.10328796&0.55328796&7.1&0.61131437&0.06131437\\ \hline
5.2&0.11894874&0.51894874&7.2&0.64881437&0.04881437\\ \hline
5.3&0.13477800&0.48477800&7.3&0.68631437&0.03631437\\ \hline
5.4&0.15076406&0.45076406&7.4&0.72381437&0.02381437\\ \hline
5.5&0.16689626&0.41689626&7.5&0.76131437&0.01131437\\ \hline
5.6&0.18316493&0.38316493&7.6&0.80000000&0.00000000\\ \hline
5.7&0.19954296&0.34954296&7.7&0.85000000&0.00000000\\ \hline
5.8&0.21593386&0.31593386&7.8&0.90000000&0.00000000\\ \hline
5.9&0.23232477&0.28232477&7.9&0.95000000&0.00000000\\ \hline
\end{tabular}
\vskip.2cm
\caption{Associated and admissible exponents for $4\le s\le 8$.}
\end{table}
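The interpolation just described is mechanical; the following small Python sketch (ours) produces an associated exponent for an intermediate value of $s$ from two neighbouring rows of Table 1 via H\"older's inequality.
\begin{verbatim}
def interpolate_exponent(s, s0, d0, s1, d1):
    """Associated exponent for s0 <= s <= s1 obtained from Hoelder's inequality:
    writing s = lam*s0 + (1-lam)*s1, one may take delta_s = lam*d0 + (1-lam)*d1."""
    lam = (s1 - s) / (s1 - s0)
    return lam * d0 + (1 - lam) * d1

# Example: an associated exponent at s = 6.05 from the rows s = 6.0 and s = 6.1.
print(interpolate_exponent(6.05, 6.0, 0.24871567, 6.1, 0.27667792))
\end{verbatim}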
In this paper, we adopt the convention that whenever $\eps$, $P$ or $R$ appear in a
statement, either implicitly or explicitly, then for each $\eps>0$, there exists a
positive number $\eta=\eta(\eps)$ such that the statement holds whenever $R\le P^\eta$
and $P$ is sufficiently large in terms of $\eps$ and $\eta$. Implicit constants in
Vinogradov's notation $\ll$ and $\gg$ will depend at most on $\eps$ and $\eta$. Since our
iterative methods involve only a finite number of statements (depending at most on
$\eps$), there is no danger of losing control of implicit constants. Finally, write
$\|\tet\|=\underset{y\in\dbZ}{\min}|\tet-y|$.
\section{An auxiliary mean value estimate} Before announcing our pivotal mean value
estimate, we introduce some notation. Let $\phi$ be a real number with
$0\le \phi\le \tfrac{1}{3}$, and write
\begin{equation}\label{2.1}
M=P^\phi,\quad H=PM^{-3}\quad \text{and}\quad Q=PM^{-1}.
\end{equation}
Define the exponential sums
\begin{equation}\label{2.2}
F_1(\alp)=\sum_{1\le z\le 2P}\sum_{1\le h\le H}\sum_{M<m\le MR}
e(2\alp h(3z^2+h^2m^6)),
\end{equation}
$$D(\alp)=\sum_{1\le h\le H}\biggl| \sum_{1\le z\le 2P}e(6\alp hz^2)\biggr|^2$$
and
\begin{equation}\label{2.3}
E(\alp)=\sum_{1\le h\le H}\biggl| \sum_{M<m\le MR}e(2\alp h^3m^6)\biggr|^2.
\end{equation}
Also, when $\grB\subseteq [0,1)$, we introduce the mean value
\begin{equation}\label{2.4}
\Ups(P,R;\phi;\grB)=\int_\grB|F_1(\alp)^2f(\alp;2Q,R)^2|\d\alp ,
\end{equation}
and then write $\Ups(P,R;\phi)=\Ups(P,R;\phi;[0,1))$. We observe that an application of
Cauchy's inequality to (\ref{2.2}) yields the bound $|F_1(\alp)|^2\le D(\alp)E(\alp)$.
Consequently, when $t\ge 2$, we obtain the estimate
\begin{equation}\label{2.5}
\Ups(P,R;\phi;\grB )\le \int_\grB \left( D(\alp)E(\alp)\right)^{2/t}|F_1(\alp)|^{2-4/t}
|f(\alp;2Q,R)|^2\d\alp .
\end{equation}
Recall the definition (\ref{1.3}) of the mean value $U_s(P,R)$. We say that an exponent
$\mu_s$ is {\it permissible} whenever it has the property that, with the notational
conventions introduced above, one has $U_s(P,R)\ll P^{\mu_s+\eps}$. It follows that, for
each positive number $s$, a permissible exponent $\mu_s$ exists with
$s/2\le \mu_s\le s$. We refer to the exponent $\del_s$ as {\it associated} when
$\mu_s=s/2+\del_s$ is permissible, and $\Del_s$ as {\it admissible} when
$\mu_s=s-3+\Del_s$ is permissible.\par
We require a Hardy-Littlewood dissection. Let $\grm$ denote the set of points
$\alp\in [0,1)$ with the property that, whenever there exist $a\in \dbZ$ and $q\in \dbN$
with $(a,q)=1$ and $|q\alp-a|\le PQ^{-3}$, then one has $q>P$. Further, let
$\grM=[0,1)\setminus \grm$.
\begin{lemma}\label{lemma2.1} Suppose that $t\ge 4$ and $0\le \phi\le \tfrac{1}{3}$.
Then whenever $\del_t$ is an associated exponent, one has
$$\Ups(P,R;\phi;\grm)\ll P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}.$$
\end{lemma}
\begin{proof} We ultimately work outside the range $0\le \phi\le \tfrac{1}{7}$ in which
the estimate
$$\sup_{\alp \in \grm}|F_1(\alp)|\ll P^\eps (PM)^{1/2}H$$
follows from \cite[Lemmata 3.1 and 3.4]{Vau1989}, and so we engineer a hybrid method
combining elements of the Hardy-Littlewood method with a Diophantine interpretation of
auxiliary equations. We begin by applying H\"older's inequality to (\ref{2.5}), obtaining
the bound
\begin{equation}\label{2.6}
\Ups(P,R;\phi;\grm )\le \Bigl( \sup_{\alp\in \grm}D(\alp)\Bigr)^{2/t}I_1^{2/t}I_2^{1-4/t}
U_t(2Q,R)^{2/t},
\end{equation}
where $U_t(2Q,R)$ is defined via (\ref{1.3}),
\begin{equation}\label{2.7}
I_1=\int_0^1E(\alp)|F_1(\alp)|^2\d\alp \quad \text{and}\quad
I_2=\int_0^1|F_1(\alp)|^2\d\alp .
\end{equation}
\par The estimates
\begin{equation}\label{2.8}
I_2\ll P^{1+\eps}MH\quad \text{and}\quad U_t(2Q,R)\ll Q^{t/2+\del_t+\eps}
\end{equation}
follow, respectively, from \cite[Lemma 2.3]{Vau1989} with $j=1$ and the definition of an
associated exponent. Also, given $\alp\in [0,1)$, we find from \cite[Lemma 3.1]{Vau1989}
that whenever $a\in \dbZ$ and $q\in \dbN$ satisfy $(a,q)=1$ and $|\alp-a/q|\le q^{-2}$,
then
\begin{equation}\label{2.9}
D(\alp)\ll P^\eps \biggl( \frac{P^2H}{q+Q^3|q\alp-a|}+PH+q+Q^3|q\alp-a|\biggr) .
\end{equation}
By Dirichlet's theorem on Diophantine approximation, there exist $a\in \dbZ$ and
$q\in \dbN$ with $0\le a\le q\le P^{-1}Q^3$, $(a,q)=1$ and $|q\alp -a|\le PQ^{-3}$.
When $\alp\in \grm$, it follows that $q>P$, and hence we deduce via (\ref{2.1}) that
\begin{equation}\label{2.10}
\sup_{\alp\in \grm}D(\alp)\ll P^\eps (PH+P^{-1}Q^3)\ll P^{1+\eps}H.
\end{equation}
\par Finally, by reference to (\ref{2.2}), (\ref{2.3}) and (\ref{2.7}), it follows from
orthogonality that $I_1$ counts the number of integral solutions of the equation
\begin{equation}\label{2.11}
h_0^3(n_1^6-n_2^6)=h_1(3z_1^2+h_1^2m_1^6)-h_2(3z_2^2+h_2^2m_2^6),
\end{equation}
with
$$1\le h_0,h_1,h_2\le H,\quad M<n_1,n_2,m_1,m_2\le MR\quad \text{and}\quad 1\le
z_1,z_2\le 2P.$$
Let $N_1$ denote the number of solutions of (\ref{2.11}) counted by $I_1$ in which
$n_1=n_2$, let $N_2$ denote the corresponding number in which
$h_1z_1^2\ne h_2z_2^2$, and let $N_3$ denote the number with $n_1\ne n_2$ and
$h_1z_1^2=h_2z_2^2$. Thus $I_1\le N_1+N_2+N_3$.\par
By orthogonality, it follows from (\ref{2.2}) and (\ref{2.11}) with $n_1=n_2$ that
$$N_1\le HMR\int_0^1|F_1(\alp)|^2\d\alp ,$$
and hence we deduce from (\ref{2.7}) and (\ref{2.8}) that
\begin{equation}\label{2.12}
N_1\ll P^{1+\eps}M^2H^2.
\end{equation}
\par When $\bfh,\bfm,\bfn,\bfz$ is a solution of (\ref{2.11}) counted by $N_2$, the
integer
$$L=h_0^3(n_1^6-n_2^6)-h_1^3m_1^6+h_2^3m_2^6$$
is non-zero. There are $O(H^3(MR)^4)$ possible choices for $L$, and we find from
(\ref{2.11}) that for each fixed choice one has $3(h_1z_1^2-h_2z_2^2)=L$. With $h_1$
and $h_2$ already fixed, it follows from \cite[Lemma 3.5]{VW1995} that the number of
possible choices for $z_1$ and $z_2$ is $O((h_1h_2|L|P)^\eps)$. Thus we
conclude that
\begin{equation}\label{2.13}
N_2\ll P^\eps H^3M^4.
\end{equation}
\par Finally, consider a solution $\bfh,\bfm,\bfn,\bfz$ counted by $N_3$. Given $h_2$ and
$z_2$, an elementary estimate for the divisor function shows that the number of possible
choices for $h_1$ and $z_1$ satisfying $h_1z_1^2=h_2z_2^2$ is $O((HP)^\eps)$. Fix
any one amongst these $O((HP)^{1+\eps})$ possible choices for $h_1,h_2,z_1,z_2$. One
finds from (\ref{2.11}) that $h_0,\bfm,\bfn$ satisfy the equation
$$(h_1m_1^2)^3-(h_2m_2^2)^3=h_0^3(n_1^6-n_2^6).$$
Since $n_1\ne n_2$, the right hand side here is non-zero, and hence also the left hand
side. Thus, again applying a divisor function estimate, it follows that for any one amongst
the $O((MR)^2)$ possible choices for $m_1$ and $m_2$, there are $O(P^\eps)$ possible
choices for $h_0$, $n_1-n_2$ and $n_1^5+n_1^4n_2+\ldots +n_2^5$. We deduce that
there are just $O(P^\eps)$ possible choices for $h_0$, $n_1$ and $n_2$, and thus
\begin{equation}\label{2.14}
N_3\ll P^\eps (HP)^{1+\eps}(MR)^2\ll P^{1+3\eps}HM^2.
\end{equation}
\par On combining (\ref{2.12})--(\ref{2.14}), we conclude via (\ref{2.1}) that
$$I_1\le N_1+N_2+N_3\ll P^\eps(PM^2H^2+H^3M^4)\ll P^{1+\eps}M^2H^2.$$
Substituting this estimate together with (\ref{2.8}) and (\ref{2.10}) into (\ref{2.6}), we
arrive at the upper bound
$$\Ups(P,R;\phi;\grm)\ll P^\eps (PH)^{2/t}(PM^2H^2)^{2/t}(PMH)^{1-4/t}
Q^{1+2\del_t/t},$$
and, since $(PH)^{2/t}(PM^2H^2)^{2/t}(PMH)^{1-4/t}=PMH^{1+2/t}$, the conclusion of the
lemma follows.
\end{proof}
We require a complementary major arc estimate.
\begin{lemma}\label{lemma2.2} Suppose that $t\ge 4$ and $0\le \phi\le \tfrac{1}{3}$.
Then whenever $\del_t$ is an associated exponent, one has
$$\Ups(P,R;\phi;\grM)\ll P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}.$$
\end{lemma}
\begin{proof} The major arcs $\grM$ are contained in the union of the intervals
$$\grM(q,a)=\{ \alp\in [0,1):|q\alp-a|\le PQ^{-3}\},$$
with $0\le a\le q\le P$ and $(a,q)=1$. Define $\Del(\alp)$ for $\alp\in [0,1)$ by putting
$$\Del(\alp)=(q+Q^3|q\alp-a|)^{-1},$$
when $\alp\in \grM(q,a)\subseteq \grM$, and otherwise by setting $\Del(\alp)=0$. Then it
follows
from (\ref{2.9}) that when $\alp\in \grM$, one has
\begin{equation}\label{2.15}
D(\alp)\ll P^{2+\eps}H\Del(\alp)+P^{1+\eps}H.
\end{equation}
We apply H\"older's inequality to (\ref{2.5}), just as in the treatment of
$\Ups(P,R;\phi;\grm)$ in the proof of Lemma \ref{lemma2.1}. Thus, by comparing
(\ref{2.10}) and (\ref{2.15}), we obtain
$$\Ups(P,R;\phi;\grM)\ll P^\eps \left(PMH^{1+2/t}Q^{1+2\del_t/t}+(P^2HT)^{2/t}
\Ups(P,R;\phi;\grM)^{1-2/t}\right),$$
where
\begin{equation}\label{2.16}
T=\int_\grM\Del(\alp)E(\alp)|f(\alp;2Q,R)|^2\d\alp .
\end{equation}
Thus we infer that
\begin{equation}\label{2.17}
\Ups(P,R;\phi;\grM)\ll P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}+P^{2+\eps}HT.
\end{equation}
\par In preparation for the estimation of $T$, we consider the mean value
$$T_0=\int_0^1E(\alp)|f(\alp;2Q,R)|^2\d\alp .$$
By reference to (\ref{2.3}), it follows from orthogonality that $T_0$ counts the number of
integral solutions of the equation
$$2h^3(n_1^6-n_2^6)=x_1^3-x_2^3,$$
with $1\le h\le H$, $M<n_1,n_2\le MR$ and $x_1,x_2\in \calA(2Q,R)$. Here, the number
of diagonal solutions with $x_1=x_2$ and $n_1=n_2$ is $O(HMRQ)$. There are
$O(H(MR)^2)$ possible choices for $h$, $n_1$ and $n_2$ with
$2h^3(n_1^6-n_2^6)\ne 0$. For each fixed such choice, an elementary estimate for the
divisor function shows that there are $O(Q^\eps)$ possible choices for $x_1-x_2$ and
$x_1^2+x_1x_2+x_2^2$, hence also for $x_1$ and $x_2$. Then we conclude via
(\ref{2.1}) that
\begin{equation}\label{2.18}
T_0\ll P^\eps (HMQ+HM^2)\ll P^{1+\eps}H.
\end{equation}
\par On recalling (\ref{2.3}), one finds that
$$E(\alp)|f(\alp;2Q,R)|^2=\sum_{l\in \dbZ}\psi(l)e(l\alp),$$
where $\psi(l)$ denotes the number of solutions of the equation
$$2h^3(n_1^6-n_2^6)+x_1^3-x_2^3=l,$$
with $1\le h\le H$, $M<n_1,n_2\le MR$ and $x_1,x_2\in \calA(2Q,R)$. In view of
(\ref{2.18}), one has $\psi(0)=T_0\ll P^{1+\eps}H$. Moreover,
$$\sum_{l\in \dbZ}\psi(l)=E(0)f(0;2Q,R)^2\ll H(MR)^2Q^2.$$
Then by applying \cite[Lemma 2]{Bru1988} within (\ref{2.16}), we deduce via (\ref{2.1})
that
$$T\ll Q^{\eps-3}\left( P(P^{1+\eps}H)+H(MR)^2Q^2\right) \ll P^{2\eps}.$$
On substituting this estimate into (\ref{2.17}), we conclude that
$$\Ups(P,R;\phi;\grM)\ll P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}+P^{2+\eps}H.$$
The proof of the lemma is completed by noting that the relations (\ref{2.1}) ensure that
the second term on the right hand side here is majorised by the first.
\end{proof}
We finish this section by combining the conclusions of Lemmata \ref{lemma2.1} and
\ref{lemma2.2}.
\begin{lemma}\label{lemma2.3} Suppose that $t\ge 4$ and $0\le \phi\le \tfrac{1}{3}$.
Then whenever $\del_t$ is an associated exponent, one has
$$\int_0^1|F_1(\alp)^2f(2\alp;Q,R)^2|\d\alp \ll P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}.
$$
\end{lemma}
\begin{proof} On recalling (\ref{2.4}), the desired conclusion follows from Lemmata
\ref{lemma2.1} and \ref{lemma2.2} by means of the relation
$$\Ups(P,R;\phi)=\Ups(P,R;\phi;\grM)+\Ups(P,R;\phi;\grm).$$
\end{proof}
\section{Further auxiliary mean value estimates} We now introduce notation more closely
aligned with the author's work \cite{Woo1995, Woo2000a, Woo2000b} on fractional
moments of smooth Weyl sums. We define the modified set of smooth numbers
$\calB(L,\pi,R)$ for prime numbers $\pi$ by putting
$$\calB(L,\pi,R)=\{ n\in \calA(L\pi,R):\text{$n>L$ and $\pi|n$}\}.$$
Recall the notation (\ref{2.1}). We define the exponential sums
\begin{align}
\Ftil_{d,e}(\alp;\pi)&=\sum_{u\in \calB(M/d,\pi,R)}\,
\sum_{\substack{x,y\in \calA(P/(de),R)\\ (x,u)=(y,u)=1\\ x\equiv y\mmod{u^3}\\
y<x}}e\left(\alp u^{-3}(x^3-y^3)\right),\label{3.1}\\
F_{d,e}(\alp)&=\sum_{1\le z\le 2P/(de)}\sum_{1\le h\le Hd^2/e}\sum_{M/d<u\le MR/d}
e\left(2\alp h(3z^2+h^2u^6)\right)\label{3.2}
\end{align}
and
\begin{equation}\label{3.3}
\ftil(\alp;P,M,R)=\max_{m>M}\biggl| \sum_{x\in \calA(P/m,R)}e(\alp x^3)\biggr|.
\end{equation}
Note here that $F_{d,e}(\alp)=0$ whenever $e>Hd^2$. Finally, we put
\begin{equation}\label{3.4}
\Ups_{d,e,\pi}(P,R;\phi)=\int_0^1 |\Ftil_{d,e}(\alp;\pi)^2\ftil(\alp;P/(de),M/d,\pi)^2|
\d\alp .
\end{equation}
\par We begin by demystifying the mean value $\Ups_{d,e,\pi}(P,R;\phi)$.
\begin{lemma}\label{lemma3.1}
When $\pi\le R$, one has
$$\Ups_{d,e,\pi}(P,R;\phi)\ll P^\eps \int_0^1|F_{d,e}(\alp)^2f(\alp;2Q/e,R)^2|\d\alp .$$
\end{lemma}
\begin{proof} We first eliminate the maximal aspect of the sum
$\ftil(\alp;P/(de),M/d,\pi)$ implicit in $\Ups_{d,e,\pi}(P,R;\phi)$. Define
$$\calD_K(\tet)=\sum_{|m|\le K^3}e(m\tet)\quad \text{and}\quad \calD_K^*(\tet)=\min
\{ 2K^3+1,\|\tet\|^{-1}\},$$
and note that for $K\ge 1$, one has
\begin{equation}\label{3.5}
\int_0^1\calD_K^*(\tet)\d\tet\ll \log (2K).
\end{equation}
On recalling (\ref{2.1}), we find that whenever $m>M$, then one has
$$\sum_{x\in \calA(P/m,R)}e(\alp x^3)=\int_0^1f(\alp+\tet;Q,R)
\calD_{P/m}(\tet)\d\tet.$$
Since $\calD_{P/m}(\tet)\ll \calD_{P/m}^*(\tet)\le \calD_Q^*(\tet)$ for $m>M$, we thus
infer from (\ref{3.3}) that
\begin{equation}
\ftil(\alp;P/(de),M/d,\pi)\ll \int_0^1|f(\alp+\tet;Q/e,\pi)|\calD_Q^*(\tet)\d\tet .\label{3.6}
\end{equation}
\par On substituting (\ref{3.6}) into (\ref{3.4}), we deduce that
$$\Ups_{d,e,\pi}(P,R;\phi)\ll \int_{[0,1)^3}|\Ftil_{d,e}(\alp;\pi)^2f_{\tet_1}(\alp)
f_{\tet_2}(\alp)|\calD_Q^*(\tet_1)\calD_Q^*(\tet_2)\d\tet_1\d\tet_2\d\alp ,$$
where, temporarily, we abbreviate $f(\alp+\tet;Q/e,\pi)$ to $f_\tet(\alp)$. Write
\begin{equation}\label{3.7}
\Xi_{d,e,\pi}(\tet)=\int_0^1|\Ftil_{d,e}(\alp;\pi)^2f(\alp+\tet;Q/e,\pi)^2|\d\alp .
\end{equation}
Then by applying the inequality $|z_1z_2|\le |z_1|^2+|z_2|^2$ and invoking symmetry,
we infer via (\ref{3.5}) that
\begin{align}
\Ups_{d,e,\pi}(P,R;\phi)&\ll \int_0^1\Xi_{d,e,\pi}(\tet_1)\calD_Q^*(\tet_1)\d\tet_1
\int_0^1\calD_Q^*(\tet_2)\d\tet_2 \notag\\
&\ll Q^\eps \int_0^1 \Xi_{d,e,\pi}(\tet_1)\calD_Q^*(\tet_1)\d\tet_1.\label{3.8}
\end{align}
\par Consider next the integral solutions of the equation
\begin{equation}\label{3.9}
u_1^{-3}(x_1^3-y_1^3)-u_2^{-3}(x_2^3-y_2^3)=w_1^3-w_2^3,
\end{equation}
with, for $i=1$ and $2$, the constraints
$$w_i\in \calA(Q/e,\pi),\quad u_i\in \calB(M/d,\pi,R),\quad x_i,y_i\in \calA(P/(de),R),$$
$$(x_i,u_i)=(y_i,u_i)=1,\quad x_i\equiv y_i\mmod{u_i^3}\quad \text{and}\quad
y_i<x_i.$$
Then by orthogonality, it follows from (\ref{3.1}) and (\ref{3.7}) that the mean value
$\Xi_{d,e,\pi}(\tet)$ counts the number of such solutions, with each solution counted with
weight $e(\tet (w_2^3-w_1^3))$. The latter weight being unimodular, it follows that
$|\Xi_{d,e,\pi}(\tet)|$ is bounded above by the corresponding number of unweighted
solutions, and hence by the number of integral solutions of the equation (\ref{3.9}) with,
for $i=1$ and $2$, the constraints
$$w_i\in \calA(Q/e,R),\quad M/d<u_i\le MR/d,$$
$$1\le y_i<x_i\le P/(de)\quad \text{and}\quad x_i\equiv y_i\mmod{u_i^3}.$$
We now substitute $z_i=x_i+y_i$ and $h_i=(x_i-y_i)u_i^{-3}$ $(i=1,2)$ into equation
(\ref{3.9}). It follows that $1\le h_i\le (P/(de))(M/d)^{-3}$ for $i=1$ and $2$. Moreover,
we have $2x_i=z_i+h_iu_i^3$ and $2y_i=z_i-h_iu_i^3$ $(i=1,2)$. Then on noting that
$$u^{-3}\left( (z+hu^3)^3-(z-hu^3)^3\right)=2h(3z^2+h^2u^6),$$
and recalling (\ref{2.1}), we see that $|\Xi_{d,e,\pi}(\tet)|$ is bounded above by the
number of integral solutions of the equation
$$2h_1(3z_1^2+h_1^2u_1^6)-2h_2(3z_2^2+h_2^2u_2^6)=w_1^3-w_2^3,$$
with, for $i=1$ and $2$,
$$w_i\in \calA(2Q/e,R),\quad M/d<u_i\le MR/d,$$
$$1\le z_i\le 2P/(de)\quad \text{and}\quad 1\le h_i\le Hd^2/e.$$
Then on recalling (\ref{3.2}), it follows by orthogonality that
$$|\Xi_{d,e,\pi}(\tet)|\le \int_0^1|F_{d,e}(\alp)^2f(\alp;2Q/e,R)^2|\d\alp .$$
On substituting this estimate into (\ref{3.8}), we conclude that
$$\Ups_{d,e,\pi}(P,R;\phi)\ll Q^\eps \biggl( \int_0^1 \calD_Q^*(\tet)\d\tet\biggr)
\int_0^1|F_{d,e}(\alp)^2f(\alp;2Q/e,R)^2|\d\alp .$$
The conclusion of the lemma now follows on applying the bound (\ref{3.5}).
\end{proof}
\begin{lemma}\label{lemma3.2}
Suppose that
$$1\le d\le M,\quad 1\le e\le \min\{ Q,Hd^2\}\quad \text{and}\quad 0\le \phi\le
\tfrac{1}{3}.$$
Then, whenever $t\ge 4$ and $\del_t$ is an associated exponent, one has
$$\Ups_{d,e,\pi}(P,R;\phi)\ll d^{4/t}e^{-3-2/t}P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}.
$$
\end{lemma}
\begin{proof} A comparison of (\ref{2.2}) and (\ref{3.2}) reveals that, as a consequence
of Lemma \ref{lemma2.3} in combination with (\ref{2.1}), whenever $t\ge 4$ and
$M^3\le P$, one has
\begin{equation}\label{3.10}
\int_0^1|F_{1,1}(\alp)^2f(2\alp;Q,R)^2|\d\alp \ll
P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}.
\end{equation}
We apply this conclusion with $P/(de)$ in place of $P$ and $M/d$ in place of $M$. In view
of the relations (\ref{2.1}), we have also $Hd^2/e$ in place of $H$ and $Q/e$ in place of
$Q$. The hypotheses of the lemma concerning $e$ and $\phi$ then ensure that
$$(M/d)^3(P/(de))^{-1}=e/(Hd^2)\le 1,$$
whence $(M/d)^3\le P/(de)$, confirming the validity of the estimate (\ref{3.10}) with
these substitutions. Hence we obtain the estimate
\begin{align*}
\int_0^1|F_{d,e}(\alp)^2f(\alp;2Q/e,R)^2|\d\alp &\ll
\left( \frac{P}{de}\right)^{1+\eps}\left(\frac{M}{d}\right)
\left( \frac{Hd^2}{e}\right)^{1+2/t}\left( \frac{Q}{e}\right)^{1+2\del_t/t}\\
&\ll d^{4/t}e^{-3-2/t-2\del_t/t}P^{1+\eps}MH^{1+2/t}Q^{1+2\del_t/t}.
\end{align*}
Since Lemma \ref{lemma3.1} establishes the relation
$$\Ups_{d,e,\pi}(P,R;\phi)\ll P^\eps \int_0^1|F_{d,e}(\alp)^2f(\alp;2Q/e,R)^2|\d\alp ,$$
the conclusion of the lemma follows on noting that $\del_t\ge 0$.
\end{proof}
We also have need of estimates for the mean values
\begin{equation}\label{3.11}
\Lam_{d,e,\pi}^{(m)}(P,R;\phi)=\int_0^1|\Ftil_{d,e}(\alp;\pi)|^{2m}\d\alp \quad
(m=1,2).
\end{equation}
\begin{lemma}\label{lemma3.3} When $1\le d\le M$, $1\le e\le \min\{Q,Hd^2\}$ and
$\pi\le R$, one has
$$\Lam_{d,e,\pi}^{(1)}(P,R;\phi)\ll P^{1+\eps}HMe^{-2}\quad \text{and}\quad
\Lam_{d,e,\pi}^{(2)}(P,R;\phi)\ll P^{2+\eps}H^3M^4e^{-5}.$$
\end{lemma}
\begin{proof} These estimates are given by \cite[equations (3.25) and (3.26)]{Woo1995}.
\end{proof}
Finally, we recall an estimate for the mean value
\begin{equation}\label{3.12}
\Util_s(P,M,R)=\int_0^1\ftil(\alp;P,M,R)^s\d\alp .
\end{equation}
\begin{lemma}\label{lemma3.4}
Suppose that $s>1$ and that $\del_s$ is an associated exponent. Then whenever $P>M$
and $R>2$, one has $\Util_s(P,M,R)\ll_s (P/M)^{s/2+\del_s+\eps}$.
\end{lemma}
\begin{proof} This is immediate from \cite[Lemma 3.2]{Woo1995}.
\end{proof}
\section{New associated exponents, I: $4\le s\le 6.5$} We now convert the mean value
estimates of \S2 into new associated exponents by means of the ideas of
\cite[\S\S2--4]{Woo1995}. Write
\begin{equation}\label{4.1}
\Ome_{d,e,\pi}(P,R;\phi)=\int_0^1|\Ftil_{d,e}(\alp;\pi)\ftil(\alp;P/(de),M/d,\pi)^{s-2}|
\d\alp ,
\end{equation}
and then put
\begin{equation}\label{4.2}
\calU_s(P,R)=\sum_{1\le d\le D}\sum_{\pi\le R}\sum_{1\le e\le Q}
d^{2-s/2}e^{s/2-1}\Ome_{d,e,\pi}(P,R;\phi).
\end{equation}
The relevant results from \cite{Woo1995} are summarised in the following lemma.
\begin{lemma}\label{lemma4.1} Suppose that $s>4$ and $0<\phi\le \tfrac{1}{3}$. Then
whenever $\mu_{s-2}$ and $\mu_s$ are permissible exponents, and $1\le D\le P^{1/3}$,
one has
$$U_s(P,R)\ll P^{\mu_s+\eps}D^{s/2-\mu_s}+MP^{1+\mu_{s-2}+\eps}+
P^{\left(\tfrac{s-3}{s-2}\right)\mu_s+\eps}V_s(P,R),$$
where
$$V_s(P,R)=\left( PM^{s-2}Q^{\mu_{s-2}}+M^{s-3}\calU_s(P,R)\right)^{1/(s-2)}.$$
\end{lemma}
\begin{proof} The desired result follows at once on substituting the conclusion of
\cite[Lemma 3.3]{Woo1995} into that of \cite[Lemma 2.3]{Woo1995}.
\end{proof}
We are now equipped to announce our new associated exponents.
\begin{lemma}\label{lemma4.2} Suppose that $s\ge 4$ and $0\le \gam\le \tfrac{1}{4}$,
and let $t$ satisfy
\begin{equation}\label{4.3}
\frac{2s-6+8\gam}{1+2\gam}\le t\le \frac{2s-4}{1+2\gam}.
\end{equation}
Suppose that $\del_{s-2}$ and $\del_t$ are associated exponents, and put
\begin{equation}\label{4.4}
\tet_0=\frac{2s-4-t+2(s-2)\del_t-2t\del_{s-2}}
{6s-12+t-4\gam t+2(s-2)\del_t-2t\del_{s-2}}.
\end{equation}
Then the exponent $\del_s=\del_{s-2}(1-\tet)+\tfrac{1}{2}(s-2)\tet$ is associated, where
we write $\tet=\max\{0,\min\{\tet_0,\tfrac{1}{3}\}\}$.
\end{lemma}
\begin{proof} We begin by estimating the mean value $\Ome_{d,e,\pi}(P,R;\phi)$.
Suppose that
$$d\le M,\quad e\le \min\{Q,Hd^2\},\quad \pi\le R\quad \text{and}\quad 0\le \phi\le
\tfrac{1}{3}.$$
Then on recalling (\ref{3.4}), (\ref{3.11}) and (\ref{3.12}), an application of H\"older's
inequality to (\ref{4.1}) yields the bound
\begin{align}
\Ome_{d,e,\pi}(P,R;\phi)\le &\, \Ups_{d,e,\pi}(P,R;\phi)^{\gam_1}
\Util_t(P/(de),M/d,\pi)^{\gam_2}\notag \\
&\, \times \Lam_{d,e,\pi}^{(1)}(P,R;\phi)^{\gam_3}
\Lam_{d,e,\pi}^{(2)}(P,R;\phi)^\gam,
\label{4.5}
\end{align}
where
$$\gam_1=\tfrac{1}{4}(2s-4-t-2t\gam),\quad \gam_2=(s-2-2\gam_1)/t\quad
\text{and}\quad \gam_3=\tfrac{1}{2}-\gam_1-2\gam .$$
\par A few words are in order to confirm that the above is indeed a valid application of
H\"older's inequality. Observe first that the hypotheses $s>4$ and
$0\le \gam\le \tfrac{1}{4}$, together with those concerning the value $t$, ensure that
$$2s-6+8\gam\le t(1+2\gam)\le 2s-4,$$
so that
$$0\le \gam_1\le \tfrac{1}{4}\left( (2s-4)-(2s-6+8\gam)\right) =
\tfrac{1}{2}(1-4\gam)\le 1.$$
Hence we deduce that
$$0=\tfrac{1}{2}-\tfrac{1}{2}(1-4\gam)-2\gam\le \gam_3\le \tfrac{1}{2}-2\gam<1.$$
Also, since $s\ge 4$ and $\gam_1\le \tfrac{1}{2}(1-4\gam)$, one finds that
$$\gam_2\ge (s-3+4\gam)/t>0.$$
Moreover, since $t\ge (2s-6+8\gam)/(1+2\gam)$, we have
$$(1+2\gam)(s-2-2\gam_1-t)\le 4-s-2\gam_1-\gam(12+4\gam_1-2s).$$
When $4\le s\le 6$, we therefore deduce that
$$t(1+2\gam)(\gam_2-1)\le 4-s-2\gam_1\le 0,$$
and when $s>6$ we see instead that
$$t(1+2\gam)(\gam_2-1)\le 4-s-2\gam_1+\tfrac{1}{4}(2s-12)\le 1-\tfrac{1}{2}s\le 0.$$
Thus, in all circumstances, one has $0\le \gam_2\le 1$. Finally, the relations
\begin{equation}\label{4.6}
\gam+\gam_1+\gam_2+\gam_3=1,\quad 4\gam+2\gam_1+2\gam_3=1\quad
\text{and}\quad 2\gam_1+t\gam_2=s-2
\end{equation}
follow by direct computation.\par
By applying Lemmata \ref{lemma3.2}--\ref{lemma3.4}, we deduce from (\ref{4.5}) that
\begin{align*}
\Ome_{d,e,\pi}(P,R;\phi)\ll &\, P^\eps
\left( d^{4/t}e^{-3-2/t}PMH^{1+2/t}Q^{1+2\del_t/t}\right)^{\gam_1}\\
&\times (PMHe^{-2})^{\gam_3}(P^2M^4H^3e^{-5})^{\gam}
\left( (Q/e)^{t/2+\del_t}\right)^{\gam_2}.
\end{align*}
Thus, by making use of the relations (\ref{4.6}) and
$$t\ge 2,\quad \gam_1\le \tfrac{1}{2},\quad 3\gam_1+
\tfrac{1}{2}t\gam_2+2\gam_3+5\gam\ge \tfrac{1}{2}s,\quad 2\gam_1+t\gam=s-2-
\tfrac{1}{2}t,$$
we deduce that
\begin{equation}\label{4.7}
\Ome_{d,e,\pi}(P,R;\phi)\ll de^{-s/2}P^{1/2+\eps}M^{1/2+2\gam}H^{(s-2)/t}
Q^{s/2-1+(s-2)\del_t/t}.
\end{equation}
\par When $e>Hd^2$, one has $F_{d,e}(\alp)=0$, and hence
$\Ome_{d,e,\pi}(P,R;\phi)=0$. Thus, on substituting (\ref{4.7}) into (\ref{4.2}), we
discern that
$$\calU_s(P,R)\ll P^{1/2+\eps}M^{1/2+2\gam}H^{(s-2)/t}
Q^{s/2-1+(s-2)\del_t/t}\Sig_0,$$
where
$$\Sig_0=\sum_{1\le d\le D}\sum_{\pi\le R}\sum_{1\le e\le \min\{Q,Hd^2\}}
d^{3-s/2}e^{-1}.$$
We therefore conclude that
$$\calU_s(P,R)\ll D^2P^{1/2+2\eps}M^{1/2+2\gam}H^{(s-2)/t}
Q^{s/2-1+(s-2)\del_t/t}.$$
In the notation of Lemma \ref{lemma4.1}, therefore, we have
$$V_s(P,R)^{s-2}\ll P^\eps M^{s-3}(\Psi_1+D^2\Psi_2),$$
where
$$\Psi_1=PMQ^{\mu_{s-2}}\quad \text{and}\quad
\Psi_2=P^{1/2}M^{1/2+2\gam}H^{(s-2)/t}Q^{s/2-1+(s-2)\del_t/t}.$$
On recalling (\ref{2.1}) and the definition of an associated exponent, the equation
$\Psi_1=\Psi_2$ implicitly determines a linear equation for $\phi$, namely
\begin{align*}
1+&\phi+\left( \tfrac{1}{2}(s-2)+\del_{s-2}\right) (1-\phi)\\
&=\tfrac{1}{2}+\left(\tfrac{1}{2}+2\gam\right)\phi+\Bigl(\frac{s-2}{t}\Bigr)(1-3\phi)+
\Bigl( \tfrac{1}{2}(s-2)+\Bigl(\frac{s-2}{t}\Bigr)\del_t\Bigr) (1-\phi).
\end{align*}
A modicum of computation reveals that this equation has solution $\phi=\tet_0$, where
$\tet_0$ is given by (\ref{4.4}). Put $D=P^\ome$, where $\ome$ is any sufficiently
small, but fixed, positive number. Then we may follow the discussion of
\cite[\S4]{Woo1995} to confirm via Lemma \ref{lemma4.1} that whenever
$\mu_{s-2}=\tfrac{1}{2}(s-2)+\del_{s-2}$ and $\mu_t=\tfrac{1}{2}t+\del_t$ are
permissible exponents, then so too is
$$\mu_s=\mu_{s-2}(1-\tet)+1+(s-2)\tet.$$
It follows that the exponent $\del_s=\del_{s-2}(1-\tet)+\tfrac{1}{2}(s-2)\tet $ is
associated, completing the proof of the lemma.
\end{proof}
We highlight three special cases of Lemma \ref{lemma4.2} for future use.
\begin{corollary}\label{corollary4.3} Suppose that $4<s\le 5$. Then whenever
$\del_{2s-4}\le 2$ is an associated exponent, so too is $\del_s=\tfrac{1}{2}(s-2)\tet$,
where
$$\tet=\frac{\del_{2s-4}}{4+\del_{2s-4}}.$$
\end{corollary}
\begin{proof} We take $\gam=0$ and $t=2s-4$, so that $\gam$ and $t$ satisfy
(\ref{4.3}). It follows from Hua's lemma \cite[Lemma 2.5]{Vau1997} that
$$\int_0^1|f(\alp;Q,R)|^4\d\alp \ll Q^{2+\eps},$$
and hence one may take $\del_u=0$ for $0<u\le 4$. With these choices of $s$,
$\gam$ and
$t$, one finds that $\del_{s-2}=0$, and hence (\ref{4.4}) gives
$$\tet_0=\frac{2(s-2)\del_t}{8s-16+2(s-2)\del_t}=\frac{\del_{2s-4}}{4+\del_{2s-4}}.$$
But $0\le \del_{2s-4}\le 2$, and hence $0\le \tet_0\le \tfrac{1}{3}$. The conclusion of the
corollary is now immediate from Lemma \ref{lemma4.2}.
\end{proof}
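By way of illustration, take $s=5$ and feed in the classical associated exponent
$\del_6=\tfrac{1}{4}$ of Vaughan \cite[Theorem 4.4]{Vau1989}. Then $\del_{2s-4}=\del_6\le 2$,
and the corollary delivers
$$\tet=\frac{1/4}{4+1/4}=\frac{1}{17},\qquad \del_5=\tfrac{3}{2}\cdot \tfrac{1}{17}=\tfrac{3}{34},$$
already well below the exponent $\del_5=\tfrac{1}{8}$ available from convexity; the iteration
described in \S7, which feeds in sharper values of $\del_6$, improves this slightly further.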
\begin{corollary}\label{corollary4.4} Suppose that $5\le s\le 6$. Then whenever
$\del_6\le \tfrac{3}{2}$ is an associated exponent, so too is
$\del_s=\tfrac{1}{2}(s-2)\tet$, where
$$\tet=\frac{s-5+(s-2)\del_6}{3s-3+(s-2)\del_6}.$$
\end{corollary}
\begin{proof} We take $\gam=0$ and $t=6$, so that $s$, $\gam$ and $t$ satisfy
(\ref{4.3}). We again have $\del_u=0$ for $0<u\le 4$, and hence $\del_{s-2}=0$.
Hence (\ref{4.4}) gives
$$\tet_0=\frac{2s-10+2(s-2)\del_6}{6s-6+2(s-2)\del_6}=
\frac{s-5+(s-2)\del_6}{3s-3+(s-2)\del_6}.$$
But by hypothesis, one has $0\le \del_6\le \tfrac{3}{2}$ and $5\le s\le 6$, and hence
$$0\le \tet_0\le \frac{1+4\del_6}{15+4\del_6}\le \tfrac{1}{3}.$$
The conclusion of the corollary therefore follows from Lemma \ref{lemma4.2}.
\end{proof}
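For orientation, note that at $s=6$ the input $\del_6=\tfrac{1}{4}$ merely reproduces itself here:
one finds
$$\tet=\frac{1+4\cdot \tfrac{1}{4}}{15+4\cdot \tfrac{1}{4}}=\frac{1}{8}\quad \text{and}\quad
\del_6=\tfrac{1}{2}\cdot 4\cdot \tfrac{1}{8}=\tfrac{1}{4},$$
so this corollary alone offers no improvement at $s=6$; this is consistent with the use of a
different process at $s=6$ in \S7.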
\begin{corollary}\label{corollary4.5} Suppose that $6\le s\le \tfrac{13}{2}$. Then
whenever $\del_{s-2}\le \del_6\le \tfrac{1}{2}$ is an associated exponent, so too is
$\del_s=\del_{s-2}(1-\tet)+\tfrac{1}{2}(s-2)\tet$, where
$$\tet=\frac{s-5+(s-2)\del_6-6\del_{s-2}}{33-3s+(s-2)\del_6-6\del_{s-2}}.$$
\end{corollary}
\begin{proof} We take $\gam=\tfrac{1}{2}(s-6)$, so that when $6\le s\le \tfrac{13}{2}$,
one has $0\le \gam\le \tfrac{1}{4}$, and in addition
$$\frac{2s-6+8\gam}{1+2\gam}=6\quad \text{and}\quad
\frac{2s-4}{1+2\gam}=2+\frac{6}{s-5}\ge 6.$$
We are therefore entitled to apply Lemma \ref{lemma4.2} with $t=6$, in which case
\begin{align*}
\tet_0&=\frac{2s-10+2(s-2)\del_6-12\del_{s-2}}
{6s-6-12(s-6)+2(s-2)\del_6-12\del_{s-2}}\\
&=\frac{s-5+(s-2)\del_6-6\del_{s-2}}{33-3s+(s-2)\del_6-6\del_{s-2}}.
\end{align*}
By hypothesis, we have $6\le s\le \tfrac{13}{2}$ and $\del_{s-2}\le \del_6\le
\tfrac{1}{2}$, and hence
$$\tet_0\ge \frac{s-5-2\del_6}{2s+3+(s-2)\del_6-6\del_{s-2}}\ge
\frac{s-6}{2s+3+(s-2)\del_6-6\del_{s-2}}\ge 0,$$
and
$$\tet_0\le \frac{s-5+\tfrac{9}{2}\del_6}{13-2\del_6}\le \frac{\tfrac{3}{2}+
\tfrac{9}{4}}{12}<\tfrac{1}{3}.$$
The conclusion of the corollary therefore follows from Lemma \ref{lemma4.2}.
\end{proof}
\section{New associated exponents, II: $s=6$ and $6.5<s\le 8$} We turn next to
methods yielding associated exponents when $s=6$, and when $s>6.5$, beginning with
one generalising that of \cite[Lemma 2.2]{Woo2000b}.
\begin{lemma}\label{lemma5.1} Let $t$ be a real number with $4<t\le 8$. Then
whenever $\del_6\le \frac{2}{3}$ and $\del_t\le \frac{1}{6}(t-4)$ are associated
exponents, then so too is
\begin{equation}\label{5.1}
\del_6^\prime =2\max \left\{ \frac{8-t+8\del_t}{24+t+8\del_t},\frac{\del_6}
{4+\del_6}\right\}.
\end{equation}
Moreover, one has
\begin{equation}\label{5.2}
\int_0^1|F(\alp;P)^2f(\alp;P,R)^4|\d\alp \ll P^{3+\del_6^\prime+\eps}.
\end{equation}
\end{lemma}
\begin{proof} On considering the Diophantine equation underlying (\ref{1.3}), one sees
that
$$U_6(P,R)\ll \int_0^1|F(\alp;P)^2f(\alp;P,R)^4|\d\alp .$$
Consequently, the confirmation of the estimate (\ref{5.2}) suffices to establish that the
exponent $\del_6^\prime$ defined in (\ref{5.1}) is associated. We put
$$\phi=\max\left\{\frac{8-t+8\del_t}{24+t+8\del_t},\frac{\del_6}{4+\del_6}\right\}.$$
Our hypotheses concerning $t$, $\del_t$ and $\del_6$ ensure that
$$\frac{8-t+8\del_t}{24+t+8\del_t}\le \frac{8-t+\tfrac{4}{3}(t-4)}{24+t+\tfrac{4}{3}
(t-4)}=\frac{8+t}{56+7t}\le \frac{16}{112}=\frac{1}{7},$$
and
$$\frac{\del_6}{4+\del_6}\le \frac{2/3}{4+2/3}=\frac{1}{7},$$
so that $0\le \phi\le \tfrac{1}{7}$. Recall the definitions (\ref{2.1}) and (\ref{2.2}), and
define $\grm$ and $\grM$ as in the preamble to Lemma \ref{lemma2.1}. Also, when
$\grB\subseteq [0,1)$, define
\begin{equation}\label{5.3}
I(\grB)=\int_\grB |F_1(\alp)f(\alp;2Q,R)^4|\d\alp .
\end{equation}
Then \cite[inequality (5.3)]{Woo1995} yields the estimate
\begin{equation}\label{5.4}
\int_0^1|F(\alp;P)^2f(\alp;P,R)^4|\d\alp \ll P^\eps M^3\left(PMQ^2+I([0,1))\right) .
\end{equation}
\par We begin with a discussion of the minor arc contribution $I(\grm)$. By applying
H\"older's inequality to (\ref{5.3}), one obtains
\begin{equation}\label{5.5}
I(\grm)\ll U_t(2Q,R)^{4/t}\biggl( \int_\grm |F_1(\alp)|^{t/(t-4)}\d\alp\biggr)^{1-4/t}.
\end{equation}
Since we may assume that $\del_t$ is an associated exponent, we have
$$U_t(2Q,R)\ll Q^{t/2+\del_t+\eps}.$$
Also, on recalling that $0\le \phi\le \tfrac{1}{7}$, it follows from
\cite[inequality (5.4)]{Woo1995} together with the argument of the proof of
\cite[Lemma 3.7]{Vau1989} that
\begin{align*}
\int_\grm |F_1(\alp)|^{t/(t-4)}\d\alp&\le \Bigl( \sup_{\alp\in \grm}|F_1(\alp)|
\Bigr)^{\tfrac{8-t}{t-4}}\int_0^1|F_1(\alp)|^2\d\alp \\
&\ll P^\eps \left( (PM)^{1/2}H\right)^{\tfrac{8-t}{t-4}}PMH.
\end{align*}
Thus we deduce from (\ref{5.5}) that
\begin{equation}\label{5.6}
I(\grm)\ll P^\eps (PM)^{1/2}H^{4/t}Q^{2+4\del_t/t}.
\end{equation}
\par In order to estimate $I(\grM)$, we have merely to follow the argument leading to
\cite[equation (2.10)]{Woo2000b}. Thus, again making use of the fact that
$0\le \phi\le \tfrac{1}{7}$, the estimate preceding \cite[equation (2.10)]{Woo2000b}
gives
\begin{align}
I(\grM)&\ll P^{1+\eps}HM(PQ^{-2})^{2/3}(Q^5)^{1/3}+P^{1+\eps}HM^{1/2}
(PQ^{-2})^{1/2}(Q^{3+\del_6})^{1/2}\notag \\
&\ll P^{1+\eps}HMQ\left( (PQ^{-1})^{2/3}+(P(QM)^{-1})^{1/2}Q^{\del_6/2}\right) .
\label{5.7}
\end{align}
By combining (\ref{5.6}) and (\ref{5.7}), we obtain an estimate for $I([0,1))$. By
substituting this into (\ref{5.4}) and recalling (\ref{2.1}), we deduce that
$$\int_0^1|F(\alp;P)^2f(\alp;P,R)^4|\d\alp \ll P^{3+\eps}M^2
(1+\Phi_1+\Phi_2+\Phi_3),$$
where
$$\Phi_1=(PM)^{-1/2}H^{4/t}Q^{4\del_t/t},\quad \Phi_2=M^{-4/3}\quad \text{and}
\quad \Phi_3=M^{-2}Q^{\del_6/2}.$$
In view of (\ref{2.1}), one finds that the respective conditions
$$\phi\ge \frac{8-t+8\del_t}{24+t+8\del_t}\quad \text{and}\quad \phi\ge
\frac{\del_6}{4+\del_6}$$
ensure that $\Phi_1\le 1$ and $\Phi_3\le 1$. Thus, our choice of $\phi$ ensures that
$$\int_0^1|F(\alp;P)^2f(\alp;P,R)^4|\d\alp \ll P^{3+\eps}M^2=P^{3+2\phi+\eps},$$
confirming the estimate (\ref{5.2}) and completing the proof of the lemma.
\end{proof}
We recall also an estimate for associated exponents $\del_s$ of use when
$s>\tfrac{13}{2}$.
\begin{lemma}\label{lemma5.2} Suppose that $s>4$. Then whenever
$\del_{s-2}\le \tfrac{1}{4}$ and $\del_{4(s-2)/3}\le 1$ are associated exponents, so too
is $\del_s=\del_{s-2}(1-\tet)+\tfrac{1}{2}(s-2)\tet$, where
$$\tet=\frac{1+3\del_{4(s-2)/3}-4\del_{s-2}}{9+3\del_{4(s-2)/3}-4\del_{s-2}}.$$
\end{lemma}
\begin{proof} This is immediate from \cite[Corollary to Lemma 2]{BBW1995}.
\end{proof}
Finally, we recall a simple consequence of convexity.
\begin{lemma}\label{lemma5.3} Suppose that $s>2$ and $t<s$. Then, whenever
$\del_{s-t}$ and $\del_{s+t}$ are associated exponents, so too is
$\del_s=\tfrac{1}{2}(\del_{s+t}+\del_{s-t})$.
\end{lemma}
\begin{proof} This is \cite[Lemma 4.3]{BW2001}.
\end{proof}
\section{The Keil-Zhao device} Lilu Zhao \cite[equation (3.10)]{Zha2014} has observed
that, in wide generality, one may obtain an estimate of Weyl-type for an exponential sum
over an arbitrary set, provided this sum inhabits an appropriate mean value. The same
idea is applied also in independent work of Keil \cite[page 608]{Kei2014}. This observation
is useful in obtaining permissible exponents $\mu_s$ when $s>6$. Before announcing our
conclusions, we introduce some notation useful in its proof. Write
\begin{equation}\label{6.1}
g(\alp;P,R)=\sum_{\substack{x\in \calA(P,R)\\ x>P/2}}e(\alp x^3)\quad \text{and}\quad
G(\alp)=\sum_{P/2<x\le P}e(\alp x^3).
\end{equation}
\begin{lemma}\label{lemma6.1} Suppose that $s\ge 6$ and the exponent $\Del_s$ is
admissible. Suppose also that $\tfrac{1}{16}(8-s)\le \Del_s\le \tfrac{1}{4}$ and
$u>s+8\Del_s$. Then there exist positive numbers $\eta$ and $c$, depending at most on
$u$, with the following property. Whenever $P$ is sufficiently large in terms of $\eta$, and
$\exp\left( c(\log \log P)^2\right)\le R\le P^\eta$, then
\begin{equation}\label{6.2}
\int_0^1|f(\alp;P,R)|^u\d\alp \ll P^{u-3}.
\end{equation}
In particular, the exponent $\mu_w=w-3$ is permissible for $w\ge u$.
\end{lemma}
\begin{proof} We seek to show that whenever $v\ge s+8\Del_s$, then
\begin{equation}\label{6.3}
\int_0^1|f(\alp;P,R)|^v\d\alp \ll P^{v-3+\eps}.
\end{equation}
When $u>v$, the bound (\ref{6.2}) follows from this estimate via
\cite[Lemma 4.5]{BW2001}. Next, by applying a dyadic dissection, we deduce from
(\ref{1.1}) and (\ref{6.1}) that
$$f(\alp;P,R)=\sum^\infty_{\substack{j=0\\ 2^j\le \sqrt{P}}}g(\alp;2^{-j}P,R)
+O(\sqrt{P}),$$
whence an application of H\"older's inequality reveals that
\begin{align*}
\int_0^1|f(\alp;P,R)|^v\d\alp &\ll (\log P)^{v-1}\sum^\infty_{\substack{j=0\\ 2^j\le
\sqrt{P}}} \int_0^1|g(\alp;2^{-j}P,R)|^v\d\alp+P^{v/2}\\
&\ll P^\eps \max_{\sqrt{P}\le X\le P}\int_0^1|g(\alp;X,R)|^v\d\alp +P^{v/2}.
\end{align*}
Consequently, provided we are able to show that
\begin{equation}\label{6.4}
\int_0^1|g(\alp;P,R)|^v\d\alp\ll P^{v-3+\eps},
\end{equation}
then the bound (\ref{6.3}) follows. Henceforth, we abbreviate $g(\alp;P,R)$ to $g(\alp)$.
\par We establish (\ref{6.4}) via the Hardy-Littlewood method. When $1\le X\le P$, define
the major arcs $\grM(X)$ to be the union of the intervals
$$\grM(q,a;X)=\{\alp\in[0,1):|q\alp-a|\le XP^{-3}\},$$
with $0\le a\le q\le X$ and $(a,q)=1$. Also, put $\grm(X)=[0,1)\setminus \grM(X)$. Finally,
write $\grP=\grM(P^{4/5})$, $\grQ=\grM(P^{3/8})$, $\grp=\grm(P^{4/5})$ and
$\grq=\grm(P^{3/8})$.\par
We begin by observing that, as a consequence of \cite[Corollary 3.2]{BW2001}, one has
$$\int_\grQ |f(\alp;P,R)|^6\d\alp +\int_\grQ|f(\alp;P/2,R)|^6\d\alp \ll P^{3+\eps},$$
so that
$$\int_\grQ |g(\alp)|^6\d\alp \ll P^{3+\eps}.$$
Since $|g(\alp)|=O(P)$, we find that whenever $v\ge 6$, one has
\begin{equation}\label{6.5}
\int_\grQ|g(\alp)|^v\d\alp \ll P^{v-3+\eps}.
\end{equation}
\par Suppose next that $\alp\in \grq$. By Dirichlet's theorem on Diophantine
approximation, there exist $a\in \dbZ$ and $q\in \dbN$ with $(a,q)=1$, $q\le P^{11/5}$
and $|q\alp-a|\le P^{-11/5}$. An application of \cite[Lemma 2.2]{BW2001} in concert
with \cite[equation (2.1)]{BW2001} delivers the estimate
$$g(\alp)\ll \frac{q^{\eps-1/6}P(\log P)^{5/2+\eps}}{(1+P^3|\alp-a/q|)^{1/3}}
+P^{9/10+\eps}.$$
When $\alp\in \grp$, it follows that $q>P^{4/5}$, and thus $g(\alp)\ll P^{9/10+\eps}$.
Meanwhile, when $\alp\in \grP\cap \grq$, we have either $q>P^{3/8}$ or
$|q\alp-a|>P^{-21/8}$, and hence $|g(\alp)|\ll P^{15/16+\eps}$. Consequently, since
$\grq=\grp\cup (\grP\cap \grq)$, we conclude that
\begin{equation}\label{6.6}
\sup_{\alp\in \grq}|g(\alp)|\ll P^{15/16+\eps}.
\end{equation}
\par We now turn to the main task at hand. Suppose that $s\ge 6$ and that $\Del_s$
is an admissible exponent. We consider the mean value
\begin{equation}\label{6.7}
T_0=\int_\grq |g(\alp)|^{s+2}\d\alp .
\end{equation}
By reference to (\ref{6.1}), an application of Cauchy's inequality shows that
\begin{equation}\label{6.8}
T_0=\sum_{\substack{x\in \calA(P,R)\\ x>P/2}}\sum_{\substack{y\in \calA(P,R)\\
y>P/2}}\int_\grq |g(\alp)|^se(\alp(x^3-y^3))\d\alp \le PT_1^{1/2},
\end{equation}
where
$$T_1=\sum_{\substack{P/2<x,y\le P\\ x,y\in \calA(P,R)}}\biggl| \int_\grq
|g(\alp)|^se(\alp(x^3-y^3))\d\alp \biggr|^2.$$
We bound $T_1$ above by removing the condition $x,y\in \calA(P,R)$, obtaining
$$T_1\le \sum_{P/2<x,y\le P}\int_\grq \int_\grq |g(\alp)g(\bet)|^se\left( (\alp-\bet)
(x^3-y^3)\right) \d\alp \d\bet .$$
Thus, again recalling (\ref{6.1}), we deduce by means of (\ref{6.8}) that
\begin{equation}\label{6.9}
T_0^2\le P^2\int_\grq \int_\grq |g(\alp)g(\bet)|^s|G(\alp-\bet)|^2\d\alp \d\bet .
\end{equation}
\par We analyse the mean value on the right hand side of (\ref{6.9}) by means of the
Hardy-Littlewood method. Let $\grN=\grM(P^{3/4})$ and $\grn=\grm(P^{3/4})$. Denote
by $\kap(q)$ the multiplicative function defined on prime powers by taking
$$\kap(p^{3l})=p^{-l},\quad \kap(p^{3l+1})=3p^{-l-1/2},\quad
\kap(p^{3l+2})=p^{-l-1}\quad (l\ge 0).$$
Also, define the function $\Ups(\gam)$ for $\gam\in \grN$ by taking
\begin{equation}\label{6.10}
\Ups(\gam)=\kap(q)^2(1+P^3|\gam-a/q|)^{-1},
\end{equation}
when $\gam\in \grM(q,a;P^{3/4})\subseteq \grN$, and put $\Ups(\gam)=0$ when
$\gam \in \grn$. Then it follows from \cite[Lemma 2.1]{KW2001} that
$G(\gam)^2\ll P^2\Ups(\gam)+P^{3/2+\eps}$. Substituting this estimate into (\ref{6.9}),
we deduce that
\begin{equation}\label{6.11}
T_0^2\ll P^{7/2+\eps}\biggl( \int_0^1 |g(\alp)|^s\d\alp \biggr)^2+P^4T_2,
\end{equation}
where
$$T_2=\int_\grq \int_\grq \Ups(\alp-\bet)|g(\alp)g(\bet)|^s\d\alp \d\bet .$$
\par By applying the trivial inequality $|z_1\cdots z_n|\le |z_1|^n+\ldots +|z_n|^n$, we
find that
$$|g(\alp)g(\bet)|^s\ll |g(\alp)g(\bet)^{s-1}|^2+|g(\bet)g(\alp)^{s-1}|^2.$$
Hence, by symmetry, we obtain the estimate
$$T_2\ll \Bigl( \sup_{\bet\in \grq}|g(\bet)|\Bigr)^{s-4}\int_\grq \int_\grq
\Ups(\alp-\bet)|g(\bet)^{s+2}g(\alp)^2|\d\alp \d\bet .$$
By invoking (\ref{6.6}), we thus deduce that
\begin{equation}\label{6.12}
T_2\ll (P^{15/16+\eps})^{s-4}\int_\grq |g(\bet)|^{s+2}\int_0^1\Ups(\alp-\bet)
|g(\alp)|^2\d\alp \d\bet .
\end{equation}
\par On recalling the definitions (\ref{6.1}) and (\ref{6.10}), we discern that
\begin{align*}
\int_0^1\Ups(\alp-\bet)|g(\alp)|^2\d\alp &=\int_\grN \Ups(\gam)
|g(\gam+\bet)|^2\d\gam \le \sum_{1\le q\le P^{3/4}}\kap(q)^2\Lam(q),
\end{align*}
where
$$\Lam(q)=\sum^q_{\substack{a=1\\ (a,q)=1}}\int_{-P^{-9/4}}^{P^{-9/4}}
(1+P^3|\tet|)^{-1}\biggl| \sum_{\substack{x\in \calA(P,R)\\ x>P/2}}
e(x^3(\bet+\tet+a/q))\biggr|^2\d\tet .$$
Let $c_q(n)$ be Ramanujan's sum, which we define by
$$c_q(n)=\sum^q_{\substack{a=1\\ (a,q)=1}}e(an/q).$$
Then it follows that
$$\sum^q_{\substack{a=1\\ (a,q)=1}}\biggl| \sum_{\substack{x\in \calA(P,R)\\
x>P/2}}e(x^3(\bet+\tet+a/q))\biggr|^2=\sum_{\substack{P/2<x,y\le P\\
x,y\in \calA(P,R)}}c_q(x^3-y^3)e((\bet+\tet)(x^3-y^3)).$$
Thus, the well-known estimate $|c_q(n)|\le (q,n)$ yields the bound
$$\Lam(q)\le \sum_{1\le x,y\le P}(q,x^3-y^3)\int_{-P^{-9/4}}^{P^{-9/4}}
(1+P^3|\tet|)^{-1}\d\tet,$$
and consequently
$$\int_0^1\Ups(\alp-\bet)|g(\alp)|^2\d\alp \ll P^{-3}\log (2P)\sum_{1\le q\le P^{3/4}}
\kap(q)^2\sum_{1\le x,y\le P}(q,x^3-y^3).$$
From here, the treatment following \cite[equation (3.2)]{BW2001} delivers the upper
bound
\begin{equation}\label{6.13}
\int_0^1\Ups(\alp-\bet)|g(\alp)|^2\d\alp \ll P^{\eps-1}.
\end{equation}
\par Next, substituting (\ref{6.13}) into (\ref{6.12}), we infer that
$$T_2\ll P^{\eps-1}(P^{15/16})^{s-4}\int_\grq|g(\bet)|^{s+2}\d\bet.$$
In view of (\ref{6.7}) and (\ref{6.11}), the hypothesis that $\Del_s$ is admissible yields
$$T_0^2\ll P^{7/2+\eps}\left( P^{s-3+\Del_s}\right)^2+P^{3+\eps}\left(
P^{15/16}\right)^{s-4}T_0,$$
whence
$$T_0\ll P^{s-1+\eps}\left( P^{\Del_s-1/4}+P^{-(s-4)/16}\right) .$$
On recalling (\ref{6.7}), application of H\"older's inequality and the trivial estimate
$|g(\alp)|\le P$ delivers the upper bound
\begin{align*}
\int_\grq |g(\alp)|^v\d\alp&\le P^{v-(s+8\Del_s)}T_0^{4\Del_s}
\biggl( \int_0^1|g(\alp)|^s\d\alp \biggr)^{1-4\Del_s}\\
&\ll P^{v-s-8\Del_s+\eps}\left( P^{s}\left(
P^{\Del_s-5/4}+P^{-(s+12)/16}\right) \right)^{4\Del_s}\left(
P^{s-3+\Del_s}\right)^{1-4\Del_s}.
\end{align*}
Thus we deduce that whenever $\Del_s\ge \tfrac{1}{16}(8-s)$, then
$$\int_\grq |g(\alp)|^v\d\alp \ll P^{v-3+\eps}\left(1+P^{-\Del_s+(8-s)/16}\right)\ll
P^{v-3+\eps}.$$
But the latter condition on $s$ is assured by the hypotheses of the lemma, and thus
we conclude via (\ref{6.5}) that
$$\int_0^1|g(\alp)|^v\d\alp =\int_\grQ |g(\alp)|^v\d\alp +\int_\grq|g(\alp)|^v
\d\alp \ll P^{v-3+\eps}.$$
This confirms the estimate (\ref{6.4}), and the conclusion of the lemma follows.
\end{proof}
\section{Computations} We now address the problem of how to implement the
computation of associated exponents $\del_s$ for $4\le s\le 8$. Let $h$ be a small
positive number that we view as a step size, and put $J=\lceil 16/h\rceil$. It is convenient in
what follows to assume that $1/h\in \dbN$. We begin with an array of known associated
exponents $\del_{jh}$ $(0\le j\le J)$. Thus, we have the associated exponents
$\del_4=0$ and $\del_s=\tfrac{1}{2}s-3$ $(s\ge 8)$ which follow from Hua's lemma (see
\cite[Lemma 2.5]{Vau1997}). Making use also of the associated exponent
$\del_6=\tfrac{1}{4}$ due to Vaughan \cite[Theorem 4.4]{Vau1989}, one may apply
convexity to deliver the associated exponents
$$\del_s=\max\left\{ 0,\tfrac{1}{8}(s-4),\tfrac{3}{8}s-2,\tfrac{1}{2}s-3\right\} .$$
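Thus, for example, this initial array contains $\del_5=\tfrac{1}{8}$, $\del_6=\tfrac{1}{4}$ and
$\del_7=\tfrac{5}{8}$.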
For the interesting values of $j$ with $4<jh<8$, one may now calculate new associated
exponents $\del_{jh}$ by means of Lemmata \ref{lemma4.2},
\ref{lemma5.1}--\ref{lemma5.3} and \ref{lemma6.1}. Here, we note that associated
exponents $\del_s$ are related to admissible exponents $\Del_s$ by means of the
relation $\del_s=\tfrac{1}{2}s-3+\Del_s$. Should any of these new associated exponents
be superior to the old ones, then they may be substituted into the array of values
$\del_{jh}$. By iterating this process for $4/h<j<8/h$, one derives new associated
exponents converging to some set of limiting values.\par
We summarise the formulae delivered by the above-cited lemmata as follows.\vskip.1cm
\noindent (i) \emph{Method $A_s(t,\gam)$.} We apply Lemma \ref{lemma4.2} for
$\gam=lh$ and $t=mh$ with $0\le l\le (4h)^{-1}$ and
\begin{equation}\label{7.1}
\frac{2jh-6+8lh}{1+2lh}\le mh\le \frac{2jh-4}{1+2lh}.
\end{equation}
Thus one finds that the exponent $\del_{jh}^\prime$ is associated, where
\begin{equation}\label{7.2}
\del_{jh}^\prime=\del_{jh-2}(1-\tet)+\tfrac{1}{2}(jh-2)\tet ,
\end{equation}
in which $\tet=\max\left\{ 0,\min\left\{ \tet_0,\tfrac{1}{3}\right\} \right\}$, and
$$\tet_0=\frac{2jh-4-mh+2(jh-2)\del_{mh}-2mh\del_{jh-2}}{6jh-12+mh-4(lh)(mh)+
2(jh-2)\del_{mh}-2mh\del_{jh-2}}.$$
\vskip.1cm
\noindent (ii) \emph{Method $B_6(t)$.} We apply Lemma \ref{lemma5.1} for $t=mh$
with $4<mh\le 8$. Thus, when $\del_{mh}\le \tfrac{1}{6}(mh-4)$, we find that the
exponent $\del_6^\prime$ is associated, where
$$\del_6^\prime =2\max \left\{ \frac{8-mh+8\del_{mh}}{24+mh+8\del_{mh}},
\frac{\del_6}{4+\del_6}\right\} .$$
\vskip.1cm
\noindent (iii) \emph{Method $C_s$.} First, if $i$ is the integer for which
$\tfrac{4}{3}(j-2/h)\in (i,i+1]$, then convexity provides the associated exponent
$$\del_{4(jh-2)/3}=\left(i+1-\tfrac{4}{3}(j-2/h)\right)\del_{ih}+
\left(\tfrac{4}{3}(j-2/h)-i\right)\del_{(i+1)h}.$$
Next, Lemma \ref{lemma5.2} shows the exponent $\del_{jh}^\prime$ given by
(\ref{7.2}) to be associated, where
$$\tet_0=\frac{1+3\del_{4(jh-2)/3}-4\del_{jh-2}}{9+3\del_{4(jh-2)/3}-4\del_{jh-2}}.$$
\vskip.1cm
\noindent (iv) \emph{Process $L_s(t)$.} We apply Lemma \ref{lemma5.3} for $t=mh$
with $1\le m\le 1/h$. Thus one finds that the exponent $\del_{jh}^\prime$ is associated,
where $\del_{jh}^\prime =\tfrac{1}{2}(\del_{(j+m)h}+\del_{(j-m)h})$.
\vskip.1cm
\noindent (v) \emph{Process $W_s$.} We apply Lemma \ref{lemma6.1}. Thus one finds that
$\del_{jh}^\prime=\tfrac{1}{2}jh-3$ is an associated exponent whenever $\del_{jh-mh}$
is associated and satisfies
$$3-\tfrac{1}{2}(j-m)h+\del_{jh-mh}<\tfrac{1}{8}mh.$$
\vskip.1cm
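By way of illustration only, the following short Python sketch, which is distinct from the
program actually employed and described below, implements the seed exponents together with
Process $A_s(2s-4,0)$ on the range $4<s\le 5$; the identifiers \texttt{inv\_h}, \texttt{seed}
and \texttt{tau} are expository choices and not notation used elsewhere in this paper.
\begin{verbatim}
# Illustrative sketch only: seed exponents from Section 7 together with
# Process A_s(2s-4, 0) (Corollary 4.3) applied on the range 4 < s <= 5.
inv_h = 1000                 # 1/h, assumed to be a positive integer
h = 1.0 / inv_h
J = 16 * inv_h               # index j corresponds to s = j*h

def seed(s):
    # starting associated exponents obtained by convexity
    return max(0.0, (s - 4) / 8, 3 * s / 8 - 2, s / 2 - 3)

delta = [seed(j * h) for j in range(J + 1)]
tau = 1e-9                   # safety cushion added after each update

changed = True
while changed:
    changed = False
    for j in range(4 * inv_h + 1, 5 * inv_h + 1):   # 4 < s <= 5
        s = j * h
        d = delta[2 * j - 4 * inv_h]                # delta_{2s-4}
        if d <= 2:
            theta = d / (4 + d)    # Corollary 4.3; theta <= 1/3 since d <= 2
            new = 0.5 * (s - 2) * theta + tau
            if new < delta[j]:
                delta[j] = new
                changed = True

print("upper bound for delta_5:", delta[5 * inv_h])
\end{verbatim}
With these seed values the sketch stabilises at $\del_5\approx 3/34$; the full iteration
described above refines this further by feeding in improved values of $\del_6$ and invoking the
remaining processes.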
We wrote a straightforward computer program to implement this iterative process. Our
language of choice was the QB64 implementation of QuickBasic, running on a Microsoft
Surface Pro 3 under Windows 8.1 (Intel Core i3 processor at 1.5 GHz). All parameters were
stored using double-precision variables. The most time-consuming method to apply is
process $A_s(t,\gam)$, since there are many possible choices for $t=mh$ and $\gam=lh$
to test. It is apparent that $\gam$ should be chosen as small as possible consistent with
the constraint (\ref{7.1}). However, applying process $A_s(t,\gam)$ for each eligible
value of $s=jh$ $(4<s<8)$ nonetheless has running time with order of growth $h^{-2}$.
This limited our computation, in the first instance, to a step size of $h\ge 10^{-4}$.\par
After experimenting with this iteration, one finds that certain of the processes
dominate the others for different values of $s$. By refining the program to select dominant
processes for different ranges of $s$, the running time is vastly improved to order of growth
$h^{-1}$. Note that the array size limit effective for QB64 on the platform employed was
at least $2\times 10^8$. Thus, final computations with step size $h=10^{-6}$ were
feasible for $4<s\le 6.5$, and step size $h=10^{-5}$ throughout $4<s\le 8$, this being
limited only by running-time considerations rather than memory limitations. We summarise
below the parameters associated with these dominant processes.\vskip.1cm
\noindent (i) $4<s\le 5$. Process $A_s(2s-4,0)$, so that $\del_s^\prime$ is determined
according to Corollary \ref{corollary4.3}. Thus $\del_{jh}^\prime$ is given by
(\ref{7.2}) with
$$\tet_0=\frac{\del_{2jh-4}}{4+\del_{2jh-4}}.$$
\vskip.1cm
\noindent (ii) $5<s\le 5.6462$. Process $A_s(6,0)$, so that $\del_s^\prime$ is determined
according to Corollary \ref{corollary4.4}. Thus $\del_{jh}^\prime$ is given by (\ref{7.2})
with
$$\tet_0=\frac{jh-5+(jh-2)\del_6}{3jh-3+(jh-2)\del_6}.$$
\vskip.1cm
\noindent (iii) $5.6462<s<6$. Process $L_s(t)$, linear interpolation between
$\del_{5.6462}$ and $\del_6$.\vskip.1cm
\noindent (iv) $s=6$. Process $B_6(5.392938)$.\vskip.1cm
\noindent (v) $6<s\le 6.081$. Process $L_s(t)$, linear interpolation between
$\del_6$ and $\del_{6.081}$.\vskip.1cm
\noindent (vi) $6.081<s\le 6.3395$. Process $A_s(6,\tfrac{1}{2}(s-6))$, so that
$\del_s^\prime$ is determined according to Corollary \ref{corollary4.5}. Thus
$\del_{jh}^\prime$ is given by (\ref{7.2}) with
$$\tet_0=\frac{jh-5+(jh-2)\del_6-6\del_{jh-2}}{33-3jh+(jh-2)\del_6-6\del_{jh-2}}.$$
\vskip.1cm
\noindent (vii) $6.3395<s\le 6.5$. Process $L_s(t)$, linear interpolation between
$\del_{6.3395}$ and $\del_{6.5}$.\vskip.1cm
\noindent (viii) $6.5<s\le 7.06$. Processes $C_s$ and $L_s(t)$.\vskip.1cm
\noindent (ix) $7.06<s<8$. Processes $W_s$ and $L_s(t)$.\vskip.1cm
Some additional discussion seems warranted concerning the robustness of these
computations. The first point to make is that, while the above restricted iteration may
not be guaranteed to deliver optimal estimates, the exponents that it delivers will at least
be legitimate associated exponents. Thus the exponents presented in Table 1 in the
introduction may be considered upper bounds for optimal associated exponents. In this
context, it is worth noting that we experimented with adjustments to the step size $h$,
and found no improvement in the first $8$ digits of the decimal expansions of the
computed values of $\del_s$, even when $h$ varied from $10^{-4}$ to $10^{-6}$.\par
The second point concerns the stability of the iteration. There is a potential danger in
iterations involving large numbers of cycles that round-off errors may accumulate, leading
to substantial cumulative errors and even to unstable iterative processes. In our
computations, we exercised some caution concerning this issue by artificially inflating the
newly computed associated exponents by adding a small positive quantity $\tau$ at the
end of each iteration. Thus, with $\tau=10^{-9}$, we replaced the newly computed
associated exponent $\del_s$ by $\del_s+\tau$. This has the effect of slightly weakening
our exponents, though round-off errors (which in double-precision arithmetic are very
much smaller) are swamped by this cushion of numerical security. This device has the
effect of permitting some control on the number of decimal digits reliably computed.\par
We now interpret these computations in the context of the
conclusions presented in the introduction. First, Theorem \ref{theorem1.2} follows from
the computed associated exponent $\del_t=0.14963020$ for $t=5.392938$ that follows
from the computations underlying Table 1 via convexity, and the upper bound (\ref{5.2})
of Lemma \ref{lemma5.1}. Next, the exponent $\Del_{7.1}=0.06131437$ is admissible,
according to Theorem \ref{theorem1.5} and the associated Table 1. Then it follows from
Lemma \ref{lemma6.1} that
$$\int_0^1|f(\alp;P,R)|^u\d\alp \ll P^{u-3}$$
whenever $u>7.1+8\Del_{7.1}=7.59051\ldots $. This establishes Theorem \ref{theorem1.4}. Finally, the proof
of Theorem \ref{theorem1.1} is a standard consequence of Theorem \ref{theorem1.2},
following an application of Cauchy's inequality. The proof of
\cite[Theorem 1.1]{Woo2000b} to be found in the final phases of \cite[\S2]{Woo2000b}
shows, for example, that whenever $\del_6$ is an associated exponent, then
$N(X)\gg X^{1-\del_6/3-\eps}$. The conclusion of Theorem \ref{theorem1.1} therefore follows on making use of the
associated exponent $\del_6=0.24871567$. Note also that Theorem \ref{theorem1.5} for
$s=4$ follows from \cite{Hoo1963}.
\bibliographystyle{amsbracket}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
| {
"timestamp": "2015-02-09T02:12:52",
"yymm": "1502",
"arxiv_id": "1502.01944",
"language": "en",
"url": "https://arxiv.org/abs/1502.01944",
"abstract": "Estimates are provided for $s$th moments of cubic smooth Weyl sums, when $4\\le s\\le 8$, by enhancing the author's iterative method that delivers estimates beyond classical convexity. As a consequence, an improved lower bound is presented for the number of integers not exceeding $X$ that are represented as the sum of three cubes of natural numbers.",
"subjects": "Number Theory (math.NT)",
"title": "Sums of three cubes, II",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969655605173,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7099044158541886
} |
https://arxiv.org/abs/0710.1521 | Algebraic quantum permutation groups | We discuss some algebraic aspects of quantum permutation groups, working over arbitrary fields. If $K$ is any characteristic zero field, we show that there exists a universal cosemisimple Hopf algebra coacting on the diagonal algebra $K^n$: this is a refinement of Wang's universality theorem for the (compact) quantum permutation group. We also prove a structural result for Hopf algebras having a non-ergodic coaction on the diagonal algebra $K^n$, on which we determine the possible group gradings when $K$ is algebraically closed and has characteristic zero. | \section{Introduction}
A remarkable fact, discovered by Wang \cite{wa}, is the existence
of a largest (universal) compact quantum group, denoted ${\mathcal Q}_n$, acting on the set
$[n] =\{1, \ldots ,n\}$; this quantum group is infinite if $n\geq 4$.
In view of its universal property, the quantum group ${\mathcal Q}_n$ is called
the quantum permutation group on $n$ points, and is seen as a good
quantum analogue of the classical permutation group $S_n$.
The compact quantum group ${\mathcal Q}_n$ is defined through a Hopf ${\rm C}^*$-algebra
$A_s(n)$ via the heuristic formula $A_s(n) = {\rm C}({\mathcal Q}_n)$. The algebra $A_s(n)$
satisfies Woronowicz' axioms in \cite{wo} and Wang's universality theorem is stated in terms
of a coaction of $A_s(n)$ on the diagonal ${\rm C}^*$-algebra
${\mathbb C}^n$ (the algebra of functions on $[n]$).
Canonically associated with $A_s(n)$ is
a dense Hopf $*$-subalgebra, denoted here $A_s(n, {\mathbb C})$.
It is clear from its presentation that one can define an analogous
Hopf algebra over any field ${\mathbb K}$, which we denote
$A_s(n,{\mathbb K})$.
In this paper we study some aspects of the Hopf algebra $A_s(n,{\mathbb K})$,
which coacts on the diagonal algebra ${\mathbb K}^n$.
We prove that the coaction is universal in various cases (Theorem 3.8).
In particular our main result states that in characteristic
zero $A_s(n,{\mathbb K})$ is the universal cosemisimple Hopf algebra
coacting on ${\mathbb K}^n$, which in geometric language means that
there exists a largest linearly reductive algebraic quantum group acting on $n$ points.
This refines Wang's Theorem 3.1 in \cite{wa}.
We also prove a structural result for Hopf algebras coacting
on ${\mathbb K}^n$ in a non-ergodic manner. Then if ${\mathbb K}$ is algebraically
closed and has characteristic zero, we give a general description of
cocommutative cosemisimple quotients of $A_s(n,{\mathbb K})$, which exactly correspond
to group gradings on the diagonal algebra ${\mathbb K}^n$.
It is worthwhile to note that the classification problem for
group gradings on various classes of associative algebras (matrix algebras, triangular
algebras, incidence algebras, ...)
has been intensively studied in recent years: see \cite{bz,vz,ps}.
Quantum permutation groups have been studied in connection
with subfactor theory, free probability theory and the classification
problem for quantum groups.
We refer the reader to the survey paper \cite{bbc2} for an overview of
these recent results. As well as presenting some new results, we hope
that the present paper might serve as a friendly algebraic introduction
to these developments.
The paper is organized as follows.
Section 2 consists of notations and preliminaries, with a short review of Hopf algebra
coactions and universal coactions. In Section 3 we study Hopf algebra coactions
on the diagonal algebra ${\mathbb K}^n$, and prove various universality results
for $A_s(n,{\mathbb K})$. Section 4 is devoted to non-ergodic coactions.
Group gradings on diagonal algebras are described in Section 5, leading
to the classification of cosemisimple cocommutative quotients of $A_s(n,{\mathbb K})$, if
${\mathbb K}$ has characteristic zero and is algebraically closed.
\section{Notations and preliminaries}
\subsection{Diagonal algebras}
Throughout the paper, ${\mathbb K}$ is a field.
The diagonal algebra ${\mathbb K}^n$, $n \in \mathbb N^*$, is always equipped with its canonical basis
$e_1, \ldots , e_n$, with
$$e_ie_j = \delta_{ij} e_i, \quad 1 = \sum _{i=1}^n e_i$$
The algebra ${\mathbb K}^n$ is identified with the algebra of ${\mathbb K}$-valued
functions on the set $[n] = \{1, \ldots , n\}$.
Under this identification the idempotent $e_i$ is the characteristic
function of the subset $\{i\}$.
\subsection{Hopf algebras and comodules}
We assume that the reader has some familiarity with bialgebras, Hopf
algebras and their comodules, for which the books \cite{mo,ks}
are convenient references.
Unless otherwise indicated,
the comultiplication, counit and antipode of a Hopf algebra
are always denoted by $\Delta$, $\varepsilon$ and $S$ respectively.
Let $B$ be a bialgebra and let $x_{ij}$, $i,j \in [n]$, be some elements of $B$.
Recall that the matrix $x = (x_{ij})\in M_n(B)$ is said to be \textbf{multiplicative}
if $$\Delta(x_{ij}) = \sum_{k=1}^n x_{ik} \otimes x_{kj}, \quad \varepsilon(x_{ij}) = \delta_{ij}$$
The matrix $x$ is multiplicative if and only if the linear map
$$\beta : {\mathbb K}^n \longrightarrow {\mathbb K}^n \otimes B, \quad \beta(e_i)= \sum_{k=1}^n e_k \otimes x_{ki},$$
endows ${\mathbb K}^n$ with a $B$-comodule structure.
We will be interested in particular types of multiplicative matrices.
We say that a multiplicative matrix $x =(x_{ij}) \in M_n(B)$ is \textbf{semi-magic}
if the following two families of relations hold in $B$.
$$
\leqno(2.1) \quad x_{ki}x_{kj} = \delta_{ij} x_{ki}, \ \forall i,j,k \in [n]$$
$$ \leqno(2.2) \quad \sum_{k=1}^n x_{ik} =1, \ \forall i \in [n]$$
We say that the multiplicative matrix $x$ is \textbf{magic} if it is
semi-magic and if furthermore the following relations hold in $B$.
$$
\leqno(2.3) \quad x_{ik}x_{jk} = \delta_{ij} x_{ik}, \ \forall i,j,k \in [n]$$
$$ \leqno(2.4) \quad \sum_{k=1}^n x_{ki} =1, \ \forall i \in [n]$$
\subsection{Cosemisimple Hopf algebras}
Recall that a Hopf algebra $H$ is said to be cosemisimple if its comodule
category is semisimple. This is equivalent to saying that there exists
a linear map $h : H \longrightarrow {\mathbb K}$ (the Haar measure)
such that
$$h(1) = 1, \quad ({\rm id}_H \otimes h) \circ \Delta = h(-)1_H = (h \otimes {\rm id}_H) \circ \Delta$$
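For instance, if $G$ is a finite group whose order is invertible in ${\mathbb K}$, then the Hopf
algebra ${\mathbb K}(G)$ of ${\mathbb K}$-valued functions on $G$ is cosemisimple, with Haar measure
given by averaging: $h(f) = |G|^{-1}\sum_{g \in G} f(g)$.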
A CQG algebra is a Hopf $*$-algebra (over ${\mathbb C}$) having all its finite-dimensional comodules
equivalent to unitary ones. A CQG algebra is automatically
cosemisimple, but there exist cosemisimple
Hopf algebras that do not admit any CQG algebra structure (for example
the quantum group $SL_q(2)$ for $q$ non real and not a root of unity). CQG algebras
are the algebras of representative functions on compact quantum groups, see \cite{ks}
for more details.
\subsection{Coactions on algebras}
Let $A$ be an algebra and let $B$ be a bialgebra. A \textbf{coaction of $B$ on the algebra
$A$} consists of an algebra map $\beta : A \longrightarrow A \otimes B$ making
$A$ into a $B$-comodule. One also says that $A$ is a $B$-comodule algebra.
If a linear map $\beta : A \longrightarrow A \otimes B$ endows
$A$ with a $B$-comodule structure, then it is a coaction on the algebra $A$
if and only if $A$ is an algebra in the monoidal category of $B$-comodules, that
is, the multiplication map $m: A \otimes A \longrightarrow A$ and unit map ${\mathbb K} \longrightarrow A$
are $B$-colinear. Coactions on algebras correspond to quantum (semi)group actions on quantum spaces.
Let $\mathcal C$ be a subcategory of the category of bialgebras (resp. Hopf algebras).
Let $H$ be a bialgebra (resp. Hopf algebra) coacting on the algebra $A$, via
$\alpha : A \longrightarrow A \otimes H$. We say that the coaction is \textbf{universal in $\mathcal C$},
or that $H$ is \textbf{the universal bialgebra (resp. Hopf algebra) in $\mathcal C$ coacting on $A$}, if
for any coaction $\beta : A \longrightarrow A \otimes H'$, with $H'$ being a bialgebra
(resp. Hopf algebra) in $\mathcal C$, there exists a unique bialgebra map
$f : H \longrightarrow H'$ such that
$({\rm id}_{A} \otimes f) \circ \alpha = \beta$.
Manin first \cite{ma} proposed to construct bialgebras and Hopf algebras
by looking at universal objects (in suitable categories) coacting on well
chosen algebras (quadratic algebras in \cite{ma}).
In the case of finite-dimensional algebras, it is not difficult to show
that given a finite-dimensional algebra $A$, there exists a universal bialgebra coacting
on $A$, and hence, through Manin's Hopf envelope construction \cite{ma},
a universal Hopf algebra coacting on $A$. The problem with this Hopf algebra
is that it is not finitely generated in general, and hence can hardly be thought of as the
algebra of functions on the quantum symmetry group of a finite quantum space.
Manin's work was continued by Wang in \cite{wa} in the framework of Woronowicz algebras,
the objects dual to compact quantum groups.
In that paper, Wang showed that there does indeed exist a universal
Woronowicz algebra (or equivalently a universal CQG algebra) coacting on the diagonal ${\rm C}^*$-algebra ${\mathbb C}^n$, now denoted $A_s(n)$.
For non-commutative ${\rm C}^*$-algebras, Wang showed that such a universal object
does not exist (as shown by the quantum $SO(3)$-groups), but studied instead
quantum symmetry groups of ${\rm C}^*$-algebras endowed with a faithful positive functional,
leading to the construction of other quite interesting quantum groups.
In this paper we concentrate on Hopf algebra coactions on the diagonal algebra ${\mathbb K}^n$.
The algebra $A_s(n, {\mathbb C})$, the canonically defined dense CQG
subalgebra of Wang's $A_s(n)$, is the universal ${\mathbb C}$-algebra generated
by the entries of a magic matrix, and it is clear that such a definition
works over any field. We are especially interested in knowing
if $A_s(n, {\mathbb K})$ is still universal in an appropriate category.
\section{The universal coaction}
Let ${\mathbb K}$ be an arbitrary field. We study Hopf algebra coactions on the diagonal algebra
${\mathbb K}^n$. We begin with bialgebra coactions, for which we
have the following basic result. The proof uses standard arguments, and is left to the reader,
who might also consult \cite{wa}.
\begin{proposition}
Let $B$ be a bialgebra and let
$\beta : {\mathbb K}^n \longrightarrow {\mathbb K}^n \otimes B$ be a right comodule structure
on ${\mathbb K}^n$, with
$$\beta(e_i) = \sum_{k=1}^n e_k \otimes x_{ki}$$
Then $\beta$ is an algebra map (and hence a coaction on the algebra ${\mathbb K}^n$) if and only if the
matrix $x=(x_{ij})$ is semi-magic.
\end{proposition}
Having this proposition in hand, we see that there indeed exists a
universal bialgebra coacting on ${\mathbb K}^n$, which is the universal algebra
generated by the entries of a semi-magic matrix.
At the Hopf algebra level, Manin's Hopf envelope of the previous bialgebra furnishes
a universal Hopf algebra coacting on ${\mathbb K}^n$. However this Hopf algebra is certainly too
big (not finitely generated), and
we believe that the good object is $A_s(n, {\mathbb K})$, defined as follows.
\begin{definition}
The algebra $A_s(n,{\mathbb K})$ is the algebra presented by generators $u_{ij}$, $i,j \in [n]$, subject to the relations making $u=(u_{ij})$ a magic matrix.
\end{definition}
Here is the first basic result regarding $A_s(n,{\mathbb K})$. The proof is left to the reader.
\begin{proposition}
The algebra $A_s(n,{\mathbb K})$ has a Hopf algebra structure defined by
$$\Delta(u_{ij}) = \sum_{k=1}^n u_{ik} \otimes u_{kj}, \quad \varepsilon(u_{ij}) = \delta_{ij}, \quad
S(u_{ij})=u_{ji}$$
The formula $$\alpha(e_i) = \sum_{k=1}^n e_k \otimes u_{ki}$$
defines a coaction of $A_s(n,{\mathbb K})$ on the algebra ${\mathbb K}^n$.
\end{proposition}
The relationship of $A_s(n,{\mathbb K})$ with the symmetric group is examined
in the next proposition, where several arguments
of \cite{wa} are used. The defining relations of magic
matrices are those of permutation matrices, but since we are in arbitrary characteristic
we cannot claim directly here that a commutative Hopf algebra is a function algebra.
\begin{proposition}
There exists a surjective Hopf algebra map $\pi_n : A_s(n,{\mathbb K}) \longrightarrow {\mathbb K}(S_n)$, where
${\mathbb K}(S_n)$ is the Hopf algebra of ${\mathbb K}$-valued functions on the symmetric group $S_n$, such that:
\begin{enumerate}
\item The map $\pi_n$ is an isomorphism if and only if $n\leq 3$.
\item The map $\pi_n$ induces an isomorphism between $A_s^c(n,{\mathbb K})$, the maximal commutative
quotient of $A_s(n,{\mathbb K})$, and ${\mathbb K}(S_n)$.
\end{enumerate}
Moreover, the algebra $A_s(n,{\mathbb K})$ is non-commutative and infinite-dimensional if $n\geq 4$.
\end{proposition}
\begin{proof}
The map $\pi_n$ is defined by sending $u_{ij}$ to $p_{ij}$, the function defined
by $p_{ij}(\sigma) =\delta_{i,\sigma(j)}$.
It is surjective because $e_\sigma$, the characteristic function of a permutation $\sigma$,
satisfies
$$e_\sigma = p_{\sigma(1)1} \cdots p_{\sigma(n)n}$$
The algebra map $\pi_n$ thus induces a surjective algebra map
$\pi_n^c:A_s^c(n,{\mathbb K}) \longrightarrow {\mathbb K}(S_n)$. Denoting the generators of $A_s^c(n,{\mathbb K})$ by $x_{ij}$, we define
a linear map ${\mathbb K}(S_n) \longrightarrow A_s^c(n,{\mathbb K})$ by sending $e_\sigma$ to
$x_{\sigma(1)1} \cdots x_{\sigma(n)n}$. One checks easily that this is a Hopf algebra map, and that
it is the reciprocal isomorphism of $\pi_n^c$.
If $n \geq 4$, one can reproduce Wang's argument in \cite{wa}, page 201, to see that
$A_s(n,{\mathbb K})$ has an infinite dimensional quotient given by a free product
of non-trivial algebras.
It is trivial that $A_s(2, {\mathbb K})$ is commutative, while some
slightly more involved computations, left to the reader, show that $A_s(3, {\mathbb K})$ is commutative as well.
This concludes the proof.
\end{proof}
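To make the case $n=2$ explicit: in $A_s(2,{\mathbb K})$, relations (2.2) and (2.4) give
$u_{12}=u_{21}=1-u_{11}$ and $u_{22}=u_{11}$, while relations (2.1) and (2.3) all reduce to
$u_{11}^2=u_{11}$. Thus $A_s(2,{\mathbb K})$ is generated by the single idempotent $u_{11}$, and
$$A_s(2,{\mathbb K})\cong {\mathbb K}[X]/(X^2-X)\cong {\mathbb K}^2\cong {\mathbb K}(S_2).$$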
Proposition 3.4 seems to indicate that the quantum group corresponding to
$A_s(n,{\mathbb K})$ is some kind of free version of the symmetric group $S_n$. See
\cite{bb,bc,bbc} for probabilistic and representation theoretic
meanings of freeness.
When ${\mathbb K} = {\mathbb C}$, the Hopf algebra $A_s(n,{\mathbb C})$ is a CQG algebra
(and is the dense CQG algebra of Wang's algebra $A_s(n)$),
a fact that has been already mentioned in several papers.
We reproduce the argument here, and we note that the cosemisimplicity
holds more generally in characteristic zero.
\begin{proposition}
If ${\mathbb K}$ is a characteristic zero field,
the Hopf algebra $A_s(n,{\mathbb K})$ is cosemisimple.
If ${\mathbb K} = {\mathbb C}$, then $A_s(n,{\mathbb C})$ has a Hopf $*$-algebra structure
given by $u_{ij}^*=u_{ij}$, and is a CQG algebra.
\end{proposition}
\begin{proof}
It is straightforward to check that there exists a ${\mathbb K}$-algebra map
$\tau : A_s(n,{\mathbb K}) \longrightarrow A_s(n,{\mathbb K})$ such that $\tau (u_{ij})=u_{ji}$. The algebra
$A_s(n,{\mathbb K})$ is defined over the ordered field $\mathbb Q$, and thus $A_s(n,{\mathbb K})$ is cosemisimple
by Theorem 4.7 in \cite{bi98}. When ${\mathbb K} = {\mathbb C}$, the Hopf $*$-algebra structure
is easily defined, and the generating multiplicative matrix of $A_s(n,{\mathbb C})$ being
unitary, we use \cite[Proposition 28, p.~417]{ks} to conclude that $A_s(n,{\mathbb C})$ is a CQG algebra.
\end{proof}
We now wish to study when the coaction defined in Proposition 3.3 is universal.
First we need to clarify the interactions between the various relations
defining magic matrices. This is done in the following lemma.
\begin{lemma}
Let $H$ be a Hopf algebra, and let $x=(x_{ij}) \in M_n(H)$ be a multiplicative matrix.
If three of the families of relations (2.1), (2.2), (2.3), (2.4) hold for the elements $x_{ij}$, then
the fourth family also holds, so that $x$ is a magic matrix, and $S(x_{ij}) = x_{ji}$ for all $i,j$.
\end{lemma}
\begin{proof}
Assume that relations (2.1), (2.2) and (2.3) hold. Then
it is easy to see that $xx^t =I$, the identity matrix ($x^t$ is the transpose matrix). The matrix
$x$ is invertible with inverse $S(x)$, hence $S(x) = x^t$. Now one gets Relations
(2.4) by applying the antipode to Relations (2.2). The other cases are treated with similar
arguments.
\end{proof}
\begin{lemma}
Let $H$ be a Hopf algebra with bijective antipode
coacting on the diagonal algebra ${\mathbb K}^n$, with coaction
$\beta : {\mathbb K}^n \longrightarrow {\mathbb K}^n \otimes H$. Let $x_{ij} \in H$, $i,j \in[n]$, be such that
$$\beta(e_i) = \sum_{k=1}^n e_k \otimes x_{ki}$$
so that the matrix $x=(x_{ij})\in M_n(H)$ is semi-magic.
For $i \in [n]$, let $u_i=\sum_{k=1}^nx_{ki}$. Then $u_i$ is invertible for all $i \in [n]$, and
$$S(x_{ij}) = u_i^{-1}x_{ji}, \quad {\rm and} \quad
S^{-1}(x_{ij}) = x_{ji}u_i^{-1}, \ \forall i,j \in [n]$$
\end{lemma}
\begin{proof}
The matrix $x$ is multiplicative because $\beta$ endows ${\mathbb K}^n$ with an $H$-comodule structure, and is semi-magic by Proposition 3.1. We have
$$(x^t)x = {\rm diag}(u_1, \ldots, u_n)$$
and since $x$ and $x^t$ are invertible with respective inverses $S(x)$ and $S^{-1}(x)^t$,
the elements $u_i$ are all invertible and we have
$$S(x) = {\rm diag}(u_1^{-1}, \ldots , u_n^{-1}) x^t \quad {\rm and} \quad
S^{-1}(x)^t = x {\rm diag}(u_{1}^{-1}, \ldots , u_{n}^{-1})$$
This concludes the proof of the lemma.
\end{proof}
We are now ready
to prove the main result of the section.
\begin{theorem}
Let $H$ be a Hopf algebra coacting on the diagonal algebra ${\mathbb K}^n$, with coaction
$\beta : {\mathbb K}^n \longrightarrow {\mathbb K}^n \otimes H$. Assume that one of the following conditions holds.
\begin{enumerate}
\item The usual integration map $\psi : {\mathbb K}^n \longrightarrow {\mathbb K}$, $e_i \longmapsto 1$, is $H$-colinear.
\item $S^2 = {\rm id}_H$.
\item ${\mathbb K}$ has characteristic zero or characteristic $p>n$, and $H$ is cosemisimple.
\end{enumerate}
Then there exists a unique Hopf algebra map
$f : A_s(n,{\mathbb K}) \longrightarrow H$ such that
$$({\rm id}_{{\mathbb K}^n} \otimes f) \circ \alpha = \beta$$
\end{theorem}
\begin{proof}
Let $x_{ij} \in H$, $i,j \in [n]$ be as in Lemma 3.7.
We already know that the matrix $(x_{ij})$ is semi-magic, and
it is clear from the construction of $A_s(n,{\mathbb K})$ that we just have to prove that it is magic.
In case (1), by the $H$-colinearity of $\psi$, we see that Relations (2.4) hold in $H$, and
by Lemma 3.6, the matrix $x$ is magic and the theorem is proved in the first case.
Assume now that $S^2 = {\rm id}_H$.
We have $S(x_{ij}) = u_i^{-1}x_{ji} =
S^{-1}(x_{ij}) = x_{ji}u_i^{-1}$, $\forall i,j \in [n]$, by Lemma 3.7.
Let $i,j,k \in [n]$ with $i \not = j$. Then, using again Lemma 3.7, we have
$$0 = x_{kj}x_{ki} = S(x_{kj}x_{ki}) = u_k^{-1} x_{ik} x_{jk} u_k^{-1}$$
and hence relations (2.3) hold in $H$. We conclude using
Lemma 3.6.
We assume now that $H$ is cosemisimple. Let $h : H \longrightarrow {\mathbb K}$ be the Haar measure
and let $x_{ij} \in H$, $i,j \in [n]$, be as in Lemma 3.7. Put $\alpha_i = h(u_i) = h(\sum_kx_{ki})$.
The map $\varphi = (\psi \otimes h) \circ \beta$, ${\mathbb K}^n \longrightarrow {\mathbb K}$, is $H$-colinear,
with $\varphi(e_i) = \alpha_i$. Thus the bilinear form
$\omega : {\mathbb K}^n \otimes {\mathbb K}^n \longrightarrow {\mathbb K}$ defined
by $\omega = \varphi \circ m$, where $m$ is the multiplication of ${\mathbb K}^n$, is also $H$-colinear.
We have $\omega(e_i,e_j) = \delta_{ij}\alpha_i$, and if $E= {\rm diag}(\alpha_1, \ldots, \alpha_n)$,
the $H$-colinearity of $\omega$ gives
$x^t E x = E$, and hence $x^t E = E S(x)$. Thus we have
$$\alpha_i S(x_{ij}) = \alpha_j x_{ji}, \quad \forall i,j \in [n]$$
Let $I = \{ i \in [n] \ | \ \alpha_i\not = 0\}$. The set $I$ is non-empty
since
$$\sum_{i=1}^n \alpha_i = \sum_{i,k=1}^nh(x_{ki}) =nh(1)= n \not =0$$
We have
$$S(x_{ij}) = \alpha_i^{-1} \alpha_j x_{ji}, \ \forall i \in I, \forall j \in [n] $$
and since $S$ is bijective, we have $x_{ij} = 0$ for $i \in I$ and $j\not \in I$.
Also for $i \not \in I$ and $j \in I$, we have, by Lemma 3.7,
$x_{ij} = u_j S(x_{ji})=0$.
We now concentrate on the elements $x_{ij}$, $i,j \in I$.
We wish to prove that $x_0 = (x_{ij})_{i,j \in I}$ is a magic matrix.
For $i,j \in I$, we have
$$ \Delta(x_{ij}) = \sum_{k\in I} x_{ik} \otimes x_{kj}, \quad \varepsilon(x_{ij}) = \delta_{ij}$$
and hence $x_0$ is a multiplicative matrix. It is clear
that Relations (2.1) hold for the elements $x_{ij}$, $i,j \in I$. Also, for $i \in I$
$$1 = \sum_{k \in I}x_{ik} + \sum_{k \not \in I}x_{ik} = \sum_{k \in I}x_{ik}$$
and Relations (2.2) hold for the elements $x_{ij}$, $i,j \in I$.
Let $i,j,k \in I$, with $i \not = j$. We have
$$x_{ik}x_{jk}= \alpha_k \alpha_i^{-1}S(x_{ki})\alpha_k \alpha_j^{-1} S(x_{kj})
=\alpha_k^2 \alpha_i^{-1} \alpha_j^{-1} S(x_{kj}x_{ki}) =0$$
and hence Relations (2.3) hold for the elements $x_{ij}$, $i,j \in I$.
Now by Lemma 3.6 $x_0$ is a magic matrix and in particular we have
$$\sum_{k \in I} x_{ki} =1, \ \forall i\in I$$
Hence for $i \in I$, we have
$$u_i = \sum_{k \in I} x_{ki} + \sum_{k \not \in I} x_{ki} = \sum_{k \in I} x_{ki}=1$$
and $\alpha_i=h(u_i)=1$, $\forall i \in I$. But then
$$n= \sum_{i \in I} \alpha_i = \#I$$
Thus $I = [n]$ and the proof of Theorem 3.8 is complete.
\end{proof}
\bigskip
Combining Theorem 3.8 and Proposition 3.5, we get
the following universality result.
\begin{theorem}
If ${\mathbb K}$ has characteristic zero, the Hopf algebra $A_s(n,{\mathbb K})$ is the universal
cosemisimple Hopf algebra coacting on the algebra ${\mathbb K}^n$.
\end{theorem}
This result is a refinement of
Wang's universality Theorem in \cite{wa}.
Indeed it is possible to get an equivalent version of Wang's Theorem
as an immediate corollary of Theorem 3.8.
\begin{theorem}[Theorem 3.1 in \cite{wa}]
The CQG algebra $A_s(n, {\mathbb C})$ is the universal CQG algebra coacting
on ${\mathbb C}^n$.
\end{theorem}
\begin{proof}
Let $H$ be a CQG algebra coacting on ${\mathbb C}^n$: here this means that the coaction is moreover
a $*$-algebra map. The elements $x_{ij}$ in the proof of Theorem 3.8 therefore satisfy $x^*_{ij}=x_{ij}$, and the Hopf algebra morphism from the proof of Theorem 3.8
is a $*$-algebra map.
\end{proof}
We conclude the section with some remarks and questions.
First we should mention that case (1) of Theorem 3.8
is a particular case of a general machinery developed in \cite{bi00}. We have
included it here because it is immediate using the arguments
needed to prove the other cases, and also
because the invariance of classical integration
is a natural requirement (automatic in the classical case),
which further motivates the use of $A_s(n,{\mathbb K})$.
\medskip
The corepresentation theory of
$A_s(n,{\mathbb C})$ is worked out in \cite{ba0}: it is similar to the representation theory of the algebraic
group $SO(3,{\mathbb C})$.
This can be generalized in characteristic zero.
On the other hand, we do not know if $A_s(n,{\mathbb K})$ is cosemisimple in
positive characteristic $p>n$. In this case there is always a non-trivial cosemisimple Hopf
algebra coacting on ${\mathbb K}^n$, the function algebra on $S_n$. So we have the following question.
\begin{question}
Assume that ${\mathbb K}$ has characteristic $p>n\geq 4$. Does there exist
a universal cosemisimple Hopf algebra coacting on the algebra ${\mathbb K}^n$?
\end{question}
The noncommutative cosemisimple Hopf algebras constructed in \cite{bi0} show that if
this universal Hopf algebra exists, then it is not isomorphic to ${\mathbb K}(S_n)$.
\medskip
It is also clear that $A_s(n,{\mathbb K})$ might be defined over any ring, with functoriality properties.
This suggests that the mod $p$ reduction $A_s(n,\mathbb Z)\rightsquigarrow A_s(n,{\mathbb Z}/p{\mathbb Z})$ could be
used in some contexts (especially in the context of quantum automorphism
groups of finite graphs as in \cite{bbch}), but this idea has not been
fruitful yet.
\section{Non-ergodic coactions}
In this section we study non-ergodic coactions
on ${\mathbb K}^n$. First recall
that if $H$ is a Hopf algebra coacting on an algebra $A$, the coaction is
said to be \textbf{ergodic} if the fixed point subalgebra
$$A^{co H} = \{ a \in A \ | \ \alpha(a) = a \otimes 1\}$$
is reduced to ${\mathbb K}1={\mathbb K}$. Also
recall that the coaction is said to be \textbf{faithful} if $H$ is generated, as an algebra, by
the space of coefficients $\{ (\psi \otimes {\rm id}_H) \circ \alpha(a), \ a \in A, \ \psi \in A^*\}$.
From the quantum group viewpoint, ergodic coactions
correspond to transitive actions. The following result is the analogue
of the decomposition of a classical group action into disjoint orbits.
\begin{proposition}
Let $H$ be a Hopf algebra coacting faithfully on the algebra ${\mathbb K}^n$, and satisfying one of the
assumptions of Theorem 3.8.
Then there exists a sequence of positive integers $m_1 \geq \ldots \geq m_k >0$, with
$m_1 + \cdots +m_k=n$ and $k=\dim(({\mathbb K}^n)^{co H})$, with ergodic coactions of $H$ on each ${\mathbb K}^{m_i}$, and
with a surjective
Hopf algebra morphism $$A_s(m_1,{\mathbb K}) * \cdots * A_s(m_k, {\mathbb K}) \longrightarrow H$$
\end{proposition}
\begin{proof}
It is well-known that a subalgebra of a diagonal algebra is itself diagonal, and corresponds
to a partition of the set $[n]$. Hence we have a
partition $[n] = X_1 \sqcup \cdots \sqcup X_k$, with $m_i = \#X_i$, $m_1 \geq \cdots \geq m_k>0$,
and we have
$$({\mathbb K}^n)^{co H} = {\mathbb K} f_1 \oplus \cdots \oplus {\mathbb K} f_k$$
with $f_i = \sum_{j \in X_i} e_j$ being a minimal coinvariant projection, $\forall i$. Moreover we have
$${\mathbb K}^n = {\mathbb K}^n f_1 \oplus \cdots \oplus {\mathbb K}^n f_k$$
The coinvariance of the $f_i$'s ensures that the original coaction restricts
to coactions on the diagonal algebras ${\mathbb K}^nf_i = {\mathbb K}^{m_i}$ (with unit $f_i$), and the coactions
are ergodic by the minimality of the $f_i$'s.
Hence we get the announced coactions, which by Theorem 3.8 produce the announced Hopf algebra map, and the surjectivity follows
from the faithfulness assumption.
\end{proof}
Although this result seems to reduce the study of coactions to ergodic ones, we have
to say that in general it is difficult to determine
the quotients of a free product. Here is a modest application to the determination
of low degree quantum permutation groups.
\begin{corollary}
Let $H$ be a CQG algebra coacting on ${\mathbb C}^5$. If the coaction is not ergodic, then
$H$ is a Hopf $*$-algebra quotient of $A_s(4,{\mathbb C})$ (listed in \cite{bb3})
or is a Hopf $*$-algebra quotient of ${\mathbb C}(S_3)*{\mathbb C}(S_2)$.
\end{corollary}
\section{Group gradings on diagonal algebras}
Let $G$ be a group and let $A$ be an algebra. Let us recall that
a $G$-grading on $A$ consists of a vector space decomposition
$$A = \bigoplus_{g \in G} A_g$$
with $A_gA_h \subset A_{gh}$ for all $g,h \in G$.
It is then easy to see that $1 \in A_1$.
Also it is well known that a $G$-grading is exactly
the same thing as a coaction on $A$ by ${\mathbb K}[G]$, the convolution group algebra of $G$.
If we have a $G$-grading, the formula $\alpha(a) = a\otimes g$, $a \in A_g$, defines
a coaction of ${\mathbb K}[G]$ on $A$, and the converse follows from the cosemisimplicity of
${\mathbb K}[G]$ and the fact that its simple comodules are one-dimensional.
We freely interchange the two notions.
For a non-zero element $a \in A_g$, we write $|a|=g$.
We say that the grading is faithful if the set
$$S = \{ g \in G \ | \ \exists a \in A \ {\rm with} \ |a|=g\}$$
generates $G$ as a group. Faithful $G$-gradings correspond to faithful coactions as in the previous section. Of course we are only interested in faithful gradings.
We say that the grading is ergodic if $A_1={\mathbb K}1$, which means that
the corresponding coaction is ergodic.
\medskip
In this section we determine the possible group gradings on diagonal algebras. The conclusion will be given
in Proposition 5.2.
We begin with a lemma, very similar to Lemma 5 in \cite{ps}.
\begin{lemma}
Let $G$ be a group and assume that $A={\mathbb K}^n$ has a faithful $G$-grading.
\begin{enumerate}
\item If $A_g \not = 0$, then $g$ has finite order.
\item If the grading is ergodic, the group $G$ is abelian.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $a \in A$ with $|a|=g$. Let $l >0$ be such that
there exist $\lambda_0, \ldots , \lambda_{l-1} \in {\mathbb K}$ such that
$a^l = \sum_{i=0}^{l-1} \lambda_ia^i$. Then
$$a^l\otimes g^l = \sum_{i=0}^{l-1} \lambda_i a^i \otimes g^i$$
If $g$ does not have finite order, then $a^l=0$ and $a=0$ (${\mathbb K}^n$ has no non-zero nilpotents),
which contradicts the assumption on $a$.
We assume now that the grading is ergodic. Let $a,b \in A$
with $|a|=g$ and $|b|=h$. The elements $g,h \in G$ have finite order
and hence by Lemma 5 in \cite{ps} the elements $a$ and $b$ are invertible.
Thus $ab=ba$ is a non-zero element in $A_{gh} \cap A_{hg}$, and $gh=hg$.
The grading being faithful, we conclude that $G$ is abelian.
\end{proof}
\begin{proposition}
Let $G$ be a group. Assume that ${\mathbb K}$ has characteristic zero and is algebraically closed. Then the following are equivalent.
\begin{enumerate}
\item There exists a faithful $G$-grading on ${\mathbb K}^n$.
\item There exists a family of transitive abelian groups
$G_i \subset S_{m_i}$, $i=1, \ldots, k$, with $m_1+ \cdots + m_k=n$, and a surjective group morphism
$$G_1 * \cdots * G_k \longrightarrow G$$
\end{enumerate}
If these conditions hold, the grading is ergodic if and only if $G\subset S_n$ is a transitive
abelian group.
\end{proposition}
\begin{proof}
Let $G \subset S_n$ be an abelian group. We begin by constructing
a $G$-grading on ${\mathbb K}^n$. The action of $G$ on $[n]$ induces
a coaction ${\mathbb K}^n \longrightarrow {\mathbb K}^n \otimes {\mathbb K}(G)$. Combined with the Hopf algebra
isomorphisms ${\mathbb K}(G) \simeq {\mathbb K}[\widehat{G}] \simeq {\mathbb K}[G]$, this gives a
$G$-grading on ${\mathbb K}^n$, which is ergodic if and only if $G$ is transitive.
Assume now that condition (2) holds. The previous construction
gives a (transitive) $G_i$-grading on ${\mathbb K}^{m_i}$, and hence a
$G_1 * \cdots * G_k$-grading on ${\mathbb K}^n = {\mathbb K}^{m_1} \oplus \cdots \oplus {\mathbb K}^{m_k}$.
This gives finally a faithful $G$-grading on ${\mathbb K}^n$.
Assume now that ${\mathbb K}^n$ has a faithful $G$-grading. If the grading is ergodic,
the group $G$ is abelian by Lemma 5.1, and the ${\mathbb K}[G]$-coaction gives a ${\mathbb K}(G)$-coaction, and hence
a transitive $G$-action on $[n]$. The grading is faithful and hence $G \subset S_n$ is a transitive
abelian group. In general, we have, by Proposition 4.1 and its proof,
a surjective Hopf algebra map
$A_s(m_1,{\mathbb K}) * \cdots * A_s(m_k, {\mathbb K}) \longrightarrow {\mathbb K}[G]$
and the image of $A_s(m_i,{\mathbb K})$ is a group algebra ${\mathbb K}[G_i]$ that
coacts ergodically on ${\mathbb K}^{m_i}$. By the ergodic case we know that each $G_i\subset S_{m_i}$
is a transitive abelian group, and we are done.
\end{proof}
\begin{corollary} Assume that ${\mathbb K}$ has characteristic zero and is algebraically closed.
Every cocommutative and cosemisimple Hopf algebra quotient of
$A_s(n,{\mathbb K})$ is isomorphic to ${\mathbb K}[G]$, where the group $G$ is a quotient
of a free product of transitive abelian groups.
Every cocommutative Hopf $*$-algebra quotient of
$A_s(n,{\mathbb C})$ is isomorphic to ${\mathbb C}[G]$, where the group $G$ is a quotient
of a free product of transitive abelian groups.
\end{corollary}
\begin{proof}
This follows directly from the previous result because a cosemisimple
and cocommutative Hopf algebra is a group algebra, and also a cocommutative
CQG algebra is a group algebra.
\end{proof}
| {
"timestamp": "2007-10-08T13:55:14",
"yymm": "0710",
"arxiv_id": "0710.1521",
"language": "en",
"url": "https://arxiv.org/abs/0710.1521",
"abstract": "We discuss some algebraic aspects of quantum permutation groups, working over arbitrary fields. If $K$ is any characteristic zero field, we show that there exists a universal cosemisimple Hopf algebra coacting on the diagonal algebra $K^n$: this is a refinement of Wang's universality theorem for the (compact) quantum permutation group. We also prove a structural result for Hopf algebras having a non-ergodic coaction on the diagonal algebra $K^n$, on which we determine the possible group gradings when $K$ is algebraically closed and has characteristic zero.",
"subjects": "Quantum Algebra (math.QA); Rings and Algebras (math.RA)",
"title": "Algebraic quantum permutation groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969641180276,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7099044148130814
} |
https://arxiv.org/abs/2008.08503 | The Erdős-Ko-Rado theorem for $2$-intersecting families of perfect matchings | A perfect matching in the complete graph on $2k$ vertices is a set of edges such that no two edges have a vertex in common and every vertex is covered exactly once. Two perfect matchings are said to be $t$-intersecting if they have at least $t$ edges in common. The main result in this paper is an extension of the famous Erdős-Ko-Rado (EKR) theorem \cite{EKR} to 2-intersecting families of perfect matchings for all values of $k$. Specifically, for $k\geq 3$ a set of 2-intersecting perfect matchings in $K_{2k}$ of maximum size has $(2k-5)(2k-7)\cdots (1)$ perfect matchings. | \section{Introduction and Preliminaries}
In this paper we present two different approaches to establish a version of the Erd\H{o}s-Ko-Rado theorem for $2$-intersecting families of perfect matchings. There are many recent results that verify analogs of the Erd\H{o}s-Ko-Rado theorem. This research area started with Erd\H{o}s, Ko, and Rado's work on systems of intersecting sets. In 1961, they proved if $\mathcal{F}$ is a $t$-intersecting family of $k$-subsets of $\{1,2,\ldots, n\}$, then there is a tight upper bound on the size of $\mathcal{F}$ with $n$ sufficiently large~\cite{EKR}.
\begin{theorem}[EKR]\cite{EKR}
If $\mathcal{F}$ is a $t$-intersecting family of $k$-subsets of $\{1,2,\ldots, n\}$, then there exists a function $f(k,t)$ such that if $n\geq f(k,t)$, then
\begin{equation*}
|\mathcal{F}|\leq \binom{n-t}{k-t}.
\end{equation*}
If equality holds, then $\mathcal{F}$ consists of all $k$-subsets containing a fixed $t$-subset of $\{1,2,\ldots, n\}$.
\end{theorem}
Twenty-three years after the publication of Erd\H{o}s, Ko and Rado's work, Wilson~\cite{W} enhanced their results by giving an algebraic proof of their result with the exact value of $f(k, t)$ for all $k$ and $t$. Later in 1997, Ahlswede and Khachatrian~\cite{AK} found all maximum $t$-intersecting families of $k$-subsets for all values of $n$. In 2011, Ellis, Friedgut, and Pilpel~\cite{Ellis} showed that the analog of the EKR theorem holds for $t$-intersecting families of permutations of $\{1,\dots, n \}$, when $n$ is sufficiently large relative to $t$. In 2005, Meagher and Moura~\cite{MM} proved that a natural version of the EKR theorem holds for uniform set-partitions. Recently, an algebraic proof of this well-known theorem for intersecting families of perfect matchings was found by Godsil and Meagher~\cite{GMP}; their proof is based on eigenvalue techniques originally utilized by Wilson~\cite{W}. Further, they conjectured that a version of the EKR theorem holds for $t$-intersecting families of perfect matchings, when $2k\geq 3t+2$. In 2018, Lindzey~\cite{L} proved this conjecture for all $t$, provided that $k$ is sufficiently large relative to $t$. In this paper we prove that the conjecture holds for $t=2$ and all $k\geq 3$. \\
In Section~\ref{sec:background}, we provide some necessary background on perfect matchings and introduce the association scheme for perfect matchings. We convert the problem of finding the maximum size of an intersecting set of perfect matchings to the problem of finding a maximum coclique in a graph. Section~\ref{sec:FirstApproach} gives a proof of the result for some values of $k$; this proof uses the well-known clique-coclique bound. In Section~\ref{sec:Second Approach}, we develop a different approach that proves the result for all $k$: we construct a matrix in the association scheme that is a weighted adjacency matrix for the graph in question, and we prove our result by showing that the ratio bound holds with equality for this weighted adjacency matrix. We conclude this work with some possible related future directions and open problems, and we include an appendix which provides several partial tables of eigenvalues for different graphs in the association scheme for perfect matchings.
\section{Background on Perfect Matchings}
\label{sec:background}
A \textsl{matching} $M$ in a graph $X$ is a set of edges such that no two edges have a vertex in common. If a matching covers every vertex of $X$, it is called a \textsl{perfect matching}~\cite{GG}. Two perfect matchings are said to be \textsl{$t$-intersecting} if they have at least $t$ edges in common. If $t=1$, we just say that they are intersecting. In this paper we only consider perfect matchings in complete graphs with an even number of vertices. Our goal is to find the size of the largest set of $2$-intersecting perfect matchings in $K_{2k}$ for all $k \geq 3$. A perfect matching is a special case of a uniform set-partition in which the size of each part is 2. In~\cite{MM} a proof for a version of the EKR theorem for uniform set-partitions is presented. However, the results given in~\cite{MM} are asymptotic in nature (as in the size of $k$ needs to be sufficiently large relative to $t$) and do not apply to perfect matchings when $t > 1$.
It is easy to check that the number of perfect matchings in $K_{2k}$ is
\begin{equation*}
\frac{1}{k!}\binom{2k}{2}\binom{2k-2}{2}\cdots \binom{2}{2} = (2k-1)(2k-3)(2k-5)\cdots 1.
\end{equation*}
For any positive integer $k$ define
\begin{equation*}
(2k-1)!!:=(2k-1)(2k-3)(2k-5)\cdots 1,
\end{equation*}
so the number of perfect matchings in $K_{2k}$ is $(2k-1)!!$.
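For small $k$ this count is easy to verify by brute force; the following Python script (our own illustration, not part of the paper's argument) enumerates the perfect matchings of $K_{2k}$ recursively and checks the count against $(2k-1)!!$.
\begin{verbatim}
def perfect_matchings(vertices):
    """Yield each perfect matching of the complete graph on `vertices`."""
    if not vertices:
        yield []
        return
    v, rest = vertices[0], vertices[1:]
    for i, w in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(v, w)] + m

def double_factorial(m):
    return 1 if m <= 0 else m * double_factorial(m - 2)

for k in range(1, 6):
    count = sum(1 for _ in perfect_matchings(tuple(range(1, 2 * k + 1))))
    assert count == double_factorial(2 * k - 1)
    print(2 * k, count)   # e.g. K_6 has 15 = 5!! perfect matchings
\end{verbatim}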
A set of all perfect matchings that contain a common set of $t$ edges is called a \textsl{canonically $t$-intersecting set}. The size of a canonically $t$-intersecting set of perfect matchings in $K_{2k}$ is $(2k-2t-1)!!$. For a set $T$ of $t$ disjoint edges in $K_{2k}$, we use $\nu_{T}$ to denote the characteristic vector of the set of all perfect matchings that include all the edges in $T$.
\subsection{Perfect matching derangement graph}
The approach we take is to define a graph in which every coclique is a set of intersecting perfect matchings. We then use algebraic techniques to find the size of the largest cocliques in this graph. To start, we state some well-known terminology.
Let $X$ be a graph. A \textsl{clique} in $X$ is a set of vertices in which any two are adjacent; a \textsl{coclique} is a set of vertices in which no two are adjacent. The size of a largest clique and a largest coclique are denoted by $\omega(X)$ and $\alpha(X)$, respectively. The \textsl{adjacency matrix} $A(X)$ of $X$ is a matrix in which rows and columns are indexed by the vertices and the $(i ,j)$-entry is 1 if $i\sim j$, and 0 otherwise. A \textsl{weighted adjacency matrix} $A_{W}(X)$ subordinate to $X$ is a symmetric matrix in which rows and columns are indexed by the vertices and the $(i, j)$-entry may be non-zero (which is interpreted as its edge weight) if $i\sim j$ and is 0 otherwise. The \textsl{eigenvalues} of $X$ refer to the eigenvalues of its adjacency matrix. We use $\mathbf{1}$ to denote the all-ones vector; for any $d$-regular graph, the all-ones vector is an eigenvector with eigenvalue $d$.
In general, finding the largest coclique of a graph $X$ is a well-known NP-hard problem, but there is a famous upper bound on $\alpha(X)$ that we use throughout this paper.
\begin{theorem}[Delsarte-Hoffman bound]\cite[p. 31]{GMB}\label{ratioBound}
Let $A$ be a weighted adjacency matrix for a graph $X$ on vertex set $V(X)$. If $A$ has constant row sum $d$ and least eigenvalue $\tau$, then
\begin{equation*}\label{RatioBound}
\alpha(X)\leq \frac{|V(X)|}{1-\frac{d}{\tau}}.
\end{equation*}
If equality holds for some coclique $S$ with characteristic vector $\nu_{S}$, then
\begin{equation*}
\nu_{S}-\frac{|S|}{|V(X)|}\mathbf{1}
\end{equation*}
is an eigenvector with eigenvalue $\tau$.
\end{theorem}
This bound is based on the ratio between the largest and the smallest eigenvalue for a weighted adjacency matrix, thus it is also known as the \textsl{Ratio Bound}. The Ratio Bound is important here since we apply it to a graph defined so that the cocliques are sets of $2$-intersecting perfect matchings.
\begin{definition}\label{Mt(2k)}\cite{GMP}
Define the \textsl{perfect matching derangement graph} $M_{t}(2k)$ to be the graph whose vertices are the perfect matchings of the complete graph $K_{2k}$. In this graph two vertices are adjacent if they have at most $(t-1)$ edges in common. Denote the adjacency matrix of $M_{t}(2k)$ by $A_{t}(2k)$.
\end{definition}
In a coclique of $M_{t}(2k)$, no two vertices are adjacent; thus any two of them have at least $t$ edges in common, or, in other words, they are $t$-intersecting perfect matchings. Using the Delsarte-Hoffman bound, our problem transforms into finding a weighted adjacency matrix for $M_2(2k)$, for any $k \geq 3$, with a sufficiently large ratio between the largest and least eigenvalues. This method to prove EKR theorems was first developed by Wilson in 1984~\cite{W}. In 2015, Godsil and Meagher applied this method to the family of all perfect matchings of the complete graph $K_{2k}$ to find the largest set of intersecting perfect matchings ($t=1$)~\cite{GMP}; later in 2017 it was applied to $t$-intersecting perfect matchings by Lindzey~\cite{L}. \\
\begin{example}[$M_{1}(6)$]
In Definition~\ref{Mt(2k)}, let $t=1$ and $2k=6$. The number of perfect matchings in $K_{6}$ is $5!!$, so $M_{1}(6)$ has 15 vertices. Two vertices here are adjacent if they are not intersecting. Therefore we have,
\begin{figure}[H]
\includegraphics[scale=0.5]{M16}
\caption{Graph $M_{t}(2k)$ when $t=1$ and $2k=6$}
\end{figure}
\end{example}
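The example can be reproduced computationally. The sketch below (our own illustration; the exhaustive coclique search is feasible only for this tiny case) builds $M_{1}(6)$, checks that it has 15 vertices and is 8-regular, and finds that its largest cocliques have size $3=(2\cdot 3-3)!!$, in agreement with the EKR theorem for intersecting perfect matchings cited above.
\begin{verbatim}
from itertools import combinations

def perfect_matchings(vertices):
    """Yield each perfect matching (a set of edges) on `vertices`."""
    if not vertices:
        yield frozenset()
        return
    v, rest = vertices[0], vertices[1:]
    for i, w in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield m | {frozenset((v, w))}

pms = list(perfect_matchings(tuple(range(1, 7))))
print(len(pms))                                   # 15 = 5!!
adj = [[len(p & q) == 0 for q in pms] for p in pms]
print(sum(adj[0]))                                # 8: M_1(6) is 8-regular

# exhaustive search for a maximum coclique (independent set)
best = 0
for mask in range(1 << len(pms)):
    S = [i for i in range(len(pms)) if (mask >> i) & 1]
    if all(not adj[i][j] for i, j in combinations(S, 2)):
        best = max(best, len(S))
print(best)                                       # 3 = (2k-3)!! for k = 3
\end{verbatim}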
\subsection{Perfect matching association scheme}
A \textit{set partition} $P=\{P_1, \ldots, P_\ell\}$ of the set $\{1,2,\ldots, n\}$ is a grouping of the elements of this set into nonempty subsets (parts) such that each element is included in exactly one part. A set partition in which all parts have even size is called an \textit{even set partition}.
An \textsl{integer partition} of a positive integer $n$ is a list $\lambda = [\lambda_1,\lambda_2, \dots, \lambda_\ell]$ of positive integers with $n = \sum_i \lambda_i$; typically in an integer partition $\lambda_i \geq \lambda_{i+1}$. We denote an integer partition of $n$ by $\lambda \vdash n$. An integer partition in which all parts have even size is called an \textsl{even partition}. For example, if $\lambda \vdash k$, with $\lambda = [\lambda_1,\lambda_2, \dots, \lambda_\ell]$, then $2\lambda = [ 2\lambda_1, 2\lambda_2, \dots, 2\lambda_\ell]$ is an even partition. The set of sizes of the parts in a set partition $P$ of $\{1,2\dots,n\}$ is an integer partition of $n$, we call the integer partition the \textsl{shape} of $P$. (See~\cite[Sections 3.1 and 15.4]{GMB} for more details about partitions.)\\
It is easily seen that any perfect matching of $K_{2k}$ is an even set partition with all parts of size 2; and the shape of any perfect matching is the integer partition $[2,2,\dots,2]$. Taking the union of (or overlapping) two perfect matchings in $K_{2k}$ produces disjoint even cycles in $K_{2k}$, where the union of parallel edges gives rise to 2-cycles. Any two perfect matchings in $K_{2k}$ produce a set partition of the set $\{1,\dots,2k\}$ where each part is the set of vertices contained in one of the even cycles in the union of the two perfect matchings. The shape of this set partition is the integer partition $\lambda = [ \lambda_{1},\lambda_{2}, \cdots, \lambda_{\ell} ]$, where the cycles have sizes $\lambda_{i}$. Such an integer partition is necessarily even. Using this we define a set of graphs on the collection of all perfect matchings.
\begin{definition}\label{CycleType}
Let $k$ be an integer and $\lambda \vdash k$. Define $A_{2\lambda}$ to be a matrix in which rows and columns are indexed by perfect matchings of $K_{2k}$. In $A_{2\lambda}$ the $(P,Q)$-entry is 1 if the union of the perfect matchings $P$ and $Q$ has shape $2\lambda$, and 0 otherwise.
\end{definition}
We define
\[
\mathcal{A}=\{ A_{2\lambda} \, | \, \lambda \vdash k\}.
\]
The set $\mathcal{A}$ forms a symmetric association scheme which is known as the \textsl{perfect matching association scheme}~\cite[Section 15.4]{GMB}, and the matrix algebra $\mathbb{C}[\mathcal{A}]$ constructed by the complex linear combinations of the matrices in $\mathcal{A}$ is called the \textsl{Bose-Mesner algebra} of this association scheme. For a matrix $A_{2\lambda} \in \mathcal{A}$ define the graph $X_{2\lambda}$ so that $A_{2\lambda}$ is its adjacency matrix. A \textsl{graph in the association scheme} is any graph $X$ with $A(X) \in \mathbb{C}[\mathcal{A}]$. Every graph in this association scheme is an undirected graph with the set of perfect matchings as its vertex set.
The symmetric group $\Sym(2k)$ acts transitively on the set of perfect matchings of $K_{2k}$. This action preserves the adjacencies
in $M_{t}(2k)$. This implies $\Sym(2k)$ is a subgroup of the automorphism group of the graph $M_{t}(2k)$ and that $M_{t}(2k)$ is vertex transitive. For any $\lambda \vdash k$, consider the set of all pairs of perfect matchings $P$ and $Q$, with the
property that the union of $P$ and $Q$ has shape $2\lambda$. It is known that $\Sym(2k)$ is transitive on this set of pairs (again see~\cite[Section 15.4]{GMB}), and this implies that each graph $X_{2\lambda}$ is edge transitive.
We state some properties of this association scheme and refer the reader to~\cite[Chapter 3]{GMB} for more details and proofs. Denote the \textsl{Schur product} of two matrices $A=[a_{i,j}]$ and $B = [b_{i,j}]$ by the matrix $A \circ B $ whose $(i,j)$-entry is given by $[a_{i,j}b_{i,j}]$. For any two matrices $A_{\lambda_{i}}$ and $A_{\lambda_{j}}$ in $\mathcal{A}$,
\begin{equation*}
A_{\lambda_{i}}\circ A_{\lambda_{j}} = \left\{
\begin{array}{ll}
0 & i\neq j , \\
A_{\lambda_{i}} & i=j.
\end{array}
\right.
\end{equation*}
This implies the matrices $A_{\lambda_{i}}$ are linearly independent and Schur orthogonal. Hence the matrix set $\{ A_{2\lambda} \, | \, \lambda \vdash k \}$ is an orthogonal basis for the Bose-Mesner algebra of this scheme.
Further, the matrices in $\mathcal{A}$ are symmetric and commute, thus they are simultaneously diagonalizable.
The group $\Sym(2k)$ acts transitively on the set of perfect matchings of $K_{2k}$, and the stabilizer of a single perfect matching under this action is isomorphic to the wreath product $\Sym(2) \wr \Sym(k)$. The action of the group $\Sym(2k)$ on the set of perfect matchings is equivalent to its action on the cosets of $\Sym(2k) / \left( \Sym(2) \wr \Sym(k) \right)$. The perfect matching scheme is thus a Schurian association scheme, namely the orbital scheme of the action of $\Sym(2k)$ on the cosets $\Sym(2k) / \left( \Sym(2) \wr \Sym(k) \right)$.
The $(2k-1)!!$-dimensional vector space of vectors indexed by the perfect matchings is a
$\Sym(2k)$-module. It is well-known that the irreducible representations of $\Sym(2k)$ correspond to integer partitions of $2k$~\cite{Ra} and that this module can be expressed as the sum of the irreducible modules of $\Sym(2k)$ corresponding to even integer partitions of $2k$. We denote these irreducible modules by the corresponding even integer partitions (see~\cite[Chapter 15]{GMB} for details). (For example, $[2k]$ will be used to denote the irreducible module corresponding to the trivial representation; this is the 1-dimensional vector space of constant vectors of length $(2k-1)!!$.) It follows that the common eigenspaces of the matrices in the perfect matching association scheme are unions of these irreducible modules; thus the common
eigenspaces in the perfect matching association scheme correspond to the even integer partitions of $2k$. Further details on these eigenspaces are in Subsection~\ref{Mod of Char Table}.
The graph $M_{t}(2k)$ is the union of the graphs $X_{\lambda}$ of the scheme for which the even partition $\lambda$ has at most $t-1$ parts equal to 2 (these parts correspond to edges common to the two perfect matchings); this can be expressed as
\begin{equation*}
A_{t}(2k) = \sum_{\lambda \vdash 2k}A_{\lambda},
\end{equation*}
where the sum is taken over even partitions $\lambda$ of $2k$ having at most $t-1$ parts equal to 2. Further, the eigenvalues of $M_{t}(2k)$ are sums of the eigenvalues of the matrices $A_{\lambda}$ appearing in this sum.
\section{Clique-Coclique Approach}
\label{sec:FirstApproach}
For the first approach, we construct a large clique in $M_2(2k)$. This clique is constructed using \textsl{difference sets} and \textsl{projective planes} which are introduced and discussed in the next subsection. Then we use this clique to construct a weighted adjacency matrix $M$ of $M_{2}(2k)$, and we will show that the Delsarte-Hoffman bound holds with equality for this matrix. This approach only works for some values of $k$, but motivates the second approach.
We follow the approach from~\cite{MR3990672}, where the authors show how to find the matrix used in Wilson's proof of the EKR Theorem~\cite{W}.
\subsection{Singer difference sets and finite projective plane }
\label{sec:SinderDS}
A \textsl{symmetric balanced incomplete block design} with parameters $(v, k, \lambda)$ is a collection of subsets, called blocks, from a base set of size $v$, with the property that each block has exactly $k\geq 1$ elements (where $v>k$), each element appears in exactly $k$ blocks, and each pair of elements appears in exactly $\lambda \geq 1$ blocks.
\begin{definition}\label{FPP}\cite[p.51]{Wallis}
A symmetric balanced incomplete block design with parameters $(n^{2}+n+1, n+1, 1)$ is called a \textsl{finite projective plane of order $n$}; the base set is the point set of size $n^{2}+n+1$, and each block (an $(n+1)$-subset) of this design is called a line.
Alternatively, a finite projective plane consists of a finite set $P$ of points and a set $L$ of subsets of $P$, called lines, satisfying the axioms (P1), (P2), and (P3):
\begin{itemize}
\item[(P1)]Given two points in $P$, there is exactly one line that contains both.
\item[(P2)]Given two lines in $L$, there is exactly one point on both.
\item[(P3)]There are four points of which no three are co-linear.
\end{itemize}
\end{definition}
A \textsl{$(v, k, \lambda)$-difference set} ($(v, k, \lambda)$-DS) is a $k$-subset $B$ of an abelian group $G$ of order $v$ such that every nonzero element $g$ of $G$ can be written as the difference of an ordered pair of elements of $B$ in exactly $\lambda$ ways~\cite[Section 5.1]{Wallis}. If the group $G$ is cyclic, then the difference set is called a cyclic difference set.
A difference set $D = \{d_1 , d_2, \dots , d_k \}$ can be \textsl{developed} into a symmetric balanced incomplete block design with the blocks
\[
D + g := \{d_1+g, d_2+g, \dots ,d_k+g \}
\]
for all $g \in G$. The design developed by a $(v, k, \lambda)$-DS is actually a symmetric balanced incomplete block design with the same parameters~\cite[p.64]{Wallis}. \\
A difference set may not exist for an arbitrary choice of the parameters $(v, k, \lambda)$. But the following theorem shows the existence of a special type of a difference set, called a \textsl{Singer difference set}. As a result if $n$ is any prime power, then a finite projective plane of order $n$ exists.
\begin{theorem}\cite[Section 12.2]{Colburn}
If $n$ is a prime power, there is a cyclic difference set with parameters:
\begin{equation*}
\left( \frac{n^{d+1}-1}{n-1}, \frac{n^{d}-1}{n-1}, \frac{n^{d-1}-1}{n-1} \right).
\end{equation*}
These difference sets are called \textsl{Singer difference sets}.
\end{theorem}
To construct a Singer difference set with the parameters in the above theorem, let $\alpha$ be a generator of the multiplicative group of $\mathcal{F}_{n^{d+1}}$ (such an $\alpha$ is called primitive). Consider the usual trace function $Tr$ from $\mathcal{F}_{n^{d+1}}$ to $\mathcal{F}_{n}$, which maps an element $x$ to $\sum_{i=0}^{d}x^{n^{i}}$. Then the following set is the desired difference set
\begin{equation}\label{DS}
\mathcal{D} = \{i\,|\,Tr(\alpha^{i}) = 0 \}.
\end{equation}
For more details see~\cite[Section 5.5]{Wallis} and~\cite[Section 12.2]{Colburn}. In particular, if $d=2$ and $n=2^{a}$, for every positive integer $a$, there exists a $(2^{2a}+2^{a}+1, 2^{a}+1, 1)$-DS and so a finite projective plane of the same parameters. This projective plane has $2^{2a}+2^{a}+1$ lines and each line has $2^{a}+1$ points on it.
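As a concrete illustration (our own sketch; it uses the smallest parameters $n=2$, $d=2$ rather than the ones needed below), the following Python script computes the Singer difference set in $\mathbb{Z}_{7}$ from the trace of $\mathcal{F}_{8}$ over $\mathcal{F}_{2}$ and develops it into the 7 lines of the Fano plane; field arithmetic is done on bitmasks modulo $x^{3}+x+1$.
\begin{verbatim}
# arithmetic in F_8 on bitmasks over the basis {1, x, x^2}, with x^3 = x + 1
def gf8_mul(a, b):
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                 # reduce modulo x^3 + x + 1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def gf8_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf8_mul(r, a)
    return r

def trace(a):                        # Tr(a) = a + a^2 + a^4, lands in F_2
    return a ^ gf8_pow(a, 2) ^ gf8_pow(a, 4)

alpha = 0b010                        # the class of x, a primitive element
D = [i for i in range(7) if trace(gf8_pow(alpha, i)) == 0]
print(D)                             # [1, 2, 4], a (7, 3, 1) difference set
lines = [sorted((d + s) % 7 for d in D) for s in range(7)]
print(lines)                         # the 7 lines of the Fano plane
\end{verbatim}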
\begin{definition}\cite[Section 4.1]{Wallis}
Consider an $(n+1)$-subset $\mathcal{O}$ of the point set in a finite projective plane with parameters $(n^{2}+n+1, n+1, 1)$. If $\mathcal{O}$ has at most 2 points in common with every line in this plane, then $\mathcal{O}$ is called an oval.
\end{definition}
If $\mathcal{D}$ is a Singer difference set for the finite projective plane with parameters $(2^{2a}+2^{a}+1, 2^{a}+1, 1)$-DS, then $-\mathcal{D}$ is an oval, and the set of lines is given by
\begin{equation}\label{Lines}
\mathcal{L}=\{ \mathcal{D}+x|x\in \mathbb{Z}_{2^{2a}+2^{a}+1} \}.
\end{equation}
\begin{lemma}\label{LinesWithZero}
Let $\mathcal{P}$ be the finite projective plane with parameters $(2^{2a}+2^{a}+1, 2^{a}+1, 1)$ developed from the Singer difference set $\mathcal{D}$. Then in $\mathcal{P}$:
\begin{enumerate}[label=(\alph*)]
\item the lines containing the zero element are exactly the ones formed by adding an element of the oval $\mathcal{O} = -\mathcal{D}$ to the set $\mathcal{D}$;
\item for every nonzero $s\in \mathbb{Z}_{2^{2a}+2^{a}+1} \backslash \mathcal{O}$, there exists exactly one line which contains both $s$ and zero; and
\item each line containing zero, also contains exactly one element from $\mathcal{O}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The lines containing the zero element are
\[
\mathcal{D}^{*} = \{\mathcal{D}+(-d_{1}), \mathcal{D}+(-d_{2}), \cdots, \mathcal{D}+(-d_{2^{a}+1}) \}.
\]
Also, for every nonzero $s\in \mathbb{Z}_{2^{2a}+2^{a}+1} \backslash \mathcal{O}$, there exists exactly one pair $(d_{i}, d_{j})$ in $\mathcal{D}$ such that $s=d_{i}-d_{j}$ (by the definition of the Singer difference set and the fact that here $\lambda = 1$). Now, the line $\mathcal{D}+(-d_{j})$ is
\begin{equation*}
\mathcal{D}+(-d_{j}) = \{d_{1}-d_{j}, \cdots, \underbrace{d_{i}-d_{j}}_{s}, \cdots, \underbrace{d_{j}-d_{j}}_{0}, \cdots, d_{2^{a}+1}-d_{j} \}.
\end{equation*}
So $s\in \mathcal{D} +(-d_{j})$, and $\mathcal{D} +(-d_{j})$ is the only line in $\mathcal{D}^{*}$ containing $s$, since otherwise there would be two lines in $\mathcal{D}^{*}$ containing both $s$ and 0. On the other hand, for every $d_{t} \in \mathcal{D}$, there is exactly one pair $(d_{i}, d_{j})$ in $\mathcal{D}$ for which $-d_{t} = d_{i}-d_{j}$, thus $-d_{t} \in \mathcal{D} -d_{j}$. So each line in $\mathcal{D}^{*}$ includes exactly one element of $\mathcal{O}$.\\
\end{proof}
\subsection{Construction of large cliques}
We know that $M_2(2k)$ is a vertex-transitive graph. We apply the following well-known bound on the size of a coclique in a vertex-transitive graph.
\begin{theorem}[Clique-Coclique Bound]\cite[p.26]{GMB} Let $X$ be a vertex-transitive graph. Then
\begin{equation*}
\alpha(X) \, \omega(X)\leq |V(X)|.
\end{equation*}
\\
If equality holds for a clique $C$ and a coclique $S$, then the vectors
\begin{equation*}
\chi_{C}-\frac{|C|}{v}\mathbf{1}, \quad \chi_{S}-\frac{|S|}{v}\mathbf{1}
\end{equation*}
are orthogonal, where $\chi_{C}$ and $\chi_{S}$ are the characteristic vectors of $C$ and $S$ respectively. In particular $\chi_{C}^{T}\chi_{S} = 1$.
\end{theorem}
\begin{theorem}\label{M}
Suppose that $X$ is a graph in a symmetric association scheme. If there is a clique in $X$ for which equality holds in the clique-coclique bound, then there exists a weighted adjacency matrix of $X$, for which the ratio bound holds with equality.
\end{theorem}
\begin{proof}
Let $\mathcal{A} = \{A_0, A_1, \dots, A_k\}$ be a symmetric association scheme on $v$ vertices and denote the row sum of $A_i$ by $v_i$. Let $X_{i}$ be the graph associated with $A_{i}$, for $i \in \{0,\dots, k\}$. Then $X$ is a graph such that $X = \bigcup_{i\in T} X_{i}$ for some $T \subset \{1,\dots, k\}$.
Let $C$ be a clique in $X$ for which the clique-coclique bound holds with equality, so $\alpha(X) = v /|C|$. Let $\chi_C$ be the characteristic vector of $C$. Then $\chi_C \chi_C^T$ is a positive semi-definite (psd) matrix. The projection of this matrix into the Bose-Mesner algebra is the matrix
\begin{equation*}
\widehat{M} = \sum_{i=0}^k \frac{\chi_C^T A_i \chi_C }{v v_i}A_i.
\end{equation*}
The projection of a psd matrix is a psd matrix, so $\widehat{M}$ is again a psd matrix (see~\cite[Lemma 3.2]{MR3990672}). Since $A_0=I$, we have $\frac{\chi_C^T A_0 \chi_C }{v v_0}A_0 = \frac{|C|}{v} I$.
Define $M = \widehat{M} - \frac{|C|}{v} I$. Since $\widehat{M}$ is psd, the minimal eigenvalue of $M$ is at least $ -\frac{|C|}{v}$. Further, since $C$ is a clique in $X$, any two vertices in $C$ will be related in some $A_i$ for $i \in T$; in particular $\chi_C^T A_j \chi_C =0$ for $j \not \in T$. Thus
\begin{equation*}
M = \sum_{i=1}^k \frac{\chi_C^T A_i \chi_C }{v v_i}A_i =
\sum_{i \in T} \frac{\chi_C^T A_i \chi_C }{v v_i}A_i
\end{equation*}
and $M$ is a weighted adjacency matrix of $X$.
The row sum of $\widehat{M}$ can be calculated as follows
\[
\widehat{M} {\bf 1} = \sum_{i=0}^k \frac{\chi_C^T A_i \chi_C }{v v_i}A_i {\bf 1}
= \frac{1}{v} \sum_{i=0}^k \chi_C^T A_i \chi_C {\bf 1}
= \frac{1}{v} \chi_C^T \left( \sum_{i=0}^k A_i \right) \chi_C {\bf 1}
= \frac{1}{v} \chi_C^T J \chi_C {\bf 1}
= \frac{|C|^2}{v} {\bf 1}.
\]
This implies the row sum of $M$ is
\begin{equation*}
d= \frac{|C|^2}{v} - \frac{|C|}{v}.
\end{equation*}
We also know the minimum eigenvalue $\tau$ is negative and at least $ -\frac{|C|}{v}$ and the maximum coclique has size $\frac{v}{|C|}$. The ratio bound, applied to $M$ gives
\begin{equation*}
\frac{v}{|C|} = \alpha(X) \leq \frac{v}{1-\frac{d}{\tau}} = \frac{v}{1-\frac{ \frac{|C|^2}{v} - \frac{|C|}{v}}{\tau}}.
\end{equation*}
Rearranging verifies that $\tau$ is less than or equal to $-\frac{|C|}{v}$. Thus $\tau = -\frac{|C|}{v}$ and equality holds in the ratio bound.
\end{proof}
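The final rearrangement can also be checked symbolically; the following short computation (ours, assuming SymPy is available) confirms that substituting $\tau = -|C|/v$ and $d = (|C|^{2}-|C|)/v$ into the ratio bound returns exactly $v/|C|$.
\begin{verbatim}
import sympy as sp

c, v = sp.symbols('c v', positive=True)      # c stands for |C|
d = (c**2 - c) / v                           # row sum of M
tau = -c / v                                 # candidate least eigenvalue
assert sp.simplify(v / (1 - d / tau) - v / c) == 0
print("ratio bound value equals v/|C| when tau = -|C|/v")
\end{verbatim}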
In the next theorem we show that equality in the clique-coclique bound holds for $M_2(2k)$ for infinitely many values of $k$.
\begin{theorem}\label{CliqueApproach}
If $2k=2^{a}+2$, where $a$ is a positive integer, then $\omega(M_{2}(2k)) = (2k-1)(2k-3)$.
\end{theorem}
\begin{proof}
We construct a maximum clique of size $(2k-1)(2k-3)$ for the graph $M_{2}(2k)$ where $2k=2^{a}+2$, for every positive integer $a$. This construction is described in the following four steps.
\begin{itemize}
\item[ {\bf Step 1.} ]Using (\ref{DS}) and (\ref{Lines}) we construct a Singer difference set $\mathcal{D}= \{d_{1}, d_{2}, \cdots, d_{2^{a}+1} \}$ with parameters $(2^{2a}+2^{a}+1, 2^{a}+1,1)$, and we construct the lines $\mathcal{L}$ of the corresponding finite projective plane. Let $\mathcal{O} = -D= \{-d_{1}, -d_{2}, \cdots, -d_{2^{a}+1} \}$ be an oval of size $2^{a}+1$ in the projective plane. Note that by the definition of an oval, $\mathcal{O}$ has at most 2 points in common with each line in $\mathcal{L}$.\\
\item[{\bf Step 2.}]Let $K_{2k}$ be the complete graph with its vertices indexed by the elements in $\mathcal{O} \cup \{0\}$. Pick the lines from $\mathcal{L}$ which have at least one element in common with the oval $\mathcal{O}$, call this set $\mathcal{L'}$. For lines in $\mathcal{L}'$ with 2 points in common with the oval $\mathcal{O}$, consider their common pair of points as an edge in $K_{2k}$. Next consider the lines in $\mathcal{L}'$ with only one point in common with $\mathcal{O}$, these exist from Lemma~\ref{LinesWithZero}. The point in common with the oval $\mathcal{O}$ and the zero element on that line form an edge in $K_{2k}$. Lemma~\ref{LinesWithZero} also indicates that for every $s\in \mathbb{Z}_{2^{2a}+2^{a}+1} \backslash \mathcal{O}$, there exists exactly one line in $\mathcal{L}$ containing both $s$ and zero. In addition, each line that contains 0 also contains exactly one element of $\mathcal{O}$.\\
\item[{\bf Step 3.}]Now consider a nonzero $s\in \mathbb{Z}_{2^{2a}+2^{a}+1} \backslash \mathcal{O}$. We select the lines in $\mathcal{L'}$ which include $s$. The edges corresponding to the selected lines for $s$ form a perfect matching: by the definition of a finite projective plane every pair of lines has exactly one point in common, and since we are choosing the lines in the subset $\mathcal{L'}$ which contain $s$, they cannot have any other element in common. Hence the corresponding edges we are defining for $s$ are disjoint. Also note that by the argument in Step 2, there is exactly one edge with zero as one of its vertices, and all other selected lines include two elements of $\mathcal{O}$ (the lines in $\mathcal{L}'$ with one element of $\mathcal{O}$ are exactly the lines in $\mathcal{L}$ that include 0). In addition, for every $-d_{i}\in \mathcal{O}$, there is exactly one line that contains both $s$ and $-d_{i}$, which means the edges corresponding to the selected lines for $s$ cover all the elements of the oval $\mathcal{O}$ precisely once.\\
\item[{\bf Step 4.}] By repeating Step 3 for all nonzero $s \in \mathbb{Z}_{2^{2a}+2^{a}+1} \backslash \mathcal{O}$, we produce a set of perfect matchings which we denote by $\mathcal{C}$. The size of $\mathcal{C}$ is
\begin{align*}
|\mathbb{Z}_{2^{2a}+2^{a}+1} \backslash (\mathcal{O}\cup\{0\})| &= \left( 2^{2a}+2^{a}+1 \right) -(2^{a}+2) \\
&= (2^{a}+1)(2^{a}-1) \\
&= (2k-1)(2k-3).
\end{align*}
\end{itemize}
The set $\mathcal{C}$ is actually a clique in $M_{2}(2k)$: in any clique in $M_{2}(2k)$, a pair of perfect matchings should have at most 1 edge in common. Suppose $PM_{s}$ and $PM_{t}$ are the two perfect matchings in $\mathcal{C}$ corresponding to $s$ and $t$ in $\mathbb{Z}_{2^{2a}+2^{a}+1}-\mathcal{O}$ and assume that they have 2 edges in common, say $e_{1}$ and $e_{2}$. This means that there are lines $\ell_{s_1}$ and $\ell_{t_1}$ containing $s$ and $t$, respectively, and both $\ell_{s_1}$ and $\ell_{t_1}$ include the vertices of $e_{1}$. By definition, every pair of lines has exactly one point in common so $\ell_{1}:=\ell_{s_1}=\ell_{t_1}$. Similarly, there is a line $\ell_{2}$ including the points on $e_{2}$, $s$ and $t$. So the lines $\ell_{1}$ and $\ell_{2}$ have more than one point in common, therefore they are the same, which we label as $\ell$. But then the line $\ell$ contains more than two elements from the oval $\mathcal{O}$, which contradicts the definition of an oval.\\
Finally we note that $\mathcal{C}$ is a maximum clique in $M_{2}(2k)$: a canonically $2$-intersecting set of perfect matchings is a coclique of size $(2k-5)!!$, so the clique-coclique bound gives $\omega(M_{2}(2k)) \leq (2k-1)!!/(2k-5)!! = (2k-1)(2k-3) = |\mathcal{C}|$; hence equality in the clique-coclique bound holds.
\end{proof}
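As a sanity check for the smallest case $a=2$, so $2k=6$: any two distinct perfect matchings of $K_{6}$ share at most one edge, so all 15 of them already form a clique of size $(2k-1)(2k-3)=15$ in $M_{2}(6)$. The sketch below (our own verification, independent of the construction above) confirms this.
\begin{verbatim}
from itertools import combinations

def perfect_matchings(vertices):
    """Yield each perfect matching (a set of edges) on `vertices`."""
    if not vertices:
        yield frozenset()
        return
    v, rest = vertices[0], vertices[1:]
    for i, w in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield m | {frozenset((v, w))}

pms = list(perfect_matchings(tuple(range(1, 7))))
assert len(pms) == 15
# any two distinct perfect matchings of K_6 share at most one edge, so the
# whole vertex set of M_2(6) is a clique of size (2k-1)(2k-3) = 15
assert all(len(p & q) <= 1 for p, q in combinations(pms, 2))
print("omega(M_2(6)) =", len(pms))
\end{verbatim}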
We have proved that for infinitely many values of $2k = 2^{a}+2$ we can build a maximum clique in $M_{2}(2k)$, hence we can construct the matrix $M$ as in Theorem~\ref{M} with equality in the ratio bound. The value $2^{a}+2$ grows quickly as $a$ increases, which makes it difficult to find the maximum clique and the entries of $M$ by computer. In the next approach we build the matrix $M$ without identifying maximum cliques.
\section{Second Approach }
\label{sec:Second Approach}
In this section we find a set of coefficients ${\bf a_{\lambda_{i}}}$ for the matrices $A_{\lambda_{i}}$ in the association scheme of perfect matchings so that the matrix $M = \sum_{i=1}^{m}{\bf a_{\lambda_{i}}} A_{\lambda_{i}}$ is a weighted adjacency matrix for the graph $M_{2}(2k)$ and the ratio bound holds with equality for the matrix $M$. To verify this, we need to determine the row sum and the least eigenvalue of the matrix $M$.\\
If we have the complete character table of the perfect matching association scheme, then we can easily find the row sum and the least eigenvalue of $M$. The matrices in an association scheme are simultaneously diagonalizable, therefore they have common eigenspaces. This means that the eigenvalue of a linear combination of the matrices $A_{\lambda_{i}}$ corresponding to an eigenspace is actually the same linear combination of the eigenvalues of matrices $A_{\lambda_{i}}$ corresponding to the eigenspace. But finding the complete character table of this scheme for $2k\geq 40$ is still an unsolved problem. In his 1994 paper, Muzychuk~\cite{Mu} studied the eigenvalues of the association scheme of the symmetric group $\Sym(2k)$. The calculations are quite complicated and Muzychuk only found the eigenvalues up to $2k = 10$. More recently in 2018, Srinivasan~\cite{Sri1, Sri2} presented a recursive algorithm to find the character tables up to $2k=40$.
In this work, we calculate several entries in the character table of the perfect matching association scheme for all values of $2k$. From these eigenvalues we will find an appropriate weighted adjacency matrix. To do this, we calculate the eigenvalues of some carefully chosen \textsl{quotient} graphs.
\subsection{Eigenvalues of perfect matching association scheme}\label{Mod of Char Table}
\begin{definition}\label{quotient}\cite[Section 2.2]{GMB}
Let $\pi = [\pi_{1},\pi_{2}, \ldots, \pi_{i}]$ be a set partition of the vertices of the graph $X$. This partition is \textsl{equitable} if the number of neighbours in $\pi_{\ell}$ of a vertex in $\pi_{k}$ depends only on $k$ and $\ell$, where $k,\ell\in \{1,\ldots, i\}$. If $\pi$ is an equitable partition of $X$, the \textsl{quotient} graph $X/\pi$ is a directed multi-graph with the parts of $\pi$ as its vertices, and if a vertex in $\pi_{\ell}$ has exactly $\nu$ neighbours in $\pi_{k}$, then $X/\pi$ has $\nu$ arcs from $\pi_{\ell}$ to $\pi_{k}$.
\end{definition}
Quotient graphs are usually represented as a matrix: the rows and columns are indexed by the parts of the equitable partition, and the
$(\pi_k, \pi_\ell)$-entry is the number of edges from a vertex in $\pi_k$ to $\pi_\ell$.
Consider an integer partition $\lambda = [\lambda_{1},\lambda_{2}, \cdots, \lambda_{i}]$ of $2k$. The orbit partition formed by the Young subgroup $\Sym(\lambda) = \Sym(\lambda_{1}) \times \Sym(\lambda_{2}) \times \cdots \times \Sym(\lambda_{i})$ acting on the set of all perfect matchings of $K_{2k}$ (vertices of $M_{2}(2k)$) is an equitable partition. Hence for every class $\mu$ in the perfect matching association scheme, the quotient graph of $X_{\mu}$, with respect to this orbit partition, is well-defined. We denote this quotient graph by $X_\mu/\pi(\lambda)$. The eigenvalues of $X_\mu/\pi(\lambda)$ are also eigenvalues of the matrix $A_{\mu}$~\cite[p.28]{GMB}. Using this fact, we will construct some quotient graphs for several classes in the perfect matching association scheme to build a portion of the character table in the next subsection.\\
Let $A_{\mu}$ be one of the matrices in the perfect matching association scheme. Let $\Sym(\lambda)$ be a Young subgroup and let $A_{\mu}/\pi(\lambda)$ denote the corresponding quotient graph. Any eigenvector $\nu'$ of the quotient graph can be \textsl{lifted} to form an eigenvector $\nu$ for $A_{\mu}$ (the $P$-entry of $\nu$ is equal to the entry of $\nu'$ corresponding to the part that contains $P$). The groups $\Sym(\lambda)$ and $\Sym(n)$ both act on the cosets of $\Sym(n)/\left( \Sym(2) \wr \Sym(k) \right)$, and thus also act on the vector $\nu$ by permuting the indices. Since the entries of $\nu$ are constant on the orbits of $\Sym(\lambda)$, the vector $\nu$ is unchanged by the action of $\Sym(\lambda)$. Define
\[
V = \spanof\{ \sigma \nu : \sigma \in \Sym(n) \}.
\]
In particular, $V$ is the $\Sym(n)$-module generated by the action of $\Sym(n)$ on $\nu$.
For two integer partitions $\mu =[\mu_1,\dots, \mu_\ell]$ and $\lambda=[\lambda_1,\dots, \lambda_k]$ of $n$, we say that $\mu \geq \lambda$ in the \textsl{dominance ordering} if $\sum_{j=1}^{i}\mu_j \geq \sum_{j=1}^{i}\lambda_j$ for all $i$. We will use $\phi_\lambda$ to denote the character of $\Sym(n)$ associated to the partition $\lambda$.
The decomposition of the representation of $\Sym(n)$ induced by the trivial representation on a Young subgroup is well-known.
\begin{theorem}\label{Ind.Decomp.1}\cite[Chapter 12]{GMB}
If $\lambda \vdash n$, then
\[
ind_{\Sym(n)}(1_{\Sym(\lambda)}) = \phi_{\lambda}+\sum_{\mu>\lambda}^{}K_{\mu \lambda}\phi_{\mu},
\]
where $K_{\mu \lambda}$ is the \textsl{Kostka number} (see \cite[Section 12.5]{GMB} for more information).
\end{theorem}
The decomposition of the representation of $\Sym(n)$ induced by the trivial representation on $\Sym(2)\wr \Sym(k)$ is also well-known.
\begin{lemma}\label{Ind.Decomp.2}\cite[Chapter 12]{GMB}
For integers $n$ and $k$ with $n\geq 2k$,
\[
ind_{\Sym(n)}(1_{\Sym(2)\wr \Sym(k)}) = \sum_{\lambda \vdash k} \phi_{2\lambda}.
\]
\end{lemma}
Our plan is to use the Young subgroup to form the quotient graphs. The eigenvalues of the quotient graphs belong to the modules that are in both decompositions in Theorem~\ref{Ind.Decomp.1} and Lemma~\ref{Ind.Decomp.2}.
\begin{theorem}\label{module}
Assume that $\Sym(n)$ acts on the set $\Omega$, and that $A$ is the adjacency matrix for an orbital of the action of $\Sym(n)$ on $\Omega$.
Let $\lambda \vdash n$ and $\pi$ be the orbit partition from the action of $\Sym(\lambda)$ on $\Omega$. If $\eta$ is an eigenvalue of the quotient graph $A/\pi$, then $\eta$ is an eigenvalue of $A$. Moreover, $\eta$ belongs to some $\Sym(n)$-module represented by the partition $\mu$ where $\mu \geq \lambda$ in the dominance ordering.
\end{theorem}
\begin{proof}
Let $\pi$ be the orbit partition of $\Sym(\lambda)$ acting on $\Omega$. Assume that $\nu'$ is an eigenvector of the quotient graph $A/\pi$ with eigenvalue $\eta$. The vector $\nu'$ can be lifted to an $\eta$-eigenvector of $A$, which we denote by $\nu$.\\
The groups $\Sym(n)$ and $\Sym(\lambda)$ both act on $\Omega$ and thus also act on the vector $\nu$ by permuting the entries. Since the entries of $\nu$ are constant on the orbits of $\Sym(\lambda)$, the vector $\nu$ is unchanged by the action of $\Sym(\lambda)$. Define a vector space $W$ to be the span of the vector $\nu$, so $\Sym(\lambda)$ fixes every element in $W$.\\
Set $V = \sum_{\sigma \in \Sym(n)} \sigma W$. Then $V$ is a homomorphic image of the module of the induced representation $\mathrm{ind}_{\Sym(n)}(1_{\Sym(\lambda)}) = \phi_{\lambda}+\sum_{\mu>\lambda}K_{\mu \lambda}\phi_{\mu}$, so every irreducible constituent of $V$ is some $\phi_{\mu}$ with $\mu \geq \lambda$. Clearly the vector $\nu \in V$, and since $\nu$ is an $\eta$-eigenvector there is a $\mu \geq \lambda$ so that the $\mu$-module is a subspace of the $\eta$-eigenspace.
\end{proof}
\begin{example}\label{ex:ep}
Consider the matrix $A_{[2k-4,4]}$ in the perfect matching association scheme. Note that in the graph corresponding to this matrix, two perfect matchings are adjacent if their union forms a $4$-cycle and a $(2k-4)$-cycle. The following matrix is the quotient graph corresponding to the group $\Sym(2k-2)\times \Sym(2)$. Denote this quotient graph by $A(X_{[2k-4,4]})/\pi([2k-2,2])$ (in this notation the first integer partition is the class in the perfect matching association scheme, the second partition is the Young subgroup used to form the partition of the vertices in the graph).
\[
\renewcommand*{\arraystretch}{1.5}
A(X_{[2k-4,4]})/\pi([2k-2,2]) =\left[
\begin{array}{c|c}
\bf{0} & k(k-1)(2k-6)!! \\ \hline
\frac{1}{2}k(2k-6)!! & \frac{1}{2}k(2k-3)(2k-6)!!
\end{array}\right].
\]
\vspace{0.5cm}
For the matrix $A(X_{[2k-4,4]})/\pi([2k-2,2])$ the all-ones vector $\bf{1}$ is an eigenvector corresponding to the largest eigenvalue,
$k(k-1)(2k-6)!!$. This eigenvalue is actually the degree of $A_{[2k-4,4]}$, and by Theorem~\ref{module}, this eigenvalue corresponds to the $[2k]$-module in the character table. It is well-known that the trace of a matrix is equal to the sum of its eigenvalues, so by subtracting the degree from the trace, we find the second eigenvalue of this matrix which is $-\frac{1}{2}k(2k-6)!!$. Using Theorem~\ref{module}, and noting that the degree eigenvalue belongs to the $[2k]$-module, it is easy to deduce that the second eigenvalue belongs to the $[2k-2,2]$-module.\\
The quotient graph of $A_{[2k-4,4]}$ with the orbit partition formed by the action of the group $\Sym(2k-4)\times \Sym(4)$ on the perfect matchings is the following
\[
\renewcommand*{\arraystretch}{1.5}
\left[
\begin{array}{c|c|c}
2(2k-6)!! & (2k-4)!! & (k-1)(k-2)(2k-6)!! \\ \hline
\frac{1}{2}(2k-6)!! & \frac{1}{2}(5k-2)(2k-6)!! & \frac{1}{2}(2k^{2}-7k+1)(2k-6)!!\\ \hline
\frac{3}{2}(k-1)(2k-8)!! & 3(2k^{2}-7k+1)(2k-8)!! & (2k^{3}-14k^{2}+\frac{51}{2}k-\frac{3}{2})(2k-8)!!\\
\end{array}\right].
\vspace*{0.4cm}
\]
Denote this matrix by $A(X_{[2k-4,4]})/\pi([2k-4,4])$. This matrix yields the eigenvalues of $A_{[2k-4,4]}$ belonging to the modules $[2k]$, $[2k-2,2]$, and $[2k-4,4]$. From the matrix $A(X_{[2k-4,4]})/\pi([2k-2,2])$ we already have two eigenvalues of the matrix $A(X_{[2k-4,4]})/ \pi([2k-4,4])$, those corresponding to the modules $[2k]$ and $[2k-2,2]$. Hence by subtracting these two eigenvalues from the trace, we find a third eigenvalue of $A_{[2k-4,4]}$, the one corresponding to the module $[2k-4,4]$, and it is equal to $\frac{1}{2}(7k-15)(2k-8)!!$.\\
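In detail, writing every diagonal entry in terms of $(2k-8)!!$, the trace of $A(X_{[2k-4,4]})/\pi([2k-4,4])$ equals $\big(2k^{3}-9k^{2}+\tfrac{25}{2}k-\tfrac{15}{2}\big)(2k-8)!!$, and subtracting the two eigenvalues found above gives
\[
\big(2k^{3}-9k^{2}+\tfrac{25}{2}k-\tfrac{15}{2}\big)(2k-8)!! - k(k-1)(2k-6)!! + \frac{1}{2}k(2k-6)!! = \frac{1}{2}(7k-15)(2k-8)!!.
\]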
\end{example}
By finding several quotient graphs for the classes $[2k]$, $[2k-2,2]$, $[2k-4,4]$, and $[2k-6,6]$ in the perfect matching association scheme, we construct a portion of the character table for the association scheme on the perfect matchings for $K_{2k}$ for any $k\geq 6$. By using Theorem~\ref{module} and the dominance ordering recursively to define the quotient graphs of each class in this association scheme, we can determine the eigenvalues that belong to some of the modules in the character table. These results are recorded in Table~\ref{CharTable}.
\FloatBarrier
\begin{sidewaystable}
\begin{centering}
\begin{tabular}[h]{N| m{1.4cm} ? m{1.8cm} | m{3.4cm} | m{4.4cm} | c | m{3.9cm}| m{0.4cm} |}
\hline
\rule{0pt}{20pt}& ~ & \textcolor{Red}{$[2k]$} & \textcolor{Red}{$[2k-2,2]$} & \textcolor{Red}{$[2k-4,4]$} & $[2k-4,2,2]$ & $[2k-6,6]$ & $\cdots$ \\[4ex]
\Xhline{5\arrayrulewidth}
\rule{0pt}{20pt}& $\mathbf{\chi}_{[2k]}$ & $\frac{(2k)!!}{2k}$ & $\frac{(2k)!!}{2(2k-2)}$ & $\frac{(2k)!!}{4(2k-4)}$ & $\frac{(2k)!!}{8(2k-4)}$ & $\frac{(2k)!!}{6(2k-6)}$ & $\cdots$ \\[4ex]
\hline
\rule{0pt}{20pt}& $\chi_{[2k-2,2]}$ & $-(2k-4)!!$ & $\frac{(2k-4)!!}{2}$ & $\frac{-2k(2k-6)!!}{4}$ & \textcolor{Gray}{$?$} & $\frac{-2k(2k-4)!!}{6(2k-6)}$ & $\cdots$ \\[4ex]
\rule{0pt}{20pt}& $\chi_{[2k-4,4]}$ & $-(2k-6)!!$ & $-(5k-12)(2k-8)!!$ & $\frac{(7k-15)(2k-8)!!}{2}$ & \textcolor{Gray}{$?$} & $\frac{-2k(2k-6)!!}{6(2k-6)}$ & $\cdots$ \\[4ex]
\rule{0pt}{20pt}& $\chi_{[2k-4,2,2]}$ & $2(2k-6)!!$ & $-(2k-6)!!$ & $\frac{-(2k-6)!!}{2}$& \textcolor{Gray}{$?$} & $\frac{4k(2k-6)!!}{6(2k-6)}$ & $\cdots$ \\[4ex]
\rule{0pt}{20pt}& $\chi_{[2k-6,6]}$ & $-3(2k-8)!!$ & $-3(3k-10)(2k-10)!!$ & $-3(9k^{2}-71k+140)(2k-12)!!$ & \textcolor{Gray}{$?$} & $6(5k^{2}-38k+70)(2k-12)!!$ & $\cdots$ \\[4ex]
\rule{0pt}{20pt}& \vdots & \vdots & \vdots & \vdots & \textcolor{Gray}{$\vdots$} & \vdots & ~ \\[4ex]
\hline
\end{tabular}
\caption{Character table of the perfect matching association scheme}
\label{CharTable}
\end{centering}
\end{sidewaystable}
\FloatBarrier
We prove another result beyond Theorem~\ref{module} that will be used later in this work.
For a set $S \subset \{1, \dots ,2k \}$ of size four, consider the set of all perfect matchings that include two edges $e$ and $e'$ such that $e$ and $e'$ form a set partition of $S$.
This set is the first part in the equitable partition formed by the action of $\Sym([2k-4,4])$ on the perfect matchings given in Example~\ref{ex:ep}.
For example, if $S = \{1,2,3,4\}$ the characteristic vector of this set is
\[
w_{\{1,2,3,4\}} = \nu_{\{1,2\}, \{3,4\} } +\nu_{\{1,3\}, \{2,4\} } + \nu_{\{1,4\}, \{2,3\} }.
\]
In the next theorem we use $w_S$ to denote the characteristic vector of the set of perfect matchings in which the four elements of an arbitrary $4$-set $S$ are covered by exactly two edges.
\begin{theorem}\label{thm:4sets}
The set $\{ w_S \, |\, S \subset \{1,2,\dots,2k\} \textrm{ with } |S|=4\}$ is a spanning set for the $\Sym(2k)$ module $\spanof \{ [2k], [2k-2,2], [2k-4,4]\}$.
\end{theorem}
\begin{proof}
Consider the quotient matrix $A(X_{[2k-4,4]})/\pi([2k-4,4])$ formed by the action of $\Sym([2k-4,4])$ on the perfect matchings (this matrix is given in Example~\ref{ex:ep}). The first part in the equitable partition is the set of all perfect matchings for which a fixed set $S$ of size four is covered by exactly two edges (that is, $S$ is the 4-set stabilized by $\Sym([2k-4,4])$).
The matrix $A(X_{[2k-4,4]})/\pi([2k-4,4])$ is a $3 \times 3$-matrix that is diagonalizable. This implies the vector $(1,0,0)$ can be expressed as a linear combination of the eigenvectors of the quotient matrix. Further, each of the eigenvectors of the quotient matrix can be lifted to be an eigenvector for the adjacency matrix. By Theorem~\ref{module}, these lifted vectors are the eigenvectors belonging to the $[2k]$, $[2k-2,2]$ and $[2k-4,4]$ modules of $\Sym(2k)$.
Using the same linear combination to produce the vector $(1,0,0)$ from the eigenvectors of the quotient matrix, $w_S$ is a linear combination of eigenvectors for the modules $[2k]$, $[2k-2,2]$ and $[2k-4,4]$ (indeed $w_S$ is the vector formed by lifting $(1,0,0)$). So we conclude that for any subset $S$ of size four that $w_S$ is in the span of the $[2k]$, $[2k-2,2]$ and $[2k-4,4]$ modules.
Finally, we show that the dimension of the span of $w_S$, where $S$ is taken over all 4-subsets of $\{1,2,\dots,2k\}$, is equal to the dimension of the span of the $[2k]$, $[2k-2,2]$ and $[2k-4,4]$ modules. Define $N$ to be the matrix with the rows indexed by the perfect matchings of $K_{2k}$ and the columns by the 4-subsets $S \subset \{1,2,\dots,2k\}$, with the column indexed by $S$ equal to the vector $w_S$. The entries of $N^TN$ depend only on the size of the intersection of the 4-subsets, so it can be written as a linear combination of the matrices in the Johnson scheme $J(2k,4)$. In particular
\[
N^TN = 3(2k-5)!!\, I + (2k-7)!!\, J(2k,4,2) + 9\,(2k-9)!!\, J(2k,4,0).
\]
The eigenvalues of the Johnson scheme are well-known and can be used to calculate the eigenvalues of $N^TN$. It is straightforward to see that $0$ is an eigenvalue with multiplicity $2k-1 + \binom{2k}{3} - \binom{2k}{2}$. This implies that the rank of $N^TN$, and hence the rank of $N$, equals the sum of the dimensions of the $[2k]$, $[2k-2,2]$ and $[2k-4,4]$ modules. Thus the set $\{ w_S \, | \, S \subset \{1,\dots ,2k\} \textrm{ with } |S| =4\}$ is a spanning set.
\end{proof}
\subsection{The degrees of the irreducible modules of $\Sym(n)$}
In this subsection we review some results on the dimensions of the irreducible modules of $\Sym(2k)$. Later we use these results to prove Theorem~\ref{LeastEvalTrick}.\\
Let $\lambda = [\lambda_{1}, \lambda_{2}, \cdots, \lambda_{t}]$ be an integer partition of $2k$; the dimension of the $\lambda$-module will be denoted by $m(\lambda)$. The \textsl{dual partition}
$\lambda^{*}$ of the partition $\lambda$ is the partition whose Young diagram is the reflection of the Young diagram of $\lambda$. The degrees of a partition and its dual are the same, that is, $m(\lambda) = m(\lambda^{*})$. A partition $\lambda$ is called \textsl{primary} if $\lambda \geq \lambda^{*}$ in the dominance ordering (see~\cite{Ra} for more details).
\begin{theorem}\cite[p.151]{Ra}\label{RaTheorem}
Let $\lambda = [ \lambda_{1},\lambda_{2},\cdots, \lambda_{t}]$ be an integer partition of $2k$ in which $\lambda_{1} \geq k$. Then,
\begin{equation*}
m([\lambda_{1}, 2k-\lambda_{1}])\leq m(\lambda).
\end{equation*}
\end{theorem}
The next result follows from a straightforward application of the hook length formula.
\begin{lemma}\cite[Section 12.6]{GMB}\label{Hook}
Let $n \geq 2k$, then
\begin{equation*}
m([n-k,k])\ = \binom{n}{k}-\binom{n}{k-1}.
\end{equation*}
\end{lemma}
The next result is a general bound on the degree of a representation in which the first part of the corresponding integer partition is considered small.
\begin{theorem}\cite[p.163]{Ra}\label{F(n)Theorem}
Let $\lambda$ be a primary partition of $n$ for which the first part $\lambda_{1}< \lfloor \frac{n}{2} \rfloor$. Then $m(\lambda)\geq F(n)$, where
\[
F(n) =
\begin{cases}
\frac{n\cdot F(n-1)}{m+2} & \textrm{ if $n=2m+1$ is odd},\\
2\cdot F(n-1) & \textrm{ if $n$ is even},
\end{cases}
\]
with $F(0) = 2$. In particular, for $n\geq 8$,
\begin{equation}\label{F(n)}
\frac{3}{2}\cdot F(n-1)\leq F(n) \leq 2\cdot F(n-1).
\end{equation}
\end{theorem}
\subsection{Least eigenvalue of weighted adjacency matrix}
In this subsection, our goal is to show that the set of perfect matchings with two fixed edges is a maximum coclique in $M_2(2k)$. To address this, we determine an appropriate set of coefficients $a_{2\lambda}$ so that
\begin{align}
M = \sum_{\lambda \vdash k} a_{2\lambda} A_{2\lambda}
\end{align}
is a weighted adjacency matrix of $M_2(2k)$ with row sum $(2k-1)(2k-3)-1$ and the least eigenvalue $-1$. This proves that the ratio bound
\[
\alpha(X) \leq \frac{|V(X)|}{1-\frac{d}{\tau}} = \frac{(2k-1)!!}{1-\frac{(2k-1)(2k-3)-1}{-1}} = (2k-5)!!
\]
holds with equality for $M_2(2k)$. To be a weighted adjacency matrix of $M_2(2k)$, we need that $a_{2\lambda} =0$ whenever $\lambda$ has 2 or more ones. Further, the eigenvalue of $M$ corresponding to the $\mu$-module is $\xi^\mu = \sum_{\lambda \vdash k} a_{2\lambda} \xi_{2\lambda}^\mu$, where $\xi_{2\lambda}^\mu$ is the eigenvalue of $A_{2\lambda}$ belonging to the $\mu$-module.
\begin{theorem}\label{SmallValues}
For $3\leq k \leq 9$, there exists a weighted adjacency matrix of the graph $M_{2}(2k)$ for which the degree and the least eigenvalue are $(2k-1)(2k-3)-1$ and $-1$, respectively.
\end{theorem}
\begin{proof}
For $k=3,4,5$, define the matrices $M_{6} = A_{[6]}+A_{[4,2]}$, $M_{8} = \frac{1}{4}A_{[8]}+\frac{1}{2}A_{[6,2]}+\frac{1}{2}A_{[4,4]}$, and $M_{10} = \frac{1}{12}A_{[10]}+\frac{1}{12}A_{[8,2]}+\frac{1}{6}A_{[4,4,2]}$. The matrices $M_{6}$, $M_{8}$, and $M_{10}$ are the desired weighted adjacency matrices for the graphs $M_{2}(6)$, $M_{2}(8)$, and $M_{2}(10)$, respectively.
\FloatBarrier
\begin{center}
\begin{tabular}[h]{N | m{1.4cm} ? c | c | c | c | c ? c |}
\hline
\rule{0pt}{20pt}& ~ & \textcolor{Red}{$A_{[8]}$} & \textcolor{Red}{$A_{[6,2]}$} & \textcolor{Red}{$A_{[4,4]}$} & $A_{[4,2,2]}$ & $A_{[2,2,2,2]}$ & \textcolor{Green}{$M_{8}$} \\
\Xhline{5\arrayrulewidth}
\rule{0pt}{20pt}& $\mathbf{\chi}_{[8]}$ & $48$ & $32$ & $12$ & $12$ & $1$ & $\textcolor{Green}{34}$ \\
\hline
\rule{0pt}{20pt}& $\mathbf{\chi}_{[6,2]}$ & $-8$ & $4$ & $-2$ & $5$ & $1$ & $\textcolor{Green}{-1}$ \\
\rule{0pt}{20pt}& $\mathbf{\chi}_{[4,4]}$ & $-2$ & $-8$ & $7$ & $2$ & $1$ & $\textcolor{Green}{-1}$ \\
\rule{0pt}{20pt}& $\mathbf{\chi}_{[4,2,2]}$ & $4$ & $-2$ & $-2$ & $-1$ & $1$ & $\textcolor{Green}{-1}$ \\
\rule{0pt}{20pt}& $\mathbf{\chi}_{[2,2,2,2]}$ & $-6$ & $8$ & $3$ & $-6$ & $1$ & $\textcolor{Green}{4}$ \\
\hline
\end{tabular}
\vspace*{0.2cm}
\captionof{table}{Character table for $2k=8$}
\label{CharTableK=4}
\end{center}
\FloatBarrier
As we see in Table~\ref{CharTableK=4}, the row sum and the least eigenvalue of $M_{8}$ are $(2k-1)(2k-3)-1$ and $-1$. Similarly, using the character tables for $k=3$ and $k=5$~\cite{Mu, Sri1}, we find the eigenvalues of the matrices $M_{6}$ and $M_{10}$ and verify that the ratio bound holds with equality.\\
For $k=6$, Theorem~\ref{CliqueApproach} proves that equality holds in the ratio bound. For $7\leq k \leq 9$, we have the complete character table for the perfect matching association scheme. So we can express the eigenvalues of $M = \sum_{\lambda \vdash k} a_{2\lambda} A_{2\lambda}$ as a system of linear equations. The objective is to maximize the value of the greatest eigenvalue (this is the row sum, so the eigenvalue belonging to the $[2k]$ module) while fixing the eigenvalues corresponding to the modules $[2k-2,2]$, $[2k-4,4]$, and $[2k-4,2,2]$ to be $-1$, and having all other eigenvalues strictly greater than $-1$. The Gurobi Optimizer~\cite{gurobi} is then used to find solutions for these system of inequalities.
In this way we determined the desired weighted adjacency matrices for $M_{2}(14)$, $M_{2}(16)$, and $M_{2}(18)$:
\[
\begin{split}
M_{14} &= \frac{1}{640}A_{[14]}+\frac{1}{80}A_{[6,6,2]}+\frac{1}{60}A_{[4,4,4,2]}, \\
M_{16} &= \frac{1}{3840}A_{[14,2]}+\frac{1}{2048}A_{[10,6]}+\frac{1}{120}A_{[4,4,4,4]}, \\
M_{18} &= \frac{1}{80640}A_{[18]}+\frac{1}{13440}A_{[8,8,2]}+\frac{1}{4480}A_{[6,6,4,2]}.
\end{split}
\]
\end{proof}
To find the set of coefficients for $k\geq 10$, we consider linear combinations of the form
\[
M = \mathbf{a_1} A_{[2k]} + \mathbf{a_2} A_{[2k-2,2]} + \mathbf{a_3} A_{[2k-4,4]}.
\]
To find the values of $\mathbf{a_1}, \mathbf{a_2}$ and $\mathbf{a_3}$ for $k\geq 10$, we use the eigenvalues in the partial character table in Table~\ref{CharTable} to produce a corresponding linear
system. For this system, there is one equation for each of the eigenvalues that correspond to the irreducible modules $[2k-2,2]$, $[2k-4,4]$, and $[2k-4,2,2]$, which are equated to $-1$. The rationale for choosing these modules is that they, along with $[2k]$, are the modules that are in both the decomposition of $\mathrm{ind}_{\Sym(n)}(1_{\Sym([2k-4,2,2])})$ and $\mathrm{ind}_{\Sym(n)}(1_{\Sym(2) \wr \Sym(k)})$. Observe $\Sym([2k-4,2,2])$ is the group that stabilizes the set of all perfect matchings which include a fixed pair of edges.
Using the results in Subsection~\ref{Mod of Char Table}, this linear system becomes:
\newcolumntype{C}{>{{}}c<{{}}}
\[
\setlength\arraycolsep{0pt}
\renewcommand\arraystretch{1.25}
\begin{array}{*{3}{rC}l}
-(2k-4)!!\mathbf{a_1} & + & (k-2)(2k-6)!!\mathbf{a_2} & - & k(k-3)(2k-8)!!\mathbf{a_3} & = & -1, \\
-(2k-6)!!\mathbf{a_1} & - & (5k-12)(2k-8)!!\mathbf{a_2} & + & \frac{1}{2}(7k-15)(2k-8)!!\mathbf{a_3} & = & -1, \\
2(2k-6)!!\mathbf{a_1} & - & (2k-6)!!\mathbf{a_2} & - & \frac{1}{2}(2k-6)!!\mathbf{a_3} & = & -1.
\end{array}
\]
Solving this system, we obtain the coefficients $\mathbf{a_1} = \frac{1}{4(2k-6)!!}$, and $\mathbf{a_2} = \mathbf{a_3} = \frac{1}{(2k-6)!!}$. Note that for $k>4$, the determinant of the coefficient matrix corresponding to the aforementioned linear system is nonzero, so the values $\mathbf{a_1}$, $\mathbf{a_2}$, and $\mathbf{a_3}$ are unique. This can be easily checked by using any basic mathematical software.
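For instance, substituting these values into the third equation of the system gives
\[
2(2k-6)!!\cdot\frac{1}{4(2k-6)!!} - (2k-6)!!\cdot\frac{1}{(2k-6)!!} - \frac{1}{2}(2k-6)!!\cdot\frac{1}{(2k-6)!!} = \frac{1}{2}-1-\frac{1}{2} = -1,
\]
and the first two equations can be verified in the same way.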
\begin{theorem}\label{LeastEvalTrick}
For $k\geq 10$, let $M = \mathbf{a_1}A_{[2k]}+\mathbf{a_2}A_{[2k-2,2]}+\mathbf{a_3}A_{[2k-4,4]}$
where $\mathbf{a_1} = \frac{1}{4(2k-6)!!}$, and $\mathbf{a_2} = \mathbf{a_3} = \frac{1}{(2k-6)!!}$. Then the row sum and the least eigenvalue of the matrix $M$ are $(2k-1)(2k-3)-1$ and $-1$, respectively. Moreover, the only modules with eigenvalue equal to -1 are $[2k-2,2]$, $[2k-4,4]$ and $[2k-4,2,2]$.
\end{theorem}
\begin{proof}
For $10\leq k\leq 14$, similar to the proof of Theorem~\ref{SmallValues}, by utilizing the complete character tables of the perfect matching association scheme~\cite{Mu, Sri1} we can find all the eigenvalues of the matrix $M$, and we see that the ratio bound holds with equality.\\
For the remainder of the proof assume that $k\geq 15$. Reviewing the linear system of equations, it follows that $-1$ is an eigenvalue of $M$ (corresponding to the modules $[2k-2,2]$, $[2k-4,4]$, and $[2k-4,2,2]$). Denote the row sum of $M$ by $d_{M}$; this is the corresponding linear combination of the degrees $d_{[2k]}$, $d_{[2k-2,2]}$, and $d_{[2k-4,4]}$ of the matrices $A_{[2k]}$, $A_{[2k-2,2]}$, and $A_{[2k-4,4]}$. For the coefficients $\mathbf{a_1}$, $\mathbf{a_2}$, and $\mathbf{a_3}$, we calculate
\begin{align*}
d_{M} &= \mathbf{a_1}d_{[2k]}+\mathbf{a_2}d_{[2k-2,2]}+\mathbf{a_3}d_{[2k-4,4]}\\
&= \frac{(2k-2)!!}{4(2k-6)!!}+\frac{k(2k-4)!!}{(2k-6)!!}+\frac{k(k-1)(2k-6)!!}{(2k-6)!!} \\
&= (k-1)(k-2)+2k(k-2)+k(k-1) \\
&= (2k-1)(2k-3)-1.
\end{align*}
Finally, we need to prove that all other eigenvalues of the matrix $M$ are strictly greater than $-1$. Let $\{d_{M}^{(1)}, -1^{(m_{1})}, -1^{(m_{2})}, -1^{(m_{3})},\theta_{4}^{(m_{4})}, \cdots, \theta_{k}^{(m_{k})} \}$ be the spectrum of the matrix $M$, where the values $m_{i}$ denote the multiplicities of the eigenvalues. By Lemma~\ref{Hook} and the hook length formula~\cite[Section 12.6]{GMB} we have
\begin{align*}
m_{1} =& \frac{2k(2k-3)}{2},\\
m_{2} =& \frac{2k(2k-1)(2k-2)(2k-7)}{4!} ,\\
m_{3} =& \frac{2k(2k-1)(2k-4)(2k-5)}{12}.
\end{align*}
Since $M = \mathbf{a_1}A_{[2k]}+\mathbf{a_2}A_{[2k-2,2]}+\mathbf{a_3}A_{[2k-4,4]}$ and the matrices $A_{2\lambda}$ have disjoint supports, each entry on the diagonal of the matrix $M^{2}$ is
\begin{align*}
M^{2}[i,i] &= \mathbf{a_1}^{2}d_{[2k]}+\mathbf{a_2}^{2}d_{[2k-2,2]}+\mathbf{a_3}^{2}d_{[2k-4,4]}\\
&= \frac{(2k-2)!!}{\left( 4(2k-6)!!\right)^{2}}+ \frac{ k(2k-4)!!}{\left((2k-6)!!\right)^{2}}+\frac{ k(k-1)(2k-6)!!}{\left( (2k-6)!!\right)^{2}}\\
&= \frac{(k-1)(k-2)+8k(k-2)+4k(k-1)}{4(2k-6)!!}\\
&= \frac{13k^2-23k+2}{4(2k-6)!!}.
\end{align*}
It is well-known that the trace of any matrix is equal to the sum of its eigenvalues. Applying this to $M^{2}$, whose eigenvalues are the squares of the eigenvalues of $M$, we obtain
\begin{align}\label{TraceTrick}
\left( \frac{13k^2-23k+2}{4(2k-6)!!}\right) (2k-1)!! = d_{M}^{2}+m_{1}+m_{2}+m_{3}+\sum_{i=4}^{k}m_{i}\theta_{i}^{2}.
\end{align}
From the equation above, and noting that $d_{M}^{2}+m_{1}+m_{2}+m_{3} = 18k^{4}-74k^{3}+\frac{191}{2}k^{2}-\frac{79}{2}k+4$, we obtain the following inequality for any eigenvalue $\theta_i$ of $M$ with $i\geq 4$,
\begin{equation}\label{Lambda}
| \theta_{i} |\leq \sqrt{\frac{(2k-1)!!}{(2k-6)!!}\left( \frac{13k^2-23k+2}{4}\right)-\left( 18k^{4}-74k^{3}+\frac{191}{2}k^{2}-\frac{79}{2}k+4 \right)} \sqrt{\frac{1}{m_{i}}}.
\end{equation}
To finish the proof, it is sufficient to prove that the term on the right-hand side of the above inequality is strictly less than $1$, or equivalently
\begin{equation}\label{mi}
m_{i} > \frac{(2k-1)!!}{(2k-6)!!}\left( \frac{13k^2-23k+2}{4}\right)-\left( 18k^{4}-74k^{3}+\frac{191}{2}k^{2}-\frac{79}{2}k+4 \right).
\end{equation}
Let $\lambda_{i} = [\lambda_{i_1}, \lambda_{i_2},\cdots, \lambda_{i_\ell}]$, with $\lambda_{i_1}\geq \lambda_{i_2} \geq \cdots \geq \lambda_{i_\ell}$, be the partition indexing the module to which the eigenvalue $\theta_i$ belongs. Then there are three cases:
\begin{itemize}
\item[{\bf Case 1.}] Assume that $\lambda_{i_1}\geq k$.
First, consider the module $[2k-6,6]$. Using the linear combination of the eigenvalues of the matrices $A_{[2k]}$, $A_{[2k-2,2]}$, and $A_{[2k-4,4]}$ corresponding to the module $[2k-6,6]$ it follows that the eigenvalue of $M$ belonging to $[2k-6,6]$ is
\[
\frac{-6(2k-12)!!}{(2k-6)!!} \left( 8k^2-65k+130\right).
\]
This eigenvalue is greater than $-1$ for $k\geq 15$ (this can be checked with mathematical software).
Second, consider the modules $[2k-6,4,2]$ and $[2k-6,2,2,2]$. Using the hook length formula~\cite[Section 12.6]{GMB}, we calculate the dimensions $m_{[2k-6,4,2]}$ and $m_{[2k-6,2,2,2]}$. If, on the right-hand side of (\ref{mi}), we replace the term $(2k-6)!!$ with $(2k-7)!!$, then the inequality holds for $m_{[2k-6,4,2]}$ with $k\geq 37$, and for $m_{[2k-6,2,2,2]}$ with $k\geq 63$. Using Maple for the values of $m_{[2k-6,4,2]}$ with $15\leq k \leq 36$, and of $m_{[2k-6,2,2,2]}$ with $15\leq k \leq 62$, we observe that (\ref{mi}) holds for these two modules.\\
Next consider the module $[2k-8,8]$. By Lemma~\ref{Hook} the multiplicity of the corresponding eigenvalue is
\begin{equation*}
\frac{(2k)(2k-1)(2k-2)(2k-3)(2k-4)(2k-5)(2k-6)(2k-15)}{8!}.
\end{equation*}
Thus, if on the right-hand side of (\ref{mi}) we replace the term $(2k-6)!!$ with $(2k-7)!!$, then the inequality holds for all $k\geq 20$. Using Maple for the values $15\leq k\leq 19$, we have that (\ref{mi}) holds for this module.\\
Finally, using Theorem~\ref{RaTheorem} and Lemma~\ref{Hook}, for all modules with $\lambda_{i_1} \geq k$ and $\lambda_{i_1}\leq 2k-8$, the multiplicity $m_{i}$ is greater than the multiplicity for the module $[2k-8,8]$. Thus (\ref{mi}) holds for all remaining modules with $\lambda_{i_1}\geq k$.
\item[{\bf Case 2.}] Assume that $\lambda_{i_1} < k$ and $\lambda_{i}$ is primary.
Using Theorem~\ref{F(n)Theorem} for $2k\geq 8$, we have
\begin{equation}\label{UpBound}
m(\lambda)\geq F(2k) = 2F(2k-1) \geq 2 \cdot \frac{3}{2} \cdot F(2k-2)\geq \cdots \geq 2\cdot 3^{k}.
\end{equation}
Replacing the term $(2k-6)!!$ with $(2k-7)!!$ in (\ref{Lambda}), for $k\geq 15$ we have
\begin{equation*}
\theta_{i}^{2} \leq \frac{104k^{5}-724k^{4}+1738k^{3}-1707k^{2}+595k-46}{4m_{i}} \leq \frac{104k^{5}}{8(3^{k})}\leq 1.
\end{equation*}
In fact, for $k\geq 1$, the term $(-724k^{4}+1738k^{3}-1707k^{2}+595k-46)$ is always negative. Noting this along with (\ref{UpBound}) proves the second inequality above. The last inequality holds for all $k\geq 15$.\\
\item[{\bf Case 3.}] Assume that $\lambda_{i_1} < k$ and $\lambda_{i}$ is not primary.
We know that the degrees of a partition and its dual are the same, so $m(\lambda) = m(\lambda^{*})$. Let $\lambda^{*} = (\lambda_{1}^{*},\lambda_{2}^{*},\cdots, \lambda_{t}^{*})$. Since $\lambda$ is an even partition (all of its parts are even), we have $\lambda_{1}^{*} = \lambda_{2}^{*}$; call this common value $f$. Since $\lambda_{i}$ is not primary, $\lambda^{*}\geq \lambda$, which means that $\lambda^{*}$ is primary. If $f\geq k$, then $\lambda^{*} = [k,k]$, which is covered in Case 1. If $f<k$, this is covered by Case 2.
\end{itemize}
\end{proof}
Recall that a canonical $2$-intersecting set of perfect matchings is the set of all perfect matchings that contain the edges $e_1$ and $e_2$. The size of a canonical $2$-intersecting set is $(2k-5)!!$. Theorems~\ref{SmallValues} and~\ref{LeastEvalTrick}, along with the ratio bound (Theorem~\ref{ratioBound}), show that the size of a 2-intersecting set of perfect matchings is no larger than $(2k-5)!!$. The ratio bound further implies that if $S$ is a maximum
2-intersecting set and $v_S$ is the characteristic vector of $S$, then $v_S - \frac{1}{(2k-1)(2k-3)}{\bf 1}$ is a $(-1)$-eigenvector of the weighted adjacency matrix $M$. Since the only irreducible representations of $\Sym(2k)$ that afford $-1$ as an eigenvalue are $[2k-2,2]$, $[2k-4,4]$, and $[2k-4,2,2]$, this implies that $v_S$ is in $\spanof \{[2k], [2k-2,2], [2k-4,4], [2k-4,2,2]\}$. The next result gives a more convenient spanning set for this space.
Define $\nu_{e_{1},e_{2}}$ to be the characteristic vector of the set of all perfect matchings containing the (disjoint) edges $e_{1}$ and $e_{2}$.
\begin{theorem}
For $k\geq 4$, let
\[
V(2k) = \spanof\{\nu_{e_{1},e_{2}} \,|\, e_{1},e_{2} \textrm{ edges in } K_{2k} \}
\]
and
\[
W(2k) = \spanof \{[2k], [2k-2,2], [2k-4,4], [2k-4,2,2]\}.
\]
Then $W(2k) = V(2k)$.
\end{theorem}
\begin{proof}
Using the ratio bound for the weighted adjacency matrices given in Theorems~\ref{SmallValues} and~\ref{LeastEvalTrick}, we see that the size of a maximum coclique in $M_{2}(2k)$ is $(2k-5)!!$, so the set of all perfect matchings which contain two fixed disjoint edges, say $e_{1}$ and $e_{2}$, forms a maximum coclique. Thus, by Theorem~\ref{ratioBound}, $\nu_{e_{1},e_{2}}-\frac{(2k-5)!!}{(2k-1)!!}\mathbf{1}$ is an eigenvector for the least eigenvalue. The least eigenvalue is $-1$ and only the modules $[2k-2,2]$, $[2k-4,4]$, and $[2k-4,2,2]$ have $-1$ as an eigenvalue. So $\nu_{e_{1},e_{2}}\in W(2k)$, and $V(2k) \subseteq W(2k)$.
Denote the dimensions of the spaces $W(2k)$ and $V(2k)$ by $D_{W}(2k)$ and $D_{V}(2k)$, respectively. Then, by Lemma~\ref{Hook} and, more generally, the hook length formula,
\[
D_{W}(2k) = 1+\binom{2k}{2}-\binom{2k}{1}+\binom{2k}{4}-\binom{2k}{3}+\frac{(2k)(2k-1)(2k-4)(2k-5)}{12}.
\]
For $4\leq k\leq 11$, using GAP~\cite{GAP4}, we note $D_{V}(2k) = D_{W}(2k)$, thus $V(2k)=W(2k)$. For $k>11$, we prove the same result by induction.
By summing all vectors in $V(2k+2)$, we obtain a multiple of the all-ones vector, so the module $[2k+2]$ is contained in $V(2k+2)$. Similarly, by summing all vectors with a fixed edge and subtracting an appropriate multiple of the all-ones vector, we see that $V(2k+2)$ contains the characteristic vector of the set of all perfect matchings that contain a fixed edge. Thus $V(2k+2)$ includes the module $[2k,2]$~\cite[Lemma 8.2]{GMP}.
For any set $S \subset \{1,2,\dots, 2k+2\}$ of size four, the characteristic vector of all the perfect matchings in which the four elements of $S$ appear as two independent edges is in $V(2k+2)$. By Theorem~\ref{thm:4sets} these vectors are a spanning set for $\spanof \{ [2k+2], [2k,2], [2k-2,4]\}$. Thus each of these modules is contained in $V(2k+2)$.
Since the edges of $K_{2k}$ form a subset of the edges of $K_{2k+2}$, the space $V(2k+2)$ contains a subspace isomorphic to $V(2k)$. By induction, the dimension of $V(2k)$ is
\[
D_{V}(2k) = 1+\binom{2k}{2}-\binom{2k}{1}+\binom{2k}{4}-\binom{2k}{3}+\frac{(2k)(2k-1)(2k-4)(2k-5)}{12}.
\]
This is a lower bound on the dimension of $V(2k+2)$.
For $k\geq 11$,
\[
D_V(2k) > m([2k+2]) + m([2k,2]) + m([2k-2,4]).
\]
We conclude that $V(2k+2)$ must contain a nonzero vector from the module $[2k-2,2,2]$. Since $[2k-2,2,2]$ is an irreducible $\Sym(2k+2)$-module and $V(2k+2)$ is invariant under $\Sym(2k+2)$, this implies that all of $[2k-2,2,2]$ is contained in $V(2k+2)$, and hence $V(2k+2)=W(2k+2)$.
\end{proof}
Putting these results together, we have our main result.
\begin{theorem}
The size of the largest set of $2$-intersecting perfect matchings in $K_{2k}$ with $k\geq 3$ is $(2k-5)!!$.
Further, if $S$ is a maximum set of $2$-intersecting perfect matchings, then the characteristic vector of $S$ is a linear combination of the characteristic vectors of the canonical $2$-intersecting sets of perfect matchings.
\end{theorem}
\section{Further Work}
In this paper, we proved that the Erd\H{o}s-Ko-Rado theorem holds for $2$-intersecting families of perfect matchings of the complete graph $K_{2k}$. Our first open question is whether these approaches can be generalized to prove a version of the Erd\H{o}s-Ko-Rado theorem for families of $t$-intersecting perfect matchings of the complete graph $K_{2k}$ with $t>2$. This has been done for $k$ sufficiently large relative to $t$ in~\cite{L}. In this work, it is quite remarkable that we were able to find a weighted adjacency matrix for $M_2(2k)$ for which the ratio bound holds with equality using only three of the classes of the association scheme. It is an open question whether a comparably simple weighted adjacency matrix exists for larger values of $t$.\\
Our next question is whether the largest set of $t$-intersecting perfect matchings can be characterized for graphs other than the complete graph. In the special case that the graph is the complete bipartite graph $K_{n,n}$, each perfect matching corresponds to a permutation. The set of intersecting permutations has been well-studied and versions of the EKR theorem hold~\cite{CaKu, Ellis}. It would be interesting to consider other bipartite graphs, such as the hypercube, although just enumerating the perfect matchings may be quite difficult.\\
Finally, while we were working on computing various entries of the character table of the perfect matching association scheme, we observed some interesting patterns for the values in the table. There are conjectures and some results about signs and values of the eigenvalues in the association scheme for the permutations, see~\cite{Ku2}. We suspect that there are similar results for the eigenvalues in the perfect matching scheme. For example, we make the following related interesting conjecture.
\begin{conj}
Consider the character table of the perfect matching association scheme for $2k$. The greatest eigenvalue in the row corresponding to the module $[2k-2\ell, 2\ell]$ is the one that corresponds to the same class of the scheme, $[2k-2\ell,2\ell]$. In addition, in the same row all the eigenvalues corresponding to the classes which are greater than $[2k-2\ell,2\ell]$ (in the dominance ordering) are negative.
\end{conj}
\section{Acknowledgements}
We wish to thank Brett Stevens who introduced us to the clique used in Theorem~\ref{CliqueApproach}.
\bibliographystyle{plain}
| {
"timestamp": "2020-08-20T02:16:19",
"yymm": "2008",
"arxiv_id": "2008.08503",
"language": "en",
"url": "https://arxiv.org/abs/2008.08503",
"abstract": "A perfect matching in the complete graph on $2k$ vertices is a set of edges such that no two edges have a vertex in common and every vertex is covered exactly once. Two perfect matchings are said to be $t$-intersecting if they have at least $t$ edges in common. The main result in this paper is an extension of the famous Erdős-Ko-Rado (EKR) theorem \\cite{EKR} to 2-intersecting families of perfect matchings for all values of $k$. Specifically, for $k\\geq 3$ a set of 2-intersecting perfect matchings in $K_{2k}$ of maximum size has $(2k-5)(2k-7)\\cdots (1)$ perfect matchings.",
"subjects": "Combinatorics (math.CO)",
"title": "The Erdős-Ko-Rado theorem for $2$-intersecting families of perfect matchings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9835969641180276,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7099044148130814
} |
https://arxiv.org/abs/1908.03669 | A Survey of Tuning Parameter Selection for High-dimensional Regression | Penalized (or regularized) regression, as represented by Lasso and its variants, has become a standard technique for analyzing high-dimensional data when the number of variables substantially exceeds the sample size. The performance of penalized regression relies crucially on the choice of the tuning parameter, which determines the amount of regularization and hence the sparsity level of the fitted model. The optimal choice of tuning parameter depends on both the structure of the design matrix and the unknown random error distribution (variance, tail behavior, etc). This article reviews the current literature of tuning parameter selection for high-dimensional regression from both theoretical and practical perspectives. We discuss various strategies that choose the tuning parameter to achieve prediction accuracy or support recovery. We also review several recently proposed methods for tuning-free high-dimensional regression. | \section{Introduction}
High-dimensional data, where the number of covariates/features (e.g., genes)
may be of the same order or substantially exceed the sample size (e.g., number of patients),
have become common in many fields due to the advancement in science and technology. Statistical methods for analyzing
high-dimensional data have been the focus of an enormous amount of research in the past decade or so,
see the books of \citet{Hastie09}, \citet{buhlmann2011}, \citet{hastie2015statistical} and
\citet{Wainwright19}, among others, for extensive discussions.
In this article, we consider a linear regression model of the form
\begin{align}
{\bf y} = {\bf X} \bm{\beta}_0 + \mbox{\boldmath $\epsilon$},\label{model}
\end{align}
where ${\bf y}=(y_1,\ldots,y_n)^T$ is the vector of responses, ${\bf X} = ({\bf x}_1,\ldots,{\bf x}_n)^T$ is an $n\times p$ matrix of covariates, $ \bm{\beta}_0=(\beta_{01},\ldots,\beta_{0p})^T$ is the vector of unknown regression coefficients,
$\mbox{\boldmath $\epsilon$}=(\epsilon_1,\ldots,\epsilon_n)^T$ is a random noise vector with each entry having mean zero and variance $\sigma^2$.
We are interested in the problem of estimating $ \bm{\beta}_0$ when $p\gg n$.
The parameter $ \bm{\beta}_0$ is usually not identifiable in high dimension without imposing additional structural assumptions,
as there may
exist $ \bm{\beta}_0'\neq \bm{\beta}_0$ but ${\bf X} \bm{\beta}_0'={\bf X} \bm{\beta}_0$.
One intuitive and popular structural assumption underlying a large body of the past work on high-dimensional regression
is the assumption of strong (or hard) sparsity. Loosely speaking, it means only a relatively small number---usually much less than the sample size $n$---
of the $p$ covariates are active in the regression model.
To overcome the issue of over-fitting, central to high-dimensional data analysis are penalized or regularized
regression techniques represented by Lasso \citep{tibshirani1996, chen2001} and its variants such as Dantzig
selector \citep{candes2007dantzig},
SCAD \citep{FL:2001}, MCP \citep{Zhang:2010} and Capped $L_1$ \citep{zhang2010analysis}.
In a nutshell,
a high-dimensional penalized regression estimator solves
\begin{eqnarray}
\label{model1} \min_{ \bm{\beta}}\Big\{(2n)^{-1}||{\bf y}-{\bf X}
\bm{\beta}||^2+\sum_{j=1}^pp_{\lambda}(|\beta_j|)\Big\}, \end{eqnarray} where
$ \bm{\beta}=(\beta_1,\ldots,\beta_p)^T$, $||\cdot||$
denotes the $L_2$ vector norm, and $p_{\lambda}(\cdot)$ is a penalty
function which depends on a tuning parameter $\lambda>0$. Customarily,
the intercept term is not penalized.
Regardless of the penalty function,
the choice of the tuning parameter $\lambda$ plays a crucial role in the
performance of the penalized high-dimensional regression estimator.
The tuning parameter $\lambda$ determines the level of the sparsity of the solution.
Generally speaking, a larger value of $\lambda$ indicates heavier penalty and tends to produce a sparser model.
The paper aims to provide a broad review of the current literature on tuning parameter selection
for high-dimensional penalized regression from both theoretical and practical perspectives.
We discuss different strategies for tuning parameter selection to achieve accurate
prediction performance or to identify active variables in the model, where the latter goal is often
referred to as support recovery. We also review several recently proposed tuning-free high-dimensional
regression procedures, which circumvent the difficulty of tuning parameter selection.
\section{Tuning parameter selection for Lasso}
\subsection{Background}
A simple yet successful approach for avoiding over-fitting and enforcing sparsity is to
regularize the classical least-squares regression with the $L_1$ penalty, corresponding to
adopting $p_{\lambda}(|\beta_j|)=\lambda|\beta_j|$ in (\ref{model1}).
This choice leads to the well known Least Absolute Shrinkage and Selection Operator (Lasso,
\citet{tibshirani1996}), which simultaneously performs estimation and variable selection.
In the field of signal processing, the Lasso is also known as basis
pursuit \citep{chen2001}.
Formally,
the Lasso estimator $\widehat{ \bm{\beta}}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)$
is obtained by minimizing the regularized least squares loss function, that is,
\begin{eqnarray}\label{Lasso}
\widehat{ \bm{\beta}}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)=\argmin_{ \bm{\beta}}\Big\{(2n)^{-1}\sum_{i=1}^n(Y_i-{\bf x}_i^T \bm{\beta})^2+\lambda|| \bm{\beta}||_1\Big\},
\end{eqnarray}
where ${\bf x}_i^T=(x_{i1}, \ldots, x_{ip})$ is the $i$th row of ${\bf X}$,
$|| \bm{\beta}||_1$ denotes the $L_1$-norm of $ \bm{\beta}$ and $\lambda$ denotes the tuning parameter.
By varying the value of $\lambda$ and solving the above minimization problem for
each $\lambda$, we obtain a solution path for Lasso.
In the literature, a great deal of work has been devoted to understanding the theoretical
properties of Lasso, including the theoretical guarantee on the nonasymptotic estimation error bound
$||\widehat{ \bm{\beta}}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)- \bm{\beta}_0||_2$, the prediction
error bound $||{\bf X}(\widehat{ \bm{\beta}}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)- \bm{\beta}_0)||_2$,
and the ability of recovering the support set or the active set of the model
$\{j: \beta_{0j}\neq 0, j=1, \ldots, p\}$,
see
\citep{Greenshtein2004, meinshausen2006high, ZY2006, bunea2007,
van2008, zhang2008sparsity, bickel2009, Candes2009}, among others.
The tremendous success of $L_1$-regularized regression technique is partly due to its computational convenience.
Efficient algorithms such as the exact path-following LARs algorithm \citep{efron2004least} and the fast coordinate descent algorithm
\citep{friedman2007pathwise, wu2008coordinate} have greatly facilitated the
use of Lasso.
\subsection{A theoretical perspective for tuning parameter selection}\label{bear}
Motivated by the Karush-Kuhn-Tucker condition for convex optimization \citep{boyd2004},
\citet{bickel2009} proposed a general principle for selecting $\lambda$
for Lasso. More specifically, it is suggested that
$\lambda$ should be chosen such that
\begin{eqnarray}\label{toptimal}
P\Big\{||n^{-1}{\bf X}^T\mbox{\boldmath $\epsilon$}||_{\infty}\leq \lambda\Big\}\geq 1-\alpha,
\end{eqnarray}
for some small $\alpha>0$,
where $||\cdot||_{\infty}$ denotes the infinity (or supremum) norm.
Consider the important example where the random errors $\epsilon_i$, $i=1, \ldots, n$, are independent $N(0,\sigma^2)$
random variables and the design matrix is normalized such that each column has $L_2$-norm
equal to $\sqrt{n}$. One can show that an upper bound of $\lambda$ satisfying (\ref{toptimal}) is given by
$\tau \sigma\sqrt{\log p/n}$ for some positive constant $\tau$. To see this, we observe that by the property
of the tail probability of Gaussian distribution and the union bound,
\begin{eqnarray*}
P\Big\{||n^{-1}{\bf X}^T\mbox{\boldmath $\epsilon$}||_{\infty}\leq \tau \sigma\sqrt{\log p/n}\Big\}\geq 1-2\exp\big(-(\tau^2-2)\log p/2\big),
\end{eqnarray*}
for some $\tau>\sqrt{2}$. A similar probability bound holds if the random errors
have sub-Gaussian distributions (e.g., Section 4.2 of \citep{Negahban2012}).
Most of the existing theoretical properties of Lasso were derived
while fixing $\lambda$ at
an oracle value satisfying (\ref{toptimal}) or within a range of oracle values whose bounds satisfy similar constraints.
For example, the near-oracle error bound of Lasso given in \citet{bickel2009}
was derived assuming $\lambda=\tau \sigma\sqrt{\log p/n}$ for some $\tau>2\sqrt{2}$
when ${\bf X}$ satisfies a restricted eigenvalue condition.
See \citet{buhlmann2011} for further discussions on the restricted eigenvalue condition
and other similar conditions on ${\bf X}$ to guarantee that the design matrix is well behaved
in high dimension.
The theory of Lasso suggests that $\lambda$ is an important
factor appearing in its estimation error bound. To achieve optimal estimation error bound, it is desirable to choose
the smallest $\lambda$ such that (\ref{toptimal}) holds. This choice, however, depends on both the unknown random error distribution
and the structure of the design matrix ${\bf X}$. As discussed above,
a reasonable upper bound for such a theoretical choice of $\lambda$ requires the knowledge of
$\sigma$, the standard deviation of the random error.
Estimation of $\sigma$ in high dimension is itself a difficult problem.
As a result, it is often infeasible to apply the theoretical choice of $\lambda$
in real data problems.
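To illustrate, the sample R code below evaluates this theoretical choice of $\lambda$ on simulated data; the values of $\tau$ and $\sigma$ and the simulated design are purely illustrative, since in practice $\sigma$ is unknown.
\begin{lstlisting}
library(glmnet)
set.seed(1)
n = 100; p = 400
x = matrix(rnorm(n * p), n, p)
x = scale(x) * sqrt(n / (n - 1))     # rescale columns to have L2-norm sqrt(n)
beta0 = c(rep(1, 5), rep(0, p - 5))  # a sparse coefficient vector
sigma = 1                            # noise level, treated as known here
y = drop(x %*% beta0) + sigma * rnorm(n)
tau = 2                              # any constant tau > sqrt(2)
lambda0 = tau * sigma * sqrt(log(p) / n)
fit = glmnet(x, y, lambda = lambda0) # glmnet uses the same (2n)^{-1} loss scaling
\end{lstlisting}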
\subsection{Tuning parameter selection via cross-validation}
In practice, a popular approach to selecting the tuning parameter $\lambda$ for Lasso
is a data-driven scheme called cross-validation, which aims for optimal prediction accuracy.
Its basic idea is to randomly split the data into a training data set and a testing (or validation) data set such that
one may evaluate the prediction error on the testing data while fitting the model using the training data set.
There exist several different versions of cross-validation, such as leave-$k$-out cross-validation, repeated random sub-sampling validation (also known as Monte Carlo cross-validation), and $K$-fold cross-validation. Among these options, $K$-fold cross-validation is most
widely applied in real-data analysis.
The steps displayed in Algorithm~\ref{algo:kfoldcv} illustrate the implementation of $K$-fold cross-validation for Lasso.
The same idea broadly applies to more general problems such as penalized likelihood estimation with different penalty functions.
First, the data is randomly partitioned into $K$ roughly equal-sized subsets (or folds), where typical choice of $K$ is 5 or 10.
Given a value of $\lambda$,
one of the $K$ folds is retained as the validation data set to evaluate the prediction error, and the remaining data
are used as the training data set to obtain $\widehat{ \bm{\beta}}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)$.
This cross-validation process is then repeated, with each of the $K$ folds being used as the validation data set exactly once. For example, in carrying out a 5-fold cross-validation for Lasso, we randomly split the data into five roughly equal-sized parts $\mathcal{V}_1,\cdots,\mathcal{V}_5$. Given a tuning parameter $\lambda$, we first train the model and estimate $\widehat{ \bm{\beta}}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)$ on $\{\mathcal{V}_2,\cdots,\mathcal{V}_5\}$ and then compute the
total prediction error on $\mathcal{V}_1$. Repeat this process by training on $\{\mathcal{V}_1, \mathcal{V}_3,\mathcal{V}_4,\mathcal{V}_5\}$ and validating on $\mathcal{V}_2$, and so on. The cross-validation error $\mbox{CV}(\lambda)$ is obtained as the average of the prediction errors over the $K$
validation data sets from this iterative process.
\begin{algorithm}[!h]
\caption{K-fold cross-validation for Lasso} \label{algo:kfoldcv}
\begin{algorithmic}[1]
\State Randomly divide the data of sample size $n$ into $K$ folds, $\mathcal{V}_1, . . . \mathcal{V}_K$, of roughly equal sizes.
\State Set $\mbox{Err}(\lambda)$ = 0.
\For {$k=1,\cdots,K $}
\State Training dataset $({\bf y}_T,{\bf X}_T)=\{(y_i,{\bf x}_i): i\notin \mathcal{V}_k\} $.
\State Validation dataset $({\bf y}_V,{\bf X}_V)=\{(y_i,{\bf x}_i): i\in \mathcal{V}_k\}$.
\State $\widehat{\vbeta}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)\leftarrow \argmin\limits_{ \bm{\beta}} \big\{\{2(n-|\mathcal{V}_k|)\}^{-1}||{\bf y}_T-{\bf X}_T
\bm{\beta}||^2+\lambda|| \bm{\beta}||_1\big\}$.
\State $\mbox{Err}(\lambda) \leftarrow\ \mbox{Err}(\lambda) + ||{\bf y}_V-{\bf X}_V \widehat{\vbeta}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda)||^2$.
\EndFor
\Return $\mbox{CV}(\lambda)=n^{-1}\mbox{Err}(\lambda)$.
\end{algorithmic}
\end{algorithm}
Given a set $\Lambda$ of candidate tuning parameter values, say a grid $\{\lambda_1, \ldots, \lambda_m\}$, one would compute $\mbox{CV}(\lambda)$ according to Algorithm~\ref{algo:kfoldcv} for each $\lambda\in\Lambda$. This yields the cross-validation error curve $\{\mbox{CV}(\lambda): \lambda\in\Lambda\}$.
To select the optimal $\lambda$ for Lasso, two useful strategies are usually recommended.
A simple and intuitive approach is to select the $\lambda$ that minimizes the cross-validation error, i.e.,
\begin{eqnarray}\label{minl}
\widehat{\lambda} = \argmin\limits_{\lambda} \mbox{CV}(\lambda).\label{cv_crit}
\end{eqnarray}
An alternative strategy is based on the so-called “one-standard-error rule”, which
chooses the most parsimonious model (here corresponding to larger $\lambda$ and more regularization)
such that its cross-validation error is within one standard error of
$\mbox{CV}(\widehat{\lambda})$.
This is feasible as the $K$-fold cross-validation allows
one to estimate the standard error of the cross-validation error.
The second strategy acknowledges
that the cross-validation error curve is estimated with error and is motivated by the principle of parsimony
(e.g., Section 2.3 of \citet{hastie2015statistical}).
Several R functions are available to implement $K$-fold cross-validation for Lasso, such as the ``cv.glmnet''
function in the R package \texttt{glmnet} \citep{glmnet} and the ``cv.lars'' function in the R package \texttt{lars} \citep{lars}.
Below are the sample R codes for performing the $5$-fold cross-validation for Lasso using the ``cv.glmnet'' function.
\begin{lstlisting}
library(glmnet)
data(SparseExample)                  # example data shipped with the glmnet package
cvob1=cv.glmnet(x, y, nfolds=5)      # 5-fold cross-validation for Lasso
plot(cvob1)                          # CV error curve with one-standard-error bars
\end{lstlisting}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{cvglmnet.png}
\caption{Cross-validation for Lasso}
\label{fig:cvglmnet}
\end{figure}
The plot produced by the above commands is given in
Figure~\ref{fig:cvglmnet}, which depicts the cross-validation error curve
(based on the mean-squared prediction error in this example) as well as the one-standard-error
band. In the plot, $\lambda_{\min}$ is the tuning parameter
obtained by (\ref{minl}), i.e., the value of the tuning parameter
that minimizes the cross-validation prediction error. And $\lambda_{1se}$ denotes the tuning parameter selected via the one-standard-error rule. The numbers at the top of the plot correspond to the numbers of non-zero coefficients
(or model sizes) for models fitted with different $\lambda$ values. For this data example,
the prediction errors based on $\lambda_{\min}$ and $\lambda_{1se}$ are close, while the model based on
$\lambda_{1se}$ is notably sparser than the one based on $\lambda_{\min}$.
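The two selected tuning parameters, and the fitted coefficients at either value, can be extracted directly from the object returned by ``cv.glmnet'':
\begin{lstlisting}
cvob1$lambda.min               # tuning parameter minimizing the CV error
cvob1$lambda.1se               # tuning parameter from the one-standard-error rule
coef(cvob1, s = "lambda.1se")  # sparse coefficient vector at lambda.1se
\end{lstlisting}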
In the existing work on the theory of Lasso, the tuning parameter is usually considered to be deterministic,
or fixed at a pre-specified theoretical value. Despite the promising
empirical performance of cross-validation, much less is known about
the theoretical properties of the cross-validated
Lasso, where the tuning parameter is selected in a data-driven manner.
Some important progress has been made recently in understanding the
properties of cross-validated Lasso. \cite{Homrighausen2013},
\cite{Chatterjee2015} and \cite{Homrighausen2017}
investigated the risk consistency of cross-validated Lasso under different regularity
conditions.
\cite{Chetverikov2016} derived a nonasymptotic error bound for
cross-validated Lasso and showed that it can achieve the optimal estimation rate
up to a factor of order $\sqrt{\log(pn)}$.
\subsection{Scaled Lasso: adapting to unknown noise level}
As discussed earlier, the optimal choice of $\lambda$ for Lasso requires the knowledge
of the noise level $\sigma$, which is usually unknown in real data analysis.
Motivated by \citet{stadler2010} and the discussions on this paper by
\citet{antoniadis2010comments} and \citet{sun2010comments},
\citet{sun2012scaled} thoroughly investigated the performance of an iterative algorithm
named scaled Lasso, which jointly estimates the
regression coefficients $ \bm{\beta}_0$ and the noise level $\sigma$ in a sparse
linear regression model.
Denote the loss function for Lasso regression by
\begin{align}
L_\lambda( \bm{\beta}) = (2n)^{-1}||{\bf y}-{\bf X} \bm{\beta}||^2+\lambda|| \bm{\beta}||_1. \label{scaled_lasso_loss}
\end{align}
The iterative algorithm for scaled Lasso is described in Algorithm~\ref{algo:scaledLasso},
where $ \bm{\beta}^0$ and $\lambda_0$ are initial values independent of $ \bm{\beta}_0$ and $\sigma$.
In this algorithm, the tuning parameter is rescaled iteratively.
In the output of the algorithm, $\widehat{\vbeta}({\bf X},{\bf y})$ is referred to as the scaled Lasso estimator.
\begin{algorithm}[!h]
\caption{Scaled Lasso algorithm (Sun and Zhang, 2012)} \label{algo:scaledLasso}
\begin{algorithmic}[1]
\State Input $({\bf X},{\bf y})$, $ \bm{\beta}^0$, and $\lambda_0$.
\State $ \bm{\beta}\leftarrow \bm{\beta}^{0}$.
\While {$L_\lambda( \bm{\beta})\leq L_\lambda( \bm{\beta}^{0})$}
\State $ \bm{\beta}^{0}\leftarrow \bm{\beta}$
\State $\widehat{\sigma} \leftarrow n^{-1/2}||{\bf y}-{\bf X} \bm{\beta}^{0}||$.
\State $\lambda \leftarrow \widehat{\sigma}\lambda_0$.
\State $ \bm{\beta} \leftarrow \argmin\limits_{ \bm{\beta}} L_\lambda( \bm{\beta}) $.
\EndWhile
\Return $\widehat{\sigma}({\bf X},{\bf y})\leftarrow \widehat{\sigma} $ and $\widehat{\vbeta}({\bf X},{\bf y})\leftarrow \bm{\beta}^0$.
\end{algorithmic}
\end{algorithm}
\citet{sun2012scaled} showed that the outputs of Algorithm~\ref{algo:scaledLasso}
converge to the solutions of a joint minimization problem, specifically,
\begin{align}
(\widehat{\vbeta},\widehat{\sigma})
= \argmin\limits_{ \bm{\beta},\sigma} \Big\{(2n\sigma)^{-1}||{\bf y}-{\bf X} \bm{\beta}||^2 +\sigma/2+ \lambda_0|| \bm{\beta}||_1\Big\}. \label{scaled_lasso}
\end{align}
This is equivalent to jointly minimizing
Huber’s concomitant loss function with the $L_1$ penalty \citep{Owen2007, antoniadis2010comments}.
This loss function possesses the nice property of being jointly convex in $( \bm{\beta}, \sigma)$.
It can also be shown that the solutions are scale-equivariant in ${\bf y}$, i.e.,
$\widehat{\vbeta}({\bf X},c{\bf y}) = c\widehat{\vbeta}({\bf X},{\bf y})$ and $\widehat{\sigma}({\bf X},c{\bf y}) = |c|\widehat{\sigma}({\bf X},{\bf y})$ for any constant $c$. This property is practically important in data analysis.
Under the Gaussian assumption and other mild regularity conditions, \citet{sun2012scaled} derived oracle inequalities
for prediction and joint estimation of $\sigma$ and $ \bm{\beta}_0$ for the scaled
lasso, which in particular imply the consistency and asymptotic normality of $\widehat{\sigma}({\bf X},{\bf y})$ as an
estimator for $\sigma$.
The function ``scalreg'' in the R package \texttt{scalreg} implements Algorithm~\ref{algo:scaledLasso} for the scaled Lasso.
The sample codes below provide an example on
how to analyze the ``sp500'' data in that package with scaled Lasso.
\begin{lstlisting}
library(scalreg)
data(sp500)
attach(sp500)
x = sp500.percent[,3: (dim(sp500.percent)[2])]
y = sp500.percent[,1]
scaleob <- scalreg(x, y)
\end{lstlisting}
\section{Alternative $L_1$-penalty based methods: from tuning selection to tuning free}
This section provides a brief review of several recently proposed
$L_1$-penalty based tuning-free procedures for high-dimensional sparse linear regression.
These procedures
tackle the challenge of tuning parameter selection for Lasso from different angles.
As suggested by (\ref{toptimal}), the theoretically optimal tuning parameter for Lasso
depends on both the design matrix ${\bf X}$ and the unknown error distribution
(standard deviation $\sigma$, tail behavior, etc).
The three procedures we review here (square-root Lasso, TREX
and Rank Lasso) aim to automatically adapt to one or more aspects of these factors.
\subsection{Scale-free square-root Lasso}
Square-root Lasso is a variant of Lasso proposed by \citet{belloni2011}, which avoids the need to
calibrate the tuning parameter with respect to the noise level $\sigma$.
Square-root Lasso replaces the least-squares loss (or $L_2$ loss) function in Lasso with
its square root.
Assuming the $\epsilon_i$ are independently distributed with mean zero and variance $\sigma^2$,
the square-root Lasso estimator is defined as
\begin{align}
\label{sqrtLasso}
\widehat{\vbeta}_{\sqrt{\scalebox{.7}{\mbox{Lasso}}}}(\lambda) =\argmin\limits_{ \bm{\beta}}\Big\{n^{-1/2}||{\bf y}-{\bf X} \bm{\beta}|| + \lambda|| \bm{\beta}||_1\Big\}.
\end{align}
Let $L_{\scalebox{.7}{\mbox{SR}}}( \bm{\beta})=n^{-1/2}||{\bf y}-{\bf X} \bm{\beta}||$ denote the loss function
of square-root Lasso and let $S_{\scalebox{.7}{\mbox{SR}}}$ denote its
subgradient evaluated at $ \bm{\beta}= \bm{\beta}_0$.
The general principle of tuning parameter selection
(e.g., \citet{bickel2009}) suggests choosing $\lambda$ such that
$
P(\lambda>c||S_{\scalebox{.7}{\mbox{SR}}}||_{\infty})\geq 1-\alpha_0,
$
for some constant $c>1$ and a given small $\alpha_0>0$.
An important observation that underlies the advantage of square-root Lasso is that
\begin{eqnarray*}
S_{\scalebox{.7}{\mbox{SR}}}=\frac{n^{-1}\sum_{i=1}^n{\bf x}_i\epsilon_i}{(n^{-1}\sum_{i=1}^n\epsilon_i^2)^{1/2}}
\end{eqnarray*}
does not depend on $\sigma$.
Computationally, the square-root Lasso can be formulated as a solution to a convex conic programming problem. The function ``slim'' in the R package \texttt{flare} \citep{flare} implements a family of Lasso variants for high-dimensional regression, including the square-root Lasso. The sample codes below demonstrate how to
implement the square-root Lasso using this function to
analyze the ``sp500'' data in the \texttt{scalreg} package.
The arguments \texttt{\hlblue{method="lq", q = 2}}
yield square-root Lasso, which are also the default options in the ``slim'' function.
\begin{lstlisting}
library(flare)
data(sp500)
attach(sp500)
x = sp500.percent[,3: (dim(sp500.percent)[2])]
y = sp500.percent[,1]
sqrtob <- slim(x, y, method="lq", q = 2)
\end{lstlisting}
\citet{belloni2011} recommended the choice
$\lambda =c n^{-1/2}\Phi^{-1}\Big(1-\frac{\alpha}{2p}\Big)$,
for some constant $c>1$ and $\alpha>0$. Note that this choice of $\lambda$ does not depend on $\sigma$, and it is valid asymptotically without requiring the random errors to be Gaussian.
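For instance, for the ``sp500'' data loaded above, this recommended value can be computed as follows, where $c=1.1$ and $\alpha=0.05$ are illustrative choices of the constants:
\begin{lstlisting}
n = nrow(x); p = ncol(x)
c0 = 1.1; alpha = 0.05
lambda0 = c0 * qnorm(1 - alpha / (2 * p)) / sqrt(n)
\end{lstlisting}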
Under general regularity conditions, \citet{belloni2011} showed that
there exists some positive constant $C_n$ such that
$$P\Big(||\widehat{\vbeta}_{\sqrt{\scalebox{.7}{\mbox{Lasso}}}}(\lambda) - \bm{\beta}_0||\leq C_n\sigma\big\{n^{-1}s\log(2p/\alpha)\big\}^{1/2}\Big) \geq 1-\alpha,$$
where $s=|| \bm{\beta}_0||_0$ is the sparsity size of the true model.
Hence, square root Lasso achieves the near-oracle
rate of Lasso even when $\sigma$ is unknown.
The square-root Lasso and Lasso are equivalent families of estimators.
There exists a one-to-one mapping between the tuning
parameter paths of square-root Lasso and Lasso \citep{tian2018selective}.
It is also worth pointing out that
the square-root Lasso is related to but should not be
confused with the scaled Lasso \citep{sun2012scaled}.
The current literature contains some confusion (particularly in the use of names)
about these two methods.
The connection and distinction
between them are nicely discussed in Section 3.7 of
\cite{van2016estimation}.
\subsection{TREX}
The scaled Lasso and square-root Lasso both address the need to calibrate $\lambda$ for $\sigma$. However, the tail behavior of the noise vector, as well as the structure of the design matrix, could also have significant effects on the optimal selection of $\lambda$.
To alleviate these additional difficulties, \citet{lederer2015} proposed a new approach for high-dimensional variable selection.
The authors named the new approach TREX to emphasize that it aims at
Tuning-free Regression that adapts to the Entire noise and the
design matrix ${\bf X}$.
Indeed, the most attractive property of TREX is that it automatically adjusts $\lambda$ for
the unknown noise standard deviation
$\sigma$, the tail of the error distribution and the design matrix.
In contrast to the square-root Lasso, the TREX estimator modifies the Lasso loss function in a different way.
The TREX estimator is defined as
\begin{align}
\label{TREX}
\widehat{\vbeta}_{\scalebox{.7}{\mbox{TREX}}} =\argmin\limits_{ \bm{\beta}}\Big\{L_{\scalebox{.7}{\mbox{TREX}}}( \bm{\beta}) + || \bm{\beta}||_1\Big\},
\end{align}
where
\begin{eqnarray*}
L_{\scalebox{.7}{\mbox{TREX}}}( \bm{\beta}) = \frac{2||{\bf y}-{\bf X} \bm{\beta}||^2}{||{\bf X}^T({\bf y}-{\bf X} \bm{\beta})||_\infty}.
\end{eqnarray*}
TREX does not require a tuning parameter.
In this sense, it is a completely tuning-free procedure.
\citet{lederer2015} proved that the TREX estimator
is close to a Lasso solution with tuning parameter of the same order as the theoretically optimal $\lambda$. They presented examples where TREX
has promising performance compared with Lasso.
The modified loss function for the TREX estimator, however, is no longer convex.
\citet{bien2016non} showed the remarkable result that
despite the non-convexity, there exists a polynomial-time algorithm that is guaranteed to find the global minimum of the TREX problem.
\citet{bien2018prediction} recently established a prediction
error bound for TREX, which deepens the understanding of the theoretical properties of TREX.
\subsection{Rank Lasso: a tuning free and efficient procedure}
Recently, \citet{Wang2018} proposed an alternative approach to overcoming the challenges of tuning
parameter selection for Lasso. The new method, named Rank Lasso, has an optimal tuning parameter
that can be easily simulated
and automatically adapts to both the unknown
random error distribution and the structure of the design matrix.
Moreover, it enjoys several other appealing properties:
it is a solution to a convex optimization problem and can be
conveniently computed via linear programming; it has similar performance as Lasso does
when the random errors are normally distributed and is
robust with substantial efficiency gain for heavy-tailed random errors;
it leads to a scale-equivariant estimator
which permits coherent interpretation when the response variable undergoes a
scale transformation.
Specifically, the new estimator is defined as
\begin{eqnarray}\label{dragon}
\widehat{\vbeta}_{\scalebox{.7}{\mbox{rank}}}(\lambda) = \argmin\limits_{ \bm{\beta}}\Big\{Q_n( \bm{\beta}) +\lambda|| \bm{\beta}||_1\Big\}, \label{rank_est}
\end{eqnarray}
where the loss function
\begin{align}
Q_n( \bm{\beta}) = [n(n-1)]^{-1} \sumsum_{i\neq j}\big|(y_i-{\bf x}_i^T \bm{\beta})-(y_j-{\bf x}_j^T \bm{\beta})\big|. \label{rank_loss}
\end{align}
The loss function $Q_n( \bm{\beta})$ is related to
Jaeckel's dispersion function with Wilcoxon scores \citep{jaeckel1972}
in the classical nonparametric statistics literature.
For this reason, the estimator in (\ref{dragon}) is referred to
as the rank Lasso estimator.
In the classical low-dimensional setting,
regression with Wilcoxon loss function was investigated by \citet{wang2009}
and \citet{wang2009local}.
To appreciate its tuning free property, we observe that the gradient function of
$Q_n( \bm{\beta})$ evaluated at $ \bm{\beta}_0$ is
\begin{eqnarray*}
{\bf S}_n := \frac{\partial Q_n( \bm{\beta}) }{\partial \bm{\beta}}\Big|_{ \bm{\beta}= \bm{\beta}_0}=-2[n(n-1)]^{-1}{\bf X}^T\mbox{\boldmath $\xi$},
\end{eqnarray*}
where $\mbox{\boldmath $\xi$} = (\xi_1,\cdots,\xi_n)^T$ with $\xi_i = 2r_i - (n+1)$ and
$r_i = \mbox{rank}(\epsilon_i)$ among $\epsilon_1,\cdots,\epsilon_n$.
Note that the random vector $\{r_1,\cdots,r_n\}$ follows the uniform distribution on the permutations of integers $\{1,\cdots,n\}$. Consequently, $\mbox{\boldmath $\xi$}$ has a completely known distribution that is independent of the random error distribution.
Hence, the gradient function has the complete pivotal property \citep{Parzen1994}, which implies the tuning-free
property of rank-Lasso.
To see this, recall that
the general principle of tuning parameter selection
\citep{bickel2009}
suggests choosing $\lambda$ such that
$P(\lambda>c||{\bf S}_n||_\infty)\geq 1-\alpha_0$
for some constant $c > 1$ and a given small $\alpha_0>0$. With the design matrix ${\bf X}$ and a completely known distribution of $\mbox{\boldmath $\xi$}$, we can easily simulate the distribution of ${\bf S}_n$ and hence compute the theoretically optimal $\lambda$.
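As an illustration, the simulation of the pivotal tuning parameter takes only a few lines of code. The Python sketch below is ours; the defaults $c=1.1$, $\alpha_0=0.1$ and the number of replications are illustrative choices and not values prescribed by \citet{Wang2018}. It draws random rank vectors and returns $c$ times the $(1-\alpha_0)$ quantile of $||{\bf S}_n||_\infty$.
\begin{verbatim}
import numpy as np

def simulate_rank_lasso_lambda(X, c=1.1, alpha0=0.1, B=500, rng=None):
    # Simulate c times the (1 - alpha0) quantile of ||S_n||_inf, where
    # S_n = -2 [n(n-1)]^{-1} X^T xi and xi_i = 2 r_i - (n + 1)
    # for a uniformly random rank vector (r_1, ..., r_n).
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    sims = np.empty(B)
    for b in range(B):
        r = rng.permutation(n) + 1          # random ranks 1, ..., n
        xi = 2.0 * r - (n + 1.0)
        S = -2.0 / (n * (n - 1.0)) * (X.T @ xi)
        sims[b] = np.max(np.abs(S))
    return c * np.quantile(sims, 1.0 - alpha0)
\end{verbatim}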
\citet{Wang2018} established
a finite-sample estimation error bound for the Rank Lasso estimator with the aforementioned simulated tuning parameter
and showed that it achieves the same optimal near-oracle estimation error rate
as Lasso does. In contrast to Lasso, the conditions required by rank Lasso for the
error distribution are much weaker and allow for heavy-tailed distributions such as the Cauchy distribution.
Moreover, they proved that further improvement in efficiency can be achieved
by a second-stage enhancement with some light tuning.
\section{Other alternative tuning parameter selection methods for Lasso}
\subsection{Bootstrap-based approach}
\citet{Hall2009} developed an $m$-out-of-$n$ bootstrap algorithm to select the tuning parameter
for Lasso, pointing out that standard bootstrap methods would fail.
Their algorithm employs an $m$-out-of-$n$ residual bootstrap procedure (see Algorithm~\ref{algo:boots}),
which allows one to estimate the mean squared error of the parameter estimators for different tuning parameters.
For each candidate $\lambda$,
this algorithm computes the bootstrapped mean-square error estimate $\mbox{Err}(\lambda)$.
The optimal tuning parameter is chosen as
$
\widehat{\lambda}_{\scalebox{.7}{\mbox{boots}}} = \big(n/m\big)^{1/2}\argmin\limits_{\lambda} \mbox{Err}(\lambda).
$
The final estimator for $ \bm{\beta}_0$ is given by
\begin{eqnarray*}
\widehat{\vbeta}_{\scalebox{.7}{\mbox{boots}}} = \argmin\limits_{ \bm{\beta}}\Big\{ \sum_{i=1}^n (y_i-\bar{y}-{\bf x}_i^T \bm{\beta})^2 + \widehat{\lambda}_{\scalebox{.7}{\mbox{boots}}}|| \bm{\beta}||_1\Big\}.
\end{eqnarray*}
\begin{algorithm}[!h]
\caption{Bootstrap algorithm} \label{algo:boots}
\begin{algorithmic}[1]
\State Input $({\bf y},{\bf X})$, a $\sqrt{n}$-consistent ``pilot estimator'' $\widetilde{\vbeta}$, and $\lambda$.
\State $\widehat{\epsilon}_i\leftarrow y_i-\bar{y}-{\bf x}_i^T\widetilde{\vbeta}$.
\State $\widetilde{\epsilon}_i\leftarrow \widehat{\epsilon}_i-n^{-1}\sum_{j=1}^n \widehat{\epsilon}_j$.
\State Set $\mbox{Err}(\lambda) \leftarrow 0$.
\For {k = 1,...,N}
\State Obtain $\epsilon_1^*,...,\epsilon_m^*$ by sampling randomly from $\widetilde{\epsilon}_1,...,\widetilde{\epsilon}_n$ with replacement.
\State $y_i^* \leftarrow \bar{y} + {\bf x}_i^T\widetilde{\vbeta} + \epsilon_i^*$, $i=1,...,m$.
\State $\widehat{\vbeta}^*(\lambda) \leftarrow \argmin\limits_{ \bm{\beta}} \big\{\sum_{i=1}^m(y_i^*-\bar{y}^*-{\bf x}_i^T \bm{\beta})^2 + \lambda|| \bm{\beta}||_1\big\}$.
\State $\mbox{Err}(\lambda) \leftarrow \mbox{Err}(\lambda) + ||\widehat{\vbeta}^*(\lambda)-\widetilde{\vbeta}||^2$.
\EndFor
\Return $\mbox{Err}(\lambda) $.
\end{algorithmic}
\end{algorithm}
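A minimal Python sketch of the bootstrap error estimate in Algorithm~\ref{algo:boots} is given below. It uses scikit-learn's Lasso solver as a stand-in for the $\ell_1$-penalized least-squares problem; since that solver minimizes $(2m)^{-1}||{\bf y}-{\bf X} \bm{\beta}||^2 + \alpha|| \bm{\beta}||_1$, the rescaling $\alpha = \lambda/(2m)$ is used, and the intercept handling only approximates the explicit centering in the algorithm. All function and variable names are ours, and the errors are averaged over the bootstrap replications (the minimizer is the same as for the sum).
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def bootstrap_error(y, X, beta_pilot, lam, m, n_boot=100, rng=None):
    # Residual m-out-of-n bootstrap estimate of Err(lam), as in Algorithm 1.
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    resid = y - y.mean() - X @ beta_pilot
    resid = resid - resid.mean()                 # centered residuals
    Xm = X[:m]                                   # covariates of the m-subsample
    err = 0.0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=m)         # resample residuals
        y_star = y.mean() + Xm @ beta_pilot + resid[idx]
        fit = Lasso(alpha=lam / (2 * m), fit_intercept=True).fit(Xm, y_star)
        err += np.sum((fit.coef_ - beta_pilot) ** 2)
    return err / n_boot
\end{verbatim}
The optimal tuning parameter is then obtained by minimizing this estimate over a grid of candidate values and rescaling the minimizer by $(n/m)^{1/2}$, as described above.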
Their method and theory were mostly developed for the $p<n$ case.
The algorithm requires that the covariates are centered at their empirical means
and that a $\sqrt{n}$-consistent ``pilot estimator'' $\widetilde{\vbeta}$ is available.
\citet{Hall2009} proved that if $m = O(n/(\log n)^{1+\eta})$ for some $\eta>0$, then
the estimator $\widehat{\vbeta}_{\scalebox{.7}{\mbox{boots}}}$ can identify the true model
with probability approaching one as $n\rightarrow\infty$.
They also suggested that the theory can be generalized to the high-dimensional case with fixed sparsity $|| \bm{\beta}_0||_0$; however, the order of $p$ would then depend on the ``generalized parameters'' of the model, such as the tail behavior of the random noise.
\citet{chatterjee2011} proposed a modified bootstrap method for Lasso.
This method first computes a thresholded version
of the Lasso estimator and then applies the residual bootstrap.
In the classical $p\ll n$ setting, \citet{chatterjee2011} proved that the modified bootstrap method
provides a valid approximation to the distribution of the
Lasso estimator. They further recommended choosing $\lambda$
to minimize the bootstrapped approximation to the mean squared error of the Lasso estimator.
\subsection{Adaptive calibration for $l_{\infty}$}
Motivated by Lepski's method for non-parametric regression \citep{lepski1990,lepski1997},
\citet{Chichignoud2016} proposed a novel adaptive validation method
for tuning parameter selection for Lasso.
The method, named Adaptive Calibration for $l_{\infty}$ (AV$_\infty$),
performs simple tests along a single Lasso path to select the optimal tuning parameter.
The method is equipped with a fast computational routine and theoretical guarantees on
its finite-sample performance with
respect to the sup-norm loss.
Let $\Lambda = \{\lambda_1,...,\lambda_N\}$ be a set of candidate values for $\lambda$, where $0<\lambda_1<\cdots<\lambda_N=\lambda_{\max} = 2n^{-1}||{\bf X}^T{\bf y}||_\infty$.
Let $\widehat{\vbeta}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda_j)$ denote the Lasso estimator in (\ref{Lasso}) with tuning parameter $\lambda=\lambda_j$, $j=1,\cdots,N$. The proposed AV$_\infty$
selects $\lambda$ based on the tests for sup-norm differences of Lasso estimates with different tuning parameters. It is defined as
\begin{align}
\widehat{\lambda}_{\scalebox{.7}{\mbox{AC}}} = \min\Big\{\lambda\in\Lambda: \max\limits_{\substack{\lambda',\lambda''\in \Lambda\\ \lambda',\lambda''\geq \lambda}}\Big[ \frac{||\widehat{\vbeta}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda')- \widehat{\vbeta}^{\scalebox{.7}{\mbox{Lasso}}}(\lambda'')||_\infty}{\lambda'+\lambda''}-\bar{C}\Big]\leq 0 \Big\},\label{AV_crit}
\end{align}
where $\bar{C}$ is a constant related to the $l_\infty$ error bound of the Lasso estimator. \citet{Chichignoud2016} recommended the universal choice $\bar{C} = 0.75$ for all practical purposes.
\citet{Chichignoud2016} proposed a simple and fast implementation for the tuning parameter selection via AV$_\infty$,
see the description in
Algorithm~\ref{algo:AV}, where in the algorithm the binary random variable $\widehat{t}_{\lambda_j}$ is defined as
\begin{eqnarray*}
\widehat{t}_{\lambda_j} = \prod_{k=j}^{N}\mathbbm{1}\Big\{ \frac{||\widehat{\vbeta}(\lambda_j)- \widehat{\vbeta}(\lambda_k)||_\infty}{\lambda_j+\lambda_k}\leq\bar{C}\Big\},
\quad j=1, \ldots, N,
\end{eqnarray*}
with $\mathbbm{1}$ being the indicator function.
The final estimator for the AV$_\infty$ method is the Lasso estimator with the tuning parameter $\widehat{\lambda}_{\scalebox{.7}{\mbox{AC}}}$, denoted as $\widehat{\vbeta}(\widehat{\lambda}_{\scalebox{.7}{\mbox{AC}}})$.
As shown in Algorithm~\ref{algo:AV}, AV$_\infty$ only needs to compute one solution path, in contrast to the $K$ paths in
the $K$-fold cross-validation for Lasso.
The new method is usually faster than cross-validation.
\citet{Chichignoud2016} proved that $||\widehat{\vbeta}(\widehat{\lambda}_{\scalebox{.7}{\mbox{AC}}})- \bm{\beta}_0||_\infty$ achieves the optimal
sup-norm error bound of Lasso up to a constant pre-factor with high probability under some regularity conditions.
\begin{algorithm}[!h]
\caption{AV$_\infty$ algorithm} \label{algo:AV}
\begin{algorithmic}[1]
\State Input $\widehat{\vbeta}(\lambda_1),...,\widehat{\vbeta}(\lambda_N),\bar{C}$.
\State Set $j\leftarrow N$.
\While {$\widehat{t}_{\lambda_{j-1}}\neq 0$ and $j>1$}
\State Update index $j\leftarrow j-1$.
\EndWhile
\Return $\widehat{\lambda} \leftarrow \lambda_j$.
\end{algorithmic}
\end{algorithm}
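For concreteness, the selection rule (\ref{AV_crit}) can be coded directly once the Lasso path has been computed. The following Python sketch (the names are ours) mirrors Algorithm~\ref{algo:AV}: starting from the largest $\lambda$, it moves down the path as long as the sup-norm tests pass and returns the smallest admissible value.
\begin{verbatim}
import numpy as np

def av_infinity_select(betas, lambdas, C_bar=0.75):
    # betas[j] is the Lasso estimate at lambdas[j]; lambdas sorted increasingly.
    N = len(lambdas)
    j = N - 1                        # start from the largest lambda
    while j > 0:
        lam = lambdas[j - 1]
        ok = all(
            np.max(np.abs(betas[j - 1] - betas[k])) / (lam + lambdas[k]) <= C_bar
            for k in range(j - 1, N)
        )
        if not ok:
            break
        j -= 1
    return lambdas[j]
\end{verbatim}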
\section{Nonconvex penalized high-dimensional regression and tuning for support recovery}
\subsection{Background}
Lasso is known to achieve accurate
prediction under rather weak conditions \citep{Greenshtein2004}. However,
it is also widely recognized that Lasso
requires stringent conditions on the design matrix ${\bf X}$ to
achieve variable selection consistency \citep{Zou2006, Zhao2006}.
In many scientific problems, it is of importance to identify relevant or active variables.
For example, biologists are often interested in identifying the genes associated with a certain disease.
This problem is often referred to as {\it support recovery}, with the goal to identify
$\mathcal{S}_0=\{j: \beta_{0j}\neq 0, j=1, \ldots, p\}$.
To alleviate the bias of Lasso due to
the over-penalization of the $L_1$ penalty, nonconvex penalized regression
has been studied in the literature as an alternative to Lasso \citep{FLreview, zhang2012review}.
Two popular choices of nonconvex penalty functions
are SCAD \citep{FL:2001} and MCP \citep{Zhang:2010}.
The SCAD penalty function is given by
\begin{align}
\label{scad}
p_{\lambda}(|\beta_j|) = \begin{cases}
\lambda |\beta_j|, &\mbox{ if } |\beta_j|\leq \lambda,\\
\frac{2a \lambda |\beta_j| - \beta_j^2-\lambda^2}{2(a-1)},&\mbox{ if } \lambda<|\beta_j|< a\lambda,\\
\frac{(a+1)\lambda^2}{2},&\mbox{ if } |\beta_j|\geq a\lambda,
\end{cases}
\end{align}
where $a>2$ is a constant and \citet{FL:2001} recommended the choice $a = 3.7$.
The MCP penalty function is given by
\begin{align}
\label{mcp}
p_{\lambda}(|\beta_j|) = \begin{cases}
\lambda |\beta_j|-\frac{\beta_j^2}{2a}, &\mbox{ if } |\beta_j|\leq a\lambda,\\
\frac{a\lambda^2}{2},&\mbox{ if } |\beta_j|> a\lambda,
\end{cases}
\end{align}
where $a>1$ is a constant.
Figure~\ref{fig:nonconvex} depicts the two penalty functions.
\begin{figure}
\centering
\includegraphics[scale=0.4]{penalty_plot.pdf}
\caption{SCAD and MCP penalty functions ($\lambda=1$)}
\label{fig:nonconvex}
\end{figure}
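For reference, the two penalty functions in (\ref{scad}) and (\ref{mcp}) can be evaluated with a few lines of code. The Python sketch below is only an illustrative re-implementation of the formulas above (function names are ours; the defaults $\lambda=1$, $a=3.7$ for SCAD and $a=3$ for MCP are the conventional choices used for illustration).
\begin{verbatim}
import numpy as np

def scad_penalty(beta, lam=1.0, a=3.7):
    # SCAD penalty p_lambda(|beta_j|), evaluated elementwise.
    b = np.abs(beta)
    mid = (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1))
    return np.where(b <= lam, lam * b,
                    np.where(b < a * lam, mid, (a + 1) * lam ** 2 / 2))

def mcp_penalty(beta, lam=1.0, a=3.0):
    # MCP penalty p_lambda(|beta_j|), evaluated elementwise.
    b = np.abs(beta)
    return np.where(b <= a * lam, lam * b - b ** 2 / (2 * a), a * lam ** 2 / 2)
\end{verbatim}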
As cross-validation for Lasso aims for prediction accuracy, it tends
to select a somewhat smaller tuning parameter (i.e., less regularization).
The resulting model size is hence usually larger than the true model size.
In the fixed $p$ setting, \citet{Wang2007} proved that with a positive probability cross-validation leads to a
tuning parameter that would yield an over-fitted model.
Recent research has shown that when nonconvex penalized regression is combined with some modified BIC-type
criterion, the underlying model can be identified with probability approaching one under appropriate regularity conditions.
Several useful results were obtained in the low-dimensional setting.
For example, an effective Bayesian information criterion (BIC) type
criterion for tuning parameter selection in nonconvex penalized regression
was investigated by Wang, Li and Tsai (2007) for fixed $p$
and by Wang, Li and Leng (2009) for diverging $p$ (but $p<n$).
\citet{zou2007degrees} considered Akaike information criterion (AIC) and BIC type criterion based on the degrees
of freedom for Lasso.
Also in the fixed $p$ setting, \citet{Zhang2010} studied the generalized
information criterion, which encompasses AIC and BIC.
They showed that the BIC-type selector identifies
the true model consistently and that the AIC-type selector is asymptotically loss
efficient.
In the rest of this section, we review several modified BIC-type criteria in the high-dimensional setup ($p\gg n$)
for tuning parameter selection with the goal of support recovery.
\subsection{Extended BIC for comparing models when $p\gg n$}
Let $\mathcal{S}$ be an arbitrary subset of $\{1,\cdots,p\}$. Hence, each $\mathcal{S}$
indexes a candidate model. Given the data (${\bf X},{\bf y}$),
the classical BIC, proposed by \citet{schwarz1978}, is defined as follows
$$\mbox{BIC}(\mathcal{S}) = -2\log L_n\{\widehat{\vbeta}(\mathcal{S})\} + ||\mathcal{S}||_0\log n,$$
where $L_n(\cdot)$ is the likelihood function,
$\widehat{\vbeta}(\mathcal{S})$ is the maximum likelihood estimator for the model with support $\mathcal{S}$,
and $||\mathcal{S}||_0$ is the cardinality of the set $\mathcal{S}$.
Given different candidate models, BIC selects the model with support $\mathcal{S}$ such that
BIC($\mathcal{S}$) is minimized.
In the classical framework where $p$ is small and fixed, it is known \citep{Rao1989} that under standard conditions
BIC is variable selection consistent, i.e., $\mathcal{S}_0$ is identified with probability approaching one
as $n\rightarrow \infty$ if the true model is in the set of candidate models.
However, in the large $p$ setting, the number of candidate models grows exponentially fast in $p$.
The classical BIC is no longer computationally feasible.
\citet{chen2008} were the first to rigorously study the extension of BIC to high-dimensional regression where $p\gg n$.
They proposed an extended family of BIC of the form
\begin{align}
\mbox{BIC}_\gamma(\mathcal{S}) = -2\log L_n\{\widehat{\vbeta}(\mathcal{S})\} + ||\mathcal{S}||_0\log n + 2\gamma\log \binom p{||\mathcal{S}||_0},\label{BICchen}
\end{align}
where $\gamma\in[0,1]$. Compared with the classical BIC, the above modification incorporates the model size in the
penalty term.
It was proved that if $p=O(n^\kappa)$ for some constant $\kappa$, and $\gamma > 1 -(2\kappa)^{-1}$, then this extended BIC is
variable selection consistent under some regularity conditions.
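As an illustration, the criterion (\ref{BICchen}) can be computed for any candidate support in the Gaussian linear model by profiling out the error variance, so that $-2\log L_n$ equals $n\log(\mbox{RSS}/n)$ up to an additive constant. The Python sketch below (names are ours; a model without intercept is assumed) does exactly this.
\begin{verbatim}
import numpy as np
from math import lgamma, log

def log_binom(p, k):
    # log of the binomial coefficient C(p, k)
    return lgamma(p + 1) - lgamma(k + 1) - lgamma(p - k + 1)

def ebic(y, X, support, gamma=1.0):
    # Extended BIC for a Gaussian linear model without intercept,
    # with -2 log L_n replaced by n log(RSS / n) up to a constant.
    n, p = X.shape
    k = len(support)
    if k == 0:
        rss = float(np.sum(y ** 2))
    else:
        Xs = X[:, list(support)]
        beta_hat, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ beta_hat) ** 2))
    return n * log(rss / n) + k * log(n) + 2 * gamma * log_binom(p, k)
\end{verbatim}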
\citet{kim2012consistent} also investigated variants of extended BIC
for comparing models for high-dimensional least-squares regression.
\subsection{HBIC for tuning parameter selection and support recovery}
The extended BIC is most useful if a candidate set of models is provided and if the true model is contained in such a candidate set
with high probability. One practical choice is to construct such a set of candidate models
from a Lasso solution path. As Lasso requires stringent conditions on the design matrix ${\bf X}$
to be variable selection consistent, it is usually not guaranteed that the Lasso solution path contains the oracle
estimator, that is, the estimator corresponding to the support set $\mathcal{S}_0$.
Alternatively, one may construct a set of candidate models from the solution path of SCAD or MCP.
However, as the objective function of SCAD or MCP is nonconvex,
multiple minima may be present. The solution path of SCAD or MCP hence
may be non-unique and does not necessarily contain the oracle estimator.
Even if a solution path is known to contain the oracle estimator,
to find the optimal tuning parameter which yields the oracle estimator
with theoretical guarantee is challenging in high dimension.
To overcome these difficulties,
\citet{wang2013} thoroughly studied how to calibrate non-convex penalized least squares
regression to find the optimal tuning parameter for support recovery when $p\gg n$.
Define a consistent solution path to be a path that contains the oracle estimator with probability approaching one.
\citet{wang2013} first proved that an easy-to-calculate calibrated CCCP (CCCP stands for ConCave Convex procedure)
algorithm produces a consistent solution path.
Furthermore, they proposed HBIC, a high-dimensional BIC criterion, and proved that it can be applied
to the solution path to select the optimal tuning parameter which
asymptotically identifies the oracle estimator.
Let $\widetilde{ \bm{\beta}}(\lambda)$ be
the solution corresponding to $\lambda$ on a consistent solution path, for example, the one obtained
by the aforementioned calibrated nonconvex-penalized regression with SCAD or MCP penalty.
HBIC selects the optimal tuning parameter $\lambda$ in $\Lambda_n=\{\lambda: ||\widetilde{ \bm{\beta}}(\lambda)||_0\leq K_n \}$, where $K_n$ is allowed to diverge to infinity, by minimizing
\begin{align}
\mbox{HBIC}(\lambda) = \log \Big\{\frac{1}{n}||{\bf y}-{\bf X} \widetilde{ \bm{\beta}}(\lambda)||^2\Big\} + ||\widetilde{ \bm{\beta}}(\lambda)||_0 \frac{C_n\log p}{n},\label{HBIC}
\end{align}
where $C_n$ diverges to infinity.
\citet{wang2013} proved that if $C_n || \bm{\beta}_0||_0\log p= o(n)$ and $K_n^2\log p\log n =o(n)$, then under mild conditions, HBIC identifies the true model with probability approaching one. For example, one can take $C_n=\log(\log n)$. Note that
the consistency is valid in the ultra-high dimensional setting, where $p$ is allowed to grow exponentially fast in $n$.
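In practice, (\ref{HBIC}) is evaluated along the solution path and the minimizing $\lambda$ is kept. The following Python sketch (names are ours; $C_n=\log(\log n)$ is used as the illustrative default mentioned above) shows this selection step.
\begin{verbatim}
import numpy as np

def hbic_select(y, X, betas, lambdas, C_n=None, K_n=None):
    # betas[j] is the path estimate at lambdas[j]; returns the lambda
    # minimizing HBIC over models of size at most K_n (if K_n is given).
    n, p = X.shape
    C_n = np.log(np.log(n)) if C_n is None else C_n
    best_val, best_lam = np.inf, None
    for beta, lam in zip(betas, lambdas):
        k = int(np.sum(beta != 0))
        if K_n is not None and k > K_n:
            continue
        rss = np.sum((y - X @ beta) ** 2)
        val = np.log(rss / n) + k * C_n * np.log(p) / n
        if val < best_val:
            best_val, best_lam = val, lam
    return best_lam
\end{verbatim}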
In addition, \citet{wang2011consistent} studied a variant of HBIC in combination with a sure screening procedure.
\citet{Fan2013} investigated
proxy generalized information criterion,
a proxy of the generalized information criterion \citep{Zhang2010} when $p\gg n$.
They identified a range
of complexity penalty levels such that the tuning parameter that is selected by optimizing the proxy generalized information
criterion can achieve model selection consistency.
\section{A real data example}
We consider the data set sp500 in the R package \texttt{scalreg}, which contains a year's worth of close-of-day data for most of the Standard and Poors 500 stocks.
The response variable \texttt{sp500.percent} is the daily percentage change.
The data set has 252 observations of 492 variables.
We demonstrate the performance of Lasso with $K$-fold cross validation, scaled Lasso and $\sqrt{\mbox{Lasso}}$ methods
on this example. Other methods reviewed in this paper which do not yet have publicly available software packages
are not implemented. We evaluate the performance of different methods based on 100 random splits.
For each split, we randomly select half of the data to train the model, and
then compute the $L_1$ and $L_2$-prediction errors and estimated model sizes
on the other half of the data. For Lasso, we select the tuning parameter by $10$-fold cross validation, using the
R function ``cv.glmnet'' and the one-standard-error rule. For scaled Lasso, we apply the default tuning parameter selection
method in the R function ``scalreg'', which is the quantile-based penalty level (\bfblue{lam0=``quantile''}) introduced and studied in \citet{Sun2013}. For the $\sqrt{\mbox{Lasso}}$ method, we use the R function ``slim'' to train the model. However, the package does not have a built-in tuning
parameter selection method. As the optimal tuning parameter depends on the tail behavior of the random error,
it is also chosen by 10-fold cross validation.
Table~\ref{realdata} summarizes the averages and standard deviations of the $L_1$- and $L_2$-prediction errors
and estimated model sizes for the three methods with 100 random splits. Lasso and $\sqrt{\mbox{Lasso}}$ have similar performance, though Lasso method tends to yield sparser models. Scaled Lasso has slightly larger prediction errors and model sizes.
The difference may be due to the non-normality of the data, which would affect the performance of the default tuning parameter selection
method in the ``scalreg'' function.
\begin{table}[!h]
\centering
\caption{Analysis of sp500 data}\label{realdata}
\begin{tabular}{c|ccc}
\hline
& Lasso & Scaled Lasso & $\sqrt{\mbox{Lasso}}$ \\ \hline
$L_1$ error & 0.17 (0.02) & 0.21 (0.02) & 0.17 (0.02) \\
$L_2$ error & 0.05 (0.01) & 0.08 (0.03) & 0.05 (0.01) \\
Sparsity & 60.03 (5.39) & 120.82 (4.70) & 76.63 (8.27) \\
\hline
\end{tabular}
\end{table}
\section{Discussions}
Developing computationally efficient tuning parameter selection methods with theoretical guarantees
is important for many high-dimensional statistical problems but has so far only received limited attention
in the current literature.
This paper reviews several commonly used tuning parameter selection approaches
for high-dimensional linear regression and provides some insights on how they work.
The aim is to bring more attention to this important
topic to help stimulate future fruitful research in this direction.
This review article has focused on regularized least-squares type estimation procedures
for sparse linear regression.
The specific choice of tuning parameter necessarily depends on the user's own research objectives:
Is prediction the main research goal? Or is identifying relevant variables of more importance?
How much computational time is the researcher willing to allocate? Is robustness of any concern
for the data set under consideration?
The problem of tuning parameter selection is ubiquitous and has been investigated in
settings beyond sparse linear least squares regression.
\citet{Lee2014} extended the idea of extended BIC to high-dimensional quantile regression. They recommended to select the model that minimizes
\begin{align}
\mbox{BIC}_{\mbox{Q}}(\mathcal{S}) = \log\Big\{\sum_{i=1}^n\rho_\tau\big(y_i-{\bf x}_{i}^T \widehat{\vbeta}(\mathcal{S})\big) \Big\} + (2n)^{-1}C_n ||\mathcal{S}||_0\log n,\label{QHBIC}
\end{align}
where $\rho_\tau(u) = 2u\big(\tau-I(u<0)\big)$ is the quantile loss function, and $C_n$ is some positive constant that diverges to infinity as $n$ increases. They also proved a variable selection consistency property when $C_n\log n / n\rightarrow 0$ under some regularity conditions.
\citet{BC2011} and \citet{koenker2011} considered tuning parameter selection for penalized quantile regression based
on the pivotal property of the quantile score function. \citet{wang2012} considered tuning parameter selection using
cross-validation with the quantile loss function. For support vector machines (SVM), a widely used approach for classification,
\citet{zhang2016consistent} recently established the consistency of extended BIC type criterion for tuning
parameter selection in the high-dimensional setting.
For semiparametric regression models, \citet{xie2009scad} explored cross-validation for high-dimensional partially linear mean regression;
\citet{sherwood2016partially} applied an extended BIC type criterion for high-dimensional partially linear additive quantile regression.
\citet{datta2017cocolasso} derived a corrected cross-validation procedure for high-dimensional
linear regression with error in variables.
In \citet{guo2016high}, an extended BIC type criterion was used for
high-dimensional and banded vector autoregressions. In studying high-dimensional panel data,
\citet{kock2013oracle} empirically investigated both cross validation and BIC for tuning parameter selection.
Although the basic ideas of cross validation and BIC can be intuitively generalized to more complex
modeling settings, their theoretical justifications are often still lacking despite the promising numerical
evidence. It is worth emphasizing that intuition is not always reliable
and that theoretical insights can be valuable. For instance, when investigating
high-dimensional graphs and variable selection with the lasso,
\citet{meinshausen2006high} observed that
the consistency of neighborhood selection hinges on the choice of the penalty parameter.
The oracle value for optimal prediction does not lead to a consistent
neighborhood estimate.
| {
"timestamp": "2019-08-13T02:04:14",
"yymm": "1908",
"arxiv_id": "1908.03669",
"language": "en",
"url": "https://arxiv.org/abs/1908.03669",
"abstract": "Penalized (or regularized) regression, as represented by Lasso and its variants, has become a standard technique for analyzing high-dimensional data when the number of variables substantially exceeds the sample size. The performance of penalized regression relies crucially on the choice of the tuning parameter, which determines the amount of regularization and hence the sparsity level of the fitted model. The optimal choice of tuning parameter depends on both the structure of the design matrix and the unknown random error distribution (variance, tail behavior, etc). This article reviews the current literature of tuning parameter selection for high-dimensional regression from both theoretical and practical perspectives. We discuss various strategies that choose the tuning parameter to achieve prediction accuracy or support recovery. We also review several recently proposed methods for tuning-free high-dimensional regression.",
"subjects": "Methodology (stat.ME); Machine Learning (cs.LG); Machine Learning (stat.ML)",
"title": "A Survey of Tuning Parameter Selection for High-dimensional Regression",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9669140244715405,
"lm_q2_score": 0.7341195385342971,
"lm_q1q2_score": 0.7098304774473874
} |
https://arxiv.org/abs/math/0608650 | The boundary behavior of holomorphic functions: Global and local results | We develop a new technique for studying the boundary limiting behavior of a holomorphic function on a domain $\Omega$ -- both in one and several complex variables. The approach involves two new localized maximal functions. As a result of this methodology, theorems of Calderón type about local boundary behavior on a set of positive measure may be proved in a new and more natural way. We also study the question of nontangential boundedness (on a set of positive measure) versus admissible boundedness. Under suitable hypotheses, these two conditions are shown to be equivalent. | \section{Introduction}
The first theorem about the boundary limiting behavior of holomorphic
functions was proved by P. Fatou in his thesis in 1906 [FAT]. He used Fourier
series techniques to show that if $f$ is a bounded, holomorphic
function on the unit disc $D \subseteq {\Bbb C}$ (i.e., $f \in H^\infty(D)$), then
$$
\lim_{r \rightarrow 1^-} f(r e^{i\theta})
$$
exists for almost every $\theta \in [0,2\pi)$. Furthermore,
for $\alpha > 1$ and $P \in \partial D$, we define
$$
\Gamma_\alpha(P) = \{z \in D: |z - P| < \alpha (1 - |z|)\} \, .
$$
This is the {\it nontangential} or {\it Stolz} approach region.
Then Fatou showed that, for $f \in H^\infty(D)$ and $\alpha > 1$ fixed,
the limit
$$
\lim_{\Gamma_\alpha(P) \ni z \rightarrow P} f(z)
$$
exists for almost every $P \in \partial D$.
Later on, Privalov [PRI1], [PRI2], Plessner [PLE], and others refined Fatou's result (see [DIK] for
a detailed account of the history). The standard
theorem today is that, if $0 < p \leq \infty$, if $\alpha > 1$ is fixed, and if
$f \in H^p(D)$ (the Hardy space), then
$$
\lim_{\Gamma_\alpha(P) \ni z \rightarrow P} f(z)
$$
exists for almost every $P \in \partial D$. With suitable estimates for the Poisson
kernel (see [KRA8]), one can prove a similar result
(for nontangential convergence) on any bounded domain in ${\Bbb C}$ with $C^2$
boundary. The result may be refined further so that it is valid for functions
in the Nevanlinna class (see [GAR] as well as our
Section 9).
A theorem of Littlewood (see [LIT]) shows
that, in a very precise sense, the nontangential approach regions $\Gamma_\alpha$
are the {\it broadest} approach regions through which a theorem of this kind
can be obtained---even for bounded holomorphic functions.
On a bounded domain in ${\Bbb C}^n$ with $C^2$ boundary, it is classical that a
holomorphic function satisfying suitable growth conditions---say
membership in the Hardy class $H^p$ or even the Nevanlinna class (see Section 9)---will
have nontangential boundary
limits almost everywhere with respect to $(2n-1)$-dimensional area measure $d\sigma$
on the boundary. In the present context, ``nontangential'' means
through an approach region of {\it conical shape}:
$$
\Gamma_\alpha(P) = \{z \in \Omega: |z - P| < \alpha \cdot \delta_{\partial \Omega}(z)\} \, ,
$$
with $\delta_{\partial\Omega}(z)$ denoting the Euclidean distance of $z$
to $\partial \Omega$.
Also, if $\Omega = \{z \in {\Bbb C}^n: \rho(z) < 0\}$, so that $\rho$ is a defining
function for $\Omega$ and if $0 < p < \infty$, then
$$
H^p(\Omega) = \{f \ \hbox{on} \ \Omega: \sup_{0 < \epsilon < \epsilon_0} \Big( \int_{\partial \Omega_\epsilon}
|f(\zeta)|^p \, d\sigma(\zeta) \Big)^{1/p} \equiv \|f\|_{H^p(\Omega)} < \infty\} \, .
$$
Also $H^\infty(\Omega)$ consists of the bounded holomorphic functions
with the supremum norm.
Here $\epsilon_0$ is a small, positive number and
$\Omega_\epsilon \equiv \{z \in \Omega: \rho(z) < - \epsilon\}$.
Details connected with this definition may be found in [KRA1].
See [TSU], [ZYG1], [ZYG2], [ZYG3]
for historical background of these ideas.
The proof of such a theorem depends once again
on having the appropriate
estimates for the Poisson kernel (see [KRA8], [KRA1]).
It came as quite a surprise when, in 1970, Adam Koranyi [KOR1], [KOR2]
showed that a broader method of approach than
nontangential is valid when the domain in
question is the unit ball in ${\Bbb C}^n$. To wit, let $B = \{z \in {\Bbb C}^n:
|z|^2 < 1\}$. For $\alpha > 1$ and $P \in \partial B$, define
$$
{\cal A}_\alpha(P) = \{z \in B: |1 - z \cdot \overline{P}|
< \alpha (1 - |z|)\} \, .
$$
Here, as is standard, $z \cdot \overline{P} \equiv \sum_j z_j \overline{P}_j$. One
may calculate (see [KRA1]) that the approach region ${\cal A}_\alpha$
has nontangential shape in complex normal directions but parabolic shape in
complex tangential directions. Koranyi's result
is that, if $f \in H^p(B)$, then
$$
\lim_{{\cal A}_\alpha(P) \ni z \rightarrow P} f(z)
$$
exists for $\sigma$-almost every $P \in \partial B$. Koranyi's proof
depends decisively on an analysis of the shape of the
singularity of the Poisson-Szeg\H{o} kernel
$$
{\cal P}(z, \zeta) = \frac{(n - 1)!}{2\pi^n} \frac{(1 - |z|^2)^n}{|1 - z\cdot \overline{\zeta}|^{2n}}
$$
for the ball (which shape is decidedly different from the shape
of the singularity for the classical Poisson kernel---see
[KRA1], [KRA8]). Put in other words, where the classical results
depend on estimates for the standard Poisson kernel (in
particular, an analysis of its singularity), the new results
of Koranyi required estimates on the Poisson-Szeg\H{o} kernel.
In 1972, E. M. Stein [STE1] showed how to prove a result like
Koranyi's on {\it any} domain in ${\Bbb C}^n$ with $C^2$ boundary.
His analysis (later refined by Barker---see [BAR] and the discussion in
[KRA1]) avoids the use of canonical kernels, but instead
depends on an analysis of the Levi geometry of the
domain. See also [LEM] for a quite different and
original approach to these matters.
We now realize that Stein's result was an important first step,
but it is far from the optimal result for most domains. More
precisely, the parabolic approach in complex tangential
directions is really only suitable at {\it strongly
pseudoconvex} boundary points. For points of {\it finite type}
$m$, an approach region which has aperture $\hbox{Im}\, z_1 = |z'|^m$
(where $z_1$ is the complex normal direction and $z'$ the
remaining complex tangential directions---see the discussion
below) is the right idea.\footnote{It is instructive to
examine the boundary behavior of a holomorphic function at a
point $P \in \partial \Omega$ which is {\it strongly
pseudoconcave}. By the {\it Kontinuit\"{a}tssatz} of
multivariable complex function theory (or the Hartogs
extension phenomenon), {\it any} holomorphic function on
$\Omega$ will continue analytically to an entire neighborhood
of $P \in \partial \Omega$. So the correct approach region at
such a boundary point will be unrestricted. This observation
is quite different from the result of Koranyi/Stein.}
But in fact the analysis is much more subtle than that, for it
is not just the {\it type} of the boundary point but the {\it
magnitude of that type} that must play a role. The type and
the magnitude of the type depend semicontinuously on $P \in
\partial \Omega$ when $\Omega \subseteq {\Bbb C}^2$. The dependence is
more subtle for $\Omega \subseteq {\Bbb C}^n$, $n > 2$. The calculations
in [NSW1], [NSW2] (see also [KRA9]) begin to show how to tame
these rather complicated ideas.
Now let us examine a slightly different direction of these studies.
It is an old result of A. P. Calder\'{o}n [CAL] that if a function
$u$, harmonic on the upper half-space ${\Bbb R}^{N+1}_+$, is
nontangentially bounded on a set $E \subseteq {\Bbb R}^N \equiv \partial
{\Bbb R}^{N+1}_+$ of positive $N$-dimensional measure then $u$ has
nontangential boundary limits at almost every point of $E$. In
the important book [STE1], E. M. Stein proved an analogous
result for a holomorphic function on a strongly
pseudoconvex domain $\Omega \subseteq
{\Bbb C}^n$ and for admissible boundedness (see [KRA1] as well as
the forthcoming [DIK] for a discussion of both nontangential
and admissible approach regions). Of course it is true
that a holomorphic function on $\Omega \subseteq {\Bbb C}^n$
that is nontangentially bounded on a set $E \subseteq \partial \Omega$ of
positive $(2n-1)$-dimensional measure will have nontangential limits
almost everywhere on $E$---see [CAL], [STE1] and references therein.
So certainly a function that is {\it admissibly bounded} on
a set $E \subseteq \partial \Omega$ will have nontangential limits
almost everywhere in $E$, just because admissible boundedness is
a stronger condition than nontangential boundedness. One of the
main points of the present paper is to give a new proof of a fairly
general version of the Calder\'{o}n theorem for admissible
approach regions---one in which the admissible regions have
a geometry that is adapted to the particular domain under study (see Section 8).
Thus the result presented here is more general than that in
[STE1] or [BAR]---just because the approach regions now
fit the Levi geometry.
One of the main points of the present work is to make a comparison
between nontangential behavior of holomorphic functions of several
variables and admissible behavior. We prove the somewhat surprising
result that if a holomorphic function on the ball $B \subseteq {\Bbb C}^n$ is
{\it nontangentially bounded} almost everywhere on a set $E \subseteq \partial B$ of positive
measure then it is in fact {\it admissibly} bounded almost everywhere
on $E$. Discussion, context, and proof appear below.
\section{Nontangential Boundary Behavior Versus \hfill \break
\null \indent Admissible Boundary Behavior}
There are a variety of results in the subject that link, or at least
compare and contrast, the isotropic behavior suggested by
nontangential approach regions with the nonisotropic behavior suggested
by admissible approach regions. Perhaps the first result of this
kind was announced by Stein in [STE2]. The details of
the argument appear in [KRA1]. The result states that
a holomorphic function on a smoothly bounded domain $\Omega$ in ${\Bbb C}^n$ which
is in a classical Lipschitz function space is in
fact in a stronger nonisotropic function space.
Roughly speaking, such a function is automatically twice
as smooth in tangential directions. This result is in fact
valid for all $0 < \alpha < \infty$. We now provide some
of the concepts and details pertaining to Stein's result.
For $0 < \alpha < 1$ and $\Omega \subseteq {\Bbb R}^N$, we define
$$
\Lambda_\alpha(\Omega) = \{f: \Omega \rightarrow {\Bbb C} :
|f(x+h) - f(x)| \leq C |h|^\alpha \ \hbox{for all}\ x, x+h\in \Omega\} \, .
$$
We equip $\Lambda_\alpha$ with the norm
$$
\|f\|_{\Lambda_\alpha} = \sup_{x, x+h \in \Omega \atop h \ne 0}
\frac{|f(x+h) - f(x)|}{|h|^\alpha} + \|f\|_{L^\infty(\Omega)} \, .
$$
If $\alpha = 1$, then we use the slightly more subtle definition
\begin{eqnarray*}
\Lambda_1(\Omega) & = & \left \{ f: \Omega \rightarrow {\Bbb C} :
|f(x + h) + f(x - h) - 2f(x)| \right. \\
\null \qquad \qquad && \hbox{ \ \ \ \ } \leq \left. C |h| \ \hbox{for all}\ x, x+h, x-h
\in \Omega \right \} \, .
\end{eqnarray*}
We equip $\Lambda_1$ with the norm
$$
\|f\|_{\Lambda_1} = \sup_{x, x+h, x-h \in \Omega \atop h \ne 0}
\frac{|f(x + h) + f(x - h) - 2f(x)|}{|h|}
+ \|f\|_{L^\infty(\Omega)} \, .
$$
Inductively, if $\alpha > 1$, we say
that $f \in \Lambda_\alpha(\Omega)$ if
$f$ is continuously differentiable, $f \in \Lambda_{\alpha-1}$,
and $\nabla f \in \Lambda_{\alpha - 1}$. The norm is
$$
\|f\|_{\Lambda_\alpha} = \|f\|_{\Lambda_{\alpha-1}(\Omega)} +
\|\nabla f\|_{\Lambda_{\alpha-1}(\Omega)} \, .
$$
Assume that the domain $\Omega$ has $C^2$ boundary.
Let $U$ be a tubular neighborhood of
$\partial \Omega$ (see [HIR]).
Thus each point in $U$ has a unique nearest point in $\partial \Omega$
and there is a well-defined Euclidean orthogonal projection $\pi: U \rightarrow \partial \Omega$.
We say that a $C^\infty$ curve $\gamma: [0,1] \rightarrow \Omega \cap U$ lies
in ${\cal C}^1(\Omega)$ if {\bf (i)} $|\dot \gamma(t)| \leq 1$ for
all $t$ and {\bf (ii)} $\dot \gamma(t)$ lies in the complex
tangent space (see [KRA1]) at $\pi(\gamma(t))$ for each $t$. We think
of such a $\gamma$ as a ``normalized complex tangential curve''.
Let $0 < \alpha < \infty$. Following E. M. Stein [STE2], we set
$$
\Gamma_{\alpha, 2\alpha}(\Omega) = \{f \in \Lambda_\alpha(\Omega):
f \circ \gamma \in \Lambda_{2\alpha}([0,1]) \ \hbox{for each} \
\gamma \in {\cal C}^1(\Omega)\} \, .
$$
Thus a function in $\Gamma_{\alpha, 2\alpha}(\Omega)$ is smooth
of order $\Lambda_\alpha$ in {\it all directions}, but smooth
of order $\Lambda_{2\alpha}$ in complex tangential directions.
In fact it is convenient to think about an $f \in \Gamma_{\alpha,2\alpha}$
as a function that is $\Lambda_\alpha$ along complex normal curves
and is $\Lambda_{2\alpha}$ along complex tangential curves. This
point of view has been developed, among other places, in
[KRA3]--[KRA6]. See also [GRS] and [RUD].
Stein's remarkable theorem about these function spaces is as follows.
\begin{theorem} \sl
Let $\Omega \subseteq {\Bbb C}^n$ be a domain with $C^2$ boundary.
Let $\alpha > 0$.
Let $f$ be a holomorphic function on $\Omega$ which lies
in $\Lambda_\alpha(\Omega)$. Then $f \in \Gamma_{\alpha, 2\alpha}(\Omega)$.
\end{theorem}
\noindent This result has been refined and
generalized in [KRA3]--[KRA6].
In [KRA2] it was shown that the analogous result for the space $BMO$
of functions of bounded mean oscillation fails.
To wit, if $\Omega =B \subseteq {\Bbb C}^n$ is the unit ball,
then there are two types of balls to consider in the boundary $\partial B$:
If $P \in \partial B$ and $r > 0$ then
$$
\beta_1(P,r) = \{z \in \partial B: |z - P| < r\}
$$
and
$$
\beta_2(P,r) = \{z \in \partial B: |1 - z \cdot \overline{P}| < r\} \, .
$$
Notice that $\beta_1$ is the standard, isotropic Euclidean ball whereas
$\beta_2$ is a nonisotropic ball with extent $r$ in the complex normal
direction and extent $\sqrt{r}$ in the complex tangential directions.
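For later reference we record the familiar size comparison between these two families of balls. A quick dimension count (the nonisotropic ball has extent roughly $r$ in one real direction and roughly $\sqrt{r}$ in the remaining $2n-2$ real tangential directions) gives, for small $r > 0$,
$$
\sigma(\beta_1(P,r)) \approx c_n \, r^{2n-1}
\qquad \hbox{while} \qquad
\sigma(\beta_2(P,r)) \approx c_n' \, r \cdot \bigl(\sqrt{r}\,\bigr)^{2n-2} = c_n' \, r^n \, ,
$$
with dimensional constants $c_n, c_n'$. Thus the two families of balls scale quite differently in $r$.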
We define
$$
BMO_1(\partial B) = \left \{ g \ \hbox{on} \ \partial B: \sup_{z, r} \frac{1}{\sigma(\beta_1(z,r))}
\int_{\beta_1(z,r)} |g(\zeta) - g_{\beta_1(z,r)} | \, d\sigma(\zeta) < \infty \right \} \, .
$$
Here $d\sigma$ is $(2n-1)$-dimensional Hausdorff measure
and $g_S$ denotes the average of $g$ over the set $S$: $g_S =
[1/\sigma(S)] \cdot \int_S g(\zeta) \, d\sigma(\zeta)$.
Likewise
$$
BMO_2(\partial B) = \left \{ g \ \hbox{on} \ \partial B: \sup_{z, r} \frac{1}{\sigma(\beta_2(z,r))}
\int_{\beta_2(z,r)} |g(\zeta) - g_{\beta_2(z,r)} | \, d\sigma(\zeta) < \infty \right \} \, .
$$
Say that $f$ on the ball is in ${\cal BMO}_1(B)$ if $f$ is holomorphic, $f \in H^2$,
and the boundary function of $f$ is in fact in $BMO_1$.
Say that $f$ on the ball is in ${\cal BMO}_2(B)$ if $f$ is holomorphic, $f \in H^2$,
and the boundary function of $f$ is in fact in $BMO_2$.
The result of [KRA2] is that there is a
holomorphic function on the ball in ${\Bbb C}^n$ which is in classical
isotropic ${\cal BMO}_1$ but not in nonisotropic ${\cal BMO}_2$. That proof used
quite a lot of functional analysis, and did not exhibit the
counterexample explicitly. A more concrete proof, with an explicit
example, was given in [ULR]. The result of [KRA2] was particularly
surprising because it showed as a byproduct that $BMO$ is not an interpolation
space between $L^p$ and $\Lambda_\alpha$.
In view of these results, it is natural to wonder whether a
holomorphic function on a domain $\Omega$ in ${\Bbb C}^n$ that is
nontangentially bounded almost everywhere on a set $E \subseteq \partial \Omega$ will
in fact be admissibly bounded almost everywhere on $E$. If
this were true, then it would follow, at least for a
reasonable class of domains $\Omega$, that a function that is
nontangentially bounded on a set $E \subseteq \partial \Omega$ of
positive measure will in fact have admissible limits almost
everywhere on $E$. In view of the Lindel\"{o}f principle developed
in [CIK] and [KRA12], this is a very natural sort of result. And it turns
out to be true. Its proof is one of the main results of the present
paper.
\section{Definitions and Prior Results}
We take this opportunity to review the standard definitions and
concepts pertaining to this subject. The reference [KRA1] is
a good source for the details. See also [KRA11].
We begin with harmonic analysis on ${\Bbb R}^{N+1}$, which is the most natural
setting for the consideration of nontangential convergence.
Define the upper half space $$ U = {\Bbb R}^{N+1}_+ = \{x = (x_1,
x_2, \dots, x_N, x_{N+1}):
x_{N+1} > 0\} \, .
$$
We often shall write an element of ${\Bbb R}^{N+1}_+$ as $(x', x_{N+1})$, where
$x' \in {\Bbb R}^N$ and $x_{N+1} > 0$. Of course the boundary of ${\Bbb R}^{N+1}_+$
can be identified with ${\Bbb R}^N = \{(x_1, \dots, x_N, 0)\}$ in a natural way.
If $\alpha > 1$ and $P \in \partial {\Bbb R}^{N+1}_+ \equiv {\Bbb R}^N$,
then we define the {\it Stolz region} or {\it nontangential
approach region} $\Gamma_\alpha$(P) of aperture $\alpha$ at $P$
to be
$$
\Gamma_\alpha(P) = \{x = (x', x_{N+1}) \in {\Bbb R}^{N+1}_+: |x' - P| < \alpha x_{N+1}\} \, .
$$
This is a conical-shaped region in the upper half space. Points
in this region {\it cannot} approach the boundary along a
tangential curve.
Let $f$ be a function on $\RR^{N+1}_+$. We say that $f$ is {\it nontangentially
bounded} on a set $E \subseteq \partial \RR^{N+1}_+$ if, for each $P \in E$,
there is an $\alpha = \alpha(P) > 1$ such that
$f \bigr |_{\Gamma_\alpha(P)}$ is bounded. The bound, of course,
may (and, in general, will) depend on $P$ and on $\alpha$.
But observe that, if $E$ has positive $N$-dimensional measure, then we may use elementary
measure theory to find a set $E' \subseteq E$ of positive measure and
a constant $\alpha' > 1$ and a number $M' > 0$ so that
$|f| \leq M'$ on $\Gamma_{\alpha'}(P')$ for each $P' \in E'$.
Thus we may uniformize the estimate in the definition of ``nontangentially
bounded''.
With notation as in the last paragraph, we say that $f$ has
{\it nontangential limit} on the set $E \subseteq \partial \RR^{N+1}_+$ if,
for each $\alpha > 1$, and each point $P \in E$, the limit
$$
\lim_{\Gamma_\alpha(P) \ni x \rightarrow P} f(x)
$$
exists.\footnote{It is worth noting that the definition
of {\it nontangentially bounded} imposes on each $P \in E$ a
condition involving just one $\alpha$, depending on $P$. But
the definition of {\it nontangential limit} imposes on each
$P \in E$ a condition for all $\alpha$.}
Calder\'{o}n's celebrated theorem [CAL] says this:
\begin{theorem} \sl
Let $u$ be a harmonic function on $\RR^{N+1}_+$. Let $E \subseteq \partial \RR^{N+1}_+$
have positive $N$-dimensional measure. If $u$ is nontangentially bounded
on $E$, then $u$ has nontangential limits almost everywhere on $E$.
\end{theorem}
Calder\'{o}n notes in his paper that his result holds for holomorphic
functions of several complex variables; but that more general
result is also formulated in terms of classical nontangential convergence.
See also [WID]. In fact the concept of admissible convergence would
not be invented for another twenty years.
Let $B \subseteq {\Bbb C}^n$ be the unit ball. Let $f$ be a complex-valued
function on $B$ and let $E \subseteq \partial B$. We say that $f$ is
{\it admissibly bounded} on $E$ if, for each $P \in E$, there
is an $\alpha = \alpha(P) > 1$ such that $f$ is bounded on
${\cal A}_\alpha(P)$. Using elementary measure theory, it may be
seen (in analogy with the situation for classical nontangential convergence)
that if $E \subseteq \partial B$ has positive
$(2n-1)$-dimensional measure then there is a set $E' \subseteq E$ of
positive measure and a number $\alpha' > 1$ and a constant
$M' > 0$ such that $|f|$ is bounded by $M'$ on ${\cal
A}_{\alpha'}(P')$ for each $P' \in E'$. See [STE1].
With notation as in the last paragraph, we say that $f$ has
{\it admissible limit} on the set $E \subseteq \partial B$ if,
for each $\alpha > 1$, and each point $P \in E$, the limit
$$
\lim_{{\cal A}_\alpha(P) \ni z \rightarrow P} f(z)
$$
exists.
Now Stein's theorem [STE1, Theorem 12] states the following:
\begin{theorem} \sl
Let $\Omega \subseteq {\Bbb C}^n$ be a strongly pseudoconvex domain
with $C^2$ boundary (see [KRA1]). Let $E \subseteq \partial \Omega$
be a set of positive $(2n-1)$-dimensional measure. Let
$f$ be a holomorphic function on $\Omega$. Then $f$ is
admissibly bounded almost everywhere on $E$ if and only if $f$ has admissible
limits at almost every point of $E$.
\end{theorem}
The proof of this last result that appears in [STE1] relies on
the potential theory and the Levi geometry of the domain in
question. In particular, it requires the construction of a
special ``preferred'' Levi metric. It also depends on
estimates involving the Lusin area integral; that is to say,
the argument is not direct. S. Ross Barker [BAR] has provided
an alternative, more measure-theoretic approach to the matter
and thereby proved the result to be true on a broad class of
domains (and also avoided the use of the area integral).
Barker only enunciates and proves a result to the effect that
a holomorphic function that is admissibly bounded almost
everywhere (on the entire boundary) then has admissible limits
almost everywhere (on the entire boundary). He comments at the
end that his result can be localized (in the spirit of
Calder\'{o}n). One of the points of the present paper is to
provide a new approach to the Calder\'{o}n result. We can also prove a
sharper version of the theorem, in the sense that we can in
many cases adapt the shape of the approach regions to the Levi
geometry of the particular domain under study (see Section 8).
\section{The Main Results of the Present Paper}
In this section we collect the statements of the main results of the present
paper. We also briefly indicate their context and significance.
Recall once again that, in the pioneering work [STE1], Stein proves
theorems about the boundary behavior of holomorphic functions {\it using
approach regions of the same parabolic complex tangential geometry}, no
matter what the particular intrinsic complex geometry of the domain in
question. It was only in later work (see [NSW1], [NSW2], [KRA9]) that the
mathematical machinery was developed for adapting the shape of the
approach region to the Levi geometry of the domain. The work in [DIB1],
[DIB2], [DIB3] extends the new ideas further. It should be stressed that the
results presented in the present paper build on these ideas. For
instance, the paper [BAR] certainly extends Stein's version of the
Calder\'{o}n local Fatou theorem to any smoothly bounded domain in
${\Bbb C}^n$; but it still used the old parabolic approach regions of Koranyi
and Stein. In the present paper we prove a version of this theorem for
several different types of domains; and we use approach regions that are
specifically adapted to the geometry of the domain in question (see
Sections 6, 8 for the details).
Our theorems do {\it not} apply to an arbitrary
smoothly bounded domain in ${\Bbb C}^n$.
At this stage in the development of our mathematical machinery they
cannot. For all the proofs here require {\bf (i)} that the boundary of the
domain be equipped with a system of balls that, together with standard
$(2n-1)$-dimensional area measure, make the boundary a space of
homogeneous type in the sense of [COW1], [COW2]
and {\bf (ii)} the geometric
structure of the approach regions ${\cal A}_\alpha$ must be compatible (in
a sense to be described in detail below) with the balls from {\bf (i)}. As
of this writing, we know how to carry out such a program on {\bf (a)}
strongly pseudoconvex domains, {\bf (b)} domains of finite type in
${\Bbb C}^2$, and {\bf (c)} finite type, convex domains in ${\Bbb C}^n$. Refer
to Section 8 for the relevant geometric ideas.
\begin{theorem} \sl
Let $\Omega$ be either a strongly
pseudoconvex domain in ${\Bbb C}^n$ or a finite type domain in
${\Bbb C}^2$ or a convex, finite type domain in ${\Bbb C}^n$.
Let $E \subseteq \partial \Omega$ be a set of positive
measure (either 3-dimensional Hausdorff measure for a domain
in ${\Bbb C}^2$ or $(2n-1)$-dimensional Hausdorff measure for a
domain in ${\Bbb C}^n$). Suppose that $f$ is a holomorphic function
on $\Omega$. If $f$ is admissibly bounded at almost every
point of $E$ then $f$ has admissible limits at almost every
point of $E$.
\end{theorem}
\noindent {\bf Discussion:} In the strongly pseudoconvex case,
Stein proves this theorem in [STE1, Theorem 12]. His proof
proceeds by way of a Lusin area integral argument. We provide
a new, more direct proof and also extend the result to finite
type domains in ${\Bbb C}^2$ and convex, finite type domains in
${\Bbb C}^n$. We avoid the use of a special ``preferred'' metric and of the
Lusin area integral and work more
directly with the Levi geometry of the domain.
As part of our treatment of Theorem 4, we shall need to
give a detailed consideration of approach regions for
Fatou theorems on domains of the type under discussion (Section 6).
This is a subtle matter, for the shape of the regions varies
in a sort of semi-continuous manner with the base boundary point.
Further details will also appear in Section 8 below.
\begin{theorem} \sl
Let $f$ be a holomorphic function on the unit ball in ${\Bbb C}^n$, $n > 1$.
Let $E \subseteq \partial B$ be a set of positive $(2n-1)$-dimensional measure.
Then $f$ is nontangentially bounded at almost every point of $E$ if and
only if $f$ is admissibly bounded at almost every point of $E$.
\end{theorem}
\begin{corollary} \rm
Let $f$ be a holomorphic function on the unit ball in ${\Bbb C}^n$, $n > 1$.
Let $E \subseteq \partial B$ be a set of positive $(2n-1)$-dimensional measure.
Assume that $f$ is nontangentially bounded at almost every point of $E$.
Then $f$ has admissible limits at almost every point of $E$.
\end{corollary}
\noindent {\bf Discussion:} In fact this result is valid in considerably
greater generality. But all the key ideas are already present in
the ball case, and matters are clearer when everything may be written
explicitly.
It should be stressed that Theorem 5 is {\it not} true point-by-point.
That is to say, at a particular point of the boundary of $B$ it is not
true that nontangential boundedness implies admissible boundedness.
This circle of questions is closely related to the Lindel\"{o}f principle,
for which see [CIK] and [KRA12].
The theorem answers a fairly old question, one that is rather natural
in view of the discussion in Section 1. This new result puts the whole
idea of admissible convergence into a very natural context.
\section{An Ontology of Maximal Functions}
In this paper we shall use eleven different maximal functions. For the
convenience of the reader, we collect all their definitions here.
We begin by thinking about the most natural and classical setting for
maximal functions, which is the Euclidean space ${\Bbb R}^N$. Let $f$ be
a locally integrable function on ${\Bbb R}^N$. For $x \in {\Bbb R}^N$ we define
$$
Mf(x) = \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(t)| \, dt
$$
and
$$
{\cal M}f(x) = \limsup_{r \rightarrow 0^+} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(t)| \, dt \, .
$$
Here, as usual,
\begin{itemize}
\item The set $B(x,r)$ is the standard isotropic Euclidean ball in ${\Bbb R}^N$ with center $x$ and radius $r > 0$.
\item We let $|B(x,r)|$ denote the $N$-dimensional Lebesgue measure of $B(x,r)$, which
is $c_N r^N$.
\item The measure $dt$ is the standard Lebesgue measure.
\end{itemize}
It is also useful to let
$$
M_\delta f(P) = \sup_{0 < r \leq \delta} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(t)| \, dt \, .
$$
The first maximal operator $M$ is the classical one due to
Hardy and Littlewood. The second ${\cal M}$ is our first new maximal operator.
This maximal function differs from the classical one in that
the supremum has been replaced by the limit supremum. The third maximal
operator $M_\delta$ is another small modification of $M$, restricting
to balls of radius not exceeding $\delta$.
Now let $\Omega \subseteq {\Bbb C}^n$ be a domain on which a notion of admissible
approach region ${\cal A}_\alpha(P)$, $P \in \partial \Omega$, has been
defined---see Section 8. If $g$ is a complex-valued
function on $\Omega$ and $P \in \partial \Omega$ then we define
$$
g_\alpha^{*}(P) = \sup_{z \in {\cal A}_\alpha(P)} |g(z)|
$$
and
$$
g_\alpha^{**}(P) = \limsup_{{\cal A}_\alpha(P) \ni z \rightarrow P} |g(z)| \, .
$$
Now let $B \subseteq {\Bbb C}^n$ be the unit ball.
Of course $\partial B$ is equipped with a family
of isotropic Euclidean balls
$$
\beta_1(P,r) = \{z \in \partial B: |z - P| < r\} \, .
$$
We shall also utilize the nonisotropic balls given by the condition
$$
\beta_2(P,r) = \{z \in \partial B: |1 - z \cdot \overline{P}| < r\} \, .
$$
Corresponding to these two types of balls in $\partial B$ we shall
have two types of maximal functions. Let $d\sigma$ be boundary
area measure. If $\varphi$ is a locally integrable
function on $\partial B$ and $P \in \partial B$ then we set
$$
M_1 \varphi(P) = \sup_{r > 0} \frac{1}{\sigma(\beta_1(P,r))} \int_{\beta_1(P,r)} |\varphi(\zeta)| \, d\sigma(\zeta)
$$
and
$$
M_2 \varphi(P) = \sup_{r > 0} \frac{1}{\sigma(\beta_2(P,r))} \int_{\beta_2(P,r)} |\varphi(\zeta)| \, d\sigma(\zeta) \, .
$$
We also define two maximal functions based on the limsup rather than the supremum:
$$
{\cal M}_1 \varphi (P) = \limsup_{r \rightarrow 0^+} \frac{1}{\sigma(\beta_1(P,r))} \int_{\beta_1(P,r)} |\varphi(\zeta)| \, d\sigma(\zeta)
$$
and
$$
{\cal M}_2 \varphi (P) = \limsup_{r \rightarrow 0^+} \frac{1}{\sigma(\beta_2(P,r))} \int_{\beta_2(P,r)} |\varphi(\zeta)| \, d\sigma(\zeta) \, .
$$
In addition, we shall have two sets of truncated maximal operators as follows:
$$
M_{1,\delta} \varphi (P) = \sup_{0 < r \leq \delta} \frac{1}{\sigma(\beta_1(P,r))} \int_{\beta_1(P,r)} |\varphi(\zeta)| \, d\sigma(\zeta)
$$
and
$$
M_{2,\delta} \varphi (P) = \sup_{0 < r \leq \delta} \frac{1}{\sigma(\beta_2(P,r))} \int_{\beta_2(P,r)} |\varphi(\zeta)| \, d\sigma(\zeta) \, .
$$
\section{Some Estimates for Maximal Functions}
In this section we study some of our new maximal functions. These functions
are rather natural tools for the study of the boundary behavior of
holomorphic functions. Previous studies (see [STE1], [KRA1], [KRA9],
[NSW1], [NSW2]) endeavored to study the entire boundary at once---using
classical maximal functions that were designed for such a purpose.
Our goal here is to localize the process. This change is particularly
propitious for the development of results of Calder\'{o}n type.
\begin{proposition} \sl
The maximal operator ${\cal M}$ is of weak type $(1,1)$ and also
of strong type $(p,p)$ for $1 < p \leq \infty$.
\end{proposition}
{\bf Proof:} The classical Hardy-Littlewood maximal function $M$ is
known (see [STE3]) to be of weak type $(1,1)$ and of strong type $(p,p)$ for
$1 < p \leq \infty$. Clearly ${\cal M} f(x) \leq Mf(x)$
for any $f$. The result follows.
\hfill $\BoxOpTwo$
\smallskip \\
We have formulated and proved Proposition 7 on ${\Bbb R}^N$. But the statement
and proof transfer {\it grosso modo} to the boundary of a $C^2$, bounded
domain in ${\Bbb R}^N$ or ${\Bbb C}^N$. After all, such a boundary is a smooth
manifold hence is locally Euclidean. Put in different terms, this boundary
is certainly a space of homogeneous type (see [COW1], [COW2])
when it is equipped with
isotropic balls and the standard Hausdorff measure on the boundary. Observe that,
thus far, we are not taking the complex structure or the Levi geometry into account.
We are only looking at classical Euclidean geometry.
Our key tool in proving boundary limit theorems for holomorphic
functions is as follows.
\begin{theorem} \sl
Let $\Omega$ be a bounded domain in ${\Bbb C}^n$, $n \geq 2$, with $C^2$ boundary which is of
one of these types:
\begin{itemize}
\item Strongly pseudoconvex domains in ${\Bbb C}^n$;
\item Finite type domains in ${\Bbb C}^2$;
\item Finite type, convex domains in ${\Bbb C}^n$.
\end{itemize}
Let $u$ be a real-valued, nonnegative, plurisubharmonic function on $\Omega$, continuous on $\overline{\Omega}$.
Let $\varphi = u \bigr |_{\partial \Omega}$ be the boundary trace of $u$.
Let $P \in \partial \Omega$ and $\alpha > 1$. Then
$$
u_\alpha^{**}(P) \leq C_\alpha \cdot {\cal M}_2 {\cal M}_1 \varphi (P) \, . \eqno (\star)
$$
\end{theorem}
This is our local version of Lemma 8.6.10 in [KRA1] or Theorem
2, p.\ 11, the Lemma, p.\ 33, and Lemma 1b, p.\ 42 in [STE1].
It will be the key tool in obtaining a suitable version of
Calder\'{o}n's theorem for domains in ${\Bbb C}^n$.
Note that the maximal functions on the righthand side of $(\star)$
are the new localized maximal functions defined in terms of
the limit supremum. Thus they are {\it smaller} than the
maximal functions in the classical inequalities of Stein and Barker.
As we know,
once a result like Theorem 8 is established, then it is a
straightforward exercise with measure theory to see that
suitably bounded holomorphic functions have boundary limits.
What is new here is the local nature of the maximal function
estimate. This, coupled with the newly defined maximal
functions, will give a new way to think about Fatou-type
theorems and Calder\'{o}n-type theorems even in ${\Bbb C}^1$.
We note that Theorem 8 has of course a standard, classical formulation
on the unit disc. In that context, we deal with
nontangential convergence and there is only one
limsup-type maximal function
${\cal M}$ on the boundary (see [KRA1]). The estimate then
reads
$$
u_\alpha^{**}(P) \leq C'' {\cal M} u(P) \, .
$$
Here we interpret $u_\alpha^{**}$ on the left to be the limsup
over $\Gamma_\alpha(P)$.
\section{Proof of Theorem 8 on the Disc and the Ball}
To fix ideas, we will begin by proving Theorem 8 on the disc $D$ in ${\Bbb C}$.
Now of course the correct concept is classical nontangential convergence,
and the function $u$ is {\it subharmonic}.
Fix a point $P \in \partial D$. We may as well suppose that $P = 1 + i0$. Fix a
parameter $\alpha > 1$ and let $z \in \Gamma_\alpha(P)$ be near
the boundary. Let $\delta = 1 -|z|$.
For a suitable $c > 0$, depending on $\alpha$, we may be sure that
$D(z, c\delta) \subseteq D$. Thus
$$
u(z) \leq C \cdot M_\delta u (\pi(z)) \, ,
$$
where $\pi(z) = z/|z|$ is the standard Euclidean projection of $z$ to $\partial D$.
See [KRA1, Proposition 8.1.10] or [STE1] for the idea behind estimating $u$
by the classical maximal function on the boundary.
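The first step behind this estimate is elementary and worth making explicit: since $u$
is subharmonic and $D(z, c\delta) \subseteq D$, the sub-mean value property gives
$$
u(z) \leq \frac{1}{\pi (c\delta)^2} \int_{D(z, c\delta)} u(\zeta) \, dA(\zeta) \, .
$$
The passage from this solid average to the truncated maximal function of the boundary
data is carried out in the references just cited.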
It is essential at this point to notice that the arc centered at $\pi(z)$ and
having radius $c'\delta$ will certainly contain $P$. This is because
$z \in \Gamma_\alpha(P)$. Note that $c' = c'(\alpha)$. Hence we may estimate the last line by
$$
u(z) \leq C'' \cdot M_{c'\delta} u(P) \, . \eqno (\star\star)
$$
Since we are now working on the unit disc $D$ in ${\Bbb C}$, we no
longer have distinct maximal functions (modeled on the limsup)
based on either isotropic balls or nonisotropic balls. There
is just the single limsup maximal function ${\cal M}$
based on arcs in $\partial D$.
Now choose a sequence $z_j \in \Gamma_\alpha(P)$ such that
$$
z_j \rightarrow P \quad \hbox{and} \quad u(z_j) \rightarrow \limsup_{\Gamma_\alpha(P) \ni z \rightarrow P} u(z) \equiv u^{**}_\alpha(P) \, .
$$
Then we know by $(\star\star)$ that
$$
u(z_j) \leq C'' \cdot M_{c' \delta_j} u (P) \, ,
$$
where $\delta_j = 1 - |z_j|$. Certainly, since $z_j \rightarrow P$, we know
that $\delta_j \rightarrow 0$. As $j \rightarrow \infty$, the righthand side is
certainly $\leq C''' \cdot {\cal M} u(P)$. We conclude therefore that
$$
u_\alpha^{**}(P) \leq C'' {\cal M} u(P) \, .
$$
That is the desired conclusion.
\hfill $\BoxOpTwo$
\smallskip \\
Now let us turn to the situation on the ball $B \subseteq {\Bbb C}^n$.
This circumstance is rather more delicate, for we cannot pass
directly from the interior to the boundary by way of a single
maximal function in order to get the estimates that we need. In
the end, the estimate that we obtain is in terms of {\it two}
maximal functions.
The nonisotropic balls mesh nicely with the admissible approach
regions
$$
{\cal A}_\alpha(P) = \{z \in B: |1 - z \cdot \overline{P}| < \alpha (1 - |z|)\} \, .
$$
Notice in particular that the set of points in ${\cal A}_\alpha(P)$ having
distance precisely $\delta > 0$ from $\partial B$ can be described by
$$
{\cal E}_\alpha(\delta) = \{z \in B: \delta_{\partial B}(z) = \delta,
|1 - z \cdot \overline{P}| < \alpha \cdot \delta\} \, .
$$
The projection of ${\cal E}_\alpha(\delta)$ to $\partial B$ is
the set
$$
\{z \in \partial B: |1 - z \cdot \overline{P}| < \alpha \cdot \delta \} \, .
$$
Thus we see in a natural way that the admissible approach region
${\cal A}_\alpha(P)$ is built up from the nonisotropic balls
$\beta(P,r)$ and, conversely, the nonisotropic balls are projections
of level sets of the approach regions ${\cal A}_\alpha(P)$. It is
this relationship, between balls and approach regions, that we shall
want to exploit when we study more general domains.
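One half of this comparability is a one-line computation which we record for the
reader's convenience. If $z \in {\cal A}_\alpha(P)$ and $\delta = 1 - |z|$, then, since
$z - \pi(z) = (|z| - 1)\pi(z)$,
$$
|1 - \pi(z) \cdot \overline{P}| \leq |1 - z \cdot \overline{P}|
+ (1 - |z|)\, |\pi(z) \cdot \overline{P}| < \alpha \delta + \delta = (\alpha + 1) \delta \, ,
$$
so that $\pi(z)$ lies in the nonisotropic ball with center $P$ and radius
$(\alpha+1)\delta$. This gives one of the two containments implicit in the comparability
just described.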
Another key ingredient of our analysis on the unit ball in
${\Bbb C}^n$ is the existence of certain polydiscs. If $\alpha > 1$
is fixed and $P \in \partial B$, then consider a point $z \in
{\cal A}_\alpha(P)$. It is helpful to normalize coordinates so
that $\hbox{Re}\, z_1$ is the real normal direction at $z$ and thus
$\hbox{Im}\, z_1$ is the complex normal direction. Thus $z_2, \dots,
z_n$ span the complex tangential directions at $z$. If
$\alpha' = 2\alpha$ then of course $z \in {\cal
A}_{\alpha'}(P)$. Thus, letting $\delta = 1 - |z|$, we see
that the polydisc
$$
{\cal D} = {\cal D}(z) \equiv D(z_1, \delta/2) \times D(z_2, \sqrt{\delta/(2\alpha)}) \times \cdots \times
D(z_n, \sqrt{\delta/(2\alpha)})
$$
lies in ${\cal A}_{\alpha'}$ and hence in $B$.
Now, as usual, let $u$ be a nonnegative function that is continuous on $\overline{B}$ and
plurisubharmonic on $B$. Certainly we have (iterating the sub-mean
value property on each coordinate disc in each dimension
that makes up ${\cal D}$)
$$
u(z) \leq \frac{1}{|{\cal D}|} \int_{\cal D} u(\zeta) \, dV(\zeta) \, . \eqno (*)
$$
Now, in order to pass from the interior to the boundary, we must exploit
our knowledge of the classical Poisson integral. Let us denote
the Poisson kernel by $P$ and the Poisson integral of a boundary
function $f$ by $Pf$. It follows from the maximum principle that
the plurisubharmonic function $u$ is majorized by the Poisson
integral $P\varphi$
of its boundary function $\varphi$. And that in turn is majorized by (see [KRA1, Chapter 8])
the Hardy-Littlewood maximal function $M_1 \varphi$ of $\varphi$ at the projected boundary
point of the argument; but we in fact only need the Hardy-Littlewood maximal
function based on balls of radius $\leq \delta$, and that we denote
by $M_{1, \delta}$. Thus line $(*)$ is majorized by
$$
\frac{1}{|{\cal D}|} \int_{\pi({\cal D})} M_{1, \delta} \varphi(\pi(\zeta)) \cdot \delta \, d\sigma(\zeta) \, .
$$
Here, of course, $\pi(z) = z/|z|$ is the projection of
$B \setminus \{0\}$ to $\partial B$ and the extra $\delta$ in
the integrand comes from the real normal dimension of ${\cal D}$.
Now it is essential to note that $\pi({\cal D})$ is comparable to a nonisotropic
ball of center $\pi(z)$ and radius $c\delta$. So we may
rewrite our estimate as
$$
u(z) \leq \frac{C}{\sigma(\beta(\pi(z), c\delta))} \int_{\beta(\pi(z), c\delta)} M_{1, \delta} \varphi(\zeta) \, d\sigma(\zeta) \, .
$$
Because the boundary is a space of homogeneous type---in particular the enveloping property is
valid---we may replace the ball of radius $c\delta$ and centered at $\pi(z)$ with a ball
of radius $c'\delta$ and centered at $P$. So we have
$$
u(z) \leq \frac{C'}{\sigma(\beta(P, c'\delta))} \int_{\beta(P, c'\delta)} M_{1, \delta} \varphi(\zeta) \, d\sigma(\zeta) \, .
$$
And this line is not greater than
$$
C' M_{2, \delta}(M_{1, \delta} \varphi)(P) \, ,
$$
where $M_{2, \delta}$ is the Hardy-Littlewood-type maximal function modeled on the nonisotropic balls $\beta$ of
radius not exceeding $\delta$.
Now choose a sequence $z_j \in {\cal A}_\alpha(P)$ such that $u(z_j) \rightarrow \limsup_{{\cal A}_\alpha(P) \ni z \rightarrow P} u(z)$.
Then of course
$$
u(z_j) \leq C' M_{2, \delta_j} (M_{1, \delta_j} \varphi)(P) \, .
$$
Letting $j \rightarrow \infty$, we find that
$$
u^{**}_\alpha(P) \leq C' \cdot {\cal M}_2 {\cal M}_1 \varphi(P) \, .
$$
Of course the maximal functions ${\cal M}_j$ on the right denote the ``limsup'' maximal
function that we defined and considered earlier. Thus we have obtained the desired estimate.
\hfill $\BoxOpTwo$
\smallskip \\
We have only presented the proof of Theorem 8 so far on the unit ball $B$ in ${\Bbb C}^n$.
But we assert that it is valid on more general classes of domains, as we have indicated
above. In Section 8 we isolate those geometric properties that are needed in
order to see that the result goes through in the claimed greater generality.
\section{The (Localized) Calder\'{o}n Theorem}
Now we shall present our new approach to the Calder\'{o}n theorem.
To repeat, this point of view is new even in the classical setting
of the unit disc in ${\Bbb C}$. We shall confine our discussion
to the unit ball, where all the key ideas are already clear.
\begin{proposition} \sl
Let $f$ be a holomorphic function on the unit ball $B \subseteq {\Bbb C}^n$.
Let $M > 0$ and suppose that $|f| \leq M$.
Let $E \subseteq \partial B$ be a set of positive measure, and suppose
that $f$ is admissibly bounded on $E$. Then $f$ has admissible
limits at almost every point of $E$.
\end{proposition}
\noindent {\bf Remark:} The tauberian condition $|f| \leq M$ is a bit
artificial, and is certainly not part of the standard canon of
the Calder\'{o}n theorem. But it is a useful tool in our proof.
Afterward, we shall remove this condition and recover the standard
Calder\'{o}n result.
\smallskip \\
\noindent{\bf Proof:} Let $\sigma$ be the usual rotationally-invariant
$(2n-1)$-dimensional area measure on $\partial B$. Let $\epsilon > 0$. By
outer regularity, select an open set $U \subseteq \partial B$ such that $U \supseteq E$ and
$\sigma(U \setminus E) \leq \epsilon \cdot \sigma(E)$. We shall use the
maximal functions, and the attendant notation, that we introduced earlier
in Section 4.
As usual, if $u$ is a plurisubharmonic function on $B$, continuous
on $\overline{B}$, and if $\varphi$ is the boundary trace of $u$, then
we know for each $P \in \partial B$ that
$$
u_\alpha^{**}(P) \leq C_\alpha {\cal M}_2 {\cal M}_1 \varphi (P) \, .
$$
Fix a point $P \in \partial B$ and let $\nu = \nu_P$ be the unit
outward normal vector at $P$.
Following the classical argument presented in [KRA1, Theorem 8.6.11], we apply this
last inequality to the function
$$
f_{j,k}(z) = \left |f \left (z - \frac{1}{j}\nu_z\right ) - f\left (z - \frac{1}{k} \nu_z \right )\right |
$$
for some $j, k$ positive integers.
Then of course $f_{j,k}$ is plurisubharmonic. If we restrict attention
to $f$ and $f_{j,k}$ on a neighborhood $\widetilde{\Omega} \cap B$,
where $\widetilde{\Omega}$ is a neighborhood in ${\Bbb C}^n$ of $P$, then we may also
take $f_{j,k}$ to be continuous on $\overline{\widetilde{\Omega} \cap B}$. Following
the argument in the proof of Theorem 8.6.10 in [KRA1], we know that
$$
\int_E |(f_{j,k})_\alpha^{**}(\zeta)|^2 \, d\sigma(\zeta) \leq
C_\alpha \cdot \int_U {\cal M}_2 ({\cal M}_1 f_{j,k}(\zeta))^2 \, d\sigma(\zeta) \, .
$$
It is important to note that the maximal functions on the right are
defined using the limsup. Thus we may say not only that each
maximal function is bounded on $L^2$, but also that it is bounded
from $L^2$ of any open set $\widetilde{U}$ containing $\overline{U}$
to $L^2$ of $U$---just because the boundedness would be proved using
arbitrarily small balls. So we obtain
$$
\int_E |(f_{j,k})_\alpha^{**}(\zeta)|^2 \, d\sigma(\zeta) \leq
C''_\alpha \int_{\widetilde{U}} |f_{j,k}(\zeta)|^2 \, d\sigma(\zeta) \, ,
$$
where $\widetilde{U}$ is an open set in $\partial B$ that contains $\overline{U}$ and
such that $\sigma(\widetilde{U} \setminus E) < \epsilon \cdot \sigma(E)$.
Letting $j \rightarrow \infty$ as in (8.6.10.2) of [KRA1], we find that
$$
\int_E \limsup_{{\cal A}_\alpha(\zeta) \ni z \rightarrow \zeta} \left |f\left (\zeta\right ) - f\left (z - \frac{1}{k} \nu\right )\right |^2 \, d\sigma(\zeta) \leq
C''_\alpha \int_{\widetilde{U}} \left |\widetilde{f}(\zeta) - f \left (\zeta - \frac{1}{k}\nu\right ) \right |^2 \, d\sigma(\zeta) \, .
$$
Here $\widetilde{f}(\zeta)$ denotes the nontangential limit of $f$ at almost every point of $E$, which
we know exists {\it a fortiori} by Calder\'{o}n's classical result.
Now of course the trick (on the righthand side) is to write $\widetilde{U} = (\widetilde{U} \setminus E) \cup E$.
Thus
$$
RHS = \int_{\widetilde{U} \setminus E} + \int_E \equiv I + II \, .
$$
The first integral is estimated quite simply by $4M^2 \cdot \sigma(\widetilde{U} \setminus E) \leq C \cdot \epsilon \sigma(E)$.
Here, of course $C$ depends on $\alpha$ and on $M$. But it does not depend on any of the other parameters that
are relevant to our present estimations. In fact if we replace
$\epsilon$ by $\epsilon/M^2$, then we
may remove the dependence on $M$. This will be important later.
So we have
\begin{eqnarray*}
\lefteqn{\int_E \limsup_{{\cal A}_\alpha(\zeta) \ni z \rightarrow \zeta} \left |f\left (\zeta\right ) - f\left (z - \frac{1}{k} \nu\right )\right |^2 \, d\sigma(\zeta)} \\
& \leq &
C''_\alpha \int_E \left |\widetilde{f}(\zeta) - f \left (\zeta - \frac{1}{k}\nu\right ) \right |^2 \, d\sigma(\zeta) + C \epsilon \sigma(E) \, . \qquad \qquad (*)
\end{eqnarray*}
And now one can proceed to imitate the argument at the end of the proof of Theorem 8.6.10 in [KRA1] to find that
$$
\sigma \left \{ \zeta \in E: \limsup_{{\cal A}_\alpha(\zeta) \ni z \rightarrow \zeta} |f(z) - \widetilde{f}(\zeta)| > \epsilon \right \}
\leq C \cdot \epsilon \cdot \sigma(E) \, .
$$
We conclude the argument with standard reasoning using elementary measure theory to see that
$\lim_{{\cal A}_\alpha(\zeta) \ni z \rightarrow \zeta} f(z) = \widetilde{f}(\zeta)$.
\hfill $\BoxOpTwo$
\smallskip \\
Our next job is to remove the tauberian hypothesis (i.e., the assumption of a global bound by $M$).
\begin{theorem} \sl Let $f$ be a holomorphic function on the unit ball $B \subseteq {\Bbb C}^n$.
Let $E \subseteq \partial B$ be a set of positive measure, and suppose
that $f$ is admissibly bounded almost everywhere on $E$. Then $f$ has admissible
limits at almost every point of $E$.
\end{theorem}
{\bf Proof:} For $\delta > 0$ small, let $B_\delta \equiv B(0, 1 - \delta) \subseteq B
\subseteq {\Bbb C}^n$. For each such $\delta > 0$ there is
of course a bound $M_\delta$ so that $|f| \leq M_\delta$ on $\overline{B_\delta}$.
If $E$ is as in the statement of the theorem, let $E_\delta$ be its Euclidean orthogonal projection
into $\partial B_\delta$. Fix $\epsilon > 0$ as before.
Choose $\widetilde{U}_\delta \supset E_\delta$ so that $\sigma(\widetilde{U}_\delta \setminus E_\delta) < [\epsilon/M_\delta^2]\cdot \sigma(E)$.
Then the estimate $(*)$ holds on $B_\delta$ with
$E_\delta$ replacing $E$ (and, implicitly,
$\widetilde{U}_\delta$ replacing $\widetilde{U}$).
Now taking the supremum over $\delta > 0$,
we find that
\begin{eqnarray*}
\lefteqn{\int_E \limsup_{{\cal A}_\alpha(\zeta) \ni z \rightarrow \zeta} \left |f\left (\zeta\right ) - f\left (z - \frac{1}{k} \nu\right )\right |^2 \, d\sigma(\zeta)} \\
& \leq &
C''_\alpha \int_E \left |\widetilde{f}(\zeta) - f \left (\zeta - \frac{1}{k}\nu\right ) \right |^2 \, d\sigma(\zeta) + C \epsilon \sigma(E) \, .
\end{eqnarray*}
And now the proof may be completed as in the argument for the last theorem.
\hfill $\BoxOpTwo$
\smallskip \\
A retrospective of the proof just presented shows that we have constructed machinery that
allows a standard sort of localization of the classical Fatou theorem. If the ingredients
are in place to prove Theorem 8, then the Calder\'{o}n theorem follows immediately.
Section 8 explains how all these ingredients are present on domains other than the unit
ball $B$.
\section{Ingredients Needed for a Proof on a General Domain}
At this time we do not know how to prove the results considered here on a perfectly arbitrary
bounded domain in ${\Bbb C}^n$ with $C^2$ boundary. In fact our reasoning depends in essential ways (as does
the reasoning of Stein and others) on the Levi geometry of the domain. The pertinent
desiderata are in fact known to hold on
\begin{enumerate}
\item[{\bf (i)}] the unit disc in ${\Bbb C}$;
\item[{\bf (ii)}] the unit ball in ${\Bbb C}^n$;
\item[{\bf (iii)}] strongly pseudoconvex domains in ${\Bbb C}^n$;
\item[{\bf (iv)}] domains of finite type in ${\Bbb C}^2$;
\item[{\bf (v)}] convex domains of finite type in ${\Bbb C}^n$, $n \geq 2$.
\end{enumerate}
We take this opportunity to isolate the essential features of the
geometry that are needed for our reasoning, and give references
where the reader may verify that these domains do indeed have the
required properties. Fix a bounded domain $\Omega \subseteq {\Bbb C}^n$ with
$C^2$ boundary.
\begin{enumerate}
\item[{\bf (a)}] The boundary $\partial \Omega$ must be equipped with a family of balls
$\beta_2(P,r)$. We use the notation $\beta_1(P,r)$ to denote the standard, isotropic,
Euclidean balls with center $P$ and radius $r$. The ball $\beta_2(P,r)$ will typically
be nonisotropic and its shape will derive rather naturally from the complex structure
and/or the Levi geometry of $\Omega$.
\item[{\bf (b)}] On the boundary of a suitable domain in ${\Bbb C}^n$, the balls $\beta_2(P,r)$, together with the standard
$(2n-1)$-dimensional Hausdorff area measure $d\sigma$, form a space of homogeneous
type in the sense of [COW1], [COW2]. Of course the classical Euclidean balls
$\beta_1(P,r)$ together with $d\sigma$ also form a space of homogeneous
type.
\item[{\bf (c)}] The domain $\Omega$ is equipped with a family of approach
regions ${\cal A}_\alpha(P)$ for each $P \in \partial \Omega$ and each $\alpha > 1$.
Each ${\cal A}_\alpha(P)$ is an open set in $\Omega$, and ${\cal A}_\alpha(P) \subseteq
{\cal A}_{\alpha'}(P)$ whenever $\alpha' > \alpha$.
\item[{\bf (d)}] The approach regions ${\cal A}_\alpha(P)$ and the
balls $\beta_2(P,r)$ are related in the following manner. If $\alpha > 1$ is
fixed and $\delta > 0$ is small then the Euclidean orthogonal projection
of
$$
\{z \in {\cal A}_\alpha(P): \delta_{\partial\Omega}(z) = \delta\}
$$
to $\partial \Omega$ is comparable to a ball $\beta_2(P, c\delta)$. Here,
of course, $c$ will depend on $\alpha$. Conversely, the set
$$
\bigcup_{\delta > 0} \{z \in \Omega: \pi(z) \in \beta_2(P, \delta),
\delta_{\partial\Omega}(z) = \delta\}
$$
is comparable to an approach region ${\cal A}_{c}(P)$ for some constant $c > 1$.
\item[{\bf(e)}] Suppose, after a normalization of coordinates, that
$\hbox{Re}\, z_1$ is the real
normal direction at $z$, $\hbox{Im}\, z_1$ the complex normal direction,
and $z_2, \dots, z_n$ form an orthonormal basis
for the remaining $(n-1)$ complex tangential directions.
There is a $c > 0$ with the following property.
If $\alpha > 1$ is fixed and $z \in {\cal A}_\alpha(P)$
with $\delta = \delta_{\partial\Omega}(z)$ then
there are positive exponents $\lambda_1 = \lambda_1(z)$,
\dots, $\lambda_{n-1} = \lambda_{n-1}(z)$ so that the
polydisc
$$
{\cal D}(z) \equiv D(z_1, c\delta) \times D(z_2, c \delta^{\lambda_1}) \times \dots
\times D(z_n, c\delta^{\lambda_{n-1}} )
$$
still lies in $\Omega$.
\item[{\bf (f)}] A critical property of the polydisc ${\cal D}(z)$ in part {\bf (e)} is
that the Euclidean orthogonal projection $\pi({\cal D}(z))$ in $\partial \Omega$ is
comparable to a nonisotropic ball $\beta_2(\pi(z), c'\delta)$. What is crucial
here is that $\delta$ will be the size of this ball in the complex normal direction,
and that will automatically determine all the other dimensions of the $(2n-1)$-dimensional
ball.
\item[{\bf (g)}] The ball $\beta_2(\pi(z), c'\delta)$ from part {\bf (f)} is comparable
to a ball $\beta_2(P, c''\delta)$, where $P$ is as in part {\bf (e)}.
\end{enumerate}
A review of the proofs that we have presented in Sections 5, 6, and 7 shows that these seven
properties are precisely those that we used to establish our results. Thus
Theorem
4 is true for the five types of domains described in {\bf (i)}--{\bf (v)}.
The references for properties {\bf (a)}--{\bf (g)} on domains {\bf (i)}--{\bf (v)} are
\begin{enumerate}
\item[{\bf (i)}] For the disc, see [KRA1].
\item[{\bf (ii)}] For the ball, see [KRA11], [KRA1], [STE1].
\item[{\bf (iii)}] For strongly pseudoconvex domains in ${\Bbb C}^n$,
see [KRA1], [STE1], [KRL].
\item[{\bf (iv)}] For finite type domains in ${\Bbb C}^2$, see
[NSW1], [NSW2], [NRSW], [CAT].
\item[{\bf (v)}] For convex, finite type domains in ${\Bbb C}^n$, see
[DIF], [MCN1], [MCN2] and references therein.
\end{enumerate}
\section{The Nevanlinna Class}
For many purposes, the most natural space of functions on which to
consider Fatou-type theorems is the Nevanlinna class. Here,
for a fixed bounded domain $\Omega \subseteq {\Bbb C}^n$ with $C^2$ boundary, we
say that $f$ on $\Omega$ lies in ${\cal N}^+$ if {\bf (i)} $f$ is
holomorphic and {\bf (ii)} $\log^+ |f|$ has a harmonic majorant.
By a standard lemma that can be found in [STE1] or [KRA1], this
definition is equivalent to requiring that
$$
\sup_{0 < \epsilon < \epsilon_0} \int_{\partial \Omega_\epsilon}
\log^+ |f(\zeta)| \, d\sigma(\zeta) < \infty \, .
$$
Here $\partial \Omega_\epsilon = \{z \in \Omega: \rho(z) = - \epsilon\}$
for some defining function $\rho$ for $\Omega$ (see [KRA1]) and
$$
\log^+ x = \left \{ \begin{array}{lcr}
0 & \hbox{if} & x \leq 1 \\
\log x & \hbox{if} & x > 1 \, .
\end{array}
\right.
$$
Stein's book [STE1] contained rather elaborate and technical arguments
to handle the boundary behavior of functions in ${\cal N}^+$. A few
years later, Barker [BAR] provided a much simpler approach. His key idea
was the next lemma. Note also that the case of {\it meromorphic functions}
in the Nevanlinna class was treated by Neff [NEF1], [NEF2] and Lempert [LEM].
\begin{lemma} \sl
Let $u$ be a nonnegative, continuous, plurisubharmonic function on $\Omega$
(we do not necessarily mandate that $u$ be continuous on $\overline{\Omega}$).
Assume that $u$ has a harmonic majorant. [Thus there is a finite, positive
measure $\mu$ on $\partial \Omega$ such that
$$
u(z) \leq \int_{\partial \Omega} P(z, \zeta) \, d\mu(\zeta) \, .]
$$
Here of course $z \in \Omega$ and $P$ is the standard Poisson kernel. Let $\alpha > 1$.
Then the admissible maximal function
$$
u^{*}_\alpha (\zeta) \equiv \sup_{z \in {\cal A}_\alpha(\zeta)} |u(z)|
$$
for $\zeta \in \partial \Omega$ satisfies
$$
u^{*}_\alpha (\zeta) \leq C_\alpha \left [ M_2( [M_1(\mu)]^{1/2}) \right ]^2 (\zeta)
$$
and hence is finite almost everywhere in $\partial \Omega$.
\end{lemma}
We note first of all that Barker's lemma is still true if we replace $u^*_\alpha$ with
our maximal function $u^{**}_\alpha$ (defined using the
limsup), $M_1$ with ${\cal M}_1$, and
$M_2$ with ${\cal M}_2$. Thus we know that
$$
u^{**}_\alpha (\zeta) \leq C_\alpha \left [ {\cal M}_2( [{\cal M}_1(\mu)]^{1/2}) \right ]^2 \, .
$$
As Barker notes, in case $f \in {\cal N}^+$,
one may apply this last lemma to the function $u = \log^+|f|$. It follows
then that $u^{**}_\alpha$ is finite almost everywhere, and we may then
use our standard arguments to see that $f$ has an admissible limit almost everywhere.
Thus we have
\begin{theorem} \sl
Let $\Omega \subseteq {\Bbb C}^n$, $n \geq 2$, be a bounded domain
with $C^2$ boundary. Assume that either $\Omega$ is the ball,
or a finite type domain in ${\Bbb C}^2$, or a convex finite type
domain in ${\Bbb C}^n$. Suppose that $f \in {\cal N}^+(\Omega)$.
Then $f$ has admissible boundary limits almost everywhere.
\end{theorem}
\section{Nontangential Versus Admissible Approach}
Now we shall prove Theorem 5. In fact, following the example that we have
already set with our proof of Theorem 4 (see Proposition 9), we shall at first
prove a version of the theorem that has an additional tauberian
hypothesis.
\begin{theorem} \sl
Let $f$ be a holomorphic function on the
unit ball $B$ in ${\Bbb C}^n$, $n > 1$. Assume that there is a constant
$M > 0$ so that $|f| \leq M$. Let $E \subseteq \partial B$ be a set
of positive $(2n-1)$-dimensional measure. Then $f$ is
nontangentially bounded at almost every point of $E$ (with a
bound $C$ that is in general, and most interestingly, smaller
than $M$) if and only if $f$ is admissibly bounded (with the
same bound $C$) at almost every point of $E$.
\end{theorem}
As enunciated, we shall work on the ball $B$, and for simplicity
and clarity we shall restrict attention to $B \subseteq {\Bbb C}^2$. Thus assume that
the holomorphic function $f$ on $B$ is nontangentially bounded on the set
$E \subseteq \partial B$ of positive 3-dimensional Hausdorff measure. As usual we
call the measure $d\sigma$.
With elementary measure-theoretic arguments, we may extract from
$E$ a subset of positive measure so that $f$ is nontangentially bounded
at each point of the subset with a {\it uniform bound} $C$ and on
a cone $\Gamma_\alpha$ of uniform size---independent of the point.
We continue to call this new set $E$. Note that, in general,
$C < M$---that is certainly the most interesting case.
We shall show then that $f$ is {\it admissibly bounded}
with bound $C$.
Now let $P \in \partial B$ be a point of density
(with respect to classical, isotropic balls)
of $E \subseteq \partial B$, and
let $U \subseteq \partial B$ be a small, relatively open neighborhood of $P$.
Let us consider a foliation of $U$ by complex tangential curves.
Call the curves $\gamma_w: (-\epsilon, \epsilon) \rightarrow U$, where $w$
is a 2-dimensional parameter. Let $g_w$ denote the image curve of $\gamma_w$.
Restrict attention now to those
$g_w$ which intersect $E$ in a set of positive 1-dimensional measure.
For each such $g_w$, pick a point $\gamma_w(t_w)$ that is
a point of 1-dimensional density of $g_w \cap E$. Let $\epsilon > 0$.
Choose a neighborhood $I_w = (t_w - \delta_w, t_w + \eta_w)$ so that
$$
\frac{{\cal H}^1(\gamma_w(I_w) \cap E)}{{\cal H}^1(I_w)} > 1 - \epsilon \, .
$$
We may suppose that $t_w, \delta_w, \eta_w$ are rational numbers. Now, with some
elementary measure theory, we may focus on a collection of $\gamma_w$,
$w$ in a 2-dimensional set of positive measure, so that each of the
$I_w$ is the {\it same} interval $I^*$. Give this set of $w$ the name ${\cal S}$,
and let $s \in {\cal S}$ be a 2-dimensional point of density. We fix attention
on the point $x_0 = \gamma_s(t_s)$.
We may repeat the preceding arguments using a foliation $\widetilde{\gamma}_w$ of $U$
that is still complex tangential but is {\it transverse} to $\gamma_w$ (remember
that we are working in the boundary of the ball $B$ in ${\Bbb C}^2$, so the complex
tangent space has real dimension 2). This gives rise to a point $\widetilde{s} \in \widetilde{S}$.
By elementary measure theory---in particular by Fubini's theorem---we may suppose
that $x_0 = \gamma_s(t_s) = \widetilde{\gamma}_{\tilde{s}}(\widetilde{t}_{\tilde{s}}) = \widetilde{x}_0$. We continue to call the point $x_0$.
Thus we focus our attention on the curves $\gamma_w(I^*)$ for $w
\in {\cal S}$ and $\widetilde{\gamma}_{\widetilde{w}}(\widetilde{I}^*)$ for $\widetilde{w} \in \widetilde{S}$.
We examine an admissible approach region with base point $x_0$ as above.
Call that region ${\cal A}_{\alpha}(x_0)$, some $\alpha > 1$.
Let $z \in {\cal A}_\alpha(x_0)$ be near
to the boundary---at distance much less than the length of
$I^*$ or $\widetilde{I}^*$. Let $\delta = \delta_{\partial B}(z)$. Now consider, as
usual, a nonisotropic polydisc ${\cal D}$ centered at $z$, having radius $c'\delta$
in the complex normal directions and radii $c'\sqrt{\delta}$ in the
complex tangential directions, some small $c' > 0$.
The natural thing to do at this point is to estimate
$$
|f(z)| \leq \frac{1}{|{\cal D}|} \int_{{\cal D}} |f(\zeta)| \, dV(\zeta) \, .
$$
Because of our density statements about $I^*$, $\widetilde{I}^*$, ${\cal S}$, and $\widetilde{S}$, we
can estimate this last line by
$$
(1 - c''\epsilon) C + c'' \epsilon \cdot M \, .
$$
Since the point $z \in {\cal A}_\alpha(x_0)$ was chosen arbitrarily,
and since $\epsilon > 0$ was arbitrary, we in fact have shown
that $f$ is admissibly bounded at $x_0$ with bound $C$. Since points
of the kind $x_0$ are measure-theoretically generic, we now know that we have a set of
positive measure in $E$ on which $f$ is admissibly bounded.
Again, by elementary measure theory, we may then conclude
that $f$ is admissibly bounded at almost all points of $E$.
That completes the proof.
\hfill $\BoxOpTwo$
\smallskip \\
It remains to show that our result holds without the tauberian
hypothesis $|f| \leq M$. So now let $f$ be nontangentially bounded
on a set $E \subseteq \partial B$ of positive measure.
As usual, we may take the nontangential approach regions $\Gamma_\alpha(P)$
to be of uniform aperture, and the bound $C$ to be uniform.
For $\tau > 0$ small, let $B_\tau = B(0,1 - \tau)$. Then of course $f$ is bounded
by some $M_\tau$ on $B_\tau$. Let $E_\tau$ be the projection of $E$ to
$\partial B_\tau$. Then of course $f$ is nontangentially bounded on $E_\tau$
by $C$ (because each approach region
${\cal A}_\alpha^\tau(P_\tau) \subseteq B_\tau$ for
$P_\tau \in E_\tau$ is a subset of ${\cal A}_\alpha(P)$, where
$P = \pi(P_\tau)$). Since the tauberian hypothesis is in place on $B_\tau$, we may
conclude that $f$ is admissibly bounded by $C$ on $E_\tau$. But now,
for each $P \in E$, note that
$$
{\cal A}_\alpha (P) = \bigcup_{\tau > 0 \ \rm small} {\cal A}_\alpha^\tau (P_\tau) \,
$$
where ${\cal A}_\alpha^\tau(P_\tau)$ is the admissible region in $B_\tau$
based at the point $P_\tau$ (the projection of $P$ to $\partial B_\tau$).
Since $f$ is admissibly bounded by $C$ on each of the approach regions
on the right, it follows that $f$ is bounded by $C$ on ${\cal A}_\alpha(P)$.
This reasoning is valid at almost every point $P$ of $E$. The proof
is therefore complete.
\section{Concluding Remarks}
The results in this paper are formulated and proved on the ball,
on strongly pseudoconvex domains, on finite type domains in ${\Bbb C}^2$,
and on convex, finite type domains in ${\Bbb C}^n$. Other types
of domains can be handled with {\it ad hoc} arguments. Among those
are the bidisc and complete Reinhardt domains like
$$
\Omega_{2, \infty} = \{z \in {\Bbb C}^2: |z_1|^2 + 2 e^{-1/|z_2|^2} < 1\} \, .
$$
A complete theory of Fatou theorems and Calder\'{o}n theorems, which
can treat any bounded $C^2$ domain and which fully accounts for its
attendant Levi geometry, has yet to be produced. The paper [KRA9] offers
a conceptual framework for handling all domains---using the Kobayashi metric
as a stepping stone and structural tool---but in practice it is rather
difficult to verify all the hypotheses of the results in [KRA9].
We are of the opinion, however, that invariant metrics are the right argot
for formulating function theoretic problems and results on arbitrary
domains. Such metrics can read the Levi geometry, and they also take
into account the way that holomorphic functions in the interior depend
on the shape of the domain. We look forward to future work in this
direction.
\newpage
\noindent {\LARGE \sc References}
\vspace*{.2in}
\begin{enumerate}
\item[{\bf [BAR]}] S. R. Barker, Two theorems on boundary values
of analytic functions, {\em Proc.\ Am.\ Math.\ Soc.} 68(1978),
54--58.
\item[{\bf [CAL]}] A. P. Calder\'{o}n, On the behavior of harmonic
functions near the boundary, {\em Trans.\ Am.\ Math.\ Soc.} 68(1950),
47-54.
\item[{\bf [CAT]}] D. Catlin, Estimates of invariant metrics on pseudoconvex
domains of dimension two, {\it Math.\ Zeit.} 200(1989), 429--466.
\item[{\bf [CIK]}] J. A. Cima and S. G. Krantz, A Lindel\"{o}f principle and
normal functions in several complex variables, {\em Duke Math.\ Jour.}
50(1983), 303-328.
\item[{\bf [COW1]}] R. R. Coifman and G. Weiss, {\it Analyse Harmonique
Non-Commutative sur Certains Espaces Homog\`{e}nes}, Lecture Notes in Math.\
242, Springer-Verlag, Berlin, 1971.
\item[{\bf [COW2]}] R.\ R.\ Coifman and G.\ Weiss, Extensions of
Hardy spaces and their use in analysis, {\it Bull.\ AMS}
83(1977), 569-645.
\item[{\bf [DIB1]}] F. Di Biase, Approach regions and maximal functions in
theorems of Fatou type, thesis, Washington University in St.~Louis, 1995.
\item[{\bf [DIB2]}] F. Di Biase, Exotic Convergence in Theorems of Fatou
Type, in {\it Harmonic Functions on Trees and Buildings}, Adam Koranyi,
ed., Contemporary Mathematics, vol.\ 206, American Mathematical Society,
1997.
\item[{\bf [DIB3]}] F. Di Biase, {\it Fatou type theorems:
Maximal Functions and Approach Regions}, Birkh\"{a}user Publishing,
Boston, 1998.
\item[{\bf [DIF]}] F. Di Biase and B. Fischer,
Boundary behaviour of $H^p$ functions on convex domains
of finite type in ${\Bbb C}^n$, {\it Pacific J.\ Math.} 183(1998), 25--38.
\item[{\bf [DIK]}] F. Di Biase and S. G. Krantz, {\it The Boundary Behavior
of Holomorphic Functions}, Birkh\"{a}user Publishing, Boston, 2006, to appear.
\item[{\bf [FAT]}] P. Fatou, S\'eries trigonom\'etriques et s\'eries
de Taylor, {\it Acta Math.}, 30(1906), 335-400.
\item[{\bf [GAR]}] J. B. Garnett, {\it Bounded Holomorphic Functions},
Academic Press, New York, 1981.
\item[{\bf [GRS]}] P. Greiner and E. M. Stein, {\it Estimates for the
$\overline{\partial}$-Problem}, Princeton University Press, Princeton,
NJ, 1977.
\item[{\bf [HIR]}] M. Hirsch, {\it Differential Topology}, Springer-Verlag,
New York, 1976.
\item[{\bf [HUW]}] R. Hunt and R. Wheeden, On the boundary
values of harmonic functions, {\em Trans.\ Am.\ Math.\ Soc.} 132(1968), 307--322.
\item[{\bf [JEK]}] D. Jerison and C. Kenig, Boundary behaviour of
harmonic functions in non-tangentially accessible
domains, {\it Adv.\ Math.} 46(1982), 80--147.
\item[{\bf [KOR1]}] A. Koranyi, Harmonic functions on Hermitian hyperbolic
space, {\em Trans.\ A. M. S.} 135(1969), 507-516.
\item[{\bf [KOR2]}] A. Koranyi, Boundary behavior of Poisson integrals
on symmetric spaces, {\em Trans.\ A.M.S.} 140(1969), 393-409.
\item[{\bf [KRA1]}] S. G. Krantz, {\it Function Theory of Several Complex Variables},
$2^{\rm nd}$ ed., American Mathematical Society, Providence, RI, 2001.
\item[{\bf [KRA2]}] S. G. Krantz, Holomorphic functions of
bounded mean oscillation and mapping properties of the
Szeg\"{o} projection, {\em Duke Math. J.} 47(1980), 743-761.
\item[{\bf [KRA3]}] S. G. Krantz, Intrinsic Lipschitz classes on manifolds with applications
to complex function theory and estimates for the $\overline{\partial}$
and $\overline{\partial}_{b}$ equations, {\it Manuscripta Math.} 24(1978), 351-378.
\item[{\bf [KRA4]}] S. G. Krantz, Smoothness of harmonic and holomorphic functions,
{\em Proc.\ Symp.\ Pure Math.}, Vol. 35 (1979)
(S. Wainger and G. Weiss, eds.), 63-67.
\item[{\bf [KRA5]}] S. G. Krantz, Characterizations of various domains of holomorphy
via $\overline{\partial}-$ estimates and applications to a problem of Kohn, {\it
Illinois J. Math.} 23(1979), 267-285.
\item[{\bf [KRA6]}] S. G. Krantz, Lipschitz spaces on stratified groups, {\it Trans.\ Am.\
Math. Soc.} 269(1982), 39-66.
\item[{\bf [KRA7]}] S. G. Krantz, Finite type conditions and elliptic boundary
value problems, {\it Jour.\ Diff.\ Eq.} 34(1979), 239-260.
\item[{\bf [KRA8]}] S. G. Krantz, Estimation of the Poisson kernel, {\it Journal
of Math.\ Analysis and Applications} 302(2005), 143--148.
\item[{\bf [KRA9]}] S. G. Krantz, Invariant metrics and the boundary behavior of holomorphic
functions on domains in ${\Bbb C} ^{n},$ {\em Jour.\ Geometric Anal.} 1(1991), 71-98.
\item[{\bf [KRA10]}] S. G. Krantz, Fatou theorems old and new: an overview of the
boundary behavior of holomorphic functions, Proceedings of
an International Conference on Complex Variables held in Seoul, Korea,
{\it Journal of the Korean Math.\ Society} 37(2000), 139--175.
\item[{\bf [KRA11]}] S. G. Krantz, {\it Partial Differential Equations and
Complex Analysis}, CRC Press, Boca Raton, FL, 1992.
\item[{\bf [KRA12]}] S. G. Krantz, The Lindel\"{o}f principle in
several complex variables, preprint.
\item[{\bf [KRL]}] S. G. Krantz and S.-Y. Li, Area integral
characterizations of Hardy spaces on domains in ${\Bbb C}^n$,
{\it Complex Variables} 32(1997), 373--399.
\item[{\bf [LEM]}] L. Lempert, Boundary behavior of meromorphic
functions of several complex variables, {\em Acta Math.}
144(1980), 1-26.
\item[{\bf [MCN1]}] J. McNeal, Convex domains of finite type, {\it J. Funct.\
Anal.} 108(1992), 361--373.
\item[{\bf [MCN2]}] J. McNeal, Estimates on the Bergman kernels
of convex domains, {\it Advances in Math.} 109(1994),
108--139.
\item[{\bf [NRSW]}] A. Nagel, J.-P. Rosay, E. M. Stein, and S. Wainger,
Estimates for the Bergman and Szeg\"o kernels in ${\Bbb C}^2$, {\it Ann.\ of
Math.} 129(1989), 113--149.
\item[{\bf [NSW1]}] A. Nagel, E. M. Stein, and S. Wainger, Boundary
behavior of functions holomorphic in domains of finite type, {\em Proc.\
Nat.\ Acad.\ Sci.\ USA} 78(1981), 6596-6599.
\item[{\bf [NSW2]}] A. Nagel, E. M. Stein, and S. Wainger, Balls and
metrics defined by vector fields I: Basic properties, {\it Acta Math.}
155(1985), 103-147.
\item[{\bf [NEF1]}] C. A. Neff, Maximal Function Estimates for Meromorphic Nevanlinna Functions, thesis,
Princeton University, 1986.
\item[{\bf [NEF2]}] C. A. Neff, Boundary Convergence of Functions in the Nevanlinna Class, {\it Colloq.\ Math.}
60(1990), 477-506.
\item[{\bf [PLE]}] A. Plessner, \"{U}ber das Verhalten analytischer
Funktionen am Rande ihres Definitionsbereiches, {\it J. F. M.}
159(1927), 219--227.
\item[{\bf [PRI1]}] I. I. Privalov, Integrale de Cauchy,
Saratov, 1919.
\item[{\bf [PRI2]}] I. I. Privalov, {\it Randeigenschaften
analytischer Funktionen}, $2^{\rm nd}$ ed., VEB Deutscher
Verlag der Wissenschaften, Berlin, 1956.
\item[{\bf [RUD]}] W. Rudin, Holomorphic Lipschitz functions in
balls, {\em Comment.\ Math.\ Helvet.} 53(1978), 143-147.
\item[{\bf [STE1]}] E. M. Stein, {\it Boundary Behavior of
Holomorphic Functions of Several Complex Variables}, Princeton
University Press, Princeton, 1972.
\item[{\bf [STE2]}] E. M. Stein, Singular integrals and
estimates for the Cauchy-Riemann equations, {\em Bull.\ A.M.S.}
79(1973), 440-445.
\item[{\bf [STE3]}] E. M. Stein, {\it Singular Integrals and
Differentiability Properties of Functions}, Princeton University Press,
Princeton, NJ, 1970.
\item[{\bf [TSU]}] M. Tsuji, boundary behavior of holomorphic functions, 1930s xxxxxx
\item[{\bf [ULR]}] D. Ullrich, Tauberian theorems for pluriharmonic functions
which are BMO or Bloch, {\it Mich.\ Math.\ Jour.} 33(1986), 325-333.
\item[{\bf [WID]}] K. O. Widman, On the boundary behavior of solutions to
a class of elliptic partial differential equations, {\it Ark.\ Mat.}
6(1966), 485--533.
\item[{\bf [ZYG1]}] A. Zygmund, On the boundary values of functions
of several variables. I, {\it Fund.\ Math.} 36(1949), 207--235.
\item[{\bf [ZYG2]}] A. Zygmund, A remark on functions of several variables,
{\it Acta Sci.\ Math.\ Szeged}, Pars B, 12(1950), 66--68.
\item[{\bf [ZYG3]}] A. Zygmund, Note on the boundary values of functions
of several variables, {\it Ann.\ of Math. Studies} 25, Princeton
University Press, Princeton, NJ, 1950.
\end{enumerate}
\end{document}
Now let us consider the more general situation that is really
the focus of our work.
We shall assume that we have a fixed domain $\Omega$ equipped with
approach regions ${\cal A}_\alpha(P)$ for each $P \in \partial \Omega$
and each $\alpha > 1$. The domain $\Omega$ will be bounded and with
$C^2$ boundary. We shall assume that $U$ is a tubular neighborhood
of $\partial \Omega$ and that $\pi: U \rightarrow \partial \Omega$ is the
usual Euclidean normal projection. Let $\delta_0 > 0$ be such that
if $z \in \Omega$ and $\delta_{\partial\Omega}(z) < \delta_0$
then $z \in U$.
Most importantly, we shall take it that $\partial \Omega$ is equipped
with a family of balls. These balls will be related to the ${\cal A}_\alpha$
in the following way. Fix $\alpha > 1$ and $P \in \partial \Omega$.
If $0 < \delta < \delta_0$ then
$$
\pi \left (\{z \in \Omega: \delta_{\partial\Omega}(z) = \delta \ \hbox{and} \
z \in {\cal A}_\alpha(P)\} \right )
$$
is the ball $\beta(P, \delta\alpha)$. We equip $\partial \Omega$ with the usual
$(2n-1)$-dimensional Hausdorff measure, denoted $d\sigma$. As a standing hypothesis we shall
suppose that $\partial \Omega$, equipped with these balls and with the measure $d\sigma$,
forms a space of homogeneous type (see [COW1], [COW2]).
With $\alpha > 1$ and $P \in \partial \Omega$ fixed as above, we now
perform an analysis of the kind that we have already done for the disc
$D$. Let $z \in {\cal A}_\alpha(P)$. After a normalization of coordinates
we may take it that $\hbox{Re}\, z_1$ is the real normal direction at $z$.
Then $\hbox{Re}\, z_1, \hbox{Im}\, z_1$ span the real and complex normal directions,
and $z_2, \dots, z_n$ span the complex tangential directions at $z$.
Let $\delta = \delta_z$ denote the Euclidean distance of $z$ to $\partial \Omega$.
We shall need the following notation. Let $z \in \Omega$. Then, for $r > 1$
and $\delta = \delta(z)$, we let $\beta(z,\delta, r)$ be the set of
points in ${\cal A}_r(\pi(z))$ having distance
$\delta$ from the boundary.
Observe, then, that $\pi(\beta(z, \delta, r)) = \beta(\pi(z), \delta r)$.
Now define
$$
{\cal D} = \beta(z, \delta/2) \times (-\delta/2, \delta/2) \, ,
$$
where it is understood that the interval $(-\delta/2, \delta/2)$ is
in the real normal direction.
| {
"timestamp": "2006-08-26T07:13:18",
"yymm": "0608",
"arxiv_id": "math/0608650",
"language": "en",
"url": "https://arxiv.org/abs/math/0608650",
"abstract": "We develop a new technique for studying the boundary limiting behavior of a holomorphic function on a domain $\\Omega$ -- both in one and several complex variables. The approach involves two new localized maximal functions. As a result of this methodology, theorems of Calderón type about local boundary behavior on a set of positive measure may be proved in a new and more natural way. We also study the question of nontangential boundedness (on a set of positive measure) versus admissible boundedness. Under suitable hypotheses, these two conditions are shown to be equivalent.",
"subjects": "Complex Variables (math.CV)",
"title": "The boundary behavior of holomorphic functions: Global and local results",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9669140244715405,
"lm_q2_score": 0.7341195385342971,
"lm_q1q2_score": 0.7098304774473874
} |
https://arxiv.org/abs/1009.6153 | Three Balls Problem Revisited - On the Limitations of Event-Driven Modeling | If a tennis ball is held above a basket ball with their centers vertically aligned, and the balls are released to collide with the floor, the tennis ball may rebound at a surprisingly high speed. We show in this article that the simple textbook explanation of this effect is an oversimplification, even for the limit of perfectly elastic particles. Instead, there may occur a rather complex scenario including multiple collisions which may lead to a very different final velocity as compared with the velocity resulting from the oversimplified model. | \section{Introduction}
Consider a set of two balls made of the same viscoelastic material whose centers
are vertically aligned at positions $z_1$ and $z_2$ as sketched in Fig.
\ref{fig:sketch}.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{scetch.pdf}
\caption{Sketch of the problem.}
\label{fig:sketch}
\end{figure}
Let $R_1$ and $R_2$ be the radii of the particles and $\Delta h$ their initial
vertical spacing. At time $t=0$ we release the
particles to collide with the floor. The question for the maximum
height reached by the upper sphere after the collision is then a
common textbook problem. The textbook solution is based on the
assumption that the collision of the lower particle with the floor
and the subsequent collision of the lower particle with the upper one
are separate two-body interactions which may be treated independently,
that is, one disregards the duration of the collisions. Whether this
assumption is justified depends, of course, on the initial conditions,
the particle sizes and the material parameters.
The experimental investigation of the described problem is tricky as
even a small deviation from the vertical alignment of the initial
positions of the spheres leads to considerable post-collisional
velocities in the horizontal direction, in particular if more than two
balls are involved.
Simple but effective techniques were
introduced\cite{Harter:1971,Mellen:1968,Mellen:1995}, which allow for
experiments with chains of up to about 7 aligned balls.
In this paper we will show that although the problem looks simple, there may emerge rather complex
behavior including multiple collisions. The simple solution mentioned
above is, thus, only a special case of a more general solution.
\section{Independent Collision Model}
\label{sec:ICM}
\subsection{Collision Scenario}
\label{sec:collision-scenario}
To introduce our notation and for later reference let us first
reproduce the solution for the independent collisions model (ICM), that is, the
final velocity of the upper ball is the
result of a) the inelastic collision of the lower ball with the floor
and b) the inelastic collision of the lower ball with the upper
ball. Note that here and in the following we restrict ourselves to the
situation where $m_1 > m_2$. For the opposite case, $m_1<m_2$, it may
be shown that the sketched sequence of collisions fails, even under
the assumption of perfectly elastic collisions\cite{Harter:1971,Patricio:2004}.
Inelasticity of the balls is described by the coefficient of restitution
which relates the precollisional relative velocity
$v_{ij}=v_i-v_j$ of colliding particles $i$ and $j$, to
the post-collisional one, $v_{ij}^\prime=v_i^\prime-v_j^\prime$,
\begin{equation}
\label{eq:cor}
\varepsilon=\left.-v_{ij}^\prime \right/v_{ij}\,.
\end{equation}
The lower sphere ($m_1$) reaches the floor at time
$t_{10}=\sqrt{\frac{2}{g}(z_{1}^{(0)}-R_1)}$ at the velocity
$v_{10}=-\sqrt{2g(z_{1}^{(0)}-R_1)}$ from where it is reflected with $v_{10}^\prime=-\varepsilon
v_{10}$. The first index always denotes the considered particle and the second
its collision-partner. Index 0 stands for the floor, 1 for the lower and 2 for
the upper sphere. Upper index $^{(0)}$ stands for initial values.
Post-collisional values are marked by primes. The particles collide then at
$t_{12}=t_{10}-\frac{\Delta h}{v_{10}(1+\varepsilon)}\equiv t_{10}+\Delta t$
whereby the upper sphere is located at position
$z_{21}=z_{21}^\prime=-\frac{1}{2}gt_{12}^2+z_{2}^{(0)}$ with the initial height $z_{2}^{(0)}=z_{1}^{(0)}+R_1+R_2+\Delta h$. At $t_{12}$ the
lower particle has the velocity $v_{12}=-g\Delta t+v_{10}^{\prime}$ and the upper $v_{21}=-gt_{12}$. Employing the
collision rule, Eq. \eqref{eq:cor}, and conservation of momentum, we obtain the final
velocities
\begin{eqnarray}
v_{2}^\prime &=& \frac{m_2v_{21}+m_1[v_{12}+\varepsilon(v_{12}-v_{21})]}{m_1+m_2}
\label{eq:v2FinEmd}\\
v_{1}^\prime&=&\frac{m_1v_{12}+m_2[v_{21}+\varepsilon(v_{21}-v_{12})]}{m_1+m_2}
\label{eq:v1FinEmd}
\end{eqnarray}
and the relative velocity
\begin{equation}
\label{eq:vrelprime}
v_r^\prime=v_2^\prime-v_1^\prime\,.
\end{equation}
In the case of elastic ($\varepsilon=1$) and instantaneous ($\Delta h\to 0$)
collisions we find
\begin{eqnarray}
\label{eq:v2p}
v_{2}^\prime&=&-v_{10}\frac{3-\mu}{1+\mu}\qquad\text{with}\quad \quad\mu=\frac{m_2}{m_1}\\
\label{eq:1}
v_{1}^\prime&=&-v_{10}\frac{1-3\mu}{1+\mu}\,.
\end{eqnarray}
For $\mu=\nicefrac{1}{3}$ the lower sphere loses all its kinetic energy
($v_1^\prime=0$) and the upper sphere rebounds with twice its initial velocity.
In the limit $\mu\rightarrow0$ we recover the well known textbook result
$v_2^\prime=-3v_{10}$, that is, the upper ball rises to about nine times its
initial drop height ($z_2^{\text{max}}=9z_{2}^{(0)}-8(2R_1+R_2)$).
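These formulas are easily evaluated numerically. A minimal sketch, which evaluates
Eqs. \eqref{eq:v2FinEmd} and \eqref{eq:v1FinEmd} in the limit $\Delta h \to 0$ and
recovers the special values just quoted, reads as follows; the impact velocity and the
mass ratios are arbitrary illustrative choices.
\begin{verbatim}
def icm_final_velocities(v10, mu, eps=1.0):
    """ICM in the limit Delta h -> 0: the lower ball rebounds with -eps*v10,
    the upper ball still moves with v10; then the two-body rule is applied."""
    v12, v21 = -eps * v10, v10
    v2p = (mu * v21 + v12 + eps * (v12 - v21)) / (1.0 + mu)
    v1p = (v12 + mu * (v21 + eps * (v21 - v12))) / (1.0 + mu)
    return v1p, v2p

v10 = -3.0                          # m/s, impact velocity of the lower ball
for mu in (0.5, 1.0 / 3.0, 0.01):   # illustrative mass ratios m2/m1 < 1
    print(mu, icm_final_velocities(v10, mu))
# mu = 1/3: v1' = 0 and v2' = 2|v10|;  mu -> 0: v2' -> 3|v10|
\end{verbatim}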
The system of bouncing balls can be exhaustively described also for
more than two spheres \cite{Kerwin:1972}, provided
the collisions are considered as isolated events, that is, only
two-particle interactions are taken into account.
\subsection{Coefficient of Restitution Resulting from the Solution of Newton's Equation}
\label{sec:coeff-rest-result}
The contact of viscoelastic spheres is described by the (modified) Hertz
contact law \cite{BrilliantovEtAl:1996}. In this article we will use a
simplified force law since it allows for an exhaustive analytical solution of the
problem. To justify this approximation, we will show later by means of numerical
simulations that the more correct Hertz contact force leads to qualitatively
identical results, see Appendix \ref{sec:AppVisco}.
We describe the contact of dissipatively interacting particles, $i$ and $j$, by
\begin{equation}
\label{eq:dashpot}
F\left(\xi_{ij},\dot{\xi}_{ij}\right)=\min\left[0,-k\xi_{ij}-\gamma\dot{\xi}_{ij}\right]
\end{equation}
as a function of the mutual compression
\begin{equation}
\label{eq:xi}
\xi_{ij}(t)=\max\left[0,R_i+R_j-\abs{\vec{r_i}(t)-\vec{r_j}(t)}\right]
\end{equation}
and the compression rate
$\dot{\xi}_{ij}=\text{d}\xi_{ij}(t)/\text{d}t$, where $R_i$ is the
radius of particle $i$ and $\vec{r}_i(t)$ is its position at time $t$. The
expression in square
brackets in Eq. \eqref{eq:dashpot} may become positive during the expansion phase,
that is, the
(positive) dissipative force may overcompensate the (negative) elastic
force, which would lead to an erroneous attractive
force, see e.g. \cite{SchwagerPoeschel:2008}. Therefore, the $\min[\dots]$
function is applied to take into account that the interaction force is always
repulsive (negative).
Consider an isolated pair of colliding particles $i$ and $j$ approaching
one another at impact rate $v=\dot{\xi}(t=0)$ at $t=0$. Using the force, Eq.
\eqref{eq:dashpot}, we obtain the relative velocity after a collision by solving Newton's equation of motion,
\begin{equation}
\label{eq:Newton}
m_{ij}^\text{eff}\ddot{\xi_{ij}}
=F\left(\xi_{ij},\dot{\xi}_{ij}\right)\,,
\end{equation}
with the effective mass $m_{ij}^\text{eff}=m_im_j/\left(m_i+m_j\right)$ and
initial conditions $\xi_{ij}(0)=0$ and
$\dot{\xi_{ij}}(0)=v$. The collision is complete at time $t_c$
when $\ddot{\xi}_{ij}(t_c)=0$ \cite{SchwagerPoeschel:2008}.
Of course, for a pairwise collision the final velocity as obtained
from Eq. \eqref{eq:cor} must coincide with the final velocity as
obtained from integrating Newton's equation of motion. Therefore, the solution
$\dot{\xi}_{ij}(t_c)$ of Eq. \eqref{eq:Newton} allows us to relate
the coefficient of restitution $\varepsilon$ to the parameters $k$ and $\gamma$ of the force law,
Eq. \eqref{eq:dashpot}, via
\begin{equation}
\label{eq:cor2}
\varepsilon = -\frac{\dot{\xi}(t_c)}{v}\,.
\end{equation}
Straightforward calculation \cite{SchwagerPoeschel:2008} yields for the
duration of the collision
\begin{equation}
\label{eq:tc}
t_c = \begin{cases}
\displaystyle\frac{1}{\omega}\left(\pi
-\arctan\displaystyle\frac{2\beta\omega}{\omega^2-\beta^2}\right) &
\mbox{for~~~} \displaystyle\beta<\frac{\omega_0}{\sqrt{2}} \\[0.5cm]
\displaystyle-\frac{1}{\omega}\arctan\displaystyle\frac{2\beta\omega}{
\omega^2-\beta^2} & \mbox{for~~~} \displaystyle\beta>\frac{\omega_0}{\sqrt{2}}
\end{cases}
\end{equation}
with
\begin{equation}
\omega_0^2\equiv\frac{k}{m^{\text{eff}}}\,;~~~~~\beta\equiv
\frac{\gamma}{2m^{\text{eff}}}\,;~~~~\omega\equiv\sqrt{\omega_0^2-\beta^2}\,.
\end{equation}
For the coefficient of restitution we obtain
\begin{equation}
\varepsilon = \begin{cases}
\displaystyle\exp\left[-\frac{\beta}{\omega}\left(\pi - \arctan\displaystyle\frac{2\beta\omega}{\omega^2-\beta^2}\right)\right] & \!\!\!\!\!\!\mbox{for~} \displaystyle\beta<\frac{\omega_0}{\sqrt{2}} \\[0.3cm]
\displaystyle\exp\left[\frac{\beta}{\omega}\arctan\displaystyle\frac{2\beta\omega}{\omega^2-\beta^2}\right]
& \!\!\!\!\!\!\mbox{for~} \displaystyle \beta\in\left[\frac{\omega_0}{\sqrt{2}},\omega_0\right]
\\[0.3cm]
\displaystyle\exp\left[-\frac{\beta}{\Omega}\ln\frac{\beta+\Omega}{\beta-\Omega}\right] & \!\!\!\!\!\!\mbox{for~} \beta>\omega_0
\end{cases}
\label{eq:dashpotCOR1}
\end{equation}
where $\Omega\equiv\sqrt{\beta^2-\omega_0^2}$.
Note that $\varepsilon$
depends on the parameters of the force
and the effective mass $m^\text{eff}$ of the colliding particles, that is,
$\varepsilon=\varepsilon(k,\gamma, m^\text{eff})$. Thus, $\varepsilon$ may not
be considered as a pure material constant.
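This mass dependence is easily made quantitative. The following minimal sketch
integrates Eq. \eqref{eq:Newton} with the force law, Eq. \eqref{eq:dashpot}, for a
single contact and evaluates $\varepsilon$ according to Eq. \eqref{eq:cor2} for two
different effective masses at fixed $k$ and $\gamma$; the ball masses and the damping
constant are illustrative assumptions, not values specified in the text.
\begin{verbatim}
import numpy as np

def restitution(k, gamma, m_eff, v=1.0):
    """Integrate m_eff * xi'' = min[0, -k xi - gamma xi'] from xi = 0,
    xi' = v > 0 until the contact force vanishes during the expansion,
    then return epsilon = -xi'(t_c)/v."""
    dt = 1.0e-4 / np.sqrt(k / m_eff)        # small step w.r.t. contact time
    xi, xidot = 0.0, v
    while True:
        force = max(0.0, k * xi + gamma * xidot)   # repulsive magnitude
        if force == 0.0 and xidot < 0.0:           # end of the contact
            return -xidot / v
        xidot -= (force / m_eff) * dt              # semi-implicit Euler
        xi += xidot * dt

k, gamma = 5.0e7, 200.0                     # N/m and kg/s (gamma arbitrary)
m1, m2 = 0.6, 0.057                         # kg, assumed ball masses
print(restitution(k, gamma, m1))                    # contact with the floor
print(restitution(k, gamma, m1 * m2 / (m1 + m2)))   # contact between the balls
\end{verbatim}
The two printed values differ, in accordance with Eq. \eqref{eq:dashpotCOR1}.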
\section{Simultaneous Contacts}
\label{sec:simult-cont}
\subsection{Equations of Motion}
\label{sec:equations-motion}
The ICM fails if we take into account the finite
duration of the collisions. In this case, it may happen that the
collision of the particles (process b) starts before the collision
of the lower particle with the floor (process a) has terminated. We then have
a three-particle interaction of the floor and both
balls which cannot be resolved using the concept of the coefficient of
restitution. Instead, the final velocity of the upper particle must be
determined by integrating Newton's equation of motion for the
three-particle system which requires the detailed knowledge of the
interaction forces. Consequently, we have to solve the set of
Newton's equations
\begin{equation}
\label{eq:set}
\begin{split}
m_1\ddot{z}_1+m_1g+F_{12}-F_{01}&= 0\\
m_2\ddot{z}_2+m_2g-F_{12}&= 0\,,
\end{split}
\end{equation}
where $F_{ij}$ is the model-specific interaction law between
particles $i$ and $j$ and the floor is considered as particle 0 (with $m_0\to\infty$).
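For the linear dashpot force, Eq. \eqref{eq:dashpot}, the integration of
Eqs. \eqref{eq:set} requires only a few lines. The sketch below advances positions and
velocities with a small fixed time step and stops once both spheres are in free flight,
clear of the floor, and separating; the radii, stiffness, gap and drop height follow the
example treated later, while the masses and damping constants are assumptions, since the
text works with $\varepsilon$ rather than with $\gamma_{ij}$ directly.
\begin{verbatim}
g = 9.81
k = 5.0e7                         # N/m
R1, R2 = 0.10, 0.01               # m
m1, m2 = 0.6, 0.057               # kg (assumed)
gamma01, gamma12 = 500.0, 50.0    # kg/s (assumed)
dh = 1.0e-4                       # initial gap Delta h
z1 = 0.6                          # initial height of the lower center
z2 = z1 + R1 + R2 + dh
v1 = v2 = 0.0
dt, t = 1.0e-7, 0.0

def contact_force(xi, xidot, gamma):
    """Magnitude of the repulsive dashpot force; zero whenever the
    dissipative term would render the total force attractive."""
    return max(0.0, k * xi + gamma * xidot) if xi > 0.0 else 0.0

while t < 1.0:
    xi01 = max(0.0, R1 - z1)                 # compression against the floor
    xi12 = max(0.0, R1 + R2 - (z2 - z1))     # mutual compression of the balls
    F01 = contact_force(xi01, -v1, gamma01)
    F12 = contact_force(xi12, v1 - v2, gamma12)
    v1 += (-g + (F01 - F12) / m1) * dt       # semi-implicit Euler step
    v2 += (-g + F12 / m2) * dt
    z1 += v1 * dt
    z2 += v2 * dt
    t += dt
    if xi01 == 0.0 and xi12 == 0.0 and z1 > R1 + 1.0e-3 and v2 > v1:
        break                                # both in free flight, separating

print(t, v1, v2, v2 - v1)                    # final and relative velocities
\end{verbatim}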
The failure of the simplifying ICM was discussed in the context of the
closely related problem of Newton's cradle. A simple
analysis reveals immediately that the textbook-like explanation using
isolated collisions is insufficient \cite{Kline:1960}. Instead, the details of the
interaction force must be taken into account. The explanation of
Newton's cradle is far from being simple and there is an intensive and
controversial discussion about this seemingly simple classroom experiment \cite{Chapman:1960,HerrmannSchmaelzle:1981,HerrmannSeitz:1982,PiquetteWu:1982,HerrmannSchmaelzle:1984,Reinsch:1994,HutzlerEtAl:2004,HinchSaintJean:1999}.
The necessity of considering the details of the interaction force
becomes obvious immediately when considering colliding rods instead of
spheres \cite{Auerbach:1994,Maecker:1953,FuPaul:1970}. In fact, the investigation
of longitudinal waves in colliding bodies and the corresponding duration of
the collision is a classical problem of mechanics, investigated by
some of the most eminent scientists of their time, such as
Poisson \cite{Poisson:1833} and Boltzmann \cite{Boltzmann:1882}, as well as by many
others \cite{Voigt:1915,Schneebeli:1871,Hamburger:1886}.
\subsection{Comparison with the ICM}
\label{sec:comparison}
In Sec. \ref{sec:coeff-rest-result} we derived the coefficient of restitution
from the interaction force. Using this result, we can compute the final relative
velocity $v_r^\prime$ by means of Eq. \eqref{eq:vrelprime}, employing the
assumption of independent collisions. Alternatively we can obtain $v_r^\prime$
by solving the set of equations \eqref{eq:set}
numerically. The latter approach does not require any assumption on the sequence
of the collisions. We will see that both results may deviate considerably
owing to the rather complex dynamics of the system.
In order to compare both results we use Eq. \eqref{eq:dashpotCOR1} to map
the constants of the force law to the coefficient of restitution,
$(k,\gamma,m^\text{eff}) \leftrightarrow \varepsilon$.
We assume that the collisions between the lower sphere and the floor
($ij=01$) and between the spheres ($ij=12$) take place with the same coefficient
of restitution. Since the effective mass enters Eq. \eqref{eq:dashpotCOR1}, for
given material stiffness, $k=\text{const.}$, Eq. \eqref{eq:dashpotCOR1} then
provides a relation between $\varepsilon$ and $\gamma_{ij}$, thus, we can
determine $\gamma_{ij}$ by specifying $\varepsilon$ as a control parameter of
the problem.
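In practice this mapping can be carried out, for instance, by inverting the first
branch of Eq. \eqref{eq:dashpotCOR1} numerically. A minimal sketch of this step, with
assumed ball masses (the text specifies radii rather than masses), reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def eps_underdamped(gamma, k, m_eff):
    """First branch of the epsilon(k, gamma, m_eff) relation,
    valid for beta < omega_0/sqrt(2)."""
    beta = gamma / (2.0 * m_eff)
    omega0 = np.sqrt(k / m_eff)
    omega = np.sqrt(omega0**2 - beta**2)
    return np.exp(-(beta / omega)
                  * (np.pi - np.arctan(2.0 * beta * omega / (omega**2 - beta**2))))

def gamma_from_eps(eps, k, m_eff):
    """Damping constant gamma that yields the prescribed epsilon."""
    gamma_max = 0.999 * np.sqrt(2.0) * m_eff * np.sqrt(k / m_eff)
    return brentq(lambda g: eps_underdamped(g, k, m_eff) - eps, 1.0e-12, gamma_max)

k = 5.0e7                      # N/m, stiffness used in the text
m1, m2 = 0.6, 0.057            # kg, assumed masses of the two balls
print(gamma_from_eps(0.7, k, m1))                    # gamma_01 (floor contact)
print(gamma_from_eps(0.7, k, m1 * m2 / (m1 + m2)))   # gamma_12 (ball-ball contact)
\end{verbatim}
For example, for $\varepsilon = 0.7$ one obtains $\gamma_{01} > \gamma_{12}$, in line
with the remark below.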
The latter assumption implies the somewhat unphysical feature that the lower side
of the large sphere (where it contacts the floor) is characterized by a
dissipative constant $\gamma_{01}$ that is larger than the constant $\gamma_{12}$
of its upper side (where it contacts the smaller sphere). We will justify this assumption in App.
\ref{sec:same-material} where we show that the perhaps more plausible
assumption $\gamma_{01}=\gamma_{12}$, implying $\varepsilon_{01}\ne
\varepsilon_{12}$, leads to qualitatively identical results.
\section{Basketball -- Tennis Ball Problem}
\label{sec:tbp_lin_dash}
\subsection{Collision Sequence}
\label{sec:collision-sequence}
Let us assume two vertically aligned balls (the basketball -- tennis ball problem)
as sketched in Fig. \ref{fig:sketch}. We integrate Newton's equation of motion,
Eq. \eqref{eq:set}, for this system numerically and obtain the forces between
the bottom and the lower sphere and between both spheres, see
Fig. \ref{fig:dash_forces}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.99\columnwidth,clip]{forces.pdf}
\caption{Forces $F_{01}$ and $F_{12}$ obtained from solving Eqs. \eqref{eq:set}.
During the contact between the lower particle and the floor there occur
multiple contacts between the spheres (full line). For discussion see the text.
Parameters: $R_1=10$\,cm, $R_2=1$\,cm, $\Delta h=0.1$\,mm, $z_{1}^{(0)}=0.6$\,m,
$k=5.0\cdot10^7$\,N/m, $\varepsilon=0.7$ (corresponding to hard rubber).
}
\label{fig:dash_forces}
\end{figure}
For time $t_{01}^\text{(b)} \le t \le t_{01}^\text{(e)}$ the lower particle is
in contact with the floor as indicated by the force $F_{01}\ne 0$. During this
interval the balls are in contact repeatedly, starting at time (first contact)
$t_{12}^\text{(b)}$ and ending (last contact) at time $t_{12}^\text{(e)}$ as
indicated by $F_{12}\ne0$. An interesting detail is the discontinuity of $F_{01}$ at $t=t_{01}^\text{(b)}$, which is a consequence of the force law, Eq. \eqref{eq:dashpot}: at the instant of the contact, where $\xi_{01}\to 0$, the elastically restoring term, $k\xi_{01}$, vanishes, whereas the (repulsive) dissipative term, $\gamma\dot{\xi}_{01}$, has a finite value as soon as the particles get into contact.
The existence of multiple collisions shown in Fig. \ref{fig:dash_forces} demonstrates
that the ICM described in Sec. \ref{sec:ICM} fails for the chosen set of
parameters, which raises two main questions:
\begin{enumerate}
\item How many contacts between the spheres occur and how does their number
depend on the system parameters ($\Delta h$, $R_1$, $R_2$, $\gamma_{ij}$ or
$\varepsilon$ respectively)?
\item If multiple collisions take place, when does the collision sequence terminate?
\end{enumerate}
Depending on the system parameters we may obtain
$t_{12}^\text{(e)}\leq t_{01}^\text{(e)} $ or
$t_{12}^\text{(e)}>t_{01}^\text{(e)} $; therefore, the second question must be
answered by a definition: the collision sequence terminates at time $t=t_f$ (see
Fig. \ref{fig:dash_forces}) when the last contact between the spheres ceases, before the large sphere collides with the floor for the
second time.
To answer the first question,
we refer to Fig. \ref{fig:dash_contacts} which illustrates the sequence of
collisions in dependence of the coefficient of restitution $\varepsilon$ for
fixed $\Delta h$ and $R_1/R_2$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.99\columnwidth]{numVsAna.pdf}
\caption{Sequence of collisions for varying coefficient of restitution. The
dashed line shows the end of the contact between the lower sphere and the floor,
$t_{01}^\text{(e)}$.
The fat line corresponds to the force drawn in Fig. \ref{fig:dash_forces}
($R_1=10$ cm, $R_2=1$ cm, $\Delta h=0.1$ mm, $z_{1}^{(0)}=0.6$ m,
$k=5.0\cdot10^7$ N/m).}
\label{fig:dash_contacts}
\end{figure}
The value of $\varepsilon$ was adjusted by varying $\gamma$ according to
Eq. \eqref{eq:dashpotCOR1}, while keeping $k=5.0\cdot10^7\,\text{N/m}$ fixed.
Figure \ref{fig:dash_contacts} should be read horizontally (for fixed value of
$\varepsilon$): each black or grey line marks time intervals when the particles are in
contact.
For elastic balls, $\varepsilon=1$, and the chosen parameters, three
collisions occur.
For sufficiently large $\Delta h$, this number may be unity, that is, the
independent-collisions condition is fulfilled (see below). Keeping $\Delta h$, $k$
and the radii $R_1$ and $R_2$ constant and decreasing $\varepsilon$, the number
of contacts increases. This is due to the fact that the relative velocity of the
balls decreases because of inelastic collisions and, thus, the intervals of free
flight become shorter while the duration of the contacts depends only weakly on
the value of the inelasticity.
For yet smaller $\varepsilon$ the relative velocity after the $k^\text{th}$
contact may be small enough that the lower ball catches up with the upper one
because of its upward acceleration due to its contact with the floor. This
effect makes some free-flight intervals vanish as $\varepsilon$ decreases and,
thus, reduces the number of contacts. Summarizing, for each set of parameters
$\{\Delta h$, $R_1$, $R_2$, $k\}$ the number of collisions as a function of
$\varepsilon$ has a single maximum.
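The contact counts shown in Figs. \ref{fig:dash_contacts} and \ref{fig:NoContacts} can be extracted from the simulated force signal $F_{12}(t)$ by counting the maximal intervals on which the force is positive. A possible helper (hypothetical, for illustration only):
\begin{verbatim}
import numpy as np

def count_contacts(F12, tol=0.0):
    """Number of maximal intervals with F12 > tol, i.e. the number of
    separate sphere-sphere contacts in a sampled force signal."""
    in_contact = np.asarray(F12) > tol
    starts = np.flatnonzero(in_contact[1:] & ~in_contact[:-1])
    return starts.size + (1 if in_contact[0] else 0)
\end{verbatim}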
For the force law Eq. \eqref{eq:dashpot} the basketball -- tennis ball problem may
be solved analytically by a piecewise procedure, see App. \ref{sec:ana}. As a
check against numerical errors, the horizontal gray lines (in between the black
lines) show the same information as obtained from this analytical solution,
which agrees perfectly with the numerical data.
There is an interesting case when the final velocity of the lower ball
after losing contact with the floor is only slightly larger than the velocity of
the upper ball after the previous collision. Since both balls then move only under
the action of gravity, they may collide one final time even after the contact between the
lower ball and the floor has already ended. These events appear in Fig.
\ref{fig:dash_contacts} as narrow spikes at
$\varepsilon\approx0.78$,
$\varepsilon\approx0.67$, etc.
The number of contacts of the spheres as a function of $\varepsilon$ and the
initial distance $\Delta h$ is shown in more detail in Fig.
\ref{fig:NoContacts} (top). As explained above, for each value of $\varepsilon$
there is an interval of $\Delta h$ which maximizes the number of
contacts.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.99\columnwidth]{nContactDh.pdf}
\includegraphics[width=0.99\columnwidth]{nContactRr.pdf}
\caption{Number of contacts between
the spheres as a function of $\varepsilon$ and $\Delta h$ (top) and
$\varepsilon$ and $R_1/R_2$ (bottom).
The dashed lines show the value of $\Delta h$ and
$R_1/R_2$ used
in Fig. \ref{fig:dash_contacts}; the $+$ symbol shows the parameters used
in Fig. \ref{fig:dash_forces}. The solid line in the lower panel indicates
$R_1/R_2=1$. (Parameters: $k=5.0\cdot10^7\text{N/m}$,
$R_1=10\text{cm}$ and $R_2=1\text{cm}$ (top) and $k=5.0\cdot10^7\text{N/m}$,
$R_2=1\text{cm}$ and $\Delta h=0.1\text{mm}$ (bottom))}
\label{fig:NoContacts}
\end{figure}
For the interaction force, Eq. \eqref{eq:dashpot}, the ratio $t_{c,01}/t_{c,12}$ of the duration of the contact between the lower sphere and the ground, $t_{c,01}$, and the duration of the contact between the lower and the upper sphere, $t_{c,12}$, increases with $m_1/m_2$ (or $R_1/R_2$, respectively). Consequently, the number of
contacts increases with $R_1/R_2$, as shown in Fig. \ref{fig:NoContacts} (bottom). On the other hand, increasing $R_1/R_2$ also increases the initial relative velocity between the two spheres and with that the intervals of free flight, which in turn reduces the possible number of contacts. While the first effect dominates, the interplay of both effects explains the rather complex behavior shown in the bottom panel of Fig. \ref{fig:NoContacts}.
From Figs. \ref{fig:dash_contacts} and \ref{fig:NoContacts} we see that for a
vast range of parameters the true collision scenario as obtained from the
integration of Newton's equations of motion deviates drastically from the
independent-collision scenario outlined in Sec. \ref{sec:ICM}.
\subsection{Effective Coefficient of Restitution}
\label{sec:effCor}
By solving Newton's equation, we can compute the final relative
velocity $v_r^\prime=v_2(t_f)-v_1(t_f)$ which corresponds to Eq.
\eqref{eq:vrelprime} obtained from the ICM. To compare both results, we compute
$v_r^\prime$ by integrating Eq. \eqref{eq:set} using the interaction force Eq. \eqref{eq:dashpot} for a certain set of
parameters $\{\Delta h,\:m_1,\:m_2,\:\:k\}$ and a specified
$\varepsilon=\varepsilon_\text{spec}$ (which in turn determines $\gamma$ via Eq.
\eqref{eq:dashpotCOR1}). Then, by inverting Eq. \eqref{eq:vrelprime} we
determine the coefficient of restitution $\varepsilon=\varepsilon_\text{eff}$
which would yield the same final relative velocity for the ICM. If
$\varepsilon_\text{eff} /\varepsilon_\text{spec}\approx 1$, both models yield
the same result, that is, the ICM is an acceptable approximation. Otherwise, the
ICM fails.
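In code, determining $\varepsilon_\text{eff}$ amounts to a one-dimensional root search. The sketch below is illustrative only: the callable \texttt{icm\_vr\_prime} stands for the ICM prediction of Eq. \eqref{eq:vrelprime} (not restated here); the example further assumes the textbook ICM relation $v_r^\prime=\varepsilon(1+\varepsilon)v$ for a vanishing gap, which may differ in detail from Eq. \eqref{eq:vrelprime}, and the measured value is made up for the demonstration.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def effective_restitution(vr_prime_newton, icm_vr_prime, lo=1e-6, hi=1.0):
    """eps_eff such that the ICM prediction equals the final relative velocity
    obtained from Newton's equations (assumes monotonicity in eps)."""
    return brentq(lambda eps: icm_vr_prime(eps) - vr_prime_newton, lo, hi)

v_impact = np.sqrt(2.0 * 9.81 * 0.5)                      # free fall from 0.5 m
icm_vr_prime = lambda eps: eps * (1.0 + eps) * v_impact   # assumed ICM relation
print(effective_restitution(1.2, icm_vr_prime))           # hypothetical measured v_r'
\end{verbatim}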
Consider the dependence of $\varepsilon_\text{eff}/\varepsilon_\text{spec}$
on the initial distance $\Delta h$. For large $\Delta h$ the lower sphere leaves
the floor before it contacts the upper one, that is, the ICM holds true. Figure
\ref{fig:EpsRat_dh} (top) shows that
$\varepsilon_\text{eff}/\varepsilon_\text{spec} \to 1$ with increasing $\Delta
h$. Moreover, as expected for $\varepsilon_\text{eff}/\varepsilon_\text{spec}\to
1$ there is only one contact which is a necessary (but not sufficient)
precondition for independent collisions.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.99\columnwidth]{dhGross.pdf}
\includegraphics[width=0.99\columnwidth]{dhKlein.pdf}
\caption{
$\varepsilon_\text{eff}/\varepsilon_\text{spec}$ as a function of $\Delta h$ and
the corresponding number of contacts (right axis, top). The bottom figure shows a
magnification of the small $\Delta h$ range. Parameters:
$R_1=10\text{cm}$,
$R_2=1\text{cm}$,
$k=5.0\cdot10^7\text{N/m}$,
$\varepsilon_\text{spec}=0.9$.
}
\label{fig:EpsRat_dh}
\end{figure}
Figure \ref{fig:EpsRat_dh} (bottom) is a magnification of the range of small
$\Delta h$. As discussed before, the number of contacts as a function of $\Delta h$ has
a maximum.
The oscillations in the number of contacts as a function of $\Delta h$ for very small $\Delta h$ correspond to the spikes shown in Fig. \ref{fig:dash_contacts} where the lower sphere catches up with the upper after the lower sphere has already left the ground.
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\columnwidth]{r1R2.pdf}
\caption{$\varepsilon_\text{eff}/\varepsilon_\text{spec}$ as a function of
$R_1/R_2$ (with $R_2=1\,\text{cm}$) and the corresponding number of contacts
(right axis). Further parameters: $k=5.0\cdot10^7\text{N/m}$,
$\varepsilon_\text{spec}=0.8$, $\Delta h=0.1\text{mm}$.}
\label{fig:EpsRat_rr}
\end{figure}
Similarly, Fig. \ref{fig:EpsRat_rr} shows $\varepsilon_\text{eff}/\varepsilon_\text{spec}$ as a function of $R_1/R_2$. Since
increasing $R_1/R_2$ increases the number of contacts, $\varepsilon_\text{eff}/\varepsilon_\text{spec}$ decreases and thus, as expected, the ICM becomes invalid with increasing $R_1/R_2$.
While for most of the parameter space $\varepsilon_\text{eff}/\varepsilon_\text{spec}<1$, there is a small interval of $R_1/R_2$ where
$\varepsilon_\text{eff}/\varepsilon_\text{spec}>1$ (inset in Fig. \ref{fig:EpsRat_rr}), which is a deviation from the ICM as well. Here, because the durations of the contact between the upper and the lower sphere and of the contact between the lower sphere and the floor are similar, the lower sphere, while still in contact with the floor, pushes the upper one upward. As far as we can see, this is the only (tiny) effect which allows for
$\varepsilon_\text{eff} > \varepsilon_\text{spec}$.
\section{Conclusion}
We considered the motion of two vertically aligned spheres which are released to collide with the floor under the action of gravity. For the analysis of the dynamics of this textbook problem (basketball -- tennis ball problem) we used two complementary methods. First we described the system exploiting the {\em independent collision model} (ICM) which assumes instantaneous collisions between the spheres and between the lower sphere and the floor. The collisions are described by a single number, the coefficient of restitution, $\varepsilon$, and the duration of the collisions is neglected. Second, we described the dynamics by analytically and numerically solving Newton's equation of motion. Here the collisions are characterized by an interaction force law, $F(\xi,\dot{\xi})$. For the case of the linear dashpot model used here, the force is a function of the elastic and dissipative parameters $k$ and $\gamma$. Since there is a direct relation between $\varepsilon$ and $\{k,\gamma\}$, we can compare the results of the ICM and the solution of Newton's equations.
We specify two characteristics of the process, a) the final relative velocity, $v^\prime$, between the spheres and b) the number of collisions between the spheres during the process. If the approaches were equivalent, we should obtain equivalent results for a) and b).
Obviously, in the case of the ICM there is only one collision between the spheres and the final result for $v^\prime$ is the solution of a textbook problem, Eq. \eqref{eq:vrelprime}. In this article we show that Newton's equations yield a different scenario, including multiple collisions, and that the ICM is only valid in a certain limit, that is, the ICM fails for a wide range of parameters.
To quantify the deviations, we solve Newton's equations with parameters $k$ and $\gamma$ that correspond to a certain specified coefficient of restitution $\varepsilon=\varepsilon_\text{spec}$. Then we compare this value with the {\em effective} coefficient of restitution, $\varepsilon_\text{eff}$, obtained from the final relative velocity as delivered by Newton's equations. The value $\varepsilon_\text{eff}/\varepsilon_\text{spec}=1$ would indicate that both models agree. Our results reveal, however, a dramatic deviation from this ideal behavior. In Figs. \ref{fig:EpsRat_dh} and \ref{fig:EpsRat_rr} we see that, in contradiction to the ICM, the ratio $\varepsilon_\text{eff}/\varepsilon_\text{spec}$ may adopt any value from almost zero up to slightly larger than one, that is, the ICM fails dramatically.
While our subject, the basketball -- tennis ball problem, is only a cute but relatively unimportant toy problem, our results may have serious consequences for numerical simulation techniques of granular many-particle systems. There exist two established techniques for the simulation of granular systems, Molecular Dynamics (MD) and Event-driven Molecular Dynamics (EMD). While MD solves Newton's equations of motion for all $N$ particles constituting the granular system, thus solving a system of $3N$ (without rotation) coupled, strongly non-linear differential equations, EMD describes the dynamics of the $N$-particle system as a sequence of pairwise collisions. The latter approach allows for a great speedup of the numerical simulation since, instead of the computer-time-intensive solution of differential equations, we only have to compute postcollisional velocities from the precollisional ones, as a function of the coefficient of restitution for each pair of colliding particles, $\{\vec{v}_i, \vec{v}_j,\varepsilon\} \to \{\vec{v}_i^{\,\prime}, \vec{v}_j^{\,\prime}\}$, via a simple propagation function. In between the collisions the particles follow simple ballistic trajectories.
It is obvious that EMD allows for very efficient simulations as compared with MD, in particular for large $N\sim 10^6\dots 10^8$; however, this speedup comes at the price of the assumption of independent collisions, that is, EMD assumes instantaneous collisions and neglects their duration. While this assumption may be justified in a granular gas where the mean free flight time is large compared to the typical duration of collisions, it fails for dense systems. Our simple one-dimensional, 3-particle system shows that the failure may be dramatic.
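The propagation function $\{\vec{v}_i, \vec{v}_j,\varepsilon\} \to \{\vec{v}_i^{\,\prime}, \vec{v}_j^{\,\prime}\}$ mentioned above is, for a head-on collision, the standard rule that conserves momentum and reverses the normal relative velocity scaled by $\varepsilon$; a minimal one-dimensional sketch reads:
\begin{verbatim}
def collide(v1, v2, m1, m2, eps):
    """EMD update for a head-on collision: momentum is conserved and the
    relative velocity is reversed and multiplied by eps."""
    vr = v1 - v2
    v1p = v1 - (1.0 + eps) * m2 / (m1 + m2) * vr
    v2p = v2 + (1.0 + eps) * m1 / (m1 + m2) * vr
    return v1p, v2p
\end{verbatim}
In an event-driven simulation this update, together with ballistic free flight between events, replaces the integration of the contact dynamics.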
For the analytical calculations presented in this article we made two major assumptions whose justification might not be obvious beforehand: First, we assumed a linear-dashpot force, Eq. \eqref{eq:dashpot}, for the interaction of viscoelastic spheres. This force allows for a simple mapping of the constants $k$ and $\gamma$ to the coefficient of restitution, which is, moreover, a constant in this case. Of course, the interaction of spheres is described by a (modified) Hertz law, which leads to an impact-velocity-dependent $\varepsilon$. We could perform the entire calculation presented here also for the Hertz law, however, at a {\em much} larger mathematical effort (see \cite{SchwagerPoeschel:2008a} for a similar calculation). We prefer here the simplified force and demonstrate in Appendix \ref{sec:AppVisco} that the Hertz law leads to qualitatively identical results.
The second simplification concerns the assumption of a universal coefficient of restitution for the description of the collisions between the particles and between the lower particle and the floor. Since the effective mass enters the mapping between the force constants and $\varepsilon$, the assumption of a universal $\varepsilon$ implies that the lower sphere is characterized by a certain set of parameters $\{k,\gamma\}$ when colliding with the floor, but by a different set of parameters when colliding with the upper sphere. The alternative assumption of invariant material parameters is, perhaps, more plausible but then leads to different values of the coefficient of restitution for particle-particle and particle-floor collisions. While these alternative assumptions lead, of course, to different results, in Appendix \ref{sec:same-material} we demonstrate that the qualitative properties of the dynamics are the same for both assumptions.
| {
"timestamp": "2010-10-01T02:01:57",
"yymm": "1009",
"arxiv_id": "1009.6153",
"language": "en",
"url": "https://arxiv.org/abs/1009.6153",
"abstract": "If a tennis ball is held above a basket ball with their centers vertically aligned, and the balls are released to collide with the floor, the tennis ball may rebound at a surprisingly high speed. We show in this article that the simple textbook explanation of this effect is an oversimplification, even for the limit of perfectly elastic particles. Instead, there may occur a rather complex scenario including multiple collisions which may lead to a very different final velocity as compared with the velocity resulting from the oversimplified model.",
"subjects": "Classical Physics (physics.class-ph); Soft Condensed Matter (cond-mat.soft); Popular Physics (physics.pop-ph)",
"title": "Three Balls Problem Revisited - On the Limitations of Event-Driven Modeling",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9669140235181257,
"lm_q2_score": 0.7341195327172402,
"lm_q1q2_score": 0.709830471122873
} |
https://arxiv.org/abs/1410.6537 | Choices, intervals and equidistribution | We give a sufficient condition for a random sequence in [0,1] generated by a $\Psi$-process to be equidistributed. The condition is met by the canonical example -- the $\max$-2 process -- where the $n$th term is whichever of two uniformly placed points falls in the larger gap formed by the previous $n-1$ points. This solves an open problem from Itai Benjamini, Pascal Maillard and Elliot Paquette. We also deduce equidistribution for more general $\Psi$-processes. This includes an interpolation of the $\min$-2 and $\max$-2 processes that is biased towards $\min$-2. |
\subsubsection*{Acknowledgments}
Much thanks to Elliot Paquette and Pascal Maillard for many useful conversations. The first referee's careful reading and suggestion to generalize to arbitrary $\Psi$ processes are greatly appreciated. Itai Benjamini is the source of a very similar model that spurred this research. Toby Johnson and Bal\'azs Gerencs\'er provided nice suggestions on earlier drafts. I am grateful to my advisor Christopher Hoffman for encouraging me to stick with the problem and his advice to consider a small subinterval. Gerandy Brita Montes de Oca's assistance with reading and understanding \cite{elliot} was very helpful.
Tatiana Toro and Shirshendu Ganguly gave some useful advice about the operator $\mathscr C$. Thanks to Chloe Huber and Chris Fowler for being good listeners about the ups and downs of this project. Lastly, I appreciate the partial support from NSF RTG grant 0838212.
\section{Proving \thref{cor:main}} \label{sec:inequalities}
For this entire section we will let $F$ denote $F^{\Psi}$. To establish \eqref{eq:inequality}, we rely almost entirely on \eqref{eq:integro} and \eqref{eq:diff}. For convenience we restate them here:
\begin{align} |z\psi'(F(z)) F'(z) - \psi( F(z))| \leq (2 - \delta) \psi( F(z) ) \tag{1}, \qquad \delta \in (0,1],
\end{align}
\begin{align}
F'(z) = z \int_z^\infty \frac 1 y d \Psi( F(y)), \tag{2}
\end{align}
\begin{align}
z F''(z) - F'(z) + z \psi( F(z)) F'(z) = 0. \tag{3}
\end{align}
We start with the proof of \thref{cor:main}. It follows from a sequence of lemmas.
\begin{proof}[Proof of \thref{cor:main}]
First off we need the conclusion of \thref{cor:extra} to guarantee \thref{master} \ref{continuous} holds for the interpolations we consider.
Equidistribution for the $\max$-2 process then follows from \thref{lem:special} by taking $p_2=1$. The fact that the interpolation that is $60$\%-$\min$-2 satisfies \eqref{eq:inequality} follows by taking $p_{-2} = .6$ in \thref{lem:pinch}. Part three (for general interpolations) follows from \thref{lem:pinch2}.
\end{proof}
Now we give the proofs of the necessary lemmas. We break this up into two sections: one for interpolations of $\max$-2 and $\min$-2 processes and the other for general interpolations.
\subsection{Interpolations of $\min$-$2$ and $\max$-$2$}
Fix $p_{-2}, p_2 \in [0,1]$ with $p_2+p_{-2} = 1$. We will work exclusively in this subsection with $\Psi$ that are interpolations of the $\min$-2 and $\max$-2 process. Thus,
\begin{align*}
\Psi(u) &= p_2 u^2 + p_{-2} (1 - (1-u)^2), \\
\psi(u) &= 2 p_2 u + 2 p_{-2} (1- u), \\
\psi'(u) &= 2 p_2 - 2 p_{-2},
\end{align*}
This is the distribution function (and derivatives) for an interpolation where at each step we add a point according to the $\min$-2 process with probability $p_{-2}$ and according to the $\max$-2 process with probability $p_2$.
Our first lemma establishes \eqref{eq:inequality} holds so long as $p_{-2} \leq p_2$. Note that the case $p_2=1$ is the $\max$-2 process.
\begin{lemma}\thlabel{lem:special}
If $p_{-2} \leq p_2$ then \eqref{eq:inequality} holds.
\end{lemma}
\begin{proof}
Dropping the constant $2-\delta$ from the right side of \eqref{eq:inequality} it suffices to prove that
\begin{align}
| \psi(F(z)) - z\psi'(F(z)) F'(z) | \leq \psi(F(z)). \label{eq:sufficient2}
\end{align}
We break into two cases:
\begin{itemize}
\item First suppose $\psi(F(z)) \geq z \psi'(F(z))F'(z)$ so that \eqref{eq:sufficient2} reduces to proving that $$- z \psi'(F(z)) F'(z) \leq 0.$$ As $F$ is increasing we know $F'(z) \geq 0$. The hypothesis $p_{-2} \leq p_2$ guarantees that $\psi'(F(z)) \geq 0$. Thus, the inequality is satisfied.
\item Next, suppose $ \psi( F(z) ) \leq z\psi'(F(z)) F'(z)$. Rearranging \eqref{eq:sufficient2}
we seek to show
$$2(p_2 - p_{-2}) zF'(z) \leq 2 \psi( F(z) ).$$
Note that both sides are zero at $z=0$. By the fundamental theorem of calculus it then suffices to prove the above inequality holds for the derivatives.
Differentiating and again using the fact that $\psi'(F(z)) = 2 (p_2 - p_{-2})$ reduces the problem to establishing
\begin{align*}2(p_2 - p_{-2}) ( z F''(z) + F'(z) ) \leq 4(p_2 - p_{-2}) F'(z).\end{align*}
After some algebra this is equivalent to
\begin{align}
z F''(z) \leq F'(z). \label{eq:reduction2}
\end{align}
From \eqref{eq:diff} we know that $z F''(z) = F'(z)- z \psi( F(z) ) F'(z)$. Substituting this
into \eqref{eq:reduction2}, a sufficient condition
is that
$$F'(z)- z \psi(F(z)) F'(z) \leq F'(z).$$
This holds as $F'(z)$ and $\psi( F(z))$ are nonnegative.
\end{itemize}
\end{proof}
To prove \eqref{eq:inequality} holds when $p_{-2} > p_2$ requires a different analysis of the differential equation at \eqref{eq:diff}.
\noindent \thref{lem:zF'} shows $zF'(z)$ can be bounded in terms of $p_{2}$.
\begin{lemma} \thlabel{lem:F'1}
If $p_{-2} > p_2$ then
$\lim_{\epsilon \to 0 } {F'(\epsilon)}/{\epsilon} \leq 2.$
\end{lemma}
\begin{proof}
Starting from the formula at \eqref{eq:integro} then integrating by parts gives
\begin{align}
\lim_{\epsilon \to 0} \frac{F'(\epsilon)}{\epsilon} &= \int_0^\infty \frac{ 1}{y} d\Psi(F(y))
=\xnorm{\Psi \circ F}. \label{eq:F'1}
\end{align}
Plugging into $\Psi$ we have
\begin{align}
\Psi(F(y)) &= p_2 F(y)^2 + p_{-2}(1 - (1-F(y))^2)\\
& =F(y)[ (p_2- p_{-2}) F(y) + 2p_{-2} ] \nonumber.
\end{align}
The hypothesis $p_2 < p_{-2}$ means an upper bound for the above is
\begin{align}
\Psi(F(y)) &\leq 2 p_{-2} F(y) \leq 2 F(y). \label{eq:F'11}
\end{align}
\thref{master} \ref{rate} implies that $\xnorm{F} = 1$. It follows from \eqref{eq:F'1} and \eqref{eq:F'11} that
\begin{align*}
\lim_{\epsilon\to 0} \frac { F'(\epsilon)}{\epsilon} \leq 2\xnorm{F} = 2.
\end{align*}
\end{proof}
\begin{lemma} \thlabel{lem:zF'}
If $p_{-2} > p_2$ then
$$z F'(z) \leq 2( p_2 e)^{-2 }.$$
\end{lemma}
\begin{proof}
Integrate \eqref{eq:diff} as in \cite[Proposition 8.1]{elliot} so that for any $\epsilon >0$
\begin{align*}F'(z) = \frac{F'(\epsilon)}{\epsilon}z \exp\left( - \int_\epsilon^z \psi(F(y)) dy \right).\end{align*}
Taking $\epsilon \to 0$ and applying \thref{lem:F'1} gives
\begin{align}F'(z) \leq 2z \exp\left( - \int_0^z \psi(F(y)) dy \right).\label{eq:F'}\end{align}
We observe that $\psi(F(y)) = 2p_{2}F(y) + 2p_{-2}(1 - F(y)).$ Since we are assuming $p_{-2} > p_2$ and know that $F(y) \leq 1$ we obtain a lower bound by evaluating at $\psi(1)$:
\begin{align}\psi(F(y)) \geq \psi(1) = 2 p_2. \label{eq:psi}
\end{align}
Applying this to \eqref{eq:F'} and multiplying by $z$ gives
\begin{align*}
zF'(z) &\leq 2 z^2 e^{- 2p_2 z}.
\end{align*}
The maximum of $z^2 e^{-2p_2 z}$ is at $z= 1/p_2$. Plug this in above to obtain the claimed bound.
\end{proof}
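The elementary maximization used in the last step can be verified symbolically; the snippet below (illustration only) recovers the critical point $z=1/p_2$ and the value $2(p_2e)^{-2}$:
\begin{verbatim}
import sympy as sp

z, p2 = sp.symbols('z p_2', positive=True)
expr = z**2 * sp.exp(-2 * p2 * z)
print(sp.solve(sp.diff(expr, z), z))          # positive critical point 1/p_2
print(sp.simplify(2 * expr.subs(z, 1 / p2)))  # 2*exp(-2)/p_2**2
\end{verbatim}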
\begin{lemma} \thlabel{lem:pinch}
If $p_2 < p_{-2} \leq .6$ then \eqref{eq:inequality} holds.
\end{lemma}
\begin{proof}
Using the triangle inequality on the left side of \eqref{eq:inequality} it suffices to find $\delta$ such that for all $z \geq 0$
\begin{align}
| z \psi'(F(z)) F'(z)| + |\psi(F(z)) | \leq (2-\delta)\psi(F(z)) . \label{eq:first}
\end{align}
Because $F$ is a distribution function, we know that $F'\geq 0$. Also, note that $$(2-\delta)\psi(F(z)) - |\psi(F(z))| \leq (1- \delta) \psi(F(z)).$$ Thus, to establish \eqref{eq:first} it is enough to prove
\begin{align} z F'(z) \leq \frac {(1- \delta)\psi(F(z) )} {|\psi'(F(z))|}, \qquad \text{for $z \geq 0$}. \nonumber
\end{align}
We have from \eqref{eq:psi} that $\psi(u) \geq 2p_2$ and can compute $|\psi'(u)| = 2 |p_2 - p_{-2}|$.
It then suffices to prove
\begin{align*}
z F'(z) \leq \frac{ p_2(1- \delta)}{|p_2- p_{-2}|}.
\end{align*}
By \thref{lem:zF'} it suffices to choose $\delta$, $p_{-2}$ and $p_{2}$ so that
$$ 2\left( p_2 e \right)^{-2} \leq \frac{ p_2(1- \delta)}{|p_2- p_{-2}|}.$$
Combining with our hypotheses we have the following system of constraints
\begin{align}
2e^{-2}|p_2 - p_{-2}| &\leq (1-\delta)(p_2)^3, \nonumber\\
p_2 + p_{-2} &= 1 ,\nonumber\\
p_2 &< p_{-2},\nonumber \\
0 &< \delta \leq 1. \nonumber
\end{align}
Letting $\delta \to 0$ and using the fact that $p_{-2}$ is assumed to be larger than $p_{2}$, the admissible values of $p_{-2}$ must be strictly smaller than the real root of the cubic
$$\frac 2 { e^2} (p_{-2} - (1- p_{-2})) = (1 - p_{-2})^3.$$
This root is approximately $.61$; thus every $p_{-2} \leq .6$ lies in the solution set.
\end{proof}
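The numerical value quoted at the end of the proof can be checked directly; the following short computation (for illustration only) locates the real root of the cubic in $p_{-2}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

f = lambda p: (1.0 - p)**3 - (2.0 / np.e**2) * (2.0 * p - 1.0)
print(brentq(f, 0.5, 1.0))   # approximately 0.61
\end{verbatim}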
\begin{remark}
The bound $p_{-2} \leq .6$ could be optimized further in the preceding lemmas, but the gain would be marginal. Something like $p_{-2} \leq .68$ is the best that comes out of optimizing our argument. We sacrifice this marginal gain for the sake of clarity.
\end{remark}
\subsection{General interpolations of $\max$-$k$, uniform and $\min$-$k$ processes}
We will reprove versions of the previous three lemmas for more general interpolations. Let $\mathbf p =(p_k)_{k \neq -1,0}$ be a probability measure on $\mathbb Z \setminus \{-1,0\}$. In this subsection we consider the interpolations
\begin{align*}
\Psi(u) &= p_1 u + \textstyle \sum_{k \geq 2} \big[ p_k u^k + p_{-k} (1 - (1-u)^k) \big].
\end{align*}
Define $C_{\mathbf{p}} = \sum_{k \geq 2} k (k-1)(p_k + p_{-k}).$ This constant arises because $\sup_{u \geq 0} |\psi'(u)| \leq C_{\mathbf{p}}.$ First we give a bound on $F'$ that holds for any $\Psi$-process.
\begin{lemma} \thlabel{lem:F'2}
Let $\Psi$ satisfy $(C)$ and $(D)$. For all $z \geq 0$ it holds that ${F'(z)} \leq 1.$
\end{lemma}
\begin{proof}
This follows from a simple bound on \eqref{eq:integro}:
\begin{align*}
F'(z) = z \int_z^\infty \frac{ \psi(F(y)) }{y}F'(y) dy
&\leq z \cdot \frac 1 z \int_z^\infty \psi(F(y))F'(y) dy \\
&= \Psi(1) - \Psi(F(z)).
\end{align*}
Since $\Psi(1) =1$ we conclude that $F'(z) \leq 1.$
\end{proof}
Now let us return to the setting where $\Psi$ is an interpolation of $\max$-$k$, uniform and $\min$-$k$ processes given by $\mathbf p$.
\begin{lemma} \thlabel{lem:zF'2}
Suppose that $p_{1} > 0$. It holds that
$$z F'(z) \leq \frac{2e^{-1}}{(p_1)^2} .$$
\end{lemma}
\begin{proof}
Integrate \eqref{eq:diff} as in \cite[Proposition 8.1]{elliot} so that for any $\epsilon >0$
\begin{align*}F'(z) = \frac{F'(\epsilon)}{\epsilon}z \exp\left( - \int_\epsilon^z \psi(F(y)) dy \right).\end{align*}
Taking $\epsilon = 1$ and applying \thref{lem:F'2} gives
\begin{align}F'(z) \leq z \exp\left( - \int_1^z \psi(F(y)) dy \right).\label{eq:F'2}\end{align}
Notice that
\begin{align}\psi(u) = p_1 + \textstyle \sum_{k \geq 2} k [p_k u^{k-1} + p_{-k} (1- u)^{k-1}] \geq p_1. \label{eq:psi2}
\end{align}
Apply this to \eqref{eq:F'2} then multiply by $z$ to obtain the bound
\begin{align*}
zF'(z) &\leq e^{p_1} z^2 e^{-p_1z}
\end{align*}
The maximum of $z^2 e^{-p_1 z}$ is at $z= 2/ p_1$. Plug this in above to obtain the claimed bound.
\end{proof}
\begin{lemma} \thlabel{lem:pinch2}
If $C_{\mathbf{p}} \leq \frac 12$ then \eqref{eq:inequality} holds.
\end{lemma}
\begin{proof}
As in \thref{lem:pinch} it suffices to show
for some $\delta \in (0,1]$ and all $z \geq 0$
\begin{align} z F'(z) \leq \frac {(1- \delta)\psi(F(z) )} {|\psi'(F(z))|} . \nonumber
\end{align}
We have from \eqref{eq:psi2} that $\psi(u) \geq p_1$ and can compute $$|\psi'(u)| \leq \sum_{k \geq 2} k(k-1)| u^{k-2} - (1-u)^{k-2}| \leq C_{\mathbf{p}}.$$
It then suffices to prove
\begin{align}
z F'(z) \leq \frac{(1- \delta) p_1}{C_{\mathbf{p}}}. \label{eq:end}
\end{align}
By \thref{lem:zF'2} and the hypothesis $C_{\mathbf{p}} \leq 1/2$ it suffices to choose the $p_k$ so that
$$ \frac{2e^{-1}}{(p_1)^2} \leq 2(1- \delta) p_1.$$
Rewriting and letting $\delta \to 0$, we require that $ e^{-1/3} <p_1$. Since $k(k-1) \geq 2$ for $k \geq 2$, we have $C_{\mathbf{p}} \geq 2\sum_{k \neq 1} p_k = 2(1-p_1)$, so the hypothesis $C_{\mathbf{p}} \leq 1/2$ forces $\sum_{k \neq 1} p_k \leq 1/4$ and hence $p_1 \geq 3/4$. Since $e^{-1/3} \approx .71 < 3/4 \leq p_1$, the above displayed inequality holds.
\end{proof}
\section{Introduction}
A sequence in $[0,1]$ is \emph{equidistributed} if the limiting proportion of points in each subinterval is equal to the subinterval's length.
Over a century ago Weyl proved that $\{ \beta n \mod 1\}_{n \geq 1}$ is equidistributed for any irrational number $\beta$ (see \cite{weyl}).
Since then connections
have been found in ergodic theory, number theory, complex analysis and computer science (\cite{ergodic}, \cite{primes}, \cite{complex}, \cite{computer}). See \cite{uniform} for an overview.
Not long after Weyl's Theorem,
attention turned to equidistribution of random sequences.
One way to obtain a random sequence in $[0,1]$ is to independently choose points uniformly. Call the resulting sequence the \emph{uniform process}. The strong law of large numbers guarantees this is equidistributed almost surely.
Another random process known to equidistribute points is the \emph{Kakutani interval splitting procedure} (introduced in \cite{kakutani0}), where at each step a point is added uniformly to the current largest subinterval. Almost sure equidistribution is proven in \cite{kakutani} and \cite{Loot} using stopping times. Because points are placed in the largest gaps they ought to spread more evenly than the uniform process. Indeed, \cite{pyke} proves the size of the largest interval is asymptotic to $2/n$; the same order as the average interval. Compare to $\log n / n$ in the uniform process (see \cite{uniformsize}).
\cite{elliot} introduces a family of interval splitting processes that exhibit a wider range of behavior.
The canonical example is the \emph{max-2 process}. The dynamics are as follows:
\begin{itemize}
\item Partition $[0,1]$ into subintervals by placing finitely many points in any manner.
\item At each step sample two points uniformly from $[0,1]$. Each lies in a subinterval formed by the previous configuration.
\item Keep the point contained in the larger subinterval and disregard the other point. Break a tie by flipping a fair coin.
\end{itemize}
A discrete analogue of the $\max$-$2$ process appears in \cite{choices1} where $n$ balls are placed into $n$ bins. For each ball two bins are selected uniformly and the ball is placed in the bin with fewer balls. They find that the most-filled bin has $\approx \log_2 \log n$ balls; significantly less than $\approx \log n / \log \log n$ if the balls were instead placed uniformly. This is studied in more detail in \cite{power2} and \cite{choices2}.
In the $\max$-$2$ process choosing the larger gap should spread points more evenly. Despite our intuition this is difficult to formalize, and equidistribution was a primary open problem from \cite{elliot}.
The natural counterpart is the \emph{min-2 process} where the point contained in the smaller subinterval is kept. Unlike the previous processes, points are prone to clump together. It is natural to also define the $\max$-$k$ and $\min$-$k$ processes; in these the max or (resp.) min of $k$ candidate points is selected at each step.
Before we can state the theorem we describe a more general splitting procedure known as a \emph{$\Psi$-process} (introduced in \cite{elliot}). For technical convenience we will assume that points arrive according to a Poisson process with intensity $e^t$. Suppose at time $t$ that $N_t$ points have arrived and we have interval lengths $I_1^{(t)}, I_2^{(t)}, \hdots, I_{N_t}^{(t)}$. Define the size-biased empirical distribution function
$$\tilde A_t(x) = \sum_{i=1}^{N_t} I_i^{(t)} \ind{ I_i^{(t)} \leq x }.$$
This function is now defined to evolve according to Markovian dynamics as follows. Let us say that the next point arrives at time $s>t$; for the $N_s$-th step (with $N_s = N_t+1$) we choose an interval at random, with length $\l_{s} = \tilde A_{s^-}^{-1}(u)$, where $u$ is sampled from a law on $(0,1]$ whose distribution function we denote by $\Psi$. This randomly chosen interval is now subdivided into two pieces at a point chosen uniformly inside the interval. This produces a new sequence of interval lengths $I_1^{(s)}, I_2^{(s)}, \hdots , I^{(s)}_{ N_s }$ and the process is repeated. Note that $\tilde A_t(x)$ is constant (in $t$) between point arrivals. We remark that the $\max$-$k$, uniform and $\min$-$k$ processes are $\Psi$-processes with $\Psi(u) = u^k, u,$ and $1 -(1-u)^k,$ respectively.
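For intuition, the $\max$-2 process is straightforward to simulate directly from its definition (two uniform candidates, keep the one landing in the larger gap). The minimal sketch below plays no role in the proofs; it simply shows the empirical proportion of points falling in $[0,\alpha]$ approaching $\alpha$:
\begin{verbatim}
import numpy as np

def max2_process(n_steps, seed=0):
    """Simulate the max-2 process on [0,1] started from a single interval."""
    rng = np.random.default_rng(seed)
    pts = np.array([0.0, 1.0])
    for _ in range(n_steps):
        x, y = rng.uniform(size=2)
        gap = lambda p: np.diff(pts)[np.searchsorted(pts, p) - 1]
        keep = x if gap(x) >= gap(y) else y      # point in the larger gap
        pts = np.sort(np.append(pts, keep))
    return pts[1:-1]                             # the added points

pts = max2_process(5000)
alpha = 0.3
print(np.mean(pts <= alpha))                     # close to alpha
\end{verbatim}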
We abbreviate a few common assumptions for $\Psi$:
\begin{align*}
\text{(C)} & \; \Psi \text{ is continuous.}\\
\text{(C$^1$)} & \; \Psi \text{ is continuously differentiable.}\\
\text{(C$^2$)} & \; \Psi \text{ is twice continuously differentiable.}\\
\text{(D)} & \text{ There exist $c >0$ and $\kappa _\Psi \in [1, \infty)$, such that } 1 - \Psi(u) \geq c(1- u)^{\kappa_\Psi} \text{ for all $u \in (0,1)$.}
\end{align*}
Set $A_t(x) = \tilde A_t(e^{-t}x)$. The main theorem of \cite{elliot} proves that, when (C) and (D) hold, $A_t(x)$ converges pointwise to a (deterministic) continuously differentiable distribution function $F^\Psi(x).$
For future theorem statements we note that (C$^1$) and (C$^2$) both imply (D).
Here we study $\tilde A_t^\alpha$, the restriction of $\tilde A_t$ to the $N_t^\alpha$ subintervals contained in $[0,\alpha]$. We find conditions on $\Psi$ that guarantee pointwise convergence $A^\alpha_t \to \alpha F^\Psi$, where $A_t^\alpha(x) = \tilde A_t^\alpha (e^{-t}x)$ and $\alpha F^\Psi$ denotes the map $x \mapsto \alpha \cdot F^\Psi(x)$. When this holds the subinterval lengths in $[0,\alpha]$ evolve to look the same as those in all of $[0,1]$. This sameness is enough to deduce equidistribution.
\begin{thm}\thlabel{thm:eqd}
Let $\psi = \Psi'$. If $\Psi$ satisfies \emph{(C$^2$)}
and for some $\delta \in (0,1]$ and all $z \geq 0$
\begin{align} |z\psi'(F^\Psi(z)) (F^\Psi)'(z) - \psi( F^\Psi(z))| \leq (2 - \delta) \psi( F^\Psi(z) ) \label{eq:inequality},
\end{align}
then the $\Psi$-process is equidistributed a.s.\
\end{thm}
The condition \eqref{eq:inequality} arises from a technical computation (see the proof of \thref{prop:cond1}) used to show that a family of processes containing $(A_t^\alpha)_{t \geq 0}$ contract in a certain norm. We stress that it is not at all obvious which $\Psi$ and $F^\Psi$ should satisfy this condition. Our only tools are the properties of $F^\Psi$ established in \cite{elliot}. Most importantly, it satisfies the integro-differential equation (see \cite[Lemma 3.5]{elliot}):
\begin{align}
(F^\Psi)'(z) = z \int_z^\infty \frac 1 y d \Psi( F^\Psi(y)), \label{eq:integro}
\end{align}
and the differential equation (see \cite[Proposition 8.1]{elliot}):
\begin{align}
z (F^\Psi)''(z) - (F^\Psi)'(z) + z \psi( F^\Psi(z)) (F^\Psi)'(z) = 0 \label{eq:diff}.
\end{align}
Remarkably, this is enough information to deduce \eqref{eq:inequality} holds for the $\max$-2 process, an interpolation of $\max$-2 and $\min$-2 processes that is biased towards $\min$-2, and arbitrary interpolations of $\max$-$k$, uniform and $\min$-$k$ processes that place enough weight on the uniform process.
\begin{cor} \thlabel{cor:main} The following are equidistributed a.s.\
\begin{enumerate}
\item The $\max$-$2$ process.
\item The interpolation that is $60\%$-$\min$-$2$ and $40\%$-$\max$-$2$; $\Psi(u) = .6(1- (1-u)^2) + .4u^2$.
\item The interpolation of $\max$-$k$, uniform and $\min$-$k$ processes given by a probability measure $\mathbf p = (p_k)_{k \neq -1,0}$ on $\mathbb Z \setminus \{-1,0\}$, that satisfies $\sum_{k \geq 2} k (k-1)[ p_k + p_{-k}] \leq 1/2;$
$$\Psi(u) = p_1 u + \sum_{k \geq 2} \big[ p_k u^k + p_{-k} ( 1 - (1-u)^k) \big].$$
For example, this includes the interpolations
\begin{enumerate}
\item $(1/k^2)\%$-$\min$-$k$ for a single fixed $k$ and otherwise
\emph{uniform}.
\item $99.95\%$-\emph{uniform } and $(5^{-k})\%$-$\min$-$k$ for all $k=2,3,\hdots$.
\end{enumerate}
\end{enumerate}
\end{cor}
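The condition in part three is easy to check for the two example interpolations; the short computation below (illustrative only, with the infinite sum truncated) evaluates $C_{\mathbf p}=\sum_{k\ge2}k(k-1)(p_k+p_{-k})$ and confirms that both examples lie far below the threshold $1/2$:
\begin{verbatim}
# (a) (1/k^2)%-min-k for a single fixed k (take k = 3), otherwise uniform
k = 3
C_a = k * (k - 1) * (1.0 / k**2) / 100.0

# (b) p_{-k} = (5**-k)% for k = 2, 3, ..., remainder on the uniform process
C_b = sum(j * (j - 1) * 5.0**(-j) / 100.0 for j in range(2, 60))

print(C_a, C_b)   # both well below 1/2
\end{verbatim}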
The reason our approach works for only certain $\Psi$ is unclear. Numerical methods indicate the inequality fails for other processes, suggesting a different approach is needed. This is surprising since processes which ought to better equidistribute points, like a $\max$-3 process, do not meet our criterion. Nonetheless, we conjecture that all $\max$-$k$ and $\min$-$k$ processes are equidistributed.
The properties established in \thref{master} are an important step in exploring this for $\max$, $\min$ and more general $\Psi$-processes.
The rate of convergence to a uniform placement of points and also the asymptotic size of the largest interval are other important open problems. More thorough discussion can be found in \cite{elliot}.
\subsubsection*{Overview}
This article is organized to quickly arrive at the proof of \thref{thm:eqd}. In Section \ref{sec:prelims} we describe the evolution of intervals in $[0,\alpha]$ and give the major definitions. In Section \ref{sec:thm1} we state without proof \thref{prop:cond1} and \thref{master}. The first proposition describes the importance of \eqref{eq:inequality} holding. The second shows that $A_t^\alpha$ has similar properties as those needed of $A_t$ to deduce convergence in \cite{elliot}. We then use this to establish \thref{thm:eqd}. Section \ref{sec:proofs} contains the proofs for the previous section.
Finally, in Section \ref{sec:inequalities} we prove \thref{cor:main} by showing that various interpolations satisfy \eqref{eq:inequality}.
\section{Subintervals in $[0,\alpha]$} \label{sec:prelims}
We start with a formal definition for a process to be equidistributed. Suppose $n_0$ points are initially placed. After $n$ iterations of an interval splitting process let $N_n^\alpha$ be the number of the first $n_0 + n$ points smaller than $\alpha$. We say a sequence is \emph{equidistributed} if $n^{-1} N_n^\alpha \to \alpha$ for all $\alpha \in [0,1]$.
It is convenient to work in continuous time. Following \cite{elliot} we have points arrive as a Poisson process with intensity $e^t$. Formal details are in \thref{prop:cond1}. So, in continuous time equidistribution is equivalent to $e^{-t} N_t^\alpha \to \alpha$ for all $\alpha \in [0,1]$.
\subsection{Describing $\tilde {\mathbf A}^\alpha_t$}
Fix $\alpha \in [0,1]$. We use the convention that a bold face letter represents a process indexed by time (i.e.\ $\tilde {\mathbf A} = (\tilde A_t)_{t \geq 0}$).
Define the joint processes $(\tilde {\mathbf{A}}^\alpha, \tilde{\mathbf A}^{\alpha_+}, \tilde {\mathbf A})$ to be the size-biased empirical distributions of interval lengths contained in $[0,\alpha]$, $[\alpha,1]$ and $[0,1]$, respectively. Formally, letting $I_{1}^{\alpha, (t)}, \hdots, I_{N_t^\alpha}^{\alpha, (t)}$
be the lengths of subintervals contained in $[0,\alpha]$ at time $t$ we define
$$\tilde A_t^\alpha(x) = \sum_{j=1}^{N_t^\alpha} I_j^{\alpha, (t)} \cdot \ind{I_j^{\alpha, (t)} \leq x },$$ and similarly for $\tilde A^{\alpha_+}_t$ and $\tilde A_t$. The spark for the refined analysis comes from the relation
\begin{align}
\tilde A_t^{\alpha}(x) + \tilde A_t^{\alpha_+}(x) = \tilde A_t(x), \qquad \forall t,x \geq 0 \label{eq:key}.
\end{align}
To ensure that no intervals are double counted assume the initial set of points placed in $[0,1]$ always contains $\{\alpha\}$. This assumption is only for convenience. Our proof could be adapted to omit it by running the process until two points $\alpha_1 \leq \alpha \leq \alpha_2$ land sufficiently close to $\alpha$, and then using the bound $N_t^{\alpha_1} \leq N_t^{\alpha} \leq N_t^{\alpha_2}$. We further remark that the same reasoning extends our theorems to the unit circle.
In \cite[Section 2]{elliot} the authors prove that
$$\tilde A_t(x) = \tilde A_0(x) + \int_0^t e^sx^2 \int_x^\infty \frac{\psi(\tilde A_s(z))}{z} d \tilde A_s(z) + \tilde M_t$$
for some martingale $\tilde M_t$. The following proposition shows that $\tilde A_t^\alpha$ satisfies a similar equation.
\begin{prop} \thlabel{prop:formula}
Let $\psi = \Psi'$. For any $\Psi$-process satisfying \emph{(C$^1$)}, the joint processes $(\tilde {\mathbf A}^\alpha, \tilde {\mathbf A}^{\alpha_+}, \tilde {\mathbf A})$ satisfy the equation
$$\tilde A_t^\alpha(x) = \tilde A_0^\alpha(x) + \int_0^t e^s x^2 \int_x^\infty \frac{ \psi(\tilde A_s(z))}{z} d \tilde A^\alpha_s(z) ds + \tilde M^\alpha_t(x),$$
with $\tilde M_t^\alpha$ a martingale.
\end{prop}
\begin{proof}
We first build up some necessary definitions. Let $\Psi$ be a continuously differentiable distribution function.
Define a Poisson random measure $\Pi$ on $[0,\infty) \times [0,1]^2$
with intensity $e^t dt \otimes d \Psi(u) \otimes dv .$
Set $\l_t(u) = \tilde A_{t^-}^{-1}(u)$. We use the function $h(v,\l,x) = v \ind { \l v \leq x } + (1-v) \ind{\l(1-v) \leq x }$ to ``cut'' our sampled interval at $v$.
We need to detect whether the sampled interval belongs to $[0,\alpha]$. We use the function $g_t^\alpha( \l_t(u)) = \ind{ \l_t(u) \subset [0,\alpha]}.$ The function $g_t^\alpha$ can be constructed rigorously by assuming all of the subintervals have different lengths, and putting a point mass on each length of subintervals in $[0,\alpha]$. This is a harmless simplification; even for starting configurations with same-length subintervals we know that (when $\Psi \in C^1$) after an a.s.\ finite time a point will be added to each interval. Once this happens all of the subintervals are of different lengths a.s.\ and will continue to be of different lengths a.s.
We combine all of this to define
\begin{align*}
\tilde B^\alpha(s,u,v,x) &= \l_s(u)\ind{ \l_s(u) > x } g_s^\alpha( \l_s(u))\, h(v, \l_s(u), x),
\end{align*}
so that
$\tilde A^\alpha_t(x) = \tilde A^\alpha_0(x) + \textstyle \sum_{(s,u,v) \in \Pi, s \leq t} \tilde B^\alpha(s,u,v,x).$
Looking to obtain the semimartingale decomposition of $\tilde A^\alpha_t(x)$ we integrate $\tilde B^\alpha(t,u,v,x).$ Note that $\int_0^1 h(v, \l ,x) dv = (x/\l)^2$. We then write
\begin{align*}
\int \int \tilde B^\alpha(t,u,v,x) dv d \Psi(u) &= \int_0^1 \l_t(u) \ind{ \l_t(u) > x} g_t^\alpha(\l_t(u)) ( x/ \l_t(u))^2 d \Psi(u) \\
&= x^2 \int_0^1 \frac{ 1 } { \l_t(u) } \ind{ \l_t(u) > x } g_t^\alpha(\l_t(u))d \Psi(u) \\
&=x^2 \int_x^\infty \frac{ 1 } { z } g_t^\alpha(z)d \Psi( \tilde A_{t^-}(z)).
\end{align*}
The last line follows from the fact that for a bounded Borel function, $f$, $$\int_0^1 f( \l_t(u) ) d \Psi(u) = \int_0^\infty f(z) d \Psi( \tilde A_{t^-}(z) ).$$
Recall that $\Psi$ is assumed to be $C^1$, and that the indicator function $g_t^\alpha$ is zero unless the selected interval belongs to $[0,\alpha]$. This lets us write
$$g_t^\alpha(z) d \Psi(\tilde A_{t^-}(z)) = \psi( \tilde A_{t^-}(z) ) d \tilde A^\alpha_{t^-}(z).$$
We now rewrite the integral of $\tilde B^\alpha_t$ as
\begin{align*}
\int \int \tilde B^\alpha(t,u,v,x) dv d \Psi(u) & = x^2 \int_x^\infty \frac{ \psi( \tilde A_{t^-}(z) ) } { z }d\tilde A^\alpha_ {t^-}(z).
\end{align*}
Integrate this from $0$ to $t$ and we arrive at the claimed decomposition of $\tilde A^\alpha_t(x)$.
\end{proof}
\subsection{Definitions and notation}
What follows are the essential facts and notation for understanding the proof of \thref{thm:eqd}. Let non-tilde processes represent the original process scaled by $e^{-t}$ (i.e.\ $A_t(x) = \tilde A_t(e^{-t} x) )$. In light of \thref{prop:formula}, a change of variables gives the relationship
\begin{align}\mathbf A^\alpha = \mathscr C(\mathbf A^\alpha, \mathbf A) + \mathbf M^\alpha, \label{eqn:relationship}\end{align}
where $\mathscr C \colon \mathcal X \times \mathcal X \to C( [0,\infty), L^1_{\text{loc}})$ is defined by
$$\mathscr C(\mathbf F,\mathbf G)_t(x) = F_0(e^{-t} x) + \int_0^t (e^{s-t} x)^2 \int_{ e^{s-t} x} ^\infty \frac{ \psi( G_s(z) ) }{z} d F_s(z) ds.$$
Here $\mathcal X= \mathcal B( [0,\infty), \mathcal D)$
where $\mathcal D =\{ F \colon [0,\infty) \to [0,1], \text{c\`adl\`ag, increasing}\}$. The set $\mathcal X$ is a subspace of the space $\mathcal B([0,\infty), L_{\text{loc}}^1)$ of measurable maps from $[0,\infty)$ to $L_{\text{loc}}^1$ with the topology of locally uniform convergence, which we denote by the symbol $\overset{\mathcal X} \to$.
We say that a family of functions $( \mathbf F^{(n)})_{n \in \mathbb N}$ in $\mathcal X$ is \emph{asymptotically equicontinuous} if for every compact $K \subset [0,\infty)$,
$$\lim_{\delta \to 0} \lim_{n \to \infty } \sup_{\substack{s,t \geq 0 \\ |s-t| \leq \delta} } \int_K | F_s^{(n)}(x) - F_t^{(n)}(x) | dx = 0.$$
A family of distributions $(F_t)_{t \geq 0}$ is \emph{tight} if for all $\epsilon>0$ there exists $N$ such that $F_t(N) \geq 1- \epsilon$ for all $t \geq 0$.
We will use $\hat F$ and $F^{\Psi}$ interchangeably to denote the a.s.\ pointwise limiting distribution of $A_t$ from \cite[Theorem 1.1]{elliot}. Also define the stationary distribution $ \mathbf{\hat F}^*$ so that $\hat F^*_t = \hat F$ for all $t \geq 0$. With the convergence $A_t \to \hat F$ in mind, we consider the operator
$$\mathscr C^* (\mathbf F)_t = \mathscr C(\mathbf F, \hat{\mathbf F}^*)_t = F_0(e^{-t} x) + \int_0^t (e^{s-t} x)^2 \int_{ e^{s-t} x} ^\infty \frac{ \psi(\hat F(z))}{z} d F_s(z) ds.$$
We will see in the proof of \thref{thm:eqd} that the limiting distribution of $A^\alpha_t$ belongs to the set of fixed points
$$\mathfrak F^\alpha = \{ \mathbf F \in \mathcal X_1 \colon \mathbf F = \mathscr C^*(\mathbf F), F_t(+\infty) = \alpha \text{ and } ( \tfrac 1 \alpha F_t)_{t \geq 0} \text{ tight} \}.$$
Here $\mathcal X_1 = \mathcal B( [0,\infty), \{F \in \mathcal D\colon \xnorm{F} \leq 1\})$, where $\xnorm{\cdot}$ is the case $\delta =1$ of
the following family of norms on $L^1_{\text{loc}}([0,\infty))$:
\begin{align}
\quad\dnorm{f} = \int_0^\infty x^{-1 - \delta} |f(x)| dx, \quad \delta \in (0,1]. \label{eq:norm}
\end{align}
The norm used exclusively in \cite{elliot} is $\xnorm{f} = \int_0^\infty x^{-2} |f(x)| dx $. This extra $\delta$ of freedom lets us prove the interpolation between $\min$-2 and $\max$-2 is equidistributed. The effect of working in this norm is the appearance of the $(2-\delta)$ term in \eqref{eq:inequality}.
We remark that $\xnorm{\cdot}$ does have special significance. A key property (see \thref{master} \ref{rate}) is that $\xnorm{ A_t^\alpha } = e^{-t} N_t^\alpha.$ Thus, we can recover the number of points added to the interval $[0,\alpha]$, which is the fundamental quantity for proving equidistribution.
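This identity can be seen concretely on a random configuration. Before rescaling it reads $\int_0^\infty x^{-2}\tilde A^\alpha_t(x)\,dx = N^\alpha_t$, and since $\tilde A^\alpha_t$ is piecewise constant in $x$ the integral can be evaluated exactly piece by piece; the short computation below (illustrative only) recovers the number of subintervals:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.4
cuts = np.sort(rng.uniform(0.0, alpha, size=30))
lengths = np.diff(np.concatenate(([0.0], cuts, [alpha])))   # 31 subintervals

ell = np.sort(lengths)
s = np.cumsum(ell)     # value of the size-biased distribution on [ell_i, ell_{i+1})
integral = np.sum(s[:-1] * (1.0 / ell[:-1] - 1.0 / ell[1:])) + s[-1] / ell[-1]
print(integral, lengths.size)   # both equal the number of subintervals
\end{verbatim}
The rescaling $A^\alpha_t(x)=\tilde A^\alpha_t(e^{-t}x)$ contributes the factor $e^{-t}$ in \thref{master} \ref{rate}.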
\section{Proof of \thref{thm:eqd}} \label{sec:thm1}
We delay the proofs of the following two propositions until the next section. Our goal is to make transparent the necessary ingredients for proving \thref{thm:eqd}.
The first proposition describes the benefit of when a $\Psi$-process satisfies \eqref{eq:inequality}.
\begin{prop} \thlabel{prop:cond1}
If $\Psi$ satisfies $\emph(\text{\emph{C}}^1)$ and there exists $\delta \in (0,1]$ such that \eqref{eq:inequality} holds for all $z \geq 0$, then $$\dnorm{F_t - \alpha\hat F} \leq 2(1+ \delta ^{-1}) e^{-\delta t }$$ for all $\mathbf F \in \mathfrak F^{\alpha}$.
\end{prop}
\noindent We will also need several general properties of $\mathbf A^\alpha$.
\begin{prop}\thlabel{master} The following hold for any $\Psi$ satisfying $(\text{\emph{C}}^2)$:
\begin{enumerate}[label = {(\Roman*)}, labelindent = .2 cm]
\item $\xnorm{A^\alpha_t} = e^{-t} N^\alpha_t$ and $\xnorm{\alpha \hat F} = \alpha.$ \label{rate}
\item The collection of distribution functions $(\frac 1 \alpha A^\alpha_t)_{t \geq 0}$ is tight. \label{tight}
\item The family $( \mathbf A^{\alpha, (n)})$ defined by $A_t^{\alpha, (n)} = A^\alpha_{t+n}$ is asymptotically equicontinuous. \label{equi}
\item $\mathbf M^{\alpha, (n)} \overset{\mathcal X}\to 0$ as $n \to \infty$, where $M_t^{\alpha,(n)}(x) = M^\alpha_{t+n}(x) - M^\alpha_n(e^{-t}x)$ for every $t \geq 0.$ \label{noise}
\item Suppose additionally that $\sup_{ z \geq 0 }z \hat F'(z) < \infty$ (discussion of this hypothesis appears in \thref{lem:extra}). Define $\mathbf A^{(n)}$ by $A^{(n)}_t = A_{t+n}$. If $\mathbf F^{(n)}\overset{\mathcal X} \to \mathbf F$ then $\mathscr C(\mathbf F^{(n)}, \mathbf A^{(n)}) \overset{\mathcal X} \to \mathscr C^*( \mathbf F)$. \label{continuous}
\end{enumerate}
\end{prop}
\begin{proof}[Proof of \thref{thm:eqd}]
All statements are meant to hold almost surely. We also abbreviate items from \thref{master} by their Roman numerals.
In the continuous process points are added as a Poisson process with intensity $e^tdt$. So, it suffices to show $e^{-t} N_t^\alpha \to \alpha$.
By \ref{tight}, \ref{equi} and the version of the Arzel\'a-Ascoli theorem in \cite[Lemma 7.3]{elliot} we may choose a sequence $(\mathbf A^{\alpha, (n_k)})$ which converges to a family of (scaled by $\alpha$) distributions $\mathbf F^{\alpha, (\infty)}$ with $F_t^{\alpha, (\infty)}(+ \infty ) = \alpha$ for every $t \geq 0$. Taking limits in the formula at \eqref{eqn:relationship} we obtain
$$\mathscr C(\mathbf A^{\alpha, (n_k)}, \mathbf A^{(n_k) } ) + \mathbf M^{\alpha , (n_k)} \overset{ \mathcal X} \to \mathbf F^{\alpha, (\infty)}.$$
By \ref{noise} and \ref{continuous} we have
$$\mathscr C( \mathbf A^{\alpha, (n_k)}, \mathbf A^{(n_k) } )\overset{\mathcal X} \to \mathscr C^*(\mathbf F^{\alpha, (\infty) }).$$
Thus, $\mathbf F^{\alpha, (\infty) } \in \mathfrak F^\alpha$. Since we are assuming \eqref{eq:inequality} holds, \thref{prop:cond1} implies that $ \dnorm{F_t^{\alpha, (\infty) } - \alpha\hat F } \leq 2(1+ \delta ^{-1}) e^{- \delta t}$. An argument similar to the conclusion of the proof of \cite[Theorem 7.1]{elliot} gives almost sure pointwise convergence $A_t^\alpha \to \alpha \hat F$. \cite[Theorem 1.1]{elliot} states that $A_t \to \hat F$ pointwise. We can then deduce from \eqref{eq:key} that $A^{\alpha_+}_t \to (1- \alpha) \hat F$. Combining pointwise convergence, \eqref{eq:key} and Fatou's lemma we deduce that $\xnorm{A_t^\alpha} \to \xnorm{ \alpha \hat F}$. Indeed,
\begin{align*}
\liminf \xnorm{A_t^\alpha} &\geq \xnorm{\alpha \hat F},\\
\limsup \xnorm{A_t^\alpha} &= 1 - \liminf \xnorm{A^{\alpha_+}_t} \leq 1- (1-\alpha) = \xnorm{\alpha \hat F}.
\end{align*}
This finishes the proof since \ref{rate} states that $\xnorm{A_t^\alpha} = e^{-t} N_t^\alpha$ and $\xnorm{\alpha \hat F}=\alpha$.
\end{proof}
\section{Proof of \thref{prop:cond1} and \thref{master}}\label{sec:proofs}
\subsection{\thref{prop:cond1}}
The proof of \thref{prop:cond1} proceeds analogously to \cite[Lemma 4.1 and Proposition 3.4]{elliot}. A significant difference is that they apply integration by parts to $$\frac 1 z d \Psi(\tilde F_s(z) ),$$ whereas our operator $\mathscr C^*$ requires applying integration by parts to $$\frac{\psi(\hat F(z)) }z d \tilde F_s(z).$$
The requirement at \eqref{eq:inequality} arises from the extra term $\psi(\hat F(z))$.
Also, note that we work in the norm $\dnorm{\cdot}$ to obtain the constant $ (2-\delta)$ in \eqref{eq:inequality}.
\begin{proof}[Proof of \thref{prop:cond1}]
Let $\mathbf F \in \mathfrak F^\alpha$. We consider the rescaled processes $\tilde F_t(x) = F_t( e^{t} x)$, $\tilde F^{\Psi}_t(x) = \hat F(e^t x)$. It then holds that $\tilde{\mathbf F} = \tilde{ \mathscr C} (\tilde {\mathbf F})$ where
$$\tilde {\mathscr C}(\tilde{\mathbf F})_t(x) = \tilde F_0(x) + \int_0^t e^s x^2 \int_x^\infty \frac{ \psi(\hat F(z) )}{z} d \tilde F_s(z) ds.$$
Our goal is to prove the distance between $\tilde{ \mathbf F }$ and $\alpha \tilde {\mathbf{\hat F} }^*$ is decreasing in $t$:
\begin{align}
\partial_t \dnorm{ \tilde F_t - {\alpha \tilde F^{\Psi}_t} } = \int_0^\infty x ^{-1 - \delta} \partial _t | \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x) | dx \leq 0 \label{eq:suffices1}.
\end{align}
We start by differentiating under the integral sign
\begin{align*}
\partial_t \tilde{\mathscr C}( \tilde{\mathbf F } )_t(x) = e^{t} x^2 \int_x^\infty \frac{ \psi(\hat F(z))} {z} d \tilde F_t(z)
\end{align*}
to write for each $x\geq0 $ the dynamics for the difference $\tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x)$ as
$$\partial _t ( \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x)) = e^{t} x^2 I_t(x),$$
$$I_t(x) = \int_x^\infty \frac{ \psi(\hat F(z) )}{z} \partial_z( \tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z) ) dz.$$
Multiply both sides by $\sgn( \tilde F_t - {\alpha \tilde F^{\Psi}_t})$ to obtain
\begin{align*}
e^{-t} \partial _t | \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x) | = x^2 \begin{cases} \sgn ( \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x)) I_t(x), & \tilde F_t(x) \neq {\alpha \tilde F^{\Psi}_t}(x) \\ 0, & \tilde F_t(x) = {\alpha \tilde F^{\Psi}_t}(x) \end{cases}.
\end{align*}
Let $\hat f(z) = z\psi'(\hat F(z)) \hat F'(z) - \psi(\hat F(z)) $. An application of integration by parts to the integral gives
$$I_t(x) = - \frac{\psi(\hat F(x)) }{x} (\tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x) ) + \int_x^\infty \frac{ \hat f(z) }{z^2} (\tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z) ) dz.$$ The previous two equations therefore yield
\begin{align*}
e^{-t} \partial _t | \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x) |
& \leq - x \psi(\hat F(x)) | \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x)| + x^2 \int_x^\infty |\hat f(z) | \frac{ |\tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z)| }{z^2} dz.
\end{align*}
We next multiply both sides by $x^{-1 - \delta}$ and integrate with respect to $x$ from $0$ to infinity to obtain the bound
\begin{align}
e^{-t} \int_0^\infty x ^{-1-\delta} \partial_t | \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x) | dx &\leq \int_0^\infty - \psi(\hat F(x)) \frac{| \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x)|}{x^{\delta}} dx \nonumber \\
& \qquad \qquad
+ \int_0^\infty x^{1- \delta} \int_x^\infty |\hat f(z) | \frac{ | \tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z)|}{z^2} dz dx . \nonumber
\end{align}
An application of Fubini's theorem lets us rewrite the second integral as
\begin{align*}
\int_0^\infty x^{1- \delta} \int_x^\infty |\hat f(z) | \frac{ | \tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z)|}{z^2} dz dx &= \int_0^\infty |\hat f(z) | \frac{ | \tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z)|}{z^2} \int_0^z x^{1- \delta } dx dz \\
&=\int_0^\infty (2 - \delta)^{-1} |\hat f(z) | \frac{| \tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z) | }{z^{\delta }} dz.
\end{align*}
Hence we can combine the integrals to obtain the bound
\begin{align}
e^{-t} \int_0^\infty x ^{-1-\delta} \partial_t | \tilde F_t(x) - {\alpha \tilde F^{\Psi}_t}(x) | dx & \leq \int_0^\infty \Big((2-\delta)^{-1} |\hat f (z)| - \psi(\hat F(z))\Big) \frac{|\tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z)|}{z^{\delta}} dz . \nonumber
\end{align}
Our hypothesis \eqref{eq:inequality} guarantees that the term inside the integral satisfies
$$(2-\delta)^{-1} |\hat f (z)| - \psi(\hat F(z))\leq 0.$$
Therefore \eqref{eq:suffices1} holds.
This establishes that
\begin{align}\dnorm{\tilde F_t - {\alpha \tilde F^{\Psi}_t}} \leq \dnorm{ \tilde F_0 - \alpha \tilde F^\Psi_0} = \dnorm{ F_0 - \alpha \hat F}. \label{eq:=1}
\end{align}
A change of variables $x = e^{-t} z$ gives
\begin{align}
\dnorm{F_t - \alpha \hat F} &= \int_0^\infty x^{-1 - \delta} | F_t(x) - \alpha \hat F(x)|dx\nonumber \\
&= e^{-\delta t} \int_0^\infty z^{-1 - \delta} | \tilde F_t(z) - {\alpha \tilde F^{\Psi}_t}(z) | dz \nonumber\\
&= e^{-\delta t}\dnorm{ \tilde F_t - {\alpha \tilde F^{\Psi}_t}}\nonumber\\
&\leq e^{-\delta t} \dnorm{ F_0 - \alpha \hat F},
\end{align}
where at the last line we apply \eqref{eq:=1}.
It remains to prove that $\dnorm{ F_0 - \alpha \hat F} \leq C$, for some $C>0$. By assumption, $\mathbf F\in \mathcal X_1$ and therefore $\xnorm{ F_0} \leq 1$. As $0 \leq F_0(x) \leq 1$ we can break up the integral and use integrability of $x^{-1- \delta}\ind{x >1}$:
$$\int_0^\infty x^{-1-\delta} F_0(x) dx \leq \int_0^1 x^{-2} F_0(x) dx + \int_1^\infty x^{-1-\delta} dx \leq \xnorm{F_0} + \delta^{-1} \leq 1 + \delta ^{-1}.$$
Similarly, $\dnorm{\alpha \hat F} \leq 1 + \delta ^{-1}$. Apply the triangle inequality to conclude $\dnorm{F_0 - \alpha \hat F} \leq \dnorm{F_0} + \dnorm{\alpha \hat F} \leq 2(1+ \delta^{-1}).$
\end{proof}
\subsection{\thref{master}} \label{proofs}
In \thref{master} we prove that $ A_t^\alpha$ and $A_t$ have similar properties. Each statement requires some manipulation. Fortunately \cite{elliot} contains much of the heavy lifting.
We make one remark concerning the proof of \ref{continuous}. In \cite{elliot} continuity is proved for an operator $\mathscr S^\Psi$ with domain $\mathcal X$. Our operator $\mathscr C$ has domain $\mathcal X \times \mathcal X$. This makes the proof more involved, and also restricts us to proving continuity along sequences of the form $ (\mathbf F^{(n)}, \mathbf A^{(n)} )$.
\begin{proof}[Proof of \ref{rate}]
The equality $\xnorm{\alpha \hat F} = \alpha$ is \cite[Lemma 3.5]{elliot}. For the other equality, take $I^{\alpha, (t)}_j$ to be the length of the $j$th interval in $[0,\alpha]$ at time $t$. Define the measure $\mu_t^\alpha = e^{-t} \textstyle \sum_{1}^{N_t^\alpha} \delta_{e^tI^{\alpha, (t)}_j}.$
Then $\mu_t^\alpha$ is the empirical measure of the rescaled interval lengths. We can then write $$A_t^\alpha(x) = \int_0^xy \mu_t^\alpha(dy).$$ Applying Fubini's theorem shows that $$\xnorm{ A_t^\alpha} = \int_0^\infty x^{-2} \int_0^xy \mu_t^\alpha(dy) dx = \int_0^\infty \mu_t^\alpha(dy) = e^{-t}N_t^\alpha.$$
\end{proof}
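As a small numerical illustration of the Fubini computation in the preceding proof, the short Python sketch below (with arbitrary, purely hypothetical atoms and weights, not quantities from the model) checks that integrating $x^{-2}\int_0^x y\,\mu(dy)$ over $x$ recovers the total mass of a discrete measure $\mu$.
\begin{verbatim}
# Sketch: check int_0^infty x^{-2} (int_0^x y mu(dy)) dx = mu([0,infty))
# for a discrete measure mu = sum_j w_j delta_{y_j} (arbitrary atoms/weights).
import numpy as np
from scipy.integrate import quad

y = np.array([0.3, 1.2, 2.5, 4.0])   # hypothetical atom locations
w = np.array([0.2, 0.5, 0.1, 0.4])   # hypothetical weights

def inner(x):
    # int_0^x y mu(dy): sum of w_j * y_j over atoms with y_j <= x
    return float(np.sum(w[y <= x] * y[y <= x]))

ymax = float(y.max())
main, _ = quad(lambda x: inner(x) / x**2 if x > 0 else 0.0,
               0.0, ymax, points=list(y[:-1]), limit=200)
tail = float(np.sum(w * y)) / ymax    # inner is constant for x >= ymax
print(main + tail, w.sum())           # the two numbers agree
\end{verbatim}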
\begin{proof}[Proof of \ref{tight}]
Recall that a family of distributions $(F_t)_{t \geq 0}$ is \emph{tight} if for all $\epsilon>0$ there exists $N$ such that $F_t(N) \geq 1- \epsilon$ for all $t \geq 0$.
\cite[Proposition 6.3]{elliot} implies $(A_t)_{t \geq 0}$ is tight. Fix $\epsilon>0$ and let $N$ be such that $A_t(N) \geq 1 - \alpha \epsilon$ for all $t \geq 0$.
The relationship at \eqref{eq:key} ensures $A^{\alpha}_t(N) + A^{\alpha_+}_t(N) \geq 1 - \alpha \epsilon.$
As $A_t^{\alpha} \leq \alpha $ and $A_t^{\alpha_+} \leq 1 -\alpha $, this inequality could only hold if $A_t^\alpha(N) \geq \alpha - \alpha \epsilon$ for all $t\geq 0$. Hence, $(\frac 1 \alpha A_t^\alpha)_{t \geq 0 }$ is tight.
\end{proof}
\begin{proof}[Proof of \ref{equi}]
Recall that a family of functions $( \mathbf F^{(n)})_{n \in \mathbb N}$ in $\mathcal X$ is asymptotically equicontinuous if for every compact $K \subset [0,\infty)$,
$$\lim_{\delta \to 0} \lim_{n \to \infty } \sup_{\substack{s,t \geq 0 \\ |s-t| \leq \delta } } \int_K | F_s^{(n)}(x) - F_t^{(n)}(x) | dx = 0.$$
The proof is similar to \cite[Lemma 7.5]{elliot}. The idea is that it suffices to show the existence of a $\delta_0>0$ and constant $C$ so that for every $0<\delta_1 < \delta_0$ there exists almost surely a $T_{\delta_1}< \infty$ so that
\begin{align}\sup_{ t \geq T_{\delta_1}, 0 \leq \delta \leq \delta_1} \int_0^\infty \frac{ | A_{t+ \delta}^\alpha(x) - A_t^\alpha(x) | }{ x^2} dx \leq C \delta_1.\label{eq:as2}
\end{align}
This is sufficient since, for any $\delta_1 >0$ and any $M>0$, almost surely
\begin{align*}
\lim_{n \to \infty} \sup_{ s,t \geq 0, |s-t| \leq \delta_1} \int_0^M | A_s^{\alpha, (n)}(x) - A_t^{\alpha, (n)}(x) | dx & \leq M^2 \sup_{ t \geq T_{\delta_1}, 0 \leq \delta \leq \delta_1} \int_0^\infty \frac{ | A_{t+ \delta}^\alpha(x) - A_t^\alpha(x) | }{ x^2} dx \\
& \leq M^2 C \delta_1.
\end{align*}
As this holds jointly with probability 1 for a countable sequence of $\delta_1$ going to 0 and $M \in \mathbb N$, the asymptotic equicontinuity of $(A^{(n)})_{n \geq 0}$ follows.
The formula at \eqref{eq:as2} follows from the fact that
$\tilde A^\alpha_t$ satisfies the monotonicity condition, for any $\delta >0$,
\begin{align}
\tilde A_t^\alpha(x) \leq \tilde A^\alpha_{t+\delta}( e^{-\delta} x ) \leq \tilde A^\alpha_{t+ \delta}(x). \label{eq:ae1}
\end{align}
Another necessary fact is that the number of points kept in $[0,\alpha]$ from time $t$ to $t+\delta$ is bounded by the number of points added to $[0,1]$ in that same time interval. Formally, for any $\delta >0$ we have $N_{t+\delta}^\alpha - N_t^\alpha \leq N^1_{t+\delta} - N^1_t$. This lets us deduce the analogue for $N_t^\alpha$ of the statement for $N_t$ in \cite[Lemma 7.6]{elliot}. Namely, that there is a $\delta_0 >0$ so that for every $0 < \delta < \delta_0$ there exists almost surely a $T_\delta < \infty$ so that
$$\sup_{t \geq T_\delta} N_{t+\delta}^\alpha - N_t^\alpha \leq 2 \delta e^t.$$
The argument finishes by using the formula from \thref{master} \ref{rate} for $N_t^\alpha$ in terms of $\xnorm{ A_t^\alpha}$. See the proof of \cite[Lemma 7.5]{elliot} for further details.
\end{proof}
\begin{proof}[Proof of \ref{noise}]
The proof is similar to the decay of the noise subsection in \cite[Section 7]{elliot}. The idea is to bound the martingale $\mathbf M^\alpha$ by computing various moments of the underlying process $\mathbf B^\alpha$. We can use the same bounds as in \cite{elliot} because points are added to $[0,\alpha]$ no faster than to $[0,1]$. This ensures that $B^\alpha(s,u,v,x) \leq B(s,u,v,x)$. Here $B(s,u,v,x)$ is the function defined at \cite[(3)]{elliot}.
\end{proof}
\begin{proof}[Proof of \ref{continuous}]
Suppose that $\mathbf F^{(n)} \overset{\mathcal X} \to \mathbf F$. In the topology of local uniform convergence this is equivalent to requiring that for every compact $K \subset [0,\infty)$ and every $t \geq 0$,
$$\lim_{n \to \infty} \sup_{0 \leq s \leq t} \int_K |F^{(n)}_s(x) - F_s(x) | dx = 0.$$
\cite[Theorem 7.1]{elliot} implies $\mathbf A^{(n)} \overset{\mathcal X} \to \mathbf F^*$. Thus it suffices to prove for any fixed $T > 0$ and $K >0$
\begin{align}
\int_0^K |\mathscr C(\mathbf F, \mathbf F^*)_t(x)- \mathscr C(\mathbf F^{(n)}, \mathbf A^{(n)})_t(x) | dx \to 0 \label{eq:0.1}
\end{align}
uniformly for $t \leq T$.
For fixed $n$ we can write
\begin{align*}
\mathscr C(\mathbf F^{(n)}, \mathbf A^{(n)})_t(x)&= F_0^{(n)}(x) + \int_0^t (e^{s-t} x)^2 \int_{e^{s-t} x}^\infty \frac{ \psi(A_s^{(n)}(z))}{z} d F^{(n)}_s(z) ds.
\end{align*}
If we write $\psi( A_s^{(n)}(z)) = \psi(\hat F(z)) + \psi( A_s^{(n)}(z)) - \psi(\hat F(z))$ the above becomes
\begin{align*}
\mathscr C(\mathbf F^{(n)}, \mathbf A^{(n)})_t(x)&= {\mathscr C( \mathbf F^{(n)}, \mathbf F^*)_t(x)} + {\int_0^t (e^{s-t} x)^2 \int_{e^{s-t} x}^\infty \frac{ \psi( A_s^{(n)}(z)) - \psi(\hat F(z)) }{z} d F^{(n)}_s(z) ds}.
\end{align*}
We can then bound the left side of \eqref{eq:0.1} by
\begin{align}
\int_0^K |\mathscr C(\mathbf F &, \mathbf F^*)_t(x) - \mathscr C( \mathbf F^{(n)}, \mathbf F^*)_t(x) | dx \label{eq:term1} \\
&+ \int_0^K {\int_0^t (e^{s-t} x)^2 \int_{e^{s-t} x}^\infty \frac{ |\psi( A_s^{(n)}(z)) - \psi(\hat F(z)) |}{z} d F^{(n)}_s(z) ds} dx. \label{eq:term2}
\end{align}
It suffices to show that as $n \to \infty$ each summand converges to zero uniformly for $t \leq T$.
\vspace{.2 cm}
\subsubsection*{First summand} Start by bounding the summand at \eqref{eq:term1} by
\begin{align*}
\int_0^K |F_0(e^{-t}x) - F^{(n)}_0(e^{-t}x)| dx + \int_0^K\int_0^t (e^{s-t} x)^2 \bigg |\int_{e^{s-t} x}^\infty \frac{ \psi(\hat F(z) )}{z} d( F_s(z) - F^{(n)}_s(z) ) \bigg|ds dx.
\end{align*}
The first quantity goes to zero uniformly for $t \leq T$ by the definition of $\mathbf F^{(n)} \overset{\mathcal X} \to \mathbf F$ since a change of variables gives
$$\int_0^K |F_0(e^{-t}x) - F^{(n)}_0(e^{-t}x)| dx \leq e^t \int_0^K | F_0(x) - F^{(n)}_0(x) | dx.$$
Expand the interior of the second quantity with integration by parts and take the absolute value signs inside to bound it by
$$ \underbrace{\frac{ \psi(\hat F(e^{s-t} x))}{e^{s-t}x } | F_s(e^{s-t} x) - F_s^{(n)}(e^{s-t} x)|}_{\text{term one}} + \underbrace{\int_{e^{s-t} x}^\infty \bigg| \frac{d}{dz}\frac{ \psi(\hat F(z) )}{z} \bigg| | F_s(z) - F^{(n)}_s(z) | dz}_{\text{term two}} .$$
Multiply term one by $(e^{s-t} x)^2$ and integrate so it becomes
$$\int_0^K \int_0^t (e^{s-t} x) \psi( \hat F(e^{s-t}x) ) |F_s(e^{s-t} x)- F_s^{(n)}(e^{s-t} x)| dsdx.$$
Since $\hat F$ is a distribution function and $\psi$ is continuous we have $(\psi \circ \hat F)(z) \leq \sup_{u \in [0,1]} \psi(u) = D < \infty$ for some constant $D$. Thus, the above is bounded by
$$D\int_0^K \int_0^t (e^{s-t} x)|F_s(e^{s-t} x)- F_s^{(n)}(e^{s-t} x)| ds dx.$$
The above goes to zero by the definition of $\mathbf F^{(n) } \overset{\mathcal X} \to \mathbf F$.
As for term two, we differentiate to rewrite it as
\begin{align}
\int_{e^{s-t} x}^\infty\frac{| z\psi'(\hat F(z) ) \hat F'(z) - \psi(\hat F(z))| }{z^2} |F_s(z) - F^{(n)}_s(z) | dz . \label{eq:1.1}
\end{align}
Our additional hypothesis is that $z \hat F'(z)$ is bounded. Since the range of $\hat F$ is contained in the compact interval $[0,1]$ and $\Psi \in C^2$ we have $\psi\circ \hat F$ and $\psi'\circ \hat F$ are also bounded. Therefore, $C = \sup_{0 \leq z \leq \infty} | z \hat F'(z)\psi'(\hat F(z) ) - \psi( \hat F(z) )|< \infty$. It follows that \eqref{eq:1.1} is less than
\begin{align}
C\int_{e^{s-t} x}^\infty\frac{1}{z^2} |F_s(z) - F^{(n)}_s(z) | dz \label{eq:1.2}.
\end{align}
Finally we are in the position of $I_2$ from \cite[Lemma 3.3]{elliot} and can conclude that \eqref{eq:1.2} goes to zero uniformly for $t \leq T$.
\vspace{.2 cm}
\subsubsection*{Second summand} Fix $M>0$ and for any function $f:[0,\infty) \to [0,1]$ define $ f^M = f|_{[0,M]}$ to be the restriction to the domain $[0,M]$. By \cite[Theorem 7.1]{elliot}, $A^{M}_t$ converges pointwise to $\hat F^M$ as $t \to \infty$. Observe that each $A^{M}_t$ is an increasing function with compact domain, and $\hat F^M$ is continuous by \cite[Lemma 3.5]{elliot}. Together these imply (see \cite[exercise 7.13]{rudin}) that for any $\epsilon >0$ there exists $t_\epsilon$ such that for all $z \in [0,M]$
$$\sup_{t \geq t_\epsilon} | A_t^{M}(z) - \hat F^M(z)| < \epsilon.$$
Because the functions $A^{(n)}_t$ are translates of $A_t$ it follows that for all $n > t_\epsilon$ we have
\begin{align*} \sup_{t \geq 0 }| A_t^{(n),M}(z) - \hat F^M(z)|&\leq \sup_{t \geq t_\epsilon} | A_{t}^M(z) - \hat F^M(z)| < \epsilon .\end{align*}
As the functions $A_t^{(n)}$ and $\hat F$ take values in $[0,1]$, on which $\psi$ is uniformly continuous, we conclude that there exists $n_0$ such that for all $z \in [0,M]$
\begin{align}
\sup_{t \geq 0 } |\psi( A_t^{(n)}(z)) - \psi(\hat F(z))| < \epsilon, \qquad \text{for } n \geq n_0. \label{eq:2.1}
\end{align}
We truncate the integral then apply \eqref{eq:2.1} to bound the absolute value of \eqref{eq:term2} by
\begin{align}
\epsilon \int_0^K \int_0^t (e^{s-t} x)^2 &\int_{e^{s-t} x}^M \frac{ 1 }{z} d F^{(n)}_s(z) ds dx \label{eq:2.00}\\
&+ \int_0^K\int_0^t (e^{s-t} x)^2 \int_{M}^\infty \frac{ |\psi( A_s^{(n)}(z)) - \psi(\hat F(z))| }{z} d F^{(n)}_s(z) ds dx \label{eq:2.01}.
\end{align}
We can use the fact that $F_s^{(n)}(z) \leq 1$ and bound the inside integral of \eqref{eq:2.00} by
$$\frac{ 1 }{ e^{s-t}x} \int _{e^{s-t}x}^M d F_s^{(n)}(z) \leq \frac{2}{ e^{s-t}x}.$$
Thus \eqref{eq:2.00} is bounded by
$$\epsilon \int_0^K \int_0^t 2 e^{s-t} x ds dx\leq \epsilon (1 - e^{-t}) K^2 \leq \epsilon K^2.$$
As $K$ is fixed, this can be made arbitrarily small.
\begin{comment}
Integrate the in
\begin{align*}
\frac{ F_s^{(n)}(M)}{M} - \frac{ F_s^{(n)}(e^{s-t} x)}{e^{s-t} x}+ \int_{e^{s-t} x}^M \frac {F_s^{(n)}(z)} {z^2} d z.
\end{align*}
Using the fact that $F_s^{(n)}(z) \leq 1$ this is bounded by $\frac {1}{M} + \frac 1 { e^{s-t}x}$. Multiplying by $(e^{s-t} x)^2$ and integrating gives the following bound on \eqref{eq:2.00}
\begin{align*}\epsilon \int_0^K\int_0^t (e^{s-t} x)^2 \int_{e^{s-t} x}^M \frac{ 1 }{z} d F^{(n)}_s(z) ds dx&\leq \epsilon \int_0^K\int_0^t(e^{s-t} x)^2 \frac 1 M + e^{s-t}x \: ds dx\\
&= \epsilon \int_0^K \frac {x^2}{2M}( 1- e^{-2t}) + (1- e^{-t})dx \\
&\leq \epsilon (\frac{ K^3}{M} + K^2) .
\end{align*}
Thus, \eqref{eq:2.00} can be made arbitrarily small.
\end{comment}
Lastly we consider \eqref{eq:2.01}. Since $\sup_{u \in [0,1]} \psi(u) = D < \infty$ we use estimates similar to those for \eqref{eq:2.00} and start with the bound
\begin{align*}
\int_0^K\int_0^t (e^{s-t} x)^2 \int_{M}^\infty &\frac{ |\psi( A_s^{(n)}(z)) - \psi(\hat F(z))| }{z} d F^{(n)}_s(z) ds dx \\
&\qquad \qquad \leq 4 D \int_0^K\int_0^t (e^{s-t} x)^2 \frac{ 1 }{ M} ds dx \\
& \qquad \qquad \leq \frac {4D K^3(1 - e^{-2t}) } {6M} .
\end{align*}
Since $M$ can be made arbitrarily large, this can be made as small as we like. Therefore, the absolute value of \eqref{eq:term2} can be made smaller than any $\epsilon>0$ uniformly for $t \leq T$.
\end{proof}
\begin{lemma} \thlabel{lem:extra}
If $\Psi$ satisfies $(\text{\emph{C}}^2)$ and either $\psi(1) >0$ or $\Psi(u) = 1 - (1-u)^k$ for some positive integer $k$ then $\sup_{z \geq 0 }z \hat F'(z) < \infty.$
\end{lemma}
\begin{proof}
\cite[Proposition 8.2]{elliot} states that when $\psi(1)>0$ it holds that $\hat F'(x) \leq C e^{-ax}$ for some constants $C,a >0$. Additionally, for the $\min$-$k$ process $(\Psi(u) = 1 - (1-u)^k)$ it is shown in \cite[Proposition 8.4]{elliot} that $\hat F'(x) \leq C_k x^{ -1 - \epsilon_k}$ for some $C_k,\epsilon_k >0$. Note that $\sup _{k \geq 0} C_k < \infty$ and $\epsilon_k \to 0$.
\end{proof}
\begin{cor} \thlabel{cor:extra}
By \thref{lem:extra}, $z \hat F'(z)$ is bounded for all interpolations of the $\max$-$k$ and $\min$-$k$ processes.
\end{cor}
We remark that it appears that boundedness of $z \hat F'(z)$ need not hold for general $\Psi$. At the very least it does not obviously follow from \eqref{eq:integro} or \eqref{eq:diff}.
| {
"timestamp": "2015-09-08T02:05:09",
"yymm": "1410",
"arxiv_id": "1410.6537",
"language": "en",
"url": "https://arxiv.org/abs/1410.6537",
"abstract": "We give a sufficient condition for a random sequence in [0,1] generated by a $\\Psi$-process to be equidistributed. The condition is met by the canonical example -- the $\\max$-2 process -- where the $n$th term is whichever of two uniformly placed points falls in the larger gap formed by the previous $n-1$ points. This solves an open problem from Itai Benjamini, Pascal Maillard and Elliot Paquette. We also deduce equidistribution for more general $\\Psi$-processes. This includes an interpolation of the $\\min$-2 and $\\max$-2 processes that is biased towards $\\min$-2.",
"subjects": "Probability (math.PR)",
"title": "Choices, intervals and equidistribution",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966914018751051,
"lm_q2_score": 0.7341195327172401,
"lm_q1q2_score": 0.7098304676232703
} |
https://arxiv.org/abs/0810.1288 | Stellar disruption by a supermassive black hole: is the light curve really proportional to $t^{-5/3}$? | In this paper we revisit the arguments for the basis of the time evolution of the flares expected to arise when a star is disrupted by a supermassive black hole. We present a simple analytic model relating the lightcurve to the internal density structure of the star. We thus show that the standard lightcurve proportional to $t^{-5/3}$ only holds at late times. Close to the peak luminosity the lightcurve is shallower, deviating more strongly from $t^{-5/3}$ for more centrally concentrated (e.g. solar--type) stars. We test our model numerically by simulating the tidal disruption of several stellar models, described by simple polytropic spheres with index $\gamma$. The simulations agree with the analytical model given two considerations. First, the stars are somewhat inflated on reaching pericentre because of the effective reduction of gravity in the tidal field of the black hole. This is well described by a homologous expansion by a factor which becomes smaller as the polytropic index becomes larger. Second, for large polytropic indices wings appear in the tails of the energy distribution, indicating that some material is pushed further away from parabolic orbits by shocks in the tidal tails. In all our simulations, the $t^{-5/3}$ lightcurve is achieved only at late stages. In particular we predict that for solar type stars, this happens only after the luminosity has dropped by at least two magnitudes from the peak. We discuss our results in the light of recent observations of flares in otherwise quiescent galaxies and note the dependence of these results on further parameters, such as the star/hole mass ratio and the stellar orbit. | \section{Introduction}
X-ray flares from quiescent (non-AGN) galaxies are often interpreted as
arising from the tidal disruption of stars as they get close to a dormant
supermassive black hole (SMBH) in the centre of the galaxy \citep{komossa99}.
Similar processes also occur on much smaller scales, such as in compact binary
systems, where the black hole is only of stellar mass
\citep{rosswog08,rosswog08b}.
The pioneering work by \citet{lacy82,rees88,phinney89b} and, from a numerical
point of view, by \citet{evans89} have set the theoretical standard for the
interpretation of such events. In particular, a distinctive feature of this
theory is an apparent prediction for the time dependence of the light curve of
such events, in the form $L(t)\propto t^{-5/3}$ (note that the original paper
by \citealt{rees88} quotes a $t^{-5/2}$ dependence, later corrected to
$t^{-5/3}$ by \citealt{phinney89b}). Since then, a $t^{-5/3}$ light curve is
generally fitted to the observed luminosities of events interpreted as stellar
disruptions. In this paper, we revisit the theoretical arguments behind such
scaling and we show (as also originally argued by \citealt{rees88}) that the
light curve does {\it not need} to have this scaling and, in particular, that
it critically depends on the internal structure of the star being disrupted.
We provide a simple model to calculate the light curve starting from the
density profile of the star and we show that more centrally concentrated stars
tend to produce shallower light curves. We further supplement our model by a
numerical calculation of the process, using Smoothed Particle Hydrodynamics
(SPH).
We start by briefly summarizing the main argument of \citet{rees88}. Let us
consider a star, originally in hydrostatic equilibrium at a large distance
from the black hole. Since pressure forces and the internal self-gravity of
the star are in equilibrium, the only unbalanced force is the gravitational
pull of the black hole. The various fluid elements of the star therefore move
in essentially Keplerian orbits around the black hole, each one with its own
eccentricity, that is initially very close to the eccentricity of the centre
of mass of the star (in the following, for simplicity, we make the simple
assumption that the centre of mass of the star is in a parabolic orbit around
the black hole). Therefore, the distribution of specific mechanical energy
within the star is very narrow around the energy of the centre of mass. As
the star moves closer to the black hole, the various Keplerian orbits tend to
be squeezed, perturbing the hydrostatic balance. Pressure forces then
redistribute energy inside the star, therefore widening the specific energy
distribution. After the encounter, the star is thus characterized by a much
wider distribution of internal energies, with part of the fluid having a
negative energy (and therefore being bound to the black hole) and part having
a positive energy (and therefore remaining unbound). In the picture of
\citet{rees88} it is this energy distribution (and only this) that determines
the light curve of the event. Indeed, after the encounter the fluid elements
again move in Keplerian orbits (but with their new energy). The bound elements
then come back close to pericentre after a Keplerian period $T$, linked to
their (negative) energy $E$ by:
\begin{equation}
E=-\frac{1}{2}\left(\frac{2\pi GM_{\rm h}}{T}\right)^{2/3},
\label{eq:ET}
\end{equation}
where $M_{\rm h}$ is the black hole mass. The mass distribution with specific
energy $\mbox{d} M/\mbox{d} E$ then translates, through Eq. (\ref{eq:ET}), into a mass
distribution of return times $\mbox{d} M/\mbox{d} T$. The next fundamental assumption is
that once the bound material has come back to the pericentre it loses its
energy and angular momentum on a timescale much shorter than $T$, thus
suddenly accreting onto the SMBH and giving rise to the flare. The mass
distribution of return times is therefore effectively the mass accretion rate
of the black hole during the event, from which the luminosity can be easily
computed. We thus have:
\begin{equation}
\frac{\mbox{d} M}{\mbox{d} T}=\frac{\mbox{d} M}{\mbox{d} E}\frac{\mbox{d} E}{\mbox{d} T}=\frac{(2\pi GM_{\rm h})^{2/3}}{3}\frac{\mbox{d} M}{\mbox{d} E} T^{-5/3}.
\label{eq:MT}
\end{equation}
In order to obtain the `standard' $t^{-5/3}$ light curve, we then have to make
the second fundamental assumption that the energy distribution is uniform.
Note that \citet{rees88} did not show that this should be the case, and only
assumed it for simplicity. Later, the numerical simulations by \citet{evans89}
apparently showed a uniform energy distribution, hence suggesting that the
light curve is generally proportional to $t^{-5/3}$. In the following, we
first show analytically that the energy distribution need not be uniform, but
depends on the properties of the star, and in particular on its internal
structure. We then show numerically that in fact it is not uniform and does
depend on the properties of the stars, in a way that approximately reproduces
the analytical results.
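The following minimal numerical sketch (illustrative only, in units with $GM_{\rm h}=1$; it is not part of the original argument) makes the point concrete: sampling a flat energy distribution and converting each energy to a return time through Eq. (\ref{eq:ET}) yields a mass return rate falling off as $T^{-5/3}$.
\begin{verbatim}
# Sketch: a flat dM/dE mapped through E = -(1/2) (2 pi G M_h / T)^(2/3)
# gives dM/dT proportional to T^(-5/3) (illustrative units: G*M_h = 1).
import numpy as np

GM = 1.0
E = -np.linspace(1e-3, 1.0, 200000)              # flat distribution of bound energies
T = 2.0 * np.pi * GM / (-2.0 * E) ** 1.5          # Kepler period of each fluid element

edges = np.logspace(np.log10(T.min()), np.log10(T.max()), 61)
hist, _ = np.histogram(T, bins=edges)
centers = np.sqrt(edges[:-1] * edges[1:])
rate = hist / np.diff(edges)                      # dM/dT, arbitrary normalisation
slope, _ = np.polyfit(np.log(centers[5:-5]), np.log(rate[5:-5]), 1)
print(slope)                                      # close to -5/3
\end{verbatim}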
Starting from the pioneering work of \citet{carter82,carter83}, numerical
simulations of this process have been performed in a variety of studies
\citep{bicknell83,evans89,laguna93a,laguna93b,ayal2000,bogdanovic04}. Most of
these studies used Smoothed Particle Hydrodynamics (SPH,
\citealt{monaghan92}), mostly because of its ability to follow the system
over a wide range of physical scales, where much of the simulated region is
essentially 'empty'. These various attempts have considered the effects of
varying the orbital parameters of the encounter \citep{bicknell83,evans89}, of
the inclusion of relativistic terms in the equation of motion
\citep{laguna93a,laguna93b,ayal2000} and have described the expected
observational outcome \citep{bogdanovic04}. However, surprisingly, no attempt
has been made at exploring the effect of varying the internal structure of the
star. Indeed, all such analyses have considered a simple polytropic model for
the star, with $\gamma$ invariably set to $5/3$ (although
\citealt{rosswog08,rosswog08b} have considered the encounter of a white dwarf
with a stellar mass black hole, which is in the much smaller mass ratio
regime).
The paper is organized as follows. In section 2 we describe our analytical
model to derive the energy distribution of the disrupted debris. In section 3
we describe our numerical code and the set up of our simulations. In section 4
we describe the results of the simulations. In section 5 we discuss our
results and draw our conclusions.
\section{The process of tidal disruption of a star by a SMBH}
\begin{figure}
\centerline{\epsfig{figure=Scheme.eps,width=0.5\textwidth}}
\caption{Schematic view of the geometry of the system. The radius of the
star is $R_{\star}$. The SMBH is on the right, at a distance $R_{\rm p}\gg
R_{\star}$.}
\label{fig:scheme}
\end{figure}
\begin{figure*}
\centerline{\epsfig{figure=distr.eps,width=0.3\textwidth}
\epsfig{figure=acc.eps,width=0.3\textwidth}
\epsfig{figure=pl.eps,width=0.3\textwidth}}
\caption{Left: Distribution of internal energy for polytropic stars with
different indices. Solid line: $\gamma=5/3$, long-dashed line: $\gamma =1.4$,
short-dashed line: $\gamma=4/3$. Center: corresponding evolution of the
accretion rate for the same three cases. The red line indicates for
comparison a simple $t^{-5/3}$ power-law. Right: time evolution of the
power-law index $n=\mbox{d}\ln\dot{m}/\mbox{d}\ln t$. As can be seen the value
$n=-5/3$ is only approached at late times.}
\label{fig:analytic}
\end{figure*}
A simple and instructive way to consider the process is by treating the
interaction of the star with the black hole under the impulse approximation,
that is assuming that the interaction occurs in a very small time span as the
star gets close to pericentre. This approximation is probably appropriate for
highly hyperbolic encounters, but is only approximate for the parabolic case
considered here. We therefore expect the details to differ somewhat from what
is derived here below, even if the qualitative behaviour is correct. In this
approximation, the motion of the star is simply a straight line until it
reaches pericentre, at which point it is subject to a short impulse that
deflects the various fluid elements, each of them individually conserving
their specific energy. Until it reaches pericentre, therefore, the structure
of the star is essentially unchanged and it keeps its original radial density
profile and its initial radius. The key thing to realize here is that the
spread of specific energy the star has reached just before the impulse is
simply given by the different depths at which the various fluid elements are
within the black hole potential well, because they all share the same velocity
and therefore the same kinetic energy. This was clearly realized by
\citet{lacy82} and \citet{evans89} who pointed out that ``The spread in
specific energy of the gas... is given by the change in the black hole
potential across a stellar radius''. The impulse occurs instantaneously and
therefore does not modify the kinetic energy of the fluid elements but
imparts some degree of rotation in the star, with $\Omega\approx (2GM_{\rm
h}/R_{\rm p}^3)^{1/2}$, where $R_{\rm p}$ is the pericenter distance (cf.
\citealt{evans89}). We consider here the case in which the stellar radius
$R_{\star}$ is much smaller than $R_{\rm p}$, which is appropriate for the
case of stellar disruption by a supermassive black hole (although not for the
case of compact binaries). This allows us to easily estimate the expected
energy spread, as
\begin{equation}
\Delta E = \left(\frac{\mbox{d} E_{\rm p}}{\mbox{d} r}\right)_{R_{\rm p}}\Delta r_{\rm max} = \frac{GM_{\rm h}}{R_{\rm p}^2}R_{\star},
\label{eq:deltaE}
\end{equation}
where $E_{\rm p}=GM_{\rm h}/r$ is the potential energy due to the black hole
and $\Delta r_{\rm max}= R_{\star}$ is the maximum deviation from the
pericentre distance (cf. \citealt{lacy82}). We thus expect the energy
distribution to extend roughly between $-\Delta E$ and $\Delta E$. This simple
result has been obtained in all the early analyses of the problem. However, in
the same approximation as before, we are also able to derive the whole energy
distribution starting from the density, by calculating the fraction of
stellar mass at a given $\Delta r$ from the centre. Fig. \ref{fig:scheme}
illustrates the geometry. It can be easily shown that
\begin{equation}
\frac{\mbox{d} M}{\mbox{d}\Delta r} = 2\pi \int_{\Delta r}^{R_{\star}}\rho(r)r\mbox{d} r,
\label{eq:MR}
\end{equation}
where $\rho(r)$ is the spherically symmetric mass density of the star. The
relation between the distribution of $\Delta r$ and the distribution of energy
$E$ is simply given by:
\begin{equation}
\frac{\mbox{d} M}{\mbox{d} E} = \frac{\mbox{d} M}{\mbox{d}\Delta r}\frac{R_{\star}}{\Delta E},
\label{eq:ME}
\end{equation}
where $\Delta E$ is given by Eq. (\ref{eq:deltaE}).
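Equation (\ref{eq:MR}) can be checked with a short Monte Carlo sketch; the quadratic test density used below is arbitrary (it is not one of the stellar models considered later) and serves only to illustrate the geometry: points are sampled from a spherically symmetric density and the histogram of their coordinate along the star--hole axis is compared with $2\pi\int_{|\Delta r|}^{R_{\star}}\rho(r)\,r\,\mbox{d} r$.
\begin{verbatim}
# Sketch: Monte Carlo check of dM/d(Delta r) = 2 pi int_{|Delta r|}^{R*} rho(r) r dr
# for a test density rho(r) = 1 - r^2 inside R_star = 1 (arbitrary profile).
import numpy as np

rng = np.random.default_rng(0)
cand = rng.uniform(-1.0, 1.0, size=(1200000, 3))
r2 = np.sum(cand**2, axis=1)
keep = (r2 < 1.0) & (rng.uniform(0.0, 1.0, size=len(cand)) < (1.0 - r2))
z = cand[keep, 2]                       # coordinate along the star--hole axis

# analytic slab mass: 2 pi int_{|z|}^1 (1 - r^2) r dr = (pi/2) (1 - z^2)^2
zg = np.linspace(-1.0, 1.0, 41)
analytic = 0.5 * np.pi * (1.0 - zg**2) ** 2
total_mass = 8.0 * np.pi / 15.0         # int_0^1 (1 - r^2) 4 pi r^2 dr
hist, edges = np.histogram(z, bins=zg, density=True)
bin_avg = 0.5 * (analytic[:-1] + analytic[1:])
print(np.max(np.abs(hist * total_mass - bin_avg)))   # small compared to pi/2
\end{verbatim}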
It is useful to introduce dimensionless quantities. We then define $\epsilon =
-E/\Delta E$ (where we have also included a minus sign because we are
interested in material with negative specific energy) as our dimensionless
energy, $x=\Delta r/R_{\star}$ as our radial coordinate within the star,
$x_{\rm p}=R_{\rm p}/R_{\star}\gg 1$ as our dimensionless pericentre distance
and $m=M/M_{\star}$ as our dimensionless mass. We also introduce a fiducial
time unit $T_0 = 2\pi(R_{\rm p}^3/GM_{\rm h})^{1/2}$ and a dimensionless time
$\tau = T/T_0$, as well as a fiducial density $\rho_0=M_{\star}/R_{\star}^3$
and a dimensionless density $\hat{\rho}=\rho/\rho_0$. The outcome of the
disruption event depends on the 'penetration factor' $\beta=R_{\rm p}/R_{\rm
t}$, that is the ratio of the pericenter distance to the tidal radius
$R_{\rm t}=q^{1/3}R_{\star}$, where $q=M_{\rm h}/M_{\star}$ is the mass ratio
between the black hole and the star. In order to tidally disrupt the star we
require $\beta\lesssim 1$. For example, for a mass ratio $q= 10^6$, we have
$\beta=1$ for $R_{\rm p}=100 R_{\star}$. To give an idea of the numbers
involved, we note that for $R_{\rm p}=100R_{\odot}$, $M_{\rm
h}=10^6M_{\odot}$, $M_{\star}=1M_{\odot}$ and $R_{\star}=R_{\odot}$, we have
$T_0 \approx 3.18\times10^{-4}$ yrs $\approx 0.11$ days, while the unit for the
accretion rate is $M_{\star}/T_0\approx 3.1\times10^3M_{\odot}$/yr. In these
units, Eqs. (\ref{eq:ET}), (\ref{eq:MT}), (\ref{eq:MR}) and (\ref{eq:ME})
become simply:
\begin{equation}
\epsilon = \frac{x_{\rm p}}{2}\tau^{-2/3},
\label{eq:dim1}
\end{equation}
\begin{equation}
\frac{\mbox{d} m}{\mbox{d}\tau} = \frac{x_{\rm p}}{3}\frac{\mbox{d} m}{\mbox{d}\epsilon}\tau^{-5/3},
\label{eq:dim2}
\end{equation}
\begin{equation}
\frac{\mbox{d} m}{\mbox{d} x} = 2\pi\int_x^1\hat{\rho}(x')x'\mbox{d} x',
\label{eq:dim3}
\end{equation}
\begin{equation}
\frac{\mbox{d} m}{\mbox{d} \epsilon}=\frac{\mbox{d} m}{\mbox{d} x} .
\label{eq:dim4}
\end{equation}
The above simple set of equations therefore allows us to calculate the
accretion rate onto the black hole as a function of the internal stellar
structure. In general, we expect the density to show a peak at small radii $x$
and therefore a peak at small specific energies $\epsilon$. Since material at
lower energies contributes to the accretion at later times, we can already
predict what relative changes we expect with respect to the standard
$t^{-5/3}$ light curve. In particular, we expect that if the star is more
centrally condensed the flare should start with a relatively longer delay
(less matter at large energies - small return time) and should have a
shallower light curve (more matter at small energies - large return time).
However, unless the density is strongly diverging at small radii, we expect
$\mbox{d} m/\mbox{d} x= \mbox{d} m/\mbox{d}\epsilon$ to flatten at the lowest energies and
therefore the light curve to approach a $t^{-5/3}$ profile at late times.
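As a quick arithmetic check of the fiducial numbers quoted above (a sketch in SI units; the physical constants are standard values and are not taken from this paper):
\begin{verbatim}
# Sketch: fiducial time unit T0 = 2 pi (Rp^3 / (G Mh))^(1/2) and the accretion
# rate unit M_star/T0 for Rp = 100 R_sun, Mh = 1e6 M_sun, M_star = 1 M_sun.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
R_sun = 6.957e8     # m
year = 3.156e7      # s

Rp = 100.0 * R_sun
Mh = 1.0e6 * M_sun
T0 = 2.0 * math.pi * math.sqrt(Rp**3 / (G * Mh))

print(T0 / year)      # ~3.2e-4 yr
print(T0 / 86400.0)   # ~0.12 days
print(year / T0)      # accretion-rate unit M_star/T0 ~ 3.1e3 M_sun per yr
\end{verbatim}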
\begin{figure*}
\centerline{\epsfig{figure=dens_prof.eps,width=0.5\textwidth}
\epsfig{figure=initial_dens.eps,width=0.5\textwidth}
}
\caption{Radial density profiles for the four models considered here. In the
left panel we show the density of four solutions of the Lane-Emden equation
with (from the highest to the lowest central density) $\gamma=1.4$, 1.5, 5/3
and 1.8. In the right panel we show the corresponding SPH density estimates
for the initial conditions of our simulations.}
\label{fig:initial}
\end{figure*}
As an example, we can use the above analytical formulae to calculate the
specific energy distribution and the accretion rate as a function of time
predicted for some simple stellar models with known density profiles. We have
thus considered simple polytropic spheres with different indices $\gamma =
5/3$, 1.4 and $4/3$. We have first solved numerically the Lane-Emden equation
for the three cases and have then computed the various relevant quantities
using Eq. (\ref{eq:dim1})-(\ref{eq:dim4}) above, assuming $x_{\rm p}=100$. The
results are shown in Fig. \ref{fig:analytic}. The left panel shows the
prediction for the energy distribution, where the solid line indicates the
relatively non compact case $\gamma=5/3$, the short-dashed line indicates
$\gamma=1.4$ and the long-dashed case shows the most compact case
$\gamma=4/3$. As can be seen, the energy distributions do extend up to
$\epsilon \sim 1$, but are not flat except at very low energies, the effect
becoming more pronounced for the more compact cases. The middle panel shows
the predicted evolution of the mass accretion rate $\dot{m}=\mbox{d} m/\mbox{d} \tau$,
for the three values of $\gamma$ with the same line styles as the left panel.
The red line shows for comparison a simple power law with index -5/3. It can
be seen that indeed the light curves are slightly shallower than $t^{-5/3}$
and approach it only at late times. This is even more evident in the right
panel, where we plot the power law index $n=\mbox{d}\ln\dot{m}/\mbox{d}\ln\tau$ for the
three cases. If we want to put some numbers on the estimates above, note that
for our standard numerical values described above a time of 1 year corresponds
to roughly $\tau\approx 3000$. We then see that the power law index after 1
year of the flare is $n\approx -1.5$ for $\gamma=5/3$, which is reasonably
close to the expected -5/3. However, such stellar model is probably
unrealistic for a solar type star, whose structure is rather more similar to a
$\gamma=4/3$ polytrope, in which case, after 1 year of the flare the power law
index is still $n\approx -0.8$.
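The procedure just described can be sketched numerically as follows (a rough illustrative sketch rather than the code actually used here; the solver settings and grids are arbitrary choices): solve the Lane-Emden equation for index $n=1/(\gamma-1)$, normalise the profile to unit radius and unit mass, and evaluate Eqs. (\ref{eq:dim1})--(\ref{eq:dim4}).
\begin{verbatim}
# Sketch: dm/dx from Eq. (dim3) for a polytrope, then the light curve from
# Eqs. (dim1)-(dim2), for gamma = 4/3 and x_p = 100 (illustrative resolution).
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid, trapezoid

def lane_emden(n, xi_max=20.0):
    # theta'' + (2/xi) theta' + theta^n = 0, integrated until theta reaches zero
    def rhs(xi, y):
        theta, dtheta = y
        return [dtheta, -max(theta, 0.0) ** n - 2.0 * dtheta / xi]
    hit_zero = lambda xi, y: y[0]
    hit_zero.terminal, hit_zero.direction = True, -1
    sol = solve_ivp(rhs, [1e-6, xi_max], [1.0, 0.0], events=hit_zero,
                    max_step=0.01, rtol=1e-8)
    return sol.t, np.clip(sol.y[0], 0.0, None)

gamma, x_p = 4.0 / 3.0, 100.0
xi, theta = lane_emden(1.0 / (gamma - 1.0))
x = xi / xi[-1]                                   # radius in units of R_star
rho_hat = theta ** (1.0 / (gamma - 1.0))
rho_hat /= trapezoid(4.0 * np.pi * rho_hat * x**2, x)   # unit total mass

# Eq. (dim3): dm/dx = 2 pi int_x^1 rho_hat(x') x' dx'; Eq. (dim4): dm/deps = dm/dx
cum = cumulative_trapezoid(2.0 * np.pi * rho_hat * x, x, initial=0.0)
dm_dx = cum[-1] - cum

# Eqs. (dim1)-(dim2): eps = (x_p/2) tau^(-2/3), mdot = (x_p/3) dm/deps tau^(-5/3)
tau = np.logspace(np.log10(1.1 * (x_p / 2.0) ** 1.5), 5.0, 400)
eps = (x_p / 2.0) * tau ** (-2.0 / 3.0)
mdot = (x_p / 3.0) * np.interp(eps, x, dm_dx) * tau ** (-5.0 / 3.0)
n_index = np.gradient(np.log(mdot), np.log(tau))
peak = np.argmax(mdot)
print(n_index[peak + 50], n_index[-1])   # shallow just after the peak, near -5/3 late
\end{verbatim}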
\section{Numerical simulations}
The model described in the previous section is only approximate in that it
treats the interaction between the star and the black hole as instantaneous.
In particular, the distribution of specific energy of the disrupted stellar
material, and consequently the resultant lightcurve, has been computed by
assuming that the stellar structure is essentially unchanged until it reaches
pericentre. Still, it highlights some important features of the stellar
disruption process: the expected energy distribution is in general not flat,
and it tends to become progressively more peaked towards lower energies as the
stellar structure model gets more centrally concentrated (that is, as the
polytropic index $\gamma$ becomes smaller). In order to gain a better
understanding of the process, we have therefore compared the analytical
expectations with the results of numerical hydrodynamical simulations of the
process.
\begin{figure*}
\centerline{\epsfig{figure=peri1.eps,width=0.5\textwidth}
\epsfig{figure=peri2.eps,width=0.465\textwidth}
}
\caption{{\bf (a)} Projected density of the star at pericentre.
The black hole is outside the image, at the origin of the
coordinate system. The four panels refer to different values of $\gamma=$
1.4 (upper left), 1.5 (upper right), 5/3 (lower left) and 1.8 (lower
right). {\bf (b)} Same as panel {\bf (a)}, but after
the encounter, when the star is located at roughly two times the pericentre
distance.}
\label{fig:image}
\end{figure*}
\subsection{Numerical setup}
In the case where the encounter is parabolic, as mentioned above, the two
relevant dynamical parameters are the mass ratio between the star and the
black hole, $q=M_{\rm h}/M_{\star}$ and the penetration factor $\beta = R_{\rm
p}/R_{\rm t}$. In this work we have considered the case where $\beta=1$ and
$q=10^6$, which imply that the pericentre distance is equal to 100 times the
radius of the star.
Following the several investigations summarized in the Introduction, we have
also used a non-relativistic SPH code to simulate the encounter. Our code uses
individual particle timesteps \citep{bate95}, evolves the smoothing length
by keeping a fixed mass within a smoothing sphere (equivalent to roughly 60
particles) and includes the relevant terms needed to ensure energy
conservation when the smoothing length is variable (see \citealt{price05} for
a recent review). We also adopt a standard SPH artificial viscosity
\citep{monaghan92} with viscosity parameters $\alpha_{\rm sph}=1$ and
$\beta_{\rm sph}=2$.
In order to describe the basic dynamics of the encounter we do not need an
extremely large number of particles to reach a satisfactory
resolution. Indeed, \citet{evans89} have shown that their results were
numerically converged with a number of particles $N$ equal to a few $10^4$.
Even recent calculations have only used a relatively small number of
particles, of the order of $10^3$ \citep{ayal2000} up to $2\times10^4$
\citep{bogdanovic04}. In this work we have run all our simulations at the two
resolutions of $N=10^4$ and $N=10^5$ and have noticed no appreciable difference
in the results, thereby confirming the numerical convergence of the results.
In the following, we only show the higher resolution results.
We initialize our simulations by placing the SPH particles to form the
structure of a polytropic star of given index $\gamma$ (we have considered the
four cases $\gamma = 1.4$, 1.5, 5/3 and 1.8). This is done by initially
placing the particles using close sphere packing and then differentially
stretching their radial position to achieve the desired density profile. This
method minimizes the statistical noise associated with random placing of the
particles (we thank Walter Dehnen for providing this setup routine). We then
relax the structure of the star by evolving it in isolation until its internal
properties settle down.
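A minimal sketch of the radial stretching step (an illustrative reimplementation under simple assumptions, not the routine kindly provided to us): each particle radius drawn for a uniform-density sphere is remapped so that its enclosed-mass fraction is preserved under the target density profile.
\begin{verbatim}
# Sketch: remap radii from a uniform-density sphere onto a target profile
# rho_target(r) by matching cumulative mass fractions (illustrative only).
import numpy as np
from scipy.integrate import cumulative_trapezoid

def stretch_radii(r_uniform, rho_target, R_star=1.0, ngrid=2000):
    r = np.linspace(0.0, R_star, ngrid)
    m = cumulative_trapezoid(rho_target(r) * r**2, r, initial=0.0)
    m /= m[-1]                                   # cumulative mass fraction M(<r)/M
    frac = (r_uniform / R_star) ** 3             # mass fraction for uniform density
    return np.interp(frac, m, r)                 # invert M(<r) at those fractions

# usage sketch with an arbitrary test profile rho(r) = 1 - (r/R_star)^2
rng = np.random.default_rng(1)
r_uni = rng.uniform(0.0, 1.0, 20000) ** (1.0 / 3.0)   # radii of a uniform sphere
r_new = stretch_radii(r_uni, lambda r: 1.0 - r**2)
\end{verbatim}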
We have considered four different values of $\gamma=$ 1.4, 1.5, 5/3 and 1.8.
In this way we encompass the expected range for different kinds of stars, from
radiative to convective ones. Indeed, a solar type star has a density profile
close to a $\gamma=4/3$ polytrope (it is actually best described by
$\gamma\approx 1.3$). Unfortunately, a $\gamma=4/3$ polytrope is difficult to
simulate, as it has zero binding energy. The lowest value of $\gamma$ that we
use is then 1.4. Red giants and low mass stars can be described by a
$\gamma=5/3$ polytrope, while neutron stars have a structure which is probably
closer to a $\gamma=2.5$ polytrope.
We plot in Fig. \ref{fig:initial} the initial density
profile of our four models as predicted from the solution of the Lane-Emden
equation (left panel) and as realized after the initial conditions have been
allowed to relax (right). As can be seen, the four models differ in their
central concentration, such that the $\gamma=1.4$ model is the most
concentrated and the $\gamma=1.8$ is the least. It might also be worth
recalling that models with larger $\gamma$ are less compressible than models with
lower $\gamma$.
Finally, we introduce the black hole as a point mass at the origin and we
displace the star so as to place its center of mass on the required parabolic
orbit (since the star is an extended object this actually means that the total
mechanical energy of the star is slightly negative, amounting to roughly
-0.005 in our units). The initial distance from the black hole is three times
the pericentre distance (in other simulations not described here, we have also
used a larger initial distance and found no significant difference). Our code
units are $R_{\star}$ for length and $M_{\star}$ for mass, which ensure that
our results are described in the same dimensionless variables as described in
Section 2. The black hole is modelled as a sink onto which SPH particles can
be accreted if they come closer to the black hole than a distance of 0.25 in code
units. However, in practice, given that our pericentre is very large and that
we do not follow the evolution of the debris long after the interaction, no
particles are actually accreted during the course of our simulations.
\section{Results}
\subsection{The $\gamma=5/3$ case}
Before comparing the results obtained with various polytropic indices, we
start by describing the results that we have obtained in the $\gamma=5/3$ case,
which is directly comparable to the simulations discussed in previous papers.
In particular, this simulation is essentially a higher resolution version of
the one initially discussed in \citet{evans89}.
Two snapshots of the integrated density profile of the star are shown in the
lower left panels of Fig. \ref{fig:image}(a,b), at two different times, that
is when the star is at pericentre and when it is at roughly two times the
pericentre distance, after the encounter. The overall structure of the star
looks qualitatively similar to the one shown in \citet{evans89}. It is
interesting to notice that at pericentre the star is already quite distorted
with respect to its initial configuration and in particular it has expanded
somewhat (recall that its initial radius is 1 in code units). This occurs
because, in isolation, the star is in hydrostatic equilibrium between its
pressure and its self-gravity. As the star approaches the black hole the tidal
field effectively acts as to reduce the stellar gravity, making pressure
forces unbalanced and therefore `inflating' the star. This effect is expected
to be more significant for small than for large $\gamma$. This reflects the
fact that the radius of a polytrope with small $\gamma$ is more sensitive to
the effective gravity.
A more quantitative comparison can be done by looking at the distribution of
specific energies of the disrupted star. This is shown in Fig. \ref{fig:evans}
at four different times during the simulations: at $t=0$ (upper left panel),
at pericentre (upper right), and after the encounter, when the star is roughly
at four times the pericentre distance (lower left) and ten times the
pericentre distance (lower right). For ease of comparison with
\citet{evans89}, only for this plot we have used a logarithmic scale for the
distribution. It can be seen that initially the distribution is very narrow
and centered at $\epsilon=0$, which just reflects the fact that the whole star
is initially on a parabolic orbit. As the star approaches the black hole, the
distribution becomes wider and indeed approaches the width predicted by the
simple analysis of Section 2 (which is equal to unity in the units adopted
here). The lower left panel, in particular, showing the distribution at four
times the pericentre, compares almost exactly with the distribution shown by
\citet{evans89} (their fig. 3), confirming that indeed our simulations
replicate accurately their results. However, one can see that the density
distribution keeps evolving until the star is at roughly 10 pericentre
distances, where it finally settles down in the configuration shown in the
lower right panel of Fig. \ref{fig:evans}. We thus see that the distribution
is characterized by a central peak at lower energies, followed by two `wings'
at larger energies. The presence of a central peak is expected based on the
analytical model described above. The wings, on the other hand, refer to the
stellar material at the surface of the star, which at pericentre is somewhat
distorted from its initially spherical shape (as can be seen in Fig.
\ref{fig:image}, lower left panel) and would obviously show some discrepancies
with respect to the simple `spherical' model of Section 2.
Fig. \ref{fig:spec1} (solid line) shows the distribution of specific energies
averaged over 10 time units, when the star has reached $\sim 20$ pericentre
distances and the distribution has settled down. This is compared with the
prediction of the analytical model of Section 2 (cf. Fig. \ref{fig:analytic},
left panel), which is shown with a dashed line. Since the profiles are all
normalized to 1, in order to compare the shape of the distribution at the
peak, we have scaled down the analytical profile by a factor $\approx 1.6$. We
thus see that the analytical profile does approximately match the shape of the
distribution at the peak, except for the presence of the wings, indicating the
presence of more material at extreme energies than predicted by the model.
Note that, obviously, the light curve produced in this case does not show the
standard $t^{-5/3}$ decline, especially at early times. More details on the
resultant light curves are given in the next Section, where we compare the
results obtained with different values of the polytropic index.
\begin{figure}
\centerline{\epsfig{figure=evans.eps,width=0.5\textwidth}
}
\caption{Distribution of specific energies for the case $\gamma=5/3$. The four
panels show the distribution for the initial condition (upper left panel), when the
star is at pericentre (upper right), at four times the pericentre distance
(lower left) and at ten times the pericentre distance (lower right).}
\label{fig:evans}
\end{figure}
\begin{figure}
\centerline{\epsfig{figure=specg1.66.eps,width=0.5\textwidth}
}
\caption{Distribution of specific energies for the case $\gamma=5/3$. Solid
line: average distribution at the end of the simulation. Dashed line:
predicted distribution based on the analytical model, re-normalized to match
the peak.}
\label{fig:spec1}
\end{figure}
\subsection{Varying the polytropic index}
We now discuss the effects of varying the polytropic index $\gamma$ on the
structure of the disrupted star. A first comparison can be obtained by looking
at Fig. (\ref{fig:image}), where the various panels show the projected density
of the star at pericentre (left) and at two times the pericentre (right) for
the four cases considered (from top left to bottom right: $\gamma=1.4,$ 1.5,
5/3 and 1.8). Several interesting differences can be already seen from these
images. First of all, note that the overall expansion of the star is similar
in all cases. However, for larger values of $\gamma$ the density structure of
the star is much more uniform. This is particularly evident in the right
panel, which refers to well after pericentre passage. In the case where
$\gamma=1.4$ the high density core is compact, with the density in the `puffy'
tidal tails gently declining. In contrast, at the opposite extreme of
$\gamma=1.8$ the high density region is more extended and the edge of the
tidal tails is more clearly defined, revealing a sharper density cut-off at
the edge. It is also interesting to note the different degree of internal
rotation induced in the star by the tidal interaction, with the elongated core
being more aligned with the line joining the star and the black hole (at the
origin of the coordinate system) for smaller $\gamma$ than for larger ones.
\begin{figure*}
\centerline{\epsfig{figure=specg1.4.eps,width=0.35\textwidth}
\epsfig{figure=specg1.5.eps,width=0.35\textwidth}}
\centerline{\epsfig{figure=specg1.66b.eps,width=0.35\textwidth}
\epsfig{figure=specg1.8.eps,width=0.35\textwidth}}
\caption{Specific energy distribution for the four simulations. Upper left:
$\gamma=1.4$, upper right: $\gamma=1.5$, lower left: $\gamma=5/3$, lower
right: $\gamma=1.8$. The solid lines are the result of the simulations,
while the dashed lines show the distribution expected from the simple
analytical theory outlined in Section 2, where in each case the initial
density profile of the star has been homologously expanded by a factor
$\xi=2.5$, 2.1, 1.63 and 1.6 for the four cases $\gamma=1.4$, 1.5, 5/3 and
1.8, respectively.}
\label{fig:specave}
\end{figure*}
\begin{figure}
\centerline{\epsfig{figure=shock_image.eps,width=0.5\textwidth}}
\caption{Top: Projected density for the $\gamma=5/3$ case at
$t=16.75$. Bottom: vertical cross section of the quantity $q$, defined in
Eq. (\ref{eq:q}). When $|q|>1$ the gas undergoes a shock. It can be seen
that this occurs at the edge of the tidal tails.}
\label{fig:shockimage}
\end{figure}
\begin{figure}
\centerline{\epsfig{figure=deltam.eps,width=0.5\textwidth}}
\caption{Amount of material involved in shocks as a function of time for the
four simulations with $\gamma=1.4$ (solid line), $\gamma=1.5$ (short-dashed
line), $\gamma=5/3$ (long-dashed line) and $\gamma=1.8$ (dot-dashed
line). As the polytropic index grows more mass undergoes shocks, producing
progressively more pronounced wings in the energy distribution
(cf. Fig. \ref{fig:specave}).}
\label{fig:deltam}
\end{figure}
Let us now look at the distribution of specific energies within the star for
the four different simulations. This is shown in Fig. \ref{fig:specave}, where
the solid lines refer to the simulations, averaged over 10 time units when the
star has reached a distance of roughly 20 times the pericentre. Note that in
each simulation, as mentioned above, the stars are somewhat inflated once they
reach pericentre. In order to compare the numerical results with the
analytical predictions we therefore have to take into account this expansion.
We have thus simply taken the initial equilibrium density as a function of
radius within the star and re-scaled the radius by a constant factor $\xi$,
thus effectively applying a homologous expansion to the stellar structure. We
have then calculated the expected energy distribution based on Eqs.
(\ref{eq:dim3}) and (\ref{eq:dim4}) for this `inflated' profile. The resulting
analytical predictions are then shown in Fig. \ref{fig:specave} with a dashed
line. The expansion factor to match the numerical data is $\xi=2.5$, 2.1, 1.63
and 1.6 for the four cases $\gamma=1.4$, 1.5, 5/3 and 1.8, respectively. It
is interesting to see that this expansion parameter decreases as we increase
$\gamma$, reflecting the reduced response to variations in the gravity field
as $\gamma$ gets larger. It can be seen that for $\gamma=1.4$ our inflated
polytropic model describes very accurately the outcome of the simulation.
However, as $\gamma$ increases the results of the simulations start to deviate
from the model, in particular in the appearance of wings in the tail of the
distribution. These wings become progressively more prominent as $\gamma$ gets
larger. In the previous section we have already shown that the core of the
distribution for $\gamma=5/3$ is well described by a non inflated model.
Essentially, what is happening here is that as $\gamma$ increases the
expansion of the star becomes progressively less homologous, with more
material being pushed to higher energies (in absolute value). Since the
expansion velocity is significantly supersonic, the only way to transfer
energy within the star is through shocks, occurring in the tidal tails. As
$\gamma$ increases, the density profile of the star becomes shallower, and
more material undergoes shocks in the outer layers of the star, hence
increasing the appearance of the wings. We estimate quantitatively the
importance of shocks in our simulations in the following way. For each
particle $i$ in our simulations we compute the quantity
\begin{equation}
q_{i} = \left\{ \begin{array}{ll}
\displaystyle \frac{h_i({\mathbf{\nabla\cdot u}})_i}{c_{{\rm s},i}} &
\mbox{when \hspace{2mm}} (\mathbf{\nabla\cdot u})_i<0 \\
0 & \mbox{otherwise}
\end{array}
\right.
\label{eq:q}
\end{equation}
where $h_i$ is the particle's smoothing length, $(\mathbf{\nabla\cdot u})_i$
is the local divergence of flow velocity and $c_{{\rm s},i}$ is the local
sound speed. The quantity $q$ is therefore non-zero and negative in regions of
convergent flow and shocks occur where $|q|\geq 1$. Fig. \ref{fig:shockimage}
shows the structure of the disrupted star for the $\gamma=5/3$ case at
$t=16.75$. The top panel shows the projected density, while the bottom panel
shows a vertical cross section of $q$. It can be seen that most of the
disrupted star is expanding, except for the tip of the tidal tails, where
there is a strong convergent flow, which has indeed $|q|>1$ and therefore
undergoes a shock.
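A sketch of this diagnostic is given below; the particle arrays are placeholders rather than output of the simulations and serve only to show the bookkeeping of Eq. (\ref{eq:q}).
\begin{verbatim}
# Sketch: shock indicator q_i = h_i (div u)_i / c_{s,i} where (div u)_i < 0,
# zero otherwise; particles with |q_i| >= 1 are flagged as shocked (Eq. q).
import numpy as np

def shock_indicator(h, div_u, c_s):
    q = np.where(div_u < 0.0, h * div_u / c_s, 0.0)
    return q, np.abs(q) >= 1.0

# hypothetical particle data (placeholders only)
h = np.array([0.05, 0.02, 0.10])
div_u = np.array([-0.5, -80.0, 3.0])
c_s = np.array([0.3, 0.9, 0.4])
q, shocked = shock_indicator(h, div_u, c_s)
m_particle = 1.0 / len(h)                       # equal-mass particles
print(q, shocked.sum() * m_particle)            # delta m_shock for this snapshot
\end{verbatim}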
To see how the effect of shocks changes as the polytropic index is
varied, we also compute the quantity $\delta m_{\rm shock}$, that we define
as the total mass of particles that have $|q|>1$. Fig. \ref{fig:deltam} shows
the time evolution of $\delta m_{\rm shock}$ for the four simulations with
$\gamma = 1.4$ (solid line), $\gamma=1.5$ (short-dashed line), $\gamma=5/3$
(long-dashed line) and $\gamma=1.8$ (dot-dashed line). This plot shows a few
interesting features. First, we see that as the index $\gamma$ increases, the
amount of shocked mass increases as well, confirming our expectation that more
mass is involved with the shocks in the tidal tails. In particular, the two
simulations with the largest $\gamma$, which are the ones displaying the more
pronounced 'wings' in the energy distribution, are also the two in which more
mass undergoes shocks. Second, we see that shocks appear to occur in a
sequence of peaks. The first one, common to all simulations, occurs at
$t\approx 4$, which corresponds to pericenter passage. For the largest values
of $\gamma$, we then see a number of other peaks, which can be interpreted as
the manifestation of strongly non-linear stellar pulsations induced by the
tidal interaction (see also \citealt{ivanov01}). The period of these
oscillations decreases with increasing $\gamma$, consistent with the
expectation that the period of the fundamental mode of stellar pulsations
should vary as $(3\gamma-4)^{-1/2}$ (e.g., \citealt{cox80}).
As mentioned in Section 2, the fact that for larger $\gamma$ the energy
distribution becomes relatively flatter implies that the distribution of
return times should become steeper and more rapidly approach the $t^{-5/3}$
profile expected for an exactly flat distribution. The resultant accretion
rate for the four simulations is shown in Fig. \ref{fig:acc} (left panel),
where the solid line refers to $\gamma=1.4$, the short-dashed line to
$\gamma=1.5$, the dot-dashed line to $\gamma=5/3$ and the long-dashed line to
$\gamma=1.8$. To give an idea of the numbers involved we have plotted the
results in physical units, assuming $M_{\star}=1M_{\odot}$ and
$R_{\star}=R_{\odot}$ (note that if the disrupted star is a giant, the time
unit is increased by a factor $(R_{\rm giant}/R_{\odot})^{3/2}$, which can be
several hundreds, then suggesting that the rise to peak might be observable,
and the decay time prior to reaching the asymptotic $t^{-5/3}$ behaviour can
be very long). It can indeed be easily seen that only for the largest
values of $\gamma$ does the light curve follow the $t^{-5/3}$ profile at early
times, while for lower $\gamma$ the profile is significantly shallower.
This is seen even better in the right panel of Fig. \ref{fig:acc}, where we
plot the instantaneous power law index $n$ (that is, the logarithmic time
derivative of the accretion rate) associated with the lightcurve for the cases
$\gamma=1.4$ (squares) and $\gamma=5/3$ (triangles) as a function of magnitude
drop from the peak (we only plot these two cases for simplicity: the two other
cases follow essentially the same behaviour). The dashed line at the bottom
indicates $n=-5/3$. This plot illustrates quite clearly that the $t^{-5/3}$
regime is only approached at late times, after the luminosity has dropped
$\sim$ 2 magnitudes from the peak for $\gamma=1.4$. In the case $\gamma=5/3$
the asymptotic regime is approached more quickly, after only a luminosity drop
of approximately 1 magnitude.
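The quantity shown in the right panel is obtained by a simple post-processing step; the sketch below applies it to an arbitrary placeholder light curve (not to the simulation output) purely to illustrate the computation of $n$ against the magnitude drop.
\begin{verbatim}
# Sketch: instantaneous power-law index n = dln(mdot)/dln(t) versus the
# magnitude drop 2.5 log10(mdot_peak/mdot), for a placeholder light curve.
import numpy as np

t = np.logspace(-1.0, 3.0, 600)
mdot = t / (1.0 + t) ** (8.0 / 3.0)       # placeholder: rises, peaks, decays to t^(-5/3)

n = np.gradient(np.log(mdot), np.log(t))
mag_drop = 2.5 * np.log10(mdot.max() / mdot)
post_peak = np.arange(len(t)) > np.argmax(mdot)
for dm in (1.0, 2.0, 3.0):
    i = np.argmin(np.abs(mag_drop[post_peak] - dm))
    print(dm, n[post_peak][i])            # the index steepens towards -5/3
\end{verbatim}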
\begin{figure*}
\centerline{\epsfig{figure=acc_time.eps,width=0.5\textwidth}
\epsfig{figure=pl_mag.eps,width=0.5\textwidth}}
\caption{Left: accretion rate as a function of time for the four simulations.
Solid line: $\gamma=1.4$ (internal structure close to a solar-type star);
short-dashed line: $\gamma=1.5$; dot-dashed line: $\gamma=5/3$ (internal
structure appropriate for low-mass, fully convective stars); long-dashed
line: $\gamma=1.8$. The physical units refer to the case where the disrupted
star has $M_{\star}=M_{\odot}$ and $R_{\star}=R_{\odot}$ (note that if the
disrupted star is a giant, the time unit is increased by a factor $(R_{\rm
giant}/R_{\odot})^{3/2}$, which can be several hundreds, then suggesting
that the rise to peak might be observable, and the decay time prior to
reaching the asymptotic $t^{-5/3}$ behaviour can be very long). Right:
instantaneous power-law index $n=\mbox{d}\log\dot{M}/\mbox{d}\log t$ for $\gamma=1.4$
(squares) and $\gamma=5/3$ (triangles) as a function of magnitude drop from
the peak. The dashed line at the bottom indicates the commonly-invoked
power-law index $n = -5/3$ for the light curve.}
\label{fig:acc}
\end{figure*}
\section{Discussion and conclusions}
Candidate tidal disruption events of a star by a dormant black hole are
usually associated with luminous flares in the nucleus of an otherwise normal
galaxy. These can be detected in X-rays, for example with Chandra and ROSAT
\citep{halpern04} or with XMM \citep{esquej08}, or in the optical/UV
\citep{gezari08}. X-ray observations generally follow the flare as it declines from
the peak by a few orders of magnitude, and in the best studied case, NGC 5905,
the decline appears to be consistent with a $t^{-5/3}$ fall-off
\citep{halpern04}. However, in some other cases \citep{gezari08} the
observations only span a relatively small drop in luminosity from the peak. In
these cases the lightcurve appears to be shallower than $t^{-5/3}$, and the
best fit of \citet{gezari08} to their optical data indicates a value of
$n\approx -1.1$ in one case and $n\approx -0.82$ in another. These results are
consistent with our prediction that initially the lightcurve should be
shallow, approaching a $t^{-5/3}$ profile only after the luminosity has
dropped by 2-3 magnitudes from the peak.
To summarize, in this paper we have revisited the arguments at the basis of
the expected lightcurve produced by the tidal disruption of a star in a
parabolic orbit close to a supermassive black hole. The $t^{-5/3}$ profile
originally proposed by \citet{rees88} and \citet{phinney89b} only holds in the
case where the energy distribution $\mbox{d}m/\mbox{d}\epsilon$ of the remnant
is flat, which we have shown is not the case, in general. We have proposed a
simple analytical model that relates the resultant energy distribution to the
density structure of the star. This model predicts that more centrally
concentrated (solar--type) stars should produce flares with a lightcurve
shallower than $t^{-5/3}$, approaching it only at late stages. We have tested
the model with numerical simulations and found that it does reproduce the
simulated behaviour, with the following two corrections. Firstly, we have to
account for the inflation of the star from its initial structure due to the
effective reduction of gravity as it moves in the tidal field of the black
hole. This is well described by a homologous expansion by a factor which
becomes smaller as the polytropic index becomes larger. Secondly, for large
polytropic indices we see the appearance of wings in the tails of the energy
distribution, indicating that some material has been put further away from
parabolic orbits as a result of shocks in the tidal tails.
In all cases, we do not obtain a $t^{-5/3}$ lightcurve, except at late times.
Close to the peak of the luminosity, the lightcurve is very sensitive to the
structure of the star, being shallower for stars with polytropic index close
to 4/3, expected for solar type stars. In this case, the $t^{-5/3}$ profile is
reached only after the luminosity has dropped by at least two magnitudes. For
stars with a relatively flat density profile, such as red giants and low mass
stars, the $t^{-5/3}$ profile is reached earlier.
In this paper we have only investigated a very simple setup, with a given mass
ratio between the star and the supermassive black hole, and one given set of
orbital parameters. The results are expected to depend further on
such additional parameters as the ratio of tidal radius to
pericentre distance and the eccentricity of the orbit. We plan to consider
these effects in subsequent investigations.
Finally, it should be further emphasised that all these results refer
essentially to the return time of the disrupted debris, and only correspond to
an actual luminosity under the further assumption that the subsequent
accretion is perfectly efficient and occurs on a much shorter timescale, which
may not be the case (see \citealt{ayal2000}).
\section*{Acknowledgements}
We thank Walter Dehnen for providing us with his setup routine for polytropic
spheres. We acknowledge several interesting discussions with Walter Dehnen,
Mark Wilkinson, Sergei Nayakshin and Paul O'Brien. We also thank the Referee,
Stephan Rosswog, for an insightful report. All the visualizations of the SPH
simulations have been obtained using the SPLASH visualization tool by Dan
Price \citep{splash}.
\bibliographystyle{mn2e}
| {
"timestamp": "2008-10-07T22:07:41",
"yymm": "0810",
"arxiv_id": "0810.1288",
"language": "en",
"url": "https://arxiv.org/abs/0810.1288",
"abstract": "In this paper we revisit the arguments for the basis of the time evolution of the flares expected to arise when a star is disrupted by a supermassive black hole. We present a simple analytic model relating the lightcurve to the internal density structure of the star. We thus show that the standard lightcurve proportional to $t^{-5/3}$ only holds at late times. Close to the peak luminosity the lightcurve is shallower, deviating more strongly from $t^{-5/3}$ for more centrally concentrated (e.g. solar--type) stars. We test our model numerically by simulating the tidal disruption of several stellar models, described by simple polytropic spheres with index $\\gamma$. The simulations agree with the analytical model given two considerations. First, the stars are somewhat inflated on reaching pericentre because of the effective reduction of gravity in the tidal field of the black hole. This is well described by a homologous expansion by a factor which becomes smaller as the polytropic index becomes larger. Second, for large polytropic indices wings appear in the tails of the energy distribution, indicating that some material is pushed further away from parabolic orbits by shocks in the tidal tails. In all our simulations, the $t^{-5/3}$ lightcurve is achieved only at late stages. In particular we predict that for solar type stars, this happens only after the luminosity has dropped by at least two magnitudes from the peak. We discuss our results in the light of recent observations of flares in otherwise quiescent galaxies and note the dependence of these results on further parameters, such as the star/hole mass ratio and the stellar orbit.",
"subjects": "Astrophysics (astro-ph)",
"title": "Stellar disruption by a supermassive black hole: is the light curve really proportional to $t^{-5/3}$?",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966914018751051,
"lm_q2_score": 0.7341195327172401,
"lm_q1q2_score": 0.7098304676232703
} |
https://arxiv.org/abs/2010.08767 | Bounds on the running maximum of a random walk with small drift | We derive a lower bound for the probability that a random walk with i.i.d.\ increments and small negative drift $\mu$ exceeds the value $x>0$ by time $N$. When the moment generating functions are bounded in an interval around the origin, this probability can be bounded below by $1-O(x|\mu| \log N)$. The approach is elementary and does not use strong approximation theorems. | \section{Introduction}
\subsection{Background}
This paper arose from the need of a random walk estimate for the authors' article \cite{busa-sepp-poly} on directed polymers. This estimate is a {\it positive} lower bound on the running maximum of a random walk with a small {\it negative} drift. Importantly, the bound had to come with sufficient control over its constants so that it would apply to an infinite sequence of random walks whose drift scales to zero as the maximum is taken over expanding time intervals. The natural approach via a Brownian motion embedding appeared to not give either the desired precision or the uniformity. Hence we resorted to a proof from scratch. For possible wider use we derive the result here under general hypotheses on the distribution of the step of the walk.
The polymer application of the result pertains to the exactly solvable log-gamma polymer on the plane. The objective of \cite{busa-sepp-poly} is to prove that there are no bi-infinite polymer paths on the planar lattice $\bZ^2$. The technical result is that there do not exist any nontrivial Gibbs measures on bi-infinite paths that satisfy the Dobrushin-Lanford-Ruelle (DLR) equations under the Gibbsian specification defined by the quenched polymer measures. In terms of limits of finite polymer distributions, this means that as the southwest and northeast endpoints of a random polymer path are taken to opposite infinities, the middle portion of the path escapes. This is proved by showing that in the limit the probability that the path crosses the $y$-axis along a given edge decays to zero. This probability in turn is controlled by considering stationary polymer processes from the two endpoints to an interval along the $y$-axis. The crossing probability can be controlled in terms of a maximum of a random walk. In the case of the log-gamma polymer, the steps of this random walk are distributed as the difference of two independent log-gamma variables. The case needed for \cite{busa-sepp-poly} is treated in Example \ref{ex:drw2} below.
\subsection{The question considered}
We seek a lower bound on the probability that the running maximum of a random walk with negative drift reaches a level $x>0$. To set the stage, we discuss the matter through Brownian motion. Let $S^N_n=\sum_{i=1}^n X^N_i$ be a random walk with drift $\bE(X^N_1)=\mu_N=\mu N^{-1/2}<0$, and such that the random walks $S^N$ converge weakly to a Brownian motion with drift $\mu<0$. The probability of the event
\begin{align*}
\sup_{1\leq n \leq N}S^N_n>x
\end{align*}
should be approximately the same as that of
\begin{align}\label{bwd}
\sup_{0\leq s \le 1} (B_s+s\mu) > xN^{-1/2}.
\end{align}
This latter can be computed (see \eqref{kmt612}) to be
\begin{align}\label{bc}
\P\big\{\sup_{0\leq s \leq 1} (B_s+s\mu ) > xN^{-1/2}\tspa\big\}=1-O(\abs\mu xN^{-1/2}\tspa).
\end{align}
This suggests that we should aim for an estimate of the type
\begin{align}\label{rwc}
\P\Big(\sup_{1\leq n \leq N}S^N_n>x\Big)\ge 1-O(\abs{\mu_N} x ).
\end{align}
To reach this precision weak convergence is not powerful enough, for a weak approximation of random walk by Brownian motion reaches only a precision of $O(N^{-1/4})$ \cite{sawyer1972rates,fraser1973rate}.
Our estimate \eqref{mr} below does almost capture \eqref{rwc}: we have to allow an additional $\log N$ factor inside the $O(\cdot)$ and consider $x$ of order at least $(\log N)^2$.
The by-now classical Koml\'os-Major-Tusn\'ady (KMT) coupling \cite{koml-majo-tusn-76} gives a strong approximation of random walk with Brownian motion with a discrepancy that grows logarithmically in time. This precision is sufficient for us, as we illustrate in Section \ref{sec:kmt}. The problem is now the control of the constants in the approximation.
Uniformity of the constants is necessary for our application in \cite{busa-sepp-poly}. But verifying this uniformity from the original work \cite{koml-majo-tusn-76} appeared to be a highly nontrivial task. In the end it was more efficient to derive the estimate (Theorem \ref{thm:lm} below) from the ground up.
The difficulty of the original KMT proof has motivated several recent attempts at simplification and better understanding of the result, such as Bhattacharjee and Goldstein \cite{bhattacharjee2016strong},
Chatterjee \cite{chatterjee2012new}, and Krishnapur \cite{krishnapur2020one}. There is another strong approximation result due to Sakhanenko \cite{sakhanenko1984rate} which, according to p.~232 of \cite{chatterjee2012new}, ``is so complex that some researchers are hesitant to use it''.
\subsection{Sketch of the proof}
Our proof is elementary. The most sophisticated result used is the Berry-Esseen theorem.
Given a random walk of small drift $\mu<0$, our approach can be summarized in two main steps:
\begin{enumerate}
\item Up to the time the random walk hits the level $-\varepsilon \abs\mu^{-1}$ it behaves like an unbiased random walk.
\item By the time the random walk hits the level $-\varepsilon\abs\mu^{-1}$ it will have had about $\log_2(\varepsilon|\mu|^{-1}x^{-1} )$ independent opportunities to hit the level $x$. By the previous step this implies that the probability on the left-hand side of \eqref{rwc} is of order $1-(1/2)^{\log_2(\varepsilon|\mu|^{-1}x^{-1} )}=1-O(|\mu| \varepsilon^{-1} x )$.
\end{enumerate}
As we will take $\varepsilon=(\log N)^{-1}$ in the proof, we will obtain the right order in \eqref{rwc} up to a logarithmic factor (Theorem \ref{thm:lm}). After the statement of the theorem we illustrate it with examples. Then we demonstrate that even if we knew that the constants in the KMT approximation can be taken uniform, the result would not be essentially stronger in the regime in which we apply our result.
\section{Main result} \label{sec:mr}
For each $N\in\bZ_{>0}$, let $\{X^N_i\}_{i\geq 1}$ be a sequence of non-degenerate i.i.d.\ random variables.
Denote their moment generating function by
\begin{align*}
M_N(\theta)&=\bE\big(e^{\theta X^N_1}\big).
\end{align*}
Write $M_N^{(0)}=M_N$ and $M_N^{(i+1)}=(d/d\theta)M_N^{(i)}$.
\begin{assumption}\label{ass:rw}
We assume that the random variables $\{X^N_i\}_{i\geq 1}$ satisfy the following:
\begin{enumerate} [{\rm(i)}] \itemsep=3pt
\item There exists an open interval $(-\theta_0, \theta_0)$ around the origin on which each moment generating function $M_N$ is finite. Furthermore, there exist a finite constant
$C_M$ and a constant $\frq >0$ such that we have the uniform bounds
\begin{align}
|M_N^{(i)}(\theta)|&\leq \Cm \quad \text{for all $N$, $0\leq i\leq 3$, and $\theta\in[-\frq ,\frq ]$}
\label{ed}
\end{align}
for the compact interval $[-\frq ,\frq ]\subset(-\theta_0, \theta_0)$.
\item There exists a finite constant $\cva >0$ such that
\begin{align}\label{cv}
\bE\big[(X^N_1)^2\big]\ge \sigma_N^2= \Vvv(X^N_1) \ge \cv \quad \text{for all $N$.}
\end{align}
\item There exists a finite constant $\cmu>0$ such that the expectations $\mu_N=\bE\big(X^N_1\big)$
satisfy
\begin{align}\label{r}
-\cmu(\log N)^{-3} \leq \mu_N \le 0 \quad \text{for all $N$.}
\end{align}
\end{enumerate}
\end{assumption}
The conditions in Assumption \ref{ass:rw} are fairly natural. Note that \eqref{ed} has to be checked only for $i=0$ at the expense of shrinking the interval $[-\frq, \frq]$ and increasing $\Cm$.
To make a positive maximum possible,
condition \eqref{cv} ensures enough diffusivity and condition \eqref{r} limits the strength of the negative drift. The bound \eqref{mr} below shows that $\cmu$ has to be vanishingly small in order for the result to be nontrivial.
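To see the claim that \eqref{ed} has to be checked only for $i=0$: for $0\le i\le 3$ the elementary bound $|x|^i\le i!\,(2/\frq)^i e^{\frq|x|/2}$ and $e^{\frq|x|/2}\le e^{\frq x/2}+e^{-\frq x/2}$ give, for $|\theta|\le\frq/2$,
\[
|M_N^{(i)}(\theta)| \le \bE\bigl[|X^N_1|^i e^{\theta X^N_1}\bigr]
\le i!\,(2/\frq)^i\bigl( M_N(\theta+\tfrac{\frq}{2})+M_N(\theta-\tfrac{\frq}{2})\bigr)
\le 12\,\bigl(1\vee(2/\frq)^3\bigr)\sup_{|\eta|\le\frq}M_N(\eta),
\]
so a bound on $M_N$ alone over $[-\frq,\frq]$ yields \eqref{ed} on the smaller interval $[-\frq/2,\frq/2]$ with a larger constant. (Differentiation under the expectation is justified in the interior of the interval of finiteness.)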
For $m\ge 1$ let $S^N_m=\sum_{k=1}^{m}X^N_k$ be the random walk associated with the steps $\{X^N_i\}_{i
\geq 1}$. Here is the main theorem.
\begin{theorem}\label{thm:lm}
There exist finite constants $C$ and $N_0$ that depend on $\frq ,\cv, \cmu $ and $\Cm $ such that, for every $N\ge N_0$ and $x\ge(\log N)^2 $,
\begin{align}\label{mr}
\mathbb{P}\Big(\max_{1\leq m \leq N}S^N_m\le x\Big)\leq C\tspa x(\log N)(\tspa |\mu_N|\vee N^{-1/2}\tspa ).
\end{align}
\end{theorem}
\medskip
\begin{remark}[The constants in the theorem] The constant $C$ in the upper bound \eqref{mr} is given by
\begin{equation}\label{CC} C=4\exp\bigl\{ 4(C_0+4c_\tau^{-1})(1+\cmu) + 8\frq^{-1} \bigl(1+ \log(e^{\frq }\Cm +1)\bigr) +4 \Cm \bigr\} + 3 \end{equation}
where
\begin{equation}\label{CC1}
C_0=2(\cva^{-1}+9e^{\frq}\cva^{-3}\Cm^{\tspa 5/2} ) + 2e^{8\Cm \cva^{-4}+8\cva^{-2} }c_\tau^{-1}+12\cva^{-2}\,,
\end{equation}
\begin{equation}\label{CC2} c_\tau=\log\frac2{1+\Phi_{\cv } [-2,2]}\,, \end{equation}
and $\Phi_{\cv }$ is the mean zero Gaussian distribution with variance $\cv$.
Throughout the proof we state explicitly
the various conditions $N\ge N_0$ required along the way. Let us assume that $N\ge 2$ so that $\log N$ does not vanish. Then all the conditions $N\ge N_0$ can be combined into a single condition of the form
\begin{equation}\label{fN_0} f(\Cm, \cmu, \cv, \frq, N) \ge 1 \end{equation}
where the function
$f$ is a strictly positive continuous function on $\R_{>0}^4\times\R_{\ge2}$, nondecreasing in $\frq$, nonincreasing in $\Cm$ and $\cmu$, but depends on $\cv$ in both directions. When $(\Cm, \cmu, \cv, \frq)$ is restricted to a compact subset $\mathcal{K}$ of $\R_{>0}^4$, there exists a finite index $N_\mathcal{K}$ such that $f(\Cm, \cmu, \cv, \frq, N)$ is a nondecreasing function of $N\ge N_\mathcal{K}$ for any fixed $(\Cm, \cmu, \cv, \frq)\in \mathcal{K}$, and
\[ \lim_{N\to\infty}\; \inf_{(\Cm, \cmu, \cv, \frq)\tsp \in\tsp \mathcal{K}} f(\Cm, \cmu, \cv, \frq, N) = \infty. \]
In particular, for each compact subset $\mathcal{K}\subset\R_{>0}^4$ there exists a finite index $N_{0,\mathcal{K}}$ such that \eqref{fN_0} holds
for all $N\ge N_{0,\mathcal{K}}$ and all $(\Cm, \cmu, \cv, \frq)\in \mathcal{K}$.
Furthermore, it is evident from \eqref{CC}-\eqref{CC2} that $C$ is a continuous function of $(\Cm, \cmu, \cv, \frq)\in\R_{>0}^4$. We conclude with the following local uniformity statement.
\end{remark}
\begin{corollary} \label{cor:lm}
For each compact subset $\mathcal{K}\subset\R_{>0}^4$ there exist finite constants $C_{0,\mathcal{K}}$ and $N_{0,\mathcal{K}}$ such that the following holds: the estimate \eqref{mr} with $C=C_{0,\mathcal{K}}$ on the right-hand side is valid whenever $N\ge N_{0,\mathcal{K}}$, simultaneously for all walks $\{S^N_m\}_{m\ge 1}$ that satisfy Assumption \ref{ass:rw} with parameters $(\Cm, \cmu, \cv, \frq)\in \mathcal{K}$.
\end{corollary}
We illustrate the result with some examples.
\begin{example}[Gaussian random walk]
Let $B_t$ be a Brownian motion, $\mu<0$, and define the random walk $S^N_m=B_{m}+mN^{-1/2}\mu$ .
We can verify that the bound \eqref{mr} is off by at most a logarithmic factor in this case, by comparison with the running maximum of the Brownian motion.
For $x>0$ and large enough $N$
\begin{equation}\label{op}\begin{aligned}
\P\big(\max_{1\le m\le N}S^N_m\leq x\big)&\geq \P\big(\sup_{0\le t\le N}B_t+tN^{-1/2}\mu\leq x\big)\\
&\geq 1-e^{-2xN^{-1/2}|\mu|}\geq x|\mu_N|=xN^{-1/2}|\mu|.
\end{aligned}\end{equation}
where the middle inequality follows from \eqref{kmt612} with $\mu(N)=\mu$ and $b(N)=xN^{-1/2}$. Estimate \eqref{op} shows that the optimal error is at most $O(x|\mu_N|)$, and that Theorem \ref{thm:lm}, if not optimal, is only a factor $\log N$ away from being so.
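Alternatively, the middle inequality also follows from the classical law of the all-time maximum of Brownian motion with negative drift: for $c<0$ and $a>0$, $\P\bigl(\sup_{t\ge0}(B_t+ct)>a\bigr)=e^{2ca}$, so with $c=N^{-1/2}\mu$ and $a=x$,
\[
\P\Bigl(\sup_{0\le t\le N}\bigl(B_t+tN^{-1/2}\mu\bigr)\leq x\Bigr)\ \ge\ \P\Bigl(\sup_{t\ge0}\bigl(B_t+tN^{-1/2}\mu\bigr)\leq x\Bigr)=1-e^{-2xN^{-1/2}|\mu|}.
\]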
\end{example}
A natural way to produce examples is to take $X^N_i$ as the difference of two independent random variables whose means come closer as $N$ grows and whose variances stay bounded and bounded away from zero.
\begin{example}[Exponential walk]\label{ex:Exp}
Consider a random walk $S_n=\sum_{k=1}^n X_k$ with step distribution $X_k\deq Y_\alpha-Y_\beta$ where $Y_\alpha$ and $Y_\beta$ are two independent exponential random variables with rates $\alpha$ and $\beta$, respectively. The mean step is $\mu=\bE[X_k]=\frac{\beta-\alpha}{\alpha\beta}$, so to have negative drift we assume that $\alpha>\beta$. The distribution of the supremum of $S_n$ is well-known and also feasible to compute (Example (b) in Section XII.5 of Feller \cite{fellII}):
for $x>0$,
\begin{align*}
\P\bigl( \sup_{n\ge0} S_n \le x\bigr) =1- \frac\beta\alpha e^{-(\alpha-\beta)x}
=\beta \abs\mu \bigl( 1+\beta x\bigr) + O(\mu^2x^2)
\end{align*}
where we assume $\abs\mu x$ small and expand $e^s=1+s+O(s^2)$.
We obtain a lower bound:
\begin{align*}
\P\bigl( \max_{1\le n\le N} S_n \le x\bigr) &\ge \P\bigl( \sup_{n\ge0} S_n \le x\bigr)
=\beta \abs\mu \bigl( 1+\beta x\bigr) + O(\mu^2x^2) \\
&\ge \beta^2 \abs\mu x + O(\mu^2x^2).
\end{align*}
Thus for $\abs\mu\ge N^{-1/2}$ and small $x\abs\mu$, the upper bound \eqref{mr} loses only a logarithmic factor.
That $\max_{1\le n\le N} S_n$ is close to the overall maximum $\sup_{n\ge0} S_n$ in the case $\abs\mu\ge N^{-1/2}$ is a consequence of the fact that the overall maximum is taken at a random time of order $\mu^{-2}$.
This claim is seen conveniently through ladder intervals $\{T_i\}_{i\ge 1}$. These are the intervals $T_i=\tau_i-\tau_{i-1}$ between successive ladder epochs defined by $\tau_0=0$ and
\[ \tau_i=\inf\{ n>\tau_{i-1}: S_n > S_{\tau_{i-1}}\}. \]
The distribution of $T_i$ is given by
\[ \P(T_i=\infty)=1-\frac\beta\alpha \quad\text{and}\quad
\P(T_i=n) = C_{n-1}\frac{\alpha^{n-1}\beta^n}{(\alpha+\beta)^{2n-1}}\quad \text{for } n\in\bZ_{>0}, \]
where $\{C_k\}_{k\ge0}$ are the Catalan numbers. (This calculation can be found in Lemma B.3 in the appendix of \cite{fan-sepp-arxiv}.)
Set $T_0=0$ and let $\kappa=\max\{ n\ge0 : T_n<\infty\}$ be the number of finite ladder intervals. The maximum $\sup_{n\ge 0}S_n$ is taken at time $\zeta=\sum_{i=1}^{\kappa} T_i$. One calculates $\bE[\zeta]=\frac1{\alpha\beta}\mu^{-2}$ and
$\Vvv[\zeta]=c_{\alpha, \beta} \mu^{-4}$. Thus for large enough $k$, $\P(\zeta> k\mu^{-2}) \le C_{\alpha, \beta} k^{-2}$.
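For instance, for $k\ge 2/(\alpha\beta)$ Chebyshev's inequality gives
\[
\P(\zeta>k\mu^{-2})\le\frac{\Vvv[\zeta]}{\bigl(k\mu^{-2}-\bE[\zeta]\bigr)^2}\le\frac{c_{\alpha,\beta}\,\mu^{-4}}{(k/2)^2\mu^{-4}}=4c_{\alpha,\beta}\,k^{-2},
\]
using $k\mu^{-2}-\bE[\zeta]=\bigl(k-\tfrac1{\alpha\beta}\bigr)\mu^{-2}\ge\tfrac{k}{2}\mu^{-2}$; this gives the claim with $C_{\alpha,\beta}=4c_{\alpha,\beta}$.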
\end{example}
\begin{example}[Log-gamma walk]\label{ex:drw2}
This is the application of Theorem \ref{thm:lm} used in \cite{busa-sepp-poly}.
Let $G^\lambda$ denote generically a parameter $\lambda$ gamma random variable, that is, $G^\lambda$ has density function $f(x)=\Gamma(\lambda)^{-1}x^{\lambda-1}e^{-x}$ on $\R_{>0}$.
For $\alpha, \beta>0$ let $S^{\alpha,\beta}_m=\sum_{i=1}^m X^{\alpha,\beta}_i$ denote the random walk where the distribution of the i.i.d.\ steps $\{X^{\alpha,\beta}_i\}_{i\geq 1}$ is specified by
\[ X^{\alpha,\beta}_1 \; \deq \; \log G^{\alpha}-\log G^{\beta} \]
with two independent gamma variables $G^{\alpha}$ and $G^{\beta}$ on the right.
Let $\psi_0(s)=\Gamma'(s)/\Gamma(s)$ be the digamma function and $\psi_1(s)=\psi_0'(s)$ the trigamma function on $\R_{>0}$. Their key properties are that $\psi_0$ is strictly increasing with $\psi_0(0+)=-\infty$ and $\psi_0(\infty)=\infty$, while $\psi_1$ is strictly decreasing and strictly convex with $\psi_1(0+)=\infty$ and $\psi_1(\infty)=0$.
Fix a compact interval $[\rho_{\rm min}, \rho_{\rm max}]\subset(0,\infty)$. Fix a positive constant $a_0$ and let $\{s_N\}_{N\ge 1}$ be a sequence of nonnegative reals such that $0\le s_N\le a_0(\log N)^{-3}$.
Define a set of admissible pairs
\[ \mathcal{S}} \def\cT{\mathcal{T}_N=\{(\alpha, \beta): \alpha, \beta\in [\rho_{\rm min}, \rho_{\rm max}], \;
-s_N\le \alpha-\beta \le 0\}.
\]
For $(\alpha, \beta)\in\mathcal{S}} \def\cT{\mathcal{T}_N$, the mean step satisfies
\begin{equation}\label{mu}\begin{aligned}
\mu_{\alpha,\beta}&=\bE[X^{\alpha,\beta}_1] =\bE[\log G^{\alpha}]- \bE[\log G^{\beta} ]=\psi_0(\alpha)-\psi_0(\beta) \\
&=\psi_1(\lambda)(\alpha-\beta) \;\in\; [-a_0\tspa \psi_1(\rho_{\rm min})(\log N)^{-3}, 0]
\end{aligned} \end{equation}
where we used the mean value theorem with some $\lambda\in(\rho_{\rm min}, \rho_{\rm max})$. We take $\cmu=a_0\tspa \psi_1(\rho_{\rm min})$.
The MGF of $X^{\alpha,\beta}_1$ is
\begin{align}\label{M}
M_{\alpha,\beta}(\theta)&=\bE\big[e^{\theta X^{\alpha,\beta}_1}\big]
=\bE\big[(G^{\alpha})^{\theta}\hspace{0.9pt}\big]
\, \bE\big[(G^{\beta})^{-\theta}\hspace{0.9pt}\big]
=\frac{\Gamma(\alpha+\theta)\Gamma(\beta-\theta)}{\Gamma(\alpha)\Gamma(\beta)}
\end{align}
for $\theta\in(-\alpha\hspace{0.9pt}, \beta)$.
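Indeed, for a parameter-$\lambda$ gamma variable and $\theta>-\lambda$,
\[
\bE\bigl[(G^{\lambda})^{\theta}\bigr]=\frac1{\Gamma(\lambda)}\int_0^\infty x^{\lambda+\theta-1}e^{-x}\,dx=\frac{\Gamma(\lambda+\theta)}{\Gamma(\lambda)},
\]
which gives \eqref{M}. Differentiating $\theta\mapsto\log\bE[(G^{\lambda})^{\theta}]$ at $\theta=0$ yields $\bE[\log G^{\lambda}]=\psi_0(\lambda)$ and $\Vvv(\log G^{\lambda})=\psi_1(\lambda)$, the identities used in \eqref{mu} and in the variance computation below.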
For the interval in assumption \eqref{ed} we can take
$[-\frq , \frq ]=[-\tfrac12\rho_{\rm min}, \tfrac12\rho_{\rm min}]$. Now \eqref{ed} holds with a single constant $\Cm=\Cm(\rho_{\rm min}, \rho_{\rm max}) $ for all choices of $\alpha, \beta\in[\rho_{\rm min}, \rho_{\rm max}]$.
The variance satisfies
\[ \Vvv(X^{\alpha,\beta}_1) = \Vvv(\log G^{\alpha})+ \Vvv(\log G^{\beta} ) = \psi_1(\alpha)+ \psi_1(\beta)\ge 2\psi_1(\rho_{\rm max})=\cv. \]
The constants $(\Cm, \cmu, \cv, \frq)$ have been fixed and they work simultaneously for all $(\alpha, \beta)\in\mathcal{S}} \def\cT{\mathcal{T}_N$ and all $N\ge 1$. Define $C$ through \eqref{CC}--\eqref{CC2}. Choose $N_0$ so that \eqref{fN_0} holds for all $N\ge N_0$. Now $C$ and $N_0$ are entirely determined by $(a_0, \rho_{\rm min}, \rho_{\rm max})$. We state the result as a corollary of Theorem \ref{thm:lm}.
\begin{corollary} In the setting described above the bound below holds for all $N\ge N_0$, $(\alpha, \beta)\in\mathcal{S}} \def\cT{\mathcal{T}_N$, and $x\ge(\log N)^2$:
\begin{align*}
\mathbb{P}\Big\{\tspa\max_{1\leq m \leq N}S^{\alpha,\beta}_m\le x\Big\}\leq C\tsp x \tsp (\log N)(|\mu_{\alpha,\beta}| \vee N^{-1/2}\hspace{0.9pt}).
\end{align*}
\end{corollary}
\end{example}
\medskip
\section{Comparison with the KMT coupling} \label{sec:kmt}
As a counterpoint to our Theorem \ref{thm:lm} we derive here an estimate for a single random walk with the Koml\'os-Major-Tusn\'ady (KMT) \cite{koml-majo-tusn-76} coupling with Brownian motion. We emphasize though that {\it Theorem \ref{thm:kmt} below is not an alternative to our Theorem \ref{thm:lm} because we do not know how the constants $C, K, \lambda$ below depend on the distribution of the walk. Hence without further work we cannot apply the resulting estimate \eqref{kmt620} to an infinite family of random walks.}
However, this section does illustrate that in a certain regime of vanishing drift the estimates \eqref{mr} and \eqref{kmt620} are essentially equivalent, as explained below in Remark \ref{rm:kmt4}. So even if one were to conclude that the constants $C, K, \lambda$ below can be taken uniform, the result remains the same.
Let $\wb S_n=\sum_{k=1}^n \wb X_k$ be a mean-zero random walk with i.i.d.\ steps $\{\wb X_k\}$ and unit variance $E[\,\wb X^{\tspa 2}\,]=1$.
The KMT coupling (Theorem 1 in \cite{koml-majo-tusn-76}) constructs this walk together with a standard Brownian motion $B_\bbullet$ on a probability space such that the following bound holds:
\begin{equation}\label{kmt5}
P\bigl( \;\max_{1\le k\le N} \abs{\wb S_k-B_k} \ge C\log N +z\tspa \bigr) \le Ke^{-\lambda z}
\qquad \text{for all } N\in\bZ_{>0} \ \text{ and } \ z>0, \end{equation}
where $C, K, \lambda$ are finite positive constants determined by the distribution of $\wb X_k$.
We apply this to the running maximum of a random walk with a negative drift.
\begin{theorem}\label{thm:kmt} Let
$S_n=\sum_{i=1}^n X_i$ be a random walk with i.i.d.\ steps $\{X_i\}$ that
satisfy $E[e^{tX}]<\infty$ for $t\in(-\delta,\delta)$ for some $\delta>0$. Assume the drift is negative: $\mu=EX_1<0$, and the variance $\sigma^2=E[\hspace{0.9pt}(X_1-\mu)^2\hspace{0.9pt}]>0$.
Then there exists a constant $C_1$ determined by the distribution of the normalized variable $\wb X_1=\sigma^{-1}(X_1-\mu)$ such that, for all real $x>0$ and integers $N>e^4$,
\begin{equation}\label{kmt620} \begin{aligned}
P\bigl\{\hspace{0.9pt}\max_{0\le k\le N} S_k < x \bigr\}
&\le C_1\Bigl( \tspa N^{1-(\log N)/2} + \frac{\sigma x+\sigma^2\log N}{N^{3/2}\mu^2} \hspace{0.9pt} e^{(\sigma^{-1}x+\log N)\sigma^{-1}\mu} \tspa \Bigr)\\[3pt]
&\qquad
+ 1 - e^{2(\sigma^{-1}x+C_1\log N)\sigma^{-1}\mu} .
\end{aligned}\end{equation}
\end{theorem}
\medskip
\begin{remark} \label{rm:kmt4} To compare this estimate with Theorem \ref{thm:lm}, imagine that we can let $\mu$ vary as a function of $N$ while preserving the constant $C_1$ in \eqref{kmt620}.
Consider the regime
where $\sigma^2$ is constant, $x>\log N$ and $\abs\mu$ vanishes fast enough so that $x\abs\mu$ stays bounded. Then the first parenthetical expression on the right of \eqref{kmt620} is dominated by a constant multiple of $x N^{-3/2}\mu^{-2}$. To the last part apply $1-e^{s}\le \abs s$ for $s<0$. The bound \eqref{kmt620} becomes
\begin{equation}\label{kmt622}
P\bigl\{\hspace{0.9pt}\max_{0\le k\le N} S_k < x \bigr\}
\le C_2 \hspace{0.9pt} x \bigl(
N^{-3/2}\mu^{-2} + \abs\mu\bigr) .
\end{equation}
The bound \eqref{mr} is worse than the one above by at most a $\log N$ factor, and not at all if $\mu$ vanishes fast enough. In particular, for the application in \cite{busa-sepp-poly}, the KMT bound cannot give anything substantially better than Theorem \ref{thm:lm}.
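To justify the first reduction above: in this regime $e^{(\sigma^{-1}x+\log N)\sigma^{-1}\mu}\le 1$ since $\mu<0$, $\sigma x+\sigma^2\log N\le(\sigma+\sigma^2)x$ since $x>\log N$, and once $\log N\ge 5$ and $\abs\mu\le 1$ we also have $N^{1-(\log N)/2}\le N^{-3/2}\le xN^{-3/2}\mu^{-2}$; hence the first parenthetical expression in \eqref{kmt620} is indeed bounded by a constant multiple of $xN^{-3/2}\mu^{-2}$.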
\end{remark}
\medskip
\begin{proof}[Proof of Theorem \ref{thm:kmt}]
Apply \eqref{kmt5} to
the mean-zero unit-variance normalized walk $\wb S_N= \sigma^{-1}(S_N-N\mu)$.
To simplify some steps below we can assume that $C\ge 1\vee\lambda^{-1}$.
Let $x>0$ and $z=\lambda^{-1}\log N$.
\begin{align}
\nonumber &P\bigl\{\hspace{0.9pt}\max_{0\le k\le N} S_k < x\bigr\} = P\bigl\{ \hspace{0.9pt}\max_{0\le k\le N} \bigl(\wb S_k +k\sigma^{-1} \mu\bigr) < \sigma^{-1} x\bigr\}\\[4pt]
\label{kmt600} &\le Ke^{-\lambda z} + P\bigl\{\hspace{0.9pt} \max_{0\le k\le N} \bigl(B_k +k\sigma^{-1}\mu\bigr) < \sigma^{-1} x+ C\log N + z \bigr\} .
\end{align}
Let $M_k=\sup_{0\le s\le 1}( B_{k+s}-B_k)$. Since $\mu<0$,
\begin{align*}
\sup_{0\le t\le N} \bigl(B_t +t\sigma^{-1}\mu\bigr)
&\le \max_{0\le k\le N} \bigl(B_k +k\sigma^{-1}\mu\bigr)
+ \max_{0\le k\le N-1} M_k.
\end{align*}
With this we continue from above.
\begin{equation}\label{kmt603} \begin{aligned}
\text{line \eqref{kmt600}} \; \le \;
Ke^{-\lambda z} \; + \; & P\bigl\{\hspace{0.9pt} \sup_{0\le t\le N} \bigl(B_t +t\sigma^{-1}\mu\bigr)
< \sigma^{-1}x+ 2C\log N + z \bigr\} \\[3pt]
&\qquad
+ P\bigl\{ \hspace{0.9pt}\max_{0\le k\le N-1} M_k > C\log N \bigr\} .
\end{aligned}\end{equation}
We bound the two probabilities above separately. Recall that $C\ge 1$. For the running maximum of standard Brownian motion, by (2.8.4) on page 96 of \cite{kara-shre},
\begin{equation}\label{kmt609} \begin{aligned}
P\bigl\{ \hspace{0.9pt}\max_{0\le k\le N-1} M_k > \log N \bigr\}
&\le N \tsp P\bigl\{ \tspa \sup_{0\le s\le 1} B_s > \log N \bigr\}
= N \sqrt{2/\pi} \int_{\log N}^\infty e^{-y^2/2}\,dy \\
&\le \frac{N\sqrt{2/\pi}}{ \log N} \int_{\log N}^\infty y\tspa e^{-y^2/2}\,dy
= \frac{\sqrt{2/\pi}}{ \log N} N^{1-(\log N)/2} .
\end{aligned} \end{equation}
For the running maximum of Brownian motion with drift, use first Brownian scaling, and then the density of the hitting time $T_{b(N)}$ of the point
$b(N)=N^{-1/2}(\sigma^{-1}x+ 2C\log N + z) $ with drift $\mu(N)=\sigma^{-1}N^{1/2}\mu<0$
from (3.5.12) on page 197 of \cite{kara-shre}.
\begin{equation}\label{kmt612} \begin{aligned}
& P\bigl\{\hspace{0.9pt} \sup_{0\le t\le N} \bigl(B_t +t\sigma^{-1}\mu\bigr)
< \sigma^{-1}x+ 2C\log N + z \bigr\} \\[3pt]
&= P\bigl\{\hspace{0.9pt} \sup_{0\le t\le 1} \bigl(B_t +t\sigma^{-1}N^{1/2}\mu\bigr)
< N^{-1/2}(\sigma^{-1}x+ 2C\log N + z) \bigr\} \\[3pt]
&= P\bigl\{\hspace{0.9pt} \sup_{0\le t\le 1} \bigl(B_t +t \mu(N)\bigr)
< b(N) \bigr\} = P^{(\mu(N))}\{ T_{b(N)} > 1 \} \\[3pt]
&= {b(N)} \int_1^\infty \frac1{\sqrt{2\pi s^3}} e^{-(b(N)-\mu(N)s)^2/2s}\,ds
+ P^{(\mu(N))}\{ T_{b(N)} = \infty \} \\[3pt]
&= {b(N)} e^{b(N)\mu(N)}\int_1^\infty \frac1{\sqrt{2\pi s^3}} e^{-\tfrac12b(N)^2s^{-1}-\tfrac12\mu(N)^2s}\,ds
+ 1- e^{2b(N)\mu(N)} \\
& \le 2 \tsp e^{b(N)\mu(N)} \hspace{0.9pt} \frac{b(N)}{\mu(N)^2} + 1- e^{2b(N)\mu(N)} \\
& \le 2\tsp e^{(\sigma^{-1}x+\log N)\sigma^{-1}\mu} \hspace{0.9pt} \frac{\sigma x+3C\sigma^2\log N}{N^{3/2}\mu^2}
+ 1 - e^{2(\sigma^{-1}x+3C\log N)\sigma^{-1}\mu}.
\end{aligned} \end{equation}
The second-to-last inequality dropped the factor $(2\pi s^3)^{-1/2}\le 1$ and the term $-\tfrac12b(N)^2s^{-1}$ from the exponent, and then integrated.
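Explicitly, $\int_1^\infty e^{-\frac12\mu(N)^2 s}\,ds = 2\mu(N)^{-2}e^{-\mu(N)^2/2}\le 2\mu(N)^{-2}$.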
The last inequality substituted in
$z=\lambda^{-1}\log N \le C\log N$ to bound
\[ N^{-1/2}(\sigma^{-1}x+\log N) \le b(N)\le N^{-1/2}(\sigma^{-1}x+3C\log N). \]
The conclusion \eqref{kmt620} follows from substituting into \eqref{kmt603} the bounds from above.
\end{proof}
\medskip
\section{Auxiliary facts}
Before starting the proof proper, we record some simple facts. First, assumptions
\eqref{ed} and \eqref{cv} give the following bounds:
\begin{equation}\label{cubic}\begin{aligned}
&0<\cv \leq \mu_{2,N}\equiv \bE\big[(X^N_1)^2\big]=M_N^{(2)}(0)\leq C_M,\\[3pt]
&|\mu_{3,N}|\equiv |\bE\big[(X^N_1)^3\big]| =\abs{M_N^{(3)}(0)} \le C_M, \\[3pt]
&\P(X^N_1>t)\leq C_Me^{-\frq t}.
\end{aligned}\end{equation}
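For instance, the tail bound in \eqref{cubic} is Markov's inequality applied to $e^{\frq X^N_1}$:
\[
\P(X^N_1>t)\le e^{-\frq t}\,\bE\bigl[e^{\frq X^N_1}\bigr]=e^{-\frq t}M_N(\frq)\le C_Me^{-\frq t}.
\]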
\begin{lemma}\label{lem:exp} Let $\{Y_i\}$ be i.i.d.\ random variables with common marginal distribution $\nu$. Assume that, for two constants $0<c_1,C_1<\infty$,
\begin{align}
\bE(e^{tY_1})\leq C_1 \quad\text{ for }\quad t\in[0,c_1]\label{assu2}.
\end{align}
Then
\begin{align*}
\mu_{\rm max}=\mu_{\rm max}(\nu, n)\equiv\bE\bigl[ \max\{0,Y_1,...,Y_n\}] \leq c_1^{-1}\log(C_1n+1).
\end{align*}
\end{lemma}
\begin{proof}
For $0<t\leq c_1$,
\begin{align*}
e^{t\mu_{\rm max}}\leq \bE\big(e^{t(0\vee \max_{1\leq i \leq n}Y_i)}\big)\leq 1+\bE\Big(\sum_{i=1}^{n}e^{tY_i}\Big)=1+ n\bE\big(e^{tY_1}\big)\leq C_1n+1,
\end{align*}
and the claim follows by taking $t=c_1$.
\end{proof}
Since $M_N''>0$ there is a unique minimizer
\begin{align}\label{theta}
\theta^N_0
=\arg \min\{ M_N(\theta)\}.
\end{align}
\begin{lemma}
Let $N_0$ be such that $\Cm\abs{\mu_N} \le \tfrac13\cv$ for $N\ge N_0$ and
set $\cm =2\cva^{-2} $. Then for $N\ge N_0$,
\begin{align*}
0\le\theta^N_0\leq \cm |\mu_N|.
\end{align*}
\end{lemma}
\begin{proof} If $M'_N(0)=\mu_N=0$ then the minimum is taken at $\theta^N_0=0$.
So suppose $M'_N(0)=\mu_N<0$. Expansion
for $\theta\in(0,\frq )$ gives, with some $\theta'\in(0,\theta)$,
\begin{align*}
M'_N(\theta)&=\mu_N+\mu_{2,N}\theta+ \tfrac12 M^{(3)}_N(\theta') \tspa \theta^2
\ge \mu_N+\mu_{2,N}\theta - \tfrac12 \Cm \theta^2.
\end{align*}
Since $M'$ is strictly increasing and $\cm =2/\cv\ge 2\mu_{2,N}^{-1}$, by the choice of $N_0$ we have for $N\ge N_0$
\begin{align*}
&
M'_N(-\cm \mu_N) \ge M'_N\Big(-\frac{2\mu_N}{\mu_{2,N}}\Big)
\ge -\mu_N - 2\Cm \hspace{0.9pt} \frac{\mu_N^2}{\mu_{2,N}^2}
\ge -\mu_N\bigl( 1 - \tfrac23 \cv /\mu_{2,N}^2\bigr) >0.
\end{align*}
It follows that there exists a unique $\theta^N_0\in (0,\cm |\mu_N|)$ such that $M'_N(\theta^N_0)=0.$
\end{proof}
Define a tilted measure $Q(d\omega)=\rnder^{\theta_0^N}_{N,n}(\omega)\P(d\omega)$ in terms of the Radon-Nikodym derivative
\begin{align*}
\rnder^{\theta_0^N}_{N,n}(\omega)=\frac{e^{\theta^N_0S^N_n}}{\bE\big(e^{\theta^N_0S^N_n}\big)}.
\end{align*}
Denote the expectation under $Q$ by $\bE^Q$. Increase $N_0$ further so that $N\ge N_0$ implies $\theta^N_0\in[-\frq /2,\frq /2]$ and $-\mu_N\le 2$. Then for $0\leq i \leq 3$ and $\theta\in(-\frq /2,\frq /2)$, the MGF under $Q$ satisfies
\begin{equation}\label{QM4}\begin{aligned}
M^{(i)}_{Q,N}(\theta)&=\bE^Q\big((X^N_1)^ie^{\theta X^N_1}\big)=M_N(\theta^N_0)^{-1}\bE\big((X^N_1)^ie^{(\theta+\theta^N_0) X^N_1}\big) \\
&=M_N(\theta^N_0)^{-1}M^{(i)}_N(\theta^N_0+\theta)\leq e^{-\mu_N \theta_0^N}\Cm \leq e^{\frq }\Cm ,
\end{aligned}\end{equation}
where the first inequality used Jensen's inequality and \eqref{ed}.
From this we get moment bounds under $Q$: for $0\leq i \leq 3$,
\begin{equation}\label{qm}\begin{aligned}
\bE^Q\big((X^N_1)^i\big)&=M^{(i)}_{Q,N}(0)\leq e^{\frq }\Cm.
\end{aligned}\end{equation}
For $\abs\theta\leq \frq$, there exists $\theta'$ between $0$ and $\theta$ such that
\begin{align*}
M_N^{(2)}(\theta)=\mu_{2,N}+M_N^{(3)}(\theta')\theta.
\end{align*}
Increase $N_0$ further if necessary so that
$\theta^N_0\le\frac{\cv}{2\Cm}$ for $N\ge N_0$ and we can write
\begin{align*}
M_N^{(2)}(\theta_0^N)\geq \cv-\Cm\theta_0^N\ge \frac{\cv}{2}.
\end{align*}
Then from $\bE^Q(X^N_1)=0$ and the third equation in \eqref{QM4},
\begin{align}\label{vq}
\Vvv^Q(X^N_1)=\bE^Q\big((X^N_1)^2\big)&=M^{(2)}_{Q,N}(0)=M_N(\theta^N_0)^{-1}M^{(2)}_N(\theta^N_0)\ge C_M^{-1}\frac{\cv}{2}.
\end{align}
\section{Proof of the main theorem}
To lighten the notation we omit the label $N$ from $\mu=\mu_N$ and $\theta_0=\theta^N_0$, and from some other notation that obviously depends on $N$. For $y>0$ let
\begin{align*}
\tau_y=\inf\{m\ge 1: |S^N_m|\geq y\}
\end{align*}
denote the first time the walk exits the cylinder of width $2y$ centered at the origin. Let $\Phi_{\sigma^2}$ denote the centered Gaussian distribution with variance $\sigma^2$.
\begin{lemma}\label{lem:ubc1}
For real $k\ge 0$ and $y\ge y_0$ we have
$
\mathbb{P}(\tau_y> ky^2) \leq 2 e^{-c_\tau k},
$
where
\begin{equation}\label{y_0} y_0=1\,\vee\,\frac{12\Cm \sigma_*^{-3}}{1-\Phi_{\cv }[-2,2]}
\qquad\text{and}\qquad
c_\tau=\log\frac2{1+\Phi_{\cv } [-2,2]} \,\in\,(0,\log 2).
\end{equation}
\end{lemma}
\begin{proof}
Let $\widebar S^N_m=S^N_m-m\mu$ be the centered walk. Consider an integer $k\ge 1$ and a real $y\ge 1$.
Look at the process along time increments of size $\fl {y}^2$:
\begin{align*}
\P(\tau_y>ky^2)&\leq \P(\tau_y>k\fl {y}^2)\leq\mathbb{P}( \,|S^N_{m\fl{y}^2}|\leq y \text{ for } m=1,\dotsc,k\,)\\
&\le \mathbb{P}( \,|S^N_{m\fl {y}^2}-S^N_{(m-1)\fl {y}^2}|\leq 2y \text{ for } m=1,\dotsc,k\,) \\
&=\bigl(\mathbb{P}\{ \,S^N_{\fl {y}^2}\in [-2y,2y]\} \bigr)^k
= \bigl(\mathbb{P}\bigl\{ \,\fl {y}^{-1}\wb S^N_{\!\fl {y}^2}\in [-2-\mu \fl {y}\tspa,2-\mu \fl {y}]\bigr\} \bigr)^k \\
&\le \Bigl(\Phi_{\sigma_N^2} [-2-\mu \fl {y},2-\mu \fl {y}] + 3 \frac{\mu_{3,N}}{\sigma^{3}}\fl {y}^{-1} \Bigr)^k
\le \bigl(\Phi_{\sigma_N^2} [-2,2] + 6\Cm \sigma_*^{-3}y^{-1} \bigr)^k.
\end{align*}
The penultimate inequality is the Berry-Esseen Theorem. We use the version from \cite[Section 3.4.4]{durr} where the constant is 3. The last inequality is a simple property of a centered Gaussian. Now for $y\ge y_0$
and $c_\tau$ as above,
\begin{align*}
\sup_{\cv \leq \sigma^2\leq C_M}\Phi_{\sigma^2} [-2,2] + 6\Cm \sigma_*^{-3} y^{-1}
& \le \Phi_{\cv } [-2,2] + 6\Cm \sigma_*^{-3} y_0^{-1}\\
&\le \tfrac{1}{2}(1+\Phi_{\cv } [-2,2])= e^{-c_\tau}.
\end{align*}
We have proved $
\P(\tau_y>ky^2)\leq e^{-c_\tau k} $ for $k\in\bZ_{\ge0}$.
Extend this to real $k\in \R_{\ge0}$:
\begin{align*}
\P(\tau_y>ky^2)&\leq \P(\tau_y>\fl {k}y^2)\leq e^{-c_\tau \fl{k}}\leq e^{-c_\tau( k-1)}=2(1+\Phi_{\cv } [-2,2])^{-1} e^{-c_\tau k}<2e^{-c_\tau k}.
\qedhere\end{align*}
\end{proof}
Let $H_N=\mu^{-2}\wedge N$. By \eqref{r}
\begin{align}\label{HB}
\cmu^{-2}(\log N)^6\leq H_N\leq N.
\end{align}
Define the truncated version of $\tau_y$
\begin{align*}
\hat{\tau}_y=\tau_y\wedge H_N .
\end{align*}
The following result shows that although the random walk $S^N_m$ has negative drift, up to times of order $H_N$ it behaves similarly to an unbiased random walk in the following sense: if $y>0$ is not too small, but small compared to $H_N^{1/2}$, the probability that the random walk reaches level $y$ before level $-y$ is close to $1/2$. Our choice of $H_N$ can be justified by decomposing the random walk into
\begin{align*}
S^N_n=\sum_{i=1}^{n}\big(X_i^{N}-\mu\big)+n\mu.
\end{align*}
For $\varepsilon>0$ small and $\abs\mu\ge N^{-1/2}$ (so that $H_N=\mu^{-2}$),
\begin{align}\label{dt}
(\varepsilon H_N)^{-1/2}S^N_{\varepsilon H_N}=(\varepsilon H_N)^{-1/2}\sum_{i=1}^{\varepsilon H_N}\big(X_i^{N}-\mu\big)+\varepsilon^{1/2} .
\end{align}
As
\begin{align*}
(\varepsilon H_N)^{-1/2}\sum_{i=1}^{\varepsilon H_N}\big(X_i^{N}-\mu\big)\overset{d}\approx N(0,\sigma_N^2),
\end{align*}
we see that the left hand side of \eqref{dt} is dominated by the first term on the right hand side. That is, up to time $\varepsilon H_N$ the random walk $S^N$ behaves approximately like an unbiased random walk.
\begin{lemma}\label{lem:ep} Let $y_0$ be as in \eqref{y_0}.
There exist finite constants $N_0$ and $C_0$ such that, for $N\ge N_0$ and $y_0\le y\le (\log N)^{-1}H_N^{1/2}$,
\begin{align*}
\P(S_{\hat{\tau}_y}\geq y)\geq \tfrac1{2}\Bigl[1- C_0 H_N^{-\frac12}\big(y+(\log H_N)^2\big)-\frac2{\frq y}\log(e^{\frq }\Cm H_N+1)
\Bigr].
\end{align*}
$C_0$ depends on $\frq ,\cv $ and $\Cm $ while $N_0$ depends on $\frq ,\cv $, $\cmu$ and $\Cm $.
\end{lemma}
\begin{proof}
The constant $C_0$ comes as follows in terms of the constants previously introduced above and new constants $C_2, C_3, C_4$ introduced below in the course of the proof:
\begin{equation}\label{C_0} \begin{aligned}
C_0=C_2+C_4 &=2(\cva^{-1}+9e^{\frq}\cva^{-3}\Cm^{5/2} ) + 2C_3c_\tau^{-1}+6\cm \\
&=2(\cva^{-1}+9e^{\frq}\cva^{-3}\Cm^{\tspa 5/2} ) + 2e^{2\Cm \cm ^2+4\cm }c_\tau^{-1}+6\cm .
\end{aligned}\end{equation}
Under the measure $Q$, $S_n$ is a mean-zero random walk and hence a martingale. Furthermore, ${\hat{\tau}_y}$ is a bounded stopping time. From this,
\begin{align*}
0= \int S_{\hat{\tau}_y} dQ=\int_{S_{\hat{\tau}_y}\geq y} S_{\hat{\tau}_y} dQ+\int_{S_{\hat{\tau}_y}\leq-y} S_{\hat{\tau}_y} dQ+\int_{S_{\hat{\tau}_y}\in (-y,y)} S_{\hat{\tau}_y} dQ.
\end{align*}
On the event $S_{\hat{\tau}_y}\geq y$, we have $\hat{\tau}_y=\tau_y$ and $S_{\hat{\tau}_y-1}<y\le S_{\hat{\tau}_y}=S_{\hat{\tau}_y-1}+X^N_{\hat{\tau}_y}\le y+X^N_{\hat{\tau}_y}$ and so
\begin{align*}
\int_{S_{\hat{\tau}_y}\geq y} S_{\hat{\tau}_y} \,dQ
&\leq \int_{S_{\hat{\tau}_y}\geq y}\big(y+X^N_{\hat{\tau}_y}\big)\,dQ
\le \int_{S_{\hat{\tau}_y}\geq y}\big(y+0\vee\max_{1\leq i\leq H_N} X^N_i\big)\,dQ \\[4pt]
&\le y\hspace{0.9pt} Q(S_{\hat{\tau}_y}\geq y)+\mu_{\rm max}(Q, H_N)
\;\leq \; y\hspace{0.9pt} Q(S_{\hat{\tau}_y}\geq y)+ 2\frq ^{-1}\log(e^{\frq }\Cm H_N+1),
\end{align*}
where we applied Lemma \ref{lem:exp} under the distribution $Q$ with $C_1=e^{\frq }\Cm ,c_1=\tfrac12\frq $ from \eqref{QM4}.
Combine the displays above to obtain
\begin{align*}
Q(S_{\hat{\tau}_y}\geq y) &\geq -\, y^{-1}\!\!\!\text{int}\limits_{S_{\hat{\tau}_y}\leq-y} S_{\hat{\tau}_y} dQ \; - \; y^{-1} \!\!\!\!\!\!\text{int}\limits_{S_{\hat{\tau}_y}\in (-y,y)} S_{\hat{\tau}_y} dQ \; - \; 2\frq ^{-1}y^{-1}\log(e^{\frq }\Cm H_N+1) \\[4pt]
&\ge Q(S_{\hat{\tau}_y}\leq-y)- Q(S_{\hat{\tau}_y}\in(-y,y)) - 2\frq ^{-1}y^{-1}\log(e^{\frq }\Cm H_N+1) .
\end{align*}
Use
\begin{align*}
Q(S_{\hat{\tau}_y}\leq-y)=1-Q(S_{\hat{\tau}_y}\geq y)-Q(S_{\hat{\tau}_y}\in(-y,y))
\end{align*}
to rewrite the above as
\begin{equation}\label{o:600}
\hspace{0.9pt} Q(S_{\hat{\tau}_y}\geq y)\geq \tfrac12 [1-2\hspace{0.9pt} Q(S_{\hat{\tau}_y}\in(-y,y))- 2\frq ^{-1}y^{-1}\log(e^{\frq }\Cm H_N+1)].
\end{equation}
It remains to bound the probability on the right.
The event $S_{\hat{\tau}_y}\in(-y,y)$ forces $\hat{\tau}_y=H_N$, and so another application of the Berry-Esseen theorem, together with \eqref{qm}, \eqref{vq} and $y\ge y_0\ge 1$, gives
\begin{align*}
Q\bigl\{S_{\hat{\tau}_y}\in(-y,y)\bigr\}&=Q\bigl\{H_N^{-1/2}S_{H_N}\in(-H_N^{-1/2}y,H_N^{-1/2}y)\bigr\}\\
& \le \Phi_{\cv }(-H_N^{-1/2}y,H_N^{-1/2}y) + 3\frac{e^{\frq}\Cm }{2^{-3/2}\Cm^{-3/2}\cva^3} H_N^{-\frac12} \\
& \le 2(2\pi\cv)^{-1/2}yH_N^{-1/2} + 9\frac{e^{\frq}\Cm }{\Cm^{-3/2}\cva^3} H_N^{-\frac12} \leq (\cva^{-1}+9e^{\frq}\cva^{-3}\Cm^{5/2} )\tsp y \tsp H_N^{-\frac12}\\
& \equiv \tfrac12C_2 \tsp y \tsp H_N^{-\frac12}.
\end{align*}
Rewrite \eqref{o:600} as
\begin{align}\label{qlb}
Q(S_{\hat{\tau}_y}\geq y)\geq \tfrac12[1-C_2 \tsp y \tsp H_N^{-\frac12}-2y^{-1}\frq ^{-1}\log(e^{\frq }\Cm H_N+1)].
\end{align}
It remains to switch from $Q$ back to the original distribution $\P$.
Recall the Radon-Nikodym derivative $\rnder^{\theta}_n={M(\theta)^{-n}}{e^{\theta S_n}}$. Introduce a temporary quantity $G_0>1$ to be chosen precisely below. Decompose according to the value of $\hat{\tau}_y$
and use Cauchy-Schwarz:
\begin{align}
Q(S_{\hat{\tau}_y}\geq y)
&=\bE\big[ \rnder^{\theta_0}_{ {\hat{\tau}_y}}
(\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} {\hat{\tau}_y}\leq G_0}+\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} {\hat{\tau}_y}> G_0})\big] \nonumber \\
\label{plb}
&\leq \bE\big[\rnder^{\theta_0}_{ {\hat{\tau}_y}} \tspa \mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} {\hat{\tau}_y}\leq G_0}\big]
+\Big(\bE\big[(\rnder^{\theta_0}_{ {\hat{\tau}_y}})^2\big]\Big)^\frac12\Big(\P\{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} {\hat{\tau}_y}> G_0\}\Big)^{\frac12}.
\end{align}
Let us first bound the second term on line \eqref{plb}.
Note that $\rnder^{\theta}_n$ is a $\P$-martingale and ${\hat{\tau}_y}$ is a stopping time bounded by $H_N$. Hence $(\rnder^{\theta}_n)^2$ is a submartingale and we have
\begin{align*}
&\Big(\bE[(\rnder^{\theta_0}_{ \hat{\tau}_y})^2\tspa]\Big)^\frac12\Big(\P\{S_{\hat{\tau}_y}\geq y,{\hat{\tau}_y}> G_0\}\Big)^{\frac12}\\
&\leq\Big(\bE[(\rnder^{\theta_0}_{H_N})^2\tspa]\Big)^\frac12\Big(\P\{S_{\hat{\tau}_y}\geq y,{\hat{\tau}_y}> G_0\}\Big)^{\frac12}
=\bigg(\frac{M(2\theta_0)}{M(\theta_0)^{2}}\bigg)^{H_N/2}\Big(\P\{S_{\hat{\tau}_y}\geq y,{\hat{\tau}_y}> G_0\}\Big)^{\frac12}.
\end{align*}
To bound the $M$-factor on the right,
expand $M$ and use \eqref{ed}, \eqref{cubic} and $\mu<0$. In the numerator, for some $\eta\in(0,2\theta_0)$,
\[ M(2\theta_0) = 1+\mu2\theta_0+2\mu_2\theta_0^2+\tfrac86M^{(3)}(\eta)\theta_0^3
\le 1 +2\mu_2\theta_0^2+\tfrac43\Cm\theta_0^3
\]
and similarly in the denominator:
\begin{equation}\label{o:256} \begin{aligned}
&\Big[M(2\theta_0)M(\theta_0)^{-2}\Big]^{H_N/2}\le \Big(1 +2\mu_2\theta_0^2+\tfrac43\Cm\theta_0^3\Big)^{H_N/2}\Big(1+\mu\theta_0+\tfrac12\mu_2\theta_0^2-\tfrac16\Cm\theta_0^3\Big)^{-H_N}\\
&\leq \Big(1+2 \Cm \cm ^2\mu^2+\tfrac43 C_M\cm^3\abs\mu^3\Big)^{\tfrac12\mu^{-2}} \Big(1- \cm \mu^2-C_M\cm^3\abs\mu^3\Big)^{-\mu^{-2}} \\
&\leq e^{2\Cm \cm ^2+4\cm }
\equiv C_3.
\end{aligned}\end{equation}
Above we used $ H_N\leq \mu^{-2}$ and increased $N_0$ once more so that $N\ge N_0$ guarantees $\tfrac23\cm\abs\mu\le 1$, $ \cm \mu^2+ C_M\cm^3\abs\mu^3\le \tfrac12$ and $C_M\cm^3\abs\mu\le 1$. Then we applied the bounds
\begin{align*} \Big(1+2 \Cm \cm ^2\mu^2+\tfrac43 C_M\cm^3\abs\mu^3\Big)^{\tfrac12\mu^{-2}} &\le e^{\Cm\cm^2(1+\frac23\cm\abs\mu)} \le e^{2\Cm \cm ^2},\\
\Big(1- \cm \mu^2- C_M\cm^3\abs\mu^3\Big)^{-\mu^{-2}} &\le \Big(1+2\cm \mu^2\bigl(1+C_M\cm^3\abs\mu\bigr) \Big)^{\mu^{-2}}\le e^{2\cm (1+C_M\cm^3\abs\mu) }\le e^{4\cm },
\end{align*}
where the second line also used $(1-a)^{-1}\le 1+2a$ for $a\in[0,\tfrac12]$.
Put \eqref{o:256} back up, set $G_0=yH_N^{1/2}$, and apply Lemma \ref{lem:ubc1} (for which we use the assumption $y\ge y_0$):
\begin{align}\label{qub2}
\Big(\bE[(\rnder^{\theta_0}_{ \hat{\tau}_y})^2\tspa]\Big)^\frac12\Big(\P\{S_{\hat{\tau}_y}\geq y,{\hat{\tau}_y}> G_0\}\Big)^{\frac12}\leq C_3\bigl(\P\{{\hat{\tau}_y}>G_0\}\bigr)^\frac12 \leq 2 C_3 e^{-c_\tau H_N^{1/2}y^{-1}}.
\end{align}
Next we bound the first term on line \eqref{plb}. Use $M(\theta_0)\le 1$. Let $\mathcal{M}_n=\max_{1\leq i \leq n}X^N_i$.
\begin{equation}\label{qub2.4} \begin{aligned}
&\bE\big[\rnder^{\theta_0}_{ {\hat{\tau}_y}}\tspa\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} {\hat{\tau}_y}\leq G_0}\big] =
\bE\Big[\frac{e^{\theta_0S_{\hat{\tau}_y}}}{M(\theta_0)^{\hat{\tau}_y}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} {\hat{\tau}_y}\leq G_0}\Big] \\
&
\leq \bE\Big[\frac{e^{\theta_0 S_{\hat{\tau}_y}}}{M(\theta_0)^{G_0}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} \mathcal{M}_{H_N}\leq(\log H_N)^2}\Big]
+
\bE\Big[\frac{e^{\theta_0 S_{\hat{\tau}_y}}}{M(\theta_0)^{\hat{\tau}_y}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} \mathcal{M}_{H_N}>(\log H_N)^2}\Big]\\
&\leq \bE\Big[\frac{e^{\theta_0(y+\mathcal{M}_{H_N})}}{M(\theta_0)^{G_0}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} \mathcal{M}_{H_N}\leq (\log H_N)^2}\Big]
+\bE\Big[\frac{e^{\theta_0 S_{\hat{\tau}_y}}}{M(\theta_0)^{\hat{\tau}_y}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} \mathcal{M}_{H_N}>(\log H_N)^2}\Big]
\end{aligned}\end{equation}
Let us first bound the second term. Using Cauchy-Schwarz, the bound
\begin{align*}
\P(\mathcal{M}_{H_N}>t)\leq H_NC_Me^{-\frq t},
\end{align*}
the bound \eqref{o:256}, and the tail bound in \eqref{cubic}, it follows that
\begin{align*}
\bE\Big[\frac{e^{\theta_0 S_{\hat{\tau}_y}}}{M(\theta_0)^{\hat{\tau}_y}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} \mathcal{M}_{H_N}>(\log H_N)^2}\Big]
&\leq\Big(\bE[(\rnder^{\theta_0}_{H_N})^2\tspa]\Big)^\frac12\Big(\P\{\mathcal{M}_{H_N}>(\log H_N)^2\}\Big)^{\frac12}
\\
&\leq C_3H_N^{1/2}C^{1/2}_Me^{-\frac12\frq (\log H_N)^2}.
\end{align*}
The first term on the last line of \eqref{qub2.4} is bounded as follows, with $G_0=yH_N^{1/2}$.
\begin{align*}
&\bE\Big[\frac{e^{\theta_0(y+\mathcal{M}_{H_N})}}{M(\theta_0)^{G_0}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} \mathcal{M}_{H_N}\leq (\log H_N)^2}\Big]\leq \bE\Big[\frac{e^{\theta_0(y+(\log H_N)^2)}}{M(\theta_0)^{G_0}}\mathbf{1}_{S_{\hat{\tau}_y}\geq y}\Big]\\
&\leq \P(S_{\hat{\tau}_y}\geq y) {e^{\cm H_N^{-1/2}[y+(\log H_N)^2]}}M(\theta_0)^{ -\tspa yH_N^{1/2}}\leq \P(S_{\hat{\tau}_y}\geq y) {e^{\cm H_N^{-1/2}[y+(\log H_N)^2]}}e^{\cm yH_N^{-1/2}}\\
&=\P(S_{\hat{\tau}_y}\geq y) {e^{\cm H_N^{-1/2}[2y+(\log H_N)^2]}}\\
&\leq \P(S_{\hat{\tau}_y}\geq y)\big[1+2\cm H_N^{-1/2}[2y+(\log H_N)^2]\big]
\leq \P(S_{\hat{\tau}_y}\geq y)+2\cm H_N^{-1/2}[2y+(\log H_N)^2].
\end{align*}
We used above Jensen's inequality in the form $M(\theta_0)^{-yH_N^{1/2}} \le e^{-\theta_0\mu yH_N^{1/2}}$, the definition of $H_N$ in the form $\abs\mu H_N^{1/2}\le 1$,
and then $\theta_0\leq \cm \abs\mu\leq \cm H_N^{-1/2}$.
Furthermore, by \eqref{HB} and our assumption $y\le (\log N)^{-1}H_N^{1/2}$ we have
\[ \cm H_N^{-1/2}[2y+(\log H_N)^2]\le \cm (2+\cmu) (\log N)^{-1} \le \log 2\]
where we choose $N_0$ large enough so that the last inequality holds for $N\ge N_0$. Then we applied the inequality $e^x\le 1+2x$ for $x\in[0,\log 2]$.
Going back to \eqref{qub2.4},
for $N\ge N_0$,
\begin{align*}
E\big[\rnder^{\theta_0}_{ {\hat{\tau}_y}}\tspa\mathbf{1}_{S_{\hat{\tau}_y}\geq y,\hspace{0.9pt} {\hat{\tau}_y}\leq G_0}\big]
&\leq \P(S_{\hat{\tau}_y}\geq y)+2\cm H_N^{-1/2}[2y+(\log H_N)^2]+C_3H_N^{1/2}C^{1/2}_Me^{-\frac12\frq(\log H_N)^2}\\
&\leq \P(S_{\hat{\tau}_y}\geq y)+3\cm H_N^{-1/2}[2y+(\log H_N)^2].
\end{align*}
The second inequality is guaranteed for example by choosing $N_0$ large enough so that $N\ge N_0$ implies
\[ \cmu^{-2}(\log N)^6\ge \hspace{0.9pt} e^{\frq^{-1}} \qquad\text{and}\qquad
\cm C_3^{-1} \Cm^{-1/2}\bigl(\log [\cmu^{-2} (\log N)^6\tspa]\bigr)^2\ge e^{\frac12\frq^{-1}}.\]
This works due to the lower bound \eqref{HB} on $H_N$ and because the function $f(x)=xe^{-\frac12\frq(\log x)^2}$ achieves its maximum $e^{\frac12\frq^{-1}}$ at $x=e^{\frq^{-1}}$ after which it decreases.
Combine the above with \eqref{qub2} on line \eqref{plb} to get this upper bound:
\begin{equation}\label{dlb}\begin{aligned}
Q(S_{\hat{\tau}_y}\geq y)
&\leq \P(S_{\hat{\tau}_y}\geq y)+2C_3 e^{-c_\tau H_N^{1/2}y^{-1}}+3\cm H_N^{-1/2}[2y+(\log H_N)^2]\\
&\le \P(S_{\hat{\tau}_y}\geq y)+2C_3 c_\tau^{-1} H_N^{-1/2}y+3\cm H_N^{-1/2}[2y+(\log H_N)^2]\\
&\le \P(S_{\hat{\tau}_y}\geq y)+ C_4 \tsp H_N^{-1/2}[ y+(\log H_N)^2]
\end{aligned}\end{equation}
where $C_4=2C_3c_\tau^{-1}+6\cm$.
The second inequality above came from $xe^{-x}\leq e^{-1}$ for $x\ge 0$.
Put \eqref{dlb} and \eqref{qlb} together to obtain the claim of the lemma.
\end{proof}
By adjusting a constant we can replace $\hat{\tau}_y$ with $\tau_y$ in the previous estimate.
\begin{corollary}\label{cor:St} Under the assumptions of Lemma \ref{lem:ep}, with $C_{10}=C_0+2c_\tau^{-1}$,
\begin{align}\label{qub4}
\P(S_{\tau_y}\geq y)\geq
\tfrac1{2}\Bigl[1- C_{10} H_N^{-\frac12}\big(y+(\log H_N)^2\big)-\frac2{\frq y}\log(e^{\frq }\Cm H_N+1)
\Bigr]
\end{align}
\end{corollary}
\begin{proof} The assumption $y_0\le y\le H_N^{1/2}$ implies that Lemma \ref{lem:ubc1} applies to give
\begin{equation}\label{o:290} \P(\tau_y>H_N) \le e^{-c_\tau H_Ny^{-2}} \le e^{-c_\tau H_N^{1/2}y^{-1}}
\le c_\tau^{-1} H_N^{-1/2}y.\end{equation}
The claim then comes from Lemma \ref{lem:ep} and
\[ \P(S_{\tau_y}\geq y)\geq \P(S_{\hat{\tau}_y}\geq y)-\P(\tau_y>H_N). \]
\end{proof}
For $w>0$ truncate:
\begin{align*}
\wh X^{N,w}_i=X^{N}_i\mathbf{1}_{\{X^N_i\geq -w\}}-w\mathbf{1}_{\{X^N_i< -w\}}
\qquad\text{and}\qquad
\wh S^{N,w}_n=\sum_{i=1}^{n}\wh X^{N,w}_i.
\end{align*}
Define
\begin{align*}
t_y=\inf\{m\ge 1:|\wh S^{N,w}_m|\geq y\}.
\end{align*}
We transfer bound \eqref{qub4} to the truncated walk $\widehat S$. The reason is that the proof of the forthcoming Lemma \ref{lem:ub} is easier for the truncated random walk.
\begin{corollary}\label{cor:plb} Under the assumptions of Lemma \ref{lem:ep}, with $C_{11}=C_0+4c_\tau^{-1}$,
\begin{align*}
\P(\wh S^{N,w}_{t_y}\geq y)\geq \tfrac1{2}\Bigl[1- C_{11} H_N^{-\frac12}\big(y+(\log H_N)^2\big)-\frac2{\frq y}\log(e^{\frq }\Cm H_N+1)-H_NC_Me^{-\frq w}
\Bigr].
\end{align*}
\begin{proof}
Note that
\begin{align}\label{qub3}
&\P\big(\wh S^{N,w}_m\neq S^N_m \ \text{ for some $1\leq m\leq H_N$}\big)= \P\big(\wh X^{N,w}_i\neq X^N_i \quad \text{for some $1\leq i\leq H_N$}\big) \\
&=\P\big( \inf_{1\leq i\leq H_N}X^N_i<-w\big)\leq H_NC_Me^{-\frq w}.\nonumber
\end{align}
Moreover,
\begin{align}\label{qub5}
\P(\wh S^{N,w}_{t_y}\geq y)
&\geq \P(S^N_{\tau_y}\geq y,\,\tau_y\leq H_N, \,\wh S^{N,w}_m= S^N_m \quad \text{for all $1\leq m\leq H_N$})\nonumber\\
&\geq\P(S^N_{\tau_y}\geq y)-\P(\tau_y> H_N)-\P(\wh S^{N,w}_m\ne S^N_m \quad \text{for some $1\leq m\leq H_N$})\nonumber\\
&\geq\P(S^N_{\tau_y}\geq y)- c_\tau^{-1} H_N^{-1/2}y-H_NC_Me^{-\frq w},
\end{align}
where we used \eqref{qub3} and \eqref{o:290}.
Combine the above with \eqref{qub4} to obtain the result.
\end{proof}
\end{corollary}
We turn to the main argument of the proof of Theorem \ref{thm:lm}, that is, to show that the probability that the random walk $\wh{S}_m$ hits the level $-\varepsilon H^{1/2}$ before hitting the level $x$ is close to $x|\mu|$. This gives rise to the error term in \eqref{mr}. We sketch the reasoning.
Let us try to hit the level $x>0$ starting from the origin. By Corollary \ref{cor:plb} there is a probability $\approx1/2$ to hit $x$ before hitting $-x$. Suppose we failed and hit $-x$ first. We have another chance to hit $x$ by going $2x$ upward from the level $-x$. By Corollary \ref{cor:plb} the probability of going $2x$ up to the level $x$ before going $2x$ down to the level $-3x$ is $\approx 1/2$. We continue this way until we either hit the level $x$ or the level $-\varepsilon H_N^{1/2}$. How many trials to hit $x$ do we have before we hit $-\varepsilon H_N^{1/2}$? Approximately $K=\log_2(x^{-1}\varepsilon H_N^{1/2})$. The trials are independent and so the probability of hitting the level $-\varepsilon H_N^{1/2}$ before hitting the level $x$ is $\approx 2^{-K}=Cx|\mu|$, which is what we seek.
We introduce the notation to make the sketch precise. See Figure \ref{fig: points} for an illustration.
Define $K=\fl{\log_2 ({x}^{-1}(\log N)^{-1} H_N^{1/2})}-2$.
For $i\ge 0$ set $L_i= 2^{i+2}-3$. Inductively these satisfy $L_0=1$ and $L_i=2L_{i-1}+3$. Furthermore,
\begin{align*}
xL_K\leq (\log N)^{-1} H_N^{1/2}.
\end{align*}
Define the stopping times
\begin{align*}
T_0&=\inf\{n:|\wh S^{N,x}_n|\geq xL_0\}\\
\text{and } \quad
T_i&=\inf\{n\ge T_{i-1}:\wh S^{N,x}_n\leq -xL_i \text{ or }\wh S^{N,x}_n\geq x \}.
\end{align*}
Note that $T_i=T_{i-1}$ is possible.
\begin{lemma}\label{lem:ub}
There exist finite constants $C_{12}$ and $N_0$ such that for $N\ge N_0$ and $x\ge(\log N)^2$,
\begin{align*}
\P\bigl(\,\max_{1 \leq m\leq T_K} \wh S^{N,x}_m< x\bigr)\leq C_{12}\hspace{0.9pt} x\hspace{0.9pt} (\log N)H_N^{-1/2},
\end{align*}
where
\begin{align*}
C_{12}=4\exp\bigl\{ 4(C_0+4c_\tau^{-1})(1+\cmu) + 8\frq^{-1} \bigl(1+ \log(e^{\frq }\Cm +1)\bigr) +4 \Cm \bigr\}
\end{align*}
and $C_0$ in the expression above is from \eqref{C_0}.
\end{lemma}
\begin{proof}
Since $C_0\ge 2$, we have $C_{12}\ge 4e^8\ge 2^{10}$. Then we can assume that $x\le 2^{-10} (\log N)^{-1}H_N^{1/2}$, for otherwise the bound on the probability is $> 1$. This guarantees that $K\ge 8$. It also implies that unless $\abs\mu\le 2^{-10}(\log N)^{-3}$, the result is trivial.
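To see the last claim: if $\abs\mu>2^{-10}(\log N)^{-3}$ then, since $x\ge(\log N)^2$ and $H_N^{-1/2}=\abs\mu\vee N^{-1/2}\ge\abs\mu$,
\[
C_{12}\,x(\log N)H_N^{-1/2}\ \ge\ 2^{10}(\log N)^{3}\abs\mu\ >\ 1,
\]
so the claimed bound exceeds one and there is nothing to prove.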
Since $\wh X^{N,x}_i\ge-x$,
\begin{equation}\label{coa} \{\wh S^{N,x}_{T_i}\leq-xL_i\}=\{-x(L_i+1)<\wh S^{N,x}_{T_i}\leq-xL_i\}\end{equation}
and
\begin{align*}
\big\{\wh S^{N,x}_{T_i}\leq-xL_i,\wh S^{N,x}_{T_{i+1}}\leq-xL_{i+1}\big\}\subseteq \{T_i<T_{i+1}\}.
\end{align*}
Note that
\begin{align}\label{e}
E\equiv \Big\{\max_{1 \leq m\leq T_K} \wh S^{N,x}_m< x\Big\}\subseteq \bigcap_{1\leq i \leq K} \{\wh S^{N,x}_{T_i}\leq -x L_i\}.
\end{align}
Due to \eqref{coa}
\begin{equation}\label{sm}\begin{aligned}
&\P\big(\wh S^{N,x}_{T_0}\leq -xL_0,...,\wh S^{N,x}_{T_{i-1}}\leq -xL_{i-1},\wh S^{N,x}_{T_i}\leq -xL_i\big)\\%\nonumber
&=\P\big(\wh S^{N,x}_{T_i}\leq -xL_i|\wh S^{N,x}_{T_0}\leq -xL_0,...,\wh S^{N,x}_{T_{i-1}}\leq -xL_{i-1}\big) \\
&\qquad\qquad \cdot \; \P\big(\wh S^{N,x}_{T_0}\leq -xL_0,...,\wh S^{N,x}_{T_{i-1}}\leq -xL_{i-1}\big)\\%\nonumber
&\leq\P\big(\wh S^{N,x}_{T_i}\leq -xL_i|\wh S^{N,x}_{T_0}\leq -xL_0,...,\wh S^{N,x}_{T_{i-1}}=-x(L_{i-1}+1)\big)\\
&\qquad\qquad \cdot \;\P\big(\wh S^{N,x}_{T_0}\leq -xL_0,...,\wh S^{N,x}_{T_{i-1}}\leq -xL_{i-1}\big)\\%\nonumber
&=\P\big(\wh S^{N,x}_{t_{x(L_{i-1}+2)}}\leq -x(L_{i-1}+2)\big)\P\big(\wh S^{N,x}_{T_0}\leq -xL_0,...,\wh S^{N,x}_{T_{i-1}}\leq -xL_{i-1}\big).
\end{aligned}\end{equation}
The last equality used the definition of the stopping time $t_y$, the definition of $L_i$, and the Markov property. For $1\leq i \leq K$ define the events
\begin{align*}
A^N_i=\{\wh S^{N,x}_{t_{x(L_{i-1}+2)}}\leq -x(L_{i-1}+2)\}.
\end{align*}
Applying \eqref{sm} to \eqref{e} repeatedly,
\begin{align*}
\P(E)\leq \P\Big(\bigcap_{1\leq i \leq K} \{\wh S^{N,x}_{T_i}\leq -xL_i\}\Big)\leq \prod_{1\leq i \leq K}\P(A^N_i).
\end{align*}
Let $x\ge (\log N)^2$. Recall that by \eqref{HB}, $(\log H_N)^2H_N^{-1/2}\leq \cmu(\log N)^{-1}$ and $H_N\le N$. Apply Corollary \ref{cor:plb} with $w=x$ and $y_i=x(L_{i-1}+2) \in [x, (\log N)^{-1} H_N^{1/2}]$ for $i=1,\dotsc,K$ and $N\ge N_0$ to get this estimate:
\begin{align*}
\P(A^N_i)&\leq \tfrac1{2}\Bigl[1+C_{11} H_N^{-\frac12}
\big(y_i+(\log H_N)^2\big)
+ \frac2{\frq y_i}\log(e^{\frq }\Cm H_N+1)
+H_NC_Me^{-\frq x}
\Bigr]\\
&\leq \tfrac1{2}\Bigl[1+C_{11}(1+\cmu) (\log N)^{-1}
+ \frac{2\log(e^{\frq }\Cm N+1)}{\frq (\log N)^2}
+C_M Ne^{-\frq (\log N)^2}
\Bigr]\\
&\leq \tfrac1{2}\bigl[1+C_A(\log N)^{-1}],
\end{align*}
where we set
\begin{align*}
C_A=C_{11}(1+\cmu) + 2\frq^{-1} \bigl(1+ \log(e^{\frq }\Cm +1)\bigr) + \Cm
\end{align*}
and if necessary we increase $N_0$ further so that $Ne^{-\frq (\log N)^2}\le (\log N)^{-1}$ for $N\ge N_0$.
Continue with the above estimate,
\begin{align*}
\P(E)&\leq \prod_{i=1}^K\P(A_i^N)\leq \Big(\tfrac1{2}\bigl[1+C_A(\log N)^{-1}]\Big)^K\\
&=\Big(\tfrac1{2}[1+C_A(\log N)^{-1}]\Big)^{\fl{\log_2 ({x}^{-1}(\log N)^{-1} H_N^{1/2})}-2}\\
&\leq 4x(\log N) H_N^{-1/2} \tsp[1+C_A(\log N)^{-1}]^{\log_2\! N}\\
&\leq 4e^{4C_{\!A}}x(\log N) H_N^{-1/2}=4e^{4C_{\!A}}x(\log N)(|\mu|\vee N^{-1/2}),
\end{align*}
where we used $\log_2\!N=\frac{\log N}{\log 2}\leq 4\log N$.
\end{proof}
We are ready to prove Theorem \ref{thm:lm}. By Lemma \ref{lem:ub}, by the time $\wh S$ hits the level $(\log N)^{-1} H_N^{1/2}$, with high probability it has hit level $x$ as well. It remains to verify the two points below.
\begin{enumerate} [(i)]
\item $\wh S$ is close to $S$ on the time interval $[1,N]$. This follows from a union bound and the exponential tail of $X^{N}_1$.
\item With high probability by time $N$ we hit the boundary of the cylinder of width $(\log N)^{-1} H_N^{1/2}$. This follows from Lemma \ref{lem:ubc1}.
\end{enumerate}
\begin{figure}[t]
\centering%
\begin{tikzpicture}[scale=0.8, every node/.style={transform shape}]
\draw (0,1) -- (10,1);
\node [scale=1][left] at (0,1) {$x$};
\draw (0,0) -- (10,0);
\node [scale=1][left] at (0,0) {$0$};
\draw (0,-1) -- (10,-1);
\node [scale=1,red][left] at (0,-1) {$-xL_0=-x$};
\draw (0,-2) -- (10,-2);
\node [scale=1][left] at (0,-2) {$-2x$};
\draw (0,0) -- (10,0);
\draw (0,-3) -- (10,-3);
\node [scale=1][left] at (0,-3) {$-3x$};
\draw (0,-4) -- (10,-4);
\node [scale=1][left] at (0,-4) {$-4x$};
\draw (0,-5) -- (10,-5);
\node [scale=1,red][left] at (0,-5) {$-xL_1=-5x$};
\node [scale=1][] at (0,-6) {$\vdots$};
\draw [red] (0,-7) -- (10,-7);
\node [scale=1,red][left] at (0,-7) {$-xL_K=-(\log N)^{-1} H_N^{1/2}$};
\draw [dashed,red] (0,0) -- (2,1);
\draw (0,0) -- (2,-1.5);
\draw [dashed,red] (2,-1.5) -- (6.5,1);
\node [scale=1,][left] at (1.8,-1.6) {$\wh S^N_{T_0}$};
\draw (2,-1.5) -- (5.5,-5.3);
\node [scale=1,][left] at (5.3,-5.5) {$\wh S^N_{T_1}$};
\end{tikzpicture}
\caption{\small By the time the random walk $\wh S^N$ exits the cylinder of radius $(\log N)^{-1} H_N^{1/2}$ it has had about $K$ independent opportunities to hit the level $x$, each with probability close to $1/2$.}\label{fig: points}
\end{figure}
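As a sanity check on the heuristic in the caption of Figure \ref{fig: points}, one can run a minimal Monte Carlo sketch (illustrative only; Gaussian increments are used purely for concreteness, whereas the theorem only assumes an exponential moment bound): a walk started in the middle of a symmetric band exits through the top with probability close to $1/2$ when the drift is negligible, and with smaller probability once a negative drift is switched on.
\begin{verbatim}
# Illustrative sketch (not part of the proof): gambler's-ruin heuristic.
import numpy as np

rng = np.random.default_rng(2)

def hits_top_first(h, mu=0.0, trials=2000):
    """Fraction of walks that reach +h before -h, started from 0."""
    wins = 0
    for _ in range(trials):
        s = 0.0
        while abs(s) < h:
            s += rng.normal(mu, 1.0)
        wins += s >= h
    return wins / trials

print(hits_top_first(h=30.0))            # close to 0.5
print(hits_top_first(h=30.0, mu=-0.01))  # strictly below 0.5
\end{verbatim}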
\begin{proof}[Proof of Theorem \ref{thm:lm}]
Consider $x\ge (\log N)^2$. Observe that
\begin{align*}
\Big\{\max_{1\leq m \leq N}|\wh S^{N,x}_m|\geq (\log N)^{-1} H_N^{1/2},\max_{1 \leq m\leq T_K} \wh S^{N,x}_m\geq x\Big \}\subseteq \Big\{\max_{1\leq m \leq N}\wh S^{N,x}_m\geq x\Big\}.
\end{align*}
Indeed, on the event $\max_{1\leq m \leq N}|\wh S^{N,x}_m|\geq (\log N)^{-1} H_N^{1/2} {\ge xL_K}$ we have $T_K\leq N$.
Next,
\begin{align}\label{cS}
\P\big(\wh S^{N,x}_m\neq S^N_m \quad \text{for some $1\leq m\leq N$}\big)
&=\P\big(X^N_i< -x \quad \text{for some $1\leq i\leq N$}\big)\\
&\leq \Cm Ne^{-\frq x} \leq \Cm Ne^{-\frq (\log N)^2}\nonumber.
\end{align}
By Lemma \ref{lem:ubc1}, \eqref{cS} and Lemma \ref{lem:ub},
\begin{align*}
&\P\bigl(\,\max_{1\leq m \leq N}\wh S^{N,x}_m\geq x\bigr)
\geq 1-\P\bigl(\,\max_{1\leq m \leq N}|\wh S^{N,x}_m|<(\log N)^{-1} H_N^{1/2}\bigr)-\P\bigl(\,\max_{1 \leq m\leq T_K} \wh S^{N,x}_m< x\bigr)\\
&\ge 1-\Big[\P\bigl(\,\max_{1\leq m \leq N}| S^{N}_m|<(\log N)^{-1} H_N^{1/2}\bigr)+\P\big(\wh S^{N,x}_m\neq S^N_m \quad \text{for some $1\leq m\leq N$}\big)\Big]\\
&\qquad\qquad
-\P\bigl(\,\max_{1 \leq m\leq T_K} \wh S^{N,x}_m< x\bigr)\\
&=1-\Big[\P\bigl(\tau_{(\log N)^{-1} H_N^{1/2}}>N\bigr)+\P\big(\wh S^{N,x}_m\neq S^N_m \quad \text{for some $1\leq m\leq N$}\big)\Big]\\
&\qquad\qquad -\P\bigl(\,\max_{1 \leq m\leq T_K} \wh S^{N,x}_m< x\bigr)\\
&\geq 1-\big[2 e^{-c_\tau N(\log N)^2 H_N^{-1}}+\Cm Ne^{-\frq (\log N)^2}\big]-C_{12}\tsp x\tsp(\log N) H_N^{-1/2}\\
&\geq 1-2 e^{-c_\tau (\log N)^2 }-\Cm Ne^{-\frq (\log N)^2}-C_{12}\tsp x\tsp(\log N) H_N^{-1/2}\\
&\ge 1- (C_{12}+2)x(\log N) H_N^{-1/2} .
\end{align*}
To get the inequalities above for $N\ge N_0$
we increase $N_0$ if necessary so that $N\ge N_0$ guarantees $(\log N)^{-1} H_N^{1/2}\ge y_0$ to apply Lemma \ref{lem:ubc1}, and furthermore so that
$2e^{-c_\tau (\log N)^2 } \vee \Cm Ne^{-\frq (\log N)^2} \le (\log N)^3 N^{-1/2}$ to get the last inequality.
Now the final inequality:
\begin{align*}
\P\bigl(\max_{1\leq m \leq N}S^N_m\geq x\bigr)
&\geq \P\bigl(\max_{1\leq m \leq N}\wh S^{N,x}_m\geq x\bigr) -\P\bigl(\wh S^{N,x}_m\neq S^N_m \quad \text{for some $1\leq m\leq N$}\bigr)\nonumber\\
&\geq 1-(C_{12}+2)x(\log N) H_N^{-1/2} -\Cm Ne^{-\frq (\log N)^2} \nonumber\\
&\geq 1-(C_{12}+3)x(\log N)(|\mu|\vee N^{-1/2}).
\end{align*}
Theorem \ref{thm:lm} has been proved.
\end{proof}
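As a rough numerical illustration of Theorem \ref{thm:lm} (a minimal sketch only: the increments are taken Gaussian for concreteness, and the constant in the printed comparison is set to $1$ rather than to the constant $C_{12}+3$ obtained in the proof), one can estimate $\P\bigl(\max_{1\le m\le N}S^N_m\ge x\bigr)$ by simulation and compare it with the shape $1-x(|\mu|\vee N^{-1/2})\log N$ of the lower bound.
\begin{verbatim}
# Illustrative Monte Carlo sketch (not part of the paper).
import numpy as np

rng = np.random.default_rng(0)

def hit_prob(N, mu, x, trials=2000):
    """Estimate P(max_{1<=m<=N} S_m >= x) for i.i.d. N(mu,1) increments."""
    hits = 0
    for _ in range(trials):
        walk = np.cumsum(rng.normal(loc=mu, scale=1.0, size=N))
        hits += walk.max() >= x
    return hits / trials

N = 10_000
mu = -1.0 / np.sqrt(N)   # small negative drift of order N^{-1/2}
for x in (1.0, 2.0, 5.0):
    est = hit_prob(N, mu, x)
    ref = 1 - x * max(abs(mu), N ** -0.5) * np.log(N)
    print(f"x={x:3.0f}  P(max >= x) ~ {est:.3f}   "
          f"1 - x*(|mu| v N^-1/2)*log N = {ref:.3f}")
\end{verbatim}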
\small
| {
"timestamp": "2020-11-12T02:22:18",
"yymm": "2010",
"arxiv_id": "2010.08767",
"language": "en",
"url": "https://arxiv.org/abs/2010.08767",
"abstract": "We derive a lower bound for the probability that a random walk with i.i.d.\\ increments and small negative drift $\\mu$ exceeds the value $x>0$ by time $N$. When the moment generating functions are bounded in an interval around the origin, this probability can be bounded below by $1-O(x|\\mu| \\log N)$. The approach is elementary and does not use strong approximation theorems.",
"subjects": "Probability (math.PR)",
"title": "Bounds on the running maximum of a random walk with small drift",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9669140235181256,
"lm_q2_score": 0.7341195269001831,
"lm_q1q2_score": 0.7098304654982789
} |
https://arxiv.org/abs/1011.1506 | The Dehn functions of Out(F_n) and Aut(F_n) | For n > 2, the Dehn functions of Aut(F_n) and Out(F_n) are exponential. Hatcher and Vogtmann proved that they are at most exponential, and the complementary lower bound in the case n=3 was established by Bridson and Vogtmann. Handel and Mosher completed the proof by reducing the lower bound for n>4 to the case n=3. In this note we give a shorter, more direct proof of this last reduction. | \section{Introduction}
Dehn functions provide upper bounds on the complexity of the word problem in finitely presented groups. They are examples of filling functions: if a group $G$ acts properly and cocompactly on a simplicial complex $X$, then the Dehn function of $G$ is asymptotically equivalent to the function that provides the optimal upper bound on the area of least-area discs in $X$, where the bound is expressed as a function of the length of the boundary of the disc. This article is concerned with the Dehn functions of automorphism groups of finitely-generated free groups.
Much of the contemporary study of $Out(F_n)$ and $Aut(F_n)$ is based on the deep analogy between these groups, mapping class groups, and lattices in semisimple Lie groups, particularly
${\rm{SL}}(n,\mathbb Z)$. The Dehn functions of mapping class groups are quadratic \cite{mosher},
as is the Dehn function of
${\rm{SL}}(n,\mathbb Z)$ if $n\ge 5$ (see \cite{young}). In contrast, Epstein {\em{et al.}}
\cite{Ep} proved that the Dehn function of ${\rm{SL}}(3,\mathbb Z)$ is exponential. Building on their result, we proved in \cite{BV} that $Aut(F_3)$ and $Out(F_3)$ also have exponential Dehn functions. Hatcher and Vogtmann \cite{HV} established an exponential upper bound on the Dehn function of $Aut(F_n)$ and $Out(F_n)$ for all $n\ge 3$. The comparison with ${\rm{SL}}(n,\mathbb Z)$ might lead one to suspect that this last result is not optimal for
large $n$, but recent work of Handel and Mosher \cite{HM} shows that in fact it is: they
establish an exponential lower bound by using
their general results on quasi-retractions to reduce to the case $n=3$.
\begin{theorem}\label{t:main} For $n\ge 3$, the Dehn functions of $Aut(F_n)$ and $Out(F_n)$ are
exponential.
\end{theorem}
This theorem answers Questions 35 and 37 of \cite{BVsurvey}.
We learned the contents of \cite{HM} from Lee Mosher at Luminy in June 2010 and realized that
one can also reduce the Theorem
to the case $n=3$ using
a simple observation about natural maps between different-rank Outer spaces and Auter spaces (Lemma \ref{maps}).
The purpose of this
note is to record this observation and the resulting proof of the Theorem.
\subsection*{1. Definitions}
Let $A$ be a 1-connected simplicial complex. We consider simplicial
loops $\ell\colon S\to A^{(1)}$, where $S$ is a simplicial subdivision of the
circle. A {\em simplicial filling} of $\ell$ is a simplicial map $L\colon D\to A^{(2)}$,
where $D$ is a triangulation of the 2-disc and $L|_{\partial D}=\ell$.
Such fillings always exist, by simplicial approximation.
The filling area of $\ell$, denoted
${\rm Area}_A(\ell)$, is the least number of triangles in the domain of any simplicial filling of $\ell$.
The {\em{Dehn function}}\footnote{The standard definitions of
area and the Dehn function are
phrased in terms of singular discs, but this version is $\simeq$ equivalent.}
of $A$ is the least function
$\delta_A\colon \mathbb N\to\mathbb N$ such that ${\rm Area}_A(\ell)\le \delta_A(n)$ for all loops
of length $\le n$ in $A^{(1)}$. The Dehn function of a finitely presented group $G$ is the Dehn function
of any 1-connected 2-complex on which $G$ acts simplicially with finite stabilizers and compact quotient. This is well-defined up
to the following equivalence relation: functions $f,g\colon \mathbb N\to\mathbb N$ are equivalent if $f\preceq g$ and $g\preceq f$,
where $f\preceq g$ means that there is a
constant $a>1$ such that $f(n) \le a\, g(an+a) + an +a$. The Dehn function can be interpreted as a measure of the
complexity of the word problem for $G$ --- see \cite{mrb-bfs}.
\begin{lemma}\label{loops} If $A$ and $B$ are 1-connected simplicial complexes, $F\colon A \to B$ is a simplicial map, and $\ell$ is a loop in the 1-skeleton of $A$, then ${\rm Area}_A(\ell)\geq {\rm Area}_B(F\circ\ell)$.
\end{lemma}
\begin{proof} If $L\colon D\to A$ is a simplicial filling of $\ell$, then
$F\circ L$ is a simplicial filling of $F\circ \ell$, with the same
number of triangles in the domain $D$.
\end{proof}
\begin{corollary}\label{c:factor} Let $A, B$ and $C$ be 1-connected
simplicial complexes with simplicial maps $A\to B\to C$.
Let $\ell_n$ be a sequence of simplicial loops in $A$ whose length is bounded above by a linear function of $n$, let $\overline \ell_n$ be the image loops in $C$ and let $\alpha(n) = {\rm Area}_C(\overline \ell_n)$. Then the Dehn function of $B$ satisfies
$\delta_B(n)\succeq\alpha(n)$.
\end{corollary}
\begin{proof} This follows from Lemma~\ref{loops} together with the observation that a simplicial map
does not increase the length of any loop in the 1-skeleton.
\end{proof}
\subsection*{2. Simplicial complexes associated to $Out(F_n)$ and $Aut(F_n)$.}
Let $K_n$ denote the spine of Outer space, as defined in \cite{CV}, and $L_n$ the spine of Auter space, as defined in \cite{HV}.
These are contractible simplicial complexes with cocompact proper actions by $Out(F_n)$ and $Aut(F_n)$ respectively, so we may use them to compute the Dehn functions for these groups.
Recall from \cite{CV} that a {\em marked graph} is a finite metric graph $\Gamma$ together with a homotopy equivalence $g\colon R_n \to \Gamma$, where $R_n$ is a fixed graph with one vertex and $n$ loops. A vertex of
$K_n$ can be represented either as a marked graph $(g,\Gamma)$ with all vertices of valence at least three, or as a free minimal action of $F_n$ on a simplicial tree (namely the universal cover of $\Gamma$). A vertex of $L_n$ has the same descriptions except that there is a chosen basepoint in the marked graph (respected by the marking) or in the simplicial tree. Note that we allow marked graphs to have separating edges. Both $K_n$ and $L_n$ are flag complexes, so to define them it suffices to describe what it means for vertices to be adjacent. In the marked-graph description, vertices of $K_n$ (or $L_n$) are adjacent if one can be obtained from the other by a forest collapse (i.e. collapsing each component of a forest to a point).
\subsection*{3. Three Natural Maps}
There is a {\em{forgetful map}} $\phi_n\colon L_n\to K_n$ which simply forgets the basepoint; this map is simplicial.
Let $m<n$. We fix an ordered basis for $F_n$, identify $F_m$ with the subgroup generated
by the first $m$ elements of the basis, and identify $Aut(F_m)$ with the
subgroup of $Aut(F_n)$ that fixes the last $n-m$ basis elements. We consider
two maps associated to this choice of basis.
First, there is
an equivariant {\em{augmentation map}} $\iota\colon L_m\to L_n$ which attaches a bouquet of $n-m$ circles to the basepoint of each marked graph and marks them with the last $n-m$ basis elements of $F_n$. This map is simplicial, since a forest collapse has no effect on the bouquet of circles at the basepoint.
Secondly, there is a {\em{restriction map}} $\rho\colon K_n\to K_m$ which is easiest to describe using trees. A point in $K_n$ is given by a minimal free simplicial action of $F_n$ on a tree $T$ with no vertices of valence 2. We define $\rho(T)$ to be the minimal invariant
subtree for $F_m<F_n$; more explicitly, $\rho(T)$ is the union of the axes in $T$ of all elements of $F_m$. (Vertices of $T$ that have valence 2 in $\rho(T)$
are no longer considered to be vertices.)
One can also describe $\rho$ in terms of marked graphs. The chosen embedding $F_m<F_n$ corresponds to choosing an $m$-petal subrose $R_m\subset R_n$.
A vertex in $K_n$ is given by a graph $\Gamma$ marked with a homotopy equivalence
$g\colon R_n\to \Gamma$, and the restriction of $g$ to $R_m$ lifts to a homotopy
equivalence $\widehat g\colon R_m\to \widehat \Gamma$, where $\widehat \Gamma$ is
the covering space corresponding to $g_*(F_m)$. There is a canonical retraction $r$ of $\widehat \Gamma$ onto its {\em compact core}, i.e.~the smallest connected subgraph containing all nontrivial embedded loops in $\widehat \Gamma$.
Let $\widehat \Gamma_0$ be the graph obtained by erasing all vertices
of valence 2 from the compact core and define $\rho(g,\Gamma)=(r\circ \widehat g, \widehat \Gamma_0)$.
\begin{lemma} For $m<n$, the restriction map $\rho\colon K_n\to K_m$ is simplicial.
\end{lemma}
\begin{proof} Any forest collapse in $\Gamma$ is covered by
a forest collapse in $\widehat \Gamma$ that preserves the compact core, so $\rho$ preserves adjacency.
\end{proof}
\begin{lemma}\label{maps} For $m<n$, the following diagram of simplicial maps commutes:
$$
\begin{matrix}
L_m&\buildrel{\iota}\over\to &L_{n}\\
\phi_m\downarrow&&\downarrow\phi_n\\
K_m&\buildrel{\rho}\over\leftarrow & K_{n}
\end{matrix}
$$
\end{lemma}
\begin{proof}
Given a marked graph with basepoint $(g,\Gamma;v)\in L_n$, the marked graph
$\iota(g,\Gamma;v)$ is obtained by attaching $n-m$ loops at $v$ labelled
by the elements $a_{m+1},\dots,a_n$
of our fixed basis for $F_n$. Then $(g_n,\Gamma_n):=\phi_n\circ\iota(g,\Gamma;v)$ is obtained by forgetting the basepoint, and the cover of $(g_n,\Gamma_n)$ corresponding to
$F_m<F_n$ is obtained from a copy of $(g,\Gamma)$ (with its labels) by attaching
$2(n-m)$ trees. (These trees are obtained from the Cayley graph of $F_n$
as follows: one cuts at an edge labelled $a_i^\varepsilon$, with $i\in\{m+1,\dots,n\}$ and $\varepsilon=\pm 1$, takes one component of the result, and then attaches the hanging
edge
to the basepoint $v$ of $\Gamma$.) The effect of $\rho$ is to
delete these trees.
\end{proof}
\subsection*{4. Proof of the Theorem} In the light of the Corollary
and Lemma \ref{maps},
it suffices to exhibit a sequence of loops $\ell_i$ in the 1-skeleton of $L_3$
whose lengths are bounded by a linear function of $i$ and whose filling area
when projected to $K_3$ grows exponentially as a function of $i$. Such a
sequence of loops is essentially described in \cite{BV}. What we actually
described there were words in the generators of $Aut(F_3)$ rather than
loops in $L_3$, but standard quasi-isometric arguments show that this is equivalent. More explicitly, the words we considered were $w_i=T^iAT^{-i}BT^iA^{-1}T^{-i}B^{-1}$
where
\[
T\colon\begin{cases}
a_1\mapsto a_1^2a_2\cr
a_2\mapsto a_1a_2\cr
a_3\mapsto a_3
\end{cases}
A\colon\begin{cases}
a_1\mapsto a_1\cr
a_2\mapsto a_2\cr
a_3\mapsto a_1a_3
\end{cases}
B\colon\begin{cases}
a_1\mapsto a_1\cr
a_2\mapsto a_2\cr
a_3\mapsto a_3a_2
\end{cases}
\]
To interpret these as loops in the 1-skeleton of $L_3$ (and $K_3$) we note that $A=\lambda_{31}$ and $B=\rho_{32}$ are elementary transvections and $T$ is the composition of two elementary transvections: $T=\lambda_{21}\circ \rho_{12}$. Thus $w_i$ is the product of $8i+4$ elementary transvections. There is a
(connected) subcomplex of the 1-skeleton of $L_3$ spanned by roses
(graphs with a single vertex) and Nielsen graphs (which have $(n-2)$ loops at the base vertex
and a further trivalent vertex). We say roses are adjacent if they have distance $2$ in this graph.
Let $I\in L_3$ be the rose marked by the identity map $R_3\to R_3$. Each elementary transvection $\tau$ moves $I$ to an adjacent rose $\tau I$, which is connected to $I$ by a Nielsen graph $N_\tau$. A composition $\tau_1\ldots\tau_k$ of elementary transvections gives a path through adjacent roses $I, \tau_1I, \tau_1\tau_2I, \ldots,\tau_1\tau_2\ldots \tau_kI$; the Nielsen graph connecting $\sigma I$ to $\sigma\tau I$ is $\sigma N_\tau$. Thus the word $w_i$ corresponds to a loop $\ell_i$ of length $16i+8$ in the 1-skeleton of $L_3$.
Theorem A of \cite{BV} provides an exponential lower bound on the filling area of $\phi\circ \ell_i$
in $K_3$. \qed
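As a small illustrative aside (not needed for the argument above), the transvections $T$, $A$ and $B$ act on $H_1(F_3)\cong\mathbb{Z}^3$ by the integer matrices obtained by recording the exponent vector of the image of each basis element as a column; a quick computation confirms that each of these matrices has determinant $1$.
\begin{verbatim}
# Abelianizations of T, A, B (illustration only).
import numpy as np

M_T = np.array([[2, 1, 0],   # a1 -> a1^2 a2,  a2 -> a1 a2,  a3 -> a3
                [1, 1, 0],
                [0, 0, 1]])
M_A = np.array([[1, 0, 1],   # a3 -> a1 a3
                [0, 1, 0],
                [0, 0, 1]])
M_B = np.array([[1, 0, 0],   # a3 -> a3 a2
                [0, 1, 1],
                [0, 0, 1]])

for name, M in (("T", M_T), ("A", M_A), ("B", M_B)):
    print(name, "det =", round(np.linalg.det(M)))
\end{verbatim}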
\smallskip
The square of maps in Lemma \ref{maps} ought to have many uses beyond the one in this note (cf.~\cite{HM}). We mention just one, for illustrative purposes. This is a special case of the fact that every infinite cyclic subgroup of $Out(F_n)$ is quasi-isometrically embedded \cite{alibegovic}.
\begin{proposition}
The cyclic subgroup of $Out(F_n)$ generated by any Nielsen transformation (elementary transvection) is quasi-isometrically embedded.
\end{proposition}
\begin{proof} Each Nielsen transformation is in the image of
the map $\Phi\colon Aut(F_2)\to Aut(F_n)\to Out(F_n)$ given by the inclusion of a
free factor $F_2<F_n$. Thus it suffices to prove that if a
cyclic subgroup $C=\<c\><Aut(F_2)$ has infinite image in $Out(F_2)$, then
$t\mapsto \Phi(c^t)$ is a quasi-geodesic. This is
equivalent to the assertion that some (hence any) $C$-orbit in $K_n$ is quasi-isometrically embedded,
where $C$ acts on $K_n$ as $\Phi(C)$ and $K_n$ is given the piecewise Euclidean metric where all edges
have length $1$.
$K_2$ is a tree and $C$ acts on $K_2$ as a hyperbolic isometry, so the $C$-orbits in
$K_2$ are quasi-isometrically embedded. For each $x\in L_2$, the $C$-orbit of $\phi_2(x)$
is the image of the quasi-geodesic
$t\mapsto c^t.\phi_2(x) =\phi_2(c^t.x)$. We factor $\phi_2$
as a composition of $C$-equivariant simplicial maps $L_2\overset{\iota}\to L_n\overset{\phi_n}\to
K_n\overset{\rho}\to K_2$, as in Lemma \ref{maps},
to deduce that the $C$-orbit
of $\phi_n\iota(x)$ in $K_n$ is
quasi-isometrically embedded.
\end{proof}
A slight variation on the above argument shows that if one lifts a free group
of finite index $\Lambda<Out(F_2)$ to $Aut(F_2)$ and then maps it to
$Out(F_n)$ by choosing a free factor $F_2<F_n$, then the inclusion
$\Lambda\hookrightarrow Out(F_n)$ will be a quasi-isometric embedding.
| {
"timestamp": "2011-11-29T02:01:01",
"yymm": "1011",
"arxiv_id": "1011.1506",
"language": "en",
"url": "https://arxiv.org/abs/1011.1506",
"abstract": "For n > 2, the Dehn functions of Aut(F_n) and Out(F_n) are exponential. Hatcher and Vogtmann proved that they are at most exponential, and the complementary lower bound in the case n=3 was established by Bridson and Vogtmann. Handel and Mosher completed the proof by reducing the lower bound for n>4 to the case n=3. In this note we give a shorter, more direct proof of this last reduction.",
"subjects": "Group Theory (math.GR); Geometric Topology (math.GT)",
"title": "The Dehn functions of Out(F_n) and Aut(F_n)",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966914019704466,
"lm_q2_score": 0.734119526900183,
"lm_q1q2_score": 0.7098304626985968
} |
https://arxiv.org/abs/0906.0809 | Minimizing the footprint of your laptop (on your bedside table) | I often work on my laptop in bed. When needed, I park the laptop on the bedside table, where the computer has to share the small available space with a lamp, books, notes, and heaven knows what else. It often gets quite squeezy.Being regularly faced with this tricky situation, it finally occurred to me to determine once and for all how to place the laptop on the bedside table so that its ``footprint'' - the area in which it touches the bedside table - is minimal. In this note I give the solution of this problem, using some very pretty and elementary mathematics. | \section{Introduction} I often work on my laptop in bed. When needed, I park the laptop on the bedside table, where the computer has to share the small available space with a lamp, books, notes, and heaven knows what else. It often gets quite squeezy.
Being regularly faced with this tricky situation, it finally occurred to me to determine once and for all how to place the laptop on the bedside table so that its ``footprint'' - the area in which it touches the bedside table - is minimal. In this note I give the solution of this problem, using some very pretty elementary mathematics.
\section{Mathematical laptops and bedside tables}
We assume that both the laptop and the bedside table are rectangular, and we will refer to these rectangles as the {\em laptop} and the {\em table}. We further assume that the center of gravity of the laptop is its midpoint. Finally, without loss of generality we may assume that the laptop is 1 unit wide.\footnote{That is, the shorter side of the laptop is 1 unit in length. And, if the laptop is square then it is a unit square. Yes, yes, only a mathematician would consider the possibility of a square laptop, but bear with me. As will become clear, considering square laptops provides an elegant key to our problem.}
We are considering all placements of the laptop such that it will not topple off the table; these are exactly the placements for which the midpoint of the laptop is also a point of the table. We are then interested in determining for which of these placements the {\em footprint} of the laptop is of minimal area; here, the footprint is the common region of the laptop and the table.
In all reasonable circumstances, the optimal answer to this problem will always resemble the arrangement in Figure~\ref{laptop1}.\footnote{``Reasonable circumstances'' means in reference to laptops and tables of relative dimensions close to those of the real items. In the nitty gritty of this note we'll specify the exact scope of our solution, and also what happens in some unrealistic but nevertheless mathematically interesting scenarios.} {\em This optimal placement is characterized by the fact that the midpoint of the laptop coincides with one of the corners of the table and the footprint is an isosceles right triangle.}
\begin{figure}[h]
\centerline{\includegraphics{laptop1.jpg}}
\caption[f1]{No (stable) placement of your laptop on a bedside table has a smaller footprint.\label{laptop1}}
\end{figure}
The proof is divided into two parts. First, we consider those placements for which the midpoint of the laptop coincides with one of the corners of the table: we prove that among such placements our special placement has smallest footprint area. Then, we extend our argument, proving that any placement for which the laptop midpoint is not a table corner must have a greater footprint area.
\section{Balancing on a corner}
We begin by considering a right-angled cross through the center of a square, as illustrated in the left diagram in Figure~\ref{square}. Whatever its orientation, the cross cuts the square into four congruent pieces. This shows that if we place a unit square laptop on the corner of a sufficiently large table, its footprint will always have area 1/4, no matter how the square is oriented; see the diagram on the right.
\pagebreak
\begin{figure}[h]
\centerline{\includegraphics{square.jpg}}
\caption[f1]{A square laptop with midpoint at a corner will have a footprint area of 1/4. \label{square}}
\end{figure}
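This rotation invariance is easy to confirm numerically. The following minimal Monte Carlo sketch (illustrative only) models a sufficiently large table as the quarter-plane $\{x\le 0,\ y\le 0\}$ with its corner at the laptop's midpoint, and estimates the footprint area of a unit square laptop for several orientations; the estimate stays at $1/4$ for every angle.
\begin{verbatim}
# Illustrative Monte Carlo estimate of the footprint area (unit square laptop).
import numpy as np

rng = np.random.default_rng(1)

def footprint_area(theta, samples=200_000):
    # Uniform points in the axis-aligned unit square centred at the origin,
    pts = rng.uniform(-0.5, 0.5, size=(samples, 2))
    # rotated by theta, so they are uniform in the rotated laptop.
    c, s = np.cos(theta), np.sin(theta)
    x = c * pts[:, 0] - s * pts[:, 1]
    y = s * pts[:, 0] + c * pts[:, 1]
    # The table occupies the quadrant {x <= 0, y <= 0}; the laptop has area 1.
    return np.mean((x <= 0) & (y <= 0))

for theta in np.linspace(0.0, np.pi / 2, 5):
    print(f"theta = {theta:4.2f} rad   footprint area ~ {footprint_area(theta):.3f}")
\end{verbatim}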
Next, consider a non-square laptop with its midpoint on the corner of a large table, as in Figure~\ref{overlap}. We regard the footprint as consisting of a blue part and a red part, as shown. Our previous argument shows that, as we rotate the laptop, the area of the blue part stays constant. On the other hand, the red part only vanishes in the special position shown on the right. We conclude that this symmetric placement of the laptop uniquely provides the footprint of least area.
\begin{figure}[h]
\centerline{\includegraphics{overlap.jpg}}
\caption[f1]{The blue regions have the same area, and so the right footprint is smaller. \label{overlap}}
\end{figure}
These arguments required that the table be sufficiently large. How large? The arguments work as long as the rotated square never
pokes off another side of the table. So, since the short side of the laptop has length 1, we only require that
the shortest side of the table be at least of length $1/\sqrt 2$; see Figure~\ref{smallsquare}.
\pagebreak
\begin{figure}[h]
\centerline{\includegraphics{small_square.jpg}}
\caption[f1]{Our corner argument works for relatively small tables. \label{smallsquare}}
\end{figure}
\section{In the corner is best}
We now want to convince ourselves that the minimal footprint must occur for one of these special placements over a table corner.
We start with a table that is at least as wide as the diagonal of the square inscribed in our laptop; see the left diagram in Figure~\ref{corner_best}. Place the laptop anywhere on the table. Now consider a cross in the middle of the laptop square, and
with arms parallel to the table sides.
\begin{figure}[h]
\centerline{\includegraphics{corner_best.jpg}}
\caption[f1]{If the table contains the square highlighted on the left, then at least one of the quarters of the square on the right is contained in the footprint of the laptop. \label{corner_best}}
\end{figure}
As we saw above, the cross cuts the square into four congruent pieces. Furthermore, wherever the laptop is placed
and however it is oriented, at least one of these congruent pieces will be part of the footprint: this is a consequence
of our assumption on the table size. Finally,
unless the midpoint is over a corner of the table, this quarter-square region clearly cannot be the full footprint.
Putting everything together, we can therefore guarantee that our symmetric corner arrangement is optimal if the table is at least as large as the square table in Figure~\ref{corner_best}. This square table has side length $\sqrt 2$.
By refining the arguments above, we now want to show that our solution holds for any table that is at least 1 unit wide. Since our laptop is also 1 unit wide, this probably takes care of most real life laptop balancing problems.
Begin with a circle inscribed in the laptop square, and with the red and the green regions within, as in Figure~\ref{red_square}. The
regions are mirror images, and are arranged to each have area 1/4. Note that if the laptop is rotated around its midpoint,
either fixed region remains within the laptop.
\begin{figure}[h]
\centerline{\includegraphics{new_block1.jpg}}
\caption[f1]{Both the red and the green regions have the critical area of 1/4. \label{red_square}}
\end{figure}
Now place the laptop on the table with some orientation. Suppose that the laptop footprint contains a red or a green region,
or such a region rotated by 90, 180, or 270 degrees; see Figure~\ref{thin1}.
Then it is immediate that the footprint area for the laptop in that position is greater than~1/4.
\begin{figure}[h]
\centerline{\includegraphics{smaller3.jpg}}
\caption[f1]{The footprint area is at least 1/4, for the laptop midpoint in either the blue or brown region. \label{thin1}}
\end{figure}
In fact the footprint may not contain such a region. However, this will be the case unless the laptop midpoint is close to a table corner,
in one of the little blue squares pictured in Figure~\ref{thin1}.
On the other hand, if the midpoint is in a blue square
then the footprint will contain one of the original quarter-squares of area 1/4; see the diagram on the right side of Figure~\ref{thin1}.
At this point we summarize what we have discovered so far.
\begin{theorem} Consider a laptop that is 1 unit wide and a table that is at least 1 unit wide. If the laptop is not a square, then the placement of the laptop on the table that gives the smallest footprint is shown in Figure~\ref{laptop1}. If the laptop is a square, then the minimal area footprints are for placements for which the midpoint of the laptop coincides with a corner of the table.
\end{theorem}
\section{Odds and ends}
What if you are the unlucky owner of a really small bedside table? First of all, it is usually not
difficult to determine the best placement for a specific laptop/table combination. To get a feel for this, and for
what to expect in general, consider Figure~\ref{thin3}, where we balance a laptop on square tables of different sizes.
\begin{figure}[h]
\centerline{\includegraphics{square1.jpg}}
\caption[f1]{Balancing a laptop on the corners of squares of different sizes. \label{thin3}}
\end{figure}
Here are some simple observations, applicable both to square and non-square tables:
\begin{enumerate}
\item If your table is really tiny, the footprint will always be the whole table, no matter where the laptop is placed.
This will be the case if the table diagonal is no longer than half the width of the laptop.
\item Suppose that the (square or non-square) table diagonal is just a little bit longer than half the width of the laptop, ensuring that if part of the table sticks out from underneath the laptop, then this part is a triangle cut off one of the corners of the table.
In this case, if a table corner is sticking out and if the laptop midpoint is not at the opposite corner, then it is easy to see that simply translating the laptop to this opposite corner will lower the footprint area. Consequently, the minimal footprint will correspond to one of these special placements. For a square laptop, the minimal footprint occurs when the protruding triangle is isosceles. This is not terribly surprising. What may be surprising is that this is not at all obvious to prove; a descent into the land of nitty gritty seems unavoidable.
For non-square tables, the optimal placement will not necessarily correspond to an isosceles table corner sticking out. To see this, consider a very thin table. Then the only way a corner can stick out is if the table diagonal is almost perpendicular to the long side of the laptop. This precludes an isosceles triangle part of the table sticking out; see Figure~\ref{thin4}.
\begin{figure}[h]
\centerline{\includegraphics{diagonal.jpg}}
\caption[f1]{Balancing the laptop on a very thin table with a tiny piece of the table showing. \label{thin4}}
\end{figure}
\item From here on things get even more complicated: all the problems that we mentioned for the last scenario plus many-sided, odd-shaped footprints, no easy way to see why the best placement should be among the placements for which the midpoint of the laptop is one of the corners, etc.
\item From here on our theorem applies.
\end{enumerate}
We end this note with some challenges for the interested reader (in likely order of difficulty):
\begin{itemize}
\item Extend our theorem to include all tables that are at least $1/\sqrt 2$ wide. (The table shown in Figure~\ref{smallsquare} has these dimensions.)
\item Turn Scenario 2 discussed above into a theorem (are there pretty proofs?).
\item Prove the Ultimate Laptop Balancing Theorem, that includes everything that your lazy author did and did not cover in this note: arbitrary location of the center of gravity, starshaped laptops and jellyfish-shaped tables, higher-dimensional tables and laptops, etc.
\end{itemize}
Have Fun, and Good Luck!
\pagebreak
\noindent Burkard Polster\\
School of Mathematical Sciences\\
Monash University, Victoria 3800\\
Australia\\
e-mail: Burkard.Polster@sci.monash.edu.au\\
web: www.qedcat.com
\end{document}
| {
"timestamp": "2009-06-04T03:50:18",
"yymm": "0906",
"arxiv_id": "0906.0809",
"language": "en",
"url": "https://arxiv.org/abs/0906.0809",
"abstract": "I often work on my laptop in bed. When needed, I park the laptop on the bedside table, where the computer has to share the small available space with a lamp, books, notes, and heaven knows what else. It often gets quite squeezy.Being regularly faced with this tricky situation, it finally occurred to me to determine once and for all how to place the laptop on the bedside table so that its ``footprint'' - the area in which it touches the bedside table - is minimal. In this note I give the solution of this problem, using some very pretty and elementary mathematics.",
"subjects": "History and Overview (math.HO)",
"title": "Minimizing the footprint of your laptop (on your bedside table)",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9669140235181257,
"lm_q2_score": 0.734119521083126,
"lm_q1q2_score": 0.7098304598736849
} |
https://arxiv.org/abs/1805.10315 | Modular class of even symplectic manifolds | We provide an intrinsic description of the notion of modular class for an even symplectic manifold and study its properties in this coordinate free setting. | \section{Introduction}
The definition of the modular vector field of a Poisson manifold $%
(M;\{\_,\_\})$, is as follows: given a volume element $\eta$ on $M$, the
modular vector field $Z^{M}$ maps each function $f\in C^{\infty}(M)$ into
the divergence with respect to $\eta$ of the hamiltonian vector field
associated to $f$, i.e,
\begin{equation}
Z^{M}(f):=\operatorname{div}^{\eta}(X_{f})=\operatorname{div}^{\eta }(\{\operatorname{d}f,\_\}).
\label{eq0}
\end{equation}
What is called the modular class of $(M;\{\_,\_\})$, is its class in the
Poisson-Lichnerowicz cohomology (\cite{Lic 77}). The concept of modular
vector field was introduced by Koszul in \cite{Kos 84}, in his study of the
cohomology of a Poisson manifold, and Weinstein (in \cite{Wei 97}) has used
it as a tool to understand the modular automorphisms of von Neumann
algebras, observing that these share with their semiclassical limits
(Poisson algebras) the property of having modular automorphisms groups. The
concept has also appeared in geometry in the classification of quadratic
Poisson structures (see \cite{Duf-Har 91}). The modular vector field and the
related notion of volume element, has also been used intensively by O. M.
Khudaverdian and others in the study of graded Poincar\'{e}-Cartan
invariants, the geometry of Batalin-Vilkovisky formalism, etc (see \cite{Khu
81}, \cite{Khu 91}, \cite{Khu 98}). So we feel that an intrinsic,
geometrical study of these structures deserves attention.
The notion of modular class only needs a Poisson structure to be defined,
but we will center our attention on the nondegenerate case.
In the graded setting, when a graded Poisson manifold $((M,\wedge \mathcal{E}%
),[\![\_,\_]\!])$ is given (see Sections $2$ and $3$ for the definitions), a
fundamental distinction appears: even though an appropriate definition of
divergence can be given, the analog of the mapping (\ref{eq0}) does not give
a derivation on ${\wedge }\mathcal{E}$ when the Poisson bracket is odd with
respect to the $\mathbb{Z}$-grading, but a generator for the Poisson
bracket, in the sense of Gerstenhaber algebras (see \cite{Khu 91}, \cite
{Kos-SCh 99} or \cite{Kos-Mon 01}). On the other hand, when the bracket is
even with respect to the $\mathbb{Z}$-grading the same mapping does give a
derivation on ${\wedge }\mathcal{E}$. So it is in this case that it makes
sense to develop the notions of graded modular vector field and modular
class.
In the nongraded case, it is a well known fact that any symplectic manifold $%
(M,\omega )$ is unimodular, i.e., it gives the zero class. Now suppose $M$ is
the base manifold of a given graded Poisson manifold $(M,\wedge \mathcal{E})$
whose graded Poisson bracket $[\![\_,\_]\!]$ is nondegenerate and extends
the Poisson bracket in $M$ defined by $\omega $. It is also known that this
bracket has an associated volume form which, in local coordinates, is
expressed by the Berezinian of the bracket matrix (see \cite{Ber 87}); using
this volume form, it can be seen that the modular vector field is zero, so $%
(M,\wedge \mathcal{E})$ is unimodular.
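The classical fact is easy to check in canonical coordinates: on $\mathbb{R}^{2n}$ with the standard symplectic form $\sum_i dq_i\wedge dp_i$ and the Liouville volume, the divergence of a Hamiltonian vector field $X_f=\sum_i\bigl(\partial_{p_i}f\,\partial_{q_i}-\partial_{q_i}f\,\partial_{p_i}\bigr)$ vanishes because the mixed partial derivatives cancel, which is exactly the unimodularity of $(M,\omega)$. The short symbolic sketch below (illustrative only, for a concrete test Hamiltonian with $n=2$) verifies this cancellation.
\begin{verbatim}
# Symbolic check (illustration only): div of a Hamiltonian vector field is 0.
import sympy as sp

q1, q2, p1, p2 = sp.symbols("q1 q2 p1 p2")
f = q1**3 * p1 + sp.sin(q1 * p2) + sp.exp(q2) * p2**2   # test Hamiltonian

# Components of X_f in coordinates (q1, q2, p1, p2).
X = [sp.diff(f, p1), sp.diff(f, p2), -sp.diff(f, q1), -sp.diff(f, q2)]

# Divergence with respect to the Liouville volume dq1^dq2^dp1^dp2.
div = (sp.diff(X[0], q1) + sp.diff(X[1], q2)
       + sp.diff(X[2], p1) + sp.diff(X[3], p2))
print(sp.simplify(div))   # prints 0
\end{verbatim}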
Our purpose in this paper is to give a geometrical, coordinate-free setting
for these results. We define the notion of symplectic Berezinian volume
element in an intrinsic way, and study how it changes with the section of
the Berezinian sheaf chosen, along with its relation to the canonical
Berezinian. As an application, we give a graded formulation of the
continuity equation of fluid mechanics.
\section{Graded forms on $(M,\Gamma(\Lambda E))$}
For the generalities on graded manifolds, see \cite{Kos 77}, \cite{Lei 80}
or \cite{Ber 87}; our approach here follows \cite{Mon-San 97}. Let $M$ be an
$m$-dimensional smooth manifold, and let $C_{M}^{\infty }$ be the sheaf of
smooth functions on $M$. Let $E\rightarrow M$ be a vector bundle of rank $n$%
, and let $\mathcal{E}=\Gamma (E)$ be its sheaf of smooth sections. Let $%
\wedge \mathcal{E}=\Gamma (\Lambda E)$ be the sheaf of smooth sections of
the exterior algebra bundle $\wedge E\rightarrow M$.
We refer to \cite{Kos 77} or to \cite{Mon-San 97} for definitions of graded
vector field, graded differential form, insertion operator, $\iota(D)$ ($D$
being a graded vector field), exterior differential, $d^{G}$, and Lie
operator $\mathcal{L}_{D}^{G}$.
Being a graded homomorphism of graded modules, a graded differential form
has a degree. Thus, we can define a $\mathbb{Z}\times\mathbb{Z}-$bigrading
on the module of graded differential forms and we will say that a graded
differential form $\lambda$ has bidegree $(p,k)\in\mathbb{Z}\times\mathbb{Z}$
if
\begin{equation*}
\lambda:\mathrm{\operatorname{Der}}\wedge\mathcal{E}\times.\overset{p)}{.}.\times%
\mathrm{\operatorname{Der}}\wedge\mathcal{E}\longrightarrow \wedge\mathcal{E}
\end{equation*}
and if, for all $D_{1},...,D_{p}\in$ $\mathrm{\operatorname{Der}}\wedge\mathcal{E}
$,
\begin{equation*}
\left| \left\langle D_{1},...,D_{p};\lambda\right\rangle \right|
=\sum\limits_{i=1}^{p}\left| D_{i}\right| +k\text{.}
\end{equation*}
Using this bigrading, any graded $p-$differential form $\lambda$ can be
decomposed as a sum $\lambda=\lambda_{(0)}+...+\lambda_{(n)}$, where $%
\lambda_{(i)}$ is a homogeneous graded form of bidegree $(p,i)$.
A fundamental result is the following corollary to a theorem by Kostant (4.7
in \cite{Kos 77}).
\begin{proposition}
\label{cor-Ko} Every $d^{G}$-closed graded form of bidegree $(p,k)$ with $%
k>0 $ is exact.
\end{proposition}
Use will be made of the fact that the space of graded vector fields, $%
\mathrm{\operatorname{Der}}\wedge\mathcal{E}$, is a locally-free sheaf of ${\wedge%
}\mathcal{E}$-modules \cite{Kos 77}. See \cite{Mon 92} and \cite{Rot 90}
for an analysis of its structure: Let $\mathcal{E}^{\ast}$ be the sheaf of
sections of the dual bundle $E^{\ast}\rightarrow M$. There is a monomorphism
\begin{equation*}
i\colon\Gamma({\wedge}\mathcal{E})\otimes\mathcal{E}^{\ast}\hookrightarrow
\mathrm{\operatorname{Der}}\wedge\mathcal{E}
\end{equation*}
On the other hand, let $\mathcal{X}(M)=\operatorname{Der}C_{M}^{\infty}$ be the
sheaf of smooth vector fields on $M$. A connection $\nabla$ on ${\wedge }%
\mathcal{E}$ gives, by definition, a morphism
\begin{align*}
\Gamma({\wedge}\mathcal{E})\otimes\mathcal{X}(M) & \rightarrow \mathrm{%
\operatorname{Der}}\wedge\mathcal{E} \\
\alpha\otimes X & \mapsto\alpha\nabla_{X}.
\end{align*}
\section{Divergence operators and modular graded vector fields}
By definition, a \textit{divergence operator} on ${\wedge}\mathcal{E}$ is an
even linear map, $\operatorname{div}:\mathrm{\operatorname{Der}}\wedge\mathcal{E}%
\rightarrow{\wedge}\mathcal{E}$, such that
\begin{equation}
\operatorname{div}(sD)=s\ \operatorname{div}(D)+(-1)^{|s||D|}D(s)\ , \label{gooddiv}
\end{equation}
for any $D\in\mathrm{\operatorname{Der}}\wedge\mathcal{E}$ and any $s\in{\wedge}%
\mathcal{E}$.
The \textit{modular vector field} $Z^{M}$, associated to a divergence
operator $\operatorname{div}$ and a graded Poisson bracket $[\![\_,\_]\!]$ on $%
\wedge\mathcal{E}$, is the even graded vector field defined as
\begin{equation}
s\in{\wedge}\mathcal{E}\mapsto D_{s}=[\![s,\_]\!]\in \mathrm{\operatorname{Der}}%
\wedge\mathcal{E}\mapsto\operatorname{div}(D_{s})\in\wedge\mathcal{E} \label{eq3.3}
\end{equation}
It is easy to check that when the even Poisson bracket is the Poisson
bracket associated to an even symplectic form, $\Theta$, then $Z^{M}$ is a
locally hamiltonian graded vector field. From now on, we shall work
exclusively in this case, this is, with an even symplectic form on $(M,\wedge%
\mathcal{E})$ and its associated even Poisson bracket $[\![\ ,\
]\!]_{\Theta} $.
\begin{lemma}
\label{ocho} Let $D = \sum\limits_{i\in\mathbb{N}}$ $D_{2i}\in \mathrm{%
\operatorname{Der}}\wedge\mathcal{E}$ be a locally hamiltonian even derivation.
Consider the decomposition (according to the $\mathbb{Z}-$degree) $%
\Theta=\Theta_{(0)}+\Theta_{(\geq2)}$. Then, $D$ is a graded hamiltonian
vector field for $\Theta$ if and only if $\iota_{D_{0}}\Theta_{(0)}$ is an
exact graded form.
\end{lemma}
\begin{proof}
It is a straightforward computation thanks to Prop. \ref{cor-Ko}.
\end{proof}
This means that the modular class just depends on the zero degree term of
the modular vector field.
\section{The symplectic Berezinian volume element and the modular class}
Let $\Theta$ be an even symplectic form on a graded manifold $(M,\wedge
\mathcal{E})$ of dimension $(2n,m)$. We know that there are three objects
associated to the even symplectic form (see \cite{Rot 90}): a usual
symplectic form, $\omega$, on the base manifold $M$; a nondegenerate
symmetric bilinear form, $g$, on $E^{\ast}$; and a connection, $\nabla$, on $%
E $, compatible with $g$, i.e., $\nabla g = 0$.
Let $\omega^{n}$ be the symplectic volume element on $M$, and let $\mu_{g}$ be
the metric volume element on $E$.
Given $s\in\wedge\mathcal{E}$ of compact support, we can define
\begin{equation*}
\int_{\xi}s:=\int_{M}(i_{\mu_{g}}s)\omega^{n},
\end{equation*}
where $i_{\mu_{g}}s$ denotes the total contraction of $\mu_{g}\in
\Gamma(\Lambda^{m}E^{\ast})$ with $s$.
Such a definition includes, in an implicit way, the definition of a
Berezinian volume element, $\xi $. (See \cite{Lei 80}, \cite{HR-MM 85} or
\cite{Kos-Mon 01})
We are going to define a divergence operator associated to the even
symplectic form through a Berezinian volume element, ${\xi}$. Given a
derivation $D\in\mathrm{\operatorname{Der}}\wedge\mathcal{E}$, there is a unique
section, denoted by $\operatorname{div}^{\xi}(D)\in\wedge\mathcal{E}$ such that
\begin{equation*}
-\int_{\xi}D(s)=\int_{\xi}\operatorname{div}^{\xi}(D)\wedge s,
\end{equation*}
for all $s\in\wedge\mathcal{E}$ of compact support.
This is, indeed, a divergence operator.
\begin{proposition}
\begin{equation*}
\operatorname{div}\nolimits^{\xi }(s\wedge D)=s\wedge {\operatorname{div}^{\xi }}%
(D)+(-1)^{|D||s|}D(s),
\end{equation*}
\end{proposition}
\begin{proof}
Just a matter of computation:
\begin{align*}
\int_{\xi }\operatorname{div}\nolimits^{\xi }(s\wedge D)\wedge \overline{s}&
=-\int_{\xi }s\wedge D(\overline{s}) \\
& =-\int_{M}i_{\mu _{g}}(s\wedge D(\overline{s}))\omega ^{n} \\
& =-(-1)^{|D||s|}\int_{M}i_{\mu _{g}}(D(s\wedge \overline{s}))\omega
^{n}+(-1)^{|D||s|}\int_{M}i_{\mu _{g}}(D(s)\wedge \overline{s})\omega ^{n} \\
& =-(-1)^{|D||s|}\int_{\xi }D(s\wedge \overline{s})+(-1)^{|D||s|}\int_{\xi
}D(s)\wedge \overline{s} \\
& =(-1)^{|D||s|}\int_{\xi }\operatorname{div}\nolimits^{\xi }(D)\wedge s\wedge
\overline{s}+(-1)^{|D||s|}\int_{\xi }D(s)\wedge \overline{s} \\
& =\int_{\xi }(s\wedge \operatorname{div}\nolimits^{\xi
}(D)+(-1)^{|D||s|}D(s))\wedge \overline{s}.
\end{align*}
\end{proof}
Now, we would like to know what happens when we change the section of the
Berezinian sheaf; for this, we recall that the Berezinian module is a right $%
\wedge \mathcal{E}$-module of rank $1$ (see \cite{HR-MM 85}). So, given a
Berezinian volume element $\xi $, any other Berezinian volume element is of
the kind $\xi .\bar{s}$ for an invertible even element, $\bar{s}\in \wedge
\mathcal{E}$.
\begin{proposition}
If $\bar{s}$ is of compact support, then $\operatorname{div}\nolimits^{\xi \bar{s}%
}=\operatorname{div}\nolimits^{\xi }+\operatorname{d}\nolimits^{G}\log \bar{s}.$
\end{proposition}
\begin{proof}
From the definition of Berezinian,
\begin{equation*}
\int\nolimits_{\xi \bar{s}}\_=\int\nolimits_{\xi }\bar{s}\wedge \_.
\end{equation*}
Now we have, for any $s\in \wedge \mathcal{E},$%
\begin{equation}
\int\nolimits_{\xi \bar{s}}D(s)=-\int\nolimits_{\xi \bar{s}}\operatorname{div}%
\nolimits^{\xi \bar{s}}(D)\wedge s=-\int\nolimits_{\xi }\bar{s}\wedge
\operatorname{div}\nolimits^{\xi \bar{s}}(D)\wedge s. \label{eq4.1}
\end{equation}
On the other hand,
\begin{eqnarray}
\int\nolimits_{\xi \bar{s}}D(s) &=&\int\nolimits_{\xi }\bar{s}\wedge
D(s)=\int_{M}i_{\mu _{g}}(\overline{s}\wedge D(s))\omega ^{n}= \label{eq4.2}
\\
&=&\int_{M}i_{\mu _{g}}(D(\overline{s}\wedge s))\omega ^{n}-\int_{M}i_{\mu
_{g}}(D(\overline{s})\wedge s)\omega ^{n}= \notag \\
&=&\int\nolimits_{\xi }D(\bar{s}\wedge s)-\int\nolimits_{\xi }D(\bar{s}%
)\wedge s= \notag \\
&=&-\int\nolimits_{\xi }\operatorname{div}\nolimits^{\xi }(D)\wedge \bar{s}\wedge
s-\int\nolimits_{\xi }\overline{s}\wedge \overline{s}^{-1}\wedge D(\bar{s}%
)\wedge s= \notag \\
&=&-\int\nolimits_{\xi }\bar{s}\wedge \operatorname{div}\nolimits^{\xi }(D)\wedge
s-\int\nolimits_{\xi }\overline{s}\wedge \overline{s}^{-1}\wedge D(\bar{s}%
)\wedge s. \notag
\end{eqnarray}
Equating (\ref{eq4.1}) and (\ref{eq4.2}), we obtain
\begin{eqnarray*}
\operatorname{div}\nolimits^{\xi \bar{s}}(D) &=&\operatorname{div}\nolimits^{\xi }(D)+%
\overline{s}^{-1}\wedge D(\bar{s})= \\
&=&\operatorname{div}\nolimits^{\xi }(D)+D(\log \bar{s})= \\
&=&\operatorname{div}\nolimits^{\xi }(D)+\left\langle D;\operatorname{d}%
\nolimits^{G}\log \bar{s}\right\rangle ,
\end{eqnarray*}
and, from here, the statement.
\end{proof}
This enables us to give the following definition.
\begin{definition}
The \emph{modular class} of an even Poisson bracket is the class of any
modular vector field in the quotient $\mathrm{\mathrm{\operatorname{Der}}}\wedge
\mathrm{\mathcal{E}}/\mathrm{\operatorname{Ham}}(\pi )$.
\end{definition}
Let us note how the notion of symplectic Berezinian is related to that of
canonical Berezinian. Given the volume form $\omega ^{n}$ on $M$ and the
metric volume $\mu _{g}$, as they are forms of maximal degree on $M$, there
must exist a function $f$ such that $\omega ^{n}=\operatorname{e}\nolimits^{f}\mu
_{g}$. If $s_{(\max )}$ denotes the maximal degree part of the section $s$,
there must also exist a function $h$ with $s_{(\max )}=h\mu _{g}$, and we have that
the canonical Berezinian gives
\begin{equation*}
\int\nolimits_{can}s=\int\nolimits_{M}s_{(\max )};
\end{equation*}
on the other hand, the symplectic Berezinian reads
\begin{eqnarray*}
\int\nolimits_{symp}s &=&\int_{M}(i_{\mu _{g}}s)_{(\max )}\omega
^{n}=\int_{M}i_{\mu _{g}}(h\mu _{g})\operatorname{e}\nolimits^{f}\mu _{g} \\
&=&\int_{M}h\operatorname{e}\nolimits^{f}\mu _{g}=\int\nolimits_{M}\operatorname{e}%
\nolimits^{f}s_{(\max )},
\end{eqnarray*}
so $\operatorname{e}\nolimits^{f}$ is the section that passes from $%
\int\nolimits_{symp}$ to $\int\nolimits_{can}$. Then, from Proposition $4$,
the associated divergences are related through
\begin{equation*}
\operatorname{div}\nolimits^{symp}=\operatorname{div}\nolimits^{can}+\operatorname{d}%
\nolimits^{G}f.
\end{equation*}
In the case of $(M,\omega ,g)$ a K\"{a}hler manifold, in which $f$ is a
constant function, $\operatorname{div}\nolimits^{symp}=\operatorname{div}%
\nolimits^{can}$.
The basic derivations in this setting are of the type $i_{\chi }$, for $\chi
\in \Gamma (E^{\ast })$, and $\nabla _{X}$, for a vector field $X$, and
where we can use the linear connection $\nabla $ induced by the even
symplectic form. Let us compute their divergences.
\begin{lemma}
Let $\nabla$ be a connection compatible with $g$, then,
\begin{equation*}
\operatorname{div}^{\xi}(i_{\chi})=0, \qquad\operatorname{div}^{\xi}(\nabla_{X})=\operatorname{div}%
^{\omega^{n}}(X).
\end{equation*}
\end{lemma}
\begin{proof}
Indeed, $i_{\chi}s$ is a section of degree $<m = \operatorname{rk}(E)$, hence $%
i_{\mu_{g}}i_{\chi}s=0$ for any $s$. For the other basic derivations,
\begin{align*}
i_{\mu_{g}}(\nabla_{X}s)\omega^{n} &
=X(i_{\mu_{g}}s)\omega^{n}-(i_{\nabla_{X}\mu_{g}}s)\omega^{n}= \\
& =\mathcal{L}_{X}((i_{\mu_{g}}s)\omega^{n})-(i_{\mu_{g}}s)\mathcal{L}%
_{X}\omega^{n}= \\
& =\operatorname{d}i_{X}((i_{\mu_{g}}s)\omega^{n})+i_{X}\operatorname{d}%
((i_{\mu_{g}}s)\omega^{n})-(i_{\mu_{g}}s)\mathcal{L}_{X}\omega^{n}.
\end{align*}
Now, the first term $\operatorname{d}i_{X}((i_{\mu_{g}}s)\omega^{n})$ does not
contribute in the integral because it is an exact term. The second, $i_{X}%
\operatorname{d}((i_{\mu_{g}}s)\omega^{n})$, is equal to zero because $%
(i_{\mu_{g}}s)\omega^{n}$ is a top degree differential form on $M$. The
third term gives $(i_{\mu_{g}}s)\operatorname{div}\nolimits^{\omega^{n}}(X)$
because $\mathcal{L}_{X}\omega^{n}=\operatorname{div}\nolimits^{\omega^{n}}(X)%
\omega^{n}$. Finally, note that $\nabla_{X}\mu_{g}$ vanishes by hypothesis.
Therefore, $-\int\nolimits_{\xi}\nabla_{X}s=\int\nolimits_{\xi}\operatorname{div}%
\nolimits^{\omega^{n}}(X)s$.
\end{proof}
\begin{theorem}
Any even symplectic form on a graded manifold $(M,\wedge\mathcal{E})$ is
unimodular.
\end{theorem}
\begin{proof}
Recall Rothstein's Theorem: $\Theta=\varphi^{\ast}(\Theta_{\omega,g,\nabla})$
for an automorphism $\varphi$ of $\wedge\mathcal{E}$ and where $\nabla$ is
compatible with $g$; thus, it is clear that if we prove that $\Theta
_{\omega,g,\nabla}$ is unimodular, $\Theta$ will also be. $\Theta
_{\omega,g,\nabla}$ is given by
\begin{align*}
\left\langle \nabla_{X},\nabla_{Y};\Theta_{\omega,g,\nabla}\right\rangle &
=\omega(X,Y)+\frac{1}{2}R(X,Y,\_,\_) \\
\left\langle \nabla_{X},i_{\chi};\Theta_{\omega,g,\nabla}\right\rangle & =0
\\
\left\langle i_{\chi},i_{\psi};\Theta_{\omega,g,\nabla}\right\rangle &
=g(\chi,\psi),
\end{align*}
so that $(\Theta_{\omega,g,\nabla})_{(0)}$, which we shall denote $%
\Theta_{(0)}^{\omega,g,\nabla}$, is given by
\begin{align*}
\left\langle \nabla_{X},\nabla_{Y};\Theta_{(0)}^{\omega,g,\nabla
}\right\rangle & =\omega(X,Y) \\
\left\langle \nabla_{X},i_{\chi};\Theta_{(0)}^{\omega,g,\nabla}\right\rangle
& =0=\left\langle i_{\chi},i_{\psi};\Theta_{(0)}^{\omega,g,\nabla
}\right\rangle .
\end{align*}
The graded Hamiltonian vector field associated to $f\in C^{\infty}(M)$
through the symplectic form $\Theta_{\omega,g,\nabla}$ is given by
\begin{equation*}
D_{f} = \nabla_{X_{f}}+ h.d.t.
\end{equation*}
Lemma $2$ tells us that $D$ is a graded Hamiltonian vector field
if and only if $\iota _{D_{0}}\Theta _{(0)}$ is an exact graded form. On the
other hand, we know that
\begin{equation}
\pi _{(0)}(Z^{M}(f))=\pi _{(0)}(\operatorname{div}(D_{f}))=\operatorname{div}(\nabla
_{X_{f}})=0.
\end{equation}
Therefore $Z^{M}=i_{N}+\text{\emph{higher degree terms,}}$ where $N\in
\operatorname{End}\mathcal{E}$. But then,
\begin{equation*}
\iota _{Z_{0}^{M}}\Theta _{(0)}^{\omega ,g,\nabla }=\iota _{i_{N}}\Theta
_{(0)}^{\omega ,g,\nabla }=0.
\end{equation*}
\end{proof}
\section{Applications}
In this section, we intend to provide some ideas about the possible
applications of these results. In the classical case, the notions of
divergence and vanishing modular class, are intimately related to
conservation laws along the flow of fluids; in fact, to one of the basic
equations of fluid dynamics, the continuity equation. We do not intend here
to give a complete description of the equations of graded fluids, we will
content ourselves with a study of what the graded continuity equation must
be (this is the only basic equation of fluid dynamics which is directly
related to the conservation of volume by the Hamiltonian flow).
Let us consider more concretely the classical situation we want to extend to
the graded case.
Let $V\in\mathcal{X}(M)$ be a vector field describing a classical dynamical
system (for instance, think of the velocities field on a fluid), and let $%
\{\varphi_{t}\}_{t\in\mathbb{R}}$ be its flow. Associated to any function $%
f\in C^{\infty}(M)$ (which describes the density of some observable on the
system), we have the continuity equation
\begin{equation}
\frac{\partial f}{\partial t}+\operatorname{div}(fV)=0 \label{cont}
\end{equation}
(here we allow the possibility of a time dependence in $f$). This equation,
expresses the conservation of the total magnitude associated to $f$:
\begin{equation}
\frac{d}{dt}\int_{M}f\mu=0, \label{cons}
\end{equation}
where $\mu$ is a volume form on $M$, usually the symplectic volume form
coming from the hamiltonian structure of the dynamical system.
What would be the graded analog of (\ref{cont})? We cannot mimic the
physical reasoning of the classical case, because in the graded one there is
no notion of volume form (understood as a maximal degree graded form), but
we can extend the geometrical interpretation. For this, let us note that (%
\ref{cont}) can be rewritten as
\begin{equation}
(\frac{\partial}{\partial t}+\mathcal{L}_{V})(f\mu)=0. \label{contLie}
\end{equation}
The continuity equation in its form (\ref{contLie}) allows one to interpret
$f\mu$ as a density form on the fluid which is dynamically conserved along
the flow $\{\varphi_{t}\}_{t\in\mathbb{R}}$. Here $f$ can be a volume
density, a charge density, etc. Moreover, this equation and its geometrical
interpretation carry over to graded manifolds. Now, an ``observable
density'' will be a superfunction $\rho\in\wedge\mathcal{E}$. A graded
vector field is a $D\in\mathrm{\operatorname{Der}}\wedge\mathcal{E}$ and its
flow, in general, is two-parameter dependent (see \cite{Mon-San 93} for
details on superflows), $\{\Phi_{(t,s)}^{\ast}\}_{(t,s)\in\mathbb{R}^{1|1}}$%
, where $\Phi :\mathbb{R}^{1|1}\times(M,\wedge\mathcal{E)}%
\rightarrow(M,\wedge\mathcal{E)}$. Thus, if $(t,s)$ are the (global)
supercoordinates of $\mathbb{R}^{1|1}$, the graded analog of (\ref{contLie})
would be the expression of the conservation of $\rho$ along the flow of $D$:
\begin{equation}
(\frac{\partial}{\partial t}+\frac{\partial}{\partial s}+\mathcal{L}%
_{D}^{G})(\rho)=0, \label{contLiegrad}
\end{equation}
where we have taken $\frac{\partial}{\partial t}+\frac{\partial}{\partial s}$
as the ``integrating model'' for supervector fields flows (see \cite{Mon-San
93}). Also, $\rho$ can eventually depend upon $s,t$.
By using our results (Proposition $3$ and Theorem $6$), we can recast (\ref
{contLiegrad}) in a form similar to the classical one (\ref{cont}):
\begin{align*}
(\frac{\partial}{\partial t}+\frac{\partial}{\partial s}+\mathcal{L}%
_{D}^{G})(\rho) & =(\frac{\partial}{\partial t}+\frac{\partial}{\partial s}%
)(\rho)+D(\rho)= \\
& =(\frac{\partial}{\partial t}+\frac{\partial}{\partial s})(\rho
)+(-1)^{\left| D\right| \left| \rho\right| }(\operatorname{div}%
\nolimits^{\xi}(\rho D)-\rho\wedge\operatorname{div}\nolimits^{\xi}(D))= \\
& =(\frac{\partial}{\partial t}+\frac{\partial}{\partial s})(\rho
)+(-1)^{\left| D\right| \left| \rho\right| }(\operatorname{div}%
\nolimits^{\xi}(\rho D)).
\end{align*}
Thus, the equation of continuity reads now
\begin{equation*}
(\frac{\partial}{\partial t}+\frac{\partial}{\partial s})(\rho)+(-1)^{\left|
D\right| \left| \rho\right| }(\operatorname{div}\nolimits^{\xi}(\rho D))=0.
\end{equation*}
Indeed, though it is not evident, this equation is of the ``conservation of
mass'' type. We only have to take into account the properties of the
superflows which are analogous to those of the classical flow of vector
fields. Let us denote by $(U,\wedge\mathcal{E}|_{U})$ an open superdomain
and by $\Phi_{(t,s)}^{\ast}(U,\wedge\mathcal{E}|_{U})$ the superdomain
obtained from the action of the superflow of $D$. Then, if $\int$ denotes
the berezinian integral,
\begin{align*}
(\frac{\partial}{\partial t}+\frac{\partial}{\partial s})\int_{\Phi
_{(t,s)}^{\ast}(U,\wedge\mathcal{E}|_{U})}\rho & =\int_{(U,\wedge \mathcal{E}%
|_{U})}(\frac{\partial}{\partial t}+\frac{\partial}{\partial s}%
)\Phi_{(t,s)}^{\ast}\rho= \\
& =\int_{(U,\wedge\mathcal{E}|_{U})}\Phi_{(t,s)}^{\ast}\left[ (\frac
{\partial}{\partial t}+\frac{\partial}{\partial s})\rho+\mathcal{L}%
_{D}^{G}\rho\right] = \\
& =\int_{\Phi_{(t,s)}^{\ast}(U,\wedge\mathcal{E}|_{U})}\left[ (\frac
{\partial}{\partial t}+\frac{\partial}{\partial s})\rho+\mathcal{L}%
_{D}^{G}\rho\right] ,
\end{align*}
and the continuity equation is equivalent to
\begin{equation}
(\frac{\partial}{\partial t}+\frac{\partial}{\partial s})\int\rho=0,
\label{conserv}
\end{equation}
which is a conservation equation.
Note how this result embodies the classical one about conservation of mass
in a fluid (for definiteness, moving on $\mathbb{R}^{2}$ with its usual
symplectic and metric structure). It suffices to take $\rho(\overrightarrow
{x},t)=f(\overrightarrow{x},t)\mu$ (where $f$ is the density of the fluid
and $\mu$ is the symplectic volume form on $\mathbb{R}^{2})$, and $D=%
\mathcal{L}_{X}$ (where $X$ is the field of velocities) as a derivation on $(%
\mathbb{R}^{2},\Gamma(\Lambda T^{\ast}\mathbb{R}^{2}))$, and then, by the
definition of berezinian integral, (\ref{conserv}) leads to
\begin{align*}
0 & =(\frac{\partial}{\partial t}+\frac{\partial}{\partial s})\rho +\mathcal{%
L}_{D}^{G}\rho= \\
& =\frac{\partial}{\partial t}(f\mu)+\mathcal{L}_{X}(f\mu)= \\
& =\frac{\partial f}{\partial t}\mu+\operatorname{div}(fX)\mu,
\end{align*}
that is, the classical equation
\begin{equation*}
\frac{\partial f}{\partial t}+\operatorname{div}(fX)=0.
\end{equation*}
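As a quick sanity check of the classical equation just obtained, one can verify symbolically that a density transported by a constant velocity field $X=(a,b)$ on $\mathbb{R}^{2}$ satisfies $\frac{\partial f}{\partial t}+\operatorname{div}(fX)=0$. The following sketch (the Gaussian profile and the use of SymPy are illustrative choices, not part of the discussion above) performs this check.
\begin{verbatim}
# Minimal sketch: verify the classical continuity equation
#   df/dt + div(f X) = 0
# for a density carried along by the constant velocity field X = (a, b).
# The Gaussian profile below is an arbitrary illustrative choice.
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')
f = sp.exp(-((x - a*t)**2 + (y - b*t)**2))   # f(x, y, t) transported by the flow
residual = sp.diff(f, t) + sp.diff(f*a, x) + sp.diff(f*b, y)
print(sp.simplify(residual))                 # prints 0
\end{verbatim}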
The advantage of equation (\ref{conserv}) is that it allows one to consider
all kinds of quantities expressible as differential forms, in the spirit of
the generalization of classical mechanics proposed by Michor (see \cite{Mic
85}).
\textbf{Acknowledgements}. A previous version of this result was presented at
the Colloque IHP, ``G\'{e}om\'{e}trie diff\'{e}rentielle et physique
math\'{e}matique'', Paris, June 2001. We thank Y. Kosmann-Schwarzbach, A.
Weinstein and G. Tuynman for suggesting to us the possibility of a simpler
proof.
This work has been partially supported by the Spanish Ministerio de
Educaci\'{o}n y Cultura, Grant PB-97-1386.
| {
"timestamp": "2018-05-29T02:00:37",
"yymm": "1805",
"arxiv_id": "1805.10315",
"language": "en",
"url": "https://arxiv.org/abs/1805.10315",
"abstract": "We provide an intrinsic description of the notion of modular class for an even symplectic manifold and study its properties in this coordinate free setting.",
"subjects": "Differential Geometry (math.DG); Mathematical Physics (math-ph)",
"title": "Modular class of even symplectic manifolds",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966914019704466,
"lm_q2_score": 0.734119521083126,
"lm_q1q2_score": 0.7098304570740028
} |
https://arxiv.org/abs/2208.13926 | Regular projections of the link L6n1 | Given a link projection $P$ and a link $L$, it is natural to ask whether it is possible that $P$ is a projection of $L$. Taniyama answered this question for the cases in which $L$ is a prime knot or link with crossing number at most five. Recently, Takimura settled the issue for the knot $6_2$. We answer this question for the case in which $L$ is the link $L6n1$. | \section{Introduction}\label{sec:intro}
We work in the piecewise linear category, and links are hosted in the $3$-sphere ${\mathbb S}^3$. If we project a link $L$ onto the $2$-sphere ${\mathbb S}$, we obtain a {\em projection} of $L$. We {\em resolve} a link projection by giving over/under information at each crossing, thus obtaining a link {\em diagram}.
As usual, we are interested in {regular} projections of links, such as the projection $P$ in Figure~\ref{fig:100}. We recall that a projection is {\em regular} if it has finitely many multiple points, and each multiple point is a transverse (non-tangential) double point. Two projections are {\em equivalent} if there is a self-homeomorphism of ${\mathbb S}$ that takes one to the other.
Let $P$ be a projection and let $L$ be a link. We do not distinguish between isotopic links, and so $P$ {\em is a projection of $L$} if there is a link isotopic to $L$ that projects to $P$. Thus $P$ is a projection of $L$ if and only if it is possible to resolve $P$ to obtain a diagram $D$ of a link isotopic to $L$. See Figure~\ref{fig:100} for an illustration.
\begin{figure}[ht!]
\centering
\scalebox{0.25}{\input{fig100-3.pdf_t}}
\caption{On the left-hand side we have a projection $P$ of a $3$-component link. We turn $P$ into the diagram $D$ at the center of the figure by giving over/under information at each crossing, that is, by {\em resolving} each crossing of $P$. Using a sequence of Reidemeister moves, it is possible to transform $D$ into the diagram on the right-hand side, which is a diagram of the link $L6n1$. Therefore $P$ is a projection of $L6n1$.}
\label{fig:100}
\end{figure}
Given a projection $P$ and a link $L$, it is natural to ask whether $P$ is a projection of $L$. This question and several variants have been thoroughly investigated in the literature~\cite{cantarella,endoitoh,evenzohar,hanaki2009,hanaki2010,hanaki2014,hanaki2015,hanaki2020,huhtaniyama,itotakimura,medina1,smooth,ptaniyama}. Taniyama answered this question for all the prime links (including knots) with crossing number at most five~\cite{taniyamaknots,taniyamalinks}. Recently, Takimura settled the issue for the knot $6_2$~\cite{takimura2018}.
\subsection{Our main result} In this paper we answer this question for the case in which $L$ is a $3$-component prime link with crossing number six, namely the link $L6n1$ in the Thistlethwaite Link Table~\cite{atlas}. We refer the reader again to the right-hand side of Figure~\ref{fig:100}.
In order for a projection $P$ to be a projection of $L6n1$, an obvious requirement is that $P$ be a projection of a $3$-component link. Every such projection $P$ is the union of three knot projections, which we colour arbitrarily so that we have a {\em blue} knot projection $B$, a {\em red} knot projection $R$, and a {\em green} knot projection $G$. We refer the reader again to Figure~\ref{fig:100}.
We say that $P$ is {\em pairwise crossing} if $B,R$, and $G$ pairwise cross each other. Since the components of the link $L6n1$ are pairwise linked, it follows that if $P$ is a projection of $L6n1$, then $P$ must be pairwise crossing. Our main result is that this obvious necessary condition is also sufficient.
\begin{theorem}\label{thm:main}
A projection of a $3$-component link is a projection of the link $L6n1$ if and only if it is pairwise crossing.
\end{theorem}
\subsection{Overview of the proof of Theorem~\ref{thm:main}}
For convenience, for the rest of the paper we regard every link projection as a $4$-regular graph embedded on ${\mathbb S}$, by turning each crossing into a degree $4$ vertex. See Figure~\ref{fig:200}.
Following~\cite{ptaniyama} and~\cite{taniyamaknots}, if $P$ is a link projection then $\Ls{P}$ denotes the set of all those links $L$ such that $P$ is a projection of $L$. With this notation, Theorem~\ref{thm:main} claims that if $P$ is a $3$-component link projection, then $L6n1 \in \Ls{P}$ if and only if $P$ is pairwise crossing.
A key tool in the proof of Theorem~\ref{thm:main} is the identification of two {\em reduction operations} on link projections, introduced in Section~\ref{sec:red}. As we will see, when we perform a reduction operation on a pairwise crossing projection $P$ we obtain a projection ${\overline{P}}$ that satisfies the following properties:
\vglue 0.1 cm
\noindent{(R1)} {\sl ${\overline{P}}$ has fewer vertices than $P$.}
\vglue 0.1 cm
\noindent{(R2)} {\sl ${\overline{P}}$ is pairwise crossing.}
\vglue 0.1 cm
\noindent{(R3)} {\sl $\Ls{{\overline{P}}} \subseteq \Ls{P}$.}
\vglue 0.1 cm
A pairwise crossing projection $P$ is {\em irreducible} if we cannot apply any reduction operation to it. If we start with an arbitrary pairwise crossing projection $P$, after performing a finite number of reduction operations we end up with an irreducible projection $P'$. An iterative application of (R3) yields that $\Ls{P'} \subseteq \Ls{P}$. In particular, if $P'$ is a projection of $L6n1$, then $P$ is also a projection of $L6n1$. Therefore {\em in order to prove Theorem~\ref{thm:main}, it suffices to show that every irreducible projection is a projection of $L6n1$}.
To achieve this goal, we fully characterize which projections are irreducible: as we state in Section~\ref{sec:proofmain} (see Proposition~\ref{pro:twoirr} and Figure~\ref{fig:670}) up to equivalence there are only two irreducible projections. It is easy to see that they are both projections of $L6n1$ (see Figure~\ref{fig:670}). Thus every irreducible projection is indeed a projection of $L6n1$, and so Theorem~\ref{thm:main} follows.
\section{The reduction operations}\label{sec:red}
Before we identify the reduction operations, we make a few elementary remarks on link projections. Recall that we regard link projections as $4$-regular graphs embedded on ${\mathbb S}$.
\subsection{Straight-ahead walks, monochromatic vertices, and bichromatic vertices}
We recall that a {\em straight-ahead walk} in a $4$-regular graph is a walk in which every time we reach a vertex $v$ in the walk, the next edge in the walk is the opposite edge to the one from which we arrived to $v$. A walk is {\em closed} if its final vertex is the same as its initial vertex.
We are interested in projections of $3$-component links, and so every projection under consideration is a projection of a $3$-component link. Thus every projection $P$ of interest is the edge-disjoint union $P=B\cup R\cup G$ of three straight-ahead closed walks: a {\em blue} walk $B$, a {\em red} walk $R$, and a {\em green} walk $G$. We refer the reader to Figure~\ref{fig:200}.
\begin{figure}[ht!]
\centering
\scalebox{0.31}{\input{fig200-5.pdf_t}}
\caption{A link projection is regarded as a $4$-regular graph embedded on ${\mathbb S}$, by turning each crossing into a degree $4$ vertex. Every $3$-component link projection is the edge-disjoint union of three straight-ahead closed walks: a {blue} walk $B$, a {red} walk $R$, and a {green} walk $G$. The projection in this figure is pairwise crossing: there is at least one blue-red vertex, at least one blue-green vertex, and at least one red-green vertex.}
\label{fig:200}
\end{figure}
If $v$ is a vertex of $P$, and the four edges incident with $v$ are of the same colour, then we also colour $v$ with this colour, and say that $v$ is {\em monochromatic}. Otherwise, two of the edges incident with $v$ are of one colour, and the other two are of another colour. In this case we colour $v$ with both colours, and say that $v$ is {\em bichromatic}. Thus a monochromatic vertex is either blue, red, or green, and there are three {\em types} of bichromatic vertices: blue-red, blue-green, or red-green. See Figure~\ref{fig:200}.
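The vertex colouring just described can be computed from purely local data. The following small sketch (the encoding of a vertex by the list of colours of its four incident edge-ends is an illustrative assumption, not notation from this paper) classifies a vertex as monochromatic or bichromatic.
\begin{verbatim}
# Minimal sketch (toy data model): classify a vertex of a 3-component link
# projection from the colours of its four incident edge-ends.
def classify_vertex(edge_colours):
    assert len(edge_colours) == 4
    distinct = sorted(set(edge_colours))
    if len(distinct) == 1:
        return ('monochromatic', distinct[0])
    # otherwise exactly two colours, each on exactly two edge-ends
    assert len(distinct) == 2 and all(edge_colours.count(c) == 2 for c in distinct)
    return ('bichromatic', '-'.join(distinct))

print(classify_vertex(['blue'] * 4))                    # ('monochromatic', 'blue')
print(classify_vertex(['blue', 'red', 'blue', 'red']))  # ('bichromatic', 'blue-red')
\end{verbatim}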
In view of Theorem~\ref{thm:main}, our interest lies in pairwise crossing projections. In every such projection the blue straight-ahead closed walk $B$ and the red straight-ahead closed walk $R$ must cross each other, and so there is at least one blue-red vertex. Actually, using the Jordan curve theorem it is easy to see that there must be at least two blue-red vertices. Similarly, there are at least two blue-green vertices and at least two red-green vertices.
\subsection{The reduction operations}
As we mentioned in the previous section, an essential tool in the proof of Theorem~\ref{thm:main} is the existence of two {\em reduction operations} on link projections, each of which satisfies properties (R1)--(R3).
\subsubsection{The first reduction operation: shortcutting a projection}
Let $P=B\cup R\cup G$ be a pairwise crossing projection. {Suppose that there is a face $f$ in $P$ whose boundary contains two edges $e,e'$ of the same colour, which without loss of generality we may assume to be blue. We refer the reader to Figure~\ref{fig:220}(i) for an illustration, where $f$ is the shaded face and $e, e'$ are the thick edges.}
{As illustrated in Figure~\ref{fig:220}(ii), we subdivide the edge $e$ with a degree $2$ vertex $x$, and subdivide $e'$ with a degree $2$ vertex $y$. As we show in Figure~\ref{fig:220}(iii) and (iv), the blue straight-ahead closed walk $B$ is naturally decomposed into two straight-ahead walks $B_1$ and $B_2$, each of which starts at $x$ and ends at $y$.} {We say that $B_1$ and $B_2$ are the $xy$-{\em walks}. For $i\in\{1,2\}$, we say that $B_i$ is {\em colourful} if it has at least one blue-red vertex and at least one blue-green vertex.}
\def\tq#1{{\Scale[2.0]{#1}}}
\def\tj#1{{\Scale[3.0]{#1}}}
\def\tz#1{{\Scale[2.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.29}{\input{fig220-3.pdf_t}}
\caption{Illustration of the shortcut operation.}
\label{fig:220}
\end{figure}
\def\tz#1{{\Scale[2.0]{#1}}}
{Suppose that at least one of $B_1$ and $B_2$ is colourful. As in the example in Figure~\ref{fig:220}, we may assume without loss of generality that $B_1$ is colourful. We can then proceed to {\em shortcut} $B$, as follows. First, we discard $B_2$ and join $x$ and $y$ in $B_1$ with an arc contained in $f$, as illustrated in Figure~\ref{fig:220}(v). Finally we suppress $x$ and $y$. As illustrated in Figure~\ref{fig:220}(vi), as a result we obtain a new blue straight-ahead closed walk $\overline{B}$. We say that to obtain $\overline{B}$ we {\em shortcut} $B$ {\em at $x$ and $y$}, and the projection ${\overline{P}}={\overline{B}}\cup R\cup G$ is obtained by {\em shortcutting $P$ at $x$ and $y$}.}
In order to show that properties (R1)--(R3) hold, we start by noting that the internal vertices of the discarded $xy$-walk ($B_2$ in our previous discussion) are in $P$ but not in ${\overline{P}}$. Thus (R1) holds. We also note that the colourfulness of $B_1$ implies that ${\overline{P}}$ is pairwise crossing. Thus (R2) holds.
We finally move on to showing that (R3) holds. We illustrate the proof in Figure~\ref{fig:400}, using the projections $P$ and ${\overline{P}}$ from Figure~\ref{fig:220}. Suppose that $L\in \Ls{{\overline{P}}}$, that is, $\overline{P}$ is a projection of a link $L$. Thus it is possible to resolve each vertex of $\overline{P}$ so that the result is a diagram $\overline{D}$ of $L$.
To prove (R3) we need to show that $P$ is also a projection of $L$. To achieve this, we describe how to resolve each vertex of $P$ to obtain a diagram $D$ equivalent to ${\overline{D}}$.
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.295}{\input{fig400-5.pdf_t}}
\caption{Using the descending algorithm, a diagram ${\overline{D}}$ of ${\overline{P}}$ can be extended to a diagram $D$ of $P$, equivalent to ${\overline{D}}$.}
\label{fig:400}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
First, for each vertex of $P$ that is also a vertex of $\overline{P}$, we resolve it exactly as it was resolved back in $\overline{P}$ in order to obtain $\overline{D}$. See Figure~\ref{fig:400}.
We now describe how to resolve each vertex of $P$ that is not a vertex in $\overline{P}$. Note that these are precisely the vertices in the straight-ahead walk $B_2$ that was discarded in the shortcutting process. Recall that the endpoints of $B_2$ are $x$ and $y$. As in~\cite{ptaniyama} and~\cite{taniyamaknots}, we use the {\em descending algorithm} from $x$ to $y$ to resolve each vertex of $B_2$: we traverse $B_2$ from $x$ to $y$, and whenever we arrive to a vertex for the first time, we resolve this vertex so that the strand we are currently traversing is the overstrand. We refer the reader again to Figure~\ref{fig:400}: the thick part is obtained using the descending algorithm from $x$ to $y$.
Let $D$ be the diagram obtained by resolving the vertices of $P$ in the way we have described. The use of the descending algorithm implies that the strand in $D$ from $x$ to $y$ can be isotoped to the strand in ${\overline{D}}$ from $x$ to $y$. This implies that $D$ and ${\overline{D}}$ are equivalent diagrams, and so the proof of (R3) is complete.
Needless to say, choosing the blue straight-ahead closed walk $B$ for the discussion was arbitrary, as evidently a totally analogous shortcut operation can be applied to $R$ or to $G$.
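To make the descending algorithm used in the proof of (R3) concrete, here is a small sketch. Encoding the walk $B_2$ simply as the ordered list of crossing labels it visits is an illustrative assumption, not notation from this paper.
\begin{verbatim}
# Minimal sketch: the descending algorithm from x to y.  A crossing is
# resolved so that the strand being traversed is the overstrand at its
# *first* visit; later visits go under.
def descending_resolution(walk):
    over_at_visit = {}
    for i, crossing in enumerate(walk):
        if crossing not in over_at_visit:   # first arrival at this crossing
            over_at_visit[crossing] = i     # current strand goes over here
    return over_at_visit

# A walk meeting crossings a, b, a, c, b in this order:
print(descending_resolution(['a', 'b', 'a', 'c', 'b']))  # {'a': 0, 'b': 1, 'c': 3}
\end{verbatim}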
\subsubsection{The second reduction operation: simplifying a $\Theta$}
Suppose that $P=B\cup R\cup G$ contains a straight-ahead cycle $C=uvwu$, such as the one illustrated in Figure~\ref{fig:1160}(i). Suppose that (a) $v$ and $w$ are joined by an edge $e$ distinct from the edge that joins them in $C$; and (b) the connected component of ${\mathbb S}\setminus C$ that contains $e$ does not contain any other part of $P$. We then say that $C+e$ is a {\em $\Theta$} in $P$. See Figure~\ref{fig:1160}(ii).
\def\te#1{{\Scale[2.6]{#1}}}
\def\tf#1{{\Scale[3.2]{#1}}}
\def\tz#1{{\Scale[3.0]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.25}{\input{fig1160-1.pdf_t}}
\caption{Illustration of the notion of a $\Theta$ in a projection.}
\label{fig:1160}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
\def\te#1{{\Scale[3.4]{#1}}}
We {\em simplify} $\Theta$ by {\em splitting} $u$ as illustrated in Figure~\ref{fig:1620}. This is the second and last reduction operation on link projections that we will use in this paper.
\def\te#1{{\Scale[2.6]{#1}}}
\def\tf#1{{\Scale[3.2]{#1}}}
\def\tz#1{{\Scale[3.0]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.25}{\input{fig1620-1.pdf_t}}
\caption{To simplify the $\Theta$ on the left hand side, we open up (``split'') $u$ as shown on the right hand side.}
\label{fig:1620}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
\def\te#1{{\Scale[3.4]{#1}}}
The projection ${\overline{P}}$ obtained from $P$ by simplifying the $\Theta$ has one fewer vertex than $P$, as $u$ belongs to $P$ but it is not in ${\overline{P}}$ anymore. Thus (R1) holds. Now since $C$ is a straight-ahead cycle it follows that $u$ is a monochromatic vertex. Therefore every bichromatic vertex in $P$ is also in ${\overline{P}}$, and since $P$ is pairwise crossing it follows that ${\overline{P}}$ is also pairwise crossing. Therefore Property (R2) also holds for this reduction operation.
We now prove that (R3) holds. Suppose that $L\in \Ls{{\overline{P}}}$, that is, ${\overline{P}}$ is a projection of a link $L$. Thus it is possible to resolve each vertex of ${\overline{P}}$ so that the result is a diagram ${\overline{D}}$ of $L$.
To prove (R3) we need to show that $P$ is also a projection of $L$. To achieve this, we describe how to resolve each vertex of $P$ to obtain a diagram $D$ equivalent to ${\overline{D}}$.
First, for each vertex of $P$ that is not in $\{u,v,w\}$, we resolve it exactly as it was resolved back in ${\overline{P}}$ in order to obtain ${\overline{D}}$. It remains to describe how to resolve $u,v$, and $w$.
The way in which we resolve $u,v$, and $w$ in $P$ depends on how $v$ and $w$ are resolved in ${\overline{P}}$ to obtain ${\overline{D}}$. For instance, if $v$ and $w$ are resolved in ${\overline{P}}$ as in Figure~\ref{fig:1890}(i), then we resolve $u,v$, and $w$ in $P$ as in Figure~\ref{fig:1890}(ii). It is easy to see that the strand from $x$ to $y$ in (ii) (that is, in $D$) can be isotoped to the strand from $x$ to $y$ in (i) (that is, in ${\overline{D}}$). Since all the other vertices of $P$ are resolved in the same way as they are resolved in ${\overline{P}}$, we conclude that the resulting diagram $D$ of $P$ is equivalent to the diagram ${\overline{D}}$ of ${\overline{P}}$.
There are three more ways in which the vertices $v$ and $w$ can be resolved in ${\overline{P}}$. These are illustrated in Figure~\ref{fig:1890}(iii), (v), and (vii). If $v$ and $w$ are resolved in ${\overline{P}}$ as in (iii), then we resolve $u,v$, and $w$ in $P$ as in (iv). If $v$ and $w$ are resolved in ${\overline{P}}$ as in (v), then we resolve $u,v$, and $w$ in $P$ as in (vi). Finally, if $v$ and $w$ are resolved in ${\overline{P}}$ as in (vii), then we resolve $u,v$, and $w$ in $P$ as in (viii).
\def\te#1{{\Scale[2.6]{#1}}}
\def\tf#1{{\Scale[3.2]{#1}}}
\def\tz#1{{\Scale[3.0]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.25}{\input{fig1890-3.pdf_t}}
\caption{Illustration of the proof of (R3) for the operation of simplifying a $\Theta$.}
\label{fig:1890}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
\def\te#1{{\Scale[3.4]{#1}}}
As in the first case we discussed above, in each of these cases it is easy to see that the strand from $x$ to $y$ in $D$ can be isotoped to the strand from $x$ to $y$ in ${\overline{D}}$. Since all the other vertices of $P$ are resolved in the same way as they are resolved in ${\overline{P}}$, we conclude that the resulting diagram $D$ of $P$ is equivalent to the diagram ${\overline{D}}$ of ${\overline{P}}$. This completes the proof that the reduction operation of simplifying a $\Theta$ satisfies Property (R3).
\section{Irreducible projections and proof of Theorem~\ref{thm:main}}\label{sec:proofmain}
We say that a projection is {\em irreducible} if it is pairwise crossing and it is not possible to apply any reduction operation to it. The heart of the proof of Theorem~\ref{thm:main} is the following statement.
\begin{proposition}\label{pro:twoirr}
Up to equivalence, the only irreducible projections are the projections $P_1$ and $P_2$ in Figure~\ref{fig:670}.
\end{proposition}
\def\tf#1{{\Scale[2.8]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.28}{\input{fig670-2.pdf_t}}
\caption{The two irreducible projections $P_1$ and $P_2$.}
\label{fig:670}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
The rest of the paper is devoted to the proof of Proposition~\ref{pro:twoirr}. As we will now see, Theorem~\ref{thm:main} follows easily from this statement.
\begin{proof}[Proof of Theorem~\ref{thm:main}, assuming Proposition~\ref{pro:twoirr}]
As we pointed out before stating Theorem~\ref{thm:main}, the ``only if'' part of the theorem holds simply because $L6n1$ is pairwise linked. Thus in order to prove the theorem we need to show the ``if'' part: every pairwise crossing projection is a projection of $L6n1$.
Let $P$ be any pairwise crossing projection. Our goal is to show that $L6n1\in \Ls{P}$. In view of Properties (R1) and (R2) of the reduction operations, given a pairwise crossing projection $P$, we can iteratively apply to $P$ a sequence of reduction operations until we reach an irreducible projection $P'$. An iterative application of (R3) then implies that $\Ls{P'}\subseteq \Ls{P}$. Since by Proposition~\ref{pro:twoirr} the only irreducible projections are $P_1$ and $P_2$, we conclude that ($\dag$) {\em either $\Ls{P_1} \subseteq \Ls{P}$ or $\Ls{P_2} \subseteq \Ls{P}$.}
As we illustrate in Figure~\ref{fig:670}, both $P_1$ and $P_2$ can be resolved into $L6n1$. That is, $L6n1\in\Ls{P_1}$ and $L6n1\in\Ls{P_2}$. In view of ($\dag$), it follows that $L6n1\in\Ls{P}$.
\end{proof}
\section{Towards the proof of Proposition~\ref{pro:twoirr}: properties of irreducible projections}\label{sec:irreducible}
In this section we pave the way towards the proof of Proposition~\ref{pro:twoirr}, by establishing several properties that must be satisfied in an irreducible projection. More specifically, we identify two structures that cannot exist in an irreducible projection, and we show that every face in an irreducible projection must be bounded by a cycle.
\subsection{Disposable digons}
Let $P=B\cup R\cup G$ be a pairwise crossing projection. Suppose that there exist parallel edges $e_1,e_2$ such that $e_1\cup e_2$ bounds an open disk $\Delta$ that does not contain any part of $P$. We say that $e_1\cup e_2$ is a {\em digon}. See Figure~\ref{fig:700}.
\def\tf#1{{\Scale[2.0]{#1}}}
\def\tz#1{{\Scale[1.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.5}{\input{700-1.pdf_t}}
\caption{A digon in a link projection. If either $u$ and $v$ are monochromatic, or they are bichromatic (necessarily of the same type) and they are not the only bichromatic vertices of their type, then we can shortcut $P$ at $x$ and $y$.}
\label{fig:700}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Let $u,v$ be the common endvertices of $e_1$ and $e_2$. If $e_1$ and $e_2$ are of the same colour, then $u$ and $v$ are monochromatic of the same colour. Otherwise, $u$ and $v$ are bichromatic of the same type. We say that $e_1\cup e_2$ is a {\em disposable} digon if either (i) $u$ and $v$ are monochromatic; or (ii) $u$ and $v$ are bichromatic and they are not the only bichromatic vertices of their type in $P$.
\begin{observation}\label{obs:disdig}
In an irreducible projection there cannot be any disposable digons.
\end{observation}
\begin{proof}
Let $e_1\cup e_2$ be a disposable digon in an irreducible projection $P$. As we illustrate in Figure~\ref{fig:700}, let $e_3$ (respectively, $e_4$) be the edge that precedes (respectively, succeeds) $e_1$ as we traverse the straight-ahead walk (either $B,R$, or $G$) that contains $e_1$. As we also illustrate in that figure, we subdivide $e_3$ (respectively, $e_4$) with a degree $2$ vertex $x$ (respectively, $y$).
Note that $x$ and $y$ are incident with the same face. One of the two $xy$-walks contains the vertices $u$ and $v$, and it does not contain any other vertices of $P$. Now the assumption that $e_1\cup e_2$ is disposable guarantees that the other $xy$-walk is colourful, and so we can shortcut $P$ at $x$ and $y$. But this contradicts the assumption that $P$ is irreducible.
\end{proof}
\subsection{Superfluous walks}
Let $P=B\cup R\cup G$ be a pairwise crossing projection, and suppose that $P$ contains a monochromatic vertex $v$. Without loss of generality, for the purposes of this discussion we may assume that $v$ is blue.
The straight-ahead blue closed walk $B$ is then the edge-disjoint union of two straight-ahead closed walks $\beta_1$ and $\beta_2$ that start and end at $v$. These are the $v$-{\em walks}. Similarly as when we defined the shortcut operation, for $i\in\{1,2\}$ we say that $\beta_i$ is {\em colourful} if it has at least one blue-red vertex and at least one blue-green vertex. If $\beta_1$ (respectively, $\beta_2$) is colourful, then we say that $\beta_2$ (respectively, $\beta_1$) is {\em superfluous}.
\begin{observation}\label{obs:supwal}
In an irreducible projection there cannot be any superfluous walks.
\end{observation}
\begin{proof}
Using the notation and terminology from the previous discussion, by way of contradiction suppose that $\beta_2$ is superfluous, and so $\beta_1$ is colourful. As we illustrate in Figure~\ref{fig:1910}, we let $e_1,e_2$ be the edges incident with $v$ that are in $\beta_2$.
\def\tf#1{{\Scale[2.0]{#1}}}
\def\tz#1{{\Scale[1.5]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.45}{\input{fig1910-3.pdf_t}}
\caption{Illustration of the proof of Observation~\ref{obs:supwal}.}
\label{fig:1910}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
As we also illustrate in Figure~\ref{fig:1910}, we subdivide $e_1$ (respectively, $e_2$) with a degree $2$ vertex $x$ (respectively, $y$). One of the two $xy$-walks contains all the edges and vertices of $\beta_1$. Since $\beta_1$ is colourful, it follows that this $xy$-walk is colourful, and so we can shortcut $P$ at $x$ and $y$. This contradicts the assumption that $P$ is irreducible.
\end{proof}
\subsection{Faces in irreducible projections}
We conclude this section with an important remark on irreducible projections.
\begin{observation}\label{obs:facial}
If $P$ is an irreducible projection, then every face of $P$ is bounded by a cycle.
\end{observation}
\begin{proof}
We show that an irreducible projection $P$ cannot have a cut vertex, that is, a vertex $v$ such that $P\setminus\{v\}$ is disconnected. This implies the observation, since it follows that every irreducible projection is $2$-connected, and in every spherical embedding of a $2$-connected graph each face is bounded by a cycle.
By way of contradiction, suppose that $P$ is an irreducible projection with a cut vertex $v$. It is easy to see that $v$ is necessarily monochromatic, and without loss of generality we may assume that $v$ is blue. Let $\beta_1,\beta_2$ be the $v$-walks. We note that one of $\beta_1$ and $\beta_2$ must contain all the blue-red vertices and all the blue-green vertices. Indeed, if this were not the case then, since $v$ is a cut vertex, the red straight-ahead walk $R$ and the green straight-ahead walk $G$ would be disjoint, contradicting that $P$ is pairwise crossing.
Without loss of generality we may assume that $\beta_1$ contains all the blue-red vertices and all the blue-green vertices. Thus $\beta_1$ is colourful, and $\beta_2$ is superfluous. In view of Observation~\ref{obs:supwal}, this contradicts the irreducibility of $P$.
\end{proof}
\section{Good sections in pairwise crossing projections}\label{sec:goodsections}
The proof of Proposition~\ref{pro:twoirr} relies crucially on the concept of a good section. We refer the reader to Figure~\ref{fig:2040} and its caption for an illustration of the upcoming notions. Let $P=B\cup R\cup G$ be a pairwise crossing projection. A cycle of $P$ is {\em facial} if it bounds a face. A {\em section} of a facial cycle $C$ is a path contained in $C$, all of whose edges are of the same colour, and that is maximal with respect to this property.
\def\te#1{{\Scale[3.2]{#1}}}
\def\tf#1{{\Scale[2.8]{#1}}}
\def\tz#1{{\Scale[2.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.295}{\input{fig2040-5.pdf_t}}
\caption{The blue path $uaw$ is a section of the facial cycle that bounds face $g$. This section is good, as its endvertices are of distinct types: $u$ is blue-red and $w$ is blue-green. The green path $stz$ is a section of the facial cycle that bounds face $h$, but it is not a good section: its endvertices $s$ and $z$ are of the same type, namely green-blue.}
\label{fig:2040}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Clearly, the endvertices of a section $S$ are necessarily bichromatic. If the endvertices of $S$ are of distinct types, then $S$ is a {\em good} section. For instance, in a good blue section one endvertex is blue-red, the other endvertex is blue-green, and all the internal vertices (if any) are blue.
The {\em length} of a section is its number of edges. Note that the length of a section may be one, as it may consist of only one edge and its endvertices.
The next statement plays a crucial role in the proof of Proposition~\ref{pro:twoirr}.
\begin{observation}\label{obs:goodsectionsexist}
Every pairwise crossing projection has at least two good sections of each colour.
\end{observation}
\begin{proof}
Let $P=B\cup R\cup G$ be a pairwise crossing projection. By symmetry, it suffices to show that there are at least two good blue sections in $P$. We illustrate the proof using the projection $P$ in Figure~\ref{fig:2040}. As we illustrate in Figure~\ref{fig:2080}(i), we start by letting $P'$ be the plane graph obtained by removing from $P$ all the green edges and all the green vertices, but keeping all the green-blue or green-red vertices.
It is easy to see that since $P$ is pairwise crossing there must exist a face $f$ of $P'$ whose boundary walk $W$ contains at least one green-blue vertex and at least one green-red vertex. See Figure~\ref{fig:2080}(i). In particular, $W$ contains both blue and red edges, and so $W$ may be written as a concatenation $W=W_1 W_2 \cdots W_k$ of an even number $k$ of walks, where $W_1, W_3, \ldots, W_{k-1}$ contain only blue edges, and $W_2, W_4, \ldots, W_k$ contain only red edges. For instance, as we show in Figure~\ref{fig:2080}(ii), the boundary walk $W$ of the face $f$ in Figure~\ref{fig:2080}(i) is the concatenation of a (thick) blue walk $W_1$, a (thick) red walk $W_2$, a (thin) blue walk $W_3$, and a (thin) red walk $W_4$.
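The decomposition of the boundary walk $W$ into the concatenation $W_1W_2\cdots W_k$ can be illustrated with a short sketch; representing $W$ only by the cyclic sequence of its edge colours is an assumption made for the example.
\begin{verbatim}
# Minimal sketch: split a closed boundary walk, given as the cyclic list of
# its edge colours, into maximal single-colour runs W_1, ..., W_k.
def monochromatic_runs(colours):
    runs = []
    for c in colours:
        if runs and runs[-1][0] == c:
            runs[-1] = (c, runs[-1][1] + 1)
        else:
            runs.append((c, 1))
    # the walk is closed, so merge a run that wraps around the starting point
    if len(runs) > 1 and runs[0][0] == runs[-1][0]:
        last = runs.pop()
        runs[0] = (runs[0][0], runs[0][1] + last[1])
    return runs

# A walk alternating red ('R') and blue ('B') stretches:
print(monochromatic_runs(['R', 'B', 'B', 'R', 'R']))  # [('R', 3), ('B', 2)]
\end{verbatim}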
\def\te#1{{\Scale[3.0]{#1}}}
\def\tf#1{{\Scale[2.6]{#1}}}
\def\tz#1{{\Scale[2.2]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.295}{\input{fig2080-3.pdf_t}}
\caption{Illustration of the proof of Observation~\ref{obs:goodsectionsexist}.}
\label{fig:2080}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Since $W$ has at least one blue-green vertex, it follows that one of the blue walks $W_1, W_3, \ldots,$ $W_{k-1}$ contains at least one blue-green vertex. Without loss of generality, we may assume that $W_1$ has at least one blue-green vertex. Note that $W_1$ starts with a blue-red vertex $u$, ends with another blue-red vertex $v$, and every internal vertex of $W_1$ is either blue or blue-green. As we illustrate in Figure~\ref{fig:2080}(ii), we let $w$ (respectively, $z$) be the first (respectively, last) blue-green vertex that we encounter as we traverse $W_1$ from $u$ to $v$. We note that $w$ and $z$ may be the same vertex.
As we illustrate in Figure~\ref{fig:2080}(iii), the subwalk of $W_1$ that starts at the blue-red vertex $u$ and ends at the blue-green vertex $w$ is a good section of a face $g$ back in $P$ (see also Figure~\ref{fig:2040}). Similarly, as we show in Figure~\ref{fig:2080}(iv), the subwalk of $W_1$ that starts at the blue-green vertex $z$ and ends at the blue-red vertex $v$ is a good section of a face $h$ back in $P$ (see also Figure~\ref{fig:2040}). Thus there are at least two good blue sections in $P$, and so we are done.
\end{proof}
\section{Proof of Proposition~\ref{pro:twoirr}}\label{sec:prooftwoirr}
The proof of Proposition~\ref{pro:twoirr} has two major ingredients, captured in the following two lemmas.
\begin{lemma}\label{lem:nomono}
If $P$ is an irreducible projection, then all good sections of $P$ have length one.
\end{lemma}
\begin{lemma}\label{lem:simple}
If $P$ is an irreducible projection in which all good sections have length one, then $P$ has exactly $6$ vertices.
\end{lemma}
Before moving on to the proofs of Lemmas~\ref{lem:nomono} and~\ref{lem:simple}, we note that Proposition~\ref{pro:twoirr} follows easily from them.
\begin{proof}[Proof of Proposition~\ref{pro:twoirr}, assuming Lemmas~\ref{lem:nomono} and~\ref{lem:simple}]
Let $P=B\cup R\cup G$ be an irreducible projection. Combining Lemmas~\ref{lem:nomono} and~\ref{lem:simple} we obtain that $P$ has exactly $6$ vertices. Since every pairwise crossing projection has at least two bichromatic vertices of each type, it follows that $P$ has exactly two blue-red vertices, exactly two blue-green vertices, exactly two red-green vertices, and no monochromatic vertices of any colour. That is, $B,R$, and $G$ are cycles that pairwise cross each other in exactly two vertices. It is a straightforward exercise to verify that then $P$ is equivalent to either $P_1$ or $P_2$ in Figure~\ref{fig:670}: this is the well-known fact that every arrangement of three pseudocircles that pairwise cross exactly twice is equivalent to either the Krupp arrangement or to the non-Krupp arrangement (see~\cite[Figure 1]{felsnerscheucher}).
\end{proof}
\subsection{Proof of Lemma~\ref{lem:nomono}}
Lemma~\ref{lem:nomono} is an immediate consequence of the next two statements.
\begin{claim}\label{cla:ge3}
In an irreducible projection there cannot be any good section of length at least three.
\end{claim}
\begin{claim}\label{cla:eq2}
In an irreducible projection there cannot be any good section of length exactly two.
\end{claim}
\begin{proof}[Proof of Claim~\ref{cla:ge3}]
By way of contradiction, suppose that $P=B\cup R\cup G$ has a facial cycle $C$ with a good section $S=v_0 e_1 v_1 \ldots e_k v_k$ of length $k\ge 3$. Recall that the endvertices of every good section are bichromatic vertices of distinct types. Without loss of generality we may assume that $S$ is blue, and that $v_0$ is blue-red and $v_k$ is blue-green. See Figure~\ref{fig:230} for an illustration.
\def\te#1{{\Scale[3.4]{#1}}}
\def\tf#1{{\Scale[3.4]{#1}}}
\def\tz#1{{\Scale[3.2]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.23}{\input{fig230-4.pdf_t}}
\caption{Illustration of the proof of Claim~\ref{cla:ge3}.}
\label{fig:230}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Subdivide $e_1$ with a degree $2$ vertex $x$, $e_2$ with a degree $2$ vertex $y$, and $e_k$ with a degree $2$ vertex $z$. To prove the claim we show that either (i) at least one of the two $xy$-walks is colourful; or (ii) at least one of the two $yz$-walks is colourful; or (iii) at least one of the two $xz$-walks is colourful. This will complete the proof, since in each case we can apply a shortcut operation on $P$, contradicting its irreducibility.
We traverse the blue straight-ahead closed walk $B$ starting at $x$ with the blue half-edge $x v_1$. Suppose that in this traversal we encounter $y$ before we encounter $v_k$. In this case the part of the traversal from $x$ to $y$ (that is, one of the two $xy$-walks) clearly contains neither $v_0$ nor $v_k$. Thus the other $xy$-walk contains both $v_0$ and $v_k$, and so it is colourful. Therefore in this case (i) holds. A totally analogous argument shows that if in the traversal we encounter $z$ before we encounter $v_k$, then one of the two $xz$-walks is colourful, and so (iii) holds.
We may then assume that in the traversal of $B$ starting with $x v_1$ we encounter first $v_k$, and then we encounter $y$ and $z$ in some order. In this case the part of the traversal that has $y$ and $z$ as endpoints is a $yz$-walk that contains neither $v_0$ nor $v_k$. Thus the other $yz$-walk contains both $v_0$ and $v_k$, and so it is colourful. Therefore in this case (ii) holds.
\end{proof}
\begin{proof}[Proof of Claim~\ref{cla:eq2}]
By way of contradiction, suppose that $P=B\cup R\cup G$ has a facial cycle $C$ with a good section $S=u e v e' w$ of length $2$. Without loss of generality we may assume that $S$ is blue, and that $u$ is blue-red and $w$ is blue-green. See Figure~\ref{fig:1140} for an illustration. As we also illustrate in that figure, we let $B_1$ denote the $v$-walk that contains $u$ and $e$, and we let $B_2$ denote the other $v$-walk, which is the one that contains $w$ and $e'$.
\def\te#1{{\Scale[3.4]{#1}}}
\def\tf#1{{\Scale[2.8]{#1}}}
\def\tz#1{{\Scale[2.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.3}{\input{fig1140-1.pdf_t}}
\caption{Illustration of the proof of Claim~\ref{cla:eq2}.}
\label{fig:1140}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Our strategy to prove the claim is to find structural properties of $P$ that follow from its irreducibility. These properties are captured as statements (I), (II), (III), (IV), and (V) below. As we argue after the proof of (V), these properties imply the existence of a $\Theta$ in $P$ (see Figure~\ref{fig:1540}). This will conclude the proof of the claim, since the existence of a $\Theta$ contradicts the irreducibility of $P$.
\vglue 0.2 cm
\noindent{(I) {\sl $B_1$ does not contain any blue-green vertex, and $B_2$ does not contain any blue-red vertex.}}
\begin{proof}
By way of contradiction, suppose that $B_1$ contains some blue-green vertex. Since $B_1$ contains also the blue-red vertex $u$, it follows that $B_1$ is a colourful $v$-walk, and so $B_2$ is a superfluous $v$-walk. In view of Observation~\ref{obs:supwal}, this contradicts the irreducibility of $P$. A totally analogous argument shows that $B_2$ cannot contain any blue-red vertex.
\end{proof}
\vglue 0.2 cm
\noindent{(II) {\sl $B_1$ is a cycle, and $B_2$ is a cycle.}}
\begin{proof}
By symmetry, it suffices to show that $B_2$ is a cycle. By way of contradiction, suppose that this is not the case. As we illustrate in Figure~\ref{fig:1290}, then there must exist a blue vertex $z$ in $B_2$ such that the four edges incident with $z$ are in $B_2$. As we also illustrate in that figure, the $z$-walk that contains $v$ also contains $u$ and $w$, and so it is colourful. Thus the other $z$-walk (the thick one) is superfluous. In view of Observation~\ref{obs:supwal}, this contradicts the irreducibility of $P$.
\end{proof}
\def\te#1{{\Scale[3.4]{#1}}}
\def\tf#1{{\Scale[2.8]{#1}}}
\def\tz#1{{\Scale[2.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.3}{\input{fig1290-2.pdf_t}}
\caption{Illustration of the proof of (II) in Claim~\ref{cla:eq2}.}
\label{fig:1290}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Since $B_2$ is a cycle, by the Jordan curve theorem ${\mathbb S}\setminus B_2$ has two connected components, each of which is homeomorphic to an open disk. We let $\Delta$ denote the connected component of ${\mathbb S}\setminus B_2$ that does not contain the blue-red vertex $u$. We refer the reader to Figure~\ref{fig:1340} for an illustration, where $\Delta$ is the shaded region.
\vglue 0.2 cm
\noindent{(III) {\sl If $g=st$ is an edge of $P$ contained in $\Delta$, and its endvertices $s$ and $t$ are both in $B_2$, then either $s=w$ or $t=w$.}}
\begin{proof}
By way of contradiction, suppose that $s{\neq}w$ and $t{\neq}w$. See Figure~\ref{fig:1340}. As we also illustrate in that figure, we subdivide one of the two edges of $B_2$ incident with $s$ with a degree $2$ vertex $x$, and we subdivide one of the two edges of $B_2$ incident with $t$ with a degree $2$ vertex $y$.
\def\te#1{{\Scale[3.4]{#1}}}
\def\tf#1{{\Scale[2.8]{#1}}}
\def\tz#1{{\Scale[2.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.28}{\input{fig1340-2.pdf_t}}
\caption{Illustration of the proof of (III) in Claim~\ref{cla:eq2}.}
\label{fig:1340}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Since $g$ is an edge, then $x$ and $y$ are in the same face of $P$. Now one of the $xy$-walks is completely contained in $B_2$: this is the $xy$-walk highlighted with thick edges in Figure~\ref{fig:1340}. The other $xy$-walk contains $u$ and $w$, and so it is colourful. Since $x$ and $y$ are in the same face it follows that we can shortcut $P$ at $x$ and $y$. But this contradicts the irreducibility of $P$.
\end{proof}
\vglue 0.2 cm
\noindent{(IV) {\sl The only blue vertex is $v$.}}
\vglue 0.2 cm
\begin{proof}
By way of contradiction, suppose that there is a blue vertex $s$ distinct from $v$. Since each of $B_1$ and $B_2$ is a cycle, it follows that $s$ must be in $B_1\cap B_2$. As we illustrate in Figure~\ref{fig:1410}, it follows that there must be a subpath $Q$ of $B_1$, contained in $\Delta$, that has $s$ as one of its endvertices. We let $t$ denote the endvertex of $Q$ that is not $s$.
\def\te#1{{\Scale[3.4]{#1}}}
\def\tf#1{{\Scale[2.8]{#1}}}
\def\tz#1{{\Scale[2.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.28}{\input{fig1410-1.pdf_t}}
\caption{Illustration of the proof of (IV) in Claim~\ref{cla:eq2}.}
\label{fig:1410}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
It follows from (III) that $Q$ cannot consist of a single edge, and so there is at least one internal vertex $z$ of $Q$. We claim that $z$ must be blue-red, as illustrated in Figure~\ref{fig:1410}. To see this, first we note that $z$ cannot be blue, since every blue vertex is in $B_1\cap B_2$, and hence in the boundary of $\Delta$. Thus $z$ is either blue-red or blue-green. Now if $z$ is blue-green, then the $v$-walk $B_1$ is colourful (as it also contains the blue-red vertex $u$), and so $B_2$ is superfluous. Since by Observation~\ref{obs:supwal} this contradicts the irreducibility of $P$, we conclude that $z$ cannot be blue-green. Thus $z$ is blue-red, as claimed.
To finish the proof we note that one of the two $t$-walks contains $w,v,s$, and $z$: this is the $t$-walk highlighted with thick edges in Figure~\ref{fig:1410}. Since $w$ is blue-green and $z$ is blue-red, this $t$-walk is colourful. Thus the other $t$-walk is superfluous. In view of Observation~\ref{obs:supwal}, this contradicts the irreducibility of $P$.
\end{proof}
\vglue 0.2 cm
\noindent{(V) {\sl There is no green vertex contained in $\Delta$.}}
\vglue 0.2 cm
\begin{proof}
By way of contradiction, suppose that there is a green vertex $z$ contained in $\Delta$. We start by noting that the red straight-ahead walk $R$ lies entirely outside $\Delta$. Indeed, if some part of $R$ were contained in $\Delta$, since the blue-red vertex $u$ is outside $\Delta$ then necessarily $R$ would intersect the boundary $B_2$ of $\Delta$. In particular, $B_2$ would contain at least one blue-red vertex, contradicting (I).
Let $G_1, G_2$ be the two green $z$-walks. Since $P$ is pairwise crossing it follows that at least one of $G_1$ and $G_2$ has a green-red vertex. Without loss of generality, we may assume that $G_1$ has a red-green vertex. Since $R$ lies entirely outside $\Delta$, and $z$ is inside $\Delta$, it follows that $G_1$ must cross $B_2$. In particular, $G_1$ has at least one green-blue vertex. Thus $G_1$ has at least one green-red vertex and at least one green-blue vertex, and so it is colourful. Thus $G_2$ is a superfluous $z$-walk. By Observation~\ref{obs:supwal}, this contradicts the irreducibility of $P$.
\end{proof}
\vglue 0.3cm
\noindent{\em Conclusion of the proof of Claim~\ref{cla:eq2}.} We start by noting that no vertex can be contained in $\Delta$. Indeed, by way of contradiction, suppose that some vertex $z$ of $P$ is contained in $\Delta$. As we argued in the proof of (V), the red straight-ahead walk $R$ is contained outside $\Delta$, and so $z$ is neither red, nor red-blue, nor red-green. We know from (IV) that $v$ is the only blue vertex, and from this it follows that no blue edge can be inside $\Delta$. Therefore $z$ is neither blue, nor blue-red, nor blue-green. The only possibility left is that $z$ is green, but this is ruled out using (V). We conclude that no vertex is contained in $\Delta$.
Let $e_w$ be the green edge incident with the blue-green vertex $w$ that is contained in $\Delta$. Since no vertex is contained in $\Delta$, it follows that the other endvertex of $e_w$ is a blue-green vertex $q$ in $B_2$. See Figure~\ref{fig:1540}. We claim that $e_w$ is the only part of $P$ contained in $\Delta$. Indeed, by way of contradiction suppose that some edge $g=st$ other than $e_w$ is contained in $\Delta$. Since no vertex is contained in $\Delta$, it follows that both $s$ and $t$ are in $B_2$. Since $g\neq e_w$, it follows that neither $s$ nor $t$ is equal to $w$, contradicting (III).
\def\te#1{{\Scale[3.4]{#1}}}
\def\tf#1{{\Scale[2.8]{#1}}}
\def\tz#1{{\Scale[2.4]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.3}{\input{fig1540-1.pdf_t}}
\caption{Conclusion of the proof of Claim~\ref{cla:eq2}.}
\label{fig:1540}
\end{figure}
\def\tf#1{{\Scale[2.4]{#1}}}
\def\tz#1{{\Scale[2.0]{#1}}}
Thus $e_w$ is the only part of $P$ contained in $\Delta$. As illustrated in Figure~\ref{fig:1540}, it follows that $v,q$, and $w$ are the only vertices of $B_2$, and so $B_2 + e_w$ is a $\Theta$ in $P$. But this is impossible, since by assumption $P$ is irreducible.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:simple}}
As we shall see, Lemma~\ref{lem:simple} follows from three easy observations and an application of Euler's formula.
\begin{proof}[Proof of Lemma~\ref{lem:simple}]
Let $P=B\cup R\cup G$ be an irreducible projection in which all good sections have length one. That is, every good section is an edge, which we will call a {\em good} edge. Thus, for instance, a blue edge is good if and only if one of its endvertices is blue-red and the other is blue-green. We recall from Observation~\ref{obs:goodsectionsexist} that there exist at least two good sections of each colour, and so it follows that $P$ has at least two good edges of each colour.
\vglue 0.2 cm
\noindent{\bf Claim A.} {\sl $P$ does not have any monochromatic vertices.}
\vglue 0.2 cm
\begin{proof}
By symmetry it suffices to show that there are no blue vertices in $P$. By way of contradiction, suppose that $P$ has a blue vertex $u$. Let $e$ be a good blue edge. One of the two $u$-walks contains the edge $e$, and so it contains at least one blue-red vertex and at least one blue-green vertex. Therefore the $u$-walk that does not contain $e$ is superfluous. In view of Observation~\ref{obs:supwal}, this contradicts the assumption that $P$ is irreducible.
\end{proof}
The {\em degree} $|f|$ of a face $f$ of $P$ is the number of edges in the facial cycle that bounds $f$.
\vglue 0.2 cm
\noindent{\bf Claim B.} {\sl If $P$ has at least three faces of degree $2$, then $P$ has exactly $6$ vertices.}
\vglue 0.2 cm
\begin{proof}
Suppose that there exist three faces $f_1, f_2, f_3$ of $P$ that have degree $2$. Then for $i=1,2,3$ the face $f_i$ is bounded by a digon $D_i$. By Claim A we have that $P$ has no monochromatic vertices, and so each of $D_1,D_2$, and $D_3$ is a digon whose two vertices are bichromatic of the same type.
Without loss of generality, we may assume that the vertices of $D_1$ are blue-red. In view of Observation~\ref{obs:disdig} it follows that these are the two only blue-red vertices of $P$, as otherwise $D_1$ would be a disposable digon. Once again without loss of generality we may then assume that the vertices of $D_2$ are blue-green, and again using Observation~\ref{obs:disdig} it follows that these two are the only blue-green vertices of $P$. Therefore the vertices of $D_3$ are necessarily red-green, and invoking again Observation~\ref{obs:disdig} these two are the only red-green vertices of $P$. Combining these conclusions we have that $P$ has exactly two blue-red vertices, two blue-green vertices, and two red-green vertices. Since all the vertices of $P$ are bichromatic, it follows that $P$ has exactly $6$ vertices.
\end{proof}
\vglue 0.2 cm
\noindent{\bf Claim C.} {\sl $P$ has at most three faces of degree at least $4$.}
\vglue 0.2 cm
\begin{proof}
By way of contradiction, suppose that $P$ has at least four faces of degree at least $4$. Clearly, the facial cycle of a face of degree at least $4$ contains (at least) two edges of the same colour. Thus there exists a colour $c\in\{\text{\rm blue,red,green}\}$ and two faces $f,f'$ of $P$ such that the facial cycle $C$ of $f$ contains at least two edges of colour $c$, and also the facial cycle $C'$ of $f'$ contains at least two edges of colour $c$. Without loss of generality, we may assume that $c$ is blue.
Therefore $C$ contains two blue edges $e_1,e_2$, and $C'$ contains two blue edges $e_3,e_4$. Note that $\{e_1,e_2\}$ and $\{e_3,e_4\}$ are not necessarily disjoint, but they cannot be the same set. Since $P$ has at least two good blue edges, this implies that either (i) there is a good blue edge $e$ that is not in $\{e_1,e_2\}$; or (ii) there is a good blue edge $e$ that is not in $\{e_3,e_4\}$. Without loss of generality, we may assume that (i) holds.
Recall that $e_1$ and $e_2$ are blue edges in the boundary of the same face $f$. We subdivide $e_1$ (respectively, $e_2$) with a degree $2$ vertex $x$ (respectively, $y$). One of the two $xy$-walks contains the edge $e$. Since $e$ is good, one of its endvertices is blue-red and the other one is blue-green. Therefore one of the two $xy$-walks is colourful, and so we can shortcut $P$ at $x$ and $y$. But this contradicts the assumption that $P$ is irreducible.
\end{proof}
We know from Claim A that $P$ has no monochromatic vertices, and so each of $B,R$, and $G$ is a cycle. Any two of these cycles have an even number of vertices in common. That is, there is an even number of bichromatic vertices of each type. Therefore $P$ has an even number of vertices. Since Lemma~\ref{lem:simple} claims that $P$ has exactly six vertices, in order to prove the lemma we need to show that $P$ cannot have $8$ or more vertices.
By way of contradiction, suppose that $P$ has $n\ge 8$ vertices. Since $P$ is a $4$-regular graph, it follows that $P$ has $2n$ edges. Using Euler's formula we obtain that $P$ has $n+2$ faces, which we label $f_1,f_2,\ldots,f_{n+2}$ so that $|f_1|\le |f_2| \le \cdots \le |f_{n+2}|$. Since $\sum_{i=1}^{n+2} |f_i|$ equals twice the number of edges of $P$, it follows that $\sum_{i=1}^{n+2} |f_i| = 4n$.
By Claim B we have that $|f_3|\ge 3$, and Claim C implies that $|f_{n-1}|\le 3$. Combining these observations we have that $|f_3|=|f_4|=\cdots=|f_{n-1}|=3$. Thus there are at least $n-3$ faces of degree $3$. Since $n\ge 8$, there are at least $5$ faces of degree $3$. Note that since there are no monochromatic vertices, every face of degree $3$ consists of a good blue edge, a good red edge, and a good green edge. Since every edge belongs to exactly two faces, and there are at least $5$ faces of degree $3$, we conclude that there are at least three good edges of each colour.
We claim that this implies that there cannot be any face of degree at least four. Indeed, seeking a contradiction, suppose that there is a face $f$ of degree at least four. Then $f$ has (at least) two edges $e',e''$ of the same colour, which without loss of generality we may assume to be blue. We subdivide $e'$ (respectively, $e''$) with a degree $2$ vertex $x$ (respectively, $y$). Since there are at least three good blue edges, there exists a good blue edge $e$ that is neither $e'$ nor $e''$. Since one endvertex of $e$ is blue-red and the other is blue-green, the $xy$-walk that contains $e$ is colourful. Thus we can shortcut $P$ at $x$ and $y$, contradicting the irreducibility of $P$.
Thus no face has degree at least four, and so necessarily $|f_{n}|=|f_{n+1}|=|f_{n+2}|=3$. Therefore $\sum_{i=1}^{n+2}|f_i| = |f_1| + |f_2| + \sum_{i=3}^{n+2}|f_i| \le 3 + 3 + 3\cdot n= 3n+6$. Since $\sum_{i=1}^{n+2} |f_i| = 4n$, it follows that $4n\le 3n+6$, and so $n\le 6$. This contradicts the assumption that $n\ge 8$.
\end{proof}
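For concreteness, the counting at the end of the proof can be checked numerically for the smallest excluded case $n=8$; the sketch below only repeats the counts used above.
\begin{verbatim}
# Minimal numerical sketch of the final counting argument, for n = 8.
n = 8
edges = 2 * n                   # P is 4-regular
faces = edges - n + 2           # Euler's formula on the sphere: V - E + F = 2
total_face_degree = 2 * edges   # every edge lies on exactly two faces
upper_bound = 3 * faces         # no face has degree >= 4, so each contributes <= 3
print(faces, total_face_degree, upper_bound)  # 10 32 30: 32 > 30, a contradiction
\end{verbatim}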
\section{Concluding remarks}
Taniyama's motivation in~\cite{taniyamaknots} was the investigation of a relation $\ge$ on links. Following Taniyama, given two links $L_1,L_2$, we write $L_1 \ge L_2$ if every projection of $L_1$ is also a projection of $L_2$. In this case we say that $L_2$ is a {\em minor} of $L_1$, and that $L_1$ {\em majorizes} $L_2$.
The characterization given by Theorem~\ref{thm:main} determines which 3-component links majorize $L6n1$: {\em a 3-component link $L$ majorizes $L6n1$ if and only if every projection of $L$ is pairwise crossing}. Clearly, a $3$-component link $L$ satisfies that all its projections are pairwise crossing if and only if its components are pairwise linked. Thus we have the following.
\begin{observation}\label{obs:maj2}
A link $L$ majorizes $L6n1$ if and only if $L$ is a $3$-component link whose components are pairwise linked.
\end{observation}
In the other direction, Theorem~\ref{thm:main} can also be used to find out which links are majorized by $L6n1$: {\em a $3$-component link $L$ is majorized by $L6n1$ if and only if every pairwise crossing projection $P$ is a projection of $L$}. It is not difficult to show that the only links (other than $L6n1$) that satisfy this property are the three links in Figure~\ref{fig:850}. Therefore:
\begin{observation}\label{obs:maj1}
The links that are majorized by $L6n1$ are $L6n1$ itself and the links in Figure~\ref{fig:850}.
\end{observation}
\def\tz#1{{\Scale[3.0]{#1}}}
\def\tf#1{{\Scale[1.6]{#1}}}
\begin{figure}[ht!]
\centering
\scalebox{0.62}{\input{fig850-2.pdf_t}}
\caption{Links that are majorized by $L6n1$.}
\label{fig:850}
\end{figure}
\def\tz#1{{\Scale[2.0]{#1}}}
\def\tf#1{{\Scale[2.4]{#1}}}
\section*{Acknowledgments}
This work was supported by CONACYT under Proyecto Ciencia de Frontera 191952.
\bibliographystyle{abbrv}
| {
"timestamp": "2022-08-31T02:04:46",
"yymm": "2208",
"arxiv_id": "2208.13926",
"language": "en",
"url": "https://arxiv.org/abs/2208.13926",
"abstract": "Given a link projection $P$ and a link $L$, it is natural to ask whether it is possible that $P$ is a projection of $L$. Taniyama answered this question for the cases in which $L$ is a prime knot or link with crossing number at most five. Recently, Takimura settled the issue for the knot $6_2$. We answer this question for the case in which $L$ is the link $L6n1$.",
"subjects": "Geometric Topology (math.GT); Combinatorics (math.CO)",
"title": "Regular projections of the link L6n1",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9669140235181257,
"lm_q2_score": 0.7341195152660687,
"lm_q1q2_score": 0.7098304542490905
} |
https://arxiv.org/abs/1102.3080 | Covering Point Patterns | An encoder observes a point pattern---a finite number of points in the interval $[0,T]$---which is to be described to a reconstructor using bits. Based on these bits, the reconstructor wishes to select a subset of $[0,T]$ that contains all the points in the pattern. It is shown that, if the point pattern is produced by a homogeneous Poisson process of intensity $\lambda$, and if the reconstructor is restricted to select a subset of average Lebesgue measure not exceeding $DT$, then, as $T$ tends to infinity, the minimum number of bits per second needed by the encoder is $-\lambda\log D$. It is also shown that, as $T$ tends to infinity, any point pattern on $[0,T]$ containing no more than $\lambda T$ points can be successfully described using $-\lambda \log D$ bits per second in this sense. Finally, a Wyner-Ziv version of this problem is considered where some of the points in the pattern are known to the reconstructor. | \section{Introduction}
An encoder observes a point pattern---a finite number of points in the
interval $[0,T]$---which is to be described to a reconstructor using
bits. Based on these bits, the reconstructor wishes to produce a
covering-set---a subset of $[0,T]$ containing all the points---of
least Lebesgue measure. There is a trade-off between the number of
bits used and the Lebesgue measure of the covering-set. This trade-off
can be formulated as a continuous-time rate-distortion problem
(Section~\ref{sec:poisson}). In this paper we investigate this
trade-off in the limit where $T\to\infty$.
When the point pattern is produced by a
homogeneous Poisson process, this problem is closely related to
that of transmitting information through an ideal peak-limited
Poisson channel
\cite{kabanov78,davis80,wyner88,wyner88b}. In fact, the two problems
can be considered dual in the sense of
\cite{coverchiang02}. However, the duality results of
\cite{coverchiang02} only apply to discrete memoryless channels and
sources, so they cannot be directly used to solve our
problem. Instead, we shall use a technique that is similar to Wyner's
\cite{wyner88,wyner88b} to find the desired rate-distortion
function. We shall show that, if the point pattern is the
outcome of a homogeneous Poisson process of intensity $\lambda$, and
if the reconstructor is restricted to select covering-sets of average
measure not exceeding $DT$, then the minimum number of
bits per second needed by the encoder to describe the pattern is
$-\lambda\log D$.
Previous works \cite{rubin74,colemankiyavashsubramanian08} have
studied rate-distortion functions of the Poisson process with different
distortion measures. It is interesting to
notice that our rate-distortion function, $-\lambda\log D$, is
equal to the one in \cite{colemankiyavashsubramanian08},
where a queueing distortion measure is considered. This is no
coincidence, since the Poisson channel is closely related to the
queueing channel introduced in \cite{anantharamverdu96}.
We also show that the Poisson process is the most difficult to cover,
in the sense that any point process that, with high probability, has no
more than $\lambda T$ points in $[0,T]$ can be described with
$-\lambda \log D$ bits per second. This is true even if an adversary
selects the point pattern, provided that the pattern contains no more
than $\lambda$ points per second and that the encoder and the
reconstructor are allowed to use random codes.
Finally, we consider a Wyner-Ziv setting \cite{wynerziv76} of the
problem where some points in the pattern are known to the
reconstructor but the encoder does not know which ones they are. This
can be viewed as a dual problem to the Poisson channel with
noncausal side-information \cite{brosslapidothWLG09}. We
show that in this setting one can achieve the same minimum rate as
when the transmitter \emph{does} know the reconstructor's side-information.
The rest of this paper is arranged as follows: in
Section~\ref{sec:notations} we introduce some notation; in
Section~\ref{sec:poisson} we present the result for the Poisson process;
in Section~\ref{sec:general} we present the results for general point
processes and arbitrary point patterns; and in Section~\ref{sec:wz} we
present the results for the Wyner-Ziv setting.
\section{Notation}\label{sec:notations}
We use a lower-case letter like $x$ to denote a number, and an upper-case
letter like $X$ to denote a random variable. We use a boldface
lower-case letter like $\vect{x}$ to denote a vector, a function of
reals, or a point pattern, and it will be clear from the
context which one we mean. If $\vect{x}$ is a vector, $x_i$ denotes its
$i$th element. If $\vect{x}$ is a function, $x(t)$
denotes its value at $t\in\Reals$. If $\vect{x}$ is a point pattern,
we use $n_\vect{x}(\cdot)$ to denote its counting function, so
$n_\vect{x}(t_2)-n_\vect{x}(t_1)$ is the number of points in
$\vect{x}$ that fall in the interval $(t_1,t_2]$. We use a bold-face
upper-case letter like $\vect{X}$ to denote a random vector, a random
function, or a random point process. The random counting function
corresponding to a point process $\vect{X}$ is denoted by
$N_{\vect{X}}(\cdot)$.
We use $\textnormal{Ber}(p)$ to denote the Bernoulli distribution of
parameter $p$, namely, the distribution that has probability $p$ on
the outcome $1$ and probability $(1-p)$ on the outcome $0$.
\section{Covering a Poisson Process}\label{sec:poisson}
Consider a homogeneous Poisson process $\mathbf{X}$ of
intensity~$\lambda$ on the interval $[0,T]$. Its counting function
$N_{\vect{X}}(\cdot)$ satisfies
\begin{equation*}
\Pr\left[N_{\vect{X}}(t+\tau)-N_{\vect{X}}(t)=k\right] = \frac{e^{-\lambda
\tau}(\lambda\tau)^k}{k!}
\end{equation*}
for all $\tau\in[0,T]$, $t\in[0,T-\tau]$ and $k\in\{0,1,\ldots\}$.
The encoder maps the realization of the Poisson process to a message
in $\{1,\ldots,2^{TR}\}$. The reconstructor then maps this message
to a $\{ 0,1 \}$-valued, Lebesgue-measurable, signal $\hat{x}(t)$, $t\in
[0,T]$. We wish to minimize the total length of the region where
$\hat{x}(t)=1$ while guaranteeing that all points in the original
Poisson process lie in this region. See
Figure~\ref{fig:problem-illustration} for an illustration.
\begin{figure}[htbp]
\centering
\setlength{\unitlength}{0.68cm}
\begin{picture}(12.5,4.5)
\put(0.4,3.7){$\mathbf{x}$}
\put(0,2.6){\vector(1,0){12}}
\put(12,2.1){$t$}
\put(1.2,2.6){\line(0,1){1}}
\put(3.7,2.6){\line(0,1){1}}
\put(5.2,2.6){\line(0,1){1}}
\put(5.7,2.6){\line(0,1){1}}
\put(8.3,2.6){\line(0,1){1}}
\put(9.6,2.6){\line(0,1){1}}
\multiput(1.2,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(3.7,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(5.2,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(5.7,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(8.3,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(9.6,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\put(0,0.6){\vector(1,0){12}}
\put(12,0.1){$t$}
\linethickness{0.5mm}
\put(0,0.6){\line(1,0){0.2}}
\put(0.2,0.6){\line(0,1){1}}
\put(0.2,1.6){\line(1,0){1.3}}
\put(1.5,0.6){\line(0,1){1}}
\put(1.5,0.6){\line(1,0){2}}
\put(3.5,0.6){\line(0,1){1}}
\put(3.5,1.6){\line(1,0){2.9}}
\put(6.4,0.6){\line(0,1){1}}
\put(6.4,0.6){\line(1,0){2.7}}
\put(9.1,0.6){\line(0,1){1}}
\put(9.1,1.6){\line(1,0){0.6}}
\put(9.7,0.6){\line(0,1){1}}
\put(9.7,0.6){\line(1,0){1.3}}
\put(0.4,1.8){$\mathbf{\hat{x}}$}
\put(6.8,1.4){\small{missed!}}
\put(7.5,1.35){\vector(1,-1){0.7}}
\end{picture}
\caption{Illustration of the problem.}
\label{fig:problem-illustration}
\end{figure}
More formally, we formulate this problem as a continuous-time
rate-distortion problem, where the distortion between the point
pattern $\mathbf{x}$ and the reproduction signal
$\hat{\mathbf{x}}$ is
\begin{equation}\label{eq:distortion}
d(\mathbf{x},\hat{\mathbf{x}}) \triangleq \begin{cases}
\frac{\mu\left(\hat{x}^{-1}(1)\right)}{T}, & \textnormal{if
all points in $\mathbf{x}$ are in $\hat{x}^{-1}(1)$}\\
\infty, & \textnormal{otherwise}\end{cases}
\end{equation}
where $\mu(\cdot)$ denotes the Lebesgue measure.
We say that $(R,D)$ is an achievable rate-distortion pair for the
homogeneous Poisson process of intensity $\lambda$ if, for every
$\epsilon>0$, there exists some $T_0>0$
such that, for every $T>T_0$, there exists an encoder
$f_T(\cdot)$ and a
reconstructor $\phi_T(\cdot)$ of rate $R+\epsilon$ bits per second
which, when applied to the Poisson
process $\vect{X}$ on $[0,T]$, gives
\begin{equation*}
\E{d\bigl(\mathbf{X},\phi_T\left(f_T({\mathbf{X}})\right)\bigr)} \le
D+\epsilon.
\end{equation*}
Denote by $R(D,\lambda)$ the minimum rate $R$ such that $(R,D)$ is
achievable for the homogeneous Poisson process of intensity
$\lambda$. Define
\begin{equation}\label{eq:poisson}
R_{\textnormal{Pois}}(D,\lambda)\triangleq \begin{cases} -\lambda\log D \textnormal{ bits per
second},& D\in(0,1)\\ 0, & D\ge 1.\end{cases}
\end{equation}
\begin{thm}\label{thm:poisson}
For all $D,\lambda>0$,
\begin{equation}\label{eq:poisson1}
R(D,\lambda)=R_{\textnormal{Pois}}(D,\lambda).
\end{equation}
\end{thm}
To prove Theorem \ref{thm:poisson}, we propose a scheme to reduce the
original problem to one for a discrete memoryless source. This is
reminiscent of Wyner's
scheme for reducing the peak-limited Poisson channel to a discrete
memoryless channel \cite{wyner88}. We shall
show the optimality of this scheme in Lemma~\ref{lem:optimality}, and
we shall
then prove Theorem~\ref{thm:poisson} by computing the best rate that
is achievable using this scheme.
\emph{Scheme 1:} We divide the time-interval
$[0,T]$ into slots of $\Delta$ seconds long. The encoder first maps
the original point pattern $\vect{x}$ to a $\{0,1\}$-valued vector
$\vect{x}'$ of length $\frac{T}{\Delta}$\footnote{When $T$ is not
divisible by $\Delta$, we consider $\vect{x}$ as a
pattern on $[0,T']$ where $T'=\lceil \frac{T}{\Delta}\rceil
\Delta$. When we let $\Delta$ tend to zero, the difference between
$T$ and $T'$ also tends to zero. Henceforth we ignore this technicality
and assume $T$
is divisible by $\Delta$.} in the following way: if
$\vect{x}$ has at least one point in the time-slot $((i-1)\Delta,
i\Delta]$, choose $x_i'=1$; otherwise choose $x_i'=0$. The encoder
then maps $\vect{x}'$ to a message in $\{1,\ldots,2^{TR}\}$.
Based on the encoder's message, the reconstructor produces a
$\{0,1\}$-valued length-$\frac{T}{\Delta}$ vector $\hat{\vect{x}}'$ to
meet the distortion criterion
\begin{equation*}
\E{d'(\vect{X}',\hat{\vect{X}}')} \le D+\epsilon,
\end{equation*}
where the distortion measure $d'(\cdot,\cdot)$ is given by
\begin{IEEEeqnarray*}{rCl}
d'(0,0)& = & 0\\
d'(0,1)& = & 1\\
d'(1,0)& = & \infty\\
d'(1,1)& = & 1.
\end{IEEEeqnarray*}
It then maps $\hat{\vect{x}}'$ to a continuous-time signal
$\hat{\vect{x}}$ through
\begin{equation*}
\hat{x}(t)=\hat{x}_{\lceil \frac{t}{\Delta} \rceil}',\quad t\in[0,T].
\end{equation*}
Scheme~1 reduces the task of designing a code for
$\vect{X}$ subject to distortion $d(\cdot,\cdot)$
to the task of designing a code for the vector
$\vect{X}'$ subject to the distortion $d'(\cdot,\cdot)$. The way we
define $d'(\cdot,\cdot)$ yields the simple relation
\begin{equation}
d(\vect{x},\hat{\vect{x}})=d'(\vect{x}',\hat{\vect{x}}').
\end{equation}
When $\vect{X}$ is the homogeneous Poisson process of intensity
$\lambda$, the components of $\vect{X}'$ are
independent and identically distributed (IID)
$\textnormal{Ber}(1-e^{-\lambda \Delta})$. Let $R_\Delta(D,\lambda)$
denote the rate-distortion function for $\vect{X}'$ and
$d'(\cdot,\cdot)$. If we combine
Scheme~1 with an optimal code for $\vect{X}'$ subject to
$\E{d'(\vect{X}', \hat{\vect{X}}')} < D+\epsilon$, we can achieve any
rate that is larger than
\begin{equation*}
\frac{R_\Delta(D,\lambda) \textnormal{ bits}}{\Delta \textnormal{
seconds}}.
\end{equation*}
The next lemma, which is reminiscent of
\cite[Theorem 2.1]{wyner88b}, shows that when we let $\Delta$ tend to
zero, there is no loss in optimality in using Scheme~1.
\begin{lem}\label{lem:optimality}
For all $D,\lambda > 0$,
\begin{equation}\label{eq:lem}
R(D,\lambda)=\lim_{\Delta\downarrow 0}
\frac{R_\Delta(D,\lambda)}{\Delta}.
\end{equation}
\end{lem}
\begin{IEEEproof} See Appendix.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem \ref{thm:poisson}]
We derive $R(D,\lambda)$ by computing the
right-hand side of \eqref{eq:lem}. To compute $R_\Delta(D,\lambda)$ we
apply Shannon's formula of the rate-distortion function for a discrete
memoryless source \cite{shannon48}:
\begin{equation}\label{eq:1}
R_\Delta(D,\lambda)=\min_{P_{\hat{Z}|Z}: \E{d_\Delta(Z,\hat{Z})}\le D}
I(Z;\hat{Z}). \footnote{Strictly speaking, since our distortion
measure is unbounded, we need to modify Shannon's proof of this
formula in order to use it for our problem. This can be done
by letting the reconstructor produce the
all-one sequence, which yields bounded distortion for any source
sequence, whenever no codeword can be found that is jointly typical
with the source sequence.}
\end{equation}
When $D\in(0,1)$, the
conditional distribution $P_{\hat{Z}|Z}$ which achieves the minimum on
the right-hand side of \eqref{eq:1} is
\begin{IEEEeqnarray*}{rCl}
P_{\hat{Z}|Z}^*(1|0) & = & D e^{\lambda\Delta}-e^{\lambda\Delta}+1,\\
P_{\hat{Z}|Z}^*(1|1) & = & 1.
\end{IEEEeqnarray*}
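(One can check directly that this choice meets the distortion constraint with equality:
$\E{d_\Delta(Z,\hat{Z})}=\Pr\bigl[\hat{Z}=1\bigr]=(1-e^{-\lambda\Delta})+e^{-\lambda\Delta}\bigl(De^{\lambda\Delta}-e^{\lambda\Delta}+1\bigr)=D$.)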
Computing the mutual information $I(Z;\hat{Z})$ under this
$P_{\hat{Z}|Z}^*$ yields
\begin{equation}\label{eq:2}
R_\Delta(D,\lambda)=\Hb(D)-e^{-\lambda\Delta}\Hb(D
e^{\lambda\Delta}- e^{\lambda\Delta}+1),\ \ D\in(0,1),
\end{equation}
where $\Hb(\cdot)$ denotes the binary entropy function.
When $D\ge 1$, it is optimal to choose $\hat{Z}=1$
(deterministically), yielding
\begin{equation}\label{eq:3}
R_\Delta(D,\lambda)=0,\quad D\ge 1.
\end{equation}
Combining \eqref{eq:lem}, \eqref{eq:2} and \eqref{eq:3} and computing
the limit as $\Delta$ tends to zero yields
\eqref{eq:poisson1}.
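In more detail, a sketch of this limit computation: write $\varepsilon=\lambda\Delta$ and $u=De^{\varepsilon}-e^{\varepsilon}+1$, so that \eqref{eq:2} reads $R_\Delta(D,\lambda)=\Hb(D)-e^{-\varepsilon}\Hb(u)$. A first-order expansion around $\varepsilon=0$ gives
\begin{equation*}
e^{-\varepsilon}\Hb(u)=\Hb(D)-\varepsilon\bigl(\Hb(D)+(1-D)\Hb'(D)\bigr)+O(\varepsilon^2),
\end{equation*}
and since $\Hb(D)+(1-D)\Hb'(D)=-\log D$, we obtain $R_\Delta(D,\lambda)/\Delta\to-\lambda\log D$ as $\Delta\downarrow 0$.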
\end{IEEEproof}
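As a purely numerical illustration of this limit (a small Python sketch, not used anywhere in the proofs; the variable names are ours), one may evaluate \eqref{eq:2} for decreasing values of $\Delta$ and compare with $-\lambda\log D$:
\begin{verbatim}
from math import exp, log2

def Hb(p):  # binary entropy in bits, with Hb(0) = Hb(1) = 0
    return 0.0 if p <= 0.0 or p >= 1.0 else -p*log2(p) - (1-p)*log2(1-p)

lam, D = 2.0, 0.3
for Delta in (0.1, 0.01, 0.001):  # small enough that 1 - exp(-lam*Delta) < D
    u = D*exp(lam*Delta) - exp(lam*Delta) + 1
    print(Delta, (Hb(D) - exp(-lam*Delta)*Hb(u)) / Delta)
print("-lam*log2(D) =", -lam*log2(D))  # limiting value, approximately 3.474
\end{verbatim}
The printed ratios approach $-\lambda\log_2 D\approx 3.474$ bits per second, in agreement with Theorem~\ref{thm:poisson}.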
\section{Covering General Point Processes and Arbitrary Point
Patterns}\label{sec:general}
We next consider a general point process $\mathbf{Y}$.
We assume that there exists some $\lambda$ such that
\begin{equation}\label{eq:general}
\lim_{t\to\infty} \Pr \left[
\frac{N_{\vect{Y}}(t)}{t}>\lambda+\delta\right] = 0 \quad \textnormal{for
all } \delta>0.
\end{equation}
Condition \eqref{eq:general} is satisfied, for example, when
$\mathbf{Y}$ is an ergodic process whose expected number of points per
second is less than or equal to $\lambda$.
Since the Poisson process is memoryless, one naturally expects it to
be the most difficult to describe. This is indeed the case, as the
next theorem shows.
\begin{thm}\label{thm:general}
The pair $(R_{\textnormal{Pois}}(D,\lambda),D)$ is
achievable on any point process
satisfying~\eqref{eq:general}.
\end{thm}
Before proving Theorem \ref{thm:general}, we state a stronger
result. Consider a point pattern $\vect{z}$ chosen by an adversary
on the interval $[0,T]$ which contains no more than $\lambda T$
points. The corresponding counting function
$n_{\vect{z}}(\cdot)$ must then satisfy
\begin{equation}\label{eq:arbitrary}
n_{\vect{z}}(T) \le \lambda T.
\end{equation}
The encoder and the reconstructor are
allowed to use random codes. Namely, they fix a distribution on all
(deterministic) codes of a certain rate on $[0,T]$. According to this
distribution, they randomly pick a code which is not revealed to the
adversary. They then apply it to the point pattern $\vect{z}$ chosen
by the adversary. We say that
$(R,D)$ is achievable with random coding against an adversary subject
to \eqref{eq:arbitrary} if, for every $\epsilon>0$, there exists some
$T_0$ such that, for every $T>T_0$, there exists a random code
on $[0,T]$ of rate $R+\epsilon$ such that the expected
distortion between \emph{any} $\vect{z}$ satisfying
\eqref{eq:arbitrary} and its reconstruction is smaller than $D+\epsilon$.
\begin{thm}\label{thm:arbitrary}
The pair $(R_{\textnormal{Pois}}(D,\lambda),D)$ is
achievable with random coding against an adversary subject
to~\eqref{eq:arbitrary}.
\end{thm}
\begin{IEEEproof}
First note that when $D\ge 1$, the encoder does not need to describe
the pattern: the reconstructor simply produces the all-one
function, yielding distortion $1$ for any $\vect{z}$. Hence the pair
$(0,D)$ is achievable with random coding.
Next consider $D\in(0,1)$. We use Scheme~1 as in Section
\ref{sec:poisson} to reduce the original problem to one of random
coding for an arbitrary discrete-time sequence $\vect{z}'$. Here
$\vect{z}'$ is $\{0,1\}$-valued, has length $\frac{T}{\Delta}$, and
satisfies
\begin{equation}\label{eq:constraint_discrete}
\sum_{i=1}^{T/\Delta} z_i' \le \lambda T.
\end{equation}
We shall construct a random code of rate $\frac{R}{\Delta}$ which,
when applied to any $\vect{z}'$ satisfying
\eqref{eq:constraint_discrete}, yields
\begin{equation*}
\E{d'(\vect{z}',\hat{\vect{Z}}')} < D+\epsilon,
\end{equation*}
where the random vector $\hat{\vect{Z}}'$ is the result of applying
the random encoder and decoder to $\vect{z}'$. Combined with
Scheme~1 this random code will yield a random code on the
continuous-time point pattern $\vect{z}$ that achieves the
rate-distortion pair $(R,D)$.
Our discrete-time random code consists of $2^{TR}$ $\{0,1\}$-valued,
length-$\frac{T}{\Delta}$ random sequences $\hat{\vect{Z}}_m'$,
$m\in\{1,\ldots, 2^{TR}\}$. The first sequence $\hat{\vect{Z}}_1'$
is chosen deterministically to be the all-one sequence. The other
$2^{TR}-1$ sequences are drawn independently, with each sequence
drawn IID $\textnormal{Ber}(D)$.
To describe source sequence $\vect{z}'$, the encoder looks for a
codeword $\hat{\vect{z}}_m'$, $m\in\{2,\ldots,2^{TR}\}$ such that
\begin{equation}\label{eq:11}
\hat{z}_{m,i}'=1 \textnormal{ whenever }z_i'=1.
\end{equation}
If it finds one or more such codewords, it sends the index of the
first one; otherwise it sends $1$. The
reconstructor outputs the
sequence $\hat{\vect{z}}_m'$ where $m$ is the message it receives
from the encoder.
We next analyze the expected distortion of this random code for a
fixed $\vect{z}'$ satisfying \eqref{eq:constraint_discrete}. Define
\begin{equation*}
\mu\triangleq \frac{\sum_{i=1}^{T/\Delta} z_i'}{T},
\end{equation*}
and note that by \eqref{eq:constraint_discrete} $\mu\le
\lambda$. Denote by $\mathcal{E}$ the
event that the encoder cannot find $\hat{\vect{z}}_m'$,
$m\in\{2,\ldots, 2^{TR}\}$ satisfying \eqref{eq:11}. If
$\mathcal{E}$ occurs, the encoder sends $1$ and the resulting
distortion is equal to $1$.
The probability that a randomly
drawn codeword $\hat{\vect{Z}}_m'$ satisfies \eqref{eq:11} is
\begin{equation*}
D^{\mu T}\ge D^{\lambda T} = 2^{(\lambda \log D)T}.
\end{equation*}
Because the codewords $\hat{\vect{Z}}_m'$, $m\in\{2,\ldots,2^{TR}\}$
are chosen independently, if we choose $R>-\lambda \log D$, then
$\Pr [\mathcal{E}] \to 0$ as $T\to\infty$. Hence, for large
enough $T$, the contribution to the expected distortion from the
event $\mathcal{E}$ can be ignored.
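(Explicitly, $\Pr[\mathcal{E}]=\bigl(1-D^{\mu T}\bigr)^{2^{TR}-1}\le\exp\bigl(-(2^{TR}-1)\,2^{(\lambda\log D)T}\bigr)$, and the exponent tends to $-\infty$ whenever $R>-\lambda\log D$.)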
We next analyze the expected distortion conditional on
$\mathcal{E}^{\textnormal{c}}$. The reproduction $\hat{\vect{Z}}'$ has the
following distribution: at positions where $\vect{z}'$ takes the value
$1$, $\hat{\vect{Z}}'$ must also be $1$; at other positions the
elements of $\hat{\vect{Z}}'$ have the IID $\textnormal{Ber}(D)$
distribution. Thus the expected value of $\sum_{i=1}^{T/\Delta}
\hat{Z}_i'$ is
$\mu T + D(\frac{T}{\Delta}-\mu T)$, and
\begin{equation*}
\E{\left.d'(\vect{z}',\hat{\vect{Z}}')\right| \mathcal{E}^\textnormal{c}}
= D+ (1-D)\mu\Delta.
\end{equation*}
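(Here, via the identity $d(\vect{z},\hat{\vect{z}})=d'(\vect{z}',\hat{\vect{z}}')$, each slot in which $\hat{Z}_i'=1$ contributes $\Delta/T$ to the distortion, whence $\E{d'(\vect{z}',\hat{\vect{Z}}')\mid \mathcal{E}^\textnormal{c}}=\frac{\Delta}{T}\bigl(\mu T+D(\tfrac{T}{\Delta}-\mu T)\bigr)=D+(1-D)\mu\Delta$.)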
When we let $\Delta$ tend to zero, this value tends to
$D$. We have thus shown that, for small enough $\Delta$, we can
achieve the pair $(R/\Delta, D)$ on $\vect{z}'$ with random coding
whenever $R>-\lambda \log D$, and therefore we can also achieve
$(R,D)$ on the continuous-time point pattern $\vect{z}$ with random
coding if $R>-\lambda\log D$.
\end{IEEEproof}
We next use Theorem~\ref{thm:arbitrary} to prove
Theorem~\ref{thm:general}.
\begin{IEEEproof}[Proof of Theorem~\ref{thm:general}]
It follows from
Theorem~\ref{thm:arbitrary} that, on any point process satisfying
\eqref{eq:general}, the pair $(R_{\textnormal{Pois}}(D,\lambda+\delta),D)$ is achievable
with \emph{random coding}. Further, since there is no adversary, the
existence of a good random code guarantees the existence of
a good deterministic code. Hence $(R_{\textnormal{Pois}}(D,\lambda+\delta),D)$ is also
achievable on this process with deterministic
coding. Theorem~\ref{thm:general} now follows when we let $\delta$
tend to zero, since $R_{\textnormal{Pois}}(D,\cdot)$ is a continuous function.
\end{IEEEproof}
\section{Some Points are Known to the Reconstructor}\label{sec:wz}
In this section we consider a Wyner-Ziv setting for our
problem. We first consider the case where $\vect{X}$ is a homogeneous
Poisson process of intensity $\lambda$. (Later we consider an
arbitrary point pattern.) Assume that each point in $\vect{X}$ is
known to the reconstructor independently with
probability $p$. Also assume that the encoder does not know which
points are known to the reconstructor. The encoder maps $\vect{X}$
to a message in $\{1,\ldots,2^{TR}\}$, and the reconstructor
produces a Lebesgue-measurable, $\{0,1\}$-valued signal $\hat{\vect{X}}$ on
$[0,T]$ based on this message and the positions of the points that he
knows. The achievability of a rate-distortion pair is defined
in the same way as in
Section~\ref{sec:poisson}. Denote the smallest rate $R$ for which
$(R,D)$ is achievable by $R_{\textnormal{WZ}}(D,\lambda,p)$.
Obviously, $R_{\textnormal{WZ}}(D,\lambda,p)$ is lower-bounded by the
smallest achievable rate when the transmitter \emph{does} know which
points are known to the reconstructor. The latter rate is given by
$R_{\textnormal{Pois}}(D,(1-p)\lambda)$, where
$R_{\textnormal{Pois}}(\cdot,\cdot)$ is given by
\eqref{eq:poisson}. Indeed, when the encoder knows which points are
known to the reconstructor, it is optimal for it to describe only the
remaining points, which themselves form a homogeneous Poisson process
of intensity $(1-p)\lambda$. The reconstructor then selects a
set based on this description to cover the points unknown to it and
adds to this set the points it knows. Thus,
\begin{equation}\label{eq:wz1}
R_{\textnormal{WZ}}(D,\lambda,p)\ge R_{\textnormal{Pois}}(D,(1-p)\lambda).
\end{equation}
The next theorem shows that \eqref{eq:wz1} holds with equality.
\begin{thm}\label{thm:wz}
Knowing the points at the reconstructor only is as good as knowing
them also at the encoder:
\begin{equation}
R_{\textnormal{WZ}}(D,\lambda,p) = R_{\textnormal{Pois}}(D,(1-p)\lambda).
\end{equation}
\end{thm}
To prove Theorem~\ref{thm:wz}, it remains to show that the pair
$(R_{\textnormal{Pois}}(D,(1-p)\lambda), D)$ is achievable. We shall
show this as a
consequence of a stronger result concerning arbitrarily varying
sources.
Consider an arbitrary point pattern $\vect{z}$ on $[0,T]$ chosen by an
adversary. The adversary is allowed to put at most $\lambda T$ points
in $\vect{z}$. Also, it must reveal all but at most $\nu T$
points to the reconstructor, without telling the encoder which points
it has revealed. The encoder and the reconstructor are
allowed to use random codes, where the encoder is a random mapping
from $\vect{z}$ to a message in $\{1,\ldots, 2^{TR}\}$, and where the
reconstructor is a random mapping from this message, together with the point
pattern that it knows, to a $\{0,1\}$-valued, Lebesgue-measurable
signal $\hat{\vect{z}}$. The distortion $d(\vect{z},\hat{\vect{z}})$
is defined as in \eqref{eq:distortion}.
\begin{thm}\label{thm:wzadversary}
Against an adversary who puts at most $\lambda T$ points on $[0,T]$
and reveals all but at most $\nu T$ points to the reconstructor, the
rate-distortion pair $(R_{\textnormal{Pois}}(D,\nu),D)$ is
achievable with random coding.
\end{thm}
\begin{IEEEproof}
The case $D\ge 1$ is trivial, so we shall only consider the
case where $D\in(0,1)$. The encoder
and the reconstructor first use Scheme~1 as in
Section~\ref{sec:poisson} to reduce the point pattern $\vect{z}$ to a
$\{0,1\}$-valued vector $\vect{z}'$ of length
$\frac{T}{\Delta}$. Define
\begin{equation*}
\mu\triangleq \frac{\sum_{i=1}^{T/\Delta} z_i'}{T},
\end{equation*}
and note that, by assumption, $\mu\le\lambda$. If $\mu\le \nu$, then
we can ignore the reconstructor's side-information and use the
random code of Theorem~\ref{thm:arbitrary}. Henceforth we assume
$\mu>\nu$.
Denote by $\vect{s}$ the point pattern known to the reconstructor
and by $\vect{s}'$ the vector obtained from $\vect{s}$ through the
discretization in time of Scheme~1. Since there are at most $\nu T$
points that are unknown to the reconstructor,
\begin{equation}\label{eq:14}
\sum_{i=1}^{T/\Delta} s_i'\ge (\mu-\nu)T.
\end{equation}
The encoder conveys the value of $\mu T$ to the receiver using
bits. Since $\mu T$ is an integer between $0$ and $\lambda T$, the
number of
bits per second needed to describe it tends to zero as $T$ tends to
infinity.
Next, the encoder and the reconstructor randomly generate
$2^{T(R+\tilde{R})}$ independent codewords $$\hat{\vect{z}}_{m,l}',\quad
m\in\{1,\ldots, 2^{TR}\},\ l\in\{1,\ldots,2^{T\tilde{R}}\},$$
where each codeword is generated IID $\textnormal{Ber}(D)$.
To describe $\vect{z}'$, the encoder looks for a codeword
$\hat{\vect{z}}_{m,l}'$ such that
\begin{equation}\label{eq:12}
\hat{z}_{m,l,i}'=1 \textnormal{ whenever } z_i'=1.
\end{equation}
If it finds one or more such codewords, it sends the index $m$ of
the first one; otherwise
it tells the reconstructor to produce the all-one sequence.
When the reconstructor receives the index $m$, it looks for an index
$\tilde{l}\in\{1,\ldots,2^{T\tilde{R}}\}$ such that
\begin{equation}\label{eq:13}
\hat{z}_{m,\tilde{l},i}'=1 \textnormal{ whenever }
s_i'=1.
\end{equation}
If there is only one such codeword, it outputs it as the
reconstruction; if there are more than one such codewords, it
outputs the all-one sequence.
To analyze the expected distortion for $\vect{z}'$ over this random
code, first consider the event that the encoder cannot find a
codeword satisfying \eqref{eq:12}. Note that the probability that a
randomly generated codeword satisfies \eqref{eq:12} is $D^{\mu
T}$, so the probability of this event tends to zero as
$T$ tends to infinity provided that
\begin{equation}\label{eq:15}
R+\tilde{R}>-\mu \log D.
\end{equation}
Next consider the event that the reconstructor finds more than one
$\tilde{l}$ satisfying \eqref{eq:13}.
The probability that a randomly generated codeword satisfies
\eqref{eq:13} is $D^{\sum_{i=1}^{T/\Delta} s_i'}$. Consequently, by
\eqref{eq:14} the probability of this event tends to zero as
$T$ tends to infinity provided
\begin{equation}\label{eq:16}
\tilde{R} < -(\mu-\nu)\log D.
\end{equation}
Finally, if the encoder finds a codeword satisfying \eqref{eq:12}
and the reconstructor finds only one codeword satisfying
\eqref{eq:13}, then the two codewords must be the same. Following the
same calculations as in the proof of Theorem~\ref{thm:arbitrary},
the expected distortion in this case tends to $D$ as $\Delta$ tends
to zero.
Combining \eqref{eq:15} and \eqref{eq:16}, we can make the expected
distortion arbitrarily close to $D$ as $T\to\infty$ if
\begin{equation*}
R>-\nu \log D.
\end{equation*}
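(Indeed, given any $R>-\nu\log D$, for each possible value of $\mu\in(\nu,\lambda]$ one can choose $\tilde{R}$ with $\max\{0,\,-\mu\log D-R\}<\tilde{R}<-(\mu-\nu)\log D$; this interval is nonempty because $\mu>\nu$ and $R>-\nu\log D$, and with such a choice both \eqref{eq:15} and \eqref{eq:16} hold.)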
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem \ref{thm:wz}]
The claim follows from \eqref{eq:wz1}, Theorem
\ref{thm:wzadversary}, and the Law of Large Numbers.
\end{IEEEproof}
\begin{appendix}
In this appendix we prove Lemma~\ref{lem:optimality}. Given
any rate-distortion code with $2^{TR}$ codewords
$\hat{\vect{x}}_m$, $m\in\{1,\ldots, 2^{TR}\}$ that achieves
expected distortion $D$, we shall construct a new code that can be
obtained through Scheme~1, that contains $(2^{TR}+1)$ codewords, and
that achieves an expected distortion that is arbitrarily close to $D$.
Denote the codewords of our new code by $\hat{\vect{w}}_m$,
$m\in\{1,\ldots,2^{TR}+1\}$. We choose the last codeword to be the
constant 1. We next describe our choices for the other codewords.
For every $\epsilon>0$ and every $\hat{\vect{x}}_m$, we can
approximate the set $\{t\colon \hat{x}_m(t)=1\}$ by a set
$\mathcal{A}_m$ that is equal to a finite, say $N_m$, union of open
intervals. More specifically,
\begin{equation}\label{eq:21}
\mu\left(\hat{x}_m^{-1}(1)\bigtriangleup
\mathcal{A}_m\right)\le 2^{-TR}\epsilon,
\end{equation}
where $\bigtriangleup$ denotes the
symmetric difference between two sets (see,
e.g., \cite[Chapter 3, Proposition 15]{royden88}). Define
\begin{equation*}
\set{B}\triangleq \bigcup_{m=1}^{2^{TR}}
\left(\hat{x}_m^{-1}(1)\setminus \mathcal{A}_m\right),
\end{equation*}
and note that by \eqref{eq:21}
\begin{equation}\label{eq:19}
\mu (\set{B})\le \epsilon.
\end{equation}
For each $\mathcal{A}_m$, $m\in\{1,\ldots,2^{TR}\}$, define
\begin{equation*}
\mathcal{T}_m\triangleq \left\{t\in [0,T]\colon \bigl( \left(\left\lceil
{t}/{\Delta}\right\rceil -
1\right)\Delta, \left\lceil
{t}/{\Delta}\right\rceil\Delta\bigr] \cap
\mathcal{A}_m \neq\emptyset\right\}.
\end{equation*}
We now construct $\hat{\vect{w}}_m$, $m\in\{1,\ldots,2^{TR}\}$ as
\begin{equation*}
\hat{\vect{w}}_m= \mathbf{1}_{\mathcal{T}_m},
\end{equation*}
where $\mathbf{1}_\mathcal{S}$ denotes the indicator function of the
set $\mathcal{S}$. Note that $\mathcal{A}_m \subseteq \mathcal{T}_m
= \hat{w}_m^{-1}(1)$.
See Figure~\ref{fig:discretize} for an illustration of this construction.
\begin{figure}[htbp]
\centering
\setlength{\unitlength}{0.68cm}
\begin{picture}(12,4.5)
\put(0,3){\vector(1,0){12}}
\put(12,2.5){$t$}
\multiput(0,2.9)(1,0){12}
{\line(0,1){0.2}}
\multiput(0,2.7)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,2.4)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,2.1)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,1.8)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,1.5)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,1.2)(1,0){12}
{\line(0,1){0.1}}
\put(2.35,2.65){\tiny{$\Delta$}}
\put(2.3,2.75){\vector(-1,0){0.3}}
\put(2.7,2.75){\vector(1,0){0.3}}
\put(0,1){\vector(1,0){12}}
\put(12,0.5){$t$}
\multiput(0,0.9)(1,0){12}
{\line(0,1){0.2}}
\linethickness{0.5mm}
\put(0,3){\line(1,0){0.2}}
\put(0.2,3){\line(0,1){1}}
\put(0.2,4){\line(1,0){1.3}}
\put(1.5,3){\line(0,1){1}}
\put(1.5,3){\line(1,0){2}}
\put(3.5,3){\line(0,1){1}}
\put(3.5,4){\line(1,0){2.9}}
\put(6.4,3){\line(0,1){1}}
\put(6.4,3){\line(1,0){2.7}}
\put(9.1,3){\line(0,1){1}}
\put(9.1,4){\line(1,0){0.6}}
\put(9.7,3){\line(0,1){1}}
\put(9.7,3){\line(1,0){1.3}}
\put(0.2,4.2){$\mathbf{1}_{\mathcal{A}_m}$}
\put(0,1){\line(0,1){1}}
\put(0,2){\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(2,1){\line(1,0){1}}
\put(3,1){\line(0,1){1}}
\put(3,2){\line(1,0){4}}
\put(7,1){\line(0,1){1}}
\put(7,1){\line(1,0){2}}
\put(9,1){\line(0,1){1}}
\put(9,2){\line(1,0){1}}
\put(10,1){\line(0,1){1}}
\put(10,1){\line(1,0){1}}
\put(0.2,2.2){$\mathbf{\hat{w}}_m$}
\end{picture}
\caption{Constructing $\hat{\vect{w}}_m$ from
$\mathcal{A}_m$.}
\label{fig:discretize}
\end{figure}
Let
$$ N\triangleq \max_{m\in\{1,\ldots,2^{TR}\}} N_m.$$
It can be seen that
\begin{equation}\label{eq:20}
\mu\left(\hat{w}_m^{-1}(1)\right)-\mu(\mathcal{A}_m) \le
2N\Delta, \quad m\in\{1,\ldots,2^{TR}\}.
\end{equation}
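(To see this, note that $\hat{w}_m^{-1}(1)=\mathcal{T}_m$ is the union of all length-$\Delta$ slots that meet $\mathcal{A}_m$; each of the at most $N$ open intervals forming $\mathcal{A}_m$ is thereby enlarged by less than $\Delta$ at each of its two endpoints.)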
Our encoder works as follows: if $\vect{x}$ contains no point in
$\mathcal{B}$, it maps $\vect{x}$ to the same message as the given
encoder; otherwise it maps $\vect{x}$ to the index $(2^{TR}+1)$ of
the all-one codeword. To analyze the distortion, first consider the
case where $\vect{x}$ contains no point in $\mathcal{B}$. In this
case, all points in $\vect{x}$ must be covered by the selected codeword
$\hat{\vect{w}}_m$. By \eqref{eq:21} and \eqref{eq:20}, the
difference
$d(\vect{x},\hat{\vect{w}}_m)-d(\vect{x},\hat{\vect{x}}_m)$, if
positive, can be
made arbitrarily small by choosing small $\epsilon$ and
$\Delta$. Next consider the case where $\vect{x}$ does contain
points in $\mathcal{B}$. By \eqref{eq:19}, the probability that this
happens can be made arbitrarily small by choosing $\epsilon$ small,
therefore its contribution to the expected distortion can also be made
arbitrarily small. We conclude that our code
$\{\hat{\vect{w}}_m\}$ can achieve a distortion that is arbitrarily
close to the distortion achieved by the original code
$\{\hat{\vect{x}}_m\}$. This concludes the proof of
Lemma~\ref{lem:optimality}.
\end{appendix}
\bibliographystyle{IEEEtran}
| {
"timestamp": "2011-02-16T02:01:57",
"yymm": "1102",
"arxiv_id": "1102.3080",
"language": "en",
"url": "https://arxiv.org/abs/1102.3080",
"abstract": "An encoder observes a point pattern---a finite number of points in the interval $[0,T]$---which is to be described to a reconstructor using bits. Based on these bits, the reconstructor wishes to select a subset of $[0,T]$ that contains all the points in the pattern. It is shown that, if the point pattern is produced by a homogeneous Poisson process of intensity $\\lambda$, and if the reconstructor is restricted to select a subset of average Lebesgue measure not exceeding $DT$, then, as $T$ tends to infinity, the minimum number of bits per second needed by the encoder is $-\\lambda\\log D$. It is also shown that, as $T$ tends to infinity, any point pattern on $[0,T]$ containing no more than $\\lambda T$ points can be successfully described using $-\\lambda \\log D$ bits per second in this sense. Finally, a Wyner-Ziv version of this problem is considered where some of the points in the pattern are known to the reconstructor.",
"subjects": "Information Theory (cs.IT)",
"title": "Covering Point Patterns",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587257892506,
"lm_q2_score": 0.7185944046238981,
"lm_q1q2_score": 0.7097978934705867
} |
https://arxiv.org/abs/2008.01428 | Canonical trace ideal and residue for numerical semigroup rings | For a numerical semigroup ring $K[H]$ we study the trace of its canonical ideal. The colength of this ideal is called the residue of $H$. This invariant measures how far is $H$ from being symmetric, i.e. $K[H]$ from being a Gorenstein ring. We remark that the canonical trace ideal contains the conductor ideal, and we study bounds for the residue.For $3$-generated numerical semigroups we give explicit formulas for the canonical trace ideal and the residue of $H$. Thus, in this setting we can classify those whose residue is at most one (the nearly-Gorenstein ones), and we show the eventual periodic behaviour of the residue in a shifted family. | \section*{Introduction}
\label{sec:introd}
Let $(R,\mm, K)$ be a local ring (or a positively graded $K$-algebra) which is Cohen-Macaulay and possesses a canonical module $\omega_R$. In \cite{HHS} the trace ideal of $\omega_R$ is used as a tool to stratify the Cohen-Macaulay rings and to define the class of nearly Gorenstein rings. We recall that if $N$ is any $R$-module, its trace is the ideal $\tr(N)=\sum_{\varphi\in \Hom_R(N,R)} \varphi(N)$ in $R$.
The relevance of $\tr(\omega_R)$ (also called the canonical trace ideal of $R$) stems from the fact that it describes the non-Gorenstein locus of the ring $R$.
Namely, by \cite[Lemma 2.1]{HHS}, for any $\pp \in \Spec(R)$, $\pp \supseteq \tr(\omega_R)$ if and only if $R_\pp$ is not a Gorenstein ring. Thus $\tr(\omega_R)=R$ if and only if $R$ is a Gorenstein ring. In \cite{HHS}, the ring $R$ is called nearly Gorenstein when $\tr(\omega_R) \supseteq \mm$. Also, the residue of $R$, denoted $\res(R)$ is defined as the length of the module $R/\tr(\omega_R)$. Several other invariants for such rings are surveyed in \cite{brennen-et-al}.
In this paper we study bounds, and in small codimension we give exact formulas, for $\res(R)$ when $R$ is the semigroup ring $K[H]$ associated to the numerical semigroup $H$ and the field $K$. This allows us to determine the nearly Gorenstein property in some families of semigroups.
We outline the structure of the paper.
First, in Section~\ref{sec:prelim} we transfer the terminology and notations from rings to the setting of numerical semigroups.
A numerical semigroup $H$ is a subsemigroup of $\NN$ containing $0$
such that the number of gaps $g(H)=|\NN \setminus H|$ is finite. The largest gap (i.e. positive integer not in $H$) is the Frobenius number $\Fr(H)$.
In Proposition \ref{prop:arithmetic-ng} we show that if $H$ is generated by an arithmetic sequence, then $K[H]$ is nearly Gorenstein.
As a measure of how far $K[H]$ is from being Gorenstein (equivalently, how far $H$ is from being symmetric, cf. \cite{Kunz}), we introduce the residue of $H$ defined as
$$\res(H)=\dim_K K[H]/\tr(\omega_{K[H]}).$$
Clearly, $\res(H)=0$ when $H$ is symmetric, and $\res(H)\leq 1$ precisely when $K[H]$ is nearly Gorenstein.
The exponents of the monomials in $\tr(\omega_{K[H]})$ form a semigroup ideal $\tr(H)\subseteq H$.
We note in Proposition \ref{prop:subsets-trace} that if $H$ is not symmetric, then $\mathcal{C}_H \subseteq \tr(H) \subseteq H\setminus\{0\}$, where $\mathcal{C}_H$
is the semigroup ideal generated by the elements of $H$ larger than $\Fr(H)$.
This observation gives a first estimate
$$\res(H)\leq n(H):= |\{ x\in H: x< \Fr(H)\}|$$ in Corollary \ref{cor:boundres}.
Examples computed with the NumericalSgps package \cite{Num-semigroup} in GAP \cite{GAP} indicate (Question \ref{que:g-n}) that another bound might also hold:
\begin{equation}
\label{eq:intro}
\res(H) \leq g(H)-n(H).
\end{equation}
This bound is proved to be correct if $K[H]$ is nearly Gorenstein, and also if $H$ is $3$-generated, cf. Proposition \ref{prop:3-bound}.
When $H$ is $3$-generated and not symmetric, the relation ideal $I_H \subset K[x_1, x_2, x_3]$ of $K[H]$ is given by the maximal minors of the structure matrix of $H$,
which is of the form
\begin{eqnarray}
\label{intro-structure}
A=\left( \begin{array}{ccc} x_1^{a_1} & x_2^{a_2} & x_3^{a_3}\\
x_2^{b_2} & x_3^{b_3} & x_1^{b_1}
\end{array}\right).
\end{eqnarray}
With this notation we derive in Proposition \ref{prop:3-semi-trace-formula} that $$\res(H)=\prod_{i=1}^3 \min\{a_i, b_i\}.$$
Working with the structure matrix of $H$ allows us to parametrize explicitly the non-symmetric $3$-generated semigroups $H$ whose trace is at either
end of the interval $[\mathcal{C}_H, H\setminus\{0\}]$, see Theorem \ref{thm:3semi-trace-is-maximal} and Proposition \ref{prop:3semi-trace-is-conductor}.
Example \ref{ex:trace-conductor} shows that $\res(H)$ may take any nonnegative integer value, even if we fix the number of generators of $H$.
Still, once we fix $n_1<\dots<n_e$, the residue of the semigroups in the shifted family $\{ \langle n_1+j,\dots, n_e+j\rangle \}_{j\geq 0}$
seems to change periodically with $j$ for $j\gg 0$.
This goes in the same direction as a recent number of other results about eventually periodic properties in this shifted family, see \cite{JS}, \cite{Vu}, \cite{HeS}, \cite{St-surveybetti}, \cite{C-all}, \cite{OP}.
Using \cite{S-3semi}, we prove in Theorem \ref{thm:3semi-res-periodic}
that given $n_1<n_2<n_3$ and letting $H_j=\langle n_1+j, n_2+j, n_3+j\rangle$ we have $\res(H_j)=\res(H_{j+(n_3-n_1)})$ for all $j\gg 0$.
In this setup, in Corollary \ref{cor:bound-res-shifts} we obtain another upper bound for $\res(H_j)$ when $j\gg 0$, depending on $n_3-n_1$.
In the Appendix we prove the inclusion of the conductor ideal in any trace ideal, and we characterize when equality holds. This is done in the more general context of extensions of local rings $R\subseteq \widetilde{R}$ with isomorphic residue fields, where $\widetilde{R}$ is a discrete valuation ring in $Q(R)$ and a finite $R$-module.
\section{The canonical trace ideal of a semigroup, or Rings to semigroups transition}
\label{sec:prelim}
A numerical semigroup $H$ is a submonoid of $\NN$, and unless stated otherwise we assume $|\NN\setminus H| < \infty$.
Say $H$ is minimally generated by $n_1<n_2<\ldots <n_e$ with $e>1$. We write $H=\langle n_1,\ldots,n_e\rangle$. The number $e$ is called the {\em embedding dimension} of $H$ and the number $n_1$ the {\em multiplicity} of $H$. One always has $e\leq n_1$. We say that $H$ has {\em minimal multiplicity} if $n_1=e$.
In this case, one also says that $H$ has {\em maximal embedding dimension}, cf. \cite{RoSa-book}.
The elements in the set $G(H)=\NN\setminus H$ are called the {\em gaps} of $H$.
As $|G(H)|<\infty$, there exists a largest integer $\Fr(H)$, called the {\em Frobenius number} of $H$, such that $\Fr(H)\not\in H$.
We denote by $M$ the subset $H\setminus \{0\}$. The elements $f\in G(H)$ with $f+M\subseteq H$ are called {\em pseudo-Frobenius numbers}.
The set of pseudo-Frobenius numbers will be denoted by $\PF(H)$. The cardinality of $\PF(H)$ is called the {\em type} of $H$, denoted $\type(H)$.
We fix a field $K$. The positively graded $K$-subalgebra $K[H]=K[t^{n_1},\ldots,t^{n_e}]$ of $K[t]$ is the semigroup ring of $H$. Its graded maximal ideal is $\mm= (t^{n_1},\ldots,t^{n_e})$. The embedding dimension (resp.\ multiplicity) of $H$ is also the embedding dimension (resp.\ multiplicity) of $K[H]$ in the algebraic sense.
The polynomial ring $K[t]$ is a finite module over $K[H]$ and is the integral closure of $K[H]$ in its quotient field $Q(K[H])=K(t)$. The module $K[t]/K[H]$ has finite length and a $K$-basis given by the residue classes of $\{t^a \colon a\in G(H)\}$.
The canonical module $\omega_{K[H]}$ of $K[H]$ is the fractionary $K[H]$-ideal generated by the elements $t^{-f}$ with $f\in \PF(H)$, see \cite[Exercise 21.11]{Eis}.
Therefore, the Cohen-Macaulay type of $K[H]$ is equal to $\type(H)$. In particular, $K[H]$ is Gorenstein if and only if $\PF(H)=\{\Fr(H)\}$.
Kunz \cite{Kunz} showed that $K[H]$ is Gorenstein if and only if $H$ is {\em symmetric}, i.e. for all $x\in \ZZ$ either $x\in H$, or $\Fr(H)-x \in H$.
The anti-canonical ideal of $K[H]$ is the fractionary ideal $\omega^{-1}_{K[H]}=\{x\in Q(K[H]): x\cdot \omega_{K[H]}\subseteq K[H] \}$.
Since $K[H]$ is a domain, by \cite[Lemma 1.1]{HHS} one has $\tr(\omega_{K[H]})=\omega_{K[H]}\cdot \omega_{K[H]}^{-1}$.
We mention that the almost Gorenstein numerical semigroup rings (as defined by Barucci and Fr\"oberg in \cite{BF}, see also \cite{GMT}) are a proper subclass of the nearly Gorenstein ones, by \cite[Proposition 6.1]{HHS}. For our purposes, we will take as the definition of almost Gorensteinness Nari's characterization, which we explain next.
Let $\PF(H)=\{f_1,\ldots,f_{\tau-1},\Fr(H)\}$, with $f_i<f_{i+1}$ for $1\leq i \leq\tau-2$. It is known by Nari \cite{Nari} that $K[H]$ is almost Gorenstein if and only if
\begin{equation}
\label{eq:ag-symmetries}
f_i+f_{\tau-i}=\Fr(H)\quad \text{for}\quad i=1,\ldots, \lfloor \tau/2 \rfloor.
\end{equation}
The semigroup $H$ is called {\em almost symmetric} if $K[H]$ is almost Gorenstein, and $H$ is called {\em nearly Gorenstein}, if $K[H]$ is nearly Gorenstein. These two classes of semigroups have been recently considered in \cite{MoSt}.
A subset $I\subset \ZZ$ is called a {\em relative ideal} of $H$ if $I+H \subseteq I$ and $h+I\subseteq H$ for some $h\in H$.
If moreover $I\subseteq H$, then $I$ is called an {\em ideal} of $H$.
Let $\Omega_H$ and $\Omega_H^{-1}$ be the set of exponents of the monomials in $\omega_{K[H]}$, and in $\omega_{K[H]}^{-1}$ respectively.
Then $\Omega_H$ and $\Omega_H^{-1}$ are relative ideals of $H$ called the canonical, respectively the anti-canonical ideal of $H$.
We define the {\em trace} of $H$ as $\tr(H)= \Omega_H+ \Omega_H^{-1}$.
It is clear that $\tr(H)$ is an ideal in $H$ consisting of the exponents of the monomials in $\tr(K[H])$.
In this notation, $H$ is nearly Gorenstein if and only if $M\subseteq \tr(H)$.
\medskip
The semigroup ring $K[H]$ is $1$-dimensional, so its canonical trace ideal is either the whole ring, or it is an $\mm$-primary ideal.
Equivalently, $K[H]/\tr(\omega_{K[H]})$ is a finite dimensional vector space with a $K$-basis given by $\{t^h: h\in H\setminus \tr(H) \}$.
We define the {\em residue} of $H$ as the residue of $K[H]$, namely
\begin{equation}
\label{eq:res-definition}
\res(H)=\dim_K K[H]/\tr(\omega_{K[H]})= | H\setminus \tr(H)|.
\end{equation}
Thus $\res(H)=0$ means that $H$ is symmetric, and $\res(H) \leq 1$ if and only if $H$ is nearly Gorenstein.
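For small examples, $\tr(H)$ and $\res(H)$ can be computed by brute force directly from these definitions; the following Python sketch (included purely as an illustration, independent of the GAP computations mentioned elsewhere in the paper, and with naming of our own choosing) enumerates $H$ up to its Frobenius number, computes $\PF(H)$ and the relevant part of $\Omega_H^{-1}$, and counts the elements of $H$ outside $\tr(H)$:
\begin{verbatim}
from math import gcd
from functools import reduce

def residue(gens):
    """Brute-force res(H), PF(H) and the Frobenius number for H generated by gens."""
    assert reduce(gcd, gens) == 1
    m = min(gens)
    members, run, n, frob = {0}, 0, 0, -1
    while run < m:                        # stop after m consecutive elements of H
        n += 1
        if any(n >= g and (n - g) in members for g in gens):
            members.add(n)
            run += 1
        else:
            run, frob = 0, n              # latest failure = current Frobenius candidate
    def in_H(x):
        return x >= 0 and (x > frob or x in members)
    gaps = [x for x in range(frob + 1) if not in_H(x)]
    PF = [f for f in gaps if all(in_H(f + g) for g in gens)]
    # the part of the anti-canonical ideal that matters below the conductor:
    om_inv = [x for x in range(frob, 2 * frob + 1)
              if all(in_H(x - f) for f in PF)]
    def in_trace(h):                      # h in tr(H) = Omega_H + Omega_H^{-1}
        return any(in_H(h - x + f) for x in om_inv for f in PF)
    below = [h for h in range(frob + 1) if in_H(h)]
    return sum(1 for h in below if not in_trace(h)), PF, frob

print(residue([3, 7, 8]))                 # prints (2, [4, 5], 5)
\end{verbatim}
For instance, for $H=\langle 3,7,8\rangle$ it returns $\res(H)=2$, $\PF(H)=\{4,5\}$ and $\Fr(H)=5$, in accordance with Example~\ref{ex:trace-conductor} below (the case $m=3$, $q=2$).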
The conductor of the extension $K[H] \subseteq K[t]$ is the ideal
$$
\mathcal{C}_{K[t]/K[H]}=(t^h: h> \Fr(H))K[H],
$$ which explains why the quantity
$c(H):=\Fr(H)+1$ is named the conductor of $H$. We denote $\mathcal{C}_H=\{h: t^h\in \mathcal{C}_{K[t]/K[H]} \}$, which is an ideal in $H$ minimally generated by
$c(H), c(H)+1, \dots, c(H)+n_1-1$.
An important observation is that $\tr(H)$ contains the conductor ideal $\mathcal{C}_H$. We skip the proof for now, since it follows from Proposition \ref{conductor} in the Appendix, where we consider a more general situation.
\begin{Proposition}
\label{prop:subsets-trace} For any numerical semigroup $H$ one has
$$
\mathcal{C}_H \subseteq \tr(H) \subseteq H.
$$
If $H$ is not symmetric then $\mathcal{C}_H \subseteq \tr(H) \subseteq M$.
\end{Proposition}
As a corollary we obtain an upper bound for $\res(H)$.
We define the set of {\em non-gaps} of $H$ to be
$NG(H)=\{ x\in H: x< \Fr(H)\}$ and we denote $n(H)=|NG(H)|$.
\begin{Corollary}
\label{cor:boundres}
For any numerical semigroup $H$ one has $\res(H)\leq n(H)$, with equality if and only if $\tr(H)=\mathcal{C}_H$.
\end{Corollary}
\begin{proof}
The desired inequality follows from the observation that
$$
n(H)=|NG(H)|=|H\setminus \mathcal{C}_H| \geq |H\setminus \tr(H)|=\res(H).\quad \square
$$
\end{proof}
The map $\rho: NG(H)\to G(H)$ given by $\rho(x)=\Fr(H)-x$ for all $x$ in $NG(H)$ is well defined and injective.
Also, $|NG(H)|+ |G(H)|=\Fr(H)+1$, hence denoting $g(H)=|G(H)|$ we have $n(H) \leq g(H)$.
Numerical experiments with GAP (\cite{GAP}) indicate that another bound for $\res(H)$ might also hold.
We formulate the following question.
\begin{Question}
\label{que:g-n}
Given a numerical semigroup $H$, is it true that
$$
\res(H)\leq g(H)-n(H)?
$$
\end{Question}
This question has a positive answer for symmetric semigroups: by \cite[Lemma 1(f)]{FGH} $H$ is symmetric if and only if $n(H)=g(H)$.
In Proposition \ref{prop:3-bound} we also confirm Question \ref{que:g-n}, when $H$ is $3$-generated.
For any integer $a>3$ the semigroup $H=\langle a, a+1, \dots, 2a-1\rangle$ is nearly Gorenstein and not symmetric (see Proposition \ref{prop:arithmetic-ng}),
and it has $\res(H)=1=n(H) < g(H)-n(H)=a-2$.
This shows that the bound in Question \ref{que:g-n} is not always smaller than the one given by Corollary \ref{cor:boundres}.
\medskip
The second chain of inclusions in Proposition~\ref{prop:subsets-trace} is sharp, as confirmed by Proposition~\ref{prop:arithmetic-ng} and Example~\ref{ex:trace-conductor} below.
The following result shows that a numerical semigroup generated by an arithmetic sequence is nearly Gorenstein.
We also characterize when such semigroups are almost symmetric, taking into account that the symmetric case
was known from work of Gimenez, Sengupta and Srinivasan in \cite{GSS}.
\begin{Proposition}
\label{prop:arithmetic-ng}
Let $e>2$, and $H=\langle a, a+d, \dots, a+(e-1)d\rangle$ with $a,d$ coprime nonnegative integers and $e\leq a$.
Then
\begin{enumerate}
\item[{\em (a)}] $H$ is nearly Gorenstein;
\item[{\em (b)}] $H$ is symmetric if and only if $a \equiv 2 \mod (e-1)$;
\item[{\em (c)}] $H$ is almost symmetric if and only if $a=e$ or $a\equiv 2 \mod (e-1)$.
\end{enumerate}
\end{Proposition}
\begin{proof}
It is known from \cite[Theorem 4.7]{GSS} that $\tau=\type(H)$ is the unique integer $1\leq \tau \leq e-1$ such that
$a=k(e-1)+\tau+1$ with $k$ integer. Equivalently, $k= \lfloor \frac{a-2}{e-1} \rfloor$.
Tripathi \cite[Theorem on page 3]{Tripathi} shows that
$$
\PF(H)=\left\{ a\left\lfloor \frac{x-1}{e-1} \right\rfloor +dx: a-\tau \leq x \leq a-1 \right\}.
$$
For $ a-\tau \leq x \leq a-1$ we get $k(e-1) \leq x-1 \leq k(e-1)+(\tau-1)$, hence $\lfloor \frac{x-1}{e-1}\rfloor=k$.
This implies that $\Fr(H)=ak+d(a-1)$ and
\begin{equation}
\label{eq:pf-arithmetic}
\PF(H)=\{\Fr(H)-(\tau-1)d, \dots, \Fr(H)-d, \Fr(H)\},
\end{equation}
hence the canonical ideal $\Omega_H $ is generated by
\begin{equation*}
\mathcal{W}=\{-\Fr(H), -\Fr(H)+d, \dots, -\Fr(H)+(\tau-1)d\}.
\end{equation*}
For part (a) we consider the set
\begin{equation*}
\mathcal{W'}=\{ \Fr(H)+a, \Fr(H)+a+d, \dots, \Fr(H)+a + (e-\tau)d \}\subset H.
\end{equation*}
An element in $\mathcal{W}+\mathcal{W}'$ is of the form $a+(i+j)d$ with $0\leq i\leq \tau-1$ and $0\leq j \leq e-\tau$.
This way we obtain the generators of $H$: $a, a+d, \dots, a+(e-1)d$,
which shows that $\mathcal{W}'\subset \Omega_H^{-1}$ and $\Omega_H+\Omega_H^{-1} \supseteq M$. Equivalently, $H$ is nearly Gorenstein.
Part (b) is known and may be traced back to \cite[Theorem 2.2]{GSS} or, less explicitly, to \cite{PatilSengupta}.
The statement is an immediate consequence of the fact that $K[H]$ is Gorenstein if and only if $\tau=1$.
For part (c), using (b), it is enough to treat the case of $H$ being almost symmetric, but not symmetric.
This is equivalent (using \eqref{eq:ag-symmetries} and \eqref{eq:pf-arithmetic}) to
\begin{eqnarray*}
(\Fr(H)-(\tau-1)d) + (\Fr(H)-d) &=& \Fr(H), \text{ which is equivalent to} \\
\Fr(H) &=& \tau d.
\end{eqnarray*}
After we substitute the values of $\Fr(H)$ and $\tau$ in the previous equation, we get
\begin{eqnarray*}
ak+ d(a-1) &=& (a-1-k(e-1)) d, \\
k(a+d(e-1)) &=& 0, \\
k &=& 0.
\end{eqnarray*}
Note that $e\leq a$ and by the way $k$ was defined, we may express $k= \left\lfloor \frac{a-2}{e-1} \right\rfloor$. Therefore $k=0$ if and only if $a=e$.
\end{proof}
Next, we present a family of numerical semigroups $H$ such that $ \mathcal{C}_H = \tr(H)$.
\begin{Example}
\label{ex:trace-conductor}
{\em
For the integers $m>2$ and $q >0$ we let
$$
H=\langle m, qm+1, qm+2, \dots, qm+m-1 \rangle.
$$
This is a semigroup with minimal multiplicity, hence its pseudo-Frobenius numbers are obtained by subtracting $m$ from the rest of the minimal generators. This gives $\PF(H)=\{(q-1)m+1, (q-1)m+2, \dots, qm-1\}$, a list of $m-1$ consecutive integers. Let $x\in \Omega_H^{-1}$, i.e. $-\PF(H)+x \subset H$. This can happen
only if $x-\Fr(H)=x-qm+1 \geq qm$, equivalently $x\geq 2qm-1$: otherwise the $m-1\geq 2$ consecutive integers in $-\PF(H)+x$ would include one which is smaller than $qm$ and not a multiple of $m$, hence not in $H$. Consequently, $\tr(H)= \{x: x \geq qm \}=\mathcal{C}_H$, and $\res(H)=|\{0, m,\dots, (q-1)m\}|=q$.
}
\end{Example}
\section{The case of $3$-generated numerical semigroups}
\label{sec:3gens}
When the numerical semigroup $H$ is $3$-generated, the results in \cite[Section 3]{HHS} can be applied to obtain a simple formula of $\res(H)$ from the defining ideal of $K[H]$.
Assume $H$ is minimally generated by $n_1, n_2, n_3$, not necessarily listed increasingly. Let $\varphi: S=K[x_1, x_2, x_3] \to K[H]$ be the algebra map given by
$\varphi(x_i)=t^{n_i}$ for $i=1, \dots, 3$. Then $\ker (\varphi)= I_H$, the defining ideal of $K[H]$.
It is proven in \cite{He-semi} that
$H$ is symmetric, equivalently $K[H]$ is a complete intersection, if and only if,
up to a permutation, $d=\gcd(n_1, n_2) >1$ and $n_3 \in \langle n_1/d, n_2/d \rangle$.
Assume $H$ is not symmetric. We recall from \cite{He-semi} how to compute the ideal $I_H$ in this case.
We find the positive integers $c_1, c_2, c_3$ minimal with the property that there exist nonnegative integers $a_i, b_i$, $i=1,\dots, 3$ such that
\begin{eqnarray}
\label{eq:3semi-equations}
\nonumber c_1 n_1 &=& b_2 n_2+a_3 n_3, \\
c_2 n_2 &=& a_1 n_1 + b_3 n_3, \\
\nonumber c_3 n_3 &=& b_1n_1 + a_2 n_2.
\end{eqnarray}
Such $a_i, b_i$ are positive, unique, and $c_i=a_i+b_i$ for $i=1, \dots, 3$.
In this notation, the ideal $I_H$ is the ideal of maximal minors of the matrix
\begin{eqnarray}
\label{structure}
A=\left( \begin{array}{ccc} x_1^{a_1} & x_2^{a_2} & x_3^{a_3}\\
x_2^{b_2} & x_3^{b_3} & x_1^{b_1}
\end{array}\right),
\end{eqnarray}
that we call the {\em structure matrix} of the semigroup $H$.
It is noticed in \cite[page 69]{NNW}
that one can recover $n_1, n_2, n_3$ from the matrix $A$
by computing the $K$-vector space dimension for the isomorphic rings
$$
K[H]/(t^{n_1}) \cong S/(x_1, I_H) \cong K[ x_2, x_3]/(x_2^{a_2+b_2}, x_2^{b_2}x_3^{a_3}, x_3^{a_3+b_3}),
$$
and the other two cases, see \cite[Lemma 10.23]{RoSa-book} for a different approach.
Namely, we get
\begin{eqnarray}
\label{eq:gens-from-matrix}
\nonumber n_1 &=& a_2 a_3+b_2 a_3+ b_2 b_3, \\
n_2 &=& a_1a_3+a_1b_3 +b_1b_3, \\
\nonumber n_3 &=& a_1 a_2+b_1 a_2 +b_1b_2.
\end{eqnarray}
It follows from the Hilbert-Burch theorem (\cite[Theorem 1.4.17]{BH}) that the transpose $A^{T}$ is the relation matrix of $I_H$, i.e. the sequence
$$
0\to S^2 \stackrel{A^{T}}{\longrightarrow} S^3 \to I_H \to 0
$$
is exact. The type of $R=K[H]$ is $2$, hence by \cite[Corollary 3.4]{HHS} we get
$$\tr(\omega_R)=I_1(\bar{A^T})=(t^{n_ia_i}, t^{n_ib_i}:i=1,\dots, 3),$$
where $\bar{A^{T}}$ is obtained by applying $\varphi$ on the entries of $A^{T}$. We may formulate the following result.
\begin{Proposition}
\label{prop:3-semi-trace-formula}
Assume $H$ is a non-symmetric $3$-generated numerical semigroup and let $R=K[H]$.
With notation as in \eqref{eq:3semi-equations}, we set $d_i=\min\{a_i,b_i\}$ for $1\leq i \leq 3$.
Then
$$
\tr(\omega_R)= (t^{d_1 n_1},t^{d_2 n_2}, t^{d_3 n_3})R, \text{ and } \res(H)=d_1d_2d_3.
$$
\end{Proposition}
\begin{proof} The first part is clear from the discussion above. Since
$$
R/\tr(\omega_R) \iso S/(I_H, x_1^{d_1}, x_2^{d_2}, x_3^{d_3}) \iso S/(x_1^{d_1}, x_2^{d_2}, x_3^{d_3})
$$
we obtain that $\res(H)=\dim_K R/\tr(\omega_R)= d_1d_2d_3$.
\end{proof}
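\begin{Example}
{\em
As a quick illustration of Proposition \ref{prop:3-semi-trace-formula}, take $H=\langle 3,4,5 \rangle$. Here $3\cdot 3=4+5$, $2\cdot 4=3+5$ and $2\cdot 5=2\cdot 3+4$ are the minimal relations \eqref{eq:3semi-equations}, so $(a_1,a_2,a_3)=(1,1,1)$ and $(b_1,b_2,b_3)=(2,1,1)$. Hence $d_1=d_2=d_3=1$, $\tr(\omega_R)=(t^{3},t^{4},t^{5})R=\mm$ and $\res(H)=1$, i.e. $K[H]$ is nearly Gorenstein, but not Gorenstein.
}
\end{Example}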
We may now give a positive answer to Question \ref{que:g-n}, in embedding dimension $3$.
\begin{Proposition}
\label{prop:3-bound}
For any $3$-generated numerical semigroup $H$ one has $$\res(H)\leq g(H)-n(H).$$
\end{Proposition}
\begin{proof}
If $H$ is symmetric we actually have equality $0=\res(H)= g(H)-n(H)$, as noted in \cite[Lemma 1(f)]{FGH}.
Assume $H$ is not symmetric and that it has a structure matrix $A$ denoted as in \eqref{structure}.
Nari et al. prove in \cite[Theorem 3.2]{NNW} that
$$
2g(H)-(\Fr(H)+1) \in \{a_1 a_2 a_3, b_1 b_2 b_3\}.
$$
Using Proposition \ref{prop:3-semi-trace-formula}, we obtain
$$
\res(H) \leq \min\{a_1 a_2 a_3, b_1 b_2 b_3\} \leq 2g(H)-(\Fr(H)+1)= g(H)-n(H). \quad \square
$$
\end{proof}
As an application of Proposition~\ref{prop:3-semi-trace-formula} we will characterize the $3$-generated numerical semigroups such that their trace is at either end of the interval $[\mathcal{C}_H , M]$.
\begin{Theorem}
\label{thm:3semi-trace-is-maximal}
Let $H$ be a $3$-generated numerical semigroup. Then $\tr(H)=M$ if and only if one of the following cases occurs:
\begin{enumerate}
\item[{\em (i)}] $H=\langle ab+b+1, b+c+1, ac+a+c\rangle$ where $a,b,c$ are positive integers with $\gcd(b+c+1, ab-c)=1$, or
\item[{\em (ii)}] $H=\langle bc+b+1, ca+c+1, ab+a+1\rangle$, where $a,b,c$ are positive integers with $\gcd(bc+b+1, ca+c+1)=1$.
\end{enumerate}
In case {\em (i)}, $\Fr(H)= abc+bc-b-1+\max\{0, ab-c\}$, and in case {\em (ii)}, $\Fr(H)=2abc-2$.
\end{Theorem}
\begin{proof}
Assume $H=\langle n_1, n_2, n_3 \rangle$ such that $\tr(H)=M$. By \cite[Corollary 3.5]{HHS}, that is equivalent to $I_1(A)=(x_1, x_2, x_3)$,
where $A$ is the matrix attached to $H$ as in \eqref{structure}.
Clearly, $H$ is not symmetric, hence up to a permutation of the variables, there are essentially two (overlapping) cases to consider.
Case 1:
\begin{eqnarray*}
A=\left( \begin{array}{ccc} x_1^{ } & x_2^{a } & x_3^{b}\\
x_2^{ } & x_3^{ } & x_1^{c}
\end{array}\right), \text{ with } a,b,c >0.
\end{eqnarray*}
Using \eqref{eq:gens-from-matrix} we get $n_1=ab+b+1, n_2= b+c+1, n_3=ac+a+c$, as desired.
It is easy to check that $\gcd(n_1, n_2)=\gcd(n_2, n_3)=\gcd(n_1, n_3)$, hence
$1=\gcd(n_1, n_2, n_3)= \gcd(n_2, n_1-n_2)= \gcd(b+c+1, ab-c)$.
Conversely, let $n_1=ab+b+1, n_2= b+c+1, n_3=ac+a+c$ for some positive integers $a,b,c$ such that $\gcd(b+c+1, ab-c)=1$.
Arguing as above we see that the generators of $H$ are pairwise coprime, hence $H$ is a numerical semigroup which is not symmetric.
It is easy to verify the following equations:
\begin{eqnarray}
\label{eq:case1}
\nonumber (1+c) n_1 &=& n_2+ b n_3, \\
(1+a) n_2 &=& n_1 + n_3, \\
\nonumber (1+b) n_3 &=& c n_1 + a n_2.
\end{eqnarray}
We claim that these are the minimal relations \eqref{eq:3semi-equations} among $n_1, n_2, n_3$.
Since $a_1, b_3$ in \eqref{eq:3semi-equations} are positive, we get $c_2n_2=a_1n_1+b_3n_3\geq n_1+n_3=(1+a)n_2$, while the minimality of $c_2$ gives $c_2\leq 1+a$; hence $c_2=1+a$ and, by uniqueness, $a_1=b_3=1$.
After substituting $n_1=(1+a)n_2-n_3$ into $c_1n_1=b_2 n_2+a_3n_3$, we get $c_1((1+a)n_2-n_3)=b_2n_2+a_3n_3$, hence
$$
(c_1(1+a)-b_2)n_2=(c_1+a_3) n_3.
$$
Since $n_2$ and $n_3$ are coprime, there exists a positive integer $\ell$ so that $c_1+a_3=\ell n_2$. Thus, $c_1+a_3\geq b+c+1$.
On the other hand, comparing \eqref{eq:case1} and \eqref{eq:3semi-equations} we obtain that $c_1\leq 1+c $ and $a_3=c_3-b_3 \leq (b+1)-1=b$, hence
$c_1+a_3 \leq 1+c+b$. This implies that $c_1+a_3=b+c+1$, and moreover $c_1=c+1$ and $a_3=b$.
We can now identify the rest of the coefficients in \eqref{eq:3semi-equations}:
$c_3=1+b$, $b_1=c$, $b_2=1$ and $a_2=a$, which shows that the matrix $A$ has the desired entries.
Case 2:
\begin{eqnarray*}
A=\left( \begin{array}{ccc} x_1^{ } & x_2^{ } & x_3^{ }\\
x_2^{b} & x_3^{c} & x_1^{a}
\end{array}\right), \text{ with } a,b,c >0.
\end{eqnarray*}
Using \eqref{eq:gens-from-matrix} we get $n_1=bc+b+1, n_2=ca+c+1, n_3=ab+a+1$.
It is easy to see that $\gcd(n_1, n_2)=\gcd(n_2, n_3)=\gcd(n_1, n_3)$, hence the desired description for $H$.
Conversely, let $a,b,c$ be positive integers with $\gcd(bc+b+1, ca+c+1)=1$.
It now follows from \cite[Theorem 14]{RoSa-3pseudo} (and its proof) that $H=\langle bc+b+1, ca+c+1, ab+a+1 \rangle$ is a pseudo-symmetric numerical semigroup
whose matrix $A$ is the one we started with this case.
It is shown in \cite[Theorem 2.2.3]{RamirezAlfonsin} and \cite[Exercise 5, pp. 145]{Kunz-book} that
for any non-symmetric numerical semigroup $H=\langle n_1,n_2, n_3 \rangle$ one has
$$
\Fr(H)= \max\{ c_1n_1+b_3n_3,\, c_2n_2+ a_3n_3\}-(n_1+n_2+n_3),
$$
where $c_1, c_2, a_3, b_3$ are as in \eqref{eq:3semi-equations}.
It is now an easy exercise to derive the announced formulas for $\Fr(H)$, when $H$ belongs to either one of the two families considered above.
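For instance, with $a=b=1$ and $c=2$, case (i) produces $H=\langle 3,4,5\rangle$ and $\Fr(H)=abc+bc-b-1+\max\{0, ab-c\}=2$, while case (ii) with the same $a,b,c$ produces the same semigroup and $\Fr(H)=2abc-2=2$, in accordance with $\Fr(\langle 3,4,5\rangle)=2$.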
\end{proof}
\begin{Remark}
{\em
As noticed by Nari, Numata and Keiichi~Watanabe in \cite[Corollary 3.3]{NNW} (see also \cite[Corollary 2.9]{Numata}), the format of the matrix $A$ in case (ii) of Theorem \ref{thm:3semi-trace-is-maximal}
corresponds to $H$ being pseudo-symmetric, which is equivalent in embedding dimension $3$ to $H$ being almost symmetric and not symmetric, see \cite[Proposition 2.3]{Numata}.
The complete parametrization of $3$-generated pseudo-symmetric numerical semigroups was obtained by Rosales and Garc\'ia-S\'anchez in \cite{RoSa-3pseudo}.
}
\end{Remark}
\begin{Proposition}
\label{prop:3semi-trace-is-conductor}
Assume $H$ is a non-symmetric $3$-generated numerical semigroup. Then $\tr(H)=\mathcal{C}_H$ if and only if $H=\langle 3, 3a+1, 3a+2\rangle$ for some positive integer $a$.
\end{Proposition}
\begin{proof}
Assume $\tr(H)=\mathcal{C}_H$.
It follows from Proposition \ref{prop:3-semi-trace-formula} that
$\mu(\tr(H))\leq 3$. If $\mu(\tr(H))=2$, then $e(H)=\mu(\mathcal{C}_H)=2$ and $H$ is symmetric, a contradiction. Therefore,
$\mu(\tr(H))=3$, hence $H$ has multiplicity $3$.
(The same conclusion also follows from Corollary \ref{maxtr}.)
Listing its generators in increasing order, we have that either $H=\langle 3, 3a+1, 3b+2 \rangle$ with $0<a \leq b$, or $H=\langle 3, 3b+2, 3a+1\rangle$ with $a>b>0$.
Assume $a \leq b$. Then $3b+2\notin \langle 3, 3a+1\rangle$, hence $3b+2 \leq \Fr(\langle 3, 3a+1\rangle)= 6a-1$, by \cite{RamirezAlfonsin}. Thus $b<2a$.
It is easy to check that the structure matrix \eqref{structure} is
$$
A=\left( \begin{array}{ccc} x_1^{2a-b} & x_2 & x_3\\
x_2 & x_3 & x_1^{2b-a+1}
\end{array}\right).
$$
Hence $\res(H)=2a-b$ by Proposition \ref{prop:3-semi-trace-formula}.
Note that $0, 3, 6, \dots, 3(a-1)$ are not in $\mathcal{C}_H$, hence
$2a-b=\res(H)=|H\setminus \mathcal{C}_H| \geq a$. This gives $a=b$ and $H= \langle 3, 3a+1, 3a+2 \rangle$.
If $a > b$ then arguing as in the previous case we obtain $3a+1 \leq \Fr(\langle 3, 3b+2 \rangle)= 6b+1$, and $a\leq 2b$.
Clearly $0, 3, 6, \dots, 3b$ are not in $\mathcal{C}_H$, hence $\res(H)=|H\setminus \mathcal{C}_H| \geq b+1$.
On the other hand, the structure matrix of $H$ is
$$
A=\left( \begin{array}{ccc} x_1^{2b-a+1} & x_2 & x_3\\
x_2 & x_3 & x_1^{2a-b}
\end{array}\right),
$$
and Proposition \ref{prop:3-semi-trace-formula} gives $\res(H)=2b-a+1$.
Thus $2b-a+1 \geq b+1$, and $b\geq a$, a contradiction.
Example \ref{ex:trace-conductor} confirms that for any $a>0$ the semigroup $H=\langle 3,3a+1,3a+2 \rangle$ satisfies $\tr(H)=\mathcal{C}_H$.
\end{proof}
\begin{Remark} \label{rem:3large}
{\em
From the proof of Proposition \ref{prop:3semi-trace-is-conductor} we see that for any $a>0$ we have that $\res(\langle 3, 3a+1, 3a+2\rangle)= a$.
}
\end{Remark}
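This value is easy to confirm by brute force. Since $\tr(H)=\mathcal{C}_H$ for $H=\langle 3, 3a+1, 3a+2\rangle$ by Proposition \ref{prop:3semi-trace-is-conductor}, the residue equals $|H\setminus \mathcal{C}_H|$, which the following Python sketch (not part of the paper) counts directly.
\begin{verbatim}
def elements_up_to(gens, bound):
    in_H = [False] * (bound + 1)
    in_H[0] = True
    for i in range(1, bound + 1):
        in_H[i] = any(i >= g and in_H[i - g] for g in gens)
    return in_H

ok = True
for a in range(1, 30):
    in_H = elements_up_to([3, 3*a + 1, 3*a + 2], 3 * (3*a + 2))
    F = max(i for i, x in enumerate(in_H) if not x)            # Frobenius number (= 3a-1 here)
    ok = ok and sum(1 for i in range(F + 1) if in_H[i]) == a   # |H \ C_H| = a
print("res(<3, 3a+1, 3a+2>) = a for a = 1,...,29:", ok)
\end{verbatim}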
\section{The residue in shifted families of semigroups}
\label{sec:shifted-residue}
Remark~\ref{rem:3large} indicates that the residue of a $3$-generated numerical semigroup $H$ can be arbitrarily large.
However, this is not the case in a shifted family of semigroups, as we verify below.
Firstly, we extend the definition of residue from \eqref{eq:res-definition} to arbitrary affine subsemigroups of $\NN$.
In this sense, for any semigroup $H\subset \NN$ containing $0$ we let
\begin{equation*}
\res(H) = \res \left(\frac{1}{d}H\right), \text{ where } d=\gcd(h:h\in H).
\end{equation*}
Given the sequence of integers $\ab: a_1<\dots <a_e$, for any $j$ we denote $\ab+j:a_1+j, \dots, a_e+j$.
The shifted family of $\ab$ is the family $\{ \ab+j \}_{j\geq 0}$.
It has been proved that for large enough shifts several properties occur periodically in the shifted
family of semigroups $\{ \langle \ab+j \rangle \}_{j\geq 0}$
and their semigroup rings $\{ K[\langle \ab+j \rangle ]\}_{j\geq 0}$, see \cite{JS}, \cite{Vu}, \cite{HeS}, \cite{S-3semi}.
For instance, Jayanthan and Srinivasan \cite{JS} showed that for $j\gg 0$
$$
K[\langle \ab+j \rangle] \text{ is a complete intersection (CI)} \iff K[\langle \ab+j +(a_e-a_1) \rangle] \text{ is a CI}.
$$
More generally, Vu (\cite[Theorem 1.1]{Vu}) showed that
\begin{equation*}
\text{for }j\gg 0, \quad \beta_i(K[\langle \ab+j \rangle])=\beta_i(K[\langle \ab+j+(a_e-a_1) \rangle]) \text{ for all }i.
\end{equation*}
In particular, for $j\gg 0$ the algebras $K[\langle\ab+j\rangle]$ and $K[\langle \ab+j+(a_e-a_1)\rangle]$ are Gorenstein at the same time. This implies that
the semigroups $\langle\ab+j\rangle$ and $\langle \ab+j+(a_e-a_1)\rangle$ are symmetric at the same time. Equivalently,
\begin{equation*}
\text{ for } j \gg 0, \quad \res(\langle \ab+j \rangle)=0 \iff \res(\langle \ab+j+ (a_e-a_1) \rangle) =0.
\end{equation*}
It is natural to ask the following.
\begin{Question}
\label{que:shifts}
Given the list of integers $\ab: a_1<\dots <a_e$, is it true that
\begin{equation*}
\text{ for } j\gg 0, \quad \res(\langle \ab+j \rangle) = \res(\langle \ab+j+ (a_e-a_1) \rangle) ?
\end{equation*}
\end{Question}
We remark that a positive answer to Question~\ref{que:shifts} would imply that numerical semigroups of bounded width have bounded residue. We recall that the width of a numerical semigroup $H$ is defined in \cite{HeS} as the difference between the largest and the smallest minimal generator of $H$.
Numerical experiments with GAP (\cite{GAP}, \cite{Num-semigroup}) indicate that Question \ref{que:shifts} might have a positive answer.
Next we confirm it in case $e\leq 3$. If $e=2$, then $\langle a_1+j, a_2+j\rangle$ is symmetric for all $j$, and we are done.
The case $e=3$ is proved in the following theorem.
We first note that when studying asymptotic properties in a shifted family $\{ \ab +j \}_j$, we may assume $a_1=0$.
\begin{Theorem}
\label{thm:3semi-res-periodic}
Given the integers $0<a<b$, let $D=\gcd(a,b)$ and $k_{a,b}= \max\{b(\frac{b-a}{D} -1), \frac{ba}{D} \}$.
For any integer $j$ we denote $H_j= \langle j, j+a, j+b\rangle$.
Then $\res(H_j)=\res(H_{j+b})$ for all $j>2k_{a,b}$.
\end{Theorem}
Before giving the proof of Theorem \ref{thm:3semi-res-periodic} we recall a result from \cite{S-3semi} (extending Jayanthan and Srinivasan's \cite[Theorem 1.4]{JS})
about the occurrence of symmetric semigroups in a shifted family $\{ \langle j, a+j, b+j \rangle \}_{j\geq 0}$.
\begin{Lemma} (\cite[Theorem 3.1]{S-3semi})
\label{lemma:3semi-ci}
With notation as in Theorem \ref{thm:3semi-res-periodic}, let
\begin{equation}
\label{eq:T}
T=\prod_{p \text{ prime, } \nu_p(a)<\nu_p(b)} p^{\nu_p(b)},
\end{equation}
where for any integer $n$ we denote $\nu_p(n)=\max\{i: p^i \text{ divides }n\}$.
Then for $j>k_{a,b}$ the semigroup $H_j$ is symmetric if and only if $j$ is a multiple of $T$.
In particular, in the family of semigroups $\{ H_j \}_{j> k_{a,b}}$ the symmetric property occurs periodically with principal period $T$.
\end{Lemma}
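The lemma is easy to test for small parameters; the Python sketch below (not part of the paper; \texttt{genus\_and\_frobenius} is an ad hoc brute-force helper) checks the claimed periodicity for $(a,b)=(1,2)$ and $(a,b)=(3,5)$, using the classical criterion that a numerical semigroup is symmetric if and only if its Frobenius number equals twice its number of gaps minus one.
\begin{verbatim}
from math import gcd

def nu(p, n):                      # p-adic valuation of n > 0
    v = 0
    while n % p == 0:
        n //= p; v += 1
    return v

def T_of(a, b):                    # the period T from (eq:T)
    T, n, p = 1, b, 2
    while n > 1:
        if n % p == 0:
            if nu(p, a) < nu(p, b):
                T *= p ** nu(p, b)
            while n % p == 0:
                n //= p
        p += 1
    return T

def genus_and_frobenius(gens):     # brute force; requires gcd(gens) = 1
    B = max(gens) ** 2
    in_H = [False] * (B + 1); in_H[0] = True
    for i in range(1, B + 1):
        in_H[i] = any(i >= g and in_H[i - g] for g in gens)
    assert all(in_H[B - t] for t in range(min(gens))), "window too small"
    F = max(i for i, x in enumerate(in_H) if not x)
    return sum(1 for i in range(F + 1) if not in_H[i]), F

ok = True
for a, b in [(1, 2), (3, 5)]:
    D = gcd(a, b)
    k = max(b * ((b - a) // D - 1), a * b // D)
    T = T_of(a, b)
    for j in range(k + 1, k + 31):
        g, F = genus_and_frobenius([j, j + a, j + b])
        ok = ok and ((F == 2 * g - 1) == (j % T == 0))
print("symmetry occurs exactly at the multiples of T in the sampled ranges:", ok)
\end{verbatim}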
\begin{proof} (of Theorem \ref{thm:3semi-res-periodic}).
We start with $j>k_{a,b}$.
By Lemma \ref{lemma:3semi-ci}, if $H_j$ is symmetric then $H_{j+b}$ is symmetric, too, hence $\res(H_j)=\res(H_{j+b})=0$.
Assume $H_j$ is not symmetric. By Lemma \ref{lemma:3semi-ci}, $H_{j+\ell b}$ is not symmetric for all $\ell \geq 0$.
Denote $A_{j+\ell b}$ the structure matrix \eqref{structure} of the non-symmetric semigroup $H_{j+\ell b}$.
For $(n_1, n_2, n_3)=(0, a, b)+j+\ell b$, it is proved in \cite[Theorem 2.2]{S-3semi} that for any $\ell \geq 0$
the middle equation in \eqref{eq:3semi-equations} is
\begin{equation}
\label{eq:middle}
\frac{b}{D} n_2= \frac{b-a}{D}n_1+ \frac{a}{D}n_3.
\end{equation}
This implies that
\begin{eqnarray*}
A_j=\left( \begin{array}{ccc} x_1^{(b-a)/D} & x_2^{a_2} & x_3^{a_3}\\
x_2^{b_2} & x_3^{a/D} & x_1^{b_1}
\end{array}\right),
\end{eqnarray*}
where $a_2, a_3, b_1, b_2$ are positive integers (depending on $j$) such that $a_2+b_2=b/D$, by \eqref{eq:middle}.
Let $e=\gcd(a,b)/\gcd(j,a,b)$. Proposition 4.2 in \cite{S-3semi} explains how the equations \eqref{eq:3semi-equations} change when we shift up by $b$.
According to this result, only the last column of $A_j$ changes and we obtain
\begin{eqnarray*}
A_{j+b}=\left( \begin{array}{ccc} x_1^{(b-a)/D} & x_2^{a_2} & x_3^{a_3+e}\\
x_2^{b_2} & x_3^{a/D} & x_1^{b_1+e}
\end{array}\right).
\end{eqnarray*}
Iterating this, we have that
\begin{eqnarray*}
A_{j+\ell b}=\left( \begin{array}{ccc} x_1^{(b-a)/D} & x_2^{a_2} & x_3^{a_3+\ell e}\\
x_2^{b_2} & x_3^{a/D} & x_1^{b_1+\ell e}
\end{array}\right), \text{ for }\ell \geq 0.
\end{eqnarray*}
Proposition \ref{prop:3-semi-trace-formula} gives
\begin{equation}
\label{eq:after-shift-2}
\res(H_{j+\ell b})=\min \{(b-a)/D, b_1+\ell e \} \cdot \min \{ a/D, a_3+\ell e\} \cdot \min \{a_2, b_2\}.
\end{equation}
For $\ell \geq \max\{ \frac{b-a}{D}-1, \frac{a}{D} \} = \frac{1}{b} k_{a,b}$ it is easy to see that $b_1+\ell e \geq (b-a)/D$ and
$a_3+\ell e \geq a/D$. Hence, \eqref{eq:after-shift-2} becomes
\begin{equation}
\label{eq:res-big-shift}
\res(H_{j+\ell b})= \min\{a_2, b_2\} \cdot a(b-a)/D^2,
\end{equation}
which is a formula not involving $\ell$.
The argument above shows that for any $j>2k_{a,b}$ we have that $\res (H_j)=\res(H_{j+b})$. This concludes the proof.
\end{proof}
\begin{Corollary}
\label{cor:bound-res-shifts}
With notation as in Theorem \ref{thm:3semi-res-periodic}, for $j>2k_{a,b}$ the residue of $H_j$ is an integer divisible by $(b-a)a/D^2$ and
$$
\res(H_j) < 8b^3/27D^3.
$$
\end{Corollary}
\begin{proof}
If $H_j$ is symmetric, the inequality to prove is clear.
Assume $H_j$ is not symmetric and $j>2k_{a,b}$. By \eqref{eq:res-big-shift}, we have $\res(H_{j })= \min\{a_2, b_2\} \cdot a(b-a)/D^2$, with
$a_2, b_2$ positive integers such that $a_2+b_2=b/D$. This shows the first part of the claim. The second part is obtained from the following chain of inequalities
$$
\res(H_j) \leq \frac{a(b-a)}{D^2} \left(\frac{b}{D}-1\right) < \frac{ab(b-a)}{D^3} \leq \left( \frac{2b}{3} \right)^3 \cdot \frac{1}{D^3}=\frac{8b^3}{27D^3},
$$
where for the last inequality we used the known fact that $\sqrt[3]{xyz} \leq (x+y+z)/3$ for $x,y,z >0$.
\end{proof}
\begin{Corollary}
With notation as in Theorem \ref{thm:3semi-res-periodic}, for $j>2k_{a,b}$ the semigroup $H_j$ is nearly Gorenstein if and only if $H_{j+b}$ is nearly Gorenstein.
\end{Corollary}
We make a comment about the frequency of occurrences of the symmetric, almost symmetric and nearly Gorenstein properties in a shifted family.
\begin{Remark}
{\em Let $0<a<b$. For $j\geq 0$ we denote $H_j=\langle j, j+a, j+b\rangle$. We use the constant $k_{a,b}$ introduced in Theorem \ref{thm:3semi-res-periodic}.
Lemma \ref{lemma:3semi-ci} shows that we find symmetric semigroups for arbitrarily large shifts $j$.
From the formula \eqref{eq:T} for the principal period $T$ we infer that $T >1$: otherwise $\nu_p(a)\geq \nu_p(b)$ for every prime $p$, so $b$ would divide $a$, contradicting $0<a<b$.
This means that there is no $j_0$ such that $H_j$ is symmetric for all $j>j_0$.
When $b=2a$, the semigroup $H_j$ is generated by an arithmetic sequence.
Using Proposition \ref{prop:arithmetic-ng} we get that $H_j$ is nearly Gorenstein for all $j>0$.
On the other hand, by Lemma \ref{lemma:3semi-ci}, when $j>b$ we have that $H_j$ is symmetric if and only if $j$ is divisible by $2^{\nu_2(b)}$.
It is however possible that in the shifted family $\{H_j\}_{j\geq 0}$, for large $j$ the only nearly Gorenstein semigroups are the symmetric ones.
Indeed, if $a, b$ are coprime and $a>1$, by Corollary \ref{cor:bound-res-shifts} we have that for $j>2k_{a,b}$, either $\res(H_j)=0$, or $\res(H_j) \geq a(b-a) >1$.
The first author was informed by Kei-ichi Watanabe that there are only finitely many almost symmetric semigroups in the shifted family $\{H_j\}_{j\geq 0}$.
This can also be seen as follows.
According to Nari et al. \cite{NNW}, the structure matrix $A_j$ for an almost symmetric semigroup $H_j$ must have one row consisting
of linear forms. However, it is proven in \cite[Proposition 4.2]{S-3semi} that for $j> k_{a,b}$ the matrix $A_{j+b}$ is obtained from $A_j$
by increasing the exponents of the last column by $\gcd(a,b)/\gcd(a,b,j)$. This shows that for $j>k_{a,b}+b$ the semigroup $H_j$ is not almost symmetric. The analysis of the occurrence of infinitely many almost symmetric semigroups in the shifted family of a $4$-generated numerical semigroup is made in \cite[Section 7]{HeWa}.
}
\end{Remark}
| {
"timestamp": "2020-08-05T02:12:57",
"yymm": "2008",
"arxiv_id": "2008.01428",
"language": "en",
"url": "https://arxiv.org/abs/2008.01428",
"abstract": "For a numerical semigroup ring $K[H]$ we study the trace of its canonical ideal. The colength of this ideal is called the residue of $H$. This invariant measures how far is $H$ from being symmetric, i.e. $K[H]$ from being a Gorenstein ring. We remark that the canonical trace ideal contains the conductor ideal, and we study bounds for the residue.For $3$-generated numerical semigroups we give explicit formulas for the canonical trace ideal and the residue of $H$. Thus, in this setting we can classify those whose residue is at most one (the nearly-Gorenstein ones), and we show the eventual periodic behaviour of the residue in a shifted family.",
"subjects": "Commutative Algebra (math.AC)",
"title": "Canonical trace ideal and residue for numerical semigroup rings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587229064297,
"lm_q2_score": 0.7185944046238982,
"lm_q1q2_score": 0.709797891399008
} |
https://arxiv.org/abs/1910.11766 | On the discrepancy of random subsequences of $\{nα\}$ | For irrational $\alpha$, $\{n\alpha\}$ is uniformly distributed mod 1 in the Weyl sense, and the asymptotic behavior of its discrepancy is completely known. In contrast, very few precise results exist for the discrepancy of subsequences $\{n_k \alpha\}$, with the exception of metric results for exponentially growing $(n_k)$. It is therefore natural to consider random $(n_k)$, and in this paper we give nearly optimal bounds for the discrepancy of $\{n_k \alpha\}$ in the case when the gaps $n_{k+1}-n_k$ are independent, identically distributed, integer-valued random variables. As we will see, the discrepancy behavior is determined by a delicate interplay between the distribution of the gaps $n_{k+1}-n_k$ and the rational approximation properties of $\alpha$. We also point out an interesting critical phenomenon, a sudden change of the order of magnitude of the discrepancy of $\{n_k \alpha\}$ as the Diophantine type of $\alpha$ passes through a certain critical value. | \section{Introduction}\label{intro}
An infinite sequence $(x_k)$ of real numbers is called \textit{uniformly distributed mod} 1 if for every pair $a, b$ of
real numbers with $0\le a < b \le 1$ we have
$$\lim_{N\to\infty} \frac{1}{N}\sum_{k=1}^N I_{[a, b)} (\{x_k\})=b-a.$$
Here $\{ \cdot\}$ denotes fractional part, and $I_{[a,b)}$ is the indicator function of the interval $[a, b)$.
By Weyl's criterion \cite{WE}, a sequence $(x_k)$ is uniformly distributed mod 1 if and only if
$$\lim_{N\to\infty} \frac{1}{N} \sum_{k=1}^N e^{2\pi i hx_k}=0$$
for all integers $h \ne 0$. In particular, the sequence $\{n \alpha\}$ is uniformly distributed mod 1 for any irrational $\alpha$. It also follows that $\{n_k \alpha\}$ is uniformly distributed mod 1 for all irrational $\alpha$ for
$n_k = k^b \log^c k$ ($0 < b < 1, c\in \mathbb R$), $n_k = \log^c k$ ($c > 1$), $n_k=P(k)$, where $P$ is a nonconstant polynomial with integer coefficients. See Kuipers and Niederreiter \cite{KN} for further examples.
A natural measure of the mod 1 uniformity of an infinite sequence $(x_k)$ is the \textit{discrepancy} defined by
$$D_N(x_k) := \sup_{0\le a<b \le 1} \left| \frac{1}{N} \sum_{k=1}^N I_{[a, b)} (\{x_k\}) -(b-a)\right| \quad (N=1, 2, \ldots).$$
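For a finite point set, the supremum in this definition can be evaluated exactly from the sorted sample via the standard identity $D_N=\max_{1\le i\le N}(i/N-x_{(i)})+\max_{1\le i\le N}(x_{(i)}-(i-1)/N)$, where $x_{(1)}\le\cdots\le x_{(N)}$ are the points in increasing order (cf.\ \cite[Chapter 2]{KN}). The following Python sketch (not part of the paper) uses this identity to illustrate the order of magnitude of $D_N(\{n\alpha\})$ for $\alpha=\sqrt{2}$.
\begin{verbatim}
import math

def discrepancy(points):
    # extreme discrepancy of a finite point set in [0,1), via the sorted-sample identity
    xs = sorted(points)
    N = len(xs)
    return (max((i + 1) / N - x for i, x in enumerate(xs))
            + max(x - i / N for i, x in enumerate(xs)))

alpha = math.sqrt(2)
for N in (100, 1000, 10000, 100000):
    pts = [(k * alpha) % 1.0 for k in range(1, N + 1)]
    print(N, round(discrepancy(pts), 6), round(math.log(N) / N, 6))
\end{verbatim}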
By Diophantine approximation theory, the order of magnitude of the discrepancy $D_N(\{n\alpha\})$ is closely connected with the rational approximation properties of $\alpha$. By a standard definition (see e.g.\ \cite{KN}), the {\it type} $\gamma$ of an irrational number $\alpha$ is the supremum of all $c$ such that $$\liminf_{q \to\infty} q^c \| q \alpha\|=0,$$
where $\left\| t \right\|$ denotes the distance from a real number $t$ to the nearest integer.
Then $\gamma\ge 1$ for all irrational $\alpha$ and by classical results (see e.g.
\cite[Chapter 3, Theorems 3.2 and 3.3]{KN}) if $\alpha$ has finite type $\gamma$, then
\begin{equation}\label{af2}
D_N( \{n \alpha\}) =O( N^{-1/\gamma+\varepsilon}), \qquad D_N( \{n \alpha\}) =\Omega( N^{-1/\gamma-\varepsilon})
\end{equation}
for any $\varepsilon>0$. However, the type is a rather crude measure of rational approximation and a more precise characterization can be obtained by using a nondecreasing positive function $\psi$ such that
\begin{equation}\label{strong}
0<\liminf_{q\to\infty} \psi (q) \|q \alpha\| <\infty.
\end{equation}
Note that e.g.\ $\psi (q) = \max_{1 \le k \le q} 1/\left\| k \alpha \right\|$ satisfies \eqref{strong}, but $\psi$ is not uniquely determined by $\alpha$. For the sake of simplicity, in this paper we will focus on the case when \eqref{strong} is satisfied with $\psi (q)=q^{\gamma}$ for some $\gamma \ge 1$.
We shall say in this case that $\alpha$ {\it has strong type $\gamma$}.
As a minor change of the proof of (\ref{af2}) shows, in this case (\ref{af2}) can be improved to
$$D_N( \{n \alpha\}) =O( N^{-1/\gamma}), \qquad D_N( \{n \alpha\}) =\Omega( N^{-1/\gamma})$$
for $\gamma>1$ and
$$ D_N( \{n \alpha\}) =O\left( \frac{\log N}{N}\right)$$
for $\gamma=1$. In view of Schmidt's theorem (see e.g.\ \cite[p.\ 109]{KN}), the last bound is also optimal. Note that for any irrational $\alpha$, \eqref{strong} does not hold with any function $\psi (q) = o(q)$, and that it holds with $\psi(q)=q$ if and only if the partial quotients $a_k$ in the continued fraction of $\alpha$ remain bounded. Such irrational numbers are called \textit{badly approximable}.
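As a numerical illustration (not part of the paper), $q \left\| q \alpha \right\|$ indeed stays bounded away from $0$ for badly approximable numbers such as the golden mean and $\sqrt{2}$, both of which have bounded partial quotients:
\begin{verbatim}
import math

def dist_to_Z(t):                   # distance from t to the nearest integer
    return abs(t - round(t))

for name, alpha in [("golden mean", (1 + math.sqrt(5)) / 2), ("sqrt(2)", math.sqrt(2))]:
    m = min(q * dist_to_Z(q * alpha) for q in range(1, 200001))
    print(name, ": min of q*||q alpha|| for q <= 2*10^5 is", round(m, 4))
\end{verbatim}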
In contrast to the precise results for $D_N(\{n\alpha\})$ above, much less is known about $D_N(\{n_k \alpha\})$ for general $(n_k)$. By a result of Philipp \cite{PH}, if $(n_k)$ is a sequence of positive reals with
$$n_{k+1}/n_k \ge q >1 \quad (k=1, 2, \ldots),$$
then $D_N(\{n_k\alpha\})$ satisfies the \textit{law of the iterated logarithm} (LIL):
\begin{equation}\label{ph75}
0<\limsup_{N\to\infty} \sqrt{\frac{N}{\log\log N}} D_N (\{n_k \alpha\})<\infty
\end{equation}
for almost all $\alpha$ in the sense of the Lebesgue measure. For general $(n_k)$ growing more slowly, even sharp metric results are not available. R.\ Baker \cite{BA} proved that if $(n_k)$ is an increasing sequence of positive integers, then for any $\e >0$,
\begin{equation}\label{baker}
D_N (\{n_k \alpha \}) = O \left( N^{-1/2} (\log N)^{3/2 +\e} \right)
\end{equation}
for almost all $\alpha$, but it is not known whether the exponent $3/2$ can be improved. In the case when $n_k$ is a polynomial with integer coefficients in $k$ of degree at least 2, Aistleitner and Larcher \cite{AL} proved the lower bound $D_N (\{n_k \alpha \}) = \Omega \left( N^{-1/2-\e} \right)$, valid for any $\e>0$ and almost every $\alpha$. However, all these are metric results and do not give information on $D_N(\{n_k \alpha\})$ for any specific irrational $\alpha$.
Thus it is natural to consider random sequences $(n_k)$, and in this paper we consider the case when the gaps $n_{k+1}-n_k$ are independent, identically distributed (i.i.d.) random variables. That is, we are dealing with the discrepancy $D_N(\{S_k \alpha\})$, where $S_k=\sum_{j=1}^k X_j$ with i.i.d.\ random variables $X_1, X_2, \ldots$, i.e.\ $S_k$ is a random walk. In a recent paper \cite{BB} the authors proved the law of the iterated logarithm
\[ 0 < \limsup_{N \to \infty} \frac{\left| \sum_{k=1}^N e^{2 \pi i S_k \alpha} \right|}{\sqrt{N \log \log N}} < \infty \quad \text{a.s.} \]
whenever $\exp (2 \pi i X_1 \alpha)$ is nondegenerate (i.e.\ it does not equal a constant with probability 1). Note that a.s.\ (almost surely) means that the given event has probability 1 in the space of the random walk $S_k$. From Koksma's inequality \cite[Chapter 2, Corollary 5.1]{KN}, we thus obtain the following general lower estimate.
\begin{prop}\label{generallower} Let $X_1, X_2, \dots$ be i.i.d.\ random variables, let $S_k=\sum_{j=1}^k X_j$ and let $\alpha \in \mathbb{R}$. If $\exp (2 \pi i X_1 \alpha)$ is nondegenerate, then
\[ D_N(\{ S_k \alpha \}) = \Omega \left( \sqrt{\frac{\log \log N}{N}} \right) \quad \text{a.s.} \]
\end{prop}
The sharpness of Proposition \ref{generallower} follows from a result of Schatte \cite{SCH3}, who proved that if
\begin{equation}\label{SCH}
\sup_{0\le x \le 1} | \P (\{S_k\alpha\}<x)-x| = O(k^{-5/2}),
\end{equation}
then we have
\begin{equation}\label{LILdiscr}
0<\limsup_{N\to\infty} \sqrt{\frac{N}{\log\log N}} D_N(\{S_k \alpha\})<\infty \quad \text{a.s.}
\end{equation}
Condition (\ref{SCH}) is satisfied for all $\alpha \neq 0$ if the distribution of $X_1$ is absolutely continuous, in which case the convergence speed in (\ref{SCH}) is exponential. Berkes and Raseta \cite{BR} showed that in the absolutely continuous case the LIL (\ref{LILdiscr}) also holds for the $L_p$ discrepancy of $\{S_k \alpha\}$, $1\le p<\infty$, and for other functionals of the path $\{S_k \alpha\}, 1\le k \le N$. Improving results of Schatte \cite{SCH1} and Su \cite{SU}, in \cite{BB2} we gave optimal bounds for the quantity on the left hand side of (\ref{SCH}) in the case when $X_1$ is an integer-valued random variable having a finite variance, or having heavy tails satisfying
\begin{equation}\label{heavy}
c_1 x^{-\beta} \le \P (|X_1| \ge x) \le c_2 x^{-\beta}
\end{equation}
for all $x>0$ with some constants $c_1, c_2>0$ and $0<\beta<2$. These results imply that the LIL (\ref{LILdiscr}) also holds if $\alpha$ has strong type $\gamma$ and $X_1$ is an integer-valued random variable satisfying (\ref{heavy}) with $\beta \le 2/(5\gamma)$ (see the last paragraph of Subsection \ref{HT}). In this case $S_n$ grows, in a stochastic sense, with the polynomial speed $n^{1/\beta}$ and this result can be considered as the stochastic analogue of Philipp's lacunary result (\ref{ph75}).
On the other hand, the results of \cite{BB2} also show that (\ref{SCH}) cannot hold if $X_1$ has a finite variance, in which case $S_n$ grows at most linearly. In this case the problem of asymptotic behavior of $D_N(\{S_k \alpha\})$ becomes considerably harder and will be studied in the present paper.
Upper bounds for $D_N(\{S_k \alpha\})$ for general random walks in terms of the growth rate of the sums
$$ \sum_{h=1}^H \frac{1}{h|1-\varphi(2 \pi h\alpha)|} \quad \text{and} \quad \sum_{h=1}^H \frac{1}{h|1-\varphi(2 \pi h\alpha)|^{1/2}} $$
were given in Weber \cite{W} and Berkes and Weber \cite{BW}. Here $\varphi$ denotes the characteristic function of $X_1$. In particular, in \cite{W} it is shown that if $X_1$ is integer-valued, $S_k/k^{1/\beta}$ converges in distribution to a stable law with parameter $0<\beta<1$ and $\alpha$ satisfies $\left\| q \alpha \right\| \ge C q^{-\gamma}$ for every $q \in \mathbb{N}$ with some $\gamma >1$ and $C>0$, then
\begin{equation}\label{Weberbound}
D_N( \{S_k\alpha\})=O \left( N^{- 1/(1+\gamma)} \log^{2+\varepsilon} N\right) \quad \text{a.s.}
\end{equation}
for any $\varepsilon>0$. The same upper bound holds if instead of the distributional convergence of $S_k/k^{1/\beta}$ we assume $\mathbb{E}X_1 \neq 0$ and $\mathbb{E}|X_1|<\infty$. For nearly optimal improvements of this estimate, see Propositions \ref{mainprop1} and \ref{mainprop2} below.
The main focus of this paper is to study the discrepancy of $\{ S_k \alpha \}$ in the case when $X_1$ is an integer-valued random variable, and $\alpha$ is irrational. The most interesting case is $X_1>0$, when $\{ S_k \alpha \}$ is in fact a random subsequence of $\{n \alpha \}$, but in general we will allow $X_1$ to take negative integers as well. Before we formulate our general results, we discuss here the simple special case when $X_1$ takes the values 1 and 2 with probability $1/2$ each. The corresponding sequence $\{ S_k \alpha \}$ is arguably the simplest random subsequence of $\{ n \alpha \}$.
\begin{prop}\label{mainprop1} Let $X_1, X_2, \dots$ be i.i.d.\ random variables such that $\P (X_1=1)=\P (X_1=2)=1/2$, let $S_k=\sum_{j=1}^k X_j$, and let $\alpha \in \R$ be irrational.
\begin{enumerate}
\item[(i)] If $\left\| q \alpha \right\| \ge C q^{-2}$ for every $q \in \mathbb{N}$ with some constant $C>0$, then $D_N=D_N(\{ S_k \alpha \})$ satisfies
\[ D_N = O \left( \sqrt{\frac{\log \log N}{N}} \log N \right), \quad D_N = \Omega \left( \sqrt{\frac{\log \log N}{N}} \right) \quad \text{a.s.} \]
\item[(ii)] If $0< \liminf_{q \to \infty} q^{\gamma} \left\| q \alpha \right\| < \infty$ with some $\gamma >2$, then $D_N=D_N(\{ S_k \alpha \})$ satisfies
\[ D_N = O \left( \left( \frac{\log \log N}{N} \right)^{1/\gamma} \right), \quad D_N = \Omega \left( \frac{1}{N^{1/\gamma}} \right) \quad \text{a.s.} \]
\end{enumerate}
\end{prop}
For an irrational $\alpha$ with strong type $\gamma$, the estimates in (i) hold if $1 \le \gamma \le 2$, while those in (ii) hold if $\gamma >2$. Thus the behavior of $D_N(\{S_k \alpha \})$ changes at the critical value $\gamma=2$. It would not be difficult to generalize (ii) to irrational $\alpha$ satisfying \eqref{strong} with an arbitrary $\psi (q)$ increasing faster than $q^2$. In this case the estimates for $D_N(\{ S_k \alpha \})$ would be given in terms of the inverse function $\psi^{-1}$.
The estimates in (i) apply to every algebraic irrational $\alpha$, as well as to almost every $\alpha$ in the sense of the Lebesgue measure. Indeed, a celebrated theorem of Roth \cite{RO} states that any algebraic irrational $\alpha$ satisfies $\left\| q \alpha \right\| \ge C q^{-(1+\e)}$ with some constant $C=C(\alpha, \e)>0$, where $\e>0$ is arbitrary. Furthermore, according to the Jarn\'ik--Besicovitch theorem \cite{BE}, the set of all $\alpha \in \R$ for which $\liminf_{q \to \infty} q^{\gamma} \left\| q \alpha \right\| <\infty$ has Hausdorff dimension $2/(\gamma+1)$. Thus except for a set of Hausdorff dimension 2/3 (and hence Lebesgue measure $0$), every $\alpha \in \R$ satisfies the Diophantine condition in (i).
Note that the exponent 1 of the log in the upper estimate in (i) is smaller than the exponent 3/2 in Baker's estimate (\ref{baker}), and thus random sequences give a better discrepancy bound.
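A small Monte Carlo illustration of part (i) of Proposition \ref{mainprop1} (not part of the paper): for $\alpha=\sqrt{2}$, which is badly approximable and hence satisfies the hypothesis of (i), the simulated discrepancy of $\{S_k\alpha\}$ can be compared with $\sqrt{\log\log N/N}\,\log N$.
\begin{verbatim}
import math, random

def discrepancy(points):            # extreme discrepancy via the sorted-sample identity
    xs = sorted(points)
    N = len(xs)
    return (max((i + 1) / N - x for i, x in enumerate(xs))
            + max(x - i / N for i, x in enumerate(xs)))

random.seed(1)
alpha = math.sqrt(2)
for N in (10**3, 10**4, 10**5):
    S, pts = 0, []
    for _ in range(N):
        S += random.choice((1, 2))  # i.i.d. gaps with P(X=1) = P(X=2) = 1/2
        pts.append((S * alpha) % 1.0)
    bound = math.sqrt(math.log(math.log(N)) / N) * math.log(N)
    print(N, round(discrepancy(pts), 5), round(bound, 5))
\end{verbatim}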
\section{Results}\label{mainresults}
\subsection{Heavy-tailed distributions}\label{HT}
Suppose that the random variable $X_1$ has a \textit{heavy-tailed} distribution, i.e.\ $\mathbb{E} X_1^2 = \infty$. For the sake of simplicity, we only formulate a result for random variables whose tail distribution decays at the rate of a power function. The indicator function of the event $E$ will be denoted by $I_E$.
\begin{prop}\label{mainprop2} Let $X_1, X_2, \dots$ be integer-valued i.i.d.\ random variables such that $c_1 x^{-\beta} \le \P (|X_1| \ge x) \le c_2 x^{-\beta}$ for all $x>0$ with some constants $0<\beta<2$ and $c_1, c_2>0$. For $1 < \beta <2$ suppose also that $\mathbb{E}X_1=0$, and for $\beta =1$ that $|\E (X_1 I_{\{ |X_1| < x \}})| \le c_3$ for all $x>0$ with some constant $c_3>0$. Let $S_k=\sum_{j=1}^k X_j$, and let $\alpha \in \mathbb{R}$ be irrational.
\begin{enumerate}
\item[(i)] If $\left\| q \alpha \right\| \ge C q^{-2/\beta}$ for every $q \in \mathbb{N}$ with some constant $C>0$, then $D_N=D_N(\{ S_k \alpha \})$ satisfies
\[ D_N = O \left( \sqrt{\frac{\log \log N}{N}} \log N \right), \quad D_N = \Omega \left( \sqrt{\frac{\log \log N}{N}} \right) \quad \text{a.s.} \]
\item[(ii)] If $0< \liminf_{q \to \infty} q^{\gamma} \left\| q \alpha \right\| < \infty$ with some $\gamma >2/\beta$, then $D_N=D_N(\{ S_k \alpha \})$ satisfies
\[ D_N = O \left( \left( \frac{\log \log N}{N} \right)^{1/(\beta \gamma)} \right), \quad D_N = \Omega \left( \frac{1}{N^{1/(\beta \gamma)}} \right) \quad \text{a.s.} \]
\end{enumerate}
\end{prop}
Here we have a dichotomy similar to that in Proposition \ref{mainprop1}, the critical value of $\gamma$ being $2/\beta$. Again, it would not be difficult to generalize (ii) to irrational $\alpha$ satisfying \eqref{strong} with an arbitrary $\psi (q)$ increasing faster than $q^{2/\beta}$. Similarly, we could derive estimates for random variables with tail distribution $c_1 \phi (x) \le \P (|X_1| \ge x) \le c_2 \phi (x)$, where $\phi (x)$ is not necessarily a power function. In this more general situation the critical order of magnitude of $\psi (q)$, where the behavior of $D_N$ changes, would not necessarily be a power function.
Note that the estimates in (i) apply to every algebraic irrational $\alpha$, as well as to almost every $\alpha$ in the sense of the Lebesgue measure.
Proposition \ref{mainprop2} applies e.g.\ to the positive integer-valued random variable $X_1$ with $\P (X_1=n)=c_{\beta}/n^{1+\beta}$, $n=1,2,\dots$, where $0< \beta < 1$. This way we obtain a random subsequence $S_k \alpha$ of $n \alpha$ increasing roughly at the polynomial speed $k^{1/\beta}$. More precisely, $S_k = O \left( k^{1/\beta + \e} \right)$ a.s.\ for any $\e>0$ but not for $\e=0$ (see e.g.\ \cite[Theorem 6.9]{P}).
In conclusion we note that Schatte's LIL under (\ref{SCH}) and Proposition 2.1 of our previous paper \cite{BB2} imply
that if in statement (i) of Proposition \ref{mainprop2} we replace the assumption $\| q \alpha \| \ge C q^{-2/\beta}$ by $ \| q \alpha\| \ge Cq^{-2/(5\beta) }$, then in the conclusion
$$ D_N = O \left( \sqrt{\frac{\log \log N}{N}} \log N \right) \quad \text{a.s.} $$
the factor $\log N$ can be dropped, resulting in a sharp LIL bound. Whether this is true under the original assumption remains open.
\subsection{The case $\E X_1^2<\infty$, $\E X_1 = 0$}
The previous result deals with the case $\E X_1^2 =\infty$, and covers the typical case when the tails of $X_1$ decrease with speed $x^{-\beta}$, $0<\beta<2$. Next, we assume $\E X_1^2<\infty$. As we will see, the results are substantially different according as $\E X_1$ equals 0 or not, and we start with the easier case $\E X_1=0$.
\begin{prop}\label{mainprop3} Let $X_1, X_2, \dots$ be nondegenerate integer-valued i.i.d.\ random variables such that $\E X_1=0$ and $\E X_1^2 < \infty$, let $S_k=\sum_{j=1}^k X_j$, and let $\alpha \in \R$ be irrational.
\begin{enumerate}
\item[(i)] If $\left\| q \alpha \right\| \ge C q^{-1}$ for every $q \in \mathbb{N}$ with some constant $C>0$, then $D_N=D_N(\{ S_k \alpha \})$ satisfies
\[ D_N = O \left( \sqrt{\frac{\log \log N}{N}} \log^2 N \right), \quad D_N = \Omega \left( \sqrt{\frac{\log \log N}{N}} \right) \quad \text{a.s.} \]
\item[(ii)] If $0< \liminf_{q \to \infty} q^{\gamma} \left\| q \alpha \right\| < \infty$ with some $\gamma >1$, then $D_N=D_N(\{ S_k \alpha \})$ satisfies
\[ D_N = O \left( \left( \frac{\log \log N}{N} \right)^{1/(2 \gamma)} \right), \quad D_N = \Omega \left( \frac{1}{N^{1/(2\gamma)}} \right) \quad \text{a.s.} \]
\end{enumerate}
\end{prop}
The dichotomy is less pronounced here than in the previous propositions. Formally, the critical value is now $\gamma =1$. Thus (i) only applies to badly approximable irrationals, but not to almost every $\alpha$.
Note that the factor $\log ^2 N$ in the upper estimate in (i) is greater than the factor $(\log N)^{3/2+\varepsilon}$ in Baker's bound (\ref{baker}). However, Baker's bound does not apply to $\{S_k \alpha \}$, since $\mathbb{E}X_1=0$ implies that $S_k$ cannot be an increasing sequence. Additionally, the set of all badly approximable $\alpha$ is of measure 0, and Baker's estimate provides no information on what happens in such sets. As more than one result in our paper shows, discrepancy estimates in sets of zero measure can be much worse than the ``typical'' behavior.
\subsection{The case $\E X_1^2<\infty$, $\E X_1 \ne 0$}
The relation $\E X_1\neq 0$ holds in particular if $X_1>0$, when the sequence $S_k$ is increasing with probability 1, a natural situation since in this case $\{S_k \alpha\}$ is a random subsequence of $\{n \alpha\}$. As we will see, this case is considerably more involved, and we can prove almost tight estimates for the discrepancy only for certain special distributions, such as in Proposition \ref{mainprop1}.
In Section \ref{Examples} we will see further examples for which Proposition \ref{mainprop1} holds. For example, this is the case if $\P (X_1=a)=\P (X_1=b)=1/2$ for some $a,b \in \mathbb{Z}$, $a \not\equiv b \pmod{2}$, and also if $\E |X_1|<2 \P (X_1=1)$. However, we do not have a complete characterization of distributions for which the estimates in Proposition \ref{mainprop1} are valid. In the (admittedly most interesting) case $\E X_1^2 < \infty$, $\E X_1 \neq 0$, for an irrational $\alpha$ of strong type $\gamma >1$ in general we only know that $D_N (\{ S_k \alpha \})$ is, up to logarithmic factors, at most $N^{-1/(\gamma+1)}$ because of \eqref{Weberbound}, and at least $N^{-\tau}$ with $\tau=\min \{1/2, 1/\gamma \}$ because of Proposition \ref{generallower} and Lemma \ref{proplower2} below. Thus there is a gap between the exponents of $N$ in the upper and lower estimates, and the precise exponent remains open.
\subsection{Main theorem}
As we have seen, the order of magnitude of the discrepancy $D_N(\{S_k \alpha\})$ is sensitive to the distribution of $X_1$ and the Diophantine properties of $\alpha$. Theorem \ref{theoremA} below, which is the main result of our paper, provides criteria in terms of the characteristic function $\varphi$ of $X_1$. As we will see, these criteria cover all the above-mentioned classes and actually more.
\begin{thm}\label{theoremA} Let $X_1, X_2, \ldots$ be i.i.d.\ random variables with characteristic function $\varphi$, and let $S_k=\sum_{j=1}^k X_j$. Let $\alpha \in \R$ be irrational such that $\left\| q \alpha \right\| \ge C q^{-\gamma}$ for every $q \in \mathbb{N}$ with some constants $\gamma \ge 1$ and $C>0$.
\begin{enumerate}
\item[(i)] Suppose there exist real numbers $0<\beta \le 2$, $c>0$ and an integer $d>0$ such that for any $x\in \R$,
\begin{equation}\label{firstcond}
1-|\varphi(2\pi x)| \ge c \| dx\|^\beta .
\end{equation}
Then, with $s=1$ if \, $0<\beta<2$, and $s=2$ if \, $\beta=2$,
\begin{equation}\label{main}
D_N (\{S_k\alpha\})= \left\{ \begin{array}{ll} O \left( \sqrt{\frac{\log \log N}{N}} \log^s N \right) \quad \text{a.s.} & \text{if } 1\le \gamma\le 2/\beta ,\\ O \left( \left( \frac{\log \log N}{N} \right)^{1/(\beta \gamma)} \right) \quad \text{a.s.} & \text{if } \gamma >2/\beta . \end{array} \right.
\end{equation}
\item[(ii)] Suppose there exist a real number $c>0$ and an integer $d>0$ such that for any $x,y \in \R$,
\begin{equation}\label{secondcond}
|\varphi(2\pi x) - \varphi(2\pi y)| \ge c \| d(x-y)\|.
\end{equation}
Then
\begin{equation*}
D_N (\{S_k\alpha\})= \left\{ \begin{array}{ll} O \left( \sqrt{\frac{\log \log N}{N}} \log N \right) \quad \text{a.s.} & \text{if } 1 \le \gamma \le 2,\\ O \left( \left( \frac{\log \log N}{N} \right)^{1/\gamma} \right) \quad \text{a.s.} & \text{if } \gamma >2. \end{array} \right.
\end{equation*}
\end{enumerate}
\end{thm}
\noindent Conditions \eqref{firstcond} and \eqref{secondcond} are not standard in probability theory, therefore we offer some insight into their behavior in Section \ref{Examples}. As we will see in Proposition \ref{examplesprop} (i), Theorem \ref{theoremA} (i) with $\beta=2$ applies to any nondegenerate integer-valued $X_1$, making it our most general upper estimate.
Although we did not assume in Theorem \ref{theoremA} that $X_1$ is integer-valued, and indeed there exist non-integer-valued distributions satisfying \eqref{firstcond} or \eqref{secondcond}, the estimates, while valid, might be far from optimal in the non-integral case. Note that the upper bounds in Proposition \ref{mainprop1} will follow from Theorem \ref{theoremA} (ii); the upper bounds in Proposition \ref{mainprop2} will be a corollary of Theorem \ref{theoremA} (i) with $0<\beta<2$; finally, the upper bounds in Proposition \ref{mainprop3} will be deduced from Theorem \ref{theoremA} (i) with $\beta=2$. The lower bounds in Propositions \ref{mainprop1}, \ref{mainprop2} and \ref{mainprop3} are either a special case of Proposition \ref{generallower}, or follow from a simple argument based on the growth rate of $S_k$ (see Lemmas \ref{proplower2} and \ref{proplower3} below).
Our proof of Theorem \ref{theoremA} is based on the Erd\H{o}s--Tur\'an inequality, which states that for any sequence $(x_k)$ of reals and any $H \in \mathbb{N}$
\begin{equation}\label{ErdosTuran}
D_N(x_k) \le C \left( \frac{1}{H} + \sum_{h=1}^H \frac{1}{h} \left| \frac{1}{N} \sum_{k=1}^N e^{2 \pi i h x_k} \right| \right)
\end{equation}
with a universal constant $C>0$. The free parameter $H$ can be chosen arbitrarily to optimize the estimate. Note that the same exponential sum shows up in Weyl's criterion. To estimate $D_N (\{ S_k \alpha \})$, we therefore need to study
\begin{equation}\label{expsum}
\sum_{k=1}^N e^{2 \pi i S_k h \alpha} ,
\end{equation}
and this is why it was natural to state the conditions of Theorem \ref{theoremA} in terms of the characteristic function $\varphi$ of $X_1$. The same approach was followed in Weber \cite{W} and Berkes and Weber \cite{BW}, which were the starting point for our investigations. The various arithmetic and metric upper bounds for $D_N(\{S_k \alpha\})$ in \cite{W} and \cite{BW} were based on estimates for the second and fourth moments of \eqref{expsum}. The improvements in the present paper depend on sharp asymptotic estimates for the $2p$th moments of \eqref{expsum} for $p =O(\log\log N)$, a technique going back to Erd\H{o}s and G\'al \cite{EG} and which, as we will see, presents considerable combinatorial difficulties. A crucial ingredient of the argument will be a sharp estimate for Diophantine sums
\[ \sum_{h=1}^H \frac{1}{h \|h \alpha\|^b} \quad (0 < b \le 1) \]
(see Proposition \ref{generaldioph} and Corollary \ref{gammadioph}), which is of independent interest.
\section{The moments of an exponential sum}
Let $X_1, X_2, \dots$ be i.i.d.\ random variables, $S_k = \sum_{j=1}^k X_j$ and $\alpha \in \R$. In this section we estimate the moments
\begin{equation}\label{2pmoment}
\E \left| \sum_{k=m+1}^{m+n} e^{2 \pi i S_k \alpha} \right|^{2p}
\end{equation}
where $p \ge 1$ is an integer. The order of magnitude of \eqref{2pmoment} depends on a delicate interplay between the distribution of the random variable $X_1$ and the value of $\alpha$. Our main focus is on the case when $X_1$ is integer-valued, and $\alpha$ is irrational.
To get a basic understanding of \eqref{2pmoment}, consider the simplest case $p=1$. Expanding the square we get
\[ \E \left| \sum_{k=m+1}^{m+n} e^{2 \pi i S_k \alpha} \right|^2 = \sum_{k_1, k_2=m+1}^{m+n} \E e^{2 \pi i (S_{k_1} - S_{k_2}) \alpha} . \]
We need to decompose this sum into three parts, according to the cases $k_1=k_2$, $k_1<k_2$ and $k_1>k_2$. The terms with $k_1=k_2$ are simply 1. In the other two cases, using the independence of $X_1, X_2, \dots$ we have
\begin{equation}\label{k1k2cases}
\E e^{2 \pi i (S_{k_1} - S_{k_2}) \alpha} = \left\{ \begin{array}{ll} \varphi (-2 \pi \alpha)^{k_2-k_1} & \textrm{if } k_1<k_2, \\ \varphi (2 \pi \alpha)^{k_1-k_2} & \textrm{if } k_1>k_2. \end{array} \right.
\end{equation}
It is now easy to sum over all pairs $m+1 \le k_1,k_2 \le m+n$ and obtain an explicit formula for \eqref{2pmoment} in the case $p=1$.
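For instance, with $m=0$ this yields $\E \left| \sum_{k=1}^{n} e^{2 \pi i S_k \alpha} \right|^2 = n + 2 \, \mathrm{Re} \sum_{j=1}^{n-1} (n-j) \varphi(2\pi\alpha)^j$. The following Python sketch (not part of the paper) confirms this closed form by simulation for the two-valued gap distribution of Proposition \ref{mainprop1} and $\alpha=\sqrt{2}$.
\begin{verbatim}
import cmath, math, random

alpha = math.sqrt(2)
n = 200
phi = (cmath.exp(2j * math.pi * alpha) + cmath.exp(4j * math.pi * alpha)) / 2  # phi(2*pi*alpha)

exact = n + 2 * sum((n - j) * phi ** j for j in range(1, n)).real

# Monte Carlo estimate of E|sum_{k<=n} e^{2 pi i S_k alpha}|^2
random.seed(0)
trials, acc = 10000, 0.0
for _ in range(trials):
    S, total = 0, 0j
    for _ in range(n):
        S += random.choice((1, 2))
        total += cmath.exp(2j * math.pi * S * alpha)
    acc += abs(total) ** 2
print("closed form:", round(exact, 2), " Monte Carlo estimate:", round(acc / trials, 2))
\end{verbatim}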
The basic tool for the case $p>1$ is a generalization of the decomposition above which enables an evaluation similar to \eqref{k1k2cases} of the terms in the expanded sum. The number of cases will obviously be much larger than 3, in fact it will be almost as large as $(2p)^{2p}$.
We are ultimately interested in the discrepancy of the sequence $\{ S_k \alpha \}$. To use \eqref{ErdosTuran} with $x_k=S_k \alpha$ for a specific $\alpha$, we therefore need to estimate \eqref{2pmoment} not only for $\alpha$, but for every integral multiple of $\alpha$ as well.
The main difficulty of this section is thus that our estimate of \eqref{2pmoment} cannot contain any implied constant depending on $\alpha$, it has to be completely explicit.
\subsection{Two estimates of the moments}
We now prove two estimates of \eqref{2pmoment} under two different conditions on the distribution of $X_1$. In the proofs we will often use the fact that $\left\| \cdot \right\|$ is symmetric and subadditive, i.e.\ $\left\| -x \right\| = \left\| x \right\|$ and $\left\| x+y \right\| \le \left\| x \right\| + \left\| y \right\|$ for any $x,y \in \R$, and that the characteristic function $\varphi$ of any probability distribution satisfies $\varphi (-x)=\bar{\varphi} (x)$ and $|\varphi (x)| \le 1$ for any $x \in \R$.
\begin{prop}\label{momentestimate} Let $X_1, X_2, \dots$ be i.i.d.\ random variables with characteristic function $\varphi$, and let $S_k=\sum_{j=1}^k X_j$.
\begin{enumerate}
\item[(i)] Suppose that there exist real constants $0<\beta \le 2$ and $c,d>0$ such that \eqref{firstcond} holds for any $x \in \R$. For any $\alpha \in \R$ such that $d \alpha \not\in \mathbb{Z}$, and any integers $m \ge 0$ and $n,p \ge 1$,
\begin{equation}\label{mom(i)}
\E \left| \sum_{k=m+1}^{m+n} e^{2 \pi i S_k \alpha} \right|^{2p} \le (8p)^{2p} \max_{1 \le r \le p} \frac{n^r}{r! \left( c \left\| d \alpha \right\|^{\beta} \right)^{2p-r}} .
\end{equation}
\item[(ii)] Suppose that there exist real constants $c,d>0$ such that \eqref{secondcond} holds for any $x,y \in \R$. For any $\alpha \in \R$ such that $d \alpha \not\in \mathbb{Z}$, and any integers $m \ge 0$ and $n,p \ge 1$,
\begin{equation}\label{mom(ii)}
\E \left| \sum_{k=m+1}^{m+n} e^{2 \pi i S_k \alpha} \right|^{2p} \le (4p)^{2p} \sum_{r=0}^p \frac{n^r}{r! \left( c \left\| d \alpha \right\| \right)^{2p-r}} .
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof} Let us expand the power to obtain
\begin{equation}\label{2pexpand}
\E \left| \sum_{k=m+1}^{m+n} e^{2 \pi i S_k \alpha} \right|^{2p} = \sum_{k_1, \dots , k_{2p}=m+1}^{m+n} \E e^{2 \pi i (S_{k_1} - S_{k_2} + \cdots + S_{k_{2p-1}} - S_{k_{2p}}) \alpha} .
\end{equation}
In order to compute the expected value, we need to write the exponent as a sum of independent random variables. To this end, let us say that $P=(P_1, \dots, P_s)$ is an \textit{ordered partition} of the set $[2p]$, where $[N]$ denotes the set $\left\{ 1, \dots , N \right\}$ for any $N \in \mathbb{N}$, if $P_1, \dots , P_s$ are pairwise disjoint, nonempty subsets of $[2p]$ such that $\bigcup_{j=1}^s P_j = [2p]$. We can associate an ordered partition to every $2p$-tuple $k=(k_1, \dots, k_{2p})$ in a natural way: if
\begin{equation}\label{k=l}
\left\{ k_1, \dots, k_{2p} \right\} = \left\{ \ell_1, \dots, \ell_s \right\}
\end{equation}
with $\ell_1<\cdots < \ell_s$, then for any $1 \le j \le s$ let
\[ P_j (k) = \left\{ i \in [2p] \,\, : \,\, k_i= \ell_j \right\} . \]
Then $P(k)=\left( P_1(k), \dots, P_s(k) \right)$ is an ordered partition of $[2p]$. In other words, the numbers $k_1, \dots, k_{2p}$ are written in increasing order as $\ell_1 < \dots < \ell_s$ (note $s \le 2p$ where we do not necessarily have equality since $k_1, \dots, k_{2p}$ need not be distinct), and we let $P_1(k)$ be the set of indices $i$ such that $k_i$ is the smallest, we let $P_2(k)$ be the set of indices $i$ such that $k_i$ is the second smallest etc. We will decompose the sum in \eqref{2pexpand} according to the value of $P(k)$. For any given ordered partition $P$ of $[2p]$ let
\[ S(P) = \sum_{\substack{k_1, \dots , k_{2p}=m+1 \\ P(k)=P}}^{m+n} \E e^{2 \pi i (S_{k_1} - S_{k_2} + \cdots + S_{k_{2p-1}} - S_{k_{2p}}) \alpha} . \]
Let us now fix an ordered partition $P=(P_1, \dots , P_s)$ of $[2p]$. Let $k$ be such that $P(k)=P$, and let $\ell_1 < \cdots < \ell_s$ be as in \eqref{k=l}. We have
\[ S_{k_1} - S_{k_2} + \cdots + S_{k_{2p-1}} - S_{k_{2p}} = \e_1 S_{\ell_1} + \cdots + \e_s S_{\ell_s} \]
where $\e_j=\sum_{i \in P_j} (-1)^{i+1}$ for any $1 \le j \le s$. Since $\ell_1 < \cdots < \ell_s$, it is now easy to write this as a sum of independent random variables:
\[ \e_1 S_{\ell_1} + \cdots + \e_s S_{\ell_s} = c_1 \sum_{t=1}^{\ell_1} X_t + c_2 \sum_{t=\ell_1+1}^{\ell_2} X_t + \cdots + c_s \sum_{t=\ell_{s-1}+1}^{\ell_s} X_t \]
where $c_j = \e_j + \e_{j+1} + \cdots + \e_s$. Note that $\e_1, \dots, \e_s$ and $c_1, \dots, c_s$ depend only on the fixed ordered partition $P$. Therefore
\[ \E e^{2 \pi i (S_{k_1} - S_{k_2} + \cdots + S_{k_{2p-1}} - S_{k_{2p}}) \alpha} = \varphi (2 \pi c_1 \alpha)^{\ell_1} \varphi (2 \pi c_2 \alpha)^{\ell_2-\ell_1} \cdots \varphi (2 \pi c_s \alpha)^{\ell_s-\ell_{s-1}} , \]
and
\begin{equation}\label{phiexpansion}
S(P) = \sum_{m+1 \le \ell_1 < \cdots < \ell_s \le m+n} \varphi (2 \pi c_1 \alpha)^{\ell_1} \varphi (2 \pi c_2 \alpha)^{\ell_2-\ell_1} \cdots \varphi (2 \pi c_s \alpha)^{\ell_s-\ell_{s-1}} .
\end{equation}
This is the generalization of \eqref{k1k2cases} to arbitrary $p \ge 1$. We are going to estimate \eqref{phiexpansion} in two different ways, according to the hypotheses \eqref{firstcond} and \eqref{secondcond}.
First, we prove Proposition \ref{momentestimate} (i), i.e.\ we assume \eqref{firstcond}. Observe that the set
\[ B=\left\{ k \in \mathbb{Z} \,\, : \,\, \left\| dk \alpha \right\| < \frac{1}{2} \left\| d \alpha \right\| \right\} \]
contains no two consecutive integers. Indeed, if $k,k+1 \in B$, then using the symmetry and the subadditivity of $\left\| \cdot \right\|$ we would have
\[ \left\| d \alpha \right\| \le \left\| d(k+1) \alpha \right\| + \left\| -dk \alpha \right\| < \frac{1}{2} \left\| d \alpha \right\| + \frac{1}{2} \left\| d \alpha \right\| , \]
a contradiction. Clearly $0 \in B$ and $\pm 1 \not\in B$. Consider the set
\[ \left\{ 1 \le j \le s \,\, : \,\, c_j \in B \right\} = \left\{ j_1, \dots , j_r \right\} \]
where $j_1<\cdots <j_r$. Note that
\[ c_1 = \e_1 + \cdots + \e_s = \sum_{i=1}^{2p} (-1)^{i+1} =0 \in B , \]
hence $j_1=1$. Since $B$ contains no consecutive integers, for any $1 \le a \le r-1$ we have
\[ \pm 1 \neq c_{j_a} - c_{j_{a+1}} = \sum_{j_a \le j < j_{a+1}} \e_j = \sum_{i \in \bigcup_{j_a \le j < j_{a+1}} P_j} (-1)^{i+1} . \]
Similarly, $\pm 1 \not\in B$ implies
\[ \pm 1 \neq c_{j_r} = \sum_{j_r \le j \le s} \e_j = \sum_{i \in \bigcup_{j_r \le j \le s} P_j} (-1)^{i+1} . \]
Therefore $\left| \bigcup_{j_a \le j < j_{a+1}} P_j \right| \ge 2$ and $\left| \bigcup_{j_r \le j \le s} P_j \right| \ge 2$. Using the fact that $P_1,\dots, P_s$ is a partition of $[2p]$ we thus obtain
\[ 2r \le \sum_{a=1}^{r-1} \left| \bigcup_{j_a \le j < j_{a+1}} P_j \right| + \left| \bigcup_{j_r \le j \le s} P_j \right| \le 2p . \]
In other words, $c_j \in B$ for at most $p$ indices $1 \le j \le s$.
Let us now apply the triangle inequality to \eqref{phiexpansion}. For any $j \neq j_1, \dots , j_r$ we have $c_j \not\in B$, hence condition \eqref{firstcond} implies
\[ |\varphi (2 \pi c_j \alpha)| \le 1 - c \left\| d c_j \alpha \right\|^{\beta} \le 1-\frac{c}{2^{\beta}} \left\| d \alpha \right\|^{\beta} . \]
For $j=j_1, \dots , j_r$ let us use the trivial estimate $|\varphi (2 \pi c_j \alpha)| \le 1$. Recall that $j_1=1$, which means that we in fact use the trivial estimate on the first factor $\varphi (2 \pi c_1 \alpha)^{\ell_1}$. This way we obtain
\begin{equation}\label{Destimate}
\left| S(P) \right| \le \sum_{m+1 \le \ell_1 < \cdots < \ell_s \le m+n} \left( 1-\frac{c}{2^{\beta}} \left\| d \alpha \right\|^{\beta} \right)^{\sum_{j \neq j_1, \dots , j_r} \left( \ell_j - \ell_{j-1} \right)} .
\end{equation}
We need to estimate the number of indices $m+1 \le \ell_1 < \dots < \ell_s \le m+n$ for which the total exponent is some fixed integer
\begin{equation}\label{totalexp}
\ell = \sum_{\substack{1 \le j \le s \\ j \neq j_1, \dots , j_r}} \left( \ell_j - \ell_{j-1} \right) .
\end{equation}
The special indices $\ell_{j_1}, \dots, \ell_{j_r}$ can be chosen in $\binom{n}{r} \le n^r/r!$ ways. Given $\ell_{j_1}, \dots, \ell_{j_r}$, the positive integers $\ell_j-\ell_{j-1}$, $j \neq j_1, \dots, j_r$ determine all of $\ell_1, \dots, \ell_s$. The number of ways to write $\ell$ as a sum of $s-r$ nonnegative integers (where the order of the terms matters) is $\binom{\ell+s-r-1}{s-r-1}$, provided $r<s$. The number of indices $m+1 \le \ell_1 < \dots < \ell_s \le m+n$ for which \eqref{totalexp} holds is thus at most $\frac{n^r}{r!} \binom{\ell+s-r-1}{s-r-1}$, and so \eqref{Destimate} gives
\[ |S(P)| \le \sum_{\ell=0}^{\infty} \frac{n^r}{r!} \binom{\ell +s-r-1}{s-r-1} \left( 1-\frac{c}{2^{\beta}} \left\| d \alpha \right\|^{\beta} \right)^{\ell} . \]
This is in fact a well-known power series which can be obtained by differentiating the geometric series $s-r-1$ times. Hence
\[ |S(P)| \le \frac{n^r}{r! \left( \frac{c}{2^{\beta}} \left\| d \alpha \right\|^{\beta} \right)^{s-r}} \]
if $r<s$, but clearly the same is true if $r=s$ (in which case our method simply estimates the absolute value of each term of \eqref{phiexpansion} by 1). Here $s \le 2p$ and $2^{\beta (s-r)} \le 4^{2p}$, therefore
\[ |S(P)| \le 4^{2p} \frac{n^r}{r! \left( c \left\| d \alpha \right\|^{\beta} \right)^{2p-r}} . \]
We have seen that $r \le p$ for any $P$. The number of ordered partitions of $[2p]$ is at most $(2p)^{2p}$, hence summing over all ordered partitions $P$ of $[2p]$ finally shows
\[ \E \left| \sum_{k=m+1}^{m+n} e^{2 \pi i S_k \alpha} \right|^{2p} = \sum_{P} S(P) \le (8p)^{2p} \max_{1 \le r \le p} \frac{n^r}{r! \left( c \left\| d \alpha \right\|^{\beta} \right)^{2p-r}} . \]
Next, we prove Proposition \ref{momentestimate} (ii), i.e.\ we assume \eqref{secondcond}. To estimate \eqref{phiexpansion} under hypothesis \eqref{secondcond} we will need the following lemma.
\begin{lem}\label{fmnslemma} Let $m \ge 0$ and $n, s \ge 1$ be integers, and let $\delta >0$. Consider
\[ f_{m,n,s} (x_1, \dots , x_s) = \sum_{m+1 \le \ell_1< \cdots < \ell_s \le m+n} x_1^{\ell_1} \cdots x_s^{\ell_s} . \]
For a given $x=(x_1, \dots , x_s) \in \mathbb{C}^s$ let
\begin{enumerate}
\item[(i)] $q=q(x)$ denote the maximum number of pairwise disjoint, nonempty intervals of consecutive integers $I_1, \dots , I_q \subseteq [s]$ such that $\left| 1-\prod_{j \in I_r} x_j \right|<\delta$ for all $1 \le r \le q$,
\item[(ii)] $\displaystyle{K=K(x)=\max \left\{ \prod_{j=a}^s |x_j| \,\, : \,\, 1 \le a \le s \right\} \cup \{ 1 \} }$.
\end{enumerate}
Then
\[ |f_{m,n,s} (x_1, \dots, x_s)| \le K^{m+n+1} \left( \frac{2}{\delta}\right)^s \sum_{r=0}^q \frac{(\delta n)^r}{r!} . \]
\end{lem}
Note that $\delta>0$ is a free parameter, which can be chosen to optimize the estimate. As $\delta \to 0$, each term of the estimate increases; however, the highest exponent $q$ of $n$ that shows up in the estimate decreases.
\begin{proof}[Proof of Lemma \ref{fmnslemma}] We may assume that $x_1, \dots , x_s \neq 0$, otherwise $f_{m,n,s}(x_1, \dots , x_s)=0$. We use induction on $s$. First, let $s=1$, and consider
\[ f_{m,n,1} (x_1)=\sum_{m+1\le \ell_1 \le m+n} x_1^{\ell_1}. \]
If $|1-x_1|<\delta$, then $q=1$. Using the triangle inequality and $|x_1| \le K$ we get
\[ |f_{m,n,1}(x_1)| \le \sum_{m+1\le \ell_1 \le m+n} K^{\ell_1} \le K^{m+n}n \le K^{m+n+1} \frac{2}{\delta} \left( 1 + \delta n \right) , \]
as claimed. If $|1-x_1| \ge \delta$, then $q=0$. In this case we evaluate $f_{m,n,1}(x_1)$ as a partial sum of a geometric series, and obtain
\[ |f_{m,n,1}(x_1)| = \left| \frac{x_1^{m+1}-x_1^{m+n+1}}{1-x_1} \right| \le \frac{K^{m+1}+K^{m+n+1}}{\delta} \le K^{m+n+1} \frac{2}{\delta}, \]
as claimed.
Suppose now that the lemma is true for $s-1$, and let us prove it for $s \ge 2$. Let $x=(x_1, \dots , x_s) \in \mathbb{C}^s$, and consider $q=q(x)$ and $K=K(x)$. We will treat the cases $|1-x_s|<\delta$ and $|1-x_s|\ge \delta$ separately.
Assume first that $|1-x_s|<\delta$. By fixing $\ell_s$ first, and summing over $\ell_1, \dots , \ell_{s-1}$ we get
\[ f_{m,n,s}(x_1, \dots , x_s) = \sum_{m+s \le \ell_s \le m+n} x_s^{\ell_s} \sum_{m+1 \le \ell_1 < \cdots < \ell_{s-1} \le \ell_s-1} x_1^{\ell_1} \cdots x_{s-1}^{\ell_{s-1}} . \]
Note that the inner sum is equal to $f_{m,\ell_s-m-1,s-1} (x_1, \dots , x_{s-1})$. Let $x^*=(x_1, \dots ,$ $x_{s-1}) \in \mathbb{C}^{s-1}$, and consider $q^*=q(x^*)$ and $K^*=K(x^*)$. We have $K^* \le K/|x_s|$ and $q^* \le q-1$. Indeed, adding the singleton $\{ s \}$ to a family of pairwise disjoint, nonempty intervals defining $q^*$ produces an admissible family for $x$, hence $q \ge q^*+1$. Applying the triangle inequality and the inductive hypothesis (the bound only increases if we extend the sum from $q^*$ to $q-1$ terms) we get
\[ \begin{split} | f_{m,n,s}(x_1, x_2, \dots , x_s)| &\le \sum_{m+s \le \ell_s \le m+n} |x_s|^{\ell_s} |f_{m,\ell_s-m-1,s-1} (x_1, x_2, \dots , x_{s-1})| \\ &\le \sum_{m+s \le \ell_s \le m+n} |x_s|^{\ell_s} \left( \frac{K}{|x_s|} \right)^{\ell_s} \left( \frac{2}{\delta} \right)^{s-1} \sum_{r=0}^{q-1} \frac{(\delta (\ell_s-m-1))^r}{r!}. \end{split} \]
Here $|x_s|^{\ell_s} (K/|x_s|)^{\ell_s} \le K^{m+n+1}$, thus
\[ | f_{m,n,s}(x_1, \dots , x_s)| \le K^{m+n+1} \left( \frac{2}{\delta} \right)^{s-1} \sum_{r=0}^{q-1} \frac{\delta^r}{r!} \sum_{m+s\le \ell_s \le m+n} (\ell_s-m-1)^r . \]
The standard estimate
\[ \sum_{m+s\le \ell_s \le m+n} (\ell_s-m-1)^r = \sum_{\ell=s-1}^{n-1} \ell^r \le \frac{n^{r+1}}{r+1} \]
shows
\[ | f_{m,n,s}(x_1, \dots , x_s)| \le K^{m+n+1} \frac{1}{2} \left( \frac{2}{\delta} \right)^{s} \sum_{r=0}^{q-1} \frac{(\delta n)^{r+1}}{(r+1)!} . \]
Reindexing the sum over $r$ finishes the proof of the inductive step in the case $|1-x_s|<\delta$.
Finally, assume $|1-x_s| \ge \delta$. Fixing $m+1 \le \ell_1 < \cdots < \ell_{s-1} \le m+n-1$ first, and summing over $\ell_{s-1}<\ell_s \le m+n$ we obtain
\[ f_{m,n,s}(x_1, \dots , x_s) = \sum_{m+1 \le \ell_1 < \cdots < \ell_{s-1} \le m+n-1} x_1^{\ell_1} \cdots x_{s-1}^{\ell_{s-1}} \frac{x_s^{\ell_{s-1}+1}-x_s^{m+n+1}}{1-x_s} , \]
which yields the recursive formula
\[ \begin{split} f_{m,n,s} (x_1, \dots , x_s) = &\frac{x_s}{1-x_s} f_{m,n-1,s-1} (x_1, \dots , x_{s-1}x_s) \\ &- \frac{x_s^{m+n+1}}{1-x_s} f_{m,n-1,s-1} (x_1, \dots , x_{s-1}) . \end{split} \]
Let $x'=(x_1, \dots , x_{s-1} x_s) \in \mathbb{C}^{s-1}$, and consider $q'=q(x')$ and $K'=K(x')$. It is easy to see that $q' \le q$ and $K' \le K$. Applying the inductive hypothesis and using $\left| x_s/(1-x_s) \right| \le K/\delta$ we get
\begin{equation}\label{fmns1}
\left| \frac{x_s}{1-x_s} f_{m,n-1,s-1} (x_1, \dots , x_{s-1}x_s) \right| \le \frac{K}{\delta} K^{m+n} \left( \frac{2}{\delta} \right)^{s-1} \sum_{r=0}^q \frac{(\delta n)^r}{r!} .
\end{equation}
Let $x''=(x_1, \dots , x_{s-1}) \in \mathbb{C}^{s-1}$, and consider $q''=q(x'')$ and $K''=K(x'')$. It is easy to see that $q'' \le q$ and $K'' \le K/|x_s|$. Applying the inductive hypothesis and using $\left| x_s^{m+n+1}/(1-x_s) \right| \le K |x_s|^{m+n}/\delta$ we get
\begin{align}\label{fmns2}
&\left| \frac{x_s^{m+n+1}}{1-x_s} f_{m,n-1,s-1} (x_1, \dots , x_{s-1}) \right| \\
& \phantom{99999999999999999} \le \frac{K |x_s|^{m+n}}{\delta} \left( \frac{K}{|x_s|} \right)^{m+n} \left( \frac{2}{\delta} \right)^{s-1} \sum_{r=0}^q \frac{(\delta n)^r}{r!} . \nonumber
\end{align}
Adding \eqref{fmns1} and \eqref{fmns2} we finally get
\[ | f_{m,n,s} (x_1, \dots , x_s)| \le K^{m+n+1} \left( \frac{2}{\delta} \right)^s \sum_{r=0}^q \frac{(\delta n)^r}{r!} . \]
This completes the proof of Lemma \ref{fmnslemma}.
\end{proof}
Let us now return to estimating $S(P)$ in \eqref{phiexpansion} under the hypothesis \eqref{secondcond}. If $\varphi (2 \pi c_j \alpha) =0$ for some $1 \le j \le s$, then $S(P)=0$. Otherwise $S(P)=f_{m,n,s} (x_1, \dots, x_s)$ as in Lemma \ref{fmnslemma} with $x_j = \varphi(2 \pi c_j \alpha)/\varphi (2 \pi c_{j+1} \alpha)$ for $1 \le j \le s-1$, and $x_s = \varphi (2 \pi c_s \alpha)$. First, note that for any $1 \le a \le s$,
\[ \prod_{j=a}^s |x_j| = |\varphi (2 \pi c_a \alpha)| \le 1 , \]
therefore we have $K=K(x)=1$. For an interval of consecutive integers $[a,b] \subseteq [s]$ with $1 \le a \le b<s$ condition \eqref{secondcond} implies
\[ \begin{split} \left| 1-\prod_{j \in [a,b]} x_j \right| = \left| 1- \frac{\varphi(2 \pi c_a \alpha)}{\varphi (2 \pi c_{b+1} \alpha)} \right| &\ge \left| \varphi (2 \pi c_a \alpha) - \varphi (2 \pi c_{b+1} \alpha) \right| \\ &\ge c \left\| d (c_a-c_{b+1}) \alpha \right\| \\&= c \left\| d(\e_a + \e_{a+1} + \cdots + \e_b ) \alpha \right\| . \end{split} \]
Similarly, for an interval of consecutive integers $[a,s] \subseteq [s]$ with $1 \le a \le s$ condition \eqref{secondcond} implies
\[ \begin{split} \left| 1-\prod_{j \in [a,s]} x_j \right| = \left| 1 - \varphi (2 \pi c_a \alpha ) \right| &= \left| \varphi (2 \pi c_a \alpha ) - \varphi (2 \pi 0) \right| \\ &\ge c \left\| d c_a \alpha \right\| \\&= c \left\| d (\e_a + \e_{a+1} + \cdots + \e_s) \alpha \right\| . \end{split} \]
Altogether, for any nonempty interval of consecutive integers $I \subseteq [s]$ we have
\[ \left| 1 - \prod_{j \in I} x_j \right| \ge c \left\| d \left( \sum_{j \in I} \e_j \right) \alpha \right\| = c \left\| d \left( \sum_{i \in \bigcup_{j \in I} P_j} (-1)^{i+1} \right) \alpha \right\| . \]
This estimate gives the idea to choose $\delta=c \left\|d \alpha \right\|$ in Lemma \ref{fmnslemma}. With this choice, $\left|1 - \prod_{j \in I} x_j \right| < \delta$ implies that
\[ \sum_{i \in \bigcup_{j \in I} P_j} (-1)^{i+1} \neq \pm 1, \]
and so $\left| \bigcup_{j \in I} P_j \right| \ge 2$. Hence if $I_1, \dots, I_q \subseteq [s]$ are pairwise disjoint, nonempty intervals of consecutive integers such that $\left| 1 - \prod_{j \in I_r} x_j \right| < \delta$ for every $1 \le r \le q$, then using the fact that $P_1, \dots , P_s$ is a partition of $[2p]$, we get
\[ 2q \le \sum_{r=1}^q \left| \bigcup_{j \in I_r} P_j \right| = \left| \bigcup_{j \in I_1 \cup \cdots \cup I_q} P_j \right| \le 2p. \]
Thus $q=q(x)$ as in Lemma \ref{fmnslemma} satisfies $q \le p$. Applying Lemma \ref{fmnslemma} with $K=1$, $q \le p$ and $\delta = c \left\| d \alpha \right\|$ to \eqref{phiexpansion}, we obtain
\begin{equation}\label{SPestimate}
|S(P)| \le \left( \frac{2}{c \left\| d \alpha \right\|} \right)^s \sum_{r=0}^p \frac{(c \left\| d \alpha \right\| n)^r}{r!}
\end{equation}
for any ordered partition $P=(P_1, \dots , P_s)$ of $[2p]$. Here $s \le 2p$. Since the number of ordered partitions of $[2p]$ is at most $(2p)^{2p}$, summing \eqref{SPestimate} over all ordered partitions $P$ of $[2p]$ finishes the proof of Proposition \ref{momentestimate} (ii):
\[ \E \left| \sum_{k=m+1}^{m+n} e^{2 \pi i S_k \alpha} \right|^{2p} = \sum_{P} S(P) \le (2p)^{2p} \left( \frac{2}{c \left\| d \alpha \right\|} \right)^{2p} \sum_{r=0}^p \frac{(c \left\| d \alpha \right\| n)^r}{r!} . \]
\end{proof}
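Although it is not needed anywhere in the proofs, the square-root cancellation expressed by Proposition \ref{momentestimate} is easy to observe numerically. The following short Python script is a purely illustrative Monte Carlo sanity check; the gap distribution $\P (X_1=1)=\P (X_1=2)=1/2$, the value $\alpha=\sqrt{2}$, the exponent $p$ and the sample sizes are ad hoc choices. It estimates $\E \left| \sum_{k=1}^{n} e^{2 \pi i S_k \alpha} \right|^{2p}$ and compares it to $n^p$.
\begin{verbatim}
import numpy as np

# Monte Carlo sanity check (illustrative only): square-root cancellation in the
# exponential sum sum_{k<=n} exp(2 pi i S_k alpha) for i.i.d. gaps uniform on
# {1,2} and alpha = sqrt(2).  The 2p-th moment is expected to be O(n^p),
# up to factors depending on alpha.
rng = np.random.default_rng(1)
alpha = np.sqrt(2)
p = 2
trials = 1000

for n in [200, 800, 3200]:
    gaps = rng.integers(1, 3, size=(trials, n))       # X_k uniform on {1, 2}
    S = np.cumsum(gaps, axis=1)                       # partial sums S_k
    T = np.exp(2j * np.pi * S * alpha).sum(axis=1)    # one exponential sum per trial
    moment = np.mean(np.abs(T) ** (2 * p))
    print("n =", n, "  E|T_n|^(2p) / n^p =", round(moment / n ** p, 3))
\end{verbatim}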
\subsection{Examples}\label{Examples}
We were able to estimate the moments \eqref{2pmoment} in Proposition \ref{momentestimate} under conditions \eqref{firstcond} and \eqref{secondcond} for the characteristic function $\varphi$ of $X_1$. We now study probability distributions which satisfy those conditions. First of all note that if $X_1$ is integer-valued, then $\varphi (2 \pi x)$ is periodic, e.g.\ 1 is a period. Thus any lower estimate of $1-|\varphi (2 \pi x)|$ and $|\varphi (2 \pi x) - \varphi (2 \pi y)|$ needs to be periodic as well, which explains the use of the distance from the nearest integer function $\left\| \cdot \right\|$. The constant $d>0$ accounts for the fact that the smallest period of $\varphi (2 \pi x)$ or its absolute value might be less than 1.
It is easy to see that \eqref{firstcond} with some $0<\beta<2$ implies $\E X_1^2 = \infty$. Therefore we can only hope to prove \eqref{firstcond} with $0<\beta<2$ for certain ``heavy-tailed'' distributions. On the other hand, \eqref{firstcond} with $\beta=2$ holds in far more general circumstances.
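As a quick illustration (not needed in what follows), let $\P (X_1=1)=\P (X_1=2)=1/2$. Then
\[ \varphi (2 \pi x) = \frac{e^{2 \pi i x}+e^{4 \pi i x}}{2} = e^{3 \pi i x} \cos (\pi x), \qquad 1-|\varphi (2 \pi x)| = 1-\cos \left( \pi \left\| x \right\| \right) = 2 \sin^2 \left( \frac{\pi \left\| x \right\|}{2} \right) \ge 2 \left\| x \right\|^2 , \]
using $\sin (\pi t/2) \ge t$ for $0 \le t \le 1$, so \eqref{firstcond} holds with $\beta=2$, $c=2$ and $d=1$, in accordance with Proposition \ref{examplesprop} (i) below.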
\begin{prop}\label{examplesprop} Let $X_1$ be an integer-valued random variable with characteristic function $\varphi$.
\begin{enumerate}
\item[(i)] If $X_1$ is nondegenerate, then there exist a real number $c>0$ and an integer $d>0$ such that \eqref{firstcond} holds for any $x \in \R$ with $\beta=2$.
\item[(ii)] Let $0<\beta<2$. Suppose there exist constants $K,x_0>0$ such that for any $x \ge x_0$,
\begin{equation}\label{heavytail}
\E \left( X_1^2 I_{\left\{ |X_1| \le x \right\}} \right) \ge K x^{2-\beta} .
\end{equation}
Then there exist a real number $c>0$ and an integer $d>0$ such that \eqref{firstcond} holds for any $x \in \R$.
\end{enumerate}
\end{prop}
\begin{proof} Let $X_2$ be a random variable independent of and with the same distribution as $X_1$. Then
\[ \E e^{2 \pi i x (X_1-X_2)} = \E e^{2 \pi i x X_1} \E e^{-2\pi i x X_2} = |\varphi (2 \pi x)|^2 . \]
By taking the real part of both sides and using a trigonometric identity we obtain
\[ 1 - |\varphi (2 \pi x)|^2 = \E \left( 1 - \cos \left( 2 \pi x (X_1-X_2) \right) \right) = 2 \E \sin^2 \left( \pi x (X_1-X_2) \right) . \]
Let $f: \R \to \R$, $f(x)= \E \sin^2 \left( \pi x (X_1-X_2) \right)$. Since
\[ 1 - |\varphi (2 \pi x)| \ge \frac{1-|\varphi (2 \pi x)|^2}{2} = f(x), \]
it will be enough to find a lower estimate for $f(x)$.
Let $d>0$ denote the greatest common divisor of the (finite or infinite) support of $X_1-X_2$. Note that the nondegeneracy of $X_1$ implies that this support contains a nonzero integer, making $d>0$ well-defined. Clearly, $f$ is periodic with period $1/d$. It is also easy to see that $f(x)=0$ if and only if $x(X_1-X_2) \in \mathbb{Z}$ with probability 1, or equivalently, if and only if $x$ is an integer multiple of $1/d$. Furthermore, $f$ is continuous, which can be seen e.g.\ from Lebesgue's dominated convergence theorem. Hence to prove an estimate of the form
\begin{equation}\label{flowerestimate}
f(x) \ge c \left\| d x \right\|^{\beta}
\end{equation}
for some constant $c>0$ it is enough to prove \eqref{flowerestimate} in an open neighborhood of $0$.
Applying the estimate $\sin^2 (\pi t) \ge 4t^2$, valid for any $|t| \le 1/2$, with $t=x(X_1-X_2)$ gives
\begin{equation}\label{flowerestimate2}
f(x) \ge 4 x^2 \E \left( \left( X_1-X_2 \right)^2 I_{\left\{ |X_1-X_2| \le \frac{1}{2|x|} \right\}} \right) .
\end{equation}
First, we prove (i). We have $\E (X_1-X_2)^2 >0$ (possibly infinite), because $X_1$ is nondegenerate. From the monotone convergence theorem we can see that
\[ \E \left( \left( X_1-X_2 \right)^2 I_{\left\{ |X_1-X_2| \le \frac{1}{2|x|} \right\}} \right) \]
is greater than a fixed positive constant in an open neighborhood of $0$. Therefore \eqref{flowerestimate2} shows that \eqref{flowerestimate} holds with $\beta=2$ and some $c>0$ in an open neighborhood of $0$, and we are done.
Next, we prove (ii). Let $\mu$ denote any median of $|X_1|$, i.e.\ $\P (|X_1|\le \mu) \ge 1/2$ and $\P (|X_1|\ge \mu) \ge 1/2$. If both $2 \mu \le |X_1| \le 1/(2|x|)-\mu$ and $|X_2| \le \mu$, then $|X_1-X_2| \le 1/(2|x|)$ and $(X_1-X_2)^2 \ge X_1^2/4$. Therefore
\[ \left( X_1-X_2 \right)^2 I_{\left\{ |X_1-X_2| \le \frac{1}{2|x|} \right\}} \ge \frac{X_1^2}{4} I_{\left\{ 2 \mu \le |X_1| \le \frac{1}{2|x|}-\mu \right\}} I_{\left\{|X_2| \le \mu \right\}} . \]
Taking the expected value and using the definition of a median we obtain
\[ \begin{split} \E \left( \left( X_1-X_2 \right)^2 I_{\left\{ |X_1-X_2| \le \frac{1}{2|x|} \right\}} \right) &\ge \frac{1}{8} \E \left( X_1^2 I_{\left\{ 2 \mu \le |X_1| \le \frac{1}{2|x|}-\mu \right\}} \right) \\ &\ge \frac{1}{8} \E \left( X_1^2 I_{\left\{|X_1| \le \frac{1}{2|x|}-\mu \right\}} \right) - \frac{\mu^2}{2}. \end{split} \]
Equation \eqref{flowerestimate2} and condition \eqref{heavytail} thus imply that \eqref{flowerestimate} holds with some $c>0$ in an open neighborhood of $0$.
\end{proof}
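The following small Python script is a purely illustrative numerical check of \eqref{firstcond} with $\beta<2$; the symmetric heavy-tailed law $\P (X_1=k)=\P (X_1=-k) \propto k^{-(1+\beta)}$ (for which \eqref{heavytail} is easily verified and $d=1$), the truncation of its support and all numerical parameters are ad hoc choices, not taken from the rest of the paper.
\begin{verbatim}
import numpy as np

# Numerical sanity check (illustrative only): condition (firstcond) with beta < 2
# for the symmetric heavy-tailed law P(X_1 = k) = P(X_1 = -k) ~ k^(-(1+beta)),
# k >= 1, truncated at K for the computation.
beta = 1.2
K = 10 ** 4
k = np.arange(1, K + 1, dtype=float)
w = k ** (-(1.0 + beta))
p = w / (2.0 * w.sum())                       # P(X_1 = k) = P(X_1 = -k)

def phi(x):
    """Characteristic function E exp(2 pi i x X_1); real by symmetry."""
    return 2.0 * np.sum(p * np.cos(2.0 * np.pi * x * k))

xs = np.linspace(1e-3, 0.5, 1000)             # one period suffices here (d = 1)
ratios = np.array([(1.0 - abs(phi(x))) / min(x, 1.0 - x) ** beta for x in xs])
print("empirical inf of (1 - |phi(2 pi x)|) / ||x||^beta :", round(ratios.min(), 4))
\end{verbatim}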
Next, we study when relation \eqref{secondcond} holds. For the sake of simplicity, assume that $X_1$ is integer-valued, and $\E |X_1|<\infty$. Then $\varphi (2 \pi x)$ has period 1, therefore we may visualize it as a continuously differentiable, closed curve on the Euclidean plane. It is easy to see that the ``self-intersection points'' of this curve, i.e.\ the solutions of the equation $\varphi (2 \pi x) = \varphi (2 \pi y)$, $x \neq y$ will play an important role. Indeed, $|\varphi (2 \pi x) - \varphi (2 \pi y)|$ can be small in two different ways: either $x$ and $y$ are close to each other, or they are close to two different self-intersection points of the curve. In the first case a lower estimate linear in $|x-y|$ can be deduced by assuming $\varphi' \neq 0$ anywhere on $\R$. To handle the second case, we will impose a ``rationality'' and a ``linear independence'' condition on the self-intersection points.
\begin{prop}\label{secondcondprop} Let $X_1$ be an integer-valued random variable with characteristic function $\varphi$ such that $\E |X_1|<\infty$ and $\varphi' \neq 0$ anywhere on $\R$. Let $p>0$ denote the smallest period of $\varphi (2 \pi x)$. Suppose that the equation $\varphi (2 \pi x) = \varphi (2 \pi y)$, $x,y \in [0,p)$, $x \neq y$, has finitely many solutions $(x_1,y_1), \dots, (x_n,y_n)$, and that $x_k-y_k \in \mathbb{Q}$ and $\varphi' (2 \pi x_k)/ \varphi' (2 \pi y_k) \not\in \R$ for any $k=1,\dots, n$. Then there exist a real number $c>0$ and an integer $d>0$ such that \eqref{secondcond} holds for any $x,y \in \R$.
\end{prop}
\begin{proof} Clearly $p>0$ is the reciprocal of the greatest common divisor of the (finite or infinite) support of $X_1$. By considering $pX_1$ instead, we may therefore assume $p=1$. Let $d>0$ be an integer such that $d(x_k-y_k) \in \mathbb{Z}$ for every $k=1,\dots, n$.
The assumption $\E |X_1|<\infty$ implies that $\varphi$ is differentiable, and $\varphi'$ is uniformly continuous. The periodicity of $\varphi$ thus shows that $|\varphi'| \ge K_0$ for some constant $K_0>0$. For any $k=1,\dots, n$ the derivatives $\varphi' (2 \pi x_k)$ and $\varphi' (2 \pi y_k)$ are linearly independent as planar vectors, because $\varphi' (2 \pi x_k)/ \varphi' (2 \pi y_k) \not\in \R$. From the equivalence of finite-dimensional norms, we get that for any $u,v \in \R$,
\begin{equation}\label{derivativeest}
|\varphi'(2 \pi x_k) u - \varphi'(2 \pi y_k) v| \ge K_k \left( |u|+|v| \right)
\end{equation}
with some constant $K_k>0$. Let $K=\min \left\{ K_k \,\, : \,\, 0 \le k \le n \right\}$.
A simple corollary of the uniform continuity of $\varphi'$ is that the convergence
\[ \frac{\varphi (2 \pi t) - \varphi (2 \pi a)}{2 \pi t- 2 \pi a} \to \varphi' (2 \pi a) \]
as $|t-a| \to 0$ is uniform in $t,a \in \R$. In particular, there exists a constant $r>0$ such that whenever $|t-a|<r$, then
\begin{equation}\label{uniformdiff}
\left| \varphi (2 \pi t) - \varphi (2 \pi a) - \varphi'(2 \pi a) (2 \pi t-2 \pi a) \right| \le \pi K |t-a| .
\end{equation}
Consider the compact set
\[ C = \left\{ (x,y) \in [0,1]^2 \,\, : \,\, \varphi (2 \pi x) = \varphi (2 \pi y) \right\} . \]
Note that $C$ consists of the diagonal $x=y$, the points $(0,1)$, $(1,0)$ and the finite point set $(x_k,y_k)$, $k=1, \dots, n$. Let $(x,y) \in [0,1]^2$ be such that $\textrm{dist} \, ((x,y),C)<r/2$, where $\textrm{dist}$ denotes the distance from a point to a set. There are three cases: $(x,y)$ is either close to the diagonal, to $(0,1)$ or $(1,0)$, or to the point $(x_k,y_k)$ for some $k=1,\dots,n$.
First, assume that the distance of $(x,y)$ from the diagonal is less than $r/2$. Then $|x-y|<r$, thus \eqref{uniformdiff} with $t=x$ and $a=y$ implies
\[ \begin{split} \left| \varphi (2 \pi x) - \varphi (2 \pi y) \right| &\ge |\varphi' (2 \pi y)| \cdot |2 \pi x- 2 \pi y| - \pi K |x-y| \\ &\ge \frac{\pi K}{d} |d(x-y)| \ge \frac{\pi K}{d} \left\| d (x-y) \right\| . \end{split} \]
Assume next that the Euclidean distance from $(x,y)$ to $(0,1)$ is less than $r/2$. Then \eqref{uniformdiff} applies with $t=x$ and $a=y-1$. Using the periodicity of $\varphi$ we thus obtain
\[ \begin{split} |\varphi (2 \pi x) - \varphi (2 \pi y)| &= |\varphi (2 \pi x) - \varphi (2 \pi (y-1))| \\ &\ge |\varphi' (2 \pi (y-1))| \cdot |2 \pi x- 2 \pi (y-1)| - \pi K |x-(y-1)| \\ &\ge \frac{\pi K}{d} |d(x-y)+d| \ge \frac{\pi K}{d} \left\| d (x-y) \right\| . \end{split} \]
A similar estimate holds when the distance from $(x,y)$ to $(1,0)$ is less than $r/2$. Finally, assume that the distance from $(x,y)$ to $(x_k,y_k)$ is less than $r/2$ for some $k=1, \dots, n$. In this case \eqref{uniformdiff} applies with $t=x$ and $a=x_k$, and also with $t=y$ and $a=y_k$. Since $\varphi (2 \pi x_k)=\varphi (2 \pi y_k)$, we have
\[ \begin{split} \left| \varphi (2 \pi x) - \varphi (2 \pi y) \right| \ge &\left| \varphi' (2 \pi x_k) (2 \pi x-2 \pi x_k) - \varphi' (2 \pi y_k) (2 \pi y-2 \pi y_k) \right| \\ &- \pi K |x-x_k| - \pi K |y-y_k| . \end{split} \]
Applying \eqref{derivativeest} with $u=x-x_k$ and $v=y-y_k$ we obtain
\[ \begin{split} \left| \varphi (2 \pi x) - \varphi (2 \pi y) \right| &\ge \pi K \left( |x-x_k|+|y-y_k| \right) \\ &\ge \frac{\pi K}{d} | d(x-y)-d(x_k-y_k) | \ge \frac{\pi K}{d} \left\| d (x-y) \right\| . \end{split} \]
Altogether we have shown that for any $(x,y) \in [0,1]^2$ such that $\textrm{dist}\, ((x,y),C)<r/2$ we have
\[ |\varphi (2 \pi x) - \varphi (2 \pi y)| \ge \frac{\pi K}{d} \left\| d (x-y) \right\| . \]
Using the compactness of the corresponding set it is easy to see that for any $(x,y) \in [0,1]^2$ such that $\textrm{dist}\, ((x,y),C) \ge r/2$ we have
\[ |\varphi (2 \pi x) - \varphi (2 \pi y)| \ge c' \left\| d (x-y) \right\| \]
with some constant $c'>0$. Hence \eqref{secondcond} is satisfied with $c=\min \left\{ \pi K/d, c' \right\}$ for any $(x,y) \in [0,1]^2$. By the periodicity of $\varphi$, \eqref{secondcond} is therefore satisfied for all $x,y \in \R$.
\end{proof}
\begin{cor}\label{corollary} Let $X_1$ be a random variable with characteristic function $\varphi$. Suppose that $\P (X_1=a)=\P (X_1=b)=1/2$ for some $a,b \in \mathbb{Z}$ with $a \not\equiv b \pmod{2}$. Then there exist a real number $c>0$ and an integer $d>0$ such that \eqref{secondcond} holds for any $x,y \in \R$.
\end{cor}
\begin{proof} We will show that $X_1$ satisfies the conditions of Proposition \ref{secondcondprop}. The characteristic function of $X_1$ is
\[ \varphi (t) = \frac{1}{2} e^{iat} + \frac{1}{2} e^{ibt} = e^{i \frac{a+b}{2}t} \cos \left( \frac{a-b}{2} t \right) . \]
First, note that
\[ |\varphi' (t)| = \frac{1}{2} \left| a e^{iat} + b e^{ibt} \right| \ge \frac{1}{2} \left| |a|-|b| \right| \ge \frac{1}{2} , \]
therefore $\varphi' \neq 0$ anywhere on $\R$.
As in the proof of Proposition \ref{secondcondprop}, we may assume that $a$ and $b$ are relatively prime, i.e.\ that the smallest period of $\varphi (2 \pi x)$ is 1. Observe that $a \not\equiv b \pmod{2}$ implies that $a-b$ and $a+b$ are also relatively prime.
Consider the equation $\varphi (2 \pi x) = \varphi (2 \pi y)$, $x \neq y$, which is equivalent to
\begin{equation}\label{phi=phi}
e^{\pi i (a+b)(x-y)} \cos \left( \pi (a-b) x \right) = \cos \left( \pi (a-b) y \right) , \qquad x \neq y .
\end{equation}
We have
\begin{equation}\label{phi'/phi'}
\frac{\varphi' (2 \pi x)}{\varphi' (2 \pi y)} = e^{\pi i (a+b)(x-y)} \frac{i(a+b) \cos (\pi (a-b)x) - (a-b) \sin (\pi (a-b)x)}{i(a+b) \cos (\pi (a-b)y) - (a-b) \sin (\pi (a-b)y)} .
\end{equation}
We distinguish two cases in \eqref{phi=phi}: either $\cos (\pi (a-b)x) = \cos (\pi (a-b)y)=0$, or $\exp (\pi i (a+b)(x-y)) \in \R$. The first case gives finitely many solutions $(x_k,y_k)$ within a period $[0,1)$, each of which satisfies $(a-b)(x_k-y_k) \in \mathbb{Z}$. Since $\sin (\pi (a-b)x_k)$ and $\sin (\pi (a-b)y_k)$ are both $\pm 1$, for these solutions \eqref{phi'/phi'} simplifies to
\[ \frac{\varphi' (2 \pi x_k)}{\varphi' (2 \pi y_k)} = \pm e^{\pi i (a+b) (x_k-y_k)} . \]
By way of contradiction, suppose that this ratio is purely real. Then $(a+b)(x_k-y_k) \in \mathbb{Z}$. Since $a-b$ and $a+b$ are relatively prime, the integrality of $(a-b)(x_k-y_k)$ and $(a+b)(x_k-y_k)$ implies that $x_k-y_k$ is also an integer. This is impossible for $x_k, y_k$ in the period interval $[0,1)$.
Finally, suppose $\exp (\pi i (a+b)(x-y)) \in \R$. It is easy to see that in this case \eqref{phi=phi} also gives finitely many solutions $(x_{\ell},y_{\ell})$ in $[0,1)$, each of which satisfies $(a+b)(x_{\ell}-y_{\ell}) \in \mathbb{Z}$. Since $\exp (\pi i (a+b)(x_{\ell}-y_{\ell}))=\pm 1$, \eqref{phi'/phi'} is purely real if and only if
\begin{multline*}
- \cos (\pi (a-b)x_{\ell}) \sin (\pi (a-b)y_{\ell}) + \cos (\pi (a-b)y_{\ell}) \sin (\pi (a-b)x_{\ell}) \\ = \sin (\pi (a-b)(x_{\ell} - y_{\ell})) =0,
\end{multline*}
which is equivalent to $(a-b)(x_{\ell}-y_{\ell}) \in \mathbb{Z}$. Since $a-b$ and $a+b$ are relatively prime, $(a+b)(x_{\ell} - y_{\ell}) \in \mathbb{Z}$ and $(a-b)(x_{\ell} - y_{\ell}) \in \mathbb{Z}$ would imply $x_{\ell}-y_{\ell} \in \mathbb{Z}$, which is impossible for $x_{\ell},y_{\ell}$ in the period interval $[0,1)$. Therefore the solutions $(x_{\ell}, y_{\ell})$ also satisfy $\varphi' (2 \pi x_{\ell})/\varphi' (2 \pi y_{\ell}) \not\in \R$.
\end{proof}
The simplest case in which the ``rationality'' and the ``linear independence'' conditions on the self-intersection points of $\varphi$ in Proposition \ref{secondcondprop} hold is when $\varphi$ is a simple closed curve, i.e.\ when there are no self-intersection points at all. If $X_1=1$ a.s., then $\varphi (2 \pi x)$ parametrizes the unit circle. Thus if $X_1=1$ has a high enough probability, then $\varphi (2 \pi x)$ will look like a slightly ``deformed'' circle, and we can hope that this slight deformation will not introduce any self-intersection points. It is very easy to turn this idea into a precise proof as follows.
\begin{prop} Let $X_1$ be an integer-valued random variable such that $\E |X_1| < 2 \P (X_1=1)$. Then the characteristic function $\varphi$ of $X_1$ satisfies \eqref{secondcond} with $c=8 \P (X_1=1)- 4\E |X_1|>0$ and $d=1$.
\end{prop}
\begin{proof} We give a direct proof without using Proposition \ref{secondcondprop}. We have
\[ \begin{split} |\varphi (2 \pi x) - \varphi (2 \pi y)| &= \left| \E \left( e^{2 \pi i X_1 x} - e^{2 \pi i X_1 y} \right) \right| \\ &\ge \P (X_1=1) |e^{2 \pi i x} - e^{2 \pi i y}| - \E \left( |e^{2 \pi i X_1 x} - e^{2 \pi i X_1 y}| I_{\{ X_1 \neq 1\}} \right) . \end{split} \]
Using
\[ |e^{2 \pi i X_1 x} - e^{2 \pi i X_1 y}| \le |X_1| \cdot |e^{2 \pi i x} - e^{2 \pi i y}| \]
and
\[ \E \left( |X_1| I_{\{ X_1 \neq 1 \}} \right) = \E |X_1| - \P (X_1=1) \]
we deduce
\[ |\varphi (2 \pi x) - \varphi (2 \pi y)| \ge \left( 2 \P (X_1=1) - \E |X_1| \right) |e^{2 \pi i x} - e^{2 \pi i y}| . \]
Finally, note that
\[ |e^{2 \pi i x} - e^{2 \pi i y}| = 2 |\sin (\pi (x-y))| \ge 4 \| x-y \| . \]
\end{proof}
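For instance (an ad hoc illustration), if $\P (X_1=1)=9/10$ and $\P (X_1=-1)=1/10$, then $\E |X_1| = 1 < 2 \P (X_1=1) = 9/5$, and the proposition gives \eqref{secondcond} with $c = 8 \cdot \tfrac{9}{10} - 4 = \tfrac{16}{5}$ and $d=1$.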
\section{A Diophantine sum}
To study the discrepancy of the sequence $\{ S_k \alpha \}$, we will combine the Erd\H{o}s--Tur\'an inequality and our estimates for the high moments of an exponential sum in Proposition \ref{momentestimate}. In order to proceed, it will be necessary to estimate sums of the form
\begin{equation}\label{diophsum}
\sum_{h=1}^H \frac{1}{h \left\| h \alpha \right\|^b}
\end{equation}
where $\alpha$ is a given irrational and $0<b \le 1$. Note that in the proof of Theorem \ref{theoremA}, $b$ will be $\beta/2$ in (i), while $b$ will be $1/2$ in (ii). The behavior of the sum \eqref{diophsum} depends on the Diophantine approximation properties of $\alpha$, i.e.\ on how well $\alpha$ can be approximated by rational numbers with small denominators. These properties are encoded in the continued fraction representation of $\alpha$, therefore it is natural to use the theory of continued fractions to estimate \eqref{diophsum}.
Recall that any irrational $\alpha$ has a unique continued fraction representation
\[ \alpha = [a_0;a_1,a_2, \dots ] = a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cdots}} \]
where $a_0$ is an integer and $a_i$ is a positive integer for $i \ge 1$. By truncating the infinite continued fraction we obtain the rational numbers
\[ \frac{p_n}{q_n} = [a_0;a_1,a_2, \dots , a_{n-1}] = a_0 + \cfrac{1}{a_1+\cfrac{1}{a_2 + \cfrac{1}{\cdots + \cfrac{1}{a_{n-1}}}}} , \]
for all $n \ge 1$, called the \textit{convergents} to $\alpha$. Their main relevance is that in a certain sense they are the ``best'' rational approximations of $\alpha$.
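For example, $\sqrt{2}=[1;2,2,2,\dots]$, and its first few convergents are
\[ \frac{1}{1}, \ \frac{3}{2}, \ \frac{7}{5}, \ \frac{17}{12}, \ \frac{41}{29}, \ \frac{99}{70}, \dots ; \]
thus, e.g., $\left\| 12 \sqrt{2} \right\| = |12\sqrt{2}-17| \approx 0.029 \le 1/29$.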
The fact that $p_n/q_n$ is ``close'' to $\alpha$ implies that $q_n \alpha$ is ``close'' to an integer (namely $p_n$). This gives us the intuition that the largest terms of the sum \eqref{diophsum} are those for which $h=q_n$ for some $n$. Since $1/(h \left\| h \alpha \right\|^b) \ge 1/h$, the best we can hope for is that the contribution of all other terms is at most a constant times $\log H$. We can turn this intuition into a precise statement:
\begin{prop}\label{generaldioph} Let $\alpha = [a_0;a_1,a_2, \dots]$ be the continued fraction representation of an irrational number $\alpha$, and let $p_n/q_n=[a_0;a_1,a_2, \dots , a_{n-1}]$ denote its convergents. For any $0<b \le 1$,
\[ \sum_{0<h<q_n} \frac{1}{h \left\| h \alpha \right\|^b} = O \left( \log^s q_n + \sum_{0<k<n} \frac{1}{q_k \left\| q_k \alpha \right\|^b} \right) , \]
where $s=1$ if $0<b<1$, and $s=2$ if $b=1$. The implied constant depends only on $\alpha$ and $b$.
\end{prop}
To prove Proposition \ref{generaldioph} we need certain facts from the theory of continued fractions. For a proof see e.g.\ \cite{JWC}.
\begin{prop}\label{facts} The convergents $p_n/q_n=[a_0;a_1,a_2, \dots , a_{n-1}]$ of an arbitrary irrational number $\alpha=[a_0;a_1,a_2, \dots]$ satisfy the following:
\begin{enumerate}
\item[(i)] For any $n \ge 2$ we have $\frac{1}{q_{n+1}+q_n} \le \left\| q_n \alpha \right\| = |q_n \alpha - p_n| \le \frac{1}{q_{n+1}}$.
\item[(ii)] For any $n \ge 1$ we have $q_n \alpha - p_n = (-1)^{n+1} |q_n \alpha - p_n|$.
\item[(iii)] The denominators of the convergents satisfy the recurrence $q_{n+1}=a_n q_n + q_{n-1}$ with initial conditions $q_1=1$, $q_2=a_1$.
\item[(iv)] For any $n \ge 2$ we have $p_n q_{n-1}-q_n p_{n-1} = (-1)^n$. In particular, $p_n$ and $q_n$ are relatively prime.
\end{enumerate}
\end{prop}
\qed
\begin{proof}[Proof of Proposition \ref{generaldioph}] Let $k \ge 3$, and consider the sum
\begin{equation}\label{qkqk+1}
\sum_{q_k \le h < q_{k+1}} \frac{1}{h \left\| h \alpha \right\|^b}.
\end{equation}
Let $\e_k = q_k \alpha - p_k$. Note that $\left\| h \alpha \right\| = \left\| hp_k/q_k + h \e_k/q_k \right\|$. Here $hp_k/q_k$ is an integer multiple of $1/q_k$, and $\left| h \e_k/q_k \right| < q_{k+1}|\e_k|/q_k \le 1/q_k$ for any $q_k \le h < q_{k+1}$. The assumption $k \ge 3$ ensures $q_k \ge 2$. Hence $\left\| h \alpha \right\|$ is basically determined by the residue class of $hp_k$ modulo $q_k$. Since $\textrm{sign } \e_k = (-1)^{k+1}$, the residue classes $0$ and $(-1)^k$ will require special treatment. It is thus natural to decompose the sum \eqref{qkqk+1} using the index sets
\[ \begin{split} A &= \left\{ q_k \le h < q_{k+1} \,\, : \,\, h p_k \equiv 0 \pmod{q_k} \right\} ,\\
B &= \left\{ q_k \le h < q_{k+1} \,\, : \,\, h p_k \equiv (-1)^k \pmod{q_k} \right\} ,\\
C &= \left\{ q_k \le h < q_{k+1} \,\, : \,\, h p_k \not\equiv 0, (-1)^k \pmod{q_k} \right\} . \end{split} \]
First, consider the sum over $h \in A$. Since $p_k$ and $q_k$ are relatively prime, $A$ only contains integral multiples of $q_k$. For any $h=aq_k \in A$, $a \ge 1$ we thus have
\[ \left\| h \alpha \right\| = \left\| \frac{0}{q_k} + \frac{a q_k \e_k}{q_k} \right\| = a |\e_k| = a \left\| q_k \alpha \right\| , \]
and therefore
\begin{equation}\label{sumoverA}
\sum_{h \in A} \frac{1}{h \left\| h \alpha \right\|^b} \le \sum_{a=1}^{\infty} \frac{1}{a q_k \left( a \left\| q_k \alpha \right\| \right)^b} = O \left( \frac{1}{q_k \left\| q_k \alpha \right\|^b} \right) .
\end{equation}
Next, let us estimate the sum over $h \in B$. By taking the equation $p_k q_{k-1} - q_k p_{k-1}=(-1)^k$ from Proposition \ref{facts} (iv) modulo $q_k$, we find that the multiplicative inverse of $p_k$ modulo $q_k$ is $(-1)^kq_{k-1}$, hence every element of $B$ is congruent to $q_{k-1}$ modulo $q_k$. In fact $B=\{ aq_k + q_{k-1} \, : \, 1 \le a \le a_{k}-1 \}$, since $a_k q_k+q_{k-1}=q_{k+1}$ is outside the interval $q_k \le h<q_{k+1}$. From Proposition \ref{facts} (i, iii) we deduce that $a_k q_k |\e_k| \le 1-q_{k-1}|\e_k|$. For any $h=aq_k+q_{k-1} \in B$ we thus have
\[ \left\| h \alpha \right\| = \left\| \frac{(-1)^k}{q_k} + \frac{h \e_k}{q_k} \right\| = \frac{1-(aq_k+q_{k-1})|\e_k|}{q_k} \ge (a_k-a)|\e_k| . \]
Therefore
\begin{equation}\label{sumoverB}
\sum_{h \in B} \frac{1}{h \left\| h \alpha \right\|^b} \le \sum_{a=1}^{a_k-1} \frac{1}{a q_{k} \left( (a_k-a) \left\| q_k \alpha \right\| \right)^b} = O \left( \frac{1}{q_k \left\| q_k \alpha \right\|^b} \right) .
\end{equation}
Finally, we need to estimate the sum over $h \in C$. The congruence conditions in the definition of $C$ imply that for any $h \in C$,
\[ \left\| h \alpha \right\| = \left\| \frac{hp_k}{q_k} + \frac{h \e_k}{q_k} \right\| \ge \frac{1}{2} \left\| \frac{hp_k}{q_k} \right\| . \]
For any integer $a \ge 1$ we therefore have
\begin{equation}\label{aqka+1qk}
\sum_{\substack{aq_k \le h < (a+1)q_k \\ h \in C}} \frac{1}{h \left\| h \alpha \right\|^b} \le \sum_{aq_k < h < (a+1)q_k} \frac{2^b}{aq_k \left\| hp_k/q_k \right\|^b} .
\end{equation}
Since $p_k$ and $q_k$ are relatively prime, as $h$ runs in the interval $aq_k < h < (a+1)q_k$, the numbers $h p_k$ attain each nonzero residue class modulo $q_k$ exactly once. Considering the cases $0<b<1$ and $b=1$ separately, we find that the right hand side of \eqref{aqka+1qk} can hence be estimated as
\[ \frac{2^b}{a q_k} \sum_{j=1}^{q_k-1} \frac{1}{\left\| j/q_k \right\|^b} \le \frac{2 \cdot 2^b}{a q_k} \sum_{1 \le j \le q_k/2} \frac{1}{\left( j/q_k \right)^b} = O \left( \frac{\log^{s-1} q_k}{a} \right) . \]
Summing over $1 \le a \le a_k$ we obtain
\begin{equation}\label{sumoverC}
\sum_{h \in C} \frac{1}{h \left\| h \alpha \right\|^b} = O \left( \log^{s-1} q_k \log a_k \right) .
\end{equation}
Adding \eqref{sumoverA}, \eqref{sumoverB} and \eqref{sumoverC} we get
\[ \sum_{q_k \le h < q_{k+1}} \frac{1}{h \left\| h \alpha \right\|^b} = O \left( \log^{s-1} q_k \log a_k + \frac{1}{q_k \left\| q_k \alpha \right\|^b} \right) . \]
Summing over $3 \le k \le n-1$ we obtain
\begin{align}\label{0hqn}
&\sum_{0<h<q_n} \frac{1}{h \left\| h \alpha \right\|^b} \\
&\phantom{9999} = \sum_{0<h<q_3} \frac{1}{h \left\| h \alpha \right\|^b} + O \left( \sum_{k=3}^{n-1} \left( \log^{s-1} q_k \log a_k + \frac{1}{q_k \left\| q_k \alpha \right\|^b} \right) \right) \nonumber.
\end{align}
Here the sum over $0<h<q_3$ is $O(1)$, because $q_3$ is a constant depending only on $\alpha$. The recurrence in Proposition \ref{facts} (iii) shows $q_{n} \ge a_{n-1} q_{n-1}$, and iterating this inequality we get
\[ q_n \ge a_{n-1} a_{n-2} \cdots a_3 q_3 . \]
Hence $\sum_{k=3}^{n-1} \log^{s-1} q_k \log a_k = O (\log^s q_n)$, and so \eqref{0hqn} simplifies to
\[ \sum_{0<h<q_n} \frac{1}{h \left\| h \alpha \right\|^b} = O \left( \log^s q_n + \sum_{0<k<n} \frac{1}{q_k \left\| q_k \alpha \right\|^b} \right) . \]
\end{proof}
\begin{cor}\label{gammadioph} Let $\alpha$ be irrational and $0<b \le 1$. Suppose there exist constants $\gamma \ge 1$ and $C>0$ such that $\left\| q \alpha \right\| \ge C q^{- \gamma}$ for every $q \in \mathbb{N}$. Then
\[ \sum_{h=1}^H \frac{1}{h \left\| h \alpha \right\|^b} = \left\{ \begin{array}{ll} O \left( \log^s H \right) & \textrm{if } \gamma \le 1/b, \\ O \left( H^{b \gamma -1} \right) & \textrm{if } \gamma > 1/b, \end{array} \right. \]
where $s=1$ if $0<b<1$, and $s=2$ if $b=1$. The implied constants depend only on $\alpha$, $b$ and $\gamma$.
\end{cor}
\begin{proof} Let $p_n/q_n$ denote the convergents to $\alpha$. Consider the two consecutive convergent denominators such that $q_{n-1} \le H < q_n$. Proposition \ref{generaldioph} implies
\begin{equation}\label{generaldiophest}
\sum_{h=1}^H \frac{1}{h \left\| h \alpha \right\|^b} = O \left( \log^s q_n + \sum_{0<k<n} \frac{1}{q_k \left\| q_k \alpha \right\|^b} \right) .
\end{equation}
Proposition \ref{facts} (i) shows that $C q_{n-1}^{- \gamma} \le \left\| q_{n-1} \alpha \right\| \le 1/q_n$. Rearranging we get $q_n \le q_{n-1}^{\gamma}/C \le H^{\gamma}/C$. Therefore the first error term in \eqref{generaldiophest} satisfies $\log^s q_n = O \left( \log^s H \right)$.
In the second error term in \eqref{generaldiophest} we have
\[ \frac{1}{q_k \left\| q_k \alpha \right\|^b} \le C^{-b} q_k^{b \gamma -1} = O \left( q_k^{b \gamma -1} \right) . \]
If $\gamma \le 1/b$, then
\[ \sum_{0<k<n} \frac{1}{q_k \left\| q_k \alpha \right\|^b} = O \left( \sum_{0<k<n} q_k^{b \gamma -1} \right) = O \left( n \right) . \]
The recurrence in Proposition \ref{facts} (iii) shows that $q_n$ is at least as large as the $n$th Fibonacci number, therefore $n=O (\log q_{n-1}) = O \left( \log H \right)$. Hence \eqref{generaldiophest} simplifies to
\[ \sum_{h=1}^H \frac{1}{h \left\| h \alpha \right\|^b} = O \left( \log^s H + \log H \right) = O \left( \log^s H \right) . \]
Finally, assume $\gamma >1/b$. Proposition \ref{facts} (iii) shows that $q_{k+2} \ge q_{k+1}+q_k \ge 2 q_k$. In particular, any interval of the form $\left[ 2^{\ell},2^{\ell+1}\right)$ contains at most two convergent denominators. Hence
\[ \begin{split} \sum_{0<k<n} \frac{1}{q_k \left\| q_k \alpha \right\|^b} &= O \left( \sum_{0<k<n} q_k^{b \gamma -1} \right) = O \left( \sum_{\substack{\ell \\ 2^{\ell} \le q_{n-1}}} \sum_{2^{\ell} \le q_k < 2^{\ell +1}} q_k^{b \gamma -1} \right) \\ &= O \left( \sum_{\substack{\ell \\ 2^{\ell} \le H}} 2^{(\ell +1) (b \gamma -1)} \right) = O \left( H^{b \gamma -1} \right) . \end{split} \]
Thus in this case \eqref{generaldiophest} gives
\[ \sum_{h=1}^H \frac{1}{h \left\| h \alpha \right\|^b} = O \left( \log^s H + H^{b \gamma -1} \right) = O \left( H^{b \gamma -1} \right) . \]
\end{proof}
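For instance, a quadratic irrational such as $\sqrt{2}$ has bounded partial quotients, hence $\left\| q \alpha \right\| \ge C q^{-1}$ for all $q \in \mathbb{N}$; since $\gamma=1 \le 1/b$ for every $0<b \le 1$, Corollary \ref{gammadioph} then gives the bound $O(\log^s H)$. If instead $\left\| q \alpha \right\| \ge C q^{-3}$ is the best available exponent and $b=1/2$, the corollary only yields $O(H^{1/2})$. The logarithmic behaviour in the badly approximable case is also visible numerically; the following Python snippet is purely illustrative, with $\alpha=\sqrt{2}$, $b=1/2$ and the cutoffs $H$ chosen ad hoc.
\begin{verbatim}
from math import sqrt, log

def dist(t):
    """Distance from t to the nearest integer."""
    return abs(t - round(t))

# Ad hoc illustration: alpha = sqrt(2) is badly approximable (gamma = 1), and
# with b = 1/2 the sum (diophsum) is expected to grow like log H (s = 1).
alpha = sqrt(2)
b = 0.5

for H in [10 ** 3, 10 ** 4, 10 ** 5]:
    total = sum(1.0 / (h * dist(h * alpha) ** b) for h in range(1, H + 1))
    print("H =", H, "  sum =", round(total, 1),
          "  sum / log H =", round(total / log(H), 2))
\end{verbatim}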
\section{Proof of the upper bounds}
In what follows, $K$ will denote positive constants, not always the same, depending (at most) on $\alpha$ and the distribution of $X_1$.
We first show
\begin{lem}\label{lem:1} Let $X_1, X_2, \ldots$ and $\alpha$ be as in Theorem \ref{theoremA} and assume (\ref{firstcond}).
Then for any integers $\ell \ge 0$ and $p\ge 1$ we have
\begin{multline}\label{5}
\left\| \max_{2^{\ell}\le N\le 2^{\ell +1}} ND_N(\{S_k\alpha\})\right\|_{2p} \\ \le K \left( 2^{\ell (1-\delta)}p^\delta +2^{\ell /2} \sqrt{p} \sum_{h=1}^{[K 2^{(\ell +1) \delta}/p^\delta]} \frac{1}{h\|dh\alpha\|^{\beta/2}} \right)
\end{multline}
where $\delta=1/(\beta\gamma)$. If instead of (\ref{firstcond}) we assume (\ref{secondcond}), then (\ref{5}) holds with $\beta=1$.
\end{lem}
\begin{proof} Assume first (\ref{firstcond}). Then
by Proposition \ref{momentestimate} (i), for any integers $m\ge 0$ and $n,h,p\ge 1$ we have
\begin{equation}\label{1}
\mathbb E \left|\sum_{k=m+1}^{m+n} e^{2\pi i S_k h\alpha}\right|^{2p} \le (8p)^{2p} \max_{1\le r\le p} \frac{n^r}{r! \left( c\|d h \alpha\|^\beta\right)^{2p-r}}.
\end{equation}
Let
\begin{equation}\label{H}
H_n= C^{1/\gamma}d^{-1}(cn/p)^{1/(\beta\gamma)}.
\end{equation}
We claim that for any $1\le h\le H_n$ and $0\le r<p$ we have
\begin{equation}\label{incr}
\frac {n^r}{r! \left(c\|dh\alpha\|^\beta\right)^{2p-r}}\le \frac {n^{r+1}}{(r+1)! \left(c\|dh\alpha\|^\beta\right)^{2p-r-1}}.
\end{equation}
To see this, we note that (\ref{incr}) is equivalent to $r+1\le n c \|dh\alpha\|^\beta$ and for $1\le h \le H_n$ and $0\le r<p$ we know by (\ref{H}) and the assumptions of Theorem \ref{theoremA} that
$$ \|dh\alpha\|^\beta \ge C^\beta (dh)^{-\beta\gamma} \ge C^\beta (dH_n)^{-\beta\gamma} =p/(cn) \ge (r+1)/(cn).$$
Thus the maximum in (\ref{1}) is reached for $r=p$ and consequently
\begin{equation}\label{mom1}
\left\| \sum_{k=m+1}^{m+n} e^{2\pi i S_k h\alpha}\right\|_{2p} \le K \sqrt{np} \frac{1}{\|dh\alpha\|^{\beta/2}}
\end{equation}
for all $m\ge 0$, $n,p \ge 1$ and $1\le h\le H_n$. Set now
$$D_N(\alpha)=D_N(\{S_k\alpha\}), \quad T_h(N, \alpha)=\sum_{k=1}^N e^{2\pi i h S_k \alpha}.$$
By the Erd\H{o}s--Tur\'an inequality we
have
$$
ND_N(\alpha)\le 6\left(\frac{N}{[H_N]} + \sum_{h=1}^{[H_N]} \frac{1}{h} |T_h(N, \alpha)|\right)
$$
and consequently
\begin{equation}\label{6}
\max_{2^{\ell}\le N\le 2^{\ell +1}}ND_N(\alpha) \le K\left( 2^{\ell (1-\delta)}p^\delta + \sum_{h=1}^{[K 2^{(\ell +1)\delta}/p^\delta]} \frac{1}{h}\max_{2^{\ell} \le N\le 2^{\ell +1}} |T_h(N, \alpha)| \right).
\end{equation}
(Note that $N/[H_N]\sim Kp^\delta N^{1-\delta}$ and thus its maximum for $2^{\ell}\le N\le 2^{\ell+1}$ is $\le K 2^{\ell(1-\delta)}p^\delta$.)
By (\ref{mom1})
\begin{equation}\label{7}
\|T_h(N, \alpha)\|_{2p} \le K \sqrt{Np} \frac{1}{ \|d h\alpha\|^{\beta/2}}.
\end{equation}
Since this remains valid for shifted sums $T_h(N, M, \alpha)=\sum_{k=M+1}^{M+N} e^{2\pi ihS_k \alpha}$ as well, the
Erd\H{o}s--Stechkin inequality \cite{MO} yields
\begin{equation*}
\left\|\max_{2^{\ell}\le N\le 2^{\ell +1}}T_h(N, \alpha)\right\|_{2p} \le K 2^{\ell /2} \sqrt{p} \frac{1}{\|dh\alpha\|^{\beta/2}} .
\end{equation*}
Substituting this in (\ref{6}) it follows that
$$\left\| \max_{2^{\ell}\le N\le 2^{\ell +1}} ND_N(\alpha)\right\|_{2p} \le K \left(2^{\ell (1-\delta)}p^\delta +2^{\ell/2} \sqrt{p} \sum_{h=1}^{[K 2^{(\ell+1)\delta}/p^\delta]} \frac{1}{h\|dh\alpha\|^{\beta/2}}\right),$$
and thus (\ref{5}) is proved under condition (\ref{firstcond}) in Theorem \ref{theoremA}.
If instead of (\ref{firstcond}) we assume (\ref{secondcond}), the proof of (\ref{5}) is essentially the same. In this case in Proposition \ref{momentestimate} we have (\ref{mom(ii)}) instead of (\ref{mom(i)}), which implies, in view of the monotonicity relation (\ref{incr}), that (\ref{mom1}) remains valid with $\beta=1$ and a different constant $K$. The rest of the proof of (\ref{5}) requires no change.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theoremA}]
Assume first that (\ref{firstcond}) holds. We will deal with the cases $\gamma>2/\beta$ and $1 \le \gamma\le 2/\beta$ separately.
Assume $\gamma>2/\beta$. Then $\delta<1/2$ and by Lemma \ref{lem:1} and Corollary \ref{gammadioph},
\[ \begin{split} \left\| \max_{2^{\ell}\le N\le 2^{\ell +1}} ND_N(\{S_k\alpha\})\right\|_{2p} &\le K \left(2^{\ell (1-\delta)}p^\delta + 2^{\ell /2} \sqrt{p} \left(2^{(\ell+1)\delta}/p^\delta\right)^{\beta\gamma/2-1}\right)\\
&=K \left(2^{\ell (1-\delta)}p^\delta + 2^{\ell /2} \sqrt{p}\, 2^{(\ell +1)(1/2-\delta)}p^{\delta-1/2}\right)\\
&\le K 2^{\ell (1-\delta)} p^\delta \end{split} \]
for any integers $\ell \ge 0$ and $p \ge 1$. Choosing $p\sim \log \ell$ and using the Markov inequality we get, for a sufficiently large constant $B>0$,
\[ \P \left( \max_{2^{\ell}\le N\le 2^{\ell+1}} ND_N(\{S_k\alpha\})\ge B 2^{\ell (1-\delta)} p^\delta \right)\le \left( \frac{K2^{\ell (1-\delta)} p^\delta}{B2^{\ell (1-\delta)} p^\delta} \right)^{2p} \le 4^{-2p}\le \ell^{-2}.\]
Using the Borel--Cantelli lemma we get
$$ \max_{2^{\ell}\le N\le 2^{\ell +1}} ND_N(\{S_k\alpha\}) =O\left( 2^{\ell (1-\delta)} p^\delta \right)=O\left( 2^{\ell (1-\delta)} (\log\log 2^{\ell})^\delta \right) \qquad \text{a.s.},$$
proving the second estimate in (\ref{main}).
Assume now $1\le \gamma\le 2/\beta$. Then $\delta\ge 1/2$ and thus using Lemma \ref{lem:1} and Corollary \ref{gammadioph} we get
\[ \left\| \max_{2^{\ell}\le N\le 2^{\ell +1}} ND_N(\{S_k\alpha\})\right\|_{2p} \le K \left(2^{\ell /2}p^\delta+2^{\ell /2} \sqrt{p}\, \ell^s \right) \le K 2^{\ell /2} \ell^s \sqrt{p} \]
for any integers $\ell \ge 1$, $1 \le p \le \ell^{s/\delta}$, where $s=1$ if $0<\beta<2$, and $s=2$ if $\beta =2$. Choosing again $p\sim \log \ell$ and using the Markov inequality we get, for a sufficiently large constant $B$,
\begin{equation*}
\P \left( \max_{2^{\ell} \le N\le 2^{\ell +1}} ND_N( \{S_k\alpha\}) \ge B 2^{\ell /2} \ell^s \sqrt{p} \right) \le 4^{-2p}\le \ell^{-2}.
\end{equation*}
Hence the Borel--Cantelli lemma yields the first estimate in (\ref{main}).
If in Theorem \ref{theoremA} we assume (\ref{secondcond}), the argument is the same, using the fact that in this case by Lemma \ref{lem:1} we have (\ref{5}) with $\beta=1$.
\end{proof}
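The rates in Theorem \ref{theoremA} can also be observed in simulation. The following Python snippet is illustrative only: it simulates sample paths with gaps uniform on $\{1,2\}$ and $\alpha=\sqrt{2}$ (so that part (ii) applies with $\gamma=1$), and it computes the star discrepancy $D_N^*$, which satisfies $D_N^* \le D_N \le 2 D_N^*$; the sample sizes are ad hoc. One expects $\sqrt{N} D_N$ to grow at most logarithmically.
\begin{verbatim}
import numpy as np

# Illustrative simulation (not part of the proofs): discrepancy of {S_k alpha}
# for i.i.d. gaps uniform on {1, 2} and alpha = sqrt(2).  The star discrepancy
# D_N^* is computed; it satisfies D_N^* <= D_N <= 2 D_N^*.
rng = np.random.default_rng(0)
alpha = np.sqrt(2)

def star_discrepancy(points):
    """Star discrepancy of a finite point set in [0, 1)."""
    x = np.sort(points)
    N = len(x)
    i = np.arange(1, N + 1)
    return max(np.max(i / N - x), np.max(x - (i - 1) / N))

for N in [10 ** 3, 10 ** 4, 10 ** 5]:
    gaps = rng.integers(1, 3, size=N)          # X_k uniform on {1, 2}
    S = np.cumsum(gaps)
    pts = (S * alpha) % 1.0
    D = star_discrepancy(pts)
    print("N =", N, "  D_N^* =", round(D, 5),
          "  sqrt(N) * D_N^* =", round(np.sqrt(N) * D, 2))
\end{verbatim}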
Corollary \ref{corollary} shows that the random variable with distribution $\P (X_1=1)= \P (X_1=2)=1/2$ satisfies the conditions of Theorem \ref{theoremA} (ii), proving the upper bounds in Proposition \ref{mainprop1}. To see the upper bounds in Proposition \ref{mainprop2} note that the condition $c_1 x^{-\beta} \le \P (|X_1| \ge x)$ clearly implies \eqref{heavytail}, and so according to Proposition \ref{examplesprop} (ii), Theorem \ref{theoremA} (i) applies. Finally, the upper bounds in Proposition \ref{mainprop3} follow from Theorem \ref{theoremA} (i) with $\beta =2$ and Proposition \ref{examplesprop} (i).
\section{Proof of the lower bounds}
We start by proving two general lower bounds of independent interest.
\begin{lem}\label{proplower2} Let $X_1, X_2, \ldots$ be integer-valued random variables, let $S_k=\sum_{j=1}^k X_j$, and let $\alpha \in \R$ be irrational such that $\left\| q \alpha \right\| \le C q^{-\gamma}$ for infinitely many $q \in \mathbb{N}$ with some constants $\gamma \ge 1$ and $C>0$. Assume that $S_k=O \left( \psi (k) \right)$ a.s., where $\psi (k)$ is a nondecreasing sequence of positive reals. Then
\begin{equation*}
D_N \left( \{ S_k \alpha \} \right) = \Omega \left( \psi (N+1)^{-1/\gamma} \right) \quad \text{a.s.}
\end{equation*}
\end{lem}
Note that here we allow $X_1, X_2, \ldots$ to be degenerate, in which case the sequence $(S_n)$ is a deterministic sequence of integers.
\begin{proof}[Proof of Lemma \ref{proplower2}]
If $\psi (k)=O(1)$, then the sequence $\{ S_k \alpha \}$ takes only finitely many values, and thus trivially $D_N (\{ S_k \alpha \}) = \Omega (1)$ a.s. We may therefore assume $\psi (k) \to \infty$ as $k \to \infty$. Let $K>0$ be a random variable such that $|S_k| \le K \psi (k)$ for every $k \in \mathbb{N}$.
Let $q \in \mathbb{N}$ with $q > (3CK \psi (1))^{1/\gamma}$ be such that $\left\| q \alpha \right\| = |q \alpha -p| \le C q^{-\gamma}$, where $p=p(q)$ denotes the integer closest to $q \alpha$. Let $N=N(q)$ be the largest positive integer such that $\psi (N) < q^{\gamma}/(3CK)$, i.e.\ $\psi (N) < q^{\gamma}/(3CK) \le \psi (N+1)$. Note that
\[ \left| S_k \alpha - \frac{S_k p}{q} \right| = |S_k| \frac{\left\| q \alpha \right\|}{q} \le K \psi (N) \frac{C q^{-\gamma}}{q} < \frac{1}{3q} \]
holds for any $k=1,\dots, N$. This means that $S_k \alpha$ is in the open neighborhood of some integral multiple of $1/q$ with radius $1/(3q)$. In particular, none of the points $\{ S_k \alpha \}$, $k=1, \dots, N$ lies in $\left[ 1/(3q), 2/(3q) \right] \subset [0,1]$. By the definition of discrepancy we thus have
\begin{equation}\label{DNlowerestimate}
D_N (\left\{ S_k \alpha \right\}) \ge \frac{1}{3q} \ge \frac{1}{3 \left( 3 CK \psi (N+1) \right)^{1/\gamma}} .
\end{equation}
Clearly there are only finitely many $q \in \mathbb{N}$ for which $N(q)$ is a given integer, therefore the existence of infinitely many $q \in \mathbb{N}$ with $\left\| q \alpha \right\| \le C q^{-\gamma}$ implies the existence of infinitely many $N \in \mathbb{N}$ for which \eqref{DNlowerestimate} holds.
\end{proof}
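For instance, in the degenerate case $X_k=1$ a.s.\ we have $S_k=k$, so choosing $\psi (k)=k$ in Lemma \ref{proplower2} recovers the classical fact that $D_N (\{ k \alpha \}) = \Omega (N^{-1/\gamma})$ whenever $\left\| q \alpha \right\| \le C q^{-\gamma}$ holds for infinitely many $q \in \mathbb{N}$.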
\begin{lem} \label{proplower3} Let $X_1, X_2, \ldots$ be integer-valued i.i.d.\ random variables with characteristic function $\varphi$, and let $S_k=\sum_{j=1}^k X_j$. Suppose that $|1-\varphi (x)| \le c |x|^{\beta}$ for all $x \in \R$ with some constants $c>0$ and $0<\beta \le 2$. Assume further that $\left\| q \alpha \right\| \le Cq^{-\gamma}$ for infinitely many $q \in \mathbb N$ with some constants $\gamma \ge 1$ and $C>0$. Then
\begin{equation*}
D_N \left( \{ S_k \alpha \} \right) = \Omega \left( N^{-1/(\beta\gamma)} \right) \quad \text{a.s.}
\end{equation*}
\end{lem}
\begin{proof} By the assumption on $\varphi$, the characteristic function of $S_n/n^{1/\beta}$ satisfies $|1-\varphi^n (x/n^{1/\beta})| \le n |1-\varphi (x/n^{1/\beta})| \le c |x|^{\beta}$ for any $x \in \R$. Using a well-known method to estimate the tail probabilities of a random variable in terms of its characteristic function (see e.g.\ \cite[p.\ 171--172, Proposition 8.29]{BRE}) we obtain
\begin{equation}\label{sntails}
\P \left( |S_n|/n^{1/\beta} >t \right) \ll t \int_0^{1/t} \left( 1-\mathrm{Re} \varphi^n (x/n^{1/\beta}) \right) \, \mathrm{d}x \ll c t^{-\beta}
\end{equation}
for any $t>0$ with a universal implied constant.
Let $M_n=\max_{1 \le k \le n} |S_k|$, and let $\mu_k$ denote a median of $S_k$. From \eqref{sntails} we get $|\mu_k| \le c' k^{1/\beta}$ with some constant $c'>0$. L\'evy's inequality (see e.g.\ \cite[p.\ 259]{LO}) and \eqref{sntails} hence give, for all $t>c'$,
\[ \begin{split} \P \left( M_n \ge t n^{1/\beta} \right) &\le \P \left( \max_{1 \le k \le n}|S_k + \mu_{n-k}| \ge (t-c') n^{1/\beta} \right) \\ &\le 2 \P \left( |S_n| \ge (t-c')n^{1/\beta} \right) \\ &\ll c(t-c')^{-\beta} . \end{split} \]
In particular, there exist constants $C_1, C_2>0$ such that $\P \left( M_n \ge C_1 n^{1/\beta} \right) \le 1-C_2$ for all $n \in \mathbb{N}$.
We will use a trivial version of the Borel--Cantelli lemma stating that if $A_1, A_2, \ldots$ are arbitrary events with $\P(A_k)\ge \lambda$ $(k=1, 2, \ldots)$, then with probability $\ge \lambda$, infinitely many $A_k$ will occur. By the assumptions, there exists an infinite subset $H$ of $\mathbb N$ such that $\left\| q \alpha \right\| \le C q^{-\gamma}$ for $q \in H$. For each $q\in H$, let $N=N(q)= [aq^{\beta\gamma}]$, where $a$ is a small constant. Thus letting $A_q= \left\{M_{N(q)}< C_1 N(q)^{1/\beta}\right\}$, we have $\P(A_q) \ge C_2$ for all $q\in H$, and thus with probability $\ge C_2$ infinitely many of the $A_q$, $q\in H$ occur. By the Hewitt--Savage zero-one law (see e.g.\ \cite[p.\ 64, Corollary 3.50]{BRE}), this is actually true with probability 1. Choose now such a $q$; then $\left\| q \alpha \right\| = |q \alpha -p| \le C q^{-\gamma}$, where $p=p(q)$ denotes the integer closest to $q \alpha$. Hence for $N=N(q)$ on the set $A_q$ we have, for any $1\le k\le N$,
\begin{equation}\label{diff}
\left |S_k\alpha-\frac{S_k p}{q}\right|\le C \frac{|S_k|}{q^{\gamma+1}}\le C \frac{|M_N|}{q^{\gamma+1}} \le \frac{C C_1 N^{1/\beta}}{q^{\gamma+1}} \le
\frac{C C_1a^{1/\beta}}{q} \le \frac{1}{3q}
\end{equation}
provided $a$ is small enough. Since the $X_i$ are integer-valued, the points $S_k p/q$ are integer multiples of $1/q$, and thus by (\ref{diff}) each point $S_k \alpha$ $(1\le k\le N)$ lies within $1/(3q)$ of an integer multiple of $1/q$; in particular, none of the points $\{ S_k \alpha \}$ lies in $\left[ 1/(3q), 2/(3q) \right]$. Consequently with probability 1,
\begin{equation*}
D_N (\{S_k \alpha\}) \ge \frac{1}{3q}\ge C_3 N^{-1/(\beta\gamma)}
\end{equation*}
with some constant $C_3>0$ for infinitely many $N$, as stated.
\end{proof}
The lower bounds in Propositions \ref{mainprop1} (i), \ref{mainprop2} (i), \ref{mainprop3} (i) are all special cases of Proposition \ref{generallower}. The lower bound in Proposition \ref{mainprop1} (ii) follows from Lemma \ref{proplower2} with $\psi (k)=k$. The lower bound in Proposition \ref{mainprop3} (ii) is a corollary of Lemma \ref{proplower3} with $\beta =2$. Indeed, note that if $\E X_1=0$ and $\E X_1^2 < \infty$, then $\varphi (x)=1-\frac{1}{2} \E X_1^2 \, x^2 (1+o(1))$ as $x \to 0$, and hence $|1-\varphi (x)| \le cx^2$ with some constant $c>0$.
Finally, we claim that under the conditions of Proposition \ref{mainprop2} we have $|1-\varphi (x)| \le c|x|^{\beta}$ with some $c>0$. The lower bound in Proposition \ref{mainprop2} (ii) will thus follow from Lemma \ref{proplower3}. To see this, consider
\begin{equation}\label{finaleq}
|1-\varphi (x)| = \sqrt{\left( \E (1-\cos (xX_1)) \right)^2 + \left( \E \sin (xX_1) \right)^2} .
\end{equation}
To estimate the first term in \eqref{finaleq}, we will use $1-\cos (xX_1) \le (xX_1)^2/2$ if $|xX_1|<1$, and $1-\cos (xX_1) \le 2$ otherwise. Hence
\[ \begin{split} 1-\cos (xX_1) &\le \frac{x^2}{2} X_1^2 I_{\{ |X_1|<1/|x| \}} + 2 I_{\{ |X_1| \ge 1/|x| \}}, \\ \E (1-\cos (xX_1)) &\le \frac{x^2}{2} \E \left( X_1^2 I_{\{ |X_1|<1/|x| \}} \right) +2\P (|X_1| \ge 1/|x|) \\ &\le x^2 \int_0^{1/|x|} t \P (|X_1| \ge t) \, \mathrm{d}t +2\P (|X_1| \ge 1/|x|) . \end{split} \]
The assumption $\P (|X_1| \ge x) \le c_2 x^{-\beta}$ thus shows that $\E (1-\cos (xX_1)) \ll |x|^{\beta}$. To estimate the second term in \eqref{finaleq}, we will use $\sin (xX_1) = xX_1 +O(|xX_1|^3)$ if $|xX_1| < 1$, and $|\sin (xX_1)| \le 1$ otherwise. We thus obtain
\[ \begin{split} \left| \E \sin (xX_1)\right| &\ll |x| \left| \E \left( X_1 I_{\{ |X_1| < 1/|x| \}} \right) \right| + |x|^3 \E \left( |X_1|^3 I_{\{ |X_1|<1/|x| \}} \right) + \P \left( |X_1| \ge 1/|x| \right) \\ &\le |x| \left| \E \left( X_1 I_{\{ |X_1| < 1/|x| \}} \right) \right| + |x|^3 \int_0^{1/|x|} 3t^2 \P (|X_1| \ge t) \, \mathrm{d}t + \P \left( |X_1| \ge 1/|x| \right) . \end{split} \]
By the assumption $\P (|X_1| \ge x) \le c_2 x^{-\beta}$ the last two terms are indeed $\ll |x|^{\beta}$. Considering the cases $0<\beta<1$, $\beta =1$ and $1<\beta<2$ separately, it is not difficult to see that the first term is also $\ll |x|^{\beta}$. Hence $|1-\varphi (x)| \ll |x|^{\beta}$, as claimed.
| {
"timestamp": "2019-10-28T01:16:22",
"yymm": "1910",
"arxiv_id": "1910.11766",
"language": "en",
"url": "https://arxiv.org/abs/1910.11766",
"abstract": "For irrational $\\alpha$, $\\{n\\alpha\\}$ is uniformly distributed mod 1 in the Weyl sense, and the asymptotic behavior of its discrepancy is completely known. In contrast, very few precise results exist for the discrepancy of subsequences $\\{n_k \\alpha\\}$, with the exception of metric results for exponentially growing $(n_k)$. It is therefore natural to consider random $(n_k)$, and in this paper we give nearly optimal bounds for the discrepancy of $\\{n_k \\alpha\\}$ in the case when the gaps $n_{k+1}-n_k$ are independent, identically distributed, integer-valued random variables. As we will see, the discrepancy behavior is determined by a delicate interplay between the distribution of the gaps $n_{k+1}-n_k$ and the rational approximation properties of $\\alpha$. We also point out an interesting critical phenomenon, a sudden change of the order of magnitude of the discrepancy of $\\{n_k \\alpha\\}$ as the Diophantine type of $\\alpha$ passes through a certain critical value.",
"subjects": "Number Theory (math.NT); Probability (math.PR)",
"title": "On the discrepancy of random subsequences of $\\{nα\\}$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587272306606,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7097978885536274
} |
https://arxiv.org/abs/1703.05378 | Non-crossing Monotone Paths and Binary Trees in Edge-ordered Complete Geometric Graphs | An edge-ordered graph is a graph with a total ordering of its edges. A path $P=v_1v_2\ldots v_k$ in an edge-ordered graph is called increasing if $(v_iv_{i+1}) > (v_{i+1}v_{i+2})$ for all $i = 1,\ldots,k-2$; it is called decreasing if $(v_iv_{i+1}) < (v_{i+1}v_{i+2})$ for all $i = 1,\ldots,k-2$. We say that $P$ is monotone if it is increasing or decreasing. A rooted tree $T$ in an edge-ordered graph is called monotone if either every path from the root to a leaf is increasing or every path from the root to a leaf is decreasing. Let $G$ be a graph. In a straight-line drawing $D$ of $G$, its vertices are drawn as different points in the plane and its edges are straight line segments. Let $\overline{\alpha}(G)$ be the maximum integer such that every edge-ordered straight-line drawing of $G$ contains a monotone non-crossing path of length $\overline{\alpha}(G)$. Let $\overline{\tau}(G)$ be the maximum integer such that every edge-ordered straight-line drawing of $G$ contains a monotone non-crossing complete binary tree of size $\overline{\tau}(G)$. In this paper we show that $\overline \alpha(K_n) = \Omega(\log\log n)$, $\overline \alpha(K_n) = O(\log n)$, $\overline \tau(K_n) = \Omega(\log\log \log n)$ and $\overline \tau(K_n) = O(\sqrt{n \log n})$. | \section{Introduction}
An \emph{edge-ordering} of a graph is a total order of its edges; an \emph{edge-ordered} graph is a graph with an edge-ordering. A path $P=v_1v_2\ldots v_k$ in an edge-ordered graph is called increasing if $(v_iv_{i+1}) > (v_{i+1}v_{i+2})$ for all $i = 1,\ldots,k-2$; it is called decreasing if $(v_iv_{i+1}) < (v_{i+1}v_{i+2})$ for all $i = 1,\ldots,k-2$. We say that $P$ is \emph{monotone} if it is increasing or decreasing.
Let $G$ be a graph. Let $\alpha(G)$ be the maximum integer such that $G$ has a monotone path of length $\alpha(G)$ under any edge-ordering. The parameter $\alpha(G)$ is often called the \emph{altitude} of $G$.
In 1972 Chv\'atal and Koml\'os \cite{ChvatalKomlos} posed the problem of estimating the altitude of $K_n$. Graham and Kleitman \cite{IncreasingPaths} showed in 1973 that
\[ \sqrt{n-\frac{3}{4}}-\frac{1}{2} \le \alpha(K_n) \]
and Calderbank and Chung \cite{BlockSums} showed in 1984 that
\[ \alpha(K_n) \le \left(\frac{1}{2}+o(1)\right)n. \]
They also conjectured that this is the right order of magnitude of $\alpha(K_n)$. The lower bound by Graham and Kleitman remained the best known for over 40 years, until Milans \cite{newbound} showed in 2015 that
\[ \left(\frac{1}{20}-o(1) \right) \left(\frac{n}{\lg n} \right) \le \alpha(K_n). \]
In the meantime, several variations and specific cases for the altitude of a graph have been studied.
In 1987 Bialostocki and Roditty \cite{AMonotone} showed that a graph $G$ has altitude at least three if and only if $G$ contains as a subgraph an odd cycle of length at least five or one of six fixed graphs.
In 2001 Yuster \cite{BoundedDegree} studied the parameter $\alpha_{\Delta}$, defined as the maximum of $\alpha(G)$ over all graphs with maximum degree at most $\Delta$. He proved that $\Delta(1-o(1)) \le \alpha_\Delta \le \Delta +1$.
That same year Roditty, Shoham, and Yuster \cite{SparseGraphs} gave bounds on the altitude of several families of sparse graphs. In particular, if $G$ is a planar graph then $\alpha(G) \le 9$ and there exist planar graphs with $6 \le \alpha(G)$. They also proved that if $G$ is a bipartite planar graph then $\alpha(G) \le 6$ and there exist bipartite planar graphs with $4 \le \alpha(G)$.
In 2005
Mynhardt, Burger, Clark, Falvai and Henderson \cite{girth} characterized cubic graphs with girth at least five and altitude three. They also showed that if $G$ is an $r$-regular graph ($r \ge 4$) and has girth at least five, then $\alpha(G) \ge 4$.
In 2015 De Silva, Molla, Pfender, Retter and Tait \cite{hypercube} showed that $d/\log d \le \alpha(Q_d)$, where $Q_d$ is the $d$-dimensional hypercube.
In 2016 Lavrov and Loh \cite{RandHam} studied the length of a longest monotone path in $K_n$ under a random edge-ordering. They showed that, with probability at least $1/e-o(1)$, $K_n$ contains a monotone Hamiltonian path. They also conjectured that, given a random ordering, $K_n$ contains a monotone Hamiltonian path with probability tending to 1. Shortly after, Martinsson \cite{randomconjecture} proved this conjecture.
In this paper we study $\alpha(G)$ and other parameters in the context of geometric graphs.
A \emph{geometric graph} $\widetilde G$ is a graph whose vertices are points in the plane in general position and whose edges are straight line segments joining these points. We say that $\widetilde G$ is a \emph{straight-line drawing} of $G$ if $G$ and $\widetilde G$ are isomorphic. Let $\overline \alpha(\widetilde G)$ be the minimum over all edge-orderings of $\widetilde G$ of the maximum length of a non-crossing monotone path in $\widetilde G$. We denote by $\overline \alpha(G)$ the minimum of $\overline \alpha(\widetilde G)$ over all straight-line drawings $\widetilde G$ of $G$.
Now we define parameters similar to $\overline \alpha$ related to binary trees instead of paths. From now on, all binary trees are complete and rooted.
A rooted tree in an edge-ordered graph $G$ is increasing (decreasing) if every path from the root to a leaf is increasing (decreasing). It is called monotone if it is increasing or decreasing. Let $\overline \tau_+(\widetilde G)$ ($\overline \tau_-(\widetilde G)$) be the minimum over all edge-orderings of the maximum size of a non-crossing increasing (decreasing) binary tree in $\widetilde G$. Let $\overline\tau_+(G)$ ($\overline\tau_-(G)$) be the minimum of $\overline \tau_+(\widetilde G)$ ($\overline \tau_-(\widetilde G)$) over all straight-line drawings $\widetilde G$ of $G$.
Similarly, let $\overline \tau(\widetilde G)$ be the minimum over all edge-orderings of the maximum size of a non-crossing monotone binary tree in $\widetilde G$ and $\overline\tau(G)$ the minimum of $\overline \tau(\widetilde G)$ over all straight-line drawings $\widetilde G$ of $G$.
In this paper we prove that $\overline{\alpha}(K_n)=\Omega(\log\log n)$ and $\overline{\alpha}(K_n)=O(\log n)$. For the parameter $\overline \tau$ we prove that $\overline \tau(K_n) = \Omega(\log \log \log n)$ and $\overline \tau(K_n) = O(\sqrt{n \log n})$. As an intermediate result, if we are interested in bounding the size of increasing or decreasing binary trees but not both, we prove that $\overline \tau_+(K_n) = O(\log n)$ and $\overline \tau_-(K_n) = O(\log n)$.
\section{Monotone non-crossing paths} \label{convex}
In this section we give bounds for $\overline \alpha (K_n)$. First we introduce a couple of definitions, which we will use in the following theorems.
A \emph{convex geometric graph} is a geometric graph whose vertices are in convex position. A \emph{convex straight-line drawing} of
$G$ is a convex geometric graph $\widetilde G$ that is isomorphic to $G$.
\begin{lemma} \label{twoedges}
Let $S$ be a set of points in convex position and $\ell$ a straight line that partitions $S$ into two nonempty sets $U$ and $V$. The maximum length of a non-crossing polygonal chain whose vertices alternate between $U$ and $V$ and whose edges increase in slope is at most two.
\end{lemma}
\begin{proof}
Assume that a polygonal chain $P$ of length three with the conditions of the statement exists. Let $p, q, r, s$ denote the vertices of $P$ and assume without loss of generality that $q$ has smaller $x$ coordinate than $r$. The edges of $P$ have increasing slope, thus both $p$ and $s$ lie to the right of the directed line from $q$ to $r$. Since the points $p, q, r, s$ are in convex position and the points $q$ and $s$ lie in a different half-plane than the points $p$ and $r$ with respect to $\ell$, the edges $(pq)$ and $(rs)$ are the diagonals of a convex quadrilateral. These diagonals intersect, contradicting the non-crossing property of $P$.
\end{proof}
\begin{theorem} \label{thm:alphaLower}
$\overline \alpha(K_n) = \Omega(\log\log n)$.
\end{theorem}
\begin{proof}
Let $\widetilde K_n$ be an edge-ordered straight-line drawing of $K_n$ and assume without loss of generality that no two vertices of $\widetilde K_n$ have the same $x$-coordinate. Let $v_1, \ldots, v_n$ be the vertices of $\widetilde K_n$ ordered by increasing $x$-coordinate, and let $H$ be the complete 3-uniform hypergraph with the same vertex set as $\widetilde K_n$. For $1\le i<j<k\le n$, color the edge $(v_i,v_j,v_k)$ of $H$ blue if $(v_iv_j) < (v_jv_k)$, otherwise color it red. From Ramsey's Theorem, there exists a complete monochromatic sub-hypergraph $K$ of size $m=\Omega(\log\log n)$ in $H$ with vertices $v_{i_1},v_{i_2},\ldots, v_{i_m}$, where $i_1<i_2<\cdots<i_m$. Then $P=v_{i_1},v_{i_2},\ldots, v_{i_m}$ is a monotone path of length $\Omega(\log\log n)$ in $\widetilde K_n$. Note that $P$ is $x$-monotone, thus it has no crossings.
\end{proof}
\begin{theorem} \label{thm:alphaUpper}
$\overline \alpha(K_n) = O(\log n)$.
\end{theorem}
\begin{proof}
Let $\widetilde K_n$ be a convex straight-line drawing of $K_n$. Let $\ell$ be a vertical line that partitions the vertices of $\widetilde K_n$ into two nonempty sets $U$ and $W$, each of size at most $\lceil n/2 \rceil$. Order the edges between $U$ and $W$ so that $e<e'$ if and only if the slope of $e$ is less than the slope of $e'$. Recursively order the edges of $\widetilde K_n[U]$ and $\widetilde K_n[W]$ as before. Extend these orders to $\widetilde K_n$ by declaring the edges in $\widetilde K_n[U] \cup \widetilde K_n[W]$ to be less than those between $U$ and $W$. Let $P$ be a monotone path of maximum length in $\widetilde K_n$. By Lemma~\ref{twoedges}, there are at most two edges of $P$ between sets in the same level of recursion. Moreover, $P$ cannot have edges both in $\widetilde K_n[U]$ and in $\widetilde K_n[W]$, since there would be a subpath with an edge in $\widetilde K_n[U]$, an edge between $U$ and $W$ and an edge in $\widetilde K_n[W]$; this path cannot be monotone. Thus, the length $T(n)$ of $P$ satisfies the recursion $T(n) \le 2+T(n/2)$, which implies that $T(n) = O(\log n)$.
\end{proof}
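To make the recursive construction in the proof concrete, the following Python sketch (illustrative only, and not taken from the paper) produces such an edge-ordering for points in convex position with pairwise distinct $x$-coordinates; the sample point set, the tie-breaking between edges of equal slope, and the relative order of the edges inside $U$ and inside $W$, which the proof leaves unspecified, are ad hoc choices.
\begin{verbatim}
# Minimal sketch (illustrative only) of the recursive edge-ordering from the
# proof of the O(log n) upper bound: edges inside each half get smaller labels
# than the edges crossing the vertical splitting line, and the crossing edges
# are ordered by slope.  Points: convex position, distinct x-coordinates.
def slope(p, q):
    return (q[1] - p[1]) / (q[0] - p[0])

def adversarial_order(points):
    pts = sorted(points)                      # sort by x-coordinate
    def rec(block):
        if len(block) < 2:
            return []
        mid = len(block) // 2
        U, W = block[:mid], block[mid:]       # split by a vertical line
        inner = rec(U) + rec(W)               # smaller labels: edges inside U and W
        cross = sorted(((u, w) for u in U for w in W),
                       key=lambda e: slope(e[0], e[1]))
        return inner + cross                  # larger labels: U-W edges, by slope
    return rec(pts)

# Example: 8 points on the parabola y = x^2 (convex position, distinct x).
pts = [(x, x * x) for x in range(8)]
order = adversarial_order(pts)                # edges listed smallest label first
print(len(order), "edges; smallest:", order[0], " largest:", order[-1])
\end{verbatim}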
\section{Monotone non-crossing binary trees}
We begin this section with a lower bound for $\overline \tau( \widetilde K_n)$, where $\widetilde K_n$ is a convex straight-line drawing of $K_n$. We use the same argument used to bound $\overline \alpha(K_n)$.
\begin{theorem} \label{treesconvexlow}
Let $\widetilde K_n$ be an edge-ordered convex straight-line drawing of $K_n$. Then $\overline \tau(\widetilde K_n) = \Omega(\log\log n)$.
\end{theorem}
\begin{proof}
Let $v_1, v_2, \ldots, v_n$ be the vertices of $\widetilde K_n$ ordered by increasing $x$-coordinate and let $H$ be the complete 3-uniform hypergraph with the same vertex set as $\widetilde K_n$. For $1\le i<j<k\le n$, color the edge $(v_i,v_j,v_k)$ of $H$ blue if $(v_iv_j) < (v_jv_k)$, otherwise color it red. From Ramsey's theorem, there exists a complete monochromatic sub-hypergraph $K$ of size $m=\Omega(\log\log n)$ in $H$. Assume without loss of generality that at least half of the vertices of $K$ belong to the lower convex hull of the vertices of $\widetilde K_n$. Let $v_{i_1},\ldots,v_{i_m}$ denote those vertices ordered by increasing $x$-coordinate. Let $m'$ be the largest integer of the form $2^{h}-1$, for some integer $h$, such that $m'\le m$. We embed a binary tree $T$ with vertices $v_{i_1},\ldots,v_{i_{m'}}$ as follows. Place the root of $T$ at $v_{i_1}$, and inductively place its left and right subtrees at the vertices $v_{i_2},\ldots,v_{i_{(m'+1)/2}}$ and $v_{i_{(m'+1)/2+1}},\ldots,v_{i_{m'}}$ with roots at $v_{i_2}$ and $v_{i_{(m'+1)/2+1}}$, respectively (see Figure \ref{fig:convexTree}). Note that $T$ is monotone and has no crossings by construction.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.8]{convexTree}
\caption{A non-crossing binary tree in $H$.}
\label{fig:convexTree}
\end{figure}
\begin{theorem} \label{theo:lowertau}
$\overline \tau(K_n) = \Omega(\log \log \log n)$.
\end{theorem}
\begin{proof}
Let $\widetilde K_n$ be a straight-line drawing of $K_n$. By the Erd\H{o}s-Szekeres Theorem, there exists a set of $\Omega(\log n)$ vertices of $\widetilde K_n$ in convex position; the bound follows from Theorem \ref{treesconvexlow}.
\end{proof}
When we search for monotone paths, there is no need to distinguish between increasing and decreasing paths---traversing an increasing path in opposite direction gives us a decreasing path of the same length and vice versa. This is not the case with binary trees. Theorem~\ref{theo:lowertau} guarantees that we can find a monotone binary tree of size $\Omega(\log\log\log n)$ in any edge-ordered straight-line drawing of $K_n$, which can be increasing or decreasing.
An upper bound on $\overline \tau(K_n)$ must take into account both increasing and decreasing binary trees. We start by bounding $\overline \tau_+(K_n)$ and $\overline \tau_-(K_n)$.
\begin{theorem} \label{treesconvexup1}
$\overline \tau_+(K_n) = O(\log n)$ and $\overline \tau_-(K_n) = O(\log n)$.
\end{theorem}
\begin{proof}
Let $\widetilde K_n$ be a convex straight-line drawing of $K_n$. We give an edge-ordering of $\widetilde K_n$ such that the largest non-crossing increasing binary tree has size $O(\log n)$. The proof that $\tau_-(\widetilde K_n) = O(\log n)$ is analogous.
The supporting line of every edge $e=(uv)$ of $\widetilde K_n$ partitions the vertices of $\widetilde K_n \setminus \{u,v\}$ into two sets; let $S_e$ be the smaller one. Construct an edge-ordering of $\widetilde K_n$ such that $e < e'$ if $|S_e| < |S_{e'}|$. Let $T$ be a largest non-crossing increasing binary tree in $\widetilde K_n$ and denote its root by $r$.
Let $L$ and $R$ be the convex hulls of the left and right subtrees of $T$, respectively. We claim that $L$ and $R$ are disjoint. Suppose this is not the case. Order the vertices of $K_n$ counterclockwise starting with $r$ and let $l', v_1, \ldots, v_s, r'$ be the vertices of $L \cup R$ as they appear in this order. Let $e$ be the left edge of $r$. No edge of $T$ can have an endpoint in $S_e$: any such edge $f$ must intersect $e$ or have both endpoints in $S_e$, in which case $f$ belongs to the left subtree of $T$; this is impossible since $|S_f|<|S_e|$. Thus, $l'$ is the root of the left subtree of $T$ and by an analogous argument $r'$ is the root of the right subtree of $T$.
There must be a vertex of $R$ followed by a vertex of $L$ in this order, as otherwise $L$ and $R$ would be disjoint; let $v_i$ be the first such vertex of $R$ in this order. In the path that joins $v_i$ with $r'$ in $T$, there exists at least one edge $v_jv_k$ with $j \le i < k$. The vertices $l'$ and $v_{i+1}$ lie on different sides of the supporting line of $v_jv_k$, so the path that joins them in $T$ must intersect $v_jv_k$. This is a contradiction, since $T$ is non-crossing.
Let $\ell$ be a straight line through $r$ that separates $L$ and $R$. The line $\ell$ partitions the vertices of $\widetilde K_n \setminus \{r\}$ into two parts, one with fewer than $n/2$ vertices. Let $S$ be this part and let $T'$ be the subtree of $T$ contained in $S$.
Let $h'$ be the height of $T'$. Note that, for every vertex $v$ of $T'$ with children $u$ and $w$, either the subtree $T_u$ rooted at $u$ is contained in $S_{(vw)}$ or the subtree $T_w$ rooted at $w$ is contained in $S_{(vu)}$. Assume without loss of generality that the first case occurs.
\begin{figure}[h]
\centering
\includegraphics[scale=1.5]{trees}
\caption{$T_{s, 1}$ and $T_{s, h}$.}
\label{fig:trees}
\end{figure}
Let $T_{s,h}$ denote an increasing tree with root $u$ such that:
\begin{itemize}
\item The vertex $u$ has only one child $v$, and $|S_{(uv)}| = s$;
\item The vertex $v$ is the root of a complete binary tree of height $h$.
\end{itemize}
Let $V(s, h)$ denote the minimum number of vertices of $S$ needed to embed $T_{s,h}$. If $h=1$, the complete binary tree rooted at $v$ consists of two edges, $e_1, e_2$; furthermore, $|S_{(uv)}| = s$ implies that $|S_{e_1}|\ge s$ and $|S_{e_2}|\ge s+1$. Thus, $V(s, 1) = 2s+4$ (see Figure \ref{fig:trees}). Note that if $h > 1$, we need at least $V(s, h-1)$ vertices to embed the left subtree of $v$, which implies that we need at least $V(V(s, h-1)-1, h-1)$ vertices to embed the right subtree of $v$. Thus we obtain the following recurrence:
\[V(s, h) \ge V(V(s, h-1)-1, h-1)+s+1.\]
We show by induction on $h$ that $V(s, h) \ge (s+1)2^{2^{h-1}}$. This certainly holds for the base case $h=1$; assume it also holds for $h-1$. We have that:
\begin{eqnarray*}
V(s, h) &\ge& V(V(s, h-1)-1, h-1) \\
&\ge& V(s, h-1)\cdot 2^{2^{h-2}} \\
&\ge& (s+1)\cdot 2^{2^{h-2}} \cdot 2^{2^{h-2}} \\
&\ge& (s+1)2^{2^{h-1}}
\end{eqnarray*}
Note that $n/2 > V(0, h') = \Omega(2^{2^{h'-1}})$, which implies that $h' = O(\log \log n)$, and the theorem follows.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.5]{badtree}
\caption{A decreasing binary tree of linear size under the edge-ordering used in Theorem \ref{treesconvexup1}.}
\label{fig:badtree}
\end{figure}
The edge-ordering used in the proof of Theorem~\ref{treesconvexup1} forbids large increasing binary trees, but it is possible to find a decreasing binary tree of linear size (see Figure \ref{fig:badtree}). The proof of Theorem~\ref{treesconvexup2} gives an edge-ordering that forbids both increasing and decreasing large binary trees.
\begin{theorem} \label{treesconvexup2}
$\overline \tau(K_n) = O(\sqrt{n \log n})$.
\end{theorem}
\begin{proof}
Let $\widetilde K_n$ be a convex straight-line drawing of $K_n$. Let $v_1, \ldots, v_n$ denote the vertices of $\widetilde K_n$ in counterclockwise order. Let $m=\left \lceil \sqrt{n/\log n} \right \rceil$ and partition the vertices into groups $S_1, S_2, \ldots, S_m$ of consecutive vertices such that each one has size at most $\left \lceil \sqrt{n \log n} \right \rceil$. Order the edges that have endpoints within $S_i$ using the same edge-ordering as in Theorem \ref{treesconvexup1} so that the largest non-crossing decreasing binary tree contained in $S_i$ has size $O(\log n)$. We refer to those edges as red edges and to the edges that have endpoints in different groups as blue edges. Order the blue edges by increasing slope. Furthermore, order the edges in such a way that every blue edge is greater than every red edge.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{T6_Decreasing}
\caption{A decreasing binary tree with respect to the edge-ordering of Theorem~\ref{treesconvexup2}.}
\label{fig:T6_dec}
\end{figure}
Let $T$ be a decreasing binary tree with respect to this edge-ordering and let $r$ be its root. Note that $T$ consists of a possibly empty blue binary tree $T_b$ and a forest of red complete binary trees such that the roots of the red trees are leaves of $T_b$ (Figure \ref{fig:T6_dec}). We claim that the subgraph $T_{i,j}$ of $T_b$ induced by the blue edges between two different groups $S_i$ and $S_j$ has at most one connected component. Suppose for the sake of contradiction that this is not the case. Choose two connected components such that $r$ is in at most one of them; let $e$ be an edge from the first component and $f$ an edge from the second component. Suppose that one of the supporting lines (say, the supporting line of $e$) leaves $f$ and $r$ in different half-planes. Then, since the vertices of $T_b$ are in convex position, the path from any endpoint of $f$ to $r$ must intersect $e$, which is impossible since $T$ is non-crossing. Therefore, $r$ lies between the supporting lines of $e$ and $f$. Without loss of generality, $r$ is not in the connected component to which $e$ belongs. Any path from $r$ to an endpoint of $e$ must first go through a vertex in some $S_k$, where $k \ne i, j$. This path intersects $e$ or $f$, a contradiction.
Since $T_{i,j}$ has at most one connected component (which by Lemma~\ref{twoedges} is a tree of height at most two), it has at most a constant number of edges. Let $u_1, \ldots, u_m$ be vertices on a circle ordered counterclockwise. Add an edge between $u_i$ and $u_j$ if and only if there is an edge in $T_b$ between $S_i$ and $S_j$. Since $T_b$ is non-crossing, the resulting graph is also non-crossing and has $O(\sqrt{n/\log n})$ edges, which implies that $T_b$ has $O(\sqrt{n/\log n})$ edges. There are $O(\sqrt{n/\log n})$ leaves in $T_b$; thus there are $O(\sqrt{n/\log n})$ red trees, and by Theorem~\ref{treesconvexup1} each has size $O(\log n)$. Thus, $T$ has size $O(\sqrt{n \log n})$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{T6_Increasing}
\caption{An increasing complete binary tree with respect to the edge-ordering of Theorem~\ref{treesconvexup2}.}
\label{fig:T6_inc}
\end{figure}
Now consider an increasing complete binary tree $T$. Note that $T$ consists of a possibly empty red binary tree $T_r$ and a forest of blue binary trees such that the roots of the blue trees are leaves of $T_r$ (Figure \ref{fig:T6_inc}). The tree $T_r$ is contained in some $S_i$; thus, it has size $O(\sqrt{n \log n})$. Identify all the roots of the blue trees with a vertex of $S_i$; this produces a non-crossing blue tree. There are $O(\sqrt{n \log n})$ blue edges with an endpoint in $S_i$. We can bound the number of remaining blue edges as before by $O(\sqrt{n / \log n})$. Therefore, $T$ has size $O(\sqrt{n \log n})$.
\end{proof}
\small
\bibliographystyle{habbrv} | {
"timestamp": "2017-05-04T02:02:18",
"yymm": "1703",
"arxiv_id": "1703.05378",
"language": "en",
"url": "https://arxiv.org/abs/1703.05378",
"abstract": "An edge-ordered graph is a graph with a total ordering of its edges. A path $P=v_1v_2\\ldots v_k$ in an edge-ordered graph is called increasing if $(v_iv_{i+1}) > (v_{i+1}v_{i+2})$ for all $i = 1,\\ldots,k-2$; it is called decreasing if $(v_iv_{i+1}) < (v_{i+1}v_{i+2})$ for all $i = 1,\\ldots,k-2$. We say that $P$ is monotone if it is increasing or decreasing. A rooted tree $T$ in an edge-ordered graph is called monotone if either every path from the root of to a leaf is increasing or every path from the root to a leaf is decreasing.Let $G$ be a graph. In a straight-line drawing $D$ of $G$, its vertices are drawn as different points in the plane and its edges are straight line segments. Let $\\overline{\\alpha}(G)$ be the maximum integer such that every edge-ordered straight-line drawing of $G$ %under any edge labeling contains a monotone non-crossing path of length $\\overline{\\alpha}(G)$. Let $\\overline{\\tau}(G)$ be the maximum integer such that every edge-ordered straight-line drawing of $G$ %under any edge labeling contains a monotone non-crossing complete binary tree of size $\\overline{\\tau}(G)$. In this paper we show that $\\overline \\alpha(K_n) = \\Omega(\\log\\log n)$, $\\overline \\alpha(K_n) = O(\\log n)$, $\\overline \\tau(K_n) = \\Omega(\\log\\log \\log n)$ and $\\overline \\tau(K_n) = O(\\sqrt{n \\log n})$.",
"subjects": "Combinatorics (math.CO)",
"title": "Non-crossing Monotone Paths and Binary Trees in Edge-ordered Complete Geometric Graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587265099557,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7097978880357327
} |
https://arxiv.org/abs/1508.03516 | An Adaptive Variable Order Quadrature Strategy | In this article we propose a new adaptive numerical quadrature procedure which includes both local subdivision of the integration domain, as well as local variation of the number of quadrature points employed on each subinterval. In this way we aim to account for local smoothness properties of the function to be integrated as effectively as possible, and thereby achieve highly accurate results in a very efficient manner. Indeed, this idea originates from so-called hp-version finite element methods which are known to deliver high-order convergence rates, even for nonsmooth functions. | \section{Introduction}
Numerical integration methods have witnessed a tremendous development over the last few decades; see, e.g., \cite{Press:07,DahlquistBjorck:08,DavisRabinowitz:07}. In particular, adaptive quadrature rules have nowadays become an integral part of many scientific computing codes. Here, one of the first yet very successful approaches is the application of adaptive Simpson integration or the more accurate Gauss-Kronrod procedures (see, e.g., \cite{GanderGautschi:00}). The key points in the design of these methods are, first of all, to keep the number of function evaluations low, and, secondly, to divide the domain of integration in such a way that the features of the integrand function are appropriately and effectively accounted for.
The aim of the current article is to propose a complementary adaptive quadrature approach that is quite different from previous numerical integration schemes. In fact, our work is based on exploiting ideas from $hp$-type adaptive finite element methods (FEM); cf.~\cite{HoustonSuliHPADAPT,De07,FankhauserWihlerWirz:14,MelenkAPOST01,opac-b1101124}. These schemes accommodate and combine both traditional low-order adaptive FEM and high-order (so-called spectral) methods within a single unified framework. Specifically, their goal is to generate discrete approximation spaces which allow for both adaptively refined subdomains, as well as locally varying approximation orders. In this way, the $hp$-FEM methodology is able to resolve features of an underlying unknown analytical solution in a highly efficient manner. In fact, this approach has proved to be enormously successful in the context of numerically approximating solutions of differential equations, and has been shown to exhibit high-order algebraic or exponential convergence rates even in the presence of local singularities; cf.~\cite{Schwab98,GuiBabuska86,SchotzauSchwabWihler:15}.
With this in mind, we adopt the $hp$-adaptive finite element strategy for the purpose of introducing a variable order adaptive quadrature framework. More precisely, we propose a procedure whereby the integration domain will be subdivided adaptively in combination with a local tuning of the number of quadrature points employed on each subinterval. To drive this refinement process, we employ a smoothness estimation technique from~\cite{FankhauserWihlerWirz:14,W11} (see also~\cite{HoustonSuliHPADAPT} for a related strategy), which was originally introduced in the context of $hp$-adaptive FEM. Specifically, the smoothness test makes it possible to gain local information concerning the regularity of the integrand function, and thereby, to suitably subdivide the integration domain and select an appropriate number of quadrature points for each subinterval. By means of a series of numerical experiments we demonstrate that the proposed adaptive quadrature strategy is capable of generating highly accurate approximations at a very low computational cost. The main ideas on this new approach together with a view on practical aspects will be discussed in the subsequent section.
\section{An $hp$-Type Quadrature Approach}
\subsection{General Quadrature Rules}
Typical quadrature rules for the approximation of an integral
\begin{equation}\label{eq:I}
I:=\int_{-1}^1 f(x)\, \dd x
\end{equation}
of a continuous function~$f:\,[-1,1]\to\mathbb{R}$, take the form
\begin{equation}\label{eq:quad}
I\approx \widehat Q_p(f):=\sum_{k=1}^p \w{p}{k}f(\x{p}{k}),
\end{equation}
where~$p\ge 1$ is a (typically prescribed) integer number, and~$\{\x{p}{k}\}_{k=1}^p\subset[-1,1]$ and~$\{\w{p}{k}\}_{k=1}^p\subset(0,2]$ are appropriate quadrature points and weights, respectively. When dealing with a variable number~$p$ of quadrature points and weights, we can consider one-parameter families of quadrature rules (such as, for example, Gauss-type quadrature methods); here, for each~$p\in\mathbb{N}$, with~$p\ge p_{\min}$, where~$p_{\min}$ is a minimal number of points, there are (possibly non-hierarchical) families of quadrature points~$\xx{p}=\{\x{p}{k}\}_{k=1}^p$, and weights~$\ww{p}=\{\w{p}{k}\}_{k=1}^p$.
On an arbitrary bounded interval~$[a,b]$, $a<b$, a corresponding integration formula can be obtained, for instance, by means of a simple affine scaling
\begin{equation}\label{eq:phi}
\phi_{[a,b]}:\,[-1,1]\to[a,b],\qquad
\widehat x\mapsto x=\phi_{[a,b]}(\widehat x)=\frac12h\widehat x+\frac12(a+b),
\end{equation}
with~$h=b-a>0$. Indeed, in this case
\[
\int_a^bf(x)\, \dd x \approx Q_{[a,b],p}(f):=\frac{h}{2}\sum_{k=1}^p \w{p}{k}(f\circ\phi_{[a,b]})(\x{p}{k}),
\]
where~$f:\,[a,b]\to\mathbb{R}$ is again continuous. As before, for any specific family of quadrature rules, the corresponding quadrature point families~$\bm x_p$ are obtained in a straightforward way by letting $\bm x_p=\phi_{[a,b]}(\xx{p})$ (with the understanding that~$\phi_{[a,b]}$ is extended componentwise to vectors).
Furthermore, the above construction allows us to define composite quadrature rules, whereby the integral of $f$ is approximated on a collection of~$n\ge1$ disjoint (open) subintervals~$\{K_i\}_{i=1}^n$ of~$[a,b]$ with~$[a,b]=\bigcup_{i=1}^n \overline{K}_i$, i.e.,
\[
I\approx \sum_{i=1}^n Q_{K_i,p}(f|_{K_i}).
\]
In practical applications the subintervals are usually either of uniform size~$\nicefrac{(b-a)}{n}$, for sufficiently large~$n$, or alternatively, they are selected adaptively with the aim of resolving the relevant features of the given function~$f$.
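As a simple illustration of the composite rule above, the following Python sketch evaluates $\sum_{i=1}^n Q_{K_i,p_i}(f|_{K_i})$ for a prescribed partition and a per-interval number of quadrature points; it uses NumPy's Gauss--Legendre routine as the underlying one-parameter family of rules (any other family could be substituted).
\begin{verbatim}
import numpy as np

def Q_interval(f, a, b, p):
    """p-point Gauss-Legendre approximation of the integral of f over [a, b]."""
    xhat, what = np.polynomial.legendre.leggauss(p)   # reference nodes/weights on [-1, 1]
    h = b - a
    x = 0.5 * h * xhat + 0.5 * (a + b)                # affine map phi_[a,b]
    return 0.5 * h * np.dot(what, f(x))

def composite_Q(f, breakpoints, orders):
    """Composite rule: sum of Q_{K_i, p_i} over the subintervals K_i = [t_i, t_{i+1}]."""
    return sum(Q_interval(f, breakpoints[i], breakpoints[i + 1], p)
               for i, p in enumerate(orders))

# Example: integrate exp over [0, 1] on two subintervals with 3 and 5 points.
approx = composite_Q(np.exp, [0.0, 0.5, 1.0], [3, 5])
print(approx, np.exp(1.0) - 1.0)
\end{verbatim}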
\subsection{The Basic Idea: $hp$-Adaptivity}\label{sc:basic}
Adaptive quadrature rules usually generate a sequence of repeatedly bisected and possibly non-uniform subintervals~$\{K_i\}_{i=1}^n$, $n \geq 1$, of the integration domain~$[a,b]$ (i.e., each subinterval~$K_i$ may have a different length~$h_i$), with a prescribed and uniform number~$p$ of quadrature points on each subinterval. With the aim of providing highly accurate approximations with as little computational effort as possible, the novelty of the approach presented in this article is to design an adaptive quadrature procedure, which, in addition to subdividing the original interval~$[a,b]$ into appropriate subintervals, is able to adjust the number of quadrature points~$p_i$ \emph{individually} within each subinterval~$K_i$ in an effective way. We note that this idea originates from approximation theory~\cite{Scherer:80/81,DeVoreScherer:80} (see also~\cite{GuiBabuska86}), and has been applied with huge success in the context of finite element methods for the numerical approximation of differential equations. Indeed, under certain conditions, the judicious combination of subinterval refinements ($h$-refinement) and selection of local approximation orders ($p$-refinement), which results in the class of so-called $hp$-finite element methods, is able to achieve high-order algebraic or exponential rates of convergence, even for solutions with local singularities; see, e.g. \cite{Schwab98}. In an effort to automate the combined $h$- and $p$-refinement process, a number of $hp$-adaptive finite element approaches have been proposed in the literature; see, e.g, the survey article~\cite{mitchell_mcclain} and the references cited therein. In the current article, we pursue the smoothness estimation approach developed in~\cite{FankhauserWihlerWirz:14,W11} (cf.~also~\cite{HoustonSuliHPADAPT} for a related methodology), and translate the idea into the context of adaptive variable order numerical quadrature.
Starting from a subinterval~$K_i$ with~$p_i$ quadrature points, we are given a current approximation~$Q_{K_i,p_i}(f|_{K_i})$ of the subintegral
\begin{equation}\label{eq:intKi}
\int_{K_i}f(x)\, \dd x\approx Q_{K_i,p_i}(f|_{K_i}).
\end{equation}
Then, with the aim of improving the approximate value~$Q_{K_i,p_i}(f|_{K_i})$, in the sense of an $hp$-adaptive finite element methodology in one-dimension, we propose two possible refinements of~$K_i$:
\begin{enumerate}[(i)]
\item\label{item:i} $h$-refinement: The subinterval~$K_i$ of length~$h_i$ is bisected into two subintervals~$K_i^1$ and~$K_i^2$ of equal size~$\nicefrac{h_i}{2}$, and the number~$p_i$ of quadrature points is either inherited by both subintervals or, in order to allow for derefinement with respect to the number of local quadrature points, reduced to~$p_i-1$ points. In the latter case, we obtain a potentially improved approximation
\begin{equation}\label{eq:Qh}
Q_{K_i}^{\rm h}(f)=Q_{K_i^{1},\max(1,p_i-1)}(f)+Q_{K_i^{2},\max(1,p_i-1)}(f)
\end{equation}
of~\eqref{eq:intKi}.
\item $p$-refinement: The subinterval~$K_i$ is retained, and the number~$p_i$ of quadrature points is increased by~1, i.e., $p_i\gets p_i+1$. This yields an approximation
\begin{equation}\label{eq:Qp}
Q_{K_i}^{\rm p}(f)=Q_{K_i,p_i+1}(f).
\end{equation}
In the case that~$p_i=p_{\max}$, where~$p_{\max}$ is a prescribed maximal number of quadrature points on each subinterval, we define
\begin{equation}\label{eq:Qp'}
Q_{K_i}^{\rm p}(f)=Q_{K_i^1,p_i}(f)+Q_{K_i^2,p_i}(f),
\end{equation}
where~$K_i^1$ and~$K_i^2$ result from subdividing~$K_i$ as in~\eqref{item:i}.
\end{enumerate}
In order to determine which of the above refinements is more appropriate for a given subinterval~$K_i$, we apply a smoothness estimation idea as outlined in the subsequent section. Once a decision between $h$- and $p$-refinement for~$K_i$ has been made, the procedure is repeated iteratively for any subintervals~$K_i$ for which~$Q_{K_i,p_i}(f|_{K_i})$ and its refined value (resulting from the chosen refinement) differ by at least a prescribed tolerance~$\mathtt{tol}>0$.
\subsection{Smoothness Estimation}\label{sc:smoothness}
The basic idea presented in the articles~\cite{FankhauserWihlerWirz:14,W11,HoustonSuliHPADAPT} is to estimate the local regularity of the function to be approximated. Then, following along the lines of the $hp$-approximation approach, if the function is found to be smooth according to the underlying regularity estimation test, a $p$-refinement is performed; otherwise, an $h$-refinement is employed. In~\cite{FankhauserWihlerWirz:14}, the following smoothness indicator, for a (weakly) differentiable function~$f$ on an interval~$K_j$, has been introduced (cf.~\cite[Eq.~(3)]{FankhauserWihlerWirz:14}):
\begin{equation}\label{eq:F}\tag{F}
\mathcal{F}_{K_j}[f]:=\begin{cases}\displaystyle
\frac{\NN{f}_{L^\infty(K_j)}}{h_j^{-\nicefrac12}\NN{f}_{L^2(K_j)}+\frac{1}{\sqrt2}h_j^{\nicefrac12}\NN{f'}_{L^2(K_j)}} & \text{if }f|_{K_j}\not\equiv 0,\\[3ex]
1 & \text{if }f|_{K_j}\equiv 0.
\end{cases}
\end{equation}
The motivation behind this definition is the continuous Sobolev embedding~$W^{1,2}(K_j)\hookrightarrow L^\infty(K_j)$, which implies that
\[
\sup_{v\in H^1(K_j)}\frac{\NN{v}_{L^\infty(K_j)}}{h_j^{-\nicefrac12}\NN{v}_{L^2(K_j)}+\frac{1}{\sqrt2}h_j^{\nicefrac12}\NN{v'}_{L^2(K_j)}}\le 1;
\]
see~\cite[Proposition~1]{FankhauserWihlerWirz:14}. In particular, it follows that~$\mathcal{F}_{K_j}[f]\le 1$ in~\eqref{eq:F}; $f$ is classified as being smooth on~$K_j$ if~$\mathcal{F}_{K_j}[f]\ge\tau$, for a prescribed smoothness testing parameter~$0<\tau<1$, and nonsmooth otherwise.
To begin, we first consider the special case when~$f$ is a polynomial of degree~$p_j\ge 1$. Then, the derivative~$f^{(p_j-1)}$ of order~$p_j-1$ of $f$ is a linear polynomial, and the evaluation of the smoothness indicator~$\mathcal{F}_{K_j}\left[f^{(p_j-1)}\right]$ from~\eqref{eq:F} is simple to obtain. In fact, let us write~$f|_{K_j}$ in terms of a (finite) Legendre series, that is,
\begin{equation}\label{eq:fleg}
f|_{K_j}=\sum_{l=0}^{p_j}a_l(\widehat{L}_l\circ\phi^{-1}_{K_j}),
\end{equation}
for coefficients~$a_0,\ldots,a_{p_j}\in\mathbb{R}$. Here, $\widehat{L}_l$, $l\ge 0$, are the Legendre polynomials on~$[-1,1]$ (scaled such that~$\widehat L_l(1)=1$ for all~$l\ge 0$), and $\phi_{K_j}$ is the affine scaling of~$[-1,1]$ to~$K_j$; cf.~\eqref{eq:phi}. For~$f$ as in~\eqref{eq:fleg} it can be shown that
\begin{equation}\label{eq:Fp}
\mathcal{F}_{K_j}\left[f^{(p_j-1)}\right]=\frac{1+\xi_{p_j}}{\sqrt{1+\frac13\xi_{p_j}^2}+\sqrt2\xi_{p_j}},
\end{equation}
where~$\xi_{p_j}=(2p_j-1)\left|\nicefrac{a_{p_j}}{a_{p_j-1}}\right|$ (provided that~$a_{p_j-1}\neq 0$); see~\cite[Proposition~3]{FankhauserWihlerWirz:14}. In particular, this implies that
\begin{equation}\label{eq:range}
\frac12\approx\frac{\sqrt{3}}{\sqrt{6}+1}\le\mathcal{F}_{K_j}\left[f^{(p_j-1)}\right]\le 1;
\end{equation}
cf.~\cite[\S2.2]{FankhauserWihlerWirz:14}.
In the context of the numerical integration rule~\eqref{eq:quad}, the above methodology can be adopted as follows: suppose we are given~$p_j\ge 2$ quadrature points and weights, $\{\x{p_j}{k}\}_{k=1}^{p_j}$ and~$\{\w{p_j}{k}\}_{k=1}^{p_j}$, respectively. Then,
\begin{equation}\label{eq:QKj}
\int_{K_j}f(x)\, \dd x\approx Q_{K_j,p_j}(f|_{K_j})=\frac{h_j}{2}\sum_{k=1}^{p_j}\w{p_j}{k}(f\circ\phi_{K_j})(\x{p_j}{k}).
\end{equation}
We denote the uniquely defined interpolating polynomial of~$f$ of degree~$p_j-1$ at the given quadrature points by
\[
\Pi_{K_j,p_j-1}f=\sum_{l=0}^{p_j-1}b_l(\widehat{L}_l\circ\phi^{-1}_{K_j}).
\]
Due to orthogonality of the Legendre polynomials, we note that
\[
b_l=\frac{2l+1}{h_j}\int_{K_j}\Pi_{K_j,p_j-1}f(x)(\widehat{L_l}\circ\phi^{-1}_{K_j})(x)\, \dd x,\qquad l=0,\ldots,p_j-1.
\]
We further assume that the quadrature rule under consideration is exact for all polynomials of degree up to~$2p_j-2$. Thereby,
\begin{align*}
b_l&=\frac{2l+1}{2}\sum_{k=1}^{p_j}\w{p_j}{k}(\Pi_{K_j,p_j-1}f)\circ\phi_{K_j}(\x{p_j}{k})\widehat{L_l}(\x{p_j}{k})\\
&=\frac{2l+1}{2}\sum_{k=1}^{p_j}\w{p_j}{k}(f\circ\phi_{K_j})(\x{p_j}{k})\widehat{L_l}(\x{p_j}{k}).
\end{align*}
Consequently, we infer that
\begin{equation}\label{eq:xi}
\begin{split}
\xi_{K_j,p_j-1}:&=(2p_j-3)\left|\frac{b_{p_j-1}}{b_{p_j-2}}\right|\\
&=(2p_j-1)\left|\frac{\sum_{k=1}^{p_j}\w{p_j}{k}(f\circ\phi_{K_j})(\x{p_j}{k})\widehat{L}_{p_j-1}(\x{p_j}{k})}{\sum_{k=1}^{p_j}\w{p_j}{k}(f\circ\phi_{K_j})(\x{p_j}{k})\widehat{L}_{p_j-2}(\x{p_j}{k})}\right|,
\end{split}
\end{equation}
and thus, in view of~\eqref{eq:Fp}, we use the quantity
\begin{equation}\label{eq:sind}
\mathsf{F}_{K_j,p_j}(f):=\frac{1+\xi_{K_j,p_j-1}}{\sqrt{1+\frac13\xi_{K_j,p_j-1}^2}+\sqrt2\xi_{K_j,p_j-1}}\in\left(\frac{\sqrt{3}}{\sqrt6+1},1\right),
\end{equation}
cf.~\eqref{eq:range}, to estimate the smoothness of~$f|_{K_j}$. Here, we emphasise that the computation of~$\xi_{K_j,p_j-1}$ does not require any additional function evaluations of~$f$ since the values~$(f\circ\phi_{K_j})(\x{p_j}{k})$, $k=1,\ldots,p_j$, have already been determined in the application of the quadrature rule~\eqref{eq:QKj}.
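To make this computation concrete, the following Python sketch evaluates $\xi_{K_j,p_j-1}$ and the resulting indicator $\mathsf{F}_{K_j,p_j}(f)$ from the available samples of $f\circ\phi_{K_j}$ at the reference quadrature points (for a Gauss--Legendre rule with $p_j\ge 2$ points, so that the exactness assumption above is met); the treatment of a vanishing denominator is an ad hoc convention of this sketch.
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def smoothness_indicator(fvals, xhat, what):
    """Smoothness indicator F_{K_j,p_j}(f) from the p_j Gauss-Legendre samples.

    fvals: values of f composed with phi_{K_j} at the reference nodes xhat,
    what : the corresponding weights on [-1, 1] (p_j >= 2 points)."""
    p = len(xhat)
    num = np.sum(what * fvals * eval_legendre(p - 1, xhat))   # ~ Legendre coefficient b_{p-1}
    den = np.sum(what * fvals * eval_legendre(p - 2, xhat))   # ~ Legendre coefficient b_{p-2}
    if den == 0.0:
        # degenerate case: fall back to the extreme values of the indicator
        return 1.0 if num == 0.0 else np.sqrt(3.0) / (np.sqrt(6.0) + 1.0)
    xi = (2 * p - 1) * abs(num / den)                          # xi_{K_j, p_j - 1}
    return (1.0 + xi) / (np.sqrt(1.0 + xi**2 / 3.0) + np.sqrt(2.0) * xi)

# Example: a smooth integrand on the reference interval, sampled with 5 points.
xhat, what = np.polynomial.legendre.leggauss(5)
print(smoothness_indicator(np.exp(xhat), xhat, what))   # above a threshold like tau = 0.6
\end{verbatim}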
\subsection{Adaptive Variable Order Procedure}\label{sc:hprefine}
Based on the above derivations, we now propose an $hp$-type adaptive quadrature method. To this end, we start by choosing a tolerance~$\mathtt{tol}>0$, a smoothness parameter~$\tau\in\left(\nicefrac{\sqrt{3}}{(\sqrt6+1)},1\right)$, and a maximal number~$p_{\max}\ge 2$ of possible quadrature points on each subinterval. Furthermore, we define the interval $K_1=[a,b]$, and a small number~$p_1$, $2\le p_1\le p_{\max}$, of quadrature points on~$K_1$. Moreover, we initialise the set of subintervals~$\mathtt{subs}$, the order vector~$\mathtt{p}$ containing the number of quadrature points on each subinterval, and the unknown value~$\mathtt{Q}$ of the integral as follows:
\[
\mathtt{subs}=\{K_1\},\qquad \mathtt{p}=\{p_1\},\qquad \mathtt{Q}=0.
\]
Then, the basic adaptive procedure is given as follows:
\begin{algorithmic}[1]
\While {$\mathtt{subs}\neq\emptyset$}
\State $[\Q1,\mathtt{subs},\mathtt{p}] = \mathtt{hprefine}(f,\mathtt{subs},\mathtt{p},p_{\max},\tau)$;
\State $\mathtt{Q} = \mathtt{Q} + \Q1$;
\EndWhile
\State Output~$\mathtt{Q}$.
\end{algorithmic}
Here, $\mathtt{hprefine}$ is a function, whose purpose is to identify those subintervals in~$\mathtt{subs}$, which need to be refined further for a sufficiently accurate approximation of the unknown integral. In addition, it outputs a set of subintervals (again denoted by~$\mathtt{subs}$), as well as an associated order vector (again denoted by~$\mathtt{p}$) which result from applying the most appropriate refinement, i.e., either~$h$- or $p$-refinement as outlined in~(i) and~(ii) in Section~\ref{sc:basic} above, for each subinterval. Furthermore, $\mathtt{hprefine}$ returns the sum~$\Q1$ of all quadrature values corresponding to subintervals in the input set~$\mathtt{subs}$ for which no further refinement is deemed necessary. The essential steps are summarised in Algorithm~\ref{alg:hprefine}.
\begin{algorithm}
\caption{Function $[\mathtt{Q},\mathtt{subsnew},\mathtt{pnew}] = \mathtt{hprefine}(f,\mathtt{subs},\mathtt{p},p_{\max},\tau)$}
\label{alg:hprefine}
\begin{algorithmic}[1]
\State {Define~$\mathtt{subsnew}=\mathtt{subs}$, and~$\mathtt{pnew}=\mathtt{p}$. Set~$\mathtt{Q}=0$.}
\For {each subinterval~$K_j\in\mathtt{subs}$}
\State {Evaluate the smoothness indicator~$\mathsf{F}_{K_j,p_j}(f)$ from~\eqref{eq:sind}.}
\If {$\mathsf{F}_{K_j,p_j}(f)<\tau$}
\myStateDouble {Apply $h$-refinement to~$K_j$, i.e., bisect~$K_j$ into two subintervals of equal size and reduce the number of quadrature points to~$\max(p_j-1,1)$ on both of them;}
\myStateDouble {Compute an improved approximation, denoted by~$\widetilde Q_{K_j}$, of~$Q_{K_j,p_j}(f|_{K_j})$ using~\eqref{eq:Qh} on~$K_j$.}
\ElsIf {$\mathsf{F}_{K_j,p_j}(f)\ge\tau$ and $p_j+1\le p_{\max}$}
\myStateDouble {Apply $p$-refinement to~$K_j$, i.e., increase the number of quadrature points to~$p_j+1$ on~$K_j$;}
\myStateDouble {Compute an improved approximation, denoted by~$\widetilde Q_{K_j}$, of~$Q_{K_j,p_j}(f|_{K_j})$ using~\eqref{eq:Qp} on~$K_j$.}
\ElsIf {$\mathsf{F}_{K_j,p_j}(f)\ge\tau$ and $p_j+1>p_{\max}$}
\myStateDouble {Bisect~$K_j$ into two subintervals of equal size and retain the number of quadrature points~$p_j$ on both of them;}
\myStateDouble {Compute an improved approximation, denoted by~$\widetilde Q_{K_j}$, of~$Q_{K_j,p_j}(f|_{K_j})$ using~\eqref{eq:Qp'} on~$K_j$.}
\EndIf
\If {$|\widetilde Q_{K_j}-Q_{K_j,p_j}(f|_{K_j})|$ is sufficiently small}
\myStateDouble {Update $\mathtt{Q} = \mathtt{Q} + \widetilde Q_{K_j}$;}
\myStateDouble {Eliminate $K_j$ from~$\mathtt{subsnew}$ and the corresponding entry~$p_j$ from~$\mathtt{pnew}$.}
\Else
\myStateDouble {Replace $K_j$ and~$p_j$ in~$\mathtt{subsnew}$ and~$\mathtt{pnew}$, respectively, by the corresponding $h$- or $p$-refined subintervals as determined above.}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
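A compact Python sketch of the decision logic of Algorithm~\ref{alg:hprefine} is given below; it reuses the routines \texttt{Q\_interval} and \texttt{smoothness\_indicator} from the sketches above, keeps at least two quadrature points after an $h$-refinement (so that the smoothness test remains well defined), employs a plain tolerance test instead of the stopping criterion discussed later, and re-evaluates the integrand for the smoothness test rather than reusing the stored quadrature samples; it is therefore a simplified illustration rather than the actual implementation.
\begin{verbatim}
import numpy as np

def hprefine_step(f, subs, orders, p_max, tau, tol):
    """One sweep over the current subintervals: accept or refine each of them.

    subs: list of subintervals (a, b); orders: list of point numbers p_j.
    Returns (accepted contribution to Q, remaining subintervals, their orders)."""
    Q, new_subs, new_orders = 0.0, [], []
    for (a, b), p in zip(subs, orders):
        Q_old = Q_interval(f, a, b, p)
        xhat, what = np.polynomial.legendre.leggauss(p)
        fvals = f(0.5 * (b - a) * xhat + 0.5 * (a + b))
        F = smoothness_indicator(fvals, xhat, what)
        mid = 0.5 * (a + b)
        if F < tau:                                   # h-refinement, cf. (i)
            children, p_new = [(a, mid), (mid, b)], max(p - 1, 2)
        elif p + 1 <= p_max:                          # p-refinement, cf. (ii)
            children, p_new = [(a, b)], p + 1
        else:                                         # p_max reached: bisect, keep p
            children, p_new = [(a, mid), (mid, b)], p
        Q_new = sum(Q_interval(f, c0, c1, p_new) for (c0, c1) in children)
        if abs(Q_new - Q_old) <= tol:                 # accept this subinterval
            Q += Q_new
        else:                                         # keep refining it
            new_subs += children
            new_orders += [p_new] * len(children)
    return Q, new_subs, new_orders
\end{verbatim}
A driver then simply iterates \texttt{hprefine\_step} until the list of subintervals is empty, accumulating the accepted contributions, as in the while-loop displayed above.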
\subsection{Practical Aspects}
In this section we discuss a number of practical issues involved in the implementation of the procedure described in Section~\ref{sc:hprefine} within a given computing environment.
\subsubsection{Gauss-Quadrature Rules}
In principle, the adaptive procedure presented in Section~\ref{sc:hprefine} allows for any variable order family of quadrature rules. In our numerical experiments presented in Section~\ref{sc:numerics} below, we propose the use of (families of) Gauss-type quadrature schemes. Although they might be criticised for their non-hierarchical structure, in the sense that they require more function evaluations in comparison to more traditional schemes (such as, for example, the adaptive Simpson or fixed-order Gauss-Kronrod rules), our numerical results indicate that their high degree of accuracy may be exploited in a very efficient manner within the $hp$-setting, particularly for smooth functions, with or without locally singular behaviour. Indeed, whilst non-hierarchical lower-order Gauss-type quadrature schemes might not be computationally competitive, it is a well-known feature of $hp$-methods (see, e.g., \cite{Schwab98}) that their superiority becomes especially apparent on a variable, higher-order level.
In the current article we employ Gauss-Legendre quadrature points and weights (with at least~$p_{\min}=2$ points and weights); these quantities can be precomputed up to any given order~$p_{\max}$ (in practice~$p_{\max}=15$ is usually more than sufficient) or even be generated on the spot in an efficient way (see, e.g.,~\cite{CanutoHussainiQuarteroni:88,GlaserLiuRokhlin:07,Waldvogel:06}) if an upper bound~$p_{\max}$ cannot be fixed. In addition, we note that the Gauss-Legendre rule based on~$p$ points has a degree of exactness of~$2p-1$, i.e., the smoothness indicators derived in Section~\ref{sc:smoothness} can be computed by means of the formula given in~\eqref{eq:xi}. For a given maximum number~$p_{\max}$, we store the points and weights of the Gauss-Legendre rules (on the reference interval~$[-1,1]$) with up to~$p_{\max}$ points in two $p_{\max}\times (p_{\max}-1)$-matrices~$\bm X$ and~$\bm W$, respectively; here, for parameters~$p=2,\ldots,p_{\max}$, the $p$-th columns of~$\bm X$ and~$\bm W$ are built from the points and weights of the corresponding $p$-point Gauss-Legendre quadrature rule, respectively (and complementing the remaining entries in all but the last column by zeros):
\begin{equation}\label{eq:XW}
{\bm X}=
\begin{pmatrix}
\x{2}{1}&\x{3}{1}& \cdots & \x{p_{\max}}{1} \\
\x{2}{2}&\vdots & & \\
&\x{3}{3}& & \vdots\\
&\bigzero& \ddots & \\
& & & \x{p_{\max}}{p_{\max}}
\end{pmatrix},\quad
{\bm W}=
\begin{pmatrix}
\w{2}{1}&\w{3}{1}& \cdots & \w{p_{\max}}{1} \\
\w{2}{2}&\vdots & & \\
&\w{3}{3}& & \vdots\\
&\bigzero& \ddots & \\
& & & \w{p_{\max}}{p_{\max}}
\end{pmatrix}.
\end{equation}
We note that, for other quadrature rules, the number of rows in the above matrices may be different.
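In Python, for instance, the two tables can be precomputed along the following lines (the layout mirrors~\eqref{eq:XW}, with the $p$-point rule stored, zero-padded, in the corresponding column):
\begin{verbatim}
import numpy as np

def gauss_tables(p_max):
    """Precompute X and W: column p-2 (0-based) holds the zero-padded p-point rule."""
    X = np.zeros((p_max, p_max - 1))
    W = np.zeros((p_max, p_max - 1))
    for p in range(2, p_max + 1):
        x, w = np.polynomial.legendre.leggauss(p)
        X[:p, p - 2] = x
        W[:p, p - 2] = w
    return X, W

X, W = gauss_tables(15)   # p_max = 15, as in the experiments below
\end{verbatim}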
\subsubsection{Vectorised Quadrature}\label{sc:vecQuad}
Following the ideas of~\cite{Shampine:08} we use a vectorised quadrature implementation. This means that, instead of computing the integrals on the subintervals~$\mathtt{subs}$ in Algorithm~\ref{alg:hprefine} one at a time, they are all computed at once. This can be accomplished by using fast vector- and matrix-operations, and by carrying out all necessary function evaluations in a single operation by computing the function to be integrated for a vector of input values. Specifically, we write the composite rule
\[
I\approx \sum_{K_i\in\mathtt{subs}}Q_{K_i,p_i}(f|_{K_i})=\sum_{K_i\in\mathtt{subs}}\frac{h_i}{2}\sum_{k=1}^{p_i}\w{p_i}{k}(f\circ\phi_{K_i})(\x{p_i}{k})
\]
as a dot product of a weight vector~$\bm w$ and a function vector~$f(\bm x)$; here, the former vector contains all (scaled) weights~$\{\frac12h_i\w{p_i}{k}\}_{i,k}$, and the latter vector represents the evaluation of the integrand function~$f$ on the vector~$\bm x$ of all corresponding quadrature points~$\{\phi_{K_i}(\x{p_i}{k})\}_{i,k}$ appearing in the sum above. Evidently, these vectors can be built efficiently by extracting (and affinely mapping and scaling) the corresponding rows from the matrices~$\bm X$ and~$\bm W$ in~\eqref{eq:XW}. We emphasise that applying vectorised quadrature crucially improves the performance of the overall adaptive procedure (provided that such a technology is available in a given computing environment).
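A minimal Python illustration of this vectorised evaluation reads as follows (it assumes the tables \texttt{X}, \texttt{W} from the previous sketch and an integrand that accepts array arguments):
\begin{verbatim}
import numpy as np

def vectorised_quadrature(f, subs, orders, X, W):
    """Evaluate sum_i Q_{K_i,p_i}(f) with a single call of f on all quadrature points."""
    points, weights = [], []
    for (a, b), p in zip(subs, orders):
        h = b - a
        points.append(0.5 * h * X[:p, p - 2] + 0.5 * (a + b))   # phi_{K_i}(x_p)
        weights.append(0.5 * h * W[:p, p - 2])                  # scaled weights
    x = np.concatenate(points)
    w = np.concatenate(weights)
    return np.dot(w, f(x))                                      # one call of f, one dot product

print(vectorised_quadrature(np.exp, [(0.0, 0.5), (0.5, 1.0)], [3, 5], X, W))
\end{verbatim}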
\subsubsection{Smoothness Estimators}
As mentioned before, computing the smoothness indicators from \eqref{eq:xi} does not need any additional function evaluations of the integrand function~$f$; they only require the values of the Legendre polynomials~$\widehat L_{p-1}$ and~$\widehat L_{p-2}$ at the points~$\{\x{p}{k}\}_{k=1}^p$, for~$p=2,\ldots,p_{\max}$. These quantities are again precomputable, and can be stored in two matrices
\begin{equation}\label{eq:L1}
\bm L_1
=\begin{pmatrix}
L_1(\x{2}{1}) & L_2(\x{3}{1}) & \cdots & L_{p_{\max}-1}(\x{p_{\max}}{1})\\
L_1(\x{2}{2}) & \vdots\\
& L_2(\x{3}{3}) & & \vdots\\
&\bigzero& \ddots & \\
& && L_{p_{\max}-1}(\x{p_{\max}}{p_{\max}})
\end{pmatrix},
\end{equation}
and
\begin{equation}\label{eq:L2}
\bm L_2
=\begin{pmatrix}
L_0(\x{2}{1}) & L_1(\x{3}{1}) & \cdots & L_{p_{\max}-2}(\x{p_{\max}}{1})\\
L_0(\x{2}{2}) & \vdots\\
& L_1(\x{3}{3}) & & \vdots\\
&\bigzero& \ddots & \\
& && L_{p_{\max}-2}(\x{p_{\max}}{p_{\max}})
\end{pmatrix}.
\end{equation}
Then, the sums in~\eqref{eq:xi} are vectorised similarly as described above. In particular, the computation of the smoothness estimators can be undertaken with an almost negligible computational cost.
\subsubsection{Stopping Criterion}
In order to implement the stopping criterion in line~14 of Algorithm~\ref{alg:hprefine}, we exploit an idea that was proposed in the context of adaptive Simpson quadrature in~\cite{GanderGautschi:00}. More precisely, given a possibly rough approximation~$\mathtt{iguess}\approx\int_a^bf(x)\, \dd x$ of the unknown integral~$I$ from~\eqref{eq:I} (e.g., obtained from a Monte-Carlo calculation such that both the approximation and the exact value are of the same magnitude; cf.~\cite{GanderGautschi:00}), and a tolerance~$\mathtt{tol}>0$, we redefine
\[
\mathtt{iguess = iguess*tol/eps;}
\]
here, $\mathtt{eps}$ denotes the machine epsilon (i.e., the floating-point relative accuracy) of a given computing environment. Then, using the comparison operator~$\mathtt{==}$, we accept the difference $|\widetilde Q_{K_j}-Q_{K_j,p_j}(f|_{K_j})|$ as sufficiently small with respect to the given tolerance~$\mathtt{tol}$ if the logical test
\[
\mathtt{iguess} + |\widetilde Q_{K_j}-Q_{K_j,p_j}(f|_{K_j})| \mathtt{\ == iguess};
\]
yields a {\tt true} value.
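In Python, the same device can be sketched as follows (with NumPy's machine epsilon playing the role of $\mathtt{eps}$):
\begin{verbatim}
import numpy as np

eps = np.finfo(float).eps            # machine epsilon, the analogue of MATLAB's eps

def accept(diff, iguess, tol):
    """Termination test: diff is negligible relative to the rescaled rough guess."""
    scaled = iguess * tol / eps
    return scaled + diff == scaled   # true iff diff falls below the resolution of 'scaled'

print(accept(1e-18, 1.0, 1e-15), accept(1e-10, 1.0, 1e-15))   # True False
\end{verbatim}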
\subsection{Numerical Examples}\label{sc:numerics}
In order to test our approach, we consider a number of benchmark problems on the interval~$[0,1]$. Specifically, the following functions will be studied:
\begin{align*}
f_1(x) & = \exp(x),\\
f_2(x) & = \sqrt{|x-\nicefrac13|},\\
f_3(x) & = \sech(10(x-\nicefrac15))^2 + \sech(100(x-\nicefrac25))^4\\
&\quad + \sech(1000(x-\nicefrac35))^6 + \sech(1000(x-\nicefrac45))^8,\\
f_4(x) & = \cos(1000x),\\
f_5(x) & = \begin{cases}
0 & \text{ if $x\le\nicefrac13$},\\
1 & \text{ if $x>\nicefrac13$}.
\end{cases}
\end{align*}
Whilst the first function, $f_1$, is analytic, the second function, $f_2$, is smooth except at~$\nicefrac13$ (see Figure~\ref{fig:f2} (top)). Furthermore, $f_3$ was proposed in~\cite{Hale:10} in the context of the chebfun package~\cite{HaleTrefethen:12}; this is a smooth function that exhibits several very thin spikes (see Figure~\ref{fig:f3} (top)). Moreover, $f_4$ is highly oscillatory, and~$f_5$ is an example of a discontinuous function.
We perform our computations in \textsc{Matlab}\footnote{The MathWorks, Inc.} on a single 2.6GHz processor. The tolerance is set to~$\mathtt{tol}=0.3\times10^{-15}$ (which is close to machine precision in \textsc{Matlab}), the smoothness estimation parameter is prescribed as~$\tau=0.6$, and~$p_{\max}=15$. Within this setting, the adaptive procedure generates results that are accurate to machine precision, for all of the considered examples. In Table~\ref{tb:fctevl}, for each of the functions~$f_1,\ldots,f_5$ above, we present the number of function calls (\#~fct.~calls) in the vectorised quadrature implementation (counting a single application of the integrand function to a vector input as 1; cf.~Section~\ref{sc:vecQuad}), as well as the number of single function evaluations (\#~sing.~fct.~ev.) taking into account the number of scalar entries of a vector input in each function call. The latter number is compared with the number of scalar function evaluations performed in a classical adaptive Simpson procedure as proposed in~\cite{GanderGautschi:00} (which is based on employing the two end points as well as the midpoint on each subinterval, and reuses the former two points without recomputing). Except for the last function, $f_5$, where a low-order quadrature rule is more effective, the remarkable efficiency of the proposed $hp$-type quadrature becomes clearly visible. This is confirmed with the expeditious cpu times (which do not include the computation of the precomputable matrices~$\bm X, \bm W, \bm L_1,\bm L_2$ from~\eqref{eq:XW}, \eqref{eq:L1}, and \eqref{eq:L2}) for each of the examples.
\begin{table}
\begin{tabular}{crrrr} \toprule
\sp & \multicolumn{3}{c}{\emph{$hp$-adapt. quad.}} & \emph{adapt. Simpson quad.}\\
& \#~fct.~calls & \# sing.~fct.~ev. & cpu [sec] & \# sing.~fct.~ev.\\\midrule
$f_1$ & 52 & 9 & 0.0031 & 4,096\\
$f_2$ & 1,718 & 65 & 0.0224 & 25,488\\
$f_3$ & 2,427 & 33 & 0.0144 & 72,528\\
$f_4$ & 50,534 & 35 & 0.0180 & 1,965,376\\
$f_5$ & 1,273 & 106 & 0.0342 & 784\\
\bottomrule\\
\end{tabular}
\caption{Performance data for $hp$-type adaptive quadrature.}
\label{tb:fctevl}
\end{table}
In order to illustrate how the $hp$-adaptive procedure performs, we depict the final $hp$-mesh for~$f_2$ and~$f_3$ in Figure~\ref{fig:f2} (bottom) and Figure~\ref{fig:f3} (bottom), respectively. Here, along the horizontal axis we present the subintervals obtained as a result of the adaptive process, and on the vertical axis the number of quadrature points introduced on each subinterval is displayed. In both examples, we see that smooth regions in the underlying integrand are resolved by employing larger subintervals featuring a higher number of quadrature points, whereas close to singularities, the number of quadrature points is kept low on very small integration subdomains. It is noteworthy that this behaviour is well-known from $hp$-finite element methods for differential equations, where high-order algebraic or even exponential convergence rates can be obtained by applying this type of $hp$-refinement procedure; see~\cite{Schwab98} for details.
\begin{figure}
\centering
\includegraphics[width=0.685\linewidth]{f2}\\[5ex]
\includegraphics[width=0.685\linewidth]{f2mesh}
\caption{Function~$f_2$: Graph (top) and $hp$-mesh (bottom).}
\label{fig:f2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.685\linewidth]{f3}\\[5ex]
\includegraphics[width=0.685\linewidth]{f3mesh}
\caption{Function~$f_3$: Graph (top) and $hp$-mesh (bottom).}
\label{fig:f3}
\end{figure}
\section{Conclusions}
In this article we proposed a new adaptive quadrature strategy, which features both local subdivision of the integration domain, as well as local variation of the number of quadrature points employed on each subinterval. Our approach is inspired by the $hp$-adaptive finite element methodology based on $hp$-adaptive smoothness testing. In combination with a vectorised quadrature implementation, the proposed adaptive quadrature algorithm is able to deliver highly accurate results in a very efficient manner. Since our approach is closely related to the $hp$-finite element technique, it can be extended to multiple dimensions, including, in particular, the application of anisotropic refinements of the underlying domain of integration, together with the exploitation of different numbers of quadrature points in each coordinate direction on each subinterval (based, for example, on anisotropic Sobolev embeddings as outlined in~\cite[\S3.1]{FankhauserWihlerWirz:14}).
\bibliographystyle{amsplain}
| {
"timestamp": "2015-08-17T02:09:31",
"yymm": "1508",
"arxiv_id": "1508.03516",
"language": "en",
"url": "https://arxiv.org/abs/1508.03516",
"abstract": "In this article we propose a new adaptive numerical quadrature procedure which includes both local subdivision of the integration domain, as well as local variation of the number of quadrature points employed on each subinterval. In this way we aim to account for local smoothness properties of the function to be integrated as effectively as possible, and thereby achieve highly accurate results in a very efficient manner. Indeed, this idea originates from so-called hp-version finite element methods which are known to deliver high-order convergence rates, even for nonsmooth functions.",
"subjects": "Numerical Analysis (math.NA)",
"title": "An Adaptive Variable Order Quadrature Strategy",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587261496031,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7097978877767854
} |
https://arxiv.org/abs/2012.09317 | On a fractional queueing model with catastrophes | A $M/M/1$ queue with catastrophes is a modified $M/M/1$ queue model for which, according to the times of a Poisson process, catastrophes occur leaving the system empty. In this work, we study a fractional $M/M/1$ queue with catastrophes, which is formulated by considering fractional derivatives in the Kolmogorov's Forward Equations of the original Markov process. For the resulting fractional process, we obtain the state probabilities, the mean and the variance for the number of customers at any time. In addition, we discuss the estimation of parameters. | \section{Introduction}
Queueing Theory allows the formulation of mathematical models and methods to deal with stochastic aspects in applied sciences. Roughly speaking, the models are stochastic processes, usually Markovian, that represent phenomena in which customers arrive in a random way at a service facility. Upon arrival they are made to wait in a queue until it is their turn to be served, and after that it is assumed that they leave the system. The main interest for such models is in the behavior of the number of customers in the system at any time. This can be studied through the state probabilities, the mean and the variance, among other quantities. For a friendly introduction to Queueing Theory we refer the reader, for instance, to \cite[Chapter 8]{ross}.
A class of well-known queueing models are the exponential ones, where it is assumed that arrivals occur according to a Poisson process and that service times follow an exponential law. Such models are usually called $M/M/k$ queues, where $k$ represents the number of servers. Our interest is in the $M/M/1$ queue with catastrophes in which, additionally, it is supposed that catastrophes occur according to the times of a Poisson process, leaving the system empty. This model has been studied, for instance, in \cite{kumar,catastrofef,catastrofe}. Here we shall consider a non-Markovian version of such a model, which is inspired by a series of modifications of probabilistic models that appeared as a consequence of the development of fractional calculus. The recent interest in fractional calculus has been increased by its applications mainly in numerical analysis and in different areas of physics, engineering, economics, etc. In Probability Theory, fractional calculus combined with stochastic processes proves useful for representing random phenomena with long memory; that is, where the Markovian property does not apply. This is the reason why in recent years the efforts of a large number of researchers have been directed towards the formulation of fractional counterparts of classical Markovian models. For some examples, we refer the reader to \cite{cahoy2010,death, puro,yule,fila,catastrofef} and the references therein.
Our purpose is to contribute to this effort by studying the fractional version of the $M/M/1$ queue with catastrophes. Fractional queues were studied for the first time by \cite{fila}, where the authors proposed a generalization of the classical $M/M/1$ queue model derived by applying fractional derivative operators to the Kolmogorov Forward Equations of the original process. The approach proposed by \cite{fila} allows the formulation of closed-form expressions for some functionals of interest and, at the same time, the estimation of parameters. As far as we know,
the fractional version of the $M/M/1$ queue with catastrophes was proposed and studied only by \cite{catastrofef}, where the authors provided expressions for the state probabilities, the distributions of the busy period for fractional queues without and with catastrophes, and the distribution of the time of the first occurrence of a catastrophe. In this work we complement the analysis of \cite{catastrofef} by appealing to the approach proposed by \cite{fila}. As a contribution to the field we provide closed-form expressions for the state probabilities, assuming that the process starts from an arbitrary state, and for the mean and the variance of the number of customers at any time. In addition, we deal with the estimation of the parameters of the model, and we illustrate our results with computational simulations.
We organize the paper as follows: in Section 2 we introduce the classical $M/M/1$ queueing model with catastrophes and its fractional generalization using a subordination relationship. Furthermore, we obtain the state probabilities, and we state the mean and the variance for the number of customers at any time by using a probability generating function. In Section 3, we present the estimation of parameters for the model and their confidence intervals. Lastly, we summarize our results in a brief conclusion in Section 4.
\section{The model and results}
Let $\{X_t\}_{t\geq 0}$ be the exponential $M/M/1$ queue model with catastrophes, and let $P_{i,n}(t):=P(X_t=n|X_0=i)$ be its transition probabilities, where $i,n$ are non-negative integers. In words, assume that customers arrive at a single-server service system according to a Poisson process of parameter $\lambda>0$. Upon arrival each customer is made to wait in a single queue until it is his/her turn to be served. If the server is free at the arrival of a customer, then he/she goes directly into service. After a service is complete, the corresponding customer leaves the system, and the next customer in the queue enters service. It is assumed that the service times are independent random variables with a common exponential law of parameter $\mu>0$. In addition, it is assumed that, according to the times of a Poisson process of parameter $\xi\geq 0$, catastrophes occur leaving the system empty. For any $t\geq 0$ the random variable $X_t$ denotes the number of customers in the system at time $t$, and $\{X_t\}_{t\geq 0}$ is a continuous-time Markov chain with transition probabilities given by:
$$P_{i,n}(h)=\left\{
\begin{array}{cl}
\lambda h + o(h),& \text{ if }i\in\mathbb{N}\cup\{0\}\text{ and }n=i+1,\\[.2cm]
\mu h + o(h),& \text{ if }i\in\mathbb{N}\setminus \{1\}\text{ and }n=i-1,\\[.2cm]
(\mu + \xi) h + o(h),& \text{ if }i=1\text{ and }n=0,\\[.2cm]
\xi h + o(h),& \text{ if }i\in\mathbb{N}\setminus\{1\}\text{ and }n=0,\\[.2cm]
\end{array}\right.
$$
where $o(h)$ represents a function such that $\lim_{h\to 0}o(h)/h =0$ (see Figure \ref{fig:trans}).
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\draw [thick] (0,0) circle (7pt);
\draw (0,0) node[font=\footnotesize] {$0$};
\draw [thick] (2,0) circle (7pt);
\draw (2,0) node[font=\footnotesize] {$1$};
\draw [thick] (4,0) circle (7pt);
\draw (4,0) node[font=\footnotesize] {$2$};
\draw [thick] (6,0) circle (7pt);
\draw (6,0) node[font=\footnotesize] {$3$};
\draw [thick] (8,0) circle (7pt);
\draw (8,0) node[font=\footnotesize] {$4$};
\draw [thick, directed] (0,0.25) to [bend left=45] (2,0.25);
\draw [thick, directed] (2,-0.25) to [bend left=45] (0,-0.25);
\draw [thick, directed] (4,-0.25) to [bend left=60] (0,-0.25);
\draw [thick, directed] (6,-0.25) to [bend left=75] (0,-0.25);
\draw [thick, directed] (8,-0.25) to [bend left=90] (0,-0.25);
\draw [thick, directed] (2,0.25) to [bend left=45] (4,0.25);
\draw [thick, directed] (4,0.25) to [bend left=45] (6,0.25);
\draw [thick, directed] (6,0.25) to [bend left=45] (8,0.25);
\draw [->, dashed, thick] (8,0.25) to [bend left=45] (9,0.45);
\draw [thick, reverse directed] (4.25,0) to (5.75,0);
\draw [thick, reverse directed] (2.25,0) to (3.75,0);
\draw [thick, reverse directed] (0.25,0) to (1.75,0);
\draw [thick, reverse directed] (6.25,0) to (7.75,0);
\draw (1,1) node[font=\footnotesize] {$\lambda$};
\draw (1.5,-0.8) node[font=\footnotesize] {$\xi$};
\draw (3.2,-1.3) node[font=\footnotesize] {$\xi$};
\draw (5.3,-1.6) node[font=\footnotesize] {$\xi$};
\draw (7.2,-2.1) node[font=\footnotesize] {$\xi$};
\draw (3,1) node[font=\footnotesize] {$\lambda$};
\draw (5,1) node[font=\footnotesize] {$\lambda$};
\draw (7,1) node[font=\footnotesize] {$\lambda$};
\draw (5,-0.3) node[font=\footnotesize] {$\mu$};
\draw (1,-0.3) node[font=\footnotesize] {$\mu$};
\draw (3,-0.3) node[font=\footnotesize] {$\mu$};
\draw (7,-0.3) node[font=\footnotesize] {$\mu$};
\draw (9,0) node[font=\footnotesize] {$\cdots$};
\end{tikzpicture}
\caption{Transitions and rates for the $M/M/1$ queue with catastrophes.}
\label{fig:trans}
\end{figure}
The Kolmogorov Forward Equations for the exponential $M/M/1$ queue model with catastrophes are given by
\begin{equation}
\begin{cases}
\displaystyle \frac{\partial P_{i,0}(t)}{\partial t}=-(\lambda + \xi)P_{i,0}(t)+\mu P_{i,1}(t)+\xi,\\[0.4cm]
\displaystyle \frac{\partial P_{i,n}(t)}{\partial t}=-(\lambda + \mu + \xi)P_{i,n}(t)+\lambda P_{i,n-1}(t)+\mu P_{i, n+1}(t),\\[0.4cm]
P_{i,n}(0)=\delta_{i, n},
\end{cases}
\label{eqdif}
\end{equation}
\noindent
where $\delta_{i, n}$ is the Kronecker delta, defined by $\delta_{i, n}=1$ if $i=n$ and $\delta_{i, n}=0$ otherwise. Our purpose is to define the fractional version of the exponential $M/M/1$ queue model with catastrophes.
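Before doing so, we remark that the classical system \eqref{eqdif} can be integrated numerically on a truncated state space; the following Python sketch (with illustrative rates and an ad hoc truncation level $N$) does this with a standard ODE solver:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, xi, N = 0.8, 1.0, 0.1, 50       # illustrative rates and truncation level

def rhs(t, P):
    """Truncated right-hand side of the forward equations for (P_{i,0},...,P_{i,N})."""
    dP = np.empty_like(P)
    dP[0] = -(lam + xi) * P[0] + mu * P[1] + xi
    dP[1:-1] = -(lam + mu + xi) * P[1:-1] + lam * P[:-2] + mu * P[2:]
    dP[-1] = -(lam + mu + xi) * P[-1] + lam * P[-2]   # boundary of the truncation
    return dP

P0 = np.zeros(N + 1); P0[0] = 1.0        # start from the empty system, i = 0
sol = solve_ivp(rhs, (0.0, 5.0), P0, t_eval=[5.0])
print(sol.y[0, -1])                      # approximation of P_{0,0}(5)
\end{verbatim}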
\begin{defn}
The fractional $M/M/1$ queue model with catastrophes, with parameter $\alpha\in (0,1)$, is the continuous-time stochastic process $\{X^\alpha_t\}_{t\geq 0}$ such that the transition probabilities $P^\alpha_{i,n}(t):=P\{X^\alpha_t=n|X^\alpha_0=i\}$ satisfy
\begin{equation}\label{eqdiff}
\begin{cases}
D^\alpha _t P^\alpha_{i,0}(t)=-(\lambda + \xi)P^\alpha_{i,0}(t)+\mu P^\alpha_{i,1}(t)+\xi,\\[0.4cm]
D^\alpha _t P^\alpha_{i,n}(t)=-(\lambda + \mu + \xi)P^\alpha_{i,n}(t)+\lambda P^\alpha_{i,n-1}(t)+\mu P^\alpha_{i,n+1}(t),\\[0.4cm]
P^\alpha_{i,n}(0)=\delta_{i,n},
\end{cases}
\end{equation}
where $D^\alpha_t$ is the Caputo fractional derivative of order $\alpha$.
\end{defn}
\begin{lema}Let $\{X^\alpha_t\}_{t\geq 0}$ be the fractional $M/M/1$ queue model with catastrophes, with parameter $\alpha$, and consider the probability generating function $G^\alpha(z,t)=G_i^\alpha(z,t):=\sum^\infty_{n=0}z^nP^\alpha_{i, n}(t)$. Then,
\begin{equation}
\begin{cases}
\displaystyle zD^\alpha_tG^\alpha(z,t)=(1-z)\left[(\mu-z\lambda-\frac{\xi z}{(1-z)})G^\alpha(z,t)-\mu P^\alpha_{i,0}(t)+\frac{\xi z}{(1-z)}\right], \\
G^\alpha(z,0)=z^i.
\label{sistema}
\end{cases}
\end{equation}
\end{lema}
\begin{proof}
On the one hand, by \eqref{eqdiff}, we obtain
\begin{equation}
D^\alpha _t[G^\alpha(z,t)-P^\alpha_{i,0}(t)]=-(\lambda+\mu+\xi)[G^\alpha(z,t)-P^\alpha_{i,0}(t)]+\lambda zG^\alpha(z,t)+\frac{\mu}{z}[G^\alpha(z,t)-P^\alpha_{i,0}(t)-zP^\alpha_{i,1}(t)].
\label{des_sistema}
\end{equation}
On the other hand, substituting $D^\alpha_t P^\alpha_{i,0}(t)$ from the first equation of \eqref{eqdiff} and using the linearity of $D^\alpha_t$, we get
\begin{equation}\label{eqlem1}
D^\alpha _t[G^\alpha(z,t)-P^\alpha_{i,0}(t)]=D^\alpha _tG^\alpha(z,t)-[-(\lambda + \xi)P^\alpha_{i,0}(t)+\mu P^\alpha_{i,1}(t)+\xi].
\end{equation}
\smallskip
Putting \eqref{des_sistema} and \eqref{eqlem1} together we have
\begin{equation*}
\begin{array}{rl}
D^\alpha_tG^\alpha(z,t)=&-[\lambda(1-z) +\mu(1-1/z) + \xi]G^\alpha(z,t) +\mu(1 - 1/z)P^\alpha_{i,0}(t) +\xi,
\end{array}
\end{equation*}
\smallskip
which can be written as
\begin{equation}\label{eq:lem2}
zD^\alpha_tG^\alpha(z,t)=[-z\lambda(1-z)+\mu(1-z)-z\xi]G^\alpha(z,t)-(1-z)\mu P^\alpha_{i,0}(t)+z\xi.
\end{equation}
Therefore, \eqref{sistema} is obtained by \eqref{eq:lem2} and by noting that $G^\alpha(z,0)= \displaystyle \sum^\infty_{n=0}z^n\delta_{i, n} = z^i.$
\end{proof}
\subsection{Representation of the fractional model, and state probabilities}
Our first task is to provide a representation of the fractional model through the exponential one, by means of a time-changed version of the original process. Let $\{B^\alpha_t\}_{t\geq 0}$ be the $\alpha$-stable subordinator and let $\{C^\alpha_t\}_{t\geq 0}$ be its inverse process, where $C^\alpha_t$ is the first passage time to the level $t>0$, that is,
\begin{equation}
C^\alpha_t:=\inf\{s>0,~B^\alpha_s>t\}.
\end{equation}
We refer the reader to \cite[Chapter 3]{bertoin} for more details about the subordinator and its inverse. In addition, we consider the Laplace transform of the inverse process
\begin{equation}
\int^\infty_0e^{-st}f_\alpha(y,t)\,dt=s^{\alpha-1}e^{-ys^\alpha }.
\end{equation}
\noindent
Here $f_\alpha(y,t)$ is the density of $C_t^{\alpha}$, and it is given by
\begin{equation}
f_\alpha(y,t) = W_{-\alpha,1-\alpha}(-yt^{-\alpha})t^{-\alpha}=t^{-\alpha}\sum^\infty_{r = 0}\frac{(-yt^{-\alpha})^r}{r!\Gamma(1-\alpha(1+r))},
\label{wright}
\end{equation}
\noindent
where $W_{-\alpha,1-\alpha}(-x)$, also denoted by $M_{\alpha}(x)$ in \cite{mainardi}, is the Wright distribution of parameter $\alpha$. Moreover, $f_\alpha(y,t)$ is the solution of the fractional diffusion equation. We refer the reader to \cite{mainardi, difusion} for more details.
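For later numerical use, the density $f_\alpha(y,t)$ can be approximated by truncating the series in \eqref{wright}; the following Python sketch does so with a fixed (heuristic) number of terms, which is adequate only for moderate values of $yt^{-\alpha}$, since the direct summation suffers from cancellation for large arguments.
\begin{verbatim}
import numpy as np
from scipy.special import factorial, rgamma   # rgamma(x) = 1/Gamma(x), entire in x

def f_alpha(y, t, alpha, n_terms=80):
    """Truncated series for the density of the inverse alpha-stable subordinator."""
    r = np.arange(n_terms)
    terms = (-y * t**(-alpha))**r / factorial(r) * rgamma(1.0 - alpha * (1.0 + r))
    return t**(-alpha) * np.sum(terms)

print(f_alpha(0.5, 1.0, 0.7))   # value of the density at y = 0.5 for t = 1, alpha = 0.7
\end{verbatim}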
\begin{teo}
\label{teosub}
Let $\{X_t\}_{t\geq 0}$ and $\{X^{\alpha}_t\}_{t\geq 0}$ be, respectively, the exponential and the fractional (with parameter $\alpha\in(0,1]$) $M/M/1$ queue models with catastrophes. If $\{C^\alpha_t\}_{t\geq 0}$, $\alpha \in (0,1]$, is the inverse $\alpha$-stable subordinator, independent of $\{X_t\}_{t\geq 0}$, then
\begin{equation}
X^\alpha_t=X_{C^\alpha_t}, \text{ for any }t\geq0
\label{relacaosub}
\end{equation}
where the equality holds in the sense of one-dimensional distributions.
\label{sub}
\end{teo}
\begin{proof}
We start by pointing out that \eqref{relacaosub} is equivalent to saying that we can write the state probabilities as
\begin{equation}
P^\alpha_{i,n}(t)=\int^\infty_0P_{i,n}(y)f_\alpha(y,t) dy,
\label{resub}
\end{equation}
\smallskip
and the probability generating function as
\begin{equation}
\begin{array}{rl}
G^\alpha(z,t)&=\displaystyle\sum^\infty_{n = 0}z^n\left\{\int^\infty_0P_{i,n}(y)f_\alpha(y,t)dy\right\}\\[.5cm]
&=\displaystyle\int^\infty_0\left\{\sum^\infty_{n = 0}z^nP_{i,n}(y)\right\}f_\alpha(y,t)dy\\[.5cm]
&\displaystyle=\int^\infty_0G(z,y)f_\alpha(y,t)dy.
\label{resub1}
\end{array}
\end{equation}
Since we are interested in showing that the fractional process can be written as a transformation of the exponential process through the inverse $\alpha$-stable subordinator, it is enough to prove that \eqref{resub} and \eqref{resub1} satisfy \eqref{sistema}. We therefore apply the Laplace transform to \eqref{sistema}; namely, if $\tilde{G}^\alpha(z,s):=\int_{0}^{\infty}e^{-st}G^{\alpha}(z,t)dt$ and $ \tilde{P}^\alpha_{i,n}(s) := \int^\infty_0e^{-st}P^\alpha_{i,n}(t)dt$, then
\begin{equation}
z[s^\alpha \tilde{G}^\alpha(z,s)-s^{\alpha-1}z^i]=(1-z)\left[\left(\mu-\lambda z-\frac{\xi z}{1-z}\right)\tilde{G}^\alpha(z,s)-\mu \tilde{P}^\alpha_{i,0}(s)\right]+\frac{\xi z}{s}.
\label{trans1eq}
\end{equation}
By using \eqref{resub}, \eqref{resub1} and the Laplace transform of the inverse $\alpha$-stable subordinator process, we get that $z[s^\alpha \tilde{G}^\alpha(z,s)-s^{\alpha-1}z^i]$ is equal to
$$(1-z)\left[\left(\mu -\lambda z -\frac{\xi z}{1-z}\right)\int^\infty_0{G(z,y)s^{\alpha-1}e^{-ys^\alpha}dy} \displaystyle-\mu\int^\infty_0{P_{i,0}(y)}s^{\alpha-1}e^{-ys^\alpha}dy\right]+\frac{\xi z}{s},$$
so
\begin{equation} \label{trans2eq}
z[s^\alpha \tilde{G}^\alpha(z,s)-s^{\alpha-1}z^i]=\displaystyle (1-z)\int^\infty_0s^{\alpha-1}e^{-ys^\alpha}\left[\left(\mu -\lambda z -\frac{\xi z}{1-z}\right){G(z,y)}-\mu{P_{i,0}(y)}\right]dy+\frac{\xi z}{s}.
\end{equation}
Now, since
\begin{equation}
z\left[\frac{\partial G(z,t)}{\partial t}-\xi\right]=(1-z)\left[(\mu-z\lambda-\frac{\xi z}{(1-z)})G(z,t)-\mu P_{i,0}(t)\right],
\end{equation}
we can use \eqref{trans2eq} to obtain
\begin{equation}
\begin{array}{rcl}
z[s^\alpha \tilde{G}^\alpha(z,s)-s^{\alpha-1}z^i]&=&\displaystyle s^{\alpha-1}\int^\infty_0{e^{-ys^\alpha}z\frac{\partial G(z,y)}{\partial y}}dy-s^{\alpha-1}\int^\infty_0{e^{-ys^\alpha}z\xi}dy+\frac{\xi z}{s}\\[.6cm]
& =&\displaystyle s^{\alpha-1}z\left[G(z,y)e^{-ys^\alpha} \Bigr|^{y=\infty}_{y=0}+s^\alpha\int^\infty_0{G(z,y)e^{-ys^\alpha}}dy\right]-\frac{\xi zs^{\alpha-1}}{s^\alpha}+\frac{\xi z}{s}\\[.6cm]
&=& s^{\alpha-1}z\left[s^\alpha\displaystyle \int^\infty_0{G(z,y)e^{-ys^\alpha}}dy-z^i\right]\\[.6cm]
&=&z[s^\alpha \tilde{G}^\alpha(z,s)-s^{\alpha-1}z^i].
\end{array}
\label{trans3eq}
\end{equation}
\end{proof}
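Relation \eqref{resub} also suggests a direct numerical recipe: integrate the classical state probability against the density $f_\alpha(\cdot,t)$. The sketch below is again only an illustration; it reuses \texttt{f\_alpha} from the previous sketch, \texttt{P\_classical} stands for any implementation of $P_{i,n}(\cdot)$ (for instance \eqref{p1}--\eqref{pn} below), and the toy integrand and the truncation of the integration range are ad hoc choices tied to the reliability of the truncated series.
\begin{verbatim}
from scipy.integrate import quad

def fractional_prob(P_classical, t, alpha, y_max=10.0):
    # P^alpha_{i,n}(t) = int_0^infty P_{i,n}(y) f_alpha(y, t) dy, eq. (resub)
    value, _ = quad(lambda y: P_classical(y) * f_alpha(y, t, alpha),
                    0.0, y_max, limit=200)
    return value

# toy placeholder for a decaying classical probability (NOT the queue formula);
# the result should be close to E_alpha(-0.3 t^alpha), the Laplace transform
# in y of the inverse subordinator density evaluated at 0.3
P_toy = lambda y: np.exp(-0.3 * y)
print(fractional_prob(P_toy, t=2.0, alpha=0.5))
\end{verbatim}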
The previous theorem gains in interest if we realize that the state probabilities of the fractional model can be obtained from the state probabilities of the exponential process. For the exponential $M/M/1$ queue with catastrophes $\{X_t\}_{t\geq 0}$, if
$$I_n(z)=\sum^\infty_{m = 0}\frac{1}{m!\Gamma(m + n + 1)}\left(\frac{z}{2}\right)^{2m + n}, \text{where } n,z\in\mathbb{C}$$
denotes the modified Bessel function of the first kind, it is well-known (see \cite{kumar}) that
\begin{equation}
P_{i,0}(t)=\frac{1}{\mu}\sum ^\infty_{n=i}\frac{(n+1)I_{n+1}((2\sqrt{\lambda\mu})t)e^{-(\lambda+\mu+\xi)t}}{(\sqrt{\lambda/\mu})^{n+1}t}+\frac{\xi}{\mu}\int^t_0\sum^\infty_{n=1}\frac{nI_{n}((2\sqrt{\lambda\mu})u)e^{-(\lambda+\mu+\xi)u}}{(\sqrt{\lambda/\mu})^{n}u}du
\label{p1}
\end{equation}
and, for $n > 0$
\begin{equation}
\begin{array}{ll}
\displaystyle P_{i,n}(t)=&\displaystyle\frac{\xi(\sqrt{\lambda/\mu})^{n+1}}{\sqrt{\lambda\mu}}\int^t_0\sum^\infty_{k=0}\frac{(n+k+1)I_{n+k+1}((2\sqrt{\lambda\mu})u)e^{-(\lambda+\mu+\xi)u}}{(\sqrt{\lambda/\mu})^{k+1}u}du\\\\
&+\displaystyle\sum^\infty_{m=0}e^{-(\lambda+\mu+\xi)t}\left[\frac{I_{m+n+i+1}((2\sqrt{\lambda\mu})t)}{(\sqrt{\lambda/\mu})^{m-n+i+1}}-\frac{I_{m+n+i+2}((2\sqrt{\lambda\mu})t)}{(\sqrt{\lambda/\mu})^{m-n+i}}\right]\\\\
&+\displaystyle (\sqrt{\lambda/\mu})^{n-i}I_{n-i}(2(\sqrt{\lambda\mu})t)e^{-(\lambda+\mu+\xi)t}.
\label{pn}
\end{array}
\end{equation}
Substituting \eqref{p1} and \eqref{pn} into \eqref{resub}, and using \eqref{wright}, we get the state probabilities for the fractional model. In other words, we obtain the following result.
\begin{teo}
Let $\{X^{\alpha}_t\}_{t\geq 0}$ be the fractional queue model $M/M/1$ with catastrophes of parameter $\alpha\in(0,1]$. Let $\beta_{\lambda, \mu, \xi}(x):=x^{-1}e^{-(\lambda+\mu+\xi)x}$, and let $M_\alpha(x):=W_{-\alpha,1-\alpha}(-x)$, where $W_{-\alpha,1-\alpha}(-x)$ is the Wright distribution with parameter $\alpha$. Then,
\begin{equation}
\begin{array}{ccl}
P^\alpha_{i,0}(t)&=&\displaystyle\frac{1}{\mu}\int^\infty_0 \left\{ \frac{M_{\alpha}(yt^{-\alpha})}{t^{\alpha}} \beta_{\lambda, \mu, \xi}(y) \sum ^\infty_{n=i}\frac{(n+1)I_{n+1}((2\sqrt{\lambda\mu})y)}{(\sqrt{\lambda/\mu})^{n+1}} \right\} dy\\[.6cm]
& &\displaystyle+\frac{\xi}{\mu}\int^\infty_0 \left\{ \frac{M_{\alpha}(yt^{-\alpha})}{t^{\alpha}} \int^y_0 \left\{ \beta_{\lambda, \mu, \xi}(u) \sum^\infty_{n=1}\frac{nI_{n}((2\sqrt{\lambda\mu})u)}{(\sqrt{\lambda/\mu})^{n}} \right\}du \right\} dy
\end{array}
\end{equation}
and, for $n>0$
\begin{equation}
\begin{array}{ccl}
\displaystyle P^\alpha_{i,n}(t)&=&\displaystyle\frac{\xi(\sqrt{\lambda/\mu})^{n+1}}{\sqrt{\lambda\mu}} \displaystyle\int^\infty_0 \frac{M_{\alpha}(yt^{-\alpha})}{t^{\alpha}}\left\{ \int^y_0 \beta_{\lambda, \mu, \xi}(u)\sum^\infty_{k=0}\frac{(n+k+1)I_{n+k+1}((2\sqrt{\lambda\mu})u)}{(\sqrt{\lambda/\mu})^{k+1}}du\right\} dy\\[.7cm]
&& \displaystyle+\int^\infty_0 y \beta_{\lambda, \mu, \xi}(y) \frac{M_{\alpha}(yt^{-\alpha})}{t^{\alpha}} \sum^\infty_{m=0} \left[\frac{I_{m+n+i+1}((2\sqrt{\lambda\mu})y)}{(\sqrt{\lambda/\mu})^{m-n+i+1}}-\frac{I_{m+n+i+2}((2\sqrt{\lambda\mu})y)}{(\sqrt{\lambda/\mu})^{m-n+i}}\right] dy\\[.7cm]
&& \displaystyle +\int^\infty_0(\sqrt{\lambda/\mu})^{n-i}I_{n-i}(2(\sqrt{\lambda\mu})y)e^{-(\lambda+\mu+\xi)y} \frac{M_{\alpha}(yt^{-\alpha})}{t^{\alpha}}dy.
\end{array}
\end{equation}
\end{teo}
We show in Figure \ref{fig:behaviours} the behaviour of the state probabilities for different values of $\alpha$. In this illustration, we assume that the queue starts with $i=1$, and we consider two cases for $n$ at time $t$; namely, $n = 0$ in Figure \ref{subfig:example_p10} and $n = 1$ in Figure \ref{subfig:example_p11}. The arrival, departure and catastrophe rates are taken as $\lambda = 5$, $\mu = 3$ and $\xi = 1$, respectively. We can see that the more we decrease $\alpha$, the slower the convergence of the probability becomes.
\begin{figure}[H]
\centering
\begin{subfigure}[a]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/example_p10.png}
\caption{$P^\alpha_{1,0}(t)$ for different values of $\alpha$.}
\label{subfig:example_p10}
\end{subfigure}
\begin{subfigure}[a]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/example_p11.png}
\caption{$P^\alpha_{1,1}(t)$ for different values of $\alpha$.}
\label{subfig:example_p11}
\end{subfigure}
\caption{Behaviour of the state probability as a function of $t$. Here we consider a queue with catastrophes starting in $i = 1$ with $\lambda = 5$, $\mu = 3$ and $\xi = 1$.}
\label{fig:behaviours}
\end{figure}
\subsection{Expectation and variance}
In this section we deal with the moments of $X_t^{\alpha}$, and we do it by means of the probability generating function
$$G^\alpha(z,t)=\sum^\infty_{n=0}z^nP^\alpha_{i,n}(t),~t>0,~0<\alpha<1.$$
We will use $G^\alpha(z,t)$ in two ways: on the one hand, through the inverse Laplace transform, following the ideas of \cite{bayle, fila}; on the other hand, by manipulating $G^{\alpha}(z,t)$ through the Mittag-Leffler function as in \cite{kumar}. We point out that both approaches are useful. Let us start with some manipulations of the p.g.f. of $X^{\alpha}_t$.
\begin{teo}
Let $\{X^{\alpha}_t\}_{t\geq 0}$ be the fractional queue model $M/M/1$ with catastrophes of parameter $\alpha\in(0,1]$, and let $G^\alpha(z,t)$ be its p.g.f. Then,
\begin{equation}
G^\alpha(z,t)=z^iE_{\alpha,1}(A(z)t^\alpha)-\mu\left(\frac{1}{z}-1\right)[P^\alpha_{i,0}(t)]\ast [t^{\alpha-1} E_{\alpha,\alpha}(A(z)t^\alpha)]+t^\alpha\xi E_{\alpha,\alpha+1}(A(z)t^\alpha),
\label{funcaogeradora}
\end{equation}
where $A(z)=A_{\lambda,\mu,\xi}(z):=\lambda z-(\lambda+\mu+\xi)+\mu/z$,
\begin{equation}
E^\delta_{\beta,\gamma}(w)=\sum^\infty_{r=0}\frac{w^r\Gamma(\delta+r)}{r!\Gamma(\beta r+\gamma)\Gamma(\delta)},~~~~~~~~~~~~~ \gamma, \delta, \beta \in \mathbb{C}, \Re(\beta)>0
\end{equation}
is the three-parameter Mittag-Leffler function (for $\delta=1$ we simply write $E_{\beta,\gamma}$), and $\ast$ denotes the convolution with respect to $t$.
\label{Gteo}
\end{teo}
\begin{proof}
Remember that $\tilde{G}^\alpha(z,s)=\int_{0}^{\infty}e^{-st}G^{\alpha}(z,t)dt,$ so \eqref{trans1eq} reads
\begin{equation}
s^\alpha \tilde{G}^\alpha(z,s)-s^{\alpha-1}z^i=\left(z\lambda-(\mu+\lambda +\xi)+\mu/z \right)\tilde{G}^\alpha(z,s)-\mu\left(\frac{1}{z}-1\right) \tilde{P}^\alpha_{i,0}(s)+\frac{\xi}{s}.
\end{equation}
After some algebraic manipulations and by letting $A(z):=\lambda z-(\lambda+\mu+\xi)+\mu/z$, we have
\begin{equation}
(s^\alpha-A) \tilde{G}^\alpha(z,s)=s^{\alpha-1}z^i-\mu\left(\frac{1}{z}-1\right) \tilde{P}^\alpha_{i,0}(s)+\xi s^{-1}.
\end{equation}
Dividing by $s^\alpha-A$ and using the Laplace transform of the Mittag-Leffler function, which is given by
\begin{equation}
\label{eq:transformada_mittag}
\int^\infty_0e^{-st}t^{\gamma-1}E^\delta_{\beta,\gamma}(wt^\beta)dt=\frac{s^{\beta\delta-\gamma}}{(s^\beta-w)^\delta},
\end{equation}
(see \cite[Equation 2.3.24]{mathai} for more details), we conclude that
\begin{equation}
G^\alpha(z,t)=z^iE_{\alpha,1}(At^\alpha)-\mu\left(\frac{1}{z}-1\right)[P^\alpha_{i,0}(t)]\ast[t^{\alpha - 1} E_{\alpha,\alpha}(At^\alpha)]+t^\alpha\xi E_{\alpha,\alpha+1}(At^\alpha).
\end{equation}
\end{proof}
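For completeness we also record a small numerical sketch for the three-parameter Mittag-Leffler function itself, since it appears in all the closed-form expressions below. This is only an illustrative truncated-series implementation (the truncation level is an arbitrary choice, and for large $|w|$ a dedicated algorithm should be used instead).
\begin{verbatim}
import numpy as np
from scipy.special import gamma, rgamma

def ml3(w, beta, g, delta=1.0, R=150):
    # three-parameter (Prabhakar) Mittag-Leffler function E^delta_{beta,g}(w),
    # truncated power series; reliable only for moderate |w|
    r = np.arange(R)
    coeff = gamma(delta + r) * rgamma(beta * r + g) / (gamma(delta) * gamma(r + 1.0))
    return float(np.sum(coeff * np.power(w, r)))

# e.g. the first term of (funcaogeradora) at z = 1, where A(1) = -xi
alpha, xi, t = 0.8, 1.0, 2.0
print(ml3(-xi * t ** alpha, alpha, 1.0))    # E_{alpha,1}(-xi t^alpha)
\end{verbatim}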
\begin{teo}
Let $\{X^{\alpha}_t\}_{t\geq 0}$ be the fractional queue model $M/M/1$ with catastrophes of parameter $\alpha\in(0,1]$, and let $G^\alpha(z,t)$ be its p.g.f. The Laplace transform of $G^\alpha(z,t)$ is given by
\begin{equation}
\tilde{G}^\alpha(z,s)=\frac{z^{i+1}s^{\alpha-1}+\xi zs^{-1}-(1-z)\mu \tilde{P}^\alpha_{i,0}(s)}{-\lambda(z-a_1)(z-a_2)}
\label{glaplace}
\end{equation}
where $a_1$ and $a_2$ are zeros of $f(z)=-\lambda z^2+(s^\alpha+\lambda+\mu+\xi)z-\mu$.
\end{teo}
\begin{proof}
We start as in the proof of Theorem \ref{Gteo}, but now we keep the factor $z$ multiplying both sides of the equation:
\begin{equation}
z[s^\alpha\tilde{G}^\alpha(z,s)-z^is^{\alpha-1}]=(1-z)[\tilde{G}^\alpha(z,s)(\mu-\lambda z-\frac{\xi z}{(1-z)})-\mu\tilde{P}^\alpha_{i,0}(s)]+\frac{z\xi}{s}.
\end{equation}
Leaving $\tilde{G}^\alpha(z,s)$ in the left side, we have
\begin{equation}
\tilde{G}^\alpha(z,s)(-\lambda z^2+(s^\alpha+\lambda+\mu+\xi)z-\mu)=z^{i+1}s^{\alpha-1}-(1-z)\mu\tilde{P}^\alpha_{i,0}(s)+\frac{z\xi}{s},
\end{equation}
that is,
\begin{equation}
\tilde{G}^\alpha(z,s)=\frac{z^{i+1}s^{\alpha-1}-(1-z)\mu\tilde{P}^\alpha_{i,0}(s)+z\xi s^{-1}}{-\lambda z^2+(s^\alpha+\lambda+\mu+\xi)z-\mu}.
\label{denominador}
\end{equation}
Let $a_1$ and $a_2$ be the roots of $f(z):=-\lambda z^2+(s^\alpha+\lambda+\mu+\xi)z-\mu$. Then $a_1$ and $a_2$ satisfy:
\begin{equation}
\begin{cases}
\displaystyle a_1+a_2=\frac{(s^\alpha+\lambda+\mu+\xi)}\lambda,\\
\displaystyle a_1a_2=\frac{\mu}{\lambda},\\
s^\alpha+\xi=-\lambda(1-a_2)(1-a_1).\\
\end{cases}
\label{a1a2}
\end{equation}
Therefore,
\begin{equation}
\tilde{G}^\alpha(z,s)=\frac{z^{i+1}s^{\alpha-1}+\xi zs^{-1}-(1-z)\mu \tilde{P}^\alpha_{i,0}(s)}{-\lambda(z-a_1)(z-a_2)}.
\end{equation}
\end{proof}
\begin{teo}
Let $\{X^{\alpha}_t\}_{t\geq 0}$ be the fractional queue model $M/M/1$ with catastrophes of parameter $\alpha\in(0,1]$. The expectation of $X^\alpha_t$ is given by
\begin{equation}\label{expec}
\mathbb{E}[X^\alpha_t]= iE^1_{\alpha,1}(-\xi t^\alpha)+\mu [P^\alpha_{i,0}(t)]\ast[ t^{\alpha-1}E^1_{\alpha,\alpha}(-\xi t^\alpha)]+\frac{(\lambda-\mu)}{\xi}(1-E^1_{\alpha,1}(-\xi t^\alpha)),
\end{equation}
where
\begin{equation}
E^\delta_{\beta,\gamma}(w)=\sum^\infty_{r=0}\frac{w^r\Gamma(\delta+r)}{r!\Gamma(\beta r+\gamma)\Gamma(\delta)},~~~~~~~~~~~~~w, \gamma, \delta, \beta \in \mathbb{C}, \Re(\beta)>0
\end{equation}
is the generalized Mittag-Leffler function of $3$ parameters and $\ast$ is the convolution operator for $t$.
\label{teomean}
\end{teo}
\begin{proof}
In order to prove \eqref{expec} we differentiate \eqref{glaplace} with respect to $z$, which gives
\begin{equation}
\begin{array}{rl}
\displaystyle\frac{\partial \tilde{G}^\alpha(z,s)}{\partial z}=&\displaystyle\frac{[(i+1)z^is^{\alpha-1}+\xi s^{-1}+\mu\tilde{P}^\alpha_{i,0}(s)](-\lambda(z-a_1)(z-a_2))}{\lambda^2(z-a_1)^2(z-a_2)^2}\\[.4cm]
&
\displaystyle +\frac{(z^{i+1}s^{\alpha-1}+\xi zs^{-1}-(1-z)\mu\tilde{P}^\alpha_{i,0}(s))(\lambda[z-a_1+z-a_2] )}{\lambda^2(z-a_1)^2(z-a_2)^2}.
\end{array}
\label{derivada1g}
\end{equation}
Using $z=1$ and \eqref{a1a2}, we have
\begin{equation}
\begin{array}{rl}
\displaystyle\frac{\partial \tilde{G}^\alpha(1,s)}{\partial z}=&\displaystyle\frac{(i+1)s^{\alpha-1}+\xi s^{-1}+\mu\tilde{P}^\alpha_{i,0}(s)}{-\lambda(1-a_1)(1-a_2)}+\frac{(s^{\alpha-1}+\xi s^{-1})\lambda[1-a_1+1-a_2]}{\lambda^2(1-a_1)^2(1-a_2)^2}\\[.4cm]
=&\displaystyle\frac{(i+1)s^{\alpha-1}+\xi s^{-1}+\mu \tilde{P}^\alpha_{i,0}(s)}{\xi +s^\alpha}
+\displaystyle\frac{(s^{\alpha-1}+\xi s^{-1})\lambda[2-(a_1+a_2)]}{(\xi+s^\alpha)^2}\\[.4cm]
=&\displaystyle\frac{(i+1)s^{\alpha-1}+\xi s^{-1}+\mu \tilde{P}^\alpha_{i,0}(s)}{\xi+s^\alpha}
+\frac{(s^{\alpha-1}+\xi s^{-1})(\lambda-\mu-\xi-s^\alpha )}{(\xi+s^\alpha)^2}\\[.4cm]
=&\displaystyle\frac{(i)s^{\alpha-1}+\mu \tilde{P}^\alpha_{i,0}(s)}{\xi+s^\alpha}
+\frac{(s^{\alpha-1}+\xi s^{-1})(\lambda-\mu-\xi-s^\alpha +\xi +s^\alpha)}{(\xi+s^\alpha)^2}\\[.4cm]
=&\displaystyle\frac{is^{\alpha-1}+\mu \tilde{P}^\alpha_{i,0}(s)}{\xi+s^\alpha}
+\frac{s^{-1}(s^{\alpha}+\xi)(\lambda-\mu)}{(\xi+s^\alpha)^2}\\[.4cm]
=&\displaystyle\frac{is^{\alpha-1}+\mu \tilde{P}^\alpha_{i,0}(s)+s^{-1}(\lambda-\mu)}{\xi+s^\alpha}.
\end{array}
\end{equation}
Finally, we use the Laplace transform of the Mittag-Leffler function to obtain
\begin{equation}
\mathbb{E}[X^\alpha_t]= iE^1_{\alpha,1}(-\xi t^\alpha)+\mu P^\alpha_{i,0}(t)\ast t^{\alpha-1}E^1_{\alpha,\alpha}(-\xi t^\alpha)+(\lambda-\mu)t^\alpha E^1_{\alpha,\alpha+1}(-\xi t^\alpha)
\label{premedia}
\end{equation}
and we conclude the proof of the theorem by \cite[page 82, Theorem 2.2.1]{mathai}, where
\begin{equation}
\label{eq:221mathai}
E^1_{\alpha,\beta}(z) = z E^1_{\alpha,\alpha + \beta}(z) + \frac{1}{\Gamma (\beta)}
\end{equation}
\end{proof}
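As a quick numerical sanity check of the identity \eqref{eq:221mathai} (and of the truncated-series sketch given after Theorem \ref{Gteo}), one may compare both sides for a few arguments; this only checks the implementation, it is not an additional proof.
\begin{verbatim}
from scipy.special import rgamma

alpha, beta = 0.7, 1.0
for z in (-0.5, -2.0, -5.0):
    lhs = ml3(z, alpha, beta)                             # E^1_{alpha,beta}(z)
    rhs = z * ml3(z, alpha, alpha + beta) + rgamma(beta)  # z E^1_{alpha,alpha+beta}(z) + 1/Gamma(beta)
    print(z, lhs, rhs)   # the last two columns should agree up to truncation error
\end{verbatim}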
\begin{no}
By considering $\xi=0$ we recover the result obtained in \cite[equation 2.27]{fila}. For this, it is enough to recall that $E^\delta_{\alpha, \beta}(0)=\Gamma(\beta)^{-1}$ and to use it in \eqref{premedia}. More precisely, we recover
\begin{equation}
\mathbb{E}[X^\alpha_t]= i+\mu J^\alpha P^\alpha_{i,0}(t)+\frac{(\lambda-\mu)t^\alpha}{\Gamma (\alpha +1)}
\end{equation}
where $J^\alpha f(t)$ is the {Riemann-Liouville} fractional integral \cite{kilbas}
\begin{equation}
J^\alpha f(t)=\frac{1}{\Gamma(\alpha)}\int^t_0(t-y)^{\alpha-1}f(y)dy.
\end{equation}
\end{no}
\begin{no}
By letting $\alpha = 1$, we get the classical result for the $M/M/1$ queue with catastrophes obtained by \cite[Equation 2.25]{kumar}. Note that $E^1_{1,1}(w) = e^w$.
\end{no}
\begin{no}
We emphasize that an alternative for proving Theorem \ref{teomean} is a suitable application of Theorem \ref{Gteo}, and the fact that
$$\frac{\partial G^\alpha(1,t)}{\partial z}=\mathbb{E}[X^\alpha_t].$$
For this, we find the differentiation of \eqref{funcaogeradora}, using
$$\frac{dE^\delta_{\alpha,\beta}(t)}{d t}=\frac{\Gamma(\delta + 1)}{\Gamma(\delta)}E^{\delta+1}_{\alpha,\beta + \alpha}(t),$$
see \cite[Equation 2.2.1]{shukla}, so
\begin{equation}
\begin{array}{rl}
\displaystyle \frac{\partial G^\alpha(z,t)}{\partial z}=&\displaystyle i z^{i-1}E^1_{\alpha,1}(At^\alpha)+z^it^\alpha\left(\lambda - \frac{\mu}{z^2}\right)E^2_{\alpha, \alpha+1}(At^\alpha)+\frac{\mu}{z^2}[P^\alpha_{i,0}(t)]\ast [t^{\alpha-1}E^1_{\alpha, \alpha}(At^\alpha) ] \\- &
\displaystyle\mu\left(\frac{1}{z}-1\right)\left(\lambda - \frac{\mu}{z^2}\right)[P^\alpha_{i,0}(t)]\ast [t^{2\alpha-1}E^2_{\alpha, 2\alpha}(At^\alpha) ] + t^{2\alpha}\xi\left(\lambda - \frac{\mu}{z^2}\right)E^2_{\alpha, 2\alpha + 1}(At^\alpha)
\end{array}
\end{equation}
Taking $z=1$, we have
\begin{equation}
\begin{array}{rl}
\displaystyle\frac{\partial G^\alpha(1,t)}{\partial z}=&iE^1_{\alpha,1}(-\xi t^\alpha)+t^\alpha\left(\lambda - \mu\right)E^2_{\alpha, \alpha+1}(-\xi t^\alpha)\\+&
\displaystyle\mu[P^\alpha_{i,0}(t)]\ast [t^{\alpha-1}E^1_{\alpha, \alpha}(-\xi t^\alpha) ]+0+t^{2\alpha}\xi(\lambda- \mu)E^2_{\alpha, 2\alpha + 1}(-\xi t^\alpha),
\end{array}
\end{equation}
and since
\begin{equation}
\label{eq:238mathai}
E^\delta_{\alpha,\beta-\alpha}(z)-E^{\delta-1}_{\alpha,\beta-\alpha}(z)=zE^\delta_{\alpha,\beta}(z),
\end{equation}
see \cite[Equation 2.3.8]{mathai}, we get the desired result.
\end{no}
\begin{teo}
Let $\{X^{\alpha}_t\}_{t\geq 0}$ be the fractional queue model $M/M/1$ with catastrophes of parameter $\alpha\in(0,1]$. The variance of $X^\alpha_t$ is given by
\begin{equation}
\begin{array}{rcl}
Var(X^\alpha_t) &=&i^2E^1_{\alpha,1}(-\xi t^\alpha)+2it^\alpha\left(\lambda-\mu\right)E^2_{\alpha, \alpha+1}(-\xi t^\alpha)-\mu[P^\alpha_{i,0}(t)\ast t^{\alpha-1}E^1_{\alpha,\alpha}(-\xi t^\alpha)]\\[.4cm]
&& \displaystyle +2\mu(\lambda-\mu)[P^\alpha_{i,0}(t)\ast t^{2\alpha-1}E^2_{\alpha,2\alpha}(-\xi t^\alpha)]\\[.4cm]
&&+(\mu + \lambda)t^\alpha E^1_{\alpha, \alpha+1}(-\xi t^\alpha)\displaystyle+2t^{2\alpha}(\lambda-\mu)^2E^2_{\alpha, 2\alpha+1}(-\xi t^\alpha)\\[.4cm]
&&\displaystyle-[iE^1_{\alpha,1}(-\xi t^\alpha)+\mu P^\alpha_{i,0}(t)\ast t^{\alpha-1}E^1_{\alpha,\alpha}(-\xi t^\alpha)+\frac{(\lambda-\mu)}{\xi}(1-E^1_{\alpha,1}(-\xi t^\alpha))]^2.
\end{array}
\end{equation}
\end{teo}
\begin{proof}
Since $$\frac{\partial^2 G^\alpha(1,t)}{\partial z^2}=\mathbb{E}[(X^\alpha_t)^2]-\mathbb{E}[X^\alpha_t],$$
we can find $E[(X^\alpha_t)^2]$ by noting that
\begin{equation}
\begin{array}{rcl}
\displaystyle\frac{\partial^2 G^\alpha(z,t)}{ \partial z^2} &=&\displaystyle
i(i-1)z^{i-2}E^1_{\alpha,1}(At^\alpha)+iz^{i-1}t^\alpha
\left(\lambda -\frac{\mu}{z^2}\right)E^2_{\alpha, \alpha+1}(At^\alpha)\\[.4cm]
&&+t^\alpha(i\lambda z^{i-1}-(i-2)\mu z^{i-3})E^2_{\alpha, \alpha+1}(At^\alpha)
\displaystyle+2t^{2\alpha}z^i\left(\lambda-\frac{\mu}{z^2}\right)^2E^3_{\alpha,2\alpha+1}(At^\alpha)\\[.4cm]
&&- \displaystyle\frac{2\mu}{z^3} [P^\alpha_{i,0}(t)\ast t^{\alpha-1} E^1_{\alpha,\alpha}(At^\alpha)]+\frac{\mu}{z^2}\left(\lambda-\frac{\mu}{z^2}\right)[P^\alpha_{i,0}(t)\ast t^{2\alpha-1} E^2_{\alpha,2\alpha}(At^\alpha)]\\[.4cm]
&&-\displaystyle \mu\left[-\frac{\lambda}{z^2}+\frac{3\mu}{z^4}-\frac{2\mu}{z^3}\right][P^\alpha_{i,0}(t)\ast t^{2\alpha-1} E^2_{\alpha,2\alpha}(At^\alpha)]\\[.4cm]
&&\displaystyle-2\mu\left(\frac{1}{z}-1\right)\left(\lambda-\frac{\mu}{z^2}\right)^2[P^\alpha_{i,0}(t)\ast t^{3\alpha-1} E^3_{\alpha,3\alpha}(At^\alpha)]\\[.4cm]
&& \displaystyle+2t^{2\alpha}\xi\frac{\mu}{z^3}E^2_{\alpha,2\alpha+1}(At^\alpha)\displaystyle+2\xi t^{3\alpha}\left(\lambda-\frac{\mu}{z^2}\right)^2E^3_{\alpha,3\alpha+1}(At^\alpha).
\end{array}
\end{equation}
Thus, by taking $z=1$ we find
\begin{equation}
\begin{array}{rcl}
\displaystyle \frac{\partial^2 G^\alpha(1,t)}{\partial z^2}&=&i(i-1)E^1_{\alpha,1}(-\xi t^\alpha)+it^\alpha(\lambda-\mu)E^2_{\alpha, \alpha+1}(-\xi t^\alpha)+t^\alpha(i\lambda -(i-2)\mu )E^2_{\alpha, \alpha+1}(-\xi t^\alpha)\\[.4cm]
&&\displaystyle+2t^{2\alpha}\left(\lambda-\mu\right)^2E^3_{\alpha,2\alpha+1}(-\xi t^\alpha)-
2\mu [P^\alpha_{i,0}(t)\ast t^{\alpha-1} E^1_{\alpha,\alpha}(-\xi t^\alpha)]\\[.4cm]
&&+\mu\left(\lambda-\mu\right)[P^\alpha_{i,0}(t)\ast t^{2\alpha-1} E^2_{\alpha,2\alpha}(-\xi t^\alpha)] \displaystyle+\mu(\lambda-\mu)[P^\alpha_{i,0}(t)\ast t^{2\alpha-1} E^2_{\alpha,2\alpha}(-\xi t^\alpha)]\\[.4cm]
&&+2t^{2\alpha}\xi\mu E^2_{\alpha,2\alpha+1}(-\xi t^\alpha)+2\xi t^{3\alpha}\left(\lambda-\mu\right)^2E^3_{\alpha,3\alpha+1}(-\xi t^\alpha).
\end{array}
\end{equation}
By applying Equation \eqref{eq:238mathai} and after some algebraic manipulations we get
\begin{equation}\label{expec2}
\begin{array}{rcl}
\displaystyle\mathbb{E}[(X^\alpha_t)^2]-\mathbb{E}[X^\alpha_t]&=&i(i-1)E^1_{\alpha,1}(-\xi t^\alpha)\\[.4cm]
&&+2it^\alpha\left(\lambda-\mu\right)E^2_{\alpha, \alpha+1}(-\xi t^\alpha)-2\mu[P^\alpha_{i,0}(t)\ast t^{\alpha-1}E^1_{\alpha,\alpha}(-\xi t^\alpha)]\\[.4cm]
&& \displaystyle +2\mu(\lambda-\mu)[P^\alpha_{i,0}(t)\ast t^{2\alpha-1}E^2_{\alpha,2\alpha}(-\xi t^\alpha)]+2\mu t^\alpha E^1_{\alpha, \alpha+1}(-\xi t^\alpha)\\[.4cm]
&&+2t^{2\alpha}(\lambda-\mu)^2E^2_{\alpha, 2\alpha+1}(-\xi t^\alpha).
\end{array}
\end{equation}
Finally, we obtain the second moment for $X_{t}^{\alpha}$ by \eqref{premedia} and \eqref{expec2}; namely
\begin{equation}
\begin{array}{rcl}
E[(X^\alpha_t)^2]&=&i^2E^1_{\alpha,1}(-\xi t^\alpha)+2it^\alpha\left(\lambda-\mu\right)E^2_{\alpha, \alpha+1}(-\xi t^\alpha)-\mu[P^\alpha_{i,0}(t)\ast t^{\alpha-1}E^1_{\alpha,\alpha}(-\xi t^\alpha)]\\[.4cm]
&& \displaystyle +2\mu(\lambda-\mu)[P^\alpha_{i,0}(t)\ast t^{2\alpha-1}E^2_{\alpha,2\alpha}(-\xi t^\alpha)]+\\[.4cm]
&&+ (\lambda + \mu) t^\alpha E^1_{\alpha, \alpha+1}(-\xi t^\alpha)+2t^{2\alpha}(\lambda-\mu)^2E^2_{\alpha, 2\alpha+1}(-\xi t^\alpha).
\end{array}
\end{equation}
Since $Var(X^\alpha_t)=\mathbb{E}[(X^\alpha_t)^2]-\left(\mathbb{E}[X^\alpha_t]\right)^2$, the proof is complete.
\end{proof}
Assuming the parameters of Figure \ref{fig:behaviours}, we show in Figure \ref{fig:esp_var_behaviours} the behaviour of the expected value and the variance over time.
\begin{figure}[H]
\centering
\begin{subfigure}[a]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/example_esp.png}
\caption{$\mathbb{E}[X^\alpha_t]$ for different values of $\alpha$.}
\label{subfig:example_esp}
\end{subfigure}
\begin{subfigure}[a]{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/example_var.png}
\caption{$Var(X^\alpha_t)$ for different values of $\alpha$.}
\label{subfig:example_var}
\end{subfigure}
\caption{Behaviour of the mean and the variance over time for a queue with catastrophes starting in $i = 1$ with $\lambda = 5$, $\mu = 3$ and $\xi = 1$.}\label{fig:esp_var_behaviours}
\end{figure}
\section{Estimation of parameters}
In this section we discuss the estimation of the parameters of the model. In order to do it we start with some remarks about the waiting times for the fractional $M/M/1$ queue with catastrophes. Let $S_k$ be the waiting time between consecutive events in the classical
queue, so that $P(S_k>t) = \exp[-(\lambda+\mu+\xi)t].$ On the other hand, if $S^\alpha_k$ denotes the waiting time between consecutive events in the fractional
queue then, thanks to the subordination relationship \eqref{resub}, it holds that
\begin{equation}
P(S^\alpha_k>t)=\int^\infty_0\exp[-(\lambda+\mu+\xi)s]f_{\alpha}(s,t)ds.
\end{equation}
By applying the Laplace transform, we get
$$\int^\infty_0e^{-zt}P(S^\alpha_k>t)dt =\int^\infty_0e^{-(\lambda+\mu+\xi)s}z^{\alpha - 1}e^{-sz^\alpha}ds = \frac{z^{\alpha - 1}}{\lambda + \mu + \xi + z^\alpha}.$$
Applying Equation \eqref{eq:transformada_mittag} and the {Mittag-Leffler} property described in Equation \eqref{eq:221mathai}, we find
$$P(S^\alpha_k>t) = E^1_{\alpha, 1}(-(\lambda + \mu + \xi)t^\alpha) = 1 - (\lambda + \mu + \xi)t^\alpha E^1_{\alpha,\alpha + 1}(-(\lambda + \mu + \xi)t^\alpha).$$
Finally, since $f_{S^\alpha}(t) = -dP(S^\alpha_k>t)/dt$ and $d[t^{\gamma - 1}E^1_{\beta,\gamma}(\omega t^\beta)]/dt = t^{\gamma - 2}E^1_{\beta,\gamma - 1}(\omega t^\beta)$ (see \cite[Equation 2.2.3]{mathai}), we get
\begin{equation}
f_{S^\alpha}(t) = (\lambda+\mu+\xi)t^{\alpha-1}E_{\alpha,\alpha}(-(\lambda+\mu+\xi)t^\alpha).
\label{distmitt}
\end{equation}
From \eqref{distmitt} we obtain that $S^\alpha_k$ follows a Mittag-Leffler distribution, therefore $S^\alpha_k$ can be represented as
\begin{equation}
S^\alpha_k=\mathcal{E}^{\frac{1}{\alpha}}T_\alpha,
\label{set}
\end{equation}
where $\mathcal{E}$ follows an exponential distribution of parameter $\theta=\lambda+\mu+\xi$ and $T_\alpha$ follows a one-sided $\alpha^+$-stable law. Taking logarithms in \eqref{set}, $\ln(S^\alpha_k)=\frac{1}{\alpha}\ln(\mathcal{E})+\ln(T_\alpha)$, we can write from \cite{cahoy2010}
\begin{equation}
\mu_{\ln{S^\alpha_k}}= -\frac{\ln(\theta)}{\alpha}-\gamma
\label{esplns}
\end{equation}
and
\begin{equation}
\sigma^2_{\ln{S^\alpha_k}}=\pi^2\left(\frac{1}{3\alpha^2}-\frac{1}{6} \right),
\label{varlns}
\end{equation}
where $\gamma$ is the Euler-Mascheroni constant, $\gamma = 0.57721\dots$ From the previous results, we can deduce the moment estimators of $\alpha$ and $\theta$; namely,
\begin{equation}
\hat{\alpha}=\frac{\pi}{\sqrt{3\left(\hat{\sigma}^2_{\ln{S^\alpha_k}}+\frac{\pi^2}{6}\right)}},
\label{mmalpha}
\end{equation}
and
\begin{equation}
\hat{\theta}=\exp(-\hat{\alpha}(\hat{\mu}_{\ln{S^\alpha_k}}+\gamma)),
\label{mmtheta}
\end{equation}
\smallskip
where $\displaystyle{\hat{\mu}_{{\ln{S^{\alpha}_k}}}=\sum^n_{j=1}\frac{\ln(S^{\alpha}_j)}{n}}$\text{ and } $\displaystyle{\hat{\sigma}^{2}}_{\ln{S^\alpha_k}}=\sum^n_{j=1}\frac{(\ln(S^{\alpha}_j)-\hat{\mu}_{\ln(S^{\alpha}_k)})^2}{n}$. The asymptotic normality and confidence interval for $\alpha$ are, respectively, given by
\begin{equation}
\sqrt{n}(\hat{\alpha}-\alpha)\sim N\left(~0,~\frac{\alpha^2(32-20\alpha^2-\alpha^4)}{40}\right)
\end{equation}
and
\begin{equation}
\hat{\alpha}\pm z_{\epsilon/2}\sqrt{\frac{\alpha^2(32-20\alpha^2-\alpha^4)}{40n}}.
\end{equation}
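The estimators \eqref{mmalpha}--\eqref{mmtheta} and the confidence interval for $\alpha$ are straightforward to implement. The following sketch is only our own illustration; it uses the $1/n$ convention for the sample variance, as above, and plugs $\hat{\alpha}$ into the asymptotic variance.
\begin{verbatim}
import numpy as np

EULER_GAMMA = 0.5772156649015329

def estimate_alpha_theta(waiting_times, conf=1.96):
    # moment estimators (mmalpha)-(mmtheta) based on the log waiting times
    logs = np.log(np.asarray(waiting_times, dtype=float))
    n = logs.size
    mu_hat = logs.mean()
    sigma2_hat = logs.var()                 # 1/n convention, as in the text
    alpha_hat = np.pi / np.sqrt(3.0 * (sigma2_hat + np.pi ** 2 / 6.0))
    theta_hat = np.exp(-alpha_hat * (mu_hat + EULER_GAMMA))
    # 95% confidence interval for alpha with the plug-in estimate alpha_hat
    half = conf * np.sqrt(alpha_hat ** 2 * (32 - 20 * alpha_hat ** 2 - alpha_hat ** 4)
                          / (40.0 * n))
    return alpha_hat, theta_hat, (alpha_hat - half, alpha_hat + half)
\end{verbatim}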
We are interested in estimating the parameters $\lambda,\mu,$ and $\xi$. By using the proportion of occurrences and the asymptotic properties of each event, as in \cite{fila}, we get point and interval estimates. In other words, let $n_a$, $n_s$ and $n_c$ be the number of customers that arrive at the system, the number of customers that leave the system and the number of catastrophes, respectively. Let $n=n_a+n_s+n_c$. We can write the arrival proportion and its estimator as $\lambda/\theta=p_1$ and $\hat{p_1}=n_a/n$. We do the same for the customers leaving the system, $\mu/\theta=p_2$ and $\hat{p_2}=n_s/n$, and for the catastrophes, $\xi/\theta=p_3$ and $\hat{p_3}=n_c/n$. Now we shall show the asymptotic normality for the estimators of $\lambda,~\mu$, and $\xi$, respectively.
\begin{teo}
Let $\hat{p_1}=n_a/n$ and $p_1=\lambda/\theta$, then
\begin{equation}
\sqrt{n}(\hat{\lambda}-\lambda)\sim N\left(0,\theta^2p_1(1-p_1)+p^2_1\sigma^2_\theta\right)
\end{equation}
as $n\to\infty$, where
\begin{equation}
\sigma^2_\theta=\frac{\theta^2[20\pi^4(2-\alpha^2)-3\pi^2(\alpha^4+20\alpha^2-32)(\ln(\theta))^2-720\alpha^3(\ln(\theta))\zeta(3)]}{120\pi^2},
\label{vartheta}
\end{equation}
where $\zeta(3)$ is the Riemann zeta function evaluated at $3$.
\end{teo}
\begin{proof}
The proof follows the same steps as those of Theorem 3.1 in \cite{fila}, with the difference that here we adapt the number of parameters of the model. In other words, we assume a Multinomial$(1,p_1,p_2,p_3)$ model and we use the asymptotic properties of the estimators as follows
\begin{equation}
\sqrt{n}\left(\begin{array}{cc}
\hat{p_1}-p_1 \\
\hat{\theta}-\theta
\end{array}\right)
\xrightarrow{d} N(0,\Sigma),
\end{equation}
for $n\xrightarrow{}\infty$, where the covariance matrix $\Sigma$ is given by
\begin{equation}
\Sigma=\left(\begin{array}{cc}
p_1(1-p_1) & 0 \\
0 & \sigma^2_\theta
\end{array}\right),
\end{equation}
and $\sigma^2_\theta$ is obtained in \cite{cahoy2010}.
The bivariate central limit theorem, together with the delta method, implies that
\begin{equation}
\sqrt{n}(h(\hat{\omega}_n)-h(\omega))\sim N(~0,\Dot{h}(\omega)^T\Sigma\Dot{h}(\omega)),
\end{equation}
where $\hat{\omega_n}=(\hat{p_1},\hat{\theta})^T$, $h$ is a mapping from $\mathbb{R}^2$ to $\mathbb{R}$, $\Dot{h}$ is continuous in a neighborhood of $\omega\in\mathbb{R}^2$, $ h(p_1,\theta)=p_1\theta$ and $\Dot{h}(p_1,\theta)=(\theta,p_1)$. Therefore the proof is complete.
\end{proof}
\begin{teo}
Let $\hat{p_2}=n_s/n$ and $p_2=\mu/\theta$, then:
\begin{equation}
\sqrt{n}(\hat{\mu}-\mu)\sim N(0,\theta^2p_2(1-p_2)+p^2_2\sigma^2_\theta)
\end{equation}
as $n\to\infty$.
\end{teo}
\begin{proof}
The proof is the same as that of the previous theorem, now with $h(p_2,\theta)=p_2\theta$ and $\Dot{h}(p_2,\theta)=(\theta,p_2)$.
\end{proof}
\begin{teo}
Let $\hat{p_3}=\frac{n_c}{n}$ and $p_3={\xi}/{\theta}$, then
\begin{equation}
\sqrt{n}(\hat{\xi}-\xi)\sim N(0,\theta^2p_3(1-p_3)+p^2_3\sigma^2_\theta)
\end{equation}
as $n\to\infty$.
\end{teo}
\begin{proof}
The proof is again the same as that of the previous theorem, now with $h(p_3, \theta) = p_3\theta$ and $\Dot{h}(p_3,\theta)=(\theta,p_3)$.
\end{proof}
Having proved the asymptotic properties above, we can write the $(1-\epsilon)100\%$ confidence intervals for $\lambda$, $\mu$ and $\xi$, respectively, as
\begin{equation}
IC[\hat{\lambda}]= \hat{\lambda}\pm z_{\epsilon/2}\hat{\sigma_\lambda},~~IC[\hat{\mu}]= \hat{\mu}\pm z_{\epsilon/2}\hat{\sigma_\mu}~~\text{ and }~~IC[\hat{\xi}]= \hat{\xi}\pm z_{\epsilon/2}\hat{\sigma_\xi},
\end{equation}
where
\begin{equation}
\hat{\sigma_\lambda}= \sqrt{\frac{\hat{\theta}^2\hat{p_1}(1-\hat{p_1})+\hat{p_1}^2\hat{\sigma^2_\theta}}{n}},~~ \hat{\sigma_\mu}= \sqrt{\frac{\hat{\theta}^2\hat{p_2}(1-\hat{p_2})+\hat{p_2}^2\hat{\sigma^2_\theta}}{n}}~~\text{ and }~~ \hat{\sigma_\xi}= \sqrt{\frac{\hat{\theta}^2\hat{p_3}(1-\hat{p_3})+\hat{p_3}^2\hat{\sigma^2_\theta}}{n}}.
\end{equation}
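For completeness, the point and interval estimates of $\lambda$, $\mu$ and $\xi$ can be assembled from the event counts, $\hat{\theta}$ and a plug-in version of \eqref{vartheta}. The sketch below simply transcribes the formulas above; the function names and the plug-in choices are ours.
\begin{verbatim}
import numpy as np
from scipy.special import zeta

def sigma2_theta(alpha, theta):
    # plug-in version of eq. (vartheta)
    lt = np.log(theta)
    num = (20 * np.pi ** 4 * (2 - alpha ** 2)
           - 3 * np.pi ** 2 * (alpha ** 4 + 20 * alpha ** 2 - 32) * lt ** 2
           - 720 * alpha ** 3 * lt * zeta(3))
    return theta ** 2 * num / (120 * np.pi ** 2)

def estimate_rates(n_a, n_s, n_c, alpha_hat, theta_hat, conf=1.96):
    # point estimates and confidence intervals for lambda, mu, xi
    n = n_a + n_s + n_c
    s2t = sigma2_theta(alpha_hat, theta_hat)
    out = {}
    for name, count in (("lambda", n_a), ("mu", n_s), ("xi", n_c)):
        p_hat = count / n
        rate_hat = p_hat * theta_hat
        var_hat = theta_hat ** 2 * p_hat * (1 - p_hat) + p_hat ** 2 * s2t
        half = conf * np.sqrt(var_hat / n)
        out[name] = (rate_hat, rate_hat - half, rate_hat + half)
    return out
\end{verbatim}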
Since the waiting times follow a Mittag-Leffler distribution and \eqref{set} holds, we can simulate values for the validation of the proposed estimators and intervals. Taking any initial state $k\geq0$, we generate the values with the following algorithm: \\[.5cm]
\begin{algorithm}[H]
\begin{enumerate}
\item[1)] Start with $X^\alpha_0=k$ and $t = 0$.
\item[2)] If $X^\alpha_t\neq0$:
\begin{enumerate}
\item[i)] Generate $\mathcal{E}\sim Exp(\lambda + \mu + \xi)$.
\item[ii)] Generate $T^\alpha$ from a one-sided $\alpha^+$-stable distribution.
\item[iii)] Calculate $S^\alpha_k = \mathcal{E}^{\frac{1}{ \alpha}}T^\alpha$ and $t = t + S^\alpha_k$.
\item[iv)] Generate U uniformly distributed in (0,1).
\item[v)] If $0\leq U < \frac{\lambda}{\lambda+\mu+\xi}$, take $X^\alpha_t=k+1$.
\item[vi)] If $\frac{\lambda}{\lambda+\mu+\xi}\leq U < \frac{\lambda + \mu}{\lambda+\mu+\xi}$, we take $X^\alpha_t = k - 1$.
\item[vii)] Else we take $X^\alpha_t=0$.
\end{enumerate}
\item[3)] if $X^\alpha_t=0$:
\begin{enumerate}
\item[i)] Generate $\mathcal{E}_0\sim Exp(\lambda)$.
\item[ii)] Generate $T^\alpha$ from a one-sided $\alpha^+$-stable distribution.
\item[iii)] Calculate $S^\alpha_k = \mathcal{E}_0^{\frac{1}{ \alpha}}T^\alpha$ and calculate $t = t + S^\alpha_k$.
\item[iv)] Take $X^\alpha_t = 1$;
\end{enumerate}
\item[4)] Repeat until the desired number of iterations is reached.
\end{enumerate}
\caption{Fractional queue with catastrophe simulation.}
\end{algorithm}
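A minimal Python implementation of the algorithm is sketched below; it is only an illustration and all names are our own. Instead of sampling the one-sided $\alpha^+$-stable variable $T^\alpha$ and multiplying by $\mathcal{E}^{1/\alpha}$, it draws the waiting time directly from the Mittag-Leffler law \eqref{distmitt} through the usual inversion formula for Mittag-Leffler random variables, which produces waiting times with the same distribution. The flag \texttt{busy} records whether a waiting time was generated from a nonzero state, which is the information needed for the estimators \eqref{mmalpha}--\eqref{mmtheta} (see also the remark below about the empty state).
\begin{verbatim}
import numpy as np

def ml_waiting_time(theta, alpha, rng):
    # waiting time with survival function E_alpha(-theta t^alpha), eq. (distmitt)
    if alpha == 1.0:
        return rng.exponential(1.0 / theta)
    u, v = rng.random(), rng.random()
    # equal to (sin(a*pi)/tan(a*pi*v) - cos(a*pi))^(1/a), and always positive
    w = (np.sin(alpha * np.pi * (1.0 - v)) / np.sin(alpha * np.pi * v)) ** (1.0 / alpha)
    return -np.log(u) * w / theta ** (1.0 / alpha)

def simulate_queue(lam, mu, xi, alpha, k0=1, n_events=10_000, seed=1):
    rng = np.random.default_rng(seed)
    theta = lam + mu + xi
    x = k0
    waits, busy, states = [], [], [k0]
    for _ in range(n_events):
        if x > 0:
            s = ml_waiting_time(theta, alpha, rng)
            busy.append(True)
            u = rng.random()
            if u < lam / theta:
                x += 1                     # arrival
            elif u < (lam + mu) / theta:
                x -= 1                     # departure
            else:
                x = 0                      # catastrophe empties the system
        else:
            s = ml_waiting_time(lam, alpha, rng)
            busy.append(False)
            x = 1                          # from the empty state only arrivals occur
        waits.append(s)
        states.append(x)
    return np.array(waits), np.array(busy), np.array(states)

waits, busy, states = simulate_queue(5.0, 3.0, 1.0, alpha=0.9)
\end{verbatim}
Only the waiting times flagged with \texttt{busy == True} should be fed into the estimators \eqref{mmalpha}--\eqref{mmtheta}; see the remark below about the empty state.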
\begin{no}
We are assuming $X^\alpha_t = k$ to simplify the notation, but $X^\alpha_t$ changes its value at every iteration.
\end{no}
\begin{no}
We do not use the case $k=0$ in the estimation, because the process has a different waiting-time distribution in that state. The state $0$ is described by a fractional Poisson process with parameters $\lambda$ and $\alpha$, and therefore a different technique is needed to estimate its parameters (see \cite{cahoy2010} for more details). However, the scope of the paper is the estimation of the catastrophe rate, and for this we need the queue to keep growing so that many catastrophes are observed.
\end{no}
Assuming again the fractional queue starting in $i = 1$ with $\lambda = 5$, $\mu = 3$ and $\xi = 1$, we simulated its behaviour over time for different values of $\alpha$; see Figure \ref{fig:esp_var_behaviours2}.
\begin{figure}[H]
\centering
\begin{subfigure}[a]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/path_alpha10.png}
\caption{Simulation for $\alpha = 1$.}
\label{subfig:path_alpha10}
\end{subfigure}
\begin{subfigure}[a]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/path_alpha09.png}
\caption{Simulation for $\alpha = 0.9$.}
\label{subfig:path_alpha09}
\end{subfigure}
\begin{subfigure}[a]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/path_alpha08.png}
\caption{Simulation for $\alpha = 0.8$.}
\label{subfig:path_alpha08}
\end{subfigure}
\begin{subfigure}[a]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/path_alpha07.png}
\caption{Simulation for $\alpha = 0.7$.}
\label{subfig:path_alpha07}
\end{subfigure}
\caption{Simulation of the fractional queue with catastrophe starting in $i = 1$ with $\lambda = 5$, $\mu = 3$ and $\xi = 1$.}
\label{fig:esp_var_behaviours2}
\end{figure}
\begin{table}[!htbp] \centering
\caption{Estimators and confidence interval tests}
\begin{tabular}{@{\extracolsep{5pt}} lccccccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{3}{c}{$n = 10^2$}& \multicolumn{3}{c}{$n = 10^3$} &\multicolumn{3}{c}{$n = 10^4$} \\
& \% Bias & CV & CP & \% Bias & CV & CP & \% Bias & CV & CP \\
\hline
$\alpha = 0.9$ & $0.656$ & $5.934$ & $0.948$ & $0.022$ & $1.945$ & $0.949$ & $0.021$ & $0.615$ & $0.946$ \\
$\lambda = 4$ & $2.877$ & $17.499$ & $0.943$ & $0.374$ & $5.718$ & $0.944$ & $0.077$ & $1.722$ & $0.957$ \\
$\mu = 2$ & $3.667$ & $22.269$ & $0.941$ & $0.166$ & $7.031$ & $0.945$ & $0.130$ & $2.088$ & $0.958$ \\
$\xi = 1$ & $2.991$ & $31.028$ & $0.919$ & $0.155$ & $9.021$ & $0.954$ & $0.181$ & $2.873$ & $0.951$ \\
\hline
$\alpha = 0.5$ & $2.026$ & $8.043$ & $0.955$ & $0.255$ & $2.508$ & $0.960$ & $0.030$ & $0.808$ & $0.953$ \\
$\lambda = 1$ & $6.940$ & $41.243$ & $0.932$ & $0.841$ & $12.182$ & $0.952$ & $0.030$ & $3.792$ & $0.963$ \\
$\mu = 3$ & $7.540$ & $30.431$ & $0.948$ & $1.025$ & $9.227$ & $0.962$ & $0.141$ & $2.876$ & $0.959$ \\
$\xi = 6$ & $8.043$ & $28.494$ & $0.952$ & $1.384$ & $7.903$ & $0.961$ & $0.036$ & $2.572$ & $0.955$ \\
\hline
$\alpha = 0.1$ & $1.315$ & $9.007$ & $0.943$ & $0.025$ & $2.817$ & $0.945$ & $0.018$ & $0.943$ & $0.934$ \\
$\lambda = 7$ & $8.273$ & $31.447$ & $0.937$ & $0.502$ & $9.004$ & $0.941$ & $0.058$ & $3.034$ & $0.942$ \\
$\mu = 0.9$ & $8.990$ & $46.862$ & $0.927$ & $0.604$ & $13.922$ & $0.946$ & $0.219$ & $4.479$ & $0.939$ \\
$\xi = 3$& $8.149$ & $34.846$ & $0.941$ & $0.231$ & $10.125$ & $0.946$ & $0.069$ & $3.401$ & $0.936$ \\
\hline
$\alpha = 1$& $0.733$ & $5.155$ & $0.944$ & $0.030$ & $1.641$ & $0.953$ & $0.015$ & $0.535$ & $0.951$ \\
$\lambda = 2$ & $1.723$ & $18.766$ & $0.918$ & $0.307$ & $5.760$ & $0.950$ & $0.065$ & $1.775$ & $0.952$ \\
$\mu = 2$& $1.074$ & $18.041$ & $0.933$ & $0.237$ & $5.770$ & $0.946$ & $0.041$ & $1.859$ & $0.934$ \\
$\xi = 2$& $1.258$ & $17.543$ & $0.937$ & $0.116$ & $5.600$ & $0.958$ & $0.009$ & $1.752$ & $0.959$ \\
\hline \\[-1.8ex]
\end{tabular}
\label{bias}
\end{table}
\newpage
With the simulated times, we can test the estimated values of the parameters. To test the estimators we use the percent bias (\% Bias) and the coefficient of variation (CV), for which values close to zero indicate that our estimated parameters are close to their real values. We use the coverage probability (CP) to check the $95\%$ confidence interval, for which values close to $0.95$ mean that our proposed interval is a good confidence interval.
We test four cases; in each case we generated 1000 samples with sample sizes equal to $10^2$, $10^3$ and $10^4$. The test results are shown in Table \ref{bias}, where we can see that the larger the sample size, the better the estimates become. In fact, a sample size of $10^3$ already gives a good approximation in these examples, since the \% Bias and CV are relatively small and CP is close to $0.95$.
\section{Conclusion}
In this work we obtained the state probabilities of the fractional queue with catastrophes starting from any number of customers. In addition, we obtained the mean and the variance from the respective probability generating function. Using the Multinomial distribution, we proposed moment estimators and confidence intervals for the parameters of the model. Finally, we performed computational simulations to validate the estimation of the parameters, showing that the estimators and the confidence intervals perform well in the chosen scenarios as the sample size increases.
\section*{Acknowledgements}
This study was financed in part by the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior - Brasil (CAPES) - Finance Code 001. The authors thank Katiane Silva Concei\c{c}\~ao and Juliana Cobre for fruitful discussions at the early stages of this work.
| {
"timestamp": "2020-12-18T02:06:19",
"yymm": "2012",
"arxiv_id": "2012.09317",
"language": "en",
"url": "https://arxiv.org/abs/2012.09317",
"abstract": "A $M/M/1$ queue with catastrophes is a modified $M/M/1$ queue model for which, according to the times of a Poisson process, catastrophes occur leaving the system empty. In this work, we study a fractional $M/M/1$ queue with catastrophes, which is formulated by considering fractional derivatives in the Kolmogorov's Forward Equations of the original Markov process. For the resulting fractional process, we obtain the state probabilities, the mean and the variance for the number of customers at any time. In addition, we discuss the estimation of parameters.",
"subjects": "Probability (math.PR); Statistics Theory (math.ST)",
"title": "On a fractional queueing model with catastrophes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587229064297,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7097978854462593
} |
https://arxiv.org/abs/1005.1424 | On commuting matrices in max algebra and in classical nonnegative algebra | This paper studies commuting matrices in max algebra and nonnegative linear algebra. Our starting point is the existence of a common eigenvector, which directly leads to max analogues of some classical results for complex matrices. We also investigate Frobenius normal forms of commuting matrices, particularly when the Perron roots of the components are distinct. For the case of max algebra, we show how the intersection of eigencones of commuting matrices can be described, and we consider connections with Boolean algebra which enables us to prove that two commuting irreducible matrices in max algebra have a common eigennode. | \section{Introduction}\label{s:introduction}
The study of commuting complex matrices has a long history. As
observed in~\cite{Dra-51}, Cayley considers what appears to be a
generic case of commuting matrices in his famous
memoir~\cite{Cay-58}. Frobenius~\cite{Fro-78,Fro-96} showed that if
$A_i$, $i = 1,\ldots ,r$, are pairwise commuting matrices, then the
eigenvalues $\ga_i^j$, $j = 1,\ldots ,n$, of the matrices $A_i$ may
be ordered so that the eigenvalues of any polynomial
$p(A_1,\ldots,A_r)$ are $p(\ga^j_1, \ldots, \ga^j_r)$, $j = 1,\ldots, n$.
Another proof may be found in Schur~\cite{Schu-02}.
Surprisingly, none of these proofs mention eigenvectors.
Frobenius~\cite{Fro-78} also showed that if for given matrices $A,B$ the
equation $AX = XB$ has a nonzero solution, then $A$ and $B$ have a
common eigenvalue. Another well-known result is that pairwise
commuting matrices have a common eigenvector. We have found no
reference for the first explicit appearance of this property, though
it easily follows from e.g. the canonical form derived by
Weyr~\cite{Wey-90} and his discussion of commuting matrices.
Many generalizations and applications of this result exist,
see~\cite{DDG-51},~\cite{RR-00} or \cite{MS-10}. Several books on
matrix theory, such as~\cite{Gan-59}, contain proofs of the results
stated above.
It is the purpose of this paper to prove analogs of these results
for matrices over two semirings:
1. the semiring of nonnegative reals under the usual addition, here
called (classical) nonnegative algebra, and
2. the semiring of nonnegative reals with the operation of maximum
playing the role of addition, here called max algebra.
Spectral theory of matrices in nonnegative algebra is usually called
Perron-Frobenius theory after the founders of this topic,
see~\cite{Per-07,Per-07a,Fro-08,Fro-09,Fro-12}.
The basic results are again found in many books on matrix theory,
such as~\cite{Gan-59,HJ-85,Rot-07}.
Commuting nonnegative matrices can be found in~\cite{BP-94},
see Section~\ref{s:classical}, and in~\cite{Rad-99}.
For further information relevant to the present article see~\cite{Sch-86}.
Spectral theory for matrices in max algebra was developed by
Cuninghame-Green~\cite{CG:79} and Gaubert~\cite{Gau-92},
see~\cite{BCG-09,But-10} for recent expositions.
See \cite{DO-10a,DO-10b} for studies of commuting matrices
in more general semirings.
We devote the main sections to properties of
commuting matrices in max algebra.
At the end of this introduction
we give a formal definition of max algebra and make some remarks on
the relation between the two theories. We then review basic max algebra
spectral theory in Section~\ref{s:SpectralProblem} (for those who
are unfamiliar with this topic). We provide a proof that pairwise
commuting matrices have a common eigenvector in
Section~\ref{s:commoneig}. We also derive some immediate
consequences of this theorem, concerning inequalities for Perron
roots and matrix polynomials, and describe the intersection of
principal eigencones by means of the product of spectral projectors.
In Section~\ref{s:fnf}, we investigate Frobenius normal forms of
commuting matrices, showing that in the important special case where
the Perron roots of the components are distinct, the transitive
closures of the associated reduced digraphs coincide.
In Section~\ref{s:bmaxalg} we consider the eigenvector scaling,
which leads us to study commuting matrices in Boolean algebra.
As a result of this study we show that the critical digraphs
of two commuting irreducible matrices in
max algebra share a common node. In Section~\ref{s:classical} it is
indicated that most results in Sections~\ref{s:commoneig}
and~\ref{s:fnf} also hold in nonnegative matrix algebra.
Section~\ref{s:examples} is devoted to numerical examples.
By {\em max algebra} we understand the analogue of linear algebra
developed over the max-times semiring $(\Rp,\oplus,\times)$, which is
the set of nonnegative numbers $\Rp$ equipped with the operations of
``addition'' $a\oplus b:=\max(a,b)$ and ordinary multiplication
$a\times b$. The zero and unity of this semiring coincide with the usual
$0$ and $1$. The operations of the semiring are extended to
nonnegative matrices and vectors in the same way as in conventional
linear algebra. That is, if $A=(a_{ij})$, $B=(b_{ij})$ and
$C=(c_{ij})$ are matrices of compatible sizes with entries in
$\Rp$, we write $C=A\oplus B$ if $c_{ij}=a_{ij}\oplus b_{ij}$ for
all $i,j$ and $C=A\otimes B$ if $c_{ij}=\oplus_{k} a_{ik}
b_{kj}=\max_{k}(a_{ik} b_{kj})$ for all $i,j$. If $A$ is a square
matrix over $\Rp$, then the iterated product
$A\otimes A\otimes \cdots \otimes A$
in which the symbol $A$ appears $k$ times will be
denoted by $A^{k}$.
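The semiring operations are easy to experiment with numerically. The following Python sketch is only our own illustration of $\oplus$ and $\otimes$ for small nonnegative matrices; it is reused in the sketches given in the next sections.
\begin{verbatim}
import numpy as np

def oplus(A, B):
    # max-times "addition": entrywise maximum
    return np.maximum(A, B)

def otimes(A, B):
    # max-times "multiplication": (A (x) B)_{ij} = max_k a_{ik} b_{kj}
    return np.max(A[:, :, None] * B[None, :, :], axis=1)

A = np.array([[0.5, 2.0], [1.0, 0.0]])
B = np.array([[1.0, 0.5], [0.0, 3.0]])
print(otimes(A, B))   # [[0.5, 6.0], [1.0, 0.5]]
\end{verbatim}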
It is significant that max algebra can be obtained from
nonnegative linear algebra by means of a limit passage called Maslov
dequantization~\cite{Lit-07}:
\begin{equation}
\label{dequant} a\oplus b=\lim_{p\to\infty} a\oplus_p b \enspace ,
\end{equation}
where $a\oplus_p b:=(a^{1/p}+b^{1/p})^p$. Note that
$(\Rp,\oplus_p,\times)$ forms a semiring which is isomorphic to the semiring
$(\Rp,+,\times)$ of nonnegative numbers equipped with the usual addition
and multiplication. Thus one may expect, and this is indeed the
case, that max algebra and nonnegative linear algebra
have many interesting properties in common\footnote{Referring to
max algebra spectral theory, Gaubert~\cite{GP-97} remarks ``The
theory is extremely similar to the well known Perron-Frobenius
theory''.}. For example, the
Frobenius (normal) form of a reducible matrix plays an important
role in the study of reducible matrices in both theories. In view of
the above discussion, it is not surprising that a comparison of
spectral properties of reducible matrices shows that one needs to replace
strict inequalities in classical nonnegative spectral theory by weak
inequalities in max algebraic spectral theory, see~\cite{BCG-09} for
a remark along these lines concerning eigenvectors.
The above notation employing $\oplus$ and $\otimes$ is standard in
max algebra. However, as many results of the present paper are true
both in max algebra and in nonnegative linear algebra, it will be
convenient to write $a+b$ for $\max(a,b)$ when the argument works in
both theories. On the other hand, we emphasize by using the specific
max algebraic notation when this is not the case.
\section{The spectral problem in max algebra}\label{s:SpectralProblem}
We recall here some notation and basic facts about the spectral
problem in max algebra, which we use further in this paper.
See~\cite{BCOQ,But-10,CG:79,HOW-05} for general reference and more
information.
The {\em max algebraic spectral problem} for $A\in\Rp^{n\times n}$
consists in finding {\em eigenvalues} $\alpha \in \Rp$ and nonzero
{\em eigenvectors} $v \in \Rp^n$ such that $A v=\alpha v$ is
satisfied. Observe that the set $V(A,\alpha):=\{v\mid A v=\alpha
v\}$ is a (max) {\em cone} of $\Rp^n$, that is, a subset of $\Rp^n$
closed under (max) addition and (nonnegative) scalar multiplication.
This cone will be called the {\em eigencone} of $A$ associated with
$\alpha$. The set of eigenvalues, which is nonempty like in the usual
Perron-Frobenius theory, is called the {\em spectrum} of $A$ and
denoted $\Lambda(A)$. The largest eigenvalue of $A$ will be denoted
$\lambda(A)$ and called the {\em Perron root} of $A$ (since we want
the same terminology for max algebra and nonnegative matrix
algebra), and the associated eigencone will be called the {\em
principal eigencone} of $A$.
Unlike in the case of classical algebra,
there is an explicit formula for the max algebraic Perron root of
$A=(a_{ij})\in\Rp^{n\times n}$:
\begin{equation}\label{mcm}
\lambda(A)=\bigoplus_{k=1}^n\bigoplus_{i_1,\ldots,i_k}
(a_{i_1i_2} \cdots a_{i_ki_1})^{1/k}.
\end{equation}
This is also known as the maximal cycle (geometric) mean of $A$.
For $A \in \Rpnn$ we construct the associated digraph
$\assgraph=(N,E)$ by setting $N = \{1,\ldots,n\}$ and
letting $(i,j) \in E$ whenever $a_{ij} >0$.
When this digraph contains at least one cycle,
one distinguishes {\em critical cycles},
where the maximum in~\eqref{mcm} is attained. Further,
one constructs the {\em critical digraph} $\crit(A)=(N_c^A, E_c^A)$,
which consists of all the nodes $N_c^A$ and edges
$E_c^A$ of $\assgraph$ on critical cycles.
The nodes in $N_c^A$ will be called {\em critical nodes} or {\em eigennodes}.
The critical digraph is closely related to the series
\begin{equation}\label{kls}
A^*=I\oplus A\oplus A^2\oplus \cdots \enspace ,
\end{equation}
where $I$ is the unit matrix. This series is known to converge if,
and only if, $\lambda(A)\leq 1$,
in which case it is called the {\em Kleene star} of $A$.
If $\lambda(A)\leq 1$, then this series can be truncated:
$A^*=I\oplus A\oplus A^2\oplus\ldots\oplus A^{n-1}$.
For $A\in\Rpnn$ such that $\lambda(A)=1$, the principal eigencone is
the set of max-linear combinations of all columns of $A^*$ with
indices in $N_c^A$:
\begin{equation}\label{princ-eigen}
V(A,1)=\left\{\oplus_{i\in N_c^A} \beta_i
A^*_{\cdot i} \mid \beta_i\in\Rp\right\} \enspace .
\end{equation}
In particular, we have
\begin{equation}\label{fund-eigv}
AA^*_{\cdot i}=A^*_{\cdot i}\enspace ,
\quad A^*_{i\cdot }A=A^*_{i \cdot } \enspace ,\quad \forall i\in N_c^A \enspace .
\end{equation}
Thus, unlike in the usual Perron-Frobenius theory, even if
$A\in\Rpnn$ is irreducible (that is, the associated digraph
is strongly connected), the principal eigencone in max algebra
may contain more than just one ray. However, for irreducible $A$,
$\lambda(A)$ given by~\eqref{mcm} is the only eigenvalue
and every eigenvector is positive,
see Theorem~\ref{t:fundspec} below.
By standard optimal path algorithms, the critical digraph and the
columns of $A^*$ can be computed in $O(n^3)$ operations.
For further details we refer the reader to~\cite{But-10,HOW-05,BCOQ}.
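To make these objects concrete, the next sketch (again only an illustration, reusing \texttt{otimes} from the sketch in the introduction and using brute-force computations rather than the $O(n^3)$ optimal-path algorithms just mentioned) computes the Perron root \eqref{mcm}, the Kleene star \eqref{kls} and a principal eigenvector as in \eqref{princ-eigen}--\eqref{fund-eigv}.
\begin{verbatim}
def mt_power(A, k):
    # k-th max-times power of A; np.eye is the unit matrix of the semiring
    P = np.eye(len(A))
    for _ in range(k):
        P = otimes(P, A)
    return P

def perron_root(A):
    # maximal cycle geometric mean, eq. (mcm), via diagonals of max-times powers
    n = len(A)
    return max(np.max(np.diag(mt_power(A, k))) ** (1.0 / k) for k in range(1, n + 1))

def kleene_star(A):
    # A* = I (+) A (+) ... (+) A^{n-1}; requires lambda(A) <= 1
    n = len(A)
    S, P = np.eye(n), np.eye(n)
    for _ in range(n - 1):
        P = otimes(P, A)
        S = np.maximum(S, P)
    return S

def principal_eigenvector(A, tol=1e-12):
    # a principal eigenvector: a column of (A/lambda)* indexed by a critical node
    lam, n = perron_root(A), len(A)
    B = A / lam
    crit = [i for i in range(n)
            if any(abs(mt_power(B, k)[i, i] - 1.0) < tol for k in range(1, n + 1))]
    return lam, kleene_star(B)[:, crit[0]]

A = np.array([[0.5, 2.0, 0.0],
              [1.0, 0.0, 0.5],
              [0.0, 2.0, 1.0]])
lam, v = principal_eigenvector(A)
print(lam, np.allclose(otimes(A, v[:, None]).ravel(), lam * v))   # A (x) v = lam v
\end{verbatim}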
A (max) cone $K\subset \Rp^n$ is said to be finitely generated if it is
the set of max-linear combinations of a finite subset of vectors of
$\Rp^n$. Equivalently, a cone $K\subset \Rp^n$ is finitely generated
if there exists a matrix $X\in \Rp^{n\times r}$, for some $r\in \N$,
such that $K=\Img(X)$, where as usual $\Img(X):=\{Xu\mid u\in
\Rp^r\}$. Observe that if $K$ is not trivial, we may assume that $X$
does not have a null column. By~\eqref{princ-eigen}, it follows that
the principal eigencone is finitely generated. Indeed, this property
holds for any eigencone of $A$, see e.g.~\cite[Theorem~4.1]{BCG-09}.
Therefore, in what follows, for $\alpha \in \Lambda(A)$ we shall
denote by $X^A_\alpha$ any matrix with nonzero columns satisfying
$V(A,\alpha)=\Img(X^A_\alpha)$.
Let us finally mention that like in classical algebra,
any finite intersection of finitely generated
(max) cones is also finitely generated (this property follows
from~\cite{BH-84}, see e.g.~\cite[Theorem~1]{GK09}).
We summarize the main properties that will be used
in this paper in the next proposition.
\begin{proposition}\label{p:allpropert}
In max algebra the following statements hold:
\begin{enumerate}[(i)]
\item Every matrix has an eigenvalue with a corresponding eigenvector;
\item Eigencones are finitely generated;
\item The intersection of two finitely generated (max) cones is finitely generated.
\end{enumerate}
\end{proposition}
Further information on max algebra spectral theory
will be given in Section~\ref{s:fnf}.
\section{Existence of common eigenvectors}\label{s:commoneig}
\subsection{Common eigenvector of two matrices}
In this section on max algebra we prove that two commuting matrices
have a common eigenvector.
With this aim, we shall need the following lemma.
\begin{lemma}\label{l:InvEigenCone}
If $A,B\in\R_+^{n\times n}$ commute, then any eigencone
$V(A,\alpha)$ of $A$ is invariant under $B$ and any eigencone
$V(B,\alpha)$ of $B$ is invariant under $A$.
\end{lemma}
\begin{proof}
Let $v\in V(A,\alpha)$. For $u=Bv$, we have
\begin{equation}
Au=ABv=BAv=\alpha Bv=\alpha u \enspace .
\end{equation}
Therefore, $B(V(A,\alpha )) \subset V(A,\alpha )$.
\end{proof}
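The lemma is easy to test numerically. In the sketch below (an illustration only, reusing $A$ and the helper functions from the previous sketches) we take $B$ to be a max polynomial of $A$, so that $A$ and $B$ commute, and check that $B$ maps a principal eigenvector of $A$ back into the eigencone $V(A,\lambda(A))$; for this particular choice of $B$ the vector $v$ is in fact a common eigenvector, in line with the results proved below.
\begin{verbatim}
# A, otimes, mt_power, principal_eigenvector as in the previous sketches
B = np.maximum(mt_power(A, 2), 0.8 * A)   # B = A^2 (+) 0.8 A, hence AB = BA
assert np.allclose(otimes(A, B), otimes(B, A))

lam, v = principal_eigenvector(A)
u = otimes(B, v[:, None]).ravel()         # u = B (x) v
print(np.allclose(otimes(A, u[:, None]).ravel(), lam * u))        # u in V(A, lam)
print(np.allclose(otimes(B, v[:, None]).ravel(), lam ** 2 * v))   # v is an eigenvector of B too
\end{verbatim}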
Now it is possible to prove the following result,
which relates the eigencones of two commuting matrices.
\begin{theorem}\label{t:commoneig}
If $A,B\in\R_+^{n\times n}$ commute, then for
any eigencone $V(A,\alpha)$ of $A$ there exists an eigencone
$V(B,\mu)$ of $B$ such that $V(A,\alpha)\cap V(B,\mu)$ contains a
nonzero vector.
\end{theorem}
\begin{proof}
Let $V(A,\alpha)=\Img(X^A_\alpha)$ be an eigencone of $A$. Then,
\begin{equation}
AX_{\alpha}^A=\alpha X_{\alpha}^A \enspace ,
\end{equation}
and since by Lemma~\ref{l:InvEigenCone} we have
$B(\Img(X^A_\alpha))\subset \Img(X^A_\alpha)$, there exists
a (nonnegative square) matrix $C$ such that $BX_{\alpha}^A=X_{\alpha}^A C$.
Let $z$ be any eigenvector of $C$,
so that $Cz=\mu z$ and $z\neq 0$, and consider $u=X_{\alpha}^A z$.
Then, $u\neq 0$ (recall that all the columns of $X_{\alpha}^A$ are
nonzero) and we obtain
\[
Au = AX_{\alpha}^A z=\alpha X_{\alpha}^A z=\alpha u
\]
and
\[
Bu =BX_{\alpha}^A z=X_{\alpha}^A C z= \mu X_{\alpha}^A z=\mu u \enspace .
\]
Thus, $u\in V(A,\alpha)\cap V(B,\mu)$.
\end{proof}
As an immediate consequence, we obtain:
\begin{corollary}\label{c:commoneig}
If $A,B\in\R_+^{n\times n}$ commute,
then they have a common eigenvector.
\end{corollary}
We remark that our proof of Theorem~\ref{t:commoneig}
also shows the following result:
\begin{proposition}\label{p:common}
Let $A \in \Rnn_+$ and let $K$ be a (nontrivial) finitely generated cone of
$\Rp^n$. If $AK \subseteq K$, then $A$ has an eigenvector in $K$.
\end{proposition}
\subsection{Common eigenvector of several matrices}
The results above can be generalized to several pairwise commuting
matrices.
\begin{theorem}\label{t:cv-rmats}
Assume the matrices $A_1,\ldots ,A_r \in \Rnn_+$ commute in pairs.
Then, given any eigenvalue $\alpha_i\in\Lambda(A_i)$,
where $i\in \{1,\ldots ,r \}$,
there exist $\alpha_j\in\Lambda(A_j)$ for $j\neq i$
such that $V(A_1,\alpha_1)\cap \cdots \cap V(A_r,\alpha_r )$
contains a nonzero vector.
\end{theorem}
\begin{proof}
The case $r=2$ is precisely Theorem~\ref{t:commoneig}.
So assume that the statement of the theorem holds for $r=k$ and let
$A_1,\ldots ,A_k,A_{k+1}$ be $k+1$ matrices which commute in pairs.
Without loss of generality, assume $\alpha_1 \in \Lambda (A_1)$ is
given. By the induction hypothesis,
there exist $\alpha_j \in \Lambda(A_j)$, for $j=2,\ldots ,k$,
such that $V(A_1,\alpha_1)\cap \cdots \cap V(A_k,\alpha_k )$
contains a nonzero vector. Moreover,
since by Proposition~\ref{p:allpropert} any eigencone is
finitely generated and any finite
intersection of finitely generated max cones is also finitely
generated, there exists a (nonnegative) matrix $X$ such that
$V(A_1,\alpha_1)\cap \cdots \cap V(A_k,\alpha_k)=\Img(X)$.
Note that we may assume, without loss of generality,
that all the columns of $X$ are nonzero
because $\Img(X)$ contains nonzero vectors.
Since $A_i$ and $A_{k+1}$ commute for $i=1,\ldots,k$,
by Lemma~\ref{l:InvEigenCone} it follows that
\[
A_{k+1}(V(A_i,\alpha_i))\subseteq V(A_i,\alpha_i)
\]
for $i=1,\ldots ,k$. Therefore,
$A_{k+1}(\Img(X))=A_{k+1}(\cap_{i=1}^kV(A_i,\alpha_i))\subset
\cap_{i=1}^kV(A_i,\alpha_i)=\Img(X)$ and thus there exists a
(nonnegative square) matrix $C$ such that $A_{k+1}X=XC$.
As in the proof of Theorem~\ref{t:commoneig}, let $z$ be any
eigenvector of $C$ so that $Cz=\mu z$,
for some $\mu \in \Lambda (C)$,
and define $u=X z$.
Since $z\neq 0$ and the columns of $X$ are nonzero,
we have $u\neq 0$ and
\[
A_{k+1}u=A_{k+1} X z=X C z=\mu X z=\mu u \enspace .
\]
Thus, $u\in \Img(X)\cap V(A_{k+1},\mu)=V(A_1,\alpha_1)\cap \cdots
\cap V(A_k,\alpha_k)\cap V(A_{k+1},\mu)$.
\end{proof}
Next we investigate the eigenvalues of polynomials of commuting matrices.
Let us recall that a {\em max polynomial} is obtained by replacing in a
real polynomial (with nonnegative coefficients) the usual sum by the maximum.
\begin{theorem}\label{t:polynomials}
Let $A_1,\ldots,A_r \in \Rpnn$ commute in pairs and let
$p(x_1,\ldots,x_r)$ be a max polynomial. Then,
\begin{enumerate}[(i)]
\item\label{t:p1}
For each $i\in \{ 1,\ldots ,r \} $ and $\alpha_i\in\Lambda(A_i)$,
there exist $\alpha_j\in\Lambda(A_j)$ for all $j\neq i$ such that
$p(\alpha_1,\ldots,\alpha_r)\in \Lambda(p(A_1,\ldots,A_r))$;
\item\label{t:p2}
For each $\lambda\in\Lambda(p(A_1,\ldots,A_r))$ there exist
$\alpha_i\in \Lambda(A_i)$ for all $i=1,\ldots,r$ such that
$\lambda=p(\alpha_1,\ldots,\alpha_r)$.
\end{enumerate}
\end{theorem}
\begin{proof}
\eqref{t:p1} Let $i\in\{1,\ldots,r\}$ and $\alpha_i\in\Lambda(A_i)$.
By Theorem~\ref{t:cv-rmats}, there exist $\alpha_j\in\Lambda(A_j)$
for all $j\neq i$ and a nonzero vector $v \in \Rpn$ such that
$A_i v=\alpha_i v$ for all $i=1,\ldots, r$. But then we also have
$p(A_1,\ldots,A_r)v=p(\alpha_1,\ldots,\alpha_r)v$, and so
$p(\alpha_1,\ldots,\alpha_r)\in \Lambda(p(A_1,\ldots,A_r))$. \\
\eqref{t:p2} Let $\lambda\in\Lambda(p(A_1,\ldots,A_r))$.
Since $A_1,\ldots,A_r$ and $p(A_1,\ldots,A_r)$ commute in pairs,
by Theorem~\ref{t:cv-rmats} there is an eigenvector
$v\in V(p(A_1,\ldots,A_r),\lambda)$ which is also an eigenvector of
$A_i$ associated with some eigenvalue $\alpha_i\in \Lambda(A_i)$,
for all $i=1,\ldots,r$. But then
$\lambda v=p(A_1,\ldots,A_r) v = p(\alpha_1,\ldots,\alpha_r) v$
and it follows that $\lambda=p(\alpha_1,\ldots,\alpha_r)$.
\end{proof}
\begin{corollary}\label{c:spineqs}
Let $A_1,\ldots, A_r\in\Rpnn$ commute in pairs and
let $p(x_1,\ldots,x_r)$ be a max polynomial. Then,
\begin{enumerate}[(i)]
\item\label{c:spineqs3}
$\lambda(p(A_1,\ldots,A_r))\leq p(\lambda(A_1),\ldots,\lambda(A_r))$;
\item\label{c:spineqs1} $\lambda(A_1+\cdots+ A_r)\leq
\lambda(A_1)+\cdots+\lambda(A_r)$;
\item\label{c:spineqs2} $\lambda(A_1\cdots A_r)\leq
\lambda(A_1)\cdots \lambda(A_r)$.
\end{enumerate}
Moreover, equality holds in all the above relations if the matrices
$A_1,\ldots, A_r$ are irreducible.
\end{corollary}
\begin{proof}
Part~\eqref{c:spineqs3} follows from Theorem~\ref{t:polynomials},
and parts~\eqref{c:spineqs1} and~\eqref{c:spineqs2} are its special cases.
If the matrices are irreducible, then each of them has unique eigenvalue,
and we have the equalities.
\end{proof}
In the case of max algebra we also have
$\lambda(A_1\oplus\ldots\oplus A_r)\geq \lambda(A_i)$ for all
$i=1,\ldots, r$, as the Perron root expressed by~\eqref{mcm} is
monotonic. Hence~\eqref{c:spineqs1} always holds with equality in
max algebra.
\subsection{Intersection of principal eigencones}\label{s:intersection}
A matrix $Q$ is called a {\em projector} on a
cone $K\subset \R^n_+$ if $\Img(Q)=K$ and $Q^2 = Q$.
This implies that $Qx = x$ if, and only if, $x \in K$.
In general, there are many projectors on the same cone,
but if two such projectors $P,Q$ commute,
then they are identical because we have
$Px = QPx = PQx = Qx$ for all $x \in \R^n_+$.
We recall that the eigencone $V(A,\lambda(A))$ associated with
$\lambda(A)$ (assumed to be nonzero) is called the principal
eigencone of $A$ and a projector on $V(A, \lambda(A))$ which
commutes with $A$ is called a {\em spectral projector} for $A$.
Since $V(A,\lambda(A)) = V(A/\lambda(A),1)$,
there is no loss of generality in assuming that $\lambda(A)=1$.
In max algebra, one can explicitly define such projector.
There are two definitions in the literature:
\begin{equation}\label{e:specproj-bac}
\Tilde{Q}(A)=\bigoplus_{i\in N_c^A} A_{\cdot i}^* A_{i\cdot}^* \enspace ,
\end{equation}
and
\begin{equation}\label{e:specproj-yak}
{Q}(A)=\lim_{p\to\infty} \bigoplus_{m\geq p} A^m \enspace .
\end{equation}
The first of these is found in Baccelli et al.~\cite[Section~3.7.3]{BCOQ},
see also~\cite{CTGG-99}, and the second one,
which is attributed to Yakovenko~\cite{Yak-90},
is found in a more general context in
Kolokoltsov and Maslov~\cite[Section~2.4]{KM:97}.
We shall need the following proposition,
which shows that these projectors are indeed identical.
See~\cite[Theorem 2.11]{KM:97} for a closely related result.
\begin{proposition}\label{p:commproj}
Let $A \in \R_+^{n \times n}$ with $\lambda(A) =1$. Then,
there is a unique spectral projector on $V(A,1)$ which is given,
equivalently, by~\eqref{e:specproj-bac} or~\eqref{e:specproj-yak}.
\end{proposition}
\begin{proof}
In the first place, note that in the matrix case,
\eqref{e:specproj-yak} may be replaced by
\begin{gather}
\label{e:specproj-kss} Q(A)=\lim_{p\to\infty} A^p A^* \enspace ,
\end{gather}
see the remarks on $A^*$ in Section~\ref{s:SpectralProblem}.
Since by~\eqref{kls} we have $A^{p+1}A^* \leq A^p A^*$,
it follows that the limit in~\eqref{e:specproj-kss} exists.
By the continuity of operations,
$\lim_{p\to\infty} (C_p B)=(\lim_{p\to\infty} C_p) B$
for any converging sequence of matrices $C_p$ and any matrix $B$.
Using this, we observe that if $B$ is any matrix which commutes with $A$,
then $B$ also commutes with $Q(A)$.
Note also that $Q(A)$ is itself a spectral projector for $A$:
it commutes with $A$, the identity $AQ(A)=Q(A)$ gives
$\Img(Q(A))\subseteq V(A,1)$, and $Ax=x$ implies $A^*x=x$ and hence
$Q(A)x=x$, so that $\Img(Q(A))=V(A,1)$ and $Q(A)^2=Q(A)$.
Since, as shown above, any two commuting projectors
on the same cone are identical,
we conclude that any spectral
projector for $A$ is equal to $Q(A)$.
Therefore, in particular we have $\Tilde{Q}(A)=Q(A)$.
\end{proof}
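For illustration, the following Python sketch implements formula~\eqref{e:specproj-bac} in the $(\max,+)$ encoding of max algebra, in which the normalization $\lambda(A)=1$ becomes $\lambda(A)=0$, the Kleene star is $A^*=I\oplus A\oplus\cdots\oplus A^{n-1}$, and node $i$ is critical precisely when $(A\otimes A^*)_{ii}=0$. Applied to the matrix $A_1$ of Section~\ref{s:examples}, it reproduces the projector $Q(A_1)$ displayed there. The function names are ours and the sketch is not optimized.
\begin{verbatim}
# Spectral projector via the first definition above, in the (max,+) encoding;
# lambda(A) = 0 is assumed.
import numpy as np

NEG = float('-inf')

def mul(X, Y):                                    # max-plus matrix product
    n, p, m = X.shape[0], X.shape[1], Y.shape[1]
    Z = np.full((n, m), NEG)
    for i in range(n):
        for j in range(m):
            Z[i, j] = max(X[i, k] + Y[k, j] for k in range(p))
    return Z

def star(A):                                      # A* = I (+) A (+) ... (+) A^{n-1}
    n = A.shape[0]
    S = np.full((n, n), NEG)
    np.fill_diagonal(S, 0.0)                      # the max-plus identity matrix
    P = S.copy()
    for _ in range(n - 1):
        P = mul(P, A)
        S = np.maximum(S, P)
    return S

def spectral_projector(A):
    n = A.shape[0]
    As = star(A)
    Aplus = mul(A, As)                            # Aplus[i,i] = max weight of a cycle through i
    crit = [i for i in range(n) if Aplus[i, i] == 0.0]
    Q = np.full((n, n), NEG)
    for i in crit:                                # Q = (+)_{i critical} A*_{.i} (x) A*_{i.}
        Q = np.maximum(Q, As[:, [i]] + As[[i], :])
    return Q

A1 = np.array([[-2.,   1., NEG],
               [-1.,  -1., -2.],
               [-1.,  NEG, -2.]])                 # lambda(A1) = 0
Q = spectral_projector(A1)
print(Q)                                          # [[0,1,-1],[-1,0,-2],[-1,0,-2]] = Q(A_1)
print(np.array_equal(mul(Q, Q), Q))               # True: Q is a projector
print(np.array_equal(mul(A1, Q), Q))              # True: the columns of Q lie in V(A_1, 0)
\end{verbatim}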
Next we state two lemmas. The first one exploits~\eqref{e:specproj-yak}
and follows from the continuity of multiplication.
The second lemma is standard and its proof is
recalled for the convenience of the reader.
\begin{lemma}
If $A,B\in\Rnn_+$ commute, then $Q(A)$ and $Q(B)$ commute.
\end{lemma}
\begin{lemma}\label{l:commproj}
Let $Q_i$, $i = 1,\ldots, r$, be commuting projectors. Then,
\begin{equation}
\Img(Q_1) \cap \cdots \cap \Img(Q_r) = \Img(Q_1\cdots Q_r) \enspace .
\end{equation}
\end{lemma}
\begin{proof}
If $x \in \Img(Q_1) \cap \cdots \cap \Img(Q_r)$, then $Q_i x = x$ for
$i = 1,\ldots, r$, and hence $(Q_1\cdots Q_r) x = x$.
Thus, $x \in \Img(Q_1\cdots Q_r)$. Conversely,
if $x \in \Img(Q_1\cdots Q_r)$,
then $(Q_1 \cdots Q_r) y = x$ for some vector $y$.
Multiplying this equation by $Q_i$, for $i =1,\ldots, r$,
using the idempotency of $Q_i$ and commutativity, it follows that $Q_i x = x$.
This completes the proof of the lemma.
\end{proof}
Lemma~\ref{l:commproj} implies that we can express the intersection
of the principal eigencones of commuting matrices as follows:
\begin{equation}\label{qa1ar}
V(A_1,1)\cap\cdots\cap V(A_r,1)=
\Img(Q(A_1)) \cap \cdots \cap \Img(Q(A_r)) =
V(Q(A_1)\cdots Q(A_r), 1)\enspace .
\end{equation}
In the general (reducible) case,
this intersection may reduce to the zero vector.
Since by~\eqref{c:spineqs2} of Corollary~\ref{c:spineqs}
we have $\lambda(Q(A_1)\cdots Q(A_r))\leq 1$,
it follows that~\eqref{qa1ar} is not trivial if,
and only if, the Perron root of $Q(A_1)\cdots Q(A_r)$ is $1$,
in which case this intersection is given by
the principal eigencone of $Q(A_1)\cdots Q(A_r)$.
Using definition~\eqref{e:specproj-bac}, we can compute
this product in $O(rn^3)$ operations, and then it requires no more
than $O(n^3)$ operations to compute its Perron root and describe its
principal eigencone when the Perron root is $1$.
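The whole procedure can be sketched in a few lines of Python (again in the $(\max,+)$ encoding, with all $\lambda(A_i)=0$): multiply the spectral projectors, test whether the Perron root of the product is $0$ by looking for critical nodes, and, if so, read off generators of the intersection as the critical columns of the Kleene star of the product (the standard description of a principal eigencone in max algebra). Applied to the matrices $A_1,A_2$ of Section~\ref{s:examples}, it recovers, up to scaling, the generator $(1,0,0)^T$ computed there; the helper functions are illustrative only.
\begin{verbatim}
# Intersection of principal eigencones via the product of spectral projectors,
# in the (max,+) encoding; every lambda(A_i) = 0 is assumed.
import numpy as np

NEG = float('-inf')

def mul(X, Y):
    n, p, m = X.shape[0], X.shape[1], Y.shape[1]
    Z = np.full((n, m), NEG)
    for i in range(n):
        for j in range(m):
            Z[i, j] = max(X[i, k] + Y[k, j] for k in range(p))
    return Z

def star(A):                                      # Kleene star, valid for lambda(A) <= 0
    n = A.shape[0]
    S = np.full((n, n), NEG)
    np.fill_diagonal(S, 0.0)
    P = S.copy()
    for _ in range(n - 1):
        P = mul(P, A)
        S = np.maximum(S, P)
    return S

def critical_nodes(A):                            # nodes lying on a cycle of weight 0
    Aplus = mul(A, star(A))
    return [i for i in range(A.shape[0]) if Aplus[i, i] == 0.0]

def spectral_projector(A):
    As = star(A)
    Q = np.full(A.shape, NEG)
    for i in critical_nodes(A):
        Q = np.maximum(Q, As[:, [i]] + As[[i], :])
    return Q

def eigencone_intersection(mats):
    P = spectral_projector(mats[0])
    for A in mats[1:]:
        P = mul(P, spectral_projector(A))
    crit = critical_nodes(P)                      # empty iff the Perron root of P is < 0
    Ps = star(P)
    return [Ps[:, i] for i in crit]               # generators of V(P, 0)

A1 = np.array([[-2.,   1., NEG],
               [-1.,  -1., -2.],
               [-1.,  NEG, -2.]])
A2 = np.array([[ 0.,  -1., -1.],
               [NEG,   0., -4.],
               [-3.,  NEG,  0.]])
for g in eigencone_intersection([A1, A2]):
    print(g)      # every generator is proportional (in max-plus) to (1, 0, 0)^T
\end{verbatim}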
\section{Frobenius normal forms}\label{s:fnf}
Let $\assgraph=(N,E)$ be the associated digraph of $A\in\Rpnn$ and
$\assgraph^\mu =(N^{\mu},E^{\mu})$, for $\mu=1,\ldots ,t$, be the
strongly connected components of $\assgraph$. We construct the {\em reduced digraph}
$\redgraph$ with set of nodes $\{ 1,\ldots ,t\}$, setting an edge
$(\mu,\nu)$ whenever there exist $i\in N^{\mu}$ and $j\in N^{\nu}$
such that $(i,j)\in E$. We shall call a strongly connected component
(or the corresponding set of nodes) of $\assgraph$ a {\em class} of $A$
and also use that term for the
nodes of $\redgraph$. Further, we also identify subsets $S$ of
nodes of $\redgraph$ with the union of the corresponding classes of
$A$, that is $S$ may denote $\cup_{\nu \in S}N^\nu$.
Each class $\mu$ is labeled by the corresponding maximal cycle
(geometric) mean $\alpha_{\mu }$,
which will also be called the {\em Perron root} of the class. We
write $\mu\rightarrow\nu$ if $\mu = \nu$ or if there exists a path
in $\redgraph$ connecting $\mu$ to $\nu$ (in other words, if $\mu$
has access to $\nu$). A set $I$ of classes is an {\em initial segment}
of $\redgraph$ if $\nu \in I$ and $\mu\rightarrow\nu$ imply
that $\mu \in I$. The set of all classes $\mu$ such that
$\mu\rightarrow\nu$ will be denoted by $\intl(\nu)$ and called the
{\em initial segment} generated by $\nu$ in $\redgraph$. If $S$ is a
set of classes, then a class $\nu \in S$ is said to be {\em initial}
in $S$ if $\mu\rightarrow \nu$ and $\mu \in S$ imply that $\mu =
\nu$. Similarly, a class $\nu \in S$ is called {\em final} in $S$
if $\nu\rightarrow \mu$ and $\mu \in S$ imply that $\mu = \nu$. An
initial (resp. final) class in $\{1,\ldots ,t\}$ is simply called
initial (resp. final).
A class $\nu$ is said to be {\em spectral} if $\nu$ is initial, or
if $\ga_\nu > 0$ and $\alpha_{\mu}\leq \alpha_{\nu}$ for all $\mu$
with $\mu\rightarrow\nu$. A spectral class $\nu$ is called
{\em premier spectral} if $\alpha_\mu < \alpha_\nu$ for all
$\mu\neq \nu$ with $\mu\rightarrow\nu$.
Access relations for $\assgraph$ and $\redgraph$ are normally
visualized in terms of a {\em Frobenius form}. There exists a
similarity permutation of $A$ such that
\[
A =
\begin{pmatrix}
A _{1 1}& 0 &\cdots & 0& 0\\
A _{2 1}& A _{2 2} &\cdots &0 & 0\\
\vdots &\vdots &\ddots & \vdots & \vdots \\
A _{(t-1)1}&A _{(t-1)2} &\cdots &A _{(t-1)(t-1)}& 0 \\
A _{t 1} &A _{t 2} &\cdots &A _{t(t-1)}& A _{t t}
\end{pmatrix}
\]
with irreducible diagonal blocks $A _{\mu \mu}$ for $\mu=1,\ldots ,t$.
A Frobenius (normal) form of $A$ arises from each total ordering of
the classes of $\redgraph$ that is anti-compatible with the partial
order given by the access relations, viz. $\mu \rightarrow \nu$
implies $\mu \geq \nu$. In particular, given any initial segment $I$
of $\redgraph$ there is a Frobenius form of $A$ for which the
classes of $I$ are $r$, $r+1,\ldots ,t$ for some $r \in\{1,\ldots ,t\}$.
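The classes, the reduced digraph, and the initial and final classes are easy to extract from the pattern of $A$; the following Python sketch (illustration only, and assuming the \texttt{networkx} package is available) computes the condensation of the associated digraph, whose nodes are exactly the classes of $A$.
\begin{verbatim}
# Classes and reduced digraph of a nonnegative matrix pattern (illustration only;
# requires the networkx package).
import networkx as nx
import numpy as np

def reduced_digraph(A):
    """Condensation of the associated digraph: one node per class of A."""
    n = A.shape[0]
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    G.add_edges_from((i, j) for i in range(n) for j in range(n) if A[i, j] > 0)
    return nx.condensation(G)                 # node attribute 'members' holds each class

# Example pattern with classes {0}, {1,2}, {3} (0-based indices).
A = np.array([[1., 0., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])
R = reduced_digraph(A)
classes = {mu: sorted(R.nodes[mu]['members']) for mu in R.nodes}
initial = [classes[mu] for mu in R.nodes if R.in_degree(mu) == 0]
final   = [classes[mu] for mu in R.nodes if R.out_degree(mu) == 0]
print(classes)          # the classes [0], [1, 2], [3] (condensation labels may differ)
print(initial, final)   # [[3]] is initial, [[0]] is final
\end{verbatim}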
We now state the {\em fundamental spectral theorem of max algebra}.
Recall that the {\em support} of a vector $x \in \Rpn$
consists of all $i \in N$ such that $x_i > 0$.
\begin{theorem}\label{t:fundspec}
Let $A \in \Rpnn$ and $\gl \in \Rp$.
Then, a subset $U$ of $N$ is the support of an
eigenvector associated with $\gl$ if, and only if,
\begin{enumerate}[(i)]
\item\label{t:fundspec1}
There is an initial segment $I$ of $\redgraph$ such that
$U = \cup_{\nu \in I}N^\nu$,
\item\label{t:fundspec2}
All final classes $\nu$ in $I$ are spectral and satisfy
$\ga_\nu =\gl$.
\end{enumerate}
\end{theorem}
This theorem has a long history and has been stated in different ways,
see e.g.~\cite{GP-97,Gau-92,CG:79,BCG-09,But-10,BCOQ}.
The statement in Theorem~\ref{t:fundspec} is essentially the same
as the one that appeared in~\cite{GP-97}.
The following corollary is immediate.
\begin{corollary}\label{c:initnodes}
Let $A \in \Rpnn$. Then,
\begin{enumerate}[(i)]
\item\label{c:initnodes1}
$\gl$ is an eigenvalue if, and only if, there is a spectral class
$\nu$ such that $\ga_\nu = \gl$;
\item\label{c:initnodes2}
$\nu$ is a spectral class if, and only if, there exists an
eigenvector with support $\intl(\nu)$;
\item\label{c:initnodes3}
A spectral class $\nu$ is premier spectral if, and only if,
any eigenvector associated with $\ga_{\nu}$ whose support is contained
in $\intl(\nu)$ has its support equal to $\intl(\nu)$;
\item\label{c:initnodes4}
If the reduced digraph of $A$ has a unique spectral class $\nu$ with
Perron root $\ga_{\nu}$, then any eigenvector associated with
$\ga_{\nu}$ has support $\intl(\nu)$;
\item\label{c:initnodes5}
If the Perron roots of all classes are distinct, then all spectral
classes are premier spectral and all eigenvectors have support
$\intl(\nu)$ for some spectral class $\nu$.
\end{enumerate}
\end{corollary}
The following well-known corollary also
follows easily from Theorem~\ref{t:fundspec}.
\begin{corollary}\label{c:posevec}
For any $A\in \Rpnn$ with $\gl(A)> 0$
the following statements are equivalent:
\begin{enumerate}[(i)]
\item\label{c:posevec1}
$A$ has a positive eigenvector.
\item\label{c:posevec2}
The Perron root of any final class is $\gl(A)$
(and so, in particular, all final classes are spectral).
\end{enumerate}
If either condition holds, then any positive eigenvector
is associated with the eigenvalue $\gl(A)$.
\end{corollary}
The proof of our next lemma essentially repeats arguments used to
prove Corollary~\ref{c:commoneig} and Theorem~\ref{t:cv-rmats}.
\begin{lemma}\label{l:commutant}
Let $A \in \R_+^{n\times n}$ and $C\in\R_+^{m\times m}$. If $AX = XC$,
where $X\in\R_+^{n\times m}$ and every column of $X$ is nonzero,
then any eigenvalue of $C$ is also an eigenvalue of $A$.
\end{lemma}
\begin{proof} Suppose that $\lambda \in \Lambda(C)$ and let
$z\in\R_+^m $ be an eigenvector of $C$ associated with $\lambda $. Then,
$A X z = X C z = \lambda X z$.
Since every column of $X$ is nonzero,
we have $X z \neq 0$ and thus $\lambda \in \Lambda(A)$.
\end{proof}
If $A$ and $C$ are irreducible, then in
Lemma~\ref{l:commutant} it is enough to assume that $X$ is
nonzero because the vector $z$ in the proof above is positive.
Thus, we obtain:
\begin{lemma}\label{l:icommutant}
Let $A \in \R_+^{n\times n}$ and $C\in\R_+^{m\times m}$
be irreducible matrices. If $AX = XC$, where $X\in\R_+^{n\times m}$
is nonzero, then $\lambda(A) = \lambda(C)$.
\end{lemma}
The following important lemma indicates what happens if a matrix
commutes with an irreducible matrix.
\begin{lemma}\label{l:uniqueevalue}
If $A,B\in\R_+^{n\times n}$ commute and $B$ is irreducible, then
\begin{enumerate}[(i)]
\item\label{l:uniqueevalue1}
The Perron root of every final class and every initial class of $A$
is $\gl(A)$ (and so, in particular, all final classes are spectral);
\item\label{l:uniqueevalue2}
$A$ has the unique eigenvalue $\gl(A)$;
\item\label{l:uniqueevalue3}
If $A$ is reducible,
then at least two distinct classes of $A$ have Perron root $\gl(A)$.
\end{enumerate}
\end{lemma}
\begin{proof}
In the first place, note that the lemma is obvious if $\gl(A)=0$,
so we may assume that $\gl(A)> 0$.
\eqref{l:uniqueevalue1}
From Corollary~\ref{c:commoneig}, we know that $A$ and $B$
have a common eigenvector. Since $B$ is irreducible,
all its eigenvectors are positive.
It follows by~\eqref{c:posevec2} of
Corollary~\ref{c:posevec} that all final classes of $A$ have
Perron root $\gl(A)$ and are therefore spectral.
Similarly, the transpose $A^T$ commutes with the irreducible matrix $B^T$ and
therefore all final classes of $A^T$ have Perron root $\gl(A)$.
But the final classes of $A^T$ are precisely the initial classes of $A$.
\eqref{l:uniqueevalue2} This follows easily from~\eqref{l:uniqueevalue1},
Theorem~\ref{t:fundspec} and the definition of spectral class.
\eqref{l:uniqueevalue3} If $A$ is reducible,
then it has either two initial classes or an initial class and
a distinct final class; in either case, by~\eqref{l:uniqueevalue1}
both of these classes have Perron root $\gl(A)$.
\end{proof}
\begin{remark}\label{r:spineqs}
{\rm In Corollary~\ref{c:spineqs}, the irreducibility assumption can
be relaxed. It suffices that just {\em one} of the matrices is
irreducible, for then by~\eqref{l:uniqueevalue2} of Lemma~\ref{l:uniqueevalue}
each matrix has a unique eigenvalue.}
\end{remark}
The {\em transitive closure} of $\redgraph$ is the digraph
$\redgraph^*$ which has the edge $(\mu,\nu)$ if, and only if,
$\mu\rightarrow\nu$ in $\redgraph$. We shall say that $\nu$
{\em covers} $\mu$ in $\redgraph^*$ if $ \nu \neq \mu$,
$\nu \rightarrow \mu$ and the following property is satisfied:
$\nu \rightarrow \delta \rightarrow \mu$
implies that either $\delta=\mu$ or $\delta = \nu$.
The main result of this section is the following theorem.
\begin{theorem}\label{t:distroots}
Suppose that $A_1,\ldots ,A_r\in\R_+^{n\times n}$ pairwise commute
and that all classes of $A_i$, for each $i\in \{1,\ldots,r\}$, have
distinct Perron roots. Then,
\begin{enumerate}[(i)]
\item\label{t:d1}
All classes of $A_1,\ldots,A_r$ and $A_1+\cdots+ A_r$ coincide;
\item\label{t:d2}
The transitive closures of the reduced digraphs of
$A_1,\ldots,A_r$ and $A_1+\cdots+ A_r$ coincide;
\item\label{t:d3}
The spectral classes of the reduced digraphs of $A_1,\ldots,A_r$ and
$A_1+\cdots+ A_r$ coincide. In particular, $A_1,\ldots,A_r$ have the
same number of distinct eigenvalues;
\item\label{t:d4}
Let $\mu_1,\ldots,\mu_m$ be the common spectral classes of
$A_1,\ldots,A_r$ and denote the Perron root of the $\mu_j$-th class
of $A_i$ by $\ga^j_i$. Then, for any max polynomial
$p(x_1,\ldots,x_r)$, the eigenvalues of $p(A_1,\ldots,A_r)$ are
precisely $p(\alpha^j_1,\ldots,\alpha^j_r)$ for $j=1,\ldots,m$
(possibly with repetitions).
\end{enumerate}
\end{theorem}
\begin{proof}
\eqref{t:d1} Suppose that $C:= A_1+\cdots+ A_r$ is in Frobenius
form and partition $A_i$, for $i=1,\ldots,r$, correspondingly.
Evidently, a Frobenius form of $B:=A_i$, for $i=1,\ldots,r$,
is a refinement of the Frobenius form of $C$. Since $B_{\mu \mu}$
and $C_{\mu \mu}$ commute and $C_{\mu \mu}$ is irreducible,
by~\eqref{l:uniqueevalue3} of Lemma~\ref{l:uniqueevalue} and our assumption,
it follows that $B_{\mu \mu}$ is also irreducible. Therefore,
$B=A_i$ is also in Frobenius form. This proves~\eqref{t:d1}.
\eqref{t:d2} Now suppose that $\nu$ covers $\mu$ in
the reduced digraph associated with $C$.
Then, for $B:=A_i$ the matrices
\[
\begin{pmatrix}
B_{\mu \mu} & 0 \\
B_{\nu \mu} & B_{\nu \nu}
\end{pmatrix}
\enspace {\rm and }\enspace
\begin{pmatrix}
C_{\mu \mu} & 0 \\
C_{\nu \mu} & C_{\nu \nu}
\end{pmatrix}
\]
commute and, by assumption, $C_{\nu \mu} \neq 0$. Suppose that
$B_{\nu \mu} = 0$. Examining the $(2,1)$ block of the products of
these matrices we obtain
\begin{equation}
B_{\nu \nu}C_{\nu \mu} = C_{\nu \mu} B_{\mu \mu} \enspace .
\end{equation}
Since $B_{\mu \mu}$ and $B_{\nu \nu}$ are irreducible, it follows
from Lemma~\ref{l:icommutant} that the Perron roots of $B_{\mu \mu}$
and $B_{\nu \nu}$ are equal. This contradicts our assumption and
hence $B_{\nu \mu} \neq 0$. But two transitive digraphs coincide if
the cover relations are identical. This proves~\eqref{t:d2}.
\eqref{t:d3}
In the first place, observe that any initial segment $\intl(\nu)$
generated by a class $\nu$ in the reduced digraph associated with one
of the matrices $A_1,\ldots,A_r$ or $A_1+\cdots+ A_r$ is independent of the
choice of the matrix
because the transitive closures of their reduced digraphs coincide.
For this reason, in what follows we shall denote by $\intl(\nu)$ this
common initial segment and we shall not specify the matrix it corresponds to.
Let $\mu_j$ be a spectral class of $A_i$. Since all
classes of $A_i$ have distinct Perron roots,
from~\eqref{c:initnodes5} of Corollary~\ref{c:initnodes} it follows
that every spectral class is premier spectral and that every
eigenvector of $A_i$ associated with $\ga^j_i$ has support
$\intl(\mu_j)$. But, by Theorem~\ref{t:cv-rmats}, there are
eigenvalues of $A_k$ for $k\neq i$ that share an eigenvector
with the eigenvalue $\ga^j_i$ of $A_i$. Since this eigenvector has
support $\intl(\mu_j)$, by~\eqref{c:initnodes2} of
Corollary~\ref{c:initnodes} it follows that $\mu_j$
is a spectral class for all $A_k$.
Note that the above argument shows that any spectral class of $A_i$ is also
a spectral class of $A_1+\cdots+ A_r$. To prove the converse in max
algebra, suppose that $\mu$ is a spectral class of $A_1+\cdots + A_r$.
Using the additivity of Perron roots
(see Corollary~\ref{c:spineqs}), we obtain
\begin{equation}\label{e:maxinequ}
\oplus_{i=1}^r \lambda((A_i)_{\nu \nu}) =
\lambda((\oplus_{i=1}^r A_i)_{\nu \nu}) \leq
\lambda((\oplus_{i=1}^r A_i)_{\mu \mu}) =
\oplus_{i=1}^r \lambda((A_i)_{\mu \mu}) \enspace ,
\end{equation}
for all $\nu \in \intl(\mu)$. Without loss of generality,
assume that
$\oplus_{i=1}^r \lambda((A_i)_{\mu \mu})= \lambda((A_1)_{\mu \mu})$.
Then, from~\eqref{e:maxinequ} it follows that
$\lambda((A_1)_{\nu \nu})\leq \lambda((A_1)_{\mu \mu})$
for all $\nu \in \intl(\mu)$,
implying that $\mu$ is a spectral class of $A_1$,
and hence of all $A_i$.
\eqref{t:d4} By Theorem~\ref{t:cv-rmats},
for each common spectral class $\mu_j$ of $A_1,\ldots ,A_r$
there exists a common eigenvector $v^j$ which has support $\intl(\mu_j)$.
Since $A_i v^j =\alpha_i^j v^j$ for $i=1,\ldots , r$,
it follows that
$p(A_1,\ldots ,A_r)v^j = p(\alpha_1^j,\ldots,\alpha_r^j) v^j$ and thus
$p(\alpha_1^j,\ldots,\alpha_r^j)$ is an eigenvalue of
$p(A_1,\ldots,A_r)$. Let now $\lambda $ be an eigenvalue of
$p(A_1,\ldots,A_r)$. As $p(A_1,\ldots,A_r)$ commutes with $A_i$ for
all $i=1,\ldots,r$, by Theorem~\ref{t:cv-rmats} there exists an
eigenvector $v$ of $p(A_1,\ldots,A_r)$ associated with $\lambda $
which is also an eigenvector of $A_i$ for all $i$. Then,
by~\eqref{c:initnodes5} of Corollary~\ref{c:initnodes} there exists
a common spectral class $\mu_j$ of $A_1,\ldots ,A_r$ such that the
support of $v$ is equal to $\intl(\mu_j)$. Therefore, we have
$A_i v=\alpha_i^j v$ for all $i=1,\ldots,r$, implying that
$\lambda=p(\alpha_1^j,\ldots,\alpha_r^j)$ because
$\lambda v=p(A_1,\ldots,A_r)v=p(\alpha_1^j,\ldots,\alpha_r^j)v$.
\end{proof}
As already observed, under the assumptions of
Theorem~\ref{t:distroots}, the eigenvalues $\alpha^j_i$,
$i = 1,\ldots, r$, of the matrices $A_1,\ldots ,A_r$ are associated with
some common spectral class $\mu_j$ of their reduced digraphs.
We next show how to compute the intersection
of the corresponding eigencones.
Let $I$ be the initial segment generated by the spectral class
$\mu_j$ in any of the reduced digraphs associated with the matrices
$A_i$ (recall that this initial segment is independent
of the choice of the matrix because the transitive closures of
their reduced digraphs coincide).
Each vector $x \in \R^n_+$ can be written uniquely as $x[I] + x[I']$,
where $I'$ is the complement of $I$ in $\{1,\ldots ,n\}$.
Since $I$ is an initial segment of
the reduced digraphs associated with all the matrices $A_i$, there
is a Frobenius form of all these matrices such that $I = \{s,s+1,\ldots,t\}$
for some $s\in\{1,\ldots,t\}$.
If we denote the submatrix of $A_i$
based on the set of classes $I$ by $A_i[I,I]$,
then, as the matrices $A_1,\ldots ,A_r$ commute in pairs,
the matrices $A_1[I,I],\ldots ,A_r[I,I]$ also commute in pairs
(indeed, $A_i[I',I]=0$ for all $i$ because $I$ is an initial segment,
so $(A_iA_k)[I,I]=A_i[I,I]\,A_k[I,I]$). Therefore,
we can apply the method described in Section~\ref{s:intersection}
to compute the intersection of their principal eigencones.
Moreover, by Corollary~\ref{c:initnodes} we know that
$x\in V(A_1,\alpha_1^j) \cap \cdots \cap V(A_r,\alpha_r^j)$ if,
and only if, $x[I']=0$ and
$x[I]\in V(A_1[I,I],\alpha_1^j) \cap \cdots \cap V(A_r[I,I],\alpha_r^j)$,
where the latter is the intersection of the principal eigencones
of $A_i[I,I]$, because by the definition of these matrices
we have $\lambda(A_i[I,I])=\alpha_i^j$ for all $i = 1,\ldots, r$.
\section{Common scaling and application of Boolean algebra}\label{s:bmaxalg}
\subsection{Common scaling and saturation digraphs}
The whole of this section is in max algebra only. It is inspired by
the works of Cuninghame-Green and Butkovi\v{c}~\cite{CGB-08,But-10},
where commuting matrices are studied in the context of two-sided
systems and the generalized eigenproblem. In these works, commuting
irreducible matrices are {\em assumed} to have a common eigennode.
We are going to show that this is always the case.
If $A=(a_{ij})$ and $B=(b_{ij})$ are irreducible and $AB=BA$,
then they have a common positive eigenvector $u$,
and using $U=\diag(u)$ they can be simultaneously scaled to
$\Tilde{A}:=U^{-1}AU$ and $\Tilde{B}:=U^{-1}BU$.
Assuming that $\lambda(A)=\lambda(B)=1$,
we obtain for $\Tilde{A}=(\Tilde{a}_{ij})$ and
$\Tilde{B}=(\Tilde{b}_{ij})$ that
\begin{equation}
\begin{split}
Au=u \Rightarrow & \forall i\exists j\colon
a_{ij}u_j=u_i\Leftrightarrow
\Tilde{a}_{ij}=1 \enspace ,\\
& \forall i,j\colon a_{ij}u_j\leq u_i \Leftrightarrow
\Tilde{a}_{ij}\leq 1 \enspace .\\
Bu=u \Rightarrow & \forall i\exists j\colon
b_{ij}u_j=u_i\Leftrightarrow
\Tilde{b}_{ij}=1 \enspace ,\\
& \forall i,j\colon b_{ij}u_j\leq u_i \Leftrightarrow
\Tilde{b}_{ij}\leq 1 \enspace .
\end{split}
\end{equation}
Defining $\Tilde{A}^{[1]}=(\Tilde{a}^{[1]}_{ij})$ and
$\Tilde{B}^{[1]}=(\Tilde{b}^{[1]}_{ij})$ by:
\begin{equation}
\Tilde{a}^{[1]}_{ij}=
\begin{cases}
1 & \text{if }\Tilde{a}_{ij}=1 \; ,\\
0 & \text{otherwise.}
\end{cases}
\enspace \enspace
\Tilde{b}^{[1]}_{ij}=
\begin{cases}
1 & \text{if }\Tilde{b}_{ij}=1 \; ,\\
0 & \text{otherwise.}
\end{cases}
\end{equation}
we obtain that
\begin{equation}
\label{atilde-btilde} \forall i\exists j\colon
\Tilde{a}_{ij}^{[1]}=1 \enspace ,\quad \forall i\exists k\colon
\Tilde{b}_{ik}^{[1]}=1 \enspace .
\end{equation}
This means that both $\Tilde{A}^{[1]}$ and $\Tilde{B}^{[1]}$ are
incidence matrices of digraphs $\assgraph_1=(N,E_1)$ and
$\assgraph_2=(N,E_2)$, where each node has a nonzero number of
outgoing edges. These digraphs are the {\em saturation digraphs} of
$u$ with respect to $A$ and $B$, meaning that $(i,j)\in E_1$
(resp. $(i,j)\in E_2$) if, and only if, $a_{ij}u_j=u_i$
(resp. $b_{ij}u_j=u_i$). We recall the following well-known result.
\begin{proposition}[Baccelli et al.~\cite{BCOQ}]\label{sat-digraph}
Let $A\in\R_+^{n\times n}$ be irreducible and let $u\in\R_+^n$ be an eigenvector
of $A$. Then, the strongly connected components of the saturation
digraph of $u$ with respect to $A$ are the same as that of the critical digraph
$\crit(A)$.
\end{proposition}
This proposition tells us that the strongly connected components of
$\assgraph_1$ and $\assgraph_2$ are those of $\crit(A)$ and
$\crit(B)$.
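The following Python sketch carries out this construction for the pair of irreducible commuting matrices $A_1,A_2$ of Section~\ref{s:examples}, in the $(\max,+)$ encoding where the threshold $1$ becomes $0$: it scales both matrices by a common eigenvector, extracts the pattern matrices $\Tilde{A}^{[1]}$ and $\Tilde{B}^{[1]}$, and lists the nodes lying on cycles of the two saturation digraphs, which in view of Proposition~\ref{sat-digraph} are the critical nodes. The common eigenvector is taken as known here (it can be obtained by the method of Section~\ref{s:intersection}), and the helper names are ours.
\begin{verbatim}
# Common scaling and saturation digraphs, in the (max,+) encoding (threshold 0).
import numpy as np

NEG = float('-inf')

def mul(X, Y):                                    # max-plus matrix product
    n, p, m = X.shape[0], X.shape[1], Y.shape[1]
    Z = np.full((n, m), NEG)
    for i in range(n):
        for j in range(m):
            Z[i, j] = max(X[i, k] + Y[k, j] for k in range(p))
    return Z

def pattern(Atil):                                # the 0-1 matrix Atil^{[1]}
    return (Atil == 0.0).astype(int)

def cycle_nodes(P):                               # nodes lying on a cycle of the digraph of P
    R = (P > 0)
    for k in range(P.shape[0]):                   # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    return [i for i in range(P.shape[0]) if R[i, i]]

A1 = np.array([[-2.,   1., NEG],
               [-1.,  -1., -2.],
               [-1.,  NEG, -2.]])
A2 = np.array([[ 0.,  -1., -1.],
               [NEG,   0., -4.],
               [-3.,  NEG,  0.]])
u = np.array([1., 0., 0.])                        # a common eigenvector; lambda = 0 for both
assert np.allclose(mul(A1, u[:, None])[:, 0], u)
assert np.allclose(mul(A2, u[:, None])[:, 0], u)

A1til = A1 + u[None, :] - u[:, None]              # U^{-1} A U, i.e. entries a_ij + u_j - u_i
A2til = A2 + u[None, :] - u[:, None]
P1, P2 = pattern(A1til), pattern(A2til)
print(np.array_equal((P1 @ P2) > 0, (P2 @ P1) > 0))   # True: the 0-1 patterns commute
print(cycle_nodes(P1), cycle_nodes(P2))           # [0, 1] and [0, 1, 2]: the critical nodes
\end{verbatim}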
\subsection{Commuting Boolean matrices}
Now we study in more detail the case of Boolean matrices, to show that two
irreducible commuting matrices in max algebra always have a common eigennode.
In a similar way, the graphs of commuting Boolean matrices are studied
in~\cite[Proposition 10]{DO-10a} to make an observation about the general case.
We need a couple of simple facts, which combined with
Proposition~\ref{sat-digraph}, will provide the
connection between max algebra and the Boolean case.
\begin{lemma}\label{l:011}
If the matrices $A,B\in\R_+^{n\times n}$ are such that
$a_{ij}\leq 1$ and $b_{ij}\leq 1$ for all $i,j\in N$,
then $(AB)^{[1]}=A^{[1]}B^{[1]}$.
\end{lemma}
\begin{proof}
For all $i,k\in N$, one of the following two cases occurs:
\begin{equation}\label{comm-choices}
\bigoplus_{j=1}^n a_{ij}b_{jk}=1 \enspace \enspace \makebox{or } \enspace
\bigoplus_{j=1}^n a_{ij}b_{jk}<1 \enspace .
\end{equation}
In the first case of~\eqref{comm-choices}, there exists $h$
such that $a_{ih}b_{hk}=1$, which implies
$a_{ih}=b_{hk}=1$, since $a_{ij}\leq 1$ and
$b_{ij}\leq 1$ for all $i$ and $j$.
Passing to $A^{[1]}$ and $B^{[1]}$, we have
$a^{[1]}_{ih}=b^{[1]}_{hk}=1$ and thus
$a^{[1]}_{ih}b^{[1]}_{hk}=1$. Using this we obtain
\begin{equation}
\label{comm-1mats1}
\bigoplus_{j=1}^n a^{[1]}_{ij}b^{[1]}_{jk}=1 \enspace .
\end{equation}
In the second case of~\eqref{comm-choices},
there are no such $h$ as above, and we obtain
\begin{equation}
\label{comm-1mats2} \bigoplus_{j=1}^n a^{[1]}_{ij}b^{[1]}_{jk}=0 \enspace .
\end{equation}
It follows that $(AB)^{[1]}=A^{[1]}B^{[1]}$.
\end{proof}
We immediately deduce the following observation.
\begin{lemma}\label{a1b1-comm}
If the matrices $A,B\in\R_+^{n\times n}$ are such that $AB=BA$ and
$a_{ij}\leq 1$, $b_{ij}\leq 1$ for all $i,j\in N$,
then $A^{[1]}B^{[1]}=B^{[1]}A^{[1]}$.
\end{lemma}
This motivates us to study the Boolean case in more detail.
\begin{theorem}\label{t:commdigr}
Let $\assgraph_1$ and $\assgraph_2$ be two commuting digraphs
(meaning that their incidence matrices commute)
with nonzero out-degree at each node,
and let $\assgraph_1^{\mu}=(N_1^{\mu},E_1^{\mu})$ for $\mu=1,\ldots,m_1$ and
$\assgraph_2^{\nu}=(N_2^{\nu},E_2^{\nu})$ for $\nu=1,\ldots,m_2$ be the
strongly connected components of $\assgraph_1$ and $\assgraph_2$ respectively.
Then, there exists a cycle $c_1\in \assgraph_1$ such
that all nodes on this cycle belong to $\bigcup_{\nu =1}^{m_2}
N^{\nu}_2$, and a cycle $c_2\in \assgraph_2$ such that all nodes on
this cycle belong to $\bigcup_{\mu =1}^{m_1} N^{\mu}_1$.
\end{theorem}
\begin{proof}
Pick $\mu_1\in\{1,\ldots,m_1\}$ and consider the digraph
$\assgraph_2[N^{\mu_1}_1]$ induced by $N^{\mu_1}_1$.
Then, either this induced digraph has a cycle and the claim is true,
or it is acyclic. In the latter case, let $i\in N^{\mu_1}_1$ be a leaf in
$\assgraph_2[N_1^{\mu_1}]$, i.e. a node with no outgoing edges of
$\assgraph_2$ into $N_1^{\mu_1}$.
Let $M=\{j\mid (i,j)\in E_2\}$.
As $i$ is a leaf in $\assgraph_2[N_1^{\mu_1}]$,
we have $M\cap N_1^{\mu_1}=\emptyset$.
There is a cycle $c\in\assgraph_1$ which goes through $i$.
Select $j\in M$ and consider the path $c\circ(i,j)$
(first go around $c$ in $\assgraph_1$, then move $i\to j$ in
$\assgraph_2$).
As the digraphs commute, there is a path $P=(i,k)\circ P'$
connecting node $i$ with node $j$ such that $(i,k)\in E_2$
and the path $P'\in\assgraph_1$ is of the same length as $c$.
Hence, for each node $j\in M$ there exists a node $k\in M$
such that $k$ has access to $j$ in $\assgraph_1$.
This implies that some nodes in $M$ lie on a cycle in $\assgraph_1$
so that $M$ intersects a component
$\assgraph_1^{\mu_2}$ of $\assgraph_1$.
Consider the digraph $\assgraph_2[N_1^{\mu_2}]$.
If it is not acyclic then the claim is true,
otherwise we take $j\in M\cap N_1^{\mu_2}$ and
proceed to a leaf $k$ accessed by $j$ in $\assgraph_2[N_1^{\mu_2}]$.
We have obtained a path from $i$ to
$k$ in $\assgraph_2$, whose nodes lie in $\bigcup_{\mu=1}^{m_1}
N^{\mu}_1$. Arguing as above we can continue this path until we
obtain a cycle $c_2$ in $\assgraph_2$ which has all its nodes in
$\bigcup_{\mu=1}^{m_1} N^{\mu}_1$. The cycle $c_1$ in $\assgraph_1$
which has all its nodes in $\bigcup_{\nu=1}^{m_2} N^{\nu}_2$ is
obtained analogously. The claim is proved.
\end{proof}
Theorem~\ref{t:commdigr} implies notable facts about
the critical digraphs of two irreducible commuting matrices in max algebra.
\begin{theorem}\label{t:commoneigennode}
If two irreducible matrices $A,B\in\R_+^{n\times n}$ commute, then the
conclusion of Theorem~\ref{t:commdigr} holds for the strongly connected components
of $\crit(A)$ and $\crit(B)$. In particular, $A$ and $B$ have a common eigennode.
\end{theorem}
\begin{proof}
If $A,B\in\R_+^{n\times n}$ commute, then they have a common eigenvector $u$ by
Corollary~\ref{c:commoneig}. If these matrices are irreducible, then $u$
is positive and $U:=\diag(u)$ can be used to make a simultaneous
diagonal similarity scaling: $\Tilde{A}:=U^{-1}AU$ and $\Tilde{B}:=U^{-1}BU$.
Evidently, $\Tilde{A}\Tilde{B}=\Tilde{B}\Tilde{A}$. Also,
we have $\crit(\Tilde{A})=\crit(A)$ and $\crit(\Tilde{B})=\crit(B)$.
Notice that $\Tilde{A}^{[1]}$, resp. $\Tilde{B}^{[1]}$,
is the incidence matrix of the saturation digraph of $u$
with respect to $A$, resp. to $B$. These saturation digraphs will
be denoted by $\assgraph_1$ and $\assgraph_2$, respectively
(with the intention to use Theorem~\ref{t:commdigr}). By Lemma~\ref{a1b1-comm},
we have $\Tilde{A}^{[1]}\Tilde{B}^{[1]}=\Tilde{B}^{[1]}\Tilde{A}^{[1]}$.
As $\assgraph_1$ and $\assgraph_2$ are saturation digraphs,
each node in these digraphs has a nonzero out-degree.
Applying Theorem~\ref{t:commdigr}, we obtain that its conclusion
holds for the strongly connected components of
$\assgraph_1$ and $\assgraph_2$.
By Proposition~\ref{sat-digraph}, these components
are precisely the strongly connected components of $\crit(A)$ and $\crit(B)$.
Now the conclusion of Theorem~\ref{t:commdigr} also implies that $A$ and $B$
have a common eigennode.
\end{proof}
Let us consider a special case,
which typically occurs when $A$ and $B$ are chosen at random.
\begin{corollary}
Let two irreducible matrices $A,B\in\R_+^{n\times n}$ commute. If $\crit(A)=(N^A_c,E^A_c)$
and $\crit(B)=(N^B_c,E^B_c)$ both consist of just one cycle, then
$N_c^A=N_c^B$.
\end{corollary}
\section{Examples of commuting matrices in max algebra}\label{s:examples}
In this section we give several examples in max algebra, which will
appear now as the semiring $(\R \cup \{ -\infty \}, \max ,+)$, i.e.
the set $\R \cup \{ -\infty \}$ equipped with $\max$ as ``addition''
and the usual sum as ``multiplication''.
This semiring is isomorphic to $(\Rp, \max ,\times )$
via the logarithmic transform.
Consider the irreducible commuting matrices
\[
A_1=\begin{pmatrix}
-2 & 1 & -\infty \\
-1 & -1 & -2 \\
-1 & -\infty & -2
\end{pmatrix}
\enspace {\rm and }\enspace A_2= \begin{pmatrix}
0 & -1 & -1 \\
-\infty & 0 & -4 \\
-3 & -\infty & 0
\end{pmatrix} \enspace .
\]
Then, it is straightforward to check that $\lambda (A_1)= \lambda(A_2)= 0$,
$N_c^{A_1}=\{ 1,2 \} $ and $N_c^{A_2}=\{ 1,2,3 \} $.
Therefore, as claimed in Theorem~\ref{t:commoneigennode}, $A_1$
and $A_2$ have a common eigennode.
In order to compute their common eigenvectors,
we apply the method described in Section~\ref{s:intersection}.
Since
\[
Q(A_1)=\begin{pmatrix}
0 & 1 & -1 \\
-1 & 0 & -2 \\
-1 & 0 & -2
\end{pmatrix}
\enspace {\rm and }\enspace Q(A_2)= \begin{pmatrix}
0 & -1 & -1 \\
-7 & 0 & -4 \\
-3 & -4 & 0
\end{pmatrix} \enspace ,
\]
it follows that
\[
Q(A_1)Q(A_2)=\begin{pmatrix}
0 & 1 & -1 \\
-1 & 0 & -2 \\
-1 & 0 & -2
\end{pmatrix} \enspace .
\]
Then, by~\eqref{qa1ar} we have
\[
V(A_1,0)\cap V(A_2,0)=V(Q(A_1)Q(A_2),0)=\left\{\lambda (1,0,0)^T
\mid \lambda \in \R \cup \{ -\infty \} \right\} \enspace .
\]
The following example of commuting matrices illustrates
Lemma~\ref{l:uniqueevalue}. Let
\[
A=
\begin{pmatrix}
1 & -\infty & -\infty \\
1 & 0 & -\infty \\
0 & 1 & 1 \\
\end{pmatrix}
\enspace \makebox{ and }\enspace
B=
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \\
\end{pmatrix} \enspace .
\]
Then, $A$ and $B$ commute and $B$ is irreducible, and one checks that
$A$ satisfies the conclusions of Lemma~\ref{l:uniqueevalue}.
As an example of reducible commuting matrices, consider
\[
A_1=
\begin{pmatrix}
0 & -\infty & -\infty &-\infty \\
1 & 3 & -\infty & -\infty \\
2 & -\infty & -1 & -\infty \\
-\infty & -\infty & 0 & 2
\end{pmatrix}
\enspace \makebox{ and }\enspace
A_2=
\begin{pmatrix}
6 & -\infty & -\infty & -\infty \\
5 & 7 & -\infty & -\infty \\
8 & -\infty & 5 & -\infty \\
5 & -\infty & 6 & 8
\end{pmatrix} \enspace .
\]
The classes of these matrices are their diagonal elements.
Since the Perron roots of the classes (i.e. the diagonal entries
in this case) of each of these matrices are distinct,
we know by Theorem~\ref{t:distroots} that the
transitive closures of the reduced digraphs associated with these
matrices are the same, even though these digraphs are different, as can
be easily checked. By the same theorem, we know that the spectral
classes of the associated reduced digraphs coincide. In this case, for
both matrices the spectral classes are $2$ and $4$. Each of these
matrices has two different eigenvalues corresponding to their
spectral classes. The eigenvalues of $A_1$ are $3$ and $2$ and the
ones of $A_2$ are $7$ and $8$.
\section{Classical nonnegative matrices}\label{s:classical}
In this section we assume knowledge of some basic results on
nonnegative matrices found in e.g.~\cite{BP-94} or~\cite{Gan-59}. We
state analogs for nonnegative matrices in classical matrix algebra
of the results for matrices in max algebra proved in
Section~\ref{s:commoneig} (except for the last subsection) and in
Section~\ref{s:fnf}. The results and proofs are essentially
identical. We first need to redefine some symbols and terms.
Let $A \in \Rpnn$. Following standard terminology, we call an
eigenvalue $\mcm$ of $A$ a {\em distinguished eigenvalue} of $A$ if
$\mcm \geq 0$ and there is a nonnegative eigenvector corresponding
to it. In this section, $\Lambda(A)$ will be the set of
distinguished eigenvalues of $A$ and $V(A,\lambda)$ the convex cone
of nonnegative eigenvectors (and the $0$ vector) associated with a
distinguished eigenvalue $\lambda$. By the Perron-Frobenius theorem,
$\Lambda(A)$ is nonempty and the largest element in $\Lambda(A)$ is
called the {\em Perron root} of $A$. Moreover, any eigencone
$V(A,\lambda)$ is finitely generated, and the intersection of
finitely generated convex cones is again finitely generated.
Matrices leaving a cone invariant in $\Rpn$ (indeed in $\R^n$) have
been much studied, see e.g.~\cite{TS-07}. Proposition~\ref{p:common}
is well known in this context.
Lemma~\ref{l:InvEigenCone}, Theorem~\ref{t:commoneig},
Theorem~\ref{t:cv-rmats}, Corollary~\ref{c:commoneig} and their
proofs go through without further change to the classical
nonnegative case, except that we need to insert the adjective
``nonnegative'' in Corollary~\ref{c:commoneig}:
\begin{mycorollary:commoneigP}
If $A,B\in\R_+^{n\times n}$ commute, then they have a common
nonnegative eigenvector.
\end{mycorollary:commoneigP}
It follows that if $A$ and $B$ are commuting nonnegative matrices and
one of them is irreducible, then they have a common Perron vector.
Theorem~\ref{t:polynomials} and Corollary \ref{c:spineqs} are also
valid in the classical nonnegative case under the following
assumptions: the matrices $A_1,\ldots,A_r \in \Rpnn$ commute in
pairs and $p(x_1,\ldots,x_r)$ is a real polynomial such that
$p(A_1,\ldots,A_r)$ is nonnegative and, in the case of
Corollary~\ref{c:spineqs}, all the coefficients of $p(x_1,\ldots,x_r)$ are
nonnegative. In the latter, by the analog of Remark~\ref{r:spineqs},
we need to assume only one of the $A_i$ is irreducible.
Turning to Section~\ref{s:fnf}, we again construct the reduced
digraph of $A\in \Rpnn$ and we now label each class $\mu$ with its
classical Perron root $\ga_\mu$. By a theorem of
Frobenius~\cite{Fro-12}, we replace Theorem~\ref{t:fundspec} by:
\begin{mytheorem:spectrnodesP}
Let $A \in \Rpnn$ and $\gl \in \Rp$.
Then, a subset $U$ of $N$ is the support of a nonnegative
eigenvector associated with $\gl$ if, and only if,
\begin{enumerate}[(i)]
\item\label{t:fundspec1P}
There is an initial segment $I$ such that $U = \cup_{\nu \in I}N^\nu$,
\item\label{t:fundspec2P}
All final classes $\nu$ in $I$ are premier spectral and satisfy
$\ga_\nu =\gl$.
\end{enumerate}
\end{mytheorem:spectrnodesP}
See e.g.~\cite{Sch-86}. We observe that the supports of nonnegative
eigenvectors of $A \in \Rpnn$ are completely determined in
Theorem~\ref{t:fundspec}A by (i) the classes (i.e. the strongly connected
components) of $\assgraph$, (ii) the Perron roots of these classes
and (iii) the access relations of $\redgraph$ (equivalently the edges
of $\redgraph^*$). A similar remark holds for
Theorem~\ref{t:fundspec} and other results in
Sections~\ref{s:fnf} and~\ref{s:classical}.
We restate Corollary~\ref{c:initnodes} as:
\begin{mycorollary:initnodesP}
Let $A \in \Rpnn$. Then,
\begin{enumerate}[(i)]
\item\label{c:initnodes1P}
$\gl$ is a distinguished eigenvalue if, and only if, there is a
premier spectral class $\nu$ such that $\ga_\nu = \gl$;
\item\label{c:initnodes2P}
$\nu$ is a premier spectral class if, and only if, there exists a
nonnegative eigenvector with support $\intl(\nu)$;
\item\label{c:initnodes3P}
If $\nu$ is a premier spectral class, then any nonnegative
eigenvector associated with $\ga_{\nu}$ whose support is contained
in $\intl(\nu)$ has its support equal to $\intl(\nu)$;
\item\label{c:initnodes4P}
If the reduced digraph of $A$ has a unique premier spectral class
$\nu$ with Perron root $\ga_{\nu}$, then any nonnegative eigenvector
associated with $\ga_{\nu}$ has support $\intl(\nu)$;
\item\label{c:initnodes5P}
If the Perron roots of all classes are distinct, then all
nonnegative eigenvectors have support $\intl(\nu)$ for some premier
spectral class $\nu$.
\end{enumerate}
\end{mycorollary:initnodesP}
The analog of Corollary~\ref{c:posevec} in nonnegative linear
algebra is well known, but we need to replace~\eqref{c:posevec2}
of Corollary~\ref{c:posevec} by:
``The Perron root of any final class is $\gl(A)$ and all final
classes are premier spectral''. Lemma~\ref{l:commutant}
goes through without change except that we need to replace
``eigenvalue'' with ``distinguished eigenvalue'' and
Lemma~\ref{l:icommutant} also holds in nonnegative linear algebra.
In the classical nonnegative case we obtain the following known
stronger form of Lemma~\ref{l:uniqueevalue} which may be found
on~\cite[p.53]{BP-94}. We give a short proof along the lines of the
proof of Lemma~\ref{l:uniqueevalue}.
\begin{mylemma:uniqueevalueP}\label{l:uniqueevaluePrima}
If $A,B\in\R_+^{n\times n}$ commute and $B$ is irreducible,
then the Perron root of $A$ is its unique distinguished eigenvalue.
Moreover, if $A$ is reducible, it is completely reducible
(viz.\ the direct sum of irreducible matrices after a permutation similarity).
\end{mylemma:uniqueevalueP}
\begin{proof}
We repeat the proof of~\eqref{l:uniqueevalue1} of
Lemma~\ref{l:uniqueevalue} to show that both $A$ and $A^T$ have
positive eigenvectors. This implies that all initial and final classes
in the reduced digraph of $A$ are premier spectral with Perron root $\gl(A)$.
But a premier spectral class cannot have access to another premier spectral
class with the same Perron root. It follows that all initial classes are
final and vice versa. This means that a class has access only to itself,
which proves the lemma.
\end{proof}
Theorem~\ref{t:distroots} also holds in nonnegative algebra, with
exception of the last part of~\eqref{t:d3} whose proof is specific
to max algebra. Thus we obtain the following main theorem of this
section.
\begin{mytheorem:distrootsP}
Suppose that $A_1,\ldots ,A_r\in\R_+^{n\times n}$ pairwise commute
and that all classes of $A_i$, for each $i\in\{1,\ldots,r\}$, have
distinct Perron roots. Then,
\begin{enumerate}[(i)]
\item\label{t:d1pr}
All classes of $A_1,\ldots,A_r$ and $A_1+\cdots+ A_r$ coincide;
\item\label{t:d2pr}
The transitive closures of the reduced digraphs of
$A_1,\ldots,A_r$ and $A_1+\cdots+ A_r$ coincide;
\item\label{t:d3pr}
The reduced digraphs of $A_1,\ldots,A_r$ have the same premier
spectral classes, which are premier spectral classes of
$A_1+\cdots+ A_r$. In particular,
$A_1,\ldots,A_r$ have the same number of distinct distinguished eigenvalues;
\item\label{t:d4pr}
Let $\mu_1,\ldots,\mu_m$ be the common premier spectral classes of
$A_1,\ldots,A_r$ and denote the Perron root of the $\mu_j$-th class
of $A_i$ by $\ga^j_i$. Then, for any real polynomial
$p(x_1,\ldots,x_r)$ such that $p(A_1,\ldots,A_r)$ is nonnegative,
the distinguished eigenvalues of $p(A_1,\ldots,A_r)$ are precisely
$p(\alpha^j_1,\ldots,\alpha^j_r)$ for $j=1,\ldots,m$ (possibly with
repetitions).
\end{enumerate}
\end{mytheorem:distrootsP}
We end this section with an example to illustrate Theorem~\ref{t:distroots}A.
\begin{example} {\rm
Let
\[
A =
\begin{pmatrix}
10 & 0 & 0\\
5 & 0 & 0\\
2 & 3 & 3
\end{pmatrix} \enspace \makebox{ and } \enspace
B =
\begin{pmatrix}
3 & 0 & 0\\
1 & 1 & 0\\
0 & 1 & 2
\end{pmatrix} \enspace .
\]
Then, $AB = BA$. The classes of $A$ and $B$ are their diagonal
elements, and the skeleton of their reduced digraphs (meaning the
diagram of cover relations) is
\[
1 \leftarrow 2 \leftarrow 3\; .
\]
The premier spectral classes of both matrices are $1$ and $3$ and
the distinguished eigenvalues are the corresponding entries. Their
common (nonnegative) eigenvectors are $(2, 1, 1)^T$ and
$(0, 0, 1)^T$, respectively.
Of course, $A^T$ and $B^T$ also commute. Note that the skeleton of
their reduced digraphs is obtained by reversing the arrows in the
diagram above. The only spectral class of $A^T$ or $B^T$ is 1 and
their common eigenvector is $(1, 0, 0)^T$.
We also observe that $p(A,B) = A^2B - AB \geq 0$ satisfies the
conditions of~\eqref{t:d4pr} of Theorem~\ref{t:distroots}A.
}
\end{example}
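The assertions of this example are easily verified numerically; the following NumPy sketch (Python) checks the commutativity, the two common eigenvectors, and the distinguished eigenvalues of $p(A,B)=A^2B-AB$.
\begin{verbatim}
# Numerical check of the example above (classical nonnegative algebra).
import numpy as np

A = np.array([[10., 0., 0.],
              [ 5., 0., 0.],
              [ 2., 3., 3.]])
B = np.array([[ 3., 0., 0.],
              [ 1., 1., 0.],
              [ 0., 1., 2.]])

print(np.array_equal(A @ B, B @ A))               # True: A and B commute

v1 = np.array([2., 1., 1.])                       # common eigenvector, eigenvalues 10 and 3
v2 = np.array([0., 0., 1.])                       # common eigenvector, eigenvalues  3 and 2
print(np.allclose(A @ v1, 10 * v1), np.allclose(B @ v1, 3 * v1))   # True True
print(np.allclose(A @ v2,  3 * v2), np.allclose(B @ v2, 2 * v2))   # True True

P = A @ A @ B - A @ B                             # p(A,B) = A^2 B - A B
print((P >= 0).all())                             # True: p(A,B) is nonnegative
print(np.sort(np.linalg.eigvals(P).real).round(6))
# approximately [0, 12, 270]; the distinguished eigenvalues are 12 = p(3,2) and 270 = p(10,3)
\end{verbatim}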
{\em Acknowledgement.}\/ We thank M. Drazin, T. Hawkins and R. Horn
for comments which have helped to improve this paper.
P. Butkovi\v{c} and B.~S. Tam deserve particular thanks for their
careful reading of our manuscript and many suggestions.
| {
"timestamp": "2010-05-11T02:01:15",
"yymm": "1005",
"arxiv_id": "1005.1424",
"language": "en",
"url": "https://arxiv.org/abs/1005.1424",
"abstract": "This paper studies commuting matrices in max algebra and nonnegative linear algebra. Our starting point is the existence of a common eigenvector, which directly leads to max analogues of some classical results for complex matrices. We also investigate Frobenius normal forms of commuting matrices, particularly when the Perron roots of the components are distinct. For the case of max algebra, we show how the intersection of eigencones of commuting matrices can be described, and we consider connections with Boolean algebra which enables us to prove that two commuting irreducible matrices in max algebra have a common eigennode.",
"subjects": "Rings and Algebras (math.RA)",
"title": "On commuting matrices in max algebra and in classical nonnegative algebra",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587218253717,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.709797884669417
} |
https://arxiv.org/abs/2212.03915 | Combinatorial generation via permutation languages. V. Acyclic orientations | In 1993, Savage, Squire, and West described an inductive construction for generating every acyclic orientation of a chordal graph exactly once, flipping one arc at a time. We provide two generalizations of this result. Firstly, we describe Gray codes for acyclic orientations of hypergraphs that satisfy a simple ordering condition, which generalizes the notion of perfect elimination order of graphs. This unifies the Savage-Squire-West construction with a recent algorithm for generating elimination trees of chordal graphs. Secondly, we consider quotients of lattices of acyclic orientations of chordal graphs, and we provide a Gray code for them, addressing a question raised by Pilaud. This also generalizes a recent algorithm for generating lattice congruences of the weak order on the symmetric group. Our algorithms are derived from the Hartung-Hoang-Mütze-Williams combinatorial generation framework, and they yield simple algorithms for computing Hamilton paths and cycles on large classes of polytopes, including chordal nestohedra and quotientopes. In particular, we derive an efficient implementation of the Savage-Squire-West construction. Along the way, we give an overview of old and recent results about the polyhedral and order-theoretic aspects of acyclic orientations of graphs and hypergraphs. | \section{Introduction}
In 1953, Frank Gray registered a patent~\cite{gray_1953} for a method to list all binary words of length~$n$ in such a way that any two consecutive words differ in exactly one bit, and he called it the \emph{binary reflected code}.
More generally, a \emph{combinatorial Gray code}~\cite{ruskey_2016} is a listing of all objects of a combinatorial class such that any two consecutive objects differ by a `small local change', sometimes also called a `flip'.
Over the years, Gray codes have been designed for numerous classes of combinatorial objects, including permutations, combinations, integer and set partitions, Catalan objects (binary trees, triangulations etc.), linear extensions of a poset, spanning trees or matchings of a graph etc.; see the surveys~\cite{MR1491049,muetze_2022}.
This area has been the subject of intensive research combining ideas from combinatorics, algorithms, graph theory, order theory, algebra, and discrete geometry.
This enabled recent exciting progress on long-standing problems in this area~(see e.g.~\cite{MR3775825}), and the development of versatile general techniques for designing Gray codes~\cite{MR3126386,MR2844089,MR4391718}.
One of the main applications of Gray codes is to efficiently generate a class of combinatorial objects~(see e.g.~\cite{MR2807540}), and many such algorithms are described in the most recent volume of Knuth's book `The Art of Computer Programming'~\cite{MR3444818}.
\subsection{The Steinhaus-Johnson-Trotter algorithm}
\label{sec:sjt}
The \emph{Steinhaus-Johnson-Trotter algorithm}, also known as `plain changes', is one of the classical Gray codes for generating permutations.
Specifically, it lists all permutations of $[n]:=\{1,2,\ldots,n\}$ so that every pair of successive permutations differs by exactly one adjacent transposition, i.e., by swapping two neighboring entries of the permutation.
Using suitable auxiliary arrays, this algorithm can be implemented in time~$\cO(1)$ per visited permutation.
The Steinhaus-Johnson-Trotter ordering of permutations can be defined inductively as follows:
For $n=1$ the listing consists only of a single permutation~1.
To construct the listing for permutations of~$[n]$ for $n\geq 2$, we consider the listing for permutations of~$[n-1]$, and we replace every permutation~$\pi$ in it by the sequence of permutations obtained by inserting the new largest symbol~$n$ in all possible positions in~$\pi$ from right to left, or from left to right, alternatingly.
It is easy to check that this indeed gives a listing of all permutations of~$[n]$ by adjacent transpositions.
Moreover, as $n!$ is even for $n\geq 2$, the listing is cyclic, i.e., the last and first permutation differ only in an adjacent transposition.
For example, for $n=2$ we get the listing $1{\color{red}2},{\color{red}2}1$, for $n=3$ we get $12{\color{red}3},1{\color{red}3}2,{\color{red}3}12,{\color{red}3}21,2{\color{red}3}1,21{\color{red}3}$, and for $n=4$ we get $123{\color{red}4},12{\color{red}4}3,1{\color{red}4}23,{\color{red}4}123,{\color{red}4}132,1{\color{red}4}32,13{\color{red}4}2,132{\color{red}4},312{\color{red}4},31{\color{red}4}2,3{\color{red}4}12,{\color{red}4}312,{\color{red}4}321,3{\color{red}4}21,32{\color{red}4}1,321{\color{red}4},231{\color{red}4},\linebreak[4] 23{\color{red}4}1,2{\color{red}4}31,{\color{red}4}231,{\color{red}4}213,2{\color{red}4}13,21{\color{red}4}3,213{\color{red}4}$; see Figure~\ref{fig:p4sjt}.
In those listings, the newly inserted symbol~$n$ is highlighted, which allows tracking its zigzag movement.
Williams~\cite{MR3126386} found a strikingly simple equivalent description of the Steinhaus-Johnson-Trotter ordering via the following greedy algorithm: Start with the identity permutation, and repeatedly perform an adjacent transposition with the largest possible value that yields a previously unvisited permutation.
The results in this paper can be seen as far-ranging generalizations of these two alternative descriptions of the same fundamental ordering.
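For concreteness, here is a short Python sketch of the greedy description (a readable illustration, not the $\cO(1)$-per-permutation implementation mentioned above): we interpret the `value' of an adjacent transposition as the larger of the two swapped entries, and break ties, should they arise, by the leftmost position.
\begin{verbatim}
# Greedy description of the Steinhaus-Johnson-Trotter order (plain changes); sketch only.
def plain_changes(n):
    perm = list(range(1, n + 1))
    visited = {tuple(perm)}
    order = [tuple(perm)]
    while True:
        best = None
        for i in range(n - 1):                     # adjacent transposition at positions i, i+1
            nxt = perm[:]
            nxt[i], nxt[i + 1] = nxt[i + 1], nxt[i]
            if tuple(nxt) not in visited:
                value = max(perm[i], perm[i + 1])  # larger of the two swapped entries
                if best is None or value > best[0]:
                    best = (value, nxt)
        if best is None:                           # no adjacent transposition leads anywhere new
            return order
        perm = best[1]
        visited.add(tuple(perm))
        order.append(tuple(perm))

print(plain_changes(3))
# [(1, 2, 3), (1, 3, 2), (3, 1, 2), (3, 2, 1), (2, 3, 1), (2, 1, 3)]
\end{verbatim}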
\subsection{Flip graphs, lattices, and polytopes}
\label{sec:flip}
Any Gray code problem gives rise to a corresponding \emph{flip graph}, which has as vertices the combinatorial objects of interest, and an edge between any two objects that differ by the specified flip operation.
\begin{wrapfigure}{r}{0.5\textwidth}
\centering
\includegraphics[page=1]{p4sjt}
\caption{The 3-dimensional permutohedron with the Steinhaus-Johnson-Trotter Hamilton path.
The start and end vertex are highlighted by a triangle and diamond, respectively, and can be joined to a Hamilton cycle.}
\label{fig:p4sjt}
\vspace{-3mm}
\end{wrapfigure}
For example, the flip graph on binary words of length~$n$ under flips that change a single bit is the $n$-dimensional hypercube.
Moreover, the flip graph for permutations under adjacent transpositions discussed in the previous section is the Cayley graph of the symmetric group generated by adjacent transpositions.
Another heavily studied example is the flip graph on binary trees under tree rotations~\cite{MR928904,MR3197650}.
Clearly, computing a Gray code for a set of combinatorial objects amounts to traversing a Hamilton path in the corresponding flip graph.
In particular, Hamilton paths in the three aforementioned flip graphs can be computed by the binary reflected code, the Steinhaus-Johnson-Trotter algorithm, and by an algorithm due to Lucas, Roelants van Baronaigien, and Ruskey~\cite{MR1239499}, respectively.
It turns out that many flip graphs can be equipped with a poset structure and realized as polytopes, i.e., they are cover graphs of certain lattices, and 1-skeleta of certain high-dimensional polytopes.
For example, the $n$-dimensional hypercube is the cover graph of the Boolean lattice and the skeleton of the Cartesian product~$[0,1]^n$.
Similarly, the flip graph on permutations under adjacent transpositions is the cover graph of the weak order on the symmetric group, and the skeleton of the \emph{permutohedron}; see Figure~\ref{fig:p4sjt}.
Lastly, the flip graph on binary trees under rotations is the cover graph of the Tamari lattice and the skeleton of the \emph{associahedron}.
Generalizations of these lattices and polytopes and the associated combinatorial structures have been the subject of intensive research in algebraic and polyhedral combinatorics; see Figure~\ref{fig:results}.
The theory of \emph{generalized permutohedra}~\cite{MR2520477,MR2487491,aguiar_ardila_2017} and of \emph{lattice congruences} and their \emph{quotients}~\cite{MR3221544,MR3645056}, in particular, provides us with a rich framework that contains all previous three examples as very special cases of a much broader picture.
Specifically, in the next three sections we discuss the generalizations shown one level above the bottom in Figure~\ref{fig:results}, namely acyclic orientations, elimination trees and lattice quotients.
\subsection{From permutations to acyclic orientations}
\label{sec:acyclic}
The starting point of this work are Gray codes for acyclic orientations of a graph.
Given a simple graph~$G$, an \emph{acyclic orientation} of~$G$ is a digraph~$D$ obtained by orienting every edge of~$G$ in one of two ways so that $D$ does not contain any directed cycles.
The goal is to list all acyclic orientations of~$G$ in such a way that any two consecutive orientations differ by reorienting a single arc, which we refer to as an \emph{arc flip}.
\begin{figure}[h!]
\centering
\includegraphics[page=1]{acyclic}
\caption{(a1) An acyclic orientation~$D$ of a graph; (a2) the transitive reduction of~$D$, which contains precisely the flippable arcs in~$D$; (b1) an acyclic orientation~$D'$ of the complete graph; (b2) the transitive reduction of~$D'$ and the corresponding permutation.
}
\label{fig:acyclic}
\end{figure}
It is easy to see that in an acyclic orientation~$D$, the flippable arcs are precisely the arcs that are in the \emph{transitive reduction} of~$D$, which is the minimum subset of arcs that has the same reachability relations (i.e., the same transitive closure) as~$D$; see Figure~\ref{fig:acyclic}~(a1)+(a2).
If $G$ is the complete graph with vertex set~$[n]$, then the transitive reduction of any of its acyclic orientations~$D$ is a path, directed from the source to the sink of the orientation.
Consequently, we can interpret the vertex labels along this path as a permutation of~$[n]$; see Figure~\ref{fig:acyclic}~(b1)+(b2).
Furthermore, an arc flip corresponds to an adjacent transposition in this permutation.
Consequently, the flip graph on acyclic orientations of the complete graph is the skeleton of the permutohedron.
In general, the flip graph on the acyclic orientations of a graph~$G$ is the skeleton of a polytope known as the \emph{graphical zonotope} of~$G$~\cite{G77,MR712251,MR2383131}.
\begin{figure}[t!]
\centering
\includegraphics[page=2]{acyclic}
\caption{Illustration of the Savage-Squire-West proof.
In the neighborhood of~$v$, only the transitive reduction is shown, whereas transitive arcs are omitted for simplicity.}
\label{fig:acyclic-flip}
\end{figure}
In general, not all (skeletons of) graphical zonotopes admit a Hamilton path or cycle, and we do not know of any simple conditions on the graph~$G$ for this to hold.
Clearly, the flip graph on acyclic orientations is bipartite for any graph~$G$, and if the partition classes have sizes that differ by more than~1, then this rules out the existence of a Hamilton path, a phenomenon that occurs for example if $G$ is a wheel graph with an even number of spokes~\cite{MR1267311}.
In this context, let us mention that counting the number of acyclic orientations of a graph is {\#}P-complete~\cite{MR830652}.
On the positive side, Savage, Squire and West~\cite{MR1267311} showed that the flip graph on acyclic orientations of~$G$ has a Hamilton cycle if~$G$ is chordal.
Their proof is a straightforward generalization of the Steinhaus-Johnson-Trotter construction, so we describe it here, with the goal of generalizing it even further subsequently.
A graph is \emph{chordal} if every induced cycle has length~3.
It is well-known that every chordal graph~$G$ has a \emph{simplicial} vertex~$v$, i.e., a vertex whose neighborhood in the graph is a clique.
We remove~$v$ from the graph, and by induction we obtain a Gray code for the acyclic orientations of~$G-v$; see Figure~\ref{fig:acyclic-flip}~(a).
Let $k$ be the number of neighbors of~$v$ in~$G$.
To construct the listing of acyclic orientations of~$G$, we replace every acyclic orientation in the listing for~$G-v$ by the sequence of $k+1$ acyclic orientations obtained by adding~$v$ and orienting the edges incident with~$v$ in all possible ways (that yield an acyclic orientation).
Specifically, since the neighborhood of~$v$ is a clique, whose transitive reduction is a path, there are precisely $k+1$ valid acyclic orientations obtained by adding~$v$, and they differ in a sequence of arc flips of arcs incident with~$v$, and this sequence starts and ends with~$v$ being a sink or a source; see Figure~\ref{fig:acyclic-flip}~(b).
In the Gray code for the acyclic orientations of~$G$, the vertex~$v$ alternates or `zigzags' between being sink or source.
As the number of acyclic orientations of any graph~$G$ with at least one edge is even (consider the involution on the set of all acyclic orientations that reorients every arc), the resulting ordering is cyclic, i.e., the last and first acyclic orientation differ only in an arc flip.
For $G$ being a complete graph, the resulting ordering of acyclic orientations and their corresponding permutations is exactly the Steinhaus-Johnson-Trotter ordering.
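The inductive construction translates directly into a short recursive program. The following Python sketch (an illustration, not an optimized implementation) takes a perfect elimination order $v_1,\ldots,v_n$, i.e.\ an order in which each $v_i$ is simplicial in the subgraph induced by $\{v_i,\ldots,v_n\}$, represents an acyclic orientation as the set of its arcs, and realizes the zigzag of the newly inserted vertex by alternating the insertion direction; all names are ours.
\begin{verbatim}
# Savage-Squire-West Gray code for acyclic orientations of a chordal graph (sketch).
# 'peo' is a perfect elimination order (peo[0] is simplicial, and so on recursively);
# an acyclic orientation is represented as a frozenset of arcs (u, w), one per edge.
def ssw(peo, adj):
    if not peo:
        return [frozenset()]
    v, rest = peo[0], peo[1:]
    nbrs = [w for w in rest if w in adj[v]]       # a clique, since v is simplicial
    listing, forward = [], True
    for D in ssw(rest, adj):
        # topological order of the clique 'nbrs' inside D (a transitive tournament)
        topo = sorted(nbrs, key=lambda u: -sum(((u, w) in D) for w in nbrs))
        k = len(topo)
        positions = range(k + 1) if forward else range(k, -1, -1)
        for p in positions:                       # insert v at position p in the order
            arcs = {(w, v) if i < p else (v, w) for i, w in enumerate(topo)}
            listing.append(D | frozenset(arcs))
        forward = not forward                     # the zigzag of v (sink <-> source)
    return listing

# Example: the triangle {1,2,3} with a pendant vertex 4 attached to 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
orientations = ssw([4, 1, 2, 3], adj)
print(len(orientations), len(set(orientations)))  # 12 12: every orientation appears once
for D1, D2 in zip(orientations, orientations[1:]):
    assert len(D1 ^ D2) == 2                      # consecutive orientations differ by one flip
\end{verbatim}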
\subsection{From permutations to elimination trees}
\label{sec:elim}
\begin{figure}[b!]
\centering
\includegraphics[page=1]{elim}
\caption{(a1) A graph~$G$; (a2) an elimination tree~$T$ of~$G$; (b1) the complete graph; (b2) an elimination tree of the complete graph and the corresponding permutation.
}
\label{fig:elim}
\end{figure}
An elimination tree~$T$ of a connected graph~$G$ is an unordered rooted tree obtained as follows: We remove a vertex~$v$ of~$G$ which becomes the root of~$T$, and we recurse on the connected components of~$G-v$, whose elimination trees become the subtrees of~$v$ in~$T$; see Figure~\ref{fig:elim}~(a1)+(a2).
The goal is to list all elimination trees of~$G$ in such a way that any two consecutive trees differ by a \emph{rotation}, which is the result of swapping the removal order of two vertices that form a parent-child relationship in the elimination tree; see Figure~\ref{fig:hyper2}~(a)+(b).
Clearly, every elimination tree of a complete graph with vertex set~$[n]$ is a path, which can be interpreted as a permutation of~$[n]$ by reading the labels of the path from the root to the leaf; see Figure~\ref{fig:elim}~(b1)+(b2).
Furthermore, a tree rotation corresponds to an adjacent transposition in this permutation.
Consequently, the flip graph on elimination trees of the complete graph is the skeleton of the permutohedron.
In general, the flip graph on elimination trees of a graph~$G$ is the skeleton of a polytope known as the \emph{graph associahedron} of~$G$~\cite{MR2239078,MR2479448,MR2487491}.
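To make the definition concrete, the following Python sketch builds the elimination tree determined by a given elimination order (the representation and names are ours): the first vertex of the order lying in the current component becomes the root, and the construction recurses on the connected components of the rest.
\begin{verbatim}
# Building the elimination tree of a connected graph from a given elimination order.
# Sketch only; a tree is returned as a pair (root, list of subtrees).
def elimination_tree(vertices, adj, order):
    root = next(v for v in order if v in vertices)     # first vertex of the order used here
    rest = vertices - {root}
    comps, seen = [], set()
    for s in rest:                                     # connected components of G - root
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in adj[u] if w in rest and w not in comp)
        seen |= comp
        comps.append(comp)
    return (root, [elimination_tree(c, adj, order) for c in comps])

# Example: the 4-cycle 1-2-3-4-1 with elimination order 1, 2, 3, 4.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(elimination_tree({1, 2, 3, 4}, adj, [1, 2, 3, 4]))
# (1, [(2, [(3, [(4, [])])])]) : removing 1 leaves the path 2-3-4
\end{verbatim}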
\tikzset{
double color fill/.code 2 args={
\pgfdeclareverticalshading[%
tikz@axis@top,tikz@axis@middle,tikz@axis@bottom%
]{diagonalfill}{100bp}{%
color(0bp)=(tikz@axis@bottom);
color(43.9bp)=(tikz@axis@bottom);
color(44bp)=(tikz@axis@middle);
color(44.1bp)=(tikz@axis@top);
color(100bp)=(tikz@axis@top)
}
\tikzset{shade, left color=#1, right color=#2, shading=diagonalfill}
}
}
\begin{figure}
\centering
\makebox[0cm]{
\begin{tikzpicture}[node distance=0.7cm and 0.3cm, auto]
\scriptsize
\tikzset{
oldnode/.style={rectangle,rounded corners,draw=black, fill=yellow!20, very thick, inner sep=1mm, minimum size=3em, text centered},
newnode/.style={rectangle,rounded corners,draw=black, fill=red!40, very thick, inner sep=1mm, minimum size=3em, text centered},
specialnode/.style={rectangle,rounded corners,draw=black, double color fill={yellow!20}{red!40}, shading angle=0,very thick, inner sep=1mm, minimum size=3em, text centered},
myarrow/.style={-, >=latex', shorten >=1pt, thick},
}
\node[newnode] (AOhyper) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:res1} \\
C: \textbf{Acyclic orientations of hypergraphs} \\
\hspace{4mm}\textbf{in hyperfect elimination order} \\ \hline
P: Hypergraphic polytopes \\ \hline
H: New result: Theorem~\ref{thm:mainhyper} \\\hline
A: time $\cO(\Delta n)$, space $\cO(\Delta n^2)$ \\
\hspace{4mm}$\Delta=\Delta(\cH)$ max.\ degree, $n=|V|$ \\
\hspace{4mm}$\rightarrow$ Section~\ref{sec:algo}
\end{tabular}
};
\node[newnode, right=of AOhyper] (AOquotient) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:res2} \\
L: \textbf{Quotients of acyclic reorientation} \\
\hspace{4mm}\textbf{lattices of peo-consistent digraphs} \\ \hline
P: Quotientopes \\ \hline
H: New result: Theorem~\ref{thm:mainquotient}
\end{tabular}
};
\node[newnode, below left=0.7cm and -3cm of AOhyper] (AObuilding) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:res1} \\
C: \textbf{Acyclic orientations} \\
\hspace{4mm}\textbf{of chordal building sets} \\ \hline
P: Chordal nestohedra \\ \hline
H: New result: Theorem~\ref{thm:mainbuild}
\end{tabular}
};
\node[oldnode, below=of AObuilding] (ETgraph) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:elim} \\
C: \textbf{Elimination trees} \\
\hspace{4mm}\textbf{of chordal graphs} \\ \hline
P: Chordal graph associahedra \\ \hline
H: \cite{MR3383157,DBLP:conf/soda/CardinalMM22} \\\hline
A: time $\cO(\sigma)$, space $\cO(n^2)$ \\
\hspace{4mm}$\sigma=\sigma(G)$ max.\ induced star, $n=|V|$
\end{tabular}
};
\node[specialnode, right=of ETgraph] (AOgraph) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:acyclic} \\
C: \textbf{Acyclic orientations} \\
\hspace{4mm}\textbf{of chordal graphs} \\ \hline
P: Chordal graph zonotopes \\ \hline
H: \cite{MR1267311} \\\hline
A: time $\cO(\log \omega)$, space $\cO(n^2)$ \\
\hspace{4mm}$\omega=\omega(G)$ max.\ clique, $n=|V|$ \\
\hspace{4mm}$\rightarrow$ Section~\ref{sec:algo} and Theorem~\ref{thm:SSW-algo}
\end{tabular}
};
\node[oldnode, right=of AOgraph, yshift=-5.5mm] (WOquotient) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:quotient} \\
L: \textbf{Quotients of} \\
\hspace{4mm}\textbf{the weak order} \\ \hline
P: Quotientopes \\ \hline
H: \cite{MR4344032}
\end{tabular}
};
\node[oldnode, below=1.55cm of ETgraph, xshift=-14mm] (Bin) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:flip} \\
C: \textbf{Binary words} \\ \hline
L: Boolean lattice \\ \hline
P: Hypercube \\ \hline
H: Binary reflected code \\ \hline
A: time $\cO(1)$, space $\cO(n)$ \\
\hspace{4mm}$n=$ word length
\end{tabular}
};
\node[oldnode, below=1.4cm of AOgraph, xshift=-26.5mm, yshift=0mm] (WO) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:flip} \\
C: \textbf{Permutations} \\ \hline
L: Weak order \\ \hline
P: Permutohedron \\ \hline
H: Steinhaus-Johnson-Trotter \\ \hline
A: time $\cO(1)$, space $\cO(n)$ \\
\hspace{4mm}$n=$ permutation length
\end{tabular}
};
\node[oldnode, below=1.4cm of WOquotient, xshift=-26.5mm] (BT) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:flip} \\
C: \textbf{Binary trees} \\ \hline
L: Tamari lattice \\ \hline
P: Associahedron \\ \hline
H: \cite{MR1239499} \\ \hline
A: time $\cO(1)$, space $\cO(n)$ \\
\hspace{4mm}$n=$ number of nodes
\end{tabular}
};
\node[oldnode, right=of BT, xshift=-0.5mm] (Rect) {
\begin{tabular}{l}
\hfill$\rightarrow$ Section~\ref{sec:quotient} \\
C: \textbf{Diagonal / generic} \\
\hspace{4mm}\textbf{rectangulations} \\ \hline
L: $\mathrm{dRec}_n$ \cite{MR2871762} / $\mathrm{gRec}_n$ \cite{meehan_2019} \\ \hline
P: $P_\mathrm{dRec}$ \cite{MR2871762} / quotientope \\ \hline
H: \cite{perm_series_iii} \\ \hline
A: time $\cO(1)$, space $\cO(n)$ \\
\hspace{4mm}$n=$ number of rectangles
\end{tabular}
};
\draw[myarrow] (AOhyper.south) -- (AObuilding.north);
\draw[myarrow] (AOquotient.south) -- (AOgraph.north);
\draw[myarrow] (AObuilding.south) -- (ETgraph.north);
\draw[myarrow] (AOhyper.south) -- (AOgraph.north);
\draw[myarrow] (AOgraph.south) -- (WO.north);
\draw[myarrow] (ETgraph.south) -- (WO.north);
\draw[myarrow] (WOquotient.south) -- (WO.north);
\draw[myarrow] (ETgraph.south) -- (Bin.north);
\draw[myarrow] (Bin.north) -- (AOgraph.south);
\draw[myarrow] (Bin.north) -- (WOquotient.south);
\draw[myarrow] (AOquotient.south) -- (WOquotient.north);
\draw[myarrow] (BT.north) -- (ETgraph.south);
\draw[myarrow] (BT.north) -- (WOquotient.south);
\draw[myarrow] (Rect.north) -- (WOquotient.south);
\end{tikzpicture}
}
\medskip
\caption{Inclusion diagram of combinatorial families (C), lattices (L), polytopes (P), Hamiltonicity results (H), and corresponding algorithmic results (A).
More general objects are above their specialized counterparts.
New results are highlighted in red.
Running times are per generated object, whereas the space refers to the total space needed (without storing previous objects).
Section references indicate where those results are discussed in more detail.
}
\label{fig:results}
\end{figure}
Manneville and Pilaud~\cite{MR3383157} showed that the skeleton of the graph associahedron of~$G$ admits a Hamilton cycle for any graph~$G$ with at least two edges.
In~\cite{DBLP:conf/soda/CardinalMM22}, we present a simple algorithm for computing a Hamilton path on the graph associahedron for the case when $G$ is a chordal graph.
This algorithm visits each elimination tree along the Hamilton path in time~$\cO(m+n)$, where $m$ and $n$ are the number of edges and vertices of~$G$, and this time can be improved to~$\cO(1)$ if $G$ is a tree.
Furthermore, if $G$ is chordal and 2-connected, then the resulting Hamilton path is actually a Hamilton cycle, i.e., the first and last elimination tree differ only in a tree rotation.
The proof in~\cite{DBLP:conf/soda/CardinalMM22} is an application of the Hartung-Hoang-M\"utze-Williams generation framework~\cite{MR4391718}, which generalizes the Steinhaus-Johnson-Trotter algorithm.
Specifically, we consider a simplicial vertex~$v$ in~$G$, we remove~$v$ from the graph, and by induction we obtain a rotation Gray code for the elimination trees of~$G-v$; see Figure~\ref{fig:elim-flip}~(a).
Let $N(v)$ be the set of neighbors of~$v$ in~$G$.
To construct the listing of elimination trees of~$G$, we consider every elimination tree~$T$ in the listing for~$G-v$.
As the vertices in~$N(v)$ form a clique in~$G$, these vertices appear on a path~$P$ in~$T$ that starts at the root and ends at a vertex from~$N(v)$.
We replace the elimination tree~$T$ for~$G-v$ by the sequence of elimination trees of~$G$ obtained by inserting~$v$ in all possible ways on the path~$P$; see Figure~\ref{fig:elim-flip}~(b).
In particular, if $P$ has $k$ vertices, then we obtain $k+1$ elimination trees.
This insertion is done alternatingly from leaf to root or from root to leaf, i.e., in the resulting listing of elimination trees of~$G$, the vertex~$v$ alternates or `zigzags' between being the root and being a leaf.
In particular, for $G$ being a complete graph, the resulting ordering of elimination trees and their corresponding permutations is exactly the Steinhaus-Johnson-Trotter ordering.
\begin{figure}[h!]
\centering
\includegraphics[page=2]{elim}
\caption{Illustration of the zigzag argument for elimination trees of a chordal graph.
The vertices in the neighborhood~$N(v)$ of~$v$ are shaded, whereas other vertices of~$G$ are white.
The sloped edges in~$T$ connect to further subtrees (not shown).}
\label{fig:elim-flip}
\end{figure}
We will see that the appearance of chordal graphs in the aforementioned results on acyclic orientations and elimination trees is \textit{not a coincidence}.
In fact, our first main result gives a unified proof for both the results of~\cite{MR1267311} and~\cite{DBLP:conf/soda/CardinalMM22} by introducing a suitable notion of chordality for hypergraphs (see Section~\ref{sec:res1}).
\subsection{From permutations to lattice quotients}
\label{sec:quotient}
The \emph{inversion set} of a permutation~$\pi=a_1\cdots a_n$ is the set of pairs~$(a_i,a_j)$ that appear in the `wrong' order, i.e., the set $\{(a_i,a_j)\mid 1\leq i<j\leq n \text{ and } a_i>a_j\}$.
If we order all permutations of~$[n]$ by containment of their inversion sets, we obtain the \emph{weak order} on permutations; see Figure~\ref{fig:cong1}~(a).
The weak order forms a \emph{lattice}, i.e., joins and meets are well-defined.
Note that the cover relations in this lattice are precisely adjacent transpositions, i.e., the cover graph of this lattice is the skeleton of the permutohedron.
Furthermore, the weak order is graded by the number of inversions, with levels $0,\ldots,\binom{n}{2}$.
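As a small worked example of these definitions, the following Python sketch (our function names) computes the inversion set of a permutation and compares two permutations in the weak order by containment of inversion sets.
\begin{verbatim}
def inversions(pi):
    # inversion set of pi = a_1 ... a_n: pairs (a_i, a_j), i < j, a_i > a_j
    return {(pi[i], pi[j]) for i in range(len(pi))
            for j in range(i + 1, len(pi)) if pi[i] > pi[j]}

def weak_order_leq(pi, sigma):
    # pi <= sigma in the weak order iff inv(pi) is contained in inv(sigma)
    return inversions(pi) <= inversions(sigma)

# 2134 is below 2314: {(2,1)} is contained in {(2,1), (3,1)}
assert weak_order_leq((2, 1, 3, 4), (2, 3, 1, 4))
\end{verbatim}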
\begin{figure}[h!]
\centering
\includegraphics[page=1]{cong}
\caption{(a) Weak order on permutations with the sylvester congruence, with the equivalence classes drawn as gray bubbles; (b) its quotient is the Tamari lattice of 231-avoiding permutations.}
\label{fig:cong1}
\end{figure}
A \emph{lattice congruence} is an equivalence relation on a lattice that respects joins and meets, i.e., for any choice of representatives from two equivalence classes, their joins and meets must lie in the same equivalence class.
A well-known example of a lattice congruence for the weak order is the \emph{sylvester congruence}, defined as the transitive closure of the rewriting rule $\_b\_ca\_ \equiv \_b\_ac\_$, where $a<b<c$, i.e., whenever a permutation contains three symbols $a<b<c$ in the order $b,c,a$, with $c$ and~$a$ directly next to each other, then this permutation belongs to the same equivalence class as the permutation obtained by transposing~$c$ and $a$.
Figure~\ref{fig:cong1}~(a) shows the equivalence classes of this congruence, with 231-avoiding permutations as the minima of the equivalence classes.
The \emph{quotient} of some lattice congruence is the lattice obtained by `contracting' each equivalence class to a single element; see Figure~\ref{fig:cong1}~(b).
In this way, we obtain for example the Tamari lattice (Figure~\ref{fig:cong1}~(b)) and the Boolean lattice as quotients of suitable lattice congruences of the weak order on permutations.
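As a concrete illustration of the sylvester congruence, the following Python sketch (our naming) computes the 231-avoiding minimum of the equivalence class of a permutation by exhaustively applying the rewriting rule in the decreasing direction; each application removes one inversion, so the process terminates at the minimal representative.
\begin{verbatim}
def sylvester_representative(pi):
    # 231-avoiding minimum of the sylvester class of pi, obtained by
    # exhaustively applying the rewriting  _b_ca_ -> _b_ac_  with a < b < c
    pi = list(pi)
    changed = True
    while changed:
        changed = False
        for i in range(len(pi) - 1):
            c, a = pi[i], pi[i + 1]
            if c > a and any(a < b < c for b in pi[:i]):
                pi[i], pi[i + 1] = a, c     # transpose c and a
                changed = True
    return tuple(pi)

# 2314 rewrites to 2134, the 231-avoiding minimum of its class
assert sylvester_representative((2, 3, 1, 4)) == (2, 1, 3, 4)
\end{verbatim}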
Let us also mention that the lattice of diagonal rectangulations~\cite{MR2871762} and the lattice of generic rectangulations~\cite{meehan_2019} arise as quotients of the weak order, and they have twisted Baxter permutations or 2-clumped permutations, respectively, as the minima of the equivalence classes.
The cover graphs of these quotient lattices are skeleta of polytopes known as \emph{quotientopes}~\cite{MR3964495,MR4311892}.
We showed in~\cite{MR4344032} that the skeleton of any quotientope admits a Hamilton path, and this Hamilton path can be computed by a `zigzag' strategy that generalizes the Steinhaus-Johnson-Trotter algorithm.
Pilaud~\cite{pilaud_2022} generalized this notion of quotientopes as follows:
He equipped the flip graph on acyclic orientations of a graph~$G$ with a poset structure.
Specifically, he considered the containment order of the sets of reoriented arcs with respect to some acyclic reference orientation~$D$ of~$G$.
The cover relations are given by reorienting a single arc, and the levels of this poset correspond to the number of reoriented arcs; see Figure~\ref{fig:cong2}~(a).
Pilaud characterized under which conditions on~$D$ this poset is a lattice, and he introduced lattice congruences and lattice quotients in this setting; see Figure~\ref{fig:cong2}~(b)+(c).
Furthermore, he showed how to realize the cover graphs of those quotients as polytopes, generalizing the constructions from~\cite{MR3964495,MR4311892}.
We saw before that if $G$ is a complete graph, then its acyclic orientations correspond to permutations, and arc flips correspond to adjacent transpositions, so in this special case Pilaud's lattice is precisely the weak order on permutations.
In his paper, Pilaud raised the question of which of these generalized quotientopes (parametrized by a reference orientation~$D$ of some graph~$G$) admit a Hamilton cycle.
The second main result of our work addresses Pilaud's question, by showing that they all have a Hamilton path, which can be computed by a simple greedy algorithm, again following the `zigzag' principle (see Section~\ref{sec:res2}).
\begin{figure}[h!]
\centering
\makebox[0cm]{
\includegraphics[page=2]{cong}
}
\caption{(a) Lattice of acyclic reorientations of a digraph~$D$ (reoriented arcs w.r.t.~$D$ are highlighted); (b) one of its lattice congruences, and encoding of acyclic orientations by permutations; (c) the resulting quotient lattice, and corresponding representative permutations.}
\label{fig:cong2}
\end{figure}
\subsection{Our results}
We proceed to give an overview of the results of this paper, explaining the main statements and connections to previous work.
These new results are highlighted red in Figure~\ref{fig:results}.
A more formal treatment including proofs is provided in later sections.
\subsubsection{Acyclic orientations of hypergraphs}
\label{sec:res1}
Our first main contribution is the generalization of the Savage-Squire-West Gray code for acyclic orientations of chordal graphs to acyclic orientations of hypergraphs, by introducing a suitable notion of chordality for hypergraphs (see Theorem~\ref{thm:mainhyper} in Section~\ref{sec:aoh}).
Our construction yields Hamilton paths on the skeleta of certain \emph{hypergraphic polytopes}~\cite{MR3960512,aguiar_ardila_2017}, and in particular on chordal nestohedra~\cite{MR2520477} (Theorem~\ref{thm:mainbuild}).
Furthermore, this generalization subsumes the construction of Gray codes for elimination trees of chordal graphs presented in~\cite{DBLP:conf/soda/CardinalMM22} (Lemma~\ref{lem:BG-elim}).
Given a hypergraph $\cH=(V,\cE)$, where $\cE\seq 2^V$, an \emph{orientation} is a mapping $h:\cE\rightarrow V$ such that $h(A)\in A$ for every hyperedge~$A$ of~$\cH$; see Figure~\ref{fig:hyper}~(a).
The letter $h$ stands for `head': Every hyperedge designates one of its vertices as head.
This orientation is \emph{acyclic} if the digraph formed by all arcs $i\rightarrow j$ for every pair of distinct vertices $i,j\in V$ with $i,j\in A$ and $j=h(A)$ for some hyperedge~$A\in\cE$ is acyclic; see Figure~\ref{fig:hyper}~(b).
This definition clearly generalizes the notion of an acyclic digraph.
It is a special case of a more general definition recently used in a similar context by Benedetti, Bergeron, and Machacek~\cite{MR3960512}.
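The acyclicity condition can be checked exactly as in the definition: build the digraph induced by the orientation and search for a directed cycle. The following Python sketch does this; the representation of a hypergraph as a list of vertex sets and of an orientation as a dictionary from hyperedge indices to heads is our own convention for illustration.
\begin{verbatim}
def is_acyclic(hyperedges, h):
    # hyperedges: list of vertex sets; h: dict mapping hyperedge index to
    # its head vertex.  Builds the digraph with an arc i -> j whenever i, j
    # lie in a common hyperedge A with head h(A) = j, and checks for cycles.
    arcs = {}
    for idx, A in enumerate(hyperedges):
        for i in A:
            if i != h[idx]:
                arcs.setdefault(i, set()).add(h[idx])
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for A in hyperedges for v in A}

    def dfs(v):                      # returns True iff a cycle is reachable
        color[v] = GREY
        for w in arcs.get(v, ()):
            if color[w] == GREY or (color[w] == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False

    return not any(color[v] == WHITE and dfs(v) for v in color)
\end{verbatim}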
\begin{figure}[h!]
\centering
\includegraphics[page=1]{hyper}
\caption{(a) Acyclic orientation of a hypergraph; (b) corresponding acyclic digraph; (c) corresponding poset (whose cover graph is the transitive reduction of~(b)).}
\label{fig:hyper}
\end{figure}
Given a chordal graph~$G$, repeatedly removing one of its simplicial vertices yields a \emph{perfect elimination ordering (peo)} of the graph.
In fact, it is well-known that a graph~$G$ admits a perfect elimination ordering if and only if $G$ is chordal~\cite{MR186421}.
We generalize the notion of perfect elimination order of chordal graphs to what we call \emph{hyperfect elimination order} of hypergraphs (the formal definition is in Section~\ref{sec:hyperfect} below).
We then apply the aforementioned Hartung-Hoang-M\"utze-Williams generation framework to obtain a Gray code for the acyclic orientations of a hypergraph in hyperfect elimination order, using the `zigzag' idea common to Figures~\ref{fig:acyclic-flip} and~\ref{fig:elim-flip}.
The flip operation in an orientation~$h$ of a hypergraph $\cH=(V,\cE)$ consists of picking two vertices $i,j\in V$, and for all hyperedges $A\in\cE$ with $i,j\in A$ and $h(A)=j$ we instead define $h(A):=i$ (provided that the resulting orientation is acyclic); see Figure~\ref{fig:hyper2}~(c).
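The flip operation itself is a one-line modification of the orientation followed by an acyclicity test. The following sketch implements it on top of the \texttt{is\_acyclic} sketch above, with the same (illustrative) data representation.
\begin{verbatim}
def pair_flip(hyperedges, h, i, j):
    # Flip the pair (i, j): every hyperedge containing both i and j whose
    # head is j gets head i instead.  Returns the new orientation, or None
    # if nothing changes or the result is cyclic (is_acyclic: sketch above).
    h2 = dict(h)
    for idx, A in enumerate(hyperedges):
        if i in A and j in A and h[idx] == j:
            h2[idx] = i
    if h2 == h or not is_acyclic(hyperedges, h2):
        return None
    return h2
\end{verbatim}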
\begin{figure}[b!]
\centering
\includegraphics[page=2]{hyper}
\caption{(a) A graph~$G$; (b) two elimination trees~$T$ and~$T'$ of~$G$ that differ in a tree rotation; (c) corresponding two acyclic orientations~$h_T$ and~$h_{T'}$ of the graphical building set of~$G$, and flip operation between them.
The orientations $h_T$ and~$h_{T'}$ differ only on the highlighted hyperedges that contain both~$i$ and~$j$.
}
\label{fig:hyper2}
\end{figure}
The results from~\cite{DBLP:conf/soda/CardinalMM22} on elimination trees of chordal graphs can be recovered as a special case of our new results as follows; see Figure~\ref{fig:hyper2}:
Given a graph~$G=(V,E)$, its \emph{graphical building set} is the hypergraph $\cH=(V,\cE)$ such that
\begin{equation*}
\cE:=\{U\seq V\mid \text{$G[U]$ is connected}\},
\end{equation*}
where $G[U]$ is the subgraph of~$G$ induced by~$U$, i.e., we consider all connected subgraphs of~$G$ as hyperedges.
An elimination tree~$T$ of~$G$ with root~$v$ corresponds to the acyclic orientation of~$\cH$ in which every hyperedge~$A\in\cE$ with $v\in A$ satisfies $h(A)=v$, and this condition holds recursively for all subtrees.
Consequently, acyclic orientations of~$\cH$ are in one-to-one correspondence with elimination trees of~$G$, and the aforementioned flip operation on the hypergraph corresponds to a rotation in the elimination tree.
Our new Gray code on acyclic orientations of hypergraphs in hyperfect elimination order thus yields as a special case the Gray code on elimination trees of chordal graphs presented in~\cite{DBLP:conf/soda/CardinalMM22}.
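For small examples, the graphical building set can be enumerated directly from its definition. The following Python sketch (exponential by nature, intended only for experimentation) lists all vertex subsets inducing connected subgraphs; the adjacency-dictionary input format is ours.
\begin{verbatim}
from itertools import combinations

def graphical_building_set(adj):
    # adj: dict mapping each vertex to its set of neighbors.  Returns all
    # vertex subsets U for which the induced subgraph G[U] is connected,
    # i.e., the hyperedges of the graphical building set of G.
    def connected(U):
        U = set(U)
        seen, stack = set(), [next(iter(U))]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v] & U)
        return seen == U

    return [set(U) for k in range(1, len(adj) + 1)
            for U in combinations(adj, k) if connected(U)]
\end{verbatim}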
The notions of building set and chordal building set have been defined abstractly without reference to a graph by Postnikov~\cite{MR2487491}, and Postnikov, Reiner, and Williams~\cite{MR2520477}, and the corresponding flip graphs arise as skeleta of \emph{chordal nestohedra}, a term also coined in \cite{MR2520477}.
Connections with acyclic orientations of hypergraphs have been investigated by Benedetti, Bergeron, and Machacek~\cite{MR3960512}.
We thus also obtain a simple constructive proof that chordal nestohedra admit a Hamilton path, directly yielding Gray codes for so-called \emph{$\cB$-forests}~\cite{MR2520477}.
\subsubsection{Quotients of acyclic reorientation lattices}
\label{sec:res2}
Recall from Section~\ref{sec:quotient} the definition of the poset of acyclic orientations of a graph~$G$ with respect to a reference orientation~$D$ of~$G$.
Pilaud~\cite{pilaud_2022} characterized when this poset is a lattice.
The following definitions are illustrated in Figure~\ref{fig:classes}.
Specifically, a digraph~$D$ is called \emph{vertebrate} if the transitive reduction of every induced subgraph of~$D$ is a forest.
It is easy to see that vertebrate implies acyclic.
Furthermore, $D$ is called \emph{filled} if for every directed path $v_1\rightarrow\cdots\rightarrow v_k$ in~$D$ such that the arc~$v_1\rightarrow v_k$ belongs to~$D$, all arcs $v_i\rightarrow v_j$, $1\leq i<j\leq k$, also belong to~$D$.
A digraph is called \emph{skeletal} if it is both vertebrate and filled.
Pilaud~\cite[Thm.~1+Thm.~3]{pilaud_2022} showed that the acyclic reorientation poset of~$D$ is a lattice if and only if $D$ is vertebrate, and that this lattice is semidistributive if and only if $D$ is filled.
He also raised the following question in his paper.
\begin{figure}[b!]
\centering
\includegraphics[page=3]{acyclic}
\caption{Illustration of various classes of digraphs: (b) is peo-consistent, as the neighborhood of the source~$a$ is a clique, and $D-a$ is also peo-consistent; it is not skeletal, as the path $a\rightarrow b\rightarrow c\rightarrow d$ is not filled; (c) is vertebrate, as the transitive reduction is the path $a\rightarrow b\rightarrow c\rightarrow d$; it is not peo-consistent, as $a$ is a source and $d$ is a sink, but none of their neighborhoods is a clique; (d) is not vertebrate, as the transitive reduction is the full graph, which is not a forest; (e) is a directed cycle.}
\label{fig:classes}
\end{figure}
\begin{open}[{\cite[Problem~51]{pilaud_2022}}]
\label{prob:pilaud}
Given a skeletal (i.e., vertebrate and filled) digraph~$D$, do all cover graphs of lattice quotients of the acyclic reorientation lattice of~$D$ admit a Hamilton cycle?
\end{open}
Our second main contribution is to address Pilaud's question, by showing that those cover graphs all have a Hamilton path, which can be computed by a simple greedy algorithm, generalizing Steinhaus-Johnson-Trotter (Theorem~\ref{thm:mainquotient} in Section~\ref{sec:arl}).
This also yields an algorithmic proof that the corresponding generalized quotientopes admit a Hamilton path.
Furthermore, our result encompasses all earlier results on quotients of the weak order on permutations~\cite{MR4344032}, which are obtained as special case when $D$ is an acyclic orientation of a complete graph.
In fact, our results hold not only for skeletal digraphs~$D$, but for a slightly larger class.
Specifically, a digraph~$D$ is \emph{peo-consistent} if it is empty, or if it has a source or sink~$v$ (i.e., all arcs incident with~$v$ are outgoing or all are incoming, respectively) whose neighborhood is a clique and for which $D-v$ is again peo-consistent.
A straightforward induction shows that peo-consistent implies vertebrate.
The key fact we establish in our paper is that skeletal implies peo-consistent (Lemma~\ref{lem:skeletal}).
We thus have the following inclusions among classes of digraphs, which are strict (see Figure~\ref{fig:classes}):
\begin{equation}
\label{eq:inclusion}
\text{skeletal} \subset \text{peo-consistent} \subset \text{vertebrate} \subset \text{acyclic}.
\end{equation}
It is not difficult to see that an undirected graph has a peo-consistent orientation if and only if it is chordal.
We also observe that an undirected graph admits a skeletal orientation only if it is \emph{strongly chordal}~\cite{MR685625}.
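The definition of peo-consistency can be tested literally by backtracking over all candidate vertices. The following Python sketch (exponential in the worst case, and with our own input conventions) is meant only to make the definition concrete.
\begin{verbatim}
def is_peo_consistent(vertices, arcs):
    # vertices: a set; arcs: a set of pairs (u, v) meaning u -> v.  Follows
    # the definition literally: try every source or sink whose neighborhood
    # is a clique and recurse on the rest.
    if not vertices:
        return True
    und = {frozenset(a) for a in arcs}        # underlying undirected edges
    adjacent = lambda x, y: frozenset((x, y)) in und
    for v in vertices:
        has_in = any(a[1] == v for a in arcs)
        has_out = any(a[0] == v for a in arcs)
        if has_in and has_out:
            continue                          # v is neither source nor sink
        N = {u for u in vertices if adjacent(u, v)}
        if all(adjacent(x, y) for x in N for y in N if x != y):
            rest_arcs = {a for a in arcs if v not in a}
            if is_peo_consistent(vertices - {v}, rest_arcs):
                return True
    return False
\end{verbatim}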
We emphasize that our aforementioned results only guarantee a Hamilton path, whereas Pilaud's question asks for a Hamilton cycle.
However, our results hold for a slightly larger class of graphs, and they yield a simple algorithm.
As mentioned before, in the special case of elimination trees of chordal graphs~$G$, there are interesting cases where the Hamilton path computed by our algorithm is actually a Hamilton cycle, i.e., the first and last elimination tree differ only in a tree rotation.
Specifically, this happens if $G$ is chordal and 2-connected; see~\cite{DBLP:conf/soda/CardinalMM22}.
In particular, when $G$ is a complete graph our algorithm specializes to the Steinhaus-Johnson-Trotter algorithm for permutations, which produces a cyclic Gray code.
\subsubsection{Efficient generation algorithms}
\label{sec:algo}
We briefly discuss the computational efficiency of the Gray code algorithms derived from our work.
Those are summarized in the bottom part of the boxes in Figure~\ref{fig:results}.
All four algorithms mentioned at the bottom level in Figure~\ref{fig:results} can be implemented to output each new object in time~$\cO(1)$.
Our first algorithmic contribution is to turn the Savage-Squire-West Gray code for acyclic orientations of a chordal graph~$G$ into an algorithm that generates each acyclic orientation in time~$\cO(\log \omega)$ on average, where $\omega=\omega(G)$ is the clique number of~$G$ (Theorem~\ref{thm:SSW-algo} in Section~\ref{sec:aog}).
Clearly, we have $\omega\leq n$, which yields the more generous bound~$\cO(\log n)$.
The initialization time of our algorithm is~$\cO(n^2)$, which includes the time for testing chordality and computing a perfect elimination ordering, and the required space is~$\cO(n^2)$.
In our algorithm, we represent each acyclic orientation as an adjacency matrix, which allows constant-time orientation queries for each arc.
In the following we write $m$ and~$n$ for the number of edges and vertices of~$G$, respectively.
For comparison, Barbosa and Szwarcfiter~\cite{MR1733453} described an algorithm to generate each acyclic orientation of an arbitrary graph in time~$\cO(m+n)$ on average.
Moreover, Conte, Grossi, Marino, and Rizzi~\cite{MR3815526} provided an algorithm that generates each acyclic orientation in time~$\cO(m)$, and this bound holds in every iteration.
Their approach generalizes to the setting where some vertices are prescribed as sources, at the cost of a higher running time.
However, none of these other algorithms produces a Gray code listing, i.e., they do not yield Hamilton paths on the corresponding graphical zonotopes, unlike our Gray code.
Generalizing this algorithm, we can implement our Gray code for generating all acyclic orientations of a hypergraph~$\cH=(V,\cE)$ in hyperfect elimination order in time~$\cO(\Delta n)$ per generated acyclic orientation, where $\Delta=\Delta(\cH):=\max_{v\in V}|\{A\in \cE\mid v\in A\}|$ denotes the maximum degree and $n=|V|$ is the number of vertices.
The space required by this algorithm is~$\cO(\Delta n^2)$, and the initialization time is also~$\cO(\Delta n^2)$.
Testing whether a hypergraph admits a hyperfect elimination order and computing one takes time~$\cO(\Delta^3 n^5)$.
We implemented both of the aforementioned algorithms in C++, and we made this code available for download and experimentation on the Combinatorial Object Server~\cite{cos_orient}.
For algorithmic details on graphs, see Section~\ref{sec:SSW-algo}, and for details on hypergraphs, see our C++ implementation.
For lattice congruences and their quotients, it is difficult to provide meaningful statements about running times because of representation issues.
Specifically, for the weak order on permutations of~$[n]$, there are double-exponentially in~$n$ many different lattice congruences~\cite[Thm.~18]{MR4344032}.
Specifying the congruence as input of an algorithm therefore takes exponential space in general.
Consequently, one cannot improve much upon specifying the congruence naively, as a full list of its equivalence classes, in the input of the algorithm.
However, if the equivalence classes are already given explicitly as input, then there is little value in generating them efficiently.
\subsection{The permutation language framework}
\label{sec:framework}
In a recent line of work, Hartung, Hoang, M\"utze, and Williams~\cite{MR4391718} introduced a far-ranging generalization of the Steinhaus-Johnson-Trotter algorithm, which yields efficient Gray code algorithms for a large variety of combinatorial objects, based on encoding them as permutations.
So far, the framework has been applied successfully to obtain Gray codes for pattern-avoiding permutations~\cite{MR4391718}, lattice congruences of the weak order on permutations~\cite{MR4344032}, different families of rectangulations~\cite{perm_series_iii} (a rectangulation is a subdivision of a rectangle into smaller rectangles), and elimination trees of chordal graphs~\cite{DBLP:conf/soda/CardinalMM22}.
The methods described in this paper extend the reach of this framework, and make it applicable to generate even more general classes of objects, namely acyclic orientations of hypergraphs, which in particular subsumes the earlier results in~\cite{MR4344032} and~\cite{DBLP:conf/soda/CardinalMM22}.
In the following we summarize the key methods and results provided by this generation framework.
\subsubsection{Jumps in permutations}
We use $S_n$ to denote the set of all permutations of~$[n]$.
Furthermore, we use $\ide_n=12\cdots n$ to denote the identity permutation, and $\varepsilon\in S_0$ to denote the empty permutation.
A permutation~$\pi=a_1\cdots a_n$ is \emph{peak-free} if it does not contain any triple~$a_{i-1}<a_i>a_{i+1}$.
For any $\pi\in S_{n-1}$ and any $1\leq i\leq n$, we write $c_i(\pi)\in S_n$ for the permutation obtained from~$\pi$ by inserting the new largest value~$n$ at position~$i$ of~$\pi$, i.e., if $\pi=a_1\cdots a_{n-1}$ then $c_i(\pi)=a_1\cdots a_{i-1} \, n\, a_i \cdots a_{n-1}$.
Moreover, for~$\pi\in S_n$, we write $p(\pi)\in S_{n-1}$ for the permutation obtained from~$\pi$ by removing the largest entry~$n$.
Given a permutation $\pi=a_1\cdots a_n$ with a substring $a_i\cdots a_{i+d}$ with $d>0$ and $a_i>a_{i+1},\ldots,a_{i+d}$, a \emph{right jump of the value~$a_i$ by $d$~steps} is a cyclic left rotation of this substring by one position to $a_{i+1}\cdots a_{i+d} a_i$.
Similarly, given a substring $a_{i-d}\cdots a_i$ with $d>0$ and $a_i>a_{i-d},\ldots,a_{i-1}$, a \emph{left jump of the value~$a_i$ by $d$~steps} is a cyclic right rotation of this substring to $a_i a_{i-d}\cdots a_{i-1}$.
For example, a right jump of the value~5 in the permutation~$265134$ by 2 steps yields~$261354$.
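The two jump operations are plain substring rotations. The following Python sketch (our function names) implements them and reproduces the example from the text; the assertions encode the validity condition that the skipped entries must all be smaller than the jumping value.
\begin{verbatim}
def right_jump(pi, value, d):
    # cyclic left rotation of the length-(d+1) substring starting at `value`;
    # valid only if the d entries to the right of `value` are all smaller
    pi = list(pi)
    i = pi.index(value)
    assert i + d < len(pi) and all(value > x for x in pi[i + 1:i + d + 1])
    pi[i:i + d + 1] = pi[i + 1:i + d + 1] + [value]
    return pi

def left_jump(pi, value, d):
    # cyclic right rotation of the length-(d+1) substring ending at `value`
    pi = list(pi)
    i = pi.index(value)
    assert i - d >= 0 and all(value > x for x in pi[i - d:i])
    pi[i - d:i + 1] = [value] + pi[i - d:i]
    return pi

# the example from the text: a right jump of the value 5 in 265134 by 2 steps
assert right_jump([2, 6, 5, 1, 3, 4], 5, 2) == [2, 6, 1, 3, 5, 4]
\end{verbatim}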
\subsubsection{A simple greedy algorithm}
\label{sec:greedy}
The main ingredient of the framework is the following simple greedy algorithm to generate a set of permutations $L_n\seq S_n$.
We say that a jump is \emph{minimal} with respect to~$L_n$, if every jump of the same value in the same direction by fewer steps creates a permutation that is not in~$L_n$.
\begin{algo}{Algorithm~J}{Greedy minimal jumps}
This algorithm attempts to greedily generate a set of permutations $L_n\seq S_n$ using minimal jumps starting from an initial permutation $\pi_0 \in L_n$.
\begin{enumerate}[label={\bfseries J\arabic*.}, leftmargin=8mm, noitemsep, topsep=3pt plus 3pt]
\item{} [Initialize] Visit the initial permutation~$\pi_0$.
\item{} [Jump] Generate an unvisited permutation from~$L_n$ by performing a minimal jump of the largest possible value in the most recently visited permutation.
If no such jump exists, or the jump direction is ambiguous, then terminate.
Otherwise visit this permutation and repeat~J2.
\end{enumerate}
\end{algo}
Note that Algorithm~J is a generalization of Williams' greedy description of the Steinhaus-Johnson-Trotter algorithm given in Section~\ref{sec:sjt}.
Indeed, if $L_n=S_n$ is the set of all permutations of~$[n]$, then minimal jumps correspond to adjacent transpositions.
Note that by the definition of step~J2, Algorithm~J never visits any permutation twice.
The following key result provides a sufficient condition on the set~$L_n$ to guarantee that Algorithm~J succeeds to list all permutations from~$L_n$.
This condition is captured by the following closure property of the set~$L_n$.
A set of permutations~$L_n\seq S_n$ is called a \emph{zigzag language}, if either $n=0$ and $L_0=\{\varepsilon\}$, or if $n\geq 1$ and $L_{n-1}:=\{p(\pi)\mid \pi\in L_n\}$ is a zigzag language satisfying either one of the following conditions:
\begin{enumerate}[label={(z\arabic*)}, leftmargin=8mm, noitemsep, topsep=3pt plus 3pt]
\item For every $\pi\in L_{n-1}$ we have~$c_1(\pi)\in L_n$ and~$c_n(\pi)\in L_n$.
\item We have $L_n=\{c_n(\pi)\mid \pi\in L_{n-1}\}$.
\end{enumerate}
\begin{theorem}[\cite{MR4391718}]
\label{thm:jump}
Given any zigzag language of permutations~$L_n$ and initial permutation $\pi_0=\ide_n$, Algorithm~J visits every permutation from~$L_n$ exactly once.
\end{theorem}
It was already argued in~\cite{MR4391718} that more generally, any peak-free permutation can be used as initial permutation~$\pi_0$ for Algorithm~J.
We emphasize that Algorithm~J can be made \emph{history-free}, i.e., by introducing suitable auxiliary arrays, step~J2 can be performed without maintaining any previously visited permutations in order to decide which jump to perform; for details see \cite[Sec.~5.1+8.7]{perm_series_iii} and Section~\ref{sec:SSW-algo}.
The running time of this algorithm is then only determined by the time it takes to decide membership of a permutation in the zigzag language~$L_n$.
We should think of~$L_n$ as a set of permutations defined by some property, such as for example `permutations that avoid the pattern 231' or `permutations that encode acyclic orientations of some graph' (recall Figures~\ref{fig:cong1}~(b) and~\ref{fig:cong2}~(b), respectively), rather than an explicitly given set.
After all, if the set was already provided explicitly as input, then there would be no point in generating it; recall the discussion in Section~\ref{sec:algo}.
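To illustrate how Algorithm~J operates with a membership oracle, the following Python sketch performs steps~J1 and~J2 directly, reusing the \texttt{right\_jump} and \texttt{left\_jump} functions from the sketch above. This is an unoptimized illustration (it stores all visited permutations, unlike the history-free implementations cited above); the function name and oracle interface are ours.
\begin{verbatim}
def algorithm_J(n, in_L, pi0=None):
    # Greedy generation of a zigzag language L_n given by a membership
    # oracle in_L (a predicate on tuples): repeatedly perform a minimal
    # jump of the largest possible value that yields an unvisited
    # permutation of L_n; stop if no jump exists or it is ambiguous.
    pi = tuple(pi0) if pi0 else tuple(range(1, n + 1))
    visited, output = {pi}, [pi]
    while True:
        candidates = []
        for value in range(n, 0, -1):
            i = pi.index(value)
            for jump, dmax in ((right_jump, n - 1 - i), (left_jump, i)):
                for d in range(1, dmax + 1):
                    window = pi[i+1:i+d+1] if jump is right_jump else pi[i-d:i]
                    if any(x > value for x in window):
                        break              # blocked by a larger value
                    sigma = tuple(jump(pi, value, d))
                    if not in_L(sigma):
                        continue           # not in L_n: not yet minimal
                    if sigma not in visited:
                        candidates.append(sigma)
                    break                  # minimal jump in this direction
            if candidates:
                break                      # largest value with a usable jump
        if len(candidates) != 1:
            return output                  # no jump possible, or ambiguous
        pi = candidates[0]
        visited.add(pi)
        output.append(pi)

# with L_n = S_n, every minimal jump is an adjacent transposition and the
# output is the Steinhaus-Johnson-Trotter ordering
assert algorithm_J(3, lambda s: True) == [(1, 2, 3), (1, 3, 2), (3, 1, 2),
                                          (3, 2, 1), (2, 3, 1), (2, 1, 3)]
\end{verbatim}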
\subsubsection{Inductive description of the same ordering}
\label{sec:zigzag}
In the same way that the Steinhaus-Johnson-Trotter ordering can be defined both greedily or inductively, the ordering produced by Algorithm~J can also be defined inductively, as we show next.
Specifically, given a zigzag language~$L_n$, we write~$J(L_n)$ for the ordering of all permutations from~$L_n$ produced by Algorithm~J when initialized with~$\pi_0=\ide_n$.
For any $\pi\in L_{n-1}$ we let $\rvec{c}(\pi)$ be the sequence of all $c_i(\pi)\in L_n$ for $i=1,2,\ldots,n$, starting with $c_1(\pi)$ and ending with $c_n(\pi)$, and we let $\lvec{c}(\pi)$ denote the reverse sequence, i.e., it starts with $c_n(\pi)$ and ends with $c_1(\pi)$.
In words, those sequences are obtained by inserting into~$\pi$ the new largest value~$n$ from left to right, or from right to left, respectively, in all possible positions that yield a permutation from~$L_n$, skipping the positions that yield a permutation that is not in~$L_n$.
It was shown in~\cite{MR4391718} that the sequence~$J(L_n)$ can be described inductively as follows:
If $n=0$ then we have $J(L_0)=\varepsilon$, and if $n\geq 1$ then we consider the finite sequence $J(L_{n-1})=:\pi_1,\pi_2,\ldots$ and we have
\begin{subequations}
\label{eq:JLn12}
\begin{equation}
\label{eq:JLn1}
J(L_n)=\lvec{c}(\pi_1),\rvec{c}(\pi_2),\lvec{c}(\pi_3),\rvec{c}(\pi_4),\ldots
\end{equation}
if condition~(z1) holds, and
\begin{equation}
\label{eq:JLn2}
J(L_n)=c_n(\pi_1),c_n(\pi_2),c_n(\pi_3),c_n(\pi_4),\ldots
\end{equation}
\end{subequations}
if condition~(z2) holds.
In words, if condition~(z1) holds then this sequence is obtained from the previous sequence by inserting the new largest value~$n$ in all possible positions alternatingly from right to left, or from left to right, in a `zigzag' fashion.
The case where condition~(z2) holds is exceptional, as we only append~$n$ to each permutation of the previous sequence.
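For a zigzag language that is given explicitly as a (small) set of permutations, the inductive description translates into a few lines of code. The following Python sketch (our naming) projects to $L_{n-1}$, recurses, and inserts the value~$n$ alternatingly from right to left and from left to right, skipping positions that leave the language; the final assertion checks that for $L=S_3$ it reproduces the Steinhaus-Johnson-Trotter ordering.
\begin{verbatim}
def J(L):
    # L: set of tuples, all of the same length n, forming a zigzag language
    n = len(next(iter(L)))
    if n == 0:
        return [()]
    L_prev = {tuple(x for x in pi if x != n) for pi in L}
    order = []
    for k, pi in enumerate(J(L_prev)):
        positions = range(n - 1, -1, -1) if k % 2 == 0 else range(n)
        for i in positions:
            sigma = pi[:i] + (n,) + pi[i:]
            if sigma in L:
                order.append(sigma)
    return order

from itertools import permutations
assert J(set(permutations((1, 2, 3)))) == \
    [(1, 2, 3), (1, 3, 2), (3, 1, 2), (3, 2, 1), (2, 3, 1), (2, 1, 3)]
\end{verbatim}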
\subsubsection{How we apply Algorithm~J in this work}
To prove our results on acyclic orientations of hypergraphs discussed in Section~\ref{sec:res1}, we proceed as follows:
Given a hypergraph~$\cH$ in hyperfect elimination order, we label its vertices with $1,2,\ldots,n$ according to this ordering, and we encode any of its acyclic orientations as a permutation on~$[n]$, in such a way that the set of permutations obtained for all acyclic orientations of~$\cH$ is a zigzag language.
By Theorem~\ref{thm:jump} we can thus apply Algorithm~J to generate this zigzag language in Gray code order, and in a final step we interpret the jumps in permutations performed by Algorithm~J in terms of flip operations on acyclic orientations of~$\cH$.
The key insight that makes this work is that when vertices are in hyperfect elimination order, the posets defined by the acyclic orientations have the \emph{unique parent-child} property, namely that the largest vertex has at most one parent and at most one child in the poset, and the same holds recursively after removing it (Lemma~\ref{lem:upc-heo}).
To prove our results on quotients of acyclic reorientation lattices discussed in Section~\ref{sec:res2}, we proceed as follows:
Given a peo-consistent digraph~$D$, we label its vertices with $1,\ldots,n$ according to this ordering, and we encode any of its acyclic orientations as a permutation on~$[n]$; see Figure~\ref{fig:cong2}~(b).
For a given lattice congruence of the reorientation lattice of~$D$, we select a set of representatives, one permutation from each equivalence class, such that those representative permutations form a zigzag language; see Figure~\ref{fig:cong2}~(c).
We show that for peo-consistent digraphs, the equivalence classes of any lattice congruence have a simple projection property that enables selecting representatives in an inductive `zigzag'-like way.
It then follows that the jumps in permutations performed by Algorithm~J correspond to steps along cover edges of the lattice quotient.
\section{Acyclic orientations of graphs}
\label{sec:aog}
In this section we review the structure of acyclic orientations of graphs and the associated combinatorial and geometric objects.
We also describe how the Savage-Squire-West Gray code for acyclic orientations of a chordal graph described in Section~\ref{sec:acyclic} can be cast as an instance of Algorithm~J, and we show how to implement this Gray code efficiently (Theorem~\ref{thm:SSW-algo} below).
\subsection{Poset preliminaries}
\label{sec:prelim}
We first recall some terminology for a partially ordered set~$(P, <)$ that will be used throughout this paper.
A \emph{cover relation} is a pair~$x,y \in P$ with $x<y$ such that there is no~$z \in P$ with $x<z<y$.
In that case, we say that \emph{$y$ covers~$x$}, or \emph{$x$ is covered by~$y$}.
The \emph{cover graph} of $P$ has the elements of $P$ as vertices, and an edge between every pair of vertices that are in a cover relation.
A \emph{linear extension} of~$P$ is a total order that respects the comparabilities (or cover relations) of~$P$.
We write $\ext(P)$ for the set of linear extensions of~$P$.
An \emph{interval} $[x,y]$ of~$P$ is the set of all~$z$ in~$P$ such that $x\leq z\leq y$.
A poset~$(P,<)$ is a \emph{lattice}, if for every pair $x,y \in P$ there is a unique minimal element~$z$ such that~$z\geq x$ and~$z\geq y$, called the \emph{join~$x\vee y$ of~$x$ and~$y$}, and a unique maximal element~$z$ such that~$z\leq x$ and~$z\leq y$, called the \emph{meet~$x\wedge y$ of~$x$ and~$y$}.
\subsection{Flips in acyclic orientations of graphs}
We use standard terminology for digraphs~$D=(V,A)$, such as \emph{out-neighbor}, \emph{in-neighbor}, as well as \emph{out-degree} and \emph{in-degree} of a vertex~$v\in V$, denoted~$d^+(v)$ and~$d^-(v)$, respectively.
A \emph{source} is a vertex with zero in-degree, and a \emph{sink} is a vertex with zero out-degree.
The \emph{transitive reduction} $T_D$ of a digraph~$D$ is the digraph obtained by removing from~$D$ all arcs that can be obtained by applying the transitivity rule in~$D$; see Figure~\ref{fig:acyclic}.
When interpreting~$D$ as a poset, $T_D$ is the cover graph of~$D$.
An \emph{orientation}~$D$ of a graph~$G$ is a digraph obtained by orienting every edge of~$G$ in one of two ways.
An orientation of~$G$ is called \emph{acyclic} if it does not contain any directed cycles.
We write $\AO_G$ for the set of all acyclic orientations of the graph~$G$.
We refer to the operation of reversing the direction of a single arc of an orientation as an \emph{arc flip}.
We define a \emph{flip graph} on~$\AO_G$ by joining two acyclic orientations of~$G$ with an edge if and only if they differ in an arc flip.
Note that a transitive arc cannot be flipped, as this would create a directed cycle.
Conversely, if flipping an arc creates a directed cycle, then before the flip the arc was transitive.
Therefore, for every $D\in\AO_G$, an arc is flippable in~$D$ if and only if it belongs to the transitive reduction~$T_D$, and the degree of~$D$ in the flip graph is equal to the number of arcs in~$T_D$.
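The characterization of flippable arcs is easy to compute directly: an arc lies in the transitive reduction of an acyclic digraph if and only if there is no other directed path between its endpoints. The following Python sketch (our naming and representation) returns the flippable arcs in this way.
\begin{verbatim}
def flippable_arcs(vertices, arcs):
    # Arcs of an acyclic orientation that can be flipped, i.e., the arcs of
    # the transitive reduction: an arc u -> v is flippable iff there is no
    # directed path from u to v avoiding that arc.
    succ = {v: set() for v in vertices}
    for u, v in arcs:
        succ[u].add(v)

    def reachable(s, t, banned):
        stack, seen = [s], set()
        while stack:
            x = stack.pop()
            if x == t:
                return True
            if x in seen:
                continue
            seen.add(x)
            stack.extend(w for w in succ[x] if (x, w) != banned)
        return False

    return {(u, v) for (u, v) in arcs if not reachable(u, v, (u, v))}
\end{verbatim}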
We often consider a total ordering of the vertex set~$V$ of a graph or digraph, and then we simply use $V=[n]=\{1,2,\ldots,n\}$.
\subsection{Graphical zonotopes}
\label{sec:zono}
The \emph{graphical arrangement} of $G=([n],E)$ is the collection of hyperplanes $\{H_{ij} \mid ij\in E\}$ in~$\mathbb{R}^n$, defined as~$H_{ij} := \{x\in\mathbb{R}^n \mid x_i = x_j\}$.
This arrangement defines the \emph{graphical fan} of $G$, the full-dimensional cones of which are in one-to-one correspondence with acyclic orientations of $G$.
The \emph{graphical zonotope~$Z(G)$} of~$G$ is the dual polytope of the graphical fan of $G$.
Original developments about these structures can be found in Greene~\cite{G77} and Greene and Zaslavsky~\cite{MR712251}; see also Stanley~\cite{MR2383131}.
The graphical zonotope~$Z(G)$ of~$G$ can be defined as the Minkowski sum of the line segments~$[e_i, e_j]$ for all $ij\in E$, where $e_i$ is the $i$th canonical basis vector of~$\mathbb{R}^n$. We can also define~$Z(G)$ as follows.
For an orientation (not necessarily acyclic) $D$ of~$G$, we consider the \emph{in-degree sequence} of~$D$, defined as $\delta_D:=(d^-(1),\ldots,d^-(n))\in\mathbb{N}^n$.
Then the graphical zonotope of~$G$ is
\begin{equation*}
Z(G) = \conv \{ \delta_D \mid D \text{\ orientation\ of\ } G \}.
\end{equation*}
It can be shown that $\delta_D$ is a vertex of~$Z(G)$ if and only if $D$ is acyclic,
hence the above definition remains valid if we restrict~$D$ to be an acyclic orientation of~$G$.
For a simple proof of this fact, we refer to that of a more general statement given as Proposition~3.9 in~\cite{rehberg_2021}.
The following lemma is a simple consequence of the definitions (see also the discussion in~\cite{MR1609342}).
\begin{lemma}
The flip graph on acyclic orientations~$\AO_G$ of a graph~$G$ is isomorphic to the skeleton of its zonotope~$Z(G)$.
\end{lemma}
A \emph{partial cube} is a graph that has an isometric embedding in the graph of a hypercube.
Dual graphs of hyperplane arrangements are known to be partial cubes (see for instance Chapter~7 in~\cite{O11}).
This applies in particular to graphical arrangements, and directly yields the following statement.
\begin{lemma}
The flip graph on acyclic orientations~$\AO_G$ of a graph~$G$ is a partial cube.
\end{lemma}
As a consequence, the \emph{flip distance} between two acyclic orientations of $G$, i.e.,
the minimum number of arc flips needed to transform one into the other, is equal to the number of edges oriented oppositely in the two orientations.
\subsection{Acyclic orientations of chordal graphs}
For a graph~$G=([n],E)$ and an integer $i\leq n$, we write $G_i$ for the subgraph of $G$ induced by the vertices in~$[i]$.
Similarly, for a digraph~$D$ with vertex set~$[n]$ and an integer~$i\leq n$, we write $D_i$ for the subdigraph of~$D$ induced by~$[i]$.
A graph is \emph{chordal} if every induced cycle has length~3.
A vertex whose neighborhood forms a clique is called \emph{simplicial}.
A graph $G=([n],E)$ is in \emph{perfect elimination order} if for all~$i\in [n]$, the vertex~$i$ is simplicial in~$G_i$.
It is well known that a graph is chordal if and only if it is isomorphic to a graph~$([n],E)$ in perfect elimination order~\cite{MR186421}.
We say that a graph $G=([n],E)$ has the \emph{unique parent-child property} if either $n=0$, or $n\geq 1$ and the following two conditions are satisfied:
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item \label{itm:upc1} for every acyclic orientation~$D\in\AO_G$ the vertex~$n$ has in-degree and out-degree at most~1 in the transitive reduction $T_D$ of $D$;
\item the graph~$G_{n-1}$ has the unique parent-child property.
\end{enumerate}
\begin{lemma}
\label{lem:upc-peo}
A graph $G=([n],E)$ has the unique parent-child property if and only if it is in perfect elimination order.
\end{lemma}
\begin{proof}
$(\Leftarrow)$ By definition, the vertex~$n$ is simplicial in $G$, hence $n$ together with its neighbors induce a clique.
In any acyclic orientation $D\in\AO_G$, the transitive reduction of this clique is a path.
Therefore, the vertex~$n$ is involved in at most two arcs of~$T_D$ and has in-degree and out-degree at most~1.
The same holds for the vertex~$i$ in~$G_i$, for all $i\in [n]$.
\noindent$(\Rightarrow)$ Suppose that $G$ is not in perfect elimination order.
Without loss of generality, suppose that there are two nonadjacent neighbors~$a,b$ of~$n$ in~$G$.
Consider an orientation $D$ of $G$ in which
\begin{itemize}[leftmargin=5mm, noitemsep, topsep=1pt plus 1pt]
\item every arc having $n$ as endpoint is directed towards~$n$;
\item any other arc having~$a$ or~$b$ as endpoint is directed towards~$a$ or~$b$, respectively;
\item all other arcs $ij\in E$ are directed towards~$\max\{i,j\}$.
\end{itemize}
Then $D$ is an acyclic orientation of~$G$ such that the arcs $a\rightarrow n$ and $b\rightarrow n$ are both present in~$T_D$, contradicting condition~\eqref{itm:upc1} of the unique parent-child property.
\end{proof}
\subsection{Savage-Squire-West as an instance of Algorithm~J}
Recall the Savage-Squire-West Gray code for acyclic orientations of a chordal graph in perfect elimination order described in Section~\ref{sec:acyclic}.
We now show how to derive this Gray code as a special case of Algorithm~J in the Hartung-Hoang-M\"utze-Williams framework introduced in Section~\ref{sec:framework}.
For this purpose, we map acyclic orientations of a graph to permutations so that the image of all acyclic orientations forms a zigzag language of permutations.
This mapping is illustrated in Figures~\ref{fig:cong2} and~\ref{fig:ladder}.
\begin{lemma}
\label{lem:peo2perm}
Let $G=([n],E)$ be a graph in perfect elimination order.
With any acyclic orientation $D\in\AO_G$ we associate a permutation $\pi_D\in S_n$ as follows:
If $n=0$ then $\pi_D:=\varepsilon$, and if $n\geq 1$ we consider three cases:
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item if the vertex~$n$ is a sink in $D$, then $\pi_D:=c_n(\pi_{D_{n-1}})$;
\item if the vertex~$n$ is a source in $D$, then $\pi_D:=c_1(\pi_{D_{n-1}})$;
\item otherwise, $\pi_D:=c_i(\pi_{D_{n-1}})$, where $i$ is the position in $\pi_{D_{n-1}}$ of the unique out-neighbor of $n$ in the transitive reduction $T_D$.
\end{enumerate}
Then the map $\AO_G\to S_n:D\mapsto \pi_D$ is injective, and
\begin{equation}
\label{eq:PiG}
\Pi_G:=\{\pi_D \mid D\in\AO_G\}
\end{equation}
is a zigzag language of permutations.
\end{lemma}
If the vertex~$n$ is isolated in~$G$, then it is both a sink and a source, in which case we use the encoding stated under~(i), and then the special condition~(z2) in the definition of zigzag languages applies.
Observe that $\pi_D$ is a linear extension of the poset defined by the transitive closure of~$D$ (whose cover graph is~$T_D$), and that the orientation~$D$ can be retrieved from $\pi_D$ by orienting every edge~$ij$ of $G$ towards the vertex in~$\{i,j\}$ that is further to the right in~$\pi_D$.
Also note that for any acyclic orientation~$D$ in which vertex~$i$ is a source or a sink in~$D_i$ for all~$i\in[n]$, the permutation~$\pi_D$ is peak-free.
In particular, if $i$ is a sink in~$D_i$ for all~$i\in[n]$, then $\pi_D=\ide_n$ is the identity permutation.
As remarked before, any of those permutations can serve as initial permutation~$\pi_0$ for Algorithm~J.
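For completeness, the encoding $D\mapsto\pi_D$ can be written out recursively. The following Python sketch is our own transliteration of the three cases of the lemma, using the edge-to-head representation and the peo-adjacency dictionary from the earlier sketches; in case~(iii) it uses the observation that the unique out-neighbor of~$n$ in~$T_D$ is the leftmost out-neighbor of~$n$ in~$\pi_{D_{n-1}}$.
\begin{verbatim}
def perm_of_orientation(peo_adj, D):
    # peo_adj[i] = set of neighbors j < i of vertex i (perfect elimination
    # order); D maps each edge (u, v), u < v, to its head vertex.
    # Returns the permutation pi_D from the lemma above, as a list.
    n = len(peo_adj)
    if n == 0:
        return []
    pi = perm_of_orientation({i: peo_adj[i] for i in range(1, n)},
                             {e: h for e, h in D.items() if n not in e})
    N = set(peo_adj[n])
    outs = {u for u in N if D[(u, n)] == u}      # out-neighbors of n
    if not outs:                                 # n is a sink: append n
        return pi + [n]
    if outs == N:                                # n is a source: prepend n
        return [n] + pi
    # insert n directly before its unique out-neighbor in the transitive
    # reduction, i.e., before the leftmost out-neighbor of n in pi
    i = min(pi.index(u) for u in outs)
    return pi[:i] + [n] + pi[i:]
\end{verbatim}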
\begin{theorem}
For every graph $G = ([n],E)$ in perfect elimination order, Algorithm~J with input~$\Pi_G$ as defined in~\eqref{eq:PiG} generates a sequence of permutations $\pi_{D_1}, \pi_{D_2}, \ldots$, where $D_1,D_2,\ldots \in \AO_G$ such that $D_1,D_2,\ldots $ is a Hamilton path in the flip graph on acyclic orientations of~$G$, or equivalently, on the skeleton of the graphical zonotope~$Z(G)$.
\end{theorem}
\subsubsection{Efficient implementation}
\label{sec:SSW-algo}
We can now use the history-free implementation of Algorithm~J developed in~\cite[Sec.~5.1+8.7]{perm_series_iii} to compute the Savage-Squire-West Gray code efficiently; see the pseudocode stated as Algorithm~A.
In the following, for a chordal graph~$G=([n],E)$ in perfect elimination order, we write $N_i$ for the set of neighbors of~$i$ in~$G_i$.
For simplicity, the algorithm assumes that~$G$ is connected.
We remark that the case of disconnected chordal graphs~$G$ can be handled with slight adjustments, yielding the same runtime guarantees.
For details, see our C++ implementation~\cite{cos_orient}.
The algorithm maintains the current acyclic orientation~$D=([n],A)$ of~$G$ as an $n\times n$ adjacency matrix~$A$ with entries $a_{i,j}\in\{0,1\}$, where $a_{i,j}=1$ if and only if the arc $i\rightarrow j$ is present in~$D$.
The initial acyclic orientation~$D$ of~$G$ defined in step~A1 makes the vertex~$j$ a sink in~$D_j$ for all $j\in [n]$.
The corresponding permutation~$\pi_D$ is the identity permutation~$\pi_D=\ide_n$.
After the initialization in step~A1, the algorithm loops through steps~A2--A6, where A2 visits the current acyclic orientation~$D$, and steps~A3--A6 update the data structures before the next visit.
In step~A3, the auxiliary array~$s=(s_1,\ldots,s_n)$ is used to determine which vertex~$j$ is selected to have one of the arcs incident with~$j$ in~$D_j$ being flipped.
The array~$o=(o_1,\ldots,o_n)$ keeps track, for each vertex~$j\in[n]$, of whether its current zigzag movement is from sink to source, in which case $o_j=\dirl$, or from source to sink, in which case $o_j=\dirr$; recall Figure~\ref{fig:acyclic-flip}.
Once a vertex~$j$ has become source or sink in~$D_j$, before flipping any arcs incident with it, in step~A4 the algorithm determines the transitive reduction of the clique in~$D_j$ induced by the vertices in~$N_j$.
Computing the transitive reduction (which is a path, i.e., a total order) amounts to sorting the set~$N_j$, using for comparisons the orientations of the arcs between vertices~$i,i'\in N_j$, which can be queried in constant time by reading the entries~$a_{i,i'}$ of the adjacency matrix.
This sorting happens exactly once at the beginning of each zigzag movement, and the resulting total order is stored in the array~$T_j$.
In each execution of step~A5, the next arc incident with~$j$ in the precomputed list~$T_j$ is flipped, using the index~$t_j$ into the list~$T_j$.
Once $t_j$ reaches the maximum value~$t_j=|N_j|$, i.e., the last entry of the list~$T_j$, which means that~$j$ has now become a source or sink in~$D_j$, then the direction~$o_j$ of the zigzag movement for~$j$ is reversed in step~A6, and the array~$s$ is updated accordingly (see~\cite{perm_series_iii} for details).
\begin{algo}{\bfseries Algorithm~A}{History-free arc flips}
Given a connected chordal graph~$G=([n],E)$ in perfect elimination order, this algorithm generates all acyclic orientations of~$G$ by arc flips in the order given by the Savage-Squire-West construction (recall Section~\ref{sec:acyclic}).
It maintains the current acyclic orientation~$D=([n],A)$ of~$G$ as an adjacency matrix~$A=(a_{i,j})_{i,j\in[n]}$, with $a_{i,j}\in\{0,1\}$, total orderings~$T_j$ of $N_j$ and an index~$t_j$, $0\leq t_j\leq |N_j|$, into the array~$T_j$ for all $j=1,\ldots,n$, as well as auxiliary arrays $o=(o_1,\ldots,o_n)$ and $s=(s_1,\ldots,s_n)$.
\begin{enumerate}[label={\bfseries A\arabic*.}, leftmargin=8mm, noitemsep, topsep=3pt plus 3pt]
\item{} [Initialize] For $i,j=1,\ldots,n$ set $a_{i,j}\gets 0$.
Then for $j=1,\ldots,n$, orient all arcs incident with~$j$ in~$D_j$ towards~$j$, i.e., for all $i\in N_j$ set $a_{i,j}\gets 1$.
Also set $t_j\gets 0$, $o_j\gets \dirl$, and $s_j\gets j$ for $j=1,\ldots,n$.
\item{} [Visit] Visit the current acyclic orientation~$D=([n],A)$.
\item{} [Select vertex] Set $j\gets s_n$, and terminate if $j=1$.
\item{} [Sort neighbors] If $t_j=0$, compute the transitive reduction $v_1\rightarrow v_2\rightarrow \cdots\rightarrow v_k$ of the clique in~$D_j$ formed by the vertices in~$N_j$ via sorting, using the adjacency matrix entries~$a_{i,i'}$, $i,i'\in N_j$, for comparisons.
If $o_j=\dirl$, set $T_j\gets (v_k,v_{k-1},\ldots,v_1)$, and if $o_j=\dirr$ set $T_j\gets (v_1,v_2,\ldots,v_k)$.
\item{} [Flip arc] Set $t_j\gets t_j+1$.
In the current acyclic orientation~$D$, flip the arc between~$j$ and~$i\gets T_{j,t_j}$, i.e., if $o_j=\dirl$ set $a_{i,j}\gets 0$ and $a_{j,i}\gets 1$, whereas if $o_j=\dirr$ set $a_{i,j}\gets 1$ and $a_{j,i}\gets 0$.
\item{} [Update $o$ and $s$] Set $s_n\gets n$.
If $t_j=|N_j|$, then if $o_j=\dirl$ ($j$ has become source in~$D_j$) set $o_j\gets \dirr$, and if $o_j=\dirr$ ($j$ has become sink in~$D_j$) set $o_j\gets \dirl$, and in both cases set $t_j\gets 0$, $s_j\gets s_{j-1}$ and $s_{j-1}\gets j-1$. Go back to~A2.
\end{enumerate}
\end{algo}
\begin{theorem}
\label{thm:SSW-algo}
Given a connected chordal graph~$G=([n],E)$ in perfect elimination order, Algorithm~A visits each acyclic orientation of~$G$ in time~$\cO(\log \omega)$ on average, where $\omega=\omega(G)$ is the clique number of~$G$.
\end{theorem}
\begin{proof}
Steps~A2, A3, A5 and~A6 clearly take only constant time.
The sorting step~A4 takes time~$\cO(d\log d)$, where $d=|N_j|$ is the degree of the vertex~$j$ in~$D_j$.
This iteration of the main loop is followed by~$d-1$ later iterations of the main loop in which vertex~$j$ is considered but step~A4 is skipped because~$t_j>0$ (specifically, this happens for $t_j\in\{1,\ldots,d-1\}$).
So overall the algorithm visits~$d$ acyclic orientations in time~$\cO(d\log d)$, which is $\cO(\log d)$ on average.
Clearly, we have $d\leq \omega(G)$.
\end{proof}
The space required by Algorithm~A to store the adjacency matrix of~$G$ is clearly~$\cO(n^2)$, and the initialization time spent in step~A1 is~$\cO(n^2)$.
Testing whether an arbitrary graph~$G$ is chordal, and if so computing a perfect elimination ordering for~$G$, can be done in time~$\cO(m+n)$ by lexicographic breadth-first-search~\cite{MR408312}, where $m$ is the number of edges of~$G$.
Clearly, $\cO(m+n)$ is dominated by the initialization time~$\cO(n^2)$ of Algorithm~A.
\section{Acyclic orientations of hypergraphs}
\label{sec:aoh}
In this section we establish our first main result, a Gray code for acyclic orientations of certain hypergraphs (Theorem~\ref{thm:mainhyper} below).
The hypergraphs we consider admit a vertex ordering that we refer to as hyperfect elimination order.
When specialized to graphs, this corresponds to a perfect elimination order, and we recover the Savage-Squire-West construction.
When specialized to graphical building sets, we recover the Gray code for elimination trees of chordal graphs described in~\cite{DBLP:conf/soda/CardinalMM22} (Lemma~\ref{lem:BG-elim}).
The algorithm also applies to chordal building sets (Theorem~\ref{thm:mainbuild}) and yields Hamilton paths on the corresponding hypergraphic polytopes called chordal nestohedra.
\subsection{Flips in acyclic orientations of hypergraphs}
Let $\cH=(V,\cE)$ be a hypergraph, where $\cE\seq 2^V$.
An \emph{orientation} of $\cH$ is a function $h:\cE\to V$ such that $h(A)\in A$ for every $A\in\cE$; see Figure~\ref{fig:hyper}~(a).
We refer to $h(A)$ as the \emph{head} of hyperedge $A$ in the orientation.\footnote{Orientations of hypergraphs have been defined differently in similar contexts. In particular, the definition of hypergraph orientation used by Benedetti, Bergeron, and Machacek~\cite{MR3960512} is more general than ours. In their terminology, we restrict to orientations with heads of size one only. The general definition is most useful for a complete characterization of the faces of the hypergraphic polytope. Rehberg~\cite{rehberg_2021} refers to our definition as a \emph{heading} instead of an orientation.}
A \emph{path} in an orientation~$h$ of~$\cH$ is a sequence~$(v_1,\ldots,v_k)$ of distinct vertices for which there are hyperedges~$A_1,\ldots,A_{k-1}\in\cE$ such that $v_i,v_{i+1}\in A_i$ and $h(A_i)=v_{i+1}$ for all $i=1,\ldots,k-1$.
A~\emph{cycle} is such a path with $k\geq 2$ and the additional property that $v_k,v_1\in A_k$ and $h(A_k)=v_1$ for some hyperedge~$A_k\in\cE$.
By this definition a loop does not count as a cycle.
An orientation of~$\cH$ is \emph{acyclic} if it does not contain any cycles.
Equivalently, the orientation~$h$ is acyclic if the digraph formed by all arcs~$i\rightarrow j$ for every pair of distinct vertices~$i,j\in V$ with $i,j\in A$ and $j=h(A)$ for some hyperedge~$A\in \cE$ is acyclic; see Figure~\ref{fig:hyper}~(b).
We write $\AO_\cH$ for the set of all acyclic orientations of the hypergraph~$\cH$.
Given an orientation~$h$ of a hypergraph~$\cH=([n],\cE)$, a \emph{pair flip} involves a pair of distinct vertices~$(i,j)$, $i,j\in [n]$, and maps an orientation~$h$ to a distinct orientation~$h'$ such that
\begin{equation}
\label{eq:pair-flip}
h'(A) :=
\begin{cases}
i & \text{\ if\ } h(A) = j \text{\ and\ } i\in A, \\
h(A) & \text{\ otherwise,}
\end{cases}
\end{equation}
for all $A\in\cE$; see Figure~\ref{fig:hyper2}~(c).
Note that in order for~$h'$ to be distinct from~$h$, the definition requires that there must exist a hyperedge~$A\in\cE$ with $h(A)=j$ (which then satisfies $h'(A)=i$).
We define a \emph{flip graph} on $\AO_\cH$ by joining two acyclic orientations of~$\cH$ with an edge if and only if they differ in a pair flip.
The following lemma identifies flippable pairs in an acyclic orientation of a hypergraph.
It generalizes the situation for acyclic orientations of graphs, where flippable arcs were precisely the arcs in the transitive reduction.
An acyclic orientation~$h$ of a hypergraph~$\cH=([n],\cE)$ yields a poset on~$[n]$ defined as the transitive closure of the relation
\begin{equation*}
i\prec j \Longleftrightarrow \text{there exists a hyperedge $A\in \cE$ with $i,j\in A$ and $j=h(A)$}.
\end{equation*}
We denote this poset by $P_{\cH,h}$, and we write $i\prec j$ to express comparabilities in this poset in infix notation; see Figure~\ref{fig:hyper}~(c).
Note that if the pair~$(\cH,h)$ is a simple digraph~$D$, then the poset~$P_{\cH,h}$ is the poset defined by the transitive closure of~$D$.
\begin{lemma}
\label{lem:flippable}
A pair~$(i,j)$ is flippable in an acyclic orientation~$h$ of~$\cH$ if and only if $j$ covers~$i$ in the poset~$P_{\cH,h}$.
\end{lemma}
\begin{proof}
$(\Leftarrow)$ Suppose that $j$ covers~$i$ in~$P_{\cH,h}$ and let $h'$ be the orientation obtained after flipping the pair~$(i,j)$.
Suppose for the sake of contradiction that $h'$ is not acyclic.
Then there must exist a path~$(v_1,\ldots,v_k)$, $k\geq 3$, with $v_1=i$ and $v_k=j$ in the orientation~$h$ of~$\cH$.
But this implies that the relation~$i\prec j$ is obtained by transitivity, i.e., $j$ does not cover~$i$ in~$P_{\cH,h}$, a contradiction.
$(\Rightarrow)$ Suppose that $(i,j)$ is flippable, and suppose for the sake of contradiction that $j$ does not cover $i$ in~$P_{\cH,h}$.
Then the relation $i\prec j$ is obtained by transitivity, i.e., there is a path $(v_1,\ldots,v_k)$, $k\geq 3$, with $v_1=i$ and $v_k=j$ in the orientation~$h$ of~$\cH$.
After flipping~$(i,j)$, this path creates a cycle in the resulting orientation~$h'$.
Specifically, if there is a hyperedge~$A\in\cE$ with $i,v_{k-1}\in A$ and $h(A)=j$, then $(v_1,\ldots,v_{k-1})$ is a cycle in~$h'$, and otherwise $(v_1,\ldots,v_k)$ is a cycle in~$h'$.
\end{proof}
Note that there is a surjective map $S_n\to\AO_\cH$, defined as follows.
For a permutation $\pi\in S_n$, every hyperedge $A\in\cE$ is oriented towards $h(A):=\argmax_{i\in A} \pi^{-1}(i)$, i.e., towards the element from~$A$ that appears rightmost in~$\pi$; the resulting orientation is acyclic, as $\pi$ is a linear extension of the poset~$P_{\cH,h}$.
Every acyclic orientation $h\in\AO_\cH$ can be obtained in this way.
Any linear extension $\pi\in\ext (P_{\cH,h})$ is such that orienting the hyperedges according to $\pi$ yields the orientation~$h$.
The set $\{\ext (P_{\cH,h}) \mid h\in\AO_\cH\}$ is therefore a partition of~$S_n$ into equivalence classes.
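A minimal sketch of this surjection, under the same encoding as before (a permutation is given as a list of the vertices $1,\ldots,n$ in some order; the function name is ours):
\begin{verbatim}
def orientation_from_permutation(edges, pi):
    # pos plays the role of pi^{-1}; each hyperedge is oriented towards
    # the element of A that appears rightmost in pi.
    pos = {v: p for p, v in enumerate(pi)}
    return {A: max(A, key=pos.__getitem__) for A in edges}
\end{verbatim}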
\subsection{Hypergraphic polytopes}
Generalizing the situation for acyclic orientations of graphs described in Section~\ref{sec:zono}, the acyclic orientations of a hypergraph~$\cH$ are in one-to-one correspondence with the vertices of a polytope associated with~$\cH$, and the flip graph on acyclic orientations is isomorphic to the skeleton of this polytope.
The \emph{hypergraphic polytope}~\cite{MR3960512,aguiar_ardila_2017} of a hypergraph $\cH=([n],\cE)$ can be defined as a Minkowski sum of simplices.
Specifically, with a subset $S\seq [n]$, we associate the standard simplex $\Delta_S := \conv \{e_i \mid i\in S\}$, and the hypergraphic polytope $Z(\cH)$ of~$\cH$ can be defined as $Z(\cH)=\sum_{A\in\cE} \Delta_A$.
Hypergraphic polytopes can also be defined as convex hulls of in-degree sequences.
For an orientation~$h$ of the hypergraph~$\cH$, let $d^-(i):=|\{A\in\cE \mid h(A)=i\}|$ be the \emph{in-degree} of vertex~$i$.
The vector $\delta_{\cH,h}:=(d^-(1),\ldots,d^-(n))\in\mathbb{N}^n$ is the \emph{in-degree sequence} of~$h$.
Then the hypergraphic polytope of~$\cH$ is
\begin{equation}
\label{eq:hypervert}
Z(\cH)=\conv \{\delta_{\cH,h} \mid \text{$h$ orientation of $\cH$} \}.
\end{equation}
Again, this definition does not change if we require the orientations~$h$ to be acyclic.
A proof of these facts is implicit in previous works~\cite{MR3960512} and spelled out by Rehberg~\cite{rehberg_2021} (Proposition 3.9).
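As a brute-force illustration of~\eqref{eq:hypervert}, the following sketch (all names are ours, and the enumeration is only feasible for small~$n$) runs over all permutations, computes the induced acyclic orientation and its in-degree sequence, and collects the distinct sequences; by the correspondence discussed above, these are exactly the vertices of~$Z(\cH)$.
\begin{verbatim}
from itertools import permutations

def hypergraphic_polytope_vertices(edges, n):
    verts = set()
    for pi in permutations(range(1, n + 1)):
        pos = {v: p for p, v in enumerate(pi)}
        heads = [max(A, key=pos.__getitem__) for A in edges]  # orientation h_pi
        d = [0] * (n + 1)
        for j in heads:
            d[j] += 1                                         # in-degree sequence
        verts.add(tuple(d[1:]))
    return verts

# Example: for H = ([3], {{1,2}, {1,2,3}}) this yields 4 vertices,
# one for each of the 4 acyclic orientations of H.
print(hypergraphic_polytope_vertices([frozenset({1, 2}), frozenset({1, 2, 3})], 3))
\end{verbatim}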
The following lemma is a special case of Theorem~2.18 in~\cite{MR3960512}.
\begin{lemma}[\cite{MR3960512}]
\label{lem:hyperflip}
The flip graph on acyclic orientations~$\AO_\cH$ of a hypergraph~$\cH$ is isomorphic to the skeleton of its hypergraphic polytope~$Z(\cH)$.
\end{lemma}
\subsection{Hypergraphs in hyperfect elimination order}
We now generalize the notion of perfect elimination order to hypergraphs~$\cH$, and prove that this order is a necessary and sufficient condition for the unique parent-child property to hold in the poset~$P_{\cH,h}$ of any acyclic orientation $h\in\AO_\cH$.
\subsubsection{Hyperfect elimination order}
\label{sec:hyperfect}
For a hypergraph~$\cH=([n],\cE)$ and an integer~$i\leq n$, we write $\cH_i$ for the subgraph of $\cH$ induced by the vertices in~$[i]$.
We say that $\cH=([n],\cE)$ is in \emph{hyperfect elimination order} if either $n=0$, or $n\geq 1$ and the following two conditions are satisfied:
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item \label{itm:heo1}
For any two hyperedges $A,B\in\cE$ with $n\in A\cap B$ and any two distinct vertices $a\in A-n$, $b\in B-n$ there is a hyperedge~$X\in\cE$ such that $\{a,b\} \seq X \seq (A \cup B)-n$;
\item \label{itm:heo2}
$\cH_{n-1}$ is in hyperfect elimination order.
\end{enumerate}
We emphasize that the hyperedges~$A,B\in\cE$ in this definition are not required to be distinct, i.e., this condition must hold also for hyperedges~$A=B$.
Observe that if the hypergraph is a graph, that is, if all hyperedges have size two, then the first condition states that the neighbors of~$n$ must be pairwise adjacent, hence that $n$ is simplicial.
The hyperfect elimination order is therefore a generalization of the perfect elimination order for chordal graphs.
(For other generalizations of perfect elimination orders to hypergraphs, see e.g.~\cite{MR2603461}.)
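The recursive definition translates into a straightforward (exponential-time) test.
The following sketch (the function name is ours; hyperedges are again sets of integers) checks condition~\ref{itm:heo1} for the vertex~$n$ and recurses on~$\cH_{n-1}$ as in condition~\ref{itm:heo2}.
\begin{verbatim}
def is_hyperfect_elimination_order(n, edges):
    if n == 0:
        return True
    containing_n = [A for A in edges if n in A]
    for A in containing_n:
        for B in containing_n:          # A = B is allowed and must be tested too
            for a in A - {n}:
                for b in B - {n}:
                    if a == b:
                        continue
                    if not any({a, b} <= X <= (A | B) - {n} for X in edges):
                        return False
    # hyperedges of the induced subhypergraph H_{n-1} are those avoiding n
    return is_hyperfect_elimination_order(n - 1, [A for A in edges if n not in A])
\end{verbatim}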
\subsubsection{Unique parent-child property for hypergraphs}
We now generalize Lemma~\ref{lem:upc-peo} to hypergraphs in hyperfect elimination order.
We say that a hypergraph $\cH=([n],\cE)$ has the \emph{unique parent-child property} if either $n=0$, or $n\geq 1$ and the following two conditions are satisfied:
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item for every acyclic orientation~$h\in\AO_\cH$ the vertex~$n$ covers at most one element and is covered by at most one element in~$P_{\cH,h}$;
\item the hypergraph $\cH_{n-1}$ has the unique parent-child property.
\end{enumerate}
\begin{lemma}
\label{lem:upc-heo}
A hypergraph $\cH=([n],\cE)$ has the unique parent-child property if and only if it is in hyperfect elimination order.
\end{lemma}
\begin{proof}
$(\Leftarrow)$
Suppose that $\cH$ is in hyperfect elimination order.
Let $h$ be an acyclic orientation of $\cH$ and $a,b\in [n-1]$ two distinct vertices.
We first show that $n$ cannot cover~$a$ and~$b$ in~$P_{\cH,h}$.
Suppose for the sake of contradiction that $n$ covers both~$a$ and~$b$ in~$P_{\cH,h}$.
This means there are hyperedges $A,B\in \cE$ such that $\{n,a\}\seq A$, $\{n,b\}\seq B$ and $h(A)=h(B)=n$.
By the definition of hyperfect elimination order, it follows that there is a hyperedge~$X \in \cE$ such that $\{a,b\}\seq X \seq (A\cup B)-n$.
Note that $h(X)\notin \{a,b\}$; otherwise we would have either $a\prec b$ or~$b\prec a$, which implies that $n$ cannot cover both~$a$ and~$b$, a contradiction.
As $h(X) \in (A\cup B)-n$, we also conclude that $h(X)\prec n$.
Thus, $a\prec h(X)\prec n$ and $b\prec h(X)\prec n$, which means that $n$ covers neither~$a$ nor~$b$, a contradiction.
We now show that $n$ cannot be covered by~$a$ and~$b$.
Suppose for the sake of contradiction that $n$ is covered by both~$a$ and~$b$ in~$P_{\cH,h}$.
This means there are hyperedges $A,B \in \cE$ such that $\{n,a\}\seq A$, $\{n,b\}\seq B$, $h(A)=a$ and $h(B)=b$.
By the definition of hyperfect elimination order, it follows that there is a hyperedge~$X \in \cE$ such that $\{a,b\}\seq X \seq (A\cup B)-n$.
Note that if $h(X)\in A$, then we must have~$h(X)=a$.
Indeed, if $h(X)\neq a$, then we would have $a\prec h(X)$ and $h(X)\prec a$ as $h(A)=a$ and $h(X)\in A$, a contradiction.
Symmetrically, if $h(X)\in B$, then we must have~$h(X)=b$.
It follows that $h(X)\in \{a,b\}$.
Hence, we either have $a\prec b$ or $b\prec a$, which implies that $n$ cannot be covered by both~$a$ and~$b$, a contradiction.
Combining these observations, we obtain that $n$ covers at most one element and is covered by at most one element in~$P_{\cH,h}$.
By iterating this argument for~$\cH_{n-1}$, we conclude that $\cH$ has the unique parent-child property.
$(\Rightarrow)$
Suppose that $\cH$ has the unique parent-child property, and suppose for the sake of contradiction that $\cH$ is not in hyperfect elimination order.
Without loss of generality, suppose that there are hyperedges $A,B \in \cE$ with $n\in A\cap B$ and two distinct vertices $a \in A-n$, $b \in B-n$ such that
\begin{equation}
\label{eq:heo-neg}
\text{no hyperedge $X\in\cE$ satisfies $\{a,b\}\seq X\seq (A\cup B)-n$.}
\end{equation}
We construct an acyclic orientation~$h$ of~$\cH$ such that $n$ covers at least two elements in~$P_{\cH,h}$.
To do so, recall that a permutation~$\pi$ of $[n]$ induces an acyclic orientation~$h_\pi$ by defining $h_\pi(X):=\argmax_{i \in X} \pi^{-1}(i)$, and that~$\pi$ is a linear extension of~$P_{\cH,h_{\pi}}$.
Let $R:=(A \cup B) \setminus \{a,b,n\}$ and $S:=[n]\setminus (A\cup B)$, and note that $R,S,\{a\},\{b\},\{n\}$ is a partition of~$[n]$.
Consider the permutation~$\pi$ on~$[n]$ defined by
\begin{equation}
\label{eq:piRS}
\pi^{-1}(r)<\pi^{-1}(a)<\pi^{-1}(b)<\pi^{-1}(n)<\pi^{-1}(s)
\end{equation}
for all $r\in R$ and~$s\in S$, and let $h:=h_\pi$ be the acyclic orientation induced by~$\pi$.
We claim that $n$ covers both~$a$ and~$b$ in~$P_{\cH,h}$.
First note that $a,b\prec n$, as $h(A)=h(B)=n$.
Furthermore, there can be no $x\in[n]\setminus\{a,b\}$ with $a\prec x\prec n$ or $b\prec x\prec n$, as $\pi$ is a linear extension of~$P_{\cH,h}$ and $b$ is the only element that lies between~$a$ and~$n$ in~$\pi$.
We conclude that $n$ covers~$b$.
Furthermore, $n$ covers~$a$ unless $b$ covers~$a$.
However, if $b$ covers~$a$, then by~\eqref{eq:piRS} there must be a hyperedge~$X\in\cE$ with $\{a,b\}\seq X\seq (A\cup B)-n$ and $h(X)=b$, contradicting~\eqref{eq:heo-neg}.
This completes the proof.
\end{proof}
Note that in the case of graphs in perfect elimination order, the proof of Lemma~\ref{lem:upc-peo} used the fact that the neighbors of $n$ are always totally ordered in~$P_D$ (whose transitive reduction is~$T_D$).
This property does not hold for hypergraphs in hyperfect elimination order.
Consider for instance the hypergraph $\cH=([4],\{12, 123, 1234\})$, which is in hyperfect elimination order.
If we consider the acyclic orientation $h$ such that $h(12)=h(123)=1$ and $h(1234)=4$, we have that~2 and~3 are incomparable in~$P_{\cH,h}$.
\subsection{Generation algorithm}
\subsubsection{Generation of acyclic orientations of hypergraphs}
Using Lemma~\ref{lem:upc-heo}, we can describe a simple recursive algorithm generating a Hamilton path in the flip graph on acyclic orientations of a hypergraph in hyperfect elimination order; see Figure~\ref{fig:tree}.
This algorithm generalizes the Savage-Squire-West construction for chordal graphs~\cite{MR1267311} described in Section~\ref{sec:acyclic}.
\begin{figure}
\centering
\makebox[0cm]{
\includegraphics{tree}
}
\caption{Gray code of acyclic orientations of a hypergraph~$\cH$ in hyperfect elimination order.
The levels of the tree correspond to the induction steps.
At each step the figure shows the acyclic orientation~$h$ of~$\cH$, and the corresponding poset~$P_{\cH,h}$ and permutation~$\pi_h$.
The notation $j\dird$ indicates a pair flip~$(i,j)$ where $i$ is the unique child of~$j$ in~$P_{\cH_j,h_j}$ by Lemma~\ref{lem:upc-heo}.
Similarly, $j\diru$ indicates a pair flip~$(j,i)$ where $i$ is the unique parent of~$j$ in~$P_{\cH_j,h_j}$.
}
\label{fig:tree}
\end{figure}
We consider a hypergraph $\cH=([n],\cE)$ in hyperfect elimination order, and proceed by induction on~$n$.
The base case $n=0$ is trivial.
For $n\geq 1$, suppose we have a Hamilton path in the flip graph on acyclic orientations of~$\cH_{n-1}$.
Every acyclic orientation $h_{n-1}\in\AO_{\cH_{n-1}}$ can be extended to an acyclic orientation~$h\in\AO_\cH$, in particular in the following two ways:
\begin{itemize}[leftmargin=5mm, noitemsep, topsep=1pt plus 1pt]
\item for all $A\in\cE$ such that $n\in A$, we define $h(A):=n$, in which case $n$ is maximal in~$P_{\cH,h}$;
\item consider a permutation $\pi\in\ext (P_{\cH_{n-1},h_{n-1}})$ and for all $A\in\cE$ such that $n\in A$, let $h(A):=\argmax_{i\in A-n} \pi^{-1}(i)$, in which case $n$ is minimal in~$P_{\cH,h}$.
\end{itemize}
Furthermore, we can get from the former to the latter by iteratively flipping $n$ with the unique vertex it covers in~$P_{\cH,h}$, until there is no such vertex and $n$ becomes minimal in~$P_{\cH,h}$.
Conversely, we can move $n$ up by iteratively flipping $n$ and the unique vertex that covers it until $n$ becomes maximal in~$P_{\cH,h}$.
We can therefore replace any acyclic orientation in the Hamilton path in the flip graph on $\AO_{\cH_{n-1}}$ by a sequence of acyclic orientations in which vertex $n$ either moves down from maximal to minimal in~$P_{\cH,h}$, or up from minimal to maximal.
These sequences can be concatenated to yield a Hamilton path in the flip graph on $\AO_\cH$, in which $n$ `zigzags' alternately up and down in~$P_{\cH,h}$.
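The elementary moves in this description can be made concrete as follows.
The sketch below (all names are ours) computes the strict order relation of~$P_{\cH,h}$ by a naive transitive closure, finds a vertex covered by~$n$ (of which there is at most one for hypergraphs in hyperfect elimination order by Lemma~\ref{lem:upc-heo}), and applies the pair flip from~\eqref{eq:pair-flip}; iterating this moves~$n$ from maximal down to minimal, and reading the resulting list backwards gives the upward moves.
\begin{verbatim}
def strict_order(edges, h, n):
    # below[j] = set of all i with i < j in P_{H,h} (h assumed acyclic)
    below = {v: set() for v in range(1, n + 1)}
    for A in edges:
        below[h[A]].update(x for x in A if x != h[A])
    changed = True
    while changed:                      # naive fixpoint for the transitive closure
        changed = False
        for j in range(1, n + 1):
            new = set().union(*(below[i] for i in below[j]))
            if not new <= below[j]:
                below[j] |= new
                changed = True
    return below

def move_n_down(edges, h, n):
    # from an orientation with n maximal, flip n with a vertex it covers
    # until n is minimal; returns the list of visited orientations
    path = [dict(h)]
    while True:
        below = strict_order(edges, path[-1], n)
        covered = [i for i in below[n]
                   if not any(i in below[x] for x in below[n] if x != i)]
        if not covered:
            return path
        i = covered[0]                  # unique in hyperfect elimination order
        path.append({A: (i if (path[-1][A] == n and i in A) else path[-1][A])
                     for A in edges})
\end{verbatim}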
\subsubsection{Algorithm~J for acyclic orientations of hypergraphs}
The recursive algorithm described above can be cast as a special case of Algorithm~J in the Hartung-Hoang-M\"utze-Williams framework~\cite{MR4391718}.
\begin{lemma}
\label{lem:heo2perm}
Let $\cH=([n],\cE)$ be a hypergraph in hyperfect elimination order.
For an orientation $h\in\AO_\cH$, let $h_{n-1}$ be its restriction to~$\cH_{n-1}$.
With any acyclic orientation $h\in\AO_\cH$ we associate a permutation $\pi_h\in S_n$ as follows:
If $n=0$ then $\pi_h:=\varepsilon$, and if $n\geq 1$ we consider three cases:
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item if the vertex~$n$ is maximal in~$P_{\cH,h}$, then $\pi_h:=c_n(\pi_{h_{n-1}})$;
\item if the vertex~$n$ is minimal in~$P_{\cH,h}$, then $\pi_h:=c_1(\pi_{h_{n-1}})$;
\item otherwise, $\pi_h := c_i(\pi_{h_{n-1}})$, where $i$ is the position in $\pi_{h_{n-1}}$ of the unique vertex that covers~$n$ in~$P_{\cH,h}$.
\end{enumerate}
Then the map $\AO_\cH\to S_n: h\mapsto \pi_h$ is injective, and
\begin{equation}
\label{eq:PiH}
\Pi_\cH := \{ \pi_h \mid h\in\AO_\cH \}
\end{equation}
is a zigzag language of permutations.
\end{lemma}
If the vertex~$n$ is isolated in~$P_{\cH,h}$, then it is both maximal and minimal, in which case we use the encoding stated under~(i), and then the special condition~(z2) in the definition of zigzag languages applies.
The definition of the mapping $h\mapsto \pi_h$ is illustrated in Figure~\ref{fig:tree}.
Lemma~\ref{lem:heo2perm} can be proved straightforwardly by induction; we omit the details.
Note that by definition, we have $\pi_h\in \ext (P_{\cH,h})$.
Therefore, the language $\Pi_\cH$ defines a set of representatives for the equivalence classes of permutations in~$S_n$.
\begin{lemma}
\label{lem:algJ-flip}
When running Algorithm~J with input~$\Pi_\cH$ as defined in~\eqref{eq:PiH}, then for any two permutations $\pi_h, \pi_{h'}$ that are visited consecutively, the corresponding acyclic orientations~$h$ and~$h'$ of~$\cH$ are adjacent in the flip graph on~$\AO_\cH$.
\end{lemma}
To prove Lemma~\ref{lem:algJ-flip}, we introduce the following lemma from~\cite{perm_series_iii}.
We say that a jump of a value~$j$ in a permutation~$\pi\in S_n$ is \emph{clean} if for every $k=j+1,\ldots,n$, the value~$k$ is either to the left or to the right of all values smaller than~$k$ in~$\pi$.
\begin{lemma}[{\cite[Lemma~24~(d)]{perm_series_iii}}]
\label{lem:clean-jumps}
For any zigzag language $L_n\seq S_n$, all jumps performed by Algorithm~J are clean.
\end{lemma}
Furthermore, for proving Lemma~\ref{lem:algJ-flip} we establish the following auxiliary lemma.
\begin{lemma}
\label{lem:jump-flip}
Let $\cH=([n],\cE)$ be a hypergraph in hyperfect elimination order, and let $\cH_\nu=([\nu],\cE_\nu)$ for some $\nu \in [n]$.
Let $\pi'$ be obtained from~$\pi_{h_\nu}$ by a clean right jump of the value~$j=\pi_{h_\nu}(i)$ by $d$ steps.
Suppose that for all $a\in\{\pi_{h_\nu}(i+2),\ldots,\pi_{h_\nu}(i+d)\}$ there is no hyperedge~$A\in\cE_\nu$ with $j,a\in A$ and $h_\nu(A)=a$, and either $i+d=\nu$, or for $b:=\pi_{h_\nu}(i+d+1)$ we have $b>j$ or there is a hyperedge~$A\in\cE_\nu$ with $j,b\in A$ and $h_\nu(A)=b$.
Then the jump in~$\pi_{h_\nu}$ is minimal, we have $\pi' \in \Pi_{\cH_\nu}$ and the acyclic orientation~$h'\in\AO_{\cH_\nu}$ obtained from~$h_\nu$ by the pair flip~$(j,\pi_{h_\nu}(i+1))$ satisfies $\pi'=\pi_{h'}$.
\end{lemma}
\begin{proof}
We proceed by induction on~$\nu$.
The result is clear if $\nu=0$ or $\nu=1$, so we assume that~$\nu\geq 2$.
We distinguish two cases.
{\bf Case 1:} $j = \nu$.
Let $\hat \pi:=p(\pi_{h_\nu})=p(\pi')\in\Pi_{\cH_{\nu-1}}$.
By definition, we have that $\pi_{h_\nu}=c_i(\hat \pi)$ and $\pi_{h_\nu}(i+1)$ covers $\nu$ in~$P_{\cH_\nu,h_\nu}$.
Recall that by Lemma~\ref{lem:heo2perm} the mapping $h_\nu \mapsto \pi_{h_\nu}$ is a bijection between~$\AO_{\cH_\nu}$ and~$\Pi_{\cH_\nu}$ and that for every $A \in \cE_\nu$ we have
\begin{equation*}
h_\nu(A) = \argmax_{x \in A} \pi_{h_\nu}^{-1}(x).
\end{equation*}
Since $\pi_{h_\nu}(i+1)$ covers~$\nu$ in~$P_{\cH_\nu,h_\nu}$, by Lemma~\ref{lem:flippable} we can define $h':\cE_\nu \to [\nu]$ as the acyclic orientation that is obtained from~$h_\nu$ by the pair flip~$(\nu,\pi_{h_\nu}(i+1))$.
Using the definition~\eqref{eq:pair-flip}, we therefore obtain
\begin{equation}
\label{eq:h-flipped}
h'(A)=\begin{cases}
\nu & \text{if $h_\nu(A)=\pi_{h_\nu}(i+1)$ and $\nu \in A$},\\
h_\nu(A) & \text{otherwise}.
\end{cases}
\end{equation}
We aim to show that $\pi'$ is a linear extension of~$P_{\cH_\nu,h'}$.
Let $x,y \in [\nu]$ such that $x$ is covered by~$y$ in~$P_{\cH_\nu,h'}$.
As $x$ is covered by $y$, there exists a hyperedge $X\in \cE_\nu$ with $x,y \in X$ and $h'(X)=y$.
We first consider the case~$y\neq \nu$.
In this case we have $h'(X)=h_\nu(X)=y$.
Hence, $x\prec_{h_\nu} y$ and consequently $\pi_{h_\nu}^{-1}(x)<\pi_{h_\nu}^{-1}(y)$.
If $x\neq \nu$, then we trivially have $\pi_{h'}^{-1}(x)<\pi_{h'}^{-1}(y)$.
It remains to consider the case $x=\nu$.
We define
\begin{equation*}
\begin{split}
C'&:=\{\pi_{h_\nu}(i+2),\ldots,\pi_{h_\nu}(i+d)\}, \\
C&:=\{\pi_{h_\nu}(i+1)\}\cup C'.
\end{split}
\end{equation*}
Note that $y\neq \pi_{h_\nu}(i+1)$, otherwise the definition~\eqref{eq:h-flipped} would imply $h'(X)=x$ and consequently $y\prec_{h'} x$.
Furthermore, from the assumption that for all $a\in C'$ there is no hyperedge~$A\in\cE_\nu$ with $\nu,a\in A$ and $h_\nu(A)=a$, we obtain that $y\notin C'$.
Combining these two observations shows that $y\notin C$, and therefore we have $\pi_{h'}^{-1}(x)<\pi_{h'}^{-1}(y)$, as desired.
We now consider the case~$y=\nu$.
There are two possible subcases.
If $\pi_{h_\nu}(i+1)\in X$, then by the definition~\eqref{eq:h-flipped} we have $h_\nu(X)=\pi_{h_\nu}(i+1)$ and consequently $\pi_{h_\nu}^{-1}(x)\leq i+1$, implying that $\pi_{h'}^{-1}(x)<\pi_{h'}^{-1}(y)$.
If $\pi_{h_\nu}(i+1)\notin X$, then we have $h'(X)=h_\nu(X)=y$, and consequently $\pi_{h_\nu}^{-1}(x)<i$, implying that $\pi_{h'}^{-1}(x)<\pi_{h'}^{-1}(y)$ as well.
We have shown that $\pi'$ is a linear extension of~$P_{\cH_\nu,h'}$.
We now use the assumption that either $i+d=\nu$, or for $b:=\pi_{h_\nu}(i+d+1)$ there is a hyperedge~$A\in\cE_\nu$ with $\nu,b\in A$ and $h_\nu(A)=b$ (the case $b>j=\nu$ is impossible).
In the first case $\nu$ is maximal in~$P_{\cH_\nu,h'}$.
In the second case we have $h'(A)=h_\nu(A)=b$ and therefore $\nu \prec_{h'} b$.
As $\nu=j$ and~$b$ are at neighboring positions in~$\pi'$, we see that $b=\pi_{h_\nu}(i+d+1)$ covers~$\nu$ in~$P_{\cH_\nu,h'}$.
Combining these observations shows that the jump in~$\pi_{h_\nu}$ is minimal, we have $\pi' \in \Pi_{\cH_\nu}$ and the acyclic orientation~$h'$ obtained from $h_\nu$ by the pair flip~$(j,\pi_{h_\nu}(i+1))$ satisfies $\pi'=\pi_{h'}$.
{\bf Case 2:} $j<\nu$.
Note that $p(\pi_{h_\nu})$ and $p(\pi')$ differ in a right jump of the value~$j$.
As the jump is clean by assumption, we have that $\pi_{h_\nu}^{-1}(\nu)=\pi'^{-1}(\nu) \in \{1,\nu\}$.
In fact, we may assume that $\pi_{h_\nu}^{-1}(\nu)=\pi'^{-1}(\nu)=\nu$, as the other case is analogous.
By induction, the jump in $p(\pi_{h_\nu})$ is minimal, we have $p(\pi') \in \Pi_{\cH_{\nu-1}}$ and the acyclic orientation $h':\cE_{\nu-1} \to [\nu-1]$ of~$\cH_{\nu-1}$ obtained from~$h_{\nu-1}$ by the pair flip $(j,\pi_{h_{\nu-1}}(i+1))=(j,\pi_{h_\nu}(i+1))$ satisfies $p(\pi')=\pi_{h'}\in \Pi_{\cH_{\nu-1}}$.
It follows that the jump in~$\pi_{h_\nu}$ is also minimal.
Moreover, we define
\begin{align*}
\hat h(A) & :=\begin{cases}
\nu & \text{if $\nu \in A$}, \\
h'(A) & \text{otherwise,} \\
\end{cases} \\
& =\begin{cases}
\nu & \text{if $\nu \in A$}, \\
j & \text{if $\nu \notin A$, $h_{\nu-1}(A)=\pi_{h_\nu}(i+1)$ and $j \in A$,} \\
h_{\nu-1}(A) & \text{otherwise,} \\
\end{cases} \\
& =\begin{cases}
j & \text{if $h_{\nu}(A)=\pi_{h_\nu}(i+1)$ and $j \in A$,} \\
h_\nu(A) & \text{otherwise,} \\
\end{cases}
\end{align*}
From the last line of this equation we see that $\hat h$ is obtained from~$h_\nu$ by the pair flip~$(j,\pi_{h_\nu}(i+1))$.
Furthermore, as $\pi'=c_\nu(\pi_{h'})$ we conclude that $\pi'=\pi_{\hat h} \in \Pi_{\cH_\nu}$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:algJ-flip}]
Let $h,h'\in\AO_\cH$, $j=\pi_h(i)$ and~$d$ be such that $\pi_h$ and $\pi_{h'}$ differ in a minimal jump of the value~$j$ by $d$ steps.
By Lemma~\ref{lem:clean-jumps} we can assume that the jump is clean.
Furthermore, we assume that the jump is a right jump, as the other case follows the same ideas.
By Lemma~\ref{lem:jump-flip}, applying the pair flip~$(j,\pi_h(i+1))$ to~$h$ yields an acyclic orientation~$h''\in\AO_\cH$ such that $\pi_{h''}$ is obtained from~$\pi_h$ by a minimal right jump of~$j$.
It follows that $h''=h'$, and the lemma is proved.
\end{proof}
Combining Theorem~\ref{thm:jump}, Lemma~\ref{lem:heo2perm}, and Lemma~\ref{lem:algJ-flip} yields our first main result.
\begin{theorem}
\label{thm:mainhyper}
For every hypergraph $\cH = ([n],\cE)$ in hyperfect elimination order, Algorithm~J with input~$\Pi_\cH$ as defined in~\eqref{eq:PiH} generates a sequence of permutations $\pi_{h_1}, \pi_{h_2}, \ldots$ with $h_1,h_2,\ldots \in \AO_\cH$, such that $h_1,h_2,\ldots$ is a Hamilton path in the flip graph on acyclic orientations of~$\cH$, or equivalently, on the skeleton of the hypergraphic polytope~$Z(\cH)$.
\end{theorem}
\subsection{Application to building sets and nestohedra}
\label{sec:bs}
In what follows, we use the terminology of Postnikov~\cite{MR2487491}, although the notion of building set goes back to De Concini and Procesi~\cite{MR1366622}.
\subsubsection{Acyclic orientations of building sets and nestohedra}
A hypergraph $\cB=(V,\cE)$ is a \emph{building set} if
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item $\cE$ contains all singletons $\{v\}$ for $v\in V$;
\item for every pair $A,B\in\cE$ with $A\cap B\neq \emptyset$ we have $A\cup B\in\cE$.
\end{enumerate}
A building set is \emph{connected} if $V\in\cE$.
A remarkable property of acyclic orientations of building sets is that the transitive reductions of the corresponding posets are forests.
\begin{lemma}[{\cite[Thm.~7.4]{MR2487491}, \cite[Prop.~3.3]{MR3960512}}]
If $\cB$ is a building set, then
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item the hypergraphic polytope $Z(\cB)$ of $\cB$ is simple;
\item the flip graph on $\AO_{\cB}$ is regular;
\item for every acyclic orientation $h\in\AO_{\cB}$, the poset $P_{\cB,h}$ is a forest.
\end{enumerate}
\end{lemma}
The hypergraphic polytope~$Z(\cB)$ of a building set~$\cB$ is called a \emph{nestohedron}~\cite{MR2487491}.
The forests corresponding to the acyclic orientations in $\AO_{\cB}$ are called $\cB$-forests (or $\cB$-trees if $\cB$ is connected).
\subsubsection{Chordal building sets}
We follow Postnikov, Reiner, and Williams~\cite{MR2520477}\footnote{The original definition is actually reversed, but we need this one for consistency with our definition of perfect elimination order in chordal graphs.} and say that a building set $\cB=([n],\cE)$ is a \emph{chordal building set} if and only if for any $A=\{i_1<i_2<\cdots <i_r\}\in\cE$ and $s\in [r]$, we have $\{i_1<i_2<\cdots <i_s\}\in\cE$.
The point of considering chordal building sets is the following direct connection between chordality and hyperfect elimination orders.
\begin{lemma}
\label{lem:cbs-heo}
A building set is chordal if and only if it is in hyperfect elimination order.
\end{lemma}
\begin{proof}
We first observe that for building sets, condition~\eqref{itm:heo1} for the hyperfect elimination order reduces to the case where $A=B$.
Indeed, if $A,B\in\cE$ both contain~$n$, then $A\cup B\in\cE$.
$(\Rightarrow)$
Consider a chordal building set $\cB=([n],\cE)$.
We need to verify conditions~\eqref{itm:heo1} and \eqref{itm:heo2} of the hyperfect elimination order.
Condition~\eqref{itm:heo1} is satisfied, since for any $A\in\cE$ such that $n\in A$, we have $A- n\in\cE$ from the chordality of $\cB$, and $A-n$ fulfills the requirements of~$X$ in condition~\eqref{itm:heo1}.
It remains to observe that if $\cB$ is chordal, then so is~$\cB_{n-1}$.
$(\Leftarrow)$
Consider a building set $\cB=([n],\cE)$ in hyperfect elimination order.
We only need to prove that condition~\eqref{itm:heo1} implies that for any $A\in\cE$ with $n\in A$, we have $A-n\in\cE$.
Suppose for the sake of contradiction that every hyperedge $X\in\cE$ contained in $A-n$ has size strictly less than~$|A-n|$.
Consider such a hyperedge~$X$ of maximum size, a vertex $a\in X$, and another vertex $b\in (A-n)\setminus X$.
Applying condition~\eqref{itm:heo1} on~$a$ and~$b$ yields another hyperedge~$X'\in\cE$ contained in~$A-n$ that contains both~$a$ and~$b$.
Since $\cB$ is a building set, we must have $X\cup X'\in\cE$, and this set is larger than~$X$, a contradiction.
Therefore, we have $A-n\in\cE$.
The rest follows by induction on~$\cB_{n-1}$.
\end{proof}
Nestohedra of chordal building sets are called \emph{chordal nestohedra}~\cite{MR2520477}.
By combining Lemma~\ref{lem:cbs-heo} and Theorem~\ref{thm:mainhyper}, we therefore obtain new Hamilton paths on chordal nestohedra.
\begin{theorem}
\label{thm:mainbuild}
For every chordal building set $\cB = ([n],\cE)$, Algorithm~J with input~$\Pi_{\cB}$ as defined in~\eqref{eq:PiH} generates a sequence of permutations $\pi_{h_1}, \pi_{h_2}, \ldots$ with $h_1,h_2,\ldots \in \AO_{\cB}$, such that $h_1,h_2,\ldots$ is a Hamilton path in the flip graph on acyclic orientations of~$\cB$, or, equivalently, on the skeleton of the chordal nestohedron~$Z(\cB)$.
\end{theorem}
\subsubsection{Graphical building sets and elimination trees}
For a graph $G=(V,E)$, we let $\cB(G):=(V,\cE)$ be the hypergraph such that
\begin{equation*}
\cE:=\{U\seq V\mid \text{$G[U]$ is connected}\}.
\end{equation*}
Clearly, $\cE$ contains all singletons $\{v\}$, $v\in V$.
Also, if two subsets induce a connected subgraph of~$G$ and have a nonempty intersection, then their union also induces a connected subgraph.
Therefore, $\cB(G)$ is a building set, called the \emph{graphical building set} of~$G$.
An \emph{elimination tree} of a connected graph~$G=(V,E)$ is an unordered rooted tree~$T$ obtained by removing a vertex~$v$ of~$G$ which becomes the root of~$T$, and by recursing on the connected components of $G-v$, whose elimination trees become the subtrees of~$v$ in~$T$.
An \emph{elimination forest} of a (not necessarily connected) graph~$G$ is a set of elimination trees, one for each connected component of~$G$.
An elimination forest of a graph~$G=([n],E)$ can be produced from a permutation on~$[n]$, by repeatedly removing, within each connected component, the vertex that has the leftmost position in the permutation.
Two elimination forests differ by a \emph{rotation} if there exist two permutations producing them that differ by a single adjacent transposition.
We refer to Figure~\ref{fig:hyper2} for an example of rotation and to \cite{DBLP:conf/soda/CardinalMM22} for more details.
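A minimal sketch of this construction, assuming the graph is given as an adjacency dictionary mapping each vertex to the set of its neighbors (the function names are ours):
\begin{verbatim}
def elimination_forest(adj, pi):
    # Returns a list of rooted trees, one per connected component, each
    # encoded as a pair (root, list of subtrees).
    pos = {v: p for p, v in enumerate(pi)}

    def components(verts):
        verts, comps = set(verts), []
        while verts:
            stack, comp = [next(iter(verts))], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(w for w in adj[v] if w in verts)
            verts -= comp
            comps.append(comp)
        return comps

    def build(verts):
        root = min(verts, key=pos.__getitem__)  # leftmost vertex of the component in pi
        return (root, [build(c) for c in components(verts - {root})])

    return [build(c) for c in components(adj.keys())]
\end{verbatim}
For example, on the path $1-2-3$, both permutations $(2,1,3)$ and $(2,3,1)$ yield the elimination tree with root~$2$ and children~$1$ and~$3$.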
The \emph{graph associahedron} of~$G$ is the hypergraphic polytope~$Z(\cB(G))$.
The vertices of $Z(\cB(G))$ are in one-to-one correspondence with the elimination forests of~$G$, and its skeleton is the \emph{rotation graph} on the elimination forests of~$G$; see \cite{MR2239078,MR2487491,MR2520477,MR3383157}.
Combining this characterization with~\eqref{eq:hypervert} and Lemma~\ref{lem:hyperflip}, we obtain the following.
\begin{lemma}
\label{lem:BG-elim}
Acyclic orientations of a graphical building set~$\cB(G)$ are in one-to-one correspondence with the elimination forests of~$G$, and the flip graph on the acyclic orientations of~$\cB(G)$ is isomorphic to the rotation graph on the elimination forests of $G$.
\end{lemma}
This statement is illustrated in Figure~\ref{fig:hyper2}, showing an example of a rotation between two elimination trees of a graph, and the corresponding pair flip in the acyclic orientations of the building set.
\begin{lemma}[\cite{MR2520477}]
\label{lem:cgbs}
A graphical building set $\cB(G)$ is chordal if and only if $G$ is in perfect elimination order.
\end{lemma}
Note that not all chordal building sets are graphical.
For instance, the hypergraph $([n], \{[i]\mid i\in [n]\})$ is a chordal building set but is not graphical.
(The corresponding nestohedron is called the \emph{Stanley-Pitman polytope}~\cite{MR1902680}.)
Combining Lemma~\ref{lem:cgbs} and Lemma~\ref{lem:cbs-heo}, we obtain the following.
\begin{corollary}
A graphical building set $\cB(G)$ is in hyperfect elimination order if and only if $G$ is in perfect elimination order.
\end{corollary}
This confirms our earlier claim that our Gray codes for acyclic orientations of hypergraphs in hyperfect elimination order, as given in Theorem~\ref{thm:mainhyper}, generalize the known Gray codes for elimination forests on chordal graphs described recently by Cardinal, Merino, and M\"utze~\cite{DBLP:conf/soda/CardinalMM22}.
\section{Acyclic reorientation lattices}
\label{sec:arl}
In this section, we consider order-theoretic properties of the set of all acyclic orientations of a graph and the existence of Hamilton paths and cycles in lattices defined from acyclic orientations.
In particular, we address a question due to Pilaud~\cite[Problem~51]{pilaud_2022} stated as Problem~\ref{prob:pilaud} in Section~\ref{sec:res2}.
In order to make sense of this question, we first define classes of acyclic digraphs and the lattices that are defined from their acyclic reorientations.
\subsection{Acyclic reorientation lattices of families of acyclic digraphs}
Given an acyclic digraph~$D$, an \emph{acyclic reorientation} of $D$ is an acyclic digraph obtained by flipping the orientation of some arcs of $D$.
The set of all acyclic reorientations of $D$ can be ordered by containment of the sets of arcs that are flipped w.r.t.~$D$; see Figure~\ref{fig:cong2}~(a).
We denote this poset by~$\AR_D$.
Pilaud~\cite{pilaud_2022} gave necessary and sufficient conditions on $D$ for $\AR_D$ to be a lattice.
When those conditions are satisfied, we refer to $\AR_D$ as the \emph{acyclic reorientation lattice} of $D$.
\subsubsection{Acyclic reorientation lattices of vertebrate digraphs}
We use the term \emph{oriented tree} to refer to a digraph whose underlying undirected graph is a tree, and \emph{oriented forest} to refer to a collection of disjoint oriented trees.
Following Pilaud~\cite{pilaud_2022}, we say that a digraph~$D$ is \emph{vertebrate} if the transitive reduction of every induced subgraph of~$D$ is an oriented forest.
It is easy to see that any vertebrate digraph must be acyclic; see~\eqref{eq:inclusion}.
\begin{theorem}[{\cite[Thm.~1]{pilaud_2022}}]
The poset~$\AR_D$ of acyclic reorientations of a digraph~$D$ is a lattice if and only if $D$ is vertebrate.
\end{theorem}
\subsubsection{Acyclic reorientation lattices of peo-consistent digraphs}
A~digraph $D$ is \emph{peo-consistent} if its vertices can be labeled $1,\ldots,n$, such that either $n=1$ or the following three conditions are satisfied:
\begin{enumerate}[label=(\roman*),leftmargin=8mm, noitemsep, topsep=1pt plus 1pt]
\item the vertex~$n$ is a source or a sink of~$D$,
\item the vertex~$n$ is simplicial in the underlying undirected graph of $D$,
\item the digraph~$D-n$ is peo-consistent.
\end{enumerate}
In the following, we often denote a peo-consistent digraph~$D$ with the corresponding labeling of its vertices by~$1,\ldots,n$ as $D=([n],A)$.
Hence a peo-consistent digraph is an orientation of a chordal graph obtained by iteratively adding each vertex in perfect elimination order as a source or a sink in the digraph induced by its predecessors in the order.
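A minimal sketch of the corresponding test, with respect to the given labeling $1,\ldots,n$ (the definition allows relabeling, over which one would have to search in general); arcs are encoded as pairs $(u,v)$, and the function name is ours.
\begin{verbatim}
def is_peo_consistent(n, arcs):
    if n <= 1:
        return True
    incoming = {u for (u, v) in arcs if v == n}
    outgoing = {v for (u, v) in arcs if u == n}
    if incoming and outgoing:          # n is neither a source nor a sink
        return False
    nbrs = incoming | outgoing
    und = {frozenset(a) for a in arcs} # underlying undirected edges
    if any(frozenset({x, y}) not in und for x in nbrs for y in nbrs if x != y):
        return False                   # the neighborhood of n is not a clique
    return is_peo_consistent(n - 1, [a for a in arcs if n not in a])
\end{verbatim}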
The underlying undirected graph of a peo-consistent digraph~$D$ is in perfect elimination order (and can, in fact, be any graph in perfect elimination order), so it has the unique parent-child property by Lemma~\ref{lem:upc-peo}.
The peo-consistent digraphs form a natural class of vertebrate digraphs; see~\eqref{eq:inclusion} and Figure~\ref{fig:classes}~(d).
\begin{lemma}
\label{lem:consistent}
Every peo-consistent digraph is vertebrate.
\end{lemma}
The converse of Lemma~\ref{lem:consistent} does not hold, i.e., not every vertebrate digraph is peo-consistent; see Figure~\ref{fig:classes}~(c).
\begin{proof}
We proceed by induction on the number of vertices, and suppose that the statement holds for all digraphs on fewer than $n$ vertices.
Observe that if $D=([n],A)$ is a peo-consistent digraph, then so is any of its induced subgraphs.
Hence from the induction hypothesis, we only need to prove that the transitive reduction~$T_D$ of~$D$ is an oriented forest.
Consider the transitive reduction~$T_{D-n}$ of~$D-n$.
We claim that $T_D$ is obtained by adding the vertex~$n$ as a new leaf to $T_{D-n}$, with either a single incoming arc or a single outgoing arc (if $n$ has no neighbors, then it is simply added as an isolated vertex, and the claim is immediate).
Indeed, since the vertex~$n$ is simplicial, its neighbors must form a directed path in~$T_{D-n}$.
And since $n$ is either added to~$D$ as a source or a sink, there is a single new arc in the transitive reduction, which connects it to either its first or last neighbor in the directed path, respectively.
Therefore, $T_D$ is an oriented forest, as required.
\end{proof}
\subsubsection{Acyclic reorientation lattices of skeletal digraphs}
We say that a digraph~$D$ is \emph{filled} if for any directed path $v_1\rightarrow \cdots\rightarrow v_k$ in~$D$, if the arc~$v_1\rightarrow v_k$ belongs to~$D$, then all arcs $v_i\rightarrow v_j$, $1\leq i<j\leq k$, also belong to~$D$.
A digraph is called \emph{skeletal} if it is both vertebrate and filled.
An alternative definition of a skeletal digraph~$D$ is that $D$ is obtained from an oriented forest by replacing some directed paths by acyclic cliques.
The motivation for introducing skeletal digraphs is the following result due to Pilaud.
\begin{theorem}[{\cite[Thm.~3]{pilaud_2022}}]
\label{lem:semid}
The poset~$\AR_D$ of acyclic reorientations of an acyclic digraph~$D$ is a semidistributive lattice if and only if $D$ is skeletal.
\end{theorem}
We now prove that the class of skeletal digraphs is contained in the class of peo-consistent digraphs; see~\eqref{eq:inclusion}.
\begin{lemma}
\label{lem:skeletal}
Every skeletal digraph is peo-consistent.
\end{lemma}
The converse of Lemma~\ref{lem:skeletal} does not hold, i.e., not every peo-consistent digraph is skeletal; see Figure~\ref{fig:classes}~(b).
\begin{proof}
It suffices to show that in every skeletal digraph~$D$, one can find a source or sink that is simplicial in the underlying undirected graph, i.e., whose neighborhood in~$D$ is a clique.
We refer to such a vertex as a \emph{terminal} vertex.
If a terminal vertex~$v$ exists, then we can remove it and iterate the same argument on the remaining digraph~$D-v$, which is also skeletal.
In fact, we proceed to prove that every skeletal digraph~$D$ has at least two terminal vertices.
We consider a connected skeletal digraph~$D$ and its transitive reduction~$T:=T_D$, which is an oriented tree.
We first argue that we can assume without loss of generality that every source or sink has degree exactly~1.
Indeed, suppose that this is not the case.
Then we partition the arc set of~$T$ into a collection~$\cT$ of subtrees, such that in every subtree, every source or sink has degree exactly~1, and moreover every arc of~$D$ joins two vertices from the same subtree in~$\cT$.
Specifically, for each source~$v$ of out-degree~$d>1$ in~$T$, we split the arc set of~$T$ into $d$ subtrees connected to~$v$, assigning all arcs in the same connected component of~$T-v$ together with the arc that connects this component to~$v$ to the same subtree; see Figure~\ref{fig:skeletal}~(a).
We repeat this for every source of out-degree $>1$, and we proceed similarly for each sink of in-degree~$>1$.
Suppose that we have $|\cT|=k$ trees after this partitioning stage.
If we can find two terminal vertices in~$D[T']$ for every $T'\in\cT$, i.e., in the subgraph of~$D$ induced by the subtree~$T'$, then at least $2k-2(k-1)=2$ of those vertices will also be terminal vertices in~$D$.
This is because $D$ is obtained from gluing together the graphs~$D[T']$, $T'\in\cT$, in a tree-like fashion.
So for the rest of the proof we assume without loss of generality that sources and sinks of~$D$ have degree exactly~1.
Let $S_0$ be the set of sources and $S_1$ the set of sinks of~$D$.
We denote by $M$ the set of all vertices of~$D$ that are neither in~$S_0$ nor in~$S_1$; see Figure~\ref{fig:skeletal}~(b).
In other words, every vertex in~$M$ has at least one in-neighbor and at least one out-neighbor.
We observe that $v\in S_0\cup S_1$ is a terminal vertex if its neighborhood in~$D$ induces a directed path in~$T$.
Indeed, since $D$ is filled, $v$ must be simplicial in the underlying undirected graph, and hence $v$ is a terminal vertex.
\begin{figure}
\centering
\makebox[0cm]{
\includegraphics{skeletal}
}
\caption{Illustration of the proof of Lemma~\ref{lem:skeletal}.}
\label{fig:skeletal}
\end{figure}
On the other hand, consider a vertex $v\in S_0$ that is not a terminal vertex; see Figure~\ref{fig:skeletal}~(c1).
Then the neighbors of~$v$ in~$D$ induce an oriented subtree~$T(v)$ of~$T$ rooted at~$v$, all arcs of which are oriented away from~$v$.
As $v$ is not a terminal vertex, $T(v)$ is not a single directed path, but it has a \emph{branch vertex} $b(v)\in M$ with out-degree at least~2 in~$T(v)$.
We denote by $P(v)$ one pair of out-neighbors of~$b(v)$ in~$T(v)$ (pick two arbitrarily if there are more than two), which by definition are also out-neighbors of~$v$ in~$D$.
Consider a vertex $x\in M$ and all sources $v_1,v_2,\ldots ,v_\ell\in S_0$ that have $x$ as their common branch vertex, i.e., that satisfy $b(v_1)=b(v_2)=\cdots =b(v_{\ell})=x$; see Figure~\ref{fig:skeletal}~(d1).
We observe that we must have $\ell\leq d^+(x)-1$.
Indeed, if $\ell>d^+(x)-1$, then the transitive reduction of the subgraph induced by $v_1,v_2,\ldots,v_{\ell}$ and the vertices in $P(v_1),P(v_2),\ldots,P(v_{\ell})$ in $D$ contains a cycle, no arc of which is transitive, contradicting the fact that $D$ is vertebrate; see Figure~\ref{fig:skeletal}~(d2).
Symmetrically, we can consider a vertex $v\in S_1$ that is not a terminal vertex, and define a corresponding branch vertex $b(v)\in M$; see Figure~\ref{fig:skeletal}~(c2).
Then similarly, for a vertex $x\in M$, at most $d^-(x)-1$ vertices from~$S_1$ can have $x$ as their common branch vertex.
We now apply a counting argument.
We have
\begin{equation*}
|S_0|=1 + \sum_{x\in M} (d^-(x) - 1) \quad \text{and} \quad
|S_1|=1 + \sum_{x\in M} (d^+(x) - 1).
\end{equation*}
Suppose without loss of generality that $|S_0|\geq |S_1|$, and first consider the case $|S_0|=|S_1|$.
From the previous observations and the equalities above, at most $|S_1|-1=|S_0|-1$ of the vertices $v\in S_0$ have a branch vertex~$b(v)$, so at least one vertex in~$S_0$ must be a terminal vertex.
Similarly, at most $|S_0|-1=|S_1|-1$ of the vertices $v\in S_1$ have a branch vertex~$b(v)$, so at least one vertex in~$S_1$ must also be a terminal vertex.
We therefore obtain two terminal vertices.
On the other hand, if $|S_0|>|S_1|$, then at most $|S_1|-1\leq |S_0|-2$ of the vertices $v\in S_0$ have a branch vertex~$b(v)$, and hence at least two vertices in~$S_0$ must be terminal vertices.
Hence in all cases, we obtain two terminal vertices, as claimed.
\end{proof}
In fact, there exist chordal graphs, no orientation of which is skeletal.
An example is the \emph{complete $k$-sun}, defined as the graph that is obtained from a $2k$-cycle on the vertices $v_1,\ldots,v_{2k}$ by adding an additional edge between every pair of vertices~$v_{2i}$ and~$v_{2j}$, for every $i\ne j\in [k]$.
The transitive reduction of the complete graph on $v_2,v_4,\ldots,v_{2k}$ is a directed path, and one can argue that one of the vertices~$v_{2i+1}$ joins two vertices at distance strictly larger than one along that path.
Then this vertex~$v_{2i+1}$ cannot have out-degree~2 or in-degree~2, as this would violate the filled property, and it cannot have out-degree and in-degree~1, as this would violate the vertebrate property.
As a consequence of this observation, if $D$ is skeletal then its underlying undirected graph does not contain a complete $k$-sun as induced subgraph.
Farber~\cite{MR685625} proved that a chordal graph is \emph{strongly chordal} if and only if it contains no induced complete $k$-sun (called trampoline there).
Therefore, if $D$ is skeletal, then its underlying undirected graph is strongly chordal.
\subsection{Quotients of acyclic reorientation lattices}
\subsubsection{Lattice congruences}
Recall the terminology introduced in Section~\ref{sec:prelim}.
We consider an equivalence relation~$\equiv$ on elements of a lattice~$L$.
For an element $x\in L$, we denote by $[x]_{\equiv}$ the equivalence class of $\equiv$ to which $x$ belongs.
An equivalence relation $\equiv$ on $L$ is a \emph{lattice congruence} if it respects joins and meets, i.e., if
\begin{equation*}
(x\equiv x' \ \text{and}\ y\equiv y') \Longrightarrow (x\vee y\equiv x'\vee y'\ \ \text{and}\ x\wedge y\equiv x'\wedge y').
\end{equation*}
For a lattice congruence $\equiv$, the \emph{lattice quotient} $L/{\equiv}$ is the poset on the set of the equivalence classes of~$\equiv$, where for any two equivalence classes $X,Y$, we have $X < Y$ if and only if there is an~$x\in X$ and a~$y\in Y$ such that~$x<y$ in~$L$.
We will need the following well-known lemma, which is a direct consequence of the definition of lattice congruences.
\begin{lemma}
\label{lem:interval}
For any lattice congruence~$\equiv$ of a finite lattice~$L$, and any element $x\in L$, the equivalence class $[x]_{\equiv}$ is an interval of~$L$.
\end{lemma}
The definition of a lattice congruence gives rise to so-called \emph{forcing rules}.
These rules state that if some pair of elements of a lattice are congruent, then so must be some other pairs.
We now state two forcing rules that we need in the following; see Figure~\ref{fig:forcing}.
A \emph{diamond} is a four-element poset $(\{a,b,c,d\},<)$ with $a<b<d$ and $a<c<d$ and no other cover relations.
A \emph{hexagon} is a six-element poset $(\{a,b,c,d,e,f\},<)$ with $a<b<d<f$ and $a<c<e<f$ and no other cover relations.
\begin{figure}[h!]
\includegraphics{forcing}
\caption{Forcing rules in a lattice congruence: diamond rule (left) and hexagon rule (right).
Edges indicate cover relations, and bold edges indicate that the two elements belong to the same equivalence class.
The arrows indicate implications.}
\label{fig:forcing}
\end{figure}
\begin{lemma}
\label{lem:forcing}
Let $\equiv$ be a congruence of a lattice~$L$.
\begin{description}
\item[Diamond rule] For every diamond sublattice $\{a,b,c,d\}$ of~$L$ with $a<b<d$ and $a<c<d$, we have $a\equiv c\Leftrightarrow b\equiv d$.
\item[Hexagon rule] For every hexagon sublattice $\{a,b,c,d,e,f\}$ of~$L$ with $a<b<d<f$ and $a<c<e<f$, we have $a\equiv c\Leftrightarrow d\equiv f$ and $(a\equiv c \text{ and } d\equiv f)\Rightarrow (b\equiv d \text{ and } c\equiv e)$.
\end{description}
\end{lemma}
\begin{proof}
The statements are derived by elementary applications of the definition of lattice congruences.
For diamonds, we have $a\equiv c\Rightarrow a\vee b\equiv c\vee b \Rightarrow b\equiv d$.
Symmetrically, we have $b\equiv d\Rightarrow b\wedge c\equiv d\wedge c\Rightarrow a\equiv c$.
For hexagons, we again apply the definition of a lattice congruence as follows: $a\equiv c\Rightarrow a\vee b \equiv c\vee b\Rightarrow b\equiv f$.
As equivalence classes are intervals, we also have $b\equiv d$ and $d\equiv f$.
Symmetrically, from $d\equiv f$ we obtain $d\wedge e\equiv f\wedge e\Rightarrow a\equiv e$ and hence $a\equiv c$ and~$c\equiv e$.
\end{proof}
In what follows, we consider quotients $\AR_D/{\equiv}$ of the acyclic reorientation lattice~$\AR_D$ of an acyclic digraph~$D$.
We emphasize that the diamond and hexagon rules stated in Lemma~\ref{lem:forcing} are necessary, but may not be sufficient to completely define the forcing relations for congruences of~$\AR_D$.
It is known that such local forcing rules are sufficient whenever the lattice $\AR_D$ is \emph{polygonal}~\cite[Thm.~9-6.5]{MR3645055}, which is the case if and only if it is semidistributive~\cite[Thm.~9-3.8 and 9-6.10]{MR3645055}.
Hence from Theorem~\ref{lem:semid}, $\AR_D$ is polygonal if and only if $D$ is skeletal, and in that case the two rules in Lemma~\ref{lem:forcing} completely characterize forcing relations in congruences of~$\AR_D$.
Note that starting from Section~\ref{sec:quotient-algo}, we will only assume that $D$ is peo-consistent, and not necessarily skeletal, so the diamond and hexagon rules are necessary, but may not be sufficient to characterize all forcing relations in a congruence (our proof works regardless).
\subsubsection{Quotientopes}
Given a skeletal digraph~$D$, Pilaud~\cite{pilaud_2022} realizes any lattice quotient of~$\AR_D$ as a polytope, called a \emph{quotientope}.
Previously, the notion of quotientope has been used for the polytopal realization of lattice quotients of the weak order on permutations in the same manner~\cite{MR3964495,MR4311892}.
Recall that the weak order on permutations is~$\AR_D$ when $D$ is an acyclic complete graph.
The cover graph of a lattice quotient is exactly the skeleton of the corresponding quotientope.
Therefore, Problem~\ref{prob:pilaud} is equivalent to asking whether the skeleta of these quotientopes admit a Hamilton cycle.
\subsection{Algorithm}
\label{sec:quotient-algo}
We now give an algorithm to construct Hamilton paths in the cover graphs of quotients of acyclic reorientation lattices of peo-consistent digraphs.
\subsubsection{Restrictions, rails, ladders, and projections}
We introduce some notation that will be useful for inductive reasoning.
For the rest of this paper let $D=([n],A)$ be a peo-consistent digraph.
For an acyclic reorientation~$E\in\AR_{D_{n-1}}$, we denote by~$c(E)$ the acyclic reorientation in~$\AR_D$ obtained from~$E$ by adding the last vertex~$n$ as a source or as a sink, according to how it appears in~$D$.
Similarly, we write~$\bar{c}(E)$ for the acyclic reorientation of~$D$ obtained from~$E$ by adding the last vertex~$n$ as a source if it is a sink in~$D$, or as a sink if it is a source in~$D$, hence oppositely to how it appears in~$D$.
Given a lattice congruence~$\equiv$ on~$\AR_D$, we define the \emph{restriction} $\equiv^*$ as the relation on~$\AR_{D_{n-1}}$ induced by all acyclic reorientations of~$D$ in which no arc incident with~$n$ is reoriented with respect to~$D$, i.e., for any two acyclic reorientations~$E$ and~$F$ in~$\AR_{D_{n-1}}$ we have $E \equiv^* F\Leftrightarrow c(E)\equiv c(F)$.
\begin{lemma}
For every lattice congruence~$\equiv$ of~$\AR_D$, the restriction~$\equiv^*$ is a lattice congruence of~$\AR_{D_{n-1}}$.
\end{lemma}
\begin{proof}
This follows straightforwardly from the definition of lattice congruence and restriction and the observation that for any two acyclic reorientations~$E$ and~$F$ in~$\AR_{D_{n-1}}$, we have $c(E) \vee c(F) = c(E \vee F)$ and $c(E) \wedge c(F) = c(E \wedge F)$.
\end{proof}
A \emph{rail} in the acyclic reorientation lattice~$\AR_D$ is a maximal subposet of~$\AR_D$ induced by all acyclic reorientations of~$D$ that agree on the orientations of all the arcs in~$D_{n-1}$, i.e., that differ only in the orientation of the arcs incident with~$n$.
For a reorientation~$E\in\AR_{D_{n-1}}$, we denote the corresponding rail in~$\AR_D$ by~$r(E)$; see Figure~\ref{fig:ladder}.
The minimum element of the rail~$r(E)$ is~$c(E)$, the maximum element is~$\bar{c}(E)$, and the number of elements on the rail is one more than the number of arcs incident with~$n$ in~$D$ (i.e., the degree of~$n$ plus one).
The number of rails in~$\AR_D$ is equal to~$|\AR_{D_{n-1}}|$, and these rails form a partition of~$\AR_D$ into chains of the same size.
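For small instances the rails can be computed by brute force, as in the following sketch (all names are ours): it enumerates the acyclic reorientations of~$D$ and groups them by their restriction to~$D_{n-1}$.
\begin{verbatim}
from itertools import combinations

def is_acyclic_digraph(n, arcs):
    # repeatedly remove sources; a directed cycle remains iff no source exists
    verts, arcs = set(range(1, n + 1)), set(arcs)
    while verts:
        sources = {v for v in verts if all(b != v for (a, b) in arcs)}
        if not sources:
            return False
        verts -= sources
        arcs = {(a, b) for (a, b) in arcs if a in verts and b in verts}
    return True

def rails(n, arcs):
    # group the acyclic reorientations of D = ([n], arcs) by their
    # restriction to D_{n-1}; each group is one rail of AR_D
    arcs, groups = list(arcs), {}
    for k in range(len(arcs) + 1):
        for flipped in combinations(range(len(arcs)), k):
            reor = [((v, u) if i in flipped else (u, v))
                    for i, (u, v) in enumerate(arcs)]
            if is_acyclic_digraph(n, reor):
                key = frozenset(a for a in reor if n not in a)
                groups.setdefault(key, []).append(reor)
    return groups
\end{verbatim}
Each group is then one rail, containing one reorientation for every acyclic way of orienting the arcs incident with~$n$.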
\begin{lemma}
\label{lem:rail}
For every lattice congruence~$\equiv$ of~$\AR_D$, every equivalence class~$X$ of~$\equiv$, and every rail~$r$ of~$\AR_D$, the intersection~$X\cap r$ is an interval in~$r$.
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:interval}, we know that $X$ is an interval of~$\AR_D$.
Since every rail~$r$ is a chain of~$\AR_D$, its intersection with~$X$ must be an interval in~$r$.
\end{proof}
A \emph{ladder} is the subposet of~$\AR_D$ induced by a pair of rails~$r(E)$ and~$r(F)$ for which~$E,F\in\AR_{D_{n-1}}$ differ in a flip of a single arc, i.e., $E$ and~$F$ are in cover relation in~$\AR_{D_{n-1}}$; see Figure~\ref{fig:ladder}.
We denote this ladder by~$\ell(E,F)$.
The cover graph of a ladder consists of two paths belonging to the rails, and of additional cover edges that we call \emph{stairs}.
We will need the following property of ladders, which explains the chosen name.
\begin{figure}[h!]
\centering
\makebox[0cm]{
\includegraphics{ladder}
}
\caption{Illustration of rails and ladders.
A sublattice of~$\AR_D$ is shown on the left (reoriented arcs w.r.t.~$D$ are highlighted), with the encoding of acyclic reorientations by permutations given by Lemma~\ref{lem:peo2perm} shown on the right.
A rail and a ladder are highlighted.}
\label{fig:ladder}
\end{figure}
\begin{lemma}
\label{lem:ladder}
For any ladder~$\ell(E,F)$, where $E,F\in\AR_{D_{n-1}}$, the pair of first reorientations and the pair of last reorientations of the two rails each form a stair, and the interval between any two successive stairs in the ladder is either a diamond or a hexagon.
\end{lemma}
\begin{proof}
As $E$ and~$F$ differ in a flip of some arc, $c(E)$ and~$c(F)$ are both acyclic reorientations of~$D$ that differ in a flip of the same arc.
Similarly, $\bar{c}(E)$ and~$\bar{c}(F)$ are both acyclic reorientations of~$D$ that differ in a flip of the same arc.
Consequently, $(c(E),c(F))$ is the first stair of the ladder~$\ell(E,F)$, and $(\bar{c}(E),\bar{c}(F))$ is the last stair of the ladder.
Now consider any stair~$(E',F')$ of this ladder and denote by~$a$ the arc of~$D$ that has a distinct orientation in~$E'$ and~$F'$.
Furthermore, let $E''\in r(E)$ and~$F''\in r(F)$ be the reorientations that cover~$E'$ and~$F'$ in their respective rails.
We then consider two cases.
The first case is when~$E''$ and~$F''$ are obtained from~$E'$ and~$F'$, respectively, by flipping the same arc~$b$.
In that case, $E''$ and~$F''$ clearly differ in a flip of the single arc~$a$, and form the next stair~$(E'',F'')$ in the ladder.
Hence the two successive stairs form a diamond.
The second case is when~$E''$ is obtained from~$E'$ by flipping an arc~$b$, and $F''$ is obtained from~$F'$ by flipping another arc~$c\neq b$.
Then the pair~$(E'',F'')$ is not a stair.
Since $E''\in r(E)$ and $F''\in r(F)$, it must be the case that $b$ and~$c$ are both incident to the vertex~$n$.
We assume without loss of generality that the vertex~$n$ is a sink in~$D$, which means that the arcs~$b$ and~$c$ are incoming to~$n$ in~$E'$ and~$F'$.
Since $n$ is a simplicial vertex, the other two endpoints of~$b$ and~$c$ must be adjacent.
And it must be the case that arc~$a$ connects these two endpoints of~$b$ and~$c$, as otherwise one of~$E''$ or~$F''$ would not be acyclic.
So the three arcs $a,b,c$ form a triangle.
We claim that arc~$c$ can be flipped in~$E''$ to obtain the next reorientation~$E'''\in r(E)$.
Indeed, suppose for contradiction that flipping $c$ creates a cycle.
Then either this cycle does not use~$a$ and must also be present in~$F''$ (as it cannot use~$b$), contradicting that $F''$ is acyclic, or it uses arc~$a$, but then there already is a shorter cycle in~$E''$ that uses~$b$ instead of $c,a$, contradicting that $E''$ is acyclic.
This proves the claim.
Since we can flip arc~$c$ in~$E''$, this must yield the next reorientation~$E'''$ on the rail~$r(E)$.
Similarly, the arc~$b$ can be flipped in~$F''$ to obtain the next reorientation~$F'''\in r(F)$.
Now~$E'''$ and~$F'''$ only differ by a flip of the arc~$a$, and hence must form the next stair of the ladder.
In this case, the interval induced by the two successive stairs in the ladder is a hexagon.
By applying this reasoning starting from the first stair on the ladder~$\ell(E,F)$ and ending with the last stair, we obtain that the only stairs are the pairs identified above, which completes the proof.
\end{proof}
From the proof above we see that each ladder has at most one hexagon formed by two consecutive stairs.
More specifically, a ladder~$\ell(E,F)$ for $E,F\in\AR_{D_{n-1}}$ consists of a single hexagon and diamonds if and only if the vertex~$n$ is adjacent to both endpoints of the arc~$a$ in which $E$ and~$F$ differ, and otherwise the ladder has only diamonds.
For example, in Figure~\ref{fig:ladder}, the ladder~$\ell(F,F')$ is of the first type, and the ladder~$\ell(F',E)$ is of the second type.
Given a set $X\seq\AR_D$, we define the \emph{projection} $p(X) := \{E_{n-1} \mid E\in X \}$.
The following lemma is a crucial ingredient for our algorithm, and it is proved by repeated applications of Lemma~\ref{lem:forcing}.
\begin{lemma}
\label{lem:projection}
For every lattice congruence~$\equiv$ of~$\AR_D$ and every equivalence class~$X$ of~$\equiv$, the projection~$p(X)$ is an equivalence class of~$\equiv^*$.
In particular, for any two equivalence classes~$X,Y$, we either have $p(X)=p(Y)$ or $p(X)\cap p(Y)=\emptyset$.
\end{lemma}
\begin{proof}
For the sake of contradiction suppose that there is an equivalence class~$X$ of~$\equiv$ for which~$p(X)$ is not an equivalence class of~$\equiv^*$.
Pick some acyclic reorientation from~$p(X)\seq\AR_{D_{n-1}}$, and let $Y$ be its equivalence class of~$\equiv^*$.
As $Y\neq p(X)$ we must have $p(X)\setminus Y\neq \emptyset$ or $Y\setminus p(X)\neq \emptyset$.
We first consider the case~$p(X)\setminus Y\neq \emptyset$.
In this case there must be $E,F\in X$ that are in cover relation in~$\AR_D$ with $E\equiv F$ such that $p(E)=E_{n-1}\in p(X)\setminus Y$ and $p(F)=F_{n-1}\in Y$, in particular $E_{n-1}\notin Y$.
This means that in the ladder~$\ell(E_{n-1},F_{n-1})$, the endpoints~$E$ and~$F$ of the stair~$(E,F)$ are congruent.
By using Lemma~\ref{lem:ladder} and repeatedly applying Lemma~\ref{lem:forcing}, it follows that the endpoints of every other stair of the ladder must be congruent as well, in particular the endpoints~$c(E_{n-1})$ and $c(F_{n-1})$ of the first stair.
Consequently, we have $c(E_{n-1})\equiv c(F_{n-1})$ and therefore $E_{n-1}\equiv^* F_{n-1}$, a contradiction to the fact that $E_{n-1}\notin Y$ and $F_{n-1}\in Y$.
We now consider the case~$Y\setminus p(X)\neq \emptyset$.
In this case there must be $E,F\in Y$ that are in cover relation in~$\AR_{D_{n-1}}$ with $E\equiv^* F$ such that $E\in Y\setminus p(X)$ and $F\in p(X)$, in particular $E\notin p(X)$.
This means that in the ladder~$\ell(E,F)$, the endpoints~$c(E)$ and~$c(F)$ of the first stair are congruent.
By using Lemma~\ref{lem:ladder} and repeatedly applying Lemma~\ref{lem:forcing}, it follows that the endpoints of every other stair of the ladder must be congruent as well.
Consequently, since there is a stair~$(E',F')$ with $F'\in X$ (as $p(F')=F\in p(X)$), we must have $E'\in X$ as well and therefore $p(E')=E\in p(X)$, a contradiction to the fact that~$E\notin p(X)$.
\end{proof}
\begin{lemma}
\label{lem:minmax-rail}
For every lattice congruence $\equiv$ of $\AR_D$, either for every~$E\in\AR_{D_{n-1}}$ there are two distinct equivalence classes of~$\equiv$ containing~$c(E)$ and~$\bar{c}(E)$, or for every rail~$r(E)$, $E\in\AR_{D_{n-1}}$, all reorientations on that rail belong to the same equivalence class.
\end{lemma}
\begin{proof}
By the forcing rules described in Lemma~\ref{lem:forcing}, if the reorientations of some rail $r(E)$, $E\in\AR_{D_{n-1}}$, are pairwise congruent, then so are the reorientations of every rail~$r(F)$, $F\in\AR_{D_{n-1}}$, that forms a ladder~$\ell(E,F)$ with~$r(E)$.
Repeatedly applying this observation shows that in this case all reorientations on every rail are congruent.
Otherwise, every rail intersects with at least two equivalence classes of~$\equiv$.
From Lemma~\ref{lem:interval}, we know that every equivalence class of~$\equiv$ intersects each rail~$r(E)$, $E\in\AR_{D_{n-1}}$, in an interval, hence the minimum~$c(E)$ and maximum~$\bar{c}(E)$ must belong to distinct equivalence classes, as claimed.
\end{proof}
\subsubsection{Selection of representatives and encoding as a zigzag language}
As mentioned before, $D=([n],A)$ is assumed to be a peo-consistent acyclic digraph throughout.
For any congruence~$\equiv$ of the acyclic reorientation lattice~$\AR_D$, we say that $R\seq \AR_D$ is a set of \emph{representatives} for the equivalence classes~$\AR_D/{\equiv}$ if and only if for every equivalence class~$X\in\AR_D/{\equiv}$, we have $|X\cap R|=1$.
We now define a set of representatives~$R_D$.
If $D$ is empty, then we define $R_D:=\emptyset$.
Otherwise, we consider the set $R_{D_{n-1}}\seq \AR_{D_{n-1}}$ of representatives for the restriction~$\equiv^*$ of~$\equiv$ on~$D_{n-1}$.
For every acyclic reorientation $E\in R_{D_{n-1}}$, we consider the rail~$r(E)$ in~$\AR_D$.
By Lemma~\ref{lem:minmax-rail} there are two possible cases.
If for every~$E\in\AR_{D_{n-1}}$ there are two distinct equivalence classes of~$\equiv$ containing~$c(E)$ and~$\bar{c}(E)$, then we define a set~$R_{r(E)}$ for all $E\in R_{D_{n-1}}$ as follows.
For each equivalence class~$X\in \AR_D/{\equiv}$ such that $X\cap r(E)\neq\emptyset$, we pick exactly one element from~$X\cap r(E)$ to be included in the set~$R_{r(E)}$.
In particular, we always pick~$c(E)$ and~$\bar{c}(E)$.
We then define
\begin{subequations}
\label{eq:RD}
\begin{equation}
\label{eq:RD1}
R_D := \bigcup_{E\in R_{D_{n-1}}} R_{r(E)}.
\end{equation}
Otherwise, by Lemma~\ref{lem:minmax-rail} for every rail~$r(E)$, $E\in \AR_{D_{n-1}}$, all reorientations on that rail belong to the same equivalence class.
For each $E\in R_{D_{n-1}}$ we select for the set~$R_D$ the reorientation that consists of adding the vertex~$n$ to~$E$ as a sink, i.e., we define
\begin{equation}
\label{eq:RD2}
R_D := \begin{cases}
\{ c(E) \mid E\in R_{D_{n-1}} \} & \text{if $n$ is a sink in~$D$,} \\
\{ \bar{c}(E) \mid E\in R_{D_{n-1}} \} & \text{if $n$ is a source in~$D$.}
\end{cases}
\end{equation}
\end{subequations}
In order to apply Algorithm~J, we interpret the representatives for~$\AR_D/{\equiv}$ as a zigzag language of permutations.
Recall that in Lemma~\ref{lem:peo2perm} we defined a map from acyclic orientations of a graph in perfect elimination order to permutations.
We reuse this definition here, and with any reorientation~$E$ of~$D$ (as $D$ is peo-consistent, $1,\ldots,n$ is a perfect elimination ordering of the underlying undirected graph), we associate the permutation~$\pi_E$, and we define
\begin{equation}
\label{eq:PiD}
\Pi_D := \{ \pi_E \mid E\in R_D \}.
\end{equation}
\begin{lemma}
\label{lem:cong}
For every lattice congruence~$\equiv$ of~$\AR_D$, the set~$R_D$ defined in~\eqref{eq:RD} is indeed a set of representatives for~$\AR_D/{\equiv}$.
Furthermore, the set~$\Pi_D$ defined in~\eqref{eq:PiD} is a zigzag language of permutations satisfying condition~(z1) if~\eqref{eq:RD1} holds and condition~(z2) if~\eqref{eq:RD2} holds.
\end{lemma}
\begin{proof}
We argue by induction on~$n$.
The statement trivially holds for~$n=0$.
For the induction step, suppose that $R_{D_{n-1}}$ is a set of representatives for~$\AR_{D_{n-1}}/{\equiv^*}$ and that $\Pi_{D_{n-1}}$ is a zigzag language of permutations.
Observe that $\Pi_{D_{n-1}} = \{ p(\pi_E) \mid \pi_E \in \Pi_D \}$.
Hence, in order to show that $\Pi_D$ is a zigzag language, we only need to show that either condition~(z1) or~(z2) as stated in Section~\ref{sec:greedy} is met.
Consider the two cases~\eqref{eq:RD1} and~\eqref{eq:RD2} in the inductive definition of~$R_D$.
In the first case, by Lemma~\ref{lem:projection} and the induction hypothesis, for each equivalence class~$X$ of~$\AR_D$, the projection~$p(X)$ has exactly one element in common with~$R_{D_{n-1}}$.
Hence, $X$ has a nonempty intersection with exactly one rail~$r(E)$ for some $E\in R_{D_{n-1}}$.
By construction, we then have $|R_{r(E)}\cap X|=1$ and consequently $|R_D\cap X|=1$ by the definition~\eqref{eq:RD1}.
It follows that $R_D$ is a set of representatives for~$\AR_D/{\equiv}$.
Moreover, by construction, for every $E \in R_{D_{n-1}}$, the reorientations~$c(E)$ and~$\bar{c}(E)$ are in~$R_{r(E)}$ and consequently also in~$R_D$.
This implies that $c_1(\pi_E)$ and~$c_n(\pi_E)$ are in~$\Pi_D$, i.e., $\Pi_D$ satisfies condition~(z1).
In the second case, by Lemma~\ref{lem:projection} every equivalence class~$X$ of~$\AR_D$ satisfies $X=\bigcup_{E \in X^*} r(E)$.
By the induction hypothesis, every equivalence class of~$\AR_{D_{n-1}}$ has exactly one element in common with~$R_{D_{n-1}}$.
Therefore, for $R_D$ as defined by~\eqref{eq:RD2}, each equivalence class of~$\AR_D$ has exactly one element in common with~$R_D$.
It follows that $R_D$ is a set of representatives for~$\AR_D/{\equiv}$.
Furthermore, in this case we have $\Pi_D = \{c_n(\pi_E) \mid E \in R_{D_{n-1}} \} = \{c_n(\pi_E) | \pi_E \in \Pi_{D_{n-1}} \}$, i.e., $\Pi_D$ satisfies condition~(z2).
\end{proof}
\subsubsection{Algorithm~J for quotients of acyclic reorientation lattices}
\begin{lemma}
\label{lem:cong-J}
When running Algorithm~J with input~$\Pi_D$ as defined in~\eqref{eq:PiD}, then for any two permutations~$\pi_E, \pi_F$ that are visited consecutively, the corresponding equivalence classes~$[E]_{\equiv}$ and~$[F]_{\equiv}$ form a cover relation in the quotient~$\AR_D/{\equiv}$.
\end{lemma}
\begin{proof}
We prove the lemma by induction.
The statement is vacuously true for~$n=0$.
For the induction step we assume that it holds for the input~$\Pi_{D_{n-1}}$.
By Lemma~\ref{lem:cong}, $\Pi_D$ is a zigzag language of permutations.
Furthermore, if~\eqref{eq:RD1} holds, then it satisfies condition~(z1), so the permutations in~$\Pi_D$ are generated in the sequence~$J(\Pi_D)$ defined in~\eqref{eq:JLn1}.
Observe that all permutations in~$\lvec{c}(\pi_E)$ or~$\rvec{c}(\pi_E)$, where $E\in R_{D_{n-1}}\seq \AR_{D_{n-1}}$, correspond to acyclic reorientations that lie on the rail~$r(E)=:(E_1,\ldots,E_d)=(c(E),\ldots,\bar{c}(E))$.
If $\pi_{E_i},\pi_{E_j}$, where $1\leq i<j\leq d$, are visited consecutively, then by the definition of the representatives~$R_{r(E)}$ there is an integer~$s$ with $i\leq s<j$ such that
\begin{equation*}
E_i\equiv E_{i+1}\equiv \cdots\equiv E_s\not\equiv E_{s+1}\equiv E_{s+2}\equiv \cdots\equiv E_j,
\end{equation*}
so~$[E_i]_\equiv$ and~$[E_j]_\equiv$ form a cover relation in $\AR_D/{\equiv}$.
Moreover, if $\pi_E$ and~$\pi_F$, where $E,F\in R_{D_{n-1}}\seq \AR_{D_{n-1}}$, are visited consecutively in~$J(\Pi_{D_{n-1}})$, then transitioning from the last permutation of~$\lvec{c}(\pi_E)$ to the first permutation of~$\rvec{c}(\pi_F)$, or from the last permutation of~$\rvec{c}(\pi_E)$ to the first permutation of~$\lvec{c}(\pi_F)$, corresponds to moving from~$c(E)$ to~$c(F)$, or from~$\bar{c}(E)$ to~$\bar{c}(F)$.
Consequently, as $[E]_{\equiv^*}$ and~$[F]_{\equiv^*}$ form a cover relation in the quotient~$\AR_{D_{n-1}}/{\equiv^*}$ by induction, we obtain with the help of Lemma~\ref{lem:projection} that $[c(E)]_{\equiv}$ and~$[c(F)]_{\equiv}$, and also $[\bar{c}(E)]_{\equiv}$ and~$[\bar{c}(F)]_{\equiv}$ form a cover relation in the quotient~$\AR_D/{\equiv}$.
On the other hand, if~\eqref{eq:RD2} holds, then the zigzag language~$\Pi_D$ satisfies condition~(z2), so the permutations in~$\Pi_D$ are generated in the sequence~$J(\Pi_D)$ defined in~\eqref{eq:JLn2}.
In this case, the claim follows immediately by induction.
\end{proof}
Combining Theorem~\ref{thm:jump}, Lemma~\ref{lem:cong}, and Lemma~\ref{lem:cong-J} yields our second main theorem.
Note that by Lemma~\ref{lem:skeletal}, Theorem~\ref{thm:mainquotient} below applies in particular to skeletal digraphs~$D$, thus addressing Problem~\ref{prob:pilaud}.
\begin{theorem}
\label{thm:mainquotient}
For every peo-consistent digraph~$D$ and every lattice congruence $\equiv$ of~$\AR_D$, Algorithm~J with input~$\Pi_D$ as defined in~\eqref{eq:PiD} generates a sequence $\pi_{E_1}, \pi_{E_2}, \ldots$ of permutations from~$\Pi_D$, where $E_1,E_2,\ldots \in R_D$, such that $[E_1]_{\equiv}, [E_2]_{\equiv},\ldots $ is a Hamilton path in the cover graph of the lattice quotient~$\AR_D/{\equiv}$.
\end{theorem}
By the remarks after Lemma~\ref{lem:peo2perm}, the minimum~$D$ of the acyclic reorientation lattice~$\AR_D$ is encoded by the identity permutation~$\pi_D=\ide_n$, which by Theorem~\ref{thm:jump} can be used for initialization in Algorithm~J, if and only if the vertex~$i$ is a sink in~$D_i$ for all $i\in[n]$.
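For very small instances, the output of such a generation scheme can be sanity-checked against a brute-force enumeration. The following Python sketch is only illustrative and is not part of the argument; it does not implement Algorithm~J or the quotient construction, and the helper names are ad hoc. It enumerates the acyclic reorientations of a small digraph together with the flip graph in which two reorientations are adjacent if they differ in exactly one arc.
\begin{verbatim}
from itertools import product

def acyclic_reorientations(vertices, edges):
    """Brute-force helper (not Algorithm J): enumerate all acyclic
    reorientations of D = (vertices, edges) and the flip graph in which
    two reorientations are adjacent iff they differ in one arc."""
    def is_acyclic(arcs):
        # Kahn-style topological check for a directed cycle
        out = {v: set() for v in vertices}
        indeg = {v: 0 for v in vertices}
        for u, v in arcs:
            out[u].add(v)
            indeg[v] += 1
        queue = [v for v in vertices if indeg[v] == 0]
        seen = 0
        while queue:
            u = queue.pop()
            seen += 1
            for w in out[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return seen == len(vertices)

    reorientations = []
    for flips in product([False, True], repeat=len(edges)):
        arcs = tuple((v, u) if f else (u, v) for (u, v), f in zip(edges, flips))
        if is_acyclic(arcs):
            reorientations.append(frozenset(arcs))
    adjacency = {R: [Q for Q in reorientations if len(R ^ Q) == 2]
                 for R in reorientations}
    return reorientations, adjacency

# example: the transitive triangle 1->2, 2->3, 1->3 has 6 acyclic reorientations
R, adj = acyclic_reorientations([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
print(len(R), sorted(len(v) for v in adj.values()))
\end{verbatim}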
\iffalse
\section{Open questions}
We conclude this paper with some interesting open problems.
\begin{itemize}[leftmargin=5mm, noitemsep, topsep=1pt plus 1pt]
\item
Graphs that admit a perfect elimination order have a nice graph-theoretic characterization, namely that every induced cycle has length~3 (i.e., they are chordal).
Is there a similar structural characterization for hypergraphs that admit a hyperfect elimination order (recall the definition in Section~\ref{sec:hyperfect})?
\item
Are there natural examples of hypergraphs in hyperfect elimination order that are neither chordal graphs nor chordal building sets?
\item
The following problem was raised by Pilaud~\cite[Problem~41]{pilaud_2022}:
Characterize the skeletal digraphs~$D$ and the congruences~$\equiv$ of their acyclic reorientation lattice~$\AR_D$ for which the cover graph of the quotient~$\AR_D/{\equiv}$ is regular.
\item
The poset~$\AR_D$ of acyclic reorientations of a peo-consistent digraph~$D$ is a supersolvable lattice, in the sense defined by Stanley~\cite{MR309815}.
Is it true that $\AR_D$ is supersolvable if and only if $D$ is peo-consistent?
\item
When is the poset of acyclic reorientations of a hypergraph a lattice?
\end{itemize}
\fi
\bibliographystyle{alpha}
| {
"timestamp": "2022-12-09T02:00:35",
"yymm": "2212",
"arxiv_id": "2212.03915",
"language": "en",
"url": "https://arxiv.org/abs/2212.03915",
"abstract": "In 1993, Savage, Squire, and West described an inductive construction for generating every acyclic orientation of a chordal graph exactly once, flipping one arc at a time. We provide two generalizations of this result. Firstly, we describe Gray codes for acyclic orientations of hypergraphs that satisfy a simple ordering condition, which generalizes the notion of perfect elimination order of graphs. This unifies the Savage-Squire-West construction with a recent algorithm for generating elimination trees of chordal graphs. Secondly, we consider quotients of lattices of acyclic orientations of chordal graphs, and we provide a Gray code for them, addressing a question raised by Pilaud. This also generalizes a recent algorithm for generating lattice congruences of the weak order on the symmetric group. Our algorithms are derived from the Hartung-Hoang-Mütze-Williams combinatorial generation framework, and they yield simple algorithms for computing Hamilton paths and cycles on large classes of polytopes, including chordal nestohedra and quotientopes. In particular, we derive an efficient implementation of the Savage-Squire-West construction. Along the way, we give an overview of old and recent results about the polyhedral and order-theoretic aspects of acyclic orientations of graphs and hypergraphs.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Combinatorial generation via permutation languages. V. Acyclic orientations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587272306607,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7097978826008787
} |
https://arxiv.org/abs/1511.05057 | Inverse tensor eigenvalue problem | A tensor $\mathcal T\in \mathbb T(\mathbb C^n,m+1)$, the space of tensors of order $m+1$ and dimension $n$ with complex entries, has $nm^{n-1}$ eigenvalues (counted with algebraic multiplicities). The inverse eigenvalue problem for tensors is a generalization of that for matrices. Namely, given a multiset $S\in \mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ of total multiplicity $nm^{n-1}$, is there a tensor in $\mathbb T(\mathbb C^n,m+1)$ such that the multiset of eigenvalues of $\mathcal{T}$ is exact $S$? The solvability of the inverse eigenvalue problem for tensors is studied in this paper. With tools from algebraic geometry, it is proved that the necessary and sufficient condition for this inverse problem to be generically solvable is $m=1,\ \text{or }n=2,\ \text{or }(n,m)=(3,2),\ (4,2),\ (3,3)$. | \section{Introduction}\label{sec:intro}
The eigenvalue of a tensor, as a natural generalization of the eigenvalue of a square matrix, has been attracting increasing attention
in fields related to numerical multilinear algebra (see \cite{cs13,cpz08,fo14,lqz13,oo12,nqww07,q14} and references therein), since the independent work by Lim \cite{l05} and Qi \cite{q05}.
For a tensor of order $m+1$ and dimension $n$, its eigenvalues are the roots of the characteristic polynomial, which is a monic polynomial of degree $nm^{n-1}$ \cite{q05,hhlq13}. As a consequence, the number of eigenvalues, counted with multiplicities, is equal to $nm^{n-1}$. This number, however, grows exponentially in $m$ and $n$, and thus differs dramatically from its matrix counterpart. We can alternatively (in fact equivalently) define the eigenvalues as solutions of a system of polynomial equations that resembles the system of eigenvalue equations for a matrix (cf.\ Definition~\ref{def:eigenvalue-eigenvector}). However, both the computation and the structure of the eigenvalues are very complicated and hard to investigate \cite{q05,hhlq13,hy15}. The situation would improve if the eigenvalues were shown to lie in a variety in $\mathbb C^{nm^{n-1}}$ of much smaller dimension.
Let $\mathbb T(\mathbb C^n,m+1)$ be the space of tensors of order $m+1$ and dimension $n$ with entries in the field $\mathbb C$ of complex numbers. When $m=1$, we get the space of $n\times n$ matrices with complex components. The eigenvalues of a matrix $A=(a_{ij})\in\mathbb T(\mathbb C^n,2)$, as roots of the characteristic polynomial
\[
\operatorname{det}(\lambda I-A)=\lambda^n+c_{n-1}(A)\lambda^{n-1}+\dots+c_1(A)\lambda+c_0(A),
\]
can be written as hypergeometric series in terms of the components $a_{ij}$'s (cf.\ \cite{s02}), since $c_i(A)\in\mathbb C[A]$ is a homogeneous polynomial of degree $n-i$ for $i=0,\dots,n-1$. We can collect the $n$ hypergeometric series to form a multiset-valued mapping $\phi : \mathbb T(\mathbb C^n,2)\rightarrow \mathbb C^n/\mathfrak{S}(n)$,
where $\mathfrak{S}(n)$ is the group of permutations on $n$ elements. Thus, $\phi(A)$ is the multiset of eigenvalues of $A$. The set $\mathbb C^n/\mathfrak{S}(n)$ is the $n$th symmetric product of $\mathbb C$, and (cf.\ \cite{h92})
\[
\operatorname{dim}(\mathbb C^n/\mathfrak{S}(n))=n.
\]
A well-known result from linear algebra (cf.\ \cite{hj85}) is that this mapping is surjective, i.e.,
\begin{equation}\label{matrix-surjective}
\operatorname{image}(\phi)=\mathbb C^n/\mathfrak{S}(n).
\end{equation}
This implies that any given $n$-tuple of complex numbers can be realized as the eigenvalues of an $n\times n$ matrix.
It is shown in \cite{hhlq13} that the codegree~$i$ coefficient of the characteristic polynomial of a tensor is a homogeneous polynomial of degree $i$ in the tensor components. Therefore, we can define the multiset-valued eigenvalue mapping $\phi : \mathbb T(\mathbb C^n,m+1)\rightarrow\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ in a similar way for the tensor eigenvalues. Here, for the sake of notational simplicity, we use the same symbol $\phi$ for all positive integers $m$ and $n$; the intended meaning will be clear from the context.
Likewise, a basic question arises for tensors when one is trying to understand their eigenvalues:
\[
\textit{what can the eigenvalues of tensors in a given space be?}
\]
This question is of course very general and hard, and answering it will require long and sustained effort. Conversely, we can ask about \textit{the existence of tensors for a given multiset of eigenvalues}, a question known in general as the \textit{inverse eigenvalue problem for tensors}. We refer to \cite{f77,cg05} and references therein for the inverse eigenvalue problems for matrices.
In this article, we will
study the counterpart of \eqref{matrix-surjective} for tensors: when do the eigenvalues fill the whole quotient space? It turns out that this question is hard to answer. However, with the help of concepts from algebraic geometry, we are able to answer a weaker version of the question: ``when do the eigenvalues almost fill the whole quotient space?", or, more precisely: ``when does the image of the multiset-valued eigenvalue map contain an open dense subset of $\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$?" Unless otherwise stated, we will always adopt the Zariski topology for the ambient space. The map $\phi$ is dominant if its image
contains an open dense subset of $\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ (cf.\ Definition~\ref{prop:dense-eig}).
We have the following main theorem of this article.
\begin{theorem}[Dominant Theorem]\label{theorem:dominant}
The eigenvalue map $\phi : \mathbb{T}(\mathbb C^n,m+1)\rightarrow\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ is dominant if and only if
\[
m=1,\ \text{or }n=2,\ \text{or }(n,m)=(3,2),\ (4,2),\ (3,3).
\]
\end{theorem}
\begin{proof}
The case $m=1$ is the trivial matrix counterpart. For the other cases,
the necessity follows from Proposition~\ref{prop:necessary}; and the sufficiency follows from
Propositions~\ref{prop:dominant-n2}, \ref{prop:dominant-33} and \ref{prop:dominant-34-42}.
\end{proof}
Since the topology on $\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ is the Zariski topology, the fact that the image of $\phi$ contains an open dense subset of $\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ implies that for almost all multisets $S$ in $\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$, there exists a tensor $\mathcal{T}$ in $\mathbb{T}(\mathbb{C}^n,m+1)$ such that the multiset of eigenvalues of $\mathcal{T}$ is exactly $S$. More precisely, it implies that the probability that a randomly picked multiset $S$ can be realized as the multiset of eigenvalues of a tensor in $\mathbb{T}(\mathbb{C}^n,m+1)$ is one.
\section{Preliminaries}\label{sec:preliminary}
\subsection{Eigenvalues of tensors}\label{sec:eigenvalue}
There are two most popular definitions of tensor eigenvalues in the literature \cite{q05,l05}. Throughout this article, eigenvalues and eigenvectors of tensors are restricted to the next definition.
\begin{definition}[Eigenvalues and Eigenvectors \cite{q05,l05}]\label{def:eigenvalue-eigenvector}
Let tensor $\mathcal T=(t_{ii_1\ldots i_m})\in\mathbb{T}(\mathbb{C}^n,m+1)$. A number $\lambda\in\mathbb C$ is called an \textit{eigenvalue} of $\mathcal T$, if there exists a vector $\mathbf x\in\mathbb C^n\setminus\{\mathbf 0\}$ which is called an \textit{eigenvector} such that
\begin{equation}\label{eigenvalue-equation}
\mathcal T\mathbf x^m=\lambda\mathbf x^{[m]},
\end{equation}
where $\mathbf x^{[m]}\in\mathbb C^n$ is an $n$-dimensional vector with its $i$-th component being $x_i^{m}$, and $\mathcal T\mathbf x^m\in\mathbb C^n$ with
\[
\big(\mathcal T\mathbf x^m\big)_i:=\sum_{i_1,\dots,i_m=1}^nt_{ii_1\dots i_m}x_{i_1}\dots x_{i_m}\ \text{for all }i=1,\dots,n.
\]
\end{definition}
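As a quick illustration of the definition only (not part of the development, with ad hoc helper names), the following NumPy sketch evaluates $\mathcal T\mathbf x^m$ by repeated contraction and checks the eigenvalue equation~\eqref{eigenvalue-equation} on a ``diagonal'' tensor, for which each standard basis vector is an eigenvector.
\begin{verbatim}
import numpy as np

def tensor_apply(T, x):
    """(T x^m)_i = sum over i1..im of T[i, i1, ..., im] * x[i1] * ... * x[im]."""
    out = T
    for _ in range(T.ndim - 1):
        out = out @ x          # contract the last index with x, m times in total
    return out

# diagonal example: T[i, i, ..., i] = d_i, zeros elsewhere;
# then e_i is an eigenvector with eigenvalue d_i
m, n = 2, 3
T = np.zeros((n,) * (m + 1))
d = np.array([1.0, 2.0, 3.0])
for i in range(n):
    T[(i,) * (m + 1)] = d[i]
x = np.eye(n)[1]                          # the standard basis vector e_2
print(tensor_apply(T, x), d[1] * x**m)    # both sides of T x^m = lambda x^[m]
\end{verbatim}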
A multiset $S$ of complex numbers is a $2$-tuple $(A,\psi)$ with a set $A\subseteq\mathbb C$ and a map $\psi : A\rightarrow\mathbb N_+$. For any $a\in A$, $\psi(a)$ is the \textit{multiplicity} of the element $a$ in $S$. The summation $\sum_{a\in A}\psi(a)$ is the \textit{total multiplicity} of the multiset $S$.
The multiset of eigenvalues of a given tensor $\mathcal T$, which is denoted as $\sigma(\mathcal T)$,
is defined as $(A,\psi)$ with $A$ the set of eigenvalues of $\mathcal T$ and $\psi$ assigning to each eigenvalue its algebraic multiplicity. Then $\sigma(\mathcal T)$ always has finite total multiplicity \cite{q05,hhlq13}, and it is the multiset of roots of the univariate polynomial
\[
\chi(\lambda)=\operatorname{det}(\lambda\mathcal I-\mathcal T)
\]
which is called the \textit{characteristic polynomial} of $\mathcal T$ \cite{q05}. The degree of $\chi(\lambda)$ for $\mathcal T\in\mathbb{T}(\mathbb{C}^n,m+1)$ is $nm^{n-1}$. Thus, $\sigma(\mathcal T)$ can be identified with an element of $\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ for any $\mathcal T\in\mathbb T(\mathbb C^n,m+1)$.
We refer to \cite{hhlq13,hy15} for the definitions of tensor determinant and algebraic multiplicity, and more facts on the characteristic polynomial.
The multiset-valued eigenvalue map $\phi : \mathbb{T}(\mathbb{C}^n,m+1)\rightarrow
\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ is defined as
\[
\phi(\mathcal T)=\sigma(\mathcal T).
\]
Since we are only concerned with eigenvalues of tensors, which depend solely on $\mathcal T\mathbf x^{m}$, it is
sufficient to consider the tensor space $\mathbb{TS}(\mathbb{C}^n,m+1):=\mathbb C^n\otimes\operatorname{S}^{m}(\mathbb C^n)$ (cf.\ \cite[Section~5.2]{hy15}). For any $\mathcal T\in\mathbb T(\mathbb C^n,m+1)$, we can symmetrize its $i$th slice $\mathcal T_i:=(t_{ii_1\dots i_m})_{1\leq i_1,\dots,i_m\leq n}$ via
\[
\big(\mathcal T\mathbf x^{m}\big)_i=\langle\operatorname{Sym}(\mathcal T_i),\mathbf x^{\otimes (m)}\rangle:=\sum_{i_1,\dots,i_m=1}^n\big(\operatorname{Sym}(\mathcal T_i)\big)_{i_1\dots i_m}x_{i_1}\dots x_{i_m}\ \text{for all }\mathbf x\in\mathbb C^n,
\]
where $\operatorname{Sym}(\mathcal T_i)$ is the symmetrization of the tensor $\mathcal T_i$, i.e., the symmetric tensor determined by the above equalities. We refer to \cite{l12} for basic concepts on tensors. Therefore, to every tensor $\mathcal T\in\mathbb T(\mathbb C^n, m+1)$ we associate an element $\operatorname{eSym}(\mathcal T)$ in $\mathbb{TS}(\mathbb{C}^n,m+1)$ by symmetrizing its slices. It is easy to see that
\[
\mathcal T\mathbf x^{m}=\operatorname{eSym}(\mathcal T)\mathbf x^{m}\ \text{for all }\mathbf x\in\mathbb C^n.
\]
We see that all tensors in the fibre of the surjective map $\operatorname{eSym} : \mathbb T(\mathbb C^n,m+1)\rightarrow\mathbb{TS}(\mathbb C^n,m+1)$ have the same defining equations for the eigenvalue problem. Therefore, we have
\[
\phi(\mathbb{T}(\mathbb{C}^n,m+1))=\phi(\mathbb{TS}(\mathbb{C}^n,m+1)).
\]
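This slice symmetrization is easy to check numerically; the following NumPy sketch (only illustrative; the helper names are ad hoc) symmetrizes each slice over its last $m$ indices and verifies that $\mathcal T\mathbf x^{m}=\operatorname{eSym}(\mathcal T)\mathbf x^{m}$ on a random example.
\begin{verbatim}
import numpy as np
from itertools import permutations

def esym(T):
    """Symmetrize each slice T[i, :, ..., :] over its m trailing indices."""
    m = T.ndim - 1
    perms = list(permutations(range(1, m + 1)))
    return sum(np.transpose(T, (0,) + p) for p in perms) / len(perms)

def tensor_apply(T, x):
    out = T
    for _ in range(T.ndim - 1):
        out = out @ x          # contract the last index with x
    return out

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3, 3))        # n = 3, m = 2
x = rng.standard_normal(3)
print(np.allclose(tensor_apply(T, x), tensor_apply(esym(T), x)))   # True
\end{verbatim}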
\subsection{Algebraic geometry}\label{sec:algebraic}
We list here some notions from algebraic geometry which we will use in this article. We refer to \cite{h77,h92,s77,clo98} for basic algebro-geometric concepts.
\begin{enumerate}
\item An \textit{algebraic variety} in $\mathbb{C}^n$ is a set of common zeros of some polynomials in $n$ variables. In particular, the linear space $\mathbb{C}^n$ is an algebraic variety.
\item The coordinate ring $\mathbb{C}[X]$ of an algebraic variety $X$ is defined to be the quotient ring $\mathbb{C}[x_1,\cdots, x_n]/ I(X)$ where $I(X)$ is the ideal of all polynomials vanishing on $X$.
\item A map $f : X \rightarrow Y$ between two algebraic varieties $X$ and $Y$ is said to be a \textit{morphism} if $f$ is induced by a homomorphism of coordinate rings $\psi : \mathbb C[Y ] \rightarrow\mathbb C[X]$. In particular, polynomial maps between two linear spaces are morphisms.
\item Let $f$ be a morphism between $X$ and $Y$. If its image is Zariski dense, i.e., $\overline{f(X)} = Y$ , then $f$ is called a \textit{dominant morphism}.
\item An algebraic variety $X$ is \textit{irreducible} if $X=X_1\cup X_2$ for closed subvarieties $X_1$ and $X_2$ implies that $X_1=X$ or $X_2=X$.
\item We say that a property $P$ holds for a \textit{generic point} in $\mathbb{C}^n$ if the set of points in $\mathbb{C}^n$ that do not satisfy $P$ is contained in a proper subvariety of $\mathbb{C}^n$. For example, if $X\subset \mathbb{C}^n$ is a proper subvariety, then a generic point in $\mathbb{C}^n$ is not in $X$.
\end{enumerate}
\begin{remark}
We remark here that if we put the outer Lebesgue measure on $\mathbb{C}^n\simeq \mathbb{R}^{2n}$ then a proper subvariety $X$ of $\mathbb{C}^n$ has measure zero.
Hence, if a property $P$ holds for a generic point in $\mathbb{C}^n$, then the probability that a randomly picked point from $\mathbb{C}^n$ has property $P$ is one.
\end{remark}
The main result of this article will be proved based on the following algebraic version of the open mapping theorem.
\begin{proposition}[\cite{t02}]\label{prop:dense}
If $f:X\to Y$ is a dominant morphism between two irreducible algebraic varieties then $f(X)$ contains an open dense subset of $Y$.
\end{proposition}
The following two facts are obvious to those who are familiar with algebraic geometry, while we supply proofs here for completeness.
\begin{proposition}\label{prop:necessary-general}
If a morphism $f : X \rightarrow Y$ between two algebraic varieties $X$ and $Y$ is dominant, then it holds that
\[
\operatorname{dim}(X)\geq \operatorname{dim}(Y).
\]
\end{proposition}
\begin{proof}
It is known that $\operatorname{dim}(X)$ is the same as the transcendence degree of the function field $\mathbb{C}(X)$ over $\mathbb{C}$ and $f$ is dominant if and only if the ring map $\psi: \mathbb{C}[Y]\to \mathbb{C}[X]$ is an inclusion of rings. Since $\psi$ is an inclusion of rings we obtain that $\psi$ induces an inclusion of fields $\mathbb{C}(Y)\to \mathbb{C}(X)$. Therefore we have
\[
\operatorname{tr.d}_{\mathbb{C}}(\mathbb{C}(X))\ge \operatorname{tr.d}_{\mathbb{C}}(\mathbb{C}(Y)).
\]
\end{proof}
An algebraic variety $X\subseteq\mathbb C^n$ is \textit{smooth} if the tangent space $T_{\mathbf x}(X)$ has constant dimension (i.e., $\operatorname{dim}(X)$) for every $\mathbf x\in X$.
\begin{proposition}\label{prop:differential}
Let $f : X \rightarrow Y$ be a morphism between two smooth algebraic varieties with $\operatorname{dim}(X)\geq \operatorname{dim}(Y)$.
If there exists a point $\mathbf x\in X$ such that the rank of the differential of $f$ at $\mathbf x$ is equal to $\operatorname{dim}(Y)$, then the morphism $f$ is dominant.
\end{proposition}
\begin{proof}
If $f$ is not dominant then $\overline{f(X)}$ is a proper subvariety of $Y$. Hence it factors as
\[
X\xrightarrow{g} \overline{f(X)}\xrightarrow{i} Y,
\]
where $g$ is defined by $g(\mathbf x)=f(\mathbf x)$ and $i$ is the inclusion of $\overline{f(X)}$ into $Y$. Then the differential $d_{\mathbf x}f$ factors as
\[
T_{\mathbf x}X \xrightarrow{d_{\mathbf x}g} T_{f(\mathbf x)} \overline{f(X)} \xrightarrow{d_{f(\mathbf x)}i}T_{f(\mathbf x)}Y,
\]
where $T_{\mathbf x}X$ is the tangent space of the variety $X$ at the point $\mathbf x$.
Since $\overline{f(X)}$ is a proper subvariety of $Y$, it has dimension strictly smaller than $\operatorname{dim}(Y)$, which implies that $\operatorname{rank}(d_{\mathbf x} g)$ is at most $\operatorname{dim} T_{f(\mathbf x)}\overline{f(X)} < \operatorname{dim}(Y)$. Therefore, we get a contradiction to the assumption that the rank of $d_{\mathbf x}f$ is $\operatorname{dim}(Y)$.
\end{proof}
For any positive integer $d>0$, the roots of the univariate polynomial equation
\[
t^d+p_{d-1}t^{d-1}+\dots+p_1t+p_0=0
\]
depend continuously on the coefficient vector $\mathbf p:=(p_{d-1},\dots,p_0)^\mathsf{T}$ \cite{s02}. If we collect the $d$ trajectories of the roots, we can define a multiset-valued map $q :\mathbb C^d\rightarrow\mathbb C^d/\mathfrak{S}(d)$ by
\begin{equation}\label{map:root}
q(\mathbf w):= \{ \text{roots (with multiplicities) of }t^d+w_1t^{d-1}+\dots+w_d=0\}.
\end{equation}
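Numerically, $q$ amounts to polynomial root finding, and its inverse (the morphism $g$ appearing in the proof of Lemma~\ref{lemma:roots} below) amounts to expanding $(t-\lambda_1)\cdots(t-\lambda_d)$; the following small NumPy illustration is not part of the argument.
\begin{verbatim}
import numpy as np

# q sends (w_1, ..., w_d) to the multiset of roots of t^d + w_1 t^(d-1) + ... + w_d;
# g recovers the coefficients from the roots (up to round-off)
w = np.array([0.0, -7.0, 6.0])                 # t^3 - 7t + 6 = (t-1)(t-2)(t+3)
roots = np.roots(np.concatenate(([1.0], w)))   # q(w)
print(np.sort(roots))                          # -3, 1, 2
print(np.poly(roots)[1:])                      # g(q(w)) = w
\end{verbatim}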
\begin{definition}\label{prop:dense-eig}
Let
$p_i(\mathbf y)\in\mathbb C[\mathbf y]$ be a polynomial for all $i=0,\dots,d-1$ with $\mathbf y=(y_1,\dots,y_k)^\mathsf{T}\in\mathbb C^k$, and $\mathbf p: \mathbb C^k\rightarrow\mathbb C^d$ be the mapping defined by:
\[
\mathbf p(\mathbf y):=(p_{d-1}(\mathbf y),\dots,p_0(\mathbf y))^\mathsf{T}.
\]
The mapping $q\circ\mathbf p : \mathbb C^k\rightarrow\mathbb C^d/\mathfrak{S}(d)$ is called dominant, if $\operatorname{image}(q\circ\mathbf p)$ contains a Zariski open dense subset of $\mathbb C^d/\mathfrak{S}(d)$.
\end{definition}
Definition~\ref{prop:dense-eig} is an extension of dominant morphisms, since $q\circ\mathbf p$ is not a morphism.
\begin{lemma}\label{lemma:roots}
For any positive integer $d>0$,
let $\mathbf p: \mathbb C^k\rightarrow\mathbb C^d$ be a polynomial mapping as in Definition~\ref{prop:dense-eig}. Then, the composite mapping $q\circ\mathbf p : \mathbb C^k\rightarrow\mathbb C^d/\mathfrak{S}(d)$ is surjective (respectively, dominant), i.e.,
\[
\operatorname{image}(q\circ \mathbf p) =\mathbb C^d/\mathfrak{S}(d) (\text{respectively, }
\operatorname{image}(q\circ \mathbf p)\ \text{contains an open dense subset of }\mathbb C^d/\mathfrak{S}(d))
\]
if and only if the mapping $\mathbf p$ is a surjective (respectively, dominant) morphism, i.e.,
\[
\operatorname{image}(\mathbf p)=\mathbb C^d (\text{respectively, }
\overline{\operatorname{image}(\mathbf p)}=\mathbb C^d).
\]
\end{lemma}
\begin{proof}
Note that $q : \mathbb C^d\rightarrow\mathbb C^d/\mathfrak{S}(d)$ is bijective. Then, $q\circ\mathbf p$ is surjective if and only if $\mathbf p$ is surjective.
We consider the map $g : \mathbb C^{d}/\mathfrak{S}(d)\to \mathbb{C}^{d}$ defined by sending a multiset $\{\lambda_1,\dots, \lambda_{d}\}$ to the vector formed by coefficients (except the leading term) of the polynomial $(t-\lambda_1)\cdots (t-\lambda_{d})$ in increasing codegree order. Then $g$ is a morphism. It is easy to see that $g$ is the inverse of the map $q : \mathbb{C}^{d}\to\mathbb C^{d}/\mathfrak{S}(d)$.
If $q\circ\mathbf p$ is dominant, then there is $V\subseteq \operatorname{image}(q\circ\mathbf p)$ such that $V$ is an open dense subset of $\mathbb C^{d}/\mathfrak{S}(d)$; in particular, $V$ is also a Euclidean open subset.
Since $q$ is in addition continuous, $q^{-1}(V)=g(V)\subseteq \mathbf p(\mathbb C^k)$
is a Euclidean open subset. If $q^{-1}(V)$ is not dense, then there is a small Euclidean open ball $\hat V\subset\mathbb C^d$ such that $q^{-1}(V)\cap \hat V=\emptyset$. Since $g$ is continuous, $g^{-1}(\hat V)$ is a Euclidean open set in $\mathbb C^{d}/\mathfrak{S}(d)$ (whose Euclidean topology is the one induced from $\mathbb C^d$). We must have $g^{-1}(\hat V)\cap V=\emptyset$, since $g$ is bijective.
Thus, we obtain a contradiction to the choice of $V$. Therefore, $\mathbf p(\mathbb C^k)$ must contain a Euclidean open dense subset of $\mathbb C^d$. On the other hand, the Euclidean closure of $\mathbf p(\mathbb C^k)$ is contained in the Zariski closure of $\mathbf p(\mathbb C^k)$. Thus, $\overline{\mathbf p(\mathbb C^k)}=\mathbb C^d$, and hence $\mathbf p$ is a dominant morphism.
Suppose that $\mathbf p : \mathbb C^k\to\mathbb C^d$ is a dominant morphism. It follows from Proposition~\ref{prop:dense} that $\mathbf p(\mathbb C^k)$ contains an open dense subset $U$ of $\mathbb C^d$. Since $g$ is a morphism, $g^{-1}(U)$ is an open dense subset of $\mathbb C^d/\mathfrak{S}(d)$. Therefore, $g^{-1}(U)=q(U)\subseteq q(\mathbf p(\mathbb C^k))\subseteq \mathbb C^d/\mathfrak{S}(d)$ is an open dense subset. Thus, $q\circ\mathbf p$ is dominant by Definition~\ref{prop:dense-eig}.
\end{proof}
\subsection{Necessary conditions}\label{sec:necessary}
We can expand out the characteristic polynomial $\chi(\lambda)$ of a tensor $\mathcal T\in\mathbb {TS}(\mathbb C^n,m+1)$ as
\[
\chi(\lambda)=\lambda^{nm^{n-1}}+c_{nm^{n-1}-1}(\mathcal T)\lambda^{nm^{n-1}-1}+\dots+c_1(\mathcal T)\lambda+c_0(\mathcal T).
\]
According to \cite{hhlq13}, each $c_i(\mathcal T)\in\mathbb C[\mathcal T]$ is a homogeneous polynomial in the variables $t_{ii_1\dots i_m}$'s of degree $nm^{n-1}-i$ for $i=0,\dots,nm^{n-1}-1$. We define the \textit{coefficient map} $\mathbf c : \mathbb{TS}(\mathbb C^n,m+1)\rightarrow \mathbb C^{nm^{n-1}}$ as
\begin{equation}\label{coefficient-map}
\mathbf c(\mathcal T):=(c_{nm^{n-1}-1}(\mathcal T),\dots,c_0(\mathcal T))^\mathsf{T}\ \text{for all }\mathcal T\in \mathbb{TS}(\mathbb C^n,m+1).
\end{equation}
It is easy to see that $\mathbf c$ is a morphism between two smooth varieties. It is also easy to see that $\phi=q\circ\mathbf c$ (cf.\ Definition~\ref{prop:dense-eig}).
These, together with Lemma~\ref{lemma:roots}, imply the next proposition.
\begin{proposition}[Equivalent Relation]\label{prop:tensor}
For any positive integers $m$ and $n$, the multiset-valued eigenvalue map
$\phi: \mathbb {TS}(\mathbb C^n,m+1) \to \mathbb{C}^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$
is surjective (respectively, dominant) if and only if the coefficient map $\mathbf c : \mathbb {TS}(\mathbb C^n,m+1) \to\mathbb C^{nm^{n-1}}$ is a surjective (respectively, dominant) morphism.
\end{proposition}
\begin{lemma}\label{lemma:size}
For all positive integers $m,n\geq 2$, it holds that
\begin{equation}\label{lemma:size-inequality}
{n+m-1\choose m}<m^{n-1},
\end{equation}
unless $n=2$, or
\[
(n,m)=(3,2),\ (4,2),\ (3,3).
\]
\end{lemma}
\begin{proof}
First note that, for fixed $m\geq 2$, if \eqref{lemma:size-inequality} holds for some $n\geq 2$, then it also holds for
$n+1$, since
\[
{n+m\choose m}=\frac{n+m}{n}{n+m-1\choose m}<m{n+m-1\choose m}.
\]
Second, note that for fixed $n\geq 2$, if \eqref{lemma:size-inequality} holds for some $m\geq 2$, then it also holds for
$m+1$, since
\[
\frac{{n+m-1\choose m}}{m^{n-1}}=\frac{(1+\frac{n-1}{m})\cdots(1+\frac{1}{m})}{(n-1)!},
\]
and the right-hand side is strictly decreasing in~$m$.
Last, it is then a direct calculation to see that the listed cases are the only exceptions to the inequality~\eqref{lemma:size-inequality}.
\end{proof}
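The finitely many exceptional pairs can also be confirmed by a direct enumeration; the following Python check (only a sanity check over a finite range, not part of the proof) lists all pairs $2\le n,m\le 12$ violating~\eqref{lemma:size-inequality}.
\begin{verbatim}
from math import comb

# pairs (n, m) with 2 <= n, m <= 12 for which binom(n+m-1, m) < m**(n-1) fails
exceptions = [(n, m) for n in range(2, 13) for m in range(2, 13)
              if comb(n + m - 1, m) >= m ** (n - 1)]
print(exceptions)   # expect all (2, m), together with (3, 2), (4, 2), (3, 3)
\end{verbatim}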
The next proposition establishes the necessary condition under which the multiset-valued eigenvalue map is dominant. It says that in most situations, the eigenvalue map $\phi$ fails to be dominant.
\begin{proposition}[Necessary condition]\label{prop:necessary}
Let integers $m,n\geq 2$.
A necessary condition for the map $\phi : \mathbb{TS}(\mathbb C^n,m+1)\rightarrow\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ to be dominant
is that either
$n=2$, or
\[
(n,m)=(3,2),\ (4,2),\ (3,3).
\]
\end{proposition}
\begin{proof}
Note that
the dimension of the tensor space
$\mathbb{TS}(\mathbb{C}^n,m+1)$ is
\[
n{n+m-1\choose m}.
\]
The result then follows from Propositions~\ref{prop:necessary-general} and \ref{prop:tensor}, and Lemma~\ref{lemma:size}.
\end{proof}
\section{Tensors with dimension $n=2$}\label{sec:two}
\subsection{Basics}
In this section, we consider tensors in $\mathbb {TS}(\mathbb C^2,m+1)$. The multiset-valued eigenvalue map is therefore
\[
\phi : \mathbb{TS}(\mathbb C^2,m+1)\rightarrow\mathbb C^{2m}/\mathfrak{S}(2m).
\]
The system of eigenvalue equations of a tensor $\mathcal T=(t_{i_0\dots i_m})$ is (cf.\ \eqref{eigenvalue-equation})
\[
\begin{cases}a_0x^m+a_1x^{m-1}y+\dots+a_my^m=\lambda x^m,\\ b_0x^m+\dots+b_{m-1}xy^{m-1}+b_my^m=\lambda y^m,\end{cases}
\]
where we parameterized $\mathcal T$ as
\[
a_0:=t_{111\dots 1},\ a_1:=t_{121\dots 1}/m,\dots, a_m=t_{122\dots2},\ b_0=t_{211\dots1},\dots,b_{m-1}=t_{212\dots 2}/m,\ b_m=t_{222\dots 2}.
\]
It follows from the Sylvester formula for the resultant of two homogeneous polynomials in two variables (cf.\ \cite{s02,gkz94}) that
the characteristic polynomial is $\operatorname{det}(M-\lambda I)$ with the identity matrix $I\in\mathbb C^{2m\times 2m}$ and
the matrix $M\in\mathbb C^{2m\times 2m}$
\begin{equation}\label{sylvester-matrix}
M=\begin{bmatrix}a_0&a_1&a_2&\dots&a_m&0&0&\dots\\ 0& a_0&a_1&a_2&\dots&a_m&0&\dots\\
0&0& a_0&a_1&a_2&\dots&a_m&\dots \\ &&&\dots\\ 0&\dots &0&a_0&a_1&a_2&\dots& a_m\\
b_0&b_1&b_2&\dots&b_m&0&0&\dots\\ 0& b_0&b_1&b_2&\dots&b_m&0&\dots\\
0&0& b_0&b_1&b_2&\dots&b_m&\dots \\ &&&\dots\\ 0&\dots &0&b_0&b_1&b_2&\dots& b_m
\end{bmatrix}.
\end{equation}
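In particular, the eigenvalues of the tensor are exactly the eigenvalues of the matrix~$M$ and can be computed numerically; the following NumPy sketch (only illustrative, assuming the parameterization by $a_0,\dots,a_m,b_0,\dots,b_m$ above) assembles $M$ and returns its spectrum.
\begin{verbatim}
import numpy as np

def sylvester_eigs(a, b):
    """Eigenvalues of a tensor in TS(C^2, m+1) parameterized by
    a_0, ..., a_m and b_0, ..., b_m, via the 2m x 2m matrix M."""
    a, b = np.asarray(a, dtype=complex), np.asarray(b, dtype=complex)
    m = len(a) - 1
    M = np.zeros((2 * m, 2 * m), dtype=complex)
    for i in range(m):                 # m shifted copies of a and of b
        M[i, i:i + m + 1] = a
        M[m + i, i:i + m + 1] = b
    return np.linalg.eigvals(M)

# a random example with m = 3 has 2m = 6 eigenvalues
rng = np.random.default_rng(0)
print(sylvester_eigs(rng.standard_normal(4), rng.standard_normal(4)))
\end{verbatim}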
For all $k=1,\dots,2m$, denote by $M_k:=\{A : A\ \text{is a }k\times k\ \text{principal submatrix of }M\}$ the set of all $k\times k$ principal submatrices of $M$.
It is known that
\[
\operatorname{det}(M-\lambda I)=\sum_{k=0}^{2m}(-1)^k\bigg(\sum_{A\in M_{2m-k}}\operatorname{det}(A)\bigg)\lambda^k,
\]
where $M_0:=\emptyset$ by convention, and the corresponding summation over the empty set is defined to be $1$.
Denote by
\begin{equation}\label{minor}
c_k(\mathcal T):=(-1)^k\sum_{A\in M_{2m-k}}\operatorname{det}(A), \ \text{for all }k=0,\dots,2m.
\end{equation}
We have (cf.\ \cite{hhlq13})
\[
c_0(\mathcal T)=\operatorname{det}(\mathcal T)=\operatorname{det}(M),\ c_{2m-1}(\mathcal T)=-m(a_0+b_m),\ \text{and }c_{2m}(\mathcal T)=1.
\]
It is easy to see that each $c_i\in\mathbb C[\mathcal T]$ is a homogeneous polynomial of degree $2m-i$ for $i=0,\dots,2m$, and
$c_{2m-1}(\mathcal T),\dots,c_0(\mathcal T)$ are the components of the coefficient map $\mathbf c$ (cf.\ \eqref{coefficient-map}).
Denote by $H\in\mathbb C^{2m\times(2m+2)}$ the Jacobian matrix of the coefficient map $\mathbf c:=(c_{2m-1},\dots,c_0)^\mathsf{T}:\mathbb C^{2m+2}\rightarrow\mathbb C^{2m}$ with respect to variables $a_0,\dots,a_m,b_0,\dots,b_m$:
\[
h_{ij}:=\begin{cases}\frac{\partial c_{2m-i}}{\partial a_{j-1}}& \text{if }j\leq m+1,\\ \frac{\partial c_{2m-i}}{\partial b_{j-m-2}}& \text{otherwise}.\end{cases}
\]
Denote the submatrix $H_{:,1:2m}$ of $H$ by $K$.
Here we use the Matlab notation for submatrices: $A_{a:b,c:d}$ means the submatrix of $A\in\mathbb C^{p\times q}$ formed by the row index set $\{a,a+1,\dots,b\}$ and the column index set $\{c,c+1,\dots,d\}$, $A_{:,c:d}$ means the corresponding row index set being the entire $\{1,\dots,p\}$, etc. So,
$K$ is a $2m\times 2m$ matrix with entries in $\mathbb C[a_0,\dots,a_m,b_0,\dots,b_m]$. Moreover, every term in each entry of the $i$th row of $K$ has degree $i-1$ in $a_0,\dots,a_m,b_0,\dots,b_m$.
In order to show that the map $\phi$ is dominant for $\mathbb{TS}(\mathbb C^2,m+1)$, which is the same as the map $\mathbf c$ being dominant (cf.\ Proposition~\ref{prop:tensor}),
our goal is to show that the matrix $H$ is of full rank for some tensor $\mathcal T$ (cf.\ Proposition~\ref{prop:differential}), which will be a consequence of the nonsingularity of
$K$ at that tensor point. Actually, we will show a much stronger result: the determinant of the matrix $K$ is a nonzero polynomial
in $\mathbb C[\mathcal T]$, which implies generic nonsingularity. To achieve this, we only need to show that the determinant $\operatorname{det}(K)$ contains a term $\alpha\, a_1^{\frac{m(m-1)}{2}}a_m^{m-1}b_{m-1}^{\frac{m(m+1)}{2}+(m-1)^2}$ for some nonzero scalar $\alpha$.
To illustrate the proof of the general case we first work out the following example.
\begin{example}
Let $m=2$. Then we have the Sylvester matrix
\[
M=\left[
\begin{matrix}
a_0 & a_1 & a_2 & 0\\
0 & a_0 & a_1 & a_2\\
b_0 & b_1 & b_2 & 0\\
0 & b_0 & b_1 & b_2
\end{matrix}
\right]
\]
and hence the coefficients of the characteristic polynomial of $M$ (the same as that for the tensor) are
\begin{align*}
c_4(M)&=1,\\
c_3(M)&=-2(a_0+b_2),\\
c_2(M)&=\operatorname{det}\left[
\begin{matrix}
a_0 & a_1\\
b_1 & b_2
\end{matrix}
\right]+ \textit{ principal $2\times 2$ minors which do not involve $b_1a_i$'s}, \\
c_1(M)&=-\operatorname{det} \left[
\begin{matrix}
a_0 & a_1 & a_2\\
b_1 & b_2 & 0\\
b_0 & b_1 & b_2
\end{matrix}
\right] - \textit{ principal $3\times 3$ minors which do not involve $b_1^2a_i$'s},\\
c_0(M)& = \operatorname{det}(M), \textit{in which only one term can involve $b_0b_1$, that is, $a_1a_2b_0b_1$.}
\end{align*}
It is easy to compute $H$
\[
H=
\begin{bmatrix}
-2 & 0 & 0 & 0 & 0 & 2\\
* & -b_1 & * & * & \% & \% \\
\& &\& & -b_1^2+ \& & \& & \% & \% \\
\$ & \# &\# & -a_1a_2b_1+\# & \% & \%
\end{bmatrix},
\]
where $*$'s contain terms without the variable $b_1$,
$ \&$'s contain terms with the degrees of $b_1$ being strictly smaller than $2$,
and $\#$'s contain terms that either do not involve the variable $b_1$ or involve a variable other than $a_1$, $a_2$, and $b_1$.
By definition the submatrix $K$ of $H$ is
\[
K=
\begin{bmatrix}
-2 & 0 & 0 & 0 \\
* & -b_1 & * & * \\
\& &\& & -b_1^2+ \& & \& \\
\$ & \# &\# & -a_1a_2b_1+\#
\end{bmatrix}.
\]
Thus, the only way to obtain $a_1a_2b_1^4$ in $\operatorname{det}(K)$ is by taking the diagonal entries of $K$. It is obvious that the coefficient of $a_1a_2b_1^4$ is nonzero. Note that the degree $4$ for the variable $b_1$ is the maximal possible.
\end{example}
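The example can be verified symbolically; the following SymPy sketch (only a verification aid, not part of the argument) computes $c_3,\dots,c_0$ for $m=2$, forms $K$, and extracts the coefficient of $a_1a_2b_1^4$ in $\operatorname{det}(K)$, which should be nonzero.
\begin{verbatim}
import sympy as sp

a0, a1, a2, b0, b1, b2, lam = sp.symbols('a0 a1 a2 b0 b1 b2 lam')

M = sp.Matrix([[a0, a1, a2, 0],
               [0,  a0, a1, a2],
               [b0, b1, b2, 0],
               [0,  b0, b1, b2]])

# characteristic polynomial det(M - lam*I) and its coefficients c_3, ..., c_0
chi = sp.expand((M - lam * sp.eye(4)).det())
c = [chi.coeff(lam, k) for k in (3, 2, 1, 0)]

# Jacobian H with respect to (a0, a1, a2, b0, b1, b2); K is its first 4 columns
H = sp.Matrix([[sp.diff(ci, v) for v in (a0, a1, a2, b0, b1, b2)] for ci in c])
K = H[:, :4]

detK = sp.expand(K.det())
# coefficient of the monomial a1*a2*b1**4 in det(K): expected to be nonzero
print(sp.Poly(detK, a0, a1, a2, b0, b1, b2).coeff_monomial(a1 * a2 * b1 ** 4))
\end{verbatim}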
\subsection{General cases}
Let us look at the diagonal elements of the submatrix $K_{1:m+1,1:m+1}$ of $K$.
\begin{lemma}\label{lemma:diagonal}
For each $i=1,\dots,m+1$, there is a nonzero term of the monomial $b_{m-1}^{i-1}$ in the entry $K_{ii}$.
Moreover, $K_{ii}$ is the unique entry in the $i$th row of $K$ containing a term of the monomial $b_{m-1}^{i-1}$.
\end{lemma}
\begin{proof}
Let us visualize the submatrix $M_{m-1:2m,m-1:2m}$ of $M$:
\[
P=\begin{bmatrix}a_0&a_1&a_2&a_3&\dots&a_m\\ b_{m-1}&b_m&0&0&\dots&0\\ &b_{m-1}&b_m& \\ & &\ddots&\ddots&\\
&&&b_{m-1}&b_m&0\\ &&&&b_{m-1}&b_m
\end{bmatrix}.
\]
The case when $i=1$ is trivial. Let $i>1$.
It is easy to check that there is a term
\[
(-1)^{i-1}a_{i-1}b_{m-1}^{i-1}
\]
in the $i\times i$ leading principal minor of $P$ for all $i=2,\dots,m+1$.
It is also easy to see that any other choice of $i\times i$ principal minor of $M$ cannot have a term whose monomial is $a_{i-1}b_{m-1}^{i-1}$ for $i>1$, since only $i\times i$ principal submatrices of $P$ can contain $i-1$ rows for the variable $b_{m-1}$ and one row for $a$'s, and only the minor we have seen can result in a nonzero term of the monomial $a_{i-1}b_{m-1}^{i-1}$.
Hence, it follows from the formulae for the coefficients and the definition of the Jacobian matrix that a term of the monomial $b_{m-1}^{i-1}$ occurs in the entry $K_{ii}$ for all $i=1,\dots,m+1$.
Next, we show the uniqueness. By the homogeneity of the polynomials in each entry, the case $i=1$ is trivial; indeed, it follows from \cite{hhlq13} that $c_{2m-1}(\mathcal T)=-m(a_0+b_m)$, which implies $K_{1j}=0$ for $j=2,\dots,2m$.
Let us fix $i>1$. First, each entry $K_{ij}$ cannot have a nonzero term of the monomial $b_{m-1}^{i-1}$ for $j>m+1$. Suppose on the contrary that it has, then $c_{2m-i}$ contains a nonzero term of the monomial $b_{j-m-2}b_{m-1}^{i-1}$. It follows from the structure of the matrix $M$ that this term comes from an $i\times i$ principal minor of the submatrix $M_{m+1:2m,m+1:2m}$:
\[
M_1:=\begin{bmatrix}b_m&0&0&\dots&0\\ b_{m-1}&b_m&0&\dots&0\\ b_{m-2}&b_{m-1}&b_m&\dots&0\\
\dots&\dots&\dots&\dots&\dots\\
b_1&b_2&b_3&\dots&b_m
\end{bmatrix}.
\]
However, this cannot happen, since the monomial of every term in any principal minor of this matrix contains the variable $b_m$.
Second, we show that each entry $K_{ij}$ cannot have a nonzero term of the monomial $b_{m-1}^{i-1}$ for $j\neq i$ within $j\in\{1,\dots,m+1\}$.
Again, suppose on the contrary that it does; then $c_{2m-i}$ contains a nonzero term of the monomial $a_{j-1}b_{m-1}^{i-1}$. This term comes from a principal minor of $M$; denote the corresponding principal submatrix by $T\in\mathbb C^{i\times i}$. Since we need $i-1$ copies of $b_{m-1}$, the trailing $(i-1)\times (i-1)$ principal submatrix of $T$ must be an $(i-1)\times (i-1)$ principal submatrix of $M_1$.
Note that each principal submatrix of $M_1$ is lower triangular with last diagonal entry $b_m$. Therefore, in order to get the monomial $a_{j-1}b_{m-1}^{i-1}$, the $(1,i)$th entry of $T$ must be $a_{j-1}$, and by Laplace's determinant formula (cf.\ \cite{hj85}) the $s$th row of $T$ must contribute $b_{m-1}$ for each $s=2,\dots,i$. However, this can only happen when $j=i$ and $T$ is a leading principal submatrix of $P$. This yields a contradiction.
In conclusion, $K_{ii}$ is the unique entry in the $i$th row of $K$ possessing a nonzero term of the monomial $b_{m-1}^{i-1}$.
\end{proof}
Let us look at the antidiagonal elements of the submatrix $K_{m+2:2m,m+2:2m}$ of $K$.
\begin{lemma}\label{lemma:antidiagonal}
For each $i=m+2,\dots,2m$, there is a nonzero term of the monomial $a_1^{i-m-1}b_{m-1}^{m-1}a_m$ in the $(i,3m-i+2)$th entry of $K$.
Moreover, among the entries $(i,j)$ of $K$ with $j=2,\dots,2m$, it is the unique one containing a term that involves only $a_1,b_{m-1},a_m$ and attains the maximal degree $m-1$ in $b_{m-1}$.
\end{lemma}
\begin{proof}
Obviously, we cannot get a nonzero term with degree $m$ for the variable $b_{m-1}$ in the $(i,j)$th entry of $K$ for all
$j=m+2,\dots,2m$ and $i=m+2,\dots,2m$, since only $m$ rows of $M$ contain $b$'s.
It can be seen from the submatrix $M_{2m-i+1:2m,2m-i+1:2m}$ of $M$ that
a nonzero term of the monomial
\[
b_{2m-i}a_1^{i-m-1}b_{m-1}^{m-1}a_m
\]
occurs in its determinant, which is an $i\times i$ principal minor. We claim that the determinant of $M_{2m-i+1:2m,2m-i+1:2m}$ is the unique $i\times i$ minor in the definition of $c_{2m-i}$ (cf.\ \eqref{minor}) which has a nonzero term of the monomial $b_{2m-i}a_1^{i-m-1}b_{m-1}^{m-1}a_m$.
To obtain $b_{2m-i}b_{m-1}^{m-1}$, for any $i\times i$ principal submatrix $T$ of $M$, the matrix $P$ (defined in the proof of Lemma~\ref{lemma:diagonal}) should be its principal submatrix as well, since only $m$ rows of $M$ contain $b$'s and we should take them all.
First, the last column of the matrix $T$ contains only $a_m$, $b_m$ and $0$'s, from which $a_m$ should be chosen, since $2m-i<m$ for all the possible $i=m+2,\dots,2m$.
Second, we cannot choose $b_{2m-i}$ from the lower triangular parts of the submatrix $P$, since otherwise, we can at most get
$b_{m-1}^{m-2}$ according to Laplace's determinant formula.
Third, by the second observation, we can only choose $b_{2m-i}$ from the first $i-m-1$ columns of $T$. Also, since we pick principal submatrices of $M$, the $(1,1)$ entry of $T$ is necessarily $a_0$, and the other entries in the first column are distinct $b_t$'s. Therefore, we must choose $b_{2m-i}$ from the first column. Moreover, it should be the first nonzero entry other than $a_0$ in the first column.
It then follows from the structure of the matrix $M$ that the only possible principal submatrix is the submatrix $M_{2m-i+1:2m,2m-i+1:2m}$.
Therefore, it follows from the formulae for the coefficients and the definition for the Jacobian matrix that a nonzero term of the monomial $a_1^{i-m-1}b_{m-1}^{m-1}a_m$ occurs in the $(i,3m-i+2)$th entry of $K$ for all $i=m+2,\dots,2m$.
With almost the same argument, we can see that there does not exist a nonzero term of a monomial with the maximum degree $m-1$ for $b_{m-1}$ and with only the variables $a_1$, $b_{m-1}$ and $a_m$ in the $(i,j)$th entry of $K$ for all $j\in\{m+2,\dots,2m\}\setminus\{3m-i+2\}$ and $i=m+2,\dots,2m$.
Next,
we show that there does not exist a nonzero term of a monomial
\[
a_ra_1^pb_{m-1}^{m-1}a_m^q
\]
with nonnegative integers $p,q$ satisfying $p+q=i-m$, for any $r=1,\dots,m$, in any $i\times i$ principal minor of $M$ for all $i=m+2,\dots,2m$.
Note that the first column of any $i\times i$ principal submatrix of $M$ is of the form
\[
(a_0,0,\dots,0,b_t,b_{t-1},\dots)^\mathsf{T}
\]
for some $t<m-1$, since $i\geq m+2$. Therefore, each term of the minor must contain either the variable $a_0$ or a variable $b_s$ for some $s<m-1$. Neither case will result in a nonzero term of the monomial involving only $a_1$, $a_r$, $a_m$ and $b_{m-1}$ for some $r=1,\dots,m$. Thus, no term involving only $a_1$, $a_m$ and $b_{m-1}$ exists in the $(i,j)$-entry of $K$ for $i=m+2,\dots,2m$ and $j=2,\dots,m+1$.
Hence, a nonzero term of the monomial $a_1^{i-m-1}b_{m-1}^{m-1}a_m$ appears uniquely in the $(i,3m-i+2)$-entry of $K$ for every $i=m+2,\dots,2m$.
\end{proof}
\begin{lemma}\label{lemma:nonsingular}
The submatrix $K$ of the Jacobian matrix $H$ is nonsingular generically over the tensor space $\mathbb{TS}(\mathbb C^2,m+1)$ for all $m=2,3,\dots$.
\end{lemma}
\begin{proof}
We know that the first row of $K$ is
\[
(-m,0,\dots,0)^\mathsf{T}\in\mathbb C^{2m},
\]
which implies that $\operatorname{det}(K)=-m\operatorname{det}(K_{2:2m,2:2m})$.
We consider terms of $\operatorname{det}(K)$ of monomials with only the variables $a_1$, $b_{m-1}$ and $a_m$.
For $i=2,\dots,2m$, each entry of the $i$th row of the matrix $K$ is a homogeneous polynomial of degree $i-1$ in the variables $a_0,\dots,a_m,b_0,\dots,b_m$, and there are $m$ rows of $M$ containing $b_{m-1}$.
This, together with Lemmas~\ref{lemma:diagonal} and \ref{lemma:antidiagonal}, implies that the maximal possible degree for the variable $b_{m-1}$ in such a term in the determinant of the matrix $K$ is
\[
1+\dots+m+(m-1)(m-1)=\frac{m(m+1)}{2}+(m-1)^2.
\]
It follows from Lemmas~\ref{lemma:diagonal} and \ref{lemma:antidiagonal} again that such a term is unique and there is a unique way to constitute it:
choosing the diagonal entries of the submatrix $K_{1:m+1,1:m+1}$ and then the anti-diagonal entries of the submatrix $K_{m+2:2m,m+2:2m}$. Moreover, by the same lemmas,
the term of the monomial $a_1^{\frac{m(m-1)}{2}}b_{m-1}^{\frac{m(m+1)}{2}+(m-1)^2}a_m^{m-1}$ in the determinant of $K$ has a nonzero coefficient.
Therefore, the determinant of the matrix $K$ is a nonzero polynomial in the polynomial ring $\mathbb C[a_0,\dots,a_m,b_0,\dots,b_m]$. By Hilbert's Nullstellensatz (cf.\ \cite{h77,h92,s02}), we conclude that the submatrix $K$ of the Jacobian matrix is generically nonsingular in the tensor space.
\end{proof}
\begin{proposition}\label{prop:dominant-n2}
For any integer $m\geq 1$, the multiset-valued eigenvalue map $\phi : \mathbb{TS}(\mathbb C^2,m+1)\rightarrow\mathbb C^{2m}/\mathfrak{S}(2m)$ is dominant, i.e.,
for a generic multiset $S\in\mathbb C^{2m}/\mathfrak{S}(2m)$, there exists a tensor $\mathcal T\in\mathbb{TS}(\mathbb C^2,m+1)$ such that
the set of eigenvalues (counting with multiplicities) of $\mathcal T$ is $S$.
\end{proposition}
\begin{proof}
It follows from Propositions~\ref{prop:differential} and \ref{prop:tensor}, and Lemma~\ref{lemma:nonsingular}.
\end{proof}
\subsection{Extensions}\label{sec:ext}
We first make a brief digression on Sylvester matrices.
A Sylvester matrix is a matrix of the form as $M$ (cf.\ \eqref{sylvester-matrix}):
\begin{equation*}
\begin{bmatrix}a_0&\dots&a_p&0&0&\dots\\ &&&\dots\\ 0&\dots &0&a_0&\dots& a_p\\
b_0&\dots&b_q&0&0&\dots\\ &&&\dots\\ 0&\dots &0&b_0&\dots& b_q
\end{bmatrix},
\end{equation*}
while in general there are $q$ rows of $a$'s and $p$ rows of $b$'s for possibly different $p,q$; the matrix is therefore in $\mathbb C^{(p+q)\times (p+q)}$.
Up to permutation, we can assume without loss of generality that $q\geq p$. Then, with almost the same argument as
the preceding analysis, we can obtain the following result on the inverse eigenvalue problem for Sylvester matrices.
\begin{proposition}[Sylvester Matrix]\label{prop:sylvester}
Let $m\geq 2$ be a positive integer. Given a generic multiset $S\in\mathbb C^{m}/\mathfrak{S}(m)$ there exists a Sylvester matrix $A\in\mathbb C^{m\times m}$ such that the set of eigenvalues (counting with multiplicities) of $A$ is $S$.
\end{proposition}
In the following, we will get back to tensors.
Note that we have a decomposition of $\mathbb{TS}(\mathbb C^n,m+1)$ as a $\operatorname{GL}_n(\mathbb{C})$ module (cf.\ \cite{l12}):
\[
\mathbb{TS}(\mathbb C^n,m+1)=\mathbb{C}^n\otimes \operatorname{S}^m(\mathbb{C}^n)= \operatorname{S}^{m+1} \mathbb{C}^n\oplus \operatorname{S}_{m,1}\mathbb{C}^n.
\]
In particular, when $n=2$ we have
\[
\mathbb{TS}(\mathbb C^2,m+1)= \operatorname{S}^{m+1} \mathbb{C}^2\oplus \operatorname{S}_{m,1}\mathbb{C}^2=\operatorname{S}^{m+1} \mathbb{C}^2 \oplus (\wedge^2\mathbb{C}^2\otimes \operatorname{S}^{m-1}\mathbb{C}^2).
\]
Tensors in $\operatorname{S}^{m+1}\mathbb{C}^2$ are just symmetric tensors, which can be represented by $m+2$ parameters.
More precisely, for each symmetric tensor $\mathcal T$, the homogeneous polynomial $\mathbf z^\mathsf{T}\big(\mathcal T\mathbf z^m\big)$ with $\mathbf z=(x,y)^\mathsf{T}$ can be parameterized as $F(x,y)=a_{m+1} x^{m+1}+ \cdots + a_0 y^{m+1}\in\mathbb C[x,y]$ for some coefficients $a_0,\dots,a_{m+1}$.
\begin{lemma}\label{eqn:Sm+1}
The system of eigenvalue equations associated to $\mathcal{T}$ is
\begin{align*}
\frac{\partial F(x,y)}{\partial x} &=\lambda x^m,\\
\frac{\partial F(x,y)}{\partial y} &=\lambda y^m.
\end{align*}
\end{lemma}
We characterize eigenvalues of a nonzero $\mathcal{T}\in \operatorname{S}^{m+1}\mathbb{C}^2$ in the next proposition.
\begin{proposition}\label{prop:eig-Sm+1}
Let $p_1,\dots, p_k$ be the distinct zeros of $F(x,y)$ in $\mathbb{P}^1$, with multiplicities $m_1,\dots, m_k$, respectively, and let $L_{i}$ be the linear form vanishing at $p_i$. The eigenvalues of $\mathcal{T}$ are $0$ with multiplicity $\sum_{i=1}^{k}(m_i-1)$, and $\lambda_j= \frac{\partial F}{\partial x} (\alpha_j,\beta_j)/ \alpha_j^m$, $j=1,\dots, m+k-1$, each with multiplicity one, where $(\alpha_j,\beta_j)$ is a solution of
\[
\frac{y^m \frac{\partial F}{\partial x}- x^m \frac{\partial F}{\partial y}}{\prod_{i=1}^k L_i^{m_i-1}}=0.
\]
\end{proposition}
\begin{proof}
By the equations in Lemma~\ref{eqn:Sm+1} we obtain
\[
\lambda(y^m \frac{\partial F}{\partial x}- x^m \frac{\partial F}{\partial y})=0.
\]
Let $\lambda\ne 0$ be an eigenvalue of $\mathcal{T}$ and $(\alpha,\beta)\ne (0,0)$ be an eigenvector corresponding to $\lambda$. Then either $\frac{\partial F}{\partial x}(\alpha,\beta)\ne 0$ or $\frac{\partial F}{\partial y}(\alpha,\beta)\ne 0$, so $(\alpha,\beta)$ is at most a simple zero of $F$ and the product $\prod_{i=1}^k L_i^{m_i-1}$ does not vanish at $(\alpha,\beta)$; consequently, $(\alpha,\beta)$ is a root of
\[
\frac{y^m \frac{\partial F}{\partial x}- x^m \frac{\partial F}{\partial y}}{\prod_{i=1}^k L_i^{m_i-1}}=0.
\]
Since $\mathcal{T}$ has $2m$ eigenvalues and $\sum_{i=1}^km_i=m+1$, we see that they are either $0$ or of the forms as claimed.
\end{proof}
To conclude this section, we consider eigenvalues of tensors in $\wedge^2\mathbb{C}^2\otimes \operatorname{S}^{m-1}\mathbb{C}^2$.
\begin{lemma}\label{lemma:Sm1equation}
The system of eigenvalue equations associated to a tensor $\mathcal{T}\in \wedge^2\mathbb{C}^2\otimes \operatorname{S}^{m-1}\mathbb{C}^2$ is of the form
\begin{eqnarray*}
y f(x,y)=\lambda x^m,\\
-x f(x,y)=\lambda y^m,
\end{eqnarray*}
where $f(x,y)$ is a homogeneous polynomial of degree $m-1$.
\end{lemma}
\begin{proof}
Let us fix the standard basis $\{\mathbf e_1,\mathbf e_2\}$ for $\mathbb{C}^2$ then an element in $\wedge^2\mathbb{C}^2\otimes \operatorname{S}^{m-1}\mathbb{C}^2$ is
\[
\mathcal{T}=(\mathbf e_1\wedge \mathbf e_2) \otimes f,
\]
where $f\in \operatorname{S}^{m-1}\mathbb{C}^2$. We identify $\operatorname{S}^{m-1}\mathbb{C}^2$ with the space of homogeneous polynomials of degree $m-1$ in two variables with coefficients in the field of complex numbers.
Expand $\mathbf e_1\wedge \mathbf e_2$ and write out the equation system corresponding to $\mathcal{T}$, the claim follows.
\end{proof}
Let $\mathcal{T}\in \wedge^2\mathbb{C}^2\otimes \operatorname{S}^{m-1}\mathbb{C}^2$, then we describe eigenvalues of $\mathcal{T}$ in the next proposition.
\begin{proposition}\label{prop:Sm1}
Eigenvalues of $\mathcal{T}$ are $0$ with multiplicity $m-1$ and $\omega_if(1,\omega_i)$, $i=0,\dots, m$, where $\omega_0,\dots,\omega_m$ are the $(m+1)$-th roots of $-1$.
\end{proposition}
\begin{proof}
If $\lambda\ne 0$ is an eigenvalue of $\mathcal{T}$ then
\begin{align*}
yf(x,y)=\lambda x^m \textit{ and }
-xf(x,y)=\lambda y^m.
\end{align*}
Thus
\[
\lambda (x^{m+1}+y^{m+1}) =0.
\]
Since $\lambda\ne 0$ we can derive
\[
y=\omega_i x, i=0,\dots, m.
\]
It is easy to obtain
\[
\lambda=y f(x,y)/x^m = \omega_i f(1,\omega_i).
\]
Lastly, every homogeneous polynomial $f(x,y)$ of positive degree has a nontrivial zero in $\mathbb C^2$, so $0$ is also an eigenvalue of $\mathcal T$. Since the total number of eigenvalues is $2m$, we see that $0$ gets the remaining multiplicity $m-1$.
\end{proof}
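This description can be cross-checked numerically against the Sylvester-matrix computation of Section~\ref{sec:two}; the following NumPy sketch (only illustrative; the concrete choice $f(x,y)=x+2y$ and all names are ad hoc) compares the eigenvalues of the matrix $M$ built from $y f$ and $-x f$ with the values $0$ and $\omega_i f(1,\omega_i)$ predicted by Proposition~\ref{prop:Sm1}.
\begin{verbatim}
import numpy as np

m = 2
f = np.array([1.0, 2.0])               # f(x, y) = x + 2y, coefficients f_0, f_1
a = np.concatenate(([0.0], f))         #  y*f(x, y) has a-coefficients (0, f_0, f_1)
b = np.concatenate((-f, [0.0]))        # -x*f(x, y) has b-coefficients (-f_0, -f_1, 0)
M = np.zeros((2 * m, 2 * m))
for i in range(m):                     # Sylvester-type matrix of Section 3
    M[i, i:i + m + 1] = a
    M[m + i, i:i + m + 1] = b
omega = np.exp(1j * np.pi * (2 * np.arange(m + 1) + 1) / (m + 1))  # roots of x^3 = -1
predicted = np.append(omega * (f[0] + f[1] * omega), 0.0)
print(np.sort_complex(np.linalg.eigvals(M)))   # should match, up to round-off
print(np.sort_complex(predicted))
\end{verbatim}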
\begin{remark}
We notice that Proposition \ref{prop:Sm1} gives an algorithm to reconstruct a tensor $\mathcal{T}\in \wedge^2\mathbb{C}^2\otimes\operatorname{S}^{m-1}\mathbb{C}^2$ from given $m+1$ numbers $\lambda_0,\dots, \lambda_{m}$ such that eigenvalues of $\mathcal{T}$ are $\lambda_0,\dots, \lambda_{m}$ and zero by solving a linear system. Namely, we consider the following linear system
\[
\left(
\begin{matrix}
\omega_0 & \omega_0^2 & \cdots & \omega_0^{m-1} & \omega_0^m \\
\omega_1 & \omega_1^2 & \cdots & \omega_1^{m-1} & \omega_1^m \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\omega_m & \omega_m^2 & \cdots & \omega_m^{m-1} & \omega_m^m
\end{matrix}
\right)\left(
\begin{matrix}
a_0 \\
a_1 \\
\vdots \\
a_{m-1}
\end{matrix}
\right)=\left(
\begin{matrix}
\lambda_0 \\
\lambda_1 \\
\vdots \\
\lambda_{m}
\end{matrix}
\right),
\]
where $\omega_0,\dots,\omega_m$ are the $(m+1)$-th roots of $-1$.
\begin{enumerate}
\item If this overdetermined linear system has no solution, then no tensor $\mathcal{T}\in \wedge^2\mathbb{C}^2\otimes\operatorname{S}^{m-1}\mathbb{C}^2$ can have $\{\lambda_0,\dots, \lambda_{m},0,\dots, 0\}$ as its multiset of eigenvalues.
\item If this overdetermined linear system has a solution, then the solution gives a homogeneous polynomial $f(x,y)=\sum_{i=0}^{m-1}a_ix^{m-1-i}y^i$, which gives the desired tensor $\mathcal{T}$ (cf.\ Lemma~\ref{lemma:Sm1equation}).
\end{enumerate}
\end{remark}
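A numerical version of this reconstruction (only a sketch; the function name, tolerance, and the least-squares consistency test are ad hoc choices) solves the overdetermined system and checks whether it is consistent.
\begin{verbatim}
import numpy as np

def reconstruct_wedge_tensor(lams, tol=1e-9):
    """Given lam_0, ..., lam_m, look for f = sum_j a_j x^(m-1-j) y^j with
    nonzero eigenvalues omega_i * f(1, omega_i) = lam_i, where omega_0, ...,
    omega_m are the (m+1)-th roots of -1 (cf. the remark above)."""
    lams = np.asarray(lams, dtype=complex)
    m = len(lams) - 1
    omega = np.exp(1j * np.pi * (2 * np.arange(m + 1) + 1) / (m + 1))
    V = np.column_stack([omega ** (j + 1) for j in range(m)])   # (m+1) x m system
    a, *_ = np.linalg.lstsq(V, lams, rcond=None)
    if np.linalg.norm(V @ a - lams) > tol:
        return None        # inconsistent system: no such tensor exists
    return a               # coefficients a_0, ..., a_{m-1} of f

# example: eigenvalues coming from f(x, y) = 2x + y (m = 2) are recovered exactly
omega = np.exp(1j * np.pi * (2 * np.arange(3) + 1) / 3)
print(reconstruct_wedge_tensor(omega * (2 + omega)))   # approx. [2, 1]
\end{verbatim}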
\section{The Exceptional Cases}\label{sec:general}
In this section, we show that the eigenvalue map $\phi : \mathbb{TS}(\mathbb C^n,m+1)\rightarrow\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$ is dominant for the \textit{exceptional cases}
\[
(n,m)=(3,2),\ (4,2),\ (3,3).
\]
We use Propositions~\ref{prop:differential} and \ref{prop:tensor} to prove the results. The basic idea is the same as in Section~\ref{sec:two}: finding a point in $\mathbb{TS}(\mathbb C^n,m+1)$ such that the differential of the coefficient map $\mathbf c :\mathbb{TS}(\mathbb C^n,m+1)\rightarrow\mathbb C^{nm^{n-1}}$ at this point has the maximal rank $nm^{n-1}$. The difference is that, instead of proving a generic property of the Jacobian matrix, we find a concrete point at which the Jacobian matrix has full rank.
\subsection{Macaulay's formulae of characteristic polynomials}\label{sec:mac}
We refer to \cite{clo98,s02,gkz94,h77} for the theory and computation of resultants and hyperdeterminants. The determinant of a tensor
is actually the resultant of a specially constructed system of homogeneous polynomials of the same degree \cite{hhlq13}.
Let
\[
d=nm-n+1,
\]
and
\[
S=\{x_1^d,x_1^{d-1}x_2,\dots,x_n^d\}
\]
be the set of monomials in $x_1,\dots,x_n$ of degree $d$ in lexicographic order. A monomial of degree $d$ is written as $\mathbf x^{\alpha}=x_1^{\alpha_1}\dots x_n^{\alpha_n}$ with $\alpha\in\mathbb N^n$ and $\alpha_1+\dots+\alpha_n=d$.
The set $S$ divides into $n$ subsets as follows:
\[
S_i:=\{\mathbf x^\alpha \in S : \alpha_i\geq m,\ \alpha_j<m\ \text{for all }j=1,\dots,i-1\},\ \text{for all }i=1,\dots,n.
\]
It is easy to see that $\{S_1,\dots,S_n\}$ are mutually disjoint and $\cup_{i=1}^nS_i=S$. Note that the cardinality of $S$ is
\[
w=|S|={d+n-1\choose d}.
\]
Let $\mathcal T\in\mathbb{TS}(\mathbb C^n,m+1)$. We write
\[
f_i(\mathbf x):=(\mathcal T\mathbf x^m-\lambda\mathbf x^{[m]})_i
\]
as the $i$th defining equation for the eigenvalue problem for $i=1,\dots,n$.
For the $n$ homogeneous polynomials $f_1(\mathbf x),\dots,f_n(\mathbf x)$ in $n$ variables $\mathbf x=(x_1,\dots,x_n)$, parameterized by $\mathcal T$ and $\lambda$, we can formulate a system of $w$ homogeneous polynomials
\begin{equation}\label{mac:matrix}
\mathbf x^{\alpha-m\mathbf e_i}\cdot f_i(\mathbf x),\ \text{for all }\mathbf x^\alpha\in S_i,\ \text{for all }i=1,\dots,n,
\end{equation}
where $\mathbf e_i\in\mathbb R^n$ is the $i$th standard basis vector. This system of polynomials is naturally indexed by monomials $\mathbf x^\alpha\in S$.
With respect to the basis $S$, we can represent the system \eqref{mac:matrix} as a matrix $R\in\mathbb C[\mathcal T,\lambda]^{w\times w}$.
A monomial $\mathbf x^\alpha$ is called \emph{reduced} if there exists exactly one $i\in\{1,\dots,n\}$ such that $\alpha_i\geq m$. The submatrix of $R$ obtained by deleting all rows and columns of reduced monomials is denoted by $R'$. Note that the entries of both $R$ and $R'$ are
linear forms of the variables $t_{ii_1\dots i_m}$ and $\lambda$.
It follows from Macaulay's formula for the resultant (cf.\ \cite{m02}) that the characteristic polynomial of $\mathcal T$ is
\begin{equation}\label{mac:characteristic}
\operatorname{det}(\mathcal T-\lambda\mathcal I)=\pm \frac{\operatorname{det}(R)}{\operatorname{det}(R')}.
\end{equation}
With the characteristic polynomial \eqref{mac:characteristic}, we can compute the coefficient map $\mathbf c : \mathbb{TS}(\mathbb C^n,m+1)\rightarrow\mathbb C^{nm^{n-1}}$ and its Jacobian matrix $H$. Note that we may restrict the map $\mathbf c$ to a subspace $V\subseteq\mathbb{TS}(\mathbb C^n,m+1)$, as long as the dimension of $V$ is larger than $nm^{n-1}$ (cf.\ Proposition~\ref{prop:necessary-general}), to reduce the computational cost.
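For the reader who wishes to experiment, the construction of this subsection can be scripted. The following SymPy sketch (an illustrative addition, not the code used for the computations in this paper) builds the matrix $R$ and the submatrix $R'$ for a system $f_1,\dots,f_n$ of homogeneous degree-$m$ polynomials that already contain the terms $-\lambda x_i^m$, and returns $\det(R)/\det(R')$; up to sign this should reproduce the characteristic polynomial for the small formats considered below (for fully symbolic entries the determinants become expensive, so numeric entries are advisable).
\begin{verbatim}
import sympy as sp

def exponents(n, d):
    """All exponent vectors alpha in N^n with |alpha| = d."""
    if n == 1:
        return [(d,)]
    return [(k,) + rest for k in range(d, -1, -1) for rest in exponents(n - 1, d - k)]

def macaulay_char_poly(f, x, m):
    """Macaulay's formula det(R)/det(R') for f_1,...,f_n homogeneous of
    degree m in x = (x_1,...,x_n); the f_i must already include -lambda*x_i**m."""
    n = len(x)
    d = n * (m - 1) + 1
    S = exponents(n, d)
    mono = lambda a: sp.Mul(*[xi ** ai for xi, ai in zip(x, a)])
    first = lambda a: next(i for i, ai in enumerate(a) if ai >= m)   # alpha in S_i
    rows = []
    for a in S:                     # row: coefficients of x^(alpha - m e_i) * f_i
        i = first(a)
        shift = list(a); shift[i] -= m
        coeffs = sp.Poly(sp.expand(mono(shift) * f[i]), *x).as_dict()
        rows.append([coeffs.get(b, sp.Integer(0)) for b in S])
    R = sp.Matrix(rows)
    keep = [k for k, a in enumerate(S) if sum(ai >= m for ai in a) != 1]  # non-reduced
    Rp = R.extract(keep, keep)
    det_Rp = Rp.det() if Rp.rows else sp.Integer(1)
    return sp.cancel(R.det() / det_Rp)

# example: generic third-order 2-dimensional tensor, i.e. (n, m) = (2, 2)
lam = sp.Symbol('lambda')
x1, x2 = sp.symbols('x1 x2')
a = sp.symbols('a0:6')
f1 = a[0]*x1**2 + a[1]*x1*x2 + a[2]*x2**2 - lam*x1**2
f2 = a[3]*x1**2 + a[4]*x1*x2 + a[5]*x2**2 - lam*x2**2
print(sp.Poly(macaulay_char_poly([f1, f2], [x1, x2], 2), lam).degree())   # 4
\end{verbatim}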
\subsection{Third order three dimensional tensors}\label{sec:32}
In this section, we present the detailed computation for third order three dimensional tensors, i.e., $(n,m)=(3,2)$. The details serve as an example illustrating the method of Section~\ref{sec:mac}.
The computation was mainly carried out in Macaulay2 \cite{gs}, together with Matlab.
For any tensor $\mathcal T\in\mathbb {TS}(\mathbb C^3,3)$, its system of eigenvalue equations is
\[
\begin{cases}\sum_{j,k=1}^3t_{1jk}x_jx_k=\lambda x_1^2,\\
\sum_{j,k=1}^3t_{2jk}x_jx_k=\lambda x_2^2,\\
\sum_{j,k=1}^3t_{3jk}x_jx_k=\lambda x_3^2,
\end{cases}
\]
which can be equivalently parameterized as
\[
\begin{cases}f_1(A,\lambda,\mathbf x):=a_{11}x_1^2+a_{12}x_1x_2+a_{13}x_2^2+a_{14}x_1x_3+a_{15}x_2x_3+a_{16}x_3^2-\lambda x_1^2=0,\\
f_2(A,\lambda,\mathbf x):=a_{21}x_1^2+a_{22}x_1x_2+a_{23}x_2^2+a_{24}x_1x_3+a_{25}x_2x_3+a_{26}x_3^2-\lambda x_2^2=0,\\
f_3(A,\lambda,\mathbf x):=a_{31}x_1^2+a_{32}x_1x_2+a_{33}x_2^2+a_{34}x_1x_3+a_{35}x_2x_3+a_{36}x_3^2-\lambda x_3^2=0.
\end{cases}
\]
Let
\[
S=\{x_1^4,x_1^3x_2,\dots,x_2^4,\dots,x_3^4\}
\]
be the set of all monomials of $x_1, x_2, x_3$ with total degree $4$ in lexicographic order, and
\[
T_1=\{x_1^2, x_1x_2, x_1x_3, x_2^2, x_2x_3, x_3^2\},\
T_2=\{x_1x_2, x_1x_3, x_2^2, x_2x_3, x_3^2\},
\]
and
\[
T_3=\{x_1x_2, x_1x_3,x_2x_3,x_3^2\}.
\]
It follows that the cardinality of $S$ is
\[
|S|={3+4-1\choose 4}=15.
\]
We generate a system of $15$ polynomial equations via
\[
f_ig^i_j = 0,\ \text{for all }g^i_j\in T_i,\ \text{for all }i=1,2,3.
\]
Regarding each $f_ig^i_j$ as an element of $\mathbb C[A,\lambda][\mathbf x]$, we obtain a square matrix $M\in\big(\mathbb C[A,\lambda]\big)^{15\times 15}$ as the coefficient matrix of the polynomial equations
\[
f_1g^1_1=0,\dots,f_1g^1_6=0, f_2g^2_1=0,\dots,f_2g^2_5=0, f_3g^3_1=0,\dots,f_3g^3_4=0
\]
in the canonical basis $S$. It follows from Section~\ref{sec:mac} that the characteristic polynomial of $\mathcal T$ is
\[
\operatorname{det}(\mathcal T-\lambda\mathcal I)=\pm \frac{\operatorname{det}(M)}{(a_{11}-\lambda)^2(a_{23}-\lambda)}.
\]
The matrix $M$ is
\[
\small
\begin{bmatrix}
a_{11}& a_{12}& a_{13}& a_{14}& a_{15}& a_{16}& 0& 0& 0& 0& 0& 0& 0& 0& 0\\
0& a_{11} & a_{12}& 0& a_{14}& 0& a_{13}& a_{15}& 0& 0& 0& a_{16}& 0& 0& 0\\
0& 0& a_{11} & 0& 0& 0& a_{12}& a_{14}& a_{13}& a_{15}& a_{16}& 0& 0& 0& 0\\
0& 0& 0& a_{11} & a_{12}& a_{14}& 0& a_{13}& 0& 0& 0& a_{15}& a_{16}& 0& 0\\
0& 0& 0& 0& a_{11} & 0& 0& a_{12}& 0& a_{13}& a_{15}& a_{14}& 0& a_{16}& 0\\
0& 0& 0& 0& 0& a_{11} & 0& 0& 0& 0& a_{13}& a_{12}& a_{14}& a_{15}& a_{16}\\
0& a_{21}& a_{22}& 0& a_{24}& 0& a_{23} & a_{25}& 0& 0& 0& a_{26}& 0& 0& 0\\
0& 0& 0& a_{21}& a_{22}& a_{24}& 0& a_{23} & 0& 0& 0& a_{25}& a_{26}& 0& 0\\
0& 0& a_{21}& 0& 0& 0& a_{22}& a_{24}& a_{23} & a_{25}& a_{26}& 0& 0& 0& 0\\
0& 0& 0& 0& a_{21}& 0& 0& a_{22}& 0& a_{23} & a_{25}& a_{24}& 0& a_{26}& 0\\
0& 0& 0& 0& 0& a_{21}& 0& 0& 0& 0& a_{23} & a_{22}& a_{24}& a_{25}& a_{26}\\
0& a_{31}& a_{32}& 0& a_{34}& 0& a_{33}& a_{35}& 0& 0& 0& a_{36}& 0& 0& 0\\
0& 0& 0& a_{31}& a_{32}& a_{34}& 0& a_{33}& 0& 0& 0& a_{35}& a_{36}& 0& 0\\
0& 0& 0& 0& a_{31}& 0& 0& a_{32}& 0& a_{33}& a_{35}& a_{34}& 0& a_{36}& 0\\
0& 0& 0& 0& 0& a_{31}& 0& 0& 0& 0& a_{33}& a_{32}& a_{34}& a_{35}& a_{36}
\end{bmatrix}-\lambda I,
\]
where $I$ is the identity matrix of appropriate size.
If we restrict to the subspace of tensors with $a_{21}=a_{31}=a_{13}=a_{33}=0$, then we have
\[
\operatorname{det}(\mathcal T-\lambda\mathcal I)=\frac{\operatorname{det}(M)}{(a_{11}-\lambda)^2(a_{23}-\lambda)}=\operatorname{det}(M')
\]
with
\[
M'=\small\begin{bmatrix}
a_{11}& 0& 0& 0& a_{12}& a_{14}& a_{15}& a_{16}& 0& 0& 0& 0\\
0& a_{11}& a_{12}& a_{14}& 0&0& 0& 0& a_{15}& a_{16}& 0& 0\\
0& 0& a_{11}& 0& 0& a_{12}& 0& a_{15}& a_{14}& 0& a_{16}& 0\\
0& 0& 0& a_{11}& 0& 0& 0& 0& a_{12}& a_{14}& a_{15}& a_{16}\\
a_{22}& 0& a_{24}& 0& a_{23}& a_{25}& 0& 0& a_{26}& 0& 0& 0\\
0& 0& a_{22}& a_{24}& 0& a_{23}& 0& 0& a_{25}& a_{26}& 0& 0\\
0& 0&0& 0& 0& a_{22}& a_{23}& a_{25}& a_{24}& 0& a_{26}& 0\\
0& 0& 0& 0& 0& 0& 0& a_{23}& a_{22}& a_{24}& a_{25}& a_{26}\\
a_{32}& 0& a_{34}& 0&0& a_{35}& 0& 0& a_{36}& 0& 0& 0\\
0& 0& a_{32}& a_{34}& 0& 0& 0& 0& a_{35}& a_{36}& 0& 0\\
0& 0& 0& 0& 0& a_{32}& 0& a_{35}& a_{34}& 0& a_{36}& 0\\
0& 0& 0& 0& 0& 0& 0& 0& a_{32}& a_{34}& a_{35}& a_{36}
\end{bmatrix}-\lambda I.
\]
Note that we have restricted the coefficient map $\mathbf c$ to a linear subspace $V$ of dimension $14$. We use Macaulay2 to compute the $12\times 14$ Jacobian matrix. The evaluation of this matrix at the point
\begin{align*}
a_{11}&=1, a_{12}=2, a_{14}=3, a_{15}=4, a_{16}=5,\\
a_{22}&=6, a_{23}=7, a_{24}=8,a_{25}=9,a_{26}=10,\\
a_{32}&=11,a_{34}=12,a_{35}=13,a_{36}=14
\end{align*}
is
\begin{multline*}
\tiny\left[\begin{matrix} -4 & 0 & 0 & 0 & 0 & 0 & -4 \\ 348 & -12 & -24 & 0 & 0 & -4 &
324 \\
-11948 & 528 & 1575 & -336 & -420 & 123 &
-10190 \\ 229449 & -6573 & -42450 & 9606 & 14460 & -435 & 178549 \\
-2841839 & 8007 & 669288 & -123924 & -254385 & -40559 &
-1983021\\
24015886 & 693225 & -6820251 & 716979 & 2947131 & 838271 &
14685692 \\
-141005226 & -8897502 & 46520475 & -55500 & -23636976 & -7517499 &
-74200394 \\
577067743 & 52779339 & -214721160 & -19191762 & 128734014 & 37178105 &
258147039 \\
-1615274021& -168212115& 650003046& 85305210 & -443297901 & -110199791&
-603940123 \\
2874026450 & 286931673 & -1165235823& -145117011& 852500403 & 194375103 &
876534196 \\
-2794768018& -259796358& 1046384496 & 111968790 & -778096974& -179135234&
-672256468 \\
1066887388& 96499788 & -356759172 & -33512052 & 261090648 & 64501920 &
202844400
\end{matrix}\right.\\
\tiny\left. \begin{matrix} 0 & 0 & 0 & 0 & 0&
0 & -4 \\ 0 & -26 & 0 & 0 & -6 &
-18 & 296 \\ -158 & 1685 & -382 & -223& 156 &
652 & -8362 \\
4943 & -43158 & 16014 & 5650 & -1511 & -6084 & 129887 \\
-54113& 607211 & -282327 & -46378 & -27079 &
-23151 & -1258172 \\ 150466 & -5348868 & 2845660 & -150997 & 963371 &1047467 & 7850991 \\
170837& 31218785 & -18669471& 5370313 & -11890233 & -11634444 & -30119950 \\
-16953189& -122285430& 82279634 & -38993472& 78057193 &
68998306& 58177003\\
64320415& 313612725 & -232274827 & 133470656 & -288207851&
-225232919& -1425840 \\
-125125090& -487990298& 384765126 & -232073969 & 569148965 &
386696721 & -187322475 \\
121312760 & 392623914& -320537057& 201649792 & -531624753&
-319797178& 252532328 \\
-45364410 & -122396540 & 101857630 & -69231372& 183581748 & 99950648 & -98555702
\end{matrix}\right]
\end{multline*}
Using either Matlab or Macaulay2, we can check that the above matrix has full rank $12$.
Therefore, we arrive at the next proposition.
\begin{proposition}\label{prop:dominant-33}
The eigenvalue map $\phi : \mathbb{TS}(\mathbb C^3,3)\rightarrow\mathbb C^{12}/\mathfrak{S}(12)$ is dominant.
\end{proposition}
\subsection{Fourth order three dimensional and third order four dimensional tensors}\label{sec:42-33}
In this section, we show that the eigenvalue maps $\phi : \mathbb{TS}(\mathbb C^3,4)\rightarrow\mathbb C^{27}/\mathfrak{S}(27)$ and $\phi : \mathbb{TS}(\mathbb C^4,3)\rightarrow\mathbb C^{32}/\mathfrak{S}(32)$ are both dominant. Note that, symbolically, the determinants of tensors of both formats $\mathbb{TS}(\mathbb C^3,4)$ and $\mathbb{TS}(\mathbb C^4,3)$ most likely have millions of terms, in view of the relationship between determinants and hyperdeterminants (cf.\ \cite{o12}) and the approximately 3 million terms of the hyperdeterminant of tensors in $\mathbb T(\mathbb C^2,4)$ (cf.\ \cite{hsyy08}). It would therefore be impractical, if not impossible, to compute the characteristic polynomial $\operatorname{det}(\mathcal T-\lambda \mathcal I)$ of a symbolic tensor in these two cases with Macaulay2.
For a map $\mathbf f : V\rightarrow W$ between two vector spaces $V$ and $W$, whenever the differential $d_{\mathbf x}\mathbf f$
exists at a point $\mathbf x$, the directional derivative of $\mathbf f$ in the direction $\mathbf y\in V$ is
\begin{equation}\label{directional}
\mathbf f^\mathsf{\prime}(\mathbf x;\mathbf y)=(d_{\mathbf x}\mathbf f) \mathbf y.
\end{equation}
Let $\operatorname{dim}(V)\geq \operatorname{dim}(W)$.
As a linear map from $V$ to $W$, $d_{\mathbf x}\mathbf f$ is of maximal rank $\operatorname{dim}(W)$ if we can find a set of directions $\{\mathbf y_1,\dots,\mathbf y_k\}\subset V$ with $k\geq \operatorname{dim}(W)$ such that
\[
\operatorname{rank}\big([(d_{\mathbf x}\mathbf f) \mathbf y_1,\ \dots,\ (d_{\mathbf x}\mathbf f) \mathbf y_k]\big)=\operatorname{dim}(W).
\]
Note that here our map is the coefficient map $\mathbf c$ (cf.\ \eqref{coefficient-map}). $V$ is either $\mathbb{TS}(\mathbb C^3,4)$ or $ \mathbb{TS}(\mathbb C^4,3)$, and $W$ is respectively either $\mathbb C^{27}$ or $\mathbb C^{32}$. In both cases, we choose $k=\operatorname{dim}(V)$.
We use formula \eqref{directional} to compute $(d_{\mathbf x}\mathbf f) \mathbf y_i$ for each $i=1,\dots,k$.
We first choose a point $\mathcal T\in V$ and a set of directions $\{\mathcal T_1,\dots,\mathcal T_k\}$, which we take to be the standard basis of the space $V$.
Then, we compute the characteristic polynomial of $\mathcal T + t\mathcal T_i$ with parameter $t$
\[
\operatorname{det}(\mathcal T+t\mathcal T_i-\lambda\mathcal I)
\]
for all $i=1,\dots,k$. Note that we have only two symbolic variables $\lambda$ and $t$ now.
Write $\operatorname{det}(\mathcal T+t\mathcal T_i-\lambda\mathcal I)$ as
\[
\operatorname{det}(\mathcal T+t\mathcal T_i-\lambda\mathcal I)=\sum_{s=0}^Nc_s(t)\lambda^s,
\]
for appropriate $N$, which is either $27$ or $32$. Note that $c_N(t)=\pm 1$.
It follows that
\[
\mathbf c^\mathsf{\prime}(\mathcal T;\mathcal T_i)=(d_{\mathcal T}\mathbf c) \mathcal T_i=(c_{N-1}^\mathsf{\prime}(0),\dots,c_0^\mathsf{\prime}(0))^\mathsf{T}.
\]
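Schematically, this derivative-and-rank computation can be organized as in the following sketch (an illustrative addition; \texttt{char\_poly\_coeffs} is a hypothetical user-supplied routine returning the coefficient vector $(c_{N-1},\dots,c_0)$ of $\operatorname{det}(\mathcal T-\lambda\mathcal I)$, e.g.\ obtained from the Macaulay construction of Section~\ref{sec:mac} with numeric entries). Here the exact derivatives $c_s^\mathsf{\prime}(0)$ are replaced by central finite differences.
\begin{verbatim}
import numpy as np

def directional_jacobian(char_poly_coeffs, T, directions, h=1e-6):
    """Approximate the columns (d_T c) T_i of the differential of the
    coefficient map by central differences along the given directions."""
    cols = []
    for Ti in directions:
        c_plus = np.asarray(char_poly_coeffs(T + h * Ti))
        c_minus = np.asarray(char_poly_coeffs(T - h * Ti))
        cols.append((c_plus - c_minus) / (2.0 * h))
    return np.column_stack(cols)

# The eigenvalue map is dominant at T iff this matrix has full row rank:
#   J = directional_jacobian(char_poly_coeffs, T, standard_basis_of_V)
#   print(np.linalg.matrix_rank(J))
\end{verbatim}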
In this way, we can try to find a tensor $\mathcal T\in V$ such that the resulting matrix
\[
[(d_{\mathcal T}\mathbf c) \mathcal T_1,\ \dots, (d_{\mathcal T}\mathbf c) \mathcal T_k]
\]
has full rank. In fact, if such a tensor $\mathcal{T}$ exists then a generic tensor will work. For $V=\mathbb{TS}(\mathbb C^3,4)$, the differential of the coefficient map at the tensor point (only independent entries are listed)
\begin{align*}
&t_{1111}=1,\ t_{1112}=-1/3,\ t_{1122}=2/3,\ t_{1222}=-2,\
t_{1113}=1,\ t_{1123}=-1/2,\ t_{1223}=4/3,\\
&t_{1133}=-4/3,\
t_{1233}=5/3,\ t_{1333}=-5,
t_{2111}=6,\ t_{2112}=-2,\ t_{2122}=7/3,\ t_{2222}=-7,\
t_{2113}=8/3,\\
&t_{2123}=-4/3,\ t_{2223}=3,\ t_{2133}=-3,\
t_{2233}=1/3,\ t_{2333}=2,\
t_{3111}=3,\ t_{3112}=4/3,\ t_{3122}=5/3,\\
&t_{3222}=6,\
t_{3113}=0, t_{3123}=-1/6,\ t_{3223}=-2/3,\ t_{3133}=-1,\
t_{3233}=-4/3,\ t_{3333}=-5
\end{align*}
is of full rank $27$;
and for $V=\mathbb{TS}(\mathbb C^4,3)$, the differential of the coefficient map at the tensor point (again, only independent entries are listed)
\begin{align*}
&t_{111}=1,\ t_{112}=-1/2,\ t_{122}=2,\ t_{113}=-1,\ t_{123}=3/2,\ t_{133}=-3,
t_{114}=2,\ t_{124}=-2,\ t_{134}=5/2,\\
&t_{144}=-5,
t_{211}=6,\ t_{212}=-3,\ t_{222}=7,\ t_{213}=-7/2,\ t_{223}=4,\ t_{233}=-8,
t_{214}=9/2,\ t_{224}=-9/2,\\
&t_{234}=1/2,\ t_{244}=2,
t_{311}=3,\ t_{312}=2,\ t_{322}=5,\ t_{313}=3,\ t_{323}=0,\ t_{333}=-1,
t_{314}=-1,\\
& t_{324}=-3/2,\ t_{334}=-2,\ t_{344}=-5,
t_{411}=1,\ t_{412}=1,\ t_{422}=3/2,\ t_{413}=2,\ t_{423}=5/2,\ t_{433}=6,\\
&t_{414}=7/2,\ t_{424}=4,\ t_{434}=9/2,\ t_{444}=10
\end{align*}
is of full rank $32$.
Therefore, we have the next result from Propositions~\ref{prop:differential} and \ref{prop:tensor}.
\begin{proposition}\label{prop:dominant-34-42}
The eigenvalue maps $\phi : \mathbb{TS}(\mathbb C^3,4)\rightarrow\mathbb C^{27}/\mathfrak{S}(27)$ and
$\phi : \mathbb{TS}(\mathbb C^4,3)\rightarrow\mathbb C^{32}/\mathfrak{S}(32)$
are both dominant.
\end{proposition}
\section{Final remarks}\label{sec:remarks}
\subsection{Reconstruction of a tensor}
Understanding the ranges of the multiset-valued maps would be an essential step towards understanding the configurations of the eigenvalues of general tensors\footnote{There is a parallel line of research on another concept of eigenvectors \cite{ass}.}.
\begin{proposition}\label{prop:sub}
For any integers $m,n\geq 2$, there is a set $W\subset\mathbb C^{nm^{n-1}}/\mathfrak{S}(nm^{n-1})$, contained in the closure of the image $\phi(\mathbb{T}(\mathbb C^n,m+1))$, whose closure has dimension $2\lfloor \frac{n}{2}\rfloor m$.
\end{proposition}
\begin{proof}
For any $n\geq 2$, we can take $\lfloor \frac{n}{2}\rfloor$ tensors $\mathcal A_i\in \mathbb{T}(\mathbb C^2,m+1)$, and possibly a scalar $\alpha$ (when $n$ is odd) as subtensors to form a diagonal block tensor $\mathcal T\in\mathbb{T}(\mathbb C^n,m+1)$. It follows from \cite{hhlq13} that
\[
\operatorname{det}(\mathcal T-\lambda\mathcal I)=(\alpha-\lambda)^q\prod_{i=1}^{\lfloor \frac{n}{2}\rfloor}\big[\operatorname{det}(\mathcal A_i-\lambda\mathcal I)\big]^p
\]
for some positive integers $p,q$. So, it is clear that the set $\phi(\mathbb{T}(\mathbb C^2,m+1))\times \dots\times \phi(\mathbb{T}(\mathbb C^2,m+1))$ with $\lfloor \frac{n}{2}\rfloor$ copies can be embedded into $\phi(\mathbb{T}(\mathbb C^n,m+1))$. It then follows from Proposition~\ref{prop:dominant-n2} that the closure of this set has dimension $2\lfloor \frac{n}{2}\rfloor m$.
\end{proof}
Similarly, we can use Propositions~\ref{prop:dominant-33} and \ref{prop:dominant-34-42} to refine the blocks and obtain a variety of larger dimension in some cases. However, in general this still falls far short of the expected dimension \eqref{eqn:expect} below.
\subsection{The dimension of the image of $\phi$}
It follows from Theorem~\ref{theorem:dominant} that for most tensor spaces $\mathbb T(\mathbb C^n,m+1)$, the multiset-valued eigenvalue map is not dominant. Thus, it is reasonable to expect that the dimension of $\overline{\phi(\mathbb{T}(\mathbb C^n,m+1))}$ is
\begin{equation}\label{eqn:expect}
\min\bigg\{n{n+m-1\choose m}, nm^{n-1}\bigg\},
\end{equation}
since $\phi(\mathbb{T}(\mathbb C^n,m+1))=\phi(\mathbb{TS}(\mathbb C^n,m+1))$ and $\operatorname{dim}(\mathbb{TS}(\mathbb C^n,m+1))=n{n+m-1\choose m}$.
We also want to point out that \eqref{eqn:expect} may not hold for all $m,n\geq 2$. We tested, by a method similar to that of Section~\ref{sec:42-33}, the case $(n,m)=(3,4)$. Note that the tensor space is of dimension $45$ while the number of eigenvalues is $48$.
However, the ranks of the resulting Jacobian matrices at both of the following two points (only independent entries are listed) take the same value $43$.
\begin{align*}
&t_{11111}=0,\ t_{11112}=3/4,\ t_{11122}=5/6,\ t_{11222}=1/4,\ t_{12222}=0,\ t_{11113}=-5/4,
t_{11123}=-1/6,\\
& t_{11223}=-1/6,\ t_{12223}=5/4,\
t_{11133}=-1/2,\ t_{11233}=-1/3,\ t_{12233}=-1/6,\ t_{11333}=-1,\\
& t_{12333}=1/2,
t_{13333}=4,\ t_{21111}=3,\ t_{21112}=5/4,\ t_{21122}=-1/2,\ t_{21222}=1,\ t_{22222}=-2,\\
& t_{21113}=1,\ t_{21123}=1/6,\ t_{21223}=-5/12,\ t_{22223}=-1,\
t_{21133}=-1/6,\ t_{21233}=0,\ t_{22233}=-1/3,\\
& t_{21333}=1,\ t_{22333}=-5/4,\
t_{23333}=-1,
t_{31111}=-3,\ t_{31112}=-5/4,\ t_{31122}=-1/3,\\
& t_{31222}=-1/2,\
t_{32222}=0,\ t_{31113}=3/4,\
t_{31123}=1/6,\
t_{31223}=1/3,\ t_{32223}=2/3,\
t_{31133}=-2/3,\\
& t_{31233}=-1/6,\
t_{32233}=1/3,\ t_{31333}=1,\ t_{32333}=-1,\ t_{33333}=0,
\end{align*}
and
\begin{align*}
&t_{11111}=7,\ t_{11112}=-3/2,\ t_{11122}=-4/3,\ t_{11222}=-9/4,\ t_{12222}=8,\ t_{11113}=7/4,
t_{11123}=3/4,\\
& t_{11223}=7/12,\ t_{12223}=-7/6,\
t_{11133}=5/6,\ t_{11233}=-1/12,\ t_{12233}=5/6,\ t_{11333}=1,\ t_{12333}=9/4,\\
& t_{13333}=0,\ t_{21111}=10,\ t_{21112}=-1,\ t_{21122}=0,\ t_{21222}=5/4,\ t_{22222}=-1,\ t_{21113}=7/4,\\
&
t_{21123}=7/12,\
t_{21223}=0,\ t_{22223}=-7/6,\
t_{21133}=-7/6,\ t_{21233}=-1/4,\ t_{22233}=5/6,\\
& t_{21333}=-5/2,\
t_{22333}=1,\
t_{23333}=6,\ t_{31111}=8,\ t_{31112}=-3/2,\ t_{31122}=-1/3,\ t_{31222}=-5/4,\\
& t_{32222}=4,\
t_{31113}=9/4,\
t_{31123}=3/4,\ t_{31223}=-1/3,\ t_{32223}=-4/3,\
t_{31133}=1/6,\\ & t_{31233}=5/6,\ t_{32233}=-1,\ t_{31333}=3/2,\ t_{32333}=5/2,\ t_{33333}=6.
\end{align*}
Nevertheless, we are fairly confident that \eqref{eqn:expect} is true for all but finitely many exceptions, as such phenomena occur in tensor problems, e.g., the famous Alexander-Hirschowitz theorem for symmetric tensor rank \cite{ah95}.
\subsection{The closedness of the image of $\phi$} We proved in Proposition \ref{prop:dominant-n2} that for a generic multiset of total multiplicity $2m$ of complex numbers, the inverse eigenvalue problem is solvable. A natural question to ask is: is the inverse eigenvalue problem solvable for any multiset of total multiplicity $2m$ of complex numbers, i.e., is $\phi$ surjective? It is easy to show that when $m=2$, the answer is affirmative. To this end, we will need the following result.
\begin{proposition}[\cite{f77}]\label{thm:friedland}
Let $f=(f_1,\dots,f_n):\mathbb{C}^n\to \mathbb{C}^n$ be a polynomial map where each $f_i$ is homogeneous. If $f(x_1,\dots, x_n)=0$ has only the trivial solution $(0,\dots,0)$, then $f(x_1,\dots,x_n)=\omega$ is solvable for any $\omega\in \mathbb{C}^n$.
\end{proposition}
\begin{proposition}[Cubic plane tensor]\label{prop:closedness m=2}
Given any multiset $S$ of total multiplicity four of complex numbers, there exists a tensor $\mathcal{T}$ in $\mathbb{T}(\mathbb{C}^2,3)$ such that the set of eigenvalues of $\mathcal{T}$ is $S$.
\end{proposition}
\begin{proof}
As before, we use $a_0,a_1,a_2,b_0,b_1,b_2$ to parametrize $\mathbb{TS}(\mathbb C^2,3)$, which is isomorphic to $\mathbb{C}^6$. Let $\mathbf{c}: \mathbb{C}^6\to \mathbb{C}^4$ be the map sending the vector $(a_0,a_1,a_2,b_0,b_1,b_2)$ to $(c_1,c_2,c_3,c_4)$, where $c_i$ is the codegree-$i$ coefficient (i.e., the coefficient of $\lambda^{4-i}$) of the characteristic polynomial of the tensor determined by $(a_0,a_1,a_2,b_0,b_1,b_2)$, $i=1,\dots,4$. Hence each $c_i$ is a homogeneous polynomial of degree $i$. By Proposition~\ref{thm:friedland} and Proposition~\ref{prop:tensor}, it is sufficient to find a four-dimensional linear subspace $L\subset \mathbb{C}^6$ such that $\mathbf c^{-1}(\mathbf 0)\cap L=\{\mathbf 0\}$. We consider the linear subspace $L$ defined by the equations
\[
a_1+b_1+b_2 =0, a_2+b_0 =0.
\]
Then it is easy to verify that $L\cap \mathbf c^{-1}(\mathbf 0)=\{\mathbf 0\}$.
\end{proof}
\begin{remark}
We used Macaulay2 to verify that $L\cap \mathbf c^{-1}(\mathbf 0)=\{\mathbf 0\}$. Since $n$ generic homogeneous polynomials in $n$ variables have only the trivial common zero, the existence of the particular $L$ in the proof of Proposition \ref{prop:closedness m=2} implies that a generic four-dimensional subspace of $\mathbb{C}^6$ should work.
\end{remark}
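In a similar spirit, the verification can in principle be redone in SymPy. The sketch below is illustrative only: it assumes the parametrization $\mathcal T\mathbf x^2=(a_0x^2+a_1xy+a_2y^2,\; b_0x^2+b_1xy+b_2y^2)$, which may have to be adapted to the labeling used earlier in the paper; it computes the characteristic polynomial as a resultant, restricts to the subspace $L$, and tests (via a Gr\"obner basis) whether the common zero locus of $c_1,\dots,c_4$ on $L$ is zero-dimensional, which by homogeneity means that it is just the origin.
\begin{verbatim}
import sympy as sp

lam, x = sp.symbols('lambda x')
a0, a1, a2, b0, b1, b2 = sp.symbols('a0 a1 a2 b0 b1 b2')

# characteristic polynomial det(T - lambda*I) as a resultant (dehomogenized at y = 1)
f = (a0 - lam) * x**2 + a1 * x + a2          # f(x, y) - lambda*x^2 at y = 1
g = b0 * x**2 + b1 * x + (b2 - lam)          # g(x, y) - lambda*y^2 at y = 1
charpoly = sp.expand(sp.resultant(f, g, x))  # degree 4 in lambda, leading coeff +-1

# c_1, ..., c_4: coefficients of lambda^3, ..., lambda^0
cs = sp.Poly(charpoly, lam).all_coeffs()[1:]

# restrict to L: a_1 + b_1 + b_2 = 0, a_2 + b_0 = 0
cs_L = [sp.expand(c.subs({a1: -b1 - b2, a2: -b0})) for c in cs]

G = sp.groebner(cs_L, a0, b0, b1, b2, order='lex')
print(G.is_zero_dimensional)   # True iff the only common zero on L is the origin
\end{verbatim}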
\begin{remark}
It is tempting to extend the proof of Proposition \ref{prop:closedness m=2} to show that $\mathbf c$ is surjective in general. However, on the one hand, it is difficult to compute the intersection of $\mathbf c^{-1}(\mathbf 0)$ with a generic linear subspace of dimension $2m$ in general. On the other hand, when $m=3$ the dimension of $\mathbf c^{-1}(\mathbf 0)$ is three, which is larger than the expected dimension two; hence the method used for $m=2$ does not work for $m=3$.
\end{remark}
\subsection*{Acknowledgement}
This work is partially supported by
National Science Foundation of China (Grant No. 11401428).
\bibliographystyle{model6-names}
| {
"timestamp": "2016-05-26T02:08:02",
"yymm": "1511",
"arxiv_id": "1511.05057",
"language": "en",
"url": "https://arxiv.org/abs/1511.05057",
"abstract": "A tensor $\\mathcal T\\in \\mathbb T(\\mathbb C^n,m+1)$, the space of tensors of order $m+1$ and dimension $n$ with complex entries, has $nm^{n-1}$ eigenvalues (counted with algebraic multiplicities). The inverse eigenvalue problem for tensors is a generalization of that for matrices. Namely, given a multiset $S\\in \\mathbb C^{nm^{n-1}}/\\mathfrak{S}(nm^{n-1})$ of total multiplicity $nm^{n-1}$, is there a tensor in $\\mathbb T(\\mathbb C^n,m+1)$ such that the multiset of eigenvalues of $\\mathcal{T}$ is exact $S$? The solvability of the inverse eigenvalue problem for tensors is studied in this paper. With tools from algebraic geometry, it is proved that the necessary and sufficient condition for this inverse problem to be generically solvable is $m=1,\\ \\text{or }n=2,\\ \\text{or }(n,m)=(3,2),\\ (4,2),\\ (3,3)$.",
"subjects": "Spectral Theory (math.SP)",
"title": "Inverse tensor eigenvalue problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587272306607,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7097978826008787
} |
https://arxiv.org/abs/1712.05098 | The central limit theorem for the number of clusters of the Arratia flow | In this paper we prove the central limit theorem for the number of clusters formed by the particles of the Arratia flow starting from the interval $[0;n]$ as $n\to\infty$ and obtain an estimate of the Berry-Esseen type for the rate of this convergence. | \section{Introduction}
\label{section1}
In this paper we consider the Arratia flow $\{x(u,\cdot),\; u\in\mbR\}$, which is an ordered family of standard Brownian motions starting from every point of the real line such that for any $u,v\in\mbR$ the joint quadratic variation of $x(u,\cdot)$ and $x(v,\cdot)$ is given by
\[
\jqv{x(u,\cdot)}{x(v,\cdot)}_t=\int\limits_0^t \1_{\{0\}}(x(u,s)-x(v,s))\,ds,\quad t\ges 0,
\]
where $\1_{\{0\}}$ stands for the indicator function of the set $\{0\}$. This flow was constructed by R.~A.~Arratia~\cite{Arratia} as a weak limit of families of coalescing simple random walks and can be informally described as a system of Brownian particles any two of which move independently until they meet, after which they coalesce and move together.
In~\cite{Harris} T.~E.~Harris considered a generalisation of the Arratia flow, in which the indicator function $\1_{\{0\}}$ is replaced by a non-negative definite function $\Gamma$, which is called the covariance function of the flow, and proved its existence under certain conditions on $\Gamma$.
In the same paper T.~E.~Harris proved that for the Arratia flow $\{x(u,\cdot),\; u\in\mbR\}$ for any time $t>0$ and interval $[u_1;u_2]\subset\mbR$ the set $x([u_1;u_2],t)$ is almost surely finite. From this it follows that for any time $t>0$ and interval $[u_1;u_2]$ the number
\[
\nu_t([u_1;u_2]):=\#\; x([u_1;u_2],t)
\]
of elements of the set $x([u_1;u_2],t)$ is almost surely finite (for a different proof see the monograph~\cite{Dorogovtsev2007} of A.~A.~Dorogovtsev). R.~Tribe and O.~Zaboronski~\cite{TribeZaboronski} proved that for any $t>0$ the random point process $x(\mbR,t)$ is Pfaffian and found its kernel; based on some of their formulae, the distribution of $\nu_t([0;u])$ was found in~\cite{Fomichov}. Earlier, for Harris flows, a necessary and sufficient condition for the coalescence of particles and an estimate for the mean value of the number of clusters were obtained by H.~Matsumoto in~\cite{Matsumoto}. For the Arratia flow the large deviation principle and the law of the iterated logarithm for the size of the cluster containing the point zero were established by A.~A.~Dorogovtsev and O.~V.~Ostapenko~\cite{DorogovtsevOstapenko} and by A.~A.~Dorogovtsev, A.~V.~Gnedin and M.~B.~Vovchanskii~\cite{DorogovtsevGnedinVovchanskii}, respectively.
Since the covariance of any two particles in Harris flows depends only on the distance between them, such flows are stationary with respect to the spatial variable. In~\cite{Glinyanaya}, under the assumption that the covariance function converges to zero at infinity, their ergodicity with respect to the spatial variable was established and an estimate for the strong mixing coefficient was found.
In this paper we prove the following central limit theorem for $\nu_t([0;n])$ as $n\to\infty$.
\begin{theorem}
\label{theorem1}
For any $t>0$
\[
\dfrac{\nu_t([0;n])-\E\nu_t([0;n])}{\sqrt{n}} \Longrightarrow \mcN(0;\sigma_t^2),\quad n\to\infty,
\]
where $\sigma_t^2:=\dfrac{3-2\sqrt{2}}{\sqrt{\pi t}}$.
\end{theorem}
Furthermore, we also obtain an estimate for the rate of this convergence by proving the following inequality of the Berry--Esseen type.
\begin{theorem}
\label{theorem2}
For any $t>0$ there exists a constant $C>0$ such that for all $n\ges 1$
\[
\sup_{z\in\mbR} \abs{\Prob{\dfrac{\nu_t([0;n])-\E\nu_t([0;n])}{\sqrt{n}}\les z}-\int\limits_{-\infty}^z \dfrac{1}{\sqrt{2\pi\sigma_t^2}} e^{-r^2/(2\sigma^2_t)}\,dr}\les Cn^{-1/2}(\log n)^2.
\]
\end{theorem}
Let us note that due to the scaling invariance of the Arratia flow (e.~g., see~\cite[subsection~2.3]{TribeZaboronski})
\begin{equation}
\label{equation0}
x(\cdot,\cdot)\stackrel{d}{=}\dfrac{1}{\ve} x(\ve \cdot,\ve^2 \cdot),\quad \ve>0,
\end{equation}
from Theorem~\ref{theorem1} the following corollary can be deduced.
\begin{corollary}
The following convergence in distribution takes place:
\[
\sqrt[4]{t} \cdot \nu_t([0;1])-\dfrac{1}{\sqrt[4]{t} \cdot \sqrt{\pi}} \Longrightarrow \mcN(0;\sigma^2),\quad t\to 0+,
\]
where $\sigma^2=\dfrac{3-2\sqrt{2}}{\sqrt{\pi}}$.
\end{corollary}
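For the reader's convenience, let us sketch the deduction (the passage from integer to real values of $n$ in Theorem~\ref{theorem1} being routine). Applying~\eqref{equation0} with $\ve=t^{-1/2}$, we get
\[
\nu_t([0;1])=\#\; x([0;1],t)\stackrel{d}{=}\#\; x([0;t^{-1/2}],1)=\nu_1([0;t^{-1/2}]),
\]
so that Theorem~\ref{theorem1} with $n=t^{-1/2}$, combined with the identity $\E\nu_1([0;n])=1+\tfrac{n}{\sqrt{\pi}}$ (see~\eqref{equation1} below), yields
\[
\sqrt[4]{t}\left(\nu_t([0;1])-1-\dfrac{1}{\sqrt{\pi t}}\right) \Longrightarrow \mcN(0;\sigma^2),\quad t\to 0+,
\]
and the corollary follows since $\sqrt[4]{t}\to 0$ as $t\to 0+$.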
The main part of this paper consists of two sections. In Section~\ref{section2} we establish the asymptotic behaviour of the variance and all moments of $\nu_t([0;u])$ and in Section~\ref{section3} we give the proof of Theorems~\ref{theorem1} and~\ref{theorem2}.
\section{Asymptotics of the variance and moments of $\nu_t([0;u])$}
\label{section2}
In this section we establish the asymptotic behaviour of the variance and all moments of $\nu_t([0;u])$. Our proof is based on the results of R.~Tribe and O.~Zaboronski~\cite{TribeZaboronski}, and we refer the reader to this paper for the definitions of the objects we use in this section.
In paper~\cite{TribeZaboronski} the authors proved that for any time $t>0$ the clusters of the Arratia flow form a Pfaffian point process with the kernel
\[
K_t(u,v)=\dfrac{1}{\sqrt{t}} K\left(\dfrac{u}{\sqrt{t}}, \dfrac{v}{\sqrt{t}}\right),
\]
where
\[
K(u,v)=
\begin{pmatrix}
-F''(v-u) & -F'(v-u)\\
F'(v-u) & \sign(v-u) \cdot F(\abs{v-u})
\end{pmatrix}
\]
with the function $F$ given by
\[
F(z):=\dfrac{1}{\sqrt{\pi}} \int\limits_z^{+\infty} e^{-r^2/4}\,dr,\quad z\in\mbR.
\]
In particular, it means that for any $n\ges 1$ the $n$th factorial moment (here and below we use the notation $a^{[n]}=a(a-1) \ldots (a-n+1)$, $a\in\mbZ_+$) of the number $N_t([0;u])$ of particles of the Arratia flow which at time $t>0$ are found in the interval $[0;u]$ is given by
\[
\E N_t^{[n]}([0;u])=\int\limits_0^u \stackrel{n}{\ldots} \int\limits_0^u \rho_t^{(n)}(v_1,\ldots,v_n)\,dv_1 \ldots dv_n,
\]
where $\rho_t^{(n)}$ is the $n$-point density permitting the following representation:
\[
\rho_t^{(n)}(v_1,\ldots,v_n)=\mathrm{Pf}\left[K_t(v_i,v_j),\; i,j=1,\ldots,n\right],\quad v_1,\ldots,v_n\in\mbR.
\]
To obtain the expressions for the moments of $\nu_t([0;u])$ it remains to note that
\begin{equation}
\label{equation}
\nu_t([0;u])\stackrel{d}{=} N_t([0;u])+1,
\end{equation}
which can be easily proved with the help of the dual flow (e.~g., see~\cite{TothWerner}, \cite{Dorogovtsev}, \cite[subsection~2.2]{TribeZaboronski}). Recall that for fixed time $t_0>0$ the dual flow is a system $\{y(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$ of coalescing Brownian motions in backward time starting from every point of the real line characterised by the property that its trajectories do not intersect those of the particles of the restriction $\{x(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$ of the Arratia flow to the time interval $[0;t_0]$. It is known that $\{y(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$ agrees in distribution with $\{x(u,t),\; u\in\mbR,\; 0\les t\les t_0\}$, and equality~\eqref{equation} follows from the fact that the set $y(\mbR,t)$ coincides with the set of points of discontinuity of the mapping $x(\cdot,t) \colon \mbR \rightarrow \mbR$.
\begin{proposition}
For any $t>0$ and $u>0$ we have
\[
\Var\nu_t([0;u])=-\dfrac{4}{\pi}+\dfrac{3u}{\sqrt{\pi t}}+ \dfrac{4}{\pi}e^{-u^2/2t}-\dfrac{2}{\pi}\int\limits_0^{u/\sqrt{t}} e^{-z^2/4}\,dz-\dfrac{4u}{\pi\sqrt{t}}\int\limits_0^{u/\sqrt{t}}e^{-z^2/2}\,dz.
\]
\end{proposition}
\begin{proof}
First of all, let us note that
\[
\E N_t([0;u])=\E N_t^{[1]}([0;u])=\int\limits_0^u \rho_t^{(1)}(v)\,dv= \dfrac{u}{\sqrt{\pi t}},
\]
and so
\begin{equation}
\label{equation1}
\E\nu_t([0;u])=1+\dfrac{u}{\sqrt{\pi t}}.
\end{equation}
Moreover, on the one hand,
\begin{equation}
\label{equation2}
\E N_t^{[2]}([0;u])=\E\nu_t^2([0;u])-3\E\nu_t([0,u])+2,
\end{equation}
and, on the other hand,
\begin{equation}
\label{equation3}
\E N_t^{[2]}([0;u])=\int\limits_0^u \int\limits_0^u \rho^{(2)}_t(v_1,v_2)\,dv_1dv_2,
\end{equation}
where (for notational simplicity here and below for antisymmetric matrices we omit their entries below the diagonal)
\begin{gather*}
\rho^{(2)}_t(v_1,v_2)=
\mathrm{Pf}
\left[
\begin{matrix}
0 & \dfrac{1}{\sqrt{\pi t}} & -\dfrac{v_2-v_1}{2\sqrt{\pi} \cdot t} e^{-(v_2-v_1)^2/4t} & \dfrac{1}{\sqrt{\pi t}} e^{-(v_2-v_1)^2/4t}\\
& 0 & -\dfrac{1}{\sqrt{\pi t}} e^{-(v_2-v_1)^2/4t} & \dfrac{\sign (v_2-v_1)}{\sqrt{\pi t}} \cdot \int\limits_{\abs{v_2-v_1}/\sqrt{t}}^{+\infty} e^{-v^2/4}\,dv\\
& & 0 & \dfrac{1}{\sqrt{\pi t}}\\
& & & 0
\end{matrix}
\right]
=
\\
=\dfrac{1}{\pi t}\left(1+\dfrac{\abs{v_2-v_1}}{2\sqrt{t}} \cdot e^{-(v_2-v_1)^2/4t} \cdot \int\limits_{\abs{v_2-v_1}/\sqrt{t}}^{+\infty} e^{-v^2/4}\,dv-e^{-(v_2-v_1)^2/2t}\right).
\end{gather*}
Therefore, computing the integral in~\eqref{equation3} by integrating by parts (several times) and using~\eqref{equation1} and~\eqref{equation2}, we obtain
\begin{equation}
\label{equation4}
\E\nu_t^2([0;u])=1-\dfrac{4}{\pi}+\dfrac{5u}{\sqrt{\pi t}}+\dfrac{u^2}{\pi t}+ \dfrac{4}{\pi}e^{-u^2/2t}-\dfrac{2}{\pi}\int\limits_0^{u/\sqrt{t}} e^{-z^2/4}\,dz-\dfrac{4u}{\pi\sqrt{t}}\int\limits_0^{u/\sqrt{t}}e^{-z^2/2}\,dz.
\end{equation}
Finally, using~\eqref{equation1} and~\eqref{equation4}, we arrive at the desired result.
\end{proof}
\begin{corollary}
\label{corollary5}
The following assertions hold true:
\begin{gather*}
\Var\nu_t([0;u])\sim (3-2\sqrt{2}) \cdot \dfrac{u}{\sqrt{\pi t}},\quad u\to +\infty \text{ or } t\to 0+,\\
\Var\nu_t([0;u])\sim (3-\dfrac{2}{\sqrt{\pi}}) \cdot \dfrac{u}{\sqrt{\pi t}},\quad u\to 0+ \text{ or } t\to +\infty.
\end{gather*}
\end{corollary}
\begin{theorem}
For any $k\ges 1$ we have
\[
\E\nu_t^k([0;u])\sim \left(\dfrac{u}{\sqrt{\pi t}}\right)^k,\quad u\to +\infty \text{ or } t\to 0+.
\]
\end{theorem}
\begin{proof}
Due to the scaling invariance~\eqref{equation0} of the Arratia flow it is enough to prove the corresponding assertion for $t\to 0+$. To do so, we use induction on $k$. For $k=1$ the assertion follows from~\eqref{equation1}. Now suppose that it holds true for all $k'\les k-1$. Then from~\eqref{equation} it follows that
\[
\lim_{t\to 0+} t^{k/2}\E\nu_t^k([0;u])=\lim_{t\to 0+} t^{k/2}\E N_t^{[k]}([0;u]),
\]
provided that the limit on the right-hand side exists. However,
\[
t^{k/2}\E N_t^{[k]}([0;u])=\int\limits_0^u \stackrel{k}{\ldots} \int\limits_0^u \mathrm{Pf}\left[\sqrt{t} \cdot K_t(v_i,v_j),\; i,j=1,\ldots,k\right]\,dv_1 \ldots dv_k,
\]
and the Pfaffian on the right-hand side converges as $t\to 0+$ to the Pfaffian
\[
\mathrm{Pf}
\left[
\begin{matrix}
0 & 1/\sqrt{\pi} & 0 & 0 & 0 & \ldots & 0 & 0 & 0\\
{} & 0 & 0 & 0 & 0 & \ldots & 0 & 0 & 0\\
{} & {} & 0 & 1/\sqrt{\pi} & 0 & \ldots & 0 & 0 & 0\\
{} & {} & {} & 0 & 0 & \ldots & 0 & 0 & 0\\
{} & {} & {} & {} & 0 & \ldots & 0 & 0 & 0\\
{} & {} & {} & {} & {} & \ddots & \vdots & \vdots & \vdots\\
{} & {} & {} & {} & {} & {} & 0 & 0 & 0\\
{} & {} & {} & {} & {} & {} & {} & 0 & 1/\sqrt{\pi}\\
{} & {} & {} & {} & {} & {} & {} & {} & 0\\
\end{matrix}
\right]
=\left(\dfrac{1}{\sqrt{\pi}}\right)^k.
\]
Thus, by the dominated convergence theorem we obtain
\[
\lim_{t\to 0+} t^{k/2}\E N_t^{[k]}([0;u])=\left(\dfrac{u}{\sqrt{\pi}}\right)^k.
\]
The theorem is proved.
\end{proof}
\section{Proof of the main results}
\label{section3}
\begin{proof}[Proof of Theorem~\ref{theorem1}]
Fixing arbitrary $t>0$, let us note that for any $u_1<u_2<u_3$ we have
\begin{equation}
\label{equation5}
\nu_t([u_1;u_3])+1=\nu_t([u_1;u_2])+\nu_t([u_2;u_3]),
\end{equation}
since on the right-hand side the cluster containing the point $x(u_2,t)$ is taken into account twice due to the almost sure continuity of the random mapping $x(\cdot,t) \colon \mbR \rightarrow \mbR$ at the point $u_2$. From~\eqref{equation5} it follows that for all $n\ges 1$
\begin{equation}
\label{equation6}
\nu_t([0;n])-\E\nu_t([0;n])=\sum_{k=1}^n \eta_k,
\end{equation}
where
\[
\eta_k:=\nu_t([k-1;k])-\E\nu_t([k-1;k]),\quad k\ges 1.
\]
Since the stochastic process $\{x(u,t)-u,\; u\in\mbR\}$ is strictly stationary, so is the sequence $\{\eta_n,\; n\ges 1\}$. Now to this sequence we would like to apply the following theorem.
\begin{theorem}
\label{theorem7}
\textup{\cite[Theorem~18.5.3]{IbragimovLinnik}}
Let $\{X_n,\; n\ges 1\}$ be a strictly stationary sequence of centered random variables with finite variance such that
\[
\Var\sum_{k=1}^n X_k\longrightarrow +\infty,\quad n\to +\infty,
\]
and for some $\delta>0$
\[
\E\abs{X_1}^{2+\delta}<+\infty
\]
and
\[
\sum_{n=1}^\infty \left(\alpha^X(n)\right)^{\delta/(2+\delta)}<+\infty,
\]
where $\alpha^X$ is its strong mixing coefficient:
\begin{gather*}
\alpha^X(n):=\sup\{\abs{\mbP(AB)-\mbP(A)\mbP(B)} \mid A\in \sigma(X_j,\; j\les k),\\
B\in\sigma(X_j,\; j\ges k+n),\; k\in\mbZ\},\quad n\in\mbZ,
\end{gather*}
with $\sigma(\mcA)$ standing for the $\sigma$-field generated by the set $\mcA$ of random variables.
Then the series
\[
\E X_1^2+2\sum_{k=2}^\infty \E X_1X_k
\]
is absolutely convergent and, provided that its sum $\sigma^2$ is strictly positive, the following convergence in distribution takes place:
\[
\dfrac{1}{\sqrt{n}} \sum_{k=1}^n X_k \Longrightarrow \mcN(0,\sigma^2),\quad n\to\infty.
\]
\end{theorem}
\begin{remark}
Note that $\sigma^2$ permits the representation
\[
\sigma^2=\lim_{n\to \infty} \dfrac{1}{n} \Var\sum_{k=1}^n X_k,
\]
since
\[
\dfrac{1}{n} \Var\sum_{k=1}^n X_k=\dfrac{1}{n} \E\left(\sum_{k=1}^n X_k\right)^2=\dfrac{1}{n} \sum_{i,j=1}^n \E X_iX_j=\E X_1^2+2\sum_{k=2}^n \dfrac{n-k+1}{n} \E X_1X_k,
\]
and, if the series $\sum \E X_1X_k$ is absolutely convergent, by the dominated convergence theorem
\[
\lim_{n\to\infty} \sum_{k=2}^n \dfrac{n-k+1}{n} \E X_1X_k=\sum_{k=2}^\infty \E X_1X_k-\lim_{n\to\infty} \sum_{k=2}^n \dfrac{k-1}{n} \E X_1X_k=\sum_{k=2}^\infty \E X_1X_k.
\]
\end{remark}
Now let us verify that the conditions of this theorem are satisfied for the sequence $\{\eta_n,\; n\ges 1\}$. First, we note that all absolute moments of $\eta_1$ are finite, since such are those of $\nu_t([0;1])$. Second, from equality~\eqref{equation6} and Corollary~\ref{corollary5} we get
\[
\dfrac{1}{n}\Var\sum_{k=1}^n \eta_k=\dfrac{1}{n}\Var\nu_t([0;n])\longrightarrow \dfrac{3-2\sqrt{2}}{\sqrt{\pi t}}>0,\quad n\to\infty,
\]
and so in particular
\[
\Var\sum_{k=1}^n \eta_k\longrightarrow +\infty,\quad n\to\infty.
\]
Third, it is easy to check that for the strong mixing coefficient $\alpha^\eta$ of the sequence $\{\eta_n,\; n\ges 1\}$ we have
\[
\alpha^\eta(n)\les \alpha(n),\quad n\ges 1,
\]
where
\begin{gather*}
\alpha(n):=\sup\{\abs{\mbP(AB)-\mbP(A)\mbP(B)},\; A\in\sigma(x(u,t)-u,\; u\les h),\\
B\in\sigma(x(u,t)-u,\; u\ges h+n),\; h\in\mbR\}.
\end{gather*}
In~\cite{Glinyanaya} it was proved that for $n\ges 1$ large enough
\[
\alpha(n)\les 2\sqrt{\dfrac{2}{\pi t}} \int\limits_n^{+\infty} e^{-r^2/2t}\,dr.
\]
Therefore, using the standard estimate for the tails of the Gaussian distribution, we obtain that for $n\ges 1$ large enough
\[
\alpha^\eta(n)\les 2\sqrt{\dfrac{2}{\pi t}} \int\limits_n^{+\infty} e^{-r^2/2t}\,dr\les \dfrac{2t}{n}\sqrt{\dfrac{2}{\pi t}} e^{-n^2/2t},
\]
and so for all $\delta>0$
\[
\sum_{n=1}^\infty \left(\alpha^\eta(n)\right)^{\delta/(2+\delta)}<+\infty.
\]
Thus, applying Theorem~\ref{theorem7} to the sequence $\{\eta_n,\; n\ges 1\}$ and using equality~\eqref{equation6} finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem2}]
The proof is based on the following theorem.
\begin{theorem}
\textup{\cite[Theorem~2]{Tikhomirov}}
Let $\{X_n,\; n\ges 1\}$ be a strictly stationary sequence of centered random variables with finite variance such that for some $\delta\in (0,1]$
\[
\E\abs{X_1}^{2+\delta}<+\infty
\]
and for some constants $K>0$ and $\beta>0$
\[
\alpha^X(n)\les Ke^{-\beta n},\quad n\ges 1.
\]
Then there exists a constant $A=A(K,\beta,\delta)>0$ such that
\[
\sup_{z\in\mbR} \abs{\Prob{\frac 1{\sigma_n} \sum_{k=1}^n X_k\les z}- \dfrac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^z e^{-r^2/2}\,dr}\les An^{-\delta/2}(\log n)^{1+\delta},\quad n\ges 1,
\]
where
\[
\sigma_n^2=\E\left(\sum_{k=1}^n X_k\right)^2.
\]
\end{theorem}
Applying this theorem to the sequence $\{\eta_n,\; n\ges 1\}$ defined above and using equality~\eqref{equation6}, we obtain the desired result.
\end{proof}
| {
"timestamp": "2017-12-15T02:03:53",
"yymm": "1712",
"arxiv_id": "1712.05098",
"language": "en",
"url": "https://arxiv.org/abs/1712.05098",
"abstract": "In this paper we prove the central limit theorem for the number of clusters formed by the particles of the Arratia flow starting from the interval $[0;n]$ as $n\\to\\infty$ and obtain an estimate of the Berry-Esseen type for the rate of this convergence.",
"subjects": "Probability (math.PR)",
"title": "The central limit theorem for the number of clusters of the Arratia flow",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587268703082,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7097978823419314
} |
https://arxiv.org/abs/1911.11810 | Exceptional points of discrete-time random walks in planar domains | Given a sequence of lattice approximations $D_N\subset\mathbb Z^2$ of a bounded continuum domain $D\subset\mathbb R^2$ with the vertices outside $D_N$ fused together into one boundary vertex $\varrho$, we consider discrete-time simple random walks in $D_N\cup\{\varrho\}$ run for a time proportional to the expected cover time and describe the scaling limit of the exceptional level sets of the thick, thin, light and avoided points. We show that these are distributed, up a spatially-dependent log-normal factor, as the zero-average Liouville Quantum Gravity measures in $D$. The limit law of the local time configuration at, and nearby, the exceptional points is determined as well. The results extend earlier work by the first two authors who analyzed the continuous-time problem in the parametrization by the local time at $\varrho$. A novel uniqueness result concerning divisible random measures and, in particular, Gaussian Multiplicative Chaos, is derived as part of the proofs. | \section{Introduction}
\noindent
This note is a continuation of earlier work by the first two authors who in~\cite{AB} studied various exceptional level sets associated with the local time of random walks in lattice versions~$D_N\subset\mathbb Z^2$ of bounded open domains~$D\subset\mathbb R^2$, at times proportional to the cover time of~$D_N$. The walks in~\cite{AB} move as the ordinary constant-speed continuous-time simple symmetric random walk on~$D_N$ and, upon exit from~$D_N$, reenter~$D_N$ through a uniformly-chosen boundary edge. The re-entrance mechanism is conveniently realized by addition to~$D_N$ of a boundary vertex~$\varrho$ with all edges emanating out of~$D_N$ on~$\mathbb Z^2$ now ending in~$\varrho$. See Fig.~\ref{fig1} for an example.
In~\cite{AB}, the local time was parametrized by the time spent at~$\varrho$. Through the use of the Second Ray-Knight Theorem (Eisenbaum, Kaspi, Marcus, Rosen and Shi~\cite{EKMRS}) this enabled a connection to the level sets of the Discrete Gaussian Free Field (DGFF) studied earlier by the second author and O.~Louidor~\cite{BL4}.
The goal of the present paper is to extend the results of~\cite{AB} to the more natural setting of a discrete-time random walk parametrized by its actual time. As we shall see, a close connection to the DGFF still persists, albeit now to that conditioned on vanishing arithmetic mean over~$D_N$. As no version of the Second Ray-Knight Theorem seems available for this specific setting, we have to proceed by suitable, and sometimes tedious, approximations. A key point is to control the fluctuations of the total time of the random walk at a given occupation time of the boundary vertex.
\begin{figure}[t]
\vglue0.2cm
\centerline{\includegraphics[height = 2.0in]{./domain.pdf}
}
\begin{quote}
\small
\vglue-0.2cm
\caption{
\label{fig1}
The graph $(V\cup\{\varrho\},E)$ corresponding to~$D_N$ being the square of $6\times 6$ vertices and all edges emanating from~$D_N$ routed to the boundary vertex~$\varrho$. Note that the graph $(V\cup\{\varrho\},E)$ is planar whenever~$\mathbb Z^2\smallsetminus D_N$ is connected.}
\normalsize
\end{quote}
\end{figure}
In order to give the precise setting of our problem, we first consider a general finite, unoriented, connected graph $G = (V\cup\{\varrho\},E)$, where~$\varrho$ is a distinguished vertex (not belonging to~$V$). Let~$X$ denote a sample path of the simple random walk on~$G$; i.e., a discrete-time Markov chain on~$V\cup\{\varrho\}$ with the transition probabilities
\begin{equation}
\cmss P (u, v) :=
\begin{cases}
\frac1{\deg(u)}, &~\text{if}~e:=(u, v) \in E, \\
0, &~\text{otherwise},
\end{cases}
\end{equation}
where $\deg(u)$ is the degree of~$u$. As usual, we will write~$P^u$ to denote the law of~$X$ subject to the initial condition~$P^u(X_0=u)=1$.
Given a path~$X$ of the chain, the local time at~$v\in V\cup\{\varrho\}$ at time~$n$ is then given by
\begin{equation}
\label{E:local_time}
\ell_n^V(v) := \frac{1}{\deg(v)} \sum_{k=0}^n1_{\{X_k = v \}},\quad n \geq 0.
\end{equation}
Our aim is to observe the Markov chain at times when most, or even all, of the vertices have already been visited. This requires looking at the chain at times (at least) proportional to the total degree~$\deg(V):=\sum_{v\in V\cup\{\varrho\}}\deg(v)$. To simplify our later notation, we thus abbreviate, for any~$t>0$,
\begin{equation}
\label{E:LVt}
L_t^V(v):=\ell_{\lfloor t\deg(V)\rfloor}^V(v),\quad v\in V.
\end{equation}
In this parametrization, we have $L_t^V(v)= t+o(t)$ with high probability as~$t\to\infty$.
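To give a concrete feel for this normalization, the following Python sketch (an illustrative addition, not part of the original text) simulates the walk on the square box $D_N=\{0,\dots,N-1\}^2$ with all outgoing edges fused into $\varrho$ and evaluates $L^V_t$; the empirical mean of $L^V_t$ over $D_N$ is indeed close to $t$, while the minimum and maximum spread out on the $(\log N)^2$ scale quantified in Theorem~\ref{thm-minmax} below.
\begin{verbatim}
import numpy as np

def simulate_local_times(N, t, seed=0):
    """Discrete-time SRW on D_N = {0,...,N-1}^2 with the fused boundary
    vertex rho; returns the field of local times L_t^V on D_N."""
    rng = np.random.default_rng(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    # interior endpoints of boundary edges, listed with multiplicity
    entries = [(i, j) for i in range(N) for j in range(N)
               for (di, dj) in steps if not (0 <= i + di < N and 0 <= j + dj < N)]
    deg_rho = len(entries)                 # equals 4*N for the square box
    total_degree = 4 * N * N + deg_rho     # sum of degrees over V and rho
    n_steps = int(t * total_degree)

    visits = np.zeros((N, N), dtype=np.int64)
    pos = (N // 2, N // 2)                 # start somewhere inside D_N
    visits[pos] += 1
    for _ in range(n_steps):
        if pos is None:                    # at rho: re-enter via a uniform boundary edge
            pos = entries[rng.integers(deg_rho)]
        else:
            di, dj = steps[rng.integers(4)]
            i, j = pos[0] + di, pos[1] + dj
            pos = (i, j) if (0 <= i < N and 0 <= j < N) else None
        if pos is not None:
            visits[pos] += 1
    return visits / 4.0                    # ell_n(v) = (number of visits)/deg(v)

L = simulate_local_times(N=30, t=50.0)
print(L.mean(), L.min(), L.max())          # mean close to t; min/max spread out
\end{verbatim}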
Our derivations will make heavy use of the connection between the above Markov chain and an instance of the Discrete Gaussian Free Field (DGFF). Denoting by
\begin{equation}
H_{v} := \inf \bigl\{n \geq 0 \colon X_n = v \bigr\}
\end{equation}
the first hitting time of vertex~$v$, this DGFF is the centered Gaussian process $\{h_v^{V}\colon v \in V\}$ with covariances given by
\begin{equation}
\label{E:cov}
\mathbb E\bigl(h_u^{V} h_v^{V} \bigr) = G^{V} (u, v) :=E^u\bigl(\ell_{H_{\varrho}}^V(v)\bigr),
\end{equation}
where~$\mathbb E$ denotes the expectation with respect to the law of~$h^V$ and~$G^V$ is the Green function. The field naturally extends to~$\varrho$ by~$h^V_\varrho=0$.
We will apply the above to~$V$ ranging through a sequence of lattice approximations of a well-behaved continuum domain. The following definitions are taken from~\cite{BL2}:
\begin{definition}
An admissible domain
is a bounded open subset of $\mathbb{R}^2$
that consists of a finite number of connected components
and whose boundary is composed of a finite number of connected sets each of which has
positive Euclidean diameter.
\end{definition}
We will write $\mathfrak{D}$ to denote the family of all admissible domains and let $d_{\infty} (\cdot, \cdot)$ denote the $\ell^{\infty}$-distance on $\mathbb{R}^2$. The lattice domains are then assumed to obey:
\begin{definition}
\label{dfn:admissible}
An admissible lattice approximation of $D \in \mathfrak{D}$ is a sequence $\{D_N\}_{N\ge1}$ of sets $D_N\subset\mathbb{Z}^2$ such that the following holds: There is~$N_0\in\mathbb N$ such that for all~$N\ge N_0$ we have
\begin{equation}
\label{E:1.8i}
D_N \subseteq \Bigl\{x \in \mathbb{Z}^2 \colon
d_{\infty}\bigl(\ffrac{x}{N}, \mathbb{R}^2 \smallsetminus D\bigr) > \frac{1}{N} \Bigr\}
\end{equation}
and, for any~$\delta>0$ there is also~$N_1\in\mathbb N$ such that for all~$N\ge N_1$,
\begin{equation}
\label{E:1.8ii}
D_N \supseteq \bigl\{x \in \mathbb{Z}^2 \colon
d_{\infty} (\ffrac{x}{N}, \mathbb{R}^2 \smallsetminus D) > \delta \bigr\}.
\end{equation}
\end{definition}
As shown in \cite[Appendix~A]{BL2}, the conditions \twoeqref{E:1.8i}{E:1.8ii} ensure that the discrete harmonic measure on~$D_N$ tends, under scaling of space by~$N$, weakly to the harmonic measure on~$D$. This yields a precise asymptotic expansion of the associated Green function; see \cite[Chapter~1]{B-notes}. In particular, we have $G^{D_N}(x,x)=g\log N+O(1)$ for
\begin{equation}
g:=\frac1{2\pi}
\end{equation}
whenever~$x$ is deep inside~$D_N$. (This is by a factor~$4$ smaller than the corresponding constant in~\cite{B-notes,BL2} due to a different normalization of the Green function.)
\section{Main results}
\noindent
Let us move to discussing our main results. We pick an admissible domain~$D\in\mathfrak D$ and a sequence of admissible lattice approximation~$\{D_N\}_{N\ge1}$ and consider these fixed throughout the rest of the derivations.
\subsection{Setting the scales}
We begin by setting the scales for the time that the random walk is observed for and determining the range of values taken by the local time:
\begin{theorem}
\label{thm-minmax}
Let~$\{t_N\}_{N\ge1}$ be a positive sequence such that, for some~$\theta>0$,
\begin{equation}
\label{E:1.12}
\lim_{N\to\infty}\frac{t_N}{(\log N)^2}=2g\theta.
\end{equation}
Then for any choices of~$x_N\in D_N$, the following limits hold in $P^{x_N}$-probability:
\begin{equation}
\label{E:max}
\frac1{(\log N)^2}\,\max_{x\in D_N} L^{D_N}_{t_N}(x)\,\,\,\underset{ N\to\infty}\longrightarrow\,\,\,2 g\bigl(\sqrt\theta+1\bigr)^2
\end{equation}
and
\begin{equation}
\label{E:min}
\frac1{(\log N)^2}\,\min_{x\in D_N} L_{t_N}^{D_N}(x)\,\,\,\underset{ N\to\infty}\longrightarrow\,\,\,2 g\bigl[(\sqrt\theta-1)\vee0\,\bigr]^2.
\end{equation}
\end{theorem}
The conclusion \eqref{E:min} indicates (and our later results on avoided points prove) that the choice $\theta:=1$ identifies the leading order of the \myemph{cover time} of~$D_N$ --- defined as the first time that every vertex of the graph has been visited. The cover time is random but it is typically concentrated (more precisely, whenever the maximal hitting time is much smaller than the expected cover time; see Aldous~\cite{Aldous}). The scaling \eqref{E:1.12} thus corresponds to the walk run for a~$\theta$-multiple of the cover time.
As it turns out, under \eqref{E:1.12}, the asymptotic $[2g\theta+o(1)](\log N)^2$ marks the value of~$L_{t_N}^{D_N}$ at all but a vanishing fraction of the vertices in~$D_N$. In light of \twoeqref{E:max}{E:min}, this suggests that we call~$x\in D_N$ a \myemph{$\lambda$-thick point} if (for~$\lambda\in[0,1]$)
\begin{equation}
\label{E:2.4iu}
L_{t_N}^{D_N}(x)\ge 2 g\bigl(\sqrt\theta+\lambda\bigr)^2(\log N)^2
\end{equation}
and a \myemph{$\lambda$-thin point} if (for~$\lambda\in[0,\sqrt\theta)$)
\begin{equation}
\label{E:2.5iu}
L_{t_N}^{D_N}(x)\le 2 g\bigl(\sqrt\theta-\lambda\bigr)^2(\log N)^2.
\end{equation}
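As a purely illustrative aside (continuing the simulation sketch from the previous section, and again not part of the original text), the $\lambda$-thick points of \eqref{E:2.4iu} can be read off the simulated field directly:
\begin{verbatim}
import numpy as np

g = 1.0 / (2.0 * np.pi)
N, theta, lam = 30, 1.0, 0.3
t_N = 2.0 * g * theta * np.log(N) ** 2
a_N = 2.0 * g * (np.sqrt(theta) + lam) ** 2 * np.log(N) ** 2
L = simulate_local_times(N, t_N)             # sketch from the previous section
xs, ys = np.where(L >= a_N)                  # lambda-thick points
print(len(xs), list(zip(xs / N, ys / N))[:5])   # count and rescaled positions x/N
\end{verbatim}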
One of our goals is to describe the scaling limit of the sets of thick and thin points. This is best done via random measures of the form
\begin{equation}
\label{E:zetaND}
\zeta^D_N:=\frac1{W_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{(L_{t_N}^{D_N}(x)-a_N)/\sqrt{2a_N}}\,,
\end{equation}
where $a_N$ is a sequence with the asymptotic growth as the right-hand side of \twoeqref{E:2.4iu}{E:2.5iu} and~$W_N$ is a normalizing sequence. The specific choice of the normalization by $\sqrt{2a_N}$ reflects on the natural fluctuations of $L_{t_N}^{D_N}(x)$ (which turn out to be order $\log N$ even between nearest neighbors) and captures best the connection to the corresponding object for the DGFF to be discussed next.
\subsection{Level sets of zero-average DGFF}
Recall that $h^{D_N}$ denotes a sample of the DGFF in~$D_N$.
As shown by Bolthausen, Deuschel and Giacomin~\cite{BDG}, the maximum of~$h^{D_N}$ is asymptotic to~$2\sqrt g\log N$ and so the $\lambda$-thick points are naturally defined as those where the field exceeds~$2\lambda\sqrt g\log N$. Allowing for sub-leading corrections, these are best captured by the random measure
\begin{equation}
\label{E:etaDGFF}
\eta_N^D:=\frac1{K_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{h^{D_N}_x-\widehat a_N},
\end{equation}
where~$\{\widehat a_N\}$ is a centering sequence with the asymptotic $\widehat a_N\sim 2\lambda\sqrt{g}\log N$ and
\begin{equation}
\label{E:1.19e}
K_N:=\frac{N^2}{\sqrt{\log N}}\text{\rm e}\mkern0.7mu^{-\frac{(\widehat a_N)^2}{2g\log N}}.
\end{equation}
In \cite[Theorem~2.1]{BL4} it was shown that for each~$\lambda\in(0,1)$, there is a constant~$\fraktura c(\lambda)>0$ (independent of~$D$ or the approximating sequence $\{D_N\}_{N\ge1}$) such that, relative to the topology of vague convergence of measures on $\overline D\times(\mathbb R\cup\{+\infty\})$,
\begin{equation}
\label{E:1.19}
\eta_N^D\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\fraktura c(\lambda)\,Z^D_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h,
\end{equation}
where
\begin{equation}
\alpha:=\frac2{\sqrt g}
\end{equation}
and where~$Z_\lambda^D$ is a random a.s.-finite Borel measure in~$D$ called the \myemph{Liouville Quantum Gravity} (LQG) at parameter~$\lambda$-times critical. The measure~$Z^D_\lambda$ is normalized so that, for each Borel set~$A\subseteq D$,
\begin{equation}
\label{E:1.19a}
\mathbb E Z_\lambda^D(A)=\int_A r^{\,D}(x)^{2\lambda^2}\text{\rm d}\mkern0.5mu x,
\end{equation}
where~$r^D$ is an explicit bounded, continuous function supported on~$D$ that, for~$D$ simply connected, is the conformal radius; see~\cite[(2.10)]{BL4}.
As was shown in \cite{AB}, the measures $\{Z^D_\lambda\colon\lambda\in(0,1)\}$ are quite relevant for the exceptional level sets associated with the continuous-time random walk in the parametrization by the local time spent in the ``boundary vertex.'' Somewhat different measures will arise for the discrete-time random walk. Let $\Pi^D(x,\cdot)$ be the harmonic measure in~$D$ defined, e.g., as the exit distribution from~$D$ of a Brownian motion started at~$x$. The continuum Green function in~$D$ with Dirichlet boundary condition is then given by
\begin{equation}
\widehat G^D(x,y):=-g\log|x-y|+g\int_{\partial D}\Pi^D(x,\text{\rm d}\mkern0.5mu z)\log|y-z|.
\end{equation}
Writing~${\rm Leb}$ for the Lebesgue measure on~$\mathbb R^2$, let~$\fraktura d\colon\mathbb R^2\to\mathbb R$ be defined by
\begin{equation}
\fraktura d(x):={\rm Leb}(D)\frac{\int_D\text{\rm d}\mkern0.5mu y\,\widehat G^D(x,y)}{\int_{D\times D}\text{\rm d}\mkern0.5mu z\,\text{\rm d}\mkern0.5mu y\,
\widehat G^D(z,y)}.
\end{equation}
As is readily checked, $\fraktura d$ is bounded and continuous, vanishes outside~$D$ and integrates to~${\rm Leb}(D)$ over~$D$. (We also have~$\fraktura d\ge0$ because~$\widehat G^D\ge0$, and the Laplacian of~$\fraktura d$ is constant on~$D$, but neither fact is of consequence in the sequel.) See~Fig.~\ref{fig2}. We claim:
\begin{theorem}
\label{thm-DGFF}
For each $\lambda\in(0,1)$ and each~$D\in\mathfrak D$, there is a unique random measure $Z^{D,0}_\lambda$ on~$D$ such that, for any sequence $\{D_N\}_{N\ge1}$ of admissible approximations of~$D$ and any centering sequence $\{\widehat a_N\}_{N\ge1}$ satisfying $\widehat a_N\sim 2\lambda\sqrt g\log N$ as~$N\to\infty$,
\begin{equation}
\label{E:2.16new}
\Bigl(\eta^D_N\,\Big|\,\sum_{x\in D_N}h^{D_N}_x=0\Bigr)
\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\fraktura c(\lambda)\,Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h,
\end{equation}
where~$\fraktura c(\lambda)$ is as in \eqref{E:1.19}. Moreover,
if $Y$ is a normal random variable with mean zero and variance
\begin{equation}
\label{E:2.14new}
\sigma_D^2:=\int_{D\times D}\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu y\,\widehat G^D(x,y),
\end{equation}
then the measure from \twoeqref{E:1.19}{E:1.19a} obeys
\begin{equation}
\label{E:2.15new}
Y\independent Z^{D,0}_\lambda\quad\Rightarrow\quad
Z^D_\lambda(\text{\rm d}\mkern0.5mu x)\,\overset{\text{\rm law}}=\, \text{\rm e}\mkern0.7mu^{\lambda\alpha\fraktura d(x)Y}
\,Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x).
\end{equation}
The law of $Z^{D,0}_\lambda$ is determined uniquely by \eqref{E:2.15new}.
\end{theorem}
The existence of a random measure $Z^{D,0}_\lambda$ satisfying \eqref{E:2.15new} is part of the proof of \eqref{E:2.16new}. The uniqueness of the decomposition \eqref{E:2.15new} holds quite generally and constitutes the main technical ingredient of the proof; see Theorem~\ref{lemma-8.1new} which is of independent interest. The known properties of~$Z^D_\lambda$ (see \cite[Theorem 2.3]{BL4}) imply that $Z^{D,0}_\lambda$ is a.s.-finite and charges every non-empty open subset of~$D$ a.s.
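As an aside (an illustrative addition, not part of the original text), the function $\fraktura d$ on the unit square, plotted in Fig.~\ref{fig2}, can be approximated by a standard five-point finite-difference scheme: one solves $-\Delta u=1$ with zero boundary data and then normalizes $\fraktura d={\rm Leb}(D)\,u/\!\int_D u$, in accordance with the equation quoted in the caption of Fig.~\ref{fig2}.
\begin{verbatim}
import numpy as np

def dfrak_unit_square(M=60, n_iter=5000):
    """Finite-difference approximation of the function d on D = (0,1)^2:
    solve -Laplace(u) = 1 with u = 0 on the boundary (Jacobi iteration),
    then normalize so that d integrates to Leb(D) = 1 over D."""
    h = 1.0 / (M + 1)
    u = np.zeros((M + 2, M + 2))             # boundary rows/columns stay at 0
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:] + h * h)
    integral = u[1:-1, 1:-1].sum() * h * h   # approximates sigma_D^2
    return u / integral

d = dfrak_unit_square()
print(d.max())   # roughly 2.1, the peak of d at the centre of the square
\end{verbatim}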
\begin{figure}[t]
\vglue0.2cm
\centerline{\includegraphics[width = 3.8in]{./dfrak-function-100.pdf}
}
\begin{quote}
\small
\vglue-0.2cm
\caption{
\label{fig2}
A plot of function~$\fraktura d$ on~$D:=(0,1)^2$ obtained by solving the differential equation $-\Delta\fraktura d = {\rm Leb}(D)/\sigma_D^2$, where~$\Delta$ is the Laplacian, with Dirichlet boundary conditions on~$\partial D$.}
\normalsize
\end{quote}
\end{figure}
\subsection{Exceptional local-time sets}
We are now well equipped to state our results concerning the limits of the random measures \eqref{E:zetaND} for a given centering sequence~$\{a_N\}_{N\ge1}$ growing as the right-hand sides of \twoeqref{E:2.4iu}{E:2.5iu} and the normalizing sequence given by
\begin{equation}
\label{E:WN}
W_N:=\frac{N^2}{\sqrt{\log N}}\text{\rm e}\mkern0.7mu^{-\frac{(\sqrt{2t_N}-\sqrt{2a_N})^2}{2g\log N}}.
\end{equation}
For the thick points we then get:
\begin{theorem}[Thick points]
\label{thm-thick}
Suppose $\{t_N\}_{N\ge1}$ and $\{a_N\}_{N\ge1}$ are positive sequences such that, for some $\theta>0$ and some $\lambda\in(0,1)$, \eqref{E:1.12} and
\begin{equation}
\label{E:1.20}
\lim_{N\to\infty}\frac{a_N}{(\log N)^2}=2g(\sqrt\theta+\lambda)^2
\end{equation}
hold true.
Then for any~$x_N\in D_N$ and for~$X$ sampled from~$P^{x_N}$, the measures~$\zeta^D_N$ in \eqref{E:zetaND} with~$W_N$ as in \eqref{E:WN} obey
\begin{equation}
\label{E:1.21dis}
\zeta^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{\frac{\sqrt\theta}{\sqrt\theta+\lambda}}\,\text{\rm e}\mkern0.7mu^{-\alpha^2\lambda^2/16}\,\fraktura c(\lambda) \,\text{\rm e}\mkern0.7mu^{\alpha\lambda (\fraktura d(x)-1)Y}Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h
\end{equation}
in the sense of vague convergence of measures on $\overline D\times(\mathbb R\cup\{+\infty\})$,
where $Y=\mathcal N(0,\sigma_D^2)$ and~$Z^{D,0}_\lambda$ are independent
and $\fraktura c(\lambda)$ is as in \eqref{E:1.19}.
\end{theorem}
For the thin points, we similarly obtain:
\begin{theorem}[Thin points]
\label{thm-thin}
Suppose $\{t_N\}_{N\ge1}$ and $\{a_N\}_{N\ge1}$ are positive sequences such that, for some $\theta>0$ and some $\lambda\in(0,\sqrt\theta\wedge1)$, \eqref{E:1.12} and
\begin{equation}
\label{E:1.22}
\lim_{N\to\infty}\frac{a_N}{(\log N)^2}=2g(\sqrt\theta-\lambda)^2
\end{equation}
hold true.
Then for any~$x_N\in D_N$ and for~$X$ sampled from~$P^{x_N}$, the measures~$\zeta^D_N$ in \eqref{E:zetaND} with~$W_N$ as in \eqref{E:WN} obey
\begin{equation}
\label{E:1.23dis}
\zeta^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{\frac{\sqrt\theta}{\sqrt\theta-\lambda}}\,\text{\rm e}\mkern0.7mu^{-\alpha^2\lambda^2/16}\,\fraktura c(\lambda) \,\text{\rm e}\mkern0.7mu^{\alpha\lambda (\fraktura d(x)-1)Y}Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{+\alpha\lambda h}\text{\rm d}\mkern0.5mu h
\end{equation}
in the sense of vague convergence of measures on $\overline D\times(\mathbb R\cup\{-\infty\})$,
where $Y=\mathcal N(0,\sigma_D^2)$ and~$Z^{D,0}_\lambda$ are independent and $\fraktura c(\lambda)$ is as in \eqref{E:1.19}.
\end{theorem}
The limiting spatial distribution of the $\lambda$-thick and $\lambda$-thin points (as well as the distribution of the total number of these points) is governed by the measure
\begin{equation}
\label{E:2.21iu}
\text{\rm e}\mkern0.7mu^{\alpha\lambda (\fraktura d(x)-1)Y}Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x).
\end{equation}
In light of \eqref{E:2.15new}, this is somewhere between the zero-average LQG $Z_\lambda^{D,0}$ and the ``ordinary'' LQG $Z^D_\lambda$, which appeared in the limit for the parametrization by the local time at~$\varrho$. The second component of the measure on the right of \eqref{E:1.21dis} and \eqref{E:1.23dis} is exactly the same as that for the DGFF \eqref{E:1.19}. This is due to the judicious scaling of the second component of~$\zeta^D_N$ by $\sqrt{2a_N}$ rather than just~$\log N$, as was done in \cite{AB}.
\smallskip
Apart from the thick and thin points, \cite{AB} studied also the sets of points where the local time is order unity, called the \myemph{light} points, and the points where the local time vanishes, called the \myemph{avoided} points. In both cases, the LQG measure that appears is for parameter $\lambda:=\sqrt\theta$ (and $\theta\in(0,1)$). The control extends to the discrete-time problem parametrized by the total time as well. We start with the light points:
\begin{theorem}[Light points]
\label{thm-light}
Suppose $\{t_N\}_{N\ge1}$ is a positive sequence such that \eqref{E:1.12} holds for some $\theta\in(0,1)$. For any~$x_N\in D_N$ and for~$X$ sampled from~$P^{x_N}$, consider the measure
\begin{equation}
\label{E:varthetaND}
\vartheta^D_N:=\frac1{\widehat W_N }\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{L_{t_N}^{D_N}(x)},
\end{equation}
where
\begin{equation}
\label{E:1.31}
\widehat W_N :=N^2\text{\rm e}\mkern0.7mu^{-\frac{t_N}{g\log N}}.
\end{equation}
Then, in the sense of vague convergence of measures on~$\overline D\times[0,\infty)$,
\begin{equation}
\label{E:2.22ii}
\vartheta^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\,\text{\rm e}\mkern0.7mu^{\alpha\sqrt\theta (\fraktura d(x)-1)Y}\, Z_{\sqrt{\theta}\,}^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\mu(\text{\rm d}\mkern0.5mu h),
\end{equation}
where $\fraktura c(\lambda)$ is as in \eqref{E:1.19}, $Y=\mathcal N(0,\sigma_D^2)$ and~$Z^{D,0}_{\sqrt\theta}$ are independent and $\mu:=\sum_{n\ge0}q_n\delta_{n/4}$ for a sequence $\{q_n\colon n\ge0\}$ of non-negative numbers determined uniquely by
\begin{equation}
\label{E:2.27}
\sum_{n\ge0}q_n(1+s/4)^{-n} = \text{\rm e}\mkern0.7mu^{\frac{\alpha^2\theta}{2s}},\quad s>0.
\end{equation}
\end{theorem}
That~$\mu$ is supported on $\frac14\mathbb N_0:=\{0,\frac14,\frac12,\frac34,1,\dots\}$ arises from the normalization in~\eqref{E:local_time}.
From \eqref{E:2.22ii} we conclude that the number of vertices of~$D_N$ visited exactly~$n$ times during the first
\begin{equation}
[8g\theta+o(1)](\log N)^2\deg(D_N)
\end{equation}
steps of the random walk is thus asymptotic to
\begin{equation}
\label{E:2.26}
q_n\,\Bigl[\sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\,\int_D \text{\rm e}\mkern0.7mu^{\alpha\sqrt\theta (\fraktura d(x)-1)Y}\, Z_{\sqrt{\theta}\,}^{D,0}(\text{\rm d}\mkern0.5mu x)\Bigr]\,\widehat W_N,
\end{equation}
jointly for all~$n\ge0$.
Noting that $q_0=1$, straightforward limit considerations show:
\begin{theorem}[Avoided points]
\label{thm-avoid}
Suppose $\{t_N\}_{N\ge1}$ is a sequence such that \eqref{E:1.12} holds for some~$\theta\in(0,1)$. For any~$x_N\in D_N$ and for~$X$ sampled from~$P^{x_N}$, consider the measure
\begin{equation}
\label{E:kappaND}
\kappa^D_N:=\frac1{\widehat W_N }\sum_{x\in D_N}1_{\{L_{t_N}^{D_N}(x)=0\}}\,\delta_{x/N},
\end{equation}
where~$\widehat W_N $ is as in \eqref{E:1.31}. Then, in the sense of vague convergence of measures on~$\overline D$,
\begin{equation}
\label{E:2.27dis}
\kappa^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\,\text{\rm e}\mkern0.7mu^{\alpha\sqrt\theta (\fraktura d(x)-1)Y}\, Z_{\sqrt{\theta}\,}^{D,0}(\text{\rm d}\mkern0.5mu x),
\end{equation}
where $Y=\mathcal N(0,\sigma_D^2)$ and~$Z^{D,0}_{\sqrt\theta}$ are independent and $\fraktura c(\lambda)$ is as in \eqref{E:1.19}.
\end{theorem}
The above theorems will be deduced from the corresponding statements for a conti\-nu\-ous-time variant of~$X$ observed for a fixed time of order $N^2(\log N)^2$ (see Propositions~\ref{thm-thick-cont},~\ref{thm-thin-cont}, \ref{thm-light-cont} and~\ref{thm-avoid-cont}). These statements are nearly identical to Theorems~\ref{thm-thick}--\ref{thm-avoid} above, respectively, except for the term $\text{\rm e}\mkern0.7mu^{-\alpha^2\lambda^2/16}$ in \eqref{E:1.21dis} and \eqref{E:1.23dis} that arises from the fluctuations of the (continuous-time) local time at points where the discrete-time local time is large, and the measure~$\mu$ in \eqref{E:2.22ii} which gets replaced (in Proposition~\ref{thm-light-cont}) by a continuous, and quite explicit, counterpart.
The fixed-time results for continuous-time random walk will be inferred from the corresponding results in~\cite{AB} for the parametrization by the local time at~$\varrho$. The main difference is that the measure \eqref{E:2.21iu} gets replaced by the ``pure'' LQG~$Z^D_\lambda$.
\subsection{Local structure}
Similarly as in~\cite{AB}, we are also able to control the local structure of the above exceptional sets. For the thick and thin points, this is achieved by considering the measures on $D\times\mathbb R\times\mathbb R^{\mathbb Z^2}$ of the form
\begin{equation}
\label{E:zetaNDloc}
\zeta^{D,{\text{\rm loc}}}_N:=\frac1{W_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{(L_{t_N}^{D_N}(x)-a_N)/\sqrt{2a_N}}
\otimes\delta_{\{(L_{t_N}^{D_N}(x)-L_{t_N}^{D_N}(x+z))/\sqrt{2a_N}\colon z\in\mathbb Z^2\}},
\end{equation}
where the third coordinate captures the ``shape'' of the local-time configuration near every exceptional point.
In the parametrization by the local time at the boundary vertex, the asymptotic ``law'' of the third component in \eqref{E:zetaNDloc} turned out to be that of the pinned DGFF (i.e., the DGFF in~$\mathbb Z^2\smallsetminus\{0\}$) reduced by a multiple of the potential kernel~$\fraktura a$. Here we note that, in our normalization, $\fraktura a$ is the unique non-negative function on~$\mathbb Z^2$ that is discrete harmonic on~$\mathbb Z^2\smallsetminus\{0\}$ and obeys $\fraktura a(0)=0$ and $\fraktura a(x) = g\log|x|+O(1)$ as~$|x|\to\infty$. The pinned DGFF~$\phi$ then has the covariance structure
\begin{equation}
\label{E:2.30}
\text{\rm Cov}(\phi_x,\phi_y)=\fraktura a(x)+\fraktura a(y)-\fraktura a(x-y).
\end{equation}
As it turns out, a different (albeit closely related) Gaussian process arises for the discrete-time walk parametrized by its total time:
\begin{theorem}[Local structure of thick/thin points]
\label{thm-thick-loc}
For the setting and under the conditions of Theorem~\ref{thm-thick}, relative to the vague topology of $\overline D\times(\mathbb R\cup\{+\infty\})\times\mathbb R^{\mathbb Z^2}$,
\begin{equation}
\zeta^{D,{\text{\rm loc}}}_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\zeta^D\otimes\nu_\lambda,
\end{equation}
where $\zeta^D$ is the measure on the right of \eqref{E:1.21dis} and $\nu_\lambda$ is the law of $\widetilde\phi+\alpha\lambda\fraktura a-\frac18\alpha\lambda1_{\{0\}^{\text{\rm c}}}$, for~$\widetilde\phi$ a centered Gaussian process on~$\mathbb Z^2$ with covariances
\begin{equation}
\label{E:2.32}
\text{\rm Cov}(\widetilde\phi_x,\widetilde\phi_y)=\fraktura a(x)+\fraktura a(y)-\fraktura a(x-y)-\frac18\bigl[1-\delta_{x,0}-\delta_{y,0}+\delta_{x,y}\bigr].
\end{equation}
The same statement (relative to the vague topology on $\overline D\times(\mathbb R\cup\{-\infty\})\times\mathbb R^{\mathbb Z^2}$) holds for the setting of Theorem~\ref{thm-thin} except that~$\nu_\lambda$ is then the law of
$\widetilde\phi-\alpha\lambda\fraktura a+\frac18\alpha\lambda1_{\{0\}^{\text{\rm c}}}$.
\end{theorem}
To demonstrate that $\widetilde\phi$ is indeed closely related to the pinned DGFF~$\phi$, we note that, for $\{n_z\colon z\in\mathbb Z^2\}$ i.i.d.\ $\mathcal N(0,\frac18)$ that are independent of~$\widetilde\phi$,
\begin{equation}
\label{E:2.33}
\{\phi_z\colon z\in\mathbb Z^2\}\,\,\overset{\text{\rm law}}=\,\,\{\widetilde\phi_z+n_0-n_z\colon z\in\mathbb Z^2\}.
\end{equation}
We will verify this relation, along with the fact that \eqref{E:2.32} is positive semidefinite and thus the covariance of a Gaussian process, in Lemma~\ref{lemma-8.4a}. The i.i.d.\ normals appear during a conversion from the continuous-time walk to its discrete-time counterpart. They represent the scaling limit of the fluctuations of the local time due to the random (i.i.d.\ exponential) nature of the jump times.
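As a quick consistency check of \eqref{E:2.33} (the full argument appears in Lemma~\ref{lemma-8.4a}), note that for the i.i.d.\ normals above,
\begin{equation}
\text{\rm Cov}(n_0-n_x,n_0-n_y)=\frac18\bigl[1-\delta_{x,0}-\delta_{y,0}+\delta_{x,y}\bigr],
\end{equation}
which exactly compensates the last term in \eqref{E:2.32} and thus recovers the pinned-DGFF covariance \eqref{E:2.30}.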
We will also address the local time structure in the vicinity of the avoided points. This is done by considering the measure on $D\times[0,\infty)^{\mathbb Z^2}$ defined by
\begin{equation}
\label{E:kappaNDloc}
\kappa^{D,{\text{\rm loc}}}_N:=\frac1{\widehat W_N }\sum_{x\in D_N}1_{\{L_{t_N}^{D_N}(x)=0\}}\,\delta_{x/N}\otimes\delta_{\{L^{D_N}_{t_N}(x+z)\colon z\in\mathbb Z^2\}}.
\end{equation}
For reasons explained earlier, the measure is concentrated on $D\times(\frac14\mathbb N_0)^{\mathbb Z^2}$.
Recall from \cite[Theorem~2.8]{AB} that, for the continuous-time random walk parametrized by the local time at the boundary vertex and observed at the time corresponding to the~$\theta$-multiple of the cover time, the limit distribution of the local configuration is described by the law $\nu_{\theta}^{\text{\rm RI}}$ of the occupation-time field of random interlacements at level~$u:=\pi\theta$. This measure was constructed by Rodriguez~\cite[Theorems~3.3 and~4.2]{R18} (see \cite[Section~2.6]{AB} for a summary of the construction). For the discrete-time random walk parametrized by its total time we get a discrete-time counterpart of $\nu_{\theta}^{\text{\rm RI}}$:
\begin{theorem}[Local structure of avoided points]
\label{thm-avoid-loc}
For each~$u>0$, there is a unique Borel measure $\nu_{u}^{\text{\rm RI,\,dis}}$ on $[0,\infty)^{\mathbb Z^2}$ that is supported on $(\frac14\mathbb N_0)^{\mathbb Z^2}$ and obeys the following: For
\begin{enumerate}
\item[(1)] $\{\ell(z)\colon z\in\mathbb Z^2\}$ a sample from $\nu_{u}^{\text{\rm RI,\,dis}}$, and
\item[(2)] $\{\tau_{z,j}\colon z\in\mathbb Z^2,\,j\ge1\}$ independent i.i.d.\ Exponential(1),
\end{enumerate}
we have
\begin{equation}
\nu_{u}^{\text{\rm RI}} = \text{\rm law of }\Bigl\{\frac14\sum_{j=1}^{4\ell(z)}\tau_{z,j}\colon z\in\mathbb Z^2\Bigr\}.
\end{equation}
For the setting and under the conditions of Theorem~\ref{thm-avoid}, for each~$\theta\in(0,1)$ we then have
\begin{equation}
\kappa^{D,{\text{\rm loc}}}_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\kappa^D\otimes\nu_{\theta}^{\text{\rm RI,\,dis}}
\end{equation}
where $\kappa^D$ is the measure on the right of \eqref{E:2.27dis}.
\end{theorem}
Similarly as in \cite{AB}, we will not attempt to make statements concerning the local structure of the light points as that would require developing the corresponding extension of the above occupation-time measure to the situation when the local time at the origin does not vanish.
\subsection{Remarks}
We proceed with a couple of remarks. First note that, along with \eqref{E:min} and the fact that~$Z^{D,0}_{\sqrt\theta}$ is supported on all of~$D$ a.s., Theorem~\ref{thm-avoid} implies that the cover time is indeed marked by the choice~$\theta:=1$. Second, note that
an explicit formula for~$q_n$ can be extracted from \eqref{E:2.27}. This is achieved using the identity
\begin{equation}
\text{\rm e}\mkern0.7mu^{x^2/s}=1+\int_0^\infty
\frac{x}{2\sqrt{t}}\,\text{\rm e}\mkern0.7mu^{t}\,I_1(x\sqrt{t})\,\text{\rm e}\mkern0.7mu^{-(1+s/4)t}\,\text{\rm d}\mkern0.5mu t,
\end{equation}
where $I_1(z):=\sum_{n\geq0}\frac{1}{n!(n+1)!}(z/2)^{2n+1}$ is a modified Bessel function. Expanding $\text{\rm e}\mkern0.7mu^{t}$ and $\frac1{\sqrt{t}}I_1(x\sqrt{t})$ into power series in~$t$ and scaling~$t$ by $(1+s/4)$ then readily shows
\begin{equation}
q_{n+1} = n!\sum_{j=0}^n\frac{(\alpha^2\theta/8)^{j+1}}{j!(j+1)!(n-j)!}
\end{equation}
for each~$n\ge0$. See also \eqref{E:tilde-mu} for the corresponding formulas in continuous time.
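As a hedge against algebraic slips, the explicit formula can be checked numerically against the generating-function identity \eqref{E:2.27}; the following sketch (with arbitrary placeholder values of~$\alpha$, $\theta$ and~$s$, chosen only for the test) is one way to do so.
\begin{verbatim}
# Numerical sanity check (not part of the proofs): the explicit formula
#   q_0 = 1,  q_{n+1} = n! sum_{j=0}^n (alpha^2 theta/8)^{j+1}/(j!(j+1)!(n-j)!)
# should satisfy sum_{n>=0} q_n (1+s/4)^{-n} = exp(alpha^2 theta/(2s)), s > 0.
from math import comb, exp, factorial

alpha, theta, s = 2.0, 0.3, 1.7        # placeholder test values only
c = alpha**2 * theta / 8.0

def q(n):
    if n == 0:
        return 1.0
    m = n - 1                          # rewrite m!/(j!(m-j)!) as a binomial
    return sum(comb(m, j) * c**(j + 1) / factorial(j + 1) for j in range(m + 1))

lhs = sum(q(n) * (1.0 + s / 4.0)**(-n) for n in range(150))   # truncated series
rhs = exp(alpha**2 * theta / (2.0 * s))
print(lhs, rhs)                        # the two values agree to many digits
\end{verbatim}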
Third, as we will see in the proofs, the random variable~$Y$ in the measure \eqref{E:2.21iu} represents the limit of normalized fluctuations of the local time at the boundary vertex for the first $\lfloor t_N\deg(D_N)\rfloor$ steps of the random walk (see Lemma~\ref{lemma-TY}).
A key point is that this becomes statistically independent of the level-set statistics in the limit.
Incidentally, through \eqref{E:2.26}, the total mass of the measure \eqref{E:2.21iu} describes the limit law of the normalized total number of uncovered vertices at the time scale given by the $\lambda^2$-multiple of the cover time.
Fourth, the reader may wonder why we had to include the degree of~$\varrho$ in the normalization of the local time \eqref{E:LVt} by~$\deg(V)$. This is because, although $\deg(\varrho)=o(|D_N|)$ under \twoeqref{E:1.8i}{E:1.8ii} (see Lemma~\ref{lemma-5.8}), once the ratio $\deg(\varrho)/|D_N|$ exceeds $1/\log N$ (which can occur under \twoeqref{E:1.8i}{E:1.8ii}), removing $\deg(\varrho)$ from the normalization changes the scaling of the normalization constants~$W_N$ and~$\widehat W_N$ with~$N$.
Fifth, as in \cite{AB}, the above statements deliberately avoid various boundary values of the parameters; i.e., $\lambda=1$ for the thick points, $\lambda=\sqrt\theta\wedge 1$ for the thin points and~$\theta=1$ for the light and avoided points. All of these are closely related to the statistics of nearly-maximal DGFF values, which is different than the regime described in Theorem~\ref{thm-DGFF}. While the nearly-maximal DGFF values are now well understood thanks to the work of the second author with O.~Louidor \cite{BL1,BL2,BL3} and with S.~Guffler and O.~Louidor~\cite{BGL}, the recent work of Cortines, Louidor and Saglietti~\cite{CLS} shows that the connection between the avoided points at~$\theta=1$ (i.e., the time scale of the cover time) and the DGFF extrema is considerably more subtle.
Sixth, a natural setting for the above problem is the random walk on a lattice torus $(\mathbb Z/(N\mathbb Z))^2$ started from any given vertex~$\varrho$. As our work in progress shows~\cite{ABL}, the scaling of the corresponding measures is then more complicated --- and, in particular, the scaling sequences~$W_N$ and~$\widehat W_N$ have to be taken \myemph{random}. This is related to the fact that, for random walks of time-length order $N^2(\log N)^2$, the local time at the starting point of the walk exhibits fluctuations of order $(\log N)^{3/2}$ on the torus while these are only of order $\log N$ at the boundary vertex in our planar domains.
Seventh, we note the recent preprints of Jego~\cite{Jego1,Jego2}, where measures of the kind \eqref{E:zetaND} associated with the thick points of planar Brownian motion run until the first exit from a bounded domain are shown to admit a non-trivial scaling limit that is identified with the limit of multiplicative chaos measures associated with the root of the local time. In~\cite{Jego2} the limit measure is shown to obey a list of natural properties that characterize it uniquely. It remains to be seen whether the limit measure bears any connection to Gaussian Free Field and/or Liouville Quantum Gravity.
Finally, we note that Dembo, Peres, Rosen and Zeitouni \cite{DPRZ01, DPRZ06}
and Okada \cite{Okada1, Okada2, Okada3} analyzed the fractal nature and clustering of the sets of thick points
and avoided points in the setting of a random walk killed on exit from $D_N$ (for the thick points) and on two-dimensional torus (for the avoided points).
In particular, for $0 < \beta < 1$, the growth exponents have been obtained for
\begin{equation}
\#\Bigl\{(x_1, x_2) \in D_N\times D_N \colon |x_1 - x_2| \le N^{\beta},\,\min\{L^{D_N}_{H_\varrho}(x_1),L^{D_N}_{H_\varrho}(x_2)\}\ge s(\log N)^2\Bigr\}
\end{equation}
with $s>0$ and
\begin{equation}
\#\Bigl\{(x_1, x_2) \in D_N\times D_N\colon |x_1 - x_2| \le N^{\beta},\,\max\{L^{D_N}_{t_N}(x_1),L^{D_N}_{t_N}(x_2)\}=0\Bigr\},
\end{equation}
as well as the sets where ``$\min$'' and ``$\max$'' are swapped, which amounts to changing from the behavior near a typical point in the level set to a typical point in~$D_N$.
These conclusions cannot be gleaned from our results because, after scaling space by~$N$, the pairs in question are within distance $N^{-1+\beta}$ of each other, which vanishes as $N \to \infty$. Nonetheless, the obtained exponents coincide with those for the DGFF thick points computed by Daviaud~\cite{Daviaud} and thus affirm the universality of the DGFF.
\subsection{Outline}
The rest of this paper is organized as follows. In Section~\ref{sec3} we derive the scaling limit for the level sets of zero-average DGFF. Section~\ref{sec4} extends the conclusions of~\cite{AB} on the local time parametrized by the local time at~$\varrho$ to include information on fluctuations of the total time of the walk. This naturally feeds into Section~\ref{sec5} where we establish the scaling limit of exceptional points for the local time of the continuous-time random walk in the parametrization of the total time. Section~\ref{sec6} then controls the effect of starting the walk at an arbitrary point. In Section~\ref{sec7} we then prove our main theorems above concerning the discrete-time walk except for the local behavior, which is deferred to Section~\ref{sec8}.
\section{Zero average DGFF level sets}
\label{sec3}\noindent
We are now ready to commence the proofs. As our first item of business, we will address Theorem~\ref{thm-DGFF} on the level sets of the zero-average DGFF. Our strategy is to derive the statement from the unconditional convergence \eqref{E:1.19}. This leads to a convolution identity whose resolution requires a uniqueness statement that pertains to the whole class of Gaussian Multiplicative Chaos measures:
\begin{theorem}
\label{lemma-8.1new}
Given a bounded open set~$D\subset\mathbb R^d$, let $M^D$ and~$\widetilde M^D$ be two random a.s.-finite Borel measures on~$D$ and let~$\Phi$ be a centered Gaussian field on~$D$ independent of~$M^D$ and~$\widetilde M^D$ such that, for some bounded measurable functions $\mathfrak h_k\colon D\to\mathbb R$,
\begin{equation}
\label{E:3.1}
\text{\rm Cov}\bigl(\Phi(x),\Phi(y)\bigr)=\sum_{k=0}^\infty \mathfrak h_k(x)\mathfrak h_k(y),\quad \text{\rm locally uniformly in }x,y\in D.
\end{equation}
Then
\begin{equation}
\label{E:8.1new}
\text{\rm e}\mkern0.7mu^{\Phi(x)}M^D(\text{\rm d}\mkern0.5mu x)\,\,\,\overset{\text{\rm law}}=\,\,\,\text{\rm e}\mkern0.7mu^{\Phi(x)}\widetilde M^D(\text{\rm d}\mkern0.5mu x)
\end{equation}
implies $M^D\,\overset{\text{\rm law}}=\, \widetilde M^D$.
\end{theorem}
We remark that for the needs of the present paper it would suffice to treat the case when the sum in \eqref{E:3.1} consists of only one non-zero term. However, this still constitutes the bulk of the proof and so we include the more general case as it is interesting in its own right. The result extends (with suitable modifications) even to the case when~$\Phi$ is a generalized Gaussian Field; the statement thus ``reverse engineers'' the base measure from the associated Gaussian Multiplicative Chaos. Our setting goes even somewhat beyond that of, e.g., Shamov~\cite{Shamov} as we make no moment assumptions on~$M^D$ and~$\widetilde M^D$.
\smallskip
The proof of Theorem~\ref{lemma-8.1new} hinges on the following technical observation:
\begin{lemma}
\label{lemma-3.2}
Let~$\mathfrak h\colon D\to\mathbb R$ and $f\colon D\to[0,\infty)$ be bounded and measurable and let $M^D$ be a random a.s.-finite Borel measure on~$D$. Let~$Y=\mathcal N(0,1)$ be independent of~$M^D$. Define
$\phi\colon \mathbb R\times [0,\infty)\to[0,1]$ by
\begin{equation}
\label{E:3.3}
\phi(\lambda,t):=E\bigl(\text{\rm e}\mkern0.7mu^{-\langle M^D,\,\text{\rm e}\mkern0.7mu^{\sqrt{t}\,\mathfrak h(\cdot)Y-\lambda\mathfrak h(\cdot)}f\rangle}\bigr).
\end{equation}
Then~$\phi$ is continuous on its domain and smooth on the interior thereof. Moreover, $\phi$ satisfies the heat equation,
\begin{equation}
\label{E:3.4}
\frac{\partial\phi}{\partial t} = \frac12\frac{\partial^2\phi}{\partial \lambda^2},\qquad (\lambda,t)\in\mathbb R\times(0,\infty).
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
The continuity of~$\phi$ on $\mathbb R\times[0,\infty)$ follows by the Bounded Convergence Theorem. Using that $\sqrt{t}Y=\mathcal N(0,t)$ and invoking Tonelli's Theorem we get
\begin{equation}
\phi(\lambda,t) = \int\frac{\text{\rm d}\mkern0.5mu y}{\sqrt{2\pi t}}\,\text{\rm e}\mkern0.7mu^{-\frac{(y-\lambda)^2}{2t}}\phi(y,0).
\end{equation}
As~$y\mapsto\phi(y,0)$ is bounded and measurable, differentiation under the integral sign shows that~$\phi$ is smooth on $\mathbb R\times(0,\infty)$. Since the density of $\mathcal N(0,t)$ solves the heat equation \eqref{E:3.4}, the Dominated Convergence Theorem ensures that so does~$\phi$.
\end{proofsect}
We are now ready to give:
\begin{proofsect}{Proof of Theorem~\ref{lemma-8.1new}}
Let us first assume that~$\Phi$ takes the form~$\mathfrak h(x)Y$ for some bounded measurable $\mathfrak h\colon D\to\mathbb R$ and~$Y=\mathcal N(0,1)$ independent of~$M^D$ and~$\widetilde M^D$. Assume that
\begin{equation}
\label{E:3.10}
\text{\rm e}\mkern0.7mu^{\mathfrak h(x)Y}M^D(\text{\rm d}\mkern0.5mu x)\,\,\,\overset{\text{\rm law}}=\,\,\,\text{\rm e}\mkern0.7mu^{\mathfrak h(x)Y}\widetilde M^D(\text{\rm d}\mkern0.5mu x).
\end{equation}
Given any bounded and measurable~$f\colon D\to[0,\infty)$, let~$\phi(\lambda,t)$, resp., $\widetilde\phi(\lambda,t)$ denote the functions in \eqref{E:3.3} with the random measure~$M^D$, resp.,~$\widetilde M^D$. Since also $x\mapsto\text{\rm e}\mkern0.7mu^{-\lambda\mathfrak h(x)}f(x)$ is non-negative and measurable, from \eqref{E:3.10} we then have
\begin{equation}
\label{E:3.10e}
\phi(\lambda,1)=\widetilde\phi(\lambda,1),\quad \lambda\in\mathbb R.
\end{equation}
In light of Lemma~\ref{lemma-3.2}, the difference $\phi-\widetilde\phi$ is a bounded solution to the heat equation in $\mathbb R\times(0,\infty)$ with a continuous extension to $\mathbb R\times[0,\infty)$. A key point is that the heat equation is known to exhibit \myemph{backward uniqueness}. More precisely, a result of Seregin and \v Sver\'ak~\cite[Theorem~4.1]{SS} implies that every bounded solution to \eqref{E:3.4} that vanishes at a given positive time vanishes everywhere. Since \eqref{E:3.10e} implies that~$\phi-\widetilde\phi$ vanishes at ``time'' $t=1$, we have $\phi=\widetilde\phi$ on~$\mathbb R\times[0,\infty)$. From the equality $\phi(0,0)=\widetilde\phi(0,0)$ we then infer
\begin{equation}
E\bigl(\text{\rm e}\mkern0.7mu^{-\langle M^D,f\rangle}\bigr)=E\bigl(\text{\rm e}\mkern0.7mu^{-\langle \widetilde M^D,f\rangle}\bigr).
\end{equation}
Since~$f$ was arbitrary, the claim thus holds for any $\Phi$ of the form $\mathfrak h(\cdot)Y$.
To address the general case, we proceed as in Kahane~\cite{Kahane} (see \cite[Section~5.2]{B-notes} for a review). First note that by \eqref{E:3.1} we may write
\begin{equation}
\Phi(x)\,\,\overset{\text{\rm law}}=\,\,\Phi_n(x)+\sum_{k=0}^n \mathfrak h_k(x)Y_k,
\end{equation}
where $(Y_0,\dots,Y_n)$ are i.i.d.\ standard normal and where $\Phi_n$ is an independent centered Gaussian field with covariance
\begin{equation}
\text{\rm Cov}\bigl(\Phi_n(x),\Phi_n(y)\bigr)=\sum_{k=n+1}^\infty \mathfrak h_k(x)\mathfrak h_k(y).
\end{equation}
The argument for $\Phi$ of the form $\mathfrak h(\cdot)Y$ then shows, inductively, that \eqref{E:8.1new} implies
\begin{equation}
\label{E:3.19}
\text{\rm e}\mkern0.7mu^{\Phi_n(x)}M^D(\text{\rm d}\mkern0.5mu x)\,\,\,\overset{\text{\rm law}}=\,\,\,\text{\rm e}\mkern0.7mu^{\Phi_n(x)}\widetilde M^D(\text{\rm d}\mkern0.5mu x),\quad n\in\mathbb N.
\end{equation}
Letting $f\colon D\to[0,\infty)$ be measurable and supported in a compact set $A\subset D$, the assumption of locally-uniform convergence in \eqref{E:3.1} implies that, given~$\epsilon>0$ there is $n\in\mathbb N$ such that $\text{\rm Var}(\Phi_n(x))\le\epsilon$ for all~$x\in A$. This also gives $\text{\rm Cov}(\Phi_n(x),\Phi_n(y))\le\epsilon$ for all~$x,y\in A$ and so Kahane's convexity inequality along with Jensen's inequality show, for $Y_\epsilon=\mathcal N(0,\epsilon)$ independent of~$M^D$ and~$\widetilde M^D$,
\begin{equation}
\begin{aligned}
E\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{Y_\epsilon}\langle M^D,f\rangle}\bigr)
&=E\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{\epsilon/2}
\text{\rm e}\mkern0.7mu^{Y_\epsilon-\epsilon/2}\langle M^D,f\rangle}\bigr)
\\
&\!\!\!\!\overset{\text{Kahane}}\ge
E\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{\epsilon/2}
\langle M^D,\,\text{\rm e}\mkern0.7mu^{\Phi_n(\cdot)-\frac12\text{\rm Var}(\Phi_n(\cdot))}f\rangle}\bigr)
\\
&\!\!\overset{\eqref{E:3.19}}=E\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{\epsilon/2}
\langle \widetilde M^D,\,\text{\rm e}\mkern0.7mu^{\Phi_n(\cdot)-\frac12\text{\rm Var}(\Phi_n(\cdot))}f\rangle}\bigr)
\overset{\text{Jensen}}\ge E\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{\epsilon/2}
\langle \widetilde M^D,f\rangle}\bigr).
\end{aligned}
\end{equation}
Taking~$\epsilon\downarrow0$ and noting that this implies~$Y_\epsilon\to0$ in probability then shows, with the help of the Bounded Convergence Theorem,
\begin{equation}
E\bigl(\text{\rm e}\mkern0.7mu^{-\langle M^D,f\rangle}\bigr)\ge E\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde M^D,f\rangle}\bigr).
\end{equation}
By symmetry, equality must hold for all~$f$ as above and so $M^D\,\overset{\text{\rm law}}=\, \widetilde M^D$, as desired.
\end{proofsect}
Equipped with Theorem~\ref{lemma-8.1new}, we are ready to give:
\begin{proofsect}{Proof of Theorem~\ref{thm-DGFF}}
Abbreviate
\begin{equation}
\label{E:3.21}
Y_N:=\frac1{|D_N|}\sum_{x\in D_N}h^{D_N}_x.
\end{equation}
Then $Y_N$ is normal with mean zero and variance
\begin{equation}
\text{\rm Var}(Y_N)=\frac1{|D_N|^2}\sum_{x,y\in D_N}G^{D_N}(x,y).
\end{equation}
Moreover, denoting
\begin{equation}
\fraktura d_N(x):=\frac{|D_N|\sum_{y\in D_N}G^{D_N}(\lfloor xN\rfloor,y)}{\sum_{y,z\in D_N}G^{D_N}(z,y)}
\end{equation}
a covariance calculation (namely, $\text{\rm Cov}(Y_N,h^{D_N}_x)=\frac1{|D_N|}\sum_{y\in D_N}G^{D_N}(x,y)=\fraktura d_N(x/N)\,\text{\rm Var}(Y_N)$) shows that~$Y_N$ is independent of
\begin{equation}
\widehat h^{D_N}_x:=h^{D_N}_x-\fraktura d_N(x/N)Y_N
\end{equation}
which, we note, has zero average over~$D_N$. Hence, if we define the zero-average variant of~$\eta^D_N$ by
\begin{equation}
\label{E:3.25}
\eta^{D,0}_N:=\frac1{K_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{\,\widehat h^{D_N}_x- \widehat{a}_N},
\end{equation}
we have
\begin{equation}
\label{E:3.26}
\eta^{D,0}_N\independent Y_N\quad\text{and}\quad \eta^D_N=\eta^{D,0}_N\circ\theta_{\fraktura d_N(\cdot)Y_N}^{-1},
\end{equation}
where~$\theta_{s(\cdot)}\colon D\times\mathbb R\to D\times\mathbb R$ is defined by $\theta_{s(\cdot)}(x,h):=(x,h+s(x))$. The stated independence also shows
\begin{equation}
\Bigl(\eta^D_N\,\Big|\,\sum_{x\in D_N}h^{D_N}_x=0\Bigr) \,\,\overset{\text{\rm law}}=\,\,\,\eta^{D,0}_N
\end{equation}
and so we may and will henceforth focus on the limit of~$\eta^{D,0}_N$.
Using the uniform bound $G^{D_N}(x,y)\le g\log\frac{N}{|x-y|+1}+c$ along with
\begin{equation}
\label{E:3.28}
G^{D_N}\bigl(\lfloor xN\rfloor,\lfloor yN\rfloor\bigr)\underset{N\to\infty}\longrightarrow\,\widehat G^D(x,y),\quad x,y\in D,\,x\ne y,
\end{equation}
the Dominated Convergence Theorem shows that~$\text{\rm Var}(Y_N)$ converges to $\sigma_D^2$ from \eqref{E:2.14new}. We thus have $Y_N{\,\overset{\text{\rm law}}\longrightarrow\,} Y=\mathcal N(0,\sigma_D^2)$. In particular, $\{Y_N\colon N\ge1\}$ is tight and so from the tightness of~$\eta^D_N$, \eqref{E:3.26} and the uniform boundedness of~$\fraktura d_N$ we get
\begin{equation}
\{\eta^{D,0}_N\colon N\ge1\}\text{ is tight}.
\end{equation}
Similarly, we show that $\fraktura d_N\to\fraktura d$ uniformly on~$D$. (This implies~$\fraktura d\ge0$.) Writing the equality in \eqref{E:3.26} via Laplace transforms against a test function $f\in C_{\text{\rm c}}(D\times\mathbb R)$ and invoking \eqref{E:1.19}, any subsequential limit $\eta^{D,0}$ of $\{\eta^{D,0}_N\colon N\ge1\}$ thus obeys
\begin{equation}
\label{E:8.14new}
\eta^{D,0}\circ\theta_{\fraktura d(\cdot)Y}^{-1} \,\,\overset{\text{\rm law}}=\,\, \fraktura c(\lambda)Z^D_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h\,,
\end{equation}
where~$Y=\mathcal N(0,\sigma_D^2)$ is such that $Y\independent \eta^{D,0}$ on the left-hand side.
Next we note that we may realize \eqref{E:8.14new} as an a.s.\ equality. This is because \eqref{E:8.14new} implies, for any measurable $A\subseteq D$ and $B\subseteq\mathbb R$ with ${\rm Leb}(A)>0$,
\begin{equation}
\frac{\eta^{D,0}\circ\theta_{\fraktura d(\cdot)Y}^{-1}(A\times B)}{\eta^{D,0}\circ\theta_{\fraktura d(\cdot)Y}^{-1}(A\times[0,1])}=\alpha\lambda(1-\text{\rm e}\mkern0.7mu^{-\alpha\lambda})^{-1}\int_B\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\,\text{\rm d}\mkern0.5mu h\quad\text{a.s.}
\end{equation}
due to the fact that equality in law with a constant implies equality a.s. We conclude that the measure
\begin{equation}
A\mapsto\alpha\lambda[\fraktura c(\lambda)(1-\text{\rm e}\mkern0.7mu^{-\alpha\lambda})]^{-1}\,\eta^{D,0}\circ\theta_{\fraktura d(\cdot)Y}^{-1}(A\times[0,1])
\end{equation}
is equidistributed to~$Z^D_\lambda$.
Replacing $Z^D_\lambda$ by this measure then gives us equality a.s.
Once we have \eqref{E:8.14new} as an a.s.\ equality, and $Z^D_\lambda$ thus as a measurable function of~$\eta^{D,0}$ and~$Y$, we apply a routine change of variables to get
\begin{equation}
\label{E:3.33}
\eta^{D,0}=\fraktura c(\lambda)\,\text{\rm e}\mkern0.7mu^{-\alpha\lambda\fraktura d(x)Y}\,Z^D_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h.
\end{equation}
Setting
\begin{equation}
\label{E:8.19new}
Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x):=\text{\rm e}\mkern0.7mu^{- \alpha\lambda\fraktura d(x)Y}Z^D_\lambda(\text{\rm d}\mkern0.5mu x)
\end{equation}
the independence of~$\eta^{D,0}$ of~$Y$ shows $Z^{D,0}_\lambda\independent Y$ and thus proves existence of the decomposition \eqref{E:2.15new}. Since the decomposition is unique by Theorem~\ref{lemma-8.1new} and the fact that~$\fraktura d$ is bounded and continuous, the law of $Z^{D,0}_\lambda$ does not depend on the subsequential limit~$\eta^{D,0}$. It follows that all subsequential limits of $\{\eta^{D,0}_N\colon N\ge1\}$ are equal in law and so we get the convergence statement \eqref{E:2.16new} as well.
\end{proofsect}
Our use of Theorem~\ref{thm-DGFF} will invariably come through:
\begin{corollary}
\label{cor-3.3}
Under the conditions of Theorem~\ref{thm-DGFF}, and for~$Y_N$ as in \eqref{E:3.21},
\begin{equation}
\eta^D_N\otimes\delta_{Y_N}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\fraktura c(\lambda)\,\text{\rm e}\mkern0.7mu^{\alpha\lambda\fraktura d(x)Y}\,Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h\otimes\delta_Y,
\end{equation}
where $Y=\mathcal N(0,\sigma_D^2)$, for~$\sigma_D^2$ as in \eqref{E:2.14new}, is such that $Y\independent Z^{D,0}_\lambda$.
\end{corollary}
\begin{proofsect}{Proof}
Combining \eqref{E:3.26} with the facts that $Y_N\to Y$ in law and $\fraktura d_N\to\fraktura d$ uniformly shows
\begin{equation}
\eta_N^D\otimes\delta_{Y_N}\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\bigl(\eta^{D,0}\circ\theta^{-1}_{\fraktura d(\cdot)Y}\bigr)\otimes\delta_Y,
\end{equation}
where $\eta^{D,0}$ is as in \eqref{E:3.33} and obeys $Y\independent\eta^{D,0}$. Invoking \eqref{E:8.19new}, the claim follows by a routine change of variables.
\end{proofsect}
\section{Augmented boundary vertex measures}
\label{sec4}\noindent
We will now move to the discussion of local time level sets. Our proofs build on the conclusions derived in~\cite{AB} for the local time parametrized by its value at the boundary vertex~$\varrho$. In order to transfer these conclusions to the setting of a fixed total time, we will need to control the fluctuations of the total local time at a fixed local time at~$\varrho$. Our first step is thus to augment the results of~\cite{AB} by information about these fluctuations.
We will again introduce the corresponding quantities on a general finite connected graph with vertex set~$V\cup\{\varrho\}$. Consider a joint law of paths~$X$ of the discrete-time random walk on~$V\cup\{\varrho\}$ and an independent sample $t\mapsto \widetilde N(t)$ of a rate-1 Poisson process. The continuous-time walk is then defined as
\begin{equation}
\label{E:tilde-X}
\widetilde X_t:=X_{\widetilde N(t)},\quad t\ge0.
\end{equation}
The local time naturally associated with~$\widetilde X$ is given by
\begin{equation}
\label{E:2.32uw}
\widetilde L_t^V(u):=\frac1{\deg(u)}\int_0^t\text{\rm d}\mkern0.5mu s\,1_{\{\widetilde X_s=u\}}.
\end{equation}
Denoting $\hat\tau_{\varrho}(t):=\inf\{s\ge0\colon\widetilde L_s^V(\varrho)\ge t\}$, the local time parametrized by its value at~$\varrho$ is defined as
\begin{equation}
\label{E:2.27ia}
\widehat L_t^V(v):=\widetilde L_{\hat\tau_{\varrho}(t)}^V(v).
\end{equation}
Note that, in particular, we have $\widehat L_t^V(\varrho)=t$ for all~$t\ge0$. The same is true about the expected value at any vertex; i.e., $E^\varrho\widehat L^V_t(v)=t$ for all~$v\in V$.
At a given~$t\ge0$, the total (continuous) local time of the walk is computed by adding $\widehat L^V_t(v)$ over all~$v\in V\cup\{\varrho\}$. The quantity
\begin{equation}
\label{E:4.4u}
T(t):=\frac1{\sqrt{2t}\,|V|}\sum_{v\in V}\bigl[\,\widehat L^V_t(v)-t\bigr]
\end{equation}
then denotes the normalized (empirical) fluctuation of the total local time. (Note that $v=\varrho$ can be freely added to the sum as $\widehat L^V_t(\varrho)=t$.) To explain the specific choice of the normalization, we recall the following result from Eisenbaum, Kaspi, Marcus, Rosen and Shi~\cite{EKMRS} (with improvements by Zhai~\cite[Section~5.4]{Zhai}):
\begin{theorem}[Second Ray-Knight Theorem]
\label{lemma-Dynkin}
For each~$t>0$ there exists a coupling of $\widehat L^V_t$ (sampled under~$P^\varrho$) and two copies of the DGFF $h^V$ and~$\tilde h^V$ such that
\begin{equation}
\label{E:Dynkin1}
h^V\text{\rm\ and } \widehat L^V_t \text{\rm\ are independent}
\end{equation}
and
\begin{equation}
\label{E:Dynkin2}
\widehat L_t^V(u)+\frac12(h_u^V)^2 = \frac12\bigl(\tilde h_u^V+\sqrt{2t}\bigr)^2, \quad u\in V.
\end{equation}
\end{theorem}
Using the stated coupling, we readily compute
\begin{equation}
\label{E:4.7}
T(t) = \frac1{|V|}\sum_{u\in V}\tilde h^V_u
+\frac1{\sqrt{2t}\,|V|}\sum_{u\in V}\frac{(\tilde h^V_u)^2-(h^V_u)^2}2.
\end{equation}
Note that the first term is the average of the field~$\tilde h^V$.
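For the reader's convenience, here is the one-line computation behind \eqref{E:4.7}: expanding the square in \eqref{E:Dynkin2} gives
\begin{equation}
\widehat L^V_t(u)-t=\sqrt{2t}\,\tilde h^V_u+\frac{(\tilde h^V_u)^2-(h^V_u)^2}2,\quad u\in V,
\end{equation}
and \eqref{E:4.7} follows upon dividing by $\sqrt{2t}\,|V|$ and summing over~$u\in V$.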
In what follows, the role of~$V$ will be taken by the sets~$D_N$ and~$\varrho$ by the ``boundary vertex.'' We let $h^{D_N}$ be the DGFF on~$D_N$ and, given a sequence $\{t_N\}_{N\ge1}$ and for the continuous-time random walk started at~$\varrho$, let~$\tilde h^{D_N}$ be the DGFF such that \twoeqref{E:Dynkin1}{E:Dynkin2} with $t:=t_N$ holds. We then set
\begin{equation}
\label{E:4.8}
T_N:=\frac1{\sqrt{2t_N}\,|D_N|}
\sum_{x\in D_N}\bigl[\,\widehat L^{D_N}_{t_N}(x)-t_N\bigr]
\end{equation}
and denote
\begin{equation}
\label{E:4.9nwt}
Y_N:=\frac1{|D_N|}\sum_{x\in D_N}\tilde h^{D_N}_x.
\end{equation}
We start by noting:
\begin{lemma}
\label{lemma-TY}
For any~$\{t_N\}_{N\ge1}$ with~$t_N\to\infty$ we have
\begin{equation}
\label{E:4.10}
T_N-Y_N\,\,\underset{N\to\infty}\longrightarrow\,\,0,\quad\text{\rm in probability}.
\end{equation}
In particular,
\begin{equation}
\label{E:4.11}
T_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\mathcal N(0,\sigma_D^2),
\end{equation}
where~$\sigma_D^2$ is as in \eqref{E:2.14new}.
\end{lemma}
\begin{proofsect}{Proof}
The Wick Pairing Theorem gives
\begin{equation}
\begin{aligned}
\text{\rm Var}\Bigl(\,\sum_{x\in D_N} (h^{D_N}_x)^2\Bigr)
&=\sum_{x,y\in D_N}\text{\rm Cov}\bigl((h^{D_N}_x)^2,(h^{D_N}_y)^2\bigr)
\\
&=\sum_{x,y\in D_N} 2\,\bigl[E(h^{D_N}_x h^{D_N}_y)\bigr]^2
=2\sum_{x,y\in D_N}G^{D_N}(x,y)^2.
\end{aligned}
\end{equation}
The uniform bound $G^{D_N}(x,y)\le g\log\frac{N}{|x-y|+1}+c$ shows that the double sum on the right is of order~$|D_N|^2$. From~$t_N\to\infty$ it follows that
\begin{equation}
\frac1{\sqrt{2t_N}\,|D_N|}\sum_{x\in D_N}\bigl[(h^{D_N}_x)^2-\mathbb E[(h^{D_N}_x)^2]\bigr]\,\,\underset{N\to\infty}\longrightarrow\,\,0,\quad\text{in probability}.
\end{equation}
Using this (for both~$h^{D_N}$ and~$\tilde h^{D_N}$, which are equidistributed) along with $\mathbb E[(h^{D_N}_x)^2]=\mathbb E[(\tilde h^{D_N}_x)^2]$ in \eqref{E:4.7}, we get \eqref{E:4.10}. For \eqref{E:4.11} we invoke the argument after \eqref{E:3.28}.
\end{proofsect}
We are now ready to state and prove convergence theorems for processes associated with exceptional level sets of the boundary vertex local time~$\widehat L^{D_N}_{t_N}$ augmented by information about~$T_N$. Starting with the thick and thin points, given positive sequences $\{t_N\}_{N\ge1}$ and $\{a_N\}_{N\ge1}$, define
\begin{equation}
\label{E:zetaND-bv}
\widehat\zeta^D_N:=\frac1{W_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{(\widehat L_{t_N}^{D_N}(x)-a_N)/\sqrt{2a_N}}\,,
\end{equation}
where~$W_N$ is as in \eqref{E:WN}. For the thick points of~$\widehat L^{D_N}_{t_N}$, we then have:
\begin{proposition}[Thick points]
\label{thm-4.3}
Suppose that $\{t_N\}_{N\ge1}$ and $\{a_N\}_{N\ge1}$ are such that \eqref{E:1.12} and \eqref{E:1.20} hold for some $\theta>0$ and~$\lambda\in(0,1)$. Then for~$X$ sampled from~$P^\varrho$, relative to the vague convergence of measures on~$\overline D\times(\mathbb R\cup\{+\infty\})\times\mathbb R$,
\begin{equation}
\label{E:1.21cont}
\widehat\zeta^D_N\otimes\delta_{T_N}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{\frac{\sqrt\theta}{\sqrt\theta+\lambda}}\,\,\fraktura c(\lambda) \,\text{\rm e}\mkern0.7mu^{\alpha\lambda\fraktura d(x)Y}Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h\otimes\delta_Y(\text{\rm d}\mkern0.5mu t)
\end{equation}
where~$Y=\mathcal N(0,\sigma_D^2)$, for~$\sigma_D^2$ as in \eqref{E:2.14new}, is such that~$Y\independent Z^{D,0}_\lambda$.
\end{proposition}
\begin{proofsect}{Proof}
We will rely heavily on the proof of \cite[Theorem~2.2]{AB} but, due to a different normalization of the second coordinate in \eqref{E:zetaND-bv} and also the fact that the limit measure is different than in \cite{AB}, we need to recount the main steps of the proof. Throughout we will assume (for each~$N\ge1$ and each~$t:=t_N$) a coupling of~$\widehat L^{D_N}_{t_N}$ and an independent DGFF~$h^{D_N}$ to a DGFF~$\tilde h^{D_N}$ satisfying \eqref{E:Dynkin2}.
First, by~\cite[Corollary~4.2]{AB} the measures $\{\widehat\zeta^D_N\colon N\ge1\}$ are tight and, by Lemma~\ref{lemma-TY}, the same applies to the enhanced measures $\{\xi_N\colon N\ge1\}$ where
\begin{equation}
\label{E:4.16}
\xi_N:=\widehat\zeta^D_N\otimes\delta_{T_N}.
\end{equation}
Moreover, \cite[Lemma~5.3]{AB} shows that if~$\xi_{N_k}\to\xi$ in law along some increasing sequence~$\{N_k\}_{k\ge1}$, then the extended measures
\begin{equation}
\label{E:xi}
\xi_N^{\text{ext}}:=\frac1{W_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{(\widehat L_{t_N}^{D_N}(x)-a_N)/\sqrt{2a_N}}\otimes\delta_{T_N}\otimes\delta_{h^{D_N}_x/(2a_N)^{1/4}}\,,
\end{equation}
where the coordinate involving~$h^{D_N}_x$ is now normalized differently than in \cite{AB}, obey
\begin{equation}
\label{E:4.18}
\xi_{N_k}^{\text{ext}}\,\,\,\underset{ k \to \infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\xi\otimes\mathfrak g
\end{equation}
in which, using \eqref{E:1.20}, $\mathfrak g$ is the law of~$\mathcal N(0,\frac1{\alpha\,(\sqrt\theta+\lambda)})$.
Let~$\eta^D_N$ be the process \eqref{E:etaDGFF} associated with the field~$\tilde h^{D_N}$ and the scale function
\begin{equation}
\widehat a_N:=\sqrt{2a_N}-\sqrt{2t_N}
\end{equation}
that, by \eqref{E:1.12} and \eqref{E:1.20}, scales as
$\widehat a_N\sim2\sqrt g\,\lambda\log N$ as~$N\to\infty$.
Let~$Y_N$ be the average of~$\tilde h^{D_N}$ over~$D_N$; cf.~\eqref{E:4.9nwt}.
Given $f\in C_{\text{\rm c}}(D\times\mathbb R\times\mathbb R)$, in the assumed coupling of~$\widehat L^{D_N}_{t_N}$, $h^{D_N}$ and~$\tilde h^{D_N}$, the convergence in Lemma~\ref{lemma-TY} tells us
\begin{equation}
\label{E:4.20}
\langle\eta^D_N\otimes\delta_{Y_N},f\rangle=o(1)+\langle\eta^D_N\otimes\delta_{T_N},f\rangle,
\end{equation}
where~$o(1)\to0$ as~$N\to\infty$ in probability. The calculation in the proof of \cite[Lemma~5.4]{AB} (enabled by the fact that the field~$h^{D_N}$ will be typical at most points contributing to~$\zeta^D_N$, as shown in~\cite[Lemma~5.2]{AB}) then gives
\begin{equation}
\label{E:4.21}
\langle\eta^D_N\otimes\delta_{T_N},f\rangle = o(1)+\langle\xi_N^{\text{ext}},f^{\text{ext}}\rangle,
\end{equation}
where
\begin{equation}
f^{\text{ext}}(x,\ell,t,h):=f\bigl(x,\ell+\tfrac {h^2}2,t\bigr).
\end{equation}
Using Corollary~\ref{cor-3.3} on the left-hand side of \eqref{E:4.20}, from \eqref{E:4.21} and \eqref{E:4.18} and, one more time, \cite[Lemma~5.2]{AB} we conclude that every subsequential limit~$\xi$ of the measures in \eqref{E:4.16}
satisfies the convolution-type identity
\begin{equation}
\label{E:4.23}
\langle\xi,f^{\ast\mathfrak g}\rangle \,\,\overset{\text{\rm law}}=\,\, \fraktura c(\lambda)\int\text{\rm e}\mkern0.7mu^{\alpha\lambda\fraktura d(x)Y}\,Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda \ell}\text{\rm d}\mkern0.5mu \ell \,\,f(x,\ell,Y),
\end{equation}
where $Y\independent Z^{D,0}_\lambda$ and
\begin{equation}
\label{E:4.24}
f^{\ast\mathfrak g}(x,\ell,t):=\int\mathfrak g(\text{\rm d}\mkern0.5mu h)f\bigl(x,\ell+\tfrac {h^2}2,t\bigr),
\end{equation}
jointly for all $f\in C_{\text{\rm c}}(D\times\mathbb R\times\mathbb R)$. It remains to ``solve'' \eqref{E:4.23} for~$\xi$.
First we note that the Monotone Convergence Theorem extends \eqref{E:4.23} to all~$f$ of the form $f(x,\ell,t):=1_A(x)\tilde f(\ell)1_{(b,\infty)}(t)$, where~$\tilde f\in C_{\text{\rm c}}(\mathbb R)$ and where~$A\subseteq D$ is non-empty and open. Denoting $\xi_{A,b}(B):=\xi(A\times B\times(b,\infty))$, a calculation then shows
\begin{equation}
\label{E:4.25}
\langle\xi,f^{\ast\mathfrak g}\rangle = \langle\xi_{A,b},\tilde f\ast\fraktura e\rangle
\end{equation}
where
\begin{equation}
\fraktura e(z):=\sqrt{\frac\beta\pi}\,\frac{\text{\rm e}\mkern0.7mu^{\beta z}}{\sqrt{-z}}1_{(-\infty,0)}(z)\quad\text{for}\quad\beta:=\alpha\bigl(\sqrt\theta+\lambda\bigr).
\end{equation}
The identity \eqref{E:4.23} also implies that $\langle\xi_{A,b},1_{[0,\infty)}\rangle<\infty$ a.s.\ and gives
\begin{equation}
\label{E:4.27}
\langle\xi_{A,b},\tilde f\ast\fraktura e\rangle = \langle\xi_{A,b},1_{[0,\infty)} \ast \fraktura e\rangle \int
\alpha\lambda\,\text{\rm e}\mkern0.7mu^{-\alpha\lambda\ell} \tilde f (\ell) \text{\rm d}\mkern0.5mu \ell,
\end{equation}
where the equality now holds pointwise a.s.\ because once
$\langle\xi_{A,b},1_{[0,\infty)} \ast \fraktura e \rangle>0$ (which is necessary for the left-hand side to be non-zero), the ratio $\langle\xi_{A,b},\tilde f\ast\fraktura e\rangle/ \langle\xi_{A,b},1_{[0,\infty)} \ast \fraktura e\rangle$ is equal in law, and thus pointwise, to the integral on the right.
Denoting $\mu_\lambda(\text{\rm d}\mkern0.5mu h):=\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h$, a routine change of variables rewrites \eqref{E:4.27} as
\begin{equation}
\label{E:4.28}
\langle\xi_{A,b},\tilde f\ast\fraktura e\rangle =C\langle\mu_\lambda,\tilde f\rangle
\end{equation}
where~$C$ is a random constant that is finite thanks to~$\beta>\alpha\lambda$. By \cite[Lemma~5.5]{AB}, there is at most one Borel measure~$\xi_{A,b}$ on~$\mathbb R$ satisfying \eqref{E:4.28} and, in fact, $\xi_{A,b}(\text{\rm d}\mkern0.5mu\ell)=C_{A,b}\text{\rm e}\mkern0.7mu^{-\alpha\lambda\ell}\text{\rm d}\mkern0.5mu\ell$ for some (random) constant~$C_{A,b}$. It follows that
\begin{equation}
\xi(\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu\ell\text{\rm d}\mkern0.5mu t)=M(\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu t)\otimes \text{\rm e}\mkern0.7mu^{-\alpha\lambda\ell}\text{\rm d}\mkern0.5mu\ell,
\end{equation}
where, by plugging this in \eqref{E:4.23},
\begin{equation}
M(\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu t)\,\,\overset{\text{\rm law}}=\,\,\Bigl(\int\mathfrak g(\text{\rm d}\mkern0.5mu h)\text{\rm e}\mkern0.7mu^{\alpha\lambda\frac{h^2}2}\Bigr)^{-1}
\fraktura c(\lambda)\text{\rm e}\mkern0.7mu^{\alpha\lambda\fraktura d(x)Y}\,Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\delta_Y(\text{\rm d}\mkern0.5mu t).
\end{equation}
The integral equals the square root of $(\sqrt\theta+\lambda)/\sqrt\theta$. The claim follows.
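For completeness, we record the elementary Gaussian computation behind the last claim: with $\mathfrak g$ the law of $\mathcal N(0,\sigma^2)$ for $\sigma^2:=\frac1{\alpha(\sqrt\theta+\lambda)}$,
\begin{equation}
\int\mathfrak g(\text{\rm d}\mkern0.5mu h)\,\text{\rm e}\mkern0.7mu^{\alpha\lambda\frac{h^2}2}
=\bigl(1-\alpha\lambda\sigma^2\bigr)^{-1/2}
=\Bigl(1-\frac{\lambda}{\sqrt\theta+\lambda}\Bigr)^{-1/2}
=\sqrt{\frac{\sqrt\theta+\lambda}{\sqrt\theta}}\,.
\end{equation}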
\end{proofsect}
We proceed with the corresponding result for the thin points:
\begin{proposition}[Thin points]
\label{thm-4.4}
Suppose that $\{t_N\}_{N\ge1}$ and $\{a_N\}_{N\ge1}$ are such that \eqref{E:1.12} and \eqref{E:1.22} hold for some $\theta>0$ and~$\lambda\in(0,\sqrt\theta\wedge1)$. Then for~$X$ sampled from~$P^\varrho$, relative to the vague convergence of measures on~$\overline D\times(\mathbb R\cup\{-\infty\})\times\mathbb R$,
\begin{equation}
\label{E:1.23cont}
\widehat\zeta^D_N\otimes\delta_{T_N}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{\frac{\sqrt\theta}{\sqrt\theta-\lambda}}\,\,\fraktura c(\lambda) \,\text{\rm e}\mkern0.7mu^{-\alpha\lambda\fraktura d(x)Y}Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{+\alpha\lambda h}\text{\rm d}\mkern0.5mu h\otimes\delta_Y(\text{\rm d}\mkern0.5mu t)
\end{equation}
where~$Y\independent Z^{D,0}_\lambda$ with $Y=\mathcal N(0,\sigma_D^2)$, for~$\sigma_D^2$ as in \eqref{E:2.14new}.
\end{proposition}
\begin{proofsect}{Proof}
The proof is very similar to that of Proposition~\ref{thm-4.3} so we indicate only the needed changes. We will again rely on the coupling of~$\widehat L^{D_N}_{t_N}$ and two DGFFs $h^{D_N}$ and~$\tilde h^{D_N}$ such that \twoeqref{E:Dynkin1}{E:Dynkin2} hold for~$t:=t_N$. Let~$\eta^D_N$ denote the process associated with~$\tilde h^{D_N}$ and the centering sequence~$-\widehat a_N$, where
\begin{equation}
\widehat a_N:=\sqrt{2t_N}-\sqrt{2a_N}.
\end{equation}
Note that, under \eqref{E:1.12} and \eqref{E:1.22} we have $\widehat a_N\sim2\sqrt g\lambda\log N$. Writing~$Y_N$ for the average of~$\tilde h^{D_N}$ over~$D_N$,
Corollary~\ref{cor-3.3} along with the symmetry $h^{D_N}\,\,\overset{\text{\rm law}}=\,\,-h^{D_N}$ ensures
\begin{equation}
\label{E:4.32}
\eta^D_N\otimes\delta_{Y_N}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\fraktura c(\lambda)\,\text{\rm e}\mkern0.7mu^{-\lambda\alpha\fraktura d(x)Y}\,Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{+\alpha\lambda h}\text{\rm d}\mkern0.5mu h\otimes\delta_Y(\text{\rm d}\mkern0.5mu t),
\end{equation}
where~$Y=\mathcal N(0,\sigma_D^2)$ is independent of~$Z^{D,0}_\lambda$.
The argument now proceeds very much like for the thick points. We consider the extended measures \eqref{E:xi}, which are tight by \cite[Corollary~4.8]{AB} and show, with the help of \cite[Lemmas~6.1, 6.2]{AB} and \eqref{E:4.32}, that every subsequential limit~$\xi$ thereof obeys
\begin{equation}
\label{E:4.33}
\langle\xi,f^{\ast\mathfrak g}\rangle \,\,\overset{\text{\rm law}}=\,\, \fraktura c(\lambda)\int
\text{\rm e}\mkern0.7mu^{- \alpha\lambda\fraktura d(x)Y}\,Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x)\otimes
\text{\rm e}\mkern0.7mu^{+\alpha\lambda \ell}\text{\rm d}\mkern0.5mu \ell \,\,f(x,\ell,Y),
\end{equation}
where~$f^{\ast\mathfrak g}$ is still defined via \eqref{E:4.24} but with
\begin{equation}
\mathfrak g := \text{law of }\mathcal N\bigl(0, \tfrac1{\alpha(\sqrt\theta-\lambda)}\bigr).
\end{equation}
The identity \eqref{E:4.33} readily extends to all~$f$ of the form $f(x,\ell,t):=1_A(x)\tilde f(\ell)1_{(-\infty,b)}(t)$, where~$\tilde f\in C_{\text{\rm c}}(\mathbb R)$ and where~$A\subseteq D$ is non-empty and open. A calculation then shows \eqref{E:4.25} with~$\fraktura e$ now defined using $\beta:=\alpha(\sqrt\theta-\lambda)$. Proceeding via an analogue of \eqref{E:4.27} (with $1_{[0,\infty)}$ replaced by $1_{(-\infty,0]}$), using \cite[Lemma~6.4]{AB} we then again show
\begin{equation}
\xi(\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu\ell\text{\rm d}\mkern0.5mu t)=M(\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu t)\otimes \text{\rm e}\mkern0.7mu^{+\alpha\lambda\ell}\text{\rm d}\mkern0.5mu\ell,
\end{equation}
where, this time,
\begin{equation}
M(\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu t)\,\,\overset{\text{\rm law}}=\,\,\Bigl(\int\mathfrak g(\text{\rm d}\mkern0.5mu h)\text{\rm e}\mkern0.7mu^{-\alpha\lambda\frac{h^2}2}\Bigr)^{-1}
\fraktura c(\lambda)\text{\rm e}\mkern0.7mu^{-\alpha\lambda\fraktura d(x)Y}\,Z^{D,0}_\lambda(\text{\rm d}\mkern0.5mu x)\otimes\delta_Y(\text{\rm d}\mkern0.5mu t).
\end{equation}
The integral equals the square root of $(\sqrt\theta-\lambda)/\sqrt\theta$, by a Gaussian computation analogous to that in the proof of Proposition~\ref{thm-4.3}. The claim follows.
\end{proofsect}
Next we move to the discussion of the light and avoided points. Starting with the light points, we define
\begin{equation}
\label{E:hat-varthetaND}
\widehat\vartheta^D_N:=\frac1{\widehat W_N }\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{\widehat L_{t_N}^{D_N}(x)},
\end{equation}
where $\widehat W_N$ is as in \eqref{E:1.31}. We then get:
\begin{proposition}[Light points]
\label{thm-light-bv}
Suppose $\{t_N\}_{N\ge1}$ obeys \eqref{E:1.12} for some $\theta\in(0,1)$. Then, for the random walk sampled from~$P^\varrho$, in the sense of vague convergence of measures on~$\overline D\times[0,\infty)\times\mathbb R$,
\begin{equation}
\widehat\vartheta^D_N\otimes\delta_{T_N}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\,\text{\rm e}\mkern0.7mu^{-\alpha\sqrt\theta\,\fraktura d(x)Y}\, Z_{\sqrt{\theta}\,}^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\tilde\mu(\text{\rm d}\mkern0.5mu h)\otimes\delta_Y(\text{\rm d}\mkern0.5mu t),
\end{equation}
where $Y=\mathcal N(0,\sigma_D^2)$ is independent of~$Z^{D,0}_{\sqrt\theta}$ and
\begin{equation}
\label{E:tilde-mu}
\tilde\mu(\text{\rm d}\mkern0.5mu h):=\delta_0(\text{\rm d}\mkern0.5mu h)+\biggl(\,\sum_{n=0}^\infty\frac1{n!(n+1)!}\Bigl(\frac{\alpha^2\theta}2\Bigr)^{n+1} h^n\biggr)1_{(0,\infty)}(h)\,\text{\rm d}\mkern0.5mu h.
\end{equation}
\end{proposition}
\begin{proofsect}{Proof}
Assuming again the coupling from \twoeqref{E:Dynkin1}{E:Dynkin2}, we set
\begin{equation}
\xi_N:=\widehat\vartheta^D_N\otimes\delta_{T_N}.
\end{equation}
The family~$\{\xi_N\colon N\ge1\}$ is tight by \cite[Corollary~4.6]{AB} and so we may consider a subsequential limit~$\xi$ thereof. By \cite[Lemma~7.1]{AB}, the extended measure
\begin{equation}
\xi_N^{\text{ext}}:=\frac{\sqrt{\log N}}{\widehat W_N }\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{\widehat L_{t_N}^{D_N}(x)}\otimes\delta_{T_N}\otimes\delta_{h^{D_N}_x},
\end{equation}
then converges to~$\xi\otimes\frac1{\sqrt{2\pi g}}{\rm Leb}$ along the same subsequence. We now pick a test function $f\in C_{\text{\rm c}}(D\times[0,\infty)\times\mathbb R)$, denote
\begin{equation}
f^{\text{ext}}(x,\ell,t,h):=f\bigl(x,\ell+\tfrac{h^2}2,t\bigr)
\end{equation}
and observe that \eqref{E:Dynkin2} implies
\begin{equation}
\sum_{x\in D_N}f^{\text{ext}}\Bigl(\ffrac xN,\widehat L^{D_N}_{t_N}(x), T_N,h^{D_N}_x\Bigr)
=\sum_{x\in D_N}f\Bigl(\ffrac xN,\tfrac12\bigl(\tilde h^{D_N}_x+\sqrt{2t_N}\bigr)^2,T_N\Bigr).
\end{equation}
Writing this in terms of the above measures, Lemma~\ref{lemma-TY} gives
\begin{equation}
\langle\xi_N^{\text{ext}},f^{\text{ext}}\rangle = o(1)+\bigl\langle\eta^D_N\otimes\delta_{Y_N},f(~\cdot~, \tfrac{1}{2} |\cdot|^2, ~\cdot~)\bigr\rangle,
\end{equation}
where~$\eta^D_N$ is the DGFF process associated with the scale sequence $\widehat a_N:=-\sqrt{2t_N}$. As $\widehat a_N\sim-2\sqrt g\sqrt\theta\log N$, from \eqref{E:4.32} we get
\begin{equation}
\langle \xi,f^{\ast{\rm Leb}}\rangle \,\,\overset{\text{\rm law}}=\,\,
\fraktura c(\sqrt\theta)\int\text{\rm e}\mkern0.7mu^{-\alpha\sqrt\theta\,\fraktura d(x)Y}\,Z^{D,0}_{\sqrt\theta}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{+\alpha\sqrt\theta\, h}\text{\rm d}\mkern0.5mu h \,\,f\bigl(x,\tfrac12 h^2,Y\bigr),
\end{equation}
where
\begin{equation}
f^{\ast{\rm Leb}}(x,\ell,t):=\frac1{\sqrt{2\pi g}}\int\text{\rm d}\mkern0.5mu h\, f\bigl(x,\ell+\tfrac{h^2}2,t\bigr).
\end{equation}
By the Monotone Convergence Theorem, this extends to all~$f$ of the form
\begin{equation}
f(x,\ell,t):=1_A(x)\text{\rm e}\mkern0.7mu^{-s\ell}1_{[0,\infty)}(\ell)1_{[b,\infty)}(t)
\end{equation}
for~$A\subseteq D$ open,~$b\in\mathbb R$ and~$s>0$. For $\xi_{A,b}(B):=\xi(A\times B\times[b,\infty))$, we then get
\begin{equation}
\int_0^\infty\xi_{A,b}(\text{\rm d}\mkern0.5mu\ell) \text{\rm e}\mkern0.7mu^{-s\ell}
\,\,\overset{\text{\rm law}}=\,\, \sqrt{2\pi g}\, \fraktura c(\sqrt\theta)\,
\Bigl(\int_{A}
\text{\rm e}\mkern0.7mu^{-\alpha\sqrt\theta\,\fraktura d(x)Y}\,Z^{D,0}_{\sqrt\theta}(\text{\rm d}\mkern0.5mu x)\Bigr)\,\text{\rm e}\mkern0.7mu^{\frac{\alpha^2\theta}{2s}}\,1_{[b,\infty)}(Y).
\end{equation}
Since the Laplace transform of a measure, if it exists, determines the measure uniquely, this proves that $\xi$ takes the product form
\begin{equation}
\xi \,\,\overset{\text{\rm law}}=\,\, \sqrt{2\pi g}\, \fraktura c(\sqrt\theta)\,\text{\rm e}\mkern0.7mu^{-\alpha\sqrt\theta\,\fraktura d(x)Y}\,Z^{D,0}_{\sqrt\theta}(\text{\rm d}\mkern0.5mu x)\otimes\tilde\mu(\text{\rm d}\mkern0.5mu \ell)\otimes\delta_Y(\text{\rm d}\mkern0.5mu t)
\end{equation}
for some deterministic measure~$\tilde\mu$ on $[0,\infty)$ with Laplace transform $s\mapsto\text{\rm e}\mkern0.7mu^{\frac{\alpha^2\theta}{2s}}$. A calculation shows that the measure \eqref{E:tilde-mu} has this property.
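Indeed, using $\int_0^\infty h^n\,\text{\rm e}\mkern0.7mu^{-sh}\,\text{\rm d}\mkern0.5mu h=n!\,s^{-(n+1)}$, the Laplace transform of \eqref{E:tilde-mu} evaluates to
\begin{equation}
\int_{[0,\infty)}\tilde\mu(\text{\rm d}\mkern0.5mu h)\,\text{\rm e}\mkern0.7mu^{-sh}
=1+\sum_{n=0}^\infty\frac1{(n+1)!}\Bigl(\frac{\alpha^2\theta}{2s}\Bigr)^{n+1}
=\text{\rm e}\mkern0.7mu^{\frac{\alpha^2\theta}{2s}},\quad s>0,
\end{equation}
as required.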
\end{proofsect}
A direct consequence of our control of the light points is:
\begin{proposition}[Avoided points]
\label{thm-avoid-bv}
Suppose $\{t_N\}_{N\ge1}$ is such that \eqref{E:1.12} holds for some $\theta\in(0,1)$ and let
\begin{equation}
\widehat\kappa_N^D:=\frac1{\widehat W_N}\sum_{x\in D_N}1_{\{\widehat L^{D_N}_{t_N}(x)=0\}}\delta_{x/N}.
\end{equation}
Then, for the random walk distributed according to~$P^\varrho$, in the sense of vague convergence of measures on~$\overline D\times\mathbb R$,
\begin{equation}
\label{E:2.22cont}
\widehat\kappa^D_N\otimes\delta_{T_N}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\,\text{\rm e}\mkern0.7mu^{-\alpha\sqrt\theta\,\fraktura d(x)Y}\, Z_{\sqrt{\theta}\,}^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\delta_Y(\text{\rm d}\mkern0.5mu t),
\end{equation}
where $Y=\mathcal N(0,\sigma_D^2)$ is independent of~$Z^{D,0}_{\sqrt\theta}$.
\end{proposition}
\begin{proofsect}{Proof}
The proof of \cite[Theorem~2.5]{AB} carries over essentially \myemph{verbatim}.
\end{proofsect}
\newcommand{\texte}{\text{\rm e}\mkern0.7mu}
\newcommand{\OO}{\mathcal O}
\newcommand{\NN}{\mathcal N}
\newcommand{\EE}{\mathcal E}
\section{Fixed total time}
\label{sec5}\noindent
Equipped with the enhanced limit results, which include the limiting value of the suitably-norma\-lized fluctuations of the total local time, we now derive the corresponding conclusions for a fixed total time.
We keep working with the random walk started at the boundary vertex~$\varrho$; general starting points will be dealt with in Section~\ref{sec6}.
\subsection{Time conversions}
The transition from a fixed local time at~$\varrho$ to a fixed total time is based on a simple inversion formula. Recall that, in our context,
\begin{equation}
\hat\tau_\varrho(t):=\inf\bigl\{s\ge0\colon \widetilde L_s^{D_N}(\varrho)\ge t\bigr\}
\end{equation}
and $\deg(D_N)=\sum_{x\in D_N\cup\{\varrho\}}\deg(x)$. Given a sequence $\{t_N\}_{N\ge1}$ with $t_N\ge1$, define
\begin{equation}
\label{E:5.2}
t_{N}^\star = \inf\bigl\{ t \geq 0 \colon \hat{\tau}_{\varrho}(t) \geq \deg(D_N) t_{N} \bigr\}.
\end{equation}
This is an inverse of~$\hat\tau_\varrho$ evaluated at $\deg(D_N) t_N$ and so we expect $\hat\tau_\varrho(t_N^\star)\approx\deg(D_N) t_N$. By \eqref{E:LVt} and \eqref{E:2.27ia}, we should therefore have $\widetilde L_{\deg(D_N) t_N}^{D_N}(\cdot)\approx \widehat L^{D_N}_{t_N^\star}(\cdot)$. Besides their approximate nature, any use of these identifications is complicated by the appearance of the random time~$t_N^\star$ for which we have no better formula than \eqref{E:5.2}. We will thus base the time conversion on a slightly different (still random) quantity that will turn out to be better adapted to our needs.
Recall the definition of~$T_N$ from \eqref{E:4.8}. We note that this actually coincides with the value of $T_N(t_N)$, where (in accord with \eqref{E:4.4u}) we set
\begin{equation}
T_{N}(t) := \frac{U_{N}(t)}{\sqrt{2t}}
\qquad \text{for} \qquad
U_{N}(t) := \frac{1}{|D_{N}|}\sum_{x\in D_{N}}\bigl[\,\widehat L^{D_{N}}_{t}(x) - t\bigr].
\end{equation}
Now let
\begin{equation}
\label{E:tNcirc}
t^{\circ}_{N} := t_{N} - \left(1-\frac{\deg(\varrho)}{\deg(D_{N})}\right) \sqrt{2t_{N}} \, T_{N}(t_{N}).
\end{equation}
We then have:
\begin{proposition}[Time conversion]
\label{P:tNbound}
Fix any sequence $(b_{N})_{N \geq 1}$ in $(0, \infty)$ such that $b_{N} \to \infty$ and $b_{N}/t_{N}^{1/4} \to 0$ as $N \to \infty$. Then there exists a constant $c_{1} > 0$ such that
\begin{equation}
\label{E:tNbound}
\hat{\tau}_{\varrho}\big(t_{N}^{\circ} - b_{N} t_{N}^{1/4}\big)
\leq \deg(D_N) t_{N}
\leq \hat{\tau}_{\varrho}\big(t^{\circ}_{N} + b_{N} t_{N}^{1/4}\big)
\end{equation}
and thus, in particular,
\begin{equation}
\label{E:5.6i}
\widehat L^{D_N}_{t_N^\circ-b_Nt_N^{1/4}}(\cdot)\le\widetilde L^{D_N}_{\deg(D_N) t_N}(\cdot)\le\widehat L^{D_N}_{t_N^\circ+b_Nt_N^{1/4}}(\cdot)
\end{equation}
hold true with $P^{\varrho}$-probability at least $1- c_{1} b_{N}^{-1}$.
\end{proposition}
The proof will be split into several intermediate results, some of which will be useful later as well. The first item to note is the ``stability'' (or slow variation) of the fluctuation of the total local time:
\begin{lemma}
\label{L:UN_max}
There exists a constant $ c_{2} > 0 $ such that for all $s, t \geq 0$ and all $ r > 0 $,
\begin{equation}
\label{E:UN_max}
P^{\varrho}\left(\sup_{0\leq u \leq t} \left| U_{N}(s+u) - U_{N}(s) \right| \geq r \right)
\leq \frac{c_{2}t}{r^{2}}.
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
Note that $ U_{N} $ is a compensated compound Poisson process. In view of the stationarity of its increments, it suffices to consider the case $ s = 0 $. Moreover, since $ U_{N} $ is a martingale, Doob's maximal inequality is applicable and hence
\begin{equation}
P^{\varrho}\left(\sup_{0\leq u \leq t} \left| U_{N}(u) \right| \geq r \right)
\leq \frac{4 \text{\rm Var}_{P^\varrho}(U_{N}(t))}{r^{2}}.
\end{equation}
It suffices to show that $ \text{\rm Var}_{P^\varrho}(U_{N}(t)) $ is bounded by $ Ct $ for some $ C > 0 $. To this end, we note that $t \mapsto (U_{N}(t) + t) $ is a compound Poisson process with rate $ \deg(\varrho) $ and jump size distributed as $ \sum_{x \in D_{N}} \ell(x) / |D_{N}| $, where $ \ell(\cdot) $ is the local time for a single excursion. Hence,
\begin{equation}
\text{\rm Var}_{P^\varrho}(U_{N}(t))
= \text{\rm Var}_{P^\varrho}(U_{N}(t)+t)
= \frac{1}{|D_N|^2} \deg(\varrho) t\, E^{\varrho}\Biggl[\biggl(\sum_{x \in D_{N}} \ell(x) \biggr)^{2}\Biggr].
\end{equation}
The last expectation can be computed via the Kac moment formula,
\begin{equation}
\label{E:5.7}
\text{\rm Var}_{P^\varrho}(U_{N}(t)) = \frac{2t}{|D_{N}|^{2}} \sum_{x,y \in D_{N}} G^{D_{N}}(x, y).
\end{equation}
The uniform bound $G^{D_N}(x,y)\le g\log\frac{N}{|x-y|+1}+c$ shows that the sum is at most a constant times~$|D_N|^2$, uniformly in~$N\ge1$.
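Indeed, using only that $D_N$ has diameter of order~$N$ and that $|D_N|$ is of order~$N^2$,
\begin{equation}
\sum_{x,y\in D_N}G^{D_N}(x,y)\le|D_N|\,\max_{x\in D_N}\sum_{y\in D_N}\Bigl(g\log\frac N{|x-y|+1}+c\Bigr)\le c'\,|D_N|\,N^2\le c''\,|D_N|^2,
\end{equation}
where the middle inequality follows by comparing the inner sum with the integral $N^2\int_{|u|\le c'''}\log\frac1{|u|}\,\text{\rm d}\mkern0.5mu u<\infty$.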
\end{proofsect}
The next lemma quantifies the difference between $ \hat{\tau}_{\varrho}(t_{N}^\star) $ and $ \deg(D_N) t_{N} $:
\begin{lemma}
\label{L:tN*asymp1}
Let $ (b_{N})_{N\geq 1} $ be as in the statement of Proposition \ref{P:tNbound}. Then there exists a constant $ c_{3} > 0 $ such that
\begin{equation}
\label{E:tN*asymp1}
\left| \frac{\hat{\tau}_{\varrho}(t_{N}^\star)}{\deg(D_N)} - t_{N} \right| \leq b_{N}
\qquad \text{\rm and} \qquad
\left|t_{N}^\star - t_{N}\right| < b_{N}\sqrt{t_{N}}
\end{equation}
hold with $P^{\varrho}$-probability at least $1 - c_{3} b_{N}^{-2}$.
\end{lemma}
\begin{proofsect}{Proof}
Note that~$\hat\tau_\varrho(t)=\sum_{x\in D_N\cup\{\varrho\}}\deg(x)\widehat L^{D_N}_t(x)$.
The proof is a straightforward application of Chebyshev's inequality together with some variance estimates. We begin by noting that $ \hat{\tau}_{\varrho}(t_{N}^\star) - \deg(D_N) t_{N} $ is the time it takes the walk to hit $ \varrho $ when started from the point $\widetilde X_{\deg(D_N) t_{N}}$. Writing~$H_\varrho$ for the first hitting time of~$\varrho$, the Markov property gives
\begin{equation}
E^{\varrho}\left[ \left(\hat{\tau}_{\varrho}(t_{N}^\star) - \deg(D_N) t_{N} \right)^{2} \right]
= E^{\varrho}\left[ E^{\widetilde X_{\deg(D_N) t_{N}}}\big[ H_{\varrho}^{2} \big] \right]
\leq \max_{x \in D_{N}} E^{x}\big[ H_{\varrho}^{2} \big].
\end{equation}
As in the proof of the previous lemma, applying the Kac moment formula shows
\begin{equation}
E^{x}\big[ H_{\varrho}^{2} \big]
= 2 \sum_{y,z\in D_{N}} \deg(y)\deg(z) G^{D_{N}}(x, y)G^{D_{N}}(y, z)
\leq c_{4}|D_{N}|^{2}
\end{equation}
for some absolute constant $ c_{4} > 0 $. (This is consistent with the fact that the length of a typical excursion in $D_N$ is comparable to the volume of $D_N$.) Then by the Chebyshev inequality,
\begin{equation}
\label{E:5.12}
P^{\varrho}\left( \left| \frac{\hat{\tau}_{\varrho}(t_{N}^\star)}{\deg(D_N)} - t_{N} \right| \geq b_{N} \right)
\leq \frac{c_{4}|D_{N}|^{2}}{(\deg(D_N) b_{N})^{2}}
\leq \frac{c_{4}}{16 b_{N}^{2}},
\end{equation}
where the last step follows from $ \deg(D_N) = \deg(\varrho) + 4|D_{N}| $. Also, by a computation similar to that in the previous proof, we get
\begin{equation}
E^{\varrho}\Biggl[\biggl(\frac{\hat{\tau}_{\varrho}(t)}{\deg(D_N)} - t\biggr)^2\Biggr]
= \frac{2t}{\deg(D_N)^{2}}\sum_{x,y \in D_{N}} \deg(x)\deg(y) G^{D_{N}}(x, y)
\leq c_{5} t
\end{equation}
for some constant $ c_{5} > 0 $. So again, by Chebyshev's inequality,
\begin{equation}
\label{E:5.14}
P^{\varrho}\bigg( \frac{\hat{\tau}_{\varrho}(t_{N} - b_{N}\sqrt{t_{N}})}{\deg(D_N)} \geq t_{N} - b_{N}\sqrt{t_{N}}/2 \bigg)
\leq \frac{c_{5}(t_{N} - b_{N}\sqrt{t_{N}})}{(b_{N}\sqrt{t_{N}}/2)^{2}}
\leq \frac{4c_{5}}{b_{N}^{2}}
\end{equation}
and likewise
\begin{equation}
\label{E:5.15}
P^{\varrho}\bigg( \frac{\hat{\tau}_{\varrho}(t_{N} + b_{N}\sqrt{t_{N}})}{\deg(D_N)} \leq t_{N} + b_{N}\sqrt{t_{N}}/2 \bigg)
\leq \frac{4c_{5}(1 + b_{N}t_{N}^{-1/2})}{b_{N}^{2}}.
\end{equation}
Combining \eqref{E:5.12}, \eqref{E:5.14}, and \eqref{E:5.15} we find that there exists a constant $ c_{3} > 0 $, depending only on $ (t_{N})_{N\geq 1} $ and $ (b_{N})_{N\geq 1} $, such that all of
\begin{equation}
\label{E:5.16}
\begin{aligned}
\hat{\tau}_{\varrho}(t_{N} - b_{N}\sqrt{t_{N}}) &< \deg(D_N)\big(t_{N} - b_{N}\sqrt{t_{N}}/2\big), \\
\hat{\tau}_{\varrho}(t_{N} + b_{N}\sqrt{t_{N}}) &> \deg(D_N)\big(t_{N} + b_{N}\sqrt{t_{N}}/2\big), \\
\left| \hat{\tau}_{\varrho}(t_{N}^\star) - \deg(D_N) t_{N} \right| &\leq \deg(D_N) b_{N}/2
\end{aligned}
\end{equation}
simultaneously hold with $P^{\varrho}$-probability at least $1 - c_{3}b_{N}^{-2}$. But if all of \eqref{E:5.16} hold, then we get
\begin{equation}
\hat{\tau}_{\varrho}(t_{N} - b_{N}\sqrt{t_{N}})
< \hat{\tau}_{\varrho}(t_{N}^\star)
< \hat{\tau}_{\varrho}(t_{N} + b_{N}\sqrt{t_{N}}).
\end{equation}
By the monotonicity of $ \hat{\tau}_{\varrho} $, these altogether imply \eqref{E:tN*asymp1} as required.
\end{proofsect}
Next we will quantify the difference between $t_N^\star$ and~$t_N^\circ$:
\begin{lemma}
\label{L:tN*asymp2}
Assume $t_N\ge1$ and let $ (b_{N})_{N\geq 1} $ be as in the statement of Proposition \ref{P:tNbound}. Then there exists a constant $ c_{6} > 0 $ such that
\begin{equation}
\label{E:tN*asymp2}
|t_{N}^\star - t_{N}^{\circ}| \leq b_{N}t_{N}^{1/4}
\end{equation}
holds with $P^{\varrho}$-probability at least $1 - c_{6} b_{N}^{-1}$.
\end{lemma}
\begin{proofsect}{Proof}
We note that, by \eqref{E:2.27ia} and the fact that $\deg(x)=4$ for~$x\in D_N$,
\begin{equation}
\begin{split}
U_{N}(t)
&= \frac{1}{|D_{N}|} \sum_{x \in D_{N}} \left( \frac{1}{\deg(x)} \int_{0}^{\hat{\tau}_{\varrho}(t)} 1_{\{\widetilde X_{s} = x\}} \, \textd s - t \right) \\
&\quad= \frac{1}{|D_{N}|} \left( \frac{1}{4} \left( \hat{\tau}_{\varrho}(t) - \deg(\varrho)t \right) - |D_{N}|t \right)
= \frac{\hat{\tau}_{\varrho}(t) - \deg(D_N) t}{4|D_{N}|}.
\end{split}
\end{equation}
Rearranging the identity in terms of $ t $, we get
\begin{equation}
\label{E:5.20}
t = \frac{\hat{\tau}_{\varrho}(t)}{\deg(D_N)} - \left(1-\frac{\deg(\varrho)}{\deg(D_N)}\right) U_{N}(t).
\end{equation}
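(Explicitly, \eqref{E:5.20} is obtained by writing the previous display as $\hat{\tau}_{\varrho}(t)=\deg(D_N)\,t+4|D_{N}|\,U_{N}(t)$, dividing by $\deg(D_N)$ and noting that $\frac{4|D_{N}|}{\deg(D_N)}=1-\frac{\deg(\varrho)}{\deg(D_N)}$.)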
This will be used to prove the desired bound. Plugging $ t := t_{N}^\star $, we notice that the right-hand side of \eqref{E:5.20}
almost looks like the definition \eqref{E:tNcirc} of $ t_{N}^{\circ} $, except that we need~$t_{N} $ in place of $ \hat{\tau}_{\varrho}(t_{N}^\star) / \deg(D_N)$ and $ U_{N}(t_{N}) $ in place of $ U_{N}(t_{N}^\star) $. This amounts to estimating their respective differences, and this is where the previous lemmas come in handy.
First, we plug $s := t_{N} - b_{N}\sqrt{t_{N}}$ and $t := 2b_{N}\sqrt{t_{N}}$ in \eqref{E:UN_max} to get
\begin{equation}
P^{\varrho}\left( \sup_{|u| \leq b_{N}\sqrt{t_{N}}} \left| U_{N}(t_{N}+u) - U_{N}(t_{N}) \right| \geq b_{N}t_{N}^{1/4} \right)
\leq \frac{ 8 c_{2}b_{N}\sqrt{t_{N}}}{\big(b_{N}t_{N}^{1/4}\big)^{2}}
= \frac{ 8 c_{2}}{b_{N}}.
\end{equation}
Combining this with Lemma \ref{L:tN*asymp1}, we can find $ c_{7} > 0 $ such that both \eqref{E:tN*asymp1} and
\begin{equation}
\label{E:5.22}
\left| U_{N}(t_{N}+u) - U_{N}(t_{N}) \right| \leq b_{N}t_{N}^{1/4}
\qquad \text{for all } |u| \leq b_{N}\sqrt{t_{N}}
\end{equation}
hold with $ P^{\varrho} $-probability at least $ 1-c_{7}b_{N}^{-1} $. Moreover, given \eqref{E:tN*asymp1} and \eqref{E:5.22}, we also get $ \left| U_{N}(t_{N}^\star) - U_{N}(t_{N}) \right| \leq b_{N}t_{N}^{1/4} $. Putting this together, we get
\begin{equation}
\begin{aligned}
|t_{N}^\star - t_{N}^{\circ}|
&\leq \left| \frac{\hat{\tau}_{\varrho}(t_{N}^\star)}{\deg(D_N)} - t_{N} \right| + \left| U_{N}(t_{N}^\star) - U_{N}(t_{N}) \right|
\\
&\leq b_{N}\big(1+t_{N}^{1/4}\big)\le 2b_N t_N^{1/4}.
\end{aligned}
\end{equation}
Although this bound is slightly larger than the one appearing in the statement, we can repeat the above argument with $\{b_{N}/2\}_{N\geq 1}$ in place of $\{b_{N}\}_{N\geq 1}$, whereupon the desired claim follows with $ c_{6} = 2c_{7} $.
\end{proofsect}
We are now ready to prove the main statement:
\begin{proofsect}{Proof of Proposition \ref{P:tNbound}}
Let $ (b_{N})_{N\geq 1} $ be as in the statement. Then by the definition of $ t_{N}^\star $ and Lemma \ref{L:tN*asymp2},
\begin{equation}
\deg(D_N) t_{N} \leq \hat{\tau}_{\varrho}(t_{N}^\star) \leq \hat{\tau}_{\varrho}\big(t_{N}^{\circ} + b_{N}t_{N}^{1/4}\big)
\end{equation}
holds with $ P^{\varrho} $-probability at least $ 1-\OO(b_{N}^{-1}) $. Next, regarding $ t_{N} \mapsto t_{N}^\star $ and $ t_{N} \mapsto t_{N}^{\circ} $ as functions of $ t_{N} $ for each fixed $ N $, Lemma \ref{L:tN*asymp1} applied to $ (t_{N}-b_{N}/4)_{N\geq 1} $ and $ (b_{N}/4)_{N\geq 1} $ in place of $ (t_{N})_{N\geq 1} $ and $ (b_{N})_{N\geq 1} $, respectively, shows that both
\begin{equation}
\deg(D_N) t_{N}
\geq \hat{\tau}_{\varrho}\big( (t_{N} - b_{N}/4)^\star \big) \geq \deg(D_N)(t_{N} - b_{N}/2)
\end{equation}
and
\begin{equation}
\big\lvert \big( t_{N} - b_{N}/4 \big)^\star - t_{N} \big\rvert \leq b_{N}\sqrt{t_{N}}/2
\end{equation}
are satisfied with $ P^{\varrho} $-probability at least $ 1-\OO(b_{N}^{-1}) $. Then using \eqref{E:5.22} and repeating the argument as in the previous proof, we can bound $ (t_{N} - b_{N}/4)^\star $ from below by $ t_{N}^{\circ} - b_{N}t_{N}^{1/4} $ again with probability at least $ 1-\OO(b_{N}^{-1}) $.
\end{proofsect}
\subsection{Continuous-time exceptional level sets}
We are now ready to adapt the convergence theorems for the exceptional level-set measures for the boundary-vertex local times $\widehat L^{D_{N}}$ to those associated with the local time $\widetilde L^{D_{N}}$ of the continuous-time walk~$\widetilde X$ run for a fixed time of order~$N^2(\log N)^2$. We begin with the thick points; the arguments will be readily adapted to the other families of exceptional points as well. Given two positive sequences $\{t_{N}\}_{N\geq 1}$ and $\{a_{N}\}_{N\geq 1}$ as before, define
\begin{equation}
\label{E:5.29b}
\widetilde\zeta^{D}_{N} = \frac{1}{W_{N}} \sum_{x \in D_{N}} \delta_{x/N} \otimes \delta_{(\widetilde L^{D_{N}}_{\deg(D_N) t_{N}}(x) - a_{N})/\sqrt{2a_{N}}},
\end{equation}
where $W_{N}$ is the same as in the case of $\widehat\zeta^{D}_{N}$. We then have:
\begin{proposition}[Continuous-time thick points]
\label{thm-thick-cont}
Under the setting and notation of Theorem~\ref{thm-thick} and for the walk started at the ``boundary vertex,'' we have
\begin{equation}
\label{E:1.21b}
\widetilde\zeta^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{\frac{\sqrt{\theta}}{\sqrt{\theta}+\lambda}}\,\,\fraktura c(\lambda)\,\texte^{\alpha \lambda (\mathfrak{d}(x) - 1) T}\,Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\text{\rm d}\mkern0.5mu h,
\end{equation}
where $T$ and $Z_{\lambda}^{D,0}$ are independent with $T \sim \NN(0, \sigma_{D}^{2})$.
\end{proposition}
The key point is to carefully track the effects of the random time shift $\sqrt{2t_{N}} \, T_{N}$ in the quantity~$t_{N}^{\circ}$ from \eqref{E:tNcirc}. Let~$\{b_N\}_{N\ge1}$ be a sequence with~$b_N\to\infty$ and~$b_N/t_N^{1/4}\to0$. Consider the event
\begin{multline}
\label{E:evtgood}
\quad
\EE_N:= \left\{ \hat{\tau}_{\varrho}\big(t_{N}^{\circ} - b_{N} t_{N}^{1/4}\big)
\leq \deg(D_N) t_{N}
\leq \hat{\tau}_{\varrho}\big(t^{\circ}_{N} + b_{N} t_{N}^{1/4}\big) \right\}
\\
\cap \,\left\{ \max_{|u|\leq b_{N}\sqrt{t_{N}}}\left| U_{N}(t_{N}+u) - U_{N}(t_{N}) \right| \leq b_{N}t_{N}^{1/4}\right\}
\cap \left\{ \left| T_{N} \right| \leq {b_{N}} \right\}.
\quad
\end{multline}
We then have:
\begin{lemma}
\label{lemma-5.6}
There is a constant~$c_7>0$ such that the following holds for all~$N\ge1$:
\begin{equation}
\label{E:5.32i}
P^{\varrho}(\EE_N) \geq 1 - c_7 b_{N}^{-1}
\end{equation}
and
\begin{equation}
\label{E:5.33i}
\max_{|u|\le b_N\sqrt{t_N}}\,\left|T_{N}(t_{N}+u) - T_{N}\right|
\le c_7 b_N/t_N^{1/4}\quad\text{\rm on }\mathcal E_N.
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
The bound \eqref{E:5.32i} follows from Proposition \ref{P:tNbound}, Lemma \ref{L:UN_max} and the fact that~$T_N$ has asymptotically a Gaussian tail. To get \eqref{E:5.33i}, note that for $|u|\le b_N\sqrt{t_N}$,
\begin{equation}
\label{E:5.29}
\left|T_{N}(t_{N}+u) - T_{N}\right|
\leq \frac{b_{N}t_{N}^{1/4}}{\sqrt{2(t_{N} - b_{N}\sqrt{t_{N}})}} + \frac{b_{N} \left|T_{N}\right|}{\sqrt{t_{N} - b_{N}\sqrt{t_{N}}}}.
\end{equation}
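Indeed, writing
\begin{equation}
T_{N}(t_{N}+u)-T_{N}
=\frac{U_{N}(t_{N}+u)-U_{N}(t_{N})}{\sqrt{2(t_{N}+u)}}
+U_{N}(t_{N})\biggl(\frac1{\sqrt{2(t_{N}+u)}}-\frac1{\sqrt{2t_{N}}}\biggr),
\end{equation}
the first term is bounded using $|U_{N}(t_{N}+u)-U_{N}(t_{N})|\le b_{N}t_{N}^{1/4}$, which holds on~$\EE_N$ by \eqref{E:evtgood}, while the second is bounded using $|U_{N}(t_{N})|=\sqrt{2t_{N}}\,|T_{N}|$ together with $\bigl|\frac1{\sqrt{t_{N}+u}}-\frac1{\sqrt{t_{N}}}\bigr|\le\frac{|u|}{t_{N}\sqrt{t_{N}+u}}$ and $|u|\le b_{N}\sqrt{t_{N}}$.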
As $|T_N|\le b_N$ on~$\EE_N$ and $\{b_N/t_N^{1/4}\}_{N\ge1}$ is bounded, this is at most order $b_{N}/t_{N}^{1/4}$.
\end{proofsect}
The argument to follow will be based on partitioning the event~$\EE_N$ according to the value of $T_{N}$. For this we fix an $\epsilon > 0$ and let $ \{ \rho_{k} \}_{k\in\mathbb Z} $ be a family of continuous functions such that
\begin{equation}
\label{E:evtdiv}
0 \leq \rho_{k} \leq 1{}_{[(k-1)\epsilon,(k+1)\epsilon]}
\qquad \text{and} \qquad
\sum_{k\in\mathbb Z} \rho_{k} = 1.
\end{equation}
We also define two auxiliary time sequences $\{t^{+}_{N,k}\}_{N\geq 1}$ and $\{t^{-}_{N,k}\}_{N\geq 1}$ by
\begin{equation}
\label{E:tNshift}
\begin{gathered}
t^{+}_{N,k}
= t_{N} - \Big(1 - \tfrac{\deg(\varrho)}{\deg(D_N)} \Big) \epsilon (k-1) \sqrt{2t_{N}} + b_{N}t_{N}^{1/4},\\
t^{-}_{N,k}
= t_{N} - \Big(1 - \tfrac{\deg(\varrho)}{\deg(D_N)} \Big) \epsilon (k+1) \sqrt{2t_{N}} - b_{N}t_{N}^{1/4}.
\end{gathered}
\end{equation}
We then have:
\begin{lemma}
\label{lemma-5.7}
For each~$M>0$ there is~$N_0\in\mathbb N$ such that for all~$N\ge N_0$ and all~$k\in\mathbb Z$ with~$|k|\le M$, the following holds on $\EE_N\cap\{T_N\in\operatorname{supp}(\rho_k)\}$:
\begin{equation}
\label{E:5.34}
\bigl|T_N(t_{N,k}^\pm)-T_N\bigr|\le c_7 b_N/t_N^{1/4}
\end{equation}
and
\begin{equation}
\label{E:5.33b}
\widehat L^{D_{N}}_{t^{-}_{N,k}}(\cdot)
\leq \widetilde L^{D_{N}}_{\deg(D_N) t_{N}}(\cdot)
\leq \widehat L^{D_{N}}_{t^{+}_{N,k}}(\cdot).
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
Fix~$M>0$. As~$b_N\to\infty$ and $b_N t_N^{-1/4}\to0$, we can choose~$N_0\in\mathbb N$ such that $\epsilon (M+1)\sqrt{2t_N}+b_N t_N^{1/4}\le b_N\sqrt{t_N}$ for all~$N\ge N_0$. Then for all~$N\ge N_0$,
\begin{equation}
\bigl|t_{N,k}^\pm-t_N\bigr|\le b_N\sqrt{t_N},\quad -M\le k\le M.
\end{equation}
The bound \eqref{E:5.34} is then implied by \eqref{E:5.33i}.
For \eqref{E:5.33b} we note that, on $\{T_N\in\operatorname{supp}(\rho_k)\}$ we have $(k-1)\epsilon\le T_N\le (k+1)\epsilon$ and thus also
\begin{equation}
\label{E:5.40nw}
t_{N,k}^-\le t_N^\circ-b_N t_N^{1/4}\le t_N^\circ+b_N t_N^{1/4}\le t_{N,k}^+.
\end{equation}
The bound \eqref{E:5.33b} then follows from the inequalities in \eqref{E:evtgood} and the monotonicity of $t\mapsto\widehat L^{D_N}_t(\cdot)$.
\end{proofsect}
The inequalities \eqref{E:5.33b} thus naturally make us consider the level-set measures $\widehat\zeta^{D}_{N}$ along different choices of time sequences than the base sequence~$\{t_N\}_{N\ge1}$. We will explicate the dependence on the time sequence by writing $\widehat\zeta^{D}_{N}(t'_{N})$ whenever it is along $\{t'_{N}\}_{N\geq 1}$ rather than $\{t_{N}\}_{N\geq 1}$, and likewise, we will write $W_{N}(t'_{N})$ for the normalizing constants along $\{t'_{N}\}_{N\geq 1}$.
Next we note:
\begin{lemma}
\label{lemma-5.8}
We have $\deg(\varrho)/\deg(D_N)\to0$ as~$N\to\infty$. In particular, for each~$k\in\mathbb Z$,
\begin{equation}
\label{E:5.42i}
t^{\pm}_{N,k} \sim 2g\theta (\log N)^{2},\quad N\to\infty.
\end{equation}
Moreover,
\begin{equation}
\label{E:5.32}
\begin{gathered}
W_{N}(t^{+}_{N,k}) = W_{N}(t_N) \, \texte^{-\alpha\lambda \epsilon (k-1) + o(1)}, \\
W_{N}(t^{-}_{N,k}) = W_{N}(t_N) \, \texte^{-\alpha\lambda \epsilon (k+1) + o(1)},
\end{gathered}
\end{equation}
where~$o(1)\to0$ uniformly in~$k\in\mathbb Z$ with~$|k|\le M$, for any~$M>0$.
\end{lemma}
\begin{proofsect}{Proof}
We start by showing $\deg(\varrho)/\deg(D_N)\to0$. For this we note that~$\deg(D_N)\ge 4|D_N|$ while, for any~$\delta>0$ and~$N$ sufficiently large, $\deg(\varrho)\le 4|D_N\smallsetminus D_N^\delta|$, where $D_N^\delta:=\{x\in D_N\colon d_\infty(x,D_N^{\text{\rm c}}) >\delta N\}$. Definition~\ref{dfn:admissible} now ensures
\begin{equation}
\limsup_{N\to\infty}\frac{\deg(\varrho)}{\deg(D_N)}\le
\limsup_{N\to\infty}\frac{|D_N\smallsetminus D_N^\delta|}{|D_N|}\le\frac{{\rm Leb}(D\smallsetminus D^{2\delta})}{{\rm Leb}(D)},
\end{equation}
where~$D^\delta:=\{x\in D\colon d_\infty(x,D^{\text{\rm c}})>\delta\}$. As~$D^{2\delta}\uparrow D$ as~$\delta\downarrow0$, we have ${\rm Leb}(D\smallsetminus D^{2\delta})\to0$ as~$\delta\downarrow0$.
With $\deg(\varrho)/\deg(D_N)\to0$ settled, the asymptotic \eqref{E:5.42i} is now checked readily from the definition of~$t^\pm_{N,k}$. The bounds in \eqref{E:5.32} follow similarly from the explicit formula for~$W_N$ and some routine estimates.
\end{proofsect}
We are now ready for:
\begin{proofsect}{Proof of Proposition~\ref{thm-thick-cont}}
Let $f\colon\overline D\times(\mathbb R\cup\{+\infty\})\to[0,\infty)$ be a bounded and continuous function that is non-decreasing in the second coordinate and supported on~$\overline D\times[b,\infty]$ for some~$b\in\mathbb R$. Then \eqref{E:5.32}, \eqref{E:5.33b} and \eqref{E:5.34} show
\begin{equation}
\begin{aligned}
\label{E:5.35}
\texte^{-2\alpha\lambda \epsilon + o(1)} \texte^{-\alpha\lambda T_{N}(t^{-}_{N,k})} \langle \widehat\zeta_{N}^{D}(t^{-}_{N,k}), f \rangle
& \leq \langle \widetilde\zeta_{N}^{D}, f \rangle
\\
&\leq \texte^{2\alpha\lambda \epsilon + o(1)} \texte^{-\alpha\lambda T_{N}(t^{+}_{N,k}) } \langle \widehat\zeta_{N}^{D}(t^{+}_{N,k}), f \rangle
\end{aligned}
\end{equation}
on $\EE_N\cap \{ T_{N} \in \operatorname{supp}(\rho_{k}) \}$, where~$o(1)$ is a deterministic sequence tending to zero uniformly in~$k\in\mathbb Z$ with~$|k|\le M$.
Define the maximal modulus of continuity of~$\{\rho_k\colon|k|\le M\}$ by
\begin{equation}
\text{osc}_{M,\epsilon}(r):=\max_{|k|\le M}\sup_{\begin{subarray}{c}
t,t'\in\mathbb R\\|t-t'|\le r
\end{subarray}}
\bigl|\rho_k(t)-\rho_k(t')\bigr|.
\end{equation}
Relying first on the lower bound of \eqref{E:5.35}, we now estimate
\begin{equation}
\label{E:5.46i}
\begin{aligned}
E^{\varrho}\bigl( \texte^{-\langle \widetilde\zeta_{N}^{D}, f \rangle}& \bigr)-P^{\varrho}(\EE^{\mathrm{c}}_N)
- P^{\varrho}\bigl(|T_{N}| \geq M\epsilon\bigr)
\\
&\leq \sum_{k=- M}^M E^{\varrho} \bigl( \texte^{-\langle \widetilde\zeta_{N}^{D}, f \rangle} \rho_{k}(T_{N})1{}_{\EE_N} \bigr)
\\
&\leq \sum_{k=- M}^M E^{\varrho} \biggl(\texte^{-\texte^{-2\alpha\lambda \epsilon + o(1)} \texte^{-\alpha\lambda T_{N}(t^{-}_{N,k})} \langle \widehat\zeta_{N}^{D}(t^{-}_{N,k}), f \rangle}\rho_{k}(T_{N}) 1_{\EE_N}\biggr)
\\
&\leq (2M+1)\text{osc}_{M,\epsilon}\bigl(c_7b_N/t_N^{1/4}\bigr)
\\&\qquad\quad+
\sum_{k=- M}^M E^{\varrho} \biggl(\texte^{-\texte^{-2\alpha\lambda \epsilon + o(1)} \texte^{-\alpha\lambda T_{N}(t^{-}_{N,k})} \langle \widehat\zeta_{N}^{D}(t^{-}_{N,k}), f \rangle}\rho_{k}\bigl(T_{N}(t_{N,k}^-)\bigr)1_{\EE_N} \biggr),
\end{aligned}
\end{equation}
where in the last step we used \eqref{E:5.34}. The key point is that, dropping the indicator of~$\EE_N$, the $k$-th term in the sum is now a continuous function of the process~$\widehat\zeta_N^D(t_{N,k}^-)$ and the time~$T_N(t_{N,k}^-)$. In light of \eqref{E:5.42i}, Proposition~\ref{thm-4.3} gives
\begin{equation}
E^{\varrho} \biggl(\texte^{-\texte^{-2\alpha\lambda \epsilon + o(1)} \texte^{-\alpha\lambda T_{N}(t^{-}_{N,k})} \langle \widehat\zeta_{N}^{D}(t^{-}_{N,k}), f \rangle}\rho_{k}\bigl(T_{N}(t_{N,k}^-)\bigr)\biggr)
\underset{N\to\infty}\longrightarrow\, E\Bigl(\texte^{-\texte^{-2\alpha\lambda \epsilon}\text{\rm e}\mkern0.7mu^{-\alpha\lambda T}\langle\widehat\zeta^D,f\rangle}\rho_k(T)\Bigr),
\end{equation}
where
\begin{equation}
\label{E:5.48a}
\widehat\zeta^D := \sqrt{\frac{\sqrt{\theta}}{\sqrt{\theta}+\lambda}} \,\fraktura c(\lambda)\, \texte^{\alpha \lambda \mathfrak{d}(x)T} Z_{\lambda}^{D,0}(\text{\rm d}\mkern0.5mu x) \otimes \texte^{-\alpha \lambda h} \text{\rm d}\mkern0.5mu h
\end{equation}
with~$T=\mathcal N(0,\sigma_D^2)$ independent of~$Z_{\lambda}^{D,0}$. Dropping the restriction to~$|k|\le M$, the $N\to\infty$ \myemph{limes superior} of the sum on the extreme right of \eqref{E:5.46i} is then at most $E(\texte^{- \text{\rm e}\mkern0.7mu^{- 2 \alpha \lambda \epsilon}\text{\rm e}\mkern0.7mu^{-\alpha\lambda T}
\langle\widehat\zeta^D,f\rangle})$. Since $\text{osc}_{M,\epsilon}(r)\to0$ as~$r\downarrow0$, taking $N\to\infty$ followed by~$M\to\infty$ and~$\epsilon\downarrow0$ shows
\begin{equation}
\limsup_{N\to\infty}
E^{\varrho}\bigl( \texte^{-\langle \widetilde\zeta_{N}^{D}, f \rangle}\bigr)
\le E\bigl(\texte^{- \text{\rm e}\mkern0.7mu^{-\alpha\lambda T}\langle\widehat\zeta^D,f\rangle}\bigr),
\end{equation}
where the two ``error'' terms on the left-hand side of \eqref{E:5.46i} tend to zero in the stated limits thanks to Lemma~\ref{lemma-5.6} and the Gaussian (asymptotic) tail of~$T_N$.
The argument for a corresponding lower bound is very similar; we need to work with~$t_{N,k}^+$ instead of~$t_{N,k}^-$ and use explicit estimates to get rid of the indicator~$1_{\EE_N}$ and the restriction to the range of~$k$ in the sum. In conclusion, we get
\begin{equation}
\lim_{N\to\infty}
E^{\varrho}\bigl( \texte^{-\langle \widetilde\zeta_{N}^{D}, f \rangle}\bigr)
= E\bigl(\texte^{- \text{\rm e}\mkern0.7mu^{-\alpha\lambda T}\langle\widehat\zeta^D,f\rangle}\bigr)
\end{equation}
for any function~$f$ as above.
This is sufficient to give $\widetilde\zeta_N^D{\,\overset{\text{\rm law}}\longrightarrow\,}\text{\rm e}\mkern0.7mu^{-\alpha\lambda T}\widehat\zeta^D$, as desired.
\end{proofsect}
For the thin points we now get:
\begin{proposition}[Continuous-time thin points]
\label{thm-thin-cont}
Under the setting and notation of Theorem~\ref{thm-thin} and for the walk started at the ``boundary vertex,'' we have
\begin{equation}
\label{E:1.21}
\widetilde\zeta^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{\frac{\sqrt{\theta}}{\sqrt{\theta}-\lambda}}\,\,\fraktura c(\lambda)\,\texte^{-\alpha \lambda (\mathfrak{d}(x) - 1) T}\,Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\text{\rm e}\mkern0.7mu^{+\alpha\lambda h}\text{\rm d}\mkern0.5mu h,
\end{equation}
where $T$ and $Z_{\lambda}^{D,0}$ are independent with $T \sim \NN(0, \sigma_{D}^{2})$.
\end{proposition}
\begin{proofsect}{Proof}
The argument is similar to that for the thick points: We need to work with compact\-ly-supported, continuous test functions $f\colon\overline D\times(\mathbb R\cup\{-\infty\})\to[0,\infty)$ that are non-increasing in the second coordinate.
The change in monotonicity effectively swaps the inequalities in \eqref{E:5.35} and, due to a sign change in \eqref{E:5.32}, also flips the sign in the exponent of $\texte^{-\alpha\lambda T_N(t^\pm_{N,k})}$. We also need to rely on Proposition~\ref{thm-4.4} instead of Proposition~\ref{thm-4.3}. We leave further details to the reader.
\end{proofsect}
Moving to the light points, we define
\begin{equation}
\widetilde\vartheta^{D}_{N} := \frac{1}{\widehat W_{N}} \sum_{x \in D_{N}} \delta_{x/N} \otimes \delta_{\widetilde L^{D_{N}}_{\deg(D_N) t_{N}}(x)}
\end{equation}
and state:
\begin{proposition}[Continuous-time light points]
\label{thm-light-cont}
Under the setting and assumptions of Theorem~\ref{thm-light} and for the walk started at the ``boundary vertex,'' we have
\begin{equation}
\label{E:2.31ii}
\widetilde\vartheta^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\, \texte^{-\alpha \sqrt{\theta} (\mathfrak{d}(x) - 1) T}\,Z_{\sqrt\theta}^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\tilde\mu(\text{\rm d}\mkern0.5mu h),
\end{equation}
where $T=\mathcal N(0,\sigma_D^2)$ is independent of $Z_{\sqrt\theta}^{D,0}$ and $\tilde\mu$ is the measure in \eqref{E:tilde-mu}.
\end{proposition}
\begin{proofsect}{Proof}
Relying on our convention concerning different time sequences, we start by noting
\begin{equation}
\label{E:5.32u}
\begin{gathered}
\widehat W_{N}(t^{+}_{N,k}) = \widehat W_{N}(t_N) \, \texte^{\alpha\sqrt\theta \epsilon (k-1) + o(1)}, \\
\widehat W_{N}(t^{-}_{N,k}) = \widehat W_{N}(t_N) \, \texte^{\alpha\sqrt\theta \epsilon (k+1) + o(1)}.
\end{gathered}
\end{equation}
Given a compactly-supported, continuous function $f\colon\overline D\times[0,\infty)\to[0,\infty)$ that is non-increasing in the second coordinate, from \eqref{E:5.32u}, \eqref{E:5.33b} and \eqref{E:5.34} we then have
\begin{equation}
\begin{aligned}
\label{E:5.35u}
\texte^{-2\alpha\sqrt\theta \epsilon + o(1)} \texte^{\alpha\sqrt\theta T_{N}(t^{+}_{N,k}) } \langle \widehat\vartheta_{N}^{D}(t^{+}_{N,k}), f \rangle
& \leq \langle \widetilde\vartheta_{N}^{D}, f \rangle
\\
&\leq \texte^{2\alpha\sqrt\theta \epsilon + o(1)} \texte^{\alpha\sqrt\theta T_{N}(t^{-}_{N,k})} \langle \widehat\vartheta_{N}^{D}(t^{-}_{N,k}), f \rangle.
\end{aligned}
\end{equation}
The rest of the argument for the thick points (with Proposition~\ref{thm-light-bv} instead of Proposition~\ref{thm-4.3}) can now be applied to get
\begin{equation}
\langle \widetilde\vartheta_{N}^{D}, f \rangle \,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\, \texte^{+\alpha\sqrt\theta T}\langle\widehat\vartheta^D,f\rangle,
\end{equation}
where
\begin{equation}
\widehat\vartheta^D:=\sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\,\text{\rm e}\mkern0.7mu^{-\alpha\sqrt\theta\,\fraktura d(x)T}\, Z_{\sqrt{\theta}\,}^{D,0}(\text{\rm d}\mkern0.5mu x)\otimes\tilde\mu(\text{\rm d}\mkern0.5mu h).
\end{equation}
The claim now follows by a density argument.
\end{proofsect}
Finally, for the avoided points we set
\begin{equation}
\widetilde\kappa^{D}_{N} := \frac{1}{\widehat W_{N}} \sum_{x \in D_{N}} 1_{\{\widetilde L^{D_{N}}_{\deg(D_N) t_{N}}(x)=0\}}\,\delta_{x/N}
\end{equation}
and state:
\begin{proposition}[Continuous-time avoided points]
\label{thm-avoid-cont}
Under the setting and assumptions of Theorem~\ref{thm-light} and for the walk started at the ``boundary vertex,'' we have
\begin{equation}
\label{E:2.27cont}
\widetilde\kappa^D_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\, \sqrt{2\pi g}\,\fraktura c(\sqrt\theta)\,\, \texte^{-\alpha \sqrt{\theta} (\mathfrak{d}(x) - 1) T}\,Z_{\sqrt\theta}^{D,0}(\text{\rm d}\mkern0.5mu x),
\end{equation}
where $T=\mathcal N(0,\sigma_D^2)$ is independent of $Z_{\sqrt\theta}^{D,0}$.
\end{proposition}
\begin{proofsect}{Proof}
Given a continuous $f\colon\overline D\to\mathbb R$, the inequalities in \eqref{E:5.35u}
apply with $\widetilde\vartheta^D_N$, resp., $\widehat\vartheta^D_N$ replaced by $\widetilde\kappa^D_N$, resp., $\widehat\kappa^D_N$. The argument then proceeds as for Proposition~\ref{thm-light-cont}.
\end{proofsect}
\section{Arbitrary starting points}
\label{sec6}\noindent
As our next item of business, we augment the continuous-time conclusions from the previous section to allow the random walk to start at an arbitrary point of~$D_N$. The formal statement is the content of:
\begin{theorem}[Arbitrary starting points]
\label{thm-6.1}
The statements of Propositions~\ref{thm-thick-cont}, \ref{thm-thin-cont}, \ref{thm-light-cont} and \ref{thm-avoid-cont} apply for random walk starting from an arbitrary point~$x_N\in D_N$.
\end{theorem}
We will start with the thick points as that is the hardest case. Assume that~$\{a_N\}_{N\ge1}$ and~$\{t_N\}_{N\ge1}$ satisfy the conditions of Proposition~\ref{thm-thick-cont}. The integrals of $\{\widetilde\zeta^D_N\colon N\ge1\}$ from \eqref{E:5.29b} against $f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{+\infty\}))$ are tight random variables.
Our strategy is to use the strong Markov property after the first hitting of the ``boundary vertex.'' For this let us recall that~$H_x$ denotes the first hitting time of vertex~$x$ and let~$\theta_t$ denote the shift on the path space acting as $(\widetilde X\circ\theta_t)_s=\widetilde X_{t+s}$. We will write $\{(\widetilde L^{D_N}\circ\theta_t)_s\colon s\ge0\}$ for the local time process associated with the time-shifted path $\{(\widetilde X\circ\theta_t)_s\colon s\ge0\}$. Our first observation is then:
\begin{lemma}
\label{lemma-6.2}
On $\{H_\varrho<t\}$, we have
\begin{equation}
\label{E:6.1}
\widetilde L_t^{D_N}(\cdot) = \widetilde L^{D_N}_{H_\varrho}(\cdot)+(\widetilde L^{D_N}\circ\theta_{H_\varrho})_{t-H_\varrho}(\cdot).
\end{equation}
In particular, under the conditions of Proposition~\ref{thm-thick-cont}, for any $f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{+\infty\}))$ that is non-decreasing in the second variable
and any $x_N\in D_N$,
\begin{equation}
\label{E:6.2}
\limsup_{N\to\infty} E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}\bigr)
\le \lim_{N\to\infty}\,E^{\varrho}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}\bigr).
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
The relation \eqref{E:6.1} is a direct consequence of the additivity of the local time. As to \eqref{E:6.2}, for~$f$ as above and any~$m>0$ with $t_N>m$, dropping the term $\widetilde L^{D_N}_{H_\varrho}$ while noting that $W_N(t_N-m)\ge \text{\rm e}\mkern0.7mu^{-c \frac{m}{\log N}} W_N(t_N)$ for some $c > 0$ shows
\begin{equation}
\bigl\langle\widetilde\zeta^{D}_N(t_N),f\bigr\rangle\ge
\text{\rm e}\mkern0.7mu^{- c \frac{m}{\log N}}
\bigl\langle\widetilde\zeta^{D}_N(t_N-m),f\bigr\rangle\circ\theta_{H_\varrho}\quad\text{on }\{H_\varrho< m\deg(D_N)\}.
\end{equation}
The strong Markov property then gives
\begin{equation}
\label{E:6.4}
\begin{aligned}
E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N(t_N),f\rangle}\bigr)&
\le P^{x_N}\bigl(H_\varrho\ge m\deg(D_N)\bigr)+E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N(t_N),f\rangle}1_{\{H_\varrho<m\deg(D_N)\}}\bigr)
\\
&\le
P^{x_N}\bigl(H_\varrho\ge m\deg(D_N)\bigr)+E^{\varrho}
\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{-c \frac{m}{\log N}}\langle\widetilde\zeta^D_N(t_N-m),f\rangle}\bigr).
\end{aligned}
\end{equation}
Since the random walk on~$D_N$ coincides with the random walk on~$\mathbb Z^2$ until time~$H_\varrho$, the Central Limit Theorem shows that the probability tends to zero in the limits~$N\to\infty$ and~$m\to\infty$. The expectation on the right converges by Proposition~\ref{thm-thick-cont}.
\end{proofsect}
Our next goal is to prove a complementary bound to \eqref{E:6.2} for the \myemph{limes inferior}. For this we must control the effect of the first term on the right of \eqref{E:6.1}. Writing $\{(\widehat L^{D_N}\circ\theta_t)_s\colon s\ge0\}$ for the local time of the process $\widetilde X\circ\theta_t$ parametrized at the time spent at the boundary vertex, we then have:
\begin{lemma}
\label{lemma-6.3}
Under the conditions of Proposition~\ref{thm-thick-cont}, for each~$b\in\mathbb R$ there is~$c>0$ such that for all $N\ge1$ and all~$x\in D_N$,
\begin{equation}
\label{E:6.5uie}
\sum_{z\in D_N}
P^x\biggl(\widetilde L^{D_N}_{H_\varrho}(z)+(\widehat L^{D_N}\circ\theta_{H_\varrho})_{t_N}(z)\ge a_N+b\log N,\, H_z<H_\varrho\biggr)
\le c\frac {W_N}{\log N}.
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
Let us for simplicity assume (e.g., by redefining~$a_N$) that~$b=0$. The strong Markov property bounds the probability under the sum by
\begin{equation}
\label{E:6.6}
\sum_{m\ge0}P^x\bigl(H_z<H_\varrho\bigr)
P^z\Bigl(\widetilde L^{D_N}_{H_\varrho}(z)\ge mG^{D_N}(z,z)\Bigr) P^\varrho\Bigl(\widehat L^{D_N}_{t_N}(z)\ge a_N- (m+1)G^{D_N}(z,z)\Bigr).
\end{equation}
We start by estimating the second term.
Denoting $p:=P^z(\hat H_z<H_\varrho)$, where $\hat H_z$ is the first return time to~$z$, we have $\widetilde L^{D_N}_{H_\varrho}(z)\,\overset{\text{\rm law}}=\, \frac{1}{4}\sum_{i=1}^K\tau_i$ for $K:=\text{Geometric}(p)$ and $\tau_1,\tau_2,\dots$ i.i.d.\ Exponential(1) independent of~$K$. For any $q\in(0,1)$, the Chernoff bound gives
\begin{equation}
P\biggl(\sum_{i=1}^K \tau_i>r\biggr)\underset{0\le s<1-p}\le \text{\rm e}\mkern0.7mu^{-sr}\frac{1-p}{1-p-s}
\underset{s:=q(1-p)}\le \frac1{1-q}\text{\rm e}\mkern0.7mu^{-rq(1-p)}.
\end{equation}
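(Here the first inequality combines the exponential Markov bound with $E[\text{\rm e}\mkern0.7mu^{s\tau_1}]=(1-s)^{-1}$ and the fact that, for~$K$ geometric with success probability~$1-p$, $E[(1-s)^{-K}]\le\frac{1-p}{1-p-s}$ whenever $0\le s<1-p$.)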
As $1-p = P^z(\hat H_z>H_\varrho)=\frac1{4G^{D_N}(z,z)}$, we thus get
\begin{equation}
\label{E:6.9}
P^z\Bigl(\widetilde L^{D_N}_{H_\varrho}(z)\ge m G^{D_N}(z,z)\Bigr)\le \frac1{1-q}\text{\rm e}\mkern0.7mu^{-mq},\quad m\ge0,
\end{equation}
for all~$q\in(0,1)$.
Using \eqref{E:6.9} in conjunction with the uniform estimate $G^{D_N}(z,z)\le g\log N+c$, we dominate the part of the sum in \eqref{E:6.6} for~$m$ satisfying
$(m+2) G^{D_N}(z,z)\ge a_N - t_N$ by a quantity of order $N^{-2q[(\sqrt\theta+\lambda)^2-\theta+o(1)]}$. Recalling that~$W_N=N^{2(1-\lambda^2)+o(1)}$, this is $o(W_N N^{-2}/\log N)$ when~$1-q>0$ is so small that $q[(\sqrt\theta+\lambda)^2-\theta]>\lambda^2$.
In the complementary regime, we have $a_N-(m+2) G^{D_N}(z,z)>t_N$ which permits us to estimate the last term on the right of \eqref{E:6.6} via \cite[Lemma~4.1]{AB} with the choices $a:=a_N$, $t:=t_N$ and~$b:=(m+2) G^{D_N}(z,z)$ to get
\begin{multline}
\label{E:6.10}
P^z\Bigl(\widetilde L^{D_N}_{H_\varrho}(z)\ge mG^{D_N}(z,z)\Bigr) P^\varrho\Bigl(\widehat L^{D_N}_{t_N}(z)\ge a_N- (m+1)G^{D_N}(z,z)\Bigr)
\\
\le
\frac1{1-q}\,\frac{\sqrt{G^{D_N} (z,z)}}{\sqrt{2a_N - 2(m+1)G^{D_N} (z,z)} - \sqrt{2t_N}}
\frac{\sqrt{\log N}}{N^2} W_N\,\text{e}^{-qm + (m+1)\frac{\sqrt{2a_N}-\sqrt{2t_N}}{\sqrt{2a_N}}}
\end{multline}
As $\frac{\sqrt{2a_N}-\sqrt{2t_N}}{\sqrt{2a_N}}\to\frac{\lambda}{\sqrt\theta+\lambda}$ as~$N\to\infty$, we choose~$q\in(\frac{\lambda}{\sqrt\theta+\lambda},1)$ and proceed as follows: For $(m+1) G^{D_N} (z, z) > \frac{1}{2} (a_N - t_N)$, the prefactor is order $\sqrt{\log N}\, W_N/N^2$ but, thanks to the uniform upper bound on~$G^{D_N}(z,z)$, the sum of the exponential terms decays polynomially with~$N$. For~$m$ with $(m+1) G^{D_N} (z, z) \leq \frac{1}{2} (a_N - t_N)$, the prefactor is order $W_N/N^2$ and the sum of the exponentials is bounded.
Combining the above estimates, the sum in \eqref{E:6.5uie} is bounded by a quantity of order
\begin{equation}
\label{E:6.10iu}
o\Bigl(\frac{W_N}{\log N}\Bigr)+
\frac{W_N}{N^2}\sum_{z\in D_N}P^x\bigl(H_z<H_\varrho\bigr).
\end{equation}
Interpreting~$H_\varrho$ as the first exit time of the simple random walk on~$\mathbb Z^2$ from~$D_N$, the sum on the right is non-decreasing in~$D_N$. We may thus assume that~$D_N$ is a box of side-length~$2^n$, for~$n=\log_2 N+O(1)$, centered at~$x$. For the probability under the sum we then get, for each~$k=0,\dots,n-1$ and some constant~$c>0$,
\begin{equation}
P^x (H_z < H_{\varrho}) = \frac{G^{D_N} (x, z)}{G^{D_N} (z, z)}
\le c\frac {n-k}n,\qquad 2^{k}<|x-z|\le 2^{k+1}.
\end{equation}
The sum in \eqref{E:6.10iu} is thus at most of order $1+\sum_{k=0}^n\frac{n-k}n\,2^{2k}$ which is of order $N^2/\log N$. The claim follows.
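Indeed, substituting $j:=n-k$,
\begin{equation}
\sum_{k=0}^n\frac{n-k}n\,2^{2k}=\frac{4^n}n\sum_{j=0}^n j\,4^{-j}\le\frac{4^n}n\sum_{j\ge0}j\,4^{-j}=\frac49\,\frac{4^n}n,
\end{equation}
and $4^n$ is of order~$N^2$ while $n$ is of order~$\log N$.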
\end{proofsect}
We are now ready to give:
\begin{proofsect}{Proof of Theorem~\ref{thm-6.1}, thick points}
Consider a non-negative~$f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{+\infty\}))$ that is non-decreasing in the second variable and supported in $\overline D\times[b,\infty)$ for some~$b\in\mathbb R$.
Note that~$\{H_\varrho<\infty\}$ is a full probability event under~$P^x$. Decomposing the support of~$\widetilde\zeta^D_N$ according to whether the point was hit before hitting the boundary vertex or not, the monotonicity of~$t\mapsto\widetilde L^{D_N}_t$ and the assumed monotonicity of~$f$ yield
\begin{multline}
\label{E:6.11}
\quad
\langle\widetilde\zeta^D_N,f\rangle\le\langle\widetilde\zeta^D_N,f\rangle\circ \theta_{H_\varrho}
\\
+\frac{\Vert f\Vert_\infty}{W_N}\sum_{z\in D_N}1_{\{H_z<H_\varrho\}}\,1_{\{\widetilde L^{D_N}_{H_\varrho}(z)+(\widetilde L^{D_N}\circ\theta_{H_\varrho})_{t_N\deg(D_N)}(z)\ge a_N+ b \sqrt{2 a_N} \}}.
\quad
\end{multline}
Fix a sequence~$b_N\to\infty$ such that~$b_N/t_N^{1/4}\to0$ and let~$\mathcal F_N$ be the event that the inequalities in \eqref{E:5.6i} hold. Fix any~$m>0$ and~$\epsilon>0$. Let~$\mathcal G_N$ be the event that the second term on the right of \eqref{E:6.11} is less than~$\epsilon$. Then
\begin{equation}
\label{E:6.12}
\begin{aligned}
E^x\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}\bigr)
&\ge
E^x\Bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}1_{\theta_{H_\varrho}^{-1}(\mathcal F_N\cap\{T_N\ge-m\})}\Bigr)
\\
&\ge \text{\rm e}\mkern0.7mu^{-\epsilon}
E^x\Bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle\circ\theta_{H_\varrho}}1_{\theta_{H_\varrho}^{-1}(\mathcal F_N\cap\{T_N\ge-m\})}
\Bigr)
\\
&\qquad\qquad\qquad-P^x\Bigl(\mathcal G_N^{\text{\rm c}}\cap\theta_{H_\varrho}^{-1}(\mathcal F_N\cap\{T_N\ge-m\})\Bigr).
\end{aligned}
\end{equation}
As~$P^x(H_\varrho<\infty)=1$, the strong Markov property gives
\begin{equation}
\begin{aligned}
E^x\Bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle\circ\theta_{H_\varrho}}1_{\theta_{H_\varrho}^{-1}(\mathcal F_N\cap\{T_N\ge-m\})}
\Bigr)
&=E^\varrho\Bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}1_{\mathcal F_N\cap\{T_N\ge-m\}}
\Bigr)
\\
&\ge E^\varrho\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}\bigr)-P^\varrho\bigl((\mathcal F_N\cap\{T_N\ge-m\})^{\text{\rm c}}\bigr).
\end{aligned}
\end{equation}
Proposition~\ref{P:tNbound} and the fact that~$\{T_N\colon N\ge1\}$ is tight now ensure that the probability on the right tends to zero in the limits~$N\to\infty$ and~$m\to\infty$.
Concerning the probability on the right of \eqref{E:6.12}, an inspection of \eqref{E:tNcirc} shows that, on $\theta_{H_\varrho}^{-1}(\mathcal F_N\cap\{T_N\ge-m\})$, we have
\begin{equation}
(\widetilde L^{D_N}\circ\theta_{H_\varrho})_{t_N\deg(D_N)}(\cdot)
\le(\widehat L^{D_N}\circ \theta_{H_\varrho})_{t_N+b_Nt_N^{1/4}+m\sqrt{2t_N}}(\cdot).
\end{equation}
By the Markov inequality, the probability in \eqref{E:6.12} is thus bounded by $\epsilon^{-1}\Vert f\Vert_\infty/W_N(t_N)$ times the sum in Lemma~\ref{lemma-6.3} albeit with~$t_N$ replaced by $t_N':=t_N+b_Nt_N^{1/4}+m\sqrt{2t_N}$. As~$W_N(t_N')/W_N(t_N)$ is bounded by an~$m$-dependent constant uniformly in~$N$, the probability in \eqref{E:6.12} is thus $O(1/\log N)$ uniformly in~$x\in D_N$.
Taking~$N\to\infty$ followed by~$m\to\infty$ and~$\epsilon\downarrow0$ shows
\begin{equation}
\liminf_{N\to\infty} E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}\bigr)
\ge \lim_{N\to\infty}\,E^{\varrho}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\zeta^D_N,f\rangle}\bigr).
\end{equation}
Combining with \eqref{E:6.2}, we then get the desired claim.
\end{proofsect}
The situation for the thin, light and avoided points is similar albeit simpler. Writing $\widetilde\xi^D_N$ for the corresponding continuous-time point measure (parametrized by the total time), as in Lemma~\ref{lemma-6.2}, the identity \eqref{E:6.1} gives us an easy one-way bound, where the test function~$f$ is defined on~$\overline D\times(\mathbb R\cup\{-\infty\})$ for the thin points, on~$\overline D\times[0,\infty)$ for the light points and on~$\overline D$ for the avoided points:
\begin{lemma}
\label{lemma-6.4}
Under the conditions of Propositions~\ref{thm-thin-cont}, \ref{thm-light-cont} and \ref{thm-avoid-cont}, for any $x_N\in D_N$ and any continuous, compactly-supported, non-negative test function $f$ on the corresponding domain that, for the thin and light points, is non-increasing in the second variable,
\begin{equation}
\liminf_{N\to\infty} E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N,f\rangle}\bigr)
\ge \lim_{N\to\infty}\,E^{\varrho}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N,f\rangle}\bigr).
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
Using \eqref{E:6.1}, on $\{H_\varrho< m\deg(D_N)\}$ we get
\begin{equation}
\bigl\langle\widetilde\xi^{D}_N(t_N),f\bigr\rangle\le
\text{\rm e}\mkern0.7mu^{c \frac{m}{\log N}}
\bigl\langle\widetilde\xi^{D}_N(t_N-m),f\bigr\rangle\circ\theta_{H_\varrho},
\end{equation}
where we now rely on the fact that~$t\mapsto W_N(t)$, resp., $t\mapsto\widehat W_N(t)$ are non-increasing for~$t$ near~$t_N$. The inequalities \eqref{E:6.4} then become
\begin{equation}
\begin{aligned}
E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N(t_N),f\rangle}\bigr)
&\ge E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N(t_N),f\rangle}1_{\{H_\varrho<m\deg(D_N)\}}\bigr) \\
&\ge E^{\varrho}\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{c \frac{m}{\log N}}\langle\widetilde\xi^D_N(t_N-m),f\rangle}\bigr)-P^{x_N}\bigl(H_\varrho\ge m\deg(D_N)\bigr).
\end{aligned}
\end{equation}
The claim now follows by taking~$N\to\infty$ followed by~$m\to\infty$.
\end{proofsect}
In place of Lemma~\ref{lemma-6.3}, we then need:
\begin{lemma}
\label{lemma-6.5}
Under the conditions of Proposition~\ref{thm-thin-cont}, for each~$b\in\mathbb R$ there is~$c>0$ such that for all $N\ge1$ and all~$x\in D_N$,
\begin{equation}
\label{E:6.20uiu}
\sum_{z\in D_N}
P^x\biggl((\widehat L^{D_N}\circ\theta_{H_\varrho})_{t_N}(z)\le a_N+b\log N,\, H_z<H_\varrho\biggr)
\le c\frac {W_N}{\log N}.
\end{equation}
Under the conditions of Propositions~\ref{thm-light-cont} and \ref{thm-avoid-cont} the same holds with $a_N+b\log N$ replaced by~$b\ge0$ (including, for the avoided points,~$b=0$) and~$W_N$ replaced by~$\widehat W_N$.
\end{lemma}
\begin{proofsect}{Proof}
The strong Markov property and the estimates from~\cite[Corollary~4.8]{AB} bound the probability in \eqref{E:6.20uiu} by $P^x(H_z<H_\varrho)$ times
\begin{equation}
P^\varrho\bigl(\widehat L^{D_N}_{t_N}(z)\le a_N + b \log N \bigr)\le c\frac{W_N}{N^2}
\end{equation}
and so the quantity in \eqref{E:6.20uiu} is at most order $W_N N^{-2}\sum_{z \in D_N}P^x(H_z<H_\varrho)$. The argument then concludes as in the proof of Lemma~\ref{lemma-6.3}. For the light and avoided points, we instead invoke \cite[Corollary~4.6]{AB} and proceed analogously.
\end{proofsect}
With this we get:
\begin{proofsect}{Proof of Theorem~\ref{thm-6.1}, thin, light and avoided points}
We proceed similarly as for the thick points. First, writing~$\widetilde a_N:=a_N+b\log N$ for the thin points and~$\widetilde a_N:=b$ for the light and (with~$b:=0$) avoided points, given a continuous, compactly-supported~$f$ that is non-increasing in the second variable, in all three cases of interest we have
\begin{equation}
\label{E:6.20}
\langle\widetilde\xi^D_N,f\rangle\ge\langle\widetilde\xi^D_N,f\rangle\circ \theta_{H_\varrho}
-\frac{\Vert f\Vert_\infty}{W_N}\sum_{z\in D_N}1_{\{H_z<H_\varrho\}}\,1_{\{(\widetilde L^{D_N}\circ\theta_{H_\varrho})_{t_N\deg(D_N)-H_\varrho}(z)\le \widetilde a_N\}}.
\end{equation}
Let $\mathcal{F}_N$ be the event from \eqref{E:5.6i} with $t_N$ replaced by $t_N - m$.
Abusing our earlier notation, given~$\epsilon>0$, let $\mathcal G_N$ be the event that the second term on the right of \eqref{E:6.20} (without the minus sign) is at most~$\epsilon$. From \eqref{E:6.20}, we then get
\begin{equation}
\label{E:6.21}
\begin{aligned}
E^x\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N,f\rangle}&\bigr)-P^x\bigl(H_\varrho\ge m\deg(D_N)\bigr)
-P^\varrho\bigl((\mathcal F_N\cap \{T_N (t_N - m) \leq m \} )^{\text{\rm c}}\bigr)
\\
&\le
E^x\Bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N,f\rangle}1_{\{H_\varrho<m\deg(D_N)\}}1_{\theta_{H_\varrho}^{-1}(\mathcal F_N\cap \{T_N (t_N - m) \leq m \} )}\Bigr)
\\
&\le\text{\rm e}\mkern0.7mu^\epsilon E^\varrho\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N,f\rangle}\bigr)
\\
&\qquad\quad+P^x\Bigl(\mathcal G_N^{\text{\rm c}}\cap \{H_\varrho<m\deg(D_N)\}\cap\theta_{H_\varrho}^{-1}(\mathcal F_N\cap \{T_N (t_N - m) \leq m \} )\Bigr).
\end{aligned}
\end{equation}
Thanks to the Central Limit Theorem, the tightness of~$\{T_N\colon N\ge1\}$ and Proposition~\ref{P:tNbound}, the two probabilities on the left-hand side of \eqref{E:6.21} tend to zero in the limits~$N\to\infty$ and~$m\to\infty$, uniformly in~$x\in D_N$. For the probability on the right we observe that, on $\{H_\varrho<m\deg(D_N)\}\cap\theta_{H_\varrho}^{-1}(\mathcal F_N\cap
\{T_N (t_N - m) \leq m \} )$, we have
\begin{equation}
(\widetilde L^{D_N}\circ\theta_{H_\varrho})_{t_N\deg(D_N)-H_\varrho}(\cdot)
\ge(\widehat L^{D_N}\circ\theta_{H_\varrho})_{t_N'}(\cdot)
\end{equation}
for~$t_N':=t_N-m-b_N t_N^{1/4}-m\sqrt{2t_N}$. Lemma~\ref{lemma-6.5} and the Markov inequality then bound the probability by an $m$-dependent constant times~$1/\log N$, uniformly in~$x\in D_N$.
Combining these observations we thus get
\begin{equation}
\limsup_{N\to\infty} E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N,f\rangle}\bigr)
\le \lim_{N\to\infty}\,E^{\varrho}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\widetilde\xi^D_N,f\rangle}\bigr).
\end{equation}
In conjunction with Lemma~\ref{lemma-6.4} this proves the claim.
\end{proofsect}
\section{Discrete time conclusions}
\label{sec7}\noindent
We will now move to the proof of our main results except those on the local structure, which are deferred to Section~\ref{sec8}. Considering, for a moment, a random walk on a general finite, connected graph on $V\cup\{\varrho\}$, recall that the discrete-time local time $L^V_t$ is parametrized by the total number of steps in units of~$\deg(V)=\sum_{u\in V\cup\{\varrho\}}\deg(u)$ while its continuous-time counterpart $\widetilde L^V_t$ is parametrized by the total time. Both of these are naturally realized on the same probability space through the definition \eqref{E:tilde-X} of~$\widetilde X$ via the discrete-time walk~$X$ and an independent (rate-1) Poisson point process~$\widetilde N(t)$. A key technical tool is the following lemma:
\begin{lemma}
\label{lemma-7.1}
There is a family of i.i.d.\ exponentials $\{\tau_j(v)\colon j\ge1, v\in V\}$
with parameter $1$
independent of~$X$ (but not of~$\widetilde N$) such that
\begin{equation}
\label{E:8.1a}
\widetilde L_t^V(v) = \frac1{\deg(v)}\sum_{j\ge1}\tau_j(v)1_{\{j\le\deg(v) L^V_{\widetilde N(t)/\deg(V)}(v)\}},\quad v\ne \widetilde X_t,
\end{equation}
holds $P^x$-a.s.\ for each $t\ge0$ and each~$x\in V\cup\{\varrho\}$.
\end{lemma}
\begin{proofsect}{Proof}
This is a consequence of the standard representation of the wait times of~$\widetilde X$ by independent exponentials. (In this representation, the process~$\widetilde N$ is a function of the exponentials and~$X$, albeit independent of~$X$.) Note that the equality \eqref{E:8.1a} fails at~$\widetilde X_t$ because the walk is ``in-between'' jumps there.
\end{proofsect}
Moving back to the random walk on~$D_N\cup\{\varrho\}$, this readily yields:
\begin{lemma}
\label{lemma-7.2}
For each~$x\in D_N$, abbreviate
\begin{equation}
\label{E:7.2}
\mathcal F_{N}(x):=\biggl\{\widetilde L^{D_N}_{(t_N-1)\deg(D_N)}(x)\le\frac14\sum_{j\ge1}\tau_j(x)1_{\{j\le 4L^{D_N}_{t_N}(x)\}}\le\widetilde L^{D_N}_{(t_N+1)\deg(D_N)}(x)\biggr\}.
\end{equation}
Then for any~$x_N\in D_N$,
\begin{equation}
P^{x_N}\Bigl(\,\sum_{x\in D_N}1_{\mathcal F_N(x)^{\text{\rm c}}}>2\Bigr)\,\underset{N\to\infty}\longrightarrow\,0.
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
The Central Limit Theorem ensures that~$(\widetilde N(t)-t)/\sqrt{t}$ tends in law to a standard normal as~$t\to\infty$. As~$t_N=o(\deg(D_N))$, the inequalities
\begin{equation}
\label{E:7.4}
\frac{\widetilde N((t_N-1)\deg(D_N))}
{\deg(D_N)}\le t_N\le \frac{\widetilde N((t_N+1)\deg(D_N))}
{\deg(D_N)}
\end{equation}
are satisfied with probability tending to one as~$N\to\infty$. Once \eqref{E:7.4} is in force, the monotonicity of $t\mapsto L^{D_N}_t$ and \eqref{E:8.1a} show that the event~$\mathcal F_N(x)$ occurs at all~$x\in D_N$ except perhaps at the position of~$\widetilde X$ at times $(t_N\pm 1)\deg(D_N)$.
\end{proofsect}
With these observations in hand, we are now ready to finally present the proofs of our main theorems. The easiest case is that of avoided points:
\begin{proofsect}{Proof of Theorem~\ref{thm-avoid}}
Note that, whenever~$\mathcal F_N(x)$ occurs, $\widetilde L^{D_N}_{(t_N+1)\deg(D_N)}(x)=0$ forces $L^{D_N}_{t_N}(x)=0$ (a.s.), which in turn forces $\widetilde L^{D_N}_{(t_N-1)\deg(D_N)}(x)=0$. For any~$f\in C_{\text{\rm c}}(\overline D)$ with~$f\ge0$, on the event $\sum_{x\in D_N}1_{\mathcal F_N(x)^{\text{\rm c}}}\le 2$ we thus have
\begin{equation}
\label{E:7.5}
\begin{aligned}
\frac{\widehat W_N(t_N+1)}{\widehat W_N(t_N)}\bigl\langle\widetilde\kappa^D_N(t_N+1),f\bigr\rangle-\frac2{\widehat W_N} \Vert f\Vert_\infty
&\le \bigl\langle\kappa^D_N,f\bigr\rangle
\\
&\le
\frac{\widehat W_N(t_N-1)}{\widehat W_N(t_N)}\bigl\langle\widetilde\kappa^D_N(t_N-1),f\bigr\rangle
+\frac2{\widehat W_N} \Vert f\Vert_\infty.
\end{aligned}
\end{equation}
As $\{t_N\pm1\}_{N\ge1}$ have the same leading-order asymptotic as~$\{t_N\}_{N\ge1}$, the random variables $\langle\widetilde\kappa^D_N(t_N\pm1),f\rangle$ have the same weak limit as $\langle\widetilde\kappa^D_N,f\rangle$. Since~$\widehat W_N\to\infty$ and also
\begin{equation}
\frac{\widehat W_N(t_N\pm1)}{\widehat W_N(t_N)}\,\underset{N\to\infty}\longrightarrow\,1,
\end{equation}
the claim follows from Lemma~\ref{lemma-7.2}, Proposition~\ref{thm-avoid-cont} and Theorem~\ref{thm-6.1}.
\end{proofsect}
Next we tackle the light points:
\begin{proofsect}{Proof of Theorem~\ref{thm-light}}
Denote
\begin{equation}
\label{E:7.7}
\overline L^{D_N}_{t_N}(x):=\frac14\sum_{j\ge1}\tau_j(x)1_{\{j\le 4L^{D_N}_{t_N}(x)\}}
\end{equation}
and consider the auxiliary point measure
\begin{equation}
\overline\vartheta^D_N:=\frac1{\widehat W_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{\overline L^{D_N}_{t_N}(x)}.
\end{equation}
Thanks to Lemma~\ref{lemma-7.2}, on the event $\sum_{x\in D_N}1_{\mathcal F_N(x)^{\text{\rm c}}}\le 2$, the inequality \eqref{E:7.5} holds for any non-negative $f\in C_{\text{\rm c}}(\overline D\times[0,\infty))$ that is non-increasing in the second variable and with~$\widetilde\kappa^D_N$, resp., $\kappa^D_N$ replaced by $\widetilde\vartheta^D_N$, resp., $\overline\vartheta^D_N$. As, by Proposition~\ref{thm-light-cont} and Theorem~\ref{thm-6.1}, $\widetilde\vartheta^D_N$ tends in law to the measure~$\widetilde\vartheta^D$ on the right of \eqref{E:2.31ii}, we have
\begin{equation}
\langle\overline\vartheta^D_N,f\rangle\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\langle\widetilde\vartheta^D, f\rangle
\end{equation}
for any non-negative $f\in C_{\text{\rm c}}(\overline D\times[0,\infty))$.
Next we observe that, by the fact that for any~$\epsilon>0$ and any random variable~$Y$ taking values in $[0,\epsilon]$,
\begin{equation}
\label{E:7.10}
\exp\{-E(Y)\}\le E(\text{\rm e}\mkern0.7mu^{-Y})\le \exp\{-\text{\rm e}\mkern0.7mu^{-\epsilon} E(Y)\},
\end{equation}
the fact that the random variables $\{\tau_j(x)\colon j\ge1,\,x\in D_N\}$ are independent of the random walk and independent for different~$x\in D_N$ implies
\begin{equation}
E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-E(\langle\overline\vartheta^D_N,f\rangle|\sigma(X))}\bigr)
\le E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\langle\overline\vartheta^D_N,f\rangle}\bigr)
\le E^{x_N}\bigl(\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{-\Vert f\Vert_\infty/\widehat W_N}E(\langle\overline\vartheta^D_N,f\rangle|\sigma(X))}\bigr)
\end{equation}
(see~\cite[Lemma 3.12]{BL4}),
where the conditional expectation is meaningful because $\langle\overline\vartheta^D_N,f\rangle$ is a finite random variable.
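The left inequality in \eqref{E:7.10} is Jensen's inequality, while the right one follows from the convexity bound $\text{\rm e}\mkern0.7mu^{-y}\le1-\frac{1-\text{\rm e}\mkern0.7mu^{-\epsilon}}{\epsilon}\,y\le1-\text{\rm e}\mkern0.7mu^{-\epsilon}y$ for $y\in[0,\epsilon]$, which upon taking expectations gives $E(\text{\rm e}\mkern0.7mu^{-Y})\le1-\text{\rm e}\mkern0.7mu^{-\epsilon}E(Y)\le\text{\rm e}\mkern0.7mu^{-\text{\rm e}\mkern0.7mu^{-\epsilon}E(Y)}$.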
Defining $f^{\ast\mathfrak e}\colon\overline D\times[0,\infty)\to\mathbb R$ by
\begin{equation}
f^{\ast\mathfrak e}(x,\ell):= E\biggl[\,f\Bigl(x,\frac14\sum_{j=1}^{\lfloor 4\ell\rfloor } \tau_j\Bigr)\biggr],
\end{equation}
where~$\{\tau_j\colon j\ge1\}$ are i.i.d.\ Exponential(1), we have
\begin{equation}
E^{x_N}\bigl(\langle\overline\vartheta^D_N,f\rangle\,\big|\,\sigma(X)\bigr) =\langle\vartheta^D_N,f^{\ast\mathfrak e}\rangle.
\end{equation}
Hence we get (under the laws $\{P^{x_N}\colon N\ge1\}$),
\begin{equation}
\label{E:7.14}
\langle\vartheta^D_N,f^{\ast\mathfrak e}\rangle\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\langle\widetilde\vartheta^D ,f\rangle
\end{equation}
for any $f\in C_{\text{\rm c}}(\overline D\times[0,\infty))$.
We now claim that $\{\vartheta^D_N\colon N\ge1\}$ is tight. For this we pick $M\in\mathbb N$, denote $f_M(x,h):=1_{[0,M]}(h)$ and observe that, for all~$n\in \mathbb N_0:=\{0,1,2,\dots\}$, we get
\begin{equation}
\label{E:7.15}
f_M^{\ast\mathfrak e}(x, n/4)=P\Bigl(\frac14\sum_{j=1}^{ n}\tau_j\le M\Bigr).
\end{equation}
Markov's inequality then shows $f_{2M}^{\ast\mathfrak e}(x, n/4)\ge\frac121_{[0,M]}( n/4)$ and, therefore,
\begin{equation}
\vartheta^D_N\bigl(\overline D\times[0,M]\bigr)\le2\langle\vartheta^D_N,f_{2M}^{\ast\mathfrak e}\rangle.
\end{equation}
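Indeed, for~$n\le4M$, Markov's inequality gives $P\bigl(\frac14\sum_{j=1}^{n}\tau_j>2M\bigr)\le\frac{n/4}{2M}\le\frac12$.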
The existence of the limit \eqref{E:7.14} then implies tightness of $\{\vartheta^D_N(\overline D\times[0,M])\colon N\ge1\}$ for all~$M>0$, and thus tightness of $\{\vartheta^D_N\colon N\ge1\}$ as well.
The tightness of $\{\vartheta^D_N\colon N\ge1\}$ permits us to extract a weak subsequential limit~$\vartheta^D$ along a (strictly) increasing sequence~$\{N_k\colon k\ge1\}$ of naturals. This entails the convergence $\langle\vartheta^D_{N_k},f\rangle{\,\overset{\text{\rm law}}\longrightarrow\,}\langle\vartheta^D,f\rangle$ for every~$f\in C_{\text{\rm c}}(\overline D\times[0,\infty))$. We claim that we even have
\begin{equation}
\label{E:7.17}
\langle\vartheta^D_{N_k},f^{\ast\mathfrak e}\rangle\,\,\underset{k\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\langle\vartheta^D,f^{\ast\mathfrak e}\rangle
\end{equation}
for every $f\in C_{\text{\rm c}}(\overline D\times[0,\infty))$. (This is not automatic because $f^{\ast\mathfrak e}$ is not compactly supported in general.) First we note that straightforward comparisons with the Lebesgue measure show, for each~$M>0$,
\begin{equation}
\label{E:7.18}
\lim_{n\to\infty}\frac{P\Bigl(\frac14\sum_{j=1}^{4n}\tau_j\le M\Bigr)}{P\Bigl(\frac14\sum_{j=1}^{4n}\tau_j\le 2M\Bigr)}=0.
\end{equation}
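(To carry out the comparison: since $\sum_{j=1}^{4n}\tau_j$ has the Gamma$(4n,1)$ density $x^{4n-1}\text{\rm e}\mkern0.7mu^{-x}/(4n-1)!$, bounding $\text{\rm e}\mkern0.7mu^{-x}$ above by~$1$, resp., below by~$\text{\rm e}\mkern0.7mu^{-8M}$ on~$[0,8M]$ gives
\begin{equation}
P\Bigl(\frac14\sum_{j=1}^{4n}\tau_j\le M\Bigr)\le\frac{(4M)^{4n}}{(4n)!}
\quad\text{and}\quad
P\Bigl(\frac14\sum_{j=1}^{4n}\tau_j\le 2M\Bigr)\ge\text{\rm e}\mkern0.7mu^{-8M}\,\frac{(8M)^{4n}}{(4n)!},
\end{equation}
so the ratio in \eqref{E:7.18} is at most $\text{\rm e}\mkern0.7mu^{\,8M}2^{-4n}$.)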
Writing~$\epsilon_n$ for the ratio of the two probabilities, for~$f$ supported in~$\overline D\times[0,M]$ we have $|f^{\ast\mathfrak e}|\le \Vert f\Vert_{\infty} f^{\ast\mathfrak e}_M$ and so, by \eqref{E:7.15},
\begin{equation}
\bigl|f^{\ast\mathfrak e}(x,n/4)\bigr|\le \epsilon_n \Vert f\Vert_{\infty} \,f_{2M}^{\ast\mathfrak e}(x,n),\quad n\in\mathbb N_0.
\end{equation}
It follows that the part of the integral $\langle\vartheta^D_N,f^{\ast\mathfrak e}\rangle$ corresponding to the second coordinate in excess of~$n$ is at most~$\epsilon_n \Vert f\Vert_{\infty}$ times $\langle\vartheta^D_N,f^{\ast\mathfrak e}_{2M}\rangle$, which is tight by \eqref{E:7.14}. We can thus approximate $f^{\ast\mathfrak e}$ by a function supported in~$\overline D\times[0,n]$ and pass to the limit $N\to\infty$ followed by~$n\to\infty$. This gives \eqref{E:7.17} as desired.
Combining \eqref{E:7.14} with \eqref{E:7.17} we arrive at the convolution identity
\begin{equation}
\label{E:7.20}
\langle\vartheta^D,f^{\ast\mathfrak e}\rangle\,\overset{\text{\rm law}}=\,\langle\widetilde\vartheta^D,f\rangle.
\end{equation}
We have proved this (including the absolute convergence of the integral on the left-hand side) for $f\in C_{\text{\rm c}}(\overline D\times[0,\infty))$ but the Monotone Convergence Theorem along with the fact that the second coordinate of~$\widetilde\vartheta^D$ has subexponentially growing density extends this to all~$f\in C(\overline D\times[0,\infty))$ such that $|f(x,h)|\le c\text{\rm e}\mkern0.7mu^{-\epsilon h}$ for some~$\epsilon,c>0$. This permits us to consider functions of the form $g_s(x,h):=\tilde f(x)\text{\rm e}\mkern0.7mu^{-sh}$ for~$s>0$ and~$\tilde f\in C(\overline D)$, for which
\begin{equation}
g_s^{\ast\mathfrak e}(x,n/4)=\tilde f(x)(1+s/4)^{-n},\quad n\in\mathbb N_0.
\end{equation}
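Indeed, the~$\tau_j$ are independent and
\begin{equation}
E\bigl(\text{\rm e}\mkern0.7mu^{-\frac s4\,\tau_1}\bigr)=\int_0^\infty\text{\rm e}\mkern0.7mu^{-(1+s/4)u}\,\text{\rm d}\mkern0.5mu u=\bigl(1+\tfrac s4\bigr)^{-1},
\end{equation}
so the expectation defining $g_s^{\ast\mathfrak e}(x,n/4)$ equals $\tilde f(x)(1+s/4)^{-n}$, as stated.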
Since $\vartheta^D$ is supported on $\overline D\times\frac14\mathbb N_0$, it makes sense to denote
\begin{equation}
\vartheta^{D,n}(A):=\vartheta^D\bigl(A\times\{n/4\}\bigr).
\end{equation}
The identity \eqref{E:7.20} then becomes
\begin{equation}
\label{E:7.23}
\sum_{n\ge0}\langle\vartheta^{D,n},\tilde f\rangle(1+s/4)^{-n}\,\overset{\text{\rm law}}=\,\langle\widetilde\vartheta^D,g_s\rangle.
\end{equation}
Assuming $\tilde f>0$, the explicit form of the right-hand side shows that $\langle\widetilde\vartheta^D,g_s\rangle/\langle\widetilde\vartheta^D,g_1\rangle$ is well-defined and equal to a non-random quantity --- namely, the ratio of two Laplace transforms of~$\tilde\mu$. This turns \eqref{E:7.23} into the pointwise identity
\begin{equation}
\label{E:7.24}
\sum_{n\ge0}\langle\vartheta^{D,n},\tilde f\rangle(1+s/4)^{-n}=
\frac{\int\tilde\mu(\text{\rm d}\mkern0.5mu h)\text{\rm e}\mkern0.7mu^{-sh}}
{\int\tilde\mu(\text{\rm d}\mkern0.5mu h)\text{\rm e}\mkern0.7mu^{-h}}\biggl(\,\sum_{n\ge0}\langle\vartheta^{D,n},\tilde f\rangle(5/4)^{-n}\biggr)
\end{equation}
valid, a.s., for each~$s>0$ and (by elementary extensions) all~$\tilde f\in C(\overline D)$. Thanks to the monotonicity of both sides in~$s$ and almost-sure continuity in~$\tilde f$ of both sides with respect to the supremum norm, the identity actually holds a.s.\ for all~$s>0$ and all~$\tilde f\in C(\overline D)$ simultaneously.
With \eqref{E:7.24} in hand, we are more or less done. Indeed, since the left-hand side is a generating function of the sequence $\{\langle\vartheta^{D,n},\tilde f\rangle\}_{n\ge0}$, and a generating function determines its coefficient sequence uniquely, each $\langle\vartheta^{D,n},\tilde f\rangle$ must be a deterministic multiple (depending only on~$n$) of the quantity in the large parentheses on the right-hand side. This shows that $\vartheta^D$ must be as on the right-hand side of \eqref{E:2.22ii} for some~$\mu$ of the form $\mu=\sum_{n\ge0}q_n\delta_{n/4}$, where~$\{q_n\}_{n\ge0}$ is uniquely determined by
\begin{equation}
\label{E:7.18a}
\sum_{n\ge0}q_n(1+s/4)^{-n}=\int_0^\infty\tilde\mu(\text{\rm d}\mkern0.5mu h)\text{\rm e}\mkern0.7mu^{-sh},\quad s>0.
\end{equation}
The Laplace transform of~$\tilde\mu$ was calculated in the proof of Proposition~\ref{thm-light-bv}. All subsequential limits of $\{\vartheta^D_N\colon N\ge1\}$ are thus equal in law and so convergence holds.
\end{proofsect}
Moving to the thick points, we first need a version of \eqref{E:7.18}:
\begin{lemma}
\label{lemma-7.3}
For $\{\tau_j\colon j\ge1\}$ i.i.d.\ Exponential(1), all $k\in\mathbb N$ and all reals $s\ge t\ge0$,
\begin{equation}
\frac{P\Bigl(\sum_{j=1}^k(\tau_j-1)\ge s+t\Bigr)}{P\Bigl(\sum_{j=1}^k(\tau_j-1)\ge s\Bigr)}
\le\text{\rm e}\mkern0.7mu^{-\frac{st}{k+s+t}}.
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
Since $\sum_{j=1}^k\tau_j$ has density $\frac1{(k-1)!}x^{k-1}\text{\rm e}\mkern0.7mu^{-x}$, the change of variables $y:=x+t$ gives
\begin{equation}
\begin{aligned}
P\biggl(\,\sum_{j=1}^k(\tau_j-1)\ge s\biggr)
&=\frac1{(k-1)!}\int_{x\ge k+s}\text{\rm d}\mkern0.5mu x\, x^{k-1}\text{\rm e}\mkern0.7mu^{-x}
\\
&=\text{\rm e}\mkern0.7mu^t\,\frac1{(k-1)!}\int_{y\ge k+s+t}\text{\rm d}\mkern0.5mu y\, (y-t)^{k-1}\text{\rm e}\mkern0.7mu^{-y}
\\
&
\ge \text{\rm e}\mkern0.7mu^t\Bigl(1-\frac{t}{k+s+t}\Bigr)^k\,P\biggl(\,\sum_{j=1}^k(\tau_j-1)\ge s+t\biggr).
\end{aligned}
\end{equation}
Using that $s\ge t$, the prefactor can be written as the exponential of
\begin{equation}
\begin{aligned}
t+k\log\Bigl(1-\frac{t}{k+s+t}\Bigr)&=t-k\sum_{n\ge1}\frac1n\frac{t^n}{(k+s+t)^n}
\\
&\ge t-\frac{kt}{k+s+t}-\frac12\frac{k t^2}{(k+s+t)^2}\sum_{n\ge0}2^{-n}.
\end{aligned}
\end{equation}
Since $\sum_{n\ge0}2^{-n}=2$, $t-\frac{kt}{k+s+t}=\frac{t(s+t)}{k+s+t}$ and $\frac{kt^2}{(k+s+t)^2}\le\frac{t^2}{k+s+t}$, the right-hand side is no less than $\frac{st}{k+s+t}$, and we get the claim.
\end{proofsect}
Solving a convolution identity that inevitably shows up in the proof will also require:
\begin{lemma}
\label{lemma-7.4}
Suppose~$\nu$ is a Borel measure on~$\mathbb R$ such that, for some~$\beta\in\mathbb R$ and some~$\sigma^2>0$ and all~$f\in C_{\text{\rm c}}(\mathbb R)$,
\begin{equation}
\label{E:7.29}
\int_\mathbb R \nu(\text{\rm d}\mkern0.5mu h) \,E\bigl[ \,f(h+\mathcal N(0,\sigma^2))\bigr] = \int_\mathbb R\text{\rm d}\mkern0.5mu h\,\,\text{\rm e}\mkern0.7mu^{\beta h}\,f(h).
\end{equation}
Then
\begin{equation}
\nu(\text{\rm d}\mkern0.5mu h)=\text{\rm e}\mkern0.7mu^{-\frac12\beta^2\sigma^2+\beta h}\text{\rm d}\mkern0.5mu h.
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
Consider the measure $\widetilde\nu(\text{\rm d}\mkern0.5mu h):=\text{\rm e}\mkern0.7mu^{-\beta h+\frac12\beta^2\sigma^2}\nu(\text{\rm d}\mkern0.5mu h)$.
Absorbing the exponential term on the right of \eqref{E:7.29} into the test function, a calculation shows
\begin{equation}
\int_{\mathbb R\times\mathbb R}\widetilde\nu(\text{\rm d}\mkern0.5mu h)\otimes\frac{\text{\rm d}\mkern0.5mu x}{\sqrt{2\pi\sigma^2}}\,\,\text{\rm e}\mkern0.7mu^{-\frac{(x-h+\beta\sigma^2)^2}{2\sigma^2}}\,f(x)=\int_\mathbb R\text{\rm d}\mkern0.5mu h\,f(h)
\end{equation}
for all~$f\in C_{\text{\rm c}}(\mathbb R)$. As integrals against functions in $C_{\text{\rm c}}(\mathbb R)$ determine a Borel measure on~$\mathbb R$, we get
\begin{equation}
\frac1{\sqrt{2\pi\sigma^2}}\int_\mathbb R\widetilde\nu(\text{\rm d}\mkern0.5mu h)\,\text{\rm e}\mkern0.7mu^{-\frac{(x-h+\beta\sigma^2)^2}{2\sigma^2}} = 1,\quad x\in\mathbb R.
\end{equation}
This can be interpreted by saying that $\widehat\nu(\text{\rm d}\mkern0.5mu h):=\,\frac1{\sqrt{2\pi\sigma^2}}\text{\rm e}\mkern0.7mu^{-\frac{(h-\beta\sigma^2)^2}{2\sigma^2}}\widetilde\nu(\text{\rm d}\mkern0.5mu h)$ is a measure such that
\begin{equation}
\int_\mathbb R\widehat\nu(\text{\rm d}\mkern0.5mu h)\text{\rm e}\mkern0.7mu^{-xh} = \text{\rm e}\mkern0.7mu^{-x\beta\sigma^2+x^2\sigma^2/2},\quad x\in\mathbb R.
\end{equation}
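Recall that, for~$Z$ normal with mean~$m$ and variance~$\sigma^2$,
\begin{equation}
E\bigl(\text{\rm e}\mkern0.7mu^{-xZ}\bigr)=\text{\rm e}\mkern0.7mu^{-xm+\frac12x^2\sigma^2},\qquad x\in\mathbb R.
\end{equation}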
The right-hand side is the Laplace transform of $\mathcal N(\beta\sigma^2,\sigma^2)$ and so, since the Laplace transform of a measure, if it exists, determines the measure uniquely, $\widehat\nu$ is the law of $\mathcal N(\beta\sigma^2,\sigma^2)$. Hence~$\widetilde\nu$ is the Lebesgue measure, thus proving the claim.
\end{proofsect}
\begin{proofsect}{Proof of Theorem~\ref{thm-thick}}
The proof starts by adapting the argument leading to \eqref{E:7.14}. Indeed, working again in the coupling of the random walk~$X$ and the i.i.d.\ exponentials $\{\tau_j(x)\colon x\in D_N,\,j\ge1\}$, let
\begin{equation}
\overline\zeta^D_N:=\frac1{W_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{(\overline L_{t_N}^{D_N}(x)-a_N)/\sqrt{2a_N}}\,,
\end{equation}
where $\overline L_{t_N}^{D_N}(x)$ is the quantity from \eqref{E:7.7}.
Lemmas~\ref{lemma-7.1}-\ref{lemma-7.2} along with
Proposition~\ref{thm-thick-cont}, Theorem~\ref{thm-6.1} and~\eqref{E:7.10} then show
\begin{equation}
E^{x_N}\bigl(\langle\overline\zeta^D_N,f\rangle\,\big|\,\sigma(X)\bigr)\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\langle\widetilde\zeta^D, f\rangle
\end{equation}
for every~$f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{+\infty\}))$, where $\widetilde{\zeta}^D$ is the measure on the right of (\ref{E:1.21b}).
Writing $\{\tau_j\colon j\ge1\}$ for generic i.i.d.\ exponentials with parameter 1 and denoting, with some abuse of earlier notation,
\begin{equation}
f^{N,\ast\mathfrak e}(x,h):= E\biggl[\,f\Bigl(x,h+\frac1{4\sqrt{2a_N}}\sum_{j\ge1} (\tau_j-1)1_{\{j\le 4a_N+4h\sqrt{2a_N}\}}\Bigr)\biggr],
\end{equation}
the fact that $L^{D_N}_{t_N}$ takes values in $\frac14\mathbb N_0$ then shows
\begin{equation}
E^{x_N}\bigl(\langle\overline\zeta^D_N,f\rangle\,\big|\,\sigma(X)\bigr)\,=\,\langle\zeta^D_N,f^{N,\ast\mathfrak e}\rangle
\end{equation}
thus proving
\begin{equation}
\label{E:7.33}
\langle\zeta^D_N,f^{N,\ast\mathfrak e}\rangle\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\langle\widetilde\zeta^D, f\rangle
\end{equation}
for every~$f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{+\infty\}))$.
We will now use \eqref{E:7.33} to control the behavior of the measures $\{\zeta^D_N\colon N\ge1\}$. First, writing henceforth $1_{[M,\infty)}$ for the function $(x,h)\mapsto 1_{[M,\infty)}(h)$ we get
\begin{equation}
\bigl(1_{[M,\infty)}\bigr)^{N,\ast\mathfrak e}(x,h)=P\biggl(\,\sum_{j=1}^k (\tau_j-1)\ge (M-h)4\sqrt{2a_N}\biggr),
\end{equation}
where $k:=\lfloor 4a_N+4h\sqrt{2a_N}\rfloor$. Assuming $h\ge 2M$ with~$M>0$ large, Markov's inequality along with $E((\tau_j-1)^2)=1$ then gives
\begin{equation}
\label{E:7.41}
1-\bigl(1_{[M,\infty)}\bigr)^{N,\ast\mathfrak e}(x,h)
\le\frac{4a_N+4h\sqrt{2a_N}}{32a_N(h-M)^2}
\le\frac1{h^2}+\frac1{h\sqrt{2a_N}}.
\end{equation}
For~$M$ large, the right-hand side is at most $1/2$ thus showing
\begin{equation}
1_{[2M,\infty)}(h)
\le2\bigl(1_{[M,\infty)}\bigr)^{N,\ast\mathfrak e}(x,h).
\end{equation}
From \eqref{E:7.33} and the fact that $\widetilde\zeta^D$ has an exponentially decaying density in the second variable we then get, for each~$\epsilon>0$,
\begin{equation}
\label{E:7.37}
\lim_{M\to\infty}\,\limsup_{N\to\infty}\,P^{x_N}\bigl(\langle\zeta^D_N,1_{[M,\infty)}\rangle>\epsilon\bigr)=0.
\end{equation}
This implies tightness of $\{\zeta^D_N\colon N\ge1\}$ on $\overline D\times(\mathbb R\cup\{+\infty\})$ along with their asymptotic concentration on $\overline D\times\mathbb R$. In particular, we may extract a weak subsequential limit~$\zeta^D$.
We would like to use the existence of weak subsequential limits to pass to the limit~$N\to\infty$ inside the integral on the left-hand side of \eqref{E:7.33}. For that we need to deal with the fact that the support of $f^{N,\ast\mathfrak e}$ extends to~$-\infty$ in the second variable. Pick any~$b>0$ and, for any $h<-3b$, invoke Lemma~\ref{lemma-7.3} with the choices $s:=4\sqrt{2a_N}(-2b-h)$, $t:=4\sqrt{2a_N}b$
and~$k$ as above to conclude that
\begin{equation}
\bigl(1_{[-b,\infty)}\bigr)^{N,\ast\mathfrak e}(x,h)\le \text{\rm e}\mkern0.7mu^{-\frac{32a_N b(-2b-h)}
{4a_N- 4\sqrt{2a_N}b}}\,\,\bigl(1_{[-2b,\infty)}\bigr)^{N,\ast\mathfrak e}(x,h),\quad h < - 3b.
\end{equation}
The prefactor decays to zero as~$h\to-\infty$ uniformly in~$N\ge1$ and so, plugging this into \eqref{E:7.33} and using that $\{\langle\zeta^D_N,(1_{[-2b,\infty)})^{N,\ast\mathfrak e}\rangle\colon N\ge1\}$ is tight, we get, for each bounded, continuous~$f$ with $\operatorname{supp}(f)\subseteq\overline D\times[-b,\infty]$ and each~$\epsilon>0$,
\begin{equation}
\lim_{M\to\infty}\,\limsup_{N\to\infty}\,P\biggl(\,\Bigl|\,\bigl\langle\zeta^D_N,\,f^{N,\ast\mathfrak e}1_{(-\infty,-M]}\bigr\rangle\Bigr|>\epsilon\biggr)=0.
\end{equation}
Combining this with \eqref{E:7.37}, we may truncate the second variable in the integral on the left of \eqref{E:7.33} to lie in $[-M,M]$ at the cost of errors that tend to zero in probability as $M\to\infty$. The Central Limit Theorem shows
\begin{equation}
\frac1{4\sqrt{2a_N}}\sum_{j\ge1} (\tau_j-1)1_{\{j\le 4a_N\}}\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\mathcal N\bigl(0,\tfrac18\bigr)
\end{equation}
and a simple estimate based, e.g., on Doob's $L^2$-martingale inequality to account for the correction $4\sqrt{2a_N}h$ in the number of terms in the sum then gives
\begin{equation}
\lim_{N\to\infty}\,\,\sup_{h\in[-M,M]}\,\,\sup_{x\in\overline D}\,\,\bigl|f^{N,\ast\mathfrak e}(x,h)-f^{\ast\mathfrak n}(x,h)\bigr|=0,
\end{equation}
where
\begin{equation}
f^{\ast\mathfrak n}(x,h)=E\Bigl[f\bigl(x,h+\mathcal N(0,\tfrac18)\bigr)\Bigr].
\end{equation}
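(In the Central Limit Theorem above, the variance of the normalized sum is $\lfloor 4a_N\rfloor/(4\sqrt{2a_N})^2=\tfrac18+o(1)$, which is the source of the~$\mathcal N(0,\tfrac18)$ limit.)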
Taking $M\to\infty$ after~$N\to\infty$ we then readily conclude that every subsequential weak limit~$\zeta^D$ of~$\{\zeta^D_N\colon N\ge1\}$ satisfies the distributional identity
\begin{equation}
\label{E:7.49}
\langle\zeta^D,f^{\ast\mathfrak n}\rangle\,\,\overset{\text{\rm law}}=\,\,\langle\widetilde\zeta^D,f\rangle
\end{equation}
for all~$f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{+\infty\}))$. This includes the fact that the integral on the left-hand side converges absolutely for all such~$f$.
We are now more or less done. Indeed, note that the explicit form of~$\widetilde\zeta^D$ gives, for $\tilde f\in C_{\text{\rm c}}(\mathbb R)$ and~$A\subseteq D$ Borel with~${\rm Leb}(A)>0$,
\begin{equation}
\frac{\langle\widetilde\zeta^D,1_A\otimes \tilde f\rangle}{\langle\widetilde\zeta^D,1_A\otimes 1_{[0,\infty)}\rangle} = \alpha\lambda\int \text{\rm d}\mkern0.5mu h\,\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\tilde f(h),\quad \text{a.s.}
\end{equation}
The right-hand side is non-random and so \eqref{E:7.49} becomes the pointwise equality
\begin{equation}
\bigl\langle\zeta^D,(1_A\otimes\tilde f)^{\ast\mathfrak n}\bigr\rangle
=\bigl\langle\zeta^D,(1_A\otimes1_{[0,\infty)})^{\ast\mathfrak n}\bigr\rangle
\,\alpha\lambda\int \text{\rm d}\mkern0.5mu h\,\text{\rm e}\mkern0.7mu^{-\alpha\lambda h}\tilde f(h)
\end{equation}
for all~$\tilde f\in C_{\text{\rm c}}(\mathbb R)$. This shows that, for any~$B\subseteq\mathbb R$ Borel,
\begin{equation}
\label{E:7.52}
\zeta^D(A\times B)=\alpha\lambda\bigl\langle\zeta^D,(1_A\otimes1_{[0,\infty)})^{\ast\mathfrak n}\bigr\rangle\,\nu(B),
\end{equation}
where~$\nu$ is a Borel measure on~$\mathbb R$ that obeys \eqref{E:7.29} with $\beta:=-\alpha\lambda$ and $\sigma^2:=1/8$. Lemma~\ref{lemma-7.4} then gives $\nu(\text{\rm d}\mkern0.5mu h)=\text{\rm e}\mkern0.7mu^{-\alpha^2\lambda^2/16-\alpha\lambda h}\,\text{\rm d}\mkern0.5mu h$ and, since the first measure on the right of \eqref{E:7.52} has the law of the spatial part of~$\widetilde\zeta^D$, we get
\begin{equation}
\label{E:7.52b}
\zeta^D\,\,\,\overset{\text{\rm law}}=\,\,\, \text{\rm e}\mkern0.7mu^{-\alpha^2\lambda^2/16}\,\widetilde\zeta^D.
\end{equation}
The claim follows.
\end{proofsect}
Finally, we deal with the changes that are required for the thin points:
\begin{proofsect}{Proof of Theorem~\ref{thm-thin}}
Following the proof of Theorem~\ref{thm-thick}, the argument is exactly the same up to \eqref{E:7.33}, except that now $f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{-\infty\}))$. For the tightness, we then need to consider
\begin{equation}
\bigl(1_{(-\infty,-M]}\bigr)^{N,\ast\mathfrak e}(x,h)=P\biggl(\,\sum_{j=1}^k (\tau_j-1)\le -(M+h)4\sqrt{2a_N}\biggr),
\end{equation}
where $k:=\lfloor 4a_N+4h\sqrt{2a_N}\rfloor$. For $h\le-2M$ the same estimate as \eqref{E:7.41} then shows $1_{(-\infty,-2M]}(h)\le 2(1_{(-\infty,-M]})^{N,\ast\mathfrak e}(x,h)$ and so, for each $\epsilon>0$, we get
\begin{equation}
\lim_{M\to\infty}\,\limsup_{N\to\infty}\,P^{x_N}\bigl(\langle\zeta^D_N,1_{(-\infty,-M]}\rangle>\epsilon\bigr)=0
\end{equation}
from \eqref{E:7.33}. For the upper tail, we need a variation on Lemma~\ref{lemma-7.3}:
\begin{lemma}
\label{lemma-7.5}
For $\{\tau_j\colon j\ge1\}$ i.i.d.\ Exponential(1), all $k\in\mathbb N$ and all $s,t\ge0$ with $s+t<k$,
\begin{equation}
\frac{P\Bigl(\sum_{j=1}^k(\tau_j-1)\le -(s+t)\Bigr)}{P\Bigl(\sum_{j=1}^k(\tau_j-1)\le -s\Bigr)}
\le\text{\rm e}\mkern0.7mu^{-\frac{t(s-1)}{k-s}}.
\end{equation}
\end{lemma}
To use this, let~$b>0$ and apply Lemma~\ref{lemma-7.5} with the choices $s:=(h-2b)4\sqrt{2a_N}$, $t:=4b\sqrt{2a_N}$ and~$k$ as above, noting that, for~$N$ large and $h>2b$, we have $s+t<k$, to get
\begin{equation}
\bigl(1_{(-\infty,b]}\bigr)^{N,\ast\mathfrak e}(x,h)
\le\exp\Bigl\{-\frac{4b\sqrt{2a_N}[(h-2b)4\sqrt{2a_N}-1]}{4a_N+4h\sqrt{2a_N}-(h-2b)4\sqrt{2a_N}}\Bigr\}
\bigl(1_{(-\infty,2b]}\bigr)^{N,\ast\mathfrak e}(x,h).
\end{equation}
The exponential prefactor tends to zero as~$h\to\infty$ uniformly in~$N$ sufficiently large
and so, for any bounded and continuous~$f$ with $\operatorname{supp}(f)\subseteq\overline D\times(-\infty,b]$ and each~$\epsilon>0$,
\begin{equation}
\lim_{M\to\infty}\,\limsup_{N\to\infty}\,P\biggl(\,\Bigl|\,\bigl\langle\zeta^D_N,\,f^{N,\ast\mathfrak e}1_{[M,\infty)}\bigr\rangle\Bigr|>\epsilon\biggr)=0.
\end{equation}
This again permits us to truncate the tails and derive \eqref{E:7.49} for each $f\in C_{\text{\rm c}}(\overline D\times(\mathbb R\cup\{-\infty\}))$ and each weak subsequential limit~$\zeta^D$ of $\{\zeta^D_N\colon N\ge1\}$. The rest of the proof of Theorem~\ref{thm-thick} can then be followed literally, leading to \eqref{E:7.52b} as before.
\end{proofsect}
It remains to give:
\begin{proofsect}{Proof of Lemma~\ref{lemma-7.5}}
The explicit form of the density along with the substitution $y:=x+t$ again shows
\begin{equation}
\begin{aligned}
P\biggl(\,\sum_{j=1}^k(\tau_j-1)\le-(s+t)\biggr)
&=\frac1{(k-1)!}\int_{0\le x\le k-s-t}\text{\rm d}\mkern0.5mu x\, x^{k-1}\text{\rm e}\mkern0.7mu^{-x}
\\
&\le\text{\rm e}\mkern0.7mu^t\,\frac1{(k-1)!}\int_{t\le y\le k-s}\text{\rm d}\mkern0.5mu y\, (y-t)^{k-1}\text{\rm e}\mkern0.7mu^{-y}
\\
&
\le \text{\rm e}\mkern0.7mu^t\Bigl(1-\frac{t}{k-s}\Bigr)^{k-1}\,P\biggl(\,\sum_{j=1}^k(\tau_j-1)\le -s\biggr).
\end{aligned}
\end{equation}
Using the bound $1-x\le\text{\rm e}\mkern0.7mu^{-x}$, the prefactor is at most $\text{\rm e}\mkern0.7mu^{-\frac{t(s-1)}{k-s}}$.
\end{proofsect}
With the help of the above theorems, we can finally settle:
\begin{proofsect}{Proof of Theorem~\ref{thm-minmax}}
For the local time $\widehat L^{D_N}_{t_N}$ parametrized by the time at the boundary vertex and the walk started at~$\varrho$, the statement appears as~\cite[Theorem~2.1]{AB}.
The bounds in Proposition~\ref{P:tNbound} along with the tightness of $\{T_N\colon N\ge1\}$ then extend the conclusion to~$\widetilde L^{D_N}_{\deg(D_N) t_N}$ in place of~$\widehat L^{D_N}_{t_N}$.
Since the random walk started at~$\varrho$ visits any given $x_N\in D_N$ in time of order $N^2\log N$ while the walk started at~$x_N$ hits~$\varrho$ in time of order~$N^2$ with high probability, shifting~$t_N$ by~$\pm(\log N)^{3/2}$ and invoking the monotonicity of~$t\mapsto \widetilde L_t^{D_N}$ extends~\cite[Theorem~2.1]{AB} to arbitrary starting points.
The inequalities \eqref{E:7.4} then extend it to the discrete-time object~$L^{D_N}_{t_N}$ as well.
\end{proofsect}
\section{Local structure}
\label{sec8}\noindent
The last items to be addressed are the proofs of Theorems~\ref{thm-thick-loc} and \ref{thm-avoid-loc} dealing with the local structure of the local time field near thick/thin and avoided points, respectively. We will start with the former setting, as it is technically the most demanding.
\subsection{Thick and thin points}
We will again carry the argument primarily for the thick points and only comment on the changes for the thin points. Assuming henceforth the setting and notation of Theorem~\ref{thm-thick}, we start by converting the continuous-time local time in the boundary-vertex parametrization to the one parametrized by the total time.
\begin{proposition}
\label{prop-8.1}
Let $\overline{\zeta}_N^{D,{\text{\rm loc}}}$ be given by the same formula as $\zeta_N^{D,{\text{\rm loc}}}$ in \eqref{E:zetaNDloc} except with $L^{D_N}_{t_N}(x)$ replaced by $\overline L^{D_N}_{t_N}(x)$ from \eqref{E:7.7}. Then, given an~$x_N\in D_N$ for each~$N\ge1$, under $P^{x_N}$,
\begin{equation}
\overline{\zeta}_N^{D,{\text{\rm loc}}}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\widetilde\zeta^D\otimes\widehat\nu_\lambda,
\end{equation}
where $\widetilde\zeta^D$ is the measure on the right of \eqref{E:1.21b} and $\widehat\nu_\lambda$ is the law of $\phi+\alpha\lambda\fraktura a$, for~$\phi$ the pinned DGFF; i.e., a centered Gaussian process on~$\mathbb Z^2$ with covariances \eqref{E:2.30}.
\end{proposition}
The proof will rely heavily on the arguments and notation from Sections~\ref{sec5}--\ref{sec7}. Throughout, we fix a sequence $\{b_N\}_{N\ge1}$ such that $b_N\to\infty$ and $b_N/t_N^{1/4}\to0$. First we condense the ideas underlying Lemmas~\ref{lemma-5.6},~\ref{lemma-5.7} and~\ref{lemma-7.2} into:
\begin{lemma}
\label{lemma-8.2}
Given~$\epsilon>0$, let $\widetilde t_{N,k}^{\pm}$ be the quantity from \eqref{E:tNshift} but with $b_N$ replaced by $3b_N$. Abbreviate
\begin{multline}
\label{E:7.2a}
\quad\widetilde\mathcal F_{N}(x):=\bigcup_{k\in \mathbb Z }
\biggl(\bigl\{(k-1)\epsilon\le T_N \circ \theta_{H_{\varrho}} \le(k+1)\epsilon\bigr\}
\\
\cap\Bigl\{(\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^-} (x)
\le \overline L^{D_N}_{t_N}(x)
\le \widetilde L^{D_N}_{H_\varrho}(x)+ (\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^+} (x)\Bigr\}\biggr).
\end{multline}
Then for each~$b\in\mathbb R$ and any choice of~$x_N\in D_N$ for each~$N\ge1$,
\begin{equation}
P^{x_N}\Bigl(\,\sum_{x\in D_N}1_{\widetilde\mathcal F_N(x)^{\text{\rm c}}}>2\,\Bigr)\,\underset{N\to\infty}\longrightarrow\,0.
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
The tightness of~$T_N$ and $H_\varrho/|D_N|$ allows us to effectively truncate the union in \eqref{E:7.2a} to $-M\le k\le M$ and assume $H_{\varrho} \le m \deg(D_N) $.
Recall the event $\mathcal F_N (x)$ from \eqref{E:7.2} and note that on the event
\begin{equation}
\label{E:8.2ev}
\Bigl\{\sum_{x \in D_N} 1_{\mathcal F_N (x)^{\text{\rm c}}} \le 2 \Bigr\} \cap \bigl\{H_{\varrho} \le m \deg (D_N) \bigr\},
\end{equation}
we have \begin{multline}
\quad
\widetilde L^{D_N}_{H_\varrho}(x)+ (\widetilde L^{D_N} \circ \theta_{H_{\varrho}})_{(t_N +1)\deg (D_N)} (x)
\ge\overline{L}_{t_N}^{D_N} (x)
\\
\ge \widetilde L_{(t_N-1)\deg(D_N)}^{D_N} (x)
\ge (\widetilde L^{D_N} \circ \theta_{H_{\varrho}})_{(t_N - m - 1)\deg (D_N)} (x)
\quad
\end{multline}
at all but at most two $x \in D_N$.
Next set $\mathcal E_N^+ := \mathcal E_N (t_N + 1)$ and $\mathcal E_N^- := \mathcal E_N (t_N-m-1)$,
where $\mathcal E_N (t_N')$ is the event~$\mathcal E_N$ from \eqref{E:evtgood} but for $\{t_N\}$ replaced by $\{t_N'\}$.
Recall the notation $(t_N')^\circ$ for the quantity from \eqref{E:tNcirc}. On $\theta_{H_{\varrho}}^{-1} (\mathcal E_N^+ \cap \mathcal E_N^- \cap \{ (k-1)\epsilon \le T_N \le (k+1)\epsilon\})$ we then get an analogue of \eqref{E:5.40nw} of the form
\begin{equation}
\bigl((t_N+1)^{\circ} + b_N (t_N+1)^{1/4}\bigr) \circ \theta_{H_{\varrho}} \le \widetilde t_{N, k}^+,
\end{equation}
\begin{equation}
\bigl((t_N-m-1)^{\circ} - b_N (t_N-m-1)^{1/4}\bigr) \circ \theta_{H_{\varrho}} \ge \widetilde t_{N, k}^-
\end{equation}
once~$N$ is sufficiently large (independent of~$k$).
Consequently, the inequalities
\begin{equation}
\label{E:8.9iui}
\widetilde L^{D_N}_{H_\varrho}(x)+ (\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^+} (x)
\ge\overline{L}_{t_N}^{D_N} (x)
\ge (\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^-} (x)
\end{equation}
apply on the same event as well.
Lemma~\ref{lemma-7.2}
shows that \eqref{E:8.9iui} holds at all but at most two $x\in D_N$ with $P^{x_N}$-probability tending to one as~$N\to\infty$. This proves the claim.
\end{proofsect}
Lemma~\ref{lemma-8.2} eliminates the need to consider starting points other than~$\varrho$. Next comes the main issue to be dealt with in the proof of Proposition~\ref{prop-8.1}: Since we are after differences of the local time, we cannot rely on monotonicity as we did earlier; instead we have to estimate the variation of $t\mapsto\widehat L^{D_N}_t$ over time intervals of length of order~$\epsilon\sqrt{2t_N}$. This is the content of:
\begin{lemma}
\label{lemma-8.3}
For all~$\delta>0$, all $b\in\mathbb R$ and all~$\{t_N'\}_{N\ge1}$ satisfying $t_N'-t_N=O(\log N)$,
\begin{multline}
\label{E:8.9a}
\quad
\lim_{\epsilon\downarrow0}\,\limsup_{N\to\infty}\frac1{W_N}\sum_{x\in D_N}
P^\varrho\Bigl(\widehat L^{D_N}_{t_N'}(x)\ge a_N+b\log N,
\\
\widehat L^{D_N}_{t_N'}(x)-\widehat L^{D_N}_{t_N'-\epsilon\sqrt{2t_N}}(x)>\delta\sqrt{2t_N}\,\Bigr)
=0.
\quad
\end{multline}
\end{lemma}
\begin{proofsect}{Proof}
The proof is based on tail estimates for the local time which will depend, somewhat sensitively, on a choice of a few parameters. Given~$\delta>0$ let~$\epsilon_0>0$ and~$j_0\in\mathbb N$ be such that
\begin{equation}
\label{E:8.10b}
(\sqrt\theta+\lambda)^2-(1+\epsilon_0)\theta>\lambda^2
\end{equation}
and that, for all integers~$j\ge j_0$,
\begin{equation}
\label{E:8.11b}
(j-\delta)\frac{\sqrt{\delta}-\sqrt{\epsilon_0}}{\sqrt{\delta}}>(j+1)\Bigl[\epsilon_0+\frac\lambda{\sqrt\theta+\lambda}\Bigr].
\end{equation}
These choices can be made because $(\sqrt\theta+\lambda)^2-\theta>\lambda^2$ and $\frac\lambda{\sqrt\theta+\lambda}<1$.
Assume~$\epsilon\in(0,\epsilon_0]$ and abbreviate $t_N'':=t_N'-\epsilon\sqrt{2t_N}$ and $\widetilde a_N:=a_N+b\log N$.
Set $M$ to the least integer such that $(M+1)\sqrt{2t_N}\ge \widetilde a_N-(1+\epsilon_0)t_N''$.
Using the Markov property of $t\mapsto \widehat L^{D_N}_t(x)$, the probability in \eqref{E:8.9a} is bounded by
\begin{multline}
\label{E:8.10a}
P^\varrho\Bigl(\widehat L^{D_N}_{t_N''}(x)\ge\widetilde a_N-j_0\sqrt{2t_N}\Bigr)
P^\varrho\Bigl(\widehat L^{D_N}_{\epsilon\sqrt{2t_N}}(x)\ge \delta\sqrt{2t_N}\Bigr)
\\
+
\sum_{j=j_0}^{M} P^\varrho\Bigl(\widehat L^{D_N}_{t_N''}(x)\ge\widetilde a_N-(j+1)\sqrt{2t_N}\Bigr)
P^\varrho\Bigl(\widehat L^{D_N}_{\epsilon\sqrt{2t_N}}(x)\ge j\sqrt{2t_N}\Bigr)
\\
+P^\varrho\Bigl(\widehat L^{D_N}_{\epsilon\sqrt{2t_N}}(x)\ge (M+1)\sqrt{2t_N}\Bigr).
\end{multline}
We now use \cite[Lemma~4.1]{AB} to bound the individual probabilities on the right-hand side as follows.
First, noting that by our choice of~$M$,
\begin{equation}
\sqrt{2\bigl(\widetilde a_N-(M+1)\sqrt{2t_N}\,\bigr)}-\sqrt{2t_N''}
\end{equation}
grows proportionally to~$\log N$ as~$N\to\infty$, \cite[Lemma~4.1]{AB} may be used for the choices $a:=\widetilde a_N-j_0\sqrt{2t_N}$, $t:=t_N''$ and~$b:=0$. Noting that~$W_N$ defined using $\widetilde a_N-j_0\sqrt{2t_N}$ and~$t_N''$ instead of~$a_N$ and~$t_N$ is comparable with~$W_N$, the uniform upper bound on~$G^{D_N}(x,x)$ then bounds the very first probability in \eqref{E:8.10a} by a quantity of order~$W_N/N^2$. The Markov inequality shows
\begin{equation}
\label{E:8.49}
P^\varrho\Bigl( \widehat L_{\epsilon\sqrt{2t_N}}^{D_N} (x) >\delta\sqrt{2a_N}\Bigr)\le\frac{\epsilon\sqrt{2t_N}}{\delta\sqrt{2a_N}}
\end{equation}
and so the first term in \eqref{E:8.10a} is of order $\epsilon W_N/N^2$ (with a constant that depends on~$j_0$).
Next we move to the terms under the sum in \eqref{E:8.10a}. Here we use \cite[Lemma~4.1]{AB} for the choices $a:=\widetilde a_N$, $t:=t_N''$ and $b:= -j\sqrt{2t_N} $ to get, for all~$j=j_0,\dots,M+1$,
\begin{equation}
\label{E:8.11a}
P^\varrho\Bigl(\widehat L^{D_N}_{t_N''}(x)\ge\widetilde a_N-j\sqrt{2t_N}\Bigr)
\le c_1\frac{W_N}{N^2}\,\text{\rm e}\mkern0.7mu^{\,j\frac{\sqrt{2t_N}}{G^{D_N}(x,x)}\frac{\sqrt{2\widetilde a_N}-\sqrt{2t_N''}}{\sqrt{2\widetilde a_N}}}
\end{equation}
for some constant~$c_1\in(0,\infty)$ independent of~$N\ge1$,~$j=0,\dots,M+1$ and~$x\in D_N$.
For the second probability under the sum in \eqref{E:8.10a}, we apply \cite[Lemma~4.1]{AB} with the choices $a:=\delta\sqrt{2t_N}$, $t:=\epsilon\sqrt{2t_N}$ and~$b:=(j-\delta)\sqrt{2t_N}$ to get
\begin{equation}
\label{E:8.12a}
P^\varrho\Bigl(\widehat L^{D_N}_{\epsilon\sqrt{2t_N}}(x)\ge j\sqrt{2t_N}\Bigr)
\le c_2\,\text{\rm e}\mkern0.7mu^{-(j-\delta)\frac{\sqrt{2t_N}}{G^{D_N}(x,x)}\frac{\sqrt{\delta}-\sqrt{\epsilon}}{\sqrt{\delta}}}
\end{equation}
for some constant~$c_2\in(0,\infty)$ independent of~$N\ge1$, $j=j_0,\dots,M$ and~$x\in D_N$. Putting \eqref{E:8.11a} and \eqref{E:8.12a} together and invoking \eqref{E:8.11b} along with the uniform upper bound on~$G^{D_N}(x,x)$, the sum over~$j=j_0,\dots,M$ in \eqref{E:8.10a} may be performed with the result of order $ \text{\rm e}\mkern0.7mu^{-\alpha \sqrt{\theta} j_0\epsilon_0} W_N/N^2$, uniformly in~$x\in D_N$.
Finally, for the stand-alone probability in \eqref{E:8.10a}, one more use of \cite[Lemma~4.1]{AB} with the choices $a:=(M+1)\sqrt{2t_N}$, $t:=\epsilon\sqrt{2t_N}$ and $b:=0$ yields
\begin{equation}
\label{E:8.16a}
P^\varrho\Bigl(\widehat L^{D_N}_{\epsilon\sqrt{2t_N}}(x)\ge (M+1)\sqrt{2t_N}\Bigr)\le
\frac{c_3}{\sqrt{\log N}}\,\text{\rm e}\mkern0.7mu^{-(1-o(1))\frac{(M+1)\sqrt{2t_N}}{G^{D_N}(x,x)}}
\end{equation}
for a constant~$c_3\in(0,\infty)$ independent of, and $o(1)\to0$ uniformly in,~$N\ge1$ and~$x\in D_N$. Using the definition of~$M$, the right-hand side of \eqref{E:8.16a} is of order $N^{-2[(\sqrt\theta+\lambda)^2-(1+ \epsilon_0 )\theta]+o(1)}$ which is $o(W_N/N^2)$ by $W_N=N^{2(1-\lambda^2)+o(1)}$ and \eqref{E:8.10b}, uniformly in~$x\in D_N$. The claim follows by taking~$N\to\infty$, followed by~$\epsilon\downarrow0$ and~$j_0\to\infty$.
\end{proofsect}
We are ready to give:
\begin{proofsect}{Proof of Proposition~\ref{prop-8.1}}
Let~$f\in C_{\text{\rm c}}( D \times\mathbb R\times\mathbb R^{\mathbb Z^2})$ be such that~$f(x,h,\phi)$ depends only on coordinates~$\{\phi_z\colon z\in \Lambda_r(0)\}$ for some~$r>0$ and vanishes unless $|h|\le b$ and $\max_{z\in\Lambda_r(0)}|\phi_z|\le b$, for some~$b>0$. Given~$\epsilon>0$, let~$k\in\mathbb Z$ be such that $| T_N \circ \theta_{H_{\varrho}} -k\epsilon|<\epsilon$. Pick~$x\in D_N$ and abbreviate
\begin{equation}
f_{N,r}(x,\ell):=f\biggl(x/N, \frac{\ell(x)-a_N}{\sqrt{2a_N}} ,
\Bigl\{ \frac{\ell(x)-\ell(x+z)}{\sqrt{2a_N}} \colon z\in\Lambda_r(0)\Bigr\}\biggr).
\end{equation}
Introducing the oscillation of~$f$ by
\begin{equation}
\text{osc}_{f} (\delta)
:= \sup_{x \in D} \,\sup_{\begin{subarray}{c} u, v \in \mathbb R,\\ |u-v| \le \delta \end{subarray}}
\,\,\sup_{\begin{subarray}{c} \phi, \widetilde \phi \in \mathbb R^{\Lambda_r (0)}, \\
\max_{z\in\Lambda_r(0)}|\phi_z - \widetilde \phi_z| \leq 2 \delta \end{subarray}}
\bigl|f(x, u, \phi) - f(x, v, \widetilde \phi)\bigr|,
\end{equation}
the difference
\begin{equation}
\label{E:8.17a}
f_{N,r}\bigl(x, \overline L^{D_N}_{t_N}\bigr)- f_{N,r}\Bigl(x, (\widehat L^{D_N}\circ \theta_{H_\varrho} )_{\widetilde t_{N,k}^-}\Bigr)
\end{equation}
is bounded in absolute value by the sum over~$z\in\Lambda_r(x)$ of three terms: $2\Vert f\Vert_\infty 1_{\widetilde\mathcal F_N(z)^{\text{\rm c}}}$,
\begin{equation}
\label{E:8.20a}
2\Vert f\Vert_\infty 1_{\widetilde\mathcal F_N(z)\cap\{H_z<H_\varrho\}}
\Bigl(1_{\{(\widehat L^{D_N}\circ \theta_{H_\varrho} )_{\widetilde t_{N,k}^-}(z)\ge a_N-2b\sqrt{2a_N}\}}+1_{\{\overline L^{D_N}_{t_N}(z)\ge a_N-2b\sqrt{2a_N}\}}\Bigr)
\end{equation}
and
\begin{multline}
\label{E:8.21a}
\qquad
1_{\widetilde\mathcal F_N(z)\cap\{H_z>H_\varrho\}}\Bigl(\text{osc}_f(\delta)+\Vert f\Vert_\infty 1_{\{|\overline L^{D_N}_{t_N}(z)- (\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^-} (z)|>\delta\sqrt{2a_N}\}}\Bigr)
\\
\times\Bigl(1_{\{ (\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^-} (z)\ge a_N-2b\sqrt{2a_N}\}}+1_{\{\overline L^{D_N}_{t_N}(z)\ge a_N-2b\sqrt{2a_N}\}}\Bigr).
\qquad
\end{multline}
To simplify estimates, introduce the events
\begin{equation}
\mathcal G_N(x):=\Bigl\{\widetilde L^{D_N}_{H_\varrho}(x)+ (\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^+} (x)\ge a_N-2b\sqrt{2a_N}\Bigr\}\cap \{H_x<H_\varrho\}
\end{equation}
and
\begin{equation}
\mathcal H_N(x):=\Bigl\{\widehat L^{D_N}_{\widetilde t_{N,k}^+}(x)\ge a_N-2b\sqrt{2a_N}\Bigr\}\cap\Bigl\{\widehat L^{D_N}_{\widetilde t_{N,k}^+}(x)-\widehat L^{D_N}_{\widetilde t_{N,k}^-}(x)>\delta\sqrt{2a_N}\Bigr\}.
\end{equation}
Then \eqref{E:8.20a} is bounded by $4\Vert f\Vert_\infty 1_{\mathcal G_N(z)}$ while \eqref{E:8.21a} is bounded by
\begin{equation}
2\text{osc}_f(\delta)1_{\{(\widehat L^{D_N}\circ \theta_{H_\varrho} )_{\widetilde t_{N,k}^+}(z)\ge a_N-2b\sqrt{2a_N}\}}+2\Vert f\Vert_\infty 1_{\mathcal H_N(z)}\circ \theta_{H_\varrho}.
\end{equation}
Summarizing these estimates, and writing~$\widehat\zeta^{D,{\text{\rm loc}}}_N(t_N')$ for the measure in \eqref{E:zetaNDloc} except with $L^{D_N}$ replaced by $\widehat L^{D_N}$ and~$t_N$ by~$t_N'$, we thus get that, on $\{| T_N \circ \theta_{H_{\varrho}} -k\epsilon|<\epsilon\}$,
\begin{multline}
\label{E:8.25a}
\biggl|\langle\overline\zeta^{D,{\text{\rm loc}}}_N,f\rangle-\frac{W_N(\widetilde t_{N,k}^-)}{W_N}
\bigl\langle\widehat\zeta_N^{D,{\text{\rm loc}}}(\widetilde t_{N,k}^-),f\bigr\rangle\circ \theta_{H_\varrho}\biggr|
\\
\le 4\Vert f\Vert_\infty|\Lambda_r(0)|\frac1{W_N}\sum_{x\in D_N}\bigl(1_{\widetilde\mathcal F_N(x)^{\text{\rm c}}}+1_{\mathcal G_N(x)}+1_{\mathcal H_N(x)}\circ \theta_{H_\varrho}\bigr)
\\
+2\,\text{osc}_f(\delta)|\Lambda_r(0)|\frac{W_N( \widetilde t_{N,k}^+ )}{W_N}\bigl\langle\widehat\zeta^{D}_N( \widetilde t_{N,k}^+ ),1_D\otimes 1_{[-2b,\infty)}\bigr\rangle\circ \theta_{H_\varrho}
\end{multline}
Using Lemmas~\ref{lemma-8.2}, \ref{lemma-8.3} and \ref{lemma-6.3}, the first term on the right tends to zero in $P^{x_N}$-probability as~$N\to\infty$ and $\epsilon \downarrow 0$ for each~$\delta>0$.
The tightness of the measures~$\{\widehat\zeta^D_N\colon N\ge1\}$ (under~$P^\varrho$) along with the uniform continuity of~$f$ ensures that the second term tends to zero in $P^{x_N}$-probability as~$N\to\infty$ and~$\delta\downarrow0$.
To finish the proof, note that by \cite[Theorem~2.6]{AB} and the argument underlying Proposition~\ref{thm-4.3} we have, under~$P^\varrho$,
\begin{equation}
\widehat\zeta^{D,{\text{\rm loc}}}_N(t_N')\otimes\delta_{T_N}\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\widehat\zeta^D\otimes\widehat\nu_\lambda\otimes\delta_{T}
\end{equation}
for any sequence~$\{t_N'\}_{N\ge1}$ such that $t_N'-t_N=o(t_N)$, where $\widehat\zeta^D$ is related to~$T$ as in \eqref{E:5.48a}.
Since $W_N(\widetilde t_{N,k}^-)/W_N=(\text{\rm e}\mkern0.7mu^{-\alpha\lambda T_N (\widetilde t_{N, k}^-)}\circ
\theta_{H_\varrho})\text{\rm e}\mkern0.7mu^{O(\epsilon)}$ on $\{|T_N \circ \theta_{H_{\varrho}}-k\epsilon|<\epsilon\}\cap\mathcal{E}_N^- \circ \theta_{H_{\varrho}}$,
from \eqref{E:8.25a} and the tightness of the random variables~$\{T_N\}_{N\ge1}$ and $\{H_\varrho/|D_N|\}_{N\ge1}$
we get, by taking~$N\to\infty$ followed by~$\delta\downarrow0$,~$\epsilon\downarrow0$ and~$m\to\infty$, under~$P^{x_N}$,
\begin{equation}
\overline\zeta^{D,{\text{\rm loc}}}_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\text{\rm e}\mkern0.7mu^{-\alpha\lambda T}\widehat\zeta^D\otimes\widehat\nu_\lambda.
\end{equation}
This is the desired claim.
\end{proofsect}
With Proposition~\ref{prop-8.1} in hand, we are ready to tackle:
\begin{proofsect}{Proof of Theorem~\ref{thm-thick-loc}, thick points}
First observe that the tightness of $\{\zeta_N^{D}\colon N\ge1\}$
implies tightness of
$\{\zeta_N^{D,{\text{\rm loc}}}\colon N\ge1\}$ and so we may consider subsequential distributional limits $\zeta^{D,{\text{\rm loc}}}$ of the latter. Using Proposition~\ref{prop-8.1} in the argument from the proof of Theorem~\ref{thm-thick} we conclude that every such subsequential weak limit obeys
\begin{equation}
\langle \zeta^{D,{\text{\rm loc}}}, f^{\ast\mathfrak n} \rangle
\,\,\overset{\text{\rm law}}=\,\,
\langle \widetilde \zeta^{D}\otimes\widehat\nu_\lambda,f\rangle
\end{equation}
for all $f\in C_{\text{\rm c}}(D\times\mathbb R\times\mathbb R^{\mathbb Z^2})$, where
\begin{equation}
f^{\ast\mathfrak n}(x,h, \phi):=E\Bigl[f\bigl(x,h+\mathfrak{n}_0, \{\mathfrak{n}_0
- \mathfrak{n}_z + \phi_z \colon z \in \mathbb{Z}^2 \}\bigr)\Bigr],
\end{equation}
for $\{\mathfrak{n}_z \colon z \in \mathbb{Z}^2\}$ i.i.d. $\mathcal N(0,\tfrac18)$.
We now proceed similarly as in \twoeqref{E:7.49}{E:7.52}: Given any $\tilde f\in C_{\text{\rm c}}(\mathbb R \times \mathbb R^{\mathbb Z^2})$ and any Borel~$A\subseteq D$ with~${\rm Leb}(A)>0$, the explicit form of~$\widetilde\zeta^{D,{\text{\rm loc}}}$ gives the pointwise equality
\begin{multline}
\bigl\langle\zeta^{D,{\text{\rm loc}}},(1_A\otimes\tilde f)^{\ast\mathfrak n}\bigr\rangle
\\
=\bigl\langle\zeta^{D,{\text{\rm loc}}},(1_A\otimes1_{[0,\infty)}\otimes 1_{\mathbb R^{\mathbb Z^2}})^{\ast\mathfrak n}\bigr\rangle
\,\alpha\lambda\int \text{\rm d}\mkern0.5mu h\,\text{\rm e}\mkern0.7mu^{-\alpha\lambda h} \otimes \widehat\nu_{\lambda} (\text{\rm d}\mkern0.5mu \phi) \tilde f(h, \phi).
\end{multline}
Abbreviating $\beta:=-\alpha\lambda$, for each~$A$ as above, the measure $\zeta_A$ on~$\mathbb R\times\mathbb R^{\mathbb Z^2}$ defined by
\begin{equation}
\label{E:8.31a}
\zeta_A(B):=\frac{\zeta^{D,{\text{\rm loc}}}(A\times B)}
{\alpha\lambda \bigl\langle\zeta^{D,{\text{\rm loc}}},(1_A\otimes1_{[0,\infty)}\otimes 1_{\mathbb R^{\mathbb Z^2}})^{\ast\mathfrak n}\bigr\rangle}
\end{equation}
is then a solution~$\mu$ to the convolution equation
\begin{multline}
\label{E:0.24}
\quad
\int_{\mathbb R \times \mathbb R^{\mathbb Z^2}}
\mu(\text{\rm d}\mkern0.5mu h \text{\rm d}\mkern0.5mu \phi)
\,E\bigl[ \,f(h+\mathfrak n_0, \{\mathfrak n_0 - \mathfrak n_z + \phi_z \colon z \in \mathbb Z^2\})\bigr]
\\= \int_{\mathbb R \times \mathbb R^{\mathbb Z^2}} \text{\rm d}\mkern0.5mu h\,\,\text{\rm e}\mkern0.7mu^{\beta h} \otimes \widehat\nu_{\lambda} (\text{\rm d}\mkern0.5mu \phi)\,f(h, \phi)
\quad
\end{multline}
for all $f\in C_{\text{\rm c}}(\mathbb R\times\mathbb R^{\mathbb Z^2})$.
To solve this equation, we need:
\begin{lemma}
\label{lemma-8.4a}
For each~$x,y\in\mathbb Z^2$, let
\begin{equation}
\label{E:8.27}
\widetilde C(x,y):=\fraktura a(x)+\fraktura a(y)-\fraktura a(x-y)-\frac18\bigl[1-\delta_{x,0}-\delta_{y,0}+\delta_{x,y}\bigr].
\end{equation}
Then $\widetilde C$ is symmetric and positive semidefinite and so there exists a centered Gaussian process~$\{\widetilde\phi_x\colon x\in\mathbb Z^2\}$ with covariance~$\widetilde C$. This process then satisfies \eqref{E:2.33}.
\end{lemma}
\begin{proofsect}{Proof}
Recall that (in our normalization) $\fraktura a$ solves the equation $\Delta\fraktura a = \delta_0$ and so using Fourier transform techniques we get
\begin{equation}
\fraktura a(x)=\int_{(-\pi,\pi)^2}\frac{\text{\rm d}\mkern0.5mu k}{(2\pi)^2}\frac{1-\text{\rm e}\mkern0.7mu^{-\text{\rm i}\mkern0.7mu k\cdot x}}{\widehat D(k)},
\end{equation}
where
\begin{equation}
\widehat D(k):=4\sin(k_1/2)^2+4\sin(k_2/2)^2.
\end{equation}
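In particular, invoking the symmetry of~$\widehat D(k)$ under $k\to-k$, for any $x,y\in\mathbb Z^2$ we have
\begin{equation}
\fraktura a(x)+\fraktura a(y)-\fraktura a(x-y)=\int_{(-\pi,\pi)^2}\frac{\text{\rm d}\mkern0.5mu k}{(2\pi)^2}\,\frac{(1-\text{\rm e}\mkern0.7mu^{-\text{\rm i}\mkern0.7mu k\cdot x})\,\overline{(1-\text{\rm e}\mkern0.7mu^{-\text{\rm i}\mkern0.7mu k\cdot y})}}{\widehat D(k)},
\end{equation}
which is the identity underlying the computation below.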
Let $v\in\ell^2(\mathbb Z^2)$ and denote by $\hat v(k):=\sum_{x\in\mathbb Z^2}v(x)\text{\rm e}\mkern0.7mu^{\text{\rm i}\mkern0.7mu k\cdot x}$ the Fourier transform of~$v$. A calculation then shows
\begin{equation}
(v,\widetilde C v) = \int_{(-\pi,\pi)^2}\frac{\text{\rm d}\mkern0.5mu k}{(2\pi)^2}\biggl(\frac1{\widehat D(k)}-\frac18\biggr)\bigl|\hat v(0)-\hat v(k)\bigr|^2
\end{equation}
Noting that $\widehat D(k)\le8$, we get that $\widetilde C$ is indeed positive semidefinite. We now readily check that $x,y\mapsto\frac18[1-\delta_{x,0}-\delta_{y,0}+\delta_{x,y}]$ is the covariance of $\{n_0-n_z\colon z\in\mathbb Z^2\}$ for $\{n_z\colon z\in\mathbb Z^2\}$ i.i.d.\ $\mathcal N(0,\frac18)$, and so \eqref{E:2.33} holds as well.
\end{proofsect}
The solution of \eqref{E:0.24} will require the following extension of Lemma~\ref{lemma-7.4}:
\begin{lemma}
\label{lemma-8.4}
Let $\widetilde\phi$ be a centered Gaussian process on~$\mathbb Z^2$ such that, for some~$\beta\in\mathbb R$ and some~$\sigma^2>0$, the process $\{\widetilde\phi_x+n_0-n_z\colon z\in\mathbb Z^2\}$ with $\{n_z\colon z\in\mathbb Z^2\}$ i.i.d.\ $\mathcal N(0,\sigma^2)$ has the law of the pinned DGFF~$\phi$. Denote
\begin{equation}
\nu_{\lambda,\beta}(A):=P\biggl(\widetilde\phi+\lambda\alpha\fraktura a+\beta\sigma^21_{\mathbb Z^2\smallsetminus\{0\}}\in A\biggr).
\end{equation}
Then \eqref{E:0.24} is solved uniquely by
\begin{equation}
\label{E:0.29}
\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu\phi) = \text{\rm e}\mkern0.7mu^{-\frac1{2}\beta^2\sigma^2+\beta h}\text{\rm d}\mkern0.5mu h\otimes\nu_{\lambda,\beta}(\text{\rm d}\mkern0.5mu\phi).
\end{equation}
\end{lemma}
\begin{proofsect}{Proof}
Denote $\widetilde\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu \phi):=\text{\rm e}\mkern0.7mu^{\frac1{2}\beta^2\sigma^2-\beta h}\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu \phi)$.
Pick $\{t_z\colon z\in\mathbb Z^2\}$ with finite support and~$t_0=0$ and, writing $\langle\cdot,\cdot\rangle$ for the inner product in $\ell^2(\mathbb Z^2)$, apply \eqref{E:0.24} to the test function $h,\phi\mapsto \text{\rm e}\mkern0.7mu^{-\beta h}\,f(h)\exp\{\langle t,\phi\rangle\}$ with a non-negative~$f\in C_{\text{\rm c}}(\mathbb R)$. (This is permissible in light of the Monotone Convergence Theorem.) Writing~$x$ for~$h+n_0$ then turns \eqref{E:0.24} into
\begin{multline}
\label{E:0.30}
\int\widetilde\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu \phi)\otimes\text{\rm d}\mkern0.5mu x\,\text{\rm e}\mkern0.7mu^{\langle t,\phi\rangle}\,\frac1{\sqrt{2\pi\sigma^2}}\,\text{\rm e}\mkern0.7mu^{-\frac1{2\sigma^2}(x-h)^2}E\bigl(\text{\rm e}\mkern0.7mu^{-\langle t,n\rangle})\,\text{\rm e}\mkern0.7mu^{\bar t(x-h)}\text{\rm e}\mkern0.7mu^{-\frac1{2}\beta^2\sigma^2+\beta h}\text{\rm e}\mkern0.7mu^{-\beta x}\,f(x)
\\
=\int \widehat \nu_\lambda(\text{\rm d}\mkern0.5mu\phi)\otimes\text{\rm d}\mkern0.5mu x\,\text{\rm e}\mkern0.7mu^{\langle t,\phi\rangle}f(x)
\end{multline}
where $\bar t:=\sum_{z\in\mathbb Z^2}t_z$.
By assumption we have
\begin{equation}
\{\phi_z\colon z\in\mathbb Z^2\}\,\,\overset{\text{\rm law}}=\,\,\{\widetilde\phi_z+n_0-n_z\colon z\in\mathbb Z^2\}
\end{equation}
and so, in light of $t_0=0$,
\begin{equation}
\begin{aligned}
\int \widehat \nu_\lambda(\text{\rm d}\mkern0.5mu\phi)\text{\rm e}\mkern0.7mu^{\langle t,\phi\rangle}
&=\int P(\text{\rm d}\mkern0.5mu\phi)\text{\rm e}\mkern0.7mu^{\langle t,\phi+\alpha\lambda\fraktura a\rangle}\\
&=\int P(\text{\rm d}\mkern0.5mu\widetilde\phi)E\bigl(\text{\rm e}\mkern0.7mu^{\langle t,\widetilde\phi+n_0-n+\alpha\lambda\fraktura a\rangle})\\
&=\int \nu_{\lambda,\beta} (\text{\rm d}\mkern0.5mu\widetilde\phi)\,\text{\rm e}\mkern0.7mu^{\langle t,\widetilde\phi\rangle}\,E\bigl(\text{\rm e}\mkern0.7mu^{-\langle t,n\rangle}\bigr)E(\text{\rm e}\mkern0.7mu^{\bar t (n_0-\beta\sigma^2)}),
\end{aligned}
\end{equation}
where the expectation is over $\{n_z\colon z\in\mathbb Z^2\}$.
Using this in \eqref{E:0.30} and cancelling $E\bigl(\text{\rm e}\mkern0.7mu^{-\langle t,n\rangle}\bigr)$ on both sides, the identity $E(\text{\rm e}\mkern0.7mu^{\bar t (n_0-\beta\sigma^2)}) = \text{\rm e}\mkern0.7mu^{\frac12\bar t^2\sigma^2-\beta\bar t\sigma^2}$ along with the fact that integrals against functions~$f\in C_{\text{\rm c}}(\mathbb R)$ determine Borel measures on~$\mathbb R$ yields
\begin{multline}
\label{E:0.33}
\int\widetilde\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu \phi)\,\text{\rm e}\mkern0.7mu^{\langle t,\phi\rangle}\,\frac1{\sqrt{2\pi\sigma^2}}\,\text{\rm e}\mkern0.7mu^{-\frac1{2\sigma^2}(x-h)^2}\,\text{\rm e}\mkern0.7mu^{\bar t(x-h)}\text{\rm e}\mkern0.7mu^{-\beta x} \text{\rm e}\mkern0.7mu^{-\frac12\bar t^2\sigma^2+\beta\bar t\sigma^2}\text{\rm e}\mkern0.7mu^{-\frac1{2}\beta^2\sigma^2+\beta h}
\\
=\int \nu_{\lambda,\beta} (\text{\rm d}\mkern0.5mu\widetilde\phi)\,\text{\rm e}\mkern0.7mu^{ \langle t,\widetilde\phi \rangle }\,
\end{multline}
for all~$x\in\mathbb R$. (Continuity is used to get from Lebesgue a.e.~$x\in\mathbb R$ to all~$x\in\mathbb R$.)
The five exponentials on the left combine into
\begin{equation}
\text{\rm e}\mkern0.7mu^{-\frac1{2\sigma^2}(x-h-\bar t\sigma^2)^2 -\beta (x-h-\bar t\sigma^2)-\frac1{2}\beta^2\sigma^2}=\text{\rm e}\mkern0.7mu^{-\frac1{2\sigma^2}(x-h-\bar t\sigma^2+\beta\sigma^2)^2}.
\end{equation}
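(The first step is the completion of the square
\begin{equation}
-\frac{(x-h)^2}{2\sigma^2}+\bar t(x-h)-\frac12\bar t^2\sigma^2=-\frac{(x-h-\bar t\sigma^2)^2}{2\sigma^2},
\end{equation}
and the second step is the same computation with $x-h-\bar t\sigma^2$ in place of~$x-h$ and $-\beta$ in place of~$\bar t$.)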
Shifting~$x$ by $\bar t\sigma^2+\beta\sigma^2$ and scaling it by~$\sigma^2$ shows that $\widehat\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu\phi):=\frac1{\sqrt{2\pi\sigma^2}}\text{\rm e}\mkern0.7mu^{-\frac1{2\sigma^2}h^2}\widetilde\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu\phi)$ obeys
\begin{equation}
\label{E:8.45}
\int\widehat\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu \phi)\,\text{\rm e}\mkern0.7mu^{\langle t,\phi\rangle-xh}
=\int\nu_{\lambda,\beta}(\text{\rm d}\mkern0.5mu\widetilde\phi)\,\text{\rm e}\mkern0.7mu^{\langle t,\widetilde\phi\rangle}\text{\rm e}\mkern0.7mu^{\frac12x^2\sigma^2}
\end{equation}
for all $x\in\mathbb R$ and all~$\{t_z\colon z\in\mathbb Z^2\}$ with finite support and~$t_0=0$.
The restriction to~$t_0=0$ is irrelevant in \eqref{E:8.45} since $\nu_{\lambda,\beta}$ is concentrated on $\{\phi\colon \phi_0=0\}$ and, by \eqref{E:0.24} so is~$\mu$ and thus also~$\widehat\mu$. The right-hand side of \eqref{E:8.45} is the Laplace transform of the product of the law of~$\mathcal N(0,\sigma^2)$ and~$\nu_{\lambda,\beta}$. Hence
\begin{equation}
\widetilde\mu(\text{\rm d}\mkern0.5mu h\text{\rm d}\mkern0.5mu \phi) = \text{\rm d}\mkern0.5mu h\otimes \nu_{\lambda,\beta}(\text{\rm d}\mkern0.5mu\phi)
\end{equation}
and so the claim follows from the definition of~$\widetilde\mu$.
\end{proofsect}
Returning to the main line of the proof of Theorem~\ref{thm-thick-loc}, it remains to observe that the denominator in \eqref{E:8.31a} has the law of
\begin{equation}
\sqrt{\frac{\sqrt{\theta}}{\sqrt{\theta}+\lambda}}\,\,\fraktura c(\lambda)\,\text{\rm e}\mkern0.7mu^{\alpha \lambda (\mathfrak{d}(x) - 1) Y}\,Z_\lambda^{D,0}(\text{\rm d}\mkern0.5mu x),
\end{equation}
for~$Y=\mathcal N(0,\sigma_D^2)$ independent of~$Z^{D,0}_\lambda$.
Lemma~\ref{lemma-8.4} with $\beta:=-\alpha\lambda$ and~$\sigma^2:=\frac18$ then yields the claim.
\end{proofsect}
Moving to the thin points, here we go directly for:
\begin{proofsect}{Proof of Theorem~\ref{thm-thick-loc}, thin points}
The proof is considerably simpler because, as at several earlier points, certain key inequalities go in a more favorable direction. Following the argument and the notation from the proof for the thick points, we derive an analogue of \eqref{E:8.25a} with the events $\mathcal G_N(x)$ and $\mathcal H_N(x)$ replaced by
\begin{equation}
\widetilde\mathcal G_N(x):=\Bigl\{(\widehat L^{D_N} \circ \theta_{H_{\varrho}})_{\widetilde t_{N,k}^-} (x)\le a_N+2b\sqrt{2a_N}\Bigr\}\cap \{H_x<H_\varrho\}
\end{equation}
and
\begin{equation}
\widetilde\mathcal H_N(x):=\Bigl\{\widehat L^{D_N}_{\widetilde t_{N,k}^-}(x)\le a_N+2b\sqrt{2a_N}\Bigr\}\cap\Bigl\{\widehat L^{D_N}_{\widetilde t_{N,k}^+}(x)-\widehat L^{D_N}_{\widetilde t_{N,k}^-}(x)>\delta\sqrt{2a_N}\Bigr\},
\end{equation}
respectively,
and $1_{[-2b,\infty)}$ replaced by $1_{(-\infty,2b]}$. The $P^{x_N}$-probability of the event~$\widetilde \mathcal G_N(x)$ is controlled using Lemma~\ref{lemma-6.5}. Unlike $\mathcal H_N(x)$, which required a non-trivial decomposition in the proof of Lemma~\ref{lemma-8.3}, the two events constituting~$\widetilde\mathcal H_N(x)$ can be directly separated using the Markov property of~$t\mapsto\widehat L^{D_N}_t$. The expected sum over~$1_{ \widetilde \mathcal H_N(x) }\circ\theta_{H_\varrho}$ is then shown to be of order
$\epsilon W_N$ by \eqref{E:8.49}
and the fact that $E^\varrho\langle\widehat\zeta^D_N(\widetilde t_{N,k}^-),1_{(-\infty,2b]}\rangle$ is bounded in~$N\ge1$. As a consequence, we get that, under $P^{x_N}$,
\begin{equation}
\overline\zeta^{D,{\text{\rm loc}}}_N\,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,\widetilde\zeta^D\otimes\widehat\nu_\lambda,
\end{equation}
where~$\widetilde\zeta^D$ is the measure on the right of \eqref{E:1.23dis} without the term $\text{\rm e}\mkern0.7mu^{-\alpha^2\lambda^2/16}$ and~$\widehat\nu_\lambda$ is the law of~$\phi-\alpha\lambda\fraktura a$.
The rest of the argument for the thick points may be followed literally.
\end{proofsect}
\subsection{Avoided points}
The proof is a variation on the themes encountered in the proof of convergence of the measure associated with the light and avoided points. In particular, since the local time vanishes at the avoided points, we will be able to use monotonicity arguments. The following observation will be useful:
\begin{lemma}
\label{lemma-8.6}
Let~$\mu$ be a probability measure on~$\mathbb N^{\mathbb Z^2}$ with samples denoted by $\{\hat n_z\colon z\in\mathbb Z^2\}$. Let $\{\tau_j(x)\colon j\ge1,\,x\in\mathbb Z^2\}$ be i.i.d.\ Exponential(1), independent of $\{\hat n_z\colon z\in\mathbb Z^2\}$. Then for any~$t\in(-1,\infty)^{\mathbb Z^2}$ with finite support,
\begin{equation}
E\exp\Bigl\{-\sum_{z\in\mathbb Z^2}t(z)\sum_{j=1}^{\hat n_z}\tau_j(z)\Bigr\}=E\exp\Bigl\{-\sum_{z\in\mathbb Z^2}t'(z)\hat n_z\Bigr\},
\end{equation}
where $t'(z):=\log(1+t(z))$.
\end{lemma}
\begin{proofsect}{Proof}
This boils down to a calculation of the Laplace transform of Exponential(1).
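Indeed, conditionally on $\{\hat n_z\colon z\in\mathbb Z^2\}$, independence of the~$\tau_j(z)$'s and
\begin{equation}
E\bigl(\text{\rm e}\mkern0.7mu^{-t(z)\tau_1(z)}\bigr)=\int_0^\infty\text{\rm e}\mkern0.7mu^{-(1+t(z))u}\,\text{\rm d}\mkern0.5mu u=\frac1{1+t(z)}=\text{\rm e}\mkern0.7mu^{-\log(1+t(z))},\qquad t(z)>-1,
\end{equation}
show that the conditional expectation of the quantity on the left equals $\exp\{-\sum_{z\in\mathbb Z^2}t'(z)\hat n_z\}$; averaging over $\{\hat n_z\colon z\in\mathbb Z^2\}$ gives the claim.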
\end{proofsect}
\begin{proofsect}{Proof of Theorem~\ref{thm-avoid-loc}}
We will establish the existence and uniqueness of the law $\nu_u^{\text{RI},\text{dis}}$ as part of the proof of the convergence. Let~$\tilde f\in C(\overline D)$ be non-negative, pick
$t\in(0,\infty)^{\mathbb Z^2}$
with finite support and consider the test function
\begin{equation}
f_t(x,\phi):=\tilde f(x)\text{\rm e}\mkern0.7mu^{-\langle t,\phi\rangle}
\end{equation}
where, abusing notation as before, $\langle\cdot,\cdot\rangle$ denotes the canonical inner product in~$\ell^2(\mathbb Z^2)$. The function $x,h,\phi\mapsto \text{\rm e}\mkern0.7mu^{-hn}f_t(x,\phi)$ is non-increasing
in both~$h$ and the coordinates of~$\phi$ and so, thanks to Lemma~\ref{lemma-8.2}, \eqref{E:5.35u} applies to~$f$ replaced by~$\text{\rm e}\mkern0.7mu^{-hn}f_t$ and $\widetilde\vartheta^D_N$ by
\begin{equation}
\overline\vartheta^D_N:=\frac1{\widehat W_N}\sum_{x\in D_N}\delta_{x/N}\otimes\delta_{\overline L^{D_N}_{t_N}(x)}\otimes\delta_{\{\overline L^{D_N}_{t_N}(x+z)\colon z\in\mathbb Z^2\}}.
\end{equation}
Let~$\overline\kappa^D_N$ be the measure tracking the local behavior of $\{\overline L^{D_N}_{t_N}(x+z)\colon z\in\mathbb Z^2\}$ around every point~$x$ where $\overline L^{D_N}_{t_N}(x)=0$ which, we note, is almost surely equivalent to~$L^{D_N}_{t_N}(x)=0$. Taking the limits~$N\to\infty$ and~$n\to\infty$, from \cite[Theorem~2.8]{AB} we then get, under~$P^{x_N}$,
\begin{equation}
\label{E:8.54}
\langle\overline\kappa_N^{D,{\text{\rm loc}}},f_t\rangle \,\,\,\underset{N\to\infty}{\,\overset{\text{\rm law}}\longrightarrow\,}\,\,\,
\langle\widetilde\kappa^D\otimes\nu_\theta^{\text{RI}},f_t\rangle,
\end{equation}
where~$\widetilde\kappa^D$ is the law on the right-hand side of \eqref{E:2.27cont}.
Next we observe that, by Lemma~\ref{lemma-8.6} and the fact that~$4L^{D_N}_{t_N}(x)$ is a natural number,
\begin{equation}
E^\varrho\bigl(\langle\overline\kappa_N^{D,{\text{\rm loc}}},f_t\rangle\,\big|\,\sigma(X)\bigr) = \langle\kappa_N^{D,{\text{\rm loc}}},f_{t'}\rangle
\end{equation}
where $t'(z):=4\log(1+t(z)/4)$. From \eqref{E:7.10} and \eqref{E:8.54} we then get that every subsequential weak limit $\kappa^{D,{\text{\rm loc}}}$ of $\{\kappa_N^{D,{\text{\rm loc}}}\colon N\ge1\}$ obeys
\begin{equation}
\langle\kappa^{D,{\text{\rm loc}}},f_{t'}\rangle \,\overset{\text{\rm law}}=\, \langle\widetilde\kappa^D\otimes\nu_\theta^{\text{RI}},f_t\rangle
\end{equation}
jointly for all~$t\in(0,\infty)^{\mathbb Z^2}$ with finite support and all~$\tilde f\in C(\overline D)$.
Since~$\nu_\theta^{\text{RI}}$ is non-random, this is readily turned into the a.s.\ identity
\begin{equation}
\int \kappa^{D,{\text{\rm loc}}}(\text{\rm d}\mkern0.5mu x\text{\rm d}\mkern0.5mu\ell) \tilde f(x)\text{\rm e}\mkern0.7mu^{-\langle t',\ell\rangle} = \Bigl(\int\widetilde\kappa^D(\text{\rm d}\mkern0.5mu x)\tilde f(x)\Bigr)\int \nu_\theta^{\text{RI}}(\text{\rm d}\mkern0.5mu\phi)\text{\rm e}\mkern0.7mu^{-\langle t,\phi\rangle}.
\end{equation}
This along with the fact that
\begin{equation}
\text{\rm e}\mkern0.7mu^{-\langle t',\ell\rangle} = E\exp\biggl\{-\sum_{z\in\mathbb Z^2}t(z)\frac14\sum_{j=1}^{4\ell(z)} \tau_j(z)\biggr\}
\end{equation}
for $\{\tau_j(z)\colon j\ge1,\,z\in\mathbb Z^2\}$ independent i.i.d.\ Exponential(1) implies that
\begin{equation}
\kappa^{D,{\text{\rm loc}}} = \widetilde\kappa^D\otimes\nu_\theta^{\text{RI},\text{dis}}
\end{equation}
where~$\nu_\theta^{\text{RI},\text{dis}}$ is a measure as described in the statement.
This shows that a measure $\nu_u^{\text{RI},\text{dis}}$ exists with the stated properties for all~$u\in(0,1)$. Since adding independent samples from this measure for parameters~$u\in(0,1)$ and~$v\in(0,1)$ gives us a sample from the measure for parameter~$u+v$, the existence extends to all~$u>0$. The measure is unique by Lemma~\ref{lemma-8.6}, and hence so is the distributional limit~$\kappa^{D,{\text{\rm loc}}}$. This completes the proof.
\end{proofsect}
\section*{Acknowledgments}
\nopagebreak\nopagebreak\noindent
This project has been supported in part by the NSF award DMS-1712632 and GA\v CR project P201/16-15238S.
The first author has been supported in part by JSPS KAKENHI, Grant-in-Aid for Early-Career Scientists 18K13429.
\bibliographystyle{abbrv}
| {
"timestamp": "2019-11-28T02:01:28",
"yymm": "1911",
"arxiv_id": "1911.11810",
"language": "en",
"url": "https://arxiv.org/abs/1911.11810",
"abstract": "Given a sequence of lattice approximations $D_N\\subset\\mathbb Z^2$ of a bounded continuum domain $D\\subset\\mathbb R^2$ with the vertices outside $D_N$ fused together into one boundary vertex $\\varrho$, we consider discrete-time simple random walks in $D_N\\cup\\{\\varrho\\}$ run for a time proportional to the expected cover time and describe the scaling limit of the exceptional level sets of the thick, thin, light and avoided points. We show that these are distributed, up a spatially-dependent log-normal factor, as the zero-average Liouville Quantum Gravity measures in $D$. The limit law of the local time configuration at, and nearby, the exceptional points is determined as well. The results extend earlier work by the first two authors who analyzed the continuous-time problem in the parametrization by the local time at $\\varrho$. A novel uniqueness result concerning divisible random measures and, in particular, Gaussian Multiplicative Chaos, is derived as part of the proofs.",
"subjects": "Probability (math.PR); Mathematical Physics (math-ph)",
"title": "Exceptional points of discrete-time random walks in planar domains",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587229064297,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7097978794935106
} |
https://arxiv.org/abs/1106.4910 | On Projections of Metric Spaces | Let $X$ be a metric space and let $\mu$ be a probability measure on it. Consider a Lipschitz map $T: X \rightarrow \Rn$, with Lipschitz constant $\leq 1$. Then one can ask whether the image $TX$ can have large projections on many directions. For a large class of spaces $X$, we show that there are directions $\phi \in \nsphere$ on which the projection of the image $TX$ is small on the average, with bounds depending on the dimension $n$ and the eigenvalues of the Laplacian on $X$. | \section{Introduction}
Let $(X,m)$ be a metric space and let $\mu$ be a probability measure on $X$. Consider a
Lipschitz map $T:X \rightarrow \mathbb{R}^n$, with $\LipNorm{T}\leq 1$, where $\mathbb{R}^n$ is taken with the standard inner product $<\cdot,\cdot>$ and the corresponding Euclidean norm $|\cdot|$.
Define another (semi) norm on $\mathbb{R}^n$ by setting for each $\theta \in \mathbb{R}^n$,
\[
\|\theta\|_{L_2} := \left( \int <Tx,\theta>^2 d\mu(x) \right)^{\frac{1}{2}}.
\]
This norm is known as the covariance structure of the push-forward measure $T\mu$, and we shall regard it as measuring the size of the projection of the image
of $X$ onto the direction $\theta$. It is then natural to ask, for a given space $X$, whether an image of $X$ can have big projections on many directions. For example, take $X$ to be the
unit interval $[0,1]$, with Lebesgue measure, and let $T:[0,1] \rightarrow \mathbb{R}^n$ be a Lipschitz map (here and in what follows, Lipschitz will mean ``having Lipschitz constant $\leq 1$"). By considering a few pictures, it
is clear that if $n$ is large then there should be a direction with a small projection and we
are interested in quantifying this phenomenon. It turns out that this particular situation was
investigated already by Gohberg and Krein, in a different formulation and context (related to
spectral properties of smooth kernels), see the book \cite{GK}, Chapter III.10.3 .
\begin{thm}[\cite{GK}]
\label{curves_GK} There is an absolute constant $c>0$, such
that for every $n>1$ and every $T:[0,1] \rightarrow \mathbb{R}^n$ with $\LipNorm{T}\leq 1$, there is
$\theta \in S^{n-1}$ such that $\LtwoNorm{\theta} \leq c \cdot n^{-\frac{3}{2}}$.
\end{thm}
Our objective is to provide a similar result for general metric spaces with measure on which an appropriate notion
of a derivative can be defined. However, for the simplicity of presentation,
we shall restrict our discussion to finite spaces $X$ which are graphs with the shortest
path metric. The statement and the proof of the following result can, for instance, be repeated with straightforward modifications in the setting of Riemannian manifolds.
Let $X$ be a finite set and let $E\subset X \times X$ be a set of edges such that
$(X,E)$ is a connected undirected graph without loops. For $x,y \in X$, write $x \sim y$ iff $(x,y) \in E$ and let $d(x)$ denote the degree of $x$. We endow $X$ with the shortest path metric and we set $\mu$ to be the stationary distribution of a simple nearest neighbour
random walk on $(X,E)$, $\mu(x) = \frac{d(x)}{2|E|}$.
Denote by $L(X)$ the space of real-valued functions on $X$. The Laplacian on
$L(X)$ is defined by
\[
\bigtriangleup(f)(x) = 2 \left( f(x) - \frac{1}{d(x)} \sum_{y \sim x} f(y) \right).
\]
As is well known, this is a self-adjoint non-negative operator, with a one dimensional
kernel consisting of constant functions. We denote by $\{\lambda_i\}_{i=1}^{|X|-1}$ the sequence of non-zero eigenvalues of $\bigtriangleup$ in \textit{non-decreasing} order, including multiplicities.
\begin{thm}
\label{main_thm}
There is an absolute constant $c>0$ such that for every graph $X$ as above, every
$n>1$ and every Lipschitz map $T: X \rightarrow \mathbb{R}^n$, there is a direction
$\theta \in S^{n-1}$ such that $\LtwoNorm{\theta} \leq c \cdot n^{-\frac{1}{2}} \lambda^{-\frac{1}{2}}_{\floor{n/2}-1}$.
\end{thm}
\begin{example}[Discrete Space]
Let $(X,m)$ be a metric space such that $m(x,y)=1$ for all $x\neq y$, and let $\mu$
be the uniform probability on $X$. Then for every Lipschitz $T:X \rightarrow \mathbb{R}^n$ there is
$\theta \in S^{n-1}$ such that
\begin{equation}
\label{discr_space_bound}
\LtwoNorm{\theta} \leq \frac{c}{\sqrt{n}}.
\end{equation}
Note that,
interestingly enough, the size of the space, $|X|$, does not appear in this bound
(except that, of course, for $n>|X|$ we always have $\theta \in \mathbb{R}^n$ with
$\LtwoNorm{\theta}=0$). That is, one cannot increase the minimal projection size of $X$ in,
say,
$\mathbb{R}^{20}$, by adding more points. To prove (\ref{discr_space_bound}), note that
$(X,m)$ corresponds to a clique graph, and the Laplacian has a single non-zero eigenvalue,
which is of the order of a constant and has multiplicity $|X|-1$.
\end{example}
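The computation behind this example is easy to reproduce numerically. The following sketch (assuming NumPy; the size $|X|=8$ is an arbitrary illustrative choice) builds the Laplacian of the clique graph in matrix form, $\bigtriangleup = 2(I-\mathrm{Deg}^{-1}A)$ with $\mathrm{Deg}$ the diagonal degree matrix and $A$ the adjacency matrix, and confirms that its only non-zero eigenvalue is $\frac{2|X|}{|X|-1}$, of multiplicity $|X|-1$.
\begin{verbatim}
# Minimal check of the Discrete Space example: the clique-graph Laplacian
# Delta = 2(I - Deg^{-1} A) has the single non-zero eigenvalue 2m/(m-1).
import numpy as np

m = 8                                    # |X|, an arbitrary illustrative size
A = np.ones((m, m)) - np.eye(m)          # adjacency matrix of the clique graph
deg = A.sum(axis=1)                      # every degree equals m - 1
Delta = 2 * (np.eye(m) - A / deg[:, None])

eig = np.sort(np.linalg.eigvalsh(Delta))
print(eig)                               # one zero, then m-1 copies of 2m/(m-1)
print(2 * m / (m - 1))
\end{verbatim}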
\begin{example}[Combinatorial Cube]
Here $X$ is the set $\ncubep{d} = \{0,1\}^d$, with the Hamming metric
\[m(x,y)=|\{i \in \{1,...,d\} \spaceo | \spaceo x_i\neq y_i \}|\]
where $x=x_1...x_d,y=y_1...y_d\in \ncubep{d}$ and $\mu$ is again taken to be the uniform probability measure. The non-zero eigenvalues of the Laplacian on $X$ are $\frac{4k}{d}$ with
multiplicity $\combchoose{d}{k}$, $k=1,...,d$ (see, e.g., \cite{Diac}). The $n$-th smallest non-zero eigenvalue is therefore
of the order $\frac{\log(n)}{d}$ and we obtain that for every Lipschitz $T:X \rightarrow \mathbb{R}^n$,
there is $\theta \in S^{n-1}$ such that
\[
\LtwoNorm{\theta} \leq c \cdot \frac{\sqrt{d}}{\sqrt{n} \cdot \sqrt{\log(n)}}
\]
\end{example}
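The spectrum quoted in this example can be checked numerically for small $d$. A minimal sketch (assuming NumPy; $d=4$ is an arbitrary choice) lists, for each $k$, the eigenvalue $\frac{4k}{d}$ together with its computed multiplicity and $\combchoose{d}{k}$:
\begin{verbatim}
# Check of the hypercube spectrum: eigenvalues 4k/d with multiplicity binom(d, k).
import numpy as np
from itertools import product
from math import comb

d = 4                                         # an arbitrary small dimension
verts = list(product([0, 1], repeat=d))       # 2**d vertices of {0,1}^d
m = len(verts)
A = np.zeros((m, m))
for i, x in enumerate(verts):
    for j, y in enumerate(verts):
        if sum(a != b for a, b in zip(x, y)) == 1:    # Hamming distance 1
            A[i, j] = 1.0

Delta = 2 * (np.eye(m) - A / d)               # every vertex has degree d
eig = np.sort(np.linalg.eigvalsh(Delta))

for k in range(d + 1):
    mult = int(np.isclose(eig, 4 * k / d).sum())
    print(k, 4 * k / d, mult, comb(d, k))     # computed multiplicity vs binom(d, k)
\end{verbatim}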
Our approach to Theorem \ref{main_thm} extends the argument in \cite{GK} and, naturally,
Theorem \ref{curves_GK} can also be seen as a consequence of the principle of Theorem \ref{main_thm}. Indeed, the eigenvalues of the Laplacian on $[0,1]$ satisfy,
up to multiplicative constants, that $\lambda_n \approx n^2$ (as can be seen by differentiating sines and cosines twice). Hence the decay rate of $n^{-\frac{1}{2}}\cdot (n^{2})^{-\frac{1}{2}} = n^{-\frac{3}{2}}$ in Theorem \ref{curves_GK}.
\section{Proof}
Fix a graph $(X,E)$ with the stationary measure $\mu$ on it.
Let $T : X \rightarrow \mathbb{R}^n$ be a Lipschitz map. It will be convenient to introduce
an additional assumption, that
\begin{equation}
\label{mz_cond}
\int T(x) d\mu = 0.
\end{equation}
Under the conditions of Theorem \ref{main_thm} and this assumption, we will
show that there is $\theta \in S^{n-1}$ such that
\begin{equation}
\label{mz_res}
\LtwoNorm{\theta} \leq c \cdot n^{-\frac{1}{2}} \lambda^{-\frac{1}{2}}_{\floor{n/2}}.
\end{equation}
This implies the result for arbitrary Lipschitz $T :X \rightarrow \mathbb{R}^n$. Indeed, assume that
$v:= \int Tx d\mu \neq 0$. Denote by $P$ the orthogonal projection onto the $n-1$
dimensional space orthogonal to $v$. Then the composition $P \circ T$ is a Lipschitz map
into that space and $\int (P\circ T)(x) d\mu = 0 $, so (\ref{mz_res}) can be applied in dimension $n-1$.
The notion of a gradient on a graph is folklore, although not particularly frequent
in the literature. We thus recall the definition.
Define an inner product on $L(X)$ by
\[
[f,g]_X = \int f(x)g(x) d\mu(x).
\]
Denote by $L(E)$ the set of real valued functions on the set of edges, $E$, and let
$\nu$ be the uniform probability measure on $E$. We equip $L(E)$ with the inner product
\[
[f,g]_E = \int f(e) g(e) d\nu.
\]
Fix an arbitrary orientation on $E$, i.e.
for each edge $e=\{x,y\}$ choose in an arbitrary way an enumeration $v_e^1,v_e^2$ of the
vertices of the edge. The gradient operator on $X$ is defined by
$\bigtriangledown : L(X) \rightarrow L(E)$,
\[
(\bigtriangledown f)(e) = f(v_e^1) - f(v_e^2).
\]
One can verify by direct computation that the usual relation $\bigtriangleup = \bigtriangledown^{*} \circ \bigtriangledown$ holds, where $\bigtriangledown^{*}$ is the Hilbert space adjoint of $\bigtriangledown$. Note that the Laplacian does not depend on the orientation on $E$.
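The relation $\bigtriangleup = \bigtriangledown^{*} \circ \bigtriangledown$ can also be confirmed numerically: with respect to the two inner products above, the adjoint of the (signed incidence) gradient matrix is $\mathrm{diag}(\mu)^{-1}\bigtriangledown^{T}\cdot\frac{1}{|E|}$. A minimal sketch (assuming NumPy; the small graph and the chosen orientation are arbitrary):
\begin{verbatim}
# Direct check that the Laplacian factors as grad^* o grad for the chosen inner products.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]                 # an arbitrary small connected graph
m, E = 4, len(edges)
B = np.zeros((E, m))                                      # signed incidence matrix = gradient
for k, (u, v) in enumerate(edges):
    B[k, u], B[k, v] = 1.0, -1.0                          # arbitrary orientation of each edge

A = np.zeros((m, m))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)

# adjoint w.r.t. [.,.]_X (weights mu = deg/2|E|) and [.,.]_E (uniform weights 1/|E|)
mu = deg / (2 * E)
grad_star = np.diag(1 / mu) @ B.T / E

Delta = 2 * (np.eye(m) - A / deg[:, None])                # the Laplacian defined earlier
print(np.allclose(grad_star @ B, Delta))                  # True
\end{verbatim}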
Denote by $L_0(X)$ the subspace of $L(X)$ that is orthogonal to the constant functions.
Since $Ker \bigtriangleup$ is the space of the constant functions, the Laplacian is invertible on $L_0(X)$ and hence the gradient is also an invertible
operator from $L_0(X)$ onto its image. The inverse of the gradient will be of importance in what follows and we denote it by $\Gamma : Im(\bigtriangledown) \rightarrow L_0(X)$.
We also recall the notion of singular values of an operator. If $V$ and $W$ are (say, finite dimensional ) Hilbert spaces, and $A:V \rightarrow W$ is a linear operator, then
$A^{*}A$ is a non-negative self-adjoint operator. Let $\{\lambda_i(A^{*}A)\}_{i=1}^{\dim V}$ be the eigenvalues of $A^*A$, in non-increasing order, with multiplicities. Then the singular values of the operator $A$ are the non-increasing sequence $s_i(A) = \sqrt{\lambda_i(A^{*}A)}$.
The singular values of $A$ have a simple geometric interpretation. If $B_2(V)$ denotes
the unit ball of $V$, then the image $A(B_2(V))$ is an ellipsoid in $W$ and $s_i(A)$ are
precisely the lengths of the principal axes of this ellipsoid.
The singular values satisfy $s_i(A) = s_i(A^{*})$ and singular values of a composition
can be bounded by the following inequality due to Ky Fan. Let $A:U \rightarrow W$ and
$B: W \rightarrow V$ be two operators on Hilbert spaces. Then, for every $i,j \geq 1$,
\begin{equation}
\label{ky_fan}
s_{i+j-1}(BA) \leq s_i(A) s_j(B)
\end{equation}
Finally, the Hilbert-Schmidt norm of an operator is defined by
\[
\|A\|_{HS} = \sqrt{tr A^{*} A } = \left(\sum_{i} s_i^2(A)\right)^{\frac{1}{2}}.
\]
Note that since $s_i$ is a non-increasing sequence,
\begin{equation}
\label{hs_s_i_bound}
s_i(A) \leq \frac{\|A\|_{HS}}{\sqrt{i}}
\end{equation}
for all $i$.
More details on singular values can be found, for instance, in \cite{Bha} or \cite{GK}.
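Both (\ref{hs_s_i_bound}) and the Ky Fan inequality (\ref{ky_fan}) are easy to test numerically. The following sketch (assuming NumPy; the matrix sizes are arbitrary) checks them on random matrices:
\begin{verbatim}
# Numerical check of s_i(A) <= ||A||_HS / sqrt(i) and of the Ky Fan inequality.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((7, 5))       # A : V -> W  (dim V = 5)
B = rng.standard_normal((6, 7))       # B : W -> U, so BA : V -> U

sA = np.linalg.svd(A, compute_uv=False)     # singular values, non-increasing
sB = np.linalg.svd(B, compute_uv=False)
sBA = np.linalg.svd(B @ A, compute_uv=False)
hsA = np.sqrt((sA ** 2).sum())              # Hilbert-Schmidt norm of A

# s_i(A) <= ||A||_HS / sqrt(i), i = 1, 2, ...
print(all(sA[i - 1] <= hsA / np.sqrt(i) + 1e-12 for i in range(1, len(sA) + 1)))

# Ky Fan: s_{i+j-1}(BA) <= s_i(A) * s_j(B) whenever the left-hand index exists
ok = True
for i in range(1, len(sA) + 1):
    for j in range(1, len(sB) + 1):
        if i + j - 1 <= len(sBA):
            ok = ok and sBA[i + j - 2] <= sA[i - 1] * sB[j - 1] + 1e-12
print(ok)
\end{verbatim}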
\begin{proof}[Proof of Theorem \ref{main_thm}]
As mentioned above, we assume that condition (\ref{mz_cond}) holds.
Let $D_T$ denote the gradient of the map $T$, i.e.
\[
D_T(e) = T(v_e^1) - T(v_e^2).
\]
Since $T$ is Lipschitz, $D_T$ is bounded, i.e. $|D_T(e)|\leq 1$ for all $e\in E$.
For every $\theta \in \mathbb{R}^n$ consider the function on $X$, $f_{\theta}(x) = <\theta, Tx>$.
By (\ref{mz_cond}), $f_{\theta} \in L_0(X)$ and clearly $(\bigtriangledown f_{\theta}) (e)= <\theta, D_T(e)>$ and
\[
f_{\theta} = \Gamma (<\theta, D_T>).
\]
Let
\[
B_2(X) = \Big\{f:X \rightarrow \mathbb{R} \spaceo \Big | \spaceo [f,f]_X\leq 1 \Big\}
\]
denote the unit ball in $L(X)$ and write $\LtwoNorm{\theta}$ in a dual form:
\[
\LtwoNorm{\theta} = \sup_{g \in B_2(X)} \int g(x) <\theta, Tx>\, d\mu\,.
\]
Then
\begin{eqnarray*}
\int g(x) <\theta, Tx> d\mu = [g, f_{\theta}]_{X} =
[g, \Gamma \circ \bigtriangledown f_{\theta}]_X = [\Gamma^{*} g, \bigtriangledown f_{\theta}]_E
\end{eqnarray*}
Next, write
\begin{eqnarray*}
[\Gamma^{*} g, \bigtriangledown f_{\theta}]_E =
\int (\Gamma^{*} g)(e) \cdot <\theta,D_T(e)> d\nu(e) =
<\theta, \int (\Gamma^{*} g)(e) \cdot D_T(e) d\nu(e) > .
\end{eqnarray*}
Denote by $\widetilde{D}: L(E) \rightarrow \mathbb{R}^n$ the operator that acts by
\[\widetilde{D} u = \int u(e) \cdot D_T(e)\, d\nu\,.\]
With this notation,
\begin{equation}
\label{ell_intro}
\LtwoNorm{\theta} = \sup_{g\in B_2(X)} <\theta, \widetilde{D} \circ \Gamma^{*} g> =
\sup_{\phi \in \mathcal{E}} <\theta,\phi>
\end{equation}
where $\mathcal{E} = \widetilde{D} \circ \Gamma^{*} B_2(X)$
is the dual (in $\mathbb{R}^n$) ellipsoid of the $\LtwoNorm{\cdot}$ norm.
Our aim is to bound the quantity
\[
\inf_{\theta \in S^{n-1}} \LtwoNorm{\theta}.
\]
By (\ref{ell_intro}), this quantity equals the length of the smallest
principal axis of the ellipsoid $\mathcal{E}$. Since the lengths of the principal axes are
the singular values of $\widetilde{D} \circ \Gamma^{*}$, we bound
the singular values of this operator.
The singular values of $\Gamma^{*}$ are given by $s_i(\Gamma^{*}) = \lambda_i^{-\frac{1}{2}}$, where
$\lambda_i$ are the non-zero eigenvalues of the Laplacian (in non-decreasing order, so that
$s_i(\Gamma^{*})$ do not increase). To bound the singular values of $\widetilde{D}$, we show that
\begin{equation}
\label{D_T_HS}
\|\widetilde{D}\|_{HS}\leq 1.
\end{equation}
Indeed, one readily verifies that
\[
\widetilde{D} \circ \widetilde{D}^{*} (\theta) = \int <\theta,D_T(e)>D_T(e) d\nu.
\]
For fixed $e\in E$, the trace of an operator $\theta \mapsto <\theta,D_T(e)>D_T(e)$ is $|D_T(e)|^2\leq 1$, implying
(\ref{D_T_HS}). Now, by (\ref{hs_s_i_bound}), $s_i(\widetilde{D}) \leq \frac{1}{\sqrt{i}}$, and an application of (\ref{ky_fan}) (with $i=\lceil n/2\rceil$ and $j=\floor{n/2}$, so that $i+j-1\leq n$) completes the proof.
\end{proof}
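For a concrete feeling of the quantity estimated in Theorem \ref{main_thm}, note that $\inf_{\theta\in S^{n-1}}\LtwoNorm{\theta}$ is the square root of the smallest eigenvalue of the weighted covariance matrix $\int T(x)T(x)^{T}\,d\mu(x)$. The sketch below (assuming NumPy; the cycle graph, the random map $T$ and all sizes are illustrative choices, and the absolute constant $c$ is not tracked) computes this quantity and the spectral expression $n^{-\frac{1}{2}}\lambda^{-\frac{1}{2}}_{\floor{n/2}-1}$ side by side:
\begin{verbatim}
# The quantity bounded in the main theorem, computed directly on a cycle graph.
# The graph, the random map T and all sizes are arbitrary; the constant c is not tracked.
import numpy as np

rng = np.random.default_rng(1)
m, n = 60, 6
edges = [(i, (i + 1) % m) for i in range(m)]           # cycle graph C_m
deg = np.full(m, 2.0)
mu = deg / (2 * len(edges))                            # stationary measure (uniform here)

# random T : X -> R^n, rescaled so that |T(u) - T(v)| <= 1 on every edge
T = rng.standard_normal((m, n))
T /= max(np.linalg.norm(T[u] - T[v]) for u, v in edges)

# inf over the unit sphere of the L_2-norm: smallest eigenvalue of the weighted covariance
cov = T.T @ (mu[:, None] * T)
lhs = np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))

# spectral expression n^{-1/2} * lambda_{floor(n/2)-1}^{-1/2} from the theorem
A = np.zeros((m, m))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
lam = np.sort(np.linalg.eigvalsh(2 * (np.eye(m) - A / deg[:, None])))[1:]  # non-zero eigenvalues
rhs = n ** -0.5 * lam[n // 2 - 2] ** -0.5
print(lhs, rhs)                                        # the theorem asserts lhs <= c * rhs
\end{verbatim}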
| {
"timestamp": "2011-06-27T02:01:43",
"yymm": "1106",
"arxiv_id": "1106.4910",
"language": "en",
"url": "https://arxiv.org/abs/1106.4910",
"abstract": "Let $X$ be a metric space and let $\\mu$ be a probability measure on it. Consider a Lipschitz map $T: X \\rightarrow \\Rn$, with Lipschitz constant $\\leq 1$. Then one can ask whether the image $TX$ can have large projections on many directions. For a large class of spaces $X$, we show that there are directions $\\phi \\in \\nsphere$ on which the projection of the image $TX$ is small on the average, with bounds depending on the dimension $n$ and the eigenvalues of the Laplacian on $X$.",
"subjects": "Functional Analysis (math.FA)",
"title": "On Projections of Metric Spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587229064297,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7097978794935105
} |
https://arxiv.org/abs/2005.01466 | On dominating pair degree conditions for hamiltonicity in balanced bipartite digraphs | We prove several new sufficient conditions for hamiltonicity and bipancyclicity in balanced bipartite digraphs, in terms of sums of degrees over dominating or dominated pairs of vertices. | \section{Introduction}
\label{sec:intro}
This article is concerned with sufficient conditions for hamiltonicity and bipancyclicity in balanced bipartite digraphs. More specifically, we study several Meyniel-type criteria, that is, theorems asserting existence of hamiltonian cycles under certain conditions on the sums of degrees of non-adjacent vertices.
There are numerous such criteria, and open problems, in general digraphs (see, e.g., \cite{BG, BGL} and the references therein). Over the last few years, various analogues of these theorems and conjectures have been established for bipartite digraphs \cite{A1, A2, AAY, D, M, W, WW}. These results, generally speaking, do not follow from their non-bipartite analogues and require different arguments and techniques.
We begin with a short review of the relevant results to provide context for our present work. Throughout this paper, $D$ denotes a strongly connected balanced bipartite digraph of order $2a$ (see Section~\ref{sec:not} for details on notation and terminology).
The main feature of Meyniel-type criteria is that a degree condition be only imposed on pairs of non-adjacent vertices. The first such criterion in the bipartite setting was proved in \cite{AAY}.
\begin{theorem}[{\cite[Thm.\,1.2]{AAY}}]
\label{thm:AAY}
Let $D$ be as above, with $a\geq2$, and suppose that
\[
d(u)+d(v)\geq3a
\]
for every pair of distinct vertices $\{u,v\}$ such that $uv\notin A(D)$ and $vu\notin A(D)$. Then, $D$ is hamiltonian.
\end{theorem}
The lower bound of $3a$ is sharp (see examples in \cite{AAY}). The condition from Theorem~\ref{thm:AAY} may be further strengthened, in the spirit of~\cite{BGL}, by requiring that it be satisfied only by dominating and dominated pairs of vertices. This was done in \cite{A1}.
\begin{theorem}[{\cite[Thm.\,1]{A1}}]
\label{thm:A1}
Let $D$ be as above, with $a\geq3$, and suppose that
\[
d(u)+d(v)\geq3a
\]
whenever $\{u,v\}$ is a dominating or dominated pair. Then, $D$ is hamiltonian.
\end{theorem}
At this point, there are two natural questions: First, are the above assumptions enough to imply existence of cycles of all even lengths in $D$, perhaps modulo some exceptional digraphs (Bondy's metaconjecture)? And secondly, could we expect the same conclusion if the degree sum condition was only satisfied by the dominating pairs of vertices? The answer to the first question is positive. More precisely, we have the following result.
\begin{theorem}[{\cite[Thm.\,1.3]{A2}}]
\label{thm:A2}
Let $D$ be as above, with $a\geq3$, and suppose that
\[
d(u)+d(v)\geq3a
\]
whenever $\{u,v\}$ is a dominating or dominated pair. Then, $D$ is either bipancyclic or a directed cycle of length $2a$.
\end{theorem}
The second question seems much harder. However, most recently, Wang and Wu~\cite{WW} proposed an interesting variant of the degree sum condition that allows them to obtain hamiltonicity by only imposing the condition on dominating pairs of vertices.
\begin{theorem}[{\cite[Thm.\,1.10]{WW}}]
\label{thm:WW}
Let $D$ be a strongly connected balanced bipartite digraph of order $2a$, where $a\geq3$, and let $k$ be an integer satisfying $\max\{1,\frac{a}{4}\}<k\leq\frac{a}{2}$.
Suppose that for every dominating pair $\{u,v\}$ of vertices in $D$,
\[
d(u)\geq2a-k\ \mathrm{and}\ d(v)\geq a+k,\quad\mathrm{or}\quad d(u)\geq a+k\ \mathrm{and}\ d(v)\geq2a-k.
\]
Then, $D$ is hamiltonian.
\end{theorem}
The authors of~\cite{WW} posed also several interesting problems related to the above theorems. Among them:
\begin{itemize}
\item[(a)] Are the assumptions of Theorem~\ref{thm:WW} enough to imply bipancyclicity of $D$?
\item[(b)] Is there an integer $k\geq0$ such that $D$ is hamiltonian if the inequality $d(u)+d(v)\geq3a+k$ is only imposed on the dominating pairs $\{u,v\}$?
\end{itemize}
The main goal of the present article is to prove the following positive answers to these two questions.
\begin{theorem}
\label{thm:bipart}
If $D$ satisfies the hypotheses of Theorem~\ref{thm:WW}, then $D$ is either bipancyclic or a directed cycle of length $2a$.
\end{theorem}
\begin{theorem}
\label{thm:hamil}
Let $D$ be a strongly connected balanced bipartite digraph of order $2a$, where $a\geq2$. Suppose that for every dominating pair $\{u,v\}$ of vertices in $D$,
\[
d(u)+d(v)\geq3a+1\,.
\]
Then, $D$ is hamiltonian.
\end{theorem}
Theorems~\ref{thm:bipart} and~\ref{thm:hamil} are proved in Sections~\ref{sec:bipart-proof} and~\ref{sec:hamil-proof}, respectively. In the last section, we discuss some corollaries and open problems.
\medskip
\section{Notation and terminology}
\label{sec:not}
We consider digraphs in the sense of \cite{BG}: A \emph{digraph} $D$ is a pair $(V(D),A(D))$, where $V(D)$ is a finite set (of \emph{vertices}) and $A(D)$ is a set of ordered pairs of distinct elements of $V(D)$, called \emph{arcs} (i.e., $D$ has no loops or multiple arcs).
The number of vertices $|V(D)|$ is the \emph{order} of $D$ (also denoted by $|D|$). For vertices $u$ and $v$ from $V(D)$, we write $uv\in A(D)$ to say that $A(D)$ contains the ordered pair $(u,v)$. If $uv\in A(D)$, then $u$ is called an \emph{in-neighbour} of $v$, and $v$ is an \emph{out-neighbour} of $u$. A pair of vertices $u,v\in V(D)$ is called \emph{dominating} (resp. \emph{dominated}) when there exists a vertex $w$ such that $uw\in A(D)$ and $vw\in A(D)$ (resp. $wu\in A(D)$ and $wv\in A(D)$).
For vertex sets $S,T\subset V(D)$, denote by $A[S,T]$ the set of all arcs of $A(D)$ from a vertex in $S$ to a vertex in $T$. We define
${\arcs}(S,T)\coloneqq|A[S,T]|+|A[T,S]|$.
For a vertex set $S \subset V(D)$, we denote by $N^+(S)$ the set of vertices in $V(D)$ \emph{dominated} by the vertices of $S$; i.e.,
\[
N^+(S)=\{u\in V(D): vu\in A(D)\text{\ for\ some\ }v\in S\}\,.
\]
Similarly, $N^-(S)$ denotes the set of vertices of $V(D)$ \emph{dominating} vertices of $S$; i.e,
\[
N^-(S)=\{u\in V(D): uv\in A(D)\text{\ for\ some\ }v\in S\}\,.
\]
If $S=\{v\}$ is a single vertex, the cardinality of $N^+(\{v\})$ (resp. $N^-(\{v\})$), denoted by $d^+(v)$ (resp. $d^-(v)$) is called the
\emph{outdegree} (resp. \emph{indegree}) of $v$ in $D$. The \emph{degree} of $v$ is $d(v)\coloneqq d^+(v)+d^-(v)$.
More generally, for a vertex $v\in V(D)$ and a subdigraph $E$ of $D$, we will denote the cardinality of $N^+(\{v\})\cap V(E)$ by $d^+_E(v)$. Similarly, the cardinality of $N^-(\{v\})\cap V(E)$ will be denoted by $d^-_E(v)$. We set $d_E(v)\coloneqq d^+_E(v)+d^-_E(v)$.
We will denote by $E^c$ the subdigraph of $D$ spanned by the vertices $V(D)\setminus V(E)$. Consequently, $d^+_{E^c}(v)=|N^+(\{v\})\cap V(D)\setminus V(E)|$ and $d^-_{E^c}(v)=|N^-(\{v\})\cap V(D)\setminus V(E)|$.
A directed cycle (resp. directed path) on vertices $v_1,\dots,v_m$ in $D$ is denoted by $[v_1,\ldots,v_m]$ (resp. $(v_1,\dots,v_m)$). We will refer to them as simply \emph{cycles} and \emph{paths} (skipping the term ``directed''), since their non-directed counterparts are not considered in this article at all.
A cycle passing through all the vertices of $D$ is called \emph{hamiltonian}, or a \emph{Hamilton cycle}. A digraph containing a hamiltonian cycle is called a \emph{hamiltonian digraph}. A digraph containing cycles of all lengths is called \emph{pancyclic}.
A digraph $D$ is \emph{strongly connected} when, for every pair of vertices $u,v\in V(D)$, $D$ contains a path originating in $u$ and terminating in $v$ and a path originating in $v$ and terminating in $u$. A digraph $D$ in which, for every pair of vertices $u,v\in V(D)$ precisely one of the arcs $uv, vu$ belongs to $A(D)$ is called a \emph{tournament}.
A digraph $D$ is \emph{bipartite} when $V(D)$ is a disjoint union of independent sets $V_1$ and $V_2$ (the \emph{partite sets}).
It is called \emph{balanced} if $|V_1|=|V_2|$. One says that a bipartite digraph $D$ is \emph{complete} when $d(x)=2|V_2|$ for all $x\in V_1$. A complete bipartite digraph with partite sets of cardinalities $a$ and $b$ will be denoted by $K^*_{a,b}$\,. A balanced bipartite digraph containing cycles of all even lengths is called \emph{bipancyclic}.
A \emph{matching} from $V_1$ to $V_2$ is an independent set of arcs with origin in $V_1$ and terminus in $V_2$ ($u_1u_2$ and $v_1v_2$ are \emph{independent} arcs when $u_1\neq v_1$ and $u_2\neq v_2$). If $D$ is balanced, one says that such a matching is \emph{perfect} if it consists of precisely $|V_1|$ arcs.
Finally, to streamline the proofs of Theorems~\ref{thm:bipart} and~\ref{thm:hamil}, we will use the following shorthand terminology (borrowed from~\cite{WW}).
\begin{definition}
\label{def:A-Bk-C}
Let $D$ be a balanced bipartite digraph of order $2a$.
For an integer $k\geq0$, we say that $D$ satisfies \emph{condition $\Bk$}, when every dominating pair $\{u,v\}$ satisfies
\[
d(u)\geq2a-k\ \mathrm{and}\ d(v)\geq a+k,\quad\mathrm{or}\quad d(u)\geq a+k\ \mathrm{and}\ d(v)\geq2a-k.
\]
Also, for $k\geq0$, we say that $D$ satisfies \emph{condition $(\mathcal{D}_k)$}, when every dominating pair $\{u,v\}$ satisfies
\[
d(u)+d(v)\geq 3a+k\,.
\]
\end{definition}
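For experimenting with these conditions it is convenient to have them in executable form. The sketch below (plain Python; the encoding of a digraph as a list of arcs and the example $K^*_{3,3}$ are illustrative choices, not taken from the results of this paper) lists dominating pairs and tests conditions $\Bk$ and $(\mathcal{D}_k)$:
\begin{verbatim}
# Illustrative helpers for conditions (B_k) and (D_k); the digraph encoding is arbitrary.
from itertools import combinations

def degrees(vertices, arcs):
    d = {v: 0 for v in vertices}
    for u, v in arcs:
        d[u] += 1          # out-degree contribution
        d[v] += 1          # in-degree contribution
    return d

def dominating_pairs(vertices, arcs):
    outs = {v: {w for (x, w) in arcs if x == v} for v in vertices}
    return [(u, v) for u, v in combinations(vertices, 2) if outs[u] & outs[v]]

def satisfies_Bk(vertices, arcs, a, k):
    d = degrees(vertices, arcs)
    return all((d[u] >= 2*a - k and d[v] >= a + k) or (d[u] >= a + k and d[v] >= 2*a - k)
               for u, v in dominating_pairs(vertices, arcs))

def satisfies_Dk(vertices, arcs, a, k):
    d = degrees(vertices, arcs)
    return all(d[u] + d[v] >= 3*a + k for u, v in dominating_pairs(vertices, arcs))

# complete bipartite digraph K*_{3,3}: every vertex has degree 2a = 6, so both hold
X, Y = ['x1', 'x2', 'x3'], ['y1', 'y2', 'y3']
arcs = [(x, y) for x in X for y in Y] + [(y, x) for x in X for y in Y]
print(satisfies_Bk(X + Y, arcs, a=3, k=1), satisfies_Dk(X + Y, arcs, a=3, k=1))
\end{verbatim}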
\medskip
\section{Proof of Theorem~\ref{thm:hamil}}
\label{sec:hamil-proof}
Throughout this section we assume that $D$ is a strongly connected balanced bipartite digraph with partite sets of cardinalities $a\geq2$, which satisfies condition $\C$.
The proof of Theorem~\ref{thm:hamil} is based on the following four simple lemmas.
\begin{lemma}
\label{lem:1}
Suppose that $D$ is non-hamiltonian. Then, for every vertex $u\in V(D)$ there exists a vertex $v\in V(D)\setminus\{u\}$ such that $\{u,v\}$ is a dominating pair.
\end{lemma}
\begin{proof}
For a proof by contradiction, suppose that $D$ contains a vertex $u_0$ which has no common out-neighbour with any other vertex in $D$. We claim that then no vertex of $D$ has a common out-neighbour with any other vertex. Indeed, let $v\in V(D)\setminus\{u_0\}$ be arbitrary. By strong connectedness of $D$, there is a path $P=(u_0,u_1,\ldots,u_s)$, with $u_s=v$. By the assumption on $u_0$, we have $d^-(u_1)=1$ and hence $d(u_1)\leq a+1$. If $u_1$ then had a common out-neighbour with some vertex $w\in V(D)$, we would have $d(w)\geq2a$, by condition $\C$. In particular, $w$ would be dominated by all the vertices from the opposite partite set, and so $u_0$ would have $w$ as a common out-neighbour with all vertices from its partite set; a contradiction. It thus follows that $u_1$ has no common out-neighbour with any other vertex in $D$. Repeating the above argument for all the subsequent vertices on $P$, we obtain in the end that $v$ has no common out-neighbour with any other vertex in $D$. This proves our claim, since $v$ was arbitrary.
The strong connectedness now implies that $D$ is, in fact, a cycle of length $2a$. This contradicts the assumptions of the lemma.
\end{proof}
\begin{lemma}
\label{lem:2}
If $D$ is non-hamiltonian, then $d(u)\geq a+1$ for every vertex $u$ in $D$.
\end{lemma}
\begin{proof}
This follows immediately from Lemma~\ref{lem:1}, condition $\C$, and the fact that the degree of every vertex in $D$ is bounded above by $2a$.
\end{proof}
\begin{lemma}
\label{lem:3}
$D$ contains a cycle factor.
\end{lemma}
\begin{proof}
Suppose that $D$ is non-hamiltonian; otherwise a Hamilton cycle is itself a cycle factor and there is nothing to prove.
Let $X$ and $Y$ denote the two partite sets of $D$. Observe that $D$ contains a cycle factor if and only if there exist both a perfect matching from $X$ to $Y$ and a perfect matching from $Y$ to $X$. Therefore, by the K{\"o}nig-Hall theorem (see, e.g., \cite{B}), it suffices to show that $|N^+(S)|\geq|S|$ for every $S\subset X$ and $|N^+(T)|\geq|T|$ for every $T\subset Y$.
For a proof by contradiction, suppose that a non-empty set $S\subset X$ is such that $|N^+(S)|<|S|$. Then $|S|\geq2$, for else the sole vertex $x_0$ of $S$ would satisfy $d^+(x_0)=0$, which is not possible in a strongly connected digraph. Since $|N^+(S)|<|S|$, there exist vertices $x_1,x_2\in S$ with a common out-neighbour. By condition $\C$, we get
\[
3a+1\leq d(x_1)+d(x_2)=(d^+(x_1)+d^+(x_2))+(d^-(x_1)+d^-(x_2))\leq 2(|S|-1)+2a\,,
\]
and hence $2|S|\geq a+3$.
Now, for every $y\in Y\setminus N^+(S)$, we have $d(y)=d^+(y)+d^-(y)\leq a+(a-|S|)$. It follows that $|S|\leq a-1$, for else we would have $d(y)\leq a$, contrary to Lemma~\ref{lem:2}. Consequently, $|Y\setminus N^+(S)|\geq2$.
Moreover, no two vertices of $Y\setminus N^+(S)$ form a dominating pair. Indeed, for if $y_1,y_2\in Y\setminus N^+(S)$ were such a pair, we would have
\[
3a+1\leq d(y_1)+d(y_2)\leq 2(2a-|S|)\leq 4a-(a+3)\,,
\]
a contradiction. Thus, in fact, for every $y\in Y\setminus N^+(S)$, we have
\[
d^+(y)\leq a-(|Y\setminus N^+(S)|-1)=a-(a-|N^+(S)|-1)\leq|S|\,.
\]
Consequently, for every such $y$,
\[
d(y)=d^+(y)+d^-(y)\leq|S|+(a-|S|)=a\,,
\]
which again contradicts Lemma~\ref{lem:2}.
This completes the proof of existence of a perfect matching from $X$ to $Y$. The proof for a matching in the opposite direction is analogous.
\end{proof}
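The criterion used at the start of this proof is also convenient computationally: a balanced bipartite digraph has a cycle factor if and only if its arcs contain a perfect matching from $X$ to $Y$ and a perfect matching from $Y$ to $X$. A minimal sketch (plain Python, using Kuhn's augmenting-path algorithm; the example digraph is an illustrative choice):
\begin{verbatim}
# Illustrative check for a cycle factor via two perfect matchings (Kuhn's algorithm).
def max_matching(left, right, adj):
    """adj[u] = vertices of `right` reachable by an arc from u in `left`."""
    match = {}                      # right vertex -> matched left vertex
    def augment(u, seen):
        for v in adj.get(u, ()):
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    return sum(augment(u, set()) for u in left)

def has_cycle_factor(X, Y, arcs):
    fwd = {x: [y for (u, y) in arcs if u == x] for x in X}   # arcs X -> Y
    bwd = {y: [x for (u, x) in arcs if u == y] for y in Y}   # arcs Y -> X
    return max_matching(X, Y, fwd) == len(X) and max_matching(Y, X, bwd) == len(Y)

# a directed 6-cycle on X = {0,1,2}, Y = {3,4,5} is itself a cycle factor
arcs = [(0, 3), (3, 1), (1, 4), (4, 2), (2, 5), (5, 0)]
print(has_cycle_factor([0, 1, 2], [3, 4, 5], arcs))          # True
\end{verbatim}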
We shall also need the following result from \cite{AAY}. Note that, by Lemma~\ref{lem:3}, $D$ contains a cycle factor.
\begin{lemma}[{\cite{AAY}}]
\label{lem:4}
Suppose that $D$ is non-hamiltonian, and let $\{C_1,\dots,C_l\}$ be a cycle factor in $D$ with a minimal number of elements.
Then,
\[
\arcs(V(C_1), V(D)\setminus V(C_1))\ \leq\ \frac{|C_1|(2a-|C_1|)}{2}\,.
\]
\end{lemma}
\medskip
\subsection*{Proof of Theorem~\ref{thm:hamil}}
\label{subsec:thm-hamil}
Let $D$ be a balanced bipartite digraph on $2a$ vertices, and let $X$ and $Y$ denote its partite sets. By Lemma~\ref{lem:3}, $D$ contains a cycle factor $\{C_1,\dots,C_l\}$. Assume $l$ is minimum possible, and for a proof by contradiction suppose that $l\geq2$. We may assume that $|C_1|\leq\dots\leq|C_l|$. Set $t\coloneqq|C_1|/2$. Then,
$1\leq t\leq a/2$, since $l\geq2$ and $|C_1|\leq|C_2|$.
Moreover, by Lemma~\ref{lem:4}, we have
\begin{equation}
\label{eq:C1}
\arcs(V(C_1), V(D)\setminus V(C_1))\ \leq\ 2t(a-t)\,.
\end{equation}
Without loss of generality, we may assume that
\begin{equation}
\label{eq:C1X}
\arcs(V(C_1)\cap X, V(D)\setminus V(C_1))\ \leq\ t(a-t)\,,
\end{equation}
as otherwise
\begin{equation}
\label{eq:C1Y}
\arcs(V(C_1)\cap Y, V(D)\setminus V(C_1))\ \leq\ t(a-t)\,.
\end{equation}
We will first show that $t\geq2$. Suppose otherwise. Then $C_1$ is a 2-cycle consisting of, say, arcs $x_1y_1$ and $y_1x_1$. By \eqref{eq:C1},
\begin{multline}
\notag
d(x_1)+d(y_1)=(d_{C_1}(x_1)+d_{C_1}(y_1))+(d_{C^c_1}(x_1)+d_{C^c_1}(y_1))\\
\leq 4+2(a-1)=2a+2\,,
\end{multline}
which in light of Lemma~\ref{lem:2} implies that $d(x_1)=d(y_1)=a+1$, and $d_{C_1}(x_1)=d_{C_1}(y_1)=2$.
By Lemma~\ref{lem:1}, there exists a vertex $x'\in X\setminus\{x_1\}$ such that $\{x_1,x'\}$ is a dominating pair. Condition $\C$ then implies that $d(x')=2a$. In particular, $x'y_1\in A(D)$. We have $x'\in V(C_j)$ for some $1<j\leq l$. Let $y'$ denote the successor of $x'$ on the cycle $C_j$. Note that $\{y_1,y'\}$ form a dominating pair (as they both dominate $x'$), hence $d(y')=2a$, by condition $\C$ again. In particular, $x_1y'\in A(D)$, and so the cycle $C_1$ can be merged into $C_j$ by replacing the arc $x'y'$ on $C_j$ with the path $(x',y_1,x_1,y')$. This contradicts the minimality of $l$. Thus, indeed, $t\geq2$.
Let now $x_1,\dots,x_t\in X$ and $y_1,\dots,y_t\in Y$ be the vetices of $C_1$, labeled so that
\begin{equation}
\label{eq:C^c_1}
d_{C^c_1}(x_1)\leq\dots\leq d_{C^c_1}(x_t)\qquad\mathrm{and}\qquad d_{C^c_1}(y_1)\leq\dots\leq d_{C^c_1}(y_t)\,.
\end{equation}
Then, by \eqref{eq:C1X}, $d_{C^c_1}(x_1)\leq a-t$. The remainder of the proof splits into two cases depending on whether or not the latter inequality is strict.
\subsubsection*{Case 1.} Suppose first that $d_{C^c_1}(x_1)=a-t$. In this case we have $d_{C^c_1}(x_i)=a-t$ for all $1\leq i\leq t$, by \eqref{eq:C1X}. It follows that no two vertices $x_i,x_j$ in $X\cap V(C_1)$ form a dominating pair. Indeed, otherwise
\begin{multline}
\notag
3a+1\leq d(x_i)+d(x_j)=(d_{C_1}(x_i)+d_{C_1}(x_j))+(d_{C^c_1}(x_i)+d_{C^c_1}(x_j))\\ \leq 4t+2(a-t)=2a+2t\leq 3a\,,
\end{multline}
a contradiction.
Consequently,
\begin{equation}
\label{eq:y12}
d^-_{C_1}(y_j)=1 \quad\mathrm{for\ each\ } 1\leq j\leq t\,.
\end{equation}
In particular, $d^+_{C_1}(x_1)=1$. Now, as $d(x_1)\geq a+1$ (Lemma~\ref{lem:2}) and $d_{C^c_1}(x_1)=a-t$, it follows that $d^-_{C_1}(x_1)\geq t$ and so both $y_1$ and $y_2$ dominate $x_1$. However, by equality in \eqref{eq:C1X}, inequality \eqref{eq:C1} implies that \eqref{eq:C1Y} holds, hence (by \eqref{eq:C^c_1}) $d_{C^c_1}(y_1)+d_{C^c_1}(y_2)\leq2(a-t)$. This, together with \eqref{eq:y12} and condition $\C$, yields
\begin{multline}
\notag
3a+1\leq d(y_1)+d(y_2)=(d_{C_1}(y_1)+d_{C_1}(y_2))+(d_{C^c_1}(y_1)+d_{C^c_1}(y_2))\\ \leq 2(t+1)+2(a-t)=2a+2\,,
\end{multline}
which is impossible, as $a\geq2$.
\subsubsection*{Case 2.} Suppose then that $d_{C^c_1}(x_1)=a-t-\mu_1$ for some $\mu_1>0$. We claim that then $x_1$ forms a dominating pair with at least $\mu_1$ distinct vertices from $C_1$. Indeed, the inequality $d(x_1)\geq a+1$ implies that
\begin{equation}
\label{eq:x1}
d^+_{C_1}(x_1)\geq (a+1)-(a-t-\mu_1)-d^-_{C_1}(x_1)\geq 1+t+\mu_1-t=\mu_1+1\,,
\end{equation}
and so $x_1$ dominates at least $\mu_1$ vertices on $C_1$ apart from its own successor on $C_1$.
Note that, for any $1\leq i<j\leq t$ such that $x_i,x_j$ satisfy $d_{C^c_1}(x_i)+d_{C^c_1}(x_j)\leq2(a-t)$, the pair $\{x_i,x_j\}$ is not dominating. Indeed, for such $x_i$, $x_j$ we have
\begin{multline}
\notag
d(x_i)+d(x_j)=(d_{C_1}(x_i)+d_{C_1}(x_j))+(d_{C^c_1}(x_i)+d_{C^c_1}(x_j))\\ \leq 4t+2(a-t)=2a+2t\leq 3a\,.
\end{multline}
In particular, $\{x_1,x_2\}$ is not a dominating pair, by \eqref{eq:C1X} and \eqref{eq:C^c_1}.
Moreover, for all $x_i,x_j$ as above and for any vertices $x',x''$ such that $\{x_i,x'\}$ and $\{x_j,x''\}$ are dominating pairs (which exist, by Lemma~\ref{lem:1}, although not necessarily distinct), we have by condition $\C$
\begin{multline}
\notag
d(x')+d(x'')\\
\geq 6a+2-[(d^+_{C_1}(x_i)+d^+_{C_1}(x_j))+(d^-_{C_1}(x_i)+d^-_{C_1}(x_j))+(d_{C^c_1}(x_i)+d_{C^c_1}(x_j))]\\
\geq 6a+2-[t+2t+2(a-t)]=4a-t+2\,,
\end{multline}
hence $d(x')\geq (4a-t+2)-2a=2a-t+2$, and so
\begin{equation}
\label{eq:x'}
d_{C^c_1}(x')\geq (2a-t+2)-2t\geq a-t+2\,.
\end{equation}
Now, let $s\geq1$ and $\mu_1\geq\dots\geq\mu_s>0$ be such that $d_{C^c_1}(x_i)=a-t-\mu_i$ for all $1\leq i\leq s$, and $d_{C^c_1}(x_k)\geq a-t$ for $s<k\leq t$.
As in \eqref{eq:x1}, for each $1\leq i\leq s$, $x_i$ dominates at least $\mu_i$ vertices, say, $y^{i1},\dots,y^{i\mu_i}$ on $C_1$ apart from its own successor on $C_1$. Denote by $I_i$ the subset of $\{1,\dots,t\}$ of indices of the \emph{predecessors} on $C_1$ of those $y^{i1},\dots,y^{i\mu_i}$. Since no two $x_i,x_j$ form a dominating pair, we have
\[
(\{i\}\cup I_i)\cap(\{j\}\cup I_j)=\varnothing\quad\mathrm{for\ all\ } 1\leq i<j\leq s\,.
\]
Let $I:=\{1,\dots,t\}\setminus\bigcup_{i=1}^s(\{i\}\cup I_i)$. Then, by \eqref{eq:x'},
\begin{multline}
\notag
\sum_{i=1}^t d_{C^c_1}(x_i)=\sum_{i=1}^s\left(d_{C^c_1}(x_i)+\sum_{j\in I_i} d_{C^c_1}(x_j)\right)+\sum_{k\in I} d_{C^c_1}(x_k)\\
\geq \sum_{i=1}^s\left((a-t-\mu_i)+\mu_i\!\cdot\!(a-t+2)\right)+(t-\sum_{i=1}^s(1+\mu_i))\!\cdot\!(a-t)\\
= \sum_{i=1}^s\mu_i+\sum_{i=1}^t(a-t)> t(a-t)\,,
\end{multline}
which contradicts inequality \eqref{eq:C1X}. This completes the proof of the theorem.
\qed
\medskip
\section{Proof of Theorem~\ref{thm:bipart}}
\label{sec:bipart-proof}
The proof of Theorem~\ref{thm:bipart} is based on Theorems~\ref{thm:A2} and~\ref{thm:WW}, and the following result of Thomassen.
\begin{theorem}[{\cite[Thm.\,3.5]{T}}]
\label{thm:thomassen}
Let $G$ be a strongly connected digraph of order $n$, $n\geq3$, such that $d(u)+d(v)\geq 2n$ whenever $u$ and $v$ are non-adjacent. Then, $G$ is either pancyclic, or a tournament, or $n$ is even and $G$ is isomorphic to $K^*_{\frac{n}{2},\frac{n}{2}}$.
\end{theorem}
Let $a,k$ be integers such that $a\geq3$ and $\max\{1,\frac{a}{4}\}<k\leq\frac{a}{2}$. Note that then $a+k\leq2a-k$.
Throughout this section, we assume that $D$ is a strongly connected balanced bipartite digraph of order $2a$, which satisfies condition $\Bk$. By Theorem~\ref{thm:WW}, $D$ contains a Hamilton cycle $C$. Assume that $D$ is not equal to $C$. We shall show that then $D$ contains cycles of all even lengths.
\begin{lemma}
\label{lem:5}
For every vertex $u\in V(D)$ there exists a vertex $v\in V(D)\setminus\{u\}$ such that $\{u,v\}$ is a dominating pair.
In particular, $d(u)\geq a+k$.
\end{lemma}
\begin{proof}
For a proof by contradiction, suppose that $u_0\in V(D)$ has no common out-neighbour with any other vertex in $D$. Let $u_0^+$ denote the successor of $u_0$ on $C$. Then, $d^-(u_0^+)=1$ and so $d(u_0^+)\leq a+1$. Since $a+1<\min\{a+k,2a-k\}$, it follows that $u_0^+$ has no common out-neighbour with any other vertex, by condition $\Bk$. Repeating this argument for all the subsequent vertices along $C$, one gets that every vertex on $C$ is dominated in $D$ only by its predecessor on $C$. Thus, $D=C$, contrary to our assumptions.
The second claim follows immediately, by condition $\Bk$ and the fact that $a+k\leq 2a-k$.
\end{proof}
\begin{remark}
\label{rem:2-cycle}
Note that every vertex $u\in V(D)$ lies on a 2-cycle. Indeed, by Lemma~\ref{lem:5}, $d^+(u)+d^-(u)>a$ and hence $N^+(u)\cap N^-(u)\neq\varnothing$.
\end{remark}
\begin{lemma}
\label{lem:6}
Suppose $D$ is not bipancyclic. Then:
\begin{itemize}
\item[(a)] For every $u\in V(D)$,
\[
k+1\leq d^-(u)\leq a-1\quad\ \mathrm{and}\ \quad k+1\leq d^+(u)\leq a-1\,.
\]
\item[(b)] If $\{u,v\}$ is a non-dominating pair, with $u,v$ in the same partite set, then
\[
d(u)<2a-k\quad\mathrm{and}\quad d(v)<2a-k\,.
\]
\item[(c)] If $d(u)\geq2a-k$, then for any $v\in V(D)\setminus\{u\}$,
\[
d^+(u)+d^-(v)\geq a+2\quad\mathrm{and}\quad d^+(v)+d^-(u)\geq a+2\,.
\]
\end{itemize}
\end{lemma}
\begin{proof}
Part (a) follows from Lemma~\ref{lem:5} and the fact that $D$ contains a Hamilton cycle.
For the proof of (b), suppose that $d(u)\geq2a-k$. Then, Lemma~\ref{lem:5} implies $d(u)+d(v)\geq(2a-k)+(a+k)=3a$. Hence, by part (a),
\[
d^+(u)+d^+(v)=(d(u)+d(v))-(d^-(u)+d^-(v))\geq3a-2(a-1)=a+2\,,
\]
and so $N^+(u)\cap N^+(v)\neq\varnothing$; a contradiction.
Finally, if $d(u)\geq2a-k$ then, by Lemma~\ref{lem:5}, $d(u)+d(v)\geq3a$ and hence
\[
d^+(u)+d^-(v)=(d(u)+d(v))-(d^-(u)+d^+(v))\geq3a-2(a-1)=a+2\,,
\]
by part (a), again. The other inequality of (c) is proved analogously.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:bipart}}
\label{subsec:thm-bipart}
By Theorem~\ref{thm:A2}, we may assume that $D$ contains a dominated pair $\{u,v\}$ with $d(u)+d(v)<3a$, for else there is nothing to show. Then, by Lemma~\ref{lem:5}, we have $2(a+k)<3a$, hence $k<\frac{a}{2}$ and $a+k<2a-k$.
Let $X=\{x_1,\dots,x_a\}$ and $Y=\{y_1,\dots,y_a\}$ be the two partite sets of $D$, and suppose without loss of generality that $X$ contains a non-dominating pair $\{x',x''\}$, as above. Let $p\geq2$ denote the maximal integer such that $X$ contains vertices $\{x^{(1)},\dots,x^{(p)}\}$ no two of which form a dominating pair. Then, by Lemma~\ref{lem:6}(b), condition $\Bk$ implies that
\begin{equation}
\label{eq:non-p}
d(x)\geq2a-k\,,\quad\mathrm{for\ every\ }x\in X\setminus\{x^{(1)},\dots,x^{(p)}\}\,.
\end{equation}
Further, for any $1\leq i<j\leq p$, we have $N^+(x^{(i)})\cap N^+(x^{(j)})=\varnothing$,
and hence
\begin{equation}
\label{eq:a-p-k}
a\geq p(k+1)\,,
\end{equation}
by Lemma~\ref{lem:6}(a). Since $k>\frac{a}{4}$, by assumption, it follows that $p\leq3$.
Moreover, when $p=3$, then $a\geq p(k+1)$ and $k>\frac{a}{4}$ imply $a>12$, and hence $k\geq4$. To sum up, we have
\begin{equation}
\label{eq:est}
p=2,\quad\mathrm{or\ else}\quad p=3\ \,\mathrm{and}\ \,k\geq4\,.
\end{equation}
For the remainder of this proof we shall assume that $k\geq3$. The case $k=2$ requires a different argument and it will be settled separately in Section~\ref{subsec:special-cases}.
We will reduce the proof to a straightforward application of Thomassen's Theorem~\ref{thm:thomassen}, by proving the following claim.
\subsubsection*{Claim 1.} Either $D$ is bipancyclic, or it contains a Hamilton cycle $C=[y_1,x_1,\dots,y_a,x_a]$ such that
\[
d^+(x_i)+d^-(y_i)\geq a+2\,,\qquad\mathrm{for\ every\ }1\leq i\leq a\,.
\]
For the proof of the claim, suppose first that $k=3$. Then, $p=2$, by~\eqref{eq:est}. Let $x',x''$ be the only two vertices in $X$ with degrees strictly less than $2a-k$.
By Lemma~\ref{lem:6}(c), it suffices to show that $D$ contains a Hamilton cycle $C$ such that $d^+(x')+d^-(x'^-)\geq a+2$ and $d^+(x'')+d^-(x''^-)\geq a+2$, where $x'^-$ (resp. $x''^-$) denotes the predecessor of $x'$ (resp. $x''$) on $C$.
Let then $C=[y_1,x_1,\dots,y_a,x_a]$ be a fixed Hamilton cycle in $D$, and suppose that the predecessor $x'^-$ of $x'$ has $d(x'^-)<2a-k$. Assume without loss of generality that $x'=x_1$ (and so $x'^-=y_1$).
To simplify notation, set
\[
\alpha\coloneqq d^+(x_1)\quad\mathrm{and}\quad \beta\coloneqq d^-(y_1)\,.
\]
We may assume that $\beta\leq a-3$, for if $\beta\geq a-2$, then Lemma~\ref{lem:6}(a) implies
\[
\alpha+\beta\geq(k+1)+(a-2)= a+2\,.
\]
Next, let
\[
l\coloneqq\min\{n\geq2:y_1x_n\in A(D), x_n\neq x''\}\,.
\]
By Lemma~\ref{lem:5}, we have $d^+(y_1)\geq(a+k)-\beta=a-(\beta-k)$, hence
\begin{equation}
\label{eq:l1}
l\leq\beta-k+3\,,
\end{equation}
where the inequality is strict unless $x''\in\{x_2,\dots,x_{\beta-k+2}\}$ and $y_1x''\in A(D)$.
Now, the pair $\{y_1,y_l\}$ is dominating (both dominate $x_l$), thus $d(y_l)\geq2a-k$ and
\begin{multline}
\label{eq:y_l-1}
|N^+(y_l)\setminus\{x_1,\dots,x_l\}|\geq d(y_l)-d^-(y_l)-l\\
\geq(2a-k)-(a-1)-l=a-k-l+1\,.
\end{multline}
On the other hand,
\begin{equation}
\label{eq:x_1-1}
|N^-(x_1)\setminus\{y_1,\dots,y_l\}|\geq d(x_1)-d^+(x_1)-l\geq(a+k)-\alpha-l\,.
\end{equation}
By~\eqref{eq:l1}, the right sides of inequalities~\eqref{eq:y_l-1} and~\eqref{eq:x_1-1} are positive, else $\alpha+\beta\geq a+3$.
It thus follows from~\eqref{eq:y_l-1} and~\eqref{eq:x_1-1} that there exists $l+1\leq m\leq a$ such that
\begin{equation}
\label{eq:m-1}
y_lx_m\in A(D)\ \mathrm{and}\ \,y_mx_1\in A(D)\quad(\mathrm{hence\ also\ } d(y_m)\geq2a-k)\,,
\end{equation}
unless
\begin{equation}
\label{eq:a-l}
(a-k-l+1)+(a+k-\alpha-l)\leq a-l\,.
\end{equation}
By~\eqref{eq:l1}, the latter inequality implies $\alpha+\beta\geq a+1$. Hence, either $\alpha+\beta\geq a+2$, or $\alpha+\beta=a+1$ and we have equalities in~\eqref{eq:a-l} and~\eqref{eq:l1}, or else there exists $l+1\leq m\leq a$ such that~\eqref{eq:m-1} holds. In the latter case, $D$ contains a Hamilton cycle
\[
C'=[y_1,x_l,\ldots,y_m,x_1,\ldots,y_l,x_m,\ldots,x_a]\,,
\]
where the dotted parts indicate the appropriate pieces of $C$. On this new cycle, the predecessor of $x'$ is of degree at least $2a-k$, and hence, by Lemma~\ref{lem:6}(c), we have decreased by one the number of pairs of vertices not satisfying the condition of Claim~1. It may still be the case that the predecessor of $x''$ on $C'$ has degree less than $2a-k$. If so, we repeat the above construction to replace $C'$ by another Hamilton cycle $C''$, on which the condition from Claim~1 is already satisfied by all pairs.
The equalities in~\eqref{eq:a-l} and~\eqref{eq:l1} can actually only occur when $D$ is bipancyclic, as is shown in Lemma~\ref{lem:nasty} below. We have thus proved Claim~1 in case $k=3$.
\medskip
Suppose now that $p=3$, and hence $k\geq4$. Let $x',x'',x'''$ be the only three vertices in $X$ with degrees strictly less than $2a-k$.
By Lemma~\ref{lem:6}(c) and the first part of the proof, it suffices to show that $D$ contains a Hamilton cycle $C$ such that $d^+(x')+d^-(x'^-)\geq a+2$, where $x'^-$ denotes the predecessor of $x'$ on $C$.
Let then $C=[y_1,x_1,\dots,y_a,x_a]$ be a fixed Hamilton cycle in $D$, and suppose that the predecessor $x'^-$ of $x'$ has $d(x'^-)<2a-k$. Assume without loss of generality that $x'=x_1$ (and so $x'^-=y_1$).
To simplify notation, set
\[
\alpha\coloneqq d^+(x_1)\quad\mathrm{and}\quad \beta\coloneqq d^-(y_1)\,.
\]
We may assume that $\beta\leq a-4$, for if $\beta\geq a-3$, then Lemma~\ref{lem:6}(a) implies
\[
\alpha+\beta\geq(k+1)+(a-3)\geq a+2\,.
\]
Next, let
\[
l\coloneqq\min\{n\geq2:y_1x_n\in A(D), x_n\notin\{x'',x'''\}\}\,.
\]
By Lemma~\ref{lem:5}, we have $d^+(y_1)\geq(a+k)-\beta=a-(\beta-k)$, hence
\begin{equation}
\label{eq:l2}
l\leq\beta-k+4\,,
\end{equation}
where the inequality is strict unless $x'',x'''\in\{x_2,\dots,x_{\beta-k+3}\}$, $y_1x''\in A(D)$, and $y_1x'''\in A(D)$.
As in the first part of the proof, the pair $\{y_1,y_l\}$ being dominating implies $d(y_l)\geq2a-k$, and so the inequality~\eqref{eq:y_l-1} holds. Of course, we have~\eqref{eq:x_1-1} as well.
By~\eqref{eq:l2}, the right sides of~\eqref{eq:y_l-1} and~\eqref{eq:x_1-1} are now positive, else $\alpha+\beta\geq a+4$.
It thus follows from~\eqref{eq:y_l-1} and~\eqref{eq:x_1-1} that there exists $l+1\leq m\leq a$ such that
\begin{equation}
\label{eq:m-2}
y_lx_m\in A(D)\ \mathrm{and}\ \,y_mx_1\in A(D)\quad(\mathrm{hence\ also\ } d(y_m)\geq2a-k)\,,
\end{equation}
unless~\eqref{eq:a-l} holds.
By~\eqref{eq:l2} and since $k\geq4$, the latter inequality implies $\alpha+\beta\geq a+1$. Hence, either $\alpha+\beta\geq a+2$, or $\alpha+\beta=a+1$ and we have equalities in~\eqref{eq:a-l} and~\eqref{eq:l2}, or else there exists $l+1\leq m\leq a$ such that~\eqref{eq:m-2} holds. In the latter case, as in the first part of the proof, $D$ contains a Hamilton cycle $C'$, on which the predecessor of $x'$ is of degree at least $2a-k$. Hence, we have reduced to the case $p=2$, which is already settled. On the other hand, the equalities in~\eqref{eq:a-l} and~\eqref{eq:l2} can only occur when $D$ is bipancyclic, by Lemma~\ref{lem:nasty} below, so the proof of Claim~1 is now complete.
\medskip
Suppose now that $D$ is not bipancyclic, and $C=[y_1,x_1,\dots,y_a,x_a]$ is a Hamilton cycle for which the condition of Claim~1 is satisfied.
We associate with $D$ a new digraph, $G$, constructed as follows. Set $V(G)\coloneqq\{v_1,\dots,v_a\}$, and $v_iv_j\in A(G)$ whenever $x_iy_j\in A(D)$, for $i,j\in\{1,\dots,a\}$, $i\neq j$. Then, $G$ is strongly connected because it contains a Hamilton cycle $[v_1,\ldots,v_a]$ (induced from $C$).
Note that $a\geq3$, so $G$ has at least three vertices. Moreover, for every $1\leq i\leq a$, we have
\begin{equation}
\label{eq:D-to-G}
d^+_G(v_i)\geq d^+_D(x_i)-1\quad\mathrm{and}\quad d^-_G(v_i)\geq d^-_{D}(y_i)-1\,.
\end{equation}
Therefore, by Claim~1, $d_G(v_i)\geq a$ for every $1\leq i\leq a$, and thus $G$ satisfies the assumptions of Theorem~\ref{thm:thomassen}.
Notice that every cycle $[v_{i_1},\dots,v_{i_l}]$ of length $l$ in $G$ corresponds to a cycle of length $2l$ in $D$, namely $[y_{i_1},x_{i_1},\dots,y_{i_l},x_{i_l}]$.
Also, by Remark~\ref{rem:2-cycle}, $D$ contains a cycle of length $2$.
To complete the proof it thus suffices to show that $G$ is not a tournament nor a balanced bipartite digraph.
If $G$ were a tournament, then it would contain no cycle of length 2, and hence $d_G(v_i)=d^+_G(v_i)+d^-_G(v_i)\leq a-1$ for every $i$; a contradiction.
On the other hand, if $1\leq i\leq a$ is such that $d_D(x_i)\geq 2a-k$ then $d^+_D(x_i)\geq a-k+1$, by Lemma~\ref{lem:6}(a), and hence $d^+_G(v_i)\geq a-k$, by~\eqref{eq:D-to-G}. Since $k<\frac{a}{2}$, it follows that $v_i$ dominates more than half of the vertices of $G$, and so $G$ is not balanced bipartite.\qed
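The passage from $D$ to the auxiliary digraph $G$ used above, and the correspondence between cycles of $G$ and cycles of twice the length in $D$, are entirely mechanical. A small sketch (plain Python; the labelling of vertices and the toy example $K^*_{3,3}$ are illustrative choices):
\begin{verbatim}
# Illustrative construction of the auxiliary digraph G and the cycle lifting used above.
def build_G(a, arcs_D):
    """v_i -> v_j in G  iff  x_i -> y_j in D (i != j); vertices are 1..a."""
    return {(i, j) for i in range(1, a + 1) for j in range(1, a + 1)
            if i != j and (('x', i), ('y', j)) in arcs_D}

def lift_cycle(cycle_G):
    """A cycle [v_{i_1},...,v_{i_l}] of G becomes [y_{i_1},x_{i_1},...,y_{i_l},x_{i_l}] in D."""
    out = []
    for i in cycle_G:
        out += [('y', i), ('x', i)]
    return out

# toy example: D is the complete bipartite digraph K*_{3,3} (a = 3)
a = 3
arcs_D = {(('x', i), ('y', j)) for i in range(1, a+1) for j in range(1, a+1)} \
       | {(('y', j), ('x', i)) for i in range(1, a+1) for j in range(1, a+1)}
G = build_G(a, arcs_D)
print(sorted(G))                       # all ordered pairs (i, j) with i != j
print(lift_cycle([1, 2]))              # a 2-cycle of G gives a 4-cycle of D
\end{verbatim}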
\subsection{Special cases}
\label{subsec:special-cases}
There remain a few cases of digraphs not covered by the above proof. We do not know of any uniform way of tackling them all at once, and instead proceed on a case by case basis. We begin with a lemma that completes the proof of Theorem~\ref{thm:bipart} in the case of $k\geq3$.
\begin{lemma}
\label{lem:nasty}
Under the above notation, suppose that $p=2$, $k=3$, and we have equalities in~\eqref{eq:a-l} and~\eqref{eq:l1}, or else $p=3$, $k\geq4$, and we have equalities in~\eqref{eq:a-l} and~\eqref{eq:l2}. Then, $D$ is bipancyclic.
\end{lemma}
\begin{proof}
Let $C=[y_1,x_1,\ldots,y_a,x_a]$ be the fixed Hamilton cycle from the above proof.
The equality in~\eqref{eq:a-l} implies equalities in all the inequalities that led to it. In particular, $x_1$ is dominated by each of the $\{y_1,\dots,y_l\}$, and $d^-(y_l)=a-1$, so either $y_l$ is dominated by all the vertices from $X\setminus\{x_1\}$ or else it is dominated by all the vertices from $X\setminus\{x''\}$. The equality in~\eqref{eq:l1} (when $p=2$) or in~\eqref{eq:l2} (when $p=3$), in turn, implies that $x''=x_r$ for some $1<r<l$, and $y_1x_r\in A(D)$. Also, $l\geq4$.
Suppose first that $x_1y_l\notin A(D)$. Then, $D$ contains cycles of all even lengths (induced from $C$, by chords into $y_l$), except at most for the cycle $C_{2s}$ with $s=a-l+2$. Now, if $s\leq l-1$, then $D$ contains a $2s$-cycle $[x_1,\ldots,y_s]$ (where, as before, the dotted part indicates an appropriate piece of $C$).
If, in turn, $s\geq l$, then either the cycle $[y_1,x_r,\ldots,x_a]$ is of length $2s$, or else it is of length greater than $2s$ and we can shorten it to a $2s$-cycle by replacing an $x_q-y_l$ path on $C$ by the arc $x_qy_l$ with a suitable choice of $r\leq q<l-1$.
Suppose then that $x_ry_l\notin A(D)$. Then, $D$ contains cycles of all even lengths, except at most for the cycle $C_{2s}$ with $s=a-l+r+1$.
Now, if $2r\geq l$, then $D$ contains a $2s$-cycle
\[
[y_1,x_r,\ldots,y_{l-1},x_1,\ldots,x_{2r-l+1},y_l,\ldots,x_a]\,.
\]
If, in turn, $2r\leq l-1$, then $r<l-2$ (as $l\geq4$) and the cycle $[y_1,x_r,\dots,x_a]$ is of length $2(a-r+1)$, where
\[
2(a-r+1)\geq2(a-r+1)+2(2r-l+1)=2(a-l+r+2)\,.
\]
Thus, $[y_1,x_r,\dots,x_a]$ can be shortened to a $2s$-cycle by replacing an $x_q-y_l$ path on $C$ with the arc $x_qy_l$ with a suitable choice of $r<q<l-1$.
\end{proof}
\medskip
It remains to consider the case when $D$ is such that $k=p=2$. Fix a Hamilton cycle $C=[y_1,x_1,\dots,y_a,x_a]$ on $D$. By~\eqref{eq:a-p-k}, we have $a\geq6$. On the other hand, if $a\geq8$, then $k>\frac{a}{4}$ implies $k\geq3$. Thus, $a=6$ or $a=7$. In both cases, for every $x\in X\setminus\{x',x''\}$, we have $d(x)\geq2a-2$. Hence, for every such $x$,
\begin{equation}
\label{eq:a-1}
d^-(x)=d^+(x)=a-1\,,
\end{equation}
for else $D$ contains cycles of all even lengths through $x$ (induced from $C$). It is easy to see that then $D$ contains all cycles of even lengths less than $2(a-1)$. Indeed, let $2\leq s\leq a-2$ and suppose without loss of generality that $d^+(x_1)=d^-(x_1)=a-1$. If $y_{s+1}x_1\in A(D)$ then $D$ contains the $2s$-cycle $[x_1,\dots,y_{s+1}]$. If $x_1y_{a-s+2}\in A(D)$, then $D$ contains the $2s$-cycle $[x_1,y_{a-s+2},\dots,y_1]$. If, in turn, $y_{s+1}x_1\notin A(D)$ and $x_1y_{a-s+2}\notin A(D)$, then $D$ contains all other arcs adjacent to $x_1$, and hence it contains the $2s$-cycle $[x_1,y_{a-s+1},\dots,y_{s+2}]$. It thus remains to show that $D$ contains a subhamiltonian cycle $C_{2(a-1)}$.
Suppose first that $a=6$. Then, we have equality in~\eqref{eq:a-p-k}, and hence
\begin{equation}
\label{eq:a6}
d^+(x')=d^+(x'')=3,\quad\mathrm{and}\quad d^-(x')=d^-(x'')=a-1\,,
\end{equation}
by Lemma~\ref{lem:6}. We may assume that $d(x'^-)<2a-k$ or $d(x''^-)<2a-k$, for else $D$ satisfies the condition from Claim~1 in the proof in Section~\ref{subsec:thm-bipart}, and the remainder of that proof carries through.
Without loss of generality, assume then that $x'=x_1$ (hence $x'^-=y_1$) and $d(y_1)<2a-k$. We may further assume that $y_ax_1\notin A(D)$, for else $D$ contains the $2(a-1)$-cycle $[y_a,x_1,\dots,x_{a-1}]$. Then, by~\eqref{eq:a6}, we must have $y_{a-1}x_1\in A(D)$. Of the 3 vertices $\{x_2,\dots,x_{a-2}\}$ at most one doesn't satisfy \eqref{eq:a-1}. Similarly, at most one of the $\{y_2,\dots,y_{a-2}\}$ has degree less than $2a-2$, by Lemma~\ref{lem:6} and since $p=2$, $d(y_1)<2a-k$. Therefore, there exists $2\leq i\leq a-2$ such that $y_ix_{a-1},y_ax_i\in A(D)$, or $y_ix_a,y_1x_i\in A(D)$, or else $y_ix_{i+1}\in A(D)$. In either of the first two cases, we obtain a cycle of length $2(a-1)$ by replacing the arc $y_ix_i$ on the cycle $[y_{a-1},x_1,\dots,x_{a-2}]$ with an appropriate path of length 3. In the latter case, in turn, $D$ contains the $2(a-1)$-cycle $[x_{i+1},\ldots,y_i]$.
Finally, suppose $a=7$. Then, the inequalities $d(x')\geq a+2$, $d(x'')\geq a+2$, together with~\eqref{eq:a-p-k}, imply that
\begin{equation}
\label{eq:a7}
d^-(x')=a-1,\quad\mathrm{or\ else}\quad d^-(x'')=a-1\,.
\end{equation}
For if $d^-(x')\leq a-2$ and $d^-(x'')\leq a-2$, then $d^+(x')+d^+(x'')\geq8$ and hence $\{x',x''\}$ is a dominating pair; a contradiction.
Without loss of generality, assume then that $x'=x_1$ and $d^-(x_1)=a-1$. We may further assume that $y_ax_1\notin A(D)$, for else $D$ contains the $2(a-1)$-cycle $[y_a,x_1,\dots,x_{a-1}]$. Then, by~\eqref{eq:a7}, we must have $y_{a-1}x_1\in A(D)$. Of the 4 vertices $\{x_2,\dots,x_{a-2}\}$ at most one doesn't satisfy \eqref{eq:a-1}. Similarly, at most two of the 4 vertices $\{y_2,\dots,y_{a-2}\}$ have degree less than $2a-2$, by Lemma~\ref{lem:6} and since $p=2$. Therefore, there exists $2\leq i\leq a-2$ such that $y_ix_{a-1},y_ax_i\in A(D)$, or $y_ix_a,y_1x_i\in A(D)$, or else $y_ix_{i+1}\in A(D)$. In either of the first two cases, we obtain a cycle of length $2(a-1)$ by replacing the arc $y_ix_i$ on the cycle $[y_{a-1},x_1,\dots,x_{a-2}]$ with an appropriate path of length 3. In the latter case, in turn, $D$ contains the $2(a-1)$-cycle $[x_{i+1},\ldots,y_i]$.\qed
\medskip
\section{Final remarks}
\label{sec:rem}
First of all, for the sake of completeness, let us note that analogues of Theorems~\ref{thm:bipart} and~\ref{thm:hamil} for \emph{dominated} pairs hold true as well. More precisely, we have the following results.
\begin{theorem}
\label{thm:bipart2}
Let $D$ be a strongly connected balanced bipartite digraph of order $2a$, where $a\geq3$, and let $k$ be an integer satisfying $\max\{1,\frac{a}{4}\}<k\leq\frac{a}{2}$.
Suppose that for every dominated pair $\{u,v\}$ of vertices in $D$,
\[
d(u)\geq2a-k\ \mathrm{and}\ d(v)\geq a+k,\quad\mathrm{or}\quad d(u)\geq a+k\ \mathrm{and}\ d(v)\geq2a-k.
\]
Then, $D$ is either bipancyclic or a directed cycle of length $2a$.
\end{theorem}
\begin{theorem}
\label{thm:hamil2}
Let $D$ be a strongly connected balanced bipartite digraph of order $2a$, where $a\geq2$. Suppose that for every dominated pair $\{u,v\}$ of vertices in $D$,
\[
d(u)+d(v)\geq3a+1\,.
\]
Then, $D$ is hamiltonian.
\end{theorem}
Indeed, with a given digraph $D$ one can associate a digraph $D'$ on the same vertices, and with arcs $uv\in A(D')$ whenever $vu\in A(D)$. Then, for every $u\in V(D')=V(D)$, we have $d^-_{D'}(u)=d^+_D(u)$ and $d^+_{D'}(u)=d^-_D(u)$, hence $d_{D'}(u)=d_D(u)$. Moreover, a pair of vertices $\{u,v\}$ is dominating in $D'$ if and only if it is dominated in $D$. The above results thus follow immediately from Theorems~\ref{thm:bipart} and~\ref{thm:hamil}, by observing that any cycle $[u_1,\dots,u_s]$ in $D'$ corresponds to a cycle on the same vertices in $D$ traversed in the opposite direction, $[u_s,\dots,u_1]$.
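The construction of $D'$ is, of course, trivial to carry out explicitly; a short sketch (plain Python, with an arbitrary toy digraph) may nevertheless make the exchange of dominating and dominated pairs concrete:
\begin{verbatim}
# Converse digraph: dominating pairs of D become dominated pairs of D' and vice versa.
def converse(arcs):
    return {(v, u) for (u, v) in arcs}

arcs_D = {(1, 2), (3, 2), (2, 4)}    # {1,3} is a dominating pair of D (both dominate 2)
print(converse(arcs_D))              # {(2, 1), (2, 3), (4, 2)}: in D', {1,3} is dominated by 2
\end{verbatim}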
\smallskip
Next, let us have a look at the lower bound on the integer $k$ in Theorems~\ref{thm:WW} and~\ref{thm:bipart}. Of course, the assumption that $k>\frac{a}{4}$ leaves out nearly half of the possible cases in condition $\Bk$. However, a careful analysis of the proof from Section~\ref{subsec:thm-bipart} shows that this argument carries through, for $a$ large enough, so long as $k$ is bounded below by $\lambda a$ for some real $\lambda>0$. Consequently, for all but finitely many exceptional digraphs satisfying condition $\Bk$ of this type, hamiltonicity implies bipancyclicity:
\begin{proposition}
\label{prop:what-if}
For every $\lambda>0$ there exists $a_0$ such that the following holds:\\
If $D$ is a strongly connected hamiltonian balanced bipartite digraph of order $2a$, with $a\geq a_0$, $k$ is an integer satisfying $\max\{1,\lambda a\}<k\leq\frac{a}{2}$, and for every dominating pair $\{u,v\}$ of vertices in $D$,
\[
d(u)\geq2a-k\ \mathrm{and}\ d(v)\geq a+k,\quad\mathrm{or}\quad d(u)\geq a+k\ \mathrm{and}\ d(v)\geq2a-k\,,
\]
then $D$ is bipancyclic or else a directed cycle of length $2a$.
\end{proposition}
It would therefore be very interesting to know whether condition $\Bk$ with $\max\{1,\lambda a\}<k\leq\frac{a}{2}$ and $\lambda<\frac{1}{4}$ does imply hamiltonicity of a digraph. We do not know the answer to this question, and it seems implausible that the techniques of \cite{WW} could be used to obtain such a result.
\smallskip
Another interesting problem is this: Is condition $\C$ enough to imply bipancyclicity of a strongly connected balanced bipartite digraph, modulo some exceptional cases?
Again, the techniques used in the present paper seem insufficient to have a go at it.
Finally, in the context of Theorem~\ref{thm:hamil}, it is perhaps worth mentioning that our present methods can be pushed a bit further, and the conclusions of Lemmas~\ref{lem:1} and~\ref{lem:3} hold, in fact, even if a digraph $D$ satisfies just $(\mathcal{D}_0)$. We do not include the proofs here, because they are much less elegant and require considering several special cases. They do however give hope that an analogue of Theorem~\ref{thm:hamil} with condition $\C$ replaced by $(\mathcal{D}_0)$ could be true.
\medskip
| {
"timestamp": "2020-05-05T02:29:23",
"yymm": "2005",
"arxiv_id": "2005.01466",
"language": "en",
"url": "https://arxiv.org/abs/2005.01466",
"abstract": "We prove several new sufficient conditions for hamiltonicity and bipancyclicity in balanced bipartite digraphs, in terms of sums of degrees over dominating or dominated pairs of vertices.",
"subjects": "Combinatorics (math.CO)",
"title": "On dominating pair degree conditions for hamiltonicity in balanced bipartite digraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587221857245,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7097978789756157
} |
https://arxiv.org/abs/1806.03196 | Approximation of Hermitian Matrices by Positive Semidefinite Matrices using Modified Cholesky Decompositions | A new algorithm to approximate Hermitian matrices by positive semidefinite Hermitian matrices based on modified Cholesky decompositions is presented. In contrast to existing algorithms, this algorithm allows to specify bounds on the diagonal values of the approximation. It has no significant runtime and memory overhead compared to the computation of a classical Cholesky decomposition. Hence it is suitable for large matrices as well as sparse matrices since it preserves the sparsity pattern of the original matrix. The algorithm tries to minimize the approximation error in the Frobenius norm as well as the condition number of the approximation. Since these two objectives often contradict each other, it is possible to weight these two objectives by parameters of the algorithm. In numerical experiments, the algorithm outperforms existing algorithms regarding these two objectives. A Cholesky decomposition of the approximation is calculated as a byproduct. This is useful, for example, if a corresponding linear equation should be solved. A fully documented and extensively tested implementation is available. Numerical optimization and statistics are two fields of application in which the algorithm can be of particular interest. | \section{Introduction} \label{sec: introduction}
Algorithms for approximating Hermitian matrices by positive semidefinite Hermitian matrices are useful in several areas. In statistics they are needed to transform estimates of covariance and correlation matrices that are not positive semidefinite into valid estimates \cite{Rousseeuw1993,Qi2010,Higham2002a,Higham2016b}. In optimization they are needed to deal with Hessian matrices that are not positive definite in Newton-type methods \cite{Gill1981,Nocedal2006,Chong2013}.
The existing algorithms have different disadvantages, which will be outlined below. A new algorithm without these disadvantages is presented in Section \ref{sec:LDL_approximation} where it is also examined in detail. An implementation is introduced in Section \ref{sec:implementation} together with numerical experiments and corresponding results. Conclusions are drawn in Section \ref{sec: summary}.
\subsection{Objectives of approximation algorithms} \label{subsec: introduction: objectives}
In order to evaluate the existing algorithms, objectives of an ideal approximation algorithm are established. For this, let $A \in \mathbb{C}^{n \times n}$ be an Hermitian matrix and $B \in \mathbb{C}^{n \times n}$ its approximation. The first three objectives are the following:
\begin{enumerate}[start=1,label={(\textbf{O\arabic*})},leftmargin=*]
\item $B$ is positive semidefinite.
\label{objective: positive semidefinite}
\end{enumerate}
\begin{enumerate}[start=2,label={(\textbf{O\arabic*})},leftmargin=*]
\item The approximation error $\| B - A \|$ is small.
\label{objective: small error}
\end{enumerate}
\begin{enumerate}[start=3,label={(\textbf{O\arabic*})},leftmargin=*]
\item The condition number $\kappa(B) = \|B\| \, \|B^{-1}\|$ is small.
\label{objective: well-conditioned}
\end{enumerate}
In addition to the approximation error, the condition number of the approximation is usually important as well, since, for example, linear systems involving the approximation often have to be solved.
The three objectives \ref{objective: positive semidefinite}, \ref{objective: small error} and \ref{objective: well-conditioned} are sometimes contradictory. Hence, an ideal algorithm would allow one to prioritize between \ref{objective: small error} and \ref{objective: well-conditioned}. The norm used in \ref{objective: small error} and \ref{objective: well-conditioned} may depend on the actual application. Typical choices are the spectral norm or the Frobenius norm.
Especially for large matrices, the execution time of the algorithm as well as the needed memory are important. The fastest way to test whether a matrix is positive definite is to try to calculate its Cholesky decomposition \cite{Higham1988}. This needs $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ basic operations in the dense real valued case. The approximation algorithm cannot be expected to be faster, but it should be at least asymptotically as fast (a short sketch of the Cholesky-based test is given after the next two objectives). Thus, the next two objectives are:
\begin{enumerate}[start=4,label={(\textbf{O\arabic*})},leftmargin=*]
\item The algorithm requires at most $\mathcal{O}(n^2)$ more basic operations than the calculation of a Cholesky decomposition of $B$.
\label{objective: fast computation}
\end{enumerate}
\begin{enumerate}[start=5,label={(\textbf{O\arabic*})},leftmargin=*]
\item The algorithm needs to store $\mathcal{O}(n)$ numbers besides $A$ and $B$ and allows $A$ to be overwritten with $B$.
\label{objective: low memory}
\end{enumerate}
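As an illustration of the positive definiteness test mentioned before these two objectives, the following minimal sketch (assuming NumPy; the function name is ours) simply attempts a Cholesky factorization:
\begin{verbatim}
import numpy as np

def is_positive_definite(A):
    # np.linalg.cholesky succeeds exactly for (numerically) Hermitian
    # positive definite matrices and raises LinAlgError otherwise
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False
\end{verbatim}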
If $A$ is a sparse matrix, $B$ should have the same sparsity pattern. This allows effective overwriting and is essential if the corresponding dense matrix would be too big to store. Thus, the next objective is:
\begin{enumerate}[start=6,label={(\textbf{O\arabic*})},leftmargin=*]
\item $A_{i j} = 0$ implies $B_{i j} = 0$.
\label{objective: preserve sparsity}
\end{enumerate}
For correlation matrices it is crucial that $B$ has only ones as diagonal values. This is the reason for the last objective:
\begin{enumerate}[start=7,label={(\textbf{O\arabic*})},leftmargin=*]
\item The diagonal of $B$ can be predefined.
\label{objective: diagonal values}
\end{enumerate}
Similar objectives to \ref{objective: positive semidefinite}, \ref{objective: small error}, \ref{objective: well-conditioned} and \ref{objective: fast computation} have been used in \cite{Schnabel1990,Schnabel1999,Cheng1998,Fang2008}. In these works, another objective has been established: If $A$ is ``sufficiently'' positive definite, $B$ should be equal to $A$. This objective is not explicitly listed here and should be covered by \ref{objective: small error}.
\subsection{Existing approximation methods} \label{subsec: introduction: existing algorithms}
An overview of existing methods to approximate Hermitian matrices by positive semidefinite Hermitian matrices is provided next. They are evaluated using the objectives mentioned above.
The minimal approximation error can be achieved by computing an eigendecomposition and replacing negative eigenvalues \cite{Higham1988,Higham1989}. This was done in statistics \cite{Iman1982,Rousseeuw1993} as well as in optimization \cite[Chapter 3.4]{Nocedal2006}, \cite[Chapter 4.4.2.1]{Gill1981}. However, this does not meet \ref{objective: fast computation}, \ref{objective: preserve sparsity} and \ref{objective: diagonal values}.
It is also possible to calculate approximations with minimal approximation error and the restriction that all diagonal values are one \cite{Higham2002a,Borsdorf2010,Borsdorf2010a,Higham2016}. These methods could be extended so that the approximation has arbitrary predefined (nonnegative) diagonal values. Nevertheless, these methods do not meet \ref{objective: fast computation} and \ref{objective: preserve sparsity}.
Another method, especially common in optimization, is to add a predefined positive definite matrix multiplied by a sufficiently large scalar to the original matrix. The predefined matrix is usually the identity matrix or a diagonal matrix. The scalar is usually determined by increasing a value until the resulting approximation can be successfully Cholesky factorized. This method is also used in a modified Newton's method \cite{Goldfeld1966,Chong2013,Nocedal2006} and the Levenberg-Marquardt method \cite{Levenberg1944,Marquardt1963,Chong2013}. However, \ref{objective: fast computation} and \ref{objective: diagonal values} are not met.
A method that is well known in statistics is a convex combination with a predefined positive definite matrix. In this context it is based on the concept of shrinkage estimators \cite{Stein1956,Devlin1975,Rousseeuw1993}. The positive definite matrix is again usually the identity matrix or a diagonal matrix. Only the convex combination factor has to be determined. This is usually done by examining the underlying statistical problem \cite{Chen2010,Fisher2011,Ikeda2016,Ledoit2003,Ledoit2004,Schaefer2005,Touloumis2015}. However, methods that do not use any statistical assumptions exist as well \cite{Higham2016b}. None of these meet \ref{objective: fast computation} and \ref{objective: diagonal values}.
Other methods used, especially in optimization, are modified Cholesky algorithms \cite{Gill1974,Gill1981,Schnabel1990,Schnabel1999,More1979,Cheng1998,Fang2008}. These compute a variant of a Cholesky decomposition such as an $LDL^T$, an $LBL^T$ or an $LTL^T$ decomposition. Here $L$ is a lower triangular matrix, $D$ is a diagonal matrix, $B$ is a block diagonal matrix with blocks of size one or two and $T$ is a tridiagonal matrix. During or after the calculation of these decompositions, their factors are modified so that they represent a positive definite matrix. The methods based on $LBL^T$ decomposition \cite{More1979,Cheng1998} do not meet \ref{objective: fast computation}, \ref{objective: preserve sparsity} and \ref{objective: diagonal values}, the ones based on $LTL^T$ decomposition \cite{Fang2008} do not meet \ref{objective: preserve sparsity} and \ref{objective: diagonal values} and the ones based on $LDL^T$ decomposition \cite{Gill1974,Gill1981,Schnabel1990,Schnabel1999,Fang2008} do not meet \ref{objective: diagonal values}.
Hence, none of the existing methods meets all objectives. However, methods that do not meet \ref{objective: diagonal values} can be extended to meet this objective. For that, the calculated approximation is multiplied from both sides by a suitably chosen diagonal matrix. This does not affect \ref{objective: positive semidefinite}, \ref{objective: fast computation}, \ref{objective: low memory} and \ref{objective: preserve sparsity}. So the modified Cholesky methods based on $LDL^T$ decomposition could meet all objectives if they are extended to meet \ref{objective: diagonal values}.
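The following sketch (assuming NumPy; the function name is ours) illustrates this extension for a Hermitian positive semidefinite $B$ with strictly positive diagonal values: the two-sided scaling is a congruence transformation, so positive semidefiniteness and the sparsity pattern are preserved while the diagonal is set to prescribed positive values.
\begin{verbatim}
import numpy as np

def rescale_diagonal(B, target):
    # S B S with S = diag(s) and s_i = sqrt(target_i / B_ii),
    # so that (S B S)_ii = target_i
    s = np.sqrt(np.asarray(target) / np.real(np.diag(B)))
    return B * np.outer(s, s)
\end{verbatim}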
The new method presented in Section \ref{sec:LDL_approximation} is a modified Cholesky method based on $LDL^T$ decomposition as well. Contrary to the already published methods of this kind, this method modifies not only the matrix $D$ but also the matrix $L$ during their calculation. In this way, the algorithm meets all objectives. Furthermore, it meets \ref{objective: small error} and \ref{objective: well-conditioned} better than the other extended methods based on $LDL^T$ decomposition, as shown in Section \ref{sec:implementation} by numerical experiments.
\section{The approximation algorithm}
\label{sec:LDL_approximation}
The algorithm \nameref{alg: approximated matrix}, which approximates Hermitian matrices by positive semidefinite Hermitian matrices, is presented and analyzed in this section.
\subsection{The \nameref{alg: approximated matrix} and \nameref{alg: approximated decomposition} algorithms}
Previous modified Cholesky methods based on $LDL^T$ decomposition \cite{Gill1974,Gill1981,Schnabel1990,Schnabel1999,Fang2008} applied to a symmetric matrix $A$ try to calculate its $LDL^T$ decomposition. While doing so, they increase some of the values in the diagonal matrix $D$. Hence, they result in a decomposition of a positive definite matrix $A + \Delta$, where $\Delta$ is a diagonal matrix with nonnegative values. However, in this way, the approximation $A + \Delta$ cannot have predefined diagonal elements.
The key idea of the new algorithm is to modify the off-diagonal values of $A$ instead or in addition to its diagonal values. In detail, the Hermitian positive definite approximation $B \in \mathbb{C}^{n \times n}$ of an Hermitian $A\in \mathbb{C}^{n \times n}$ is defined as
\begin{equation*}
B_{i j}
:=
\hat{\omega}_{i j} A_{i j}
\text{ if }
i \neq j
\text{ and }
B_{i i}
:=
A_{i i} + \delta_i
\end{equation*}
where $\hat{\omega}_{i j} \in [0, 1]$, $\hat{\omega}_{i j} = \hat{\omega}_{j i}$ and $\delta_i \in \mathbb{R}$ for all $i, j \in \{1, \ldots, n\}$.
If, for example, $\hat{\omega}_{i j} = 0$ and $\delta_i > |A_{ii}|$ for all $i, j \in \{1, \ldots, n\}$, then $B$ is a diagonal matrix with only positive values and thus positive definite. If, on the other hand, $\hat{\omega}_{i j} = 1$ and $\delta_i = 0$ for all $i, j \in \{1, \ldots, n\}$, then $B = A$ and there is no approximation error.
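For illustration, this definition can be written down directly (a sketch assuming NumPy; the algorithm presented below never forms $\hat{\omega}$ explicitly):
\begin{verbatim}
import numpy as np

def build_B(A, omega_hat, delta):
    # omega_hat: symmetric array with entries in [0, 1]
    # delta:     real vector of diagonal shifts
    B = omega_hat * A                        # scale the off-diagonal entries
    np.fill_diagonal(B, np.diag(A) + delta)  # B_ii = A_ii + delta_i
    return B
\end{verbatim}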
The challenge is now to determine the values $\hat{\omega}_{i j}$ and $\delta_i$ such that the objectives established in Subsection \ref{subsec: introduction: objectives} are met. This is where we use a (complex-valued) modified Cholesky method based on $LDL^H$ decomposition. During the calculation of an $LDL^H$ decomposition of $A$, we modify $L$ and $D$ if the matrix represented by the decomposition would otherwise not be positive definite, its condition number would become too high, or the requirements on the diagonal values would be violated.
In detail, the off-diagonal values in the $i$-th row of $L$ are multiplied by $\omega_i \in [0, 1]$ and $\delta_i \in \mathbb{R}$ is added to the $i$-th diagonal value of $D$. This $\delta_i$ corresponds to the previously mentioned $\delta_i$ and $\omega$ to $\hat{\omega}$ such that $\hat{\omega}_{i, j} = \omega_{\max\{{i,j}\}}$ for all $i, j \in \{1, \ldots, n\}$. This relationship is discussed in Subsection \ref{subsec: algorithms: representation}. Furthermore symmetric permutation techniques are used to reduce the approximation error, the computational effort and the required memory.
The algorithm \nameref{alg: approximated decomposition}, which computes the permuted modified $LDL^H$ decomposition and the values $\omega$ and $\delta$, is described in detail in Algorithm \ref{alg: approximated decomposition}.
\begin{algorithm}
\normalsize
\caption{\mbox{DECOMPOSITION}}
\label{alg: approximated decomposition}
\begin{algorithmic}[1]
\INPUT
\begin{itemize}[label={$\cdot$}]
\item $A \in \mathbb{C}^{n \times n}$ Hermitian, $x \in (\mathbb{R} \cup \{-\infty\})^n$, $y \in (\mathbb{R} \cup \{\infty\})^n$, $l \in \mathbb{R} \cup \{-\infty\}$, $u \in \mathbb{R} \cup \{\infty\}$, $\epsilon > 0$
\item with $\max\{x_i, l\} \leq \min\{y_i, u\}$ for all $i \in \{1, \ldots, n\}$
\item with $|x_i|, |l| \geq \epsilon$ or $|y_i|, |u| \geq \epsilon$ for all $i \in \{1, \ldots, n\}$
\end{itemize}
\OUTPUT
\begin{itemize}[label={$\cdot$}]
\item $L \in \mathbb{C}^{n \times n}$, $d, \omega, \delta \in \mathbb{R}^n$, $p \in \{1, \ldots, n\}^n$
\end{itemize}
\Function {decomposition}{$A, x, y, l, u, \epsilon$}
\State $p_i \gets i$ for all $i \in \{1, \ldots, n\}$ \label{line: alg: approximated decomposition: init p}
\State $\alpha_i \gets 0$ for all $i \in \{1, \ldots, n\}$ \label{line: alg: approximated decomposition: init alpha}
\For {$i \gets 1, \ldots, n$} \label{line: alg: approximated decomposition: main for loop}
\State select $j \in \{i, \ldots, n\}$ \label{line: alg: approximated decomposition: permutation}
\State swap $p_i$ with $p_j$ and $L_{i k}$ with $L_{j k}$ for all $k \in \{1, \ldots, i-1\}$ \label{line: alg: approximated decomposition: swap}
\State select $d_i \in [l, u]$, $\omega_{p_i} \in [0,1]$ with $|d_i| \notin (0, \epsilon)$, $d_i + \alpha_{p_i} \omega_{p_i}^2 \in [x_{p_i}, y_{p_i}]$\label{line: alg: approximated decomposition: select d and omega}
\State $L_{i j} \gets \omega_{p_i} L_{i j}$ for all $j \in \{1, \ldots, i-1\}$\label{line: alg: approximated decomposition: L modification omega}
\State $\delta_{p_i} \gets d_i + \omega_{p_i}^2 \alpha_{p_i} - A_{p_i p_i}$ \label{line: alg: approximated decomposition: delta definition}
\For {$j \gets i+1, \ldots, n$}
\If {$d_i \neq 0$} \label{line: alg: approximated decomposition: if d not zero}
\State $L_{j i} \gets \left( A_{p_j p_i} - \sum\limits_{k=1}^{i-1} L_{j k} \conj{L}_{i k} d_k \right) (d_i)^{-1}$ \label{line: alg: approximated decomposition: L definition}
\State $\alpha_{p_j} \gets \alpha_{p_j} + |L_{j i}|^2 d_{i}$ \label{line: alg: approximated decomposition: add alpha}
\Else
\State $L_{j i} \gets 0$ \label{line: alg: approximated decomposition: L set zero}
\EndIf
\EndFor
\EndFor
\State $L_{i i} \gets 1$ and $L_{i j} \gets 0$ for all $i, j \in \{1, \ldots, n\}$ with $j > i$ \label{line: alg: approximated decomposition: L set diagonal and upper triangle}
\Return $(L, d, p, \omega, \delta)$
\EndFunction
\end{algorithmic}
\end{algorithm}
The algorithm \nameref{alg: approximated matrix}, which computes the approximation $B$, is described in detail in Algorithm \ref{alg: approximated matrix}.
\begin{algorithm}[H]
\normalsize
\caption{\mbox{MATRIX}}
\label{alg: approximated matrix}
\begin{algorithmic}[1]
\INPUT
\begin{itemize}[label={$\cdot$}]
\item $A \in \mathbb{C}^{n \times n}$ Hermitian, $x \in (\mathbb{R} \cup \{-\infty\})^n$, $y \in (\mathbb{R} \cup \{\infty\})^n$, $l \in \mathbb{R} \cup \{-\infty\}$, $u \in \mathbb{R} \cup \{\infty\}$, $\epsilon > 0$
\item with $\max\{x_i, l\} \leq \min\{y_i, u\}$ for all $i \in \{1, \ldots, n\}$
\item with $|x_i|, |l| \geq \epsilon$ or $|y_i|, |u| \geq \epsilon$ for all $i \in \{1, \ldots, n\}$
\end{itemize}
\OUTPUT
\begin{itemize}[label={$\cdot$}]
\item $B \in \mathbb{C}^{n \times n}$
\end{itemize}
\Function {matrix}{$A, x, y, l, u, \epsilon$}
\State $(L, d, p, \omega, \delta) \gets$ \Call{DECOMPOSITION}{$A, x, y, l, u, \epsilon$}
\State $q_{p_i} \gets i$ for all $i \in \{1, \ldots, n\}$ \label{line: alg: approximated matrix: definition q}
\For {$i \gets 1, \ldots, n$} \label{line: alg: approximated matrix: outer for loop}
\State $B_{i i} \gets A_{i i} + \delta_i$ \label{line: alg: approximated matrix: set diagonal}
\For {$j \gets i+1, \ldots, n$} \label{line: alg: approximated matrix: inner for loop}
\If {$q_i > q_j$}
\State $a \gets j$, $b \gets i$ \label{line: alg: approximated matrix: set a and b case 1}
\Else
\State $a \gets i$, $b \gets j$ \label{line: alg: approximated matrix: set a and b case 2}
\EndIf
\If {$d_{q_a} \neq 0$ \OR $\omega_b = 0$}
\State $B_{i j} \gets A_{i j} \omega_b$ \label{line: alg: approximated matrix: set below diagonal with omega}
\Else
\State $B_{i j} \gets \sum\limits_{k = 1}^{q_a - 1} L_{q_i k} d_k \overline{L}_{q_j k}$ \label{line: alg: approximated matrix: set below diagonal without omega}
\EndIf
\EndFor
\EndFor
\State $B_{j i} \gets \overline{B}_{i j}$ for all $i, j \in \{1, \ldots, n\}$ with $j > i$ \label{line: alg: approximated matrix: set obove diagonal}
\Return $B$
\EndFunction
\end{algorithmic}
\end{algorithm}
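To make the structure of Algorithm \ref{alg: approximated decomposition} and Algorithm \ref{alg: approximated matrix} concrete, the following simplified sketch (assuming NumPy) treats the real symmetric case without pivoting, so that $P$ is the identity and the approximation is simply $B = L \diag(d) L^T$. The bounds $x$, $y$, $l$ and $u$ are not supported here, and the rule used to pick $d_i$ and $\omega_i$ is a naive heuristic that only keeps $d_i \geq \epsilon$ while trying to keep the diagonal of $B$ close to that of $A$; it is not the \nameref{alg: minimal change} rule described below and not the published implementation.
\begin{verbatim}
import numpy as np

def modified_ldl_sketch(A, eps=1e-8):
    # unpivoted sketch of DECOMPOSITION for a real symmetric A
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    omega = np.ones(n)
    delta = np.zeros(n)
    alpha = np.zeros(n)  # alpha[i] = sum_{k<i} L[i,k]^2 d[k] so far
    for i in range(n):
        # naive choice: keep d_i >= eps and try to keep the diagonal
        # entry d_i + omega_i^2 * alpha_i equal to A[i, i]
        if A[i, i] - alpha[i] >= eps:
            d[i], omega[i] = A[i, i] - alpha[i], 1.0
        elif alpha[i] > 0.0:
            d[i] = eps
            w = np.sqrt(max(A[i, i] - eps, 0.0) / alpha[i])
            omega[i] = min(1.0, w)
        else:
            d[i], omega[i] = eps, 0.0
        L[i, :i] *= omega[i]  # scale the off-diagonal part of row i
        delta[i] = d[i] + omega[i] ** 2 * alpha[i] - A[i, i]
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ (L[i, :i] * d[:i])) / d[i]
            alpha[j] += L[j, i] ** 2 * d[i]
    return L, d, omega, delta

# usage: B = L @ np.diag(d) @ L.T is a positive definite approximation of A
\end{verbatim}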
The parameters $l$ and $u$ of the algorithms are lower and upper bounds on the diagonal values of $D$. The positive definiteness of $B$ can be controlled by $l$, as pointed out in Subsection \ref{subsec: algorithms: positive semidefinite}. The parameters $x$ and $y$ are lower and upper bounds on the diagonal values of $B$, as shown in Subsection \ref{subsec: algorithms: diagonal values}. The condition number of $B$ and the approximation error $\|B - A\|$ are influenced by $x, y, l, u$, as demonstrated in Subsection \ref{subsec: algorithms: condition number} and \ref{subsec: algorithms: approximation error}, respectively. Moreover, these parameters allow one to prioritize a low approximation error or a low condition number. The numerical stability of the algorithms is controlled by $\epsilon$.
The algorithms can be considered as a whole class of algorithms, since there are many possibilities for choosing the permutation and the values $\omega$ and $d$, as discussed in Subsection \ref{subsec: algorithms: choice of d and omega} and \ref{subsec: algorithms: permutation}. The algorithms are carefully designed so that the overhead in computational effort and memory consumption compared to classical Cholesky decomposition algorithms is negligible if $\omega$ and $d$ are chosen in a proper way, as shown in Subsection \ref{subsec: algorithms: complexity}.
For the rest of this section, we use the following notation for the analysis of both algorithms.
\begin{definition}
Let
\begin{equation*}
B := \text{\normalfont \nameref{alg: approximated matrix}}(A, x, y, l, u, \epsilon)
\end{equation*}
where $(A, x, y, l, u, \epsilon)$ is some valid input for the algorithm with $A \in \mathbb{C}^{n \times n}$ and
\begin{equation*}
(L, d, p, \omega, \delta) := \text{\normalfont \nameref{alg: approximated decomposition}}(A, x, y, l, u, \epsilon).
\end{equation*}
Define $D := \diag(d)$, the diagonal matrix with $d$ on its diagonal. Define $P \in \mathbb{R}^{n \times n}$ as the permutation matrix induced by $p$, that is,
\begin{equation*}
P_{i j}
:=
\begin{cases}
1 \text{ if } j = p_i\\
0 \text{ else}
\end{cases}
\text{ for all }
i, j \in \{1, \ldots, n\}
.
\end{equation*}
\end{definition}
\subsection{Representation of the approximation matrix} \label{subsec: algorithms: representation}
\begin{sloppypar}
In this subsection it is shown that $B = P^T L D L^H P$. This means that \nameref{alg: approximated matrix} calculates the matrix represented by the decomposition calculated by \nameref{alg: approximated decomposition}. This will be crucial for further investigation of \nameref{alg: approximated matrix}.
\end{sloppypar}
First, we prove that $p$ is a permutation vector.
\begin{lemma} \label{lemma: approximated LDLH decomposition: p uniqueness}
\begin{equation*}
\{p_i ~|~ i \in \{1, \ldots, n\}\}
=
\{1, \ldots, n\}
.
\end{equation*}
\end{lemma}
\begin{proof}
In \nameref{alg: approximated decomposition}, the variable $p$ is initialized at \autoref{line: alg: approximated decomposition: init p} of the algorithm so that $p_i = i$ for all $i \in \{1, \ldots, n\}$. After its initialization, the variable $p$ is only changed in \autoref{line: alg: approximated decomposition: swap}. Here some of its components are swapped in each iteration. Thus
$
\{p_i ~|~ i \in \{1, \ldots, n\}\}
=
\{1, \ldots, n\}
$
at the end of the algorithm.
\flushright\qed
\end{proof}
Next it is shown how a corresponding inverse permutation vector can be defined.
\begin{lemma} \label{lemma: p q i = i}
Define
\begin{equation*}
q_{p_i} := i
\text{ for all }
i \in \{1, \ldots, n\}
.
\end{equation*}
$q$ is well defined and
\begin{equation*}
p_{q_i}
=
i
\text{ for all }
i \in \{1, \ldots, n\}
.
\end{equation*}
\end{lemma}
\begin{proof}
$q$ is well defined due to Lemma \ref{lemma: approximated LDLH decomposition: p uniqueness}. Let $i \in \{1, \ldots, n\}$. Due to Lemma \ref{lemma: approximated LDLH decomposition: p uniqueness}, a $j \in \{1, \ldots, n\}$ exists with $p_j = i$. Furthermore $q_{p_j} = j$ due to the definition of $q$. Thus, $p_{q_i} = p_{q_{p_j}} = p_{j} = i$ follows.
\flushright\qed
\end{proof}
A fast way to calculate $LDL^H$, using only $A$, $\omega$, $\delta$ and $p$, is pointed out in the next lemma.
\begin{lemma} \label{lemma: approximated LDLH decomposition: decomposition equation}
\begin{equation*}
(LDL^H)_{ii}
=
A_{p_i p_i} + \delta_{p_i}
\end{equation*}
and
\begin{equation*}
(LDL^H)_{i j}
=
A_{p_i p_j} \omega_{p_{\max\{i, j\}}}
\text{ if }
d_{\min\{i, j\}} \neq 0
\text{ or }
\omega_{p_{\max\{i, j\}}} = 0
\end{equation*}
for all $i, j \in \{1, \ldots, n\}$ with $i \neq j$.
\end{lemma}
\begin{proof}
First, some properties of the variable $p$ during the execution of the algorithm are proved. Call the for loop starting at \autoref{line: alg: approximated decomposition: main for loop} of the algorithm the main for loop. Let $p^{(0)}$ be the value of the variable $p$ directly before the main for loop and $p^{(i)}$ its value directly after its $i$-th iteration for each $i \in \{1, \ldots, n\}$. Its final value is denoted by $p$.
Let $i \in \{1, \ldots, n\}$. The variable $p$ is initialized so that $p^{(0)}_i = i$. After its initialization, the variable $p$ is only changed in \autoref{line: alg: approximated decomposition: swap}. Here the variables $p_i$ and $p_j$ are swapped for some $j \in \{i, \ldots, n\}$ in the $i$-th iteration of the main for loop. Hence
\begin{equation} \label{equ: decomposition equation: p permutation vector}
\{p_i^{(j)} ~|~ i \in \{1, \ldots, n\}\}
=
\{1, \ldots, n\}\
\text{ for all }
j \in \{1, \ldots, n\}
.
\end{equation}
Furthermore the variable $p_i$ is not changed anymore after the $i$-th iteration. Thus
\begin{equation} \label{equ: decomposition equation: p invariance}
p_i
=
p_i^{(j)}
\text{ for all }
i, j \in \{1, \ldots, n\}
\text{ with }
i \leq j
\end{equation}
and hence
\begin{equation} \label{equ: decomposition equation: p inequality}
p_i^{(i)} \neq p_j^{(j)}
\text{ for all }
i, j \in \{1, \ldots, n\}
\text{ with }
i \neq j.
\end{equation}
Next it is shown that all entries in the variables $d$, $\omega$ and $\delta$ are set once in the algorithm and are never changed after that. Hence, we do not need an index indicating the current iteration for these variables. Let $d$, $\omega$ and $\delta$ be the final values of the corresponding variables.
The value of $d_i$ is set in the $i$-th iteration of the main for loop at \autoref{line: alg: approximated decomposition: select d and omega} and nowhere else. The values of $\omega_{p_i}$ and $\delta_{p_i}$ are set in the $i$-th iteration of the main for loop at \autoref{line: alg: approximated decomposition: select d and omega} and \autoref{line: alg: approximated decomposition: delta definition} and due to \eqref{equ: decomposition equation: p inequality} nowhere else. Furthermore $\omega_i$ and $\delta_i$ are set due to equation \eqref{equ: decomposition equation: p invariance} and Lemma \ref{lemma: approximated LDLH decomposition: p uniqueness}. Hence, all entries in the variables $d$, $\omega$ and $\delta$ are set once in the algorithm and are never changed after that.
Next properties of the variable $L$ in the algorithm are proved which will lead to the result of this lemma. Denote with $L^{(i)}$ the value of the variable $L$ directly after the $i$-th iteration of the main for loop for all $i \in \{1, \ldots, n\}$. $L$ denotes its final value.
Let $i, j \in \{1, \ldots, n\}$ with $j < i$. The variable $L_{i j}$ is only changed in the $j$-th iteration at \autoref{line: alg: approximated decomposition: L definition} or \autoref{line: alg: approximated decomposition: L set zero}, in the $i$-th iteration at \autoref{line: alg: approximated decomposition: L modification omega} and possibly in the $k$-th iteration at \autoref{line: alg: approximated decomposition: swap} for $k \in \{j+1, \ldots, i\}$. Thus, after the $i$-th iteration it remains unchanged, which means
\begin{equation} \label{equ: decomposition equation: L invariance}
\begin{gathered}
L_{i j}
=
L_{i j}^{(k)}
\text{ for all }
i, j, k \in \{1, \ldots, n\}
\text{ with }
j < i \leq k
.
\end{gathered}
\end{equation}
In the $i$-th iteration, the variable $L_{i j}$ might only be changed in \autoref{line: alg: approximated decomposition: swap} and \autoref{line: alg: approximated decomposition: L modification omega}. In \autoref{line: alg: approximated decomposition: swap} the variable $L_{i j}$ is only changed if it is swapped with the variable $L_{k j}$ for some $k \in \{i+1, \ldots, n\}$. This is exactly the case if the variable $p_i$ is swapped with the variable $p_k$. This together with \autoref{line: alg: approximated decomposition: L modification omega} and equation \eqref{equ: decomposition equation: p permutation vector} implies
\begin{equation*}
\begin{gathered}
L_{i j}^{(i)}
=
\omega_{p_i^{(i)}} L_{k j}^{(i-1)}
\text{ if }
p_i^{(i)}
=
p_k^{(i-1)}
\\
\text{ for all }
i, j, k \in \{1, \ldots, n\}
\text{ with }
j < i
.
\end{gathered}
\end{equation*}
This results with equation \eqref{equ: decomposition equation: p invariance} and \eqref{equ: decomposition equation: L invariance} in
\begin{equation} \label{equ: decomposition equation: L change iteration i}
\begin{gathered}
L_{i j}
=
\omega_{p_i} L_{k j}^{(i-1)}
\text{ if }
p_i
=
p_k^{(i-1)}
\\
\text{ for all }
i, j, k \in \{1, \ldots, n\}
\text{ with }
j < i
.
\end{gathered}
\end{equation}
In the $k$-th iteration for all $k \in \{j+1, \ldots, i-1\}$, the variable $L_{i j}$ might only be changed in \autoref{line: alg: approximated decomposition: swap} due to a swap with the variable $L_{k j}$. This is exactly the case if the variable $p_i$ is swapped with the variable $p_k$. This together with equation \eqref{equ: decomposition equation: p permutation vector} implies
\begin{equation} \label{equ: decomposition equation: L change iteration l}
\begin{gathered}
L_{i j}^{(l)} = L_{k j}^{(l-1)}
\text{ if }
p_i^{(l)} = p_k^{(l-1)}
\\
\text{ for all }
i, j, k, l \in \{1, \ldots, n\}
\text{ with }
j < l < i
.
\end{gathered}
\end{equation}
Equation \eqref{equ: decomposition equation: L change iteration i} and \eqref{equ: decomposition equation: L change iteration l} result in
\begin{equation} \label{equ: decomposition equation: L changes}
\begin{gathered}
L_{i j}
=
\omega_{p_i} L_{k j}^{(l)}
\text{ if }
p_i
=
p_k^{(l)}
\\
\text{ for all }
i, j, k, l \in \{1, \ldots, n\}
\text{ with }
j \leq l < i
.
\end{gathered}
\end{equation}
Now with this preparatory work, the main statement of this lemma can be proved. $L_{j j} = 1$ and $L_{j k} = 0$ for all $k \in \{j+1, \ldots, n\}$ due to \autoref{line: alg: approximated decomposition: L set diagonal and upper triangle}. This implies
\begin{equation*}
(LDL^H)_{i j}
=
\sum\limits_{k=1}^{n} L_{i k} \overline{L}_{j k} d_k
=
L_{i j} d_j + \sum\limits_{k=1}^{j-1} L_{i k} \overline{L}_{j k} d_k
.
\end{equation*}
Due to equation \eqref{equ: decomposition equation: p permutation vector}, an $l \in \{1, \ldots, n\}$ exists with $p_i = p_l^{(j)}$. Hence, equations \eqref{equ: decomposition equation: L invariance} and \eqref{equ: decomposition equation: L changes} imply
\begin{equation*}
L_{i j} d_j + \sum\limits_{k=1}^{j-1} L_{i k} \overline{L}_{j k} d_k
=
L_{i j} d_j + \sum\limits_{k=1}^{j-1} L_{i k} \overline{L}_{j k}^{(j)} d_k
=
\omega_{p_i} \left( L_{i j}^{(j)} d_j + \sum\limits_{k=1}^{j-1} L_{i k}^{(j)} \overline{L}_{j k}^{(j)} d_k \right)
.
\end{equation*}
Thus
\begin{equation} \label{equ: decomposition equation: L equals omega sum}
(LDL^H)_{i j}
=
\omega_{p_i} \left( L_{i j}^{(j)} d_j + \sum\limits_{k=1}^{j-1} L_{i k}^{(j)} \overline{L}_{j k}^{(j)} d_k \right)
.
\end{equation}
Due to \autoref{line: alg: approximated decomposition: L definition}
\begin{equation*}
A_{p_l^{(j)} p_j^{(j)}}
=
L_{l j}^{(j)} d_j + \sum\limits_{k=1}^{j-1} L_{l k}^{(j)} \overline{L}_{j k}^{(j)} d_k
\text{ if }
d_j \neq 0
.
\end{equation*}
Furthermore $p_i = p_l^{(j)}$ by definition of $l$ and $p_j = p_j^{(j)}$ due to equation \eqref{equ: decomposition equation: p invariance}. This together with the previous two equations implies
\begin{equation*}
(LDL^H)_{i j}
=
\omega_{p_i} A_{p_i p_j}
\text{ if }
d_j \neq 0
.
\end{equation*}
Moreover with equation \eqref{equ: decomposition equation: L equals omega sum} it follows
\begin{equation*}
(LDL^H)_{i j}
=
\omega_{p_i} A_{p_i p_j}
\text{ if }
\omega_{p_i} = 0
.
\end{equation*}
$D$ is a real-valued diagonal matrix and thus Hermitian. Hence, the matrix $LDL^H$ is Hermitian as well. Since $A$ is also Hermitian,
\begin{equation} \label{equ: decomposition equation: offdiagonal values j i}
\begin{gathered}
(L D L^H)_{j i}
=
(\overline{L D L^H})_{i j}
=
\overline{\omega_{p_i} A_{p_i p_j}}
=
\omega_{p_i} A_{p_j p_i}
\text{ if }
d_j \neq 0
\text{ or }
\omega_{p_i} = 0
.
\end{gathered}
\end{equation}
The combination of the three previous equations results in
\begin{equation*}
\begin{gathered}
(LDL^H)_{i j}
=
A_{p_i p_j} \omega_{p_{\max\{i, j\}}}
\text{ if }
d_{\min\{i,j\}} \neq 0
\text{ or }
\omega_{p_{\max\{i,j\}}} = 0\\
\text{~for all~}
i, j \in \{1, \ldots, n\}
\text{~with~}
i \neq j
\end{gathered}
\end{equation*}
which is one part of the statement of this lemma.
Since $L_{i i} = 1$ and $L_{i k} = 0$ for all $k \in \{i+1, \ldots, n\}$ due to \autoref{line: alg: approximated decomposition: L set diagonal and upper triangle},
\begin{equation} \label{equ: decomposition equation: diagonal values: sum}
(L D L^H)_{i i}
=
\sum\limits_{j=1}^{n} |L_{i j}|^2 d_j
=
d_i + \sum\limits_{j=1}^{i-1} |L_{i j}|^2 d_j
.
\end{equation}
Define for every $k \in \{0, \ldots, i-1\}$ an $i_k \in \{1, \ldots, n\}$ with $p_i = p_{i_k}^{(k)}$ which exists uniquely due to equation \eqref{equ: decomposition equation: p permutation vector}. Then equation \eqref{equ: decomposition equation: L changes} implies
\begin{equation} \label{equ: decomposition equation: diagonal values: sum with omega}
\sum\limits_{j=1}^{i-1} |L_{i j}|^2 d_j
=
\omega_{p_i}^2 \sum\limits_{k=1}^{i-1} |L_{i_{k} k}^{(k)}|^2 d_{k}
.
\end{equation}
Denote with $\alpha^{(0)}$ the value of the variable $\alpha$ directly before the main for loop and with $\alpha^{(i)}$ its value directly after its $i$-th iteration for each $i \in \{1, \ldots, n\}$.
Define for every $k \in \{0, \ldots, i-1\}$ an $i_k \in \{k+1, \ldots, n\}$ with $p_i = p_{i_k}^{(k)}$ which exists uniquely due to equation \eqref{equ: decomposition equation: p permutation vector}. Then
\begin{equation*}
\alpha_{p_{i}}^{(i)}
=
\alpha_{p_{i_{i-1}}^{(i-1)}}^{(i-1)}
\end{equation*}
and
\begin{equation*}
\begin{gathered}
\alpha_{p_{i_k}^{(k)}}^{(k)}
=
\alpha_{p_{i_k}^{(k)}}^{(k-1)} + |L_{i_{k} k}^{(k)}|^2 d_{k}
\text{ for all }
k \in \{1, \ldots, i-1\}
\end{gathered}
\end{equation*}
due to \autoref{line: alg: approximated decomposition: add alpha}. Furthermore $\alpha_{p_{i_0}}^{(0)} = 0$ due to \autoref{line: alg: approximated decomposition: init alpha}. Hence
\begin{equation} \label{equ: decomposition equation: diagonal values: alpha}
\alpha_{p_{i}}^{(i)}
=
\sum\limits_{k=1}^{i-1} |L_{i_{k} k}^{(k)}|^2 d_{k}
.
\end{equation}
The combination of equation \eqref{equ: decomposition equation: diagonal values: sum}, \eqref{equ: decomposition equation: diagonal values: sum with omega} and \eqref{equ: decomposition equation: diagonal values: alpha} results in
\begin{equation*}
(L D L^H)_{i i}
=
d_i + \omega_{p_i}^2 \alpha_{p_{i}}^{(i)}
.
\end{equation*}
Due to \autoref{line: alg: approximated decomposition: delta definition} and equation \eqref{equ: decomposition equation: p invariance}, $d_i + \omega_{p_i}^2 \alpha_{p_i}^{(i)} = \delta_{p_i} + A_{p_i p_i}$ and thus
\begin{equation*}
(L D L^H)_{i i}
=
\delta_{p_i} + A_{p_i p_i}
\end{equation*}
which is the other part of the statement of this lemma.
\flushright\qed
\end{proof}
The next lemma shows how $B$ can be calculated using only $A$, $\delta$, $\omega$ and $p$.
\begin{lemma} \label{lemma: approximated LDLH decomposition: B equals A modified}
\begin{equation*}
B_{i i}
=
A_{i i} + \delta_i
\end{equation*}
and
\begin{equation*}
B_{i j}
=
A_{i j} \omega_{b(i,j)}
\text{ if }
d_{q_{a(i,j)}} \neq 0
\text{ or }
\omega_{b(i,j)} = 0
\end{equation*}
where
\begin{equation*}
\begin{gathered}
q_{p_i} := i,~
a(i, j) :=
\begin{cases}
j \text{ if } q_i > q_j\\
i \text{ else}
\end{cases},~
b(i, j) :=
\begin{cases}
i \text{ if } q_i> q_j\\
j \text{ else}
\end{cases}
\end{gathered}
\end{equation*}
for all $i, j \in \{1, \ldots, n\}$ with $i \neq j$.
\end{lemma}
\begin{proof}
First of all, $q$ is well defined due to Lemma \ref{lemma: p q i = i}. Let $i \in \{1, \ldots, n\}$. In \nameref{alg: approximated matrix}, $B_{i i}$ is set only at \autoref{line: alg: approximated matrix: set diagonal} in the $i$-th iteration of the outer for loop at \autoref{line: alg: approximated matrix: outer for loop}. Due to this line $B_{i i} = A_{i i} + \delta_i$ and thus
\begin{equation*}
B_{i i}
=
A_{i i} + \delta_i
\text{ for all }
i \in \{1, \ldots, n\}
\end{equation*}
Let $j \in \{i+1, \ldots, n\}$. In \nameref{alg: approximated matrix}, the variable $B_{i j}$ is set only in \autoref{line: alg: approximated matrix: set below diagonal with omega} or \autoref{line: alg: approximated matrix: set below diagonal without omega} in the $i$-th iteration of the outer for loop at \autoref{line: alg: approximated matrix: outer for loop} and the $j$-th iteration of the inner for loop at \autoref{line: alg: approximated matrix: inner for loop}. At this iteration the variables $a$ and $b$ have the values $a(i, j)$ and $b(i, j)$, respectively, due to \autoref{line: alg: approximated matrix: set a and b case 1} and \autoref{line: alg: approximated matrix: set a and b case 2}. Hence, due to \autoref{line: alg: approximated matrix: set below diagonal with omega},
\begin{equation*}
\begin{gathered}
B_{i j}
=
A_{i j} \omega_{b(i,j)}
\text{ if }
d_{q_{a(i,j)}} \neq 0
\text{ or }
\omega_{b(i,j)} = 0\\
\text{ for all }
i, j \in \{1, \ldots, n\}
\text{ with }
i < j
.
\end{gathered}
\end{equation*}
The variable $B_{j i}$ is set only in \autoref{line: alg: approximated matrix: set obove diagonal} so that $B_{j i} = \overline{B}_{i j}$. Hence, the previous equation implies
\begin{equation*}
\begin{gathered}
B_{j i}
=
\overline{B}_{i j}
=
\overline{A_{i j} \omega_{b(i,j)}}
=
\overline{A}_{i j} \omega_{b(i,j)}
=
A_{j i} \omega_{b(j, i)}\\
\text{ if }
d_{q_{a(j, i)}} \neq 0
\text{ or }
\omega_{b(j, i)} = 0
\text{ for all }
i, j \in \{1, \ldots, n\}
\text{ with }
i < j
.
\end{gathered}
\end{equation*}
\flushright\qed
\end{proof}
Next the main theorem of this subsection emphasizes the connection between \nameref{alg: approximated matrix} and \nameref{alg: approximated decomposition}.
\begin{theorem} \label{theorem: B equals PT L D LH P}
\begin{equation*}
B
=
P^T L D L^H P
.
\end{equation*}
\end{theorem}
\begin{proof}
Define
\begin{equation*}
q_{p_i} := i
\text{ for all }
i \in \{1, \ldots, n\}
.
\end{equation*}
Due to Lemma \ref{lemma: p q i = i}, $q$ is well defined and
\begin{equation*}
p_{q_i}
=
i
\text{ for all }
i \in \{1, \ldots, n\}
.
\end{equation*}
Let $i ,j \in \{1, \ldots, n\}$ with $i < j$. Define $a$ and $b$ so that
\begin{equation*}
q_a = \min\{q_i, q_j\}
\text{ and }
q_b = \max\{q_i, q_j\}
.
\end{equation*}
This is well defined due to Lemma \ref{lemma: approximated LDLH decomposition: p uniqueness}.
Due to \autoref{line: alg: approximated matrix: set below diagonal without omega} of \nameref{alg: approximated matrix} and the definition of the variables $a$ and $b$ in the algorithm,
\begin{equation*}
\begin{gathered}
B_{i j}
=
\sum\limits_{k = 1}^{q_a - 1} L_{q_i k} d_k \overline{L}_{q_j k}
\text{ if }
d_{q_a} = 0
\text{ and }
\omega_b \neq 0
.
\end{gathered}
\end{equation*}
Since $L$ is a lower triangular matrix and due to the definition of $q_a$,
\begin{equation*}
L_{q_i k} = 0
\text{ or }
L_{q_j k} = 0
\text{ for all }
k \in \{q_a + 1, \ldots, n\}
.
\end{equation*}
Thus,
\begin{equation*}
\begin{gathered}
B_{i j}
=
\sum\limits_{k = 1}^{n} L_{q_i k} d_k \overline{L}_{q_j k}
=
(L D L^H)_{q_i q_j}
\text{ if }
d_{q_a} = 0
\text{ and }
\omega_b \neq 0
.
\end{gathered}
\end{equation*}
Furthermore Lemma \ref{lemma: approximated LDLH decomposition: decomposition equation} and \ref{lemma: approximated LDLH decomposition: B equals A modified} and the definition of $q$ imply
\begin{equation*}
\begin{gathered}
B_{i j}
=
A_{i j} \omega_b
=
A_{p_{q_i} p_{q_j}} \omega_{p_{q_b}}
=
(L D L^H)_{q_i q_j}
\text{ if }
d_{q_a} \neq 0
\text{ or }
\omega_b = 0
.
\end{gathered}
\end{equation*}
Due to \autoref{line: alg: approximated matrix: set obove diagonal} of \nameref{alg: approximated matrix},
\begin{equation*}
\begin{gathered}
B_{j i}
=
\overline{B}_{i j}
=
(\overline{L D L^H})_{q_i q_j}
=
(L D L^H)_{q_j q_i}
\end{gathered}
\end{equation*}
Lemma \ref{lemma: approximated LDLH decomposition: decomposition equation} and Lemma \ref{lemma: approximated LDLH decomposition: B equals A modified} imply
\begin{equation*}
B_{i i}
=
A_{i i} + \delta_{i}
=
A_{p_{q_i} p_{q_i}} + \delta_{p_{q_i}}
=
(L D L^H)_{q_i q_i}
.
\end{equation*}
Thus
\begin{equation*}
\begin{gathered}
B_{i j}
=
(L D L^H)_{q_i q_j}
\text{ for all }
i, j \in \{1, \ldots, n\}
.
\end{gathered}
\end{equation*}
The definition of $P$ implies
\begin{equation*}
P_{q_i i} = 1
\text{ and }
P_{q_j i} = 0
\text{ for all }
i, j \in \{1, \ldots, n\}
\text{ with }
i \neq j
.
\end{equation*}
Hence,
\begin{equation*}
\begin{gathered}
(L D L^H)_{q_i q_j}
=
\sum\limits_{k=1}^n \sum\limits_{l=1}^n
P_{k i} (L D L^H)_{k l} P_{l j}
=
(P^T L D L^H P)_{i j}
\\
\text{ for all }
i, j \in \{1, \ldots, n\}
.
\end{gathered}
\end{equation*}
\flushright\qed
\end{proof}
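In the unpivoted setting of the sketch given after Algorithm \ref{alg: approximated matrix}, $P$ is the identity, and the representation above together with Lemma \ref{lemma: approximated LDLH decomposition: decomposition equation} can be checked numerically (illustrative only; \texttt{modified\_ldl\_sketch} is our sketch, all its pivots satisfy $d_i \geq \epsilon > 0$ so the case distinction of the lemma is not needed, and a moderate $\epsilon$ is used to keep the check numerically robust):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                  # indefinite symmetric test matrix
L, d, omega, delta = modified_ldl_sketch(A, eps=1e-2)
B = L @ np.diag(d) @ L.T           # P = I, so B = L D L^T
n = A.shape[0]
for i in range(n):
    assert np.isclose(B[i, i], A[i, i] + delta[i])
    for j in range(n):
        if i != j:
            assert np.isclose(B[i, j], A[i, j] * omega[max(i, j)])
\end{verbatim}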
\subsection{Positive semidefinite approximation} \label{subsec: algorithms: positive semidefinite}
\nameref{alg: approximated matrix} can be forced to calculate positive definite or positive semidefinite matrices using $l > 0$ or $l \geq 0$, respectively, as shown in Theorem \ref{theorem: algorithms: positive semidefinite}. Thus, \nameref{alg: approximated matrix} meets objective \ref{objective: positive semidefinite} if $l \geq 0$ is chosen. To prove this theorem, it is first shown that the values of $d$ are bounded below by $l$. For subsequent proofs, it is also shown that the values of $d$ are bounded above by $u$ and $y$.
\begin{lemma} \label{lemma: approximated LDLH decomposition: d bounds}
\begin{equation*}
d_i
\in
[l, u] \cap \mathbb{R}
\text{ and }
|d_i| \notin (0, \epsilon)
\end{equation*}
and if $l \geq 0$,
\begin{equation*}
d_i \leq y_{p_i}
\end{equation*}
for all $i \in \{1, \ldots, n\}$.
\end{lemma}
\begin{proof}
Let $i \in \{1, \ldots, n\}$. In \nameref{alg: approximated decomposition} the variable $d$ is only changed in \autoref{line: alg: approximated decomposition: select d and omega}. Here $d_i$ is chosen at the $i$-th iteration of the surrounding for loop so that $d_i \in [l, u] \cap \mathbb{R}$ and $|d_i| \notin (0, \epsilon)$. Apart from that, the variable $d_i$ is not set or changed anymore, so
\begin{equation*}
d_i
\in
[l, u] \cap \mathbb{R}
\text{ and }
|d_i| \notin (0, \epsilon)
\text{ for all }
i \in \{1, \ldots, n\}
.
\end{equation*}
The variable $\alpha$ in \nameref{alg: approximated decomposition} is only changed in \autoref{line: alg: approximated decomposition: init alpha} and \autoref{line: alg: approximated decomposition: add alpha}. Due to these lines and the previous equation,
\begin{equation*}
\alpha_i \geq 0
\text{ if }
l \geq 0
.
\end{equation*}
In \autoref{line: alg: approximated decomposition: select d and omega}, $d_i$ is also chosen so that $d_i + \omega_{p_i}^2 \alpha_{p_i} \leq y_{p_i}$. This implies, together with the previous equation,
\begin{equation*}
d_i \leq y_{p_i}
\text{ if }
l \geq 0
\text{ for all }
i \in \{1, \ldots, n\}
.
\end{equation*}
\flushright\qed
\end{proof}
\begin{theorem} \label{theorem: algorithms: positive semidefinite}
$B$ is positive semidefinite if $l \geq 0$ and positive definite if $l > 0$.
\end{theorem}
\begin{proof}
Theorem \ref{theorem: B equals PT L D LH P} implies
\begin{equation*}
z^H B z
=
z^H P^T L D L^H P z
=
(L^H P z)^H D (L^H P z)
\end{equation*}
for all $z \in \mathbb{C}^n$. Moreover $L$ and $P$ are invertible. Hence, $B$ is positive semidefinite if $D_{i i} = d_i \geq 0$ and positive definite if $D_{i i} = d_i > 0$ for all $i \in \{1, \ldots, n\}$. Thus, Lemma \ref{lemma: approximated LDLH decomposition: d bounds} implies that $B$ is positive semidefinite if $l \geq 0$ and positive definite if $l > 0$.
\end{proof}
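For the unpivoted sketch given after Algorithm \ref{alg: approximated matrix}, where effectively $l = \epsilon > 0$, this behaviour can be observed directly (illustrative only; \texttt{modified\_ldl\_sketch} is our sketch, and the Cholesky call simply re-checks positive definiteness as in the introduction):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2
L, d, omega, delta = modified_ldl_sketch(A, eps=0.1)
B = L @ np.diag(d) @ L.T
assert np.all(d >= 0.1)   # every pivot respects the lower bound
np.linalg.cholesky(B)     # succeeds, so B is positive definite
\end{verbatim}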
\subsection{Diagonal values} \label{subsec: algorithms: diagonal values}
\nameref{alg: approximated matrix} allows lower and upper bounds for the diagonal values of $B$ to be defined using $x$ and $y$, as proved in Theorem \ref{theorem: algorithms: approximated matrix: diagonal values}. This allows the diagonal values of $B$ to be prescribed by setting both bounds to the desired diagonal values. Thus, \nameref{alg: approximated matrix} meets objective \ref{objective: diagonal values} if the parameters $x$ and $y$ are selected appropriately.
It should be taken into account that the algorithm requires $x_i \leq u$ and $l \leq y_i$ for all $i \in \{1, \ldots, n\}$. Hence, if positive semidefinite approximations are required, only nonnegative values can be used as predefined diagonal values. However, this is not an actual restriction, since positive semidefinite matrices always have nonnegative diagonal values.
\begin{theorem} \label{theorem: algorithms: approximated matrix: diagonal values}
\begin{equation*}
x_i
\leq
B_{i i}
\leq
y_i
\text{ for all }
i \in \{1, \ldots, n\}
.
\end{equation*}
\end{theorem}
\begin{proof}
In \nameref{alg: approximated matrix}, the algorithm \nameref{alg: approximated decomposition} is called first to calculate $L, d, p, \omega$ and $\delta$. Let $i \in \{1, \ldots, n\}$. At the $i$-th iteration of the outer for loop in \nameref{alg: approximated decomposition},
\begin{equation*}
d_i + \omega_{p_i}^2 \alpha_{p_i}
\in
[x_{p_i}, y_{p_i}]
\end{equation*}
due to \autoref{line: alg: approximated decomposition: select d and omega} and
\begin{equation*}
\delta_{p_i}
=
d_i + \omega_{p_i}^2 \alpha_{p_i} - A_{p_i p_i}
\end{equation*}
due to \autoref{line: alg: approximated decomposition: delta definition} and thus also
\begin{equation*}
A_{p_i p_i} + \delta_{p_i}
\in
[x_{p_i}, y_{p_i}]
.
\end{equation*}
The variables $p_i$ and $\delta_{p_i}$ are not changed anymore after that. Thus
\begin{equation*}
A_{p_i p_i} + \delta_{p_i}
\in
[x_{p_i}, y_{p_i}]
\text{ for all }
i \in \{1, \ldots, n\}
\end{equation*}
at the end of the algorithm. Due to Lemma \ref{lemma: approximated LDLH decomposition: p uniqueness},
\begin{equation*}
\{p_i ~|~ i \in \{1, \ldots, n\}\} = \{1, \ldots, n\}
\end{equation*}
and thus
\begin{equation*}
A_{i i} + \delta_{i}
\in
[x_{i}, y_{i}]
.
\end{equation*}
Lemma \ref{lemma: approximated LDLH decomposition: B equals A modified} states that
\begin{equation*}
B_{i i}
=
A_{i i} + \delta_i
\end{equation*}
and thus
\begin{equation*}
x_{i} \leq B_{i i} \leq y_{i}
.
\end{equation*}
\flushright\qed
\end{proof}
\subsection{Condition number} \label{subsec: algorithms: condition number}
The condition number of $B$ can be controlled by $l, u$ and $y$, as shown in Theorem \ref{theorem: algorithms: condition number}. Hence, \nameref{alg: approximated matrix} meets objective \ref{objective: well-conditioned} with suitably chosen parameters.
\begin{theorem} \label{theorem: algorithms: condition number}
Let $l > 0$. Then
\begin{equation*}
\begin{gathered}
\kappa_2(L)
\leq
2 \left( \frac{a}{l} \right)^\frac{n}{2},~
\kappa_2(D)
\leq
\frac{b}{l}
\text{ and }
\kappa_2(B)
\leq
4 \frac{a^n b}{l^{n+1}}\\
\text{with }
a := \frac{1}{n} \sum\limits_{i=1}^{n} y_i
\text{ and }
b := \min\{u, \max_{i=1,\ldots,n} y_i\}
.
\end{gathered}
\end{equation*}
\end{theorem}
\begin{proof}
$P$ is a permutation matrix and thus $\trace(P B P^T) = \trace(B)$. Furthermore, $P B P^T$ is positive definite because a permutation matrix is invertible and $B$ is positive definite due to Theorem \ref{theorem: algorithms: positive semidefinite}. Moreover, $\kappa_2(P B P^T) = \kappa_2(B)$ because a permutation matrix is also orthogonal. Thus, Theorem \ref{theorem: algorithms: approximated matrix: diagonal values} implies
\begin{equation*}
\frac{\trace(P B P^T)}{n}
=
\frac{\trace(B)}{n}
\leq
\frac{1}{n} \sum\limits_{i=1}^{n} y_i
=
a
.
\end{equation*}
Theorem \ref{theorem: B equals PT L D LH P} states that
\begin{equation*}
P B P^T
=
L D L^H
.
\end{equation*}
Lemma \ref{lemma: approximated LDLH decomposition: d bounds} implies
\begin{equation*}
l
\leq
D_{i i}
\leq
\min\{u, y_{p_i}\}
\leq
b
\text{ for all }
i \in \{1, \ldots, n\}
\end{equation*}
since $l \geq 0$. Hence, Theorem \ref{theorem: condition number inequality: positive definite} in the appendix implies
\begin{equation*}
\kappa_2(L)
\leq
2 \left( \frac{a}{l} \right)^\frac{n}{2},~
\kappa_2(D)
\leq
\frac{b}{l}
\text{ and }
\kappa_2(B)
\leq
4 \frac{a^n b}{l^{n+1}}
.
\end{equation*}
\flushright\qed
\end{proof}
\subsection{Approximation error} \label{subsec: algorithms: approximation error}
The approximation error $\| B - A \|$ can be expressed using $A$, $\delta$, $\omega$ and $p$ as shown in the next theorem where it is also proved that the approximation error is bounded. For that, it is first demonstrated that $\delta$ is bounded.
\begin{lemma} \label{lemma: algorithms: bound delta}
Let $l \geq 0$. Then
\begin{equation*}
\begin{gathered}
|\delta_i| \leq a + b
\text{ for all }
i \in \{1, \ldots, n\}\\
\text{ with }
a := \max\limits_{i=1,\ldots,n} y_i
\text{ and }
b := \max\limits_{i=1,\ldots,n} |A_{ii}|
.
\end{gathered}
\end{equation*}
\end{lemma}
\begin{proof}
Let $i \in \{1, \ldots, n\}$. $B$ is positive semidefinite due to Theorem \ref{theorem: algorithms: positive semidefinite} since $l \geq 0$. Hence,
\begin{equation*}
0
\leq
B_{i i}
\text{ and }
B_{i i}
\leq
y_i
\leq
a
\end{equation*}
due to Theorem \ref{theorem: algorithms: approximated matrix: diagonal values}. Furthermore
\begin{equation*}
B_{i i}
=
A_{i i} + \delta_i
\end{equation*}
due to Lemma \ref{lemma: approximated LDLH decomposition: B equals A modified}. Thus
\begin{equation*}
| \delta_i |
=
|B_{i i} - A_{i i}|
\leq
|B_{i i}| + |A_{i i}|
\leq
a + b
.
\end{equation*}
\flushright\qed
\end{proof}
\begin{theorem} \label{theorem: algorithms: approximation error}
Assume that $l > 0$, or otherwise that $d_i = 0$ implies $\omega_{p_j} = 0$ for all $i, j \in \{1, \ldots, n\}$ with $j \geq i$.
Define $E := B - A$. Then
\begin{equation*}
\begin{aligned}
\|E\|_2
&\leq
\|E\|_1
=
\|E\|_\infty
\\&=
\max\limits_{i = 1, \ldots, n} \left(
|\delta_{p_i}| + (1 - \omega_{p_i}) \sum\limits_{j=1}^{i-1} |A_{p_i p_j}| + \sum\limits_{j=i+1}^n (1 - \omega_{p_j}) |A_{p_i p_j}|
\right)
\\&\leq
a + b + (n - 1) c
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\|E\|_F^2
&=
\sum\limits_{i=1}^n
\left(
\delta_{p_i}^2
+
2 (1 - \omega_{p_i})^2 \sum\limits_{j=1}^{i-1} |A_{p_i p_j}|^2
\right)
\\&\leq
n ((a + b)^2 + (n - 1) c^2)
\end{aligned}
\end{equation*}
with
\begin{equation*}
a := \max\limits_{i=1,\ldots,n} y_i,
b := \max\limits_{i=1,\ldots,n} |A_{ii}|
\text{ and }
c := \max\limits_{i,j=1,\ldots,n; i \neq j} |A_{ij}|
.
\end{equation*}
\end{theorem}
\begin{proof}
Let $i, j \in \{1, \ldots, n\}$. Lemma \ref{lemma: approximated LDLH decomposition: decomposition equation} and Theorem \ref{theorem: B equals PT L D LH P} imply
\begin{equation*}
\begin{aligned}
B_{p_i p_j}
=
\begin{cases}
A_{p_i p_j} \omega_{p_{\max\{i, j\}}} &\text{ if } i \neq j\\
A_{p_i p_i} + \delta_{p_i} &\text{ else}
\end{cases}
.
\end{aligned}
\end{equation*}
Thus,
\begin{equation*}
\begin{aligned}
E_{p_i p_j}
=
\begin{cases}
(\omega_{p_{\max\{i, j\}}} - 1) A_{p_i p_j} &\text{ if } i \neq j\\
\delta_{p_i} &\text{ else}
\end{cases}
.
\end{aligned}
\end{equation*}
Furthermore
\begin{equation*}
\{p_i ~|~ i \in \{1, \ldots, n\}\}
=
\{1, \ldots, n\}
\end{equation*}
due to Lemma \ref{lemma: approximated LDLH decomposition: p uniqueness}.
Hence, $E$ is Hermitian because $A$ is Hermitian. Thus, the properties of the norms imply
\begin{equation*}
\|E\|_2
\leq
\|E\|_1
=
\|E\|_\infty
.
\end{equation*}
Moreover
\begin{equation*}
\begin{aligned}
\|E\|_\infty
&=
\max\limits_{i = 1, \ldots, n} \sum\limits_{j = 1}^{n} |E_{p_i p_j}|
=
\max\limits_{i = 1, \ldots, n} \left(
|E_{p_i p_i}| + \sum\limits_{j=1}^{i-1} |E_{p_i p_j}| + \sum\limits_{j=i+1}^n |E_{p_i p_j}|
\right)
\\&=
\max\limits_{i = 1, \ldots, n} \left(
|\delta_{p_i}| + (1 - \omega_{p_i}) \sum\limits_{j=1}^{i-1} |A_{p_i p_j}| + \sum\limits_{j=i+1}^n (1 - \omega_{p_j}) |A_{p_i p_j}|
\right)
\\&\leq
a + b + (n - 1) c
\end{aligned}
\end{equation*}
because $|\delta_i| \leq a + b$ and $\omega_i \in [0, 1]$
due to Lemma \ref{lemma: algorithms: bound delta} and line \ref{line: alg: approximated decomposition: select d and omega} in \nameref{alg: approximated decomposition}. Additionally
\begin{align*}
\|E\|_F^2
&=
\sum\limits_{i=1}^n
\left(
|E_{p_i p_i}|^2
+
2 \sum\limits_{j=1}^{i-1} |E_{p_i p_j}|^2
\right)
\\&=
\sum\limits_{i=1}^n
\left(
\delta_{p_i}^2
+
2 (1- \omega_{p_i})^2
\sum\limits_{j=1}^{i-1} | A_{p_i p_j}|^2
\right)
\\&\leq
n (a + b)^2 + n (n - 1) c^2
.
\end{align*}
\flushright\qed
\end{proof}
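For the unpivoted sketch given after Algorithm \ref{alg: approximated matrix} (so $p$ is the identity permutation and $l > 0$ holds with $l = \epsilon$), the Frobenius norm identity above can be verified numerically as follows (illustrative only; \texttt{modified\_ldl\_sketch} is our sketch):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((7, 7))
A = (M + M.T) / 2
L, d, omega, delta = modified_ldl_sketch(A, eps=1e-2)
B = L @ np.diag(d) @ L.T
lhs = np.linalg.norm(B - A, 'fro') ** 2
rhs = sum(delta[i] ** 2
          + 2 * (1 - omega[i]) ** 2 * np.sum(A[i, :i] ** 2)
          for i in range(A.shape[0]))
assert np.isclose(lhs, rhs)
\end{verbatim}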
\subsection{Choice of $\omega$ and $d$} \label{subsec: algorithms: choice of d and omega}
The choice of $\omega$ and $d$ in \autoref{line: alg: approximated decomposition: select d and omega} in \nameref{alg: approximated decomposition} is arbitrary apart from the requirement that they be feasible. However, their choice is crucial for the approximation error, due to Theorem \ref{theorem: algorithms: approximation error} and line \ref{line: alg: approximated decomposition: delta definition} of \nameref{alg: approximated decomposition}.
Based on this theorem, the algorithm \nameref{alg: minimal change}, presented in Algorithm \ref{alg: minimal change}, is derived, which chooses $\omega$ and $d$ so that in each iteration the additional approximation error in the Frobenius norm is minimized. This does not guarantee that the overall approximation error is minimized, but it still results in a small approximation error, as the numerical tests in Subsection \ref{subsec: numerical comparison} have shown. Hence, \nameref{alg: approximated matrix} meets objective \ref{objective: small error} when using \nameref{alg: minimal change}. It can be incorporated by replacing \autoref{line: alg: approximated decomposition: select d and omega} in \nameref{alg: approximated decomposition} with the code snippet \nameref{alg: approximated decomposition extension: minimal change} presented in Algorithm \ref{alg: approximated decomposition extension: minimal change}.
\nameref{alg: minimal change} was designed so that the number of basic operations and the amount of memory it needs are negligible compared to those needed by \nameref{alg: approximated matrix}, as discussed in Subsection \ref{subsec: algorithms: complexity}. This makes it possible to meet objectives \ref{objective: fast computation} and \ref{objective: low memory} while using \nameref{alg: minimal change}.
It also ensures that $B = A$ if $A$ already meets the requirements on $B$. In detail, these are $x_i \leq A_{ii} \leq y_i$ and $\max \{l, \epsilon\} \leq D_{ii} \leq u$ for all $i \in \{1, \ldots, n\}$, where $D$ is the diagonal matrix of the $LDL^H$ decomposition of $P A P^T$.
If several pairs $(d, \omega)$ minimize the additional approximation error, the one with the largest $d$ is chosen in \nameref{alg: minimal change}. This results in smaller absolute values in $L$, which reduces the condition number of $B$, as shown in the proof of Theorem \ref{theorem: condition number inequality: positive definite}. Moreover, the numerical stability of the algorithms is increased because a division by $d$ is part of the algorithms.
\begin{algorithm}
\normalsize
\caption{MINIMAL\_CHANGE}
\label{alg: minimal change}
\begin{algorithmic}[1]
\INPUT
\begin{itemize}[label={$\cdot$}]
\item $x \in \mathbb{R} \cup \{-\infty\}$, $y, u \in \mathbb{R} \cup \{\infty\}$, $l, \epsilon, \alpha, \beta, \gamma \in \mathbb{R}$ with
$l, \alpha, \beta \geq 0$, $\epsilon > 0$,
$\max\{l, \epsilon, x\} \leq \min\{u, y\}$ and
$\beta = 0 \Rightarrow \alpha = 0$
\end{itemize}
\OUTPUT
\begin{itemize}[label={$\cdot$}]
\item $d, \omega \in \mathbb{R}$
\end{itemize}
\Function {minimal\_change}{$x, y, l, u, \epsilon, \alpha, \beta, \gamma$}
\If {$\max\{l, \epsilon, x - \alpha\} \leq \gamma - \alpha \leq \min\{u, y - \alpha\}$} \label{line: alg: select omega and d: case inner value: if condition}
\Return $(\gamma - \alpha, 1)$ \label{line: alg: select omega and d: case inner value: return values}
\EndIf
\State $C \gets \emptyset$
\If {$\max\{l, \epsilon, x - \alpha\} \leq \min\{u, y - \alpha\}$}
\State $C \gets \{(\min\{\max\{l, \epsilon, x - \alpha, \gamma - \alpha\}, u, y - \alpha\}, 1) \}$ \label{line: alg: select omega and d: add candidate: case omega equals 1}
\EndIf
\If {$\alpha \neq 0$}
\For {$d \in (\{\max\{l, \epsilon\}\} \cap [x - \alpha, \infty)) \cup (\{u\} \cap (-\infty, y])$}
\For {$\omega \in \mathbb{R} \text{ with } 2 \alpha^2 \omega^3 + ( 2 \alpha (d - \gamma) + \beta ) \omega - \beta = 0$}
\State $\omega \gets
\min \{
\max \{
\omega,
\sqrt{\tfrac{\max\{x - d, 0\}}{\alpha}}
\},
\sqrt{\tfrac{y - d}{\alpha}},
1
\}$
\State $C \gets C \cup \{(d, \omega)\}$ \label{line: alg: select omega and d: add candidate: omega inner}
\EndFor
\EndFor
\EndIf
\If {$l = 0$ and $x \leq 0$ and $2 \gamma \leq \epsilon$}
\State $C \gets C \cup \{(0, 0)\}$ \label{line: alg: select omega and d: add candidate: (0, 0)}
\EndIf
\Return $(d, \omega) \in C$ with smallest $((d + \omega^2 \alpha - \gamma)^2 + (\omega - 1)^2 \beta, -d, \omega)$ in lexicographical order \label{line: alg: select omega and d: return minimal candidate}
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\normalsize
\caption{CHOOSE\_$d$\_$\omega$}
\label{alg: approximated decomposition extension: minimal change}
\begin{algorithmic}[1]
\Indent
\Indent
\For {$k \gets i, \ldots, n$}
\If {$i = 1$}
\State $\beta_{p_k} \gets 0$
\Else
\State $\beta_{p_k} \gets \beta_{p_k} + 2 |A_{p_k p_{i- 1}}|^2$
\EndIf
\EndFor
\State $(d_{p_i}, \omega_{p_i}) \gets $\Call{minimal\_change}{$x_{p_i}, y_{p_i}, l, u, \epsilon, \alpha_{p_i}, \beta_{p_i}, A_{p_i p_i}$} \label{line: alg: approximated decomposition extension: minimal change: select minimum}
\EndIndent
\EndIndent
\end{algorithmic}
\end{algorithm}
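For small inputs with finite bounds, the selection problem solved by \nameref{alg: minimal change} can be sanity-checked by brute force (a sketch assuming NumPy; the grid search below only illustrates the objective $f$ from Theorem \ref{theorem: optimal omega and d} and is not part of the algorithm):
\begin{verbatim}
import numpy as np

def minimal_change_grid(x, y, l, u, eps, alpha, beta, gamma, m=401):
    # brute-force minimization of
    #   f(d, w) = (d + w^2*alpha - gamma)^2 + (w - 1)^2*beta
    # over a grid of feasible (d, w); assumes finite bounds and a
    # nonempty feasible set
    ds = np.linspace(max(l, eps), u, m)
    ws = np.linspace(0.0, 1.0, m)
    D, W = np.meshgrid(ds, ws)
    diag = D + W ** 2 * alpha
    f = (diag - gamma) ** 2 + (W - 1.0) ** 2 * beta
    f = np.where((diag < x) | (diag > y), np.inf, f)  # mask infeasible pairs
    k = np.argmin(f)
    return D.flat[k], W.flat[k]

# example: diagonal entry restricted to [0.5, 1.5], pivot bounds [0.1, 10]
print(minimal_change_grid(x=0.5, y=1.5, l=0.1, u=10.0, eps=1e-8,
                          alpha=2.0, beta=3.0, gamma=1.0))
\end{verbatim}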
The next theorem states that \nameref{alg: minimal change} chooses feasible $d$ and $\omega$ which minimize in each iteration the additional approximation error.
\begin{theorem} \label{theorem: optimal omega and d}
Let
\begin{equation*}
d, \omega := \text{\normalfont \nameref{alg: minimal change}}(x, y, l, u, \epsilon, \alpha, \beta, \gamma)
\end{equation*}
where $(x, y, l, u, \epsilon, \alpha, \beta, \gamma)$ is some valid input for the algorithm. Let
\begin{equation*}
\Phi_* :=
\{
(d, \omega)
~|~
d \in [\max\{l, \epsilon\}, u],
\omega \in [0, 1],
d + \omega^2 \alpha \in [x, y]
\}
,
\end{equation*}
\begin{equation*}
\Phi_0 :=
\begin{cases}
\{(0, 0)\} \text{ if } \max\{l, x\} \leq 0\\
\emptyset \text{ else}
\end{cases}
,
\Phi := \Phi_* \cup \Phi_0
\end{equation*}
and
\begin{equation*}
\Psi :=
\{
(d, \omega) \in \Phi
~|~
f(d,\omega)
=
\min_{(\hat{d}, \hat{\omega}) \in \Phi} f(\hat{d}, \hat{\omega})
\}
\end{equation*}
with $f: \mathbb{R}^2 \to \mathbb{R}, (d, \omega) \mapsto (d + \omega^2 \alpha - \gamma)^2 + (\omega - 1)^2 \beta$. Then $(d, \omega) \in \Psi$.
\end{theorem}
\begin{proof}
$\Phi$ is compact and $f$ is continuous. Thus, $f$ has a minimum on $\Phi$ due to Weierstrass's theorem \cite[Theorem 4.16]{Rudin1976}. Hence, $\Psi \neq \emptyset$ and thus,
\begin{equation} \label{eq: theorem: optimal omega and d: psi nonempty sets}
\Psi \cap \Phi_*^\circ \neq \emptyset
\text{ or }
\Psi \cap \partial \Phi_* \neq \emptyset
\text{ or }
\Psi \cap \Phi_0 \neq \emptyset
\end{equation}
where $\Phi_*^\circ$ denotes the interior of $\Phi_*$ and $\partial \Phi_*$ its boundary. Next these three cases are considered.
First consider the case that $\Psi \cap \Phi_*^\circ \neq \emptyset$. Then
\begin{equation*}
\nabla f(d, \omega) = 0
\text{ for all }
(d, \omega) \in \Psi \cap \Phi_*^\circ
\end{equation*}
due to \cite[Theorem 12.3]{Nocedal2006}. Furthermore
\begin{equation*}
\nabla f(d, \omega)
=
\left(
\begin{array}{c}
2 (d + \omega^2 \alpha - \gamma)\\
4 \alpha \omega (d + \omega^2 \alpha - \gamma) + 2 \beta (\omega - 1)
\end{array}
\right)
\end{equation*}
for all $(d, \omega) \in \Phi_*^\circ$. This implies
\begin{equation*}
\omega = 1
\text{ and }
d = \gamma - \alpha
\text{ for all }
(d, \omega) \in \Psi \cap \Phi_*^\circ
\text{ if }
\beta \neq 0
.
\end{equation*}
If $\beta = 0$, the algorithm requires $\alpha = 0$, which implies
\begin{equation*}
(\gamma - \alpha, 1) \in \Psi
\text{ if }
\Psi \cap \Phi_*^\circ \neq \emptyset
\text{ and }
\beta = 0
.
\end{equation*}
Hence,
\begin{equation*}
(\gamma - \alpha, 1) \in \Psi
\text{ if }
\Psi \cap \Phi_*^\circ \neq \emptyset
.
\end{equation*}
Thus, $\Psi \cap \Phi_*^\circ \neq \emptyset$ implies $(\gamma - \alpha, 1) \in \Phi_*$. Hence, $(\gamma - \alpha, 1)$ is returned by the algorithm in \autoref{line: alg: select omega and d: case inner value: return values} if $\Psi \cap \Phi_*^\circ \neq \emptyset$.
If $\Psi \cap \Phi_*^\circ = \emptyset$, the algorithm constructs a candidate set $C$ and returns a minimizer of $f$ on $C$ in \autoref{line: alg: select omega and d: return minimal candidate}. Hence, it remains to prove that
\begin{equation*}
C \cap \Psi \neq \emptyset
\text{ if }
\Psi \cap \partial \Phi_* \neq \emptyset
\text{ or }
\Psi \cap \Phi_0 \neq \emptyset
.
\end{equation*}
Consider now the case $\Psi \cap \partial \Phi_* \neq \emptyset$. Let $(d, \omega) \in \Psi \cap \partial \Phi_*$ and define $a := \max\{l, \epsilon\}$. Then
\begin{equation*}
d \in \{a, u\}
\text{ or }
d + \omega^2 \alpha \in \{x, y\}
\text{ or }
\omega \in \{0, 1\}
.
\end{equation*}
If $\omega = 1$, the definitions of $f$ and $\Phi_*$ imply
\begin{equation*}
\max\{a, x - \alpha\} \leq \min\{u, y - \alpha\}
\end{equation*}
and
\begin{equation*}
(d, \omega)
=
(\min\{\max\{a, x - \alpha, \gamma - \alpha\}, u, y - \alpha\}, 1)
.
\end{equation*}
This value is included in $C$ at \autoref{line: alg: select omega and d: add candidate: case omega equals 1}.
If $\alpha = 0$, then $(d, 1) \in \Phi_*$ and $f(d, 1) \leq f(d, \omega)$, so $(d, 1) \in \Psi$ as well. Hence, the case $\alpha = 0$ is covered by the previous case where $\omega = 1$. Thus, assume $\alpha \neq 0$.
If $d + \omega^2 \alpha = c$ for some $c \in \{x, y\}$, then $d \leq c$ and $\omega = \sqrt{\tfrac{c - d}{\alpha}}$.
Since
\begin{equation*}
f \left( d, \sqrt{\tfrac{c - d}{\alpha}} \right)
=
(c - \gamma)^2 + \left( \sqrt{\tfrac{c - d}{\alpha}} - 1 \right)^2 \beta
\end{equation*}
and $(d, \omega)$ is a minimizer of $f$ on $\Psi$, it follows
\begin{equation*}
d \in \{c - \alpha, a, u\}
\text{ if }
d + \omega^2 \alpha = c
.
\end{equation*}
$d = c - \alpha$ and $d + \omega^2 \alpha = c$ imply $\omega = 1$. The case $\omega = 1$ was already considered.
The case $d \in \{a, u\}$ is considered now. Then $(d, \omega) \in \Phi$ is equivalent to $\omega \in [\check{\omega}_d, \hat{\omega}_d]$ with
\begin{equation*}
\check{\omega}_d
:=
\sqrt{\tfrac{\max\{x - d, 0\}}{\alpha}},~~~
\hat{\omega}_d
:=
\sqrt{\tfrac{\min\{y - d, \alpha\}}{\alpha}}
.
\end{equation*}
Hence, $d = u$ implies $y \geq u$ and $d = a$ implies $x - \alpha \leq a$.
Define
\begin{equation*}
\Omega_d := \{ \omega \in \mathbb{R} ~|~ \tfrac{\partial}{\partial \omega} f(d, \omega) = 0 \}
.
\end{equation*}
$\omega \in (\check{\omega}_d, \hat{\omega}_d)$ implies $\omega \in \Omega_d$. $\omega = \check{\omega}_d$ implies $\min \Omega_d \leq \check{\omega}_d$ and $\omega = \hat{\omega}_d$ implies $\max \Omega_d \geq \hat{\omega}_d$. Hence
\begin{equation*}
\omega
\in
\{ \min\{\max\{\omega, \check{\omega}_d\}, \hat{\omega}_d\} ~|~ \omega \in \Omega_d \}.
\end{equation*}
These values are included in $C$ in \autoref{line: alg: select omega and d: add candidate: omega inner}.
The last case is $\omega = 0$. This implies $d = a$, because $(d, \omega)$ is a minimizer of $f$ on $\Phi$. The case $d = a$ was already considered.
Hence,
\begin{equation*}
C \cap \Psi \neq \emptyset
\text{ if }
\Psi \cap \partial \Phi_* \neq \emptyset
.
\end{equation*}
Thus, it remains to show that
$
C \cap \Psi \neq \emptyset
\text{ if }
\Psi \cap \Phi_0 \neq \emptyset
.
$
Hence, consider now the case $\Psi \cap \Phi_0 \neq \emptyset$. The definition of $\Phi_0$ then implies $(0, 0) \in \Psi$ and $\max\{l, x\} \leq 0$. This implies $\epsilon \leq u$ and $\epsilon \leq y$ due to the requirements of the algorithm. Thus, $(\epsilon, 0) \in \Phi$. Hence, since $(0, 0) \in \Psi$,
\begin{align*}
\gamma^2 + \beta
=
f(0, 0)
\leq
f(\epsilon, 0)
=
(\epsilon - \gamma)^2 + \beta
=
\gamma^2 - 2 \epsilon \gamma + \epsilon^2 + \beta
.
\end{align*}
Thus, $\Psi \cap \Phi_0 \neq \emptyset$ implies $2 \gamma \leq \epsilon$. $(0, 0)$ is included in $C$ in \autoref{line: alg: select omega and d: add candidate: (0, 0)} in this case. Hence
\begin{equation*}
C \cap \Psi \neq \emptyset
\end{equation*}
and the algorithm returns a value in $\Psi$ in all cases.
\flushright\qed
\end{proof}
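To illustrate the theorem, a minimizer of $f$ over $\Phi$ can also be approximated by a plain grid search and compared with the output of \nameref{alg: minimal change} for the same input. The following Python sketch is our own illustration and is not part of the algorithm.
\begin{verbatim}
import numpy as np

# Brute-force sketch (illustration only, not MINIMAL_CHANGE itself):
# approximate a minimizer of
#   f(d, omega) = (d + omega^2*alpha - gamma)^2 + (omega - 1)^2*beta
# over Phi = Phi_* U Phi_0 by grid search, using the same lexicographic
# tie-breaking key (f, -d, omega) as the algorithm.

def f(d, omega, alpha, beta, gamma):
    return (d + omega**2 * alpha - gamma)**2 + (omega - 1)**2 * beta

def grid_minimizer(x, y, l, u, eps, alpha, beta, gamma, m=400):
    a = max(l, eps)
    best = None

    def consider(d, omega):
        nonlocal best
        key = (f(d, omega, alpha, beta, gamma), -d, omega)
        if best is None or key < best[0]:
            best = (key, (d, omega))

    if u >= a:
        for d in np.linspace(a, u, m):
            for omega in np.linspace(0.0, 1.0, m):
                if x <= d + omega**2 * alpha <= y:   # (d, omega) in Phi_*
                    consider(d, omega)
    if max(l, x) <= 0:                               # (0, 0) in Phi_0
        consider(0.0, 0.0)
    return None if best is None else best[1]
\end{verbatim}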
\subsection{Permutation} \label{subsec: algorithms: permutation}
Another part of \nameref{alg: approximated decomposition} with some flexibility in its design is the permutation step in \autoref{line: alg: approximated decomposition: permutation} where the row and column for the current iteration are chosen. This choice drastically affects the output of \nameref{alg: approximated decomposition} and thus of \nameref{alg: approximated matrix}, too. Several strategies for the permutation are conceivable.
A strategy to reduce the approximation error is to choose the permutation that minimizes the additional approximation error. To achieve this, in each iteration the additional approximation error for all remaining indices is computed and the one with the lowest additional approximation error is chosen. If this is the same for several indices, a higher value in $d$ is preferred. As already stated in the previous subsection, this reduces the condition number of the approximation and increases the numerical stability. If these values are the same as well, a lower $\omega$ and then a lower index is preferred.
Another strategy is to prioritize higher values in $d$ instead of lower additional approximation errors. This improves the condition number and the numerical stability even further and does not necessarily increase the total approximation error as numerical experiments have shown.
To use this strategy, \autoref{line: alg: approximated decomposition extension: minimal change: select minimum} in \nameref{alg: approximated decomposition extension: minimal change} can be replaced by the code snippet \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} presented in Algorithm \ref{alg: approximated decomposition extension: minimal change with permutation: max d}. Furthermore, in \nameref{alg: approximated decomposition}, the swap in \autoref{line: alg: approximated decomposition: swap} has to be moved after \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} and \autoref{line: alg: approximated decomposition: permutation} can be removed.
\begin{algorithm}[H]
\normalsize
\caption{CHOOSE\_$p$\_$d$\_$\omega$}
\label{alg: approximated decomposition extension: minimal change with permutation: max d}
\begin{algorithmic}[1]
\Indent
\Indent
\State $\hat{d} \gets -\infty$, $\hat{f} \gets \infty$, $\hat{\omega} \gets \infty$, $j \gets i$
\For {$k \gets i, \ldots, n$}
\State $(\tilde{d}, \tilde{\omega}) \gets$ \Call{minimal\_change}{$x_{p_{k}}, y_{p_{k}}, l, u, \epsilon, \alpha_{p_{k}}, \beta_{p_{k}}, A_{p_{k} p_{k}}$}
\State $\tilde{f} \gets (\tilde{d} + \tilde{\omega}^2 \alpha_{p_{k}} - A_{p_{k} p_{k}})^2 + (\tilde{\omega} - 1)^2 \beta_{p_{k}}$
\If {$(-\tilde{d}, \tilde{f}, \tilde{\omega}, k) < (-\hat{d}, \hat{f}, \hat{\omega}, j)$ in lexicographical order}
\State $j \gets k$, $\hat{d} \gets \tilde{d}$, $\hat{\omega} \gets \tilde{\omega}$, $\hat{f} \gets \tilde{f}$
\EndIf
\EndFor
\State $(d_{p_j}, \omega_{p_j}) \gets (\hat{d}, \hat{\omega})$
\EndIndent
\EndIndent
\end{algorithmic}
\end{algorithm}
For sparse matrices, the permutation also affects the sparsity pattern of the matrix $L$. Hence, it would be beneficial to choose a permutation which reduces the number of nonzero values in $L$ and thus also reduces the computational effort and the memory consumption. However, minimizing the number of nonzero values is an NP-complete problem \cite{Yannakakis1981}.
Nevertheless, several heuristic methods exist which can reduce the number of nonzero values significantly. These include band-reducing permutation algorithms like the Cuthill–McKee algorithm \cite{Cuthill1969} and the reverse Cuthill–McKee algorithm \cite{George1981}, symmetric approximate minimum degree permutation algorithms \cite{George1989}, such as \cite{Amestoy1996}, and symmetric nested dissection algorithms. A good overview is provided by \cite[Chapter 7]{Davis2006} and \cite[Chapter 8]{Davis2016}. It should be taken into account that only symmetric permutation methods are applicable in our context.
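As an illustration, the following sketch computes such a symmetric reordering with SciPy's reverse Cuthill–McKee routine and applies it to both rows and columns; it is not part of the algorithms presented above.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Sketch: a symmetric band/fill-reducing reordering via reverse Cuthill-McKee.
# Only symmetric permutations P A P^T are admissible here, so the same
# ordering is applied to rows and columns.

A = csr_matrix(np.array([[4., 0., 1., 0.],
                         [0., 3., 0., 2.],
                         [1., 0., 5., 0.],
                         [0., 2., 0., 6.]]))
perm = reverse_cuthill_mckee(A, symmetric_mode=True)   # permutation vector
A_perm = A[perm, :][:, perm]                            # P A P^T
print(perm)
print(A_perm.toarray())
\end{verbatim}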
\subsection{Complexity} \label{subsec: algorithms: complexity}
In the context of large matrices and limited resources, the needed run time and memory of \nameref{alg: approximated matrix} and \nameref{alg: approximated decomposition} are crucial.
The fastest way to check if $A \in \mathcal{C}^{n \times n}$ is positive definite is to try to calculate a (classical) Cholesky decomposition of $A$, that is a lower triangular matrix $L$ with $A = L L^H$ \cite[Chapter 10]{Higham2002}, \cite[Chapter 4.2]{Golub1996}. This needs at worst $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ basic operations and stores $2 n^2 + \mathcal{O}(n)$ numbers in the real valued case. The needed memory can be reduced if only the lower triangles of $A$ and $L$ are stored. This would result in $n^2 + \mathcal{O}(n)$ numbers. It can be reduced even more if $A$ can be overwritten by $L$. This would result in $\tfrac{1}{2} n^2 + \mathcal{O}(n)$ numbers.
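For illustration, such a test via an attempted Cholesky factorization can be sketched as follows (using NumPy):
\begin{verbatim}
import numpy as np

# Sketch: positive definiteness test by attempting a Cholesky factorization.

def is_positive_definite(A):
    try:
        np.linalg.cholesky(A)          # A = L L^H; fails if A is not PD
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2., 1.], [1., 2.]])))   # True
print(is_positive_definite(np.array([[1., 2.], [2., 1.]])))   # False
\end{verbatim}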
\nameref{alg: approximated matrix} and \nameref{alg: approximated decomposition} using \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} need at worst $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ basic operations and memory for $2 n^2 + \mathcal{O}(n)$ numbers in the real valued case, too. For this only a few small modifications are necessary, which are explained below. Hence, both algorithms have asymptotically the same worst case number of basic operations and memory as an algorithm which calculates a Cholesky decomposition. Thus, their overhead is negligible and vanishes asymptotically. With some small modifications, it is also possible to overwrite the input matrix $A$ with the output matrices $L$ and $B$. Thus, \nameref{alg: approximated matrix} meets objectives \ref{objective: fast computation} and \ref{objective: low memory}.
\begin{sloppypar}
For \nameref{alg: minimal change}, the number of needed basic operations and numbers that have to be stored is $\mathcal{O}(1)$. Hence, \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} needs $\mathcal{O}(n)$ basic operations and stores $\mathcal{O}(1)$ numbers. If \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} is used in \nameref{alg: approximated decomposition} to choose the permutation as well as $d$ and $\omega$, $\mathcal{O}(n^2)$ additional basic operations have to be performed and $\mathcal{O}(n)$ additional numbers have to be stored.
\end{sloppypar}
In \nameref{alg: approximated decomposition}, a crucial part for the number of needed operations is the calculation of $L$ in \autoref{line: alg: approximated decomposition: L definition}. Here the effort can be reduced by calculating and storing $L D^{\frac{1}{2}}$ instead of $L$ first. After that, $L$ can be calculated with an effort of $\mathcal{O}(n^2)$ basic operations. This approach results in an overall worst case number of $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ basic operations plus the basic operations needed for the permutation and the choice of $d$ and $\omega$. Furthermore, $2 n^2 + \mathcal{O}(n)$ numbers have to be stored in \nameref{alg: approximated decomposition} in addition to the memory needed for the choice of the permutation, $d$ and $\omega$. Hence, if \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} is used, the overall worst case number of basic operations is $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ and $2 n^2 + \mathcal{O}(n)$ numbers have to be stored.
The needed storage can be reduced by storing only one triangle of $A$ and the lower triangle of $L$ and by overwriting $A$ with $L$. This would result in $n^2 + \mathcal{O}(n)$ and $\frac{1}{2} n^2 + \mathcal{O}(n)$ numbers, respectively. However, the permutation in \nameref{alg: approximated decomposition} must be taken into account here. For this, the indexing of $A$ in \autoref{line: alg: approximated decomposition: L definition} must be suitably adapted or $A$ must be permuted. However, these modifications would not influence the $\tfrac{1}{3} n^3$ as the dominant part in the number of basic operations.
For \nameref{alg: approximated matrix}, at most $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ basic operations plus the basic operations for choosing the permutation, $d$ and $\omega$ are needed as well. This is because for each execution of \autoref{line: alg: approximated matrix: set below diagonal without omega} in \nameref{alg: approximated matrix}, the execution of \autoref{line: alg: approximated decomposition: L definition} in \nameref{alg: approximated decomposition} is once omitted. Thus, the worst case number of needed basic operations of \nameref{alg: approximated matrix} increases only by $\mathcal{O}(n^2)$ compared to the worst case number of needed basic operations of \nameref{alg: approximated decomposition}.
At most $2 n^2+ \mathcal{O}(n)$ numbers have to be stored in \nameref{alg: approximated matrix} plus the numbers that need to be stored for choosing the permutation, $d$ and $\omega$. To achieve this, $B$ must overwrite $L$. If the strict lower triangle of $L$ is allowed to overwrite the strict lower triangle of $A$, at most $n^2+ \mathcal{O}(n)$ numbers have to be stored plus the numbers for the choice of the permutation, $d$ and $\omega$.
Hence, \nameref{alg: approximated matrix}, with the small modifications mentioned above, needs at most $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ basic operations and stores at most $2 n^2+ \mathcal{O}(n)$ numbers if \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} is used to choose the permutation, $d$ and $\omega$. It stores only $n^2+ \mathcal{O}(n)$ numbers if it is allowed to overwrite the input matrix.
It would also be possible to reduce the needed memory to $\tfrac{1}{2} n^2+ \mathcal{O}(n)$ numbers by passing only the lower triangle of the matrices $A$, $L$ and $B$. Since $A$ is then no longer available after the calculation of $L$, $B$ must be calculated in the more expensive way shown in Theorem \ref{theorem: B equals PT L D LH P}. This would result in $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ additional basic operations.
In the complex valued case, the main statement remains the same: The overhead of \nameref{alg: approximated matrix} and \nameref{alg: approximated decomposition} using \nameref{alg: approximated decomposition extension: minimal change with permutation: max d} is negligible and vanishes asymptotically compared to an algorithm which calculates a (classical) Cholesky decomposition. In a similar way, an analysis can be carried out for the case where $A$ is a sparse matrix.
\section{Implementation and numerical experiments}
\label{sec:implementation}
An implementation of the algorithms \nameref{alg: approximated matrix} and \nameref{alg: approximated decomposition} is presented in this section together with the performed numerical experiments.
\subsection{Implementation}
The algorithms \nameref{alg: approximated matrix} and \nameref{alg: approximated decomposition} presented in Section \ref{sec:LDL_approximation} are implemented in a software library written in Python \cite{Python-3.7} called the matrix-decomposition library \cite{matrix-decomposition-1.2}. Their implementation uses the \nameref{alg: minimal change} algorithm and provides both permutation strategies described in Subsection \ref{subsec: algorithms: permutation} as well as several fill-reducing permutation algorithms for sparse matrices. In addition, the library provides many more approximation and decomposition algorithms together with various other useful functions regarding matrices and their decompositions.
The library is available on GitHub \cite{matrix-decomposition-git}. It is based on NumPy \cite{NumPy-1.17}, SciPy \cite{SciPy-1.3,Virtanen2019} and scikit-sparse \cite{scikit-sparse-0.4.4}. It was extensively tested using pytest \cite{pytest-4.4.1} and documented using Sphinx \cite{Sphinx-2.0.1}. The matrix-decomposition library and all required packages are open-source.
They can be conveniently installed using the cross-platform package manager Conda \cite{Conda} and the Anaconda Cloud \cite{matrix-decomposition-anaconda}; all required packages are installed automatically during the installation of the matrix-decomposition library. The library is also available on the Python Package Index \cite{matrix-decomposition-PyPI} and is thus installable with the standard Python package manager pip \cite{pip} as well.
\subsection{Comparison with other approximation algorithms}
\label{subsec: numerical comparison}
The \nameref{alg: approximated matrix} algorithm has been compared with other modified Cholesky algorithms based on the $LDL^T$ decomposition with respect to the resulting approximation errors and the condition numbers of the approximations. For the results presented here, we have used the Frobenius norm; the results using the spectral norm look similar.
The other algorithms are GMW81 \cite{Gill1981}, which is a refined version of \cite{Gill1974}, GMW1 \cite{Fang2008} and GMW2 \cite{Fang2008} which are based on GMW81, SE90 \cite{Schnabel1990} and its refined version SE99 \cite{Schnabel1999} as well as SE1 \cite{Fang2008} which in turn is based on SE99. All these algorithms are implemented in the matrix-decomposition library \cite{matrix-decomposition-1.2}. These algorithms have been extended so that the approximation can have predefined diagonal values. For this, the calculated approximation was scaled by multiplying with a suitable diagonal matrix on both sides.
\nameref{alg: approximated matrix} has been configured so that the permutation strategy which prefers high values in $D$ is used and no upper bound on the values in $D$ is applied.
Different test scenarios were used. The first three scenarios are random correlation matrices disturbed by some additive unbiased noise, which should be approximated by valid correlation matrices. The random correlation matrices have been generated by the algorithm described in \cite{Davies2000}. The off-diagonal values of the symmetric noise matrices have been drawn from a normal distribution with mean zero and standard deviation 0.1, 0.2 or 0.3, depending on the scenario. The diagonal values of the noise matrices were zero in all scenarios.
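The following sketch reproduces the spirit of these scenarios with SciPy; \texttt{scipy.stats.random\_correlation} draws a correlation matrix with prescribed eigenvalues, whereas the original experiments use the algorithm of \cite{Davies2000} directly.
\begin{verbatim}
import numpy as np
from scipy.stats import random_correlation

# Sketch of the first three scenarios: a random correlation matrix perturbed
# by symmetric zero-mean Gaussian noise on the off-diagonal entries; the
# diagonal is left unchanged.

rng = np.random.default_rng(0)
n, sigma = 10, 0.2

eigs = rng.uniform(0.1, 2.0, size=n)
eigs *= n / eigs.sum()                         # eigenvalues must sum to n
C = random_correlation.rvs(eigs, random_state=1)

N = np.triu(rng.normal(0.0, sigma, size=(n, n)), k=1)
N = N + N.T                                    # symmetric, zero diagonal
A = C + N                                      # matrix to be approximated
\end{verbatim}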
\begin{figure}
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{correlation_with_noise_0_1_-_number_of_bests.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{correlation_with_noise_0_2_-_number_of_bests.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{correlation_with_noise_0_3_-_number_of_bests.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{eigenvalues_in_-10000_and_10000_-_number_of_bests.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{eigenvalues_in_-10000_and_1_-_number_of_bests.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{eigenvalues_in_-1_and_10000_-_number_of_bests.pdf}
\end{minipage}
\caption{Frequency with which each algorithm achieved the smallest approximation error, for the four different bounds on the condition number of the approximation (different colors) and for each of the six test scenarios (different plots).}
\label{fig: number of best}
\end{figure}
The last three scenarios are randomly generated symmetric matrices with eigenvalues uniformly distributed in $[-10^4, 10^4]$, $[-10^4, 1]$ or $[-1, 10^4]$, depending on the scenario, which should be approximated by symmetric positive semidefinite matrices. Each of these random symmetric matrices has been generated by multiplying a random orthogonal matrix, generated with the algorithm described in \cite{Stewart1980}, with a diagonal matrix with the chosen eigenvalues as diagonal values and then multiplying this with the transposed random orthogonal matrix. The eigenvalues have been drawn from uniform distributions and were altered so that each matrix has at least one negative and one positive eigenvalue.
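A corresponding sketch using SciPy's \texttt{ortho\_group} (the original experiments use the algorithm of \cite{Stewart1980}) is shown below.
\begin{verbatim}
import numpy as np
from scipy.stats import ortho_group

# Sketch of the last three scenarios: Q diag(lambda) Q^T with a random
# orthogonal Q and eigenvalues drawn uniformly, adjusted so that at least one
# negative and one positive eigenvalue occur.

rng = np.random.default_rng(0)
n = 10

lam = rng.uniform(-1e4, 1e4, size=n)
lam[0], lam[1] = -abs(lam[0]), abs(lam[1])     # force one of each sign
Q = ortho_group.rvs(dim=n, random_state=1)
A = Q @ np.diag(lam) @ Q.T                     # symmetric indefinite test matrix
\end{verbatim}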
\begin{figure}
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{correlation_with_noise_0_1_-_median.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{correlation_with_noise_0_2_-_median.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{correlation_with_noise_0_3_-_median.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{eigenvalues_in_-10000_and_10000_-_median.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{eigenvalues_in_-10000_and_1_-_median.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\linewidth,height=0.71\linewidth]{eigenvalues_in_-1_and_10000_-_median.pdf}
\end{minipage}
\caption{Median of the approximation errors for the four different bounds on the condition number of the approximation (different colors) and for each of the six test scenarios (different plots).}
\label{fig: error}
\end{figure}
For each of the six scenarios, 100 matrices have been generated, with dimensions evenly distributed among 10, 20, 30, 40 and 50, and each of them was approximated.
The approximations are assessed according to the approximation error and their condition number using different objectives. The first one is to minimize the approximation error without caring about the condition number. The other three are to minimize the approximation error while obtaining a condition number lower than or equal to $10 n$, $5 n$ and $2 n$, respectively, where $n$ is the dimension of the matrix. This corresponds to the requirement, which often occurs in applications, that the condition number should be sufficiently small (but need not be minimal). Minimizing only the condition number without taking the approximation error into account is not useful.
Each of the algorithms has a parameter, representing a lower bound on the values of $D$, allowing one to favor a low approximation error or a low condition number. Hence, each algorithm has been executed several times with different values for this parameter, and for each of the four objectives only the approximation which best meets the objective was taken into account.
Figure \ref{fig: number of best} shows how many times each algorithm has computed the approximation with the smallest approximation error among all tested algorithms for the six scenarios and the four objectives. The \nameref{alg: approximated matrix} algorithm clearly outperforms all other tested algorithms in all scenarios.
Figure \ref{fig: error} shows the median of the approximation errors for each of the six scenarios and the four objectives. The approximation errors are relative to the minimal possible approximation errors, which have been calculated using the methods described in \cite{Higham1988} and \cite{Higham2002a}. A missing bar in Figure \ref{fig: error} indicates that the algorithm was not able to calculate an approximation satisfying the restriction on the condition number for at least half of the test matrices.
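For the scenarios without constraints on the diagonal, this reference value can be obtained from an eigenvalue decomposition, as in the following sketch; the correlation scenarios instead require the nearest correlation matrix method of \cite{Higham2002a}.
\begin{verbatim}
import numpy as np

# Sketch: minimal possible Frobenius-norm error for the unconstrained
# scenarios. For symmetric A, the nearest positive semidefinite matrix in the
# Frobenius norm is obtained by setting the negative eigenvalues to zero; the
# minimal error is the norm of the removed negative part.

def nearest_psd_frobenius(A):
    lam, Q = np.linalg.eigh(A)                     # A is assumed symmetric
    A_psd = (Q * np.maximum(lam, 0.0)) @ Q.T       # Q diag(max(lam, 0)) Q^T
    min_error = np.linalg.norm(np.minimum(lam, 0.0))
    return A_psd, min_error
\end{verbatim}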
The results show that \nameref{alg: approximated matrix} calculates approximations with approximation errors usually close to optimal and still sufficiently small condition numbers. In addition, it performs better than the other tested algorithms, sometimes considerably so.
The numerical tests have also indicated that, for determining $d_i$, a varying lower bound $\hat{l}_i$, defined as
\begin{equation*}
\hat{l}_i
:=
\begin{cases}
l &\text{ if } \tfrac{1}{2} c_{p_i} < l\\
u &\text{ if } \tfrac{1}{2} c_{p_i} > u\\
\tfrac{1}{2} c_{p_i} & \text{ else}
\end{cases}
\text{ with }
c_i
:=
\begin{cases}
x_i &\text{ if } A_{i i} < x_i\\
y_i &\text{ if } A_{i i} > y_i\\
A_{i i} & \text{ else}
\end{cases}
\end{equation*}
for each $i \in \{1, \ldots, n\}$,
is useful in order to achieve a low approximation error and a low condition number. This varying lower bound can also be selected in the matrix-decomposition library.
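A sketch of this bound (with the permutation omitted for brevity) reads:
\begin{verbatim}
import numpy as np

# Sketch of the varying lower bound: c_i clips the diagonal entry A_ii to
# [x_i, y_i], and the lower bound used for d_i is c_i / 2 clipped to [l, u].

def varying_lower_bound(A_diag, x, y, l, u):
    c = np.clip(A_diag, x, y)
    return np.clip(0.5 * c, l, u)
\end{verbatim}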
\section{Conclusions}
\label{sec: summary}
A new algorithm to approximate Hermitian matrices by positive semidefinite Hermitian matrices was presented. In contrast to existing algorithms, it allows bounds on the diagonal values of the approximation to be specified.
It tries to minimize the approximation error in the Frobenius norm and the condition number of the approximation. Parameters of the algorithm can be used to select which of these two objectives is preferred if both objectives cannot be met equally well. Numerical tests have shown that the algorithm outperforms existing algorithms regarding the approximation error as well as the condition number.
The algorithm is suitable for very large matrices, since it needs only $\tfrac{1}{3} n^3 + \mathcal{O}(n^2)$ basic operations and storage for $n^2 + \mathcal{O}(n)$ numbers in the real valued case. This is asymptotically the same number of basic operations as the computation of a Cholesky decomposition would need. Moreover the algorithm is also suitable for sparse matrices since it preserves the sparsity pattern of the original matrix.
The $LDL^H$ decomposition of the approximation is calculated as a byproduct. This allows corresponding linear systems to be solved and the corresponding determinant to be calculated very quickly. If such a decomposition should be calculated anyway, the algorithm has no significant overhead.
Two parts of the algorithm can be realized in many different ways. Several possibilities were presented; more are conceivable.
An open-source implementation of this algorithm is freely available. The implementation is fully documented and easy to install. Extensive numerical tests confirm the functionality of the algorithm and its implementation.
Numerical optimization and statistics are two fields of application in which the algorithm can be of particular interest.
| {
"timestamp": "2019-12-12T02:18:57",
"yymm": "1806",
"arxiv_id": "1806.03196",
"language": "en",
"url": "https://arxiv.org/abs/1806.03196",
"abstract": "A new algorithm to approximate Hermitian matrices by positive semidefinite Hermitian matrices based on modified Cholesky decompositions is presented. In contrast to existing algorithms, this algorithm allows to specify bounds on the diagonal values of the approximation. It has no significant runtime and memory overhead compared to the computation of a classical Cholesky decomposition. Hence it is suitable for large matrices as well as sparse matrices since it preserves the sparsity pattern of the original matrix. The algorithm tries to minimize the approximation error in the Frobenius norm as well as the condition number of the approximation. Since these two objectives often contradict each other, it is possible to weight these two objectives by parameters of the algorithm. In numerical experiments, the algorithm outperforms existing algorithms regarding these two objectives. A Cholesky decomposition of the approximation is calculated as a byproduct. This is useful, for example, if a corresponding linear equation should be solved. A fully documented and extensively tested implementation is available. Numerical optimization and statistics are two fields of application in which the algorithm can be of particular interest.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Approximation of Hermitian Matrices by Positive Semidefinite Matrices using Modified Cholesky Decompositions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587261496031,
"lm_q2_score": 0.7185943865443349,
"lm_q1q2_score": 0.7097978758712878
} |
https://arxiv.org/abs/math/0702279 | Infinitely Often Dense Bases of Integers with a Prescribed Representation Function | Nathanson constructed asymptotic bases for the integers with a prescribed representation function, then asked how dense they can be. We can easily obtain an upper bound using a simple argument. In this paper, we will see this is indeed the best bound we can get for asymptotic bases for the integers with an arbitrary representation function prescribed. | \section{Introduction}\label{S:intro}
We will use the following notations: For sets $A, B$ of integers, any integer $t$ and a positive integer $h$, we define the \emph{sumset}
$ A + B = \{ a+b : a\in A, b \in B\}$,
the \emph{translation}
$A + t = \{ a + t : a \in A\},$ and the \emph{dilation}
$ h*A = \{ ha : a \in A\}$.
Let $\mathbb{N}_0$ be the set of nonnegative integers. Then we define the representation function of $A$ as
\[r_A(n) = \mbox{card} \{(a,b) : a,b \in A,\; a \le b,\;
a+b=n \}, \] where $n$ is an integer. A set of integers $A$ is called an \emph{additive basis} for integers if $r_A(n) \ge 1$ for all integers $n$. If all but finitely many integers can be written as a sum of two elements from $A$, then $A$ is called an \emph{asymptotic basis} for integers. A set of integers
$A$ is called a \emph{unique representation basis} for integers if $r_A(n)=1$ for all integers $n$. Also, the \emph{counting function} for the set $A$ is \[ A(y,x) = \mbox{card} \{ a \in A : y \le a \le x \} \] for real numbers $x$ and $y$.
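For a finite set $A$, both functions are easy to evaluate directly; the following small Python sketch is purely illustrative.
\begin{verbatim}
from itertools import combinations_with_replacement

# Illustration of the definitions for a finite set A.

def r(A, n):
    """r_A(n): number of pairs a <= b in A with a + b = n."""
    return sum(1 for a, b in combinations_with_replacement(sorted(A), 2)
               if a + b == n)

def count(A, y, x):
    """A(y, x): number of a in A with y <= a <= x."""
    return sum(1 for a in A if y <= a <= x)

A = {-3, -1, 0, 2, 5}
print([r(A, n) for n in range(-6, 8)])   # e.g. r_A(-4) = 1 from (-3) + (-1)
print(count(A, -3, 2))                   # 4
\end{verbatim}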
For bases of integers, Nathanson~\cite{NathSpa,NathInv} obtained the following:
\begin{theorem}[\cite{NathSpa, NathInv}]\label{T:Nath}
Let $ f \colon \mathbb{Z} \to \mathbb{N}_0 \cup \{\infty\}$ be any function such that the set $f^{-1}(0)$ is a finite set.
\begin{enumerate}
\item Let \, $\phi \, \colon \mathbb{N}_0 \to \mathbb{R}$ \; be any nonnegative function such that \; $\lim_{x \to \infty} \phi (x) = \infty$. Then there exist uncountably many asymptotic bases $A$ of integers such that $r_A(n) = f(n)$ for all integers \,$n$ and $A(-x,x) \le \phi(x)$ for all $x \ge 0$.
\item There exist uncountably many asymptotic bases $A$ of integers such that $r_A(n) = f(n)$ for all integers \,$n$ and $A(-x,x) \gg x^{1/3}$ for all sufficiently large $x$.
\end{enumerate}
\end{theorem}
Note that, under our assumption, a finite set $f^{-1}(0)$ just means $A$ is an asymptotic basis. Cilleruelo and Nathanson~\cite{Per} later improved the exponent ${1/3}$ in the second statement of Theorem~\ref{T:Nath} to $\sqrt{2} -1 +o(1)$. Also, \L uczak and Schoen~\cite{Luc} proved the second statement of Theorem~\ref{T:Nath} by showing that a set with a condition of Sidon type can be extended to a unique representation basis.
When the representation function is arbitrarily given, the first statement of Theorem~\ref{T:Nath} means an asymptotic basis for the integers can be as sparse as we want, and the second statement of Theorem~\ref{T:Nath} means we can achieve a certain thickness. Therefore, we want to consider the following general question: \emph{Given an arbitrary representation function, what are the possible thicknesses of asymptotic bases for the integers?} This question was posed by Nathanson~\cite{NathInv}.
We can obtain an upper bound quite easily, as shown in \cite{Uniq}. Let $A$ be any set of integers with a bounded representation function $r_A(n) \le r$ for some $r > 0$ for all integers $n$. Take $k = A(-x,x)$. Then there are $\frac{k(k+1)}{2}$ ways to make $a_i + a_j$\,, where $a_i, a_j$ are in $A$ with $|a_i|, |a_j| \le x$. All these sums belong to the interval $[-2x, 2x]$ and each number in that interval is represented by $a_i + a_j$ at most $r$ times. Therefore, \[ \frac{k(k+1)}{2} \le r(4x+1) \] and solving this for $k = A(-x,x)$ gives us
$A(-x,x) \ll x^{1/2}$
for all $x > 0$.
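Explicitly, for $x \ge 1$ we have
\[ A(-x,x)^2 = k^2 \le k(k+1) \le 2r(4x+1) \le 10rx, \qquad\text{so}\qquad A(-x,x) \le \sqrt{10r}\, x^{1/2}. \]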
Since we are considering possible thicknesses of asymptotic bases with an arbitrary representation function given, the above argument provides us with an upper bound $x^{1/2}$ for the possible thickness. Then the next question we might ask is: can we find a better upper bound, or is $x^{1/2}$ the best upper bound we can get? One way to find a better upper bound is to find a representation function which requires the thickness to have a better upper bound.
We might want to start with unique representation bases. Nathanson~\cite{Uniq} asked the following: Does there exist a number $\theta < 1/2$ such that $A(-x,x) \le x^{\theta}$ for every unique representation basis $A$ and for all sufficiently large $x$? This question was answered negatively by Chen~\cite{Chen}.
\begin{theorem}[\cite{Chen}]\label{T:Chen}
For any \,$\epsilon > 0$, there exists a unique representation basis $A$ for the integers such that for infinitely many positive integers $x$, we have
\[ A(-x,x) \ge x^{{1/2} - \epsilon} . \]
\end{theorem}
In this paper, we show that it is impossible to find a better upper bound for any representation function. Therefore, if we consider $A(-x,x)$ for infinitely many integers $x$ instead of all large $x$, $x^{1/2}$ is indeed the best upper bound.
\\
\textbf{Acknowledgment}. The author thanks Mel Nathanson for his very helpful advice.
\section{Preliminary Lemmas}\label{S:Lem}
From now on, $f$ will denote a function $f \colon \mathbb{Z} \to \mathbb{N}_0 \cup \{ \infty \}$ such that the set $f^{-1}(0)$ is a finite set. Then there exists a positive integer $d_0$ such that $f(n) \ge 1$ for all integers $n$ with $|n| \ge d_0$. Nathanson~\cite{NathSpa} proved the following:
\begin{lemma}[\cite{NathSpa}]\label{L:uk}
Given a function $f$ as above, there exists a sequence $U=\{ u_k \}_{k=1}^{\infty}$ of integers such that, for every $n \in \mathbb{Z}$ and $k \in \mathbb{N}$,
\begin{equation}\label{E:uk}
f(n) = \mbox{card} \{ k \ge 1 : u_k = n \}.
\end{equation}
\end{lemma}
For the proof of Lemma~\ref{L:uk}, see Nathanson~\cite{NathSpa}.
\begin{lemma}\label{L:first}
Let $A$ be a finite set of integers with $r_A(n) \le f(n)$ for all integers \,$n$, \, $0 \notin A$, and for all integers $n$,\[r_A(n) \ge \# \{ i \le m: u_i = n\}\] for some integer \,$m$ which depends only on the set $A$. Then, there exists a finite set of integers $B$ such that $A \subseteq B$, $r_B(n) \le f(n)$ for all integers $n$, \[r_B(n) \ge \# \{ i \!\le\! m+1 : u_i = n \}\] for all integers $n$, and $0 \notin B$.
\end{lemma}
\begin{proof}
If $r_A(n) \ge \# \{ i \le m+1 : u_i = n\}$ for all $n$, then take $B=A$ and we are done. Otherwise, note that \[\#\{ i \le m : u_i = n\} = \# \{ i \le m+1 : u_i = n \}\] for all $n \ne u_{m+1}$ and \[\#\{ i \le m : u_i = u_{m+1}\} + 1 = \# \{ i\le m+1 : u_i = u_{m+1}\}.\] If \[r_A(n) < \#\{ i\le m+1 : u_i = n\}\] for some $n$ as we are assuming now, since \[r_A(n) \ge \# \{ i \le m : u_i = n\}\] for all $n$, we must have \[r_A(u_{m+1}) < \#\{ i \le m+1 : u_i=u_{m+1} \} \le f(u_{m+1}).\]
Let $d = \mbox{max}\{ d_0,\, |u_{m+1}|,\,|a|\ \mbox{where }a \in A \}$. Choose $c > 4d$ if $u_{m+1} \ge 0$ and $c < -4d$ if $u_{m+1} < 0$. Note that $|c| >4d$.
Let $B = A \cup \{ -c,\ c+u_{m+1}\}$. Then $2B$ has three parts:
\[ 2A, \ A+\{-c,\ c+u_{m+1}\}, \ \{ -2c,\ u_{m+1},\ 2c+2u_{m+1} \}. \]
If $a \in 2A$, then $-2d \le a \le 2d$. If $a \in A$, then if $u_{m+1} \ge 0$,\ we have $c > 0$, thus
\begin{eqnarray*}
a-c & \le & d-4d = -3d, \\
a + c + u_{m+1} & \ge & -d+4d+u_{m+1} \ge 3d +u_{m+1} \ge 3d\,,
\end{eqnarray*}
and if $u_{m+1} <0$,\ we have $c<0$, thus
\begin{eqnarray*}
a-c &\ge& -d+4d = 3d, \\
a+c+u_{m+1} &\le& d-4d+u_{m+1}=-3d+u_{m+1} \le -3d\,.
\end{eqnarray*}
Therefore, \[ 2A \,\cap\, A\! +\! \{-c,\ c+u_{m+1} \} = \emptyset\,, \] each element of $A\! +\! \{-c,\ c+u_{m+1}\}$ has an unique representation in the form of $a\! +\! \{-c, \ c+u_{m+1}\},\ a\in A$, and the same is true for $\{-2c,\ u_{m+1},\ 2c+2u_{m+1}\}$ \,in the form of $\{-c,\ c+u_{m+1}\} + \{-c,\ c+u_{m+1}\}$. And
\begin{eqnarray*}
|-2c| &=& 2|c| > 8d, \\
|2c+2u_{m+1}|&=& 2|c+u_{m+1}| \ge 2|c| \ge 8d\,,
\end{eqnarray*}
thus $2A \,\cap\, \{-2c, 2c+2u_{m+1}\} = \emptyset$ \,(recall that $c$ and $u_{m+1}$ have the same sign).
Also note that $A+\{-c,\ c+u_{m+1}\}$ and $\{-2c,\ u_{m+1},\ 2c+2u_{m+1}\}$ are disjoint. To see this, for example, if $a-c=2c+2u_{m+1}$ for some $a \in A$, then $a=3c+2u_{m+1}$ \,so $|a|=|3c+2u_{m+1}| \ge |3c| > 12d$, giving us a contradiction. Other cases are similar.
Thus, we have
\[ r_B(n) =
\left\{\begin{array}{cl}
r_A(n) + 1 & \mbox{if } n=u_{m+1} \\
r_A(n) & \mbox{if } n\in \,2A \setminus \{u_{m+1}\} \\
1 & \mbox{if } n\in \,2B \setminus \bigl\{2A \cup \{u_{m+1}\}\bigr\}.
\end{array}\right. \]
Now, we have $r_A(u_{m+1}) < f(u_{m+1})$, so\, \[r_B(u_{m+1}) = r_A(u_{m+1}) +1 \;\le f(u_{m+1}).\] And, if $n\in 2B \setminus \bigl\{2A \cup \{u_{m+1}\}\bigr\}$, then $|n| \ge d_0$, so $f(n) \ge 1 = r_B(n)$. Thus, $r_B(n) \le f(n)$ for all $n$.
Now, \,$r_A(u_{m+1}) \ge \#\{i\le m : u_i = u_{m+1}\}$ so
\begin{eqnarray*}
r_A(u_{m+1})+1 & \ge & \#\{i\le m : u_i=u_{m+1}\} +1 \\
& = & \#\{i \le m+1 : u_i = u_{m+1} \}, \end{eqnarray*}
therefore,
\[ r_B(u_{m+1}) \ge \#\{ i \le m+1 : u_i = u_{m+1} \}. \]
If $n \in 2A \setminus \{u_{m+1}\}$, then \[r_B(n) = r_A(n) \ge \#\{i \le m : u_i = n \} = \#\{ i \le m+1 : u_i = n\}.\]
If $n \in 2B \setminus \bigl\{2A \cup \{u_{m+1}\}\bigr\}$, then
\[ 0=r_A(n) \ge \#\{ i \le m : u_i = n\} = \#\{ i \le m+1 : u_i = n\}\]
so
\[ 0=\#\{ i \le m+1 : u_i = n \} \le 1 = r_B(n). \]
Thus, \[r_B(n) \ge \#\{ i \le m+1 : u_i =n\} \] for all $n$.
\end{proof}
\begin{lemma}\label{L:second}
Let $A$ be a finite set of integers with $r_A(n) \le f(n)$ for all $n$, and $0 \notin A$. Let $\phi (x) \colon \mathbb{N}_0 \to \mathbb{R}$ be a nonnegative function such that $\lim_{x \to \infty} \phi (x) = \infty$. Then for any $M > 0$, there exists an integer $x > M$ and a finite set of integers $B$ with $0 \notin B$, $A \subseteq B$, $r_B(n) \le f(n)$ for all $n$, and $B(-x,x) > \sqrt{x}/\phi(x)$.
\end{lemma}
\begin{proof}
It is well-known that there exists a Sidon set $D \subseteq [1,n]$ such that $|D| = n^{1/2} + o(n^{1/2})$ for all $n \ge 1$. Choose an integer $x$ which satisfies the following:
\begin{enumerate}
\item $\phi (x) > M+ \sqrt{20T}$ where $T= \mbox{max} \{d_0,\ |a|\ \mbox{where }a \in A\},$\label{ff}
\item $x$ is a multiple of $5T$,\label{fs}
\item $x > M$,\label{ft}
\item If $n$ is large enough, $|D|= \sqrt{n} +o(\sqrt{n}) > \sqrt{n}/2$. Let $x$ be large enough so that $n=x/5T$ is large enough for the above to hold.\label{ffo}
\end{enumerate}
Let $B = A \cup \{ 5Td : d \in D\}$ where $D \subseteq [1,n]$ with $n= x/5T$ as above. Then
\begin{eqnarray*}
B(-x,x) &\ge& B(0,x)\ge D(1,n)=|D| \\
&> &\frac{\sqrt{n}}{2} = \frac{1}{2}\sqrt{\frac{x}{5T}} = \frac{\sqrt{x}}{\sqrt{20T}} > \frac{\sqrt{x}}{\phi(x) - M} > \frac{\sqrt{x}}{\phi (x)}. \end{eqnarray*}
Now, note that $2B$ has three parts:
\[ 2A, \ A+5Td \ \mbox{for } d\in D,\ \mbox{and } 5T(d_1+d_2)\ \mbox{for } d_1,d_2 \in D.\]
As earlier, we have $2A \,\cap\, (A + 5T\negmedspace*\negmedspace D) = \emptyset$\; and $2A \,\cap\, \{ 5T(d_1+d_2) : d_1, d_2 \in D\} = \emptyset$.
Now, if $a+5Td_1 = 5T(d_2 + d_3)$ for $a \in A$, $d_i \in D$, then $|a| = 5T\,|d_2+d_3-d_1|$. If $|\,d_2+d_3-d_1|=0$, then $a=0 \in A$, a contradiction. And if $|\,d_2+d_3-d_1| \ge 1$, then $|\,a| \ge 5T$, a contradiction. Thus
\[ (A+5T\negmedspace*\!D) \cap \{5T(d_1+d_2) : d_1, d_2 \in D\} = \emptyset.\]
If $a_1+5Td_1 = a_2 +5Td_2$ \,for $a_1, a_2 \in\! A$,\, $d_1, d_2 \in\! D$, then $|\,a_1 - a_2|=5T|\,d_1-d_2|$. As above, this cannot happen unless $a_1=a_2$ \,and $d_1 =d_2$. And if \,$5T(d_1+d_2)=5T(d_3+d_4)$ for $d_i \in D$ with $d_1 \le d_2, \, d_3 \le d_4$, then $d_1+d_2=d_3+d_4$. Since $D$ is a Sidon set, we have $d_1=d_3,\ d_2=d_4$. Thus we have
\[ r_B(n) =
\left\{\begin{array}{cl}
r_A(n) & \mbox{if } n \in 2A \\
1 & \mbox{if } n\in 2B \setminus 2A.
\end{array}\right. \]
If $n \in 2B \setminus 2A$, \, $n \ge T \ge d_0$, so $f(n) \ge 1 = r_B(n)$. Thus, $r_B(n) \le f(n)$ for all $n$.
\end{proof}
\section{Main Result}\label{S:Main}
\begin{theorem}\label{T:Main}
Let $f \colon \mathbb{Z} \to \mathbb{N}_0 \cup \{ \infty\}$ be a function with a finite set $f^{-1}(0)$. Let $\phi \colon \mathbb{N}_0 \to \mathbb{R}$ be any nonnegative function with $\lim_{x \to \infty} \phi(x) = \infty$. Then, there exists an asymptotic basis $A$ of integers such that $r_A(n) = f(n)$ for all $n$, and for infinitely many positive integers\, $x$, we have $A(-x,x) > \sqrt{x}/\phi(x)$.
\end{theorem}
\begin{proof}
Recall that given such a function $f$, we have $d_0$ and $\{ u_k\}$ as defined in Section~\ref{S:Lem}.
We use induction to get an infinite sequence of finite sets of integers $A_1 \subseteq A_2 \subseteq \cdots$ and a sequence of positive integers $\{ x_i\}_{i=1}^{\infty}$ with $x_{i+1} > x_i$\, such that, for all positive integers $l$, we have
\begin{enumerate}
\item $r_{A_l}(n) \le f(n)$ for all $n$,\label{indfi}
\item $r_{A_{2l}}(n),\, r_{A_{2l+1}}(n) \;\ge\, \#\{ i\le l\!+\!1 : u_i=n\}$ for all $n$,\label{indse}
\item $A_{2l-1}(-x_l, x_l) > \sqrt{x_l}/\phi(x_l)$,\label{indth}
\item $0 \notin A_l$.\label{indfo}
\end{enumerate}
If $u_1 \ge 0$, take $c=4d_0 > 0$. If $u_1 <0$, take $c=-4d_0 <0$. Let $\alpha =|2c+2u_1| > 0$\;(\,thus $\alpha > |c|\,,|u_1|\,,d_0$). As before, if $n$ is large enough, there exists a Sidon set $D \subseteq [1,n]$ such that $|D| > \sqrt{n}/2$. Take such an integer $n$ which also satisfies $\phi(3\alpha n) > 2\!\sqrt{3\alpha}$.
Take $A_1 = 3\alpha\!*\!D \;\cup\, \{-c,\ c+u_1\}$ \,and \,$x_1=3\alpha n$. Then $2A_1$ has three parts:
\[ 2(3\alpha\!*\!D),\ \ 3\alpha\!*\!D+\{-c, \ c+u_1\},\ \ \{-2c, \ u_1,\ 2c+2u_1\}. \]
It is easy to see that these are pairwise disjoint (for example, if $3\alpha d -c=2c+2u_1$\,, then $3\alpha d=3c+2u_1$\,, so $3\alpha \le |\,3c+2u_1| < |\,4c+4u_1|=2\alpha$\,, \,a contradiction). If $3\alpha d_1 -c=3\alpha d_2+c+u_1$, then $2c+u_1=3\alpha (d_1-d_2)$. If $d_1 \ne d_2$, then $|2c+u_1| \ge 3\alpha$ but $|\,2c+u_1| < |\,2c+2u_1| = \alpha$. So $d_1\!=\!d_2$\,. Then $-c=c+u_1$\,, \,so $-2c=u_1$\,, a contradiction. Thus,
\[ r_{A_1}(n) =
\left\{\begin{array}{cl}
1 & \mbox{if } n \in 2A_1 \\
0 & \mbox{if } n \notin 2A_1.
\end{array}\right. \]
Now, $3\alpha d-c\ge 3\alpha - \alpha=2\alpha$\,, and also\, $3\alpha d+c+u_1 \ge 3\alpha -\alpha-\alpha = \alpha$\,. So if $n \!\in\! 2A_1 \!\setminus\! \{u_1\}$ then $|n| \ge d_0$\,, so $f(n) \ge 1$. And if $n= u_1$\,, then by the definition of $\{u_k\}$, we have $f(u_1)= \#\{ k : u_k=u_1 \,\} \ge 1$. Thus, for all $n \in 2A_1$\,,\ $r_{A_1}(n) =1 \le f(n)$. If $ n \notin 2A_1$\,, \, $r_{A_1}(n)=0 \le f(n)$. Therefore, for all $n$, $r_{A_1}(n) \le f(n)$. Now, we have $1=r_{A_1}(u_1) \ge \#\{ i \le 1 : u_i = u_1\}$. For other $n \ne u_1$\,, \ $r_{A_1}(n) \ge \#\{ i \le 1 : u_i = n\}=0$. Thus, $r_{A_1}(n) \ge \#\{i\le 1 : u_i=n\}$ for all $n$. Also,
\begin{eqnarray*}
A_1(-x_1,\, x_1) &\ge& A_1(1,\, 3\alpha n) \ge \,D(1,n) = |D| \\
&> & \frac{\sqrt{n}}{2}= \frac{\sqrt{3\alpha n}}{2\!\sqrt{3\alpha}} > \frac{\sqrt{3\alpha n}}{\phi(3\alpha n)} = \frac{\sqrt{x_1}}{\phi(x_1)}\,.
\end{eqnarray*}
Thus $A_1$ satisfies all of the conditions~\eqref{indfi} to \eqref{indfo}.
Now, suppose we have $A_1 \subseteq A_2 \subseteq \cdots \subseteq A_{2l-1}$ and $x_1 < x_2 < \cdots < x_l$. By Lemma~\ref{L:first}, there exists $A_{2l}$ such that $A_{2l-1} \subseteq A_{2l}$ with $r_{A_{2l}}(n)\le f(n)$ for all $n$\, and
\[ r_{A_{2l}}(n)\, \ge\, \#\{ i \le l\!+\!1 :\, u_i=n\} \] for all $n$\,, and $0 \notin A_{2\,l}$. Now, by Lemma~\ref{L:second}, there exists an integer $x_{l+\!1} > x_l$ and $A_{2l+\!1}$ with $0 \notin A_{2l+\!1}$\,, \ $A_{2l} \subseteq A_{2l+\!1}$\,, \ $r_{A_{2l+\!1}}(n) \le f(n)$ for all $n$\,, and \[A_{2l+\!1}(-x_{l+\!1}\,,\, x_{l+\!1})\, > \,\frac{\sqrt{x_{l+\!1}}}{\phi(x_{l+\!1})} \,.\]
Also,\, $r_{A_{2l+\!1}}(n) \,\ge\, r_{A_{2l}}(n) \,\ge\, \#\{i \le l\!+\!1 : u_i=n\}$ \,for all $n$.
Now, let $A= \cup_{l=1}^{\infty} \;A_l$\,. By conditions \eqref{indfi} and \eqref{indse}\,, $r_A(n) = f(n)$ for all $n$ and
\[ A(-x_k\,,\,x_k) \ge A_{2k-\!1}(-x_k\,,\,x_k) > \frac{\sqrt{x_k}}{\phi(x_k)} \]
for all $k$\,.
\end{proof}
| {
"timestamp": "2007-04-26T11:09:57",
"yymm": "0702",
"arxiv_id": "math/0702279",
"language": "en",
"url": "https://arxiv.org/abs/math/0702279",
"abstract": "Nathanson constructed asymptotic bases for the integers with a prescribed representation function, then asked how dense they can be. We can easily obtain an upper bound using a simple argument. In this paper, we will see this is indeed the best bound we can get for asymptotic bases for the integers with an arbitrary representation function prescribed.",
"subjects": "Number Theory (math.NT); Combinatorics (math.CO)",
"title": "Infinitely Often Dense Bases of Integers with a Prescribed Representation Function",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987758725428898,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7097978753533932
} |
https://arxiv.org/abs/1911.00659 | Jacobi-type algorithm for low rank orthogonal approximation of symmetric tensors and its convergence analysis | In this paper, we propose a Jacobi-type algorithm to solve the low rank orthogonal approximation problem of symmetric tensors. This algorithm includes as a special case the well-known Jacobi CoM2 algorithm for the approximate orthogonal diagonalization problem of symmetric tensors. We first prove the weak convergence of this algorithm, \textit{i.e.} any accumulation point is a stationary point. Then we study the global convergence of this algorithm under a gradient based ordering for a special case: the best rank-2 orthogonal approximation of 3rd order symmetric tensors, and prove that an accumulation point is the unique limit point under some conditions. Numerical experiments are presented to show the efficiency of this algorithm. | \section{Introduction}
As the higher order analogue of vectors and matrices,
in the last two decades,
tensors have been attracting more and more attention from various fields,
including signal processing, numerical linear algebra and machine learning
\cite{Cichocki15:review,comon2014tensors,Como10:book,kolda2009tensor,sidiropoulos2017tensor,Anan14:latent}.
One reason is that more and more real data are naturally represented in tensor form, e.g. \emph{hyperspectral images}, \emph{brain fMRI images}, or \emph{social networks}.
The other reason is that, compared with the matrix case, tensor based techniques can capture higher order and more complicated relationships, e.g. \emph{Independent Component Analysis} (ICA) based on the cumulant tensor \cite{Como94:sp}, and \emph{multilinear subspace learning} methods \cite{lu2013multilinear}.
Low rank approximation of higher order tensors is a very important problem and has been applied in various areas \cite{Como10:book,de1997signal,smildemulti}. However,
it is much more difficult than the matrix case,
since it is ill-posed for many ranks,
and this ill-posedness is not rare for 3rd order tensors \cite{de2008tensor}.
\textbf{Notation.}
Let $\mathbb{R}^{n_1\times\cdots\times n_d}\stackrel{\sf def}{=}\mathbb{R}^{n_1}\otimes\cdots\otimes\mathbb{R}^{n_d}$
be the linear space of $d$th order real tensors and
$\text{symm}(\mathbb{R}^{n\times\cdots\times n})\subseteq\mathbb{R}^{n\times\cdots\times n}$ be the set of symmetric ones \cite{Comon08:symmetric,qi2017tensor},
whose entries do not change under any permutation of indices.
The identity matrix of size $n$ is denoted by $\matr{I}_n$.
Let $\text{St}(p,n)\subseteq\mathbb{R}^{n\times p}$ be the Stiefel manifold with $1\leq p\leq n$.
Let $\ON{n}\subseteq\mathbb{R}^{n\times n}$ be the orthogonal group, \textit{i.e.} $\ON{n}=\text{St}(n,n)$.
Let $\SON{n}\subseteq\mathbb{R}^{n\times n}$ be the special orthogonal group,
\textit{i.e.}
the set of orthogonal matrices with determinant 1.
We denote by $\|\cdot\|$ the Frobenius norm of a tensor or a matrix,
or the Euclidean norm of a vector.
Tensor arrays, matrices, and vectors, will be respectively denoted by bold calligraphic letters, e.g. $\tens{A}$, with bold uppercase letters, e.g. $\matr{M}$, and with bold lowercase letters, e.g. $\vect{u}$; corresponding entries will be denoted by $\tenselem{A}_{ijk}$, $M_{ij}$, and $u_i$.
Operator $\contr{p}$ denotes contraction on the $p$th index of a tensor; when contracted with a matrix, it is understood that summation is always performed on the second index of the matrix. For instance, $(\tens{A}\contr{1}\matr{M})_{ijk}=\sum_\ell \tenselem{A}_{\ell jk} M_{i\ell}$.
We denote
$$\tens{A}(\matr{M}) \stackrel{\sf def}{=} \tens{A} \contr{1} \matr{M}^{\T} \contr{2} \cdots \contr{d} \matr{M}^{\T}$$
for convenience in this paper.
For $\tens{A}\in\mathbb{R}^{n\times\cdots\times n}$ and a fixed set of indices $1\leq k_1<k_2<\cdots<k_m\leq n$,
we denote by $\tens{A}^{(k_1,k_2,\cdots,k_m)}$ the $m$-dimensional subtensor obtained from $\tens{A}$ by allowing its indices to vary in $\{k_1,k_2,\cdots,k_m\}$ only.
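For $d = 3$, the contraction and the shorthand $\tens{A}(\matr{M})$ can be written with \texttt{numpy.einsum} as in the following sketch (our own illustration):
\begin{verbatim}
import numpy as np

# Sketch for d = 3: the contraction on the p-th index (summation over the
# second index of the matrix) and the shorthand A(M).

def contract_1(A, M):
    # (A contr_1 M)_{ijk} = sum_l A_{ljk} M_{il}
    return np.einsum('ljk,il->ijk', A, M)

def A_of_M(A, M):
    # A(M) = A contr_1 M^T contr_2 M^T contr_3 M^T, i.e.
    # A(M)_{ijk} = sum_{pqr} A_{pqr} M_{pi} M_{qj} M_{rk}
    return np.einsum('pqr,pi,qj,rk->ijk', A, M, M, M)
\end{verbatim}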
\textbf{Problem statement.}
Let $\tens{A} \in \text{symm}(\mathbb{R}^{n\times\cdots\times n})$ and $1 \leq p \leq n$.
In this paper,
we study the \emph{best rank-$p$ orthogonal approximation problem},
which is to find
\begin{equation}\label{pro-ortho-approxi}
\tens{C}^{*} \stackrel{\sf def}{=} \sum\limits_{k=1}^{p}\sigma_{k}^{*}u_{k}^{*}\otimes\cdots\otimes u_{k}^{*}
= \mathop{\operator@font argmin}\|\tens{A} - \sum_{k=1}^{p}\sigma_{k}u_{k}\otimes\cdots\otimes u_{k}\|,
\end{equation}
where $[u_{1},\cdots,u_{p}]\in \text{St}(p,n)$ and $\sigma_{k}\in\mathbb{R}$ for $1\leq k\leq p$.
If $p=1$,
then \eqref{pro-ortho-approxi} is the \emph{best rank-1 approximation} problem
\cite{Lathauwer00:rank-1approximation,kofidis2002best,kolda2011shifted,zhang2001rank,SilvCA16:spl}
of symmetric tensors,
which is equivalent to the \emph{cubic spherical optimization} problem \cite{qi2009z,zhang2012best,Zhang12:MC}.
If $p=n$, by \cite[Proposition 5.1]{chen2009tensor} and \cite[Proposition 5.2]{LUC2018},
we see that \eqref{pro-ortho-approxi} is closely related to the \emph{approximate orthogonal diagonalization problem} for 3rd and 4th order cumulant tensors,
which is in the core of \emph{Independent Component Analysis} (ICA) \cite{Como92:elsevier,Como94:sp,comon1994tensor},
and finds many applications \cite{Como10:book}.
To our knowledge,
the orthogonal tensor decomposition was first tackled in \cite{Como92:elsevier}, but appeared more formally in \cite{Kolda01:Orthogonal},
in which many examples were presented to illustrate the difficulties of this type of decomposition.
In \cite{chen2009tensor}, the existence of $\tens{C}^{*}$ in problem \eqref{pro-ortho-approxi} was proved, and the \emph{low rank orthogonal approximation of tensors} (LROAT) algorithm and \emph{symmetric LROAT} (SLROAT) were developed to solve this problem based on the polar decomposition.
These two algorithms boil down to the \emph{higher order power method} (HOPM) and \emph{symmetric HOPM} (SHOPM) algorithm \cite{Lathauwer00:rank-1approximation,kofidis2002best,zhang2001rank}
when $p=1$.
More recently, also based on the polar decomposition, a similar algorithm was developed in \cite{pan2018symmetric} to solve problem \eqref{pro-ortho-approxi}, and this algorithm was applied to the image reconstruction task.
\textbf{Contribution.}
In this paper,
we propose a Jacobi-type algorithm to solve problem \eqref{pro-ortho-approxi}.
This algorithm is exactly the well-known Jacobi CoM2 algorithm \cite{Como10:book} when $p=n$,
and the same as the Jacobi-type algorithm in \cite{IshtAV13:simax} when $p=1$.
We first prove the weak convergence\footnote{any accumulation point is a stationary point.} of this algorithm under the cyclic ordering based on a decomposition property of the identity matrix.
Then, under the gradient based ordering defined in \cite{IshtAV13:simax,LUC2017globally,ULC2019},
we prove the global convergence\footnote{the iterations converge to a unique limit point for any starting point.} of this algorithm for 3rd order tensors of rank $p=2$ under some conditions.
Through numerical experiments and comparisons,
we show that the Jacobi-type algorithm proposed in this paper is efficient and stable.
\textbf{Organization.}
The paper is organized as follows.
In \cref{geometric}, we show that two optimization problems on Riemannian manifold are both equivalent to \eqref{pro-ortho-approxi},
and then calculate their Riemannian gradients.
In \cref{section-algorithm},
we propose a Jacobi-type algorithm to solve \eqref{pro-ortho-approxi}.
This algorithm includes the well-known Jacobi CoM2 algorithm as a special case.
In \cref{sec-weak-conver}, we prove the weak convergence of this algorithm under the cyclic ordering.
In \cref{sect-Jacobi-G},
we study the global convergence of this algorithm under the gradient based ordering for the 3rd order tensor and $p=2$ case.
In \cref{sect-experiment},
we report some numerical experiments showing the efficiency of this algorithm.
\section{Geometric properties}\label{geometric}
\subsection{Equivalent problems}
Let $\tens{A} \in \text{symm}(\mathbb{R}^{n\times\cdots\times n})$ and $1 \leq p \leq n$.
Let $\matr{X}\in \text{St}(p,n)$ and
$\widetilde{\tens{W}} = \tens{A}(\matr{X}).$
One problem equivalent to \eqref{pro-ortho-approxi} is to find
\begin{equation}\label{pro-stefiel}
\matr{X}_{*} = \argmax_{\matr{X}\in \text{St}(p,n)}\tilde{f}(\matr{X}),
\end{equation}
where
\begin{equation}\label{eq-cost-func-1}
\tilde{f}(\matr{X}) \stackrel{\sf def}{=} \sum\limits_{i=1}^{p}\widetilde{\tenselem{W}}^2_{i\cdots i}.
\end{equation}
\begin{lemma}\rm(\cite[Proposition 5.1]{chen2009tensor})
Let $\tens{C}^{*}$
be as in \eqref{pro-ortho-approxi}.
Then
\begin{equation*}\label{eq-orthogonal-A-T}
\langle\tens{A}-\tens{C}^{*}, u_{k}^{*}\otimes\cdots\otimes u_{k}^{*}\rangle = 0\ \ \text{and}\ \
\sigma_{k}^{*} = \langle\tens{A}, u_{k}^{*}\otimes\cdots\otimes u_{k}^{*}\rangle\notag
\end{equation*}
for $1\leq k\leq p$.
Moreover,
it holds that
\begin{equation}\label{eq-relation-problem}
\|\tens{A}-\tens{C}^{*}\|^2 = \|\tens{A}\|^2 - \|\tens{C}^{*}\|^2 = \|\tens{A}\|^2 - \sum\limits_{k=1}^{p}(\sigma_{k}^{*})^2.
\end{equation}
\end{lemma}
\begin{remark}\rm\label{remark-equivalence}
(i) Let $\tens{C}^{*}$ be as in \eqref{pro-ortho-approxi}
and $\matr{X}_{*}$ be as in \eqref{pro-stefiel}.
We see from \eqref{eq-relation-problem} that
\begin{equation*}
\matr{X}_{*} = [u_{1}^{*},\cdots,u_{p}^{*}]\ \ \text{and}\ \
\|\tens{A}-\tens{C}^{*}\|^2 = \|\tens{A}\|^2 - \tilde{f}(\matr{X}_{*}).
\end{equation*}
In other words,
to solve \eqref{pro-ortho-approxi},
it is enough for us to solve \eqref{pro-stefiel},
which is an optimization problem on $\text{St}(p,n)$.\\
(ii) If $p=1$, then \eqref{pro-stefiel} is the \emph{cubic spherical optimization problem} \cite{qi2009z,zhang2012best,Zhang12:MC}.
If $p=n$, then \eqref{pro-stefiel} is the \emph{approximate orthogonal tensor diagonalization problem} \cite{Como94:sp,comon1994tensor,Como10:book,LUC2017globally}.
\end{remark}
Let $\matr{Q}\in\ON{n}$ and
$\tens{W} = \tens{A}(\matr{Q})$.
Another problem, equivalent to \eqref{pro-stefiel}, is to find
\begin{equation}\label{pro-orthogonal}
\matr{Q}_{*} = \argmax_{\matr{Q}\in\ON{n}}f(\matr{Q}),
\end{equation}
where
\begin{equation}\label{eq-cost-func-2}
f(\matr{Q})\stackrel{\sf def}{=} \sum\limits_{i=1}^{p}\tenselem{W}^2_{i\cdots i}.
\end{equation}
In fact,
if $\matr{X}\in \text{St}(p,n)$ and $\matr{Q}=[\matr{X},\matr{Y}]\in\ON{n}$,
then $\tenselem{W}_{i_1\cdots i_d}=\widetilde{\tenselem{W}}_{i_1\cdots i_d}$ for any $1\leq i_1,\cdots,i_d\leq p$.
The equivalence between \eqref{pro-stefiel} and \eqref{pro-orthogonal} follows from the fact that $f(\matr{Q}) = \tilde{f}(\matr{X})$.
\begin{remark}\rm
Let $\tens{W}\in \text{symm}(\mathbb{R}^{n\times n\times n})$ and $1\leq p\leq n$.
Let $\widetilde{\tens{W}}=\tens{W}^{(1,2,\cdots,p)}$.
Then the objective used in \cite[(3.1)]{IshtAV13:simax} is the sum of squares of all the elements in $\widetilde{\tens{W}}$,
while \eqref{eq-cost-func-2} is the sum of squares of the diagonal elements in $\widetilde{\tens{W}}$.
They are the same if $p=1$.
\end{remark}
\subsection{Riemannian gradient}
\begin{definition}\rm
Let $\tens{A} \in \text{symm}(\mathbb{R}^{n\times\cdots\times n})$ and $1\leq i<j\leq n$.
Define
\begin{align*}
\sigma_{i,j}(\tens{A})\stackrel{\sf def}{=} \tenselem{A}_{ii\ldots i}\tenselem{A}_{ji\ldots i},\quad
d_{i,j}(\tens{A})\stackrel{\sf def}{=} \sigma_{i,j}(\tens{A})-\sigma_{j,i}(\tens{A}) = \tenselem{A}_{ii\ldots i}\tenselem{A}_{ji\ldots i}-\tenselem{A}_{ij\ldots j}\tenselem{A}_{jj\ldots j}.
\end{align*}
\end{definition}
\begin{theorem}\label{RiemanGrad-thm}\rm
The Riemannian gradient of
\eqref{eq-cost-func-2} at $\matr{Q}$ is
\begin{equation}\label{eq-Riemannian-gradient}
\ProjGrad{f}{\matr{Q}}= \matr{Q}\,\Lambda(\matr{Q}),
\end{equation}
where
\begin{align}\label{eq-gradient-On}
\Lambda(\matr{Q})\stackrel{\sf def}{=}
d\cdot
\left[\begin{smallmatrix}
0 & -d_{1,2}(\tens{W}) &
\ldots & -d_{1,p}(\tens{W})
& -\sigma_{1,p+1}(\tens{W})
& \cdots & -\sigma_{1,n}(\tens{W})\\ \\
d_{1,2}(\tens{W}) & 0 &
\ldots & -d_{2,p}(\tens{W})
& -\sigma_{2,p+1}(\tens{W})
& \cdots & -\sigma_{2,n}(\tens{W})\\ \\
\ldots&\ldots&\ldots&\ldots&\cdots&\cdots&\cdots\\ \\
d_{1,p}(\tens{W}) &
d_{2,p}(\tens{W}) & \ldots & 0
& -\sigma_{p,p+1}(\tens{W})
& \cdots & -\sigma_{p,n}(\tens{W})\\ \\
\sigma_{1,p+1}(\tens{W}) & \sigma_{2,p+1}(\tens{W})
& \cdots &\sigma_{p,p+1}(\tens{W}) & 0 &\cdots & 0\\ \\
\ldots&\ldots&\ldots&\ldots&\ldots&\cdots & \ldots\\ \\
\sigma_{1,n}(\tens{W}) &
\sigma_{2,n}(\tens{W}) & \ldots & \sigma_{p,n}(\tens{W}) &0
&\cdots& 0
\end{smallmatrix}\right].
\end{align}
\end{theorem}
\begin{proof}
Note that
\begin{equation*}
f(\matr{Q}) = \sum\limits_{j=1}^p \tenselem{W}^2_{jj\ldots j}
=\sum\limits_{j=1}^p(\sum\limits_{i_1,i_2,\ldots,i_d}\tenselem{A}_{i_1,i_2,\ldots,i_d}Q_{i_1,j}Q_{i_2,j}\ldots Q_{i_d,j})^2.
\end{equation*}
Let $\tens{V} = \tens{A} \contr{2} \matr{Q}^{\T} \cdots \contr{d} \matr{Q}^{\T}$.
Fix $1\leq i\leq n$ and $1\leq j\leq p$.
Then
\begin{align*}
\frac{\partial f}{\partial {Q}_{i,j}}
=2d\tenselem{W}_{jj\ldots j} \tenselem{V}_{ij\ldots j}
\end{align*}
by methods similar to \cite[Section 4.1]{LUC2017globally}.
Note that $\tens{W} = \tens{V}\contr{1} \matr{Q}^{\T}$.
We get the Euclidean gradient of \eqref{eq-cost-func-2} at $\matr{Q}$ as follows:
\begin{align*}
\nabla f(\matr{Q})
= 2d
\matr{Q}
\begin{bmatrix}
\tenselem{W}_{11\ldots 1} & \tenselem{W}_{12\ldots 2} & \ldots & \tenselem{W}_{1p\ldots p}& 0 &\cdots&0\\
\tenselem{W}_{21\ldots 1} & \tenselem{W}_{22\ldots 2} & \ldots & \tenselem{W}_{2p\ldots p}& 0 &\cdots&0\\
\ldots&\ldots&\ldots&\ldots& \cdots &\cdots&\cdots\\ \\
\tenselem{W}_{n1\ldots 1} & \tenselem{W}_{n2\ldots 2} & \ldots & \tenselem{W}_{np\ldots p}& 0 &\cdots&0
\end{bmatrix}
\begin{bmatrix}
\tenselem{W}_{1\ldots 1} & \cdots & 0&\cdots &0\\
\vdots&\ddots&\vdots&\cdots &0\\
0 & \cdots & \tenselem{W}_{p\cdots p}&\cdots &0\\
\vdots&\cdots&\cdots&\ddots &\vdots\\
0&\cdots&0&\cdots &0
\end{bmatrix}.
\end{align*}
By \cite[(3.35)]{Absil08:Optimization},
we get that
\begin{equation*}
\ProjGrad{f}{\matr{Q}}=
\frac{1}{2}\matr{Q}(\matr{Q}^{\T}\nabla f(\matr{Q})-\nabla f(\matr{Q})^{\T}\matr{Q})
=\matr{Q}\,\Lambda(\matr{Q}).
\end{equation*}
Then the proof is complete.
\end{proof}
\begin{remark}\rm
(i) If $p=1$,
we see that $\Lambda(\matr{Q})=0$ if and only if
\begin{equation*}
\tenselem{W}_{21\ldots 1} = \tenselem{W}_{31\ldots 1} = \cdots = \tenselem{W}_{n1\ldots 1} = 0,
\end{equation*}
which means that the first column of $\matr{Q}$ satisfies the condition in \cite[(2)]{qi2009z}.\\
(ii) The definition of $\Lambda(\matr{Q})$ in \eqref{eq-gradient-On} can be seen as an extension of \cite[(12)]{LUC2017globally}.
\end{remark}
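As an illustration of \cref{RiemanGrad-thm} (ours, not part of the original text), the matrix $\Lambda(\matr{Q})$ in \eqref{eq-gradient-On} can be assembled directly from the entries of $\tens{W}=\tens{A}(\matr{Q})$, and the gradient formula can be sanity-checked against a finite difference. In the NumPy sketch below $d=3$, indices are $0$-based, and all names are our own:
\begin{verbatim}
import numpy as np

def transform3(A, Q):
    return np.einsum('abc,aj,bk,cl->jkl', A, Q, Q, Q)   # W = A(Q)

def Lambda(W, p, d=3):
    # Assemble the skew-symmetric matrix Lambda(Q) from W = A(Q), using
    # sigma_{i,j}(W) = W_{ii..i} W_{ji..i} and d_{i,j}(W) = sigma_{i,j} - sigma_{j,i}
    n = W.shape[0]
    L = np.zeros((n, n))
    for i in range(p):
        for j in range(i + 1, n):
            if j < p:   # both indices in the leading block: entry d_{i,j}(W)
                v = W[i, i, i] * W[j, i, i] - W[i, j, j] * W[j, j, j]
            else:       # mixed pair: entry sigma_{i,j}(W)
                v = W[i, i, i] * W[j, i, i]
            L[j, i], L[i, j] = v, -v
    return d * L

# Finite-difference check: for skew-symmetric Omega,
# d/dt f(Q(I + t*Omega)) |_{t=0} = <grad f(Q), Q Omega> = <Lambda(Q), Omega>.
rng = np.random.default_rng(1)
n, p = 6, 2
A = rng.standard_normal((n, n, n))
A = sum(np.transpose(A, s) for s in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
f = lambda Q: np.sum(np.einsum('iii->i', transform3(A, Q))[:p] ** 2)
B = rng.standard_normal((n, n)); Om = B - B.T
t = 1e-6
print((f(Q + t * Q @ Om) - f(Q - t * Q @ Om)) / (2 * t),
      np.sum(Lambda(transform3(A, Q), p) * Om))
\end{verbatim}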
\begin{theorem}\rm
The Riemannian gradient of \eqref{eq-cost-func-1} at $\matr{X}$ satisfies
\begin{align}\label{Remannian-gradient-stp-2}
\matr{X}^{\T}\ProjGrad{\tilde{f}}{\matr{X}} = d\cdot
\left[\begin{smallmatrix}
0 & -d_{1,2}(\widetilde{\tens{W}}) &
\ldots & -d_{1,p}(\widetilde{\tens{W}})\\ \\
d_{1,2}(\widetilde{\tens{W}}) & 0 &
\ldots & -d_{2,p}(\widetilde{\tens{W}})\\ \\
\ldots&\ldots&\ldots&\ldots\\ \\
d_{1,p}(\widetilde{\tens{W}}) &
d_{2,p}(\widetilde{\tens{W}}) & \ldots & 0
\end{smallmatrix}\right].
\end{align}
\end{theorem}
\begin{proof}
The proof goes along the same lines as for \cref{RiemanGrad-thm}. Note that
\begin{equation*}
\tilde{f}(\matr{X}) = \sum\limits_{j=1}^p \widetilde{\tenselem{W}}^2_{jj\ldots j}
=\sum\limits_{j=1}^p(\sum\limits_{i_1,i_2,\ldots,i_d}\tenselem{A}_{i_1,i_2,\ldots,i_d}X_{i_1,j}X_{i_2,j}\ldots X_{i_d,j})^2.
\end{equation*}
Let $\widetilde{\tens{V}} = \tens{A} \contr{2} \matr{X}^{\T} \cdots \contr{d} \matr{X}^{\T}$.
Fix $1\leq i\leq n$ and $1\leq j\leq p$.
Then
\begin{align*}
\frac{\partial \tilde{f}}{\partial X_{i,j}}
=2d\widetilde{\tenselem{W}}_{jj\ldots j} \widetilde{\tenselem{V}}_{ij\ldots j}
\end{align*}
by methods similar to those in \cite[Section 4.1]{LUC2017globally}.
Note that $\widetilde{\tens{W}} = \widetilde{\tens{V}}\contr{1} \matr{X}^{\T}$.
We get the Euclidean gradient of \eqref{eq-cost-func-1} at $\matr{X}$ as follows:
\begin{align*}
\nabla \tilde{f}(\matr{X})&= 2d
\begin{bmatrix}
\widetilde{\tenselem{V}}_{11\ldots 1} & \widetilde{\tenselem{V}}_{12\ldots 2} & \cdots & \widetilde{\tenselem{V}}_{1p\ldots p} \\
\widetilde{\tenselem{V}}_{21\ldots 1} & \widetilde{\tenselem{V}}_{22\ldots 2} & \cdots & \widetilde{\tenselem{V}}_{2p\ldots p} \\
\cdots&\cdots&\cdots&\cdots\\
\widetilde{\tenselem{V}}_{n1\ldots 1} & \widetilde{\tenselem{V}}_{n2\ldots 2} & \cdots & \widetilde{\tenselem{V}}_{np\ldots p}
\end{bmatrix}
\begin{bmatrix}
\widetilde{\tenselem{W}}_{1\ldots 1} & \cdots & 0\\
\vdots & \ddots & \vdots\\
0 & \cdots & \widetilde{\tenselem{W}}_{p\cdots p}
\end{bmatrix}.
\end{align*}
It follows by \cite[(3.35)]{Absil08:Optimization} that
\begin{align}\label{Remannian-gradient-stp-1}
\ProjGrad{\tilde{f}}{\matr{X}}
= (\matr{I}_{n}-\matr{X}\matr{X}^{\T})\nabla \tilde{f}(\matr{X})
+ d\matr{X}\cdot
\left[\begin{smallmatrix}
0 & -d_{1,2}(\widetilde{\tens{W}}) &
\ldots & -d_{1,p}(\widetilde{\tens{W}})\\ \\
d_{1,2}(\widetilde{\tens{W}}) & 0 &
\ldots & -d_{2,p}(\widetilde{\tens{W}})\\ \\
\ldots&\ldots&\ldots&\ldots\\ \\
d_{1,p}(\widetilde{\tens{W}}) &
d_{2,p}(\widetilde{\tens{W}}) & \ldots & 0
\end{smallmatrix}\right],
\end{align}
and the proof is completed.
\end{proof}
\begin{proposition}\rm\label{pro-equiva-sationary}
Let $\tens{A} \in \text{symm}(\mathbb{R}^{n\times\cdots\times n})$ and $1 \leq p \leq n$.
Let $\matr{X}_{*}\in \text{St}(p,n)$ and $\matr{Q}_{*} = [\matr{X}_{*},\matr{Y}_{*}]\in \ON{n}.$
Suppose that $\tilde{f}$ is as in \eqref{eq-cost-func-1} and $f$ is as in \eqref{eq-cost-func-2}.
Then
$$\ProjGrad{\tilde{f}}{\matr{X}_{*}}=0\ \Leftrightarrow \ \ProjGrad{f}{\matr{Q}_{*}}=0.$$
\end{proposition}
\begin{proof}
Let
$\widetilde{\tens{W}}_{*} = \tens{A}(\matr{X}_{*})$
and
$\tens{W}_{*} = \tens{A}(\matr{Q}_{*})$.\\
($\Rightarrow$).
By \eqref{Remannian-gradient-stp-2},
we see that
$d_{i,j}(\tens{W}_{*}) = d_{i,j}(\widetilde{\tens{W}}_{*}) = 0$
for any $1\leq i<j \leq p$.
It follows by \eqref{Remannian-gradient-stp-1} that
$$\matr{Y}_{*}\matr{Y}_{*}^{\T}\nabla \tilde{f}(\matr{X}_{*})
= (\matr{I}_{n}-\matr{X}_{*}\matr{X}_{*}^{\T})\nabla \tilde{f}(\matr{X}_{*}) = 0,$$
and thus
$$\matr{Y}_{*}^{\T}\nabla \tilde{f}(\matr{X}_{*})
= \matr{Y}_{*}^{\T}\matr{Y}_{*}\matr{Y}_{*}^{\T}\nabla \tilde{f}(\matr{X}_{*}) = 0.$$
Then
$\sigma_{i,j}(\tens{W}_{*}) = 0$
for any $1\leq i\leq p<j\leq n$,
and thus $\ProjGrad{f}{\matr{Q}_{*}}=0$ by \eqref{eq-gradient-On}.\\
($\Leftarrow$).
By \eqref{eq-gradient-On},
we see that
$d_{i,j}(\widetilde{\tens{W}}_{*}) = d_{i,j}(\tens{W}_{*}) = 0$
for any $1\leq i<j \leq p$.
Note that
$\sigma_{i,j}(\tens{W}_{*}) = 0$
for any $1\leq i\leq p< j\leq n$.
It follows that
$\matr{Y}_{*}^{\T}\nabla \tilde{f}(\matr{X}_{*}) = 0,$
and thus
$$(\matr{I}_{n}-\matr{X}_{*}\matr{X}_{*}^{\T})\nabla \tilde{f}(\matr{X}_{*})
= \matr{Y}_{*}\matr{Y}_{*}^{\T}\nabla \tilde{f}(\matr{X}_{*}) = 0.$$
Then
$\ProjGrad{\tilde{f}}{\matr{X}_{*}}=0$ by \eqref{Remannian-gradient-stp-1}.
\end{proof}
\section{Jacobi low rank orthogonal approximation algorithm}\label{section-algorithm}
\subsection{Algorithm description}
Let $1\leq p\leq n$ and $\set{C}=\{(i,j), 1\leq i< j\leq n, i\leq p\}$.
We divide $\set{C}$ into two disjoint subsets
\begin{align*}
\mathcal{C}_{1} \stackrel{\sf def}{=} \{(i,j),\ 1\leq i<j\leq p\}\ \text{and}\ \
\mathcal{C}_{2} \stackrel{\sf def}{=} \{(i,j),\ 1\leq i\leq p<j\leq n\}.
\end{align*}
Denote by $\Gmat{i}{j}{\theta}$ the \emph{Givens rotation} matrix, as defined \textit{e.g.} in \cite[Section 2.2]{LUC2017globally}.
Now we formulate the {\it Jacobi low rank orthogonal approximation} (JLROA) algorithm for
problem \eqref{pro-orthogonal} as follows.
\begin{algorithm}\rm\label{al-JLROA}(JLROA algorithm)\\
{\bf Input:} $\tens{A}\in\text{symm}(\mathbb{R}^{n\times \cdots\times n})$, $1 \leq p \leq n$, a starting point $\matr{Q}_{0}$.\\
{\bf Output:} Sequence of iterations $\matr{Q}_{k}$.
\begin{itemize}
\item {\bf For} $k=1,2,\ldots$ until a stopping criterion is satisfied do
\item\quad Choose the pair $(i_k,j_k)\in\set{C}$ in the following cyclic ordering:
\begin{equation}\label{partial-cyclic-1}
\begin{split}
&(1,2) \to (1,3) \to \cdots \to (1,n) \to \\
& (2,3) \to \cdots \to (2,n) \to \\
& \cdots \to (p,p+1) \to \cdots \to (p,n) \to \\
&(1,2) \to (1,3) \to \cdots.
\end{split}
\end{equation}
\item\quad Solve $\theta_k^{*}$ that maximizes $h_k(\theta)\stackrel{\sf def}{=}\textit{f}(\matr{Q}_{k-1}\Gmat{i_k}{j_k}{\theta})$.
\item\quad Set $\matr{U}_k \stackrel{\sf def}{=} \Gmat{i_k}{j_k}{\theta^{*}_k}$, and update $\matr{Q}_k = \matr{Q}_{k-1} \matr{U}_k$.
\item {\bf End for}
\end{itemize}
\end{algorithm}
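To make the loop structure concrete, here is a schematic Python/NumPy version of one sweep of \cref{al-JLROA} for a 3rd order tensor (our own sketch, deliberately naive). The angle $\theta_k^{*}$ is found by a coarse grid search here instead of the closed-form solutions given later in \cref{exam-3rd-order}, and the sign convention of the Givens rotation is an assumption on our part:
\begin{verbatim}
import numpy as np

def givens(n, i, j, theta):
    # Givens rotation acting in the (i, j) coordinate plane (one sign convention)
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G

def cost(A, Q, p):
    W = np.einsum('abc,aj,bk,cl->jkl', A, Q, Q, Q)
    return np.sum(np.einsum('iii->i', W)[:p] ** 2)

def jlroa_sweep(A, Q, p, grid=361):
    # One pass through the cyclic ordering: pairs (i, j) with i <= p and i < j <= n
    n = A.shape[0]
    thetas = np.linspace(-np.pi / 2, np.pi / 2, grid)
    for i in range(p):
        for j in range(i + 1, n):
            k = int(np.argmax([cost(A, Q @ givens(n, i, j, t), p) for t in thetas]))
            Q = Q @ givens(n, i, j, thetas[k])
    return Q
\end{verbatim}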
\subsection{Elementary rotation}
Let $\tens{W} = \tens{A}(\matr{Q}_{k-1})$ and $\tens{T} = \tens{W}(\Gmat{i_k}{j_k}{\theta})$.
As in \cref{al-JLROA},
we define
\begin{align}\label{definition-h}
\textit{h}_k:\ [-\frac{\pi}{2},\frac{\pi}{2}]\longrightarrow \mathbb{R}^+,
\ \theta \longmapsto \textit{f}\ (\matr{Q}_{k-1}\Gmat{i_k}{j_k}{\theta})=\sum_{i=1}^{p}\tenselem{T}_{i\cdots i}^2
\end{align}
where $f$ is as in \eqref{eq-cost-func-2}.
Note that $\Gmat{i_k}{j_k}{\theta}=\Gmat{i_k}{j_k}{\theta+2\pi}$ and
$\tenselem{T}_{i\cdots i}^2(\theta)=\tenselem{T}_{i\cdots i}^2(\theta+\pi)$
for any $\theta\in\mathbb{R}$ and $1\leq i\leq p$.
We see that $\textit{h}_k$ has the same image as the corresponding function defined on all of $\mathbb{R}$.
So it is sufficient to determine $\theta_k^{*}\in [-\pi/2, \pi/2]$ such that
$\textit{h}_k(\theta_k^{*})=\max\limits_{\theta} \textit{h}_k(\theta),$
and we choose the $\theta_k^{*}$ with the smallest absolute value if there is more than one choice.
Let $\overline{\mathbb{R}}=\mathbb{R}\cup\{\pm\infty\}$.
Define
\begin{align*}
\tau_k:\ \overline{\mathbb{R}}\longrightarrow \mathbb{R}^+,
\ \ x \longmapsto \textit{h}_k(\arctan(x)).
\end{align*}
Let $x=\tan(\theta)\in\overline{\mathbb{R}}$ and $x^{*}_{k}=\tan(\theta_k^{*})$.
Then
\begin{align*}
\tau_k(x) - \tau_k(0) = \textit{h}_k(\theta) - \textit{h}_k(0)
= \sum_{i=1}^{p}\tenselem{T}_{i\cdots i}^2 - \sum_{i=1}^{p}\tenselem{W}_{i\cdots i}^2.
\end{align*}
\begin{lemma}\label{lemma-derivative-h}\rm
Let $\textit{h}_k$ be as in \eqref{definition-h}.
Then
$\textit{h}_k^{'}(\theta)= -2\Lambda(\matr{Q}_{k-1}\Gmat{i_k}{j_k}{\theta})_{i_k,j_k}.$
\end{lemma}
\begin{proof}
We denote by $\matr{G}(\theta)=\Gmat{i_k}{j_k}{\theta}$ for convenience.
Then it follows from \eqref{eq-Riemannian-gradient} and the methods similar to \cite[Lemma 5.7]{LUC2017globally} that
\begin{align*}
\textit{h}_k^{'}(\theta)&=\langle\ProjGrad{\textit{f}}{\matr{Q}_{k-1}\matr{G}(\theta)}, \matr{Q}_{k-1}\matr{G}^{'}(\theta)\rangle
=\langle \matr{Q}_{k-1}\matr{G}(\theta)\Lambda(\matr{Q}_{k-1}\matr{G}(\theta)), \matr{Q}_{k-1}\matr{G}^{'}(\theta)\rangle\\
&=\langle\Lambda(\matr{Q}_{k-1}\matr{G}(\theta)), {\matr{G}(\theta)}^{\T}\matr{G}^{'}(\theta)\rangle
= -2\Lambda(\matr{Q}_{k-1}\matr{G}(\theta))_{i_k,j_k}.
\end{align*}
\end{proof}
\begin{remark}\rm
Let $(i_k,j_k)\in\mathcal{C}_{1}$.
Then $h_k (\theta)$ in \eqref{definition-h} also has a period $\pi/2$ by \cite[Section 4.3]{LUC2017globally}.
In other words,
we can choose $\theta^{*}_k\in [-\pi/4, \pi/4]$ to maximize $h_k (\theta)$.
Equivalently,
we can choose $x^{*}_{k}\in[-1,1]$ to maximize $\tau_k(x)$.
\end{remark}
\subsection{Examples}
Let $\tens{A}\in\text{symm}(\mathbb{R}^{n\times \cdots\times n})$ be of 3rd or 4th order.
Now we show the details of how to solve $\theta_k^{*}$ in \cref{al-JLROA}.
In fact,
the methods in \cref{exam-3rd-order}(i) and \cref{exam-4th-order}(i)
were first formulated in \cite{comon1994tensor},
and can also be found in \cite[Section 6.2]{LUC2017globally}.
We present them here for convenience.
\begin{example}\rm\label{exam-3rd-order} (For 3rd order symmetric tensors)\\
(i) $\mathbf{Case}$ 1: $(i_k,j_k)\in \mathcal{C}_{1}$.
Take $p=2$ and the pair $(1,2)$ for example.
Let
\begin{align*}
a &= 6(\tenselem{W}_{111}\tenselem{W}_{112}-\tenselem{W}_{122}\tenselem{W}_{222}),\\
b &= 6(\tenselem{W}_{111}^2+\tenselem{W}_{222}^2-3\tenselem{W}_{112}^2-3\tenselem{W}_{122}^2
-2\tenselem{W}_{111}\tenselem{W}_{122}-2\tenselem{W}_{112}\tenselem{W}_{222}).
\end{align*}
Then we have that
\begin{align}
\tau_k(x)-\tau_k(0)&=\frac{1}{(1+x^2)^2}(a(x-x^3)-\frac{b}{2}x^2),\label{eq-inc-3}\\
\tau_k^{'}(x) &= \frac{1}{(1+x^2)^3}(a(1-6x^2+x^4)-b(x-x^3)).\notag
\end{align}
Denote by $\xi = x - 1/x$.
Then $\tau_k^{'}(x)=0$ if and only if
\begin{align*}
\Omega(\xi) \stackrel{\sf def}{=} a\xi^2+b\xi-4a = 0.
\end{align*}
Solve $\Omega(\xi) = 0$ for all the real roots $\xi_\ell$.
Then solve
$x^2-\xi_\ell x-1=0$
for all $\ell$ and take the best real root as $x_k^{*}$.\\
(ii) $\mathbf{Case}$ 2: $(i_k,j_k)\in \mathcal{C}_{2}$.
Take $p=2$ and the pair (1,$3$) for example.
It holds that
\begin{align}
\tau_k(x)-\tau_k(0) &
= \tenselem{T}_{111}^{2}-\tenselem{W}_{111}^2
=\frac{1}{(1+x^2)^3}[
(\tenselem{W}_{333}^2-\tenselem{W}_{111}^2)x^6+(6\tenselem{W}_{133}\tenselem{W}_{333})x^5\notag\\
&+(-3\tenselem{W}_{111}^2+9\tenselem{W}_{133}^2+6\tenselem{W}_{113}\tenselem{W}_{333})x^4
+(18\tenselem{W}_{113}\tenselem{W}_{133}+2\tenselem{W}_{111}\tenselem{W}_{333})x^3\notag\\
&+(-3\tenselem{W}_{111}^2+6\tenselem{W}_{133}\tenselem{W}_{111}+9\tenselem{W}_{113}^2)x^2
+(6\tenselem{W}_{111}\tenselem{W}_{113})x],\label{eq-increasement-order3-2}\\
\tau_k^{'}(x)
&= \frac{6\tenselem{T}_{111}(x)}{(1+x^2)^{5/2}}[-\tenselem{W}_{133}x^3+(\tenselem{W}_{333}-2\tenselem{W}_{113})x^2
+(2\tenselem{W}_{133}-\tenselem{W}_{111})x+\tenselem{W}_{113}].\notag
\end{align}
Then we solve
\begin{equation}\label{eq-order-3-station}
-\tenselem{W}_{133}x^3+(\tenselem{W}_{333}-2\tenselem{W}_{113})x^2+(2\tenselem{W}_{133}-\tenselem{W}_{111})x+\tenselem{W}_{113} = 0,
\end{equation}
and take $x^{*}_{k}$ to be the best point among these real roots and $\pm\infty$.
\end{example}
\begin{remark}\rm
\eqref{eq-order-3-station} is similar to equations in \cite[Section 3.5]{Lathauwer00:rank-1approximation},
which is for the best rank-1 approximation of a tensor in $\text{symm}(\mathbb{R}^{2\times 2\times 2})$.
\end{remark}
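The computations in \cref{exam-3rd-order} translate into a small root-finding routine. The Python/NumPy sketch below (ours; indices are $0$-based, $p=2$, and only the two representative pairs of the example are treated) selects $x_k^{*}$ by comparing the objective increase at all candidate roots:
\begin{verbatim}
import numpy as np

def best_x_pair12(W):
    # Case 1: the pair (1,2) in C_1; W is the current 3rd order tensor A(Q_{k-1})
    a = 6.0 * (W[0,0,0]*W[0,0,1] - W[0,1,1]*W[1,1,1])
    b = 6.0 * (W[0,0,0]**2 + W[1,1,1]**2 - 3*W[0,0,1]**2 - 3*W[0,1,1]**2
               - 2*W[0,0,0]*W[0,1,1] - 2*W[0,0,1]*W[1,1,1])
    incr = lambda x: (a*(x - x**3) - 0.5*b*x**2) / (1.0 + x**2)**2
    xs = [0.0]
    for xi in np.roots([a, b, -4.0*a]):              # Omega(xi) = a xi^2 + b xi - 4a
        if abs(xi.imag) < 1e-12:
            xs += [r.real for r in np.roots([1.0, -xi.real, -1.0])]  # x^2 - xi x - 1
    return max(xs, key=incr)

def best_x_pair13(W):
    # Case 2: the pair (1,3) in C_2; candidates are the real roots and +-infinity
    t111 = lambda th: (W[0,0,0]*np.cos(th)**3 + 3*W[0,0,2]*np.cos(th)**2*np.sin(th)
                       + 3*W[0,2,2]*np.cos(th)*np.sin(th)**2 + W[2,2,2]*np.sin(th)**3)
    incr = lambda x: t111(np.arctan(x))**2 - t111(0.0)**2
    coeffs = [-W[0,2,2], W[2,2,2] - 2*W[0,0,2], 2*W[0,2,2] - W[0,0,0], W[0,0,2]]
    xs = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-12] + [np.inf, -np.inf]
    return max(xs, key=incr)
\end{verbatim}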
\begin{example}\rm\label{exam-4th-order} (For 4th order symmetric tensors)\\
(i) $\mathbf{Case}$ 1: $(i_k,j_k)\in \mathcal{C}_{1}$.
Take $p=2$ and the pair $(1,2)$ for example.
It holds that
\begin{align*}
&\tau_k(x)-\tau_k(0)
= \tenselem{T}_{1111}^{2}+\tenselem{T}_{2222}^{2}-\tenselem{W}_{1111}^2-\tenselem{W}_{2222}^2\\
&=\frac{1}{(1+x^2)^4}((8\tenselem{W}_{1111}\tenselem{W}_{1112}-8\tenselem{W}_{1222}\tenselem{W}_{2222})(x-x^7)\\
&+(-4\tenselem{W}_{1111}^2 + 12\tenselem{W}_{1122}\tenselem{W}_{1111} + 16\tenselem{W}_{1112}^2 + 16\tenselem{W}_{1222}^2
- 4\tenselem{W}_{2222}^2 + 12\tenselem{W}_{1122}\tenselem{W}_{2222})(x^2+x^6)\\
&+(48\tenselem{W}_{1112}\tenselem{W}_{1122} + 8\tenselem{W}_{1111}\tenselem{W}_{1222}
- 48\tenselem{W}_{1122}\tenselem{W}_{1222} - 8\tenselem{W}_{1112}\tenselem{W}_{2222})(x^3-x^5)\\
&+(- 6\tenselem{W}_{1111}^2 + 4\tenselem{W}_{1111}\tenselem{W}_{2222} + 72\tenselem{W}_{1122}^2 - 6\tenselem{W}_{2222}^2 + 64\tenselem{W}_{1112}\tenselem{W}_{1222})x^4).
\end{align*}
Denote by
\begin{align*}
a &= 8(\tenselem{W}_{1111}\tenselem{W}_{1112}-\tenselem{W}_{1222}\tenselem{W}_{2222});\\
b &=8(\tenselem{W}_{1111}^2-3\tenselem{W}_{1122}\tenselem{W}_{1111}-4\tenselem{W}_{1112}^2
-4\tenselem{W}_{1222}^2+\tenselem{W}_{2222}^2-3\tenselem{W}_{1122}\tenselem{W}_{2222});\\
c &= 8(18\tenselem{W}_{1112}\tenselem{W}_{1122}-7\tenselem{W}_{1111}\tenselem{W}_{1112}+3\tenselem{W}_{1111}\tenselem{W}_{1222}\\
&-18\tenselem{W}_{1122}\tenselem{W}_{1222}-3\tenselem{W}_{1112}\tenselem{W}_{2222}+7\tenselem{W}_{1222}\tenselem{W}_{2222});\\
d &= 8(9\tenselem{W}_{1111}\tenselem{W}_{1122}-32\tenselem{W}_{1112}\tenselem{W}_{1222}-2\tenselem{W}_{1111}\tenselem{W}_{2222}\\
&+9\tenselem{W}_{1122}\tenselem{W}_{2222}+12\tenselem{W}_{1112}^2-36\tenselem{W}_{1122}^2+12\tenselem{W}_{1222}^2);\\
e &= 80(6\tenselem{W}_{1122}\tenselem{W}_{1222}-\tenselem{W}_{1111}\tenselem{W}_{1222}-6\tenselem{W}_{1112}\tenselem{W}_{1122}+\tenselem{W}_{1112}\tenselem{W}_{2222}).
\end{align*}
Then
\begin{align*}
&\tau_k'(x) = \frac{1}{(1+x^2)^5}[a(1+x^8)+b(x^7-x)+c(x^6+x^2)+d(x^5-x^3)+ex^4].
\end{align*}
Denote by $\xi = x - 1/x$.
It follows that $\tau_k^{'}(x)=0$ if and only if
\begin{align*}
\Omega(\xi) \stackrel{\sf def}{=} a\xi^4+b\xi^3+(4a + c)\xi^2+(3b + d)\xi+2a+2c+e = 0.
\end{align*}
Solve $\Omega(\xi) = 0$ for all the real roots $\xi_\ell$.
Then solve
$x^2-\xi_\ell x-1=0$
for all $\ell$ and take the best real root as $x_k^{*}$.\\
(ii) $\mathbf{Case}$ 2: $(i_k,j_k)\in \mathcal{C}_{2}$.
Take $p=2$ and the pair (1,$3$) for example.
It holds that
\begin{align*}
\tau_k(x)-\tau_k(0)
=\frac{1}{(1+x^2)^4}[
(\tenselem{W}_{3333}^2-\tenselem{W}_{1111}^2)x^8
+ (8\tenselem{W}_{1333}\tenselem{W}_{3333})x^7\\
&+ (- 4\tenselem{W}_{1111}^2 + 16\tenselem{W}_{1333}^2 + 12\tenselem{W}_{1133}\tenselem{W}_{3333})x^6
+ (48\tenselem{W}_{1133}\tenselem{W}_{1333} + 8\tenselem{W}_{1113}\tenselem{W}_{3333})x^5\\
&+ (- 6\tenselem{W}_{1111}^2 + 2\tenselem{W}_{3333}\tenselem{W}_{1111} + 36\tenselem{W}_{1133}^2 + 32\tenselem{W}_{1113}\tenselem{W}_{1333})x^4\\
&+ (48\tenselem{W}_{1113}\tenselem{W}_{1133} + 8\tenselem{W}_{1111}\tenselem{W}_{1333})x^3\\
&+ (- 4\tenselem{W}_{1111}^2 + 12\tenselem{W}_{1133}\tenselem{W}_{1111} + 16\tenselem{W}_{1113}^2)x^2
+ (8\tenselem{W}_{1111}\tenselem{W}_{1113})x],\\
\tau_k^{'}(x)
&= \frac{-8\tenselem{T}_{1111}}{(1+x^2)^3}
[\tenselem{W}_{1333}x^4 +(3\tenselem{W}_{1133}-\tenselem{W}_{3333})x^3
+ (3\tenselem{W}_{1113}-3\tenselem{W}_{1333})x^2\\
&+ (\tenselem{W}_{1111}-3\tenselem{W}_{1133})x-\tenselem{W}_{1113}].
\end{align*}
Then we solve
$$\tenselem{W}_{1333}x^4 +(3\tenselem{W}_{1133}-\tenselem{W}_{3333})x^3
+ (3\tenselem{W}_{1113}-3\tenselem{W}_{1333})x^2
+ (\tenselem{W}_{1111}-3\tenselem{W}_{1133})x-\tenselem{W}_{1113} = 0$$
and take $x^{*}_{k}$ to be the best point among these real roots and $\pm\infty$.
\end{example}
\section{Weak convergence to stationary points}\label{sec-weak-conver}
Let $N=p(2n-p-1)/2$ be the number of elements in $\set{C}$.
We denote by $\Sigma$ the set of all the \emph{ordered sets} $\set{P}$ of index pairs in $\set{C}$,
that is,
\begin{align*}
\set{P} = \{(i_1,j_1), (i_2,j_2), \cdots, (i_N,j_N)\}\in\Sigma.
\end{align*}
We denote by $\Sigma_{0}\subseteq\Sigma$ the subset of ordered sets
\begin{align*}
\set{P}^{*} = \{(i^{*}_1,j^{*}_1), (i^{*}_2,j^{*}_2), \cdots, (i^{*}_N,j^{*}_N)\}
\end{align*}
such that the first $n-1$ pairs $\{(i^{*}_1,j^{*}_1), \cdots, (i^{*}_{n-1},j^{*}_{n-1})\}$ have one common index, the next $n-2$ pairs $\{(i^{*}_n,j^{*}_n), \cdots, (i^{*}_{2n-3},j^{*}_{2n-3})\}$ have one common index, the next $n-3$ pairs $\{(i^{*}_{2n-2},j^{*}_{2n-2}), \cdots, (i^{*}_{3n-6},j^{*}_{3n-6})\}$ have one common index, and so on, until the last $n-p$ pairs $\{(i^{*}_{N-n+p+1},j^{*}_{N-n+p+1}), \cdots, (i^{*}_{N},j^{*}_{N})\}$ have one common index.
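For instance, the cyclic ordering \eqref{partial-cyclic-1} always belongs to $\Sigma_{0}$: its first $n-1$ pairs share the index $1$, the next $n-2$ pairs share the index $2$, and so on. The following small Python snippet (ours, purely illustrative) lists this ordering for $n=5$ and $p=2$:
\begin{verbatim}
def cyclic_ordering(n, p):
    # All pairs (i, j) with i <= p and i < j <= n, listed row by row
    return [(i, j) for i in range(1, p + 1) for j in range(i + 1, n + 1)]

P = cyclic_ordering(5, 2)
# P = [(1,2), (1,3), (1,4), (1,5), (2,3), (2,4), (2,5)]: the first 4 = n-1 pairs
# share the index 1 and the last 3 = n-p pairs share the index 2, so P is in Sigma_0.
\end{verbatim}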
\begin{definition}\label{def-equivalent-order}\rm
Let $\set{P}_1, \set{P}_2\in\Sigma$.
We say that $\set{P}_1$ is \emph{equivalent} to $\set{P}_2$ if we can obtain $\set{P}_2$ from $\set{P}_1$ using only the following operations:\\
(i) exchanging the positions of $(i_l,j_l)$ and $(i_{l+1},j_{l+1})$ when $\{i_l,j_l\}\cap \{i_{l+1},j_{l+1}\}=\emptyset$;\\
(ii) moving the first element to the position after the last one;\\
(iii) moving the last element to the position before the first one;\\
(iv) reversing the positions of all the elements.
\end{definition}
\begin{example}\rm
(i) Let $n=p=4$. Let $\set{P}=\{(1,3), (2,3), (2,4), (1,4), (3,4), (1,2)\}$.
We can see that $\set{P}$ is equivalent to
$$\{(2,4), (1,4), (3,4), (1,2), (1,3), (2,3)\}\ \ \text{and}\ \
\{(3,4), (1,3), (2,3), (2,4), (1,4), (1,2)\},$$
which are both in $\Sigma_{0}$.
On the other hand, it is not difficult to see that
\begin{align}\label{example-dim-4}
\{(1,2), (1,4), (2,3), (2,4), (1,3), (3,4)\}
\end{align}
is not equivalent to any $\set{P}^{*}\in\Sigma_{0}$.\\
(ii) Let $n=p$.
We can verify that there always exists a $\set{P}\in\Sigma$ as in \eqref{example-dim-4} when $n$ is odd and $n\geq 5$.
In fact, in this case, we can construct a graph by taking the indices $1,\dots,n$ as vertices and the index pairs in $\set{C}$ as edges (the complete graph $K_n$).
Since every vertex then has even degree $n-1$, Euler's theorem guarantees the existence of an Eulerian circuit, which corresponds to a $\set{P}\in\Sigma$ that is not equivalent to any $\set{P}^{*}\in\Sigma_{0}$.
When $n=5$,
one such $\set{P}$ is
\begin{align*}
\{ (1,2), (2,3), (3,4), (4,5), (3,5), (1,3), (1,4), (2,4), (2,5), (1,5) \}.
\end{align*}
\end{example}
\begin{algorithm}\rm\label{al-general}(General algorithm)\\
{\bf Input:} $\tens{A}\in\text{symm}(\mathbb{R}^{n\times \cdots\times n})$, $1 \leq p \leq n$, a starting point $\matr{Q}_{0}$, an ordered set $\set{P}\in\Sigma$.\\
{\bf Output:} Sequence of iterations $\matr{Q}_{k}$.
\begin{itemize}
\item {\bf For} $k=1,2,\ldots$ until a stopping criterion is satisfied do
\item\quad Choose the pair $(i_k,j_k)\in\set{C}$ according to $\set{P}$.
\item\quad Solve $\theta^{*}_{k}$ that maximizes $h_k(\theta)$ defined as in \eqref{definition-h}.
\item\quad Set $\matr{U}_k \stackrel{\sf def}{=} \Gmat{i_k}{j_k}{\theta^{*}_k}$, and update $\matr{Q}_k = \matr{Q}_{k-1} \matr{U}_k$.
\item {\bf End for}
\end{itemize}
\end{algorithm}
Let $\tens{A}\in\text{symm}(\mathbb{R}^{n\times \cdots\times n})$ and $\matr{Q}\in\ON{n}$.
Let $\tens{W} = \tens{A}(\matr{Q})$ and $(i,j)\in\set{C}$.
Suppose that $\theta_{*}$ is a maximizer of the function
\begin{align*}
\textit{h}:\ [-\frac{\pi}{2},\frac{\pi}{2}]\longrightarrow \mathbb{R}^+, \ \
\theta \longmapsto \textit{f}\ (\matr{Q}\Gmat{i}{j}{\theta})
\end{align*}
as in \eqref{definition-h}.
We define the operators
$\Phi_{i,j}$ by sending $\matr{Q}$ to $\matr{Q}\Gmat{i}{j}{\theta_{*}}.$
Then the iterations in the $t$-th loop\footnote{each loop contains $N$ successive iterations.} of \cref{al-general} are in fact generated as follows:
\begin{align*}
&\cdots\xrightarrow{\Phi_{i_N,j_N}} \matr{Q}_{(t-1)N}
\xrightarrow{\Phi_{i_1,j_1}}
\matr{Q}_{(t-1)N+1} \xrightarrow{\Phi_{i_2,j_2}} \matr{Q}_{(t-1)N+2}\\
&\xrightarrow{\Phi_{i_3,j_3}} \cdots \xrightarrow{\Phi_{i_N,j_N}} \matr{Q}_{tN} \xrightarrow{\Phi_{i_1,j_1}}
\matr{Q}_{tN+1}\xrightarrow{\Phi_{i_2,j_2}}\cdots.
\end{align*}
We define $\matr{Q}^{(t)}=\matr{Q}_{tN}$ and
$\Phi = \Phi_{i_N,j_N}\circ \cdots\circ\Phi_{i_2,j_2}\circ\Phi_{i_1,j_1}$. It is clear that $\Phi_{i,j}$ is continuous for all $(i,j)\in\set{C}$.
Therefore, $\Phi$ is also continuous.
Now we rewrite \cite[Lemma 5.5]{chen2009tensor} as follows.
\begin{lemma}\label{lem-fixed-point}\rm
Let $\Phi:\ON{n}\rightarrow\ON{n}$ be a continuous operator and the sequence $\{\matr{Q}^{(t)}\}_{t=1}^{\infty}\subseteq\ON{n}$ satisfy $\matr{Q}^{(t+1)}=\Phi(\matr{Q}^{(t)})$.
If a continuous function $f:\ON{n}\rightarrow\mathbb{R}$ satisfies that\\
(i) the sequence $\{f(\matr{Q}^{(t)})\}_{t=1}^{\infty}$ converges, and\\
(ii) if $f(\Phi(\matr{Q})) = f(\matr{Q})$, then $\Phi(\matr{Q})=\matr{Q}$,\\
then every accumulation point $\matr{Q}_{*}$ of $\{\matr{Q}^{(t)}\}_{t=1}^{\infty}$ satisfies that $\Phi(\matr{Q}_{*})=\matr{Q}_{*}$.
\end{lemma}
\begin{lemma}\label{lem-iden-decom}\rm
Suppose that $\set{P}$ is equivalent to a $\set{P}^{*}\in\Sigma_{0}$ and
\begin{equation}\label{eq-indentity-equality}
\Gmat{i_1}{j_1}{\theta_{1}}\Gmat{i_2}{j_2}{\theta_{2}}\cdots\Gmat{i_N}{j_N}{\theta_{N}} = \matr{I}_{n},
\end{equation}
where $\theta_k\in[-\pi/2,\pi/2]$.
Then $\Gmat{i_k}{j_k}{\theta_{k}} = \matr{I}_{n}$ for $1\leq k\leq N$.
\end{lemma}
\begin{proof}
Note that $\set{P}$ is equivalent to $\set{P}^{*}\in\Sigma_{0}$ and the position changes in \cref{def-equivalent-order} preserve
\eqref{eq-indentity-equality}.
After a finite number of such position changes,
there exist $\theta_{k}^{*}\in[-\pi/2,\pi/2]$ such that
\begin{align}\label{eq-ordered-star}
\Gmat{i^{*}_1}{j^{*}_1}{\theta^{*}_{1}}\Gmat{i^{*}_2}{j^{*}_2}{\theta^{*}_{2}}\cdots\Gmat{i^{*}_N}{j^{*}_N}{\theta^{*}_{N}} = \matr{I}_{n}.
\end{align}
Without loss of generality, we can suppose that
\begin{align*}
\set{P}^{*} = \{(1,2), (1,3), \cdots, (1,n), (2,3), \cdots, (2,n), (3,4), \cdots, (p,n)\},
\end{align*}
as in \eqref{partial-cyclic-1}.
Then \eqref{eq-ordered-star} is
$\Gmat{1}{2}{\theta^{*}_{1}}\Gmat{1}{3}{\theta^{*}_{2}}\cdots\Gmat{p}{n}{\theta^{*}_{N}} = \matr{I}_{n}.$
It follows that
$$\Gmat{1}{3}{\theta^{*}_{2}}\cdots\Gmat{p}{n}{\theta^{*}_{N}} = \Gmat{1}{2}{-\theta^{*}_{1}}.$$
It is not difficult to verify that $(\Gmat{1}{3}{\theta^{*}_{2}}\cdots\Gmat{p}{n}{\theta^{*}_{N}})_{21}=0$, since none of these factors acts on the second coordinate plane in a way that produces a component along the first column.
Then $\theta^{*}_{1}=0$.
Similarly, by
$\Gmat{1}{4}{\theta^{*}_{3}}\cdots\Gmat{p}{n}{\theta^{*}_{N}} = \Gmat{1}{3}{-\theta^{*}_{2}},$
we get $\theta^{*}_{2}=0$.
Repeating this process $N-1$ times completes the proof.
\end{proof}
\begin{remark}\rm
It may be interesting to ask whether \cref{lem-iden-decom} holds for any $\set{P}\in\Sigma$.
In fact, when $p=n=4$, a counterexample is
\begin{align*}
\Gmat{1}{2}{\pi/2}\Gmat{1}{4}{\pi/2}\Gmat{2}{3}{-\pi/2}\Gmat{2}{4}{-\pi/2}\Gmat{1}{3}{-\pi/2}\Gmat{3}{4}{-\pi/2}= \matr{I}_{4}.
\end{align*}
\end{remark}
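The identity in this remark can be checked numerically; the short NumPy sketch below (ours) does so under one particular sign convention for the Givens rotation, which is an assumption on our part and may have to be adjusted to match the convention of \cite[Section 2.2]{LUC2017globally}:
\begin{verbatim}
import numpy as np

def givens(n, i, j, theta):
    # Givens rotation in the (i, j) plane, 0-based indices (assumed convention)
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G

h = np.pi / 2
P = (givens(4, 0, 1, h) @ givens(4, 0, 3, h) @ givens(4, 1, 2, -h)
     @ givens(4, 1, 3, -h) @ givens(4, 0, 2, -h) @ givens(4, 2, 3, -h))
print(np.round(P, 12))   # under this convention the product is the 4x4 identity
\end{verbatim}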
\begin{theorem}\rm
In \cref{al-general}, if $\set{P}$ is equivalent to a $\set{P}^{*}\in\Sigma_{0}$, then every accumulation point is a stationary point.
\end{theorem}
\begin{proof}
Suppose that $\matr{Q}_{*}$ is an accumulation point of $\{\matr{Q}_k, k\in\mathbb{N}\}$.
Then there exists $1\leq\ell_{*}\leq N$ such that $\matr{Q}_{*}$ is an accumulation point of $\{\matr{Q}_{tN+\ell_{*}}, t\in\mathbb{N}\}$.\\
Case I: If $\ell_{*}= N$,
by \cref{lem-fixed-point}, we see that $\Phi(\matr{Q}_{*})=\matr{Q}_{*}$.
It follows by \cref{lem-iden-decom} that $\Phi_{i,j}(\matr{Q}_{*})=\matr{Q}_{*}$ for all $(i,j)\in\set{C}$.
Then $\matr{Q}_{*}$ is a stationary point by \cref{RiemanGrad-thm}.\\
Case II: If $\ell_{*}< N$, we can take $\matr{Q}_{\ell_{*}}$ as the starting point. Let $\set{P}^{'}\in\Sigma$ be obtained by applying manipulation (ii) of \cref{def-equivalent-order} to $\set{P}$ successively $\ell_{*}$ times. Let $\Phi^{'}$ be the composition corresponding to $\set{P}^{'}$. As in Case I, we see that $\Phi^{'}(\matr{Q}_{*})=\matr{Q}_{*}$. Note that $\set{P}^{'}$ is also equivalent to $\set{P}^{*}\in\Sigma_{0}$. By the same reduction as in Case I, the proof is complete.
\end{proof}
\begin{corollary}\rm
(i) In \cref{al-JLROA}, every accumulation point is a stationary point.\\
(ii) In the Jacobi CoM2 algorithm, every accumulation point is a stationary point.
\end{corollary}
\section{Jacobi-G algorithm and its convergence}\label{sect-Jacobi-G}
\subsection{Jacobi-G algorithm}
In contrast to the cyclic ordering \eqref{partial-cyclic-1} in \cref{al-JLROA} and the fixed ordering $\set{P}$ in \cref{al-general},
a pair selection rule for Jacobi-type algorithms based on the Riemannian gradient was proposed in \cite{IshtAV13:simax}.
Under this rule,
the pair $(i_k,j_k)$ at each iteration is chosen such that
\begin{equation}\label{eq:pair_selection_gradient}
|h_{k}^{'}(0)| = 2|(\matr{Q}_{k-1}^{\intercal}\ProjGrad{f}{\matr{Q}_{k-1}})_{i_k,j_k}| \ge \varepsilon \|\ProjGrad{f}{\matr{Q}_{k-1}}\|,
\end{equation}
where $0<\varepsilon\leq2/n$ is fixed.
By \cite[Lemma 5.2]{IshtAV13:simax} and \cite[Lemma 3.1]{LUC2017globally},
we see that it is always possible to find such a pair if $f$ is differentiable.
\begin{algorithm}\rm\label{alg:jacobi-G}(Jacobi-G algorithm)\\
{\bf Input:} $\tens{A}\in\text{symm}(\mathbb{R}^{n\times \cdots\times n})$, $1\leq p\leq n$,
$0<\varepsilon\leq 2/n$,
a starting point $\matr{Q}_{0}$.\\
{\bf Output:} Sequence of iterations $\{\matr{Q}_{k}\}_{k\ge1}$.
\begin{itemize}
\item {\bf For} $k=1,2,\ldots$ until a stopping criterion is satisfied do
\item\quad Choose a pair $(i_k,j_k)$ satisfying \eqref{eq:pair_selection_gradient} at $\matr{Q}_{k-1}$.
\item\quad Solve $\theta^{*}_{k}$ that maximizes $h_k(\theta)$ defined as in \eqref{definition-h}.
\item\quad Set $\matr{U}_k \stackrel{\sf def}{=} \Gmat{i_k}{j_k}{\theta^{*}_k}$, and update $\matr{Q}_k = \matr{Q}_{k-1} \matr{U}_k$.
\item {\bf End for}
\end{itemize}
\end{algorithm}
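Since $\matr{Q}_{k-1}^{\intercal}\ProjGrad{f}{\matr{Q}_{k-1}}=\Lambda(\matr{Q}_{k-1})$ and $h_k'(0)=-2\Lambda(\matr{Q}_{k-1})_{i_k,j_k}$ by \cref{lemma-derivative-h}, one simple way to realize \eqref{eq:pair_selection_gradient} is to pick the pair in $\set{C}$ with the largest $|\Lambda_{ij}|$; this choice satisfies the rule whenever $\varepsilon\leq 2/n$, because $\|\Lambda(\matr{Q}_{k-1})\|\leq n\max_{(i,j)\in\set{C}}|\Lambda(\matr{Q}_{k-1})_{ij}|$. The following Python sketch (ours; the input \texttt{Lam} is assumed to be the matrix $\Lambda(\matr{Q}_{k-1})$ of \eqref{eq-gradient-On}) implements this choice:
\begin{verbatim}
def select_pair(Lam, p):
    # Return the pair (i, j) in C with the largest |Lambda_ij| (0-based indices)
    n = Lam.shape[0]
    best, pair = -1.0, None
    for i in range(p):
        for j in range(i + 1, n):
            if abs(Lam[i, j]) > best:
                best, pair = abs(Lam[i, j]), (i, j)
    return pair
\end{verbatim}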
\begin{remark}\rm\label{remark-local-conv}
(i) By \cite[Theorem 5.4]{IshtAV13:simax} and \cite[Theorem 3.3]{LUC2017globally},
we see that every accumulation point of the iterations in \cref{alg:jacobi-G} is a stationary point of $f$.\\
(ii) Let $\tens{A}\in\text{symm}(\mathbb{R}^{n\times n\times n})$ and $p=1$.
Then \cref{alg:jacobi-G} is the same as the Jacobi-type algorithm in \cite{IshtAV13:simax},
which was developed to find the best low multilinear rank approximation of symmetric tensors.
\end{remark}
In this section,
we mainly prove the following result for \cref{alg:jacobi-G}.
The proof is postponed to
\cref{subsec-main-proof}.
\begin{theorem}\rm\label{theorem-main-covergence}
Let $\tens{A}\in\text{symm}(\mathbb{R}^{n\times n\times n})$ with $n\geq 3$.
Suppose that $p=2$ and $\matr{Q}_{\ast}$ is an accumulation point of \cref{alg:jacobi-G} satisfying
\begin{align}
&\tenselem{A}(\matr{Q}_{\ast})_{112}^2+\tenselem{A}(\matr{Q}_{\ast})_{122}^2\neq 0,\label{eq-condition-not-zero}\\
&\tenselem{A}(\matr{Q}_{\ast})_{333}\tenselem{A}(\matr{Q}_{\ast})_{444}\cdots\tenselem{A}(\matr{Q}_{\ast})_{nnn}\neq 0.\label{eq-condition-not-zero-2}
\end{align}
Then either $\matr{Q}_{\ast}$ is the unique limit point,
or there exist an infinite number of accumulation points.
\end{theorem}
\subsection{Some lemmas}
\begin{lemma}\rm\label{lemma-double-derivative}
Let $\tens{W}\in\text{symm}(\mathbb{R}^{2\times 2\times 2})$ and
$\tens{T}=\tens{W}(\Gmat{1}{2}{\arctan x})$
with $x\in\overline{\mathbb{R}}$.
Define
$\tau:\ \overline{\mathbb{R}}\rightarrow \mathbb{R}^+$
sending $x$ to $\tenselem{T}_{111}^2.$
Suppose that $\tenselem{W}_{222}\neq0$ and $\tau(0)=\max\limits_{x\in\overline{\mathbb{R}}} \tau(x).$
Then\\
\noindent (i) $\tenselem{W}_{111}\neq0$, $\tenselem{W}_{112}=0$,\\
\noindent (ii) $\tenselem{W}_{111}(2\tenselem{W}_{122}-\tenselem{W}_{111})<0$.
\end{lemma}
\begin{proof}
(i) It is clear that $|\tenselem{W}_{222}|\leq|\tenselem{W}_{111}|$ since $\tau(0)\geq\tau(\pm\infty)$. Then $\tenselem{W}_{111}\neq0$.
Let $\theta = \arctan x$.
We have that
\[
\frac{d\tenselem{T}_{111}}{d\theta}=3\tenselem{T}_{112},\ \
\frac{d\tenselem{T}_{112}}{d\theta}=2\tenselem{T}_{122}-\tenselem{T}_{111}
\]
by straightforward differentiation \cite[Page 10]{LUC2017globally}.
It follows that
\begin{align}
\tau'(x)&=2\tenselem{T}_{111}\frac{d\tenselem{T}_{111}}{d\theta}\frac{d\theta}{dx}=\frac{6\tenselem{T}_{111}\tenselem{T}_{112}}{1+x^2},\label{eq-tau-derivative}\\
\tau''(x)
&=\frac{6}{(1+x^2)^2}(3\tenselem{T}_{112}^2+2\tenselem{T}_{111}\tenselem{T}_{122}
-\tenselem{T}_{111}^2-2\tenselem{T}_{111}\tenselem{T}_{112}x).\label{eq-tau-2-derivative}
\end{align}
Note that $\tau'(0)=0$.
We have $\tenselem{W}_{112}=0$ by \eqref{eq-tau-derivative}.\\
(ii) Note that $\tau''(0)\leq0$.
We have
$2\tenselem{W}_{111}\tenselem{W}_{122}-\tenselem{W}_{111}^2\leq0$
by \eqref{eq-tau-2-derivative}.
To complete the proof, it suffices to show that
$\tau(0)<\max\limits_{x\in\overline{\mathbb{R}}} \tau(x)$ in the equality case, where we may assume without loss of generality that
$\tenselem{W}_{111}=1$, $\tenselem{W}_{122}=1/2$ and $\tenselem{W}_{222}=\beta\neq0$ (recall that $\tenselem{W}_{112}=0$ by (i)).
In fact, it can be verified that
$$\tau(x) = \frac{(1+\frac{3}{2}x^2+\beta x^3)^2}{(1+x^2)^3}$$
in this case, and
$$\max\limits_{x\in\overline{\mathbb{R}}} \tau(x)\geq\tau(2\beta)=\frac{(1+6\beta^2+8\beta^4)^2}{(1+4\beta^2)^3}>\tau(0)=1.$$
\end{proof}
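The last step of the above proof can also be double-checked symbolically. The following SymPy sketch (ours) reproduces $\tau(x)$ under the normalization $\tenselem{W}_{111}=1$, $\tenselem{W}_{112}=0$, $\tenselem{W}_{122}=1/2$, $\tenselem{W}_{222}=\beta$ and evaluates $\tau(2\beta)-\tau(0)$:
\begin{verbatim}
import sympy as sp

x, beta = sp.symbols('x beta', real=True)
c, s = 1/sp.sqrt(1 + x**2), x/sp.sqrt(1 + x**2)
# T_111 after the plane rotation, with W_111 = 1, W_112 = 0, W_122 = 1/2, W_222 = beta
T111 = c**3 + sp.Rational(3, 2)*c*s**2 + beta*s**3
tau = sp.simplify(T111**2)            # (1 + 3x^2/2 + beta x^3)^2 / (1 + x^2)^3
diff = sp.factor(sp.simplify(tau.subs(x, 2*beta) - tau.subs(x, 0)))
print(diff)   # should reduce to 4*beta**4/(4*beta**2 + 1), positive for beta != 0
\end{verbatim}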
\begin{definition}\label{re-defi-index}\rm(\cite[Definition 3.11]{LUC2018})
Let $\tens{A}\in\text{symm}(\mathbb{R}^{n\times n\times n})$ and $1\le i<j\le n$.
Suppose that $\tenselem{A}_{iii}\tenselem{A}_{iij}=\tenselem{A}_{ijj}\tenselem{A}_{jjj}$.
The \emph{stationary diagonal ratio},
denoted by $\gamma_{ij}(\tens{A})$,
is defined as follows.
\[
\gamma_{ij}(\tens{A}) \stackrel{\sf def}{=}
\begin{cases}
0, & \text{if}\ \tens{A}^{(i,j)}= \mathbf{0};\\
\infty, & \text{if}\ \tenselem{A}_{iii}=\tenselem{A}_{jjj}=0\quad\text{and}\quad\tenselem{A}^2_{ijj} +\tenselem{A}^2_{iij}\neq0;\\
\end{cases}
\]
otherwise, $\gamma_{ij}(\tens{A})$ is the {(unique)} number such that
\[
\begin{pmatrix}\tenselem{A}_{ijj} \\ \tenselem{A}_{iij} \end{pmatrix} = \gamma_{ij}(\tens{A})\begin{pmatrix}\tenselem{A}_{iii}\\\tenselem{A}_{jjj}\end{pmatrix}.
\]
\end{definition}
\begin{lemma}\label{lemma-extreme-state-02}\rm
Let $\tens{W}\in\text{symm}(\mathbb{R}^{2\times 2\times 2})$ and
$\tens{T}=\tens{W}(\Gmat{1}{2}{\arctan x})$
with $x\in\mathbb{R}$ and $x\neq0$.
Suppose that $\|\diag{\tens{W}}\|=\|\diag{\tens{T}}\|\neq0$ and
$$\tenselem{W}_{111}\tenselem{W}_{112}=\tenselem{W}_{122}\tenselem{W}_{222},\ \ \tenselem{T}_{111}\tenselem{T}_{112}=\tenselem{T}_{122}\tenselem{T}_{222}.$$
Then $\gamma_{12}(\tens{W}) = \gamma_{12}(\tens{T}) = -1$ or $1/3$.
\end{lemma}
\begin{proof}
Note that $\|\diag{\tens{W}}\|=\|\diag{\tens{T}}\|$ and
$\|\tens{W}\| = \|\tens{T}\|$.
We see that
$|\gamma_{12}(\tens{W})| = |\gamma_{12}(\tens{T})|$.
Let $\tens{T} =\tens{W}(\Gmat{1}{2}{\arctan x})$.
Define
\begin{align*}
\tau:\ \mathbb{R} \longrightarrow \mathbb{R}^+,
\ x \longmapsto \|\diag{\tens{T}}\|^2 = \tenselem{T}_{111}^2+\tenselem{T}_{222}^2.
\end{align*}
Then $\tau(x)=\tau(0)$ by assumption.
Since $\tenselem{W}_{111}\tenselem{W}_{112}=\tenselem{W}_{122}\tenselem{W}_{222}$, the coefficient $a$ in \eqref{eq-inc-3} vanishes; as $x\neq0$, it follows from \eqref{eq-inc-3} that
\begin{equation}\label{eq-0-double-derivative}
\tenselem{W}_{111}^2+\tenselem{W}_{222}^2-3\tenselem{W}_{112}^2-3\tenselem{W}_{122}^2
-2\tenselem{W}_{111}\tenselem{W}_{122}-2\tenselem{W}_{112}\tenselem{W}_{222}=0.
\end{equation}
Substituting $\tenselem{W}_{122}=\gamma_{12}(\tens{W})\tenselem{W}_{111}$ and $\tenselem{W}_{112}=\gamma_{12}(\tens{W})\tenselem{W}_{222}$ into \eqref{eq-0-double-derivative},
we get that $\gamma_{12}(\tens{W})=-1$ or $1/3$.
Note that $\tens{W}=\tens{T}((\Gmat{1}{2}{\arctan x})^\intercal)$.
We can similarly get that $\gamma_{12}(\tens{T})=-1$ or $1/3$.
\end{proof}
\begin{lemma}\rm\label{lemma-3-dimension-p-2}
Let $\tens{W}\in\text{symm}(\mathbb{R}^{3\times 3\times 3})$ and
$\tens{T}=\tens{W}(\Gmat{1}{3}{\arctan x})$
with $x\in\overline{\mathbb{R}}$ and $x\neq0$.
Suppose that $|\tenselem{W}_{111}|=|\tenselem{T}_{111}|>0$
and
$$\tenselem{W}_{111}\tenselem{W}_{112}=\tenselem{W}_{122}\tenselem{W}_{222},\ \
\tenselem{T}_{111}\tenselem{T}_{112}=\tenselem{T}_{122}\tenselem{T}_{222},\ \
\tenselem{W}_{113}=\tenselem{W}_{223}=\tenselem{T}_{113}=\tenselem{T}_{223}=0.$$
Then $\tenselem{W}_{112}=\tenselem{W}_{122}=\tenselem{T}_{112}=\tenselem{T}_{122}=0.$
\end{lemma}
\begin{proof}
It can be verified that
$$-\frac{x}{\sqrt{1+x^2}}\tenselem{W}_{122} = \tenselem{T}_{223} = 0,$$
and thus $\tenselem{W}_{122}=0$.
It follows by the condition that $\tenselem{W}_{112}=0$.
Note that
$\tens{W}=\tens{T}((\Gmat{1}{3}{\arctan x})^{\intercal}).$
We can similarly get that $\tenselem{T}_{112}=\tenselem{T}_{122}=0$.
\end{proof}
\subsection{Proof of \cref{theorem-main-covergence}}\label{subsec-main-proof}
\begin{lemma}\rm\label{lemma-weak-inequality}
Let $\tens{A}\in\text{symm}(\mathbb{R}^{n\times n\times n})$.
Let $h_{k}(\theta)$ be as in \eqref{definition-h} for $k\in\mathbb{N}$.
Then there exists $\delta>0$ such that
\begin{equation}\label{eq-lemma-weak-ineuqality}
h_{k}(\theta_k^{*})-h_{k}(0)\geq \delta |h_{k}'(0)|^2
\end{equation}
for any $k\in\mathbb{N}$ with $(i_k,j_k)\in \mathcal{C}_{2}$ in \cref{alg:jacobi-G}.
\end{lemma}
\begin{proof}
Let $\tens{W} = \tens{A}(\matr{Q}_{k-1})$ and $\tens{T} = \tens{W}(\Gmat{i_k}{j_k}{\theta})$.
Let $(i,j)=(i_k,j_k)$.
It is clear that $\tenselem{T}_{iii}(\theta)$ is a trigonometric polynomial of degree at most some fixed $n_{0}$,
for all the iterations with $(i_k,j_k)\in \mathcal{C}_{2}$.
By \cite[Theorem 1]{bell2015bernstein},
we see that, at $\theta=0$,
\begin{equation*}
\tenselem{T}_{iii}'(0)^2 \leq n_{0}^2(\|\tenselem{T}_{iii}\|_{\infty}^2-\tenselem{T}_{iii}^2(0)) = n_{0}^2(h_{k}(\theta_k^{*})-h_{k}(0)),
\end{equation*}
where the last equality holds because, for $(i_k,j_k)\in\mathcal{C}_{2}$, the rotation affects only the $i$-th one among the first $p$ diagonal entries.
Note that $h_{k}'(0)=2\tenselem{T}_{iii}(0)\tenselem{T}_{iii}'(0)$.
Let $M>0$ be such that $4 n_{0}^2\tenselem{T}_{iii}^2(0)<M$ for all the iterations with $(i_k,j_k)\in\mathcal{C}_{2}$.
Then
\begin{equation*}
|h_{k}'(0)|^2 \leq 4 n_{0}^2\tenselem{T}^2_{iii}(0)(h_{k}(\theta_k^{*})-h_{k}(0))
\leq M(h_{k}(\theta_k^{*})-h_{k}(0)).
\end{equation*}
The proof is completed if we set $\delta=1/M$.
\end{proof}
\begin{remark}\rm
Let $\tens{A}\in\text{symm}(\mathbb{R}^{n\times n\times n\times n})$ be of 4th order.
By similar methods,
we can also prove \eqref{eq-lemma-weak-ineuqality}
for pairs in $\mathcal{C}_{1}$ or in $\mathcal{C}_{2}$.
\end{remark}
Now we need a result in \cite{LUC2017globally},
which is a direct consequence of \cite[Theorem 2.3]{SU15:pro}.
\begin{theorem}\rm\label{theorem-convegence-general}({\cite[Corollary 5.4]{LUC2017globally}})
Let $f$ be a real analytic function from $\ON{n}$ to $\mathbb{R}$.
Suppose that $\{\matr{Q}_k:k\in\mathbb{N}\}\subseteq\ON{n}$ and, for large enough $k$,\\
(i) there exists $\sigma>0$ such that
\begin{equation*}\label{condition-coro-KL}
|{f}(\matr{Q}_{k})-{f}(\matr{Q}_{k-1})|\geq \sigma\|\ProjGrad{f}{\matr{Q}_{k-1}}\| \|\matr{Q}_{k}-\matr{Q}_{k-1}\|,
\end{equation*}
(ii) $\ProjGrad{f}{\matr{Q}_{k-1}}=0$ implies that $\matr{Q}_{k}=\matr{Q}_{k-1}$.\\
Then the iterations $\{\matr{Q}_k:k\in\mathbb{N}\}$ converge to a point $\matr{Q}_*\in\ON{n}$.
\end{theorem}
\begin{proof}[Proof of \cref{theorem-main-covergence}]
Assume that there exist only finitely many accumulation points,
denoted by $\matr{Q}^{(\ell)}$ $(1\leq\ell\leq m)$.
Then any accumulation point is a stationary point by \cref{remark-local-conv}(i).
In other words,
it holds that $\Lambda(\matr{Q}^{(\ell)})=0$ for all $1\leq\ell\leq m$ by \eqref{eq-Riemannian-gradient}.
Let $\matr{Q}_{\ast}=\matr{Q}^{(1)}$.
Now we prove that $\matr{Q}_{\ast}$ is the unique limit point.\\
\textbf{Step 1.}
We first prove that all the accumulation points satisfy \eqref{eq-condition-not-zero} and \eqref{eq-condition-not-zero-2} if $\matr{Q}_{\ast}$ satisfies them.
Note that the number of accumulation points is finite.
We can see that any two different accumulation points can be connected by a finite combination of the following two types of paths.\\
(a) Take the pair $(1,2)\in\mathcal{C}_{1}$.
If $\{x_k^{\ast},(i_k,j_k)=(1,2)\}$ is finite or converges to 0,
this path does not appear and we skip it.
Otherwise,
this set has a nonzero accumulation point $\zeta$ and a subsequence converges to it.
We assume that
$$\{x_k^{\ast},(i_k,j_k)=(1,2)\}\rightarrow\zeta\neq0$$
without loss of generality.
Note that
$\{\matr{Q}_{k-1},(i_k,j_k)=(1,2)\}$ has an accumulation point.
We assume that
$$\{\matr{Q}_{k-1},(i_k,j_k)=(1,2)\}\rightarrow\matr{Q}^{(\ell_1)}$$
without loss of generality.
Then $\matr{Q}^{(\ell_2)}=\matr{Q}^{(\ell_1)}\matr{G}^{(1,2,\arctan\zeta)}$ is another accumulation point, distinct from $\matr{Q}^{(\ell_1)}$.
It is clear that $\tenselem{A}(\matr{Q}^{(\ell_1)})_{iii}=\tenselem{A}(\matr{Q}^{(\ell_2)})_{iii}$ for $3\leq i\leq n$.
Note that $\tens{A}(\matr{Q}^{(\ell_1)})^{(1,2)}$ and $\tens{A}(\matr{Q}^{(\ell_2)})^{(1,2)}$
satisfy the conditions in \cref{lemma-extreme-state-02}.
We see that
$$\tenselem{A}(\matr{Q}^{(\ell_1)})_{112}^2+\tenselem{A}(\matr{Q}^{(\ell_1)})_{122}^2\neq0,\
\tenselem{A}(\matr{Q}^{(\ell_2)})_{112}^2+\tenselem{A}(\matr{Q}^{(\ell_2)})_{122}^2\neq0.$$
(b) Take the pair $(1,3)\in\mathcal{C}_{2}$ for example.
Other pairs in $\mathcal{C}_{2}$ are similar.
If $\{x_k^{\ast},(i_k,j_k)=(1,3)\}$ is finite or converges to 0,
this path does not appear and we skip it.
Otherwise,
this set has a nonzero accumulation point $\zeta$ and a subsequence converges to it.
We assume that
$$\{x_k^{\ast},(i_k,j_k)=(1,3)\}\rightarrow\zeta\neq0$$
without loss of generality.
Note that
$\{\matr{Q}_{k-1},(i_k,j_k)=(1,3)\}$ has an accumulation point.
We assume that
$$\{\matr{Q}_{k-1},(i_k,j_k)=(1,3)\}\rightarrow\matr{Q}^{(\ell_1)}$$
without loss of generality.
Then $\matr{Q}^{(\ell_2)}=\matr{Q}^{(\ell_1)}\matr{G}^{(1,3,\arctan\zeta)}$ is another accumulation point, distinct from $\matr{Q}^{(\ell_1)}$.
Note that $\tens{A}(\matr{Q}^{(\ell_1)})^{(1,2,3)}$ and $\tens{A}(\matr{Q}^{(\ell_2)})^{(1,2,3)}$ satisfy the conditions in \cref{lemma-3-dimension-p-2}.
We see that
$$\tenselem{A}(\matr{Q}^{(\ell_1)})_{112}=\tenselem{A}(\matr{Q}^{(\ell_1)})_{122}=
\tenselem{A}(\matr{Q}^{(\ell_2)})_{112}=\tenselem{A}(\matr{Q}^{(\ell_2)})_{122}=0.$$
Since $\matr{Q}_{\ast}$ satisfies \eqref{eq-condition-not-zero},
we see that path (a) is the only possible path.
Then all the accumulation points satisfy \eqref{eq-condition-not-zero}.
Note that $\matr{Q}_{\ast}$ satisfies \eqref{eq-condition-not-zero-2} and $\tenselem{A}(\matr{Q}^{(\ell_1)})_{iii}=\tenselem{A}(\matr{Q}^{(\ell_2)})_{iii}$ for $3\leq i\leq n$ in path (a).
Hence all the accumulation points satisfy \eqref{eq-condition-not-zero-2}.\\
\textbf{Step 2.}
Since path (b) in Step 1 does not appear, we get that
\begin{equation}\label{eq-proof-tends-0}
\{x_k^{\ast},(i_k,j_k)\in\mathcal{C}_{2}\}\rightarrow 0
\end{equation}
in \cref{alg:jacobi-G}.
Let $\mathcal{N}(\matr{Q}_{\ast},\eta)$ be a neighborhood of
$\matr{Q}_{\ast}=\matr{Q}^{(1)}$ in $\ON{n}$ with radius $\eta>0$ chosen small enough
that it contains no other accumulation point.
If a pair $(i,j)\in\mathcal{C}_2$ is such that
\begin{equation}\label{eq-condition-infinite}
\{\matr{Q}_{k-1}\in\mathcal{N}(\matr{Q}_{\ast},\eta),(i_k,j_k)=(i,j)\}\
\text{is infinite},
\end{equation}
then $\tens{A}(\matr{Q}_{\ast})^{(i,j)}$ satisfies the conditions of \cref{lemma-double-derivative} by condition \eqref{eq-condition-not-zero-2}.
Then $\tenselem{A}(\matr{Q}_{\ast})_{iii}(\tenselem{A}(\matr{Q}_{\ast})_{iii}-2\tenselem{A}(\matr{Q}_{\ast})_{ijj})\neq0$ by \cref{lemma-double-derivative}(ii).
Let
$$\rho_1\stackrel{\sf def}{=}\min|\tenselem{A}(\matr{Q}_{\ast})_{iii}(\tenselem{A}(\matr{Q}_{\ast})_{iii}-2\tenselem{A}(\matr{Q}_{\ast})_{ijj}) |$$
where the minimum is taken over all pairs $(i,j)\in\mathcal{C}_{2}$ satisfying \eqref{eq-condition-infinite}.
Then $\rho_1>0$.
For the other accumulation points,
we can similarly define $\rho_\ell$ for $1<\ell\leq m$.
Then
\begin{equation}\label{eq-rho}
\rho\stackrel{\sf def}{=}\min\rho_\ell>0.
\end{equation}
\textbf{Step 3.}
Now we show that there exists $\kappa>0$ such that
\begin{align}\label{eq-inequality-C2}
|h_{k}(\theta_k^{*})-h_{k}(0)|\geq \kappa |h_{k}'(0)||\theta_k^{*}|
\end{align}
for all $(i_k,j_k)\in\mathcal{C}_{2}$.
Let $\tens{W} = \tens{A}(\matr{Q}_{k-1})$.
Denote $(i,j) = (i_{k},j_{k})$.
Note that $|x_k^{\ast}|<+\infty$ when $k$ is large enough by \eqref{eq-proof-tends-0}.
Then, by \eqref{eq-order-3-station} and \eqref{eq-proof-tends-0},
the ratios
\begin{align*}
\frac{h_{k}'(0)}{x_k^{\ast}}=
\frac{6\tenselem{W}_{iii}\tenselem{W}_{iij}}{x_k^{\ast}}
=-6\tenselem{W}_{iii}[(2\tenselem{W}_{ijj}-\tenselem{W}_{iii})+(\tenselem{W}_{jjj}-2\tenselem{W}_{iij})x_k^{\ast}-\tenselem{W}_{ijj}{x_k^{\ast}}^2]
\end{align*}
have all their accumulation points contained in the set
$$\{-6\tenselem{A}(\matr{Q}^{(\ell)})_{iii}(2\tenselem{A}(\matr{Q}^{(\ell)})_{ijj}
-\tenselem{A}(\matr{Q}^{(\ell)})_{iii}),\ \text{pair $(i,j)$ satisfies \eqref{eq-condition-infinite}}, \ 1\leq\ell\leq m\}$$
when $k\in\mathbb{N}$ with $(i_{k},j_{k})\in\mathcal{C}_2$.
It follows from \eqref{eq-rho} that there exists
$\upsilon>0$ such that $|h_{k}'(0)|\geq\upsilon|x_{k}^{\ast}|$ when $k$ is large enough with $(i_{k},j_{k})\in\mathcal{C}_2$.
Then we get \eqref{eq-inequality-C2} by \cref{lemma-weak-inequality}.\\
\textbf{Step 4.}
If $\{x_k^{\ast},(i_k,j_k)=(1,2)\in\mathcal{C}_1\}$ is finite,
we skip it.
Otherwise,
by \cite[(27)]{LUC2017globally},
we know that
\begin{equation}\label{eq:lemma-G-inequality-3}
|h_{k}(\theta_k^{*})-h_{k}(0)| = |\frac{x_k^{*}h^{'}_{k}(0)}{2(1-{x_k^{*}}^2)}| \geq \frac{1}{2}|h_{k}'(0)||\theta_k^{*}|
\end{equation}
for all $(i_k,j_k)\in\mathcal{C}_{1}$.
Let $\omega=\min\{\kappa,1/2\}>0$.
By \eqref{eq-inequality-C2} and \eqref{eq:lemma-G-inequality-3},
we get that
\begin{equation*}
|h_{k}(\theta_k^{*})-h_{k}(0)| \geq \omega|h_{k}'(0)||\theta_{k}^{*}| \geq \frac{\sqrt{2}}{2}\omega\varepsilon\|\ProjGrad{{f}}{\matr{Q}_{k-1}}\|\|\matr{Q}_{k}-\matr{Q}_{k-1}\|,
\end{equation*}
when $k$ is large enough.
Then $\matr{Q}_{\ast}$ is the unique limit point by \cref{theorem-convegence-general}.
\end{proof}
\section{Numerical experiments}\label{sect-experiment}
In this section, we present some experiments comparing the performance of the JLROA algorithm with the LROAT and SLROAT algorithms in \cite{chen2009tensor}, and with the Trust region algorithm from the \emph{Manopt Toolbox} \cite{JMLR:v15:boumal14a}.
When $p=1$, LROAT and SLROAT are exactly the HOPM and SHOPM algorithms in \cite{Lathauwer00:rank-1approximation,kofidis2002best}, respectively.
For simplicity we use the cyclic ordering of the JLROA algorithm in \cref{al-JLROA}, except in \cref{example-4} and \cref{example-5}.
The LROAT and SLROAT algorithms are both initialized via the HOSVD \cite{Lathauwer00:TensorSVD}, because we find that they generally perform better with this initialization.
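The random symmetric test tensors in the following examples can be generated, for instance, by symmetrizing an i.i.d.\ Gaussian tensor; the snippet below (Python/NumPy, our own illustrative choice of generator, which need not coincide with the one actually used for the reported results) shows one way to do this:
\begin{verbatim}
import numpy as np
from math import factorial
from itertools import permutations

def random_symmetric_tensor(n, order=3, seed=None):
    # Symmetrize an i.i.d. Gaussian tensor over all index permutations
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n,) * order)
    return sum(np.transpose(A, s) for s in permutations(range(order))) / factorial(order)

A = random_symmetric_tensor(10, order=3, seed=0)   # a tensor in symm(R^{10x10x10})
\end{verbatim}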
\begin{example}\label{example-1}\rm
We randomly generate 1000 tensors in $\text{symm}(\mathbb{R}^{10\times 10\times 10})$,
and run JLROA and SLROAT algorithms for them.
Denote by $\textsc{JVal}$ and $\textsc{SVal}$ the final value of
\eqref{eq-cost-func-1} obtained by JLROA and SLROAT, respectively.
Set the following notations.\\
(i) $\textsc{NumG}:$ the number of cases that $\textsc{JVal}$ is greater than $\textsc{SVal}$;\\
(ii) $\textsc{NumS}:$ the number of cases that $\textsc{JVal}$ is smaller than $\textsc{SVal}$;\\
(iii) $\textsc{NumE}:$ the number of cases that $\textsc{JVal}$ is equal\footnote{the difference is smaller than 0.0001.} to $\textsc{SVal}$;\\
(iv) $\textsc{RatioG}:$ the average of $\textsc{JVal}/\textsc{SVal}$ when $\textsc{JVal}$ is greater than $\textsc{SVal}$;\\
(v) $\textsc{RatioS}:$ the average of $\textsc{JVal}/\textsc{SVal}$ when $\textsc{JVal}$ is smaller than $\textsc{SVal}$.\\
The results are shown in \cref{table-example-1} and \cref{figure-example-1}.
It can be seen that the JLROA algorithm has better performance when $p>2$.
The two algorithms always obtain the same result when $p=1$.
\begin{table}[h!]
\centering
\caption{}
\label{table-example-1}
\scalebox{0.9}{
\begin{tabular}{l c c c c c}
\toprule
& $\textsc{NumG}$ & $\textsc{NumS}$ & $\textsc{NumE}$ & $\textsc{RatioG}$ & $\textsc{RatioS}$\\
\midrule
$p=1$ & 0 & 0 & 1000 & --- & --- \\
\midrule
$p=2$ & 328 & 441 & 231 & 1.0023 & 0.9982\\
\midrule
$p=5$ & 747 & 246 & 7 & 1.0042 & 0.9985\\
\midrule
$p=8$ & 900 & 99 & 1 & 1.0044 & 0.9992 \\
\midrule
$p=10$ & 815 & 180 & 5 & 1.0039 & 0.9996 \\
\bottomrule
\end{tabular}}
\end{table}
\end{example}
\begin{example}\rm\label{example-2}
Let $\tens{A}\in\text{symm}(\mathbb{R}^{3\times 3\times 3\times 3})$ such that
\begin{align*}
&\tenselem{A}_{1111} = 0.2883,\ \
\tenselem{A}_{1122} = -0.2485,\ \
\tenselem{A}_{1222} = 0.2972,\ \
\tenselem{A}_{1333} = -0.3619,\\
&\tenselem{A}_{2233} = 0.2127,\ \
\tenselem{A}_{1112} = -0.0031,\ \
\tenselem{A}_{1123} = -0.2939,\ \
\tenselem{A}_{1223} = 0.1862,\\
&\tenselem{A}_{2222} = 0.1241,\ \
\tenselem{A}_{2333} = 0.2727,\ \
\tenselem{A}_{1113} = 0.1973,\ \
\tenselem{A}_{1133} = 0.3847,\\
&\tenselem{A}_{1233} = 0.0919,\ \
\tenselem{A}_{2223} = -0.3420,\ \
\tenselem{A}_{3333} = -0.3054,\ \
\end{align*}
as in \cite[Example 1]{kofidis2002best} and \cite[Section 6.1]{chen2009tensor}.
It has been shown in \cite{kofidis2002best,chen2009tensor} that SHOPM ($p=1$) and SLROAT ($p=2$) fail to converge for $\tens{A}$.
We now examine the convergence behaviour of the JLROA algorithm.
The results of JLROA, SLROAT and LROAT algorithms are shown in \cref{figure-example-2}.
It can be seen that the performance of JLROA is always better than or equal to that of SLROAT and LROAT.
\end{example}
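For readers who wish to reproduce \cref{example-2}, the tensor $\tens{A}$ can be assembled from the entries listed above by assigning each value to all permutations of its index tuple; the following NumPy snippet (ours) does this:
\begin{verbatim}
import numpy as np
from itertools import permutations

entries = {
    (1,1,1,1): 0.2883, (1,1,2,2): -0.2485, (1,2,2,2): 0.2972, (1,3,3,3): -0.3619,
    (2,2,3,3): 0.2127, (1,1,1,2): -0.0031, (1,1,2,3): -0.2939, (1,2,2,3): 0.1862,
    (2,2,2,2): 0.1241, (2,3,3,3): 0.2727, (1,1,1,3): 0.1973, (1,1,3,3): 0.3847,
    (1,2,3,3): 0.0919, (2,2,2,3): -0.3420, (3,3,3,3): -0.3054,
}
A = np.zeros((3, 3, 3, 3))
for idx, val in entries.items():
    for q in set(permutations(idx)):        # symmetry: same value for every ordering
        A[tuple(i - 1 for i in q)] = val    # shift to 0-based indices
\end{verbatim}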
\begin{example}\label{example-3}\rm
We randomly generate 1000 tensors in $\text{symm}(\mathbb{R}^{10\times 10\times 10})$,
and run JLROA and Trust region algorithms for them.
Denote by $\textsc{JVal}$ and $\textsc{TVal}$ the final value of
\eqref{eq-cost-func-1} obtained by JLROA and Trust region, respectively.
Set the following notations.\\
(i) $\textsc{NumG}:$ the number of cases that $\textsc{JVal}$ is greater than $\textsc{TVal}$;\\
(ii) $\textsc{NumS}:$ the number of cases that $\textsc{JVal}$ is smaller than $\textsc{TVal}$;\\
(iii) $\textsc{NumE}:$ the number of cases that $\textsc{JVal}$ is equal\footnote{the difference is smaller than 0.0001.} to $\textsc{TVal}$;\\
(iv) $\textsc{RatioG}:$ the average of $\textsc{JVal}/\textsc{TVal}$ when $\textsc{JVal}$ is greater than $\textsc{TVal}$;\\
(v) $\textsc{RatioS}:$ the average of $\textsc{JVal}/\textsc{TVal}$ when $\textsc{JVal}$ is smaller than $\textsc{TVal}$.\\
The results are shown in \cref{table-example-3} and \cref{figure-example-3}.
It can be seen that $\textsc{RatioG}$ is very large when $p=1,2$, which means that Trust region is not as stable as JLROA in these two cases.
On the other hand, the Trust region algorithm generally has better performance when $p>2$.
\begin{table}[h!]
\centering
\caption{}
\label{table-example-3}
\scalebox{0.9}{
\begin{tabular}{l c c c c c}
\toprule
& $\textsc{NumG}$ & $\textsc{NumS}$ & $\textsc{NumE}$ & $\textsc{RatioG}$ & $\textsc{RatioS}$\\
\midrule
$p=1$ & 125 & 0 & 875 & 211.7822 & --- \\
\midrule
$p=2$ & 395 & 360 & 245 & 5.0299 & 0.9986\\
\midrule
$p=5$ & 431 & 555 & 14 & 1.0016 & 0.9987\\
\midrule
$p=8$ & 393 & 604 & 3 & 1.0011 & 0.9992 \\
\midrule
$p=10$ & 35 & 962 & 3 & 1.0002 & 0.9995 \\
\bottomrule
\end{tabular}}
\end{table}
\end{example}
\begin{example}\label{example-4}\rm
In this example, we show the influence of the choice of $\set{P}\in\Sigma$ on the final results.
Fix $1\leq p\leq 10$ and randomly generate a tensor $\tens{A}\in\text{symm}(\mathbb{R}^{10\times 10\times 10})$.
We first choose the cyclic ordering \eqref{partial-cyclic-1},
and then randomly choose $\set{P}\in\Sigma$ $200$ times, running \cref{al-general} with each choice.
The results are shown in \cref{figure-example-4}.
It can be seen that all the $\set{P}\in\Sigma$ give almost the same result when $p=1$.
However, when $p=2$, the orderings $\set{P}\in\Sigma$ separate into different groups corresponding to different results.
It may be interesting to study how to determine the $\set{P}\in\Sigma$ with the best result.
\end{example}
\begin{example}\label{example-5}\rm
Let $\tens{A}\in\text{symm}(\mathbb{R}^{10\times 10\times 10})$ and $p=2$.
Suppose that $\matr{Q}_{\ast}$ is an accumulation point of \cref{alg:jacobi-G}.
To check the frequency of conditions \eqref{eq-condition-not-zero} and \eqref{eq-condition-not-zero-2} being satisfied, we define
\begin{equation*}
\omega = \min\{|\tenselem{W}_{112}|, |\tenselem{W}_{122}|, |\tenselem{W}_{333}|, \cdots, |\tenselem{W}_{nnn}|\},
\end{equation*}
where $\tens{W}=\tens{A}(\matr{Q}_{\ast})$.
We take the iterate $\matr{Q}_{K}$, for $K$ large enough ($K=500$ in this experiment), as an approximation of an accumulation point.
We randomly generate $1000$ tensors $\tens{A}\in\text{symm}(\mathbb{R}^{10\times 10\times 10})$,
and run \cref{alg:jacobi-G} to see how frequently $\omega>0$ (greater than $0.0001$).
The results are shown in \cref{figure-example-5},
where $\omega>0$ for $991$ times.
It can be seen that the conditions \eqref{eq-condition-not-zero} and \eqref{eq-condition-not-zero-2} are satisfied in most cases.
\end{example}
\begin{figure}[tbhp]
\centering
\subfloat[p=2]{\includegraphics[width=0.5\textwidth]{figures/Example-1-p-2}}\!\!\!
\subfloat[p=5]{\includegraphics[width=0.5\textwidth]{figures/Example-1-p-5}}\!\!\!
\subfloat[p=8]{\includegraphics[width=0.5\textwidth]{figures/Example-1-p-8}}\!\!\!
\subfloat[p=10]{\includegraphics[width=0.5\textwidth]{figures/Example-1-p-10}}
\caption{Distributions of points $(\textsc{JVal}, \textsc{SVal})$ in \cref{example-1}. The points are blue when $\textsc{JVal}$ is greater, and red when $\textsc{SVal}$ is greater.}
\label{figure-example-1}
\end{figure}
\begin{figure}[tbhp]
\centering
\subfloat[$p=1$]{\includegraphics[width=0.5\textwidth]{figures/Example-2-p-1}}\!\!\!
\subfloat[$p=2$]{\includegraphics[width=0.5\textwidth]{figures/Example-2-p-2}}\!\!\!
\subfloat[$p=3$]{\includegraphics[width=0.5\textwidth]{figures/Example-2-p-3}}
\caption{Results of \cref{example-2}.}
\label{figure-example-2}
\end{figure}
\begin{figure}[tbhp]
\centering
\subfloat[p=1]{\includegraphics[width=0.5\textwidth]{figures/Example-3-p-1-time-1}}\!\!\!
\subfloat[p=2]{\includegraphics[width=0.5\textwidth]{figures/Example-3-p-2-time-1}}\!\!\!
\subfloat[p=5]{\includegraphics[width=0.5\textwidth]{figures/Example-3-p-5-time-1}}\!\!\!
\subfloat[p=8]{\includegraphics[width=0.5\textwidth]{figures/Example-3-p-8-time-1}}
\caption{Distributions of points $(\textsc{JVal}, \textsc{TVal})$ in \cref{example-3}. The points are blue when $\textsc{JVal}$ is greater, and red when $\textsc{TVal}$ is greater.}
\label{figure-example-3}
\end{figure}
\begin{figure}[tbhp]
\centering
\subfloat[p=1]{\includegraphics[width=0.5\textwidth]{figures/Example-4-p-1-time-1}}\!\!\!
\subfloat[p=2]{\includegraphics[width=0.5\textwidth]{figures/Example-4-p-2-time-1}}
\caption{Results of \cref{example-4}. The unique green point is for cyclic ordering \eqref{partial-cyclic-1}. Red points mean higher results than \eqref{partial-cyclic-1}, while blue points mean lower results than \eqref{partial-cyclic-1}.}
\label{figure-example-4}
\end{figure}
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.7\textwidth]{figures/Example-5-time-1}
\caption{Results of \cref{example-5}. Blue points mean that $\omega>0$, while red points mean that $\omega=0$.}
\label{figure-example-5}
\end{figure}
\subsection*{Acknowledgements}
The authors would like to thank Xiao Chen for his valuable discussions concerning \cref{sec-weak-conver}.
\bibliographystyle{siamplain}
| {
"timestamp": "2019-11-05T02:07:59",
"yymm": "1911",
"arxiv_id": "1911.00659",
"language": "en",
"url": "https://arxiv.org/abs/1911.00659",
"abstract": "In this paper, we propose a Jacobi-type algorithm to solve the low rank orthogonal approximation problem of symmetric tensors. This algorithm includes as a special case the well-known Jacobi CoM2 algorithm for the approximate orthogonal diagonalization problem of symmetric tensors. We first prove the weak convergence of this algorithm, \\textit{i.e.} any accumulation point is a stationary point. Then we study the global convergence of this algorithm under a gradient based ordering for a special case: the best rank-2 orthogonal approximation of 3rd order symmetric tensors, and prove that an accumulation point is the unique limit point under some conditions. Numerical experiments are presented to show the efficiency of this algorithm.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Jacobi-type algorithm for low rank orthogonal approximation of symmetric tensors and its convergence analysis",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587250685455,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7097978750944459
} |
https://arxiv.org/abs/1605.07233 | An interpolation proof of Ehrhard's inequality | We prove Ehrhard's inequality using interpolation along the Ornstein-Uhlenbeck semi-group. We also provide an improved Jensen inequality for Gaussian variables that might be of independent interest. | \section{Introduction}
In \cite{Ehr}, A. Ehrhard proved the following Brunn-Minkowski like inequality for convex sets $A, B$ in $\mathbb R^{n}$:
\begin{equation}
\label{Ehr}
\Phi^{-1} \left( \gamma_{n} ( \lambda A + (1-\lambda) B ) \right) \geq \lambda \Phi^{-1} (\gamma_{n} (A) ) + (1-\lambda) \Phi^{-1} (\gamma_{n} (B) ) , \ \lambda \in [0,1],
\end{equation}
where $ \gamma_{n}$ is the standard Gaussian measure in $\mathbb R^{n}$ (i.e. the measure with density $(2\pi )^{-n/2} e^{ - |x|^{2}/2} $) and $ \Phi$ is the Gaussian distribution function (i.e. $\Phi(x) = \gamma_{1} ( -\infty, x)$).
This is a fundamental result about Gaussian space and it is known to have
numerous applications (see, e.g.,~\cite{L1}). Ehrhard's result was extended
by R. Lata\l{}a~\cite{L} to the case in which one of the two sets is Borel and the other is
convex. Finally, C.\ Borell~\cite{Borell2} proved that it holds for all pairs of Borel sets.
Ehrhard's original proof for convex sets used a Gaussian symmetrization technique.
Borell used the heat semi-group and a maximal inequality in his proof,
which has since been further developed by Barthe and Huet~\cite{BartheHuet}; very
recently Ivanisvili and Volberg~\cite{IvanVol} developed this method into a general technique
for proving convolution inequalities. Another proof was recently found by van Handel~\cite{vanHandel}
using a stochastic variational principle.
In this work we will prove Ehrhard's inequality by constructing a quantity that is monotonic
along the Ornstein-Uhlenbeck semi-group. In
recent years this approach has been developed into a powerful tool to prove
Gaussian inequalities such as Gaussian hypercontractivity, the log-Sobolev
inequality, and isoperimetry~\cite{BGL}. There is no known
proof of Ehrhard's inequality using these techniques, and the purpose of
this note is to fill this gap.
An interpolation proof of the Lebesgue version of Ehrhard's inequality
(the Pr\'ekopa-Leindler inequality)
was presented recently in \cite{CDP}. This proof uses an
``improved reverse H\"older"
inequality for correlated Gaussian vectors that was established in
\cite{CDP}. A generalization of the aforementioned inequality also appeared
recently~\cite{Led,N1}. This inequality, which we call an ``improved
Jensen inequality'' for correlated Gaussian vectors, is presented and in fact
extended in the present note. In \S2 we briefly discuss how this inequality
implies several known inequalities in probability, convexity and harmonic
analysis. Using a ``restricted'' version of this inequality (Theorem~\ref{thm:restricted-jensen}),
we will present a proof of Ehrhard's inequality.
\smallskip
The paper is organized as follows: In \S2 we introduce the notation and basic
facts about the Ornstein-Uhlenbeck semi-group, and we present the proof of the
restricted, improved Jensen inequality. In \S3 we use this Jensen inequality to
provide a new proof of the Pr\'ekopa-Leindler inequality. We will use the main
ideas of this proof as a guideline for our proof of Ehrhard's inequality that
we present in \S4.
\section{An ``improved Jensen" inequality}
Fix a positive semi-definite $D \times D$ matrix $A$, and let $X \sim \mathcal{N}(0, A)$.
For $t \ge 0$, we define the operator $P_t^A$ on $L_1(\mathbb{R}^{D}, \gamma_A)$ by
\[
(P_t^A f)(x) = \mathbb{E} f(e^{-t} x + \sqrt{1-e^{-2t}} X).
\]
We will use the following well-known (and easily checked) facts:
\begin{itemize}
\item the measure $\gamma_A$ is stationary for $P_t^A$;
\item for any $s, t \ge 0$, $P_s^A P_t^A = P_{s + t}^A$;
\item if $f$ is a continuous function having limits at infinity then
$P_s^A f$ converges uniformly to $P_t^A f$ as $s \to t$.
\end{itemize}
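The following minimal numerical sketch (in Python with NumPy; not part of the argument, with an arbitrary illustrative covariance and test function) approximates $P_t^A f$ by Monte Carlo and checks the stationarity of $\gamma_A$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.5, 1.0]])            # illustrative covariance
f = lambda x: np.cos(x[..., 0]) * np.cos(x[..., 1])

def P(t, f, x, n=2_000):
    # Monte Carlo approximation of (P_t^A f)(x)
    X = rng.multivariate_normal(np.zeros(2), A, size=n)
    return f(np.exp(-t) * x + np.sqrt(1 - np.exp(-2 * t)) * X).mean()

# stationarity: the gamma_A-average of P_t^A f should equal that of f
Y = rng.multivariate_normal(np.zeros(2), A, size=20_000)
lhs = np.mean([P(0.7, f, y) for y in Y[:500]])
rhs = f(Y).mean()
print(lhs, rhs)   # approximately equal, up to Monte Carlo error
\end{verbatim}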
We will heavily use the fact that $P_t^A$ commutes in a nice way with the composition of
smooth functions: let $\Psi: \mathbb{R}^k \to \mathbb{R}$ be a bounded $\mathcal{C}^2$ function. For
any bounded, measurable $f = (f_1, \dots, f_k): \mathbb{R}^D \to \mathbb{R}^k$, any $x \in \mathbb{R}^D$
and any $0 < s < t$, $P_{t-s}^A \Psi(P_s^A f(x))$ is differentiable in $s$ and satisfies
\begin{equation}\label{eq:commutation}
\pdiff{}{s} P_{t-s}^A \Psi(P_s^A f)
= -P_{t-s}^A \sum_{i,j = 1}^k \partial_i \partial_j \Psi(f)
\inr{\nabla P_s^A f_i}{A \nabla P_s^A f_j}.
\end{equation}
Suppose that $D = \sum_{i=1}^k d_i$, where $d_i \ge 1$ are integers.
We decompose $\mathbb{R}^D$ as $\prod_{i=1}^k \mathbb{R}^{d_i}$ and write
$\Pi_i$ for the projection on the $i$th component.
Given a $k \times k$ matrix $M$, write $\mathcal{E}_{d_1, \dots, d_k} (M)$
for the $D \times D$ matrix whose $i,j$ entry is
$M_{m,\ell}$ if $\sum_{a < m} d_a < i \le \sum_{a \le m} d_a$
and $\sum_{b < \ell} d_b < j \le \sum_{b \le \ell} d_b$; that is, each entry
$M_{m,\ell}$ of $M$ is expanded into a $d_m \times d_\ell$ block.
We write `$\odot$' for the element-wise product of matrices,
`$\succcurlyeq$' for the positive semi-definite matrix ordering,
and $\hess J$ (abbreviated $H_J$) for the Hessian matrix of the function $J$.
Our starting point in this note is the following inequality,
which may be seen as an improved
Jensen inequality for correlated Gaussian variables.
\begin{theorem}\label{thm:jensen}
Let $\Omega_1, \dots, \Omega_k$ be open intervals,
and let $\Omega = \prod_{i=1}^k \Omega_i$.
Take $X \sim \gamma_A$ and write $X_i = \Pi_i X$.
For a bounded, $\mathcal{C}^2$ function $J: \Omega \to \mathbb{R}$, the following are
equivalent:
\begin{enumerate}[label=(\ref{thm:jensen}.\alph*)]
\item for every $x \in \Omega$, $A \odot \mathcal{E}_{d_1, \dots, d_k}(\hess{J}(x)) \succcurlyeq 0$ \label{it:jensen-hess}
\item for every $k$-tuple of measurable functions $f_i: \mathbb{R}^{d_i} \to \Omega_i$,
\begin{equation}\label{eq:jensen}
\mathbb{E} J(f_1(X_1), \dots, f_k(X_k))
\ge J(\mathbb{E} f_1(X_1), \dots, \mathbb{E} f_k(X_k)).
\end{equation}
\label{it:jensen-ineq}
\end{enumerate}
\end{theorem}
We remark that the restriction that $J$ be bounded can often be lifted.
For example, if $J$ is a continuous but unbounded function then one can still
apply Theorem~\ref{thm:jensen} on bounded domains $\Omega_i' \subset \Omega_i$.
If $J$ is sufficiently nice (e.g.\ monotonic, or bounded above) then one
can take a limit as $\Omega_i'$ exhausts $\Omega_i$ (e.g. using
the monotone convergence theorem, or Fatou's lemma).
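As a concrete instance of Theorem~\ref{thm:jensen} (with $k=2$, $d_1=d_2=1$), take $J(x,y)=-x^a y^b$ on $(0,\infty)^2$ with $a,b\in(0,1)$: one checks that $A\odot\hess J\succcurlyeq 0$ exactly when $(1-a)(1-b)\ge\rho^2 ab$, in which case~\eqref{eq:jensen} reads $\mathbb{E}[f_1^a(X_1)f_2^b(X_2)]\le(\mathbb{E} f_1(X_1))^a(\mathbb{E} f_2(X_2))^b$, in the spirit of the inequalities from~\cite{CDP} recalled in the introduction. A minimal numerical sketch of this instance (Python with NumPy; parameters and test functions are arbitrary, and the test functions take values in a bounded interval so that the boundedness caveat above is harmless):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
rho, a, b = 0.6, 0.5, 0.5          # (1-a)(1-b) = 0.25 >= rho^2*a*b = 0.09
A = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal(np.zeros(2), A, size=1_000_000)

f1 = lambda x: 1.0 + np.abs(np.sin(3 * x))       # arbitrary positive test functions
f2 = lambda y: 0.5 + 1.0 / (1.0 + y * y)

lhs = np.mean(f1(X[:, 0]) ** a * f2(X[:, 1]) ** b)
rhs = f1(X[:, 0]).mean() ** a * f2(X[:, 1]).mean() ** b
print(lhs, rhs)    # expected: first value <= second, up to Monte Carlo error
\end{verbatim}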
As we have already mentioned, Theorem~\ref{thm:jensen} is known to have many consequences.
However, we do not know how to obtain Ehrhard's inequality using only Theorem~\ref{thm:jensen};
we will first need to extend Theorem~\ref{thm:jensen} in a few ways.
To motivate our first extension, note that the usual Jensen inequality on $\mathbb{R}$ extends easily to the
case where some function is convex only on a sub-level set. To be more precise, take
a function $\psi: \mathbb{R}^d \to \mathbb{R}$ and the set $B = \{x \in \mathbb{R}^d: \psi(x) < 0\}$. If
$B$ is connected and $\psi$ is convex when restricted to $B$, then one
easily finds that $\mathbb{E} \psi(X) \ge \psi(\mathbb{E} X)$ for any random vector
supported on $B$. A similar modification may be made to Theorem~\ref{thm:jensen}.
\begin{theorem}\label{thm:restricted-jensen}
Take the notation and assumptions of Theorem~\ref{thm:jensen}, and assume in
addition that $\{x \in \Omega: J(x) < 0\}$ is connected. Then the
following are equivalent:
\begin{enumerate}[label=(\ref{thm:restricted-jensen}.\alph*)]
\item for every $x \in \Omega$ such that $J(x) < 0$, $A \odot \mathcal{E}_{d_1, \dots, d_k}(\hess{J}(x)) \succcurlyeq 0$
\label{it:restricted-jensen-hess}
\item for every $k$-tuple of measurable functions $f_i: \mathbb{R}^{d_i} \to \Omega_i$ that $\gamma_A$-a.s.\ satisfy
$J(f_1, \dots, f_k) < 0$,
\[
\mathbb{E} J(f_1(X_1), \dots, f_k(X_k))
\ge J(\mathbb{E} f_1(X_1), \dots, \mathbb{E} f_k(X_k)).
\]
\label{it:restricted-jensen-ineq}
\end{enumerate}
\end{theorem}
Note that the threshold of zero in the conditions $J(x) < 0$
and $J(f_1, \dots, f_k) < 0$ is arbitrary, since we may apply the
theorem to the function $J(\cdot) - a$ for any $a \in \mathbb{R}$. Of course,
taking $a$ sufficiently large recovers Theorem~\ref{thm:jensen}.
\begin{proof}
Suppose that~\ref{it:restricted-jensen-hess} holds.
By standard approximation arguments, it suffices to
prove~\ref{it:restricted-jensen-ineq} for a more restricted class of
functions $f$. Indeed, let $F$ be the set of measurable
$f = (f_1, \dots, f_k)$ satisfying $J(f) < 0$ $\gamma_A$-a.s.\ and
let $F_\epsilon \subset F$ be those functions that are
continuous, vanish at infinity,
and satisfy $J(f) \le -\epsilon$ $\gamma_A$-a.s. Now, every $f \in F$
can be approximated pointwise by a sequence $f^{(n)} \in F_{1/n}$
(here we are using the fact that $\{x: J(x) < 0\}$ is connected);
hence, it suffices to prove~\ref{it:restricted-jensen-ineq} for
$f \in F_\epsilon$, where $\epsilon > 0$ is arbitrarily small.
From now on, fix $\epsilon > 0$ and fix $f = (f_1, \dots, f_k) \in
F_\epsilon$.
Recalling that $\Pi_i : \mathbb{R}^{d_1} \times \cdots \times \mathbb{R}^{d_k} \to \mathbb{R}^{d_i}$
is the projection onto the $i$th block of coordinates, define
$g_i = f_i \circ \Pi_i$ and $G_{s,t}(x) = P_{t-s}^A J(P_s^A g(x))$.
Since $f \in F_\epsilon$, we have $G_{0,0}(x) \le -\epsilon$
for every $x \in \mathbb{R}^D$. Moreover, since $f$ is continuous and
vanishes at infinity, $P_s^A g \to g$ uniformly as $s \to 0$.
Since $g$ is bounded, $J$ is uniformly continuous on the range of $g$
and so there exists $\delta > 0$ such that
$|G_{s,s}(x) - G_{r,r}(x)| < \epsilon$
for every $x \in \mathbb{R}^D$ and every $|s - r| \le \delta$.
Now, fix $r \ge 0$ and assume that $G_{r,r} \le -\epsilon$ pointwise;
by the previous paragraph, $G_{s,s} < 0$ pointwise for every
$r \le s \le r + \delta$. Now we apply the commutation formula~\eqref{eq:commutation}:
with $B_s = B_s(x) = A \odot \mathcal{E}_{d_1, \dots, d_k}(H_J(P_s^A g))$, we have
\[
\pdiff{}{s} G_{s,t} = - P_{t-s}^A \sum_{{i,j}=1}^k
\inr{\nabla P_s^A g_i}{B_s \nabla P_s^A g_j}
\]
(here, we have used the observation that $P_s^A g_i(x)$ depends only on $\Pi_i x$,
and so $\nabla P_s^A g_i$ is zero outside the $i$th block of coordinates).
The assumption~\ref{it:restricted-jensen-hess} implies that $B_s$ is
positive semi-definite whenever $G_{s,s} < 0$; since $G_{s,s} < 0$
for every $s \in [r, r + \delta]$, we see that for such $s$,
$\pdiff{}{s} G_{s,r + \delta} \le 0$ pointwise. Since
$G_{s,r+\delta}$ is continuous in $s$ and $G_{r,r} \le -\epsilon$,
it follows that $G_{s,s} \le -\epsilon$ pointwise for all
$s \in [r, r+\delta]$.
Next, note that $r=0$ satisfies the assumption $G_{r,r} \le -\epsilon$
of the previous paragraph.
By induction, it follows that $G_{r,r} \le -\epsilon$ pointwise for all
$r \ge 0$. Hence, the matrix $B_s$ is positive semi-definite for all
$s \ge 0$ and $x \in \mathbb{R}^D$, which implies that $G_{s,t}(x)$ is
non-increasing in $s$ for all $t \ge s$ and $x \in \mathbb{R}^D$. Hence,
\[
\mathbb{E} J(f_1(X_1), \dots, f_k(X_k)) = \lim_{t \to \infty} G_{0,t}(0)
\ge \lim_{t \to \infty} G_{t,t}(0) = J(\mathbb{E} f_1, \dots, \mathbb{E} f_k).
\]
This completes the proof of~\ref{it:restricted-jensen-ineq}.
Now suppose that~\ref{it:restricted-jensen-ineq} holds.
Choose some $v \in \mathbb{R}^D$ and some $y \in \Omega$ with $J(y) < 0$;
to prove~\ref{it:restricted-jensen-hess}, it is enough to show that
\begin{equation}\label{eq:restricted-jensen-part-2-goal}
v^T (A \odot \mathcal{E}_{d_1, \dots, d_k}(H_J(y))) v \ge 0.
\end{equation}
Since $\Omega$ is open, there is some $\delta > 0$ such that $y + z \in \Omega$
whenever $\max_i |z_i| \le \delta$.
For this $\delta$, define $\psi: \mathbb{R} \to \mathbb{R}$ by
\[
\psi(t) = \max\{-\delta, \min\{\delta, t\}\}.
\]
For $\epsilon > 0$, define $f_{i, \epsilon}: \mathbb{R}^{d_i} \to \Omega_i$
by
\[
f_{i,\epsilon}(x) = y_i + \psi(\epsilon \inr{x}{\Pi_i v}).
\]
By~\ref{it:restricted-jensen-ineq},
\[
\mathbb{E} J(f_{1,\epsilon}(X_1), \dots, f_{k,\epsilon}(X_k))
\ge J(\mathbb{E} f_{1,\epsilon}(X_1), \dots, \mathbb{E} f_{k,\epsilon}(X_k)).
\]
Since $\psi$ is odd, $\mathbb{E} f_{i,\epsilon}(X_i) = y_i$ for all $\epsilon > 0$; hence,
\begin{equation}\label{eq:restricted-jensen-part-2-assumption}
\mathbb{E} J(f_{1,\epsilon}(X_1), \dots, f_{k,\epsilon}(X_k)) \ge J(y).
\end{equation}
Taylor's theorem implies that for any $z$ with $y + z \in \Omega$,
\[
J(y + z)
= J(y) + \sum_{i=1}^k \pdiff{J(y)}{y_i} z_i
+ \frac{1}{2}\sum_{i,j=1}^k \pdiffII{J(y)}{y_i}{y_j} z_i z_j
+ \rho(|z|),
\]
where $\rho$ is some function satisfying $\epsilon^{-2} \rho(\epsilon) \to 0$ as $\epsilon \to 0$.
Now consider what happens when we replace $z_i$ above with $Z_i = \psi(\epsilon\inr{X_i}{\Pi_i v})$ and
take expectations. One easily checks that $\mathbb{E} Z_i = 0$, $\mathbb{E} \rho(|Z|) = o(\epsilon^2)$, and
\[
\mathbb{E} Z_i Z_j = \epsilon^2 (\Pi_i v)^T \mathbb{E} [X_i X_j^T] (\Pi_j v) + o(\epsilon^2);
\]
hence,
\begin{align*}
\mathbb{E} J(y + Z)
&= J(y) + \frac{\epsilon^2}{2} \sum_{i,j = 1}^k \pdiffII{J(y)}{y_i}{y_j} (\Pi_i v)^T \mathbb{E} [X_i X_j^T] (\Pi_j v) + o(\epsilon^2) \\
&= J(y) + \frac{\epsilon^2}{2} v^T (A \odot \mathcal{E}_{d_1, \dots, d_k}(H_J(y))) v + o(\epsilon^2).
\end{align*}
On the other hand, $\mathbb{E} J(y + Z) = \mathbb{E} J(f_{1,\epsilon}(X_1), \dots, f_{k,\epsilon}(X_k))$,
which is at least $J(y)$ according to~\eqref{eq:restricted-jensen-part-2-assumption}.
Taking $\epsilon \to 0$ proves~\eqref{eq:restricted-jensen-part-2-goal}.
\end{proof}
\section{A short proof of Pr\'ekopa-Leindler inequality }
The Pr\'ekopa-Leindler inequality states that if $f, g, h: \mathbb{R}^d \to [0, \infty)$ satisfy
\[
h(\lambda x + (1-\lambda) y) \ge f(x)^\lambda g(y)^{1-\lambda}
\]
for all $x, y \in \mathbb{R}^d$ and some $\lambda \in (0, 1)$ then
\[
\mathbb{E} h \ge (\mathbb{E} f)^\lambda (\mathbb{E} g)^{1-\lambda},
\]
where expectations are taken with respect to the standard Gaussian measure on $\mathbb{R}^d$.
By applying a linear transformation, the standard Gaussian measure may be replaced by any
Gaussian measure; by taking a limit over Gaussian measures with large covariances, the expectations
may also be replaced by integrals with respect to the Lebesgue measure.
As M. Ledoux brought to our attention,
the Pr\'ekopa-Leindler inequality may be seen as a consequence of Theorem~\ref{thm:jensen};
we will present only the case $d=1$, but the case for general $d$ may be done in a similar way.
Alternatively, one may prove the Pr\'ekopa-Leindler inequality for $d=1$ first and then extend to
arbitrary $d$ using induction and Fubini's theorem.
Fix $\lambda \in (0, 1)$, let
$(X, Y) \sim \mathcal{N}\big(0, \smash{\big(\begin{smallmatrix}1 & \rho \\ \rho & 1 \end{smallmatrix}\big)}\big)$
and let $Z = \lambda X + (1-\lambda) Y$. Let $\sigma^2 = \sigma^2(\rho, \lambda)$ be the variance
of $Z$ and let $A = A(\rho, \lambda)$ be the covariance of $(X, Y, Z)$.
Note that $A$ is a rank-two matrix, and that
it may be decomposed as $A = u u^T + v v^T$ where $u$ and $v$ are both orthogonal
to $(\lambda, 1-\lambda, -1)^T$.
For $\alpha, R \in \mathbb{R}_+$, define $J_{\alpha,R}: \mathbb{R}_+^3 \to \mathbb{R}$ by
\[
J_{\alpha,R}(x, y, z) = (x^{\lambda} y^{1-\lambda} z^{-\alpha})^R.
\]
\begin{lemma}\label{lem:pl}
For any $\lambda$ and $\rho$, and for any $\alpha < \sigma^2$,
there exists $R \in \mathbb{R}_+$ such that $A \odot H_{J_{\alpha,R}} \succcurlyeq 0$.
\end{lemma}
To see how the Pr\'ekopa-Leindler inequality
follows from Theorem~\ref{thm:jensen} and Lemma~\ref{lem:pl}, suppose that
$h(\lambda x + (1-\lambda)y) \ge f^\lambda(x) g^{1-\lambda}(y)$
for all $x, y \in \mathbb{R}$. Then $J_{\alpha,R}(f(X), g(Y), h^{1/\alpha}(Z)) \le 1$
with probability one (because $Z = \lambda X + (1-\lambda) Y$ with probability one).
By Theorem~\ref{thm:jensen}, with the $R$ from Lemma~\ref{lem:pl} we have
\begin{align*}
1 &\ge \mathbb{E} J_{\alpha,R}(f(X), g(Y), h^{1/\alpha}(Z)) \\
&\ge J_{\alpha,R} (\mathbb{E} f(X), \mathbb{E} g(Y), \mathbb{E} h^{1/\alpha}(Z)) \\
&= \left(
\frac{(\mathbb{E} f(X))^\lambda (\mathbb{E} g(Y))^{1-\lambda}}
{(\mathbb{E} h^{1/\alpha}(Z))^\alpha}
\right)^R.
\end{align*}
In other words, $(\mathbb{E} h^{1/\alpha}(Z))^\alpha \ge (\mathbb{E} f)^\lambda (\mathbb{E} g)^{1-\lambda}$.
This holds for any $\rho$ and any $\alpha < \sigma^2$. By sending $\rho \to 1$, we
send $\sigma^2 \to 1$ and so we may take $\alpha \to 1$ also. Finally, note
that in this limit $Z$ converges in distribution to $\mathcal{N}(0, 1)$. Hence, we recover
the Pr\'ekopa-Leindler inequality for the standard Gaussian measure.
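As a quick sanity check of this conclusion, one can test it on an explicit admissible triple: with $\lambda=\tfrac12$, $f(x)=e^{-(x-1)^2}$, $g(y)=e^{-(y+1)^2}$ and $h(z)=e^{-z^2}$, the hypothesis $h(\tfrac{x+y}{2})\ge\sqrt{f(x)g(y)}$ holds pointwise. A minimal Monte Carlo sketch (Python with NumPy; not part of the argument):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
lam = 0.5
f = lambda x: np.exp(-(x - 1) ** 2)
g = lambda y: np.exp(-(y + 1) ** 2)
h = lambda z: np.exp(-z ** 2)

# pointwise hypothesis h(lam*x+(1-lam)*y) >= f(x)**lam * g(y)**(1-lam)
x, y = rng.normal(size=(2, 1000)) * 3
assert np.all(h(lam * x + (1 - lam) * y)
              >= f(x) ** lam * g(y) ** (1 - lam) - 1e-12)

# conclusion for the standard Gaussian measure
Z = rng.normal(size=2_000_000)
print(h(Z).mean(), (f(Z).mean() ** lam) * (g(Z).mean() ** (1 - lam)))
# expected: first value >= second (roughly 0.58 vs 0.41)
\end{verbatim}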
\begin{proof}[Proof of Lemma~\ref{lem:pl}]
By a computation,
\begin{align*}
H_{J_{\alpha,R}} &= J_{\alpha,R}(x, y, z)
\begin{pmatrix}
\frac{\lambda R(\lambda R - 1)}{x^2} & \frac{\lambda R(1-\lambda) R}{xy} & -\frac{\lambda \alpha R^2}{xz} \\
\frac{\lambda R(1-\lambda) R}{xy} & \frac{(1-\lambda)R((1-\lambda)R - 1)}{y^2} & -\frac{(1-\lambda)\alpha R^2}{yz} \\
-\frac{\lambda \alpha R^2}{xz} & -\frac{(1-\lambda)\alpha R^2}{yz} & \frac{\alpha R(\alpha R+1)}{z^2}
\end{pmatrix}.
\end{align*}
We would like to show that $A \odot H_J \succcurlyeq 0$; since elementwise multiplication
commutes with multiplication by diagonal matrices, it is enough to show that
\begin{equation}\label{eq:goal}
A \odot \left(
\begin{pmatrix}
\lambda \\
1-\lambda \\
-\alpha
\end{pmatrix}^{\otimes 2}
- \frac{1}{R} \begin{pmatrix}
\lambda & 0 & 0 \\
0 & 1-\lambda & 0 \\
0 & 0 & -\alpha
\end{pmatrix}
\right) \succcurlyeq 0.
\end{equation}
Let $\theta = (\lambda, 1-\lambda, -\alpha)^T$ and recall that $A = u u^T + v v^T$, where
$u$ and $v$ are both orthogonal to $(\lambda, 1-\lambda, -1)^T$. Then
\[
A \odot (\theta \theta^T) = (u \odot \theta) (u \odot \theta)^T +
(v \odot \theta) (v \odot \theta)^T,
\]
where $u \odot \theta$ and $v \odot \theta$ are both orthogonal to $(1, 1, \frac{1}{\alpha})^T$
(call this $w$).
In particular, $A \odot (\theta \theta^T)$ is a rank-two, positive semi-definite matrix whose null
space is the span of $w$.
On the other hand,
$A \odot \diag(\lambda, 1-\lambda, -\alpha) = \diag(\lambda, 1-\lambda, -\alpha \sigma^2)$
(call this $D$). Then $w^T D w = 1 - \sigma^2/\alpha < 0$. As a consequence of the following Lemma,
\[
A \odot (\theta \theta^T) - \frac 1R D \succcurlyeq 0
\]
for all sufficiently large $R$.
\end{proof}
\begin{lemma}\label{lem:psd}
Let $A$ be a positive semi-definite matrix and let $B$ be a symmetric matrix. If
$u^T B u \ge \delta |u|^2$ for all $u \in \ker(A)$ and
$v^T A v \ge \delta |v|^2$ for all $v \in \ker(A)^\perp$ then
$A + \epsilon B \succcurlyeq 0$ for all $0 \le \epsilon \le \frac{\delta^2}{\|B\|^2 + \delta\|B\|}$,
where $\|B\|$ is the operator norm of $B$.
\end{lemma}
\begin{proof}
Any vector $w$ may be decomposed as $w = u + v$ with $u \in \ker(A)$ and $v \in \ker(A)^\perp$. Then
\begin{align*}
w^T (A + \epsilon B) w
&= u^T A u + \epsilon u^T B u - 2 \epsilon u^T B v + \epsilon v^T B v \\
&\ge \delta |u|^2 - \epsilon \|B\| |u|^2 - 2 \epsilon \|B\| |u| |v| + \epsilon \delta |v|^2.
\end{align*}
Considering the above expression as a quadratic polynomial in $|u|$ and $|v|$, we see that
it is non-negative whenever $(\delta - \epsilon \|B\|) \delta \ge \epsilon \|B\|^2$.
\end{proof}
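The reduction~\eqref{eq:goal} can also be probed numerically. The sketch below (Python with NumPy; the parameters $\lambda,\rho,\alpha$ are arbitrary subject to $\alpha<\sigma^2$) assembles the covariance $A$ of $(X,Y,Z)$ and reports the smallest eigenvalue of the matrix in~\eqref{eq:goal} for a few values of $R$; it becomes nonnegative once $R$ is large enough, as Lemma~\ref{lem:pl} predicts:
\begin{verbatim}
import numpy as np

lam, rho, alpha = 0.5, 0.5, 0.7             # illustrative; need alpha < sigma^2
L = np.array([lam, 1 - lam])
Axy = np.array([[1.0, rho], [rho, 1.0]])
sigma2 = L @ Axy @ L                        # variance of Z = lam*X + (1-lam)*Y
assert alpha < sigma2

A = np.zeros((3, 3))                        # covariance of (X, Y, Z)
A[:2, :2] = Axy
A[:2, 2] = A[2, :2] = Axy @ L
A[2, 2] = sigma2

theta = np.array([lam, 1 - lam, -alpha])
Dtil = np.diag([lam, 1 - lam, -alpha])
for R in [1, 10, 100, 1000, 10000]:
    M = A * (np.outer(theta, theta) - Dtil / R)   # '*' is the Hadamard product
    print(R, np.linalg.eigvalsh(M).min())
# the smallest eigenvalue becomes nonnegative once R is large enough
\end{verbatim}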
We remark that the preceding proof of the Pr\'ekopa-Leindler inequality may be extended in
an analogous way to prove Barthe's inequality~\cite{Ba1}.
\section{Proof of Ehrhard's inequality}
The parallels between the Pr\'ekopa-Leindler and Ehrhard inequalities become
obvious when they are both written in the following form.
The version of Pr\'ekopa-Leindler that we proved above may
be restated to say that
\begin{equation}\label{eq:pl}
\left.\begin{gathered}
\exp(R (\lambda \log f(X) + (1-\lambda) \log g(Y) - \alpha \log h(Z))) \le 1
\text{ a.s.} \\
\text{implies} \\
\exp(R (\lambda \log \mathbb{E} f(X) + (1-\lambda) \log \mathbb{E} g(Y) - \alpha \log \mathbb{E} h(Z))) \le 1.
\end{gathered}\right\}
\end{equation}
On the other hand, here we will prove that
\begin{equation}\label{eq:ehrhard}
\left.\begin{gathered}
\Phi\left(R (\lambda \Phi^{-1}(f(X)) + (1-\lambda) \Phi^{-1}(g(Y))
- \sigma \Phi^{-1}(h(Z)))\right) \le \tfrac12 \text{ a.s.} \\
\text{implies} \\
\Phi\left(R (\lambda \Phi^{-1}(\mathbb{E} f(X)) + (1-\lambda) \Phi^{-1}(\mathbb{E} g(Y))
- \sigma \Phi^{-1}(\mathbb{E} h(Z)))\right) \le \tfrac12.
\end{gathered}\right\}
\end{equation}
(It may not yet be clear why the $\alpha$ in~\eqref{eq:pl} has become $\sigma$ in~\eqref{eq:ehrhard};
this turns out to be the right choice, as will become clear from the example in Section~\ref{sec:ehrhard-example}.)
This implies Ehrhard's inequality in the same way that~\eqref{eq:pl} implies
the Pr\'ekopa-Leindler inequality.
In particular, our proof of~\eqref{eq:pl} suggests
a strategy for attacking~\eqref{eq:ehrhard}: define the function
\[
J_R(x, y, z)
= \Phi\left(R (\lambda \Phi^{-1}(x) + (1-\lambda) \Phi^{-1}(y)
- \sigma \Phi^{-1}(z))\right).
\]
(We will drop the parameter $R$ when it can be inferred
from the context.) In analogy with our proof of Pr\'ekopa-Leindler, we might then try to
show that for sufficiently large $R$, $A \odot H_{J_R} \succcurlyeq 0$. Unfortunately, this is false.
\subsection{An example}\label{sec:ehrhard-example}
Recall from the proof of Theorem~\ref{thm:restricted-jensen} that if
$A \odot H_J \succcurlyeq 0$ then
\[
G_{s, t, R}(x, y) :=
P_{t-s}^A J_R(P_s^1 f(x), P_s^1 g(y), P_s^{\sigma^2} h(\lambda x + (1-\lambda) y))
\]
is non-increasing in $s$ for every $x$ and $y$. We will give an example in which $G_{s,t,R}$
may be computed explicitly and it clearly fails to be non-increasing.
From now on, define $f_s = P_s^1 f$, $g_s = P_s^1 g$ and $h_s = P^{\sigma^2}_s h$.
Let $f(x) = 1_{\{x \le a\}}$,
$g(y) = 1_{\{y \le b\}}$ and $h(z) = 1_{\{z \le c\}}$, where
$c \ge \lambda a + (1-\lambda) b$. A direct computation yields
\begin{align*}
f_s(x) &= \Phi\left(\frac{a - e^{-s} x}{\sqrt{1-e^{-2s}}}\right) \\
g_s(y) &= \Phi\left(\frac{b - e^{-s} y}{\sqrt{1-e^{-2s}}}\right) \\
h_s(z) &= \Phi\left(\frac{c - e^{-s} z}{\sigma \sqrt{1-e^{-2s}}}\right).
\end{align*}
Hence,
\[
J(f_s(x), g_s(y), h_s(\lambda x + (1-\lambda) y))
= \Phi\left(R\frac{\lambda a + (1-\lambda) b - c}{\sqrt{1-e^{-2s}}}\right).
\]
If $c > \lambda a + (1-\lambda)b$ then the above quantity is increasing in $s$.
Since it is also independent of $x$ and $y$, it remains unchanged when applying
$P_{t-s}^A$. That is,
\[
G_{s, t, R}
= \Phi\left(R\frac{\lambda a + (1-\lambda) b - c}{\sqrt{1-e^{-2s}}}\right)
\]
is increasing in $s$. On the bright side, in this example $G_{s,t,R\sqrt{1-e^{-2s}}}$
is constant. Since Theorem~\ref{thm:jensen} was not built to consider such behavior,
we will adapt it so that the function $J$
is allowed to depend on $s$.
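The computation above is easy to reproduce numerically; the following sketch (Python with NumPy and SciPy; the values of $a,b,c,\lambda,R$ are arbitrary with $c>\lambda a+(1-\lambda)b$) compares the closed form for $f_s$ with a Monte Carlo evaluation of $P_s^1 f$ and tabulates the example's quantity as a function of $s$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
a, b, c, lam, R = 0.0, 0.0, 0.5, 0.5, 2.0     # c > lam*a + (1-lam)*b

def f_s(x, s, n=400_000):
    # Monte Carlo evaluation of (P_s^1 1_{(-inf, a]})(x)
    X = rng.normal(size=n)
    return np.mean(np.exp(-s) * x + np.sqrt(1 - np.exp(-2 * s)) * X <= a)

s = 0.3
print(f_s(1.0, s), norm.cdf((a - np.exp(-s) * 1.0) / np.sqrt(1 - np.exp(-2 * s))))

# the quantity G_{s,t,R} from the example, as a function of s
for s in [0.1, 0.5, 1.0, 2.0, 4.0]:
    print(s, norm.cdf(R * (lam * a + (1 - lam) * b - c)
                      / np.sqrt(1 - np.exp(-2 * s))))
# increasing in s, as claimed
\end{verbatim}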
\subsection{Allowing $J$ to depend on $t$}\label{sec:jensen-time-varying}
Recalling the notation of \S2, we assume from now on that $d_i = 1$ and $\Omega_i \subseteq [0, 1]$ for each $i$.
Then $A$ is a $k \times k$ matrix; let
$\sigma_1^2, \dots, \sigma_k^2$ be its diagonal elements.
We will consider functions of the form $J: \Omega \times [0, \infty] \to \mathbb{R}$. We write
$H_J$ for the Hessian matrix of $J$ with respect to the variables in $\Omega$, and $\pdiff J t$
for the partial derivative of $J$ with respect to the variable in $[0, \infty]$.
Let $I: [0, 1] \to \mathbb{R}$ be the function $I(x) = \phi(\Phi^{-1}(x))$.
\begin{lemma}\label{lem:time-varying-jensen}
With the notation above, suppose that $J: \Omega \times [0, \infty] \to \mathbb{R}$ is bounded and $\mathcal{C}^2$,
and take $(X_1, \dots, X_k) \sim \gamma_A$.
Let $\lambda_1, \dots, \lambda_k$ be non-negative numbers with $\sum_i \lambda_i = 1$, let $D(x)$
be the $k \times k$ diagonal matrix with $\lambda_i \sigma_i^2 / I^2(x_i)$ in position $i$,
and take some $\epsilon \ge 0$.
If $\pdiff J t(x, t) \le 0$ and
\begin{equation}
A \odot \hess{J}(x, t) - (e^{2(t+\epsilon)} - 1) \pdiff{J(x,t)}{t} D(x) \succcurlyeq 0
\label{eq:time-varying-hess}
\end{equation}
for every $x \in \Omega$ and $t > 0$ then
for every $k$-tuple of measurable functions $f_i: \mathbb{R} \to \Omega_i$,
\begin{equation}\label{eq:time-varying-jensen}
\mathbb{E} J(P_\epsilon^{\sigma_1^2} f_1(X_1), \dots, P_\epsilon^{\sigma_k^2} f_k(X_k), 0)
\ge J(\mathbb{E} f_1(X_1), \dots, \mathbb{E} f_k(X_k), \infty).
\end{equation}
\end{lemma}
Note that Lemma~\ref{lem:time-varying-jensen} has an extra parameter $\epsilon \ge 0$ compared
to our previous versions of Jensen's inequality. This is for convenience when applying Lemma~\ref{lem:time-varying-jensen}:
when $\epsilon > 0$ then the function $e^{2(t + \epsilon)} - 1$
is bounded away from zero, which makes~\eqref{eq:time-varying-hess}
easier to check.
\begin{proof}
Write $f_{i,s}$ for $P_{s+\epsilon}^{\sigma_i^2} f_i$ and $f_s = (f_{1,s}, \dots, f_{k,s})$.
Define
\[
G_{s,t} = P_{t-s-\epsilon}^A J(f_{1,s}, \dots, f_{k,s}, s).
\]
We differentiate in $s$, using the commutation formula~\eqref{eq:commutation}.
Compared to the proof of Theorem~\ref{thm:restricted-jensen},
an extra term appears because the function $J$ itself depends on $s$:
\begin{align*}
- \pdiff{}{s} G_{s,t}
&= P_{t-s-\epsilon} \sum_{i,j=1}^k \partial_i \partial_j J(f_s, s) A_{ij} f'_{i,s} f'_{j,s} - P_{t-s-\epsilon} \pdiff{J}{s}(f_s, s) \\
&= P_{t-s-\epsilon} v_s^T (A \odot H_J(f_s,s)) v_s - P_{t-s-\epsilon} \pdiff{J}{s}(f_s,s),
\end{align*}
where $v_s = \nabla f_s$.
Bakry and Ledoux~\cite{BL} proved that
$|v_{i,s}| \le \sigma_i^{-1} (e^{2(s+\epsilon)} - 1)^{-1/2} I(f_{i,s})$.
Hence,
\[
v_s^T D(f_s) v_s = \sum_{i=1}^k \lambda_i \left(\frac{\sigma_i |v_{i,s}|}{I(f_{i,s})}\right)^2 \le (e^{2(s+\epsilon)} - 1)^{-1},
\]
and so
\[
- \pdiff{}{s} G_{s,t}
\ge P_{t-s} \left(v_s^T (A \odot H_J(f_s,s)) v_s - (e^{2(s+\epsilon)} - 1) \pdiff{J}{s}(f_s,s) v_s^T D(f_s) v_s\right).
\]
Clearly, the argument of $P_{t-s}$ is non-negative pointwise if
\[
A \odot H_J(x,s) \succcurlyeq (e^{2(s+\epsilon)} - 1) \pdiff{J}{s}(x,s) D(x)
\]
for all $x, s$. In this case, $G_{s,t}$ is non-increasing in $s$ and we conclude as in
the proof of Theorem~\ref{thm:restricted-jensen}.
\end{proof}
By combining the ideas of Theorem~\ref{thm:restricted-jensen} and Lemma~\ref{lem:time-varying-jensen},
we obtain the following combined version.
\begin{corollary}\label{cor:combined-jensen}
With the notation and assumptions of Lemma~\ref{lem:time-varying-jensen}, suppose in addition
that $\{x \in \Omega: J(x, 0) < a\}$ is connected, that $\pdiff{J(x,t)}{t} \le 0$ whenever $J(x, t) < a$,
and that
\[
A \odot \hess{J}(x, t) - (e^{2(t+\epsilon)} - 1) \pdiff{J(x,t)}{t} D(x) \succcurlyeq 0
\]
for every $t \ge 0$ and every $x$ such that $J(x, t) < a$. Then for every $k$-tuple of measurable
functions $f_i: \mathbb{R} \to \Omega_i$ satisfying $J(P_\epsilon^{\sigma_1^2} f_1, \dots, P_{\epsilon}^{\sigma_k^2} f_k, 0) < a$ $\gamma_A$-a.s.,
\[
\mathbb{E} J(P_\epsilon^{\sigma_1^2} f_1(X_1), \dots, P_\epsilon^{\sigma_k^2} f_k(X_k), 0) \ge J(\mathbb{E} f_1(X_1), \dots, \mathbb{E} f_k(X_k), \infty).
\]
\end{corollary}
\subsection{The Hessian of $J$}
Define $J_R: (0, 1)^3 \to (0, 1)$ by
\[
J_R(x, y, z)
= \Phi\left(R \big(\lambda \Phi^{-1}(x) + (1-\lambda) \Phi^{-1}(y)
- \sigma \Phi^{-1}(z)\big)\right).
\]
Let $H_J = H_J(x, y, z)$ denote the $3 \times 3$ Hessian matrix of $J$;
let $A$ be the $3 \times 3$ covariance
matrix of $(X, Y, Z)$. In order to apply Corollary~\ref{cor:combined-jensen},
we will compute the matrix $A \odot H_J$.
First, we define some abbreviations: set
\begin{align*}
u &= \Phi^{-1}(x) & \Xi &= \lambda u + (1-\lambda) v - \sigma w \\
v &= \Phi^{-1}(y) & \theta &= (\lambda, 1-\lambda, -\sigma)^T \\
w &= \Phi^{-1}(z) & \mathcal{I} &= \diag(\phi(u), \phi(v), \phi(w))
\end{align*}
We will use a subscript $s$ to denote that any of the above quantities is
evaluated at $(f_s, g_s, h_s)$ instead of $(x, y, z)$. That is
$u_s = \Phi^{-1}(f_s)$, $\Xi_s = \lambda u_s + (1-\lambda) v_s - \sigma w_s$,
and so on.
\begin{lemma}\label{lem:hess}
$\displaystyle
H_J = \phi(R\Xi) \mathcal{I}^{-1} \left(
R \diag(\lambda u, (1-\lambda)v, -\sigma w) - R^3 \Xi \theta \theta^T
\right) \mathcal{I}^{-1}.
$
\end{lemma}
\begin{proof}
Noting that $\diff{u}{x} = 1/\phi(u)$,
the chain rule gives
\[
\diff{}{x} \Phi(R \Xi) =
R \lambda \frac{\phi(R \Xi)}{\phi(u)}
= R \lambda \exp\left( -\frac{R^2 \Xi^2 - u^2}{2} \right).
\]
Differentiating again,
\[
\diffII{}{x}{x} \Phi(R\Xi)
= R \lambda (u - R^2 \Xi \lambda) \frac{\phi(R\Xi)}{\phi^2(u)}.
\]
For cross-derivatives,
\[
\diffII{}{x}{y} \Phi(R\Xi)
= - R^3 \Xi \lambda(1-\lambda) \frac{\phi(R\Xi)}{\phi(u)\phi(v)}.
\]
Putting these together with the analogous terms involving differentiation
by $z$,
\begin{multline*}
\frac{H_J}{\phi(R\Xi)} =
- R^3\Xi \begin{pmatrix}
\frac{\lambda^2}{\phi^2(u)} & \frac{\lambda(1-\lambda)}{\phi(u)\phi(v)} & -\frac{\lambda \sigma}{\phi(u)\phi(w)} \\
\frac{\lambda(1-\lambda)}{\phi(u)\phi(v)} & \frac{(1-\lambda)^2}{\phi^2(v)} & -\frac{(1-\lambda)\sigma}{\phi(v)\phi(w)} \\
-\frac{\lambda\sigma}{\phi(u)\phi(w)} & -\frac{(1-\lambda)\sigma}{\phi(v)\phi(w)} & \frac{\sigma^2}{\phi^2(w)}
\end{pmatrix} \\
+ R \begin{pmatrix}
\frac{\lambda u}{\phi^2(u)} & 0 & 0 \\
0 & \frac{(1-\lambda) v}{\phi^2(v)} & 0 \\
0 & 0 & -\frac{\sigma w}{\phi^2(w)}
\end{pmatrix}.
\end{multline*}
Recalling the definition of $\mathcal{I}$ and $\theta$, this may
be rearranged into the claimed form.
\end{proof}
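Lemma~\ref{lem:hess} can be checked against finite differences; the following minimal sketch (Python with NumPy and SciPy; the parameters and the evaluation point are arbitrary) compares the closed form with a numerical Hessian of $J$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

lam, sig, R = 0.3, 0.8, 1.5                      # illustrative parameters
theta = np.array([lam, 1 - lam, -sig])

def J(p):
    return norm.cdf(R * (theta @ norm.ppf(p)))

def hess_closed_form(p):
    u = norm.ppf(p)
    Xi = theta @ u
    Iinv = np.diag(1.0 / norm.pdf(u))
    core = R * np.diag(theta * u) - R ** 3 * Xi * np.outer(theta, theta)
    return norm.pdf(R * Xi) * Iinv @ core @ Iinv

def hess_numeric(p, h=1e-5):
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
            H[i, j] = (J(p + ei + ej) - J(p + ei - ej)
                       - J(p - ei + ej) + J(p - ei - ej)) / (4 * h * h)
    return H

p = np.array([0.4, 0.6, 0.55])
print(np.max(np.abs(hess_closed_form(p) - hess_numeric(p))))
# agreement up to finite-difference error
\end{verbatim}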
Having computed $H_J$, we need to examine $A \odot H_J$. Recall that $A$
is a rank-two matrix and so it may be decomposed as $A = a a^T + b b^T$. Moreover,
the fact that $Z = \lambda X + (1-\lambda) Y$ means that $a$ and $b$
are both orthogonal to $(\lambda, 1-\lambda, -1)^T$. Recalling the definition
of $\theta$, this implies that $a \odot \theta$ and $b \odot \theta$
are both orthogonal to $(1, 1, \sigma^{-1})^T$.
This observation allows us to deal with the $\theta \theta^T$ term in
Lemma~\ref{lem:hess}:
\[
A \odot \theta \theta^T = (a a^T) \odot (\theta \theta^T) + (b b^T) \odot (\theta \theta^T)
= (a \odot \theta)^{\otimes 2} + (b \odot \theta)^{\otimes 2}.
\]
To summarize:
\begin{lemma}
The matrix $B := A \odot \theta \theta^T$ is positive semidefinite and has rank two.
Its kernel is the span of $(1, 1, \frac 1\sigma)^T$.
\end{lemma}
On the other hand, the diagonal entries of $A$ are $1, 1,$ and $\sigma^2$; hence,
\[
A \odot \diag(\lambda u, (1-\lambda)v, -\sigma w)
= \diag(\lambda u, (1-\lambda)v, - \sigma^3 w) =: D.
\]
Combining this with Lemma~\ref{lem:hess}, we have
\begin{equation}\label{eq:A-hess}
A \odot H_J = R \phi(R\Xi) \mathcal{I}^{-1}
(D - R^2 \Xi B)
\mathcal{I}^{-1}.
\end{equation}
Consider the expression above in the light of our earlier proof of
Pr\'ekopa-Leindler. Again, we have a sum of two matrices ($D$ and $-R^2 \Xi B$),
one of which is multiplied by a factor ($R^2$) that we may take to be large.
There are two important differences.
The first is that the matrix $D$ (whose analogue was constant in the proof of Pr\'ekopa-Leindler) cannot be
controlled pointwise in terms of $B$. This difference is closely related to
the example in Section~\ref{sec:ehrhard-example}; we will solve it by making $J$
depend on $t$ in the right way; the $\diff{J}{t}$ term in Corollary~\ref{cor:combined-jensen}
will then cancel out part of $D$'s contribution.
The second difference is
that in~\eqref{eq:A-hess}, the term that is multiplied by a large factor
(namely, $-\Xi B$) is not everywhere positive semi-definite because there exist
$(x, y, z) \in \mathbb{R}^3$ such that $\Xi(x, y, z) > 0$. This is the reason that we consider
the ``restricted'' formulation of Jensen's inequality in Theorem~\ref{thm:restricted-jensen}
and Corollary~\ref{cor:combined-jensen}.
\subsection{Adding the dependence on $t$}
Recall that $X$ and $Y$ have variance 1 and covariance $\rho$, that $Z = \lambda X + (1-\lambda) Y$,
and that $A$ is the covariance of $(X, Y, Z)$.
Recall also the notations $u, v, w, \Xi$, and their subscripted variants.
For $R > 0$, define $r(t) = R \sqrt{1-e^{-2t - \epsilon}}$ and
\begin{align}
J_R(x, y, z, t)
&= \Phi\left(r(t) \big(\lambda \Phi^{-1}(x) + (1-\lambda) \Phi^{-1}(y)
- \sigma \Phi^{-1}(z)\big)\right) \notag \\
&= \Phi(r(t) \Xi). \label{eq:J-def}
\end{align}
Let $E = \diag(\lambda, 1-\lambda, \sigma) / (1 + \sigma^{-1})$.
\begin{lemma}\label{lem:combined-jensen-condition}
Define $\Omega_\epsilon = [\Phi(-1/\epsilon), \Phi(1/\epsilon)]^3$.
For every $\rho, \lambda$, and $\epsilon$, there exists $R > 0$ such that
\[
A \odot H_J - (e^{2(t+\epsilon)} - 1) \pdiff{J}{t} \mathcal{I}^{-1} E \mathcal{I}^{-1} \succcurlyeq 0
\]
on $\{(x, t) \in \Omega_\epsilon \times [0, \infty): \Xi(x) \le -\epsilon\}$.
\end{lemma}
\begin{proof}
We computed $A \odot H_J$ in~\eqref{eq:A-hess} already; applying that formula
and noting that $\mathcal{I}^{-1} \succcurlyeq 0$, it suffices to show that
\[
r(t) \phi(r(t) \Xi) (D - r^2(t) \Xi B) - (e^{2(t+\epsilon)} - 1) \pdiff{J}{t} E \succcurlyeq 0
\]
whenever $\Xi \le -\epsilon$.
(Recall that $D = \diag(\lambda u, (1-\lambda) v, -\sigma^3 w)$, and that $B$ is a rank-two
positive semidefinite matrix that depends only on $\rho$ and $\lambda$, and whose kernel is
the span of $(1, 1, \sigma^{-1})^T$). We compute
\[
\pdiff{J}{t} = r'(t) \Xi \phi(r(t) \Xi) = \frac{r(t)}{e^{2t+\epsilon} - 1} \Xi \phi(r(t) \Xi).
\]
Now, there is some $\delta = \delta(\epsilon) > 0$ such that
\[
\frac{e^{2(t+\epsilon)} - 1}{e^{2t + \epsilon} - 1} \ge 1 + \delta
\]
for all $t \ge 0$. For this $\delta$,
\begin{multline*}
r(t) \phi(r(t) \Xi) (D - r^2(t) \Xi B) - (e^{2(t+\epsilon)} - 1) \pdiff{J}{t} E \\
\succcurlyeq
r(t) \phi(r(t) \Xi) (D - (1+\delta) \Xi E - r^2(t) \Xi B);
\end{multline*}
Hence, it suffices to show that $D - (1 + \delta) \Xi E - r^2(t) \Xi B \succcurlyeq 0$.
Since $\Xi \le -\epsilon$, it suffices to show that
$r^2(t) \epsilon B + D - (1 + \delta) \Xi E \succcurlyeq 0$.
Now, $B$ is a rank-two positive semi-definite matrix depending only on $\lambda$ and $\rho$.
Its kernel is spanned by $\zeta = (1, 1, \sigma^{-1})^T$. Note that $\zeta^T D \zeta = \Xi$
and $\zeta^T E \zeta = 1$. Hence,
\[
\zeta^T (D - (1 + \delta) \Xi E) \zeta = - \delta \Xi \ge \delta \epsilon > 0.
\]
Next, note that we can bound the norm of $D - (1 + \delta) \Xi E$ uniformly:
on $\Omega_\epsilon$, $\|D\| \le 1/\epsilon$
and $|\Xi| \le 2/\epsilon$. All together, if we assume (as we may) that $\delta \le 1$ then
$\|D - (1 + \delta) \Xi E\| \le 5 / \epsilon$. By Lemma~\ref{lem:psd}, if $\eta > 0$
is sufficiently small then
\[
\epsilon B + \eta(D - (1 + \delta) \Xi E) \succcurlyeq 0.
\]
To complete the proof, choose $R$ large enough so that $R^2 (1-e^{-\epsilon}) \ge 1/\eta$;
then $r^2(t) \ge 1/\eta$ for all $t$.
\end{proof}
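The matrix inequality underlying this proof can also be probed numerically. The sketch below (Python with NumPy and SciPy; the parameters are arbitrary, and the fixed value of $\delta$ is an illustrative stand-in for the $\delta(\epsilon)$ of the proof) samples points of $\Omega_\epsilon$ with $\Xi\le-\epsilon$ and reports the smallest eigenvalue of $r^2\epsilon B + D - (1+\delta)\Xi E$ for a few values of $r$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
lam, rho, eps, delta = 0.4, 0.3, 0.2, 0.05   # delta stands in for delta(eps)
L = np.array([lam, 1 - lam])
Axy = np.array([[1.0, rho], [rho, 1.0]])
sig2 = L @ Axy @ L
sig = np.sqrt(sig2)

A = np.zeros((3, 3))                         # covariance of (X, Y, Z)
A[:2, :2] = Axy
A[:2, 2] = A[2, :2] = Axy @ L
A[2, 2] = sig2

theta = np.array([lam, 1 - lam, -sig])
B = A * np.outer(theta, theta)               # rank two, PSD, kernel (1, 1, 1/sigma)
E = np.diag([lam, 1 - lam, sig]) / (1 + 1 / sig)

pts = rng.uniform(norm.cdf(-1 / eps), norm.cdf(1 / eps), size=(2000, 3))
for r in [1.0, 10.0, 100.0, 1e4]:
    worst = np.inf
    for p in pts:
        u = norm.ppf(p)
        Xi = theta @ u
        if Xi > -eps:
            continue
        D = np.diag([lam * u[0], (1 - lam) * u[1], -sig ** 3 * u[2]])
        M = r ** 2 * eps * B + D - (1 + delta) * Xi * E
        worst = min(worst, np.linalg.eigvalsh(M).min())
    print(r, worst)
# for large r the smallest eigenvalue over the sampled points becomes
# nonnegative, in line with the lemma
\end{verbatim}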
Finally, we complete the proof of~\eqref{eq:ehrhard} by a series of simple approximations.
First, let $C_a$ denote the set of continuous functions $\mathbb{R} \to [0, 1]$ that converge to $a$ at $\pm \infty$,
and note that it suffices to prove~\eqref{eq:ehrhard} in the case that $f, g \in C_0$ and $h \in C_1$. Indeed,
any measurable $f, g: \mathbb{R} \to [0, 1]$ may be approximated (pointwise at $\gamma_1$-almost every point)
from below by functions in $C_0$, and any measurable $h: \mathbb{R} \to [0, 1]$ may be approximated from above by functions in $C_1$.
If we can prove~\eqref{eq:ehrhard} for these approximations, then it follows (by the dominated convergence theorem)
for the original $f, g$, and $h$.
Now consider $f, g \in C_0$ and $h \in C_1$ satisfying $\Xi(f, g, h) \le 0$ pointwise. For $\delta > 0$, define
\begin{align*}
f_\delta &= \Phi(-1/\delta) \lor f \land \Phi(1/(3\delta)) \\
g_\delta &= \Phi(-1/\delta) \lor g \land \Phi(1/(3\delta)) \\
h_\delta &= \Phi \left( - \frac{1}{3\delta} \lor (\Phi^{-1}(h) + \delta) \land \frac{1}{\delta} \right).
\end{align*}
If $\delta > 0$ is sufficiently small then $\Xi(f_\delta, g_\delta, h_\delta) \le -\delta$ pointwise;
moreover, $f_\delta, g_\delta$, and $h_\delta$ all take values in $[\Phi(-1/\delta), \Phi(1/\delta)]$, are
continuous, and have limits at $\pm \infty$. Since $f_\delta \to f$ as $\delta \to 0$ (and similarly for $g$ and $h$),
it suffices to show that
\begin{equation}\label{eq:fdelta}
\lambda \Phi^{-1}(\mathbb{E} f_\delta) + (1-\lambda) \Phi^{-1}(\mathbb{E} g_\delta) \le \sigma \Phi^{-1}(\mathbb{E} h_\delta)
\end{equation}
for all sufficiently small $\delta > 0$.
Since $f_\delta$ has limits at $\pm \infty$,
it follows that $P_\epsilon f_\delta \to f_\delta$ uniformly
as $\epsilon \to 0$ (similarly for $g_\delta$ and $h_\delta$). By taking $\epsilon$ small enough (at least as
small as $\delta/2$), we can ensure
that $\Xi (P^1_\epsilon f_\delta, P^1_\epsilon g_\delta, P^{\sigma^2}_\epsilon h_\delta) < -\epsilon$ pointwise.
Now we apply Corollary~\ref{cor:combined-jensen} with $\Omega_i = [\Phi(-1/\epsilon), \Phi(1/\epsilon)]$,
the function $J$ defined in~\eqref{eq:J-def}, $a = \frac 12$, and
with $(\lambda_1, \lambda_2, \lambda_3) = (\lambda, 1-\lambda, \sigma^{-1})/(1 + \sigma^{-1})$.
Lemma~\ref{lem:combined-jensen-condition} implies that the condition of Corollary~\ref{cor:combined-jensen}
is satisfied. We conclude that
\begin{align*}
\frac 12
&\ge J_R(\mathbb{E} f_\delta, \mathbb{E} g_\delta, \mathbb{E} h_\delta, \infty) \\
&= \Phi\left(
R\big(\lambda \Phi^{-1}(\mathbb{E} f_\delta) + (1-\lambda) \Phi^{-1} (\mathbb{E} g_\delta) - \sigma \Phi^{-1} (\mathbb{E} h_\delta)\big)
\right),
\end{align*}
which implies~\eqref{eq:fdelta} and completes the proof of~\eqref{eq:ehrhard}.
\section*{Acknowledgements}
We thank F. Barthe and M. Ledoux for helpful comments and for directing us to related literature.
We would also like to thank R. van Handel for pointing out to us that~\eqref{eq:ehrhard} corresponds more
directly to a generalized form of Ehrhard's inequality contained in Theorem 1.2 of~\cite{Borell3}.
\bigskip
\bigskip
\footnotesize
\bibliographystyle{amsplain}
| {
"timestamp": "2016-05-25T02:03:20",
"yymm": "1605",
"arxiv_id": "1605.07233",
"language": "en",
"url": "https://arxiv.org/abs/1605.07233",
"abstract": "We prove Ehrhard's inequality using interpolation along the Ornstein-Uhlenbeck semi-group. We also provide an improved Jensen inequality for Gaussian variables that might be of independent interest.",
"subjects": "Probability (math.PR)",
"title": "An interpolation proof of Ehrhard's inequality",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587236271349,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7097978740586564
} |
https://arxiv.org/abs/1910.12228 | Ring-theoretic approaches to point-set topology | In this paper, it is shown that a topological space $X$ is compact iff every maximal ideal of the power set ring $\mathcal{P}(X)$ converges to exactly one point of $X$. Then as an application, simple and ring-theoretic proofs are provided for the Tychonoff theorem and Alexander subbase theorem. As another result in this spirit, a ring-theoretic proof is given to the fact that a topological space is a profinite space iff it is compact and totally disconnected. | \section{Introduction}
Some mathematicians consider the Tychonoff theorem the single most important result in general topology (others allow it to share this honor with Urysohn's lemma). The Tychonoff theorem is a fundamental ingredient in proving various important results in topology, geometry, algebra and mathematical analysis. This result has been investigated in the literature over the years from various points of view, see e.g. \cite{Celmentino}-\cite{Wright}. The Alexander subbase theorem and the characterization of profinite spaces are further fundamental results in general topology which have significant applications in mathematics. \\
In this paper, an algebraic characterization of the quasi-compactness of a topological space is given, see Theorem \ref{Theorem I}. Using this, the Tychonoff theorem and the Alexander subbase theorem are easily deduced. An algebraic characterization of the Hausdorffness of a topological space is also given, see Theorem \ref{Theorem III}. As an immediate consequence of Theorems \ref{Theorem I} and \ref{Theorem III}, we obtain that a topological space $X$ is compact if and only if every maximal ideal of the power set ring $\mathcal{P}(X)$ converges to exactly one point of $X$. Finally, we give an algebraic proof of the fact that a topological space is a profinite space if and only if it is compact and totally disconnected, see Theorem \ref{Theorem II}. Most of the proofs rely heavily on the notion of Zariski convergence and on systematic use of the power set ring. \\
\section{Preliminaries}
If $X$ is a set, then its power set $\mathcal{P}(X)$ together with the symmetric difference $A+B=(A\cup B)\setminus (A\cap B)$ as the addition and the intersection $A.B=A\cap B$ as the multiplication forms a commutative ring whose zero and unit are respectively the empty set and the whole set $X$. The ring $\mathcal{P}(X)$ is called the \emph{power set ring} of $X$. If $f:X\rightarrow Y$ is a function, then the map $\mathcal{P}(f):\mathcal{P}(Y)\rightarrow\mathcal{P}(X)$ defined by $A\rightsquigarrow f^{-1}(A)$ is a morphism of rings. In fact, the assignments $X\rightsquigarrow\mathcal{P}(X)$ and $f\rightsquigarrow\mathcal{P}(f)$ form a faithful contravariant functor from the category of sets to the category of commutative rings. We call it the \emph{power set functor}. \\
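For finite sets the power set ring is easy to experiment with; the following minimal sketch (Python; not part of the paper) realizes $\mathcal{P}(X)$ with symmetric difference as addition and intersection as multiplication, and spot-checks the ring axioms and the Boolean property on a three-element set:
\begin{verbatim}
from itertools import chain, combinations

def powerset(X):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(X, r) for r in range(len(X) + 1))]

def add(A, B):      # symmetric difference
    return A ^ B

def mul(A, B):      # intersection
    return A & B

X = frozenset({1, 2, 3})
P = powerset(X)
zero, one = frozenset(), X

for A in P:
    assert add(A, A) == zero and mul(A, A) == A and mul(A, one) == A
    for B in P:
        for C in P:
            assert add(add(A, B), C) == add(A, add(B, C))
            assert mul(A, add(B, C)) == add(mul(A, B), mul(A, C))
print("ring axioms verified on the power set of", set(X))
\end{verbatim}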
A ring is called a Boolean ring if every element is idempotent. It is easy to see that every Boolean ring is a commutative ring, and in a Boolean ring every prime ideal is a maximal ideal. The power set ring $\mathcal{P}(X)$ is a typical example of Boolean rings. \\
If $f_{1},...,f_{n}$ are a finite number of idempotent elements of a commutative ring $R$, then $(f_{1},...,f_{n})$ is a principal ideal of $R$ and generated by an idempotent, since $(f_{1},f_{2})=(f_{1}+f_{2}-f_{1}f_{2})$. \\
If $A\in\mathcal{P}(X)$ then $\mathcal{P}(A)$ is an ideal of $\mathcal{P}(X)$. In fact, $\mathcal{P}(A)=(A)$ is a principal ideal. If $A_{1},...,A_{n}$ are a finite number of elements of $\mathcal{P}(X)$ then $(A_{1},...,A_{n})=\mathcal{P}(\bigcup\limits_{i=1}^{n}A_{i})$. \\
If $x\in X$, then clearly $\mathfrak{m}_{x}:=\mathcal{P}(X\setminus\{x\})$ is a maximal ideal of $\mathcal{P}(X)$. It is also easy to see that $X$ is a finite set iff every maximal ideal of $\mathcal{P}(X)$ is of the form $\mathfrak{m}_{x}$. \\
If $\phi:A\rightarrow B$ is a morphism of rings, then the induced map $\Spec(B)\rightarrow\Spec(A)$ is given by $\mathfrak{p}\rightsquigarrow\phi^{-1}(\mathfrak{p})$. \\
If $f$ is a member of a ring $R$, then $D(f)=\{\mathfrak{p}\in\Spec(R): f\notin\mathfrak{p}\}$. \\
\begin{definition} Let $X$ be a topological space, $x\in X$ and $M$ a maximal ideal of $\mathcal{P}(X)$. Then we say that $M$ is \emph{convergent} (or, \emph{Zariski convergent}) to the point $x$ if $M\in D(U)$ for every open subset $U$ of $X$ containing $x$. \\
\end{definition}
Let $\phi:X\rightarrow Y$ be a continuous map of topological spaces. If a maximal ideal $M$ of $\mathcal{P}(X)$ converges to some point $x\in X$, then clearly the maximal ideal $\mathcal{P}(\phi)^{-1}(M)$ of $\mathcal{P}(Y)$ is convergent to $\phi(x)$. \\
A quasi-compact and Hausdorff topological space is called a compact space. \\
\begin{definition}\label{Definition I} Let $(X_{i},\phi_{i,j})$ be a projective (inverse) system of finite discrete spaces over a directed poset $(I,<)$. The projective (inverse) limit $X=\lim\limits_{\overleftarrow{i\in I}}X_{i}$ with the induced product topology, as a subset of $\prod\limits_{i\in I}X_{i}$, is called a \emph{profinite space}. \\
\end{definition}
\section{Main results}
The following is one of the main results of this paper which builds a bridge between topology and commutative ring theory. \\
\begin{theorem}\label{Theorem I} A topological space $X$ is quasi-compact if and only if every maximal ideal of $\mathcal{P}(X)$ converges to a point of $X$. \\
\end{theorem}
{\bf Proof.} Let $X$ be quasi-compact and let $M$ be a maximal ideal of $\mathcal{P}(X)$. The collection of closures $\overline{A}$ with $A\in S:=\mathcal{P}(X)\setminus M$ has the finite intersection property. Therefore $\bigcap\limits_{A\in S}\overline{A}\neq\emptyset$, since $X$ is quasi-compact. Thus we may choose some point $x$ in the intersection. If $U$ is an open of $X$ containing $x$, then it will be enough to show that $U\notin M$. If $U\in M$ then
$U^{c}=X\setminus U\notin M$ and so $x\in\overline{U^{c}}=U^{c}$ which is a contradiction. Conversely, if $\{U_{i}: i\in I\}$ is an open covering of $X$ then we claim that the ideal $(U_{i}: i\in I)$ is the whole ring $\mathcal{P}(X)$. If not, then there exists a maximal ideal $M$ of $\mathcal{P}(X)$ containing this ideal. By the hypothesis, $M$ converges to a point $x\in X$. Clearly $x\in U_{k}$ for some $k$. It follows that $U_{k}\notin M$ which is a contradiction. This establishes the claim. Thus there exists a finite subset $J\subseteq I$ such that $\mathcal{P}(X)=(U_{i}: i\in J)=\mathcal{P}(\bigcup\limits_{i\in J}U_{i})$. This yields that $X=\bigcup\limits_{i\in J}U_{i}$. $\Box$ \\
\begin{corollary}$($Tychonoff Theorem$)$ Let $(X_{i})_{i\in I}$ be a family of quasi-compact topological spaces. Then $X=\prod\limits_{i\in I}X_{i}$ with the product topology is quasi-compact. \\
\end{corollary}
{\bf Proof.} Let $M$ be a maximal ideal of $\mathcal{P}(X)$. Set $M_{i}:=\mathcal{P}(\pi_{i})^{-1}(M)$ where $\pi_{i}:X\rightarrow X_{i}$ is the canonical projection map. For each $i\in I$, by Theorem \ref{Theorem I}, the maximal ideal $M_{i}$ is convergent to a point $x_{i}\in X_{i}$. To prove the assertion, by Theorem \ref{Theorem I}, it suffices to show that $M$ is convergent to the point $x=(x_{i})$. Let $U$ be an open of $X$ containing $x$. Then there exists a basis open $V=\prod\limits_{i\in I}V_{i}$ of $X$ such that $x\in V\subseteq U$ where each $V_{i}$ is an open of $X_{i}$ and $V_{i}=X_{i}$ for all but a finite number of indices $i$. Let $J$ be the set of all $i\in I$ such that $V_{i}\neq X_{i}$. Then $V=\bigcap\limits_{i\in J}\pi_{i}^{-1}(V_{i})$. Clearly $V_{i}\notin M_{i}$ and so $\pi^{-1}_{i}(V_{i})\notin M$ for all $i\in J$. Thus $V\notin M$, since $J$ is a finite set and $M$ is a prime ideal. Therefore $U\notin M$. $\Box$ \\
\begin{corollary}$($Alexander Subbase Theorem$)$ Let $X$ be a topological space and let $\mathscr{D}$ be a subbase of $X$ such that every covering of $X$ by elements of $\mathscr{D}$ has a finite subcover. Then $X$ is quasi-compact. \\
\end{corollary}
{\bf Proof.} Let $M$ be a maximal ideal of $\mathcal{P}(X)$. By Theorem \ref{Theorem I}, it will be enough to show that $M$ is convergent to a point of $X$. By the hypothesis, $X\neq\bigcup\limits_{D\in M\cap\mathscr{D}}D$, since otherwise we could find finitely many $D_{1},...,D_{n}\in M\cap\mathscr{D}$ such that $X=\bigcup\limits_{i=1}^{n}D_{i}$ and so $\mathcal{P}(X)=(D_{1},...,D_{n})\subseteq M$, which is a contradiction. Hence, we may choose some $x\in X$ such that $x\notin\bigcup\limits_{D\in M\cap\mathscr{D}}D$. If $U$ is an open of $X$ containing $x$ then there exist finitely many $D'_{1},...,D'_{s}\in\mathscr{D}$ such that $x\in\bigcap\limits_{k=1}^{s}D'_{k}\subseteq U$. But $\bigcap\limits_{k=1}^{s}D'_{k}\notin M$, since each $D'_{k}\notin M$ (as $x\in D'_{k}$) and $M$ is a prime ideal. Therefore $U\notin M$. $\Box$ \\
\begin{theorem}\label{Theorem III} A topological space $X$ is Hausdorff if and only if every maximal ideal of $\mathcal{P}(X)$ converges to at most one point of $X$. \\
\end{theorem}
{\bf Proof.} Assume that $X$ is Hausdorff. Suppose there exists a maximal ideal $M$ of $\mathcal{P}(X)$ which is convergent to the distinct points $x$ and $y$ of $X$. Then we may find opens $U$ and $V$ of $X$ with $x\in U$, $y\in V$ and $U\cap V=\emptyset$. It follows that either $U\in M$ or $V\in M$, which is a contradiction. Conversely, let $x$ and $y$ be two distinct points of $X$. Let $I$ be the ideal of $\mathcal{P}(X)$ generated by all $U^{c}$ where $U$ is an open of $X$ containing $x$. Similarly, let $J$ be the ideal of $\mathcal{P}(X)$ generated by all $V^{c}$ where $V$ is an open of $X$ containing $y$. We claim that $I+J=\mathcal{P}(X)$. If not, then there exists a maximal ideal $M$ of $\mathcal{P}(X)$ such that $I+J\subseteq M$. It follows that $M$ is convergent to the both points $x$ and $y$, which is a contradiction. This establishes the claim. Thus we may find a finite number $U_{1},...,U_{n}$ of open neighborhoods of $x$ with $n\geq1$ and a finite number $V_{1},...,V_{m}$ of open neighborhoods of $y$ with $m\geq1$ such that $\mathcal{P}(X)=(U^{c}_{1},...,U^{c}_{n})+(V^{c}_{1},...,V^{c}_{m})$. This yields that $(\bigcap\limits_{i=1}^{n}U_{i})\cap
(\bigcap\limits_{j=1}^{m}V_{j})=\emptyset$. $\Box$ \\
\begin{corollary} A topological space $X$ is compact if and only if every maximal ideal of $\mathcal{P}(X)$ converges to exactly one point of $X$. \\
\end{corollary}
{\bf Proof.} It is an immediate consequence of Theorems \ref{Theorem I} and \ref{Theorem III}. $\Box$ \\
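For a finite space every maximal ideal of $\mathcal{P}(X)$ is of the form $\mathfrak{m}_{x}$, so Theorems \ref{Theorem I} and \ref{Theorem III} can be illustrated directly. The following minimal sketch (Python; the Sierpi\'nski topology is used only as an example) lists, for each $x$, the points to which $\mathfrak{m}_{x}$ converges:
\begin{verbatim}
X = {0, 1}
# the Sierpinski topology on {0, 1}: quasi-compact (finite) but not Hausdorff
opens = [frozenset(), frozenset({0}), frozenset({0, 1})]

def converges(x, y):
    # For a finite set X every maximal ideal of P(X) has the form
    # m_x = P(X \ {x}); m_x converges to y iff no open set containing y
    # lies in m_x, i.e. iff every open set containing y also contains x.
    return all(x in U for U in opens if y in U)

for x in sorted(X):
    print(f"m_{x} converges to {[y for y in sorted(X) if converges(x, y)]}")
# every maximal ideal converges to at least one point (quasi-compactness),
# but m_0 converges to two points, so the space is not Hausdorff,
# in line with the two theorems above
\end{verbatim}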
Let $R$ be a Boolean ring. Then $\{0,1\}$ is a subring of $R$. If $A$ is a subring of $R$ and $f\in R$ then $A[f]=\{a+bf: a,b\in A\}$. In particular, if $A$ is a finite subring of $R$ and $f_{1},...,f_{n}\in R$ then $A[f_{1},...,f_{n}]$ is also a finite subring of $R$. Let $\{R_{i}: i\in I\}$ be the set of all finite subrings of $R$.
We define $j<i$ if $R_{j}$ is a proper subset of $R_{i}$. Then the poset $(I,<)$ is directed, because if $A$ and $B$ are finite subrings of $R$ then, as observed above, $A[B]$ (the subring generated by $A\cup B$) is a finite subring of $R$ and $A,B\subseteq A[B]$. \\
\begin{lemma}\label{Lemma I} Let $R$ be a Boolean ring. Then $\Spec(R)$ is a profinite space. \\
\end{lemma}
{\bf Proof.} Let $\{R_{i}: i\in I\}$ be the set of all finite subrings of $R$.
Then the $\Spec(R_{i})$ together with the $\phi_{i,j}:\Spec(R_{i})\rightarrow\Spec(R_{j})$ induced by the inclusions $R_{j}\subseteq R_{i}$, as the transition morphisms, form a projective system of finite discrete spaces over the poset $(I,<)$. We show that $\Spec(R)$ together with the canonical maps $p_{i}:\Spec(R)\rightarrow\Spec(R_{i})$, induced by the inclusions $R_{i}\subseteq R$, is the projective limit of the above system. By the universal property of the projective limits, there exists a (unique) continuous map $\phi:\Spec(R)\rightarrow X=\lim\limits_{\overleftarrow{i\in I}}\Spec(R_{i})$ such that $p_{i}=\pi_{i}\circ\phi$ for all $i$, where each $\pi_{i}:X\rightarrow\Spec(R_{i})$ is the canonical projection. Therefore $\phi(M)=(M\cap R_{i})$. The map $\phi$ is clearly a closed map because $\Spec(R)$ is quasi-compact and $X$ is Hausdorff. It remains to show that it is bijective. Suppose $\phi(M)=\phi(N)$. If $f\in R$ then $A=\{0,1,f,1+f\}$ is a subring of $R$ and so $M\cap A=N\cap A$. Thus
$M= N$. Finally, take $(M_{i})\in X$ where each $M_{i}$ is a maximal ideal of $R_{i}$. Let $M$ be the ideal of $R$ generated by the subset $\bigcup\limits_{i\in I}M_{i}\subseteq R$. We show that $M$ is a maximal ideal of $R$. Clearly $M$ is a proper ideal of $R$. If not, then there exists a finite subset $J\subseteq I$ such that the ideal generated by the subset $\bigcup\limits_{i\in J}M_{i}$ is the whole ring $R$.
But we may find some $k\in I$ such that $i\leq k$ for all $i\in J$,
since $(I,<)$ is directed. We have $\bigcup\limits_{i\in J}M_{i}\subseteq M_{k}$ because $M_{i}=R_{i}\cap M_{k}$ for all $i\in J$. This yields that $M_{k}=R_{k}$ which is a contradiction. If $f,g\in R$ such that $fg\in M$ then similarly above there exists some $k\in I$ such that $f,g \in R_{k}$ and $fg\in M_{k}$. Thus either $f\in M_{k}$ or $g\in M_{k}$. Therefore $M$ is a prime ideal of $R$. Clearly $M_{i}\subseteq M\cap R_{i}$ and so $M_{i}=M\cap R_{i}$ for all $i\in I$. $\Box$ \\
\begin{lemma}\label{Lemma II} Let $(X_{i})$ be a family of Hausdorff and totally disconnected spaces. If $\prod\limits_{i}X_{i}$ is equipped with the product topology, then every subspace $X$ is Hausdorff and totally disconnected. \\
\end{lemma}
{\bf Proof.} It is clearly Hausdorff. Let $C$ be a connected subset of $X$. If $(x_{i})$ and $(y_{i})$ are two distinct points of $C$ then there exists some $k$ such that $x_{k}\neq y_{k}$. Clearly $x_{k},y_{k}\in\pi_{k}(C)$ and $\pi_{k}(C)$ is a connected subset of $X_{k}$ where $\pi_{k}:X\rightarrow X_{k}$ is the canonical projection. So $\pi_{k}(C)$ is a single-point set, since $X_{k}$ is totally disconnected. But this is a contradiction, which completes the proof. $\Box$ \\
\begin{theorem}\label{Theorem II} A topological space is a profinite space if and only if it is compact and totally disconnected. \\
\end{theorem}
{\bf Proof.} Let $X$ be a profinite space. By Lemma \ref{Lemma II}, it is Hausdorff and totally disconnected. To see the quasi-compactness we use Theorem \ref{Theorem I}. So let $M$ be a maximal ideal of $\mathcal{P}(X)$. By taking into account the notations of Definition \ref{Definition I}, setting $M_{i}:=\mathcal{P}(\pi_{i})^{-1}(M)$ where each $\pi_{i}:X\rightarrow X_{i}$ is the canonical projection. But each $X_{i}$ is a finite set and so $M_{i}=\mathcal{P}(X_{i}\setminus\{x_{i}\})$ for some $x_{i}\in X_{i}$. We prove that $M$ is convergent to $x=(x_{i})$. First we have to show that $x\in X$. If $j\leq i$ then $\pi_{j}=\phi_{i,j}\circ\pi_{i}$. It follows that $M_{j}=\mathcal{P}(\phi_{i,j})^{-1}(M_{i})$. This yields that $M_{j}$ is convergent to $\phi_{i,j}(x_{i})$ because $M_{i}$ is obviously convergent to $x_{i}$. Therefore $\phi_{i,j}(x_{i})=x_{j}$. Hence, $x\in X$. Now let $U$ be an open of $X$ containing $x$. There exists a basis open $V=\prod\limits_{i\in I}V_{i}$ in the product topology such that $x\in X\cap V\subseteq U$. Thus there exists a finite subset $J\subseteq I$ such that $X\cap V=\bigcap\limits_{i\in J}\pi^{-1}(V_{i})$ and $\pi^{-1}(V_{i})\notin M$ for all $i$. Therefore $X\cap V$ and so $U$ are not in $M$. Hence, $M$ is convergent to $x$. Conversely, let $X$ be a compact totally disconnected space. Let $R=\Clop(X)$ be the set of all clopen (both open and closed) subsets of $X$. Clearly $R$ is a subring of $\mathcal{P}(X)$. It is well known that for each point $x$ in a compact space $X$, then the connected component of $X$ containing $x$ is the intersection of all $A\in R$ such that $x\in A$. Using this fact, then it can be shown that the map $X\rightarrow\Spec(R)$ given by $x\rightsquigarrow\mathfrak{m}_{x}\cap R$ is a homeomorphism where $\mathfrak{m}_{x}=\mathcal{P}(X\setminus\{x\})$. Therefore by Lemma \ref{Lemma I}, $X$ is a profinite space. $\Box$ \\
| {
"timestamp": "2020-11-05T02:15:42",
"yymm": "1910",
"arxiv_id": "1910.12228",
"language": "en",
"url": "https://arxiv.org/abs/1910.12228",
"abstract": "In this paper, it is shown that a topological space $X$ is compact iff every maximal ideal of the power set ring $\\mathcal{P}(X)$ converges to exactly one point of $X$. Then as an application, simple and ring-theoretic proofs are provided for the Tychonoff theorem and Alexander subbase theorem. As another result in this spirit, a ring-theoretic proof is given to the fact that a topological space is a profinite space iff it is compact and totally disconnected.",
"subjects": "Commutative Algebra (math.AC); General Topology (math.GN)",
"title": "Ring-theoretic approaches to point-set topology",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587225460772,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7097978732818144
} |
https://arxiv.org/abs/1808.09485 | Spijker's example and its extension | Strongly and weakly stable linear multistep methods can behave very differently. The latter class can produce spurious oscillations in some of the cases for which the former class works flawlessly. The main question is if we can find a well defined property which clearly tells the difference between them. There are many explanations from different viewpoints. We cite Spijker's example which shows that the explicit two step midpoint method is unstable with respect to the Spijker norm. We show that this result can be extended for the general weakly stable case. | \section{Introduction}
This paper focuses on the stability and instability of linear multistep methods. When linear multistep methods are introduced, it is unavoidable to discuss the root-condition, and usually its two variants, which divide these methods into two classes: the weakly and the strongly stable linear multistep methods. This can be found in almost every textbook on the numerical solution of ordinary differential equations, see e.g. \cite[Section 5.2.3]{AP}. The root-condition is closely related to the stability of linear multistep methods. As is well known, stability together with consistency implies the convergence of the method. This result can be obtained in different setups; we follow the book \cite{S73}, where stability, consistency and convergence are defined in a general sense, forming the base of a beautiful theoretical framework. This book also gives a detailed description of how to use this framework for nonlinear ODEs. Our intention is to clarify the relation between the strongly/weakly stable linear multistep methods and the stability of linear multistep methods in the above mentioned setting. Spijker's example \cite{S} gave the first (negative) result about this relation. The example shows that the explicit two step midpoint method is unstable with respect to the general notion of stability if an unusual norm is used. We extend this example to the whole weakly stable class.
The paper is organized as follows.
We introduce linear multistep methods and their basic notions that are important for us, including the definition of weakly/strongly stable linear multistep methods. Then we reformulate linear multistep methods and define stability in the general sense. After this preparation we recall Spijker's example, and finally we present the new result which generalizes Spijker's example. We conclude the paper with a critical remark.
\section{Stability of linear multistep methods}
Without loss of generality we consider the scalar autonomous \emph{initial value problem} (IVP)
\begin{equation}\label{IVP}
\begin{cases}
u(0) = u^{0}\thinspace,\\
\dot u(t) = f(u(t))\thinspace,
\end{cases}
\end{equation}
where $t\in[0,T],\ u^{0}\in\mathbb{R}$ is the initial value, $u:[0,T]\to\mathbb{R}$ is the unknown function and we assume
that $f$ is Lipschitz continuous.
In practice we have to use a numerical method to approximate the solution of \eqref{IVP} since finding the solution analytically is impossible in most of the cases. There are many possible choices, one is the application of a linear multistep method (LMM).
\smallskip
\emph{Linear multistep method}s can be given in the following way:
\begin{equation}\label{LMM}
\begin{cases}
u_i = c^{i}\thinspace,&\quad i=0,\ldots, k-1\\[8pt]
\dfrac{1}{h} \sum\limits_{j=0}^{k} \alpha_{j} u_{i-j}= \sum\limits_{j=0}^{k} \beta_{j} f(u_{i-j})\thinspace,&\quad i=k,\ldots, n+k-1=N\thinspace,
\end{cases}
\end{equation}
where $h=T/N$ is the step size, $\alpha_{j}$, $\beta_{j}\in\mathbb{R}$, $\alpha_{0}\neq 0$ are the coefficients of the method and the constants $c^{i}$ are some approximation of the solution on the first $k$ time levels. When the latter are known (here we do not go into the details of how to determine these values since this is irrelevant to the results of the paper) the method can ``run'': we can calculate the next approximation and so on. To get $u_i$, which approximates the solution $u(i\cdot h)$ at the $i$-th time level, we only need to know the previous $k$ approximations. Thus the formula represents a $k$-step method. Note that while $k$ is fixed for the method, the quantities $n$, $N=n+k-1$ and $h$ can vary as the grid gets finer. As a shorthand notation we will later write $f_{i-j}$ for $f(u_{i-j})$.
As an example consider the explicit two step midpoint method (sometimes called leapfrog scheme in the context of parabolic PDEs)
\begin{equation}\label{midpoint}
\begin{cases}
u_i = c^{i}\thinspace,&\quad i=0,1\\[8pt]
\dfrac{1}{h} \left( \frac{1}{2}u_i-\frac{1}{2}u_{i-2}\right) = f_{i-1}\thinspace,&\quad i=2,\ldots, n+k-1=N
\end{cases}
\end{equation}
which plays the main role in Spijker's example.
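For concreteness, the following short Python sketch (not part of the original example; the model problem $\dot u(t)=-u(t)$, $u(0)=1$, the horizon $T=5$, the grid size and the explicit Euler start-up are chosen only for illustration) runs the scheme \eqref{midpoint} and prints the errors of the last two steps, in which the sign-alternating contribution of the parasitic root $-1$ of the first characteristic polynomial is visible.
\begin{verbatim}
import math

# Explicit two-step midpoint (leapfrog) method for u' = lam*u, u(0) = u0.
# The start-up value c^1 is taken from one explicit Euler step.
def midpoint(lam, u0, T, N):
    h = T / N
    u = [u0, u0 + h * lam * u0]
    for i in range(2, N + 1):
        u.append(u[i - 2] + 2.0 * h * lam * u[i - 1])
    return u

u = midpoint(-1.0, 1.0, 5.0, 50)
h = 5.0 / 50
# errors of the last two steps: they alternate in sign because the start-up
# error excites the parasitic root -1 of the first characteristic polynomial
print(u[-2] - math.exp(-(5.0 - h)), u[-1] - math.exp(-5.0))
\end{verbatim}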
The \emph{first characteristic polynomial} associated to \eqref{LMM} is defined as
\begin{equation*}
\varrho(z)=\sum\limits_{j=0}^{k} \alpha_{j} z^{k-j}\thinspace.
\end{equation*}
Usually, two types of root-conditions are defined. These are presented below.
\begin{definition}\label{d:rootcondition}
The method is said to be \emph{strongly stable} if for every root $\xi_i\in\mathbb{C}$ of the first characteristic polynomial $|\xi_i|< 1$ holds except $\xi_{1}= 1$, which is a simple root.
A not strongly stable method is said to be \emph{weakly stable} if for every root $\xi_i\in\mathbb{C}$ of the first characteristic polynomial $|\xi_i|\leq 1$ holds and if $|\xi_i|= 1$ then it is a simple root, moreover $\xi_{1}= 1$.
\end{definition}
We note that sometimes these are defined slightly differently. The two main possible differences are the following.
First, not requiring that $\xi_{1}= 1$ holds. Second, the weakly stable class containing the strongly stable class. Our reason not to vote for this option is that we want to distinguish clearly between the two.
Roughly speaking, being weakly (or strongly) stable means that when the method is applied to $\dot u(t)=0$ the approximation remains bounded, which is an understandable requirement.
The explicit two step midpoint method is weakly stable since its first characteristic polynomial is $\frac{1}{2}\left( z^{2}-1 \right) $ with roots $z=\pm 1$.
In the weakly stable case we have another root at the boundary of the unit circle which could cause problems in some of the cases. One type of explanation about the difference between weakly and strongly stable LMMs tries to exploit this fact directly, see eg. \cite[Example 5.7]{AP}. Our approach is different.
\smallskip
In the following we rewrite LMMs \eqref{LMM} into the form for which we can define stability in the general sense. A method can be represented with a sequence of operators $F_N: \mathcal{X}_N\to \mathcal{Y}_N$, where $\mathcal{X}_N$, $\mathcal{Y}_N$ are $k+n$ dimensional normed spaces with norms $\left\|\cdot\right\|_{\mathcal{X}_N}$, $\left\|\cdot\right\|_{\mathcal{Y}_N}$ respectively and
\begin{equation*}
(F_N({\bf u}_N))_i=
\begin{cases}
u_i - c^{i}\thinspace,& \quad i=0,\ldots, k-1\\[8pt]
\dfrac{1}{h} \sum\limits_{j=0}^{k} \alpha_{j} u_{i-j} - \sum\limits_{j=0}^{k} \beta_{j} f(u_{i-j})\thinspace,& \quad i=k,\ldots, n+k-1=N\thinspace.
\end{cases}
\end{equation*}
Finding the approximating solution means that we have to solve the non-linear system of equations $F_N({\bf u}_N) ={\bf 0}$. $F_N$ can be represented in the following way:
\begin{equation*}
F_N({\bf u}_N) ={\bf A}_N {\bf u}_N -{\bf B}_N f({\bf u}_N)-{\bf c}_N\thinspace,
\end{equation*}
where ${\bf u}_{N}=({\bf u}_{k},{\bf u}_{n})^{T}=(u_0,\ldots,u_{k-1},u_k,\ldots,u_{n+k-1})^{T}\in \mathbb{R}^{k+n}$, ${\bf u}_{k}\in \mathbb{R}^{k}$, ${\bf u}_{n}\in \mathbb{R}^{n}$, \\
$f({\bf u}_N)= (f(u_0),f(u_1),\ldots,f(u_{n+k-1}))^{T}\in \mathbb{R}^{k+n}$, ${\bf c}_n= (c^{0},c^{1},\ldots,c^{k-1},0,\ldots,0)^{T}\in \mathbb{R}^{k+n}$,
$$
{\bf A}_N =\left( \begin{array}{cc}
{\bf I} & {\bf 0} \\
{\bf A}_{k} & {\bf A}_{n} \\
\end{array} \right) \thinspace,
\qquad
{\bf B}_N =\left( \begin{array}{cc}
{\bf 0} & {\bf 0} \\
{\bf B}_{k} & {\bf B}_{n} \\
\end{array} \right) \thinspace,
$$
where ${\bf I}\in \mathbb{R}^{k\times k}$ is the identity matrix, ${\bf A}_{k},{\bf B}_{k}\in \mathbb{R}^{n\times k}$, ${\bf A}_{n},{\bf B}_{n}\in \mathbb{R}^{n\times n}$,
$$
{\bf A}_{k} =\dfrac{1}{h}\left( \begin{array}{cccc}
\alpha_k & \ldots & \alpha_2 & \alpha_1 \\
0 & \alpha_k & \ldots & \alpha_2 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \ldots & \ldots & \alpha_k \\
0 & \ldots & \ldots & 0 \\
\vdots & \ddots & \ddots & \vdots \\
0 & \ldots & \ldots & 0 \\
\end{array} \right)
\quad
{\bf A}_{n} =\dfrac{1}{h}\left( \begin{array}{cccccc}
\alpha_0 & 0 & \ldots & \ldots & \ldots & 0 \\
\alpha_1 & \alpha_0 & 0 & \ldots & \ldots & 0 \\
\alpha_2 & \alpha_1 & \alpha_0 & 0 & \ldots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & \alpha_k & \ldots & \alpha_0
\end{array} \right)
$$
and ${\bf B}_{k}$, ${\bf B}_{n}$ are the same as ${\bf A}_{k}$, ${\bf A}_{n}$, except that we have to omit the $\frac{1}{h}$ factor and the $\alpha$-s have to be changed to $\beta$-s.
\begin{definition}\label{d:stab}
We call a method \emph{stable in the norm pair} $\left( \left\|\cdot\right\|_{\mathcal{X}_n} \thinspace, \left\|\cdot\right\|_{\mathcal{Y}_n}\right)$ if for all IVP \eqref{IVP} $\exists S\in\mathbb{R}$ and $\exists N_0\in\mathbb{N}$ such that $\forall N\geq N_0 $\thinspace, $\forall {\bf u}_N, {\bf v}_N\in \mathbb{R}^{k+n}$ the estimate
\begin{equation}\label{Sstability}
\left\| {\bf u}_N- {\bf v}_N \right\|_{\mathcal{X}_N} \leq S \left\| F_N ({\bf u}_N)- F_N ({\bf v}_N) \right\|_{\mathcal{Y}_N}
\end{equation}
holds.
\end{definition}
To define stability in this way has a definite profit. It is general in the sense that it works for almost every type of numerical method approximating the solution of ODEs and PDEs as well. Convergence can be proved by the popular recipe "consistency + stability = convergence"
$$\left\| \varphi_N (\bar{u}) - \bar{{\bf u}}_N \right\|_{\mathcal{X}_N}\leq S \left\| F_N (\varphi_N(\bar{u}))- F_N (\bar{{\bf u}}_N) \right\|_{\mathcal{Y}_N} =
S \left\| F_N (\varphi_N(\bar{u})) \right\|_{\mathcal{Y}_N}\to 0\thinspace,
$$
where $\bar{u}$, $\bar{{\bf u}}_N$ denote the solution of the original problem \eqref{IVP} and of the approximating problem $F_N ({\bf u}_N)={\bf 0}$ respectively, and $\varphi_N:\mathcal{X}\to \mathcal{X}_N$ are projections from the normed space where the original problem is set, so that $\varphi_N (\bar{u}) - \bar{{\bf u}}_N$ represents the error (measured in $\mathcal{X}_N$). Finally, $\left\| F_N (\varphi_N(\bar{u})) \right\|_{\mathcal{Y}_N}\to 0$ is exactly the definition of consistency in this framework. We note that the existence of $\bar{{\bf u}}_N$ (from some index on) is also a consequence of stability, see \cite[Lemma 24. and 25.]{FMF}, cf. \cite[Lemma 1.2.1]{S73}. There are many versions of Definition \ref{d:stab} which require the stability estimate only in some neighbourhood, see \cite{FMF}, but the definition as stated above is satisfactory for the IVP \eqref{IVP}.
In the following we introduce norm pairs which are interesting for us. We start with some norm notations: for $k\in \mathbb{N}$ fixed, ${\bf u}_{N}\in \mathbb{R}^{k+n}$ the $k\infty$ norm is defined as
$$\left\|{\bf u}_{N}\right\|_{k\infty}=\max_{0\leq i\leq k-1}|u_{i}|+\max_{k\leq i\leq N}|u_{i}|\thinspace,$$
thus $\left\|{\bf u}_{N}\right\|_{k\infty}=\left\|{\bf u}_{k}\right\|_{\infty}+\left\|{\bf u}_{n}\right\|_{\infty} \thinspace.$
While the $k$--Spijker-norm is defined as
$$\left\|{\bf u}_{N}\right\|_{k\$}=\max_{0\leq i\leq k-1}|u_{i}|+h\max_{k\leq l\leq N}\left| \sum\limits_{i=k}^{l} u_{i}\right| \thinspace.$$
Using the notation $\left\|{\bf u}_{n}\right\|_{\$}=h\max_{k\leq l\leq N}\left| \sum\limits_{i=k}^{l} u_{i}\right|$ the $k$--Spijker-norm can be expressed as
$ \left\|{\bf u}_{N}\right\|_{k\$}=\left\|{\bf u}_{k}\right\|_{\infty}+\left\|{\bf u}_{n}\right\|_{\$}\thinspace.$ Introducing another notation, the Spijker-norm can be given in a useful way which will be presented in the following.
First, we introduce
${\bf E}_{n}\in \mathbb{R}^{n\times n}$
$$
{\bf E}_{n}=
\dfrac{1}{h}\left( \begin{array}{ccccc}
1 & 0 & \ldots & \ldots & 0 \\
-1 & 1 & 0 & \ldots & 0 \\
0 & -1 & 1 & 0 & \vdots \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & -1 & 1
\end{array} \right) \quad \mbox{for which} \quad
{\bf E}_{n}^{-1}=
\left( \begin{array}{ccccc}
h & 0 & \ldots & \ldots & 0 \\
h & h & 0 & \ldots & 0 \\
h & h & h & 0 & \vdots \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
h & h & \ldots & h & h
\end{array} \right) \thinspace.
$$
Note that ${\bf E}_{n}$ represents the linear part of the explicit Euler method (without the initial step) and its inverse can be interpreted as the simplest numerical integration.
Second, if ${\bf A}$ is a regular matrix and $\left\| \cdot\right\|_{\star}$ is a norm then
$\left\| {\bf u}\right\|_{{\bf A},\star}=\left\| {\bf A}{\bf u}\right\|_{\star}$ defines a norm.
Then clearly
$$\left\|{\bf u}_{n} \right\|_{\$}=\left\|{\bf E}_{n}^{-1}{\bf u}_{n} \right\|_{\infty}=\left\|{\bf u}_{n} \right\|_{{\bf E}_{n}^{-1},\infty}\thinspace.
$$
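As a quick sanity check (not part of the original text; the vector, the dimension and the step size below are arbitrary), the identity $\left\|{\bf u}_{n} \right\|_{\$}=\left\|{\bf E}_{n}^{-1}{\bf u}_{n} \right\|_{\infty}$ can be verified numerically:
\begin{verbatim}
import numpy as np

n, h = 7, 0.1
u = np.random.randn(n)

spijker = h * np.max(np.abs(np.cumsum(u)))   # definition of the Spijker norm
E_inv = h * np.tril(np.ones((n, n)))         # the inverse of E_n
print(np.isclose(spijker, np.max(np.abs(E_inv @ u))))
\end{verbatim}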
\smallskip
It is known that weakly and strongly stable linear multistep methods are stable in the norm pair $\left( \left\|\cdot\right\|_{k\infty}\thinspace, \left\|\cdot\right\|_{k\infty}\right)$, cf. \cite{M1}. Moreover, strongly stable methods are stable in the $\left( \left\|\cdot\right\|_{k\infty}\thinspace, \left\|\cdot\right\|_{k\$}\right)$ norm pair, see \cite{M2}.
These are positive results and there is a natural question: are weakly stable methods stable in the $\left( \left\|\cdot\right\|_{k\infty}\thinspace, \left\|\cdot\right\|_{k\$}\right)$ norm pair or not? The following section is devoted to answering this question.
\section{Spijker's example and its extension}
First we recall Spijker's example, cf. \cite[Example 2 in Section 2.2.4]{S73}.
\begin{thm}\label{thm:S}
The explicit two-step midpoint method \eqref{midpoint} is not stable in the $\left( \left\|\cdot\right\|_{2\infty}\thinspace, \left\|\cdot\right\|_{2\$}\right)$ norm pair.
\end{thm}
For the sake of completeness we append the proof.
\begin{proof}
We focus on the explicit two-step midpoint method \eqref{midpoint} and rewrite it to fit into our framework. $k=2$ and
$F_N({\bf u}_N) =$
\begin{equation*}
\left( \begin{array}{ccccc}
1 & 0 & 0 & \ldots & 0 \\
0 & 1 & 0 & \ldots & 0 \\
-\frac{1}{2h} & 0 & \frac{1}{2h} & \ldots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & -\frac{1}{2h} & 0 & \frac{1}{2h}
\end{array} \right)
\left( \begin{array}{c}
u_0 \\
u_1 \\
u_2 \\
\vdots \\
u_{N}
\end{array} \right) -
\left( \begin{array}{ccccc}
0 & 0 & 0 & \ldots & 0 \\
0 & 0 & 0 & \ldots & 0 \\
0 & 1 & 0 & \ldots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 1 & 0
\end{array} \right)
\left( \begin{array}{c}
f_0 \\
f_1 \\
f_{2} \\
\vdots \\
f_{N}
\end{array} \right)-
\left( \begin{array}{c}
c_0 \\
c_1 \\
0 \\
\vdots \\
0
\end{array} \right)
\end{equation*}
using its matrix-vector form.
The goal is to show that this method is not stable in the $\left( \left\|\cdot\right\|_{2\infty}\thinspace, \left\|\cdot\right\|_{2\$}\right)$ norm pair i.e.
\begin{equation*}
\left\| {\bf u}_N- {\bf v}_N \right\|_{2\infty} \leq S \left\| F_N ({\bf u}_N)- F_N ({\bf v}_N) \right\|_{2\$}
\end{equation*}
does not hold.
Slightly modifying the original construction we define $f\equiv 0$, ${\bf v}_N= {\bf 0}$ and
\begin{equation*}
u_{l}=
\begin{cases}
&0\quad \mbox{, if}\quad l=0,1\\
&(l-1) (-1)^{l}\quad \mbox{, if}\quad l=2,\ldots, n+1
\end{cases}
\end{equation*}
thus, ${\bf u}_N= (0,0,1,-2,3,-4,\dots)^{T}$. With this choice
$$ \left\| {\bf u}_N- {\bf v}_N \right\|_{2\infty}=\left\| {\bf u}_{n}\right\|_{\infty}=n \quad \mbox{and} \quad \left\| F_N ({\bf u}_N)- F_N ({\bf v}_N) \right\|_{2\$}= \left\| {\bf A}_{n} {\bf u}_{n}\right\|_{\$},
$$
where we can calculate
$$ {\bf A}_{n} {\bf u}_{n}=\frac{1}{h}\thinspace\cdot\thinspace\left(\frac{1}{2}, -1, 1, -1, 1, \ldots\right)^{T}\quad \mbox{thus}\quad \left\| {\bf A}_{n} {\bf u}_{n}\right\|_{\$}=\frac{1}{2}\thinspace.
$$
This means that the stability estimate does not hold.
\end{proof}
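The construction in the proof is easy to try out numerically. The sketch below (an illustration added here, with $f\equiv 0$, ${\bf v}_N={\bf 0}$ and the arbitrary choice $T=1$) computes the ratio of the two sides of the stability estimate and shows that it grows proportionally to $n$, so no constant $S$ can work.
\begin{verbatim}
import numpy as np

def ratio(n, T=1.0):
    N = n + 1                                  # k = 2, indices 0,...,N
    h = T / N
    u = np.zeros(N + 1)
    u[2:] = (np.arange(2, N + 1) - 1) * (-1.0) ** np.arange(2, N + 1)
    F = np.zeros(N + 1)
    F[0:2] = u[0:2]                            # initial conditions, c = 0
    F[2:] = (u[2:] - u[:-2]) / (2.0 * h)       # interior residuals, f = 0
    norm_x = np.max(np.abs(u[0:2])) + np.max(np.abs(u[2:]))                 # ||u||_{2,inf}
    norm_y = np.max(np.abs(F[0:2])) + h * np.max(np.abs(np.cumsum(F[2:])))  # ||F(u)||_{2,$}
    return norm_x / norm_y

for n in (10, 100, 1000):
    print(n, ratio(n))                         # grows like 2n
\end{verbatim}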
In the following we present the extension of this result.
\begin{thm}\label{extS}
Weakly stable methods are not stable in the $\left( \left\|\cdot\right\|_{k\infty}\thinspace, \left\|\cdot\right\|_{k\$}\right)$ norm pair.
\end{thm}
\begin{proof}
We assume that the method is weakly stable, thus we assume that $|\xi_{2}|=1$, $\xi_{2}\neq 1$. We set $f\equiv 0$, ${\bf v}_N={\bf 0}$ and ${\bf u}_{N}=(0,\ldots,0,u_k,\ldots,u_{n+k-1})^{T}$. For this setting stability \eqref{Sstability} is simplified to
\begin{equation*}
\left\| {\bf u}_{n}\right\|_{\infty} \leq S \left\| {\bf A}_{n}{\bf u}_{n}\right\|_{\$}\thinspace.
\end{equation*}
For all $S$ and for all $n_0$ we will present a vector ${\bf u}_{n}$, $n>n_0$ for which
\begin{equation}\label{ford}
\left\| {\bf u}_{n}\right\|_{\infty} > S \left\| {\bf A}_{n}{\bf u}_{n}\right\|_{\$}\thinspace.
\end{equation}
Note that
\begin{equation*}
h{\bf A}_{n}=\alpha_0\prod\limits_{i=1}^{k}\left( {\bf I}-\xi_i{\bf H}_{n}\right)\thinspace,
\end{equation*}
where ${\bf I}\in\mathbb{R}^{n\times n}$ stands for the identity matrix, ${\bf H}_{n}\in \mathbb{R}^{n\times n}$ is defined as
$$
{\bf H}_{n}=
\left( \begin{array}{ccccc}
0 & 0 & \ldots & \ldots & 0 \\
1 & 0 & 0 & \ldots & 0 \\
0 & 1 & 0 & 0 & \ldots \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 1 & 0
\end{array} \right) \thinspace,
$$
and $\xi_i$, $i=1,\ldots,k$ are the roots of the first characteristic polynomial. This comes from the following calculation.
\begin{align*}
&h{\bf A}_{n}=\alpha_0{\bf I}+\alpha_1{\bf H}_{n}+\alpha_2{\bf H}_{n}^{2}+\ldots+\alpha_k{\bf H}_{n}^{k}= \alpha_k\prod\limits_{i=1}^{k}({\bf H}_{n}-\nu_i{\bf I})=\\
&\alpha_k(-1)^{k}\left( \prod\limits_{i=1}^{k}\nu_i\right) \prod\limits_{i=1}^{k}\left( {\bf I}-\dfrac{1}{\nu_i}{\bf H}_{n}\right) = \alpha_0\prod\limits_{i=1}^{k}\left( {\bf I}-\dfrac{1}{\nu_i}{\bf H}_{n}\right)= \alpha_0\prod\limits_{i=1}^{k}\left( {\bf I}-\xi_i{\bf H}_{n}\right)\thinspace,
\end{align*}
where we exploited the commutativity of the terms $\left( {\bf I}-\xi_i{\bf H}_{n}\right)$ and that $\xi_i=\frac{1}{\nu_i}$ since $\alpha_0+\alpha_1 z+\alpha_2 z^{2}+\ldots+\alpha_k z^{k}$ is the reciprocal polynomial of $\varrho$. This covers the case when $\forall \xi_i\neq 0$. If $\exists \xi_i= 0$ the modification of the calculation is straightforward.
Let us introduce ${\bf w}_{n}=(w_1,\ldots,w_{n})^{T}\in\mathbb{R}^{n}$, ${\bf w}_{n}={\bf E}_{n} {\bf u}_{n}$. With this \eqref{ford} is equivalent to
\begin{equation*}
\left\| {\bf w}_{n}\right\|_{\$}=\left\| {\bf E}_{n}^{-1}{\bf w}_{n}\right\|_{\infty}=\left\| {\bf u}_{n}\right\|_{\infty} > S \left\| \alpha_0\prod\limits_{i=2}^{k}\left( {\bf I}-\xi_i{\bf H}_{n}\right){\bf w}_{n}\right\|_{\$}\thinspace.
\end{equation*}
If $\xi_{2}=-1$ then
$$\left\| \prod\limits_{i=2}^{k}\left( {\bf I}-\xi_i{\bf H}_{n}\right){\bf w}_{n}\right\|_{\$}\leq
\left( \prod\limits_{i=3}^{k} \left\|\left( {\bf I}-\xi_i{\bf H}_{n}\right)\right\|_{\$}\right)
\left\| \left( {\bf I}-\xi_{2} {\bf H}_{n}\right){\bf w}_{n}\right\|_{\$}\leq
2^{k-1} \left\| \left( {\bf I}-\xi_{2} {\bf H}_{n}\right){\bf w}_{n}\right\|_{\$}\thinspace,
$$
since
$$ \left\|\left( {\bf I}-\xi_i{\bf H}_{n}\right)\right\|_{\$}=
\max_{\left\|{\bf u}\right\|_{\$}=1} \left\|\left( {\bf I}-\xi_i{\bf H}_{n}\right){\bf u}\right\|_{\$} \leq
1+ \max_{\left\|{\bf u}\right\|_{\$}=1} \left\|{\bf H}_{n}{\bf u}\right\|_{\$}\leq 2\thinspace.$$
Now, let us choose $w_m=m \thinspace \xi_{2}^{m}=m (-1)^{m}$.
$$\left( \left( {\bf I}-\xi_{2}{\bf H}_{n}\right){\bf w}_{n} \right)_{m}=\xi_{2}^{m}=(-1)^{m}\thinspace,
$$
thus its norm
$$h\max_{1\leq l\leq n}
\left| \sum\limits_{i=1}^{l} (-1)^{i} \right| \to 0\thinspace,
$$
as $h\to 0$, while
$$\left\| {\bf w}_{n}\right\|_{\$}=
h\max_{1\leq l\leq n} \left| \dfrac{l \xi_{2}^{l+1}}{\xi_{2}-1} - \dfrac{\xi_{2}^{l+1}-\xi_{2}}{ (\xi_{2}-1)^{2} } \right|=
h\max_{1\leq l\leq n} \left| \dfrac{l (-1)^{l}}{2} - \dfrac{(-1)^{l+1} + 1}{ 4 } \right|\geq
\dfrac{h(n-1)}{2} \to \dfrac{1}{2}\thinspace.
$$
\smallskip
Else $\xi_{2}=e^{i\varphi}$ with $0<\varphi<\pi$ and then $\xi_{3}=e^{-i\varphi}$. The right side can be estimated similarly as before:
\begin{equation*}
\begin{split}
&\left\| \prod\limits_{i=2}^{k}\left( {\bf I}-\xi_i{\bf H}_{n}\right){\bf w}_{n}\right\|_{\$}\leq
\left( \prod\limits_{i=4}^{k} \left\|\left( {\bf I}-\xi_i{\bf H}_{n}\right)\right\|_{\$}\right)
\left\| \left( {\bf I}-\xi_{2} {\bf H}_{n}\right)\left( {\bf I}-\xi_{3} {\bf H}_{n}\right){\bf w}_{n}\right\|_{\$}\leq \\
&2^{k-2} \left\| \left( {\bf I}-2\cos \varphi {\bf H}_{n} + {\bf H}_{n}^{2} \right){\bf w}_{n}\right\|_{\$}\thinspace.
\end{split}
\end{equation*}
Now, let us choose $w_m=m \thinspace\Re \xi_{2}^{m}$, where $\Re$ is the notation for the real part.
$$\left( \left( {\bf I}-2\cos \varphi {\bf H}_{n} + {\bf H}_{n}^{2} \right){\bf w}_{n} \right)_{m}=
\begin{cases}
\cos \varphi\thinspace, &\mbox{if } m=1\\
\cos m\varphi -\cos (m-2)\varphi\thinspace, &\mbox{if } m\geq 2
\end{cases}
$$
thus its norm
$$h\max_{1\leq l\leq n}
\left\lbrace \left|\cos \varphi\right|, \left|\cos l\varphi +\cos (l-1)\varphi -1\right| \right\rbrace \to 0\thinspace,
$$
as $h\to 0$, while $\xi_{2}^{l}$ is either periodic with period $\geq 3$ or dense on the unit circle which means that $\exists c>0$ such that
$$\left\| {\bf w}_{n}\right\|_{\$}=
h\max_{1\leq l\leq n} \left|\Re\left( \dfrac{l \xi_{2}^{l+1}}{\xi_{2}-1} - \dfrac{\xi_{2}^{l+1}-\xi_{2}}{ (\xi_{2}-1)^{2} }\right) \right|\geq
h\max_{1\leq l\leq n} l\left|\Re\left( \dfrac{\xi_{2}^{l+1}}{\xi_{2}-1} \right) \right| - \dfrac{2h}{ |\xi_{2}-1|^{2} }>c\thinspace,
$$
if $n$ is large enough. This proves the statement.
\end{proof}
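The factorization $h{\bf A}_{n}=\alpha_0\prod_{i=1}^{k}\left( {\bf I}-\xi_i{\bf H}_{n}\right)$ used in the proof is also easy to confirm numerically. The sketch below (added for illustration) checks it for the explicit two-step midpoint method, where $\alpha_0=\tfrac12$, $\alpha_1=0$, $\alpha_2=-\tfrac12$ and $\xi_{1,2}=\pm1$; the size $n$ is arbitrary.
\begin{verbatim}
import numpy as np

n = 8
H = np.diag(np.ones(n - 1), -1)        # the shift matrix H_n
alpha = [0.5, 0.0, -0.5]               # midpoint method coefficients
xi = [1.0, -1.0]                       # roots of the first characteristic polynomial

hA = sum(a * np.linalg.matrix_power(H, j) for j, a in enumerate(alpha))
prod = alpha[0] * np.eye(n)
for x in xi:
    prod = prod @ (np.eye(n) - x * H)
print(np.allclose(hA, prod))
\end{verbatim}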
\section{Concluding discussion}
We conclude the paper adding a critical remark. Although Theorem \ref{extS} clearly showed the difference between weakly and strongly stable LMMs the practical side of this result is not clear at all. Stability is only a partial achievement, no doubt an important one, however, we are mostly interested in the convergence of methods.
Simply speaking, the problem is the following. A weakly stable method is stable in the norm pair $\left( \left\|\cdot\right\|_{k\infty}\thinspace, \left\|\cdot\right\|_{k\infty}\right)$, resulting in convergence in the norm $\left\|\cdot\right\|_{k\infty}$. For a strongly stable method we obtain convergence in the same norm regardless of which type of stability ($\left( \left\|\cdot\right\|_{k\infty}\thinspace, \left\|\cdot\right\|_{k\infty}\right)$ or $\left( \left\|\cdot\right\|_{k\infty}\thinspace, \left\|\cdot\right\|_{k\$}\right)$) we use.
The profit is shifted to the consistency check.
Note that consistency in the norm $\left\|\cdot\right\|_{k\infty}$ with order $m$ implies consistency in the norm $\left\|\cdot\right\|_{k\$}$ with the same order $m$ or higher. This means that for strongly stable methods we have the freedom to check consistency in the $\left\|\cdot\right\|_{k\$}$ norm. It is a technical gain, see the tricky example \cite[Example 1 in Section 2.2.4]{S73}:
\begin{equation*}
\left(F_n({\bf u}_n)\right)_i =
\begin{cases}
& u_0-c_0 \quad \mbox{, if} \quad i=0 \thinspace, \\[8pt]
&\dfrac{u_i-u_{i-1}}{h}-f_{i-1} \quad \mbox{, if} \quad 1\leq i\leq n \quad \mbox{odd}\thinspace,\\[8pt]
&\dfrac{u_i-u_{i-1}}{h}-f_{i} \quad \mbox{, if} \quad 2\leq i\leq n \quad \mbox{even}\thinspace.
\end{cases}
\end{equation*}
This one-step method is consistent of order 2 with respect to the $\left\|\cdot\right\|_{1\$}$ norm. To get consistency of order 2 with respect to the $\left\|\cdot\right\|_{1\infty}$ norm is less straightforward (however, it is possible).
Consequently, this freedom is a technical gain, but unfortunately not more than that: we cannot win an order of consistency this way.
| {
"timestamp": "2018-08-30T02:01:14",
"yymm": "1808",
"arxiv_id": "1808.09485",
"language": "en",
"url": "https://arxiv.org/abs/1808.09485",
"abstract": "Strongly and weakly stable linear multistep methods can behave very differently. The latter class can produce spurious oscillations in some of the cases for which the former class works flawlessly. The main question is if we can find a well defined property which clearly tells the difference between them. There are many explanations from different viewpoints. We cite Spijker's example which shows that the explicit two step midpoint method is unstable with respect to the Spijker norm. We show that this result can be extended for the general weakly stable case.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Spijker's example and its extension",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587221857245,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7097978730228669
} |
https://arxiv.org/abs/1710.07007 | Quarter-Turn Baxter Permutations | Baxter permutations are known to be in bijection with a wide number of combinatorial objects. Previously, it was shown that each of these objects had a natural involution which was carried equivariantly by the known bijections, and the number of objects fixed under involution was given by Stembridge's $q=-1$ phenomenon. In this paper, we consider the order 4 action of a quarter-turn rotation of a Baxter permutation matrix, refining the half-turn rotation previously studied. Using the method of generating trees, we show that the number of Baxter permutations fixed under quarter-turn rotation has a very nice enumeration, which suggests the existence of a combinatorial bijection. | \section{Background}
\label{intro}
Baxter permutations are a well-studied class of permutations, which have a number of symmetries and nice properties associated to them.
\begin{definition}
We say that a \emph{Baxter permutation} is a permutation that avoids the patterns 3-14-2 and 2-41-3, where an occurrence of the pattern 3-14-2 in a permutation $w=w_1\ldots w_n$ means there exists a quadruple of indices $\{i,j,j+1,k\}$ with $i<j<j+1<k$ and $w_j< w_k< w_i< w_{j+1}$ (and similarly for 2-41-3)\footnote{Such patterns with prescribed adjacencies are sometimes called {\textit{vincular} patterns}.}.
\end{definition}
For $n=4$, there are $B(4)= 22$ Baxter permutations in $\mathfrak{S}_4$, with the only excluded ones being 2413 and 3142. The general formula for the number of Baxter permutations of length $n$ is given by
\[ B(n):=\sum_{k=0}^{n-1}\frac{\binom{n+1}{k}\binom{n+1}{k+1}\binom{n+1}{k+2}}{\binom{n+1}{1}\binom{n+1}{2}}, \]
and was originally proven by Chung, Graham, Hoggatt, and Kleiman~\cite{NumBax}.
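For small $n$ this formula is easy to check against a direct count; the Python sketch below (added for illustration; the pattern test is the naive one from the definition) prints both numbers.
\begin{verbatim}
from math import comb
from itertools import permutations

def baxter_formula(n):
    num = sum(comb(n + 1, k) * comb(n + 1, k + 1) * comb(n + 1, k + 2)
              for k in range(n))
    return num // (comb(n + 1, 1) * comb(n + 1, 2))

def is_baxter(w):
    n = len(w)
    # forbid the vincular patterns 2-41-3 and 3-14-2
    return not any(
        (w[j + 1] < w[i] < w[k] < w[j]) or (w[j] < w[k] < w[i] < w[j + 1])
        for j in range(n - 1) for i in range(j) for k in range(j + 2, n))

for n in range(1, 7):
    brute = sum(is_baxter(p) for p in permutations(range(1, n + 1)))
    print(n, baxter_formula(n), brute)   # 1, 2, 6, 22, 92, 422
\end{verbatim}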
It is easy to see from the definition that Baxter permutations will be closed under two natural involutions. One of them reverses the order of a word ($w=w_1\ldots w_n\mapsto w_n\ldots w_1$), and the other reverses the order of the labels ($w=w_1\ldots w_n \mapsto (n+1-w_1)\ldots (n+1-w_n)$). These correspond to reflecting a permutation matrix horizontally and vertically (respectively). A slightly less obvious fact is that Baxter permutations will be closed under taking inverses, which corresponds to reflecting the permutation matrix across a diagonal line. This means that Baxter permutations are closed under the full dihedral action of the square.
It is clear that the first two involutions individually will never have any fixed points for $n>1$.
The author has previously shown that the combination of the first two involutions (corresponding to a half-turn of the permutation matrix) is carried equivariantly to a natural rotation on other combinatorial objects, and that the enumeration of fixed points is an instance of the ``$q=-1$ phenomenon''~\cite{Dilks}.
Baxter permutations fixed under reflection across the diagonal correspond to self-involutive Baxter permutations, and these have previously been considered. The enumerative formula for the number of fixed-point free self-involutive Baxter permutations of length $2n$ has the surprisingly simple closed formula $b_n=\frac{3\cdot 2^{n-1}}{(n+1)(n+2)}\binom{2n}{n}$ through a bijection to planar maps \cite{fpfsi}. Later, Fusy extended this method to give a combinatorial proof of the enumeration for $b_n$, as well as a closed-form multivariate enumeration for all self-involutive Baxter permutations \cite{Fusy}.
The last remaining conjugacy class of dihedral actions on Baxter permutations is the one corresponding to $90^{\circ}$ rotation, which we now consider.
Our main result gives an enumeration of the number of Baxter permutations fixed under this quarter-turn rotation.
\begin{theorem}
\label{quarterturn}
The number of Baxter permutations of length $n$ fixed under $90^{\circ}$ rotation of its permutation matrix is $2^mC_m$ (where $C_m$ is the $m$-th Catalan number) if $n=4m+1$, and zero otherwise.
\end{theorem}
We will prove this using the method of generating trees. In Section~\ref{sec:GenTreePerm}, we will recall some background on generating trees for families of permutations. In Section~\ref{sec:GenTreeBax}, we will recall the results of Chung, Graham, Hoggatt, and Kleiman used to give the original enumeration of Baxter permutations~\cite{NumBax}. In Section~\ref{sec:GenTreeHalf}, we will extend these results to describe the generating tree for Baxter permutations fixed under a $180^{\circ}$ rotation. Then in Section~\ref{sec:GenTreeQuarter}, we will further extend this to describe the generating tree for Baxter permutations fixed under a $90^{\circ}$ rotation of the permutation matrix, and prove our main result.
\section{Generating trees for permutations}
\label{sec:GenTreePerm}
Say we have a family of permutations that is closed under removing the largest entry. Then every permutation in the family of length $n$ arises uniquely from taking a permutation of length $n-1$ in the family, and inserting the letter $n$ into an admissible position.
\begin{definition}
We say that the \emph{generating tree} of a family of permutations closed under removing the largest entry is the tree whose nodes are the permutations in the family, and the parent of each node is the permutation obtained by removing the largest entry.
\end{definition}
\noindent In many cases, one can obtain enumeration results by analyzing this tree.
The family of permutations avoiding some set of classical patterns is always closed under taking any subword (in particular, removing the largest element), so we can construct a generating tree.
\begin{example}
Consider the set of permutations that avoid the classical pattern $231$. It is not hard to see that, given a $231$-avoiding permutation, the only places one can insert a new largest label and still avoid the pattern $231$ are immediately to the left of a left-to-right maximum or at the end of the permutation. We say that $w_i$ is a left-to-right maximum of $w$ if $w_i>w_k$ for all $k<i$. So the number of children a permutation has in the generating tree depends only on this statistic.
Furthermore, inserting a new largest label has a predictable effect on the number of left-to-right maxima of the resulting permutation. If a permutation has $k+1$ left-to-right maxima, then it has $k+2$ children (one for each insertion position), and these children have $1,2,\ldots, k+2$ left-to-right maxima respectively. Equivalently, a permutation with $m$ insertion positions has children with $2,3,\ldots,m+1$ insertion positions.
Thus, labelling each permutation by its number of insertion positions (one more than its number of left-to-right maxima), we obtain an abstract tree with root labelled $2$ in which every node labelled $m$ has children labelled $2,3,\ldots, m+1$, and this tree is isomorphic to the generating tree for $231$-avoiding permutations. This tree is known as the \emph{Catalan tree}~\cite{WestGenTree}, and is known to have rank sizes given by the Catalan numbers. See Figures~\ref{231fig} and \ref{cattree} for the generating tree of $231$-avoiding permutations, and the associated Catalan tree.
Similarly, we could consider permutations that avoid the classical pattern $132$. In this case, the places where we can insert a new largest label are immediately to the right of a right-to-left maximum (where we say that $w_i$ is a right-to-left maximum of $w$ if $w_i>w_k$ for all $k>i$) or at the beginning of the permutation. Inserting a new largest label has the same predictable effect on the number of right-to-left maxima, and we again get a generating tree isomorphic to the Catalan tree.
\end{example}
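The rank sizes of the Catalan tree are easy to generate from the abstract branching rule alone; the sketch below (added for illustration, using the labelling of Figure~\ref{cattree} in which a node's label is its number of children) prints the first few of them.
\begin{verbatim}
from collections import Counter

# Root labelled 2; a node labelled m has m children labelled 2, 3, ..., m+1.
level = Counter({2: 1})
sizes = []
for _ in range(8):
    sizes.append(sum(level.values()))
    nxt = Counter()
    for m, cnt in level.items():
        for c in range(2, m + 2):
            nxt[c] += cnt
    level = nxt
print(sizes)   # 1, 2, 5, 14, 42, ... the Catalan numbers
\end{verbatim}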
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=63mm]
\tikzstyle{level 2}=[sibling distance=27mm]
\tikzstyle{level 3}=[sibling distance=8mm]
\node (z){1}
child {node (a) {21}
child {node (b) {321}
child {node {\small 4321}}
child {node {\small 3214}}
}
child {node (d) {213}
child {node {\small 4213}}
child {node {\small 2143}}
child {node {\small 2134}}
}
}
child {node (e) {12}
child {node (f) {312}
child {node {\small 4312}}
child {node {\small 3124}}
}
child {node (g) {132}
child {node {\small 4132}}
child {node {\small 1432}}
child {node {\small 1324}}
}
child {node (h) {123}
child {node {\small 4123}}
child {node {\small 1423}}
child {node {\small 1243}}
child {node {\small 1234}}
}
};
\end{tikzpicture}
}
\caption{The beginning of the generating tree for $231$-avoiding permutations.}
\label{231fig}
\end{figure}
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=59mm]
\tikzstyle{level 2}=[sibling distance=21mm]
\tikzstyle{level 3}=[sibling distance=5mm]
\node (z){2}
child {node (a) {2}
child {node (b) {2}
child {node {2}}
child {node {3}}
}
child {node (d) {3}
child {node {2}}
child {node {3}}
child {node {4}}
}
}
child {node (e) {3}
child {node (f) {2}
child {node {2}}
child {node {3}}
}
child {node (g) {3}
child {node {2}}
child {node {3}}
child {node {4}}
}
child {node (h) {4}
child {node {2}}
child {node {3}}
child {node {4}}
child {node {5}}
}
};
\end{tikzpicture}
}
\caption{The beginning of the Catalan tree}
\label{cattree}
\end{figure}
\section{Generating tree for Baxter permutations}
\label{sec:GenTreeBax}
Baxter permutations are defined by vincular patterns, where there are adjacency conditions to consider, so it is not immediately obvious that they are closed under removing the largest label.
\begin{lemma}
\label{remove}
If $w$ is a Baxter permutation, and we remove its largest label, then the result is still a Baxter permutation.
\end{lemma}
\begin{proof}
Say $w=w_1\ldots w_n$ is a Baxter permutation, and we remove $w_i=n$ to get $\bar{w}=w_1\ldots\hat{w_i}\ldots w_n$. If $\bar{w}$ is not a Baxter permutation, then WLOG say there is an instance of $2-41-3$. That means there are indices $1\leq i_1<i_2<i_3<i_4\leq n$ with $i_1,i_2,i_3,i_4\neq i$, $w_{i_3}<w_{i_1}<w_{i_4}<w_{i_2}$, and $i_2$ adjacent to $i_3$ in $\bar{w}$. The only way $i_2$ can be adjacent to $i_3$ in $\bar{w}$ is if $i_2+1=i_3$, or if $i_2+1=i$ and $i+1=i_3$. In the first case, the subsequence $i_1,i_2,i_3,i_4$ would be an instance of $2-41-3$ in $w$, a contradiction of our assumption. In the second case, the subsequence $i_1,i,i_3,i_4$ would be an instance of $2-41-3$ in $w$ (since $w_i=n$ is larger than every other entry), again a contradiction.
\end{proof}
Therefore, every Baxter permutation of length $n$ uniquely arises from taking a Baxter permutation of length $n-1$ and inserting $n$ into an admissible position.
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=59mm]
\tikzstyle{level 2}=[sibling distance=21mm]
\tikzstyle{level 3}=[sibling distance=5mm]
\node (z){1}
child {node (a) {21}
child {node (b) {321}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (c) {231}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (d) {213}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
}
child {node (e) {12}
child {node (f) {312}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (g) {132}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (h) {123}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
};
\end{tikzpicture}
\caption{The beginning of the generating tree for Baxter permutations}
\end{figure}
Chung, Graham, Hoggatt, and Kleiman \cite{NumBax} studied this generating tree to come up with their enumerative result. They showed that the admissible places where we can insert a new largest label into a Baxter permutation are immediately to the left of a left-to-right maxima, and immediately to the right of a right-to-left maxima.
The resulting Baxter permutation will also have a predictable number of left-to-right and right-to-left maxima. Say $w$ has left-to-right maxima $x_1<x_2<\ldots<x_i=n$ and right-to-left maxima $n=y_j>y_{j-1}>\ldots>y_1$. If we insert $n+1$ to the left of $x_k$, the resulting permutation will have left-to-right maxima $x_1<\ldots <x_{k-1}<n+1$, and right-to-left maxima $n+1>n=y_j>\ldots>y_1$. If we insert $n+1$ to the right of $y_k$, the resulting permutation will have left-to-right maxima $x_1<x_2<\ldots <x_i=n<n+1$, and right-to-left maxima $n+1>y_{k-1}>\ldots >y_1$.
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=21mm]
\node (A) {$ \cdot 312 \!\! \cdot \!\! 4 \!\! \cdot \!\! 8 \!\! \cdot \!\! 7 \!\! \cdot \!\! 56 \cdot $}
[level distance=30mm]
child {node {$ \cdot 9 \!\! \cdot \!\! 31248 \!\! \cdot \!\! 7 \!\! \cdot \!\! 56 \cdot $}}
child {node {$ \cdot 312 \!\! \cdot \!\! 9\!\! \cdot \!\!48\!\! \cdot \!\!7\!\! \cdot \!\!56 \cdot $}}
child {node {$ \cdot 312 \!\! \cdot \!\! 4 \!\! \cdot \!\! 9\!\! \cdot \!\!8\!\! \cdot \!\!7\!\! \cdot \!\!56 \cdot $}}
child {node {$ \cdot 312 \!\! \cdot \!\! 4 \!\! \cdot \!\! 8 \!\! \cdot \!\! 9 \!\! \cdot \!\! 7 \!\! \cdot \!\! 56 \cdot $}}
child {node {$ \cdot 312 \!\! \cdot \!\! 4 \!\! \cdot \!\! 87 \!\! \cdot \!\! 9 \!\! \cdot \!\! 56 \cdot $}}
child {node {$ \cdot 312 \!\! \cdot \!\! 4 \!\! \cdot \!\! 87569 \cdot $}}
;
\end{tikzpicture}
\caption{Branching of generating tree for Baxter permutations at $w=31248756$, with insertion points marked.}
\end{figure}
This means that the number of children a given Baxter permutation has (and how many children those children will have, and so on) is entirely encoded by the number $i$ of left-to-right maxima, and the number $j$ of right-to-left maxima. Thus, the tree with root $(1,1)$, and the property that every node $(i,j)$ has children $(1,j+1), (2,j+1),\ldots (i,j+1),(i+1,j), (i+1,j-1), \ldots (i+1,1)$ will be isomorphic to the generating tree for Baxter permutations.
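The rank sizes of this tree can be generated directly from the rule; the short sketch below (an illustration, not part of the original argument) recovers the Baxter numbers.
\begin{verbatim}
from collections import Counter

# Root (1,1); node (i,j) has children
# (1,j+1), ..., (i,j+1), (i+1,j), (i+1,j-1), ..., (i+1,1).
level = Counter({(1, 1): 1})
sizes = []
for _ in range(7):
    sizes.append(sum(level.values()))
    nxt = Counter()
    for (i, j), cnt in level.items():
        for a in range(1, i + 1):
            nxt[(a, j + 1)] += cnt
        for b in range(1, j + 1):
            nxt[(i + 1, b)] += cnt
    level = nxt
print(sizes)   # 1, 2, 6, 22, 92, 422, 2074
\end{verbatim}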
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=16mm]
\node (A) {$(i,j)$}
[level distance=30mm]
child {node {$(1,j+1)$}}
child {node {$(2,j+1)$}}
child {node {$\ldots$}}
child {node {$(i,j+1)$}}
child {node {$(i+1,j)$}}
child {node {$(i+1,j-1)$}}
child {node {$\ldots$}}
child {node {$(i+1,1)$}}
;
\end{tikzpicture}
\caption{Rule for generating tree isomorphic to Baxter permutations}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=59mm]
\tikzstyle{level 2}=[sibling distance=21mm]
\tikzstyle{level 3}=[sibling distance=5mm]
\node (z){$(1,1)$}
child {node (a) {$(1,2)$}
child {node (b) {$(1,3)$}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (c) {$(2,2)$}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (d) {$(2,1)$}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
}
child {node (e) {$(2,1)$}
child {node (f) {$(1,2)$}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (g) {$(2,2)$}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
child {node (h) {$(3,1)$}
child {node {$\vdots$}}
child {node {$\vdots$}}
child {node {$\vdots$}}
}
};
\end{tikzpicture}
\caption{The beginning of the generating tree isomorphic to the Baxter permutation generating tree.}
\end{figure}
One interesting consequence of the generating tree approach gives a non-obvious relationship between two permutation statistics on Baxter permutations.
\begin{corollary}
\label{desisinvdes}
Baxter permutations have the same number of descents as inverse descents.
\end{corollary}
\begin{proof}
If $n+1$ is inserted to the left of a left-to-right maximum, then either $n+1$ is being added to the front of the word, or it is being inserted into an ascent. In either case, the resulting permutation will have one more descent than the original one. But since $n$ is always the rightmost left-to-right maximum, $n+1$ is being inserted to the left of $n$, so we are also creating one new inverse descent. Similarly, $n+1$ being inserted to the right of a right-to-left maximum creates neither new descents nor new inverse descents. Since the act of inserting a new largest label preserves the difference between the number of descents and the number of inverse descents, and the base of our generating tree has the same number of descents as inverse descents, all permutations in the generating tree have the same number of descents as inverse descents.
\end{proof}
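This can also be confirmed by brute force for small $n$ (an illustrative sketch; the Baxter test is the naive check of the two vincular patterns).
\begin{verbatim}
from itertools import permutations

def is_baxter(w):
    n = len(w)
    return not any(
        (w[j + 1] < w[i] < w[k] < w[j]) or (w[j] < w[k] < w[i] < w[j + 1])
        for j in range(n - 1) for i in range(j) for k in range(j + 2, n))

def descents(w):
    return sum(w[i] > w[i + 1] for i in range(len(w) - 1))

def inverse(w):
    inv = [0] * len(w)
    for pos, val in enumerate(w):
        inv[val - 1] = pos + 1
    return tuple(inv)

for n in range(1, 7):
    assert all(descents(w) == descents(inverse(w))
               for w in permutations(range(1, n + 1)) if is_baxter(w))
print("descents = inverse descents for every Baxter permutation with n <= 6")
\end{verbatim}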
\section{Generating tree for Baxter permutations fixed under $180^{\circ}$ rotation}
\label{sec:GenTreeHalf}
Now, let us consider the generating tree of all Baxter permutations fixed under $180^{\circ}$ rotation.
A permutation $w$ of length $n$ is fixed under $180^{\circ}$ rotation precisely when $w_i=j$ implies $w_{n+1-i}=n+1-j$; equivalently, $w=w_0ww_0$, where $w_0$ denotes the longest element, so these are exactly the permutations fixed under conjugation by the longest element. By the same logic as in Lemma~\ref{remove}, we can remove 1 from a Baxter permutation (and decrease all remaining labels by one) and still have a Baxter permutation.
Combining these two things, we can see that if we remove $n$ and 1 (and then decrease all the labels by 1) from a Baxter permutation fixed under $180^{\circ}$ rotation, then we will still have a Baxter permutation fixed under $180^{\circ}$ rotation. So again, we can construct a generating tree.
Note that in this case, we are removing two entries at a time, so we will have separate generating trees for when $n$ is even and when $n$ is odd. We will use the convention that the generating tree for $n$ even has the empty permutation $\emptyset$ of length 0 as its root, with children $12$ and $21$.
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=59mm]
\tikzstyle{level 2}=[sibling distance=12mm]
\tikzstyle{level 3}=[sibling distance=5mm]
\node (z){1}
child {node (a) {321}
child {node (b) {54321}}
child {node (c) {45312}}
child {node (d) {41352}}
child {node (e) {14325}}
}
child {node (f) {123}
child {node (g) {52341}}
child {node (h) {25314}}
child {node (i) {21354}}
child {node (j) {12345}}
};
\end{tikzpicture}
\caption{Generating tree for Baxter permutations of odd length fixed under conjugation by the longest element}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=59mm]
\tikzstyle{level 2}=[sibling distance=21mm]
\tikzstyle{level 3}=[sibling distance=5mm]
\node (z){$\emptyset$}
child {node (a) {21}
child {node (b) {4321}}
child {node (c) {3412}}
child {node (d) {1324}}
}
child {node (f) {12}
child {node (g) {4231}}
child {node (h) {2143}}
child {node (i) {1234}}
};
\end{tikzpicture}
\caption{Generating tree for Baxter permutations of even length fixed under $180^{\circ}$ rotation of permutation matrix.}
\end{figure}
We already have a combinatorial rule for when we can insert a new largest entry into a Baxter permutation and still be a Baxter permutation, so now we come up with a combinatorial rule for when we can insert a new smallest entry into a Baxter permutation and still be a Baxter permutation. By inserting a new smallest entry, we mean that we increase all the labels in the existing permutation by 1, and then insert a new entry with label 1, so that if the original permutation was a standard permutation of $n$ letters on $[n]$, then the result will be a standard permutation on $[n+1]$.
\begin{lemma}
\label{smallinsertion}
Inserting a new smallest label at position $j$ of a permutation is equivalent to rotating the permutation matrix $180^{\circ}$, inserting a new largest label into position $n+1-j$, and then rotating the permutation matrix $180^{\circ}$ again.
Consequently, given a Baxter permutation $w$, the admissible places we can insert a new smallest label are immediately to the left of a left-to-right minimum, or immediately to the right of a right-to-left minimum.
\end{lemma}
\begin{proof}
We use the fact that $w$ is a Baxter permutation if and only if it is a Baxter permutation when we reverse the labels (i.e., send $i$ to $n+1-i$). Then if we take $w$, reverse the labels, insert a new largest label into position $i$, and then reverse the labels again, this is equivalent to inserting 1 into position $i$. Thus, we can insert 1 into position $i$ if and only if we can insert $n+1$ into position $i$ in the reversed word. This happens if and only if position $i$ is immediately to the left of a left-to-right maximum, or immediately to the right of a right-to-left maximum in the reversed word. This happens if and only if position $i$ is immediately to the left of a left-to-right minimum, or immediately to the right of a right-to-left minimum in the original word.
\end{proof}
Now, we want to make the generating tree using this insertion rule.
\begin{theorem}
The generating tree for Baxter Permutations fixed under $180^{\circ}$ rotation of even (resp. odd) length is isomorphic to the tree with root $(0,0)$ (resp. $(1,1)$), and branching rule given by $(i,j)$ having children $(1,j+2), (2, j+1), \ldots (i,j+1), (i+1,j),\ldots (i+1,2), (i+2,1)$.
\end{theorem}
\begin{proof}
The proof of Lemma~\ref{smallinsertion} makes it clear that if we can insert $n$ into position $i$ of $w$, then we can insert $1$ into position $n+1-i$ of $w_0ww_0$. So if $w$ is fixed under conjugation by the longest element, then we can insert $n+1$ into a position if and only if we can insert $1$ into the complementary position. However, we need to check to make sure that we can still insert $1$ into a complementary position after we have inserted $n+1$.
If $i$ is less than $n/2$, then the complementary place we want to insert 1 will be shifted right by 1. If $i$ is greater than $n/2$, then the complementary place we want to insert 1 will still be $n+1-i$. In either of these cases, the act of inserting $n$ will not affect being able to insert 1 into the complementary position, because the combinatorial rule for inserting 1 depends on things being left-to-right and right-to-left minima, and inserting a new largest label will not affect that.
As a kind of boundary case, we have the situation where $n$ is even and $i=n/2$, so we are inserting $n+1$ into the middle of the word. Then we could insert 1 either immediately to the left or right of $n+1$ and still have a permutation fixed under conjugation by the longest element. However, exactly one of these choices will correspond to a Baxter permutation.
Without loss of generality, say we inserted $n+1$ to the right of a right-to-left maxima, $w_{n/2}$. Then $w_{n/2+1}$ will be a left-to-right minima, even after we insert $n+1$. So we can insert 1 to the left of $w_{n/2+1}$, which will be immediately to the right of $n+1$. Since $w_{n/2}$ is a right-to-left maxima we have $w_{n/2}>w_{n/2+1}$, and otherwise the subword $w_{n/2}(n+1)1w_{n/2+1}$ would be a copy of the vincular pattern $2-41-3$. Thus, if we inserted 1 on the other side of $n+1$, we would be making a subword that was an instance of the forbidden vincular pattern $3-14-2$.
Now, we want to make an isomorphic generating tree that doesn't require us to keep track of the permutation in full, analogous to what we did with all Baxter permutations.
Again, we only need to keep track of the number of left-to-right and right-to-left maxima. Each of these corresponds to a place where we can insert $n+1$, and then we know there will be a complementary place we can insert 1 to stay fixed under conjugation by the longest element. We know how inserting $n+1$ will affect the number of left-to-right and right-to-left maxima. Inserting 1 will in general not create any new left-to-right or right-to-left maxima, except in the case where we are adding 1 to the beginning or end of the word. Thus, we get the desired branching rule (see Figure~\ref{fig:conjbranch}).
\end{proof}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=15mm]
\node (A) {$(i,j)$}
[level distance=30mm]
child {node {$(1,j\!+\!2)$}}
child {node {$(2,j\!+\!1)$}}
child {node {$\ldots$}}
child {node {$(i,j\!+\!1)$}}
child {node {$(i\!+\!1,j)$}}
child {node {$(i\!+\!1,j\!-\!1)$}}
child {node {$\ldots$}}
child {node {$(i\!+\!2,1)$}}
;
\end{tikzpicture}
\caption{Rule for generating tree isomorphic to Baxter permutations fixed under $180^{\circ}$ rotation of permutation matrix.}
\label{fig:conjbranch}
\end{figure}
In principle, one could try and analyze this branching rule to come up with an algebraic formula for the number of Baxter permutations fixed under conjugation by the longest element. While it may give a refined enumeration for the number of Baxter permutations fixed under conjugation by the longest element with a given number of left-to-right and right-to-left maxima, it is unlikely that the resulting expression for the entire set would be as elegant as the ``$q=-1$'' formula in \cite{Dilks}. In practice, this is more of a stepping stone to the case of Baxter permutations fixed under $90^{\circ}$ rotation, where the rules for insertion are more technical, but the resulting branching structure has a transparent enumerative formula.
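Nevertheless, the rank sizes of the two trees are easy to generate from the rule of Figure~\ref{fig:conjbranch}; the sketch below (added for illustration) lists the number of Baxter permutations fixed under the half-turn by length.
\begin{verbatim}
from collections import Counter

# Rule: (i,j) -> (1,j+2), (2,j+1),...,(i,j+1), (i+1,j),...,(i+1,2), (i+2,1).
def children(i, j):
    return ([(1, j + 2)]
            + [(a, j + 1) for a in range(2, i + 1)]
            + [(i + 1, b) for b in range(j, 1, -1)]
            + [(i + 2, 1)])

def sizes(root, levels):
    level, out = Counter({root: 1}), []
    for _ in range(levels):
        out.append(sum(level.values()))
        nxt = Counter()
        for node, cnt in level.items():
            for kid in children(*node):
                nxt[kid] += cnt
        level = nxt
    return out

print(sizes((1, 1), 6))   # lengths 1, 3, 5, 7, ...  (odd case)
print(sizes((0, 0), 6))   # lengths 0, 2, 4, 6, ...  (even case)
\end{verbatim}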
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=59mm]
\tikzstyle{level 2}=[sibling distance=12mm]
\tikzstyle{level 3}=[sibling distance=5mm]
\node (z){$(1,1)$}
child {node (a) {$(1,3)$}
child {node (b) {$(1,5)$}}
child {node (c) {$(2,3)$}}
child {node (d) {$(2,2)$}}
child {node (e) {$(3,1)$}}
}
child {node (f) {$(3,1)$}
child {node (g) {$(1,3)$}}
child {node (h) {$(2,2)$}}
child {node (i) {$(3,2)$}}
child {node (j) {$(5,1)$}}
};
\end{tikzpicture}
\caption{Isomorphic generating tree for Baxter permutations of odd length fixed under $180^{\circ}$ rotation of permutation matrix.}
\end{center}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=59mm]
\tikzstyle{level 2}=[sibling distance=21mm]
\tikzstyle{level 3}=[sibling distance=5mm]
\node (z){$(0,0)$}
child {node (a) {$(1,2)$}
child {node (b) {$(1,4)$}}
child {node (c) {$(2,2)$}}
child {node (d) {$(3,1)$}}
}
child {node (f) {$(2,1)$}
child {node (g) {$(1,3)$}}
child {node (h) {$(2,2)$}}
child {node (i) {$(4,1)$}}
};
\end{tikzpicture}
\caption{Isomorphic generating tree for Baxter permutations of even length fixed under conjugation by the longest element}
\end{figure}
\section{Generating tree for Baxter permutations fixed under $90^{\circ}$ rotation.}
\label{sec:GenTreeQuarter}
Now, we generalize the approach we used for Baxter permutations fixed under $180^{\circ}$ rotation to those fixed under $90^{\circ}$ rotation.
First, we determine for which values of $n$ a Baxter permutation of length $n$ can possibly be fixed under $90^{\circ}$ rotation.
\begin{lemma}
If a permutation of length $n$ is fixed under $90^{\circ}$ rotation, then $n$ must be $4m$ or $4m+1$ for some nonnegative integer $m$.
\end{lemma}
\begin{proof}
For a permutation of length $n$ to be fixed under $90^{\circ}$ rotation, it is equivalent to say that if $w_i=j$, then $w_j=n+1-i$, $w_{n+1-i}=n+1-j$, and $w_{n+1-j}=i$. If we consider the cycle structure of this permutation, in general it makes a 4-cycle $(i, j, n+1-i, n+1-j)$. If this were to degenerate into a smaller cycle, we would have that $i=n+1-i$. This forces $n=2i-1$ to be odd, and it also forces $i=j$, which means it actually degenerates to a single central fixed point. Thus, a permutation fixed under this action must have length $4m$ or $4m+1$, either consisting solely of 4-cycles, or 4-cycles and a single central fixed point.
\end{proof}
\begin{lemma}
If $w$ is a Baxter permutation of length $n$ fixed under $90^{\circ}$ rotation, then $n$ must be odd.
\end{lemma}
\begin{proof}
If $w$ is a Baxter permutation of length $n$ with $k$ descents, then by Corollary~\ref{desisinvdes}, $w^{-1}$ will have $k$ descents, and $w_0w^{-1}$ will have $n-1-k$ descents. So for a Baxter permutation to be fixed by this action, we must have $k=n-1-k$, which implies that $n$ must be odd.
\end{proof}
\begin{corollary}
\label{Length4PlusOne}
If $w$ is a Baxter permutation of length $n$ fixed under $90^{\circ}$ rotation, then $n=4m+1$ for some nonnegative integer $m$.
In particular, a Baxter permutation fixed under $90^{\circ}$ rotation will consist of a single central fixed point, and 4-cycles of the form $(i, j, n+1-i, n+1-j)$.
\end{corollary}
For $n>1$, such a permutation will have a 4-cycle of the form $(1, j, n, n+1-j)$, which means the permutation starts with $j$, has $n$ in the $j^{th}$ position, 1 in the $(n+1-j)^{th}$ position, and $n+1-j$ at the end. We already know that we can remove $n$ and $1$ from a Baxter permutation and still have a Baxter permutation. It is not hard to see that we can also remove the first element or the last element from a Baxter permutation and still have a Baxter permutation (after reducing labels so we're still a permutation on $[n]$). So if we take a Baxter permutation fixed under $90^{\circ}$ rotation and then remove the largest label, the smallest label, the first label, and the last label, then we will still have a Baxter permutation, and it will still be fixed under $90^{\circ}$ rotation.
Thus, we can create a generating tree, with the identity permutation on 1 element as the root.
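As a sanity check on Theorem~\ref{quarterturn} (a brute-force sketch added for illustration, feasible only for very small $n$), one can count these permutations directly.
\begin{verbatim}
from itertools import permutations

def is_baxter(w):
    n = len(w)
    return not any(
        (w[j + 1] < w[i] < w[k] < w[j]) or (w[j] < w[k] < w[i] < w[j + 1])
        for j in range(n - 1) for i in range(j) for k in range(j + 2, n))

def quarter_fixed(w):
    n = len(w)
    # fixed under the quarter turn: w_i = j implies w_j = n+1-i
    return all(w[w[i] - 1] == n - i for i in range(n))

for n in (1, 5):
    count = sum(1 for w in permutations(range(1, n + 1))
                if quarter_fixed(w) and is_baxter(w))
    print(n, count)   # expected 2^m * C_m with n = 4m+1, i.e. 1 and 2
\end{verbatim}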
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=30mm]
\tikzstyle{level 2}=[sibling distance=6mm]
\node (z){1}
[grow=right, level distance=35mm]
child { node (a) {41352}
child {node (b) {296357418}}
child {node (c) {672159834}}
child {node (d) {761258943}}
child {node (e) {816357492}}
}
child { node (f) {25314}
child {node (g) {294753618}}
child {node (h) {349852167}}
child {node (i) {438951276}}
child {node (j) {814753692}}
};
\end{tikzpicture}
\end{center}
\caption{Start of generating tree for Baxter permutations fixed under $90^{\circ}$ rotation (drawn left-to-right).}
\end{figure}
In order to create a 4-cycle, we have to come up with a combinatorial rule for when we can insert a letter at the beginning (resp. end) of a Baxter permutation, and still have it be a Baxter permutation. To insert a letter $j$ at the beginning of a permutation $w$ of length $n$, we mean that we increase all the labels greater than or equal to $j$ in $w$ by 1, and then prepend $j$, so the result is a standard permutation on $[n+1]$.
\begin{lemma}
\label{endinsertion}
Inserting $j$ at the end of a word is equivalent to rotating the permutation matrix $90^{\circ}$ clockwise, inserting a new largest label into position $n+1-j$, and then rotating the permutation matrix $90^{\circ}$ counter-clockwise.
Similarly, inserting $j$ at the beginning of a word is equivalent to rotating a permutation matrix $90^{\circ}$ counter-clockwise, inserting $n$ into position $j$, and then rotating back $90^{\circ}$ clockwise.
Consequently, we can insert $j$ at the end (resp. beginning) of a Baxter permutation and still have it be a Baxter permutation if and only if all entries smaller than $j$ appear to the left (resp. right) of $j$, or if all entries bigger than $j-1$ appear to the left (resp. right) of $j-1$.
\end{lemma}
\begin{proof}
Before, we thought of inserting 1 as reversing the labels of a permutation, inserting $n$, and reversing the word again. In this case, we can see that inserting $j$ at the end of a word is the same as taking the inverse of a permutation, inserting $n$ into the $j^{th}$ position, and then taking the inverse of the resulting permutation.
So we can append $j$ to a Baxter permutation and still have a Baxter permutation if and only if we can insert a new largest label into position $j$ in $w^{-1}$. We can do this if and only if position $j$ is immediately to the left of a left-to-right maximum, or immediately to the right of a right-to-left maximum, which means that either $w^{-1}_j>w^{-1}_i$ for all $i<j$, or $w^{-1}_{j-1}>w^{-1}_i$ for all $i>j-1$.
Again by symmetry, we can insert $j$ at the beginning of a Baxter permutation $w$ if and only if we can insert $j$ at the end of $w$ reversed, which leads to a similar combinatorial description with left and right interchanged.
\end{proof}
Note that inserting $j$ at the end (resp. beginning) of a Baxter permutation can possibly decrease the number of right-to-left (resp. left-to-right) maxima, as any previous right-to-left (resp. left-to-right) maxima that was less than $j$ will no longer be one after $j$ is inserted at the end (resp. beginning).
\begin{theorem}
\label{QuadInsertion}
Let $w$ be a Baxter permutation fixed under $90^{\circ}$ rotation. For every admissible position in which a new largest label can be inserted so that the result is still a Baxter permutation, it is also possible to insert a new smallest label, a new beginning label, and a new final label so that the result is a Baxter permutation fixed under $90^{\circ}$ rotation.
\end{theorem}
\begin{proof}
Without loss of generality, assume we are inserting $n+1$ to the right of a right-to-left maxima. The procedure for when we can insert $n+1$ to the left of a left-to-right maxima is the same, except we reverse the order of the word, follow the procedure for inserting $n+1$ to the right of a right-to-left maxima, and then reverse the order of the resulting word.
Say $w$ is a Baxter permutation of length $n$ fixed under $90^{\circ}$ rotation, with a right-to-left maxima at $w_j$. This means that we could insert $n+1$ into position $j+1$, and by Lemma~\ref{smallinsertion} we could also insert 1 into position $n-j$, and by Lemma~\ref{endinsertion} we could insert $n-j$ at the end or $j+1$ at the beginning. Specifically, since we know that we'd be inserting $n+1$ to the right of a right-to-left maxima, we know that $w_{n+1-j}$ must be a left-to-right minima, and that all entries larger than $j$ appear to the right of $j$, and that all entries smaller than $n-j$ appear to the left of $n-j$. We also know that as a right-to-left maxima, $w_j$ must be at least $n-j$, since there are $n-j$ things to its right that must be smaller.
But we need to check that we can perform all four insertions sequentially in a way so that the result is a Baxter permutation fixed under $90^{\circ}$ rotation with a new 4-cycle added.
We will separately consider the cases with $j+1<n/2$, and $j+1>n/2$. Note that since $n$ has to be odd, we don't have to deal with the special case of $j+1=n/2$.
First, suppose $j+1<n/2$. We insert $j+1$ at the beginning first, which increases any labels that were $j+1$ or higher by 1. So now, we want to insert $n+1-j$ at the end. We have to check that all labels less than $n+1-j$ are to the left of $n+1-j$. Since in the original permutation we had that all labels less than $n-j$ were to the left of $n-j$, when we add 1 to all labels $j+1$ or higher, we will have that all labels less than $n+1-j$ (except possibly $j+1$) are to the left of $n+1-j$. But since we add $j+1$ to the beginning of the word, it will also certainly be to the left of $n+1-j$. So we may insert $(n+1-j)$ at the end.
We now have a permutation of length $n+2$, which will still be fixed under $180^{\circ}$ rotation. So by the previous section, if we can insert a new largest label into some position, we know we can insert a new smallest label into the complementary position. The $(j+1)^{st}$ entry in this permutation will be $w_j+2$, as inserting two smaller labels increased its label by 2, and inserting a label at the beginning shifted it right by one. We need to check that this is still a right-to-left maxima. The only thing we did that could have changed this is inserting $n+1-j$ at the end. However, since $w_j\geq n-j$, we have $w_j+2\geq n+2-j$, so adding $n+1-j$ will not keep it from being a right-to-left-maxima. Thus, we can insert a new largest label into position $j+2$, and also a new smallest label into the complementary position $(n+1-j)$.
After all of these steps, we will now have $j+2$ in the first position, $n+4$ in position $j+2$, 1 in position $(n+4)-(j+2)$, and $(n+2-j)$ at the end, which creates the desired 4-cycle.
Now, suppose $j+1>n/2$. Again, we insert $j+1$ at the beginning. Now, we want to insert $n-j$ at the end. Inserting $j+1$ will not affect any labels $n-j$ or smaller, so we will still have that all labels less than $n-j$ are to the left of $n-j$.
Again, we have a permutation of length $n+2$ fixed under $180^{\circ}$ rotation, so it suffices to show we can place a new largest label, and it will automatically follow that a new smallest label can go in the complementary position. Consider the $(j+1)^{st}$ entry of this permutation, which was originally $w_j$. We claim this is still a right-to-left maxima. The only thing that could have changed this fact is inserting $n-j$ at the end. Since $w_j\geq n-j$, this label would at least be increased by 1 when we inserted $n-j$. It could possibly also be increased by 1 when we inserted $j+1$, but what's important is that the $(j+1)^{st}$ entry is at least $n+1-j$, and thus having $n-j$ at the end will not prevent it from being a right-to-left maxima.
After all of these steps, we will now have $j+3$ in the first position, $n+4$ in position $j+3$, 1 in position $(n+4)-(j+3)$, and $n+1-j$ at the end, which creates the desired 4-cycle.
\end{proof}
Now, we want to analyze how these four insertions change the number of left-to-right and right-to-left maxima.
\begin{lemma}
\label{LeftIsRight}
If $w$ is a Baxter permutation fixed under $90^{\circ}$ rotation, then $w$ has the same number of left-to-right and right-to-left maxima. In particular, if $w$ has left-to-right maxima in positions $x_1<x_2<\ldots <x_j$ and right-to-left maxima at positions $y_j<y_{j-1}<\ldots<y_1$, and we do a 4-cycle insertion corresponding to being able to insert a new largest label to the right of $w_{y_i}$(or to the left of $w_{x_i}$), then the resulting Baxter permutation fixed under $90^{\circ}$ rotation will have $i+1$ left-to-right maxima and $i+1$ right-to-left maxima.
\end{lemma}
\begin{proof}
We note that $w_j<w_i$ if and only if $j$ appears to the left of $i$ in $w^{-1}$ if and only if $i$ appears to the left of $j$ in $w_0w^{-1}$. Thus, if $w=w_0w^{-1}$, we have a right-to-left maxima in position $j$ if and only if $j$ is a left-to-right maxima (and similarly, a left-to-right maxima in position $j$ if and only if $j$ is a right-to-left maxima). So if $w$ is fixed by this action, it must have the same number of right-to-left maxima as left-to-right maxima.
Additionally, this gives a bijection between right-to-left maxima that were originally in $w$ and are later killed by $n+3$, and left-to-right maxima that are killed by the $j+2$ or $j+3$ at the beginning of the word. Similarly, there is a bijection between right-to-left maxima originally in $w$ that are later killed by the final entry, and left-to-right maxima originally in $w$ later killed by 1.
Since 1 always ends up on the interior of the word, it will never be a left-to-right maxima, and so the final entry will also never kill anything that was originally a right-to-left maxima. Since we (WLOG) did an insertion corresponding to putting a new largest label to the right of $w_{y_i}$, $n+3$ will kill the right-to-left maxima $w_{y_{i}},\ldots, w_{y_{j}}$. Thus, we will have $i+1$ right-to-left maxima: the new right-most entry, the $i-1$ original right-to-left maxima not killed by $n+3$, and $n+3$.
\end{proof}
We now have enough information to analyze the generating tree for Baxter permutations fixed under rotation by $90^{\circ}$. If a Baxter permutation fixed under rotation by $90^{\circ}$ has $i+1$ left-to-right maxima and $i+1$ right-to-left maxima, then it will have $2i+2$ children. There will be $i+1$ children with number of left-to-right (and right-to-left) maxima being $2,3,\ldots i+2$ corresponding to inserting a new largest label to the left of a left-to-right maxima, and $i+1$ children with number of left-to-right (and right-to-left) maxima being $2,3,\ldots i+2$ corresponding to inserting a new largest label to the right of a right-to-left maxima.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=25mm]
\tikzstyle{level 2}=[sibling distance=4mm]
\node (z){1}
child {node (a) {2}
child {node (a1) {2}}
child {node (a2) {2}}
child {node (a3) {3}}
child {node (a4) {3}}
}
child { node (b) {2}
child {node (b1) {2}}
child {node (b2) {2}}
child {node (b3) {3}}
child {node (b4) {3}}
};
\end{tikzpicture}
\end{center}
\caption{The beginning of the doubled Catalan tree}
\label{doublecattree}
\end{figure}
We may now prove the main result.
\begin{MainResult}
The number of Baxter permutations of length $n$ fixed under $90^{\circ}$ rotation of the permutation matrix is $2^mC_m$ (where $C_m$ is the $m$-th Catalan number) if $n=4m+1$, and zero otherwise.
\end{MainResult}
\begin{proof}
By Corollary~\ref{Length4PlusOne}, $n$ must be $4m+1$ for some integer $m$.
Now, we consider the generating tree on Baxter permutations fixed under $90^{\circ}$ rotation. By Theorem~\ref{QuadInsertion}, if a Baxter permutation fixed under rotation by $90^{\circ}$ has $i+1$ left-to-right maxima and $i+1$ right-to-left maxima, then it will have $2i+2$ children. By Lemma~\ref{LeftIsRight}, there will be $i+1$ children with number of left-to-right (and right-to-left) maxima being $2,3,\ldots i+2$ corresponding to inserting a new largest label to the left of a left-to-right maxima, and $i+1$ children with number of left-to-right (and right-to-left) maxima being $2,3,\ldots i+2$ corresponding to inserting a new largest label to the right of a right-to-left maxima.
Thus, we can identify a Baxter permutation fixed under $90^{\circ}$ rotation with $i+1$ left-to-right maxima (and thus $i+1$ right-to-left maxima) with the number $i+1$, and its children will be identified with the numbers $2,2,3,3,\ldots i+2,i+2$.
This generating tree is almost like the Catalan tree, except each parent with label $i+1$ has two (rather than one) children with each label between $2$ and $i+2$, and our root has label 1. This implies that the number of elements of a given rank $m$ must be $2^mC_m$. See Figure~\ref{doublecattree}.
\end{proof}
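As a quick numerical illustration of this counting argument, the succession rule above (a node labelled $i+1$ has children labelled $2,2,3,3,\ldots,i+2,i+2$, and the root is labelled $1$) can be iterated directly, and the level sizes it produces are exactly $2^mC_m$. The following is a minimal Python sketch of this check (the helper names are ad hoc):
\begin{verbatim}
# minimal sketch: iterate the succession rule and compare with 2^m * Catalan(m)
from collections import Counter
from math import comb

def level_sizes(depth):
    # a node labelled i+1 has children labelled 2,2,3,3,...,i+2,i+2
    level, sizes = Counter({1: 1}), [1]          # the root carries label 1
    for _ in range(depth):
        nxt = Counter()
        for label, count in level.items():
            for child in range(2, label + 2):    # labels 2, 3, ..., (label-1)+2
                nxt[child] += 2 * count          # each label occurs twice
        level = nxt
        sizes.append(sum(level.values()))
    return sizes

catalan = lambda m: comb(2 * m, m) // (m + 1)
assert level_sizes(6) == [2 ** m * catalan(m) for m in range(7)]
print(level_sizes(6))   # [1, 2, 8, 40, 224, 1344, 8448]
\end{verbatim}
The first few level sizes are $1, 2, 8, 40, 224, \ldots$, in agreement with $2^mC_m$ and with Figure~\ref{doublecattree}.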
\section{Remarks}
The fact that this enumeration has such an elegant closed formula suggests that there is an underlying combinatorial bijection. However, as with Chung, Graham, Hoggatt, and Kleitman, the method of generating trees does not make such an interpretation transparent.
Additionally, one might hope that it is possible to extend the previous ``q=-1'' result for Baxter permutations fixed under $180^{\circ}$ rotation to an instance of the cyclic sieving phenomenon. That is to say, finding a polynomial $f(q)$ that gives an enumeration of Baxter permutations (perhaps with respect to some statistic), such that $f(-1)$ counts how many of these Baxter permutations are fixed under $180^{\circ}$ rotation, and $f(i)=f(-i)$ counts how many of them are fixed under $90^{\circ}$ rotation. However, the natural candidate used in~\cite{Dilks} does not give the right enumeration when evaluated at $i$, and it does not appear that it can be easily modified to give such a result.
\bibliographystyle{abbrv}
| {
"timestamp": "2017-10-20T02:04:25",
"yymm": "1710",
"arxiv_id": "1710.07007",
"language": "en",
"url": "https://arxiv.org/abs/1710.07007",
"abstract": "Baxter permutations are known to be in bijection with a wide number of combinatorial objects. Previously, it was shown that each of these objects had a natural involution which was carried equivariantly by the known bijections, and the number of objects fixed under involution was given by Stembridge's $q=-1$ phenomenon. In this paper, we consider the order 4 action of a quarter-turn rotation of a Baxter permutation matrix, refining the half-turn rotation previously studied. Using the method of generating trees, we show that the number of Baxter permutations fixed under quarter-turn rotation has a very nice enumeration, which suggests the existence of a combinatorial bijection.",
"subjects": "Combinatorics (math.CO)",
"title": "Quarter-Turn Baxter Permutations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587218253717,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7097978727639195
} |
https://arxiv.org/abs/1207.3878 | Solving the Ku-Wales conjecture on the eigenvalues of the derangement graph | We give a new recurrence formula for the eigenvalues of the derangement graph. Consequently, we provide a simpler proof of the Alternating Sign Property of the derangement graph. Moreover, we prove that the absolute value of the eigenvalue decreases whenever the corresponding partition decreases in the dominance order. In particular, this settles affirmatively a conjecture of Ku and Wales (J. of Combin. Theory, Series A 117 (2010) 289--312) regarding the lower and upper bound for the absolute values of these eigenvalues. | \section{Introduction}
Let $G$ be a finite group and $S$ be a subset of $G$. The Cayley graph $\Gamma(G,S)$ is the graph which has the elements of $G$ as its vertices and two vertices $u, v \in G$ are joined by an edge if and only if $uv^{-1} \in S$. We require that $S$ is a nonempty subset of $G$ satisfying the condition that $s \in S \Longrightarrow s^{-1} \in S$ and $1 \not \in S$.
The {\em derangement graph} $\Gamma_{n}$ is the Cayley graph $\Gamma(\mathcal{S}_{n}, \mathcal{D}_{n})$ where $\mathcal{S}_{n}$ is the symmetric group on $[n]=\{1, \ldots, n\}$, and $\mathcal{D}_{n}$ is the set of derangements in $\mathcal{S}_{n}$. That is, two vertices $g$, $h$ of $\Gamma_{n}$ are joined if and only if $g(i) \not = h(i)$ for all $i \in [n]$, or equivalently $gh^{-1}$ fixes no point.
Clearly, $\Gamma_{n}$ is vertex-transitive, so it is $D_{n}$-regular where $D_{n} = |\mathcal{D}_{n}|$. It is well known that the largest eigenvalue of a regular graph is its degree. However, it is generally difficult to determine the smallest eigenvalue of a regular graph. Recently, after having derived a recurrence formula (see Theorem \ref{Ren} below) for the eigenvalues of $\Gamma_{n}$, Renteln \cite{Renteln} showed that the smallest eigenvalue $\mu$ of $\Gamma_{n}$ is $-\frac{D_{n}}{n-1}$. The value of $\mu$ was also determined independently by Ellis et al. \cite{EFP} in their seminal work on intersecting families of permutations. The recurrence obtained by Renteln was later used by Ku and Wales \cite{Ku-Wales} to prove the Alternating Sign Property (ASP) of the derangement graph (Theorem \ref{alternating-sign}). The purpose of this paper is to give a new recurrence formula for these eigenvalues. This new recurrence, which follows from properties of shifted Schur functions, provides a simpler proof of the ASP and settles affirmatively a conjecture of Ku and Wales regarding the lower and upper bounds for the absolute values of these eigenvalues.
Recall that a Cayley graph $\Gamma(G,S)$ is {\em normal} if $S$ is closed under conjugation. It is well known that the eigenvalues of a normal Cayley graph $\Gamma(G,S)$ can be expressed in terms of the irreducible characters of $G$.
\begin{thm}[\cite{Babai, DS, Lub, Ram}]\label{cayley}
The eigenvalues of a normal Cayley graph $\Gamma(G,S)$ are integers given by
\begin{eqnarray}
\eta_{\chi} & = & \frac{1}{\chi(1)} \sum_{s \in S} \chi(s),
\end{eqnarray}
where $\chi$ ranges over all the irreducible characters of $G$. Moreover, the multiplicity of $\eta_{\chi}$ is $\chi(1)^{2}$.
\end{thm}
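For the derangement graph with $n=4$ this can also be confirmed by brute force: the spectrum consists of the integer eigenvalues listed in the $n=4$ table of Section~\ref{values}, with multiplicities $\chi_{\lambda}(1)^{2}$. A minimal Python sketch of such a check (helper names are ad hoc; it only verifies the spectrum numerically):
\begin{verbatim}
# minimal sketch: spectrum of the derangement graph Gamma_4 by brute force
from itertools import permutations
from collections import Counter
import numpy as np

n = 4
perms = list(permutations(range(n)))
idx = {p: i for i, p in enumerate(perms)}

def compose(g, h):                      # (gh)(x) = g(h(x))
    return tuple(g[h[x]] for x in range(n))

def inverse(h):
    inv = [0] * n
    for x, y in enumerate(h):
        inv[y] = x
    return tuple(inv)

A = np.zeros((len(perms), len(perms)))
for g in perms:
    for h in perms:
        gh_inv = compose(g, inverse(h))
        if all(y != x for x, y in enumerate(gh_inv)):   # g h^{-1} is a derangement
            A[idx[g], idx[h]] = 1

spectrum = Counter(int(round(t)) for t in np.linalg.eigvalsh(A))
print(sorted(spectrum.items()))
# [(-3, 10), (1, 9), (3, 4), (9, 1)]: the eigenvalues of the n = 4 table,
# with multiplicities 3^2 + 1^2 = 10, 3^2, 2^2 and 1^2 respectively.
\end{verbatim}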
Recall that a partition $\lambda$ of $n$, denoted by $\lambda \vdash n$, is a weakly decreasing sequence $\lambda_{1} \ge \ldots \ge \lambda_{r}$ with $\lambda_{r} \ge 1$ such that $\lambda_{1} + \cdots + \lambda_{r} = n$. We write $\lambda = (\lambda_{1}, \ldots, \lambda_{r})$. The {\em size} of $\lambda$, denoted by $|\lambda|$, is $n$ and each $\lambda_{i}$ is called the {\em $i$-th part} of the partition. We also use the notation $(\mu_{1}^{a_{1}}, \ldots, \mu_{s}^{a_{s}}) \vdash n$ to denote the partition where $\mu_{i}$ are the distinct nonzero parts that occur with multiplicity $a_{i}$. For example,
\[ (5,5,4,4,2,2,2,1) \longleftrightarrow (5^{2}, 4^{2}, 2^{3}, 1). \]
Clearly, the derangement graph $\Gamma_{n}$ is normal since the set $\mathcal{D}_{n}$ is closed under conjugation. On the other hand, it is well known that both the conjugacy classes of $\mathcal{S}_{n}$ and the irreducible characters of $\mathcal{S}_{n}$ are indexed by partitions $\lambda$ of $n$. Therefore, the eigenvalue $\eta_{\chi_{\lambda}}$ of the derangement graph can be denoted by $\eta_{\lambda}$. Throughout, we shall use this notation.
To describe the recurrence formula of Renteln, we require some terminology. To the Young diagram of a partition $\lambda$, we assign
$xy$-coordinates to each of its boxes by defining the
upper-left-most box to be $(1,1)$, with the $x$ axis increasing to
the right and the $y$ axis increasing downwards. Then the {\em hook}
of $\lambda$ is the union of the boxes $(x',1)$ and $(1, y')$ of the
Ferrers diagram of $\lambda$, where $x' \ge 1$, $y' \ge 1$. Let
$\widehat{h}_{\lambda}$ denote the hook of $\lambda$ and let $h_{\lambda}$
denote the size of $\widehat{h}_{\lambda}$. Similarly, let
$\widehat{c}_{\lambda}$ and $c_{\lambda}$ denote the first column of
$\lambda$ and the size of $\widehat{c}_{\lambda}$ respectively. Note that
$c_{\lambda}$ is equal to the number of rows of $\lambda$. When
$\lambda$ is clear from the context, we replace $\widehat{h}_{\lambda}$,
$h_{\lambda}$, $\widehat{c}_{\lambda}$ and $c_{\lambda}$ by $\widehat{h}$, $h$,
$\widehat{c}$ and $c$ respectively. Let $\lambda-\widehat{h} \vdash n-h$ denote
the partition obtained from $\lambda$ by removing its hook. Also,
let $\lambda-\widehat{c}$ denote the partition obtained from $\lambda$ by
removing the first column of its Ferrers diagram, i.e.
$(\lambda_{1}, \ldots, \lambda_{r})-\widehat{c} = (\lambda_{1}-1, \ldots,
\lambda_{r}-1) \vdash n-r$.
\begin{thm}[\cite{Renteln} Renteln's Formula]\label{Ren}
For any partition $\lambda$, the eigenvalues of the derangement graph $\Gamma_{n}$ satisfy the following recurrence:
\begin{eqnarray}
\eta_{\lambda} & = & (-1)^{h}\eta_{\lambda-\hat{h}} + (-1)^{h+\lambda_{1}}h\eta_{\lambda-\widehat{c}} \label{1-recurrence}
\end{eqnarray}
with initial condition $\eta_{\emptyset} = 1$.
\end{thm}
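Renteln's recurrence is easy to run numerically. For instance, for $\lambda=(5,2)$ one has $h=6$, $\lambda-\widehat{h}=(1)$ and $\lambda-\widehat{c}=(4,1)$, so $\eta_{(5,2)}=\eta_{(1)}-6\,\eta_{(4,1)}=0-6(-11)=66$, in agreement with the table in Section~\ref{values}. A minimal memoized Python sketch of the recurrence (helper names are ad hoc):
\begin{verbatim}
# minimal sketch of Renteln's recurrence; partitions are tuples of parts
from functools import lru_cache

@lru_cache(maxsize=None)
def eta(lam):
    if not lam:
        return 1
    lam1, r = lam[0], len(lam)
    h = lam1 + r - 1                                      # size of the hook of lambda
    minus_hook = tuple(x - 1 for x in lam[1:] if x > 1)   # remove first row and first column
    minus_col  = tuple(x - 1 for x in lam if x > 1)       # remove the first column
    return (-1) ** h * eta(minus_hook) + (-1) ** (h + lam1) * h * eta(minus_col)

assert eta((2,)) == 1 and eta((1, 1)) == -1               # n = 2 table
assert eta((3, 2)) == 4 and eta((4, 1)) == -11            # n = 5 table
assert eta((5, 2)) == 66 and eta((3, 2, 1)) == -5         # n = 7 and n = 6 tables
\end{verbatim}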
\begin{thm}[\cite{Ku-Wales} The Alternating Sign Property (ASP) ]\label{alternating-sign}
Let $n>1$. For any partition $\lambda = (\lambda_{1}, \ldots, \lambda_{r}) \vdash n$,
\begin{eqnarray}
\textnormal{sign}(\eta_{\lambda}) & = & (-1)^{|\lambda|-\lambda_{1}} \nonumber \\
& = & (-1)^{\# \textnormal{cells under the first row of $\lambda$}}
\end{eqnarray}
where $\textnormal{sign}(\eta_\lambda)$ is $1$ if $\eta_{\lambda}$ is positive or $-1$ if $\eta_{\lambda}$ is negative.
\end{thm}
It turns out that the two terms on the right-hand side of Renteln's formula (\ref{1-recurrence}) can have different signs. This is the source of difficulty in the proof of the ASP by Ku and Wales which relies mainly on the recurrence. Our recurrence formula does not have this problem, thus giving a `quicker' proof of the ASP.
To state our results, we need some new terminology. For a partition $\lambda =(\lambda_{1}, \ldots, \lambda_{r}) \vdash n$, let $\widehat{l_{\lambda}}$ denote the last row of $\lambda$ and let $l_{\lambda}$ denote the size of $\widehat{l_{\lambda}}$. Clearly, $l_{\lambda} = \lambda_{r}$. Also, let $\lambda-\widehat{l_{\lambda}}$ denote the partition obtained from $\lambda$ by deleting the last row. When $\lambda$ is clear from the context, we replace $\widehat{l_{\lambda}}$, $l_{\lambda}$ by $\widehat{l}$ and $l$ respectively.
\begin{thm}\label{formula}
Let $\lambda=(\lambda_{1}, \ldots, \lambda_{r}) \vdash n$. The eigenvalues of the derangement graph $\Gamma_{n}$ satisfy the following recurrence:
\begin{eqnarray}
\eta_{\lambda} & = & (-1)^{r-1}\lambda_{r} \eta_{\lambda-\hat{c}} + (-1)^{\lambda_{r}} \eta_{\lambda-\hat{l}} \label{main-recurrence}
\end{eqnarray}
with initial condition $\eta_{\emptyset} = 1$.
\end{thm}
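As a small worked instance of (\ref{main-recurrence}), take $\lambda=(3,2)$: then $r=2$, $\lambda_{r}=2$, and $\eta_{(3,2)}=-2\,\eta_{(2,1)}+\eta_{(3)}=-2(-1)+2=4$, matching the table in Section~\ref{values}. The recurrence is again easy to iterate; a minimal Python sketch (ad hoc helper names), including a spot check of the sign predicted by the ASP:
\begin{verbatim}
# minimal sketch of the new recurrence; partitions are tuples of parts
from functools import lru_cache

@lru_cache(maxsize=None)
def eta(lam):
    if not lam:
        return 1
    r, last = len(lam), lam[-1]
    minus_col  = tuple(x - 1 for x in lam if x > 1)   # remove the first column
    minus_last = lam[:-1]                             # remove the last row
    return (-1) ** (r - 1) * last * eta(minus_col) + (-1) ** last * eta(minus_last)

assert [eta(p) for p in [(3,), (2, 1), (1, 1, 1)]] == [2, -1, 2]     # n = 3 table
assert eta((3, 2)) == 4 and eta((5, 2)) == 66 and eta((4, 2, 1)) == -18
for lam in [(4, 2, 1), (3, 1, 1, 1), (2, 2, 2)]:      # sign is (-1)^(|lam| - lam_1)
    assert (eta(lam) > 0) == ((sum(lam) - lam[0]) % 2 == 0)
\end{verbatim}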
It follows from the ASP that both of the terms on the right-hand side of (\ref{main-recurrence}) have the same sign.
Let $\lambda=(\lambda_{1}, \ldots, \lambda_{r}), \lambda'=(\lambda_{1}', \ldots, \lambda_{r}')\vdash n$. We write $\lambda <_{\textnormal{lex}} \lambda'$ if there is an $m$, $1\leq m\leq r$, such that $\lambda_i=\lambda_i'$ for all $1\leq i\leq m-1$ and $\lambda_m<\lambda_m'$. Note that `$<_{\textnormal{lex}}$' is the usual lexicographic ordering on the partitions of $n$.
Let $\lambda, \lambda'\vdash n$ with $\lambda_{1}$ as their first part. In general, $\lambda <_{\textnormal{lex}} \lambda'$ does not imply that $\vert \eta_{\lambda}\vert<\vert \eta_{\lambda'}\vert$. This has been pointed out in \cite[Remark 1.4]{Ku-Wales}. One of our main contributions in this paper is to show that such a property holds with respect to the {\em dominance order}. Recall that if $\lambda$ and $\lambda'$ are partitions, we say that $\lambda$ is {\em dominated} by $\lambda'$, and write $\lambda \unlhd \lambda'$, if $\lambda_{1} + \lambda_{2} + \cdots + \lambda_{k} \le \lambda'_{1} + \lambda'_{2} + \cdots + \lambda'_{k}$ for all positive integers $k$.
We give a more intuitive interpretation of the dominance order as follows. Recall that an {\em outside corner} of a partition $\lambda$ is a box $(x,y)$ of $\lambda$ such that neither $(x+1,y)$ nor $(x, y+1)$ are boxes of $\lambda$. On the other hand, define an {\em inside corner} of $\lambda$ as a location $(x,y)$ which is not a box of $\lambda$, such that either $y=1$ and $(x-1, y)$ is a box of $\lambda$, or $x=1$ and $(x, y-1)$ is a box of $\lambda$, or $(x-1, y)$ and $(x, y-1)$ are boxes of $\lambda$. For example, in the following diagram of the partition $(4,3,1,1)$, the outside corners are marked with an `o' and the inside corners with an `i':
\begin{center}
\begin{young}
& & & o & ,i\\
& & o & ,i \\
&, i \\
o &, \\
,i
\end{young}
\end{center}
Let $\lambda, \lambda'\vdash n$. We write $\lambda <_1 \lambda'$, if there are $m_1$ and $m_2$, $1\leq m_1<m_2\leq r$ such that
\begin{align}
\lambda &=(\lambda_{1}, \ldots,\lambda_{m_1-1},\lambda_{m_1},\lambda_{m_1+1}, \ldots,\lambda_{m_2-1},\lambda_{m_2},\lambda_{m_2+1}, \ldots,\lambda_{r}),\notag\\
\lambda' &=(\lambda_{1}, \ldots,\lambda_{m_1-1},\lambda_{m_1}+1,\lambda_{m_1+1}, \ldots,\lambda_{m_2-1},\lambda_{m_2}-1,\lambda_{m_2+1}, \ldots,\lambda_{r})\notag
\end{align}
are partitions of $n$. Intuitively, $\lambda <_1 \lambda'$ corresponds to sliding
an outside corner of $\lambda$ upwards into an inside corner of $\lambda$, producing $\lambda'$.
It turns out that the dominance order can be entirely characterized in terms of the partial ordering $<_1$. We shall omit the proof of this standard result.
\begin{lm}\label{raise}
Let $\mu$ and $\lambda$ be partitions of $n$. Then $\mu \unlhd \lambda$ if and only if there exist $\mu^{(1)}, \ldots, \mu^{(s)} \vdash n$ such that
\[ \mu <_1 \mu^{(1)} <_1 \cdots <_1 \mu^{(s)} <_1 \lambda. \]
\end{lm}
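For example, $(2,2,1,1) \unlhd (4,2)$, and a corresponding chain is
\[ (2,2,1,1) \;<_1\; (3,2,1) \;<_1\; (3,3) \;<_1\; (4,2), \]
where each step moves a single box from an outside corner upwards to an inside corner.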
Using the recurrence given by Theorem \ref{formula}, we are able to prove Theorem \ref{thm_inequality} and then settle affirmatively the conjecture of Ku and Wales regarding the lower and upper bounds for the absolute values of the eigenvalues of $\Gamma_{n}$ (Theorem \ref{Bounds}).
\begin{thm}\label{thm_inequality}
Let $\lambda, \lambda' \vdash n$ with $\lambda_{1}$ as
their first part. If $\lambda \unlhd \lambda'$ and $\lambda \neq \lambda'$, then
\begin{equation}
\vert \eta_{\lambda}\vert< \vert \eta_{\lambda'}\vert.\notag
\end{equation}
\end{thm}
\begin{thm}[The Ku-Wales Conjecture]\label{Bounds}
Suppose $\lambda^{*} \vdash n$ is the largest partition in
lexicographic order among all the partitions with $\lambda_{1}$ as
their first part. Then, for every $\lambda = (\lambda_{1}, \ldots,
\lambda_{s}) \vdash n$,
\[|\eta_{(\lambda_{1}, 1^{n-\lambda_{1}})}| \le |\eta_{\lambda}| \le |\eta_{\lambda^{*}}|. \]
\end{thm}
\begin{proof} It follows from Theorem \ref{thm_inequality} by noting that $(\lambda_{1}, 1^{n-\lambda_{1}}) \unlhd \lambda \unlhd \lambda^{*}$, for all $\lambda \vdash n$, $\lambda\neq \lambda^{*}, (\lambda_{1}, 1^{n-\lambda_{1}}) $.
\end{proof}
Note that it has been shown by Ku and Wales (see \cite[Theorem 1.3]{Ku-Wales}) that the lower bound holds for all $\lambda_{1} \ge \lfloor n/2 \rfloor$.
The paper is organized as follows. In Section 2, we introduce the shifted Schur functions developed by Okounkov and Olshanski \cite{OO} and rewrite a formula of Renteln in terms of these functions. Theorem \ref{formula} will then follow immediately from the property of these shifted Schur functions. Using the new recurrence formula, we provide a simpler proof of the ASP in Section 3. In Section 4, we prove Theorem \ref{thm_inequality}, thus settling a conjecture of Ku and Wales. For the reader's convenience, in Section 5, we reproduce some of the eigenvalues of the derangement graph for small $n$ as given in \cite{Ku-Wales}.
\section{Shifted Schur Functions}
The {\em Schur function} or {\em Schur polynomial} in $n$ variables can be defined as the ratio of two $n \times n$ determinants
\begin{eqnarray}
s_{\mu}(x_{1}, \ldots, x_{n})& = & \frac{\textnormal{det} \left[ x_{i}^{\mu_{j}+n-j}\right]}{\textnormal{det} \left[ x_{i}^{n-j}\right]},
\end{eqnarray}
where $\mu$ is an arbitrary partition $\mu_{1} \ge \mu_{2} \ge \cdots \ge \mu_{n} \ge 0$ of length at most $n$.
An important variant of the Schur polynomials is the family of {\em shifted Schur polynomials}, developed by Okounkov and Olshanski \cite{OO}:
\begin{eqnarray}
s^{*}_{\mu}(x_{1}, \ldots, x_{n})& = & \frac{\textnormal{det} \left[ (x_{i}+n-i \downarrow \mu_{j}+n-j ) \right]}{\textnormal{det} \left[ (x_{i}+n-i \downarrow n-j) \right]},
\end{eqnarray}
where the symbol $(x \downarrow k)$ is the $k$-th {\em falling factorial power} of a variable $x$:
\begin{eqnarray}
(x \downarrow k) & = & \left\{ \begin{array}{ll}
x(x-1)\cdots (x-k+1), & \textnormal{if}~k=1,2, \ldots \\
1, & \textnormal{if}~k=0.
\end{array} \right.
\end{eqnarray}
Just like the ordinary Schur polynomials, the shifted Schur polynomials also satisfy the {\em stability property}:
\begin{eqnarray}
s^{*}_{\mu}(x_{1}, \ldots, x_{n},0) & = & s^{*}_{\mu}(x_{1}, \ldots, x_{n}). \label{stable}
\end{eqnarray}
The stability property allows us to define the {\em functions} $s^{*}_{\mu}(x_{1}, x_{2}, \ldots)$ in infinitely many variables, which form a basis of the {\em algebra of shifted symmetric functions}, denoted by $\Lambda^{*}$. Every element of $\Lambda^{*}$ may be viewed as a function $f(x_{1}, x_{2}, \ldots)$ on an infinite sequence of arguments such that $x_{m} = 0$ for all sufficiently large $m$. We refer the reader to \cite{OO} for basic results on shifted symmetric functions.
For the application we have in mind, the following formula for the dimension of skew Young diagrams will be useful.
\begin{thm}[\cite{OO}] \label{dimension}
Let $\mu \vdash k$ and $\lambda \vdash n$ be two partitions, where $k \le n$ and $\mu \subseteq \lambda$. Let $\textnormal{dim~} \lambda/\mu$ denote the number of standard tableaux of shape $\lambda/\mu$; in particular, $\textnormal{dim~} \lambda = \textnormal{dim~} \lambda/\emptyset$. Then
\begin{eqnarray}
\frac{\textnormal{dim~} \lambda/\mu}{\textnormal{dim~} \lambda} & = & \frac{s^{*}_{\mu}(\lambda) }{(n \downarrow k)}, \label{dim}
\end{eqnarray}
where $s^{*}_{\mu}(\lambda) = s^{*}_{\mu}(\lambda_{1}, \lambda_{2}, \ldots)$.
\end{thm}
\begin{thm}[\cite{OO} Vanishing Theorem]\label{vanish}
We have
\begin{eqnarray}
s^{*}_{\mu}(\lambda) & = & 0~~\textnormal{unless}~~\mu \subseteq \lambda, \\
s^{*}_{\mu}(\mu) & = & H(\mu),
\end{eqnarray}
where $H(\mu) = \prod_{\alpha \in \mu} h(\alpha)$ is the product of the hook lengths of all boxes of $\mu$.
\end{thm}
As an example of shifted symmetric functions, set $h^{*}_{k} = s^{*}_{(k)}$ where $(k)$ is the partition of $k$ whose Young diagram consists of just one row. These are called the {\em complete shifted symmetric functions}. They are shifted analogues of the complete homogeneous symmetric functions. We shall require the following properties of $h^{*}_{k}$.
\begin{prop}[\cite{OO}]\label{complete-shifted}
The complete shifted symmetric functions $h^{*}_{k}$ can be written as
\begin{eqnarray}
h^{*}_{k}(x_{1}, x_{2}, \ldots) & = & \sum_{1 \le i_{1} \le \cdots \le i_{k}< \infty} (x_{i_{1}}-k+1)(x_{i_{2}}-k+2) \cdots x_{i_{k}}. \label{complete}
\end{eqnarray}
\end{prop}
\begin{cor}
The complete shifted symmetric functions $h^{*}_{k}$ satisfy the following recurrence:
\begin{eqnarray}
h^{*}_{k}(x_{1}, \ldots, x_{n}) & = & x_{n} h^{*}_{k-1}(x_{1}-1, \ldots, x_{n}-1) + h^{*}_{k}(x_{1}, \ldots, x_{n-1}). \label{recurrence}
\end{eqnarray}
\end{cor}
\begin{proof}
In view of the stability property and Proposition \ref{complete-shifted}, we have
\[ h^{*}_{k}(x_{1}, \ldots, x_{n}) = \sum_{1 \le i_{1} \le \cdots \le i_{k} \le n } (x_{i_{1}}-k+1)(x_{i_{2}}-k+2) \cdots x_{i_{k}}. \]
Therefore,
\begin{eqnarray}
h^{*}_{k}(x_{1}, \ldots, x_{n}) & = & x_{n} \left(\sum_{1 \le i_{1} \le \cdots \le i_{k-1} \le n } (x_{i_{1}}-k+1)(x_{i_{2}}-k+2) \cdots (x_{i_{k}}-1) \right) \nonumber \\
& & + \sum_{1 \le i_{1} \le \cdots \le i_{k} \le n-1 } (x_{i_{1}}-k+1)(x_{i_{2}}-k+2) \cdots x_{i_{k}} \nonumber \\
& = & x_{n} h^{*}_{k-1}(x_{1}-1, \ldots, x_{n}-1) + h^{*}_{k}(x_{1}, \ldots, x_{n-1}). \nonumber
\end{eqnarray}
\end{proof}
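The recurrence (\ref{recurrence}) can also be confirmed numerically directly from the defining sum (\ref{complete}); a minimal Python sketch of such a check (ad hoc helper names):
\begin{verbatim}
# minimal sketch: check the recurrence for h*_k against its defining sum
from itertools import combinations_with_replacement
from math import prod
import random

def h_star(k, xs):
    # h*_k(x_1,...,x_n): sum over weakly increasing index sequences i_1 <= ... <= i_k
    if k == 0:
        return 1
    return sum(prod(xs[i] - k + t + 1 for t, i in enumerate(ix))
               for ix in combinations_with_replacement(range(len(xs)), k))

random.seed(0)
for _ in range(200):
    xs = sorted((random.randint(0, 8) for _ in range(random.randint(1, 4))), reverse=True)
    k = random.randint(1, 5)
    lhs = h_star(k, xs)
    rhs = xs[-1] * h_star(k - 1, [x - 1 for x in xs]) + h_star(k, xs[:-1])
    assert lhs == rhs
\end{verbatim}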
Recall the following formula due to Renteln \cite[Theorem 3.2]{Renteln}.
\begin{thm}[\cite{Renteln}]\label{Renteln-2}
The eigenvalues of the derangement graph $\Gamma_{n}$ are given by
\begin{eqnarray}
\eta_{\lambda} & = & \sum_{k=0}^{n} (-1)^{n-k}(n \downarrow k) \frac{\textnormal{dim~} \lambda/(k)}{\textnormal{dim~} \lambda}
\end{eqnarray}
\end{thm}
\noindent Therefore, it follows immediately from Theorem \ref{dimension} and Theorem \ref{Renteln-2} that
\begin{cor}\label{shift}
The eigenvalues of the derangement graph $\Gamma_{n}$ are given by
\begin{eqnarray}
\eta_{\lambda} & = & \sum_{k=0}^{n} (-1)^{n-k}s^{*}_{(k)}(\lambda) \nonumber \\
& = & \sum_{k=0}^{n} (-1)^{n-k}h^{*}_{k}(\lambda).
\end{eqnarray}
\end{cor}
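As a concrete check of Corollary~\ref{shift}: for $\lambda=(3,2)$ one finds $h^{*}_{0}=1$, $h^{*}_{1}=5$, $h^{*}_{2}=12$, $h^{*}_{3}=12$, and $h^{*}_{k}=0$ for $k\ge 4$ (as forced by the Vanishing Theorem), so $\eta_{(3,2)}=-1+5-12+12=4$, as in the table of Section~\ref{values}. A short Python sketch of the same computation (ad hoc helper names):
\begin{verbatim}
# minimal sketch: eta_lambda as the alternating sum of h*_k(lambda)
from itertools import combinations_with_replacement
from math import prod

def h_star(k, xs):
    if k == 0:
        return 1
    return sum(prod(xs[i] - k + t + 1 for t, i in enumerate(ix))
               for ix in combinations_with_replacement(range(len(xs)), k))

def eta(lam):
    n = sum(lam)
    return sum((-1) ** (n - k) * h_star(k, lam) for k in range(n + 1))

assert eta((3, 2)) == 4 and eta((3, 1, 1)) == 4 and eta((2, 2, 1)) == -4   # n = 5
assert eta((6,)) == 265 and eta((3, 2, 1)) == -5                           # n = 6
\end{verbatim}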
\noindent {\bf Proof of Theorem \ref{formula}.}\\[0.1cm]
\noindent Set $\eta_{\lambda}' = \sum_{k=0}^{n} (-1)^{k} h^{*}_{k}(\lambda)$. By the Vanishing Theorem (Theorem \ref{vanish}) and Corollary \ref{shift}, we can write
\[ \eta_{\lambda}' = \sum_{k=0}^{\infty} (-1)^{k} h^{*}_{k}(\lambda) \]
so that
\[ \eta_{\lambda}' = (-1)^{|\lambda|}\eta_{\lambda}. \]
By (\ref{recurrence}) (with the convention $h^{*}_{-1} = 0$),
\begin{eqnarray}
\eta_{\lambda}' & = & \sum_{k=0}^{\infty} \left((-1)^{k} \left(\lambda_{r} h^{*}_{k-1}(\lambda_{1}-1, \ldots, \lambda_{r}-1) + h^{*}_{k}(\lambda_{1}, \ldots, \lambda_{r-1})\right) \right) \nonumber \\
& = & -\lambda_{r} \sum_{k=0}^{\infty} (-1)^{k-1}h^{*}_{k-1}(\lambda_{1}-1, \ldots, \lambda_{r}-1) + \sum_{k=0}^{\infty} (-1)^{k}h^{*}_{k}(\lambda_{1}, \ldots, \lambda_{r-1}) \nonumber \\
& = & -\lambda_{r} \eta_{\lambda-\hat{c}}' + \eta'_{\lambda-\hat{l}} \nonumber \\
& = & -\lambda_{r} (-1)^{|\lambda-\hat{c}|} \eta_{\lambda-\hat{c}} + (-1)^{|\lambda-\hat{l}|}\eta_{\lambda-\hat{l}} \nonumber \\
(-1)^{|\lambda|} \eta_{\lambda} & = & \lambda_{r}(-1)^{1+|\lambda|-r} \eta_{\lambda-\hat{c}} + (-1)^{|\lambda|-\lambda_{r}} \eta_{\lambda-\hat{l}} \nonumber \\
\eta_{\lambda} & = & (-1)^{r-1}\lambda_{r} \eta_{\lambda-\hat{c}} + (-1)^{\lambda_{r}} \eta_{\lambda-\hat{l}}.\notag
\end{eqnarray}
\hfill $\square$
\section{A simpler proof of the Alternating Sign Property}
We prove Theorem \ref{alternating-sign} by induction on $|\lambda|$. The property is easily checked for partitions of small size. By the inductive hypothesis,
\begin{eqnarray*}
\textnormal{sign}\left( (-1)^{r-1}\eta_{\lambda-\hat{c}}\right)& = & (-1)^{r-1} (-1)^{|\lambda-\hat{c}|-(\lambda_{1}-1)}\\
& = & (-1)^{r-1+|\lambda|-r-\lambda_{1}+1} \\
& = & (-1)^{|\lambda|-\lambda_{1}}.
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
\textnormal{sign} \left( (-1)^{\lambda_{r}} \eta_{\lambda-\hat{l}} \right) & = & (-1)^{\lambda_{r}} (-1)^{|\lambda-\hat{l}|-\lambda_{1}} \\
& = & (-1)^{\lambda_{r} + |\lambda|-\lambda_{r}-\lambda_{1}} \\
& = & (-1)^{|\lambda|-\lambda_{1}}.
\end{eqnarray*}
By the recurrence formula (\ref{main-recurrence}), we deduce that
\[ \textnormal{sign}(\eta_{\lambda}) = (-1)^{|\lambda|-\lambda_{1}}. \]
\hfill $\square$
\section{Some preliminary lemmas}
For convenience, let us write
\begin{equation}
f(\lambda_1,\lambda_2,\dots, \lambda_r)=\vert \eta_{(\lambda_1,\lambda_2,\dots, \lambda_r)} \vert.\notag
\end{equation}
Then by Theorem \ref{alternating-sign} and Theorem \ref{formula}, we have
\begin{equation}
f(\lambda_1,\lambda_2,\dots, \lambda_r)=\lambda_{r}f(\lambda_1-1,\lambda_2-1,\dots, \lambda_r-1) + f(\lambda_1,\lambda_2,\dots, \lambda_{r-1}).\label{use_recur}
\end{equation}
By abuse of notation, in this section we shall use the symbol $\lambda$ to denote a positive integer instead of a partition.
\begin{lm}\label{lm_h_one_variable}
\begin{align}
h^{*}_{0}(\lambda)&=1,\notag\\
h^{*}_{1}(\lambda)&=\lambda,\notag\\
h^{*}_{k}(\lambda)&=(\lambda-k+1)(\lambda-k+2)\cdots(\lambda-1)(\lambda),\ \ \textnormal{for $k\geq 2$}.\notag
\end{align}
\end{lm}
\begin{proof} It follows easily from Proposition \ref{complete-shifted}.
\end{proof}
\begin{lm}\label{recurrence_general} For any $1< m\leq r$,
\begin{equation}
f(\lambda_1,\lambda_2,\dots, \lambda_r)=\sum_{k=0}^{\lambda_m} h^{*}_{k}(\lambda_m,\dots, \lambda_r) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{m-1}-k).\notag
\end{equation}
\end{lm}
\begin{proof} Repeatedly applying equation (\ref{use_recur}) and by Lemma \ref{lm_h_one_variable}, we obtain
\begin{align}
&f(\lambda_1,\lambda_2,\dots, \lambda_r)\notag\\
&=h^{*}_{1}(\lambda_r)f(\lambda_1-1,\lambda_2-1,\dots, \lambda_{r}-1) + h^{*}_{0}(\lambda_r)f(\lambda_1,\lambda_2,\dots, \lambda_{r-1})\notag\\
&=(\lambda_{r})(\lambda_r-1)f(\lambda_1-2,\lambda_2-2,\dots, \lambda_r-2)+\sum_{k=0}^{1} h^{*}_{k}(\lambda_r) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r-1}-k)\notag\\
&=(\lambda_{r})(\lambda_r-1)(\lambda_r-2)f(\lambda_1-3,\lambda_2-3,\dots, \lambda_r-3)+\sum_{k=0}^{2} h^{*}_{k}(\lambda_r) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r-1}-k)\notag\\
&\hskip 3cm\vdots\notag\\
&=\sum_{k=0}^{\lambda_r} h^{*}_{k}(\lambda_r) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r-1}-k).\label{eq_use}
\end{align}
Thus the lemma holds for $m=r$. Assume that it holds for some $m_0$, $2<m_0\leq r$. We shall show that it also holds for $m_0-1$.
By assumption, the following equation holds:
\begin{equation}
f(\lambda_1,\lambda_2,\dots, \lambda_r)=\sum_{k=0}^{\lambda_{m_0}} h^{*}_{k}(\lambda_{m_0},\dots, \lambda_r) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{m_0-1}-k).\label{eq_use2}
\end{equation}
By applying equation (\ref{eq_use}),
\begin{align}
&f(\lambda_1,\lambda_2,\dots, \lambda_r)\notag\\
&=\sum_{k=0}^{\lambda_{m_0}} h^{*}_{k}(\lambda_{m_0},\dots, \lambda_r)
\left (\sum_{j=0}^{\lambda_{m_0-1}-k} h^{*}_{j}(\lambda_{m_0-1}-k) f(\lambda_1-k-j,\lambda_2-k-j,\dots, \lambda_{m_0-2}-k-j) \right)\notag\\
&=\sum_{k=0}^{\lambda_{m_0}} \sum_{j=0}^{\lambda_{m_0-1}-k} h^{*}_{k}(\lambda_{m_0},\dots, \lambda_r)
h^{*}_{j}(\lambda_{m_0-1}-k) f(\lambda_1-k-j,\lambda_2-k-j,\dots, \lambda_{m_0-2}-k-j).\label{eq_use3}
\end{align}
Now by collecting all the terms with $k+j=j_0$, equation (\ref{eq_use3}) becomes
\begin{align}
&f(\lambda_1,\lambda_2,\dots, \lambda_r)\notag\\
&=\sum_{j_0=0}^{\lambda_{m_0-1}} \left( \sum_{\substack{k+j=j_0,\\ 0\leq k\leq \lambda_{m_0}}} h^{*}_{j}(\lambda_{m_0-1}-k)h^{*}_{k}(\lambda_{m_0},\dots, \lambda_r)
\right) f(\lambda_1-j_0,\lambda_2-j_0,\dots, \lambda_{m_0-2}-j_0).\label{eq_use4}
\end{align}
By Proposition \ref{complete-shifted},
\begin{equation}
h^{*}_{j_0}(\lambda_{m_0-1},\lambda_{m_0},\dots, \lambda_r)=\sum_{\substack{k+j=j_0,\\ 0\leq k\leq \lambda_{m_0}}} h^{*}_{j}(\lambda_{m_0-1}-k)h^{*}_{k}(\lambda_{m_0},\dots, \lambda_r).\notag
\end{equation}
Thus, by induction the lemma follows.
\end{proof}
\begin{lm}\label{inequality_h0}
\begin{align}
h^{*}_{0}(\lambda_1,\lambda_2,\dots,\lambda_r)&=1,\notag\\
h^{*}_{1}(\lambda_1,\lambda_2,\dots,\lambda_r)&=\lambda_1+\lambda_2+\cdots+\lambda_r.\notag
\end{align}
\end{lm}
\begin{proof} It follows easily from Proposition \ref{complete-shifted}.
\end{proof}
\begin{lm}\label{inequality_h1}
If $\lambda_s\leq \lambda$ and $2\leq k\leq \lambda$, then
\begin{equation}
h^{*}_{k}(\lambda,\lambda_s)< h^{*}_{k}(\lambda+1,\lambda_s-1).\notag
\end{equation}
\end{lm}
\begin{proof} By Proposition \ref{complete-shifted},
\begin{equation}
h^{*}_{k}(x,y)=\sum_{j=0}^k (x-j\downarrow k-j)(y \downarrow j).\notag
\end{equation}
Therefore,
\begin{equation}
h^{*}_{k}(\lambda+1,\lambda_s-1)-h^{*}_{k}(\lambda,\lambda_s-1)=\sum_{j=0}^{k-1} (k-j) (\lambda-j\downarrow k-j-1)(\lambda_s-1 \downarrow j),\label{compare2}
\end{equation}
and
\begin{align}
h^{*}_{k}(\lambda,\lambda_s)-h^{*}_{k}(\lambda,\lambda_s-1) &=\sum_{j=1}^{k} j (\lambda-j\downarrow k-j)(\lambda_s-1 \downarrow j-1).\notag\\
&=\sum_{j=0}^{k-1} (j+1) (\lambda-j-1\downarrow k-j-1)(\lambda_s-1 \downarrow j).\label{compare1}
\end{align}
We shall compare equation (\ref{compare2}) with equation (\ref{compare1}). For $0\leq j<\frac{k-1}{2}$, the $j$-th and $(k-1-j)$-th terms on the right side of equation (\ref{compare2}) are
\begin{align}
& (k-j) (\lambda-j\downarrow k-j-1)(\lambda_s-1 \downarrow j),\label{eq_first1}\\
& (j+1) (\lambda-k+1+j\downarrow j)(\lambda_s-1 \downarrow k-1-j).\label{eq_first2}
\end{align}
On the other hand, the $j$-th and $(k-1-j)$-th terms on the right side of equation (\ref{compare1}) are
\begin{align}
& (j+1) (\lambda-j-1\downarrow k-j-1)(\lambda_s-1 \downarrow j),\label{eq_second1}\\
& (k-j) (\lambda-k+j\downarrow j)(\lambda_s-1 \downarrow k-1-j).\label{eq_second2}
\end{align}
When $j=0$, the sum (\ref{eq_first1}) $+$ (\ref{eq_first2}) $-$ (\ref{eq_second1}) $-$ (\ref{eq_second2}) is
\begin{align}
& k (\lambda\downarrow k-1)+(\lambda_s-1 \downarrow k-1)- (\lambda-1\downarrow k-1)-k(\lambda_s-1 \downarrow k-1)\notag\\
& = ((\lambda\downarrow k-1)-(\lambda-1\downarrow k-1))+(k-1)((\lambda\downarrow k-1)-(\lambda_s-1 \downarrow k-1))\notag\\
& = ((k-1)(\lambda-1\downarrow k-2))+(k-1)((\lambda\downarrow k-1)-(\lambda_s-1 \downarrow k-1))\notag\\
&>0,\label{eq_final1}
\end{align}
where the last inequality follows from $k\geq 2$ and $\lambda\geq \lambda_{s}>\lambda_s-1$.
Now for $1\leq j<\frac{k-1}{2}$, (\ref{eq_first1}) $-$ (\ref{eq_second1}) is
\begin{equation}
\left((k-j)(\lambda-j)-(j+1)(\lambda-k+1)\right)(\lambda-j-1\downarrow k-j-2)(\lambda_s-1 \downarrow j),\label{eq_third1}
\end{equation}
and (\ref{eq_first2}) $-$ (\ref{eq_second2}) is
\begin{equation}
\left((j+1)(\lambda-k+1+j)-(k-j)(\lambda-k+1)\right)(\lambda-k+j\downarrow j-1)(\lambda_s-1 \downarrow k-1-j).\label{eq_third2}
\end{equation}
Since $\lambda\geq \lambda_s$ and $j<\frac{k-1}{2}$,
\begin{align}
(k-j)(\lambda-j)-(j+1)(\lambda-k+1) &=(k-(2j+1))\lambda+(k-1)+j(j-1)>0,\notag\\
(\lambda-j-1\downarrow k-j-2)(\lambda_s-1 \downarrow j)&\geq (\lambda-k+j\downarrow j-1)(\lambda_s-1 \downarrow k-1-j).\notag
\end{align}
Therefore the sum (\ref{eq_third1})+(\ref{eq_third2}) is at least
\begin{align}
((k-j)(k-j-1)+(j+1)j)(\lambda-k+j\downarrow j-1)(\lambda_s-1 \downarrow k-1-j)>0.\label{eq_final2}
\end{align}
If $k$ is odd, then $j$ can take value $\frac{k-1}{2}$. The $\frac{k-1}{2}$-th term on the right side of (\ref{compare2}) is
\begin{eqnarray}
\frac{k+1}{2} \left(\lambda-\frac{k-1}{2} \left \downarrow \frac{k-1}{2} \right. \right)\left(\lambda_s-1 \left \downarrow \frac{k-1}{2}\right.\right),\label{eq_fourth1}
\end{eqnarray}
and the $\frac{k-1}{2}$-th term on the right side of (\ref{compare1}) is
\begin{equation}
\frac{k+1}{2} \left(\lambda-\frac{k+1}{2} \left \downarrow \frac{k-1}{2} \right.\right)\left(\lambda_s-1 \left \downarrow \frac{k-1}{2} \right.\right).\label{eq_fourth2}
\end{equation}
Note that (\ref{eq_fourth1}) $-$ (\ref{eq_fourth2}) is
\begin{equation}
\frac{k+1}{2}\frac{k-1}{2} \left(\lambda-\frac{k+1}{2} \left \downarrow \frac{k-1}{2}-1 \right.\right)\left(\lambda_s-1 \left \downarrow \frac{k-1}{2} \right.\right)> 0.\label{eq_final3}
\end{equation}
From equations (\ref{eq_final1}), (\ref{eq_final2}) and (\ref{eq_final3}), we deduce that
\begin{align}
& h^{*}_{k}(\lambda+1,\lambda_s-1)- h^{*}_{k}(\lambda,\lambda_s)\notag\\
&=\left(h^{*}_{k}(\lambda+1,\lambda_s-1)-h^{*}_{k}(\lambda,\lambda_s-1) \right)\notag\\
&\hskip 2cm -\left(h^{*}_{k}(\lambda,\lambda_s)-h^{*}_{k}(\lambda,\lambda_s-1)\right)\notag\\
&> 0.\notag
\end{align}
\end{proof}
\begin{lm}\label{inequality_h2} Let $l\geq 1$ and
\begin{equation}
h^{*}_{k}(\lambda,\lambda^l,\lambda)=h^{*}_{k}(\lambda,\underbrace{\lambda,\dots,\lambda}_{l\ \textnormal{times}},\lambda).\notag
\end{equation}
If $2\leq k\leq \lambda$, then
\begin{equation}
h^{*}_{k}(\lambda,\lambda^l,\lambda)< h^{*}_{k}(\lambda+1,\lambda^l,\lambda-1).\notag
\end{equation}
\end{lm}
\begin{proof} By Proposition \ref{complete-shifted},
\begin{align}
&(x-j-r \downarrow k-j-r)(\lambda-j\downarrow r)(y \downarrow j)\notag\\
&=(x-k+1)\cdots (x-j-r)(\lambda-j-r+1)\cdots (\lambda-j)(y-j+1)\cdots (y),\notag
\end{align}
is a term in the sum of $h^{*}_{k}(x,\lambda^l,y)$. In fact, there are $\binom{r+l-1}{l-1}$ such terms. Therefore
\begin{equation}
h^{*}_{k}(x,\lambda^l,y)=\sum_{j=0}^k \sum_{r=0}^{k-j}\binom{r+l-1}{l-1} (x-j-r \downarrow k-j-r)(\lambda-j\downarrow r)(y \downarrow j).\label{leq_first}
\end{equation}
From (\ref{leq_first}),
\begin{align}
&h^{*}_{k}(x+1,\lambda^l,y)-h^{*}_{k}(x,\lambda^l,y)\notag\\
&= \sum_{j=0}^{k-1} \sum_{r=0}^{k-1-j}(k-j-r)\binom{r+l-1}{l-1} (x-j-r \downarrow k-1-j-r)(\lambda-j\downarrow r)(y \downarrow j).\label{leq_difference}
\end{align}
Now replacing $x$ with $\lambda$ and $y$ with $\lambda-1$ in (\ref{leq_difference}), we obtain
\begin{align}
&h^{*}_{k}(\lambda+1,\lambda^l,\lambda-1)-h^{*}_{k}(\lambda,\lambda^l,\lambda-1)\notag\\
&= \sum_{j=0}^{k-1} \sum_{r=0}^{k-1-j}(k-j-r)\binom{r+l-1}{l-1} (\lambda-j\downarrow k-1-j)(\lambda-1 \downarrow j)\notag\\
&=\sum_{j=0}^{k-1} \binom{k-j+l}{l+1} (\lambda-j\downarrow k-1-j)(\lambda-1 \downarrow j)\notag\\
&=\sum_{j=0}^{k-1} \binom{k-j+l}{l+1}(\lambda-j) (\lambda-1 \downarrow k-2)\notag\\
&> \sum_{j=0}^{k-1} \binom{k-j+l}{l+1}(\lambda-k+1) (\lambda-1 \downarrow k-2)\notag\\
&=\sum_{j=0}^{k-1} \binom{k-j+l}{l+1}(\lambda-1 \downarrow k-1)\notag\\
&= \binom{k+l+1}{l+2} (\lambda-1\downarrow k-1).\label{leq_compare_1}
\end{align}
From (\ref{leq_first}),
\begin{align}
&h^{*}_{k}(x,\lambda^l,y)-h^{*}_{k}(x,\lambda^l,y-1)\notag\\
&= \sum_{j=1}^{k} \sum_{r=0}^{k-j}j\binom{r+l-1}{l-1} (x-j-r \downarrow k-j-r)(\lambda-j\downarrow r)(y-1\downarrow j-1).\label{leq_difference2}
\end{align}
Now replacing $x$ with $\lambda$ and $y$ with $\lambda$ in (\ref{leq_difference2}), we obtain
\begin{align}
&h^{*}_{k}(\lambda,\lambda^l,\lambda)-h^{*}_{k}(\lambda,\lambda^l,\lambda-1)\notag\\
&= \sum_{j=1}^{k} \sum_{r=0}^{k-j}j\binom{r+l-1}{l-1} (\lambda-j\downarrow k-j)(\lambda-1\downarrow j-1)\notag\\
&= \sum_{j=1}^{k} \sum_{r=0}^{k-j}j\binom{r+l-1}{l-1} (\lambda-1\downarrow k-1)\notag\\
&= \sum_{j=1}^{k} j\binom{k-j+l}{l} (\lambda-1\downarrow k-1)\notag\\
&= \binom{k+l+1}{l+2} (\lambda-1\downarrow k-1).\label{leq_compare_2}
\end{align}
By equations (\ref{leq_compare_1}) and (\ref{leq_compare_2}), we deduce that
\begin{equation}
h^{*}_{k}(\lambda+1,\lambda^l,\lambda-1)-h^{*}_{k}(\lambda,\lambda^l,\lambda)>0.\notag
\end{equation}
\end{proof}
\begin{lm}\label{lm_special_1} Let $r\geq 3$. If
\begin{align}
&(\lambda_1,\dots, \lambda_{r-2},\lambda_{r-1},\lambda_r),\notag\\
&(\lambda_1,\dots, \lambda_{r-2},\lambda_{r-1}+1,\lambda_r-1),\notag
\end{align}
are two partitions of $n$, then
\begin{equation}
f(\lambda_1,\dots, \lambda_{r-2},\lambda_{r-1},\lambda_r)<f(\lambda_1,\dots, \lambda_{r-2},\lambda_{r-1}+1,\lambda_r-1).\notag
\end{equation}
\end{lm}
\begin{proof} By Lemma \ref{recurrence_general},
\begin{equation}
f(\lambda_1,\dots, \lambda_{r-2},\lambda_{r-1},\lambda_r)=\sum_{k=0}^{\lambda_{r-1}} h^{*}_{k}(\lambda_{r-1}, \lambda_r) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r-2}-k),\notag
\end{equation}
and
\begin{align}
f(\lambda_1,\dots, \lambda_{r-2},\lambda_{r-1}+1,\lambda_r-1)&=\sum_{k=0}^{\lambda_{r-1}+1} h^{*}_{k}(\lambda_{r-1}+1, \lambda_r-1) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r-2}-k)\notag\\
&\geq \sum_{k=0}^{\lambda_{r-1}} h^{*}_{k}(\lambda_{r-1}+1, \lambda_r-1) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r-2}-k).\notag
\end{align}
The lemma then follows from Lemma \ref{inequality_h0} and Lemma \ref{inequality_h1}.
\end{proof}
\begin{lm}\label{lm_special_2} If $l\geq 1$ and
\begin{align}
&(\lambda_1,\dots, \lambda_{r},\lambda,\lambda^l,\lambda),\notag\\
&(\lambda_1,\dots, \lambda_{r},\lambda+1,\lambda^l,\lambda-1),\notag
\end{align}
are two partitions of $n$, then
\begin{equation}
f(\lambda_1,\dots, \lambda_{r},\lambda,\lambda^l,\lambda)<f(\lambda_1,\dots, \lambda_{r},\lambda+1,\lambda^l,\lambda-1).\notag
\end{equation}
\end{lm}
\begin{proof} By Lemma \ref{recurrence_general},
\begin{equation}
f(\lambda_1,\dots, \lambda_{r},\lambda,\lambda^l,\lambda)=\sum_{k=0}^{\lambda} h^{*}_{k}(\lambda,\lambda^l,\lambda) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r}-k),\notag
\end{equation}
and
\begin{align}
f(\lambda_1,\dots, \lambda_{r},\lambda+1,\lambda^l,\lambda-1)&=\sum_{k=0}^{\lambda+1} h^{*}_{k}(\lambda+1,\lambda^l,\lambda-1) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r}-k)\notag\\
&\geq\sum_{k=0}^{\lambda} h^{*}_{k}(\lambda+1,\lambda^l,\lambda-1) f(\lambda_1-k,\lambda_2-k,\dots, \lambda_{r}-k).\notag
\end{align}
The lemma then follows from Lemma \ref{inequality_h0} and Lemma \ref{inequality_h2}.
\end{proof}
\section{Proof of Theorem \ref{thm_inequality}}
\begin{proof} By Lemma \ref{raise}, it is sufficient to show that the inequality holds for $\lambda<_1\lambda'$, i.e. if $\lambda <_1 \lambda'$ then $|\eta_{\lambda}| < |\eta_{\lambda'}|$.
Let $2\leq m_1<m_2\leq r$ be such that
\begin{align}
\lambda & =(\lambda_1,\dots,\lambda_{m_1-1},\lambda_{m_1},\lambda_{m_1+1},\dots, \lambda_{m_2-1},\lambda_{m_2},\lambda_{m_2+1},\dots, \lambda_{r})\notag\\
\lambda' & =(\lambda_1,\dots,\lambda_{m_1-1},\lambda_{m_1}+1,\lambda_{m_1+1},\dots , \lambda_{m_2-1},\lambda_{m_2}-1,\lambda_{m_2+1},\dots, \lambda_{r}).\notag
\end{align}
We shall prove by induction on $n$. Clearly, Theorem \ref{thm_inequality} holds for small values of $n$. We shall distinguish two cases.
\vskip 0.5cm
\noindent
{\bf Case 1.} $m_2\neq r$. Then by Theorem \ref{alternating-sign} and Theorem \ref{formula},
\begin{eqnarray}
\vert \eta_{\lambda}\vert & = & \lambda_{r} \vert \eta_{\lambda-\hat{c}}\vert + \vert \eta_{\lambda-\hat{l}}\vert.\notag
\end{eqnarray}
Note that $\lambda-\hat{c}<_1\lambda'-\hat{c}$ and $\lambda-\hat{l}<_1\lambda'-\hat{l}$. So, by induction,
\begin{eqnarray}
\vert \eta_{\lambda}\vert & = & \lambda_{r} \vert \eta_{\lambda-\hat{c}}\vert + \vert \eta_{\lambda-\hat{l}}\vert<\lambda_{r} \vert \eta_{\lambda'-\hat{c}}\vert + \vert \eta_{\lambda'-\hat{l}}\vert=\vert \eta_{\lambda'}\vert.\notag
\end{eqnarray}
\vskip 0.5cm
\noindent
{\bf Case 2.} $m_2=r$. If $m_1=r-1$, then it follows from Lemma \ref{lm_special_1} that $\vert \eta_{\lambda}\vert<\vert \eta_{\lambda'}\vert$. Suppose $m_1<r-1$.
Let $m_3$ be the largest integer such that
\begin{equation}
\lambda'' =(\lambda_1,\dots,\lambda_{m_3-1},\lambda_{m_3}+1,\lambda_{m_3+1},\dots,\lambda_{r}-1),\notag
\end{equation}
is a partition of $n$. Note that $m_1\leq m_3$. By the choice of $m_3$, we must have
\begin{equation}
\lambda_{m_3-1}>\lambda_{m_3}=\lambda_{m_3+1}=\cdots=\lambda_{r-1}.\notag
\end{equation}
If $\lambda_r=\lambda_{r-1}$, then by Lemma \ref{lm_special_2}, $\vert \eta_{\lambda}\vert<\vert \eta_{\lambda''}\vert$. If $\lambda_r<\lambda_{r-1}$, then by Case 1, $\vert \eta_{\lambda}\vert<\vert \eta_{\lambda'''}\vert$, where
\begin{equation}
\lambda''' =(\lambda_1,\dots,\lambda_{m_3-1},\lambda_{m_3}+1,\lambda_{m_3+1},\dots,\lambda_{r-1}-1,\lambda_r).\notag
\end{equation}
By Lemma \ref{lm_special_1}, $\vert \eta_{\lambda'''}\vert<\vert \eta_{\lambda''}\vert$. Thus $\vert \eta_{\lambda}\vert<\vert \eta_{\lambda''}\vert$.
In either case, $\vert \eta_{\lambda}\vert<\vert \eta_{\lambda''}\vert$. If $m_1=m_3$, then we are done. If $m_1<m_3$, then by Case 1, $\vert \eta_{\lambda''}\vert<\vert \eta_{\lambda'}\vert$. Hence $\vert \eta_{\lambda}\vert<\vert \eta_{\lambda'}\vert$. This completes the proof of the theorem.
\end{proof}
\section{Some Values of $\eta_\lambda$}\label{values}
In this section we reproduce some of the eigenvalues of $\Gamma_{n}$ for small $n$ as given in \cite{Ku-Wales}.
\medskip
{\small
\begin{center}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|}
\multicolumn{2}{c} {$n=2$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ \\
\hline
$2$& $1$ \\
$1^{2}$& $-1$ \\
\hline
\end{tabular}
\end{minipage}
\hspace{2cm}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|}
\multicolumn{2}{c} {$n=3$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ \\
\hline
$3$& $2$ \\
$2,1$& $-1$ \\
$1^{3}$& $2$ \\
\hline
\end{tabular}
\end{minipage}
\end{center}
\begin{center}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|r|rr|}
\multicolumn{5}{c} {$n=4$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ \\
\hline
$4$& $9$ & & $2,1^{2}$ & $1$ \\
$3,1$& $-3$ & & $1^{4}$ & $-3$ \\
$2,2$& $3$ & & & \\
\hline
\end{tabular}
\end{minipage}
\hspace{2cm}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|r|rr|}
\multicolumn{5}{c} {$n=5$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ \\
\hline
$5$& $44$ & & $2^{2},1$ & $-4$ \\
$4,1$& $-11$ & & $2,1^{3}$ & $-1$ \\
$3,2$& $4$ & & $1^{5}$ & $4$ \\
$3,1^{2}$& $4$ & & & \\
\hline
\end{tabular}
\end{minipage}
\end{center}
\begin{center}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|r|rr|}
\multicolumn{5}{c} {$n=6$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 6 $ & $265$ & & $ 3, 1^{3}$ & $-5$\\
$ 5, 1 $ & $-53$ & & $ 2^{3} $ & $7$\\
$ 4, 2 $ & $15$ & & $ 2^{2}, 1^{2} $ & $5$\\
$4, 1^{2} $ & $13$ & & $ 2, 1^{4} $ & $ 1$\\
$3^{2}$ & $-11$ & & $ 1^{6} $ & $-5$ \\
$3, 2, 1$ & $-5$ & & &\\
\hline
\end{tabular}
\end{minipage}
\hspace{2cm}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|r|rr|}
\multicolumn{5}{c} {$n=7$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 7 $ & $1854$ & & $3, 2^{2} $ & $ 6$\\
$6, 1 $ & $-309$ & & $3, 2, 1^{2}$ & $6$\\
$5, 2 $ & $66$ & & $3, 1^{4} $ & $ 6$\\
$5, 1, 1$ & $62$ & &$2^{3}, 1 $ & $-9$ \\
$4, 3 $ & $-21$ & &$2^{2}, 1^{3} $ & $-6$ \\
$4, 2, 1$ & $-18$ & &$2, 1^{5}$ & $ -1$ \\
$4, 1^{3} $ & $ -15$ & &$1^{7}$ & $ 6$ \\
$3^{2}, 1 $ & $ 14$ & & & \\
\hline
\end{tabular}
\end{minipage}
\end{center}
\begin{center}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|r|rr|}
\multicolumn{5}{c} {$n=8$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 8 $ & $ 14833$ & & $4, 1^{4}$ & $ 17$\\
$7, 1$ & $-2119$ &&$3^{2}, 2$ & $-19$ \\
$6, 2$ & $371$ & & $3^{2}, 1^{2}$ & $ -17$\\
$6, 1^{2}$ & $353$ & & $3, 2^{2}, 1$ & $-7$\\
$5, 3$ & $-89$ & & $3, 2, 1^{3}$ & $-7$\\
$5, 2, 1$ & $-77$ & & $3, 1^{5}$ & $-7$\\
$5, 1^{3}$ & $ -71$ & & $2^{4}$ & $13$\\
$4^{2}$ & $53$ & &$2^{3}, 1^{2}$ & $11$ \\
$4, 3, 1$ & $25$ & & $2^{2}, 1^{4}$ & $7$\\
$4, 2^{2}$ & $ 23$ & & $2, 1^{6}$ & $1$ \\
$4, 2, 1^{2}$ & $ 21$ & &$1^{8}$ & $-7$ \\
\hline
\end{tabular}
\end{minipage}
\hspace{2cm}
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|r|rr|}
\multicolumn{5}{c} {$n=9$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ \\
\hline
$9$ & $ 133496$ &&$4, 2^{2}, 1$ & $ -27$ \\
$8, 1$ & $-16687$ && $4, 2, 1^{3}$ & $ -24$\\
$7, 2$ & $2472$ &&$4, 1^{5}$ & $ -19$ \\
$7, 1^{2}$ & $2384$ && $3^{3}$ & $ 32$\\
$6, 3$ & $ -463$ &&$3^{2}, 2, 1$ & $ 23$ \\
$6, 2, 1$ & $-424$ && $3^{2}, 1^{3}$ & $ 20$\\
$6, 1^{3}$ & $-397$ &&$3, 2^{3}$ & $ 8$ \\
$5, 4$ & $128$ && $3, 2^{2}, 1^{2}$ & $ 8$\\
$5, 3, 1$ & $104$ && $3, 2, 1^{4}$ & $ 8$\\
$5, 2^{2}$ & $92$ &&$3, 1^{6}$ & $ 8$ \\
$5, 2, 1^{2}$ & $ 88$ &&$2^{4}, 1$ & $ -16$ \\
$5, 1^{4}$ & $ 80$ && $2^{3}, 1^{3}$ & $ -13$\\
$4^{2}, 1$ & $ -64$ &&$2^{2}, 1^{5}$ & $ -8$ \\
$4, 3, 2$ & $ -31$ && $2, 1^{7}$ & $ -1$\\
$4, 3, 1^{2}$ & $ -29$ && $1^{9}$ & $ 8$\\
\hline
\end{tabular}
\end{minipage}
\end{center}
\begin{center}
\begin{tabular}{|rr|r|rr|r|rr|r|rr|}
\multicolumn{11}{c} {$n=10$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$& \ \ \ &$\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 10 $ & $ 1334961 $ & & $ 6,1^4 $ & $ 441 $ & & $ 4,3,2,1$ & $36$ & &$ 3,2^2,1^3$ & $-9$ \\
$ 9,1 $ & $ -148329 $ & & $5,5 $ & $-309 $ & & $4,3,1^3 $ & $33$& &$3,2,1^5$ &$-9$ \\
$ 8,2 $ & $ 19071 $ & & $5,4,1 $ & $ -149$ & & $4,2^3 $ & $33$& &$3,1^7$ &$-9$ \\
$ 8,1^2 $ & $ 18541 $ & & $5,3,2 $ & $-125 $ & & $4,2^2,1^2 $ & $31$& &$2^5$ &$21$ \\
$ 7,3 $ & $ -2967$ & & $5,3,1^2 $ & $-119 $ & & $4,2,1^4 $ & $27$& &$2^4,1^2$ &$19$ \\
$ 7,2,1 $ & $ -2781$ & & $5,2^2,1 $ & $-105 $ & & $4,1^6 $ & $21$& &$2^3,1^4$ &$15$ \\
$ 7,1^3 $ & $ -2649 $ & & $5,2,1^3 $ & $-99 $ & & $3^3,1 $ & $-39$& &$2^2,1^6$ &$9$ \\
$ 6,4 $ & $ 621 $ & & $5,1^5 $ & $-89 $ & & $3^2,2^2 $ & $-29$& & $2,1^8$ &$1$ \\
$ 6,3,1 $ & $529 $ & & $4^2,2 $ & $81 $ & & $3^2,2,1^2 $ & $-27$& & $1^{10}$ &$-9$ \\
$ 6,2^2 $ & $495 $ & & $4^2,1^2 $ & $75 $ & & $3^2,1^4 $ & $-23$& & & \\
$ 6,2,1^2 $ & $ 477 $ & & $4,3^2 $ & $39 $ & & $3,2^3,1 $ & $-9$& & & \\
\hline
\end{tabular}
\end{center}
}
{\small
\begin{center}
\begin{tabular}{|rr|r|rr|r|rr|r|rr|}
\multicolumn{11}{c} {$n=11$,\ $\lambda_{1} \ge 5$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$& \ \ \ &$\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 11 $ & $ 14684570 $ & & $ 7, 3, 1 $ & $ 3338 $ & & $ 6, 2^{2}, 1$ & $-557 $ & & $ 5, 3, 1^{3}$ & $134$ \\
$ 10, 1 $ & $ -1468457 $ & & $ 7, 2^{2} $ & $ 3178 $ & & $6, 2, 1^{3} $ & $-530 $ & & $5, 2^{3} $ & $122$ \\
$ 9, 2 $ & $ 166870 $ & & $ 7, 2, 1^{2} $ & $ 3090 $ & & $6, 1^{5} $ & $-485 $ & & $5, 2^{2}, 1^{2} $ & $118$ \\
$ 9, 1^{2} $ & $ 163162 $ & & $ 7, 1^{4} $ & $ 2914 $ & & $5^{2}, 1 $ & $362 $ & & $5, 2, 1^{4} $ & $ 110$ \\
$ 8, 3 $ & $ -22249 $ & & $ 6, 5 $ & $ -905 $ & & $5, 4, 2 $ & $ 178 $ & & $5, 1^{6} $ & $98$ \\
$ 8, 2, 1 $ & $ -21190 $ & & $ 6, 4, 1 $ & $ -710 $ & & $ 5, 4, 1^{2}$ & $170 $ & & $ $ & $$ \\
$ 8, 1^{3} $ & $ -20395 $ & & $ 6, 3, 2 $ & $ -617 $ & & $ 5, 3^{2} $ & $158 $ & & $ $ & $$ \\
$ 7, 4 $ & $ 3706 $ & & $ 6, 3, 1^{2} $ & $ -595 $ & & $5, 3, 2, 1 $ & $143 $ & & $ $ & $$ \\
\hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{|rr|r|rr|r|rr|r|rr|}
\multicolumn{11}{c} {$n=12$,\ $\lambda_{1} \ge 6$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$& \ \ \ &$\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 12 $ & $ 176214841 $ & & $ 8, 3, 1 $ & $ 24721 $ & & $ 7, 2^{2}, 1 $ & $ -3531$ & & $6, 3, 2, 1 $ & $694$ \\
$ 11, 1 $ & $ -16019531 $ & & $ 8, 2^{2} $ & $ 23839 $ & & $7, 2, 1^{3} $ & $ -3399$ & & $6, 3, 1^{3} $ & $661$ \\
$ 10, 2 $ & $ 1631619 $ & & $8, 2, 1^{2} $ & $23309 $ & & $7, 1^{5} $ & $ -3179$ & & $ 6, 2^{3}$ & $637$ \\
$ 10, 1^{2} $ & $ 1601953 $ & & $ 8, 1^{4} $ & $ 22249 $ & & $ 6^{2} $ & $ 2119$ & & $6, 2^{2}, 1^{2} $ & $619 $ \\
$ 9, 3 $ & $ -190709 $ & & $7, 5 $ & $ -4959 $ & & $ 6, 5, 1$ & $ 1033 $ & & $6, 2, 1^{4} $ & $583$ \\
$ 9, 2, 1 $ & $ -183557 $ & & $ 7, 4, 1 $ & $ -4169 $ & & $6, 4, 2 $ & $ 829 $ & & $6, 1^{6} $ & $ 529$ \\
$ 9, 1^{3} $ & $ -177995 $ & & $7, 3, 2 $ & $ -3815 $ & & $6, 4, 1^{2} $ & $ 799$ & & $ $ & $$ \\
$ 8, 4 $ & $ 26701 $ & & $ 7, 3, 1^{2} $ & $ -3709 $ & & $6, 3^{2} $ & $ 739$ & & $ $ & $$ \\
\hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{|rr|r|rr|r|rr|r|rr|}
\multicolumn{11}{c} {$n=13$, \ $\lambda_{1} \ge 6$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$& \ \ \ &$\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 13 $ & $ 2290792932 $ & & $ 9, 1^{4} $ & $ 192828 $ & & $7, 4, 1^{2} $ & $4632 $ & & $6, 4, 3 $ & $-996$ \\
$ 12, 1 $ & $ -190899411 $ & & $ 8, 5 $ & $ -33363 $ & &$7,3^{2} $ & $4452$ & & $6, 4, 2, 1 $ & $ -933$ \\
$ 11, 2 $ & $ 17621484 $ & & $ 8, 4, 1 $ & $ -29668 $ & & $7, 3, 2, 1 $ & $4239 $ & & $ 6, 4, 1^{3} $ & $-888$ \\
$ 11, 1^{2} $ & $ 17354492 $ & & $ 8, 3, 2 $ & $ -27811 $ & & $7, 3, 1^{3} $ & $ 4080$ & & $6, 3^{2}, 1 $ & $ -831$ \\
$ 10, 3 $ & $ -1835571 $ & & $8, 3, 1^{2} $ & $ -27193 $ & & $7, 2^{3} $ & $ 3972 $ & & $6, 3, 2^{2} $ & $ -793$ \\
$ 10, 2, 1 $ & $ -1779948 $ & & $ 8, 2^{2}, 1 $ & $ -26223 $ & & $ 7, 2^{2}, 1^{2} $ & $3884 $ & & $6, 3, 2, 1^{2} $ & $-771$ \\
$ 10, 1^{3} $ & $ -1735449 $ & & $ 8, 2, 1^{3} $ & $ -25428 $ & & $ 7, 2, 1^{4} $ & $3708 $ & & $6, 3, 1^{4} $ & $-727$ \\
$ 9, 4 $ & $ 222492 $ & & $ 8, 1^{5} $ & $ -24103 $ & & $7, 1^{6} $ & $3444 $ & & $ 6, 2^{3}, 1$ & $-708$ \\
$ 9, 3, 1 $ & $ 209780 $ & & $7, 6 $ & $ 7284 $ & & $6^{2}, 1 $ & $-2428 $ & & $6, 2^{2}, 1^{3} $ & $-681$ \\
$ 9, 2^{2} $ & $ 203952 $ & & $7, 5, 1 $ & $ 5580 $ & & $6, 5, 2 $ & $ -1203 $ & & $ 6, 2, 1^{5}$ & $ -636$ \\
$ 9, 2, 1^{2} $ & $ 200244 $ & & $7, 4, 2 $ & $ 4764 $ & & $6, 5, 1^{2} $ & $-1161 $ & & $6, 1^{7} $ & $-573$ \\
\hline
\end{tabular}
\end{center}
}
{\small
\begin{minipage}[t]{5cm}
\begin{tabular}{|rr|r|rr|r|rr|r|rr|}
\multicolumn{11}{c} {$n=15$} \\
\hline
$\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$ & \ \ \ & $\lambda$ & $\eta_{\lambda}$& \ \ \ &$\lambda$ & $\eta_{\lambda}$ \\
\hline
$ 15 $ & $ 481066515734 $ & & $ 7^{2}, 1 $ & $ 18806 $ & & $ 6, 2, 1^{7} $ & $-742$ & &$4, 3^{2}, 2^{2}, 1$ & $-77$ \\
$ 14, 1 $ & $ -34361893981 $ & & $ 7, 6, 2 $ & $ 9350 $ & & $ 6, 1^{9} $ & $-661$ & & $4, 3^{2}, 2, 1^{3}$ & $-74$ \\
$ 13, 2 $ & $ 2672591754 $ & & $ 7, 6, 1^{2} $ & $ 9094 $ & & $ 5^{3} $ & $1214$ & & $4, 3^{2}, 1^{5}$ & $ -69$ \\
$ 13, 1^{2} $ & $ 2643222614 $ & & $ 7, 5, 3 $ & $ 7446 $ & & $5^{2}, 4, 1 $ & $859$ & & $4, 3, 2^{4}$ & $-73$ \\
$ 12, 3 $ & $ -229079293 $ & & $ 7, 5, 2, 1 $ & $ 7089 $ & & $5^{2}, 3, 2 $ & $742$ & & $4, 3, 2^{3}, 1^{2}$ & $-71$ \\
$ 12, 2, 1 $ & $ -224273434 $ & & $ 7, 5, 1^{3} $ & $ 6822 $ & &$5^{2}, 3, 1^{2} $ & $714$ & & $ 4, 3, 2^{3}, 1^{4}$ & $-67$ \\
$ 12, 1^{3} $ & $ -220268551 $ & & $ 7, 4^{2} $ & $ 6662 $ & & $5^{2}, 2^{2}, 1 $ & $662$& & $4, 3, 2, 1^{6}$ & $-61$ \\
$ 11, 4 $ & $ 22026854 $ & &$ 7, 4, 3, 1 $ & $ 6174 $ & &$5^{2},2,1^{3} $ & $629$ & & $ 4, 3, 1^{8}$ & $-53$ \\
$ 11, 3, 1 $ & $ 21211046 $ & & $ 7, 4, 2^{2} $ & $ 5954 $ & &$5^{2}, 1^{5}$ & $574$ & & $4, 2^{5}, 1 $ & $-66$ \\
$ 11, 2^{2} $ & $ 20825390 $ & & $ 7, 4, 2, 1^{2} $ & $ 5822 $ & & $5, 4^{2}, 2$ & $374$& & $4, 2^{4}, 1^{3}$ & $-63$ \\
$ 11, 2, 1^{2} $ & $ 20558398 $ & & $ 7, 4, 1^{4} $ & $ 5558 $ & &$5, 4^{2}, 1^{2}$ & $362 $ & & $ 4, 2^{3}, 1^{5}$ & $-58$ \\
$ 11, 1^{4} $ & $ 20024414 $ & & $ 7, 3^{2}, 2 $ & $ 5566 $ & &$5, 4, 3^{2}$ & $350$ & & $4, 2^{2}, 1^{7}$ & $-51$ \\
$ 10, 5 $ & $ -2447421 $ & & $ 7, 3^{2}, 1^{2}$ & $ 5442 $ & &$5, 4, 3, 2, 1$ & $329$& & $4, 2, 1^{9}$ & $-42$ \\
$ 10, 4, 1 $ & $ -2288506 $ & &$ 7, 3, 2^{2}, 1 $ & $ 5246 $ & & $5, 4, 3, 1^{3}$ & $314$& & $4, 1^{11} $ & $-31$ \\
$ 10, 3, 2 $ & $ -2202685 $ & & $ 7, 3, 2, 1^{3} $ & $ 5087 $ & & $5, 4, 2^{3} $ & $302 $& & $ 3^{5} $ & $134$ \\
$ 10, 3, 1^{2} $ & $ -2169311 $ & & $ 7, 3, 1^{5} $ & $ 4822 $ & &$5, 4, 2^{2}, 1^{2}$ & $294$ & & $3^{4}, 2, 1$ & $119$ \\
$ 10, 2^{2}, 1 $ & $ -2121105 $ & & $ 7, 2^{4} $ & $ 4854 $ & & $5, 4, 2, 1^{4}$ & $278$ & & $3^{4}, 1^{3}$ & $110$ \\
$ 10, 2, 1^{3} $ & $ -2076606 $ & & $ 7, 2^{3}, 1^{2} $ & $ 4766 $ & &$5, 4, 1^{6} $ & $254$ & & $3^{3}, 2^{3}$ & $98$ \\
$ 10, 1^{5} $ & $ -2002441 $ & & $ 7, 2^{2}, 1^{4} $ & $ 4590 $ & &$5, 3^{3}, 1 $ & $290 $ & & $3^{3}, 2^{2}, 1^{2}$ & $94$ \\
$ 9, 6 $ & $ 333674 $ & & $ 7, 2, 1^{6} $ & $ 4326 $ & & $5, 3^{2}, 2^{2}$ & $274$& & $3^{3}, 2, 1^{4}$ & $86$\\
$ 9, 5, 1 $ & $ 293702 $ & & $ 7, 1^{8} $ & $ 3974 $ & & $5, 3^{2}, 2, 1^{2}$ & $266$& & $3^{3}, 1^{6}$ & $74$\\
$ 9, 4, 2 $ & $ 271934 $ & & $ 6^{2}, 3 $ & $ -3430 $ & & $5, 3^{2}, 1^{4}$ & $250$& & $3^{2}, 2^{4}, 1$ & $62$ \\
$ 9, 4, 1^{2} $ & $ 266990 $ & & $ 6^{2}, 2, 1 $ & $-3205$ & & $5, 3, 2^{3}, 1$ & $239$& & $3^{2}, 2^{3}, 1^{3}$ & $59$ \\
$ 9, 3^{2} $ & $ 262226 $ & & $ 6^{2}, 1^{3} $ & $-3046$ & &$5, 3, 2^{2}, 1^{3}$ & $230$ & & $3^{2}, 2^{2}, 1^{5}$ & $54$\\
$ 9, 3, 2, 1 $ & $ 254279 $ & & $ 6, 5, 4 $ & $-1789$ & & $5, 3, 2, 1^{5}$ & $215$& & $3^{2}, 2, 1^{7}$ & $47$ \\
$ 9, 3, 1^{3} $ & $ 247922 $ & & $ 6, 5, 3, 1 $ & $-1617$ & &$5, 3, 1^{7}$ & $194$ & & $3^{2}, 1^{9}$ & $38$ \\
$ 9, 2^{3} $ & $ 244742 $ & & $6, 5, 2^{2} $ & $-1543$ & &$5, 2^{5}$ & $194$ & & $3, 2^{6}$ & $14$ \\
$ 9, 2^{2}, 1^{2} $ & $ 241034 $ & & $6, 5, 2, 1^{2}$ & $ -1501$ & &$5, 2^{4}, 1^{2}$ & $190$ & & $3, 2^{5}, 1^{2} $ & $14$ \\
$ 9, 2, 1^{4} $ & $ 233618 $ & & $ 6, 5, 1^{4} $ & $-1417$ & &$5, 2^{3}, 1^{4}$ & $182$& & $3, 2^{4}, 1^{4} $ & $14$ \\
$ 9, 1^{6} $ & $ 222494 $ & & $6, 4^{2}, 1 $ & $-1411$ & &$5, 2^{2}, 1^{6} $ & $170$ & & $3, 2^{3}, 1^{6}$ & $14$ \\
$ 8, 7 $ & $ -65821 $ & & $6, 4, 3, 2 $ & $-1282$ & &$5, 2, 1^{8}$ & $154$& & $3, 2^{2}, 1^{8}$ & $14$ \\
$ 8, 6, 1 $ & $ -49546 $ & &$6, 4, 3, 1^{2} $ & $ -1246$ & &$5, 1^{10} $ & $134$ & & $3, 2, 1^{10}$ & $14$\\
$ 8,5,2 $ & $-41701$ & & $6,4,2^{2},1$ & $-1181$ & & $4^{3}, 3$ & $-331$ & & $3, 1^{12}$ & $14$ \\
$ 8, 5, 1^{2} $ & $ -40775 $ & & $6, 4, 2, 1^{3} $ & $-1141$ & &$4^{3}, 2, 1$ & $-298$ & & $2^{7}, 1$ & $-49$\\
$ 8, 4, 3 $ & $ -38146 $ & & $6, 4, 1^{5} $ & $-1066$ & &$4^{3}, 1^{3} $ & $-277$ & & $2^{6}, 1^{3}$ & $-46$ \\
$ 8, 4, 2, 1 $ & $ -36715 $ & & $6, 3^{3} $ & $-1105$ & &$4^{2}, 3^{2}, 1 $ & $-226$ & & $2^{5}, 1^{5}$ & $-41$\\
$ 8, 4, 1^{3} $ & $ -35602 $ & & $6, 3^{2}, 2, 1$ & $-1054$ & &$4^{2}, 3, 2^{2}$ & $-210$ & & $2^{4}, 1^{7} $ & $-34$ \\
$ 8, 3^{2}, 1 $ & $ -34961 $ & & $6, 3^{2}, 1^{3}$ & $-1015$ & & $4^{2}, 3, 2, 1^{2}$ & $-202$& & $2^{3}, 1^{9} $ & $-25$\\
$ 8, 3, 2^{2} $ & $ -33991 $ & & $6, 3, 2^{3}$ & $ -991$ & & $4^{2}, 3, 1^{4}$ & $-186$& & $2^{2}, 1^{11} $ & $-14$\\
$ 8, 3, 2, 1^{2} $ & $ -33373 $ & & $6, 3, 2^{2}, 1^{2} $ & $-969$ & &$4^{2}, 2^{3}, 1 $ & $-175$ & & $2, 1^{13} $ & $-1$ \\
$ 8, 3, 1^{4} $ & $ -32137 $ & & $6, 3, 2, 1^{4}$ & $ -925$ &&$4^{2}, 2^{2}, 1^{3} $ & $-166$ & & $1^{15}$ & $14$\\
$ 8, 2^{3}, 1 $ & $ -31786 $ & & $ 6, 3, 1^{6} $ & $-859$ & &$4^{2}, 2, 1^{5}$ & $-151$ & & $ $ & $ $\\
$ 8, 2^{2}, 1^{3} $ & $ -30991 $ & & $ 6, 2^{4}, 1 $ & $ -877$ & & $4^{2}, 1^{7}$ & $-130$& & &\\
$ 8, 2, 1^{5} $ & $ -29666 $ & & $ 6, 2^{3}, 1^{3} $ & $-850$ & & $4, 3^{3}, 2$ & $-81$& & &\\
$ 8, 1^{7} $ & $ -27811 $ & & $6, 2^{2}, 1^{5} $ & $ -805$ & & $4, 3^{3}, 1^{2}$ & $-79$& & &\\
\hline
\end{tabular}
\end{minipage}
}
\end{section}
| {
"timestamp": "2012-07-18T02:01:52",
"yymm": "1207",
"arxiv_id": "1207.3878",
"language": "en",
"url": "https://arxiv.org/abs/1207.3878",
"abstract": "We give a new recurrence formula for the eigenvalues of the derangement graph. Consequently, we provide a simpler proof of the Alternating Sign Property of the derangement graph. Moreover, we prove that the absolute value of the eigenvalue decreases whenever the corresponding partition decreases in the dominance order. In particular, this settles affirmatively a conjecture of Ku and Wales (J. of Combin. Theory, Series A 117 (2010) 289--312) regarding the lower and upper bound for the absolute values of these eigenvalues.",
"subjects": "Combinatorics (math.CO)",
"title": "Solving the Ku-Wales conjecture on the eigenvalues of the derangement graph",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587268703082,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7097978704364336
} |
https://arxiv.org/abs/2105.13301 | Majority Dynamics: The Power of One | Consider $n=\ell+m$ individuals, where $\ell\le m$, with $\ell$ individuals holding an opinion $A$ and $m$ holding an opinion $B$. Suppose that the individuals communicate via an undirected network $G$, and in each time step, each individual updates her opinion according to a majority rule (that is, according to the opinion of the majority of the individuals she can communicate with in the network). This simple and well studied process is known as "majority dynamics in social networks". Here we consider the case where $G$ is a random network, sampled from the binomial model $\mathbb{G}(n,p)$, where $(\log n)^{-1/16}\le p\le 1-(\log n)^{-1/16}$. We show that for $n=\ell+m$ with $\Delta=m-\ell\le(\log n)^{1/4}$, the above process terminates whp after three steps when a consensus is reached. Furthermore, we calculate the (asymptotically) correct probability for opinion $B$ to "win" and show it is \[\Phi\bigg(\frac{p\Delta\sqrt{2}}{\sqrt{\pi p(1-p)}}\bigg) + O(n^{-c}),\] where $\Phi$ is the Gaussian CDF. This answers two conjectures of Tran and Vu and also a question raised by Berkowitz and Devlin. The proof technique involves iterated degree revelation and analysis of the resulting degree-constrained random graph models via graph enumeration techniques of McKay and Wormald as well as Canfield, Greenhill, and McKay. | \section{Introduction}\label{sec:introduction}
Considerable effort has been devoted to understanding the exchange of opinions between individuals, as it plays a major role in all types of social interaction. Of course, no simple model can accurately describe the behavior of many actors in complicated situations, so analysis and understanding of natural models for this problem has generated significant interest. A natural model, which has even been of interest in biophysics \cite{MP43} and psychology \cite{CH56}, is so-called majority dynamics. It can be briefly described as follows. Given $n$ individuals, let the network $G$ capture the set of interactions between participants. Each participant $i\in\{1,\ldots,n\}$ starts with an initial opinion $A_i^{(0)}\in\{\pm1\}$, and at every time step adopts the majority opinion of their neighbors, that is, $A_i^{(t+1)} = \operatorname{sign}(\sum_{j\sim i}A_j^{(t)})$. The key object of study is therefore the propagation of opinions and how the local structure of the network affects these dynamics. We refer the reader to \cite{MT17, MNT14, TV20, BD20} for further references regarding majority dynamics.
We now precisely define majority dynamics in terms of partitions of the graph $G$, as this is the formulation we will use in our analysis. Additionally, following \cite{TV20}, we adopt the convention that if a participant's neighborhood is equally split between opinions then they keep their current opinion.
\begin{definition}\label{def:majority-dynamics}
Given a graph $G$ whose vertex set is partitioned as $B_0\sqcup R_0$, the \emph{majority dynamics} at time $i$ are computed as follows. Given $B_i\sqcup R_i$, a new partition $B_{i+1}\sqcup R_{i+1}$ is formed by swapping precisely those vertices with strictly more of their neighbors on the other side of the partition. We say the color blue or red \emph{wins by step} $k$ if $B_k = B_0\cup R_0$ or $R_k = B_0\cup R_0$, respectively.
\end{definition}
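For example (purely to illustrate \cref{def:majority-dynamics}; this toy case plays no role in the arguments below), consider the triangle on vertices $\{u,v,w\}$ with $R_0=\{u,v\}$ and $B_0=\{w\}$. The vertex $w$ has both of its neighbors on the other side and therefore swaps, while $u$ and $v$ each see one neighbor on each side and stay. Hence $R_1 = \{u,v,w\}$ and red wins by step $1$.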
Our primary object of study in this paper is majority dynamics on random graphs $\mathbb{G}(n,p)$. This setting was first considered by Benjamini, Chan, O’Donnell, Tamuz, and Tan \cite{BCOTT16}; subsequent research has primarily focused on establishing that majority dynamics terminates in a finite number of steps (see e.g.~\cite{FKM20} and the very recent \cite{CKLT21} aimed at understanding sparse graphs) or on the even finer question of the distribution of which color majority dynamics terminates on \cite{TV20,BD20}.
Our primary aim is to resolve a conjecture of Tran and Vu \cite{TV20} which informally states that for majority dynamics in $\mathbb{G}(n,1/2)$ even a bias of a single extra voter is sufficient to influence the final state by a positive probability. An essentially equivalent conjecture appears in subsequent work of Berkowitz and Devlin \cite[Conjecture~7]{BD20}.
\begin{conjecture}[{\cite[Conjecture~7]{TV20}}]\label{conj:main}
Majority dynamics on $\mathbb{G}(2n+\Delta,1/2)$ with sets $R_0 = \{v_1,\ldots,v_{n+\Delta}\}$ and $B_0 = \{v_1',\ldots,v_n'\}$ converges to $R_k = R_0\cup B_0$ with probability at least $1/2+f(\Delta)$, where $f(\Delta) > 0$, as $n\to\infty$.
\end{conjecture}
Tran and Vu \cite{TV20} resolved this conjecture for (even) $\Delta\ge 12$, and Berkowitz and Devlin \cite{BD20} resolved it for $\Delta\ge 3$.
We resolve this conjecture in full.
\begin{theorem}\label{thm:main}
There is an absolute constant $c > 0$ so that the following holds. Let $n\ge 1$, let $0\le\Delta\le(\log n)^{1/4}$, and let $(\log n)^{-1/16}\le p\le 1-(\log n)^{-1/16}$. In majority dynamics on $\mathbb{G}(2n+\Delta,p)$ with $|R_0| = n+\Delta$, with probability at least $1-O(n^{-c})$ there is a color with more vertices at step $1$ and that color wins by step $3$. Furthermore, $|R_3| = 2n+\Delta$ with probability
\[\Phi\bigg(\frac{p\Delta\sqrt{2}}{\sqrt{\pi p(1-p)}}\bigg) + O(n^{-c})\]
where $\Phi$ is the cdf of $\mathcal{N}(0,1)$.
\end{theorem}
\begin{remark}
In particular, the event that both colors have the same size at step $1$ occurs with decaying probability. The parameters $(\log n)^{-1/16}$ and $(\log n)^{1/4}$ can certainly be improved substantially, but we have chosen to focus on the dense regime.
\end{remark}
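As an illustration of the formula in \cref{thm:main} (not used elsewhere): for $p=1/2$ and $\Delta=1$ the argument of $\Phi$ equals
\[\frac{\tfrac{1}{2}\sqrt{2}}{\sqrt{\pi/4}} = \sqrt{\frac{2}{\pi}} \approx 0.80,\]
so a single extra initial red vertex already wins with probability $\Phi(\sqrt{2/\pi})+O(n^{-c})\approx 0.79$.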
We note that \cref{thm:main} additionally resolves \cite[Conjecture~8]{TV20} regarding monotonicity of the limiting probabilities with respect to $\Delta$, and the proof of \cref{thm:main} essentially answers \cite[Question~2]{BD20} (see in particular \cref{thm:day-one,thm:day-two} which provide fine information about the sizes of the various parts after one and two days). We also note that this is the first work which gives an exact limiting probability for a specific color winning when that probability is strictly between $0$ and $1$ (other than the simple symmetric case $\Delta = 0$).
We anticipate that the techniques of this paper combined with recent refined asymptotic enumeration results of \cite{LW17,LW20} can yield further refinements of work of \cite{BD20,FKM20}. In particular this may allow precise understanding of the number of steps before reaching stability for wider ranges of sparse $p$ than currently known.
\subsection{Strategy}\label{sub:strategy}
The broad structure of this paper breaks into $2$ phases. In the first we substantially refine results of \cite{BD20} in order to obtain a local limit theorem of how many vertices switched from blue to red and red to blue jointly. Our techniques rely extensively on graph enumeration results and models developed for degree sequences in $\mathbb{G}(n,p)$ by McKay and Wormald \cite{MW97} and random bipartite graphs by McKay and Skerman \cite{MS16}. Technically, this fine-tuned local limit theorem is not necessary to complete the analysis, and one can use a (non-joint) central limit theorem for the lead \cite[Theorem~1]{BD20} along with a precise computation of its mean (using for example techniques similar to \cref{app:calculations} or \cite[Lemma~12]{BD20}).
The second and third days also use the graph enumeration techniques of McKay and Wormald \cite{MW90}, which were extended to bipartite graphs by Canfield, Greenhill, and McKay \cite{CGM08}; however at these stages we will only derive coarser information about the degree sequences and the number of red and blue vertices. In particular, we prove that given a sufficiently large initial lead, on the second day the numbers of red and blue vertices concentrate in intervals of length $O(n^{1-\eta})$ for an absolute constant $\eta$. Further, we show that the leading side will have developed a substantial lead (of linear order). Then a final application of degree enumeration implies that with high probability the process terminates on the third day, because it is unlikely for any vertex to have a degree so large that it overcomes the gap between sizes. (Simpler arguments in \cite{BCOTT16,TV20,BD20} show termination by the fourth day without enumeration at this stage.)
For these two stages we rely on a modification of a concentration argument developed by Ferber, Kwan, Narayanan, and the authors \cite{FKNSS21}, in which a general framework for applying the second moment method with McKay-Wormald \cite{MW90} enumeration formulas was used to resolve a conjecture of F\"uredi on the existence of ``unfriendly'' partitions in $\mathbb{G}(n,1/2)$. The analysis here is substantially simpler as we need to track fewer parameters to guarantee convergence to termination within a finite time horizon. In particular, the analysis of the third day only requires a large-deviation bound on the degrees of vertices from what is expected in a degree-constrained random graph model, and the analysis of the second day has substantially simpler formulas due to the setting.
\subsection*{Acknowledgements}
We thank Asaf Ferber, Vishesh Jain, Matthew Kwan, and Bhargav Narayanan for discussions related to this project.
\section{Day one}\label{sec:one}
As mentioned, the analysis for the first day involves proving a local limit theorem for the sizes of the parts. Although a central limit theorem was shown by Berkowitz and Devlin \cite{BD20} for the size of the red partition after one step, we will require an understanding of how many vertices switched from blue to red and from red to blue jointly, rather than just the net amount. Such a joint central limit theorem may be derivable from their method, which involves moments, and would suffice for our purposes; however, we have chosen to prove a local limit theorem in order to demonstrate the power of these techniques, and because of its independent interest. In particular, enumeration techniques allow one to reduce this computation to a local limit theorem for certain binomial random variables, and various questions about the model can be addressed using these techniques.
The main result of this section is the following theorem. Its proof occupies \cref{sub:initial-estimates,sub:graph-models,sub:local-limit,sub:integration}.
\begin{theorem}\label{thm:day-one}
There exist constants $C,c>0$ such that the following holds. Let $n\ge 2$, let $0\le\Delta\le(\log n)^{1/4}$, and let $(\log n)^{-1/4}\le p\le 1-(\log n)^{-1/4}$. Let
\begin{align*}
\sqrt{\frac{n}{4\pi}}x' &= x - \bigg(\frac{1}{2}+\frac{p(\Delta-1)+1/2}{2\sqrt{\pi p(1-p)n}}\bigg)n,\\
\sqrt{\frac{n}{4\pi}}y' &= y - \bigg(\frac{1}{2}+\frac{p(-\Delta-1)+1/2}{2\sqrt{\pi p(1-p)n}}\bigg)n.
\end{align*}
In majority dynamics on $\mathbb{G}(2n+\Delta,p)$ with $|R_0| = n+\Delta$, we have that
\begin{align*}
\mathbb{P}[|R_0\cap R_1| = x&\wedge|B_0 \cap B_1|=y]\\
&= \frac{2}{n\sqrt{\pi(2+\pi)}}\exp\bigg(-\frac{(1+\pi)(x')^2-2(x'y')+(1+\pi)(y')^2}{2\pi(2+\pi)}\bigg) + O(n^{-1-c}).
\end{align*}
Furthermore for $|x'|$ or $|y'|\ge C\sqrt{\log n}$ we have that
\[\mathbb{P}[|R_0\cap R_1| = x\wedge|B_0 \cap B_1|=y]\le n^{-5}.\]
\end{theorem}
\subsection{Initial estimates}\label{sub:initial-estimates}
We will first need some initial estimates regarding specific distributions which will show up when computing our local limit theorem. First, we record the probability that one binomial is greater than a different binomial with similar parameters. We defer its proof, which consists mainly of binomial manipulations and applications of well-known local central limit theorems, to \cref{app:calculations}.
\begin{lemma}\label{lem:chop-probability}
There is $c > 0$ so that the following holds. We are given $n\ge 2$, $\tau\in\mathbb{Z}$ of magnitude at most $2(\log n)^{1/4}$, and $(\log n)^{-1/4}\le p\le 1-(\log n)^{-1/4}$. Suppose that $q = p + \alpha/n$ and $q' = p + \beta/n$ with $|\alpha|,|\beta|\le 40\sqrt{p(1-p)\log n}$. Then
\[\mathbb{P}[\operatorname{Bin}(n+\tau,q)\ge\operatorname{Bin}(n,q')] = \frac{1}{2}+\frac{p\tau + 1/2 + \alpha - \beta}{2\sqrt{\pi p(1-p)n}} + O(n^{-3/4}).\]
\end{lemma}
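As a quick sanity check of \cref{lem:chop-probability} (not needed later), take $\tau = 0$ and $\alpha = \beta$, so that the two binomials are independent copies $X,Y$ of the same distribution. By symmetry,
\[\mathbb{P}[X\ge Y] = \frac{1+\mathbb{P}[X=Y]}{2},\]
and a local limit theorem gives $\mathbb{P}[X=Y] = (1+o(1))\cdot\frac{1}{2\sqrt{\pi p(1-p)n}}$, since $X-Y$ has variance $(1+o(1))\cdot 2p(1-p)n$; this matches the term $\frac{1/2}{2\sqrt{\pi p(1-p)n}}$ in the stated formula.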
Next, we need to understand the mean and standard deviation of certain conditioned binomial random variables. The level of control required can be deduced from the Berry-Esseen theorem.
\begin{lemma}\label{lem:chop-statistics}
There is $c > 0$ so that the following holds. We are given $n\ge 2$, $\tau\in\mathbb{Z}$ of magnitude at most $2(\log n)^{1/4}$, and $(\log n)^{-1/4}\le p\le 1-(\log n)^{-1/4}$. Suppose that $q,q'\in p\pm 40\sqrt{p(1-p)\log n}/n$. Let $X\sim\operatorname{Bin}(n+\tau,q)$ and $Y\sim\operatorname{Bin}(n,q')$. Let $X^+$ be $X$ conditional on $X > Y$ while $X^-$ be $X$ conditional on $X\le Y$. Then
\begin{align*}
\mathbb{E}X^+ = pn + \sqrt{\frac{p(1-p)n}{\pi}} + O(n^{1/4}),&\qquad\operatorname{Var}X^+ = \bigg(1-\frac{1}{\pi}\bigg)p(1-p)n+O(n^{3/4})\\
\mathbb{E}X^- = pn - \sqrt{\frac{p(1-p)n}{\pi}} + O(n^{1/4}),&\qquad\operatorname{Var}X^- = \bigg(1-\frac{1}{\pi}\bigg)p(1-p)n+O(n^{3/4}).
\end{align*}
\end{lemma}
\begin{proof}
By Berry-Esseen, the joint distribution $(X-pn,Y-pn)/\sqrt{p(1-p)n}$ has cumulative distribution function differing from $\mathcal{N}(0,I_2)$ by $O(1/\sqrt{np(1-p)})$ pointwise. (Note that $\tau$ is small, so the shifts are negligible.) Let $Z_1,Z_2\sim\mathcal{N}(0,1)$. We see that
\begin{align*}
\mathbb{E}X^+ = \mathbb{E}[X|X\ge Y] &= pn + \sqrt{p(1-p)n}\frac{\mathbb{E}\big[\frac{X-pn}{\sqrt{p(1-p)n}}\mathbbm{1}_{X\ge Y}\big]}{\mathbb{P}[X\ge Y]}\\
&= pn + \sqrt{p(1-p)n}\frac{\mathbb{E}[Z_1\mathbbm{1}_{Z_1\ge Z_2}] + O(n^{-1/4})}{\frac{1}{2} + O(n^{-1/4})}\\
&= pn + \sqrt{\frac{p(1-p)n}{\pi}} + O(n^{1/4}).
\end{align*}
The error terms $O(n^{-1/4})$ come from integrating the discrepancy in cumulative distribution functions over the region where $(X-pn,Y-pn)/\sqrt{p(1-p)n}$ is bounded by $O(\sqrt{\log n})$ and using a large deviation bound for binomials outside. Similarly,
\begin{align*}
\mathbb{E}(X^+-pn)^2 = \mathbb{E}[(X-pn)^2|X\ge Y] &= p(1-p)n\frac{\mathbb{E}[Z_1^2\mathbbm{1}_{Z_1\ge Z_2}] + O(n^{-1/4})}{\frac{1}{2} + O(n^{-1/4})}\\
&= p(1-p)n + O(n^{3/4}).
\end{align*}
Therefore
\[\operatorname{Var}X^+ = \mathbb{E}[(X^+-pn)^2] - (\mathbb{E}[X^+-pn])^2 = \bigg(1-\frac{1}{\pi}\bigg)p(1-p)n + O(n^{3/4}).\]
Above, we used $\mathbb{E}[Z_1\mathbbm{1}_{Z_1\ge Z_2}] = 1/(2\sqrt{\pi})$ and $\mathbb{E}[Z_1^2\mathbbm{1}_{Z_1\ge Z_2}] = 1/2$. The computation for $X^-$ is exactly analogous so we omit it.
\end{proof}
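For completeness, we record how the two Gaussian identities used above may be verified. Write $D = (Z_1-Z_2)/\sqrt{2}$ and $S = (Z_1+Z_2)/\sqrt{2}$, which are independent standard normals, and note that $Z_1 = (D+S)/\sqrt{2}$ and $\mathbbm{1}_{Z_1\ge Z_2} = \mathbbm{1}_{D\ge 0}$. Then
\begin{align*}
\mathbb{E}[Z_1\mathbbm{1}_{Z_1\ge Z_2}] &= \frac{1}{\sqrt{2}}\mathbb{E}[D\mathbbm{1}_{D\ge0}] + \frac{1}{\sqrt{2}}\mathbb{E}[S]\mathbb{P}[D\ge0] = \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2\pi}} = \frac{1}{2\sqrt{\pi}},\\
\mathbb{E}[Z_1^2\mathbbm{1}_{Z_1\ge Z_2}] &= \frac{1}{2}\Big(\mathbb{E}[D^2\mathbbm{1}_{D\ge0}] + 2\mathbb{E}[D\mathbbm{1}_{D\ge0}]\mathbb{E}[S] + \mathbb{E}[S^2]\mathbb{P}[D\ge0]\Big) = \frac{1}{2}\Big(\frac{1}{2}+0+\frac{1}{2}\Big) = \frac{1}{2}.
\end{align*}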
Next, we need a local limit theorem for sums of these conditioned binomial random variables. The proof uses log-concavity of binomial distributions, along with a technique of Bender \cite{Ben73} which upgrades a Berry-Essen quality central limit theorem for a log-concave variable into a local central limit theorem. Though it follows by directly citing such results, we spell out the details in order to quantify the bounds.
\begin{proposition}\label{prop:chopped-lclt}
There is $c > 0$ so that the following holds. We are given $n\ge 2$, $m,\tau_k,\tau_k'\in\mathbb{Z}$ of magnitude at most $2(\log n)^{1/4}$, and $(\log n)^{-1/4}\le p\le 1-(\log n)^{-1/4}$. Suppose that $q_k,q_k'\in p\pm 40\sqrt{p(1-p)\log n}/n$. Let $X_k\sim\operatorname{Bin}(n+\tau_k,q_k)$ and $Y_k\sim\operatorname{Bin}(n+\tau_k',q_k')$. Let $X_k^+$ be $X_k$ conditional on $X_k > Y_k$ while $X_k^-$ be $X_k$ conditional on $X_k\le Y_k$. Fix some $i\in[n+m]$ and sequence $\epsilon_k\in\{\pm1\}$, and let
\[S = \sum_{k=1}^i\epsilon_kX_k^- + \sum_{k=i+1}^{n+m}\epsilon_kX_k^+.\]
Then
\[\mathbb{P}[S = s] = \frac{1}{\sqrt{2\pi}\sigma_S}\exp\bigg(-\frac{(s-\mu_S)^2}{2\sigma_S^2}\bigg) + O\bigg(\frac{1}{n^{1/5}\sigma_S}\bigg)\]
for all $i\in[n+m]$ and $s\in\mathbb{Z}$, if $\mu_S$ and $\sigma_S$ are the mean and standard deviation of $S$.
\end{proposition}
\begin{proof}
Note that $X_k,Y_k$ have probabilities converging to that of a normalized Gaussian, by a local limit theorem. Combining with tail bounds, we easily see that $X_k^+$ has well-behaved (centered) moments: its variance is $\Theta(p(1-p)n)$ and its centered third moment is $\Theta((p(1-p)n)^{3/2})$. The same holds for $X_k^-$. Therefore, the Berry-Esseen theorem shows that the cumulative distribution functions of $S$ and $\mathcal{N}(\mu_S,\sigma_S^2)$ differ by $O(1/\sqrt{n})$ everywhere.
Next, note that $X_k,Y_k$ have log-concave probability mass functions (on $\mathbb{Z}$) by log-concavity of binomials, hence $(X_k,Y_k)$ has a jointly log-concave probability mass function in the sense that
\[p(a,b)p(c,d)\le p\bigg(\bigg\lfloor\frac{a+c}{2}\bigg\rfloor,\bigg\lfloor\frac{b+d}{2}\bigg\rfloor\bigg)p\bigg(\bigg\lceil\frac{a+c}{2}\bigg\rceil,\bigg\lceil\frac{b+d}{2}\bigg\rceil\bigg).\]
Conditioning on a convex set preserves log-concavity in this sense, hence $(X_k,Y_k)$ conditional on $X_k\ge Y_k$ as well as conditional on $X_k < Y_k$ both have log-concave probability mass functions. By \cite[Theorem~1.2]{HKS19} (which essentially reproves \cite[Theorem~1.4]{KL19} but allows functions to be $0$), we see that the marginals of a distribution which is log-concave in this sense are log-concave. Therefore $X_k^+$, $X_k^-$ have log-concave probability mass functions.
Finally, convolutions of log-concave sequences are log-concave, so $S$ has log-concave probability mass function. We established earlier that it satisfies a quantitative central limit theorem. We now quantify an argument of Bender \cite{Ben73} in order to deduce the desired result.
Let $m_{S}$ be the mode of $S$. Above this value, the probability mass is nonincreasing, while below it is nondecreasing. First suppose that $s > m_{S} + n^{-1/4}\sigma_S$. We see that
\begin{align*}
\mathbb{P}[S = s]&\le\frac{1}{\lceil n^{-1/4}\sigma_S\rceil}\mathbb{P}[s\le S < s+n^{-1/4}\sigma_S]\\
&= \frac{1}{\lceil n^{-1/4}\sigma_S\rceil}\mathbb{P}[s\le\mathcal{N}(\mu_S,\sigma_S^2) < s+n^{-1/4}\sigma_S] + O(n^{-1/4}\sigma_S^{-1})\\
&= \frac{1}{\sqrt{2\pi}\sigma_S}\exp\bigg(-\frac{(s-\mu_S)^2}{2\sigma_S^2}\bigg) + O(n^{-1/5}\sigma_S^{-1}).
\end{align*}
The last line follows since $(s-\mu_S)^2/(2\sigma_S^2)$ is either stable up to a multiplicative factor of $(1+O(n^{-1/5}))$ upon changing $s$ by $\pm n^{-1/4}\sigma_S$ or is super-polynomially small (hence absorbed into the additive error term, since $\sigma_S^2 = \Theta(p(1-p)n^2)$ is polynomial). The lower bound is analogous. Furthermore, this holds for $s < m_{S} - n^{-1/4}\sigma_S$ by an identical argument. Therefore,
\[\mathbb{P}[S = s] = \frac{1}{\sqrt{2\pi}\sigma_S}\exp\bigg(-\frac{(s-\mu_S)^2}{2\sigma_S^2}\bigg) + O(n^{-1/5}\sigma_S^{-1})\]
as long as $s\notin m_{S}\pm n^{-1/4}\sigma_S$.
Finally, suppose that $m_{S}\le s\le m_{S}+n^{-1/4}\sigma_S$ (the symmetric case is analogous). We have
\begin{align*}
\mathbb{P}[S = s]&\ge\mathbb{P}[S = s + \lceil n^{-1/4}\sigma_S\rceil]\\
&= \frac{1}{\sqrt{2\pi}\sigma_S}\exp\bigg(-\frac{(s+\lceil n^{-1/4}\sigma_S\rceil-\mu_S)^2}{2\sigma_S^2}\bigg) + O(n^{-1/5}\sigma_S^{-1})\\
&= \frac{1}{\sqrt{2\pi}\sigma_S}\exp\bigg(-\frac{(s-\mu_S)^2}{2\sigma_S^2}\bigg) + O(n^{-1/5}\sigma_S^{-1}),
\end{align*}
where the last equality uses a similar argument to above. This is in fact enough to demonstrate that $|m_{S}-\mu_S| = O(n^{-1/5}\sigma_S)$ (since if it were too far, then the sequence would have an increase-decrease pattern twice).
Finally, we obtain an upper bound via log-concavity:
\begin{align*}
\mathbb{P}[S = s]\le\frac{\mathbb{P}[S = s + \lceil n^{-1/4}\sigma_S\rceil]^2}{\mathbb{P}[S = s + 2\lceil n^{-1/4}\sigma_S\rceil]} = \frac{1}{\sqrt{2\pi}\sigma_S}\exp\bigg(-\frac{(s-\mu_S)^2}{2\sigma_S^2}\bigg) + O(n^{-1/5}\sigma_S^{-1})
\end{align*}
by an analogous computation and the fact that $m_{S},\mu_S$ are close. The result follows.
\end{proof}
\subsection{Degree sequence models}\label{sub:graph-models}
We now define a plethora of degree sequence models for random graphs that will be needed for the computations. At a high level, the work of McKay and Wormald \cite{MW97} and McKay and Skerman \cite{MS16} demonstrate that degrees of random graphs look independent conditional on, for example, total edge count. These models provide a way to encapsulate these facts quantitatively.
\begin{definition}[Degree sequence domains]\label{def:degree-sets}
Let $I_n = \{0,\ldots,n-1\}^n$, $E_n$ be the even sum sequences in this set, and $I_n^\ell$ be the sum $\ell$ sequences. We will typically denote elements of these sets by $\mathbf{d}$. Let $I_{m,n} = \{0,\ldots,n\}^m\times\{0,\ldots,m\}^n$, $E_{m,n}$ be the sequences with equal sums on both sides, and $E_{m,n}^\ell$ be the sequences with equal sums $\ell$. We will typically denote elements of these sets by $\mathbf{s}$ of length $m$ and $\mathbf{t}$ of length $n$. We will denote random variable versions of these by capital boldface instead.
\end{definition}
\begin{definition}[True degree models]\label{def:true-model}
$\mathcal{D}_p^n$ is the degree sequence distribution of $\mathbb{G}(n,p)$, which is a random variable supported on $E_n\subseteq I_n$. $\mathcal{D}_p^{m,n}$ is the degree sequence distribution of a bipartite graph with $m$ vertices on one side and $n$ on the other, each edge included independently with probability $p$, which is a random variable supported on $E_{m,n}\subseteq I_{m,n}$.
\end{definition}
\begin{definition}[Independent degree models]\label{def:independent-model}
$\mathcal{B}_p^n$ is the distribution of $n$ independent $\operatorname{Bin}(n-1,p)$ random variables, supported on $I_n$. $\mathcal{B}_p^{m,n}$ is the distribution of $m$ independent $\operatorname{Bin}(n,p)$ and $n$ independent $\operatorname{Bin}(m,p)$ variables, supported on $I_{m,n}$.
\end{definition}
\begin{definition}[Conditioned degree models]\label{def:conditioned-model}
$\mathcal{E}_p^n$ is the distribution of $\mathcal{B}_p^n$ conditioned on having even sum, supported on $E_n$. $\mathcal{E}_p^{m,n}$ is the distribution of $\mathcal{B}_p^{m,n}$ conditioned on having equal sums on both sides, supported on $E_{m,n}$.
\end{definition}
\begin{definition}[Integrated degree models]\label{def:integrated-model}
$\mathcal{I}_p^n$ is the distribution sampled as follows. Sample $p'\sim\mathcal{N}(p,p(1-p)/(n^2-n))$, conditional on being in $(0,1)$. Then sample from $\mathcal{E}_{p'}^n$. $\mathcal{I}_p^{m,n}$ is the distribution sampled as follows. Sample $p'\sim\mathcal{N}(p,p(1-p)/(2mn))$, conditional on being in $(0,1)$. Then sample from $\mathcal{E}_{p'}^{m,n}$.
\end{definition}
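To make the relation between these models concrete, note that \cref{def:conditioned-model} is simply a reweighting of \cref{def:independent-model}: for any $\mathbf{d}\in E_n$,
\[\mathbb{P}_{\mathcal{E}_p^n}[\mathbf{D}=\mathbf{d}] = \frac{\mathbb{P}_{\mathcal{B}_p^n}[\mathbf{D}=\mathbf{d}]}{\mathbb{P}_{\mathcal{B}_p^n}[|\mathbf{D}|\in 2\mathbb{Z}]},\]
and analogously in the bipartite case with the event $|\mathbf{S}|=|\mathbf{T}|$ in the denominator; this is the form in which these models are used in \cref{sub:transfer} below.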
We are now ready to state the necessary results.
\begin{theorem}[{From~\cite[Theorem~3(ii)]{MW90},~\cite[Theorem~3.6]{MW97}}]\label{thm:graph-indep}
There is $c > 0$ so that the following holds. Let $n\ge 2$ and suppose $(\log n)^{-1/4}\le p\le 1-(\log n)^{-1/4}$. There is an event $B_p^n\subseteq I_n$ such that $\mathbb{P}_{\mathcal{D}_p^n}[B_p^n] = n^{-\omega(1)}$ and uniformly for all $\mathbf{d}\in I_n\setminus B_p^n$ we have
\[\mathbb{P}_{\mathcal{D}_p^n}[\mathbf{D}=\mathbf{d}] = (1+O(n^{-c}))\mathbb{P}_{\mathcal{I}_p^n}[\mathbf{D} = \mathbf{d}]\]
\end{theorem}
\begin{theorem}[{From~\cite[Theorem~1(a)]{MS16}}]\label{thm:bipartite-indep}
There is $c > 0$ so that the following holds. Suppose $m,n\ge 2$ are such that $m = O(n\sqrt{\log n})$ and $n = O(m\sqrt{\log m})$. Suppose that $(\log n)^{-1/4}\le p\le 1-(\log n)^{-1/4}$. Then there is an event $B_p^{m,n}\subseteq I_{m,n}$ such that $\mathbb{P}_{\mathcal{D}_p^{m,n}}[B_p^{m,n}] = O(\exp(-n^c))$ and uniformly for $(\mathbf{s},\mathbf{t})\in I_{m,n}\setminus B_p^{m,n}$ we have
\[\mathbb{P}_{\mathcal{D}_p^{m,n}}[\mathbf{S} = \mathbf{s}\wedge\mathbf{T} = \mathbf{t}] = (1+O(n^{-1/3}))\mathbb{P}_{\mathcal{I}_p^{m,n}}[\mathbf{S} = \mathbf{s}\wedge\mathbf{T} = \mathbf{t}].\]
\end{theorem}
\subsection{Computing a local limit result}\label{sub:local-limit}
\subsubsection{Transferring to an independent model}\label{sub:transfer}
Now consider sampling $\mathbb{G}(2n+\Delta,p)$ and revealing the degrees within each part $R_0$ and $B_0$ as well as from vertices in $R_0$ to $B_0$ and vice versa. We swap vertices purely based on this degree information. Since the sizes of the swapped parts are measurable with respect to this information, whose distribution comes from three independent Erd\H{o}s--R\'enyi-type graph models, we see by \cref{thm:graph-indep,thm:bipartite-indep} that up to a multiplicative factor of $1+O(n^{-c})$ and an additive error of $n^{-\omega(1)}$ it is enough to compute the relevant probabilities if the models on the parts are $\mathcal{I}_p^{n+\Delta}$, $\mathcal{I}_p^n$, and $\mathcal{I}_p^{n+\Delta,n}$ instead. We let $\mathbf{d}$ be the degree sequence of length $n+\Delta$, $\mathbf{d}'$ be the one of length $n$, and $\mathbf{s},\mathbf{t}$ be of lengths $m=n+\Delta$ and $n$.
At this point it is useful to define $R_0 = \{n+1,\ldots,2n+\Delta\}$ and $B_0 = [n]$ as usual and define the swapped sets $R_1,B_1$ purely as functions of a triple of degree sequences $(\mathbf{d},\mathbf{d}',(\mathbf{s},\mathbf{t}))$ from $I_{n+\Delta}$, $I_n$, and $I_{n+\Delta,n}$. (We define it in the obvious way so as to apply even if the total sum in $I_n$ is not even, or the sums across both sides in $I_{n+\Delta,n}$ are not equal.)
With this in mind, the transference described above can be written quantitatively as
\begin{align}
\mathbb{P}_{\substack{\mathcal{D}_p^{n+\Delta},\mathcal{D}_p^n\\\mathcal{D}_p^{n+\Delta,n}}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y] = (1+O(n^{-c}))\mathbb{P}_{\substack{\mathcal{I}_p^{n+\Delta},\mathcal{I}_p^n\\\mathcal{I}_p^{n+\Delta,n}}}[&|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y]\notag\\
&+ O(n^{-\omega(1)}).\label{eq:D-I}
\end{align}
Furthermore,
\begin{align}
&\mathbb{P}_{\substack{\mathcal{I}_p^{n+\Delta},\mathcal{I}_p^n\\\mathcal{I}_p^{n+\Delta,n}}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y]\notag\\
&= \frac{1}{\int_{q_0,q_1,q_2\in[0,1]}d\mu(q_0,q_1,q_2)}\int_{q_0,q_1,q_2\in[0,1]}\mathbb{P}_{\substack{\mathcal{E}_{q_0}^{n+\Delta},\mathcal{E}_{q_1}^n\\\mathcal{E}_{q_2}^{n+\Delta,n}}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y]d\mu(q_0,q_1,q_2)\notag\\
&= \int_{q_0,q_1,q_2\in p\pm20\sqrt{p(1-p)\log n}/n}\mathbb{P}_{\substack{\mathcal{E}_{q_0}^{n+\Delta},\mathcal{E}_{q_1}^n\\\mathcal{E}_{q_2}^{n+\Delta,n}}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y]d\mu(q_0,q_1,q_2) + O(n^{-10}),\label{eq:I-E}
\end{align}
where $\mu$ denotes the measure of three independent Gaussians centered at $p$ with variances $p(1-p)/((n+\Delta)^2-(n+\Delta))$, $p(1-p)/(n^2-n)$, and $p(1-p)/(2n(n+\Delta))$. The last line follows since such Gaussians lie in $(0,1)$ with exponentially good probability, and in fact are of size $p\pm20\sqrt{p(1-p)\log n}/n$ with probability at least $1-n^{-10}$.
At this point, we have nearly reached a model with independent Bernoulli sequences. However, we must condition on having even sum or having equal sums across the two sides. To deal with this, we iteratively apply Bayes's rule to reduce to understanding genuinely independent random variables. This technique is closely related to that in the proof of \cite[Theorem~8]{MS16}. We have
\begin{align}
\mathbb{P}_{\substack{\mathcal{E}_{q_0}^{n+\Delta},\mathcal{E}_{q_1}^n\\\mathcal{E}_{q_2}^{n+\Delta,n}}}&[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y]\notag\\
&=\frac{\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y\wedge |\mathbf{D}|/2,|\mathbf{D}'|/2\in\mathbb{Z}\wedge|\mathbf{S}| = |\mathbf{T}|]}{\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|\mathbf{D}|/2,|\mathbf{D}'|/2\in\mathbb{Z}\wedge|\mathbf{S}| = |\mathbf{T}|]}.\label{eq:E-B-even}
\end{align}
At this point, every event being considered is essentially coming from a sum of independent binomials or counting inequalities between independent binomials, so one should expect that these probabilities can be computed precisely. We can in fact do this, although we choose to iteratively simplify the expression by removing portions that ``act independent''.
\subsubsection{Removing evenness}\label{sub:evenness}
First, reveal $\mathcal{B}_{q_2}^{n+\Delta,n}$, that is, $\mathbf{S}$ and $\mathbf{T}$. Further reveal $R_0\cap R_1$ and $B_0\cap B_1$. Clearly the remaining randomness is as follows: for $v\in R_0\cap R_1$, we sample $d_v\sim\operatorname{Bin}(n+\Delta-1,q_0)|_{\ge s_v}$, and similarly for the other three parts. Note that with probability at least $1-2\exp(-\Omega(n))$ there are at least $n/4$ vertices $v\in R_0$ with $s_v\in pn\pm 100\sqrt{p(1-p)n}$ and at least $n/4$ vertices $v\in B_0$ with $t_v\in pn\pm 100\sqrt{p(1-p)n}$. For such a vertex, regardless of whether it was revealed to be in $R_0\cap R_1$ or $R_0\setminus R_1$ (and similarly for blue vertices), we see that the conditional distribution of its degree is some conditioned binomial that is easily checked to be equidistributed $(\mathrm{mod}~2)$ up to, say, an error of $O(n^{-1/4})$. If we reveal the degrees of every other vertex, then add up $n/4$ of these random variables, we obtain equidistribution $(\mathrm{mod}~2)$ where both values are attained with probability $1/2+O(\exp(-n))$. Therefore the numerator and denominator satisfy
\begin{align}
&\frac{\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y\wedge |\mathbf{D}|/2,|\mathbf{D}'|/2\in\mathbb{Z}\wedge|\mathbf{S}| = |\mathbf{T}|]}{\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|\mathbf{D}|/2,|\mathbf{D}'|/2\in\mathbb{Z}\wedge|\mathbf{S}| = |\mathbf{T}|]}\notag\\
&= \frac{(\frac{1}{4}+O(\exp(-n)))\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y\wedge|\mathbf{S}| = |\mathbf{T}|] + O(\exp(-\Omega(n)))}{(\frac{1}{4}+O(\exp(-n)))\mathbb{P}_{\mathcal{B}_{q_2}^{n+\Delta,n}}[|\mathbf{S}| = |\mathbf{T}|] + O(\exp(-\Omega(n)))}\notag\\
&= \frac{\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y\wedge|\mathbf{S}| = |\mathbf{T}|]}{\mathbb{P}_{\mathcal{B}_{q_2}^{n+\Delta,n}}[|\mathbf{S}| = |\mathbf{T}|]} + O(\exp(-\Omega(n))).\label{eq:B-even-B}
\end{align}
In the last line, we used that the final denominator probability is large. This can be seen since it is the chance that two samples of $\operatorname{Bin}(n(n+\Delta),q_2)$ equal each other. Being the same distribution supported on $[0,n(n+\Delta)]$, we see this occurs with probability at least $1/(n(n+\Delta)+1)$ by Cauchy-Schwarz.
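Explicitly, if $X$ and $Y$ are independent samples from the same distribution supported on a set of $N = n(n+\Delta)+1$ values, with point probabilities $p_1,\ldots,p_N$, then by the Cauchy--Schwarz inequality
\[\mathbb{P}[X=Y] = \sum_{i=1}^N p_i^2 \ge \frac{1}{N}\bigg(\sum_{i=1}^N p_i\bigg)^2 = \frac{1}{N},\]
which is the bound used in the last step above.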
\subsubsection{Computing the numerator}\label{sub:numerator}
In fact, this denominator can be computed precisely using a local limit theorem for binomial random variables. We therefore focus attention on computing the numerator. We have
\begin{align}
&\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y\wedge|\mathbf{S}| = |\mathbf{T}|]\notag\\
&= \sum_{|A|=x,|B|=y}\mathbb{P}_{\mathcal{B}}[R_0\cap R_1 = A\wedge B_0\cap B_1 = B]\mathbb{P}_{\mathcal{B}}[|\mathbf{S}| = |\mathbf{T}||R_0\cap R_1 = A\wedge B_0\cap B_1 = B].\label{eq:B-B-cond}
\end{align}
We can exactly compute the distribution of $|R_0\cap R_1|$ and $|B_0\cap B_1|$, which are independent. We make the following definitions for convenience going forward:
\begin{itemize}
\item $q_i = p + \alpha_i/n$ for $0\le i\le 2$, where $|\alpha_i|\le 20\sqrt{p(1-p)\log n}$;
\item $X_k\sim\operatorname{Bin}(n+\Delta-1,q_0)$ and $Y_k\sim\operatorname{Bin}(n,q_2)$ for $k\in[n+\Delta]$;
\item $Y_k^-$ is the distribution of $Y_k$ conditional on $Y_k\le X_k$ and $Y_k^+$ is conditional on $Y_k > X_k$;
\item $Z_k\sim\operatorname{Bin}(n+\Delta,q_2)$ and $W_k\sim\operatorname{Bin}(n-1,q_1)$ for $k\in[n]$;
\item $Z_k^-$ is $Z_k$ conditioned on $Z_k\le W_k$ and $Z_k^+$ is conditioned on $Z_k > W_k$;
\item $r = \mathbb{P}[X_1\ge Y_1]$ and $b = \mathbb{P}[W_1\ge Z_1]$.
\end{itemize}
We have
\begin{align*}
|R_0\cap R_1| &= \sum_{k=1}^{n+\Delta}\mathbbm{1}[X_k\ge Y_k]\sim\operatorname{Bin}(n+\Delta,r),\\
|B_0\cap B_1| &= \sum_{k=1}^n\mathbbm{1}[W_k\ge Z_k]\sim\operatorname{Bin}(n,b).
\end{align*}
Additionally, we can compute the distributions of $|\mathbf{S}|$ and $|\mathbf{T}|$ conditional on $A = R_0\cap R_1$ and $B = B_0\cap B_1$, which are independent. It actually only depends on the sizes. If we condition on $|A| = x$ and $|B| = y$, we have
\begin{align*}
|\mathbf{S}|&\sim\sum_{k=1}^x Y_k^- + \sum_{k=x+1}^{n+\Delta}Y_k^+,\\
|\mathbf{T}|&\sim\sum_{k=1}^y Z_k^- + \sum_{k=y+1}^n Z_k^+.
\end{align*}
At this point, computing the probability that $|\mathbf{S}| - |\mathbf{T}| = 0$ amounts to proving a local central limit theorem for all possible mixed sums and differences of these independent random variables. We have already done this in \cref{prop:chopped-lclt}. Explicitly, this means that for $|A| = x$ and $|B| = y$ that
\begin{align*}
\mathbb{P}_{\mathcal{B}}[|\mathbf{S}| = |\mathbf{T}||R_0\cap R_1 = A\wedge B_0\cap B_1 = B] &= \frac{1}{\sqrt{2\pi}\sigma_{x,y}}\exp\bigg(-\frac{\mu_{x,y}^2}{2\sigma_{x,y}^2}\bigg) + O(n^{-1/5}\sigma_{x,y}^{-1})
\end{align*}
where $\mu_{x,y}$, $\sigma_{x,y}^2$ are the mean and variance of $|\mathbf{S}|-|\mathbf{T}|$ conditional on $|A| = x$ and $|B| = y$.
By \cref{lem:chop-statistics}, we have
\[\sigma_{x,y}^2 = \bigg(2-\frac{2}{\pi}\bigg)p(1-p)n^2 + O(n^{7/4}).\]
Therefore, let $\sigma = \sqrt{(2-2/\pi)p(1-p)}n$ and note that for $|A| = x$ and $|B| = y$ we have
\begin{align*}
\mathbb{P}_{\mathcal{B}}[|\mathbf{S}| = |\mathbf{T}||R_0\cap R_1 = A\wedge B_0\cap B_1 = B] &= \frac{1}{\sqrt{2\pi}\sigma}\exp\bigg(-\frac{\mu_{x,y}^2}{2\sigma^2}\bigg) + O(n^{-1/5}\sigma^{-1}).
\end{align*}
It remains to understand $\mu_{x,y}$. We have
\begin{align*}
\mu_{x,y} = x\mathbb{E}Y_k^- + (n+\Delta-x)\mathbb{E}Y_k^+ - y\mathbb{E}Z_k^- - (n-y)\mathbb{E}Z_k^+.
\end{align*}
\begin{claim}\label{clm:mu-x-y}
If $|x-n/2|,|y-n/2|\le\sqrt{n}\log n$ we have
\[\mu_{x,y} = \bigg(\frac{\alpha_1-\alpha_0-2p\Delta}{\pi}\bigg)n + 2\sqrt{\frac{p(1-p)n}{\pi}}(x-y) + O(n^{4/5}).\]
\end{claim}
\begin{proof}
From \cref{lem:chop-statistics} we have
\begin{align*}
\mathbb{E}Y_k^+ &= pn + \sqrt{\frac{p(1-p)n}{\pi}} + O(n^{1/4}) = \mathbb{E}Z_k^+,\\
\mathbb{E}Y_k^- &= pn - \sqrt{\frac{p(1-p)n}{\pi}} + O(n^{1/4}) = \mathbb{E}Z_k^-.
\end{align*}
The error terms are not good enough to do a direct replacement. However, from \cref{lem:chop-probability} we have
\begin{align*}
r &= \mathbb{P}[X_1\ge Y_1] = \frac{1}{2} + \frac{p(\Delta-1) + 1/2 + \alpha_0 - \alpha_2}{2\sqrt{\pi p(1-p)n}} + O(n^{-3/4}),\\
b &= \mathbb{P}[W_1\ge Z_1] = \frac{1}{2} + \frac{p(-\Delta-1) + 1/2 + \alpha_1 - \alpha_2}{2\sqrt{\pi p(1-p)n}} + O(n^{-3/4})
\end{align*}
and additionally, by definition,
\begin{align*}
r\mathbb{E}Y_k^- + (1-r)\mathbb{E}Y_k^+ &= \mathbb{E}Y_k = pn + \alpha_2,\\
b\mathbb{E}Z_k^- + (1-b)\mathbb{E}Z_k^+ &= \mathbb{E}Z_k = pn + \alpha_2 + p\Delta + \alpha_2\Delta/n.
\end{align*}
Therefore
\begin{align*}
&\mu_{x,y} - (n+\Delta)(pn+\alpha_2) + n(pn+\alpha_2+p\Delta+\alpha_2\Delta/n)\\
&= (x-r(n+\Delta))(\mathbb{E}Y_k^--\mathbb{E}Y_k^+) - (y-bn)(\mathbb{E}Z_k^--\mathbb{E}Z_k^+)\\
&= (x-r(n+\Delta))\cdot2\sqrt{\frac{p(1-p)n}{\pi}} - (y-bn)\cdot2\sqrt{\frac{p(1-p)n}{\pi}} + O(n^{4/5})\\
&= 2\sqrt{\frac{p(1-p)n}{\pi}}((x-rn) - (y-bn)) + O(n^{4/5}).
\end{align*}
We deduce
\begin{align*}
&\mu_{x,y} - 2\sqrt{\frac{p(1-p)n}{\pi}}(x-y) + O(n^{4/5})\\
&= -2\sqrt{\frac{p(1-p)n}{\pi}}n\bigg(\frac{p(\Delta-1) + 1/2 + \alpha_0 - \alpha_2}{2\sqrt{\pi p(1-p)n}} - \frac{p(-\Delta-1) + 1/2 + \alpha_1 - \alpha_2}{2\sqrt{\pi p(1-p)n}}\bigg)\\
&= \bigg(\frac{-2p\Delta + \alpha_1-\alpha_0}{\pi}\bigg)n.\qedhere
\end{align*}
\end{proof}
We now make the following definitions.
\begin{itemize}
\item Recall that $\sigma = \sqrt{(2-2/\pi)p(1-p)}n$.
\item We have
\begin{align*}
r^\ast &= r^\ast(\alpha_0,\alpha_1,\alpha_2) = \frac{1}{2} + \frac{p(\Delta-1) + 1/2 + \alpha_0 - \alpha_2}{2\sqrt{\pi p(1-p)n}},\\
b^\ast &= b^\ast(\alpha_0,\alpha_1,\alpha_2) = \frac{1}{2} + \frac{p(-\Delta-1) + 1/2 + \alpha_1 - \alpha_2}{2\sqrt{\pi p(1-p)n}},
\end{align*}
which are within $O(n^{-3/4})$ of $r,b$ by \cref{lem:chop-probability}.
\item We let
\[\mu^\ast(\alpha_0,\alpha_1,\alpha_2,x,y) = \bigg(\frac{\alpha_1-\alpha_0-2p\Delta}{\pi}\bigg)n + 2\sqrt{\frac{p(1-p)n}{\pi}}(x-y),\]
which satisfies $\mu_{x,y} = \mu^\ast(\alpha_0,\alpha_1,\alpha_2,x,y) + O(n^{4/5})$ for $|x-n/2|,|y-n/2|\le\sqrt{n}\log n$ by \cref{clm:mu-x-y}.
\end{itemize}
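It is worth noting that, when $\alpha_0=\alpha_1=\alpha_2=0$, the quantities $r^\ast n$ and $b^\ast n$ are exactly the centering terms appearing in the definition of $x'$ and $y'$ in \cref{thm:day-one}; thus $x'$ and $y'$ measure the fluctuations of $|R_0\cap R_1|$ and $|B_0\cap B_1|$ around these centerings on the scale $\sqrt{n/(4\pi)}$.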
Now, continuing \eqref{eq:B-B-cond}, we find for $|x-n/2|,|y-n/2|\le\sqrt{n}\log n$ that
\begin{align}
&\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y\wedge|\mathbf{S}| = |\mathbf{T}|]\notag\\
&= \mathbb{P}[|R_0\cap R_1| = x]\mathbb{P}[|B_0\cap B_1| = y]\bigg(\frac{1}{\sqrt{2\pi}\sigma}\exp\bigg(-\frac{\mu_{x,y}^2}{2\sigma^2}\bigg) + O(n^{-1/5}\sigma^{-1})\bigg)\notag\\
&= \phi_{rn,r(1-r)n}(x)\phi_{bn,b(1-b)n}(y)\phi_{0,\sigma^2}(\mu_{x,y}) + O(n^{-2-1/5})\notag\\
&= \phi_{r^\ast n,n/4}(x)\phi_{b^\ast n,n/4}(y)\phi_{0,\sigma^2}(\mu^\ast)+ O(n^{-2-1/5})\label{eq:B-Gaussian}
\end{align}
where $\phi_{a,b}$ denotes the pdf of the Gaussian with mean $a$ and variance $b$. In the second line we used that the conditional probability in \eqref{eq:B-B-cond} given $R_0\cap R_1$ and $B_0\cap B_1$ depends only on their sizes. The third line used a local limit theorem for binomials and appropriately expanding out error terms. The fourth line is just manipulation of established error terms in ways that we have seen already. Note that this equality is actually true if either $x$ or $y$ deviates by at least $\sqrt{n}\log n$ from $n/2$ as then the probability $|R_0\cap R_1| = x$ and $|B_0\cap B_1| = y$ is super-polynomially small. Therefore, this equation is true in general.
It is also worth mentioning by similar logic that if either $|x-n/2|\ge C\sqrt{n\log n}$ or $|y-n/2|\ge C\sqrt{n\log n}$ then
\begin{equation}\label{eq:B-sub-Gaussian}
\mathbb{P}_{\mathcal{B}_{q_0}^{n+\Delta},\mathcal{B}_{q_1}^n,\mathcal{B}_{q_2}^{n+\Delta,n}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y\wedge|\mathbf{S}| = |\mathbf{T}|]\le\mathbb{P}_{\mathcal{B}}[|R_0\cap R_1| = x] = O(n^{-10}).
\end{equation}
\subsection{Putting it together}\label{sub:integration}
Finally, note that the denominator of \cref{eq:B-even-B} is the probability that the difference of two independent samples of $\operatorname{Bin}(n(n+\Delta),q_2)$ equals $0$. This difference satisfies a local limit theorem (e.g.~by \cite{Can80}) and has mean $0$ and variance $2q_2(1-q_2)n(n+\Delta) = 2p(1-p)n^2(1+O(n^{-1/2}))$, so
\begin{equation}\label{eq:B-denominator}
\mathbb{P}_{\mathcal{B}_{q_2}^{n+\Delta,n}}[|\mathbf{S}| = |\mathbf{T}|] = \frac{1}{2\sqrt{\pi p(1-p)}n} + O(n^{-5/4}).
\end{equation}
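To spell out the arithmetic: the associated Gaussian density at $0$ is
\[\frac{1}{\sqrt{2\pi\cdot 2p(1-p)n^2(1+O(n^{-1/2}))}} = \frac{1}{2\sqrt{\pi p(1-p)}\,n} + O(n^{-5/4}),\]
and the cited local limit theorem converts this into the point probability stated in \cref{eq:B-denominator}.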
Putting together \cref{eq:D-I}, \cref{eq:I-E}, \cref{eq:E-B-even}, \cref{eq:B-even-B}, and \cref{eq:B-Gaussian} along with \cref{eq:B-denominator}, we obtain for some absolute $c > 0$ that
\begin{align*}
&\mathbb{P}_{\substack{\mathcal{D}_p^{n+\Delta},\mathcal{D}_p^n\\\mathcal{D}_p^{n+\Delta,n}}}[|R_0\cap R_1| = x\wedge|B_0\cap B_1| = y]\\
&= \int_{|\alpha_0|,|\alpha_1|,|\alpha_2|\le 20\sqrt{p(1-p)\log n}}\frac{\phi_{r^\ast n,n/4}(x)\phi_{b^\ast n,n/4}(y)\phi_{0,\sigma^2}(\mu^\ast)}{1/(2\sqrt{\pi p(1-p)}n)}d\nu(\alpha_0,\alpha_1,\alpha_2)+ O(n^{-1-c})\notag\\
&= \int_{\alpha_0,\alpha_1,\alpha_2\in\mathbb{R}}\frac{\phi_{r^\ast n,n/4}(x)\phi_{b^\ast n,n/4}(y)\phi_{0,\sigma^2}(\mu^\ast)}{1/(2\sqrt{\pi p(1-p)}n)}d\nu(\alpha_0,\alpha_1,\alpha_2)+ O(n^{-1-c}).\notag
\end{align*}
where $\nu$ denotes the product measure of three independent Gaussians centered at $0$ with variances $p(1-p), p(1-p), p(1-p)/2$, respectively. Note that the difference between sampling the $\alpha$ values from $\nu$ or the $q$ values from $\mu$ is negligible.
Equivalently, we can sample $\beta_i = \alpha_i/\sqrt{p(1-p)}$ from Gaussians with variances $1,1,1/2$ for $i=0,1,2$, respectively. We have
\begin{align*}
&\int_{\alpha_0,\alpha_1,\alpha_2\in\mathbb{R}}\frac{\phi_{r^\ast n,n/4}(x)\phi_{b^\ast n,n/4}(y)\phi_{0,\sigma^2}(\mu^\ast)}{1/(2\sqrt{\pi p(1-p)}n)}d\nu(\alpha_0,\alpha_1,\alpha_2)\\
&= \frac{2\sqrt{\pi p(1-p)}n}{(\sqrt{2\pi})^5(\sqrt{\pi})(n/4)\sqrt{(2-2/\pi)p(1-p)}n}\int_\beta e^{-\frac{2(x-r^\ast n)^2+2(y-b^\ast n)^2}{n}-\frac{(\mu^\ast)^2}{(4-4/\pi)p(1-p)n^2}-\frac{\beta_0^2+\beta_1^2+2\beta_2^2}{2}}d\beta.
\end{align*}
When the values of $r^\ast,b^\ast,\mu^\ast$ are substituted in, this becomes a Gaussian integral in $\beta_0,\beta_1,\beta_2$.
Let
\begin{align*}
\sqrt{\frac{n}{4\pi}}x' &= x - \bigg(\frac{1}{2}+\frac{p(\Delta-1)+1/2}{2\sqrt{\pi p(1-p)n}}\bigg)n,\\
\sqrt{\frac{n}{4\pi}}y' &= y - \bigg(\frac{1}{2}+\frac{p(-\Delta-1)+1/2}{2\sqrt{\pi p(1-p)n}}\bigg)n.
\end{align*}
Then we deduce
\begin{align*}
&-\frac{2(x-r^\ast n)^2+2(y-b^\ast n)^2}{n}-\frac{(\mu^\ast)^2}{(4-4/\pi)p(1-p)n^2}-\frac{\beta_0^2+\beta_1^2+2\beta_2^2}{2}\\
&= -\frac{1}{2\pi p(1-p)}(x'\sqrt{p(1-p)}-\alpha_0+\alpha_2)^2 - \frac{1}{2\pi p(1-p)}(y'\sqrt{p(1-p)}-\alpha_1+\alpha_2)^2 \\
&\quad-\frac{1}{4\pi(\pi-1)p(1-p)}((x'-y')\sqrt{p(1-p)}+\alpha_1-\alpha_0)^2-\frac{\beta_0^2+\beta_1^2+2\beta_2^2}{2}\\
&= -\frac{1}{2\pi}(x'-\beta_0+\beta_2)^2-\frac{1}{2\pi}(y'-\beta_1+\beta_2)^2-\frac{1}{4\pi(\pi-1)}(x'-y'+\beta_1-\beta_0)^2-\frac{\beta_0^2+\beta_1^2+2\beta_2^2}{2}.
\end{align*}
Changing variables via $\alpha_i = \sqrt{p(1-p)}\beta_i$ therefore yields
\begin{align*}
&\int_{\alpha_0,\alpha_1,\alpha_2\in\mathbb{R}}\frac{\phi_{r^\ast n,n/4}(x)\phi_{b^\ast n,n/4}(y)\phi_{0,\sigma^2}(\mu^\ast)}{1/(2\sqrt{\pi p(1-p)}n)}d\nu(\alpha_0,\alpha_1,\alpha_2)\\
&= \frac{1}{\pi^2\sqrt{\pi-1}n}\int_\beta e^{-\frac{1}{2\pi}(x'-\beta_0+\beta_2)^2-\frac{1}{2\pi}(y'-\beta_1+\beta_2)^2-\frac{1}{4\pi(\pi-1)}(x'-y'+\beta_1-\beta_0)^2-\frac{\beta_0^2+\beta_1^2+2\beta_2^2}{2}}d\beta\\
&= \frac{2}{n\sqrt{\pi(2+\pi)}}\exp\bigg(-\frac{(1+\pi)(x')^2-2(x'y')+(1+\pi)(y')^2}{2\pi(2+\pi)}\bigg).
\end{align*}
Furthermore, if either $|x-n/2|\ge C\sqrt{n\log n}$ or $|y-n/2|\ge C\sqrt{n\log n}$ for appropriate $C > 0$ then we obtain a bound of size $O(n^{-5})$, which is easily seen using \cref{eq:B-sub-Gaussian} along with \cref{sub:numerator}, \cref{eq:D-I}, \cref{eq:I-E}, \cref{eq:E-B-even}, \cref{eq:B-even-B}, and \cref{eq:B-denominator}. This completes the proof of \cref{thm:day-one}.
\section{Tracking the remainder}\label{sec:days}
Now we adapt the approach of Ferber, Kwan, Narayanan, and the authors \cite{FKNSS21} to analyze the remainder of the majority dynamics process. Note that it is key that we computed what the leads were after day one at the scale of $\sqrt{n}$, since the techniques in that work only constrain objects at the scale $O(n^{1-\eta})$. However, essentially the same set of coarse data that is tracked in that work, along with the information from \cref{thm:day-one}, will allow us to perform an analysis of the remaining process via iterated revelation.
\subsection{Tracking degree parameters}\label{sub:tracking-data}
We first define the parameters that will be tracked, which are basically the joint degree distributions of each part of the graph to each of the other parts.
Given $k\ge 1$ and $x\in\{0,1\}^k$ of the form $(x_0,\ldots,x_{k-1})$, let $V_x = \cap_{i=0}^{k-1}X_i(x_i)$ where $X_i(0) = R_i$ and $X_i(1) = B_i$. Additionally, for $v\in B_0\cup R_0$ let
\[\operatorname{deg}^{(k)} v = ((\deg_{V_x} v-p|V_x|)/\sqrt{p(1-p)n})_{x\in\{0,1\}^k}.\]
Finally, for $x\in\{0,1\}^k$ let $\mathcal{L}_x$ be the distribution of $\operatorname{deg}^{(k)}v$ if we sample a uniform $v\in V_x$ (implicitly assuming it is nonempty).
It will be helpful to recall the following definition of Kolmogorov distance.
\begin{definition}\label{def:kolmogorov}
If $\mathcal{L}$ and $\mathcal{L}'$ are probability distributions on $\mathbb{R}^d$, the \emph{Kolmogorov distance} $\operatorname{d}_{\mathrm{K}}(\mathcal{L},\mathcal{L}')$ is the supremum of
$|\mathcal{L}(A)-\mathcal{L}'(A)|$ over all sets $A = (-\infty,a_1]\times \dots\times (-\infty,a_d]$, where $a_1, \dots, a_d \in \mathbb{R}$.
\end{definition}
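In particular, for $d=1$ this is the familiar uniform distance between cumulative distribution functions; in \cref{lem:coarse-day-one} below it is applied with $d=2$ to the joint normalized degree distributions $\mathcal{L}_x$.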
\subsection{Additional data from day one}\label{sub:day-one}
We now quickly derive certain coarse degree statistics arising from day one. These results are substantially less delicate than the previous section.
\begin{lemma}\label{lem:coarse-day-one}
There are $C, c > 0$ such that the following holds. Let $n\ge 2$, let $0\le\Delta\le(\log n)^{1/4}$ and let $(\log n)^{-1/4}\le p\le 1-(\log n)^{-1/4}$. For each $x\in\{0,1\}$ we have $\operatorname{d}_{\rm{K}}(\mathcal{L}_x,\mathcal{N}(0,I_2))\le n^{-c}$ with probability $1-O(n^{-5})$ under majority dynamics on $\mathbb{G}(2n+\Delta,p)$ with $|R_0| = n+\Delta$. Furthermore, $\mathcal{L}_x$ is supported on $[-C\sqrt{\log n},C\sqrt{\log n}]$ with probability $1-O(n^{-5})$. (Hence we can choose $\widehat{\mathcal{L}}_x$ to have the same support.)
\end{lemma}
\begin{remark}
A version of the above result when $p = 1/2$ and $\Delta = 0$ with a weaker probability bound appears in \cite[Section~4.1]{FKNSS21}, which is also sufficient for our purposes.
\end{remark}
\begin{proof}[Sketch]
It suffices to check it for $x = 0$, as the remaining case is analogous. The support claim is immediate by a union bound over all vertices. The Kolmogorov distance claim follows from the degree models in \cref{sec:one}. Specifically, consider the reduction from the true degree sequence model to the independent degree model, and then note that regardless of the revealed $q_0,q_1,q_2$ in $p\pm 20\sqrt{p(1-p)\log n}/n$, each vertex in $R_0$ has joint degree distribution extremely close to a correctly normalized Gaussian. Everything is now independent, so a Chernoff bound on the number of vertices with degrees in $pn+\sqrt{p(1-p)n}\cdot[x,x+n^{-c}]$, ranging over a polynomial-sized set of values $x$, proves the desired result. We must divide by the probability that the number of edges within each part is even and that the number of edges in the bipartite part agrees across both sides, as in \cref{sub:evenness,sub:numerator}, but these are polynomial probabilities which do not affect the bound significantly.
\end{proof}
\subsection{Data from day two}\label{sub:day-two}
We are now in a position to derive the necessary data for day two. We show that, given a substantial lead after day one, this lead grows to linear size on the following day with high probability. To do this we reveal certain information and condition on certain high-probability outcomes.
\begin{enumerate}[{\bfseries{E\arabic{enumi}}}]
\item\label{item:E1} Reveal all $\deg_{B_0}v$ and $\deg_{R_0}v$ values, which is enough to execute day one and determine $R_1,B_1$.
\item\label{item:E2} Furthermore, we assume that this revelation satisfies \cref{lem:coarse-day-one} and we let $|R_0\cap R_1| = x$ and $|B_0\cap B_1| = y$, defining
\begin{align*}
\sqrt{\frac{n}{4\pi}}x' &= x - \bigg(\frac{1}{2}+\frac{p(\Delta-1)+1/2}{2\sqrt{\pi p(1-p)n}}\bigg)n,\\
\sqrt{\frac{n}{4\pi}}y' &= y - \bigg(\frac{1}{2}+\frac{p(-\Delta-1)+1/2}{2\sqrt{\pi p(1-p)n}}\bigg)n.
\end{align*}
as in the statement of \cref{thm:day-one}.
\item\label{item:E3} We may assume that our revelation gave rise to values $x',y' = O(\sqrt{\log n})$ with probability at least $1-n^{-5}$ by \cref{thm:day-one}.
\item\label{item:E4} Finally, the number of edges between the two parts in the initial partition is $pn(n+\Delta)+O(n^{3/2-1/5})$ with super-polynomially high probability, so we may assume that our revelation gave rise to such a number of edges. Similarly within each part, we may assume we have $p\binom{n}{2} + O(n^{3/2-1/5})$ edges.
\end{enumerate}
In order to execute day two, we reveal $\deg_Tv$ for $T\in\{R_0\cap R_1,R_0\cap B_1, B_0\cap R_1, B_0\cap B_1\}$. Depending on the total degree from $v$ to $R_1$ and $B_1$, as well as whether $v\in B_1$ or $R_1$ in the case of ties, we know where $v$ lands in the next step.
\begin{claim}\label{clm:concentration}
There is an absolute $c > 0$ such that the following holds. Given revelations and assumptions \cref{item:E1,item:E2,item:E3,item:E4}, over the remaining randomness for each $x\in\{0,1\}^3$ the number of vertices in $V_x$ is concentrated at a scale $O(n^{1-c})$. In particular, $|V_x|$ lies in an interval of length $O(n^{1-c})$ around its mean with probability at least $1-n^{-c}$.
\end{claim}
This implies that $|R_2|$ and $|B_2|$ are concentrated.
To prove \cref{clm:concentration}, we need to understand the degree distribution better. First, let
\[\beta_i^R = \frac{\deg_{R_0}v_i-p(n+\Delta-1)}{\sqrt{p(1-p)(n+\Delta-1)}},\qquad\beta_i^B = \frac{\deg_{B_0}v_i-pn}{\sqrt{p(1-p)n}}\]
for $i\in R_0$ and
\[\beta_j^R = \frac{\deg_{R_0}v_j-p(n+\Delta)}{\sqrt{p(1-p)(n+\Delta)}},\qquad\beta_j^B = \frac{\deg_{B_0}v_j-p(n-1)}{\sqrt{p(1-p)(n-1)}}\]
for $j\in B_0$.
Given $v\in R_0\cup B_0$, look at
\[\deg^{(1)}v = (\alpha_0,\alpha_1),\qquad\deg^{(2)} v = (\rho_{00},\rho_{01},\rho_{10},\rho_{11})\]
where $\alpha_i = (\deg_{V_i}v-p|V_i|)/\sqrt{p(1-p)n}$ and $\rho_{ij} = (\deg_{V_{ij}}v-p|V_{ij}|)/\sqrt{p(1-p)n}$. Note that $\alpha_0,\alpha_1$ are determined given the revealed information, and that
\[\rho_{00}+\rho_{01} = \alpha_0,\qquad\rho_{10}+\rho_{11} = \alpha_1\]
hold. Furthermore $\rho_{00}$ and $\rho_{01}$ are independent given the revealed information, and their probability distributions can be determined by \cref{sub:enum-graph-model} and \cref{sub:enum-bigraph-model}, respectively.
By the first parts of \cref{prop:graph-bounded} and \cref{prop:bigraph-bounded}, we see that with super-polynomially high probability the $\rho_{ij}$ are bounded by $(\log n)^{25}/\sqrt{p(1-p)}$. Therefore we may assume that all vertices satisfy such a bound when revealing the new joint distribution of degrees. Furthermore, using the conditions on $x',y'$, in both cases we will be able to apply \cref{prop:graph-balanced} or \cref{prop:bigraph-balanced}, as long as we verify the necessary condition regarding $\sum\beta$ (in the notation of those propositions). Specifically, one needs for $v\in R_0$ that
\[\sum_{i\in R_0\setminus v}\beta_i^R = O(n^{5/6}) = \sum_{j\in B_0}\beta_j^R\]
while for $v\in B_0$ one needs
\[\sum_{i\in R_0}\beta_i^B = O(n^{5/6}) = \sum_{j\in B_0\setminus v}\beta_j^B.\]
This follows since we assumed the number of edges between the two parts in the initial partition is $pn(n+\Delta)+O(n^{3/2-1/5})$, and similar for within each of the two parts.
Now \cref{prop:graph-balanced} and \cref{prop:bigraph-balanced} show for $v\in R_0$ that
\begin{align}
\mathbb{P}[\rho_{00} = \gamma] &= \frac{\sqrt{2}+O(n^{-1/10})}{\sqrt{\pi p(1-p)n}}\exp\bigg(-\frac{1}{2}\bigg(2\gamma-\alpha_0-\frac{\sum_{i\in V_{00}}\beta_i^R}{n/2}\bigg)^2\bigg),\notag\\
\mathbb{P}[\rho_{10} = \gamma] &= \frac{\sqrt{2}+O(n^{-1/10})}{\sqrt{\pi p(1-p)n}}\exp\bigg(-\frac{1}{2}\bigg(2\gamma-\alpha_1-\frac{\sum_{j\in V_{10}}\beta_j^R}{n/2}\bigg)^2\bigg),\label{eq:day-two-gaussian-R}
\end{align}
absorbing negligible errors such as the difference of $\Delta$ between the number of vertices of $R_0$ and $B_0$.
Note that if we reveal the neighborhood of $v$, then any $w\in R_0$ will have essentially the same conditional distribution (the effect of revealing this neighborhood is to slightly adjust some degrees, which negligibly affects $\sum_{j\in V_{10}}\beta_j^R$, for instance).
Using this observation, a second-moment computation demonstrates that the number of vertices $v\in R_0\cap R_1$ with
\begin{equation}\label{eq:switch-condition-2}
\rho_{00} + \rho_{10} - \rho_{01} - \rho_{11}\ge\frac{p}{\sqrt{p(1-p)n}}(|V_{01}|+|V_{11}|-|V_{00}|-|V_{10}|) = \frac{p}{\sqrt{p(1-p)n}}(|B_1|-|R_1|)
\end{equation}
is concentrated (which corresponds to $v\in R_0\cap R_1$ being in $R_2$ after day two is revealed). We forgo the computational details (for similar arguments of this form, see \cite[Section~4.3.6]{FKNSS21}). The other cases are analogous. This completes the justification of \cref{clm:concentration}.
We quickly record that for $v\in B_0$, one obtains instead
\begin{align}
\mathbb{P}[\rho_{00} = \gamma] &= \frac{\sqrt{2}+O(n^{-1/10})}{\sqrt{\pi p(1-p)n}}\exp\bigg(-\frac{1}{2}\bigg(2\gamma-\alpha_0-\frac{\sum_{i\in V_{00}}\beta_i^B}{n/2}\bigg)^2\bigg),\notag\\
\mathbb{P}[\rho_{10} = \gamma] &= \frac{\sqrt{2}+O(n^{-1/10})}{\sqrt{\pi p(1-p)n}}\exp\bigg(-\frac{1}{2}\bigg(2\gamma-\alpha_1-\frac{\sum_{j\in V_{10}}\beta_j^B}{n/2}\bigg)^2\bigg).\label{eq:day-two-gaussian-B}
\end{align}
\begin{claim}\label{clm:expectation}
There is an absolute $c > 0$ such that the following holds. Given revelations and assumptions \cref{item:E1,item:E2,item:E3,item:E4}, over the remaining randomness we have
\[\mathbb{E}|R_0\cap R_2| = n\mathbb{P}_{Z\sim\mathcal{N}(0,2)}\bigg[Z\ge\frac{2}{\sqrt{\pi}} + \sqrt{\frac{p}{(1-p)n}}(|B_1|-|R_1|)\bigg] + O(n^{1-c})\]
and
\[\mathbb{E}|B_0\cap R_2| = n\mathbb{P}_{Z\sim\mathcal{N}(0,2)}\bigg[Z\ge -\frac{2}{\sqrt{\pi}} + \sqrt{\frac{p}{(1-p)n}}(|B_1|-|R_1|)\bigg] + O(n^{1-c}),\]
while also $\mathbb{E}|V_{x_1x_2x_3}| = \mathbb{E}|V_{x_1x_2'x_3}| + O(n^{1-c})$ for all $x_1,x_2,x_2',x_3\in\{0,1\}$.
\end{claim}
For this we note that if $v$ has parameters $(\alpha_0,\alpha_1)$ defined above then
\[\rho_{00}+\rho_{10}-\rho_{01}-\rho_{11} = (2\rho_{00}-\alpha_0) + (2\rho_{10}-\alpha_1),\]
which is the sum of two independent discrete Gaussians of standard deviation $1$ and discretization $2/\sqrt{p(1-p)n}$ by \cref{eq:day-two-gaussian-R,eq:day-two-gaussian-B}. A simple computation shows the sum of two such discrete Gaussians with the given error terms (and tail bounds) is a corresponding discrete Gaussian. In particular, we see for $v\in R_0$ that
\[\mathbb{P}[(2\rho_{00}-\alpha_0) + (2\rho_{10}-\alpha_1) = \tau] = \frac{1}{\sqrt{p(1-p)n}}\frac{1}{\sqrt{4\pi}}\exp\bigg(-\frac{1}{4}\bigg(\tau-\frac{\sum_{i\in V_{00}\cup V_{10}}\beta_i^R}{n/2}\bigg)^2\bigg) + O(n^{-1/2-1/12})\]
for $\tau$ on an appropriate integer lattice of discretization $2/\sqrt{p(1-p)n}$. A similar formula with $\beta_i^B$ holds for $v\in B_0$. For any $v\in R_0$ we see that
\[\mathbb{P}[\cref{eq:switch-condition-2}\text{ for }v] = \mathbb{P}_{Z\sim\mathcal{N}(0,2)}\bigg[Z\ge\frac{\sum_{i\in V_{00}\cup V_{10}}\beta_i^R}{n/2} + \sqrt{\frac{p}{(1-p)n}}(|B_1|-|R_1|)\bigg]+O(n^{-1/13})\]
hence
\[\mathbb{E}|R_0\cap R_2| = n\mathbb{P}_{Z\sim\mathcal{N}(0,2)}\bigg[Z\ge\frac{\sum_{i\in V_{00}\cup V_{10}}\beta_i^R}{n/2} + \sqrt{\frac{p}{(1-p)n}}(|B_1|-|R_1|)\bigg] + O(n^{12/13}).\]
Here the error term comes from the earlier term, as well as from the possibility of vertices that are exactly balanced (of which there are few by the given computations), which may go a different way depending on their day one (not day zero) affiliation.
Similarly, for any $v\in B_0$ we have
\[\mathbb{P}[\cref{eq:switch-condition-2}\text{ for }v] = \mathbb{P}_{Z\sim\mathcal{N}(0,2)}\bigg[Z\ge\frac{\sum_{i\in V_{00}\cup V_{10}}\beta_i^B}{n/2} + \sqrt{\frac{p}{(1-p)n}}(|B_1|-|R_1|)\bigg]\]
and thus
\[\mathbb{E}|B_0\cap R_2| = n\mathbb{P}_{Z\sim\mathcal{N}(0,2)}\bigg[Z\ge\frac{\sum_{i\in V_{00}\cup V_{10}}\beta_i^B}{n/2} + \sqrt{\frac{p}{(1-p)n}}(|B_1|-|R_1|)\bigg] + O(n^{12/13}).\]
Finally, it suffices to compute the average of these $\beta_i^R$ and $\beta_i^B$ quantities over $V_{00}\cup V_{10} = R_1$. From \cref{lem:coarse-day-one} we know that the empirical normalized joint degree distributions for $R_0$ and $B_0$ are close to $\mathcal{N}(0,I_2)$. Therefore the degree distribution of $R_1$ is close to that of $(Z_1,Z_2)\sim\mathcal{N}(0,I_2)$ conditional on $Z_1\ge Z_2$ (where $Z_1$ corresponds to the parameter $(\deg_{R_0}v-p(n+\Delta))/\sqrt{p(1-p)(n+\Delta)}$). Therefore the expected value of this ensemble is within $O(n^{-c})$ of
\[\mathbb{E}[Z_1|Z_1\ge Z_2] = \frac{1}{\sqrt{\pi}}.\]
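(For completeness, one way to verify this standard Gaussian identity: writing $Z_1=\tfrac{Z_1+Z_2}{2}+\tfrac{Z_1-Z_2}{2}$ and using that $Z_1+Z_2$ and $Z_1-Z_2$ are independent with $Z_1-Z_2\sim\mathcal{N}(0,2)$, we get $\mathbb{E}[Z_1\mathbf{1}_{\{Z_1\ge Z_2\}}] = \tfrac{1}{2}\mathbb{E}[(Z_1-Z_2)\mathbf{1}_{\{Z_1\ge Z_2\}}] = \tfrac{1}{4}\mathbb{E}|Z_1-Z_2| = \tfrac{1}{2\sqrt{\pi}}$, and dividing by $\mathbb{P}[Z_1\ge Z_2]=\tfrac{1}{2}$ gives $1/\sqrt{\pi}$.)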
This shows
\[\frac{\sum_{i\in V_{00}\cup V_{10}}\beta_i^R}{n} = \frac{1}{\sqrt{\pi}} + O(n^{-c}) = \frac{\sum_{i\in V_{00}\cup V_{10}}\beta_i^B}{n}.\]
Then the first part of \cref{clm:expectation} follows.
The second part of \cref{clm:expectation} follows from the fact that all of the expressions for probabilities above based on $v$ are independent of the values $(\alpha_0,\alpha_1)$, so we can sum over $v\in R_0\cap R_1$, for instance, by just summing over those $v\in R_0$ which have $\alpha_0 > \alpha_1$, of which there are $n/2 + O(\sqrt{n}\log n)$ by \cref{item:E2}.
Putting \cref{clm:concentration,clm:expectation} together and simplifying the sum of the expectations in \cref{clm:expectation}, we obtain the following information about the distribution of the sizes after day two.
\begin{theorem}\label{thm:day-two}
There is an absolute $c > 0$ such that the following holds. Given revelations and assumptions \cref{item:E1,item:E2,item:E3,item:E4}, over the remaining randomness we have
\[|R_2| = n+n\int_{-\eta}^\eta\frac{1}{\sqrt{2\pi}}\exp\bigg(-\frac{1}{2}\Big(u-\sqrt{\frac{2}{\pi}}\Big)^2\bigg)du + O(n^{1-c})\]
with probability at least $1-n^{-c}$, where $\eta = \sqrt{p/(2(1-p)n)}(|R_1|-|B_1|)$.
\end{theorem}
\begin{remark}
The integral is a signed integral. In particular, its sign is the same as that of $\eta$.
\end{remark}
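For illustration only (this plays no role in the proofs), the signed integral in \cref{thm:day-two} can be written in closed form as $\Phi(\eta-\sqrt{2/\pi})-\Phi(-\eta-\sqrt{2/\pi})$, where $\Phi$ is the standard normal CDF. The following short Python sketch evaluates the resulting leading-order prediction for $|R_2|$; the parameter values are arbitrary illustrative choices, not ones taken from this paper.
\begin{verbatim}
from math import erf, sqrt, pi

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def predicted_R2(n, p, lead_day_one):
    """Leading-order prediction for |R_2| given the day-one lead |R_1| - |B_1|."""
    eta = sqrt(p / (2.0 * (1.0 - p) * n)) * lead_day_one
    mu = sqrt(2.0 / pi)
    signed_integral = Phi(eta - mu) - Phi(-eta - mu)  # negative when eta < 0
    return n + n * signed_integral

if __name__ == "__main__":
    n, p = 10**6, 0.3
    for lead in (-1000, -100, 0, 100, 1000):
        print(lead, round(predicted_R2(n, p, lead)))
\end{verbatim}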
\subsection{Finishing on day three}\label{sub:day-three}
To finish we now use rather coarse consequences of degree enumeration to prove that every vertex is of the appropriate color. Note that \cref{thm:day-one} tells us the distribution of the lead $|R_1|-|B_1|$, and \cref{thm:day-two} tells us, in terms of the lead after day one, what the lead after day two is concentrated at. Furthermore, if the lead at day one is sufficiently positive then so will be the lead at day two with high probability. For example, for $p\le 1/2$ a lead of $\sqrt{n}$ yields a lead of $\Omega(n\sqrt{p})$.
Using arguments in \cite{BCOTT16,TV20,BD20} one can immediately prove that the side leading after day two has colored all the vertices within two further days. This, along with \cref{thm:day-one,thm:day-two}, immediately justifies \cref{thm:main}, except that it only guarantees that the process ends by day four. The arguments in \cite{BCOTT16,TV20,BD20} do not appear to be sufficiently refined to deliver the day three result.
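As a purely illustrative sanity check (not used anywhere in the argument), one can also simulate the process directly. The following Python sketch runs synchronous majority dynamics on a sample of $\mathbb{G}(n,p)$; the tie-breaking rule (a tied vertex keeps its current opinion) and all parameter values are assumptions of the sketch rather than choices taken from this paper.
\begin{verbatim}
import random

def simulate(n, p, delta, max_days=10, rng=random):
    # Initial opinions: m = (n + delta)//2 vertices hold "B" (+1), the rest "R" (-1).
    m = (n + delta) // 2
    opinion = [1] * m + [-1] * (n - m)
    # Sample G(n, p) as adjacency lists.
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    for day in range(1, max_days + 1):
        new_opinion = list(opinion)
        for u in range(n):
            s = sum(opinion[v] for v in adj[u])
            if s > 0:
                new_opinion[u] = 1
            elif s < 0:
                new_opinion[u] = -1
            # s == 0: keep the current opinion (assumed tie-breaking rule)
        opinion = new_opinion
        if all(o == opinion[0] for o in opinion):
            return day, ("B" if opinion[0] == 1 else "R")
    return None, None

if __name__ == "__main__":
    random.seed(0)
    day, winner = simulate(n=2000, p=0.3, delta=2)
    print("consensus reached on day", day, "winner:", winner)
\end{verbatim}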
\begin{proof}[Proof of \cref{thm:main}]
Make revelations as in \cref{item:E1,item:E2,item:E3,item:E4}. Then reveal the information $\deg^{(2)}v$ for all $v\in R_0\cup B_0$, which allows us to determine the parts up to the end of day two. We assume the revealed information is such that the conclusions of \cref{clm:concentration,clm:expectation,thm:day-two} hold. Let
\[\eta = \sqrt{\frac{p}{2(1-p)n}}(|R_1|-|B_1|)\]
and note
\[|R_2|-|B_2| = 2n\int_{-\eta}^\eta\frac{1}{\sqrt{2\pi}}\exp\bigg(-\frac{1}{2}\Big(u-\sqrt{\frac{2}{\pi}}\Big)^2\bigg)du + O(n^{1-c})\]
from \cref{thm:day-two}, for some small absolute constant $c\in(0,1/4)$.
We wish to show that over all of the randomness (including the revealed randomness), if $\eta > 0$ then red will win in $3$ days while if $\eta < 0$ then blue will win in $3$ days.
First, if $\eta\ge 10+\sqrt{2\log(1/p)}$ then $|B_2|\le pn/5$. We see that red wins after day three with extremely high probability since the initial graph has minimum degree at least $pn/2$ with probability at least $1-O(\exp(-\Omega(pn)))$, and this forces every vertex to have more neighbors on the red side than the blue side after day two is finished.
Similarly, if $-\eta\ge 10+\sqrt{2\log(1/p)}$ then blue wins after day three with extremely high probability.
The case $|\eta|\le n^{-c/2}$ occurs with probability $O(n^{-c/4})$ by the local limit theorem of \cref{thm:day-one}, so we ignore it.
Finally, without loss of generality we consider the case $n^{-c/2} < \eta < 10 + \sqrt{2\log(1/p)}$ (the opposite case being analogous except with red and blue switched).
Now, to determine what happens on day three, we reveal $\deg^{(3)}v$ for all $v\in R_0\cup B_0$. This comes from another ensemble of degree-constrained distributions, so we apply the results of \cref{app:mckay-wormald} again, between all pairs of the eight parts $V_x$ for $x\in\{0,1\}^3$. First, by \cref{clm:concentration,clm:expectation} we have
\[|V_x|\ge n\mathbb{P}_{Z\sim\mathcal{N}(0,2)}[Z\ge\frac{2}{\sqrt{\pi}}+\eta\sqrt{2}]+O(n^{1-c})\ge p^2n\ge n/(\log n)^{1/8}\]
for all $x\in\{0,1\}^3$. Thus we are in a position to apply \cref{prop:graph-bounded,prop:bigraph-bounded} (as this guarantees the condition on the parameter $h$).
We need to check that the values $\beta$ are indeed of size $O((\log n)^2)$. This follows from the results of \cref{sub:day-two}; recall that we showed the $\rho_{ij}$ corresponding to each $v$ was bounded by $(\log n)^{25}/\sqrt{p(1-p)}$ with super-polynomially high probability, and otherwise there was an exact formula which guarantees the appropriate boundedness with super-polynomially high probability.
Now \cref{prop:graph-bounded,prop:bigraph-bounded} show that with super-polynomially high probability, for all $v\in R_0\cup B_0$ and $x\in\{0,1\}^3$ we have
\[\deg_{V_x}v = p|V_x| + O(n^{1/2}(\log n)^{25}).\]
Therefore
\[\deg_{R_2}v - \deg_{B_2}v = p(|R_2|-|B_2|) + O(n^{1/2}(\log n)^{25})\ge n^{1-c/3}\]
for all $v\in R_0\cup B_0$ with high probability. Thus every vertex will be on the red side after day three, as desired.
We have shown that with high probability (namely, as long as \cref{item:E1,item:E2,item:E3,item:E4} hold and $|\eta| > n^{-c/2}$, and over the randomness of certain degree revelations over three days), some color has the lead after the first day and it wins in three days. This probability is in fact polynomially good.
Finally, \cref{thm:day-one} tells us the probability that $\eta > 0$ to a high degree of accuracy, and simple computation with normal distributions shows it is
\[\mathbb{P}_{Z\sim\mathcal{N}(0,1)}\bigg[Z\le\frac{p\Delta\sqrt{2}}{\sqrt{\pi p(1-p)}}\bigg] + O(n^{-c})\]
if $c > 0$ is a small enough absolute constant. We are done.
\end{proof}
\bibliographystyle{amsplain0.bst}
| {
"timestamp": "2021-05-28T02:26:52",
"yymm": "2105",
"arxiv_id": "2105.13301",
"language": "en",
"url": "https://arxiv.org/abs/2105.13301",
"abstract": "Consider $n=\\ell+m$ individuals, where $\\ell\\le m$, with $\\ell$ individuals holding an opinion $A$ and $m$ holding an opinion $B$. Suppose that the individuals communicate via an undirected network $G$, and in each time step, each individual updates her opinion according to a majority rule (that is, according to the opinion of the majority of the individuals she can communicate with in the network). This simple and well studied process is known as \"majority dynamics in social networks\". Here we consider the case where $G$ is a random network, sampled from the binomial model $\\mathbb{G}(n,p)$, where $(\\log n)^{-1/16}\\le p\\le 1-(\\log n)^{-1/16}$. We show that for $n=\\ell+m$ with $\\Delta=m-\\ell\\le(\\log n)^{1/4}$, the above process terminates whp after three steps when a consensus is reached. Furthermore, we calculate the (asymptotically) correct probability for opinion $B$ to \"win\" and show it is \\[\\Phi\\bigg(\\frac{p\\Delta\\sqrt{2}}{\\sqrt{\\pi p(1-p)}}\\bigg) + O(n^{-c}),\\] where $\\Phi$ is the Gaussian CDF. This answers two conjectures of Tran and Vu and also a question raised by Berkowitz and Devlin.The proof technique involves iterated degree revelation and analysis of the resulting degree-constrained random graph models via graph enumeration techniques of McKay and Wormald as well as Canfield, Greenhill, and McKay.",
"subjects": "Combinatorics (math.CO)",
"title": "Majority Dynamics: The Power of One",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987758723627135,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7097978681059076
} |
https://arxiv.org/abs/2211.14252 | The extremals of Stanley's inequalities for partially ordered sets | Stanley's inequalities for partially ordered sets establish important log-concavity relations for sequences of linear extensions counts. Their extremals however, i.e., the equality cases of these inequalities, were until now poorly understood with even conjectures lacking. In this work, we solve this problem by providing a complete characterization of the extremals of Stanley's inequalities. Our proof is based on building a new ``dictionary" between the combinatorics of partially ordered sets and the geometry of convex polytopes, which captures their extremal structures. | \section{Introduction}
\label{sec:intro}
\subsection{Log-concave sequences}
Finite sequences of numbers $\{a_i\}_{i=1}^n$ often serve as a powerful way to encode properties of algebraic, geometric, and combinatorial objects: $a_i$ can stand for the $i$th coefficient of a Schur polynomial, the dimension of the $i$th cohomology group of a toric variety, or the number of $i$-element independent sets of a
matroid, etc. The properties and interrelations of the elements of the sequence $\{a_i\}_{i=1}^n$ provide valuable information about the underlying mathematical objects. Here we focus on \emph{log-concavity} relations:
\begin{align*}
a_i^2\ge a_{i-1}a_{i+1}\quad\text{for all }i=2,\ldots, n-1,
\end{align*}
which are tied to notions of \emph{positivity} and \emph{unimodality} \cite{Ber89,Ber94,Sta89,Stanley00, SW14, Bra15}. The question that motivates our work is the following: Suppose a log-concave $\{a_i\}_{i=1}^n$, whose elements stand for some algebraic/geometric/combinatorial properties of a mathematical object, satisfies
\begin{align*}
a_j^2=a_{j-1}a_{j+1}\quad \text{for some \emph{fixed} index $j$.}
\end{align*}
What can we deduce about the underlying object? This question of identifying the \emph{extremals} of the sequence $\{a_i\}_{i=1}^n$ is fundamental for a number of reasons. At the very basic level, the structure of the extremals is a basic property of the sequence which we ought to understand. More concretely, information about the extremals can provide information about the shape of the sequence which cannot be inferred from the log-concavity property alone: see Figure \ref{fig:stshape}.
Additionally, if one wishes to improve on the log-concavity property by having $a_i^2-a_{i-1}a_{i+1}\ge d_i$ for some non-trivial $d_i\ge 0$, then usually understanding the extremals of $\{a_i\}$, and hence the vanishing of $d_i$, is a necessary first step. From a different perspective, there are interesting questions related to combinatorial interpretations and computational complexity of the difference $a_i^2-a_{i-1}a_{i+1}$, where characterizing the vanishing condition $a_i^2=a_{i-1}a_{i+1}$ is a basic question \cite{P19,P22survey}.
Establishing that a given sequence, which arises in an algebraic/geometric/combinatorial setting, is log-concave is a difficult problem, with many remaining open questions. In recent years, major advances were achieved on the fronts of proving log-concavity relations for various important sequences in combinatorics \cite{Huh18,kalai2022work,CP22a}. These approaches rely on building ``dictionaries" between combinatorial and geometric-algebraic objects, and then using (or taking inspiration from) already-known log-concavity relations in the geometric-algebraic settings. What is missing, however, are the analogous dictionaries between the \emph{extremals} arising in the combinatorial and geometric-algebraic settings. In this work, we take a step towards bridging this gap by focusing on the correspondence between combinatorics and \emph{convex} geometry due to R. Stanley in the context of partially ordered sets. We will build such a dictionary and, as a consequence, completely characterize the extremal structures arising in Stanley's inequalities \cite{Sta81}. The question of the characterization of these extremals was already raised by Stanley, but even conjectures on these extremals were lacking. As we will see, this is for a good reason since, surprisingly, the extremal structures of our combinatorial sequences will display the richness and subtle nature of their geometric counterparts.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=.25]
\foreach \i in {9,...,15}
{
\draw[color=blue!40!white] (\i,{4*exp(-(\i-15)^2/16)}) -- (\i,0);
\draw[color=blue!40!white,fill=white,thick] (\i,{4*exp(-(\i-15)^2/16)}) circle (0.25);
}
\foreach \i in {16,...,24}
{
\draw[color=blue,fill=blue,thick] (\i,4) circle (0.25);
\draw[color=blue] (\i,4) -- (\i,0);
}
\foreach \i in {25,...,35}
{
\draw[color=blue!40!white] (\i,{4*exp(-(25-\i)^2/36)}) -- (\i,0);
\draw[color=blue!40!white,fill=white,thick] (\i,{4*exp(-(25-\i)^2/36)}) circle (0.25);
}
\draw[thick,->] (0,0) -- (0,5) node[above] {$a_i$};
\draw[thick,->] (0,0) -- (42,0) node[right] {$i$};
\foreach \i in {1,...,8}
{
\draw[color=blue,fill=blue,thick] (\i,0) circle (0.25);
}
\foreach \i in {36,...,40}
{
\draw[color=blue,fill=blue,thick] (\i,0) circle (0.25);
}
\draw[color=blue] (30,5.75) -- (33,5.75);
\draw[color=blue,fill=blue,thick] (33,5.75) circle (0.25);
\draw (33,5.85) node[right] {\small : flat};
\draw[color=blue!25!white] (30,4.5) -- (33,4.5);
\draw[color=blue!40!white,fill=white,thick] (33,4.5) circle (0.25);
\draw (33,4.55) node[right] {\small : strictly log-concave};
\end{tikzpicture}
\caption{The extremals of this log-concave sequence (cf. \eqref{eq:Stanleyk=1}) are such that $a_j^2=a_{j-1}a_{j+1}\Rightarrow a_{j-1}=a_j=a_{j+1}$, corresponding to the flat parts of the sequence. The width of each of the flat parts can be characterized as well. This precise description of the shape of the sequence cannot be obtained from the log-concavity property alone.\label{fig:stshape}}
\end{figure}
\subsection{Stanley's inequalities}
\label{subsec:StanleyInq}
Let $\bar{\alpha}=\{y_1,\ldots,y_{n-k}\}\cup\{x_1,\ldots,x_k\}$ be a partially ordered set (poset) of $n$ elements with a fixed chain $x_1<\cdots<x_k$ of length $k$. The set of linear extensions of $\bar{\alpha}$ is the set of bijections of $\bar{\alpha}$ into $[n]:=\{1,\ldots,n\}$ which are order-preserving:
\[
\mathcal N:=\{\text{bijections }\sigma:\bar{\alpha}\to [n]: w\le z\Rightarrow \sigma(w)\le \sigma(z)~\forall ~w,z\in\bar{\alpha}\}.
\]
We are interested in linear extensions which send the elements in the chain $x_1<\cdots<x_k$ into \emph{fixed} locations. Fix $1\le i_1<\cdots <i_k\le n$ and fix $\ell\in [k]$ such that $i_{\ell-1}+1<i_{\ell}<i_{\ell+1}-1$. For $\circ\in\{-,=,+\}$, let
\begin{align*}
\mathcal N_{\circ}:=\{\sigma\in\mathcal N:\sigma(x_j)=i_j~\forall\, j\in [k]\backslash \{\ell\}\text{ and }\sigma(x_{\ell})=i_{\ell}+ 1_{\circ} \},
\end{align*}
where $1_{\circ}:=1_{\{\circ\text{ is }+\}}-1_{\{\circ\text{ is }-\}}$. In words, whenever $j\neq \ell$, $x_j$ is placed at $i_j$, and when $j=\ell$, $x_{\ell}$ is placed at one of the locations in $\{i_{\ell}-1,i_{\ell},i_{\ell}+1\}$, depending on the sign of $\circ\in\{-,=,+\}$; see Figure \ref{fig:linext}.
\begin{figure}[h]
\begin{tikzpicture}[scale=.2]
\draw (-25,8) node[below]{1};
\draw (-17,8) node[below]{\textcolor{black}{$i_1$}};
\draw (-7.5,8) node[below]{\textcolor{black}{$i_{\ell-1}$}};
\draw (1,8) node[below]{\textcolor{red}{$i_{\ell}-1$}};
\draw (6,8) node[below]{\textcolor{red}{$i_{\ell}$}};
\draw (11,8) node[below]{\textcolor{red}{$i_{\ell}+1$}};
\draw (20,8) node[below]{\textcolor{black}{$i_k$}};
\draw (25,8) node[below]{$n$};
\draw[color=black,dashed] (-25,0)--(25,0);
\draw (-22.5,0) node[below]{\textcolor{black}{$\cdots$}};
\draw (-20,0) node[below]{\textcolor{black}{$y_3$}};
\draw[color=black,->] (-17,1)--(-17,5);
\draw (-17,0) node[below]{\textcolor{black}{$x_1$}};
\draw (-14,0) node[below]{\textcolor{black}{$y_2$}};
\draw (-11,0) node[below]{\textcolor{black}{$\cdots$}};
\draw (-7.5,0) node[below]{\textcolor{black}{$x_{\ell-1}$}};
\draw[color=black,->] (-7.5,1)--(-7.5,5);
\draw (-4,0) node[below]{\textcolor{black}{$\cdots$}};
\draw (-1,0) node[below]{\textcolor{black}{$y_{1}$}};
\draw (6,0) node[below]{\textcolor{red}{$x_{\ell}$}};
\draw (14,0) node[below]{\textcolor{black}{$y_4$}};
\draw (17,0) node[below]{\textcolor{black}{$\cdots$}};
\draw (20,0) node[below]{\textcolor{black}{$x_k$}};
\draw[color=black,->] (20,1)--(20,5);
\draw (24,0) node[below]{\textcolor{black}{$\cdots$}};
\draw[color=black,dashed,->] (5.5,0.5)--(1,5);
\draw (-0,1) node[right]{$\mathcal N_-$};
\draw[color=black,dashed,->] (5.5,0.5)--(5.5,5);
\draw (5,3.7) node[right]{$\mathcal N_=$};
\draw[color=black,dashed,->] (5.5,0.5)--(11,5);
\draw (7.5,1) node[right]{$\mathcal N_+$};
\end{tikzpicture}
\caption{Every linear extension sends $x_j$ to $i_j$ whenever $j\neq \ell$. But \textcolor{red}{$x_{\ell}$} is sent to one of the locations \textcolor{red}{$i_{\ell}-1,~i_{\ell},~i_{\ell}+1$}, depending on whether the linear extension is in $\mathcal N_-,\mathcal N_=,\mathcal N_+$, respectively.\label{fig:linext}}
\end{figure}
In \cite[Theorem 3.2]{Sta81}, Stanley showed that
\begin{align}
\label{eq:Stanley}
|\mathcal N_{=}|^2\ge |\mathcal N_{-}||\mathcal N_{+}|,
\end{align}
thus resolving a conjecture of Chung, Fishburn and Graham \cite{CFG80}. To see the relation to log-concave sequences consider the case $k=1$ and set
\begin{align}
\label{eq:Stanleyk=1}
a_i:=|\{\sigma\in\mathcal N:\sigma(x_1)=i\}|,\quad i\in [n].
\end{align}
Then, \eqref{eq:Stanley} amounts to the statement that the sequence $\{a_i\}$ is log-concave. For the general case $k\ge 1$, \eqref{eq:Stanley} is a log-concavity statement about multi-index sequences.
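To make the case $k=1$ concrete, the following short Python sketch (illustrative only; the small poset used here is an arbitrary toy example, not one appearing in this paper) enumerates all linear extensions by brute force, tabulates $a_i=|\{\sigma\in\mathcal N:\sigma(x_1)=i\}|$, and checks the log-concavity relations \eqref{eq:Stanleyk=1}.
\begin{verbatim}
from itertools import permutations

# Toy poset on {x1, y1, y2, y3} with covering relations y1 < x1 < y2;
# y3 is incomparable to all other elements.
elements = ["x1", "y1", "y2", "y3"]
relations = [("y1", "x1"), ("x1", "y2")]

def is_linear_extension(order):
    pos = {e: idx for idx, e in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in relations)

n = len(elements)
a = [0] * (n + 1)  # a[i] = number of extensions placing x1 at position i (1-based)
for order in permutations(elements):
    if is_linear_extension(order):
        a[order.index("x1") + 1] += 1

print("a_i =", a[1:])                       # for this toy poset: [0, 2, 2, 0]
print(all(a[i] ** 2 >= a[i - 1] * a[i + 1]  # log-concavity check
          for i in range(2, n)))
\end{verbatim}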
The goal of this work is to provide a complete characterization of the equality cases of \eqref{eq:Stanley} for any $k$. That is, we will answer the following question: If
\begin{align}
\label{eq:Stanleyeq}
|\mathcal N_{=}|^2=|\mathcal N_{-}||\mathcal N_{+}|,
\end{align}
what can we deduce about the poset $\bar{\alpha}$?
To gain some intuition for the extremals of Stanley's inequalities \eqref{eq:Stanley} let us start with a trivial observation: If $\{y_1,\ldots,y_{n-k}\}$ are all incomparable to $x_{\ell}$, then $|\mathcal N_{-}|=|\mathcal N_{=}|=|\mathcal N_{+}|$, which yields equality in \eqref{eq:Stanley}. In the same vein, consider the following example which is slightly less trivial.
\begin{example}
\label{ex:trivmech}
Suppose the poset $\bar{\alpha}$ satisfies
\begin{align}
\label{eq:triv}
\{z\in\bar{\alpha}: z<x_{\ell}\text{ and } z\not < x_{\ell-1}\}\cup \{z\in\bar{\alpha}: z>x_{\ell}\text{ and } z\not > x_{\ell+1}\}=\varnothing.
\end{align}
Then, given any $\sigma\in\cup_{\circ\in\{-,=,+\}}\mathcal N_{\circ}$, we can permute (some of) the locations of the elements $\{\sigma^{-1}(i_{\ell}-1),\, \sigma^{-1}(i_{\ell}),\, \sigma^{-1}(i_{\ell}+1)\}$ without violating any constraints. For example, given $\sigma\in\mathcal N_+$, the elements $\sigma^{-1}(i_{\ell}-1),\, \sigma^{-1}(i_{\ell})$ must be incomparable to $x_{\ell}=\sigma^{-1}(i_{\ell}+1)$ since, as $i_{\ell-1}+1<i_{\ell}<i_{\ell+1}-1$, any such comparability would violate \eqref{eq:triv}. Hence, we can exchange the locations of $\{\sigma^{-1}(i_{\ell}-1),\, \sigma^{-1}(i_{\ell}+1)\}$ or $\{\sigma^{-1}(i_{\ell}),\, \sigma^{-1}(i_{\ell}+1)\}$. It follows that
\begin{align}
\label{eq:trivii}
|\mathcal N_=|=|\mathcal N_-|=|\mathcal N_+|,
\end{align}
which in particular implies \eqref{eq:Stanleyeq}.
\end{example}
The mechanism \eqref{eq:triv} is wasteful since it is \emph{global} in nature. It controls all the elements between $x_{\ell-1}$ and $x_{\ell+1}$, even though we are concerned only with the elements which are close to $x_{\ell}$ in the sense that they are located in $i_{\ell}-1,\,i_{\ell},\,i_{\ell}+1$. Instead, we expect \eqref{eq:Stanleyeq} to hold as soon as the mechanism \eqref{eq:triv} occurs only on a \emph{local} scale. To make this idea precise we make the following definition regarding elements that are close to $x_{\ell}$.
\begin{definition}
\label{def:neighbor}
Fix $\ell\in[k]$ such that $i_{\ell-1}+1<i_{\ell}<i_{\ell+1}-1$, and given $\circ\in\{-,=,+\}$, fix $\sigma\in \mathcal N_{\circ}$. The \emph{companions} of $x_{\ell}=\sigma^{-1}(i_{\ell}+1_{\circ})$ are the elements $\sigma^{-1}(i)$ for $i\in \{i_{\ell}-1,i_{\ell},i_{\ell}+1\}\backslash \{i_{\ell}+1_{\circ}\}$, where $1_{\circ}:=1_{\{\circ\text{ is }+\}}-1_{\{\circ\text{ is }-\}}$. The companion lower in ranking is the \emph{lower companion} and the companion higher in ranking is the \emph{upper companion}.
\end{definition}
For example, with $\circ$ being $-$, the companions of $x_{\ell}=\sigma^{-1}(i_{\ell}-1)$ are $\sigma^{-1}(i_{\ell})$ and $\sigma^{-1}(i_{\ell}+1)$. The lower companion is $\sigma^{-1}(i_{\ell})$ and the upper companion is $\sigma^{-1}(i_{\ell}+1)$.
\subsection{The extremals of Stanley's inequalities}
\label{subsec:extremal_Stanley}
The characterization of the extremals of Stanley's inequalities will be in terms of the companions of $x_{\ell}$ as defined in Definition \ref{def:neighbor}. On a finer resolution, there are two distinct classes of posets which in turn have different types of extremals. The two classes of posets will be called \emph{supercritical} and \emph{critical}, a terminology which will become clear later. The precise definitions are deferred to Definition \ref{def:posetcrit}, but for now, we will simply note that a supercritical poset is always critical, but the converse is false. (There are further classes which reduce to the supercritical and critical classes; they will be handled in Section \ref{sec:subcrit}.)
\begin{theorem}{\textnormal{(\textbf{Supercritical extremals of Stanley's inequalities})}}
\label{thm:supcrit}
\vspace{0.05in}
Suppose the poset $\bar{\alpha}$ is supercritical. The following are equivalent:
\vspace{0.05in}
\begin{enumerate}[(i)]
\item $|\mathcal N_{=}|^2= |\mathcal N_{-}||\mathcal N_{+}|$.\\
\item $|\mathcal N_{-}|= |\mathcal N_{=}|=|\mathcal N_{+}|$.\\
\item For every linear extension in $\mathcal N_{-}\cup\mathcal N_{=}\cup \mathcal N_{+}$, both companions of $x_{\ell}$ are incomparable to $x_{\ell}$.
\end{enumerate}
\end{theorem}
Theorem \ref{thm:supcrit} provides a number of insights into the extremals of \eqref{eq:Stanleyeq}. Part (ii) of the theorem (which held in \eqref{eq:trivii}) is non-trivial, and even surprising, since it puts heavy constraints on the ways in which $|\mathcal N_{=}|^2= |\mathcal N_{-}||\mathcal N_{+}|$ can occur. A priori, we could have a geometric progression where $|\mathcal N_{-}|=ab^{c-1},~|\mathcal N_{=}|=ab^c,~|\mathcal N_{+}|=ab^{c+1}$, for some $a,b,c>0$, which would yield the equality
\[
|\mathcal N_{=}|^2=a^2b^{2c}=(ab^{c-1})(ab^{c+1})=|\mathcal N_{-}||\mathcal N_{+}|.
\]
Theorem \ref{thm:supcrit}(ii) excludes this possibility. On the other hand, despite the information provided by (ii), it sheds no light on the mechanism which yields equality in \eqref{eq:Stanley}. In contrast, Theorem \ref{thm:supcrit}(iii) provides the mechanism behind the extremals: The companions of $x_{\ell}$, under any linear extension in $\bigcup_{\circ\in\{-,=,+\}}\mathcal N_{\circ}$, must be incomparable to $x_{\ell}$. Hence, the positions of $x_{\ell}$ and both of its companions can be swapped, which leads to part (ii). Note that (iii) is a \emph{local} condition which controls only the immediate companions of $x_{\ell}$, unlike \eqref{eq:triv}. The power of Theorem \ref{thm:supcrit} lies in the statement that this mechanism is the \emph{only} mechanism behind the extremals of Stanley's inequalities for supercritical posets.
The characterization of Theorem \ref{thm:supcrit} is very clean and one might hope that it applies to every poset. This hope is quickly shattered:
\begin{example}
\label{ex:crit}
Let $\bar{\alpha}=\{y_1,y_2,y_3,y_4,x_1,x_2,x_3\}$ with the relations
\[
x_1<x_2<x_3,\quad y_1<x_2,\quad x_2<y_2,\quad x_1<y_3<x_3.
\]
Set $\ell=2,$ and $i_1=2,~i_2=4,~i_3=6$. One can check that $|\mathcal N_-|= |\mathcal N_=|=|\mathcal N_+|=4$ so that Theorem \ref{thm:supcrit}(ii) holds. On the other hand, Theorem \ref{thm:supcrit}(iii) is false since $y_1,y_2$ are comparable to $x_2$ but can appear as companions of $x_2$ under linear extensions in $\mathcal N_-\cup\mathcal N_=\cup\mathcal N_+$. See Figure \ref{fig:excrit}.
\begin{figure}[h]
\begin{tikzpicture}[scale=1.5]
\begin{scope}
\node at (-2,-1) {$\bar{\alpha}\quad =$};
\node (y2) at (2,0) {$\color{blue}{y_2}$};
\node (x3) at (0,0) {$\color{red}{x_3}$};
\node (y3) at (-1,-1) {$\color{blue}{y_3}$};
\node (x2) at (1,-1) {$\color{red}{x_2}$};
\node (y1) at (2,-2) {$\color{blue}{y_1}$};
\node (x1) at (0,-2) {$\color{red}{x_1}$};
\node (y4) at (2,-1) {$y_4$};
\draw [->, thick] (y3) -- (x1);
\draw [->, thick] (x3) -- (y3);
\draw [->, thick] (x3) -- (x2);
\draw [->, thick] (x2) -- (x1);
\draw [->, thick] (y2) -- (x2);
\draw [->, thick] (x2) -- (y1);
\end{scope}
\end{tikzpicture}
\begin{align*}
&\mathcal N_- =\left\{ {\color{blue}y_1} {\color{red}x_1} {\color{red}x_2} {\color{blue}y_3} y_4 {\color{red}x_3} {\color{blue}y_2}, \quad{\color{blue}y_1} {\color{red}x_1} {\color{red}x_2} y_4 {\color{blue}y_3} {\color{red}x_3} {\color{blue}y_2}, \quad{\color{blue}y_1} {\color{red}x_1} {\color{red}x_2} {\color{blue}y_2}{\color{blue}y_3} {\color{red}x_3} y_4, \quad {\color{blue}y_1} {\color{red}x_1} {\color{red}x_2} {\color{blue}y_3}{\color{blue}y_2} {\color{red}x_3} y_4\right\},\\
&\mathcal N_==\left\{ {\color{blue}y_1} {\color{red}x_1} {\color{blue}y_3} {\color{red}x_2} y_4 {\color{red}x_3} {\color{blue}y_2}, \quad {\color{blue}y_1} {\color{red}x_1} y_4 {\color{red}x_2} {\color{blue}y_3} {\color{red}x_3} {\color{blue}y_2},\quad {\color{blue}y_1} {\color{red}x_1} {\color{blue}y_3} {\color{red}x_2} {\color{blue}y_2} {\color{red}x_3} y_4,\quad y_4{\color{red}x_1} {\color{blue}y_1} {\color{red}x_2} {\color{blue}y_3} {\color{red}x_3} {\color{blue}y_2}\right\},\\
&\mathcal N_+=\left\{ {\color{blue}y_1} {\color{red}x_1} y_4 {\color{blue}y_3} {\color{red}x_2} {\color{red}x_3} {\color{blue}y_2},\quad {\color{blue}y_1} {\color{red}x_1} {\color{blue}y_3} y_4{\color{red}x_2} {\color{red}x_3} {\color{blue}y_2},\quad y_4 {\color{red}x_1} {\color{blue}y_1} {\color{blue}y_3} {\color{red}x_2} {\color{red}x_3} {\color{blue}y_2},\quad y_4 {\color{red}x_1} {\color{blue}y_3} {\color{blue}y_1} {\color{red}x_2} {\color{red}x_3} {\color{blue}y_2}\right\}.
\end{align*}
\caption{\textbf{Top}: Hasse diagram (arrows point from smaller to larger elements) of poset in Example \ref{ex:crit}. \textbf{Bottom}: Collections of linear extensions of poset in Example \ref{ex:crit}.\label{fig:excrit}}
\end{figure}
\end{example}
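The counts in Example \ref{ex:crit} are small enough to verify by brute force. The following Python sketch (illustrative only) enumerates all $7!$ orderings and counts those lying in $\mathcal N_-$, $\mathcal N_=$, $\mathcal N_+$.
\begin{verbatim}
from itertools import permutations

elements = ["y1", "y2", "y3", "y4", "x1", "x2", "x3"]
# Covering relations of the poset in the example:
# x1 < x2 < x3, y1 < x2, x2 < y2, x1 < y3 < x3.
relations = [("x1", "x2"), ("x2", "x3"), ("y1", "x2"),
             ("x2", "y2"), ("x1", "y3"), ("y3", "x3")]

def count_extensions(pos_x2):
    total = 0
    for order in permutations(elements):
        pos = {e: idx + 1 for idx, e in enumerate(order)}  # 1-based positions
        if (all(pos[a] < pos[b] for a, b in relations)
                and pos["x1"] == 2 and pos["x3"] == 6 and pos["x2"] == pos_x2):
            total += 1
    return total

# x2 at position i_2 - 1, i_2, i_2 + 1 = 3, 4, 5 gives |N_-|, |N_=|, |N_+|.
print([count_extensions(i) for i in (3, 4, 5)])  # the example asserts [4, 4, 4]
\end{verbatim}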
Our next result goes beyond Theorem \ref{thm:supcrit} and characterizes the extremals of critical posets.
\begin{theorem}{\textnormal{(\textbf{Critical extremals of Stanley's inequalities})}}
\label{thm:crit}
\vspace{0.05in}
Suppose the poset $\bar{\alpha}$ is critical. The following are equivalent:
\vspace{0.05in}
\begin{enumerate}[(i)]
\item $|\mathcal N_{=}|^2= |\mathcal N_{-}||\mathcal N_{+}|$.\\
\item $|\mathcal N_{-}|= |\mathcal N_{=}|=|\mathcal N_{+}|$.\\
\item For every linear extension in $\mathcal N_{-}\cup\mathcal N_{=}\cup \mathcal N_{+}$, at least one companion of $x_{\ell}$ is incomparable to $x_{\ell}$. In addition, there exist nonnegative numbers $\mathrm N_1, \mathrm N_2$ such that:
\begin{itemize}
\item For any fixed $\circ \in \{-,=,+\}$,
\begin{align*}
&|\{\sigma\in\mathcal N_{\circ}: \textnormal{only the lower companion of $x_{\ell}$ is incomparable to $x_{\ell}$} \}|\\
&=\mathrm N_1=\\
&|\{\sigma\in\mathcal N_{\circ}: \textnormal{only the upper companion of $x_{\ell}$ is incomparable to $x_{\ell}$} \}|.
\end{align*}
\item $|\{\sigma\in\mathcal N_{\circ}: \textnormal{both companions of $x_{\ell}$ are incomparable to $x_{\ell}$} \}|=\mathrm N_2\quad \forall ~ \circ \in \{-,=,+\}$.
\end{itemize}
\end{enumerate}
\end{theorem}
Let us compare and contrast Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit}. The conclusion in part (ii), that the equality \eqref{eq:Stanleyeq} necessitates $|\mathcal N_{-}|= |\mathcal N_{=}|=|\mathcal N_{+}|$, remains true for both supercritical and critical posets. But the mechanisms, i.e., part (iii), behind this phenomenon are different. Clearly, Theorem \ref{thm:supcrit}(iii) is a stronger condition since it trivially implies the condition in Theorem \ref{thm:crit}(iii). For critical posets, the conclusion that only 0 comparable companions are allowed (namely Theorem \ref{thm:supcrit}(iii)) is relaxed into the statement that 0 or 1 comparable companions are allowed. But in order to get $|\mathcal N_{-}|= |\mathcal N_{=}|=|\mathcal N_{+}|$, there must be a balance between those linear extensions in which the comparable companion is the lower one and those in which it is the upper one, which is the content of the second part of Theorem \ref{thm:crit}(iii).
\begin{remark}{\textnormal{(\textbf{Poset characterization})}}
\label{rem:posetchar}
There is a way to reformulate Theorem \ref{thm:supcrit}(iii) so that the characterization of the extremals is given in terms of conditions on the poset itself rather than on the set of its linear extensions:
\begin{align}
\label{eq:posetchar}
\begin{split}
&\forall\,y<x_{\ell}: ~~\exists~ s(y)\in\{ 0,\ldots, k+1\} \text{ s.t. } y<x_{s(y)}\text{ and }| \{z\in\bar{\alpha}: y<z<x_{s(y)}\}|>i_{s(y)}-i_{\ell},\\
&\forall\,y>x_{\ell}:~~\exists\, r(y)\in\{ 0,\ldots, k+1\} \text{ s.t. } y>x_{r(y)}\text{ and }|\{z\in\bar{\alpha}: x_{r(y)}<z<y\}|>i_{\ell}-i_{r(y)};
\end{split}
\end{align}
see Proposition \ref{prop:IIIimpliesIV}. Here, $x_0$ (res. $x_{k+1}$) is the added element with the property that it is smaller (res. bigger) than any other element in $\bar{\alpha}$. The formulation \eqref{eq:posetchar} can be useful in practice since, given a standard description of a poset, \eqref{eq:posetchar} is easier to check. On the other hand, the formulation of Theorem \ref{thm:supcrit}(iii) is more compatible with our dictionary, which is more natural to formulate in terms of conditions on the linear extensions of the poset. It is an interesting problem to find an analogue of \eqref{eq:posetchar} for critical posets.
\end{remark}
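To illustrate the practical point of the remark above, the following Python sketch is a direct transcription of condition \eqref{eq:posetchar} as a checker. The input format is an assumption of the sketch: \texttt{elements} lists all elements of $\bar{\alpha}$ (the $y$'s together with the chain elements $x_1,\ldots,x_k$), \texttt{less} is the strict order relation given as a transitively closed set of pairs, \texttt{x} maps $j\mapsto x_j$ for $j\in[k]$, and \texttt{i} maps $j\mapsto i_j$ for $j\in\{0,\ldots,k+1\}$ with $i_0=0$ and $i_{k+1}=n+1$.
\begin{verbatim}
def check_posetchar(elements, less, x, i, ell):
    k = len(x)
    chain = {0: "_x0", k + 1: "_xtop", **x}  # sentinels standing for x_0 and x_{k+1}

    def lt(a, b):  # strict order, with x_0 below and x_{k+1} above everything
        if a == b:
            return False
        if a == "_x0" or b == "_xtop":
            return True
        if a == "_xtop" or b == "_x0":
            return False
        return (a, b) in less

    for y in elements:
        if lt(y, x[ell]):  # first line of the condition
            if not any(lt(y, chain[s]) and
                       sum(1 for z in elements if lt(y, z) and lt(z, chain[s]))
                       > i[s] - i[ell]
                       for s in range(k + 2)):
                return False
        if lt(x[ell], y):  # second line of the condition
            if not any(lt(chain[r], y) and
                       sum(1 for z in elements if lt(chain[r], z) and lt(z, y))
                       > i[ell] - i[r]
                       for r in range(k + 2)):
                return False
    return True
\end{verbatim}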
\begin{remark}{\textnormal{(\textbf{$k=1$})}}
\label{rem:k=1}
The characterization of the extremals of Stanley's inequalities when $k=1$ was done in \cite[\S 15]{SvH20}. It turns out that, when $k=1$, the poset must be supercritical and the characterization of \cite[\S 15]{SvH20} in this case is the same as Theorem \ref{thm:supcrit} and Remark \ref{rem:posetchar}. While our proofs take much inspiration from the work \cite{SvH20}, the new phenomena of critical posets necessitated the development of many new ideas (see Figure \ref{fig:dictionary}). For example, the dictionary constructed in \cite[\S 15]{SvH20} was in terms of the poset itself (as in Remark \ref{rem:posetchar}), rather than its linear extensions. But when progressing to critical posets, the approach of \cite[\S 15]{SvH20} no longer works while our dictionary, which is in terms of linear extensions description, is suitable for these more subtle and rich extremals.
Let us also mention that, when $k=1$, Chan and Pak, using their \emph{combinatorial atlas} method \cite{CP21}, provided a \emph{linear-algebraic} proof of Stanley's inequalities and characterized their extremals, thus avoiding any use of convex geometry; see also the proof for width two posets by Chan, Pak, and Panova \cite{chan2021extensions}. However, their approach does not currently extend to the case $k>1$.
\end{remark}
\subsection{Dictionaries between convex geometry and combinatorics}
\label{subsec:AFStanleyInq}
Stanley's proof of \eqref{eq:Stanley} relies on a remarkable correspondence he found between mixed volumes of certain convex polytopes and linear extension counts. Once this correspondence is established, the inequality \eqref{eq:Stanley} follows from a deep log-concavity result in convex geometry: The Alexandrov-Fenchel inequality. We will start this section by reviewing Stanley's proof of the inequality \eqref{eq:Stanley}, and then move to the discussion of its extremals.
\subsubsection{The Alexandrov-Fenchel inequality}
We start with some preliminaries from convex geometry, our standard reference is \cite{Sch14}. Given convex bodies (non-empty compact convex sets) $C,C'\subseteq \R^{n-k}$ and scalars $\lambda ,\lambda'\ge 0$, we define their sum as
\[
\lambda C+\lambda'C':=\{\lambda x+\lambda' y:x\in C, y\in C'\}.
\]
The volume of a sum of convex bodies behaves as a polynomial: Given a positive integer $p$, convex bodies $C_1,\ldots,C_p\subseteq\R^{n-k}$, and scalars $\lambda_1,\ldots,\lambda_p\ge 0$, we have
\[
\mathrm{Vol}_{n-k}(\lambda_1C_1+\cdots+\lambda_pC_p)=\sum_{ 1\le j_1,\ldots,j_{n-k}\le p}\mathsf{V}_{n-k}(C_{j_1},\ldots,C_{j_{n-k}})\lambda_{j_1}\cdots\lambda_{j_{n-k}},
\]
where the coefficients $\mathsf{V}_{n-k}(C_{j_1},\ldots,C_{j_{n-k}})$ are called \emph{mixed volumes}. These geometric objects generalize the notions of volume, surface area, mean width, etc. The \emph{Alexandrov-Fenchel inequality} \cite[\S 7.3]{Sch14} states that sequences of mixed volumes are log-concave: For any convex bodies $C_1,\ldots,C_{n-k}\subset \R^{n-k}$,
\begin{equation}
\label{eq:AFintro}
\mathsf{V}_{n-k}(C_1,C_2,C_3,\ldots,C_{n-k})^2\ge \mathsf{V}_{n-k}(C_1,C_1,C_3,\ldots,C_{n-k})\mathsf{V}_{n-k}(C_2,C_2,C_3,\ldots,C_{n-k}).
\end{equation}
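For intuition, consider a minimal planar example (not taken from the rest of the paper): for axis-parallel boxes $C_1=[0,a_1]\times[0,b_1]$ and $C_2=[0,a_2]\times[0,b_2]$ in $\R^2$, expanding
\[
\mathrm{Vol}_2(\lambda_1C_1+\lambda_2C_2)=(\lambda_1a_1+\lambda_2a_2)(\lambda_1b_1+\lambda_2b_2)
\]
gives $\mathsf{V}_2(C_1,C_1)=a_1b_1$, $\mathsf{V}_2(C_2,C_2)=a_2b_2$, and $\mathsf{V}_2(C_1,C_2)=\tfrac{1}{2}(a_1b_2+a_2b_1)$. In this case \eqref{eq:AFintro} reduces to $\tfrac{1}{4}(a_1b_2+a_2b_1)^2\ge a_1b_1a_2b_2$, i.e., to $(a_1b_2-a_2b_1)^2\ge 0$, with equality precisely when the two boxes are homothetic.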
Stanley's proof of \eqref{eq:Stanley} relies on the identification of the poset $\bar{\alpha}$ with polytopes $K_0,\ldots,K_k$. We defer the explicit construction of these polytopes to later (Section \ref{sec:preliminaries}), and for now denote by $\mathcal K$ a certain collection of these polytopes containing $n-k-2$ of them. The key points are the identities
\begin{equation}
\label{eq:posetvolrepintro}
\begin{split}
&|\mathcal N_-|=(n-k)!\,\mathsf{V}_{n-k}(K_{\ell},K_{\ell},\mathcal K),\\
&|\mathcal N_=|=(n-k)!\,\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell},\mathcal K),\\
&|\mathcal N_+|=(n-k)!\,\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell-1},\mathcal K).
\end{split}
\end{equation}
With the representation \eqref{eq:posetvolrepintro} in hand, the inequality \eqref{eq:Stanley} is equivalent to
\begin{equation}
\label{eq:AFStanleyintro}
\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell},\mathcal K)^2\ge \mathsf{V}_{n-k}(K_{\ell},K_{\ell},\mathcal K)\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell-1},\mathcal K),
\end{equation}
which follows immediately from \eqref{eq:AFintro}.
Stanley's proof of \eqref{eq:Stanley} is the only proof currently known for general $k$. Hence, a natural route towards the characterization of the extremals of Stanley's inequalities would require:
\begin{itemize}
\item Characterization of the extremals of the Alexandrov-Fenchel inequality.
\item Dictionary between the extremals of the Alexandrov-Fenchel inequality and the extremals of Stanley's inequalities.
\end{itemize}
For arbitrary convex bodies, the characterization of the extremals of \eqref{eq:AFintro} is a long-standing open problem \cite[\S 7.6]{Sch14}. But when the bodies are \emph{polytopes}, this problem was recently solved by the second-named author and Van Handel \cite{SvH20}. Thus, the work \cite{SvH20} takes care of the first item and our work here is dedicated to the second item.
To build intuition regarding the correspondence between the extremal structures of posets and polytopes, let us revisit Example \ref{ex:trivmech}. As will be evident (see \eqref{eq:Ki}), the identity \eqref{eq:triv} holds if, and only if, $K_{\ell-1}=K_{\ell}$. In this case it is clear that equality will be attained in \eqref{eq:AFStanleyintro}. But as we saw in Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit}, equality can be attained in Stanley's inequalities under much weaker conditions than those captured by Example \ref{ex:trivmech}. It follows that equality holds in \eqref{eq:AFStanleyintro} under conditions which are much weaker than $K_{\ell-1}=K_{\ell}$. The characterization of these conditions is the topic of the next section.
\subsubsection{The extremals of the Alexandrov-Fenchel inequality for convex polytopes}
The terminology of supercritical and critical posets comes in fact from the analogous terminology in the characterization of the extremals of the Alexandrov-Fenchel inequality for convex polytopes, as introduced in \cite{SvH20}; the precise definitions of supercriticality and criticality are deferred to Definition \ref{def:ssc}. In the sequel, $B\subseteq \R^{n-k}$ always stands for the unit ball, and the notions of \emph{$(B,\mathcal K)$-extreme normal directions} and \emph{$\mathcal K$-degenerate pairs}, which will be used in the subsequent theorem, will be given in Definition \ref{def:extreme} and Definition \ref{def:deg}, respectively.
\begin{theorem}{\textnormal{(\textbf{Extremals of the Alexandrov-Fenchel inequality for convex polytopes}, \cite{SvH20})}}
\label{thm:SvHsupcritIntro}
\vspace{0.1in}
\begin{itemize}
\item Suppose $\mathcal K$ is supercritical. Then,
\[
\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell},\mathcal K)^2=\mathsf{V}_{n-k}(K_{\ell},K_{\ell},\mathcal K)\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell-1},\mathcal K),
\]
if, and only if, up to dilation and translation, the supporting hyperplanes of $K_{\ell-1}$ and $K_{\ell}$ agree in all $(B,\mathcal K)$-extreme normal directions.\\
\item Suppose $\mathcal K$ is critical. Then,
\[
\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell},\mathcal K)^2=\mathsf{V}_{n-k}(K_{\ell},K_{\ell},\mathcal K)\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell-1},\mathcal K),
\]
if, and only if, there exist $0\le d<\infty$ $\mathcal K$-degenerate pairs $(P_1,Q_1),\ldots,(P_d,Q_d)$, such that, up to dilation and translation, the supporting hyperplanes of $K_{\ell-1}+\sum_{j=1}^dQ_j$ and $K_{\ell}+\sum_{j=1}^dP_j$ agree in all $(B,\mathcal K)$-extreme normal directions.
\end{itemize}
\end{theorem}
The complicated structure of the $(B,\mathcal K)$-extreme normal directions (see Figure \ref{fig:graph}) is what gives rise to the richness of the extremals. If the supporting hyperplanes of $K_{\ell-1}$ and $K_{\ell}$ agree in \emph{every direction} on the sphere $S^{n-k-1}$, then, up to dilation and translation, $K_{\ell-1}$ and $K_{\ell}$ are identical. This is an example where a \emph{global} mechanism (supporting hyperplanes of $K_{\ell-1},K_{\ell}$ agree everywhere) gives rise to equality in \eqref{eq:AFintro}. Theorem \ref{thm:SvHsupcritIntro} provides a \emph{local} mechanism for equality in \eqref{eq:AFintro} (supporting hyperplanes of $K_{\ell-1},K_{\ell}$ agree only in very few directions), and furthermore, establishes that this local mechanism is the \emph{only} mechanism for the extremal structures of the Alexandrov-Fenchel inequality.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=.6]
\shade[ball color = blue, opacity = 0.15] (1,0) circle [radius=2];
\draw[thick, densely dashed] (3,0) arc [start angle = 0, end angle = 180,
x radius = 2, y radius = .5];
\draw[thick] (3,0) arc [start angle = 0, end angle = -180, x radius = 2,
y radius = .5];
\draw[thick] (1,2) [rotate=90] arc [start angle = 0, end angle =
-180, x radius = 2, y radius = -1];
\draw[thick, densely dashed] (1,2) [rotate=90] arc [start angle
= 0, end angle =
-180, x radius = 2, y radius = 1];
\draw[thick] (1,2) [rotate=90] arc [start angle = 0, end angle =
-180, x radius = 2, y radius = 1.75];
\draw[thick, densely dashed] (1,2) [rotate=90] arc [start angle
= 0, end angle =
-180, x radius = 2, y radius = -1.75];
\draw[fill=black] (1,2) circle [radius=.07];
\draw[fill=black] (1,-2) circle [radius=.07];
\draw[fill=black] (2.74,-.24) circle [radius=.09] node[below right]
{$\mathrm v_{F}$};
\draw[fill=black] (-0.74,.24) circle [radius=.07];
\draw[fill=black] (.03,-.45) circle [radius=.09];
\draw[fill=black] (1.97,.45) circle [radius=.07];
\draw (.60,-.44) node[below] {$\mathrm v_{F'}$};
\draw (1.2,-.20) node {$e_{F,F'}$};
\begin{scope}[scale=.9]
\fill[blue!15] (-5.95,.55) -- (-4.03,1.55) --
(-4.03,-1.25) -- (-5.95,-2.25) -- (-5.95,.55);
\fill[blue!25] (-5.95,.55) -- (-8.47,.89) --
(-8.47,-1.91) -- (-5.95,-2.25) -- (-5.95,.55);
\fill[blue!5] (-8.47,.89) -- (-6.53,1.79) --
(-4.03,1.55) -- (-5.95,.55) -- (-8.47,.89);
\draw[thick] (-5.95,.55) -- (-4.03,1.55) -- (-4.03,-1.25) --
(-5.95,-2.25) -- (-5.95,.55);
\draw[thick] (-5.95,.55) -- (-8.47,.89) -- (-8.47,-1.91) --
(-5.95,-2.25);
\draw[thick] (-8.47,.89) -- (-6.53,1.79) -- (-4.03,1.55);
\draw[thick,densely dashed] (-6.53,1.79) -- (-6.53,-1.01) --
(-4.03,-1.25);
\draw[thick,densely dashed] (-6.53,-1.01) -- (-8.47,-1.91);
\end{scope}
\draw (-6.5,-.6) node {$F'$};
\draw (-4.4,-.3) node {$F$};
\end{tikzpicture}
\caption{\footnotesize Extreme normal directions associated to the cube. The vectors $\mathrm v_F,\mathrm v_{F'}\in S^2$ are the unit normals of the facets $F,F'$, and the line $e_{F,F'}$ is the shortest geodesic between the nodes $\mathrm v_F,\mathrm v_{F'}$. The set of $(\textnormal{Ball},\textnormal{Cube})$-extreme normal directions consists of the nodes and arcs in this embedded graph on the sphere $S^2$.\label{fig:graph}}
\end{figure}
\subsubsection{Dictionary for extremals} A priori, it is not at all clear that the complications and richness of the extremals of \eqref{eq:AFintro} would also arise in our very specific family of polytopes. Indeed, in the case $k=1$, only the supercritical extremals appear. Remarkably, not only does this complexity arise, but we can provide a clean and intuitive characterization of the extremals arising in Stanley's inequalities for critical posets. At the core of our work is a powerful dictionary which translates between the \emph{extremal} properties of convex polytopes and partially ordered sets. We discover \emph{new} extreme normal directions, and in addition, introduce numerous new key ideas: \emph{closure}, \emph{splitting pairs}, \emph{mixing}, \emph{critical subposet}, to name just a few. It will be best to introduce these ideas at the appropriate places in the paper; Section \ref{sec:outline} will contain a brief outline of our proof. We refer to Figure \ref{fig:dictionary} for a quick summary of the main components in our dictionary, and recommend that the reader revisit this table from time to time.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ | c | c | c |}
\hline
\thead{Geometry} & \thead{Dictionary} & \thead{Combinatorics} \\
\hline
\makecell{Criticality of polytopes \\ (Definition \ref{def:ssc})} & \makecell{ Section \ref{sec:notionsCrit} \\
(Proposition \ref{prop:critequiv})} & \makecell{Criticality of posets \\ (Definition \ref{def:posetcrit})} \\
\hline
\makecell{Projection\\ (\cite[Theorem 5.3.1]{Sch14})} & \makecell{ Section \ref{sec:subcrit} \\
(Remark \ref{rem:splitproj})} & \makecell{Splitting \\ (Definition \ref{def:split})}\\
\hline
\makecell{Criticality of splitting pairs\\ (Definition \ref{def:splitpaircrit})} & Section \ref{sec:beyond} & \makecell{Mixing of splitting pairs\\ (Figure \ref{fig:orgsec})} \\
\hline
\makecell{Maximal collection of polytopes\\ (\cite[section 9.1]{SvH20})} & \makecell{Section \ref{sec:beyond}\\
(Proposition \ref{prop:max_notions})} &
\makecell{Maximal splitting pair\\ (Definition \ref {def:rmaxsmin})} \\
\hline
\makecell{Extreme normal directions} & \makecell{Section \ref{sec:ext}} & \makecell{First- and second-neighbors} \\
\hline
\makecell{Translation and dilation} & \makecell{Sections \ref{sec:supercrit}-\ref{sec:crit}} & \makecell{Chains of poset} \\
\hline
\makecell{Critical subspace\\
(Equation \eqref{eq:Eperp})} & \makecell{Section \ref{sec:crit}} & \makecell{Critical subposet\\
(Equation \eqref{eq:Eperp})} \\
\hline
\end{tabular}
\end{center}
\caption{Dictionary between geometry of polytopes and combinatorics of posets.\label{fig:dictionary}}
\end{figure}
\subsection{Organization of paper}
We start in Section \ref{sec:preliminaries} by reviewing the connection between partially ordered sets and convex geometry. In Section \ref{sec:found} we develop a number of tools (\emph{decompositions, closure}) that are used throughout the paper and also prove the sufficiency parts of Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit}. Section \ref{sec:outline} provides a brief outline of the proofs of the main results. Section \ref{sec:notionsCrit} sets the first building block of our dictionary by showing the equivalences between notions of criticality for posets and polytopes. Section \ref{sec:subcrit} introduces the idea of \emph{splitting} and characterizes the extremals of the \emph{subcritical} posets. Section \ref{sec:beyond} introduces the idea of \emph{mixing} which is at the heart of our proofs and applies it to \emph{splitting pairs}. In Section \ref{sec:ext} we add to our dictionary the combinatorial characterization of the extreme normal directions. We complete the proof of Theorem \ref{thm:supcrit} in Section \ref{sec:supercrit} and the proof of Theorem \ref{thm:crit} in Section \ref{sec:crit}. At the end of the paper we include a Notation Appendix for the convenience of the reader.
\section{Preliminaries}
\label{sec:preliminaries}
In this section we review some basics about posets and convex geometry, as well as introduce the notation we use throughout the paper. We review the connection between posets and mixed volumes, and state the characterization of the extremals of the Alexandrov-Fenchel inequality for (convex) polytopes. In addition, we provide the criticality definitions for polytopes and posets.
We use the notation $\le,<,=,\ge,>,\sim$ to describe the relations in a poset, where $\sim$ stands for the comparability relation, and $\not\le,\not<,\not=,\not\ge,\not>,\nsim$ to describe their negations. Given integers $p\le q$ we write
\begin{equation}
\label{eq:bracket}
\llbracket p,q \rrbracket :=\{p,p+1,\ldots,q-1,q\}.
\end{equation}
Fix integers $n\ge 1$ and $0\le k\le n$, and consider the poset $\bar{\alpha}$, of size $n$,
\[
\bar{\alpha}=\{y_1,\ldots,y_{n-k},x_1,\ldots, x_k\},
\]
where $x_1<x_2<\cdots<x_k$ is a chain. Let
\[
\alpha=\{y_1,\ldots,y_{n-k}\}
\]
be the induced poset of size $n-k$ obtained from $\bar{\alpha}$ by removing the chain. To simplify the notation we add two elements $x_0,x_{k+1}$ to $\bar{\alpha}$ with the property that $x_0$ is smaller than any element in $\bar{\alpha}$ while $x_{k+1}$ is bigger than any element in $\bar{\alpha}$. Note that this allows us to consider the case $k=0$.
Let $\mathcal N$ be the set of all linear extensions of $\bar{\alpha}$, that is,
\[
\mathcal N=\{\text{bijections }\sigma:\bar{\alpha}\to [n]: w\le z\Rightarrow \sigma(w)\le \sigma(z)~\forall ~w,z\in\bar{\alpha}\},
\]
with the convention that $\sigma(x_0)=0$ and $\sigma(x_{k+1})=n+1$ for any $\sigma\in\mathcal N$.
Fix $\ell\in [k]:=\{1,\ldots, k\}$ and fix $i_1<i_2<\cdots< i_k\in [n]$, with the property $i_{\ell-1}+1<i_{\ell}<i_{\ell+1}-1$, and let $i_0:=0,~ i_{k+1}:=n+1$. We define the following sets of linear extensions, $\mathcal N_{-},\mathcal N_{=}, \mathcal N_{+}\subseteq\mathcal N$,
\begin{align*}
&\mathcal N_{-}:=\{\sigma\in\mathcal N: \sigma(x_{\ell})=i_{\ell}-1 \quad\text{and}\quad \sigma(x_m)=i_m~\forall ~m\in [k]\backslash \{\ell\}\},\\
&\mathcal N_{=}:=\{\sigma\in\mathcal N: \sigma(x_{\ell})=i_{\ell} \quad\text{and}\quad \sigma(x_m)=i_m~\forall ~m\in [k]\backslash \{\ell\}\},\\
&\mathcal N_{+}:=\{\sigma\in\mathcal N: \sigma(x_{\ell})=i_{\ell}+1 \quad\text{and}\quad \sigma(x_m)=i_m~\forall ~m\in [k]\backslash \{\ell\}\},
\end{align*}
so Stanley's inequalities read
\begin{align}
\label{eq:eqmathcalN}
|\mathcal N_{=}|^2\ge |\mathcal N_{-}||\mathcal N_{+}|.
\end{align}
\subsection{Posets and polytopes}
Fundamental to our approach towards the extremals of \eqref{eq:eqmathcalN} is the connection, due to Stanley \cite{Sta81}, between posets and convex polytopes. We start with the definition of an \emph{order polytope}: Given $\beta\subseteq\alpha$ we let $\R^{\beta}:=\{t\in \R^{n-k}:t_j=0\mbox{ for } y_j\notin\beta\}$ and define the order polytope $O_{\beta}\subseteq\R^{\beta}\subseteq \R^{\alpha}$ by
\[
O_{\beta}:=\{t\in \R^{\beta}: t_j\in [0,1] ~\forall\, y_j\in\beta, \mbox{ and } t_u\le t_v\mbox{ if }y_u\le y_v~\forall\, y_u,y_v\in\beta\}.
\]
The order polytope encodes important properties of the poset, e.g., the volume of $O_{\alpha}$ is proportional to the number of linear extensions of $\alpha$ \cite[Corollary 4.2]{Sta86}. Let us recall some basic facts about order polytopes, which will require the following poset notions. A \emph{maximal} (res. \emph{minimal}) element $y\in\alpha$ is such that there exists no $z\in\alpha$, different than $y$, satisfying $y<z$ (res. $z<y$). Given a set $\beta\subseteq\alpha$ we define $\beta^{\uparrow}$ (res. $\beta^{\downarrow}$) to be the set of maximal (res. minimal) elements of $\beta$. Given a relation $\star\in\{\le,<,=,\ge,>,\sim,\not\le,\not<,\not=,\not\ge,\not>,\nsim\}$ and $y\in\beta$ we let
\[
\beta_{\star y}:=\{z\in\beta: z\star y\},
\]
and, similarly, given relations $\star,\ast\in\{\le,<,=,\ge,>,\sim,\not\le,\not<,\not=,\not\ge,\not>,\nsim\}$, and $y,y'\in\beta$, we write
\[
\beta_{\star y,\ast y'}:=\{z\in\beta: z\star y\mbox{ and }z\ast y'\}.
\]
An element $z\in\beta$ \emph{covers} $y\in\beta$ if $z\in \beta_{>y}^{\downarrow}$. We say that $\beta$ is an \emph{upper set} (res. \emph{lower set}) if $\alpha_{>y}\subseteq\beta$ (res. $\alpha_{<y}\subseteq\beta$), for every $y\in\beta$.
The next result provides information about the face structure of order polytopes based on the poset notions just introduced.
\begin{lemma}{\textnormal{(\cite[\S 1]{Sta86})}}
\label{lem:dimOb}
For any $\beta\subseteq\alpha$ we have $\dim O_{\beta}=|\beta|$. The \textnormal{($|\beta|-1$)}-dimensional faces of $O_{\beta}$ are precisely the following subsets of $O_{\beta}$:
\begin{enumerate}[(i)]
\item $O_{\beta}\cap\{t_j=0\}$ for $y_j\in\beta^{\downarrow}$.
\item $O_{\beta}\cap\{t_j=1\}$ for $y_j\in\beta^{\uparrow}$.
\item $O_{\beta}\cap\{t_u=t_v\}$ for $y_u,y_v\in\beta$ such that $y_v$ covers $y_u$ in $\beta$.
\end{enumerate}
\end{lemma}
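For a toy illustration of Lemma \ref{lem:dimOb} (not used later), take $\beta=\{y_1,y_2\}$ with the single relation $y_1<y_2$. Then $O_{\beta}=\{0\le t_1\le t_2\le 1\}$ is a triangle, and its three edges are exactly $O_{\beta}\cap\{t_1=0\}$ (from the minimal element $y_1$), $O_{\beta}\cap\{t_2=1\}$ (from the maximal element $y_2$), and $O_{\beta}\cap\{t_1=t_2\}$ (from $y_2$ covering $y_1$).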
Hyperplane sections of order polytopes will play a crucial role for us: Given $i\in \llbracket 0, k\rrbracket$, define the polytopes in $\R^{n-k}$,
\begin{align}
\label{eq:Ki}
&K_i:=\{t\in O_{\alpha}:t_j=0\mbox{ if }y_j<x_i,~ t_j=1\mbox{ if }y_j>x_{i+1},\mbox{ for all }y_j\in\alpha\}.
\end{align}
While we defined the polytopes $\{K_i\}$ as hyperplane sections of order polytopes, they are in fact nothing but translations of certain order polytopes. To see this relation we start with the next lemma whose proof is a matter of checking the definitions. In the sequel, given $\beta\subseteq \alpha$ let $1_{\beta}:=\sum_{y_j\in\beta}e_j$, with $\{e_j\}_{j\in\beta}$ denoting the standard basis of $\R^{\beta}$.
\begin{lemma}
\label{lem:Onot}
Let $\beta,\beta'\subseteq\alpha$ be disjoint sets where $\beta$ is an upper set and $\beta'$ is a lower set. Then,
\[
O_{\alpha\backslash(\beta\cup\beta')}+1_{\beta}=\{t\in O_{\alpha}:t_j=0 \mbox{ if }y_j\in\beta' \mbox{ and } t_j=1\mbox{ if }y_j\in \beta\},
\]
where we view $\R^{\alpha\backslash(\beta\cup\beta')}$ as a subset of $\R^{\alpha} \cong\R^{n-k}$.
\end{lemma}
We can now write $\{K_i\}_{i\in\llbracket 0,k\rrbracket}$ as translates of order polytopes. For $i\in\llbracket 0,k\rrbracket$ define
\begin{align}
\label{eq:betai}
\beta_i:=\alpha\backslash(\alpha_{<x_i}\cup \alpha_{>x_{i+1}}),
\end{align}
with the convention that $\beta_i=\varnothing$ if $i<0$ or $i>k$; for $S\subseteq \llbracket 0,k\rrbracket$ set $\beta_S:=\cup_{i\in S}\beta_i$. Then, applying Lemma \ref{lem:Onot}, with the disjoint upper and lower sets $\beta=\alpha_{>x_{i+1}},~ \beta'=\alpha_{<x_i}$, shows that
\begin{equation}
\label{eq:Kiordpoly}
K_i=O_{\beta_i}+1_{\alpha_{>x_{i+1}}}\mbox{ for }i\in\llbracket 0,k\rrbracket.
\end{equation}
As an example of $\beta_S$, which will be useful later, the following result handles the set $S:=\llbracket 0, r\rrbracket\cup \llbracket s, k\rrbracket$.
\begin{lemma}
\label{lem:betaintv}
For any $r\le s$,
\[
\beta_{\llbracket 0, r\rrbracket\cup \llbracket s, k\rrbracket}=\alpha\backslash\alpha_{>x_{r+1},<x_s}=\beta_r\cup\beta_s\cup\alpha_{<x_{r+1}}\cup \alpha_{>{x_s}}.
\]
\end{lemma}
\begin{proof}
The second identity is clear so we focus on the first identity. Let $j_0:=-1,~ 0\le j_1<\cdots<j_p\le k,~j_{p+1}:=k+1$. We claim that
\begin{equation}
\label{eq:capcup}
\bigcap_{q=1}^p(\alpha_{<x_{j_q}}\cup \alpha_{>x_{j_q+1}})=\bigcup_{q=0}^p\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}.
\end{equation}
$\subseteq$: Let $y\in \bigcap_{q=1}^p(\alpha_{<x_{j_q}}\cup \alpha_{>x_{j_q+1}})$ so that, for each $q\in \llbracket 1,p\rrbracket$, either $y<x_{j_q}$ or $y>x_{j_q+1}$. Let $q'$ be the largest $q$ such that $y>x_{j_q+1}$. Then, $y$ is not bigger than $x_{j_{(q'+1)}+1}$, which means that $y<x_{j_{(q'+1)}}$, as $y\in \alpha_{<x_{j_{(q'+1)}}}\cup \alpha_{>x_{j_{(q'+1)}+1}}$ (this is trivially true if $q'=p$). Hence, $y\in \alpha_{>x_{j_{q'}+1},<x_{j_{(q'+1)}}}$.
$\supseteq$: Let $y\in\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}$ for some $q\in \llbracket 0,p\rrbracket$. Then, for any $q'\le q$, $y>x_{j_{q'}+1}$ and, for any $q'> q$, $y<x_{j_{q'}}$. Hence, $y\in \bigcap_{q=0}^{p+1}(\alpha_{<x_{j_{q}}}\cup\alpha_{>x_{j_{q}+1}})= \bigcap_{q=1}^p(\alpha_{<x_{j_{q}}}\cup\alpha_{>x_{j_{q}+1}})$.\\
We now turn to the proof of the lemma. Let $j_0:=-1, j_{p+1}:=k+1$, and $\{j_1,\ldots,j_p\}:=\{0,\ldots,r,s,\ldots,k\}$. We have
\begin{align*}
\beta_{\llbracket 0, r\rrbracket\cup \llbracket s, k\rrbracket}&=\bigcup_{q=1}^p\beta_{j_q}=\bigcup_{q=1}^p\left(\alpha\bigg \backslash\left(\alpha_{<x_{j_q}}\cup \alpha_{>x_{j_q+1}}\right)\right)=\alpha\bigg \backslash\bigcap_{q=1}^p\left(\alpha_{<x_{j_q}}\cup \alpha_{>x_{j_q+1}}\right)\\
&\underset{\eqref{eq:capcup}}{=}\alpha\bigg \backslash \bigcup_{q=0}^p\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}.
\end{align*}
Whenever $j_q\neq r$, $j_q+1=j_{q+1}$, so $\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}=\varnothing$. It follows that $\bigcup_{q=0}^p\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}=\alpha_{>x_{r+1},<x_{s}}$, which completes the proof.
\end{proof}
\subsection{Posets and mixed volumes}
The connection between the polytopes $\{K_i\}_{i\in\llbracket 0,k\rrbracket}$ and $|\mathcal N_-|, |\mathcal N_=|, |\mathcal N_+|$, which leads to Stanley's proof of \eqref{eq:eqmathcalN}, goes through the notion of mixed volumes; we refer to \cite{Sch14} as the standard reference for the theory of convex bodies. Given convex bodies (nonempty compact convex sets) $C,C'\subseteq \R^{n-k}$, and scalars $\lambda,\lambda'\ge 0$, we define their sum as
\[
\lambda C+\lambda' C':=\{\lambda x+\lambda' y:x\in C, y\in C'\}.
\]
The volume of a sum of convex bodies behaves as a polynomial: Given convex bodies $C_1,\ldots,C_p\subseteq\R^{n-k}$, and scalars $\lambda_1,\ldots,\lambda_p\ge 0$, we have \cite[Theorem 5.1.7]{Sch14},
\[
\mathrm{Vol}_{n-k}(\lambda_1C_1+\cdots+\lambda_pC_p)=\sum_{ j_1,\ldots,j_{n-k}\in\llbracket 1, p\rrbracket}\mathsf{V}_{n-k}(C_{j_1},\ldots,C_{j_{n-k}})\lambda_{j_1}\cdots\lambda_{j_{n-k}}.
\]
The coefficients $\mathsf{V}_{n-k}(C_{j_1},\ldots,C_{j_{n-k}})$, which are nonnegative, symmetric, and multilinear in their arguments, are called \emph{mixed volumes}. Stanley's proof of \eqref{eq:eqmathcalN} relies on the following identification of $|\mathcal N_-|,|\mathcal N_=|,|\mathcal N_+|$ with mixed volumes \cite[Theorem 3.2]{Sta81}. For $m\in\llbracket 0,k\rrbracket$ let
\[
\mathcal K_m:=(\underbrace{K_m,\ldots,K_m}_{i_{m+1}-i_m-1}).
\]
Then,
\begin{align*}
&|\mathcal N_-|=(n-k)!\mathsf{V}_{n-k}(\mathcal K_0,\mathcal K_1,\ldots,\underbrace{K_{\ell-1},\ldots,K_{\ell-1}}_{i_\ell-1-i_{\ell-1}-1},\underbrace{K_\ell,\ldots,K_\ell}_{i_{\ell+1}-(i_\ell-1)-1},\mathcal K_{\ell+1},\ldots ,\mathcal K_k),\\
&|\mathcal N_=|=(n-k)!\mathsf{V}_{n-k}(\mathcal K_0,\mathcal K_1,\ldots,\underbrace{K_{\ell-1},\ldots,K_{\ell-1}}_{i_{\ell}-i_{\ell-1}-1},\underbrace{K_{\ell},\ldots,K_{\ell}}_{i_{\ell+1}-i_{\ell}-1},\mathcal K_{\ell+1},\ldots ,\mathcal K_k),\\
&|\mathcal N_+|=(n-k)!\mathsf{V}_{n-k}(\mathcal K_0,\mathcal K_1,\ldots,\underbrace{K_{\ell-1},\ldots,K_{\ell-1}}_{i_{\ell}+1-i_{\ell-1}-1},\underbrace{K_{\ell},\ldots,K_{\ell}}_{i_{\ell+1}-(i_{\ell}+1)-1},\mathcal K_{\ell+1},\ldots, \mathcal K_k).
\end{align*}
To shorten the notation, let
\begin{equation*}
\mathcal K:=(\mathcal K_0,\mathcal K_1,\ldots,\underbrace{K_{\ell-1},\ldots,K_{\ell-1}}_{i_{\ell}-i_{\ell-1}-2}, \underbrace{K_{\ell},\ldots,K_{\ell}}_{i_{\ell+1}-i_{\ell}-2},\mathcal K_{\ell+1},\ldots,\mathcal K_k),
\end{equation*}
to get
\begin{equation}
\label{eq:posetvolrep}
\begin{split}
&|\mathcal N_-|=(n-k)!\mathsf{V}_{n-k}(K_{\ell},K_{\ell},\mathcal K),\\
&|\mathcal N_=|=(n-k)!\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell},\mathcal K),\\
&|\mathcal N_+|=(n-k)!\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell-1},\mathcal K).
\end{split}
\end{equation}
With the representation \eqref{eq:posetvolrep} in hand, we get that the inequality \eqref{eq:eqmathcalN} is equivalent to
\[
\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell},\mathcal K)^2\ge \mathsf{V}_{n-k}(K_{\ell-1},K_{\ell-1},\mathcal K)\mathsf{V}_{n-k}(K_{\ell},K_{\ell},\mathcal K).
\]
The latter inequality follows immediately from the Alexandrov-Fenchel inequality \cite[Theorem 7.3.1]{Sch14}: For any convex bodies $C_1,\ldots,C_{n-k}\subseteq\R^{n-k}$ we have
\begin{equation}
\label{eq:AF}
\tag{AF}
\mathsf{V}_{n-k}(C_1,C_2,C_3,\ldots,C_{n-k})^2\ge \mathsf{V}_{n-k}(C_1,C_1,C_3,\ldots,C_{n-k})\mathsf{V}_{n-k}(C_2,C_2,C_3,\ldots,C_{n-k}).
\end{equation}
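In the plane ($n-k=2$) the collection $C_3,\ldots,C_{n-k}$ is empty and, since $\mathsf{V}_2(C,C)=\mathrm{Vol}_2(C)$, the inequality \eqref{eq:AF} reduces to Minkowski's classical inequality $\mathsf{V}_2(C_1,C_2)^2\ge \mathrm{Vol}_2(C_1)\,\mathrm{Vol}_2(C_2)$.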
This completes Stanley's proof of \eqref{eq:eqmathcalN}. Since our goal in this paper is to understand the equality cases of \eqref{eq:eqmathcalN}, the above discussion naturally leads to the investigation of the equality cases of the Alexandrov-Fenchel inequality itself.
\subsection{The extremals of the Alexandrov-Fenchel inequality for convex polytopes}
We start with the \emph{support function} associated to a convex body: Given a convex body $C\subseteq \R^{n-k}$ we define $h_C:S^{n-k-1}\to \R$ by
\[
h_C(\mathrm u):=\sup_{x\in C}\langle \mathrm u,x\rangle,\quad \mbox{for }\mathrm u\in S^{n-k-1}.
\]
The support function evaluated at $\mathrm u$ gives the signed distance from the origin to the supporting hyperplane of $C$ with outer normal $\mathrm u$. The support function respects the summation of convex bodies in the sense that
\[
h_{\lambda C+\lambda' C'}=\lambda h_C+\lambda'h_{C'},
\]
for any convex bodies $C,C'\subseteq\R^{n-k}$ and scalars $\lambda,\lambda'\ge 0$. The function $h_C$ completely describes $C$ in the sense that two convex bodies are the same if their support functions are identical. That is, $C=C'$ if $h_C(\mathrm u)=h_{C'}(\mathrm u)$ for every $\mathrm u\in S^{n-k-1}$. Since mixed volumes are invariant under translations, and scale proportionally with dilations, it is clear that equality holds in \eqref{eq:AF} whenever there exist $a\ge 0$ and $\mathrm v\in\R^{n-k}$ such that $h_{C_1}(\mathrm u)=h_{aC_2+\mathrm v}(\mathrm u)$ for every $\mathrm u\in S^{n-k-1}$. However, the difficulty in characterizing the extremals of the Alexandrov-Fenchel inequality stems from the fact that equality can be attained in \eqref{eq:AF} even if $h_{C_1}$ and $h_{aC_2+\mathrm v}$ agree on a very small subset of $S^{n-k-1}$. The complete characterization of the extremals of \eqref{eq:AF} has been open for decades. But in the case of \emph{polytopes}, which is the setting relevant to Stanley's inequalities, the problem was completely settled in \cite{SvH20}. In order to present the results of \cite{SvH20} we need some definitions. In the sequel, $B\subseteq \R^{n-k}$ always stands for the unit ball. Given a polytope $C\subseteq \R^{n-k}$ and $\mathrm u\in S^{n-k-1}$ we write
\[
F(C,\mathrm u):=\{x\in C:\langle \mathrm u,x\rangle=h_C(\mathrm u)\},
\]
for the face of $C$ in the direction $\mathrm u$. We recall \cite[Theorem 1.7.2]{Sch14} that
\begin{align}
\label{eq:linface}
F(C+C',\mathrm u)=F(C,\mathrm u)+F(C',\mathrm u),
\end{align}
for any convex bodies $C,C'$ and $\mathrm u\in S^{n-k-1}$.
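For instance, for the unit square $Q:=[0,1]^2\subseteq\R^2$ one has $F(Q,\mathrm u)=\{1\}\times[0,1]$ for $\mathrm u=(1,0)$, and $F(Q,\mathrm u)=\{(1,1)\}$ for $\mathrm u=\tfrac{1}{\sqrt 2}(1,1)$.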
\begin{definition}
\label{def:extreme}
Let $\mathcal C:=(C_3,\ldots,C_{n-k})$ be a nonempty collection of polytopes in $\R^{n-k}$. A vector $\mathrm u\in S^{n-k-1}$ is a $(B,\mathcal C)$\emph{-extreme normal direction} if, for any $\mathcal C'\subseteq\mathcal C$,
\[
\dim\left(\sum_{C\in\mathcal C'}F(C,\mathrm u)\right)\ge |\mathcal C'|.
\]
\end{definition}
The definition of $(B,\mathcal C)$-extreme normal directions plays a crucial role in the characterization of the extremals of the Alexandrov-Fenchel inequality for convex polytopes. For example, it follows from \cite{SvH20} that if $C_1,\ldots,C_{n-k}$ are \emph{full-dimensional} polytopes in $\R^{n-k}$, then, equality holds in \eqref{eq:AF} if, and only if, there exist $a\ge 0$ and $ \mathrm v\in\R^{n-k}$ such that
\[
h_{C_1}(\mathrm u)=h_{aC_2+\mathrm v}(\mathrm u) \quad \mbox{for every }(B,\mathcal C)\mbox{-extreme normal direction } \mathrm u.
\]
In the setting of Stanley's inequalities, the full-dimensionality assumption does not hold, so we need the full power of the results of \cite{SvH20}. This requires a few definitions.
\begin{definition}
\label{def:ssc}
Let $\mathcal C$ be a nonempty collection of polytopes in $\R^{n-k}$.
\begin{itemize}
\item The collection $\mathcal C$ is \emph{subcritical} if, for any nonempty collection $\mathcal C'\subseteq\mathcal C$, $\dim\left(\sum_{C\in \mathcal C'}C\right)\ge |\mathcal C'|$. A collection $\mathcal C' \subseteq\mathcal C$ is \emph{sharp-subcritical} if $\dim\left(\sum_{C\in \mathcal C'}C\right)= |\mathcal C'|$.
\item The collection $\mathcal C$ is \emph{critical} if, for any nonempty collection $\mathcal C'\subseteq\mathcal C$, $\dim\left(\sum_{C\in \mathcal C'}C\right)\ge |\mathcal C'|+1$. A collection $\mathcal C' \subseteq\mathcal C$ is \emph{sharp-critical} if $\dim\left(\sum_{C\in \mathcal C'}C\right)= |\mathcal C'|+1$.
\item The collection $\mathcal C$ is \emph{supercritical} if, for any nonempty collection $\mathcal C'\subseteq\mathcal C$, $\dim\left(\sum_{C\in \mathcal C'}C\right)\ge |\mathcal C'|+2$.
\end{itemize}
\end{definition}
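For instance, when $n-k=3$ the collection $\mathcal C=(C_3)$ consists of a single polytope: it is subcritical as soon as $\dim(C_3)\ge 1$, critical as soon as $\dim(C_3)\ge 2$, and supercritical precisely when $C_3$ is full-dimensional.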
The origin of the above definition is the following lemma, which characterizes the conditions under which mixed volumes are positive \cite[Theorem 5.1.8]{Sch14}.
\begin{lemma}{\textnormal{(\textbf{Positivity of mixed volumes})}}
\label{lem:mvpos}
Let $C_1,\ldots, C_{n-k}$ be convex bodies in $\R^{n-k}$. Then, $\mathsf{V}_{n-k}(C_1,\ldots, C_{n-k})>0$ if, and only if,
\[
\dim\left(\sum_{C\in\mathcal C'}C\right)\ge |\mathcal C'|\quad\mbox{for every collection } \mathcal C'\subseteq \{C_i\}_{i\in\llbracket 1,n-k\rrbracket}.
\]
\end{lemma}
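As a simple illustration, if $n-k=2$ and $C_1=C_2=[0,1]\times\{0\}$, then $C_1+C_2$ is again a segment, so the criterion fails for $\mathcal C'=\{C_1,C_2\}$ and Lemma \ref{lem:mvpos} gives $\mathsf{V}_2(C_1,C_2)=0$; indeed $\mathrm{Vol}_2(\lambda_1C_1+\lambda_2C_2)=0$ for all $\lambda_1,\lambda_2\ge 0$.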
For example, if the collection of polytopes $\mathcal C:=(C_3,\ldots,C_{n-k})$ in \eqref{eq:AF} is not subcritical, then Lemma \ref{lem:mvpos} shows that equality holds in \eqref{eq:AF} for trivial reasons: both sides of the inequality are zero. If $\mathcal C$ is subcritical with a sharp-subcritical collection, then the equality cases of \eqref{eq:AF} can be reduced to the equality cases of the Alexandrov-Fenchel inequality in a lower dimension; we refer to \cite{SvH20} for details. The difficult equality cases of \eqref{eq:AF} are the supercritical and, to a much larger degree, the critical collections. The following definition is needed for the characterization of the critical extremals of \eqref{eq:AF}.
\begin{definition}
\label{def:deg}
Let $\mathcal C=(C_3,\ldots,C_{n-k})$ be a collection of polytopes in $\R^{n-k}$ and let $(P,Q)$ be a pair of convex bodies in $\R^{n-k}$. The pair $(P,Q)$ is a $\mathcal C$\emph{-degenerate pair} if $P$ is not a translate of $Q$,
\begin{align*}
\mathsf{V}_{n-k}(P,Q,\mathcal C)=0,\quad\mbox{and}\quad \mathsf{V}_{n-k}(P,B,\mathcal C)=\mathsf{V}_{n-k}(Q,B,\mathcal C).
\end{align*}
\end{definition}
\begin{theorem}{\textnormal{(\cite[Theorem 2.13, Corollary 2.16]{SvH20})}}
\label{thm:SvH}
Let $C_1,\ldots, C_{n-k}$ be polytopes in $\R^{n-k}$ and let $\mathcal C:=(C_3,\ldots, C_{n-k})$.
\begin{itemize}
\item Suppose $\mathcal C$ is supercritical. Then, equality holds in \eqref{eq:AF} if, and only if, there exist $a\ge 0$ and $\mathrm v\in \R^{n-k}$ such that
\[
h_{C_1}(\mathrm u)=h_{aC_2+\mathrm v}(\mathrm u) \quad \mbox{for all }(B,\mathcal C)\textnormal{-extreme normal directions } \mathrm u.
\]
\item Suppose $\mathcal C$ is critical. Then, equality holds in \eqref{eq:AF} if, and only if, there exist $a\ge 0,\, \mathrm v\in \R^{n-k}$, and a number $0\le d<\infty$ of $\mathcal C$-degenerate pairs $(P_1,Q_1),\ldots,(P_d,Q_d)$, such that
\[
h_{C_1+\sum_{j=1}^dQ_j}(\mathrm u)=h_{aC_2+\mathrm v+\sum_{j=1}^dP_j}(\mathrm u) \quad \mbox{for all }(B,\mathcal C)\textnormal{-extreme normal directions } \mathrm u.
\]
\end{itemize}
\end{theorem}
\subsubsection{The extremals of Stanley's inequalities} The crux of our work lies in understanding how to apply Theorem \ref{thm:SvH} in our setting in order to get a \emph{combinatorial} characterization of the equality cases of \eqref{eq:eqmathcalN}. For convenience and future reference, let us explicitly write Theorem \ref{thm:SvH} in our setting.
\begin{theorem}
\label{thm:SvHComb}
$~$
\begin{itemize}
\item Suppose $\mathcal K$ is supercritical. Then, $|\mathcal N_{=}|^2=|\mathcal N_{-}||\mathcal N_{+}|$ holds, if, and only if, there exist $a\ge 0$ and $\mathrm v\in \R^{n-k}$ such that
\[
h_{K_{\ell-1}}(\mathrm u)=h_{aK_{\ell}+\mathrm v}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u.
\]
\item Suppose $\mathcal K$ is critical. Then, $|\mathcal N_{=}|^2=|\mathcal N_{-}||\mathcal N_{+}|$ holds, if, and only if, there exist $a\ge 0,\,\mathrm v\in \R^{n-k}$, and a number $0\le d<\infty$ of $\mathcal K$-degenerate pairs $(P_1,Q_1),\ldots,(P_d,Q_d)$, such that
\[
h_{K_{\ell-1}+\sum_{j=1}^dQ_j}(\mathrm u)=h_{aK_{\ell}+\mathrm v+\sum_{j=1}^dP_j}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u.
\]
\end{itemize}
\end{theorem}
Our proof proceeds by induction on $k$. The base case $k=0$ is trivial: equality in \eqref{eq:eqmathcalN} cannot occur because $|\mathcal N_-|=|\mathcal N_+|=0$ while $|\mathcal N_=|=|\mathcal N|>0$. Hence, Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit} hold trivially when $k=0$. From here on we assume that $k\ge 1$ and that equality holds in \eqref{eq:eqmathcalN}:
\begin{equation*}
|\mathcal N_{=}|^2=|\mathcal N_{+}||\mathcal N_{-}|\quad\Longleftrightarrow \quad\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell},\mathcal K)^2=\mathsf{V}_{n-k}(K_{\ell-1},K_{\ell-1},\mathcal K)\mathsf{V}_{n-k}(K_{\ell},K_{\ell},\mathcal K).
\end{equation*}
\begin{assumption}
\label{ass:induct}
Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit} hold true for $k-1$.
\end{assumption}
We conclude this section by introducing the notions of criticality for posets. The relation between the criticality notions of Definition \ref{def:ssc} and the following Definition \ref{def:posetcrit} is given in Section \ref{sec:notionsCrit}.
\begin{definition}
\label{def:posetcrit}
Let $\bar{\alpha}=\{y_1,\ldots,y_{n-k}\}\cup\{x_1,\ldots,x_k\}$ be a poset, with a fixed chain $x_1<\cdots<x_k$, and fix $1\le i_1<\cdots <i_k\le n$ such that $i_{\ell-1}+1<i_{\ell}<i_{\ell+1}-1$ for some fixed $\ell\in [k]$. Suppose that $|\mathcal N_=|>0$.
\begin{itemize}
\item The poset $\bar{\alpha}$ is \emph{supercritical} if, for any integer $p\ge 1$ and $\{j_1<\cdots<j_p\}\subseteq \{0,\ldots, k+1\}$, with $j_0:=-1,~j_{p+1}:=k+1$,
\[
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-2+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}(i_{j_{(q+1)}}-i_{j_q+1}-1).
\]
\item The poset $\bar{\alpha}$ is \emph{critical} if, for any integer $p\ge 1$ and $\{j_1<\cdots<j_p\}\subseteq \{0,\ldots, k+1\}$, with $j_0:=-1,~j_{p+1}:=k+1$,
\[
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-1+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}(i_{j_{(q+1)}}-i_{j_q+1}-1).
\]
\end{itemize}
Let us remark that the case $k=1$ is always supercritical; here we use that $|\mathcal N_-|,|\mathcal N_=|,|\mathcal N_+|>0$, which follows from $|\mathcal N_=|>0$ and $|\mathcal N_{=}|^2=|\mathcal N_{+}||\mathcal N_{-}|$.
\end{definition}
\section{Linear extensions}
\label{sec:found}
In this section we introduce a number of ideas and tools that will simplify the proofs of our main results. Section \ref{subsec:decomp} presents \emph{decompositions} of $\mathcal N_{-}, \mathcal N_{=},\mathcal N_{+}$. Section \ref{subsec:suff} uses these decompositions to prove the sufficiency part of Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit} (Proposition \ref{prop:suff}), and introduces conditions which are equivalent to Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit} (Lemma \ref{lem:suff}). Finally, Section \ref{subsec:cl} introduces the technical tool of \emph{closure}, where relations are added to the poset $\bar{\alpha}$ based on linear extensions.
\subsection{Decompositions of linear extensions}
\label{subsec:decomp}
Fix $\circ\in\{-,=,+\}$ and $\star, \ast\in\{\nsim,\sim\}$. Recall Definition \ref{def:neighbor} and let
\[
\mathcal N_{\circ}(\star,\ast):=\{\sigma\in\mathcal N_{\circ}:~\text{lower companion $\star ~ x_{\ell}$ and upper companion $\ast ~x_{\ell}$}\}.
\]
It is clear that we have the disjoint decompositions,
\begin{align}
\label{eq:decomps_sum}
|\mathcal N_-|&=|\mathcal N_-(\nsim,\nsim)|+|\mathcal N_-(\nsim,\sim)|+|\mathcal N_-(\sim,\nsim)|+|\mathcal N_-(\sim,\sim)|,\nonumber\\
|\mathcal N_=|&=|\mathcal N_=(\nsim,\nsim)|+|\mathcal N_=(\nsim,\sim)|+|\mathcal N_=(\sim,\nsim)|+|\mathcal N_=(\sim,\sim)|,\\
|\mathcal N_+|&=|\mathcal N_+(\nsim,\nsim)|+ |\mathcal N_+(\nsim,\sim)|+|\mathcal N_+(\sim,\nsim)|+|\mathcal N_+(\sim,\sim)|\nonumber.
\end{align}
The next result shows that, regardless of whether equality holds in \eqref{eq:eqmathcalN}, certain relations between terms in \eqref{eq:decomps_sum} always hold.
\begin{lemma}
\label{lem:decompsimpl}
For any poset $\bar{\alpha}$ the following hold:
\begin{enumerate}[(i)]
\item $|\mathcal N_-(\nsim,\nsim)|=|\mathcal N_=(\nsim,\nsim)|=| \mathcal N_+(\nsim,\nsim)|$.
\item $|\mathcal N_-(\nsim,\sim)|=|\mathcal N_=(\nsim,\sim)|$.
\item $|\mathcal N_=(\sim,\nsim)|=|\mathcal N_+(\sim,\nsim)|$.
\item $|\mathcal N_-(\sim,\nsim)|\le |\mathcal N_-(\nsim,\sim)|$.
\item $|\mathcal N_+(\nsim,\sim)|\le |\mathcal N_+(\sim,\nsim)|$.
\end{enumerate}
\end{lemma}
\begin{proof}
$~$
\begin{enumerate}[(i)]
\item We show $|\mathcal N_-(\nsim,\nsim)|=|\mathcal N_=(\nsim,\nsim)|$; the argument for $|\mathcal N_=(\nsim,\nsim)|=| \mathcal N_+(\nsim,\nsim)|$ is analogous. Let $\pi_{i_{\ell}-1,i_\ell}:[n]\to [n]$ be the permutation that swaps the positions $i_{\ell}-1$ and $i_{\ell}$. We claim that defining $\pi_{i_{\ell}-1,i_{\ell}}(\sigma):=\pi_{i_{\ell}-1,i_{\ell}}\circ \sigma$, for $\sigma \in \mathcal N_-(\nsim,\nsim)$, yields a bijection $\pi_{i_{\ell}-1,i_{\ell}}:\mathcal N_-(\nsim,\nsim)\to\mathcal N_=(\nsim,\nsim)$. That $\pi_{i_{\ell}-1,i_{\ell}}(\mathcal N_-(\nsim,\nsim))\subseteq\mathcal N_=(\nsim,\nsim)$ follows from the fact that $x_{\ell}$ is incomparable to the element placed at position $i_{\ell}$, so their positions can be swapped. Hence, to conclude that $\pi_{i_{\ell}-1,i_{\ell}}$ is a bijection it suffices to show that $\pi_{i_{\ell}-1,i_{\ell}}$ is invertible and that its inverse $\pi_{i_{\ell}-1,i_{\ell}}^{-1}$ satisfies $\pi_{i_{\ell}-1,i_{\ell}}^{-1}(\mathcal N_=(\nsim,\nsim))\subseteq \mathcal N_-(\nsim,\nsim)$. The inverse $\pi_{i_{\ell}-1,i_{\ell}}^{-1}$ exists since $\pi_{i_{\ell}-1,i_{\ell}}^{-1}=\pi_{i_{\ell}-1,i_{\ell}}$. That $\pi_{i_{\ell}-1,i_{\ell}}(\mathcal N_=(\nsim,\nsim))\subseteq \mathcal N_-(\nsim,\nsim)$ is clear.
\item Analogous argument to (i).
\item Analogous argument to (i).
\item Let $\pi_{i_{\ell},i_\ell+1}:[n]\to [n]$ be the permutation that swaps the positions $i_{\ell}$ and $i_{\ell}+1$. We claim that defining $\pi_{i_{\ell},i_{\ell}+1}(\sigma):=\pi_{i_{\ell},i_{\ell}+1}\circ \sigma$, for $\sigma \in \mathcal N_-(\sim,\nsim)$, yields an injection $\pi_{i_{\ell},i_{\ell}+1}:\mathcal N_-(\sim,\nsim)\to\mathcal N_-(\nsim,\sim)$. Indeed, fix $\sigma \in \mathcal N_-(\sim,\nsim)$, so $\sigma(x_{\ell})=i_{\ell}-1$, and let $y_u:=\sigma^{-1}(i_{\ell}),y_v:=\sigma^{-1}(i_{\ell}+1)$ so that, by the definition of $ \mathcal N_-(\sim,\nsim)$, $x_{\ell}<y_u$ and $y_v\nsim x_{\ell}$. We cannot have $y_u<y_v$ since that would imply $x_{\ell}<y_u<y_v$, contradicting $y_v\nsim x_{\ell}$. Since $y_u=\sigma^{-1}(i_{\ell})$ and $y_v=\sigma^{-1}(i_{\ell}+1)$, we cannot have $y_v<y_u$, so we must have $y_u\nsim y_v$. It follows that swapping the positions of $y_u$ and $y_v$ in $\sigma$ yields the linear extension $\pi_{i_{\ell},i_{\ell}+1}(\sigma)\in \mathcal N_-(\nsim,\sim)$.
\item Analogous argument to (iv).
\end{enumerate}
\end{proof}
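The quantities in \eqref{eq:decomps_sum} are easy to sanity-check by brute force on small posets. The following minimal Python sketch (purely illustrative, and not used anywhere in the proofs) enumerates the relevant linear extensions and tallies $|\mathcal N_{\circ}(\star,\ast)|$; it assumes the following reading of Definition \ref{def:neighbor}: the lower and upper companions of $x_{\ell}$ in $\sigma$ are the elements occupying the two positions of $\{i_{\ell}-1,i_{\ell},i_{\ell}+1\}$ other than $\sigma(x_{\ell})$, and $\sim$ (resp.\ $\nsim$) means comparable (resp.\ incomparable) to $x_{\ell}$ in $\bar{\alpha}$. All function and variable names are hypothetical.
\begin{verbatim}
# Minimal Python sketch (sanity checks only): brute-force the tallies in the
# decomposition (eq:decomps_sum) for a small poset.  The "companion" convention
# below is our reading of Definition (def:neighbor): the lower and upper
# companions of x_l occupy the two positions of {i_l-1, i_l, i_l+1} other than
# sigma(x_l); "comparable" means comparable to x_l in the poset.
from itertools import permutations

def tallies(n, k, i, l, relations):
    # elements: x1..xk (the chain) and y1..y_{n-k}; relations: pairs (a,b), a<b
    elems = ['x%d' % j for j in range(1, k + 1)]
    elems += ['y%d' % j for j in range(1, n - k + 1)]
    rel = set(relations) | {('x%d' % j, 'x%d' % (j + 1)) for j in range(1, k)}
    changed = True
    while changed:                      # transitive closure
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d)); changed = True
    xl = 'x%d' % l
    comp = lambda a: (a, xl) in rel or (xl, a) in rel
    out = {}
    for perm in permutations(range(1, n + 1)):
        s = dict(zip(elems, perm))      # a candidate bijection elems -> [n]
        if any(s[a] >= s[b] for (a, b) in rel):
            continue                    # not a linear extension
        if any(s['x%d' % j] != i[j] for j in range(1, k + 1) if j != l):
            continue                    # x_j must occupy position i_j for j != l
        if s[xl] not in (i[l] - 1, i[l], i[l] + 1):
            continue
        circ = {i[l] - 1: '-', i[l]: '=', i[l] + 1: '+'}[s[xl]]
        inv = {v: e for e, v in s.items()}
        lo, hi = sorted({i[l] - 1, i[l], i[l] + 1} - {s[xl]})
        key = (circ, comp(inv[lo]), comp(inv[hi]))  # (circ, lower ~?, upper ~?)
        out[key] = out.get(key, 0) + 1
    return out

# The poset of Example (ex:cl): x1 < x2 (chain), y1 < x1; i_1=2, i_2=4, l=2, n=5.
print(tallies(5, 2, {1: 2, 2: 4}, 2, {('y1', 'x1')}))
\end{verbatim}
On the poset of Example \ref{ex:cl} below, the sketch returns the count $2$ for each of $\mathcal N_{\circ}(\nsim,\nsim)$, $\circ\in\{-,=,+\}$, and $0$ for all other cells, in agreement with Lemma \ref{lem:decompsimpl}(i).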
\subsection{Sufficiency}
\label{subsec:suff}
The decompositions given in Section \ref{subsec:decomp} help us prove the sufficiency of the conditions of Theorem \ref{thm:supcrit}(iii) and Theorem \ref{thm:crit}(iii).
\begin{proposition}{\textnormal{(\textbf{Sufficient conditions})}}
\label{prop:suff}
$~$
\begin{enumerate}[(a)]
\item Theorem \ref{thm:supcrit}(ii) $\Longrightarrow$ Theorem \ref{thm:supcrit}(i) and Theorem \ref{thm:crit}(ii) $\Longrightarrow$ Theorem \ref{thm:crit}(i).
\item Theorem \ref{thm:crit}(iii) $\Longrightarrow$ Theorem \ref{thm:crit}(ii).
\item Theorem \ref{thm:supcrit}(iii) $\Longrightarrow$ Theorem \ref{thm:crit}(iii) $\Longrightarrow$ Theorem \ref{thm:supcrit}(ii).
\end{enumerate}
\end{proposition}
\begin{proof}
$~$
\begin{enumerate}[(a)]
\item Immediate.\\
\item The conditions in Theorem \ref{thm:crit}(iii) read
\begin{align*}
&|\mathcal N_-(\sim,\sim)|=|\mathcal N_=(\sim,\sim)|=| \mathcal N_+(\sim,\sim)|=0,\\
&|\mathcal N_-(\nsim,\sim)|=|\mathcal N_-(\sim,\nsim)|=\mathrm N_1,\\
&|\mathcal N_=(\nsim,\sim)|=|\mathcal N_=(\sim,\nsim)|=\mathrm N_1,\\
&|\mathcal N_+(\nsim,\sim)|=|\mathcal N_+(\sim,\nsim)|=\mathrm N_1,\\
&|\mathcal N_-(\nsim,\nsim)|=|\mathcal N_=(\nsim,\nsim)|=| \mathcal N_+(\nsim,\nsim)|=\mathrm N_2.
\end{align*}
Hence, \eqref{eq:decomps_sum} reads
\begin{align*}
&|\mathcal N_-|=\mathrm N_2+\mathrm N_1+\mathrm N_1+0=\mathrm N_2+2\mathrm N_1,\\
&|\mathcal N_=|=\mathrm N_2+\mathrm N_1+\mathrm N_1+0=\mathrm N_2+2\mathrm N_1,\\
&|\mathcal N_+|=\mathrm N_2+\mathrm N_1+\mathrm N_1+0=\mathrm N_2+2\mathrm N_1,
\end{align*}
which is the statement in Theorem \ref{thm:crit}(ii).\\
\item The first implication is immediate and the second implication follows from (b).
\end{enumerate}
\end{proof}
In order to prove Theorem \ref{thm:supcrit} and Theorem \ref{thm:crit} it remains to show that Theorem \ref{thm:supcrit}(i) $\Longrightarrow$ Theorem \ref{thm:supcrit}(iii) and Theorem \ref{thm:crit}(i) $\Longrightarrow$ Theorem \ref{thm:crit}(iii). To this end, the following conditions will suffice.
\begin{lemma}
\label{lem:suff}
$~$
\begin{enumerate}[(a)]
\item The conditions in Theorem \ref{thm:supcrit}(iii) hold if, and only if,
\[
|\mathcal N_=(\nsim,\sim)|=|\mathcal N_=(\sim,\nsim)|=|\mathcal N_=(\sim,\sim)|=0.
\]
\item Suppose $|\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|$. The conditions in Theorem \ref{thm:crit}(iii) hold if, and only if,
\[
|\mathcal N_-(\sim,\sim)|=|\mathcal N_+(\sim,\sim)|=0.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
We start with the proof of (a). The ``only if'' part is clear. To prove the ``if'' part, assume that
\[
|\mathcal N_=(\nsim,\sim)|=|\mathcal N_=(\sim,\nsim)|=|\mathcal N_=(\sim,\sim)|=0,
\]
which by \eqref{eq:decomps_sum} implies
\[
|\mathcal N_=|=|\mathcal N_=(\nsim,\nsim)|.
\]
On the other hand, Lemma \ref{lem:decompsimpl}(i) yields
\[
N':=|\mathcal N_-(\nsim,\nsim)|=|\mathcal N_=(\nsim,\nsim)|=|\mathcal N_+(\nsim,\nsim)|,
\]
so \eqref{eq:decomps_sum} reads
\begin{align*}
|\mathcal N_-|&=N'+|\mathcal N_-(\nsim,\sim)|+|\mathcal N_-(\sim,\nsim)|+|\mathcal N_-(\sim,\sim)|,\\
|\mathcal N_=|&=N',\\
|\mathcal N_+|&=N'+ |\mathcal N_+(\nsim,\sim)|+|\mathcal N_+(\sim,\nsim)|+|\mathcal N_+(\sim,\sim)|.
\end{align*}
Stanley's inequality \eqref{eq:eqmathcalN},
\[
|\mathcal N_{=}|^2\ge |\mathcal N_{-}||\mathcal N_{+}|,
\]
implies that all the terms other than $N'$ must vanish, which completes the proof.\\
We now prove (b). The ``only if'' part is clear. To prove the ``if'' part, assume that
\[
|\mathcal N_-(\sim,\sim)|=| \mathcal N_+(\sim,\sim)|=0.
\]
Using Lemma \ref{lem:decompsimpl}(i-iii), set
\begin{align*}
&N':=|\mathcal N_-(\nsim,\nsim)|=|\mathcal N_=(\nsim,\nsim)|=| \mathcal N_+(\nsim,\nsim)|,\\
&N_a':=|\mathcal N_-(\nsim,\sim)|=|\mathcal N_=(\nsim,\sim)|,\\
&N_b':=|\mathcal N_=(\sim,\nsim)|=|\mathcal N_+(\sim,\nsim)|,
\end{align*}
so \eqref{eq:decomps_sum} reads
\begin{align*}
|\mathcal N_-|&=N'+N_a'+|\mathcal N_-(\sim,\nsim)|,\\
|\mathcal N_=|&=N'+N_a'+N_b'+|\mathcal N_=(\sim,\sim)|,\\
|\mathcal N_+|&=N'+N_b'+ |\mathcal N_+(\nsim,\sim)|.
\end{align*}
By Lemma \ref{lem:decompsimpl}(iv-v),
\[
|\mathcal N_-(\sim,\nsim)|\le N_a'\quad\text{and}\quad |\mathcal N_+(\nsim,\sim)|\le N_b'
\]
so
\begin{align*}
|\mathcal N_-|&=N'+N'_a+|\mathcal N_-(\sim,\nsim)|\le N'+2N_a',\\
|\mathcal N_=|&=N'+N'_a+N'_b+|\mathcal N_=(\sim,\sim)|\ge N'+N_a'+N_b',\\
|\mathcal N_+|&=N'+N_b'+ |\mathcal N_+(\nsim,\sim)|\le N'+2N_b'.
\end{align*}
Hence,
\begin{align*}
&(N'+2N_a')(N'+2N_b')=(N'+N_a'+N_b')^2-(N_a'-N_b')^2\le (N'+N_a'+N_b')^2\\
&\le |\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|\le (N'+2N_a')(N'+2N_b').
\end{align*}
It follows that all of the above inequalities are in fact equalities. In particular,
\begin{align}
&|\mathcal N_=(\sim,\sim)|=0,\label{N-=0}\\
&N_a'=N_b',\label{Na'=Nb'}\\
&|\mathcal N_-(\sim,\nsim)|=N_a',\label{Na'=}\\
&|\mathcal N_+(\nsim,\sim)|=N_b'.\label{Nb'=}
\end{align}
The identity \eqref{N-=0}, together with the assumption $|\mathcal N_-(\sim,\sim)|=|\mathcal N_+(\sim,\sim)|=0$, implies that every linear extension in $\mathcal N_{\circ}$, for any $\circ\in\{-,=,+\}$, has at most one companion comparable to $x_{\ell}$. It remains to show that there exist nonnegative numbers $\mathrm N_1, \mathrm N_2$ such that
\begin{align*}
&|\mathcal N_-(\sim,\nsim)|=|\mathcal N_=(\sim,\nsim)|=|\mathcal N_+(\sim,\nsim)|=|\mathcal N_-(\nsim,\sim)|=|\mathcal N_=(\nsim,\sim)|=|\mathcal N_+(\nsim,\sim)|=\mathrm N_1,\\
&|\mathcal N_-(\nsim,\nsim)|=|\mathcal N_=(\nsim,\nsim)|=|\mathcal N_+(\nsim,\nsim)|=\mathrm N_2.
\end{align*}
The first part follows since
\begin{align*}
&|\mathcal N_-(\sim,\nsim)|\underset{\eqref{Na'=}}{=}N_a':=|\mathcal N_-(\nsim,\sim)|\underset{\text{Lemma \ref{lem:decompsimpl}(ii)}}{=}|\mathcal N_=(\nsim,\sim)|\underset{\eqref{Na'=Nb'}}{=}N_b':=|\mathcal N_=(\sim,\nsim)|\\
&\underset{\text{Lemma \ref{lem:decompsimpl}(iii)}}{=}|\mathcal N_+(\sim,\nsim)|\underset{\eqref{Nb'=}}{=}|\mathcal N_+(\nsim,\sim)|=:\mathrm N_1,
\end{align*}
and the second part follows by Lemma \ref{lem:decompsimpl}(i).
\end{proof}
\subsection{Closure}
\label{subsec:cl}
Since we are interested in the extremals of \eqref{eq:eqmathcalN}, it is beneficial to add relations to $\bar{\alpha}$ which are compatible with $\mathcal N_{-},\mathcal N_{=},\mathcal N_{+}$, while leaving these sets invariant.
\begin{definition}
\label{def:clposet}
Denote by $\textnormal{Cl}(\bar{\alpha})$ (the \emph{closure} of $\bar{\alpha}$) the poset with the same elements as $\bar{\alpha}$ and with the partial order on $\textnormal{Cl}(\bar{\alpha})$ given by
\[
w<z\quad\textnormal{if and only if}\quad \sigma(w)<\sigma(z)~\forall ~\sigma\in \mathcal N_-\cup \mathcal N_=\cup \mathcal N_+.
\]
Let
\[
\mathcal N^{\text{cl}}:=\{\textnormal{bijections }\sigma:\textnormal{Cl}(\bar{\alpha})\to [n]: w\le z\Rightarrow \sigma(w)\le \sigma(z)~\forall ~w,z\in\textnormal{Cl}(\bar{\alpha})\},
\]
with the analogous $\mathcal N_{\circ}^{\text{cl}}(\star,\ast)$ for $\circ \in\{-,=,+\}$ and $\star,\ast\in\{\nsim,\sim\}$.
\end{definition}
We first need to check that Definition \ref{def:clposet} is well-defined. Indeed, if $z_1,z_2,z_3\in \textnormal{Cl}(\bar{\alpha})$ are such that $z_1<z_2$ and $z_2<z_3$ in $\textnormal{Cl}(\bar{\alpha})$, then, by definition, $\sigma(z_1)<\sigma(z_2)$ and $\sigma(z_2)<\sigma(z_3)$ for every $\sigma\in \bigcup_{\circ \in\{-,=,+\}} \mathcal N_{\circ}$, so $\sigma(z_1)<\sigma(z_2)<\sigma(z_3)$. It follows that $z_1<z_3$ in $\textnormal{Cl}(\bar{\alpha})$.
Let us now show that the relations in $\textnormal{Cl}(\bar{\alpha})$ are compatible with the relations in $\bar{\alpha}$.
\begin{lemma}
\label{lem:posetcl}
If $z_1<z_2$ in $\bar{\alpha}$ then $z_1<z_2$ in $\textnormal{Cl}(\bar{\alpha})$. If $z_1\nsim z_2$ in $\textnormal{Cl}(\bar{\alpha})$ then $z_1\nsim z_2$ in $\bar{\alpha}$.
\end{lemma}
\begin{proof}
If $z_1<z_2$ in $\bar{\alpha}$, then $\sigma(z_1)<\sigma(z_2)$ for every $\sigma\in \bigcup_{\circ \in\{-,=,+\}} \mathcal N_{\circ}$, so $z_1<z_2$ in $\textnormal{Cl}(\bar{\alpha})$. Applying the contrapositive of this statement to both orderings of $z_1,z_2$ shows that if $z_1\nsim z_2$ in $\textnormal{Cl}(\bar{\alpha})$ then $z_1\nsim z_2$ in $\bar{\alpha}$.
\end{proof}
While the closure operation is compatible with the relations in $\bar{\alpha}$, it can introduce new relations as the following example demonstrates.
\begin{example}
\label{ex:cl}
Let $\bar{\alpha}=\{x_1,x_2,y_1,y_2,y_3\}$, so $k=2$ and $n=5$, and suppose that the only relations are $x_1<x_2$ and $y_1<x_1$. Let $i_1=2,i_2=4$ and $\ell=2$, and note that $i_{\ell-1}+1=i_1+1=3<4=i_2=i_{\ell}< 5=(n+1)-1=i_{\ell+1}-1$. Let us show that, in $\textnormal{Cl}(\bar{\alpha})$, $x_1<y_2$ and $x_1<y_3$, relations which do not hold in $\bar{\alpha}$. Indeed, take any $\sigma\in \mathcal N_-\cup \mathcal N_=\cup \mathcal N_+$ and note that $\sigma(x_1)=i_1=2$ so, since $y_1<x_1$, we must have $\sigma(y_1)=1$. Thus, $\sigma(y_2),\sigma(y_3)>2$, and hence, in $\textnormal{Cl}(\bar{\alpha})$, $x_1<y_2$ and $x_1<y_3$. See Figure \ref{fig:excl}.
\begin{figure}
\begin{tikzpicture}[scale=1]
\begin{scope}
\node (x2) at (0,0) {$\color{red}{x_2}$};
\node (x1) at (0,-1) {$\color{red}{x_1}$};
\node (y3) at (2,-2) {$\color{blue}{y_3}$};
\node (y2) at (1,-2) {$\color{blue}{y_2}$};
\node (y1) at (0,-2) {$\color{blue}{y_1}$};
\draw[->,thick] (x2) -- (x1);
\draw[->,thick] (x1) -- (y1);
\node at (1,-2.7){$\bar{\alpha}$};
\end{scope}
\begin{scope}[xshift=5cm]
\node (x2) at (0,0) {$\color{red}{x_2}$};
\node (x1) at (0,-1) {$\color{red}{x_1}$};
\node (y3) at (1,0) {$\color{blue}{y_3}$};
\node (y2) at (-1,0) {$\color{blue}{y_2}$};
\node (y1) at (0,-2) {$\color{blue}{y_1}$};
\draw[->,thick] (x2) -- (x1);
\draw[->,thick] (x1) -- (y1);
\draw[->,dashed,thick] (y2) -- (x1);
\draw[->,dashed,thick] (y3) -- (x1);
\node at (0,-2.7){$\textnormal{Cl}(\bar{\alpha})$};
\end{scope}
\begin{scope}[xshift=6cm, yshift=-0.3cm]
\node at (5,0) {$\mathcal N_- =\{{\color{blue}{y_1}}{\color{red}{x_1x_2}}{\color{blue}{y_2y_3}},~ {\color{blue}{y_1}}{\color{red}{x_1x_2}}{\color{blue}{y_3y_2}}\}$};
\node at (5,-1) {$\mathcal N_= =\{{\color{blue}{y_1}}{\color{red}{x_1}}{\color{blue}{y_2}}{\color{red}{x_2}}{\color{blue}{y_3}}, ~{\color{blue}{y_1}}{\color{red}{x_1}}{\color{blue}{y_3}}{\color{red}{x_2}}{\color{blue}{y_2}}\}$};
\node at (5,-2) {$\mathcal N_+ =\{ {\color{blue}{y_1}}{\color{red}{x_1}}{\color{blue}{y_2y_3}}{\color{red}{x_2}}, ~{\color{blue}{y_1}}{\color{red}{x_1}}{\color{blue}{y_3y_2}}{\color{red}{x_2}}\}$};
\end{scope}
\end{tikzpicture}
\caption{Hasse diagrams (arrows point from larger to smaller elements) of the posets in Example \ref{ex:cl}, together with their (identical) sets of linear extensions, showing that new relations can occur under the closure operation.}
\label{fig:excl}
\end{figure}
\end{example}
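The closure of the poset in Example \ref{ex:cl} can also be recovered mechanically from the six linear extensions listed in Figure \ref{fig:excl}: by Definition \ref{def:clposet}, the relations of $\textnormal{Cl}(\bar{\alpha})$ are exactly the pairs ordered the same way in all six extensions. The following short Python sketch (purely illustrative) prints these pairs, including the two new relations $x_1<y_2$ and $x_1<y_3$.
\begin{verbatim}
# Illustrative sketch: compute Cl(alpha-bar) of Example (ex:cl) directly from
# the six linear extensions listed in Figure (fig:excl).
exts = ["y1 x1 x2 y2 y3", "y1 x1 x2 y3 y2",   # N_-
        "y1 x1 y2 x2 y3", "y1 x1 y3 x2 y2",   # N_=
        "y1 x1 y2 y3 x2", "y1 x1 y3 y2 x2"]   # N_+
exts = [e.split() for e in exts]
for w in exts[0]:
    for z in exts[0]:
        if w != z and all(e.index(w) < e.index(z) for e in exts):
            print(w, "<", z)   # prints x1 < y2 and x1 < y3, among others
\end{verbatim}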
The next result shows that our basic objects of interest remain more-or-less invariant under the closure operation. To simplify the notation, let $(\textnormal{i})$ (resp.\ $(\textnormal{ii})$) stand for the conditions in Theorem \ref{thm:supcrit}(i) and Theorem \ref{thm:crit}(i) (resp.\ Theorem \ref{thm:supcrit}(ii) and Theorem \ref{thm:crit}(ii)), and let $(\textnormal{iii}_{\text{supcrit}})$ (resp.\ $(\textnormal{iii}_{\text{crit}})$) stand for the conditions in Theorem \ref{thm:supcrit}(iii) (resp.\ Theorem \ref{thm:crit}(iii)). We use a superscript ``cl'' for the corresponding notation when $\textnormal{Cl}(\bar{\alpha})$, rather than $\bar{\alpha}$, is used.
\begin{proposition}
\label{prop:clposet}
The set $\textnormal{Cl}(\bar{\alpha})$ is a poset satisfying
\begin{enumerate}[(a)]
\item $\mathcal N_{\circ}^{\textnormal{cl}}=\mathcal N_{\circ}$ for every $\circ\in\{-,=,+\}$.
\item $(\textnormal{i}^{\textnormal{cl}})\Longleftrightarrow (\textnormal{i})$,
\item $(\textnormal{ii}^{\textnormal{cl}})\Longleftrightarrow (\textnormal{ii})$,
\item $(\textnormal{iii}_{\textnormal{supcrit}}^{\textnormal{cl}})\Longrightarrow (\textnormal{iii}_{\textnormal{supcrit}})$ and $(\textnormal{iii}_{\textnormal{crit}}^{\textnormal{cl}})\Longrightarrow (\textnormal{iii}_{\textnormal{crit}})$.
\end{enumerate}
\end{proposition}
\begin{proof}
$\textnormal{Cl}(\bar{\alpha})$ is indeed a poset since irreflexivity is immediate and transitivity was checked after Definition \ref{def:clposet}.\\
\begin{enumerate}[(a)]
\item We show that $\mathcal N_=^{\text{cl}}=\mathcal N_=$; the proof that $\mathcal N_-^{\text{cl}}=\mathcal N_-$ and $\mathcal N_+^{\text{cl}}=\mathcal N_+$ is analogous. We start by observing that since Lemma \ref{lem:posetcl} yields ``$w<z$ in $\bar{\alpha}$ implies $w<z$ in $\textnormal{Cl}(\bar{\alpha})$'', it follows that ``$\sigma\in \mathcal N_=^{\text{cl}}$ implies $\sigma\in \mathcal N_=$''. Conversely, let $\sigma\in\mathcal N_=$; it suffices to show that $\sigma\in\mathcal N^{\text{cl}}$. The latter holds since if $w<z$ in $\textnormal{Cl}(\bar{\alpha})$, then it must be, by the definition of $\textnormal{Cl}(\bar{\alpha})$, that $\sigma(w)<\sigma(z)$, and hence $\sigma\in\mathcal N^{\text{cl}}$.\\
\item Follows trivially from (a).\\
\item Follows trivially from (a).\\
\item We show that
\begin{align}
\label{eq:clsupcrit}
|\mathcal N_{=}^{\text{cl}}(\nsim,\sim)|=|\mathcal N_{=}^{\text{cl}}(\sim,\nsim)|=|\mathcal N_{=}^{\text{cl}}(\sim,\sim)|=0\quad\Longrightarrow \quad |\mathcal N_{=}(\nsim,\sim)|=|\mathcal N_{=}(\sim,\nsim)|=|\mathcal N_{=}(\sim,\sim)|=0,
\end{align}
which proves $(\textnormal{iii}_{\textnormal{supcrit}}^{\textnormal{cl}})\Longrightarrow (\textnormal{iii}_{\textnormal{supcrit}})$ by Lemma \ref{lem:suff}(a). To establish \eqref{eq:clsupcrit} we show $|\mathcal N_{=}(\nsim,\sim)|=0$; the proof of $|\mathcal N_{=}(\sim,\nsim)|=0$ and $|\mathcal N_{=}(\sim,\sim)|=0$ is analogous. Suppose $|\mathcal N_{=}(\nsim,\sim)|>0$ so there exists $\sigma\in\mathcal N_=$ such that $\sigma(x_{\ell})=i_{\ell}$ and $x_{\ell}<\sigma^{-1}(i_{\ell}+1)$ in $\bar{\alpha}$. By (a), $\sigma \in\mathcal N_{=}^{\text{cl}}$, and by Lemma \ref{lem:posetcl}, $x_{\ell}<\sigma^{-1}(i_{\ell}+1)$ in $\textnormal{Cl}(\bar{\alpha})$. It follows that $\sigma\in\mathcal N_{=}^{\text{cl}}(\nsim,\sim)\cup \mathcal N_{=}^{\text{cl}}(\sim,\sim)$, which is a contradiction.
Next we show
\begin{align}
\label{eq:clto}
|\mathcal N_{-}^{\text{cl}}(\sim,\sim)|=|\mathcal N_{+}^{\text{cl}}(\sim,\sim)|=0\quad\Longrightarrow \quad |\mathcal N_{-}(\sim,\sim)|=|\mathcal N_{+}(\sim,\sim)|=0.
\end{align}
Since $(\textnormal{iii}_{\textnormal{crit}}^{\textnormal{cl}})\Longrightarrow (\textnormal{i}_{\textnormal{crit}}^{\textnormal{cl}})$ by Proposition \ref{prop:suff}(a--b), and since $(\textnormal{i}^{\textnormal{cl}})\Longleftrightarrow (\textnormal{i})$ by part (a), the proof will be complete by Lemma \ref{lem:suff}(b).
To establish \eqref{eq:clto} we show that $|\mathcal N_{+}^{\text{cl}}(\sim,\sim)|=0\Rightarrow |\mathcal N_{+}(\sim,\sim)|=0$; the proof of $|\mathcal N_{-}^{\text{cl}}(\sim,\sim)|=0\Rightarrow |\mathcal N_{-}(\sim,\sim)|=0$ is analogous. Indeed, if $|\mathcal N_{+}(\sim,\sim)|>0$ then there exists $\sigma\in \mathcal N_{+}$ such that $\sigma(x_{\ell})=i_{\ell}+1$ and $\sigma^{-1}(i_{\ell}-1),\sigma^{-1}(i_{\ell})$ are both smaller than $x_{\ell}$ in $\bar{\alpha}$. By (a), $\sigma\in \mathcal N_{+}^{\text{cl}}$, and by Lemma \ref{lem:posetcl}, $\sigma^{-1}(i_{\ell}-1),\sigma^{-1}(i_{\ell})$ are both smaller than $x_{\ell}$ in $\textnormal{Cl}(\bar{\alpha})$. It follows that $\sigma\in \mathcal N_{+}^{\text{cl}}(\sim,\sim)$. In other words, $|\mathcal N_{+}(\sim,\sim)|>0\Rightarrow |\mathcal N_{+}^{\text{cl}}(\sim,\sim)|>0$, which is the contrapositive of what we want to show.
\end{enumerate}
\end{proof}
\section{Proof outline}
\label{sec:outline}
In this section we outline the proof of the characterization of the extremals of Stanley's inequalities. The first step is to understand how we use the closure procedure. We have the following equivalences:
\begin{align*}
&(\textnormal{i}^{\text{cl}})\quad\underset{\text{Thms. \ref{thm:supcritsec}, \ref{thm:critsec} + Lem. \ref{lem:suff}}}{\Longrightarrow}\hspace{.05in} (\textnormal{iii}^{\text{cl}})\quad\underset{\text{Prop. \ref{prop:suff}(b-c) }}{\Longrightarrow}\quad (\textnormal{ii}^{\text{cl}})\quad\underset{\text{trivial}}{\Longrightarrow}\quad (\textnormal{i}^{\text{cl}})\\
&\Updownarrow\tiny\text{Prop. \ref{prop:clposet}(b)}\hspace{1.23in}\Downarrow \tiny\text{Prop. \ref{prop:clposet}(d)}\hspace{.69in}\Updownarrow \tiny\text{Prop. \ref{prop:clposet}(c)}\hspace{0.23in}\Updownarrow\tiny\text{Prop. \ref{prop:clposet}(b)}\\
&\hspace{0.01in}(\textnormal{i}) \hspace{1.8in}(\textnormal{iii}) \hspace{0.3in}\underset{\text{Prop. \ref{prop:suff}(b-c)}}{\Longrightarrow}\hspace{0.15in}(\textnormal{ii})\quad\hspace{0.1in}\underset{\text{trivial}}{\Longrightarrow}\quad \hspace{0.02in}(\textnormal{i})
\end{align*}
The only implication that has not been proven thus far is $(\textnormal{i}^{\text{cl}})\Longrightarrow (\textnormal{iii}^{\text{cl}})$, which will follow from Theorem \ref{thm:supcritsec}, Theorem \ref{thm:critsec}, and Lemma \ref{lem:suff}. Hence, from here on we may assume:
\begin{assumption}
\label{ass:cl}
\[
\bar{\alpha}=\textnormal{Cl}(\bar{\alpha}).
\]
\end{assumption}
Note that Remark \ref{rem:posetchar}, which is proven in Proposition \ref{prop:IIIimpliesIV}, does not require Assumption \ref{ass:cl}. The first extremals we need to characterize are those arising in the trivial case $|\mathcal N_=|=0$, which we dispose of in Theorem \ref{thm:trivsubcrit}. Assuming that $|\mathcal N_=|>0$, the characterization of \eqref{eq:Stanleyeq} is divided into three classes: \emph{subcritical}, \emph{supercritical}, and \emph{critical}. By subcritical we mean that $\mathcal K$ is subcritical. The supercritical and critical settings were defined in Definition \ref{def:ssc} and Definition \ref{def:posetcrit}.\\
The characterization of the \textbf{subcritical extremals} relies on the \emph{splitting mechanism} (Definition \ref{def:split} and Proposition \ref{prop:split}). The idea is that if $\mathcal K$ is truly subcritical, rather than critical, we can reduce the problem to the extremals of a poset with a shorter chain $\{x_i\}$. Arguing by induction, we then characterize the subcritical extremals (Theorem \ref{thm:subcrit}).\\
For the \textbf{supercritical extremals}, the starting point is Theorem \ref{thm:SvHComb} which yields that
$|\mathcal N_{=}|^2=|\mathcal N_{-}||\mathcal N_{+}|$ holds, if, and only if, there exist $a\ge 0$ and $\mathrm v\in \R^{n-k}$ such that
\begin{align}
\label{eq:hsup}
h_{K_{\ell-1}}(\mathrm u)=h_{aK_{\ell}+\mathrm v}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u.
\end{align}
The identity \eqref{eq:hsup} constitutes a system of equations (one equation for each $\mathrm u$) and the goal is to interpret these equations as combinatorial constraints on the poset $\bar{\alpha}$. Hence, the first important step is to find enough $(B,\mathcal K)$--extreme normal directions which can be described combinatorially. This is achieved in Section \ref{sec:ext} (Proposition \ref{prop:dir}(a-d)) by using the \emph{mixing} phenomenon (Section \ref{subsec:guide}). Once these directions are found in Section \ref{sec:ext}, Section \ref{sec:supercrit} is dedicated to plugging these directions back into \eqref{eq:hsup} and analyzing the outcomes. The second important step is to show that the scalar $a$ and the vector $\mathrm v$ in \eqref{eq:hsup} satisfy $a=1$ and $\mathrm v_j=0$ for certain $j$'s. The identity \eqref{eq:hsup} then further simplifies and provides the bulk of the desired characterization of the extremals (Theorem \ref{thm:supcritsec}). We explain in Section \ref{sec:supercrit} how to control $a$ and $\mathrm v$.\\
The starting point for the \textbf{critical extremals} is again Theorem \ref{thm:SvHComb}, but now we need to use its second part which states that
$|\mathcal N_{=}|^2=|\mathcal N_{-}||\mathcal N_{+}|$ holds, if, and only if, there exist $a\ge 0,\,\mathrm v\in \R^{n-k}$, and a number $0\le d<\infty$ of $\mathcal K$-degenerate pairs $(P_1,Q_1),\ldots,(P_d,Q_d)$, such that
\[
h_{K_{\ell-1}+\sum_{j=1}^dQ_j}(\mathrm u)=h_{aK_{\ell}+\mathrm v+\sum_{j=1}^dP_j}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u.
\]
The presence of the degenerate pairs causes great difficulties (which are not just technical since, as we saw, new extremals do indeed arise for critical posets). The first key idea to resolve these problems is to find a sub-poset of $\bar{\alpha}$ on which we have more-or-less supercritical behavior. From a geometric standpoint, this corresponds to finding a subspace $E^{\perp}$ such that
\begin{align}
\label{eq:hcrit}
h_{K_{\ell-1}}(\mathrm u)=h_{aK_{\ell}+\mathrm v}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u\text{ which are contained in }E^{\perp}.
\end{align}
The identification of $E^{\perp}$ and its properties rely on the mixing properties of the \emph{maximal splitting pair} (Lemma \ref{lem:splitequiv}). Even after identifying $E^{\perp}$ we face the problem that \eqref{eq:hcrit} provides fewer constraints than \eqref{eq:hsup} due to the restriction to the subspace $E^{\perp}$. Hence, we cannot derive enough combinatorial constraints on $\bar{\alpha}$. The solution is to find even more $(B,\mathcal K)$-extreme normal directions which were not needed for supercritical posets (Proposition \ref{prop:dir}(e-h)). With these new directions in hand, Section \ref{sec:crit} proceeds roughly as Section \ref{sec:supercrit} to show that $a=1$ and $\mathrm v_j=0$ for certain $j$'s. This description is an oversimplification since the situation is in fact much more delicate. It is precisely this delicacy which leads to the new extremals for critical posets.
\section{Notions of criticality}
\label{sec:notionsCrit}
In this section we start building our dictionary between convex geometry and combinatorics. The first building block is a correspondence between geometric and combinatorial notions of criticality, which will be used throughout this work. Section \ref{subsec:triv} starts with the easiest correspondence (Lemma \ref{lem:spancollec}), which connects the linear spans of the polytopes in $\mathcal K$ with subsets of $\bar{\alpha}$. Consequently, we characterize the trivial extremals which appear when $|\mathcal N_=|=0$ (Theorem \ref{thm:trivsubcrit}). Section \ref{subsec:equivcrit} is dedicated to the equivalences between geometric and combinatorial notions of criticality (Proposition \ref{prop:critequiv}), and their consequences for sharp-subcritical and sharp-critical collections (Lemmas \ref{lem:sharsubcrit}, \ref{lem:sharcrit}).
\subsection{The trivial extremals}
\label{subsec:triv}
We start with some notation. Given a convex body $C$ let $\mathop{\mathrm{aff}}(C)$ stand for the affine hull of $C$, and let $\mathop{\mathrm{lin}}(C)$ stand for the vector space obtained by the translation of $\mathop{\mathrm{aff}}(C)$ to the origin, i.e., $\mathop{\mathrm{lin}}(C):=\mathop{\mathrm{aff}}(C)-c_0=\mathop{\mathrm{span}}(C-c_0)$, for any $c_0\in C$. Given a collection $\mathcal C$ of convex bodies, it is immediate to see that
\begin{equation}
\label{eq:linspan}
\mathop{\mathrm{lin}}\left(\sum_{C\in \mathcal C}C\right)=\mathop{\mathrm{span}}\left(\mathop{\mathrm{lin}}(C_1),\ldots,\mathop{\mathrm{lin}}(C_{|\mathcal C|})\right).
\end{equation}
The following lemma relates the combinatorics of subsets of $\alpha$ to the linear spans of the polytopes in $\{K_i\}$.
\begin{lemma}
\label{lem:spancollec}
Let $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$ and set
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p}),
\]
where $\kappa_1,\ldots, \kappa_p$ are positive integers.
Then,
\[
\mathop{\mathrm{lin}}\left(\sum_{K\in \mathcal K'}K\right)=\R^{\beta_{\{j_1,\ldots,j_p\}}},
\]
and, consequently,
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)=n-k-\sum_{q=0}^p|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|.
\]
\end{lemma}
\begin{proof}
Combining \eqref{eq:linspan} and \eqref{eq:Kiordpoly} shows that $\mathop{\mathrm{lin}}\left(\sum_{K\in \mathcal K'}K\right)=\R^{\beta_{\{j_1,\ldots,j_p\}}}$. It follows that
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)=\left|\bigcup_{q=1}^p\beta_{j_q}\right|=\left|\bigcup_{q=1}^p\alpha\backslash(\alpha_{<x_{j_q}}\cup \alpha_{>x_{j_q+1}})\right|=\left|\alpha\Big\backslash\bigcap_{q=1}^p(\alpha_{<x_{j_q}}\cup \alpha_{>x_{j_q+1}})\right|.
\]
The proof is complete by \eqref{eq:capcup}, and by noting that the sets $\{\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}\}_{q\in \llbracket 0,p\rrbracket}$ are disjoint.
\end{proof}
As a first application of Lemma \ref{lem:spancollec}, we dispose of the trivial extremals. Before doing so, we present the following definition which will be used throughout the paper.
\begin{definition}
\label{def:splitpair}
A pair $(r,s)$ is \emph{splitting} if $0\le r+1< s\le k+1$ and $(r+1,s)\neq (0,k+1)$. A splitting pair $(r,s)$ is an \emph{$\ell$-splitting pair} if $r+1<\ell<s$.
\end{definition}
\begin{theorem}{\textnormal{(\textbf{Trivial extremals})}}
\label{thm:trivsubcrit}
We have $|\mathcal N_=|=0$ if, and only if, there exists a splitting pair $(r,s)$ such that
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|> i_s-i_{r+1}-1.
\]
\end{theorem}
\begin{proof}
$~$
$\Longleftarrow$:
Suppose there exists a splitting pair $(r,s)$ such that
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|> i_s-i_{r+1}-1.
\]
Every $\sigma\in\mathcal N_=$ must satisfy $\sigma(z)\in \llbracket i_{r+1}+1, i_s-1\rrbracket$ for every $z\in \bar{\alpha}_{>x_{r+1},<x_s}$. Since $|\llbracket i_{r+1}+1, i_s-1\rrbracket|=(i_s-1)-(i_{r+1}+1)+1=i_s-i_{r+1}-1<|\bar{\alpha}_{>x_{r+1},<x_s}|$, we see that no such $\sigma$ can exist.\\
$\Longrightarrow$: If $|\mathcal N_{=}|=0$ then, by \eqref{eq:posetvolrep} and Lemma \ref{lem:mvpos}, there exist $0\le j_1<\cdots<j_p\le k$, and $\kappa_1,\ldots, \kappa_p$, with $0\le\kappa_q\le i_{j_q+1}-i_{j_q}-1$ for $q\in [p]$, such that, with
\[
\mathcal K'=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p})\subseteq (K_{\ell-1},K_{\ell},\mathcal K),
\]
we have
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)<|\mathcal K'|.
\]
Let $j_0:=-1,~ j_{p+1}:=k+1$ and use Lemma \ref{lem:spancollec} to get
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)=n-k-\sum_{q=0}^p|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|.
\]
On the other hand,
\begin{align*}
|\mathcal K'|&=\sum_{q=1}^p\kappa_q\le \sum_{q=1}^p[i_{j_q+1}-i_{j_q}-1]=n-k-i_{j_{p+1}}+i_{j_0+1}+k+1+\sum_{q=1}^p(i_{j_q+1}-i_{j_q}-1)\\
&=n-k-\left(\sum_{q=1}^{p+1}i_{j_q}\right)+\left(\sum_{q=0}^pi_{j_q+1}\right)+j_{p+1}-j_0-(p+1)\\
&=n-k-\left(\sum_{q=0}^pi_{j_{(q+1)}}\right)+\left(\sum_{q=0}^pi_{j_q+1}\right)+j_{p+1}-j_0-(p+1)\\
&=n-k-\sum_{q=0}^p(i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1).
\end{align*}
It follows that
\begin{align}
\label{eq:thm:trivsubcrittemp}
\sum_{q=0}^p(i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1)<\sum_{q=0}^p|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|.
\end{align}
Since
\begin{align*}
|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|&=1_{\{j_q+1<j_{(q+1)}\}}\,|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|,\\
i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1&=1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-j_{(q+1)}+j_q+1),
\end{align*}
the inequality \eqref{eq:thm:trivsubcrittemp} is equivalent to
\[
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}(i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1)<\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|.
\]
Using
\[
1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}\backslash \alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|=1_{\{j_q+1<j_{(q+1)}\}}\,(j_{(q+1)}-j_q-2),
\]
we get that \eqref{eq:thm:trivsubcrittemp} is equivalent to
\[
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}(i_{j_{(q+1)}}-i_{j_q+1}-1)<\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|.
\]
Hence, there must exist a pair $(j_q+1,j_{(q+1)})$, with $j_q+1<j_{(q+1)}$, such that
\[
|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|>i_{j_{(q+1)}}-i_{j_q+1}-1.
\]
Since $(j_q+1,j_{(q+1)})\neq (0,k+1)$ (indeed, $j_q+1=0$ forces $q=0$, so $j_{(q+1)}=j_1\le j_p<j_{p+1}=k+1$), we conclude that there exists a splitting pair $(r,s)$ such that
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|> i_s-i_{r+1}-1.
\]
\end{proof}
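As a simple illustration, take $n=5$, $k=2$, $i_1=2$, $i_2=4$, $\ell=2$, and a poset $\bar{\alpha}$ in which $x_1<y_1<x_2$ and $x_1<y_2<x_2$. Then $|\bar{\alpha}_{>x_1,<x_2}|=2>1=i_2-i_1-1$, so the splitting pair $(r,s)=(0,2)$ witnesses Theorem \ref{thm:trivsubcrit}; indeed, no linear extension can place $x_1$ at position $2$, $x_2$ at position $4$, and both $y_1,y_2$ strictly between them.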
\begin{remark}
Theorem \ref{thm:trivsubcrit} is the same as the result of Chan, Pak, and Panova in \cite[Theorem 1.12]{CP22}, where it was proved using purely combinatorial arguments.
\end{remark}
In light of Theorem \ref{thm:trivsubcrit} we assume from here on that $|\mathcal N_=|>0$. Note that $|\mathcal N_=|>0$ implies, by \eqref{eq:posetvolrep} and Lemma \ref{lem:mvpos}, that $\mathcal K$ is subcritical. To summarize:
\begin{assumption}
\label{ass:N=>0}
\[
|\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|,\quad|\mathcal N_=|>0,\quad\text{and}\quad \mathcal K\text{ is subcritical}.
\]
\end{assumption}
\begin{remark}
\label{rem:totorder}
For future reference, we note that under Assumption \ref{ass:N=>0}, $\bar{\alpha}$ cannot be totally ordered. Indeed, if $\bar{\alpha}$ is totally ordered, then at least two elements in $\{|\mathcal N_=|,|\mathcal N_-|,|\mathcal N_+|\}$ are zero. But since $|\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|$, that would imply that $|\mathcal N_=|=0$.
\end{remark}
\subsection{Equivalences of criticality notions}
\label{subsec:equivcrit}
The next result is the basis of the correspondence between the criticality notions in our geometric and combinatorial settings, namely, the equivalence between Definition \ref{def:ssc} and Definition \ref{def:posetcrit}.
\begin{proposition}
\label{prop:critequiv}
Fix a nonnegative integer $c$. The following are equivalent.
\begin{enumerate}[(1)]
\item For every
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p})\subseteq \mathcal K,
\]
where $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$, $\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, and $p\in [n-k-2]$,
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)\ge |\mathcal K'|+c.
\]
\item For every $0\le j_1<\cdots<j_p\le k$ with $p\in [n-k-2]$, where $j_0:=-1$ and $j_{p+1}:=k+1$,
\begin{align*}
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-c+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\end{align*}
\end{enumerate}
\end{proposition}
The proof of Proposition \ref{prop:critequiv} follows the logic of the proof of Theorem \ref{thm:trivsubcrit}, but it is more complicated since we now work with collections $\mathcal K'\subseteq\mathcal K$, rather than $\mathcal K'\subseteq(K_{\ell-1},K_{\ell},\mathcal K)$. This leads to the presence of the term $1_{j_{(q+1)}=\ell}+ 1_{j_q+1=\ell}$ in the proof below.
\begin{proof}[Proof of Proposition \ref{prop:critequiv}]
Fix
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p})\subseteq \mathcal K,
\]
where $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$, $\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, and $p\in [1,n-k-2]$. By Lemma \ref{lem:spancollec},
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)=n-k-\sum_{q=0}^p|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|.
\]
On the other hand, using $\ell\notin \{0,k+1\}$, and arguing as in the proof of Theorem \ref{thm:trivsubcrit},
\begin{align*}
|\mathcal K'|&=\sum_{q=1}^p\kappa_q\le \sum_{q=1}^p\left(i_{j_q+1}-i_{j_q}-1- 1_{j_q\in\{\ell-1,\ell\}}\right)\\
&=n-k-\sum_{q=0}^p\left(i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1+ 1_{j_{(q+1)}=\ell}+ 1_{j_q+1=\ell}\right).
\end{align*}
Hence, given $c$, if
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)< |\mathcal K'|+c
\]
for \emph{some} $\mathcal K' \subseteq \mathcal K$ as above, then the corresponding indices $0\le j_1<\cdots<j_p\le k$ satisfy
\begin{align}
\label{eq:cdiminq}
\sum_{q=0}^p|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|> -c+\sum_{q=0}^p\left(i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1+ 1_{j_{(q+1)}=\ell}+ 1_{j_q+1=\ell}\right).
\end{align}
Conversely, if \eqref{eq:cdiminq} holds for some choice of indices $0\le j_1<\cdots<j_p\le k$, then we may take $\mathcal K'$ to be such that $\kappa_q=i_{j_q+1}-i_{j_q}-1- 1_{j_q\in\{\ell-1,\ell\}}$ for every $q$, to get $|\mathcal K'|=n-k-\sum_{q=0}^p\left(i_{j_{(q+1)}}-i_{j_q+1}-j_{(q+1)}+j_q+1+ 1_{j_{(q+1)}=\ell}+ 1_{j_q+1=\ell}\right)$, and hence $\dim\left(\sum_{K\in \mathcal K' }K\right)< |\mathcal K'|+c$. It follows that condition (1) holds if, and only if, \eqref{eq:cdiminqconverse} below holds for every choice of indices as above, where
\begin{align}
\label{eq:cdiminqconverse}
\sum_{q=0}^p|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le -c+\sum_{q=0}^p\left(i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1+ 1_{j_{(q+1)}=\ell}+ 1_{j_q+1=\ell}\right).
\end{align}
Since
\[
\sum_{q=0}^p(1_{j_{(q+1)}=\ell}+ 1_{j_q+1=\ell})=|\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|,
\]
and
\begin{align*}
|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|&=1_{\{j_q+1<j_{(q+1)}\}}\,|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|,\\
i_{j_{(q+1)}}-i_{j_q+1}-j_{q+1}+j_q+1&=1_{\{j_q+1<j_{(q+1)}\}}(i_{j_{(q+1)}}-i_{j_q+1}-j_{(q+1)}+j_q+1),
\end{align*}
the inequality \eqref{eq:cdiminqconverse} is equivalent to
\begin{align*}
&\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|\\
&\le |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-c
+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,\left(i_{j_{(q+1)}}-i_{j_q+1}-j_{(q+1)}+j_q+1\right)\\
&=|\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-c+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,\left(i_{j_{(q+1)}}-i_{j_q+1}-1-j_{(q+1)}+j_q+2\right).
\end{align*}
Using
\[
1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}\backslash \alpha_{>x_{j_q+1},<x_{j_{(q+1)}}}|=1_{\{j_q+1<j_{(q+1)}\}}\,(j_{(q+1)}-j_q-2),
\]
we find that \eqref{eq:cdiminqconverse} is equivalent to
\begin{align*}
&\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-c+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\end{align*}
\end{proof}
In contrast to Proposition \ref{prop:critequiv}, the next lemma, which treats the reverse inequality, holds for a \emph{fixed} $\mathcal K'$.
\begin{lemma}
\label{lem:le}
Fix
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p})\subseteq \mathcal K,
\]
where $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$, $\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, and $p\in [n-k-2]$, such that
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)\le |\mathcal K'|+c.
\]
Then
\begin{align*}
&\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\ge |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-c+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\end{align*}
\end{lemma}
\begin{proof}
We proceed as in the proof of Proposition \ref{prop:critequiv} and use
\[
|\mathcal K'|=\sum_{q=1}^p\kappa_q\le \sum_{q=1}^p\left(i_{j_q+1}-i_{j_q}-1- 1_{j_q\in\{\ell-1,\ell\}}\right),
\]
which, combined with Lemma \ref{lem:spancollec} and the assumed bound on $\dim\left(\sum_{K\in \mathcal K' }K\right)$, yields the claimed inequality for the fixed collection $\mathcal K'$.
\end{proof}
As a consequence of Lemma \ref{lem:le}, we get the following combinatorial information about \emph{sharp} collections.
\begin{lemma}
\label{lem:critequiveq}
Fix $c\ge 0$ and suppose there exists
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p})\subseteq \mathcal K,
\]
where $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$, $\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, and $p\in [n-k-2]$, such that
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)= |\mathcal K'|+c.
\]
Then,
\[
|\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|\le c.
\]
\end{lemma}
\begin{proof}
The assumption $\dim\left(\sum_{K\in \mathcal K' }K\right)= |\mathcal K'|+c$ implies $\dim\left(\sum_{K\in \mathcal K' }K\right)\le |\mathcal K'|+c$, so by Lemma \ref{lem:le},
\begin{align}
\label{eq:dimtemp1}
&\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\ge |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-c+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\end{align}
On the other hand, since $|\mathcal N_=|>0$, we have
\begin{align}
\label{eq:dimtemp1.5}
1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le 1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1)
\end{align}
(because $|\llbracket i_{j_q+1}+1,i_{j_{(q+1)}}-1\rrbracket|\le i_{j_{(q+1)}}-i_{j_q+1}-1$),
so
\begin{align}
\label{eq:dimtemp2}
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le \sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\end{align}
Combining \eqref{eq:dimtemp1} and \eqref{eq:dimtemp2} we get
\[
|\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|\le c.
\]
\end{proof}
We are now ready to characterize the sharp-(sub)critical collections. We start with the sharp-subcritical collections.
\begin{lemma}{\textnormal{(\textbf{Sharp-subcritical collections})}}
\label{lem:sharsubcrit}
Every sharp-subcritical collection
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p})\subseteq \mathcal K,
\]
where $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$, $\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, and $p\in [n-k-2]$,
must satisfy
\[
\forall ~ q\in [p]:\quad j_q\notin\{\ell-1,\ell\}\quad\text{and}\quad1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|=1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\]
\end{lemma}
\begin{proof}
Take $c=0$ in Lemma \ref{lem:critequiveq} to get
\begin{align}
\label{eq:c=0}
|\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|=0.
\end{align}
Since $
\dim\left(\sum_{K\in \mathcal K' }K\right)\le |\mathcal K'|$, and $\mathcal K$ is subcritical, applying Lemma \ref{lem:le} and Proposition \ref{prop:critequiv}, with $c=0$, yields
\[
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|=\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\]
By \eqref{eq:dimtemp1.5}, it follows that, for every $q\in\llbracket 0,p\rrbracket$,
\[
1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|=1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\]
\end{proof}
We now turn to the sharp-critical collections. The assumption made in the following lemma does not follow automatically from the fact that $\mathcal K$ is sharp-critical. Rather, we will be able to make this assumption only after Section \ref{sec:subcrit}. The proof however is similar in spirit to the rest of this section so it is included here.
\begin{lemma}{\textnormal{(\textbf{Sharp-critical collections})}}
\label{lem:sharcrit}
Suppose $|\bar{\alpha}_{>x_{r+1},<x_s}| \le i_s-i_{r+1}-2$ for every splitting pair $(r,s)$. Then, every
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p})\subseteq \mathcal K,
\]
where $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$, $\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, and $p\in [n-k-2]$, such that
\[
\dim\left(\sum_{K\in \mathcal K' }K\right)=|\mathcal K'|+1,
\]
must be of the form
\[
\mathcal K'=(\mathcal K_0,\mathcal K_1,\ldots,\mathcal K_{r-1},\mathcal K_r,\mathcal K_s,\mathcal K_{s+1},\ldots,\mathcal K_k),
\]
where $(r,s)$ is an $\ell$-splitting pair satisfying
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|= i_s-i_{r+1}-2.
\]
\end{lemma}
\begin{proof}
First note that $(j_q+1,j_{(q+1)})\neq (0,k+1)$ because $j_q+1=0\Rightarrow q=0$ so $j_{(q+1)}=j_1\le j_p<j_{p+1}=k+1$. The assumption $|\bar{\alpha}_{>x_{r+1},<x_s}| \le i_s-i_{r+1}-2$ for every splitting pair $(r,s)$ implies that
\begin{align*}
&\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\le \sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-2)\\
=&-|\{q\in [p]: j_q+1<j_{(q+1)}\}|+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}(i_{j_{(q+1)}}-i_{j_q+1}-1).
\end{align*}
On the other hand, since $
\dim\left(\sum_{K\in \mathcal K' }K\right)\le |\mathcal K'|+1$, applying Lemma \ref{lem:le} with $c=1$ yields
\begin{align}
\label{eq:lower}
\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|\ge |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|-1+\sum_{q=0}^p1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\end{align}
We conclude that
\begin{align}
\label{eq:twoterms}
|\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|+|\{q\in [p]: j_q+1<j_{(q+1)}\}|\le 1.
\end{align}
Since
\[
|\{q\in [p]: j_q+1<j_{(q+1)}\}|=0\quad\Longrightarrow \quad\{j_1,\ldots,j_p\}=\{1,\ldots, k\},
\]
we get
\[
|\{q\in [p]: j_q+1<j_{(q+1)}\}|=0\quad \Longrightarrow \quad |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|=2.
\]
Hence, \eqref{eq:twoterms} can hold if, and only if,
\[
|\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|=0\quad\text{and}\quad|\{q\in [p]: j_q+1<j_{(q+1)}\}|=1.
\]
It follows that
\[
\mathcal K'=(\mathcal K_0,\mathcal K_1,\ldots,\mathcal K_{r-1},\mathcal K_r,\mathcal K_s,\mathcal K_{s+1},\ldots,\mathcal K_k),
\]
where $(r,s)$ is an $\ell$-splitting pair. Finally, plugging in $ |\{q\in [p]:j_q\in\{\ell-1,\ell\}\}|=0$ into \eqref{eq:lower}, and using that $(r,s)$ is the only pair $(j_q,j_{(q+1)})$ satisfying $j_q+1<j_{(q+1)}$, yields
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|\ge -1+[i_s-i_{r+1}-1]=i_s-i_{r+1}-2.
\]
On the other hand, by assumption, $|\bar{\alpha}_{>x_{r+1},<x_s}| \le i_s-i_{r+1}-2$, so we conclude
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|=i_s-i_{r+1}-2.
\]
\end{proof}
\begin{remark}
In the proof of Lemma \ref{lem:sharcrit} we only used the condition $\dim\left(\sum_{K\in \mathcal K' }K\right)\le|\mathcal K'|+1$, so the reader might wonder why we assume that $\mathcal K'$ is sharp-critical. By Assumption \ref{ass:N=>0}, the only other possibility would be for $\mathcal K'$ to be sharp-subcritical, but this is impossible by Lemma \ref{lem:sharsubcrit} and the assumption $|\bar{\alpha}_{>x_{r+1},<x_s}| \le i_s-i_{r+1}-2$ for every splitting pair $(r,s)$.
\end{remark}
\section{Splitting and the subcritical extremals}
\label{sec:subcrit}
In this section we introduce the \emph{splitting} mechanism for posets, which is connected to a reduction to lower dimensional extremals. Consequently, we characterize the subcritical extremals (Theorem \ref{thm:subcrit}).
To motivate the splitting mechanism recall that, by Lemma \ref{lem:sharsubcrit}, we know that every sharp-subcritical collection
\[
\mathcal K':=(K_{j_1},\ldots,K_{j_1},\ldots,K_{j_p},\ldots,K_{j_p})\subseteq \mathcal K,
\]
must satisfy
\[
\forall ~ q\in [p]:\quad j_q\notin\{\ell-1,\ell\}\quad\text{and}\quad1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|=1_{\{j_q+1<j_{(q+1)}\}}\,(i_{j_{(q+1)}}-i_{j_q+1}-1).
\]
Fix an index $j_q$ such that $j_q\notin\{\ell-1,\ell\}$ and $j_q+1<j_{(q+1)}$, so that
\[
|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|=i_{j_{(q+1)}}-i_{j_q+1}-1.
\]
Since $|\llbracket i_{j_q+1}+1,i_{j_{(q+1)}}-1\rrbracket|=i_{j_{(q+1)}}-i_{j_q+1}-1$, we must have
\[
\bar{\alpha}_{\ge x_{j_q+1},\le x_{j_{(q+1)}}}\quad\overset{\text{bijection}}{\mapsto} \quad \llbracket i_{j_q+1},i_{j_{(q+1)}}\rrbracket
\]
under any linear extension. This means that the poset $\bar{\alpha}$ can be \emph{split} by factoring out the poset $\bar{\alpha}_{\ge x_{j_q+1},\le x_{j_{(q+1)}}}$, so that we are left with a poset with a shorter chain. We will show that $|\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|$ implies that equality holds in Stanley's inequalities \emph{also for the poset with the shorter chain}. We may then resort to our induction hypothesis that the extremals in the case where the chain size is $<k$ were already characterized.
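To keep the counting objects concrete, the following is a minimal brute-force sketch (in Python, with a hypothetical encoding of a poset as a set of strict relations) of the quantities $|\mathcal N_-|,|\mathcal N_=|,|\mathcal N_+|$; it only uses the convention, recalled in Section \ref{subsec:towardlin}, that every $\sigma\in\mathcal N_{\circ}$ pins each chain element $x_j$ at $i_j$, except $x_{\ell}$, which is placed at $i_{\ell}+1_{\circ}$. The toy poset, the element names, and the chosen positions are illustrative only.
\begin{verbatim}
from itertools import permutations

def linear_extensions(elements, less):
    """Yield all position maps sigma (element -> 1..n) with a < b => sigma(a) < sigma(b)."""
    for perm in permutations(elements):
        sigma = {e: i + 1 for i, e in enumerate(perm)}
        if all(sigma[a] < sigma[b] for (a, b) in less):
            yield sigma

def count_N(elements, less, pinned, x_ell, shift):
    """|N_circ|: linear extensions with sigma(x_j) = i_j for every pinned chain
    element x_j, except sigma(x_ell) = i_ell + shift, shift in {-1, 0, +1}."""
    target = {x: i + (shift if x == x_ell else 0) for x, i in pinned.items()}
    return sum(all(sigma[x] == i for x, i in target.items())
               for sigma in linear_extensions(elements, less))

# Toy example: one chain element x1 pinned at position 2, plus a < b incomparable to x1.
elements = ["x1", "a", "b"]
less = {("a", "b")}
pinned = {"x1": 2}
Nm, Ne, Np = (count_N(elements, less, pinned, "x1", s) for s in (-1, 0, +1))
print(Nm, Ne, Np, Ne**2 == Nm * Np)
\end{verbatim}
Checking $|\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|$ on such small examples is a useful sanity check for the splitting mechanism developed below.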
\begin{remark}
\label{rem:splitproj}
The splitting mechanism described in this section can be viewed as a combinatorial analogue of the projection formula for mixed volumes \cite[Theorem 5.3.1]{Sch14}. This is another building block of our dictionary between geometry and combinatorics.
\end{remark}
We now proceed to formalize the above splitting mechanism.
\begin{definition}
\label{def:split}
The \emph{split} of $\bar{\alpha}$, based on a splitting pair $(r,s)$, is given by defining posets $\bar{\alpha}_1,\bar{\alpha}_2$ as
\begin{align*}
&\bar{\alpha}_1:= \bar{\alpha}_{\ge x_{r+1},\le x_s}\quad\mbox{and}\quad \bar{\alpha}_2:=(\bar{\alpha}\backslash\bar{\alpha}_1)\cup \{x\},
\end{align*}
where the relations for $x$ are defined via $x\ast z$, for $\ast\in\{<,>\}$ and $z\in \bar{\alpha}\backslash\bar{\alpha}_1$, if, and only if, there exists $w\in\bar{\alpha}_1$ such that $w\ast z$.\footnote{The new element $x$ should be thought of as a compression of $\bar{\alpha}_1$ into one element, namely $x$. The relations for $x$ are consistent since we cannot have $w_1<z<w_2$ for $w_1,w_2\in \bar{\alpha}_1, z\in \bar{\alpha}\backslash\bar{\alpha}_1$ because this would imply that $x_r\le z\le x_s$, and hence $z\in \bar{\alpha}_1$, which is a contradiction.}
\end{definition}
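The following sketch (Python; the order relation is encoded, hypothetically, as a transitively closed set of strict pairs) implements the split of Definition \ref{def:split}: it extracts the interval $\bar{\alpha}_1=\bar{\alpha}_{\ge x_{r+1},\le x_s}$ and compresses it into a single new element carrying the induced relations described in the footnote. A transitive-closure pass is applied at the end so that the returned relation on $\bar{\alpha}_2$ is again a partial order; the definition itself only specifies the generating relations for the new element.
\begin{verbatim}
def transitive_closure(pairs):
    """Naive transitive closure of a set of strict pairs (a, b) meaning a < b."""
    closure, changed = set(pairs), True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def split_poset(elements, less, x_r1, x_s, new_elem="x"):
    """Split of Definition (def:split): alpha1 is the interval [x_r1, x_s]; alpha2
    replaces it by the single element new_elem, which inherits alpha1's relations."""
    leq = lambda a, b: a == b or (a, b) in less
    alpha1 = {z for z in elements if leq(x_r1, z) and leq(z, x_s)}
    rest = set(elements) - alpha1
    less1 = {(a, b) for (a, b) in less if a in alpha1 and b in alpha1}
    less2 = {(a, b) for (a, b) in less if a in rest and b in rest}
    for z in rest:
        if any((w, z) in less for w in alpha1):   # some w in alpha1 lies below z, so x < z
            less2.add((new_elem, z))
        if any((z, w) in less for w in alpha1):   # some w in alpha1 lies above z, so x > z
            less2.add((z, new_elem))
    return (alpha1, less1), (rest | {new_elem}, transitive_closure(less2))
\end{verbatim}
As noted in the footnote, the two conditions inside the loop can never both hold for the same $z$, so no cycle through the new element is created.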
Let $(r,s)$ be a splitting pair satisfying $\ell\notin\{r+1,s\}$. We will define the analogues of $\mathcal N_-,\mathcal N_=,\mathcal N_+$ associated with the posets $\bar{\alpha}_1,\bar{\alpha}_2$. This requires distinguishing between two cases: (1) $x_{\ell}\in\{x_{r+2},\ldots,x_{s-1}\}$ and (2) $x_{\ell}\in\{x_1,\ldots,x_r\}\cup\{x_{s+1},\ldots,x_k\}$; note that $x_{\ell}\notin\{x_{r+1},x_s\}$ by assumption.\footnote{We use the convention $\{x_a,\ldots, x_z\}=\varnothing$ when $z<a$; e.g., $\{x_1,\ldots, x_{r-2}\}=\varnothing$ when $r=0$.} For $\iota=1,2$ let
\[
\mathcal N^{\iota}:=\{\mbox{bijections }\sigma:\bar{\alpha}_{\iota}\to [|\bar{\alpha}_{\iota}|]: w\le z\Rightarrow \sigma(w)\le\sigma(z)~\forall~ w,z\in\bar{\alpha}_{\iota}\},
\]
and, given $\circ\in\{-,=,+\}$, let $1_{\circ}:=1_{\{\circ\text{ is }+\}}-1_{\{\circ\text{ is }-\}}$.\\
\emph{Case (1).} For $\circ\in\{-,=,+\}$ set
\begin{align*}
&\mathcal N^1_{\circ}:=\{\sigma\in \mathcal N^1:\sigma(x_j)=i_j-i_{r+1}+1_{j=\ell}1_{\circ}\mbox{ for }j\in \llbracket r+1,s\rrbracket\},\\
&\mathcal N_{\circ}^2:=\{\sigma\in \mathcal N^2:\sigma(x_j)=i_j \mbox{ for }j\in \llbracket 0,r\rrbracket, ~\sigma(x)=i_{r+1}, \mbox{ and } \sigma(x_j)=i_j -(i_s-i_{r+1})\mbox{ for }j\in \llbracket s+1,k+1\rrbracket\};
\end{align*}
note that the definition of $\mathcal N_{\circ}^2$ is independent of $\circ$.\\
\emph{Case (2).} For $\circ\in\{-,=,+\}$ set
\begin{align*}
\mathcal N^1_{\circ}:=\{\sigma\in \mathcal N^1:\sigma(x_j)=i_j-i_{r+1}\mbox{ for }j\in \llbracket r+1,s\rrbracket\},
\end{align*}
and
\begin{align*}
\mathcal N_{\circ}^2:=\{\sigma\in \mathcal N^2&: \sigma(x_j)=i_j+1_{\{j=\ell\}}1_{\circ}\mbox{ for }j\in \llbracket 0,r\rrbracket, ~\sigma(x)=i_{r+1},\\
& \mbox{ and } \sigma(x_j)=i_j -(i_s-i_{r+1})+1_{j=\ell}1_{\circ}\mbox{ for }j\in \llbracket s+1,k+1\rrbracket\};
\end{align*}
note that the definition of $\mathcal N_{\circ}^1$ is independent of $\circ$.
Before exploiting the splitting mechanism we start with a quick observation.
\begin{lemma}
\label{lem:subcritcplus}
For every splitting pair $(r,s)$,
\begin{equation}
\label{eq:subcritcplus}
|\bar{\alpha}_{>x_{r+1},<x_s}|\le i_s-i_{r+1}-1-1_{r+1=\ell}-1_{s=\ell}.
\end{equation}
\end{lemma}
\begin{proof}
The converse of Theorem \ref{thm:trivsubcrit} yields
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|\le i_s-i_{r+1}-1\quad\mbox{for every splitting pair } (r,s).
\]
Hence, it suffices to consider the case where either $r+1=\ell$ or $s=\ell$. Suppose $r+1=\ell$; the case $s=\ell$ is proven analogously. Then, every $\sigma\in\mathcal N_+$ (such $\sigma$ exists since $|\mathcal N_=|>0\Rightarrow |\mathcal N_+|>0$, as $|\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|$) satisfies $\sigma(x_{r+1})=i_{r+1}+1$ and $\sigma(x_s)=i_s$. Hence, given $z\in \bar{\alpha}_{>x_{r+1},<x_s}$, the number of available spots for $\sigma(z)$ is $|\llbracket i_{r+1}+2,i_s-1\rrbracket|=(i_s-1)-( i_{r+1}+2)+1=i_s-i_{r+1}-2$, so $|\bar{\alpha}_{>x_{r+1},<x_s}|\le i_s-i_{r+1}-2$, which is exactly \eqref{eq:subcritcplus} in this case.
\end{proof}
\begin{proposition}
\label{prop:split}
Fix a splitting pair $(r,s)$ satisfying $\ell\notin\{r+1,s\}$, and let $\bar{\alpha}_1,\bar{\alpha}_2$ be the split based on $(r,s)$. One of the following must occur:
\begin{enumerate}[(i)]
\item $|\mathcal N_=^{\iota}|^2=|\mathcal N_-^{\iota}||\mathcal N_+^{\iota}|$ for every $\iota\in\{1,2\}$.
\item $|\bar{\alpha}_{>x_{r+1},<x_s}|\le i_s-i_{r+1}-2$.
\end{enumerate}
\end{proposition}
\begin{proof}
We will prove the proposition under the assumption that case (1) occurs; the proof for case (2) is analogous. Note that under case (1) we trivially have $|\mathcal N_=^2|^2=|\mathcal N_-^2||\mathcal N_+^2|$ since $\mathcal N_{\circ}^2$ is independent of $\circ$.
It suffices to show that if (ii) is false then (i) is true. This will be proven by showing that if (ii) is false, then, for any $\circ\in\{-,=,+\}$,
\begin{equation}
\label{eq:bijections}
|\mathcal N_{\circ}|=|\mathcal N_{\circ}^1||\mathcal N_{\circ}^2|,
\end{equation}
where we recall that $\mathcal N_{\circ}^2$ is independent of $\circ$. Plugging \eqref{eq:bijections} into $|\mathcal N_=|^2=|\mathcal N_-||\mathcal N_+|$ gives $|\mathcal N_=^1|^2|\mathcal N_{\circ}^2|^2=|\mathcal N_-^1||\mathcal N_+^1||\mathcal N_{\circ}^2|^2$. Canceling $|\mathcal N_{\circ}^2|^2$ on both sides ($|\mathcal N_{\circ}^2|>0$ since $|\mathcal N_=|>0$) gives (i).
We now turn to prove \eqref{eq:bijections} under the assumption that (ii) is false. By \eqref{eq:subcritcplus}, (ii) being false is equivalent to $|\bar{\alpha}_{>x_{r+1},<x_s}|= i_s-i_{r+1}-1$, i.e., $|\bar{\alpha}_1|=i_s-i_{r+1}+1$. We will prove \eqref{eq:bijections} by constructing a bijection $b:\mathcal N_{\circ}\to \mathcal N_{\circ}^1\times \mathcal N_{\circ}^2$ for $\circ\in \{-,=,+\}$. Fix $\circ\in \{-,=,+\}$ and define a map $b$ via $b=(b_1,b_2)$, with $b_1:\mathcal N_{\circ}\to \mathcal N_{\circ}^1, ~b_2:\mathcal N_{\circ}\to \mathcal N_{\circ}^2$, where we set, for each $\sigma\in \mathcal N_{\circ}$,
\begin{align*}
&\mbox{For }z\in \bar{\alpha}_1:\quad b_1(\sigma)(z)=\sigma(z)-i_{r+1},\\
&\mbox{For }z\in \bar{\alpha}_2:\quad b_2(\sigma)(z)=
\begin{cases}
\sigma(z)\mbox{ if }\sigma(z)\in \llbracket 0,i_{r+1}-1\rrbracket,\\
i_{r+1} \mbox{ if } z=x,\\
\sigma(z)-(i_s-i_{r+1})\mbox{ if }\sigma(z)\in \llbracket i_s+1, n+1\rrbracket.
\end{cases}
\end{align*}
We will first check that, given $\sigma\in \mathcal N_{\circ}$, $b_1(\sigma)\in \mathcal N_{\circ}^1$ and $b_2(\sigma)\in \mathcal N_{\circ}^2$. We will then construct a map $b':\mathcal N_{\circ}^1\times \mathcal N_{\circ}^2\to \mathcal N_{\circ}$ and show that $b\circ b'=b'\circ b=\operatorname{Id}$, completing the proof. That $b_1(\sigma)\in \mathcal N_{\circ}^1$ and $b_2(\sigma)\in \mathcal N_{\circ}^2$ follows from the definitions of $\mathcal N_{\circ}^1,\mathcal N_{\circ}^2$ and the fact that $\sigma\in \mathcal N_{\circ}$. The map $b':\mathcal N_{\circ}^1\times \mathcal N_{\circ}^2\to \mathcal N_{\circ}$ is defined by taking $\sigma_{\iota}\in \mathcal N_{\circ}^{\iota}$, for $\iota=1,2$, and setting, for $z\in \bar{\alpha}$,
\begin{align*}
b'(\sigma_1,\sigma_2)(z)=
\begin{cases}
\sigma_2(z)&\mbox{ if }z\in \bar{\alpha}_2\mbox{ and }\sigma_2(z)\in \llbracket 0,i_{r+1}-1\rrbracket\\
\sigma_1(z)+i_{r+1}&\mbox{ if }z\in \bar{\alpha}_1\\
\sigma_2(z)+(i_s-i_{r+1})&\mbox{ if }z\in \bar{\alpha}_2\mbox{ and }\sigma_2(z)\in \llbracket i_{r+1}+1,|\bar{\alpha}_2|\rrbracket
\end{cases}.
\end{align*}
To see that $b'(\sigma_1,\sigma_2)\in \mathcal N_{\circ}$ we first need to check that given $z<w$ we have $b'(\sigma_1,\sigma_2)(z)< b'(\sigma_1,\sigma_2)(w)$. If $w,z\in\bar{\alpha}_1$ or $w,z\in\bar{\alpha}_2$, this follows from $\sigma_{\iota}\in\mathcal N^{\iota}_{\circ}$, for $\iota=1,2$, so it remains to check $w\in\bar{\alpha}_1,~z\in\bar{\alpha}_2$ and $w\in\bar{\alpha}_2, ~z\in\bar{\alpha}_1$; we check the first case and the second case is analogous. Suppose that $w\in\bar{\alpha}_1$ and $z\in\bar{\alpha}_2$. Then, we must have $\sigma_2(z)\in \llbracket 0,i_{r+1}-1\rrbracket$ since, by the definition of $x$, $w>z\Rightarrow x>z$ and $\sigma_2(x)=i_{r+1}$. Hence, $b'(\sigma_1,\sigma_2)(w)=\sigma_1(w)+i_{r+1}> \sigma_2(z)=b'(\sigma_1,\sigma_2)(z)$.
Now that we know that $b'(\sigma_1,\sigma_2)$ respects the relations of $\bar{\alpha}$, in order to show that $b'(\sigma_1,\sigma_2)\in \mathcal N_{\circ}$, it remains to check that $b'(\sigma_1,\sigma_2)(x_j)=i_j+1_{j=\ell}1_{\circ}$ for all $1\le j\le k$. This follows immediately from the definitions of $\mathcal N_{\circ}^{\iota}$ for $\iota\in\{1,2\}$ and $\circ\in \{-,=,+\}$.
Finally, that $b\circ b'=b'\circ b=\operatorname{Id}$ follows from the construction of $b$ and $b'$.
\end{proof}
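For concreteness, here is a sketch (Python) of the maps $b$ and $b'$ from the proof, under the standing assumption that (ii) fails, so that $\sigma$ places $\bar{\alpha}_1$ exactly on $\llbracket i_{r+1},i_s\rrbracket$; linear extensions are encoded, hypothetically, as dictionaries from elements to positions, and the name \texttt{"x"} stands for the compressed element of Definition \ref{def:split}.
\begin{verbatim}
def b_map(sigma, alpha1, i_r1, i_s, new_elem="x"):
    """The map b = (b1, b2) from the proof of Proposition (prop:split); assumes
    sigma maps alpha1 bijectively onto {i_r1, ..., i_s}."""
    b1 = {z: sigma[z] - i_r1 for z in alpha1}
    b2 = {new_elem: i_r1}
    for z, v in sigma.items():
        if z not in alpha1:
            b2[z] = v if v <= i_r1 - 1 else v - (i_s - i_r1)
    return b1, b2

def b_inverse(b1, b2, i_r1, i_s, new_elem="x"):
    """The inverse map b' from the proof."""
    sigma = {z: v + i_r1 for z, v in b1.items()}
    for z, v in b2.items():
        if z != new_elem:
            sigma[z] = v if v <= i_r1 - 1 else v + (i_s - i_r1)
    return sigma
\end{verbatim}
Composing the two maps in either order returns the input, mirroring the final step of the proof.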
The next result provides a \emph{geometric} characterization under which the case in Proposition \ref{prop:split}(i) occurs.
\begin{lemma}
\label{lem:split}
Let $\mathcal K'\subseteq \mathcal K$ be a sharp-subcritical collection. Then, there exists a splitting pair $(r,s)$ satisfying $\ell\notin\{r+1,s\}$, with a corresponding split $\bar{\alpha}_1,\bar{\alpha}_2$, such that $K_r,K_s\in\mathcal K'$ and $|\mathcal N_=^{\iota}|^2=|\mathcal N_-^{\iota}||\mathcal N_+^{\iota}|$ for every $\iota\in\{1,2\}$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:sharsubcrit}, the collection
\[
\mathcal K'=(\mathcal K_{j_1},\ldots,\mathcal K_{j_p}),
\]where $j_0:=-1$, $0\le j_1<\cdots<j_p\le k$, $j_{p+1}:=k+1$, $\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, and $p\in [n-k-2]$,
must satisfy
\[
\forall ~ q\in [p]:\quad j_q\notin\{\ell-1,\ell\}\quad\text{and}\quad1_{\{j_q+1<j_{(q+1)}\}}\,|\bar{\alpha}_{>x_{j_q+1},<x_{j_{(q+1)}}}|=1_{\{j_q+1<j_{(q+1)}\}}\,[i_{j_{(q+1)}}-i_{j_q+1}-1].
\]
Note that, for any $0\le q\le p$, $(j_q+1,j_{(q+1)})\neq (0,k+1)$. Indeed, if $j_q+1=0$ then $q=0$, so $j_{(q+1)}=j_1\le j_p<j_{p+1}=k+1$. We now show that there exists $0\le q'\le p$ such that $(j_{q'},j_{(q'+1)})$ is a splitting pair. Indeed, if not, then $j_q+1=j_{(q+1)}$ for every $0\le q\le p$, so we get $j_1=0,j_2=1,\ldots ,j_{p+1}=k+1$, which contradicts $j_q\notin\{\ell-1,\ell\}$. Setting $r:=j_{q'},\, s:=j_{(q'+1)}$, we get a splitting pair $(r,s)$ such that $K_r,K_s\in \mathcal K'$ and $|\bar{\alpha}_{>x_{r+1},<x_s}|= i_s-i_{r+1}-1$. By Proposition \ref{prop:split}, we must have $|\mathcal N_=^{\iota}|^2=|\mathcal N_-^{\iota}||\mathcal N_+^{\iota}|$ for every $\iota\in\{1,2\}$.
\end{proof}
Using Lemma \ref{lem:split}, the characterization of the subcritical extremals of Stanley's inequalities now follows.
\begin{theorem}{\textnormal{(\textbf{Subcritical extremals})}}
\label{thm:subcrit}
$~$
Suppose that $\mathcal K$ has a sharp-subcritical collection. Then there exists a splitting pair $(r,s)$ such that the associated split posets $\bar{\alpha}_1,\bar{\alpha}_2$ satisfy $|\mathcal N_{=}^{\iota}|^2=|\mathcal N_{-}^{\iota}||\mathcal N_{+}^{\iota}|$ for every $\iota\in\{1,2\}$.
\end{theorem}
Our induction hypothesis, Assumption \ref{ass:induct}, is that Theorems \ref{thm:supcrit} and \ref{thm:crit} hold for $k-1$. Hence, without loss of generality we may assume from now on that
\begin{align}
\label{eq:ass}
\text{For all splits }\bar{\alpha}_1,\bar{\alpha}_2: \quad |\mathcal N_{=}^{\iota}|^2\neq|\mathcal N_{-}^{\iota}||\mathcal N_{+}^{\iota}|\quad \forall~\iota\in\{1,2\}.
\end{align}
By Theorem \ref{thm:subcrit}, the assumption \eqref{eq:ass} implies that $\mathcal K$ is critical. Further, by Proposition \ref{prop:split},
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|\le i_s-i_{r+1}-2\quad\text{for every splitting pair $(r,s)$ satisfying $\ell\notin\{r+1,s\}$},
\]
so using in addition Lemma \ref{lem:subcritcplus}, we get
\[
|\bar{\alpha}_{>x_{r+1},<x_s}|\le i_s-i_{r+1}-2\quad\text{for every splitting pair $(r,s)$}.
\]
Putting everything together we assume from now on:
\begin{assumption}
\label{ass:crit}
The collection $\mathcal K$ is critical and
\[
|\bar{\alpha}_{\ge x_{r+1},\le x_s}|\le i_s-i_{r+1}\quad\mbox{for every splitting pair $(r,s)$}.
\]
\end{assumption}
\section{Mixing}
\label{sec:beyond}
Under the current assumptions, we know that $\bar{\alpha}$ cannot be totally ordered (Remark \ref{rem:totorder}). In this section, we develop the notion of \emph{mixing} which takes advantage of the fact that $\bar{\alpha}$ must have some incomparable elements. The level of mixing will depend on the criticality notions developed in Section \ref{sec:notionsCrit}, which will be further developed in the current section. We begin with Section \ref{subsec:towardlin} which characterizes the locations where elements of the poset can be placed. We then introduce in Section \ref{subsec:guide} the notions of criticality and maximality for splitting pairs. Finally, Section \ref{subsec:place} provides information on the mixing properties of splitting pairs.
\subsection{Range}
\label{subsec:towardlin}
A fixed element $y\in\alpha$ can only be placed in a limited number of locations under any linear extension. For example, if $\alpha$ is totally ordered, there would be only one such location. We start by defining a few quantities associated to $y$ which will provide information on the possible placements of $y$ under linear extensions.
\begin{definition}
\label{def:lmu}
Given $y\in\alpha$ let $i_{\max}(y)$ be the maximum index such that $y>x_{i_{\max}(y)}$ and let $i_{\min}(y)$ be the minimum index such that $y<x_{i_{\min}(y)}$. Set
\begin{align*}
l_{\circ}(y):=\max_{r\le i_{\max}(y)}(i_r^\circ+|\bar{\alpha}_{>x_r,\le y}|)\quad\text{and}\quad u_{\circ}(y):=\min_{s\ge i_{\min}(y)}(i_s^\circ-|\bar{\alpha}_{\ge y,<x_s}|),
\end{align*}
where
\[
i_j^\circ := i_j+1_{j=\ell}1_{\circ},
\]
and let
\begin{align*}
m^{\circ}_{\min}(y):=\min_{\sigma\in\mathcal N_{\circ}}\sigma(y)\quad\text{and}\quad m^{\circ}_{\max}(y):=\max_{\sigma\in\mathcal N_{\circ}}\sigma(y).
\end{align*}
\end{definition}
Note that $i_j^\circ$ is the location where $x_j$ is placed under every linear extension in $\mathcal N_{\circ}$. Hence, for any choice of $r\le i_{\max}(y)$ (resp. $s\ge i_{\min}(y)$), $y$ must be placed at a location at least as large (resp. small) as $i_r^\circ+|\bar{\alpha}_{>x_r,\le y}|$ (resp. $i_s^\circ-|\bar{\alpha}_{\ge y,<x_s}|$).
Definition \ref{def:lmu} immediately implies the following relations between $l_{\circ}$ (resp. $u_{\circ}$) for $\circ\in\{-,=,+\}$:
\begin{lemma}
\label{lem:UL}
Fix $y\in\alpha$. Then,\\
\begin{enumerate}[(i)]
\item $l_{=}(y)-1\le l_{-}(y)\le l_{=}(y)\le l_{+}(y)\le l_{=}(y)+1$.\\
\item If $i_{\max}(y)<\ell$, then $l_{-}(y)=l_{=}(y)=l_{+}(y)$.\\
\item $u_{=}(y)-1\le u_{-}(y)\le u_{=}(y)\le u_{+}(y)\le u_{=}(y)+1$.\\
\item If $i_{\min}(y)>\ell$, then $u_{-}(y)=u_{=}(y)=u_{+}(y)$.
\end{enumerate}
\end{lemma}
The next result provides necessary and sufficient conditions for an element of the poset to be placed at a specific location under linear extensions.
\begin{lemma}
\label{lem:range}
Fix $y\in \alpha$, $\circ\in\{-,=,+\}$, and $i\in [n]$. There exists $\sigma\in\mathcal N_{\circ}$ with $\sigma(y)=i$ if, and only if, $i\in\llbracket l_{\circ}(y),u_{\circ}(y)\rrbracket$ and $i\neq i_m^\circ$ for any $m\in [k]$.
\end{lemma}
\begin{proof}
$~$
$\Longrightarrow$: Fix $\sigma\in\mathcal N_{\circ}$ such that $\sigma(y)=i$. Since $y\neq x_m$ for all $m\in [k]$ it follows that $i\neq i_m^\circ$. We now show $i\le u_{\circ}(y)$; the argument for $i\ge l_{\circ}(y)$ is analogous. Given any $s\ge i_{\min}(y)$, every element $z\in \bar{\alpha}_{>y,<x_s}$ must satisfy $i=\sigma(y)<\sigma(z)<\sigma(x_s)$. Hence, $\sigma(z)$ can take on only $\sigma(x_s)-i-1$ possible values, which means that $|\bar{\alpha}_{>y,<x_s}|\le \sigma(x_s)-i-1$. In other words, $i\le \sigma(x_s)-|\bar{\alpha}_{\ge y,<x_s}|=i_s^\circ-|\bar{\alpha}_{\ge y,<x_s}|$. The latter holds for any $s\ge i_{\min}(y)$ which shows $i\le u_{\circ}(y)$. \\
$\Longleftarrow$: The assumption $i\neq i_m^\circ$ for any $m\in [k]$ implies that we can choose $m\in [k]$ such that $i_m^\circ<i<i_{m+1}^\circ$. Consider the poset $\bar{\alpha}':=\bar{\alpha}$ with the relabeling
\begin{align*}
&x_j'=x_j~\text{for } j\in \llbracket 1,m\rrbracket, \quad x_{m+1}'=y,\quad x_j'=x_{j-1}~\text{for } j\in \llbracket m+2,k+1\rrbracket,\\
&i_j'=i_j^\circ~\text{for } j\in \llbracket 1,m\rrbracket, \quad i_{m+1}'=i,\quad i_j'= i_{j-1}^\circ~\text{for } j\in \llbracket m+2,k+1\rrbracket.
\end{align*}
To complete the proof it suffices to show that there exists a linear extension $\sigma'$ of $\bar{\alpha}'$ satisfying $\sigma'(x_j')=i_j'$ for all $j\in\llbracket 1,k+1\rrbracket$. By Theorem \ref{thm:trivsubcrit}, it suffices to show that
\begin{equation}
\label{eq:temp}
|\bar{\alpha}'_{>x_{r+1}',<x_s'}|\le i_s'-i_{r+1}'-1\quad\text{for all }0\le r+1< s\le k+1.
\end{equation}
When $r+1\neq m+1,s\neq m+1$, \eqref{eq:temp} holds by the assumption $|\mathcal N_{\circ}|>0$ for all $\circ\in\{-,=,+\}$ and Theorem \ref{thm:trivsubcrit}. The case $r+1=m+1=s$ is impossible since $r+1<s$. It remains to check the cases $r+1=m+1,s\neq m+1$ and $r+1\ne m+1,s= m+1$. We verify \eqref{eq:temp} in the case $s=m+1$; the proof for the case $r+1=m+1$ is analogous. When $s=m+1$, \eqref{eq:temp} is equivalent to
\begin{equation}
\label{eq:temp1}
|\bar{\alpha}_{>x_{r+1},<y}|=|\bar{\alpha}'_{>x_{r+1}',<x_s'}|\le i_s'-i_{r+1}'-1=i-i_{r+1}^\circ-1.
\end{equation}
When $r+1\le i_{\max}(y)$, the assumption $i\ge l_{\circ}(y)$ gives $i-i_{r+1}^{\circ}-1\ge l_{\circ}(y)-i_{r+1}^{\circ}-1\ge |\bar{\alpha}_{>x_{r+1},<y}|$, where the last inequality holds by the definition of $l_{\circ}(y)$, so \eqref{eq:temp1} holds. When $i_{\max}(y)<r+1<s=m+1$, $\bar{\alpha}_{>x_{r+1},<y}=\varnothing$ because if there exists $x_{r+1}<z<y$, that would imply $x_{r+1}<y$, which contradicts the maximality of $i_{\max}(y)$. Hence, \eqref{eq:temp1} is equivalent to $0\le i-i_{r+1}^{\circ}-1$, which holds since $i_{r+1}^\circ\le i_m^\circ< i$, where the last inequality holds by the definition of $m$.
\end{proof}
Lemma \ref{lem:range} immediately implies:
\begin{corollary}
\label{cor:range}
Fix $y\in \alpha$ and $\circ\in\{-,=,+\}$. Then,
\[
l_{\circ}(y)\le m^{\circ}_{\min}(y)\quad\text{and}\quad m^{\circ}_{\max}(y)\le u_{\circ}(y).
\]
\end{corollary}
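The quantities of Definition \ref{def:lmu} are easy to experiment with. The sketch below (Python; the toy poset, element names, and positions are chosen for illustration only) computes $l_=(y)$ and $u_=(y)$ directly from the definition (so $i_j^{\circ}=i_j$) and compares them with the positions of $y$ attained over $\mathcal N_=$ by brute force, as described by Lemma \ref{lem:range}.
\begin{verbatim}
from itertools import permutations

def linear_extensions(elements, less):
    for perm in permutations(elements):
        sigma = {e: i + 1 for i, e in enumerate(perm)}
        if all(sigma[a] < sigma[b] for (a, b) in less):
            yield sigma

def l_u_equal(y, elements, less, chain_pos):
    """l_=(y) and u_=(y) from Definition (def:lmu); chain_pos maps x_j to i_j.
    Assumes y lies above some chain element and below some chain element."""
    lt = lambda a, b: (a, b) in less
    below = [x for x in chain_pos if lt(x, y)]   # chain elements x_r with r <= i_max(y)
    above = [x for x in chain_pos if lt(y, x)]   # chain elements x_s with s >= i_min(y)
    l = max(chain_pos[x] + sum(1 for z in elements if lt(x, z) and (z == y or lt(z, y)))
            for x in below)
    u = min(chain_pos[x] - sum(1 for z in elements if (z == y or lt(y, z)) and lt(z, x))
            for x in above)
    return l, u

# Toy poset: chain x1 < x2 < x3 pinned at 1, 3, 5; a, b lie between x1 and x3
# and are incomparable to x2 and to each other.
elements = ["x1", "x2", "x3", "a", "b"]
less = {("x1", "x2"), ("x2", "x3"), ("x1", "x3"),
        ("x1", "a"), ("a", "x3"), ("x1", "b"), ("b", "x3")}
chain_pos = {"x1": 1, "x2": 3, "x3": 5}
N_eq = [s for s in linear_extensions(elements, less)
        if all(s[x] == i for x, i in chain_pos.items())]
for y in ("a", "b"):
    print(y, l_u_equal(y, elements, less, chain_pos), sorted({s[y] for s in N_eq}))
\end{verbatim}
In this example the attained positions are exactly $\llbracket l_=(y),u_=(y)\rrbracket$ with the pinned chain positions removed, as Lemma \ref{lem:range} predicts.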
A second corollary of Lemma \ref{lem:range} is the proof of Remark \ref{rem:posetchar}. Note that Assumption \ref{ass:cl} is not needed for the following result.
\begin{proposition}
\label{prop:IIIimpliesIV}
The condition in Theorem \ref{thm:supcrit}(iii) is equivalent to
\[
\forall\,y<x_{\ell}~\exists\, s(y)\in\llbracket 0,k+1\rrbracket \text{ s.t. } y<x_{s(y)}\text{ and }|\bar{\alpha}_{>y, <x_{s(y)}}|>i_{s(y)}-i_{\ell},
\]
and
\[
\forall\,y>x_{\ell}~\exists\, r(y)\in\llbracket 0,k+1\rrbracket \text{ s.t. } y>x_{r(y)}\text{ and }|\bar{\alpha}_{>x_{r(y)},<y}|>i_{\ell}-i_{r(y)}.
\]
\end{proposition}
\begin{proof}
By Lemma \ref{lem:suff}(a), the conditions in Theorem \ref{thm:supcrit}(iii) are equivalent to: $ \sigma^{-1}(i_{\ell}-1)\nsim x_{\ell}\text{ and } \sigma^{-1}(i_{\ell}+1)\nsim x_{\ell} ~\forall \,\sigma\in\mathcal N_=$. We start by showing that
\begin{align*}
&\forall y<x_{\ell}~\exists\, s(y)\in\llbracket 0,k+1\rrbracket \text{ s.t. } y<x_{s(y)}\text{ and }|\bar{\alpha}_{>y,<x_{s(y)}}|>i_{s(y)}-i_{\ell}\\
&\Longleftrightarrow \\
&\sigma^{-1}(i_{\ell}-1)\nsim x_{\ell}~\forall \,\sigma\in\mathcal N_=;
\end{align*}
The equivalence $\forall\,y>x_{\ell}~\exists\, r(y)\in\llbracket 0,k+1\rrbracket \text{ s.t. } y>x_{r(y)}\text{ and }|\bar{\alpha}_{>x_{r(y)},<y}|>i_{\ell}-i_{r(y)}\Longleftrightarrow \sigma^{-1}(i_{\ell}+1)\nsim x_{\ell}~\forall \,\sigma\in\mathcal N_=$ is analogous.
Indeed, the statement $ \sigma^{-1}(i_{\ell}-1)\nsim x_{\ell} ~\forall\,\sigma\in\mathcal N_=$ is equivalent to the statement that for all $y<x_{\ell}$, there exists no $\sigma\in\mathcal N_=$ such that $\sigma(y)=i_{\ell}-1$. We will show that the latter is equivalent to $u_=(y)<i_{\ell}-1$, which completes the proof. To see this equivalence, note that if $u_=(y)<i_{\ell}-1$, then Lemma \ref{lem:range} implies that there exists no $\sigma\in\mathcal N_=$ such that $\sigma(y)=i_{\ell}-1$. Conversely, suppose there exists no $\sigma\in\mathcal N_=$ such that $\sigma(y)=i_{\ell}-1$, so, by Lemma \ref{lem:range}, $i_{\ell}-1\notin \llbracket l_=(y),u_=(y)\rrbracket$. Note that, by Lemma \ref{lem:range}, $u_=(y)\le i_{\ell}-1$ as $y<x_{\ell}$. Hence, the possibility of $i_{\ell}-1< l_=(y)\le u_=(y)$ cannot occur, which means that $i_{\ell}-1\notin \llbracket l_=(y),u_=(y)\rrbracket\Rightarrow u_=(y)<i_{\ell}-1$, as claimed.
\end{proof}
\subsection{Introduction to mixing}
\label{subsec:guide}
When $\bar{\alpha}$ is totally ordered we have, for any splitting pair $(r,s)$,
\[
\bar{\alpha}_{\ge x_{r+1},\le x_s}\overset{\text{bijection}}{\mapsto} \llbracket i_{r+1}, i_s \rrbracket
\]
under any linear extension $\sigma\in\bigcup_{\circ\in\{-,=,+\}}\mathcal N_{\circ}$. But under the current assumptions, $\bar{\alpha}$ is not totally ordered (Remark \ref{rem:totorder}), which means that a certain amount of \emph{mixing} must occur. In Section \ref{subsec:place} we will show that there is at least one mixed element (Lemma \ref{lem:splitequiv}) for any splitting pair $(r,s)$. When the splitting pair is in addition an $\ell$-splitting pair we characterize the exact number of mixed elements, which depends on the criticality level of the pair:
\begin{definition}
\label{def:splitpaircrit}
An $\ell$-splitting pair $(r,s)$ is \emph{supercritical} if $\mathcal K':=(\mathcal K_0,\ldots,\mathcal K_r,\mathcal K_s,\ldots,\mathcal K_k)$ satisfies $\dim\left(\sum_{K\in \mathcal K'}K\right)\ge |\mathcal K'|+2$, and is \emph{sharp-critical} if $\dim\left(\sum_{K\in \mathcal K'}K\right)= |\mathcal K'|+1$.
\end{definition}
We show in Section \ref{subsec:place} how the above notion of criticality is related to the number of mixed elements (Lemma \ref{lem:generalmixed}). The sharp-critical $\ell$-splitting pairs give rise to the following unique pair which will play an important role in the characterization of the extremals of the critical posets.
\begin{definition}
\label{def:rmaxsmin}
Let $(r_{\iota},s_{\iota})_{\iota}$ be the sharp-critical $\ell$-splitting pairs, where we assume that at least one such pair exists. The \emph{maximal splitting pair} $(r_{\max},s_{\min})$ is given by $r_{\max}:=\max_{\iota}r_{\iota}$ and $s_{\min}:=\min_{\iota}s_{\iota}$.
Associated to the maximal splitting pair are
\begin{align}
\label{eq:Kmax}
\begin{split}
&\mathcal K_{\max}:=(\mathcal K_0,\ldots,\mathcal K_{r_{\max}},\mathcal K_{s_{\min}},\ldots, \mathcal K_k),\\
&\beta_{\max}:=\beta_{\llbracket 0, r_{\max}\rrbracket\cup \llbracket s_{\min}, k\rrbracket},\quad\text{and}\quad \alpha\backslash\beta_{\max}=\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}},
\end{split}
\end{align}
where the last identity follows from Lemma \ref{lem:betaintv}.
\end{definition}
The notion of the maximal splitting pair in Definition \ref{def:rmaxsmin} is tied to the notion of maximal sharp-critical collections introduced in \cite[Section 9.1]{SvH20}, as part of the characterization of the extremals of the Alexandrov-Fenchel inequality for critical polytopes. In particular, a sharp-critical collection $\mathcal K'\subseteq\mathcal K$ is \emph{maximal} if, for any $\mathcal K'\subsetneq\mathcal K''\subseteq\mathcal K$, we have $\dim\left(\sum_{K\in \mathcal K''}K\right)\ge |\mathcal K''|+2$. In other words, any addition of polytopes to $\mathcal K'$ destroys its sharp-critical nature. The next result explains the connection between these two notions of maximality.
\begin{proposition}
\label{prop:max_notions}
Suppose there exists a sharp-critical collection. Then, $\mathcal K_{\max}$ is the only maximal sharp-critical collection.
\end{proposition}
\begin{proof}
We start by recalling that all maximal sharp-critical collections of $\mathcal K$ must be disjoint \cite[Lemma 9.2]{SvH20}. By assumption there exists a sharp-critical collection $\mathcal K'$, so let $\mathcal K_*$ be the (necessarily unique) maximal sharp-critical collection containing $\mathcal K'$. On the other hand, Lemma \ref{lem:sharcrit} shows that any two sharp-critical collections of $\mathcal K$ have a non-trivial intersection. It follows that $\mathcal K_*$ is the only maximal sharp-critical collection in $\mathcal K$.
Next we show that
\[
\mathcal K_*= \{\textnormal{union of all sharp-critical collections}\}=\mathcal K_{\max},
\]
where the second identity follows from Lemma \ref{lem:sharcrit}, which completes the proof. Indeed, clearly, $\mathcal K_*\subseteq\bigcup \{\textnormal{sharp-critical collection}\}$ since $\mathcal K_*$ is a sharp-critical collection. If $\bigcup \{\textnormal{sharp-critical collection}\}$ were strictly larger than $\mathcal K_*$, i.e., if it contained a polytope $K$ not in $\mathcal K_*$, then there would exist a sharp-critical collection $\mathcal K''$ such that $K\in \mathcal K''$. Let $\mathcal K_{**}$ be the (necessarily unique) maximal sharp-critical collection containing $\mathcal K''$. Then $\mathcal K_{**}\neq \mathcal K_{*}$ (as $K\in \mathcal K_{**}$ but $K\notin \mathcal K_{*}$), which contradicts the fact that $\mathcal K_{*}$ is the only maximal sharp-critical collection.
\end{proof}
We conclude the section by introducing notation that will be used throughout the paper. Let
\begin{align}
\label{eq:[]}
\llbracket i_j,i_{j+1}\rrbracket^{\circ}:=\llbracket i^\circ_j,i^\circ_{j+1}\rrbracket = \llbracket i_j+1_{j=\ell}1_{\circ},i_{j+1}+1_{j+1=\ell}1_{\circ}\rrbracket.
\end{align}
We use this notation when constants are added as well, for example, $\llbracket i_j+1,i_{j+1}-1\rrbracket^{\circ}:=\llbracket i^\circ_j+1,i^\circ_{j+1}-1\rrbracket$.
\subsection{Mixing properties of splitting pairs}
\label{subsec:place}
In this section we analyze the mixing properties of splitting pairs---see Figure \ref{fig:orgsec} for a summary.
\begin{figure}
\centering
\begin{tikzpicture}
\node[rectangle, draw=black!60, fill=blue!10, very thick, minimum width=50mm, rounded corners, align=center] (a) at (0,0) {\footnotesize \textbf{splitting pair}\\ \footnotesize $\ge 1$ mixed element(s)\\
\footnotesize{(Lemma \ref{lem:splitequiv})}};
\node[rectangle, draw=black!60, fill=blue!10, very thick, minimum width=50mm, rounded corners, align=center] (c) at (-3,-3) {\footnotesize \textbf{supercritical $\ell$-splitting pair}\\\footnotesize $\ge 2$ mixed elements\\
\footnotesize{(Corollary \ref{cor:Ibetastrong})}};
\node[rectangle, draw=black!60, fill=blue!10, very thick, minimum width=50mm, rounded corners, align=center] (d) at (3,-3) {\footnotesize \textbf{sharp-critical $\ell$-splitting pair}\\ \footnotesize exactly $1$ mixed element\\
\footnotesize (Lemma \ref{lem:generalmixed})};
\node[rectangle, draw=black!60, fill=blue!10, very thick, minimum width=50mm, rounded corners, align=center] (f) at (3,-6) {\footnotesize \textbf{sharp-critical maximal splitting pair}\\ \footnotesize exactly $1$ mixed element, $y^{\sigma}_{\textnormal{crit}}$\\
\footnotesize (Corollary \ref{cor:max})
};
\draw [very thick] (a) -- (c);
\draw [very thick] (a) -- (d) -- (f);
\end{tikzpicture}
\caption{A summary of the mixing results from Section \ref{subsec:place}.}
\label{fig:orgsec}
\end{figure}
We start by showing that a minimal amount of mixing must occur for any splitting pair.
\begin{lemma}
\label{lem:splitequiv}
Fix a splitting pair $(r,s)$ and $\sigma\in\mathcal N_{=}$. There exists a mixed element $y^{\sigma}\in \beta_r\cup\beta_s$ such that $\sigma(y^\sigma) \in \llbracket i_{r+1}, i_s \rrbracket\backslash\{i_{r+1},\ldots,i_s\}$.
\end{lemma}
\begin{proof}
Recall that $|\bar{\alpha}_{\ge x_{r+1},\le x_s}|\le i_s-i_{r+1}$ by Assumption \ref{ass:crit}, which is equivalent to $|\alpha_{> x_{r+1},< x_s}|\le i_s-i_{r+1}-(s-(r+1))-1$. Fix $\sigma\in\mathcal N_{=}$. If there exists no $y^{\sigma} \in \beta_r\cup\beta_s$ with $\sigma(y^{\sigma})\in\llbracket i_{r+1}, i_s \rrbracket\backslash\{i_{r+1},\ldots,i_s\}$, then, by Lemma \ref{lem:betaintv},
\begin{align*}
|\alpha_{> x_{r+1},< x_s}|&=|\alpha\backslash(\beta_r\cup\beta_s\cup \alpha_{< x_{r+1}}\cup \alpha_{>x_s})|\ge |\llbracket i_{r+1}, i_s \rrbracket\backslash\{i_{r+1},\ldots,i_s\}|\\
&=i_s-i_{r+1}+1-(s-(r+1)+1)=i_s-i_{r+1}-(s-(r+1)),
\end{align*}
which is a contradiction.
\end{proof}
\begin{corollary}
\label{cor:splitequiv}
For every $0\le j\le k$, $i_j+1<i_{j+1}$.
\end{corollary}
\begin{proof}
If $k=1$ then the corollary holds by the assumption $i_{\ell}<i_{\ell+1}-1$. Otherwise, note that $(r,s) = (j-1,j+1)$ is a splitting pair. Fix $\sigma\in\mathcal N_=$ and note that Lemma \ref{lem:splitequiv} implies that there exists $y^{\sigma}\not \in \bar{\alpha}_{\ge x_{j}, \le x_{j+1}}$ with $\sigma(y^{\sigma})\in \llbracket i_{j}, i_{j+1}\rrbracket$. The first condition gives $y^\sigma \not \in \{ x_j,x_{j+1}\}$, so $\sigma(y^\sigma)\not \in \{ i_j, i_{j+1}\}$. We conclude that $\llbracket i_{j}+1, i_{j+1}-1\rrbracket = \llbracket i_{j}, i_{j+1}\rrbracket \setminus \{i_j, i_{j+1}\}$ is nonempty.
\end{proof}
Next we move to the mixing properties of $\ell$-splitting pairs. This requires the following simple result.
\begin{lemma}
\label{lem:Ibeta}
$~$
\begin{itemize}
\item Fix $j\in\llbracket 0,k\rrbracket$. For every $\sigma\in\mathcal N_=$, $ \llbracket i_j+1,i_{j+1}-1\rrbracket\subseteq \sigma(\beta_j)$ and, for every $S\subseteq \llbracket 0,k\rrbracket$, $\bigcup_{j\in S}\llbracket i_j+1,i_{j+1}-1\rrbracket\subseteq \sigma(\beta_S)$.\\
\item Fix $j\in \llbracket 0,k\rrbracket\backslash\{\ell-1,\ell\}$ and $\circ\in\{-,+\}$. For every $\sigma\in\mathcal N_{\circ}$, $\llbracket i_j+1,i_{j+1}-1\rrbracket^{\circ}\subseteq \sigma(\beta_j)$ and, for every $S\subseteq \llbracket 0,k\rrbracket\backslash\{\ell-1,\ell\}$, $\bigcup_{j\in S}\llbracket i_j+1,i_{j+1}-1\rrbracket^{\circ}\subseteq \sigma(\beta_S)$.
\end{itemize}
\end{lemma}
\begin{proof}
$~$
\begin{itemize}
\item Fix $\sigma\in\mathcal N_=$. We will show that $\sigma(y)\in \llbracket i_j+1,i_{j+1}-1\rrbracket\Rightarrow y\in\beta_j$ which implies $\llbracket i_j+1,i_{j+1}-1\rrbracket\subseteq \sigma(\beta_j)$; the statement about $S$ follows by taking unions. If $\sigma(y)\in \llbracket i_j+1,i_{j+1}-1\rrbracket$, then clearly $y\in\alpha$ and $\sigma(x_j)=i_j<\sigma(y)<i_{j+1}=\sigma(x_{j+1})$. Hence, neither $y<x_j$ nor $y>x_{j+1}$ can occur. It follows that $y\in \beta_j$.
\item The proof is the same as for the first part where we use that $j\notin\{\ell-1,\ell\}\Rightarrow \sigma(x_j)=i_j\text{ and }\sigma(x_{j+1})=i_{j+1}$.
\end{itemize}
\end{proof}
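Lemma \ref{lem:Ibeta} is likewise straightforward to check numerically. The following sketch (Python, reusing the toy poset from the sketch in Section \ref{subsec:towardlin}; all names are illustrative) computes $\beta_j$ from \eqref{eq:betai} and verifies the inclusion $\llbracket i_j+1,i_{j+1}-1\rrbracket\subseteq\sigma(\beta_j)$ for every $\sigma\in\mathcal N_=$.
\begin{verbatim}
from itertools import permutations

def linear_extensions(elements, less):
    for perm in permutations(elements):
        sigma = {e: i + 1 for i, e in enumerate(perm)}
        if all(sigma[a] < sigma[b] for (a, b) in less):
            yield sigma

elements = ["x1", "x2", "x3", "a", "b"]
less = {("x1", "x2"), ("x2", "x3"), ("x1", "x3"),
        ("x1", "a"), ("a", "x3"), ("x1", "b"), ("b", "x3")}
chain = ["x1", "x2", "x3"]           # x_1, x_2, x_3
pos = {"x1": 1, "x2": 3, "x3": 5}    # i_1, i_2, i_3
alpha = [y for y in elements if y not in chain]

def beta(j):
    """beta_j = alpha minus (alpha_{< x_j} union alpha_{> x_{j+1}}), per (eq:betai), j = 1, 2."""
    xj, xj1 = chain[j - 1], chain[j]
    return {y for y in alpha if (y, xj) not in less and (xj1, y) not in less}

N_eq = [s for s in linear_extensions(elements, less)
        if all(s[x] == pos[x] for x in chain)]
for j in (1, 2):
    I_j = set(range(pos[chain[j - 1]] + 1, pos[chain[j]]))   # [[ i_j + 1, i_{j+1} - 1 ]]
    print(j, all(I_j <= {s[y] for y in beta(j)} for s in N_eq))
\end{verbatim}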
We now show how the mixing properties of $\ell$-splitting pairs are related to their criticality properties.
\begin{lemma}
\label{lem:generalmixed}
Fix an $\ell$-splitting pair $(r,s)$, let
\[
\mathcal K':=(\mathcal K_0,\ldots, \mathcal K_{r},\mathcal K_{s},\ldots ,\mathcal K_k),
\]
and set
\[
c := \dim\left(\sum_{K\in \mathcal K'}K\right) - |\mathcal K'|.
\]
Then, for any fixed $\sigma\in\mathcal N_{\circ}$, for $\circ\in\{-,=,+\}$, there are exactly $c$ distinct mixed elements $y^\sigma_1,\ldots, y^\sigma_c\in \beta_r\cup\beta_s$ satisfying $\sigma(y^\sigma_1), \ldots, \sigma(y^\sigma_c)\in \llbracket i_{r+1}, i_s \rrbracket\backslash\{i_{r+1},\ldots,i_s\}$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:spancollec},
\[
|\beta_{\llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket}|=\dim\left(\sum_{K\in \mathcal K'}K\right) \quad \text{and} \quad |\mathcal K'|=| \cup_{j\in \llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket}\llbracket i_j+1,i_{j+1}-1\rrbracket|.
\]
On the other hand, applying Lemma \ref{lem:Ibeta} to $S:=\llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket$ yields $\cup_{j\in \llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket}\llbracket i_j+1,i_{j+1}-1\rrbracket\subseteq \sigma(\beta_{\llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket})$. Hence, there are exactly $c$ distinct elements $\{y_i^{\sigma}\}_{i\in [c]}$ satisfying $y_i^{\sigma}\in\beta_{\llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket}$ and $\sigma(y_i^{\sigma})\notin \cup_{j\in \llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket}\llbracket i_j+1,i_{j+1}-1\rrbracket$. Now recall that $\beta_{\llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket}=\beta_r\cup\beta_s\cup\alpha_{<x_{r+1}}\cup \alpha_{>{x_s}}$ (Lemma \ref{lem:betaintv}), and note that $\sigma(y_i^\sigma) \notin \cup_{j\in \llbracket 0,r\rrbracket\cup \llbracket s,k\rrbracket}\llbracket i_j+1,i_{j+1}-1\rrbracket$ implies that $y_i^{\sigma}\in \beta_r\cup\beta_s$.
\end{proof}
\begin{corollary}
\label{cor:Ibetastrong}
Let $(r,s)$ be a supercritical $\ell$-splitting pair. Then, for any $\sigma\in\mathcal N_{\circ}$, for $\circ\in\{-,=,+\}$, there are $c\ge 2$ distinct mixed elements $y^\sigma_1,\ldots, y^\sigma_c\in \beta_r\cup\beta_s$ satisfying $\sigma(y^\sigma_1), \ldots, \sigma(y^\sigma_c)\in \llbracket i_{r+1}, i_s \rrbracket\backslash\{i_{r+1},\ldots,i_s\}$.
\end{corollary}
Note that Corollary \ref{cor:Ibetastrong} is an improvement on Lemma \ref{lem:splitequiv} in the setting of supercritical $\ell$-splitting pairs, as it guarantees the existence of two distinct mixed elements rather than one. In addition, because Corollary \ref{cor:Ibetastrong} specializes to $\ell$-splitting pairs it can handle $\mathcal N_{\circ}$, for any $\circ\in\{-,=,+\}$, while Lemma \ref{lem:splitequiv} applies only to $\mathcal N_{=}$.
We conclude this section by specializing to the setting where the $\ell$-splitting pair is maximal. Since the maximal splitting pair is sharp-critical, Lemma \ref{lem:generalmixed} immediately gives that we have exactly one mixed element.
\begin{corollary}
\label{cor:max}
Fix $\circ\in \{-,=,+\}$ and $\sigma\in\mathcal N_{\circ}$. There exists a unique mixed element $y^{\sigma}_{\textnormal{crit}}$ satisfying $y^{\sigma}_{\textnormal{crit}} \in \beta_{r_{\max}}\cup\beta_{s_{\min}}$ and $\sigma(y^{\sigma}_{\textnormal{crit}})\in \llbracket i_{r_{\max}+1}, i_{s_{\min}} \rrbracket\backslash\{i_{r_{\max}+1},\ldots,i_{s_{\min}}\}$.
\end{corollary}
\section{The extreme normal directions}
\label{sec:ext}
Once Assumption \ref{ass:crit} is set in place, we are ready, in principle, to apply Theorem \ref{thm:SvH}. However, Theorem \ref{thm:SvH} characterizes the extremals \emph{geometrically} in terms of the $(B,\mathcal K)$-extreme normal directions so a \emph{combinatorial} interpretation of these vectors is needed. The goal of this section is to characterize, combinatorially, a sufficient number of the $(B,\mathcal K)$-extreme normal directions so that Theorem \ref{thm:SvH} can be applied.
We recall that $\{e_j\}_{j\in[n-k]}$ is the standard basis of $\R^{n-k}$ and, for $u,v\in [n-k]$ distinct, we let $e_{uv}:=\frac{e_u-e_v}{\sqrt{2}}$ and $o_{uv}:=\frac{e_u+e_v}{\sqrt{2}}$. We also recall the definition \eqref{eq:betai}:
\[
\beta_i:=\alpha\backslash(\alpha_{<x_i}\cup \alpha_{>x_{i+1}}).
\]
The next result characterizes certain faces of the polytopes $\{K_i\}$.
\begin{lemma}
\label{lem:dirspan}
Fix $ i\in \llbracket 0,k\rrbracket$. We have,
\begin{enumerate}[(i)]
\item For $y_j\notin\beta_i$, $\mathop{\mathrm{lin}}(F(K_i,\pm e_j))=\R^{\beta_i}$, and for $y_u,y_v\notin\beta_i$, $\mathop{\mathrm{lin}}(F(K_i,\pm e_{uv}))=\R^{\beta_i}$.
\item For $y_j\in\beta_i$, $\mathop{\mathrm{lin}}(F(K_i,-e_j))=\R^{\beta_i\backslash\alpha_{\le y_j}}$ and $\mathop{\mathrm{lin}}(F(K_i,e_j))=\R^{\beta_i\backslash\alpha_{\ge y_j}}$.
\item For $y_u,y_v\in\beta_i$ such that $y_v$ covers $y_u$ in $\alpha$, $\mathop{\mathrm{lin}}(F(K_i,e_{uv}))=\R^{\beta_i\backslash\{y_u,y_v\}}\oplus\mathop{\mathrm{span}}(o_{uv})$.
\end{enumerate}
\end{lemma}
\begin{proof}
We start by recalling \eqref{eq:Kiordpoly}:
\[
K_i=O_{\beta_i}+1_{\alpha_{>x_{i+1}}}\mbox{ for } i\in \llbracket 0,k\rrbracket
\]
so that
\[
\mathop{\mathrm{lin}}(F(K_i,\mathrm u))=\mathop{\mathrm{lin}}(F(O_{\beta_i},\mathrm u))\quad\forall\, \mathrm u\in S^{n-k-1}.
\]
\begin{enumerate}[(i)]
\item Let $\mathrm u\in \{\pm e_j\}$ so, since $h_{O_{\beta_i}}(\mathrm u)=0$ as $y_j\notin\beta_i$, we get that $\mathop{\mathrm{lin}}(F(K_i,\mathrm u))=O_{\beta_i}\cap \{t_j=0\}=O_{\beta_i}$, where the last equality holds as $y_j\notin\beta_i$. Similarly, let $\mathrm u\in\{\pm e_{uv}\}$ so, since $h_{O_{\beta_i}}(\mathrm u)=0$ as $y_u,y_v\notin\beta_i$, we get that $\mathop{\mathrm{lin}}(F(K_i,\mathrm u))=O_{\beta_i}\cap \{t_u=t_v\}=O_{\beta_i}$, where the last equality holds as $y_u,y_v\notin\beta_i$. The proof is complete as $\dim O_{\beta_i}=|\beta_i|$ (Lemma \ref{lem:dimOb}).
\item Since $h_{O_{\beta_i}}(-e_j)=0$, we get $\mathop{\mathrm{lin}}(F(K_i,-e_j))=O_{\beta_i}\cap \{t_j=0\}=O_{\beta_i\backslash\alpha_{\le y_j}}$ where the last equality holds as $y_j\in \beta_i$. Analogously, since $h_{O_{\beta_i}}(e_j)=1$ (because $y_j\in\beta_i$), we get $\mathop{\mathrm{lin}}(F(K_i,e_j))=O_{\beta_i}\cap \{t_j=1\}=O_{\beta_i\backslash\alpha_{\ge y_j}}$.
\item Since $y_u\le y_v$ we have $h_{O_{\beta_i}}(e_{uv})=0$, so $\mathop{\mathrm{lin}}(F(K_i,e_{uv}))=O_{\beta_i}\cap \{t_u=t_v\}$. Since $y_v$ covers $y_u$, it follows from Lemma \ref{lem:dimOb}(iii) that $\dim(\mathop{\mathrm{lin}}(F(K_i,e_{uv})))=|\beta_i|-1$. On the other hand, since $\mathop{\mathrm{lin}}(F(K_i,e_{uv}))\perp e_{uv}$, we have $\mathop{\mathrm{lin}}(F(K_i,e_{uv}))\subseteq \R^{\beta_i}\cap e_{uv}^{\perp}=\R^{\beta_i\backslash\{y_u,y_v\}}\oplus\mathop{\mathrm{span}}(o_{uv})$. The proof is complete since $\dim(\R^{\beta_i\backslash\{y_u,y_v\}}\oplus\mathop{\mathrm{span}}(o_{uv}))=|\beta_i|-1$.
\end{enumerate}
\end{proof}
The following proposition, which is the main result of this section, characterizes combinatorially some of the $(B,\mathcal K)$-extreme normal directions. We remark that the $(B,\mathcal K)$-extreme normal directions given in Proposition \ref{prop:dir}($e$--$h$) will be used only for the characterization of the extremals of sharp-critical posets.
\begin{proposition}
\label{prop:dir}
The following vectors are $(B,\mathcal K)$-extreme normal directions:\\
\begin{enumerate}[(a)]
\item For each fixed $ 0\le m\le \ell$: $-e_j$ for any $j$ such that $y_j\in\alpha_{>x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m+1$. \\
\item For each fixed $ \ell\le m\le k+1$: $e_j$ for any $j$ such that $y_j\in\alpha_{<x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m-1$.\\
\item $e_{uv}$ for any $u,v$ such that $y_u<y_v$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)+1=\sigma(y_v)$.\\
\item $e_{uv}$ for any $u,v$ such that $y_u<y_v$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)=i_{\ell}-1$ and $\sigma(y_v)=i_{\ell}+1$.\\
\item For each fixed $r_{\max}+1\le m\le\ell-1$: $-e_j$ for any $j$ such that $y_j\in\alpha_{>x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m+2$.\\
\item For each fixed $\ell+1\le m\le s_{\min}$: $e_j$ for any $j$ such that $y_j\in\alpha_{<x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m-2$.\\
\item $-e_j$ for any $j$ such that $y_j\in\alpha_{>x_{\ell-1}}$ and there exists $\sigma\in\mathcal N_+$ satisfying $\sigma(y_j)=i_{\ell-1}+2$.\\
\item $e_j$ for any $j$ such that $y_j\in\alpha_{<x_{\ell+1}}$ and there exists $\sigma\in\mathcal N_-$ satisfying $\sigma(y_j)=i_{\ell+1}-2$.
\end{enumerate}
\end{proposition}
Note that parts (a--b), which suffice for the supercritical posets, provide information about nearest neighbors of $x_m$, while parts (e--f), which are needed for the critical posets, provide information about second-nearest neighbors of $x_m$.
\begin{proof}[Proof of Proposition \ref{prop:dir}]
By Definition \ref{def:extreme}, we need to show that, whenever $\mathrm u$ is one of the vectors in the proposition, we have, for any collection $\mathcal K'\subseteq\mathcal K$,
\[
\dim\left(\sum_{K\in\mathcal K'}F(K,\mathrm u)\right)\ge |\mathcal K'|.
\]
Let $j_0:=-1<0\le j_1<\cdots<j_p\le k<k+1=:j_{p+1}$ and $\kappa_1,\ldots, \kappa_p$, with $0\le\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$, for $j_q\in\llbracket 0,k\rrbracket$, and set
\[
\mathcal K':=(\underbrace{K_{j_1},\ldots,K_{j_1}}_{\kappa_1},\ldots,\underbrace{K_{j_p},\ldots,K_{j_p}}_{\kappa_p}),
\]
\[
J:=\{j_1,\ldots,j_p\}.
\]
For notational simplicity we set
\begin{align}
\label{eq:Ij}
\textnormal{I}_j:=\llbracket i_j+1,i_{j+1}-1\rrbracket\quad \text{for }j\in\llbracket 0,k\rrbracket,\quad \textnormal{I}_S:=\cup_{j\in S}\textnormal{I}_{j}\quad \text{for }S\subset \llbracket 0,k\rrbracket;
\end{align}
for example,
\[
\textnormal{I}_{\llbracket r+1,s-1\rrbracket}=\llbracket i_{r+1},i_s\rrbracket\backslash\{i_{r+1},\ldots, i_s\}.
\]
Note that
\[
|\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}\ge |\mathcal K'|,
\]
because $0\le\kappa_q\le i_{j_q+1}-i_{j_q}-1-1_{j_q\in\{\ell-1,\ell\}}$ and since $|\textnormal{I}_{j_q}|=i_{j_q+1}-i_{j_q}-1$.
\begin{enumerate}[(a)]
\item Fix $0\le m\le \ell$ and consider $\sigma \in\mathcal N_=$ such that $\sigma(y_j)=i_m+1$ where $j$ is such that $y_j\in\alpha_{>x_m}$. Let
\[
\gamma_{j_q}:=
\begin{cases}
\beta_{j_q}&\mbox{if }y_j\notin\beta_{j_q}\\
\beta_{j_q}\backslash\alpha_{\le y_j} &\mbox{if }y_j\in\beta_{j_q}
\end{cases},
\]
and $\gamma_J:=\cup_{j_q\in J}\gamma_{j_q}$. By Lemma \ref{lem:dirspan}(i--ii),
\[
\mathop{\mathrm{lin}}(F(K_{j_q},-e_j))=\R^{\gamma_{j_q}}\quad \text{for all}\quad j_q\in J,
\]
so, by \eqref{eq:linspan},
\[
\mathop{\mathrm{lin}}\left(F\left(\sum_{K\in\mathcal K'}K,-e_j\right)\right)=\R^{\gamma_J}.
\]
It follows that
\[
\dim\left(\sum_{K\in\mathcal K'}F(K,-e_j)\right)=|\gamma_J|,
\]
so it remains to show that $|\gamma_J|\ge |\mathcal K'|$. Since $|\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}\ge |\mathcal K'|$, it will suffice to show that
\[
|\gamma_J|\ge |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J},
\]
which requires the following claim.
\begin{claim}
\label{cl:a}
$~$
\begin{enumerate}[(i)]
\item For $j_q\neq m$, $\textnormal{I}_{j_q} \subseteq\sigma(\gamma_{j_q})$.
\item For $j_q= m$, $\textnormal{I}_{m}\backslash\{i_m+1\} \subseteq\sigma(\gamma_m)$.
\end{enumerate}
\end{claim}
\begin{proof}
$~$
\begin{enumerate}[(i)]
\item We need to consider the cases $y_j\notin\beta_{j_q}$ and $y_j\in\beta_{j_q}$. If $y_j\notin\beta_{j_q}$ then the result holds by Lemma \ref{lem:Ibeta}. Suppose $y_j\in\beta_{j_q}$. Then, we must have $m<j_q$; otherwise, $j_q<m$ (by assumption $j_q\neq m$) so $x_{j_q+1}\le x_m<y_j$, but this implies $y_j\notin\beta_{j_q}$, which is a contradiction. Now let $y$ be any element such that $\sigma(y)\in \textnormal{I}_{j_q}$, which by Lemma \ref{lem:Ibeta}, implies that $y\in\beta_{j_q}$. Since $\sigma(y)\ge i_{j_q}+1>i_m+1=\sigma(y_j)$, we can conclude that, in fact, $y\in\beta_{j_q}\backslash\alpha_{\le y_j}=\gamma_{j_q}$. To summarize, $\sigma(y)\in \textnormal{I}_{j_q}\Rightarrow y\in\gamma_{j_q}$, which shows $\textnormal{I}_{j_q}\subseteq\sigma(\gamma_{j_q})$.
\item We need to consider the cases $y_j\notin\beta_m$ and $y_j\in\beta_m$. Suppose $y_j\notin\beta_m$. By Corollary \ref{cor:splitequiv}, $i_m+1<i_{m+1}$ so $\sigma(y_j)=i_m+1\in \textnormal{I}_m\subseteq\sigma(\beta_m)$, where we used Lemma \ref{lem:Ibeta}. This contradicts $y_j\notin\beta_m$ so we are left to consider $y_j\in\beta_m$. Let $y$ be any element such that $\sigma(y)\in \textnormal{I}_{m}\backslash\{i_m+1\}=\llbracket i_m+2,i_{m+1}-1\rrbracket$. Then, $y\in \beta_{m}\backslash\alpha_{\le y_j}$ since, by Lemma \ref{lem:Ibeta}, $y\in \beta_{m}$, but we also have $\sigma(y)\ge i_m+2>i_m+1=\sigma(y_j)$. To summarize, $\sigma(y)\in \textnormal{I}_{m}\backslash\{i_m+1\}\Rightarrow y\in \beta_{m}\backslash\alpha_{\le y_j}$, which shows $\textnormal{I}_{m}\backslash\{i_m+1\}\subseteq\sigma(\gamma_m)$.
\end{enumerate}
\end{proof}
In order to use Claim \ref{cl:a} in the proof of $|\gamma_J|\ge |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}$, we distinguish between two cases: $m\notin J$ and $m\in J$. If $m\notin J$, then taking a union over $j_q\in J$ in Claim \ref{cl:a} gives $\textnormal{I}_J\subseteq\sigma(\gamma_J)$, so $|\gamma_J|\ge |\textnormal{I}_J|\ge |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}$, as desired.
Suppose then that $m\in J$. Taking a union over $j_q\in J$ in Claim \ref{cl:a} gives $\textnormal{I}_J\backslash\{i_m+1\}\subseteq \sigma(\gamma_J)$. Hence, if $\ell\in J$, we have $|\gamma_J|\ge |\textnormal{I}_J|-1\ge |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}$, which completes the proof. It remains to consider the case $m\in J$ and $\ell\notin J$:
Choose the largest $0\le b\le p$ such that $j_b<\ell$, so $j_b<\ell<j_{b+1}$, and, in particular, $(j_b,j_{b+1})$ is an $\ell$-splitting pair. By Lemma \ref{lem:splitequiv}, there exists $y^{\sigma} \in \beta_{j_b}\cup\beta_{j_{b+1}}$ such that $\sigma(y^\sigma) \in\textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$.
Since $m=j_q<\ell$ for some $ 0\le q\le p$, and since $b$ is the largest element in $\llbracket 0,p\rrbracket$ such that $j_b<\ell$, we have $q\le b$, and hence $m\le j_b$. It follows that $\sigma(y_j)=i_m+1<i_{j_b+1}+1\le \sigma(y^{\sigma})$, and, in particular, $y^{\sigma}\notin \alpha_{\le y_j}$. Hence, $y^{\sigma}\in (\beta_{j_b}\backslash \alpha_{\le y_j})\cup(\beta_{j_{b+1}}\backslash \alpha_{\le y_j})\subseteq \gamma_{j_b}\cup \gamma_{j_{b+1}}\subseteq\gamma_J$, so $(\textnormal{I}_J\backslash\{i_m+1\})\cup\{\sigma(y^{\sigma})\}\subseteq \sigma(\gamma_J)$. Finally, $\sigma(y^{\sigma})\notin\textnormal{I}_J$ because $J$ and $\llbracket j_b+1,j_{b+1}-1\rrbracket$ do not intersect, which completes the proof since it implies that $|\gamma_J|\ge |(\textnormal{I}_J\backslash\{i_m+1\})\cup\{\sigma(y^{\sigma})\}|\ge |\textnormal{I}_J|-1+1=|\textnormal{I}_J|\ge |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}$.\\
\item The proof is analogous to part (a). \\
\item Fix $u,v$ such that $y_u<y_v$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)+1=\sigma(y_v)$. For $j_q\in J$, let
\[
\gamma_{j_q}:=
\begin{cases}
\beta_{j_q}&\mbox{if }y_u,y_v\notin \beta_{j_q}\\
\beta_{j_q}\backslash\{y_u,y_v\}&\mbox{if }y_u,y_v\in\beta_{j_q},\\
\beta_{j_q}\backslash\alpha_{\ge y_u} &\mbox{if }y_u\in\beta_{j_q},y_v\notin\beta_{j_q},\\
\beta_{j_q}\backslash\alpha_{\le y_v} &\mbox{if }y_u\notin\beta_{j_q},y_v\in\beta_{j_q}.
\end{cases}
\]
We start by describing the faces of $\{K_{j_q}\}_{j_q\in J}$ in the directions $\{e_{uv}\}$.
\begin{claim}
\label{cl:c1}
For every $j_q\in J$,
\[
\mathop{\mathrm{lin}}(F(K_{j_q},e_{uv}))=
\begin{cases}
\R^{\gamma_{j_q}}\oplus\mathop{\mathrm{span}}(o_{uv})&\mbox{if }y_u,y_v\in \beta_{j_q},\\
\R^{\gamma_{j_q}}&\mbox{ otherwise}.
\end{cases}
\]
\end{claim}
\begin{proof}
There are four cases to consider:
\begin{itemize}
\item $y_u,y_v\in \beta_{j_q}$: The claim follows from Lemma \ref{lem:dirspan}(iii).
\item $y_u,y_v\notin \beta_{j_q}$: The claim follows from Lemma \ref{lem:dirspan}(i).
\item $y_u\in \beta_{j_q}, y_v\notin \beta_{j_q}$: We will show that $\mathop{\mathrm{lin}}(F(K_{j_q},e_{uv}))=\mathop{\mathrm{lin}}(F(K_{j_q},e_u))$, and the claim will then follow from Lemma \ref{lem:dirspan}(ii). Indeed, the assumption $y_v\notin \beta_{j_q}$ implies that $y_v\in\alpha_{<x_{j_q}}\cup\alpha_{>x_{j_q+1}}$. But $y_v\notin\alpha_{<x_{j_q}}$ because, otherwise, $y_u<y_v<x_{j_q}$, which contradicts the assumption $y_u\in \beta_{j_q}$. Hence, $y_v>x_{j_q+1}$ so, by the definition \eqref{eq:Ki} of $K_{j_q}$, $t_v=1$ for any $t\in K_{j_q}$. Since $\sup_{t_u\in [0,1]}t_u=1$ (as $y_u\in \beta_{j_q}\Rightarrow y_u\not<x_{j_q}$), it follows that $h_{K_{j_q}}(e_{uv})=\sup_{t_u\in [0,1]}\frac{t_u-t_v}{\sqrt{2}}=0$, and hence
\[
\mathop{\mathrm{lin}}(F(K_{j_q},e_{uv}))=K_{j_q}\cap \{t_u=t_v\}=K_{j_q}\cap \{t_u=1\}=\mathop{\mathrm{lin}}(F(K_{j_q},e_{u})),
\]
as needed.
\item $y_u\notin \beta_{j_q},y_v\in \beta_{j_q}$: The argument is analogous to the previous case: $y_u\in \beta_{j_q}$ and $y_v\notin \beta_{j_q}$.
\end{itemize}
\end{proof}
Next we prove the analogue of Claim \ref{cl:a}.
\begin{claim}
\label{cl:c2}
Choose $m\in \llbracket 0,k+1\rrbracket$ such that $i_m<\sigma(y_u)<\sigma(y_v)<i_{m+1}$.
\begin{enumerate}[(i)]
\item For $j_q\neq m$, $\textnormal{I}_{j_q} \subseteq\sigma(\gamma_{j_q})$.
\item For $j_q= m$, $\textnormal{I}_{m}\backslash\{\sigma(y_u),\sigma(y_v)\} \subseteq\sigma(\gamma_m)$.
\end{enumerate}
\end{claim}
\begin{proof}
$~$
We need to consider the four cases (1) $y_u,y_v\in \beta_{j_q}$, (2) $y_u,y_v\notin \beta_{j_q}$, (3) $y_u\in \beta_{j_q}, y_v\notin \beta_{j_q}$, and (4) $y_u\notin \beta_{j_q},y_v\in \beta_{j_q}$.
\begin{enumerate}[(i)]
\item Case (1): For any $y$ such that $\sigma(y)\in \textnormal{I}_{j_q}$, we have $y\in \beta_{j_q}$, by Lemma \ref{lem:Ibeta}, and $y\notin\{y_u,y_v\}$, since $\sigma(y_u),\sigma(y_v)\in \textnormal{I}_m$, and $ \textnormal{I}_m\cap \textnormal{I}_{j_q}=\varnothing$ as $m\neq j_q$. Hence, $y\in \beta_{j_q}\backslash\{y_u,y_v\}=\gamma_{j_q}$, so we conclude $\textnormal{I}_{j_q}\subseteq \sigma(\gamma_{j_q})$.
Case (2): Since $\gamma_{j_q}=\beta_{j_q}$, Lemma \ref{lem:Ibeta} implies $\textnormal{I}_{j_q}\subseteq \sigma(\gamma_{j_q})$.
Case (3): For any $y$ such that $\sigma(y)\in \textnormal{I}_{j_q}$, we have $y\in \beta_{j_q}$, by Lemma \ref{lem:Ibeta}. On the other hand, the proof of Claim \ref{cl:c1} showed that $y_v>x_{j_q+1}$, so the assumption on $m$ implies that $j_q<m$, which means that $\sigma(y)<i_{j_q+1}\le i_m<\sigma (y_u)$. In particular, $y\notin \alpha_{\ge y_u}$ so we conclude that $y\in \beta_{j_q}\backslash \alpha_{\ge y_u}=\gamma_{j_q}$. It follows that $\textnormal{I}_{j_q}\subseteq \sigma(\gamma_{j_q})$.
Case (4) is analogous to case (3).
\item Case (1): For any $y$ such that $\sigma(y)\in\textnormal{I}_{m}\backslash\{\sigma(y_u),\sigma(y_v)\}$, Lemma \ref{lem:Ibeta} implies that $y\in \beta_m\backslash \{y_u,y_v\}=\gamma_m$, which implies that $\textnormal{I}_{m}\backslash\{\sigma(y_u),\sigma(y_v)\} \subseteq \sigma(\gamma_m)$.
Case (2): Since $\gamma_m=\beta_m$, Lemma \ref{lem:Ibeta} implies $\textnormal{I}_{m}\backslash\{\sigma(y_u),\sigma(y_v)\} \subseteq \sigma(\gamma_m)$.
Case (3): As shown in part (i) case (3), we must have $j_q<m$ so this case cannot occur.
Case (4) is analogous to case (3).
\end{enumerate}
\end{proof}
Choose $m\in \llbracket 0,k+1\rrbracket$ such that $i_m<\sigma(y_u)<\sigma(y_v)<i_{m+1}$. To complete the proof we distinguish between two cases: $m\notin J$ and $m\in J$. Suppose $m\notin J$. By \eqref{eq:linspan} and Claim \ref{cl:c1}, $\R^{\gamma_J}\subseteq\mathop{\mathrm{lin}}\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)$,
so $\dim\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)\ge |\gamma_J|$. On the other hand, by Claim \ref{cl:c2} and as $m\notin J$, $|\gamma_J|\ge |\textnormal{I}_J|$. We conclude
\[
\dim\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)\ge |\textnormal{I}_J|\ge |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}\ge |\mathcal K'|,
\]
which completes the proof.
Suppose that $m\in J$. By the definition of $m$, $\sigma(y_u),\sigma(y_v)\in \textnormal{I}_m$, so Lemma \ref{lem:Ibeta} implies that $y_u,y_v\in \beta_m$. By Claim \ref{cl:c1}, it follows that $\mathop{\mathrm{lin}}(F(K_m,e_{uv}))=\R^{\gamma_m}\oplus \mathop{\mathrm{span}}(o_{uv})$. On the other hand, for any $j_q\in J$, by the definition of $\gamma_{j_q}$, we have $y_u,y_v\notin\gamma_{j_q}$. Hence, $\R^{\gamma_{j_q}}\cap \mathop{\mathrm{span}}(o_{uv})= \{0\}$ for all $j_q\in J$, and in particular, $\R^{\gamma_{J}}\cap \mathop{\mathrm{span}}(o_{uv})= \{0\}$. It follows from \eqref{eq:linspan} that
\[
\mathop{\mathrm{lin}}\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)=\R^{\gamma_J}\oplus \mathop{\mathrm{span}}(o_{uv}),
\]
and
\[
\dim\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)= |\gamma_J|+1.
\]
We now consider separately the cases $\ell\in J$ and $\ell\notin J$. Suppose $\ell\in J$. By Claim \ref{cl:c2}, $|\gamma_J|\ge |\textnormal{I}_J|-2$ so
\[
|\gamma_J|+1\ge |\textnormal{I}_J|-1\ge |\textnormal{I}_J|-1-1_{\ell-1\in J}\ge |\mathcal K'|,
\]
which completes the proof. It remains to consider the case $m\in J$ and $\ell\notin J$:
Choose the largest $b\in\llbracket 0,p\rrbracket$ such that $j_b<\ell$, so $j_b<\ell<j_{b+1}$, and, in particular, $(j_b,j_{b+1})$ is an $\ell$-splitting pair. By Lemma \ref{lem:splitequiv}, there exists $y^{\sigma}\in \beta_{j_b}\cup\beta_{j_{b+1}}$ with $\sigma(y^{\sigma})\in \textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$. We will show that
\begin{align}
\label{eq:ysigmgamma}
y^{\sigma}\in \gamma_{j_b}\cup\gamma_{j_{b+1}}.
\end{align}
Assume for now that \eqref{eq:ysigmgamma} holds. Then, $(\textnormal{I}_J\backslash\{\sigma(y_u),\sigma(y_v)\})\cup\{\sigma(y^{\sigma})\}\subseteq \sigma(\gamma_J)$. On the other hand, arguing as in part (a) for the case $m\in J,\ell\notin J$, we have $\sigma(y^{\sigma})\notin \textnormal{I}_J$. Hence, $|\gamma_J|+1\ge |\textnormal{I}_J|$, so $\dim\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)\ge |\textnormal{I}_J|\ge |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}\ge |\mathcal K'|$, which completes the proof.
It remains to prove \eqref{eq:ysigmgamma}. We will show $y^{\sigma}\in \beta_{j_b}\Rightarrow y^{\sigma}\in \gamma_{j_b}$, and the argument for $y^{\sigma}\in \beta_{j_{b+1}}\Rightarrow y^{\sigma}\in \gamma_{j_{b+1}}$ is analogous. Since $y^{\sigma}\in \beta_{j_b}\cup\beta_{j_{b+1}}$, \eqref{eq:ysigmgamma} will follow. Suppose then that $y^{\sigma}\in \beta_{j_b}$ so our task is to show that $y^{\sigma}\in \gamma_{j_b}$. There are two cases to consider: $j_b\ge m$ and $j_{b+1}\le m$; we will consider the case $j_b\ge m$ and the argument for the case $j_{b+1}\le m$ is analogous.
Let us start by showing that $\gamma_{j_b}$ cannot be equal to $\beta_{j_b}\backslash\alpha_{\ge y_u}$. Indeed, the latter occurs only if $y_u\in\beta_{j_b},y_v\notin \beta_{j_b}$, in which case, either $y_v<x_{j_b}$ or $y_v>x_{j_b+1}$. If $y_v<x_{j_b}$, then $y_u<y_v<x_{j_b}$, which contradicts $y_u\in\beta_{j_b}$. If $y_v>x_{j_b+1}$, then $\sigma(x_{j_b+1})<\sigma(y_v)<i_{m+1}=\sigma(x_{m+1})$, which contradicts $m\le j_b$. We conclude that $\gamma_{j_b}\in\{\beta_{j_b},\beta_{j_b}\backslash\{y_u,y_v\},\beta_{j_b}\backslash\alpha_{\le y_u}\}$, and since $y^{\sigma}\in \beta_{j_b}$, it suffices to show that $y^{\sigma}\notin \{y_u,y_v\}$ and $y^{\sigma}\notin\alpha_{\le y_u}$. To see that $y^{\sigma}\notin \{y_u,y_v\}$, note that $\sigma(y^{\sigma})\in \textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$ while $\sigma(y_u),\sigma(y_v)\in \textnormal{I}_m$. Since $m\le j_b$, $\textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}\cap \textnormal{I}_m=\varnothing$, so $y^{\sigma}\notin \{y_u,y_v\}$. To see that $y^{\sigma}\notin\alpha_{\le y_u}$, note that, since $m\le j_b$, $\sigma(y_u)<i_{m+1}\le i_{j_b+1}<\sigma(y^{\sigma})$, where the last inequality holds as $\sigma(y^{\sigma})\in \textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$.\\
\item Fix $u,v$ such that there exist $y_u<y_v$ with $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)=i_{\ell}-1$ and $\sigma(y_v)=i_{\ell}+1$. For $j_q\in J$ we let $\gamma_{j_q}$ be as in part (c). We start by showing that Claim \ref{cl:c1} holds here as well.
\begin{claim}
\label{cl:d1}
For every $j_q\in J$,
\[
\mathop{\mathrm{lin}}(F(K_{j_q},e_{uv}))=
\begin{cases}
\R^{\gamma_{j_q}}\oplus\mathop{\mathrm{span}}(o_{uv})&\mbox{if }y_u,y_v\in \beta_{j_q},\\
\R^{\gamma_{j_q}}&\mbox{ otherwise}.
\end{cases}
\]
\end{claim}
\begin{proof}
The proof is the same as the proof of Claim \ref{cl:c1}, but we need to check that, when $y_u,y_v\in\beta_{j_q}$, $y_v$ covers $y_u$ in $\alpha$. The latter must be true since, otherwise, there exists $z\in\alpha$ such that $y_u<z<y_v$, so $i_{\ell}-1=\sigma(y_u)<\sigma(z)<\sigma(y_v)=i_{\ell}+1$. This implies $z=x_{\ell}$, which contradicts $z\in\alpha$.
\end{proof}
Next we prove the analogue of Claim \ref{cl:c2}.
\begin{claim}
\label{cl:d2}
$~$
\begin{enumerate}[(i)]
\item For $j_q\notin\{\ell-1,\ell\}$, $\textnormal{I}_{j_q} \subseteq\sigma(\gamma_{j_q})$.
\item For $j_q=\ell-1$, $\textnormal{I}_{\ell-1}\backslash\{i_{\ell}-1\} \subseteq\sigma(\gamma_{\ell-1})$.
\item For $j_q=\ell$, $\textnormal{I}_{\ell}\backslash\{i_{\ell}+1\} \subseteq\sigma(\gamma_{\ell})$.
\end{enumerate}
\end{claim}
\begin{proof}
We need to consider the four cases (1) $y_u,y_v\in \beta_{j_q}$, (2) $y_u,y_v\notin \beta_{j_q}$, (3) $y_u\in \beta_{j_q}, y_v\notin \beta_{j_q}$, and (4) $y_u\notin \beta_{j_q},y_v\in \beta_{j_q}$.
\begin{enumerate}[(i)]
\item Case (1): For any $y$ such that $\sigma(y)\in \textnormal{I}_{j_q}$, we have $y\notin\{y_u,y_v\}$ since $\sigma(y_u),\sigma(y_v)\notin \textnormal{I}_{j_q}$ (because $j_q\notin\{\ell-1,\ell\}$). Hence, by Lemma \ref{lem:Ibeta}, $y\in\beta_{j_q}\backslash \{y_u,y_v\}=\gamma_{j_q}$, so we conclude $\textnormal{I}_{j_q} \subseteq\sigma(\gamma_{j_q})$.
Case (2): Since $\gamma_{j_q}=\beta_{j_q}$, Lemma \ref{lem:Ibeta} yields $\textnormal{I}_{j_q}\subseteq \sigma(\beta_{j_q})=\sigma(\gamma_{j_q})$.
Case (3): We start by showing that $j_q<\ell$. Indeed, suppose for contradiction that $ j_q\ge \ell$. Since $y_v\notin\beta_{j_q}$, we have that either $y_v<x_{j_q}$ or $y_v>x_{j_q+1}\ge x_{\ell+1}$. We cannot have $y_v>x_{j_q+1}\ge x_{\ell+1}$, since $\sigma(y_v)=i_{\ell}+1<i_{\ell+1}=\sigma(x_{\ell+1})$. Hence, we must have $y_u<y_v<x_{j_q}$, which contradicts $y_u\in\beta_{j_q}$. We conclude that $j_q<\ell$. The assumption $j_q\notin\{\ell-1,\ell\}$ implies that in fact $j_q<\ell-1$. Hence, for any $y$ such that $\sigma(y)\in \textnormal{I}_{j_q}$, we have $y\in\beta_{j_q}\backslash\alpha_{\ge y_u}=\gamma_{j_q}$, because $\sigma(y)<i_{j_q+1}\le i_{\ell-1}<i_{\ell}-1=\sigma(y_u)$. It follows that $\textnormal{I}_{j_q}\subseteq \sigma(\gamma_{j_q})$.
Case (4) is analogous to case (3).
\item Case (1): For any $y$ such that $\sigma(y)\in \textnormal{I}_{\ell-1}\backslash\{i_{\ell}-1\}$, we have $y\notin\{y_u,y_v\}$ so, by Lemma \ref{lem:Ibeta}, $\textnormal{I}_{\ell-1}\backslash\{i_{\ell}-1\}\subseteq \sigma(\gamma_{\ell-1})$.
Case (2): By Lemma \ref{lem:Ibeta}, $\textnormal{I}_{\ell-1}\subseteq \sigma(\beta_{\ell-1})=\sigma(\gamma_{\ell-1})$ so $\textnormal{I}_{\ell-1} \subseteq\sigma(\gamma_{\ell-1})$.
Case (3): For any $y$ such that $\sigma(y)\in \textnormal{I}_{\ell-1}\backslash\{i_{\ell}-1\}$, we have $y\in\beta_{\ell-1}\backslash\alpha_{\ge y_u}=\gamma_{\ell-1}$, because, by the definition of $\textnormal{I}_{\ell-1}\backslash\{i_{\ell}-1\}$, $\sigma(y)<i_{\ell}-1=\sigma(y_u)$. It follows that $\textnormal{I}_{\ell-1}\backslash\{i_{\ell}-1\} \subseteq\sigma(\gamma_{\ell-1})$.
Case (4) is analogous to case (3).
\item The argument is analogous to (ii).
\end{enumerate}
\end{proof}
By \eqref{eq:linspan} and Claim \ref{cl:d1}, $\R^{\gamma_J}\subseteq\mathop{\mathrm{lin}}\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)$, so $\dim\left(\sum_{K\in\mathcal K'}F(K,e_{uv})\right)\ge |\gamma_J|$. By Claim \ref{cl:d2}, using the fact that $\{\textnormal{I}_{j_q}\}_{j_q\in J\backslash\{\ell-1,\ell\}}, \textnormal{I}_{\ell-1},\textnormal{I}_{\ell}$ are disjoint, we have
\[
|\gamma_J|\ge \sum_{j_q\in J}[ |\textnormal{I}_{j_q}|-1_{j_q=\ell-1}-1_{j_q=\ell}]=|\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}\ge |\mathcalK'|,
\]
which completes the proof.\\
\item Fix $ r_{\max}+1\le m\le\ell-1$ and consider $\sigma\in\mathcal N_=$ such that $\sigma(y_j)=i_m+2$ where $j$ is such that $y_j\in\alpha_{>x_m}$. By Corollary \ref{cor:splitequiv}, $ \sigma(y_j)=i_m+2\le i_{m+1}=\sigma(x_{m+1})$, and since $\sigma(y_j)\neq \sigma(x_{m+1})$ (as $y_j\neq x_{m+1}$), we get that $ \sigma(y_j)=i_m+2<i_m+3\le \sigma(x_{m+1})=i_{m+1}$. It follows that $i_m+1<i_{m+1}-1$, so $\sigma(y_j)\in \textnormal{I}_m$.
For $j_q\in J$, let $\gamma_{j_q}$ be as in part (a), and note that an analogous argument yields
\[
\dim\left(\sum_{K\in\mathcal K'}F(K,-e_j)\right)=|\gamma_J|,
\]
and
\begin{claim}
\label{cl:e}
$~$
\begin{enumerate}[(i)]
\item For $j_q\neq m$, $\textnormal{I}_{j_q} \subseteq\sigma(\gamma_{j_q})$.
\item For $j_q= m$, $\textnormal{I}_{m}\backslash\{i_m+1,i_m+2\} \subseteq\sigma(\gamma_m)$.
\end{enumerate}
\end{claim}
In order to complete the proof we distinguish between two cases: $m\notin J$ and $m\in J$. The proof of the case $m\notin J$ is the same as in part (a). Suppose that $m\in J$ and consider the following cases:
\begin{itemize}
\item $\ell-1,\ell\in J$: The proof is complete since $|\mathcal K'|\le |\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}=|\textnormal{I}_J|-2$, and since Claim \ref{cl:e} yields $|\gamma_J|\ge |\textnormal{I}_J|-2$.\\
\item $\ell-1\in J$ and $\ell\notin J$: Since $\ell\notin J$, there is an index $j_b$ such that $j_b=\ell-1$ and $j_{b+1}>\ell$, and note that $(j_b,j_{b+1})$ is a splitting pair. Note that since $m\le \ell-1$, and $m\in J$, we must have $m\le j_b$. By Lemma \ref{lem:splitequiv}, there exists $y^{\sigma}\in \beta_{j_b}\cup \beta_{j_{b+1}}$ such that $\sigma(y^{\sigma})\in \textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$. Suppose $y^{\sigma}\in \beta_{j_b}$; the proof for the case $y^{\sigma}\in \beta_{j_{b+1}}$ is analogous. Since $\sigma(y^{\sigma})>i_{j_b+1}\ge i_{m+1}> \sigma(y_j)$, we get $y^{\sigma}\in \beta_{j_b}\backslash\alpha_{\le y_j}\subseteq \gamma_{j_b}\subseteq \gamma_J$. Hence,
\[
(\textnormal{I}_J\backslash\{ i_m+1,i_m+2\})\cup\{\sigma(y^{\sigma})\}\subseteq\sigma(\gamma_J).
\]
Since $\sigma(y^{\sigma})\in \textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$, we have $\sigma(y^{\sigma})\notin \textnormal{I}_J$ (because $j_b=\ell-1$ and $\ell\notin J$ so the indices $\{j_b+1,\ldots,j_{b+1}-1\}=\llbracket j_b+1,j_{b+1}-1\rrbracket$ are not in $J$), so we get that $|\gamma_J|\ge |\textnormal{I}_J|-1=|\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}\ge |\mathcal K'|$. \\
\item $\ell-1\notin J$ and $\ell\in J$: The proof is analogous to the case $\ell-1\in J$ and $\ell\notin J$.\\
\item $\ell-1,\ell\notin J$: Since $\ell-1,\ell\notin J$, we can choose $b$ to be an index such that $j_b<\ell-1<\ell<j_{b+1}$, or the largest index such that $j_b<\ell-1<\ell$, and note that $(j_b,j_{b+1})$ is an $\ell$-splitting pair. Note that since $m\le \ell-1$, and $m\in J$, we must have $m\le j_b$. Consider the collection
\[
\mathcal K'':=(K_0,\ldots, K_{j_b},K_{j_{b+1}},\ldots,K_k)
\]
and note that, by Assumption \ref{ass:crit}, $\mathcal K''$ is critical. We claim that $\mathcal K''$ is in fact supercritical. Indeed, if $\mathcal K''$ is sharp-critical, then $j_b\le r_{\max}$. But $j_b\ge m>r_{\max}$, so we get a contradiction. Since $\mathcal K''$ is supercritical, and since $(j_b,j_{b+1})$ is an $\ell$-splitting pair, Corollary \ref{cor:Ibetastrong} provides two distinct $y^{\sigma},z^{\sigma}\in\beta_{j_b}\cup\beta_{j_{b+1}}$, with $\sigma(y^{\sigma}), \sigma(z^{\sigma})\in \textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$, from which it follows that
\[
\textnormal{I}_{\llbracket 0,j_b\rrbracket\cup \llbracket j_{b+1},k\rrbracket}\cup\{\sigma(y^{\sigma}),\sigma(z^{\sigma})\}\subseteq \sigma(\beta_{\llbracket 0,j_b\rrbracket\cup \llbracket j_{b+1},k\rrbracket}).
\]
Suppose that $y^{\sigma}\in \beta_{j_b}$; the case $y^{\sigma}\in \beta_{j_{b+1}}$ is analogous. Since $m\le j_b$, $\sigma(y^{\sigma})>i_{j_b+1}\ge i_{m+1}> \sigma(y_j)$, so we can conclude that $y^{\sigma}\in\beta_{j_b}\backslash\alpha_{\le y_j}\subseteq \gamma_{j_b}\subseteq\gamma_J$. An analogous argument shows that $z^{\sigma}\in \gamma_J$. By Claim \ref{cl:e}, it follows that
\[
(\textnormal{I}_J\backslash\{ i_m+1,i_m+2\})\cup\{\sigma(y^{\sigma}),\sigma(z^{\sigma})\}\subseteq\sigma(\gamma_J).
\]
Since $\sigma(y^{\sigma}),\sigma(z^{\sigma})\in \textnormal{I}_{\llbracket j_b+1,j_{b+1}-1\rrbracket}$, we have $\sigma(y^{\sigma}),\sigma(z^{\sigma})\notin \textnormal{I}_J$ (because $b$ satisfies $j_b<\ell-1<\ell<j_{b+1}$, or the maximal $j_b<\ell-1$, so the indices $\{j_b+1,\ldots,j_{b+1}-1\}=\llbracket j_b+1,j_{b+1}-1\rrbracket$ are not in $J$). On the other hand, because $m\in J$ and $i_m+2<i_{m+1}$, we have $i_m+1,i_m+2\in \textnormal{I}_J$. It follows that $|(\textnormal{I}_J\backslash\{ i_m+1,i_m+2\})\cup\{\sigma(y^{\sigma}),\sigma(z^{\sigma})\}|=|\textnormal{I}_J|$, and hence, $|\gamma_J|\ge |\textnormal{I}_J|\ge |\mathcal K'| $.
\end{itemize}
$~$\\
\item The proof is analogous to part (e).\\
\item Consider $\sigma\in\mathcal N_+$ such that $\sigma(y_j)=i_{\ell-1}+2$ where $j$ is such that $y_j\in\alpha_{>x_{\ell-1}}$. By Corollary \ref{cor:splitequiv}, $\sigma(x_{\ell-1})<i_{\ell-1}+2=\sigma(y_j)<i_{\ell}+1=\sigma(x_{\ell})$, so we conclude that $y_j\in\beta_{\ell-1}$. For $j_q\in J$ let $\gamma_{j_q}$ be as in part (a), and note that an analogous argument yields $\dim\left(\sum_{K\in\mathcal K'}F(K,-e_j)\right)=|\gamma_J|$. We start with the analogue of Claim \ref{cl:d2}.
\begin{claim}
\label{cl:g}
$~$
\begin{enumerate}[(i)]
\item For $j_q\notin\{\ell-1,\ell\}$, $\textnormal{I}_{j_q} \subseteq\sigma(\gamma_{j_q})$.
\item For $j_q=\ell-1$, $(\textnormal{I}_{\ell-1}\cup\{i_{\ell}\})\backslash\{i_{\ell-1}+1,i_{\ell-1}+2\} \subseteq\sigma(\gamma_{\ell-1})$.
\item For $j_q=\ell$, $\textnormal{I}_{\ell}\backslash\{i_{\ell}+1\} \subseteq\sigma(\gamma_{\ell})$.
\end{enumerate}
\end{claim}
\begin{proof}
There are two cases to consider: (1) $y_j\notin\beta_{j_q}$ and (2) $y_j\in\beta_{j_q}$.
\begin{enumerate}[(i)]
\item Case (1): By Lemma \ref{lem:Ibeta}, $\textnormal{I}_{j_q}\subseteq \sigma(\beta_{j_q})=\sigma(\gamma_{j_q})$.
Case (2): First we note that $j_q\ge \ell-1$ since, otherwise, $y_j>x_{\ell-1}\ge x_{j_q+1}$ which contradicts $y_j\in\beta_{j_q}$. Since $j_q\notin\{\ell-1,\ell\}$, it follows that in fact $\ell<j_q$. Hence, for any $y$ such that $\sigma(y)\in \textnormal{I}_{j_q}$, we have $\sigma(y)>\sigma(x_{j_q})\ge \sigma(x_{\ell+1})=i_{\ell+1}>i_{\ell-1}+2=\sigma(y_j)$, so that $y\notin\alpha_{\le y_j}$. It follows that $y\in \gamma_{j_q}$, so we conclude $\textnormal{I}_{j_q} \subseteq\sigma(\gamma_{j_q})$.
\item Case (1) cannot occur since we have shown that $y_j\in\beta_{\ell-1}$.
Case (2): Every $y$ such that $\sigma(y)\in(\textnormal{I}_{\ell-1}\cup\{i_{\ell}\})\backslash\{i_{\ell-1}+1,i_{\ell-1}+2\}$ satisfies $\sigma(x_{\ell-1})<\sigma(y)<\sigma(x_{\ell})$, so $y\in\beta_{\ell-1}$. Further, $\sigma(y)>i_{\ell-1}+2=\sigma(y_j)$, so $y\notin\alpha_{\le y_j}$. It follows that $y\in \gamma_{\ell-1}$, so we conclude $(\textnormal{I}_{\ell-1}\cup\{i_{\ell}\})\backslash\{i_{\ell-1}+1,i_{\ell-1}+2\} \subseteq\sigma(\gamma_{\ell-1})$.
\item Case (1): Every $y$ such that $\sigma(y)\in \textnormal{I}_{\ell}\backslash\{i_{\ell}+1\}$ satisfies $\sigma(x_{\ell})=i_{\ell}+1<\sigma(y)<i_{\ell+1}=\sigma(x_{\ell+1})$, so $y\in\beta_{\ell}=\gamma_{\ell}$. We conclude that $\textnormal{I}_{\ell}\backslash\{i_{\ell}+1\} \subseteq\sigma(\gamma_{\ell})$.
Case (2): Every $y$ such that $\sigma(y)\in \textnormal{I}_{\ell}\backslash\{i_{\ell}+1\}$ satisfies $\sigma(x_{\ell})=i_{\ell}+1<\sigma(y)<i_{\ell+1}=\sigma(x_{\ell+1})$, so $y\in\beta_{\ell}$. Further, $\sigma(y)>i_{\ell}+1>i_{\ell-1}+2=\sigma(y_j)$, so $y\notin\alpha_{\le y_j}$. It follows that $y\in \gamma_{\ell}$, so we conclude $ \textnormal{I}_{\ell}\backslash\{i_{\ell}+1\} \subseteq\sigma(\gamma_{\ell})$.
\end{enumerate}
\end{proof}
By \eqref{eq:linspan} $\mathop{\mathrm{lin}}\left(\sum_{K\in\mathcal K'}F(K,-e_j)\right)=\R^{\gamma_J}$, so $\dim\left(\sum_{K\in\mathcal K'}F(K,-e_j)\right)= |\gamma_J|$. By Claim \ref{cl:g}, using the fact that $\{\textnormal{I}_{j_q}\}_{j_q\in J\backslash\{\ell-1,\ell\}}, \textnormal{I}_{\ell-1},\textnormal{I}_{\ell}$ are disjoint, it suffices to show that $|(\textnormal{I}_{\ell-1}\cup\{i_{\ell}\})\backslash\{i_{\ell-1}+1,i_{\ell-1}+2\}|=|\textnormal{I}_{\ell-1}|-1$, and that $|\textnormal{I}_{\ell}\backslash\{i_{\ell}+1\}|=|\textnormal{I}_{\ell}|-1$, since then
\[
|\gamma_J|\ge \sum_{j_q\in J}[ |\textnormal{I}_{j_q}|-1_{j_q=\ell-1}-1_{j_q=\ell}]=|\textnormal{I}_J|-1_{\ell-1\in J}-1_{\ell\in J}\ge |\mathcalK'|,
\]
which completes the proof. To see that $|(\textnormal{I}_{\ell-1}\cup\{i_{\ell}\})\backslash\{i_{\ell-1}+1,i_{\ell-1}+2\}|=|\textnormal{I}_{\ell-1}|-1$, we note that $|\textnormal{I}_{\ell-1}\cup\{i_{\ell}\}|=|\textnormal{I}_{\ell-1}|+1$, and that $i_{\ell-1}+1,i_{\ell-1}+2\in \textnormal{I}_{\ell-1}\cup\{i_{\ell}\}$, because $i_{\ell-1}+1<i_{\ell-1}+2\le i_{\ell}$, by Corollary \ref{cor:splitequiv}. Hence, $|(\textnormal{I}_{\ell-1}\cup\{i_{\ell}\})\backslash\{i_{\ell-1}+1,i_{\ell-1}+2\}|=(|\textnormal{I}_{\ell-1}|+1)-2=|\textnormal{I}_{\ell-1}|-1$. Finally, it is clear that $|\textnormal{I}_{\ell}\backslash\{i_{\ell}+1\}|=|\textnormal{I}_{\ell}|-1$, since $i_{\ell}+1\in \textnormal{I}_{\ell}$. \\
\item The proof is analogous to part (g).
\end{enumerate}
\end{proof}
\section{Supercritical posets}
\label{sec:supercrit}
In this section we complete the characterization of the extremals of Stanley's inequalities for supercritical posets. The following result, together with Proposition \ref{prop:suff}, Lemma \ref{lem:suff}, Proposition \ref{prop:clposet}, and Proposition \ref{prop:critequiv}, completes the proof of Theorem \ref{thm:supcrit}.
\begin{theorem}
\label{thm:supcritsec}
Suppose that $\mathcalK$ is supercritical and that $|\mathcal N_{=}|^2=|\mathcal N_{-}||\mathcal N_{+}|$. Then,
\[|\mathcal N_=(\nsim,\sim)|=|\mathcal N_=(\sim,\nsim)|=|\mathcal N_=(\sim,\sim)|=0.
\]
\end{theorem}
In order to prove Theorem \ref{thm:supcritsec}, we will invoke Theorem \ref{thm:SvHComb} and use the extreme normal directions found in Proposition \ref{prop:dir}(a--d). Theorem \ref{thm:SvHComb} tells us that there exist $a\ge 0$ and $\mathrm v\in \R^{n-k}$ such that
\begin{align}
\label{eq:exth}
h_{K_{\ell-1}}(\mathrm u)=h_{aK_{\ell}+\mathrm v}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u.
\end{align}
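In the computations below, the right-hand side of \eqref{eq:exth} is evaluated at coordinate directions, so we record the standard expansion of the support function of a scaled and translated body:
\[
h_{aK_{\ell}+\mathrm v}(\mathrm u)=a\,h_{K_{\ell}}(\mathrm u)+\langle \mathrm v,\mathrm u\rangle .
\]
In particular, $h_{aK_{\ell}+\mathrm v}(\pm e_j)=a\,h_{K_{\ell}}(\pm e_j)\pm \mathrm v_j$, and for the directions $e_{uv}$ used below, $\langle \mathrm v,e_{uv}\rangle=\tfrac{1}{\sqrt{2}}(\mathrm v_u-\mathrm v_v)$.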
The following results derive constraints from \eqref{eq:exth} on the allowed $a$ and $\mathrm v$. We start with $\mathrm v$.
\begin{proposition}
$~$
\label{prop:av}
\begin{enumerate}[(a)]
\item For each fixed $0\le m\le\ell-1$: $\mathrm v_j=0$ for any $j$ such that $y_j\in\alpha_{>x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m+1$.\\
\item For each fixed $\ell+1\le m\le k+1$: $\mathrm v_j=1-a$ for any $j$ such that $y_j\in\alpha_{<x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m-1$.\\
\item $\mathrm v_u=\mathrm v_v$ for any $u,v$ such that $y_u<y_v$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)+1=\sigma(y_v)$.\\
\item $\mathrm v_u=\mathrm v_v$ for any $u,v$ such that $y_u<y_v$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)=i_{\ell}-1$ and $\sigma(y_v)=i_{\ell}+1$.
\end{enumerate}
\end{proposition}
\begin{proof}
$~$
\begin{enumerate}[(a)]
\item By Proposition \ref{prop:dir}(a), $-e_j$ is a $(B,\mathcal K)$-extreme normal direction, so by \eqref{eq:exth}, $h_{K_{\ell-1}}(-e_j)=ah_{K_{\ell}}(-e_j)-\mathrm v_j$. Since $\sigma(y_j)=i_m+1$, and $m\le \ell-1$, we have $\sigma(y_j)=i_m+1\le i_{\ell-1}+1<i_{\ell},i_{\ell+1}$, so $y_j\notin\alpha_{>x_{\ell}}\cup \alpha_{>x_{\ell+1}}$. Hence, it follows from \eqref{eq:Kiordpoly} that $h_{K_{\ell-1}}(-e_j)=h_{K_{\ell}}(-e_j)=0$. We conclude that $\mathrm v_j=0$. \\
\item By Proposition \ref{prop:dir}(b), $e_j$ is a $(B,\mathcal K)$-extreme normal direction, so by \eqref{eq:exth}, $h_{K_{\ell-1}}(e_j)=ah_{K_{\ell}}(e_j)+\mathrm v_j$. Since $\sigma(y_j)=i_m-1$, and $m\ge \ell+1$, we have $\sigma(y_j)=i_m-1\ge i_{\ell+1}-1>i_{\ell},i_{\ell-1}$, so $y_j\notin\alpha_{<x_{\ell-1}}\cup \alpha_{<x_{\ell}}$. Hence, it follows from \eqref{eq:Kiordpoly} that $h_{K_{\ell-1}}(e_j)=1$ and $ah_{K_{\ell}}(e_j)+\mathrm v_j=a+\mathrm v_j$. We conclude that $\mathrm v_j=1-a$. \\
\item By Proposition \ref{prop:dir}(c), $e_{uv}$ is a $(B,\mathcal K)$-extreme normal direction, so by \eqref{eq:exth}, $h_{K_{\ell-1}}(e_{uv})=ah_{K_{\ell}}(e_{uv})+\frac{1}{\sqrt{2}} (\mathrm v_u-\mathrm v_v)$. We will show that $h_{K_{\ell-1}}(e_{uv})=h_{K_{\ell}}(e_{uv})=0$, from which we can conclude $\mathrm v_u=\mathrm v_v$. We will show that $h_{K_{\ell-1}}(e_{uv})=0$; the proof of $h_{K_{\ell}}(e_{uv})=0$ is analogous. We distinguish between the following cases:
Case (1): $y_u,y_v\in\beta_{\ell-1}$. By \eqref{eq:Kiordpoly}, $h_{K_{\ell-1}}(e_{uv})=0$ since $t_u\le t_v$ for $t\in O_{\beta_{\ell-1}}$, and equality is attained with $t=0$.
Case (2): $y_u\in\beta_{\ell-1},y_v\notin\beta_{\ell-1}$, or $y_u\notin\beta_{\ell-1},y_v\in\beta_{\ell-1}$. See the proof of Claim \ref{cl:c1}.
Case (3): $y_u,y_v\notin\beta_{\ell-1}$. Since there exists $\sigma\in\mathcal N_=$ with $\sigma(y_u)+1=\sigma(y_v)$, the assumption $y_u,y_v\notin\beta_{\ell-1}$ implies that either $y_u,y_v<x_{\ell-1}$, or $y_u,y_v>x_{\ell}$. Hence, either $t_u=t_v=1$, or $t_u=t_v=0$ for any $t\in K_{\ell-1}$, so, in particular, $h_{K_{\ell-1}}(e_{uv})=0$.\\
\item The proof is analogous to part (c), where we note that $y_u\notin\beta_{\ell-1}$ cannot occur.
\end{enumerate}
\end{proof}
While Proposition \ref{prop:av}(a--b) took care of elements neighboring the $x_m$'s, the next result takes care of elements that are at the bottom (resp. the top) of the poset.
\begin{lemma}
\label{lem:minmax}
For any $y_j\in\alpha$: If $m^{=}_{\min}(y_j)<i_{\ell}$ then $\mathrm v_j=0$, and if $m^{=}_{\max}(y_j)>i_{\ell}$ then $\mathrm v_j=1-a$.
\end{lemma}
\begin{proof}
We prove that $m^{=}_{\max}(y_j)>i_{\ell}\Rightarrow \mathrm v_j=1-a$; the proof of $m^{=}_{\min}(y_j)<i_{\ell}\Rightarrow \mathrm v_j=0$ is analogous.
Set $y_{j_0}:=y_j$ and construct the sequence $y_{j_0}<y_{j_1}<\cdots <y_{j_p}$, for some $p<\infty$, iteratively, according to the algorithm below. The sequence will be constructed so that $y_{j_i}\in\alpha$ for every $i\in\llbracket 0,p\rrbracket$, $\mathrm v_{j_i}=\mathrm v_{j_{i+1}}$ for all $i\in \llbracket 0,p-1\rrbracket$, and $\mathrm v_{j_p}=1-a$. Clearly, it will then follow that $\mathrm v_j=\mathrm v_{j_0}=1-a$, completing the proof.
Assume that the sequence $y_{j_0}<y_{j_1}<\cdots <y_{j_i}$ has been constructed. Set $M:=m^{=}_{\max}(y_{j_i})$, and note that $i_{\ell}<m^{=}_{\max}(y_{j_0})\le M$. Consider the following two cases:
\begin{itemize}
\item $M\neq i_m-1$ for every $\ell<m$: Choose $\sigma\in\mathcal N_=$ such that $\sigma(y_{j_i})=M$ (such a $\sigma$ must exist by the definition of $M$) and set $y_{j_{i+1}}:=\sigma^{-1}(M+1)$. We first show that $M+1\neq i_m$ for any $m\in\llbracket 0,k\rrbracket$. Indeed, by assumption $M+1\neq i_m$ for every $\ell<m$, and if $m\le \ell$, then $i_m\le i_{\ell}<M+1$. It follows that $y_{j_{i+1}}\in\alpha$. Next we show that $y_{j_i}<y_{j_{i+1}}$. Indeed, otherwise, by the definition of $M$, $y_{j_i}$ and $y_{j_{i+1}}$ must be incomparable, so we can swap the positions of $y_{j_i}$ and $y_{j_{i+1}}$ in $\sigma$ to get $\sigma'\in\mathcal N_=$ such that $\sigma'(y_{j_i})=M+1$, which contradicts the maximality of $M$. We conclude that $y_{j_i}<y_{j_{i+1}}$. Finally, by Proposition \ref{prop:av}(c), $\mathrm v_{j_i}=\mathrm v_{j_{i+1}}$.\\
\item $M= i_m-1$ for some $\ell<m$: In this case, the sequence will be terminated with $p:=i$. Note that Corollary \ref{cor:splitequiv} implies that $y_{j_i}\in\alpha$, since $M=i_m-1$. We will show that $\sigma(y_{j_i})<\sigma(x_m)$ for all $\sigma\in\cup_{\circ\in\{-,=,+\}}\mathcal N_{\circ}$. Then, by Assumption \ref{ass:cl}, it follows that $y_{j_i}<x_m$ so, by Proposition \ref{prop:av}(b), $\mathrm v_{j_i}=1-a$. To show that $\sigma(y_{j_i})<\sigma(x_m)$ for all $\sigma\in\cup_{\circ\in\{-,=,+\}}\mathcal N_{\circ}$, suppose for contradiction otherwise, which means that there exists $\sigma\in\mathcal N_{\circ}$, for some $\circ\in\{-,=,+\}$, such that $\sigma(y_{j_i})>\sigma(x_m)=i_m$. Set $q:=\sigma(y_{j_i})$. We will show that Lemma \ref{lem:range} can be applied with $y_{j_i}$, $=$, and $q$, to yield $\sigma'\in\mathcal N_=$ such that $\sigma'(y_{j_i})=q$, contradicting the maximality of $M$ (since $q>i_m>i_m-1=M$).
To apply Lemma \ref{lem:range} to $y_{j_i}$, $=$, and $q$, we need to check that all of the conditions of the lemma are satisfied. Applying the lemma to $y_{j_i}$, $\circ$, and $q$, we get $q\le u_{\circ}(y_{j_i})$, and by Lemma \ref{lem:UL} (as $i_{\min}(y_{j_i})>m>\ell$), we get $q\le u_{\circ}(y_{j_i})=u_{=}(y_{j_i})$. On the other hand, by Corollary \ref{cor:range}, $l_{=}(y_{j_i})\le m^{=}_{\max}(y_{j_i})=M=i_m-1<q$. We conclude that the condition $q\in\llbracket l_{=}(y_{j_i}),u_{=}(y_{j_i})\rrbracket$ holds. Finally, we show that $q\neq i_r+1_{=}=i_r$ for any $r\in\llbracket 1,k\rrbracket$. Indeed, if $q=i_r$ for some $r\in\llbracket 1,k\rrbracket$, then $i_r=q>i_m$, which implies $\ell<m<r$, and hence $\sigma(x_r)=i_r$ as $r\neq \ell$. It follows that $\sigma(y_{j_i})=q=\sigma(x_r)$, contradicting $y_{j_i}\in\alpha$.
\end{itemize}
\end{proof}
Next we move to $a$.
\begin{lemma}
\label{lem:a=1}
$a=1$.
\end{lemma}
\begin{proof}
Fix $\sigma\in \mathcal N_{=}$ and set $y_u^{\sigma}:=\sigma^{-1}(i_{\ell}-1)$, $y_v^{\sigma}:=\sigma^{-1}(i_{\ell}+1)$. There are a few cases to check:\\
\begin{itemize}
\item $y_u^{\sigma}\nsim x_{\ell}$: If $m_{\max}^=(y_u^{\sigma})>i_{\ell}$, then, since $m_{\min}^=(y_u^{\sigma})\le \sigma (y_u^{\sigma})<i_{\ell}$, Lemma \ref{lem:minmax} implies that $\mathrm v_u=0$ and $\mathrm v_u=1-a$ so $a=1$.
Suppose then that $m_{\max}^=(y_u^{\sigma})<i_{\ell}$. We claim that $u_=(y_u^{\sigma})\le i_{\ell}$. Indeed, otherwise, $u_=(y_u^{\sigma})\ge i_{\ell}+1\ge \sigma(y_u^{\sigma})\ge l_=(y_u^{\sigma})$. Hence, since $i_{\ell}+1\neq \sigma(x_m)$ for any $m\in\llbracket 1,k\rrbracket$, Lemma \ref{lem:range} implies that there exists $\sigma'\in\mathcal N_=$ satisfying $\sigma'(y_u^{\sigma})=i_{\ell}+1$, which contradicts the assumption $m_{\max}^=(y_u^{\sigma})<i_{\ell}$. Now, since $u_=(y_u^{\sigma})\le i_{\ell}$, there must exist $b$, with $y_u^{\sigma}<x_b$, such that $i_b^{\circ}-|\bar{\alpha}_{\ge y_{u}^{\sigma},<x_b}|\le i_{\ell}$. It follows that $|\bar{\alpha}_{\ge y_{u}^{\sigma},<x_b}|\ge i_b-i_{\ell}$, where we used $i_b^{=}=i_b$. Fix $z\in \bar{\alpha}_{>y_{u}^{\sigma},<x_b}$, and note that $z\neq x_{\ell}$, since otherwise $x_{\ell}>y_{u}^{\sigma}$, which contradicts the assumption $y_u^{\sigma}\nsim x_{\ell}$. In particular, since $\sigma(x_{\ell})=i_{\ell}$, we have $\sigma(z)\neq i_{\ell}$. Since $i_{\ell}-1=\sigma(y_{u}^{\sigma})<\sigma(z)<\sigma(x_b)=i_b$, we conclude that $\sigma(z)\in \llbracket i_{\ell}+1,i_b-1\rrbracket$. The size of $ \llbracket i_{\ell}+1,i_b-1\rrbracket$ is $i_b-i_{\ell}-1$, so combining $|\bar{\alpha}_{> y_{u}^{\sigma},<x_b}|\ge i_b-i_{\ell}-1$, with $\sigma(z)\in \llbracket i_{\ell}+1,i_b-1\rrbracket$ for every $z\in \bar{\alpha}_{>y_{u}^{\sigma},<x_b}$, shows that $\sigma(\bar{\alpha}_{>y_{u}^{\sigma},<x_b})=\llbracket i_{\ell}+1,i_b-1\rrbracket$. In particular, since $\sigma(y_v^{\sigma})=i_{\ell}+1$, we get that $y_v^{\sigma}\in \bar{\alpha}_{>y_{u}^{\sigma},<x_b}$, so $y_{u}^{\sigma}<y_{v}^{\sigma}$. It follows from Proposition \ref{prop:av}(d) that $\mathrm v_u=\mathrm v_v$. Since $m_{\min}^=(y_u^{\sigma})<i_{\ell}$, and $m_{\max}^=(y_v^{\sigma})>i_{\ell}$, Lemma \ref{lem:minmax} yields $0=\mathrm v_u=\mathrm v_v=1-a$. We conclude that $a=1$.\\
\item $y_v^{\sigma}\nsim x_{\ell}$: Analogous to the case $y_u^{\sigma}\nsim x_{\ell}$.\\
\item $y_u^{\sigma}<x_{\ell}$ and $x_{\ell}<y_v^{\sigma}$: By Proposition \ref{prop:av}(d), $\mathrm v_u=\mathrm v_v$. Since $y_u^{\sigma}<x_{\ell}$, we have $m_{\min}^=(y_u^{\sigma})<i_{\ell}$ so, by Lemma \ref{lem:minmax}, $\mathrm v_u=0$. Similarly, since $x_{\ell}<y_v^{\sigma}$, we have $m_{\max}^=(y_v^{\sigma})>i_{\ell}$ so, by Lemma \ref{lem:minmax}, $\mathrm v_v=1-a$. We conclude that $0=\mathrm v_u=\mathrm v_v=1-a$, so $a=1$.
\end{itemize}
\end{proof}
We are now ready to prove Theorem \ref{thm:supcritsec}.
\begin{proof}[Proof of Theorem \ref{thm:supcritsec}]
We will show that
\begin{align}
\label{eq:supcritequiv}
\forall~ \sigma\in\mathcal N_=:\quad \sigma^{-1}(i_{\ell}-1)\nsim x_{\ell}\quad\text{and}\quad \sigma^{-1}(i_{\ell}+1)\nsim x_{\ell},
\end{align}
which is equivalent to $|\mathcal N_=(\nsim,\sim)|=|\mathcal N_=(\sim,\nsim)|=|\mathcal N_=(\sim,\sim)|=0$.
Let $y_j\in\alpha$ be any element such that there exists $\sigma\in\mathcal N_=$ with $\sigma(y_j)=i_{\ell}+1$; the proof for elements $y_j\in\alpha$ with $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_{\ell}-1$ is analogous. Since $m^{=}_{\max}(y_j)>i_{\ell}$, Lemma \ref{lem:minmax} yields $\mathrm v_j=1-a=0$, where the last equality follows from Lemma \ref{lem:a=1}. Assume for contradiction that $x_{\ell}$ is comparable to $y_j$, which, by the assumption $\sigma(y_j)=i_{\ell}+1$, means that $x_{\ell}<y_j$. By Proposition \ref{prop:dir}(a), $-e_j$ is a $(B,\mathcal K)$-extreme normal direction so, by \eqref{eq:exth},
$h_{K_{\ell-1}}(-e_j)=h_{K_{\ell}}(-e_j)$. Since $x_{\ell}<y_j$, we have $h_{K_{\ell-1}}(-e_j)=-1$. On the other hand, $i_{\ell}<\sigma(y_j)=i_{\ell}+1<i_{\ell+1}$, so $y_j\in\beta_{\ell}$. By \eqref{eq:Kiordpoly}, $h_{K_{\ell}}(-e_j)=0\neq -1=h_{K_{\ell-1}}(-e_j)$, so we have arrived at the desired contradiction.
\end{proof}
\section{Critical posets}
\label{sec:crit}
In this section we complete the characterization of the extremals of Stanley's inequalities for critical posets. We will assume that $\mathcal K$ is sharp-critical since, otherwise, we reduce back to the supercritical setting. We note that the assumption that $\mathcal K$ is sharp-critical implies, by Proposition \ref{prop:max_notions}, that the maximal sharp-critical collection $\mathcal K_{\max}$, with its associated splitting pair $(r_{\max},s_{\min})$, exists. The following result, together with Proposition \ref{prop:suff}, Lemma \ref{lem:suff}, Proposition \ref{prop:clposet}, and Proposition \ref{prop:critequiv}, completes the proof of Theorem \ref{thm:crit}.
\begin{theorem}
\label{thm:critsec}
Suppose that $\mathcalK$ is sharp-critical and that $|\mathcal N_{=}|^2=|\mathcal N_{-}||\mathcal N_{+}|$. Then,
\[
|\mathcal N_-(\sim,\sim)|=|\mathcal N_+(\sim,\sim)|=0.
\]
\end{theorem}
\subsection{The critical subspace}
We now enter critical territory, so the equation
\[
h_{K_{\ell-1}}(\mathrm u)=h_{aK_{\ell}+\mathrm v}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u,
\]
which held for supercritical posets, is no longer valid. Instead, we only have
\[
h_{K_{\ell-1}+\sum_{j=1}^dQ_j}(\mathrm u)=h_{aK_{\ell}+\mathrm v+\sum_{j=1}^dP_j}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u,
\]
where $(P_1,Q_1),\ldots,(P_d,Q_d)$ are $\mathcal K$-degenerate pairs. Our approach to this problem is to find a subspace $E^{\perp}$, on which we do in fact have $h_{K_{\ell-1}}(\mathrm u)=h_{aK_{\ell}+\mathrm v}(\mathrm u)$ for all $(B,\mathcal K)$-extreme normal directions $\mathrm u\in E^{\perp}$. Since we now require that the $(B,\mathcal K)$-extreme normal directions are contained in $E^{\perp}$, we will need more of them in order to derive enough constraints to characterize the extremals of critical posets. These extreme normal directions are the ones given in Proposition \ref{prop:dir}(e--h). We define the subspace $E^{\perp}$ by
\begin{align}
\label{eq:Eperp}
E^{\perp}:=\R^{\alpha\backslash \beta_{\max}},
\end{align}
where we recall \eqref{eq:Kmax}. We call the subspace $E:=\R^{\beta_{\max}}$ the \emph{critical subspace} and note that, by Lemma \ref{lem:spancollec}, $\mathop{\mathrm{lin}}(\mathcal K_{\max})=\R^{\beta_{\max}}=E$. The following result explains the connection between $\mathcal K$-degenerate pairs and $E$.
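In coordinates, and purely as a restatement of \eqref{eq:Eperp}, writing $e_j$ for the standard basis vector of $\R^{n-k}$ corresponding to $y_j\in\alpha$, these are complementary coordinate subspaces:
\[
E=\R^{\beta_{\max}}=\mathop{\mathrm{span}}\{e_j : y_j\in\beta_{\max}\},\qquad E^{\perp}=\R^{\alpha\backslash\beta_{\max}}=\mathop{\mathrm{span}}\{e_j : y_j\in\alpha\backslash\beta_{\max}\},
\]
so that $\R^{n-k}=E\oplus E^{\perp}$ is an orthogonal decomposition, and a normal direction lies in $E^{\perp}$ precisely when it is supported on the coordinates indexed by $\alpha\backslash\beta_{\max}$.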
\begin{lemma}
\label{lem:deg}
Let $(P,Q)$ be a $\mathcalK$-degenerate pair. Then, $\mathop{\mathrm{lin}}(P),\mathop{\mathrm{lin}}(Q)\subseteq E$.
\end{lemma}
\begin{proof}
The result follows by \cite[Lemma 9.6]{SvH20} and Proposition \ref{prop:max_notions}.
\end{proof}
When we restrict to the subspace $E^{\perp}$, we are in the supercritical case in the following sense:
\begin{lemma}
\label{lem:supcritE}
There exist $a\ge 0$ and $\mathrm v\in \R^{n-k}$ such that
\[
h_{K_{\ell-1}}(\mathrm u)=h_{aK_{\ell}+\mathrm v}(\mathrm u) \quad \mbox{for all }(B,\mathcal K)\textnormal{-extreme normal directions } \mathrm u\text{ which are contained in $E^{\perp}$}.
\]
\end{lemma}
\begin{proof}
Let $\mathrm u\in E^{\perp}$ be a $(B,\mathcal K)$-extreme normal direction. By Theorem \ref{thm:SvHComb}
\[
h_{K_{\ell-1}+\sum_{j=1}^dQ_j'+\sum_{j=1}^dq_j}(\mathrm u)=h_{aK_{\ell}+\mathrm v+\sum_{j=1}^dP_j'+\sum_{j=1}^dp_j}(\mathrm u),
\]
where $(P_j,Q_j)_{j\in\llbracket 1,d\rrbracket}$ are $\mathcal K$-degenerate pairs and $P_j'=P_j-p_j$, $Q_j'=Q_j-q_j$ for some fixed $p_j\in P_j$, $q_j\in Q_j$. Hence, with $\mathrm v':=\mathrm v+\sum_{j=1}^dp_j-\sum_{j=1}^dq_j,~\tilde P:=\sum_{j=1}^dP_j'$, and $\tilde Q:=\sum_{j=1}^dQ_j'$, we have
\[
h_{K_{\ell-1}+\tilde Q}(\mathrm u)=h_{aK_{\ell}+\mathrm v'+\tilde P}(\mathrm u).
\]
Since, by Lemma \ref{lem:deg}, $\tilde P,\tilde Q\subseteq E$, and since $\mathrm u\in E^{\perp}$, we have $h_{\tilde Q}(\mathrm u)=h_{\tilde P}(\mathrm u)=0$. Relabeling $\mathrm v'\to\mathrm v$ completes the proof.
\end{proof}
\subsection{The critical extremals}
In order to prove Theorem \ref{thm:critsec} we need to prove the analogues of Proposition \ref{prop:av}, Lemma \ref{lem:minmax}, and Lemma \ref{lem:a=1}, as well as some additional results. Roughly speaking, on
\[
\alpha\backslash\beta_{\max}=\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}},
\]
we have supercritical behavior. Indeed, the proof of the following result is analogous to the proof of Proposition \ref{prop:av} once we use the full power of Proposition \ref{prop:dir} and Lemma \ref{lem:supcritE}, and restrict to $y_j,y_u,y_v\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ rather than allowing for all $y_j,y_u,y_v\in \alpha$.
\begin{proposition}
\label{prop:avcrit}
For any $y_j,y_u,y_v\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$:
\begin{enumerate}[(a)]
\item For each fixed $0\le m\le \ell-1$: $\mathrm v_j=0$ for any $j$ such that $y_j\in\alpha_{>x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m+1$.\\
\item For each fixed $\ell+1\le m\le k+1$: $\mathrm v_j=1-a$ for any $j$ such that $y_j\in\alpha_{<x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_j)=i_m-1$.\\
\item $\mathrm v_u=\mathrm v_v$ for any $u,v$ such that $y_u<y_v$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)+1=\sigma(y_v)$.\\
\item $\mathrm v_u=\mathrm v_v$ for any $u,v$ such that $y_u<y_v$ and there exists $\sigma\in\mathcal N_=$ satisfying $\sigma(y_u)=i_{\ell}-1$ and $\sigma(y_v)=i_{\ell}+1$.\\
\item For each fixed $ r_{\max}\le m\le\ell-1$: $\mathrm v_j=0$ for any $j$ such that $y_j\in\alpha_{>x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying either $\sigma(y_j)=i_m+1$ or $\sigma(y_j)=i_m+2$.\\
\item For each fixed $\ell+1\le m\le s_{\min}$: $\mathrm v_j=1-a$ for any $j$ such that $y_j\in\alpha_{<x_m}$ and there exists $\sigma\in\mathcal N_=$ satisfying either $\sigma(y_j)=i_m-1$ or $\sigma(y_j)=i_m-2$.\\
\item $\mathrm v_j=0$ for any $j$ such that $y_j\in\alpha_{>x_{\ell-1}}$ and there exists $\sigma\in\mathcal N_+$ satisfying $\sigma(y_j)=i_{\ell-1}+2$.\\
\item $\mathrm v_j=1-a$ for any $j$ such that $y_j\in\alpha_{<x_{\ell+1}}$ and there exists $\sigma\in\mathcal N_-$ satisfying $\sigma(y_j)=i_{\ell+1}-2$.\\
\end{enumerate}
\end{proposition}
Towards the proofs of the analogues of Lemma \ref{lem:minmax} and Lemma \ref{lem:a=1} we recall Corollary \ref{cor:max}, together with some of its immediate consequences.
\begin{corollary}
\label{cor:max_plus}
Fix $\circ\in \{-,=,+\}$ and $\sigma\in\mathcal N_{\circ}$. There exists a unique mixed element $y^{\sigma}_{\textnormal{crit}}$ satisfying $y^{\sigma}_{\textnormal{crit}} \in \beta_{r_{\max}}\cup\beta_{s_{\min}}$ and $\sigma(y^{\sigma}_{\textnormal{crit}})\in \llbracket i_{r_{\max}+1}, i_{s_{\min}} \rrbracket\backslash\{i_{r_{\max}+1},\ldots,i_{s_{\min}}\}$. In particular, any other element $y\neq y^{\sigma}_{\textnormal{crit}}$ satisfying $\sigma(y) \in \llbracket i_{r_{\max}+1}, i_{s_{\min}} \rrbracket$ must satisfy $y\in\bar{\alpha}_{\ge x_{r_{\max}+1}, \le x_{s_{\min}}}$. Furthermore, $y^{\sigma}_{\textnormal{crit}}$ satisfies either $y^{\sigma}_{\textnormal{crit}} \not \ge x_{r_{\max}+1}$ or $y^{\sigma}_{\textnormal{crit}} \not \le x_{s_{\min}}$. If $y^{\sigma}_{\textnormal{crit}} \not \ge x_{r_{\max}+1}$, then $y^{\sigma}_{\textnormal{crit}} \not \ge y$ for any $y\in \bar{\alpha}_{\ge x_{r_{\max}+1}, \le x_{s_{\min}}}$. Analogously, if $y^{\sigma}_{\textnormal{crit}} \not \le x_{s_{\min}}$, then $y^{\sigma}_{\textnormal{crit}} \not \le y$ for any $y \in \bar{\alpha}_{\ge x_{r_{\max}+1}, \le x_{s_{\min}}}$.
\end{corollary}
The following result is the analogue of Lemma \ref{lem:minmax} where again we restrict to $y_j\in\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ rather than allowing for all $y_j\in\alpha$.
\begin{lemma}
\label{lem:minmaxcrit}
For any $y_j\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$: If $m^{=}_{\min}(y_j)<i_{\ell}$ then $\mathrm v_j=0$, and if $m^{=}_{\max}(y_j)>i_{\ell}$ then $\mathrm v_j=1-a$.
\end{lemma}
\begin{proof}
We prove that $m^{=}_{\max}(y_j)>i_{\ell}\Rightarrow\mathrm v_j=1-a$; the proof of $m^{=}_{\min}(y_j)<i_{\ell}\Rightarrow\mathrm v_j=0$ is analogous.
Set $y_{j_0}:=y_j\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ and construct the sequence $y_{j_0}<y_{j_1}<\cdots <y_{j_p}$, for some $p<\infty$, iteratively, according to the algorithm below. The sequence will be constructed so that $y_{j_i}\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ for every $i\in\llbracket 0,p\rrbracket$, $\mathrm v_{j_i}=\mathrm v_{j_{i+1}}$ for all $i\in \llbracket 0,p-1\rrbracket$, and $\mathrm v_{j_p}=1-a$. Clearly, it will then follow that $\mathrm v_j=\mathrm v_{j_0}=1-a$, completing the proof.
Assume that the sequence $y_{j_0}<y_{j_1}<\cdots <y_{j_i}$ has been constructed. Set $M:=m^{=}_{\max}(y_{j_i})$ and note that $i_{\ell}<m^{=}_{\max}(y_{j_0})\le M<i_{s_{\min}}$. Let $b$ be the index satisfying $i_b<M<i_{b+1}$ so that $\ell\le b\le s_{\min}-1$. Consider the following two cases:\\
\begin{itemize}
\item $M\notin\{i_m-2,i_m-1\}$ for every $\ell<m$. Choose $\sigma\in\mathcal N_=$ such that $\sigma(y_{j_i})=M$ (such a $\sigma$ must exist by the definition of $M$) and set $y_r=\sigma^{-1}(M+1), y_s=\sigma^{-1}(M+2)$. Note that $i_{b+1}\notin \{M+1,M+2\}$ since $M\notin\{i_m-2,i_m-1\}$ for every $\ell<m$, so in particular, we can take $b+1=m$ (using $b+1>\ell$). Hence, we have $i_b<M,M+1,M+2<i_{b+1}$, so $M,M+1,M+2\in \llbracket i_b+1,i_{b+1}-1\rrbracket$, and hence $y_r,y_s\in \alpha$. Note that $y_{j_i}<y_r$ since otherwise their positions in $\sigma$ can be swapped to contradict the maximality of $M$. Further, $M,M+1,M+2\in \llbracket i_b+1,i_{b+1}-1\rrbracket\Rightarrow \sigma(y_r),\sigma(y_s)\in \llbracket i_b+1,i_{b+1}-1\rrbracket\subseteq \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$, where the last containment holds since $b\le s_{\min}-1$ (as shown above), and since $r_{\max}+1\le b$ (because $r_{\max}+1<\ell\le b$ as $(r_{\max},s_{\min})$ is an $\ell$-splitting pair). Corollary \ref{cor:max_plus} now yields $y_r,y_s\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cup\{y^{\sigma}_{\textnormal{crit}}\}$. We now choose $y_{j_{i+1}}$ as follows:
\begin{enumerate}[(1)]
\item If $y_r\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ set $y_{j_{i+1}}:=y_r$. Then we see that $y_{j_i}<y_{j_{i+1}}$ and that $y_{j_{i+1}}\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ so Proposition \ref{prop:avcrit}(c) yields $\mathrm v_{j_{i+1}}=\mathrm v_{j_{i}}$.
\item If $y_r=y^{\sigma}_{\textnormal{crit}}$, then $y_s \in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$. If $y^{\sigma}_{\textnormal{crit}} \not\ge x_{r_{\max}+1}$, then $y^{\sigma}_{\textnormal{crit}} \not \ge y_{j_i}$, a contradiction. Otherwise, $y^{\sigma}_{\textnormal{crit}} \not\le x_{s_{\min}}$, so $y^{\sigma}_{\textnormal{crit}} \not \le y_s$. Hence, we can swap the positions of $y_r=y^{\sigma}_{\textnormal{crit}}$ and $y_s$, which reduces to (1).
\end{enumerate}
\item $M\in\{i_m-2,i_m-1\}$ for some $\ell<m$. In this case the sequence will be terminated with $p:=i$. Arguing as in the analogous case in Lemma \ref{lem:minmax}, we get that $y_{j_i}<x_m$. Note that $m=b+1\le s_{\min}$ (the last inequality was shown above), so since $\ell+1\le m\le s_{\min}$, Proposition \ref{prop:avcrit}(f) yields $\mathrm v_{j_i}=1-a$.
\end{itemize}
\end{proof}
The following result can be viewed as a continuation of Lemma \ref{lem:minmaxcrit}. To ease the notation we will use
\begin{align}
\label{eq:I}
\textnormal{I}_j := \llbracket i_j+1,i_{j+1}-1\rrbracket\quad \text{and} \quad \textnormal{I}_S:= \cup_{j\in S} \textnormal{I}_j\quad \text{for}\quad S\subseteq \llbracket 0, k\rrbracket.
\end{align}
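For orientation, with purely illustrative values: if $i_j=3$ and $i_{j+1}=7$, then $\textnormal{I}_j=\llbracket 4,6\rrbracket=\{4,5,6\}$, the integers strictly between $i_j$ and $i_{j+1}$. Recall also that $1_+=1$, $1_{=}=0$, and $1_-=-1$, so that for $\sigma\in\mathcal N_{\circ}$ we have $\sigma(x_m)=i_m$ for $m\neq\ell$, while $\sigma(x_{\ell})=i_{\ell}+1_{\circ}$.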
\begin{lemma}
\label{lem:minellmaxcrit}
For any $y_j\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$: If $\min_{\circ\in\{-,=,+\}}m_{\min}^{\circ}(y_j)<i_{\ell}+1_{\circ}$ then $\mathrm v_j=0$, and if $\max_{\circ\in\{-,=,+\}}m_{\max}^{\circ}(y_j)>i_{\ell}+1_{\circ}$ then $\mathrm v_j=1-a$.
\end{lemma}
\begin{proof}
We will prove $\max_{\circ\in\{-,=,+\}}m_{\max}^{\circ}(y_j)>i_{\ell}+1_{\circ}\Rightarrow\mathrm v_j=1-a$; the proof of $\min_{\circ\in\{-,=,+\}}m_{\min}^{\circ}(y_j)<i_{\ell}+1_{\circ}\Rightarrow\mathrm v_j=0$ is analogous. Fix $\circ\in\{-,=,+\}$ and $\sigma\in \mathcal N_{\circ}$ such that $\sigma(y_j)>\sigma(x_{\ell})=i_{\ell}+1_{\circ}$. There are three cases to consider:\\
\begin{enumerate}[(1)]
\item $\circ$ is $=$. We have $m^{=}_{\max}(y_j)\ge \sigma(y_j)$ and by assumption $\sigma(y_j)>\sigma(x_{\ell})=i_{\ell}$. Hence, $m^{=}_{\max}(y_j)>i_{\ell}$ and the proof is complete by Lemma \ref{lem:minmaxcrit}.\\
\item $\circ$ is $+$. Let $q:=\sigma(y_j)>i_{\ell}+1$. We are going to apply Lemma \ref{lem:range} with $y_j$, $=$, and $q$, so we will check its conditions. Since $i_{\min}(y_j)>\ell$, Lemma \ref{lem:UL} and Corollary \ref{cor:range} yield
$u_{=}(y_j)=u_{+}(y_j)\ge m_{\max}^+(y_j)\ge q$ and $l_{=}(y_j)\le l_{+}(y_j)\le m_{\min}^+(y_j)\le\sigma(y_j)=q$, so we conclude that $q\in\llbracket l_{=}(y_j),u_{=}(y_j)\rrbracket$. Next we show that $q\neq i_m$ for any $m\in [k]$. Indeed, if $m\le \ell$ then $i_m\le i_{\ell}<q$, and if $m>\ell$, then $i_m=q$ implies $\sigma(y_j)=\sigma(x_m)$, which is impossible since $y_j\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\subseteq\alpha$. It follows from Lemma \ref{lem:range} that there exists $\sigma'\in\mathcal N_=$ such that $\sigma'(y_j)=q$. It follows that $m_{\max}^=(y_j)\ge \sigma'(y_j)=q>i_{\ell}$, and the proof is complete by Lemma \ref{lem:minmaxcrit}. \\
\item $\circ$ is $-$. If $m_{\max}^=(y_j)>i_{\ell}$ we are done by Lemma \ref{lem:minmaxcrit}. Suppose then that $m^{=}_{\max}(y_j)<i_{\ell}$ (note that $m^{=}_{\max}(y_j)=i_{\ell}$ is impossible).
\begin{claim}
\label{cl:1}
$m_{\max}^-(y_j)=i_{\ell}=\sigma(y_j)$.
\end{claim}
\begin{proof}
Suppose for contradiction that $m_{\max}^-(y_j)\ge i_{\ell}+1$, so there must exist $\sigma_1\in\mathcal N_-$ with $\sigma_1(y_j)\ge i_{\ell}+1$. Since $i_{\min}(y_j)>\ell$, Lemma \ref{lem:UL} and Corollary \ref{cor:range} yield
$u_{=}(y_j)=u_{-}(y_j)\ge m_{\max}^-(y_j)\ge \sigma_1(y_j)=i_{\ell}+1$. On the other hand, by Corollary \ref{cor:range} and the assumption $m_{\max}^=(y_j)<i_{\ell}$, we have $l_{=}(y_j)\le m_{\min}^=(y_j)\le m_{\max}^=(y_j)<i_{\ell}$, so we conclude that $i_{\ell}+1\in\llbracket l_{=}(y_j),u_{=}(y_j)\rrbracket$. By Corollary \ref{cor:splitequiv}, $i_m\neq i_{\ell}+1$ for any $m\in [k]$ so Lemma \ref{lem:range} implies that there exists $\sigma_2\in\mathcal N_=$ satisfying $\sigma_2(y_j)=i_{\ell}+1$, which contradicts $m^{=}_{\max}(y_j)<i_{\ell}$. We conclude that $m_{\max}^-(y_j)\le i_{\ell}$. Since, by assumption, $\sigma(y_j)>\sigma(x_{\ell})=i_{\ell}-1$ we get $m_{\max}^-(y_j)=\sigma(y_j)= i_{\ell}$.
\end{proof}
Let $y_v$ be such that $\sigma(y_v)=i_{\ell}+1$ and note that $y_v\in\alpha$ by Corollary \ref{cor:splitequiv}. We must have $y_j<y_v$ since if $y_j\nsim y_v$ (by Claim \ref{cl:1} it is impossible to have $y_v<y_j$), then we can swap the positions of $y_j$ and $y_v$ in $\sigma$ to get $\sigma_3\in\mathcal N_-$ satisfying $\sigma_3(y_j)=i_{\ell}+1$, which contradicts Claim \ref{cl:1}. Next we show that there exists $\sigma'\in\mathcal N_=$ satisfying $\sigma'(y_j)=i_{\ell}-1$ and $\sigma'(y_v)=i_{\ell}+1$. Indeed, since we assume $m_{\max}^=(y_j)<i_{\ell}$, we have that, for any $\sigma_4\in\mathcal N_{=}$, $\sigma_4(y_j)<\sigma_4(x_{\ell})$. Hence, since by the assumption $\sigma(y_j)>\sigma(x_{\ell})$, we must have $y_j\nsim x_{\ell}$. Swapping the positions of $y_j$ and $x_{\ell}$ in $\sigma$ yields $\sigma'$, where we used Claim \ref{cl:1}.\\
We will now analyze the element $y_v$. Since $\sigma'(y_v)=i_{\ell}+1$ we see that $\sigma'(y_v)\in \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$ because $(r_{\max},s_{\min})$ is an $\ell$-splitting pair. Hence, Corollary \ref{cor:max_plus} yields that either $y_v=y^{\sigma}_{\textnormal{crit}}$ or $y_v \in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$. Consider both cases:\\
\begin{enumerate}[(a)]
\item $y_v\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$. Since $y_j<y_v$, and since there exists $\sigma'\in\mathcal N_=$ satisfying $\sigma'(y_j)=i_{\ell}-1$ and $\sigma'(y_v)=i_{\ell}+1$, Proposition \ref{prop:avcrit}(d) yields $\mathrm v_j=\mathrm v_v$. On the other hand, $\mathrm v_v=1-a$ by Lemma \ref{lem:minmaxcrit} since $m_{\max}^=(y_v)>i_{\ell}$ as $\sigma'(y_v)=i_{\ell}+1$. We conclude that $\mathrm v_j=1-a$, which proves the lemma.\\
\item $y_v=y^{\sigma}_{\textnormal{crit}}$. Since $y_v>y_j$ and $y_j \in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ (as we cannot have $y_j=y^{\sigma}_{\textnormal{crit}}$), we have $y^{\sigma}_{\textnormal{crit}} > x_{r_{\max}+1}$. Hence, we must have $y^{\sigma}_{\textnormal{crit}} \not \le x_{s_{\min}}$. Let $z$ be such that $\sigma(z)=i_{\ell}+2$ and note that $\sigma'(z)=i_{\ell}+2$ as well (since $\sigma'$ was obtained from $\sigma$ by swapping the positions of $y_j$ and $x_{\ell}$ in $\sigma$). If $\sigma'(z)\in \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$, then, by Corollary \ref{cor:max_plus}, since $z\neq y^{\sigma}_{\textnormal{crit}}$, we must have $z\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$. Recall that $y^{\sigma}_{\textnormal{crit}} \not \le x_{s_{\min}}$ so $y^{\sigma}_{\textnormal{crit}} \not \le z$. Hence, we can swap $y_v=y^{\sigma}_{\textnormal{crit}}$ and $z$ to reduce to the case (a).
Suppose then that $\sigma'(z)\notin \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$:
\begin{claim}
\label{cl:2}
If $\sigma'(z)\notin \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$ then $z=x_{\ell+1}$ and $i_{\ell+1}=i_{\ell}+2$.
\end{claim}
\begin{proof}
Since $\sigma'(z)=i_{\ell}+2\notin \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$ we get that $i_{\ell}+2\notin \textnormal{I}_{\ell}$ (because $r_{\max}+1<\ell<s_{\min}$ as $(r_{\max},s_{\min})$ is an $\ell$-splitting pair). Hence, $i_{\ell}+2\ge i_{\ell+1}$ (since $i_{\ell}+2\le i_{\ell}$ is impossible). On the other hand, Corollary \ref{cor:splitequiv} yields $i_{\ell}+1<i_{\ell+1}\le i_{\ell}+2$ so we conclude $i_{\ell}+2=i_{\ell+1}$. Since $\sigma'\in\mathcal N_=$ we also conclude that $z=x_{\ell+1}$.
\end{proof}
Since $(r_{\max},s_{\min})$ is an $\ell$-splitting pair, we have that either $s_{\min}=\ell+1$ or $s_{\min}>\ell+1$. If $s_{\min}=\ell+1$ then, since by assumption $y_j\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$, we have $y_j<x_{s_{\min}}=x_{\ell+1}$. Since, by Claim \ref{cl:1} and Claim \ref{cl:2}, $\sigma(y_j)=i_{\ell}=i_{\ell+1}-2$, Proposition \ref{prop:avcrit}(h) shows that $\mathrm v_j=1-a$.
Suppose then that $s_{\min}>\ell+1$. Consider the set
\[
\gamma := \{y\in \alpha : \sigma'(y) \in \textnormal{I}_{\llbracket \ell+1,s_{\min}-1\rrbracket}, y\not > x_{\ell+1} \}.
\]
We claim that $\gamma$ is nonempty. Indeed, since $(\ell,s_{\min})$ is a splitting pair, Lemma \ref{lem:splitequiv} yields $y^{\sigma'}\in\beta_{\ell}\cup\beta_{s_{\min}}$ such that $\sigma'(y^{\sigma'})\in \textnormal{I}_{\llbracket \ell+1,s_{\min}-1\rrbracket}$. We must have that either $y^{\sigma'} \not \le x_{s_{\min}}$ or $y^{\sigma'} \not \ge x_{\ell+1}$. We cannot have $y^{\sigma'} \not \le x_{s_{\min}}$ since $r_{\max}+1<\ell+1$ and $y^{\sigma'} \neq y^{\sigma}_{\textnormal{crit}}$ (as $\sigma'(y^{\sigma}_{\textnormal{crit}})=i_{\ell}+1\notin \textnormal{I}_{\llbracket \ell+1,s_{\min}-1\rrbracket}\ni \sigma'(y^{\sigma'} )$) imply $y^{\sigma'} \in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$. Hence, $y^{\sigma'} \in \gamma$. Now pick $y\in \gamma^\downarrow$, which exists as $\gamma$ is nonempty. Note that $y\in \gamma^\downarrow$ implies that $y\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ because $ \sigma'(y) \in \textnormal{I}_{\llbracket \ell+1,s_{\min}-1\rrbracket}\subseteq \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$ yields, by Corollary \ref{cor:max_plus}, $y\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cup \{y^{\sigma}_{\textnormal{crit}}\}$, and $y\neq y^{\sigma}_{\textnormal{crit}}$ since $\sigma'(y^{\sigma}_{\textnormal{crit}})=i_{\ell}+1\notin \textnormal{I}_{\llbracket \ell+1,s_{\min}-1\rrbracket}\ni \sigma'(y )$.
We will show next that the positions of $y$ and $y_v$ can be swapped in both $\sigma$ and $\sigma'$ to yield valid linear extensions in $\mathcal N_-,\mathcal N_=$, respectively. This completes the proof since we reduce back to 3(a).
Let us now verify that the swaps yield valid linear extensions. We will show the validity of the swap of $\sigma$; the argument for $\sigma'$ is analogous since by construction $\sigma$ and $\sigma'$ are the same up to the swap of $y_j$ and $x_{\ell}$. Suppose this swap violated some relation so that there exists $w$ such that $\sigma(y_v)=i_{\ell}+1<\sigma(w)<\sigma(y)$, satisfying either $y_v<w$ or $w<y$. We cannot have $y^{\sigma}_{\textnormal{crit}} =y_v<w$ because $\sigma(w)\in \llbracket i_{r_{\max}+1}, i_{s_{\min}}\rrbracket$ implies, by Corollary \ref{cor:max_plus}, that $w\le x_{s_{\min}}$ (as $w\neq y^{\sigma}_{\textnormal{crit}}$). But then $y^{\sigma}_{\textnormal{crit}} = y_v < w \le x_{s_{\min}}$, which contradicts $y^{\sigma}_{\textnormal{crit}} \not \le x_{s_{\min}}$, as was shown at the beginning of (3). We also cannot have $w<y$ since, otherwise, $w\not \ge x_{\ell+1}$ by the definition of $\gamma$. But if $w \not \in \alpha$, then $w = x_r$ for some $r\ge \ell+1$ (as $i_{\ell}+1<\sigma(w)$) which implies $w\ge x_{\ell+1}$, a contradiction. On the other hand, if $w\in \alpha$, then combined with $\sigma(w)\in \llbracket i_{\ell+1}, i_{s_{\min}}\rrbracket$ we have that $\sigma(w)\in \textnormal{I}_{\llbracket \ell+1,s_{\min}-1\rrbracket}$. Hence, $w\in \gamma$, which contradicts $y\in \gamma^\downarrow$.
\end{enumerate}
\end{enumerate}
\end{proof}
Next we move to proving the analogue of Lemma \ref{lem:a=1}. We will again use the notation \eqref{eq:I}.
\begin{lemma}
\label{lem:a=1crit}
$a=1$.
\end{lemma}
\begin{proof}
We will show that there exists $y_j\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ such that $y_j\nsim x_{\ell}$. This will complete the proof since, by Assumption \ref{ass:cl}, $y_j\nsim x_{\ell}$ implies that there exist $\sigma,\sigma'\in\cup_{\circ\in\{-,=,+\}}\mathcal N_{\circ}$ satisfying $\sigma(y_j)>\sigma(x_{\ell})$ and $\sigma'(y_j)<\sigma'(x_{\ell})$. Applying Lemma \ref{lem:minellmaxcrit} yields $0=\mathrm v_j=1-a$ so $a=1$.
We now show that there exists $y_j\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ such that $y_j\nsim x_{\ell}$. Suppose for contradiction that such $y_j$ does not exist. Then, for any $y\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$, we must have either $y<x_{\ell}$ or $y>x_{\ell}$. In particular, we have the disjoint union
\begin{align}
\label{eq:union}
\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}= [\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cap \alpha_{<x_{\ell}}]\cup [\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cap \alpha_{>x_{\ell}}].
\end{align}
Let us show that
\begin{align}
\label{eq:eps<>}
|\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cap \alpha_{>x_{\ell}}|\le |\textnormal{I}_{\llbracket \ell,s_{\min}-1\rrbracket}|-1 \quad \text{and}\quad |\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cap \alpha_{<x_{\ell}}|\le |\textnormal{I}_{\llbracket r_{\max}+1,\ell-1\rrbracket}|-1;
\end{align}
we prove the first inequality and the proof of the second inequality is analogous. Given any $\sigma\in\mathcal N_+$ and $y\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cap \alpha_{>x_{\ell}}$ we have $i_{\ell}+1<\sigma(y)<i_{s_{\min}}$ so $\sigma(y)\in \textnormal{I}_{\llbracket \ell,s_{\min}-1\rrbracket}\backslash\{i_{\ell}+1\}$. It follows that $|\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cap \alpha_{>x_{\ell}}|\le |\textnormal{I}_{\llbracket \ell,s_{\min}-1\rrbracket}|-1$ as desired. By \eqref{eq:union} and \eqref{eq:eps<>} we now get
\begin{align}
\label{eq:eps<>union}
|\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}|\le|\textnormal{I}_{\llbracket r_{\max}+1,\ell-1\rrbracket}|+ |\textnormal{I}_{\llbracket \ell,s_{\min}-1\rrbracket}|-2= |\textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}|-2.
\end{align}
However, by Lemma \ref{lem:generalmixed}, $|\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}|=|\textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}|-|\{\text{mixed elements}\}|$. Hence, the number of mixed elements is at least 2, which means that the maximal splitting pair is supercritical, which contradicts Proposition \ref{prop:max_notions}.
\end{proof}
We are now ready to prove Theorem \ref{thm:critsec}.
\begin{proof}[Proof of Theorem \ref{thm:critsec}]
We start by proving the analogue of \eqref{eq:supcritequiv}.
\begin{lemma}
\label{lem:incompcrit}
Let $y\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$.
\begin{enumerate}[(a)]
\item If there exists $\sigma\in\mathcal N_=$ such that either $\sigma(y)=i_{\ell}-1$ or $\sigma(y)=i_{\ell}+1$, then $y\nsim x_{\ell}$.
\item If there exists $\sigma\in\mathcal N_-\cup \mathcal N_+$ such that $\sigma(y)=i_{\ell}$, then $y\nsim x_{\ell}$.
\end{enumerate}
\end{lemma}
\begin{proof}
$~$
\begin{enumerate}[(a)]
\item We proceed as in the proof of Theorem \ref{thm:supcritsec} where we use Lemma \ref{lem:supcritE} rather than \eqref{eq:exth}.
\item Let $y\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$ be such that there exists $\sigma \in\mathcal N_-$ with $\sigma(y)=i_{\ell}$; the proof for the case $\sigma \in\mathcal N_+$ is analogous. Since we cannot have $y<x_{\ell}$ it suffices to show that $y\not>x_{\ell}$. Suppose for contradiction that $y>x_{\ell}$. By Lemma \ref{lem:range}, $l_{-}(y)\le i_{\ell}$ so by Lemma \ref{lem:UL} $l_{=}(y)\le i_{\ell}+1$. On the other hand, for any $\sigma'\in \mathcal N_=$, Corollary \ref{cor:range} yields $i_{\ell}=\sigma'(x_{\ell})<\sigma'(y)\le m_{\max}^=(y)\le u_=(y)$ so $u_=(y)\ge i_{\ell}+1$. Since $i_{\ell}+1\neq i_m$ for any $m\in [k]$ (by Corollary \ref{cor:splitequiv}), Lemma \ref{lem:range} yields $\sigma''\in\mathcal N_=$ such that $\sigma''(y)=i_{\ell}+1$. By part (a), $y\nsim x_{\ell}$, which contradicts $y>x_{\ell}$.
\end{enumerate}
\end{proof}
We now prove $|\mathcal N_+(\sim,\sim)|=0$; the proof of $|\mathcal N_-(\sim,\sim)|=0$ is analogous. Suppose for contradiction that $|\mathcal N_+(\sim,\sim)|>0$ so there exists $\sigma\in \mathcal N_+$ such that $y_u:=\sigma^{-1}(i_{\ell}-1)$ and $y_v:=\sigma^{-1}(i_{\ell})$ satisfy $y_u,y_v<x_{\ell}$. Since $ i_{\ell}-1,i_{\ell}\in \textnormal{I}_{\llbracket r_{\max}+1,s_{\min}-1\rrbracket}$ (because $(r_{\max},s_{\min})$ is an $\ell$-splitting pair so $i_{r_{\max}+1}\le i_{\ell-1}<i_{\ell}-1$ by Corollary \ref{cor:splitequiv}), Corollary \ref{cor:max_plus} yields $y_u,y_v\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}\cup\{y^{\sigma}_{\textnormal{crit}}\}$. Consider the following two cases:
If $y_v\in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$, then, by Lemma \ref{lem:incompcrit}(b), $y_v\nsim x_{\ell}$ which contradicts $y_v<x_{\ell}$.
If $y_v=y^{\sigma}_{\textnormal{crit}}$, we have $y_u \in \alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$. Then, because $y^{\sigma}_{\textnormal{crit}} < x_{\ell} < x_{s_{\min}}$, we must have $y^{\sigma}_{\textnormal{crit}} \not \ge x_{r_{\max}+1}$, which implies $y_v = y^{\sigma}_{\textnormal{crit}} \not \ge y_u$. Hence, we can swap the positions of $y_u$ and $y_v$ in $\sigma$ to reduce to the previous case.
\end{proof}
\section*{Notation index}
\begin{itemize}
\item $[p]:=\{1,\ldots, p\}$ for positive integers $p$.\\
\item $\llbracket p,q \rrbracket :=\{p,p+1,\ldots,q-1,q\}$ for integers $p\le q$; \eqref{eq:bracket}.\\
\item $\bar{\alpha}=\{y_1,\ldots,y_{n-k},x_0,x_1,\ldots, x_k,x_{k+1}\}$ and $\alpha=\{y_1,\ldots,y_{n-k}\}$ where $x_0$ (resp. $x_{k+1}$) is smaller (resp. bigger) than every element in $\bar{\alpha}$.\\
\item $i_0=0$ and $i_{k+1}=n+1$. $j_0=-1$ and $j_{p+1}=k+1$. \\
\item $1_{\circ}=1_{\{\circ\text{ is }+\}}-1_{\{\circ\text{ is }-\}}$ for $\circ\in\{-,=,+\}$.\\
\item $\beta_i=\alpha\backslash(\alpha_{<x_i}\cup \alpha_{>x_{i+1}})$ and $\beta_S=\cup_{i\in S}\beta_i$; \eqref{eq:betai}.\\
\item $i_{\max}(y)$ (resp. $i_{\min}(y)$) is the maximum (resp. minimum) number such that $y>x_{i_{\max}(y)}$ (resp. $y<x_{i_{\min}(y)}$); Definition \ref{def:lmu}.\\
\item $l_{\circ}(y):=\max_{r\le i_{\max}(y)}(i_r+1_{\circ}+|\bar{\alpha}_{>x_r,<y}|+1)$ and $u_{\circ}(y):=\min_{s\ge i_{\min}(y)}(i_s+1_{\circ}-|\bar{\alpha}_{>y,<x_s}|-1)$; Definition \ref{def:lmu}.\\
\item $m^{\circ}_{\min}(y)=\min_{\sigma\in\mathcal N_{\circ}}\sigma(y)\quad\text{and}\quad m^{\circ}_{\max}(y)=\max_{\sigma\in\mathcal N_{\circ}}\sigma(y)$ for $\circ\in\{-,=,+\}$ and $y\in\alpha$; Definition \ref{def:lmu}.\\
\item $i_j^\circ := i_j+1_{j=\ell}1_{\circ}$; Definition \ref{def:lmu}.\\
\item $r_{\max}=\max_{\iota}r_{\iota}$ and $s_{\min}=\min_{\iota}s_{\iota}$ where $(r_{\iota},s_{\iota})$ are the sharp-critical $\ell$-splitting pairs; Definition \ref{def:rmaxsmin}.\\
\item $y^{\sigma}_{\textnormal{crit}}$; Corollary \ref{cor:max}.\\
\item $\mathcal{K}_{\max}$, $\beta_{\max}:=\beta_{\llbracket 0, r_{\max}\rrbracket\cup \llbracket s_{\min}, k\rrbracket}$, $\alpha\backslash\beta_{\max}=\alpha_{>x_{r_{\max}+1},<x_{s_{\min}}}$, and $E^{\perp}:=\R^{\alpha\backslash \beta_{\max}}$; \eqref{eq:Kmax}, \eqref{eq:Eperp}.\\
\item $\llbracket i_j,i_{j+1}\rrbracket^{\circ}:=\llbracket i^\circ_j,i^\circ_{j+1}\rrbracket = \llbracket i_j+1_{j=\ell}1_{\circ},i_{j+1}+1_{j+1=\ell}1_{\circ}\rrbracket$ and $\llbracket i_j+1,i_{j+1}-1\rrbracket^{\circ}:=\llbracket i^\circ_j+1,i^\circ_{j+1}-1\rrbracket$; \eqref{eq:[]}.\\
\item $\textnormal{I}_{j_q}:=\llbracket i_j+1,i_{j+1}-1\rrbracket\quad \text{for }j_q\in\llbracket 0,k\rrbracket,\quad \textnormal{I}_J:=\cup_{j_q\in J}\textnormal{I}_{j_q}$; \eqref{eq:Ij}, \eqref{eq:I}.\\
\end{itemize}
\subsection*{Acknowledgments} We are grateful to Swee Hong Chan, David Jerison, Igor Pak, Greta Panova, and Yufei Zhao for helpful comments on this work. We are especially grateful to Ramon van Handel for his valuable comments.
Zhao Yu Ma was partly supported by UROP at MIT. This material is based upon work supported by the National Science Foundation under Award Number 2002022.
\bibliographystyle{amsplain0}
| {
"timestamp": "2022-11-28T02:27:44",
"yymm": "2211",
"arxiv_id": "2211.14252",
"language": "en",
"url": "https://arxiv.org/abs/2211.14252",
"abstract": "Stanley's inequalities for partially ordered sets establish important log-concavity relations for sequences of linear extensions counts. Their extremals however, i.e., the equality cases of these inequalities, were until now poorly understood with even conjectures lacking. In this work, we solve this problem by providing a complete characterization of the extremals of Stanley's inequalities. Our proof is based on building a new ``dictionary\" between the combinatorics of partially ordered sets and the geometry of convex polytopes, which captures their extremal structures.",
"subjects": "Combinatorics (math.CO); Metric Geometry (math.MG)",
"title": "The extremals of Stanley's inequalities for partially ordered sets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587229064299,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7097978675880129
} |
https://arxiv.org/abs/1607.07067 | Classification of Category $\mathcal{J}$ Modules for Divergence Zero Vector Fields on a Torus | We consider a category of modules that admit compatible actions of the commutative algebra of Laurent polynomials and the Lie algebra of divergence zero vector fields on a torus and have a weight decomposition with finite dimensional weight spaces. We classify indecomposable and irreducible modules in this category. | \section{Introduction}
Consider the algebra $A_N=\mathbb{C}[t_1^{\pm1},\dots,t_{N}^{\pm1}]$ and Lie algebra $\text{Der}(A_N)$ of derivations of $A_N$.
The Lie algebra $\text{Der}(A_N)$ may be identified with the Lie algebra of polynomial vector fields on an $N$-dimensional torus (see Section 2).
In \cite{R2} Eswara Rao considered modules that admit compatible actions of both the Lie algebra
$\text{Der}(A_N)$ and the commutative algebra $A_N$. We refer to such modules as $(A_N,\text{Der}(A_N))$-modules.
Tensor fields on a torus provide examples of modules in this class.
Eswara Rao classified in \cite{R2} all irreducible $(A_N,\text{Der}(A_N))$-modules with finite-dimensional weight spaces
and proved that all such modules are in fact tensor modules. This result was extended in \cite{B} to a classification of indecomposable modules in this category. To accomplish this it was shown that the action of the Lie algebra is polynomial
(see \cite{BB} and \cite{BZ}), a strategy that will also be used in the current paper.
In \cite{R1} Eswara Rao determines conditions for irreducibility of tensor modules for $\text{Der}(A_N)$.
Restrictions of these tensor modules to the subalgebra of divergence zero vector fields, denoted here by $\mathcal{S}_N$, are studied in \cite{T}, and it was found that these modules remain irreducible under similar conditions.
The goal of this paper is to study the category $\mathcal{J}$ of $(A_N, \mathcal{S}_N)$-modules with finite-dimensional weight spaces and classify irreducible and indecomposable modules in this category.
Let $S_N^+$ be the Lie algebra of divergence zero elements of $\text{Der}(\mathbb{C}[x_1,\dots,x_N])$ of non-negative degrees, $\mathcal{H}$ the three dimensional Heisenberg algebra and $\mathfrak{a}_N$ an abelian algebra of dimension $N$. The main result of this paper is Theorem \ref{classification} which is stated below (the action of $\mathcal{S}_N$ will also be given).
\begin{theorem*}
Let $\lambda\in\mathbb{C}^N$ and let $\mathcal{J}_{\lambda}$ be a subcategory of modules in category $\mathcal{J}$ supported on $\lambda+\mathbb{Z}^N$.
\begin{enumerate}
\item[(a)]
For $N=2$ there is an equivalence of categories between the category of finite dimensional modules for $S_2^+\oplus\mathcal{H}$ and $\mathcal{J}_{\lambda}$. This equivalence maps $U$ to $A_2\otimes U$ where $U$ is a finite dimensional module for $S_2^+\oplus\mathcal{H}$.
\item[(b)]
For $N\geq 3$, there is an equivalence of categories between the category of finite dimensional modules for $S_N^+\oplus\mathfrak{a}_N$ and $\mathcal{J}_{\lambda}$. This equivalence maps $U$ to $A_N\otimes U$ where $U$ is a finite dimensional module for $S_N^+\oplus\mathfrak{a}_N$.
\end{enumerate}
\end{theorem*}
Section 2 of the current paper will introduce category $\mathcal{J}$ for the Lie algebra of divergence zero vector fields on an $N$-dimensional torus. An immediate consequence of this definition is that weights for an indecomposable module $J$ in category $\mathcal{J}$ form a single coset $\lambda+\mathbb{Z}^N$ where $\lambda\in\mathbb{C}^N$. Furthermore, since $J$ is a free $A_N$-module of finite rank, all weight spaces have the same dimension and it follows that $J\cong A_N\otimes U$, where $U$ is any weight space of $J$.
Section 3 contains the bulk of the proof for the classification of category $\mathcal{J}$. It begins by showing that the action of $\mathcal{S}_N$ on $J$ is completely determined by the action of a certain Lie algebra on $U$. The remainder of the section shows that $\mathcal{S}_N$ acts on $J$ by certain $\text{End}(U)$-valued polynomials. The case $N=2$, however, is exceptional in this regard and must be considered separately from the cases where $N\geq 3$.
The main results are presented in Sections 4 and 5. Here the so-called polynomial action on $U$ is seen to be a representation of the Lie algebra $S_N^+$, along with the three dimensional Heisenberg in the case $N=2$, or an abelian algebra in the case $N\geq3$. In the case of irreducible modules the action of $S_N^+$ simplifies to a representation of $\mathfrak{sl}_N$, the Lie algebra of $N\times N$ matrices with trace zero, and the three dimensional Heisenberg converts to an abelian algebra when $N=2$.
Irreducible representations for the case $N=2$ are studied in \cite{JL} by Jiang and Lin. Lemma \ref{eigenvector} below makes use of a technique found in this paper in order to obtain a family of eigenvectors. This provides a crucial step in the classification.
\section{Preliminaries}
Let $A_N=\mathbb{C}[t_1^{\pm1},\dots,t_{N}^{\pm1}]$ be the algebra of Laurent polynomials over $\mathbb{C}$. Elements of $A_N$ are presented with multi-index notation $t^r=t_1^{r_1}\dots t_{N}^{r_{N}}$ where $r=(r_1,\dots,r_{N})\in\mathbb{Z}^{N}$. Let $\{e_1,\dots,e_N\}$ denote the standard basis for $\mathbb{Z}^N$. For $k\in\mathbb{Z}^N $, $|k|=k_1+\dots+k_N$, $k!=k_1!\dots k_N!$ and $\binom{r}{k}=\frac{r!}{k!(r-k)!}$. Denote the set of non-negative integers by $\mathbb{Z}_{\geq0}$.
For $i\in\{1,\dots,N\}$, let $d_i=t_i\frac{\partial}{\partial t_i}$. The vector space of derivations of $A_N$, $\text{Der}(A_N)=\text{Span}_{\mathbb{C}}\left\{t^rd_i|i\in\{1,\dots,N\}, r\in\mathbb{Z}^{N}\right\}$, forms a Lie algebra called the Witt algebra denoted here by $\mathcal{W}_N$. The Lie bracket in $\mathcal{W}_N$ is given by $[t^rd_i,t^sd_j]=s_it^{r+s}d_j-r_jt^{r+s}d_i$.
Geometrically, $\mathcal{W}_N$ may be interpreted as the Lie algebra of (complex-valued) polynomial vector fields on an $N$ dimensional torus via the mapping $t_j=e^{\sqrt{-1}\theta_j}$ for all $j\in\{1,\dots,N\}$, where $\theta_j$ is the $j$th angular coordinate. This has an interesting subalgebra, the Lie algebra of divergence-zero vector fields, denoted $\mathcal{S}_N$.
The change of coordinates $t_j=e^{\sqrt{-1}\theta_j}$, gives $\frac{\partial}{\partial \theta_j}=\frac{\partial t_j}{\partial \theta_j}\cdot\frac{\partial}{\partial t_j}=\sqrt{-1} t_j\frac{\partial}{\partial t_j}=\sqrt{-1} d_j$. Thus an element $X=\sum_{j=1}^{N}f_j(t)d_j\in \mathcal{W}_N$ can be written in the form $X=-\sqrt{-1}\sum_{j=1}^{N}f_j(t)\frac{\partial}{\partial \theta_j}$. The divergence of $X$ with respect to the natural volume form in angular coordinates is then $-\sqrt{-1}\sum_{j=1}^{N}\frac{\partial f_j}{\partial \theta_j}=\sum_{j=1}^{N}t_j\frac{\partial f_j}{\partial t_j}$. Letting $d_{ab}(r)=r_{b}t^rd_a-r_at^rd_{b}$, it follows that
\begin{equation*}
\mathcal{S}_N=\text{Span}_{\mathbb{C}}\left\{d_a,d_{ab}(r)|a,b\in\{1,\dots,N\}, r\in\mathbb{Z}^{N}\right\}
\end{equation*}
and has commutative Cartan subalgebra $\mathfrak{h}=\text{Span}_{\mathbb{C}}\{d_j|j\in\{1,\dots,N\}\}$. It will be useful to have the Lie bracket of $\mathcal{S}_N$ in terms of the elements $ d_{ab}(r)$. For $r,s\in\mathbb{Z}^N$ and $a,b,p,q\in\{1,\dots,N\}$, $[d_{a},d_{pq}(r)]=r_ad_{pq}(r),$ and
\begin{multline*}
[d_{ab}(r),d_{pq}(s)]\\=r_bs_pd_{aq}(r+s)-r_bs_qd_{ap}(r+s)-r_as_pd_{bq}(r+s)+r_as_qd_{bp}(r+s).
\end{multline*}
By definition $d_{ab}(0)=0,d_{aa}(r)=0$ and $d_{ba}(r)=-d_{ab}(r)$. When $N\geq3$, $d_{ab}(r)=0$ in the case $r_a=r_b=0$, and in general,
\begin{equation*}
r_pd_{ab}(r)+r_ad_{bp}(r)+r_bd_{pa}(r)=0.
\end{equation*}
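For example, a direct check confirms that each spanning element $d_{ab}(r)=r_bt^rd_a-r_at^rd_b$ is indeed divergence free with respect to the formula above: its only nonzero components are $f_a=r_bt^r$ and $f_b=-r_at^r$, so
\begin{equation*}
\sum_{j=1}^{N}t_j\frac{\partial f_j}{\partial t_j}=t_a\frac{\partial}{\partial t_a}\left(r_bt^r\right)-t_b\frac{\partial}{\partial t_b}\left(r_at^r\right)=r_ar_bt^r-r_br_at^r=0.
\end{equation*}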
A family of modules for $\mathcal{W}_N$ called category $\mathcal{J}$ was defined in \cite{B}. An analogous category of modules for $\mathcal{S}_N$ is defined as follows:
\begin{defn}\label{CatJ}
Let $N>1$. An $\mathcal{S}_N$-module $J$ belongs to category $\mathcal{J}$ if the following properties hold:
\begin{enumerate}
\item[(J1)] The action of $d_a$ on $J$ is diagonalizable for all $a\in\{1,\dots,N\}$.
\item[(J2)] Module $J$ is a free $A_N$-module of finite rank.
\item[(J3)] For any $X\in\mathcal{S}_N,f\in A_N$ and $u\in J$, $X(fu)=(X(f))u+f(Xu).$
\end{enumerate}
\end{defn}
A submodule of any $J\in\mathcal{J}$ must be invariant under the actions of both $A_N$ and $\mathcal{S}_N$. Classifying the modules of category $\mathcal{J}$ is the goal of this paper. From property (J2) it follows that any module in $\mathcal{J}$ is a finite direct sum of indecomposable modules and hence it suffices to examine indecomposable modules $J\in\mathcal{J}$. Using (J1) we may consider the $\mathfrak{h}$-weight decomposition, $J=\bigoplus_{\lambda\in\mathbb{C}^N}J_{\lambda}$ where $J_{\lambda}=\{u\in J|d_a(u)=\lambda_au\}$. For $u\in J_{\lambda}$,
\begin{equation*}
d_a(d_{bc}(r)u)=d_{bc}(r)d_au+[d_a,d_{bc}(r)]u=(\lambda_a+r_a)d_{bc}(r)u
\end{equation*}
and thus $d_{bc}(r)J_{\lambda}\subset J_{\lambda+r}$. Similarly by (J3) $t^rJ_{\lambda}\subset J_{\lambda+r}$. These two relations partition the weights of $J$ into $\mathbb{Z}^N$-cosets of $\mathbb{C}^N$, and decompose $J$ into a direct sum of submodules, each corresponding to a distinct coset. Thus if $J$ is indecomposable its set of weights is one such coset $\lambda+\mathbb{Z}^N$ for $\lambda\in\mathbb{C}^N$ and $J=\bigoplus_{r\in\mathbb{Z}^N}J_{\lambda+r}$.
Let $U=J_{\lambda}$. The invertible map $t^r:U\rightarrow J_{\lambda+r}$ identifies all weight spaces with $U$ and since $J$ is a free module for the associative algebra $A_N$ it follows that any basis for $U$ is also a basis for $J$ viewed as a free $A_N$-module. Furthermore the finite rank condition of property (J2) implies that $U$ must be finite dimensional. This yields that $J\cong A_N\otimes U$. Homogeneous elements of $J$ will be denoted $t^s\otimes v$, for $s\in\mathbb{Z}^N,v\in U$.
\section{Polynomial Action}
The map $d_{ab}(r):1\otimes U\rightarrow t^r\otimes U$ induces an endomorphism $D_{ab}(r):U\rightarrow U$ defined by
\begin{equation*}
D_{ab}(r)u=(t^{-r}\circ d_{ab}(r))u
\end{equation*}
for $u\in U$. Combining this with (J3) yields
\begin{equation}\label{SNactiononJ}
d_{ab}(r)(t^s\otimes v)=(r_bs_a-s_br_a)t^{r+s}\otimes v + t^{r+s}\otimes D_{ab}(r)v,
\end{equation}
and so the action of $d_{ab}(r)$ on $J$ is determined by that of $D_{ab}(r)$ on $U$.
The key to proving the main result is to show that $D_{ab}(r)$ acts on $U$ by an $\text{End}(U)$-valued polynomial in $r$ when $N\geq 3$. That is,
\begin{equation*}
D_{ab}(r)=\sum_{k\in\mathbb{Z}_{\geq0}^N}\frac{r^k}{k!}P^{(k)}_{ab},
\end{equation*}
where the $P^{(k)}_{ab}\in\text{End}(U)$ do not depend on $r$, and the sum is finite. The factor of $k!$ is for convenience. In the case $N=2$ a slight modification needs to be made and the corresponding expansion has the form
\begin{equation*}
D_{ab}(r)=\sum_{k\in\mathbb{Z}_{\geq0}^2}\frac{r^k}{k!}P^{(k)}_{ab}-\delta_{r,0}P^{(0)}_{ab},
\end{equation*}
where $\delta_{r,0}$ is the Kronecker delta.
Since $D_{ab}(r)=t^{-r}d_{ab}(r)$, the Lie bracket for $D_{ab}(r)$ follows from that of $d_{ab}(r)$.
\begin{align*}
&\quad[D_{ab}(r),D_{cd}(s)]\\
&=[t^{-r}d_{ab}(r),t^{-s}d_{cd}(s)]\\
&=t^{-r}(d_{ab}(r)(t^{-s}))d_{cd}(s)-t^{-s}(d_{cd}(s)(t^{-r}))d_{ab}(r)+t^{-r-s}[d_{ab}(r),d_{cd}(s)]\\
&=t^{-r}(-r_bs_a+r_as_b)t^{r-s}d_{cd}(s)-t^{-s}(-r_cs_d+r_ds_c)t^{s-r}d_{ab}(r)\\
&\quad+t^{-r-s}\left(r_bs_cd_{ad}(r+s)-r_bs_dd_{ac}(r+s)-r_as_cd_{bd}(r+s)+r_as_dd_{bc}(r+s)\right)\\
&=(r_as_b-r_bs_a)D_{cd}(s)+(r_cs_d-r_ds_c)D_{ab}(r)\\
&\quad+r_bs_cD_{ad}(r+s)-r_bs_dD_{ac}(r+s)-r_as_cD_{bd}(r+s)+r_as_dD_{bc}(r+s).
\end{align*}
This has special case
\begin{equation}
\label{bracket}
[D_{ab}(r),D_{ab}(s)]=(r_as_b-r_bs_a)(D_{ab}(r)+D_{ab}(s)-D_{ab}(r+s)).
\end{equation}
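This special case can be seen by setting $c=a$ and $d=b$ in the general formula, using $D_{aa}(r+s)=D_{bb}(r+s)=0$ and $D_{ba}(r+s)=-D_{ab}(r+s)$:
\begin{align*}
[D_{ab}(r),D_{ab}(s)]&=(r_as_b-r_bs_a)D_{ab}(s)+(r_as_b-r_bs_a)D_{ab}(r)+r_bs_aD_{ab}(r+s)-r_as_bD_{ab}(r+s)\\
&=(r_as_b-r_bs_a)(D_{ab}(r)+D_{ab}(s)-D_{ab}(r+s)).
\end{align*}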
Note that for $N\geq3$
\begin{equation*}
r_cD_{ab}(r)+r_aD_{bc}(r)+r_bD_{ca}(r)=0.
\end{equation*}
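This identity can be obtained from the corresponding relation $r_pd_{ab}(r)+r_ad_{bp}(r)+r_bd_{pa}(r)=0$ in $\mathcal{S}_N$ by multiplying by $t^{-r}$ and relabelling $p$ as $c$:
\begin{equation*}
t^{-r}\left(r_cd_{ab}(r)+r_ad_{bc}(r)+r_bd_{ca}(r)\right)=r_cD_{ab}(r)+r_aD_{bc}(r)+r_bD_{ca}(r)=0.
\end{equation*}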
For a function $f$ whose domain is $\mathbb{Z}^N$ the \emph{difference derivative} in direction $r\in\mathbb{Z}^N$, denoted by $\partial_rf$, is defined as
\begin{equation*}
\partial_rf(s)=f(s+r)-f(s).
\end{equation*}
Higher order derivatives are obtained by iteration and thus
\begin{equation}\label{higherorderderiv}
\partial_r^mf(s)=\sum_{i=0}^m(-1)^{m-i}\binom{m}{i}f(s+ir).
\end{equation}
To simplify notation let $\partial_a=\partial_{e_a}$. Applying the above twice yields
\begin{equation}\label{mixedpartialderiv}
\partial_{a}^m\partial_{b}^nf(s)=\sum_{i=0}^m\sum_{j=0}^n(-1)^{m+n-i-j}\binom{m}{i}\binom{n}{j}f(s+ie_a+je_b).
\end{equation}
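For example, taking $m=n=1$ in (\ref{mixedpartialderiv}) gives the second-order difference
\begin{equation*}
\partial_{a}\partial_{b}f(s)=f(s+e_a+e_b)-f(s+e_a)-f(s+e_b)+f(s),
\end{equation*}
while $m=2$ in (\ref{higherorderderiv}) with $r=e_a$ gives $\partial_a^2f(s)=f(s+2e_a)-2f(s+e_a)+f(s)$.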
A technique for finding eigenvectors was found in \cite{JL} and provides a key step to proving the result here (cf. Lemma 4 in \cite{JL}).
\begin{lem}\label{eigenvector}
Let $m,n\in\mathbb{Z}_{\geq0}$, and $a,b\in\{1,\dots,N\}$. Then for $m\geq1$
\begin{equation*}
[D_{ab}(-e_b),[D_{ab}(-e_a),\partial_{a}^m\partial_{b}^nD_{ab}(e_a)]]=-n(m+1)\partial_{a}^m\partial_{b}^nD_{ab}(e_a).
\end{equation*}
\end{lem}
\begin{proof}
First apply the formula for the difference derivatives,
\begin{align*}
&[D_{ab}(-e_b),[D_{ab}(-e_a),\partial_{a}^m\partial_{b}^nD_{ab}(e_a)]]\\
&=\sum_{i=0}^m\sum_{j=0}^n(-1)^{m+n-i-j}\binom{m}{i}\binom{n}{j}[D_{ab}(-e_b),[D_{ab}(-e_a),D_{ab}((i+1)e_a+je_b)]],
\end{align*}
then evaluate the Lie bracket for $D_{ab}(r)$,
\begin{align*}
&=-\sum_{i=0}^m\sum_{j=0}^n(-1)^{m+n-i-j}\binom{m}{i}\binom{n}{j}j[D_{ab}(-e_b),D_{ab}(-e_a)]\\
&\quad-\sum_{i=0}^m\sum_{j=0}^n(-1)^{m+n-i-j}\binom{m}{i}\binom{n}{j}j[D_{ab}(-e_b),D_{ab}((i+1)e_a+je_b)]\\
&\quad+\sum_{i=0}^m\sum_{j=0}^n(-1)^{m+n-i-j}\binom{m}{i}\binom{n}{j}j[D_{ab}(-e_b),D_{ab}(ie_a+je_b)].
\end{align*}
Simplifying the binomial coefficients and evaluating the Lie bracket yields,
\begin{align*}
&=-n\sum_{j=1}^n(-1)^{n-j}\binom{n-1}{j-1}\left(\sum_{i=0}^m(-1)^{m-i}\binom{m}{i}\right)[D_{ab}(-e_b),D_{ab}(-e_a)]\\
&\quad-n\sum_{i=0}^m\sum_{j=1}^n(-1)^{m+n-i-j}\binom{m}{i}\binom{n-1}{j-1}\\
&\quad\times(i+1)(D_{ab}(-e_b)+D_{ab}((i+1)e_a+je_b)-D_{ab}((i+1)e_a+(j-1)e_b))\\
&\quad+n\sum_{i=0}^m\sum_{j=1}^n(-1)^{m+n-i-j}\binom{m}{i}\binom{n-1}{j-1}\\
&\quad\times i(D_{ab}(-e_b)+D_{ab}(ie_a+je_b)-D_{ab}(ie_a+(j-1)e_b)).
\end{align*}
The first term vanishes because the sum in parentheses is zero for $m\geq1$. For a similar reason the terms involving $D_{ab}(-e_b)$ will vanish. A change of summation index causes a sign change leaving,
\begin{align*}
&n\sum_{i=0}^m\sum_{j=0}^{n-1}(-1)^{m+n-i-j}\binom{m}{i}\binom{n-1}{j}\\
&\times(D_{ab}((i+1)e_a+(j+1)e_b)-D_{ab}((i+1)e_a+je_b))\\
&-mn\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}(-1)^{m+n-i-j}\binom{m-1}{i}\binom{n-1}{j}\\
&\times(D_{ab}((i+2)e_a+(j+1)e_b)-D_{ab}((i+2)e_a+je_b))\\
&+mn\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}(-1)^{m+n-i-j}\binom{m-1}{i}\binom{n-1}{j}\\
&\times(D_{ab}((i+1)e_a+(j+1)e_b)-D_{ab}((i+1)e_a+je_b)).
\end{align*}
Applying the definition of the difference derivative combines terms to give
\begin{align*}
&\quad n\sum_{i=0}^{m}\sum_{j=0}^{n-1}(-1)^{m+n-i-j}\binom{m}{i}\binom{n-1}{j}\partial_{b}D_{ab}((i+1)e_a+je_b)\\
&\quad-mn\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}(-1)^{m+n-i-j}\binom{m-1}{i}\binom{n-1}{j}\\
&\quad\times(\partial_{b}D_{ab}((i+2)e_a+je_b)-\partial_{b}D_{ab}((i+1)e_a+je_b))\\
&=-n\partial_{a}^m\partial_{b}^{n-1}\left(\partial_{b}D_{ab}(e_a)\right)\\
&\quad-mn\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}(-1)^{m+n-i-j}\binom{m-1}{i}\binom{n-1}{j}\partial_{a}\partial_{b}D_{ab}((i+1)e_a+je_b)\\
&=-n\partial_{a}^m\partial_{b}^nD_{ab}(e_a)-mn\partial_{a}^{m-1}\partial_{b}^{n-1}\left(\partial_{a}\partial_{b}D_{ab}(e_a)\right)\\
&=-n(m+1)\partial_{a}^m\partial_{b}^nD_{ab}(e_a).
\end{align*}
\end{proof}
The Lemma above shows that for various $m$ and $n$, $\partial_{a}^m\partial_{b}^nD_{ab}(e_a)$ are eigenvectors for $\text{ad}(D_{ab}(-e_b))\text{ad}(D_{ab}(-e_a))$ and yield infinitely many distinct eigenvalues. This fact will be used to show that $\partial_{a}^m\partial_{b}^nD_{ab}(e_a)=0$ for large enough values of $m$ and $n$.
The following Lemma was proven in \cite{B} and is presented here without proof.
\begin{lem}\label{finiteeigenvals}
Let $\mathfrak{L}$ be a Lie algebra with nonzero elements $y,y_1,y_2,\dots$ with the property that
\[[y,y_i]=\alpha_iy_i\]
for $i=1,2,\dots$, and $\alpha_i\in\mathbb{C}$. Then for a finite dimensional representation $(U,\rho)$ of $\mathfrak{L}$, there are at most $(\dim U)^2-\dim U+1$ distinct eigenvalues for which $\rho(y_i)\neq 0$.
\end{lem}
Now consider the case $N=2$ where $\mathcal{S}_2=\text{Span}_{\mathbb{C}}\left\{d_1(r),d_2(r),d_{12}(r)|r\in\mathbb{Z}^{2}\right\}$. Combining the two Lemmas above shows that there exists $K\in\mathbb{N}$ such that $\partial_1^{m+1}\partial_2^nD_{12}(e_1)=0$ and $\partial_1^m\partial_2^{n+1}D_{12}(e_2)=0$ for all $m+n> K$. This fact along with the following Lemmas will show that $D_{12}(r)$ is a polynomial plus a delta function.
\begin{lem}\label{df=0forallmngeq0}
If $\partial_1^m\partial_2^nf(r)=0$ for all $m,n\geq 0$ then $f(r+ie_1+je_2)=0$ for all $i,j\geq0$.
\end{lem}
\begin{proof}
Using (\ref{higherorderderiv}) it follows by induction that if $\partial_a^mf(r)=0$ for all $m\geq0$ then $f(r+ie_a)=0$ for all $i\geq 0$.
Suppose $\partial_1^m\partial_2^nf(r)=0$ for all $m,n\geq 0$, and let $g_{n}(r)=\partial_2^nf(r)$ so that by assumption, for each $n\geq 0$, $\partial_1^mg_{n}(r)=0$ for all $m\geq 0$. By the first part of the proof this implies that $g_{n}(r+ie_1)=0$ for all $i\geq 0$. So for any $i\geq0$, $\partial_2^nf(r+ie_1)=0$ for all $n\geq 0$, which implies that $f(r+ie_1+je_2)=0$ for all $i,j\geq 0$.
\end{proof}
For $K+1$ ordered pairs $(x_i,a_i)$, $i=0,\dots,K$ with distinct $x_i$, there exists a unique interpolating polynomial $P(X)$ of degree at most $K$, such that $P(x_i)=a_i$. This can be extended to functions of two variables in the following way.
\begin{lem}\label{deg2pol}
Given $\frac{(K+1)(K+2)}{2}$ triples $(x_i,y_j,a_{ij})$ for $0\leq i+j\leq K$, with distinct $x_i$ and distinct $y_j$, there exists a unique polynomial $P(X,Y)$ of degree at most $K$ such that $P(x_i,y_j)=a_{ij}$.
\end{lem}
\begin{proof}
For $K=0$ the constant function $P(X,Y)=a_{00}$ is the unique polynomial of degree 0 through $(x_0,y_0,a_{00})$. Proceed by induction on $K$. The univariate case yields a unique polynomial $R(X)$ of degree at most $K$ such that $R(x_i)=a_{i0}$ for all $i\in\{0,\dots,K\}$. For $i\geq0$ and $j\geq1$ let
\begin{equation*}
b_{ij}=\frac{a_{ij}-R(x_i)}{y_j-y_0}.
\end{equation*}
By induction there is a unique interpolating polynomial $Q(X,Y)$ of degree at most $K-1$ such that $Q(x_i,y_j)=b_{ij}$ for the $\frac{K(K+1)}{2}$ triples $(x_i,y_j,b_{ij})$ where $1\leq i+j\leq K$, and $j\geq1$. Polynomial $P(X,Y)=R(X)+(Y-y_0)Q(X,Y)$ is of degree at most $K$ and $P(x_i,y_j)=a_{ij}$ for $0\leq i+j\leq K$.
Suppose $T(X,Y)$ is a polynomial of degree at most $K$ and $T(x_i,y_j)=a_{ij}$ for $0\leq i+j\leq K$. Since the decomposition $T(X,Y)=F(X)+(Y-y_0)G(X,Y)$ is unique for polynomials $F$ and $G$, it must be that $F(X)=R(X)$ and $G(X,Y)=Q(X,Y)$. Hence $P(X,Y)$ is unique.
\end{proof}
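To illustrate the construction in the simplest nontrivial case $K=1$, suppose the data are $(x_0,y_0,a_{00})$, $(x_1,y_0,a_{10})$, and $(x_0,y_1,a_{01})$. Then $R(X)=a_{00}+\frac{a_{10}-a_{00}}{x_1-x_0}(X-x_0)$, $b_{01}=\frac{a_{01}-R(x_0)}{y_1-y_0}=\frac{a_{01}-a_{00}}{y_1-y_0}$, and $Q(X,Y)=b_{01}$, so that
\begin{equation*}
P(X,Y)=a_{00}+\frac{a_{10}-a_{00}}{x_1-x_0}(X-x_0)+\frac{a_{01}-a_{00}}{y_1-y_0}(Y-y_0),
\end{equation*}
which has degree at most $1$ and interpolates all three triples.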
\begin{lem}\label{equalpolynomials}
Let $S=S_1\times\dots\times S_N\subseteq\mathbb{C}^N$, where each $S_i$ is a set with $K+1$ elements, and let $F$ and $G$ be polynomials of degree at most $K$ in $N$ variables, $X_1,\dots,X_N$, that agree on $S$. Then $F=G$.
\end{lem}
\begin{proof} Use induction on $N$ where the case $N=1$ is well known (the case $N=2$ follows from Lemma \ref{deg2pol}). Let $a\in S_1$ and divide $F(X)$ and $G(X)$ by $(X_1-a)$ to get $F(X)=(X_1-a)P(X)+R(X_2,\dots,X_N)$ and $G(X)=(X_1-a)Q(X)+T(X_2,\dots,X_N)$. Then $R$ and $T$ are of degree at most $K$, and agree on $S_2\times\dots\times S_N$. By induction $R=T$ and so $P(x)=Q(x)$ for all $x\in S'=(S_1\setminus\{a\})\times S_2\times\dots\times S_N$. Then $P=Q$ also by induction, since $P$ and $Q$ are of degree at most $K-1$ and $S'$ contains a cube of size $K$. Therefore $F=G$.
\end{proof}
\begin{lem}\label{polynomialonquadrant}
Let $r\in\mathbb{Z}^2$. Suppose $\partial_1^m\partial_2^nf(r)=0$ for all $m+n> K$, for some $K\in\mathbb{N}$. Let $p(t)$ be the bivariate interpolating polynomial of degree at most $K$ such that $p(r+ie_1+je_2)=f(r+ie_1+je_2)$ for $0\leq i+j\leq K$. Then $f(s)=p(s)$ for all $s_1\geq r_1,s_2\geq r_2$.
\end{lem}
\begin{proof}
Let $h(t)=f(t)-p(t)$. Then (\ref{mixedpartialderiv}) implies that $\partial_1^m\partial_2^nh(r)=0$ for $m+n\leq K$, because $h(r+ie_1+je_2)=0$ for $i+j\leq K$. When $m+n>K$, $\partial_1^m\partial_2^nf(r)=0$ by assumption and $\partial_1^m\partial_2^np(r)=0$ since it is a polynomial of degree at most $K$. Thus
$\partial_1^m\partial_2^nh(r)=0$ for $m+n\geq 0$ and so by Lemma \ref{df=0forallmngeq0}, $f(r+ie_1+je_2)=p(r+ie_1+je_2)$ for all $i,j\geq 0$.
\end{proof}
\begin{prop}\label{N=2case}
Let $N=2$ and let $J=A_2\otimes U$ be a module in category $\mathcal{J}$. Then $D_{12}(r)$ acts on $U$ by
\begin{equation*}
D_{12}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^2}\frac{r^k}{k!}P^{(k)}_{12}-\delta_{r,0}P_{12}^{(0)}
\end{equation*}
for all $r\in\mathbb{Z}^2$, where $P^{(k)}_{12}\in\text{\emph{End}}(U)$ does not depend on $r$, and the summation is finite.
\end{prop}
\begin{proof}
It follows from Lemmas \ref{eigenvector} and \ref{finiteeigenvals} that there exists a $K\in\mathbb{N}$ such that $\partial_1^{m+1}\partial_2^nD_{12}(e_1)=0$ and $\partial_1^m\partial_2^{n+1}D_{12}(e_2)=0$ for $n+m>K$. By Lemma \ref{polynomialonquadrant} there exist $\text{End}(U)$-valued polynomials $P_1$ and $P_2$ such that
\begin{equation*}
P_1(r)=\partial_1D_{12}(r)\text{ and }P_2(s)=\partial_2D_{12}(s),
\end{equation*}
for all $r=r_1e_1+r_2e_2$ and $s=s_1e_1+s_2e_2$ with $r_1,s_2\geq1$ and $r_2,s_1\geq0$.
Taking polynomial difference antiderivatives $\bar{P}_1(r)$ and $\bar{P}_2 (s)$ respectively, we get
\begin{equation*}
D_{12}(r)=\bar{P}_1(r)+g_1(r_2)\text{ and }D_{12}(s)=\bar{P}_2(s)+g_2(s_1),
\end{equation*}
where $\bar{P}_1$ and $\bar{P}_2$ are polynomials, and $g_1$ and $g_2$ are functions of $r_2$ and $s_1$ respectively. Then $\bar{P}_1(r)+g_1(r_2)=\bar{P}_2(r)+g_2(r_1)$ for $r_1,r_2\geq 1$, so
\begin{equation*}
g_2(r_1)-g_1(r_2)=\bar{P}_1(r)-\bar{P}_2(r).
\end{equation*}
Taking the $m$th difference derivative in $e_1$ where $m>K$ gives
\begin{equation*}
\partial_1^m g_2(r_1)=\partial_1^m(\bar{P}_1(r)-\bar{P}_2(r))=0
\end{equation*}
which implies that $g_2$ is a polynomial in $r_1$. Similarly $g_1$ is a polynomial in $r_2$. Thus
$D_{12}(r)=\bar{P}_1(r)+g_1(r_2)$ and $D_{12}(r)=\bar{P}_2(r)+g_2(r_1),$ are polynomials that agree on $\mathcal{R}_1=\{(i,j)\in\mathbb{Z}^2|i,j\geq1\}$, and hence must be equal by Lemma \ref{equalpolynomials}. Therefore $D_{12}(r)$ is an $\text{End}(U)$-valued polynomial $Q_1(r)$ on $\mathcal{R}_1$. It remains to show that $D_{12}(r)$ acts by a polynomial $P(r)$ on all of $\mathbb{Z}^2$ except at the origin.
Let $\mathcal{L}$ be the Lie algebra with basis elements $D_{12}(r)$ for $r\in\mathbb{Z}^2$ and Lie bracket given by \ref{bracket}. Consider the automorphisms $\varphi_1$ and $\varphi_2$ of $\mathcal{L}$, where $\varphi_1(D_{12}(r_1,r_2))=-D_{12}(-r_1,r_2)$, $\varphi_2(D_{12}(r_1,r_2))=-D_{12}(r_1,-r_2)$, and their composition $\varphi_2\circ\varphi_1$ where $\varphi_2\circ\varphi_1(D_{12}(r_1,r_2)) = D_{12}(-r_1,-r_2)$. What was proven for $\mathcal{L}$ is also true for its image under these automorphisms. Thus there are $\text{End}(U)$-valued polynomials $Q_2,Q_3$, and $Q_4$ such that $D_{12}(r)=Q_2(r)$ for $r\in\mathcal{R}_2=\{(-i,j)\in\mathbb{Z}^2|i,j\geq1\}$, $D_{12}(r)=Q_3(r)$ for $r\in\mathcal{R}_3=\{(-i,-j)\in\mathbb{Z}^2|i,j\geq1\}$, and $D_{12}(r)=Q_4(r)$ for $r\in\mathcal{R}_4=\{(i,-j)\in\mathbb{Z}^2|i,j\geq1\}$.
Let $\sigma:\mathcal{L}\rightarrow\mathcal{L}$ be the automorphism defined by $\sigma(D_{12}(r_1,r_2))=-D_{12}(-r_1+r_2,r_2)$. When applied to $\mathcal{R}_1$, this implies the existence of an $\text{End}(U)$-valued polynomial $P$ such that $D_{12}(r)=P(r)$ on the region $\mathcal{R}_5=\{(i,j)\in\mathbb{Z}^2|j\geq1,i\leq j\}$. Lemma \ref{equalpolynomials} may be applied to the intersection of $\mathcal{R}_1$ and $\mathcal{R}_5$ which says that $Q_1=P$. Applied again to the intersection of $\mathcal{R}_2$ and $\mathcal{R}_5$ yields that $Q_2=P$. Thus $D_{12}(r)$ acts by $\text{End}(U)$-valued polynomial $P(r)$ for $r\in\{(i,j)\in\mathbb{Z}^2|j\geq1\}$.
Similar techniques may be applied to connect this region with $\mathcal{R}_3$ and $\mathcal{R}_4$. The result that follows is that $D_{12}(r)$ acts by $\text{End}(U)$-valued polynomial $P(r)$ for $r\in\mathbb{Z}^2\setminus\{(0,0)\}$.
To indicate that the polynomial obtained above is specific to the operator $D_{12}(r)$, write $P_{12}$ instead of $P$. Decompose $P_{12}(r)$ into powers of $r$ as
\begin{equation*}
P_{12}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^2}\frac{r^k}{k!}P^{(k)}_{12},
\end{equation*}
where the sum is finite, the $P^{(k)}_{12}\in\text{End}(U)$ do not depend on $r$, and the factor of $k!$ is for convenience. Note that $P_{12}(0,0)=P_{12}^{(0)}$, however $D_{12}(0,0)=0$ by definition and so it must act by zero. To avoid a contradiction at the origin a delta function is added so that
\begin{equation*}
D_{12}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^2}\frac{r^k}{k!}P^{(k)}_{12}-\delta_{r,0}P_{12}^{(0)},
\end{equation*}
which is now valid for all $r\in\mathbb{Z}^2$.
\end{proof}
\begin{lem}\label{Ngeq2case}
Let $N\geq2$ and $J=A_N\otimes U$ be a module in category $\mathcal{J}$. Then for $a,b\in\{1,\dots,N\}$, $a\neq b$, $D_{ab}(r)$ acts on $U$ by
\begin{equation*}
D_{ab}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P^{(k)}_{ab}
\end{equation*}
for $r\in\mathbb{Z}^N$ with $(r_a,r_b)\neq(0,0)$, where the $P^{(k)}_{ab}\in\text{\emph{End}}(U)$ do not depend on $r$, and the summation is finite.
\end{lem}
\begin{proof}
It follows from Proposition \ref{N=2case} that for any $a,b\in\{1,\dots,N\}$, $a\neq b$, the operators $D_{ab}(r_ae_a+r_be_b)$ act by polynomial in $r_a,r_b$, when $(r_a,r_b)\neq(0,0)$, since for these values the delta function vanishes.
The result will be proven by induction on $N$, with induction hypothesis that $D_{12}\left(r_1e_1+\dots+r_{N-1}e_{N-1}\right)$ acts by polynomial for $(r_1,r_2)\neq(0,0)$. For convenience this is stated for $D_{12}(r)$, though it holds for any $D_{ab}(r)$ by a change of indices. The base case $N=2$ of the induction follows from Proposition \ref{N=2case}.
Assume until otherwise stated that $r_i\neq0$ for all $i\in\{1,\dots,N\}$. Consider
\begin{multline*}
[D_{1N}(r_{N}e_{N}),D_{12}(r_1e_1+\dots+r_{N-1}e_{N-1})]\\=-r_1r_{N}D_{12}(r_1e_1+\dots+r_{N-1}e_{N-1})+r_1r_{N}D_{12}(r_1e_1+\dots+r_Ne_{N}).
\end{multline*}
Both $D_{1N}(r_{N}e_{N})$, and $D_{12}(r_1e_1+\dots+r_{N-1}e_{N-1})$ act by polynomial by the induction hypothesis. Rearrange to get
\begin{multline*}
r_1r_{N}D_{12}(r_1e_1+\dots+r_Ne_{N})\\
=[D_{1N}(r_{N}e_{N}),D_{12}(r_1e_1+\dots+r_{N-1}e_{N-1})]+r_1r_{N}D_{12}(r_1e_1+\dots+r_{N-1}e_{N-1}).\!
\end{multline*}
The right hand side is a polynomial in $r_1,\dots,r_{N}$ and thus $r_1r_{N}D_{12}(r)=P(r)$, for some $\text{End}(U)$-valued polynomial $P$, and $r=r_1e_1+\dots+r_Ne_{N}$. Symmetry in indices $1$ and $2$ yields that $r_2r_{N}D_{12}(r)=Q(r)$, for some $\text{End}(U)$-valued polynomial $Q$. Thus,
\begin{equation*}
r_2P(r)=r_1Q(r).
\end{equation*}
Unique factorization of a polynomial into irreducible factors implies that $P$ factors as $P(r)=r_1\bar{P}(r)$ and so $r_1r_{N}D_{12}(r)=r_1\bar{P}(r)$. Since $r_1\neq 0$, division of polynomials gives that $r_{N}D_{12}(r)=\bar{P}(r)$. Thus, $r_{N}D_{12}(r)$ acts by polynomial, or more generally, $r_aD_{bc}(r)$ acts by polynomial for $a\neq b,c$.
Fix $s_N\neq0$ and consider
\begin{align*}
&[D_{1N}(r_{1}e_{1}+s_Ne_N),D_{2N}(r_2e_2+r_3e_3+\dots+(r_N-s_N)e_N)]\\
&=r_{1}(r_N-s_N)D_{2N}(r_2e_2+r_3e_3+\dots+(r_N-s_N)e_N)\\
&\quad-s_Nr_2D_{1N}(r_{1}e_{1}+s_Ne_N)-r_{1}(r_N-s_N)D_{2N}(r_1e_1+\dots+r_{N}e_{N})\\
&\quad-s_N(r_N-s_N)D_{12}(r_1e_1+\dots+r_{N}e_{N})+s_Nr_2D_{1N}(r_1e_1+\dots+r_{N}e_{N}).
\end{align*}
Let $r=r_1e_1+\dots+r_{N}e_{N}$, and isolate the $D_{12}$ term to get
\begin{align*}
&s_N(r_N-s_N)D_{12}(r)=r_{1}(r_N-s_N)D_{2N}(r_2e_2+r_3e_3+\dots+(r_N-s_N)e_N)\\
&-[D_{1N}(r_{1}e_{1}+s_Ne_N),D_{2N}(r_2e_2+r_3e_3+\dots+(r_N-s_N)e_N)]\\
&-s_Nr_2D_{1N}(r_{1}e_{1}+s_Ne_N)-r_{1}(r_N-s_N)D_{2N}(r)+s_Nr_2D_{1N}(r).
\end{align*}
The first three terms on the right hand side act by polynomial by induction. The last two terms are of the form $r_aD_{bc}(r)$, as is $r_ND_{12}(r)$ on the left hand side, and hence these act as polynomials by the previous step. Since both left hand side $s_N(r_N-s_N)D_{12}(r)$, and $s_Nr_ND_{12}(r)$ act by polynomial, so does their difference $-s_N^2D_{12}(r)$. Because $s_N$ is a nonzero constant this implies that $D_{12}(r)$ acts by polynomial. Again by considering a change of indices, this proves that $D_{ab}(r)$ acts by polynomial on the region $\mathcal{R}_0=\bigcap_i\{r_i\neq0\}$. It remains to show that $D_{ab}(r)$ acts by polynomial for $(r_a,r_b)\neq(0,0)$.
Let $r,s\in\mathbb{Z}^N$ where $s$ is constant. Rearranging the bracket formula gives
\begin{equation*}
(s_ar_b-r_as_b)D_{ab}(r)=(s_ar_b-r_as_b)(D_{ab}(s)+D_{ab}(r-s))-[D_{ab}(s),D_{ab}(r-s)].
\end{equation*}
On the right hand side $D_{ab}(s)$ is constant in $r$ and, by what was just shown, $D_{ab}(r-s)$ acts by polynomial in the region $\mathcal{R}_{s}=\bigcap_i\{r_i\neq s_i\}$. Thus there is an $\text{End}(U)$-valued polynomial $T$ such that $(s_ar_b-r_as_b)D_{ab}(r)=T(r)$ for $r\in\mathcal{R}_s$. Similarly for $s'\in\mathbb{Z}^N$, there is a polynomial $T'$ such that $(s_a'r_b-r_as_b')D_{ab}(r)=T'(r)$ for $r\in\mathcal{R}_{s'}$. Then
\begin{equation*}
(s_a'r_b-r_as_b')T(r)=(s_ar_b-r_as_b)T'(r)
\end{equation*}
for $r\in\mathcal{R}_{s}\cap\mathcal{R}_{s'}$, which implies that $(s_ar_b-r_as_b)$ is an irreducible factor of $T(r)$. So $(s_ar_b-r_as_b)D_{ab}(r)=(s_ar_b-r_as_b)\bar{T}(r)$, for polynomial $\bar{T}$, and when $s_ar_b-r_as_b\neq0$, $D_{ab}(r)=\bar{T}(r)$. Thus $D_{ab}(r)$ acts by polynomial on the region $\mathcal{R}_s\cap\{s_ar_b\neq s_br_a\}$. The union of these regions is $\bigcup_s(\mathcal{R}_s\cap\{s_ar_b\neq s_br_a\})=\{(r_a,r_b)\neq(0,0)\}$. Since these regions are defined by deleting a finite number of hyperplanes from $\mathbb{Z}^N$, the intersection of any two contains a cube of arbitrary size. So any two polynomials that agree on the intersection must be equal. Therefore $D_{ab}(r)$ acts by an $\text{End}(U)$-valued polynomial $P_{ab}$ on the region $\{(r_a,r_b)\neq(0,0)\}$. The polynomial $P_{ab}$ can be decomposed into a finite sum in powers of $r$ so that
\begin{equation*}
D_{ab}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P_{ab}^{(k)}
\end{equation*}
for all $r\in\mathbb{Z}^N$ with $(r_a,r_b)\neq(0,0)$.
\end{proof}
\begin{prop}\label{Ngeq3case}
Let $N\geq3$ and $J=A_N\otimes U$ a module in category $\mathcal{J}$. Then for $a,b\in\{1,\dots,N\}$, $a\neq b$, $D_{ab}(r)$ acts on $U$ by
\begin{equation*}
D_{ab}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P^{(k)}_{ab}
\end{equation*}
for all $r\in\mathbb{Z}^N$, where the $P^{(k)}_{ab}\in\text{\emph{End}}(U)$ do not depend on $r$, and the summation is finite. In addition $P_{ab}^{(k)}=0$ when $k_a=k_b=0$.
\end{prop}
\begin{proof}
Since $D_{ab}\left(r\right)=0$ when $r_a=r_b=0$ by definition, it follows from Lemma \ref{Ngeq2case} that $D_{ab}(r)$ may be expressed as
\begin{equation*}
D_{ab}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P_{ab}^{(k)}-\delta_{r_a,0}\delta_{r_b,0}\sum_{\substack{k\in\mathbb{Z}_{\geq 0}^{N}\\k_a=k_b=0}}\frac{r^k}{k!}P_{ab}^{(k)}
\end{equation*}
which holds for all $r\in\mathbb{Z}^N$. Now substitute the expression above into the equation $r_cD_{ab}(r)+r_aD_{bc}(r)+r_bD_{ca}(r)=0$ on the region $r_i\neq0$ for all $i\in\{1,\dots,N\}$. Terms with delta functions vanish leaving
\begin{equation}\label{P_ab^krelationship}
r_c\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P_{ab}^{(k)}+r_a\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P_{bc}^{(k)}+r_b\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P_{ca}^{(k)}=0.
\end{equation}
Extracting the coefficient of $r_cr^k$ for $k\in\mathbb{Z}_{\geq 0}^N$ with $k_a=k_b=0$ yields that
\begin{equation*}
\frac{1}{k!}P_{ab}^{(k)}=0.
\end{equation*}
This shows that the terms with delta functions are not necessary, and therefore
\begin{equation*}
D_{ab}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P_{ab}^{(k)}
\end{equation*}
for all $r\in\mathbb{Z}^N$.
\end{proof}
\section{Classification}
Consider the Lie algebra of derivations of polynomials in $N$ variables,
\begin{equation*}
\text{Der}(\mathbb{C}[x_1,\dots,x_N])=\text{Span}_{\mathbb{C}}\left\{\left.x^k\frac{\partial}{\partial x_a}\right|a\in\{1,\dots,N\},k\in\mathbb{Z}_{\geq0}^N\right\},
\end{equation*}
and its subalgebra consisting of divergence zero elements,
\begin{equation*}
S_N=\text{Span}_{\mathbb{C}}\left\{\left.S_{ab}(k)\right|a,b\in\{1,\dots,N\},k\in\mathbb{Z}_{\geq0}^N\right\},
\end{equation*}
where $S_{ab}(k)=k_bx^{k-e_b}\frac{\partial}{\partial x_a}-k_ax^{k-e_a}\frac{\partial}{\partial x_b}$. Its Lie bracket is given by
\begin{multline*}
[S_{ab}(q),S_{cd}(k)]=q_bk_cS_{ad}(q+k-e_b-e_c)-q_bk_dS_{ac}(q+k-e_b-e_d)\\
-q_ak_cS_{bd}(q+k-e_a-e_c)+q_ak_dS_{bc}(q+k-e_a-e_d).
\end{multline*}
Note that $S_{ab}(e_a)=-\frac{\partial}{\partial x_b}$ and $S_{ab}(e_b)=\frac{\partial}{\partial x_a}$.
For an integer $n\geq-1$ let $\mathfrak{L}_n=\text{Span}_{\mathbb{C}}\left\{\left.S_{ab}(k)\right|a,b\in\{1,\dots,N\},|k|=n+2\right\}$ so that $S_N=\bigoplus_{i=-1}^{\infty}\mathfrak{L}_i$. The bracket above
shows that $[\mathfrak{L}_i,\mathfrak{L}_j]\subset \mathfrak{L}_{i+j}$.
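As a quick check, each $S_{ab}(k)$ does have divergence zero, since for $a\neq b$,
\begin{equation*}
\frac{\partial}{\partial x_a}\left(k_bx^{k-e_b}\right)+\frac{\partial}{\partial x_b}\left(-k_ax^{k-e_a}\right)=k_ak_bx^{k-e_a-e_b}-k_bk_ax^{k-e_a-e_b}=0,
\end{equation*}
and the grading is immediate from the bracket formula: if $|q|=i+2$ and $|k|=j+2$, then every term on the right hand side has the form $S_{**}(q+k-e_*-e_*)$ with $|q+k-e_*-e_*|=i+j+2$, and hence lies in $\mathfrak{L}_{i+j}$.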
\begin{lem}\label{slNmodules}
In the grading $S_N=\bigoplus_{i=-1}^{\infty}\mathfrak{L}_i$, component $\mathfrak{L}_0$ is isomorphic to $\mathfrak{sl}_N$ and each $\mathfrak{L}_i$ is an irreducible $\mathfrak{sl}_N$-module.
\end{lem}
\begin{proof}
To see that $\mathfrak{L}_0$ is isomorphic to $\mathfrak{sl}_N$, identify $x_a\frac{\partial}{\partial x_b}$ with $E_{ab}$ and $x_a\frac{\partial}{\partial x_a}-x_b\frac{\partial}{\partial x_b}$ with elements $E_{aa}-E_{bb}$ of the Cartan subalgebra. Each $\mathfrak{L}_i$ is an $\mathfrak{sl}_N$-module via the adjoint action of $\mathfrak{L}_0$.
By Weyl's Theorem on complete reducibility and the fact that every finite dimensional simple $\mathfrak{sl}_N$-module is a highest weight module, it suffices to show that each $\mathfrak{L}_i$ has a unique highest weight vector. In other words the goal is to show that for each $i$ there exists a unique (up to scalar) $v\in\mathfrak{L}_i$ such that $\left[x_a\frac{\partial}{\partial x_b},v\right]=0$ for all $a,b$ with $a<b$.
An arbitrary member of $\mathfrak{L}_n$ can be expressed as $\sum_{|m|=n} u_m$ with $u_m = \sum_{j=1}^{N}C_jx^{m+e_j}\frac{\partial}{\partial x_j}$, where $\sum_{j=1}^{N}C_j(m_j+1)=0$ since it has divergence zero. Since $\left[x_a\frac{\partial}{\partial x_a}-x_b\frac{\partial}{\partial x_b},u_m\right]=(m_a-m_b)u_m$ for all $a,b\in\{1,\dots,N\}$, weight vectors of $\mathfrak{L}_n$ must have the form $u_m$ for some fixed $m$.
Let $u_m$ be a highest weight vector for $\mathfrak{L}_n$. Since $x$ may only have nonnegative exponents, two cases arise; either $m_j=-1$ for a single index $j$ and $m_i\geq 0$ otherwise, or else all entries of $m$ are nonnegative. The former forces all coefficients $C_i$ to be zero except for $C_j$, and hence $u_m=Cx^{k}\frac{\partial}{\partial x_j}$ with $k_j=0$. In the latter $u_m=\sum_{j=1}^{N}C_jx^{m+e_j}\frac{\partial}{\partial x_j}$ with $m_j+1>0$ for each $j$.
Suppose $u_m=Cx^{k}\frac{\partial}{\partial x_j}$ with $k_j=0$. Then $\left[x_a\frac{\partial}{\partial x_b},u_m\right]=Ck_bx^{k+e_a-e_b}\frac{\partial}{\partial x_j}-\delta_{aj}Cx^{k}\frac{\partial}{\partial x_b}$. Since $1\leq a<b\leq N$ it follows that the only $u_m$ of this form annihilated by all raising operators $x_a\frac{\partial}{\partial x_b}$ is $u_m=C x_1^{n+1}\frac{\partial}{\partial x_N}$.
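Indeed, one can check directly that this vector is annihilated by every raising operator: for $1\leq a<b\leq N$,
\begin{equation*}
\left[x_a\frac{\partial}{\partial x_b},C x_1^{n+1}\frac{\partial}{\partial x_N}\right]=C(n+1)\delta_{b1}x_1^{n}x_a\frac{\partial}{\partial x_N}-C\delta_{aN}x_1^{n+1}\frac{\partial}{\partial x_b}=0,
\end{equation*}
since $b\geq2$ forces $\delta_{b1}=0$ and $a<b\leq N$ forces $\delta_{aN}=0$.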
It remains to show that no vectors of the form $u_m=\sum_{j=1}^{N}C_jx^{m+e_j}\frac{\partial}{\partial x_j}$ with $m_j+1>0$ for each $j$, are highest weight vectors. Suppose for $a<b$ that $\left[x_a\frac{\partial}{\partial x_b},\sum_{j=1}^{N}C_jx^{m+e_j}\frac{\partial}{\partial x_j}\right]=0$. The coefficient of $\frac{\partial}{\partial x_b}$ in this bracket is $(C_b(m_b+1)-C_a)x^{m+e_a}=0$. Letting $b=N$ and varying $a$ shows that $C_a=C_N(m_N+1)$ for $a=1,\dots,N-1$. Plugging this into the expression for the divergence of $u_m$ gives
\begin{equation*}
\sum_{j=1}^{N-1}C_N(m_N+1)(m_j+1)+C_N(m_N+1)=C_N(m_N+1)\left(\sum_{j=1}^{N-1}(m_j+1)+1\right),
\end{equation*}
which is zero only when $C_N=0$ since each $(m_j+1)$ was assumed to be positive. Then $C_N=0$ implies $C_a=0$ for $a=1,\dots,N-1$, and thus $u_m=0$.
\end{proof}
The action of the $P_{ab}^{(k)}$ with $|k|>1$ will be shown to be a representation of the subalgebra
\begin{equation*}
S_N^+=\text{Span}_{\mathbb{C}}\left\{\left.S_{ab}(k)\right|a,b\in\{1,\dots,N\},k\in\mathbb{Z}_{\geq0}^N,|k|>1\right\}.
\end{equation*}
\begin{prop}\label{Prepresentation}
The map $\rho(S_{ab}(k))=P_{ab}^{(k)}\in\text{\emph{End}}(U)$ for $|k|>1$ is a finite dimensional representation of $S_N^+$ on $U$ for $N\geq2$.
\end{prop}
\begin{proof}
Using Lemma \ref{Ngeq2case}, the Lie bracket of $D_{ab}(r)$ with $D_{cd}(s)$, for $r,s\neq0$, may be expressed as
\begin{equation*}
[D_{ab}(r),D_{cd}(s)]=\sum_{j,k\in\mathbb{Z}_{\geq 0}^N}\frac{r^js^k}{j!k!}[P_{ab}^{(j)},P_{cd}^{(k)}],
\end{equation*}
where the left hand side may be computed
\begin{multline}\label{Pbracket}
(r_as_b-r_bs_a)D_{cd}(s)+(r_cs_d-r_ds_c)D_{ab}(r)+r_bs_cD_{ad}(r+s)\\-r_bs_dD_{ac}(r+s)
-r_as_cD_{bd}(r+s)+r_as_dD_{bc}(r+s)\\
=(r_as_b-r_bs_a)\!\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{s^k}{k!}P_{cd}^{(k)}+(r_cs_d-r_ds_c)\!\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P_{ab}^{(k)}+r_bs_c\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{(r+s)^k}{k!}P_{ad}^{(k)}\\
-r_bs_d\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{(r+s)^k}{k!}P_{ac}^{(k)}-r_as_c\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{(r+s)^k}{k!}P_{bd}^{(k)}+r_as_d\sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{(r+s)^k}{k!}P_{bc}^{(k)}.
\end{multline}
Thus $[P_{ab}^{(j)},P_{cd}^{(k)}]$ is obtained by extracting the coefficient of $\frac{r^js^k}{j!k!}$ in the expression above. Then for any $j,k\in\mathbb{Z}_{\geq 0}^N$ with $|j|,|k|>1$, the bracket is given by
\begin{multline}\label{Pracket:j,k>1}
[P_{ab}^{(j)},P_{cd}^{(k)}]=\\
j_bk_cP_{ad}^{(j+k-e_b-e_c)}-j_bk_dP_{ac}^{(j+k-e_b-e_d)}-j_ak_cP_{bd}^{(j+k-e_a-e_c)}+j_ak_dP_{bc}^{(j+k-e_a-e_d)}.
\end{multline}
Note that this expression differs when either $|j|\leq1$ or $|k|\leq1$. The equation above shows that $\rho(S_{ab}(k))=P_{ab}^{(k)}$ preserves the Lie bracket of $S_N^+$ and is therefore a finite dimensional representation on $U$.
\end{proof}
Since $D_{ab}(r)=-D_{ba}(r)$ it follows that $P_{ab}^{(k)}=-P_{ba}^{(k)}$ for any $k\in\mathbb{Z}_{\geq0}^N$. A linear relationship for the $P_{ab}^{(k)}$ is seen in (\ref{P_ab^krelationship}), and extracting the coefficient on $r^k$ with $k=e_b+e_c$ gives that $P_{ab}^{(e_b)}=P_{ac}^{(e_c)}$. For $N\geq3$, $P_{ab}^{(0)}=0$ and $P_{ab}^{(e_i)}=0$ for $i\neq a,b$.
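In more detail, for pairwise distinct indices $a,b,c$ the coefficient of $r^{e_b+e_c}$ in (\ref{P_ab^krelationship}) receives contributions only from the first and third sums (the middle sum would require the monomial $r^{e_b+e_c-e_a}$), giving
\begin{equation*}
P_{ab}^{(e_b)}+P_{ca}^{(e_c)}=0,\qquad\text{so that}\qquad P_{ab}^{(e_b)}=-P_{ca}^{(e_c)}=P_{ac}^{(e_c)}.
\end{equation*}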
Consider the Lie algebra spanned by $\left\{\left.P_{ab}^{(k)}\right|a,b\in\{1,\dots,N\},k\in\mathbb{Z}_{\geq0}^N\right\}$. As was noted above, the expression in (\ref{Pracket:j,k>1}) is valid only when $|j|,|k|>1$. The remaining brackets are obtained by extracting the coefficient of $\frac{r^js^k}{j!k!}$ in (\ref{Pbracket}) for appropriate values of $j$ and $k$. Doing so yields that $[P_{ab}^{(j)},P_{cd}^{(k)}]=0$ when either $j$ or $k$ is zero. If $|j|>1,|k|=1$ or $|j|=1,|k|>1$ the terms on the right hand side of (\ref{Pbracket}) vanish using the relationship in (\ref{P_ab^krelationship}) when $N\geq3$ or cancel directly when $N=2$. When both $|j|=1$ and $|k|=1$ the right hand side of (\ref{Pbracket}) has only terms $P_{ab}^{(0)}$ (for some $a$ and $b$). In the case $N=2$, $[P_{ab}^{(e_a)},P_{ab}^{(e_b)}]=P_{ab}^{(0)}$, and $[P_{ab}^{(e_a)},P_{ab}^{(e_a)}]=0$, however when $N\geq3$, $P_{ab}^{(0)}=0$, and so $[P_{ab}^{(j)},P_{cd}^{(k)}]=0$ when $|j|=1$ and $|k|=1$.
Thus for $N\geq3$ the subset of elements $P_{ab}^{(e_i)}$ spans an abelian algebra with generators $\left\{\left.P_{12}^{(e_2)},P_{i1}^{(e_1)}\right|i=2,\dots,N\right\}$, and in the case $N=2$ there is a Heisenberg algebra spanned by $\left\{P_{12}^{(e_1)},P_{12}^{(e_2)},P_{12}^{(0)}\right\}$.
Let $\mathfrak{a}_N=\text{Span}_{\mathbb{C}}\{C_i|i=1,\dots,N\}$ be an $N$ dimensional abelian Lie algebra, and $\mathcal{H}=\text{Span}_{\mathbb{C}}\{X,Y,Z\}$ a three dimensional Heisenberg algebra with central element $Z=[X,Y]$. For $N\geq3$ the map $\rho(C_a)=P_{ab}^{(e_b)}$ is a finite dimensional representation of $\mathfrak{a}_N$ on $U$. When $N=2$ the map $\rho(X)=P_{12}^{(e_2)},\rho(Y)=P_{21}^{(e_1)}$, and $\rho(Z)=P_{12}^{(0)}$ is a finite dimensional representation of $\mathcal{H}$ on $U$. The following theorem considers Lie algebras $S_2^+\oplus\mathcal{H}$ and $S_N^+\oplus\mathfrak{a}_N$. In either case the bracket of $\mathcal{H}$ or $\mathfrak{a}_N$ with $S_N^+$ is zero.
Since $[\rho(S_{ab}(e_a+e_b)),\rho(S_{ab}(ne_a))]=n\rho(S_{ab}(ne_a))$ for $n\geq0$, Lemma \ref{finiteeigenvals} implies that for some $k_0\geq0$, $\rho(S_{ab}(ke_a))$ acts as zero on $U$ for all $k\geq k_0$. The irreducibility in Lemma \ref{slNmodules} ensures that all of $\mathfrak{L}_k$ acts as zero. So for some $k_0\geq0$, $\mathfrak{L}_k$ acts trivially on $U$ for all $k\geq k_0$.
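The displayed eigenvalue relation can be verified directly from the bracket formula for $S_N$ with $q=e_a+e_b$, $c=a$, $d=b$ and $k=ne_a$: the terms carrying a factor $k_b$ vanish since $k_b=0$, the remaining term other than the first involves $S_{bb}=0$, and the first term gives
\begin{equation*}
[S_{ab}(e_a+e_b),S_{ab}(ne_a)]=q_bk_aS_{ab}(q+k-e_b-e_a)=nS_{ab}(ne_a).
\end{equation*}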
\begin{thm}\label{classification}
Let $\lambda\in\mathbb{C}^N$ and let $\mathcal{J}_{\lambda}$ be a subcategory of modules in $\mathcal{J}$ supported on $\lambda+\mathbb{Z}^N$.
\begin{enumerate}
\item[(a)]
For $N=2$ there is an equivalence of categories between the category of finite dimensional modules for $S_2^+\oplus\mathcal{H}$ and $\mathcal{J}_{\lambda}$. This equivalence maps $U$ to $A_2\otimes U$ where $U$ is a finite dimensional module for $S_2^+\oplus\mathcal{H}$. The action of $\mathcal{S}_2$ on $A_2\otimes U$ is given by $d_a(t^{s}\otimes u)=(s_a+\lambda_a)t^s\otimes u$ and for $r\neq0$,
\begin{multline}\label{N=2action}
d_{12}(r)(t^{s}\otimes u)=\left(r_2s_1-r_1s_2\right)t^{r+s}\otimes u\\+t^{r+s}\otimes\sum_{\substack{k\in\mathbb{Z}_{\geq 0}^2\\|k|>1}}\frac{r^k}{k!}\rho(S_{12}(k))u+t^{r+s}\otimes \left(r_2\rho(X)-r_1\rho(Y)+\rho(Z)\right)u.
\end{multline}
\item[(b)]
For $N\geq 3$, there is an equivalence of categories between the category of finite dimensional modules for $S_N^+\oplus\mathfrak{a}_N$ and $\mathcal{J}_{\lambda}$. This equivalence maps $U$ to $A_N\otimes U$ where $U$ is a finite dimensional module for $S_N^+\oplus\mathfrak{a}_N$. The action of $\mathcal{S}_N$ on $A_N\otimes U$ is given by $d_a(t^{s}\otimes u)=(s_a+\lambda_a)t^s\otimes u$ and
\begin{multline}\label{Ngeq3action}
d_{ab}(r)(t^{s}\otimes u)=\left(r_bs_a-r_as_b\right)t^{r+s}\otimes u\\+t^{r+s}\otimes\sum_{\substack{k\in\mathbb{Z}_{\geq 0}^N\\|k|>1}}\frac{r^k}{k!}\rho(S_{ab}(k))u+t^{r+s}\otimes \left(r_b\rho(C_a)-r_a\rho(C_b)\right)u.
\end{multline}
\end{enumerate}
\end{thm}
\begin{proof}
Let $J$ be a module in $\mathcal{J}_\lambda$. As was noted at the end of Section 2, the module $J$ may be identified
with $A_N\otimes U$ where $U$ is the weight space $J_\lambda$. Then (J1) with (J3) yields $d_a(t^{s}\otimes u)=(s_a+\lambda_a)t^s\otimes u$ for $u\in U$.
Section 3 showed that the action of $d_{ab}(r)\in\mathcal{S}_N$ on $J$ is determined by its restriction to $U$ and is given by an $\text{End}(U)$-valued polynomial in $r$. When $r\neq0$,
\begin{equation*}
d_{ab}(r)(t^s\otimes u)=(r_bs_a-s_br_a)t^{r+s}\otimes u + t^{r+s}\otimes \sum_{k\in\mathbb{Z}_{\geq 0}^N}\frac{r^k}{k!}P^{(k)}_{ab}u,
\end{equation*}
and for $r=0$, $d_{ab}(r)=0$ and thus acts trivially. Proposition \ref{Prepresentation} and the remarks that follow show that $U$ is a finite dimensional $S_2^+\oplus\mathcal{H}$-module when $N=2$ and $U$ is a finite dimensional $S_N^+\oplus\mathfrak{a}_N$-module when $N\geq3$. The actions in (\ref{N=2action}) and (\ref{Ngeq3action}) follow.
Conversely let $U$ be a finite dimensional module for $S_2^+\oplus\mathcal{H}$. Identify the elements of $S_2^+\oplus\mathcal{H}$ with the $P_{12}^{(k)}$ as above and let
\begin{equation*}
D_{12}(r)=\sum_{k\in\mathbb{Z}_{\geq 0}^2}\frac{r^k}{k!}P_{12}^{(k)}.
\end{equation*}
This sum is finite due to the discussion just before the theorem. The Lie bracket of $S_2^+\oplus\mathcal{H}$ yields the commutator relations for the $D_{12}(r)$ operators via equation (\ref{Pbracket}). The Lie bracket of the $D_{12}(r)$ along with the action of $d_{12}(r)$ given in (\ref{SNactiononJ}) recovers the commutator relations in $\mathcal{S}_2$. Thus $A_2\otimes U$ is an $\mathcal{S}_2$-module.
The fact that a finite dimensional $S_N^+\oplus\mathfrak{a}_N$-module $U$ yields a finite dimensional module $A_N\otimes U$ for $\mathcal{S}_N$ follows in a similar fashion for the $N\geq 3$ case.
\end{proof}
\section{Irreducible Tensor Modules}
This section considers simple modules from category $\mathcal{J}$. Note that in a finite dimensional irreducible representation of the Heisenberg algebra, the central element must act by zero. Hence $P_{12}^{(0)}=0$ in the case $N=2$ and so the Heisenberg algebra $\mathcal{H}$ used above gets replaced with the two dimensional abelian algebra $\mathfrak{a}_2$. A further simplification found in irreducible modules is that the action of $S_N^+$ becomes the action of $\mathfrak{sl}_N$, its degree zero component from the grading in Lemma \ref{slNmodules}. The following will be used to show this (cf. \cite{CS} Lemma 2.4 and \cite{FH} Lemma 9.13).
\begin{lem}\label{trivialaction}
Let $\mathfrak{g}$ be a finite dimensional Lie algebra over $\mathbb{C}$ with solvable radical $\text{\emph{Rad}}(\mathfrak{g})$. Then $[\mathfrak{g},\text{\emph{Rad}}(\mathfrak{g})]$ acts trivially on any finite dimensional irreducible $\mathfrak{g}$-module.
\end{lem}
As noted above, there exists $k_0$ such that $\mathfrak{L}_k$ acts as zero for all $k\geq k_0$, and so the ideal $I=\bigoplus_{k\geq k_0}\mathfrak{L}_k$ must also act trivially. To apply Lemma \ref{trivialaction} consider the finite dimensional Lie algebra $\mathfrak{g}=S_N^+/I\oplus\mathfrak{a}_N$ and its action on $U$. Since $[\mathfrak{L}_n,\mathfrak{L}_m]\subset\mathfrak{L}_{n+m}$ it follows that $\text{Rad}(\mathfrak{g})=\left(\bigoplus_{n>0}\mathfrak{L}_n\right)/I\oplus\mathfrak{a}_N$ and hence $[\mathfrak{g},\text{Rad}(\mathfrak{g})]=\left(\bigoplus_{n>0}\mathfrak{L}_n\right)/I$ acts trivially. Therefore the ideal $\bigoplus_{n>0}\mathfrak{L}_n$ of $S_N^+\oplus\mathfrak{a}_N$ acts trivially on a simple module from category $\mathcal{J}$.
\begin{thm}
Let $\lambda\in\mathbb{C}^N$ and let $\mathcal{J}_{\lambda}$ be a subcategory of modules in $\mathcal{J}$ supported on $\lambda+\mathbb{Z}^N$. For $N\geq 2$ there is a one-to-one correspondence between the finite dimensional irreducible modules for $\mathfrak{sl}_N(\mathbb{C})\oplus\mathfrak{a}_N$ and the irreducible modules in $\mathcal{J}_{\lambda}$. This correspondence maps a finite dimensional irreducible module $V$ for $\mathfrak{sl}_N(\mathbb{C})\oplus\mathfrak{a}_N$ to $A_N\otimes V$. The action of $\mathcal{S}_N$ on $A_N\otimes V$ is given by $d_a(t^{s}\otimes u)=(s_a+\lambda_a)t^s\otimes u$ and
\begin{multline*}
d_{ab}(r)(t^{s}\otimes u)=\left(r_b(s_a+\mu_a)-r_a(s_b+\mu_b)\right)t^{r+s}\otimes u\\+t^{r+s}\otimes\left(\sum_{\substack{i=1\\i\neq a}}^Nr_ir_b\varphi(E_{ia})-\sum_{\substack{i=1\\i\neq b}}^Nr_ir_a\varphi(E_{ib})+r_ar_b\varphi(E_{aa}-E_{bb})\right)u,
\end{multline*}
where $\mu_a,\mu_b\in\mathbb{C}$ are the action of $C_a,C_b\in\mathfrak{a}_N$, and $\varphi$ is a representation of $\mathfrak{sl}_N(\mathbb{C})$.
\end{thm}
\begin{proof}
The correspondence is given in Theorem \ref{classification}. By Lemma \ref{trivialaction} and the discussion above, the ideal $I=
\text{Span}\left\{\left.S_{ab}(j)\right|a,b\in\{1,\dots,N\},|j|>2\right\}$ acts trivially on $V$. Then $S_N^+/I\cong\mathfrak{sl}_N$ and so the action of $\mathfrak{L}_0$ in (\ref{Ngeq3action}) is represented by elements of $\mathfrak{sl}_N$ as seen in Lemma \ref{slNmodules}. By Schur's Lemma, elements of $\mathfrak{a}_N$ act by scalars and so $\rho(C_a)$ and $\rho(C_b)$ become $\mu_a$ and $\mu_b$ respectively in (\ref{Ngeq3action}).
\end{proof}
\section{Acknowledgements}
The first author gratefully acknowledges funding from the Natural Sciences and Engineering Research Council of Canada.
| {
"timestamp": "2016-10-12T02:00:35",
"yymm": "1607",
"arxiv_id": "1607.07067",
"language": "en",
"url": "https://arxiv.org/abs/1607.07067",
"abstract": "We consider a category of modules that admit compatible actions of the commutative algebra of Laurent polynomials and the Lie algebra of divergence zero vector fields on a torus and have a weight decomposition with finite dimensional weight spaces. We classify indecomposable and irreducible modules in this category.",
"subjects": "Representation Theory (math.RT)",
"title": "Classification of Category $\\mathcal{J}$ Modules for Divergence Zero Vector Fields on a Torus",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620066026709,
"lm_q2_score": 0.7154240018510025,
"lm_q1q2_score": 0.7097449708480185
} |
https://arxiv.org/abs/2107.05670 | A rainbow connectivity threshold for random graph families | Given a family $\mathcal G$ of graphs on a common vertex set $X$, we say that $\mathcal G$ is rainbow connected if for every vertex pair $u,v \in X$, there exists a path from $u$ to $v$ that uses at most one edge from each graph in $\mathcal G$. We consider the case that $\mathcal G$ contains $s$ graphs, each sampled randomly from $G(n,p)$, with $n = |X|$ and $p = \frac{c \log n}{sn}$, where $c > 1$ is a constant. We show that when $s$ is sufficiently large, $\mathcal G$ is a.a.s. rainbow connected, and when $s$ is sufficiently small, $\mathcal G$ is a.a.s. not rainbow connected. We also calculate a threshold of $s$ for the rainbow connectivity of $\mathcal G$, and we show that this threshold is concentrated on at most three values, which are larger than the diameter of the union of $\mathcal G$ by about $\frac{\log n}{(\log \log n)^2}$. The same results also hold in a more traditional random rainbow setting, where we take a random graph $G\in G(n,p)$ with $p=\frac{c \log n}{n}$ ($c>1$) and color each edge of $G$ with a color chosen uniformly at random from the set $[s]$ of $s$ colors. | \section{Introduction}
In this paper, we consider random graphs using the \emph{Erd\H{o}s-R\'enyi model}, which are defined as follows. For a positive integer $n$, we consider a set $X$ of $n$ vertices. Then, for some value $0 \leq p \leq 1$, we construct a graph $G$ on $X$ by independently letting each edge $e \in \binom{X}{2}$ belong to $E(G)$ with probability $p$. We say that $G$ is a \emph{random graph} in $G(n,p)$. When a statement involving a value $n$ holds with probability approaching $1$ as $n$ approaches infinity, we say that the statement holds \emph{asymptotically almost surely}, or \emph{a.a.s.}~for short.
\subsection{Background}
One particular property of random graphs that has been the focus of extensive research is the diameter. Recall that the diameter $\diam(G)$ of a graph $G$ is defined as the maximum distance $\dist(u,v)$ taken over all vertex pairs $u,v$ in the graph.
Random graphs are often used as theoretical models for complex networks \cite{Network1} \cite{Network2},
in which case the diameter of a random graph represents the maximum degree of separation between any two nodes in a network. Therefore, the diameter of random graphs is often studied in order to gain a better understanding of the connections between elements of real systems.
In a seminal paper on random graphs from 1959, Erd\H{o}s and R\'enyi \cite{Erdos1959} showed that if a graph $G$ is randomly sampled from $G(n,p)$ with $p = \frac{c \log n}{n}$ and $c$ a constant, then $G$ is a.a.s.~connected when $c > 1$ and a.a.s.~disconnected when $c < 1$. This result of Erd\H{o}s and R\'enyi was essentially the first result on the diameter of random graphs,
giving a probability threshold for when the diameter of a random graph is finite. Later, in 1974, Burtin \cite{Burtin} determined that when $p \gg n^{-\frac{d-1}{d}}$ for a positive integer $d$, a random graph sampled from $G(n,p)$ a.a.s.~has a diameter of at most $d$, and Klee and Larman \cite{Klee} rediscovered this result independently in 1981. Bollob\'as \cite{Bollobas84} then showed in 1984 that when
a graph $G$ on $n$ vertices has $\frac{c \log n}{n} \binom{n}{2}$ randomly placed edges (where $c > 1$ is a constant), the graph $G$ has a diameter that is a.a.s.~equal to one of at most four consecutive integer values. Chung and Lu \cite{Chung} later translated this result of Bollob\'as into the random setting $G(n,p)$, giving the following bounds:
\begin{theorem} \cite{Chung}
\label{thmChung}
Let $G$ be a random graph in $G(n,p)$, where $p = \frac{c \log n}{n}$ and $c > 1$. Then a.a.s.,
$$\frac{\log \left( \frac{c}{11} \right) + \log n}{\log c + \log \log n} \leq \diam(G) \leq \frac{\log \left( \frac{33c^2}{400} \right) + \log \log n + \log n}{\log c + \log \log n} + 2.$$
\end{theorem}
In other words, Chung and Lu show that the diameter of $G$ is a.a.s.~one of at most four consecutive integer values, each within a constant from $\frac{\log n}{\log c + \log \log n}$.
In seeking the diameter of a random graph $G$, one essentially asks the following question: For which values of $s$ does there a.a.s.~exist a path of length at most $s$ between every pair of vertices in $G$?
In this paper, we ask a similar question in the following \emph{rainbow} setting.
We consider a family $\mathcal G = \{G_1, \dots, G_s\}$ of $s$ graphs on a common vertex set $X$ of size $n$. We say that a path $P \subseteq \bigcup_{i = 1}^s E(G_i)$ is a \emph{rainbow path} if there exists an injection $\phi:E(P) \rightarrow [s]$ such that for each edge $e \in E(P)$, $e \in E(G_{\phi(e)})$.
For two vertices $u,v \in X$, we say that $u$ and $v$ are \emph{rainbow connected} if there exists a rainbow path with $u$ and $v$ as endpoints. Furthermore, we say that $\mathcal G$ is \emph{rainbow connected} if every pair $(u,v) \in \binom{X}{2}$ is rainbow connected.
If we let each graph $G_i \in \mathcal G$ have its edges colored with the color $i$, then we may equivalently define a rainbow path as a path that uses at most one edge of each color. For a given family of random graphs, we will ask, for which values of $s$ does there a.a.s.~exist a rainbow path of length at most $s$ between every pair of vertices in $X$? Equivalently, we ask, for which values of $s$ is $\mathcal G$ a.a.s.~rainbow connected?
The notion of rainbow connectivity was first introduced by Chartrand et al.~\cite{Chartrand} as a theoretical tool for studying communication in secure networks. They defined the \emph{rainbow connection number} of a graph $G$ as the minimum number of colors needed in order to give $G$ an edge-coloring with which $G$ is rainbow connected.
The rainbow connection number is well-understood for certain graph classes including trees, cycles \cite{Chartrand}, and Cayley graphs on abelian groups \cite{LiRainbow}. For random graphs $G$ sampled from $G(n,p)$ with $p = \frac{\log n + \omega}{n}$ and $\omega = o(\log n)$ an unbounded increasing function, Frieze and Tsourakakis \cite{Frieze} showed that the rainbow connection number of $G$ asymptotically approaches the diameter of $G$.
Rainbow paths are an example of rainbow graphs, which are defined as edge-colored graphs in which each edge has a unique color. Depending on the setting, rainbow graphs are also often referred to either as transversals or partial transversals of a graph family. Recently, rainbow graph structures have received increasing attention, and several classical results have been extended into the rainbow setting. For instance, a famous theorem of Dirac \cite{Dirac} states that a graph on $n$ vertices with minimum degree at least $n/2$ must contain a Hamiltonian cycle. Joos and Kim \cite{Joos} have generalized Dirac's result to show that given a family $\mathcal G = \{G_1, \dots, G_n\}$ of $n$ graphs on a common set $X$ of $n$ vertices, each of minimum degree at least $n/2$, where the edges of each graph $G_i$ are monochromatically colored with the color $i$, there must exist a Hamiltonian cycle on $X$ using exactly one edge of each color. In the same flavor, a classic result of Moon and Moser \cite{Moon} gives a minimum degree condition for the existence of a Hamiltonian cycle in a bipartite graph, and one of the authors of this paper has recently shown a similar generalization of this result into the rainbow setting \cite{PBbipartite}. In addition to rainbow Hamiltonian cycles, certain other rainbow structures have been shown to exist under appropriate conditions. For instance, Aharoni et al.~\cite{Aharoni} obtained a rainbow version of Mantel's theorem, proving that given a family $\mathcal G$ of three graphs on a common set of $n$ vertices, if each graph in $\mathcal G$ contains at least $0.2557n^2$ edges, then there exists a rainbow triangle---that is, a triangle that uses exactly one edge from each graph of $\mathcal G$.
Above, we introduced rainbow connectivity in a graph family $\mathcal G$ that is the union of many individual random graphs on a common vertex set, each with monochromatically colored edges of a distinct color. There is also a more traditional setting for discussing randomly edge-colored graphs, in which a single graph $G$ is constructed using some random process, and then each edge of $G$ is randomly given a color. For example, Frieze and McKay \cite{FriezeMcKay} show that when $G$ is constructed by randomly adding edges one at a time and giving each new edge one of $n-1$ colors uniformly at random, the time at which $G$ first contains edges of every color almost surely coincides with the time at which $G$ first contains a rainbow spanning tree. Additionally, Ferber and Krivelevich \cite{Ferber} show that for a graph $G$ randomly sampled from $G(n,p)$ with $p = \frac{\log n + \log \log n + \omega}{n}$, with $\omega$ an unbounded increasing function, if each edge of $G$ randomly receives one of $(1+o(1))n$ colors, then $G$ a.a.s.~contains a rainbow Hamiltonian cycle.
\subsection{Our results}
First, we fix some notation that we will use throughout the paper. We let $X$ be a set of $n$ vertices, where $n$ is a large integer. We pick an integer $s \geq 1$, depending on $n$, and we let $c > 1$ be a fixed constant. We let
$$p = \frac{c \log n}{sn}.$$
Then, for $1 \leq i \leq s$, we take a random graph $G_i \in G(n,p)$, and we let $\mathcal G= \{G_1, \dots, G_s \}$. We often refer to the values $1, \dots, s$ as \emph{colors}, and we will
imagine
that each graph $G_i$ has its edges colored with color $i$.
In this setting, it is straightforward to show that a given pair $e \in \binom{X}{2}$ forms an edge of at least one graph $G_i \in \mathcal G$ with probability $(c - o(1))\frac{\log n}{n}$, and therefore, by the threshold of Erd\H{o}s and R\'enyi \cite{Erdos1959}, $\bigcup_{i = 1}^s G_i$ is a.a.s.~connected.
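The model itself is easy to simulate. The following Python sketch (ours, not part of the paper; the choices $n=500$, $c=2$ and the random seed are arbitrary) samples such a family, reports the fraction of pairs covered by at least one graph---which should be close to $(c-o(1))\frac{\log n}{n}$---and checks whether the union is connected.
\begin{verbatim}
import math, random
from collections import deque

def sample_family(n, s, p, rng):
    return [{frozenset((u, v))
             for u in range(n) for v in range(u + 1, n)
             if rng.random() < p}
            for _ in range(s)]

def union_is_connected(n, family):
    adj = [set() for _ in range(n)]
    for edges in family:
        for e in edges:
            u, v = tuple(e)
            adj[u].add(v)
            adj[v].add(u)
    seen, queue = {0}, deque([0])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return len(seen) == n

rng = random.Random(0)
n, c = 500, 2.0
s = round(math.log(n) / math.log(math.log(n)))  # s near log n / log log n
p = c * math.log(n) / (s * n)
family = sample_family(n, s, p, rng)
covered = len(set().union(*family))             # pairs covered by some G_i
total = n * (n - 1) // 2
print("fraction of covered pairs:", covered / total)
print("target  c*log(n)/n       :", c * math.log(n) / n)
print("union connected          :", union_is_connected(n, family))
\end{verbatim}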
In the following two main results, we determine a threshold for the number $s$ of graphs required in $\mathcal G$ to ensure rainbow connectivity.
\begin{theorem}
\label{thmUB}
Let $\mathcal G = \{G_1, \dots, G_s\}$ be a family of $s$ graphs on a common set of $n$ vertices, each taken randomly from $G(n,p)$, with $p = \frac{c \log n}{sn}$, where $c > 1$ is a constant.
If $$s \leq \frac{\log n}{\log c - 1 + \log \log n} - \frac{1}{2} + \frac{\log \log \log n}{3 \log \log n},$$
then a.a.s.~$\mathcal G$ is not rainbow connected.
\end{theorem}
\begin{theorem}
\label{thmEasyLB}
Let $\mathcal G = \{G_1, \dots, G_s\}$ be a family of $s$ graphs on a common set of $n$ vertices, each taken randomly from $G(n,p)$, with $p = \frac{c \log n}{sn}$, where $c > 1$ is a constant.
If
$$s \geq \frac{\log n}{\log c - 1 + \log \log n} + \frac{3}{2} + \frac{2 \sqrt{ \log \log \log n }}{ {\log \log n}} ,$$
then a.a.s.~$\mathcal G$ is rainbow connected.
\end{theorem}
The minimum value $s$ that guarantees that the $s$ graphs in $\mathcal G$ a.a.s.~make a rainbow connected family is called the \emph{rainbow connectivity threshold}.
Together, Theorems \ref{thmUB} and \ref{thmEasyLB} show that this threshold is concentrated on at most three consecutive integer values
in the vicinity of $ \frac{\log n}{\log c - 1 + \log \log n}$: two integer values that may fit between the two bounds, along with the smallest integer greater than or equal to the bound of Theorem \ref{thmEasyLB}.
\begin{corollary}
The rainbow connectivity threshold is a.a.s.~equal to one of the three values, $s_0,\allowbreak s_0+1,s_0+2$, where $s_0 = \bigl\lfloor \frac{\log n}{\log c - 1 + \log \log n} + \frac{1}{2} + \frac{\log \log \log n}{3 \log \log n} \bigr\rfloor$.
\end{corollary}
It is simple to observe that in order for $\mathcal G$ to be rainbow connected, $s$ must be at least as large as the diameter of $\bigcup_{i = 1}^s G_i$. However, by comparing the threshold found in Theorems \ref{thmUB} and \ref{thmEasyLB} with the result of Theorem \ref{thmChung}, we see that in fact letting $s$ equal the diameter of $\bigcup_{i = 1}^s G_i$ is not enough to ensure that $\mathcal G$ is a.a.s.~rainbow connected.
Indeed, according to Theorem \ref{thmChung}, the diameter of $\bigcup_{i = 1}^s G_i$ is a.a.s.\ close to $\frac{\log n}{\log c + \log \log n}$, which is slightly smaller (by about $\frac{\log n}{(\log \log n)^2}$) than our rainbow connectivity threshold.
Finally, in our concluding section we discuss rainbow connectivity in the more traditional random rainbow setting, where we take a random graph $G\in G(n,p)$ with $p=\frac{c \log n}{n}$ ($c>1$) and color each edge of $G$ with a color chosen uniformly at random from the set $[s]$ of $s$ colors. In Section \ref{sec:differentSetting}, we show that results similar to Theorems \ref{thmUB} and \ref{thmEasyLB} also hold in this alternative type of random setting. See Theorems \ref{thm:newUB} and \ref{thm:newLB}.
\section{Proof of Theorem \ref{thmUB}}
We start by proving the simpler one of the two main results.
\begin{proof}[Proof of Theorem \ref{thmUB}]
We show that for an arbitrary vertex pair $u,v \in X$ ($u\ne v$), there a.a.s.~exists no rainbow path from $u$ to $v$. We will use the First Moment Method (cf. \cite[Chapter 3]{MolloyReed}).
Let $u,v \in X$ be a vertex pair, and let $1 \leq t \leq s$. The total number of possible rainbow-colored paths of length $t$ from $u$ to $v$ is less than $n^{t-1} s!$, since there are fewer than $n^{t-1}$ choices for the internal vertices and at most $s!$ ways to assign distinct colors to the $t$ edges, and the probability that any fixed such colored path is present in $\mathcal G$ is equal to $p^t$. Therefore, the expected number of rainbow paths of length $t$ from $u$ to $v$ is at most $n^{t-1} s! \,p^t$. Now, using Stirling's approximation, we may estimate that the expected number of rainbow paths from $u$ to $v$ of length $t$ is at most
\begin{eqnarray*}
\frac{(pn)^t}{n}\, s! &<& \frac{(pn)^t}{n}\cdot s^{s + \frac{1}{2}} e^{-s+1} \\
& = & \frac{1}{n} \left( \frac{psn}{e} \right)^t s^{s-t+\frac{1}{2}} e^{t-s+1} \\
& = & \frac{1}{n} \left( \frac{c \log n}{e} \right)^t \left(\frac{s}{e}\right)^{s-t+\frac{1}{2}} e^{3/2}.
\end{eqnarray*}
Since
$$s \leq \frac{\log n}{\log c - 1 + \log\log n} - \frac{1}{2} + \frac{\log\log\log n}{3\log\log n},$$
we can write $s = \frac{\log n}{\log c - 1 + \log\log n} +k$, where
$$k \le - \frac{1}{2} + \frac{\frac{1}{2} \log\log\log n - \sqrt{\log\log\log n} }{\log c - 1 + \log\log n} .$$
Note that $n = \left(\frac{c \log n}{e} \right)^{s-k}$.
Then, letting $Y$ be the number of rainbow paths from $u$ to $v$ of any length $t\ge1$, we have
\begin{eqnarray*}
\E[Y] < \sum_{t = 1}^s \left( \frac{c \log n}{e} \right)^{t-s +k } \left( \frac{s}{e} \right)^{s-t+\frac{1}{2}} e^{3/2} & = & \left( \frac{c \log n}{e} \right)^k e \sqrt{s} \cdot \sum_{t = 1}^s \left( \frac{s}{c \log n} \right)^{s-t} .
\end{eqnarray*}
Since $s < \frac{2 \log n}{\log \log n}$, we have
\begin{eqnarray*}
\E[Y] & < & \left( \frac{c \log n}{e} \right)^k e\, \sqrt{\frac{2\log n}{\log \log n}} \, \sum_{t = 1}^s \left( \frac{2}{c \log \log n} \right)^{s-t} .
\end{eqnarray*}
Since the sum is less than $2$, and since $\sqrt{2} e < 4$, we have
\begin{eqnarray*}
\E[Y] & < & 8 \left( \frac{c \log n}{e} \right)^k \sqrt{\frac{\log n}{\log \log n}} .
\end{eqnarray*}
Finally, since $k+\frac{1}{2}$ is at most the logarithm of
$\sqrt{\log\log n} \exp \bigl(-\sqrt{\log \log \log n}\,\bigr)$ with a base of $c \log n / e$, it follows that
\begin{eqnarray*}
\E[Y] & < & 8
e^{1/2} c^{-1/2} \exp \left( {- \sqrt { \log \log \log n } } \right) \rightarrow 0.
\end{eqnarray*}
By Markov's inequality, the probability that there exists a rainbow path from $u$ to $v$ is at most $\E[Y]$, and thus, for an arbitrarily chosen vertex pair $u,v \in X$, a.a.s.~$u$ and $v$ are not rainbow connected. In particular, $\mathcal G$ is a.a.s.~not rainbow connected.
\end{proof}
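As a sanity check, the Stirling-type estimate $s! < s^{s+\frac{1}{2}} e^{-s+1}$ invoked at the start of the proof holds for every integer $s \ge 2$ (the ratio of the two sides tends to $\sqrt{2\pi}/e < 1$), and is easy to confirm numerically; the following one-off Python snippet is ours and not part of the paper.
\begin{verbatim}
import math

# check  s! < s^(s + 1/2) * e^(-s + 1)  for small s >= 2
for s in range(2, 25):
    assert math.factorial(s) < s ** (s + 0.5) * math.exp(-s + 1), s
print("ok")
\end{verbatim}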
\section{Tools and key ideas}
In this section, we outline some tools and key ideas that we will need for the proof of Theorem \ref{thmEasyLB}. For this entire section and the next, we set
\begin{equation}
d = \lceil \log \log \log \log n \rceil. \label{eq:def_d}
\end{equation}
We will need several inequalities that will help us estimate various probabilities. First, we have the following inequality, which follows easily from the inequality $(1-p)^x \le \exp(-px)$. We will use it throughout the entire paper without explicitly stating that we are doing so.
\begin{lemma}
If\/ $p,x > 0$ and $px < 1$, then $1 - (1-p)^x > px - p^2 x^2.$
\end{lemma}
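The inequality is also easy to spot-check numerically; the snippet below (ours, not part of the paper) does so for a few values with $0 < px < 1$.
\begin{verbatim}
# spot-check  1 - (1-p)^x > p*x - p^2*x^2  for a few (p, x) with p*x < 1
for p, x in [(0.001, 50), (0.01, 30), (0.2, 4.5)]:
    assert 1 - (1 - p) ** x > p * x - (p * x) ** 2, (p, x)
print("ok")
\end{verbatim}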
Next,
we will use the following forms of the Chernoff bound, which can be found, for example, in Chapter 4 of \cite{Mitzenmacher}.
\begin{theorem}
\label{thmChernoff}
Let $Y$ be a random variable that is the sum of mutually independent indicator variables---that is, random variables taking values in $\{0,1\}$. Let $\mu = \E[Y]$. Then for any value $\delta \in (0,1)$,
$$\Pr(Y < (1 - \delta) \mu) \leq \left( \frac{e^{-\delta}}{(1-\delta)^{1-\delta}} \right)^{\mu} ,$$
and for any value $\delta > 0$,
$$ \Pr(Y > (1 + \delta) \mu) \leq \left( \frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right)^{\mu}.$$
\end{theorem}
\begin{theorem}
\label{thmSimpleChernoff}
Let $Y$ be a random variable that is the sum of mutually independent indicator variables, and let $\mu = \E[Y]$. Then for any value $\delta > 0$,
$$\Pr(Y > (1 + \delta) \mu) \leq \exp \left( - \frac{ \delta^2 \mu }{2 + \delta }\right)$$
and for any value $\delta \in (0,1)$,
$$\Pr(Y < (1 - \delta) \mu) \leq \exp \left( - \frac{1}{2} \delta^2 \mu \right).$$
\end{theorem}
Mitzenmacher \cite{Mitzenmacher} points out that for the lower-tail bounds of these theorems, it is enough to let $\mu \leq \E[Y]$, and for the upper-tail bounds, it is enough to let $\mu \geq \E[Y]$.
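For intuition about how these bounds behave, the following Python snippet (ours, not part of the paper; the binomial parameters are arbitrary) compares the exact upper tail of a binomial random variable with the corresponding bound of Theorem \ref{thmChernoff}.
\begin{verbatim}
import math

def binom_upper_tail(m, q, k):
    # Pr(Y > k) for Y ~ Binomial(m, q), computed exactly
    return sum(math.comb(m, j) * q ** j * (1 - q) ** (m - j)
               for j in range(k + 1, m + 1))

m, q, delta = 1000, 0.01, 1.0
mu = m * q
exact = binom_upper_tail(m, q, math.floor((1 + delta) * mu))
bound = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu
print("exact upper tail:", exact)   # roughly 2e-3 here
print("Chernoff bound  :", bound)   # about 2e-2, a valid upper bound
\end{verbatim}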
\subsection{Bounding the relevant values of $s$}
\label{bddS}
Theorems \ref{thmUB} and \ref{thmEasyLB} show that the rainbow connectivity threshold of $\mathcal G$ is of the form $(1+o(1)) \frac{\log n}{\log \log n}$. Intuitively, this should imply that the most difficult values of $s$ for which to prove these theorems are those close to $\frac{\log n}{\log \log n}$ and that other values of $s$ need not be carefully considered. In the following results, we will formalize this intuition.
First, by Theorem \ref{thmUB}, we may assume that $s \geq \frac{\log n}{2 \log \log n}$.
Next, we show that it is sufficient just to prove Theorem \ref{thmEasyLB} for values $s \leq \frac{3 \log n}{\log \log n}$. The proof of this lemma gives some intuition regarding why $\mathcal G$ is more likely to be rainbow connected for larger values of $s$.
\begin{lemma}
\label{lemmaSmall}
Suppose that whenever
$$\frac{\log n}{\log c - 1 + \log \log n} + \frac{3}{2} + \frac{2 \sqrt{ \log \log \log n }}{ {\log \log n}} \leq s \leq \frac{3 \log n}{\log \log n},$$
$\mathcal G$ is a.a.s.~rainbow connected. Then $\mathcal G$ is a.a.s.~rainbow connected also for any value $s > \frac{3 \log n}{\log \log n}$.
\end{lemma}
\begin{proof}
Suppose that $s > \frac{3 \log n}{\log \log n}$. Let $k$ be the smallest power of $2$ for which $ \left \lfloor s/k \right \rfloor < \frac{3 \log n}{\log \log n}$.
By our choice of $k$, we know that
\begin{equation}
\label{eqnSK}
\frac{s}{k} \geq \frac{3 \log n}{2 \log \log n} .
\end{equation}
We then create a family $\mathcal H$ of $\lfloor s / k \rfloor$ graphs by letting $$H_i = G_{ki+1} \cup G_{ki + 2} \cup \dots \cup G_{k(i+ 1)}$$ for $0 \leq i < \lfloor s / k \rfloor$. We observe that each graph $H_i \in \mathcal H$ is randomly sampled from $G(n,p^*)$, where $p^* = 1-(1-p)^k$. We estimate
$$p^* \left \lfloor \frac{s}{k} \right \rfloor > \frac{s-k}{k} (kp-k^2p^2) > ps \left( 1 - \frac{k}{s} - kp \right) > ps \left( 1 - \frac{2k}{s} \right) = \frac{c^* \log n}{n},$$
where $c^* = c \left( 1 - \frac{2k}{s} \right).$
By our hypothesis, if $\lfloor s / k \rfloor \geq \frac{\log n}{\log c^* - 1 + \log \log n} + \frac{3}{2} + \frac{2 \sqrt{ \log \log \log n }}{ {\log \log n}} $,
then $\mathcal H$ is a.a.s.~rainbow connected, which implies that $\mathcal G$ is a.a.s.~rainbow connected.
We also see
$$\log c^* = \log c + \log \left( 1 - \frac{2k}{s} \right) > \log c - \frac{3k}{s}.$$
Therefore, it suffices to show that
$$\left \lfloor \frac{s}{k} \right \rfloor \geq \frac{\log n}{\log c - \frac{3k}{s} - 1 + \log \log n} + 2 , $$
or stronger still, that
$$\left \lfloor \frac{s}{k} \right \rfloor > (1 + o(1)) \frac{\log n}{\log \log n}$$
holds for every function $o(1)$. However, this follows immediately from (\ref{eqnSK}).
Hence, we know that $\mathcal H$ is a.a.s.~rainbow connected, and thus $\mathcal G$ is also a.a.s.~rainbow connected.
\end{proof}
By combining Theorem \ref{thmUB} and Lemma \ref{lemmaSmall} with the hypothesis of Theorem \ref{thmEasyLB}, we assume throughout the rest of the paper that $s$ satisfies
\begin{equation}\tag{$\star$}
\label{sBound}
\frac{\log n}{2 \log \log n} \leq s \leq \frac{3 \log n}{\log \log n}.
\end{equation}
\subsection{Spheres and breadth-first search for rainbow paths}
\label{BFS}
In this subsection, we will outline a breadth-first search technique that we use extensively in the proof of Theorem \ref{thmEasyLB}.
For an ordinary graph $G$ and a vertex $v \in V(G)$, the \emph{sphere} of radius $t$ around $v$, denoted by $\Gamma_t(v)$, is defined as the set of vertices in $V(G)$ at distance exactly $t$ from $v$. Then, the statement that $\diam(G) > t$ is equivalent to the statement that there exists a vertex pair $u,v \in V(G)$ for which $u \not \in \bigcup_{i = 1}^t \Gamma_i(v)$. For each value $t \geq 0$, $\Gamma_t(v)$ can be calculated by carrying out a breadth-first search on $G$ starting at $v$ and searching up to distance $t$. Therefore, for ordinary graphs, the concepts of a graph's diameter, spheres around vertices, and breadth-first search are intimately related.
In our rainbow setting, we aim to determine the number $s$ of colors needed to make $\mathcal G$ a.a.s.~rainbow connected.
We will see that similarly to a graph's diameter, the value $s$ also has bounds that are closely related to breadth first search and spheres, which brings us to the following definition.
\begin{definition}
Let $v \in X$, $C \subseteq [s]$, and let $t \geq 0$ be an integer. Then we define $\Gamma_t^C(v)$ to be the set of vertices $u \in X$ satisfying the following two conditions:
\begin{itemize}
\item[(i)] $u$ can be reached from $v$ by a rainbow path of length $t$ consisting of edges of graphs $G_i$ for which $i \in C$;
\item[(ii)] $u$ cannot be reached from $v$ by a rainbow path of length at most $t-1$ consisting of edges of graphs $G_i$ for which $i \in C$.
\end{itemize}
\end{definition}
We will refer to these sets $\Gamma_t^C(v)$ as \emph{spheres}.
We observe that for each vertex $v \in X$, $v$ is rainbow connected with every vertex in $\bigcup_{i = 0}^s\Gamma_i^{[s]}(v)$.
In fact, $v$ would be rainbow connected with every vertex in $\bigcup_{i = 0}^s\Gamma_i^{[s]}(v)$ even if we removed the second condition in the definition of each $\Gamma_i^{[s]}(v)$, but we will see later that we need this second condition for technical reasons.
Similarly to spheres in ordinary graphs, for a vertex $v \in X$, the sets $\Gamma_i^{C}(v)$ can be computed recursively with a breadth-first search.
First, we let $\Gamma_0^{C}(v) = \{v\}$. Then, for $0 \leq t \leq |C| - 1$, we can compute $\Gamma_{t+1}^{C}(v)$ from $\Gamma_{t}^{C}(v)$ as follows.
We consider each vertex $w \in \Gamma_{t}^{C}(v)$ individually. By definition, there exists a nonempty set $\mathcal P_w$ of rainbow paths from $v$ to $w$ of length exactly $t$, each of which uses only edge colors in $C$. For each path $P \in \mathcal P_w$, we define $C(P)$ as the set of $t$ colors appearing on the edges $E(P)$. Then, for each path $P \in \mathcal P_w$, we search for vertices $u \in X \setminus \bigcup_{i = 0}^t \Gamma^C_i(v)$ for which $wu \in E(G_j)$ for some color $j \in C \setminus C(P)$. Whenever we find such a vertex $u$, we add $u$ to $\Gamma_{t+1}^{C}(v)$. By carrying out this process for each vertex $w \in \Gamma_t^{C}(v)$ and each path $P \in \mathcal P_w$, we determine $\Gamma_{t+1}^{C}(v)$.
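For illustration only, the following Python sketch (ours, not part of the paper) carries out one concrete version of this search on a tiny instance: it records, for every vertex of the current sphere, the color sets of the search paths reaching it, and extends only with colors not yet used on a path and to vertices outside all previous spheres. It is exponential in the worst case and is meant purely to make the definition tangible.
\begin{verbatim}
def rainbow_spheres(n, family, v, C):
    # family[i]: edge set of color i (frozensets {a, b}); C: allowed colors
    adj = {i: {u: set() for u in range(n)} for i in C}
    for i in C:
        for e in family[i]:
            a, b = tuple(e)
            adj[i][a].add(b)
            adj[i][b].add(a)
    reached = {v}
    palettes = {v: {frozenset()}}   # color sets of search paths ending here
    spheres = [{v}]
    for _ in range(len(C)):
        nxt = {}
        for w, pals in palettes.items():
            for pal in pals:
                for i in C:
                    if i in pal:
                        continue
                    for u in adj[i][w]:
                        if u not in reached:
                            nxt.setdefault(u, set()).add(pal | {i})
        if not nxt:
            break
        spheres.append(set(nxt))
        reached |= set(nxt)
        palettes = nxt
    return spheres                  # spheres[t] plays the role of Gamma_t^C(v)

# Example: a path 0-1-2 whose two edges have different colors.
fam = [{frozenset((0, 1))}, {frozenset((1, 2))}]
print(rainbow_spheres(3, fam, 0, [0, 1]))   # [{0}, {1}, {2}]
\end{verbatim}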
When we calculate bounds for $s$, we will be interested in estimating the sizes of the spheres $\Gamma_t^C(v)$. We have several results that will help us.
The first one shows that a.a.s.~each individual graph $G_i \in \mathcal G$ has small maximum degree.
\begin{lemma}
\label{lemmaDelta}
It holds a.a.s.~that for each graph $G_i \in \mathcal G$, $\Delta(G_i) < \frac{2c \log n}{\log \log n}.$
\end{lemma}
\begin{proof}
Let $v \in X$, and let $G_i \in \mathcal G$. The expected value $\mu$ of $\deg_{G_i}(v)$ is equal to
$$\mu = p(n-1) < \frac{c \log n}{s} \leq 2c \log \log n =: \nu,$$
using the bound (\ref{sBound}).
Hence, by applying a Chernoff bound (Theorem \ref{thmChernoff} with the remark stated after the theorem),
$$\Pr \left( \deg_{G_i}(v) > \frac{2 c \log n}{\log \log n} \right) < \left( \frac{e^\delta}{(1 + \delta)^{1 + \delta}}\right)^{\nu},$$
where $\delta = \frac{ \log n}{(\log \log n)^2}$.
Calculating further,
\begin{eqnarray*}
\left( \frac{e^\delta}{(1 + \delta)^{1 + \delta}}\right)^{\nu} &=& \exp( \nu \delta - \nu (1 + \delta) \log (1 + \delta) )\\
&<& \exp(\nu \delta (1 - \log \delta)) \\
&=& \exp \left( \frac{2c \log n}{\log \log n} (- \log \log n +2 \log \log \log n + 1) \right)\\
& =& o \left(\frac{1}{n^2} \right).
\end{eqnarray*}
Therefore, it a.a.s.~holds that for each vertex $v \in X$, and for each graph $G_i \in \mathcal G$, $\deg_{G_i}(v) < \frac{2c \log n}{\log \log n}$.
\end{proof}
Next, we show that a.a.s.~each vertex $v \in X$ has at least logarithmically many neighbors in the union of the graphs of $\mathcal G$, even when the graphs indexed by a set of at most $d+1$ colors are removed, where $d$ is defined in (\ref{eq:def_d}).
\begin{lemma}
\label{lemFirstStep}
There exists a value $\epsilon = \epsilon(c) > 0$ such that a.a.s., for every vertex $v \in X$ and every set $C \subseteq [s]$ of size at most $d+1$,
$$\left |\Gamma^{[s] \setminus C}_1(v) \right | \geq \epsilon \log n.$$
\end{lemma}
\begin{proof}
Let $v \in X$. We first estimate $\left |\Gamma^{[s]}_1(v) \right |$. The probability that $v$ is adjacent to a given vertex $u \in X \setminus \{v\}$ in at least one graph $G_i \in \mathcal G$ is equal to $1 - (1-p)^{s} > ps - p^2s^2.$ Therefore, the expected number of vertices in $\Gamma^{[s]}_1(v)$ is at least
$$\nu = (n-1)(ps - p^2s^2) > \alpha \log n,$$
for some constant $\alpha > 1$ depending only on $c$. Therefore, by a Chernoff bound (Theorem \ref{thmChernoff}), for a constant $\beta > 0$,
$$\Pr \left( \left |\Gamma^{[s]}_1(v) \right | < \beta \nu \right) < \left( \frac{e^{-1 + \beta}}{\beta ^{\beta}} \right)^{\nu} \leq \exp \left( (-1 + \beta - \beta \log \beta ) \alpha \log n \right).$$
When $\beta$ is a sufficiently small constant, $(-1+\beta - \beta \log \beta ) \alpha$ is bounded above by a constant strictly smaller than $-1$, so
$$\Pr \left( \left |\Gamma^{[s]}_1(v) \right | < \beta \nu \right) = o \left( \frac{1}{n} \right).$$
Therefore, a.a.s., for every vertex $v \in X$,
$$ \left |\Gamma^{[s]}_1(v) \right | \geq \beta \nu > \beta \log n.$$
Finally, by Lemma \ref{lemmaDelta}, the degree of $v$ in each graph $G_i$ is a.a.s.~at most $\frac{2 c \log n}{\log \log n}$. Therefore, a.a.s., for each vertex $v \in X$,
$$ \left |\Gamma^{[s]\setminus C}_1(v) \right | > \beta \log n - |C| \left( \frac{2 c \log n}{\log \log n} \right) > \epsilon \log n,$$
for a sufficiently small positive constant $\epsilon>0$ depending only on $c$.
\end{proof}
We will use the value $\epsilon$ from Lemma \ref{lemFirstStep} throughout the rest of the paper.
With our final lemma of this section, we will show that when a breadth first search is carried out to compute a sphere $\Gamma^{[s] \setminus C} _k(v)$ for some positive integer $k$, the sets $\Gamma^{[s] \setminus C}_t(v)$ $(0 \leq t \leq k)$ at least double in size at each step of the search until a certain number of vertices are reached by the search. With this lemma, we can ensure that during all but the late steps of a breadth first search, at least half of the vertices reached by the search belong to the outer sphere.
\begin{lemma}
\label{lemmaDouble}
It holds a.a.s.~that for every vertex $v \in X$, every set $C \subseteq [s]$ of size at most $ d+1$, and every $t$ with $0 \leq t \leq s - |C| - 1$ for which $\max_{1 \leq i \leq t} \left |\Gamma^{[s] \setminus C}_{i}(v) \right | \leq \frac{n}{10}$, we have
$$\left |\Gamma^{[s] \setminus C}_{t+1}(v) \right | \geq 2 \left |\Gamma^{[s] \setminus C}_{t}(v) \right |.$$
\end{lemma}
\begin{proof}
We fix $v \in X$ and $C \subseteq [s]$. For every $t\ge0$ we let $V_t := \Gamma^{[s] \setminus C}_t(v)$ and $n_t := |V_t|$.
As we have $n$ choices for $v$, fewer than $n^{d+1}$ choices for $C$, and fewer than $\log n$ steps $t$, it suffices to show that the inequality holds at each step with probability at least $1 - o(\exp(-(d+2) \log n - \log \log n))$. In fact, we will see that at each step, the probability of failure is only $\exp(-\gamma (\log n)^2)$, for some positive constant $\gamma> 0$.
The proof is by induction on $t$. When $t = 0$, the statement follows from Lemma \ref{lemFirstStep}. When $t = 1$, we show that a stronger bound holds. For a vertex $w \in V_1$ and a vertex $u \in X \setminus (V_0 \cup V_1)$, the probability that there exists an edge $uw$ in a graph $G_i$ for which some graph $G_j \neq G_i$ satisfies $vw \in E(G_j)$ is at least
$$
1 - (1-p)^{s- |C| - 1 } > \frac{1}{2}\, ps = \frac{c \log n}{2n}.
$$
Therefore, for each vertex $u \in X \setminus (V_0 \cup V_1)$,
$$\Pr(u \in V_2) > 1 - \left( 1 - \frac{c \log n}{2n} \right)^{n_1}.$$
By Lemmas \ref{lemmaDelta} and \ref{lemFirstStep} and ($\star$),
$$\epsilon \log n \leq n_1 \leq \frac{2c \log n}{\log \log n} \cdot s \leq 6c \left( \frac{\log n}{\log \log n} \right)^2.$$
Therefore, we may estimate that
\begin{eqnarray*}
1 - \left( 1 - \frac{c \log n}{2n} \right)^{n_1} &>& n_1 \left( \frac{c \log n}{2n} \right) - n_1^2 \left( \frac{c \log n}{2n} \right)^2 \\
& = & n_1 \cdot \frac{c \log n}{2n}\, (1 - o(1)) \\
& > & n_1 \cdot \frac{c \log n}{3n} .
\end{eqnarray*}
Hence, the expected number of vertices in $V_2$ is at least
$$ n_1 \cdot \frac{c \log n}{3n} \left(n - 6c \left( \frac{\log n}{\log \log n} \right)^2 -1 \right) > \frac{1}{4} \epsilon (\log n)^2.$$
Thus, by a Chernoff bound (Theorem \ref{thmSimpleChernoff}),
$$\Pr \left(n_2 < \frac{1}{8} \epsilon (\log n)^2 \right) < \exp \left( - \frac{1}{32} \epsilon (\log n)^2 \right) .$$
Hence, we may assume that $n_2 \geq \frac{1}{8} \epsilon (\log n)^2$, which is larger than $2n_1$~a.a.s.
Suppose now that $t \geq 2$ and $n_t\le\frac{n}{10}$. By the induction hypothesis, $n_0+n_1+\cdots +n_t \le n_t(1+\frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^t}) < \frac{n}{5}$. As $t \leq s - |C| - 1$, for each vertex $w \in V_t$, there exists at least one color $i \in [s] \setminus C$ such that our breadth first search may extend from $w$ using $G_i$. Using a similar argument as before, we have:
\begin{eqnarray}
\label{eqnGamma}
\E[n_{t+1}] \geq \frac{4}{5}n \left(1-(1-p)^{n_t} \right) > \frac{4}{5}n \left(1-\exp (-n_t p) \right).
\end{eqnarray}
We can express the right-hand expression in (\ref{eqnGamma}) in the form $ n_t g(n, n_t)$, where
$$g(n, n_t) = \frac{4n}{5 n_t} \left(1-\exp (-n_t p) \right).$$
Furthermore, by logarithmic differentiation,
\begin{eqnarray}
\label{loggydif}
\frac{n_t}{g} \frac{\partial g}{\partial n_t} = -1 + \frac{n_t p \exp(-n_t p)}{1 - \exp(-n_t p)} .
\end{eqnarray}
Since $x < e^x - 1$ for $x > 0$, it follows that
$$\frac{x e ^{-x}}{1 - e^{-x}} < 1$$
for $x > 0$. Therefore, $\frac{\partial g}{\partial n_t} < 0$, and $g(n,n_t)$ is minimized by increasing $n_t$. Thus, since $n_t \leq \frac{n}{10}$,
$$\E \left [ n_{t+1} \right ] \geq n_t g \left(n,\frac{n}{10} \right) = 8 \left(1 - \exp \left(-\frac{np}{10} \right) \right)\cdot n_t > 4 n_t.$$
Hence, the expected value of $n_{t+1}$ is at least $4 n_t$. Then, by a Chernoff bound (Theorem \ref{thmSimpleChernoff}),
$$\Pr \left( n_{t+1} < 2 n_t \right) < \exp \left( - \frac{1}{2} n_t \right) < \exp \left(-\frac{1}{16} \epsilon (\log n)^2 \right) .$$
Therefore, the conclusion of the lemma a.a.s.~holds for all vertices $v \in X$ and all subsets $C \subseteq [s]$ of size at most $d+1$.
\end{proof}
\section{Proof of Theorem \ref{thmEasyLB}}
In this section, we will prove Theorem \ref{thmEasyLB}. Our strategy is to show that for an arbitrary vertex pair $u,v \in X$, $u$ and $v$ are rainbow connected with probability $1 - o \left( \frac{1}{n^2} \right)$, from which it will follow that $\mathcal G$ is a.a.s.~rainbow connected.
Recall that by the bound (\ref{sBound}), we assume that $s \leq \frac{3 \log n }{\log \log n}$.
We will need the following fact.
\begin{lemma}
\label{lemmaLiftOff}
It a.a.s.~holds that for every vertex $u \in X$, there exists a value $t^* \leq d$
for which
\begin{equation}
\label{eqnLiftOff}
\left |\Gamma_{t^*}^{[s-1]}(u) \right | \geq \frac{1}{2}\epsilon (c \log n)^d.
\end{equation}
\end{lemma}
\begin{proof}
We consider a vertex $u \in X$.
To prove (\ref{eqnLiftOff}), we will perform a breadth first search as described in Section \ref{BFS}. We will use the term \emph{unused vertices} to refer to those vertices that have not yet been reached by our search. In our breadth first search, if a set $\Gamma_t^{[s-1]}(u)$ with $t \le d$ ever contains at least $(c \log n)^{d}$ vertices, then (\ref{eqnLiftOff}) is proven, along with the lemma. Otherwise, we may assume that at each step of our breadth first search, we have at least $n - d(c \log n)^{d}$ unused vertices.
For $1 \leq t \leq d-1$, we define the value
\begin{equation}
\label{eqn:delta1}
\delta_t = \left\{
\begin{array}{ll}
(\log n)^{-1/3}, & t=1,2; \\[0.5mm]
({\log n})^{-1}, & 3 \le t\le d-1.
\end{array}
\right.
\end{equation}
Then, for $0 \leq t \leq d$, we define values $L_0 = 1$, $L_1 = \epsilon \log n$, and set
$$L_{t+1} = L_t \left( pn \right) ( s-t - 1) (1 - ps) \left(1 - \frac{d(c \log n)^{d}}{n} \right) (1 - \delta_t)$$
for $1 \leq t \leq d - 1$.
For $t \geq 1$, this gives us the following closed-form expression:
\begin{equation}
\label{eqnLLB}
L_t = (\epsilon \log n) \left( pn \right)^{t-1} (s-2)^{\underline{t-1}}(1 - ps)^{t-1} \left(1 - \frac{d(c \log n)^{d}}{n} \right)^{t-1} \prod_{i = 1}^{t-1}(1 - \delta_i).
\end{equation}
Using the inequality
$$(s-2)^{\underline{d-1}} > s^{d-1} \left(1 - \frac{d}{s} \right)^{d-1} > s^{d-1} \exp \left( - \frac{2d^2}{s} \right),$$
we can estimate
\begin{eqnarray*}
L_d &=& (\epsilon \log n) \left( pn \right)^{d-1} (s-2)^{\underline{d-1}}(1 - ps)^{d-1} \left(1 - \frac{d(c \log n)^{d}}{n} \right)^{d-1} \prod_{t = 1}^{d-1}(1 - \delta_t) \\
&>& \epsilon (c \log n)^d \exp \left( - \frac{2d^2}{s} \right) (1 - ps)^{d-1} \left(1 - \frac{d(c \log n)^{d}}{n} \right)^{d-1} \prod_{t = 1}^{d-1}(1 - \delta_t)\\
&>& \frac{1}{2} \epsilon (c \log n)^d.
\end{eqnarray*}
Therefore, if we prove that
\begin{equation}
\label{eqnLBLBLB}
\left | \Gamma_{t}^{[s-1]}(u) \right | \geq L_t
\end{equation}
for $0\le t\le d$, then (\ref{eqnLiftOff}) will follow.
We will prove (\ref{eqnLBLBLB}) by induction on $t$.
We see that $\left | \Gamma_0^{[s-1]}(u) \right | = L_0 = 1$ by definition, and $\left | \Gamma_1^{[s-1]}(u) \right | \geq L_1 = \epsilon \log n$ by Lemma \ref{lemFirstStep}. Assuming that (\ref{eqnLBLBLB}) holds for values up to $t$, we estimate $\left | \Gamma_{t+1}^{[s-1]}(u) \right | $, for $t =1,\dots,d-1$. After $\Gamma_t^{[s-1]}(u)$ is computed, then for a vertex $v \in \Gamma_t^{[s-1]}(u)$ and an unused vertex $w \in X \setminus \bigcup_{j = 0}^t \Gamma_j^{[s-1]}(u)$, the probability that there exists an edge $vw$ in a graph $G_i$ whose color $i$ does not appear on a given rainbow path from $u$ to $v$ is equal to $1 - (1-p)^{s-t-1} > (s-t-1)p -( s-t-1)^2 p^2$. Hence, if $\Gamma_t^{[s-1]}(u)$ has at least $L_t$ vertices, then
\begin{eqnarray}
\notag
\E \left [ \left | \Gamma_{t+1}^{[s-1]}(u) \right | \right ] \,>\, \mu_{t+1} &:=& L_t \left(( s-t-1)p - ( s-t-1)^2p^2 \right) (n - d (c \log n)^{d})\\
\notag
& > & L_t \cdot np \cdot {(s - t-1)(1 - ps)} \left(1 - \frac{ d(c \log n)^{d} }{n} \right) \\
\label{eqnMuL}
& = & L_{t+1} \, (1-\delta_t)^{-1}.
\end{eqnarray}
For $t \in \{ 1,2\}$, we have the rough bound of $\mu_{t+1} > (\log n)^{7/4}$ by the induction hypothesis and (\ref{eqnLLB}), (\ref{eqnLBLBLB}), and (\ref{eqnMuL}). Hence, by a Chernoff bound (Theorem \ref{thmSimpleChernoff}),
\begin{eqnarray*}
\Pr \left( \left | \Gamma_{t+1}^{[s-1]} (u) \right | < L_{t+1} \right)
&\le& \Pr \left(\left | \Gamma_{t+1}^{[s-1]} (u) \right | < \mu_{t+1}(1 - \delta_t) \right ) \\
&\leq & \exp \left(-\frac{1}{2}\delta_t^2 \mu_{t+1} \right) \\
&<& \exp \left(-\frac{1}{2} (\log n)^{13/12} \right)\\
&=& o \left( \frac{1}{dn} \right).
\end{eqnarray*}
When $t \geq 3$, we have the rough bound of $\mu_{t+1} > (\log n)^{7/2}$ by the induction hypothesis and (\ref{eqnLLB}), (\ref{eqnLBLBLB}), and (\ref{eqnMuL}). Hence, by a Chernoff bound (Theorem \ref{thmSimpleChernoff}),
\begin{eqnarray*}
\Pr \left(\left | \Gamma_{t+1}^{[s-1]}(u) \right | < L_{t+1} \right) & \le & \Pr \left(\left | \Gamma_{t+1}^{[s-1]} (u) \right | < \mu_{t+1}(1 - \delta_t) \right)\\
& \leq& \exp\left(-\frac{1}{2}\delta_t^2 \mu_{t+1} \right) \\
&< & \exp \left(-\frac{1}{2} (\log n)^{3/2} \right) \\
& =& o \left( \frac{1}{d n} \right).
\end{eqnarray*}
Therefore, with probability $1 - o \left(\frac{1}{n} \right)$, (\ref{eqnLBLBLB}) holds for all values $0 \leq t \leq d$, and the proof is complete.
\end{proof}
Now, we move to the main strategy. We consider a pair of distinct vertices $u,v \in X$.
For each rainbow path $P$ with an endpoint at $u$ and with length at most $d$, we let $C(P)$ be the set of colors used in $E(P)$. We consider only rainbow paths with $C(P)\subseteq [s-1]$, and we define
$$R_P := [s-1] \setminus C(P).$$
We also define
$$r := s - d - 1 \leq |R_P|.$$
We will consider spheres centered at $v$ obtained from a breadth first search using edges with colors in $R_P$. We write
$$\xi = \frac{ n }{ (\log n)^{d-1}}$$
and we claim that $|\Gamma_t^{R_P}(v)| \geq \xi$ for some value $t \leq r$. We show this in the following lemma.
\begin{lemma}
\label{lemmaVSpheres}
Let $u,v \in X$ be a pair of distinct vertices, and let $P$ be a rainbow path of length at most $d$ with an endpoint at $u$. With probability $1 - o \left ( \frac{1}{n^4} \right )$, there exists a value $t\in[r]$ for which either $\Gamma_t^{R_P}(v)$ intersects $P$ or $\left |\Gamma_t^{R_P}(v) \right | \geq \xi$.
\end{lemma}
\begin{proof}
For $0 \leq t \leq r$, we write $V_t = \Gamma^{R_P}_t (v) $.
Similarly to Lemma \ref{lemmaLiftOff}, we will define values $L_t$ with the goal of showing that if $\left | V_t \right | < \xi$ for every $t \leq r$, then with high probability, $\left |V_t \right | \geq L_t$ for all values $0 \leq t \leq r$, which will ultimately give us a contradiction. In order to estimate $\left |V_t \right | $ for $0 \leq t \leq r$, we will carry out a breadth first search from $v$ as defined in Section \ref{BFS}.
We will assume that this breadth first search never reaches a vertex of $P$, since otherwise, the lemma would be proven.
For $1 \leq t \leq r-1$, we define $\delta_t$ as in (\ref{eqn:delta1}).
Next, we define $\phi = \frac{1 }{\log n}$ and $\alpha_t = 1 - (r-t)p$. Finally, we define $L_0 = 1$, $L_1 = \frac{1}{2} \epsilon \log n$, and for $t \geq 2$,
\begin{equation}
\label{eqnLLLB}
L_t = \frac{1}{2} \epsilon \log n \left( \frac{c \log n}{s} \right)^{t-1} (r-1)^{\underline{t-1}} \, (1 - 2 \phi)^{2t-2}\, \prod_{i = 1}^{t-1} \alpha_i(1 - \delta_i).
\end{equation}
If there exists a value $k \leq r$ for which
$\left | \bigcup_{t = 0}^k V_t \right |\geq \phi n$,
then there must exist a value $t \leq r$ for which $\left | V_t \right | \geq \frac{\phi n}{r} > \xi$. Therefore, we assume that we always have at least $n-\phi n$ unused vertices.
\begin{claim}
\label{claimLLLL}
With probability $1 - o \left( \frac{1}{n^4} \right)$, for all values $0 \leq t \leq r$, $|V_t| \geq L_t$.
\end{claim}
\begin{proof}
We prove the claim by induction on $t$.
We let $\epsilon$ be the value from Lemma \ref{lemFirstStep}.
Clearly, $|V_0| = L_0 = 1$, and $|V_1| \geq \epsilon \log n \geq L_1$ by Lemma \ref{lemFirstStep}.
Now, for each value $t \geq 1$, we assume that $V_t $ is already computed, and we seek a lower bound for $V_{t+1}$. For notational simplicity, we write $m = |V_t |$.
For an unused vertex $x \in X \setminus \bigcup_{j = 0}^t V_j$ and a vertex $y \in V_t$, the probability that $x$ is reached from $y$ in our breadth first search is at least
$$1 - (1-p)^{r-t} > (r-t)p - (r-t)^2 p^2 = (r-t)p (1 - (r-t)p) = (r-t)p\alpha_t.$$
Therefore, using the assumption that $m < \xi$, the probability that $x$ is reached from at least one vertex of $V_t $ is at least
\begin{eqnarray*}
1- (1 - (r-t)p\alpha_t)^{m} &>& (r-t) p \alpha_t m - (r-t)^2 p^2 \alpha_t^2 m^2 \\
& \geq & (r-t)p \alpha_t m ( 1 - sp \xi) \\
& = & (r-t)p \alpha_t m \left( 1 - \frac{ c \log n}{n} \cdot \frac{ n }{ (\log n)^{d-1}} \right)\\
&>& (r-t)p \alpha_t m ( 1 - 2 \phi).
\end{eqnarray*}
Hence,
\begin{eqnarray*}
\mu_{t+1}:= \E \left [ |V_{t+1}| \right ] & > & n(1 - \phi) \left( 1- (1 - (r-t)p\alpha_t)^{m} \right) \\
& > & n( 1 - 2 \phi)^2 p \alpha_t (r - t) m \\
&\geq& L_t \cdot n( 1 - 2 \phi) ^2 p \alpha_t (r - t) \\
& = &L_t \cdot \frac{c \log n}{s} \cdot (1 - 2 \phi)^2 \alpha_t ({r - t}) .
\end{eqnarray*}
When $t \in \{1,2\}$, the equation (\ref{eqnLLLB}) and the induction hypothesis give us the rough bound $\mu_{t+1} > (\log n)^{11/6}$. Therefore, by a Chernoff bound (Theorem \ref{thmSimpleChernoff}),
\begin{eqnarray*}
\Pr \left( |V_{t+1}| < L_{t+1} \right)
&\le& \Pr \left( |V_{t+1}| < \mu_{t+1} (1 - \delta_t) \right) \\
&\leq & \exp \left(-\frac{1}{2} \delta_t^2 \mu_{t+1} \right)\\
& < & \exp \left(-\frac{1}{2} (\log n)^{7/6} \right) \\
& = & o\left( \frac{1}{n^5} \right).
\end{eqnarray*}
When $t \geq 3$, the equation (\ref{eqnLLLB}) and the induction hypothesis give us the rough bound $\mu_{t+1} > (\log n)^{7/2}$. Therefore,
\begin{eqnarray*}
\Pr \left( |V_{t+1}| < L_{t+1} \right)
&\le& \Pr \left( |V_{t+1}| < \mu_{t+1}(1 - \delta_t) \right) \\
& \leq& \exp \left( -\frac{1}{2} \delta_t^2 \mu_{t+1} \right) \\
& <& \exp \left( -\frac{1}{2}(\log n)^{3/2} \right ) \\
& =& o \left( \frac{1}{n^5} \right).
\end{eqnarray*}
Putting these bounds together, the probability that $|V_t| < L_t$ for some $t$ is at most
$o \left( \frac{1}{n^5} \right) + r\, o \left( \frac{1}{n^5} \right) = o \left( \frac{1}{n^4} \right).$ This completes the proof.
\end{proof}
Now, we apply Claim \ref{claimLLLL} with $t = r$ and obtain the following bound:
$$
|V_r| \geq \frac{1}{2} \epsilon \log n ( pn )^{r-1} (r-1)! (1 - 2\phi)^{2r-2} \prod_{i = 1}^{r-1} \alpha_i (1 - \delta_i) .
$$
By Stirling's approximation, $(r-1)! = \frac{1}{r} \cdot r! \geq r^{r-1} e^{-r} \sqrt{2 \pi r}$. Therefore,
\begin{eqnarray*}
|V_r|
&\geq& \frac{1}{2} \epsilon \log n \left( pnr \right)^{r-1} e^{-r} \sqrt{2\pi r} (1 - 2\phi)^{2r-2} \prod_{i = 1}^{r-1} \alpha_i (1 - \delta_i) \\
&\geq&
\frac{1}{2} \epsilon \log n ( c \log n)^{r-1} (r/s)^{r-1} e^{-r} \sqrt{2 \pi r} (1 - 2\phi)^{2r-2} \prod_{i = 1}^{r-1} \alpha_i (1 - \delta_i) \\
&=& \exp \bigg (r \log \log n + (r-1) \left(\log c + \log\left( \frac{r}{s} \right) + 2\log (1- 2\phi)\right) - r + \frac{1}{2} \log r + O(1) \bigg ).
\end{eqnarray*}
We recall that $\phi = \frac{1}{\log n}$,
and then using the inequality $-\log(1-x) < 2x$ for small $x$, it follows that
$$\left | (r-1)\log(1-2\phi) \right | < 4 (r-1) \phi = O(1).$$
Furthermore,
$$\left | (r-1) \log \left( \frac{r}{s} \right) \right| = \left | (r-1) \log \left(1 - \frac{d+1}{s} \right) \right | < 2 (r-1)\cdot \frac{d+1}{s} = O(d).$$
Hence, we may write more simply:
\begin{eqnarray*}
|V_{r}|
&\geq& \exp \big (r \log \log n + (r-1) \log c - r + \frac{1}{2} \log r + O(d) \big ) \\
&\geq& \exp \left( (s -d - \tfrac{1}{2} ) \log \log n + s (\log c - 1) + O(d) \right).
\end{eqnarray*}
Then,
\begin{equation}
\frac {|V_r|}{n / (\log n)^{d-1}}
\geq \exp \big( s(\log \log n + \log c - 1) - \tfrac{3}{2} \log \log n - \log n + O(d) \big). \label{eq:end1}
\end{equation}
By the hypothesis of Theorem \ref{thmEasyLB} we have that
\begin{equation*}
s \geq \frac{\log n}{\log c - 1 + \log \log n} + \frac{3}{2} + \frac{2 \sqrt{ \log \log \log n }}{ {\log \log n}}.
\end{equation*}
This assumption and (\ref{eq:end1}) imply the following:
\begin{eqnarray*}
\frac {|V_r|}{n / (\log n)^{d-1}}
&\geq& \exp \bigg( \frac{3}{2}(\log c - 1) + \frac{2\sqrt{\log\log\log n}}{\log\log n} (\log\log n + \log c - 1) + O(d) \bigg) \\
&\geq& \exp \big( \sqrt{\log\log\log n} + O(d) \big) \\
& > & 1.
\end{eqnarray*}
Thus, we conclude that
$$|V_r| \geq \frac{ n }{ (\log n)^{d-1}} = \xi.$$
Therefore, the assumption that $|V_t| < \xi$ for every value $0 \leq t \leq r$ is contradicted with probability $1 - o \left ( \frac{1}{n^4} \right )$, and thus the lemma is proven.
\end{proof}
Now, using Lemmas \ref{lemmaLiftOff} and \ref{lemmaVSpheres}, we are ready to prove Theorem \ref{thmEasyLB}. We choose a pair of distinct vertices $u,v \in X$. If we are able to show that there exists a rainbow path with endpoints $u$ and $v$ with probability $1 - o \left ( \frac{1}{n^2} \right )$, then this will show that $\mathcal G$ is a.a.s.~rainbow connected, and Theorem \ref{thmEasyLB} will be proven. By Lemma \ref{lemmaVSpheres}, for each rainbow path $P$ with an endpoint at $u$ and with length at most $d$, it holds with probability $1 - o \left ( \frac{1}{n^4} \right )$ that there exists a value $t_P$ for which $\Gamma_{t_P}^{R_P}(v)$ intersects $P$ or contains at least $\xi$ vertices. Using Lemma \ref{lemmaDelta}, the number of rainbow paths with an endpoint at $u$ and of length at most $d$ is bounded above by $ds^d \left ( \frac{2c \log n}{\log \log n} \right )^d < n$. Therefore, with probability $1 - o\left ( \frac{1}{n^3} \right )$, it holds for every rainbow path $P$ with an endpoint at $u$ and of length at most $d$ that there exists a value $t_P$ for which $\Gamma_{t_P}^{R_P}(v)$ intersects $P$ or contains at least $\xi$ vertices. If one of the rainbow paths $P$ is intersected by $\Gamma_{t_P}^{R_P}(v)$, then clearly $u$ and $v$ are connected by a rainbow path; therefore, we may assume that for every $P$, we can find a sphere $\Gamma_{t_P}^{R_P}$ with at least $\xi$ vertices.
Now, we define a set $E_{uv}$ of vertex pairs as follows. If there exists a rainbow path $P$ of length at most $d$ with endpoints $u$ and $w$, and if $x \in \Gamma_{t_P}^{R_P}(v)$, then we add the pair $\{w,x\}$ to $E_{uv}$. We estimate the number of vertex pairs in $E_{uv}$. By Lemma \ref{lemmaLiftOff}, there exists a set $\mathcal P$ of at least $\frac{1}{2} \epsilon (c \log n)^d$ rainbow paths of length at most $d$, each with $u$ as an endpoint, and all with distinct second endpoints $w$ in some sphere $\Gamma_{t^*}^{[s-1]}(u) $. Furthermore, for each path $P \in \mathcal P$, all of the vertices $x \in \Gamma_{t_P}^{R_P}(v)$ are distinct. Therefore, each pair $\{w,x\}$ with $w \in \Gamma_{t^*}^{[s-1]}(u) $ can be added at most twice to $E_{uv}$, once with $w$ as an endpoint of a path $P \in \mathcal P$ and with $x \in \Gamma_{t_P}^{R_P}(v)$, and once with $x$ as an endpoint of a path $P \in \mathcal P$ and with $w \in \Gamma_{t_P}^{R_P}(v)$. By Lemma \ref{lemmaVSpheres}, for each path $P \in \mathcal P$, at least $\xi$ pairs are added to $E_{uv}$, giving $E_{uv}$ a total of at least
\[ \frac{1}{4} \epsilon (c \log n)^d \xi > 4 \log n / p\]
distinct pairs.
With probability at least
$$1-(1 - p)^{4 \log n / p} = 1 - o \left(\frac{1}{n^2} \right),$$
$G_s$ has an edge at some pair in $E_{uv}$, giving us a rainbow path between $u$ and $v$ with probability $1 - o \left ( \frac{1}{n^2} \right )$. Since the number of vertex pairs $u,v \in X$ is less than $n^2$, it follows that every vertex pair in $X$ is connected by a rainbow path with probability $1 - o(1)$. This completes the proof of Theorem \ref{thmEasyLB}.
\section{An alternative colorful random setting}
\label{sec:differentSetting}
The results in the previous sections all belong to the setting in which a randomly edge-colored graph is obtained by taking the union of many monochromatically edge-colored graphs on a common vertex set. As mentioned in the introduction, a different random model is obtained by taking a single random graph $G$, each of whose edges is given a single color from the set $[s]$ uniformly at random. These two settings are not equivalent; for example, in the first setting, a single vertex-pair may have edges of multiple colors, while in the second setting, this is not possible. Nevertheless, these two models are similar in many ways, and in fact, Theorems \ref{thmUB} and \ref{thmEasyLB} have direct counterparts in this new random setting.
\begin{theorem}
\label{thm:newUB}
Let $G$ be a graph taken randomly from $G(n,p)$, with $p = \frac{c \log n}{n}$, where $c>1$ is a constant. Let each edge of $G$ be given a color from $[s]$ uniformly at random.
If $$s \leq \frac{\log n}{\log c - 1 + \log \log n} - \frac{1}{2} + \frac{\log \log \log n}{3 \log \log n},$$
then a.a.s.~$G$ is not rainbow connected.
\end{theorem}
\begin{theorem}
\label{thm:newLB}
Let $G$ be a graph taken randomly from $G(n,p)$, with $p = \frac{c \log n}{n}$, where $c>1$ is a constant. Let each edge of $G$ be given a color from $[s]$ uniformly at random.
If
$$s \geq \frac{\log n}{\log c - 1 + \log \log n} + \frac{3}{2} + \frac{2 \sqrt{ \log \log \log n }}{ {\log \log n}} ,$$
then a.a.s.~$G$ is rainbow connected.
\end{theorem}
We sketch some of the details that one might use to prove these theorems in the new setting. In order to prove Theorem \ref{thm:newUB}, the First Moment Method calculation used in the proof of Theorem \ref{thmUB} can be copied exactly. In order to prove Theorem \ref{thm:newLB}, the techniques and lemmas used to prove Theorem \ref{thmEasyLB} can be followed very closely. First, the inequality (\ref{sBound}) can be obtained using the same method as in Lemma \ref{lemmaSmall}. Then, the same breadth first search method can be employed, and we again can use Chernoff bounds in order to estimate the sizes of spheres. In fact, when we compute lower bounds for the sizes of spheres, we will often obtain better estimates. The reason for this is that in the original setting, we estimate the probability that a vertex pair contains an edge belonging to a color set $C$ as
\[1-(1-p)^{|C|} > p|C| - p^2 |C|^2 = (1- o(1)) \frac{c |C| \log n}{sn}.\]
However, in the new setting, we can write this probability exactly as
\[\frac{p|C|}{s} = \frac{c |C|\log n}{sn},\] which gives us approximately the same probability, but with no error term. Therefore, the lower bounds for sphere sizes that we use in the original setting can be translated into the new setting without any changes. Finally, our last technique of finding an edge of color $s$ between the endpoints of two rainbow paths can be used without any modifications. Therefore, our overall proof structure can be used in both settings.
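The closeness of the two per-pair probabilities is easy to see numerically as well; the short Python snippet below (ours, not part of the paper; the parameter values are arbitrary) prints both quantities for a few sizes of the color set $C$.
\begin{verbatim}
import math

n, c = 10 ** 6, 2.0
s = round(math.log(n) / math.log(math.log(n)))
p = c * math.log(n) / (s * n)
for size_C in (1, s // 2, s):
    original = 1 - (1 - p) ** size_C                 # union-of-graphs setting
    recolored = (c * math.log(n) / n) * size_C / s   # randomly recolored G(n,p)
    print(size_C, original, recolored)
\end{verbatim}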
\raggedright
\bibliographystyle{abbrv}
| {
"timestamp": "2021-07-15T02:21:57",
"yymm": "2107",
"arxiv_id": "2107.05670",
"language": "en",
"url": "https://arxiv.org/abs/2107.05670",
"abstract": "Given a family $\\mathcal G$ of graphs on a common vertex set $X$, we say that $\\mathcal G$ is rainbow connected if for every vertex pair $u,v \\in X$, there exists a path from $u$ to $v$ that uses at most one edge from each graph in $\\mathcal G$. We consider the case that $\\mathcal G$ contains $s$ graphs, each sampled randomly from $G(n,p)$, with $n = |X|$ and $p = \\frac{c \\log n}{sn}$, where $c > 1$ is a constant. We show that when $s$ is sufficiently large, $\\mathcal G$ is a.a.s. rainbow connected, and when $s$ is sufficiently small, $\\mathcal G$ is a.a.s. not rainbow connected. We also calculate a threshold of $s$ for the rainbow connectivity of $\\mathcal G$, and we show that this threshold is concentrated on at most three values, which are larger than the diameter of the union of $\\mathcal G$ by about $\\frac{\\log n}{(\\log \\log n)^2}$.The same results also hold in a more traditional random rainbow setting, where we take a random graph $G\\in G(n,p)$ with $p=\\frac{c \\log n}{n}$ ($c>1$) and color each edge of $G$ with a color chosen uniformly at random from the set $[s]$ of $s$ colors.",
"subjects": "Combinatorics (math.CO)",
"title": "A rainbow connectivity threshold for random graph families",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620063679783,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.7097449646607485
} |
https://arxiv.org/abs/1508.04675 | Independent Sets, Matchings, and Occupancy Fractions | We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of the results of Kahn, Galvin and Tetali, and Zhao showing that a union of copies of $K_{d,d}$ maximizes the number of independent sets and the independence polynomial of a d-regular graph. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of $K_{d,d}$. Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markström. In probabilistic language, our main theorems state that for all d-regular graphs and all $\lambda$, the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity $\lambda$ are maximized by $K_{d,d}$. Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. | \section{Independent Sets}
Let $G$ be a graph. The independence polynomial of $G$ is
\[P_G(\lambda) = \sum_{I \in \mathcal I} \lambda^{|I|} \]
where $\mathcal I $ is the set of all independent sets of $G$. By convention we consider the empty independent set to be a member of $\mathcal I$. The \textit{hard-core model} with fugacity $\lambda$ on $G$ is a random independent set $I$ drawn according to the distribution
\[
\Pr_\lambda[ I] = \frac{ \lambda^{|I|}} { P_G(\lambda)}\,.
\]
$P_G(\lambda)$ is also called the partition function of the hard-core model on $G$.
In the hard-core model, the quantity $\alpha_G(\lambda) = \frac{1}{|V(G)|} \frac{\lambda P^\prime_G(\lambda) }{P_G(\lambda)}$ is the \textit{occupancy fraction}: the expected fraction of vertices of $G$ belonging to the random independent set $I$. In particular,
\begin{align*}
\label{eq:occdef}
\alpha_G(\lambda) &= \frac{1}{|V(G)|} \sum_{v \in G} \Pr[v \in I] = \frac{1}{|V(G)|} \frac{\sum _{I \in \mathcal I} |I| \lambda ^{|I|} }{P_G(\lambda) } \\
&= \frac{1}{|V(G)|} \frac{\lambda P_G^\prime(\lambda)}{P_G(\lambda) } = \lambda \left(\frac{1}{|V(G)|} \log P_G(\lambda) \right )^\prime \,.
\end{align*}
We write $K_{d,d}$ for the complete bipartite graph with $d$ vertices in each part. If $2d$ divides $n$, let $H_{d,n}$ denote the $d$-regular, $n$-vertex graph that is the disjoint union of $n/(2d)$ copies of $K_{d,d}$. Kahn \cite{kahn2001entropy} showed that $H_{d,n}$ maximizes the total number of independent sets over all $d$-regular, $n$-vertex bipartite graphs, and then showed~\cite{kahn2002entropy} that in fact $K_{d,d}$ (or $H_{d,n}$) maximizes $\frac{1}{|V(G)|} \log P_G(\lambda)$ for $\lambda \ge 1$ over all $d$-regular bipartite graphs. The log partition function result generalizes the counting result as the latter can be recovered by setting $\lambda=1$. Galvin and Tetali~\cite{galvin2004weighted} then gave a broad generalization of Kahn's result to counting homomorphisms from a $d$-regular, bipartite $G$ to any graph $H$. The case of $H$ formed of two connected vertices, one with a self-loop, is that of counting independent sets. Via a modification of $H$ and a limiting argument, they proved that in fact $\frac{1}{|V(G)|} \log P_G(\lambda)$ is maximized for any $\lambda>0$ over $d$-regular bipartite graphs by $K_{d,d}$. Zhao \cite{zhao2010number} then removed the bipartite restriction in these results for independent sets by reducing the general case to the bipartite case, in particular proving that $H_{d,n}$ has the greatest number of independent sets of any $d$-regular graph on $n$ vertices.
Here we prove a strengthening of the above results for independent sets.
\begin{theorem}
\label{thm:occupy}
For all $d$-regular graphs $G$ and all $\lambda>0$, we have
\[ \alpha_G(\lambda) \le \alpha_{K_{d,d}}(\lambda) = \frac{\lambda(1+\lambda)^{d-1}}{2(1+\lambda)^d -1} \,. \]
The maximum is achieved only by unions of copies of $K_{d,d}$.
\end{theorem}
In particular Theorem~\ref{thm:occupy} states that the derivative of $\frac{1}{|V(G)|}\log P_G(\lambda)$ is maximized over $d$-regular graphs for all $\lambda$ by $K_{d,d}$, which when integrated, immediately implies that the normalized log partition function, $\frac{1}{|V(G)|} \log P_G(\lambda)$, is maximized. Even more, it says that the difference $\frac{1}{2d} \log P_{K_{d,d}}(\lambda)- \frac{1}{|V(G)|} \log P_G(\lambda)$ is strictly increasing in $\lambda$ for any $d$-regular graph $G$ that is not $H_{d,n}$. Note that $\frac{1}{n} \log P_{H_{d,n}}(\lambda) = \frac{1}{2d} \log P_{K_{d,d}}(\lambda)$ for any $n$ divisible by $2d$.
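Both the definition of the occupancy fraction and the closed formula for $K_{d,d}$ in Theorem~\ref{thm:occupy} are easy to check by brute force on very small graphs; the following Python sketch (ours, not part of the paper) enumerates the independent sets of $K_{3,3}$ and compares the two quantities.
\begin{verbatim}
from itertools import combinations

def independent_sets(n, edges):
    for r in range(n + 1):
        for I in combinations(range(n), r):
            S = set(I)
            if all(not (u in S and v in S) for u, v in edges):
                yield S

def occupancy_fraction(n, edges, lam):
    Z = weighted_size = 0.0
    for I in independent_sets(n, edges):
        w = lam ** len(I)
        Z += w
        weighted_size += len(I) * w
    return weighted_size / (n * Z)

d, lam = 3, 0.7
edges = [(u, v) for u in range(d) for v in range(d, 2 * d)]   # K_{d,d}
brute = occupancy_fraction(2 * d, edges, lam)
formula = lam * (1 + lam) ** (d - 1) / (2 * (1 + lam) ** d - 1)
print(brute, formula)   # the two values agree
\end{verbatim}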
In Section \ref{sec:givensize} we observe that the above bound on the partition function gives new upper bounds on a related problem: maximizing the number of independent sets of a given size in $d$-regular graphs.
Next, let $\alpha_{T_d}(\lambda)$ be the occupancy fraction of the unique translation-invariant hard-core measure on the infinite $d$-regular tree $T_d$ at fugacity $\lambda$; that is, $\alpha_{T_d}(\lambda)$ is the solution of the equation
\[ \frac{\alpha}{\lambda (1-\alpha)} = \left(\frac{1-2\alpha}{1-\alpha} \right)^d \]
(see e.g. \cite{bhatnagar2014decay}).
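Since the left-hand side of this equation is increasing in $\alpha$ and the right-hand side is decreasing on $(0,\tfrac{1}{2})$, the solution can be computed by bisection; a minimal Python sketch (ours, not part of the paper) follows.
\begin{verbatim}
def alpha_tree(d, lam, iters=80):
    f = lambda a: a / (lam * (1 - a)) - ((1 - 2 * a) / (1 - a)) ** d
    lo, hi = 0.0, 0.5
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(alpha_tree(d=3, lam=1.0))   # occupancy fraction of T_3 at fugacity 1
\end{verbatim}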
Using a variant of the method used to establish Theorem \ref{thm:occupy}, we prove a lower bound on the occupancy fraction in any $d$-regular, vertex-transitive, bipartite graph $G$.
\begin{theorem}
\label{thm:lowerbound}
For any $d$-regular, vertex-transitive, bipartite graph $G$,
\[ \alpha_G(\lambda) > \alpha_{T_d}(\lambda)\,. \]
\end{theorem}
The corresponding statement for the normalized log partition function (the integrated version of Theorem~\ref{thm:lowerbound}) holds without the condition of vertex transitivity~\cite{ruozzi2012bethe}. Theorem~\ref{thm:lowerbound} itself may not hold without vertex transitivity (see Section 5 of~\cite{csikvari2014matchings} for a related discussion about matchings). For $\lambda \le \lambda_c(T_d) = \frac{(d-1)^{d-1}}{(d-2)^d}$ (the uniqueness threshold of the hard-core model on $T_d$), the bound in Theorem~\ref{thm:lowerbound} is asymptotically tight for this class of graphs. From the results of Weitz \cite{weitz2006counting}, any sequence of graphs $G_n$ that converges locally (in the sense of Benjamini-Schramm \cite{benjamini2011recurrence}) to $T_d$ has occupancy fraction $\alpha_{T_d}(\lambda) +o(1)$ as $n \to \infty$; for example, we can take a sequence of bipartite Cayley graphs of large girth.
\section{Matchings}
The matching polynomial of a graph $G$ is
\[ M_G(\lambda) = \sum_{H \in \mathcal M} \lambda^{|H|} \]
where $\mathcal M$ is the set of all matchings of $G$ (including the empty matching) and $|H|$ is the number of edges in the matching $H$. Just as in the hard-core model above we can define a probability distribution over matchings:
\[ \Pr_\lambda [H] = \frac{ \lambda^{|H|} }{M_G(\lambda) } \,. \]
This defines the monomer-dimer model from statistical physics \cite{heilmann1972theory}: dimers are edges of the random matching $H$ and monomers the unmatched vertices.
The \textit{edge occupancy fraction}, or the dimer density, is the expected fraction of the edges of $G$ in such a random matching:
\begin{equation*}
\alpha^M_G(\lambda) = \frac{1}{|E(G)|} \sum_{e \in G} \Pr[e \in H] = \frac{1}{|E(G)|} \frac{\lambda M_G^\prime(\lambda)}{M_G(\lambda)} \,.
\end{equation*}
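As with the hard-core model, the edge occupancy fraction of a very small graph can be computed by direct enumeration of matchings; the brute-force Python sketch below (ours, not part of the paper) does so for $K_{2,2}$, i.e.\ a $4$-cycle.
\begin{verbatim}
from itertools import combinations

def matchings(edges):
    for r in range(len(edges) + 1):
        for M in combinations(edges, r):
            endpoints = [v for e in M for v in e]
            if len(endpoints) == len(set(endpoints)):   # no shared endpoints
                yield M

def edge_occupancy(edges, lam):
    Z = weighted_size = 0.0
    for M in matchings(edges):
        w = lam ** len(M)
        Z += w
        weighted_size += len(M) * w
    return weighted_size / (len(edges) * Z)

d, lam = 2, 1.0
edges = [(u, v) for u in range(d) for v in range(d, 2 * d)]   # K_{2,2}
print(edge_occupancy(edges, lam))   # 8/28 = 2/7, since M_G(1) = 7
\end{verbatim}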
Our next result is an upper bound on the edge occupancy fraction of any $d$-regular graph:
\begin{theorem}
\label{thm:matchingGeneral}
For all $d$-regular graphs $G$ and all $\lambda>0$, we have
\[ \alpha^M_G(\lambda) \le \alpha^M_{K_{d,d}}(\lambda) \,. \]
The maximum is achieved only by unions of copies of $K_{d,d}$.
\end{theorem}
This states that the normalized logarithmic derivative of $M_G(\lambda)$ is maximized by $K_{d,d}$, and hence via integration that $K_{d,d}$ (and thus also $H_{d,n}$) maximizes $\frac{1}{|E(G)|} \log M_G(\lambda)$ for any $\lambda >0$. This resolves Conjecture 7.1 in \cite{galvin2014three}. Br\'egman's theorem \cite{bregman1973some} says that the number of perfect matchings of a $d$-regular, $n$-vertex bipartite graph is maximized by $H_{d,n}$, and this was extended by Kahn and Lov\'asz to all $d$-regular graphs (see \cite{galvin2014three} for a full discussion). Our result on $M_G(\lambda)$ extends this: letting $\lambda \to \infty$ recovers the result for perfect matchings, while setting $\lambda =1$ shows that $H_{d,n}$ maximizes the total number of matchings of any $d$-regular graph on $n$ vertices.
In Section \ref{sec:givensize} we use Theorem~\ref{thm:matchingGeneral} to give new upper bounds on the number of matchings of a given size in $d$-regular graphs. We then use these bounds to prove the `asymptotic upper matching conjecture' of Friedland, Krop, Lundow, and Markstr{\"o}m~\cite{friedland2008validations}.
\section{Related work}
The results of Kahn \cite{kahn2001entropy,kahn2002entropy}, Galvin and Tetali \cite{galvin2004weighted}, and Zhao \cite{zhao2010number} culminating in the fact that $\frac{1}{|V(G)|} \log P_G(\lambda)$ is maximized over $d$-regular graphs by $K_{d,d}$ are based on the entropy method, a powerful tool for the type of problems we address here. Apart from the results mentioned above, see \cite{radhakrishnan20036} and \cite{galvin2014three} for surveys of the method. A direct application of the method requires the graph $G$ to be bipartite. Zhao \cite{zhao2011bipartite} showed that in some, but not all cases, this restriction can be removed by using a `bipartite swapping trick'. An entropy-free proof of Galvin and Tetali's general theorem on counting homomorphisms was recently given by Lubetzky and Zhao~\cite{lubetzky2014replica}. Our method also does not use entropy, but in contrast to the other proofs it works directly for all $d$-regular graphs, without a reduction to the bipartite case. The method deals directly with the hard-core model instead of counting homomorphisms and seems to require more problem-specific information than the entropy method; a question for future work is to extend the method to a more general class of homomorphisms.
The technique of writing the expected size of an independent set in two ways (as we do here) was used by Shearer \cite{shearer1995independence} in proving lower bounds on the average size of an independent set in $K_r$-free graphs and then by Alon \cite{alon1996independence} for graphs in which all vertex neighborhoods are $r$-colorable. The idea of bounding the occupancy fraction instead of the partition function comes in part from work of the third author \cite{perkins2015birthday} in improving, at low densities, the bounds on matchings of a given size in Ilinca and Kahn \cite{ilinca2013asymptotics} and independent sets of a given size in Carroll, Galvin, and Tetali \cite{carroll2009matchings}. The use of linear programming for counting graph homomorphisms appears in Kopparty and Rossman \cite{kopparty2011homomorphism}, where they use a combination of entropy and linear programming to compute a related quantity, the homomorphism domination exponent, in chordal and series-parallel graphs.
For matchings, Carroll, Galvin, and Tetali~\cite{carroll2009matchings} used the entropy method to give an upper bound of $\frac{1}{2} \log (1+d \lambda)$ on $\frac{1}{|V(G)|} \log M_G(\lambda)$ over $d$-regular graphs. It was previously conjectured (e.g.\ \cite{friedland2008number,galvin2014three}) that $K_{d,d}$ maximizes $\frac{1}{|V(G)|} \log M_G(\lambda)$ over all $d$-regular graphs. This follows from our Theorem~\ref{thm:matchingGeneral}.
In \cite{csikvari2014lower}, Csikv{\'a}ri proved the `lower matching conjecture' of \cite{friedland2008number} and in \cite{csikvari2014matchings} gave a new lower bound on the number of perfect matchings of $d$-regular, vertex-transitive, bipartite graphs, in both cases comparing an arbitrary graph with the infinite $d$-regular tree (see also the recent extension by Lelarge \cite{lelarge2015counting} to irregular graphs). Proposition 2.10 in \cite{csikvari2014matchings} states that the edge occupancy fraction of any $d$-regular, vertex-transitive, bipartite graph is at least that of the infinite $d$-regular tree; in Theorem~\ref{thm:lowerbound} we prove an analogous result for independent sets. Csikv{\'a}ri's techniques in the two papers are different from the methods of this paper, but similar in that he bounds the occupancy fraction instead of directly working with the partition function. His results rely on an elegant interplay between the Heilmann-Lieb theorem \cite{heilmann1972theory} and Benjamini-Schramm convergence of bounded-degree graphs.
In statistical physics, the analogue of the occupancy fraction in a general spin system is called the \textit{mean magnetization}; on general graphs it is $\#P$-hard to compute the magnetization in the ferromagnetic Ising model, the monomer-dimer model, and the hard-core model~\cite{sinclair2014lee,schulman2015symbolic}.
\section{The Method}
\label{sec:themethod}
To introduce our method, we start by proving Theorem~\ref{thm:occupy} under the assumption that $G$ is triangle-free. In what follows $I$ will denote the random independent set drawn according to the hard-core model with fugacity $\lambda$ on a $d$-regular, $n$-vertex graph $G$.
We say a vertex $v$ is \textit{occupied} if $v \in I$ and \textit{uncovered} if none of its neighbors are in $I$: $N(v) \cap I = \emptyset$. Let $p_v$ be the probability $v$ is occupied and $q_v$ be the probability $v$ is uncovered. The idea of considering $q_v$ appears in Kahn's paper \cite{kahn2001entropy}.
We will show that for every $\lambda>0$ and any triangle-free $G$, $\alpha_G(\lambda)$ is maximized by $K_{d,d}$. (It is easy to see by linearity of expectation or by manipulating the partition function that the occupancy fraction is the same for any number of copies of $K_{d,d}$).
Letting $\alpha=\alpha_G(\lambda)$, we write
\begin{align}
\nonumber
\alpha & = \frac{1}{n} \sum_{v \in G} p_v \\
\label{eq:pvqv}
&= \frac{1}{n} \sum_{v \in G} \frac{\lambda}{1+ \lambda} q_v \\
\label{eq:uncov}
&= \frac{\lambda}{1+ \lambda} \cdot \frac{1}{n} \sum_{v \in G} \sum_{j=0}^d \Pr[j \text{ neighbors of } v \text{ are uncovered}] \cdot (1+\lambda)^{-j} \\
\nonumber
&= \frac{\lambda}{1+ \lambda} \cdot \mathbb{E} [ (1+\lambda)^{-Y}]
\end{align}
where $Y$ is the random variable that counts the number of uncovered neighbors of a uniformly chosen vertex from $G$, with respect to the random independent set $I$. $Y$ is an integer valued random variable bounded between $0$ and $d$.
Equation (\ref{eq:pvqv}) follows since $v$ must be uncovered if it is to be occupied, and, conditioned on being uncovered, $v$ is occupied with probability $\frac{\lambda}{1+ \lambda}$. Equation (\ref{eq:uncov}) is similar: conditioned on the event that $u_1, \dotsc, u_j$, neighbors of $v$, are all uncovered, the probability that none are occupied is $(1+\lambda)^{-j}$.
This is where we use the fact that $G$ is triangle-free: there are no edges between neighbors of $v$.
We also have
\begin{align*}
\mathbb{E} Y &= \frac{1}{n} \sum_{v \in G} \sum_{u \sim v} q_u = d \cdot \frac{1+ \lambda}{\lambda} \alpha
\end{align*}
since each $u$ appears in the double sum exactly $d$ times as $G$ is $d$-regular. This gives the identity
\begin{equation*}
\mathbb{E} Y = d \cdot \mathbb{E} [ (1+\lambda)^{-Y}]\,.
\end{equation*}
Now let
\[ \alpha^* = \frac{\lambda}{d (1+ \lambda)} \cdot \sup_{0 \le Y \le d} \{ \mathbb{E} Y: \mathbb{E} Y = d \cdot \mathbb{E} [ (1+\lambda)^{-Y}] \} \]
where the sup is over all distributions of random variables $Y$ bounded between $0$ and $d$.
For any $\lambda$ and $d$ there is a unique distribution $Y$ supported only on $0$ and $d$ that satisfies the constraint $\mathbb{E} Y = d \cdot \mathbb{E} [ (1+\lambda)^{-Y}]$. We claim that the sup is uniquely achieved by this distribution. The claim follows from convexity, but we defer details to the proof of a more general statement in Section \ref{sec:triangles}. Since the distribution $Y$ associated to $H_{d,n}$ satisfies the constraint and is supported on $0$ and $d$, it must maximize $\alpha$. Since unions of copies of $K_{d,d}$ are the only graphs whose associated distribution is supported on $0$ and $d$, they uniquely achieve the maximum.
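To make the extremal value explicit: if $Y$ is supported on $\{0,d\}$ with $p := \Pr[Y=d]$, the constraint $\mathbb{E} Y = d \cdot \mathbb{E} [ (1+\lambda)^{-Y}]$ reads $pd = d\big(1-p + p(1+\lambda)^{-d}\big)$, so $p = \frac{1}{2-(1+\lambda)^{-d}}$ and
\[ \alpha^* = \frac{\lambda}{d(1+\lambda)}\cdot dp = \frac{\lambda(1+\lambda)^{d-1}}{2(1+\lambda)^d - 1}\,, \]
which is exactly $\alpha_{K_{d,d}}(\lambda)$, as one can check directly from $P_{K_{d,d}}(\lambda) = 2(1+\lambda)^d - 1$.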
To recap, the method is the following:
\begin{enumerate}[(i)]
\item Define a random variable $Y$ using randomness in the hard-core model on $G$ and in choosing a random vertex of $G$. In the proof above, $Y$ was the number of uncovered neighbors of a random vertex.
\item Write $\alpha$ as the expectation of a function of $Y$.
\item Add constraints that the random variable $Y$ must satisfy for any graph $G$ in our class. In the case above, the constraints were that the two ways of writing $\alpha$ are equal and that $0 \le Y \le d$.
\item Relax the optimization problem from random variables $Y$ induced by graphs to all random variables $Y$ that satisfy the constraints. Show that the unique maximizer of $\alpha$ is the distribution associated to the extremal graph, and therefore $\alpha$ is maximized by the extremal graph.
\end{enumerate}
In Section~\ref{sec:triangles} we give the full proof of Theorem~\ref{thm:occupy}. We prove the lower bound, Theorem~\ref{thm:lowerbound}, in Section~\ref{sec:LB}. We turn to matchings and Theorem~\ref{thm:matchingGeneral} in Section~\ref{sec:matchingproofs} before giving new bounds on the number of independent sets and matchings of a given size in Section~\ref{sec:givensize}.
\section{Proof of Theorem~\ref{thm:occupy}}
\label{sec:triangles}
For a vertex $v \in G$ and an independent set $I$, we define the \textit{free neighborhood} of $v$ to be the subgraph of $G$ induced by the neighbors of $v$ which are not adjacent to any vertex in $I \setminus N(v)$. We use the convention $v \notin N(v)$. The vertices in the free neighborhood may be uncovered or covered, but if they are covered it must be from another vertex in the free neighborhood. In a triangle-free graph the free neighborhood is always a set (possibly empty) of isolated vertices. Note that if $v \in I$, then the free neighborhood of $v$ is necessarily empty.
Let $ C$ be the random free neighborhood of $v$ when we draw $I$ according to the hard-core model and choose vertex $v$ uniformly at random from $G$. For any graph $F$, let $p_F$ be the probability that $ C$ is isomorphic to $F$. Also let $P_C = P_C(\lambda)$ be the independence polynomial of $C$ at fugacity $\lambda$. Then we can write $\alpha$ in two ways:
\begin{align}
\label{eq:tri1}
\alpha &= \frac{\lambda}{1+ \lambda} \mathbb{E} \left [ \frac{1}{P_C(\lambda)} \right ]
\intertext{and}
\label{eq:tri2}
\alpha &= \frac{\lambda}{d} \mathbb{E} \left[ \frac{P_C^\prime(\lambda)}{P_C(\lambda)} \right]
\end{align}
where in both equations the expectations are over the random free neighborhood $C$. Equation (\ref{eq:tri1}) follows since $v$ itself is uncovered if and only if all vertices in its free neighborhood are unoccupied. Given that the free neighborhood is isomorphic to $C$, all vertices in the free neighborhood are unoccupied with probability $\frac{1}{P_C(\lambda)}$. Equation (\ref{eq:tri2}) follows by counting the expected number of occupied neighbors of $v$ and dividing by $d$: only vertices in the free neighborhood can be occupied, and, given $C$, the expected number of occupied vertices in the free neighborhood is $\frac{\lambda P_C^\prime(\lambda)}{P_C(\lambda)}$.
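As a sanity check, when $C = \overline{K}_t$ consists of $t$ isolated vertices we have $P_C(\lambda) = (1+\lambda)^t$, so
\[ \frac{1}{P_C(\lambda)} = (1+\lambda)^{-t} \qquad \text{and} \qquad \frac{\lambda P_C^\prime(\lambda)}{P_C(\lambda)} = \frac{t\lambda}{1+\lambda}\,, \]
and \eqref{eq:tri1} and \eqref{eq:tri2} recover the two expressions for $\alpha$ from Section~\ref{sec:themethod}, with $t$ playing the role of the number of uncovered neighbors $Y$.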
Now let
\begin{equation}\label{eq:Cconstr}
\alpha^* = \frac{\lambda}{1+ \lambda} \cdot \sup \left \{ \mathbb{E} \left [ \frac{1}{P_C(\lambda)} \right ] : \frac{d}{1+\lambda} \cdot \mathbb{E} \left [ \frac{1}{P_C(\lambda)} \right ] = \mathbb{E} \left[ \frac{P_C^\prime(\lambda)}{P_C(\lambda)} \right] \right \}
\end{equation}
where the sup is over all distributions of random free neighborhoods $C$ supported on graphs of at most $d$ vertices. From (\ref{eq:tri1}) and (\ref{eq:tri2}), the distribution obtained from $G$ satisfies the constraint above.
We claim that for any $\lambda>0$, $\alpha^*$ is achieved uniquely by a distribution supported only on the empty graph and the graph consisting of $d$ isolated vertices, $\overline {K_d}$. The theorem follows since disjoint unions of copies of $K_{d,d}$ are the only graphs for which the free neighborhood can only be the empty set or $\overline {K_d}$, and since there is a unique distribution with this support satisfying the constraint. To prove this claim we use the language of linear programming, see e.g.~\cite{boyd2004convex}.
\subsection{The linear program}
Let $p_C$ be the probability of a given free neighborhood $C$, and let $\mathcal C_d$ be the set of all graphs on at most $d$ vertices, including the empty graph. Equation (\ref{eq:Cconstr}) defines a linear program with the decision variables $\{ p_C \} _{C \in \mathcal C_d}$. We write the linear program in standard form as
\begin{align*}
\alpha^* = \max \, \frac{\lambda}{2(1+\lambda)} &\sum_{C \in \mathcal C_d} p_C ( a_C +b_C) \text{ subject to } \\
&\sum_{C \in \mathcal C_d} p_C = 1 \\
&\sum_{C \in \mathcal C_d} p_C( a_C -b_C) =0 \\
&p_C \ge 0 \, \, \, \forall C \in \mathcal C_d
\end{align*}
where $a_C = \frac{1}{ P_C(\lambda)}$ and $b_C = \frac{(1+\lambda) P^\prime_C(\lambda)}{d P_C(\lambda)}$. We can calculate $a_{\emptyset} = 1$, $b_{\emptyset} = 0$, $a_{\overline{K}_d} = (1+\lambda)^{-d}$, $b_{\overline{K}_d} = 1$.
The solution $p_{\emptyset} = \frac{1-(1+\lambda)^{-d}}{2 -(1+ \lambda)^{-d}} $ and $p_{\overline {K_d}} = \frac{1}{2 -(1+ \lambda)^{-d}}$ is the unique feasible solution supported only on $\emptyset $ and $\overline{K}_d$, and gives the objective value $\frac{ \lambda (1+\lambda)^{d-1}}{2 (1+\lambda)^d -1}$. Our claim is that this is the unique maximum.
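One can verify directly that this candidate solution is feasible and compute its objective value:
\begin{align*}
p_{\emptyset} (a_{\emptyset} - b_{\emptyset}) + p_{\overline{K}_d} (a_{\overline{K}_d} - b_{\overline{K}_d}) &= \frac{1-(1+\lambda)^{-d}}{2-(1+\lambda)^{-d}} + \frac{(1+\lambda)^{-d}-1}{2-(1+\lambda)^{-d}} = 0\,,\\
\frac{\lambda}{2(1+\lambda)}\Big( p_{\emptyset} + p_{\overline{K}_d}\big(1+(1+\lambda)^{-d}\big)\Big) &= \frac{\lambda}{(1+\lambda)\big(2-(1+\lambda)^{-d}\big)} = \frac{\lambda(1+\lambda)^{d-1}}{2(1+\lambda)^d - 1}\,.
\end{align*}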
The dual linear program is
\begin{align*}
\alpha^* = &\min \frac{\lambda}{2(1+\lambda)} \Lambda_1 \text { s.t. } \\
&\Lambda_1 + \Lambda_2 (a_C - b_C) \ge a_C + b_C \, \, \, \forall C \in \mathcal C_d
\end{align*}
where $\Lambda_1, \Lambda_2$ are the decision variables.
Guided by the candidate solution above we set $\Lambda_1 = \frac{2}{2- (1+\lambda)^{-d}}$ and $\Lambda_2= 1-\Lambda_1$. With these values, the dual constraints corresponding to $C = \emptyset, \overline{K}_d$ hold with equality, and the objective value is $\frac{\lambda}{2(1+\lambda)} \Lambda_1=\frac{ \lambda (1+\lambda)^{d-1}}{2 (1+\lambda)^d -1}$. To finish the proof we claim that $\Lambda_1, \Lambda_2$ are feasible for the dual program, which means showing that
\[ \Lambda_1 + \Lambda_2 (a_C - b_C) \ge a_C + b_C \]
for all $C \in \mathcal C_d$. We will show that in fact the inequality holds strictly for all $C \in \mathcal C_d \setminus \{\emptyset, \overline {K}_d \}$. Substituting our values of $\Lambda_1, \Lambda_2$, this inequality reduces to
\begin{equation}
\label{eq:indFact}
\frac{\lambda P_C^\prime(\lambda)}{P_C(\lambda)-1} < \frac{ \lambda d (1+\lambda)^{d-1}}{(1+\lambda)^d- 1} \, .
\end{equation}
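In more detail, substituting $\Lambda_2 = 1-\Lambda_1$, the strict form of the dual constraint rearranges, for any $C \neq \emptyset$ (so that $P_C(\lambda) > 1$ and $b_C > 0$), to
\begin{align*}
\Lambda_1\big(1-a_C+b_C\big) > 2b_C
\quad &\Longleftrightarrow \quad
\frac{1-a_C}{b_C} > \frac{2}{\Lambda_1} - 1 = 1-(1+\lambda)^{-d}\\
&\Longleftrightarrow \quad
\frac{(1+\lambda)P_C^\prime(\lambda)}{d\big(P_C(\lambda)-1\big)} < \frac{(1+\lambda)^d}{(1+\lambda)^d-1}\,,
\end{align*}
which is \eqref{eq:indFact} after multiplying both sides by $\frac{d\lambda}{1+\lambda}$.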
The LHS of (\ref{eq:indFact}) is the expected size of the random independent set from the hard-core model on $C$ conditioned on it being non-empty. The RHS is the same quantity for $\overline{K}_d$.
Inequality \eqref{eq:indFact} follows directly from the observation that, over all $C\in \mathcal C_d$, the graph $\overline{K}_d$ maximizes the ratio of consecutive coefficients of the polynomial $P_C$.
Let $t_i=\binom{d}{i}$, the coefficient of $\lambda^i$ in $P_{\overline{K}_d}$, and write $P_C= 1+\sum_{i=1}^d r_i\lambda^i$.
We have $(i+1)t_{i+1}=(d-i)t_i$ and $(i+1)r_{i+1}\leq(d-i)r_i$: counting pairs of an independent set of size $i+1$ together with a distinguished vertex of it gives $(i+1)r_{i+1}$, and each independent set of size $i$ in a graph on at most $d$ vertices extends to such a pair in at most $d-i$ ways.
To verify \eqref{eq:indFact} we show that for each $k$ the coefficient $s_k$ of $\lambda^k$ in the polynomial $(\lambda P'_{\overline{K}_d})(P_C-1) - (\lambda P'_C)(P_{\overline{K}_d}-1)$ is non-negative.
We have
\begin{align*}
s_k &= \sum_{i=1}^{k-1}it_ir_{k-i} - \sum_{i=1}^{k-1} it_{k-i}r_i\\
&= \sum_{i=1}^{\lfloor k/2\rfloor } (k-2i)(t_{k-i}r_i - t_ir_{k-i})\,.
\end{align*}
Observe that each term of the above sum giving $s_k$ is non-negative: the coefficient inequalities above give $r_{i+1}/t_{i+1}\leq r_i/t_i$ for every $i$, so $t_{k-i}r_i \geq t_ir_{k-i}$ whenever $i\leq k-i$.
Furthermore, if $P_C\neq P_{\overline{K}_d}$ then at least one $s_k$ must be positive, which completes the claim.
To see the optimizer is unique note that strict inequality in the dual constraints corresponding to configurations besides $\emptyset$ and $\overline{K}_d$ implies by complementary slackness that any optimal solution is supported on these two configurations, and there is a unique such distribution.
\section{Proof of Theorem~\ref{thm:lowerbound}}
\label{sec:LB}
To prove Theorem~\ref{thm:lowerbound} we will use the fact that occupancies of vertices on the same side of a bipartite graph are positively correlated:
\begin{lemma}
\label{lem:bipariteCor}
Let $G$ be a bipartite graph with bipartition $\mathcal E \cup \mathcal O$. For any $r \ge 2$, let $u_1, u_2, \dots , u_r \in \mathcal E$. Then
\[ \Pr[ \{ u_1 , \dots , u_r \} \subseteq I] \ge \prod_{i=1}^r p_{u_i} \]
in the hard-core model for any $\lambda$. Similarly, let $U$ be the random set of uncovered vertices of $G$. Then
\[ \Pr[ \{ u_1 , \dots , u_r \} \subseteq U] \ge \prod_{i=1}^r q_{u_i} \]
Moreover, the inequalities are strict when $\lambda >0$ and at least two of the $u_i$'s are in the same connected component of $G$.
\end{lemma}
The first part of the lemma follows by induction on $r$ from the fact that $\Pr[ u_1, u_2 \in I] > \Pr[u_1 \in I] \cdot \Pr[u_2 \in I]$ when $u_1, u_2$ are in the same connected component and in the same part of the bipartition of $G$. In~\cite{van1994percolation} this is shown to be a consequence of the FKG inequality; see also~\cite{fill2001stochastic} and Corollary 1.5 of~\cite{bencs2014christoffel}. An intuitive reason for this fact (which can be turned into a rigorous argument using Weitz's tree~\cite{weitz2006counting}), is that conditioning on the event that a vertex $v$ is occupied forbids its neighbors from being in the independent set; conditioning on the event that $v$ is not occupied increases the probability each of its neighbors are occupied, and these effects propagate through the bipartite graph.
To prove the second part of the lemma, note that $p_{u_i} = \frac{\lambda}{1+\lambda} q_{u_i}$, and for $u_1, \dots , u_r \in \mathcal E$, $\Pr[ \{u_1, \dots , u_r \} \subseteq I] = (\frac{\lambda}{1+ \lambda} )^r \Pr [ \{ u_1, \dots , u_r \} \subseteq U]$, since there are no edges between the $u_i$'s. Then the desired inequality follows from the first part of the lemma.
\begin{proof}[Proof of Theorem~\ref{thm:lowerbound}]
By vertex transitivity, for all $v$, $p_v=\alpha$ and $q_v = \frac{1+\lambda}{\lambda} \alpha$. Fix a vertex $v$ and let $Y$ be the number of uncovered neighbors of $v$. For $u \sim v$ let $Y_u$ be the indicator that $u$ is uncovered.
\begin{align*} \alpha &= \frac{\lambda}{1+\lambda}\mathbb{E}[(1+\lambda)^{-Y}]\\
&= \frac{\lambda}{1+\lambda}\mathbb{E}[(1+\lambda)^{- \sum_{u \sim v}Y_u}]\\
&= \frac{\lambda}{1+\lambda}\Big(\alpha + (1-\alpha) \mathbb{E}[(1+\lambda)^{- \sum_{u \sim v}Y_u}|v \notin I]\Big)\,, \text{ hence} \\
\frac{\alpha}{\lambda(1-\alpha)} &= \mathbb{E}[(1+\lambda)^{- \sum_{u \sim v}Y_u}|v \notin I]\,.
\end{align*}
In the third line we used that if $v \in I$ then every neighbor of $v$ is covered by $v$ itself, so $Y=0$. Now for $u \sim v$, let $\tilde Y_u$ be the indicator that $u$ is uncovered, conditioned on the event $\{ v \notin I \}$. For each $u$, $\tilde Y_u$ has a Bernoulli($p$) distribution, where $p =\frac{1+ \lambda}{\lambda} \frac{\alpha}{1-\alpha}$, and by Lemma~\ref{lem:bipariteCor} applied to $G \setminus v$, the $\tilde Y_u$'s are positively correlated. This gives
\begin{align*}
\frac{\alpha}{\lambda (1-\alpha)} = \mathbb{E} [ (1+\lambda)^{-\sum_{u \sim v} \tilde Y_u } ] & > \prod_{u \sim v} \mathbb{E} [ (1+\lambda)^{- \tilde Y_u} ] = \left(1-p + \frac{p}{1+\lambda} \right )^d=\left( \frac{1-2 \alpha}{1-\alpha} \right) ^d\,.
\end{align*}
The function $\frac{\alpha}{\lambda (1-\alpha)}$ is increasing in $\alpha$, the function $\left( \frac{1-2 \alpha}{1-\alpha} \right) ^d$ is decreasing in $\alpha$, and the two functions are equal at $\alpha= \alpha_{T_d}(\lambda)$, so we conclude that $\alpha > \alpha_{T_d}(\lambda)$.
\end{proof}
\section{Proof of Theorem~\ref{thm:matchingGeneral}}
\label{sec:matchingproofs}
Recall that we use the notation $M_G(\lambda)$ for the matching polynomial of a graph $G$, and let $H$ be a matching drawn from the monomer-dimer model at fugacity $\lambda$.
We refer to an edge as covered if an incident edge is in the random matching $H$.
Let $e$ be an edge of $G$ chosen uniformly at random, with an arbitrary left/right orientation chosen at random. In applying the method to matchings we introduce a subtle change of presentation. We now define the \textit{free neighborhood} $C$ to be the subgraph of $G$ consisting of all edges incident to $e$ that are not covered by any edge other than $e$ and the edges incident to $e$. When considering independent sets, the free neighborhood was empty if the random vertex $v$ was in the independent set. Here the presence or absence in the matching of $e$ or an edge adjacent to $e$ does not affect $C$.
Given $e$ and $C$, we use the term \textit{externally uncovered neighbor} to refer to an edge of $C$ incident to $e$.
The possible free neighborhoods $C$ are completely defined by three parameters: $L,R,K\in\{0,1,\dotsc,d-1\}$, counting the number of left and right neighboring edges in $C$ with an endpoint of degree $1$, and the number of triangles formed by $e$ and $C$.
An example is pictured below.
\begin{center}
\begin{tikzpicture}[line width=1pt,every node/.style={circle,fill,draw,inner sep=0,minimum size=4pt}]
\node (lroot) at (-1,0) {};
\node (rroot) at ( 1,0) {};
\node (k1) at ($(0,1.732050807568877)$) {};
\node (l1) at ($(lroot)+(120:2cm)$) {};
\node (l2) at ($(lroot)+(180:2cm)$) {};
\node (l3) at ($(lroot)+(240:2cm)$) {};
\node (r1) at ($(rroot)+(30:2cm)$) {};
\node (r2) at ($(rroot)+(-30:2cm)$) {};
\node[draw=none, fill=none] (labele) at (0,-0.3) {$e$};
\node[draw=none, fill=none] (labelk) at (0,2.2) {$K=1$};
\draw[style=dotted] (lroot) --(rroot);
\draw (lroot) -- (k1);
\draw (lroot) -- (l1);
\draw (lroot) -- (l2);
\draw (lroot) -- (l3);
\draw (rroot) -- (k1);
\draw (rroot) -- (r1);
\draw (rroot) -- (r2);
\draw [decorate,decoration={brace,amplitude=10pt}] ($(r1)+(0.1,5pt)$) -- ($(r2)+(0.1,-5pt)$) node [fill=none,draw=none,midway,xshift=1cm] {$R=2$};
\draw [decorate,decoration={brace,amplitude=10pt,mirror}] ($(l1)+(-1.2cm,5pt)$) -- ($(l3)+(-1.2cm,-5pt)$) node [fill=none,draw=none,midway,xshift=-1cm] {$L=3$};
\end{tikzpicture}
\end{center}
Let $q(i,j,k) = \Pr[L = i, R=j, K=k]$, and denote the matching polynomial for such a free neighborhood by $M_{i,j,k}$, where we can compute
\[ M_{i,j,k}(\lambda) = 1 + (i+j+2k)\lambda + \big[k^2 + k(i+j-1) +ij\big]\lambda^2 \,. \]
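For instance, for the free neighborhood pictured above, with $L=3$, $R=2$, and $K=1$, this gives
\[ M_{3,2,1}(\lambda) = 1 + 7\lambda + 11\lambda^2\,, \]
corresponding to the $7$ edges of the free neighborhood and its $11$ matchings of size two.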
Conditioned on the event that the free neighborhood of $e$ is $C$, the random matching $H$ restricted to $e$ and its incident edges is distributed according to the monomer-dimer model on the graph $C$ with the edge $e$ added; the partition function of this model is $\lambda + M_C(\lambda)$, with the term $\lambda$ corresponding to the event that $e \in H$.
We can write $\alpha_M:=\alpha^M_G(\lambda)$ as the expected fraction of edges incident to $e$ that are in the matching, as each edge in a $d$-regular graph is incident to exactly $2(d-1)$ other edges:
\begin{align*}
\alpha_M & = \frac{2}{dn} \sum_e \sum_{f \sim e} \frac{1}{2(d-1)} \Pr[f \in H] \\
&= \mathbb{E} \left[ \frac{\lambda M_C^\prime(\lambda)}{2 (d-1) (\lambda + M_C(\lambda))} \right ] \\
&= \sum_{i,j,k} q(i,j,k) \frac{\lambda M^\prime_{i,j,k}(\lambda) }{ 2 (d-1) (\lambda + M_{i,j,k}(\lambda))} \, ,
\end{align*}
where the expectation in the second line is over the random free neighborhood $C$ resulting from the two-part experiment described above. If we write the expected fraction of occupied neighbors of $e$ in a configuration as $\overline \alpha_M(i,j,k)=\frac{1}{2(d-1)}\frac{\lambda M_{i,j,k}'}{\lambda + M_{i,j,k}}$, the above expression can be written $\alpha_M = \sum_{i,j,k}q(i,j,k)\overline\alpha_M(i,j,k)$.
\subsection{The linear program for matchings}
We now introduce additional constraints before optimizing $\alpha_M$ over distributions of free neighborhoods. We could write multiple expressions for $\alpha_M$, equate them, and solve the maximization problem as we did for independent sets. Using three expressions for $\alpha_M$ we were able to prove Theorem~\ref{thm:matchingGeneral} for the case $d=3$, in which the optimal distribution is supported on only three configurations, namely $(0,0,0)$, $(1,1,0)$, and $(2,2,0)$. But in general we need at least $d-1$ constraints (in addition to the constraint that the $q(i,j,k)$'s sum to one) as the distribution induced by $K_{d,d}$ is supported on $d$ configurations.
Instead, we write, for all $t$, two expressions for the marginal probability that the number of uncovered neighbors on a randomly chosen side of a random edge is equal to $t$. We find the two expressions by choosing uniformly: a random edge $e$, a random side left or right, and $f$, a random neighboring edge of $e$ from the given side.
We first calculate the probability that $e$ has $t$ uncovered neighbors on the side containing $f$, then we calculate the probability that $f$ has $t$ uncovered neighbors on the side containing $e$.
Given a free neighborhood $C$ with $L=i$, $R=j$, and $K=k$, $e$ can have $0, 1, i+k-1,$ or $i+k$ uncovered left neighbors; an edge $f$ to the left of $e$ can have $0, 1, i+k-2, i+k-1, i+k$, or $i+k+1$ uncovered right neighbors (depending on whether $f$ itself is in the free neighborhood $C$).
Let $\gamma^e_{i,j,k}(t) = \Pr[ e \text { has } t \text{ uncovered left neighbors } | L=i, R=j, K=k]$ and $\gamma^f_{i,j,k}(t) = \Pr[ f \text { has } t \text{ uncovered right neighbors } | L=i, R=j, K=k]$, where $f$ is a uniformly chosen left neighbor of $e$.
\begin{claim}
Let $\beta_t = 1+ t \lambda$. Then we have
\begin{align*}\numberthis\label{eq:gamE}
\gamma^e_{i,j,k}(t) &= \frac{1}{\lambda + M_{i,j,k}} \Bigl( \mathbf 1_{t=0} \cdot \lambda + \mathbf 1_{t=1} \cdot [i \lambda \beta_{j+k} + k \lambda \beta_{j+k-1}] \\
&\hspace{3.7cm}+ \mathbf 1_{t=i+k} \cdot \beta_j + \mathbf 1_{t=i+k-1} \cdot k \lambda \Bigr) \\
\numberthis\label{eq:gamF}
\gamma^f_{i,j,k}(t) &= \frac{1}{(d-1)(\lambda + M_{i,j,k})} \Big( \mathbf 1_{t=0} \cdot \left[i \lambda \beta_{j+k} + k \lambda \beta_{j+k-1}\right] \\*
&\hspace{1cm}+ \mathbf 1_{t=1} \cdot \left[(d-1)\lambda + (d-2)(i \lambda \beta_{j+k} + k \lambda \beta_{j+k-1} ) \right] \\*
&\hspace{1cm}+ \mathbf 1_{t=i+k-2} \cdot \left[(i+k-1)k\lambda\right] + \mathbf 1_{t=i+k-1} \cdot \left[(d-i-k)k\lambda+(i+k)j\lambda\right] \\*
&\hspace{1cm}+ \mathbf 1_{t=i+k} \cdot \left[(d-1-i-k)j\lambda+(i+k)\right] + \mathbf 1_{t=i+k+1} \cdot \left[d-1-i-k\right] \Bigr)\,.
\end{align*}
\end{claim}
\begin{proof}
To compute the functions $\gamma^e_{i,j,k}(t)$ we consider the following disjoint events: 1) none of $e$, the left edges, and the right edges from triangles is in the matching; 2) $e$ is in the matching; 3) a left edge is in the matching; 4) no left edge is in the matching, but a right edge from a triangle is in the matching. These events happen with probability $\frac{\beta_j}{\lambda + M_{i,j,k}}, \frac{\lambda}{\lambda + M_{i,j,k}}, \frac{i \lambda \beta_{j+k} + k \lambda \beta_{j+k-1}}{\lambda + M_{i,j,k}},$ and $ \frac{k \lambda }{\lambda + M_{i,j,k}}$ respectively. Under these events the number of uncovered left neighbors of $e$ is $i+k, 0, 1,$ and $i+k-1$ respectively. This gives \eqref{eq:gamE}.
To compute the functions $\gamma^f_{i,j,k}(t)$ we refine the above events to include the possible choices of $f$: $f$ can be an edge outside the free neighborhood with probability $(d-1-i-k)/(d-1)$; an edge in the free neighborhood but not in a triangle with probability $i/(d-1)$; in the free neighborhood and in a triangle with probability $k/(d-1)$. If a left edge is in the matching we choose it as $f$ with probability $1/(d-1)$, and if a right edge in a triangle is in the matching we choose $f$ adjacent to it with probability $1/(d-1)$. Computing the number of uncovered neighbors of $f$ in each case gives \eqref{eq:gamF}.
\end{proof}
We now define a linear program with constraints imposing that the two different ways of writing the marginal probabilities are equal. The marginal probability constraint for $t=d-1$ is redundant and we omit it. To account for the equal chance that $f$ is chosen from the left side of $e$ and the right side of $e$, we average $\gamma^f_{i,j,k}(t)$ and $\gamma^f_{j,i,k}(t)$, and $\gamma^e_{i,j,k}(t)$ and $\gamma^e_{j,i,k}(t)$.
\begin{align*}\label{eq:matchlinprogno3}
\alpha_M^* = \max &\sum_{i,j,k} q(i,j,k) \overline \alpha_M(i,j,k) \, \, \, \text{ subject to }\\
& q(i,j,k) \ge 0 \, \, \forall \, \,i,j,k\\
&\sum_{i,j,k} q(i,j,k) = 1 \\
&\sum_{i,j,k} q(i,j,k) \frac{1}{2}\Big[\gamma^f_{i,j,k}(t)+\gamma^f_{j,i,k}(t) -\gamma^e_{i,j,k}(t)-\gamma^e_{j,i,k}(t)\Big] = 0 \, \, \,\forall \, t= 0, \dots , d-2 \, .
\end{align*}
Disjoint unions of copies of $K_{d,d}$ are the only graphs that induce a distribution $q(i,j,k)$ supported on triples with $i=j$ and $k=0$. This gives us a candidate solution to the linear program.
The dual program is
\begin{align*}
\alpha^*_M ={} &\min \, \Lambda_p \, \, \text{ subject to }\\
& \Lambda_p - \overline \alpha_M(i,j,k) +\sum_{t=0}^{d-2} \Lambda_t \frac{1}{2}\Big[\gamma^f_{i,j,k}(t)+\gamma^f_{j,i,k}(t) -\gamma^e_{i,j,k}(t)-\gamma^e_{j,i,k}(t)\Big] \ge 0 \, \, \forall \, \, i,j,k \, .
\end{align*}
To show that $K_{d,d}$ is optimal, we find values for the dual variables $\Lambda_0, \dotsc, \Lambda_{d-2}$ so that the dual constraints hold with $\Lambda_p = \alpha^M_{K_{d,d}}(\lambda)$. To find such values, we solve the system of equations generated by setting equality in the constraints corresponding to $i=j$ and $k=0$ and solve for the variables $\Lambda_t$, $t=0, \dots, d-2$.
With this choice of values for the dual variables, we start by simplifying the form of the dual constraints with a substitution coming from equality in the $(i,j,k)=(0,0,0)$ constraint. The $(0,0,0)$ dual constraint has the simple form
\begin{equation*}
\Lambda_0-\Lambda_1=\alpha^M_{K_{d,d}}\, .
\end{equation*}
Moreover, observe that from the $\mathbf 1_{t=0}$ and $\mathbf 1_{t=1}$ terms in $\gamma_{i,j,k}^e(t)$ and $\gamma_{i,j,k}^f(t)$, every dual constraint contains the term
\begin{align*}
\left[\overline \alpha_M(i,j,k) - \frac{\lambda}{(\lambda+M_{i,j,k})}\right](\Lambda_0-\Lambda_1)
= \left[\overline \alpha_M(i,j,k) - \frac{\lambda}{(\lambda+M_{i,j,k})}\right]\alpha^M_{K_{d,d}} \, .
\end{align*}
With this simplification, we multiply through by $2(d-1)(\lambda+M_{i,j,k})$ and expand $\overline \alpha_M(i,j,k)$ terms to obtain the following form of the dual constraints:
\begin{align*}
\numberthis
\label{eq:dual2}
\alpha^M_{K_{d,d}}\big[\lambda &M'_{i,j,k}+2(d-1)M_{i,j,k}\big]-\lambda M'_{i,j,k}\\
&+\Lambda_{i+k-2}\cdot(i+k-1)k\lambda\\
&+\Lambda_{i+k-1}\cdot\left[(d-i-k)k\lambda+(i+k)j\lambda-(d-1)k\lambda\right]\\
&+\Lambda_{i+k}\cdot\left[(d-1-i-k)j\lambda+i+k-(d-1)\beta_j\right]\\
&+\Lambda_{i+k+1}\cdot(d-1-i-k)\\
&+\Lambda_{j+k-2}\cdot(j+k-1)k\lambda\\
&+\Lambda_{j+k-1}\cdot\left[(d-j-k)k\lambda+(j+k)i\lambda-(d-1)k\lambda\right]\\
&+\Lambda_{j+k}\cdot\left[(d-1-j-k)i\lambda+j+k-(d-1)\beta_i\right]\\
&+\Lambda_{j+k+1}\cdot(d-1-j-k) \geq 0 \, .
\end{align*}
The $(i,i,0)$ equality constraints now read
\begin{align}\label{eq:equalityDC}\textstyle
\alpha^M_{K_{d,d}}\beta_i\big(\beta_i+\frac{i\lambda}{d-1}\big)-\frac{i\lambda\beta_i}{d-1}+\Lambda_{i-1}\frac{i^2\lambda}{d-1}-\Lambda_i\frac{d-1-i+i^2\lambda}{d-1}+\Lambda_{i+1}\frac{d-1-i}{d-1}=0 \, .
\end{align}
With this we can write $\Lambda_{i+k+1}$ in terms of $\Lambda_{i+k}$ and $\Lambda_{i+k-1}$, and similarly for $\Lambda_{j+k+1}$.
Substituting this into \eqref{eq:dual2} and dividing by $\lambda$ we derive the simplified form of the dual constraints:
\begin{align*}\label{eq:niceDC}
\numberthis\lambda\big[&(i-j)^2+2k\big](1-d \alpha^M_{K_{d,d}})\\
&+\Lambda_{i+k-2}(i+k-1)k +\Lambda_{i+k-1}[k+(i+k)(j-i-2k)]\\
&+\Lambda_{i+k}(i+k)(i+k-j)\\
&+\Lambda_{j+k-2}(j+k-1)k +\Lambda_{j+k-1}[k+(j+k)(i-j-2k)]\\
&+\Lambda_{j+k}(j+k)(j+k-i) \geq 0 \, .
\end{align*}
Write $L(i,j,k)$ for the LHS of this inequality.
The marginal constraint for $t=d-1$ was omitted, but we nonetheless introduce $\Lambda_{d-1}:=0$ in order to simplify the presentation of the argument. The $(d-1,d-1,0)$ equality constraint gives $\Lambda_{d-2}$ directly:
\begin{align*}
\Lambda_{d-2}&=\frac{1}{(d-1)\lambda}\left[\lambda + (d-1)\lambda^2 -\alpha^M_{K_{d,d}} \beta_{d-1}\beta_d\right] \, .
\end{align*}
With $\Lambda_{d-1}$, $\Lambda_{d-2}$, and the recurrence relation \eqref{eq:equalityDC} the dual variables are fully determined, and this is all the proof requires; we do not give a closed-form expression for $\Lambda_t$, as the values enter only through an induction below.
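As a consistency check in the smallest case $d=2$: here $\Lambda_1 = \Lambda_{d-1} = 0$, and using $M_{K_{2,2}}(\lambda) = 1+4\lambda+2\lambda^2$ and $\alpha^M_{K_{2,2}}(\lambda) = \frac{\lambda M_{K_{2,2}}'(\lambda)}{4 M_{K_{2,2}}(\lambda)} = \frac{\lambda(1+\lambda)}{1+4\lambda+2\lambda^2}$, the expression for $\Lambda_{d-2}$ gives
\[ \Lambda_0 = 1+\lambda - \frac{(1+\lambda)^2(1+2\lambda)}{1+4\lambda+2\lambda^2} = \frac{\lambda(1+\lambda)}{1+4\lambda+2\lambda^2} = \alpha^M_{K_{2,2}}(\lambda)\,, \]
in agreement with the $(0,0,0)$ dual constraint $\Lambda_0 - \Lambda_1 = \alpha^M_{K_{d,d}}$.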
We now reduce the problem of showing that the dual constraints \eqref{eq:niceDC} corresponding to triples $(i,j,k)$ with $k > 0$ or $i\ne j$ hold with strict inequality to showing that a particular function is increasing. We go on to prove this fact in Claims~\ref{claim:Fexp} and~\ref{claim:Fdt}.
Putting $k=0$ into \eqref{eq:niceDC} gives:
\begin{align*}\label{eq:k0dualconst}
\frac{L(i,j,0)}{(j-i)} &= \lambda(j-i)(1-d \alpha^M_{K_{d,d}})+i\Lambda_{i-1}-i\Lambda_{i}-j\Lambda_{j-1}+j\Lambda_{j}\\
&= F_d(j)-F_d(i)
\end{align*}
where
\begin{equation}\label{eq:Fdef}
F_d(t) := t\left[\lambda(1-d \alpha^M_{K_{d,d}})+\Lambda_t-\Lambda_{t-1}\right] \, .
\end{equation}
From \eqref{eq:niceDC} we obtain
\begin{align*}
L(i-1,j-1,k+1)-L(i,j,k) = F_d(i+k) - F_d(i+k-1) + F_d(j+k) - F_d(j+k-1).
\end{align*}
Therefore if $F_d(t)$ is strictly increasing, we have $L(i,j,0) > 0$ for $i\ne j$, and $L(i-1,j-1,k+1) > L(i,j,k) > \cdots > L(i+k,j+k,0) \geq 0$.
We first find an explicit expression for $F_d(t)$. Recall that we write $M_{K_{t,t}}$ for the matching polynomial of the graph $K_{t,t}$.
\begin{claim}\label{claim:Fexp} For all $d\geq 2$ and $1\leq t\leq d-1$,
\begin{align}\label{eq:explicitF}
F_d(t) = \frac{t (d - 1)}{M_{K_{d,d}}}\sum_{\ell=t-1}^{d-2}\frac{(d-1-t)!}{(\ell+1-t)!}\lambda^{d-\ell}M_{K_{\ell,\ell}}\,.
\end{align}
\end{claim}
\begin{proof}
We will use the following two facts:
\begin{gather}
\label{eq:LaGuerre}
\MK{d} - \beta_{2d-1} \MK{d-1} + (d-1)^2\lambda^2 \MK{d-2} = 0 \\
\label{eq:MKd}
\alpha^M_{K_{d,d}} = \frac{\lambda \MK{d-1}}{\MK{d}} \, .
\end{gather}
The first is a Laguerre polynomial identity, verifiable by hand; the second is a short calculation. The equality dual constraint \eqref{eq:equalityDC} implies:
\begin{align}\label{eq:Frec}
(d-1-t)F_d(t+1)=(t+1) [ t\lambda F_d(t)+(d-1)\lambda-(d-1)\alpha^M_{K_{d,d}} \beta_{d+t} ] \,.
\end{align}
We first show that the right hand side of \eqref{eq:explicitF} satisfies the above recurrence relation. Using \eqref{eq:MKd} this amounts to showing that the following expression is equal to zero for all $d\geq 2$ and $1\leq t\leq d-1$:
\[
\Phi_d(t):=(d-1-t)! \Bigg( \sum_{\ell=t}^{d-2}\frac{\lambda^{d-\ell}M_{K_{\ell,\ell}}}{(\ell-t)!}-t^2 \sum_{\ell=t-1}^{d-2}\frac{\lambda^{d+1-\ell}M_{K_{\ell,\ell}}}{(\ell+1-t)!} \Bigg )-\lambda( M_{K_{d,d}}-\beta_{d+t}M_{K_{d-1,d-1}}) \,.
\]
We proceed by induction on $d$. When $d=2$, $\Phi_2(1)$ is easily verified to be zero. For the inductive step, note that
\begin{align*}
\Phi_{d+1}(t)
&=\lambda\Big((d-t)\Phi_d(t) -\MK{d+1}+\beta_{2d+1}\MK{d}- d^2\lambda^2\MK{d-1}\Big) \,.
\end{align*}
By the induction hypothesis and \eqref{eq:LaGuerre} the result follows. To complete the proof of the claim it suffices to show that \eqref{eq:explicitF} holds for $t=d-1$. Recalling that
\begin{align*}
\Lambda_{d-1} &= 0 \\
\Lambda_{d-2} &= \frac{1}{d - 1} + \lambda - \frac{\alpha^M_{K_{d,d}}}{(d-1)\lambda}\beta_d\beta_{d-1} \,,
\end{align*}
substituting into \eqref{eq:Fdef}, and using \eqref{eq:LaGuerre} and \eqref{eq:MKd} we have
\begin{align*}
F_d(d-1)
&= (d-1)\left[\lambda(1-d \alpha^M_{K_{d,d}})-\frac{1}{d-1}-\lambda+\frac{\alpha^M_{K_{d,d}}}{(d-1)\lambda}\beta_d\beta_{d-1}\right]\\
&=\frac{\alpha^M_{K_{d,d}}}{\lambda}\beta_{2d-1}-1\\
&=\frac{1}{M_{K_{d,d}}}\left[\beta_{2d-1}M_{K_{d-1,d-1}}-M_{K_{d,d}}\right]\\
&=\frac{(d-1)^2\lambda^2M_{K_{d-2,d-2}}}{M_{K_{d,d}}} \, ,
\end{align*}
verifying \eqref{eq:explicitF} for $t=d-1$.
\end{proof}
Using Claim \ref{claim:Fexp} we prove the following.
\begin{claim}
\label{claim:Fdt}
$F_d(t)$ is strictly increasing as a function of $t$.
\end{claim}
\begin{proof}
To prove that $F_d(t)$ is increasing, we show that
\begin{align*}
R_d(t)
&:=\frac{\MK{d}}{(d-1)}\cdot\frac{F_d(t+1)-F_d(t)}{(d-2-t)!}\\
&=(t+1)\sum_{\ell=t}^{d-2}\frac{\lambda^{d-\ell}}{(\ell-t)!}\MK{\ell} -t(d-1-t)\sum_{\ell=t-1}^{d-2}\frac{\lambda^{d-\ell}}{(\ell+1-t)!}\MK{\ell}
\end{align*}
is positive for each $t$ with $1\leq t\leq d-2$. We do this by fixing $t$ and inducting on $d$ from $t+2$ upwards. A useful inequality will be $\MK{t} > t\lambda \MK{t-1}$ which comes from only counting matchings of $K_{t,t}$ that use a specific vertex. Iterating this inequality we obtain
\begin{align}\label{eq:crudebound}
\MK{t} > \frac{t!}{\ell !}\lambda^{t-\ell} \MK{\ell} \hbox{ for $0 \leq \ell \leq t-1$}\,.
\end{align}
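For example, with $t=3$ and $\ell = 1$, \eqref{eq:crudebound} reads
\[ M_{K_{3,3}}(\lambda) = 1+9\lambda+18\lambda^2+6\lambda^3 \;>\; 6\lambda^2\, M_{K_{1,1}}(\lambda) = 6\lambda^2+6\lambda^3\,, \]
which indeed holds for all $\lambda>0$.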
For the base case of our induction, $d = t+2$, we have $R_d(d-2)=\lambda^2 \big[ \MK{d-2}-(d-2)\lambda \MK{d-3}\big]$ which by \eqref{eq:crudebound} is positive.
For the inductive step we have
\begin{align*}
R_{d+1}(t)=\lambda \bigg[ R_d(t) + \frac{\lambda}{(d-1-t)!}\MK{d-1} - \sum_{\ell=t-1}^{d-2}\frac{t \lambda^{d-\ell}}{(\ell-t+1)!}\MK{\ell} \bigg]
\end{align*}
and so it is sufficient to show
\begin{align}\label{eq:Rdt}
\sum_{\ell=t-1}^{d-2}\frac{t \lambda^{d-\ell}}{(\ell+1-t)!}\MK{\ell} < \frac{\lambda}{(d-1-t)!}\MK{d-1}\,.
\end{align}
We use the inequality \eqref{eq:crudebound} in each term of the sum to see that the LHS of \eqref{eq:Rdt} is less than
\begin{align*}
\sum_{\ell=t-1}^{d-2}\frac{t \ell!\lambda}{(\ell+1-t)!(d-1)!}\MK{d-1}
\end{align*}
and so
\begin{align*}
\sum_{\ell=t-1}^{d-2}\frac{t \lambda^{d-\ell}}{(\ell+1-t)!}\MK{\ell} &< \sum_{\ell=t-1}^{d-2}\frac{t \ell!\lambda}{(\ell+1-t)!(d-1)!}\MK{d-1}\\
&= \frac{\lambda \MK{d-1}}{(d-1-t)!}\cdot \sum_{\ell=t-1}^{d-2}\frac{t \ell!(d-1-t)!}{(\ell+1-t)!(d-1)!} \\
&= \frac{\lambda \MK{d-1}}{(d-1-t)!}\cdot \binom{d-1}{t}^{-1}\cdot \sum_{\ell=t-1}^{d-2}\binom{\ell}{t-1}\\
&= \frac{\lambda \MK{d-1}}{(d-1-t)!}\,,
\end{align*}
therefore \eqref{eq:Rdt} holds as required.
\end{proof}
This completes the proof of dual feasibility and shows our candidate solution to the primal program is optimal. The uniqueness of the solution follows from two facts. First, strict inequality in the dual constraints outside of the $(i,i,0)$ constraints implies, by complementary slackness, that the support of any optimal solution in the primal is contained in the set of $(i,i,0)$ configurations. Second, the distribution induced by $K_{d,d}$ is the unique distribution satisfying the constraints with such a support.
This follows from the fact that $\Lambda_i$ is uniquely determined by \eqref{eq:equalityDC} where we have set the $(i,i,0)$ dual constraints to hold with equality, which in turn shows that the relevant $d \times d$ submatrix of the constraint matrix is full rank. This proves Theorem~\ref{thm:matchingGeneral}.
\section{Independent sets and matchings of a given size}
\label{sec:givensize}
Let $i_k(G)$ be the number of independent sets of size $k$ in a graph $G$, and $m_k(G)$ the number of matchings of size $k$. Kahn~\cite{kahn2001entropy} conjectured that $i_k(G)$ is maximized over $d$-regular, $n$-vertex graphs by $H_{d,n}$ for all $k$ (when $2d$ divides $n$), and Friedland, Krop, and Markstr{\"o}m~\cite{friedland2008number} conjectured the same for $m_k(G)$. Previous bounds towards these conjectures were given in \cite{carroll2009matchings,ilinca2013asymptotics,perkins2015birthday}; for $d$ fixed and $k$ linear in $n$, all previous bounds were off the conjectured values by a multiplicative factor exponential in $n$. Here we adapt the method of Carroll, Galvin, and Tetali (and use the above result on the matching polynomial) to give bounds for both problems that are tight up to a factor of $2 \sqrt{n}$, for all $d$ and all $k$.
\begin{theorem}
\label{thm:givensize}
For all $d$-regular graphs $G$ on $n$ vertices (where $2d$ divides $n$),
\begin{align*}
i_k(G) &\le 2 \sqrt{n} \cdot i_k(H_{d,n})
\intertext{and}
m_k(G) &\le 2 \sqrt{n} \cdot m_k(H_{d,n}) \, .
\end{align*}
\end{theorem}
We start with a fact about the independence and matching polynomials of $H_{d,n}$.
\begin{lemma}
\label{lem:lampick}
For all $1 \le k \le n/2$, there exists a $\lambda$ so that
\[ \frac{i_k(H_{d,n}) \lambda^k} {P_{H_{d,n}}(\lambda)} = \Pr_{H_{d,n}} [|I|=k] > \frac{1}{2 \sqrt{n}} \]
and a $\lambda$ so that
\[ \frac{m_k(H_{d,n}) \lambda^k} {M_{H_{d,n}}(\lambda)} = \Pr_{H_{d,n}} [|H|=k] > \frac{1}{2 \sqrt{n}} \, . \]
\end{lemma}
\begin{proof}
The distribution of the size of a random independent set $I$ drawn from the hard-core model on $H_{d,n}$ is log-concave; that is,
\[ \Pr_{H_{d,n}}[|I| =j]^2 > \Pr_{H_{d,n}}[|I| =j+1] \cdot \Pr_{H_{d,n}}[|I| =j-1] \]
for all $1 < j < n/2$. This follows from two facts: the size distribution of the hard-core model on $K_{d,d}$ is log-concave, and the convolution of two log-concave distributions is again log-concave. The first fact is simply the calculation
\[ \binom{d}{j} ^2 > \binom{d}{j-1} \binom {d}{j+1} \, . \]
Now choose $\lambda$ so that $ \Pr_{H_{d,n}}[|I| =k] = \Pr_{H_{d,n}}[|I| =k+1]$. Log-concavity then implies that $ \Pr_{H_{d,n}}[|I| =k]$ is maximal. An explicit computation of the variance for a single $K_{d,d}$ shows that the variance of $|I|$ is at most $n/8$; then via Chebyshev's inequality, with probability at least $2/3$ the size of $I$ is one of at most $\frac{4}{3} \sqrt n$ values, and thus the largest probability of a single size is greater than $\frac{1}{2 \sqrt n}$.
The proof for $ m_k(H_{d,n})$ is the same: the variance of the size of a random matching is also at most $n/8$ (see, e.g. \cite{kahn2000normal}), and log-concavity of the size distribution on $K_{d,d}$ is verified via the inequality
\[
\binom{d}{j} ^4 j!^2 > \binom{d}{j-1}^2 (j-1)! \binom {d}{j+1}^2 (j+1)!
\qedhere
\]
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:givensize}]
Assume for the sake of contradiction that $m_k(G) > 2 \sqrt{n} \cdot m_k(H_{d,n})$. Choose $\lambda$ according to Lemma \ref{lem:lampick}. We have:
\begin{align*}
M_G(\lambda)&\ge m_k(G) \lambda ^k > 2 \sqrt{n} \cdot m_k(H_{d,n}) \lambda^k > M_{H_{d,n}}(\lambda) \, ,
\end{align*}
but this contradicts Theorem~\ref{thm:matchingGeneral}. The case of independent sets is identical.
\end{proof}
The above proof is essentially the same as the proofs in Carroll, Galvin, and Tetali~\cite{carroll2009matchings} with the small observation that $\lambda$ can be chosen so that $k$ is the most likely size of a matching (or independent set) drawn from $H_{d,n}$. The factor $2 \sqrt{n}$ in both cases can surely be improved by using some regularity of the independent set and matching sequences of a general $d$-regular graph; we leave this for future work.
As a consequence, we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markstr{\"o}m~\cite{friedland2008validations}. Fix $d$ and consider an infinite sequence of $d$-regular graphs $\mathcal G_d = G_{1}, G_{2}, \dots$ where $G_n$ has $n$ vertices. For any $\rho \in [0,1/2]$, the $\rho$-monomer entropy is
\[ h_{\mathcal G_d} ( \rho) = \sup_{ \{k_n\}} \limsup_{n \to \infty} \frac{ \log m_{k_n} (G_n)}{n} \, , \]
where the supremum is taken over all integer sequences $ \{ k_n \}$ with $k_n / n \to \rho$. Let $h_d(\rho) = \lim_{n \to \infty} \frac{\log m_{\lfloor \rho n \rfloor } ( H_{d,n}) }{ n}$, where the limit is taken over the sequences of integers divisible by $2d$. Then the conjecture states that for all $\mathcal G_d$ and all $\rho \in [0,1/2]$, $h_{\mathcal G_d} (\rho) \le h_d(\rho)$.
To prove this, first assume $\rho > 0$ since for $\rho=0$ the result is trivially true. Assume for the sake of contradiction that $\limsup \frac{\log m_{k_n} (G_n)}{n} > h_d(\rho) + \epsilon$ for some $\epsilon >0$. Take $N_0$ large enough that for all $n_1 \ge N_0$, divisible by $2d$, $\frac{\log m_{\lfloor \rho n_1 \rfloor } ( H_{d,n_1}) }{ n_1 } < h_d(\rho) +\epsilon/2$. Now take some $n \ge N_0$ with $\frac{\log m_{k_n} (G_n)}{n} > h_d(\rho) + \epsilon$, and let $n_1 = 2d \cdot \lceil n/(2d) \rceil$. By Lemma~\ref{lem:lampick}, we choose $\lambda$ so that $m_{\lfloor \rho n_1 \rfloor}(H_{d,n_1}) \lambda^{\lfloor \rho n_1 \rfloor} > \frac{1}{2 \sqrt{n_1}} M_{H_{d,n_1}}(\lambda)$. Note that since $\rho >0$, such $\lambda$ is bounded away from $0$ as $n_1 \to \infty$. Then we have
\begin{align*}
\frac{ \log M_{G_n}(\lambda)}{n} \ge \frac{\log m_{k_n}(G_n)\lambda^{k_n}}{n} &> \frac{k_n}{n} \log \lambda + h_d(\rho) + \epsilon\\
& = \rho \log \lambda + h_d(\rho) + \epsilon +o(1) \text{ as } n \to \infty
\intertext{and}
\frac{\log M_{K_{d,d}}(\lambda)}{2d} = \frac{ \log M_{H_{d,n_1}}(\lambda)}{n_1} &< \frac{ \log \left( 2 \sqrt{n_1} \cdot m_{\lfloor \rho n_1 \rfloor}(H_{d,n_1}) \lambda^{\lfloor \rho n_1 \rfloor} \right ) }{n_1} \\
&< \frac{\log(2 \sqrt {n_1})}{n_1} + \frac{\lfloor \rho n_1 \rfloor}{n_1} \log \lambda + h_d(\rho) + \epsilon /2 \\
&= \rho \log \lambda + h_d(\rho) + \epsilon/2 +o(1) \, ,
\end{align*}
but this contradicts Theorem~\ref{thm:matchingGeneral}. With the same proof, the analogous statement for independent set entropy holds.
\section{Conclusions}
To recap, our method consists of writing down a set of constraints on local probabilities in the hard-core or monomer-dimer model that hold for every $d$-regular graph, then optimizing an expression for the occupancy fraction in terms of local probabilities over all distributions that satisfy the constraints. Verifying that our desired graph is the optimizer involves constructing a feasible solution to the dual linear program. This method allowed us to prove tight bounds on the logarithmic derivative of the partition function in both models. In the case of independent sets the result is a strengthening and an alternate proof of the fact that the independence polynomial is maximized by $K_{d,d}$; in the case of matchings, the corresponding statement about the matching polynomial was itself previously unknown.
In both cases our results neither imply nor are implied by the conjectures that the numbers of independent sets \cite{kahn2001entropy} and matchings \cite{friedland2008number} of each given size are maximized by $H_{d,n}$; while we improve the known bounds towards these conjectures, they remain open. Here we give even stronger conjectures:
\begin{conj}
\label{conj:Ind}
Let $G$ be a $d$-regular, $n$-vertex graph, where $2d$ divides $n$. Then for all $k$, the ratio $\frac{i_k(G)}{i_{k-1}(G)}$ is maximized by $H_{d,n}$.
\end{conj}
\begin{conj}
\label{conj:match}
Let $G$ be a $d$-regular, $n$-vertex graph, where $2d$ divides $n$. Then for all $k$, the ratio $\frac{m_k(G)}{m_{k-1}(G)}$ is maximized by $H_{d,n}$.
\end{conj}
Conjecture \ref{conj:Ind} also appeared in a draft of \cite{perkins2015birthday}. These conjectures are stronger than Theorems~\ref{thm:occupy} and \ref{thm:matchingGeneral} and imply the conjectures of \cite{kahn2001entropy} and \cite{friedland2008number}. The relation to the work here is that Conjectures \ref{conj:Ind} and \ref{conj:match} can be stated as follows: the expected number of neighbors of a uniformly random independent set (matching) of size $k$ is minimized by $H_{d,n}$. Theorems \ref{thm:occupy} and \ref{thm:matchingGeneral} show that such a statement is true when the random independent set (matching) is chosen according to the hard-core model instead of uniformly over those of a given size.
\section*{Acknowledgements}
We thank Prasad Tetali for his helpful comments, Emma Cohen for finding a mistake in an earlier draft of the paper, Laci V\'egh for explaining to us some aspects of duality in linear programming, and Ron Peled for an enjoyable and inspiring discussion of the hard-core model in London.
| {
"timestamp": "2017-05-10T02:05:08",
"yymm": "1508",
"arxiv_id": "1508.04675",
"language": "en",
"url": "https://arxiv.org/abs/1508.04675",
"abstract": "We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of the results of Kahn, Galvin and Tetali, and Zhao showing that a union of copies of $K_{d,d}$ maximizes the number of independent sets and the independence polynomial of a d-regular graph.For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of $K_{d,d}$. Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markström.In probabilistic language, our main theorems state that for all d-regular graphs and all $\\lambda$, the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity $\\lambda$ are maximized by $K_{d,d}$. Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case.",
"subjects": "Combinatorics (math.CO); Probability (math.PR)",
"title": "Independent Sets, Matchings, and Occupancy Fractions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9920620075414418,
"lm_q2_score": 0.7154239836484143,
"lm_q1q2_score": 0.7097449534615415
} |
https://arxiv.org/abs/2003.01172 | Volume Above Distance Below | Given a pair of metric tensors $g_1 \ge g_0$ on a Riemannian manifold, $M$, it is well known that $\operatorname{Vol}_1(M) \ge \operatorname{Vol}_0(M)$. Furthermore one has rigidity: the volumes are equal if and only if the metric tensors are the same $g_1=g_0$. Here we prove that if $g_j \ge g_0$ and $\operatorname{Vol}_1(M)\to \operatorname{Vol}_0(M)$ then $(M,g_j)$ converge to $(M,g_0)$ in the volume preserving intrinsic flat sense. Well known examples demonstrate that one need not obtain smooth, $C^0$, Lipschitz, or even Gromov-Hausdorff convergence in this setting. Our theorem may also be applied as a tool towards proving other open conjectures concerning the geometric stability of a variety of rigidity theorems in Riemannian geometry. To complete our proof, we provide a novel way of estimating the intrinsic flat distance between Riemannian manifolds which is interesting in its own right. | \section{Introduction}\label{sect:intro}
Over the past few decades a number of geometric stability theorems have been proven where one assumes a lower bound on Ricci curvature and proves the Riemannian manifolds are close in the Gromov-Hausdorff sense. However, without a lower bound on Ricci curvature, there are usually counterexamples showing that Gromov-Hausdorff stability is too strong a notion. Ilmanen's example, depicted in Figure~\ref{fig-Ilmanen}, is a sequence of spheres $({\mathbb{S}}^m,g_j)$ with Riemannian metric tensors, $g_j\ge g_0$, that have positive scalar curvature, where $\operatorname{Vol}_j({\mathbb{S}}^m) \to \operatorname{Vol}_0({\mathbb{S}}^m)$ but the sequence has no subsequence converging in the Gromov-Hausdorff sense. See
\cite{Sormani-scalar} for a survey of open stability conjectures and similar counterexamples to stability in the Gromov-Hausdorff sense.
\begin{figure}[h]
\center{\includegraphics[width=.6\textwidth]{Ilmanen}}
\caption{A sequence of spheres $({\mathbb{S}}^m,g_j)$ with $g_j\ge g_0$ and $\operatorname{Vol}_j({\mathbb{S}}^m) \to \operatorname{Vol}_0({\mathbb{S}}^m)$ that have no Gromov-Hausdorff limit.}
\label{fig-Ilmanen}
\end{figure}
Gromov has suggested in \cite{Gromov-Dirac} that intrinsic flat convergence might be the right notion of convergence to study for sequences of manifolds with lower bounds on scalar curvature. Intrinsic flat convergence was first defined by the third author with Wenger in \cite{SW-JDG} building upon the work of Ambrosio-Kirchheim \cite{AK}. A sequence of oriented manifolds $M_j$ converges in the intrinsic flat sense to $M_0$, $M_j \stackrel {\mathcal{F}}{\longrightarrow} M_0$ iff they can be embedded by distance preserving maps $\varphi_j: M_j \to Z$ into a common complete metric space $Z$ so that the submanifolds $\varphi_j(M_j)$ converge in the flat or weak sense as currents in $Z$ \cite{SW-JDG}. See Section~\ref{sect:background} for the precise definition. The sequence is said to converge in the volume preserving intrinsic flat sense $M_j\stackrel {\mathcal{VF}}{\longrightarrow} M_0$ if and only if
\begin{equation}
M_j \stackrel {\mathcal{F}}{\longrightarrow} M_0 \textrm{ and } \operatorname{Vol}_j(M_j) \to \operatorname{Vol}_0(M_0).
\end{equation}
The third author, Portegies, Lee, and Jauregui have proven many consequences of intrinsic flat convergence in
\cite{Sormani-ArzAsc}, \cite{Portegies-Sormani1}, \cite{Jauregui-Lee-SWIF}. In particular, balls and spheres within the manifolds also converge in the volume preserving intrinsic flat sense and their filling volumes converge. In addition, Portegies has shown that volume preserving intrinsic flat convergence implies measure convergence and that the Laplace spectra of the manifolds semiconverge \cite{Portegies-F-evalue}.
In this paper we prove a new theorem estimating the intrinsic flat distance between two manifolds. The proof involves a new construction of a common metric space, $Z$, into which we embed the Riemannian manifolds $M_j$ and $M_0$ assuming their distance functions satisfy $d_j\ge d_0$ and are almost equal on a set of large measure. We then apply this new estimate to prove the following stability theorem:
\begin{thm}\label{vol-thm-better}
Suppose we have a fixed compact oriented Riemannian manifold, $M_0=(M^m,g_0)$,
without boundary and a sequence of distance non-increasing $C^1$ diffeomorphisms
\begin{equation}\label{eq-thmbetterF}
F_j: M_j=(M,g_j) \to M_0=(M,g_0)
\end{equation}
i.e.
\begin{equation}\label{eq-thmbetterd}
d_j(p,q) \ge d_0(F_j(p), F_j(q)) \qquad \forall p,q\in M_j
\end{equation}
and a uniform upper bound on diameter
\begin{equation}
\operatorname{Diam}_j(M_j) \le D_0
\end{equation}
and volume convergence
\begin{equation}
\operatorname{Vol}_j(M_j) \to \operatorname{Vol}_0(M_0)
\end{equation}
then $M_j$ converge to $M_0$ in the volume preserving intrinsic flat sense:
\begin{equation}
M_j \stackrel {\mathcal{VF}}{\longrightarrow} M_0.
\end{equation}
\end{thm}
Note that this theorem can be seen as a stability result, since it is known that $g_1\ge g_0 \implies \operatorname{Vol}_1\ge \operatorname{Vol}_0$ and, if the volumes are equal, $\operatorname{Vol}_1=\operatorname{Vol}_0$, then $g_1=g_0$.
Our results have applications to important stability theorems as well. One of the most famous rigidity theorems involving scalar curvature is the Scalar Torus Rigidity Theorem of Schoen-Yau and Gromov-Lawson, which states that if a manifold is homeomorphic to a torus and has $\Scal\ge 0$ then it is isometric to a flat torus
\cite{Schoen-Yau-min-surf}\cite{Gromov-Lawson-torus}. Gromov has conjectured that this theorem is stable with respect to intrinsic flat convergence \cite{Gromov-Dirac} (cf.~\cite{Sormani-scalar}). The first author has proven this stability in the warped product setting in joint work with Hernandez-Vazquez, Parise, Payne, and Wang \cite{AHMPPW1}. The second author has proven this stability in the graph setting in joint work with Cabrera Pacheco
and Ketterer \cite{CPKP19}. In both these settings the distances are bounded from below and the volumes from above, and thus one may apply Theorem~\ref{vol-thm} as an endplay for their proofs. The first author has recently applied this paper to prove the stability in the conformal setting \cite{Allen-Conformal-Torus}.
Another important rigidity theorem involving scalar curvature is the Schoen-Yau Positive Mass Theorem \cite{Schoen-Yau-positive-mass}. This theorem is also conjectured to be stable with respect to intrinsic flat convergence (cf.~\cite{Sormani-scalar}). As this theorem involves noncompact manifolds, one proves stability by proving intrinsic flat convergence of balls within these spaces. The first and second author have recently applied the theorems and techniques in this paper to manifolds with boundary and used their results to prove the almost rigidity of the positive mass theorem in the graph setting without black holes \cite{Allen-Perales-1}. In joint work with Huang and Lee, the second author has applied this work to prove it in the graph setting with black holes as well \cite{Huang-Lee-Perales}, providing a completely new proof of the results claimed in earlier work of Huang, Lee, and the third author \cite{HLS}. The second author has also applied these theorems in an upcoming paper to prove the stability of the hyperbolic positive mass theorem in the graph setting in joint work with Cabrera Pacheco \cite{CPP21}. Sakovich and Sormani will also apply this work to study the spacetime intrinsic flat convergence of key examples in their upcoming work \cite{Sakovich-Sormani}.
It is important to note that the hypotheses \eqref{eq-thmbetterF} and \eqref{eq-thmbetterd} of Theorem~\ref{vol-thm-better} are equivalent to assuming
$g_j\ge g_0$ on a fixed manifold (see Theorem~\ref{vol-thm} within).
It is also important to note that Theorem~\ref{vol-thm-better} (equivalently Theorem~\ref{vol-thm}) only applies for distances bounded below and volumes bounded above and not vice versa. In Example~\ref{Cinched-Sphere} we see that with $g_j\le g_0$ and $\operatorname{Vol}_j \to \operatorname{Vol}_0$ the
$M_j$ can fail to converge to $M_0$. This surprising example of conformal metrics on a sphere first appeared in work of the first and third authors \cite{Allen-Sormani-2} and a similar example with warped product metrics appeared in an earlier paper of theirs \cite{Allen-Sormani}. These examples
converge to a cinched sphere, a cinched cylinder, or a cinched torus. In Example~\ref{to-Finsler} we see warped product metrics $g_j$ on a torus
${\mathbb{T}}^2$
such that $g_j \le g_0$ and $\operatorname{Vol}_j\to \operatorname{Vol}_0$ and yet the
Gromov-Hausdorff and intrinsic flat limit of $({\mathbb{T}}^2, g_j)$ is a Finsler manifold with a symmetric norm that is not an inner product \cite{Allen-Sormani}. We review these examples in Section~\ref{sect:background}. Any weaker geometric notion of convergence must also have the same limit, so one can never prove stability for distances above and volumes below.
Some might say that the hypothesis requiring pointwise control on the metric tensors from below is too strong to be useful in more general settings. In Corollary~\ref{cor-vol-thm} we see that we only need $C^0$ convergence of the metric tensors from below instead of $g_j\ge g_0$. In Corollary~\ref{cor-Lp-thm} we see that $L^p$ convergence with $p\ge m$ can replace the volume convergence. In Remark~\ref{rmrk-diffeo}
we point out that one really only needs a sequence of diffeomorphic Riemannian manifolds and a sequence of diffeomorphisms under which the pullbacks of the metric tensors satisfy the hypotheses of our theorem or corollary; the conclusion then follows since intrinsic flat convergence is invariant under isometry. It should also be noted that in the study of K\"ahler manifolds with fixed background metrics and potential functions, one does have such pointwise controls. This is being investigated by Eleonora DiNezza.
In Section~\ref{sect:background}, we briefly provide sufficient background on integral current spaces and the intrinsic flat distance so as to make this paper understandable to those who are new to this notion. We refer the reader also to \cite{Sormani-scalar} for a longer review. We also review key examples of the first and third authors which are relevant to this paper as well as their earlier versions of Theorem~\ref{vol-thm} which imply Gromov-Hausdorff as well as intrinsic flat convergence under significantly stronger hypotheses.
In Section~\ref{sect:NewSWIFEstimate}, we prove Theorem~\ref{est-SWIF} which
provides the new method of estimating the intrinsic flat distance between two Riemannian manifolds. The proof involves a new construction of a common metric space, $Z$, into which we embed the Riemannian manifolds $M_j$ and $M_0$ such that $d_j\ge d_0$ and $d_j$ is close to $d_0$ on a good set of almost full measure.
In Section~\ref{sect:GoodSet}, we show how to construct a good set with almost full measure where we can guarantee control on the distance function on $M_j$. A key insight is to use Egoroff's Theorem in order to go from pointwise convergence of distance almost everywhere to uniform convergence on a subset of $M\times M$ of almost full measure. The bulk of the section is then devoted to describing a good subset of $M$ of almost full measure which satisfies the necessary hypotheses of Section \ref{sect:NewSWIFEstimate} in order to estimate the intrinsic flat distance.
In Section~\ref{sect:ProofMainThm}, we put all of these results together in order to prove Theorem \ref{vol-thm}. We also state and prove Corollary \ref{cor-vol-thm}. The paper closes with a section of open problems.
We would like to thank Misha Gromov for his interest in intrinsic flat convergence and all the attendees of the IAS Emerging Topics on Scalar Curvature and Convergence. We would particularly like to thank Ian Adelstein, Lucas Ambrozio, Armando Cabrera Pacheco, Alessandro Carlotto, Michael Eichmair, Lan-Hsuan Huang, Jeff Jauregui, Demetre Kazaras, Christian Ketterer, Sajjad Lakzian, Dan Lee, Chao Li, Yevgeny Liokumovich, Siyuan Lu, Fernando Coda Marques, Elena Maeder-Baumdicker, Andrea Malchiodi, Yashar Memarian, Pengzi Miao, Frank Morgan, Alexander Nabutovsky, Andre Neves, Alec Payne, Jacobus Portegies, Regina Rotman, Richard Schoen, Craig Sutton, Shengwen Wang, Guofang Wei, Franco Vargas Pallete, Robert Young, Ruobing Zhang, and Xin Zhou for intriguing discussions with us related to intrinsic flat convergence and stability at this event and at other workshops at Yale and NYU. We would also like to thank the many participants in the 2020 Virtual Workshop on Ricci and Scalar Curvature in honor of Gromov for their many thoughts and suggestions of further applications of this work.
\section{Background}\label{sect:background}
The main theorem in this paper is a stability or almost rigidity theorem. In this section we first
write a restatement of the main theorem with different equivalent hypotheses and prove the equivalence
using basic Riemannian geometry. We then review the corresponding rigidity theorem and provide a new proof of that
theorem which in some sense serves as an outline of our proof of the almost rigidity theorem. Next we review the key
aspects of intrinsic flat convergence and work of Ambrosio-Kirchheim
needed to understand the statement of our main theorem and its proof.
We then review an older theorem of Huang-Lee and the third author which is used to prove stronger convergence
and apply this older theorem to present the examples mentioned in the introduction. The final subsection reviews a key theorem by the first and third authors which will be applied to prove the main theorem.
\subsection{Restating the Main Theorem}
Before we begin we would like to clarify that our main theorem is equivalent to the following theorem:
\begin{thm}\label{vol-thm}
Suppose we have a fixed compact oriented Riemannian manifold, $M_0=(M^m,g_0)$,
without boundary and
a sequence of metric tensors $g_j$ on $M$ defining $M_j=(M, g_j)$ with
\begin{equation} \label{g_j-below-vol-thm}
g_0(v,v) \le g_j(v,v) \qquad \forall v\in TM
\end{equation}
and a uniform upper bound on diameter
\begin{equation}
\operatorname{Diam}_j(M_j) \le D_0
\end{equation}
and volume convergence
\begin{equation}
\operatorname{Vol}_j(M_j) \to \operatorname{Vol}_0(M_0)
\end{equation}
then $M_j$ converge to $M_0$ in the volume preserving intrinsic flat sense:
\begin{equation}
M_j \stackrel {\mathcal{VF}}{\longrightarrow} M_0.
\end{equation}
\end{thm}
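Note that, since lengths of curves and volume forms can only increase when the metric tensor increases, the hypothesis (\ref{g_j-below-vol-thm}) immediately implies
\begin{equation}
d_j(p,q) \ge d_0(p,q) \quad \forall p,q\in M \qquad \textrm{ and } \qquad \operatorname{Vol}_j(M_j) \ge \operatorname{Vol}_0(M_0),
\end{equation}
so the volume convergence in the hypotheses is convergence from above.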
The equivalence can be seen by pushing forward the metric $g_j$ to $M_0$ using the map $F_j: M_j \to M_0$
and applying the two lemmas below.
\begin{lem}\label{DistToMetric}
Let $M_1=(M,g_1)$ and $M_0=(M,g_0)$ be Riemannian manifolds and let $F: M_1 \rightarrow M_0$ be a $C^1$ diffeomorphism. Then
\begin{equation}
g_0 (dF(v), dF(v)) \leq g_1 (v,v) \qquad \forall v\in TM_1
\end{equation}
if and only if
\begin{equation}
d_0(F(p),F(q))\le d_1(p,q)\qquad \forall p,q\in M_1.
\end{equation}
\end{lem}
\begin{proof}
First recall by the definition of the Riemannian distance
\begin{equation}
d_g(p,q) = \inf \{ L_{g}(C): \, C(0)=p,\, C(1)=q\} \textrm{ where } L_g(C)=\int_0^1 g(C',C')^{1/2} \, dt.
\end{equation}
Thus it is easy to see that
\begin{equation}
d_0(F(p),F(q))\le L_{g_0}(F\circ C) \le L_{g_1}(C)
\end{equation}
and taking the infimum we have $d_0(F(p),F(q))\le d_1(p,q)$.
On the other hand, let $C:(-1,1)\to M_1$ be any smooth curve
such that $C(0)=p$ and $C'(0)=v$. Then we can calculate,
\begin{align}
g_1 (v,v) &= \lim_{t \to 0}\frac{d_1(C(t), p)^2}{t^2}
\\& \ge \lim_{t \to 0}\frac{d_0(F(C(t)), F(p))^2}{t^2}\label{DistNonIncreasing}
\\& = g_0(dF(v), dF(v)),
\end{align}
where the first and last equalities hold because $d(C(t),C(0)) = |t|\,|C'(0)| + o(|t|)$ as $t\to 0$ for any $C^1$ curve in a Riemannian manifold, and where we are using the distance non-increasing assumption in \eqref{DistNonIncreasing}.
\end{proof}
\begin{lem}\label{MetricToLipschitz}
Let $M_1=(M,g_1)$ and $M_0=(M,g_0)$ be compact Riemannian manifolds such that
\begin{equation}
g_0 (v, v) \leq g_1 (v,v) \qquad \forall v\in TM.
\end{equation}
Then, there is $Q \geq 1$ such that
\begin{equation}
d_1(p,q)\le Qd_0(p,q)\qquad \forall p,q\in M.
\end{equation}
\end{lem}
\begin{proof}
Define
\begin{equation}
Q=\sup \{ \tfrac{g_1(v,v)} {g_0(v,v)} \,|\, v \in TM, g_0(v, v)=1\}.
\end{equation}
Since $g_0 (v, v) \leq g_1 (v,v)$ for all $v$, we have $Q \geq 1$, and since
the unit sphere bundle of $(M,g_0)$ over the compact manifold $M$ is compact,
$Q$ is finite and well defined.
Let $p, q$ be two points in $M$ and let $\gamma$ be a minimizing $g_0$ geodesic joining them.
Then,
\begin{align}
d_1(p,q) \leq & L_{g_1} (\gamma) = \int \sqrt{g_1(\gamma', \gamma')}\, dt \\
\leq & \int \sqrt{Q\, g_0(\gamma', \gamma')}\, dt \\
= & \sqrt{Q}\, d_0(p,q) \,\le\, Q\, d_0(p,q).
\end{align}
\end{proof}
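For instance, if $g_1=c^2 g_0$ for a constant $c\ge 1$, then
\begin{equation}
Q=c^2 \qquad \textrm{ while } \qquad d_1(p,q)=c\, d_0(p,q)=\sqrt{Q}\, d_0(p,q) \qquad \forall p,q\in M,
\end{equation}
so the conclusion of the lemma holds but is far from sharp; indeed, the proof above gives the sharper bound $d_1 \le \sqrt{Q}\, d_0$.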
\subsection{Volume-Distance Rigidity Theorem}
Our main theorem is an almost rigidity theorem for the following well known rigidity theorem.
\begin{thm}\label{rigidity-thm}
Suppose $M_1=(M,g_1)$ and $M_0=(M,g_0)$ are a pair of Riemannian manifolds, and $F: M_1 \to M_0$
is a $C^1$ diffeomorphism that is distance non-increasing
\begin{equation}
d_0(F(p),F(q))\le d_1(p,q)\qquad \forall p,q\in M_1
\end{equation}
then
\begin{equation} \label{v-0-v-1}
\operatorname{Vol}_0(M_0) \le \operatorname{Vol}_1(M_1).
\end{equation}
Furthermore, if $\operatorname{Vol}_1(M_1)=\operatorname{Vol}_0(M_0)$ then $F$ is an isometry:
\begin{equation} \label{d-0-d-1}
d_0(F(p),F(q))= d_1(p,q)\qquad \forall p,q\in M_1.
\end{equation}
\end{thm}
For completeness of exposition we include a proof of this rigidity theorem and then follow this with an explanation
as to the difficulties which arise when trying to prove an almost rigidity version of this theorem.
\begin{proof}
We begin by proving the inequality (\ref{v-0-v-1}) through a series of inequalities. Starting with
\begin{equation}
d_0(F(p),F(q))\le d_1(p,q)\qquad \forall p,q\in M_1
\end{equation}
and applying Lemma~\ref{DistToMetric}, we have
\begin{equation}
g_0 (dF(v), dF(v)) \leq g_1 (v,v) \qquad \forall v\in TM_1.
\end{equation}
By pushing the metric $g_1$ forward through the map $F$ we can,
without loss of generality, replace $g_1$ with $(F^{-1})^*g_1$ in order to write
\begin{equation}
g_1 (v, v) \ge g_0 (v,v) \qquad \forall v\in TM_0.
\end{equation}
In particular the eigenvalues of $g_1$ with respect to $g_0$:
\begin{equation}
\lambda \textrm{ such that } \exists v_\lambda \textrm{ such that } g_1 (v_\lambda, v_\lambda) =\lambda g_0 (v_\lambda,v_\lambda)
\end{equation}
must all have $\lambda\ge 1$. Taking the product of these eigenvalues we have
\begin{equation}
Det_{g_0}(g_1) \ge 1
\end{equation}
Since for any Borel set $A \subset M$
\begin{equation}
\operatorname{Vol}_{g_1}(A) =\int_A \sqrt{Det_{g_0}(g_1)} \, dvol_{g_0} \ge \int_A 1 \, dvol_{g_0} = \operatorname{Vol}_{g_0}(A)
\end{equation}
we have (\ref{v-0-v-1}) as desired.
Now we prove the rigidity by observing that all the inequalities above become equalities
when the final line has an equality. We start with $\operatorname{Vol}_1(M_1)=\operatorname{Vol}_0(M_0)$, then we are forced to have equality for any
Borel set $A \subset M$
\begin{equation}
\operatorname{Vol}_{g_1}(A) =\int_A \sqrt{Det_{g_0}(g_1)} \, dvol_{g_0} = \int_A 1 \, dvol_{g_0} = \operatorname{Vol}_{g_0}(A)
\end{equation}
and so by continuity
\begin{equation}
Det_{g_0}(g_1) = 1.
\end{equation}
Hence all the eigenvalues are equal to $1$ and hence
\begin{equation}
g_1=g_0.
\end{equation}
Returning to the use of $F$ we have
\begin{equation}
g_0 (dF(v), dF(v)) = g_1 (v,v)
\end{equation}
which by Lemma~\ref{DistToMetric} gives us (\ref{d-0-d-1}).
\end{proof}
To prove an almost rigidity theorem one instead starts with an almost equality in the final line,
that is, one assumes
\begin{equation}
\lim_{j\to \infty} \operatorname{Vol}_j(M_j)=\operatorname{Vol}_0(M_0).
\end{equation}
We can then show that for any Borel set $A\subset M_0$
\begin{equation}
\operatorname{Vol}_{g_j}(A) =\int_A \sqrt{Det_{g_0}(g_j)} \, dvol_{g_0} \to \int_A 1 \, dvol_{g_0} = \operatorname{Vol}_{g_0}(A)
\end{equation}
which will be done carefully within this paper.
However one cannot conclude
\begin{equation}
\sqrt{Det_{g_0}(g_j)} \to 1.
\end{equation}
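The convergence does hold in the $L^1$ sense: since $g_j\ge g_0$ implies $\sqrt{Det_{g_0}(g_j)}\ge 1$, the volume convergence gives
\begin{equation}
\int_M \left|\sqrt{Det_{g_0}(g_j)} - 1\right| \, dvol_{g_0} \,=\, \operatorname{Vol}_j(M) - \operatorname{Vol}_0(M) \,\to\, 0.
\end{equation}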
At the pointwise level, however, we will see this quantity is not well controlled at all. Instead we will apply a theorem of the first
and third authors from \cite{Allen-Sormani-2} which chooses special sets $\mathcal{T}=\mathcal{T}_{p,q}$ that can be thought of as thin cylinders
around geodesics from $p$ to $q$ so that
\begin{equation}
\operatorname{Vol}_{g_j}( \mathcal T_{p,q}) \textrm{ is close to } \omega_{m-1}\epsilon^{m-1} d_j(p,q)
\end{equation}
and eventually show that there is a subsequence such that
\begin{equation}
d_j (p,q) \to d_0(p,q) \textrm{ pointwise almost everywhere } (p,q) \in M\times M.
\end{equation}
We review this theorem in the final subsection of the background.
This paper is dedicated to proving intrinsic flat convergence using this
control on the distances combined with the bounds on volume and diameter.
\subsection{Review of the Intrinsic Flat Distance}\label{subsect:SWIFBackground}
In \cite{SW-JDG}, Sormani-Wenger defined the intrinsic flat distance between pairs of oriented Riemannian manifolds with boundary as follows:
\begin{equation}\label{defn-IF}
d_{\mathcal{F}}(M_1^m, M_2^m) = \inf d_F^Z\left(\varphi_{1\#} [[M_1]], \varphi_{2\#} [[M_2]]\right)
\end{equation}
where the infimum is taken over all complete metric spaces $Z$ and all distance preserving maps $\varphi_i: M_i \to Z$,
\begin{equation}
d_Z(\varphi_i(p), \varphi_i(q)) = d_i(p,q) \qquad \forall p,q\in M_i.
\end{equation}
Here the flat distance between the images of $M_i$, viewed as integral currents,
\begin{equation}
T_i=\varphi_{i\#} [[M_i]]\in I_m(Z),
\end{equation}
is defined
\begin{equation} \label{dFZ}
d_F^Z(T_1, T_2)= \inf \left({\mathbf M}(A) + {\mathbf M}(B)\right)
\end{equation}
where the infimum is over all integral currents, $ A \in I_m(Z), \, B \in I_{m+1}(Z)$ such that
$A + \partial B = T_1 - T_2$.
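For instance, if $Z={\mathbb E}^2$ and $T_1,T_2$ are the currents induced by the boundary circles of concentric round disks $D_1 \subset D_2$ of radii $r_1 < r_2$ (both oriented counterclockwise), then taking $A=0$ and $B=-[[D_2\setminus D_1]]$ gives
\begin{equation}
d_F^Z(T_1, T_2) \,\le\, \pi(r_2^2-r_1^2),
\end{equation}
while taking $B=0$ and $A=T_1-T_2$ gives the alternate upper bound $2\pi(r_1+r_2)$.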
To rigorously understand this definition one needs Ambrosio-Kirchheim theory which we review the essential
elements of below.
The intuitive idea is that the intrinsic flat distance is measuring the volume between the two Riemannian manifolds. To estimate the intrinsic flat distance, one first embeds them into a common metric space $Z$ without distorting distances,
then one finds an oriented rectifiable submanifold $A$ so that
the images $\varphi_i(M_i)$ and $A$ form the boundary of an oriented rectifiable submanifold $B$ of one dimension higher, and then one bounds the intrinsic flat distance from above by the sum of the volumes of $A$ and $B$. One needs generalized weighted submanifolds called integral currents to find the precise value of the intrinsic flat distance. These
currents were first defined by Federer-Fleming \cite{FF} in Euclidean space
and
by Ambrosio-Kirchheim for complete metric spaces in \cite{AK}.
In \cite{AK}, Ambrosio-Kirchheim defined the class of $m$-dimensional integral currents, $T\in I_m(Z)$,
in a complete metric space $Z$ as integer rectifiable currents whose boundaries are also integer rectifiable.
Since there are
no differential forms on metric spaces, Ambrosio-Kirchheim defined currents as acting on tuples
$(f, \pi_1,...,\pi_m)$, where $f: Z\to {\mathbb{R}}$ is a bounded Lipschitz function and each
$\pi_j: Z\to {\mathbb{R}}$ is Lipschitz, rather than forms $f\, d\pi_1 \wedge \cdots \wedge d\pi_m$. They define
\begin{equation}
\varphi_{\#} [[M]](f, \pi_1,...\pi_m) = \int_{M} (f\circ \varphi)\, d(\pi_1 \circ \varphi)\wedge \cdots \wedge d(\pi_m \circ \varphi)
\end{equation}
which is well defined for any oriented Riemannian manifold with or without boundary and any Lipschitz function $\varphi: M\to Z$.
More generally an $m$ dimensional integer rectifiable current, $T$, can be parametrized by
a countable collection of biLipschitz charts, $\varphi_i: A_i \to \varphi_i(A_i)\subset Z$ where
$A_i$ are Borel in ${\mathbb R}^m$ with pairwise disjoint images and integer weights $\theta_i\in {\mathbb Z}$ such that
\begin{equation}
T(f, \pi_1,...\pi_m) = \sum_{i=1}^\infty \theta_i \int_{A_i} (f\circ \varphi_i)\, d(\pi_1 \circ \varphi_i)\wedge \cdots \wedge d(\pi_m \circ \varphi_i)
\end{equation}
has finite mass, ${\mathbf M}(T)=||T||(Z)$. Their definition of mass and mass measure $||T||$
is subtle for currents in general but in Section 9 of \cite{AK}, they prove that for rectifiable currents
\begin{equation}
||T||= \lambda \theta \mathcal{H}^m
\end{equation}
where $\theta$ is an integer valued function and the area factor
$\lambda: \rm{set}(T) \to \mathbb{R}$ is a measurable function bounded above by
\begin{equation} \label{C_m}
C_m=2^m/\omega_m \textrm{ where } \omega_m=\operatorname{Vol}_{{\mathbb E}^m}(B_0(1)).
\end{equation}
It follows that
\begin{equation}
{\mathbf M}(T) \le C_m \sum_{i=1}^\infty |\theta_i| \mathcal{H}^m( \varphi_i(A_i)) < \infty.
\end{equation}
Ambrosio-Kirchheim define the boundary of any current to be
\begin{equation}
\partial T(f, \pi_1,...\pi_{m-1})= T(1,f, \pi_1,...\pi_{m-1}).
\end{equation}
This agrees with the notion of the boundary of a submanifold:
\begin{eqnarray}
\qquad
\partial \varphi_{\#}[[M]](f, \pi_1,...\pi_{m-1})&=&
\varphi_{\#}[[M]](1, f, \pi_1,...\pi_{m-1})\\
&=&
\int_{M} d(f\circ \varphi)\wedge d(\pi_1 \circ \varphi)\wedge \cdots \wedge d(\pi_{m-1} \circ \varphi)\\
&=& \int_{\partial M} (f\circ \varphi)\, d(\pi_1 \circ \varphi)\wedge \cdots \wedge d(\pi_{m-1} \circ \varphi)\\
&=&\varphi_{\#}[[\partial M]](f, \pi_1,...\pi_{m-1}).
\end{eqnarray}
Here the third equality is Stokes' Theorem.
They define an $m$ dimensional integral current to be an integer rectifiable current, $T$, whose
boundary $\partial T$ is also integer rectifiable. With this information, (\ref{dFZ}) is well defined and finite.
Note that the definition of intrinsic flat convergence in (\ref{defn-IF}) does not require $M_j$ to be smooth Riemannian
manifolds. In \cite{SW-JDG} the distance is defined between a larger class of spaces called integral current spaces.
We do not need to consider general integral current spaces in this paper. However it is worth observing that the
definition as in (\ref{defn-IF}) can be understood for a pair of $C^1$ oriented manifolds, $M_j$, endowed with
metric tensors $g_j$ that need not even be continuous, just so long as the $C^0$ charts are biLipschitz with
respect to the distance functions:
\begin{equation}
d_j: M_j \times M_j \to [0,\infty)
\end{equation}
defined by
\begin{equation} \label{djdefn}
d_j(p,q) = \inf\{L_j(C):\, C(0)=p, \, C(1)=q\}
\end{equation}
where
\begin{equation}\label{Ljdefn}
L_j(C)=\int_0^1 g_j(C'(s),C'(s))^{1/2}\, ds.
\end{equation}
We see that such manifolds can arise as intrinsic flat limits of sequences of smooth
Riemannian manifolds in the next section.
\subsection{Convergence of Metrics on a Fixed Manifold}
In the Appendix to \cite{HLS}, Lan-Hsuan Huang, Dan Lee, and the third author considered sequences
of distance functions on a fixed metric space just as we do here except with significantly stronger hypotheses.
Here we restate their appendix theorem in the simplified setting where $M^m$
is a manifold and $d_j$ are defined as in (\ref{djdefn})-(\ref{Ljdefn}). We state this theorem because it is applied to prove the convergence of some of the examples and because its proof inspired some of our ideas. We do not apply this theorem to prove our Theorem~\ref{vol-thm} because the hypotheses of this theorem are too strong.
\begin{thm}\label{app-thm}\cite{HLS}
Given a Riemannian manifold $(M, d_0)$ without boundary and a fixed
$\lambda>0$, suppose that
the $d_j$ are length metrics on $M$ such that
\begin{equation}\label{HLS-d_j}
\lambda \ge \frac{d_j(p,q)}{d_0(p,q)} \ge \frac{1}{\lambda}.
\end{equation}
Then there exists a subsequence, also denoted $d_j$,
and a length metric $d_\infty$ satisfying (\ref{HLS-d_j}) such that
$d_j$ converges uniformly to $d_\infty$:
\begin{equation}\label{HLS-epsj}
\varepsilon_j= \sup\left\{|d_j(p,q)-d_\infty(p,q)|:\,\, p,q\in M\right\} \to 0,
\end{equation}
and the $M_j$ converge in the intrinsic flat and Gromov-Hausdorff sense to $M_\infty$:
\begin{equation}
M_j \stackrel {\mathcal{F}}{\longrightarrow} M_\infty \textrm{ and } M_j \stackrel { \textrm{GH}}{\longrightarrow} M_\infty
\end{equation}
where $M_j=(M,d_j)$ and $M_\infty=(M, d_\infty)$.
\end{thm}
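Observe that when the $d_j$ are the length metrics of Riemannian tensors $g_j$ as in (\ref{djdefn})-(\ref{Ljdefn}), a two-sided bound on the metric tensors,
\begin{equation}
\lambda^{-2}\, g_0(v,v) \,\le\, g_j(v,v) \,\le\, \lambda^{2}\, g_0(v,v) \qquad \forall v\in TM,
\end{equation}
implies the hypothesis (\ref{HLS-d_j}), since lengths of curves, and hence distances, are then squeezed between $\lambda^{-1}$ and $\lambda$ times their $g_0$ values. This is the form in which (\ref{HLS-d_j}) is verified in the examples of Subsection~\ref{subsect:PreviousWorkContrasting}.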
Note that the hypotheses of our main theorem, Theorem~\ref{vol-thm}, do not imply the upper bound in the
hypothesis (\ref{HLS-d_j}) of Theorem~\ref{app-thm}. Yet this upper bound is crucially applied in the Appendix to \cite{HLS} to obtain the
existence of a subsequence which converges uniformly as in (\ref{HLS-epsj}) and that uniform convergence
is applied to provide an
explicit construction of the common metric space
\begin{equation}
Z_j= [-\varepsilon_j, \varepsilon_j] \times M
\end{equation}
with an explicit distance function $d_j'$ on $Z_j$ such that
\begin{equation} \label{iso-}
d'_j((-\varepsilon_j,p), (-\varepsilon_j,q)) = d_j(p,q)
\end{equation}
\begin{equation} \label{iso+}
d'_j((\varepsilon_j,p), (\varepsilon_j,q)) = d_\infty(p,q).
\end{equation}
Taking $A=0$ and $B=[[Z_j]]$, the intrinsic flat distance is then
proven to satisfy
\begin{equation}\label{Fj}
d_{\mathcal{F}}\left(M_j, M_\infty \right) \le {\mathbf M}(A)+{\mathbf M} (B) \le
2^{(m+1)/2} \lambda^{m+1} 2\varepsilon_j \operatorname{Vol}_0(M) \to 0.
\end{equation}
The proof of our main theorem, Theorem~\ref{vol-thm}, will also involve the explicit construction of a space $Z$.
\subsection{Examples Without Distance Bounded Below}\label{subsect:PreviousWorkContrasting}
In \cite{Allen-Sormani} and \cite{Allen-Sormani-2}, the first and third authors presented a number of examples comparing and contrasting various notions of convergence for Riemannian manifolds. The examples in \cite{Allen-Sormani} were warped products and the examples in \cite{Allen-Sormani-2} were conformal. Here we present two crucial examples from these papers demonstrating the importance of the lower bounds on distance, $g_j \ge g_0$ in Theorem \ref{vol-thm}
and the $C^0$ control on the metric tensor from below in Corollary~\ref{cor-vol-thm}. In these examples
we have an upper bound on distance $g_j \le g_0$ and $\operatorname{Vol}_j \to \operatorname{Vol}_0$ but $M_j$
converge to something other than $M_0$.
In the first example, which is Example 3.1 in \cite{Allen-Sormani-2}, we have a sequence of conformal metric tensors on the sphere that are shrunk near the equator so that one obtains a cinched sphere as the intrinsic flat limit instead of the round sphere. See also
Example 3.4 in \cite{Allen-Sormani}.
\begin{ex} \label{Cinched-Sphere} \cite{Allen-Sormani-2}
Let $g_0$ be the standard round metric on the sphere, ${\mathbb S}^m$. Let $g_j=f_j^2 g_0$ be
metrics conformal to $g_0$ with smooth conformal factors, $f_j$,
that are radially defined from the north pole with a cinch at the equator as follows:
\begin{equation}
f_j(r)=
\begin{cases}
1 & r\in[0,\pi/2- 1/j]
\\ h(jr-\pi/2) & r\in[\pi/2- 1/j, \pi/2+ 1/j]
\\ 1 &r\in [\pi/2+ 1/j, \pi]
\end{cases}
\end{equation}
where $h:[-1,1]\rightarrow \mathbb{R}$ is an even function
decreasing to $h(0)=h_0\in (0,1)$ and then
increasing back up to $h(1)=1$. Observe that
\begin{align}
g_0 \not \le g_j \textrm{ but instead } g_j \le g_0,
\end{align}
\begin{equation}
\operatorname{Vol}_j(\Sp^m) \to \operatorname{Vol}_0(\Sp^m),
\end{equation}
and
\begin{equation}
\operatorname{Diam}_j(\Sp^m) \le \operatorname{Diam}_0(\Sp^m).
\end{equation}
In \cite{Allen-Sormani-2}, it is proven that
\begin{equation}
M_j \stackrel {\mathcal{F}}{\longrightarrow} M_{\infty}
\end{equation}
where $M_{\infty}$ is not isometric to $\Sp^m$. Instead $M_{\infty}$ is endowed with the conformal metric,
$g_\infty=f_\infty^2g_0$ with
a piecewise conformal factor that is not continuous:
\begin{equation}
f_{\infty}(r)=
\begin{cases}
h_0 & r=\pi/2
\\ 1 &\text{ otherwise}
\end{cases}.
\end{equation}
The distance, $d_\infty$, between pairs of points near the equator in this limit space is defined as
in (\ref{djdefn})-(\ref{Ljdefn}). It
is achieved by geodesics which run to the
equator, and then around inside the cinched equator, and then out again.
To prove this convergence in \cite{Allen-Sormani-2}, Theorem~\ref{app-thm} was applied to show that a subsequence of the $d_j$ converges uniformly to some distance function, and then it was shown explicitly that the $d_j$ converge pointwise to $d_\infty$; thus $d_\infty$ is in fact the
uniform limit of any subsequence. Theorem~\ref{app-thm} then implied $(M,d_\infty)$ was the
intrinsic flat and Gromov-Hausdorff limit as well.
\end{ex}
Some might point out that in the above example the limit space is locally isometric to a standard sphere almost everywhere, and that perhaps it is thus not so different from a standard sphere. In the next example, which is Example 3.12 in \cite{Allen-Sormani}, we see that the limit space need not even be locally isometric
to $M_0$ anywhere. In fact it need not even be Riemannian but could instead be a Finsler manifold with a symmetric norm that does not arise from an inner product.
\begin{ex} \label{to-Finsler} \cite{Allen-Sormani}
Let $M={\mathbb T}^2$ be a torus with warped product metrics $g_j=dr^2 + f_j(r)^2 d\theta^2$
where smooth $f_j: [-\pi, \pi]\to [1,5]$ are defined so that
\begin{equation}
g_j \le g_0=dr^2 + 5^2 d\theta^2 \textrm{ and } \operatorname{Vol}_j \to \operatorname{Vol}_0
\end{equation}
but the $f_j$ are cinched on an increasingly dense set, so that the $d_j$ converge
uniformly to a distance, $d_\infty$, that is Finsler, and $M_j\stackrel {\mathcal{F}}{\longrightarrow} M_\infty$.
Taking
\begin{eqnarray}
S&=&\left\{s_{i,j}=-\pi + \tfrac{2\pi i}{2^j}\,: \, i=1,2,... (2^j-1),\, j\in \mathbb{N}\right\}\\
&=& \left\{-\pi + \tfrac{2\pi}{2},-\pi+\tfrac{2\pi}{4}, -\pi + \tfrac{2\pi 2}{4}, -\pi + \tfrac{2\pi3}{4},
-\pi + \tfrac{2\pi}{8},...\right\}
\end{eqnarray}
which is dense in $[-\pi,\pi]$
and
\begin{equation}
\{\delta_j=(1/2)^{2j}:\, j \in \mathbb{N}\} =\{1/4, 1/16, 1/64,...\}
\end{equation}
one can define the functions $f_j$ that are cinched on this set $S$ as follows
\begin{equation}
f_j(r)=
\begin{cases}
h((r-s_{i,j})/\delta_j ) & r\in [s_{i,j}-\delta_j, s_{i,j} +\delta_j] \textrm{ for } i =1...2^j-1
\\ 5 & \textrm{ elsewhere }
\end{cases}
\end{equation}
where $h$ is an even function such that
$h(-1)=5$ decreasing down to $h(0)=1$ and then
increasing back up to $h(1)=5$.
Then $f_j(r)\ge 1$ converges pointwise
to 1 on the dense set, $S$, and pointwise to $5$ elsewhere. This causes the existence of so many shorter
paths in the limit space that $d_j$ are shown in \cite{Allen-Sormani} to converge pointwise to
\begin{equation}
d_{\infty}((s_1,\theta_1),(s_2,\theta_2))= \min \left\{ \sqrt{s^2 + 5^2 \theta^2} ,
s\left( \tfrac{\sqrt{24}}{5} \right)+ \theta \right\}
\end{equation}
where $s=d_{{\mathbb S}^1}(s_1,s_2)$ and $\theta=d_{{\mathbb S}^1}(\theta_1, \theta_2)$.
To obtain uniform, intrinsic flat and Gromov-Hausdorff convergence, Theorem~\ref{app-thm} was applied.
\end{ex}
We encourage the reader to explore the other examples of sequences of conformal Riemannian metrics given by Allen and Sormani in \cite{Allen-Sormani-2} which provide further understanding of the relationship between important notions of convergence in geometric analysis. Keep in mind that the examples presented here are particularly nice because
we do have the strong two sided bounds required to apply Theorem~\ref{app-thm}.
\subsection{Ilmanen Example}
We now provide the details of Example~\ref{ex-Ilmanen} that was depicted in
Figure~\ref{fig-Ilmanen} in the introduction. It is a sequence of spheres $({\mathbb{S}}^3,g_j)$ with $g_j\ge g_0$ and $\operatorname{Vol}_j({\mathbb{S}}^3) \to \operatorname{Vol}_0({\mathbb{S}}^3)$ that has no Gromov-Hausdorff limit
but by our new Theorem~\ref{vol-thm} converges in the intrinsic flat sense to the standard round sphere, $({\mathbb{S}}^3, g_0)$. This sequence was presented in a talk by Ilmanen as an example of a sequence with positive scalar curvature and no Gromov-Hausdorff limit that ought to converge in some sense to
the standard sphere. That example was described in more detail in the Appendix of \cite{SW-JDG} by the third author in part to justify that intrinsic flat convergence is the right notion of convergence for such a sequence. Here we modify the construction slightly so that it is easier to see, and then discuss how it is related to this paper. It is an important example to keep in mind when reading the proof of Theorem~\ref{vol-thm}.
\begin{ex} \label{ex-Ilmanen}
Ilmanen presented a sequence of three dimensional spheres $M_j=({\mathbb S}^3, g_j)$
of positive scalar curvature
that had increasingly many increasingly thin wells with no Gromov-Hausdorff limit as in
Figure~\ref{fig-Ilmanen} (cf. \cite{SW-JDG}).
To construct the sequence one starts with $M_0=({\mathbb S}^3, g_0)$ with the standard round metric $g_0$.
One then chooses a finite collection of points
\begin{equation}
Q_j=\{q^j_1,q^j_2,...q^j_j\} \subset M_0
\end{equation}
and a radius $\rho_j \to 0$ so that balls of radius $\rho_j$ centered at $q\in Q_j$ are pairwise disjoint.
They can be chosen to be increasingly dense and in fact we can always take the first and last
to be opposite poles
\begin{equation}
q^j_1=q_+ \textrm{ and } q^j_j =q_- \textrm{ so that } d_0(q^j_1, q^j_j)=\pi.
\end{equation}
One then fixes a length $R>0$ and constructs
wells $(W_j,g_j)$ which are rotationally symmetric balls with positive scalar curvature such that
$B_p(R)\subset W_j$ has $\operatorname{Area}_j(\partial B_p(r))$ increased from $0$ at $r=0$ to $4\pi \rho_j^2>0$ at
$r=R$. At $r=R$ each well $W_j$ is smoothly attached to the standard sphere replacing each ball, $B_{q^j_i}(\rho_j)
\subset M_0$ with a ball $B_{p^j_i}(R) \subset M_j$. Outside of the wells $g_j=g_0$.
Ilmanen took $\rho_j \to 0$ fast enough that $j \operatorname{Vol}_j(W_j) \le j R 4\pi \rho_j^2 \to 0$ so that
\begin{equation}
\operatorname{Vol}_j(M_j)
= \operatorname{Vol}_0 \left( M_0\setminus \bigcup_{q\in Q_j} B_q(\rho_j) \right) + j \operatorname{Vol}_j(W_j) \to \operatorname{Vol}_0(M_0).
\end{equation}
Note also that $\operatorname{Diam}_j(M_j) \le \pi + 2R$.
One can construct a distance non-increasing diffeomorphism $F_j: M_j\to M_0$ by taking $F_j$ to be the identity map away from the wells and setting $F_j$ to be rotationally symmetric on each well determined by the requirement that
\begin{equation}
F_j:\, \partial B_p(r) \subset M_j \,\,\to \,\, \partial B_q(\rho_j(r))
\textrm{ where } 4\pi (\rho_j(r))^2 = \operatorname{Area}_j(\partial B_p(r)).
\end{equation}
So by our new Theorem~\ref{vol-thm} we have a new proof that
\begin{equation}
M_j=({\mathbb S}^3, d_j) \stackrel {\mathcal{F}}{\longrightarrow} M_0=({\mathbb S}^3, d_0).
\end{equation}
As mentioned in the first subsection of the background, we can replace $g_j$ with $(F_j^{-1})^*g_j$ to view them as a sequence of metric tensors on a fixed manifold and the distance functions
\begin{equation}
d_j: M \times M \to [0, D]
\end{equation}
as a sequence of distance functions on a fixed manifold. Note that we do not have pointwise convergence
of the distance functions everywhere. For example, the tips of the wells at the poles correspond to the
poles $p_+$ and $p_-$, and
\begin{equation}
d_0(p_-, p_+)= \pi \textrm{ however } d_j(p_-, p_+) \to \pi + 2 R
\end{equation}
due to the depth of the wells.
\end{ex}
We discuss convergence of the distance functions pointwise almost everywhere in the
next section.
\subsection{Volume to Pointwise Almost Everywhere Convergence}\label{subsect:PreviousWorkRelating}
An important theorem established by the first and third authors as Theorem 4.4 in \cite{Allen-Sormani-2} gives a way of obtaining pointwise convergence almost everywhere of the distance functions $d_j$ from the hypotheses of Theorem \ref{vol-thm}. We review the statement and proof of this theorem here since it will be applied in Section \ref{sect:ProofMainThm} to prove Theorem \ref{vol-thm}.
\begin{thm}\label{PointwiseConvergenceAE}
If $(M, g_j)$ are compact continuous Riemannian manifolds without boundary and $(M, g_0)$ is a smooth Riemannian manifold such that
\begin{equation}
g_j(v,v) \ge g_0(v,v) \qquad \forall v\in T_pM
\end{equation}
and
\begin{equation}
\operatorname{Vol}_j(M) \to \operatorname{Vol}_0(M)
\end{equation}
then there exists a subsequence such that
\begin{equation}
\lim_{j\to \infty} d_j(p,q) = d_0(p,q) \textrm{ pointwise a.e. } (p,q) \in M\times M.
\end{equation}
\end{thm}
Since this theorem is so fundamental to the proof of our results in this paper, we outline the proof here. The details
required for all the estimates are in \cite{Allen-Sormani-2}. When reading the proof consult Figure~\ref{fig-APS-tubes}.
\begin{proof}
In \cite{Allen-Sormani-2}, the first and third authors first show
that for any Borel set $U \subset M$, $\operatorname{Vol}_j(U) \ge \operatorname{Vol}_0(U)$ because $g_j\ge g_0$.
Since $\operatorname{Vol}_j(M)\to \operatorname{Vol}_0(M)$, they further prove that
\begin{equation}\label{VolumeSetsConverge}
\operatorname{Vol}_j(U) = \int_U \sqrt{Det_{g_0}(g_j)}\, dV_{g_0} \rightarrow \operatorname{Vol}_0(U)=\int_U dV_{g_0}.
\end{equation}
They next apply this to tubes of $g_0$-geodesics as depicted in Figure~\ref{fig-APS-tubes}.
\begin{figure}[h]
\center{\includegraphics[width=.6\textwidth]{APS-tubes}}
\caption{A tube $\mathcal{T}$ foliated by $g_0$-geodesics, $\gamma$,
with $L_j(\gamma)\ge L_0(\gamma)$ has $\operatorname{Vol}_j(\mathcal{T})\to \operatorname{Vol}_0(\mathcal{T})$ so $L_j(\gamma)\to L_0(\gamma)$ for almost every $\gamma$ but not for $\gamma$ ending at a tip.}
\label{fig-APS-tubes}
\end{figure}
Since they only wish to show pointwise almost everywhere convergence, they consider
$p,q \in M$ so that $q$ is not a cut point of $p$ with respect to $g_0$:
\begin{equation}
\mathcal{U}=\{(p,q):\, q \notin CutLocus_{g_0}(p)\} \subset M \times M.
\end{equation}
They choose
\begin{equation}
v=v_{p,q}\in T_pM \textrm{ such that } \exp_p(v_{p,q}) = q
\end{equation}
and $\gamma_p(t)=\exp_p(tv)$ is $g_0$-length minimizing from $p$ to $q$: $L_0(\gamma_p)=d_0(p,q)$.
The goal is to define a parametrized tube around the geodesic from $p$ to $q$. In order to accomplish this they define
\begin{align}
w \in N_{v,\alpha,p}=\{w: w \in S_k\subset T_pM , |w|_{g_0} < \alpha\}
\end{align}
where $S_k$ is a sphere (or a hyperplane if $k=0$) through the origin in $T_pM$ which is carefully chosen to avoid
focal points in the foliation constructed below and $\alpha>0$ is chosen small enough so that
they can extend $v$ uniquely to $T_{p'}M$ for every $p'=\exp_p(w)$, $w \in N_{v,\alpha,p}$ by choosing a $v \in T_{p'}M$ so that $v \perp \exp_p(N_{v,\alpha,p})$, of the same length as $v \in T_pM$, and $v$ is a continuous vector field on $ \exp_p(N_{v,\alpha,p})$. Then the foliation is defined:
\begin{align}
\mathcal{T}_{v, \alpha,p} = \{\gamma_{p'}(t):p' = \exp_p(w), w \in N_{v,\alpha,p}, 0 \le t \le 1\},
\end{align}
created using a foliation by length minimizing $g_0$-geodesics
\begin{align}
\gamma_{p'}(t) = \exp_{p'}(tv), \quad 0 \le t \le 1 \textrm{ running from } p' \textrm{ to } q'
\end{align}
where $p' = \exp_p(w)$.
Keep in mind that
\begin{equation}\label{71}
L_j(\gamma_{p'})\ge d_j(p',q') \ge d_0(p',q')=L_0(\gamma_{p'})
\end{equation}
These tubes of $g_0$-geodesics are depicted in Figure~\ref{fig-APS-tubes} so that one sees
how large $L_j(\gamma_{p'})$ is when the geodesic reaches into a tip.
By $\operatorname{Vol}_j(M) \rightarrow \operatorname{Vol}_0(M)$ and \eqref{VolumeSetsConverge} one has
convergence of the volumes of the tubes
\begin{align}\label{FoliationVolume}
\operatorname{Vol}_j(\mathcal{T}_{v,\alpha,p})\rightarrow \operatorname{Vol}_0(\mathcal{T}_{v,\alpha,p}).
\end{align}
They next work to show that if the volumes of the tubes are converging then for almost every
$\gamma_{p'}$,
\begin{equation} \label{Ljplan}
L_j(\gamma_{p'}) \to L_0(\gamma_{p'}) \textrm{ and thus by (\ref{71}) one has }
d_j(p',q') \to d_0(p',q').
\end{equation}
To do this rigorously they must be careful to keep track of the variation between the geodesics.
Take
\begin{align}
d\exp^{\perp}: N\, \exp_p(S_k) \rightarrow M
\end{align}
to be the differential of the normal exponential map, where $N\exp_p(S_k)$ is the normal bundle to $\exp_p(S_k) \subset M$. Let $|d \exp^{\perp}|_{g_0}$ denote the determinant of this map in the directions orthogonal to the foliation, let $d \mu_{N_{v,\alpha,p}}=d\mu_N$ be the usual measure on $N_{v,\alpha,p} \subset T_pM \approx\mathbb{R}^m$, let $\lambda_1^2,...,\lambda_m^2$ be the eigenvalues of $g_j$ with respect to $g_0$ with $\lambda_1 \le ... \le \lambda_m$, and let $\sqrt{h}$ be the square root of the determinant of the metric $h$ of the hypersurface $\exp_p(N_{v,\alpha,p})$ in normal coordinates on $N_{v,\alpha,p}$. Then
\begin{align}
\operatorname{Vol}_j(\mathcal{T}_{v,\alpha,p})&= \int_{\mathcal{T}_{v, \alpha, p}} \sqrt{Det_{g_0}(g_j)}\,dV_{g_0}
\\&= \int_{N_{v,\alpha,p}}\int_{\gamma_{p'}}\lambda_1...\lambda_m |d \exp_{p'}^{\perp}|_{g_0}\sqrt{h}dt_{g_0}\,d\mu_{N}
\\& \ge \int_{N_{v,\alpha,p}}\int_{\gamma_{p'}}\lambda_1...\lambda_{m-1} |d \exp^{\perp}|_{g_0}\sqrt{h}dt_{g_j}\,d\mu_{N}
\\&\ge \int_{N_{v,\alpha,p}}\int_{\gamma_{p'}} |d \exp^{\perp}|_{g_0}\sqrt{h}dt_{g_j}\,d\mu_{N}\label{CrucialVolumeToDistanceEq1}
\\&\ge \int_{N_{v,\alpha,p}}\int_{\gamma_{p'}} |d \exp^{\perp}|_{g_0}\sqrt{h}dt_{g_0}\,d\mu_{N}\label{CrucialVolumeToDistanceEq2}
\\&= \operatorname{Vol}_0(\mathcal{T}_{v,\alpha,p}).
\end{align}
By the convergence of the volumes of the tubes in (\ref{FoliationVolume}), one concludes that
\eqref{CrucialVolumeToDistanceEq1} and \eqref{CrucialVolumeToDistanceEq2} converge to one another:
\begin{equation}
\int_{N_{v,\alpha,p}} \int_{\gamma_{p'}} |d \exp^{\perp}|_{g_0}\sqrt{h}(dt_{g_j}-dt_{g_0})\,d\mu_{N} \to 0.
\end{equation}
Next they show that
\begin{align}
|d \exp^{\perp}|_{g_0} \ge A_{p,q} >0 \quad \text{on } \mathcal{T}_{v,\alpha,p}, \label{NonDegnerateFoliation}
\end{align}
using a careful discussion to avoid $g_0$-focal points. Note that the constants $A_{p,q}$ might be
quite small if $p$ and $q$ are almost conjugate to one another. Also, $\sqrt{h} > h_0 > 0$ on $N_{v,\alpha,p}$ since in normal coordinates $\sqrt{h}=1$ at $p$.
They then obtain (\ref{Ljplan}) rigorously as follows
\begin{align}
\int_{N_{v,\alpha,p}}&\int_{\gamma_{p'}} |d \exp^{\perp}|_{g_0}\sqrt{h}(dt_{g_j}-dt_{g_0})\,d\mu_{N}
\\&\ge A_{p,q} h_0 \int_{N_{v,\alpha,p}} L_j(\gamma_{p'}) - L_0(\gamma_{p'})\,d \mu_N
\\&\ge A_{p,q} h_0 \int_{N_{v,\alpha,p}} d_j(p',q') - d_0(p',q') \,d \mu_N
\\&= A_{p,q} h_0 \int_{N_{v,\alpha,p}} |d_j(p',q') - d_0(p',q')| \,d \mu_N, \label{FirstIntegralDistancesToZero}
\end{align}
and hence \eqref{FirstIntegralDistancesToZero} converges to $0$ as $j \rightarrow \infty$. In particular
for almost every $p' \in \exp_p(N_{v,\alpha,p})$ and $q'$ determined by $p'$ they have $d_j(p',q') \to d_0(p',q')$. However one needs to show pointwise almost everywhere convergence where $(p',q')\in M\times M$ with $q'$ independent of $p'$
and $p'$ running freely almost everywhere in $M$.
In order to obtain a $M\times M$ open set around $(p,q)$ they need to free themselves from the restrictions to submanifolds depending on $N_{v,\alpha,p}$
and the dependence of $q'$ on $p'$, so they construct a $2m$ dimensional set of deformations of
$N_{v,\alpha,p}$ as follows:
\begin{equation}
\mathcal{N}_{p,q}= \{(v,\tau, \eta, w): \,v\in V_\varepsilon, \,\tau\in (-\bar{\tau},\bar{\tau}), \,\eta\in (\bar{\eta}_1, \bar{\eta}_2),
\, w\in N_{\eta v,\alpha,p_\tau} \}
\end{equation}
with
\begin{eqnarray*}
v\,&\in & V_\varepsilon= \{v'\in T_pM:\, |v'|_{g_0}=|v_{p,q}|_{g_0},\, g_0(v',v_{p,q}) > (1-\varepsilon) |v_{p,q}|_{g_0}\}\\
&& \qquad \qquad \textrm{where }\varepsilon > 0 \textrm{ sufficiently close to 0 depending on $p,q$}\\
\eta\, &\in& (\bar{\eta}_1, \bar{\eta}_2) \textrm{ sufficiently close to 1 depending on $p,q$}\\
\tau\, &\in& (-\bar{\tau},\bar{\tau}) \textrm{ sufficiently close to 0 depending on $p,q$}\\
p_\tau &=& p_{\tau,v}= \exp_p(\tau v) \\
p'_{\tau}&=& p'_{v, \tau, \eta,w}= \exp_{p_\tau}(w) \textrm{ where } w\in N_{v,\alpha,p_\tau}\\
q'_\eta&=&q'_{v, \tau, \eta,w}= \exp_{p'_\tau}(\eta v') \textrm{ after parallel transporting $v$ to $v'=v'_{v, \tau, \eta,w}$}.
\end{eqnarray*}
They define
\begin{equation}
\Psi_{p,q}: \mathcal{N}_{p,q} \to U_{p,q} \subset M\times M
\textrm{ to be }
\Psi_{p,q}=(p'_{v, \tau, \eta,w}, q'_{v, \tau, \eta,w})
\end{equation}
and prove it is bijective onto an open neighborhood $U_{p,q}$ of $(p,q) \in M\times M$ using the fact that
$q$ is not a cut point of $p$.
They repeat the integration as above replacing $N_{v_{p,q},\alpha,p}$ with $N_{v,\alpha,p_\tau}$
where $v\in V_\varepsilon$, $\tau\in (-\bar{\tau},\bar{\tau})$, and $\eta\in (\bar{\eta}_1, \bar{\eta}_2)$.
In fact the integrals are not only converging to $0$ but also uniformly
bounded above:
\begin{align}
\int_{N_{ v,\alpha,p}}&\int_{\gamma_{p'_\tau}} |d \exp^{\perp}|_{g_0}\sqrt{h}(dt_{g_j}-dt_{g_0})\,d\mu_{N}
\\&\le \operatorname{Vol}_j(\mathcal{T}_{\eta v,\alpha,p}) - \operatorname{Vol}_0(\mathcal{T}_{\eta v,\alpha,p}) \le \operatorname{Vol}_j(M)\le V_0.
\end{align}
Thus they apply the Dominated Convergence Theorem to see that
\begin{equation}
\int_{v\in V_\varepsilon} \int_{-\bar{\tau}}^{\bar{\tau}}
\int_{\bar{\eta}_1}^{\bar{\eta}_2}
\int_{N_{v,\alpha,p}} \int_{\gamma_{p'_\tau}} |d \exp^{\perp}|_{g_0}\sqrt{h}(dt_{g_j}-dt_{g_0})\,d\mu_{N} \, d\eta d\tau dV\to 0.
\end{equation}
This implies as in (\ref{FirstIntegralDistancesToZero}) that one has
\begin{equation}
\int_{v\in V_\varepsilon} \int_{-\bar{\tau}}^{\bar{\tau}}
\int_{\bar{\eta}_1}^{\bar{\eta}_2}
\int_{N_{v,\alpha,p}} |d_j(p'_\tau,q'_\eta) - d_0(p'_\tau,q'_\eta)| \,d\mu_{N} \, d\eta d\tau dV\to 0.
\end{equation}
Applying the map $\Psi_{p,q}: \mathcal{N}_{p,q} \to U_{p,q}$ we have
\begin{equation}
\int_{U_{p,q}} |d_j(p',q') - d_0(p',q')| \,d\mu \to 0.
\end{equation}
So there is a subsequence converging pointwise almost everywhere on $U_{p,q}$.
To complete the proof, they observe that $\mathcal{U}=\{(p,q):\, q \notin CutLocus_{g_0}(p)\}$ is a set of full measure
in $M\times M$ and has a compact exhaustion:
\begin{equation}
K_1\subset K_2 \subset \cdots \subset K_k\subset \cdots \subset \, \mathcal{U}
\textrm{ and } \bigcup_{k =1}^\infty K_k \,=\, \mathcal{U}.
\end{equation}
Since the open cover of each compact set
\begin{equation}
K_k \, \subset \, \mathcal{U}\, \subset \bigcup_{(p,q)\in \mathcal{U}} U_{p,q}
\end{equation}
has a finite subcover, we obtain a countable cover of $\mathcal{U}$
\begin{equation}
\mathcal{U}\subset \bigcup_{i=1}^\infty U_{p_i,q_i}
\textrm{ where each } K_k \subset \bigcup_{i=1}^{N_k} U_{p_i,q_i}.
\end{equation}
They now take a subsequence of $d_j: M \times M \to [0,D]$ which converges pointwise
almost everywhere on $U_{p_1,q_1}$, then a further subsequence which
converges pointwise almost everywhere on $U_{p_2,q_2}$, and so on and diagonalize,
to obtain a subsequence that converges pointwise almost everywhere on all
$U_{p_i,q_i}$ and thus on $\mathcal{U}$ which has full measure in $M\times M$.
\end{proof}
\section{A New Explicit Estimate on the Intrinsic Flat Distance}\label{sect:NewSWIFEstimate}
In this section we prove a new explicit estimate on the intrinsic flat distance between
metric spaces where $d_j\ge d_0$ everywhere and $d_j \le d_0+2\delta_j$ on a set $W_j$
where $\operatorname{Vol}_j(M_j\setminus W_j)$ is small. This explicit estimate will be applied to prove our
main theorem.
\begin{thm}\label{est-SWIF}
Let $M$ be an oriented, connected and closed manifold, $M_j=(M,g_j)$ and $M_0=(M,g_0)$ be Riemannian manifolds with $\operatorname{Diam}(M_j) \le D$, $\operatorname{Vol}_j(M_j)\le V$
and $F_j: M_j \rightarrow M_0$ a $C^1$ diffeomorphism and distance non-increasing map:
\begin{equation}
d_j(x,y) \ge d_0(F_j(x), F_j(y)) \quad \forall x,y \in M_j.
\end{equation}
Let $W_j \subset M_j$ be a measurable set and assume that there exists a $\delta_j > 0$ so that
\begin{equation}\label{eq-distCond}
d_j(x,y) \le d_0(F_j(x), F_j(y)) +2 \delta_j \qquad \forall x,y \in W_j
\end{equation}
with
\begin{equation}\label{eq-volCond}
\operatorname{Vol}_j(M_j \setminus W_j) \le V_j
\end{equation}
and
\begin{equation}
h_j \ge \sqrt{2 \delta_j D + \delta_j^2}
\end{equation}
then
\begin{equation}
d_{\mathcal{F}}(M_0,M_j) \le 2V_j + h_j V.
\end{equation}
\end{thm}
\begin{figure}[h]
\center{\includegraphics[width=.4\textwidth]{AS2-A.jpg}}
\caption{Here we see a piecewise flat $(M,d_j)$ around $(M, d_0)=({\mathbb S}^2, d_{\mathbb S})$ clearly identifying the regions where $d_j$ is not close to $d_0$ in yellow.}
\label{fig-AS2-A}
\end{figure}
\begin{rmrk}
Observe that the hypotheses of this theorem are much weaker than the hypotheses of
the theorem of the third author with Huang and Lee in the Appendix of \cite{HLS} which requires controlling the
distances in a biLipschitz way everywhere. We may also contrast this theorem with an earlier theorem of the third author with Lakzian (see Theorem 4.6 in \cite{Lakzian-Sormani-1}). The theorem with Lakzian does not require the distance non-increasing map we require here, but does require that one obtain uniform bounds on the metric tensor in the good region. It requires a two-sided distance estimate in place of (\ref{eq-distCond}). In addition to
a volume estimate similar to (\ref{eq-volCond}), it requires uniform control on the areas of $\partial W_j$.
All three of these theorems are proven by constructing an explicit common metric space $Z$ into which the oriented manifolds embed via distance preserving maps. However the metric spaces are quite different for each theorem and thus provide different estimates requiring different bounds.
\end{rmrk}
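\begin{rmrk}
For instance, if the estimate (\ref{eq-distCond}) holds on all of $M_j$, then we may take $W_j=M_j$ and $V_j=0$ in Theorem~\ref{est-SWIF}, and choosing $h_j=\sqrt{2\delta_j D + \delta_j^2}$ gives
\begin{equation}
d_{\mathcal{F}}(M_0, M_j) \,\le\, \sqrt{2\delta_j D + \delta_j^2}\; V.
\end{equation}
In particular, if $\delta_j \to 0$, so that $d_j$ converges uniformly to $d_0$ from above while $d_j \ge d_0$, then $d_{\mathcal{F}}(M_0,M_j) \to 0$ without any two-sided biLipschitz control as in (\ref{HLS-d_j}).
\end{rmrk}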
\subsection{Constructing the Common Space $Z$}
Now we construct a complete metric space $Z$ for which two Riemannian manifolds can be embedded in a distance preserving manner.
\begin{figure}[h]
\center{\includegraphics[width=.8\textwidth]{AS2-B.jpg}}
\caption{Here we see $(M,d_j)$ and $(M, d_0)$ on the left and $Z$ on the right, using the same coloring as
in Figure~\ref{fig-AS2-A}.}
\label{fig-AS2-B}
\end{figure}
\begin{lem} \label{cnstr0-Z}
Let $M$ be a connected, closed manifold, $M_j=(M,g_j)$ and $M_0=(M,g_0)$ be Riemannian manifolds with $\operatorname{Diam}(M_j) \le D$, and $F_j: M_j \rightarrow M_0$ be a $C^1$ diffeomorphism and distance non-increasing map. Let $W_j \subset M_j$ be measurable, let $h_j>0$, and define the space $Z$ as in Figure~\ref{fig-AS2-B} to be
\begin{equation}
Z= M_0 \sqcup \left( M \times [0,h_j] \right) \sqcup M_j \,\,|_\sim
\end{equation}
where we identify points via the bijection
\begin{equation}
\bar{F}_j: \, M \times \{0\} \subset M\times [0,h_j] \to M_0 \textrm{ where } \bar{F}_j(x,0)=F_j(x),
\end{equation}
and identify points via the bijection
\begin{equation}
id: \,\overline{W}_j \subset M_j \to \overline{W}_j \times \{h_j\} \subset M\times [0,h_j] \textrm{ where } id(x)=(x,h_j).
\end{equation}
Then $Z$ is a metric space with distance, $d_Z: Z \times Z \to [0, \infty)$, given by
\begin{equation}
d_Z(z_1, z_2) = \inf \{L_Z(C):\, C(0)=z_1,\, C(1)=z_2\}
\end{equation}
where $C$ is any piecewise smooth curve whose length, $L_Z$, is determined using $g_j$ in $M_j$,
$g_0$ in $M_0$ and the isometric product $g_j + dh^2$ in $M \times (0,h_j]$.
In addition, for all points $(x_1,h),(x_2,h') \in M\times[0,h_j] \subset Z$ we have:
\begin{equation}
d_Z((x_1,h),(x_2,h'))\ge \sqrt{d_0(F_j(x_1),F_j(x_2))^2 +|h-h'|^2} ,
\end{equation}
\begin{equation} \label{region-dist-dec-to-Z}
d_Z((x_1,h),(x_2,h')) \le \sqrt{d_j(x_1,x_2)^2 +|h-h'|^2}.
\end{equation}
\end{lem}
\begin{rmrk}
Note that the way in which we measure the lengths of curves in $M\times(0,h_j] \subset Z$ is via the isometric product $g_j+dh^2$ but we are not claiming that the metric space has a product structure on $M\times(0,h_j]$. In general one does not expect the metric space $(Z,d_Z)$ to have a product structure because it will be advantageous to take advantage of shortcuts through $M_0$ which is identified with $M\times \{0\}$.
\end{rmrk}
\begin{proof}
Observe that the metric space $Z$ constructed in the statement of this lemma is well defined by the discussion given in Section 2.1 of Burago, Burago, Ivanov \cite{BBI}. In particular, the set of piecewise smooth curves is a class of admissible paths and we can measure lengths by lengths of admissible paths by using $g_j$ in $M_j$,
$g_0$ in $M_0$ and the isometric product $g_j + dh^2$ in $M \times (0,h_j]$. Then by Exercise 2.1.2 of \cite{BBI} the distance function $d_Z: Z \times Z \to [0, \infty)$ defined by
\begin{equation}
d_Z(z_1, z_2) = \inf \{L_Z(C):\, C(0)=z_1,\, C(1)=z_2\}
\end{equation}
turns $(Z,d_Z)$ into a metric space. Now we would like to show the claimed estimates on $d_Z$.
Given any $(x_1,h_1),(x_2,h_2) \in Z' = M \times [0,h_j] \subset Z$, with the metric on $Z'$ restricted from $Z$, let
\begin{equation}
C:[0,1]\to Z \textrm{ such that }C(0)=(x_1,h_1)
\textrm{ and } C(1)=(x_2,h_2).
\end{equation}
We claim that we can take
\begin{equation} \label{identified-segments}
C([0,1]) \subset Z'=M \times [0,h_j].
\end{equation}
Suppose, to the contrary, that $C$ is not contained in $Z'$, and let $S \subset [0,1]$ be the maximal subset so that for all $s \in S$, $C(s) \not \in Z'$. Note that by the fact that $C(0),C(1) \in Z'$ we know $S \subset (0,1)$. If we define the map
\begin{align}
id:M_j \rightarrow M \times \{h_j\}\subset Z, \qquad id(x) = (x,h_j),
\end{align}
then we can define a new curve
\begin{align}
\tilde{C}(t)=
\begin{cases}
id(C(t))& \text{ if } t \in S
\\ C(t)& \text{ otherwise.}
\end{cases}
\end{align}
By construction $\tilde{C} \subset Z'$ and since we measure the lengths of curves the same way in $M_j$ and in $M\times \{h_j\}$ we find that $L_Z(C)=L_Z(\tilde{C})$.
Now assume that $C([0,1]) \subset Z'$ so we can write:
$C(t)=(x(t), h(t))$ where $x(t)\in M$ and $h(t)\in [0,h_j]$. Then by Lemma \ref{DistToMetric} we know that $g_0 (dF_j(v), dF_j(v)) \leq g_j (v,v)$ and hence
\begin{align}\label{LengthLowerBound}
L_Z(C) \ge \int_0^1 \sqrt{g_0(dF_j(x'), dF_j(x'))+ h'^2}dt.
\end{align}
Since \eqref{LengthLowerBound} holds for all $C$ and the right hand side is how lengths would be measured in the Riemannian product $g_0 + dh^2$ we can take the infimum over all curves to conclude that
\begin{align}
d_Z((x_1,h_1),(x_2,h_2)) \ge \sqrt{d_0(F_j(x_1),F_j(x_2))^2 +|h_1-h_2|^2},
\end{align}
for all $(x_1,h_1),(x_2,h_2) \in Z'$.
Again using the fact that $g_0 (dF_j(v), dF_j(v)) \leq g_j (v,v)$ we can observe
\begin{align}\label{LengthUpperBound}
L_Z(C) \le \int_0^1 \sqrt{g_j(x', x')+ h'^2}dt.
\end{align}
Since \eqref{LengthUpperBound} holds for all $C$ and the right hand side is how lengths would be measured in the Riemannian product $g_j + dh^2$ we can take the infimum over all curves to conclude that
\begin{align}
d_Z((x_1,h_1),(x_2,h_2)) \le \sqrt{d_j(x_1,x_2)^2 +|h_1-h_2|^2}.
\end{align}
\end{proof}
Now we use the metric space, $Z$, constructed in Lemma \ref{cnstr0-Z} to show that $M_j$ and $M_0$ can be embedded in $Z$ in a distance preserving manner. See Figures~\ref{fig-AS2-A} and~\ref{fig-AS2-B}.
\begin{lem} \label{cnstr-Z}
Let $M$ be a connected, closed manifold, $M_j=(M,g_j)$ and $M_0=(M,g_0)$ be Riemannian manifolds with $\operatorname{Diam}(M_j) \le D$, and $F_j: M_j \rightarrow M_0$ be a $C^1$ diffeomorphism and distance non-increasing map.
Let $W_j \subset M_j$ be a measurable set. Assume that
\begin{equation}\label{eq-distCond0}
d_j(x,y) \le d_0(F_j(x), F_j(y)) +2 \delta_j \qquad \forall x,y \in W_j
\end{equation}
and take
\begin{equation}
h_j \ge \sqrt {2 \delta_j D + \delta_j^2}
\end{equation}
then the maps
\begin{equation}
\varphi_j: M_j \to Z \textrm{ where } \varphi_j(x) = x \textrm{ if } x \notin \overline{W}_j \textrm{ and } \varphi_j(x) = (x, h_j) \textrm{ otherwise}
\end{equation}
and
\begin{equation}
\varphi_0: M_0 \to Z \textrm{ where } \varphi_0(x) = (F_j^{-1}(x), 0)
\end{equation}
are distance preserving maps.
\end{lem}
\begin{proof}
First we show that $\varphi_0: M_0 \to Z$ is distance preserving.
Given any $p,q \in M_0$ where $\varphi_0(p)=(x_p,0)$ and $\varphi_0(q)=(x_q,0)$, we can use the estimate of Lemma \ref{cnstr0-Z} to notice
\begin{align}\label{d0distIneq}
d_Z(\varphi_0(p),\varphi_0(q)) \ge d_0(p,q).
\end{align}
Since we can choose a curve $C \subset M \times \{0\}$ whose length achieves the equality in \eqref{d0distIneq} we see that
\begin{align}
d_Z(\varphi_0(p),\varphi_0(q)) = d_0(p,q),
\end{align}
and hence $\varphi_0$ is distance preserving.
\bigskip
Now we show that $\varphi_j: M_j \to Z$ is distance preserving.
Consider $p,q\in M_j$. Let
\begin{equation}
C:[0,1]\to Z \textrm{ such that }C(0)=\varphi_j(p)
\textrm{ and } C(1)=\varphi_j(q).
\end{equation}
In the case where $p,q \in \bar{W}_j$ we know by the proof of Lemma \ref{cnstr0-Z} that we can take
\begin{equation}
C([0,1]) \subset Z_j'=M \times [0,h_j],
\end{equation}
with the metric on $Z_j'$ restricted from $Z$.
Thus we have $C(t)=(x(t), h(t) )$ where
\begin{equation} \label{isom-prod-parts-2}
x(0)= p \quad h(0)=h_j \quad x(1)=q \quad h(1)=h_j.
\end{equation}
If all of $C([0,1])$ lies above
$h=0$ we have
\begin{eqnarray}
L_Z(C)& = &\int_0^1 \sqrt{ g_j(x'(t),x'(t))+ h'(t)^2 \,} \, dt \\
&\geq& \int_0^1 \sqrt{ g_j(x'(t),x'(t))\,} \, dt =L_{g_j}(x[0,1]) \ge d_j(p,q).
\end{eqnarray}
However if $C$ does
reach $h=0$ then we only have
\begin{equation}\label{eq-trianIn}
L_Z(C([0,1]))\ge d_Z(\varphi_j(p), (x_p,0))+ d_{0}(F_j(x_p), F_j(x_q)) + d_Z((x_q,0), \varphi_j(q))
\end{equation}
where $(x_p,0)$ and $(x_q,0)$ are the first and last points where $C$ hits $h=0$.
By our choice of $h_j$ and for any $0 < d \le D$ we have,
\begin{equation}
d^2+h_j^2 \ge d^2 + 2 \delta_j D + \delta_j^2 \ge d^2 + 2\delta_j d +\delta_j^2= (d+\delta_j)^2.
\end{equation}
Since $\operatorname{Diam}(M_0) \leq \operatorname{Diam}(M_j) \leq D$ and using the estimates from Lemma \ref{cnstr0-Z} we find
\begin{align}
d_Z(\varphi_j(p), (x_p,0)) &\ge \sqrt{d_{0}(F_j(p),F_j(x_p))^2 + h_j^2}\\
&\ge d_{0}(F_j(p),F_j(x_p))+ \delta_j
\\ d_Z(\varphi_j(q), (x_q,0)) &\ge \sqrt{d_{0}(F_j(q),F_j(x_q))^2 + h_j^2}
\\&\ge d_{0}(F_j(q),F_j(x_q)) + \delta_j.
\end{align}
Now recall that $F_j$ is distance non-increasing and satisfies
(\ref{eq-distCond0}) where (\ref{eq-distCond0}) also holds for points $p,q$ in the $d_j$ closure of $W_j$ by continuity.
Substituting these observations in (\ref{eq-trianIn}) we find
\begin{align}
L_Z(C[0,1])
&\ge d_{0}(F_j(p),F_j(x_p)) + d_{0}(F_j(x_p), F_j(x_q))
\\&\qquad + d_{0}(F_j(q),F_j(x_q)) + 2\delta_j
\\&\ge d_{0}(F_j(p),F_j(q)) + 2\delta_j
\\&\ge d_{j}(p,q).
\end{align}
Since we can choose a $C \subset Z'$ which realizes the distance $d_j(p,q)$ we see that $\varphi_j$ is distance preserving for $p,q\in \overline{W}_j$.
If $p$ or $q$ lies in $M_j \setminus \overline{W}_j$, then any curve $C:[0,1]\to Z$ from
$C(0)=\varphi_j(p)$ to $C(1)=\varphi_j(q)$ starts or ends at a point which is not
identified with a point in $Z_j'$. If no points in $C$ are identified with a point in $Z_j'$
then
\begin{equation}
L_Z(C[0,1])=L_{g_j}(C[0,1]) \ge d_{j}(p,q).
\end{equation}
Otherwise let $p'$ be the first point on $C$ identified with a point in $Z_j'$ and
$q'$ be the last such point. Then $p', q' \in \overline{W}_j$, and so we know from above that
\begin{equation}
d_Z((p',h_j),(q',h_j))=d_{j}(p',q').
\end{equation}
Applying the fact that unidentified points are measured using $d_{j}$
we have
\begin{equation}
L(C[0,1])\ge d_{j}(p,p')+ d_{j}(p',q')+d_{j}(q',q)\ge d_{j}(p,q).
\end{equation}
Since we can choose a $C \subset M_j$ which realizes the distance $d_j(p,q)$ we see that $\varphi_j$ is distance preserving for the remaining $p,q\in M_j$ as well. Hence $\varphi_j:M_j \to Z$ is distance preserving, as desired.
\end{proof}
\subsection{Estimating Intrinsic Flat Distance}
We now use the metric space $Z$ constructed in the previous subsection in order to give a new estimate on the intrinsic flat distance between Riemannian manifolds. Readers may wish to review Subsection~\ref{subsect:SWIFBackground} before reading this proof.
\begin{proof}[Proof of Theorem \ref{est-SWIF}]
In order to estimate the intrinsic flat distance between $M_j$ and $M_0$ we must be very careful with orientation.
Remember $M_j=(M,g_j)$ and $M_0=(M, g_0)$ where $M$ is the same compact oriented manifold and $F_j: M_j\to M_0$
is biLipschitz. So there is
an oriented atlas of smooth charts
\begin{equation}
\phi_i: U_i \subset \mathbb{R}^m \to \phi_i(U_i)\subset M_j \textrm{ and } F_j\circ \phi_i: U_i \subset \mathbb{R}^m \to F_j(\phi_i(U_i))\subset M_0.
\end{equation}
Note that these charts are diffeomorphisms so they are biLipschitz, with different constants for
$M_j=(M,g_j)$ and $M_0=(M,g_0)$, and they can be restricted to $A_i \subset U_i$
to ensure their images are pairwise disjoint as required when considering them as rectifiable charts for $M_j$ and $M_0$.
Furthermore
\begin{align}\label{eq-canonicalT}
[[M_j]] (f,\pi)= & \sum_{i=1}^\infty \int_{ A_i} (f \circ \phi_i ) \, d(\pi_1\circ \phi_i)\wedge\cdots \wedge d(\pi_m\circ \phi_i)
\end{align}
for any $f: M_j \to \mathbb{R}$ Lipschitz and bounded and $\pi=(\pi_1,...,\pi_m)$ where each component is Lipschitz
and
\begin{equation}
[[M_0]]= {F_j}_\sharp[[M_j]].
\end{equation}
Let $\iota: [0,h_j] \to [0,h_j]$ be the identity map. Then, $(\phi_i, \iota) : A_i\times [0,h_j] \to M_j \times [0,h_j]$ defines an oriented atlas of biLipschitz maps. Then we can write $[[ \, M_j \times [0,h_j] \, ]]$ as a countable sum of integrals as above using this atlas.
Now consider the identity map $\iota_j: M_j \times [0,h_j] \to Z'_j$. Since it is $1$-Lipschitz by (\ref{region-dist-dec-to-Z}) and bijective, the maps
$$\iota_j \circ (\phi_i, \iota) : A_i\times [0,h_j] \to M_j \times [0,h_j] \to Z'_j$$
define an oriented atlas of Lipschitz maps for $Z'_j$, where the maps can be considered to be biLipschitz as before.
Thus, we can define the current with weight $1$ given by this oriented atlas, $B$. Moreover,
\begin{equation}
B= {\iota_j}_\sharp [[ \,M_j \times [0,h_j] \,]].
\end{equation}
Recall that the boundary operator commutes with the pushforward operator, thus
\begin{equation}
\partial B= {\iota_j}_\sharp \partial[[ \, M_j \times [0,h_j] \,]]= {\iota_j}_\sharp \alpha_\sharp [[ M_j \times \{h_j \} ]] - {\iota_j}_\sharp \beta_\sharp [[M_j \times \{0\}]],
\end{equation}
where $\alpha: M_j \times \{h_j\} \to M_j \times [ 0, h_j]$ and $\beta: M_j \times \{0\} \to M_j \times [ 0, h_j] $ are inclusion maps and are trivially Lipschitz maps.
By the definition of $\varphi_0$,
\begin{eqnarray}
\varphi_{0\#}[[ M_0]] & = & {\iota_j}_\sharp \beta_\sharp [[M_j \times \{0\}]].
\end{eqnarray}
Since $W_j \subset M_j$ is a measurable set, we can define an integer rectifiable current of weight $1$, $[[W_j \times \{h_j \} ]]$, by restricting the atlas of $M_j \times \{h_j\}$. In a similar way, $[[M_j \setminus W_j]]$ is a well defined integer rectifiable current.
By the definition of $\varphi_j$,
\begin{eqnarray}
\varphi_{j\#}[[ M_j]] & = & {\iota_j}_\sharp \alpha_\sharp [[W_j \times \{h_j \} ]] + \tilde \alpha_\sharp [[M_j \setminus W_j]].
\end{eqnarray}
We define now an integer rectifiable current in the following way,
\begin{equation}
A = \tilde \alpha_\sharp [[M_j \setminus W_j]] - {\iota_j}_\sharp \alpha_\sharp [[ (M_j \setminus W_j ) \times \{h_j \} ]],
\end{equation}
where $\tilde \alpha: M_j \setminus W_j \to Z$ is the inclusion map, which is Lipschitz since it is distance preserving.
Note that the second term in $A$ corresponds to the current of weight $1$ on the set of unidentified points in $Z$ drawn in yellow
in Figure~\ref{fig-AS2-B}.
Furthermore, $A$ is an integral current given that
\begin{equation}
\partial A = \partial \tilde \alpha_\sharp [[M_j \setminus W_j]] - \partial {\iota_j}_\sharp \alpha_\sharp [[ (M_j \setminus W_j ) \times \{h_j \} ]]=0.
\end{equation}
From the previous equalities,
\begin{equation}
A = \varphi_{j\#}[[M_j]] - {\iota_j}_\sharp \alpha_\sharp [[ M_j \times \{h_j \} ]].
\end{equation}
We conclude that
\begin{equation}
\partial B + A = \varphi_{j\#}[[M_j]]- \varphi_{0\#}[[ M_0]]
\end{equation}
and thus
\begin{equation}
d_{\mathcal{F}}(M_j, M_0) \le {\mathbf M}(B) + {\mathbf M}(A).
\end{equation}
To finish the proof, since $B= {\iota_j}_\sharp [[ \, M_j \times [0,h_j] \, ]]$ we know that
\begin{equation}
{\mathbf M}(B) \leq {\mathbf M}([[\, M_j \times [0,h_j] \,]]) = \operatorname{Vol}_j(M_j \times [0,h_j]) \leq h_jV,
\end{equation}
where we used the fact that the map $\iota_j: M_j \times [0,h_j] \to Z_j'$ is a 1-Lipschitz map.
In a similar way,
\begin{equation}
{\mathbf M}(A) \leq 2 \operatorname{Vol}_j(M_j \setminus W_j).
\end{equation}
\end{proof}
\section{Pointwise Convergence and Volume Bounds imply Intrinsic Flat Convergence}\label{sect:GoodSet}
In this section we prove the following theorem which we will
later apply to prove our main theorem.
\begin{thm} \label{ptwise-to-SWIF}
Suppose we have a fixed closed and oriented Riemannian manifold, $M_0=(M,g_0)$,
and a sequence of metric tensors $g_j$ on $M$ defining $M_j=(M, g_j)$ such that
\begin{equation}
g_0(v,v) \le g_j(v,v), \quad \forall v \in T_pM,
\end{equation}
\begin{equation}
\operatorname{Diam}_j(M_j) \le D_0,
\end{equation}
\begin{equation}\label{ptwise-to-bulk-1}
d_j(p,q)\to d_0(p,q) \textrm{ pointwise a.e. } p,q \in (M\times M, d_0\times d_0)
\end{equation}
and
\begin{equation}
\operatorname{Vol}_j(M_j) \rightarrow \operatorname{Vol}_0(M_0).
\end{equation}
Then
\begin{equation}
d_{\mathcal{F}}(M_j, M_0 ) \to 0.
\end{equation}
\end{thm}
Theorem \ref{ptwise-to-SWIF} is proven by ensuring that we can apply Theorem \ref{est-SWIF}. In particular, we need to show the existence of subsets in $M$ satisfying
\eqref{eq-distCond} and \eqref{eq-volCond}. We now give an outline of this. In Subsection \ref{subsect:Egoroff's Theorem} we use Egoroff's theorem to go from pointwise convergence almost everywhere to uniform convergence on a set of almost full measure $S_\varepsilon \subset M\times M$. This set is a subset of $M\times M$ rather than of $M$, and hence cannot be used directly to apply Theorem \ref{est-SWIF}. As a preliminary step, in Subsection \ref{subsect:Fubini} we
use the coarea formula to see that for almost every $p \in M$, the sets $S_{p,\varepsilon} = \{ q \, |\, (p, q) \in S_\varepsilon \} \subset M$ have almost full measure.
By Egoroff's theorem we only know that for all $q \in S_{p,\varepsilon}$ we have $d_j(p,q) \to d_0(p,q)$. Thus, in Subsection \ref{subsect:Good Set} we define the good sets $W_{\kappa \varepsilon}$, which are used to apply Theorem \ref{est-SWIF}, as the set of points $p$ such that $S_{p,\varepsilon}$ has almost full measure (quantified in terms of $\kappa$), and we show that these $W_{\kappa \varepsilon}$ also have almost full measure (quantified in terms of $\kappa$). That is, they satisfy \eqref{eq-volCond}. In Subsection \ref{subsect:Distance Bounds} we ensure that the good sets satisfy \eqref{eq-distCond}. Notice that given two points $p_1, p_2 \in W_{\kappa \varepsilon}$ we need to show that $d_j(p_1, p_2)$ is close to $d_0(p_1, p_2)$ in a specific quantified way. Since $p_2$ might not be contained in $S_{p_1,\varepsilon}$ we show the existence of a point $q \in S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon}$ so that we can use a triangle inequality argument to
obtain the uniform distance bound on pairs of points contained in the good set $W_{\kappa \varepsilon}$. In Subsection \ref{subsect:Proof of SWIF} we finish the proof of Theorem \ref{ptwise-to-SWIF} by applying Theorem \ref{est-SWIF} in combination with all previous subsections.
\subsection{Egoroff's Theorem}\label{subsect:Egoroff's Theorem}
We begin by reminding the reader of Egoroff's theorem which can be found in the book of Evans and Gariepy \cite{Evans-Gariepy}.
\begin{thm}[Egoroff's Theorem] \label{Egoroff's Theorem}
Let $f_n:X \rightarrow \mathbb{R}$ be a sequence of measurable functions on a measure space $(X, \mu)$. Assume there is a measurable set $A \subset X$, $\mu(A) < \infty$, so that $f_n$ converges pointwise $\mu-$almost everywhere to a measurable function $f$. Then for every $\varepsilon > 0$, there exists a measurable subset $B_{\varepsilon} \subset A$ so that
\begin{align}
\mu(B_{\varepsilon} ) < \varepsilon
\end{align}
and
\begin{align}
f_n \rightarrow f
\end{align}
uniformly on $A \setminus B_{\varepsilon} $.
\end{thm}
Now we apply Egoroff's theorem to obtain uniform convergence on a set of almost full measure.
\begin{prop}\label{Svare}
Under the hypotheses of Theorem \ref{ptwise-to-SWIF}, for every $\varepsilon >0$ there exists a
$dvol_{g_0}\times dvol_{g_0}$ measurable set, $S_\varepsilon\subset M\times M$, such that
\begin{equation}\label{unifSvare}
\sup\{|d_j(p,q)- d_0(p,q)|\,:\, (p,q)\in S_\varepsilon\}=\delta_{\varepsilon,j} \to 0,
\end{equation}
\begin{equation}\label{volSvare}
\operatorname{Vol}_{0\times 0} (S_\varepsilon)> (1-\varepsilon)\operatorname{Vol}_{0\times 0}(M\times M).
\end{equation}
and
\begin{equation} \label{Svaresym}
(p,q)\in S_\varepsilon \iff (q,p) \in S_\varepsilon.
\end{equation}
\end{prop}
\begin{proof}
By Egoroff's Theorem \ref{Egoroff's Theorem}, since $(M\times M, dvol_{g_0}\times dvol_{g_0})$ is a finite measure space and
\begin{equation}
d_j(p,q)\to d_0(p,q) \textrm{ pointwise $dvol_{g_0} \times dvol_{g_0}$ a.e. } (p,q) \in M\times M,
\end{equation}
then for all $\varepsilon>0$ there exists a $dvol_{g_0}\times dvol_{g_0}$ measurable set, $S_\varepsilon\subset M\times M$, such that
\begin{equation}
\sup\{|d_j(p,q)- d_0(p,q)|\,:\, (p,q)\in S_\varepsilon\}=\delta_{\varepsilon,j} \to 0
\end{equation}
and
\begin{equation}
\operatorname{Vol}_{0\times 0} (S_\varepsilon)> (1-\varepsilon)\operatorname{Vol}_{0\times 0}(M\times M).
\end{equation}
Note that since $d_j(p,q)=d_j(q,p)$ we can ensure, by possibly enlarging $S_\varepsilon$, that
\begin{equation}
(p,q)\in S_\varepsilon \iff (q,p) \in S_\varepsilon.
\end{equation}
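Concretely, one admissible enlargement is the symmetrization
\begin{equation*}
S_\varepsilon \cup \{(q,p)\,:\,(p,q)\in S_\varepsilon\},
\end{equation*}
which still satisfies (\ref{unifSvare}) because $d_j$ and $d_0$ are symmetric, and which can only increase the measure appearing in (\ref{volSvare}).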
\end{proof}
\subsection{Product Structures} \label{subsect:Fubini}
We now use the product Riemannian structure on $(M\times M, g_0 \times g_0)$ in order to relate $S_{\varepsilon} \subset M \times M$ to subsets of $M$ through the control on the volume of $S_{\varepsilon}$.
\begin{lem}\label{lem-Spvare}
Under the assumptions of Proposition \ref{Svare}, for almost every $p \in M$ the sets
\begin{equation}\label{Spvare}
S_{p,\varepsilon}=\{q\in M\,:\, (p,q)\in S_\varepsilon\},
\end{equation}
are $dvol_{g_0}$ measurable and satisfy
\begin{equation}\label{AverageAreaOfGoodSetInequality}
(1-\varepsilon) \operatorname{Vol}_0(M) < \int_{p\in M} \frac{\operatorname{Vol}_0(S_{p,\varepsilon})}{ \operatorname{Vol}_0(M)}\, dvol_{g_0}.
\end{equation}
\end{lem}
See Figure~\ref{fig-AS-star}.
\begin{figure}[h]
\center{\includegraphics[width=.8\textwidth]{AS-star.jpg}}
\caption{Here we see three copies of $(M,d_j)$ with volume close to $(M,d_0)=({\mathbb S}^2, d_{\mathbb{S}})$.
On the left we see a point $p_1$ at the base of a well whose $S_{p_1,\varepsilon}$ is most of the
manifold except the other wells. On the right we see a point $p_2$ far from a well whose $S_{p_2,\varepsilon}$ is most
of the manifold away from the wells. In the middle we see points $p_i$ on wells whose $S_{p_i,\varepsilon}$ is small.}
\label{fig-AS-star}
\end{figure}
\begin{proof}
Since $S_{\varepsilon}$ is $dvol_{g_0}\times dvol_{g_0}$ measurable, it follows from Fubini's theorem that for almost every $p\in M$ the set $S_{p,\varepsilon}$ is $dvol_{g_0}$ measurable.
Moreover, by (\ref{Svaresym}) we have
\begin{equation} \label{Spvaresym}
q\in S_{p,\varepsilon} \iff p \in S_{q,\varepsilon}
\end{equation}
Now by the product Riemannian structure on $(M \times M, g_0 \times g_0)$:
\begin{equation}
\operatorname{Vol}_{0\times 0} (S_\varepsilon)=\int_{p\in M} \operatorname{Vol}_0(S_{p,\varepsilon}) \,dvol_{g_0}.
\end{equation}
Thus, by (\ref{volSvare}) and $\operatorname{Vol}_{0\times0}(M\times M)=(\operatorname{Vol}_0(M))^2$, we get
\begin{equation}\label{AverageAreaOfGoodSetInequality1}
(1-\varepsilon) \operatorname{Vol}_0(M) < \int_{p\in M} \frac{\operatorname{Vol}_0(S_{p,\varepsilon})}{ \operatorname{Vol}_0(M)}\, dvol_{g_0}.
\end{equation}
\end{proof}
\subsection{Selecting our Good Set}\label{subsect:Good Set}
For $\varepsilon >0$ and $\kappa >1$ such that $\kappa \varepsilon < 1$ let
\begin{equation} \label{Wkappavare}
W_{\kappa\varepsilon}=\{p:\, \operatorname{Vol}_0(S_{p,\varepsilon}) > (1- \kappa\varepsilon) \operatorname{Vol}_0(M)\}.
\end{equation}
First we notice that $W_{\kappa \varepsilon}$ is measurable: defining $\Phi:M \rightarrow [0,\infty)$ by $\Phi(p) = \int_M \mathbbm{1}_{S_{\varepsilon}}(p,q)\, dvol_{g_0}(q)$, the function $\Phi$ is measurable and $W_{\kappa \varepsilon}=\Phi^{-1}\left(\left((1-\kappa\varepsilon)\operatorname{Vol}_0(M),\infty\right)\right)$ is the preimage of an open set under a measurable function.
In Figure~\ref{fig-AS-star} we can intuitively see that $W_{\kappa\varepsilon}$
consists of points like $p_1$ and $p_2$ that do not lie inside the wells. In the following lemmas we show that $W_{\kappa \varepsilon}$ has the correct volume to be used as the good set in Theorem \ref{est-SWIF}.
\begin{lem}\label{vol-W} For $W_{\kappa\varepsilon}$ defined as in \eqref{Wkappavare} we find
\begin{equation}
\operatorname{Vol}_0(W_{\kappa\varepsilon}) > \frac{\kappa-1}{\kappa} \operatorname{Vol}_0(M).
\end{equation}
\end{lem}
\begin{proof}
Starting with \eqref{AverageAreaOfGoodSetInequality} calculate
\begin{eqnarray*}
(1-\varepsilon) \operatorname{Vol}_0(M) &<& \int_{p\in W_{\kappa\varepsilon}} \frac{\operatorname{Vol}_0(S_{p,\varepsilon})}{ \operatorname{Vol}_0(M)}\, dvol_{g_0}
+ \int_{p\in M\setminus W_{\kappa\varepsilon}} \frac{\operatorname{Vol}_0(S_{p,\varepsilon})}{\operatorname{Vol}_0(M)}\, dvol_{g_0}\\
&\le &\int_{p\in W_{\kappa\varepsilon}} 1 \, dvol_{g_0}
+ \int_{p\in M\setminus W_{\kappa\varepsilon}} (1- \kappa\varepsilon)\, dvol_{g_0}\\
&=& \operatorname{Vol}_0(W_{\kappa\varepsilon}) +
(1- \kappa\varepsilon) \operatorname{Vol}_0(M\setminus W_{\kappa\varepsilon})\\
&=& \operatorname{Vol}_0(W_{\kappa\varepsilon}) +
(1- \kappa\varepsilon) \operatorname{Vol}_0(M) -(1- \kappa\varepsilon)\operatorname{Vol}_0(W_{\kappa\varepsilon})\\
&=&\kappa\varepsilon \, \operatorname{Vol}_0(W_{\kappa\varepsilon})+ (1- \kappa\varepsilon) \operatorname{Vol}_0(M).
\end{eqnarray*}
Hence,
\begin{equation}
(\kappa\varepsilon-\varepsilon) \operatorname{Vol}_0(M) < \kappa\varepsilon \, \operatorname{Vol}_0(W_{\kappa\varepsilon}).
\end{equation}
This concludes the proof.
\end{proof}
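For instance, one admissible choice (used here only for illustration, not in the proofs below) is $\kappa=\varepsilon^{-1/2}$, so that $\kappa\varepsilon=\sqrt{\varepsilon}<1$ for small $\varepsilon$ and the lemma gives
\begin{equation}
\operatorname{Vol}_0(W_{\kappa\varepsilon}) > \left(1-\sqrt{\varepsilon}\right) \operatorname{Vol}_0(M).
\end{equation}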
\begin{lem}\label{vol_j-W}
For $W_{\kappa\varepsilon}$ defined as in \eqref{Wkappavare} we get
\begin{equation}
\operatorname{Vol}_j(M \setminus W_{\kappa\varepsilon}) \le \frac{1}{\kappa}\operatorname{Vol}_0(M)+ |\operatorname{Vol}_j(M)-\operatorname{Vol}_0(M)|.
\end{equation}
\end{lem}
\begin{proof}
From $g_0 \le g_j$ we know that $d_0 \leq d_j$ and $\operatorname{Vol}_0 \leq \operatorname{Vol}_j$. In particular, since $\operatorname{Vol}_0(W_{\kappa \varepsilon}) \leq \operatorname{Vol}_j(W_{\kappa \varepsilon})$, the following holds,
\begin{equation*}
\operatorname{Vol}_0(W_{\kappa \varepsilon}) + \operatorname{Vol}_j(M) \leq \operatorname{Vol}_j(W_{\kappa \varepsilon}) + \operatorname{Vol}_0(M) + \left(\operatorname{Vol}_j(M) - \operatorname{Vol}_0(M) \right).
\end{equation*}
Rearranging terms,
\begin{equation*}
- \operatorname{Vol}_j(W_{\kappa \varepsilon}) + \operatorname{Vol}_j(M) \leq - \operatorname{Vol}_0(W_{\kappa \varepsilon}) + \operatorname{Vol}_0(M) + \left(\operatorname{Vol}_j(M) - \operatorname{Vol}_0(M) \right).
\end{equation*}
Then by Lemma \ref{vol-W},
$$
\operatorname{Vol}_j(M \setminus W_{\kappa \varepsilon}) < - \frac{\kappa-1}{\kappa} \operatorname{Vol}_0(M) + \operatorname{Vol}_0(M) + \left(\operatorname{Vol}_j(M) - \operatorname{Vol}_0(M) \right) = \frac{1}{\kappa}\operatorname{Vol}_0(M) + \left(\operatorname{Vol}_j(M) - \operatorname{Vol}_0(M) \right),
$$
and the claim follows since $\operatorname{Vol}_j(M) - \operatorname{Vol}_0(M) \le |\operatorname{Vol}_j(M)-\operatorname{Vol}_0(M)|$.
\end{proof}
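In particular, under the volume convergence hypothesis of Theorem \ref{ptwise-to-SWIF} this yields
\begin{equation}
\limsup_{j\to\infty}\operatorname{Vol}_j(M \setminus W_{\kappa\varepsilon}) \le \frac{1}{\kappa}\operatorname{Vol}_0(M),
\end{equation}
so the volume outside the good set can be made arbitrarily small by taking $\kappa$ large.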
\subsection{Uniform Distance Bounds}\label{subsect:Distance Bounds}
The aim of this subsection is to prove Lemma \ref{unif-on-W} where we find a
uniform distance bound on pairs of points contained in the good sets $W_{\kappa \varepsilon}$.
More explicitly, given two points $p_1, p_2 \in W_{\kappa \varepsilon}$ we need to show that $d_j(p_1, p_2)$ is close to $d_0(p_1, p_2)$ in a specific quantified way, see \eqref{eq-lemUnifW}.
Since in principle $p_2$ might not be contained in $S_{p_1,\varepsilon}$, we show that $S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon} \neq \emptyset$ and then use a triangle inequality argument to get Lemma \ref{unif-on-W}. In Figure~\ref{fig-AS-star} note that $S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon}$ would consist of everything not in the wells; it thus has large volume, and for points there the $d_j$ and $d_0$ distances are almost the same.
\begin{lem} \label{Sp12int}
Consider $W_{\kappa\varepsilon}$ defined as in \eqref{Wkappavare} and $S_{p,\varepsilon}$ defined in \eqref{Spvare}.
Let $p_1, p_2$ be two points in $W_{\kappa\varepsilon}$. Then
$S_{p_1,\varepsilon}$ and $S_{p_2,\varepsilon}$ cannot be disjoint for $\kappa \varepsilon < 1/2$.
In fact,
\begin{equation}
\operatorname{Vol}_0(S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon}) > (1-2\kappa\varepsilon) \operatorname{Vol}_0(M).
\end{equation}
\end{lem}
\begin{proof}
If $S_{p_1,\varepsilon}$ and $S_{p_2,\varepsilon}$ were disjoint then
\begin{equation}
\operatorname{Vol}_0(S_{p_1,\varepsilon}) + \operatorname{Vol}_0(S_{p_2,\varepsilon} )\le \operatorname{Vol}_0(M).
\end{equation}
Then by (\ref{Wkappavare}) we get
\begin{equation}
(1- \kappa\varepsilon) \operatorname{Vol}_0(M)+(1- \kappa\varepsilon) \operatorname{Vol}_0(M)< \operatorname{Vol}_0(M)
\end{equation}
which implies $1-\kappa \varepsilon < 1/2$, i.e.\ $\kappa\varepsilon > 1/2$, contradicting the assumption $\kappa\varepsilon<1/2$. This proves the first claim.
In fact, taking $K_i=M\setminus S_{p_i,\varepsilon}$ we have
\begin{eqnarray}
\operatorname{Vol}_0(S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon}) &\ge& \operatorname{Vol}_0(M)-\operatorname{Vol}_0(K_1\cup K_2)\\
&\ge & \operatorname{Vol}_0(M)-(\operatorname{Vol}_0(K_1) +\operatorname{Vol}_0(K_2))\\
&> & \operatorname{Vol}_0(M)(1 -\kappa \varepsilon-\kappa\varepsilon).
\end{eqnarray}
\end{proof}
\begin{lem} \label{ball-in-0}
Let $M_0$ be a compact Riemannian manifold. For any $\lambda' \in (0, \operatorname{Diam}(M_0))$, $\kappa >1$ there exists $\varepsilon > 0$ small enough so that $\kappa\varepsilon \in (0,1/2)$ and
\begin{equation}
\min_{x\in M} \operatorname{Vol}_0(B(x,\lambda')) \geq 2\kappa\varepsilon \operatorname{Vol}_0(M)
\end{equation}
and thus under the hypotheses of Lemma \ref{Sp12int},
\begin{equation}
B(x,\lambda') \cap S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon} \neq \emptyset \quad \forall x \in M, \,\, p_1,p_2\in W_{\kappa\varepsilon}.
\end{equation}
\end{lem}
Note that the value of $\kappa\varepsilon$ obtained this way is an increasing function of $\lambda'$; in particular, shrinking $\lambda'$ forces $\varepsilon$ to shrink as well.
In Figure~\ref{fig-AS-star} note that $S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon}$
would consist of everything not in the wells, and any point $x$ in $M$
cannot be far from this set when measured using $d_0$. Note that an $x$ lying on the
tip of a well might nevertheless be far from it when measured using $d_j$.
\begin{proof}
Observe that there is some $K$, possibly negative, such that the Ricci curvature of
$(M, g_0)$ satisfies $Ric(g_0) \ge (m-1)K$, where $m$ is the dimension of $M$.
By the Volume Comparison Theorem
we know that for $r_1 \leq r_2$
\begin{align}
\frac{\operatorname{Vol}_0(B(x,r_1))}{\operatorname{Vol}_K(B(x_K,r_1))} \ge \frac{\operatorname{Vol}_0(B(x,r_2))}{\operatorname{Vol}_K(B(x_K,r_2))},
\end{align}
where $B(x_K,r_1)$ is a ball in the $m$ dimensional space form of constant sectional curvature $K \in \mathbb{R}$, and $\operatorname{Vol}_K$ is the volume as measured in this space form. Now by choosing $r_2 = \operatorname{Diam}(M_0)$, we find
\begin{align}
\operatorname{Vol}_0(B(x,r_1)) \ge \frac{\operatorname{Vol}_K(B(x_K,r_1))}{\operatorname{Vol}_K(B(x_K,\operatorname{Diam}(M_0)))} \operatorname{Vol}_0(M) \quad \forall x \in M.
\end{align}
Hence, choosing $r_1 = \lambda'$, let $\varepsilon > 0$ be chosen so that
\begin{align}
\frac{\operatorname{Vol}_K(B(x_K,\lambda'))}{\operatorname{Vol}_K(B(x_K,\operatorname{Diam}(M_0)))} = 2\kappa \varepsilon.
\end{align}
This gives the first claim. For the second claim, by Lemma \ref{Sp12int} we have $\operatorname{Vol}_0(S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon}) > (1-2\kappa\varepsilon) \operatorname{Vol}_0(M)$, while $\operatorname{Vol}_0(B(x,\lambda')) \geq 2\kappa\varepsilon \operatorname{Vol}_0(M)$; since the total measure of the two sets exceeds $\operatorname{Vol}_0(M)$, they must intersect.
\end{proof}
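For orientation (this special case is not needed in the sequel), in the model case $K=0$ the space-form volumes are Euclidean ball volumes, so the choice of $\varepsilon$ above simply reads
\begin{equation}
2\kappa\varepsilon=\left(\frac{\lambda'}{\operatorname{Diam}(M_0)}\right)^{m}.
\end{equation}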
\begin{lem}\label{unif-on-W}
Let $M_j,M_0$ be Riemannian manifolds which satisfy the hypotheses of Theorem \ref{ptwise-to-SWIF}. For any $\lambda'>0$ and $\kappa>1$, let $\varepsilon >0$ be given as
in Lemma~\ref{ball-in-0}. Then,
\begin{equation}\label{eq-lemUnifW}
|d_j(p_1,p_2)-d_0(p_1,p_2)| < 2 \lambda' + 2\delta_{\varepsilon,j}
\end{equation}
for all $p_1, p_2 \in W_{\kappa\varepsilon}$.
\end{lem}
Note that for $(p_1,p_2) \in S_\varepsilon$, (\ref{unifSvare}) already gives a better distance bound; here, however, we need a distance bound for all pairs in
$W_{\kappa\varepsilon} \times W_{\kappa\varepsilon}$, which is not necessarily contained in $S_\varepsilon$. This happens, in particular, when $p_2 \notin S_{p_1,\varepsilon}$.
\begin{proof}
Let $p_1, p_2 \in W_{\kappa\varepsilon}$, and let $x$ be their $d_0$-midpoint:
\begin{equation}\label{midpt}
d_0(p_1,x)+d_0(p_2,x)= d_0(p_1,p_2).
\end{equation}
By Lemma~\ref{ball-in-0}, there exists
\begin{equation}
q\in B_0(x,\lambda') \cap S_{p_1,\varepsilon} \cap S_{p_2,\varepsilon}.
\end{equation}
So
\begin{equation}
(p_i,q) \in S_\varepsilon \textrm{ and } d_0(x,q)<\lambda'.
\end{equation}
By (\ref{unifSvare})
\begin{equation}
d_0(p_i,q)\le d_j(p_i,q) < d_0(p_i,q)+\delta_{\varepsilon,j}.
\end{equation}
Combining this with the triangle inequality, we have
\begin{eqnarray}
\qquad d_0(p_1,p_2)&\le& d_j(p_1,p_2)\\
&\le& d_j(p_1,q)+d_j(q,p_2)\\
&\le & d_0(p_1,q)+d_0(q,p_2) + 2 \delta_{\varepsilon,j}\\
&\le & (d_0(p_1,x) + d_0(x,q))+ (d_0(q,x)+d_0(x, p_2)) + 2 \delta_{\varepsilon,j}\\
& < & (d_0(p_1,x) + \lambda') + (\lambda'+d_0(x,p_2)) + 2 \delta_{\varepsilon,j}\\
&=& d_0(p_1,p_2)+ 2 \lambda' + 2\delta_{\varepsilon,j}
\end{eqnarray}
where the last line follows from (\ref{midpt}).
\end{proof}
\subsection{Proof of Theorem \ref{ptwise-to-SWIF}}\label{subsect:Proof of SWIF}
\begin{proof}[Proof of Theorem \ref{ptwise-to-SWIF}]
For any $\kappa >1$ and $\lambda' \in (0, \operatorname{Diam}(M_0))$, let $\varepsilon>0$ be given as in
Lemma \ref{ball-in-0}. That is, choose $\varepsilon > 0$ such that
\begin{align}
\frac{\operatorname{Vol}_K(B(x_K,\lambda'))}{\operatorname{Vol}_K(B(x_K,\operatorname{Diam}(M_0)))} = 2\kappa \varepsilon.
\end{align}
Thus, with this $\varepsilon >0$ we obtain by applying the results in sections \ref{subsect:Fubini} - \ref{subsect:Good Set}
a set $S_\varepsilon$ and a set $W_{\kappa \varepsilon}$, see (\ref{Spvare}) and (\ref{Wkappavare}), such that by Lemma \ref{vol_j-W},
\begin{equation}
\operatorname{Vol}_j(M \setminus W_{\kappa \varepsilon}) \le \frac{1}{\kappa}\operatorname{Vol}_0(M)+|\operatorname{Vol}_j(M)-\operatorname{Vol}_0(M)|.
\end{equation}
Moreover, by Lemma \ref{unif-on-W} we find that
\begin{equation}
|d_j(p_1,p_2)-d_0(p_1,p_2)| < 2 \lambda' + 2\delta_{\varepsilon,j}
\end{equation}
for all $p_1, p_2 \in W_{\kappa\varepsilon}$.
Thus, we can apply Theorem \ref{est-SWIF} to get
\begin{align}
d_{\mathcal{F}}(M_0,M_j) \le \left( \tfrac{2}{\kappa}\operatorname{Vol}_0(M)+ 2|\operatorname{Vol}_j(M)-\operatorname{Vol}_0(M)| + h_j V \right)
\end{align}
where $h_j = \sqrt{ 2(\lambda'+\delta_{\varepsilon,j})D + (\lambda'+\delta_{\varepsilon,j})^2}$
and $V >0$ is an upper volume bound which exists since $\operatorname{Vol}_j(M) \to \operatorname{Vol}_0(M)$ by hypothesis.
Hence we find
\begin{align}
\limsup_{j\rightarrow \infty} d_{\mathcal{F}}(M_0,M_j) \le \left( \tfrac{2}{\kappa} \operatorname{Vol}_0(M)+ \sqrt {2 \lambda'D + \lambda'^2}\, V \right),
\end{align}
and since this is true for any $\kappa, \lambda'$ we find that
\begin{align}
\limsup_{j\rightarrow \infty} d_{\mathcal{F}}(M_0,M_j) =0.
\end{align}
\end{proof}
\section{Proving our Main Results}\label{sect:ProofMainThm}
In this section we will combine our results to prove Theorem~\ref{vol-thm} which was stated in the introduction.
As a corollary to Theorem \ref{vol-thm} we notice that we are allowed to loosen the metric inequality from below in \eqref{g_j-below-vol-thm} and still come away with the same conclusion. This is useful in applications such as the geometric stability of the scalar torus rigidity theorem explored by Allen, Hernandez-Vazquez, Parise, Payne, and Wang \cite{AHMPPW1} in the case of warped products and Cabrera Pacheco, Ketterer, and Perales \cite{CPKP19} in the case of graphs. The following corollary has been applied by Allen to prove geometric stability of the scalar torus rigidity theorem in the conformal case \cite{Allen-Conformal-Torus}.
\begin{cor}\label{cor-vol-thm}
Suppose we have a fixed compact oriented Riemannian manifold, $M_0=(M,g_0)$,
without boundary and
a sequence of metric tensors $g_j$ on $M$ defining $M_j=(M, g_j)$ with
\begin{equation} \label{g_j-below-vol-thm}
\left(1 - \tfrac{1}{2j} \right)g_0(v,v) \le g_j(v,v) \qquad \forall v\in T_pM
\end{equation}
and a uniform upper bound on diameter
\begin{equation}
\operatorname{Diam}_j(M_j) \le D_0
\end{equation}
and volume convergence
\begin{equation}
\operatorname{Vol}_j(M_j) \to \operatorname{Vol}_0(M_0)
\end{equation}
then
\begin{equation}
M_j \stackrel {\mathcal{VF}}{\longrightarrow} M_0.
\end{equation}
\end{cor}
\subsection{Proof of Theorem~\ref{vol-thm}}
\begin{proof}
By Theorem \ref{PointwiseConvergenceAE} there exists a subsequence such that
\begin{equation}
\lim_{j\to \infty} d_{j}(p,q) = d_0(p,q) \textrm{ pointwise a.e. } (p,q) \in M\times M.
\end{equation}
Hence, combining the hypotheses of Theorem \ref{vol-thm} with Theorem \ref{ptwise-to-SWIF}, along this subsequence we have
\begin{equation}
d_{\mathcal{F}}(M_j, M_0 ) \to 0.
\end{equation}
Now suppose the original sequence did not converge to $M_0$. Then there would be an $\varepsilon>0$ and a subsequence so that
\begin{align}
d_{\mathcal{F}}(M_{j_i},M_0 )> \varepsilon,
\end{align}
But this subsequence still satisfies the hypotheses of Theorem \ref{vol-thm}, so by the argument above it would contain a further subsequence converging to $M_0$, which is a contradiction. Hence the original sequence must converge to $M_0$.
\end{proof}
\subsection{Proof of Corollary~\ref{cor-vol-thm}}
\begin{proof}
Consider $\tilde{g}_j = \frac{1}{1-\frac{1}{2j}} g_j$ and $\tilde{M}_j=(M,\tilde{g}_j)$.
Then $ \tilde{g}_j \ge g_0$ and
\begin{align}
\operatorname{Vol}(\tilde{M}_j) &= \left(1-\tfrac{1}{2j} \right)^{-\frac{m}{2}} \operatorname{Vol}_j(M_j) \rightarrow \operatorname{Vol}_0(M_0)
\\ \operatorname{Diam}(\tilde{M}_j) &= \left(1-\tfrac{1}{2j} \right)^{-\frac{1}{2}} \operatorname{Diam}_j(M_j) \le \sqrt{2}\, D_0.
\end{align}
Hence $\tilde{M}_j$ satisfies the hypotheses of Theorem \ref{vol-thm} (with $D_0$ replaced by $\sqrt{2}\,D_0$), which implies
\begin{align}
\tilde{M}_j \stackrel {\mathcal{VF}}{\longrightarrow} M_0.
\end{align}
On the other hand, by construction we have $\|g_j-\tilde{g}_j\|_{C^0_{g_0}(M)} \rightarrow 0$ which implies
\begin{align}
\sup_{p,q\in M}|d_j(p,q)-\tilde{d}_j(p,q)|\rightarrow 0,
\end{align}
and since $\tilde{g}_j \ge g_j$ we can apply Theorem \ref{est-SWIF} with $W_j=M$ to find
\begin{align}
d_{\mathcal{F}}(\tilde{M}_j, M_j) \rightarrow 0.
\end{align}
Hence by the triangle inequality for the intrinsic flat distance we find
\begin{align}
M_j \stackrel {\mathcal{VF}}{\longrightarrow} M_0.
\end{align}
\end{proof}
\subsection{Statement and Proof of Corollary~\ref{cor-Lp-thm}}
We now state a corollary which uses the observation that $L^{\frac{m}{2}}$ convergence of the metric tensors $g_j$ to $g_0$ implies volume convergence. From this perspective, Theorem \ref{vol-thm}
says that $L^p$ convergence of Riemannian manifolds with $p\ge \frac{m}{2}$ can be bootstrapped up to the stronger notion of volume preserving intrinsic flat convergence, provided we also have a diameter bound and a $C^0$ bound from below.
\begin{cor} \label{cor-Lp-thm}
Suppose we have a fixed compact oriented Riemannian manifold, $M_0=(M,g_0)$,
without boundary and
a sequence of metric tensors $g_j$ on $M$ defining $M_j=(M, g_j)$ with
\begin{equation} \label{g_j-below-vol-thm}
\left(1 - \tfrac{1}{2j} \right)g_0(v,v) \le g_j(v,v) \qquad \forall v\in T_pM
\end{equation}
and a uniform upper bound on diameter
\begin{equation}
\operatorname{Diam}_j(M_j) \le D_0
\end{equation}
and $L^p$ convergence with $p\ge \frac{m}{2}$
\begin{equation}
\left (\int_M|g_j-g_0|_{g_0}^p dV_{g_0} \right)^{\frac{1}{p}} \rightarrow 0
\end{equation}
then
\begin{equation}
M_j \stackrel {\mathcal{VF}}{\longrightarrow} M_0.
\end{equation}
\end{cor}
\begin{proof}
By Lemma 2.5 and Lemma 2.7 of \cite{Allen-Sormani-2} we know that $L^p$ convergence with $p \ge \frac{m}{2}$ implies
\begin{equation}
\int_M |g_j|_{g_0}^{\frac{m}{2}} dV_{g_0} \rightarrow \int_M |g_0|_{g_0}^{\frac{m}{2}} dV_{g_0} .
\end{equation}
Then by the hypothesis that $g_j \ge \left(1 - \tfrac{1}{2j} \right) g_0$ and Lemma 4.3 of \cite{Allen-Sormani-2} we find volume convergence which allows us to use Corollary \ref{cor-vol-thm} to finish the proof.
\end{proof}
\begin{rmrk}\label{rmrk-diffeo}
It should also be noted that if one has a sequence of Riemannian manifolds which are diffeomorphic and
one can find a sequence of diffeomorphisms such that the pull back metrics satisfy the hypotheses of the
corollary stated above, then one obtains $M_j \stackrel {\mathcal{VF}}{\longrightarrow} M_0$ as well.
\end{rmrk}
\section{Open Problems}
Here we discuss possible extensions and applications of Theorem~\ref{vol-thm}. We encourage anyone interested in working in this area to contact the third author to join teams and attend workshops on these questions.
\subsection{What if $M_j$ are not diffeomorphic?}
It would be of interest to prove a version of Theorem~\ref{vol-thm} which does not require $M_j$
to be diffeomorphic to $M_0$. Indeed some of the steps of the proof do not require the diffeomorphism.
We do very much require that both $M_j$ and $M_0$ be Riemannian manifolds rather than more
singular limit spaces. However it should be possible with some effort to write a statement which allows
for different topologies imitating some ideas from the work of Lakzian and the third author (particularly the proof of
Theorem 4.6 in \cite{Lakzian-Sormani-1}). Anyone interested in this project should contact the third author
to avoid a conflict with other young mathematicians.
\subsection{What if $M_j$ have boundary?}
The authors had investigated versions of Theorem~\ref{vol-thm} for manifolds with boundary \cite{Allen-Perales-1}.
This was applied to recover a partial stability result for the positive mass theorem originally proven by Huang, Lee, and the third named author \cite{HLS}.
It would be of interest to extend their work in order to prove some of the conjectures mentioned in the introduction.
\subsection{Can one prove scalar stability theorems?}
There are many conjectures concerning the stability of various scalar curvature rigidity theorems in \cite{Sormani-scalar}. Theorem~\ref{vol-thm} should apply directly to prove some which involve compact manifolds without
boundary. Anyone interested in applying this theorem towards one of these conjectures in some special case or other is asked to contact the third author to ensure that there are no conflicts.
The third author can also help form teams of young mathematicians to work on these problems together. Students and postdocs wishing to work on these projects may wish to watch the Fields Institute lectures by the third author on this topic. As Gromov is very interested in stability theorems for scalar curvature \cite{Gromov-Dirac}, there is an upcoming volume of SIGMA in honor of Gromov that has an open call for papers that would welcome papers proving special cases of these conjectures.
\bibliographystyle{alpha}
| {
"timestamp": "2021-01-06T02:24:45",
"yymm": "2003",
"arxiv_id": "2003.01172",
"language": "en",
"url": "https://arxiv.org/abs/2003.01172",
"abstract": "Given a pair of metric tensors $g_1 \\ge g_0$ on a Riemannian manifold, $M$, it is well known that $\\operatorname{Vol}_1(M) \\ge \\operatorname{Vol}_0(M)$. Furthermore one has rigidity: the volumes are equal if and only if the metric tensors are the same $g_1=g_0$. Here we prove that if $g_j \\ge g_0$ and $\\operatorname{Vol}_1(M)\\to \\operatorname{Vol}_0(M)$ then $(M,g_j)$ converge to $(M,g_0)$ in the volume preserving intrinsic flat sense. Well known examples demonstrate that one need not obtain smooth, $C^0$, Lipschitz, or even Gromov-Hausdorff convergence in this setting. Our theorem may also be applied as a tool towards proving other open conjectures concerning the geometric stability of a variety of rigidity theorems in Riemannian geometry. To complete our proof, we provide a novel way of estimating the intrinsic flat distance between Riemannian manifolds which is interesting in its own right.",
"subjects": "Metric Geometry (math.MG); Differential Geometry (math.DG)",
"title": "Volume Above Distance Below",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9833429634078179,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7097211150711464
} |
https://arxiv.org/abs/1811.03527 | A Local Limit Theorem for Cliques in G(n,p) | We prove a local limit theorem for the number of $r$-cliques in $G(n,p)$ for $p\in(0,1)$ and $r\ge 3$ fixed constants. Our bounds hold in both the $\ell^\infty$ and $\ell^1$ metric. The main work of the paper is an estimate for the characteristic function of this random variable. This is accomplished by introducing a new technique for bounding the characteristic function of constant degree polynomials in independent Bernoulli random variables, combined with a decoupling argument. | \section{Introduction}
In 1960 Erd\H{o}s and R\'enyi introduced the study of $G(n,p)$, the random graph on $n$ vertices where each edge is included independently at random with probability $p$. In \cite{ErdosRenyi} they showed, among other results, that the number of cliques of size $r$ in $G(n,p)$ is concentrated about its mean using Chebyshev's inequality. Since then the Erd\H{o}s-R\'enyi random graph has become an object of much study, and many nice results have been obtained concerning the following natural question:
\begin{question}
Let $H$ be some fixed graph. What is the distribution of the number of copies of $H$ as a random variable?
\end{question}
In this paper, we will consider this question for the regime where $H$ is the $r$-clique, $K_r$, and $p\in (0,1)$ is a fixed constant.
Let ${f_{r}}$ denote the random variable counting the number of $r$-cliques in $G(n,p)$ and set $\mu=\E[f_r]$ and $\sigma^2=Var(f_r)$.
In the 1980's there were several papers studying which subgraph counts obeyed a central limit theorem (see \cite{Kar83, Kar84, NW88, Ruc88}, for example). By that time central limit theorems stating that ${f_{r}}$ converged in distribution to the Gaussian were known. That is, for any real numbers $a<b$,
\begin{equation}\label{clteq}
\Pr\left[f_r\in[\mu+a\sigma, \mu+b\sigma]\right]=\frac{1}{\sqrt{2\pi}}\int_{a}^b e^{-t^2/2}dt+o(1)
\end{equation}
Note that the central limit theorem in equation \ref{clteq}
bounds the probability that ${f_{r}}$ lies in an interval of length $O(\sigma)$. In this paper we will show that the distribution of ${f_{r}}$ is \textit{pointwise} close to a discrete Gaussian. Our main result is the following local limit theorem:
\begin{thm} \label{Sup Main}
Fix any $0<\tau<\min(1/12,1/2r)$.
For any $m\in \mathbb{N}$ we have that
$$\Pr[f_r=m]=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{\left(m-\mu\right)^2}{2\sigma^2}}+O\left(\frac{1}{\sigma n^{\frac12-\tau}}\right)$$
\end{thm}
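Note that the estimate above is only informative when the Gaussian main term dominates the error term, i.e.\ when
$$\frac{(m-\mu)^2}{2\sigma^2}\le \left(\tfrac12-\tau\right)\log n,$$
so roughly for $|m-\mu|\lesssim \sigma\sqrt{(1-2\tau)\log n}$; for $m$ further from the mean both terms are $O\!\left(\tfrac{1}{\sigma n^{1/2-\tau}}\right)$ and the theorem simply bounds $\Pr[f_r=m]$ by this quantity.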
Because of the quantitative error bound, we are also able to extend this to the following $\ell^1$, or statistical distance bound between
${f_{r}}$ and the discrete Gaussian.
\begin{thm}\label{L1 Main}
$$\sum_{m\in \mathbb{N}}\left|\Pr({f_{r}}=m)-\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\left(m-\mu\right)^2}{2\sigma^2}\right)\right|=O(n^{-\frac12+\tau})$$
\end{thm}
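Heuristically, the $\ell^1$ bound follows from the $\ell^\infty$ bound by summation: the mass of ${f_{r}}$ and of the discrete Gaussian is essentially carried by the $O(\sigma\log^2 n)$ integers $m$ with $|m-\mu|=O(\sigma\log^2 n)$, and summing the pointwise error of Theorem \ref{Sup Main} over these values gives
$$O(\sigma\log^2 n)\cdot O\!\left(\frac{1}{\sigma n^{\frac12-\tau}}\right)=O\!\left(\frac{\log^2 n}{n^{\frac12-\tau}}\right),$$
which is $O(n^{-\frac12+\tau'})$ for any $\tau'>\tau$; the precise argument appears in Section \ref{mainsection}.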
\subsection{Related Results}
Our methods depend on examining a particular orthogonal basis of the space of functions on $G(n,p)$. For many other applications of such orthogonal decompositions to counting problems on $G(n,p)$, see \cite{MR1219708}.
If $p$ were allowed to become arbitrarily small as $n$ grows, then ${f_{r}}$ may be shown in some cases to resemble a Poisson random variable. For example, if the edge probability $p\sim c/n$ for some constant $c$, then Erd\H{o}s and R\'enyi \cite{ErdosRenyi} showed that the number of triangles in $G(n,p)$ converges to a Poisson distribution. This result was a local limit theorem, as it estimated the pointwise probabilities $\Pr[f_3=k]$ for $k$ constant. Further, R\"{o}llin and Ross \cite{Ross} showed a local limit theorem when $p\sim cn^{\alpha}$ for $\alpha\in [-1,-\frac12]$. In this regime they showed that the triangle counting distribution converges to a translated Poisson distribution (which is in turn close to a discrete Gaussian) in both the $\ell^\infty$ and $\ell^1$ metrics.
In 2014, Gilmer and Kopparty \cite{JustinTriangles} proved a local limit theorem for triangle counts in $G(n,p)$ in the regime where $p$ is a fixed constant. Their main theorem was the following pointwise bound:
$$\Pr[f_3=m]=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\left(m-\mu\right)^2}{2\sigma_n^2}\right)\pm o(n^{-2})$$
The proof in \cite{JustinTriangles} proceeded by using the characteristic function. The main step there was to show that $|\varphi(t)-\varphi_{f_3}(t)|$ is small for $t\in [-\pi \sigma_n,~\pi\sigma_n]$, where $\varphi$ represents the characteristic function of the standard normal distribution, and $\varphi_{f_3}$ represents the characteristic function of the triangle counting function $f_3$.
In our earlier work \cite{triangles} we extended this result by improving the error bound, and obtained a bound on the statistical distance between $f_3$ and the discrete Gaussian as well.
\subsection{High Level Overview of Techniques}
The central technique in this paper is to examine the characteristic function $\varphi_{\k}(t)$, where $\k=(f_r-\mu)/\sigma$ is the
mean 0 variance 1 normalization of $f_r$. The main calculation is showing that
$$\int_{t=-\pi\sigma}^{\pi\sigma} \left|\varphi_\k(t)-\varphi(t)\right|dt=O(n^{-\frac12+\tau})$$
where $\varphi(t):=e^{-t^2/2}$ is the characteristic function of the standard unit normal random variable.
The local limit theorem then follows from Fourier inversion for lattices.
However, bounding the characteristic function of sums of dependent random variables is a tricky problem and several new ideas were needed.
\subsubsection{Estimating $\varphi_\k(t)$ for small $t$}
First, building on the method in our earlier work \cite{triangles}, we rewrite our random variable $f_r$ as a polynomial, not in the natural 0,1 indicator random variables $x_e$, but instead in the orthogonal $p$-biased Fourier basis $\chi_e$. This
slight change of basis immediately simplifies the proof of the central limit theorem and lays bare the intuition that the number of $r$-cliques in $G(n,p)$
is almost completely driven by the \textit{number of edges} present in the graph. In fact, once we switch from $x_e$ to $\chi_e$ and normalize to unit variance, the degree
${r\choose 2}$ polynomial $\k=({f_{r}}-\mu)/\sigma$ becomes $1-o(1)$ close to a degree 1 polynomial. This turns out to be sufficient to prove that $
\varphi_\k(t)$ is close to a Gaussian for $t$ small. Because $|e^{itx}-e^{itx'}|\le |x-x'|$ for any $x,x'\in \mathbb{R}$ we can simply estimate $\varphi_\k(t)$ by noting that
\begin{equation}\label{introsmallteq}
| \E[e^{it\k}]-\E[e^{it\k^{=1}}]|\le \E[|t\k^{>1}|]
\end{equation}
Because $\k^{=1}$ is a sum of i.i.d.\ Bernoulli random variables, the fact that its characteristic function is close to Gaussian is the well
known Berry-Esseen bound. Meanwhile, we will show that as noted above, $\k$ is concentrated on degree 1 terms and so $\k^{>1}$ is small.
\subsubsection{Bounding $\varphi_\k(t)$ for slightly larger $t$}
The bound in equation \ref{introsmallteq} is useful, but crude, and it degrades in usefulness rapidly as $t$ grows. Let $X=\k^{=1}=\sum_{e} \hat\k(e)\chi_e$ and $Y=\k^{>1}$ (recall that we will expect $Y$ to be small). Then $\k=X+Y$ and we obtain a better approach by using Taylor's Theorem to rewrite the above estimate as
\begin{align*}
| \E[e^{it\k}]-\E[e^{itX}]|=|\E[e^{itX}(e^{itY}-1)]|=\E\left[e^{itX}\sum_{j=1}^\ell \frac{(itY)^j}{j!}\right]+O\left(\E\left|e^{itX}(tY)^{\ell+1}\right|\right)
\end{align*}
Assuming $tY$ is typically small and taking $\ell$ to be some large but fixed constant, we will be able to show that $e^{itX}$ and $Y^j$ are nearly uncorrelated.
To this end we prove a result which, with some omitted terminology, says:
\begin{thm}\label{mainchf}
Let $Z=X+Y$, where $X=\sum_{i=1}^n X_i$ is an i.i.d.\ sum of Bernoulli random variables. Assume that $\hat{\|}Y\hat{\|}_1=O(n)$,
and $\varphi_{X}(t)=O(\exp(-n^{\Omega(1)}))$.
Then for any fixed $\ell$
$$|\varphi_{Z}(t)-\varphi_X(t)|=O\left(\left|t\cdot\|Y\|_2\right|^\ell\right)$$
\end{thm}
\subsubsection{Bounding $\varphi_\k(t)$ for $t$ even larger still}
Several substantial barriers present themselves for adapting the above arguments to bounding $\varphi_\k(t)$ for $t\ge O(n)$.
First, in order to apply Theorem \ref{mainchf} profitably, there was the requirement that we consider a random variable of the form $X+Y$ where
$X=\sum X_i$ is a sum of i.i.d.\ random variables, and $t\|Y\|_2$ is small. This is a source of trouble as once $t>\|Y\|_2^{-1}$ our bound will be worthless. Second, and equally troubling, the characteristic function of the sum of ${n\choose 2}$ i.i.d.\ Bernoullis, $\sum \frac{1}{n}\chi_e$, is only small for $t=O(n)$, but we require our characteristic function to be small for $t\le \sigma=O(n^{r-1})$. It should be noted that this
barrier is not artificial. Some subgraph counts, such as the number of disjoint pairs of edges, do obey a central limit theorem by the proofs above,
but not a local limit theorem. In these cases the problem occurs because of the breakdown of the characteristic function at
$t=O(n)$. Again, this is not accidental, but a consequence of the fact that the number of pairs of disjoint edges is always a square, and therefore
almost on a lattice of step size $O(n)$.
The main idea is, very roughly speaking, that the higher order terms of the polynomial $\k$ are responsible for controlling the size of
$\varphi_{\k}(t)$ for $t$ large. In particular, when $t\approx n$, it is most profitable to look at $\k^{=2}$, the degree 2 part, rather than
$\k^{=1}$ as in the previous arguments. However, there is still trouble: what to do with the larger $\k^{=1}$ term? The answer
lies in a decoupling trick which allows us to ``clear out'' the lower order terms. We illustrate with an example extracted from \cite{JustinTriangles}.
\begin{example}\label{cs example}
Let $f_3$ be the triangle counting random variable. Partition the vertex set $[n]=U_0\cup U_1$ with
$|U_0|=|U_1|=n/2$. Let $B_0$ denote the edges internal to $U_0$, and $B_1$ be all other edges. Let $X\in \{0,1\}^{B_0}$ and $Y\in \{0,1\}^{B_1}$
be random vectors drawn according to the probability distribution $G(n,p)$, and let $Y_0,Y_1$ denote independently drawn copies of $Y$. Finally,
rewrite $f_3=A(X)+B(Y)+C(X,Y)$, isolating the monomials in $f_3$ which depend only on $X$ or only on $Y$. Then we can bound the characteristic function of $f_3$ by the following decoupling trick:
\begin{align*}
|\varphi_{f_3}(t)|^2&=|\E_{X,Y}[e^{itf_3}]|^2=\left| \E_X\left[ e^{it A(X)} \E_Y e^{it(B(Y)+C(X,Y))}\right]\right|^2\le \E_X\left| \E_Y e^{it(B+C)}\right|^2\\
&=\E_X\left(\E_{Y_0}e^{it(B(Y_0)+C(X,Y_0))}\overline{\E_{Y_1}e^{it(B(Y_1)+C(X,Y_1))}}\right)=\E_{Y_0,Y_1}\left[ e^{it(B(Y_0)-B(Y_1))}\E_Xe^{it(C(X,Y_0)-C(X,Y_1))}\right]\\
&\le\E_{Y_0,Y_1}\left |\E_X e^{it(C(X,Y_0)-C(X,Y_1))}\right|
\end{align*}
In the last line above, the terms $A$ and $B$, which depended on only one of $X$ or $Y$, have vanished. Additionally, in the inner expectation we consider $C(X,Y_0)-C(X,Y_1)$ as a polynomial in $X$ for some random but fixed choice of $Y_0,Y_1$.
A moment's reflection will reveal that the only monomials in $C(X,Y)$ correspond to triangles with two vertices in $U_0$ and one vertex in $U_1$. Therefore
each triangle represented in $C(X,Y)$ has two edges in $B_1$, but only one in $B_0$. So $C(X,Y)$ is only a polynomial of
degree 1 in $X$.
Then we can use standard methods to analyze $\E[e^{it[C(X,Y_0)-C(X,Y_1)]}]$, because it is a sum of independent Bernoulli random variables.
One last wrinkle in the above that should be mentioned is that the linear function $C(X,Y_0)-C(X,Y_1)$ depends on the samples $Y_0,Y_1$ of edges
in $B_1$. But after some work, we can show that with overwhelming probability (in the sampling of $Y_0,Y_1$), we will have that
$\E[e^{it[C(X,Y_0)-C(X,Y_1)]}]$ is small.
\end{example}
Section \ref{decoupling section} develops a version of this decoupling trick for higher degree polynomials. In order to eliminate all monomials of degree
at most $k-1$, we will require $k+1$ partitions of our vertices and $2k$ independent samples. One additional difference will be that, upon performing
this decoupling trick, we will not always be left with a linear function but rather a polynomial which is highly concentrated on degree 1 terms.
But combining some careful analysis with Theorem \ref{mainchf}, we will be able to obtain our bounds on $\varphi_{\k}(t)$ in a similar manner
to the above example.
\subsection{Organization of this Paper}
In Section \ref{prelimsection} we set up our notation and introduce some facts which will be necessary for the later sections. Section \ref{mainsection} contains the statements and proofs of our main results, modulo the main technical lemmas. In Section \ref{mainchf section} we prove Theorem \ref{mainchf}, which is our main technical tool for bounding characteristic
functions of constant degree polynomials in this paper. In Section \ref{decoupling section} we prove our main decoupling Lemma. Section \ref{KrProp}
contains our analysis of the properties of the clique counting random variables $f_r$ and $\k$.
Finally, Sections \ref{smallt section} through \ref{larget section} are dedicated to applying the aforementioned Lemma to bounding the characteristic
function of $\k$ in different regimes depending on $|t|$.
\section{Preliminaries and Notation} \label{prelimsection}
\subsection{Definition of our random variables $f_r$ and $\k$}
Throughout we will always be working with the probability space $G(n,p)$. We will assume a vertex set of $[n]=\{1,2,\ldots,n\}$, and
a set of indicator random variables, $x_e$ for each edge $e\in {[n]\choose 2}$. $x_e$ will be 1 if edge $e$ is present in our sampled graph and 0 otherwise. All edges will be present independently at random with probability $p$. We will use $\lambda$ to denote $\min(p,1-p)$.
The graph $K_r$ is the clique on $r$ vertices, that is it has $r$ vertices and contains all edges between them.
Let $f_r$ denote the random variable counting the number of copies of $K_r$ in our random graph. We express this as
$$f_r=\sum_{S\equiv K_r} x^S$$
where the sum is taken over all ${n\choose r}$ sets of edges $S\subset {[n]\choose 2}$ which are isomorphic to the $r$-clique $K_r$, and $x^S:=\prod_{i\in S} x_i$. We will also
frequently refer to the mean and standard deviation of $f_r$. Throughout the paper we will use $\mu$ and $\sigma$ to denote
$$\mu:=\E_{G\sim G(n,p)} f_r(G)=p^{r\choose 2}{n\choose r}\qquad\qquad \sigma:=\sqrt{\E_{G\sim G(n,p)}[f_r^2-\mu^2]}$$
Note that $\mu$ and $\sigma$ depend on $n$, as well as the fixed parameters $r$ and $p$. Throughout it will be more convenient to work with the normalized copy of $f_r$, which we label $\k$
$$\k:=\k_r(G):=\frac{f_r-\mu}{\sigma}$$
\subsection{Parameters and asymptotics}
We will have need of a fixed, but arbitrarily small constant labeled $\tau \in (0,1/2r)$. $\tau$ will be the same constant throughout the entire paper. Additionally, we will always assume that $r\ge 3$, and $p\in (0,1)$ are fixed constants which do not depend on $n$. Our results
will then apply in the asymptotic setting as $n\to \infty$. Additionally, all asymptotic notation of the form $O(\cdot),~o(\cdot),$ or $\Omega(\cdot)$ will view
$r,p,\tau$ as fixed constants.
\subsection{$p$-biased basis}
A crucial tool throughout this paper will be the $p$-biased Fourier basis. Rather than working with the indicator random variable $x_e$ directly
we will instead apply the following linear transformation
\begin{define}[Fourier Basis]
$$\chi_e:=\chi_e(x_e)=\frac{x_e-p}{\sqrt{p(1-p)}}=\begin{cases}
-\sqrt{\frac{p}{1-p}}&\mbox{if }x_e=0\\
\sqrt{\frac{1-p}{p}}&\mbox{if }x_e=1
\end{cases}
$$
For $S\subset {[n]\choose 2}$ set $\chi_S=\prod_{e\in S}\chi_e$. For $S=\varnothing$ we have $\chi_\varnothing\equiv 1$.
\end{define}
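For instance, one checks directly from the definition that each $\chi_e$ has mean zero and unit variance:
$$\E[\chi_e]=p\sqrt{\tfrac{1-p}{p}}-(1-p)\sqrt{\tfrac{p}{1-p}}=0 \qquad\textrm{and}\qquad \E[\chi_e^2]=p\cdot\tfrac{1-p}{p}+(1-p)\cdot\tfrac{p}{1-p}=1.$$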
As the $x_e$ variables are independent $p$-biased Bernoulli random variables, for any $S\neq T\subset {[n]\choose 2}$ we have $\E[\chi_S^2]=1$ and $\E[\chi_S\chi_T]=0$. Therefore
the set of functions $\{\chi_S~|~S\subset {[n]\choose 2}\}$ is an orthonormal basis for the space of functions on $G(n,p)$.
We will also use the notation of the Fourier transform. Since the $\chi_S$ form an orthonormal basis, for any function
$f:\{0,1\}^{[n]\choose 2}\to \mathbb{R}$ we can choose coefficients $\hat{f}(S)$ so that
$$f= \sum_{S\subset {[n]\choose 2}} \hat{f}(S)\chi_S$$
The coefficients $\hat{f}(S)$ are called the Fourier Coefficients of $f$, and the function $\hat{f}:2^{[n]\choose 2}\to \mathbb{R}$ is called
the Fourier transform of $f$. Moreover we can compute these coefficients by noting that $\hat{f}(S)=\E[f\chi_S]$. The degree of a monomial $\chi_S$ is $|S|$, and for an arbitrary function $f$, we say that it has degree equal to the degree
of the largest monomial in its Fourier expansion. That is
$\deg(f)=\max_{\hat{f}(S)\neq 0}|S|$.
Additionally, it will be helpful to refer only to terms of $f$ of a certain degree. For any $k\in \mathbb{N}$ let
\begin{align*}
f^{=k}&:=\sum_{|S|=k}\hat{f}(S)\chi_S\\
f^{>k}&:=\sum_{|S|>k}\hat{f}(S)\chi_S
\end{align*}
The $2$-norm of a function $\|f\|_2$ and the spectral 1-norm $\hat{\|}f\hat{\|}_1$ are defined to be
\begin{align*}
\|f\|_2&:=\sqrt{\E[f^2]}\\
\hat{\|}f\hat{\|}_1&:=\sum_S |\hat{f}(S)|
\end{align*}
Another fact which we will need throughout this paper is Parseval's Theorem, which allows us to easily compute the variance of a function
in terms of its Fourier transform
\begin{thm}[Parseval's Theorem] \label{parseval}
$$\E[f^2]=\sum_{S\subset {[n]\choose 2}} \hat{f}(S)^2$$
Also, it therefore holds that
$$Var(f)=\E[f^2]-\E[f]^2=\E[f^2]-\hat{f}(\varnothing)^2=\sum_{S\neq \varnothing} \hat{f}(S)^2$$
\end{thm}
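For instance, for $f=\sum_{e\in {[n]\choose 2}}\chi_e$ (a rescaled, centered edge count), every coefficient $\hat{f}(\{e\})$ equals $1$ and all other coefficients vanish, so Parseval's Theorem gives
$$Var(f)=\sum_{e\in {[n]\choose 2}} 1={n\choose 2}.$$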
\subsection{Function Restrictions}\label{restrictions subsection}
Let $f:\{0,1\}^{[n]\choose 2}\to \mathbb{R}$ be an arbitrary function and $H\subset {[n]\choose 2}$.
Given a setting $\beta\in \{0,1\}^{H^c}$ of the edges not in $H$ we define the restricted function $f_\beta:\{0,1\}^H\to \mathbb{R}$ by
$$f_\beta(\alpha):=f(\alpha,\beta)$$
Whenever we use this restriction notation the choice of $H$ will be made explicit beforehand, but not referenced in the notation for the sake of compactness.
For any $S\subset H$ we may express the Fourier coefficients of $\widehat{f_\beta}(S)$ in terms of the coefficients of $f$ as follows:
$$\widehat{f_\beta}(S)=\sum_{T\subset H^c} \hat{f}(S\cup T) \chi_T(\beta)$$
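To illustrate the formula, if $f=\chi_e\chi_{e'}$ with $e\in H$ and $e'\in H^c$, then $f_\beta=\chi_{e'}(\beta)\,\chi_e$, and indeed the sum above has a single nonzero term, giving $\widehat{f_\beta}(\{e\})=\chi_{e'}(\beta)$.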
Additionally, sometimes we wish to restrict by looking at some set of vertices, and only considering edges incident to those vertices. To this end we will define the notion of the vertex support of some set of edges.
\begin{define}
Let $S\subset {[n]\choose 2}$ then $\supp(S)$ is defined to be the set of vertices incident to an edge in $S$. If we interpret an edge as a set of two vertices, then $\supp(S)=\cup_{e\in S} e$.
\end{define}
\subsection{Characteristic Functions}
The bulk of this paper will be concerned with estimating the characteristic function of $\k$, our normalized copy of $f_r$, the $K_r$ counting random variable. First we recall the definition of the characteristic function:
\begin{define}
Let $X$ be a random variable. Then its characteristic function $\varphi_X:\mathbb{R}\to \mathbb{C}$ is defined to be
$$\varphi_X(t):=\E[e^{itX}]$$
\end{define}
These are very well studied objects, and they completely determine their associated random variable. In particular, we will need the following inversion
formula which specifies the probability distribution of a lattice valued random variable in terms of its characteristic function.
\begin{thm}[Fourier Inversion for Lattices] \label{inversion}
Let $X$ be a random variable supported on the lattice $\mathcal{L}:=b+h\mathbb{Z}$. Let $\varphi_X(t):=\E[e^{itX}]$, the characteristic function of $X$. Then for $x\in\mathcal{L}$
$$\mathbb{P}(X=x)=\frac{h}{2\pi}\int_{-\frac{\pi}{h}}^{\frac{\pi}{h}} e^{-itx}\varphi_X(t)dt$$
\end{thm}
For a proof, see Theorem 4 of chapter 15.3 in volume 2 of Feller \cite{Feller2}.
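In our setting $f_r$ is integer valued, so $\k=(f_r-\mu)/\sigma$ is supported on the lattice $\frac{1}{\sigma}(\mathbb{Z}-\mu)$ with spacing $h=1/\sigma$, and the inversion formula specializes to
$$\Pr[\k=x]=\frac{1}{2\pi\sigma}\int_{-\pi\sigma}^{\pi\sigma} e^{-itx}\varphi_\k(t)\,dt,$$
which is why the characteristic function estimates in later sections are carried out for $|t|\le \pi\sigma$.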
We will also need the following bounds on the characteristic function of the Bernoulli random variable.
\begin{lem}\label{bernoulli}
Let $Y$ be a random variable taking the value $1$ with probability $p$ and $-1$ with probability $1-p$. For any $|t|<\frac\pi 2$, $|\E[e^{itY}]|<1-\frac{8p(1-p) t^2}{\pi^2}$. Consequently, it also follows that for $|t|<\pi$ we have $|\E[e^{itx_e}]|\le 1-\frac{2p(1-p)t^2}{\pi^2}$ and also for $|t|<\sqrt{p(1-p)}\pi$ we have $|\E[e^{it\chi_e}]|\le 1-\frac{2t^2}{\pi^2}$.
\end{lem}
\begin{proof}
For $Y$ we note that
\begin{align*}
|\E[e^{itY}]|^2&=|pe^{it}+(1-p)e^{-it}|^2=\cos^2(t)+(1-2p)^2\sin^2(t)=1-4p(1-p)\sin^2(t)\\
&\le 1-\frac{16p(1-p)t^2}{\pi^2}
\end{align*}
where the inequality used the fact that $|\sin(t)|\ge 2|t|/\pi$ for $|t|\le \frac{\pi}{2}$. The first claimed result now follows by noting that
$\sqrt{1-x}\le 1-\frac x 2$.
For the subsequent claim we note that for any random variable $X$, and $a,b\in \mathbb{R}$
$$\E[e^{it(aX+b)}]=e^{itb}\E[e^{i(at)X}]$$
Since $x_e=(Y+1)/2$ and $\chi_e=\frac{x_e-p}{\sqrt{p(1-p)}}=\frac{Y+1-2p}{2\sqrt{p(1-p)}}$, the result follows. For example:
$$|\E[e^{it\chi_e}]|=|\E[e^{i\frac{t}{2\sqrt{p(1-p)}}Y}]|\le 1-\frac{8p(1-p)t^2}{4p(1-p)\pi^2}$$
\end{proof}
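As a quick sanity check, at $p=\tfrac12$ the first bound reads $|\E[e^{itY}]|=|\cos t|\le 1-\tfrac{2t^2}{\pi^2}$ for $|t|<\tfrac\pi2$, which follows directly from $1-\cos t=2\sin^2(t/2)\ge 2t^2/\pi^2$.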
\subsection{Concentration of low degree polynomials}
Throughout, we will lean heavily on the following hypercontractivity bounds, which roughly say that low degree polynomials in the
$\chi_e$ are reasonably well behaved in terms of moments and concentration. The reference given here is for Ryan O'Donnell's textbook, but the results do not originate there and are due to a series of results by Bonami, Beckner, Borell and others. See the notes in \cite{ODonnell} for further reference on the matter.
\begin{thm}[\cite{ODonnell} Theorem 10.24]\label{concentration hypercontractivity}
If $f$ has degree at most $d$ then for any $t\ge (2e/\lambda)^{d/2}$ (recall $\lambda=\min(p,1-p)$),
$$\Pr\left[|f(X)|\ge t\|f\|_2\right]\le \lambda^d\exp\left(-\frac{d}{2e}\lambda t^{2/d}\right)$$
\end{thm}
\begin{thm}[\cite{ODonnell} Theorem 10.21]\label{moment hypercontractivity}
If $f$ has degree at most $d$ then for $q\ge 1$
$$\E[|f|^{2q}]\le (2q-1)^{dq}\lambda^{d(1-q)}\|f\|_2^{2q}$$
\end{thm}
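For later reference, the instance of this bound that we will use in Section \ref{mainchf section} takes $2q=\ell+1$ for a degree $d$ function $Y$, giving
$$\E[|Y|^{\ell+1}]\le \ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)}\|Y\|_2^{\ell+1}.$$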
\section{Main Results} \label{mainsection}
In this section we give an overview of the proof of our local limit theorems and statistical distance bounds without the proofs of our lemmas and calculations which will follow in subsequent sections. In this section we will use the following notation for the density and characteristic functions of
the standard unit normal $N(0,1)$ respectively
\begin{align*}
\mathcal{N}(x)&:=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\\
\varphi(t):&=e^{-\frac{t^2}{2}}
\end{align*}
\subsection{Tools For Proving Local Limit Theorems}
Our main engine is the Fourier inversion formula given by Theorem \ref{inversion}.
Using this theorem, we can obtain our local limit theorem and statistical distance bounds. To this end we cite the following lemmas. For proofs see
\cite{triangles} (although the ideas do not originate there).
\begin{lem}\label{Pointwise Convergence}
Let $X_n$ be a sequence of random variables
supported in the lattices $\mathcal{L}_n=b_n+h_n\mathbb{Z}$ and with characteristic functions $\varphi_n$. Then for every $x\in\mathcal{L}_n$,
$$|h_n\mathcal{N}(x)-\mathbb{P}(X_n=x)|\le h_n\left(\int_{-\frac{\pi}{h_n}}^{\frac{\pi}{h_n}}\left|\varphi(t)-\varphi_n(t)\right|dt+e^{-\frac{\pi^2}{2h_n^2}}\right)$$
\end{lem}
\begin{lem}\label{L1 Lemma}
Let $X_n$ be a sequence of random variables supported in the lattice
$\mathcal{L}_n:=b_n+h_n\mathbb{Z}$,
and with chf's $\varphi_n$. Assume that the following hold:
\begin{enumerate}
\item $\sup_{x\in \mathcal{L}_n} |\Pr(X_n=x)-h_n\N(x)|<\delta_n h_n$
\item $\Pr(|X_n|>A)\le \epsilon_n$
\end{enumerate}
Then $\sum_{x\in \mathcal{L}_n} |\Pr(X_n=x)-h_n\N(x)|\le 2A \delta_n+\epsilon_n+\frac{h_n}{\sqrt{2\pi}A}e^{-\frac{A^2}{2}}$.
\end{lem}
\subsection{Proofs of Main Results}
The main calculation of this paper is the following characteristic function bound:
\begin{thm}\label{MainKr}
Fix $0<\tau<\min(1/2r,~1/12)$. Recall $\k:=(f_r-\mu)/\sigma$, and let $\varphi_\k(t)$ be the characteristic function of $\k$. Then
$$\int_{-\pi \sigma}^{\pi \sigma}\left|\varphi_\k(t)-e^{-\frac{t^2}{2}}\right|\,dt=O(n^{-1/2+2\tau})$$
\end{thm}
\begin{proof}
This proof is a combination of our estimates for the characteristic function $\varphi_\k(t)$ from Sections \ref{smallt section} through \ref{larget section}. The relevant bounds are
\begin{itemize}
\item For $|t|\le n^\tau$, we use Lemma \ref{smallest char bound} to say that $|\varphi_\k(t)-e^{-t^2/2}|=O(n^{-1/2+\tau})$.
\item For $n^\tau<|t|\le n^{\frac12+2\tau}$ we use Lemma \ref{subgraphmain2} to say that $|\varphi_\k(t)|=O(n^{-50})$.
\item For $n^{\frac12+2\tau}<|t|\le n^{\frac{r}{2}-5/12-2\tau}$ Corollary \ref{midchfbound} implies that $|\varphi_\k(t)|=O(n^{-r^2})$
\item For $n^{\frac{r}{2}-5/12-2\tau}<|t|\le n^{r-1-2(r-2)\tau}$ Corollary \ref{midhighchfbound} implies that $|\varphi_\k(t)|=\exp(-\Omega(n^\tau/2r^2))$.
\item For $n^{r-1-2(r-2)\tau}<|t|\le \pi \sigma$, Lemma \ref{highchfbound} tells us that $|\varphi_\k(t)|=\exp(-\Omega(n^{1-2(r-2)\tau}))$.
\end{itemize}
Note that in order for the last item on this list to be an effective bound, we require $\tau<\frac{1}{2r-2}$, which is satisfied.
Combining all of these pieces we find that
\begin{align*}
\int_{-\pi \sigma}^{\pi \sigma}\left|\varphi_\k(t)-e^{-\frac{t^2}{2}}\right|\,dt\le \int_{-n^\tau}^{n^\tau} \frac{|t|}{\sqrt{n}}\,dt+2\int_{n^{\tau}}^{\pi\sigma} e^{-\frac{t^2}{2}}\,dt+2\int_{n^\tau<|t|\le \pi\sigma}|\varphi_\k(t)|\,dt=O(n^{-\frac12+2\tau})
\end{align*}
\end{proof}
Theorem \ref{Sup Main} is now just a restatement of the following corollary:
\begin{cor}\label{Sup Bound}
Let $\mathcal{L}_n:=\frac{1}{\sigma}(\mathbb{Z}-\mu)$. Then for any $x\in \mathcal{L}_n$
$$\left|\mathbb{P}(\k=x)-\frac{\N(x)}{\sigma}\right|=O\left(\frac{1}{\sigma n^{\frac12-2\tau}}\right)$$
\end{cor}
\begin{proof}
Apply Lemma \ref{Pointwise Convergence} to $\k$ (where $h_n=1/\sigma$ and $b_n=-\mu/\sigma$), combined with the estimate for the characteristic function of $\k$ given by Theorem \ref{MainKr}.
\end{proof}
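Unwinding the normalization, for $m\in\mathbb{Z}$ and $x=(m-\mu)/\sigma\in\mathcal{L}_n$ we have $\mathbb{P}(\k=x)=\Pr[f_r=m]$ and $\frac{\N(x)}{\sigma}=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(m-\mu)^2}{2\sigma^2}}$, so Corollary \ref{Sup Bound} is the statement of Theorem \ref{Sup Main} after replacing $\tau$ by $\tau/2$, which is permissible since $\tau$ was arbitrary.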
Next we prove that $f_r$ and the discrete Gaussian are close in the $\ell^1$ metric as well.
\begin{repthm}{L1 Main}
$$\sum_{m\in \mathbb{N}}\left|\Pr(f_r=m)-\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\left(m-\mu\right)^2}{2\sigma^2}\right)\right|=O(n^{-\frac12+2\tau})$$
\end{repthm}
\begin{proof}
It is equivalent to show that for $\mathcal{L}_n=\frac{1}{\sigma}(\mathbb{Z}-\mu)$ we have
$$\sum_{x\in \mathcal{L}_n}\left|\Pr(\k=x)-\frac{1}{\sigma}\N(x)\right|=O(n^{-1/2+2\tau})$$
This follows from Lemma \ref{L1 Lemma}. By Corollary \ref{Sup Bound} we may take $\delta_n=O(n^{-1/2+2\tau})$. We may also set $A=\log(n)^2$ and take $\epsilon_n=\Pr(|\k|\ge \log(n)^2)=O(1/n)$ (this can be shown in several ways; Theorem \ref{concentration hypercontractivity} will suffice for our purposes, but much stronger tools exist). The main term will then be
$$\sum_{x\in \mathcal{L}_n}\left|\Pr(\k=x)-\frac{1}{\sigma}\N(x)\right|\le O\left(\log^2(n)n^{-1/2+2\tau}+\frac{1}{n}+n^{-\omega(1)}\right)$$
Since the choice of $\tau$ was arbitrary, the $\log^2(n)$ factor can be absorbed by slightly decreasing $\tau$, which proves the result.
\end{proof}
\section{Proof of Theorem \ref{mainchf}}\label{mainchf section}
Let $X=\sum_{i=1}^n a_iX_i$ be a sum of independent $p$-biased mean 0 variance 1 Bernoulli random variables. Let $Y$ be a degree $d$ polynomial in the $X_i$ such that $Y$
contains no degree 1 monomials. Assume $\sum_{i=1}^n a_i^2=T$, and $a_i^2\le \delta$ for all $i$. Set $\eta:=\|Y\|_2$ and $\epsilon:= \exp[-2t^2(T-\delta d\ell)/\pi^2]$. Let $\varphi_X(t):=\E[e^{itX}]$ and $\varphi(t):=\E[e^{it(X+Y)}]$ be the characteristic functions of $X$ and $X+Y$ respectively.
\begin{repthm}{mainchf}
Fix some $\ell\in \mathbb{N}$. Then for all $t$ such that
$$|t|<\min\left(\sqrt{p(1-p)}\pi\delta^{-1/2},~~(2e)^\frac \ell 2 \lambda^{-\frac \ell 2}\eta^{-1}\right)$$
it follows that
\begin{align*}
|\varphi(t)-\varphi_X(t)|&\le \ell\epsilon\left(1+\left|t\hat{\|}Y\hat{\|}_1\right|^\ell\right)+\frac{|t\eta|^{\ell+1}}{(\ell+1)!} \ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)} +\lambda^d\exp\left[-\frac{d\lambda}{2e}\left|t\eta\right|^{-2/d}\right]\\
&\qquad+|t\eta|^{\frac{\ell+1}{2}}\lambda^{\frac{3d-\ell}{4}}(\ell+1)\exp\left[-\frac{d\lambda}{4e}\left|t\eta\right|^{-2/d}\right]
\end{align*}
\end{repthm}
\begin{proof}
\begin{align*}
\varphi(t)-\varphi_X(t)&=\E[e^{it(X+Y)}]-\E[e^{itX}]=\E[e^{it(X+Y)}-e^{itX}]=\E[e^{itX}\left(e^{itY}-1\right)]
\end{align*}
By hypothesis we have that $\|tY\|_2\le (2e)^{\ell/2}\lambda^{-\ell/2}$, so an application of Theorem \ref{concentration hypercontractivity} yields
$$\Pr(|Yt|\ge 1)\le\lambda^d \exp\left(-\frac{d}{2e}\left(t\eta\right)^{-2/d}\right)$$
When $|tY|\le 1$ we can use the degree $\ell$ Taylor polynomial of $e^{itY}$ to say that
\begin{align*}
\left|e^{itY}-1-\left(\sum_{j=1}^\ell \frac{t^jY^j}{j!}\right)\right|\le \frac{e|t^{\ell+1}Y^{\ell+1}|}{(\ell+1)!}
\end{align*}
To account for the unlikely event that $tY$ is too large for this Taylor bound to apply, let $A$ be the event that $|tY|\ge 1$, and let $Z$ be the error random variable $Z:=1_A\left( \left|e^{itY}-\sum^\ell t^jY^j/j!\right|- \frac{e|t^{\ell+1}Y^{\ell+1}|}{(\ell+1)!}\right)$.
We then have, in all cases,
$$\left|e^{itY}-1-\left(\sum_{j=1}^\ell \frac{t^jY^j}{j!}\right)\right|\le \frac{e|t^{\ell+1}Y^{\ell+1}|}{(\ell+1)!} +Z$$
We show that $|Z|$ has small expectation. Uniformly, $|Z|\le 1+(\ell+1)|tY|^{\ell+1}$. By Theorem \ref{concentration hypercontractivity}:
$$\Pr(Z\neq 0)=\Pr(A)=\Pr(|tY|\ge 1)\le \lambda^d\exp\left(-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right)$$
Because $Y$ is a degree $d$ polynomial with $\|Y\|_2=\eta$, Theorem \ref{moment hypercontractivity} gives
\begin{equation}\label{thm3eq1}
\E|Y|^{\ell+1}=\|Y\|_{\ell+1}^{\ell+1}\le \ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)} \eta^{\ell+1}
\end{equation}
So it follows by Cauchy-Schwarz that
\begin{align}\label{thm3eq2}
\E|Z|&\le \E\left[1_A\cdot (1+(\ell+1)|tY|^{\ell+1})\right]\le \Pr(A)+\|1_A\|_2\|(\ell+1)|tY|^{\ell+1}\|_2\nonumber\\
&\le\lambda^d\exp\left[-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right]+\left(\lambda^d\exp\left[-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right]\right)^\frac12\left((\ell+1)^2t^{\ell+1} \ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)} \eta^{\ell+1}\right)^\frac12\nonumber\\
&=\lambda^d\exp\left[-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right]+(t\eta)^{\frac{\ell+1}{2}}\lambda^{\frac{3d-\ell}{4}}(\ell+1)\exp\left[-\frac{d\lambda}{4e}\left(t\eta\right)^{-2/d}\right]
\end{align}
Next, we analyze the Taylor polynomial of $e^{itY}$ by splitting $Y^j$ into a sum of monomials. Let $\mathcal{M}_j$ be the set of monomials supported in $Y^j$ and say
$$Y^j=\sum_{m\in \mathcal{M}_j} a_mm$$
as a result we may write
$$\E\left[e^{itX}\frac{t^jY^j}{j!}\right]=\frac{t^j}{j!}\sum_{m\in \mathcal{M}_j} a_m\E\left[e^{itX}m\right]$$
We now examine this one monomial at a time. Let $M$ denote the set of variables $X_i$ appearing in some fixed monomial $m$. Note that because $Y$ has degree $d$ and $m$ is in $Y^j$ we have that $|M|\le dj\le d\ell$. Then:
\begin{align}\label{thm3eq3}
\E[e^{itX}m]=\E_{M}\E_{M^c}[me^{itX}]=\E_{M}me^{it\sum_{i\in M} a_iX_i}\E_{M^c}[e^{it\sum_{i\in M^c} a_iX_i}]
\end{align}
So we have that
$$\left|\E[e^{itX}m]\right|=\left|\E_{M}me^{it\sum_{i\in M} a_iX_i}\E_{M^c}[e^{it\sum_{i\in M^c} a_iX_i}]\right|
\le |m|\left|\E_{M^c}[e^{it\sum_{i\in M^c} a_iX_i}]\right|$$
Using the hypothesis $|a_it|<\sqrt{p(1-p)}\pi$, we obtain from Lemma \ref{bernoulli} that
\begin{align*}
\left|\E_{M^c}[e^{it\sum_{i\in M^c} a_iX_i}]\right|=&\prod_{i\in M^c}\left|\E[e^{ita_iX_i}]\right|\le \prod_{i\in M^c}\left(1-\frac{2}{\pi^2}(t^2a_i^2)\right)
\le e^{-\frac{2}{\pi^2}t^2\sum_{i\in M^c}a_i^2}\\
&\le e^{-\frac{2}{\pi^2}t^2(T-\delta|M|)}\le e^{-\frac{2}{\pi^2}t^2(T-\delta d\ell)}\\
&\le \epsilon
\end{align*}
So plugging this back into equation \ref{thm3eq3} we find that
$$\left|\E\left[e^{itX}\frac{t^jY^j}{j!}\right]\right|\le \sum_{m\in \mathcal{M}_j}\frac{|t|^j}{j!}\left|\E\left[e^{itX}m\right]\right|\le\frac{|t|^j}{j!}\sum_{m\in \mathcal{M}_j} |a_m m|\epsilon\le \frac{|t|^j}{j!}\epsilon C_p^{dj} \hat\|Y^j\hat\|_1\le \frac{|tC_p^d|^j}{j!}\epsilon \hat\|Y\hat\|_1^j$$
Where the last inequality uses Lemma \ref{OneNorm} and the constant
$$C_p=\left(1+\frac{|1-2p|}{\sqrt{p(1-p)}}\right)\|\chi_p\|_\infty=\left(1+\frac{|1-2p|}{\sqrt{p(1-p)}}\right)\sqrt{\frac{1-\lambda}{\lambda}}$$
Summing over all $j$ from 1 to $\ell$
\begin{align*}
|\varphi(t)-\varphi_X(t)|&=\left|\E\left[e^{itX}(e^{itY}-1)\right]\right|\le \left|\sum_{j=1}^\ell\E\left[e^{itX}\frac{t^jY^j}{j!}\right]\right|
+\E\left|e^{itX}\left( \frac{e|t^{\ell+1}Y^{\ell+1}|}{(\ell+1)!} +Z\right)\right|\\
&\le \left(\sum_{j=1}^\ell \frac{C_p^{dj}|t|^j}{j!}\epsilon \hat\|Y\hat\|_1^j\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\E|Y|^{\ell+1} +\E|Z|\\
&\le \ell\epsilon\left(1+\left(tC_p^d\hat{\|}Y\hat{\|}_1\right)^\ell\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\E|Y|^{\ell+1} +\E|Z|
\end{align*}
Where the $1+\ldots$ inside the first parenthesis on the last line is to account for the possibility that $t\hat\|Y\hat\|_1\le 1$. We may use equations
\ref{thm3eq1} and \ref{thm3eq2} to bound the two error terms in the above equation. Putting all of the estimates together yields
\begin{align*}
|\varphi(t)-\varphi_X(t)|&=\left|\E\left[e^{itX}(e^{itY}-1)\right]\right|\le \left|\sum_{j=1}^\ell\E\left[e^{itX}\frac{t^jY^j}{j!}\right]\right|
+\E\left|e^{itX}\left( \frac{e|t^{\ell+1}Y^{\ell+1}|}{(\ell+1)!} +Z\right)\right|\\
&\le \left(\sum_{j=1}^\ell \frac{|t|^j}{j!}\epsilon \hat\|Y\hat\|_1^j\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\E|Y|^{\ell+1} +\E|Z|\\
&\le \ell\epsilon\left(1+\left|t\hat{\|}Y\hat{\|}_1\right|^\ell\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\E|Y|^{\ell+1} +\E|Z|\\
&\le \ell\epsilon\left(1+\left|t\hat{\|}Y\hat{\|}_1\right|^\ell\right)+\frac{|t\eta|^{\ell+1}}{(\ell+1)!} \ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)} +\lambda^d\exp\left[-\frac{d\lambda}{2e}\left|t\eta\right|^{-2/d}\right]\\
&\qquad+|t\eta|^{\frac{\ell+1}{2}}\lambda^{\frac{3d-\ell}{4}}(\ell+1)\exp\left[-\frac{d\lambda}{4e}\left|t\eta\right|^{-2/d}\right]
\end{align*}
\end{proof}
\begin{lem}\label{OneNorm}
Let $f$ be a polynomial. For any $j\in \mathbb{N}$ we have that
$$\hat\|f^{j}\hat\|_1\le \left[1+\frac{|1-2p|}{\sqrt{p(1-p)}}\right]^{j-1}\hat\|f\hat\|_1^{j}$$
\end{lem}
\begin{proof}
We prove this by induction on $j$. For $j=1$ the statement is trivial. Now assume it is true for some arbitrary $j$. Let $\mathcal{M}$ be the set of monomials supported in $f$ and $f=\sum_\mathcal{M} a_mm$. Then
\begin{align*}
\hat\|f^{j+1}\hat\|_1&=\hat\|\sum_{m\in \mathcal{M}} a_mmf^j\hat\|_1\le \sum_{m\in \mathcal{M}} |a_m|\hat\|mf^j\hat\|_1\le \sum_{m\in \mathcal{M}} |a_m|\left(1+\frac{|1-2p|}{\sqrt{p(1-p)}}\right)\hat\|f\hat\|_1^j\\
&\le \left[1+\frac{|1-2p|}{\sqrt{p(1-p)}}\right]^j\hat\|f\hat\|_1^{j+1}
\end{align*}
completing the induction and the proof.
\end{proof}
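Products in the $p$-biased character basis are governed by the relation $\chi_e^2 = 1+\frac{1-2p}{\sqrt{p(1-p)}}\chi_e$, which is what drives the factor in the lemma above. The following Python sketch (ours, purely illustrative; the edge indices and random coefficients are arbitrary) implements this multiplication rule for polynomials stored as coefficient dictionaries and numerically checks the bound of Lemma \ref{OneNorm} on a random degree 1 polynomial.
\begin{verbatim}
from itertools import combinations
from math import sqrt
import random

p = 0.3
gamma = (1 - 2 * p) / sqrt(p * (1 - p))   # chi_e^2 = 1 + gamma*chi_e

def multiply(f, g):
    """Multiply polynomials given as dicts frozenset(edges) -> coefficient."""
    prod = {}
    for S, a in f.items():
        for T, b in g.items():
            inter, sym = S & T, S ^ T
            # chi_S*chi_T = sum over U inside (S cap T) of gamma^|U| chi_{sym u U}
            for sz in range(len(inter) + 1):
                for U in combinations(inter, sz):
                    key = sym | frozenset(U)
                    prod[key] = prod.get(key, 0.0) + a * b * gamma ** sz
    return prod

def spec_1norm(f):
    return sum(abs(c) for c in f.values())

f = {frozenset([i]): random.uniform(-1, 1) for i in range(5)}  # random degree-1 poly
power = dict(f)
for j in range(2, 5):
    power = multiply(power, f)
    bound = (1 + abs(gamma)) ** (j - 1) * spec_1norm(f) ** j
    print(j, spec_1norm(power), "<=", bound)
\end{verbatim}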
\section{Decoupling and Polynomial Degree Reduction}\label{decoupling section}
In this section we set up our decoupling technique for reducing the degree of polynomials in characteristic function computations. For an example
illustrating the idea, see Example \ref{cs example} in the introduction.
\subsection{The $\alpha$ operator}\label{alpha def section}
Partition the edge set ${[n]\choose 2}$ of the complete graph into $k+1$ parts $B_0,B_1,\ldots,B_k$. Let $X\in\{0,1\}^{B_0}$ be the indicator random variables denoting which edges from $B_0$ are in our graph sampled from $G(n,p)$.
Similarly for $i\in [k]$ let $Y_i$ denote the indicator random variables for the edges in $B_i$. Also, for any $i\in [k]$ let $Y_i^{0}$ and $Y_i^1$ denote independent random variables with the same marginals as $Y_i$.
We let $B_i^0$ and $B_i^1$ denote separate copies of the edges in $B_i$, so that we may say $Y_i^j\in \{0,1\}^{B_i^j}$.
For $v\in \{0,1\}^k$ let $Y^v:=(Y_1^{v_1},Y_2^{v_2},\ldots,Y_k^{v_k})$.
One may interpret this as choosing two random samples $Y_i^0$ and $Y_i^1$ from $G(n,p)$ for the edges in each $B_i$. Then $(X,Y^v)$ corresponds to a sample of every edge in the graph, where the binary vector $v$ controls which of the two copies of each $Y_i$ is queried.
Finally, set $\mathcal{B}:=\cup_{i=1}^k\cup_{j=0}^1 B_i^j$, and let $\mathbf{Y}:=(Y_1^0,Y_1^1,Y_2^0,Y_2^1,\ldots, Y_k^0,Y_k^1)\in \{0,1\}^\mathcal{B}$. For $v\in \{0,1\}^k$ let $|v|$ denote the Hamming weight of $v$.
We are now ready to define our $\alpha$ operator.
\begin{define}\label{alpha}
Given the partition ${[n]\choose 2}=B_0\cup B_1\cup\ldots\cup B_k$ as above, define the operator $\alpha$, acting on functions of the form $f(X,Y_1,\ldots,Y_k)$, which outputs
the function $\alpha(f):\{0,1\}^{B_0\cup\mathcal{B}}\to \mathbb{R}$ given by
\begin{align*}
\alpha(f)(X,\mathbf{Y})&:=\sum_{\textbf{v}\in \{0,1\}^k}(-1)^{|v|}f(X,Y^v)
\end{align*}
\end{define}
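The operator is straightforward to evaluate by brute force. The following Python sketch (ours, purely illustrative; the function $f$ and the data passed to it are arbitrary placeholders) computes $\alpha(f)$ directly from the definition.
\begin{verbatim}
from itertools import product

def alpha(f, X, Ycopies):
    """Ycopies[i] = (Y_i^0, Y_i^1); f takes arguments (X, Y_1, ..., Y_k)."""
    k = len(Ycopies)
    total = 0.0
    for v in product((0, 1), repeat=k):            # choose one copy per block
        sign = (-1) ** sum(v)
        Yv = [Ycopies[i][v[i]] for i in range(k)]
        total += sign * f(X, *Yv)
    return total

# e.g. alpha(lambda X, Y1: X * Y1, 2.0, [(1.0, 3.0)]) == 2*1 - 2*3 == -4.0
\end{verbatim}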
Note that $\alpha$ is a linear operator, and in the rest of this section we will describe its action. First we define a pair of terms which will be useful in our analysis.
\begin{define}[Rainbow Sets]
We call a set $S\subset {[n]\choose 2}$ \textbf{rainbow} if $S\cap B_i\neq \varnothing$ for all $1\le i \le k$.
\end{define}
\begin{define}[Flattening from $\mathcal{B}$ to ${[n]\choose 2}$]
Given a set $S\subset B_0\cup \mathcal{B}$ define $$\mbox{flat}(S):=\{e\in {[n]\choose 2}\st e^0\mbox{ or }e^1 \in S\}$$ That is, $\mbox{flat}(S)$ takes in a subset of $\mathcal{B}$, and outputs
the set of edges in ${[n]\choose 2}$ relevant to $S$ (information about which copy or both of $e$ were in $S$ is omitted).
\end{define}
\subsection{Action of $\alpha$ on $\chi_S$}
In this subsection we compute the action of $\alpha$ on our basis functions $\chi_S$. Write $S=S_0\cup S_1\cup\ldots\cup S_k$ where
$S_i=S\cap B_i$. Because $\chi_S:=\prod_{i=0}^k \chi_{S_i}$ we can compute that
\begin{align*}
\alpha(S)&:=\alpha(\chi_S)=\sum_{\textbf{v}\in \{0,1\}^k}(-1)^{|v|}\chi_S(X,Y^v)=\sum_{\textbf{v}\in \{0,1\}^k}\chi_{S_0}(X)\prod_{i=1}^k(-1)^{v_i}\chi_{S_i}(Y_i^{v_i})\\
&=\chi_{S_0}(X)\prod_{i=1}^k\left(\chi_{S_i}(Y_i^0)-\chi_{S_i}(Y_i^1)\right)
\end{align*}
If $S$ is not rainbow, then for some $i$ we have $S_i=\varnothing$, so it follows from the above product form that $\alpha(S)\equiv 0$. Furthermore, if $S$ is rainbow, then
\begin{align*}
\E \alpha(S)&=\hat\alpha(\varnothing)=0\\
\|\alpha(S)\|_2^2&=\sum_{T\subset \mathbf{Y}}\hat\alpha(T)^2=\sum_{v\in \{0,1\}^k} (-1)^{2|v|}=2^k
\end{align*}
We can also compute that for $S, T$ both rainbow, we have
\begin{align*}
\E[\alpha(\chi_S)\alpha(\chi_T)]&=\E\left[\chi_{S_0}\chi_{T_0}\right]\prod_{i=1}^k\E\left[\left(\chi_{S_i}(Y_i^0)-\chi_{S_i}(Y_i^1)\right)\left(\chi_{T_i}(Y_i^0)-\chi_{T_i}(Y_i^1)\right)\right]\\
&=\begin{cases}
2^k&\mbox{if }S_i=T_i~\forall i\\
0&\mbox{else}
\end{cases}
\end{align*}
So we have shown that $\alpha$ maps the functions $\chi_S$ with $S$ rainbow to pairwise orthogonal functions of norm $2^{k/2}$, and maps $\chi_S$ to 0 when $S$ is not rainbow. In particular, if
$f=\sum_{S} \hat{f}(S)\chi_S$ then
\begin{align*}
\|\alpha(f)\|_2^2&=\E\left[\left(\sum_{S\subset E} \alpha(\chi_S)\hat{f}(S)\right)^2\right]=\sum_{S\mbox{ rainbow}} 2^k \hat{f}(S)^2
\end{align*}
We will also need to examine products of the form $\alpha(\chi_S)\alpha(\chi_T)$ for distinct sets $S,T\subset {[n]\choose 2}$.
\begin{lem} \label{alpha product}
Let $S,T\subset {[n]\choose 2}$ and set $\gamma=\frac{1-2p}{\sqrt{p(1-p)}}$.
For $U\subset B_0\cup \mathcal{B}$ we have
\begin{align*}
\left|\widehat{\alpha(\chi_{S})\alpha(\chi_T)}(U)\right|\le \begin{cases}
0&\mbox{if }S\Delta T\not \subset \mbox{flat}(U)\mbox{ or } \mbox{flat}(U)\not \subset S\cup T\\
\max\left(\gamma^{|U|}, 1\right)&\mbox{if } S\Delta T\subset \mbox{flat}(U)\subset S\cup T
\end{cases}
\end{align*}
\end{lem}
The proof is mostly a calculation and is contained in appendix \ref{alpha product appendix}.
\subsection{$\alpha$ and decoupling}
We are now ready to state our main Lemma of this section.
\begin{lem}\label{decoupling}
Let $f:=f(X,Y)$ be a function of the independent random variables $X,Y_1,\ldots,Y_k$. Let $\varphi(t)=\E[e^{itf(X,Y)}]$ be the characteristic function of $f$. Then we have
$$|\varphi(t)|^{2^k}\le \E_{\mathbf{Y}}\left| \E_X e^{it\alpha(f)(X,\mathbf{Y})}\right|$$
\end{lem}
\begin{proof}
We prove this statement inductively on $k$. For $k=0$ the proof is trivial, and for $k=1$ the proof is in Example \ref{cs example}.
Let $\tilde X=(X,Y_{k})$ and $\tilde Y=(Y_1,\ldots,Y_{k-1})$. Then by the inductive hypothesis:
$$|\varphi(t)|^{2^{k-1}}\le \E_{\tilde Y}\left| \E_{\tilde X} e^{it\sum_{v\in \{0,1\}^{k-1}}(-1)^{|v|} f(\tilde X,\tilde Y^{v})}\right| $$
Applying Cauchy-Schwarz to the inner term of the above expectation we find
\begin{align*}
\big|\E_{X,Y_{k}} &e^{it\sum_{\textbf{v}\in \{0,1\}^{k-1}}(-1)^{|v|} f(X,Y_k,\tilde Y^{v})}\big|^2
\le\E_X\left|\E_{Y_k}e^{it\sum_{v\in \{0,1\}^{k-1}}(-1)^{|v|} f(X,Y_k,\tilde Y^{v})}\right|^2\\
&=\E_X\left(\E_{Y_k^0}e^{it\sum_{v\in \{0,1\}^{k-1}}(-1)^{|v|} f(X,Y_k,\tilde{Y}^{v})}\right)\overline{\left(\E_{Y_k^1}e^{it\sum_{{v}\in \{0,1\}^{k-1}}(-1)^{|v|} f(X,Y_k,\tilde{Y}^{v})}\right)}\\
&=\E_{X,Y_k^0,Y_k^1}e^{it\sum_{v\in \{0,1\}^k} f(X,Y^v)(-1)^{|v|}}=\E_{X,Y_k^0,Y_k^1}e^{it\alpha(f)}
\end{align*}
So a second application of Cauchy-Schwarz now tells us that
\begin{align*}
|\varphi(t)|^{2^k}&\le \left(\E_{\tilde Y}\left| \E_{\tilde X} e^{it\sum_{v\in \{0,1\}^{k-1}}(-1)^{|v|} f(\tilde X,\tilde Y^{v})}\right|\right)^2
\le \E_{\tilde Y}\left| \E_{\tilde X} e^{it\sum_{v\in \{0,1\}^{k-1}}(-1)^{|v|} f(\tilde X,\tilde Y^{v})}\right|^2\\
&\le\E_{\tilde Y}\E_{X,Y_k^0,Y_k^1}e^{it\alpha(f)}\le \E_{\mathbf{Y}}\left|\E_{X}e^{it\alpha(f)(X,\mathbf{Y})}\right|
\end{align*}
\end{proof}
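The inequality can be sanity-checked numerically. The following Python sketch (ours, illustrative only; the test function $f$, the bias $p$, and the frequency $t$ are arbitrary) verifies the $k=1$ case by exact enumeration over a single $X$-bit and a single $Y$-bit.
\begin{verbatim}
import cmath

p, t = 0.3, 1.7          # arbitrary bias and frequency
def f(x, y):             # arbitrary test polynomial (not from the paper)
    return 2.0 * x * y + 0.5 * y - x

def pr(bits):            # probability of a tuple of independent p-biased bits
    out = 1.0
    for b in bits:
        out *= p if b == 1 else 1 - p
    return out

# left-hand side: |phi(t)|^{2^k} with k = 1
phi = sum(pr((x, y)) * cmath.exp(1j * t * f(x, y)) for x in (0, 1) for y in (0, 1))
lhs = abs(phi) ** 2

# right-hand side: E_{Y^0,Y^1} | E_X exp(i t [f(X,Y^0) - f(X,Y^1)]) |
rhs = sum(pr((y0, y1)) *
          abs(sum(pr((x,)) * cmath.exp(1j * t * (f(x, y0) - f(x, y1)))
                  for x in (0, 1)))
          for y0 in (0, 1) for y1 in (0, 1))
print(lhs, "<=", rhs)
\end{verbatim}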
\section{Properties of the $K_r$ counting function}\label{KrProp}
In this section we compute the Fourier transform of $f_r$, the $K_r$ counting function, and its normalized brother $\k$. Recall the definition of $f_r$ by
$$f_r=\sum_{\substack{H\subset K_n\\H\equiv K_r}} 1_H$$
where the sum is over all copies of $K_r$ in the complete graph $K_n$, and $1_H$ is the indicator that every edge of $H$ is present in $G$.
Meanwhile for each individual $r$-clique $H$, its indicator function is given by
$$1_H(X)=\prod_{e\in H} x_e=\prod_{e\in H} \left(\sqrt{p(1-p)}\chi_e+p\right)=p^{r\choose 2}\sum_{\supp(S)\subset H} \left(\frac{1-p}{p}\right)^{|S|/2}\chi_S$$
Summing over all possible choices of $H$ we find that $\widehat{f_r}(S)$ is $p^{r\choose 2}(\frac{1-p}{p})^{|S|/2}$ multiplied by the number of different $r$-cliques containing all of the edges in $S$. But we know any set $S$ supported on $t$ vertices appears in exactly ${n-t\choose r-t}$
$r$-cliques. Therefore
\begin{equation}
f_r=p^{r\choose2}\sum_{t=0}^r{n-t\choose r-t}\sum_{|\supp(S)|=t}\left(\frac{1-p}{p}\right)^{|S|/2} \chi_S
\end{equation}
Combining this formula with Theorem \ref{parseval} allows us to quickly compute $\sigma^2$, the variance of $f_r(G)$ when $G$ is drawn from $G(n,p)$.
\begin{align}
\sigma^2:=Var_{G\sim G(n,p)}(f_r(G))&=\sum_{S\neq \varnothing} \hat{f_r}(S)^2=p^{r(r-1)}\sum_{t=2}^r\sum_{|\supp(S)|=t}\left[{n-t\choose r-t}\left(\frac{1-p}{p}\right)^{|S|/2}\right]^2\\
&=\frac{p^{r(r-1)-1}(1-p)}{2(r-2)!^2} n^{2r-2}+O\left(n^{2r-3}\right)\nonumber
\end{align}
Recall that $\k=\frac{f_r-\mu}{\sigma}$ is the normalized (mean 0, variance 1) rescaling of $f_r$. We note that
\begin{align}
W^{1}(\k)&=\sum_{|S|=1} \hat{\k}(S)^2=1-O\left(\frac1n\right)\\
W^{>1}(\k)&=\sum_{|S|\ge 2} \hat{\k}(S)^2=\Theta\left(\frac1n\right)\\
\hat{\k}(S)&=\frac{p^{r\choose 2}\left(\frac{1-p}{p}\right)^{|S|/2}{n-|\supp(S)|\choose r-|\supp(S)|}}{\sigma}=\Theta\left(n^{1-|\supp(S)|}\right)
\end{align}
The above formula for $\hat{\k}(S)$ is valid for all $S\neq \varnothing$ (and $\hat{\k}(\varnothing)=0$).
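As an aside, the single-edge coefficient $\hat{f_r}(e)=p^{r\choose 2}\sqrt{\tfrac{1-p}{p}}{n-2\choose r-2}$ can be sanity-checked by Monte Carlo, since $\hat{f_r}(e)=\E[f_r\chi_e]$. The following Python sketch (ours, purely illustrative; the small values of $n$, $r$, $p$ and the number of trials are arbitrary, and the resulting estimate is noisy) does this for triangles.
\begin{verbatim}
import itertools, math, random

n, r, p, trials = 10, 3, 0.4, 20000
e0 = (0, 1)
acc = 0.0
for _ in range(trials):
    G = {e: (random.random() < p) for e in itertools.combinations(range(n), 2)}
    # count r-cliques directly
    fr = sum(all(G[pair] for pair in itertools.combinations(C, 2))
             for C in itertools.combinations(range(n), r))
    chi_e = (G[e0] - p) / math.sqrt(p * (1 - p))
    acc += fr * chi_e
print("Monte Carlo estimate of hat f_r(e):", acc / trials)
print("closed-form value:", p ** math.comb(r, 2)
      * math.sqrt((1 - p) / p) * math.comb(n - 2, r - 2))
\end{verbatim}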
\section{Bound for $|t|\le n^{\tau}$}\label{smallt section}
In this section we prove the following Lemma:
\begin{lem}\label{smallest char bound}
For all $t=o(n)$ we have $|\varphi_\k(t)-e^{-t^2/2}|=O\left(\frac{t}{\sqrt{n}}\right)$.
\end{lem}
In order to do this, we will need the Berry-Esseen theorem. The following lemma is a restatement of Lemma 1 of Chapter V in Petrov's Sums of Independent Random Variables \cite{Petrov}.
\begin{lem}\label{BELemma}
Let $Q^2=\frac{1}{{n\choose 2}}$ and set $X=\sum_{e\in {[n]\choose 2}} Q\chi_e$. This is a mean 0, variance 1 sum of independent random variables. Further define $L_n$ to be
$$L_n:={n\choose 2}\E[|Q\chi_e|^3]=\frac{p^2+(1-p)^2}{\sqrt{{n\choose 2}p(1-p)}}=\Theta_p\left(1/n\right)$$
then for $t\le \frac{1}{4L_n}$ we have that
\begin{equation}\label{eqBerryEsseen}
\left|\E[e^{itX}]-e^{-\frac{t^2}{2}}\right| \le 16L_n|t|^3e^{\frac{-t^2}{3}}
\end{equation}
\end{lem}
With this bound, we are ready to prove Lemma \ref{smallest char bound}.
\begin{proof}[Proof of Lemma \ref{smallest char bound}]
Decompose $\k$ into two parts: $X$ a mean 0 variance 1 sum of i.i.d.\ random variables, and $Y$, which is considered as an error term. Set $Q=\frac{1}{\sqrt{n\choose 2}}$ and let
\begin{align*}
X:=\sum_{e\in {[n]\choose 2}} Q \chi_e&&Y:=\k-X=\sum_{e\in {[n]\choose 2}} (\hat \k(e)-Q)\chi_e+\sum_{|S|\ge 2} \hat \k(S) \chi_S
\end{align*}
We know that all edges $e\in {[n]\choose 2}$ have the same Fourier coefficient $\hat{\k}(e)$, and further that
$$\sum_{e}\hat{\k}(e)^2={n\choose 2}\hat{\k}(e)^2=1-W^{>1}(\k)=1-O\left(\frac1n\right)$$
Therefore it follows that
$$|\hat{\k}(e)-Q|=\left| \frac{\hat{\k}(e)^2-Q^2}{\hat{\k}(e)+Q}\right|=O\left(\frac1{n^2}\right)$$
So now we can compute
$$\|Y\|_2^2=\sum_{e} (\hat{\k}(e)-Q)^2+\sum_{|S|\ge 2} \hat{\k}(S)^2=O\left(\frac1{n^2}\right)+W^{>1}(\k)=O\left(\frac1n\right)$$
For $t\le \frac{1}{4L_n}=\Theta(n)$, Lemma \ref{BELemma}, the above calculation, and Cauchy-Schwarz applied to $\E[|Y|]^2$ tell us
\begin{align*}
\left|\varphi_\k(t)-e^{-\frac{t^2}{2}}\right|&=\left|\E\left[e^{it\k}\right]-e^{-\frac{t^2}{2}}\right|=\left|\E\left[e^{it(X+Y)}\right]-e^{-\frac{t^2}{2}}\right|\le \left|\E\left[e^{itX}\right]-e^{-\frac{t^2}{2}}\right|+\left|\E\left[e^{it(X+Y)}\right]-\E e^{itX}\right|\\
&\le 16L_n|t|^3e^{\frac{-t^2}{3}}+\E|tY|=O\left(\frac{t^3e^{-\frac{t^2}{3}}}{n}+\frac{t}{\sqrt{n}}\right)
\end{align*}
\end{proof}
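For intuition (and independently of the proof), the approximate Gaussianity of $\varphi_\k$ at small $t$ can also be observed empirically. The following Python sketch (ours; the parameters are arbitrary, and empirical estimates of $\mu$ and $\sigma$ are used in place of the exact values) estimates $\varphi_\k(t)$ for the triangle count by simulation and compares it with $e^{-t^2/2}$.
\begin{verbatim}
import math
import numpy as np

n, p, samples = 25, 0.3, 4000          # arbitrary small parameters
def triangle_count():
    U = np.triu(np.random.rand(n, n) < p, 1)
    A = (U + U.T).astype(int)          # adjacency matrix of a G(n,p) sample
    return int(np.trace(np.linalg.matrix_power(A, 3)) // 6)

vals = np.array([triangle_count() for _ in range(samples)], dtype=float)
kappa = (vals - vals.mean()) / vals.std()      # empirical standardization
for t in (0.5, 1.0, 2.0):
    phi = np.exp(1j * t * kappa).mean()
    print(t, abs(phi - math.exp(-t * t / 2)))
\end{verbatim}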
\section{Bound for $|t|\in [n^{\tau}, n^{\frac12+2\tau}]$} \label{tier2 section}
The goal for this section is to prove the following lemma
\begin{lem}\label{subgraphmain2}
For $n^{\tau}<t\le n^{\frac34}$
$$|\varphi_\k(t)|\le O\left(n^{-50}\right)$$
\end{lem}
\subsection{High Level Proof}
In this subsection, we will assume the following helper claims, and then prove Lemma \ref{subgraphmain2}.
\begin{claim}\label{goodgraph}
For all sufficiently large $n$ and any $\alpha\in (n^{-1+\tau},1)$
there exists a set of edges $H\subset {[n] \choose 2}$ with $|H|\ge \alpha{n\choose 2}$ such that
$$\sum_{\substack{S\subset H\\|S|\ge 2}} n^{2-2|\supp(S)|}=O( \alpha^2 n^{-1})$$
\end{claim}
For the subsequent claims, assume we have chosen one such $H$ as promised by Claim \ref{goodgraph}, which will be fixed throughout.
\begin{claim}\label{GoodEvent}
Let $A$ be the event (over the space of revelations $\beta \in \{0,1\}^{H^c}$) that for \emph{every} edge $e\in H$
$$|\widehat{\k_\beta}(e)-\hat{\k}(e)|<\frac{1}{n^{1.4}}$$
Recall $\lambda:=\min(p,1-p)$. Then $\Pr(A)\ge 1-n^2\exp\left[-\Omega\left( n^{\frac{0.4}{r^2}}\right)\right]$.
\end{claim}
\begin{claim}\label{goodtimes}
Let $B$ be the event (over the space of revelations $\beta \in \{0,1\}^{H^c}$) that for \emph{every} set $S\subset {[n]\choose 2}$ with $|S|\ge 2$
$$|\widehat{\k_\beta}(S)|\le Cn^{1-|\supp(S)|}$$
where $C$ is a fixed constant depending on $r$ and $p$, but not on $n$. Then $\Pr(B)\ge 1-\exp\left(-\Omega\left(n^{1/r^2}\right)\right)$
\end{claim}
\begin{claim}\label {goodcase}
Assume $\beta \in A\cap B$. Then for $t\in [n^\tau,n^{3/4}]$
$$\E_{H}[ e^{it\k_\beta}]=O(n^{-50})$$
\end{claim}
Lemma \ref{subgraphmain2} now follows by combining all of these claims.
\begin{proof}[Proof of Lemma \ref{subgraphmain2}]
Let $A$, and $B$ be as defined in Claims \ref{GoodEvent} and \ref{goodtimes}.
We can break up $\{0,1\}^{H^c}$ into $A\cap B$ and $(A\cap B)^c$ and estimate
\begin{align*}
|\varphi_\k(t)|&:=\big|\E_{(\alpha,\beta)\in \{0,1\}^{{[n]\choose 2}}}[e^{it\k(\alpha,\beta)}]\big|\le \E_{\beta\subset H^c}|\E_{\alpha\subset H}[e^{it\k_\beta(\alpha)}]|\le \Pr[(A\cap B)^c]+ \Pr[A\cap B]\E_{\beta \in (A\cap B)}\left|\E_{\alpha}[e^{it\k_\beta}]\right|
\end{align*}
Combining Claims \ref{GoodEvent}, \ref{goodtimes}, and \ref{goodcase} we can bound the right hand side of the above by $O(n^{-50})$.
\end{proof}
\subsubsection{Proof of Claims}
\begin{proof}[Proof Of Claim \ref{goodgraph}]
Draw a random graph $H$ on $n$ vertices by choosing $\alpha{n\choose 2}$ edges uniformly at random. Then we note that
\begin{align*}
\E\sum_{\substack{S\subset H\\|S|\ge 2}} n^{2-2|\supp(S)|}=\sum_{\substack{S\subset{n\choose 2}\\2\le |S|\le r}} n^{2-2|\supp(S)|}\Pr(S\subset H)
\le \alpha^2n^2\sum_{i=3}^{r}n^{-2i}\sum_{|\supp(S)|=i} 1=O\left(\alpha^2 n^{-1}\right)
\end{align*}
So some $H$ must have at most the average value for this sum.
\end{proof}
We prove Claim \ref{GoodEvent} by noting that the formula for $\widehat{\k_\beta}(S)$ (a coefficient in the polynomial $\k_\beta$) is \emph{itself} a low degree polynomial, and therefore may be shown to have tight concentration by hypercontractivity.
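Concretely, this restriction amounts to collapsing the revealed variables of each monomial onto the surviving ones. The following Python sketch (ours, illustrative only; the coefficient dictionary and the revealed $\chi$-values are arbitrary inputs) computes all restricted coefficients at once for a polynomial stored in the dictionary representation used in the earlier sketches.
\begin{verbatim}
def restrict(coeffs, beta):
    """coeffs: dict frozenset(edges) -> coefficient of chi_S;
    beta: dict edge -> revealed value of chi_e, one entry per edge of H^c."""
    revealed = set(beta)
    out = {}
    for U, c in coeffs.items():
        T = frozenset(U) & revealed        # revealed part of the monomial
        S = frozenset(U) - revealed        # part still supported on H
        w = c
        for e in T:
            w *= beta[e]
        out[S] = out.get(S, 0.0) + w       # contributes to hat f_beta(S)
    return out
\end{verbatim}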
\begin{proof}[Proof Of Claim \ref{GoodEvent}]
Recall from Section \ref{restrictions subsection} that
$$\widehat{\k_\beta}(e)=\sum_{T\subset H^c} \hat \k(e\cup T)\chi_{T}(\beta)$$
So $\widehat{\k_\beta}(e):\{0,1\}^{H^c}\to\mathbb{R}$ is a polynomial (in the functions $\chi_e$), and we can begin by estimating its coefficients.
First we see that
$$\E[\widehat{\k_\beta}(e)] =\widehat{\widehat{\k}_\beta(e)}(\varnothing)=\hat{\k}(e)$$
Also, for any $T\subset H^c$ we know that $\hat{\k}(e\cup T)\neq 0$ only if $|\supp(e\cup T)|\le r$. So we can compute:
\begin{align*}
Var_\beta( \widehat{\k_\beta}(e))&=\sum_{\substack{T\subset H^c\\T\neq \varnothing}} \hat{\k}(e\cup T)^2=\sum_{i=3}^r \sum_{\substack{T\subset H^c\\|\supp(T\cup e)|=i}} \hat \k(e\cup T)^2\\
&\le \sum_{i=3}^r\sum_{|\supp(T\cup e)|=i} \hat \k(e\cup T)^2\le \sum_{i=3}^r {n-2 \choose i-2}O(n^{2-2i})=O\left(\frac{1}{n^3}\right)
\end{align*}
Since $\widehat{\k_\beta}(e)$ has degree less than ${r\choose 2}$, an application of Theorem \ref{concentration hypercontractivity} gives us that for any $e\in H$
$$\Pr\left[\left|\widehat{\k_\beta}(e)-\hat \k(e)\right|\ge \frac{1}{n^{1.4}}\right]<\exp\left(-\Omega\left( n^{\frac{0.4}{r^2}}\right)\right)$$
Applying a union bound over all edges in $H$ completes the proof.
\end{proof}
\begin{proof}[Proof of Claim \ref{goodtimes}]
Again we use the decomposition
$$\widehat{\k_\beta}(S)=\sum_{T\subset H^c} \hat \k(S\cup T)\chi_{T}(\beta)$$
and note that
$$\E[\widehat{\k_\beta}(S)] =\widehat{\widehat{\k}_\beta(S)}(\varnothing)=\hat{\k}(S)$$
Let $|\supp(S)|=s$. For any $T\subset H^c$ we know that $\hat{\k}(S\cup T)\neq 0$ if and only if $|\supp(S\cup T)|\le r$. There are at most ${n-s\choose \ell-s}2^{{\ell\choose 2}}\le 2^{r^2}n^{\ell-s}$ choices of $T$ such that $|\supp(S\cup T)|=\ell$. And further for each of these choices we know that $\hat \k(S\cup T)= \Theta(n^{1-\ell})$.
Define the helper function
$$g:=\sum_{\substack{T\subset H^c\\|\supp(S\cup T)|>s}} \hat \k(S\cup T)\chi_T(\beta)$$
We can compute that
\begin{align*}
Var(g)&\le \sum_{\ell=s+1}^r \sum_{|\supp(S\cup T)|=\ell} \left(\hat{\k}(S\cup T)\right)^2 \le \sum_{\ell=s+1}^{r} 2^{r^2}n^{\ell-s} \Theta(n^{2-2\ell})\\
&\le O(n^{2-2s-1})
\end{align*}
Further we can see that $g$ is a polynomial of degree at most ${r\choose2}$, and so by Theorem \ref{concentration hypercontractivity}
\begin{align*}
\Pr\left[|g|\ge n^{1-s+\frac{1}{4}}\right]\le \exp\left[-\Omega\left(n^{1/r^2}\right)\right]
\end{align*}
If $|g|<n^{1-s+1/4}$ then we can conclude that
\begin{align*}
\widehat{\k_\beta}(S)=\sum_{|\supp(S\cup T)|=s} \hat \k(S\cup T)\chi_T(\beta)+g(\beta)\le 2^{{s\choose 2}} \Theta(n^{1-s})+n^{1-s+\frac14}=O(n^{1-s})
\end{align*}
So for any $S\subset H$ we find that
$|\widehat{\k_\beta}(S)|\le O(n^{1-s})$ with probability at least $1-\exp(-\Omega(n^{\frac{2}{r^2}}))$. Taking a union bound over all such $S$ finishes the proof.
\end{proof}
\begin{proof}[Proof of Claim \ref{goodcase}]
Fix $\alpha=\frac{1}{t}$ and assume that $\beta\in A\cap B$. Let $X$ and $Y$ be
\begin{align*}
X:=\sum_{e\in {[n]\choose 2}} \widehat{\k_\beta}(e) \chi_e&&Y:=\sum_{|S|\ge 2} \widehat{\k_\beta}(S) \chi_S
\end{align*}
then $\k_\beta=X+Y$, where $X$ is an independent sum, and $Y$ is small. To apply Theorem \ref{mainchf} we set $\ell=99$ and compute the relevant parameters to be
\begin{enumerate}
\item $T:= \sum_{e\in H} \widehat{\k_\beta}(e)^2\ge \sum_{e\in H} \hat{\k}(e)^2/4\ge \alpha{n\choose 2} \frac{1}{4n^2}\ge \frac{1}{t}$ where the last inequality uses the fact that $n^\tau\le t\le n^{\frac34}$.
\item $\displaystyle \eta^2=\|Y\|_2^2= \sum_{\substack{S\subset H\\|S|\ge 2}}\widehat{\k_\beta}(S)^2\le \sum_{\substack{S\subset H\\|S|\ge 2}}O(n^{2-2|S|})=O(\alpha^2n^{-1})=O\left(\frac{1}{nt^2}\right)$
\item $\hat{\|}Y\hat{\|}_1=\sum_{S\subset H} |\widehat{\k_\beta}(S)|=O(n)$.
\item $\delta=\max_{e}(\widehat{\k_\beta}(e))=3\widehat{\k}(e)/2\le \frac{3}{n}$
\item \begin{align*}
\epsilon&=\exp\left(-\frac{t^2[T-\delta {r\choose 2}\ell]}{\pi^2}\right)=\exp\left(-\Omega\left(t^2\left[t^{-1}-\frac{\ell{r\choose 2}}{n}\right]\right)\right)=\exp\left(-\Omega\left(t\right)\right)=\exp\left(-\Omega\left(n^{\tau}\right)\right)
\end{align*}
\end{enumerate}
Where the third step above used the fact that $t=o(n)$, and the last used that $t\ge n^\tau$. Given these settings of parameters we can plug into Theorem \ref{mainchf} and find that
$$\E[e^{it\k_\beta}]\le O\left(\epsilon n^\ell+ (n^{-{\frac12}})^{\ell+1}+ \exp\left(-\Omega(n^{1/r^2})\right)+n^{\frac{\ell+2}{4}} \exp\left(-\Omega(n^{1/r^2})\right) \right)=O(n^{-50})$$
\end{proof}
\section{Bound for $|t|\in [n^{\frac12+2\tau},n^{\frac{r}{2}-\frac{5}{12}-\tau}]$}
\subsection{High Level Overview}
In this section we discuss how to bound the characteristic function $\varphi_\k(t)$ for $t\in [n^{\frac12+2\tau},n^{\frac{r}{2}-\frac{5}{12}-\tau}]$. The central trick is the decoupling
tool from Section \ref{decoupling section} combined with Theorem \ref{mainchf}. The basic outline is that we will partition the edges of ${[n]\choose 2}$ into $k+1$ pieces, apply Lemma \ref{decoupling} to switch our attention from $\k$ to $\alpha(\k)$, and then further examine a random restriction
to edges on some subset $U_0\subset [n]$ of the vertices. The restricted polynomial $\alpha(\k)_\mathbf{Y}$ will have its Fourier mass concentrated on degree 1 terms. We will then use Theorem \ref{mainchf} to bound the characteristic function of $\alpha(\k)_\mathbf{Y}$.
\subsection{Notation for Section and Setup}\label{setup section}
Partition the vertex set $[n]$ into $[n]=\cup_{i=0}^{k} U_i$. Assume that for $i=1,2,\ldots,k$ all sets $U_i$ have a common size $u:=|U_i|$. $U_0$ will contain all the other vertices, and we will always insist that $|U_0|\ge \frac{n}{k+1}$. Once this partition has been made we can refer to a vertex in $U_i$ as having been colored with the color $i$. Thus, tautologically, $U_i$ is the set of all vertices colored $i$.
We partition our edge variables into $k+1$ classes $B_0,B_1,\ldots, B_k$ by saying an edge $e=(u,v)$ is in $B_i$ if the largest color among the colors of its endpoints is color $i$. Equivalently, if $e$ is an edge between vertices in $U_i$ and $U_j$ respectively, then $e\in B_{\max(i,j)}$.
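This partition is easy to generate explicitly; the following Python sketch (ours, illustrative only; the example vertex blocks are arbitrary) builds the classes $B_0,\ldots,B_k$ from a given vertex partition.
\begin{verbatim}
from itertools import combinations

def edge_partition(n, blocks):
    """blocks = [U_0, U_1, ..., U_k]: disjoint vertex sets covering range(n)."""
    colour = {v: i for i, U in enumerate(blocks) for v in U}
    B = [set() for _ in blocks]
    for u, v in combinations(range(n), 2):
        B[max(colour[u], colour[v])].add((u, v))   # largest endpoint colour wins
    return B

# n = 6 with U_0 = {0,1,2,3}, U_1 = {4}, U_2 = {5}:
# B_0 = edges inside U_0, B_1 = edges meeting 4 but not 5, B_2 = edges meeting 5.
print([sorted(b) for b in edge_partition(6, [{0, 1, 2, 3}, {4}, {5}])])
\end{verbatim}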
We define the $\alpha$ operator as per Section \ref{alpha def section} with respect to this partition.
For $i=1,2,\ldots,k$ let $B_i^0$ and $B_i^1$ denote two separate copies of the edges in $B_i$,
and let $Y_i^0$ and $Y_i^1$ denote independent identically drawn $p$-biased edge sets from $B_i^0$ and $B_i^1$ respectively. Meanwhile, let $X$ denote the edge variables in $B_0={U_0\choose 2}$. Then by Lemma \ref{decoupling}
\begin{equation}\label{grestrictioneq}
|\varphi_\k(t)|^{2^k}\le \E_{\mathbf{Y}}\left| \E_X e^{it\sum_{\textbf{v}\in \{0,1\}^k}(-1)^{|v|} \k(X,Y^{v})}\right|=\E_\mathbf{Y}\left|\E_X[ e^{it\alpha( \k)_\mathbf{Y}(X)}]\right|
\end{equation}
We recall from Section \ref{decoupling section} that $\alpha(\k)$ is a function in the variable set $X,Y_1^0,Y_1^1,\ldots,Y_k^0,Y_k^1$. However, the expectation we wish to bound above is only in terms of the variables in $X$. We define our restricted functions, as per the notation in Section \ref{restrictions subsection}, to be
\begin{align}
g(X)&:=\alpha(\k)_\mathbf{Y}(X):=\alpha(\k)(X,\mathbf{Y})=\sum_{T\subset B_0\cup \mathcal{B}}\hat{\k}(T)\alpha(\chi_T)(X,\mathbf{Y})\nonumber\\
g_{S}(\mathbf{Y})&:=g_{S}:=\widehat{\alpha(\k)_\mathbf{Y}}(S)=\sum_{\substack{T\subset \mathcal{B}\\T~\mbox{\scriptsize{rainbow}}}} \widehat{\k}(S\cup T) \alpha(\chi_T)(\mathbf{Y})
\label{getransformeq}
\end{align}
Where in the last line $S\subset B_0$. The rainbow condition in the last sum is not technically necessary, but is there to prune out the nonrainbow sets, which have a Fourier coefficient of 0.
Thus, by equation \ref{grestrictioneq}, our goal for the rest of this section is to show that $\varphi_g(t):=\E_X[e^{itg}]$ is small with high probability over choice of restriction $\mathbf{Y}$.
To do this we will split the random variable $g$ into two pieces $h,d:\{0,1\}^{B_0}\to \mathbb{R}$ where
$$h(X)=\sum_{e\in B_0}g_{e}(\mathbf{Y})\chi_e(X)=g^{=1} \qquad\qquad d(X)=\sum_{|S|\ge 2} g_{S}(\mathbf{Y})\chi_S(X)=g^{>1}$$
For $h$, we hope to show that its characteristic function is small, and for $d$ we will be interested in bounding $\hat{\|}d\hat{\|}_1$ and $\|d\|_2$ with an eye towards applying Theorem \ref{mainchf}.
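In the coefficient-dictionary picture of the earlier sketches, this split of $g$ into $h=g^{=1}$ and $d=g^{>1}$ is simply a filter on monomial size (ours, illustration only):
\begin{verbatim}
def split_degree_one(coeffs):
    """Split a chi-basis polynomial (dict frozenset -> coeff) into h = g^{=1}, d = g^{>1}."""
    h = {S: c for S, c in coeffs.items() if len(S) == 1}
    d = {S: c for S, c in coeffs.items() if len(S) >= 2}
    return h, d
\end{verbatim}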
\subsection{Main proofs of the section modulo lemmas}
The main result of this section is the following characteristic function bound:
\begin{lem}\label{puttinittogether}
Assume $\left(\frac nu \right)^{k/2}n^{k/2+\tau}\le t\le \left(\frac n u \right)^{k/2}n^{k/2+1/3-\tau}$ and $u=\Omega(n^{2\tau})$.
Then for any fixed natural number $\ell$
$$\E_{Y}|\varphi_g(t)|=O\left(n^{-\frac{\ell}{6}}\right)$$
\end{lem}
Before proving the lemma, we first state some claims, to be proven afterward, about the behavior of $g,~h,$ and $d$.
\begin{claim}\label{dsmallcor}
With probability $\ge 1-\exp(-\Omega(n^{\tau/r^2}))$ over the sampling of $\mathbf{Y}$ we have that
\begin{align*}
\|d\|_2^2&=O\left(\left(\frac{u}{n}\right)^kn^{-k-1+2\tau}\right)\\
\hat{\|}d\hat{\|}_1&=O\left(\left(\frac{u}{n}\right)^{\frac{k}{2}}n^{-k/2+1+\tau}\right)
\end{align*}
\end{claim}
\begin{claim}\label{GeCor}
Assume that $u\ge n^{2\tau}$, $\tau<\frac12$, and $k\le r-2$. Then there exists a constant $C$ such that for sufficiently large $n$
$$\Pr\left[\left|\sum_{e\in B_0} g_e^2(\mathbf{Y})\right|\le C\left(\frac{u}{n}\right)^kn^{-k}\right]\le \exp\left(-\Omega(n^{\tau/2r^2})\right)$$
\end{claim}
\begin{claim}\label{geconcentration}
For any $e\in B_0$ there exists a $C>0$ such that for all sufficiently large $n$
$$\Pr\left[\left|g_e(\mathbf{Y})\right|\ge Cn^{-\frac k2-1+\tau}\left(\frac{u}{n}\right)^{\frac{k}{2}}\right]\le \exp\left(-\Omega\left(n^{\tau/r^2}\right)\right)$$
\end{claim}
With these ingredients we can prove Lemma \ref{puttinittogether}.
\begin{proof}
Let $A$ be the event that $\|d\|_2^2$ and $\hat\|d\hat\|_1$ are small as promised by Claim \ref{dsmallcor}, that $g_e$ is small for all $e\in B_0$ as promised by Claim \ref{geconcentration}, and $\sum_{e\in B_0}g_e^2$ is large as promised by Claim \ref{GeCor}.
By those results we know that $\Pr(A)\ge 1-\exp(-\Omega(n^{\tau/2r^2}))$.
We apply Theorem \ref{mainchf} to $g=h+d$.
Conditioning on the event $A$ we can estimate the relevant parameters of that theorem to be
\begin{enumerate}
\item $T:= \sum_{e\in {U_0\choose 2}} g_e^2\ge \Omega\left(\left(\frac u n \right)^kn^{-k}\right)$
\item $\eta^2=\|d\|_2^2\le O\left(\left(\frac u n \right)^kn^{-k-1+2\tau}\right)$
\item $\hat{\|}d\hat{\|}_1=O\left(\left(\frac u n\right)^{\frac k 2}n^{-k/2+1+\tau}\right)$
\item $\delta^2=\max_{e}(g_e^2)=O\left(\left(\frac u n \right)^kn^{-k-2+2\tau}\right)$
\item
\begin{align*}
\epsilon&=\exp\left(-\frac{2t^2}{\pi^2}\left[T-\delta {r\choose 2}\ell\right]\right)=\exp\left(-\Omega\left[\left(\frac u n \right)^kt^2\left(n^{-k}-\ell{r\choose 2}n^{-k-2+2\tau}\right)\right]\right)\\
&=\exp\left(-\Omega\left(\left(\frac u n \right)^kt^2n^{-k}\right)\right)
\end{align*}
\end{enumerate}
So for any fixed $\ell$ such that $t\le (2e)^{\ell/2}\lambda^{-\ell/2}\eta^{-1}=O((u/n)^{-k}n^{k+1-2\tau})$ Theorem \ref{mainchf} tells us
\begin{align*}
|\varphi_g(t)-\varphi_{h}(t)|&\le \ell \epsilon\left(1+\left(t\hat{\|}d\hat{\|}_1\right)^\ell \right)+\frac{(t\eta)^{(\ell+1)}}{(\ell+1)!} \ell^{\frac{(\ell+1){r\choose 2}}{2}}\lambda^{{r\choose 2}\left(\frac{1-\ell}{2}\right)}
+\lambda^{{r\choose 2}}
\exp\left[-\frac{{r\choose 2}\lambda}{2e}|t\eta|^{-\frac{2}{{r\choose2}}}\right]\\
&\qquad+\left|t\eta\right|^{(\ell+1)/2}\lambda^{\frac{3{r\choose 2}-\ell}{4}}
\left(\ell+1\right)\exp\left[-\frac{{r\choose 2}\lambda}{4e}\left|t\eta\right|^{-\frac{2}{{r\choose 2}}}\right]\\
&=O\left(\left(\frac u n \right)^{k\ell/2}t^\ell n^{-\ell k/2+\ell+\ell \epsilon}\exp\left(-\Omega\left(\left(\frac u n \right)^kt^2n^{-k}\right)\right)+(t\eta)^{\ell+1}\right.\\
&\left.\qquad+\exp\left(-\Omega\left[(t\eta)^{-\frac{2}{{r\choose 2}}}\right]\right)+(t\eta)^{\frac{\ell+1}{2}}\exp\left(-\Omega\left[(t\eta)^{-\frac{2}{{r\choose 2}}}\right]\right)\right)
\end{align*}
Assuming that $t\ge n^{k+\tau}u^{-k/2}$
we find that the first term in the right hand side above is bounded above by $\exp\left(-\Omega(n^{2\tau})\right)$.
Additionally, assuming $t\le u^{-k/2}n^{k+\frac13-\tau}$ we have that $t\eta=O( n^{-1/6})$, and so the subsequent terms in the above expansion
can be bounded above by $O\left(n^{-\ell/6}+\exp(-n^{-1/2r^2})\right)$.
Therefore, whenever the event $A$ occurs, we have $|\varphi_g(t)-\varphi_{g^{=1}}(t)|\le O(n^{-\ell/6})$.
Combining the fact that $\Pr(A^c)\le\exp\left(-\Omega(n^{\tau/2r^2})\right)$
with the observation that $|\varphi_g(t)-\varphi_{h}(t)|\le 2$, it follows that
$$\E_{\mathbf{Y}}|\varphi_g(t)-\varphi_{h}(t)|\le O(n^{-\ell/6})+2\Pr(A^c)=O(n^{-\ell/6})$$
To finish the proof of the lemma, we just have to bound the characteristic function of $h$. But $h$ is a sum of independent $p$-biased Bernoulli random variables. So we can compute (again, conditioning on $A$), that
\begin{align*}
|\varphi_{h}(t)|&=\left|\E\left[e^{it\sum_{e\in B_0} g_e\chi_e}\right]\right|=\prod_{e\in B_0}\left|\E[e^{itg_e\chi_e}]\right|\le \exp\left(-\frac{4}{\pi^2}t^2\sum g_e^2\right)\le \exp(-\Omega(t^2T))\\
&\le \exp(-\Omega(n^{2\tau}))
\end{align*}
Where the first inequality uses Lemma \ref{bernoulli} combined with the assumption on the event $A$ (from Claim \ref{geconcentration}) that $|g_et|=o(1)$.
\end{proof}
We now apply this Lemma for appropriate choices of $k,\ell,$ and $u$ to bound $\varphi_\k(t)$ in a form suitable for use in the proof of our main
local limit theorem in Section \ref{mainsection}.
\begin{cor}\label{midchfbound fixed k}
Assume $\tau<\frac{1}{12}$. For any $1\le k\le r-2$, and $n^{\frac k2+2\tau}\le t\le n^{\frac k 2+\frac7{12}-2\tau}$ we have $|\varphi_{\k}(t)|\le O(n^{-r^2})$.
\end{cor}
\begin{proof}
We proceed in two cases. In the first, set $u=n/(k+1)$. Then Lemma \ref{puttinittogether} tells us that for some constants $C_1,C_2$ we have
whenever $ C_1n^{k/2+\tau}\le t\le C_2n^{k/2+1/3-\tau}$ then $\E|\varphi_{g}(t)|=O(n^{-\ell/6})$. Furthermore,
we know from Lemma \ref{decoupling} that $|\varphi_\k(t)|^{2^{k}}\le \E_{\mathbf{Y}}|\varphi_g(t)|$. So choosing $\ell=2^{k+5}r^2$ we find that $|\varphi_\k(t)|=O(n^{-r^2})$.
In the second case set $u=n^{1-1/2k}$. Lemma \ref{puttinittogether} along with the same choice of $\ell=2^{k+5}r^2$ will tell us that for $n^{\frac{k}{2}+\frac{1}{4}+\tau}\le t\le n^{\frac{k}{2}+7/12-\tau}$ we have $|\varphi_\k(t)|=O(n^{-r^2})$. So long as $\tau<\frac1{12}$ these intervals will overlap (at least in the limit).
\end{proof}
\begin{cor}\label{midchfbound}
If $\tau<\frac{1}{12}$, then for any $t\in [n^{\frac{1}{2}+2\tau}, n^{\frac{r}{2}-\frac{5}{12}-2\tau}]$ we have $|\varphi_{\k}(t)|\le O(n^{-r^2})$.
\end{cor}
\begin{proof}
This follows by taking the union of the bounds in the above corollary for $1\le k\le r-2$.
\end{proof}
\subsection{Proof of Claim \ref{GeCor}} \label{GeCor Section}
\subsubsection{Showing that $\sum g_e^2$ is large}
In this section, we will show that, with high probability over $\mathbf{Y}$, $\sum_{e\in B_0} g_e^2=\|h\|_2^2$ is large. To do this we first separate out the family
$\mathcal{F}_e$ of subsets of $\mathcal{B}$ which contain most of the Fourier weight of $g_e(\mathbf{Y})$.
\begin{define} \label{FeDef}
$$\mathcal{F}_e=\{S\subset \mathcal{B}\st |\supp(S-e)|=k,~S~\mbox{\scriptsize{rainbow}}\}$$
\end{define}
We then carve $\sum g_e^2$ into pieces as follows. Let
\begin{align*}
G_e(\mathbf{Y})&:=\sum_{S\in \mathcal{F}_e} \widehat{g_e}(S)\alpha(\chi_S)(\mathbf{Y})\\
H_e(\mathbf{Y})&:=\sum_{S\notin \mathcal{F}_e} \widehat{g_e}(S)\alpha(\chi_S)(\mathbf{Y})\\
Z(\mathbf{Y})&=\sum_{e\in B_0} G_e^2=\sum_{e} (g_e-H_e)^2
\end{align*}
So in particular $G_e+H_e=g_e$, and $G_e$ is the main term while $H_e$ is best thought of as an error term.
We now embark on proving the following estimates for $G_e$ and $H_e$ respectively
\begin{lem}\label{GHZ}
Assume that $1\le k\le r-2$. Then
\begin{align*}
\|G_e\|_2^2&=\Theta\left(n^{-k-2}\left(\frac{u}{n}\right)^k\right)\\
\|H_e\|_2^2&=O\left(n^{-k-3}\left(\frac{u}{n}\right)^k\right)\\
\|g_e\|_2^2&=\Theta\left(n^{-k-2}\left(\frac{u}{n}\right)^k\right)
\end{align*}
\end{lem}
\begin{proof}
The third claim follows immediately from the previous two. For $H_e$ we recall equation \ref{getransformeq} and compute.
\begin{align*}
2^{-k}\|H_e\|_2^2=2^{-k}\sum_{S\in \mathcal{F}_e^c} \widehat{g_e}(S)^2&=\sum_{t=k+3}^{r} \sum_{\substack{|V\cup e|=t}}\sum_{\substack{S\subset{V+e\choose 2}\\S~\mbox{\scriptsize{rainbow}}}} \hat{\k}(S\cup e)^2\le\sum_{t=k+3}^{r} \sum_{|V|=t}\sum_{\substack{S\subset{V+e\choose 2}\\S~\mbox{\scriptsize{rainbow}}}}
O\left(n^{-2t+2}\right)\\
&\le\sum_{t=k+3}^r u^kn^{t-k-2}2^{t\choose 2} O(n^{-2t+2})\le O\left(n^{-k-3}\left(\frac{u}{n}\right)^k\right)
\end{align*}
Where the second inequality follows from noting that there are at most $u^{k}n^{|\supp(S)|-k-2}$ ways to choose the support of a rainbow set of edges,
and then at most $2^{|\supp(S)|\choose 2}$ ways to pick edges with that support.
Meanwhile for $G_e$
\begin{align*}
2^{-k}\E[G_e(\mathbf{Y})^2]&=\sum_{S\in \mathcal{F}_e} \hat{\k}(S\cup e)^2= \sum_{|V|=k+2}\sum_{\substack{\supp(S\cup e)=V\\S~\mbox{\scriptsize{rainbow}}}} \hat{\k}(S\cup e)^2= \sum_{|V|=k+2}\sum_{\substack{\supp(S\cup e)=V\\S~\mbox{\scriptsize{rainbow}}}} C_Sn^{-2k-2}\\
&\ge \left(\prod_{i=1}^k |U_i|\right)C_Sn^{-2k-2}\\
&=\Theta\left(n^{-k-2}\left(\frac{u}{n}\right)^k\right)
\end{align*}
Where $C_S$ is some constant depending on $|S|$, which always lies in $[\lambda^{r^2},1]$ and can be read off of the Fourier expansion of $\k$ in Section \ref{KrProp}. Note that we used the assumption that $k+2\le r$ to ensure that the sums above were nonempty.
\end{proof}
Meanwhile, for each $e\in B_0$ we know that $|G_e-g_e|=|H_e|$, and so
$g_e^2\ge G_e^2-2|G_eH_e|$. But $H_e$ is relatively small, so by Cauchy-Schwarz
$$\|G_eH_e\|_2^2=\E[G_e^2H_e^2]\le \sqrt{\E[G_e^4]\E[H_e^4]}=\|G_e\|_4^2\|H_e\|_4^2$$
Since $\deg(G_e),\deg(H_e)\le {r\choose 2}$ Theorem \ref{moment hypercontractivity} tells us that
$$\|G_e\|_4^4\|H_e\|_4^4\le (3)^{2r^2}\lambda^{2r^2-4}\|H_e\|_2^4\|G_e\|_2^4=\Theta\left((u/n)^{4k}n^{-4k-10}\right)$$
This in turn implies that $\|G_eH_e\|_2=\Theta(u^{k}n^{-2k-2.5})$. So it follows from Theorem \ref{concentration hypercontractivity} that $|G_eH_e|\le u^{k}n^{-2k-2.5+\tau}$ with probability $1-\exp(-\Omega(n^{\tau/2r^2}))$. Therefore with high probability
$$\sum_e g_e^2\ge \sum_{e\in B_0} \left[G_e^2-O(u^{k}n^{-2k-2.5+\tau})\right]=Z-O(u^{k}n^{-2k-1/2+\tau})$$
We restate this as a lemma.
\begin{lem}\label{ZtoGBound}
With probability at least $1-\exp(-\Omega(n^{\tau/2r^2}))$ over the choice of $\mathbf{Y}$, we have that
$$\sum_e g_e^2\ge \sum_{e\in B_0} \left[G_e^2-O(u^{k}n^{-2k-2.5+\tau})\right]=Z-O(u^{k}n^{-2k-1/2+\tau})$$
\end{lem}
To finish our argument, we require the fact that $Z$ is large with high probability. This will follow immediately from observing that
$Z$ is a fixed degree polynomial and computing the variance of $Z$. Unfortunately, computing this variance is cumbersome, and so
the proof of the following lemma is in Appendix \ref{appendix z}.
\begin{lem}\label{GeBound}
Let $Z=\sum_{e\in B_0} G_e^2$. Assume that $u\ge n^{2\tau}$ and $1\le k\le r-2$. Then there exists a constant $C$ such that for sufficiently large $n$, $\Pr\left[|Z|\le C\left(\frac{u}{n}\right)^kn^{-k}\right]\le \exp\left(-\Omega(n^{\tau/r^2})\right)$
\end{lem}
We are now in a position to prove Claim \ref{GeCor}.
\begin{repclaim}{GeCor}
Assume that $u\ge n^{2\tau}$, $\tau<\frac12$, and $k\le r-2$. Then there exists a constant $C$ such that for sufficiently large $n$
$$\Pr\left[\left|\sum_{e\in B_0} g_e^2(\mathbf{Y})\right|\le C\left(\frac{u}{n}\right)^kn^{-k}\right]\le \exp\left(-\Omega(n^{\tau/2r^2})\right)$$
\end{repclaim}
\begin{proof}
Lemma \ref{GeBound} above implies that for some $C_1>0$ we have $Z\ge C_1u^kn^{-2k}$ with probability $1-\exp\left(-\Omega(n^{\tau/r^2})\right)$.
Meanwhile Lemma \ref{ZtoGBound} implies that $\sum_{e\in B_0} g_e^2\ge Z-O(u^kn^{-2k-\frac{1}{2}+\tau})$ with probability $1-\exp\left(-\Omega(n^{\tau/2r^2})\right)$. Combining these inequalities yields the claim.
\end{proof}
\subsection{Proof of Claim \ref{geconcentration}}
\begin{repclaim}{geconcentration}
For any $e\in B_0$ there exists a $C>0$ such that for all sufficiently large $n$
$$\Pr\left[\left|g_e(\mathbf{Y})\right|\ge Cn^{-\frac k2-1+\tau}\left(\frac{u}{n}\right)^{\frac{k}{2}}\right]\le \exp\left(-\Omega\left(n^{\frac{\tau}{r^2}}\right)\right)$$
\end{repclaim}
\begin{proof}
We computed in Lemma \ref{GHZ} that $\|g_e\|_2^2=O(n^{-k-2}(u/n)^k)$. We also know that $\E[g_e(\mathbf{Y})]=\hat{g_e}(\varnothing)=0$. Since $g_e$ is a polynomial in $\mathbf{Y}$ of degree at most ${r\choose 2}$, the
result then follows from Theorem \ref{concentration hypercontractivity}.
\end{proof}
\subsection{Proof of Claim \ref{dsmallcor}}
First, we compute a bound on the Fourier coefficients $g_S(\mathbf{Y})=\widehat{\alpha(\k)_\mathbf{Y}}(S)$.
\begin{lem}\label{d2union}
For some $C>0$ and for all sufficiently large $n$
$$g_{S}(\mathbf{Y})^2\le \left(\frac{u}{n}\right)^kn^{-k-2|\supp(S)|+2+2\tau}$$
holds for \textit{all} $S\subset B_0$ with probability $1-\exp(-\Theta(n^{\tau/r^2}))$.
\end{lem}
\begin{proof}
For any $S$, we note that $\E_{\mathbf{Y}}[g_S(\mathbf{Y})]=0$. If $|S|>r-k$ then $g_S(\mathbf{Y})$ is identically 0. If $|S|\le r-k$ then we compute this quantity to have variance
\begin{align*}
2^{-k}\E[g_{S}(\mathbf{Y})^2]&=\sum_{\substack{T\subset \mathcal{B}\\T~\mbox{\scriptsize{rainbow}}}} \hat{\k}(S\cup T)^2=\sum_{t=k+|\supp(S)|}^{r} \sum_{|V|=t}\sum_{\substack{\supp(S\cup T)=V\\T~\mbox{\scriptsize{rainbow}}}} \hat{\k}(S\cup T)^2\\
&\le\sum_{t=k+|\supp(S)|}^{r} \sum_{|V|=t}\sum_{\substack{\supp(S\cup T)=V\\T~\mbox{\scriptsize{rainbow}}}} O\left(n^{-2t+2}\right)\\
&\le\sum_{t=k+|\supp(S)|}^{r} u^kn^{t-k-|\supp(S)|}2^{t\choose 2}O\left(n^{-2t+2}\right)=O\left(\left(\frac{u}{n}\right)^kn^{-k-2|\supp(S)|+2}\right)
\end{align*}
We also know that $g_{S}(\mathbf{Y})$ is a polynomial of degree at most ${r\choose 2}$ and $\E[g_{S}(\mathbf{Y})]=0$. By Theorem \ref{concentration hypercontractivity}, for $n$ sufficiently large we have that $\Pr[|g_{S}|\ge \left(\frac{u}{n}\right)^{k/2}n^{-k/2-|\supp(S)|+1+\tau}]\le \exp(-\Omega(n^{\tau/r^2}))$. Since there are at most $O(n^{r^2})$ possible monomials $S$
for which $g_S(\mathbf{Y})\not \equiv 0 $ a union bound finishes the proof of the lemma.
\end{proof}
\begin{repclaim}{dsmallcor}
With probability $1-\exp(-\Omega(n^{\tau/r^2}))$ we have that both
\begin{align*}
\|d\|_2^2&=O\left(\left(\frac{u}{n}\right)^kn^{-k-1+2\tau}\right)\\
\hat{\|}d\hat{\|}_1&=O\left(\left(\frac{u}{n}\right)^{\frac{k}{2}}n^{-k/2+1+\tau}\right)
\end{align*}
\end{repclaim}
\begin{proof}
Both of these statements follow from a computation using the bound given in Lemma \ref{d2union}. Throughout we condition on the assumption that all of the Fourier coefficients of $d$ are as small as promised by Lemma \ref{d2union}. First we bound the 2-norm of $d$ by
\begin{align*}
\|d\|_2^2&=\sum_{\substack{S\subset B_0\\|S|\ge 2}} g_S(\mathbf{Y})^2=\sum_{t=3}^{r-k}\sum_{|\supp(S)|=t} g_S(\mathbf{Y})^2\le\left(\frac{u}{n}\right)^k\sum_{t=3}^{r-k}\sum_{|\supp(S)|=t} n^{-k-2t+2+2\tau}\\
&\le\left(\frac{u}{n}\right)^k\sum_{t=3}^{r-k}{n\choose t}2^tO\left(n^{-k-2t+2+2\tau}\right)=O\left(\left(\frac{u}{n}\right)^kn^{-k-1+2\tau}\right)
\end{align*}
Then we bound the spectral 1 norm by
\begin{align*}
\hat{\|}d\hat{\|}_1&=\sum_{S\subset B_0}| \hat{d}(S)|=\sum_{t=3}^{r-k} \sum_{|\supp(S)|=t} O\left(\left(\frac{u}{n}\right)^{k/2}n^{-k/2-t+1+\tau}\right)=O\left(\left(\frac{u}{n}\right)^{k/2}n^{-k/2+1+\tau}\right)
\end{align*}
\end{proof}
\section{Bound for $|t|\in[ n^{\frac{r}{2}-\frac16-2\tau},~n^{r-1-r\tau}]$}
Here we repeat the same setup and notation from Section \ref{setup section}, but now we focus exclusively on the special case when $k=r-2$.
The function $g=\alpha(\k)_\mathbf{Y}$ exhibits some different behavior in this case.
First, let's look at what happens when $k=r-1$. Then any rainbow set $T\subset \mathcal{B}$ contains vertices from each of $U_1,U_2,\ldots,U_k$, and so in particular has at least $r-1$ vertices not in $U_0$. Therefore for any nonempty $S\subset {U_0\choose 2}$ we have $|\supp(S\cup T)|\ge r+1$. But recall that $\k$ is supported on sets of edges spanning at most $r$ vertices. Therefore $\alpha(\k)$ does not depend on the edges in $B_0$ at all, and so $g(X)$ is a constant.
But if $k=r-2$, then for any rainbow $T\subset \mathcal{B}$ and $S\subset{U_0\choose 2}$ if $|S|>1$ we have
$|\supp(S\cup T)|\ge k+3=r+1$. Therefore $\hat{g}(S)=0$ unless $S$ is a single edge. In particular for any edge $e\in {U_0\choose 2}$, if $T$ is rainbow and $|\supp(e\cup T)|\le r$ then it follows that $|\supp(T)-e|=k=r-2$. Recalling Definition \ref{FeDef} that
$\mathcal{F}_e=\{S\subset \mathcal{B}\st |\supp(S)-e|=k,~S\mbox{ rainbow}\}$, we can restate our observation as the following lemma.
\begin{lem}
Assume $k=r-2$. Then we have $d\equiv 0$. Further, for $e\in {U_0\choose 2}$ and $T\subset \mathcal{B},~T\notin \mathcal{F}_e$, it follows that $\widehat{g_{e}}(T)\equiv 0$. That is $g_e=\sum_{T\in \mathcal{F}_e} \widehat{g_{e}}(T)\chi_T(\mathbf{Y})$.
\end{lem}
This implies that for any choice of $\mathbf{Y}\in \{0,1\}^{\mathcal{B}}$ we have that $\alpha(\k)_\mathbf{Y}=g$ is a degree 1 polynomial in independent Bernoulli random variables.
Because of this, we can bound the characteristic function of $g$ more directly. Additionally, our analysis of $\sum g_e^2=\|g\|_2^2$ also becomes easier.
\begin{lem}\label{lowhighchfbound}
For $t\in [ (r-1)^{r/2-1}n^{r/2-1+\tau/2},~n^{r-2-r\tau}]$, we have $|\E_{\mathbf{Y}}[\varphi_g(t)]|=\exp(-\Omega(n^{\tau/2r^2}))$.
\end{lem}
\begin{proof}
Set $u=n^{2+\tau/(r-2)}t^{-2/(r-2)}$, and therefore $(u/n)^{r-2}=n^{r-2+\tau}/t^2$. First, we check that this is a feasible choice of $u$ for Claims \ref{GeCor} and \ref{geconcentration}, that is $n^{2\tau}\le u\le n/(r-1)$.
For the lower bound, we find the requirement
$$n^{2+\tau/(r-2)}t^{-2/(r-2)}\ge n^{2\tau}\implies t \le n^{r-2-r\tau}$$
For the upper bound we need
$$n^{2+\tau/(r-2)}t^{-2/(r-2)}\le \frac{n}{r-1}\implies t \ge (r-1)^{r/2-1}n^{r/2-1+\tau/2}$$
And these are exactly the hypotheses on $t$.
Let $A$ be the event that $\sum_{e\in B_0} g_e^2\ge (u/n)^{r-2}n^{-r+2}=n^{\tau}/t^2$, and $g_e^2\le (u/n)^{r-2}n^{-r+\tau}=n^{-2+2\tau}/t^2$ for all $e\in B_0$. By Claims \ref{GeCor} and \ref{geconcentration} we have $\Pr(A^c)\le \exp(-\Omega(n^{\tau/2r^2}))$.
Given that event $A$ occurs we have that $|g_e t|\le n^{-1+\tau}<\sqrt{p(1-p)}\pi$. Therefore we can use Lemma \ref{bernoulli} to bound
$$|\varphi_g(t)|=|\E[e^{it\sum g_e\chi_e}]|=\prod_{e\in B_0} \left|\E[e^{itg_e\chi_e}]\right|\le e^{-\frac{2}{\pi^2}t^2\sum g_e^2}\le \exp(-\Omega(n^{\tau/r^2}))$$
To complete the proof of the lemma, we combine this inequality with the bound $\Pr(A^c)\le \exp(-\Omega(n^{\tau/2r^2}))$ and the fact that $|e^{ix}|\le 1$.
\end{proof}
Setting $u$ slightly smaller yields a result more suited to slightly larger values of $t$.
\begin{lem} \label{highmidchfbound}
For $t\in [ (r-1)^{r/2-1}n^{r/2-3\tau/2},~n^{r-1-2(r-2)\tau}]$, we have $|\E_{\mathbf{Y}}[\varphi_g(t)]|=\exp(-\Omega(n^{\tau/2r^2}))$.
\end{lem}
The proof is more or less the same as the above, but we include it here as subtle errors would be easy to make.
\begin{proof}
Set $u=n^{2+\frac{2-3\tau}{r-2}}t^{-2/(r-2)}$, and therefore $(u/n)^{r-2}=n^{r-3\tau}/t^2$. First, we check that this is a feasible choice of $u$ for Claims \ref{GeCor} and \ref{geconcentration}, that is $n^{2\tau}\le u\le n/(r-1)$.
For the lower bound, we find the requirement
$$n^{2+\frac{2-3\tau}{r-2}}t^{-2/(r-2)}\ge n^{2\tau}\implies t \le n^{r-1-2(r-2)\tau}$$
For the upper bound we need
$$n^{2+\frac{2-3\tau}{r-2}}t^{-2/(r-2)}\le \frac{n}{r-1}\implies t \ge (r-1)^{r/2-1}n^{r/2-3\tau/2}$$
And these are exactly the hypotheses on $t$.
Let $A$ be the event that $\sum_{e\in B_0} g_e^2\ge (u/n)^{r-2}n^{-r+2}=n^{2-3\tau}/t^2$, and $g_e^2\le (u/n)^{r-2}n^{-r+2\tau}=n^{-\tau}/t^2$ for all $e\in B_0$. By Claims \ref{GeCor} and \ref{geconcentration} respectively we have $\Pr(A^c)\le \exp(-n^{\tau/2r^2})$.
Given that event $A$ occurs we have that, $|g_e t|\le n^{-\tau/2}<\sqrt{p(1-p)}\pi$. Therefore we can use Lemma \ref{bernoulli} to bound (again
conditional on the event $A$ occuring)
$$|\varphi_g(t)|=|\E[e^{it\sum g_e\chi_e}]|=\prod_{e\in B_0} \left|\E[e^{itg_e\chi_e}]\right|\le e^{-\frac{4}{\pi^2}t^2\sum g_e^2}\le \exp(-\Omega(n^{2-3\tau}))$$
To complete the proof of the lemma, just use the bound $\Pr(A^c)\le \exp(-\Omega(n^{\tau/2r^2}))$ and the fact that $|e^{ix}|\le 1$.
\end{proof}
Combining Lemmas \ref{lowhighchfbound} and \ref{highmidchfbound} with Lemma \ref{decoupling} yields the following corollary.
\begin{cor}\label{midhighchfbound}
Assume $0<\tau<\frac1{2r}$. For $t\in [n^{r/2-1+\tau},~n^{r-1-2(r-2)\tau}]$ we have $|\varphi_\k(t)|\le \exp(-\Omega(n^{\tau/2r^2}))$.
\end{cor}
\section{Bound for large $t$} \label{larget section}
For large $t$, an even more extreme application of Lemma \ref{decoupling} is needed. To do this, we take the following partition of the edge random variables.
Partition the vertex set $[n]$ into $\lfloor \frac{n}{r}\rfloor$ $r$-cliques. Let $\mathcal{F}$ be the family of cliques in this partition. Now let $\tilde{B_0},B_1,\ldots, B_{{r\choose 2}-1}$ be any partition of the edges
of these cliques such that each $B_i$ contains \textit{exactly one} edge from each clique in $\mathcal{F}$. Now set $B_0$ to be the union of $\tilde{B_0}$ along with all
edges of $K_n$ not already partitioned into a $B_i$ (i.e., edges connecting the different cliques in $\mathcal{F}$ as well as the leftover edges from vertices not put into cliques). See Figure \ref{larget figure} for an example of this partition.
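The following Python sketch (ours, illustrative only; the vertex labelling is arbitrary) constructs one such partition explicitly: it packs the first $r\lfloor n/r\rfloor$ vertices into disjoint $r$-cliques, distributes each clique's edges among the ${r\choose 2}$ classes, and places every remaining edge into $B_0$.
\begin{verbatim}
from itertools import combinations
from math import comb

def large_t_partition(n, r):
    m = comb(r, 2)
    cliques = [list(range(i * r, (i + 1) * r)) for i in range(n // r)]
    classes = [set() for _ in range(m)]      # classes[0] plays the role of tilde(B_0)
    clique_edges = set()
    for C in cliques:
        for j, e in enumerate(combinations(C, 2)):   # one edge per class per clique
            classes[j].add(e)
            clique_edges.add(e)
    # B_0 = tilde(B_0) together with every edge of K_n not inside one of the cliques
    B0 = set(combinations(range(n), 2)) - (clique_edges - classes[0])
    return B0, classes[1:]

B0, Bs = large_t_partition(10, 3)
print(len(B0), [sorted(b) for b in Bs])
\end{verbatim}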
In this section, rather than using the orthogonal character functions, it will be more convenient to use indicator vectors $x_e\in \{0,1\}$
instead. Additionally for a set of edges $S$, we will use $x^S$ to denote the monomial $\prod_{e\in S} x_e$.
Let $X\in \{0,1\}^{B_0}$ and $Y_i^0,~Y_i^1\in \{0,1\}^{B_i}$ be independent as in Section \ref{decoupling section}.
As before, for a given setting of $\mathbf{Y}\in \{0,1\}^\mathcal{B}$ we define $g(X)$ by setting
\begin{align*}
g(X):=\alpha(\k)_\mathbf{Y}(X)=\alpha(\k)(X,\mathbf{Y})
\end{align*}
Recall that $\alpha$ is a linear operator, and that furthermore we have $\alpha(x^S)=0$ unless $S$ is a rainbow set of edges. However, by construction we know that the only rainbow sets $S$ are exactly the cliques $S\in \mathcal{F}$. Therefore we have
$$\alpha(\k)=\sum_{S\equiv K_r} \alpha(x^S)=\sum_{S\in \mathcal{F}} \alpha(x^S)$$
For each $S\in \mathcal{F}$, we have $S=e\cup S'$ where $e\in B_0$ and $S'\subset \cup_{i\ge 1} B_i$. So, for any fixed $S\in \mathcal{F}$, if we sample $\mathbf{Y}$ at random, then we have that $Y_{e'}^0=1$ and $Y_{e'}^1=0$ for all edges $e'\in S'$ with probability at least $\lambda^{2{r\choose 2}^2}$.\footnote{recall $\lambda=\min(p,1-p)$} Label this event $A_S$. If $A_S$ occurs then it follows that
$$\alpha(x^S)(X,Y)=x_e\sum_{v\in \{0,1\}^k} (-1)^{|v|} x^{S'}=x_e$$
as the only nonzero term in the above sum is when $v=(0,0,\ldots,0)$.
Given that $A_S$ occurs and $t\le \pi \sigma$, Lemma \ref{bernoulli} gives
$$\left|\E_{x_e} e^{it\alpha(x^S)/\sigma}\right|=\left|\E_{x_e} e^{itx_e/\sigma}\right|\le 1-\frac{8p(1-p)t^2}{\pi^2\sigma^2}$$
Let $z(\mathbf{Y})$ denote the number of cliques $S\in \mathcal{F}$ such that $A_S$ occurs. Using the fact that $g=\sum_{S\in\mathcal{F}} \alpha(x^S)$ and that
the random variables $\alpha(x^S)$ are independent, we may compute
$$|\E[e^{itg}]|=\prod_{S\in \mathcal{F}}\left|\E[e^{it\alpha(x^S)}]\right|\le \prod_{\substack{S\in \mathcal{F}\\ A_S\mbox{\scriptsize{ occurs}}}} \left(1-\frac{8p(1-p)t^2}{\pi^2\sigma^2}
\right)=\exp\left(-\frac{8p(1-p)t^2z(Y)}{\pi^2\sigma^2}\right)$$
Since each of the events $A_S$ are independent and occur with probability $\ge\lambda^{2{r\choose 2}^2}$ it follows from Chernoff bounds that $z(Y)\ge \lambda^{2{r\choose 2}^2}n/2r$ with probability $\ge 1-\exp(-\Omega(n))$.
So we find that
$$\E_\mathbf{Y} \left|\E_X[e^{it\alpha(\k)}]\right|\le \Pr(A) \exp\left(-\frac{8p(1-p)t^2}{\pi^2\sigma^2}\cdot\frac{\lambda^{2{r\choose 2}^2}n}{2r}\right)+\Pr(A^c)=\exp\left(-\Omega\left(\frac{t^2n}{\sigma^2}\right)\right)$$
Combining this with Lemma \ref{decoupling} we have proved the following:
\begin{lem}\label{highchfbound}
For $|t|\le \pi \sigma$ we have that $|\varphi_{\k}(t)|\le \exp(-\Omega(t^2n/\sigma^2))$.
\end{lem}
\begin{figure}[H]
\begin{centering}
\begin{tikzpicture}
\foreach \i in {1,2}{
\foreach \j in {1,2,3}{
\node[draw, circle, minimum size = .75cm, inner sep = 0cm, xshift = 4*\i cm] (v\i\j) at ({(\j-1)*120-30}:1cm){$v_{\i\j}$};
}
}
\foreach \i in {1,2}{
\draw[loosely dashed, ultra thick] (v\i1)--(v\i2);
\draw[dotted, ultra thick] (v\i3)--(v\i2);
\draw[ thin] (v\i1)--(v\i3);
}
\foreach \j in {1,2}{
\foreach \k in {2,3}{
\draw[ thin] (v1\j) to (v2\k);
}
\draw[ thin](v13) to (v22);
\draw[ thin, bend right = 30](v13) to (v23);
\draw[ thin, bend right = 40](v13) to (v21);
\draw[ thin, bend right = 20](v11) to (v21);
}
\end{tikzpicture}
\caption{Example illustrating the partition $B_0,B_1,\ldots, B_{{r\choose 2}-1}$ from Section \ref{larget section}. In this case (where $r=3$) we have
3 line styles (loosely dashed, dotted, and thin) representing the 3 different edge classes. Note that the only rainbow triangles are $(v_{11},v_{12},v_{13})$ and $(v_{21},v_{22},v_{23})$.
$B_0$, here represented by the thin edges, is quite large, but most of the edges do not lie on even a single rainbow triangle and so $g(X)$ does
not depend on them at all.}\label{larget figure}
\end{centering}
\end{figure}
| {
"timestamp": "2018-11-09T02:17:54",
"yymm": "1811",
"arxiv_id": "1811.03527",
"language": "en",
"url": "https://arxiv.org/abs/1811.03527",
"abstract": "We prove a local limit theorem the number of $r$-cliques in $G(n,p)$ for $p\\in(0,1)$ and $r\\ge 3$ fixed constants. Our bounds hold in both the $\\ell^\\infty$ and $\\ell^1$ metric. The main work of the paper is an estimate for the characteristic function of this random variable. This is accomplished by introducing a new technique for bounding the characteristic function of constant degree polynomials in independent Bernoulli random variables, combined with a decoupling argument.",
"subjects": "Combinatorics (math.CO)",
"title": "A Local Limit Theorem for Cliques in G(n,p)",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9833429634078179,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7097211150711463
} |
https://arxiv.org/abs/1411.3869 | Convergence properties of a geometric mesh smoothing algorithm | We describe a simple geometric transformation of triangles which leads to an efficient and effective algorithm to smooth triangle and tetrahedral meshes. Our focus lies on the convergence properties of this algorithm: we prove the effectivity for some planar triangle meshes and further introduce dynamical methods to study the dynamics of the algorithm which may be used for any kind of algorithm based on a geometric transformation. | \section{Introduction}\label{s.first}
\subsection{Preliminary remarks}
The finite element method is the standard instrument to simulate the behavior of solid bodies or fluids in engineering and physics. The first preparatory step of this method is the discretization of the underlying domain into finitely many elements which can be easily described by parameters, e.g. surfaces are mostly approximated by triangles, quadrilaterals or parallelograms. Because of the design process in modern engineering, an initial mesh for the domain is often given, and the next important step is the preprocessing of this mesh to obtain a good basis for the application of the finite element method. As the requirements for simulation results become more and more strict, and real-time simulations and simulations on evolving objects present new challenges, a fast, reliable and preferably automatic mesh preprocessing is an important link in the simulation process. \\
Not surprisingly, there is a wide variety of methods at hand to improve the mesh quality of a given mesh, see e.g. the surveys \cite{FreyGeorge2000} and \cite{Carey1998}. One can identify two main approaches:
\begin{description}
\item[Geometry-based] A geometric smoothing method directly changes the geometry of the mesh, that is, it relocates the nodes. A popular example is Laplacian smoothing, which maps every node to the arithmetic mean of its neighboring nodes (see e.g. \cite{H76} or \cite{Field1988}). There are also methods that change the topology of the mesh by deleting small elements or subdividing large ones. All these methods have in common that they are usually quick to implement and very fast; additionally, they can often be straightforwardly combined with techniques of parallel computing. The main disadvantage is their heuristic character, so that the convergence of an algorithm is mostly assured only empirically, not theoretically. Consequently, they are sometimes combined with optimization-based approaches as in the early work \cite{Freitag1997}.
\item[Optimization-based] The principal idea behind any optimizational approach is to define a function on the set of meshes which represents the quality of a mesh, and to find the maximum of this function by usual numerical optimization, e.g. gradient methods (see \cite{FreitagKnupp1999} and following articles by these authors). The main advantage of these methods is clearly that they lead to a mesh of higher quality, but in the case of a non-convex quality function, it can usually not be assured that the transformed mesh corresponds to the global and not only local optimum. Also, the computation is usually costly with regard to runtime and storage. On the other hand, new advances for fast and robust solutions of optimization problems could be used as e.g. evolutionary optimization algorithms (see e.g. \cite{YK09}, \cite{HK03}).
\end{description}
In this article we present a geometric approach to mesh smoothing which consists of a simple geometric transformation of every element of a mesh and does not affect the topology. Here we only consider triangle and tetrahedral meshes. This ansatz is similar to the GETMe algorithm introduced for triangle and tetrahedral meshes in \cite{VAGW08,VWS09} and proved to be element-wise effective in \cite{VH14}. But while this series of articles mainly focuses on the numerical results and the improvement of runtime and performance by adjusting the algorithm, we study the mathematics underlying the presented geometric method and prove that any not too distorted planar mesh of triangles converges to the best possible mesh for the given mesh topology, that is, the difference between the normalized distances from the vertices to the centroid for all triangles is the smallest possible.
This point -- the mathematical discussion of convergence properties and the application of dynamical methods -- is surely the main achievement of the present article. For completeness and as motivation for a future application, we briefly discuss the performance of our method as a smoothing algorithm, but we do not explore the practical aspects of our algorithm in detail.
\subsubsection{Organization of the article} In Section~\ref{s.trans} we describe the discrete geometric transformation of a triangle element on which the smoothing algorithm is based and discuss its mathematical properties. In the forthcoming Section~\ref{s.convergence} we derive the smoothing algorithm from this transformation for a triangle mesh and prove its convergence for some particular triangle meshes, i.e. if the transformation is iteratively applied, the mesh converges to the best possible mesh for a given mesh topology. At the end -- in Section~\ref{s.numerical} -- we briefly discuss the numerical results.
\section{The geometric triangle transformation}\label{s.trans}
Before we start with the rigorous mathematical description we motivate the geometric transformation, the subject of this article, and its regularizing mechanism by the following observations, which have their origin in \cite{V13}:
\subsection{Introductory observations}
\subsubsection{Imitating the rotational symmetry group action of the triangle}
The symmetry group of a regular triangle $\Delta=(z_0,z_1,z_2)$, $z_i \in \mathbb{C}$, is the dihedral group $D_{3}$ which is generated by a reflection and a rotation by $\frac{2\pi}{3}$ around the circumcenter $c$ of the triangle. Consider the rotation: if the circumcenter lies in the origin, the rotational element then acts on the triangle by mapping the vector $z_{i-1}$ onto the vector $z_i$ for $i\in\mathbb{Z}_3$. \\
If the triangle is not equilateral, we can take the centroid, that is, the arithmetic mean of the three nodes, instead of the circumcenter and imitate the rotation by mapping the vector $z_{i-1}$ onto $\frac{|z_{i-1}|}{|z_i|}z_{i}$ such that the resulting vector has the same length as $z_{i-1}$ but points in the direction of $z_i$, i.e. we \emph{rotate} the vector $z_{i-1}$ around $c$.\\
Of course, this rotation around the centroid is neither $3$-periodic nor isometric, but this action, if iterated, converges to the classical rotation by $\frac{2\pi}{3}$ because the centroid converges to the circumcenter and the distances from the centroid to the vertices become equal. This will be shown in Subsection~\ref{s.proof} below.
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{rotation.mps}
\caption{The first three iterations of the geometric element transformation represented as rotations: observe how the circle radii approach each other and the centroid moves to the circumcenter.}
\label{fig:triangle_rotated}
\end{figure}
\subsubsection{First intuitive explanation of the mechanism}
Consider the centroid $c$. We start at the origin $c^{(0)}=0$. After the first iteration we get
$c^{(1)}=\frac{1}{3}\left(\frac{\left|z_2\right|}{\left|z_0\right|}z_0 + \frac{\left|z_0\right|}{\left|z_1\right|}z_1 + \frac{\left|z_1\right|}{\left|z_2\right|}z_2\right).$
Just change it a bit setting $r_i:=\frac{\left|z_{i-1}\right|}{\left|z_i\right|}$ to obtain
$\tilde{c}^{(1)}=\frac{1}{r_0+r_1+r_2}\left(r_0z_0 + r_1z_1 + r_2z_2\right)$.
Observe that by the inequality of the geometric and arithmetic mean we always have $3=3(\sqrt[3]{r_0r_1r_2})\leq r_0+r_1+r_2$.
The new $\tilde{c}$ is a \emph{weighted arithmetic mean} of the vertices, where the weight $r_i:=\frac{\left|z_{i-1}\right|}{\left|z_i\right|}$ is the greater the smaller the distance $\left|z_i\right|$ is compared to $\left|z_{i-1}\right|$. Accordingly, $c$ is moved more than the average in the direction of $z_i$ if $z_i$ was much closer to $c$ than $z_{i-1}$. The new distance from $c^{(1)}$ to $\frac{\left|z_{i-1}\right|}{\left|z_i\right|}z_i$ is then less than $\left|z_{i-1}\right|$. On the other hand, if $z_i$ was far from $c$ compared to $z_{i-1}$, $c$ is moved less than the average in the direction of $z_i$, and the new distance from $c^{(1)}$ to $\frac{\left|z_{i-1}\right|}{\left|z_i\right|}z_i$ is greater than $\left|z_{i-1}\right|$. Consequently, due to the controlling weights the maximum distance from a vertex to the centroid decreases while the minimal distance increases, so that the distances become equal in the long run. In other words, the centroid moves to the circumcenter of the triangle. It is obvious that $c$ stops moving if and only if the distance to each vertex is equal or, equivalently, if and only if $c$ is the circumcenter.\\
In Section~\ref{s.proof} we give a rigorous proof that this transformation iteratively applied to any non-degenerate triangle makes it equilateral.
\subsection{The geometric element transformation}
\subsubsection{Description of the geometric element transformation}
Let $\Delta=(x_0,x_1,x_2)$ with $x_i \in \mathbb{R}^2$ or $x_i \in \mathbb{R}^3$ for $i=0,1,2$ be a triangle in Euclidean space whose vertices are labeled counterclockwise. Denote by $c=\frac{1}{3}(x_0 + x_1 + x_2)$ the centroid of the triangle. The transformation then works as follows:
\begin{align*}\Delta_{new}&=(x_{0,new},x_{1,new},x_{2,new}),\\
x_{i,new}&=\frac{\left\|x_{i-1} - c\right\|_2}{\left\|x_i - c\right\|_2}(x_i - c) + c \quad \mbox{for}\; i\in\mathbb{Z}_3.
\end{align*}
To keep the centroid fixed we move the centroid $c_{new}$ of the transformed triangle $\Delta_{new}$ back into the old centroid $c$:
$x_{i,new}=x_{i,new}-c_{new} + c, \; i=0,1,2.$
Combining these two steps into one we get for $i\in\mathbb{Z}_3$ with $r_i:=\left\|x_{i-1}-c\right\|\left\|x_i-c\right\|^{-1}$:
\begin{framed}
\begin{equation}\label{t.getme}x_{i,new}=\frac{2}{3}r_i(x_i - c) - \frac{1}{3}r_{i+1}(x_{i+1} - c)- \frac{1}{3}r_{i-1}(x_{i-1} - c) + c. \end{equation}
\end{framed}
This is the whole, very simple geometric transformation, which can be directly implemented in any mathematical software, e.g. Matlab.
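For concreteness, here is a minimal Python sketch of Transformation~(\ref{t.getme}) (the text mentions Matlab; Python/NumPy is used here purely for illustration). Iterating it on a random triangle lets one watch the ratio of the maximal to the minimal vertex--centroid distance tend to $1$, as proved in Subsection~\ref{s.proof} below.
\begin{verbatim}
# Minimal sketch of the geometric triangle transformation: each vertex keeps the
# distance of its predecessor to the centroid but points in its own direction;
# afterwards the centroid is shifted back to its old position.
import numpy as np

def transform_triangle(x):
    """x: (3, 2) array of vertices; returns the transformed triangle."""
    c = x.mean(axis=0)                      # centroid
    d = np.linalg.norm(x - c, axis=1)       # distances |x_i - c|
    r = d[[2, 0, 1]] / d                    # r_i = |x_{i-1} - c| / |x_i - c|
    y = c + r[:, None] * (x - c)            # rotate-like step
    return y - y.mean(axis=0) + c           # move the new centroid back to c

rng = np.random.default_rng(1)
x = rng.random((3, 2))                      # a random (almost surely non-degenerate) triangle
for n in range(15):
    d = np.linalg.norm(x - x.mean(axis=0), axis=1)
    print(f"iteration {n:2d}: max/min distance ratio = {d.max() / d.min():.6f}")
    x = transform_triangle(x)
\end{verbatim}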
\subsubsection{Formal proof of element-wise convergence}\label{s.proof}
For the proof that any non-degenerate triangle converges under Transformation~(\ref{t.getme}) to an equilateral triangle we can suppose that the triangle lies in the Euclidean plane $\mathbb{R}^2$, which we identify -- to simplify notation -- with $\mathbb{C}$:
let $\Delta=(z_0^{(0)},z_1^{(0)},z_2^{(0)})$ with $z_i^{(0)} \in \mathbb{C}$ be an arbitrary triangle with $z_i^{(0)} \neq z_j^{(0)}$ for $i \neq j$. Denote by $c=\frac{1}{3}(z_0 + z_1 + z_2)$ the centroid of the triangle. Without loss of generality, we assume that the centroid $c$ lies in the origin, that is, $c=0$. For $n \in \mathbb{N}$ denote by $r_i^{(n)}$ the ratio $\frac{|z_{i-1}^{(n)}|}{|z^{(n)}_{i}|}$ with $i \in \mathbb{Z}_3$. Then Transformation~(\ref{t.getme}) becomes the following transformation, recursively defined for $n \geq 1$ and $i \in \mathbb{Z}_3$:
\begin{equation}\label{e.transformation}
z_i^{(n)}=\frac{2}{3}r_{i}^{(n-1)}z_i^{(n-1)} -\frac{1}{3}r_{i+1}^{(n-1)}z_{i+1}^{(n-1)}-\frac{1}{3}r_{i+2}^{(n-1)}z_{i+2}^{(n-1)}
\end{equation}
We prove that the ratio $r_i^{(n)}$ converges to $1$ for $i=0,1,2$. This implies that the centroid converges to the circumcenter and the distances from the vertices to the centroid become equal.
\begin{theorem}\label{t.convergence}
With the notations above, we have
$\lim_{n \rightarrow \infty} r_i^{(n)}= 1$
for $i=0,1,2$, i.e. the ratio of the distances from the vertices to the centroid converges to $1$.
\end{theorem}
Before we start with the proof of Theorem~\ref{t.convergence} we show in the next two preliminary lemmata that the maximal distance $\max_i|z_i^{(n)}|$ from a vertex $z_i^{(n)}$ to the centroid $c$ is a strictly decreasing sequence, and symmetrically, that the minimal distance from a vertex to the centroid is a strictly increasing sequence.
\begin{lemma}\label{l.max}
For $n \geq 0$ we have $\max_{i=0}^2 |z_i^{(n+1)}| < \max_{i=0}^{2}|z_i^{(n)}|$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{l.max}]
We prove this Lemma by a simple estimation. Without loss of generality we assume that $\max_{i=0}^2|z_i^{(n+1)}| = |z_0^{(n+1)}|$. We have
\begin{align*}
|z_0^{(n+1)}| &= |\frac{2}{3} r_0^{(n)} z_0^{(n)} - \frac{1}{3}r_1^{(n)}z_1^{(n)}- \frac{1}{3}r_2^{(n)}z_2^{(n)}|\; \mbox{utilizing}\; z_0^{(n)} + z_1^{(n)}+z_2^{(n)}=0\\
&=|-\frac{2}{3}r_0^{(n)}(z_1^{(n)} + z_2^{(n)})- \frac{1}{3}(r_1^{(n)}z_1^{(n)} + r_2^{(n)}z_2^{(n)})|\\
&< \max_{i}r_i^{(n)} |z_0^{(n)}|
\end{align*}
If $r_0^{(n)}=\max_i r_i^{(n)}$ the proof is easily finished by
$$|z_0^{(n+1)}| < r_0|z_0^{(n)}|=|z_2^{(n)}| \leq \max_i |z_i^{(n)}|.$$
Otherwise, assume that $r_1^{(n)}$ is maximal (the case that $r_2^{(n)}$ is maximal works analogously). Then we substitute in the equation above $z_1^{(n)}=-z_0^{(n)}-z_2^{(n)}$ and we get:
\begin{align*}
|z_0^{(n+1)}| &= |(\frac{2}{3}r_0^{(n)} + \frac{1}{3}r_1^{(n)})z_0^{(n)}+ (\frac{1}{3}r_1^{(n)}-\frac{1}{3}r_2^{(n)}) z_2^{(n)}|\; r_1^{(n)}=\max r_i^{(n)}.\\
& < r_1^{(n)}|z_1^{(n)}|= |z_0^{(n)}|\;\leq \; \max |z_i^{(n)}|.
\end{align*}
\end{proof}
Now we prove in an analogous way that the sequence of minima is monotonically increasing:
\begin{lemma}\label{l.min}
For $n \geq 0$ we have $\min_{i=0}^2 |z_i^{(n)}| < \min_{i=0}^2 |z_i^{(n+1)}|$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{l.min}]
Assume without loss of generality that $|z_0^{(n+1)}|=\min |z_i^{(n+1)}|$. We have the following estimate:
\begin{align}\label{e.estimate1}
|z_0^{(n+1)}| &= |\frac{2}{3}r_0^{(n)}z_0^{(n)} - \frac{1}{3}\left(r_1^{(n)}z_1^{(n)} + r_2^{(n)}z_2^{(n)}\right)|\; \mbox{substituting}\; z_0^{(n)}=-z_1^{(n)}-z_2^{(n)}\nonumber\\
&=|- \frac{2}{3}r_0^{(n)}\left(z_1^{(n)} + z_2^{(n)}\right) - \frac{1}{3}\left(r_1^{(n)}z_1^{(n)} + r_2^{(n)}z_2^{(n)}\right)|\nonumber\\
& > \left(\frac{2}{3}r_0^{(n)} + \frac{1}{3}\min_{i=1,2}r_i^{(n)}\right)|z_0^{(n)}|
\end{align}
Consequently, we have to show that (\ref{e.estimate1}) is greater than $\min_i|z_i^{(n)}|$: If $r_0^{(n)}$ is minimal, we easily get
$$ \left(\frac{2}{3}r_0^{(n)} + \frac{1}{3}\min_{i=1,2}r_i^{(n)}\right) \geq r_0^{(n)}\; \Rightarrow\; |z_0^{(n+1)}| > r_0|z_0^{(n)}|=|z_2^{(n)}| \geq \min |z_i^{(n)}|.$$
Otherwise, if $r_0^{(n)}$ is maximal, we certainly have -- utilizing the inequality of arithmetic and geometric mean
$$\left(\frac{2}{3}r_0^{(n)} + \frac{1}{3}\min_{i=1,2}r_i^{(n)}\right)\geq\left(\frac{1}{3}r_0^{(n)} + \frac{1}{3}r_1^{(n)} + \frac{1}{3}r_2^{(n)}\right)\geq \sqrt[3]{r_0^{(n)}r_1^{(n)}r_2^{(n)}} = 1, $$
finishing the proof for this case due to $|z_0^{(n+1)}| > |z_0^{(n)}|$. In the last case, if $r_0^{(n)}$ is neither maximal nor minimal, either $r_1^{(n)}$ or $r_2^{(n)}$ is maximal. Assume without loss of generality that $r_1^{(n)}$ is maximal:
\begin{align*}
|z_0^{(n+1)}| &> \left(\frac{2}{3}r_0^{(n)} + \frac{1}{3}r_2^{(n)}\right)|z_0^{(n)}|\geq \sqrt[3]{r_0^{(n)}r_0^{(n)}r_2^{(n)}}|z_0^{(n)}| \\
&= \sqrt[3]{r_1^{(n)}r_1^{(n)}r_0^{(n)}}|z_1^{(n)}|\quad \mbox{utilizing}\; |z_0^{(n)}|=r_1^{(n)}|z_1^{(n)}|\; \mbox{and}\;r_0r_1r_2=1\\
&= \sqrt[3]{\frac{r_1^{(n)}}{r_2^{(n)}}}|z_1^{(n)}| \geq |z_1^{(n)}| \geq \min_i |z_i^{(n)}|,
\end{align*}
finishing the proof.
\end{proof}
With the help of these two lemmas we can directly conclude Theorem~\ref{t.convergence}:
\begin{proof}[Proof of Theorem~\ref{t.convergence}]
For $i\in\mathbb{Z}_3$ consider the sequence $\left(r_i^{(n)}=\frac{|z_{i-1}^{(n)}|}{|z_i^{(n)}|}\right)_{n \geq 0}$. We have for $n\geq 0$ the following bounds from below and above:
\begin{equation}\label{e.estimate2}
\frac{\min_i |z_i^{(n)}|}{\max_i |z_i^{(n)}|}\leq r_i^{(n)} \leq \frac{\max_i |z_i^{(n)}|}{\min_i |z_i^{(n)}|}.
\end{equation}
According to Lemmas~\ref{l.max} and \ref{l.min}, the sequence $\left(\frac{\max_i |z_i^{(n)}|}{\min_i |z_i^{(n)}|}\right)_{n\geq 0}$ is a strictly decreasing sequence bounded from below by $1$, so it converges to $1$; in the same way, the sequence $\left(\frac{\min_i |z_i^{(n)}|}{\max_i |z_i^{(n)}|}\right)_{n\geq 0}$ is a strictly increasing sequence bounded from above by $1$, so it also converges to $1$.
These two results combine with (\ref{e.estimate2}) to $\lim_{n \rightarrow \infty} r_i^{(n)} =1$ for $i=0,1,2$ finishing the proof.
\end{proof}
Theorem~\ref{t.convergence} directly gives us the required result for the geometric element transformation, where we assume that $\Delta^n$ is non-degenerate, that is, its vertices are pairwise distinct:
\begin{corollary}[Elementwise convergence]\label{c.elementwise}
The triangle $\Delta^n=(z_0^{(n)}, z_1^{(n)}, z_2^{(n)})$ converges for $n \rightarrow \infty$ to an equilateral triangle.
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{c.elementwise}]
As $r_i^{(n)}=|\frac{z_{i-1}^{(n)}}{z_i^{(n)}}|$ converges to $1$ with $n \rightarrow \infty$, we get that $\lim |z_i^{(n)}| = \lim |z_{j}^{(n)}|$ for $i,j=0,1,2$. Therefore, the distances from the vertices to the centroid $c$ become equal, so that $c$ becomes the circumcenter, and the triangle equilateral.
\end{proof}
\section{Convergence of the smoothing algorithm for triangle meshes}\label{s.convergence}
Transformation~(\ref{t.getme}) can be used to transform a mesh of triangles by applying it to every element and averaging, at every vertex, the positions proposed by the adjacent triangles. We give the precise definition of the considered mesh transformation below, after specifying our setting in detail. \\
We prove in this section that any triangle mesh which does not contain too pathological triangles converges under the transformation to a mesh of triangles as regular as possible. Let us make precise our setting:
\subsection{Preliminary notations:}
Let $\Sigma=\left\{0,\dots,N-1\right\}$ be a finite set of symbols. Let \newline$C=\left\{\Delta_i=(i_0,i_1,i_2)\in \Sigma^3\,\big|\, i=0,\dots,n-1\right\}$ be a finite set of triples of symbols. We call the set $C$ a \emph{connectivity} iff for any pair $\Delta_i, \Delta_j\in C$ there exist $k\leq n-1$ and a finite sequence $\Delta_0,\dots,\Delta_k \in C$ such that $\Delta_0=\Delta_i$ and $\Delta_k=\Delta_j$ and for $m=0,\dots,k-1$ the triples $\Delta_m$ and $\Delta_{m+1}$ have exactly two symbols in common.
Let $M_C: \Sigma \rightarrow \mathbb{R}^2, k\mapsto x_k$ be an injective map. We call $M_C$ a \emph{triangle mesh with connectivity $C$} iff for any $i,j=0,\dots,n-1$, $i\neq j$, the triangles defined by $M_C(\Delta_i):=(M_C(i_0),M_C(i_1),M_C(i_2))$ and $M_C(\Delta_j)$, with vertices taken counterclockwise, are non-degenerate and have disjoint interiors. Denote by $X_C$ the set of meshes $M_C$ with connectivity $C$.
\begin{rem} As a consequence of the definition of connectivity, the set $\bigcup_{i=0}^{n-1}M_C(\Delta_i)$ is arcwise connected.
\end{rem}
\subsection{Definition of the mesh transformation}
We define a triangle mesh transformation in two steps. First, for any $i=0,\dots,n-1$ we define for the triangle $M_C(\Delta_i)=(x_{i_0},x_{i_1},x_{i_2})$ with centroid $c_i$ the triangle transformation (as defined above) by
\begin{align}\label{e.triangle_transformation}
&\theta_i:\mathbb{R}^6 \rightarrow \mathbb{R}^6\;\nonumber\\
&x_i:=(x_{i_0},x_{i_1},x_{i_2}) \mapsto (\theta_{i_0}(x_{i}),\theta_{i_1}(x_{i}),\theta_{i_2}(x_{i})),\nonumber\\
&\theta_{i_0}(x_{i})= \frac{2}{3}\frac{\left\|x_{i_2}-c_i\right\|}{\left\|x_{i_0}-c_i\right\|}(x_{i_0}-c_i) -\frac{1}{3}\frac{\left\|x_{i_0}-c_i\right\|}{\left\|x_{i_1}-c_i\right\|}(x_{i_1}-c_i)-\frac{1}{3}\frac{\left\|x_{i_1}-c_i\right\|}{\left\|x_{i_2}-c_i\right\|}(x_{i_2}-c_i) + c_i\nonumber\\
&\theta_{i_1}(x_{i})= \frac{2}{3}\frac{\left\|x_{i_0}-c_i\right\|}{\left\|x_{i_1}-c_i\right\|}(x_{i_1}-c_i) -\frac{1}{3}\frac{\left\|x_{i_1}-c_i\right\|}{\left\|x_{i_2}-c_i\right\|}(x_{i_2}-c_i)-\frac{1}{3}\frac{\left\|x_{i_2}-c_i\right\|}{\left\|x_{i_0}-c_i\right\|}(x_{i_0}-c_i)+c_i\nonumber\\
&\theta_{i_2}(x_{i})= \frac{2}{3}\frac{\left\|x_{i_1}-c_i\right\|}{\left\|x_{i_2}-c_i\right\|}(x_{i_2}-c_i) -\frac{1}{3}\frac{\left\|x_{i_2}-c_i\right\|}{\left\|x_{i_0}-c_i\right\|}(x_{i_0}-c_i)-\frac{1}{3}\frac{\left\|x_{i_0}-c_i\right\|}{\left\|x_{i_1}-c_i\right\|}(x_{i_1}-c_i)+c_i.
\end{align}
We can now define the mesh transformation under consideration.
For $k=0,\dots,N-1$ let $\Sigma_k$ denote the set of indices of the adjacent triangles at $x_k$ and we define the map
\begin{align}\label{e.mesh_transformation}
\Theta:X_C \subset \mathbb{R}^{2N}&\rightarrow \mathbb{R}^{2N}\nonumber\\
x=(x_0,\dots,x_{N-1})&\mapsto (\Theta_0(x),\dots,\Theta_{N-1}(x)), \; x_k, \Theta_k(x) \in \mathbb{R}^2\nonumber, \\
\mbox{with}\;&\Theta_k(x)=\frac{1}{|\Sigma_k|}\sum_{m \in \Sigma_k}\theta_{m_{j(m)}}(x_m),
\end{align}
where $j(m) \in \{0,1,2\}$ denotes the index of the vertex $x_k$ inside the triangle numbered by $m$, so that $\theta_{m_{j(m)}}(x_m)\in\mathbb{R}^2$ is the position proposed for $x_k$ by this triangle.
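The following Python sketch illustrates one step of the mesh transformation (\ref{e.mesh_transformation}): every triangle proposes new positions via the element transformation of Section~\ref{s.trans}, and every vertex is moved to the average of the positions proposed by its adjacent triangles. The data model (a vertex array plus a list of index triples for the connectivity) and the small perturbed test mesh are assumptions made purely for this illustration.
\begin{verbatim}
# Sketch of one step of the mesh transformation Theta: every element is transformed
# by the triangle transformation, and every vertex is relocated to the mean of the
# positions proposed for it by its adjacent triangles.
import numpy as np

def transform_triangle(x):
    c = x.mean(axis=0)
    d = np.linalg.norm(x - c, axis=1)
    r = d[[2, 0, 1]] / d                    # r_i = |x_{i-1} - c| / |x_i - c|
    y = c + r[:, None] * (x - c)
    return y - y.mean(axis=0) + c           # keep the element centroid fixed

def smooth_step(vertices, triangles):
    proposals = np.zeros_like(vertices)     # sum of proposed positions per vertex
    counts = np.zeros(len(vertices))        # |Sigma_k|: number of adjacent triangles
    for tri in triangles:
        new_pos = transform_triangle(vertices[list(tri)])
        for local, k in enumerate(tri):
            proposals[k] += new_pos[local]
            counts[k] += 1
    return proposals / counts[:, None]

def edge_ratio(x):
    e = [np.linalg.norm(x[i] - x[j]) for i, j in ((0, 1), (1, 2), (2, 0))]
    return min(e) / max(e)

# toy example: a perturbed version of the 6-simple mesh of six equilateral triangles
angles = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
verts = np.vstack([[0.0, 0.0], np.column_stack([np.cos(angles), np.sin(angles)])])
verts = verts + 0.1 * np.random.default_rng(2).standard_normal(verts.shape)
tris = [(0, k + 1, (k + 1) % 6 + 1) for k in range(6)]
for step in range(8):
    q = np.mean([edge_ratio(verts[list(t)]) for t in tris])
    print(f"step {step}: mean edge-ratio quality = {q:.4f}")
    verts = smooth_step(verts, tris)
\end{verbatim}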
\begin{rem}
The map $\Theta$ is clearly well-defined as a map from $X_C$ to $\mathbb{R}^{2N}$, but not as a map to $X_C$: let $M_C(\Sigma)=(x_0,\dots,x_{N-1}) \in \mathbb{R}^{2N}$ be a triangle mesh with connectivity $C$. Then $\Theta(M_C(\Sigma))$ is not necessarily a triangle mesh $M'_C$. It could happen that the interiors of two triangles $\Theta(\Delta_i)$, $\Theta(\Delta_j)$ are no longer disjoint.
\end{rem}
On the other hand note that a mesh of equilateral triangles is fixed under $\Theta$, such that we can prove the following lemma where we call \emph{distortion} of a triangle the ratio of the shortest by the longest edge length of a triangle:
\begin{lemma}[Well-definedness of $\Theta$]\label{l.welldefinedness}
Let $\Theta$ be defined as above. Then there exists $0< \delta < 1$ such that for any triangle mesh $M_C$ whose distortion of triangles is bounded from below by $\delta$ the image $\Theta(M_C)$ is a triangle mesh $M'_C$.
\end{lemma}
We postpone the proof to Subsection~\ref{s.similarity}.
\subsection{Similarity group action and equivariance of $\Theta$}\label{s.similarity}
Denote by $\Sim(\mathbb{R}^2)$ the four-dimensional group of similarities of $\mathbb{R}^2$, composed of the one-dimensional group $\mathbb{R}^+$ of scalings and the three-dimensional group $\isom(\mathbb{R}^2)$ of isometries. This group acts naturally on the set of triangles by
$$\Sim(\mathbb{R}^2) \times (\mathbb{R}^2)^3 \rightarrow (\mathbb{R}^2)^3; (g, x)\mapsto g.x=(g.x_0,g.x_1,g.x_2)$$
for any triangle $x=(x_0,x_1,x_2) \in \mathbb{R}^6$, where for $g$ there exist $A \in \So(2,\mathbb{R})$, $\theta \in \mathbb{R}^2$ and $\lambda \in \mathbb{R}^+$ such that $g.x_i=\lambda(Ax_i + \theta)$ for $i=0,1,2$.
\begin{rem}
Two triangles $x,y$ are \emph{similar} iff there exists $g \in \Sim(\mathbb{R}^2)$ such that $g.x=y$. \\
\end{rem}
One can easily prove that the group $\Sim(\mathbb{R}^2)$ acts freely on the set of triangles:
\begin{lemma}
The group action as defined above is free.
\end{lemma}
\begin{proof}
Let $A \in \So(2,\mathbb{R})$, $\theta \in \mathbb{R}^2$ and $\lambda \in \mathbb{R}^+$ be such that $\lambda(Ax_i + \theta)=x_i$ for all vertices of a triangle $x=(x_0,x_1,x_2) \in \mathbb{R}^6$. Taking the difference of two vertices gives $\lambda A(x_0-x_1)=x_0-x_1$; since $A$ is an isometry and $x_0\neq x_1$, this implies $\lambda=1$ and that $x_0-x_1$ is an eigenvector of $A$ with eigenvalue $1$. A rotation with a nonzero fixed vector is the identity, so $A$ is the identity. Consequently, we get $\theta=0$, finishing the proof.
\end{proof}
The action defined above can be straightforwardly generalized to the set $X_C$ of meshes with connectivity $C$ by
$$(g,x) \in \Sim(\mathbb{R}^2) \times (\mathbb{R}^2)^N\,\mapsto \, (g.x_0, g.x_1,\dots,g.x_{N-1})$$
for any mesh $M_C(\Sigma)=:x \in (\mathbb{R}^2)^N$. This action is certainly free as well.
\begin{rem}
One could think of defining the similarity group action on a mesh separately on every triangle. But in fact, the connectivity as defined above forces that the same group element acts simultaneously on each triangle of the mesh. So the group action defined above is the only one in accordance with the given definition of a mesh.
\end{rem}
As a consequence we can list the following properties of the group action:
\begin{enumerate}
\item Every $\Sim(\mathbb{R}^2)$-orbit is a four-dimensional smooth submanifold in $\mathbb{R}^{2N}$.
\item One computes immediately that the mesh transformation $\Theta$ is \emph{equivariant} under the group action of $\Sim(\mathbb{R}^2)$, that is
\begin{enumerate}
\item For every $M_C$ inside the domain of $\Theta$ the image $\Theta(M_C)$ lies as well in the domain.
\item $$\Theta(g.M_C) = g.\Theta(M_C) \quad \mbox{for any} \;M_C \in X_C \; \mbox{and}\; g \in \Sim(\mathbb{R}^2).$$
\end{enumerate}
(see e.g. \cite{F80} where important properties for equivariant dynamical systems are proved).
\item For any $h \in \Sim(\mathbb{R}^2)$ one computes for the Jacobian matrix of $\Theta$ at any $M_C \in X_C$
$$\Theta=h^{-1} \circ \Theta \circ h \; \Rightarrow \; D\Theta_{M_C}=Dh^{-1} \circ D\Theta_{h.M_C} \circ Dh,$$
and as the differentials of the affine maps $h,h^{-1}$ are their (constant) linear parts one gets
$$ D\Theta_{M_C}=h^{-1} \circ D\Theta_{h.M_C} \circ h.$$
\end{enumerate}
Thanks to these properties we can reduce the question of global convergence to the following:
Let $M_C$ be a fixed point of $\Theta$, that is $\Theta(M_C)=M_C$, then the whole group orbit $\Lambda:=\Sim(\mathbb{R}^2).M_C$ is fixed and $\Lambda$ is consequently a four-dimensional submanifold of fixed points. \\
Now we can prove Lemma~\ref{l.welldefinedness}:
\begin{proof}[Proof of Lemma~\ref{l.welldefinedness}]
Let $M_{eq}$ be an equilateral mesh; then we have $\Theta(M_{eq})=M_{eq}$. On the other hand, the domain of $\Theta$ is clearly an open subset of $X_C$. By the continuity of $\Theta$, there exists an open set $\mathcal{U}$ of meshes sufficiently close to $M_{eq}$ such that $\Theta$ is well-defined on $\mathcal{U}$. The equivariance of $\Theta$ implies that $\Theta$ is well-defined on the group orbit $\Sim(\mathbb{R}^2).\mathcal{U}$ of $\mathcal{U}$, which contains all meshes whose distortion of triangles is bounded from below by some $ 0<\delta < 1$, which ends the proof.
\end{proof}
To study the convergence in a neighborhood of the fixed point $M_C$ it is enough to study the dynamics of $\Theta$ in a neighborhood of $\Lambda$ thanks to the following lemma:
\begin{lemma}\label{l.convergence}
Let $x=M_C\in\mathbb{R}^{2N}$ be a fixed point of $\Theta$ and $\Lambda:=\Sim(\mathbb{R}^2).M_C$ its group orbit. If there exists a $D\Theta$-invariant decomposition of the tangent bundle at $\Lambda$ $$T\mathbb{R}^{2N}|_{\Lambda}=T\Lambda \oplus E^s$$
such that $\left\|D\Theta|_{E^s}\right\| < 1$, then there exists a unique family $\mathcal{F}^s$ of injectively $C^r$-immersed submanifolds $\mathcal{F}^s(x)$ such that $x \in \mathcal{F}^s(x)$ and $\mathcal{F}^s(x)$ is tangent to $E^s_x$ at every $x \in \Lambda$. This family is $\Theta$-invariant, that is, $\Theta(\mathcal{F}^s(x))= \mathcal{F}^s(\Theta(x))$, and the manifolds $\mathcal{F}^s(x)$ are uniformly contracted by some iterate of $\Theta$. \\
That family actually forms a foliation of a neighborhood of $\Lambda$.
\end{lemma}
This Lemma is an immediate application of the invariant manifold theorem by Hirsch, Pugh and Shub, cited and proved for example in \cite[Th.B7, p.293]{BDV}.
The spectrum $\spec(D\Theta|_{\Lambda})$ tangent to the group orbit contains four eigenvalues equal to $1$. Consequently, we have the following direct corollary of Lemma~\ref{l.convergence}:
\begin{corollary}\label{c.convergence}
Let $M_C\in\mathbb{R}^{2N}$ be a fixed point of $\Theta$ and $\Lambda:=\Sim(\mathbb{R}^2).M_C$ its group orbit. If every eigenvalue of the Jacobian matrix $D\Theta$ which is not contained in $\spec(D\Theta|_{\Lambda})$ has an absolute value strictly smaller than $1$, then $\Lambda$ is an attractor and $\Theta^n(M)$ converges uniformly at exponential rate to \textbf{one} point in $\Lambda$ for $n \rightarrow \infty$ and for any triangle mesh $M$ sufficiently close to $\Lambda$.
\end{corollary}
So as a consequence of this corollary, it is enough to study the spectrum of $D\Theta$ at a fixed point and to prove that all but four eigenvalues have absolute value strictly smaller than one. Nevertheless, this is still a difficult task, as will become obvious in the following. We start with the easiest cases and gradually increase the complexity:
\newpage
\subsection{Convergence for particular cases}
\subsubsection{Case 1: A single triangle}
We start with the easiest case of a mesh which consists of a single triangle; so, in fact, we study the global convergence of the previously defined triangle transformation $\theta:\mathbb{R}^6 \rightarrow \mathbb{R}^6$ on a triangle $x=(x_0,x_1,x_2) \in (\mathbb{R}^2)^3$ in more detail, using the new setting above.
Let $x_{eq}\in \mathbb{R}^6$ be an equilateral triangle; then $\theta(x_{eq})=x_{eq}$. The equivariance of $\theta$ under the group of similarities implies that $$\theta(\Lambda_{eq}) = \Lambda_{eq},\;\mbox{where}\; \Lambda_{eq}=\left\{x \in \mathbb{R}^6 \;\big|\; \exists\, g \in \Sim(\mathbb{R}^2): \; x=g.x_{eq} \right\}$$
is a $4$-dimensional submanifold of $\mathbb{R}^6$ which is $\theta$-invariant, that is, $\theta(\Lambda_{eq}) \subset \Lambda_{eq}$. Following Corollary~\ref{c.convergence} we compute the derivative $D\theta_{x_{eq}}$ of $\theta$ at $x_{eq} \in \Lambda_{eq}$. The Jacobian matrix is the same matrix $J$ for any $x_{eq} \in \Lambda_{eq}$:
\begin{align}\label{e.jacobian}
Dt(x_{eq}) &= \begin{pmatrix}A & B & C \\C & A&B\\B & C & A\end{pmatrix}=:J\quad \mbox{where}\nonumber \\
A&=\begin{pmatrix}\frac{3}{4}& -\frac{1}{4\sqrt{3}}\\\frac{1}{4\sqrt{3}}& \frac{3}{4}\end{pmatrix},\; B=\begin{pmatrix}\frac{1}{4}& -\frac{1}{4\sqrt{3}}\\ \frac{1}{4\sqrt{3}}&\frac{1}{4}\end{pmatrix},\; C=\begin{pmatrix}0& \frac{1}{2\sqrt{3}}\\-\frac{1}{2\sqrt{3}}& 0\end{pmatrix}.
\end{align}
Remark that $J$ is a \emph{circulant} block matrix. Further, $J$ is conjugate to the block diagonal matrix $(\frac{1}{2}R_{\pi/3}, \id_{\mathbb{R}^4})$ where $R_{\pi/3}:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ is a rotation by $\pi/3$. Accordingly, there exists a $2$-dimensional subspace $E^s$ spanned by the eigenvectors $v_1,v_2$ corresponding to the two eigenvalues $\neq 1$. There exists $c > 0$ constant such that for any $v \in E^s_x$, $x \in \Lambda_{eq}$, one has
$$\left\|Dt_xv\right\| \leq \frac{c}{2} \left\|v\right\|.$$
The four eigenvectors $v_3,\dots,v_6$ corresponding to eigenvalues $1$ span the tangent space of $\Lambda_{eq}$. So -- applying Corollary~\ref{c.convergence} -- the invariant set $\Lambda_{eq}$ is an \emph{attractor} for $\theta$. Hence, there exists a neighborhood $U_{eq} \supset \Lambda_{eq}$ such that every $x \in U_{eq}$ converges to $\Lambda_{eq}$ under iterates of $\theta$, that is,
$$\dist(\theta^nx,\Lambda_{eq}) \rightarrow 0,\quad n \rightarrow \infty.$$
Taking into account Theorem~\ref{t.convergence} one concludes that $\Lambda_{eq}$ is a \emph{global attractor}.
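The stated spectrum of $J$ can also be verified numerically; the following short Python check (purely illustrative, using the blocks $A$, $B$, $C$ from (\ref{e.jacobian})) confirms four eigenvalues equal to $1$ and two eigenvalues of modulus $\tfrac12$.
\begin{verbatim}
# Numerical check of the spectrum of the circulant block matrix J from the text:
# expected eigenvalues are 1 (multiplicity 4) and (1/2) e^{+- i pi/3} (modulus 1/2).
import numpy as np

s = 1.0 / (4.0 * np.sqrt(3.0))
A = np.array([[0.75, -s], [s, 0.75]])
B = np.array([[0.25, -s], [s, 0.25]])
C = np.array([[0.0, 2 * s], [-2 * s, 0.0]])

J = np.block([[A, B, C],
              [C, A, B],
              [B, C, A]])

eigvals = np.linalg.eigvals(J)
print(np.sort_complex(eigvals))   # four eigenvalues ~ 1, two ~ 1/4 +- i sqrt(3)/4
print(np.sort(np.abs(eigvals)))   # -> [0.5, 0.5, 1, 1, 1, 1]
\end{verbatim}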
\begin{rem}
If one considers the orbit space of the free group action of $\Sim(\mathbb{R}^2)$ on $\mathbb{R}^6$, i.e. identifies similar triangles, one observes that this space is the two-dimensional projective space $P^2(\mathbb{R})$. By the observation above the triangle transformation passes to a well-defined map on this quotient space:
\begin{equation*}
\xymatrix{
\mathbb{R}^6 \ar[d]^p \ar[r]^t &\mathbb{R}^6\ar[d]^p\\
P^2(\mathbb{R}) \ar[r]^t &P^2(\mathbb{R})}
\end{equation*}
The attractor $\Lambda_{eq}$ projects to a globally attracting fixed point on $P^2(\mathbb{R})$. By the equivariance of the transformation $\theta$, the two-dimensional stable set tangent to $E^s$ passes also to a well-defined two-dimensional set in the orbit space reflecting the attraction of the fixed point.
\end{rem}
\subsubsection{Case 2: mesh of six equilateral triangles}
Let $\Sigma=\left\{0,\dots,6\right\}$ and $C=\left\{(0,1,2), (0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,1)\right\}$ be the connectivity and denote by $x_{eq}=(x_0,\dots,x_6)\in (\mathbb{R}^2)^7$ the mesh of six equilateral triangles. The group orbit $\Lambda_{eq}$ under the similarity group action is -- exactly as above -- a $4$-dimensional smooth submanifold of $\mathbb{R}^{14}$, and every mesh $x_{eq} \in \Lambda_{eq}$ is certainly a fixed point of the mesh transformation $\Theta$. We compute -- with the notations above -- the Jacobian matrix of $\Theta$ for $x_{eq}\in \Lambda_{eq}$ as
\begin{align}\label{e.meshjacobian6}
&D\Theta(x_{eq})=\nonumber\\
&\begin{pmatrix}A & \frac{1}{6}(B+C)^T & \frac{1}{6}(B+C)^T & \frac{1}{6}(B+C)^T & \frac{1}{6}(B+C)^T & \frac{1}{6}(B+C)^T & \frac{1}{6}(B+C)^T \\\frac{1}{2}B & A &\frac{1}{2}B^T & 0 & 0 & 0 &\frac{1}{2}C^T\\ \frac{1}{2}B & \frac{1}{2}C^T & A &\frac{1}{2}B^T & 0 & 0 & 0\\ \frac{1}{2}B & 0 & \frac{1}{2}C^T & A &\frac{1}{2}B^T & 0 & 0 \\\frac{1}{2}B & 0 & 0 &\frac{1}{2}C^T & A &\frac{1}{2}B^T & 0 \\ \frac{1}{2}B & \frac{1}{2}B^T & 0 & 0 & \frac{1}{2}C^T & A & \frac{1}{2}B^T\\\frac{1}{2}B & 0 & 0 & 0 &0 & \frac{1}{2}C^T & A \end{pmatrix}.
\end{align}
The matrix $D\Theta(x_{eq})$ has four eigenvalues $1$ whose eigenvectors span the $4$-dimensional tangent space of $\Lambda_{eq}$. Further, we have five pairs of complex conjugate eigenvalues $\lambda_1,\overline{\lambda}_1,\dots, \overline{\lambda}_5$ with absolute values $\left|\lambda_i\right| \in [0.5774,0.8780]$. Consequently, the tangent space at $x_{eq} \in \Lambda_{eq}$ splits into a ten-dimensional space $E^s(x_{eq})$, spanned by the eigenvectors $v_1,\overline{v}_1,\dots,\overline{v}_5$, and the $4$-dimensional tangent space $T_{x_{eq}}\Lambda_{eq}$ of the group orbit: $$T_{x_{eq}}\mathbb{R}^{14} = E^s(x_{eq}) \oplus T_{x_{eq}}\Lambda_{eq}.$$ So we can apply Corollary~\ref{c.convergence} and conclude that $\Lambda_{eq}$ is a local attractor; consequently, there exists a neighborhood $\Lambda_{eq} \subset U_{eq} \subset \mathbb{R}^{14}$ such that every mesh $x=\mathcal{M}_C \in U_{eq}$ converges uniformly to one mesh $x_{eq}$ under $\Theta$: $$\dist(\Theta^n(x),x_{eq}) \rightarrow_{n \rightarrow \infty} 0, \quad x \in U_{eq}.$$
\begin{rem}
In contrast to the case of the triangle transformation we cannot prove that $\Lambda_{eq}$ is a global attractor: one observes numerically that $D\Theta(x)$ for $x \in X_C$ might have eigenvalues of absolute value $>1$, that is, there are directions in which $x$ is expanded. Numerical tests show that after one or two iterations of $\Theta$, $x$ comes sufficiently close to $x_{eq}$ that it converges uniformly to $x_{eq}$.
\end{rem}
\subsubsection{Case 3: simple meshes}
Let $\Sigma=\left\{0,\dots,N-1\right\}$ be a set of $N$ symbols. We call a connectivity $C$ $N$-\emph{simple} iff all triples $(i_0,i_1,i_2) \in C$ have a common symbol. We call $M_C$ an \emph{$N$-simple mesh} iff its connectivity $C$ is $N$-simple.\\
Above, we considered -- in this terminology -- a $6$-simple mesh. For an $N$-simple mesh, one fixed point of $\Theta$ is the mesh defined by the vertices $x_0=(0,0)$ and $x_{k-1}=(\cos(2k\pi/(N-1)), \sin(2k\pi/(N-1)))$ for $k=2,\dots,N$. Denote the similarity group orbit of this mesh by $\Lambda_{eq,N}$.
We can then numerically compute the Jacobian matrix at $x_{eq}$; the spectra are shown in Figure~\ref{fig:simplemesh} for $N=4,\dots,11$.
\begin{figure}[htpb]
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{plot_simplemeshes-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{plot_simplemeshes-eps-converted-to.pdf}
\end{minipage}
\caption{Plot of the spectrum of $DT_N(x)$, where $T_N$ is the mesh transformation of an $N$-simple mesh and $x$ runs through 50 randomly generated $N$-simple meshes. On the right, the spectrum for the regular $N$-simple mesh is depicted. Note that the equilateral $3$-simple mesh is not an attracting point, but a \emph{saddle point}.}
\label{fig:simplemesh}
\end{figure}
We easily conclude that $\Lambda_{eq,N}$ is a local attractor for $4 < N \leq 11$. Further, Figure~\ref{fig:simplemesh} seems to suggest that for $N \in [5,8]$ the fixed point set $\Lambda_{eq,N}$ is attracting in a quite large region.
\subsubsection{Case 4: mesh of equilateral triangles}
Let $C$ be a connectivity such that every inner vertex has exactly six neighboring vertices, and consider the previously defined set $X_C$ of meshes with this connectivity. Denote by $N_i$ the indices of inner vertices and by $N_b$ the indices of boundary vertices. Let $x_{eq}=(x_0,\dots,x_{N-1})\in (\mathbb{R}^2)^N$ be the mesh of equilateral triangles. Exactly as above, we consider the whole group orbit $\Lambda_{eq} \subset X_C$ of the similarity group. Then we compute the Jacobian matrix $D\Theta(x_{eq})$ of the mesh transformation (\ref{e.mesh_transformation}) at $x_{eq} \in \Lambda_{eq}$:
\begin{align}\label{e.meshjacobian}
D\Theta(x_{eq}) &= \left(\frac{\partial \Theta_k}{\partial x_l}\right)_{k,l=0,\dots,N-1}, \quad \frac{\partial \Theta_k}{\partial x_l} \in \mathbb{R}^{2 \times 2}\nonumber\\
\frac{\partial \Theta_k}{\partial x_l} &= A\quad \mbox{if} \; k=l\nonumber\\
\frac{\partial \Theta_k}{\partial x_l} &= \frac{1}{6}(B + C)^T \quad \mbox{if}\; l \in \Sigma_k,\; k\in N_i,\; \nonumber\\
\frac{\partial \Theta_k}{\partial x_l} &= \frac{1}{2}C \quad \mbox{if}\; l \in \Sigma_k,\; l,k\in N_b,\nonumber\\
\frac{\partial \Theta_k}{\partial x_l} &= \frac{1}{2}B \quad \mbox{if}\; l \in \Sigma_k,\; l\in N_i,\;k\in N_b,\;\nonumber\\
\frac{\partial \Theta_k}{\partial x_l} &=0 \quad \mbox{if}\; l\notin \Sigma_k.
\end{align}
After various computations on different equilateral meshes we conjecture the following:
\begin{conj}
For any equilateral mesh $x$ the Jacobian matrix of $\Theta$ at $x$ has eigenvalues of absolute value $< 1$ except for exactly four. In particular, the group orbit $\Sim(\mathbb{R}^2).x$ of the mesh $x$ is an attractor.
\end{conj}
\subsubsection{Further generalization}
One could again study the Jacobian matrix of $\Theta$ at any fixed point $x$. But things get much more complicated, because the matrices cannot be expressed in a simple way.
We conjecture the following, where $\tilde{X}_C$ is the quotient space $X_C/\Sim(\mathbb{R}^2)$ of the group action of $\Sim(\mathbb{R}^2)$:
\begin{conj}\label{t.global} For any $4 \leq N < \infty$ and any connectivity $C$ with cardinality $N$ the following is true: there exists a metric $\left\|\;\right\|_X$ on the quotient space $\tilde{X}_C$ such that the map $\tilde{\Theta}$ induced on $\tilde{X}_C$ is strictly contracting on its domain with respect to this metric, that is
$$\left\|\tilde{\Theta}(\mathcal{M}_C)-\tilde{\Theta}(\mathcal{M}_C')\right\|_X \leq\lambda \left\|\mathcal{M}_C - \mathcal{M}'_C\right\|_X \;\mbox{for some}\;\lambda<1\;\mbox{and any two meshes}\; \mathcal{M}_C,\mathcal{M}_C' \in \tilde{X}_C.$$
\end{conj}
This would imply in particular that any fixed point $\tilde{x} \in \tilde{X}_C$ is an attractor.
\begin{rem} \begin{enumerate}
\item In Figure~\ref{fig:frobenius3} we show the absolute value of the six eigenvalues for 700 randomly generated triangles. This figure stresses also the fact that for not too distorted triangles the triangle transformation is strictly contracting transverse to the normally hyperbolic invariant set characterized by the four eigenvalues equal to one.
\item In Figure~\ref{fig:plot_meshtrans} we computed the norm of the Jacobian of the mesh transformation of a mesh of $7$ triangles (shown in the left picture) in relation to the matrix norm of the Jacobian for the most regular mesh of $7$ triangles. One observes in the right picture how the matrix norm approaches the optimal matrix norm as the quality of the triangle mesh approaches its optimum.
\end{enumerate}
\end{rem}
\begin{figure}[htpb]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{distribution_eigenvalues-eps-converted-to.pdf}
\caption{Plot of the absolute value of eigenvalues in dependence of the triangle quality of 700 randomly generated triangles. }
\label{fig:frobenius3}
\end{minipage}
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{develop_frobenius-eps-converted-to.pdf}
\caption{Plot of the deviation $|\left\|dt^i\right\| - \left\|dt_e\right\||$ of the Frobenius norm of $t^i, i=1,\dots,10$ from the Frobenius norm $\left\|dt_e\right\|$ corresponding to an equilateral triangle.}
\label{fig:frobenius4}
\end{minipage}
\end{figure}
\paragraph{Outlook:}
The proof should generalize easily to triangle meshes defined on Riemannian surfaces, that is -- with the notations above -- the triangle mesh is defined by $M_C: \Sigma \rightarrow S, k \mapsto x_k \in S$, where $S$ is a Riemannian surface such that every triangle $M(\Delta)$ lies inside one chart neighborhood. \\
The techniques developed in this proof could also be adapted to similar geometric mesh transformations. \\
In \cite{VB14}, we model the triangle transformation above by a system of linear differential equations which can be seen as the description of coupled damped oscillations. This model provides another explanation of why the transformation converges to an equilateral triangle. One could think of the mesh transformation as the discretization of the solution of a system of coupled damped oscillators which drive each other, counteracting the damping.
\begin{figure}[htpb]
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{initial_mesh-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{plot_meshfrobenius-eps-converted-to.pdf}
\end{minipage}
\caption{Plot of $\left\|dT\right\|_2$ and mean mesh quality during iterations for a mesh of $7$ triangles together with the same values for an optimal mesh of $7$ triangles with inner angle $2\pi/7$. }
\label{fig:plot_meshtrans}
\end{figure}
\section{Short discussion of implementation and numerical results}\label{s.numerical}
We do not focus in this article on the application of our algorithm, so the following discussion is kept very brief and should be treated as a motivation to further explore the practical possibilities of the presented algorithm in the future. We have implemented the method as described above in Section~\ref{s.convergence} inside the open source software Scilab 5.4.1. The method could equally well be implemented directly in $C$; for industrial usage this is strongly preferable, as it makes the method more efficient. \\
We tested the method on a randomly generated triangulation of the unit square. Figure~\ref{fig:trimesh} below shows how the mesh converges to a mesh of nearly equilateral triangles in very few iterations.
\begin{figure}[htbp]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{square_initial-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{square_10iterations-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}{\textwidth}
\centering\vspace{0.2cm}
\includegraphics[width=8cm]{Coloring}
\vspace{-2cm}
\end{minipage}
\caption{A randomly generated triangulation of the unit square at the beginning and after 10 iterations, with elements colored according to their quality measure $q_{\Delta} \in (0,1]$.}
\label{fig:trimesh}
\end{figure}
As quality measure $q_{\Delta}$ we used the ratio of the minimal to the maximal edge length of every triangle, $q_{\Delta}=\frac{\min_{1\le i<j\le 3} \left\|x_i-x_j\right\|}{\max_{1\le i<j\le 3} \left\|x_i -x_j\right\|}.$ The quality measure for a triangle mesh $V=(\Delta_0,\dots,\Delta_{|V|-1})$ is then the mean of the quality measures $q_{\Delta}$ over all triangles $\Delta \in V$: $q_V=\frac{1}{\left|V\right|}\sum_{\Delta \in V} q_{\Delta}.$\\
The mesh we smoothed in Figure~\ref{fig:trimesh} consists of 450 triangle elements. In Figure~\ref{fig:quality1} below we show how the number of elements with a certain quality measure develops over iterating the mesh and how the mean quality improves. \\
The smoothing algorithm works equally well for tetrahedral meshes by applying it to the triangular faces. We display in Figure~\ref{fig:tetmesh} the cube $[0,1]^3$ cut at $x=0.5$ to show the improvement of the interior elements.
\begin{figure}[htbp]
\begin{subfigure}[l]{0.4\textwidth}
\includegraphics[width=\textwidth]{Cube_initial_slice-eps-converted-to.pdf}
\caption{Initial mesh: $q_V=0.4893$.}
\end{subfigure}
\begin{subfigure}[r]{0.4\textwidth}
\includegraphics[width=\textwidth]{Cube_10_slice-eps-converted-to.pdf}
\caption{10th iteration: $q_V=0.7652$.}
\end{subfigure}
\caption{Application to a tetrahedral mesh of 5318 elements of the unit cube (for $x < 0.5$).}
\label{fig:tetmesh}
\end{figure}
Let $T=(x_1,x_2,x_3,x_4)$ with $x_i \in \mathbb{R}^3$ be a tetrahedron. As quality measure $q_T$ for a tetrahedron $T$ we use the \emph{mean ratio quality measure} which is defined as following (see \cite{Knupp2001}):
\begin{align*}
q_T(T)&=\frac{3 \det(S)^{2/3}}{\trace(S^tS)}, \quad S=D(T)W, \;\mbox{where}\\
D(T)&=(x_2-x_1, x_3-x_1, x_4-x_1),\quad W=\begin{pmatrix}1& 1/2 &1/2\\0& \sqrt{3}/2&\sqrt{3}/6\\0&0&\sqrt{2/3}\end{pmatrix}.
\end{align*}
As quality measure for a tetrahedral mesh $V=(T_0, \dots, T_{|V|-1})$ we used the mean quality measure of every element:
$q_V=\frac{1}{|V|}\sum_{T \in V}q_T(T).$
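A short Python sketch of the two element quality measures follows; it is purely illustrative. For the mean ratio measure we use $S=D(T)W^{-1}$, so that the regular reference tetrahedron attains quality exactly $1$ (the displayed formula writes $S=D(T)W$; we treat the inverse, the usual normalization in the weighted-Jacobian formulation, as the intended convention).
\begin{verbatim}
# Sketch of the element quality measures used in the text (illustrative).
# q_tri: ratio of shortest to longest edge of a triangle.
# q_tet: mean ratio quality measure; here S = D(T) W^{-1} (so that the regular
#        reference tetrahedron has quality exactly 1), which we assume is the
#        intended reading of the formula in the text.
import numpy as np

def q_tri(x):
    """x: (3, d) array of triangle vertices."""
    e = [np.linalg.norm(x[i] - x[j]) for i, j in ((0, 1), (1, 2), (2, 0))]
    return min(e) / max(e)

W = np.array([[1.0, 0.5, 0.5],
              [0.0, np.sqrt(3.0) / 2.0, np.sqrt(3.0) / 6.0],
              [0.0, 0.0, np.sqrt(2.0 / 3.0)]])

def q_tet(x):
    """x: (4, 3) array of tetrahedron vertices."""
    D = np.column_stack([x[1] - x[0], x[2] - x[0], x[3] - x[0]])
    S = D @ np.linalg.inv(W)
    det = np.linalg.det(S)
    if det <= 0.0:                       # inverted or degenerate element
        return 0.0
    return 3.0 * det ** (2.0 / 3.0) / np.trace(S.T @ S)

# the regular reference tetrahedron (columns of W as edge vectors) has quality 1
x_ref = np.vstack([np.zeros(3), W.T])
print(q_tet(x_ref))                      # -> 1.0 (up to rounding)
\end{verbatim}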
In Figure~\ref{fig:quality}, one can observe how the quality measure of the mesh of Figure~\ref{fig:tetmesh} improves.
\begin{figure}[htpb]
\centering
\begin{subfigure}[l]{0.4\textwidth}
\includegraphics[width=0.9\textwidth]{meanratio_evaluation.pdf}
\caption{Improvement of the cube mesh element quality for the mesh in Figure~\ref{fig:tetmesh}}
\label{fig:quality}
\end{subfigure}
\begin{subfigure}[r]{0.4\textwidth}
\includegraphics[width=0.9\textwidth]{quality_trimesh2.pdf}
\caption{Square quality element measure $q_{\Delta}$ before and after the smoothing of the mesh in Figure~\ref{fig:trimesh}.}
\label{fig:quality1}
\end{subfigure}
\end{figure}
\section{Concluding remarks}
\subsection{Generalization to polygonal meshes}
Any polygon can be transformed in exactly the same way as the triangle above. Let $P=(x_0^{(0)},\dots,x_{k-1}^{(0)})$ be a convex $k$-gon with $x_i^{(0)} \in \mathbb{R}^2$ and its centroid in the origin. Then we can recursively define a transformation in the following way:
$$x_i^{(n+1)}=\frac{k-1}{k}r_i^{(n)}x_i^{(n)} - \frac{1}{k}\sum_{j=0, j\neq i}^{k-1} r_j^{(n)}x_{j}^{(n)}, \quad r_i^{(n)}=\frac{\left\|x_{i-1}^{(n)}\right\|_2}{\left\|x_{i}^{(n)}\right\|_2}.$$
Remark that the centroid is kept in the origin throughout the transformation. But the iterated polygon $P^{(n)}=(x_0^{(n)}, \dots,x_{k-1}^{(n)})$ does not necessarily converge for $n \rightarrow \infty$ to a polygon with equal distances $\left\|x_i\right\|_2=\left\|x_j\right\|_2$, $i,j=0,\dots,k-1$, i.e. a polygon whose centroid coincides with its circumcenter. Consider for example a quadrilateral $Q^{(0)}$ with $\left\|x_0\right\|=\left\|x_2\right\|$ and $\left\|x_1\right\|=\left\|x_3\right\|$. Then $Q^{(0)}=Q^{(2)}$, so the orbit is two-periodic but does not converge. Hence the transformation does not have a globally attracting fixed point for all initial polygons. Also observe the following: while a triangle is regular if and only if the distances of its vertices to its centroid are equal, this is not the case for other polygons, where this is just a necessary, but not a sufficient condition.
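A quick numerical check of the two-periodic example is given by the following Python sketch; the specific quadrilateral below is one concrete instance of the condition $\left\|x_0\right\|=\left\|x_2\right\|$, $\left\|x_1\right\|=\left\|x_3\right\|$, chosen for illustration.
\begin{verbatim}
# Sketch of the k-gon version of the transformation and the two-periodic
# quadrilateral: with ||x0|| = ||x2|| and ||x1|| = ||x3|| the iteration merely
# swaps the two diagonal half-lengths, so the polygon returns after two steps.
import numpy as np

def transform_polygon(x):
    """x: (k, 2) array of vertices with centroid at the origin."""
    k = len(x)
    d = np.linalg.norm(x, axis=1)
    r = np.roll(d, 1) / d                      # r_i = ||x_{i-1}|| / ||x_i||
    weighted = r[:, None] * x
    return ((k - 1) / k) * weighted - (1.0 / k) * (weighted.sum(axis=0) - weighted)

q = np.array([[2.0, 0.0], [0.0, 1.0], [-2.0, 0.0], [0.0, -1.0]])   # centroid = 0
q1 = transform_polygon(q)
q2 = transform_polygon(q1)
print(q1)                                      # -> (1,0), (0,2), (-1,0), (0,-2)
print(np.allclose(q2, q))                      # -> True: the orbit is 2-periodic
\end{verbatim}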
\begin{comment}
\begin{claim}\label{t.quadrilateral}
Let $Q=(x_0,x_1,x_2,x_3)$ with $x_i \in \mathbb{R}^2$ be a quadrilateral such that its centroid lies in the origin, i.e. $x_1+x_2+x_3+x_4=0$. Further, assume that the distance from each vertex to the centroid is equal, i.e. $\left\|x_0\right\|_2=\left\|x_1\right\|_2=\left\|x_2\right\|_2=\left\|x_3\right\|_2$.Then we have (up to a change of indices) $$x_0=-x_1,\quad x_2=-x_3.$$
\end{claim}
\begin{proof}
Let $Q=(x_0,x_1,x_2,x_3)$ be a quadrilateral with $x_0+x_1+x_2+x_3=0$. In particular, we have $x_0=-x_1-x_2-x_3$. This implies
\begin{align}\label{e.quad}
\left\|x_0\right\|_2^2&= \left\|x_1+x_2+x_3\right\|_2^2\\
&= \left\|x_1\right\|_2^2 + \left\|x_2\right\|_2^2 + \left\|x_3\right\|_2^2 + 2(<x_1,x_2> + <x_2,x_3> + <x_1,x_3>).
\end{align}
We assume $\left\|x_i\right\|_2=\left\|x_j\right\|_2$ for $i,j=0,1,2,3$, so we get
\begin{align*}
0&=2\left\|x_3\right\|_2^2 + 2(<x_1,x_2> + <x_2,x_3> + <x_1,x_3>)\\
0&=<x_3,x_3> + <x_1,x_2>+<x_2,x_3> + <x_1,x_3>\\
0&=<x_1+x_2,x_3+x_1>
\end{align*}
Consequently, $x_1+x_2$ is orthogonal to $x_3+x_1$. But we can equally conclude from Equation~\ref{e.quad} the following:
\begin{align*}
0&=<x_2,x_2> + <x_1,x_2>+<x_2,x_3> + <x_1,x_3>\\
0&=<x_1+x_2,x_3+x_2>
\end{align*}
So $x_1+x_2$ is also orthogonal to $x_3+x_2$. In the plane $\mathbb{R}^2$ this implies that either
\begin{align*}
x_3+x_2 = x_3+x_1 \quad&\Rightarrow\quad x_2=x_1 \quad\mbox{or}\\
x_3+x_2=-x_3-x_1 \quad &\Rightarrow \quad x_3=-\frac{1}{2}(x_1+x_2) \quad\mbox{or}\\
x_2+x_3&=0\; (\mbox{or}\;x_3+x_1=0).
\end{align*}
The first case is excluded as our quadrilateral would be otherwise degenerate. In the second case we get for $x_0=-\frac{1}{2}(x_1+x_2)=x_3$. So this is also impossible. Consequently, we have $x_2+x_3=0$.
This implies $x_0=-x_1$ which finishes the proof.
\end{proof}
\end{comment}
\begin{comment}
\begin{figure}[h]
\begin{subfigure}[l]{0.3\textwidth}
\includegraphics[width=\textwidth]{nonregular_quad}
\caption{a non-regular quadrilateral where centroid and circumcenter coincide.}
\label{fig:quatrilateral}
\end{subfigure}
\begin{subfigure}[r]{0.3\textwidth}
\includegraphics[width=\textwidth]{pentagon4}
\caption{20th iteration of a random convex pentagon.}
\label{fig:pentagon}
\end{subfigure}
\caption{The limit non-equilateral polygons pour a random quadrilateral and a random pentagon.}
\label{fig:polygons}
\end{figure}
\end{comment}
Accordingly, the transformation cannot be directly used for a smoothing algorithm for polygonal meshes without further adaptation. \\
But nevertheless, the transformation can be used to smooth any polygonal mesh by subdividing every polygon into triangles and then applying the transformation to every triangle.
\subsection{Outlook}
One easily detects the following shortcomings of the presented smoothing method which are open for future research:
\begin{description}
\item[Global convergence] We only prove the global convergence for a compact subset of triangle meshes which excludes triangles close to degenerate ones. By changing the transformation a bit -- with regard to the estimates we derive during the proof -- such that the transformation is integrable, that is, the gradient of a function, and consequently the Jacobian matrix is normal, one could obtain better bounds and therefore extend the convergence result to a larger subset of meshes.
\item[Performance] It was not the primary objective of this article to provide an efficient implementation, but to analyze the underlying mathematics. So we have to admit that each iteration step takes rather long in the present implementation. But if implemented directly in $C$, we should attain run times comparable to those of GETMe. As the algorithm relocates every vertex separately, it is open to an application of parallel computing techniques.
\item[Polygonal/ polyhedral meshes] Using the duality of certain polygons/ polyhedra to each other we hope to be able to adapt the current transformation to quadrilateral and hexahedral meshes. This is a current topic of our research.
\end{description}
\bibliographystyle{plain}
| {
"timestamp": "2014-11-18T02:16:33",
"yymm": "1411",
"arxiv_id": "1411.3869",
"language": "en",
"url": "https://arxiv.org/abs/1411.3869",
"abstract": "We describe a simple geometric transformation of triangles which leads to an efficient and effective algorithm to smooth triangle and tetrahedral meshes. Our focus lies on the convergence properties of this algorithm: we prove the effectivity for some planar triangle meshes and further introduce dynamical methods to study the dynamics of the algorithm which may be used for any kind of algorithm based on a geometric transformation.",
"subjects": "Numerical Analysis (math.NA); Metric Geometry (math.MG)",
"title": "Convergence properties of a geometric mesh smoothing algorithm",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9833429624315189,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7097211143665091
} |